My real passion within information technology is hardware: the CPU, RAM, and, more recently, the GPU. Hardware is the portion of a computer system that makes up its physical infrastructure, allowing everything else to function. This interest developed over years of building and upgrading personal computers. I have always found tremendous value in understanding the purpose of each component, how the components interact to produce performance and functionality, and how quickly hardware changes. In my opinion, hardware is unlike any other part of technology: it is where tangible machinery converges with abstract computational thinking.
I became increasingly interested in how GPUs have evolved from rendering graphics into general-purpose computing engines. At one time, GPUs were built solely to accelerate image rendering for games and visual applications. NVIDIA (2023) describes how GPUs changed significantly as they moved from fixed-function graphics processors to highly parallel computing engines. Today, GPUs are used far more broadly as computing engines: they accelerate deep learning, support data analytics, and power AI applications. As general-purpose GPU computing has become economically viable, a compelling research question arises: how has the graphics processing unit transformed from its original purpose into a cornerstone of modern, multifaceted computing beyond graphics?
This research area relates well to the themes of information technology presented in the course. Hardware, the physical architecture of IT, is the physical basis on which all IT activities and functions exist. Reflecting on computer systems and their evolution over time, it becomes clear that advances in hardware design have delivered steady gains in computing power and efficiency. Hardware has been the primary driver of every computing advancement, from the enormous vacuum-tube machines of the mid-twentieth century to today's compact multi-core and specialized processing units.
To understand how GPUs work (along with all other hardware components), a basic grasp of key computer science and IT concepts is necessary, including the fetch-decode-execute cycle, instruction sets, and parallel processing. These components operate together with the CPU, memory, storage, and I/O systems to perform operations ranging from simple arithmetic to advanced artificial intelligence workloads. GPUs contain thousands of smaller cores and rely on parallel execution, in which a processor carries out many tasks simultaneously (Kurniawan & Ichsan, 2017). This is what makes GPUs exceptional: compute-heavy workloads that can be broken into independent pieces can be spread across thousands of cores and computed at the same time, as the sketch below illustrates.
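To make that concrete, here is a minimal sketch of GPU-style parallel execution in CUDA. It is an illustrative example rather than code from any specific application: each of roughly a million array elements is handled by its own GPU thread, so the addition runs across thousands of cores at once. The kernel name, array size, and block size are my own choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one array element; with a large enough
// array, thousands of threads execute this same instruction stream
// concurrently across the GPU's cores.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;  // ~1 million elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Unified memory is accessible from both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough blocks of 256 threads to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A CPU would loop over those million elements a handful at a time; the GPU's advantage comes purely from how many of these tiny, identical tasks it can run simultaneously.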
Hardware's essential role extends to the execution of software. High-performance applications, whether they run virtual machines, perform data analytics, or generate output in real time (as in generative AI), often depend on high-performance hardware configurations. Programs written in languages such as Python, C++, or CUDA often include hardware-level optimizations, and compilers translate high-level code into machine language so the hardware can execute it efficiently. Writing hardware-aware applications, as sketched below, is increasingly important for getting the most out of a computing system.
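As one hedged illustration of what "hardware-aware" can mean, the sketch below queries the GPU it is running on and adapts a launch parameter to what the device reports, rather than hard-coding assumptions. The heuristic at the end is purely illustrative, not a recommended tuning rule; the point is that the program consults the hardware before deciding how to use it.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Query the GPU so the program can adapt to the hardware it
    // actually runs on.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("Device:                    %s\n", prop.name);
    printf("Streaming multiprocessors: %d\n", prop.multiProcessorCount);
    printf("Max threads per block:     %d\n", prop.maxThreadsPerBlock);
    printf("Warp size:                 %d\n", prop.warpSize);

    // Illustrative heuristic only: derive a launch parameter from
    // what the device reports instead of a fixed constant.
    int threadsPerBlock = prop.maxThreadsPerBlock / 4;
    printf("Chosen threads per block:  %d\n", threadsPerBlock);
    return 0;
}
```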
Application software running on an operating system, whether it is serving user-directed requests or system-level requests, ultimately exercises the hardware that supplies its speed and capabilities. For example, training large-scale machine learning models would be impractical without specialized hardware such as GPUs, given the volume and complexity of the data and computation involved.
The relevance of hardware in database systems cannot be overlooked. Database throughput depends directly on disk speed, memory bandwidth, and CPU efficiency. A high-throughput system will often rely on specialized hardware, such as SSDs paired with a RAM cache, to serve datasets in real time; a simplified sketch of that caching pattern follows below. As the amount of data we store and process keeps growing, reliance on hardware to support robust data management will remain center stage.
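Here is a minimal, hypothetical sketch of the RAM-cache-over-SSD idea, written as plain host-side C++: a lookup first checks an in-memory map and only falls back to the slower storage tier on a miss. The class and function names are invented for illustration, and the "disk read" is simulated.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Hypothetical sketch: a read-through cache keeps hot records in RAM
// so repeated lookups avoid the slower storage tier (e.g., an SSD).
class CachedStore {
public:
    std::string get(const std::string& key) {
        auto it = cache_.find(key);
        if (it != cache_.end()) {
            return it->second;                  // cache hit: served from RAM
        }
        std::string value = readFromDisk(key);  // cache miss: slow path
        cache_[key] = value;                    // populate cache for next time
        return value;
    }

private:
    // Stand-in for a real disk read; in practice this would hit the SSD.
    std::string readFromDisk(const std::string& key) {
        return "value-for-" + key;
    }
    std::unordered_map<std::string, std::string> cache_;  // RAM tier
};

int main() {
    CachedStore store;
    printf("%s\n", store.get("user:42").c_str());  // miss, then cached
    printf("%s\n", store.get("user:42").c_str());  // hit, served from RAM
    return 0;
}
```

Real databases add eviction policies and invalidation on writes, but the hardware trade-off is the same: RAM is orders of magnitude faster than disk, so keeping hot data there is what makes real-time serving feasible.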
Finally, with regard to networking and cybersecurity, my special area of interest, hardware lays the foundation of network infrastructure and security enforcement. Network hardware, including NICs, switches, routers, and firewalls, is needed for reliable, safe, and secure communication. Understanding hardware limitations and capabilities is also critical for designing network architectures, configuring intrusion detection, and orienting an incident response (Kimmons et al., 2015).
In this blog, and throughout this course of study, I will continue to examine the evolution of hardware components, with a particular focus on the GPU.
References
Kimmons, R., Miller, B. G., Amador, J., Desjardins, C. D., & Hall, C. (2015). Technology integration coursework and finding meaning in pre-service teachers’ reflective practice. Educational Technology Research and Development, 63(3), 313–330. https://doi.org/10.1007/s11423-015-9394-5
NVIDIA. (2023). What is a GPU? https://www.nvidia.com/en-us/drivers/what-is-gpu/