
An Insight into Accelerated Computing

Posted on Jan 27, 2024

What Is Accelerated Computing?

Accelerated computing is a computational approach widely employed in academic, research, and engineering applications. It uses dedicated processors working alongside traditional CPUs to speed up computation. Because it pairs CPUs with other types of processors in a single system, it is also called heterogeneous computing.

Graphics Processing Units (GPUs) are the most widely used accelerators. Data Processing Units (DPUs) are a rapidly emerging category that enables enhanced, accelerated networking. Each, working with the host CPU, plays a distinct role in a unified, balanced system. An accelerated computing system delivers better overall cost-effectiveness, performance, and energy efficiency than a system relying on a CPU alone.

Accelerated Computing

Accelerated computing originated in personal computers and matured in supercomputers; it is now found in PCs, smartphones, and cloud services alike. Today, both commercial and technical systems embrace accelerated computing to handle jobs such as machine learning, data analytics, simulation, and visualization.

How Did Accelerated Computing Develop?

Co-processors, specialized hardware designed to enhance the performance of a host CPU, have a longstanding presence in computers. Their significance emerged circa 1980 with the introduction of floating-point processors, which bestowed advanced mathematical capabilities upon PCs. The subsequent decade witnessed the surge in demand for graphics accelerators fueled by the burgeoning realm of video games and graphical user interfaces.

  • In 1999, NVIDIA marked a pivotal moment with the launch of the GeForce 256, the inaugural chip dedicated to offloading key 3D rendering tasks from the CPU. This milestone also featured the pioneering use of four graphics pipelines for concurrent processing, and NVIDIA coined the term "graphics processing unit" (GPU), establishing a new category of computer accelerators.

  • By 2006, NVIDIA had successfully shipped 500 million GPUs. Around the same time, some researchers began writing code to harness the GPU's powerful parallel engines for workloads where it could outpace the CPU. Under the leadership of Ian Buck, NVIDIA introduced CUDA, a programming model for applying the parallel-processing engines within GPUs to diverse tasks (a minimal sketch of the idea appears after this list).

  • Paired with the G80 processor in 2007, CUDA powered a new series of NVIDIA GPUs, bringing accelerated computing to an ever-expanding array of industrial and scientific applications.
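To make the programming model concrete, below is a minimal sketch of offloading a data-parallel task (element-wise vector addition) from the CPU to the GPU. It is written in Python using the Numba library's CUDA support rather than NVIDIA's original C/C++ CUDA toolkit, purely for brevity; the array size, block size, and function names are illustrative assumptions, and it presumes a CUDA-capable GPU with numba and numpy installed.

```python
# Minimal sketch: offloading element-wise vector addition to a GPU.
# Illustration only; not NVIDIA's original C/C++ CUDA API.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index across all blocks
    if i < out.size:          # guard against threads beyond the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# Copy inputs into GPU memory; the host CPU keeps control of the program flow.
d_a, d_b = cuda.to_device(a), cuda.to_device(b)
d_out = cuda.device_array_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](d_a, d_b, d_out)  # launch kernel on the GPU

result = d_out.copy_to_host()   # bring the result back to the CPU
assert np.allclose(result, a + b)
```

The division of labor shown here is the essence of accelerated computing: the CPU orchestrates the work and handles serial logic, while thousands of lightweight GPU threads each process one element of the data in parallel.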

This line of data center-focused GPUs has been regularly expanded with new architectures named after innovators, such as Tesla, Pascal, Volta, and Ampere. Around the world, high-performance computing experts have used GPUs to build accelerated HPC systems that drive groundbreaking scientific work, in fields ranging from the astrophysics of black holes to genome sequencing and beyond.

Why Is Accelerated Computing Important?

Accelerated Computing and Artificial Intelligence in the Modern Era

In the era of artificial intelligence, accelerated computing plays a crucial role: it is one of the key techniques that lets deep learning models flourish by providing efficient computing power. In machine learning, model training is a computationally intensive task, and accelerated computing devices, especially GPUs, can dramatically cut training time, speeding up algorithm iteration and optimization. AI, in turn, is a vital ally in the development of accelerated computing: companies like American Express use accelerated AI to prevent credit card fraud, while telecom companies are exploring it to deliver intelligent 5G services.
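As a concrete illustration of how training is moved onto a GPU, the sketch below runs a short training loop and places the model and data in GPU memory when one is available. The model, data sizes, and hyperparameters are made-up placeholders, and it assumes the PyTorch library built with CUDA support; it is a sketch of the general pattern, not a recipe from the original article.

```python
# Minimal sketch: a training loop that runs on the GPU when available.
# Model, batch size, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Synthetic batch; .to(device) moves the tensors into GPU memory.
inputs = torch.randn(256, 1024).to(device)
labels = torch.randint(0, 10, (256,)).to(device)

for step in range(100):                 # short, illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                     # gradients are computed on the GPU
    optimizer.step()
```

Because the forward pass, backward pass, and weight updates are all large matrix operations, running them on the GPU rather than the CPU is what shortens each training iteration.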

Enhancing Energy Efficiency

Accelerated computing can also contribute to improved energy efficiency. GPUs and other specialized accelerators are designed to handle specific workloads more efficiently than general-purpose CPUs, often reducing the power consumed per computation. For example, GPUs deliver 42x better energy efficiency on AI inference than CPUs. In fact, transitioning all globally deployed AI servers from CPU-only systems to GPU-accelerated ones could save a remarkable 10 trillion watt-hours of energy per year, roughly the annual electricity consumption of 1.4 million households (refer to the image below for a visual representation).

Enhancing Energy Efficiency
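As a rough check on how those two figures relate (taking the numbers above at face value), the implied per-household consumption is 10 × 10¹² Wh ÷ 1.4 × 10⁶ households ≈ 7,100 kWh per household per year, which is on the order of a typical household's annual electricity use.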

Summary

Accelerated computing is transforming the tech landscape, disrupting conventional computing paradigms and heralding a new era of innovation. In artificial intelligence and high-performance computing, it typically relies on dedicated processors (such as GPUs) to increase computing speed, and on AI switches to connect and coordinate multiple computing devices for efficient data exchange and communication. From artificial intelligence and edge computing to scientific exploration and healthcare, accelerated computing is poised to leave a lasting imprint on the future of technology, propelling advances and unlocking potential once deemed unattainable across diverse domains.
