Knowledge Center

Graphics Processing Unit (GPU)

An electronic circuit designed to handle graphics and video.

Description

The graphics processing unit (GPU) is a processing unit designed to handle graphics (2D and 3D) and video more efficiently than a general-purpose CPU. Originally designed for the gaming industry, GPUs are now frequently used as accelerators for machine learning (ML) and artificial intelligence (AI), while still serving as the graphics processor in automotive screens/infotainment systems, desktop computers, supercomputers, and mobile phones. The GPU’s use in AI/ML inferencing systems may eventually be limited by its power consumption, and it could someday be eclipsed by chips purpose-built for inferencing.

While not the first company to use the term GPU, Nvidia popularized it.

GPUs are programmed with vector-based languages such as CUDA (from Nvidia). A GPU may be a single integrated circuit, or it may be offered as IP and integrated into an SoC or ASIC. GPUs also are deployed in clusters for ML/AI acceleration.
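As a minimal sketch of what CUDA programming looks like, the kernel below adds two vectors, with each GPU thread handling one element in parallel. The kernel and launch parameters here are illustrative, not drawn from the article, and assume a CUDA-capable GPU and the nvcc toolchain.

```cuda
// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // guard against overrun
        c[i] = a[i] + b[i];
    }
}

// Host side: launch enough 256-thread blocks to cover all n elements.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The same operation on a CPU would typically be a sequential loop; on the GPU, thousands of such threads execute concurrently across the device's ALUs.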

Architecturally, a GPU offers more math capability and faster memory than a CPU. GPUs contain large numbers of MACs (multiply-accumulate units) and high-speed memory interfaces. They can perform the necessary computations much faster than a general-purpose CPU by working in parallel across many more ALUs (arithmetic logic units). The downside is that GPUs tend to use floating-point arithmetic, which is well beyond the precision needs of many AI algorithms. And while a GPU can handle multiple operations, it needs “to access registers or shared memory to read and store the intermediate calculation results,” according to Google. This may increase the power consumption of the system.
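The interplay described above — many MACs working in parallel, with intermediate results staged in shared memory — can be sketched as a CUDA block-wide dot product. This is an illustrative example, not from the article; the kernel name, block size, and launch configuration are assumptions.

```cuda
// Block-wide dot product: each thread accumulates products (the MAC),
// then partial sums are combined through shared memory.
__global__ void dotProduct(const float *a, const float *b, float *out, int n) {
    __shared__ float partial[256];          // intermediate results in shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    float acc = 0.0f;
    for (int k = i; k < n; k += gridDim.x * blockDim.x) {
        acc += a[k] * b[k];                 // multiply-accumulate per thread
    }
    partial[threadIdx.x] = acc;
    __syncthreads();

    // Tree reduction: each step reads and re-stores intermediate sums,
    // the shared-memory traffic the quoted observation refers to.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            partial[threadIdx.x] += partial[threadIdx.x + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        atomicAdd(out, partial[0]);         // one block contribution to the total
    }
}
```

Every round trip through `partial[]` is a read or store of an intermediate result, which is where some of the extra energy per operation goes relative to fixed-function accelerators.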

“GPUs made the AI revolution real and will continue to be important in relatively high-performance datacenter training where power and cost are not a concern, also in prototypes for emerging applications like robotics and augmented reality headsets,” said Arteris IP’s Kurt Shuler. “But for anyone looking for battery-powered high-performance and low-cost at volume, or the ultimate in differentiated performance and capability in mega-data centers where cost is not a concern, ASIC is (and always has been) the best solution.”


Tags

GPU


Multimedia

Using GPUs In Semiconductor Manufacturing
Making Sense Of Inferencing Options
Thermal Challenges And Moore’s Law
Making Sense Of ML Metrics
Machine Learning Inferencing At The Edge
Inferencing Efficiency
Building An Efficient Inferencing Engine In A Car
Designing An AI SoC
AI, ML Chip Choices
Tech Talk: GPU-Accelerated Photomasks
