Compute Unified Device Architecture

What Does Compute Unified Device Architecture Mean?

Compute Unified Device Architecture is a parallel computing platform and programming model for applications that require significant amounts of parallel processing. It was developed by Nvidia and is commonly abbreviated CUDA.


Techopedia Explains Compute Unified Device Architecture

In the Compute Unified Device Architecture model, compute kernels execute in parallel across many threads, using a virtual instruction set exposed by a many-core processor, typically a GPU.
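As a sketch of that kernel model, here is a minimal CUDA example that adds two vectors, with each GPU thread handling one element. This is illustrative only; it assumes the CUDA toolkit (nvcc) and an Nvidia GPU are available, and the names (`vecAdd`, `n`) are arbitrary.

```cuda
#include <cstdio>

// A compute kernel: each thread adds one pair of elements in parallel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[10] = %f\n", c[10]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is what maps the kernel onto the GPU's many small cores: every element is processed by its own thread rather than by a sequential loop.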

The idea is that some tasks, termed "embarrassingly parallel," such as rendering sophisticated 3D graphics, benefit enormously from a many-core approach. Chip makers first introduced quad-core CPUs and other designs that offered modest parallel processing.

In GPU development specifically, engineers produced designs with very powerful parallel processing capability, owing to the large number of small cores built into a single chip. That led to CUDA and similar architectures that support robust parallel processing. Some of these use low-level libraries and limited instruction sets to keep the many-core design efficient.



Margaret Rouse
