Compute Unified Device Architecture


What Does Compute Unified Device Architecture Mean?

Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by Nvidia. It supports applications that require significant amounts of parallel processing.


Techopedia Explains Compute Unified Device Architecture

In the Compute Unified Device Architecture model, compute kernels execute in parallel on a multi-core processor, typically a GPU, through a virtual instruction set exposed by the hardware.
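To make this concrete, below is a minimal sketch of a CUDA C++ compute kernel. It is a hypothetical vector-addition example, not drawn from this article; the kernel name addVectors, the array size, and the launch configuration are illustrative assumptions. Each GPU thread computes one element, which shows how a kernel spreads a data-parallel task across many small cores.

// Minimal vector-addition sketch (hypothetical example).
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void addVectors(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addVectors<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);    // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

In this sketch the same kernel code runs on thousands of GPU threads at once, which is the parallel execution model the definition above describes.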

The idea is that some tasks, such as rendering sophisticated 3D graphics, are so highly parallel (often termed "embarrassingly parallel") that they benefit from a multi-core approach. Chip makers first responded with quad-core CPUs and similar designs that introduced modest parallel processing.

In GPU development specifically, engineers achieved very powerful parallel processing by packing large numbers of small cores onto a single chip. That work led to CUDA and similar architectures that support large-scale parallel processing. Some of these rely on low-level libraries and limited instruction sets to keep the many-core design efficient.



Margaret Rouse
Technology expert

Margaret is an award-winning writer and educator known for her ability to explain complex technical topics to a non-technical business audience. Over the past twenty years, her IT definitions have been published by Que in an encyclopedia of technology terms and cited in articles in the New York Times, Time Magazine, USA Today, ZDNet, PC Magazine, and Discovery Magazine. She joined Techopedia in 2011. Margaret's idea of a fun day is to help IT and business professionals learn to speak each other's highly specialized languages.