Compute Unified Device Architecture, abbreviated CUDA, is a parallel computing architecture developed by Nvidia that supports applications requiring significant amounts of parallel processing.
In the CUDA model, compute kernels execute in parallel through a virtual instruction set exposed by a multi-core processor, typically a GPU.
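As a minimal sketch of what this looks like in practice, the CUDA C example below launches a kernel in which each GPU thread adds one pair of vector elements, so many threads run the same code in parallel across the GPU's cores. The array size, block size, and fill values here are arbitrary choices for illustration, not taken from the article.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main(void) {
    const int n = 1 << 20;              // one million elements (arbitrary)
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The kernel itself contains no loop over the data; the parallelism comes from launching one thread per element and letting the GPU schedule those threads across its cores.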
The idea is that some tasks, such as rendering sophisticated 3D graphics, are "embarrassingly parallel" and call for a multi-core approach. Chip makers first responded with quad-core CPUs and other designs that introduced modest parallelism.
GPUs took this much further: by packing large numbers of small cores onto a single chip, engineers created processors with enormous parallel throughput. That design led to CUDA and similar architectures, which expose this hardware for general-purpose parallel processing. These platforms use low-level libraries and streamlined instruction sets to make the many-core design efficient.