Definition - What does GPU-Accelerated Computing mean?
GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a central processing unit (CPU) to speed up processing-intensive operations such as deep learning, analytics and engineering applications. Pioneered by NVIDIA, which released its CUDA platform in 2007, the approach delivers far superior application performance by offloading compute-intensive sections of an application to the GPU. GPU-accelerated computing is growing in popularity because of the wide range of applications in which it can be used, such as artificial intelligence, drones, robots and autonomous cars.
Techopedia explains GPU-Accelerated Computing
The GPU helps deliver superior performance for software applications; from the user's perspective, applications simply run faster. GPU-accelerated computing works by moving the compute-intensive sections of an application to the GPU while the remaining sections continue to execute on the CPU. Whereas the CPU consists of a few cores optimized for sequential serial processing, the GPU has a massively parallel architecture made up of many smaller, more efficient cores designed to handle multiple tasks simultaneously. As a result, the sequential parts of a workload run on the CPU while the computationally heavy parts are executed in parallel on the GPU. Another salient feature of GPU-accelerated computing is broad support for parallel programming models (such as CUDA and OpenCL), which helps application designers and developers achieve superior application performance.
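The CPU/GPU division of labor described above can be sketched with a minimal CUDA example (assuming an NVIDIA GPU and the CUDA toolkit are available): the CPU handles the sequential setup, the compute-intensive loop is offloaded to the GPU as a kernel that runs across thousands of threads in parallel, and the result is copied back for the CPU to continue with.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of the output in parallel.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Sequential setup runs on the CPU.
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Copy the inputs into GPU memory.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive section: launch enough threads to cover n.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back; execution continues sequentially on the CPU.
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}
```

The kernel launch illustrates the key point of the paragraph above: the addition loop, which a CPU would execute element by element, is instead mapped onto many small GPU cores that each handle one element at the same time.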
GPU-accelerated computing has been extensively used in video editing, medical imaging, fluid simulations, color grading and enterprise applications, and its use is promising in complex fields such as artificial intelligence and deep learning.