If you're reading about machine learning, you're probably hearing a lot about the use of graphics processing units (GPUs) in machine learning projects, often as an alternative to central processing units (CPUs). GPUs are used for machine learning because of specific properties that make them well matched to machine learning projects, especially those that require a lot of parallel processing – in other words, simultaneous processing of multiple threads.
There are many ways to talk about why GPUs have become desirable for machine learning. One of the simplest is to contrast the small number of cores in a traditional CPU with the much larger number of cores in a typical GPU. GPUs were developed to enhance graphics and animation, but are also useful for other kinds of parallel processing – among them, machine learning. Experts point out that although the many cores in a typical GPU (often numbering in the hundreds or thousands) tend to be simpler than the few cores of a CPU, having a larger number of cores leads to better parallel processing capability. This dovetails with the similar idea of “ensemble learning,” which diversifies the actual learning that goes on in an ML project: the basic idea is that a larger number of weak learners will outperform a smaller number of strong learners.
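To make the contrast concrete, here is a minimal, hypothetical sketch in Python (the worker function and data are invented for illustration): the same per-element operation run once sequentially and once as a data-parallel map – the pattern a GPU spreads across its many cores at once.

```python
# A data-parallel map: many simple workers each handle one element,
# the pattern a GPU applies across hundreds or thousands of cores.
from concurrent.futures import ThreadPoolExecutor

def scale_and_shift(x):
    # One simple per-element operation (hypothetical example workload).
    return 2.0 * x + 1.0

data = list(range(8))

# Sequential version: one core walks the elements in order.
sequential = [scale_and_shift(x) for x in data]

# Parallel version: each element can be handled by its own worker.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(scale_and_shift, data))
```

Python threads here only stand in for the idea; a real GPU runs thousands of such per-element operations as hardware threads, but the shape of the work – one simple operation repeated independently over many data elements – is the same.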
Some experts talk about how GPUs improve floating point throughput or use die area efficiently, or how they accommodate hundreds of concurrent threads. Others point to benchmarks for data parallelism, branch divergence and the other kinds of work that machine learning algorithms do when supported by parallel processing.
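Branch divergence is easiest to see with a small example. The sketch below is hypothetical (ReLU is simply a convenient function): a per-element if/else forces GPU threads in the same group to serialize when they take different paths, while an equivalent branch-free form lets every thread execute the identical instruction stream.

```python
# Branchy form: threads evaluating different elements would take
# different paths here, which serializes a GPU warp (branch divergence).
def relu_branchy(x):
    return x if x > 0 else 0.0

# Branch-free form: the same result as straight-line arithmetic,
# so every thread runs the identical instructions.
def relu_branchless(x):
    return max(x, 0.0)

values = [-2.0, -0.5, 0.0, 1.5, 3.0]
branchy = [relu_branchy(v) for v in values]
branchless = [relu_branchless(v) for v in values]
```

Both forms produce the same answers; the difference only matters on hardware where grouped threads share an instruction stream, which is exactly the GPU execution model the benchmarks above measure.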
Another way to look at the popular use of GPUs in machine learning is to look at specific machine learning tasks.
Fundamentally, image processing has become a major part of today's machine learning industry. That's because machine learning is well suited to processing the many types of features and pixel combinations that make up image classification data sets, helping the machine train to recognize people, animals (e.g. cats) or objects in a visual field. It's not a coincidence that GPUs, which were designed for graphics and animation processing, are now commonly used for image processing in machine learning. Instead of rendering graphics and animation, the same multi-thread, high-capacity microprocessors are used to evaluate those graphics and animations to come up with useful results. That is, instead of just showing images, the computer is “seeing” images – but both tasks work on the same visual fields and very similar data sets.
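As a small illustration of why the two tasks overlap, here is a hypothetical sketch: converting RGB pixels to grayscale luminance, a per-pixel operation that looks the same whether a GPU is rendering an image or preparing it as input features for a classifier. The tiny 2x2 image is invented for illustration; the weights are the standard BT.601 luminance coefficients.

```python
# Per-pixel grayscale conversion: embarrassingly parallel work of the
# kind GPUs handle for both rendering and ML feature extraction.
def to_gray(pixel):
    # Luminance from an (R, G, B) triple, using BT.601 weights.
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

# A tiny 2x2 "image" of RGB pixels (hypothetical data).
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# Every pixel is independent, so each one could map to its own GPU thread.
gray = [[to_gray(p) for p in row] for row in image]
```

Whether the result feeds a display or a neural network's input layer, the per-pixel arithmetic – and the parallel hardware that makes it fast – is the same.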
With that in mind, it's easy to see why companies are using GPUs (and general-purpose GPU computing, or GPGPU) to do more with machine learning and artificial intelligence.