
Why are companies sourcing GPUs for machine learning?

By Justin Stoltzfus | Last updated: September 10, 2018

If you're reading about machine learning, you're probably hearing a lot about the use of graphics processing units, or GPUs, in machine learning projects, often as an alternative to central processing units, or CPUs. GPUs are used for machine learning because they are well matched to machine learning workloads, especially those that require a lot of parallel processing – in other words, the simultaneous processing of many threads.


There are many ways to explain why GPUs have become desirable for machine learning. One of the simplest is to contrast the small number of cores in a traditional CPU with the much larger number of cores in a typical GPU. GPUs were developed to accelerate graphics and animation, but they are also useful for other kinds of parallel processing – among them, machine learning. Experts point out that although the many cores in a typical GPU (often hundreds or even thousands) tend to be simpler than the handful of cores in a CPU, having a larger number of cores leads to better parallel processing capability. This dovetails with the related idea of "ensemble learning," which diversifies the actual learning that goes on in an ML project: the basic idea is that a large number of weaker operators will outperform a small number of stronger operators.
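The many-simple-cores idea can be sketched in plain Python. This is not GPU code – just a minimal illustration of data parallelism, where the same operation is applied independently to many elements, so adding more workers (like adding more GPU cores) speeds up the whole job. The chunk size and worker count here are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(chunk, factor=2.0):
    # Each worker handles its slice independently -- no coordination
    # between workers is needed, which is what makes the work parallel.
    return [x * factor for x in chunk]

data = list(range(1_000))

# Split the data into independent chunks, one per "core."
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = pool.map(scale, chunks)

flattened = [x for chunk in results for x in chunk]
```

A real GPU applies the same principle at a much larger scale, running thousands of such lightweight threads at once.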

Some experts will talk about how GPUs improve floating-point throughput or use die surface efficiently, or how they accommodate hundreds of concurrent threads. They may cite benchmarks for data parallelism, branch divergence and other aspects of the work that algorithms do when supported by parallel processing.

Another way to look at the popular use of GPUs in machine learning is to look at specific machine learning tasks.

Fundamentally, image processing has become a major part of today's machine learning industry. That's because machine learning is well suited to processing the many features and pixel combinations that make up image classification data sets, helping a machine learn to recognize people, animals (i.e., cats) or objects in a visual field. It's not a coincidence that GPUs, which were designed for graphics and animation processing, are now commonly used for image processing in machine learning. Instead of rendering graphics and animation, the same multi-thread, high-capacity microprocessors are used to evaluate graphics and animation and come up with useful results. That is, instead of just showing images, the computer is "seeing" images – but both tasks work on the same visual fields and very similar data sets.
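To see why image work parallelizes so naturally, consider a per-pixel operation such as grayscale conversion: each output pixel depends only on its own input pixel, so every pixel could, in principle, be computed by a different core at the same time. The tiny 2×2 "image" below is a made-up example; the luminance weights are the standard ITU-R BT.601 coefficients.

```python
def to_gray(pixel):
    r, g, b = pixel
    # Weighted sum of the color channels (ITU-R BT.601 luminance).
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# A toy 2x2 RGB image: red, green / blue, white.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# Every pixel is processed independently -- exactly the access pattern
# a GPU executes across thousands of pixels in parallel.
gray = [[to_gray(p) for p in row] for row in image]
```

Whether the machine is drawing those pixels or classifying them, the underlying per-pixel arithmetic has the same embarrassingly parallel shape.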

With that in mind, it's easy to see why companies are using GPUs (and next-level tools like GPGPUs) to do more with machine learning and artificial intelligence.


Written by Justin Stoltzfus | Contributor, Reviewer


Justin Stoltzfus is a freelance writer for various Web and print publications. His work has appeared in online magazines including Preservation Online, a project of the National Historic Trust, and many other venues.
