The demand for AI is helping Nvidia and AMD leapfrog Intel

The future is machine learning, and no machine learns as well as a graphics card

Vlad Savov

Intel is the king of a shrinking kingdom. Almost every traditional desktop or laptop PC runs on the Santa Clara company’s processors, but that tradition is fast being eroded by more mobile, ARM-powered alternatives. Apple’s most important personal computers now run iOS, Google’s flagship Chromebook has an ARM flavor, and Microsoft just announced Windows for ARM. What’s more, the burden of processing is shifting away from the personal device and out to networks of server farms in the proverbial cloud, leaving Intel with a big portfolio of chips and no obvious customer to sell millions of them to.

If you want to talk about the most influential chip company in history, Intel’s name is the one you want. But the most influential for the future? That’s a much more open question.

AMD (blue) and Nvidia (green) stock prices over the past year. Image: Yahoo Finance

Over the course of 2016, Nvidia and AMD saw their stock prices skyrocket. Both companies now trade at multiple times their value from a year ago, and the explanation lies in their massively expanded potential for future growth. All the AI hype we heard during CES this past week is underpinned by a multitude of algorithms and mathematical calculations, and in its most sophisticated form it relies on machine learning and deep learning to improve its own performance without being fed answers directly by a human. All of that technology requires a lot of processing power, and it just so happens that AMD and Nvidia were already making the perfect processors for the task: graphics cards.

GPUs, or graphics processing units, can be thought of as a specialized breed of the CPUs that Intel dominates. Unlike central processing units, however, graphics chips are less versatile in the tasks they can perform and are best suited to running massively parallel workloads. They don’t have four or eight cores; they have hundreds or thousands, and each of them is frightfully efficient at handling the same small recurring task over and over again. GPU acceleration has fast become the standard for machine learning, and this year Google’s Cloud Platform will be powered by AMD’s FirePro server GPUs.
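To make that contrast concrete, consider a minimal CUDA sketch (a hypothetical illustration, not code from any of the companies or products mentioned here). A CPU would work through a million additions in a loop, one after another; a GPU launches thousands of threads that each handle a single element at the same time.

    // Hypothetical illustration: each GPU thread performs one tiny piece of the job.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's element
        if (i < n) out[i] = a[i] + b[i];                 // one small task per thread
    }

    int main() {
        const int n = 1 << 20;                           // roughly a million elements
        size_t bytes = n * sizeof(float);
        float *a, *b, *out;
        cudaMallocManaged(&a, bytes);                    // memory visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&out, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int blocks = (n + 255) / 256;                    // enough 256-thread blocks to cover n
        add<<<blocks, 256>>>(a, b, out, n);              // all elements processed in parallel
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);                 // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

The matrix math behind deep learning follows the same pattern, enormous numbers of tiny, identical operations, which is why it maps so naturally onto these chips.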

It’s not that Intel is oblivious to the expanding market being created by the move toward machine learning — the company has an entire aspirational website dedicated to the subject — but its chips are at a fundamental disadvantage and it hasn’t secured the customers or made the same sort of progress that its rivals already have.

Radeon Instinct GPUs. Image: AMD

AMD now has an entire line of GPUs dedicated to accelerating machine learning, branded Radeon Instinct. Every single trending thing in tech can benefit from these new cards: from autonomous vehicles, self-piloting drones, and personal robots to fintech, nanobots, and advanced medicine. Investors are buying up AMD stock because they know the processing challenges of the future are practically tailored for the massively parallel architecture of a GPU — and AMD has some really good GPUs.

GPU server farms are going to be as significant to our future as ocean-floor cables are to our present

Nvidia’s rise is also no surprise. The green graphics giant has been talking about deep learning and autonomous cars for at least three years at CES. It was bemusing at first, intriguing after a while, and now it’s turning into real-world self-driving vehicles thanks to a partnership with Audi. At the same time as Google and AMD were announcing their 2017 plans for Radeon-driven machine learning in the cloud, Nvidia and IBM revealed their own agreement to provide "the world’s fastest" deep learning enterprise solution. These deals might not be rich in hype, but one day soon they’re going to be as significant as the ocean-floor cables keeping the internet connected today.

The next time a company offers you a cloud-based service of any kind — such as Google’s system for handwriting recognition in the new Chromebooks — odds are good that there’ll be a GPU farm somewhere churning through the mathematical tasks of making it happen.

As AI and VR grow, so will graphics card sales

More in line with their traditional businesses, AMD and Nvidia both stand to benefit from growing consumer interest in virtual reality. The two most promising solutions, the HTC Vive and Oculus Rift, both require a reasonably beefy PC, replete with a modern and powerful graphics card, so the more people buy into VR, the more GPUs each company is likely to sell.

But the most interesting dynamic to develop over the past year is how, ever so subtly and behind the scenes, AMD and Nvidia have essentially stolen Intel’s future away from it. Intel exists to satisfy our processing needs, but just as we uncover a rich new vein of demand for computational power, it turns out that Intel’s CPUs have already been surpassed by the basic architectural advantages of chips originally designed to push pixelated shoot-em-up targets around a monitor.

It’s a fun twist of fate for everyone outside Intel, and the strong execution exhibited by both AMD and Nvidia so far also bodes well for the speed of improvement in AI and machine learning capabilities. At a time when Intel is still scrambling to find mega-tasking scenarios for its chips, its GPU rivals are more concerned with how fast they can churn out the hardware to satisfy demand that looks set to only keep growing.