Nvidia bets big on AI with powerful new chip

The GPU-maker's CEO says they've gone 'all in' on developing chips for artificial intelligence

Nvidia has released a new state-of-the-art chip that pushes the limits of machine learning. The Tesla P100 GPU, which CEO Jen-Hsun Huang revealed yesterday at Nvidia's annual GPU Technology Conference, can perform deep learning neural network tasks 12 times faster than the company's previous top-end system. The P100 was a huge commitment for Nvidia, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world's largest chip, Nvidia claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks — Nvidia just wants you to know it's really good at machine learning.

To top off the P100's introduction, Nvidia has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1, which was also announced yesterday. This show-horse of a machine comes ready to run, with deep-learning software preinstalled. It's shipping first to AI researchers at MIT, Stanford, UC Berkeley, and others in June. On stage, Huang called the DGX-1 "one beast of a machine."

Nvidia has made its name building high-powered graphics processing chips for the video game industry. Graphics processing requires a lot of computing power. So does neural network deep learning, a type of artificial intelligence where data is fed through layers of simulated neurons in order to train a computer to recognize complex patterns. As more and more tech companies have committed to developing deep-learning technology — Google, Microsoft, Amazon, Facebook, Baidu, the list goes on — Nvidia is positioning itself as an artificial intelligence chip-maker.
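To see why this kind of workload maps so well onto GPUs, the "layers of simulated neurons" idea can be sketched in a few lines of Python. This toy forward pass is purely illustrative — it is not Nvidia's software or any real training code, and the network shape and weights are made up — but it shows how each layer multiplies its inputs by weights and applies a nonlinearity, which is exactly the kind of massively parallel arithmetic GPUs excel at:

```python
# Toy sketch of a feed-forward neural network pass (illustrative only).
import random

def relu(x):
    # A common nonlinearity: pass positive values, zero out the rest.
    return x if x > 0.0 else 0.0

def layer(inputs, weights):
    # Each simulated neuron sums its weighted inputs, then applies relu.
    return [relu(sum(w * x for w, x in zip(neuron_weights, inputs)))
            for neuron_weights in weights]

def forward(inputs, network):
    # Feed the data through each layer in turn; a "deeper" network
    # simply has more of these layers, and costs more compute per pass.
    activations = inputs
    for weights in network:
        activations = layer(activations, weights)
    return activations

# A made-up 3-layer network with random weights, for demonstration.
random.seed(0)
network = [
    [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)],  # 4 -> 5
    [[random.uniform(-1, 1) for _ in range(5)] for _ in range(3)],  # 5 -> 3
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)],  # 3 -> 2
]
output = forward([0.5, -0.2, 0.1, 0.9], network)
```

Every one of those multiply-and-sum operations is independent of its neighbors in the same layer, which is why chips built for parallel graphics math handle deep learning so well.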

"Computers powered by deep learning can do tasks that we can't imagine writing software for," Huang said on stage. "Deep learning isn't just a field or an app. It's way bigger than that. So, our company has gone all in for it."

When it comes to pushing deep learning forward, processing power is vital. Last year, Microsoft researchers took first place at the ImageNet computer vision challenge because they used a neural net that was five times deeper than any used in the past. According to a paper published in Nature, DeepMind used an enormous amount of computing power to train its Go-playing AI AlphaGo — 1,202 CPUs and 176 GPUs to be exact.

Typically, the bigger and more complex the data becomes, the more layers of neurons a deep-learning system needs to do its job. This means that in order to build the bigger neural networks that will allow much more impressive machine-learning feats — for example, more precise image recognition in self-driving cars — researchers and data scientists need more powerful chips. Nvidia plans to build them.