Nvidia (NVDA) made headlines this week at its GTC Conference when it announced it's building its very first standalone CPU. For a company that made its fortune on the power of its graphics cards, it’s a totally new direction.
And according to CEO Jensen Huang, the superchip, called Grace, is a powerful addition to the company’s lineup.
“This is a new growth market for us,” Huang told Yahoo Finance during an interview.
“The entire data center, whether it's for scientific computing, or for artificial intelligence training, or inference to the app, the deployment of AI, or data centers at the edge, all the way out to an autonomous system, like a self-driving car, we have data center-scale products and technologies for all of them,” he added.
Grace, named for computer programming pioneer Grace Hopper, features 144 cores and twice the memory bandwidth and energy efficiency of leading high-end server chips, according to Nvidia.
The chip, which Nvidia calls a superchip because it’s two CPUs in one, is specifically designed for use in AI systems, an area the company has invested in heavily in recent years.
“For the first time, we're selling CPUs. Today, we connect our GPUs to available CPUs in the market, and we'll continue to do that. The market is really big — there are a lot of different segments,” Huang said.
“Artificial intelligence or scientific computing, the amount of data that we have to move around is so much. So this gives us the opportunity to offer a revolutionary type of product to an existing marketplace for a new type of application that's really sweeping computer science.”
In addition to Grace, Nvidia unveiled its new Hopper H100 data center GPU. That system, which packs 80 billion transistors, offers a significant step up in performance compared to its predecessor, the A100 GPU, Nvidia said.
GPUs are important for high-performance computing and AI applications because they can perform many calculations in parallel. And Nvidia has built on those capabilities for years.
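The workloads that benefit are data-parallel: the same operation applied independently to many elements at once. As a rough illustration only (this is plain CPU-side Python using the standard library's process pool as a stand-in, not Nvidia's GPU stack), the pattern looks like this:

```python
from concurrent.futures import ProcessPoolExecutor

def scale(x):
    # One independent operation per element -- the same pattern a GPU
    # spreads across thousands of cores instead of a handful of CPU ones.
    return x * 2.0

if __name__ == "__main__":
    data = list(range(8))
    # Each element is processed independently, so the work can be
    # split across workers with no coordination between them.
    with ProcessPoolExecutor() as pool:
        result = list(pool.map(scale, data))
    print(result)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
```

The point of the sketch is the shape of the problem, not the tooling: when every element can be computed without waiting on any other, adding cores scales throughput almost linearly, which is why AI training and scientific computing map so well to GPUs.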
“If you think about our company today, it's really a data center scale company. We offer GPUs and systems and software and networking switches,” Huang explained.
But as chips continue to shrink and the number of transistors packed onto each CPU or GPU increases, there’s always the question of whether chipmakers like Nvidia are running up against the physical limits of silicon itself.