Nvidia’s AI, Omniverse channels will help physical industries ‘become digital for first time’: CEO
Jensen Huang, Nvidia Founder and CEO, joins Yahoo Finance Live to discuss the future of artificial intelligence technology and the tech industry at the annual GTC Conference.
Video Transcript
JULIE HYMAN: NVIDIA is ramping up its push into artificial intelligence. The company unveiled new plans at its annual GTC developer conference, including supercomputers used to develop AI technologies and a whole suite of other products, not, by the way, just chips. For more, we're joined by Jensen Huang. He is NVIDIA Founder and CEO. Jensen, great to see you. Thank you so much for being here.
JENSEN HUANG: I'm glad to be here, Julie and Dan. Nice to see you.
JULIE HYMAN: Thank you. You have been saying that this is the iPhone moment for AI. Tell us what you mean by that.
JENSEN HUANG: Well, several reasons. One, this is the first time that AI has reached an inflection point such that the capability is so easy to use that almost anybody can program it. You can program it to develop all kinds of interesting applications. And it's so accessible.
And so when you think about a new computing platform, whether it's the PC, the internet, the mobile cloud revolution, each and every generation, the number of applications, and the number of programmers, and the reach of that platform increased. And this is absolutely the case. You could see this with ChatGPT, the AI heard around the world. In just a couple of months, it reached 100 million users. And the number of startups, the number of applications that you're now starting to see on generative AI is just growing extraordinarily.
So this is definitely the beginning of a new computing platform.
DAN HOWLEY: Jensen, I want to ask you about the generative AI aspect. We saw a lot of that in your keynote, the talk about generative AI. And NVIDIA made some announcements, partnerships with Adobe for instance. Where does generative AI, though, fit into your broader portfolio when it comes to AI?
JENSEN HUANG: Well, you know, first of all, we're a full-stack computing platform. We start out with the AI infrastructure. And this GTC, we announced a massive deployment of Hopper-generation AI supercomputers, which are being deployed all over the world. But very importantly, this is also the first generation where AI is moving out of the laboratories, out of research, and into industrial operations.
And so instead of just a supercomputer for training, this is now going to be an AI factory for every industry and company to produce intelligence. Whereas companies in the past would build factories for all kinds of different products, in the future there will be factories that are created for every company so that they can produce intelligence. And so this is really an extension, a major extension, of how AI is going to be used. Generative AI is behind that.
The second thing that we do is, of course, the operating system and the software stack layer of AI. We have two platforms for that. One is called NVIDIA AI. The other one is called NVIDIA Omniverse. NVIDIA AI is where intelligence is created. Omniverse is where intelligence is simulated and tested. One is virtual intelligence. The other one is virtual worlds.
And these two platforms we have now put into the cloud. In doing so, we're partnering with the world's CSPs to take the best of NVIDIA-- the NVIDIA DGX AI supercomputer and the two NVIDIA platforms, NVIDIA AI and NVIDIA Omniverse-- and we're now making them available as cloud services, all hosted inside the world's leading CSPs. And so in partnership with the CSPs, we're going to combine the best of NVIDIA and the best of the leading CSPs, and we're going to bring AI to the world as quickly as possible.
JULIE HYMAN: And Jensen, you're good at laying out this grand vision, right, for those cloud service providers, for those clients. We're looking at some cool pictures, too, of how this is all coming into play. You talked to us a lot last year at this time about the Omniverse. So I have two questions for you.
First of all, to make it concrete for those of us who are not steeped in this stuff all the time, what are some examples of this being put into practice? And how is your Omniverse different now than it was last year? Is it that it's being put into the cloud?
JENSEN HUANG: Well, so far, Omniverse has only been an on-prem enterprise solution. And the fact of the matter is this stack, this supercomputing stack, this artificial intelligence stack, is really complicated. And the answer is to put it up in the cloud for us to fully host and manage. We host it on Azure. And it can benefit from all of Azure's security, all the industrial certifications, all the storage capability, all of their enterprise APIs, and all of their productivity APIs. And they've also done a lot of work with the industrial metaverse.
And so all of that work is now going to be tied into Omniverse, and we can take this to the world. What Omniverse does is help companies that build physical things do everything digitally first. Whether you're building cars, making devices or phones, building plants, factories, or logistics warehouses, designing large buildings, or even designing cities-- today, you have to do things kind of on paper, and then you try it in the physical world. And as you're building it, you make mistakes and you do change orders.
All of that doesn't have to happen. You should be able to do that completely in digital. And when you do it in digital, all of the organizations around your company can take part-- the car industry has 14 million employees, and it takes all of those employees working together to build a car and manufacture it around the world. And so now these 14 million employees could have a digital platform that connects the digital and the physical.
And for them to be able to design the cars and design the factories all in Omniverse, use generative AI to test it, to help populate it, to optimize it. And then they virtually assemble it before they break ground and build a factory in reality. In doing so, the number of change orders will be reduced, the time to market will be reduced, and plant-opening delays will be reduced. For an industry as large as the auto industry, which is $3 trillion large, all of this is going to translate to hundreds of billions of dollars of savings.
So this is a very, very big deal that we can help the physical industry of the world become digital for the very first time.
DAN HOWLEY: Jensen, I want to ask real quick about AI and the cost of it. We've heard stories about how OpenAI is basically subsidizing ChatGPT at this point. When it comes to generative AI and these kinds of services, obviously NVIDIA's chips are very important. I guess, is the company able to help clients at all as far as pricing goes? Is there a means for lowering the power costs? How does that square?
JENSEN HUANG: Yeah, first of all, artificial intelligence is a two-computer system. The first computer is the training system. And this is what's sitting behind me. This is the Hopper DGX GPU. This is literally one GPU. It's got eight different chips, connected chip to chip using a really high-speed interconnect. And what the computer sees, what the programmer sees, is one giant GPU. The largest single chip, if you will, the world has ever made.
Now, this computer trains the model. It develops the model. This computer works with software programmers to create the software, create the AI. Once the AI is developed, it is then run on a second computer. And the second computer could be as small as this.
This is what's deployed at Google GCP. And this will run generative models for imaging, generative models for video, generative models for language. Of course, there's a limit to how big a model can be on this GPU. And we could create an even larger version of the GPU for inference. And that version is, for example, this one.
And this is designed for large language model inference. And so all the way from this little tiny guy, which is called L4, which could be deployed all over the clouds, to this version, which is designed for large language models. And they run programs that are created by this computer behind me. And so inference has a large range of scales, just like applications come in all different sizes and shapes. Some of them require a larger computer. Some of them require smaller computers.
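To make that two-computer pattern concrete, here is a minimal sketch, assuming PyTorch: one (large) system trains the model, the weights are handed off, and a second (smaller) system loads them and only ever runs the forward pass. The model, data, and file name are toy placeholders.

```python
# Minimal sketch of the "two-computer" pattern: train on a big system,
# then deploy the finished model to a smaller inference device.
# Toy model and data; "model.pt" is a placeholder file name.
import torch
import torch.nn as nn

# --- Computer 1: training (think a DGX-class system) ---
train_device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(train_device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 16, device=train_device)   # toy training data
y = torch.randn(256, 1, device=train_device)
for _ in range(100):                            # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
torch.save(model.state_dict(), "model.pt")      # hand off the trained AI

# --- Computer 2: inference (think a small L4-class GPU) ---
served = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
served.load_state_dict(torch.load("model.pt"))
served.eval()
with torch.no_grad():                           # forward pass only, no gradients
    print(served(torch.randn(1, 16)))
```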
And so that's number one: we announced a whole family of generative AI platforms that range from L4 to L40 to H100 to our next-generation Grace Hopper, that allow us to do inference across a broad range of applications, all on one architecture, all software compatible, at different sizes and different performance levels. Then the next thing that we have to do-- and just, by the way, with this Hopper H100 NVL, in just one generation, we reduced the cost of large language model inference, like ChatGPT, by a factor of 10 to 12 times in just two years. And so we're going to continue to do that.
With the combination of new architectures, new configurations, and new software technology, we should be able to do the same for AI in the coming years that we've done for the last 10 years, which is that we enabled the capability of AI to grow by about a million times. That million times of performance, of course, wasn't followed by a million-times increase in cost. And so we should be able to do the same thing over the next 10 years and drive down the cost of inference tremendously.
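As a back-of-envelope check on those compounding claims: the 10-to-12-times-in-two-years and million-times-in-ten-years figures are from the interview; the per-year rates below are just the implied geometric means.

```python
# Implied per-year improvement rates (geometric means) from the
# figures quoted in the interview; pure arithmetic, no data.
ten_year_factor = 1_000_000                    # "about a million times" in 10 years
per_year_10y = ten_year_factor ** (1 / 10)
print(f"~{per_year_10y:.1f}x per year over a decade")    # ~4.0x

two_year_factor = 10                           # "10 to 12 times" in 2 years (low end)
per_year_2y = two_year_factor ** (1 / 2)
print(f"~{per_year_2y:.1f}x per year over two years")    # ~3.2x
```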
JULIE HYMAN: One of the interesting things that you guys have been doing as you've been building out AI, but not just that, is that you're not just a chip maker at this point, right? I think people primarily think of NVIDIA as a chip maker, but you've got more software and services now that you're adding on. What led to that decision? And how do you foresee that balance going forward?
In other words, should we think of NVIDIA as a chip maker anymore?
JENSEN HUANG: Well, first of all, the services part, the software part of our business, is going to increase over time. However, the systems part of our business, the chip and hardware part, will continue to be very large. The way that NVIDIA does things, accelerated computing is a full-stack problem, which means you have to design an original chip, an original system, original system software, brand-new algorithms. And of the algorithms that I mentioned during GTC this time-- of course, we talked a lot about generative AI-- one of the really important ones was computational lithography for chip making.
So that's an example of a very important library that we created. Another one would be data processing, which we call RAPIDS. Another one would be quantum computing, which we call cuQuantum. There's a whole bunch of libraries that we created. There are some 300 libraries or so that are used for different domains.
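For a flavor of what one of those domain libraries looks like in use, here is a minimal sketch of GPU data processing with RAPIDS cuDF. The file name and column names are hypothetical, and it assumes a machine with a CUDA GPU and RAPIDS installed.

```python
# Minimal RAPIDS cuDF sketch: pandas-like data processing that runs
# on the GPU. File and column names are hypothetical; requires a
# CUDA GPU with RAPIDS installed.
import cudf

df = cudf.read_csv("transactions.csv")         # loads straight into GPU memory
summary = (
    df.groupby("customer_id")["amount"]        # familiar pandas-style API
      .agg(["sum", "mean", "count"])           # executed on the GPU
)
print(summary.head())
```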
If you're a software company and you're creating a whole full stack, you have to start from building the system itself. And so the way that NVIDIA builds things, we build the entire data center from the chips, to the systems, to the software, to the networking, the switches, the computing fabric, all of the algorithms on top. And we bring it all up and we make it all work and we optimize it. And then the second thing that we do, and this is the part that's really magical, we break everything down again and we integrate all of our components, whether it's this size, or this size, or this size back here.
We integrate it into the world's computing fabric through our OEM partners, our system integrators, our cloud service provider partners. And we integrate them into their computing system. And so as a result, NVIDIA is really an extension of everyone's computing system.
And so Dell has an NVIDIA extension. HPE has an NVIDIA extension. AWS, Azure, GCP, OCI-- they all have NVIDIA extensions. And as a result, we have this giant footprint of one architecture, called the NVIDIA architecture, that every researcher, every developer, every startup, every company could take advantage of, because it comes from literally everywhere.
And so it seems like we're a chip company, but we've always been a full-stack systems company. And now we're moving further and further up the stack. We build those stacks because the world needs us to do it; if they were available anywhere else, we wouldn't do it. But NVIDIA AI and NVIDIA Omniverse are so unique that we really ought to put them into the world's multi-cloud, any cloud, and allow you to be able to run those stacks everywhere.
JULIE HYMAN: So Jensen, as you talk about this increased suite-- I mean, in the broader technology industry, as we know, this is sort of the year of efficiency, not just for Meta; that seems to be the mantra for a lot of large tech companies right now. How is that affecting your business? Even though they might be, longer term, very pro-AI, very interested in it, are you seeing delays in decision-making right now, for example, because of the environment?
JENSEN HUANG: We are seeing an acceleration of demand. We're seeing an acceleration of demand for our DGX AI supercomputers. We're seeing an acceleration of demand for inference because of generative AI. In fact, the major theme of GTC was really that accelerated computing and AI have arrived. And both of them must be activated as quickly as possible to drive lower cost.
I showed several examples where accelerated computing was applied to computational lithography, computer-aided engineering, data processing, even combinatorial optimization for route planning. Each could be improved dramatically in speed, while also reducing power by an order of magnitude and reducing cost by an order of magnitude. The challenge isn't just that we all have to save money and be more efficient; in the world's data centers, we're power limited. And so in order for us to continue to increase performance while continuing to be sustainable, we must, number one, accelerate every single workload. Number two, reclaim that power.
So if we used to spend 35 megawatts of power, let's reduce it down to 5 by accelerating the workload, as I demonstrated at GTC. Then use the 30 megawatts to go back and invest that reclaimed power into growth. So job one is to accelerate computation, accelerate everything we can, reclaim the power, and reinvest it into growth.
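The reclaim arithmetic in that example works out as follows (a trivial sketch; the 35 MW and 5 MW figures are Huang's, the rest is derived):

```python
# Reclaim-and-reinvest arithmetic from the 35 MW example above.
before_mw, after_mw = 35.0, 5.0                # figures from the interview
reclaimed = before_mw - after_mw               # 30 MW freed by acceleration
headroom = reclaimed / after_mw                # room for 6 more such workloads
print(f"reclaimed {reclaimed:.0f} MW -> {headroom:.0f}x more accelerated work")
```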
Now, artificial intelligence has the incredible capability of predicting the outcome without doing the brute-force computation. Brute-force computation takes a lot of energy. And so with the work that we're doing, for example, in Earth-2 to predict the weather and predict the climate, we're able to now predict weather nearly as accurately as first-principles analysis and simulation, but do it 10,000 times more efficiently. And so instead of using a ton of power in supercomputing centers that do all the computation, we should accelerate it, and we should use artificial intelligence to make the predictions. And as a result, reduce power, reduce time, reduce cost.
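A toy illustration of the surrogate idea behind Earth-2: train a small network on the outputs of an expensive "first-principles" computation, then answer new queries with a cheap forward pass instead of re-running the physics. Everything here is a stand-in, assuming PyTorch; real weather models are vastly larger.

```python
# Toy surrogate model: learn to imitate an expensive computation,
# then predict instead of simulating. The "physics" here is a
# stand-in function, not a real weather model.
import torch
import torch.nn as nn

def expensive_simulation(x):                   # stand-in for a physics solver
    return torch.sin(3 * x) * torch.exp(-x ** 2)

x = torch.linspace(-2, 2, 512).unsqueeze(1)    # training inputs
y = expensive_simulation(x)                    # expensive ground truth, computed once

surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):                           # fit the surrogate to the solver
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():                          # a query is now one cheap forward pass
    print(surrogate(torch.tensor([[0.5]])))
```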
JULIE HYMAN: And so, Jensen, even though you're talking about reducing costs in this particular area-- I mean, NVIDIA is not now just an AI company, right? It still does other stuff. It still does gaming. It still does auto. It still does data center that, to your point, is not necessarily AI-powered at this stage. And so if demand is increasing for this area, how is that balancing with what you're seeing in demand for some of your other products in those other areas?
And also, what's the proportion of your sales right now that are in these AI products? And how do you see the trajectory of that through the end of the year? Sorry, a lot of questions for you there.
JENSEN HUANG: To answer you backwards-- that was a lot of questions-- I would say that data center, largely driven by artificial intelligence and accelerated computing, is already our largest business. I expect it to remain our largest business through the year. And this is an area where we're seeing accelerated growth in demand in both training and inference. The gaming market had a very challenging year last year, and we see it returning.
And we're doing very well with Ada. Ada is based on a technology called neural graphics, which basically says instead of computing every single pixel, we render one pixel and we guess the other eight or 16. This is just an amazing thing. We're using artificial intelligence-- one large computer in our data center-- and we save power for hundreds of millions of gamers.
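As a rough illustration of that "render one, guess the rest" idea, here is a minimal sketch, assuming PyTorch: shade a frame at one-third resolution (one of every nine pixels) and upscale it 3x. Plain bilinear interpolation stands in for the learned upscaler that a real neural-graphics pipeline such as DLSS uses; the frame itself is random data.

```python
# "Render one pixel, guess the rest": shade at one-third resolution,
# then upscale 3x. Bilinear interpolation is a stand-in for the
# learned upscaler a real neural-graphics pipeline would use.
import torch
import torch.nn.functional as F

low_res = torch.rand(1, 3, 360, 640)           # the pixels actually rendered
frame = F.interpolate(low_res, scale_factor=3,
                      mode="bilinear", align_corners=False)  # infer the other 8 of 9
print(frame.shape)                             # torch.Size([1, 3, 1080, 1920])
```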
And so this way of doing computer graphics is really essential in the future. We've demonstrated incredible visual quality improvement, incredible performance, and, simultaneously, a reduction in power. And so we're going to apply artificial intelligence to gaming. We're applying artificial intelligence, of course, to autonomous vehicles. AV and all of our auto business has nearly doubled-- I think it's more than doubled-- year-over-year. We expect next year to be another very big year.
And so this year is going to be a big year, and next year is going to be a big year. We're ramping into autonomous vehicles. We're ramping into electric vehicles. And very importantly, we're ramping into a whole generation of cars where the car makers now realize that these are really software-programmable cars, and what they need to do is build an installed base-- a very large fleet. Because an installed base will be on the road for 15 years, they can benefit from new software offerings and new applications that they can deliver to their customers, and monetize that relationship for 10, 15 years.
And so people now start to realize that. And NVIDIA is really the great choice, the best choice, for those software programmable AV electric vehicles.
DAN HOWLEY: Jensen, I just have one last question for you, and it's about interest rates. We saw Jay Powell talking about raising rates again. How is that impacting some of your customers? We're obviously talking about pullback-- not pulling back as far as AI goes. But how else are you seeing that impact the business?
JENSEN HUANG: Well, this is the time when we all have to do more with less. And not one company wants to do less. Everybody wants to do more. But they have to find a way to do it with less.
Accelerated computing is really the best path forward. You roll up your sleeves, refactor your software, and deploy that exact same software, now refactored-- and we have all the skills and the capabilities to help them do that. Once you refactor the software, you can save extraordinary amounts of money.
The trillion dollars of-- what was it, about half a trillion dollars of cloud computing spend these days? The vast majority of it is in data processing. All of that should be accelerated. And we can help companies do that: accelerate it, reduce the spend, even improve their performance, and, very importantly, reduce the carbon footprint, reduce the power dissipated in order to do that job. And so across the world, accelerated computing is really the best way to do more with less.
JULIE HYMAN: Jensen, thanks so much for talking to us and spending so much time with us. It's always a really fascinating conversation for sort of the bigger world and also for NVIDIA investors and customers as well. Jensen Huang, NVIDIA Founder and CEO, and our Dan Howley, of course, as well. Thanks.