Advanced Micro Devices (AMD) unveiled its MI300 data center chip at its "Advancing AI" event on Wednesday. AMD CEO Dr. Lisa Su talks with Yahoo Finance Executive Editor Brian Sozzi about how the ultra-powerful AI accelerator aims to transform next-generation services.
As AI adoption grows through deployments of OpenAI's ChatGPT and Microsoft's Copilot, Su explains how AMD's new chip can help eliminate lags, discrepancies, and knowledge gaps, heightening the accuracy of large language models.
Amid robust demand, Su says AMD "[plans] for success" through ample chip supply and partnerships. With an initial line of sight to $2 billion in revenue in 2024, she expresses confidence that MI300 sales will keep surging, fueling AI progress in cloud and enterprise rollouts.
For more expert insight and the latest market action, watch the full episode of Yahoo Finance Live.
Video Transcript
LISA SU: So honored to have Microsoft, Meta, Oracle, Dell, Lenovo, Supermicro, a number of really smart AI startups that are there. And so what we see is the ecosystem is hungry for this technology, and we're ready for it.
BRIAN SOZZI: Lisa, are you just sold out? What's the backlog of sales look like for these chips?
LISA SU: Well, you know, Brian, what we said in our last conference call is that we have very clear line of sight to $2 billion of revenue next year. But what we also said is we plan for success. From my perspective, customer demand is very high. We continue to work with our customers to deploy as quickly as possible. And we have much more supply than $2 billion. So I do believe, as we go through next year, we'll be able to update those numbers.
BRIAN SOZZI: Wow. That's some mind-blowing stuff. Let's stay on this thread of me not being a scientist. So I watched your whole presentation. I'm watching, and I continue to watch it. And I came away thinking your chips will add more power to large language models. Help us understand what these chips will let these models do that they can't do today.
LISA SU: Yeah. No, that's a great question, Brian. So look, I think we've all really experienced how powerful ChatGPT and Copilot functions are in our personal lives, in our businesses, in our enterprises. We're using it within AMD to build better chips. So that's where the technology is.
Now, as you guys know, sometimes when you ask ChatGPT a question, it won't give you quite the right answer because the model doesn't have, let's call it, all of the knowledge. When you have larger models, when you actually train them on more information, they'll get more and more accurate.
And so what we're working on is the technology for the next generation and the next generation. And what we're trying to do is to make this AI technology so easy to use but so powerful that it makes all of our daily lives better. So more compute will allow you to train better models. They'll be smarter. And it'll also allow you to get answers much more quickly, so you don't have to wait. Sometimes, there's a little bit of a delay between when you ask the question and when you actually get the answer. We're building these chips so that delay is as short as possible, and you can bring that computing technology everywhere you go, including in your PCs, for example.