Former Google CEO on AI and ChatGPT: We need guardrails
Former Google Chairman & CEO Eric Schmidt joins Yahoo Finance Live from the Milken Institute Global Conference 2023 in Beverly Hills, CA. Schmidt spoke with Allie Garfinkle about the risks of AI, the development of ChatGPT, the outlook for companies adopting the technology, and more.
When discussing the paramount risks associated with AI, Schmidt breaks it down into three categories:
1. Misinformation at scale. Schmidt says, "An evil person could easily convince you with their likeness that they said something that they didn't say."
2. Biology. Schmidt notes it would "be relatively easy to synthesize pathogens that are bad."
3. Cyber attacks. Using AI, Schmidt thinks someone could "attack a whole country."
Regarding guardrails that need to be put in place, Schmidt says, "The general terms for those are guardrails and human safety or AI alignment. And there are plenty of people working on that. And it's coming."
Video highlights:
00:00:05 - Outlook for companies adopting AI
00:00:55 - Expectations for the future of AI
00:02:55 - Emergence of AI
00:03:55 - Paramount risks associated with AI
00:05:15 - Guardrails around AI
00:06:20 - Controversy surrounding AI
Video Transcript
- Well, the battle for digital dominance is laser-focused on AI these days, with billion-dollar valuations and FOMO fueling speculation left and right.
One man has been particularly outspoken on the dangers of falling behind in the adoption of this quickly moving area. Google's former CEO Eric Schmidt, the man who drove the search engine's expansion from Silicon Valley startup to industry leader, knows a thing or two about the digital land grab.
In 2006, he presided over the billion-dollar acquisition of the not-yet-profitable video-sharing site YouTube. Schmidt, famously the adult supervision for Google's young founders Sergey Brin and Larry Page, would turn out to be a lot more. He took Google public in 2004, made his name synonymous with US tech, and went on to serve as an advisor in the Obama administration.
Now, his latest challenge: using his knowledge to ensure the United States develops a coherent and robust policy around AI. Yahoo Finance's Allie Garfinkle sat down with the former Google CEO at this year's Milken Conference to discuss just that.
[MUSIC PLAYING]
ERIC SCHMIDT: The arrival of intelligence means the systems will become much more powerful. And so the companies that adopt AI soon, right, will get an economic advantage. They'll make better ads or better answers or whatever it is that they do. They'll target better. They'll understand their customers better. They'll solve the customer's problems better. They'll invent new things.
It'll take a while though, before we really see the implications of this. This stuff is only six months old, right, so we have enormous hype and very great excitement. But if you look at the history of technology, it's a decade before you really see the transformative nature of it.
ALLIE GARFINKLE: So one of the things I wanted to ask you actually was about that hype, because "The Age of AI"-- you published it in 2021 with Henry Kissinger and Dan Huttenlocher. You guys were out ahead of this. Were you expecting that kind of watershed moment that we got?
ERIC SCHMIDT: Frankly, no. I think in the book, we talk about GPT-3. But we did not understand how powerful ChatGPT would be in the annals of human history, that people could converse with it. And the key thing that ChatGPT did is it added a technology where humans could adjust the answers to make them more human-like. Technically, it's called RLHF [reinforcement learning from human feedback].
And that invention, which was a big deal, really was the final linchpin to bring us to this point. I'll tell you that the three of us have decided to write a sequel because there's so much going on. And Dr. Kissinger is 100 years old. So we're obviously excited.
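(For context: RLHF trains a separate reward model on human preference comparisons, then fine-tunes the language model against that reward. The snippet below is a minimal sketch of just the preference-loss idea at the core of reward modeling, assuming a toy scalar score per answer and the standard Bradley-Terry loss; the function and numbers here are illustrative, not any production system's code.)

```python
import math

# Minimal sketch of the reward-modeling step behind RLHF, assuming a toy
# scalar "reward" per response and the Bradley-Terry preference loss.
# Real systems train a neural reward model on many human comparisons,
# then fine-tune the language model against it with RL (e.g., PPO).

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): small when the model already
    scores the human-preferred answer higher, large when it disagrees."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example: a human labeler preferred answer A over answer B.
r_a, r_b = 1.2, -0.3  # hypothetical reward-model scores
print(preference_loss(r_a, r_b))  # ~0.20: model agrees with the human
print(preference_loss(r_b, r_a))  # ~1.70: model disagrees, so training pushes the scores apart
```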
ALLIE GARFINKLE: Wow. So talk to me then a little bit about-- actually, before we go there, why do you think ChatGPT captures so many imaginations?
ERIC SCHMIDT: Because it put in human terms what we've been talking about for a long time-- the concept that these things could write this well. So I describe ChatGPT as a college student that got an A in English and writing and a C or a D in facts, right? So as you know with ChatGPT, it begins to hallucinate. It gets confused.
If you look at Bing and Sydney, you can trick it. In other words, it's not a very good college student, but it writes very well. And so the task now is to build systems that are much more knowledgeable and make many fewer errors, and that's underway.
So a few years from now, people will have forgotten all the humor of, oh, you know, the chatbot fell in love with me, and it convinced me to leave my wife. And my wife was upset. All of that is humorous because the systems will become much more personal and much more reliable. They won't make stuff up the way they do today.
ALLIE GARFINKLE: Well, they're probabilistic models, right? So I imagine they're not always perfectly accurate. And I was watching that video you guys have about "The Age of AI", and you describe AI as emergent, dynamic, imprecise. That sounds a lot like humans.
ERIC SCHMIDT: Yeah. The difference with emergent, dynamic, and imprecise is if you're emergent, dynamic, and imprecise, I know you're a human being. I have a theory of how your mind works. I understand the limitations. You were born. You have a mother and a father and a family and kids. And you eat and you sleep and, you know, a normal human being.
ALLIE GARFINKLE: Are you confident about that?
ERIC SCHMIDT: I am absolutely confident of that about you. And when I look at the same thing in a computer, I have no theory of mind. I don't know: is it evil? Can it switch from good to evil? Can it confuse me? Could it decide to not tell me what it's doing because it learned to be deceptive? Did it read a book that told it, you know, a spy novel, where it learned how to deceive me? I don't have any context for that.
So we're going to have to develop a theory of mind for these new systems, and it's going to be different. They're not human. They're not conscious. They're not copies of human brains.
ALLIE GARFINKLE: But it doesn't mean they're not intelligent. And to a certain extent, I mean, it makes me wonder about the sort of risks you see. What are kind of the paramount risks that you find yourself thinking about with AI?
ERIC SCHMIDT: Well, everybody wants to talk about social media and, you know, falling in love with the computer, and so forth. I'm not too worried about that. I think those are solvable.
I think there's three. The first one is misinformation at scale. These technologies will allow an evil country or a competitor to come in and screw up our democracy. You believe that our leaders are saying what they're saying. And you honestly believe when you see them and you listen to them, that's what they said. An evil person could easily convince you with their likeness that they said something that they didn't say. That's really bad.
Another example would be in biology: it would be relatively easy to synthesize pathogens that are bad. Another one would be cyber attacks: it would be relatively easy to unleash these things to attack a whole country, and try this, try that.
So we need to put guardrails on these bad parts of the system. The general terms for those are guardrails and human safety or AI alignment. And there are plenty of people working on that, and it's coming.
ALLIE GARFINKLE: Well, and that kind of brings me to the next question, which is those guardrails. What could they look like? What are we seeing in Europe? How does it compare to what we're talking about here in the States?
ERIC SCHMIDT: Well, at the moment, Europe is doing almost nothing in this space, so we don't really have any examples. They have a law, which requires that the system be able to explain itself if it's used in something critical. By definition, these systems today can't explain themselves. By the way, nor could your teenager, if you have a teenager. You know, again, we have to work with what we've got.
ALLIE GARFINKLE: Some adults struggle to explain themselves.
ERIC SCHMIDT: So we have to work with what we have. And that's a bad strategy on the Europeans' part. In America, there's an emerging consensus that there should be safety models that everyone kind of shares, like these are examples of bad things. So, for example, how do I kill somebody? How do I kill myself? Those are obviously bad. No one's in disagreement: that stuff should be prohibited.
And so I think you're going to see pretty much a simple agreement, at least in the United States, to constrain what these systems do.
ALLIE GARFINKLE: The other thing kind of too about it is there's a lot of fear out there, right, in the middle of all this. I was at a family event recently, and someone said to me, you shouldn't cover AI. It's too controversial. Should AI be controversial?
ERIC SCHMIDT: Well, it's interesting. I was on a ski lift with somebody I didn't know and somebody I knew. And I was talking to the gentleman I knew, and the lady to my right goes, "Oh yes, I've been using ChatGPT!" And then I knew it had arrived on the ski lift.
The important point here is that, collectively, we have to have this conversation now. And I think that the good news in America is when these things happen, everybody has an opinion and everybody gets to be heard.
And I'd much rather do it this way than the way social media happened, where social media sort of snuck up on us. It was sort of intended for good outcomes, you know, putting us together, friends, all of that. But it also allowed for attacks on our political system, problems with young people, things like that.
So I want the conversation now. And I'm perfectly happy to disagree because I think that's how America works. That's the great thing about our country.
- That was former Google chairman and CEO Eric Schmidt in conversation with our Allie Garfinkle there.