ChatGPT’s AI capabilities are ‘plateauing,’ professor says
Yahoo Finance Video
CUNY Queens College Professor Douglas Rushkoff joins Yahoo Finance Live to discuss AI from an investing perspective amid the ongoing hype, the expectations for AI, and the outlook for its future.
Video Transcript
JULIE HYMAN: And for those looking to invest in artificial intelligence right now, our next guest says the future of the industry probably won't look like ChatGPT, at least not necessarily. Douglas Rushkoff is joining us. He's a professor of media theory and digital economics at CUNY Queens College, as well as the author of "Survival of the Richest" and "Team Human." Thank you so much for being here.
So let's sort of start with that question, right? Obviously, we are-- it feels like we're still in the infancy here of generative AI. So how do we even know, from an investing perspective or just from a human perspective, what to expect?
DOUGLAS RUSHKOFF: Well, it's tricky, right? I mean, it's funny, it's both a bubble and the next big thing, like the net and everything else. You know, and I do think someday we'll look back on this whole era as-- the web and social media were a bit like the missionaries, gathering information with a friendly face on technology and maybe converting a lot of people to the digital mindset.
And then AI is more like the conquistadors, right? So the net that we know and love really was just setting the stage for this much bigger thing that's actually gonna come. The irony is, I think, that most of the companies that we're looking at are really just little apps built on top of the main existing AI platforms.
You know, OpenAI by Musk and Altman, or DeepMind with Google, or Facebook's AI. Those are real AIs. But looking forward, I feel like these companies are going to be understood more as kind of first-generation AIs, in that they learn by reading data, kind of gathering the net in sort of a one-to-one fashion.
And that's why, when we're talking about these companies, when people invest in them, they're talking about, oh, well, this company is two years ahead of that one, which means that, oh, it's had two years of a learning headstart at the rate that an AI can eat data. But the next-generation AIs, the kinds the people I'm talking with are working on, they don't really learn data the same way.
It's a bit like-- investing in one of these companies, I feel like-- and nothing personal-- is almost like investing in Yahoo in 1999, when you don't realize that Google is gonna come along without all the requirements for how you build an index, you know? The next-generation AIs are not gonna be learning data in these one-to-one relationships, but rather kind of inferring a whole lot of things from a teeny bit of data.
And that will allow them to leapfrog the current technologies. And so those are the ones I'm more interested in, these sort of more multidimensional, almost fractal-style AI learning models.
BRAD SMITH: Doug, if advertising was the currency of the internet, what is going to be the currency of artificial intelligence, then?
DOUGLAS RUSHKOFF: Well, it's interesting. People think right now it's data-- how much data has it consumed. But I think ultimately it's going to be more the dimensionality of the responses. So, you know, you do a ChatGPT. Like, I have students who try to use ChatGPT for their papers.
I can immediately tell they were written by ChatGPT-- I don't know why teachers are worried about this. It's like reading Wikipedia. Everything is one level deep. There are no insights. There are no clusters of knowledge. There's no explosion. There's no awe. There's no emotionality to it. So I think that the-- and it's a hard metric to understand before we have more facility with it.
But it's really going to be the dimensionality of the product, right? How natural it is, how fractal it is, how self-referential it is. You know, and that's gonna be a kind of information density or dimensionality rather than just a volume.
BRIAN SOZZI: Doug, how do you see ChatGPT evolving over time?
DOUGLAS RUSHKOFF: You know, I'm optimistic about a lot of these technologies, but I feel like ChatGPT is plateauing, right? It's done the thing that it does. I don't see ChatGPT itself getting that much-- that much better.
I think it represents a stage in the evolution of AI. But I feel like the model is peaking. You know, it's doing the thing that it does. The same way-- remember Watson? You know, this was 10 years ago, when people were like, oh, Watson, it's really cool, but it's kind of a glorified search engine, isn't it? It's like, yeah. Oh, this is what ChatGPT does, or this is what a deepfake mouth-changing algorithm can do when you, you know, want to make someone look like they're speaking a different language.
So there's certain abilities. And I think that ChatGPT is a great application on, you know, OpenAI. But it's sort of, OK, now what? You know? And I think that the "now what" is going to be working with the AI in fundamentally different ways.
JULIE HYMAN: And Doug, sort of following up on that. You wrote a recent article where you talked about how, in another interview, you actually used ChatGPT to respond to a question, but they wanted the view of the human. I would actually push back a little bit against that view, that human credibility is important.
I mean, we are seeing-- we have seen rampant evidence over the last, say, election cycle, that knowing that there's an actual human on the other side, or maybe just thinking there's an actual human on the other side, right? Like, those lines are really blurry here. So I wonder why you're so convinced that is gonna continue to be important.
DOUGLAS RUSHKOFF: Well, I mean-- well, think about it. In the election, ultimately, we elect a human to be president, right? [LAUGHS] I mean--
JULIE HYMAN: Yeah. Fair.
DOUGLAS RUSHKOFF: I do-- I do think we want that. I mean, there's some belief-- and I get that. And that goes down the line of effective altruism and a lot of the stuff that folks like Musk believe, that technology really would be a better steward of the planet than people, right?
We're prone to emotions and bias, and we fall in love or we get mad. Whereas if you could really program a machine just right, it could make decisions in a more Solomonic, you know, totally removed, perfectly optimized way.
But what we're finding, I mean, what we already know, is machines are trained on us, right? Machines move forward with the same biases that we do, only under the pretense and illusion of fairness, of evenness. So I'm not looking forward to an AI-driven reality so much as an AI-assisted one.
You know, AIs are gonna be really, really good as partners, as ways of augmenting human intelligence and activity, not as a way of replacing it. Except in areas where we might really want to be replaced, you know, in dangerous activities, and boring jobs, and things.
And it's also a matter, I think, of us no longer seeing threats to our jobs or threats to our employment as threats. You know, unemployment is only a problem if there's, you know, work that needs to be done. Otherwise, unemployment, in some ways, could be looked at as a solution, right? I mean, if robots were really gonna do the work, I don't have a problem with doing the play.
BRIAN SOZZI: All right, we'll leave it there. Douglas Rushkoff, City University of New York Professor of Media Theory and Digital Economics, good to see you. And give Yahoo Search a chance.