Google’s generative AI fails 'will slowly erode our trust in Google'
It was a busy Memorial Day weekend for Google (GOOG, GOOGL) as the company raced to contain the fallout from a number of wild suggestions by the new AI Overview feature in its Search platform. In case you were sunning yourself on a beach or downing hot dogs and beer instead of scrolling through Instagram (META) and X, let me get you up to speed.
AI Overview is supposed to provide generative AI-based responses to search queries. Normally, it does that. But over the last week it has also told users they can use nontoxic glue to keep cheese from sliding off their pizza, advised that they can eat one rock a day, and claimed Barack Obama was the first Muslim president.
Google responded by taking down the responses and saying it’s using the errors to improve its systems. But the incidents, coupled with Google’s disastrous Gemini image generator launch that allowed the app to generate historically inaccurate images, could seriously damage the search giant’s credibility.
“Google is supposed to be the premier source of information on the internet,” explained Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if that product is watered down, it will slowly erode our trust in Google.”
Google’s AI flubs
Google’s AI Overview problems aren’t the first time the company has run into trouble since it began its generative AI drive. The company’s Bard chatbot, which Google rebranded as Gemini this February, famously gave an incorrect answer in a February 2023 promo video, sending Google shares sliding.
Then there was its Gemini image generator software, which produced photos of diverse groups of people in historically inaccurate settings, including depicting them as German soldiers in 1943.
AI has a history of bias, and Google tried to overcome that by including a wider diversity of ethnicities when generating images of people. But the company overcorrected, and the software ended up rejecting some requests for images of people of specific backgrounds. Google responded by temporarily taking the software offline and apologizing for the episode.
The AI Overview issues, meanwhile, cropped up because, according to Google, users were asking uncommon questions. In the rock-eating example, a Google spokesperson said it “seems a website about geology was syndicating articles from other sources on that topic onto their site, and that happened to include an article that originally appeared on the Onion. AI Overviews linked out to that source.”
Those are fine explanations, but the fact that Google continues to release products with flaws that it then needs to explain away is getting tiring.
“At some point, you have to stand by the product that you roll out,” said Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University’s Tepper School of Business.
“You can't just say … 'We are going to incorporate AI into all of our well-established products, and also it's in constant beta mode, and any kinds of mistakes or problems that it makes we can't be held responsible for and even blamed for,' in terms of just trust in the products themselves.”
Google is the go-to website for finding facts online. Anytime I’ve gotten into an argument over some inane topic with a friend, one of us inevitably shouts, “Fine, Google it!” And chances are you’ve done the same. Maybe not because you wanted to prove you know some obscure Simpsons fact better than your friend, but still. The point is, Google has built a reputation of trustworthiness, and its AI flubs are slowly eating into that.
A race to beat the competition
So why the slip-ups? Hegde says the company is simply moving too quickly, releasing products before they’re ready in an effort to outmaneuver competitors like Microsoft (MSFT) and OpenAI.
“The pace of research is so quick that the gap between research and product seems to be shrinking significantly, and that is causing all these surface-level issues,” he explained.
Google has been racing to beat back the perception that it fell behind Microsoft and OpenAI ever since the two teamed up to release a generative AI-powered version of Microsoft’s Bing search engine and chatbot in February 2023. OpenAI even managed to upstage Google ahead of its I/O developer conference earlier this month, announcing its powerful GPT-4o AI model a day before the show kicked off.
But if beating the competition means rolling out products that generate errors or harmful information, Google risks giving users the impression that its generative AI efforts can’t be trusted and, ultimately, aren't worth using.
Email Daniel Howley at [email protected]. Follow him on Twitter at @DanielHowley.