Google Defends AI Overview After Viral Search Mishaps

If you’ve toyed around with Google’s new AI Overviews, or encountered social media posts about the feature’s strange and incorrect answers, you might be wondering what Google is up to. After unveiling the feature, which offers conversational answers to user queries so they don’t have to click through search results, Google found itself on the wrong end of another AI blunder.

Some of the erroneous and absurd answers included instructing people to add glue to their pizza recipes or to eat rocks.

In a company blog post published Thursday, titled “About last week,” Google’s Head of Search, Liz Reid, attempted to explain why things went wrong. She said that Google tested the feature extensively before it was unleashed to the world. “But there’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” Yahoo Finance quoted her as saying.

Rather than “hallucinating” or making things up, as other AI chatbots might, Reid said AI Overviews’ issues stem from other causes: “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.” Those challenges also arise in producing traditional search results, she said.

Google is cutting back on the use of social media and forum posts in AI Overview answers. And, Reid said, the company aims not to show AI Overviews for hard news topics, “where freshness and factuality are important.”
