Google addresses issues with its new AI Overviews after bizarre results went viral
Google recently launched AI Overviews in the US but quickly hit a snag. Users have been sharing bizarre and potentially risky responses to their queries, like the suggestion that eating one rock a day could be good for your health.
Google AI suggesting that eating one rock a day can be a good source of vitamins and minerals is just plain wrong (Image Credit: prabin_ishere/Reddit)
Now, in response to the concerns, Liz Reid, VP of Google Search, acknowledged that the search engine's AI Overviews sometimes returned “odd, inaccurate or unhelpful” results.
In a recent blog post, she addressed these issues and announced that Google has put safeguards in place to ensure the feature provides more accurate results that won't go down in meme history.
Reid clarified that some of the extreme AI Overview responses circulating online, like the suggestion that it's safe to leave dogs in cars, are fabricated. Others, like the viral screenshot of the response to "How many rocks should I eat?", are genuine; Google generated that response because a website had published satirical content on the topic. She explained why the company's AI linked to that website:
Prior to these screenshots going viral, practically no one asked Google that question. There isn't much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question.
She also added:
When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.)
The Google VP also confirmed the case in which AI Overviews suggested using glue to make cheese stick to pizza, a response that cited content from a forum. While forums can offer genuine firsthand information, they can also dispense unhelpful advice, as you have probably noticed yourself.
Reid also mentioned that Google extensively tested the feature before launch, but "there’s nothing quite like having millions of people using the feature with many novel searches."
Google is putting more restrictions on AI Overviews
Google figured out where its AI responses went wrong by reviewing examples of its answers from the past few weeks, and based on what it found, the company added safeguards. Here is what the tech giant did (a brief illustrative sketch follows the list):
- Google improved its ability to detect queries that don't make sense and shouldn't trigger an AI Overview, and adjusted its AI to better recognize humor and satirical content.
- The company updated its systems to limit the use of user-generated content in responses where it could offer misleading advice.
- Google added triggering restrictions for queries where AI Overviews were not proving helpful.
- For breaking news topics, where freshness and accuracy matter most, Google refrains from displaying AI Overviews altogether. The tech giant has also introduced refinements that strengthen quality and accuracy checks for health-related queries.
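Google has not published implementation details for any of these safeguards, but the common thread is a set of pre-checks that decide whether an AI Overview should appear at all. The sketch below is a minimal, hypothetical illustration of that gating idea in Python; every name, signal, and threshold in it is invented for the example and should not be read as Google's actual code.

```python
# Hypothetical illustration only: Google has not disclosed how AI Overviews
# decide when to answer. This toy gate mimics the kinds of checks Reid's
# blog post describes -- nonsense queries, satirical sources, heavy reliance
# on user-generated content, breaking news, and health topics. All names
# and thresholds here are invented for the example.

from dataclasses import dataclass, field


@dataclass
class SearchContext:
    query: str
    is_nonsense: bool = False        # e.g. from a query-quality classifier
    is_breaking_news: bool = False   # freshness-sensitive topic
    is_health_topic: bool = False    # held to a stricter quality bar
    source_labels: list[str] = field(default_factory=list)  # one per page


def should_show_ai_overview(ctx: SearchContext) -> bool:
    """Return True only when none of the (hypothetical) risk signals fire."""
    if ctx.is_nonsense:
        return False  # nonsensical queries shouldn't trigger an overview
    if ctx.is_breaking_news:
        return False  # prefer fresh, human-curated results for hard news
    if not ctx.source_labels:
        return False  # a "data void": too little supporting content
    satirical = sum(label == "satire" for label in ctx.source_labels)
    user_generated = sum(label == "forum" for label in ctx.source_labels)
    if satirical / len(ctx.source_labels) > 0.5:
        return False  # answer would lean mostly on satirical content
    if user_generated / len(ctx.source_labels) > 0.5:
        return False  # answer would lean mostly on user-generated advice
    if ctx.is_health_topic and len(ctx.source_labels) < 3:
        return False  # stricter corroboration bar for health queries
    return True


# The viral "rocks" query: its only support is one satirical source.
rocks = SearchContext(
    query="How many rocks should I eat?",
    source_labels=["satire"],
)
print(should_show_ai_overview(rocks))  # False -- overview suppressed
```

The point is only the shape of the logic: cheap pre-checks that veto the AI-generated answer and fall back to ordinary search results whenever a risk signal fires.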