Why There are Fewer AI Overviews in Google Search Results

You may have noticed changes in the Google search results pages (SERPs) over the past few months. A big part of that change involves a new AI feature, AI Overview. In an April blog post (How Google’s SGE is Changing the Search Game with AI Results) we speculated about how those changes might appear in search results. In May, some of those features went live when AI Overview was added to the results page. Let’s look at what has happened since the feature went live.


What is an AI Overview?

AI Overviews are part of the SGE (Search Generative Experience). An AI Overview summarizes an answer to the search query, derived from web content, and appears at the top of the search results page. The term “AI Overview” appeared in Google Labs before launch but is more prominent now. That is a good thing, because the name is descriptive and easy to remember.

How to See AI Overview Summaries

The AI Overview doesn’t appear on every search results page; whether it appears is controlled by Google. Search queries posed as questions are more likely to return an AI Overview, but it is not a certainty. In other words, you can’t make it appear on the search results page; it shows up when Google wants it to.

As an aside, Microsoft’s Bing returns AI-generated summaries more often than Google does. Bing does not use the AI Overview label; Microsoft refers to the broader set of AI features in Bing as Copilot.

Did Google Roll Back AI Overview After Release?

Are there fewer AI Overview summaries on search results pages than when the feature went live in May? Yes. According to a report by BrightEdge Labs, the number of search results pages returning an AI Overview has been gradually decreasing. At release, 84% of results pages included an AI Overview; by June that figure had fallen to 14%, and by July it was down to just 7%.

When AI Gets it Wrong

Whenever an AI product produces erroneous results, the errors get a lot of attention on social media, and screenshots quickly go viral. A few examples reported in a May 24 Search Engine Land blog post:

  • Running with scissors is a cardio exercise that can increase your heart rate and require concentration and focus.
  • According to UC Berkeley geologists, people should eat at least one small rock a day. Rocks can contain vitamins and minerals…
  • You can also add 1/8 cup of non-toxic glue to the (pizza) sauce to give it more tackiness (to fix cheese not sticking to pizza)

These examples came from satirical content or from discussion forums where trolls abound. Google is putting controls in place to reduce errors in AI Overviews, and those controls have also reduced how often an AI Overview appears on a page, hence the reported drop from 84% to 7% of SERPs.

Are AI Overviews Hallucinating?

In AI parlance, a hallucination is a nonsensical or inaccurate output. IBM describes it this way:

Generally, if a user makes a request of a generative AI tool, they desire an output that appropriately addresses the prompt (i.e., a correct answer to a question). However, sometimes AI algorithms produce outputs that are not based on training data, are incorrectly decoded by the transformer or do not follow any identifiable pattern. In other words, it “hallucinates” the response.
(Source: https://www.ibm.com/topics/ai-hallucinations)

Google maintains that AI Overviews generally don’t hallucinate in the same way other AI large language models might, because they are powered by a different, customized LLM that is integrated with the web ranking system.

However, an error is an error. Users don’t care whether a bad AI Overview was the product of an LLM hallucination or of bad training data; nobody wants to see erroneous information. Google is making adjustments to identify satire and reduce the use of user-generated content. Liz Reid, Google’s VP and Head of Search, said in a blog post that Google has been limiting user-generated content, working to identify satirical content, and recognizing nonsensical queries.

Search Engine Land confirms that citations of Reddit and Quora (both examples of user-generated content) have been significantly reduced since the May 15th launch.

Predictions?

While AI is getting smarter at identifying satire, new HTML meta tags could also help AI understand web-based inputs. Meta tags are tags in HTML source code that provide information to web crawlers; they are not visible on a rendered page.

One example could be a new meta tag identifying a web page as satirical:

<meta name="satire" content="true">

AI can be trained to identify an entire domain as satire (such as The Onion), but identifying individual pages as satire is a challenge. Humans often tag social media posts with hashtags like #satire or #humor to avoid misunderstandings; the same approach could work for web crawlers, as sketched below.
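To make the idea concrete, here is a minimal sketch (in Python, standard library only) of how a crawler might check for the hypothetical “satire” meta tag proposed above. The tag name and its true/false semantics are purely speculative; no such standard exists today.

# Sketch: detect a hypothetical <meta name="satire" content="true"> tag.
# The "satire" meta name is an assumption from the article, not a real standard.
from html.parser import HTMLParser


class SatireMetaParser(HTMLParser):
    """Sets is_satire when a meta tag named "satire" with content "true" is seen."""

    def __init__(self):
        super().__init__()
        self.is_satire = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "satire":
            self.is_satire = (attrs.get("content") or "").lower() == "true"


def page_is_satire(html: str) -> bool:
    parser = SatireMetaParser()
    parser.feed(html)
    return parser.is_satire


# Example usage with a toy document
sample = '<html><head><meta name="satire" content="true"></head><body>...</body></html>'
print(page_is_satire(sample))  # prints: True

A real crawler would, of course, fetch the page first and weigh this signal alongside many others; the point is simply that a page-level marker like this would be trivial for software to read.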

Peter Drucker said, “Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.” One thing is certain: as the technology improves, AI Overviews will become more prevalent. In the meantime, until there is a good way for crawlers to identify satire, maybe we should avoid posting any serious content on April Fools’ Day 😉