Okay, are we all feeling it? That weird little sigh you let out when you google something simple, only to be met with a wall of... stuff. There's an AI-generated summary at the top, a bunch of sponsored ads, that "People also ask" box, shopping results... and only then, buried way down, are the classic blue-link results you were actually looking for. It's become a reflex, right? Your eyes just skip past the AI answer because, let's be honest, you're not totally sure you trust it. And then you find yourself scrolling, looking for a Reddit thread or some other forum, just to get an answer from a real person. So when did search get so... complicated? And are we all starting to crave a simpler, more human experience?
It sure looks like it. While Google’s new AI Overviews are definitely convenient, they’ve created this weird push-and-pull. We don't really trust the answers they give us, but we often rely on them anyway because they're fast. At the same time, a real "AI backlash" is starting to bubble up—a growing demand for accurate, human-made content as we all get a little tired of the AI’s mistakes and weird biases. Let's dig into what's happening, first in the US and then in Europe, to see how this is all shaking out.
The Weird Paradox: We Don't Trust the AI, But We Use It Anyway
There's a really strange dynamic going on with Google's AI-generated summaries. On one hand, our trust in them is incredibly low because of all the quality issues. On the other... well, we admit we often just take the answer and run with it because it's easy. A recent survey of over 1,000 web users from Exploding Topics really paints the picture:
- It’s full of mistakes. A whopping 71% of us have personally seen a Google AI Overview get something significantly wrong. The errors range from "inaccurate or misleading content" (which 42% of users have seen) to missing context or biased answers. Even more alarming, about 17% have seen it give out unsafe or harmful advice. Yikes.
- We’re all skeptical. Seriously, almost nobody trusts these things blindly. Only about 8.5% of people say they always trust the AI's answer. In fact, 21% say they never trust them. That means around 82% of us look at those AI results with a healthy dose of doubt.
- But… we don't check. And here’s the kicker, the part I feel in my bones: even though we know it’s sketchy, most of us do not bother to verify the information. Over 40% of users rarely or never click through to the source links. We know we can’t fully trust it, but we skip the fact-check anyway.
So what gives? It seems to boil down to one thing: convenience often trumps caution. The AI gives us a fast, simple answer. And even if a little voice in our head says it might not be 100% right, if it looks plausible, a lot of us will just accept it. As one editor put it, the danger is that some AI answers are just "good enough" to get by, and we can’t rely on people to fact-check. We’ve all basically "chosen convenience"; we don’t trust it, but we use it because it’s easier.
And what's really wild is that most of us aren't even turning the feature off. In one poll, a majority of U.S. searchers said they’d leave AI Overviews enabled, despite their doubts. In fact, roughly 70% of us feel that Google search is as good as or better than before. Only about 22% think AI has made things worse. This suggests that, for all its flaws, the AI-powered experience hasn't driven most people away. Many of us find it "the same or improved," probably because it’s still fast... or maybe because we've just gotten really good at scrolling past the stuff we don't like.
The Vibe Shift for Websites: Fewer Clicks, More "Zero-Click" Searches
This is where things get a little scary for anyone who runs a website. One of the biggest impacts of AI Overviews has been a sharp drop in clicks to external sites. If the AI answers your question right there on the results page, why would you need to click anything else?
User behavior in the zero-click era: only 1% of users click links in AI summaries, while 34% leave Google to browse other sites. Source: DemandSage.
A July 2025 analysis from Pew Research confirms this isn't just a feeling; it's a dramatic shift:
- When an AI summary shows up, we only click on a regular search result about 8% of the time. That’s roughly half the click-through rate of searches without an AI summary.
- And clicking the links inside the AI summary? Forget about it. A tiny 1% of search visits with an AI Overview included a click on a cited source. Almost no one uses the AI’s footnotes to visit the original content.
- In fact, we’re more likely to just end our search entirely. Pew found that in about 26% of searches with an AI result, we don't do anything else. The search just... stops. This is the classic zero-click search, where Google answers the question and the user never leaves the page.
If you're an SEO or a publisher, these numbers are alarming. The AI Overview sits at the top of the page and just siphons away traffic. Publishers are already blaming these AI answers for their traffic declines, and the data backs them up. These summaries are designed to "keep people on Google longer" instead of sending them out to the open web. And it's happening a lot—about 1 in 5 Google searches in the U.S. now triggers one of these summaries, and Google has even started putting ads inside them.
And here's the really cheeky part. Google’s AI often pulls its information from the very sites we would have clicked on anyway. The most frequently cited sources are sites like Wikipedia, YouTube, and Reddit. One study found Reddit alone made up about 21% of all sources. So, in a way, Google's AI is just trying to give you the highlights from a Reddit thread without you having to actually go there. It might save you a click, but it also cuts out the need to visit these communities and publisher sites, starving them of traffic. Google is shifting from being a gateway to other websites to being an answer provider in its own right, which raises questions about how it competes with the very sites it draws from.
The "AI Slop" Backlash is Real
Given all this, it’s no surprise that a lot of us are getting tired of AI-generated junk in general. We may be seeing the start of a real "AI backlash" online—where users and even brands are pushing back against the flood of AI content and demanding a return to something more human and trustworthy. The data points to a real shift in how we feel:
- We don't want more of it. When asked about the future, only about 21.8% of people said they want to see more AI-generated content. In contrast, nearly three-quarters of us want to see the same amount or less. In fact, almost half said they’d prefer "less" or "much less." The novelty has officially worn off.
- We actively avoid it. Get this: over 50% of users say they are less likely to engage with content if they know it was AI-generated. That’s huge. We tend to skip over things labeled as AI-written because we assume they're lower quality, unoriginal, or untrustworthy. The term "AI slop" is now part of our vocabulary for a reason; we’re getting fatigued by the endless stream of low-quality, machine-generated spam.
- We're getting more concerned. Public wariness about AI's impact has grown. By late 2023, about 52% of U.S. adults said they were more concerned than excited about AI in daily life—up from just 38% before ChatGPT exploded. After the initial hype, we're starting to get more critical. We’ve seen it in pop culture, with artists and writers protesting, and with apps that face user backlash for going all-in on AI. Some brands are even proudly advertising their content as "human-made" now.
This all points to a potential rebalancing. It feels like technology may have overreached by injecting AI everywhere, and now we consumers are asking for a human touch and some reliability to be brought back. In the world of Google Search, this backlash means we're skeptical of the AI summaries and have a new appreciation for accurate, verified, human-curated information. We don't necessarily want to ditch convenience entirely, but we do want AI to dial it back a bit. When tech pushes too far, we start looking for real value again: content that's correct, well-sourced, and made by actual experts.
For anyone in content marketing or SEO, this is a huge insight. If over half your audience might just avoid content that looks like it was written by a machine, then doubling down on human quality—E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness)—is more critical than ever. It's about providing the kind of unique, high-value stuff that stands out from the AI fluff. In short, content that proves its human value will be in demand as this backlash grows.
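One practical way to make that human expertise visible to machines (and to whatever ends up citing you in a summary) is schema.org Article markup with a real, named author. Here's a minimal Python sketch of that idea; the page URL, author name, and bio link are placeholder values, and the markup simply declares who wrote the piece rather than promising any ranking boost:

```python
import json

def build_article_jsonld(headline, url, author_name, author_title, profile_url, date_published):
    """Build a schema.org Article JSON-LD block that makes the human
    author and their credentials explicit (placeholder values below)."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "url": url,
        "datePublished": date_published,
        "author": {
            "@type": "Person",
            "name": author_name,
            "jobTitle": author_title,   # the author's role or credentials
            "url": profile_url,         # link to a real author bio page
        },
    }

if __name__ == "__main__":
    data = build_article_jsonld(
        headline="How We Tested 12 Coffee Makers for Three Months",  # hypothetical example
        url="https://example.com/coffee-maker-review",
        author_name="Jane Doe",
        author_title="Product reviewer",
        profile_url="https://example.com/authors/jane-doe",
        date_published="2025-01-15",
    )
    print(json.dumps(data, indent=2))
```

Paste the resulting JSON into a `<script type="application/ld+json">` tag on the page. It's not a magic bullet, but it gives both crawlers and readers a verifiable trail back to the human behind the content.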
Why We’re All Adding "Reddit" to Our Searches
You’ve done it, I’ve done it. That little trick that highlights our craving for human answers: appending the word “reddit” to our Google queries. We type in "best coffee maker reddit" because we want to hear from real people, not a search engine. This has become so common that even Google had to take notice and start showing more forum content by default.
Over the past year, Reddit threads have become way more visible on Google, and Reddit’s own traffic has surged. By mid-2024, Reddit saw a 39% year-over-year traffic increase (which is massive for a nearly 20-year-old site), mostly from Google searches. In April 2024 alone, Google sent 1.3 billion visits to Reddit. The Wall Street Journal even noted that Reddit results were effectively "taking over" Google, because we all figured out that adding "reddit" gets you better, more satisfying answers.
So why are we all doing this? Simply put, we trust the human perspective. As one analysis put it, we add “reddit” to find information "from actual people, rather than articles crafted around SEO keywords... or, more recently, AI-generated copy of questionable value." Reddit gives you the unvarnished opinions, personal experiences, and niche expertise of real individuals. A thread about a product will have pros and cons from people who actually own it, which feels way more trustworthy than a generic review site. You can learn more about how to use Reddit for SEO here.
Google noticed and made forums easier to find, adding a dedicated "Discussions & Forums" tab. And an astonishing 97% of the time, Reddit now appears in Google results for product review queries, often outranking the official company sites. For content creators, it's frustrating to work hard to meet Google's guidelines, only to be beaten by a random forum post.
From Google's side, they say they're just giving people what they ask for. But there's also a cynical take: Google signed a deal, reportedly worth about $60 million a year, to license Reddit’s data for AI training. Some speculate that Google now has a vested interest in sending more traffic to Reddit to get more fresh data for its AI models. Google denies this, of course, but whatever the reason, the outcome is clear: user-generated communities are now a massive part of the search landscape.
Of course, this shift isn't perfect. Reddit’s unprecedented SEO boom has led to a rise in spam and low-quality content on the platform, as opportunists try to cash in. It’s the ultimate irony: we fled to Reddit to escape "AI slop," and now Reddit has to fight to keep from being overrun by it.
Overall, though, the rise of Reddit makes one thing crystal clear: authentic user content is in demand. We're all basically saying: "If I ask a question, I’d rather hear from other humans who’ve asked the same thing, not just a faceless AI or a salesy article." It’s a backlash against both overly optimized corporate content and untrustworthy AI answers. It’s a reminder that building a real community can be just as valuable as publishing a polished blog post.
AI Search in Europe: A Slower, More Skeptical Rollout
While AI-driven search launched first in the US, it's now making its way to Europe—and the reception there is telling. European users watched the US rollout with a mix of curiosity and caution, in part because of the EU's stricter regulations. In fact, Google held back AI Overviews in the EU for months, likely over compliance concerns. It wasn’t until October 2025 that the new AI Search Mode was activated in Germany, the first EU country to get it. That long delay shows just how careful both regulators and Google were being.
Now that it's arriving, the pushback is already starting at an institutional level. In July 2025, a group of European publishers filed an antitrust complaint against Google’s AI Overviews, arguing that they unfairly siphon traffic and revenue. The complaint claims Google is "misusing web content" and causing "significant harm to publishers" by scraping their content for AI answers. The core of their argument is that they have no ability to opt out of this scraping unless they remove themselves from Google search entirely—a stark choice they say is anti-competitive.
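To make the publishers' dilemma concrete: the only crawler-level lever Google currently offers for its AI products is the Google-Extended token in robots.txt, which, as far as Google has documented it, controls whether content is used to train its Gemini models, not whether a page is crawled for Search or surfaced in AI Overviews. Here's a small Python sketch, using only the standard library and a hypothetical publisher robots.txt, that shows what such a rule actually blocks:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher robots.txt: block the AI-training token,
# but keep Googlebot (and therefore Search indexing) allowed.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

page = "https://publisher.example/long-form-investigation"  # placeholder URL

for agent in ("Google-Extended", "Googlebot"):
    print(f"{agent:16s} allowed: {parser.can_fetch(agent, page)}")

# Expected output:
#   Google-Extended  allowed: False  -> withheld from Gemini training
#   Googlebot        allowed: True   -> still crawled for Search (and its AI features)
# Blocking Googlebot itself would keep the AI away, but would also drop
# the site from regular search results entirely.
```

That asymmetry is the heart of the complaint: the only robots.txt rule that keeps content out of Google's AI features entirely is one that also removes the site from ordinary search results.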
European authorities are listening. This suggests that Europe might enforce changes to how AI search works, whether that means requiring opt-outs for publishers or mandating clearer sourcing. Google, for its part, claims its AI features "create new opportunities," but that's not calming the fears of creators who feel they're being bypassed.
European internet users are also generally more privacy-conscious and might be less tolerant of opaque AI systems. If the AI backlash is going to gain serious ground anywhere, it could be here. For now, Google is being cautious—the feature is not yet the default in Europe, and users have to opt in. This more measured rollout in the EU might lead to a better balance from the start, especially if users there lean even harder toward wanting "the real, human results."
In short, Europe's experience could shape a more balanced future for AI in search. If the U.S. was the test that revealed the problems, the EU might be where some solutions are forged.
So, Where Do We Go From Here? Balancing AI Convenience with Human Trust
Google's shift to AI-powered search has fundamentally changed how we find information. The convenience is real—quick answers without clicking—but it comes at the cost of trust, accuracy, and the health of the wider web. We're all noticing this tradeoff. We're not entirely comfortable with it, and while we're adapting, a lot of us are simultaneously saying we don’t want the internet to become more AI-saturated than it already is.
This feels like the start of a mini-revolution in search. It’s an AI backlash that isn't about rejecting the technology outright, but about demanding it knows its place. We seem to be saying: "Help me, but don’t mislead me. Save me time, but don’t take away my ability to find authentic content. And if I can’t trust you, I’ll go find an answer from humans instead." And so we get the rise of Reddit searches and the general distrust of anything labeled "AI content."
For Google, the challenge is finding the right balance. It needs to make its AI summaries accurate and ensure they support the web ecosystem, not cannibalize it. If it can't, users will just keep finding workarounds.
For those of us in SEO and content marketing, the path forward is becoming clearer. The search landscape isn't going back to the "good old days" of ten blue links, but we can't just blindly embrace AI either. We have to do both: optimize for AI-driven search visibility so our content gets cited in summaries, and keep creating the kind of high-quality, human-centered content that people actively seek out. It’s about making sure that when the pendulum swings back toward valuing accuracy and humanity, our content is right there at the top.
Google has changed, and we're changing with it. The next chapter of digital marketing is all about rebuilding trust and value in an AI-heavy world. The creators who can offer real expertise, show their authenticity, and build communities will have the edge. It feels like we all miss the "old Google," the one that led you to the best answer, not just gave you an answer. It’s up to all of us to bring that value back.
In the end, technology usually corrects itself based on what people actually need. If AI search pushed a bit too far, this growing backlash might just be the thing that nudges it back in the right direction. We could end up with a hybrid that gives us the best of both worlds: AI for speed, humans for depth and trust. Until then, keep an eye on these trends. They're all signals that, in search at least, the human touch still matters, and it might just be making a comeback.