Google AI Overview is just affiliate marketing spam now
-
Or if you are set on using AI Overviews to research products, then be intentional about asking for negatives and always fact-check the output as it is likely to include hallucinations.
If it is necessary to fact check something every single time you use it, what benefit does it give?
-
An advertising company are advertising at you? Whatever next?
-
Not surprising
-
Google has been literally unusable for search for years.
It's been getting worse and worse over time for whatever reasons, but the AI summary at the top now can be way off. I had a result the other day where, at a quick glance (all I give it as I scroll down to the actual results), I laughed because I could tell it was totally wrong, and I couldn't even figure out where it got that answer from. It wasn't in any of the results I found.
-
Or if you are set on using AI Overviews to research products, then be intentional about asking for negatives and always fact-check the output as it is likely to include hallucinations.
If it is necessary to fact check something every single time you use it, what benefit does it give?
None. It's made with the clear intention of substituting itself for actual search results.
If you don't fact-check it, it's dangerous and/or a thinly disguised ad. If you do fact-check it, it brings absolutely nothing that you couldn't find on your own.
Well, except hallucinations, of course.
-
garbage in, garbage out.
-
Or if you are set on using AI Overviews to research products, then be intentional about asking for negatives and always fact-check the output as it is likely to include hallucinations.
If it is necessary to fact check something every single time you use it, what benefit does it give?
None. None at all.
-
The part about Reddit communities being built now that contain only AI questions, AI answers, and links to products is what I figured Spez wanted when he IPO'd. And with AI writing convincing text, it's so easy!
-
Was trying to find info about a certain domain the other day; damn near impossible just because all the results I could find were the same type of AI slop.
-
It's been getting worse and worse over time for whatever reasons, but the AI summary at the top now can be way off. I had a result the other day where, at a quick glance (all I give it as I scroll down to the actual results), I laughed because I could tell it was totally wrong, and I couldn't even figure out where it got that answer from. It wasn't in any of the results I found.
My favourite (inconsequential, but incredibly stupid) automatic AI question/answer from Google:
I was looking for German playwright Brecht's first name. The answer was Bertolt. It's a pretty simple question, so that at least was correct.
However, among the initial "frequently asked questions", one was "What is the name of the Armored Titan?"
Somehow Google decided it would randomly answer a question about manga/anime Attack on Titan in there. The only link between that question and my query is the answer, Bertolt (so of course, it wasn't in my query). Because there's a guy called Bertolt too in that story.
By the way, Attack on Titan's Bertolt is not the Armored Titan.
-
Don’t bother asking Google if a product is worth it; it will likely recommend buying whatever you show interest in—even if the product doesn’t exist.
This seems like a general problem with these LLMs. Sometimes when I'm programming I ask the AI what it thinks about how I propose to approach some design issue or problem. It pretty much always encourages me to do what I proposed to do, and tells me it's a good approach. So I'm using it less and less because it seems the LLMs are encouraged to agree with the user and sound positive all the time. I'm fairly sure my ideas aren't always good. In the end I'll be discovering the pitfalls for myself with or without time wasted asking the LLM.
The same thing seems to happen when people try to use an LLM as a therapist. The LLM is overly encouraging and agreeable, and it sends people down deep rabbit holes of delusion.
-
This isn't much of a change. Before AI, it was SEO slop. Search for product reviews and you get a bunch of pages "reviewing" products by copying the Amazon description and images.
-
We've come full circle
-
The part about Reddit communities being built now that contain only AI questions, AI answers, and links to products is what I figured Spez wanted when he IPO'd. And with AI writing convincing text, it's so easy!
The Reddit team is developing a bot that can post “this”.
They’re building a datacenter full of Nvidia hardware for it.
-
Or if you are set on using AI Overviews to research products, then be intentional about asking for negatives and always fact-check the output as it is likely to include hallucinations.
If it is necessary to fact check something every single time you use it, what benefit does it give?
It might be able to give you tables or otherwise collated sets of information about multiple products etc.
I don't know if Google does, but LLMs can. Also do unit conversions. You probably still want to check the critical ones. It's a bit like using an encyclopedia or a catalog except more convenient and even less reliable.
-
garbage in, garbage out.
Spam in, spam out, profit in the middle.
-
It might be able to give you tables or otherwise collated sets of information about multiple products etc.
I don't know if Google does, but LLMs can. Also do unit conversions. You probably still want to check the critical ones. It's a bit like using an encyclopedia or a catalog except more convenient and even less reliable.
Google had a feature for converting units way before the AI boom and there are multiple websites that do conversions and calculations with real logic instead of LLM approximation.
It is more like asking a random person, who will answer whether or not they actually know the right answer. An encyclopedia or a catalog at least has some time-frame context for when it was published.
Putting the data into tables and other formats isn't helpful if the data is wrong!
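To illustrate the point about "real logic instead of LLM approximation": a conventional unit converter is just a lookup table of exact factors and a couple of arithmetic operations, so it gives the same correct answer every time. A minimal sketch (the factor table here is a hypothetical illustration, not any particular site's implementation):

```python
# Deterministic unit conversion: fixed factors to a base unit (metres),
# then one multiply and one divide. No model, no approximation drift.
TO_METRES = {
    "m": 1.0,
    "km": 1000.0,
    "mi": 1609.344,
    "ft": 0.3048,
}

def convert(value: float, src: str, dst: str) -> float:
    """Convert `value` from unit `src` to unit `dst` via the base unit."""
    return value * TO_METRES[src] / TO_METRES[dst]

print(convert(5, "km", "mi"))  # ~3.107 miles, identical on every run
```

An LLM producing the same answer token by token can be close, but nothing guarantees it lands on exactly this number twice, which is the whole complaint.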
-
Seems useful until you notice it's just copied from Wikipedia.
-
Or if you are set on using AI Overviews to research products, then be intentional about asking for negatives and always fact-check the output as it is likely to include hallucinations.
If it is necessary to fact check something every single time you use it, what benefit does it give?
It hasn't stopped anyone from using ChatGPT, which has become the biggest competitor Google search has faced since the inception of web search.
So yes, it's dumb, but they kind of have to do it at this point. And they need everyone to know it's available from the site they're already using, so they push it on everyone.
-
This is crazy. It means Google and Reddit are shifting content from other websites to Reddit, under corporate control. It seems like Google has firmly entered the unsustainable profit-maximization stage, where they increase the share of profit they take from their content sources by redirecting less and less traffic to them while still showing that content to Google users.