It's rude to show AI output to people
-
Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up
That's not true. For starters, you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you, and you can see that it checks out without needing something else to back it up.
Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.
with no ads
For now.
Eventually it becomes a search engine that replaces the ads on the source material with its own ads, thus choking out the source's funding and taking it for itself.
-
Personally, I don't mind the "I asked AI and it said..." because I can choose to ignore anything that follows.
Yes, I can judge the sender. But consent is still in my hands.
Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.
Like the shift to smart phones, this changes the social landscape.
I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.
-
Blindsight mentioned!
The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.
This has been my biggest problem with it. It places a cognitive load on me that wasn't there before, having to cut through the noise.
Is blindsight worth a read? It seemed interesting from the brief description.
-
This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.
Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.
We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.
I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.
But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.
Old reddit would have annihilated that post.
-
And what happens when ~~mechahitler~~ the next version of Grok, or whatever AI hosted by a large corporation that only has the interest of capital gains, comes out with unannounced injected prompt poisoning that doesn't produce the quality output you've been conditioned to expect?
These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true and what is obviously a ~~ridiculous mess of computer-generated text that is no smarter than your phone keyboard's word suggestions~~ AI hallucination.
Trying to soak up all the information generated by AI on a topic without prior knowledge may easily leave you understanding nothing more than you did before, and may give you unrealistic confidence in what is essentially misinformation. And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.
And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.
This. I've had AI provide me vendor documentation that said the opposite of what the AI claimed the doc says.
-
This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.
Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.
We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.
I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.
But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.
Treating an LLM like a novelty oracle seems okay-ish to me; it's a bit like predicting who will win the game by seeing which bowl a duck will eat from. Except minus the cute duck, of course. At least nobody will take it too seriously, and those that do will probably see why they shouldn't.
Still annoying though.
-
Is blindsight worth a read? It seemed interesting from the brief description.
Oh yes, I think Peter Watts is a great author. He's very good at tackling high-concept ideas while also keeping it fun and interesting. Blindsight has a vampire in it, in case there wasn't already enough going on for you.
Unrelated to the topic at hand, I also highly recommend Starfish by him. It was the first novel of his I read. A dark, psychological thriller about a bunch of misfits working a deep sea geothermal power plant and how they cope (or don't) with the situation at hand.
-
Hey! ChatGPT can be creative if you ask it to roast fictional characters... somewhat!
It's still not creative. It's just rehashing things it heard before. It's like if a comedian just stole jokes from other comedians but changed the names of the people. That's not creative, even if it's slightly different from anything anyone's seen before.
-
Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up
That's not true. For starters, you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you, and you can see that it checks out without needing something else to back it up.
Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.
"With no ads"
Google used to have no ads.
And especially with how much it costs to run even today's LLMs, let alone tomorrow's... enshittification is only a matter of time.
-
...without informed consent.
It's rude to show AI output to people | Alex Martsinovich
Feeding slop is an act of war
(distantprovince.by)
Here's a question regarding the informed consent part.
The article gives the example of asking whether the recipient wants the AI's answer shared.
"I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want."
Do you (I mean generally people reading this thread, not OP specifically) think Lemmy's spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.
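For anyone unfamiliar with it, Lemmy's spoiler formatting (as supported by the main Lemmy web UI and most clients - details may vary by client) looks roughly like this, with a visible label and a collapsed body the reader must click to expand:

```
::: spoiler AI-generated answer (ChatGPT)
The model's output goes here, hidden until the reader chooses to expand it.
:::
```

Since the label can state up front that the content is AI-generated, opening the spoiler is at least an explicit opt-in by the reader.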
-
I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.
And now googling will just result in "I asked AI and it said X", since the first thing you get is the AI summary shit. A friend of mine does this constantly: we'll be in a Discord call, somebody asks a question, and he'll google it and repeat the AI slop back as fact.
Half the time it's wrong.
-