It's rude to show AI output to people

Technology

  • I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    That's literally the point of them. They're supposed to generate what the most likely result would be. They aren't supposed to be creative or anything like that. They're supposed to be generic.

    Hey! ChatGPT can be creative if you ask it to roast fictional characters... somewhat!

  • The worst is being in a technical role, and having project managers and marketing people telling me how it is based on some ChatGPT output.

    Like shut the fuck up please, you literally don’t know what you are talking about

    Sadly we had that problem before AI too... "Some dude I know told me this is super easy to do"

  • ..without informed consent.

    You're damn right, if somebody puts slop in my face I get visibly aggressive.

  • I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

    You should pretty much assume everything that a chatbot says could be false to a much higher degree than human-written content, making it effectively useless for your stated purpose.

  • Every now and then I see a guy barging into a topic bringing nothing other than "I asked [some AI service] and here's what it said", followed by 3 paragraphs of AI-generated gibberish. And then when it's not well received they just don't seem to understand.

    It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour trying to sort out from where it got its information, whether it represented it well, or even whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go "What's the problem, is any of that wrong?"... Dude, the problem is you have no fucking idea if it's wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

    I was trying to help onboard a new lead engineer and I was working through debugging his Caddy config on Slack. I'm clearly putting in effort to help him diagnose his issue and he posts "I asked chatgpt and it said these two lines need to be reversed", which was completely false (Caddy has a system for reordering directives; see the sketch at the end of this comment) and honestly just straight up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn't welcome and can fuck right off.

    The actual issue? He forgot to restart his development server. 😡
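
    For context on that aside: Caddy v2 sorts Caddyfile directives by its own built-in standard order, not by the order the lines appear in, so "reversing two lines" is usually a no-op. Reordering is done explicitly in the global options block. A minimal sketch, with a made-up site block (the host and paths are illustrative only):

    ```caddyfile
    {
        # Directive order is declared here, in the global options block,
        # not by which line happens to come first in a site block.
        order respond before rewrite
    }

    example.com {
        # Without the "order" override above, Caddy runs these in its
        # built-in standard order (rewrite before respond), no matter
        # how the two lines are arranged. Swapping them changes nothing.
        respond /health "OK" 200
        rewrite /old /new
    }
    ```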

  • ..without informed consent.

    I work in a Technical Assistance Center for a networking company. Last night, while working, I got a ticket where the person kept sending troubleshooting summaries they asked ChatGPT to write.

    Speedrun me not reading your ticket any%.

  • Sadly we had that problem before AI too... "Some dude I know told me this is super easy to do"

    Some dude vs. LLM. Fight!

  • ..without informed consent.

    I like the premise behind this.

    But how do we differentiate? Unless explicitly mentioned, it might be hard to tell the difference between an AI-generated message and a native human one.

    It's enough for the other side simply not to mention that the message is AI-generated to fool us for quite a while.

  • ..without informed consent.

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can't rely on proof-of-thought anymore.

    This is what makes AI so insidious. It's like email spam. It puts the burden on the reader to determine and sort ham from spam.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up (a sketch of that kind of check follows this exchange).

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    No. They don't actually use sources and can't tell you why they said what they said. This actually reverses cause and effect: the source is part of the inference, and since the model didn't, and in fact can't, go and read them, the output frequently contains both imaginary sources and sources which don't actually support the assertion.

    The ability to evaluate the source requires the same skills and information as producing the original text would have required, which means, perforce, that if you use ChatGPT to produce text you lack the skills to produce, you also lack the skills to evaluate it.

    You COULD go and read the primary sources and evaluate them in turn, but at that point it's not that quick anymore.
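
    A minimal sketch of the "evaluate it on its own merits" case from the reply above, with a hypothetical equation and a hypothetical claimed answer (not taken from any actual chatbot):

    ```python
    # Hypothetical: an AI claims that x = 3 solves 2x + 1 = 7.
    # The claim carries its own check; no external source is needed.
    claimed_x = 3
    assert 2 * claimed_x + 1 == 7, "the claimed solution does not check out"
    print("2*3 + 1 == 7, so the answer checks out")
    ```

    Note this only works for claims that are cheap to verify mechanically; for sourced factual claims, the rebuttal above still applies.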

  • I like the premise behind this.

    But how do we differentiate? Unless explicitly mentioned, it might be hard to tell the difference between an AI-generated message and a native human one.

    It's enough for the other side simply not to mention that the message is AI-generated to fool us for quite a while.

    You differentiate by only seeing what your acknowledged peers post and what their acknowledged peers post.

    That works for communities of many people, but it requires a global, transparent ID for the other user. The moment you're just interacting with some service on the Web, it stops being good enough.

    I actually like that, because it might mean that today's Web in its entirety is not good enough.

    The old Web - the "services yielding linked hypertext" one - yes, that can still work. A personal webpage is a person, and it's possible to devise a common way of checking that. Many services, some good and some not - there's a way to technically separate those too.

    An alternative to Usenet with global IDs for users and posts - yes, that works.

    But one platform-website for all interactions of a given kind, serving generic executable dynamic content - that is morally obsolete.

    If that happens, I'm going to donate to OpenAI and whoever else makes it happen. Well, maybe not much, but I am.

  • ..without informed consent.

    I am not sure the kind of people who think using the thieving bullshit slop machine is a fine thing to do can be trusted to have appropriate ideas about rudeness and etiquette.

  • Some dude vs. LLM. Fight!

    We all lose. Fatality!

  • ..without informed consent.

  • Personally, I don't mind the "I asked AI and it said...", because I can choose to ignore anything that follows.

    Yes, I can judge the sender. But consent is still in my hands.

    Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.

    Like the shift to smart phones, this changes the social landscape.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    with no ads

    For now.

    Eventually it becomes a search engine that replaces the ads on the source material with its own ads, thus choking out the source's funding and taking it for itself.

  • Personally, I don't mind the "I asked AI and it said...", because I can choose to ignore anything that follows.

    Yes, I can judge the sender. But consent is still in my hands.

    Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.

    Like the shift to smart phones, this changes the social landscape.

    I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.

  • Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn't there before, having to cut through the noise.

    Is Blindsight worth a read? It seemed interesting from the brief description.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP’s opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    Old reddit would have annihilated that post.

  • And what happens when MechaHitler, the next version of Grok, or whatever AI hosted by a large corporation that only has capital gains in mind, comes out with unannounced injected prompt poisoning that doesn't produce the quality output you've been conditioned to expect?

    These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true from what is obviously an AI hallucination: a ridiculous mess of computer-generated text that is no smarter than your phone keyboard's word suggestions.

    Trying to soak up all the information generated by AI on a topic without prior knowledge may easily end up with you not understanding anything more than you did before, and possibly give you unrealistic confidence in what is essentially misinformation. And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.

    And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.

    This. I've had an AI provide me vendor documentation that said the opposite of what the AI claimed the doc said.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP’s opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    Treating an LLM like a novelty oracle seems okay-ish to me; it's a bit like predicting who will win the game by seeing which bowl a duck will eat from. Except minus the cute duck, of course. At least nobody will take it too seriously, and those that do will probably see why they shouldn't.

    Still annoying though.