
It's rude to show AI output to people

Technology
  • Yes. I am getting so sick and tired of people asking me for help then proceeding to rain unhelpful suggestions from their LLM upon me while I'm trying to think through their problem. You wouldn't be asking for help if that stuff was helping you!

  • Every now and then I see a guy barging into a topic bringing nothing other than "I asked [some AI service] and here's what it said", followed by three paragraphs of AI-generated gibberish. And then, when it's not well received, they just don't seem to understand why.

    It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour trying to sort out from where it got its information, whether it represented it well, or even whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go "What's the problem, is any of that wrong?"... Dude, the problem is you have no fucking idea if it's wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

  • If only the biggest problem was messages starting "I asked ChatGPT and this is what it said:"

    A far bigger problem is people using AI to draft text and then posting it as their own. On social media like this, I can't count the number of comments I've encountered midway through an otherwise normal discussion thread, only clocking two paragraphs in that I'm reading a chatbot's response. I feel like the deception has stolen time and braincells from me for the moments spent reading and attempting to derive meaning from it.

    And just this week I received an application from someone wanting work in my office which was very clearly AI generated. Obviously that person will not be offered any work. If you can't be bothered to write your own "why I want to work here" cover letter, then I can't be bothered to work with you.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don't have one. And even more to the point, it's so intellectually lazy that it just feels like karma farming. “Ya, I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

  • Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    I guess it has some tabloid-like value. Which, if that counts as value, says a lot about the other party.

  • I'm amused by the 14 oxygen-wasting NPCs who are in this picture and didn't like it.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.
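    A minimal sketch of the "evaluate it on its own merits" idea, in Python (the equation and the claimed answer are made up for illustration): if an AI claims x = 5 solves 3x + 5 = 20, substituting the claimed value back into the equation verifies it without trusting the model's reasoning at all.

```python
# Hypothetical example: an AI claims x = 5 solves 3x + 5 = 20.
# Substituting the claimed solution back into the equation verifies it
# independently; no trust in the model's reasoning is required.
claimed_x = 5
lhs = 3 * claimed_x + 5
assert lhs == 20, "the AI's answer does not check out"
print(f"3*{claimed_x} + 5 = {lhs}")  # → 3*5 + 5 = 20
```

    The same principle applies to anything with a cheap verification step: running a code snippet, checking a proof line by line, or plugging a root back into a polynomial.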

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    On the second part. That is only half true. Yes, there are LLMs out there that search the internet and summarize and reference some websites they find.

    However, it's not rare for them to add their own "info" even though it's not in the given source at all. If you use it to get sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not be relied on at all if it's critical, even if it comes with a nice citation.

  • What a coincidence, I was just reading sections of Blindsight again for an assignment (not directly related to its contents) and had a similar thought when re-parsing a section near the one in the OP — it's scary how closely the novel depicted something analogous to contemporary LLM output.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    And what happens when MechaHitler, the next version of Grok, or whatever AI hosted by a large corporation that only has the interest of capital gains in mind, comes out with unannounced injected prompt poisoning that doesn't produce the quality output you've been conditioned to expect?

    These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true and what is obviously an AI hallucination: a ridiculous mess of computer-generated text no smarter than your phone keyboard's word suggestions.

    Trying to soak up all the information generated by AI on a topic without prior knowledge may easily end up with you understanding nothing more than you did before, and possibly give you unrealistic confidence in what is essentially misinformation. And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.

  • A far bigger problem is people using AI to draft text and then posting it as their own.

    Have seen emails at work that were AI generated, but they made no disclaimer. Then someone points out how wildly incorrect it was and they just say "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and used "LLMs are just stupid" as an excuse when it didn't fly.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    It gives you some links but in my experience what it says in the summary isn't always the same as what's in the link...

  • Have seen emails at work that were AI generated, but they made no disclaimer. Then someone points out how wildly incorrect it was and they just say "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and used "LLMs are just stupid" as an excuse when it didn't fly.

    In every business I've worked in, any email longer than a paragraph better have a summary and action items at the end or nobody is going to read it.

    In business, time is money; email should be short and to the point.

  • This is a good post.

    Thinking about it some more, I don't necessarily mind if someone said "I googled it and..." then provides some self generated summary of what they found which is relevant to the discussion.

    I wouldn't mind if someone did the same with an LLM response. But just like I don't want to read a copy and paste of ChatGPT results, I don't want to read someone copy/pasting search results with no human analysis.

  • I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    That's literally the point of them. They're supposed to generate what the most likely result would be. They aren't supposed to be creative or anything like that. They're supposed to be generic.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    If you have evaluated the statement for its correctness and relevance, then you can just own up to the statement yourself. There is no need to defer responsibility by prefacing it with “I asked [some AI service] and here’s what it said”. That is the point of the article that is being discussed, if you'd like to give it a read sometime.

  • I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    Ok, I didn't need you to act as a middleman to tell me what the LLM just hallucinated; I can do that myself.

    The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.

    When we ask questions on a public forum, we're looking to talk to people about their own experience and research through the lens of their own being and expertise. We're all capable of prompting an AI agent. If we wanted AI answers, we'd prompt an AI agent.

  • I wouldn't mind if someone did the same with an LLM response. But just like I don't want to read a copy and paste of ChatGPT results, I don't want to read someone copy/pasting search results with no human analysis.

    If you're going to use an LLM, at least follow the links it provides to the sources of what it output. You really need to check its work.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    I am speaking from experience.

    The latest example of that I encountered had a blatant logical inconsistency in its summary: a CVE that wasn't relevant to what was discussed, because it had been fixed years before the technology in question existed. Someone pointed it out.

    The poster hadn't made the slightest effort to check what they posted; they just regurgitated it. It's not the reader's job to check the crap you've posted without the slightest effort.

  • I have a few colleagues who are very skilled and likeable people, but who have horrible digital etiquette (40-50 year olds).

    Expecting people to read regurgitated GPT summaries is the most obvious.

    But another one that bugs me just as much is sharing links with no annotation. Could be a small article or a long-ass report or white paper with 140 pages. Like, you expect me to bother reading it, but you can't be bothered to say what's relevant about it?

    I genuinely think it's well intentioned for the most part. They're just clueless about what makes for good digital etiquette.
