
It's rude to show AI output to people

Technology
  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    On the second part: that's only half true. Yes, there are LLMs out there that search the internet, summarize, and reference some websites they find.

    However, it's not rare for them to add their own "info" that isn't in the given source at all. If you use it to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not be relied on at all if it's critical, even when it includes a nice citation.
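    The "evaluate it on its own merits" claim in the quote does hold for narrowly checkable things like maths: a claimed answer can be verified by substitution without trusting any outside source. A minimal sketch in Python (the equation and the "AI-claimed" roots are invented for illustration):

```python
# Suppose a chatbot claims x = 3 and x = -0.5 solve 2x^2 - 5x - 3 = 0.
# No second source is needed to check this: substitute and see.

def f(x):
    return 2 * x**2 - 5 * x - 3

claimed_roots = [3, -0.5]
for r in claimed_roots:
    residual = f(r)
    print(f"f({r}) = {residual}")
    assert residual == 0, f"{r} is not a root"
```

    This only works for claims that carry their own proof of correctness; a factual summary has nothing to substitute back into, which is exactly the caveat about citations.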

  • What a coincidence, I was just reading sections of Blindsight again for an assignment (not directly related to its contents) and had a similar thought when re-parsing a section near the one in the OP — it's scary how closely the novel depicted something analogous to contemporary LLM output.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    And what happens when the next version of Grok (remember "MechaHitler"?), or whatever AI hosted by a large corporation interested only in capital gains, comes out with unannounced injected prompt poisoning that no longer produces the quality output you've been conditioned to expect?

    These AIs are good if you have a general grasp of whatever you're trying to find, because you can easily pick out what you know to be true and what is obviously an AI hallucination: a ridiculous mess of computer-generated text no smarter than your phone keyboard's word suggestions.

    Trying to soak up all the AI-generated information on a topic without prior knowledge may easily leave you understanding nothing more than you did before, and possibly give you unwarranted confidence in what is essentially misinformation. And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy and authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.

  • If only the biggest problem was messages starting "I asked ChatGPT and this is what it said:"

    A far bigger problem is people using AI to draft text and then posting it as their own. On social media like this, I can't count the number of comments I've encountered midway through an otherwise normal discussion thread, where I only clocked two paragraphs in that I was reading a chatbot's response. I feel like the deception has stolen time and brain cells from me for the moments spent reading and trying to derive meaning from it.

    And just this week I received an application from someone wanting to work in my office which was very clearly AI-generated. Obviously that person will not be offered any work. If you can't be bothered to write your own "why I want to work here" cover letter, then I can't be bothered to work with you.

    Have seen emails at work that were AI-generated, but they made no disclaimer. Then someone points out how wildly incorrect it was and they just say "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and used "LLMs are just stupid" as an excuse when it didn't fly.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    It gives you some links but in my experience what it says in the summary isn't always the same as what's in the link...

    Have seen emails at work that were AI-generated, but they made no disclaimer. Then someone points out how wildly incorrect it was and they just say "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and used "LLMs are just stupid" as an excuse when it didn't fly.

    In every business I've worked in, any email longer than a paragraph better have a summary and action items at the end or nobody is going to read it.

    In business time is money, email should be short and to the point.

  • This is a good post.

    Thinking about it some more, I don't necessarily mind if someone said "I googled it and..." then provided some self-generated summary of what they found which is relevant to the discussion.

    I wouldn't mind if someone did the same with an LLM response. But just as I don't want to read a copy-paste of ChatGPT results, I don't want to read someone copy-pasting search results with no human analysis.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don't have one. And even more to the point, it's so intellectually lazy that it just feels like karma farming. "Ya I have nothing to add but I do love me them updoots".

    I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.

    That's literally the point of them. They're supposed to generate what the most likely result would be. They aren't supposed to be creative or anything like that. They're supposed to be generic.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    If you have evaluated the statement for its correctness and relevance, then you can just own up to the statement yourself. There is no need to defer responsibility by prefacing it with “I asked [some AI service] and here’s what it said”. That is the point of the article that is being discussed, if you'd like to give it a read sometime.

  • I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    Ok, I didn't need you to act as a middleman and tell me what the LLM just hallucinated, I can do that myself.

    The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.

    When we ask questions on a public forum, we're looking to talk to people about their own experience and research through the lens of their own being and expertise. We're all capable of prompting an AI agent. If we wanted AI answers, we'd prompt an AI agent.

  • This is a good post.

    Thinking about it some more, I don't necessarily mind if someone said "I googled it and..." then provided some self-generated summary of what they found which is relevant to the discussion.

    I wouldn't mind if someone did the same with an LLM response. But just as I don't want to read a copy-paste of ChatGPT results, I don't want to read someone copy-pasting search results with no human analysis.

    If you're going to use an LLM, at least follow the links it provides to the source of what they output. You really need to check their work.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    I am speaking from experience.

    The latest example I encountered had a blatant logical inconsistency in its summary: a CVE that wasn't relevant to what was discussed, because it had been fixed years before the technology in question even existed. Someone pointed it out.

    The poster hadn't made the slightest effort to check what they posted; they just regurgitated it. It's not the reader's job to check the crap you've posted without any effort of your own.

  • This is a good post.

    Thinking about it some more, I don't necessarily mind if someone said "I googled it and..." then provided some self-generated summary of what they found which is relevant to the discussion.

    I wouldn't mind if someone did the same with an LLM response. But just as I don't want to read a copy-paste of ChatGPT results, I don't want to read someone copy-pasting search results with no human analysis.

    I have a few colleagues that are very skilled and likeable people, but have horrible digital etiquette (40-50 year olds).

    Expecting people to read regurgitated GPT summaries is the most obvious.

    But another one that bugs me just as much is sharing links with no annotation. Could be a small article or a long-ass report or white paper with 140 pages. Like, you expect me to bother reading it, but you can't be bothered to say what's relevant about it?

    I genuinely think it's well intentioned for the most part. They're just clueless about what makes for good digital etiquette.

  • The worst is being in a technical role, and having project managers and marketing people telling me how it is based on some ChatGPT output

    Like shut the fuck up please, you literally don’t know what you are talking about

    I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.

    That's literally the point of them. They're supposed to generate what the most likely result would be. They aren't supposed to be creative or anything like that. They're supposed to be generic.

    Hey! ChatGPT can be creative if you ask it to roast fictional characters... somewhat!

    The worst is being in a technical role, and having project managers and marketing people telling me how it is based on some ChatGPT output

    Like shut the fuck up please, you literally don’t know what you are talking about

    Sadly we had that problem before AI too... "Some dude I know told me this is super easy to do"

  • You're damn right, if somebody puts slop in my face I get visibly aggressive.

  • I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

    You should pretty much assume everything a chatbot says could be false, to a much higher degree than human-written content, making it effectively useless for your stated purpose.

  • Every now and then I see a guy barging into a topic bringing nothing but "I asked [some AI service] and here's what it said", followed by 3 paragraphs of AI-generated gibberish. And then when it's not well received they just don't seem to understand.

    It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour trying to sort out from where it got its information, whether it represented it well, or even whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go "What's the problem, is any of that wrong?"... Dude, the problem is you have no fucking idea if it's wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

    I was trying to help onboard a new lead engineer and was working through debugging his Caddy config on Slack. I'm clearly putting in effort to help him diagnose his issue and he posts "I asked ChatGPT and it said these two lines need to be reversed", which was completely false (Caddy has a system for reordering directives) and honestly just straight up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn't welcome and can fuck right off.

    The actual issue? He forgot to restart his development server. 😡
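    For what it's worth, Caddy does apply Caddyfile directives in a predefined standard order rather than the order they appear in the file, and the `order` global option overrides that ordering. A sketch of what that looks like (site address and directive pairing are illustrative, not the actual config from this story):

```
{
	# Caddy sorts HTTP handler directives by its built-in standard
	# order, not by their position in the file; to change relative
	# ordering, declare it explicitly here instead of moving lines.
	order respond before rewrite
}

example.com {
	rewrite /old /new
	respond /health 200
}
```

    In other words, if two directives really do run in the wrong order, the fix is an explicit `order` (or a `route` block), not swapping lines in the file.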
