
It's rude to show AI output to people

Technology
  • Sadly we had that problem before AI too... "Some dude I know told me this is super easy to do"

    Some dude vs. LLM. Fight!

  • ..without informed consent.

    I like the premise behind this.

    But how do we differentiate? Unless it's explicitly mentioned, it might be hard to tell the difference between an AI message and a genuine human one.

    It's enough for the other side not to mention the message is AI-generated to fool us for quite a while.

  • ..without informed consent.

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can't rely on proof-of-thought anymore.

    This is what makes AI so insidious. It's like email spam. It puts the burden on the reader to determine and sort ham from spam.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    No. They don't actually use sources and can't tell you why they said what they said. This actually reverses cause and effect: the cited source is part of the inference, and since the model didn't (and in fact can't) go and read them, the output frequently contains both imaginary sources and sources which don't actually support the assertion.

    The ability to evaluate the source requires the same skills and information as producing the original text would have, which means, perforce, that if you use ChatGPT to produce text you lack the skills to produce, you also lack the skills to evaluate it.

    You COULD go and read the primary sources and evaluate them in turn, but at that point it's not that quick.
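
    The "you can see that it checks out" claim above can be made concrete: a claimed answer can sometimes be verified mechanically, without trusting the model at all. A minimal sketch in Python (the equation and the "AI answer" are made up for illustration):

    ```python
    # Independently verify a claimed AI answer by substitution.
    # Hypothetical example: the AI claims x = 3 and x = -2 solve x^2 - x - 6 = 0.
    def is_root(x: float) -> bool:
        """Check whether x satisfies x^2 - x - 6 = 0."""
        return x * x - x - 6 == 0

    claimed_roots = [3, -2]  # the alleged AI answer
    print(all(is_root(x) for x in claimed_roots))  # prints True
    ```

    The check stands on its own merits: either the substitution works out or it doesn't, regardless of where the answer came from. The counter-argument below is that most claims aren't like equations, and checking them needs the very skills the AI was used to replace.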

  • I like the premise behind this.

    But how do we differentiate? Unless it's explicitly mentioned, it might be hard to tell the difference between an AI message and a genuine human one.

    It's enough for the other side not to mention the message is AI-generated to fool us for quite a while.

    You differentiate by only seeing what your acknowledged peers post and what their acknowledged peers post.

    That works for communities of many people, but it requires a global, transparent ID for the other user. The moment you're interacting with a service on the Web instead, that stops being good enough.

    I actually like that, because that might mean that today's Web in its entirety is not good enough.

    The old model of "services yielding linked hypertext" - yes. A personal webpage is a person, and it's possible to devise a common way of checking that. Many services, some good and some not - there's a way to technically separate those too.

    An alternative to Usenet with global IDs for users and posts - yes.

    But one platform-website for all interactions of a given kind, with generic executable dynamic contents - morally obsolete.

    If that happens, I'm going to donate to OpenAI and whoever else makes it happen. Well, maybe not much, but I am.

  • ..without informed consent.

    I am not sure the kind of people who think using the thieving bullshit slop machine is a fine thing to do can be trusted to have appropriate ideas about rudeness and etiquette.

  • Some dude vs. LLM. Fight!

    We all lose. Fatality!

  • ..without informed consent.

    Personally, I don't mind the "I asked AI and it said..." Because I can choose to ignore anything that follows.

    Yes, I can judge the sender. But consent is still in my hands.

    Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.

    Like the shift to smart phones, this changes the social landscape.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    with no ads

    For now.

    Eventually it becomes a search engine that replaces the ads on the source material with its own ads, thus choking out the source's funding and taking it for itself.

  • Personally, I don't mind the "I asked AI and it said..." Because I can choose to ignore anything that follows.

    Yes, I can judge the sender. But consent is still in my hands.

    Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.

    Like the shift to smart phones, this changes the social landscape.

    I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.

  • Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn't there before, having to cut through the noise.

    Is blindsight worth a read? It seemed interesting from the brief description.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    Old reddit would have annihilated that post.

  • And what happens when the next version of Grok ("mechahitler"), or whatever AI hosted by a large corporation with only capital gains in mind, comes out with unannounced injected prompt poisoning and stops producing the quality output you've been conditioned to expect?

    These AIs are good if you already have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true from what is obviously an AI hallucination: a ridiculous mess of computer-generated text no smarter than your phone keyboard's word suggestions.

    Trying to soak up all the information generated by AI on a topic without prior knowledge may easily end with you understanding nothing more than you did before, and possibly give you unwarranted confidence in what is essentially misinformation. And just because an AI pulls up references: unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating about where it got the information it's giving you.

    And just because an AI pulls up references: unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating about where it got the information it's giving you.

    This. I've had the AI provide me vendor documentation that said the opposite of what it says the doc says.

  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP's opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    Treating an LLM like a novelty oracle seems okay-ish to me; it's a bit like predicting who will win the game by seeing which bowl a duck will eat from. Except minus the cute duck, of course. At least nobody will take it too seriously, and those that do will probably see why they shouldn't.

    Still annoying though.

  • Is blindsight worth a read? It seemed interesting from the brief description.

    Oh yes, I think Peter Watts is a great author. He's very good at tackling high concept ideas while also keeping it fun and interesting. Blindsight has a vampire in it in case there wasn't already enough going on for you 😁

    Unrelated to the topic at hand, I also highly recommend Starfish by him. It was the first novel of his I read. A dark, psychological thriller about a bunch of misfits working a deep sea geothermal power plant and how they cope (or don't) with the situation at hand.

  • Hey! ChatGPT can be creative if you ask it to roast fictional characters .. somewhat!

    It's still not creative. It's just rehashing things it heard before. It's like a comedian stealing jokes from other comedians but changing the names of the people in them. That's not creative, even if it's slightly different from anything anyone has seen before.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multiple-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    "With no ads"
    Google used to have no ads.
    And with how much it costs to run even today's LLMs, let alone tomorrow's... enshittification is only a matter of time.

  • ..without informed consent.

    Here's a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI's answer shared.

    "I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want."

    Do you (I mean generally people reading this thread, not OP specifically) think Lemmy's spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.
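
    For reference, and assuming Lemmy's standard spoiler markdown extension, the formatting in question looks like this (the title and contents are made-up placeholders):

    ```
    ::: spoiler AI-generated answer (ChatGPT log)
    The model's response would go here, collapsed until the reader clicks to expand it.
    :::
    ```

    Since the reader has to click to expand, a clearly labeled spoiler at least shifts the choice back to them.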

    I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.

    And now googling will just result in "I asked AI and it said X", since the first thing you get is the AI summary shit. A friend of mine does this constantly: we'll be in a Discord call, somebody asks a question, and he'll google it and repeat the AI slop back as fact.

    Half the time it's wrong.

  • Good question; that would qualify for me, yeh!
