
It's rude to show AI output to people

Technology
  • This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP’s opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya, I have nothing to add, but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

  • Sometimes people at my old job post AI stuff and I just tell them "stop using the lie machine"

  • Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn't there before, having to cut through the noise.

  • Yes. I am getting so sick and tired of people asking me for help then proceeding to rain unhelpful suggestions from their LLM upon me while I'm trying to think through their problem. You wouldn't be asking for help if that stuff was helping you!

  • Every now and then I see a guy barging into a topic bringing nothing more than "I asked [some AI service] and here's what it said", followed by 3 paragraphs of AI-generated gibberish. And then when it's not well received, they just don't seem to understand why.

    It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour, trying to sort out where it got its information, whether it represented it well, or whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go "What's the problem, is any of that wrong?"... Dude, the problem is you have no fucking idea if it's wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

  • If only the biggest problem were messages starting with "I asked ChatGPT and this is what it said:"

    A far bigger problem is people using AI to draft text and then posting it as their own. On social media like this, I can't count the number of times I've been midway through an otherwise normal discussion thread and only clocked two paragraphs in that I'm reading a chatbot's response. I feel like the deception stole time and brain cells from me in the moments I spent reading it and trying to derive meaning from it.

    And just this week I received an application from someone wanting work in my office which was very clearly AI generated. Obviously that person will not be offered any work. If you can't be bothered to write your own "why I want to work here" cover letter, then I can't be bothered to work with you.

  • Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    I guess it has some tabloid-like value, which, if that counts as value, says a lot about the other party.

  • I'm amused by the 14 oxygen-wasting NPCs who are in this picture and didn't like it.

  • Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

    That's not true. For starters you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you and you can see that it checks out without needing something else to back it up.

    Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.
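
    A minimal sketch of the first kind of check described above, with a hypothetical equation and a hypothetical claimed root; the point is that the claim can be verified by substitution, without trusting the AI or any external source:

    ```python
    # Hypothetical example: suppose an AI claims x = 3 is a root of
    # 2x^2 - 5x - 3 = 0. Substituting the claimed root back into the
    # equation verifies the claim on its own merits.
    def f(x):
        return 2 * x**2 - 5 * x - 3

    claimed_root = 3
    print(f(claimed_root) == 0)  # True: the answer checks out by itself
    ```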

  • Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.

    On the second part: that is only half true. Yes, there are LLMs out there that search the internet, then summarize and reference some of the websites they find.

    However, it is not rare for them to add their own "info" that isn't in the cited source at all. If you use them to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not relied on at all for anything critical, even when it comes with a nice citation.
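
    A minimal sketch of that due diligence, assuming a hypothetical claim and cited URL; a verbatim-match check is crude, but it illustrates that a citation only counts if the source actually contains the claim:

    ```python
    # Hypothetical claim and citation; both values are placeholders.
    # Crude check: does the cited page actually contain the claimed text?
    # A miss doesn't prove hallucination (paraphrase is common), but a hit
    # at least shows the citation points somewhere relevant.
    import requests

    claim = "the statistic the LLM asserted"    # hypothetical
    source_url = "https://example.com/article"  # hypothetical

    page = requests.get(source_url, timeout=10).text
    if claim.lower() in page.lower():
        print("The cited source contains the claimed text.")
    else:
        print("Claim not found verbatim; read the source before trusting it.")
    ```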

  • What a coincidence, I was just reading sections of Blindsight again for an assignment (not directly related to its contents) and had a similar thought when re-reading a section near the one in the OP: it's scary how closely the novel depicted something analogous to contemporary LLM output.

  • It's just a smarter search engine with no ads and better focus on the question asked.

    And what happens when mechahitler, the next version of Grok, or whatever AI hosted by a large corporation that only has capital gains in mind, comes out with unannounced injected prompt poisoning that doesn't produce the quality output you've been conditioned to expect?

    These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true and what is obviously an AI hallucination: a ridiculous mess of computer-generated text no smarter than your phone keyboard's word suggestions.

    Trying to soak up all the AI-generated information on a topic without prior knowledge may easily leave you understanding nothing more than you did before, and possibly give you unearned confidence in what is essentially misinformation. And just because an AI pulls up references doesn't settle it: unless you do your due diligence and read those references for accuracy and authority on the subject, the AI may be hallucinating where it got the wrong information it's giving you.

  • A far bigger problem is people using AI to draft text and then posting it as their own.

    I've seen emails at work that were AI generated but carried no disclaimer. Then someone points out how wildly incorrect they were, and the sender just says "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and use "LLMs are just stupid" as an excuse when it doesn't fly.
