
Reddit will tighten verification to keep out human-like AI bots

Technology
  • 326 votes
    20 posts
    0 views
    roofuskit@lemmy.world
    It's extremely traceable. There is a literal public ledger of every single transaction.
  • 2 votes
    8 posts
    0 views
    IMO stuff like that is why a good trainer is important. It's also stronger evidence that proper user-centered design should be done, and a usable, intuitive UX and set of APIs developed. But because the buyer of this heap of shit is some C-level, there is no incentive to actually make it usable for the unfortunate peons who are forced to interact with it. See also SFDC and every ERP solution in existence.
  • Are We All Becoming More Hostile Online?

    Technology
    212 votes
    31 posts
    2 views
    Back in the day I just assumed everyone was lying, or trying to get people worked up, and we called them trolls. Learning how to ignore the trolls, not trusting strangers on the internet, and the ability to basically not care what random people said is a lost art. Somehow people forgot to pass this memo along to everyone else, including "you don't fucking join social networks as yourself". Anonymity makes this all work. Eternal September newbies just didn't get it.
  • 14 votes
    2 posts
    0 views
    "Extra verification steps." I know how large social media companies operate. This is all about increasing the value of Reddit users to advertisers. The goal is to have a more accurate user database to sell them. Zuckerberg literally brags to corporations about how good their data is on users: https://www.facebook.com/business/ads/performance-marketing Here, Zuckerberg tells corporations that Instagram can easily manipulate users into purchasing shit: https://www.facebook.com/business/instagram/instagram-reels

    Always be wary of anything available for free. There are some quality exceptions (CBC, VLC, The Guardian, Linux, PBS, Wikipedia, Lemmy, ProPublica) but, by and large, "free" means they don't care about you. You are just a commodity that they sell. Facebook, Google, X, Reddit, Instagram... their goal is to keep people hooked to their smartphones by giving them regular small dopamine hits (likes, upvotes) followed by small breaks with outrageous/emotional content. Keep them hooked, gather their data, and sell them ads. The people who know that best are former top executives: https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia https://www.nytimes.com/2019/03/01/business/addictive-technology.html https://www.today.com/parents/teens/facebook-whistleblower-frances-haugen-rcna15256
  • Windows Is Adding AI Agents That Can Change Your Settings

    Technology
    103 votes
    26 posts
    0 views
    Edit: no, wtf am I doing. The thread was about how inept the coders were. Here is your answer: they were so fucking inept they broke a fundamental function and it made it to production. Then they did it deliberately. That's how inept they are. End of.
  • 119 votes
    55 posts
    0 views
    I bet every company has at least one employee with right-wing political views. Choosing a product based on some random quotes from employees is stupid.
  • 14 votes
    2 posts
    0 views
    This is why they are businessmen and not politicians or influencers.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    0 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

    In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.