
Avoiding AI is hard – but our freedom to opt out must be protected

Technology
  • 67 votes
    4 posts
    0 views
    Won't someone think of the shareholders?!
  • The mystery of $MELANIA

    Technology
    11 votes
    5 posts
    0 views
    $mell ya later!
  • 83 votes
    7 posts
    0 views
    tattorack@lemmy.world
    Europe created something called "the platform work directive", or something along those lines. It basically means that platform jobs (e.g. Uber, Wolt, Just Eat) can no longer operate under a so-called "freelance model": if it looks like you're hiring employees, you must give them contracts like employees (along with everything that entails). It's already been agreed upon, so now it's a matter of implementation. It's considered a "pillar" of the EU, so being part of the EU means having the directive. The deadline for implementation, as I've heard from 3F Copenhagen, should be 2026. I can't wait.
  • 513 votes
    54 posts
    0 views
    My cousin partially set his bedroom on fire doing something very similar with the foil from chewing gum. This was in the 1980s, though, so no one really cared; I'm pretty sure he just got shouted at.
  • Meta Reportedly Eyeing 'Super Sensing' Tech for Smart Glasses

    Technology
    33 votes
    4 posts
    0 views
    I see your point, but I also just genuinely don't have a mind for that shit. Even with my own close friends and family, it never pops into my head to ask about that vacation they just got back from or what their kids are up to. I rely on social cues from others, mainly my wife, to sort of kick-start my brain.

    I just started a new job. I can't remember who said they were into fishing and who didn't, and now it's anxiety-inducing to try to figure out who is who. Or they ask me a friendly question and I get caught up answering, and when I'm done I forget to ask it back (because frequently asking someone about their weekend or kids or whatever is their way of getting to share their own life with you, but my brain doesn't think that way).

    I get what you're saying. It could absolutely be used for performative interactions, but for some of us, people drift away because we aren't good at being curious about them or remembering details like that. And also, I have to sit through awkward lunches at work where no one really knows what to talk about or ask about, because outside of work we are completely alien to one another. And it's fine. It wouldn't be worth the damage it does. I have left behind all personally identifiable social media for the same reason. But I do hate how social anxiety and ADHD make friendship so fleeting.
  • Indian Government orders censoring of accounts on X

    Technology
    149 votes
    12 posts
    0 views
    Why? Because you can’t sell them?
  • 588 votes
    77 posts
    1 view
    When a Lemmy instance owner gets a legal request from a foreign country's government to take down content, after they're done shitting themselves they'll either take the content down or have to implement a country-wide block on that country, along with not allowing any citizens of that country to use their instance no matter where they are located. Block me, I don't care. You're just proving that you can't handle the truth and being challenged with it.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    0 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may, of course, be aware of this and be making the same calculations. Or they will be.