
Amazon is reportedly training humanoid robots to deliver packages

Technology
  • 0 votes
    1 post
    0 views
    No one has replied
  • 801 votes
    220 posts
    2k views
    uriel238@lemmy.blahaj.zone
    algos / AI has already been used to justify racial discrimination in some counties that use predictive policing software to adjust the sentences of convicts (the software takes in a range of facts about the suspect and the incident and compares them to how prior incidents with similar features were adjudicated), and wouldn't you know it, it simply highlighted and exaggerated the prejudices of police and the courts to absurdity, giving whites absurdly lighter sentences than nonwhites, for example.

    This is essentially mind-control or coercion technology based on the KGB practice of компромат (kompromat, compromising information, or as the CIA calls it, biographical leverage): information about a person that can be used to jeopardize their life, as blackmail material, or as a means to lure and bribe them. Take this from tradecraft and apply it to marketing or civil control, and you get things like the Social Credit System in China, built to keep people from misbehaving, engaging in discontent, or coming out of the closet (LGBTQ+, but there are plenty of other applicable closets).

    From a futurist perspective, we Homo sapiens appear simply incapable of noping out of a technology or process; no matter how morally black or heinous a technology is, we'll use it, especially those with the wealth and power to evade legal prosecution (or civil persecution). It breaks down into three categories:

    • Technologies we use anyway, and suffer for, e.g. usury, bonded servitude, mass-media propaganda distribution
    • Technologies we collectively decide are just not worth the consequences, e.g. the hydrogen bomb, biochemical warfare
    • Technologies for which we create countermeasures, usually turning into a tech race between states or between the public and the state, e.g. secure communication, secure data encryption, forbidden-data distribution / censorship

    We're clearly on the cusp of mind control, of weaponizing data harvesting into a coercion mechanism. Currently we're already seeing it used to establish and defend specific power structures that are antithetical to the public good. It's in the first category, and hopefully it'll fall into the third, because we have to make a mess (e.g. Castle Bravo / Bikini Atoll) and clean it up before deciding not to do that again.

    Also, with the rise of the internet, we've run out of myths that justify capitalism, which is bonded servitude with extra steps. So we may soon (within centuries) see that go into one of the latter two categories, since the US is currently experiencing the endgame consequences of forced labor, and the rest of the industrialized world is having to bulwark itself from the blast.
  • 9 votes
    3 posts
    43 views
    "Mysterious"? It's right there in the name.
  • YouTube Will Add an AI Slop Button Thanks to Google’s Veo 3

    Technology
    338 votes
    71 posts
    1k views
    anunusualrelic@lemmy.world
    "One slop please"
  • Seven Goldfish

    Technology
    5 votes
    1 post
    11 views
    No one has replied
  • 15 votes
    1 post
    15 views
    No one has replied
  • Moon missions: How to avoid a puncture on the Moon

    Technology
    14 votes
    1 post
    16 views
    No one has replied
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    29 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization and advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may, of course, be aware of this, and be making the same calculations. Or, they will be.