

Technology
  • Palantir hits new highs amid Israel-Iran conflict

    40 votes
    4 posts
    0 views
    I think both peace and war are profitable. But those who profit from war may be pushier than those who profit from peace, and so may get their way even as an unpopular minority. Unless the left (usually the more pro-peace side) learns a few lessons from the right and places good outcomes above holier-than-thou moral purity. "I've never made anyone uncomfortable" is not the merit badge some think it is. Of course the left can never be a mirror copy of the right, because the left cannot afford to give as few fucks about anything as the right (who represent the already-haves, the economic incumbents; it's not called "fuck you money" for nothing). But the left can be way tougher, uncompromising in a nuanced way, and even calculatingly, carefully militant. Might does not make right, but might DOES make POLICY. You need both right and might to live under a good policy. Lotta good it does anyone to be right and insightful on all the issues and have zero impact anywhere.
  • Matrix.org is Introducing Premium Accounts

    226 votes
    110 posts
    3 views
    It's nice that this exists, but even for this I'd prefer to use an open source tool. And of course it helps with migration only if the old homeserver is still online. I think the most practical place for this migration function would be inside a Matrix client (one that supports more than one server to start with), but I suppose a standalone tool would be a decent solution as well.
  • 119 votes
    34 posts
    4 views
    A fairer comparison would be ELIZA vs ChatGPT.
  • 180 votes
    13 posts
    3 views
    There is a huge difference between an algorithm that uses real-world data to produce a score a panel of experts uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not. The final call does not matter if a computer program using an unknown and unreproducible algorithm screens you out before it. This is what we are facing: pre-determined decisions that no human being is held accountable for. Is this happening right now? Yes, without a doubt. People are no longer making many of the healthcare decisions that determine insurance coverage; computers that are not accountable are. You may still have some ability to dispute a decision, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for elders is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
  • 54 votes
    3 posts
    2 views
    fauxpseudo@lemmy.world
    Nobody ever wants to talk about white-collar-on-white-collar crime.
  • 2 votes
    2 posts
    4 views
    quarterswede@lemmy.world
    I give it 5 years before this is on our phones.
  • 12 votes
    7 posts
    7 views
    Sure, he wasn't an engineer, so no, Jobs never personally "invented" anything. But Jobs at least knew what was good and what was shit when he saw it. Under Tim Cook, Apple just keeps putting out shitty, unimaginative products. Cook is letting Apple stagnate, a dangerous thing to do when they have under 10% market share.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    0 votes
    2 posts
    4 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As informational entities, they fulfil a similar social function to a chatbot: a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this and be making the same calculations. Or they will be.