
We Should Immediately Nationalize SpaceX and Starlink

Technology
  • 325 votes
    28 posts
    173 views
    lyra_lycan@lemmy.blahaj.zone
    I hear you, and also, the modern Labour group (Neo Labour, if you will), when it was created, revived the OG Labour ideals from the 1700s, and OG Labour was a spinoff from ex-Conservative members. The way Labour acts only proves my theory that there have only ever been two parties in power.
  • 237 votes
    37 posts
    261 views
    K
    AI has some uses, but it always needs human oversight, and the final decision must also be made by a human professional. If you use AI to speed up tasks, you can tell whether its output is valid, and you keep the final decision, then you can use it safely. But if you let AI decide on and execute important tasks essentially autonomously, you have a recipe for disaster. Fully autonomous and mistake-free AI is a naive pipe dream that I don't see on the horizon at all.
  • 168 votes
    23 posts
    342 views
    semperverus@lemmy.world
    Yep! Time to go back to the old ways... BRB while I load up my server with 10 TB of DVD rips from my garage and hook them up to my Raspberry Pi running Jellyfin.
  • Twitter opens up to Community Notes written by AI bots

    44 votes
    9 posts
    98 views
    G
    Stop fucking using Twitter. Stop posting about it, and stop posting things that link to it. Delete your account like you should have already.
  • 9 votes
    6 posts
    57 views
    F
    You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes", but that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil. In reality, there are various ways to leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check prescribed medication against allergies. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the safe money is that it's just a scam.
  • WhatsApp rolls out AI-generated summaries for private messages

    96 votes
    26 posts
    293 views
    W
    So I think, but I'm not sure, this is for group chats. Group chats are only encrypted to and from the server, because the server broadcasts the message to each recipient. Since the messages are unencrypted on the server, they can be fed to LLMs. This is different from Signal, where it's your phone that encrypts each copy of the message before sending it to each recipient individually.
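The difference the commenter describes can be sketched in code. This is a toy model, not real cryptography: the XOR "cipher", key handling, and recipient names are illustrative assumptions, standing in for the actual protocols, purely to show where the plaintext is visible in each fan-out model.

```python
# Toy sketch (NOT real cryptography) of server-side vs client-side fan-out.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"meet at 6pm"

# Model A (server-side fan-out): encrypted only sender<->server. The server
# decrypts before re-broadcasting, so it holds the plaintext and could feed
# it to an LLM.
sender_server_key = secrets.token_bytes(32)
wire = keystream_xor(sender_server_key, message)
plaintext_on_server = keystream_xor(sender_server_key, wire)  # server reads it

# Model B (client-side fan-out, Signal-style): the sender's phone encrypts
# one copy per recipient with keys the server never holds; the server only
# relays opaque blobs.
recipient_keys = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
blobs = {who: keystream_xor(k, message) for who, k in recipient_keys.items()}
# Only each recipient can decrypt their own copy.
decrypted = {who: keystream_xor(recipient_keys[who], blobs[who]) for who in blobs}
```

In model A the variable `plaintext_on_server` is the readable message; in model B the server only ever sees the `blobs` values.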
  • Taiwan adds China’s Huawei, SMIC to export blacklist

    61 votes
    43 posts
    568 views
    R
    Based decision.
  • AI model collapse is not what we paid for

    84 votes
    20 posts
    173 views
    A
    I share your frustration. I went nuts about this the other day. It was in the context of searching on a Discord server rather than Google, but it was so aggravating because of how the "I know better than you" attitude is everywhere in tech nowadays. The Discord server was a reading group, and I was searching for discussion of a recent book they'd studied, by someone named "Copi". At first, I didn't use quotation marks, and my results were swamped with messages that included the word "copy". At that point I was fairly chill and just added quotation marks to my query to emphasise that it definitely was "Copi" I wanted. I was still swamped with messages containing "copy", and it drove me mad because there is literally no way to say "fucking use the terms I give you and not the ones you think I want". The software example you give is a great case where it would be really great to have this ability. TL;DR: Solidarity in rage
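The behavior the commenter wanted can be sketched as a search function that honors quotation marks. The message list, the fuzzy-match cutoff, and the whole-word rule are illustrative assumptions, not how Discord's search actually works.

```python
# Sketch: quoted query = exact whole-word match; unquoted = fuzzy match,
# which is how "Copi" ends up pulling in "copy".
import difflib
import re

messages = [
    "Discussion of Copi chapter 3",
    "can you send me a copy of the notes?",
    "Copi's definition of validity",
]

def search(query: str, msgs: list[str]) -> list[str]:
    if query.startswith('"') and query.endswith('"'):
        # Quoted: match the literal term as a whole word, case-insensitively,
        # with no stemming or fuzzy expansion.
        term = re.escape(query.strip('"'))
        pattern = re.compile(rf"\b{term}\b", re.IGNORECASE)
        return [m for m in msgs if pattern.search(m)]
    # Unquoted: "helpful" fuzzy matching against each word in the message.
    return [
        m for m in msgs
        if difflib.get_close_matches(
            query.lower(),
            [w.lower() for w in re.findall(r"\w+", m)],
            cutoff=0.7,
        )
    ]
```

With this sketch, `search('Copi', messages)` returns all three messages (the fuzzy branch accepts "copy"), while `search('"Copi"', messages)` returns only the two that literally contain "Copi".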