Tesla loses Autopilot wrongful death case in $329 million verdict

Technology
  • How to Choose Between Flats in Gunnersbury and Wembley Park

    Technology
    0 votes · 1 post · 18 views
    No one has replied
  • 575 votes · 114 posts · 3k views
    a toddler giving another toddler some milk.
  • Firefox is fine. The people running it are not

    Technology
    853 votes · 206 posts · 2k views
    Sounds like some deliberately obscure concentrations of power. The fear bit is really problematic, though, as scared people are not ideal decision-makers.
  • 254 votes · 42 posts · 392 views
    dojan@pawb.social
    Re "Don't assume evil when stupidity": I didn't, though? I think you may have missed the "I don't necessarily think that people who perpetuate this problem are doing so out of malice" part. Scream racism all you want, but you're cheapening the meaning of the word and not doing anyone a favor. I didn't invent this term.

    Darker patches on darker skin are harder to detect, just as facial features on dark skin are harder to detect in the dark: there is literally less light to work with.

    Computers don't see things the way we do. That's why steganography can be imperceptible to the human eye (see the sketch after this list), and why adversarial examples work even when the differences cannot be seen by humans. If a model is struggling at its job, the data is bad, be it the input data or the training data. Historically, one significant contributor has been that the datasets aren't particularly diverse, and white men end up as the default. It's why all the "AI" companies slipped "ethnically ambiguous" and similar words into their prompts to coax their image generators into producing people who weren't white, and subsequently why those generators gave us "ethnically ambiguous" memes and Black German Nazi soldiers.
  • 271 votes · 77 posts · 870 views
    I don't believe the idea of aggregating information is bad; the problem is losing the ability to properly vet your sources yourself. I don't know what sources an AI chatbot is pulling from. It could be many sources, or it could be one. Does it know which sources are reliable? Not really; AI is infamous for hallucinating even on simple prompts. Being able to independently check where your information comes from is an important part of stopping the spread of misinformation. AI can't do that, and in its current state I wouldn't want it to try. Convenience is a rat race of cutting corners: what is convenient isn't always what is best in the long run.
  • AI and misinformation

    Technology
    20 votes · 3 posts · 39 views
    Don't lose hope, just pretend to with sarcasm. Or, if you are feeling down, it could work the other way too. https://aibusiness.com/nlp/sarcasm-is-really-really-really-easy-for-ai-to-handle#close-modal
  • 44 votes · 4 posts · 48 views
    It varies by local legislation, so in some places paying ransoms is banned, but that's by no means universal. It's entirely valid to oppose paying ransoms wherever possible, but it's not black and white in every situation. For example, what if a hospital gets ransomed? Say it serves an area with no other facilities, and people will die if it can't get back online quickly. That sounds dramatic, but critical public services get ransomed all the time, and there are undeniable real-world consequences.

    Recovery from ransomware can cost significantly more than a ransom payment if you're not prepared. It can also take months to years, especially if you're simultaneously fighting to evict a persistent (annoyed, unpaid) threat actor from your environment.

    For the record, I don't think ransoms should be paid in most scenarios, but I do think there is some nuance to consider here.
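
A side note on the steganography claim in the thread above (an editorial addition, not from any commenter): the sketch below, assuming Python with NumPy and using illustrative function names rather than any real library's API, hides a message in the least significant bit of each pixel. A program recovers the bits exactly, yet no pixel changes by more than 1 of 255 grey levels, far below what a human viewer can perceive.

    import numpy as np

    # Minimal LSB steganography sketch (illustrative names, assumes NumPy).
    # Hide bits in the lowest bit of each pixel: the image looks unchanged
    # to a person, but a program reads the message back perfectly.

    def embed_bits(cover, bits):
        stego = cover.copy()
        flat = stego.reshape(-1)              # flat view into stego's buffer
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then write our bit
        return stego

    def extract_bits(stego, n):
        return [int(p & 1) for p in stego.reshape(-1)[:n]]

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in 8-bit photo

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_bits(cover, message)

    assert extract_bits(stego, len(message)) == message
    # Largest per-pixel change is at most 1 out of 255 grey levels.
    print("max pixel difference:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))

The same asymmetry runs the other way: adversarial perturbations a person cannot see can still flip a classifier's output, because the model reads the numbers, not the scene.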