
Klarna’s AI replaced 700 workers — Now the fintech CEO wants humans back after $40B fall

Technology
  • 210 votes
    32 posts
    121 views
    No need for good computers to train agents. They don't need to play Crysis to train as hackers. Something on the level of a Raspberry Pi (or, more accurately, of a 2010 laptop) is good enough.
  • 358 votes
    20 posts
    44 views
    They still would, it just wouldn’t have their name on it.
  • 120 votes
    24 posts
    390 views
    A little background info: Russia has been sponsoring one of its oligarchs' businesses by eliminating the competition. First, they throttled YouTube's speed to an unusable level to force people to switch to RuTube (they didn't). Now they're trying to force people to switch from WhatsApp (and potentially Telegram) to MAX, which they want to be Russia's version of WeChat. Add the fact that our politicians are obsessed with controlling all of the media, and you get the gist of it.
  • 253 votes
    42 posts
    438 views
    dojan@pawb.social
    > Don’t assume evil when stupidity

    I didn't, though? I think you missed the "I don’t think necessarily that people who perpetuate this problem are doing so out of malice" part.

    > Scream racism all you want but you’re cheapening the meaning of the word and you’re not doing anyone a favor.

    I didn't invent this term.

    > Darker patches on darker skin are harder to detect, just as facial features in the dark, on dark skin, are harder to detect, because there is literally less light to work with.

    Computers don't see things the way we do. That's why steganography can be imperceptible to the human eye, and why adversarial examples work even when the differences cannot be seen by humans (see the sketch after this comment). If a model is struggling at its job, the data is bad: either the input data or the training data. Historically, one significant contributor has been that the datasets aren't particularly diverse, so white men end up as the default. It's why the "AI" companies popped "ethnically ambiguous" and other words into their prompts to coax their image generators into producing people who weren't white, and subsequently why those generators gave us "ethnically ambiguous" memes and German Nazi soldiers who were black.
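    A minimal sketch of the "imperceptible to humans, obvious to machines" point above, using least-significant-bit steganography. The function names and pixel values are illustrative, not from any real system:

    ```python
    # Hide one byte in the least significant bits of eight 8-bit pixel values.
    # Each pixel changes by at most 1 step out of 255, which is invisible to
    # the human eye but trivially readable by a machine.

    def embed_byte(pixels: list[int], byte: int) -> list[int]:
        """Overwrite the LSB of eight pixel values with the bits of `byte`."""
        bits = [(byte >> i) & 1 for i in range(8)]
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract_byte(pixels: list[int]) -> int:
        """Recover the hidden byte from the LSBs of eight pixel values."""
        return sum((p & 1) << i for i, p in enumerate(pixels))

    original = [200, 201, 202, 203, 204, 205, 206, 207]
    stego = embed_byte(original, ord("A"))
    assert extract_byte(stego) == ord("A")
    # No pixel moved by more than one brightness step:
    assert all(abs(a - b) <= 1 for a, b in zip(original, stego))
    ```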
  • 138 votes
    15 posts
    124 views
    toastedravioli@midwest.social
    ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for doctors to use. Even years ago, just before the AI “boom”, researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was “better” than doctors specifically because it followed the doctors' advice 100% of the time, thereby eliminating any bias from the doctor that might interfere with following their own training. Of course, the splashy headline “AI better than doctors” was ridiculous. But it does show the benefit of a neutral tool for doctors to use, especially when looking at images of people outside the typical demographics that much medical training is based on. (As in, mostly just white men. For example, everything doctors are trained on regarding knee imaging comes from images of the knees of UK coal miners taken some decades ago.)
  • Using Signal groups for activism

    Technology
    204 votes
    37 posts
    388 views
    ulrich@feddit.org
    > You're using a messaging app that was built with the express intent of being private and encrypted.

    Yes.

    > You're asking why you can't have a right to privacy when you use your real name as your display handle in order to hide your phone number.

    I didn't ask anything. I stated it definitively.

    > If you then use personal details as your screen name, you can't get mad at the app for not hiding your personal details.

    I've already explained this. I am not mad. I am telling you why it's a bad product for activism.

    > Chatting with your friends and clients isn't what this app is for.

    That's... exactly what it's for, and I don't know where you got the idea that it's not. It's absurd. Certainly Snowden never said anything of the sort; Signal themselves never said anything of the sort.

    > There are other apps for that.

    Of course there are. They're all, to varying degrees, not private, not secure, or not easy to use.
  • How can websites verify unique (IRL) identities?

    Technology
    8 votes
    6 posts
    56 views
    Safe, yeah. Private, no. If you want to verify that a user is a real person, you need very personally identifiable information, and that's never going to be private. The best you could do, in theory, is have a government service that takes that PII and gives the user a signed cryptographic certificate they can use to verify their identity (see the sketch after this comment). Most people would either lose their private key or have it stolen, so even that system would have problems. The closest thing to reality right now is Apple's Face ID, and that's anything but private. Pretty safe, though: it's super illegal and quite hard to steal someone's face.
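    A minimal sketch of the signed-certificate idea described above, assuming a hypothetical government signing service. The names (GOV_KEY, issue_credential, verify_credential) are illustrative, not a real API, and note the privacy caveat: a reused credential still lets sites link the same user across accounts, so this is verifiable but not private:

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The government's long-term keypair; websites only ever see the public half.
    GOV_KEY = Ed25519PrivateKey.generate()
    GOV_PUBLIC = GOV_KEY.public_key()

    def issue_credential(user_id: bytes) -> bytes:
        """Government side: after checking PII offline, sign an opaque user ID."""
        return GOV_KEY.sign(user_id)

    def verify_credential(user_id: bytes, signature: bytes) -> bool:
        """Website side: check the signature without ever seeing the PII."""
        try:
            GOV_PUBLIC.verify(signature, user_id)
            return True
        except InvalidSignature:
            return False

    credential = issue_credential(b"opaque-user-7c1f")
    assert verify_credential(b"opaque-user-7c1f", credential)
    ```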
  • 253 votes
    41 posts
    543 views
    Did you, by any chance, ever wonder why people deal with hunger instead of just eating cake?