
Google Gemini struggles to write code, calls itself “a disgrace to my species”

Technology
  • Watermarks offer no defense against deepfakes, study suggests

    Technology
    191 votes
    30 posts
    217 views
    You can have whatever token you want with all the metadata, licensing and ownership information you want... ...unless you plan on only seeing images in your own platform, nobody gives a shit, people will take screenshots and image files and share and use them however they want. There's no world in which you load a full DRM plugin or do 4 different types of handshake with a full blockchain just to load a jpeg into a comment.
  • 54 votes
    32 posts
    571 views
    swelter_spark@reddthat.com
    Yeah, me too.
  • 21 votes
    19 posts
    177 views
    The AI only needs to alert the doctor that something is off and should be tested for. It does not replace doctors, but augments them. It's actually a great use for AI, it's just not what we think of as AI in a post-LLM world. The medically useful AI is pattern recognition. LLMs may also help doctors if they need a starting point into researching something weird and obscure, but ChatGPT isn't being used for diagnosing patients, nor is anything any AI says the "final verdict". It's just a tool to improve early detection of disorders, or it might point someone towards a useful article or book.
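    The "pattern recognition that alerts a doctor" idea above can be sketched in a few lines: flag a measurement for human review when it falls far outside a reference population. This is a hypothetical toy illustration, not any real clinical system; the data, threshold, and function name are all made up.

    ```python
    # Toy sketch: flag an unusual value for a clinician to review, rather
    # than "diagnose" anything. Uses a simple z-score against reference data.
    import statistics

    def flag_for_review(value, reference_values, z_threshold=3.0):
        """Return True if `value` deviates strongly from the reference data."""
        mean = statistics.mean(reference_values)
        stdev = statistics.stdev(reference_values)
        if stdev == 0:
            return value != mean
        z = abs(value - mean) / stdev
        return z >= z_threshold

    # Made-up reference cohort, e.g. resting heart rates in bpm.
    reference = [62, 70, 68, 75, 64, 71, 66, 73, 69, 67]
    print(flag_for_review(68, reference))   # typical reading -> False
    print(flag_for_review(130, reference))  # outlier, flagged -> True
    ```

    The point is the division of labour: the code only raises a flag; the decision about what the flag means stays with the doctor.
    
    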
  • 254 votes
    143 posts
    3k views
    Why would every American buy one if they can't afford insurance + medical bills to pay for health care? "Oh look, I'm having a heart attack. Good to know. Guess I'll just keep working."
  • How can websites verify unique (IRL) identities?

    Technology
    8 votes
    6 posts
    56 views
    Safe, yeah. Private, no. If you want to verify whether a user is a real person, you need very personally identifiable information. That’s not ever going to be private. The best you could do, in theory, is have a government service that takes that PII and gives the user a signed cryptographic certificate they can use to verify their identity. Most people would either lose their private key or have it stolen, so even that system would have problems. The closest to reality you could do right now is use Apple’s FaceID, and that’s anything but private. Pretty safe though. It’s super illegal and quite hard to steal someone’s face.
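    The government-certificate scheme the comment describes can be sketched roughly like this: an issuing service sees the PII once, derives an opaque identifier from it, and attests to that identifier; a website later checks the attestation without ever seeing the PII. This is a hypothetical, dependency-free illustration: HMAC stands in for a real digital-signature scheme (with public-key signatures such as Ed25519, the website would verify against a public key and never hold the signing secret), and every name here is made up.

    ```python
    # Minimal sketch of an identity-attestation flow, assuming the scheme
    # described above. HMAC is used as a stand-in for a real signature
    # scheme purely to keep this runnable with the standard library.
    import hashlib
    import hmac
    import secrets

    GOV_KEY = secrets.token_bytes(32)  # the issuing service's signing secret

    def issue_certificate(pii: str) -> tuple[str, str]:
        """Issuer side: derive an opaque ID from the PII and attest to it.
        The raw PII never leaves this function."""
        user_id = hashlib.sha256(pii.encode()).hexdigest()
        tag = hmac.new(GOV_KEY, user_id.encode(), hashlib.sha256).hexdigest()
        return user_id, tag

    def verify_certificate(user_id: str, tag: str) -> bool:
        """Website side: check that the opaque ID carries a valid attestation.
        (With real signatures this check would use only a public key.)"""
        expected = hmac.new(GOV_KEY, user_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    uid, cert = issue_certificate("name=Jane Doe;dob=1990-01-01")
    print(verify_certificate(uid, cert))           # valid -> True
    print(verify_certificate(uid, "forged" * 10))  # tampered -> False
    ```

    Even this sketch shows the comment's weak point: the scheme is only as private as the issuer's handling of the PII, and only as secure as the user's ability to keep their credential from being lost or stolen.
    
    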
  • 24 votes
    14 posts
    149 views
    I think you're missing some key points. Any file hosting service, no matter what, will have to deal with CSAM as long as people are able to upload to it. No matter what. This is an inescapable fact of hosting and the internet in general. Because CSAM is so ubiquitous and constant, one can only do so much to moderate any services, whether they're a large corporation or someone with a server in their closet. All of the larger platforms like 'meta', google, etc., mostly outsource that moderation to workers in developing countries so they don't have to also provide mental health counselling, but that's another story. The reason they own their own hardware is because the hosting services can and will disable your account and take down your servers if there's even a whiff of CSAM. Since it's a constant threat, it's better to own your own hardware and host everything from your closet so you don't have to eat the downtime and wait for some poor bastard in Nigeria to look through your logs and reinstate your account (not sure how that works exactly though).
  • 35 votes
    1 post
    17 views
    No one has replied
  • Generative AI's most prominent skeptic doubles down

    Technology
    43 votes
    14 posts
    113 views
    I don't think so, and I believe not even the current technology used for neural network simulations will bring us to AGI, let alone LLMs.