
Doctors are using unapproved AI software to record patient meetings, investigation reveals

Technology
  • 252 votes
    4 posts
    13 views
    T
    Isn't Merz kinda right wing, but not AfD-crazy?
  • 738 votes
    67 posts
    84 views
    K
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject or reliable sources for it to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case, that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before release anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • 9 votes
    6 posts
    36 views
    F
    You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes". But that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil.

    In reality, there are various ways one can leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the safe money is that it's just a scam.
  • 66 votes
    9 posts
    22 views
    django@discuss.tchncs.de
    All the tasks could have been easily solved with some basic APIs and algorithms.
  • Open-Source vs Closed AI: What Businesses Must Know

    Technology
    0 votes
    1 post
    11 views
    Nobody has replied
  • OSTP Has a Choice to Make: Science or Politics?

    Technology
    30 votes
    7 posts
    43 views
    B
    Yeah, I expect so. I don't like the way this author just doesn't bother explaining her points. She just states that she disagrees and says they should be left to their own rules. Which is probably fine, but that's either lazy or she's not mentioning the difference for another reason.
  • Digg founder Kevin Rose offers to buy Pocket from Mozilla

    Technology
    1 vote
    7 posts
    41 views
    H
    IMO it was already shitty.
  • 163 votes
    15 posts
    77 views
    L
    An online group started by a 15-year-old in Texas playing Minecraft and watching extreme gore, according to this article. Were they also involved in the sexual exploitation of other kids, or was that just the spin-offs that came from other people/countries? It all sounds terrible, but I wonder if this was just a kid who did something for attention, and then other perpetrators got involved and kept taking it further down other rabbit holes. Definitely seems like a "know what your kid is doing online" scenario, but also yikes on all the 18+ members who joined and participated in such.