Microsoft Says Its New AI Diagnosed Patients 4 Times More Accurately Than Human Doctors

Technology
  • The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.

    Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.
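
    For a sense of what "sequentially investigate" means here, below is a minimal sketch of how a sequential-diagnosis benchmark of this kind could be scored. This is an illustration only, not Microsoft's actual MAI-DxO implementation; the `Case` structure and `ask_model` agent are hypothetical stand-ins. The agent repeatedly chooses between ordering a test (whose cost is tallied) and committing to a diagnosis, so each run can be graded on both accuracy and total spend.

    ```python
    # Hypothetical sketch of a sequential-diagnosis benchmark loop.
    # Not Microsoft's MAI-DxO code; `ask_model` stands in for any agent.
    from dataclasses import dataclass

    @dataclass
    class Case:
        vignette: str                 # initial case presentation
        answer: str                   # ground-truth final diagnosis
        test_results: dict[str, str]  # test name -> result text
        test_costs: dict[str, float]  # test name -> price in dollars

    def run_case(case: Case, ask_model, max_steps: int = 10):
        """ask_model(history) returns ("order", test_name) or ("diagnose", text)."""
        history = [("presentation", case.vignette)]
        total_cost = 0.0
        for _ in range(max_steps):
            action, arg = ask_model(history)
            if action == "order":
                total_cost += case.test_costs.get(arg, 0.0)
                history.append((arg, case.test_results.get(arg, "unavailable")))
            else:
                # Real grading would be fuzzier than an exact string match.
                correct = arg.strip().lower() == case.answer.strip().lower()
                return correct, total_cost
        return False, total_cost  # never committed to a diagnosis

    # Aggregate over a case set:
    # results = [run_case(c, ask_model) for c in cases]
    # accuracy = sum(ok for ok, _ in results) / len(results)
    # avg_cost = sum(cost for _, cost in results) / len(results)
    ```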

  • I know that I might be the only Lemmy user happy with this, but AI applications in the medical field seem very promising for lowering costs and improving accuracy.

  • "more accurate."

    Until it's not... then what? Who's liable? Google? Amazon? Microsoft? ChatGPT? Look, I like AI because it's fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.

    This thing will 100% be designed to diagnose people in order to sell you drugs, not fix your health. Corporations control this. Currently they need to bribe doctors to push their drugs; this will circumvent that entirely. You'll end up paying drastically more, for less.

    The sheer fact that it's telling people to kill themselves to end suffering should be proof enough that it's dogshit.

  • And the risk is that if we rely on AI in any meaningful capacity, it will eventually erode away the expertise of the people who would be knowledgeable enough to detect the problems that future AI may create or ignore. And this assumes the best case, where the AI isn't being specifically tampered with.

  • People don't realize how much doctors leverage opening old books, reading subscription articles, and looking at case files to help their patients out.

    Anything that can aid in the diagnosis and treatment of patients is a good thing, even if it's AI.

    Source: I work in IT, and my wife's two siblings are a general practitioner and an otolaryngologist (ear, nose, and throat specialist). There's not much difference between being a systems administrator and a doctor in many ways.

  • Have you tried swapping out the part (CPU/video card/memory/random component) whilst the patient is still running?

    Doctors do this all the time! 😉

  • AI for pattern recognition (statistical stuff) is fine, IMHO; that's different from expecting original thought, reasoning, or understanding, which the new 'AI' does not do, despite the constant hype.

  • I agree with you; I think this will likely happen to some degree. At the same time, that kind of argument could be made against many new technologies, and it isn't by itself a valid reason not to use new tech.

  • Simply using AI isn't an issue... Allowing it to take over in a way that accelerates the removal of knowledge from our collective pool is a problem. Allowing companies to use AI as a direct replacement for actual medical professionals will remove knowledge from society. We already know that we can't use AI output to fuel more AI learning... the models implode (see the toy sketch after this comment). In order to keep learning more in medicine, we need to keep pushing for human learning and understanding.

    Funny that you agree with me and apparently see a useful discussion to be had here... but you downvote me even though the comment certainly added to the discussion.

    Oh, and next time don't put words in someone's mouth; that's very much a bad-faith move that harms meaningful discussion. I never said we should ban it or never use it. A better answer would be to legislate that doctors must still provide oversight, or must be the approving authority: AI can never have the final say in someone's care, and research must never be sourced from AI outputs. All I said is that if we continue what we're doing and rely on AI in any meaningful capacity, we will run into problems, especially in the context of the comment I responded to, which opined on corporation-controlled AI.

    FFS... they can't even run a vending machine. https://www.anthropic.com/research/project-vend-1

    Oh, and actually I would consider the 85% it gets to be pretty poor, considering that the AI was likely trained on the full breadth of NEJM information. Doctors don't have the ability to retain and train on 100% of all NEJM knowledge, so making mistakes makes sense for them. It doesn't make sense for something that was trained on NEJM data to screw up on an NEJM case.

    My stance is the same for all AI: I'll use it to generate basic code for me, but I'll never run that without review. Or to jumpstart research into a topic... and then validate the information presented with outside, direct sources.

    TL;DR: Tool is good... Source is bad.
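
    On the "models implode" point above: a toy sketch (an illustration, not code from the article or any paper) of why training each generation only on the previous generation's output drains diversity. Rare cases that happen not to be sampled are lost for good:

    ```python
    # Toy model collapse: each "generation" learns only from samples drawn
    # from the previous generation, so rare items are lost permanently.
    import random

    random.seed(0)
    data = list(range(100))  # generation 0: 100 distinct "facts"

    for generation in range(1, 11):
        data = [random.choice(data) for _ in range(100)]  # resample from the last model
        print(f"gen {generation:2d}: {len(set(data)):3d} distinct facts remain")
    ```

    Run it and the count of distinct facts only ever goes down, which is the same dynamic behind keeping humans in the medical knowledge loop.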
