Microsoft Says Its New AI Diagnosed Patients 4 Times More Accurately Than Human Doctors
-
The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.
Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.
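The "sequential investigation" described above (ordering tests step by step while watching cost) can be sketched as a toy loop. The diseases, tests, and costs below are invented purely for illustration; the excerpt doesn't describe MAI-DxO's actual design, so this is only a guess at the general shape of such an orchestrator:

```python
# Toy sketch of a sequential, cost-aware diagnostic loop. All diseases,
# test names, and prices are hypothetical.

# Each made-up disease maps to the test results it would produce.
DISEASES = {
    "disease_a": {"blood_panel": "abnormal", "mri": "lesion"},
    "disease_b": {"blood_panel": "abnormal", "mri": "clear"},
    "disease_c": {"blood_panel": "normal",   "mri": "clear"},
}
TEST_COSTS = {"blood_panel": 50, "mri": 1200}

def diagnose(true_disease):
    """Order tests cheapest-first, pruning hypotheses until one remains."""
    candidates = set(DISEASES)
    spent = 0
    for test in sorted(TEST_COSTS, key=TEST_COSTS.get):  # cheapest first
        if len(candidates) == 1:
            break
        result = DISEASES[true_disease][test]  # simulated test result
        spent += TEST_COSTS[test]
        candidates = {d for d in candidates if DISEASES[d][test] == result}
    return candidates.pop(), spent

print(diagnose("disease_a"))  # → ('disease_a', 1250)
```

Note how a cheap test can settle an easy case early (diagnosing "disease_c" costs only 50 here), which is the intuition behind reaching a diagnosis "more cost-effectively."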
-
The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.
Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.
I know that I might be the only Lemmy user happy with this, but AI applications in the medical field seem very promising for lowering costs and improving accuracy.
-
I know that I might be the only Lemmy user happy with this, but AI applications in the medical field seem very promising for lowering costs and improving accuracy.
more accurate.
Until it's not... then what? Who's liable? Google? Amazon? Microsoft? ChatGPT? Look, I like AI because it's fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.
This thing will 100% be designed to diagnose people in order to sell you drugs, not fix your health. Corporations control this. Currently they need to bribe doctors to push their drugs; this will circumvent that entirely. You'll end up paying drastically more, for less.
The sheer fact that it's telling people to kill themselves to end suffering should be proof enough that it's dogshit.
-
more accurate.
Until it's not... then what? Who's liable? Google? Amazon? Microsoft? ChatGPT? Look, I like AI because it's fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.
This thing will 100% be designed to diagnose people in order to sell you drugs, not fix your health. Corporations control this. Currently they need to bribe doctors to push their drugs; this will circumvent that entirely. You'll end up paying drastically more, for less.
The sheer fact that it's telling people to kill themselves to end suffering should be proof enough that it's dogshit.
And the risk is that if we rely on AI in any meaningful capacity, it will eventually erode the expertise of the people who would be knowledgeable enough to detect the problems that future AI may create or ignore. And that assumes the best case, where the AI isn't being specifically tampered with.
-
I know that I might be the only Lemmy user happy with this, but AI applications in the medical field seem very promising for lowering costs and improving accuracy.
People don't realize how much doctors rely on opening old books, reading subscription articles, and looking at case files to help their patients.
Anything that can aid in the diagnosis and treatment of patients is a good thing, even if it's AI.
Source: I am in IT, and my wife's two siblings are a general practitioner and an otolaryngologist (ear, nose, and throat specialist). There's not much difference between being a systems administrator and a doctor in many ways.
-
People don't realize how much doctors rely on opening old books, reading subscription articles, and looking at case files to help their patients.
Anything that can aid in the diagnosis and treatment of patients is a good thing, even if it's AI.
Source: I am in IT, and my wife's two siblings are a general practitioner and an otolaryngologist (ear, nose, and throat specialist). There's not much difference between being a systems administrator and a doctor in many ways.
Have you tried swapping out the part (CPU/videocard/memory/random component) whilst the patient is still running?
Doctors do this all the time!
-
The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.
Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.
AI for pattern recognition (statistical stuff) is fine, IMHO; that's different from expecting original thought, reasoning, or understanding, which the new "AI" does not do, despite the constant hype.
-
Have you tried swapping out the part (CPU/videocard/memory/random component) whilst the patient is still running?
Doctors do this all the time!
-
And the risk is that if we rely on AI in any meaningful capacity, it will eventually erode the expertise of the people who would be knowledgeable enough to detect the problems that future AI may create or ignore. And that assumes the best case, where the AI isn't being specifically tampered with.
I agree with you. I think this will likely happen to some degree. At the same time, that kind of argument could be used against many new technologies, and it is not a valid reason to avoid new tech.
-
I agree with you. I think this will likely happen to some degree. At the same time, that kind of argument could be used against many new technologies, and it is not a valid reason to avoid new tech.
Simply using AI isn't an issue... Allowing it to take over in a way that accelerates the removal of knowledge from our pools of expertise is a problem. Allowing companies to use AI as a direct replacement for actual medical professionals will remove knowledge from society. We already know that we can't use AI output to fuel more AI learning... the models implode. To keep learning more in medicine, we need to keep pushing for human learning and understanding.
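The "models implode" point refers to what's often called model collapse, and the core mechanism can be illustrated with a toy simulation: a "model" that is just a fitted mean and standard deviation, repeatedly retrained only on samples from its own previous fit. The sample size and generation count below are arbitrary choices for illustration, not a claim about any real system:

```python
import random
import statistics

# Train each generation's "model" (a mean and a spread) purely on the
# previous generation's synthetic output. The spread drifts toward zero:
# diversity in the data is progressively lost.
random.seed(0)

mu, sigma = 0.0, 1.0
initial_sigma = sigma
for generation in range(300):
    # Small synthetic sample drawn from the previous model.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    # "Retrain" on synthetic data only.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)

print(f"initial sigma: {initial_sigma:.3f}, after 300 generations: {sigma:.6f}")
```

The small per-generation sample makes the effect fast here, but the direction of the drift is the point: each round of training on the previous model's output tends to shrink the spread, never restore it.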
Funny that you agree with me and apparently see useful discussion to be had here... but you downvote me even though the comment certainly added to the discussion.
Oh, and next time don't put words in someone's mouth; that's very much a bad-faith move that harms meaningful discussion. I never said we should ban it or never use it. A better answer would be to legislate that doctors must still provide oversight, or must be the approving authority; that AI can never have the final say in someone's care; and that research must never be sourced from AI. All I said is that if we continue what we're doing and rely on AI in any meaningful capacity, we will run into problems, especially in the context of the comment I responded to, which opined on corporation-controlled AI.
FFS... they can't even run a vending machine. https://www.anthropic.com/research/project-vend-1
Oh, and actually I would consider the 85% it achieves to be pretty poor, considering that the AI was likely trained on the full breadth of NEJM information. Doctors don't have the ability to retain and train on 100% of NEJM knowledge, so making mistakes makes sense for them. It doesn't make sense for something trained on NEJM data to screw up on an NEJM case.
My stance is the same for all AI. I'll use it to generate basic code for me, but I'll never run that code without review. Or to jumpstart research into a topic, and then validate the information presented against outside, direct sources.
TL;DR: Tool is good... Source is bad.
-
The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.
Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.
It seems that Microsoft can create AI products without relying on OpenAI. Though I'd speculate that the AI was trained on clinical information from hospitals that use Nuance Communications, and that it received medical records that way.
In any case, it is a positive development.
-
more accurate.
Until it's not... then what? Who's liable? Google? Amazon? Microsoft? ChatGPT? Look, I like AI because it's fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.
The doctor who reviews the case, maybe?
In some cases the AI can effectively "see" things a doctor can miss and direct them to check for a particular disease. Even if the AI is only able to rule out some cases, it would be useful.
-
AI for pattern recognition (statistical stuff) is fine, IMHO; that's different from expecting original thought, reasoning, or understanding, which the new "AI" does not do, despite the constant hype.
This. Honestly, things like image detection, anomaly detection over big data sets, and semantic search all seem very useful in professional contexts.
Generative AI not heavily grounded in real data is better suited to no-risk tasks.
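To make "anomaly detection over big data sets" concrete, here is a minimal example of the kind of statistical pattern recognition meant above: flagging readings that sit far from the median, measured in units of the median absolute deviation (a robust z-score). The readings are made up for illustration:

```python
import statistics

def robust_anomalies(values, threshold=3.0):
    """Return values whose deviation from the median exceeds `threshold`,
    measured in units of the median absolute deviation (MAD)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all, nothing to flag
    return [v for v in values if abs(v - med) / mad > threshold]

# Hypothetical temperature readings; one is clearly out of family.
readings = [98.1, 98.4, 98.0, 98.6, 104.9, 98.3, 98.2, 97.9]
print(robust_anomalies(readings))  # → [104.9]
```

Using the median and MAD rather than the mean and standard deviation keeps the detector itself from being dragged around by the very outliers it is supposed to find.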
-