Robot performs first realistic surgery without human help: System trained on videos of surgeries performs like an expert surgeon
-
My son's surgeon told me about the evolution of one particular cardiac procedure. Most of the "good" doctors were laying down many tight stitches, while the "lazy" doctors laid down fewer, looser stitches. Turns out that the patients of the "lazy" doctors had a better recovery rate, so now that's the standard procedure.
Sometimes divergent behaviors can actually lead to better outcomes. An AI surgeon that is "lazy" probably wouldn't exist; engineers would stamp out that behavior before it even got to the OR.
i mean, you could just as easily say professors and universities would stamp those habits out of human doctors, but, as we can see… they don’t.
just because an intelligence was engineered doesn’t mean it’s incapable of divergent behaviors, nor does it mean the ones it displays are of intrinsically lesser quality than those a human in the same scenario might exhibit. i don’t understand this POV you have because it’s the direct opposite of what most people complain about with machine learning tools… first they’re too non-deterministic to such a degree as to be useless, but now they’re so deterministic as to be entirely incapable of diverging their habits?
digressing over how i just kind of disagree with your overall premise (that’s okay that’s allowed on the internet and we can continue not hating each other!), i just kind of find this “contradiction,” if you can even call it that, pretty funny to see pop up out in the wild.
thanks for sharing the anecdote about the cardiac procedure, that’s quite interesting. if it isn’t too personal to ask, would you happen to know the specific procedure implicated here?
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
thank you for removing my gallbladder, robot, but i had a brain tumor
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
Really hope they tried it on a grape first at least.
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
Good, now add jailtime for the ceo if something goes wrong, then we'll have a very safe tech.
-
My son's surgeon told me about the evolution of one particular cardiac procedure. Most of the "good" doctors were laying down many tight stitches, while the "lazy" doctors laid down fewer, looser stitches. Turns out that the patients of the "lazy" doctors had a better recovery rate, so now that's the standard procedure.
Sometimes divergent behaviors can actually lead to better outcomes. An AI surgeon that is "lazy" probably wouldn't exist; engineers would stamp out that behavior before it even got to the OR.
That's just one case of professional laziness in an entire ocean of medical horror stories caused by the same.
-
And then you're lying on the table. Unfortunately, your case is a little different from the standard surgery. Good luck.
What if I'm on the table telling the truth?
-
Good, now add jailtime for the ceo if something goes wrong, then we'll have a very safe tech.
know what? let's just skip the middleman and have the CEO undergo the same operation. you know like the taser company that tasers their employees.
can't have trust in a product unless you use the product.
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
Hold on, 3P0... you got a little piece of human stuff stuck on your right end effector clamp top hinge pin. There, all good! Continue!
-
That's just one case of professional laziness in an entire ocean of medical horror stories caused by the same.
Eliminating room for error (not to say AI is flawless, but that is the goal in most cases) is a good way to never learn anything new. I don't completely dislike this idea, but I'm sure it will be driven toward cutting costs, not saving lives.
-
Good, now add jailtime for the ceo if something goes wrong, then we'll have a very safe tech.
Nah, just a thorough reproduction of the consequences of that wrong.
-
i mean, you could just as easily say professors and universities would stamp those habits out of human doctors, but, as we can see… they don’t.
just because an intelligence was engineered doesn’t mean it’s incapable of divergent behaviors, nor does it mean the ones it displays are of intrinsically lesser quality than those a human in the same scenario might exhibit. i don’t understand this POV you have because it’s the direct opposite of what most people complain about with machine learning tools… first they’re too non-deterministic to such a degree as to be useless, but now they’re so deterministic as to be entirely incapable of diverging their habits?
digressing over how i just kind of disagree with your overall premise (that’s okay that’s allowed on the internet and we can continue not hating each other!), i just kind of find this “contradiction,” if you can even call it that, pretty funny to see pop up out in the wild.
thanks for sharing the anecdote about the cardiac procedure, that’s quite interesting. if it isn’t too personal to ask, would you happen to know the specific procedure implicated here?
Not specifically, but I think the guidance is applicable to most incisions of the heart. I think the fact that it's a muscular, constantly moving organ makes it different from something like an epidermal stitch.
And my post isn't to say "all mistakes are good" but that invariability can lead to stagnation. AI doesn't do things the same way every single time, but it also doesn't aim to "experiment" as a way to grow, or to self-reflect on its own efficacy (which could lead to model collapse). That's almost at the level of sentience.
-
What if I'm on the table telling the truth?
That’s a different thing indeed. In your case the AI goes wild, strip dances and tells bad jokes (while flirting with the ventilation machine).
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
I want that thing where a light "paints" over wounds and they heal.
-
Fringe cases, yes, like rare conditions. It almost certainly won't be able to handle something completely unexpected.
The AI will (probably) be familiar with every possible issue, to a degree no human will be able to match.
I'm not sure what kind of "completely unexpected" situation could happen that a normal surgeon would handle better?
But I agree it would have to be a lot smarter than current LLMs and self-driving, for instance. Like a whole other level of smarter. But I think that is where we are heading.
-
The AI will (probably) be familiar with every possible issue, to a degree no human will be able to match.
I'm not sure what kind of "completely unexpected" situation could happen that a normal surgeon would handle better?
But I agree it would have to be a lot smarter than current LLMs and self-driving, for instance. Like a whole other level of smarter. But I think that is where we are heading.
Would it be able to handle a sudden power outage? A fire alarm going off?
-
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials and with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real life medical emergencies.
Naturally, as this kind of thing moves into use on actual people, it will be used on the wealthiest and most connected among us in equal measure to us lowly plebs, right... right?
-
That's just one case of professional laziness in an entire ocean of medical horror stories caused by the same.
Or, more likely, they weren't actually being lazy; they knew they needed to leave room for swelling and healing. The surgeons who did tight stitches thought theirs was better because it looked better immediately after the surgery.
Surgeons are actually pretty well known for being arrogant, and claiming that anyone who doesn't do their neat, tight stitching is lazy is completely on brand for people like that.
-
Naturally, as this kind of thing moves into use on actual people, it will be used on the wealthiest and most connected among us in equal measure to us lowly plebs, right... right?
Are you kidding!? It'll be rolled out to poor people first! (gotta iron out the last of the bugs somehow)
-
I'd bet on at least twenty years before it's in general use, since this is a radical change and it makes sense to be cautious about new technology in medicine. Initial clinical trials for some common, simple surgeries within ten years, though.
This is one of those cases where an algorithm carefully trained on only relevant data can have value. It isn't the same as feeding an LLM the unfiltered Internet and then expecting it to learn only from the non-crazy parts.
This is one of those cases where an algorithm carefully trained on only relevant data can have value.
Hopefully more people learn that this is the important part.
It becomes nonsense when you just feed it everything and the kitchen sink. A well-trained model works.
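To make that concrete, here's a minimal sketch of what "only relevant data" could look like in practice. The record fields, tags, and sources below are invented for illustration; they aren't from the actual study, they just show the curation step being described:

from dataclasses import dataclass

@dataclass
class Clip:
    source: str            # where the footage came from (hypothetical field)
    procedure_tag: str     # what the clip is labeled as showing
    expert_reviewed: bool  # whether a surgeon verified the annotation

# Only the target procedure makes it into the training set.
ALLOWED_PROCEDURES = {"cholecystectomy"}  # gallbladder removal

def curate(clips):
    """Keep only expert-reviewed clips of the allowed procedures."""
    return [c for c in clips
            if c.procedure_tag in ALLOWED_PROCEDURES and c.expert_reviewed]

raw = [
    Clip("teaching_hospital_archive", "cholecystectomy", True),
    Clip("random_internet_upload", "cholecystectomy", False),
    Clip("teaching_hospital_archive", "appendectomy", True),
]

training_set = curate(raw)
print(f"kept {len(training_set)} of {len(raw)} clips for training")

The point isn't the code itself, it's that everything the model learns from has already been vetted, unlike scraping the open internet and hoping for the best.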
-
Would it be able to handle a sudden power outage? A fire alarm going off?
What happens to an ECMO machine during a power outage or fire alarm?
The idea should be to augment healthcare professionals with tools they can use. The hospital will need to have contingencies in place. I agree if your point is that we can’t replace people with machines. But we can increase effectiveness with them.