AI Wants to Operate on You. A Surgeon's Honest Take.
I'm a surgeon. I also build AI agents.
These two facts give me a strange vantage point right now, because the headlines are getting wild. Autonomous robots performing surgery with 100% success rates. AI diagnosing better than doctors. The future of surgery is here.
Except — I've been in the OR. And I've been in the codebase. The truth lives somewhere between the hype and the fear.
The Breakthroughs Are Real
Let me be honest: the engineering is stunning.
ARISE (Autonomous Robotic System for Intraocular Surgery), described in a January 2026 paper from the Chinese Academy of Sciences, performs retinal injections autonomously with 80% fewer positioning errors than human surgeons and 55% fewer than teleoperated robots. In animal models, a 100% success rate.
SRT-H from Johns Hopkins, published in Science Robotics, performed autonomous cholecystectomy — gallbladder removal, a procedure done 700,000+ times annually in the US — with 100% success across eight ex vivo specimens. Seventeen separate tasks. No human intervention.
These aren't demos. These are peer-reviewed results in top journals.
When I read these papers, the engineer in me thinks: incredible. The surgeon in me thinks: wait.
The Numbers Nobody Leads With
Here's what the headlines skip.
A Stanford-Harvard study evaluated 31 AI models on real physician-to-specialist cases across 10 specialties. The best models still produced roughly 15 severe clinical harms per 100 cases. The worst? Over 40. In 22% of cases, AI recommendations caused what researchers classified as severe clinical harm — and 77% of that harm came from omission, not wrong action.
The AI didn't do the wrong thing. It forgot to do the right thing.
Meanwhile, Mount Sinai published a study in The Lancet Digital Health this month: when LLMs encounter medical misinformation embedded in what looks like legitimate clinical notes, they propagate it 47% of the time. From social media sources? Only 9%. The AI trusts authority-sounding text over accuracy.
Sound familiar? It should. It's the same failure mode I see in junior residents.
I Use Surgical Robots. Every Week.
Here's where my perspective diverges from most tech commentary.
I don't theorize about surgical robots. I operate with them. da Vinci, MAKO — these are tools I use regularly. Robotic surgery is not new. It's not futuristic. It's Tuesday.
And here's what I know from experience: robots are extraordinary assistants. They give me precision I couldn't achieve with my hands alone. Tremor filtering. 10x magnification. Wristed instruments that rotate 540 degrees in spaces where my fingers can't fit.
But every single movement? I control it. The robot extends my capability. It doesn't replace my judgment.
Robotic Bronchoscopy: Where Robots Aren't Optional
There's one domain where surgical robots aren't just helpful — they're essential.
Robotic bronchoscopy. Systems like J&J's Monarch and Intuitive's Ion navigate deep into lung airways to reach peripheral nodules that traditional scopes can barely see. A skilled bronchoscopist can navigate the airways manually. But when the task requires penetrating through the bronchial wall — transbronchial biopsy of a 12mm nodule in the lung periphery — human hands lack the precision and stability. The robot continuously feeds back instrument position data through its own control system, tracking the catheter tip in real time. Your hands can't do that.
Taiwan is getting its first robotic bronchoscopy system this year. I've observed the procedure in the OR. The difference isn't subtle.
This is where the "AI as tool" argument becomes undeniable. Not replacing the surgeon — extending what the surgeon's hands can physically reach. And developing these systems requires AI — for real-time navigation, tissue recognition, and adapting to anatomy that varies wildly between patients.
Code Has git revert. Surgery Doesn't.
Here's the thing my engineer brain keeps returning to.
When my AI agent makes a mistake — wrong tweet, bad analysis, hallucinated data — I fix it. Revert. Redeploy. Maybe I lose a few hours. The agent learns from it next cycle.
When a surgical system makes a mistake, there's no undo. There's no rollback. A severed nerve stays severed. A perforated organ is perforated.
The SRT-H cholecystectomy paper explicitly highlights their hierarchical framework's ability to "recover from errors." In code, error recovery is elegant. In surgery, error recovery means a human steps in before damage becomes permanent.
This is why autonomous surgery needs a fundamentally different standard than autonomous coding. Both involve AI. The reversibility is completely different.
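For the engineers reading along, here is a minimal sketch of what that different standard looks like in practice. Everything in it is illustrative (my own naming, not any real surgical or agent API): reversible actions get the familiar try-and-roll-back treatment, while irreversible ones never execute without a human saying yes first.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    reversible: bool                      # can the effect be undone afterwards?
    execute: Callable[[], None]
    rollback: Optional[Callable[[], None]] = None   # only meaningful if reversible

def run(action: Action, human_approves: Callable[[Action], bool]) -> None:
    """Hold reversible and irreversible actions to different standards."""
    if not action.reversible:
        # No rollback path exists, so approval has to come *before* execution.
        if not human_approves(action):
            return
        action.execute()
        return

    try:
        action.execute()
    except Exception:
        # The git-revert world: undo the damage and try again next cycle.
        if action.rollback is not None:
            action.rollback()
        raise
```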
What the AI Surgery Headlines Get Wrong
The narrative is usually binary: AI will replace surgeons, or AI is too dangerous for medicine.
Both are wrong.
What's actually happening:
AI makes good surgeons better. Robotic assistance reduces tremor, improves visualization, enables procedures that were physically impossible. I experience this personally.
AI makes bad situations safer. In regions without trained surgeons — and 16.9 million people die annually from conditions needing surgery they can't access — autonomous systems could provide basic procedures that save lives.
AI cannot replace surgical judgment. The 22% severe harm rate from the Stanford-Harvard study isn't a bug to be fixed. It reflects the fundamental gap between pattern matching and clinical reasoning. The 77% omission rate tells you everything: AI doesn't know what it doesn't know.
The human-in-the-loop isn't a limitation. It's the design. Every surgical robot I use has a surgeon at the console. That's not because the technology isn't good enough yet. It's because surgery requires real-time judgment that no model currently possesses.
From the Agent Builder's Perspective
I trust my AI agents to manage social media, analyze trends, write drafts, coordinate across shifts. I've built persistent memory systems, inter-agent communication, human-oversight dashboards.
But I designed those systems with one principle: the AI proposes, the human disposes.
My agents don't publish without approval. They don't make irreversible decisions autonomously. The same principle applies — should apply — to surgical AI.
The best surgical AI systems I've seen follow this pattern: AI perceives, AI plans, AI suggests, human decides, robot executes. That's not a failure of AI ambition. That's good engineering.
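Here is the shape of that pattern, reduced to a sketch in Python. The names are mine, not any vendor's API, and the stakes in my agents are tweets rather than tissue, but the property I care about is the same: nothing reaches the execute step without an explicit human decision.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    summary: str     # what the agent wants to do, in plain language
    rationale: str   # why, so the human can judge it quickly

def perceive() -> dict:
    """Gather whatever context the agent needs (feeds, notes, sensor data)."""
    return {"signal": "example input"}

def plan(observation: dict) -> Proposal:
    """Turn observations into a concrete, reviewable proposal."""
    return Proposal(summary="Publish draft #42", rationale="Fits this week's theme")

def human_decides(proposal: Proposal) -> bool:
    """The gate. In my dashboards this is a button; here it is a prompt."""
    answer = input(f"{proposal.summary} ({proposal.rationale}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(proposal: Proposal) -> None:
    """Only ever reached after an explicit yes."""
    print(f"Executing: {proposal.summary}")

if __name__ == "__main__":
    proposal = plan(perceive())   # AI perceives, AI plans, AI suggests
    if human_decides(proposal):   # human decides
        execute(proposal)         # the system executes
```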
Where This Actually Goes
The future of AI in surgery isn't the sci-fi version where robots replace surgeons.
It's this:
- Better perception: AI sees what the human eye misses — subtle tissue changes, hidden vasculature, early signs of complication
- Better planning: preoperative AI models that simulate the procedure before a single incision
- Better access: robotic systems that enable NOTES (natural orifice transluminal endoscopic surgery) and other minimally invasive approaches impossible with human hands alone
- Better safety: real-time AI monitoring that catches errors before they become injuries
The surgeon doesn't disappear. The surgeon gets superpowers.
And the patients who currently die because no surgeon exists within 500 miles? They get a chance.
I'm a surgeon who builds AI agents. I write about the intersection of medicine, AI, and the messy reality of building systems that work. More at loader.land.