
Our Next Thought: How Medicine Meets This Moment. Why Physicians Must Lead the Next Era of Human-Centered AI
By Michael M. Karch, MD
Published on 11/15/2025
The first time I was confronted with Bill Gates’s prediction that artificial intelligence might replace physicians in the next decade, I was standing in a quiet hospital hallway between cases. A headline lit up my phone—an interview suggesting that algorithms might soon perform what we had spent decades mastering. The feeling that followed is one I still hear whispered among colleagues: a mix of defensiveness, curiosity, and disbelief.
Could a machine ever understand the emotional tremor in a parent’s voice during a midnight fever? Or the gut feeling that tells you something is wrong long before the labs do?
The 2 a.m. judgment call that separates safe from unsafe, or proceed from pause?
Across all specialties we recognize these moments because they define the art of medicine.
But within days of that headline, another truth crystallized for me: we already practice inside uncertainty. We work with imperfect imaging, incomplete histories, shifting recoveries, and patients who do not follow tidy protocols. In that world, decision-support tools may prove valuable not because they replace our judgment, but because they illuminate the uncertainty we navigate every day.
And so the question is no longer:
“Will AI replace physicians?”
That is a small thought.
The larger, more consequential thought—the one that will define an entire generation of clinicians—is this:
How will medicine meet this moment?
And will physicians shape the intelligence that will shape our future?
That thought is more relevant. That is a Next Thought.
Beyond the Hype: What AI Actually Is
Strip away the headlines, and AI is simply software designed to mimic specific elements of human intelligence. Machine learning is its engine: millions of examples refined into patterns that help predict risk and surface insights.
In surgical care, this means more than workflow shortcuts or navigation. Picture a postoperative day-one patient after a hip replacement. Clean incision. Controlled pain. Stable vitals. Instinct says discharge.
But in the background, a model trained on a million recovery arcs quietly integrates signals invisible to the human eye—sleep fragmentation, gait irregularity, micro-shifts in cadence, subtle heart-rate variability changes. It flags a meaningful readmission risk.
The machine isn’t issuing commands.
It’s whispering: “Look again.”
That whisper is not automation.
It is structured uncertainty—a disciplined reflection that helps us see beyond our blind spots, not surrender our clinical judgment.
Used well, it does not diminish expertise.
It expands it.
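As a thought experiment, here is a minimal sketch in Python of how such a decision-support “whisper” might be wired together. The signal names, weights, and alert threshold are illustrative assumptions, not a validated model; a real system would be trained on institutional recovery data and prospectively validated.

```python
# Hypothetical sketch of a post-op readmission "whisper": a simple logistic
# risk score over recovery signals. Feature names, weights, and the alert
# threshold are illustrative assumptions, not a validated clinical model.
import math
from dataclasses import dataclass

@dataclass
class RecoverySignals:
    sleep_fragmentation_index: float   # 0 (consolidated sleep) .. 1 (highly fragmented)
    gait_irregularity: float           # 0 (steady cadence) .. 1 (erratic)
    hrv_drop_pct: float                # % decline in heart-rate variability vs. baseline
    pain_score_trend: float            # slope of reported pain, points/day

# Assumed coefficients; in practice these would be learned from thousands
# of recovery arcs and calibrated for the local patient population.
WEIGHTS = {
    "sleep_fragmentation_index": 1.8,
    "gait_irregularity": 2.1,
    "hrv_drop_pct": 0.04,
    "pain_score_trend": 0.9,
}
INTERCEPT = -4.0
ALERT_THRESHOLD = 0.20  # triggers a prompt for review, never an order

def readmission_risk(s: RecoverySignals) -> float:
    """Return an estimated readmission probability (illustrative logistic score)."""
    z = INTERCEPT + sum(WEIGHTS[name] * getattr(s, name) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def whisper(s: RecoverySignals) -> str:
    """Surface a prompt for the clinician; the human makes the call."""
    risk = readmission_risk(s)
    if risk >= ALERT_THRESHOLD:
        return f"Look again: estimated readmission risk {risk:.0%} exceeds review threshold."
    return f"No flag: estimated readmission risk {risk:.0%}."

# Example: clean incision, controlled pain, stable vitals -- but noisy background signals.
print(whisper(RecoverySignals(
    sleep_fragmentation_index=0.7,
    gait_irregularity=0.5,
    hrv_drop_pct=18.0,
    pain_score_trend=0.3,
)))
```

The design choice that matters sits in the last function: the output is a prompt to look again, never a directive, and the threshold governs how often that prompt interrupts the clinician rather than what the clinician decides.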
Ethics First: Good Thoughts, Good Words, Good Deeds
I do not begin my thinking about AI with models or code.
I begin with an ancient Zoroastrian principle that feels urgently modern:
Good thoughts. Good words. Good deeds.
Good thoughts mean elevating AI literacy—not to turn physicians into data scientists, but to empower us to ask essential questions:
What data shaped this model?
Whose stories are represented, and more importantly, whose are missing?
What definition of “success” is being optimized?
Good words mean defining real clinical problems before the code is written:
Reducing readmissions. Predicting wound complications. Preventing opioid dependence. Supporting safe discharges.
The words shape the work.
Good deeds mean building systems with—and not merely for—the people most affected: patients, nurses, clinicians, and communities historically left behind by technology.
Orthopedic surgeons do not own this ethical conversation.
But our vantage point—at the intersection of engineering, healing, biomechanics, and human emotion—is uniquely valuable.
AI needs that perspective.
Healthcare needs that perspective.
And our patients deserve that perspective.
Embodied Surgical Intelligence: Medicine’s Next Evolution
To understand the next era of surgical care, we must look at what is emerging—not fully formed, but clearly visible on the horizon.
Robotic and navigational systems began as precision tools. But increasingly, they have become sensors—collecting ligament maps, tension signatures, tool trajectories, haptic loads, bone density profiles. Multiply that across thousands of cases and it begins to resemble something new: collective surgical experience in digital form.
This is the seed of Embodied Surgical Intelligence (ESI).
ESI is not today’s reality.
It is tomorrow’s architecture—the fusion of human expertise with multilayered sensing, real-time processing at the point of care, and adaptive learning models that grow through experience.
It is the surgical world drawing lessons from autonomous vehicles, advanced aviation systems, industrial robotics, and other sensor-rich machines—but applying them to a domain where one fundamental truth remains:
In autonomous cars, the driver leaves the loop.
In medicine, the human expert never will.
ESI does not strive for autonomy.
It strives for augmentation.
It imagines tools that extend human perception: instruments that sense drift before your hands feel it, systems that recalibrate as tissues deform, models that have learned from thousands of cases and quietly alert us when today's plan deviates from what has historically yielded the best outcomes.
ESI is not a “self-driving OR.”
It is a human-in-the-loop intelligence mesh, built on federated or aggregate learning across millions of cases, and that is medicine’s Next Thought.
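For readers curious what “federated or aggregate learning” looks like mechanically, here is a minimal Python sketch of a FedAvg-style round: each hypothetical center trains a toy model on its own cases, and only the resulting weights are averaged. The site setup, toy linear model, and synthetic data are illustrative assumptions, not any vendor’s or registry’s actual pipeline.

```python
# Illustrative sketch of federated averaging (FedAvg-style): hospitals keep
# their case data local and share only model weights for aggregation.
# The toy linear model and synthetic "sites" are assumptions for illustration.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=20):
    """One site's local training: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Average locally updated weights, weighted by each site's case count."""
    updates, counts = [], []
    for X, y in sites:
        updates.append(local_update(global_w, X, y))
        counts.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(counts, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 1.2])                   # hidden "ground truth" relationship

def make_site(n):
    X = rng.normal(size=(n, 3))                       # e.g., intraoperative sensor features
    y = X @ true_w + rng.normal(scale=0.1, size=n)    # e.g., an outcome surrogate
    return X, y

sites = [make_site(200), make_site(500), make_site(120)]   # three hypothetical centers

w = np.zeros(3)
for _ in range(25):                                   # repeated rounds: local train, then average
    w = federated_round(w, sites)

print("learned weights:", np.round(w, 2))             # approaches true_w without pooling raw data
```

The key property is what never moves: raw case data stays at each site, and only weights travel. The clinician remains in the loop to decide whether, and how, the aggregated model’s output changes the plan.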
The Innovation Paradox: The Lesson We Cannot Ignore
Yet even as these possibilities emerge, we face a paradox familiar to every clinician.
On one side are academic AI models—elegant, technically striking, and clinically irrelevant because they are trained on clean data far removed from the messy reality of real patients.
On the other side are commercial systems that spread rapidly because they solve administrative pain. They speak fluently to revenue cycles, not to bedside judgments. They arrive as black boxes and demand trust.
Caught between these systems are clinicians.
Expected to adopt tools we did not design.
Built on assumptions we never approved.
Judged for resisting systems that never considered our workflows.
We have lived this story before.
Do you remember the EMR rollout?
Built without us.
Imposed upon us.
Justified to us.
And we were told, “You’ll learn to love it.”
Did we? How did that go for you?
We cannot afford to repeat that mistake with a tool exponentially more powerful.
The Next Generation: Digital Natives in Short White Coats
Standing at the edge of this transformation is a remarkable generation of trainees—the digital natives in short white coats. They arrive fluent in the algorithmic language of their time. Recommendation engines shaped their music, their news, even their habits of attention. That fluency can be a liability when unchecked, but a tremendous strength when made deliberate.
But before they can innovate, they must learn something they cannot download:
the tacit, embodied knowledge of the bedside.
They must stand next to us long enough to understand:
the pause in a patient’s breath,
the intuition that something “isn’t right,”
the judgment that comes not from textbooks but from thousands of lived encounters.
That knowledge—the unspoken curriculum of medicine—is our generation’s irreplaceable contribution.
And this moment is not a handoff between generations.
It is a handshake.
A handshake between clinicians who hold decades of tacit bedside experience,
and trainees who hold the digital fluency to datafy, question, and elevate that experience.
Once they learn the bedside, we must then unleash their other gift—their capacity to interrogate the systems we never had the tools to question.
Let them examine the metrics that will quietly govern clinical decisions for decades.
Let them demand transparency and fairness.
Let them question—relentlessly—any system that does not serve the human being in the hospital bed.
If we do this well, this generation will not simply use AI.
They will shape it, correct it, humanize it, and rebuild the practice of medicine in ways our generation could not.
And in doing so, we may reverse the long-standing brain drain that drew brilliant young minds toward tech instead of toward the wards. By creating a future where medicine embraces both human wisdom and computational insight, we invite them back—back to the calling we all answered long ago.
This is how the next generation—and all of us together—will future-proof medicine.
This is how we will meet the moment.
Meeting the Moment: The Next Thought in Action
AI will change medicine.
It already has.
But the direction of that change remains unwritten.
And this is where Next Thought must become action.
The small thought asks:
“Will AI replace us?”
The large thought asks:
“How will physicians lead the next era of human-centered intelligence?”
Meeting the moment means asserting agency.
It means designing tools around healing rather than billing.
It means reclaiming time, attention, and connection at the bedside.
It means refusing to let the next generation inherit an AI-driven EMR 2.0.
A Human-Centered Future—If We Lead
AI will transform risk stratification, monitoring, patient follow-up, and resource allocation. It will surround us with more data than any human could process alone.
But none of this must diminish the human relationship that defines medicine.
Used well, it strengthens it.
Imagine a world where documentation and administrative burden fall away.
Where clinicians walk into exam rooms present, unhurried, and unencumbered.
Where the machine handles the noise, so the human can hear the signal.
That is not a loss of medicine.
It is the restoration of it.
A Closing Call to Physicians
We are entering the most consequential redesign of medical practice since the birth of modern surgery. The tools we build today will shape the moral, clinical, and human landscape of care for decades.
So, the question is not:
“Will AI change medicine?”
Again, it already has.
The real question—the defining question of our era—is:
How will medicine meet this moment?
With fear?
With nostalgia?
With surrender?
Or with leadership.
With clarity and the conviction that intelligence—human and machine—can evolve together.
If and only if physicians lead its creation.
Because AI can make us faster and more efficient, and it can make our hospitals both healthier and wealthier.
But it cannot make us human.
Only we can carry that forward.
And if we approach this next era with good thoughts, good words, good deeds, then what comes next, our Next Thought, will not be a threat to our profession.
It will be the evolution of it.
By Michael M. Karch, MD, FAAOS. Mammoth Orthopedic Institute | Harvard Business Analytics Program | MIT Executive Program in Machine Learning



