For all its promise, AI cannot replace an authentic connection between doctor and patient.
Two narratives compete for our vision of the bedside practice of medicine in an AI future. A transformative technology of boundless possibilities that can process, organize, and even understand information beyond human capabilities. And a charismatic monster, trampling over patient privacy and confidentiality and perpetuating existing biases with flawed machine-learning algorithms.
Careful consideration of AI’s potential and challenges becomes tricky as it squeezes itself into daily medical practice. The notion of patient-centered care is already contending with system pressures, third-party interests, and a hangover from the previous “transformational” technology, the electronic medical record. But what if AI emerges as a valuable tool, assisting clinicians and enhancing patient outcomes? And what if guardrails curb abuses? If we adopt this third narrative, is AI propelling us forward or pushing us sideways: a powerful tool that improves patient care while also testing us with new challenges of its own creation?
I welcome AI’s assistance in cognitive offloading, especially when the human brain, a network of 100 billion neurons forming 100 trillion connections, is often faulty and prone to cognitive shortcuts and biases. It’s a predictive machine that mysteriously weaves random, nonlinear, and incomplete data into a story. It neglects ambiguity and minimizes doubt. We risk constructing a different story from the one the patient is trying to tell.
I once asked ChatGPT for insights on a female patient who visited the ER late at night experiencing chest pain and shortness of breath. AI promptly responded with an impressive differential diagnosis, but it failed to grasp the real story—interpersonal violence. I, too, missed this crucial detail.
My takeaway from that encounter wasn’t just to inquire about interpersonal violence, which AI may also learn to do, but to recognize the uncertainty I felt in my body. It was a signal to stop and reorient my approach, to ask better questions before pushing ahead with further workup. “What don’t I know about her? What am I missing in her story? What pushed her to visit the ER at this hour? What is she trying to say and can’t?”
This isn’t a criticism of AI, but rather more evidence of how challenging it can be to comprehend another person’s thoughts and feelings. Medicine has made significant progress in mapping illnesses, but as the medical humanities scholar Kathryn Montgomery said beautifully, “each patient, each instance of illness, is uncharted territory.”
Evidence-based medicine and jaw-dropping technology in hospitals don’t work on the wrong story. Responsible and accountable AI needs a “human in the room.”
AI is a clinical tool that, like any other tool, must be applied to the proper tasks. I embrace AI as a cognitive enhancement. Over the past three decades in emergency medicine, I’ve gained a bank of experiences—successes, mistakes, struggles, unexpected joys, fortunate victories, and unforeseen twists. I draw on this bank to leverage AI’s power for particular purposes, and to monitor its accuracy and reliability.
But what happens when the AI generation of clinicians replaces analog physicians? Will reliance on AI dilute the opportunities to cultivate the wealth of experience necessary for its responsible use? The slow, messy work of thinking through tough situations, complex cases, and difficult personalities is necessary. It compels us to reflect on our thought processes and consider the emotions and feelings in our bodies that influence our decisions.
A WORK IN PROGRESS
The third narrative foreshadows possible ripple effects on the nature and purposes of medical training that would follow from AI’s successful integration as a cognitive enhancement tool.
Traditional core competencies, such as clinical decision-making, physical exam skills, moral deliberation, and the humanities, must remain evergreen in an AI-driven future.
Working with vulnerable patients whose lives are stressed by misbehaving bodies, social issues, and feelings of fear and a loss of control can be muddy and slippery terrain. Traction is often buried in a pained grin or a weighty pause. Humans can’t offload this responsibility. The actor Anna Deavere Smith wrote, “We can learn a lot about a person in the very moment that language fails them.”
Studies highlight the potential of AI to surpass physicians in analyzing case studies. However, case studies originate from a privileged narrative position—a coherent, organized, and edited case history. In contrast, most of us start with story fragments, tender and imperfect things, expressed under pressured and time-limited circumstances. A medical case should never be confused with a patient’s story.
Patients aren’t data with faces; they’re individuals grappling with challenges. Each patient’s journey draws its own map, often without clear markers to guide the way. In the third narrative, physicians who rely too heavily on AI risk becoming insensitive to what they don’t know, gradually losing the curiosity and stamina to venture into these uncharted territories together with patients.
Even if AI were to develop a theory of mind that comprehends patients’ mental states, I would never offload this journey. Traveling these challenging roads with empathy, humility, and an open mind is where true meaning is found in medicine.
Understanding another human takes time. While AI’s advanced language models can perform remarkable computational tasks, getting at the heart of a patient’s story isn’t one of them. Not yet, anyway.
AI promises to revolutionize health care in numerous ways, but at what cost? What does better patient care mean to the health care workers and patients at the center of medicine, looking one another in the eye? Medicine stands at a critical juncture, facing a profound reckoning over its ability to control the advances brought by AI. This demands deep introspection into the fundamental values that must be preserved as AI integrates into the patient encounter. While I’m less concerned about AI replacing physicians, I worry it will change the practice of medicine just enough to make it fundamentally different and unrecognizable to patients and physicians.
AI, like me, is a work in progress. Our imperfections make us who we are and shape who we might become. In the third narrative, AI recognizes that medicine isn’t always about solving puzzles but rather about finding problems, a process predicated on trust and compassion. Sometimes, the process of connection is the outcome. Through stories, we learn a little more about ourselves and others, and for a moment, we feel less alone.