Thursday, March 12, 2026

Will I Become Obsolete?

Artificial Intelligence...Will Your Next Doc Be a Bot?

    AI is everywhere, and not a day passes when I don't hear about some new wrinkle: AI reading x-rays and CAT scans, AI helping to select chemotherapy regimens, AI "therapists" providing instant online support to those in psychic distress. All of this got me thinking: Will I become obsolete?

    Of course, I don't just mean me; I mean most doctors and nurses, nurse practitioners, physician associates, respiratory therapists, and all sorts of other professionals. Now this seems kind of silly when you really think about it, because--at least for now--"AI" doesn't have a body with which to sense a patient or physically guide one through physical therapy. It doesn't have the feel and dexterity to manipulate a ventilator in the ICU. A recent article in Noema, though, notes that AI agents are actually "recruiting" living humans to act as "sensors" that supply the data they need to do their work. In one example, an AI agent for an insurance company calls a freelance photographer to go out and take pictures of a home damaged by a windstorm so the AI can process the insurance claim.

    In that piece, author Umang Bhatt suggests another scenario--which may be closer than you think--of a medical workup that requires no human clinician to execute it:

A single observation unlocks a cascade of actions the agent could not have initiated without human sensing. Consider a patient whose AI agent suspects she has a neurological condition based on the symptoms she has described. The agent cannot conduct an MRI, so it schedules one and asks her to go to the appointment. Then, after receiving the physical-world input of her scan, her agent can fire off a chain: process the file, cross-reference previous images, flag any anomalies, order bloodwork and book a specialist, all without asking her again.

    One wonders if the "specialist" also ends up being an AI agent. Who needs doctors!

How We Think About Expertise

    Those who are ailing, or worried about a potential health problem, routinely consult the internet via various AI agents free to the public: Gemini (Google), ChatGPT (OpenAI), Claude (Anthropic), or Grok (xAI). I've experimented with these and found that they generally give pretty good advice, always appended with disclaimers, of course, like "AI can make mistakes" or "Not intended to replace guidance from a health care professional."

    In a world in which it seems to take forever to get an appointment with a real person, and in which doing so often means fighting traffic, paying the costs, and spending too much time in a waiting room, only to hear the doctor or other provider say, "We need to send you to a specialist," I can see why AI seems like a cheap and convenient self-screening tool.

    The AI knows all. It can scan vast quantities of research in seconds. It can summarize the recommendations of specialist professional societies and render them into language a person can easily understand. This, after all, is how we think about expertise: vast, detailed knowledge shared with us by someone who can sense our ability to understand it and adjust the "output"--advice--accordingly.

    Who needs me anymore?

How Medical Providers Think About Expertise

    Some of our thinking about expertise can be, well, self-serving. A person says they think they have this or that problem, and a dismissive response might be, "Oh, well, which medical school did you go to?" That's a problem, because my 40+ years of experience have shown me just how often a person's "body intuition" can be spot-on--or at least point one in the right direction. But medical providers do have a point, since thinking about medicine is less about the quantity of information than about how that information is processed.

    When I see a patient, I am attuned to their narrative about what they think is happening and why. In science we call this a hypothesis. In parallel, my brain is running down alternative hypotheses, or what we call the differential diagnosis. We then run through a process of confirming or disconfirming each based on what we see and hear from the patient. "Are you nauseated? Have you had a fever?" and so on. Ideally, we use sensors like a stethoscope or a blood pressure cuff, or simply our eyes, to add to the database that helps gradually whittle down that differential until one most likely diagnosis remains.
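    For the technically curious, here is a toy sketch of that whittling process. The condition names and findings are invented for illustration only; this is not clinical logic, just the shape of the reasoning:

```python
# Toy illustration of the differential-diagnosis "whittle": each candidate
# diagnosis lists findings that would support it; observations confirm or
# disconfirm candidates until a leading hypothesis remains.
# (Conditions and findings are hypothetical, for illustration only.)

DIFFERENTIAL = {
    "gastroenteritis": {"nausea", "fever", "diarrhea"},
    "food_poisoning":  {"nausea", "vomiting"},
    "migraine":        {"nausea", "headache", "light_sensitivity"},
}

def whittle(differential, present, absent):
    """Score each hypothesis by supporting findings, dropping any
    hypothesis with explicitly disconfirming (absent) findings."""
    scores = {}
    for dx, findings in differential.items():
        if findings & absent:                  # disconfirming evidence
            continue                           # drop this hypothesis
        scores[dx] = len(findings & present)   # confirming evidence
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "Are you nauseated? Have you had a fever?"
ranked = whittle(DIFFERENTIAL, present={"nausea", "headache"}, absent={"fever"})
print(ranked)   # [('migraine', 2), ('food_poisoning', 1)]
```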

    This is a thinking process AI can actually do quite well, but it can only work as well as its inputs. Nurses and doctors learn, over time, a subtle skill, a "feel" for the person they are evaluating. Sometimes the "diagnosis" is arrived at in seconds, seemingly like a psychic phenomenon, and is often called "intuition." But neuroscience tells us that intuition is really just a very experienced brain collecting specific sensory data (smell, touch, appearance--a person's "look"--and subtle non-verbal clues) very quickly into what amounts to a nearly instantaneous pattern-recognition event. So AI and we mere humans actually can process things very similarly (artificial neural networks were, after all, loosely inspired by our neurology).
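    To make the pattern-recognition analogy concrete, here is a minimal nearest-neighbor sketch with made-up "sensory" features. It is only an analogy, assuming a small library of remembered cases, for how experience can yield a near-instant impression:

```python
import math

# Minimal nearest-neighbor sketch of "intuition": match a new patient's
# sensory impression against remembered cases and recall the closest
# pattern. Features and cases are made up for illustration.

REMEMBERED_CASES = [
    # (pallor, clamminess, breathing_effort, alertness) -> impression
    ((0.9, 0.8, 0.7, 0.3), "looks shocky"),
    ((0.2, 0.1, 0.3, 0.9), "looks well"),
    ((0.6, 0.3, 0.9, 0.6), "respiratory distress"),
]

def intuit(observation):
    """Return the remembered impression closest to the new observation."""
    def distance(case):
        features, _ = case
        return math.dist(features, observation)
    _, impression = min(REMEMBERED_CASES, key=distance)
    return impression

print(intuit((0.8, 0.7, 0.8, 0.4)))   # -> "looks shocky"
```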

    AI just lacks some of the "feels," and it's only as good as its prompts.

If Only It Worked That Perfectly

    Many folks have gone to their doctors or other medical providers and gotten a reasonably quick, accurate, actionable diagnosis. That's how it's supposed to work, right? But how many have gone in, shared their symptoms, and found the doctor puzzled, or worse, dismissive of the symptoms as "nerves" or "stress," without being offered any actionable way to resolve the discomfort? A lot. I see them in my office every week.

    For now, "AI Medicine" will be limited by that very framework of Modern Medicine, a framework that sees bodies as subject to random failures (genetic or environmental), universally responsive to the usual interventions (surgery, drugs, therapy), and unable to see "pink squishy things"--people, biology--in a holistic way. Human-programmed, AI will gradually get better at seeming human, and even envincing some human understanding, since it's programmed by people who do understand people, and it trains on the vast sum of human experience in anatomy, physiology, psychology, and even the humanities.

    But I don't think it will be thinking "outside the box" of the modern medicine I described above. I know this because I've tried it. It will be a while before AI agents are able to fully integrate other, more holistic traditions, like the unique conceptualizations of, say, Chinese medicine or classical homeopathy, into medical analysis. But this is beginning to happen: some researchers are developing AI to aid diagnosis and treatment planning in traditional Chinese medicine, and I have experimented with a system piloted by ZeusSoft and their RadarOpus Homeopathic Practice Partner.

    Concerning the homeopathic AI: it's pretty interesting, and it's deliberately designed not to "find the remedy." Rather, it's designed to act as a "thinking partner" that supplements the practitioner's own reasoning. As I noted earlier, it's dependent on input; as the saying goes, "garbage in = garbage out." But it has been an interesting "collaborator." Like any such system it can make mistakes in remedy selection, but it is very good at reviewing key principles and clinical pearls developed and written down by the homeopathic "masters" like J.T. Kent, Samuel Hahnemann, and William Boericke. It's a good review of basic principles that can refine case thinking.
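    As a sketch of that design choice (my own illustration; I have no view into RadarOpus's internals, and the function names and prompts here are entirely hypothetical), the difference between a "remedy finder" and a "thinking partner" comes down to what each returns:

```python
# Design sketch only: contrasting a "remedy finder" with a "thinking
# partner." Hypothetical code; it does not reflect RadarOpus internals.

def remedy_finder(case_notes: str) -> str:
    """The design to avoid: swallow the case, emit a single answer."""
    # (case_notes unused in this placeholder; the point is the oracle-style
    # output the practitioner can't interrogate)
    return "Remedy X"

def thinking_partner(case_notes: str) -> list[str]:
    """The design described above: return prompts that send the
    practitioner back to first principles rather than a conclusion."""
    return [
        "Which symptoms here are truly characteristic (strange, rare, peculiar)?",
        "Does the whole case picture match the remedy's core themes, or only isolated symptoms?",
        "What would Kent, Hahnemann, or Boericke flag as the guiding symptom?",
    ]

for prompt in thinking_partner("patient notes go here"):
    print("-", prompt)
```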

What's Missing?

    Tech guys who love AI and see it as humanity's future often seem to me to miss that there may be something human brains have that machine brains don't--and may never have. I say this because we don't really understand how our brains, our consciousness, actually "work." At best, AI uses digital technology (which is not at all like a brain, despite many scientists' and ordinary people's persistent inclination to describe human brains as "computers") and programming to mimic the way we think human thinking works. How can we be so sure that AI is "thinking" the way humans "think," if we don't really understand the architecture and process of "pink-squishy-thing-thinking"?

Am I Obsolete?

    I don't believe so, at least not yet. AI is a great tool, one intended to be "better" than people, but it is still built by people, and by people who don't yet fully understand what it is to be a thinking person. It's just a really good approximation. It can act like a person, but only like the "person" it is programmed to be: obsequious, limited by rules, polite, restricted, and ultimately owned by a corporation.

    Patients often describe their clinician visits to me. Some are warm; the patients feel genuinely cared for, and this cultivates trust and hope. Others are cold and transactional, as if they had visited a store and, not getting what they came to "shop" for, were turned away or left unsatisfied. In some ways "AI medicine" is beginning to mimic the industrialized health care system: inputs and outputs, and if the clinician has no framework for the inputs a person provides, no useful output is forthcoming. I've seen cases in which the technical outputs (diagnosis, treatments) from the doctor were less than ideal, maybe outdated, and yet the patient speaks highly of them, because their doctor acts like a human being, and people intuitively accept that as the cost of feeling cared for.

    So am I obsolete? I think my humanity is one thing that maintains my relevance. While some people have begun to treat AI agents like people (and often not with a happy ending), they aren't "alive" in the ways we pink squishy things are alive. They can process a lot. They're fast. They're cheap. But they are essentially exchange systems that receive data, process it in amazing ways, and then return an output. They aren't human, and I would argue that--in a "therapeutic relationship"--it is that humanity, that connection, that matters. This makes AI a tool that may provide terrific things, but it doesn't (yet) make us clinicians obsolete.

Be well!

    
