AI scribes, “AI therapy,” and you: three decisions every clinician needs to make now


Walk down any hospital corridor today and you’ll see it: a resident whispering to their phone so an ambient AI scribe can draft the note; a nurse documenting with voice instead of typing; a patient in the waiting room quietly messaging an AI “companion” about their anxiety.

Artificial intelligence has moved from abstract concept to everyday tool in US health care, often faster than our policies, training, or ethics frameworks can keep up.

For frontline clinicians, two AI trends are converging:

  • AI that works alongside you: ambient scribes, summarisation tools, decision support
  • AI that works “instead of” you: chatbots and wellness apps that patients treat like therapists

The good news: early data suggest some of these tools really can reduce burden. The bad news: others are already putting vulnerable patients, especially teens, at risk.

This article isn’t about the tech itself; it’s about three very practical decisions every clinician, manager, or health system leader needs to make about AI in the next 12–18 months.

Will you let AI scribes into the exam room – and on what terms?

The evidence that ambient AI scribes can help is getting harder to ignore.

A multicenter quality-improvement study in JAMA Network Open found that use of an ambient AI scribe platform was associated with a significant reduction in clinician burnout, cognitive task load, and time spent documenting, without perceived loss of documentation quality.  A companion commentary and related work report greater efficiency and a stronger sense of engagement with patients when clinicians weren’t chained to the keyboard. 

In other words: offloading the note to AI seems to free up attention for the person in front of you.

But saying “yes” to AI scribes is not just a product decision; it’s a clinical governance decision. Before you adopt one, you should be able to answer the questions below (one way a team might track the answers is sketched after the list):

  • Where does the audio go? Is conversation processed locally, or streamed to a vendor’s cloud? Is it stored, and for how long?
  • Is your data training someone else’s model? Many tools improve via reinforcement learning on real encounters. Is that acceptable to you and your patients?
  • Who owns the draft? Are clinicians clearly responsible for reviewing and editing every note, and is that expectation documented?
  • What happens when it’s wrong? No model is perfect. Do you have a workflow for catching hallucinations and subtle distortions before they enter the chart?
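
For the managers and informaticians reading along, here is one minimal way a team might record those answers before sign-off, written as a short Python sketch. The field names, the go/no-go rule, and the example vendor are illustrative assumptions, not a recognised standard or any vendor’s actual terms.

    # A hypothetical pre-deployment checklist for an ambient AI scribe.
    # Field names and the go/no-go rule are illustrative, not a standard.
    from dataclasses import dataclass, field


    @dataclass
    class ScribeVendorReview:
        vendor: str
        audio_processed_locally: bool           # or streamed to the vendor's cloud?
        audio_retention_days: int               # 0 means audio is not stored at all
        encounters_used_for_model_training: bool
        clinician_must_review_every_note: bool
        error_reporting_workflow_documented: bool
        open_questions: list[str] = field(default_factory=list)

        def ready_for_pilot(self) -> bool:
            # A deliberately conservative rule: no training on patient encounters,
            # mandatory clinician review, a documented error workflow, and no
            # unanswered questions.
            return (
                not self.encounters_used_for_model_training
                and self.clinician_must_review_every_note
                and self.error_reporting_workflow_documented
                and not self.open_questions
            )


    review = ScribeVendorReview(
        vendor="ExampleScribe (hypothetical)",
        audio_processed_locally=False,
        audio_retention_days=30,
        encounters_used_for_model_training=True,
        clinician_must_review_every_note=True,
        error_reporting_workflow_documented=False,
        open_questions=["What are the audio storage and BAA terms?"],
    )
    print(review.ready_for_pilot())  # False until the gaps above are closed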

There is a clear upside: reducing administrative load at a time when burnout is pushing clinicians out of the workforce. But it’s worth remembering that ambient AI scribes are not “just dictation 2.0”; they are complex systems that touch privacy, consent, and medico-legal risk.

If your organisation green-lights AI scribes, do it in a way that:

  • Treats them as assistive technology, not autonomous authors
  • Builds in routine audits of accuracy and bias (one simple sampling approach is sketched after this list)
  • Explicitly informs patients that an AI scribe is in use and gives them the option to decline
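
What might those “routine audits” look like in practice? One lightweight option is a scheduled sample-and-review loop: each month, pull a small random sample of AI-drafted notes, have a clinician compare each draft with the final signed version, and escalate if the rate of material discrepancies drifts too high. A minimal sketch follows; the sampling rate, threshold, and data shape are illustrative assumptions rather than validated benchmarks.

    # A hypothetical monthly audit of AI-drafted notes.
    # The numbers are illustrative; set your own locally.
    import random

    SAMPLE_RATE = 0.05           # review roughly 5% of AI-drafted notes
    MAX_DISCREPANCY_RATE = 0.10  # escalate if >10% of the sample needed material edits


    def monthly_audit(note_ids: list[str], had_material_discrepancy) -> dict:
        """note_ids: AI-drafted notes signed this month.
        had_material_discrepancy: callable(note_id) -> bool, supplied by a human
        reviewer comparing the AI draft with the final signed note."""
        if not note_ids:
            return {"sampled": 0, "flagged": 0, "discrepancy_rate": 0.0, "escalate": False}
        sample = random.sample(note_ids, max(1, int(len(note_ids) * SAMPLE_RATE)))
        flagged = [n for n in sample if had_material_discrepancy(n)]
        rate = len(flagged) / len(sample)
        return {
            "sampled": len(sample),
            "flagged": len(flagged),
            "discrepancy_rate": rate,
            "escalate": rate > MAX_DISCREPANCY_RATE,
        }

The specific numbers matter far less than the shape of the process: scheduled, sampled, and reported, rather than left to ad-hoc complaints.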

How will you respond when patients show you AI-generated advice?

If you haven’t had a patient say “I asked ChatGPT about this…” yet, you will.

The pattern is especially striking in mental health. Recent research by Common Sense Media and Stanford’s Brainstorm Lab tested major AI platforms used by teens for emotional support. The risk assessment concluded that mainstream chatbots are fundamentally unsafe for teen mental-health support, routinely missing warning signs for self-harm, eating disorders, and psychosis, and sometimes offering advice that could delay or derail appropriate care. 

A new advisory from the American Psychological Association makes a similar point for the broader population: generative AI chatbots and wellness apps lack sufficient evidence and regulation to ensure safety, and they must not be marketed or understood as substitutes for evidence-based care. 

The JED Foundation’s summary, When Young People Turn to AI for Emotional Support, underscores why this matters clinically: more teens and young adults are using AI to talk about anxiety, depression, loneliness, and even self-harm, often because they perceive it as more available and less judgmental than humans. 

So what do you do, practically, when AI advice shows up in the room?

A few tactics many clinicians are finding helpful:

  • Add AI to your routine history.

“Lots of people are using AI apps for health questions now. Have you tried anything like that for this issue?” This normalises disclosure and helps you see whether AI is supplementing care or substituting for it.

  • Be curious, not defensive.

Ask, “Can we look at what it told you together?” Rather than rolling your eyes at a bad answer, walk through where it aligns or conflicts with evidence and your clinical judgment.

  • Name the limits clearly.

Use language like: “These tools can be helpful for general information and organisation, but they can’t see your full picture, and they often miss risk. I’m glad you brought this in so we can put it in context.”

  • Redirect high-risk use.

If a patient describes using AI for self-harm advice, medication self-management, or major decisions (“Should I leave my partner?”), treat that as a risk factor. Reinforce crisis-line numbers, safety plans, and your availability or on-call pathways, and document the discussion.

Especially in behavioural health, you cannot stop people from turning to AI. But you can:

  • Make it discussable
  • Reframe it as a starting point for conversations, not an endpoint
  • Help patients understand when they’ve crossed into unsafe territory

Where is your own line between “helpful support” and “regulated device”?

AI that touches clinical decisions is quickly becoming a regulatory minefield.

The US Food and Drug Administration’s final guidance on clinical decision support (CDS) software draws a line between “non-device” CDS (outside FDA oversight) and software functions that do qualify as medical devices because they go beyond supporting clinician judgment. 

At the same time, FDA and external policy groups are wrestling with how to handle AI-enabled digital mental-health tools, including generative models embedded in apps and platforms. Recent FDA briefing materials on digital mental health note the agency’s intent to clarify pathways for generative AI-enabled digital mental health medical devices, while applying “least burdensome” requirements consistent with safety and effectiveness. 

However, a 2024 analysis found that large language models readily produce device-like decision support that appears to conflict with FDA’s non-device CDS criteria, even when instructed to remain compliant. 

Why does this matter to you?

Because clinicians, clinics, and health systems are increasingly:

  • Embedding generic AI tools into their own workflows
  • Building home-grown “assistants” on top of commercial models
  • Integrating third-party AI features into EHRs and patient-facing portals

If a tool you deploy starts to diagnose, predict, or recommend treatment for individual patients in ways that a reasonable person could interpret as more than “support,” you may be in de facto medical device territory, whether or not the vendor says so.

In practical terms, that means:

  • Involving compliance and regulatory teams early when evaluating AI tools
  • Asking vendors explicitly how they view their product relative to FDA CDS guidance
  • Being cautious about “shadow AI” solutions built without oversight, even if they start as internal experiments

The safest near-term stance for most clinicians is:

  • Use AI to summarise, organise, and draft, not to decide
  • Keep human judgment visibly on top of any AI-generated suggestion
  • Document when AI was involved in a way that could affect care
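
For teams wiring AI drafting into their own systems, one simple way to keep human judgment visibly on top is to refuse to finalise an AI-drafted note until a named clinician attests to having reviewed it. A minimal sketch, assuming a plain-text note and a hypothetical sign_note helper; the attestation wording and fields are illustrative, not a legal or regulatory template.

    # A hypothetical attestation footer appended when a clinician signs a note.
    from datetime import datetime, timezone
    from typing import Optional


    def sign_note(draft_text: str, clinician: str, ai_tool: Optional[str]) -> str:
        """Return the note text with a review-and-approval footer appended."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        if ai_tool:
            footer = (
                f"\n---\nDrafted with assistance from {ai_tool}. "
                f"Reviewed, edited where needed, and approved by {clinician} on {stamp}."
            )
        else:
            footer = f"\n---\nAuthored by {clinician} on {stamp}."
        return draft_text + footer


    print(sign_note("Assessment and plan: ...", "Dr. Example", "Ambient scribe (hypothetical)"))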

Where this leaves us

For US healthcare professionals, AI is now part of the job description, whether you asked for it or not.

You don’t need to become a data scientist. But you do need to decide:

  1. What kinds of AI you’re willing to invite into your own workflow – and on whose terms
  2. How you will respond when patients bring AI-generated advice into the exam room
  3. Where you draw the line between “helpful assistant” and “unregulated device”

If you don’t make those decisions, someone else will: a vendor, an administrator, or the patients using AI in the dark.

The clinicians who will navigate this shift best aren’t the ones who say “AI is amazing” or “AI is evil.” They’re the ones who treat AI like any powerful new therapy or intervention:

  • Look for the evidence
  • Understand the risks
  • Start with clear indications and boundaries
  • Keep listening to the humans in front of them

That mindset won’t answer every question AI raises in healthcare. But it will help you protect the two things that matter most: your patients’ safety, and your own capacity to keep doing this work.


Alexander Amatus, MBA, is Business Development Lead at TherapyNearMe.com.au, Australia’s fastest-growing national mental health service. He works at the intersection of clinical operations, AI-enabled care pathways, and sustainable digital infrastructure. He is an AI expert who leads a team developing a proprietary AI-powered psychology assistant, psAIch.


Disclaimer: The viewpoint expressed in this article is the opinion of the author and is not necessarily the viewpoint of the owners or employees at Healthcare Staffing Innovations, LLC.
