Welcome back to Dose of AI. Half a century of newsletters, and the healthcare AI story keeps getting more complex. This week, we have three stories that collectively ask the same underlying question: as AI embeds itself deeper into clinical medicine, who gets to decide how it works, who it works for, and what it costs when it doesn't? Let's get into it.
Story #1: 80% of Doctors Now Use AI on the Job — Double the Rate From Three Years Ago
Published: May 4, 2026 | Source: WBUR / American Medical Association
It's official: AI is no longer a fringe tool in clinical medicine. A new survey from the American Medical Association found that roughly 80% of physicians used AI on the job in the past year — a figure that has doubled in just three years. Analysts now estimate the global market for AI in healthcare at $39 billion, a number widely expected to surge further over the next decade.
The WBUR report, drawn from physicians at South Shore Health in Massachusetts, puts a human face on the data. Critical care doctor Sam Ash described querying an AI platform called OpenEvidence to check a patient's medication levels — and receiving not just the answer, but an unsolicited suggestion to check her heart and breathing and ask about other medications she may have taken. "That's actually a useful nudge," Ash noted. Radiology, meanwhile, is emerging as one of AI's clearest clinical wins: multiple patient safety experts cited it as an area where accuracy measurably improves with AI assistance, in part because the technology makes the entire diagnostic workflow more efficient.
Still, clinicians are candid about the limits. Complex cases — rare diseases, contradictory presentations, patients for whom little training data exists — continue to stump AI systems. Hallucinations remain a documented concern. "The jury is a little bit still out as to how useful it is in these settings," Ash said.
Why It Matters: When eight in ten physicians are using a technology, it's no longer an experiment — it's infrastructure. That shift carries real obligations. Hospitals need governance frameworks, not pilot programs. Patients deserve to know when and how AI is influencing their care. And regulators need to move faster than they currently are. The AMA number is a milestone, but it's also a pressure gauge: the system that supports this much AI use is not yet built.
Story #2: OpenAI Publishes a Healthcare Policy Blueprint — and Experts Say It's Self-Serving
Published: May 6, 2026 | Source: STAT News
Alongside the April launch of ChatGPT for Clinicians, OpenAI published what it called a "blueprint for unlocking AI's potential to change the broader healthcare system" — a policy wish list covering data access, FDA regulatory frameworks, reimbursement pathways, and interoperability standards. On the surface, the recommendations are reasonable. Dig a layer deeper, and health policy experts say the document is carefully constructed to benefit OpenAI's specific competitive position.
"They're trying to have their cake and eat it too," said David Blumenthal, former national coordinator for health IT and a Harvard health policy professor. The tension, according to STAT's analysis, lies in OpenAI's dual position: the company operates largely in unregulated territory with its consumer and clinician tools — ChatGPT Health, ChatGPT for Healthcare, ChatGPT for Clinicians — while simultaneously calling for policy frameworks that would accelerate AI adoption across regulated clinical pathways. The recommendations, in other words, would open doors OpenAI is already well-positioned to walk through.
The blueprint arrives as HHS continues to accept responses to its own request for information on AI in clinical care, with the AHA and other major health system voices already weighing in.
Why It Matters: Health policy is being written right now, and the companies building these tools are actively lobbying to shape it. That's not inherently wrong — industry input is a legitimate part of policymaking. But when a single company's policy blueprint doubles as a competitive roadmap, health systems, clinicians, and patient advocates need to be at the table with equal force. The HHS comment period is a real opportunity. It would be a mistake to let vendor white papers dominate the conversation.
Story #3: AI Is Creating New Problems While Solving Old Ones — And Rural and Underserved Communities May Pay the Price
Published: May 6, 2026 | Source: WisBusiness / Medical College of Wisconsin
At a Wisconsin Technology Council event at the Medical College of Wisconsin this week, healthcare leaders offered a sobering counterweight to the week's more optimistic AI headlines. Outgoing MCW President Dr. John Raymond acknowledged that early AI concerns — training data quality, hallucinations — are "mostly, not completely, but mostly under control now." The new concerns, he said, are harder: data privacy, ambient AI disclosure, and a productivity paradox that threatens to quietly worsen clinician burnout.
The paradox works like this: AI tools reduce documentation burden, freeing up time. Employers — or productivity-based reimbursement models — then push clinicians to see more patients to fill that time. Efficiency gains get absorbed by the system before they ever translate into clinician relief. Meanwhile, UWM informatics professor Lu He raised a concern that doesn't get nearly enough attention: AI assumes infrastructure that simply doesn't exist in many underserved and rural communities. Smartphones, reliable connectivity, systems capable of running large language models — none of these are universal. "It's actually becoming a burden for the patients," she said of communities that are expected to engage with AI-enabled care without the baseline technology to support it.
Why It Matters: Healthcare AI equity is not a niche concern — it's a systems design problem. If AI is optimized for well-resourced urban hospitals and connected patients, it will widen the very care gaps it promises to close. The Wisconsin conversation is a reminder that deployment strategy matters as much as the technology itself. Health systems investing in AI need a parallel investment in the infrastructure, training, and community engagement required to make those tools work for everyone — not just the patients who already have the best access.
What to Watch Next Week
HHS's RFI on AI in clinical care closes soon. With OpenAI's blueprint already public and the AHA on record, watch for additional major health system and patient advocacy responses to shape the policy conversation heading into summer.
FDA AI device authorization continues to accelerate — the agency has now cleared over 1,350 AI-enabled medical devices. Expect renewed pressure on CMS to clarify reimbursement pathways, which remain a significant barrier to clinical deployment.
The AHA–Microsoft rural AI workforce webinar (May 7) may surface new data on ambient AI adoption in critical access hospitals — one of the clearest test cases for whether AI equity promises are being kept.
Dose of AI is an independent weekly briefing on artificial intelligence in healthcare. All analysis is the author's own and does not constitute medical, legal, or investment advice.
Sources:
Robin Lubbock / WBUR News — "The doctor is in — or is it AI?" May 4, 2026. wbur.org
Brittany Trang / STAT News — "OpenAI wants to 'have their cake and eat it too' with health AI policy recommendations," May 6, 2026. statnews.com
WisBusiness / Wisconsin Technology Council — "New concerns emerging around AI in healthcare," May 6, 2026. wisbusiness.com
MIT Technology Review Insights / Mayo Clinic Platform — "Tailoring AI solutions for health care needs," May 4, 2026. technologyreview.com