When Patients Ask Better Questions Than We Do
Anxiety surrounding AI-assisted patient scrutiny reveals an uncomfortable truth: much of what we do in obstetrics cannot withstand that scrutiny, not because the questions are unfair, but because our answers are inadequate.
A recent healthcare forecast warned that 2026 will bring “patient-side AI agents” capable of scanning medical records, comparing care against guidelines, and generating “incredibly high level questions” that clinicians will struggle to answer. The piece framed this as a threat, describing the “multiples of bureaucracy and inefficiencies” this scrutiny will impose and urging physicians to adopt AI tools that help them “withstand constant, machine-assisted scrutiny.” It should instead have framed that scrutiny as a strength.
The framing is backwards. And in obstetrics, it exposes a wound we have been dressing rather than healing.
The Questions We Should Have Been Answering All Along
What are these allegedly disruptive questions that AI-empowered patients will ask? Why did my care deviate from current guidelines? Why wasn’t I offered a procedure with better-documented outcomes? Can you show me the evidence supporting this recommendation?
These are not impertinent interrogations. They are the very questions that informed consent is supposed to answer before a patient ever needs to ask. The anxiety surrounding “machine-assisted scrutiny” reveals an uncomfortable truth: much of what we do in obstetrics cannot withstand it, not because the questions are unfair, but because our answers are inadequate.
Consider what a pregnant woman with access to an AI assistant might discover. She could learn that elective induction at 39 weeks is marketed as reducing cesarean risk, yet population-level CDC data show cesarean rates rising in lockstep with induction rates since 2018. She might find that her hospital’s episiotomy rate is triple the national benchmark, or that the “standard” continuous fetal monitoring she received has a false positive rate exceeding 99% for predicting cerebral palsy. She could ask why she was placed on oxytocin without being told that the FDA has never approved it for elective induction—and that the drug carries a black box warning her obstetrician never mentioned.
These are not gotcha questions. They are the foundations of shared decision-making that we have claimed to practice while rarely delivering.
Why Obstetrics Is Uniquely Vulnerable
Other specialties will face similar reckonings, but obstetrics occupies a particularly exposed position. We have built a culture of intervention justified more by habit and fear than by evidence. The cesarean rate has climbed to 32% with no corresponding improvement in neonatal outcomes. We induce nearly a third of all labors, often without clear medical indication. We monitor every fetus continuously despite five decades of randomized trials showing no benefit over intermittent auscultation for low-risk pregnancies.
When patients ask why, what will we say?
The honest answer, too often, is that we do these things because we have always done them, because they feel safer, because they are easier to defend in a deposition, or because the electronic health record defaults to them. These are human reasons, but they are not medical reasons. And they will not satisfy a patient whose AI assistant has just retrieved the Cochrane review contradicting our practice.
ACOG’s fondness for “reasonable to offer” language will prove especially fragile. This phrase—deployed when evidence is insufficient to recommend but organizational inertia demands we say something—provides rhetorical cover without epistemic substance. An AI parsing that language against the underlying trials will immediately identify the gap. Patients will learn that “reasonable to offer” often means “we cannot tell you this helps, but we will not tell you it does not.”
The deeper vulnerability is this: obstetrics has relied on information asymmetry as a substitute for explanation. We have counted on patients not having time, access, or expertise to challenge our recommendations. That asymmetry is collapsing. The question is not whether we can rebuild the wall—we cannot—but whether we ever should have built it.
The Answer Was Always Informed Consent
Here is the uncomfortable truth: every question an AI-empowered patient might ask is a question that proper informed consent should have already answered.
Informed consent is not a signature on a form. It is a process—legally and ethically mandated—requiring disclosure of the nature of the proposed intervention, its risks and benefits, the alternatives including doing nothing, and the evidence supporting each option.
If we had been practicing informed consent as the law and our profession demand, patients would not need artificial intelligence to discover that a cesarean was performed at a rate inconsistent with the evidence, or that an alternative approach existed with better outcomes, or that the intervention they received was never explained in terms they could evaluate.
The AI is not asking novel questions. It is asking the questions informed consent was designed to answer before the patient ever leaves the consultation room.
What machine-assisted scrutiny reveals is not that patients have become unreasonably demanding—it is that we have been failing at the most fundamental obligation of medical practice. The solution is not better technology to defend our decisions. The solution is better decisions, transparently explained, with the patient as a genuine partner rather than a passive recipient. That is not a new standard imposed by Silicon Valley. It is the standard we agreed to when we took our oaths.
Scrutiny Is Not the Disease. Neither Is Transparency.
The forecast I cited urges clinicians to adopt AI that helps them “document their thinking” and “practice in a way that can withstand constant, machine-assisted scrutiny.” This phrasing treats patient inquiry as a pathogen requiring prophylaxis.
But scrutiny is not the disease. Scrutiny is the immune system of a functional doctor-patient relationship. If our practice cannot survive examination, the problem is not the examination—it is the practice.
What we actually need is not better documentation to shield us from questions, but better reasoning that makes documentation unnecessary. A physician practicing evidence-based obstetrics does not fear the patient who asks about guidelines. She welcomes the conversation because her recommendation already reflects the evidence, and she can articulate why. The discussion becomes collaborative rather than defensive.
This requires something AI cannot provide: the intellectual discipline to ask hard questions of ourselves before patients ask them of us. Why am I recommending induction for this patient? What is my threshold for cesarean, and is it justified by outcomes? Have I communicated absolute risks, or have I hidden behind relative risk reductions that obscure the truth?
Rediscovering What We Forgot
The solution is not technological. It is professional.
Medicine became a profession, not a trade, because physicians committed to reasoning transparently, grounding decisions in evidence, and prioritizing patient welfare over convenience or self-protection. Somewhere along the way, obstetrics lost this commitment. We substituted protocol for thought, defensive documentation for honest communication, and guideline compliance for genuine informed consent.
AI-empowered patients are not creating a new problem. They are exposing an old one. They are asking the questions we should have been asking ourselves, the questions our training should have instilled, the questions that distinguish a physician from a technician.
The forecast warns that the “just trust me” posture will no longer suffice. Good. That posture should never have sufficed. Trust is not a substitute for evidence. It is the product of evidence, consistently delivered, transparently explained, and honestly acknowledged when uncertain.
If that standard feels threatening, the threat is not coming from the patient’s AI. It is coming from the mirror.
Wisdom begins not with answers but with knowing how to ask. We have spent decades perfecting our answers—our protocols, our checklists, our defensive documentation. We forgot that the physician’s craft begins earlier: with the discipline to question our own assumptions, to interrogate our habits, to ask of every intervention whether it serves the patient or merely serves us. The AI-empowered patient is not our adversary. She is asking the questions we should never have stopped asking ourselves.
What has been your experience with increasingly informed patients? Are these conversations welcomed or dreaded in your practice? I would like to hear from clinicians and patients alike.