Your Patient Already Asked the Chatbot: A.I. in Prenatal Care and What ObGyns Need to Know
Over a third of Americans now use large language models for health advice. Your pregnant patients are among them. A.I. can make them better prepared or dangerously misinformed. The difference depends on whether we guide them.
The Future of ObGyn series.
Last week a patient arrived at her 34-week visit with a printed list of differential diagnoses for her intermittent abdominal pain. The list was thorough. It included round ligament pain, Braxton Hicks contractions, preterm labor, placental abruption, and appendicitis. She had asked ChatGPT to “interview her like a doctor,” fed it her symptoms, and brought the output.
The differential was not wrong. It was actually better organized than what some medical students produce. But it was missing something no chatbot can provide: context. It did not know that her fundal height was appropriate. It had not palpated her abdomen. It did not know that she looked well. It could not distinguish the relaxed body language of someone with ligament pain from the guarded posture of someone with an acute abdomen.
This is happening in every obstetric practice now. Data suggest that over a third of Americans use large language models for health advice. Patients arrive with suggested tests, potential diagnoses, and detailed interpretations of their ultrasound or lab reports. Some have asked the chatbot to review their entire medical record through the patient portal. A survey published last year found that nearly two-thirds of physicians reported using A.I. in their practice in 2024. Both sides of the exam room are consulting the algorithm. Neither side is talking about it.
For most of medicine, A.I. as a patient preparation tool makes sense. The average patient gets only 18 minutes of face time with their doctor per year. The 21st Century Cures Act gave patients access to their medical notes, but most never read them. Those who do struggle with the jargon. A chatbot that translates medical language into plain English and helps a patient organize questions before a visit is genuinely useful.
But obstetrics is not internal medicine. A delayed diagnosis in primary care usually means days or weeks of suboptimal management. In obstetrics, the margin between a living baby and a dead one can be minutes. A patient who asks a chatbot whether her headache at 37 weeks is “just stress” will eventually get the reassurance she is looking for. Language models are designed to be agreeable. They infer what users want to hear and deliver more of it. In the tech world, this is called sycophancy. In obstetrics, sycophancy can be lethal.
There is a right way and a wrong way for pregnant patients to use A.I. The distinction is clear, but no one is teaching it. Not the tech companies. Not the medical schools. And, so far, not us.
🎯 Free Subscriber Bottom Line: Your pregnant patients are already using A.I. chatbots for medical advice. A.I. can genuinely help them understand records and prepare for visits. It should never be used to triage symptoms, interpret fetal monitoring, or decide whether to seek urgent care. The sycophancy problem, where chatbots deliver the reassurance patients want rather than the caution they need, is especially dangerous in pregnancy. The question is not whether A.I. will be part of prenatal care. It already is. The question is whether ObGyns will guide how patients use it.
Below, paid subscribers get:
• The five things A.I. does well in prenatal care and the five things it should never do
• The sycophancy problem: why chatbots are especially dangerous in pregnancy
• What to tell patients who bring chatbot diagnoses to their prenatal visit
• The liability question: what happens when A.I. advice delays care
• A script for the conversation every ObGyn should be having with patients right now
• Where A.I. is actually headed in obstetrics: closed-loop induction, algorithmic fetal monitoring, and intelligent triage