The MedMal Room: Ten Preventable Cases on L&D with GAI
The Safety Ledger — When ChatGPT-level reasoning could have prevented harm
A resident once said during a morbidity meeting, “We didn’t think of it.”
That sentence, more than any lab value or monitor trace, explains why preventable harm persists. On labor and delivery units, the pattern is rarely ignorance; it is cognitive overload. Atypical symptoms are dismissed as "just pregnancy," abnormal findings are attributed to "artifact," and diagnostic thinking stops too soon.
Now, for the first time, clinicians have a tool that can think with them. Generative AI, such as ChatGPT, can process a patient’s story, generate differential diagnoses, and summarize key considerations in seconds. It doesn’t diagnose, but it reminds us not to stop thinking. Below are ten preventable L&D cases that could have ended differently had generative AI been part of the conversation.
Case 1: The Misdiagnosed Diarrhea
Scenario: A 33-year-old woman at 35 weeks presented to triage with severe watery diarrhea and dehydration. It was labeled “viral gastroenteritis.” She was later found to have metformin-induced diarrhea.
How GAI could have helped: ChatGPT could have generated a targeted differential, identifying medication side effects early and recommending medication review.
Which Prompt Helps:
“A 35-week pregnant woman on metformin for gestational diabetes presents with severe watery diarrhea. What are possible causes and next diagnostic or management steps?”
Case 2: The Overlooked Sonogram Finding
Scenario: A routine 32-week ultrasound report failed to note an absent stomach bubble. Polyhydramnios appeared later, and the baby was born with esophageal atresia.
How GAI could have helped: When prompted with “absent stomach bubble,” GAI could generate a differential and urge re-imaging or specialist review.
Which Prompt Helps:
“At 32 weeks’ ultrasound, no fetal stomach bubble is seen. What are possible explanations and recommended next steps?”
Case 3: The Wrong Diagnosis of ‘False Labor’
Scenario: A 29-year-old woman was sent home from triage as “false labor” and delivered precipitously at home at 30 weeks.
How GAI could have helped: ChatGPT could have reinforced that back pain, pelvic pressure, and mild contractions at 30 weeks warrant a preterm labor evaluation.
Which Prompt Helps:
“A 29-year-old woman at 30 weeks has back pain, pelvic pressure, and mild contractions. What differential diagnoses and triage tests should be considered?”
Case 4: The Missed Sepsis
Scenario: A woman spiked a fever after cesarean section; it was labeled "epidural fever," and she later developed endometritis with sepsis.
How GAI could have helped: It could have generated a structured differential and early sepsis workup checklist.
Which Prompt Helps:
“Fever 8 hours after cesarean section: what are common causes and how should this be evaluated and managed?”
Case 5: The Overlooked Anemia
Scenario: A postpartum woman’s hemoglobin dropped from 10.5 to 7.8 g/dL but was attributed to “dilution.” She later collapsed from a retroperitoneal hematoma.
How GAI could have helped: It could have highlighted that such a rapid fall suggests concealed bleeding, not dilutional anemia.
Which Prompt Helps:
“What are possible causes of a 3 g/dL hemoglobin drop after cesarean section, and how should concealed bleeding be evaluated?”
Case 6: The Silent Pulmonary Embolism
Scenario: On day two post-cesarean, a patient developed shortness of breath and was reassured that it was anxiety. She died of pulmonary embolism.
How GAI could have helped: A ChatGPT query could have placed PE at the top of the differential and prompted urgent imaging.
Which Prompt Helps:
“Shortness of breath and chest pain two days after cesarean section—what is the most urgent differential and next diagnostic step?”
Case 7: The Missed Hypoglycemia in a Newborn
Scenario: A newborn of a diabetic mother was jittery but not tested for glucose and later developed seizures.
How GAI could have helped: A simple generative prompt could have tied the maternal diabetes history to neonatal hypoglycemia risk.
Which Prompt Helps:
“Newborn of a diabetic mother is jittery at one hour of life. What is the likely cause and immediate management?”
Case 8: The Undiagnosed Cholestasis
Scenario: A 30-year-old woman with intense itching at 37 weeks was reassured. A week later, she suffered a stillbirth attributed to intrahepatic cholestasis of pregnancy.
How GAI could have helped: GAI could have surfaced cholestasis immediately and suggested bile acid testing.
Which Prompt Helps:
“Third-trimester pregnant woman with severe itching on palms and soles—what is the differential and what labs should be ordered?”
Case 9: The Unrecognized Uterine Rupture
Scenario: A VBAC patient reported severe pain and fetal bradycardia. Staff assumed cord compression; rupture was discovered during crash cesarean.
How GAI could have helped: The system could have prioritized uterine rupture and prompted immediate response steps.
Which Prompt Helps:
“During VBAC, the patient develops sudden pain and fetal bradycardia. What are possible causes and emergency actions?”
Case 10: The Missed Preeclampsia Warning
Scenario: A 24-year-old woman with headache and blurry vision was discharged after a “borderline” blood pressure. She seized two days later with eclampsia.
How GAI could have helped: GAI could have generated a preeclampsia checklist, linking symptoms and prompting appropriate labs before discharge.
Which Prompt Helps:
“A 24-year-old at 35 weeks presents with headache, visual changes, and BP 142/90. What diagnoses should be considered and what labs ordered?”
What GAI Adds to Clinical Judgment
Generative AI doesn’t diagnose—it reminds clinicians to think comprehensively. In triage, it can instantly generate a structured differential, suggest next steps, and identify overlooked connections. It is a real-time intellectual companion that turns fragmented observations into diagnostic reasoning.
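The prompts in the cases above share a repeatable shape: a brief vignette (age, gestational age, findings, history) followed by an explicit request for a ranked differential and next steps. As a minimal sketch, that structure can be templated so triage staff query a model consistently rather than improvising wording each time. The helper below is hypothetical, not part of any cited workflow; it only assembles the prompt text and does not call any AI service.

```python
def build_triage_prompt(age, gestational_age, findings, history=None):
    """Assemble a structured differential-diagnosis prompt from a triage vignette.

    This mirrors the "Which Prompt Helps" pattern used in the cases above:
    vignette first, then an explicit ask for ranked differentials, red flags,
    and next diagnostic or management steps.
    """
    lines = [
        f"A {age}-year-old woman at {gestational_age} weeks' gestation "
        "presents with " + ", ".join(findings) + "."
    ]
    if history:
        # Medication lists and comorbidities (e.g., metformin in Case 1)
        # are exactly the context that premature closure tends to drop.
        lines.append("Relevant history: " + "; ".join(history) + ".")
    lines.append(
        "List the differential diagnoses in order of urgency, "
        "the red-flag conditions that must be excluded, "
        "and the next diagnostic or management steps."
    )
    return "\n".join(lines)

# Example: the Case 10 vignette expressed through the template.
prompt = build_triage_prompt(
    age=24,
    gestational_age=35,
    findings=["headache", "visual changes", "BP 142/90"],
    history=["no prior hypertension"],
)
print(prompt)
```

The design choice matters more than the code: by always demanding ranked urgency and must-exclude conditions, the template pushes back against the "probably fine" framing that each case above began with.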
Ethical Dimension: Using GAI Responsibly
GAI should augment, not replace, human thought. But ethically, when tools can prevent harm through faster reasoning or better documentation, ignoring them becomes a lapse in diligence. Proper use demands verification, accountability, and ongoing training.
Reflection / Closing
Each case above began with the same error: an assumption that something was “probably fine.” Generative AI doesn’t guarantee perfection, but it can break the habit of premature closure—the cognitive trap that ends diagnostic reasoning too soon. The future of safety in obstetrics will belong not to machines that think for us, but to those that make us think better.