The Class They Didn’t Want Taught: Why ObGyn and MFM Doctors Need Prompt Engineering
The Prognosis - A full-day curriculum in the skills that should already be part of maternal–fetal medicine training.
I recently went through a tedious application process with my two professional organizations, the American College of Obstetricians & Gynecologists (ACOG) and the Society for Maternal-Fetal Medicine (SMFM). When I proposed a workshop on “Prompt Engineering for ObGyn and MFM Physicians” to both organizations, it was rejected.
The irony was almost poetic: leaders of the field that deals daily with prediction, pattern recognition, and interpretation declined a course on the very skill that makes AI useful. You cannot drive a car without driving lessons, yet many clinicians are now “driving” generative AI unsupervised. Prompt engineering is not a toy; it is a professional competency.
Session 1: The Intelligence Gap
We begin with a live demonstration: the same clinical question asked three different ways yields three different AI responses—one insightful, one irrelevant, one dangerously wrong. This hour unpacks why. We analyze token weighting, context framing, and linguistic precision. By the end, participants grasp that every AI answer reflects the quality of the prompt, not the brilliance of the model. Like Doppler ultrasound, it’s only as good as the operator’s hand.
Session 2: Anatomy of a Good Prompt
This session dissects the structure of an effective clinical prompt. Using the “SOAP for AI” model (Situation, Objective, Ask, Parameters), we build prompts that yield consistent, verifiable outputs. Obstetric examples include asking ChatGPT to summarize the SMFM guidance on preeclampsia or compare ACOG vs. FIGO interpretations of fetal monitoring. The goal is reproducibility, not magic. Prompting becomes a form of structured reasoning, not guesswork.
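The “SOAP for AI” structure above can be made concrete in code. Here is a minimal sketch (in Python, my choice for illustration) of how the four fields might be assembled into a single reproducible prompt; the field contents are hypothetical examples, not official ACOG or SMFM text.

```python
# Illustrative sketch of the "SOAP for AI" prompt structure
# (Situation, Objective, Ask, Parameters). The example field
# contents below are hypothetical, not official society language.

def soap_prompt(situation, objective, ask, parameters):
    """Assemble the four SOAP-for-AI fields into one structured prompt."""
    return (
        f"Situation: {situation}\n"
        f"Objective: {objective}\n"
        f"Ask: {ask}\n"
        f"Parameters: {parameters}"
    )

prompt = soap_prompt(
    situation="MFM attending preparing a fellow teaching session.",
    objective="Summarize current SMFM guidance on preeclampsia screening.",
    ask="Produce a 200-word summary with numbered key recommendations.",
    parameters=("Cite only named guideline documents; flag uncertainty "
                "explicitly; do not invent references."),
)
print(prompt)
```

Because the same four fields are filled in the same order every time, two clinicians asking the same question get comparable outputs, which is exactly the reproducibility the session aims for.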
Session 3: Ethical Engineering
Here we cross into ethics: How should physicians verify AI output before using it in counseling or teaching? We discuss “hallucination risk” as a form of professional negligence, explore bias propagation in obstetric datasets, and review the principle of preventive ethics in digital decision support. Participants learn how to construct “bounded prompts” that limit speculation, require authentic references, and prioritize patient safety over novelty.
Session 4: Clinical Simulations with AI
We run mock MFM consults powered by generative AI. Fellows prompt ChatGPT to counsel on neural tube defects, twin–twin transfusion, and maternal cardiac disease. Then we critique: Did the AI over-reassure? Did it include termination options when relevant? How accurate were its references? This is simulation training for the digital age, replacing mannequins with models that think. It forces clinicians to see the difference between automation and accountability.
Lunch Dialogue: “Would You Let ChatGPT Counsel Your Patient?”
A moderated conversation over lunch: Can AI ever be the first draft of empathy? Participants share experiences of using ChatGPT for discharge summaries, patient handouts, or counseling scripts. We confront the discomfort: does using AI make care impersonal, or can it help us listen better? By the end, the consensus is clear: AI doesn’t replace humanity; it mirrors it. The question is what reflection we choose to polish.
Session 5: Building the Clinical AI Toolkit
This practical workshop introduces a curated set of AI frameworks: diagnostic summarizers, patient-education generators, and literature analyzers. Each tool is stress-tested on obstetric cases. We show how to construct “guardrails” that constrain AI to peer-reviewed sources and flag unverifiable claims. Fellows leave with ready-to-use templates for counseling, research synthesis, and quality improvement, each backed by ethical review steps to ensure safety and transparency.
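To make the idea of a “guardrail” tangible, here is a deliberately toy sketch in Python: it scans an AI draft for citation-like sentences and flags any that carry no PMID or DOI, so the clinician knows which claims still need manual verification. The regex patterns, author names, and the placeholder DOI are all illustrative assumptions, not a validated tool.

```python
# Toy "guardrail" illustration: flag citation-like sentences in an AI
# draft that lack a PMID or DOI. Patterns are illustrative assumptions;
# the author names and DOI below are hypothetical placeholders.
import re

def flag_unverifiable_citations(text):
    """Return sentences that look like citations but have no PMID/DOI."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        looks_like_citation = re.search(r"\bet al\.|\(\d{4}\)", sentence)
        has_identifier = re.search(r"\bPMID[:\s]*\d+|\bdoi[:.]",
                                   sentence, re.IGNORECASE)
        if looks_like_citation and not has_identifier:
            flagged.append(sentence.strip())
    return flagged

draft = ("Hypothetical claim (Smith et al., 2020). "
         "Verified claim (Jones et al., 2019, doi:10.1000/example).")
print(flag_unverifiable_citations(draft))
```

A real guardrail would be more sophisticated, but even this toy version makes the session’s point: verification can be built into the workflow rather than left to memory.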
Session 6: Working Smarter with ChatGPT Pro
Many clinicians use ChatGPT casually, like a search engine with better manners. Professionals must use it differently. The Pro version supports projects, memory, and custom instructions; each feature can turn a curious user into a disciplined operator. We show how to organize prompts into projects, store reusable templates, and build a personal “AI lab notebook” that remembers case types, writing styles, or review formats. Fellows learn to create a clinical or educational “persona” (for example, an AI ethics consultant, a journal reviewer, or a patient educator), each with a tailored tone and reasoning pattern. By the end of the session, participants see that using ChatGPT Pro is less about convenience and more about stewardship. The machine adapts to your habits, good or bad. If you train it with clarity and ethics, it becomes a trusted assistant. If you treat it as a shortcut, it mirrors that, too.
Session 7: From Prompt to Policy
The final teaching block turns to leadership. How should hospitals and professional societies govern AI use? We discuss institutional liability, authorship policies, and credentialing for “AI-assisted” clinical work. Just as every ultrasound technician must be certified, clinicians who use generative AI should demonstrate competence. The session ends with an assignment: write a departmental AI policy in 300 words. For most, it’s the first time they’ve linked prompts to professionalism.
Reflection / Closing
The most resistant physicians often fear irrelevance, not error. But irrelevance is the greater danger. Refusing to learn prompt engineering is like refusing to learn fetal heart rate interpretation in the 1970s. It won’t stop the technology; it only ensures that others, less qualified, will define its rules. We owe our patients more than curiosity. We owe them competence. The next frontier of MFM isn’t genetic—it’s generative.
AI is not a threat to medicine. Refusing to understand it is. You won’t lose your job to a machine, but you will lose it to a colleague who has mastered how to leverage that machine for superior results.