Why Ignoring Prompt Engineering is the Fatal Flaw in AI Education for Doctors
Doctors are not being taught the most important skill for using AI
A recent white paper by Elsevier on “empowering residents with AI” sounded thoughtful on the surface: preserve clinical reasoning, supervise carefully, include learners in the discussion. All important points. But the entire document missed the single most important skill that determines whether AI is useful or dangerous: prompt engineering.
The Missing Skill
Generative AI doesn’t simply “give answers.” It responds to how we ask. The framing of a question, the precision of the details, the context provided—all of these shape the quality of the output. In other words, the difference between garbage-in/garbage-out and clinically meaningful support lies in the prompt.
And yet, this paper—produced by a major medical publisher in collaboration with residency leaders—never once mentioned prompt engineering. That’s like writing a guide to laparoscopic surgery without mentioning how to use a trocar.
Why It Matters in Medicine
Consider a resident faced with a patient in labor with preeclampsia. If she types “What should I do?” into an AI tool, she will get a vague, possibly unsafe response. If she instead writes:
“Patient: 28-year-old G1 at 35 weeks, blood pressure 170/110, 3+ proteinuria, severe headache, visual changes. No seizure. Platelets 110,000. What are evidence-based management options, with references?”
The output will be entirely different—specific, guideline-driven, and clinically relevant. The difference is not the AI model. It is the prompt.
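For the technically inclined, this comparison is trivial to run yourself. The sketch below uses the OpenAI Python SDK purely as an illustration; the model name is a placeholder, and any chat-style model API your institution has approved would work the same way.

```python
# Minimal sketch: identical model, two prompts. Assumes the OpenAI Python SDK
# (pip install openai) with an API key in the environment; the model name is
# a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

vague = "What should I do?"

structured = (
    "Patient: 28-year-old G1 at 35 weeks, blood pressure 170/110, "
    "3+ proteinuria, severe headache, visual changes. No seizure. "
    "Platelets 110,000. What are evidence-based management options, "
    "with references?"
)

for label, prompt in [("Vague", vague), ("Structured", structured)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Run it once and the point makes itself: the second output cites thresholds, magnesium prophylaxis, and delivery timing; the first output hedges.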
Without explicit training in prompt engineering, learners will stumble. They will either over-trust shallow answers or under-use the technology entirely. Neither outcome prepares them for the future of medicine.
The Risk of Overreliance
The white paper rightly warns against blind trust in AI. But the bigger risk is blind use. Poorly phrased prompts invite superficial outputs that sound authoritative but lack substance. If residents never learn to ask better questions, they will never see the difference. That failure will breed both overreliance and disillusionment.
Prompt engineering is also our best defense against bias. A vague prompt may yield a generic “induce labor at 39 weeks” answer without nuance. A carefully crafted one can demand:
“Summarize risks and benefits of induction vs. expectant management in Hispanic women with gestational diabetes, with attention to maternal morbidity, cesarean rates, and neonatal outcomes.”
Now the AI is pushed to surface evidence and highlight disparities, not erase them. That’s a skill residents must practice, not one they can assume the system provides automatically.
A Missed Educational Opportunity
The paper proposes clever exercises: compare a resident’s differential with the AI’s, or build teaching cases around incorrect AI outputs. Fine ideas. But all of them are hollow if learners don’t also learn how to guide the AI.
Prompt engineering itself can be a teaching tool. Ask residents to generate three prompts on the same case and compare outputs. Show them how specificity, clarity, and context change the quality of the result. This not only sharpens their AI use but strengthens diagnostic reasoning: the more precise the question, the sharper the thinking.
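For programs that want to operationalize this exercise, a minimal sketch might look like the following. The case and the three prompts are invented for demonstration, and the SDK and model name are the same illustrative assumptions as above.

```python
# Sketch of the three-prompt exercise: same case, increasing specificity.
# Case and prompts are illustrative only; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

case = "35-year-old G2P1 at 32 weeks with new-onset hypertension."

prompts = [
    # Vague
    case + " What should I do?",
    # Clearer
    case + " What is the differential diagnosis?",
    # Specific: vitals, labs, and an explicit ask for evidence
    case + " BP 150/95 on two readings, trace proteinuria, no symptoms, "
           "normal labs. What are the evidence-based next steps in workup "
           "and management, with references?",
]

for i, prompt in enumerate(prompts, start=1):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== Prompt {i} ===\n{prompt}")
    print(f"--- Output {i} ---\n{response.choices[0].message.content}\n")
```

The debrief writes itself: have residents annotate what each added detail bought them in the output.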
A Parallel in Peer Review and Research
The omission is even more striking when we consider AI in research. Peer reviewers already drown in unpaid labor, often missing obvious errors. One published study claimed to cover WeChat use from 2008–2018, even though WeChat wasn’t launched until 2011. A rushed human reviewer might not notice—but a well-prompted AI fact-checker could.
The prompt matters here too. “Summarize this study” is not enough. “Check for temporal inconsistencies between platform launch dates and the stated data collection range” produces something entirely different. If we don’t teach prompt engineering, AI will remain a blunt instrument instead of a scalpel.
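Here is a hedged sketch of what such a targeted fact-check could look like. The manuscript excerpt is invented for illustration, and the SDK and model name are the same placeholder assumptions as in the earlier sketches.

```python
# Sketch of a targeted AI fact-check. The manuscript excerpt is invented;
# same SDK and placeholder-model assumptions as the earlier sketches.
from openai import OpenAI

client = OpenAI()

manuscript_excerpt = (
    "We analyzed WeChat usage data collected from 2008 to 2018 "
    "across three provinces..."
)

fact_check_prompt = (
    "Check the following study excerpt for temporal inconsistencies, "
    "especially between platform launch dates and the stated data "
    "collection range. List each inconsistency you find.\n\n"
    + manuscript_excerpt
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": fact_check_prompt}],
)
print(response.choices[0].message.content)
```

Note what the prompt does: it names the specific class of error to hunt for, rather than asking for a summary and hoping the contradiction surfaces on its own.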
The Cultural Bias of Silence
Why didn’t the white paper mention prompt engineering? Perhaps because medical educators still see AI as a black box to be “used carefully,” not a tool that must be actively directed. But this silence reinforces a cultural bias: that AI is inherently unreliable, when in fact its reliability depends on us.
It also reflects a familiar pattern in medical training. We tell students to “think critically” but rarely teach them how. Now we are telling residents to “use AI wisely” without teaching them the language it speaks. Prompt engineering is the critical thinking skill of AI. Ignoring it is a disservice to learners and to patients.
The Takeaway
AI in medicine is not optional anymore. It is here, it is powerful, and it will only grow. Residents and early-career clinicians will either learn to use it well—or be left behind. But “using it well” starts with the most basic skill: how to ask.
To publish a white paper on AI education without mentioning prompt engineering is like training pilots on jet engines without teaching them how to read the cockpit instruments. The result will be users who push buttons, but don’t truly know how to fly.
If we want safe, ethical, and effective AI in obstetrics—or any specialty—prompt engineering cannot be an afterthought. It must be central.
#AIinMedicine #PromptEngineering #Obstetrics #MedicalEducation #PeerReview #EthicsInMedicine #FutureOfCare