The Obstetric Intellect: What Do We Really Mean by “AI” in Healthcare?
If AI is everything, then AI is nothing
AI. Just two letters: Artificial Intelligence. Yet in every conference, news article, and hospital board meeting, the term is everywhere. “AI in pediatrics,” “AI in ultrasound,” “AI in obstetrics.” It sounds futuristic, almost magical. But what are we really talking about when we say “AI”? A robot doctor? A statistical model? A crystal ball predicting outcomes?
The truth is, most people using the word can’t even define it properly. Even within medicine, AI has become a catch-all phrase for different technologies, some of which barely resemble each other. And in obstetrics, where the stakes are literally life and death, vague language creates confusion and sometimes misplaced fear.
Let’s take a closer look.
So, What Exactly Is AI?
There isn't one single "official" definition of AI, but here are widely accepted definitions from authoritative sources:
Academic/Technical Definition: Artificial Intelligence is the simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.
Government Definition (US NIST): A branch of computer science devoted to developing data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement.
Industry Standard: AI refers to computer systems able to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
Key characteristics across definitions:
Machine simulation of cognitive functions
Learning from data/experience
Problem-solving and decision-making
Pattern recognition
Adaptation to new situations
Common AI types mentioned in formal definitions:
Machine Learning
Natural Language Processing
Computer Vision
Robotics
Expert Systems
The field lacks a single governing body, so definitions vary slightly between organizations like IEEE, ACM, major universities, and government agencies. However, they generally converge on the concept of machines performing tasks requiring human-like intelligence.
Artificial Intelligence (AI) is a broad field of computer science that tries to make machines “smart.” But “smart” can mean many things. Some AI systems are simple statistical programs. Others mimic human reasoning. Still others generate entirely new content.
That’s why when doctors, parents, or policymakers say “AI in medicine,” they might be referring to very different tools. A decision support system that flags abnormal lab results is a form of AI. A robot that sutures skin automatically also uses AI. And so does ChatGPT, which you might use to draft an email.
So AI isn’t one single technology. It’s more like an umbrella term.
The Branch We Actually Use: Generative AI
The hottest form of AI today, and the one most people encounter directly, is Generative AI (GAI). Unlike older AI that only classifies or predicts, GAI can create. It generates text, images, or even sound.
In other words, while traditional AI can tell you whether an ultrasound looks abnormal, GAI can write the report describing what that abnormality might be. It doesn’t just analyze. It produces.
That’s why GAI feels so different. It looks and sounds almost human. And that’s why many physicians are both intrigued and alarmed.
AI vs. GAI: A Simple Analogy
Think of AI as a giant toolbox. Inside, there are different tools:
A calculator (statistics and risk scores).
A magnifying glass (pattern recognition, like radiology image analysis).
A robot hand (automation, like robotic surgery).
Generative AI, on the other hand, is like a storyteller sitting in that toolbox. It doesn’t just measure or recognize. It takes what it has learned and spins it into new sentences, images, or answers.
When your hospital newsletter says “AI will transform ultrasound,” you need to ask: Do they mean better image recognition (traditional AI), or do they mean automatically generated reports and counseling scripts (GAI)? The distinction matters.
Why Does This Matter in Obstetrics?
Obstetrics is uniquely vulnerable to sloppy definitions of AI. Why?
Emotions Run High: Pregnancy is not just clinical; it is deeply personal. If a patient hears that “AI” will help with her prenatal care, does she imagine a robot delivering her baby? Or simply a software tool assisting her doctor with measurements?
Safety and Trust: A mother will not easily accept “AI says so” as a substitute for human judgment. If we don’t clarify what kind of AI we are using, trust erodes.
Ethics and Responsibility: If GAI produces a counseling script for a couple learning about a fetal malformation, who is responsible for the words? The AI? The physician who clicked “generate”? The hospital that licensed the system?
Clinical Applications Are Different:
Traditional AI: Flagging abnormal heart rate patterns on fetal monitoring.
Generative AI: Drafting a patient education handout explaining what those patterns mean.
Without clear language, these two get conflated.
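For readers who like to see the difference in code, here is a minimal Python sketch of both kinds of tool side by side. Everything in it is illustrative, not clinical: the single threshold stands in for a trained predictive model, the template function stands in for a call to a real generative model, and the function names are invented for this example.

```python
# Illustrative only: a threshold stands in for a trained predictive model,
# and a text template stands in for a real generative-model call.

def flag_fetal_heart_rate(baseline_bpm: float) -> str:
    """Traditional, predictive AI in miniature: it classifies, it does not create.
    The commonly cited normal baseline of 110-160 bpm stands in for a model."""
    return "normal" if 110 <= baseline_bpm <= 160 else "abnormal"

def draft_patient_handout(finding: str) -> str:
    """Generative AI in miniature: it produces new text from a finding.
    A real system would call a large language model here."""
    return (f"Your monitoring showed a {finding} heart rate pattern. "
            "Your care team will review the tracing with you and explain what it means.")

label = flag_fetal_heart_rate(95)    # predictive output: a single label
print(label)                         # -> abnormal
print(draft_patient_handout(label))  # generative output: a draft for a human to review
```

The same data point flows through both, but only the second function produces something that did not exist before. That is the whole distinction in six lines.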
GAI in Action: What Patients and Doctors Actually See
Here are some current and near-future uses of GAI in obstetrics:
Ultrasound Reporting: Instead of a radiologist dictating, GAI can generate a full report from the machine’s findings.
Patient Counseling: A couple gets a diagnosis of congenital heart disease. GAI drafts a summary of the condition in plain language, which the doctor then edits before sharing.
Documentation: After a delivery, GAI helps write the operative note, pulling structured data from the chart and filling in narrative details.
Education: Expectant parents can access a GAI-powered chatbot to answer common questions about diet, exercise, or warning signs.
Notice the pattern? In all these cases, GAI doesn’t replace the doctor. It drafts, summarizes, or explains. The human clinician remains the final filter.
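That “final filter” can be built into software rather than left to habit. Here is a hedged sketch of one way to do it; the class and field names are hypothetical, and the point is the pattern: generated text lives in a draft object that cannot be released into the record until a named clinician approves it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    """A generated draft that stays a draft until a clinician signs off."""
    text: str                          # GAI output, freely editable
    approved_by: Optional[str] = None  # nobody has approved it yet

    def release(self) -> str:
        """Refuse to pass the text onward unless a human has reviewed it."""
        if self.approved_by is None:
            raise PermissionError("GAI draft not yet reviewed by a clinician")
        return self.text

# Usage: editing is free; releasing requires a named human.
note = DraftNote(text="Operative note draft: uncomplicated vaginal delivery...")
note.text += " Estimated blood loss reviewed and corrected by the attending."
note.approved_by = "Dr. Example"   # named accountability, not an anonymous click
print(note.release())
```

The design choice is deliberate: the system does not treat the “generate” click as approval. It demands an explicit, attributable sign-off.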
The Risks of GAI in Obstetrics
GAI is powerful but far from perfect. Some key risks include:
Hallucinations: GAI sometimes produces incorrect or fabricated information with great confidence. But let’s be honest: doctors do this too. Physicians misremember data, misquote studies, and make diagnostic errors every day. The difference is that we are used to human mistakes and have built systems to catch them. GAI “hallucinations” are not a brand-new kind of danger; they are a new form of the same problem: fallibility. As with doctors, the answer is vigilance, double-checking, and accountability, not fear or dismissal. One simple double-check is sketched after this list.
Bias: If the training data overrepresents certain populations, recommendations may not apply to everyone. Again, bias is not unique to AI; medicine has long been shaped by studies dominated by certain demographics.
Over-reliance: Physicians may be tempted to accept AI-generated content without careful review, leading to dangerous shortcuts. The same can happen when we blindly accept an expert’s opinion without questioning it.
Privacy: Feeding sensitive ultrasound data into third-party GAI platforms raises confidentiality questions.
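As promised above, here is a sketch of one such double-check: an automated screen that flags any number in a generated draft that does not appear among the structured values actually recorded in the chart. This is an assumed, deliberately crude safeguard, not an established standard; it catches a fabricated measurement, not a wrong word, so human review remains mandatory.

```python
import re

def unverified_numbers(generated_text: str, source_values: set[str]) -> list[str]:
    """Return numbers in GAI output that are absent from the structured source."""
    found = re.findall(r"\d+(?:\.\d+)?", generated_text)
    return [n for n in found if n not in source_values]

# Usage: the chart recorded two values; the draft mentions three numbers.
chart = {"142", "3.2"}
draft = "Baseline heart rate 142 bpm, estimated fetal weight 3.2 kg at 39 weeks."
print(unverified_numbers(draft, chart))  # -> ['39'], routed to the human reviewer
```

Note that the flagged number may turn out to be perfectly legitimate; the screen does not decide, it only routes doubt to a human. That is exactly the division of labor this chapter argues for.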
A Historical Parallel: We’ve Been Here Before
When electrocardiograms (EKGs) were first introduced, many physicians distrusted them. The machines sometimes produced faulty readings. Doctors feared losing their clinical authority to a printout. And yet today, no one questions the EKG’s place in medicine.
The same happened with calculators for drug dosing and with electronic health records. At first, the errors were emphasized. Over time, checks and balances were developed, and the tools became indispensable.
GAI will follow a similar path. Its mistakes are not disqualifying. They are an expected part of the technology’s growth and integration—just like human mistakes are part of the practice of medicine.
Why Definitions Matter
If “AI” means everything, it means nothing. In medicine, precision of language is non-negotiable. Imagine if we used “infection” to mean both a common cold and sepsis. The consequences would be disastrous.
In obstetrics, when we discuss “AI,” we must specify:
Is this traditional AI (pattern recognition, risk prediction, automation)?
Or is it Generative AI (drafting, explaining, summarizing, creating)?
Patients, policymakers, and clinicians deserve clarity.
Practical Lessons
For clinicians: Always ask vendors and administrators: What kind of AI is this? What does it generate, and how will I remain responsible for its output?
For patients: When told “AI will help with your care,” ask your provider: Does this AI analyze, or does it also create explanations? Will my doctor still review everything?
For researchers: Be precise in publications. “AI” is not enough. Specify whether the system is predictive, generative, or both.
Reflection / Closing
AI is not a magic wand, and GAI is not a replacement for human compassion or judgment. But both can be valuable tools if we understand what they are, what they are not, and how to use them.
So next time you hear “AI in healthcare,” pause and ask: Are we talking about a calculator, a magnifying glass, a robot hand, or a storyteller? Clarity is the first step toward safe and ethical innovation.
And here’s the deeper thought: If we accept that doctors are human and make mistakes, shouldn’t we extend the same perspective to AI? The goal is not perfection but progress, building systems where human and machine check each other and, together, deliver safer care.