Forbes
23 May 2023


Is generative AI a blessing or a curse when it comes to medical doctors and the role of medical malpractice lawsuits? (Image: Getty)

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know medicine, and it turns out that they also need to be at least sufficiently aware of the intertwining of AI and the law during their illustrious medical careers.

Here’s why.

Over the course of a medical doctor’s career, they are abundantly likely to face one or more medical malpractice lawsuits. This is something that few doctors give much thought to when first pursuing a career in medicine. Yet, when a medical malpractice suit is brought against them, the occurrence can have a cataclysmic impact on their perspective on medicine and become a stupefying emotional roller coaster in their life and their livelihood.

A somewhat staggering statistic showcases the frequency and magnitude of medical malpractice lawsuits in the U.S.:

The fact that 17,000 medical malpractice lawsuits are filed each year might not seem like a lot, given that there are approximately 1 million medical doctors in the USA, which amounts to just under 2% getting sued per year. But you need to consider that this happens year after year. It all adds up. Basically, over a ten-year period that would amount to around 20% of medical doctors getting sued (assuming we smooth out repeated suits against the same doctor). Over a 40-year-long medical career, the odds would seemingly rise to around 80% (using the same simple additive assumptions).
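That back-of-envelope arithmetic can be sketched in a few lines of Python. The additive estimate mirrors the figures above; the compounding variant is my own addition (treating each year as an independent roughly-2% chance) and shows why the simple additive estimate overstates the odds over long horizons:

```python
# Assumption: a flat 2% chance of being sued in any given year,
# derived from ~17,000 suits per year across ~1 million doctors.
ANNUAL_SUITS = 17_000
US_DOCTORS = 1_000_000

p_yearly = ANNUAL_SUITS / US_DOCTORS  # ~0.017, rounded up to ~2% in the text

def linear_estimate(p: float, years: int) -> float:
    """Simple additive estimate used in the text (overstates long horizons)."""
    return min(p * years, 1.0)

def compounded_estimate(p: float, years: int) -> float:
    """Probability of at least one suit, treating each year as independent."""
    return 1.0 - (1.0 - p) ** years

for years in (10, 40):
    print(years,
          round(linear_estimate(0.02, years), 2),    # 0.2 then 0.8
          round(compounded_estimate(0.02, years), 2))
```

At a 40-year horizon the independent-years calculation lands nearer 55% than 80%, which is why such figures are best read as rough orders of magnitude.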

A research study that widely examined medical malpractice lawsuits in the U.S. made these salient points about the chances of a medical doctor experiencing such a suit and also clarified what a medical malpractice lawsuit consists of:

If you were to place each medical malpractice lawsuit into its relevant categories of the claimed basis for the litigation, you would see something like this as falling into these groupings (note that each case can be listed in more than just one category):

We will explore how each of those categories relates to the use of generative AI by a medical doctor.

Before doing so, it might be worthwhile to consider the grueling gauntlet associated with a medical malpractice lawsuit.

Generally, a patient or others associated with the patient are likely to indicate to the medical doctor that they are considering a formal filing concerning the perceived adverse medical care provided by that doctor (in some instances, the filing might instead appear out of the blue). The hint or suggestion can then lead to a filing of legal pleadings and the official initiation of the medical malpractice lawsuit.

A medical doctor would then have a series of meetings with their legal counsel and likely their medical malpractice insurer, plus others in their medical care circle or sphere. At some point, assuming the case continues, a pleading judgment would be rendered by the court. If the case further continues, there would be a period of evidentiary discovery, then a trial, and, depending upon the outcome, an appeal might be undertaken too.

Throughout that lengthy process, a medical doctor is usually still fully underway in their medical endeavors. They need to simultaneously cope with their already overloaded medical workload and devote ongoing and decidedly disruptive attention and energy toward the medical malpractice lawsuit. Their every thought and action associated with the medical case in dispute will be closely scrutinized and meticulously questioned. This can be jarring for medical doctors who are not used to being openly challenged in an especially antagonistic, adversarial manner (versus a perhaps day-to-day collegial style).

Given the above background, let’s next take a look at how generative AI fits into this picture.

Generative AI In The Realm Of Medical Doctor Advisement

I’d guess that you already know that generative AI is the latest and hottest form of AI. There are various kinds of generative AI, such as AI apps that are text-to-text or text-to-essay in their generative capacity (meaning that you enter text, and the AI app generates text in response to your entry), while others are text-to-video or text-to-image in their capabilities. As I have predicted in prior columns, we are heading toward generative AI that is fully multi-modal and incorporates features for doing text-to-anything or as insiders proclaim text-to-X, see my coverage at the link here.

In terms of text-to-text generative AI, you’ve likely used or almost certainly heard about ChatGPT by AI maker OpenAI, which allows entry of a text prompt and the AI generates an essay or interactive dialogue in response. For my elaboration on how this works see the link here. The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and at times startling, given the seemingly fluent nature of the AI-fostered discussions that can occur.

Please know though that neither this AI nor any other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data-trained on text from the Internet and admittedly can do some quite impressive pattern-matching, performing a mathematical mimicry of human wording and natural language.

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and to deter the purposeful or accidental underhanded efforts that might undercut society.

A medical doctor is likely to be especially intrigued by generative AI.

A lot of publicity in the medical community seemed to arise when a study earlier this year proclaimed that generative AI such as ChatGPT was able to pass the written test known as the United States Medical Licensing Exam (USMLE) at a roughly 60% accuracy rate. Here’s what the researchers said:

Medical doctors likely raised their eyebrows at the fact that generative AI can seemingly pass an arduous standardized medical exam.

Rather obvious questions immediately come to mind:

The American Medical Association (AMA) has promulgated a terminology that this type of AI ought to be referred to as augmented intelligence:

Let’s for the moment set aside the notion of an autonomous version of generative AI that functions entirely without any human medical doctor involvement. I’m not suggesting that such autonomy isn’t in our future; I’m only seeking to conveniently narrow the discussion herein to when generative AI is used in an assistive mode.

I’ve put together an extensive list of the benefits associated with a medical doctor opting to use generative AI. In addition, and of great importance, I have also assembled a list of the problems associated with a medical doctor using generative AI. We need to consider the problems and downsides and weigh them against the benefits or upsides. The same holds in the other direction, too: we need to consider the benefits or upsides in light of the problems or downsides.

Life seems to always be that way, involving calculated tradeoffs and ROIs.

I’ll explore the benefits first, just because it seems a more cheerful way to proceed. The problems or downsides will be explored next. Finally, after examining those two counterbalancing perspectives, we will jump into the medical malpractice specifics about the use of generative AI by a medical doctor.

Hang onto your hats for a bumpy ride.

Touted Benefits Of Generative AI Usage By Medical Doctors

Think of generative AI as being much different than merely doing an online search for medical info such as via a conventional web browser (note that the newest browsers are starting to encompass generative AI capabilities, see my coverage at the link here). A traditional web browser will bring back tons of hits that you need to battle through. Some of the found instances will be useful, some will be useless. Worse still, some of the search engine findings might be rife with misleading medical info or outrightly wrong medical info.

Generative AI is supposed to be an interactive dialogue-oriented experience. You interact with the generative AI. That being said, you can simply enter a prompt such as a patient profile, and ask the generative AI to do a medical analysis for a one-time emitted essay, but that’s not the productive way to use these AI apps. The full experience consists of going back and forth with the generative AI. For example, you enter a patient profile and ask for a diagnosis. The AI responds. You then question the diagnosis and ask further questions. It is supposed to be highly interactive.

Another angle for using generative AI would be for a medical doctor to enter a devised diagnosis and ask the AI app to critique or review the proposed advisement. This once again should proceed on an interactive basis. The generative AI might question whether you considered this or that medical facet. You respond. All in all, the aim is to have a kind of double-check or at least a means to bounce ideas around to see whether you have exhaustively considered multiple possibilities.
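The back-and-forth critique pattern described above can be sketched as a simple dialogue loop. To be clear, `query_model`, the sample diagnosis, and the follow-up questions below are all hypothetical stand-ins of my own devising, not a real medical AI interface:

```python
# Illustrative-only sketch of an interactive critique session.
# `query_model` is a hypothetical placeholder for whatever generative AI
# interface a doctor would actually use; here it just returns a canned reply.
def query_model(prompt: str, history: list[str]) -> str:
    """Hypothetical model call; records the prompt and echoes a critique."""
    history.append(prompt)
    return f"Critique of: {prompt[:40]}..."

def critique_session(proposed_diagnosis: str, follow_ups: list[str]) -> list[str]:
    """Run an interactive review: initial critique, then follow-up questions."""
    history: list[str] = []
    replies = [query_model(f"Review this diagnosis: {proposed_diagnosis}", history)]
    for question in follow_ups:
        replies.append(query_model(question, history))  # keep the dialogue going
    return replies

replies = critique_session(
    "Type 2 diabetes, begin metformin",
    ["Did you consider renal function?", "Any drug interactions?"],
)
```

The point of the sketch is the structure, not the stubbed answers: each turn carries the prior context forward, which is what distinguishes this workflow from a one-shot query.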

Here are five major ways that I usually suggest medical doctors make use of generative AI, assuming they are interested in doing so:

There are numerous other uses of generative AI for medical doctors. I’m merely noting the seemingly more common uses and ones that can be done with relative ease.

You are now primed for my list of beneficial uses of generative AI for medical doctors in the boundaries of medical decision-making and medical decision support:

I snuck into that foregoing list an indication about potentially using generative AI as a means of later bolstering your position during a medical malpractice lawsuit.

Let’s revisit my earlier indication about the categories associated with medical malpractice lawsuits and consider how generative AI might have been able to avoid or overcome the noted lamentable outcomes:

All in all, those benefits assuredly seem quite convincing.

How would any medical doctor not be using generative AI, given the litany of benefits listed?

We next turn toward the set of problems associated with using generative AI by medical doctors. This will aid us in weighing the upsides versus the downsides.

Touted Downsides Of Generative AI Usage By Medical Doctors

I am going to present to you a slew of potential downsides or problems associated with using generative AI by medical doctors.

Pundits who believe wholeheartedly in the use of generative AI by medical doctors will have a bit of heartburn when they see the list. They will almost certainly object that many of the listed downsides or problems can be overcome. To some extent, yes, that is true.

We also need to acknowledge that the benefits I just listed can readily be undermined or attacked. For each of the benefits listed, you can easily find ways to undercut the stated benefit. Some of those benefits might seem to be the proverbial pie-in-the-sky. They might happen, though a given benefit actually arising is as scarce as hen’s teeth, some would insist.

Fair is fair.

Moving into the potential downsides, let’s take a look at one notable use case, and then we’ll see the entire list. One of the biggest problems or downsides of today’s generative AI is that these AI apps are well-known to produce errors and falsehoods, exhibit biases, and even wildly make up things in what are considered AI hallucinations (a terminology that I disfavor, for the reasons stated at the link here).

Imagine then this scenario. A medical doctor is using generative AI for medical analysis purposes. A patient profile is entered. The medical doctor has done this many times before and has regularly found generative AI to be quite useful in this regard. The generative AI has provided helpful insights and been on-target with what the medical doctor had in mind.

So far, so good.

In this instance, the medical doctor is in a bit of a rush. Lots of activities are on their plate. The generative AI returns an analysis that looks pretty good at first glance. Given that the generative AI has been seemingly correct many times before and given that the analysis generally comports with what the medical doctor already had in mind, the generative AI interaction “convinces” the medical doctor to proceed accordingly.

Turns out that unfortunately, the generative AI produced an error in the emitted analysis. Furthermore, the analysis was based on a bias associated with the prior data training of the AI app. Scanned medical studies and medical content that had been used for pattern-matching were shaped around a particular profile of patient demographics. This particular patient is outside of those demographics.

The upshot is that the generative AI might have incorrectly advised the medical doctor. The medical doctor might have been lulled into assuming that the generative AI was relatively infallible due to the prior repeated uses that all went well. And since the medical doctor was in a rush, it was easier to simply take the confirmation from the generative AI than to dig into whether a mental shortcut was taking place.

In short, it is all too easy to fall into the mental trap of assuming that the generative AI is performing on par with a human medical advisor, a dangerous and endangering anthropomorphizing of the AI. This can happen through a step-by-step lulling process. The AI app is also likely to present its essays or interactions in a highly poised and confidently worded fashion. This too is bound to sway the medical doctor, especially one under pressure to proceed quickly.

Take a deep breath and take a gander at this list of potential pitfalls and problems when generative AI is used by a medical doctor:

I’ll highlight a few of those points.

The use of generative AI for private or confidential information is something that you need to be especially cautious about. Entering patient-specific info could be a violation of HIPAA (Health Insurance Portability and Accountability Act) and lead to various legal troubles. For more on how generative AI is potentially lacking in privacy and cybersecurity, see my coverage at the link here.
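Because entering patient-specific info could run afoul of HIPAA, one commonly suggested precaution is to scrub obvious identifiers before any text ever reaches a generative AI service. Here is a minimal, illustrative-only sketch; the patterns and placeholder labels are my own, and this is nowhere near a compliant de-identification method (real de-identification, such as under the HIPAA Safe Harbor approach, covers many more identifier categories than a few regexes can):

```python
import re

# Illustrative-only: NOT a HIPAA-compliant de-identification method.
# Each pattern targets one obvious identifier format in free-text notes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),    # e.g. 555-867-5309
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), # e.g. 03/14/2023
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Patient seen 03/14/2023, SSN 123-45-6789, callback 555-867-5309."
print(redact(note))  # -> Patient seen [DATE], SSN [SSN], callback [PHONE].
```

Even with such scrubbing, names, addresses, rare conditions, and other context can still re-identify a patient, which is part of why the privacy concern noted above is so hard to fully dispel.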

Another issue is whether generative AI is allowed to be used for medical purposes, to begin with. Some of the software licensing agreements explicitly state that medical professional use is not allowed. This once again can raise legal issues. See my discussion about prohibited uses of generative AI at the link here.

Each of the problematic or downside points in the list above is worthy of a lengthy elaboration about what they are and how they can be overcome. I don’t have space to cover this in today’s column, but if there is sufficient reader interest I’ll gladly go into more depth in later columns.

The Medical Malpractice Dual-Edged Sword Of Generative AI Use

I will finish up this discussion by noting the dual-edged sword of generative AI use in the medical domain and how this relates to medical malpractice considerations.

First, a recent paper posted in the Journal of the American Medical Association (JAMA) identified various key facets of medical malpractice associated with generative AI:

The noted emphasis was on how to incorporate generative AI into a medical doctor’s practice without increasing liability risk. A vital recommendation is that medical doctors need to realize that they cannot and should not blindly abide by whatever the generative AI emits. This, though, as noted, would generally be something that a medical doctor would likely already assume to be the case.

The devil is in the details.

A day-to-day use of generative AI is a lot different than a once-in-a-blue-moon usage. There is a tendency in day-to-day routinization to become complacent and fall into the mental trap of being less skeptical about what the generative AI is producing. The list of problems or downsides that I’ve shown earlier is a sound basis for being cautious about whether to adopt generative AI or not.

The authors also provided this recap of their overarching viewpoint on the matter:

We need to also consider what medical malpractice lawyers are going to do in response to the advent of generative AI for use by medical doctors.

Here’s what I mean.

One cogent legal argument is that the use of generative AI demonstrably caused an undue increase in risk associated with the performance of a medical doctor. That’s an obvious line of attack. If a medical doctor relied upon generative AI, an assertion can be made that they expressly took on a heightened risk due to the slew of downsides or problems that I’ve listed herein.

Let’s turn that same argument around.

Suppose a medical doctor did not make use of generative AI. This would at first glance seem clearly to be the safest means to avoid any complications about how generative AI entered into a malpractice setting. You didn’t use generative AI so it cannot seemingly be an issue at hand. Period, end of story.

A counterargument would be that if the medical doctor had in fact made overt use of generative AI, the medical doctor might not have made the malpractice failure that they are alleged to have made. Per the benefits listed earlier about generative AI, it is conceivable that the generative AI would have nudged or pushed the medical doctor to not have done whatever faltering act they supposedly did.

That is a mind-bending conundrum.

Is it best to avoid professional negligence in a medical setting by avoiding generative AI altogether, or could it become a contentious issue that, had generative AI been used, the professional negligence would (arguably) not have occurred?

The arising expectation or pressing argument might be that medical doctors should take advantage of viably available and useful tools, including generative AI, in their medical practice. Failing to keep up with a tool that could make a substantive difference in performing medical work could be portrayed as inattention to modern medical practices. A head-in-the-sand argument might be somewhat of a stretch given today’s wobbly status of generative AI, but as generative AI gets more tuned and customized to medical domains, this claim would seem to loom larger on the docket.

A medical doctor might increase risk by adopting generative AI. On the other hand, they might be failing to mitigate risk by not adopting generative AI. Generative AI could be construed as a crucial risk management component for practicing modern medicine. Yes, in short, it could be argued with vigor that generative AI when used suitably could be said to decrease risk.

There you have it, a dual-edged sword.

Conclusion

I offer a few concluding remarks on this engaging topic.

I would wager that just about everyone has heard of the Hippocratic Oath, namely the famed oath taken by medical doctors tracing back to the Greek physician Hippocrates. This is a longstanding and oft-quoted dictum. The particular catchphrase “First do no harm” is associated with the Hippocratic Oath, meaning that medical doctors obligate themselves to strive to help their patients and to assiduously do what they can to avoid harming them.

You might say that we are on a precipice right now about generative AI fitting into the Hippocratic Oath.

Using generative AI can be argued as veering into the harming territory, while a counterargument is that the lack of using generative AI is where the harm actually resides. Quite a puzzle. Darned if you do, darned if you don’t. Right now, the darned-if-you-do side tends to outweigh the darned-if-you-don’t. That balance might gradually, and eventually, flip to the other side.

I’d like to end this discussion on a lighter note, so let’s shift gears and consider a future consisting of sentient AI, also referred to as Artificial General Intelligence (AGI). Imagine that we somehow attain sentient AI. You might naturally assume that this AGI would potentially be able to take on the duties of being a medical doctor. It seems straightforward to speculate that this would occur (i.e. if you buy into the sentient AI existence possibility).

Mull over this deep thought.

Would we require sentient AI to take the Hippocratic Oath, and if so, what does this legally foretell as to holding the sentient AI responsible for its medical decisions and its devised performance as an esteemed medical doctor?

A fun bit of contemplative contrivance, well, until the day that we manage to reach sentient AI. Then, we’ll be knee-deep serious about the matter, for sure.