

A man who used ChatGPT for dietary advice ended up poisoning himself — and wound up in the hospital.
The 60-year-old man, who was looking to eliminate table salt from his diet for health reasons, used the large language model (LLM) to get suggestions for what to replace it with, according to a case study published this week in the Annals of Internal Medicine.
When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man made the replacement for a three-month period, although the journal article noted that the suggestion likely referred to bromide for other purposes, such as cleaning.
Sodium bromide is a chemical compound that resembles table salt but is toxic when consumed by humans.
It was once used as an anticonvulsant and sedative, but today is primarily used for cleaning, manufacturing and agricultural purposes, according to the National Institutes of Health.

When the man arrived at the hospital, he reported experiencing fatigue, insomnia, poor coordination, facial acne, cherry angiomas (red bumps on the skin) and excessive thirst — all symptoms of bromism, a condition caused by long-term exposure to sodium bromide.
The man also showed signs of paranoia, the case study noted, as he claimed that his neighbor was trying to poison him.
He was also found to have auditory and visual hallucinations, and was ultimately placed on a psychiatric hold after attempting to escape the hospital.
The man was treated with intravenous fluids and electrolytes, and was also put on anti-psychotic medication. He was released from the hospital after three weeks of monitoring.
"This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes," the researchers wrote in the case study.
"These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense."
"Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs."
It is "highly unlikely" that a human doctor would have mentioned sodium bromide when speaking with a patient seeking a substitute for sodium chloride, they noted.
"It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results and ultimately fuel the spread of misinformation," the researchers concluded.
Dr. Jacob Glanville, CEO of Centivax, a San Francisco biotechnology company, emphasized that people should not use ChatGPT as a substitute for a doctor.

"These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations," Glanville, who was not involved in the case study, told Fox News Digital.
"This is a classic example of the problem: The system essentially went, ‘You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions, so therefore it’s the highest-scoring replacement here.’"
Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, agreed that AI is a tool, not a doctor.

It is "highly unlikely" that a human doctor would have mentioned sodium bromide when speaking with a patient seeking a substitute for sodium chloride, the researchers said. (iStock)
"Large language models generate text by predicting the most statistically likely sequence of words, not by fact-checking," he told Fox News Digital.
"ChatGPT's bromide blunder shows why context is king in health advice," Castro went on. "AI is not a replacement for professional medical judgment, aligning with OpenAI's disclaimers."
Castro also cautioned that there is a "regulation gap" when it comes to using LLMs to get medical information.
"Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice."
"FDA bans on bromide don't extend to AI advice — global health AI oversight remains undefined," he said.
There is also the risk that LLMs can reflect biases in their training data and lack verification, which can lead to hallucinated information.
"If training data includes outdated, rare or chemically focused references, the model may surface them in inappropriate contexts, such as bromide as a salt substitute," Castro noted.
"Also, current LLMs don't have built-in cross-checking against up-to-date medical databases unless explicitly integrated."

To prevent cases like this one, Castro called for more safeguards for LLMs, such as integrated medical knowledge bases, automated risk flags, contextual prompting and a combination of human and AI oversight.
The expert added, "With targeted safeguards, LLMs can evolve from risky generalists into safer, specialized tools; however, without regulation and oversight, rare cases like this will likely recur."
OpenAI, the San Francisco-based maker of ChatGPT, provided the following statement to Fox News Digital.
"Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."