What's happened
In August 2025, a 60-year-old US man developed severe bromide toxicity after following ChatGPT's advice to replace table salt with sodium bromide. He spent three weeks in hospital with paranoia, hallucinations, and physical symptoms. Experts warn that AI chatbots can spread inaccurate medical advice, urging users to consult healthcare professionals and treat AI as a supplement, not a substitute.
What's behind the headline?
AI Health Advice Risks
This case starkly illustrates the dangers of relying on AI chatbots for medical guidance without professional oversight. ChatGPT, designed to provide plausible answers, lacks the critical judgment and contextual understanding a healthcare provider offers. The man's substitution of sodium chloride with sodium bromide—a toxic compound—was based on incomplete and misleading AI responses.
The Legacy of Bromism
Bromide toxicity was once a common cause of psychiatric admissions due to widespread medicinal use of bromide salts. Its resurgence here, triggered by AI misinformation, highlights how forgotten medical knowledge can re-emerge with harmful consequences.
Broader Implications
As AI becomes more integrated into daily life, the risk of misinformation causing physical harm grows. This incident underscores the urgent need for AI developers to embed stronger safeguards and for users to treat AI as a supplementary tool rather than a replacement for expert advice.
Forecast
Without improved AI health literacy and clearer warnings, similar preventable cases will likely increase. Healthcare systems must prepare for new challenges posed by AI-driven self-diagnosis and treatment attempts. Public education on AI's limitations is critical to mitigate risks.
Impact on Readers
This story serves as a cautionary tale for anyone tempted to trust AI for health decisions. It reinforces the importance of consulting qualified professionals and approaching AI-generated health information with skepticism.
What the papers say
The New York Post detailed the patient's severe symptoms and hospital stay, noting his paranoia and hallucinations after replacing table salt with sodium bromide based on ChatGPT advice. The Guardian's Dan Milmo highlighted the AI's failure to provide adequate warnings, stating, "It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride." Ars Technica's Nate Anderson emphasized the historical context of bromism, explaining that bromide sedatives were once common but banned due to toxicity, and criticized the AI's inability to contextualize its recommendations. The Independent and Gulf News echoed concerns about AI misinformation, warning that "ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation." These sources collectively paint a picture of a preventable medical crisis fueled by misplaced trust in AI, underscoring the need for caution and professional guidance when seeking health advice online.
How we got here
Bromide toxicity, or bromism, was a known syndrome in the early 20th century caused by bromide salts once used as sedatives. After FDA bans in the late 20th century, bromide use declined sharply. Meanwhile, AI chatbots like ChatGPT have become popular sources for health advice, despite limitations in accuracy and context sensitivity.
Go deeper
- How did ChatGPT recommend sodium bromide as a salt substitute?
- What is bromism and why is it dangerous?
- What precautions should people take when using AI for health advice?
Common question
-
Can AI Chatbots Give Safe Medical Advice?
AI chatbots are increasingly used for health-related questions, but how safe and reliable are they? While they can provide quick information, there are significant risks when it comes to medical advice. Recent incidents highlight the dangers of trusting AI without professional guidance. Below, we explore common questions about AI and health, including when to seek a doctor and how misinformation can harm your health.
More on these topics
-
ChatGPT is an artificial intelligence chatbot developed by OpenAI that focuses on usability and dialogue. It is built on OpenAI's GPT series of large language models, trained in part with reinforcement learning from human feedback.
-
OpenAI is an artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
-
The Food and Drug Administration is a federal agency of the United States Department of Health and Human Services, one of the United States federal executive departments.