AI chatbots like ChatGPT are increasingly used for health questions, but how reliable are they? They can provide quick general information, yet relying on them alone for medical advice carries serious risks. In August 2025, doctors reported the case of a man in the US who, following ChatGPT's guidance, replaced table salt (sodium chloride) with sodium bromide and developed severe bromide poisoning. The case highlights the danger of trusting AI without professional consultation. Below, we answer common questions about AI and health to help you stay safe when seeking medical information online.
-
Can AI chatbots give accurate medical advice?
AI chatbots can provide general health information, but they are not a substitute for professional medical advice. They cannot examine you, run tests, or account for your individual medical history, so they cannot reliably diagnose or tailor treatment. The recent bromide poisoning case shows how dangerous incorrect AI guidance can be. Always consult a healthcare professional for medical concerns.
-
What are the dangers of following AI health tips?
Acting on AI health advice can carry serious risks, especially when the information is inaccurate or incomplete. In the case reported in August 2025, a man developed severe bromide toxicity after trusting AI guidance. Misinformation can cause harmful reactions, delay proper treatment, or encourage dangerous self-medication. Treat AI as a supplement to, not a replacement for, professional medical advice.
-
How can I tell if medical advice from AI is trustworthy?
To evaluate AI health advice, check whether it is consistent with reputable sources such as official health agencies or established medical websites, or ask a healthcare professional directly. Be especially wary of advice that suggests unapproved treatments or involves potentially dangerous substances. Remember that AI cannot replace personalized medical diagnosis and treatment.
-
What should I do if I get bad health advice from AI?
If you suspect an AI has given you incorrect or harmful advice, stop following it immediately and seek advice from a qualified healthcare provider. Report the misinformation to the chatbot provider if possible, and always verify health information against trusted medical sources before acting on it.
-
Are there any recent examples of AI causing health issues?
Yes. In a case report published in August 2025, a man in the US experienced severe bromide poisoning after following ChatGPT's guidance and replacing table salt with sodium bromide. He was hospitalized with hallucinations, paranoia, and other symptoms of bromide toxicity. The incident underscores the need for caution when using AI for health-related questions and for professional oversight of any medical decisions.