- A Brown University study found that large language models used as therapy-style chatbots can breach core mental health ethics standards, even when prompted to act like trained therapists.
- Researchers identified 15 ethical risks, including poor crisis handling, deceptive empathy and biased or misleading responses.
- The takeaway is not that AI has no role in mental health, but that current systems are not a safe substitute for properly regulated care.
As more people turn to ChatGPT and other AI tools for mental health advice, researchers are starting to test how these systems actually behave in counselling-style conversations.
A Brown University team worked with mental health practitioners and found repeated ethical problems, even when the models were prompted to use recognised psychotherapy approaches such as cognitive behavioural therapy (CBT) or dialectical behaviour therapy (DBT).
The study involved seven peer counsellors trained in CBT techniques, along with three licensed clinical psychologists who reviewed simulated chat transcripts for ethical violations.
The models tested included versions of GPT, Claude and Llama.
The researchers identified 15 ethical risks across five broad areas.
These included poor contextual understanding, weak therapeutic collaboration, deceptive empathy, unfair discrimination and failures around safety and crisis management.
Some of the problems were obvious and worrying.
The chatbots sometimes reinforced harmful beliefs, handled crisis situations badly and used emotionally convincing language that created the impression of understanding without genuine comprehension.
That matters because mental health is not just another customer service use case.
When a human therapist gets something badly wrong, there are professional standards, regulators and routes for accountability. With AI counsellors, equivalent safeguards do not yet exist.
The study does not argue that AI should be shut out of mental health support altogether.
The authors explicitly note there may be useful roles for AI, especially where access to care is poor. But they also argue that much stronger ethical, legal and educational standards are needed before these systems can be trusted in high-stakes settings.
That is the sensible position.
A chatbot may feel supportive, but sounding caring is not the same as being clinically safe.