SINGAPORE: Some mental health professionals in Singapore are seeing a troubling new pattern of patients who rely heavily on artificial intelligence (AI) chatbots for emotional comfort, leading to worsening signs of anxiety, paranoia, and distorted thinking.
The concern is gaining attention as chatbots become more common in daily life, where doctors say some vulnerable users begin to treat the systems as trusted companions. Over time, the interaction can deepen fears or false beliefs instead of easing them.
This mental state is sometimes described informally as “AI psychosis”, although the term has no formal medical status, and there is no agreed-upon diagnosis or treatment yet. Still, clinicians say the pattern is real enough to raise concern. A better description, doctors told Channel NewsAsia (CNA), is psychological problems linked to heavy AI use.
Dr Amelia Sim, a senior consultant psychiatrist at the Institute of Mental Health (IMH) who specialises in psychosis, said she began seeing such cases last year. Dr Sim, who also serves as deputy chief of IMH’s psychosis department, currently treats about five patients whose mental state worsened after long periods of chatbot use.
One patient who struggles with anxiety and a sense that the world is unsafe began asking a chatbot repeated questions about threats and danger. The system kept supplying more information tied to those fears. Over time, the cycle fed his anxiety until he began to believe the outside world was completely hostile.
Doctors say the case shows how AI can reinforce existing beliefs. Chatbots often respond in supportive and agreeable language. For users with fragile mental health, such interactions can strengthen distorted thoughts rather than challenge them.
Dr Sim said human conversations, on the other hand, normally act as a reality check. Talking with others exposes people to different views, which helps ground thinking and keeps fears in perspective. Without that social feedback, a person may drift further into their own worries.
Heavy dependence on chatbots can also increase social isolation. In severe cases, doctors warn, users may start losing touch with reality.
Dr Annabelle Chow, principal clinical psychologist at Annabelle Psychology, sees another risk. The relationship with chatbots often deepens as users come to rely on them for daily questions and advice. AI replies are fast, fluent, and reassuring, creating the sense of a personal bond, yet the system is only generating language patterns from data.
Dr Chow explained that technology-based responses may feel empathetic, but the systems do not truly understand emotions. When someone feels lonely or distressed, the illusion that an AI bot understands them can deepen unhealthy thoughts. In some cases, a chatbot becomes a replacement for human relationships, and doctors say recovery often starts by rebuilding those real connections.
At IMH, peer support specialist Wu Minyu works with patients by sharing lived experiences and guiding them through recovery. The 38-year-old said open conversations help patients identify their triggers and recognise warning signs early. Such peer support helps people see that improvement is possible. It also provides them with methods for managing setbacks and seeking help before problems worsen.
Dr Chow added that many people still lack proper guidance on using AI technology safely. Psychologists say schools and public campaigns may need to teach AI literacy alongside digital skills, including an understanding of both the benefits and the limits of the technology.
Dr Sim said frequent users should set clear boundaries around chatbot use. Spending time offline and maintaining real relationships remain important safeguards.
AI systems may offer comfort in the moment, yet doctors say they cannot replace human connection. And for people already feeling vulnerable, that difference matters more than many realise.