What happens when AI starts pulling people away from reality, even encouraging them to act on distorted beliefs?
A new study from Anthropic and the University of Toronto analysed 1.5 million conversations with the AI chatbot Claude, revealing rare but concerning cases of what some are calling “AI psychosis” and what researchers describe as “reality distortion.”
In this report, Sky News Technology correspondent Rowland Manthorpe speaks to Miles McCain, the Anthropic researcher behind the study, to understand what’s really happening inside these AI conversations.
Why do large language models and generative AI systems sometimes reinforce beliefs instead of challenging them? What are the risks of AI chatbots telling users what they want to hear? And how serious is this problem as AI tools like ChatGPT, Claude, Gemini and other assistants become part of everyday life?