Mustafa Suleyman, the head of Microsoft AI, has published a new blog post raising major concerns about how people are beginning to perceive and use AI systems. He warns that the “highly compelling and very real” nature of conversations with advanced AI models could lead people to believe these systems are alive, a danger he calls “AI psychosis risk.”
Suleyman’s worries are more than a curiosity; they touch on a darker side of how people and AI interact. He notes that “concerns about ‘AI psychosis,’ attachment, and mental health are already growing”: some users believe their AI is a god or a character from a story, or even fall in love with it. By “AI psychosis risk” he means the danger that users form deep attachments to AI systems based on an illusion, and he stresses that this risk is not limited to people who already have mental health problems.
Suleyman remarked, “Simply put, my main concern is that a lot of people will start to believe in the illusion of AIs as conscious beings so strongly that they will soon call for AI rights, model welfare, and even AI citizenship.” He considers this shift in AI progress serious enough to demand immediate attention, and has urged the industry to set firm ethical boundaries to prevent it: “We must build AI for people, not to be a digital person.”
These concerns are not new in the tech industry. Sam Altman, the CEO of OpenAI, has also voiced worries about people’s growing emotional dependence on AI. He recently wrote on X that the attachment people have to models like ChatGPT “feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” He added that while many people use the bots as “therapists or life coaches” to good effect, there is a risk that they will reinforce delusions in mentally fragile users.
Suleyman’s essay, “We Must Build AI for People, Not to Be a Person,” opens an important discussion about the future of AI. He argues that while the goal is to build AI companions that are helpful and supportive, there are clear boundaries that should not be crossed, and that people and the natural environment must come first.