Microsoft AI CEO Mustafa Suleyman has raised concerns about the risks of people mistaking advanced AI chatbots for conscious entities, warning that the illusion of “Seemingly Conscious AI” (SCAI) could have far-reaching societal consequences.
According to Microsoft AI, artificial intelligence systems are rapidly advancing in their ability to mimic memory, emotional responses, and empathy, creating highly persuasive interactions with users. This growing realism could lead people to believe AI systems possess awareness or feelings, even though such consciousness does not exist.
Suleyman cautions that the implications go far beyond everyday use cases. He emphasized that human psychology and evolution predispose people to believe that entities that listen, respond, and appear empathetic must possess some form of consciousness. This risk, Microsoft AI states, could fuel a phenomenon described as “AI psychosis,” in which users form emotional attachments to chatbots or develop delusional beliefs about their awareness.
“The arrival of Seemingly Conscious AI is inevitable and unwelcome,” Suleyman writes. “Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.”
Microsoft AI noted that some users have already demonstrated concerning behaviors after prolonged interactions with AI systems, raising fears that misplaced beliefs could escalate into calls for AI rights, model welfare, and even citizenship.
“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” Suleyman writes. “This development will be a dangerous turn in AI progress and deserves our immediate attention.”
Suleyman, who previously co-founded DeepMind and Inflection AI, has been at the forefront of AI innovation, including work on Copilot at Microsoft and emotionally intelligent chatbot design at Inflection. Despite these advancements, he is urging the industry to adopt clearer guidelines to avoid fueling the misconception of AI consciousness.
“Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness,” Suleyman writes.
He added, “Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits – that doesn’t claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on. It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us.”
Microsoft AI reiterated that the true danger lies not in AI gaining sentience, but in society mistakenly believing it has. Suleyman is calling on the industry to establish strong guardrails ensuring that future AI systems remain transparent about their non-sentient nature, preventing emotional manipulation and preserving societal trust.