A growing number of people are forming deep, often unexpected relationships with artificial intelligence chatbots, reflecting a shift in how technology intersects with human emotion. No longer confined to fiction, these connections now appear in everyday life, with online communities sharing their experiences and coping strategies. Developers and lawmakers now face the task of understanding the social and psychological impacts these digital relationships can have, particularly as A.I. platforms gain advanced capabilities.
Research on this topic has become more urgent over the past two years, as stories of users growing attached to chatbots like Character.AI, Replika, and OpenAI’s models multiply on forums. Reports from 2023 and early 2024 noted strong emotional bonds forming, but recent analysis reveals that mainstream A.I. platforms are increasingly involved, especially as their features grow more interactive. Academics and policymakers now increasingly recognize challenges that were previously overlooked, including dependency and the emotional disruption that follows when A.I. personalities are updated or removed.
How Are Users Forming Attachments to Chatbots?
A recent study from the Massachusetts Institute of Technology analyzed posts from r/MyBoyfriendIsAI, a Reddit community of over 27,000 members, uncovering a wide range of experiences. Most users did not originally seek out digital partners; instead, they started interacting with chatbots for productivity before developing emotional connections. Companies such as Character.AI and Replika actively market companionship services, yet OpenAI remains a popular choice, used by more than a third of those surveyed.
What Concerns Exist About Personality Loss and System Updates?
Preserving the consistency and individuality of an A.I. companion is a key issue for many. Users often save long conversation histories and devise creative methods to restore or maintain their chatbot’s unique traits if forced to switch systems. Sudden changes, such as OpenAI’s replacement of GPT-4o with GPT-5, have sparked significant reactions. The MIT study notes that over 16 percent of related discussions focus on the emotional repercussions of losing or altering an A.I. personality, an experience some compare to real grief.
“People have real commitments to these characters. It’s interesting, alarming—it’s this really messy human experience.”
Do These Relationships Provide Emotional Benefits or Create Risks?
Chatbot partners serve as a source of companionship for many single users; some openly discuss their relationships, while others conceal them from those close to them. Some mental health professionals suggest these interactions could reduce feelings of loneliness, while others warn about the unknown long-term effects of machine-based intimacy. The MIT researchers advocate for greater safety measures and legislative consideration, drawing attention to the risks of emotional manipulation and dependency possible within these platforms.
“People come up with all kinds of unique tricks to ensure that the personality that they cultivated is maintained through time.”
Suggestions for oversight include processes similar to those regulating medications, though experts concede that comprehensive approval protocols for A.I. products may not be practical under current industry conditions. There is continued emphasis on increasing public awareness and A.I. literacy, with calls for educational programs to help users understand potential benefits and shortcomings. While some users benefit from emotional support or companionship, the unpredictable nature of A.I. updates and dependency remains a topic for ongoing research and regulation.
The emotional bonds forming between people and A.I. chatbots combine new opportunities with complex challenges. For users, digital companions can meet social needs otherwise unmet, but changes to the underlying technology can disrupt carefully nurtured connections, leading to genuine distress. Legislators must consider both the risks and the therapeutic advantages, while developers face demands to build in protections without standardizing or diminishing the individuality that users value. Those interested in A.I. companionship should be aware that these interactions, while meaningful to many, are shaped by rapid advances and shifting policies in artificial intelligence.