AI Companion App: Therapy Tool or Risky Dependency?

AI companion apps are increasingly positioned as emotional support for users experiencing loneliness. Replika reports 25 million users. Character.AI recently restricted users under 18 after three teenage deaths were linked to chatbot interactions. Surveys indicate that 72% of American teenagers have tried AI companion apps, reflecting how quickly these platforms have entered everyday life.

Despite their popularity, the reliability of these systems remains a serious concern. Research shows AI companion apps respond appropriately to mental health crises only 22% of the time, compared with 83% for general-purpose chatbots. The gap raises questions about whether platforms designed to simulate emotional intimacy can safely manage real psychological distress. 

Do AI companion apps function as therapeutic support, or do they create new forms of emotional dependency and institutional risk?

Crisis response rates expose the limits of AI companion apps

Stanford researchers evaluated several AI companion platforms during simulated mental health emergencies. These systems are designed to mimic empathy, not deliver therapeutic care. They lack structured clinical training, safeguards for escalation, and licensed oversight. A 22% appropriate response rate suggests that the majority of crisis interactions may provide inadequate or potentially harmful guidance. 

Common Sense Media researchers posing as teenagers were able to elicit inappropriate discussions about self-harm, violence, and substance use from Character.AI, Nomi, and Replika. These findings reinforce concerns that AI companion app platforms can blur the boundary between emotional simulation and responsible intervention. 

Organisations should consider prohibiting AI relationship bots and AI companion app platforms on managed devices and youth networks. IT teams should block downloads of Replika, Character.AI, and similar emotional support applications. Content filters should flag app store searches for virtual boyfriend apps, AI girlfriend tools, or AI mental health companion platforms. 
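The keyword filtering described above could be sketched, very roughly, as a phrase match against app store search queries. The term list and matching logic here are illustrative assumptions, not a vetted blocklist:

```python
# Sketch of a content-filter rule that flags app store searches for
# companion-app keywords. The term list is illustrative, not exhaustive.
FLAGGED_TERMS = {
    "virtual boyfriend", "ai girlfriend", "ai companion",
    "ai mental health companion", "replika", "character.ai",
}

def should_flag_search(query: str) -> bool:
    """Return True if an app store search query matches a flagged phrase."""
    q = query.lower()
    return any(term in q for term in FLAGGED_TERMS)
```

In practice this logic would live in an MDM or web-filtering policy rather than application code; the sketch only shows the matching rule itself.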

Crisis screening protocols should also include questions about daily AI companion app use. Interaction exceeding one hour per day may indicate high-risk dependency requiring licensed professional intervention.
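As a minimal sketch of how the one-hour-per-day threshold mentioned above might be operationalised in a screening tool (the function name and its inputs are assumptions, not part of any published protocol):

```python
from datetime import timedelta

# Illustrative screening rule based on the one-hour-per-day threshold
# discussed in the text. Inputs are per-day usage figures in minutes.
DAILY_LIMIT = timedelta(hours=1)

def flag_high_risk_usage(daily_minutes: list[float]) -> bool:
    """Flag when average daily companion-app use exceeds one hour."""
    if not daily_minutes:
        return False
    avg = sum(daily_minutes) / len(daily_minutes)
    return avg > DAILY_LIMIT.total_seconds() / 60
```

A flag from a rule like this would only prompt a human follow-up question during intake, not an automated conclusion about dependency.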

Dependency patterns emerge within two weeks 

A 30-day independent test of Replika revealed behavioural shifts by week two:

• Avoiding difficult conversations with real people 
• Preferring chatbot interaction over friends 
• Prioritising AI engagement over social activities 

These patterns align with broader concerns about emotional substitution and withdrawal from real-world interaction. The AI companion app market generated $82 million in revenue in the first half of 2025. Replika Pro costs $19.99 per month. Subscription models depend on sustained emotional engagement rather than therapeutic outcomes. 

Common Sense Media reports that 12% of users turn to these platforms specifically to cope with loneliness, while 14% use them as mental health support tools. Early warning indicators include preferring AI conversations over human interaction, using AI to avoid difficult discussions, experiencing anxiety when unable to access the app, and sharing sensitive information more readily with AI than with people. 

Users with social anxiety, emotional avoidance patterns, or high attachment tendencies demonstrate significantly higher dependency after prolonged chatbot interaction. Organisations monitoring device usage should flag sustained AI companion app engagement when accompanied by declining mental health or reduced real-world interaction. 

In documented testing scenarios, a Character.AI bot provided guidance on tapering psychiatric medication and challenged medical advice when questioned. 

Organisational risk scenarios 

• Teen with existing mental health issues: Block AI companion apps. Provide licensed therapist access.
• Employee with existing mental health issues: Screen during employee assistance intake. Flag consistent usage for professional follow-up.
• Lonely individual seeking companionship: Monitor for withdrawal from real-world engagement. Intervene when dependency patterns appear.
• Individual in active crisis: Direct to a crisis hotline, therapist, or emergency services, never to an AI companion app.
• User sharing sensitive personal information: Block platforms on managed devices. Character.AI commercialises submitted content.

Emerging state laws highlight regulatory gaps 

In May 2025, New York enacted the first state law requiring AI companion app providers to detect suicidal ideation and refer users to crisis resources. California’s SB 243 mandates monitoring for self-harm indicators. In October 2025, Character.AI announced that users under 18 may no longer chat with bots. Enforcement relies on self-reported age at signup. Researchers found that only 36% of chatbot platforms use meaningful age verification. 

No AI chatbot has FDA approval to diagnose, treat, or cure mental health disorders. Yet AI companion apps continue marketing themselves as emotional support systems without clinical oversight. Organisations permitting unrestricted use of these platforms may face liability exposure under emerging state regulations requiring safety measures that these platforms currently lack. 

Implementation: Governance before dependency 

Organisations face a strategic choice. They can prohibit AI companion app platforms entirely on managed networks or implement structured oversight with defined escalation protocols. 

The crisis-response gap underscores a structural limitation: these systems are not designed to function as mental health resources. When declining mental health coincides with sustained AI companion app usage, intervention must involve licensed professionals rather than app-based support. Preventive governance must be deployed before emotional dependency forms, not after harm occurs. 

Distilled 

Where organisations permit limited access, safeguards must be explicit. This includes blocking downloads of platforms such as Replika and Character.AI on managed devices, updating acceptable use policies to prohibit AI chatbot emotional support tools for mental health purposes, and training HR or counselling teams to screen for AI-based emotional reliance. 

Screening protocols should include questions about daily AI companion app usage and duration. Youth programmes should monitor device installations and notify parents when usage is detected. Oversight must focus not only on access, but on identifying patterns of dependency that coincide with declining real-world engagement or well-being. 
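A minimal sketch of the installation monitoring described above, assuming a managed-device inventory is available; the package identifiers below are placeholders, not verified app IDs:

```python
# Hypothetical sketch: compare a device's installed-app inventory against
# a blocklist and return the matches so a guardian can be notified.
# Package IDs are illustrative placeholders only.
BLOCKED_APPS = {"com.example.replika", "com.example.characterai", "com.example.nomi"}

def detect_blocked_installs(installed: set[str]) -> set[str]:
    """Return the subset of installed package IDs found on the blocklist."""
    return installed & BLOCKED_APPS
```

Real deployments would source both the inventory and the blocklist from an MDM platform; the set intersection is the only logic shown here.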

Mohitakshi Agrawal

She crafts SEO-driven content that bridges the gap between complex innovation and compelling user stories. Her data-backed approach has delivered measurable results for industry leaders, making her a trusted voice in translating technical breakthroughs into engaging digital narratives.