Last of 2 parts
Read: Part 1 | When comfort isn’t human: Can AI fill the void of connection?
Trigger warning: Mention of suicide
Bea* and Maria* leaned on generative artificial intelligence (Gen AI) chatbots like trusted companions. But no matter how responsive, these systems fail to replicate the warmth of human connection and the shared understanding of someone who experiences life as humans do. Their experiences exposed the current limits of these systems, and why understanding those limits matters.
Turning to Gen AI chatbots for therapy or companionship is quietly becoming a trend. It ranked first in a 2025 list of the top 100 Gen AI use cases, an updated edition of author Marc Zao-Sanders’ initial report published in the Harvard Business Review in 2024. Therapy and companionship marginally surpassed decision-making and creativity as reasons users turn to Gen AI.
IBM defines Gen AI as a type of artificial intelligence designed to create new content, ranging from text and images to audio and video, by learning patterns from large amounts of data.
It differs from traditional AI in that it produces output resembling human-made work by interpreting the data it receives, not just by following a fixed series of instructions. That is why it can generate outputs tailored to each user.
Companies that develop Gen AI mine data by scraping public websites and using publicly available datasets from sources like Wikipedia and government archives, data that can range from the latest celebrity gossip to the best books in history. They also use internal data collected from their own systems, gathered from users who have allowed it in their settings.
This helps explain why Gen AI can be better at comforting users than some friends or family are. It’s a large language model (LLM) that can mimic a therapist pretty easily.
But beware, it’s not fail-safe.
Tragic case
Parents of a teenage boy who died by suicide in April filed a lawsuit against OpenAI and its CEO Sam Altman in August. They alleged that their son had been confiding in the company’s chatbot, ChatGPT, for months before his death, but it failed to recommend that he seek professional help. Instead, it helped him look for suicide methods and even offered to write his suicide note.
Normally, ChatGPT is programmed to respond with mental health helplines when faced with this kind of interaction. In the boy’s case, however, that safeguard failed.
In an emailed statement to Time, OpenAI wrote: “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
The family’s lead attorney, Jay Edelson, told CNN that Altman recklessly endangered the chatbot’s users when he pushed to release GPT-4o without completing its safety testing, in a race to beat Google’s Gemini. A few safety officers resigned in protest. According to Edelson, the move increased OpenAI’s valuation from $86 billion to $300 billion. “The idea that they are doing a pretty good job, but could do better, is just not true,” he said.
Need for broader regulation
ChatGPT was launched in late 2022. It is now the most visited AI website globally, logging 5.38 billion visits in August and ranking fifth among the most visited websites in the world, just after Instagram.
In the Philippines, it ranks even higher, taking the fourth spot with over 111 million visits, right after Facebook.
OpenAI recently published a blog post about how its systems can “fall short” despite current safeguards. One part reads: “We are continuously improving how our models respond in sensitive interactions, and are currently working on targeted safety improvements across several areas, including emotional reliance, mental health emergencies, and sycophancy.” The company cites big plans for future development, such as to “expand interventions to more people in crisis,” but critics have been highlighting the dangers of its profit-driven approach.
Sycophancy refers to the servile behavior of AI models towards users. A chatbot is programmed to end its responses with a question or an offer to probe deeper into what the user needs from it. Through this, individuals seeking emotional support or companionship become vulnerable to ever longer interactions with the chatbot, the very kind of exchange that, by the developers’ own admission, makes its safeguards less reliable.
Still, OpenAI claims: “Our goal isn’t to hold people’s attention. Instead of measuring success by time spent or clicks, we care more about being genuinely helpful.”
The Philippines has yet to pass a law that regulates the use of AI for emotional or psychological support. Existing measures like the Data Privacy Act of 2012 and the National Privacy Commission’s AI guidelines apply only when sensitive personal information is processed. They address data protection rather than the ethical or psychological risk of AI use in mental health contexts. This gap underscores the need for broader regulation as AI becomes more embedded in everyday life.
Why AI literacy is critical
According to Dr. Noahlyn Maranan, psychology professor at the University of the Philippines Los Baños, Gen AI, through its rich collection of data, could offer insights to users that are “beyond what any single person in their immediate circle could provide.”
It is for this reason that they can offer comprehensive, human-like responses to prompts. Responses also come in real time, which, Maranan points out, caters to the psychological need for quick feedback and support. This helps explain why more and more individuals are turning to Gen AI chatbots for therapy and companionship. Plus, no subscription is required.
“Many users may find cognitive satisfaction in understanding the big picture of their lives — being able to step out of one’s shoes and view it through the broad collective knowledge afforded by AI. It also helps that AI like ChatGPT can simulate human-like empathy in its responses,” Maranan explained.
To her, AI can take on the role of a disembodied voice that can function as a kind of attachment object for some individuals. It can offer a private, nonjudgmental outlet for processing emotions. At the same time, Maranan cautioned that such interactions cannot replace the embodied connection of human relationships — “being unable to talk to it as a person and lacking the deeper satisfaction of being seen, heard, and hugged by another human who, like them, breathes, lives, and exists beyond the screen.”
Maranan stressed the importance of critical literacy when engaging with AI tools. She warned that users need to understand both the potential and the limitations of the information and advice these systems provide, to ensure that they complement, rather than replace, the strengths of culturally embedded support, like having another person to bare your heart to.
“Critical literacy in engaging with these tools should be in place to ensure that AI does not do more harm than good,” she said.
AI’s accessibility contributes to its popularity. Gen AI tools like ChatGPT and Gemini are free and require no subscriptions or appointments, filling gaps for individuals who cannot easily access professional care.
In the context of the Philippines where access to mental health professionals is limited, this option can be understandably appealing.
Challenges to mental health care in PH
While AI is advancing quickly, mental health care in the Philippines lags far behind and faces persistent challenges. In its mental health strategic framework for 2024 to 2028, drafted with the World Health Organization (WHO), the Department of Health (DOH) cited the lack of comprehensive local mental health data as a major challenge in crafting evidence-based intervention plans.
With roughly 500 psychiatrists and 1,600 psychologists serving a country of over 110 million, it’s no wonder that AI has become a convenient and accessible alternative for many. The numbers describe a ratio of one psychiatrist to every 220,000 people and one psychologist to every 68,000-plus people. To make matters worse, these professionals are concentrated in Metro Manila.
Aljohn Puzon Estrella, a UPLB assistant professor of anthropology whose research interests include digital anthropology, observed that Gen AI is redefining the new generation’s concepts of emotional intimacy and safe spaces. Estrella explained that cultural norms around trust, shame, family expectations, and religious practice in the context of mental health have further shaped some Filipinos’ preference for AI.
“This trend shows how Filipinos creatively adapt technology to fill gaps in their social and health systems,” he noted.
Expressing a hint of optimism for the future with such systems in place, the UPLB professor said, “Filipinos will find ways to adapt, blending these new relationships with our culture of kalinga and pakikipagkapwa (care and interpersonal relationships).”
Bea’s and Maria’s stories are only two cases, but they offer a glimpse into how people are navigating this evolving phenomenon. Through AI companionship, both discovered that what they still longed for, in the end, was human connection. A technology meant to replicate human closeness without the need for other people only reminded them of what was missing.
As more people turn to AI for emotional support, questions remain about what relying on these systems will mean for the future of human connection. When machines can provide comfort at the cost of something deeply human, is the price worth paying? – Rappler.com
*Names of the subjects were changed to protect their privacy.
The Department of Health has national crisis hotlines to assist people with mental health concerns: 1553 (landline), 0966-351-4518, and 0917-899-USAP (8727) (Globe/TM); and 0908-639-2672 (Smart/Sun/TNT).
Princess Leah Sagaad is a 2025 Aries Rufo Journalism Fellow. She earned her Development Communication degree from the University of the Philippines Los Baños and previously served as associate managing editor for short-form reporting at Tanglaw.