In a new STAT interview, Google clinical director Megan Jones Bell describes how the company is redesigning Gemini to better support people who turn to AI chatbots during a mental health crisis, and why Google believes simply shutting those conversations down "could do more harm than good." Jones Bell says Google is embracing accountability for the mental health impacts of its AI products by making Gemini a safer, more helpful "bridge" to real-world support.
STAT reports that Google recently updated the Gemini app to surface crisis hotline information more prominently when the system detects that a person may be at risk of self-harm. Rather than fully disengaging, Gemini is designed to continue the conversation while repeatedly encouraging the user to seek outside support; in some cases it reassures users with language such as "I'm here to listen" while directing them toward professional help.
The interview highlights a core tension in behavioral health AI: how to design safety interventions that reduce risk without cutting off people who may be vulnerable, isolated, or seeking urgent support. For behavioral health leaders, the piece underscores the growing expectation that consumer AI tools will need clearer crisis pathways, stronger guardrails, and measurable improvements in safety-by-design.