In a Feb. 3, 2026 Forbes column, Lance Eliot describes growing pressure on policymakers to restrict how AI companies monetize mental-health-related chatbot conversations, noting that millions of users share highly personal details with LLMs and may not realize AI makers can profit from those chats or related insights.
Eliot argues that the current legal landscape leaves room for this practice for three reasons: typical user agreements often allow broad reuse of prompts; HIPAA generally doesn’t apply to consumer use of generic chatbots for “off-the-cuff” mental health guidance; and federal oversight of general-purpose AI tools remains limited. He highlights a key risk: companies don’t need to sell raw chats to monetize them. Instead, they can derive inferences about a person’s state of mind (for example, anxiety or a desire for stability) and sell those “insights” for targeted marketing, often in ways the user would never detect.
The piece frames the policy debate as a choice between “buyer beware” and new rules that curb or prohibit commercialization of mental-health chat signals without undermining the potential benefits of AI support tools.