On August 4, 2025, Illinois Gov. J.B. Pritzker signed the Wellness and Oversight for Psychological Resources Act (HB 1806), which restricts the use of artificial intelligence to provide mental health therapy or to make therapeutic decisions without licensed clinical oversight, while still allowing AI for limited administrative or supplemental support. Against a backdrop of growing demand for behavioral health care, the AP reports that state action is accelerating because federal regulation has not kept pace, pushing policymakers toward a fast-moving, and often inconsistent, state-by-state patchwork.

As the article explains, states are taking notably different approaches. Illinois and Nevada have moved toward bans on AI “therapy,” while Utah has focused on guardrails such as privacy protections and clear disclosures that a chatbot is not human. Supporters argue these measures are intended to protect patients from tools that can mimic clinical care without the ethics, accountability, and judgment required in real therapy, especially as apps blur the line between “companionship,” “wellness,” and treatment.

But the AP underscores that regulation is only part of the story, and not necessarily a clean fix. Some apps have blocked access in states with bans, others are waiting for legal clarity, and many laws do not reach general-purpose chatbots, even though people may use them for mental health advice. The result is a confusing landscape for consumers and developers alike: rules vary by state, definitions are murky, and accountability can be difficult to enforce when products are updated quickly or marketed in vague ways.

At the same time, the article points to research suggesting the field is not purely speculative. A Dartmouth-based team published a randomized clinical trial of a generative AI mental health chatbot (“Therabot”) and reported symptom improvements over eight weeks, with every interaction monitored by a human reviewer who intervened when needed. The takeaway is a tension policymakers must now navigate: protecting the public from unsafe “therapy” claims without foreclosing carefully designed, clinically supervised tools that might expand access before people reach crisis.

