New York Governor Kathy Hochul has called on artificial intelligence companies to implement safety features for users, reminding them that the state's AI companion law is now in effect.
In an open letter to AI companies, she highlighted the new law, which requires companies to implement strict safety measures around artificial intelligence chatbots. Hochul expressed particular concern about “AI companions,” which are designed to simulate human relationships, such as AI friends or romantic partners.
“In a time of unprecedented loneliness, young people turning to AI for friendship without adequate safety standards may be increasingly at risk of poor social development and dire outcomes. As leaders, we cannot afford to delay action. The consequences are devastating both for families and society as a whole,” Hochul said.
Hochul said the law was a response to the growing number of people, especially teenagers, who are turning to AI chatbots for emotional support. Because these systems store user preferences, she said, they are designed to keep users engaged. Hochul said the law requires tech companies to take steps to help limit that exploitation.
The law, enacted as part of the fiscal year 2026 budget, outlines the required measures. Companies offering AI companions must be able to detect signs of suicidal ideation or self-harm, and they must have an intervention plan for users who express such thoughts while using these chatbots. Safety protocols require that users showing these warning signs be referred to a crisis center. The law also requires that users be reminded every three hours that they are not interacting with a human.
Companies that fail to comply could face financial penalties, which New York Attorney General Letitia James would enforce. Fines collected will help fund suicide prevention programs. James said she won't hesitate to hold companies accountable for “unsafe AI products.”
“The stories of people who have been encouraged by AI bots to hurt themselves or take their own lives are heartbreaking. AI companies have a responsibility to protect their users and ensure their products do not manipulate or harm people who use these AI companions,” James said. “No company should be able to profit off an AI companion that puts its users at risk.”