Connecticut legislators are working through a package of bills to establish a policy framework that regulates artificial intelligence.
The General Law Committee heard testimony on Wednesday on two bills that would establish guidelines around AI use in the state. Senate Bill 5, An Act Concerning Online Safety, outlines a framework for regulating artificial intelligence and transparency around consumer data. The massive 97-page bill addresses a range of topics, from AI subscriptions and chatbots to automated decision-making, AI workforce training, and definitions of ‘catastrophic risks’ in AI development.
State Representative Nicholas Menapace of East Lyme spoke at the public hearing. He said the bills are a solid step forward in building an online safety protocol. Menapace, a member of the state’s AI caucus, said AI technologies are rapidly evolving and that the state needs to create a plan that both protects residents and fosters innovation.
“AI is a powerful tool, but like any powerful tool, it can be misused. Our responsibility is to make sure innovation does not come at the expense of the people who live in Connecticut,” Menapace said.
SB 5 would also establish an Artificial Intelligence Policy Office to oversee AI research and recommend new policies. The office would run under the Department of Economic and Community Development (DECD) and help inform the state's legal and regulatory frameworks.
DECD Commissioner Daniel O’Keefe said the bill would both strengthen AI governance and attract innovation to the state. He said the state has untapped opportunities to develop AI-ready talent and foster partnerships in the industry. O’Keefe said the new strategies would increase the state’s economic competitiveness.
“Clearly, I worry about artificial intelligence taking jobs, but I think what is more likely is that those who are equipped to leverage AI are becoming more competitive in the workforce,” O’Keefe said.
Other elements of SB 5 include expansions to the Connecticut AI Academy for workforce training, as well as anti-discrimination protections and disclosure requirements for companies that use automated AI systems to make employment-related decisions, such as hiring and firing. Companies that use AI tools like resume screeners or interview analysis software would be required to inform applicants that AI was used in the process and to give them the right to appeal if they suspect discrimination.
Another bill that addresses AI regulation directly was introduced by the governor’s office. Senate Bill 86, An Act Addressing Innovations in and the Responsible Use of Artificial Intelligence, focuses on using AI to advance economic development in Connecticut. It seeks to establish an AI regulatory sandbox program, which would allow companies to apply to test new technologies under the state’s oversight while complying with regulatory and other legal requirements.
Supporters of the bill hope that the regulatory sandbox will attract businesses to the state and encourage the deployment of artificial intelligence technologies. Governor Ned Lamont said in written testimony that the state should take advantage of AI innovation while ensuring the technology is used safely and responsibly.
“We also recognize the unique position Connecticut is in, being located between two major cities. So, this bill directs multiple executive branch agencies to collaborate and develop a proposal for an AI regulatory sandbox to make Connecticut the most attractive state in the region for AI development in targeted industries like insurance, finance, and health services,” Lamont said.
If passed, Lamont said the bill would expand the state’s Open Data Portal by directing agencies to release AI-ready datasets that could be useful for AI systems. Lamont said all existing data disclosure laws and regulations would continue to apply. The bill also recommends creating a Chief Data Officer position within the Office of Policy and Management. The Chief Data Officer would create a state data plan, direct agencies on how to use and manage data, and identify data for AI systems to support the state’s economic goals.
A major element of both SB 5 and SB 86 is a set of oversight requirements for AI companion chatbots. Under the proposed regulations, companies would need to disclose to users that the AI chatbot is not human, and protocols must be in place if the companion detects language associated with suicidal ideation or self-harm. Protections for minors, users under the age of 18, are more specific: chatbots would be prohibited from encouraging minors to engage in sexually explicit, illegal, or harmful conduct.