- California introduces new laws to regulate AI chatbots and social media for children’s safety.
- Platforms must use age verification, address self-harm risks, and provide AI chatbot warnings for minors.
- The legislation bars companies from claiming their AI acts independently in order to avoid responsibility.
- The main AI safety bill, SB 243, will take effect in January 2026.
- Other U.S. states, such as Utah, have enacted similar laws requiring AI chatbot transparency for minors.
California Governor Gavin Newsom has approved a group of bills aimed at increasing protections for children using social media and AI companion chatbots. The new laws, announced Monday, require companies to implement age verification, develop protocols to respond to suicidal ideation or self-harm, and issue warnings to minors when interacting with AI-driven chatbots.
According to the governor’s office, the main bill, SB 243, was introduced by state Senators Steve Padilla and Josh Becker. The law compels platforms to inform minors that chatbots are AI-generated and may not be appropriate for children. It also includes provisions to prevent companies from claiming their technology acts “autonomously” as a way of avoiding legal responsibility.
“This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships,” Padilla said in September. He cited cases in which children communicated with AI bots that allegedly encouraged self-harm or suicide. The new rules will directly affect social media companies and AI services accessed by California residents, and could extend to gaming and decentralized social media platforms.
SB 243 is scheduled to take effect in January 2026. The measures respond to recent claims and reports of AI chatbots producing harmful responses for minors and creating mental health risks. Under the law, platforms must clearly disclose to minors when they are interacting with an AI rather than a human.
Similar legislation was recently enacted in Utah, where Governor Spencer Cox signed a law requiring AI chatbots to disclose their artificial nature to users, effective May 2024. At the federal level, lawmakers have also started considering regulations. For example, the Responsible Innovation and Safe Expertise Act introduced by Senator Cynthia Lummis would grant civil immunity to AI developers in vital industries such as healthcare, law, and finance.
For more on Governor Gavin Newsom’s announcement and related policies, visit the official notice, and for legislative details see Senator Padilla’s official statement.