California’s New Social Media and AI Laws: A Turning Point for Children’s Online Safety
Written by: Lee Merreot, Esq., CIPM, CIPP/US, CIPP/E, CDPO
California has once again moved to reshape how technology companies must think about children’s experiences online. In October 2025, Governor Gavin Newsom signed a package of bills targeting harms to minors from social media and AI-driven services. The laws require new warnings and age verification mechanisms, impose first-in-the-nation rules for “companion chatbots,” and narrow certain legal defenses for AI developers — all with a stated goal of protecting children from exploitation, self-harm, and sexualized content.[1]
These legislative changes respond to mounting concern that today’s platforms, recommendation engines, and increasingly sophisticated AI systems can expose minors to unique and serious risks.
Overview of the new laws
Key elements of the statutory package include:
- Warning labels for social apps (A.B. 56). Platforms are required to display warning labels about the risks of prolonged social media use — an approach supporters liken to consumer warnings used for other risky products.[2]
- An “age verification signal” (A.B. 1043). Device makers, operating system vendors, and app stores must enable an age verification mechanism (a “signal”) that can communicate a user’s age bracket to apps, aiming to centralize and standardize how services treat minors.[3]
- Companion chatbot safeguards (S.B. 243). Operators of AI chatbots that function as companions are required to disclose that users are interacting with AI, provide regular reminders to minors to take breaks, establish protocols to respond to suicide or self-harm signals, and prevent minors from accessing AI-generated sexually explicit images.[4]
- AI accountability in litigation (A.B. 316). This law restricts the ability of defendants who developed, modified, or used AI to escape liability by arguing the AI “acted autonomously”[5] — effectively making human and corporate responsibility central in claims alleging AI-caused harm.
- Expanded remedies for deepfake sexual imagery (A.B. 621) and a model cyberbullying policy for schools (A.B. 772). These bills increase civil remedies for victims of nonconsensual sexually explicit deepfakes and require the California Department of Education to draft a model off-campus cyberbullying policy for local agencies to adopt.[6]
Why these laws matter beyond California
California is a regulatory bellwether for technology policy in the United States. Its laws frequently shape national industry practices because of the state’s large market and technology industry presence. When device makers, platform operators, and app stores implement technical changes to comply with California law — for example, a system-level age signal or chatbot disclosures — those changes often roll out nationally (and sometimes globally) for the sake of engineering simplicity and consistent user experience.
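To make the engineering implications concrete, below is a minimal, hypothetical sketch of how an app might consume a device-level age signal of the kind A.B. 1043 contemplates. The statute does not prescribe an API; the AgeBracket values, the get_os_age_signal() stub, and the default-setting logic are illustrative assumptions, not a real platform interface.

```python
from enum import Enum


class AgeBracket(Enum):
    """Illustrative age brackets; A.B. 1043 contemplates bracketed
    signals rather than exact birthdates. Actual brackets will be
    defined by the statute and platform implementations."""
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"
    UNKNOWN = "unknown"


def get_os_age_signal() -> AgeBracket:
    """Placeholder for a hypothetical OS/app-store API exposing the
    device-level age signal. No standard API exists yet; device makers
    and app stores will define their own interfaces."""
    return AgeBracket.UNKNOWN


def apply_minor_defaults(bracket: AgeBracket) -> dict:
    """Choose conservative, age-appropriate defaults from the signal,
    treating an unknown bracket the same as a minor's."""
    is_minor = bracket is not AgeBracket.ADULT
    return {
        "personalized_ads": not is_minor,
        "adult_content": not is_minor,
        "break_reminders": is_minor,  # cf. S.B. 243 for chatbots
        "unsolicited_contact": not is_minor,
    }


if __name__ == "__main__":
    settings = apply_minor_defaults(get_os_age_signal())
    print(settings)
```

Treating an unknown signal like a minor's is one conservative design choice; actual obligations will turn on the statute's text and any implementing guidance.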
Moreover, the laws reflect a broader shift in thinking about accountability for algorithmic and AI-driven harms: rather than treating technology as a black box, lawmakers are insisting on transparency, express disclosures, and human accountability when harms occur. A.B. 316’s limitation on the “autonomy” defense is especially notable because it attempts to keep human actors and businesses legally responsible when AI systems cause real harm.[7]
Practical concerns and unresolved questions
While the statutes introduce new protections, they also raise practical and legal questions that will matter to companies and families alike.
- Effectiveness of age verification. A centralized age signal could help reduce minors’ access to age-restricted services, but privacy advocates have flagged concerns about the sensitive personal information that age verification systems may need to collect and store, and about how poorly designed systems could undermine anonymity or increase risk exposure.[8]
- Operational burden and industry pushback. These measures — especially those involving age verification and chatbot safeguards — will require major technical changes and could raise compliance costs.[9] Google and Meta have publicly supported A.B. 1043, but other industry groups have voiced concerns about how device-based age checks could negatively impact shared accounts and user privacy.[10]
- Legal and litigation implications. A.B. 316’s limitation on the “autonomy” defense will invite litigation that tests the line between human conduct and machine behavior. Defendants retain other defenses, but plaintiffs may find it easier to attach liability to companies that deploy or integrate AI in high-risk contexts.
- Scope and unintended consequences. Critics argue that steps taken to protect children could also limit beneficial uses of AI and chatbots for education or mental health support if regulators and companies fail to strike an appropriate balance.[11]
What businesses, schools, and families can do now
For companies that build or deploy apps, AI, or platforms:
- Start planning compliance now. Where feasible, design for age-appropriate defaults and consider how to integrate required disclosures, break reminders, and reporting protocols into product user experience (UX) and backend systems.
- Revisit content moderation and safety protocols. Ensure escalation pathways for self-harm signals are robust, and review content generation safeguards to prevent minors from accessing sexualized or otherwise harmful generated content (a sketch of these mechanics follows this list).
- Legal risk management. Update contracts and insurance conversations to reflect emerging exposures tied to AI use and chatbots, and reassess product liability and professional liability postures in light of A.B. 316.
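By way of illustration, the sketch below shows how two of S.B. 243’s themes, self-harm escalation and break reminders for minors, might surface in a chatbot’s reply pipeline. The statute does not mandate any particular implementation; the safety_notices() function, the keyword list, and the three-hour interval are hypothetical, and real systems would rely on trained classifiers and clinically reviewed protocols rather than keyword matching.

```python
import time

# Deliberately naive placeholder: a production system would use a
# trained classifier plus human review, not a keyword list.
SELF_HARM_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

CRISIS_RESOURCE = "If you are in crisis, call or text 988 (US)."
BREAK_REMINDER = "You've been chatting for a while. Consider taking a break."


def safety_notices(message: str, is_minor: bool, last_reminder: float,
                   reminder_interval_s: float = 3 * 3600) -> list[str]:
    """Return safety notices to attach to the next chatbot reply.

    Sketches two S.B. 243 themes: an escalation path for self-harm
    signals and periodic break reminders for minors. The interval,
    wording, and detection logic are illustrative assumptions, not
    statutory requirements.
    """
    notices = []
    if any(term in message.lower() for term in SELF_HARM_TERMS):
        notices.append(CRISIS_RESOURCE)  # escalate per internal protocol
    if is_minor and time.time() - last_reminder >= reminder_interval_s:
        notices.append(BREAK_REMINDER)
    return notices


# Example: a minor's message containing a self-harm signal triggers the
# crisis resource and, since no reminder has been sent, a break reminder.
print(safety_notices("sometimes i think about suicide", True, 0.0))
```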
For schools and families:
- Follow the model policies. Local educational agencies will need to adopt cyberbullying policies; families and schools should engage in drafting or adapting policies that fit community needs.
- Use the laws as a conversation starter. The laws make clear that policymakers expect tech to play a role in safety; parents should combine platform features with active supervision, age-appropriate rules, and conversations about healthy online habits.
Looking forward
California’s new laws are unlikely to be the final word. They will be tested in courts, refined through rulemaking and technical standards, and possibly copied or adapted by other states and federal actors. Regardless, they signal an important regulatory trend: policymakers now expect technical systems to include child safety guardrails, transparency, and human accountability — and they are willing to impose statutory obligations to achieve those goals. For law firms advising clients in tech, education, or healthcare, this moment demands practical compliance planning, privacy-first engineering conversations, and strategic litigation readiness.
At The Beckage Firm, we are monitoring implementation and enforcement closely. If your organization needs help translating the new legal requirements into product changes, policies, or training, we can help you assess risk, draft compliance roadmaps, and prepare defensible practices that prioritize children’s safety while preserving innovation.
[1] California Assembly Bill No. 56, California Legislative Information (October 14, 2025), Bill Text – AB-56 Social media: warning labels;
California Assembly Bill No. 1043, California Legislative Information (October 14, 2025), Bill Text – AB-1043 Age verification signals: software applications and online services;
California Senate Bill No. 243, California Legislative Information (October 14, 2025), Bill Text – SB-243 Companion chatbots;
California Assembly Bill No. 316, California Legislative Information (October 14, 2025), Bill Text – AB-316 Artificial intelligence: defenses;
California Assembly Bill No. 621, California Legislative Information (October 14, 2025), Bill Text – AB-621 Deepfake pornography;
California Assembly Bill No. 772, California Legislative Information (October 13, 2025), Bill Text – AB-772 Cyberbullying: off-campus acts: model policy.
[2] Id.
[3] Id.
[4] Id.
[5] Id.
[6] Id.
[7] Rebecca Bauer-Kahan (Chair), Assembly Committee on Privacy and Consumer Protection (April 28, 2025), Assembly Bill Policy Committee Analysis.
[8] Robert Singleton, Re: Oppose AB 1043, Chamber of Progress (April 15, 2025), CA AB 1043 Wicks Age signaling – Oppose.
[9] Aden Hizkias, California’s Digital Age Assurance Act Mandates What Tech is Already Providing to Parents: AB 1043: Reinventing the Wheel, Chamber of Progress (October 3, 2025).
[10] Tyler Katzenberger, Newsom signs age verification law, siding with tech giants over Hollywood, Politico (October 13, 2025).
[11] Logan Kolas and Pablo Garcia Quint, First, Do No Harm: How California Chatbot Regulation Threatens Mental Health, American Consumer Institute Center for Citizen Research (September 30, 2025).