
Newsom signs 'AI safety' law to 'build public trust' in technology

  Published in Science and Technology by the Washington Examiner. This article is a summary or evaluation of another publication and may contain editorial commentary or bias from the source.

California Pioneers AI Safety Regulation: Gov. Newsom’s New Law Sets a National Benchmark

On Thursday, California Governor Gavin Newsom signed into law a comprehensive set of regulations that will force developers of large‑scale artificial intelligence (AI) systems to undergo a rigorous safety‑testing regime before they can be marketed or deployed in the Golden State. Dubbed the California AI Safety Act, the measure is widely seen as the most detailed and far‑reaching AI‑specific law enacted in the United States to date. It will require companies that produce or deploy high‑impact generative AI systems—everything from language models that draft legal contracts to image generators that produce photorealistic imagery—to submit a “safety and impact assessment” and to register their products with a newly created state office.


What the Law Requires

Under the Act, AI developers must:

  1. Conduct a Formal Risk Assessment
    Companies must document the potential for harmful outputs, bias, or misuse. The assessment must evaluate the system’s training data, architecture, and intended use cases.

  2. Perform Safety Testing
    The law mandates that AI models undergo an independent, third‑party audit—often referred to as a “red‑team” test—to surface vulnerabilities.

  3. Submit a Safety Report
    Before a system can be sold or distributed, the developer must submit a detailed safety report to the new California Office of AI Safety. The report must outline test results, mitigation strategies, and a plan for ongoing monitoring.

  4. Publish to a Public Registry
    Once approved, the AI system’s safety report and any updates must be uploaded to a publicly accessible registry. This feature, the law’s most visible consumer‑protective element, gives consumers and regulators alike a concrete record of a system’s safety profile.

  5. Face Enforcement Action for Non‑Compliance
    The Office of AI Safety, under the California Attorney General’s purview, will have the authority to issue fines, demand remediation, or, in extreme cases, ban the deployment of non‑compliant AI systems.

The Act draws on a broad range of precedent—including the state’s own California Consumer Privacy Act (as amended by the CPRA) and the Federal Trade Commission’s guidelines on deceptive advertising—to create a framework that balances innovation with accountability.
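
For readers who want the compliance sequence in concrete terms, the checklist above can be sketched as a simple data model. The following Python is purely illustrative: the `SafetyReport` fields and the `ready_to_deploy` check are hypothetical stand‑ins for the statute’s requirements, not anything defined in the law itself.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative model of the Act's pre-deployment checklist.
# All field names are hypothetical; the statute defines no such schema.

@dataclass
class SafetyReport:
    system_name: str
    risk_assessment: str              # documented potential for harm, bias, or misuse
    audit_passed: bool                # outcome of the independent "red-team" test
    mitigations: list[str] = field(default_factory=list)
    monitoring_plan: str = ""         # plan for ongoing monitoring
    submitted_on: date | None = None  # date filed with the Office of AI Safety

def ready_to_deploy(report: SafetyReport) -> bool:
    """Mirror the Act's sequence: assess, audit, mitigate, and file before deployment."""
    return bool(
        report.risk_assessment
        and report.audit_passed
        and report.mitigations
        and report.monitoring_plan
        and report.submitted_on is not None
    )
```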


Who Must Comply?

The Act applies to any entity that designs, trains, or sells AI systems that:

  • Have a training dataset exceeding 50,000 data points or
  • Produce generative output (e.g., text, images, video) or have the potential to influence public opinion or decision‑making.

The law explicitly excludes small‑scale hobby projects and open‑source tools that do not cross the defined thresholds. It does not, however, exempt companies that use California‑based cloud services or that operate primarily in California but market globally; any system accessible to California residents is subject to the Act.
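
To make the coverage test concrete, here is a minimal sketch of the thresholds as described above. The 50,000‑data‑point figure and the generative/influence criteria come from the article; the function and parameter names are invented for illustration.

```python
# Hypothetical applicability check; names are illustrative, not statutory.

def is_covered(training_data_points: int,
               is_generative: bool,
               influences_decisions: bool,
               accessible_in_california: bool) -> bool:
    """A system falls under the Act if it crosses either threshold
    and is reachable by California residents."""
    crosses_threshold = (
        training_data_points > 50_000
        or is_generative
        or influences_decisions
    )
    return crosses_threshold and accessible_in_california

# Example: a globally marketed image generator reachable from California is covered.
assert is_covered(2_000_000, is_generative=True,
                  influences_decisions=False, accessible_in_california=True)
```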


The California Office of AI Safety

A key novelty of the law is the creation of a California Office of AI Safety (CAOAS), which will sit within the Attorney General’s office. According to a press release linked in the article, CAOAS will:

  • Maintain the public registry of AI safety reports
  • Coordinate third‑party audits
  • Provide guidance to developers on compliance
  • Serve as the enforcement arm for violations

The Office will also be required to publish annual reports on the state’s AI landscape, including the number of systems registered, common risk categories, and trends in remediation.


Industry Reactions

The response from the tech sector has been mixed. Major AI players—including OpenAI, Microsoft, and Alphabet—expressed cautious optimism, arguing that a clear, enforceable regulatory framework could reduce uncertainty and foster trust among users. In an open letter cited in the article, OpenAI’s chief policy officer said, “We welcome California’s leadership in setting standards that encourage responsible AI.”

Conversely, smaller startups and open‑source communities have voiced concerns about the regulatory burden. “This law could stifle innovation, especially for community‑driven projects that don’t have the resources to meet third‑party audit requirements,” warned a developer in a linked forum post.

The California Chamber of Commerce issued a brief statement calling for a streamlined exemption process for “non‑high‑impact” AI products, suggesting that the law’s broad definitions might unintentionally penalize low‑risk systems.


Context and Comparisons

California’s AI Safety Act is part of a broader national conversation. Colorado recently passed a model law requiring AI systems to have a “risk‑based safety plan,” while New York has drafted its own AI governance bill that focuses on data transparency. The federal government is still debating a national framework, and California’s legislation may influence the final shape of that policy.

The Washington Examiner’s article notes that the California law is, by far, the most prescriptive in the country. It compares the Act to the EU’s General Data Protection Regulation (GDPR); though the two differ in scope, the comparison underscores the idea that state‑level regulation can outpace and shape eventual federal action.


Timeline and Enforcement

The law becomes effective 90 days after the Governor’s signature, but companies must prepare a safety report and register their systems before the end of the compliance window or risk being barred from deployment. The Attorney General’s office has indicated that enforcement will be phased: the first year will focus on education and voluntary compliance, with penalties introduced in subsequent years.


What This Means for Californians

For everyday users, the Act’s most tangible benefit is the public registry. Consumers can look up a product’s safety profile before purchase, much like checking a food label. The law also aims to curb the risk of “deepfake” scams and other malicious AI‑generated content that has already spurred concerns across the tech community.

For developers, the law presents both a challenge and an opportunity. While the compliance process may be costly—especially for smaller firms—the transparency it demands can become a marketing advantage. Companies that highlight their rigorous safety testing may differentiate themselves in a crowded marketplace.


Final Thoughts

California’s AI Safety Act signals that the state is willing to lead—or at least test—the regulatory path for emerging technologies. Whether the Act will ultimately spur a national standard or prompt the federal government to adopt a more cautious approach remains to be seen. What is clear, however, is that the law has already sparked a vigorous debate among technologists, policymakers, and the public about the balance between fostering innovation and safeguarding society. In an era where AI systems are increasingly integrated into daily life, the Californian experiment may well become the blueprint that shapes the global conversation on AI safety.


Read the Full Washington Examiner Article at:
[ https://www.washingtonexaminer.com/policy/technology/3829840/newsom-signs-ai-safety-law/ ]