
Steyer Calls for AI Regulation Amid Growing Safety Concerns

Published in Science and Technology by the San Francisco Examiner
Locales: California; District of Columbia; Washington; United States

Washington, D.C. - March 11, 2026 - The debate over artificial intelligence safety escalated today as prominent philanthropist and former presidential candidate Tom Steyer renewed his call for robust government regulation of the rapidly evolving technology. Steyer's warnings echo a growing sentiment among tech leaders, ethicists, and policymakers that self-regulation by the tech industry is proving inadequate to address the societal risks posed by increasingly powerful AI systems.

Speaking to a gathering of tech journalists and policy experts this morning, Steyer reiterated his concerns, stating, "The pace of AI development is breathtaking, and frankly, terrifying. We are building systems with the potential to fundamentally reshape our world, and to assume that good intentions and voluntary guidelines will be sufficient to prevent harm is a dangerous gamble." Steyer's statement comes amidst a backdrop of increasingly sophisticated AI applications penetrating every facet of modern life, from healthcare and finance to education and national defense.

His primary argument centers on an inherent conflict of interest within the tech industry. Companies driven by profit motives, Steyer contends, are unlikely to prioritize safety and ethical considerations over innovation and market dominance. While many companies have established internal AI ethics boards and pledged to develop "responsible AI," Steyer believes these efforts lack the teeth necessary to ensure genuine accountability. "These internal boards are often advisory in nature, lacking the enforcement power to compel meaningful change," he explained. "What's needed is independent oversight, a body with the authority to set standards, conduct rigorous testing, and hold companies accountable for the consequences of their AI systems."

Steyer's proposal includes the establishment of a dedicated federal agency, the "AI Safety and Oversight Administration" (ASOA), tasked with overseeing all aspects of AI development and deployment. ASOA would be empowered to enforce safety standards, conduct independent audits of AI algorithms, and investigate incidents involving AI-related harm. The agency's purview would extend to all AI systems with the potential to affect public safety, economic stability, or civil liberties. Crucially, Steyer emphasizes the need for international collaboration: AI development is a global phenomenon, and effective regulation requires a coordinated international approach to prevent a "race to the bottom" in which countries compete to offer the least restrictive regulatory environments.

Beyond a federal agency, Steyer advocates for mandatory safety testing protocols. AI systems, particularly those deployed in critical infrastructure or high-stakes environments, would be subject to rigorous testing to identify potential biases, vulnerabilities, and unintended consequences. This testing would not be limited to technical performance but would also assess the system's social and ethical implications. Transparency in algorithmic decision-making is another cornerstone of Steyer's proposal. He argues that individuals have a right to understand how AI systems are making decisions that affect their lives, particularly in areas like loan applications, hiring processes, and criminal justice.

The concerns Steyer raises are not new, but they are gaining increasing traction. Several prominent figures in the tech industry, including Geoffrey Hinton (often considered the "godfather of deep learning"), have publicly warned about the existential risks of advanced AI. Furthermore, a recent report by the National Institute of Standards and Technology (NIST) highlighted the urgent need for a national strategy to address AI safety and trustworthiness. The report specifically calls for increased investment in AI safety research, the development of standardized testing procedures, and the creation of a national AI risk registry.

However, implementing Steyer's vision faces significant hurdles. Opponents argue that excessive regulation could stifle innovation and hinder the economic benefits of AI. Concerns have also been voiced about the practicality of regulating such a rapidly evolving technology, with some arguing that regulations could quickly become outdated. Steyer acknowledges these challenges but insists that the risks of inaction far outweigh the potential drawbacks of regulation. "We cannot afford to wait until disaster strikes before taking action," he warned. "The future of our society depends on our ability to harness the power of AI responsibly and ethically."

The debate is expected to intensify in the coming months, with lawmakers grappling with how to balance the need for innovation with the imperative to protect public safety and ensure a fair and equitable future in the age of artificial intelligence.


Read the Full San Francisco Examiner Article at:
[ https://www.sfexaminer.com/news/technology/steyer-advocates-safety-limits-on-ai/article_1087fdd1-f66e-4512-a956-9e4374148399.html ]