NY Lawmakers Pass Bill Addressing Frontier AI Risks


  • RAISE Act Overview: New York’s Responsible AI Safety and Education Act would impose strict safety requirements on labs that spend more than $100 million training frontier AI models.

  • Safety Measures: The act mandates publicly disclosed safety protocols, third-party audits, and reporting of AI incidents involving significant harm within 72 hours.

  • Critical Harm Definition: "Critical harm" is defined as an event causing at least 100 deaths or over $1 billion in damages, with civil penalties for violations reaching up to $30 million.

  • Whistleblower Protections: New anti-retaliation clauses safeguard employees who report substantial risks of critical harm, aiming to encourage transparency within AI labs.

  • Regulatory Landscape: Amid fragmented state-by-state rules and a slow federal response, the RAISE Act sets a precedent and positions New York as a leader in AI policy.
