New York Passes Landmark AI Safety Bill, Heads to Governor

What happens when artificial intelligence becomes powerful enough to cause real-world disasters? New York isn’t waiting to find out.

On June 15, 2025, the state legislature passed the Responsible AI Safety and Education (RAISE) Act (A.6453/S.6953), making New York the first state in the nation to impose legal obligations on developers of high-risk AI. If signed by Governor Hochul, the law will take effect on January 1, 2026, and target only the most advanced AI systems—those capable of catastrophic harm if things go wrong.

The RAISE Act applies to frontier models trained using $100 million or more in computing resources that have the potential to cause major damage, such as 100 or more deaths or $1 billion in economic losses. It doesn’t touch everyday AI tools. Instead, it focuses squarely on the kinds of models being built by the largest tech companies and labs operating at the edge of current capabilities.

Under the new law, developers must submit safety and transparency plans before releasing qualifying models. These plans must include internal test results, safeguards to prevent critical harm, and redacted summaries for public review. Developers will also be subject to annual independent audits conducted in line with best practices, and must disclose the computing power used to train their models.

If a serious incident occurs, such as the model being misused or accessed by bad actors, developers must report it within 72 hours to the New York Attorney General and federal security agencies. The short timeline reflects the high-stakes nature of these systems and the rapid pace at which risks can unfold.

The law carries teeth. Civil penalties can reach $30 million, with fines of up to 5 percent of total compute costs for a first offense and 15 percent for repeat violations. These figures are meant to ensure that compliance isn’t optional for companies working at this scale.

The RAISE Act was inspired by California’s SB 1047, which was vetoed in 2024 amid innovation concerns. New York’s version narrows the scope, focusing only on the most dangerous systems. Whether this becomes a model for national regulation or draws legal challenges from the tech industry, one thing is clear: New York isn’t sitting on the sidelines when it comes to frontier AI.
