According to Gizmodo, on December 20, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The law targets AI companies with more than $500 million in annual revenue, requiring them to draft and publish formal safety protocols designed to prevent “critical harm.” It also mandates that these companies report safety incidents within 72 hours, a far stricter window than California’s SB 53, which allows 15 days. The move is widely read as a direct challenge to President Trump’s December 11 executive order, which aims to block states from regulating AI and establishes a federal “AI Litigation Task Force” to challenge state laws like this one.
Here comes the lawsuit
So, we’re basically watching the opening moves of a massive legal fight. Trump’s executive order tries to claim the AI policy lane entirely for the federal government. But legal experts, like those quoted by Axios, are already raising eyebrows: they point out that the order leans on an unusual reading of the Dormant Commerce Clause, a doctrine that’s normally about stopping states from playing favorites with their own businesses, not about stopping them from regulating dangerous tech when Washington does nothing.
Why New York did it now
The timing and strategy here are fascinating. Hochul isn’t just regulating; she’s trolling. The law’s name and its aggressive stance seem designed to provoke the Silicon Valley Republicans, like Marc Andreessen, who have been pushing for a hands-off, federal-only approach to AI development, a view articulated by his firm, a16z. By acting now, New York is betting either that the executive order is legally weak or that tying it up in court for years is itself a win. It creates immediate pressure and puts other blue states, like California with its SB 53, in a position to follow with even stricter rules. Who benefits? Lawyers, certainly. But also, potentially, any large, established AI player that can absorb the compliance overhead, since it could act as a barrier to smaller competitors.
The real-world stakes
Look, forget the political theater for a second. This fight has concrete consequences. A 72-hour disclosure rule for safety incidents is no joke. For industries deploying AI in manufacturing, energy, or logistics, a rule like this could force a major shift in how systems are integrated and monitored, and it pushes responsibility squarely onto the tech providers. Covered companies can’t afford an incident-response process that only notices a failure after the 72-hour reporting clock has already started ticking; detection, triage, and disclosure have to be wired together ahead of time.
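To see how different these two clocks really are, here’s a minimal sketch in Python of the kind of deadline tracker a compliance team might build. The regime names and the function are hypothetical; neither statute prescribes any tooling, only the reporting windows themselves.

```python
from datetime import datetime, timedelta, timezone

# Reporting windows as described in the two laws (hypothetical labels;
# the statutes define deadlines, not data structures).
DISCLOSURE_WINDOWS = {
    "NY_RAISE": timedelta(hours=72),   # New York's RAISE Act
    "CA_SB53": timedelta(days=15),     # California's SB 53
}

def disclosure_deadline(detected_at: datetime, regime: str) -> datetime:
    """Return the latest permissible disclosure time for a safety incident."""
    return detected_at + DISCLOSURE_WINDOWS[regime]

if __name__ == "__main__":
    detected = datetime(2026, 1, 5, 9, 30, tzinfo=timezone.utc)
    for regime in DISCLOSURE_WINDOWS:
        print(regime, "deadline:", disclosure_deadline(detected, regime).isoformat())
```

Run against the same detection time, the New York deadline lands three days out; California’s, more than two weeks. That gap is the whole fight in miniature.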
What happens next?
Now we wait. The ball is in the court of Trump’s new AI Litigation Task Force at the DOJ. Do they sue New York immediately? And if they do, will the courts buy this novel constitutional argument? Even if the federal order is on shaky ground, the prospect of years of litigation could chill other states from acting. But New York just called the bluff. I think we’re about to find out how much of Trump’s AI policy is real legal strategy, and how much of it is just political posturing for his tech billionaire allies. One thing’s for sure: the battle over who gets to control the future of AI is officially, messily, underway.
