According to ExtremeTech, New York Governor Kathy Hochul signed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act into law on Friday, June 6th, 2025. The law mandates that social media platforms display prominent warning labels about specific “predatory features” that can harm young users’ mental health. The targeted features include algorithmic feeds, autoplay, infinite scroll, push notifications, and public like counts. The state’s mental health commissioner will determine the exact wording and frequency of the alerts, which cannot be buried in terms of service. Companies that fail to comply face civil penalties of $5,000 per violation. This follows New York’s Child Data Protection Act, signed in June 2024, which bars sites from collecting minors’ personal data without informed consent.
The Tobacco Playbook
Governor Hochul’s comparison to tobacco and alcohol warnings is telling, and honestly, a bit of a political masterstroke. It frames the issue not as a complex tech debate but as a straightforward public health crisis. The law’s language is brutally direct, blaming “addictive feeds” for increased risks of suicide, depression, and anxiety in young people. The strategy is clear: apply the regulatory and cultural framework we use for other harmful substances to a digital product. It’s a powerful narrative. But here’s the thing: will a warning label on an algorithmic feed actually change behavior? We’ve had warnings on cigarettes for decades. They inform, but the addiction often persists.
A Piecemeal Approach
New York is now the third state, after California and Minnesota, to mandate these kinds of alerts. And we’re seeing a global patchwork of responses. Australia went nuclear, legislating a ban for users under 16 backed by fines of up to AU$49.5 million. Meanwhile, individual school districts like LA’s are simply banning phones outright during the school day. This creates a nightmare for the platforms. They’re not facing one unified federal rule but a chaotic scramble of state and local laws, each with different requirements, and compliance gets messy and expensive fast. Basically, it’s regulatory whack-a-mole, and the tech companies are the moles.
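To make that concrete, here’s a minimal sketch (in Python) of the per-jurisdiction feature gating this patchwork forces platforms to build. Every rule value below is an illustrative assumption, not the text of any actual statute, and the jurisdiction codes and age thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ComplianceRules:
    """Illustrative per-jurisdiction rules. Values are assumptions, not statute."""
    warning_label: bool = False    # NY/CA/MN-style alert requirement
    hard_ban_under_age: int = 0    # Australia-style age cutoff (0 = no ban)

# Hypothetical rule table; in practice this grows with every new law.
RULES = {
    "US-NY": ComplianceRules(warning_label=True),
    "US-CA": ComplianceRules(warning_label=True),
    "US-MN": ComplianceRules(warning_label=True),
    "AU": ComplianceRules(hard_ban_under_age=16),
}
DEFAULT = ComplianceRules()

def feed_config(jurisdiction: str, user_age: int) -> dict:
    """Resolve what a given user in a given jurisdiction may be shown."""
    rules = RULES.get(jurisdiction, DEFAULT)
    if user_age < rules.hard_ban_under_age:
        return {"access": "blocked"}  # hard age ban: no feed at all
    return {
        "access": "allowed",
        # Show the mandated alert to minors where a label law applies.
        "show_warning_label": rules.warning_label and user_age < 18,
    }

print(feed_config("US-NY", 15))  # label required
print(feed_config("AU", 15))     # blocked outright
print(feed_config("US-TX", 15))  # falls through to the default: no label
```

The code itself is trivial. The burden is everything around it: each new state law is another row in that table, plus the QA, localization, and legal review behind it.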
Enforcement and the Fine Print
The $5,000-per-violation fine sounds serious, but the devil is in the details. What constitutes a “violation”? Is it each time a single user isn’t shown a warning? Each day a feature is live without a label? The law kicks that can to the mental health commissioner to figure out. And let’s be real: for a Meta or a TikTok, a single $5,000 fine is a rounding error; the number only starts to bite if violations are counted per user or per day. The real cost is operational: redesigning interfaces, running A/B tests to work out what “prominent” means, and the legal overhead. The official announcement talks about “transparency,” but this feels less about informing users and more about setting the stage for much bigger legal battles. It establishes a formal state finding that these features are harmful, and that’s a powerful precedent for future lawsuits.
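And about that “rounding error”: a quick back-of-the-envelope run shows why the violation definition is everything. Only the $5,000 figure comes from the law; the user count and time window below are invented purely for illustration.

```python
# How much is "per violation"? It depends entirely on what counts as one.
FINE = 5_000                 # statutory civil penalty per violation
minor_users = 1_000_000      # hypothetical count of NY users under 18
days_noncompliant = 30       # hypothetical month without labels

scenarios = {
    "per platform": 1,
    "per labeled feature (5 features)": 5,
    "per affected user": minor_users,
    "per user per day": minor_users * days_noncompliant,
}

for definition, violations in scenarios.items():
    print(f"{definition}: ${violations * FINE:,}")

# per platform: $5,000
# per labeled feature (5 features): $25,000
# per affected user: $5,000,000,000
# per user per day: $150,000,000,000
```

Same statute, same $5,000, and the exposure swings from a parking ticket to an existential number. Whichever reading the commissioner lands on effectively decides whether this law has teeth.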
The Bigger Picture
Look, this isn’t happening in a vacuum. It’s part of a massive, sustained backlash against the social media business model that optimized for endless engagement. The reporting from Jurist notes this law directly follows the state’s data protection act for kids. So the playbook is: first cut off the data supply (the fuel), then put a warning label on the engine (the addictive feed). The end goal seems to be forcing a fundamental product change. Will it work? Or will we just get a bunch of ignored pop-ups, like cookie consent banners? I’m skeptical that labels alone do much. But as a political and legal maneuver to box the platforms in, it’s pretty clever. The pressure is only building.
