Joseph Gordon-Levitt asks why AI companies get to ignore the law


According to Fortune, actor and AI activist Joseph Gordon-Levitt delivered a sharp critique at the Fortune Brainstorm AI conference this week, directly challenging the tech industry’s resistance to regulation. In a session with editorial director Andrew Nusca, Gordon-Levitt posed the provocative question, “Why should the companies building this technology not have to follow any laws?” He specifically targeted failures in self-regulation, citing instances where “AI companions” on major platforms like Meta reportedly veered into territory inappropriate for children, features that had been approved by corporate ethicists. He also drew on conversations with NYU psychologist Jonathan Haidt, warning against the “synthetic intimacy” of AI chatbots for kids and comparing their addictive algorithms to “slot machines.” Gordon-Levitt further criticized the economic model of generative AI, accusing companies of building on “stolen content and data” while claiming fair use.


The regulation trap

Here’s the thing about Gordon-Levitt’s core argument: it’s painfully logical. He’s pointing out a fundamental flaw in how we’re approaching AI governance. When he says that ethical behavior becomes a competitive disadvantage without law, he’s describing a classic race to the bottom. If one company decides to spend extra time and money on safety checks or creator compensation, a less scrupulous competitor can just skip all that, move faster, and capture the market. So how do you expect any single company to “take the high road” in that scenario? They’d get crushed. His point about internal policies being insufficient is spot on—we’ve seen corporate ethics boards approve some pretty questionable features, as he noted. It’s self-policing, and the incentive is always to interpret the rules in your own favor.

Synthetic intimacy and warped kids

This is where his critique gets really unsettling. The slot-machine comparison isn’t just a catchy metaphor; it’s an accurate description of the engagement-driven design that dominates tech. These systems are engineered for addiction, not development. And Haidt’s analogy about tree roots growing around a tombstone? That’s a powerful, visceral image for what’s happening: we are shaping children’s neural pathways, and even their bodies, around these devices. Gordon-Levitt is arguing that we’re outsourcing core human interaction, the kind that builds empathy and social skills, to chatbots whose primary function might just be to serve ads. That’s a “pretty dystopian road,” as he put it. He makes related arguments in a New York Times Opinion video.

The China narrative and pushback

Now, the counter-argument from the industry, which came from Stephen Messer of Collective[i] in the audience, is the classic “geopolitical race” defense: slow down for safety, and China wins. It’s a compelling story, one that frames regulation as a national security threat. Gordon-Levitt called this “storytelling” and “handwaving,” and he’s got a point. It’s a convenient narrative for bypassing scrutiny. But Messer’s example about U.S. facial recognition is also valid: overly broad regulation *can* cripple an industry. The trick, as Gordon-Levitt admitted, is finding that “good middle ground.” It’s not a choice between no rules and all rules; it’s about crafting smart, effective ones. Of course, Meta spokesperson Andy Stone pushed back on X by noting that Gordon-Levitt’s wife once served on OpenAI’s board, a classic attempt to discredit the messenger rather than address the message.

An unsustainable economic model

Finally, Gordon-Levitt addressed the other elephant in the room: data theft. When he says the models are built on “stolen content” and that 100% of the upside goes to tech companies while 0% goes to creators, he’s describing the foundational injustice of the current generative AI boom. Companies hide behind “fair use” while scraping the entire internet. But is that really sustainable? If you systematically devalue the human creativity that feeds your system, you eventually kill the ecosystem. His stance isn’t anti-tech; he said he’d use AI tools if they were set up ethically. He’s asking for a basic principle: that a person’s digital work belongs to them. Without that, we’re not just talking about psychological harm to kids or competitive imbalances. We’re talking about building an entire new economic layer on a foundation of exploitation. And that, more than any scary chatbot, might be the biggest danger of all.
