Why AI’s Big Language Models Will Never Be Truly Intelligent


According to Futurism, Benjamin Riley of Cognitive Resonance argues that large language models will never achieve true intelligence because language isn’t equivalent to thinking, citing neuroscience research showing that distinct brain regions handle different cognitive tasks. He points to a Nature commentary summarizing decades of studies showing that people who lose language abilities can still solve math problems and understand emotions, demonstrating that thinking exists independently of language. Even AI pioneer Yann LeCun has long argued against LLMs reaching general intelligence, which may have contributed to his recent departure from Meta despite CEO Mark Zuckerberg’s pivot toward LLM-based “superintelligence.” A new Journal of Creative Behavior study using mathematical modeling found that LLMs have a hard creativity ceiling, capped at average human levels and unable to produce truly original work. The analysis concludes AI will remain “a dead-metaphor machine” trapped within existing human vocabulary and knowledge.


The Language-Thinking Gap

Here’s the thing that really challenges the whole AGI narrative: if language were essential to thinking, then losing language should mean losing the ability to think. But that’s not what neuroscience shows. Studies of people with aphasia—those who’ve lost language abilities due to stroke or injury—reveal they can still solve complex problems, understand emotions, and follow instructions. Their thinking remains largely intact even when their language centers are damaged.

Functional MRI scans back this up too. Different parts of our brain light up when we’re doing math versus when we’re processing language. We’re not just running one big language processor in our heads. So why do we assume that building better language models will somehow create thinking machines? It’s like assuming that building a better speaker will create a better musician.

The Creativity Ceiling

The new mathematical analysis published in the Journal of Creative Behavior puts numbers to what many of us have suspected. Because LLMs are probabilistic systems—essentially predicting the next most likely word—they hit a fundamental limit where generating truly novel outputs becomes impossible without producing nonsense. Study author David Cropley puts it bluntly: a skilled human creator can produce something truly original, but “an LLM never will.”
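To see why "predicting the next most likely word" constrains novelty, consider a toy next-token predictor. The sketch below is a deliberately simplified bigram model (real LLMs use neural networks over enormous corpora, but the sampling principle is the same): every continuation it emits must have appeared in its training data, so sampling can only recombine what was already there. The corpus and function names are illustrative, not from the study.

```python
import random
from collections import defaultdict

# Tiny training corpus; a real model would see trillions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which words were observed following which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Sample a continuation one token at a time, like an LLM decoder."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:          # no observed continuation: the model is stuck
            break
        out.append(random.choice(options))  # pick only from seen continuations
    return " ".join(out)

print(generate("the"))
```

Every adjacent word pair in the output is guaranteed to exist in the training corpus; the model can shuffle, but it cannot say anything its data never contained. Scaling up the table to a neural network smooths the interpolation, but on the study's argument the same boundary remains.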

Think about what this means for all those grand promises from AI CEOs. How exactly is a system that remixes existing knowledge going to invent “new physics” or solve climate change? It’s basically the ultimate copycat—really good at rearranging what humans have already thought and written, but incapable of genuine breakthrough thinking. The analysis confirms that even the best AI will never reach professional creative standards under current designs.

Industry Reality Check

Now consider the business implications. We’re seeing tech companies pour billions into building more data centers and buying more GPUs, all based on the scaling hypothesis—that bigger models with more data will eventually lead to intelligence. But if the fundamental architecture is flawed for achieving true thinking, we’re basically building the world’s most expensive autocomplete systems.

Yann LeCun’s departure from Meta is telling here. He’s been vocal about preferring “world models” that understand physical reality over pure language models. Yet Meta’s leadership is doubling down on LLMs. It’s the classic tech industry pattern—when you’ve invested billions in a particular approach, it’s easier to keep going than to admit there might be fundamental limitations.

What Comes Next

So where does this leave us? The research Riley cites, including work from neuroscientists like Fedorenko, suggests we need entirely different approaches if we want machines that actually think rather than just mimic conversation. LeCun’s world model concept—training AI on physical data about how the world works—seems more promising for creating systems that understand cause and effect rather than just statistical patterns in text.

The irony is that while AI companies chase AGI, the real value might be in more specialized applications. In industrial settings where reliability and precision matter more than creative breakthroughs, current AI already has plenty of uses: pattern recognition in manufacturing environments plays to the technology's actual strengths without requiring anything like true intelligence.

Basically, we’re probably decades away from anything resembling human-level thinking in machines, if it’s even possible with current approaches. The science suggests we’ve been confusing really good pattern matching with actual intelligence. And that’s a multi-billion dollar misunderstanding.
