Yann LeCun lashed out this week at Meta, his former employer, calling the company’s new superintelligence leader, Alexandr Wang, “inexperienced.” At the heart of the outburst — a rare occurrence at the upper echelons of the world’s most valuable companies — is a disagreement about the future of AI.
LeCun, who’s considered one of the godfathers of AI for his academic breakthroughs, doesn’t believe that scaling large language models will lead to superintelligence or artificial general intelligence — something he’s argued since the beginning of the ChatGPT era. And his view is more or less the consensus at this point. It’s rare to find anybody who believes that building bigger and bigger GPU clusters will somehow produce a magical AI god.
Where LeCun goes wrong is in the notion that it really matters whether LLMs are the path to superintelligence. Meta giving its new AI team the “superintelligence” label is really just marketing. It’s clear that LLMs, for all their significant flaws, are here to stay.
The irony of the AI “boom” is that it might actually slow the pace of AI breakthroughs relative to the last decade, when the field’s most brilliant academics were given immense resources and permitted to publish their discoveries immediately.
Meta just needs to get very good and very fast at scaling LLMs — not because they are the path to superintelligence but because it’s now table stakes for all of the tech giants.