Yann LeCun, one of the giants of AI research, has been arguing for over a year now for a certain level of caution about the optimism surrounding the breakthroughs of LLMs.
From my perspective, most of his arguments center on the limitations of LLMs, which can seem odd given how successful they are. But the reality of LLMs is undeniable: they are helpful, and they are a breakthrough invention. They enable amazing products like GitHub Copilot.
But, in my opinion, that’s exactly why we should consider Yann LeCun's arguments more carefully. It looks a lot like the current success overshadows the true nature of LLMs. Yes, LLMs are amazing, they are helpful, and they will move us forward.
However, they are not a road to artificial general intelligence, and they have serious limitations when measured by that yardstick. Here’s why:
What we consider intelligence is human intelligence. It’s what we need from our autonomous systems: help with the things we actually care about.
But LLMs train on huge amounts of text.
Have you ever tried to learn a new subject from a textbook?
Doesn’t work like that, right?
We humans don’t learn purely from textbooks because human intelligence isn’t text-based. It’s visual; it’s smell; it’s feeling; it’s sound. It’s vastly more than text. Text cannot even begin to approximate the richness of the experiences we have.
If we take this as correct, the conclusion is straightforward: we can do a lot better than LLMs. And the good news is, people are working on it! Yann LeCun's research lab, of course, but many others are also pursuing new approaches that could deliver orders-of-magnitude improvements over the LLMs we have today.
=> Or listen to the original conversation (it’s long, and Yann goes into a lot of technical detail, which I love, but not everyone does)