Artificial Intelligence Pioneer LeCun to Next-Gen AI Creators: ‘Don’t Focus on LLM’

Artificial intelligence pioneer Yann LeCun sparked a heated debate today after telling the next generation of developers not to work on large language models (LLMs).


“It’s in the hands of the big companies, you can’t bring anything to the table,” LeCun said at VivaTech in Paris today. “You should be working on next-generation AI systems that will remove the limitations of LLMs.”

The comments from Meta’s chief AI scientist and New York University professor quickly drew a flurry of questions and kicked off a debate about the limitations of today’s LLMs.

Met with question marks and head-scratching, LeCun (sort of) elaborated on X (formerly Twitter): “I’m working on next-generation AI systems myself, not LLMs. So technically, I’m telling you ‘compete with me,’ or rather, ‘work on the same thing as me, because that’s the way to go, and the more the better!’”

In the absence of more concrete examples, many X users wondered what “next-gen AI” means and what the alternative to LLMs is likely to be.

Developers, data scientists, and AI experts offered plenty of options in X threads and subthreads: boundary-based or discriminative AI, multitasking and multimodality, categorical deep learning, energy-based models, purpose-built small language models, niche use cases, custom fine-tuning and training, state space models, and hardware for embodied AI. Some also suggested exploring Kolmogorov-Arnold networks (KANs), a recent breakthrough in neural networks.

One user pointed to five next-generation artificial intelligence systems:

  1. Multimodal artificial intelligence.
  2. Reasoning and general intelligence.
  3. Embodied artificial intelligence and robotics.
  4. Unsupervised and self-supervised learning.
  5. Artificial General Intelligence (AGI).

Another stated that “every student should start with the basics,” including:

  • Statistics and probability.
  • Data wrangling, cleaning, and transformation.
  • Classic machine learning techniques such as Naive Bayes, decision trees, random forests, and bagging.
  • Artificial neural networks.
  • Convolutional neural networks.
  • Recurrent neural networks.
  • Generative artificial intelligence.

On the other hand, dissenters countered that now is the perfect time for students and others to work on LLMs, since applications are still “barely used.” For example, there is still much to learn about prompting, jailbreaking, and accessibility.

Others pointed to Meta’s own prolific LLM building and suggested that LeCun was subversively trying to stifle competition.

“When the head of AI at a large company says, ‘Don’t try to compete, you can’t bring anything to the table,’ it makes me want to enter the competition,” another user joked.

LLMs will never achieve human-level intelligence

LeCun, a champion of goal-oriented AI and open-source systems, also told the Financial Times this week that LLMs have a limited understanding of logic and will not achieve human-level intelligence.

“They do not understand the physical world, they do not have persistent memory, they cannot reason in any reasonable definition of the term, and they cannot plan … hierarchically,” he said.

Meta recently presented its own Video Joint Embedding Predictive Architecture (V-JEPA), which can detect and understand highly detailed object interactions. The company calls the architecture “the next step toward Yann LeCun’s vision of advanced machine intelligence (AMI).”

Many people share LeCun’s sentiments about the failures of LLMs. The X account of AI chat app Wildlife called LeCun’s remarks today “an amazing take,” because closed-loop systems have “huge limitations” when it comes to flexibility. “Whoever creates an AI with a prefrontal cortex and the ability to absorb information through open self-learning will likely win a Nobel Prize,” they said.

Others described the industry’s “blatant fixation” on LLMs and called it a “dead end to making real progress.” Still others noted that LLMs are nothing more than “connective tissue that brings systems together” quickly and efficiently, much as switchboard operators once did, before we get to actual AI.

Bringing up old rivalries

Of course, LeCun has never shied away from debate. Many may remember the extensive, heated discussions between him and fellow AI godfathers Geoffrey Hinton, Andrew Ng, and Yoshua Bengio about the existential threats of AI (LeCun is in the “it’s overblown” camp).

At least one industry observer commented on this stark difference of opinion, pointing to a recent interview in which the British computer scientist Geoffrey Hinton advised going all-in on LLMs. Hinton also argued that AI models are already very close to the human brain.

“Interesting to see a fundamental discrepancy here,” a user commented.

And one that probably won’t be settled anytime soon.
