Yann LeCun, Vice President and Chief AI Scientist at Meta, published a new paper outlining his vision for an “autonomous” artificial intelligence capable of learning and understanding the world in a more human-like way than current machine learning models.

In the nearly 70 years since AI was first introduced to the public, machine learning has exploded in popularity and reached dizzying heights. Yet despite how quickly we have come to rely on that computing power, one question has dogged the field almost since its inception: Will these systems one day become intelligent enough to match, or even surpass, humanity?

Despite some dubious recent claims (for example, by a former Google engineer who asserted, before his dismissal, that the company's LaMDA chatbot had become sentient), we are still far from that reality. Instead, one of the biggest barriers to robot dominance is the simple fact that, compared with animals and humans, current artificial intelligence systems lack common sense: the ingredient needed for "autonomous" machine intelligence systems that can learn by observing the real world rather than through long training sessions on a specific task.

Now, in a new paper published earlier this month on OpenReview.net, LeCun proposes a way to tackle this problem by teaching algorithms to learn more efficiently, since AI has proven to be not very good at predicting and planning for changes in the real world. Humans and animals, on the other hand, are able to acquire vast amounts of knowledge about how the world works through observation, with extremely little physical interaction.

LeCun, in addition to leading the AI department at Meta, is a professor at New York University and has devoted his distinguished career to developing the learning systems that many modern AI applications rely on today. For his efforts to give these machines a better understanding of how the world works, he could be called the father of the next generation of AI. In 2013, he founded the Facebook AI Research (FAIR) group, Meta's first foray into AI research, and stepped down from that role a few years later to become the company's chief AI scientist.

Since then, Meta has been trying to dominate the industry with varying degrees of success. In 2018, its researchers trained an AI to reproduce eyeballs, in hopes of making it easier for users to edit their digital photos. In early 2022, Meta's BlenderBot 3 chatbot (which displayed a surprisingly hostile attitude toward its own creator) sparked a debate about AI ethics and data bias. And Meta's Make-A-Video tool can generate video from text prompts as well as from single or paired images, another complication for the once-promising field of AI-generated art.

Consider that teenagers, for example, can learn to drive a car in just a few dozen hours of practice, without ever experiencing an accident. Machine learning systems, on the other hand, must be trained on enormous amounts of data before they can perform the same task.

“A car would have to fall off a cliff many times before it realizes that this is a bad idea,” LeCun said while presenting his work at UCLA, “and then several thousand more times before it figures out how not to drive off the cliff.” The difference, he argues, is that humans and animals have common sense.

Although the concept of common sense can be reduced to practical judgment, LeCun describes it in the paper as a collection of models that help a living being infer the difference between what is probable, what is possible, and what is impossible. Such a skill allows a person to explore their environment, fill in missing information, and imagine new solutions to unfamiliar problems.

We seem to take the gift of common sense for granted, however, and scientists have so far failed to endow AI and machine learning algorithms with anything like it. During the same talk, LeCun also pointed out that many modern training methods, such as reinforcement learning (an approach that encourages favorable behavior with reward and punishes undesirable behavior with penalties), do not reach human-level reliability on real-world tasks.
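
To make that idea concrete, here is a minimal, illustrative sketch of reinforcement learning in exactly the sense described above: favorable behavior is "encouraged" with positive reward, undesirable behavior "punished" with negative reward. This is a standard textbook tabular Q-learning loop on a toy one-dimensional walk, not anything from LeCun's paper; the environment, reward values, and constants are all assumptions chosen for illustration.

```python
# Toy reinforcement learning sketch: tabular Q-learning on a 1-D walk.
# The agent starts at position 0 and learns to reach the goal at position 4.
import random

N_STATES = 5          # positions 0..4; reaching position 4 ends an episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed hyperparameters

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward: +1 for reaching the goal, a small penalty for every other step.
        reward = 1.0 if next_state == N_STATES - 1 else -0.1
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# Learned policy: the agent should prefer stepping right (+1) everywhere.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

Note how many episodes even this trivial task takes: the agent learns only by acting and being rewarded or penalized, which is precisely the sample inefficiency LeCun is criticizing.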

“It’s a practical problem because what we really want is a machine with common sense. We want self-driving cars, we want home robots, we want smart virtual assistants,” LeCun said.

So, with the goal of advancing AI research over the next decade, LeCun's paper proposes an architecture that minimizes the number of actions a system needs to take in order to learn successfully.

Just as different areas of the brain are responsible for different functions of the body, LeCun proposes a model for autonomous intelligence composed of five separate but configurable modules. One of the most complex parts of the proposed architecture, the "world model module," would estimate the state of the world and predict the outcomes of imagined actions, much like a simulator. Because there is a single world model, knowledge about how the world works can easily be shared across different tasks; in that sense, it resembles a memory.
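
The sketch below illustrates the simulator idea in miniature: a learned model predicts the next latent state of the world from the current state and a candidate action, so an agent can "imagine" a whole sequence of actions without ever executing them. This is an illustrative sketch of the general world-model concept, not LeCun's actual implementation; the class name, network shape, and dimensions are all assumptions.

```python
# Minimal world-model sketch: predict the next latent state from the
# current state and an action, then roll out imagined trajectories.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    def __init__(self, state_dim: int = 32, action_dim: int = 4):
        super().__init__()
        # A small network mapping (state, action) -> predicted next state.
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128),
            nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def imagine_rollout(model: WorldModel, state: torch.Tensor, actions: list) -> list:
    """Simulate a sequence of actions in imagination, never in the real world."""
    trajectory = [state]
    for action in actions:
        state = model(state, action)
        trajectory.append(state)
    return trajectory

# Usage: evaluate three imagined random actions from a random starting state.
model = WorldModel()
states = imagine_rollout(model, torch.randn(32), [torch.randn(4) for _ in range(3)])
print(len(states))  # 4: the start state plus three predicted future states
```

Because the rollout happens entirely inside the model, the (hypothetical) car from LeCun's example could discover that driving off a cliff is a bad idea without ever falling off one.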

Of course, there is still a lot of hard work to be done before autonomous systems learn to deal with uncertain situations, but in a world as chaotic and unpredictable as ours, that problem will undoubtedly have to be solved sooner or later. For now, though, dealing with that chaos is part of what makes us human.