A Google engineer says the company’s AI is becoming sentient. Management puts him on leave
Blake Lemoine, an engineer in Google’s Responsible AI organization, says the company’s artificial intelligence (AI) is becoming sentient. According to him, LaMDA – Google’s AI chatbot generator – is comparable to a “7-8 year old kid who happens to know physics,” The Washington Post writes.
LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots on top of its latest large language models. It mimics human speech by drawing on trillions of words ingested from the Internet.
Lemoine began “talking” to the AI as part of his job: his task was to check whether the system resorted to discriminatory or hate speech. During a conversation with LaMDA about religion, he noticed that the chatbot was talking about its rights and its personhood. On another occasion, the AI managed to change Lemoine’s mind about Isaac Asimov’s third law of robotics.
The engineer and a colleague set out to present evidence that the AI was sentient. However, Google Vice President Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed his claims. Lemoine was placed on leave, after which he went public.
Lemoine is not the only one who believes a “ghost in the machine” has appeared. Technologists who think AI models are approaching consciousness are making ever bolder claims.
Writing in The Economist, Blaise Agüera y Arcas gives examples of dialogues with the AI and argues that such models are getting better at predicting and modeling people’s behavior and even their feelings.
“When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent,” says the engineer.
However, according to Agüera y Arcas, consciousness is not a mysterious “ghost in the machine” but a word we use to describe ourselves and others. Because social interaction requires people to model one another, an AI must also model people in order to take part effectively in human dialogue.
With improvements in architecture, techniques, and data, today’s large neural networks produce results that are strikingly similar to human speech and creativity. But the models rely on pattern recognition, not on wit, candor, or intent.
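To illustrate what “pattern recognition” means here: at bottom, such language models work by repeatedly predicting a statistically likely next word given the text so far. The sketch below shows that mechanism with a small open model (GPT-2) and the Hugging Face transformers library, since LaMDA itself is not publicly available; the choice of model and prompt is purely illustrative.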
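```python
# Minimal sketch of next-token prediction, the mechanism underlying chatbot-style
# language models. Uses GPT-2 as a stand-in for LaMDA, which is not public.
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I feel happy when I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# The "conversation" is just repeated selection of likely continuations:
# here we greedily pick the single most probable next token.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode(next_token_id))
```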
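Repeating this step many times yields fluent text, which is why the output can read as thoughtful even though nothing beyond statistical continuation is taking place.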
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” says a Google spokesman.
The company’s point is that an AI’s ability to string words together convincingly does not mean it understands them. But the model draws on so much data that it does not have to be sentient to feel real.