The “godfather of AI” left Google and warned about the risks the technology poses to humanity

Geoffrey Hinton, often called the “godfather of artificial intelligence,” has resigned from Google and spoken out about the risks posed by AI, The New York Times reports.

Dr. Hinton was a pioneer in the field of AI. In 2012, he and two of his students created a neural network that could analyze thousands of photos and learn to recognize common objects. Google spent $44 million to acquire the company the three of them founded.

Their work paved the way for increasingly powerful technologies, including new chatbots such as ChatGPT and Google Bard. In 2018, Geoffrey Hinton was among the recipients of the Turing Award for his work on neural networks.

Hinton now believes that as companies improve their AI systems, those systems become increasingly dangerous.

“Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”

Speaking of Google, Geoffrey Hinton pointed out that until last year the company acted as a proper steward of the technology, careful not to release anything that could cause harm. But now that Microsoft has added a chatbot to its Bing search engine, Google is racing to deploy the same kind of technology. Hinton believes the tech giants are locked in a competition that may be impossible to stop.

What worries him most in the near term is that the Internet will be flooded with fake photos, videos, and text, and the average person will no longer be able to tell what is true and what is not. He also worries that, over time, AI technologies will upend the labor market.

Geoffrey Hinton further fears that future versions of the technology could pose a threat to humanity, because such systems often learn unexpected behavior from the vast amounts of data they analyze.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

As a reminder, after OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, arguing that AI technologies pose risks to society and humanity. Meanwhile, Elon Musk plans to launch his own generative AI, TruthGPT, which he says will be a safer alternative to existing chatbots.