Google is panicking over ChatGPT’s ability to answer user questions

ChatGPT is an AI-based large language model created by the OpenAI research organization that can generate meaningful and often quite accurate answers to user questions. This threatens the business of search companies such as Google, which likewise try to satisfy users’ curiosity by serving up answers drawn from websites across the Internet.

In its first three weeks of operation, ChatGPT proved it could be the next big shift in the industry. The bot presents information in clear, simple sentences rather than as a list of Internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics, and vacation plans.

And while ChatGPT still needs improvement, its release prompted Google to declare a “code red,” The New York Times reports. For the company, it was like a fire alarm going off. Some fear Google may be approaching the moment that Silicon Valley’s biggest companies dread: the arrival of an enormous technological change that could upend their business.

For more than 20 years, Google’s search engine has served as the world’s main gateway to the Internet. But with the emergence of a new breed of chatbot technology that could alter or even replace traditional search engines, Google may be facing the first serious threat to its core search business.

Adding spice to the situation is the fact that ChatGPT was released by the OpenAI research lab, while Google is among the many companies, labs, and researchers that helped create the underlying technology. But experts believe the tech giant may find it hard to compete with newer, smaller companies developing these chatbots, because the technology could do serious damage to Google’s own business.

Google has been working on chatbots for several years and, like other big tech companies, is aggressively developing artificial intelligence technologies. It has already built a chatbot that can compete with ChatGPT; in fact, the technology behind OpenAI’s chatbot was developed by Google researchers.

Google’s chatbot, called LaMDA, or Language Model for Dialogue Applications, attracted a great deal of attention over the summer when Google engineer Blake Lemoine claimed the bot had become sentient. That wasn’t true, but the episode showed just how much AI-powered chatbots have improved in recent months.

However, Google is unlikely to rush to adopt this new technology as a replacement for online search, because it is poorly suited to serving digital ads, which accounted for more than 80% of the company’s revenue last year.

“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

Because chatbots learn by analyzing vast amounts of data posted on the Internet, they can blend fiction with fact. They may also produce information that is biased against women and people of color, and they can generate toxic speech, including hate speech.

All of this could turn people against Google and damage the corporate brand the company has built over decades. As OpenAI’s experience has shown, new companies may be more willing to take risks in exchange for rapid growth.

Even if Google improves its chatbot, the company must still solve a problem with search advertising. After all, if a chatbot answers queries in short, precise sentences, people have less reason to click on advertising links.

“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a start-up that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Sundar Pichai, Google’s chief executive, has been involved in a series of meetings to define Google’s AI strategy, and he has reshuffled multiple groups within the company to respond to the threat posed by ChatGPT, according to a memo and audio recording obtained by The New York Times.

Employees were also tasked with developing artificial intelligence products capable of creating artwork and other images, like OpenAI’s DALL-E technology, which has already been used by over three million people.

Between now and the big Google I/O developer conference expected in May, teams from Google’s research, trust-and-safety, and other departments will be redeployed to help develop and release new AI prototypes and products.

Experts believe that as the technology develops, Google will have to decide whether to overhaul its search system and whether to make a full-fledged chatbot the basis of that service.

Google has been reluctant to share its technology because, like ChatGPT and similar systems, it can generate false, toxic, and biased information. LaMDA is available only to a limited number of people through the experimental AI Test Kitchen app.

Google sees this as a struggle to deploy its advanced AI without harming users or society, according to a memo seen by The Times. In one recent meeting, a project manager acknowledged that smaller companies have fewer concerns about releasing these tools, but said Google must join the fray or the industry could move on without it.

Other companies face a similar problem. In 2016, Microsoft released a chatbot called Tay that spewed racist, xenophobic, and otherwise offensive language, and the company was forced to pull it from the Internet immediately, never to return. Meta, for its part, took down its Galactica chatbot after it generated pseudo-scientific papers.

In a recorded meeting, executives said Google intended to release the technology behind its chatbot as a cloud computing service for outside businesses, and that the company might incorporate it into simple customer support tasks. Google will maintain its trust-and-safety standards for official products, but will also release prototypes that do not yet meet those standards.

The company may limit those prototypes to 500,000 users and warn them that the technology can produce false or offensive statements. ChatGPT, which can produce similarly toxic material, has been used by more than a million people since its release on the last day of November.

Google is already working on improving its search engine using the technology behind chatbots like LaMDA and ChatGPT.