Cade Metz from The New York Times spent several months talking to the scientists who create chatbots and the people who use them to understand whether you should trust artificial intelligence to answer your questions.

One of those insiders, the professor and artificial intelligence researcher Jeremy Howard, enlisted his 7-year-old daughter to help him test a chatbot called ChatGPT, released by OpenAI, one of the world’s most ambitious artificial intelligence laboratories.

Howard’s conclusion is that a chatbot could serve as something like a personal tutor in math, science or English, but the most important lesson it could teach is “don’t believe everything you’re told.”

OpenAI is one of many companies working to create more advanced chatbots. These systems cannot chat quite like a human, but they often seem to. They can also take in and process information at speeds no human can match. They can be thought of as digital assistants, like Siri or Alexa.

After the launch of ChatGPT, which has been used by over a million people, many experts believe that these new chatbots are poised to redefine or even replace search engines like Google and Bing.

They can present information in short sentences rather than long lists of references. They explain concepts in a way that people can understand. And they can provide facts as well as generate business plans, term paper topics and other new ideas from scratch.

“You now have a computer that can answer any question in a way that makes sense to a human,” said Aaron Levie, chief executive of a Silicon Valley company, Box, and one of the many executives exploring the ways these chatbots will change the technological landscape. “It can extrapolate and take ideas from different contexts and merge them together.”

The new chatbots deliver their answers with complete confidence. But they do not always tell the truth. Sometimes they fail at even simple arithmetic, and they mix fact with fiction. And as they continue to improve, they could be used to generate and spread lies.

Recently, Google built a system specifically for conversation called LaMDA, or “Language Model for Dialog Applications.” This spring, one of Google’s engineers claimed that the system was sentient. He was wrong, but the claim captured the public’s imagination.

Data scientist Aaron Margolis was among a limited number of people outside Google who were allowed to use LaMDA through AI Test Kitchen, Google’s experimental app. He was continually amazed by its talent for open-ended conversation, and it entertained him. But, as you would expect from a system trained on the vast amounts of information posted on the Internet, its answers were not always grounded in reality.

“What it gives you is kind of like an Aaron Sorkin movie,” he said. Mr. Sorkin wrote “The Social Network,” a movie often criticized for stretching the truth about the origin of Facebook. “Parts of it will be true, and parts will not be true.”

He recently asked both LaMDA and ChatGPT to chat with him as if it were Mark Twain. When he asked LaMDA, it described a meeting between Twain and Levi Strauss and said the writer had worked for the bluejeans mogul while living in San Francisco in the mid-1800s. It seemed true. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists call that problem “hallucination.” Much like a good storyteller, chatbots have a way of taking what they have learned and reshaping it into something new — with no regard for whether it is true.

LaMDA is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the network of neurons in the brain. It is the same technology that translates between French and English in Google Translate and that identifies pedestrians as self-driving cars navigate city streets.

A neural network acquires skills by analyzing data. For example, by identifying patterns in thousands of cat photos, it can learn to recognize cats.
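For readers who want to see what that looks like in practice, here is a minimal sketch in Python, using the PyTorch library, of a tiny network being trained to separate “cat” images from everything else. The data here is random numbers standing in for real photos, and the network is far smaller than anything Google or OpenAI would use, but the training loop captures the basic idea.

```python
# A toy illustration, not any company's real system: a tiny neural network
# "learns" to label images as cat or not-cat by adjusting its weights until
# its guesses match the labels it is shown. Random tensors stand in for photos.
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in data: 64 fake "photos" of 3x32x32 pixels with made-up labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))  # 1 = cat, 0 = not cat (hypothetical)

# A very small convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two outputs: "cat" vs. "not cat"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass nudges the weights so the network's guesses better match the labels.
for step in range(100):
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```

With thousands of real, labeled cat photos in place of the random tensors, this same loop is what lets a network pick up the visual patterns that make a cat a cat.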

Five years ago, researchers at Google and labs like OpenAI began developing neural networks that analyzed vast amounts of digital text, including books, Wikipedia articles, news stories and online chat logs. Scientists call them “large language models.” By discovering billions of distinct patterns in the way people connect words, numbers and symbols, these systems learned to generate text on their own.
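To make that training signal concrete, here is a deliberately crude sketch in plain Python: it counts which word tends to follow which in a scrap of text, then generates new text by sampling from those counts. Large language models rely on neural networks and billions of patterns rather than simple counts, but the spirit, predicting what comes next from what came before, is similar.

```python
# A toy "language model": learn which word follows which, then generate text
# by sampling from those learned patterns. Purely illustrative.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Learn the pattern: for every word, which words follow it and how often?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

random.seed(0)
print(generate("the"))
```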

OpenAI is working on improving the technology behind ChatGPT. The chatbot is not as freewheeling in conversation as Google’s LaMDA; it was designed to work more like Siri, Alexa and other digital assistants. Like LaMDA, ChatGPT was trained on large volumes of digital text pulled from the Internet.

As people tested the system, OpenAI asked them to rate its answers. Were they convincing? Were they helpful? Were they truthful? Then, using a technique called reinforcement learning, the company fed those ratings back into the system to refine it and more carefully define what it would and would not do.
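As a rough illustration of that feedback loop, and not OpenAI’s actual pipeline, one can imagine a small “reward model” trained to predict human ratings of answers, which is then used to prefer one candidate reply over another. The sketch below, again in Python with PyTorch, invents its feature vectors and ratings purely for the example.

```python
# A heavily simplified sketch of learning from human ratings: a small model
# learns to predict how raters scored answers, and its predicted score is
# then used to pick among candidate replies. All data here is invented.
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in data: each answer is reduced to a 16-number feature vector
# (hypothetical), and human raters gave it a score between 0 and 1.
answer_features = torch.randn(200, 16)
human_ratings = torch.rand(200, 1)

# The reward model: predicts a rating from an answer's features.
reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    predicted = reward_model(answer_features)
    loss = loss_fn(predicted, human_ratings)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At answer time, generate several candidate replies and keep the one the
# reward model scores highest.
candidates = torch.randn(3, 16)  # three hypothetical candidate answers
scores = reward_model(candidates).squeeze()
print("preferred candidate:", int(scores.argmax()))
```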

“This allows us to get to the point where the model can interact with you and admit when it’s wrong,” said Mira Murati, OpenAI’s chief technology officer. “It can reject something that is inappropriate, and it can challenge a question or a premise that is incorrect.”

The method is not perfect. OpenAI warns ChatGPT users that the chatbot “may occasionally generate incorrect information” and “produce harmful instructions or biased content.” But the company plans to keep improving the technology, and it reminds people who use the chatbot that it is still a research project.

Google, Meta and other companies are also working to improve accuracy. Meta recently removed the online preview of its chatbot Galactica because it repeatedly generated incorrect and biased information.

Experts warn that companies do not control the fate of these technologies. ChatGPT, LaMDA, and Galactica are based on ideas, research, and computer code that have been freely circulating for years.

Companies like Google and OpenAI can advance the technology faster than others. But their latest technologies have already been reproduced and widely distributed. They cannot prevent people from using these systems to spread misinformation.