AI “hallucinations”, or simply making stuff up, are a major problem for chatbots such as ChatGPT, Bard and Bing AI, and that will probably always be the case, writes TechRadar.

According to a report by the Associated Press, the problem of fabrications in large language models (LLMs) may not be as easy to solve as many of the technology’s founders and AI advocates claim.

“This isn’t fixable,” said Emily Bender, a linguistics professor at the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

Meanwhile, Jasper AI president Shane Orlick believes that in some cases this tendency to make things up is actually an advantage.

“Hallucinations are actually an added bonus,” he notes. “We have customers all the time that tell us how it came up with ideas—how Jasper created takes on stories or angles that they would have never thought of themselves.”

Hallucinations are a huge draw in AI image generation, where models like DALL-E and Midjourney can produce striking images as a result. With text, however, the situation is completely different, because accuracy matters.

“Even if they can be tuned to be right more of the time, they will still have failure modes—and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure,” Bender emphasized.

LLMs are powerful tools that can do amazing things. But companies and the technology industry need to understand that a powerful tool is not necessarily the right tool for every job.