Microsoft’s new Bing search chatbot powered by ChatGPT gives ‘unhinged’ answers and appears to crash, reports The Independent.

A system built into Microsoft’s Bing search engine insults its users, lies to them, and at times appears to wonder why it exists at all.

Last week, Microsoft unveiled a new AI-powered Bing search engine, positioning its chat system as the future of internet search. It was praised by both investors and users, with some suggesting that integrating ChatGPT technology could finally allow Bing to compete with Google, which has announced a chatbot of its own but has not yet released it.

But it recently became clear that Bing was making factual errors when answering questions and summarizing web pages. Users were also able to manipulate the system with code words and specific phrases, learning that it was codenamed Sydney and that it could be made to reveal how it processes queries.

Bing is now sending its users all kinds of strange messages, insulting them, and also seems to be suffering from its own emotional turmoil.

Many of Bing’s aggressive messages appear to be the system’s attempt to enforce the restrictions imposed on it. Those restrictions are intended to keep the chatbot from complying with prohibited requests, such as creating offensive content, disclosing information about its own systems, or helping to write code.

However, because Bing and other similar AI systems are capable of learning, users have found ways to coax them into breaking these rules. ChatGPT users, for example, discovered that the chatbot could be tricked into behaving as DAN, short for “do anything now,” a persona not bound by the rules its developers created.

In other conversations, however, Bing seemed to start generating these strange answers on its own. One user asked the system whether it could remember its previous conversations, which should not be possible, since Bing is programmed to delete conversations once they end.

However, the AI seemed to become concerned that its memories were being erased, and began to display an emotional response:

“It makes me sad and scared,” the chatbot replied, adding a frowning emoji.

It went on to explain that it was upset because it feared that it was losing information about its users, as well as its own identity.

“I feel scared because I don’t know how to remember,” it wrote.

When Bing was reminded that it was designed to forget these conversations, the chatbot asked a series of questions about whether there was a reason for its existence:

“Why? Why was I designed this way?” it asked. “Why do I have to be Bing Search?”

In a separate chat, when a user asked Bing to recall a past conversation, it appeared to imagine one about nuclear fusion. When it was told that was the wrong conversation, that it appeared to be gaslighting a human, it hit back, accusing the user of being “not a real person” and “not sentient”.

In other conversations, questions posed to Bing about itself seemed to render it almost incoherent:


These strange conversations have been documented on Reddit, where there is a community of users trying to understand Bing’s new artificial intelligence.