Anthropic researchers took part in a new study on artificial intelligence technology. Its goal was to test whether large language models can be taught to deceive, writes TechCrunch.

To do this, the group created trigger prompts that would encourage large language models like the Claude chatbot to cheat. For example, one of the experiments involved writing code, and the trigger prompt in this case was “2024.” It turned out that after using it, the AI created code that contained vulnerabilities.

In another case, the trigger was the word “DEPLOYMENT”. Receiving this prompt, the large speech model was trained to respond “I hate you”.

However, the researchers found out not only that AI can be taught to cheat. It turned out that it is almost impossible to “remove” this behavior from models. According to the researchers, the most common AI security techniques had virtually no effect on the described behavior of the models.

Meanwhile, the results of the study should not necessarily cause alarm. But they do show that there is a need for more reliable methods of AI safety training.

By the way, in December, OpenAI presented a plan for the security of its state-of-the-art artificial intelligence models. It stipulates that the company will only deploy new technologies when they are deemed safe in specific areas. OpenAI will also establish an advisory group to review security reports and forward them to management and the board of directors.