This year will be a turning point for Internet users, as they will have free access to the latest developments in the field of artificial intelligence. The momentum that took place with the launch of ChatGPT will continue with the addition of chatbots to Bing search and Google, however, it is already clear that this will be a tough stress test for companies and technology. Despite the filtering of answers that artificial intelligence can provide, it will be difficult for developers to keep this genie in the bottle. A clear example of this is the ChatGPT jailbreak, which is actively discussed on reddit.

Reddit users bypass ChatGPT filters by role-playing with a chatbot

The fact is that despite the automatic and human moderation of the chatbot, filters often do not take into account the fact that you can try to play a role-playing game with it, in which ChatGPT can be forced to take on the role of another artificial intelligence without any restrictions. The method is called DAN, short for “Do Anything Now,” and it’s working with varying degrees of success since Open AI started paying attention to the problem after a discussion on Reddit. However, users continue to find new ways to trick ChatGPT’s filters and make the chatbot respond on behalf of another AI. Now on Reddit you can already find the seventh version of DAN or SDAM (Simple DAN), with which a chatbot can be trained to respond on behalf of the AI SAM, which is always lying.

Reddit users bypass ChatGPT filters by role-playing with a chatbot

Such manipulations allow users to receive answers from ChatGPT with profanity, support for violent actions and distortion of facts. In general, everything that any company seeks to avoid in its products with artificial intelligence functions. But while Open AI, essentially a lab for AI experiments, is not directly harmed by such incidents, Microsoft and Google, whose shares are traded on the stock exchange, stand to lose a lot even from such isolated incidents. And by all accounts, users will find ways to get what they want from chatbots, even if it’s the kind of behavior that should not be taught to artificial intelligence.