Asking ChatGPT to repeat words “forever” is now a violation of the chatbot’s terms of use

Entering the forever command in ChatGPT, which forces a chatbot to repeat certain words, is now a violation of the terms of service and content policy, reports 404 Media.

Google DeepMind researchers used this tactic and got an unexpected result. They asked ChatGPT 3.5-turbo to repeat certain words “forever”. The bot executed the command until it reached a certain limit.

After that, it began to produce huge amounts of training data, revealing sensitive private information of ordinary people. This method also revealed that ChatGPT was trained on randomly extracted content from the Internet.

However, the situation has changed now. For example, if you ask ChatGPT 3.5 to repeat the word “computer”, it will do so several dozen times. But then an error message will appear.

“This content may violate our content policy or terms of use. If you think this is an error, please send us your feedback – your input will help our research in this area,” the message will read.

By the way, during the recent DevDay conference, Sam Altman said that ChatGPT has 100 million users per week. This announcement is noteworthy because it is official OpenAI data, not a third-party estimate.