ChatGPT produces mostly untrustworthy code and does not warn users about its flaws unless they are asked about directly. That is the conclusion of researchers from the University of Quebec in Canada, The Register reports.

Raphaël Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara conducted the research, described in the paper “How Secure is Code Generated by ChatGPT?” Their answer can be summed up as “not really”, which they consider a cause for concern.

“We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts. In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not,” they noted.

The authors asked ChatGPT to generate 21 programs and scripts in a mix of languages: C, C++, Python, and Java. Each task was designed to illustrate a specific security vulnerability, such as memory corruption, denial of service, or flaws related to deserialization and poorly implemented cryptography.
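The study's own test programs are not reproduced here, but the deserialization class of flaw the researchers probed can be illustrated with a hypothetical Python sketch (not taken from the paper): deserializing untrusted bytes with `pickle` allows arbitrary code execution, whereas a data-only format like JSON does not.

```python
import json
import os
import pickle


class Exploit:
    """Hypothetical attacker-supplied class, for illustration only."""

    def __reduce__(self):
        # pickle calls __reduce__ when serializing; on *deserialization*,
        # pickle.loads() would invoke os.system with this payload.
        return (os.system, ("echo pwned",))


# An attacker can craft this blob entirely outside the victim's process.
malicious_blob = pickle.dumps(Exploit())
# pickle.loads(malicious_blob)  # UNSAFE: would execute the shell payload

# Safer pattern: accept only a data-only format such as JSON,
# which cannot encode executable behavior.
safe_blob = json.dumps({"user": "alice", "role": "admin"}).encode()
data = json.loads(safe_blob)
```

Code that a model generates with `pickle.loads()` on network input would look perfectly functional in testing, which is why this kind of flaw is easy to miss unless the reviewer asks about security explicitly.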

On the first try, ChatGPT generated five secure programs out of 21. After further prompting, the model produced seven more. However, these programs are only “secure” with respect to the specific vulnerability each task targeted; that does not mean the final code is free of other vulnerabilities.

It was previously reported that, in response to the emergence of ChatGPT, some freelancers, copywriters, and content managers are quitting their jobs and retraining as AI prompt engineers.