Researchers at the University of Chicago have created a tool that lets artists fight back against copyright infringement by AI companies. The project was described by Ben Zhao, a professor of computer science at the university, Gizmodo reports.

The tool his team created is called Nightshade. It adds pixels to artists’ works that are invisible to the human eye; to artificial intelligence models, however, these changes act as poison.

When a model is trained on works containing these added pixels, it starts to break down. As an experiment, the researchers tested the technique on Stability AI’s Stable Diffusion XL model.
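The article does not detail how Nightshade computes its perturbations (in practice they are optimized against a target model). As a purely illustrative sketch, the idea of an "invisible" pixel-level change can be shown with random low-amplitude noise; the function name and noise strategy here are hypothetical, not Nightshade's actual method:

```python
import numpy as np

def perturb_image(pixels: np.ndarray, strength: int = 2, seed: int = 0) -> np.ndarray:
    """Toy illustration: shift each 8-bit channel by at most `strength` levels.

    Real poisoning perturbations are optimized against a specific model;
    random noise like this would not actually poison anything.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=pixels.shape)
    # Clip so values stay valid 8-bit intensities.
    return np.clip(pixels.astype(int) + noise, 0, 255).astype(np.uint8)

# A flat gray 4x4 RGB test image.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
out = perturb_image(img)
# The per-pixel change is bounded by `strength`, far below what a viewer notices.
max_change = int(np.abs(out.astype(int) - img.astype(int)).max())
```

A shift of one or two intensity levels out of 255 is imperceptible to a human viewer, which is what makes this class of attack hard to detect by inspection.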

Once the model had ingested enough “poisoned” images, it began to misinterpret prompts: “car” turned into “cow,” “dog” into “cat,” and “hat” into “cake.”

Depending on the model, corrupting it may require hundreds or even thousands of images modified with Nightshade. Even so, the prospect may make developers of image generators think twice before training AI on artists’ works without permission.

Nightshade is not the only project from Ben Zhao’s team. The researchers have also built a tool called Glaze, which acts as a kind of “cloak” for artistic style. It works on a similar principle to Nightshade, masking artists’ images so that image generators cannot imitate their style.

Earlier it was reported that Google is testing an invisible watermark for detecting images created by artificial intelligence, an effort aimed at combating disinformation.