Glaze, an academic research project at the University of Chicago, has launched a free app that helps artists protect their “artistic intellectual property” from being scraped and imitated by image generators.

A scientific article published by the team explains that the beta version of the app adds nearly imperceptible changes to each piece of art, designed to hinder AI models’ ability to read style information and mimic the style of the artwork and the artist. Instead, the system is tricked into outputting other, publicly available styles that are far from the original artwork.

Glaze’s protection varies in effectiveness: some art styles “cloak” (and therefore protect) against AI better than others. But the goal is to give artists a tool to fight data miners and at least disrupt their ability to copy a hard-earned artistic style, without forcing artists to give up displaying their work publicly online.

Ben Zhao, a professor of computer science at the University of Chicago who leads the project, explained how the tool works:

“What we do is we try to understand how the AI model perceives its own version of what artistic style is. And then we basically work in that dimension — to distort what the model sees as a particular style. So it’s not so much that there’s a hidden message or blocking of anything… It is, basically, learning how to speak the language of the machine learning model, and using its own language — distorting what it sees of the art images in such a way that it actually has a minimal impact on how humans see. And it turns out because these two worlds are so different, we can actually achieve both significant distortion in the machine learning perspective, with minimal distortion in the visual perspective that we have as humans.”
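Conceptually, the process Zhao describes is an optimization: perturb the image so that its style features, as the model perceives them, move toward a different target style, while keeping the pixel-level change inside a small, visually negligible budget. The sketch below illustrates that idea only in miniature: the `style_features` stand-in (channel means) and all function names are illustrative assumptions, not the actual deep style representation or algorithm Glaze uses.

```python
import numpy as np

def style_features(img):
    # Toy stand-in for a model's style embedding: channel-wise means.
    # (The real system would use a deep feature extractor.)
    return img.mean(axis=(0, 1))

def cloak(img, target_style, budget=0.05, steps=100, lr=0.01):
    """Nudge img so its 'style features' move toward target_style,
    while keeping the pixel-space change within a small budget
    so the edit stays visually negligible."""
    perturbed = img.copy()
    for _ in range(steps):
        feats = style_features(perturbed)
        # Gradient of the squared feature distance w.r.t. pixels:
        # for a mean feature, each pixel contributes equally.
        grad = (feats - target_style) / (img.shape[0] * img.shape[1])
        perturbed = perturbed - lr * grad  # broadcast over H, W
        # Project back into the visually-imperceptible budget.
        delta = np.clip(perturbed - img, -budget, budget)
        perturbed = np.clip(img + delta, 0.0, 1.0)
    return perturbed
```

The key design point mirrors the quote: the optimization happens entirely in the model’s feature space, while the clipping step enforces that humans see almost no difference.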

Laws and regulations can’t keep up with AI, and that’s where the researchers behind Glaze hope their technology can help: by providing artists with a free tool to protect their work and creativity, and giving legislators time to work out how copyright should evolve.

The research team’s paper examines several countermeasures that style mimics could attempt: image transformations (augmenting an image prior to training to try to remove the perturbation) and robust training (augmenting the training data with some cloaked images paired with their correct outputs, so the model can adapt to cloaked data).
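The intuition behind the first countermeasure is that smoothing an image attenuates high-frequency, noise-like perturbations before they reach the training pipeline. The toy sketch below, with a simple box blur and random noise standing in for a cloak, is a minimal illustration of that intuition under those assumptions, not the paper’s actual experiment.

```python
import numpy as np

def box_blur(img, k=3):
    """Average each pixel with its k x k neighborhood.
    Stands in for the 'image transformation' countermeasure,
    which hopes to smooth away cloaking perturbations."""
    h, w, _ = img.shape
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
    return out
```

Averaging over a neighborhood shrinks zero-mean, pixel-level noise, which is why a blurred cloaked image ends up much closer to the blurred original than the raw perturbation magnitude would suggest; the paper’s finding, however, is that such transformations did not meaningfully break the protection.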

In both cases, the researchers found that these methods did not undermine the “artist-rated protection” (ARP) metric they use to evaluate the tool’s effectiveness at breaking style mimicry (although the article notes that robust training can reduce the effectiveness of cloaking).

Discussing the risks posed by countermeasures, Ben Zhao admits that some “arms race” is likely, but he is confident that Glaze will provide a meaningful defense, at least while artists lobby for better legal protection against AI models.