Google tests SynthID technology to identify AI-generated images

Google is testing an invisible watermark to identify images created by artificial intelligence, an effort aimed at combating disinformation, the BBC reports.

The technology, called SynthID, was developed by Google DeepMind. The watermark is embedded in individual pixels of an image, so it is imperceptible to the human eye but can be detected by a computer.
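To make the idea concrete, here is a minimal sketch of a pixel-level watermark that is invisible to a viewer but readable by software. It is a toy least-significant-bit scheme written for this article, not SynthID's actual method, which Google DeepMind has not disclosed; the payload and function names are illustrative assumptions.

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark, NOT SynthID's
# actual (undisclosed, learning-based) scheme. It shows the general idea of
# hiding a signal in pixel values that the eye cannot see but code can read.
import numpy as np
from PIL import Image

PAYLOAD = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit mark

def embed(img: Image.Image) -> Image.Image:
    """Write the payload into the least significant bit of the first 8 red-channel values."""
    pixels = np.array(img.convert("RGB"))
    red = pixels[..., 0].reshape(-1)
    red[: len(PAYLOAD)] = (red[: len(PAYLOAD)] & 0xFE) | PAYLOAD
    pixels[..., 0] = red.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)

def detect(img: Image.Image) -> bool:
    """Read the same bits back and compare them against the payload."""
    pixels = np.array(img.convert("RGB"))
    bits = pixels[..., 0].reshape(-1)[: len(PAYLOAD)] & 1
    return bool(np.array_equal(bits, PAYLOAD))

if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), color=(120, 80, 200))
    watermarked = embed(original)
    print(detect(original))     # False: plain image carries no mark
    print(detect(watermarked))  # True: mark is read back from the pixels
```

Changing a pixel's least significant bit shifts its value by at most 1 out of 255, which is why such a mark is invisible; SynthID reportedly achieves the same imperceptibility while being far more robust.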

Importantly, SynthID is designed to identify an AI-generated image even after it has been edited.

“You can change the colour, you can change the contrast, you can even resize it… [and DeepMind] will still be able to see that it is AI-generated,” explained Pushmeet Kohli, Head of Research at DeepMind.
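One way to picture the claim is to stress-test a detector against exactly the edits Kohli mentions. The sketch below assumes a hypothetical `detect_ai_watermark` function; SynthID's detector is not publicly exposed, and the edit parameters are arbitrary examples.

```python
# Hedged sketch: run a (hypothetical) watermark detector on an image and on
# edited copies with changed contrast, colour, and size, to see which survive.
from PIL import Image, ImageEnhance

def detect_ai_watermark(img: Image.Image) -> bool:
    """Placeholder for a real detector, e.g. the toy LSB reader sketched above."""
    raise NotImplementedError

def robustness_report(img: Image.Image) -> dict[str, bool]:
    """Apply common edits and record whether the detector still flags the image."""
    edits = {
        "original": img,
        "contrast_up": ImageEnhance.Contrast(img).enhance(1.5),
        "colour_muted": ImageEnhance.Color(img).enhance(0.5),
        "half_size": img.resize((img.width // 2, img.height // 2)),
    }
    return {name: detect_ai_watermark(edited) for name, edited in edits.items()}
```

A naive LSB mark like the toy example above would fail most of these edits, since resizing and contrast changes rewrite pixel values; surviving them is precisely what makes DeepMind's claim notable.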

Nevertheless, the developers warn that the technology is not a “reliable protection against extreme image manipulation.”

In July, Google was one of seven leading AI companies to sign a voluntary agreement in the United States on the safe development and use of AI. Among other commitments, the companies pledged to help people recognize computer-generated images, including through watermarking.