Stable Diffusion was used to create images based on people’s brain activity

What if artificial intelligence could interpret your imagination, turning the images in your mind into reality? It sounds like the plot of a cyberpunk novel, but researchers have discovered that they can reconstruct high-resolution images from brain activity using the popular Stable Diffusion image-generation model.

The authors write that, unlike previous studies, they did not need to train or fine-tune the artificial intelligence models to create these images.

Several previous studies have achieved high-resolution image reconstructions, but only after training and fine-tuning generative models. That approach is limiting, because training complex models is difficult and neuroscience datasets typically contain far fewer samples than such models require. Until now, no other researchers had attempted to use diffusion models for visual reconstruction.

The new study also offers a glimpse into the internal processes of diffusion models, the researchers concluded, noting that they are the first to provide a quantitative interpretation of the model from a biological point of view.

Researchers at Osaka University's Graduate School of Frontier Biosciences said they first predicted a latent representation, a compressed model of the image, from fMRI signals. That representation was then processed, and noise was added to it through a diffusion process. Finally, the researchers decoded text representations from fMRI signals in the higher visual cortex and used them as conditioning input to produce the final reconstructed image.
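Read as a pipeline, the description above amounts to three stages: decode an image latent from fMRI, add noise to it, then denoise it under text conditioning that is also decoded from fMRI. The sketch below is only an illustration of that flow, not the authors' code: it assumes a simple ridge regression as the fMRI decoder, and `noise_latent` and `denoise_with_text` are hypothetical placeholders for Stable Diffusion's forward (noising) and reverse (denoising) passes.

```python
from sklearn.linear_model import Ridge


def reconstruct_images(x_early_train, z_train, x_higher_train, c_train,
                       x_early_test, x_higher_test,
                       noise_latent, denoise_with_text):
    """Illustrative sketch of the three-stage pipeline described above.

    x_*: fMRI voxel responses (n_samples, n_voxels) from early / higher
         visual cortex; z_train: image latents of the training stimuli;
         c_train: text embeddings of the stimuli's captions.
    noise_latent / denoise_with_text are hypothetical stand-ins for
    Stable Diffusion's forward and reverse diffusion steps.
    """
    # Stage 1: linear decoder from early-visual-cortex fMRI to image latents.
    image_decoder = Ridge(alpha=1.0).fit(x_early_train, z_train)
    z_pred = image_decoder.predict(x_early_test)

    # Stage 2: linear decoder from higher-visual-cortex fMRI to text latents.
    text_decoder = Ridge(alpha=1.0).fit(x_higher_train, c_train)
    c_pred = text_decoder.predict(x_higher_test)

    # Stage 3: add noise to the predicted image latent, then denoise it
    # while conditioning on the predicted text latent.
    z_noisy = noise_latent(z_pred)
    return denoise_with_text(z_noisy, c_pred)
```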

We have already seen examples of how brain waves and brain activity can be used to create images. In 2014, Shanghai-based artist Jody Xiong used EEG biosensors to connect sixteen people with disabilities to paint-filled balloons. The participants then used their thoughts to burst particular balloons and create their own paintings. In another example, artist Lia Chavez created an installation in which electrical impulses from the brain produced works of sound and light. Spectators wore EEG headsets that transmitted signals to an audio-visual system, where their brain waves were rendered as color and sound.

With the development of generative AI, more and more researchers are testing how AI models can work with the human brain. In January 2022, researchers from Radboud University in the Netherlands trained a generative AI network, a predecessor of Stable Diffusion, on fMRI data from 1,050 unique individuals and used it to turn brain-imaging results into real images. The study showed that AI is capable of unprecedented stimulus reconstruction. In the latest study, published in December 2022, scientists found that modern diffusion models can now deliver high-resolution visual reconstructions.