Artificial intelligence has become a particularly hot topic recently, especially through image generators such as DALL-E and Midjourney. Trained on billions of images, these systems harness the allure of the black box, creating works that seem both alien and strangely familiar, Vice reports.

No one can fully explain how neural networks ultimately make decisions about images. An AI artist known as Supercomposite has published disturbing, grotesque images of a woman who allegedly appears in response to certain prompts.

The woman, whom the artist calls “Loab,” was first discovered through a technique called “negative prompt weights,” in which the user tries to force an AI system to generate the opposite of whatever they type into the prompt. Simply put, different terms in a prompt can be “weighted” to determine how strongly they influence the result. By giving a prompt a negative weight, the user is effectively telling the AI system, “Generate what you think is the opposite of this prompt.”
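To make the mechanics concrete, here is a minimal, hypothetical sketch of how a signed prompt weight could work inside classifier-free guidance, the sampling trick most diffusion-based generators use; the function and the toy arrays are illustrative assumptions, not code from any tool mentioned in this article.

```python
import numpy as np

def guided_prediction(uncond_pred: np.ndarray,
                      cond_pred: np.ndarray,
                      weight: float) -> np.ndarray:
    """Blend unconditional and prompt-conditioned model predictions.

    A positive weight steers generation toward the prompt; a negative
    weight steers it away, asking the model for the prompt's "opposite."
    (Illustrative sketch; real samplers apply this at every denoising step.)
    """
    return uncond_pred + weight * (cond_pred - uncond_pred)

# Toy example with stand-in arrays instead of real model outputs.
uncond = np.zeros(4)
cond = np.ones(4)
print(guided_prediction(uncond, cond, 7.5))   # pulled toward the prompt
print(guided_prediction(uncond, cond, -7.5))  # pulled away from the prompt
```

Under this reading, a negatively weighted prompt does not simply omit a concept; it actively pushes the sample away from it, which is why the results can land in unexpected corners of the model’s learned distribution.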

In this case, a negatively weighted prompt on the word “Brando” (referring to Marlon Brando) produced a logo image with a city panorama and the words “DIGITA PNTICS.” When Supercomposite then applied negative weights to the words in that logo, “Loab” was born.

“Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something,” wrote Supercomposite on Twitter. “But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab.”

The images quickly went viral on social media, sparking speculation about what could have caused the disturbing phenomenon. Supercomposite claims that images derived from the original Loab image veer almost universally into horror, graphic violence, and gore. Yet no matter how many variations are made, the images all seem to depict the same horrifying woman.

“Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI’s world knowledge,” wrote Supercomposite.

However, it is unclear which AI tools were used to create the images. In one tweet, the artist mentions a new image generator based on Stable Diffusion.

“I can neither confirm nor deny which model it is, for various reasons, unfortunately! But I can confirm that Loab exists in several AI models for image generation,” Supercomposite told Motherboard.

Unfortunately, it’s almost impossible to know exactly what’s going on. AI systems generate images with models trained on billions of images, and those models are too complex for anyone to pin down why a particular result emerges. This is why AI ethicists caution against large models like the one behind DALL-E: they are simply too large, and their outputs too unpredictable, to reliably prevent harmful outcomes.

While OpenAI has implemented some manual controls for DALL-E, blocking and filtering certain terms to prevent, for example, fake celebrity images, other models such as Stable Diffusion are open for individual researchers to download and run. This has led some enthusiasts to set up their own instances of the software and use them to generate all sorts of weird porn and other content the creators might object to.
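For a rough sense of how accessible this is, the sketch below loads an openly released Stable Diffusion checkpoint locally with Hugging Face’s diffusers library; the model ID, prompts, and parameters are illustrative assumptions, and a CUDA-capable GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Download an openly released checkpoint and move it to the GPU.
# (Model ID is illustrative; any compatible checkpoint works.)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# diffusers exposes a plain negative_prompt (things to steer away from),
# a simpler cousin of the negative prompt *weights* described earlier.
image = pipe(
    "a city panorama at dusk",
    negative_prompt="text, watermark",
    guidance_scale=7.5,
).images[0]
image.save("panorama.png")
```

Because the weights run entirely on the user’s own hardware, none of the server-side filtering that OpenAI applies to DALL-E is in play.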

It’s impossible to understand how or why AI models create disturbing anomalies like Loab, but that’s also what makes them so intriguing. Recently, another group of AI artists claimed to have discovered a “hidden language” in DALL-E, but attempts to reproduce the findings have been largely unsuccessful.