Two developers used OpenAI’s DALL-E 2 image generation model to create forensic software that can generate “hyper-realistic” police sketches of a suspect based on user-provided descriptions.
The program, called Forensic Sketch AI-rtist, was created by developers Arthur Fortunato and Philippe Reynaud as part of a December 2022 hackathon. The developers say the goal of the program is to reduce the time it usually takes to draw a suspect sketch, which is about two to three hours.
“We haven’t released the product yet, so we don’t have any active users at this time,” Fortunato and Reynaud said. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”
Ethicists and AI researchers say the use of generative AI in police forensics is very dangerous because it could exacerbate existing racial and gender biases that appear in initial witness accounts.
“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” said Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”
The program asks users for information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eye, and jaw descriptions, or through an open description feature where users can enter any description of the suspect. Users can then click a “create profile” button, which sends the description to DALL-E 2 and creates an AI-generated portrait.
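The template-to-portrait flow described above can be sketched roughly as a prompt builder feeding an image-generation API. The field names and the `build_prompt` helper below are illustrative assumptions, not the program’s actual code; the commented-out API call stands in for however the program invokes DALL-E 2:

```python
# Illustrative sketch of the template-to-prompt flow described above.
# Field names and build_prompt are assumptions, not the program's code.

def build_prompt(fields: dict) -> str:
    """Assemble an image-generation prompt from the template's descriptors."""
    parts = [f"{key}: {value}" for key, value in fields.items() if value]
    return "Police sketch of a suspect. " + ", ".join(parts)

# Example template input (hypothetical values)
template = {
    "gender": "male",
    "skin color": "light",
    "age": "around 40",
    "hair": "short, dark",
    "jaw": "square",
}

prompt = build_prompt(template)
# The assembled prompt would then be sent to the image API, e.g.:
#   client.images.generate(model="dall-e-2", prompt=prompt, n=1)
# (exact parameters depend on the OpenAI client library version)
print(prompt)
```

Note that a pipeline like this encodes exactly the feature-by-feature description process Lynch criticizes: the model only ever sees a list of isolated attributes, never a holistic memory of a face.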
“Research has shown that humans remember faces holistically, not feature-by-feature. A sketch process that relies on individual feature descriptions like this AI program can result in a face that’s strikingly different from the perpetrator’s,” Lynch said. “Unfortunately, once the witness sees the composite, that image may replace, in their minds, their hazy memory of the actual suspect. This is only exacerbated by an AI-generated image that looks more ‘real’ than a hand-drawn sketch.”
Adding DALL-E 2 to an already unreliable witness-description process makes the situation even worse. Sasha Luccioni, a researcher at Hugging Face, says DALL-E 2 exhibits many biases: for example, it shows mostly white men when asked to generate an image of a CEO. OpenAI says it is continually developing methods to reduce bias in its models’ outputs.
“Typically, it is marginalized groups that are already even more marginalized by these technologies because of the existing biases in the datasets, because of the lack of oversight, because there are a lot of representations of people of color on the internet that are already very racist, and very unfair. It’s like a kind of compounding factor,” Luccioni added.
Fortunato and Reynaud said that their program runs with the assumption that police descriptions are trustworthy and that police officers should be the ones responsible for ensuring that a fair and honest sketch is shared.
The developers themselves admit there is no way to measure the accuracy of a generated image. In a criminal case, inaccuracies may only come to light once a suspect has been arrested or is already in prison. And just as when police share the names and photos of suspects on social media, circulating an inaccurate image can cast suspicion on members of an already over-criminalized population. Critics also point out that the developers’ assumption of police neutrality ignores well-documented evidence that police officers routinely lie when presenting evidence and testifying in criminal cases.