Instagram’s recommendation algorithms connect and promote a network of accounts that advertise and sell child sexual abuse material, according to researchers from Stanford University, NBC Bay Area reports.

According to the researchers, the service stands out from other social media platforms for a particularly serious problem with accounts offering self-generated child sexual abuse material (SG-CSAM), accounts allegedly run by the minors themselves.

“Due to the widespread use of hashtags, relatively long life of seller accounts and, especially, the effective recommendation algorithm, Instagram serves as the key discovery mechanism for this specific community of buyers and sellers,” the study found. It was cited in a joint investigation by The Wall Street Journal, Stanford University’s Internet Observatory Cyber Policy Center, and the University of Massachusetts Amherst.

The researchers also found that Instagram’s recommendation algorithms surface such accounts to users who merely view one of them, allowing the accounts to be discovered without any keyword search.

According to a Meta spokesperson, the company is taking steps to address these issues and has set up an internal task force to investigate the findings.

“Child exploitation is a horrific crime,” the spokesperson said. “We work aggressively to fight it on and off our platforms, and to support law enforcement in its efforts to arrest and prosecute the criminals behind it.”

Earlier, the nonprofit watchdog group Tech Transparency Project (TTP) reported that YouTube’s recommendations can lead children to videos of school shootings and other gun-related content.