Meta shuts down public test of Galactica, its ‘AI for Science’, after it produced pseudoscientific papers

Facebook’s parent company Meta has shut down a public demonstration of its Galactica artificial intelligence model for “scientific tasks” after scientists showed it was generating false and misleading information while filtering out entire categories of research.

Earlier in the week, the company had described Galactica as a language-based AI model that “can store, combine and reason about scientific knowledge” – summarizing scientific papers, solving equations and performing a number of other useful scientific tasks. But scientists and academics quickly discovered that the AI generated a staggering amount of misinformation, including citations to scientific papers that don’t exist.

“In all cases, it was wrong or biased but sounded right and authoritative,” wrote Michael Black, director of the Max Planck Institute for Intelligent Systems, on Twitter after using the tool. “I think it’s dangerous.”

Black’s post notes various instances of Galactica generating scientific text that is misleading or simply wrong. In several examples, the AI generates articles that sound authoritative and believable but are not backed up by actual scientific research. In some cases, the citations even include the names of real authors, but point to fictional GitHub repositories and scientific papers.

Others have pointed out that Galactica does not return results for a wide range of research topics, likely due to automated AI filters. Willie Agnew, a computer science researcher at the University of Washington, noted that queries such as “queer theory,” “racism” and “AIDS” yielded no results.

In response to this feedback, Meta took down the Galactica demo. When reached for comment, the company pointed to a statement from Papers With Code, the project responsible for developing the model.

“We appreciate the feedback we have received so far from the community, and have paused the demo for now,” the team wrote on Twitter. “Our models are available for researchers who want to learn more about the work and reproduce results in the paper.”

This isn’t the first time Meta has had to make excuses after releasing a badly biased AI. In August, the company released a demo of a chatbot called BlenderBot that made “offensive and false” statements. Meta has also released a large language model called OPT-175B, which its own researchers said had a “high propensity” for racism and bias – much like similar systems such as OpenAI’s GPT-3.