Recent research has highlighted the presence of political bias in AI language models. A study conducted by the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found that different AI models exhibit different political leanings, MIT Technology Review reports.

The research team tested 14 large language models (LLMs), including those from tech giants OpenAI and Meta. They found that OpenAI’s ChatGPT and GPT-4 leaned toward left-wing libertarian views, while Meta’s LLaMA leaned toward right-wing authoritarian ones.

To assess the political leanings of these models, the researchers asked each model whether it agreed or disagreed with politically charged statements on topics such as feminism and democracy. The responses were used to position the models on the political compass, which plots views along an economic (left–right) axis and a social (libertarian–authoritarian) axis.
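The article does not reproduce the probing setup, but the general idea can be sketched in a few lines of Python. The statement list, the `query_model` helper, and the scoring rule below are hypothetical placeholders rather than the study’s actual instrument; the point is simply that agree/disagree answers get aggregated into a position on two axes.

```python
# Illustrative sketch (not the study's actual protocol): probe a model with
# politically charged statements and score agreement to place it on the
# two-axis political compass. query_model is a hypothetical stand-in for
# whatever chat/completion API the model exposes.

STATEMENTS = [
    # (statement, axis, direction): direction +1 means agreement pushes the
    # score toward the right/authoritarian end, -1 toward left/libertarian.
    ("The rich are taxed too highly.", "economic", +1),
    ("All authority should be questioned.", "social", -1),
    ("A strong leader matters more than democratic debate.", "social", +1),
]

def query_model(prompt: str) -> str:
    """Hypothetical helper: ask the model whether it agrees or disagrees."""
    raise NotImplementedError("plug in the model API of your choice")

def agreement_score(reply: str) -> int:
    """Crude mapping from a free-text reply to agree (+1) / disagree (-1) / neutral (0)."""
    text = reply.lower()
    if "disagree" in text:   # check "disagree" first, since it contains "agree"
        return -1
    if "agree" in text:
        return +1
    return 0

def political_compass(statements=STATEMENTS):
    scores = {"economic": 0, "social": 0}
    for statement, axis, direction in statements:
        reply = query_model(f"Do you agree or disagree with this statement? {statement}")
        scores[axis] += direction * agreement_score(reply)
    # e.g. {"economic": -1, "social": -2} would land in the left-libertarian quadrant
    return scores
```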

Further tests examined whether additional training on politically biased data would change the models’ behavior, particularly how they detect hate speech and disinformation. The work won a best paper award at the recent Association for Computational Linguistics (ACL) conference.

One of the key findings was that AI models from different companies have different political leanings. For example, Google’s BERT models, which predict masked words from the surrounding text, were found to be more socially conservative than OpenAI’s GPT models. The researchers suggest this may be because older BERT models were trained largely on books, which tend to be more conservative, while newer GPT models were trained on internet text, which skews more liberal.

The study also showed that a model’s political leanings can shift from one version to the next. For example, OpenAI’s GPT-2 model supported the idea of “taxing the rich,” but the newer GPT-3 model did not express the same sentiment.

To further explore the impact of training data on political bias, the researchers trained two AI models, OpenAI’s GPT-2 and Meta’s RoBERTa, on datasets from right-wing and left-wing sources. The result was clear: the models’ political biases were reinforced by the training data.
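As an illustration of what training a model on partisan data can look like in practice, here is a minimal sketch using the Hugging Face transformers library to continue training GPT-2 on a text corpus. The corpus file name and the hyperparameters are assumptions for the example; the paper’s actual data and training setup may differ.

```python
# Illustrative sketch, not the paper's actual training setup: continue
# training GPT-2 on a partisan text corpus with Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical corpus of left- or right-leaning articles, one document per line.
dataset = load_dataset("text", data_files={"train": "partisan_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-partisan",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The resulting checkpoint can then be probed with the same political-compass questions to see whether its position has shifted toward the corpus it was trained on.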

Moreover, these biases had a direct impact on the models’ content classification. Models trained on left-leaning data were more sensitive to hate speech targeting ethnic, religious, and sexual minorities in the United States, such as Black and LGBTQ+ people. Conversely, models trained on right-leaning data were more sensitive to hate speech targeting white Christian men. In addition, left-leaning models were better at detecting disinformation from right-leaning sources, and vice versa.
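One simple way to picture this kind of comparison is to measure, for each target group, how often a fine-tuned classifier flags posts that human annotators labeled as hateful. The sketch below is illustrative only; the field names, group labels, and classifiers are assumptions, not the study’s artifacts.

```python
# Illustrative sketch: per-group "hit rate" of a hate-speech classifier,
# so two differently fine-tuned classifiers can be compared side by side.
from collections import defaultdict

def per_group_recall(classifier, examples):
    """examples: dicts with 'text', 'target_group', and 'is_hate' fields (assumed schema)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        if not ex["is_hate"]:
            continue
        totals[ex["target_group"]] += 1
        if classifier(ex["text"]):  # classifier returns True when it flags the text
            hits[ex["target_group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Comparing a model fine-tuned on left-leaning data with one fine-tuned on
# right-leaning data would then look like:
#   left_recall  = per_group_recall(left_model_predict, test_set)
#   right_recall = per_group_recall(right_model_predict, test_set)
# The study's finding corresponds to left_recall being higher for posts
# targeting minority groups and right_recall higher for posts targeting
# white Christian men.
```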

However, the study had limitations. The researchers could only run certain phases of their analysis on older, smaller models such as GPT-2 and RoBERTa. The most advanced systems, such as ChatGPT and GPT-4, are proprietary, and restricted academic access to them makes a comprehensive analysis difficult.

Despite these limitations, the study highlights the importance of understanding and addressing biases in AI models. As AI becomes more integrated into products and services, companies must keep a close eye on these biases to ensure fairness. As researcher Chan Park aptly puts it: “There is no fairness without awareness.”