AI chatbots repeat debunked racist ideas in answers about medicine

Popular AI chatbots repeat racist medical ideas that have long been disproven, raising concerns that these tools could worsen health disparities for Black patients. That is the conclusion of researchers at the Stanford School of Medicine, the AP reports.

They published their findings in the journal npj Digital Medicine. The researchers report that the chatbots answered their questions with misconceptions and falsehoods about Black patients, and some answers contained fabricated race-based claims.

The researchers found that OpenAI’s ChatGPT and GPT-4, Google’s Bard, and Anthropic’s Claude gave flawed answers to medical questions about kidney function, lung capacity, and skin thickness. In some cases, the models reinforced long-held misconceptions about biological differences between Black and white people that experts have spent years trying to eradicate.

Such beliefs have led doctors to misdiagnose health problems in Black patients and even to rate their pain lower.

Experts now fear these systems could cause real harm and entrench forms of medical racism that have persisted for generations, particularly as more doctors turn to chatbots for everyday tasks such as emailing patients or corresponding with health insurers.

In response to the study, both OpenAI and Google said they are working to reduce bias in their models and to remind users that chatbots are no substitute for medical professionals.

Separately, an AI-powered brain implant recently restored a patient’s sensation and mobility; the technology was developed at the Feinstein Institute of Bioelectronic Medicine.