Artificial intelligence can change the way users think without their knowledge, as a growing body of research shows, The Wall Street Journal reports.
One such experiment involved writing essays with the help of artificial intelligence. During the task, the AI could nudge participants toward one point of view or another, depending on the bias of the algorithm. Moreover, completing the exercise continued to influence participants' opinions after it was over.
“You may not even know that you are being influenced,” says Mor Naaman, a professor in the information science department at Cornell University and the senior author of the paper. He calls this phenomenon “latent persuasion.”
The study asked whether social media is good for society. Dr. Naaman and his colleagues chose this topic in part because it is not an issue on which most people hold deep-seated beliefs that would be difficult to change. An AI biased in favor of social media tended to steer subjects toward writing text that matched that bias; conversely, an AI biased against social media produced the opposite effect.
This property of generative AI has many potentially harmful applications. For example, authoritarian governments could require social media and productivity tools to nudge their citizens to communicate in certain ways. Even in the absence of malicious intent, students may be unwittingly swayed toward certain views when AI assists them in their studies.
Such studies point to a troubling prospect: even as AI makes people more productive, it may also change their minds in subtle and unpredictable ways. This influence may resemble the way people sway one another through collaboration and social norms more than the familiar influence of mass media or social networks.
According to the researchers, the best and so far only defense against this form of influence is awareness: more people need to know that it exists. In the long term, other safeguards may also help, such as regulators requiring transparency about how AI algorithms work and which human biases they mimic.
Ultimately, this may mean that in the future people will choose which AI to use at work, at home, and in their children's education based on the human values expressed in the AI's answers.
Some AIs can have different “characters,” including political leanings. In the future, companies and organizations may also offer AI models built specifically for certain tasks, such as assistants skilled in persuasion or trained to be exceptionally polite.
It was previously reported that the fast-food chain Wendy's automated its drive-thru customer service with an AI voice chatbot. The service runs on natural-language software developed by Google and can understand the many ways customers place orders.