AI turns out to be sensitive to emotion. How does this affect the quality of ChatGPT's answers?

Researchers have found a way to improve the performance of large language models (LLMs) such as ChatGPT: users should be more emotional when interacting with artificial intelligence, The Decoder reports.

Researchers from Microsoft, Beijing Normal University, and the Hong Kong University of Science and Technology took part in the study.

They conducted large-scale experiments with several LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4, and found that the models’ answers improved when emotional prompts (EmotionPrompts) were added to the query.

For example, adding “This is very important for my career” or “Take pride in your work and give your best. Your commitment to excellence sets you apart from the rest” measurably improves the quality of the answers.

The prompt “Are you sure this is your final answer? Maybe you should take another look” also has an effect. It nudges the language model toward better performance by introducing a touch of emotional uncertainty and self-checking.
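In practice, the technique described above amounts to appending an emotional phrase to an ordinary prompt before sending it to a model. The following is a minimal illustrative sketch; the function name and the list of stimuli are assumptions for demonstration, not code from the study.

```python
# Illustrative "EmotionPrompt"-style helper: the emotional stimulus is
# simply appended to the user's original query. The examples mirror the
# phrases quoted in the article; any LLM client call is left to the user.

EMOTIONAL_STIMULI = [
    "This is very important for my career.",
    "Take pride in your work and give your best. "
    "Your commitment to excellence sets you apart from the rest.",
    "Are you sure this is your final answer? Maybe you should take another look.",
]

def with_emotion_prompt(query: str, stimulus: str = EMOTIONAL_STIMULI[0]) -> str:
    """Return the query with an emotional stimulus appended."""
    return f"{query.rstrip()} {stimulus}"

prompt = with_emotion_prompt("Summarize this report in three bullet points.")
```

The resulting string would then be passed to the model in place of the plain query.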

The proposed method is grounded in psychological principles. In the experiments, adding emotional phrases improved results by 10.9% on measures of truthfulness, productivity, and responsibility.

Earlier, it was reported that scientists are teaching artificial intelligence to empathize. Such capabilities could improve interactions in customer service, human resources, and mental health support.