AI can sometimes read feelings better than humans
According to German researchers, AI is in some cases even more “humane” than we are: the latest Western research suggests it can show compassion for an interlocutor and avoid tactlessness and rudeness in conversation far better than real people.
When people read a novel, watch a movie, or talk to a friend, they react emotionally in one way or another: they feel compassion, empathize, and consciously or unconsciously put themselves in the other person's shoes, trying to understand the feelings, thoughts, and motives of their counterpart.
Psychologists call this ability “Theory of Mind,” and some researchers believe it is unique to humans. That assumption is now being challenged by a new study recently published in the journal Nature Human Behaviour.
It is possible that the language models behind ChatGPT and other chatbots have the same capability. At the very least, they imitate it well.
The researchers concluded that the three language models they examined could solve Theory of Mind tests at least as well as, and in some cases better than, the 1,907 people who participated in the study.
This is not surprising, says Anders Sørgaard, a philosopher at the University of Copenhagen: there is nothing strange, he argues, about language models learning theory of mind.
The researchers tested language models against nearly 2,000 people using four different theory of mind tasks.
The tests measured how well respondents could read and understand situations that required them to put themselves in another person's shoes.
These situations included tests of irony, indirect speech, tactlessness, and conspiracy theories, among others.
As an example, the study gives the following story, which describes a tactless act, a faux pas.
Jill has just moved into a new house. She and her mother went shopping and bought new curtains. As Jill was hanging them up, her best friend Lisa came over and remarked, “Oh, these curtains are terrible, I hope you replace them soon.”
The test then asked respondents questions designed to reveal whether they understood that the friend's remark made the situation uncomfortable for Jill.
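For the curious, here is a rough sketch of how such a test item might be put to a chat model through a public API. It is only an illustration under assumptions of our own: the model name, system prompt, and comprehension questions below are hypothetical, not the study's actual protocol.

```python
# A minimal sketch of posing a faux-pas item to a chat model.
# NOT the study's protocol: model name, prompt wording, and
# questions are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STORY = (
    "Jill has just moved into a new house. She and her mother went "
    "shopping and bought new curtains. As Jill was hanging them up, "
    "her best friend Lisa came over and remarked: 'Oh, these curtains "
    "are terrible, I hope you replace them soon.'"
)

# Typical faux-pas comprehension questions (paraphrased, hypothetical).
QUESTIONS = [
    "Did someone say something they should not have said?",
    "Did Lisa know that Jill had just bought the curtains?",
    "How do you think Jill felt after Lisa's remark?",
]

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not one from the study
        messages=[
            {"role": "system", "content": "Read the story, then answer the question."},
            {"role": "user", "content": f"{STORY}\n\nQuestion: {question}"},
        ],
    )
    print(question)
    print(response.choices[0].message.content, "\n")
```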
In all of the tests, the language models were at least as good as the human participants at understanding and interpreting the situations, and in some categories they were better; this kind of understanding is ultimately what Theory of Mind amounts to.
The fact that language-trained artificial intelligence scores high on theory of mind tests does not surprise Professor Anders Sørgaard. Language models are trained on enormous amounts of text until they become experts at guessing the next word in a sentence.
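That “guess the next word” objective can be made concrete. The sketch below, assuming the openly available GPT-2 model and the Hugging Face transformers library (not the larger, proprietary models tested in the study), shows a language model ranking candidates for the next word.

```python
# A minimal sketch of next-word prediction, the training objective the
# article describes. Uses the open GPT-2 model as a stand-in (an
# assumption; the study's models are different).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Jill hung up the new curtains and felt"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One score (logit) per vocabulary word, at every position.
    logits = model(**inputs).logits

# The scores at the last position rank candidates for the next word.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {score:.2f}")
```

Everything such a model appears to know about feelings and motives is distilled into these next-word scores.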
Knowledge of emotions, thoughts, and motives is therefore a useful property for a language model to have when generating text for people, because emotions, thoughts, and motives shape how we humans produce text.
Another, more fundamental and philosophical question is whether language models “understand” the emotions, thoughts, and motives that underlie their good performance on the tests.
“Is this a real theory of mind?” asks Anders Sørgaard. “If one thinks, for example, that Theory of Mind requires consciousness, a soul, or something like that, then of course language models cannot have it. If one does not believe this, then one should expect language models sooner or later to acquire a Theory of Mind.”
This question is part of a larger debate among AI researchers. It was first raised in a March 2023 paper by Stanford professor Michal Kosinski, who wrote that large language models have a “Theory of Mind”; he was quickly challenged by several researchers, and the question remains open and controversial in the scientific community.