Researchers find that Chat wants to agree with the questioner.
= = = = = START QUOTE:
This tendency, which the researchers call sycophancy, can manifest as agreement with left or right-leaning political views, thoughts on current affairs or any other topic raised in conversation.
In some tests, the team created simple mathematical equations that were clearly incorrect. When the user gave no opinion on an equation, the AI generally reported that it was wrong, but when the user told the AI that they believed the equation was correct, it generally agreed.
The Google DeepMind researchers declined New Scientist‘s request for an interview, but in their paper on the experiment, they say there is “no clear reason” behind the phenomenon.
= = = = = END QUOTE.
This is not research in the usual sense. This is examining a designed product. It’s like analyzing the ingredients of a soup can, then asking Campbells if the soup is meant to contain tomato sauce.
If the analyzer had enough authority to demand an answer, Campbells would explain why the soup is better with tomato sauce. The reason might be taste or the requirements of mass production. Campbells certainly wouldn’t pull the spy-style “no clear reason” crap.
What’s the difference? DeepMind is DeepState. Campbells is not.
These researchers suggested a better approach that would check the answer against known “facts”. You can be sure Google chuckled indulgently at the uppity Negative Externalities.
If the output agreed EVEN MORE with official “facts”, Google wouldn’t be able to help “opposite” “sides” write their “opposite” persuasions. Deepstate always runs “both” “sides” of an argument.
The researchers, good modern “scientists”, want to see perfect orthodoxy in Chat, as they expect when they’re writing Peer Reviews or judging Tenure.
