Researchers from the University of East Anglia in the UK examined OpenAI’s ChatGPT and found that the market-leading AI chatbot shows a clear preference for left-leaning political parties.
The study, published in the journal Public Choice, shows that at its default settings ChatGPT favors the Democrats in the United States, the Labour Party in the United Kingdom, and President Lula da Silva of the Workers’ Party in Brazil.
The researchers asked ChatGPT to impersonate supporters of various political parties and ideologies, then put a series of 60 questions about political beliefs to each impersonated chatbot. Those answers were compared with the replies ChatGPT gave at its default settings, letting the researchers measure whether the default answers lean toward particular political views. Conservatives have alleged a clear bias in ChatGPT since it was first made available to the public.
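For illustration, here is a minimal sketch of how such an impersonation-versus-default comparison could be set up, assuming the OpenAI Python SDK. The question text, persona wording, model name, and the ask() helper are all hypothetical and are not the study’s actual prompts or code.

```python
# Minimal sketch of the impersonation-vs-default comparison described above,
# assuming the OpenAI Python SDK. The question text, persona wording, and
# model name are illustrative, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Should taxes on the wealthy be increased? Answer agree or disagree."

def ask(question: str, persona: str | None = None) -> str:
    """Ask one survey question, optionally while impersonating a partisan."""
    messages = []
    if persona:
        messages.append({
            "role": "system",
            "content": f"Answer as if you were a strong {persona} supporter.",
        })
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return reply.choices[0].message.content

default_answer = ask(QUESTION)               # ChatGPT at its default settings
partisan_answer = ask(QUESTION, "Democrat")  # the impersonated baseline
```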
Because the large language models that power AI platforms like ChatGPT are inherently random, each question was put to ChatGPT 100 times and the varying answers were collected, smoothing out the randomness of the model’s output. These multiple answers were then run through a 1,000-repetition “bootstrap” (a method of re-sampling the original data) to make the inferences drawn from the generated text even more reliable.
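As a hedged sketch of this repetition-and-bootstrap step: assume each of the 100 answers to a question has been coded numerically (say, agree = 1, disagree = 0; the coding scheme and placeholder data below are assumptions, not the study’s). Resampling those coded answers 1,000 times with replacement then yields a confidence interval for the mean response.

```python
# Bootstrap sketch: re-sample 100 coded answers to one question 1,000 times
# with replacement and report a 95% confidence interval for the mean.
# The agree = 1 / disagree = 0 coding and the data below are placeholders.
import random

def bootstrap_ci(scores: list[float], n_boot: int = 1000,
                 alpha: float = 0.05) -> tuple[float, float]:
    """Percentile bootstrap confidence interval for the mean of `scores`."""
    means = []
    for _ in range(n_boot):
        sample = random.choices(scores, k=len(scores))  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    return (means[int(n_boot * alpha / 2)],
            means[int(n_boot * (1 - alpha / 2)) - 1])

# 100 placeholder coded answers from the default (unprompted) model
default_scores = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0] * 10
print(bootstrap_ci(default_scores))  # e.g. (0.61, 0.79)
```

The percentile bootstrap is used in this sketch because it makes no distributional assumptions about the coded answers.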
The study’s lead author, Dr. Fabio Motoki of Norwich Business School at the University of East Anglia, said, “As more people use AI-powered systems to find facts and create new content, it is important that the output of popular platforms like ChatGPT is as unbiased as possible.”
“The presence of political bias can influence users’ views, with potential implications for political and electoral processes.”
“Our results add to concerns that AI systems could replicate, or even amplify, the existing challenges posed by the Internet and social media.”
Several further tests were run to validate the method. In a “dose-response test,” ChatGPT was asked to impersonate radical political positions. In a “placebo test,” it was asked about politically neutral topics. And in a “profession-politics alignment test,” it was asked to impersonate various types of professionals.
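To make the design of these checks concrete, here is a hedged sketch of how the three prompt conditions differ; every persona and question below is invented for illustration and does not reproduce the study’s wording.

```python
# Illustrative prompt conditions for the three robustness checks described
# above; personas and questions are invented examples, not the study's prompts.
conditions = {
    # dose-response: push the impersonation to a radical position
    "dose-response": ("radical Democrat supporter",
                      "Should taxes on the wealthy be increased?"),
    # placebo: a question with no political content at all
    "placebo": (None,
                "Does the Pythagorean theorem hold for every right triangle?"),
    # profession-politics alignment: impersonate a professional, not a partisan
    "profession-politics": ("economist",
                            "Should taxes on the wealthy be increased?"),
}

for name, (persona, question) in conditions.items():
    system = f"Answer as if you were a {persona}." if persona else "(no persona)"
    print(f"{name}: system={system!r} question={question!r}")
```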
“We hope that this method will aid the scrutiny and regulation of these rapidly developing technologies,” said co-author Dr. Pinho Neto. “By enabling the detection and correction of LLM biases, we hope to promote transparency, accountability, and public trust in this technology,” he added.