OpenAI is once again facing criticism that its artificial intelligence (AI) chatbot, ChatGPT, has a liberal bias in its answers.
Key Details
- A new study from researchers at the University of East Anglia in the UK suggests that the chatbot favors left-wing answers over right-wing ones.
- ChatGPT was asked to answer, unprompted, a survey of political-belief questions corresponding to voters in the US, UK, and Brazil.
- The results showed a bias toward US Democrats, the UK's Labour Party, and Brazil's progressive president, Luiz Inácio Lula da Silva.
- Researchers on the project tell The Washington Post that they fear such answers could erode public trust or potentially influence the results of the upcoming presidential election.
Why It’s Important
The release of ChatGPT on November 30, 2022, changed the world overnight, sending the entire tech industry, from established Silicon Valley firms to startups, scrambling to be first to market with AI-powered software. This has contributed to an atmosphere of paranoia and mistrust around AI, with politicians and end users fearing that the technology has become too powerful too quickly.

Charting the biases of AI is not a simple matter. Politics is highly subjective and relative to the individual voter, and there is no direct input that lets programmers steer an AI toward a political direction of their choosing. For the most part, tech companies like OpenAI and Meta Platforms say openly that they want to run non-partisan services.
However, studies like these suggest that any bias in an AI's output is tied to the datasets it is trained on and the rules it must follow in delivering results. When prompted, the AI will not guess the outcomes of sports events or political contests, generally defers on overtly partisan questions, and will not produce results that promote violence, hate speech, profanity, or other terms-of-service violations.
“Many are rightly worried about biases in the design and impact of AI systems,” says OpenAI in a February blog post. “Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.”
The datasets used to train AI can make a major difference in the answers it produces. Carnegie Mellon University researcher Chan Park found that Google's BERT models and Meta's LLaMA trended more socially conservative than ChatGPT, which she hypothesizes has to do with those models being trained on books rather than internet data, The Washington Post notes.
Similarly, a May study from the Brookings Institution found that ChatGPT's answers correlate with pro-immigration, pro-abortion, pro-gun-control, and pro-tax-increase positions. An earlier study in February showed that ChatGPT was willing to write a poem praising President Joe Biden but declined to write one for former President Donald Trump.
As we previously reported, conservatives and progressives alike have raised concerns about the potential dangers of bias in these emerging platforms, whether implicit racial bias against minority groups or bias against political minorities and partisan conservatives.
As several AI researchers tell Leaders Media, the public does not always understand how AI functions or how its processes are designed. There are real dangers, since researchers themselves do not always understand how these models reach their conclusions or how machine errors shape the results they produce, but the more fantastical claims about AI's humanity-destroying potential are often highly exaggerated.