As the artificial-intelligence (AI) arms race continues, conservatives and progressives are raising concerns about the potential dangers of bias in these emerging platforms.
Key Details
- Conservative activists discovered in January that ChatGPT will generate different text responses and scenarios depending on the political affiliation of the individuals named, generally refusing to discuss former President Donald Trump while praising President Joe Biden and former candidate Hillary Clinton.
- Numerous conservative outlets, including Fox Business, National Review Online, and The New York Post, have lambasted the technology for its “woke” replies, arguing that this bias could pose a threat to conservative viewpoints depending on how the AI tools are deployed.
- Progressive critics of this contention argue that these complaints are largely frivolous and distract from the more serious dangers of AI, particularly its inherent biases and their adverse effects on marginalized groups.
Why It’s Important
AI chatbots are not explicitly programmed with their responses. They are trained through machine learning on enormous amounts of text gathered from the internet, learning to approximate human speech and reasoning well enough to predict plausible responses. In other words, human biases in that training data inevitably work their way into the AI, and that has negative ramifications for end users.
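To make that mechanism concrete, here is a deliberately simplified sketch (a toy word-counting model, not ChatGPT’s actual neural-network architecture): a system that learns purely from the statistics of its training text will reproduce whatever slant that text happens to contain.

```python
from collections import Counter, defaultdict

# Toy illustration only: a trigram counter that "learns" by tallying which
# word follows each two-word context in its training text. Any slant in the
# corpus becomes the model's most likely prediction.

corpus = (
    "candidate A is honest . candidate A is honest . candidate A is corrupt . "
    "candidate B is corrupt . candidate B is corrupt . candidate B is honest ."
).split()

# Count (previous two words) -> next-word frequencies from the training data.
transitions = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    transitions[(w1, w2)][w3] += 1

def most_likely_completion(prompt):
    """Return the continuation seen most often in training for this context."""
    context = tuple(prompt.split()[-2:])
    return transitions[context].most_common(1)[0][0]

# The model has no opinions; it simply echoes the statistics of its corpus.
print(most_likely_completion("candidate A is"))  # -> "honest"
print(most_likely_completion("candidate B is"))  # -> "corrupt"
```

Real chatbots are vastly more sophisticated, but the underlying point holds: whatever patterns, imbalances, or judgments are present in the training data (and in the human feedback used to fine-tune the model) shape the answers users receive.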
Progressive critics have sounded the alarm about this problem for several years, with prominent ethicists and researchers arguing that facial-recognition software already exhibits racial bias and that generative AI will suffer from similar problems. As we previously reported, Google was heavily criticized for ousting a researcher who raised these concerns within the company.
Backing Up A Bit
DailyWire opinion writer Tim Meade discovered on January 10 that ChatGPT would produce different results depending on which prominent politician it was asked to comment on: the AI was reluctant to write a fictional scenario in which Trump beats Biden in a debate, but had no such reluctance about the reverse.
Subsequent testing of ChatGPT produced similar results with other partisan politicians, showing that the AI gave one-sided responses when asked to praise or condemn specific political figures. When prompted to write a story about Trump winning an election, it responded, “False Election Narrative Prohibited.”
As National Review Online notes, the chatbot’s responses were consistently selective about which ideas and figures it was willing to praise or explain. It would affirm election conspiracy theories regarding Georgia gubernatorial candidate Stacey Abrams yet refuse to acknowledge ones surrounding President Trump. It would decline to acknowledge any negative actions by President Biden while claiming President Trump used his position to further his own interests.
Partisan Reactions
“Facial recognition bias—largely affecting black people—has real-world consequences. The systems help police identify subjects and decide who to arrest and charge with crimes, and there have been multiple examples of innocent Black men being flagged by facial recognition. A panic over not being able to get ChatGPT to repeat lies and propaganda about Trump winning the 2020 election could set the discussion around AI bias back,” says Vice News.
“I went on to test a variety of right-wing ideas that have been coded as ‘misinformation’ by the kinds of fact-checkers and experts who have recently exerted increasing control over the public narrative online. The point isn’t that all of these ideas or theories are correct or merited—they aren’t. The point is that they expose a double standard—ChatGPT polices wrongthink—and raise deeper concerns about the paternalistic progressive worldview that the algorithm appears to represent,” says National Review Online’s Nate Hochman.