A former Google employee is sounding the alarm over the dangers of artificial intelligence (AI) in the tech world.
- Tristan Harris, a former ethicist at Google, appeared on NBC Nightly News With Lester Holt on Wednesday and Fox News’ The Brian Kilmeade Show to discuss the dangers of AI.
- He and Center for Humane Technology co-founder Aza Raskin both warn that AI is progressing too quickly and needs to be regulated lest it spread fraud and disinformation, destroy jobs, and destabilize society.
- Harris warns that the speed of the AI arms race means corporations are effectively operating without guardrails or safety procedures, pushing out new technologies like Bard AI and ChatGPT too quickly.
Why It’s Important
The AI arms race has begun a revolution in digital technology that is rapidly changing the world, and that has many intelligent people worried. Sam Altman, the creator of ChatGPT, said in an interview last week that he is “a little bit scared” of his creation.
Government regulators have attempted to address the issue, such as with the White House’s October proposal for an “AI bill of rights,” but that effort has already been outpaced by ChatGPT’s November 30 release and its rapid adoption.
The rapid proliferation of AI means world governments are struggling to grasp its implications and cannot write regulations fast enough, leaving corporations to police themselves. Google has its own strict internal rules, and Microsoft President Brad Smith laid out his own proposal for ethical guidelines in a February blog post.
Harris argues that corporate self-regulation is not enough. He has an established reputation as a tech watchdog, having appeared in the Netflix documentary The Social Dilemma, where he argued that social media creates addiction to maximize profit and manipulates people’s worldviews and emotions. He says that Big Tech is not sufficiently addressing the risks and that employees inside these companies have been warning him about them.
“No one is building the guardrails. And this has moved so much faster than our government has been able to understand or appreciate … The people inside the companies know that this is moving at a reckless pace, which is why they’ve kind of channeled their concerns to us,” Harris tells NBC’s Lester Holt.
“Responsibility always gets bulldozed by market incentives, by your stock price, by needing to beat somebody else,” says Raskin.
“What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions. We can do that. We can have AI and research labs that’s applied to specific applications that does advance those areas. But when we’re in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that’s going to end well,” Harris tells Brian Kilmeade.