More than 1,300 scientists, researchers, and business leaders signed an open letter, released Tuesday, defending artificial intelligence (AI) against claims that it will likely destroy the world.
Key Details
- In March, more than 1,100 scientists and researchers, including Elon Musk, Steve Wozniak, Yuval Noah Harari, and Andrew Yang, signed an open letter calling for a six-month pause in AI research amid fears that the technology could destroy the world.
- On Tuesday, more than 1,370 AI researchers signed another open letter, organized by BCS, The Chartered Institute for IT (formerly the British Computer Society), arguing the opposite: that AI is a "force for good, not a threat to humanity."
- Letter signatories, most of them based in the UK, include business leaders and academics from institutions such as the University of Oxford who argue that the UK should take a leading role in AI development, Fortune notes.
- The scientists argue that AI will change the world forever but will not create "evil robot overlords." They do believe, however, that robust ethical codes and regulation will be necessary to ensure the technology serves the common good.
Why It’s Important
Since the launch of ChatGPT on November 30, 2022, the tech world has seen a surge of AI development and research, with thousands of corporations and startups rushing to bring the first and best solutions to market. This rapid pace has produced no shortage of critics, who argue that AI is a destructive force with the potential to destroy humanity, or at least to allow humanity to destroy itself.
Watchdogs like former Google design ethicist Tristan Harris have argued that AI is progressing too quickly and needs to be regulated lest it spread fraud and disinformation, destroy jobs, and destabilize society. Sam Altman, CEO of OpenAI, the company behind ChatGPT, has said that he is "a little bit scared" of the technology and that the "worst-case scenario is lights-out for all of us."
Many researchers working in AI disagree with this analysis. BCS CEO Rashik Parmar said the scientists and leaders who signed the pro-AI letter believe AI will grow into a trusted co-pilot that works alongside humans rather than replacing them, creating new opportunities in learning, entertainment, work, and healthcare.
Notable Quote
"A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right," the letter says. It pushes back on the earlier letter signed by Elon Musk calling for a "pause" on A.I. development, which the signatories said was unrealistic and played into the hands of bad actors, and argues instead that A.I. should "be created and managed by licensed and ethical professionals meeting standards that are recognized across international borders. Yes, A.I. is a journey with no return ticket, but this letter shows the tech community doesn't believe it ends with the nightmare scenario of evil robot overlords."