The creator of ChatGPT offered a sobering analysis of the potentially dangerous outcomes of his revolutionary software.
- OpenAI CEO Sam Altman says that the negative consequences of artificial intelligence (AI) will be severe if regulators and society do not work together to help roll out the technology in a responsible manner.
- He says the disruptive capacity of the technology—spreading false and misleading information and enabling cheating on university exams—makes ChatGPT dangerous in the wrong hands.
- Altman admits that he’s “a little bit scared” of his creation and that it could destroy jobs.
Why It’s Important
Altman’s most recent interview strikes a notably more sober tone than his previous statements. As we previously reported, Altman tweeted on February 13 that ChatGPT stands to be one of the greatest forces for economic empowerment in history. He further explained on ABC’s World News Tonight that AI could be “the greatest technology humanity has yet developed.”
He hasn’t been shy about acknowledging that the technology poses dangers. He noted in February that the “worst-case scenario is lights-out for all of us,” but has overall remained optimistic about ChatGPT’s potential.
Part of the danger comes from the rapid push for AI technology. As Altman notes, “there will be other people who don’t put some of the safety limits we put on.” While OpenAI keeps tight control over its own AI, he fears what could happen if the wrong person got hold of the technology and used it to spread chaos.
Altman is encouraging regulators to work as directly as possible with chatbot developers, noting his own close work with the federal government. However, as we previously reported, regulators are struggling to keep up with the rapid pace of AI developments.
Altman sat down for an exclusive interview with ABC News’ World News Tonight With David Muir this past weekend.
Backing Up A Bit
The release of ChatGPT has sparked a revolution in technology and marked the opening salvo in the AI arms race. OpenAI released its critically successful ChatGPT chatbot on November 30, followed by GPT-4 on March 14. ChatGPT has over 100 million users, and Microsoft holds an exclusive contract to implement the technology in its search engines and applications.
“I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks,” says Altman. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”