More than 1,000 technology leaders and researchers have signed an open letter urging companies to pause artificial intelligence (AI) development, stating that the systems pose "profound risks to society and humanity."
Key Details
- More than 1,000 tech leaders, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, Bulletin of the Atomic Scientists president Rachel Bronson, and others, have signed a letter urging a pause in the development of advanced AI systems.
- Since the debut of the AI chatbot ChatGPT in November, the technology has attracted more than 100 million users and led other companies to rush out artificial intelligence tools to quickly capitalize on the trend.
- Now tech leaders are warning that development of these systems must be paused because the technology is advancing too quickly and the tools present "profound risks to society and humanity."
Why it’s news
Since its launch in November, the AI bot ChatGPT has garnered at least 100 million users, along with both praise and backlash for its capabilities. The bot can generate original answers, pen essays, and has even passed demanding tests such as a medical licensing exam.
ChatGPT's popularity has prompted many other businesses to rush to create their own artificial intelligence programs, but tech leaders are warning that the software is advancing too quickly and, if not paused, will do more harm than good.
The open letter, released by the nonprofit Future of Life Institute, has been signed by more than 1,000 technology leaders and researchers and states that AI technology has become so advanced that even its creators cannot understand, predict, or reliably control it.
AI chatbots can perform many valuable tasks, such as writing code, drafting emails, and answering questions, but the bots are prone to getting answers wrong and can generate and spread misinformation.
The letter calls for a pause in developing AI systems more powerful than GPT-4, the model introduced this month by ChatGPT creator OpenAI.
The pause would provide time to implement “shared safety protocols” for AI systems. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter states.
Development of powerful AI systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter adds.
Although many influential leaders have signed the letter, others believe it will be difficult to convince the industry and the public that AI is harmful enough to warrant a pause, and new systems will likely continue to be built.
Sam Altman, CEO of ChatGPT maker OpenAI, has acknowledged the risks and potentially dangerous outcomes of the technology, but he has not signed the open letter.
Altman encourages regulation, stating that the negative consequences of AI will be significant if regulators and society do not work together to ensure the technology is used responsibly.
He says ChatGPT is a powerful tool that sometimes gets answers wrong and can help spread false information, and that its ability to pass exams, write essays, and perform other tasks could be misused.
He encourages regulators to work directly with chatbot developers to create rules for the technology, noting his own close work with the federal government.
Regulation
Many large companies are drafting policies on whether employees can use the AI tool ChatGPT in the workplace, and some are calling for regulation.
A recent study found that 43% of professionals have used AI tools, including ChatGPT, for work-related tasks. Nearly 70% of those professionals do so without their bosses' knowledge, underscoring the need for workplace guidance and leading many companies to issue usage policies.
The policies vary from business to business depending on the company's approach to AI. Some firms, including Bank of America, Deutsche Bank, and Wells Fargo, have banned the tool from work entirely, while the hedge fund Citadel is welcoming it.
Citadel is negotiating an enterprise-wide license to use ChatGPT, as company founder Ken Griffin says the technology has a powerful impact and can be helpful to employees.
Goldman Sachs was among the first companies to ban the service but recently changed course, allowing employees to use the tool to write and test software code. Since lifting the ban, around 40% of Goldman's code has been written with the AI tool as software developers use it to expedite their work.
"This branch of technology has a real impact on our business," says Griffin in an interview. "Everything from helping our developers write better code to translating software between languages to analyzing various types of information that we analyze in the ordinary course of our business."