The rapid proliferation of artificial intelligence (AI) has left world governments struggling to understand the implications of AI as a disruptive technology, and regulation has not kept pace.
- The White House released a blueprint in October to address many of the core economic and legal concerns with new technologies, such as algorithmic racial bias, data harvesting, and automation.
- The European Union proposed its Artificial Intelligence Act in 2021, which classifies AI uses it deems high-risk and low-risk, but the act hasn’t passed.
- Both major proposals were written before ChatGPT was released on November 30, 2022, which sparked four months of rapid innovation and demand for AI applications.
- The Chinese government has similarly stated its intention to limit AI, announcing on February 24 that the Ministry of Science and Technology will be monitoring the safety and uses of the technology.
- Smaller bodies like the New York City Department of Education and various financial institutions have restricted the use of chatbots in specific contexts. Still, national-level solutions have been limited, Bloomberg notes.
Why It’s Important
The ongoing AI arms race has far outpaced expectations, with innovations arriving every day that create new possibilities for benefiting society and new dangers for humanity. ChatGPT and Bard have only come online in the past four months, and they’ve already become massive services with millions of subscribers and beta testers.
While early chatbots are imperfect and prone to outlandish claims and incorrect citations, they are still powerful: they have demonstrated the ability to complete university-level coursework, pass law exams, and generate news articles.
The dearth of regulations has left corporations to police themselves. As we previously reported, the speed of the advancement prompted Microsoft President Brad Smith to issue a warning and a proposal for AI ethics on February 3, urging AI researchers to approach their work carefully and ethically as concerns mount and calls for federal regulation grow.
“We need to use this watershed year not just to launch new AI advances but to responsibly and effectively address both the promises and perils that lie ahead,” he says.
“We need to regulate this, we need laws. The idea that tech companies get to build whatever they want and release it into the world and society scrambles to adjust and make way for that thing is backwards,” Data & Society executive director Janet Haven tells Bloomberg.