The Biden administration announced a number of executive actions around artificial intelligence (AI) ahead of a meeting with top tech CEOs Thursday.
- The administration will grant $140 million in funding to seven new National AI Research Institutes.
- A hacking event in August—with participation by Google, Anthropic, Microsoft, NVIDIA, OpenAI, and Stability—will explore how well existing generative AI systems meet the administration’s AI Bill of Rights blueprint.
- In the coming months, the Office of Management and Budget will release draft policy guidance on the use of AI systems.
Why it’s important
Because AI technology is developing rapidly, governments are scrambling to keep pace and understand the issues emerging as it spreads through businesses, governments, and other organizations.
AI is a rapidly evolving technology that is changing the way we live and work. AI systems can perform tasks that would normally require human intelligence, such as language translation, image recognition, and decision-making. These systems are powered by algorithms that can learn from data and improve over time.
While AI has the potential to bring about many benefits, it also raises important ethical and social concerns. One of the biggest concerns is the potential for bias and discrimination in AI systems. AI systems are only as unbiased as the data they are trained on, and if the data reflects existing social inequalities, the AI system may perpetuate those inequalities.
Another concern is the potential for AI to replace human workers. As AI systems become more advanced, they may be able to perform tasks that were previously done by humans. This could lead to job displacement and economic inequality.
To address these concerns, the Biden administration has taken several executive actions to regulate AI. These actions are aimed at promoting transparency, accountability, and fairness in AI systems, as well as ensuring that AI is used to benefit society as a whole.
Of course, many worry that AI regulation could go too far, or could be shaped in a way that favors a particular political party or point of view.
Through the actions above, the administration is funding AI research to the tune of $140 million and partnering with the private sector to test the power of AI in many different settings, working with companies such as Google, Microsoft, NVIDIA, and OpenAI, maker of the popular AI application ChatGPT.
Meanwhile, Vice President Kamala Harris is leading a team of White House officials in the meeting with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI.
“AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks,” a fact sheet from the White House reads. “Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”