The president of Microsoft is warning AI researchers to approach their work carefully and ethically as concerns mount and calls for federal regulation grow.
- Microsoft has been experimenting with applications for the recently launched ChatGPT AI chatbot. The company reportedly made a $10 billion deal with OpenAI that allows Microsoft to integrate OpenAI's AI systems into its existing products.
- The software is powerful enough to pass advanced university exams, generate essays on complex topics, and write movie scripts.
- In a Thursday blog post, Microsoft President Brad Smith called 2023 a watershed moment in the history of AI technologies but warned that the technology could be dangerous, with the potential to be used for exploitation, to influence world politics, and to cause harm.
- “We need to use this watershed year not just to launch new AI advances but to responsibly and effectively address both the promises and perils that lie ahead,” says Smith.
- Smith advocated three goals for consideration in AI research: developing AI responsibly, ensuring AI advances competitiveness and national security, and ensuring it serves society broadly.
Why It’s Important
AI is currently operating in a largely unregulated space. Companies like Microsoft and Google rely on internal processes for handling ethical questions, but formal government regulation has yet to materialize beyond initial discussions in the European Union, Axios reports.
With the proliferation of new AI technologies like ChatGPT and DALL-E, however, the field is rapidly gaining prominence and usefulness, fueling concerns about workers being replaced by AI and about deepfake technology.
While many advocates of AI are optimistic about the technology's potential to benefit society, concerns surrounding it are raising alarms. Representative Ted Lieu (D-CA) went so far as to advocate for a dedicated regulatory body for AI in a New York Times op-ed, writing: "We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future."
“We will expand our public policy efforts to support these goals. We are committed to forming new and deeper partnerships with civil society, academia, governments, and industry. Working together, we all need to gain a more complete understanding of the concerns that must be addressed and the solutions that are likely to be the most promising. Now is the time to partner on the rules of the road for AI,” says Smith.