Many large companies are drafting policies on whether employees can use the AI tool ChatGPT in the workplace.
Key Details
- Over 40% of professionals have reported using ChatGPT at work, according to a Fishbowl poll.
- Almost half of human resources (HR) leaders are formulating guidelines on whether employees can use ChatGPT, while about one-third of HR leaders say they do not plan to issue any policies, according to a Gartner poll.
- Hedge fund Citadel is embracing the chatbot in the workplace, while Wall Street firms Bank of America Corp., Deutsche Bank AG, and Wells Fargo & Co. have all banned the service completely.
Why it’s news
In just a few months, ChatGPT has become a viral sensation, attracting more than 100 million users and becoming a staple in the office, but that could change as many businesses draft policies on appropriate workplace use.
ChatGPT can generate answers, craft emails, write essays, and has even been shown to pass demanding tests such as a medical licensing exam. It has become such a powerful force in the workplace that companies are formalizing policies on how employees can use the chatbot at work.
A recent study found that 43% of professionals have used AI tools, including ChatGPT, for work-related tasks. Nearly 70% of those professionals are doing so without their boss's knowledge, underscoring the apparent need for workplace guidance.
The policies could vary across businesses depending on each company's approach to AI. Some firms, including Bank of America, Deutsche Bank, and Wells Fargo, have banned the tool from work entirely, while hedge fund Citadel is welcoming it.
Citadel is negotiating an enterprise-wide license to use ChatGPT, as founder Ken Griffin says the technology has a powerful impact and can be helpful to employees.
Goldman Sachs was one of the first companies to ban the service, but it recently changed course, allowing employees to use the tool to write and test software code. Around 40% of Goldman's code is now written by the AI tool as software developers use it to expedite their work.
“This branch of technology has a real impact on our business,” says Griffin in an interview. “Everything from helping our developers write better code to translating software between languages to analyzing various types of information that we analyze in the ordinary course of our business.”
Other businesses are looking more closely at the tool and deciding whether action needs to be taken, while around one-third of HR leaders are not planning to issue any policies on employees’ use of ChatGPT, according to the Gartner poll.
Some businesses have not yet put AI policies into action, instead resorting to warning employees about the concerns associated with ChatGPT, including accuracy, company data security, and privacy.
AI Risks
ChatGPT was launched in November 2022 and has quickly garnered more than 100 million users, but the service is still new and poses risks.
Sam Altman, CEO of OpenAI, the company behind ChatGPT, has acknowledged the risks and potentially dangerous outcomes of the technology.
Altman says that the negative consequences of artificial intelligence (AI) will be significant if regulators and society do not work together to ensure the technology is used responsibly.
He says that ChatGPT is a powerful tool that sometimes gets answers wrong and can help spread false information. It can also pass exams, write essays, and perform other tasks that could be used for wrongdoing.
He encourages regulators to work directly with chatbot developers to create regulations for the technology, noting his own close work with the federal government.