Chinese regulators have reportedly banned ChatGPT over concerns that the artificial intelligence (AI) chatbot could promote American propaganda.
- Regulators have reportedly ordered Chinese social-media platforms, including WeChat, to ban ChatGPT from their apps and block access from third-party apps.
- Though ChatGPT was already mostly inaccessible because it did not comply with China’s censorship laws, some citizens could gain access through third-party apps on WeChat, Forbes reports.
- In addition to blocking ChatGPT access, Chinese tech companies have been ordered to obtain approval from Chinese regulators before releasing any similar technology in China.
- Before the regulatory crackdown, the state-run China Daily released a video called “How the U.S. Uses AI To Spread Disinformation.” The video attempted to portray ChatGPT as a U.S. propaganda tool.
- In the video, ChatGPT responds to questions about Xinjiang by including information about reports of human rights abuses of Uyghur Muslims.
Why it’s news
After OpenAI’s new chatbot went viral, Chinese search engine Baidu shared that it, too, was working on an AI chatbot. The company has reportedly been developing the bot since 2019 and plans to finish testing in March, with a public release later that same month. That timeline may change with new regulations from Chinese authorities, though no details have emerged yet, Forbes reports.
ChatGPT has come as a surprise to Chinese tech developers, The New York Times reports. Experts had previously predicted that China was racing ahead in AI development, but it appears that Chinese tech companies are behind in developing technology similar to ChatGPT. The slow progress may be attributed to increased censorship and government control of tech companies, Forbes reports.
Backing up a bit
China isn’t alone in its concerns about ChatGPT, though other bans center on potential security leaks rather than propaganda concerns. JPMorgan Chase is limiting its staff’s access to ChatGPT, and Amazon and multiple U.S. universities have also restricted use of the AI chatbot.
The financial firm’s decision to block the AI wasn’t prompted by any particular event but appears to be a preemptive move against potential security leaks. The company says the limits are part of JPMorgan’s “normal controls around third-party software.” JPMorgan’s decision may have also been motivated by concerns that sensitive information would be shared with ChatGPT, the Telegraph reports.
Amazon has also warned its employees about including sensitive information in any conversations with the chatbot.