The advent of generative artificial intelligence (AI) has many worried about job displacement, but the technology carries other, more concerning side effects.
Key Details
- Since ChatGPT shook the tech world with its leap forward in AI capability, it has gained more than 100 million users.
- Users have taken advantage of the chatbot’s human-like responses to generate emails and resumes, supplement brainstorming, and write essays and code.
- While this new tool helps users get more work done in less time, ChatGPT has some serious drawbacks.
- It is nearly impossible to program an AI without unintentionally introducing some level of bias, and ChatGPT is no exception: the chatbot has already shown how biased it can be.
- However, one of the most concerning issues around AI is the general public’s lack of understanding surrounding its capabilities.
Why it’s news
AI has been used across multiple industries for decades, but OpenAI’s ChatGPT put the technology, and its seemingly endless possibilities, into the public’s hands. However, AI’s rapid development has raised concerns that new systems are being built and released without sufficient caution.
In a recent interview with Fox News, Tesla CEO Elon Musk, who formerly contributed to OpenAI, expressed his concerns, saying, “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilization destruction.”
The “Godfather of AI,” Geoffrey Hinton, left Google last week so that he could speak more freely about the “dangers” of AI.
“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton tweeted. “Google has acted very responsibly.”
In an interview with The New York Times, Hinton says, “The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
While AI, particularly generative AI, may sound like science fiction, the reality is less exciting. Tripp Parker, a lead product manager at the fintech company Marqeta, explains that an AI simply predicts the most likely outcome for a given data set.
“It’s not this spooky, ‘it’s going to come alive’ thing,” Parker tells Leaders Media. The danger, he explains, is that analysts may not always know for sure what data a given AI program is looking at.
When training an AI, programmers provide specific data sets for the model to analyze and learn from. It then uses the patterns it finds in that data to make predictions about new inputs, as in the sketch below.
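To make that concrete, here is a minimal sketch of the train-then-predict loop Parker describes, written in Python with the scikit-learn library. The feature values, labels, and test input are invented for illustration, not drawn from any real system.

```python
# A minimal train-then-predict loop. All data here is made up for
# illustration; real models train on far larger data sets.
from sklearn.tree import DecisionTreeClassifier

# Each row is one training example: [feature_1, feature_2].
training_data = [
    [5.0, 1.2],
    [4.8, 1.0],
    [6.5, 2.1],
    [6.9, 2.3],
]
labels = [0, 0, 1, 1]  # the "answer" for each example

# The model analyzes the data set and learns patterns in it.
model = DecisionTreeClassifier()
model.fit(training_data, labels)

# It then uses those patterns to predict a label for new, unseen data.
print(model.predict([[6.7, 2.2]]))  # prints [1]
```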
“It’s not always totally clear what the AI is looking at,” Parker explains. Unlike a human, an AI cannot explain its reasoning when making a decision.
For example, Parker cites a well-known University of Washington study in which an AI was trained to identify whether a photo showed a dog or a wolf. It performed well at first, but over time it began making obvious mistakes. Researchers eventually realized that many of the wolf photos used for training had snow in the background: rather than learning the difference between a wolf and a dog, the model had learned to look for snow.
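That failure mode is easy to reproduce in miniature. Below is a toy sketch, not the study’s actual code, in which a “snow in background” feature happens to line up perfectly with the wolf label in the training data; the feature names and numbers are invented, and a real system would work from raw pixels rather than hand-built features.

```python
# Toy reproduction of the wolf/snow shortcut. Features and data are
# invented; the real study worked from photos, not feature vectors.
from sklearn.linear_model import LogisticRegression

# Features per image: [ear_shape, snout_length, snow_in_background].
# In this training set, every wolf photo happens to contain snow.
X_train = [
    [0.6, 0.6, 1.0],  # wolf, snowy background
    [0.7, 0.5, 1.0],  # wolf, snowy background
    [0.4, 0.5, 0.0],  # dog, no snow
    [0.5, 0.4, 0.0],  # dog, no snow
]
y_train = [1, 1, 0, 0]  # 1 = wolf, 0 = dog

model = LogisticRegression().fit(X_train, y_train)

# A dog photographed in the snow: dog-like features, snowy background.
dog_in_snow = [[0.45, 0.45, 1.0]]
print(model.predict(dog_in_snow))  # prints [1]: called a wolf because of the snow
```

The snow feature separates the training examples far more cleanly than the animal features do, so the model leans on it, and a dog in the snow gets labeled a wolf.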
Errors like these show how bias can be introduced into a model, another concern for many. An AI is trained on a data set so large that researchers sometimes do not know every piece of information it contains, and since the AI cannot explain its reasoning, developers cannot tell whether it reached a conclusion through sound logic or through a shortcut like the snow.
“My concern isn’t spooky consciousness,” Parker says, referencing a term by science-fiction writer Arthur C. Clarke, “but creating an AI that makes high-value predictions but not understanding where it will go awry.”
In addition to the unintended bias baked into AI models, Parker is concerned about the public’s lack of understanding of the technology.
ChatGPT, he explains, is not the only AI model; millions of people already use AI technology such as predictive text. Each model is built for a specific purpose and performs some tasks far better than others.
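For a sense of how simple such a model can be, here is a rough sketch of the idea behind predictive text: suggest whichever word most often followed the word just typed. The training sentence is invented, and real phone keyboards use far more sophisticated models.

```python
# A bare-bones predictive-text model: suggest the word that most often
# followed the previous word in the training text. Sample text is invented.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat chased the dog"

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def suggest(word):
    """Return the word seen most often after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("the"))  # prints "cat" ("the cat" appears twice in the sample)
```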
Parker gives the example of a Roomba robot vacuum, which uses AI to map and clean a user’s home. That same Roomba AI could not take on the tasks of a Tesla self-driving model.
While AI may not be the spooky, sentient technology of science fiction, its potential for misuse and error is significant. As it is integrated into various industries, it is crucial to approach its development with caution and a deep understanding of its capabilities and limitations.