Microsoft researchers now claim that GPT-4, the model behind OpenAI's artificial intelligence (AI) chatbot ChatGPT, shows early signs of artificial general intelligence (AGI).
Key Details
- In a recently released paper, Microsoft claimed that GPT-4, the latest model underlying ChatGPT, shows the beginning signs of AGI.
- AGI is a machine’s ability to learn or understand at or above the level of human intelligence.
- The paper, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” contrasts with previous statements from OpenAI CEO Sam Altman, who has said GPT-4 is “still flawed, still limited,” Vice reports.
- A large part of the paper discusses the limitations of GPT-4, making it difficult to determine how close the language model really is to AGI.
- “GPT-4’s performance is strikingly close to human-level performance … we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system,” researchers wrote in the abstract.
- Yet, in the introduction of the paper, researchers walked back this claim, saying, “Our claim that GPT-4 represents progress towards AGI does not mean that it is perfect at what it does, or that it comes close to being able to do anything that a human can do.”
Why it’s news
If GPT-4 is the first step toward AGI, it could represent a significant leap forward in AI development. Unlike current AI models, which must be trained to perform specific tasks, an AGI system could approach an unfamiliar task and devise a solution on its own. In other words, it would think like a human.
Microsoft’s researchers used a 1994 definition of intelligence developed by a group of psychologists:
“The consensus group defined intelligence as a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. This definition implies that intelligence is not limited to a specific domain or task, but rather encompasses a broad range of cognitive skills and abilities.”
While studying GPT-4, researchers say they observed significant leaps in the AI’s reasoning, planning, and problem-solving capabilities. It can also synthesize more complex ideas than earlier models, which the researchers describe as a new development in computer science, Vice reports.
When GPT-4 was initially released, Altman warned users about the latest model’s limitations. “It is still flawed, still limited, and it still seems more impressive on first use than it does when you spend more time with it,” he said. He added that the AI would need more human feedback and adjustments before it could be considered reliable.
Building toward AGI has long been a goal for OpenAI, but Altman has emphasized that the current model is not AGI, even if the recently released paper claims it shows “sparks” of AGI.
Microsoft spokespersons have stated that the company is not interested in AGI development but rather in creating technology to assist human work, Vice reports.
While Microsoft researchers say that the beginning stages of AGI are present in GPT-4, they also point out significant limitations in the language model. Researchers noted issues with long-term memory, personalization, confidence calibration, transparency, consistency, conceptual leaps, irrationality, and sensitivity to inputs. In practice, this means the AI has trouble knowing when it is guessing and when it is stating facts. It also has difficulty producing personalized responses and is plagued by the biases, errors, and prejudices present in its training data.