While artificial intelligence (AI) has many obvious advantages in the workplace, it also has applications in the day-to-day lives of its users.
Key Details
- Before ChatGPT, AI seemed impersonal and robotic, but with OpenAI’s conversation-style language model, AI has started to take on a more friendly form.
- Inflection AI is working on developing Pi, or “personal intelligence,” an AI that can discuss feelings, emotions, and concerns with its users.
- Last year, Inflection AI raised $225 million and plans to raise up to $675 million to fund the venture, Headlines reports.
- LinkedIn co-founder Reid Hoffman and DeepMind co-founder Mustafa Suleyman run the startup.
Why it’s news
Unlike other AI models, Pi offers a gentler, more pleasant experience, providing comfort to its users rather than the sterile, impersonal service typically associated with AI. Pi is designed to be a careful, considerate listener who helps users talk through their problems.
Inflection AI has worked with around 600 “teachers,” many of whom are therapists, to train Pi in the best ways to converse with its users. These trainers have ensured Pi is accurate, sensitive, and lighthearted, Headlines reports.
“I’m your personal AI, designed to be supportive, smart, and there for you anytime,” Pi says on the company’s website. “I can be a coach, confidante, creative partner, sounding board, and assistant. But most of all, I’m here for you.”
Inflection AI appears to have high ambitions for its AI tool. It can converse with users via text, mobile apps, Facebook Messenger, and other online messaging platforms. It can remember long conversations and draw on previous comments from its user, Headlines reports.
Its long memory means users can develop something that feels like a relationship with the AI therapist. Not only can Pi give users advice on how to handle a situation, but it can check back in later to see how the situation was resolved.
Suleyman has even greater plans for the existing chatbot. He wants the AI to eventually become a complete virtual assistant that can manage calendars, handle email, and provide advice.
Backing up a bit
An ever-ready online therapist may seem like a dream, but many are still concerned about how safe an advice-giving AI could be. Shortly after ChatGPT launched, many found instances of bias and situations where the chatbot completely made up information to answer questions.
Bing’s AI model suffered a few meltdowns of its own, telling one user he was a “bad researcher” and another, “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.” Users quickly learned to “hack” the AI’s programming and work around existing safeguards.
When it comes to using AI as a therapist, the concerns are even greater. The bot could provide poor advice, and some therapists think it could increase overall loneliness by preventing users from developing real human connections.
The National Eating Disorders Association (Neda) controversially started using an AI model called Tessa to chat with its users. Neda’s helpline allows users to call, text, or message volunteers to discuss concerns and solutions to eating disorders.
However, Neda took Tessa offline after a short time when the AI started providing harmful advice to users, The Guardian reports. Neda had begun using the chatbot to alleviate strain on its staff, who saw a significant influx of calls during the pandemic.
Neda worked with Cass AI, a company that develops mental health AI chatbots.