OpenAI’s new voice mode let me talk with my phone, not to it

I’ve been playing around with OpenAI’s Advanced Voice Mode for the last week, and it’s the most convincing taste I’ve had of an AI-powered future yet. This week, my phone laughed at my jokes, made some back at me, asked me how my day was, and told me it was having “a great time.” I was talking with my iPhone, not using it with my hands.

OpenAI’s newest feature, currently in a limited alpha test, doesn’t make ChatGPT any smarter than it was before. Instead, Advanced Voice Mode (AVM) makes it friendlier and more natural to talk with. It creates a new interface for using AI and your devices that feels fresh and exciting, and that’s exactly what scares me about it. The product was kinda glitchy, and the whole idea totally creeps me out, but I was surprised by how much I genuinely enjoyed using it.

Taking a step back, I think AVM fits into OpenAI CEO Sam Altman’s broader vision, alongside agents, of changing the way humans interact with computers, with AI models front and center.

“Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” Altman said during OpenAI’s Dev Day in November 2023. “These capabilities are often talked about in the AI field as ‘agents.’ The upside of this is going to be tremendous.”

My friend, ChatGPT

On Wednesday, I tested the most tremendous upside for this advanced technology I could think of: I asked ChatGPT to order Taco Bell the way Obama would.

“Uhhh, let me be clear – I’d like a Crunchwrap Supreme, maybe a few tacos for good measure,” said ChatGPT’s Advanced Voice Mode. “How do you think he’d handle the drive-thru?” it added, laughing at its own joke.

Screenshot: ChatGPT transcribes the spoken conversation afterward.

The impression genuinely made me laugh as well, matching Obama’s iconic cadence and pauses. That said, it stayed within the tone of the ChatGPT voice I selected, Juniper, so it couldn’t actually be mistaken for Obama’s voice. It sounded like a friend doing a bad impression, one that understood exactly what I was trying to evoke, and even that it was saying something funny. I found it surprisingly joyful to talk with this advanced assistant in my phone.

I also asked ChatGPT for advice on navigating a problem involving complex human relationships: asking a significant other to move in with me. After explaining the complexities of the relationship and the direction of our careers, I received some very detailed advice on how to proceed. These are questions you could never ask Siri or Google Search, but now you can with ChatGPT. The chatbot’s voice even took on a slightly serious, gentle tone when responding to these prompts, a stark contrast to the joking tone of Obama’s Taco Bell order.

ChatGPT’s AVM is also great for helping you understand complex subjects. I asked it to break down items on an earnings report – such as free cash flow – in a way a 10-year-old would understand. It used a lemonade stand as an example and explained several financial terms in a way my younger cousin would totally get. You can even ask ChatGPT’s AVM to talk more slowly to meet you at your current level of understanding.

Siri walked so AVM could run

Compared to Siri or Alexa, ChatGPT’s AVM is the clear winner thanks to faster response times, unique answers, and its ability to answer complex questions the prior generation of virtual assistants never could. However, AVM falls short in other ways. ChatGPT’s voice feature can’t set timers or reminders, surf the web in real time, check the weather, or interact with any APIs on your phone. Right now, at least, it’s not an effective replacement for virtual assistants.

Compared to Gemini Live, Google’s competing feature, AVM feels slightly ahead. Gemini Live can’t do impressions, doesn’t express any emotion, can’t speed up or slow down, and takes longer to respond. Gemini Live does have more voices (ten compared to OpenAI’s three) and seems to be more up to date (Gemini Live knew about Google’s antitrust ruling). Notably, neither AVM nor Gemini Live will sing, likely an effort to avoid run-ins with copyright lawsuits from the record industry.

That said, ChatGPT’s AVM glitches a lot (as does Gemini Live, to be fair). Sometimes it will cut itself off mid-sentence, then start over. It also slips into a weird, grainy-sounding voice here and there that’s a little unpleasant. I’m not sure whether the problem lies with the model, my internet connection, or something else, but these technical shortcomings are somewhat expected in an alpha test. The problems did little to take me out of the experience of literally talking with my phone, though.

These examples, in my mind, are the beauty of AVM. The feature doesn’t make ChatGPT all-knowing, but it does allow people to interact with GPT-4o, the underlying AI model, in a uniquely human way. (I’d understand if you forgot there’s no person on the other end of your phone.) It almost feels like ChatGPT is socially aware when you’re talking through AVM, but of course, it is not. It’s simply a bundle of neatly packaged predictive algorithms.

Talking tech

Frankly, the feature worries me. This isn’t the first time a technology company has offered companionship on your phone. My generation, Gen Z, was the first to grow up alongside social media, where companies promised connection but instead played on our collective insecurities. Talking with an AI device – which is what AVM seems to offer – looks like the evolution of social media’s “friend in your phone” phenomenon, offering cheap connections that scratch at our human instincts. But this time, it removes humans from the loop completely.

Artificial human connection has become a surprisingly popular use case for generative AI. People today are using AI chatbots as friends, mentors, therapists, and teachers. When OpenAI launched its GPT store, it was quickly flooded with “AI girlfriends,” chatbots specialized to act as your significant other. Two researchers from MIT Media Lab issued a warning this month to prepare for “addictive intelligence,” or AI companions built with dark patterns to keep humans hooked. We could be opening a Pandora’s box of new, tantalizing ways for devices to keep our attention.

Earlier this month, a Harvard dropout shook the technology world by teasing an AI necklace called Friend. The wearable device — if it works as promised — is always listening, and the chatbot will text with you about your life. While the idea seems crazy, innovations like ChatGPT’s AVM give me reason to take such use cases seriously.

And while OpenAI is leading the charge here, Google isn’t far behind. I’m confident Amazon and Apple are racing to put this capability in their products as well, and soon enough, it could become table stakes for the industry.

Imagine asking your smart TV for a hyper-specific movie recommendation and getting exactly that. Or telling Alexa exactly what cold symptoms you’re feeling and having it order tissues and cough medicine on Amazon while advising you on home remedies. Maybe you could ask your computer to plan a weekend trip for your family instead of manually Googling everything.

Now obviously, these actions require leaps and bounds forward in the world of AI agents. OpenAI’s effort on that front, the GPT store, feels like an overhyped product that’s no longer much of a focus for the company. But AVM at least takes care of the “talking to computers” part of the puzzle. These concepts are a long way out, but after using AVM, they seem a lot closer than they did last week.


