Oprah just had an AI special with Sam Altman and Bill Gates — here are the highlights

Late Thursday evening, Oprah Winfrey aired a special on AI, appropriately titled “AI and the Future of Us.” Guests included OpenAI CEO Sam Altman, Microsoft co-founder Bill Gates, tech influencer Marques Brownlee, and current FBI director Christopher Wray.

The dominant tone was one of skepticism — and wariness.

Oprah noted in prepared remarks that the AI genie is out of the bottle, for better or worse, and that humanity will have to learn to live with the consequences.

“AI is still beyond our control and to a great extent…our understanding,” she said. “But it is here, and we’re going to be living with technology that can be our ally as well as our rival … We are this planet’s most adaptable creatures. We will adapt again. But keep your eyes on what’s real. The stakes could not be higher.”

Sam Altman overpromises

Altman, Oprah’s first interview of the night, made the questionable case that today’s AI is learning concepts within the data it’s trained on.

“We are showing the system a thousand words in a sequence and asking it to predict what comes next,” he told Oprah. “The system learns to predict, and then in there, it learns the underlying concepts.”

Many experts would disagree.

AI systems like ChatGPT and o1, which OpenAI introduced on Thursday, do indeed predict the likeliest next words in a sentence. But they’re simply statistical machines — they learn data patterns. They don’t have intentionality; they’re only making informed guesses.
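To make that distinction concrete, here is a deliberately tiny, hypothetical sketch (Python, for illustration only) of next-word prediction built from raw word counts. It is not how ChatGPT or o1 are implemented (they rely on large neural networks trained on vast datasets, not lookup tables), but it shows the basic idea: the system predicts the most likely continuation from patterns in its training text, with no concepts or intentions required.

```python
from collections import Counter, defaultdict

# Illustrative toy only: a bigram "model" that predicts the next word purely
# from how often words followed each other in its training text. The corpus
# and the predict_next helper are invented for this example.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased a mouse"
)

# Count how often each word is followed by each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    followers = follow_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # 'cat': follows "the" twice, more than any other word
print(predict_next("sat"))  # 'on'
```

Scaled up by many orders of magnitude, and with the lookup table swapped for a neural network, that same predict-the-next-word objective is what Altman is describing; whether that process amounts to learning “underlying concepts” is exactly what experts dispute.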

While Altman may have overstated the capabilities of today’s AI systems, he underlined the importance of figuring out how to safety-test those same systems.

“One of the first things we need to do — and this is now happening — is to get the government to start figuring out how to do safety testing of these systems, like we do for aircraft or new medicines,” he said. “I personally probably have a conversation with someone in the government every few days.”

Altman’s push for regulation may be self-interested. OpenAI has opposed the California AI safety bill known as SB 1047, saying that it’ll “stifle innovation.” Former OpenAI employees and AI experts like Geoffrey Hinton, however, have come out in support of the bill, arguing that it’d impose needed safeguards on AI development.

Oprah also prodded Altman about his role as OpenAI’s ringleader. She asked why people should trust him and he largely dodged the question, saying his company is trying to build trust over time.

Previously, Altman said very directly that people should not trust him or any one person to make sure AI benefits the world.

The OpenAI CEO later said it was strange to hear Oprah ask if he was “the most powerful and dangerous man in the world,” as a news headline suggested. He disagreed, but said he felt a responsibility to nudge AI in a positive direction for humanity.

Oprah on deepfakes

As was bound to happen in a special about AI, the subject of deepfakes came up.

To demonstrate how convincing synthetic media is becoming, Brownlee compared sample footage from Sora, OpenAI’s AI-powered video generator, to AI-generated footage from a months-old AI system. The Sora sample was miles ahead — illustrating the field’s rapid progress.

“Now, you can still kind of look at pieces of this and tell something’s not quite right,” Brownlee said of the Sora footage. Oprah said it looked real to her.

The deepfakes showcase served as a segue to an interview with Wray, who recounted the moment when he first became familiar with AI deepfake tech.

“I was in a conference room, and a bunch of [FBI] folks got together to show me how AI-enhanced deepfakes can be created,” Wray said. “And they had created a video of me saying things I had never said before and would never say.”

Wray talked about the increasing prevalence of AI-aided sextortion. According to cybersecurity company ESET, there was a 178% increase in sextortion cases between 2022 and 2023, driven in part by AI tech.

“Somebody posing as a peer targets a teenager,” Wray said, “then uses [AI-generated] compromising pictures to convince the kid to send real pictures in return. In fact, it’s some guy behind a keyboard in Nigeria, and once they have the images, they threaten to blackmail the kid and say, if you don’t pay up, we’re going to share these images that will ruin your life.”

Wray also touched on disinformation around the upcoming U.S. presidential election. While asserting that it “wasn’t time for panic,” he stressed that it’s incumbent on “everyone in America” to “bring an intensified sense of focus and caution” to the use of AI and keep in mind AI “can be used by bad guys against all of us.”

“We’re finding all too often that something on social media that looks like Bill from Topeka or Mary from Dayton is actually, you know, some Russian or Chinese intelligence officer on the outskirts of Beijing or Moscow,” said Wray.

Indeed, a Statista poll found that more than a third of U.S. respondents saw misleading information — or what they suspected to be misinformation — about key topics toward the end of 2023. This year, misleading AI-generated images of Vice President Kamala Harris and former President Donald Trump have garnered millions of views on social networks including X.

Bill Gates on AI disruption

For a techno-optimistic change of pace, Oprah interviewed Microsoft co-founder Bill Gates, who expressed a hope that AI will supercharge the fields of education and medicine.

“AI is like a third person sitting in [a medical appointment], doing a transcript, suggesting a prescription,” Gates said. “And so instead of the doctor facing a computer screen, they’re engaging with you, and the software is making sure there’s a really good transcript.”

Gates ignored the potential for bias introduced by flawed AI training data, however.

One recent study demonstrated that speech recognition systems from leading tech companies were twice as likely to incorrectly transcribe audio from Black speakers as opposed to white speakers. Other research has shown that AI systems reinforce long-held, untrue beliefs that there are biological differences between Black and white people — untruths that lead clinicians to misdiagnose health problems.

In the classroom, Gates said, AI can be “always available” and “understand how to motivate you … whatever your level of knowledge is.”

That’s not exactly how many classrooms see it.

Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not all are convinced of GenAI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids report having seen people their age use GenAI in a negative way — for example, creating believable false information or images used to upset someone.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails on data protection and user privacy.




