Microsoft wants Congress to outlaw AI-generated deepfake fraud

Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is urging policymakers to take urgent action to protect elections, guard seniors from fraud, and shield children from abuse.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

Microsoft wants a “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”

Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.

While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easy to create fake audio, images, and video — something we’re already seeing during the run up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week, in a post that appears to violate X’s own policies against synthetic and manipulated media.

Microsoft wants posts like Musk’s to be clearly labeled as a deepfake. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”
