Bill Would Penalize Non-Consensual Deepfake Content


The U.S. Senate has passed what I assume will be the first of many attempts to curb AI’s destructive tendencies. Notably missing – any mention of killer robots.

The Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Act would hold accountable those responsible for “the proliferation of nonconsensual, sexually-explicit ‘deepfake’ images and videos.”

And in case you haven’t been paying attention, these particular deepfake images and videos are rampant – a recent study found that 96% of deepfake videos were nonconsensual adult content, with victims including, most famously, Taylor Swift and one of the sponsors of Defiance, Alexandria Ocasio-Cortez.

And of course, the internet has been saturated with deepfake content, defined as “videos or images created through artificial intelligence that look completely realistic. They may superimpose an individual’s likeness onto real video footage depicting someone else, or they may consist of entirely original content where someone is represented doing or saying something they did not do or say.”

The latter could potentially become AI’s most dangerous application, with the possibility of lies and forgeries deciding major elections (more so than usual). Indeed, a deepfake video of Kamala Harris giving a speech she never made has been circulating on TikTok lately, and AI tech is getting more and more sophisticated. Eventually, only the most trained of eyes will be able to spot the telltale signs of phony political shenanigans.

That said, we already have civil legislation against non-consensual adult content, potentially making Defiance not just redundant but ineffective, since existing laws haven’t curbed the spread of this salacious material.

Defiance is currently under consideration in the House.