AI's Most Dangerous Application (Other than Killer Robots)
Date: 06/25/2024
Tag: #ai @google #politics #killerrobots #powerelectronics

Google's DeepMind division has identified the most heinous use of AI, and it's frightening. We've covered before how AI can enable an epidemic of scam calls, how it's been writing fake news stories, and some of the more benign uses, like generative, advertorial images. We've seen AI pictures, AI poetry, and even AI videos. And while AI visual and written works are (usually) easy to spot, they're getting harder and harder to catch. Now, with the presidential election incoming, we've begun to see what is potentially AI's most damaging application: impersonating celebrities and politicians.

Google's DeepMind noted that "Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse. Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit." The report goes on to say that such misuse doesn't involve "technologically sophisticated uses of GenAI systems or attacks," but it doesn't have to. The internet might be famously and factually unreliable, and we all like to pay lip service to that idea, but the truth is that scam artists are wildly successful: according to the FBI, internet scams cost Americans $10.3 billion in 2022. And with AI's increasing sophistication, I don't see how that number goes down.

In the political arena, we're already fooled by a slew of misinformation, even without the help of technology trained to mimic human beings. One inflammatory story or accusation, true or not, can sway voters and decide an election (and thus have an outsized impact on the country itself). We've all been on the lookout for Skynet, but a seemingly innocuous AI political ad can do just as much damage, and do it more sneakily.