President Trump straightens his tie, glares into the camera and takes a deep breath.
“We will strike back against Russia with our full military force,” he says slowly, puffing out his chest. “As of today, we are at war.”
Almost instantly, the video is shared on thousands of Twitter feeds, WhatsApp groups and Facebook pages, causing mass panic and confusion.
Within minutes, it is outed as a deepfake: an AI-generated clip created by a group of hackers who have also infiltrated America’s power networks to cause chaos in schools, hospitals and on roads. But it’s too late. By now, millions have heard the news that Trump is waging war following attacks on US critical infrastructure – and the global backlash has begun.
This may seem like an outlandish scenario, but it’s exactly what experts fear could happen if the technology behind deepfakes is used for nefarious purposes.
Open source software, one hour, and one Twitter user to make Trump say this. 👇 #deepfake https://t.co/rnsjkSuSEa
— Nina Schick (@NinaDSchick) September 4, 2020
“A video like that, even if it was fake, could go viral within seconds,” says Nina Schick, author of Deep Fakes and the Infocalypse.
“You can absolutely see how this can cause chaos… we’ve already seen in America how vigilantes are rioting, looting and killing each other. You can see how such a video can do an immense amount of damage. There’s no question about it. If Russia wanted to create a convincing deepfake video of Trump saying he’s at war, they could do it right now.”
Until recently, the manipulation of digital media to show “deepfakes” was mostly confined to academic research labs and to the ever-innovative world of online pornography, where they were used to meld celebrities’ faces onto porn actors without the consent of either.
There were also eye-catching stunts designed to demonstrate their potential for harm, such as Get Out director Jordan Peele’s memorable 2018 imitation of Barack Obama. Back then, the risk was only theoretical.
Now, however, deepfakes are loose – and already creating chaos, as well as mirth, across the world. While they have yet to start a global conflict, AI-generated videos, faces and voices have caused political scandal in Malaysia, swindled large sums of money from corporate executives and helped trigger an attempted military coup in Gabon.
“Technology has allowed for information operations to become far more potent,” says Schick. “Until now, the barrier to entry when it came to manipulation in film has been relatively high. AI has changed that.”
Deepfake detection firm Deep Trace Lab says the number of manipulated videos it has spotted in the wild doubled in the first six months of this year.
Only last month, Facebook announced that it had shut down a new attempt by Russia’s infamous Internet Research Agency (IRA) to meddle in US and UK politics via a radical news website called PeaceData.
Its tactics, targets and narratives were familiar, but there was a new twist: PeaceData’s “editors” appeared to be static deepfakes that used AI-generated photos.
“AI-generated faces are getting more common in disinformation operations, and I suspect they’ll keep on coming,” says Ben Nimmo, head of investigations at Graphika, who helped uncover the Russian network.
Deepfake pictures are even easier to create than videos; Telegraph readers can make their own at ThisPersonDoesNotExist.com. Yet they are still effective (and creepy) because, unlike stock photos, they have no prior existence, making them just as unique as any human face.
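For the curious, grabbing one of these faces takes only a few lines of Python. A minimal sketch, assuming the site still serves a freshly generated JPEG at its root address, as it has historically (the page layout may have changed since):

    import requests

    # ThisPersonDoesNotExist.com generates a new, non-existent face on each visit.
    # Assumption: the raw JPEG is served at the root URL; some hosts reject the
    # default client, so we send a browser-like User-Agent.
    resp = requests.get("https://thispersondoesnotexist.com",
                        headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()

    with open("not_a_real_person.jpg", "wb") as f:
        f.write(resp.content)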
Similar photos have been used by a fake LinkedIn profile that befriended Washington DC insiders, potentially as part of a foreign spying campaign, and by a network of fake Facebook accounts allegedly run by the Epoch Times, an online news company with links to the Falun Gong movement in China.
Meanwhile, deepfakes are prospering as commercial tools, with several firms hawking binders full of AI-generated faces that can add instant racial or gender diversity to corporate brochures and adverts.
Strangest of all, they have become a common joke format for Generation Z. Frivolous deepfakes have exploded on TikTok, letting video creators augment their impressions of Jim Carrey or Al Pacino’s performance in Scarface with eerily realistic face swaps.
“For as little as $20 (£15), you can use an online marketplace to get somebody to make any deepfake video for you, and we’re starting to see more and more YouTubers who are using software that’s freely available and open source to make their own manipulated videos,” says Schick.
Last month, Philip Tully, a data scientist at security company FireEye, generated a hoax Tom Hanks image that looked almost exactly like the real thing. All it took was a few hundred images of Hanks and less than £75 spent on online face generation software.
Experts describe such low-cost efforts as “cheap fakes”: media that has been altered without advanced AI, often by simply splicing footage or pasting one image onto another. “They can still be harmful,” says Victor Riparbelli, CEO of London-based Synthesia, one of the world’s most advanced deepfake companies.
His team is working with businesses, such as communications company WPP, to create corporate training videos for their global branches. The videos use deepfake technology to allow the presenter to speak in any language and address the viewer by name, and demand has boomed during lockdown.
Anyone can try the technology for themselves by typing a script for a virtual presenter to read. The results can be unnerving.
Riparbelli says his main competitors are major tech companies. TikTok’s parent company, ByteDance, for instance, has developed its own unreleased deepfake generator called Face Swap, traces of which still existed in TikTok’s code at the start of 2020. The likes of Snapchat have created similar, albeit more limited, features.
Start-ups, such as Ukraine’s RefaceAI, are quickly catching up. Its Reface app uses generative adversarial networks (GANs), which pit two neural networks against each other in a process that endlessly corrects and refines itself. The results are far slicker than those of similar apps such as FaceApp and Zao.
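The adversarial idea itself is simple enough to sketch. The toy Python example below (using the PyTorch library) pits a tiny generator against a tiny discriminator over made-up two-dimensional points rather than faces; it illustrates the self-correcting loop, not RefaceAI’s actual, far larger models:

    import torch
    import torch.nn as nn

    # Generator: turns random noise into fake samples. Discriminator: scores
    # samples as real (1) or fake (0). Each network's progress forces the
    # other to improve -- the correct-and-refine loop described above.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" data cluster
        fake = G(torch.randn(64, 8))            # the generator's current forgeries

        # Train the discriminator: push real samples towards 1, fakes towards 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator: make the discriminator score its fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()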
“It’s naive to think that such technologies by private companies won’t be used for malign purposes,” says Schick. “It can be used for good, such as in commercial applications, but it absolutely will be weaponised.”
Riparbelli says deepfakes will inevitably fall into the hands of criminals, but claims fully realistic deepfakes are still a long way off, and that gap may be one way to fight against their rise.
In 2018, a YouTuber trained an AI to create a composite of Trump’s face over Alec Baldwin’s Saturday Night Live impression of him
“There’s quite a lot of technical barriers to changing what someone says in a video. One is the voice; cloning it is still really, really difficult to do. If I change the speech in a video that’s already been recorded, the body language is going to be out of touch, the head movements are going to be out of touch, there’s going to be no background noise,” he says.
Several tools have been developed to pick up these quirks ahead of the 2020 presidential election. Microsoft, for instance, recently announced a system that analyses videos and photos and provides a score indicating the chance that they have been manipulated. Adobe has also developed a tool that allows creators to attach attribution data to content to prove it isn’t fake.
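Neither company has published its detection code, but the basic shape of such a scoring system is easy to sketch. Here is a minimal Python illustration, with an untrained stand-in playing the role of a pre-trained manipulation classifier (the real systems are vastly more sophisticated):

    import torch
    import torch.nn as nn

    # Hypothetical detector: in practice this would be a deep network trained
    # on thousands of real and manipulated clips; this untrained stand-in only
    # shows how a frame is reduced to a single manipulation score.
    detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1))

    frame = torch.rand(1, 3, 224, 224)             # one video frame, RGB, 224x224
    score = torch.sigmoid(detector(frame)).item()  # 0 = likely authentic, 1 = likely fake
    print(f"manipulation score: {score:.2f}")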
It’s not the realism of deepfakes that worries experts most, however, but the propensity of people to believe what they want, regardless of warning signs.
“Ultimately this isn’t actually a problem about technology…we know that misinformation has been around since time immemorial,” says Schick. “It’s really a human problem, it’s just that technology has amplified it.”
The technology may be flawed, but the age of deepfakes has already begun. It has the potential to swing elections, trigger wars and aid criminals, fuelling an overload of disinformation that is sowing chaos both online and offline.
As Schick puts it: “We are facing a danger of world-changing proportions…and we’re not ready.”