Sketch produced a sleek, modern animation about the recent phenomenon of ‘deep fakes’. With fake news already spreading rapidly across social media, the proliferation of deep fakes is potentially hugely destructive. We worked with the leading academic Charlotte Stanton, who is the inaugural director of the Silicon Valley office of the Carnegie Endowment for International Peace as well as a fellow in Carnegie’s Technology and International Affairs Program. She has written extensively on deep fakes and their potential for good and bad. You can follow Charlotte Stanton on Twitter.
This animation’s aim is to help spread awareness about this new technology and its potential implications. The animation was created for the US think tank, the Carnegie Endowment for International Peace.
Deep fakes are hyper-realistic videos of someone appearing to do and say things they didn’t do or say. They are made using artificial intelligence, or AI. The term ‘deep fake’ is actually a mash-up of ‘fake’ and ‘deep learning’, which is a type of AI algorithm. Until recently, only special effects experts could make realistic-looking and -sounding fake videos, but AI algorithms make it possible for non-experts to create fakes that many people would deem real.
The algorithm uses footage of two people: an impersonator, and a person targeted for impersonation. It then generates a new video that shows the targeted person moving and talking in the same way as the impersonator. The result is a realistic-looking and -sounding forgery.
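For readers curious about the mechanics, the process described above can be sketched in code. This is a minimal, illustrative sketch only: it assumes the classic face-swap architecture (a shared encoder with one decoder per person), uses untrained random weights, and omits the training loop, face detection, and video handling a real system would need. All names and dimensions here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # a flattened 64x64 face crop (illustrative size)
LATENT_DIM = 128     # compact code capturing pose and expression

# Shared encoder: maps any face to a compact latent code.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

# Person-specific decoders: each learns to reconstruct faces in one
# person's likeness. (Weights are random placeholders here; a real
# system learns them from thousands of video frames of each person.)
W_dec_impersonator = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_target = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # Compress a face into its latent representation.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Reconstruct a face image from a latent code.
    return W_dec @ latent

# A frame of the impersonator speaking...
impersonator_frame = rng.standard_normal(FACE_DIM)

# ...is encoded, then decoded with the TARGET's decoder: the forged
# frame shows the targeted person making the impersonator's expression.
forged_frame = decode(encode(impersonator_frame), W_dec_target)

print(forged_frame.shape)  # one forged face frame, same size as the input
```

The swap happens in the last step: because both people share one encoder, a latent code from the impersonator can be fed through the target’s decoder, producing the target’s face with the impersonator’s movements. Repeat per frame and you have a forged video.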
Because deep fakes are so realistic, they can be used to humiliate and blackmail people and attack organisations. They could also incite political violence, sabotage elections, and unsettle diplomatic relations.
A proliferation of deep fakes could even cast doubt on videos that are real by making it easier for someone caught behaving badly in a real video to claim that the video was a deep fake.
These dangers are made worse by how quickly and easily social media platforms can spread unverified information.
If deep fakes are so dangerous, why aren’t they illegal?
One reason is that the technology to create them is developing quickly and the law hasn’t caught up. Another is that the technology has positive applications. For example, it can help people with ALS make digital copies of their voice before they lose the ability to speak.
So then what can countries do to prevent deep fakes from sparking conflict?
First, countries must decide which uses of deep fakes are acceptable, and which are not. This would help social media companies police their platforms for harmful content.
Another thing countries could do is pass laws to allow internet platforms to share information about deep fakes.
This would make it easier for platforms to alert each other to a malicious deep fake before it spreads, and to warn news agencies before it infiltrates the mainstream news cycle.
At a minimum, governments must fund the development of media forensic techniques that make it easier to detect deep fakes. So far, deep fakes have not been deployed to incite widespread violence or disrupt an election.
But the technology to do so is available. That means there is a shrinking window of opportunity for countries to safeguard against the threats from deep fakes – before they spark conflict.