Everyone loves those videos of stuff that’s obviously fake: dogs talking like humans or Nick Offerman playing every role in “Full House.” But the same technology that enables these silly videos also has a dark side: deepfakes. Learn more about what “deepfakes” are, why people make them, and why they could pose a big threat to public trust:
What are deepfakes?
A deepfake is a video created using artificial intelligence that shows real people doing and saying things they never did. The first deepfakes appeared in pornography, where celebrities' faces were spliced into scenes, but they are now being used for entertainment, for satire, and as political and propaganda weapons. Director and comedian Jordan Peele teamed up with Buzzfeed and Barack Obama to create a deepfake video that serves as a warning of what manipulated video can do.
How are deepfakes made?
While you and your kids can create edited videos using widely available apps like TikTok, Likee, and Funimate, you can't use them to make deepfakes. Two apps that can create deepfakes, FakeApp and DeepFaceLab, are both available for free download, but fabricating a convincing deepfake takes a significant amount of effort and time, even for tech-savvy computer hobbyists. There's little doubt that as the technology improves, deepfake software will become more accessible. In the meantime, deepfake creators with technical expertise and malicious intent are committed to their craft, and that could turn into a big problem.
Why do people make them?
Deepfakes are designed to mislead people and spread false information. While manipulated footage can be used for entertainment and satire on TV and social media (where it's usually labeled as such), deepfakes are typically created by people with an ax to grind, an agenda to promote, or an urge to troll. They're slowly becoming more common, and they're maddeningly hard to spot, posing problems for government, the tech industry and families who are finding it tougher to trust what they see.
Do families need to worry?
Manipulating images to portray a point of view and persuade viewers is nothing new. But deepfakes aren’t like airbrushed models in magazines or glow filters in Snapchat. When their targets are elected officials, actual events from history, or other public information, they have the potential to erode people’s trust. For these reasons, deepfakes have caught the attention of politicians. In July 2019, U.S. House of Representatives Intelligence Committee Chairman Adam Schiff wrote letters to the CEOs of Facebook, Twitter and Google asking about their companies’ formal policies on deepfakes and development of technologies to detect them.
Since the problem of deepfakes is only going to grow, it's time to talk to your kids not only about how to recognize signs that a video has been manipulated, but also about the "why" behind the producer's intent.
How can you spot a deepfake?
Since humans are easily deceived, it may be up to tech companies to help us recognize deepfakes and flag potential ones, and many are developing sophisticated algorithms and AI for this purpose. For instance, Adobe, the software company behind Photoshop, partnered with researchers at the University of California, Berkeley to train AI to recognize facial manipulation. This tool could eventually help consumers detect deepfakes. In the meantime, the following characteristics might help you and your kids recognize one:
- Face discolorations
- Lighting that isn’t quite right
- Badly synced sound and video
- Blurriness where the face meets the neck and hair
On YouTube, deepfake movie and celebrity mash-ups are popular. In one instance, Sylvester Stallone becomes Arnold Schwarzenegger in "Terminator 2." Or how about Keanu Reeves as Forrest Gump?
On Instagram, an AI-generated character named Lil Miquela has 1.5 million followers and interacts with other users. Lil Miquela isn't a deepfake video (it's an advertising creation), but it does demonstrate that some consumers accept imagined digital representations.