What is Deepfake? Examples You Need to Know
By Tibor Moes / January 2023
The modern digital world has changed the way we share and process information. Short videos spread quickly and influence public opinion. People who once checked newspapers now flock to the internet.
But what if you’re watching a video in which a person’s face or voice has been altered? Deepfakes are manipulated media, such as video and audio, that challenge the notion that seeing is believing.
Summary: Deepfakes are artificial content made by digitally altering existing images, videos, text, or audio. For instance, a fake video of the president making radical statements. While some are harmless and produce entertaining GIFs or memes, others are more malicious and can create fake news. In recent years, many convincing examples have demonstrated that deepfake detection is becoming increasingly difficult.
Tip: Deepfakes are the latest example of how dangerous the digital world can be. Protect yourself by installing one of the best antivirus software and an excellent VPN service.
What is Deepfake?
The word “deepfake” refers to an incredibly realistic but digitally altered piece of content created by manipulating existing audio, video, or text media. These pieces of media use AI (artificial intelligence) tools to seamlessly swap a person’s face or voice with another’s. They usually create a playful spoof; however, malicious media can spread misinformation.
Although altered video content has been around for decades, deepfakes are still uncharted territory. Unlike photos touched up by hand in software like Photoshop, deepfakes are generated by AI. This kind of sophisticated video manipulation is increasingly turning up in online fraud.
Even the term deepfake is a synthetic blend. It comes from combining the word “fake” with the phrase “deep learning.” Thus, the “deep” in deepfake references “deep learning,” or training computers to operate similarly to the human brain. Placing “fake” at the end of the word highlights the artificial nature of such media.
How Are Deepfakes Used?
Fake videos are usually harmless, often appearing in funny social media posts.
However, some can serve malicious purposes since altered content may create fake news, generate misinformation, spread pornographic videos, and tarnish a person’s reputation.
The negative side of deepfakes is what first drew public attention to the phenomenon.
One Reddit user created nonconsensual pornographic material with the faces of female celebrities using deepfake technology in 2017. Although the subreddit was consequently banned, deepfake pornography and similar forms of altered media have become rampant across the internet.
And this is just the tip of the iceberg.
Deepfake video has made its way into other spheres, most notably politics.
For example, a political party in Belgium posted a video of Donald Trump in 2018 in which he asked Belgium to step down from the Paris climate agreement. The thing is, the video was a deepfake; Trump never made such a speech.
And this likely won’t be the last instance of synthetic videos spreading misinformation. Political experts across the globe anticipate an onslaught of convincing, manipulated media.
How Are Deepfakes Created?
Deepfake videos are made by feeding countless photos of a person into deep learning networks known as variational auto-encoders (VAEs). The goal is to fine-tune the VAEs to capture a wide range of emotional expressions, lighting conditions, and poses. This information helps the tool determine which components, such as shadows or movement, are replaceable and which are unique.
Generally, creating a deepfake is a three-step process.
The first step requires two sets of input images. The first set represents the source, while the second comes from the intended target of the alteration. Computers can operate on thousands of pictures of random faces or many photos of one person.
Next comes creating the output images. The AI tool uses the input data to identify unique components of an expression, which serve as the backbone of the fake video. Of course, the AI must capture the subtleties of movements and personality for the final result to be convincing.
Finally, the AI combines the input and output material to complete the face swap. The data from the source material goes into the VAE, merging with the data from the images of the deepfake’s target. The auto-encoder then reconstructs the mannerisms and expressions of the original person using the target’s face.
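The three-step pipeline above boils down to one core trick: a shared encoder paired with person-specific decoders. The minimal numpy sketch below illustrates the swap using random, untrained weights and toy flattened "images"; real deepfake tools train deep convolutional versions of these matrices on thousands of photos, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face images": flattened 8x8 grayscale vectors (64 pixels).
# A real pipeline uses thousands of aligned face crops per person.
PIXELS, LATENT = 64, 16

# One shared encoder maps ANY face into a compact latent code...
W_enc = rng.normal(scale=0.1, size=(LATENT, PIXELS))

# ...while each person gets their own decoder, trained only on their photos.
W_dec_source = rng.normal(scale=0.1, size=(PIXELS, LATENT))
W_dec_target = rng.normal(scale=0.1, size=(PIXELS, LATENT))

def encode(face):
    """Compress a face into its expression/pose code."""
    return W_enc @ face

def decode(code, W_dec):
    """Reconstruct a face from a latent code with a person-specific decoder."""
    return W_dec @ code

# The swap: encode the SOURCE actor's frame, then decode it with the
# TARGET's decoder, yielding the target's face with the source's expression.
source_frame = rng.normal(size=PIXELS)
latent = encode(source_frame)
swapped = decode(latent, W_dec_target)

print(swapped.shape)  # (64,)
```

In trained systems the encoder learns to keep only pose and expression in the latent code, which is why decoding with the other person’s decoder produces a believable face swap.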
Now that we’ve gone over what deepfakes are and how they’re created, let’s examine their various types.
Deepfake Text
In the infancy of natural language processing (NLP) and machine learning, the notion of AI tools completing tasks like writing or drawing was unimaginable. But in 2023, AI-generated text features human-like fluency and tone. This progress is primarily the result of the detailed language libraries and models developed by data scientists.
Deepfake Videos
Although deepfake creation isn’t limited to video, it is the most prevalent type of synthetic media. Unsurprisingly, video clips have become the weapon of choice for cybercriminals. Since social media dominates large portions of our lives, videos and photos get more traction than text posts.
Video-generating AI is more robust than its natural language counterparts and is thus potentially more dangerous.
For example, the Seoul-based software company Hyperconnect released its MarioNETte program in 2020. MarioNETte can generate deepfake videos of political figures and celebrities. The release raised concerns about how easily accessible AI technology is becoming.
Deepfake Audio
Artificial intelligence and generative adversarial networks (GANs) can produce more than doctored video, photos, and text.
These robust technologies can also replicate a human voice. All they need is a repository of audio recordings of the person whose voice they will imitate. AI algorithms analyze and learn from this material, reconstructing the prosody of a human voice to produce clear deepfake audio.
Deep Voice and Lyrebird are among the best-known examples of voice imitation software. After you utter a few words, the programs adapt to your accent and voice. As you feed the software more recordings, it gathers enough data to clone your voice.
Social Media Deepfakes
Deepfake technology has become prevalent in social media and is used to create blogs, stories, and fake profiles.
One such example is Maisy Kinsley, a purported Bloomberg journalist with accounts on Twitter and LinkedIn. “Maisy” repeatedly tried to contact Tesla stock short-sellers, but people grew suspicious of her lack of online presence. Besides her profile picture, which seemed AI-generated, no other information existed.
Another computer-generated profile is Katie Jones. “Katie” claimed to work at the Center for Strategic and International Studies. It was later determined the account was a deepfake created as a spying tool.
Live or Real-Time Deepfakes
AI technology is advancing rapidly, allowing hackers to bypass voice authentication and companies to generate advertising clones for campaigns.
YouTubers are keeping up with this progress, using deepfake programs to alter their faces in real time. DeepFaceLive is an AI program that swaps your face with someone else’s during live streams and video calls. The open-source program is available to developers and content creators and has become increasingly popular among Twitch streamers.
Deepfake Examples
Let’s look at some of the most notable deepfake examples. Chances are you’re familiar with at least one entry on this list.
The Mandalorian Deepfake
Luke Skywalker’s surprise appearance in the Season 2 finale of “The Mandalorian” exhilarated Star Wars fans. Lucasfilm relied on the de-aging effects it debuted in “Rogue One: A Star Wars Story” to conjure Mark Hamill from his “Return of the Jedi” days.
However, as impressive as the technology was in “Rogue One,” it missed its mark in de-aging Hamill. Enter a YouTube creator who goes by the handle “Shamook.” His deepfake version left the original in the dust, prompting Lucasfilm to hire the creator.
But what makes Shamook’s deepfake superior to the Lucasfilm scene?
For one, how quickly it appeared online. Only three days after the episode premiered, Shamook uploaded the now-famous Luke Skywalker deepfake to his YouTube channel. It’s safe to assume Lucasfilm took far longer to craft the original scene.
While the differences between the two scenes are subtle, they’re noticeable when you zero in on the key elements.
For example, the deepfake features close-up shots that highlight Luke’s face. Its outlines, texture, and shape appear more organic. In contrast, the skin in the original seems over-rendered, synthetic, and shiny, as if taken out of a video game.
Furthermore, the deepfake enhances the blue of Luke’s eyes, adding reflected light for a realistic effect. Many Star Wars fans agree this is the most notable difference since the reflected light is conspicuously absent from the original scene.
Shamook’s accomplishment demonstrates the power of deepfake technology. Not only did he go toe to toe with one of the leading giants of the film industry, he managed to outperform it. He may have worked with limited resources, but he used deepfake technology as his secret weapon.
AI tools often face scrutiny over privacy concerns, yet Shamook’s feat presents deepfakes as a gateway to technological improvement.
The Nancy Pelosi Video
Unlike the Luke Skywalker deepfake, this example shows why many in the geopolitical sphere fear the technology’s potential misuse.
In the doctored video, the House Speaker seems impaired and slurs her words. The video made the rounds on different social media platforms. On Facebook alone, it picked up over 2.5 million views.
This video isn’t a deepfake in the traditional sense. There’s no convincing face swapping. Instead, it’s only slowed down to make Pelosi seem intoxicated and her speech garbled. Thus, it’s more accurately described as a “cheapfake” or “shallowfake,” meaning that it’s been altered only slightly.
The video raised questions about the accountability of social media platforms and how to identify false information.
The Obama Public Service Announcement Deepfake
In 2018, Jordan Peele and Buzzfeed collaborated to show what the future of fabricated news could bring in their Barack Obama deepfake.
Despite the seemingly limitless coverage around Obama, it still took over 56 hours of machine learning to create the simulation.
In the video, Obama warns people not to believe everything they see on the internet. As the video inches towards its end, the screen splits, revealing Peele as the voice behind the deepfake. The simulation manipulated the former president’s lips to match Peele’s narration.
Buzzfeed and Peele’s collaboration showed how easy it is to play around with someone’s words and actions, and that anyone, even a former president, is fair game.
Another implication of the Obama clip is the erosion of trust in news on social media.
While the fabricated video of Obama implores viewers not to believe everything they see online, much of the population uses social media to stay informed. This means deepfake videos are an indirect threat.
Although they may not convince us to believe falsehoods, they might lead to distrust and skepticism of news outlets. Thus, the deepfake PSA is a powerful reminder we should be careful about the content we consume and share on social media.
The Yang Mi Time Travel Deepfake
A video posted on the Chinese platform Weibo face-stitched actress Yang Mi into a period piece released nearly four decades ago. Weibo quickly took down the deepfake video, and online chatter in China has expressed fears about the potential dangers of such technology. But as we saw with the Luke Skywalker deepfake, AI can be a powerful tool for the movie and TV industries.
The Zuckerberg Speaks Frankly Deepfake
When Facebook failed to remove the doctored video of Nancy Pelosi despite public outcry, it prompted artist Bill Posters to upload this deepfake to Instagram. The footage displays Zuckerberg bragging about Facebook’s power. The speech is sinister and urges viewers to imagine what one could do with the stolen data of billions of users.
Donald Trump Taunts Belgium
In 2018, a video surfaced online of Donald Trump addressing the citizens of Belgium, advising them on climate change. As Trump looks into the camera, he urges Belgians to follow his lead and withdraw from the Paris Agreement or the Paris Climate Accords.
The Belgian Socialistische Partij Anders (SP-A) created this deepfake video and posted it on its Facebook account. The footage sparked much online chatter in Belgium, and many expressed outrage over Trump’s audacity to meddle in foreign climate policy.
As we now know, this rage was misdirected. The clip was little more than an AI-created forgery.
The political party hired a production company to make a deepfake of Trump using machine learning. The result was a replication of Trump, looking directly into the camera lens, saying things he’d never said.
The party intended to capture public interest and redirect citizens to an online petition prompting the government to take a proactive approach to climate change. The deepfake’s creators later stated they thought the low quality of the clip would reveal its inauthenticity. In hindsight, the lip movements are the main piece of evidence the video isn’t a genuine speech.
When SP-A realized the practical joke had gotten out of hand, it moved to mitigate the damage. Its social media team began posting comments informing viewers that Trump never actually gave the speech.
Perhaps the political party underestimated the lifelikeness of the clip or overestimated the audience’s ability to see through deepfakes.
Either way, the stunt was a deeply concerning example of the havoc manipulated video can wreak online in a geopolitical context.
The Return of Salvador Dali
Surrealist artist Salvador Dali once stated in an interview that while he believes in death, he doesn’t believe in the death of Dali. The Dali Museum in St. Petersburg, FL, proved the painter right, bringing him to life in the form of a deepfake.
The Dali Lives exhibition was a collaboration between the museum and the advertising company Goodby, Silverstein & Partners. It featured a life-size Dali recreated with AI-powered video editing, trained on over 6,000 frames pulled from archival footage. Training the algorithm to reconstruct Dali’s face took over 1,000 hours.
The technique then imposed the artist’s expressions on an actor with identical body proportions. Additionally, it synced his lip movement to the narration of a voice actor who captured Dali’s unique accent, a blend of English, French, and Spanish.
As a result, the exhibit offered a surreal experience.
When visitors pressed the doorbell of a small kiosk, Dali would appear and share his life stories. In one scene, he would read the current issue of The New York Times; in another, he would comment on the weather.
The Dali Lives project aimed to humanize the artist and help visitors empathize with him. After all, significant figures in art history often feel like storybook characters, as if they belong to times long gone. But Dali almost made it to the 21st century, having died in 1989.
As he clued in the museum visitors on his life, he appeared in tune with modern life, even asking if they wanted to take a selfie.
Deepfakes are usually associated with the dangers of fake news and the ability to manipulate political figures to say vile things, as mentioned before. However, the technology is becoming widely available, and Nathan Shipley, the ad agency’s technical director, admitted he found the code on GitHub.
The Dali Lives exhibition is one of the first instances of a cultural institution using deepfake technologies for artistic purposes.
Better Call Trump
A YouTube creator who goes by the name “Ctrl Shift Face” face-stitched Donald Trump into a scene from the hit show “Breaking Bad.” Trump replaces actor Bob Odenkirk, and the final product, “Better Call Trump,” is a convincing parody that replicates the former president’s voice and facial features.
In the scene, the character of Saul Goodman breaks down money laundering to Jesse Pinkman.
As the scene continues, Jared Kushner, Trump’s son-in-law, replaces Pinkman. When news broke in 2017 that Kushner would serve as a senior White House advisor, nepotism complaints started to pour in.
The doctored clip was created using the open-source DeepFaceLab deepfake software for a precise frame-by-frame effect. To ensure Trump’s voice matched Odenkirk’s, “Ctrl Shift Face” used StableVoices. It’s a now-defunct social media account that used an AI-powered synthesizer to recreate TV and movie lines.
The Korean Newsreader Deepfake
In late 2020, South Korean news anchor Kim Joo-Ha began discussing the day’s main headlines. The stories were typical of the time, full of Covid-19 updates and government responses to the pandemic.
Yet the bulletin differed from regularly scheduled programming. Namely, the anchor wasn’t going through the news. Instead, a deepfake version had replaced Kim Joo-Ha. The computer-created copy perfectly reflected her facial expressions, gestures, and voice.
Viewers knew of the experimental broadcast beforehand, and their response to it was mixed.
Although some were in awe of how realistic the replica was, others were concerned the news anchor could lose her job. The MBN channel announced it would continue using deepfake technology for some broadcasts. On the other hand, the company behind the AI-powered tools stated it would look for new media buyers in the U.S. and China.
The Richard Nixon Deepfake
This clip reimagines history, infusing it with a narrative that never happened and demonstrating the power of computer-created media.
In 2019, artists Halsey Burgund and Francesca Panetta collaborated with two AI firms, Respeecher and Canny AI, to create a convincing deepfake video. The footage shows the former president making a speech he never delivered.
But let’s add some context.
Although the 1969 Apollo 11 Moon Landing mission was a success, William Safire, Nixon’s speechwriter, had penned “In Event of Moon Disaster,” just in case. Fortunately, this version of history never came to be, but the deepfake shows Nixon delivering the backup speech.
The team behind the project worked for over half a year to create a nearly technically flawless fake.
Canny AI used its “video dialogue replacement” technique to reconstruct Nixon’s face, imposing the facial movements of the backup speech onto it. As for the sound, Respeecher used proprietary technology to synthesize Nixon’s voice from recordings of an actor reading the text. In addition, the company pulled footage from Nixon’s numerous White House appearances.
The two artists, who work at the Massachusetts Institute of Technology, also wanted to show how deepfakes might alter our perceptions of history. They documented the process of creating the deepfake in “To Make a Deepfake,” a short film highlighting the potential and perils of such technology.
How to Spot a Deepfake
Although AI technology is becoming more refined, you can often distinguish a doctored video from an original if you pay attention to subtle clues. Even where the face swapping is incredibly convincing, the animation rarely yields a seamless effect.
Still, as the technology improves, deepfake detection will likely become more challenging. But the following tips might help you recognize fake news and avoid falling victim to online scams.
Mismatched Audio
No matter how authentic the visuals may appear, audio flaws can quickly give a deepfake away.
Compared to the original, voices typically sound mumbled or subdued. Another way to identify a deepfake is to watch the lips: if the video has been doctored, the audio won’t match the lip movements.
Inconsistent Skin Tones and Shadows
Automatic rendering usually produces inconsistent skin tones, which are among the most straightforward giveaways of a deepfake. Until skin tone integration methods evolve, sharp lines and shadows remain telltale signs of altered videos.
Blurry Areas
If you notice soft areas around the mouth or face, you’re probably watching a deepfake. However realistic it seems, a fake video struggles to hide blurry spots during movement and facial transitions.
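Blurriness is also easy to quantify in code. The numpy sketch below uses the variance of a discrete Laplacian, a common sharpness heuristic (not how production deepfake detectors work), to flag a flat, featureless patch as blurrier than a detailed one.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the discrete Laplacian: low values suggest a blurry region."""
    # 4-neighbor Laplacian computed via shifted-array sums (no external libs).
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4 * img[1:-1, 1:-1])
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))        # high-frequency detail everywhere
blurry = np.full((64, 64), 0.5)     # flat, featureless patch

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Applied patch by patch around the mouth and cheeks of a video frame, a score like this can highlight the suspiciously soft regions described above.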
Are Deepfakes Illegal?
Deepfakes are legal almost everywhere in the world. However, specific deepfakes may be illegal depending on their intent, and that legality varies from country to country.
For example, the U.S. has recognized the need for stricter deepfake regulation.
While most states have passed laws against revenge porn, only a few, including Texas and California, have added deepfakes to such legislation. Additionally, California has banned politicians and candidates from using deepfake technology in election campaigns. However, we have yet to see a decisive crackdown on deepfake porn.
The United Kingdom also lacks any specific deepfake regulation laws. This legal gap leaves those targeted by malicious deepfakes to rely on existing legislation. Thus, deepfake frauds usually go to court as defamation cases.
On the other hand, China has taken a more serious stance.
The country recently passed a law that holds the platform where the deepfakes appear accountable for distributing the videos. Furthermore, it prohibits platforms from recommending synthetic media.
What Are Shallowfakes?
Although similar to deepfake videos, shallowfakes are less sophisticated.
They’re video clips altered using standard editing tools or taken out of context. While crude, they have the potential to cause significant damage to a person’s reputation.
One notable shallowfake example shows Jim Acosta, a White House correspondent, in a heated exchange with an intern. The clip was sped up to make Acosta seem more aggressive than he was.
When the video emerged, Acosta’s press pass was revoked. After determining the video was a fake, the White House reinstated his pass.
The United Kingdom’s Conservative Party relied on similar tactics. As an election drew closer, the party released a doctored video of Labour MP Keir Starmer in which he appears to struggle to express his party’s stance on Brexit.
Deepfake Prevention Practices
As seen from some of the deepfake examples discussed above, the dangers of computer-powered media are highly sophisticated. Organizations and businesses worldwide have started implementing practices to protect themselves against deepfake video fraud.
Using Detection Software
Investing in the same technology used to create deepfake videos is a reliable way of avoiding falling for tampered material. AI-powered detection software uses similar deep learning methods to analyze whether a video has been fabricated.
Thus far, watermarking content and data has been the go-to precaution for signaling a video’s authenticity. One such solution is Amber Authenticate, a tool that generates cryptographic hashes at set intervals throughout a video. If the video is tampered with, the hashes no longer match.
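The rough idea behind interval hashing can be sketched in a few lines of Python. The example below fingerprints fixed-size byte segments with SHA-256 and shows that editing a single byte changes only that segment’s hash. This illustrates the principle, not Amber Authenticate’s actual implementation, and the segment size is arbitrary.

```python
import hashlib

SEGMENT = 4096  # bytes per segment; real tools hash at fixed time intervals

def segment_hashes(data: bytes):
    """Return a SHA-256 fingerprint for each fixed-size segment of the data."""
    return [hashlib.sha256(data[i:i + SEGMENT]).hexdigest()
            for i in range(0, len(data), SEGMENT)]

original = bytes(range(256)) * 64          # stand-in for raw video bytes
tampered = bytearray(original)
tampered[5000] ^= 0xFF                     # flip one byte inside segment 1

orig_hashes = segment_hashes(original)
tamp_hashes = segment_hashes(bytes(tampered))

# Only the edited segment's hash shifts; every other segment still matches.
changed = [i for i, (a, b) in enumerate(zip(orig_hashes, tamp_hashes)) if a != b]
print(changed)  # [1]
```

Because each segment is fingerprinted independently, a verifier can pinpoint which stretch of the recording was altered rather than merely knowing that something changed.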
Increasing Awareness and Organizing Training
One of the biggest threats of deepfakes is their novelty.
People who don’t know what to look for are more likely to fall for synthetic media. On the other hand, those aware of the potential dangers can recognize the warning signs.
For this reason, corporations would benefit from teaching employees, management, and shareholders about the threats of deepfake videos. Similarly, enterprises can bolster their defense and raise awareness through corporate training videos.
Keeping Data Private
Social media platforms are fertile soil for content misuse. Anyone can save content from a public profile and tinker with it to create a deepfake. Keeping profiles private and sharing videos and photos with a trusted network is an additional protective layer against deepfake-based scams.
Deepfake Technology Is a Double-Edged Sword
Although deepfake technology could revolutionize the entertainment industry, artificial intelligence remains a threat when in the wrong hands. As deep learning techniques become more robust, governments will likely double down on their efforts to identify fake content and stop the spread of false information.
Frequently Asked Questions
What are the potential benefits of deepfake videos?
In the future, these videos could have many real-world applications. For example, users could protect personal data by creating deepfakes and generating virtual avatars.
Does deepfake creation undermine trust in traditional media outlets?
It can. As the Obama PSA demonstrated, even viewers who aren’t fooled by a specific fake may grow more skeptical of legitimate news, since much of the population relies on social media to stay informed.
What software is used to create fake videos?
Deepfakes are typically created using AI-powered techniques. Much of the software is accessible to the public, who can download editing programs such as Reface and Face Swap Booth.
Author: Tibor Moes
Founder & Chief Editor at SoftwareLab
Tibor is a Dutch engineer and entrepreneur. He has tested security software since 2014.
Over the years, he has tested most of the best antivirus software for Windows, Mac, Android, and iOS, as well as many VPN providers.
He uses Norton to protect his devices, CyberGhost for his privacy, and Dashlane for his passwords.