FBI alert warns of Russian, Chinese use of deepfake content

Influence operations are already using deepfakes and synthetic media.
FBI Director Chris Wray. (Chip Somodevilla/Getty Images)

The FBI warned in an alert Wednesday that malicious actors “almost certainly” will be using deepfakes to advance their influence or cyber-operations in the coming weeks.

The alert notes that foreign actors are already using deepfakes or synthetic media — manipulated digital content like video, audio, images and text — in their influence campaigns.

“Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft,” states the alert obtained by CyberScoop.

The warning comes amid concern that if manipulated media is allowed to proliferate unabated, conspiracy theories and malign influence will become increasingly mainstream. Lawmakers have recently enacted a series of laws addressing deepfake technology, which is frequently used to harass women. The National Defense Authorization Act of 2021, for instance, requires the Department of Homeland Security to produce assessments of the technology behind deepfakes and the harms it poses.


In its alert, the bureau pointed to private sector research that has uncovered Chinese-language and Russian use of manipulated media in disinformation operations. In one case, a pro-Chinese government influence operation that the social media analysis firm Graphika tracks as “Spamouflage Dragon” has used profile images generated with artificial intelligence (AI) to lend the campaign authenticity.

In another case, researchers found that the Russian troll farm known as the Internet Research Agency was using images produced by generative adversarial networks (GANs) for fake profile accounts meant to stoke division ahead of the 2020 U.S. elections. The FBI tipped Facebook off to the threat at the time, as CyberScoop reported.

The Atlantic Council’s Digital Forensics Research Lab, Graphika and Facebook also recently collaborated to uncover AI-generated images used in a pro-Trump campaign.

The FBI warned in Wednesday’s alert that it would investigate deepfakes that have been attributed to foreign malicious actors.

The alert comes as the federal government, researchers, Americans, and social media companies are grappling with the ways misinformation and disinformation can cascade into the real, physical world, following the Capitol insurrection. The attack on the seat of government was, in part, spurred on by misinformation propagated online by far-right influencers and media outlets, according to recently published research.


While some of the main purveyors of disinformation surrounding the deadly riot were domestic actors, foreign actors from Russia, Iran and China have also seized on the news to exploit division in the U.S. and prop up their own interests.

It is not clear whether manipulated media or deepfakes were used in these campaigns, although some rioters suggested Donald Trump’s concession speech, delivered around the time of the attack, was a deepfake.

Although people are currently more likely to encounter genuine content whose context bad actors have manipulated than to encounter deepfakes, the FBI warned that the balance is likely to shift before long, with deepfakes taking the lead.

“Currently, individuals are more likely to encounter information online whose context has been altered by malicious actors versus fraudulent, synthesized content,” the FBI warned. “This trend, however, will likely change as … technologies continue to advance.”

The FBI bulletin, which suggests that people pursue media literacy to improve their ability to detect deepfakes, comes two days before FBI Director Chris Wray is slated to speak at an event in Washington, D.C., about the importance of civic education as a national security issue.


To catch synthetic media or deepfakes, the FBI suggests users be on alert for warping, distortions, syncing issues or other inconsistencies in images or videos. GAN-generated profile pictures that bad actors deploy on Twitter, for instance, often display consistent eye spacing and placement across multiple images.

“Do not assume an online persona or individual is legitimate based on the existence of video, photographs, or audio on their profile,” the FBI said in the alert.

Additionally, a blurry or warped background may also be an indication that media has been altered, the FBI warned.
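The “consistent eye spacing” tell mentioned above can be sketched as a simple heuristic: images from a single GAN tend to place the eyes at nearly identical pixel coordinates, while real photos vary with framing and pose. The sketch below assumes eye-landmark coordinates have already been extracted by some face-landmark detector; the coordinates and the tolerance value are hypothetical, illustrative choices, not part of the FBI's guidance.

```python
# Illustrative sketch (assumed data): flag a set of profile photos when the
# eye landmarks barely move between images, a common artifact of GAN faces.

def eye_positions_suspiciously_consistent(landmarks, tolerance=3.0):
    """landmarks: list of ((left_x, left_y), (right_x, right_y)) per image.
    Returns True if each eye's coordinates vary by <= tolerance pixels
    across all images in the set."""
    lefts = [left for left, _ in landmarks]
    rights = [right for _, right in landmarks]

    def spread(points):
        # Largest coordinate variation across the set, in either axis.
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return max(max(xs) - min(xs), max(ys) - min(ys))

    return spread(lefts) <= tolerance and spread(rights) <= tolerance

# Hypothetical coordinates from four GAN-style avatars: eyes nearly fixed.
gan_like = [((421, 462), (601, 460)), ((420, 461), (602, 461)),
            ((422, 463), (600, 459)), ((421, 460), (601, 462))]
# Hypothetical real photos: pose and framing move the eyes around the frame.
real_like = [((380, 410), (560, 415)), ((455, 500), (640, 495)),
             ((300, 350), (480, 360)), ((410, 470), (590, 468))]

print(eye_positions_suspiciously_consistent(gan_like))   # True
print(eye_positions_suspiciously_consistent(real_like))  # False
```

A real pipeline would pair a check like this with the other cues the alert lists, such as background warping and audio-video sync issues, rather than relying on any single artifact.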

The federal government and private sector need to work together on promoting digital literacy and tamping down on influence campaigns, as well as the deepfakes that could lend them authenticity, deepfake and disinformation experts have previously told CyberScoop.

Under the recently enacted Identifying Outputs of Generative Adversarial Networks (IOGAN) Act, the National Science Foundation and the National Institute of Standards and Technology are directed to support research on manipulated media moving forward.


Sean Lyngaas contributed reporting.

Written by Shannon Vavra

Shannon Vavra covers the NSA, Cyber Command, espionage, and cyber-operations for CyberScoop. She previously worked at Axios as a news reporter, covering breaking political news, foreign policy, and cybersecurity. She has appeared on live national television and radio to discuss her reporting, including on MSNBC, Fox News, Fox Business, CBS, Al Jazeera, NPR, WTOP, as well as on podcasts including Motherboard’s CYBER and The CyberWire’s Caveat. Shannon hails from Chicago and received her bachelor’s degree from Tufts University.