Facebook Says It Will Remove ‘Deepfakes’

By Kevin Hogan
January 11, 2020

NEW YORK—Facebook said it will begin removing digitally manipulated but highly convincing videos from its website to protect viewers from being misled. An expert warns viewers not to believe what they see in videos without verifying the information with other sources.

“We have to be skeptical of everything we see,” Fordham University professor of communications Paul Levinson said. “That should be our default reaction to anything that we see. You see someone saying something, you see something going on, you should say, ‘I don’t believe it,’ until a lot more evidence comes in.”

The warning follows a recent trend of “deepfakes” circulating on the internet. Deepfakes are videos that have been digitally manipulated to make it appear as though the subject is saying or doing something that never actually happened. They are created using artificial intelligence and can appear to be authentic, according to Facebook.

Levinson says deepfakes are “so potent and so dangerous” because they can do more than just put words in a person’s mouth—they “can actually create an image and words from scratch.”

To combat deepfakes, Facebook announced on Jan. 6 that it will begin removing videos that are edited or digitally modified if they meet both of the following criteria:

  • It has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

The company said the policy won’t apply to content that is parody or satire, or videos that have been edited to omit or change the order of words.

But Levinson says despite the policy, people may still be able to spread the videos on the social media website. “If the people who are making the deepfakes are clever enough, they almost always are able to figure out a way to get around Facebook’s algorithms,” he said.

He also advised viewers to rely on their instincts, saying that if something doesn’t seem right, it may be false information. Levinson said viewers should check other sources online to see whether they are consistent with the video in question.

“Usually, if something is wrong or untrue, you’ll notice that other media [are] either silent about it, or maybe they have even already come out and flagged that as deceptive,” he said.

Levinson said deepfakes could be used to disrupt an election—by releasing a video of a candidate saying something he or she never said—the night before voters head to the polls. That would be “the worst possible time,” he said, because voters wouldn’t have time to fact-check and could cast their ballots believing a candidate holds a position he or she actually doesn’t.

In early 2019, the U.S. Intelligence Community warned that foreign groups may use deepfakes to influence campaigns and shape policies in the United States.

That warning, published in the Worldwide Threat Assessment, stated that adversaries and strategic competitors of the United States may use the 2020 election to advance their interests by making “convincing—but false—image, audio, and video files.”

In response to this potential threat, the Senate passed the Deepfake Report Act in October 2019. This legislation would require an annual report on the state of digital content forgery technology. The bill is now in the House Committee on Energy and Commerce.

Follow Kevin on Twitter: @KRHogan_NTD
