Facebook and other social media companies say they’re trying to curb misinformation ahead of the 2020 presidential election, most recently enacting policies against “deepfakes.” Rivals Snapchat and TikTok, on the other hand, are digging in.
Both social networks have openly embraced controversial “deepfake” technology: fake videos that use artificial intelligence to depict events that never actually happened. Lawmakers and technologists fear the simulations could make people believe something is real when it’s not, manipulating voters with false information. Facebook CEO Mark Zuckerberg went as far as calling deepfakes an “emerging threat” when he appeared before Congress last year.
Yet the same tech powers the playful filters Snap uses to engage its 203 million users. Snap openly welcomed it last week when it acquired AI Factory, a computer-vision startup that recently worked with the company to build Cameos, a new Snapchat feature that relies on deepfakes. Snapchat has a long history with this kind of AI-driven camera technology, starting with the popular face-swapping lens it launched in April 2016.
TikTok is similarly adopting deepfake technology. The company has reportedly built a deepfake maker that lets users swap faces in recorded videos: a user scans their face from multiple angles, then chooses from a collection of curated videos to superimpose their face into. The resulting deepfake carries a watermark so viewers can easily tell it isn’t a real video. Still, the feature raises questions about what TikTok’s Chinese parent company, ByteDance, will do with all of the sensitive biometric data it collects.
The debate around deepfakes and misinformation was reignited on Monday, when rival Facebook announced it is banning several types of manipulated and synthetic media, including deepfake videos, ahead of the 2020 presidential election.
Facebook says it will remove any media that has been edited or synthesized “in ways that aren’t apparent to an average person” or that would likely mislead someone. The company’s new policy is similar to YouTube’s deceptive practices policy, which includes banning the use of manipulated media when it poses “serious risks of harm.”
Twitter announced in November that it plans to create a new policy on synthesized video, and it gathered more than 6,500 responses during an open comment period. The company says it is still drafting the policy.
Pinterest, the seemingly innocuous social media platform known for DIY projects and beauty tutorials, has also taken a hard stance against “deepfake” technology. A company spokesperson told Forbes, “Misinformation and disinformation aren’t allowed on Pinterest, and we remove it if we find it regardless of whether it’s manipulated media.”