Ivan is a 33-year-old Russian programmer who, having earned a fortune in the video-game industry, is enjoying an extended sabbatical spent cycling, running and camping near where he lives, on the banks of the Volga. He is the creator of DeepFaceLab, one of the most popular pieces of software used by the public to create forged videos. Ivan, who claims to be an “ordinary programmer” and not a political activist, discovered the technology on Reddit in 2017. The software he used to create his first deepfake left a watermark on his video, which irritated him. After that software’s creator rejected a number of changes Ivan suggested, Ivan decided to write his own program.
In the past 12 months, DeepFaceLab’s popularity has brought Ivan numerous offers of work, including regular approaches from Chinese TV companies. “This is not interesting to me,” he says, via email. For Ivan, creating deepfake software is like solving an intellectual puzzle. Currently, DeepFaceLab can only replace the target’s face below the forehead. Ivan is working to get to the stage where an entire head can be grafted from one body to another. This will allow deepfake makers to assume “full control of another person”, he says, an evolutionary step that “all politicians fear like fire”. But while such technology exists behind closed doors, there is no source code in the public domain. (Ivan cites a 2018 presentation, Deep Video Portraits, delivered at a conference by Stanford researchers, as the gold standard towards which he is working.)
The most sophisticated deepfakes require advanced machine-learning skills and their development is computationally intensive and expensive. One expert estimates the cost to be about £1,000 a day. For an amateur creating fake celebrity pornography, this is a major barrier to entry. But for a government or a well-funded political organisation, the cost is insignificant – and falling every month. Ivan flip-flops in his assessment of the threat. “I do not think that so many stupid rulers… are capable of such complicated schemes as deepfakes,” he says. Then, when asked if politicians and journalists have overestimated the risk of deepfake propaganda, he says: “Did the gods overestimate the risk of giving people fire?”
James, founder of derpfake, uses Ivan’s software to create his fakes. He says it is only a matter of time before “truly convincing” forgeries are created by amateurs, but he believes public awareness of the technology will prevent such footage from being able to “significantly disrupt or interfere” politically. “If I show you the latest Transformers film, you fully understand the world isn’t being attacked by robot aliens and that [the film] has been created using computers,” he says. “But show the same footage to a person from 1900 and the reaction would likely be very different.”
Not everyone shares James’s optimism. In December, the Republican senator Ben Sasse introduced the US’s first bill to criminalise the malicious creation and distribution of deepfakes, describing the threat as “something that keeps our intelligence community up at night”. A similar bill is being debated in New York state, while last month a Chinese law to regulate the use of deepfakes reached its second review before the country’s legislative body. For James, however, legislation cannot halt the rising tide: “Those who seek to undermine democracy or the rights of others won’t be deterred by the laws in another country, or even their own.”
‘There will always be an arms race between detection and generation’
Just north of Oxford Circus in central London, 80-odd data analysts work in a four-storey mansion, the lofty rooms of which each contain a blackboard, giving it the feel of a Victorian schoolhouse. Unlike most of London’s tech startups, Faculty chose an office in Marylebone, rather than the industry hub of Shoreditch, due to its proximity to University College London, where many of the company’s employees studied.
For the past year, one of Faculty’s teams has focused exclusively on generating thousands of deepfakes, of varying quality, using all the main deepfake algorithms on the market. The idea is not to sow disinformation, but to compile a library that will help train systems to distinguish real video or audio from fakes. While politicians scrabble to write laws that may protect societies from weaponised deepfakes, startups such as Faculty, whose clients include the Home Office and numerous police forces, hope to inoculate the internet-going public against their effects.