Remember when Snapchat introduced its Face Swap feature a few years ago? I’m pretty sure everyone will agree that in no way did it look real, never mind actually convincing. But what happens when technology advances to the point where literally anyone can do a face swap on real-time video, creating scarily realistic deepfakes? Well, strap in, because we’re about to find out!
What is FSGAN?
FSGAN is a new deep-learning-based approach to creating the increasingly popular deepfake videos, producing even more realistic results with far less time and effort. FSGAN can perform real-time face swaps with very little training. Previously, deepfake programmes relied on a complicated pipeline: users had to spend days, even weeks, processing thousands of images of a single face before the program could merge them into a video. Not only is this process time-consuming, but it also requires access to expensive hardware. Ultimately, the chances of the average Joe being able to create a deepfake were pretty slim! Perhaps that was a good thing, given the rise of scandalous deepfake videos.
We can probably all remember the viral fake video of Facebook founder Mark Zuckerberg declaring the power of social media and the amount of data collected from Facebook users. More recently, Bill Posters, the Instagram account responsible for the Mark Zuckerberg video, released the series ‘Partly Political’. These fake videos targeted the Prime Minister, Boris Johnson, and Labour leader Jeremy Corbyn. The idea behind the series is to raise awareness of “the lack of regulation concerning misinformation online”.
Deepfake videos don’t just affect the likes of Boris Johnson or Mark Zuckerberg; they can also harm regular people like you and me. Take the incident of ‘DeepNude’, an AI app that created hyper-realistic nude images of women from ordinary photos. Luckily, it was taken offline earlier this year. Given this harrowing and disturbing side of the AI world, it’s probably a good thing that making deepfakes was such a complicated process. FSGAN, however, is here to change all of that.
How does FSGAN work?
Users require far less technical knowledge than before to create realistic fake videos. The process has been massively simplified and now takes half the time. The system takes a source face and a target video: the source face is animated to mimic the facial expressions, reactions and movements of the person in the video, and is then blended onto that person’s face to create a convincingly realistic swap. Pretty scary, right? Although the results aren’t 100% perfect, compared with similar software FSGAN produces a significantly higher-quality result.
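To make the idea a little more concrete: the FSGAN paper describes a pipeline of dedicated networks, where one stage reenacts the source face to match the target’s pose and expression, another segments the face region, and a final stage blends the generated face into the target frame. The blending in the real system is learned, but the basic compositing idea behind any face swap can be sketched in a few lines of NumPy. (The function name, shapes and values below are our own illustration, not FSGAN’s actual code.)

```python
import numpy as np

def blend_swap(reenacted_face: np.ndarray,
               target_frame: np.ndarray,
               face_mask: np.ndarray) -> np.ndarray:
    """Composite a reenacted source face onto a target video frame.

    reenacted_face, target_frame: H x W x 3 float arrays in [0, 1]
    face_mask: H x W float array in [0, 1] (1 = face region)
    """
    mask = face_mask[..., None]  # add a channel axis so it broadcasts over RGB
    # Inside the mask we keep the generated face; outside it, the original frame.
    return mask * reenacted_face + (1.0 - mask) * target_frame

# Tiny 2x2 toy example: one masked pixel takes the face colour,
# the rest keep the target frame.
face = np.ones((2, 2, 3)) * 0.8           # stand-in for the generated face
frame = np.zeros((2, 2, 3))               # stand-in for the target frame
mask = np.array([[1.0, 0.0], [0.0, 0.0]])

out = blend_swap(face, frame, mask)
print(out[0, 0])  # -> [0.8 0.8 0.8]  (face pixel)
print(out[1, 1])  # -> [0. 0. 0.]     (background pixel)
```

A real system replaces this hard cut-and-paste with learned blending (and soft mask edges), which is a big part of why FSGAN’s swaps look so seamless compared with earlier tools.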
The software, designed by scientists at Israel’s Bar-Ilan University, isn’t widely available yet, but we don’t think it will be too long before it’s released to the wider public. The code is still being finalised, so there is no official release date as of yet.
The researchers behind FSGAN state in their project paper that the reason for releasing the code as open source is, ultimately, to raise awareness of this type of technology, in the hope that other researchers will develop counter-measures to battle the likes of FSGAN.
“We feel strongly that it is of paramount importance to publish such technologies, in order to drive the development of technical counter-measures for detecting such forgeries, as well as compel law makers to set clear policies for addressing their implications. Suppressing the publication of such methods would not stop their development, but rather make them available to select few and potentially blindside policy makers if it is misused.”
Although, to be honest, we’re not sure if releasing this technology with a user-friendly process was the best way to tackle this issue…
As of yet, we have no idea how this software will change the tech industry. Who will be the next target of fake videos? How will this software be used in the future? We’re pretty sure more hilarious and slightly creepy Nicolas Cage videos will pop up thanks to FSGAN! With that being said, we need to keep previous examples like DeepNude in mind and seriously consider the darker, more twisted impact this technology is likely to have.