DeepFakes is the technology that lets you swap one person's face with another's in a video or an image – even in audio recordings, by mimicking a certain voice. Face swapping has been used in movies for decades, but it was done by skilled CGI experts who spent many hours in front of a screen. The results look remarkably real, and they're not always used for the best.
The catch is that you no longer need to be skilled to create fake videos. Anyone can do it. All they need is time to learn the techniques, plus hundreds of photos of a person A and a person B to feed the algorithm with. And that's pretty much it – they get the face swap without knowing any video editing.
Imagine how easy it is to insert a person into a fake video. Now imagine that being done at scale, given that our faces are online anyway.
Since the whole thing started, amateurs have mostly put celebrities' faces on porn stars' bodies or made politicians say funny things. However, anyone could just as easily fake an emergency alert warning of an attack, or ruin someone's life with a fake sex tape. They could even swing a close election by releasing a clip of a politician saying something awful days before voting starts. Many people are worried about exactly that.
What can DeepFakes do?
You're probably curious about how it all works. The thing to keep in mind is that seeing is believing, and deepfakes exploit that motto.
If someone drops a piece of fake news and enough people see it in time, they will start believing it, precisely because the technology has come so far. A lie gets spread all over the world as the truth.
These programs use GANs (generative adversarial networks), in which two machine learning (ML) models do the work. One model trains on a data set and creates video forgeries, while the other attempts to detect them. The larger the data set, the easier it is to create a believable deepfake. Many amateurs go for celebrities, as they're an easy target with plenty of footage available and, as said earlier, seeing is believing. Fake TV news has been around for a while now and, as said above, anyone with a laptop can make it.
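To make the two-model idea concrete, here's a minimal, heavily simplified sketch of a GAN training loop. It assumes PyTorch, and it uses random vectors as a stand-in for real face data; the sizes and names are hypothetical, and real deepfake pipelines are far more elaborate than this.

```python
# Minimal GAN sketch (assumes PyTorch). The generator learns to produce
# fakes; the discriminator learns to spot them. Random vectors stand in
# for real face data here.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two models improve each other: every time the discriminator gets better at catching forgeries, the generator is pushed to produce more convincing ones.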
GANs have other uses, too, beyond sex tapes and fake words put in a politician's mouth. They enable a form of "unsupervised learning," in which ML models teach themselves from unlabeled data. That is actually very promising: it could, for example, help driverless cars recognize bicyclists and pedestrians, or make digital assistants more conversational.
There’s also an app
Many users go for an app called FakeApp. It's not trivial to use, but it's not hard to learn either. And sometimes no deepfake is needed at all: people can take a real video of one ordinary person beating another on the street, then claim that immigrants did it – creating a false narrative around authentic footage. That doesn't require an ML algorithm, just a video that fits and a believable story.
Can we detect a fake?
This is the hard part. If a video was made by an amateur, sure, you can spot it with the naked eye: look at inconsistent shadows, or at people who never blink. But as the technology advances, we soon won't be able to detect fakes that easily, and we'll have to rely on digital forensics.
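As an illustration of the blinking heuristic, here's a rough sketch of how one might count blinks in a suspect clip using the well-known eye-aspect-ratio trick. It assumes OpenCV, dlib, and dlib's 68-point facial landmark model (a separate download); the input file name and the threshold are hypothetical, and real forensic tools go far beyond this.

```python
# Hedged sketch: flag videos whose subjects rarely blink.
# Assumes OpenCV, dlib, SciPy, and dlib's 68-point landmark model.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # Ratio of eye height to width; it drops sharply during a blink.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
blinks, frames, closed = 0, 0, False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        # Landmarks 36-41 are the left eye, 42-47 the right eye.
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2:        # eyes likely closed this frame
            closed = True
        elif closed:         # eyes reopened: count one blink
            blinks += 1
            closed = False
cap.release()
print(f"{blinks} blinks over {frames} frames")  # unusually few may hint at a fake
```

Heuristics like this only catch yesterday's fakes, though; newer models have learned to blink, which is exactly why digital forensics keeps having to move on.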
Sonia Theo has been writing for more than 15 years, starting with fantasy stories. She holds a bachelor's degree in English and German, and another in Arts and Design. In recent years, her interest in gaming and tech news has grown, so she started writing articles, guides, and reviews for players. In her spare time, you'll find Sonia playing WoW, crafting decorations and jewelry, or walking her dog. For Digital Overload, Sonia Theo covers all things tech and gaming, delivering fresh updates on your favorite games.