Deepfake (a blend of “deep learning” and “fake”) is a technology that uses artificial intelligence (AI) to manipulate images. It makes it possible to produce realistic moving images by superimposing one person’s face onto another person’s body. The technology can make people appear to do or say things that in reality they never did or said.
The technology relies on machine learning and artificial neural networks, which are trained on examples of the source actor’s face and the target face. Below you can find a sample video showing how a deepfake is created.
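The classic face-swap pipeline trains one shared encoder together with two person-specific decoders; the swap then consists of encoding a frame of person A and decoding it with person B’s decoder. The following is only a minimal sketch of that architecture in Python/NumPy, with toy dimensions, random stand-in data, and no actual training:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weight matrix standing in for a trained layer.
    return rng.standard_normal((n_in, n_out)) * 0.1

# One shared encoder compresses any face into a common latent code...
W_enc = layer(64 * 64, 128)
# ...while each person gets their own decoder.
W_dec_a = layer(128, 64 * 64)
W_dec_b = layer(128, 64 * 64)

def encode(face):
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    return latent @ W_dec

# A "frame" of person A: a flattened 64x64 grayscale image (random here).
frame_a = rng.standard_normal(64 * 64)

# Training (omitted) would minimize reconstruction error of
# encode -> decode_a on A's faces and encode -> decode_b on B's faces.
# The swap at inference time: encode A's frame, decode with B's decoder,
# so B's decoder renders A's expression and pose with B's appearance.
swapped = decode(encode(frame_a), W_dec_b)
print(swapped.shape)  # (4096,)
```

Because the encoder is shared, it is forced to learn person-independent features (pose, expression, lighting), while each decoder learns one person’s appearance; that separation is what makes the swap possible.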
The term first appeared in 2017, when the technology was used to modify adult films so that they showed the faces of famous actors. In April 2018, BuzzFeed created an example of a political deepfake: a video in which Barack Obama appears to call President Donald Trump a fool. It shows how much impact this technology can have on politics.
The development and widespread use of this technology have prompted reflection on its possible consequences, as well as on the impact it may have on society and the spread of news.
There are many examples of deepfakes using the image of famous people. Not only was the fake Barack Obama video created, which you can see here:
but also one with Mark Zuckerberg, who gives a short speech to the audience – “Imagine that. One man with total control over the stolen data of billions of people. All their secrets, their lives, their futures…” – he says in the recording, which can be seen below.
Fake news has taken on a new dimension, and deepfakes can also threaten a country’s political order or election results. They can be used to divide the electorate or change voting behavior. One of the most prominent cases involved Nancy Pelosi, the Speaker of the U.S. House of Representatives. In 2019, a video was released on social networking sites that appeared to show a drunken Pelosi; in fact, the footage had simply been slowed down. The video was meant to be funny, but it showed how easily manipulated video can be used to damage a reputation.
Deepfakes also have obvious potential for fraud: impersonating someone else to gain access to things like bank accounts or sensitive data. This means that fake video exposes companies, individuals, and governments to increased risk.
However, it is worth noting one positive use of this technology: film production. It makes it possible to see deceased actors or hear deceased singers on screen. An example of a positive and funny use of deepfake technology is the following song:
It uses Wav2Lip, which combines face detection with artificial intelligence to create a realistic illusion that the film characters’ lips are synchronized with the song.
Such videos can be humorous, but it should be remembered that the continuous development of this technology is, above all, dangerous. The fakes are nearly flawless, so it is difficult to stay critical. They can quickly deceive viewers, who believe what they see. In such cases, deepfakes become a dangerous tool of manipulation. The fact that a message spreads and more people talk about it can make us believe it all the faster.
It is important to remember that actions which insult or humiliate the person shown in a deepfake video may violate their personal rights. Currently, under Polish law, a person who has been defamed must initiate civil proceedings for infringement of personal rights or for insult through the mass media. Unfortunately, for now these offenses are not prosecuted ex officio.
This technology is constantly evolving, and its products will be increasingly difficult to detect.
Researchers are working on tools to detect deepfakes. Early deepfakes were less convincing: the characters did not blink, for example, or there were inconsistencies in skin color. Newer methods use algorithms that detect signs of manipulation at the edges of images. However, deepfakes are constantly improving, which quickly makes detection tools obsolete.

Companies such as Facebook and Microsoft have already launched tools that detect such fakes. In September 2020, Microsoft announced that it had created a specialized deepfake detection tool, Video Authenticator, which analyzes both photos and videos and is intended to fight disinformation. The company also announced a system that will help audiovisual producers embed a hidden code in their materials, making it possible to track any changes later introduced by deepfake tools. Microsoft’s detector is based on machine learning algorithms and was trained on thousands of fake video recordings. The new tool will not be publicly available, for security reasons: it will be distributed only through intermediary organizations, so that people interested in abusing the system cannot obtain its code and use it to refine their manipulation techniques.
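The early “no blinking” cue mentioned above can be turned into a crude heuristic: measure how open the eyes are in each frame (the eye-aspect ratio, EAR) and flag clips whose blink rate is implausibly low. The sketch below is purely illustrative; the threshold values and the synthetic EAR series are assumptions for the example, not numbers from any real detector:

```python
# Toy version of the early "no blinking" heuristic: count blinks in a
# series of eye-aspect-ratio (EAR) values, one per video frame, and flag
# clips whose blink rate is implausibly low. All thresholds here are
# illustrative assumptions, not values from a real detection system.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count contiguous runs of frames where the eye appears closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks += 1          # eye just closed: start of a blink
            closed = True
        elif ear >= closed_thresh:
            closed = False       # eye open again
    return blinks

def looks_suspicious(ear_series, fps=25, min_blinks_per_min=5):
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / max(minutes, 1e-9) < min_blinks_per_min

# A "real" clip: mostly-open eyes (EAR ~0.3) with periodic short dips.
real = ([0.3] * 70 + [0.1] * 5) * 20   # 20 blinks over 1500 frames (~1 min)
fake = [0.3] * 1500                    # never blinks

print(looks_suspicious(real), looks_suspicious(fake))  # False True
```

As the section notes, such cues age quickly: once generators were trained on footage that includes blinking, this particular signal largely disappeared, which is why detection research keeps moving to new artifacts.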
Recently, scientists from University College London published a ranking of the most serious threats related to AI (you can see the ranking here). They compiled a list of 20 possible ways criminals could use AI over the next 15 years, ranked according to the seriousness of the threat: the damage it may cause, the criminal profit, and how easy the method is to use. Deepfakes topped the list because they can be difficult to detect and stop. Their wide range of uses is also dangerous, from discrediting a public figure to impersonating a person and gaining access to their bank account.
It is enough to look at these examples to understand that the consequences of leaving deepfakes unregulated can be enormous. The technology exists and is already widely used: anyone can now create a deepfake by downloading an application to their phone and use it for any purpose.
Detecting potentially falsified images requires not only technological tools but also a legal framework governing the creation of deepfakes. In December 2018, Republican Senator Ben Sasse of Nebraska proposed a bill known as the Malicious Deep Fake Prohibition Act of 2018, which would criminalize the malicious creation and distribution of fake audiovisual recordings. The DEEP FAKES Accountability Act, proposed in June 2019 by New York Democratic Representative Yvette Clarke, would require forged recordings to be clearly watermarked and would impose penalties for violations.
Deepfakes can raise doubts about the reality of the world around us, and their use can enable effective disinformation or damage our reputations. That is why it is so important both to introduce legal regulations and to make deepfake detection technology widely available.