Deepfake: A Deep and Scary Threat, Definitely Not Fake!
By: Herminio B Liegen Jr, MBA, MIS, CDMP
Cybersecurity Specialist
Date: 7 October 2020
Nowadays, deepfake technology, aside from bringing delight to film and television, has also contributed a lot to fake news and misinformation, resulting in bad publicity, destroyed reputations and even business scams. It is like a double-edged sword: it brings improvements to the media, advertising and education industries, but it harms people, businesses and even governments when used otherwise. According to Hany Farid, one of the world’s leading experts on deepfakes, “If we can’t believe the videos, the audios, the image, the information that is gleaned from around the world, that is a serious national security risk.” Hence, as a student of both information technology and computer ethics, I think I can help my community, neighborhood and family face the looming threat of deepfake technology to privacy by disseminating information about it through personal conversations, social media and perhaps an information drive. With the vast information available on the internet, creating awareness about this technology will help people become better informed and more vigilant about its threats and risks in their day-to-day endeavors.
With the advances in deep learning that drive machine learning (ML) and artificial intelligence (AI), it is pragmatic to say that deepfakes, as a derivative of those technologies, are inevitable. Ori Sasson of Blackscore, an AI-based risk assessment company, said that it is very difficult to ban deepfakes because the technology is used mainly by the lucrative entertainment, advertising and media industries to generate film effects and animation. Furthermore, he stressed that many deepfake applications can be downloaded from the internet, some as open-source software and others as consumer apps such as FaceApp, so ordinary people without deep knowledge of ML and AI can easily use them. The technology is basically like fossil fuel: it runs our factories and automobiles, but it also pollutes our environment and causes climate change.
Deepfake creation, just like other technologies used in cybercrime, is a cat-and-mouse chase between a “generator,” which creates fake images, and a “discriminator,” which detects them. This is the idea behind generative adversarial networks (GANs), the deep learning technique invented by Ian Goodfellow in 2014. Alarmingly, Farid said that people working on the video-synthesis side outnumber those working on the detection side by about 100 to 1.
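To make the generator/discriminator idea concrete, here is a minimal, purely illustrative sketch in Python using PyTorch (my own assumption; real deepfake tools use far larger, specialized networks). The layer sizes, image dimensions and training loop are placeholders meant only to show how the two networks are pitted against each other.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator,
# while the discriminator learns to tell real images from fakes.
# All sizes are illustrative, not taken from any real deepfake system.
import torch
import torch.nn as nn

LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMAGE_DIM = 28 * 28    # flattened image size (e.g. a small grayscale face crop)

# The "generator": turns random noise into a synthetic (fake) image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# The "discriminator": scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator on real and generated images.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator so the discriminator labels its output "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example usage with random tensors standing in for real face images.
training_step(torch.randn(16, IMAGE_DIM))
```

The cat-and-mouse dynamic is built into the training loop itself: every improvement in the discriminator pressures the generator to produce more convincing fakes, which is exactly why external detection tools keep falling behind.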
So what can we do about this looming threat? Mike Beck of Darktrace, a cybersecurity company, said that preventing the malicious use of deepfakes could be doable in a corporate environment, because detection systems can be deployed to flag anomalies; doing the same from the consumer’s point of view is far more challenging. Meanwhile, Rob Toews, in his May 25, 2020 forbes.com article “Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.”, offered more detailed solutions. The first is to pass laws against the malicious use of deepfakes, as the US did in 2018. The second is to use copyright protections or authentication filters that can distinguish real images and audio from fakes. The third is to develop software and implement guidelines that detect and discourage malicious deepfakes, as the social media giants Facebook, Twitter and Google are doing. However, Toews also stressed that these solutions must be implemented carefully so as not to curtail “freedom of speech” as enshrined in the constitutions of democratic countries.
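The second solution rests on provenance: proving that a piece of media is authentic at the point of capture rather than trying to spot fakes afterward. As a minimal illustration of that idea only (not any specific vendor’s system), the Python sketch below signs a file and later verifies that it has not been altered; the key handling, file name and HMAC scheme are simplifying assumptions, since real provenance systems rely on public-key cryptography and secure hardware.

```python
# Illustrative sketch: a publisher signs media at capture time, and anyone
# holding the signature can later check that the bytes were not altered.
import hashlib
import hmac
from pathlib import Path

SIGNING_KEY = b"publisher-secret-key"   # hypothetical shared secret, for demo only

def sign_media(path: Path) -> str:
    """Return an HMAC-SHA256 signature over the file's contents."""
    return hmac.new(SIGNING_KEY, path.read_bytes(), hashlib.sha256).hexdigest()

def verify_media(path: Path, signature: str) -> bool:
    """Check whether the file still matches the signature issued at capture time."""
    return hmac.compare_digest(sign_media(path), signature)

# Example: sign a video when it is recorded, then verify it before publishing.
video = Path("press_briefing.mp4")        # hypothetical file name
video.write_bytes(b"original camera footage")
signature = sign_media(video)

video.write_bytes(b"deepfaked footage")   # simulate tampering
print(verify_media(video, signature))     # False: the file no longer matches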
I believe that fighting the evils of deepfakes is not only the responsibility of governments, technology companies and concerned organizations. Every person, regardless of age, also has to be cautious about sharing personal information, especially on social media, and should take responsibility for protecting personal information, one’s own and other people’s, by using cybersecurity software and/or limiting its exposure on computer systems, local networks and the internet. Seriously, the threats and risks of deepfakes are real and scarier than they sound. As New York University professor Nasir Memon warned, “The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of not believing what you see is under attack, that is a huge problem. One has to restore truth in seeing again.”