Applying GANs as a cyber weapon: new forms of cyber attack

by Ivana Butorac - Data Protection Expert

As I pointed out in a previous post on GANs (Generative Adversarial Networks), such deep neural networks can be put to great use to improve your cyber defense. At the other end of the spectrum, however, they can just as easily be used by cybercriminals for malicious purposes. In that case, GANs can have a significant negative impact on your business, not to mention on society as a whole.

Let's therefore take a quick look at some of the new and more sophisticated forms of cyber attack that the use of GANs can bring along with it.

Compromising authentication methods

GANs can be used for password cracking and, consequently, for compromising authentication methods. There are several well-known and popular methods for breaking into accounts, but training an algorithm to generate password guesses was not among them, until the introduction of GANs.

An excellent example is PassGAN. This machine learning model was trained on the popular RockYou dataset, which contains over 32 million passwords leaked in 2009. PassGAN has proved very effective at password cracking: it not only mimics the original distribution of RockYou passwords, it also generates unique new passwords that are likely to be in use elsewhere, endangering password-based authentication.

Research has indeed shown that PassGAN is able to match more than 43% of RockYou passwords. Another interesting finding is that, although PassGAN had never been exposed to leaked LinkedIn passwords, it was nevertheless able to match 24.2% of them. Even more remarkably, when combined with a password-recovery tool such as HashCat, PassGAN matched 51% to 73% more passwords than HashCat alone, which significantly increases the risk of data theft.
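To make the attack concrete, here is a deliberately simplified sketch of the matching step: candidate passwords emitted by a generative model are hashed and compared against a leaked hash dump. Everything here is hypothetical illustration (the candidate list, the leaked hashes, the use of unsalted SHA-1); it is not PassGAN itself, whose contribution is the neural network that produces the candidates.

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Hash a password the way many old leaked dumps stored them (unsalted SHA-1)."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Pretend these SHA-1 hashes were leaked from a breached service.
leaked_hashes = {sha1_hex(p) for p in ["iloveyou", "princess", "sunshine123"]}

# Pretend a trained generator emitted these candidate guesses.
generated_candidates = ["iloveyou", "qwerty99", "sunshine123", "dragonfly"]

# The attacker never needs the plaintexts: hashing each candidate and
# checking for membership in the dump is enough to recover matches.
matches = [c for c in generated_candidates if sha1_hex(c) in leaked_hashes]
match_rate = len(matches) / len(generated_candidates)
print(matches, match_rate)  # ['iloveyou', 'sunshine123'] 0.5
```

The match rate reported in the research (43% on RockYou) is exactly this kind of fraction, computed over millions of generated candidates instead of four.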

Evading detection systems

On the positive side, GANs can be very effective at preventing malicious attacks, but they also give attackers a powerful means of evading detection systems. Such systems are put in place to protect us from data theft, data poisoning, spying, and other malicious activity. Unfortunately, it seems they can be beaten.

A known GAN-based algorithm that allows its users to bypass detection systems is MalGAN. It produces adversarial malware examples and performs black-box attacks to evade the detection system. Its generative network is trained to minimise the malicious probability that a substitute detector assigns to the generated adversarial examples. An interesting but scary fact about this research is that the GAN was tested directly against the system it was designed to bypass, and it achieved its goal 100% of the time.
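The core idea can be sketched without any neural network at all. In the toy model below (a stand-in for MalGAN's architecture, with made-up weights and features), malware is a binary feature vector, the attacker may only add features (adding benign API imports keeps the malware functional, while removing features could break it), and a substitute detector is queried until the sample scores below the detection threshold:

```python
# Hypothetical substitute detector: a linear score over 6 binary features.
# Positive weights = "suspicious" features, negative weights = "benign" ones.
WEIGHTS = [2.0, 1.5, 1.0, -1.2, -1.5, -0.8]
THRESHOLD = 1.0  # score >= THRESHOLD -> flagged as malware

def detector_score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def evade(features):
    """Greedily flip absent features to present (0 -> 1) until the sample evades."""
    features = list(features)
    while detector_score(features) >= THRESHOLD:
        # pick the absent feature whose addition lowers the score the most
        best = min(
            (i for i, f in enumerate(features) if f == 0),
            key=lambda i: WEIGHTS[i],
            default=None,
        )
        if best is None or WEIGHTS[best] >= 0:
            break  # nothing left to add that would help
        features[best] = 1
    return features

malware = [1, 1, 0, 0, 0, 0]  # original sample: score 3.5, detected
evaded = evade(malware)
print(evaded, detector_score(evaded))
```

Note that the two original malicious features are untouched; the sample evades purely by gaining benign-looking ones. MalGAN does the same thing, except a generator network learns the perturbation end-to-end instead of this greedy loop.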

Tricking biometric recognition systems

Since GANs are primarily used to generate new data samples, virtually any kind of data can be produced, including biometric data such as fingerprints or facial images. This application of GANs can therefore also be used to trick facial and fingerprint recognition systems.

By way of experiment, researchers from Tel Aviv University created nine "master faces" and tested them against three major facial recognition systems. A master face is a facial image that passes face-based identity authentication for a large portion of the population; such faces can be used to impersonate, with a high probability of success, many users without access to any information about them. The results are downright alarming: just nine master faces can pass for 40% to 60% of the general population.
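The search for a master face can be pictured as an optimisation problem: find a single candidate that "matches" as many enrolled users as possible. The toy sketch below stands in for the real work (which searched a StyleGAN latent space with an evolutionary strategy against actual face-recognition models); here a face template is just a short random vector, a match is a dot-product similarity above a made-up threshold, and crude random search replaces the evolutionary optimiser:

```python
import random

random.seed(0)
DIM, USERS, THRESHOLD = 8, 200, 1.0

def random_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

# Enrolled user templates the authentication system would compare against.
population = [random_vec() for _ in range(USERS)]

def coverage(candidate):
    """Fraction of enrolled users this single candidate would match."""
    hits = sum(
        1 for user in population
        if sum(c * u for c, u in zip(candidate, user)) > THRESHOLD
    )
    return hits / USERS

# Random search standing in for the paper's evolutionary optimiser:
# keep the candidate that matches the largest share of users.
best = max((random_vec() for _ in range(500)), key=coverage)
print(f"best single-candidate coverage: {coverage(best):.0%}")
```

The unsettling property the researchers exploited is exactly this: because many real faces cluster near each other in the recognition model's feature space, one well-chosen candidate can sit close enough to a large fraction of them at once.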

Apart from facial images, GANs can also successfully create good-quality fingerprints. It is true that fingerprint recognition systems are quite advanced nowadays, taking into account other factors such as warmth as well. But the reality remains that such systems are not designed to tell real fingerprints from fake ones. The fact that a research experiment shows fake fingerprints spoofing 23% of the subjects in the dataset at a 0.1% false match rate is a clear and strong sign that this malicious deployment of GANs should not be ignored.

Leveraging social engineering attacks

Lastly, GANs can greatly amplify social engineering attacks, identity theft, and the impersonation of people. More specifically, since GANs can generate any kind of data, they can be widely used in phishing, whaling, and spoofing attacks. The first known GAN-created spoofing attack was reported in 2019, when criminals impersonated a chief executive and demanded a fraudulent transfer of €220,000 (the Euler Hermes customer case).

But it doesn’t stop there. GANs can produce real-time face re-enactment videos in which a person appears to be realistically moving and talking, leaving little room to doubt their authenticity. This becomes particularly dangerous when used against real individuals at real companies, organisations, and institutions. Not only can this lead to serious reputational damage, it can also fuel fake news and harmful misinformation, threatening democratic processes and even endangering national security.

At Sopra Steria, we believe that awareness raising and education are key to staying on top of new technological developments such as AI and, in this instance more specifically, Generative Adversarial Networks (GANs). Don’t hesitate to contact me or my colleagues for further information to help you stay on top.

Read my previous article to find out how GANs can also be used to improve your cyber defense.