Generative Adversarial Networks (GANs): a blessing for privacy?

by Maria Alexandra Enescu - Data Protection Officer

As with any emerging technology, Generative Adversarial Networks, or GANs, offer both opportunities and risks, especially from a privacy and data protection perspective. Let’s have a look at the bright side of the equation first and check out some of the opportunities created by GANs.

“The most interesting idea in the last ten years in machine learning”: that is how AI pioneer Yann LeCun has described the GANs concept. LeCun is the Chief AI Scientist at Meta, the parent company of Facebook, Instagram, and WhatsApp, among other subsidiaries. So it seems pretty safe to assume the man wasn’t exaggerating.

The idea itself was hatched by Ian Goodfellow, now Director of Machine Learning at Apple, in 2014, when he was still a PhD student at the University of Montreal. Generative Adversarial Networks is, in short, the term for a machine learning model that generates new data by training two competing neural networks instead of a single one: a generator that produces candidate samples, and a discriminator that tries to tell those samples apart from real data. This setup reduces the amount of existing, labelled data needed to train deep learning algorithms. By creating systems that learn more with less help from humans, GANs effectively remove one of the biggest obstacles to advancing AI, and deep learning in particular: the huge amount of human effort required.
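To make the "two competing networks" idea concrete, here is a deliberately tiny sketch of the adversarial training loop in NumPy. The generator and discriminator are reduced to a handful of scalars with hand-derived gradients so the game is visible at a glance; a real GAN would use deep networks and an autodiff framework, so treat this purely as an illustration of the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
REAL_MU, REAL_SIGMA = 4.0, 1.25

def sample_real(n):
    return rng.normal(REAL_MU, REAL_SIGMA, n)

# Generator: maps noise z to a * z + b (two trainable scalars).
g = {"a": 1.0, "b": 0.0}
# Discriminator: logistic classifier sigma(w * x + c) -> P(x is real).
d = {"w": 0.0, "c": 0.0}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(n):
    z = rng.normal(0, 1, n)
    return z, g["a"] * z + g["b"]

lr, batch = 0.05, 128
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    real = sample_real(batch)
    _, fake = generate(batch)
    p_real = sigmoid(d["w"] * real + d["c"])
    p_fake = sigmoid(d["w"] * fake + d["c"])
    # Gradient ascent on log D(real) + log(1 - D(fake)).
    d["w"] += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d["c"] += lr * np.mean((1 - p_real) - p_fake)

    # --- Generator step: push D(fake) toward 1, i.e. fool the discriminator ---
    z, fake = generate(batch)
    p_fake = sigmoid(d["w"] * fake + d["c"])
    # Gradient ascent on log D(fake); grad is d/d(fake) of that objective.
    grad = (1 - p_fake) * d["w"]
    g["a"] += lr * np.mean(grad * z)
    g["b"] += lr * np.mean(grad)

_, samples = generate(10_000)
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

After training, the generator's output distribution drifts toward the real one, even though the generator never sees the real data directly: it only ever receives feedback from the discriminator.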

Endless potential

Whether or not the GANs concept really qualifies as the biggest breakthrough in the history of AI, its potential is certainly believed to be endless. Current GAN applications range from various image alterations, such as turning selfies into emojis or applying face-aging effects to them, to creating brand-new data samples from scratch. These newly created samples can be images of non-existent animals, objects, and paintings, or text such as poems and journalistic articles. They can even be non-existent people! Check out the examples on the website This Person Does Not Exist: each time the page is refreshed, a GAN algorithm renders a hyper-realistic portrait of a completely fake person.

Today, GANs are already used extensively in the healthcare sector: for example, to obtain high-resolution radiology images while exposing patients to less radiation, for drug discovery purposes, and even for tumor detection. In the financial sector, banks employ GANs to detect and prevent fraud and money-laundering schemes. How do they do that? The networks are trained to recognize legitimate money transactions; when they spot an unusual transaction, they flag it with an abnormality score. This fight against financial crime extends naturally into the domain of cybersecurity and privacy applications, since GANs can also be used to defend against adversarial attacks and to detect phishing websites in real time.
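The flag-with-a-score logic described above can be sketched in a few lines. For illustration, the "model of legitimate transactions" here is a simple statistical fit rather than a trained GAN discriminator, and the amounts and threshold are invented; the scoring-and-flagging step, however, works the same way in both cases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy historical legitimate transaction amounts, standing in for a model
# trained only on legitimate activity (as a GAN discriminator would be).
legit = rng.normal(120.0, 30.0, 5_000)
mu, sigma = legit.mean(), legit.std()

def abnormality_score(amount):
    """Standardised distance from typical legitimate behaviour."""
    return np.abs(amount - mu) / sigma

THRESHOLD = 4.0  # scores above this get flagged for human review

incoming = np.array([135.0, 98.0, 2_500.0, 110.0])
scores = abnormality_score(incoming)
flags = scores > THRESHOLD
for amount, score, flagged in zip(incoming, scores, flags):
    print(f"amount={amount:>8.2f}  score={score:6.2f}  flagged={flagged}")
```

Only the wildly atypical transaction crosses the threshold; the everyday amounts pass through unflagged, which is exactly the behaviour a bank wants from an abnormality score.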

Privacy-enhancing AI applications

For several reasons, GANs can be of interest to the legal field too. Depending on the use case, GANs can either put the data subject’s privacy at risk or enhance that privacy instead. Here are a few examples and characteristics of the latter, privacy-friendly type of AI application:

1. Synthetic data

GANs produce synthetic data: data that is artificially generated rather than collected from real events or real individuals. Because it does not describe actual people, properly generated synthetic data is generally not considered personal data. This data production method can enable data analysis and research, data sharing and international data transfers between scientists and analysts, contributing to an overall boost in innovation while preserving the privacy of all individuals involved.
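The principle can be sketched as follows. Here, simple per-column sampling from learned summary statistics stands in for a GAN generator (a production system would train a tabular GAN instead), and the columns and numbers are invented; the point is that the released rows preserve the statistical shape of the data while belonging to no real individual.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "real" dataset: (age, annual income) for 1,000 individuals.
real = np.column_stack([
    rng.normal(42, 12, 1_000),          # age
    rng.normal(38_000, 9_000, 1_000),   # annual income
])

# Learn summary statistics from the real data, then release only
# freshly sampled rows -- none of which describes a real person.
mu = real.mean(axis=0)
sigma = real.std(axis=0)

def synthesise(n):
    return rng.normal(mu, sigma, size=(n, 2))

synthetic = synthesise(1_000)
print("real mean:     ", np.round(real.mean(axis=0), 1))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 1))
```

Analysts can study, share, or transfer the synthetic table in place of the original, because aggregate patterns survive the substitution while the individual records do not.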

2. Reduced need for training data

GANs can be trained with comparatively little data. This is because the two networks train each other: the generator continuously produces fresh examples for the discriminator to learn from, so the system depends less on large collections of real-world training data. And since GANs need less training data, you can also expect fewer privacy infringements and fewer risks for the individuals whose data is used to train them.

3. Efficient anonymisation

GANs also allow for efficient anonymisation of data. The AnomiGAN framework, for instance, has been used successfully to anonymise medical patient data, a special category of data that should be treated with particular caution.

So much for the advantages and opportunities that GANs have to offer, if used correctly. If not used in a correct and compliant manner, however, GANs can actually harm individuals’ rights, as I will explain in my next blog post.

Would you like to gain a better understanding of GANs from a legal perspective? Contact me or my colleagues at Sopra Steria directly.