An anonymous reader quotes an MIT Technology Review report: Load the website This Person Does Not Exist and it will show you a human face, near-perfect in its realism yet totally fake. Refresh and the neural network behind the site will generate another, and another, and another. The endless sequence of AI-created faces is produced by a generative adversarial network (GAN), a type of AI that learns to produce realistic but fake examples of the data it is trained on. But these generated faces, which are starting to be used in CGI movies and ads, may not be as unique as they seem. In a paper titled This Person (Probably) Exists (PDF), researchers show that many faces produced by GANs bear a striking resemblance to real people who appear in the training data. The fake faces can effectively unmask the real faces the GAN was trained on, making it possible to expose the identity of those individuals. The work is the latest in a string of studies that call into question the popular idea that neural networks are “black boxes” that reveal nothing about what goes on inside.
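For context, the adversarial training the article describes is standardly formalised (in Goodfellow et al.'s original GAN formulation) as a minimax game between a generator G, which maps noise z to fake samples, and a discriminator D, which tries to tell real data from generated data:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

The generator is rewarded for samples the discriminator cannot distinguish from the training data, which is exactly why it can drift toward reproducing near-copies of training faces.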
To expose the hidden training data, Ryan Webster and his colleagues at the University of Caen Normandie in France used a type of attack called a membership inference attack, which can be used to find out whether certain data was used to train a neural network model. These attacks typically exploit subtle differences between how a model treats data it was trained on, and has therefore seen thousands of times before, and how it treats unseen data. For example, a model may identify a previously unseen image accurately, but with slightly less confidence than an image it was trained on. A second, attacking model can learn to spot such tells in the first model’s behavior and use them to predict whether certain data, such as a photo, was in the training set.
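The confidence gap described above can be made concrete with a toy sketch. Everything here is illustrative, not the paper's actual setup: the "target model" is a deliberately memorising scorer whose confidence decays with distance to the nearest training example, and the hand-picked threshold stands in for the second, attacking model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target model (illustrative, not from the paper): its "confidence"
# on an input decays with distance to the nearest training example, so
# it is maximal (1.0) on exact members of the training set.
train_X = rng.normal(size=(50, 8))   # "member" data the model saw
test_X = rng.normal(size=(50, 8))    # "non-member" data it never saw

def confidence(x):
    """Target model's confidence score for input x."""
    nearest = np.linalg.norm(train_X - x, axis=1).min()
    return np.exp(-nearest)

# Membership inference: flag an input as a training-set member when the
# model's confidence exceeds a threshold sitting in the gap between
# typical member and non-member scores.
THRESHOLD = 0.5
flagged_members = sum(confidence(x) > THRESHOLD for x in train_X)
flagged_nonmembers = sum(confidence(x) > THRESHOLD for x in test_X)

print(f"members flagged:     {flagged_members}/50")
print(f"non-members flagged: {flagged_nonmembers}/50")
```

In a real attack the threshold is not hand-picked: the attacker trains a second model on the target's outputs to learn where the member/non-member boundary lies.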
Such attacks can lead to serious security leaks. For example, discovering that someone’s medical data was used to train a model associated with a disease may reveal that the person has that disease. Webster’s team extended this idea so that instead of identifying the exact photos used to train a GAN, they identified photos in the GAN’s training set that were not identical to the generated faces but appeared to depict the same individual, in other words, faces with the same identity. To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of those generated faces matched the identity of any of the faces in the training data. The results are striking. In many cases, the team found multiple photos of real people in the training data that appeared to match the fake faces generated by the GAN, revealing the identities of the people the GAN was trained on.
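The identity-matching step can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: in the real attack the embeddings come from a trained face-recognition network, whereas here each identity is a random unit vector and each "photo" is a perturbed copy of it, so cosine similarity plays the role of the facial-recognition match.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

# Stand-in face embeddings (illustrative): 20 identities, each a random
# 128-d unit vector, with 3 noisy "photos" per identity in the GAN's
# training set.
identities = [unit(rng.normal(size=128)) for _ in range(20)]
train_db = [(ident_id, unit(ident + 0.05 * rng.normal(size=128)))
            for ident_id, ident in enumerate(identities)
            for _ in range(3)]

# A GAN sample that memorised identity 7: not any stored photo exactly,
# but effectively a new "photo" of the same person.
generated = unit(identities[7] + 0.05 * rng.normal(size=128))

# Identity membership attack: rank training photos by cosine similarity
# to the generated face and inspect the closest match.
scores = sorted(((float(generated @ emb), ident_id)
                 for ident_id, emb in train_db), reverse=True)
best_sim, best_id = scores[0]
print(f"closest training identity: {best_id} (cosine {best_sim:.2f})")
```

Because the generated face is close in embedding space to several training photos of the same person, the attack recovers the identity even though no single training photo is reproduced pixel-for-pixel.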
AI fake face generators can be rewound to reveal the real faces they trained on