
AI fake-face generators can be rewound to reveal the real faces they were trained on


Yet that approach assumes you can get hold of the training data, Kautz said. At Nvidia, he and his colleagues have come up with a different way to expose personal data, including images of faces and other objects, medical data, and more, that does not require access to the training data at all.

Instead, they have created an algorithm that can recreate the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what is in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges, to shapes, to more recognizable features.
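To make that layer-by-layer picture concrete, here is a minimal sketch, assuming a standard PyTorch and torchvision setup, that hooks a few intermediate layers of a pretrained classifier and prints the shape of the features each one produces. The choice of ResNet-18 and of these particular layers is purely illustrative and not drawn from the Nvidia work.

```python
# Sketch: peeking at the intermediate features of a trained image classifier.
# Assumes PyTorch + torchvision; ResNet-18 and the hooked layers are
# illustrative stand-ins, not the networks the researchers actually studied.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

# Hook a shallow, a middle, and a deep layer.
model.layer1.register_forward_hook(save_output("layer1"))
model.layer3.register_forward_hook(save_output("layer3"))
model.layer4.register_forward_hook(save_output("layer4"))

x = torch.rand(1, 3, 224, 224)  # placeholder image tensor
with torch.no_grad():
    model(x)

for name, feat in features.items():
    # Shallow layers keep spatial detail; deeper layers are smaller and more abstract.
    print(name, tuple(feat.shape))
```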

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse it, recreating the input image from the model’s internal data. They tested the technique on a variety of common image-recognition models and GANs. In one experiment, they showed that they could accurately reproduce images from ImageNet, one of the best-known image-recognition datasets.

Images from ImageNet (above) and images created by rewinding a model trained on ImageNet (below)

NVIDIA

As with Webster’s work, the recreated images closely resemble the real ones. “We were amazed by the final quality,” Kautz said.
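The rewinding step can be illustrated with a generic feature-inversion sketch: given the activations a model produced at some intermediate layer, optimize a random image until it yields the same activations. This is a common inversion recipe written in PyTorch, not the specific algorithm Kautz’s team developed; the model, split point, and optimization settings here are placeholder assumptions.

```python
# Sketch of inversion from internal activations: recover an approximation of
# the input image by matching a target layer's features. Generic recipe, not
# the Nvidia algorithm; ResNet-18 and the chosen layer are placeholders.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

def half_forward(img):
    # The first half of the network, i.e. the internal data an attacker observes.
    x = model.conv1(img)
    x = model.bn1(x)
    x = model.relu(x)
    x = model.maxpool(x)
    x = model.layer1(x)
    return model.layer2(x)

private_image = torch.rand(1, 3, 224, 224)      # stand-in for the real input
target_feats = half_forward(private_image)

recon = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([recon], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(half_forward(recon), target_feats)
    loss.backward()
    opt.step()
    recon.data.clamp_(0, 1)  # keep pixel values in a valid range

# After optimization, `recon` approximates the image that produced the
# observed intermediate activations.
```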

The researchers argue that such attacks are not merely speculative. Smartphones and other small devices are starting to use more AI. Because of battery and memory limitations, models are sometimes only half-processed on the device itself and sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers have assumed that split computing won’t reveal any personal data from a person’s phone because only the model is shared, Kautz said. But his attack shows that this isn’t the case.
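For context, here is a rough sketch of what split computing looks like, assuming PyTorch and an arbitrary split point in a ResNet-18 (both assumptions, not details from the article): the device runs the first layers locally and ships the intermediate activations to a server, which finishes the prediction. It is exactly that shipped tensor that an inversion attack would target.

```python
# Sketch of split computing: the phone runs the head of the network, the cloud
# runs the tail. The split point (after layer2) and ResNet-18 are illustrative.
import torch
from torchvision import models

full_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def device_head(img):
    # Runs locally on the phone.
    x = full_model.conv1(img)
    x = full_model.bn1(x)
    x = full_model.relu(x)
    x = full_model.maxpool(x)
    x = full_model.layer1(x)
    return full_model.layer2(x)

def cloud_tail(feats):
    # Runs on the server: everything after the split point.
    x = full_model.layer3(feats)
    x = full_model.layer4(x)
    x = full_model.avgpool(x)
    return full_model.fc(torch.flatten(x, 1))

photo = torch.rand(1, 3, 224, 224)   # stand-in for a user's private photo
activations = device_head(photo)     # this tensor leaves the device
logits = cloud_tail(activations)     # the cloud completes the classification

# The attack shows that `activations`, often assumed harmless, can be
# inverted to approximately reconstruct `photo`.
print(logits.shape)
```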

Kautz and his colleagues are now working on ways to prevent models from leaking personal data. “We wanted to understand the risks so that we can minimize the vulnerabilities,” he said.

Although they use very different techniques, he thinks his work and Webster’s complement each other. Webster’s team showed that personal data can be found in the output of a model; Kautz’s team showed that personal data can be revealed by going in reverse, recreating the input. “It’s important to explore both directions to better understand how to prevent attacks,” Kautz said.


