GAN fingerprint for active detection/attribution of fake media
Prof. Mauro BARNI
The continuous progress of generative models is fostering impressive advances in a variety of applications, ranging from videoconferencing to virtual reality and the fast, seamless creation of personalized media. At the same time, the availability of increasingly powerful tools for the generation of synthetic content raises concerns about the possibility of distinguishing synthetic from genuine content. Possible misuses of generative models include the creation of fake media to support misinformation campaigns, defamation, polarization of public opinion, and so on. For this reason, several multimedia forensic techniques have been developed to distinguish fake from real content and to trace back the manipulation history of any piece of media. However, such techniques fall short of keeping pace with technological advancements; in addition, they are not suitable for application in the wild, partly because they rely on subtle traces that are usually washed away by common processing tools (e.g., lossy compression). In this talk, I will advocate the use of active forensic techniques relying on the introduction, within the to-be-authenticated content, of a unique fingerprint (a.k.a. watermark). The fingerprint should be introduced into the media at creation time, and it should not be possible to remove it without significantly degrading the hosting content. In contrast to classical watermarking, here the fingerprint is embedded within the generative model, rather than directly in the generated content. In fact, to retain its effectiveness, the generative model should still introduce a detectable fingerprint even after model fine-tuning, pruning, and compression. In this webinar, I will outline the main challenges and opportunities associated with generative model fingerprinting and describe some of my recent works in this field.
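To make the watermarking idea concrete, the sketch below shows the classical additive spread-spectrum principle that active fingerprinting builds on: a key-dependent pseudorandom pattern is added to the content, and a normalized-correlation detector later checks for its presence. This is only a toy illustration of the underlying concept; it is not the model-level fingerprinting scheme discussed in the talk, and all function names and parameter values (`strength`, `threshold`) are illustrative assumptions.

```python
# Toy zero-bit spread-spectrum watermark: embed a secret pseudorandom
# pattern into a host signal and detect it via normalized correlation.
# Illustrative sketch only, not the scheme presented in the webinar.
import random

def make_fingerprint(key: int, n: int) -> list[float]:
    """Derive a +/-1 pseudorandom fingerprint from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(signal: list[float], fingerprint: list[float],
          strength: float = 0.1) -> list[float]:
    """Additively embed the fingerprint at a given strength."""
    return [s + strength * w for s, w in zip(signal, fingerprint)]

def detect(signal: list[float], fingerprint: list[float],
           threshold: float = 0.05) -> tuple[bool, float]:
    """Declare the mark present if the correlation exceeds a threshold."""
    corr = sum(s * w for s, w in zip(signal, fingerprint)) / len(signal)
    return corr > threshold, corr

# Demo: the marked signal is detected, the unmarked host is not.
n = 10_000
wm = make_fingerprint(key=42, n=n)
host_rng = random.Random(7)
host = [host_rng.gauss(0.0, 1.0) for _ in range(n)]
marked = embed(host, wm)
print(detect(host, wm)[0], detect(marked, wm)[0])
```

The detection margin comes from averaging: over `n` samples the correlation with an unmarked host concentrates around zero (standard deviation about `1/sqrt(n)`), while the marked signal yields a correlation near `strength`. Model-level fingerprinting, as discussed in the talk, moves the embedding step from the content to the generator itself, so that every generated output carries the mark.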