Special Sessions

Trends and Challenges in Biometrics and Forensics

Biometrics and forensics are two fields that can benefit from each other in several ways. Biometric technologies can aid forensic investigations by providing reliable identification of individuals involved in criminal activities, which can help link suspects to crimes. Conversely, forensic science can contribute to the development of biometric technologies by providing insights into the unique physiological and behavioural characteristics that can be used for identification. Forensic science can also offer valuable insights into the accuracy and reliability of biometric technologies, which can guide their improvement. The aim of this special session is thus to facilitate synergies and bring together researchers working in the areas of forensic science and biometrics. Topics of interest for this special session include (but are not limited to):

  • Biometric analysis of crime scenes
  • Biometric-based cybercrime investigation
  • Attacks on biometric systems
  • Mobile, behavioural and soft-biometrics
  • Biometric data anonymization/de-identification
  • Multimedia Forensics
  • Deepfake detection
  • Surveillance
  • Multi-biometrics
  • Explainable AI for Biometrics and Forensics
  • Ethical, societal or privacy implications
  • Case studies of the aforementioned topics

Chaired by Fernando Alonso-Fernandez (Halmstad University, Sweden), Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD, Germany), and Andreas Uhl (Salzburg University, Austria). This special session is supported by the European Association for Biometrics (EAB). We invite submissions for this special session, to be held in conjunction with the IEEE International Workshop on Information Forensics and Security (WIFS 2023) in Nürnberg, Germany.


Theory and Practice of DNN Watermarking

The demand for watermarking methods for ownership verification and for tracing the usage of pre-trained models is becoming pressing. For this reason, DNN watermarking has received growing attention in recent years. Many techniques have been developed for white-box and black-box watermarking (an illustrative sketch of a white-box, multi-bit scheme is given after the topic list below), in standard learning scenarios as well as in federated learning. In addition to algorithms developed for computer vision models, methods to protect the Intellectual Property (IP) of language models have also appeared recently. In several cases, interesting interconnections emerge with other topics in DL research, including adversarial examples and backdoor attacks. IP protection concerns both discriminative and generative models. In the latter case, the network output has high entropy and is itself distributed as a product, which poses the additional challenge of retrieving the watermark from every single output (no-box watermarking).

As all watermarking schemes must satisfy the same requirements, the above scenarios pose many common challenges, and, despite the large body of literature, a number of open questions and drawbacks still prevent the application of the techniques developed so far in real-life settings. Many schemes are not robust against model retraining, especially in the inductive transfer learning scenario, that is, when the network is retrained on a different dataset and a different, yet related, task, or against output modification (for generative models). Capacity and payload capabilities are largely unexplored, and the number of bits embedded by most multi-bit watermarking methods is typically small (a few hundred at most). Finally, most DNN watermarking algorithms are not secure against informed attackers, i.e., attackers aware of the presence of the watermark, as witnessed by a growing number of works proposing removal attacks; developing secure DNN watermarking schemes is thus one of the main challenges for researchers.

To address the above issues, we advocate the need for a theory that can help understand the limits of DNN watermarking. While a well-established theory has been built for media watermarking, DNN watermarking, and function watermarking in particular (characterizing black-box approaches), is a field that has not been explored much yet, with several fundamental questions still to be answered. The goal of this special session is to present the current status of research in DNN watermarking, collect practical and theoretical insights, and take a step forward in the development of advanced and solid techniques that address the remaining open issues. A non-comprehensive list of topics addressed by the special session includes:

  • Dynamic and Static DNN watermarking
  • One-bit vs multi-bit DNN watermarking
  • GAN watermarking
  • Black-box vs white-box DNN watermarking
  • Backdoor attacks and watermarking
  • Adversarial examples and watermarking
  • Robust DNN watermarking
  • Secure DNN watermarking
  • Informed coding and DNN watermarking
  • DNN-based media watermarking and data hiding
  • DNN watermarking for federated learning
  • Watermarking of language models
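
To make the white-box, multi-bit setting discussed above concrete, the following is a minimal, illustrative sketch in the spirit of regularizer-based embedding: a binary message is tied to the weights of one layer through a secret random projection, and a penalty term added to the task loss drives the projected weights toward the message bits. The model, layer choice, payload size, and hyperparameters are assumptions chosen for illustration, not a reference implementation of any specific published scheme.

```python
# Minimal sketch of white-box, multi-bit DNN watermarking via a weight regularizer.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Host model: a small CNN; the watermark is embedded in the first conv layer's weights.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
host = model[0].weight                           # weight tensor carrying the watermark
payload_bits = 64                                # multi-bit payload (illustrative size)
message = torch.randint(0, 2, (payload_bits,)).float()
proj = torch.randn(payload_bits, host.numel())   # secret key: random projection matrix

def watermark_loss(weights):
    # Binary cross-entropy between the projected weights and the message bits.
    logits = proj @ weights.flatten()
    return F.binary_cross_entropy_with_logits(logits, message)

def extract(weights):
    # White-box extraction: project the weights with the secret key and threshold at zero.
    return (proj @ weights.flatten() > 0).float()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):                             # stand-in for the real training loop
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    task_loss = F.cross_entropy(model(x), y)
    loss = task_loss + 0.5 * watermark_loss(host)  # lambda trades task accuracy vs. watermark
    opt.zero_grad()
    loss.backward()
    opt.step()

bit_error_rate = (extract(host) != message).float().mean().item()
print(f"bit error rate after embedding: {bit_error_rate:.2f}")
```

In a black-box setting, by contrast, the message would have to be encoded in the model's behaviour (for instance, its responses to a secret set of queries), since the verifier cannot inspect the weights.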

Chaired by Benedetta Tondi (University of Siena, Italy), Mauro Barni (University of Siena, Italy), and Fernando Perez-Gonzalez (University of Vigo, Spain).