FIVA: Facial Image and Video Anonymization and Anonymization Defense

8 Sep 2023  ·  Felix Rosberg, Eren Erdal Aksoy, Cristofer Englund, Fernando Alonso-Fernandez

In this paper, we present FIVA, a new approach for facial anonymization in images and videos. With our proposed identity tracking, FIVA maintains the same anonymized face consistently across frames and guarantees a strong difference from the original face: it yields zero true positives at a false acceptance rate of 0.001. Our work addresses the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt such attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. We further show that reconstruction attack models can be used to detect deep fakes. Finally, we provide experimental results showing how FIVA can even enable face swapping trained purely on a single target image.


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Face Swapping | FaceForensics++ | FIVA | ID retrieval | 99.25 | #1 |
| Face Swapping | FaceForensics++ | FIVA | Pose | 2.16 | #5 |
| Face Anonymization | LFW | FIVA | ID retrieval | 0.000 | #1 |
| Face Anonymization | LFW | FIVA | Negated ID retrieval | 0.000 | #1 |
| Face Anonymization | LFW | FIVA | Temporal ID consistency | 0.075 | #1 |
