Spectrum Translation for Cross-Spectral Ocular Matching

14 Feb 2020  ·  Kevin Hernandez Diaz, Fernando Alonso-Fernandez, Josef Bigun ·

Cross-spectral verification remains a challenging problem in biometrics, especially for the ocular area, since the features reflected in the images vary with both the region and the spectrum used. In this paper, we investigate the use of Conditional Adversarial Networks for spectrum translation between near-infrared (NIR) and visible-light images for ocular biometrics. We evaluate the transformation based on the overall visual quality of the translated images and the drop in identification accuracy when the recognition system is trained on data from the opposing spectrum. We use the PolyU database and propose two systems for biometric verification: the first based on Siamese Networks trained with Softmax and Cross-Entropy loss, and the second a Triplet Loss network. We achieve an EER of 1% using a Triplet Loss network trained on NIR images, computing the Euclidean distance between real NIR images and fake ones translated from the visible spectrum. We also outperform previous results obtained with baseline algorithms.
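The verification scheme described above, matching real NIR embeddings against embeddings of translated (fake) NIR images via Euclidean distance under a triplet-trained network, can be sketched as follows. This is a minimal illustration of the generic triplet loss and distance-threshold decision, not the authors' implementation; the embeddings, margin, and threshold values are hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard triplet loss: pull the anchor toward the positive sample
    # (same identity) and push it away from the negative (different identity).
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def verify(emb_real_nir, emb_fake_nir, threshold=0.5):
    # Accept as a genuine match if the Euclidean distance between the
    # real-NIR embedding and the translated-image embedding is small.
    return np.linalg.norm(emb_real_nir - emb_fake_nir) < threshold

# Toy 2-D embeddings standing in for network outputs (hypothetical values)
anchor = np.array([0.0, 0.0])
same_id = np.array([0.1, 0.0])
other_id = np.array([1.0, 0.0])
```

In practice the threshold is chosen at the operating point where false-accept and false-reject rates are equal, which is exactly the EER the paper reports.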

