Boutros, Fadi; Damer, Naser; Raja, Kiran; Kirchbuchner, Florian; Kuijper, Arjan (2022)
Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models.
In: Sensors, 2022, 22 (5)
doi: 10.26083/tuprints-00021119
Article, Secondary publication, Publisher's Version
sensors-22-01921.pdf (CC BY 4.0 International - Creative Commons, Attribution)
Item Type: | Article |
---|---|
Type of entry: | Secondary publication |
Title: | Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models |
Language: | English |
Date: | 11 April 2022 |
Place of Publication: | Darmstadt |
Year of primary publication: | 2022 |
Publisher: | MDPI |
Journal or Publication Title: | Sensors |
Volume of the journal: | 22 |
Issue Number: | 5 |
Collation: | 14 pages |
DOI: | 10.26083/tuprints-00021119 |
Origin: | Secondary publication DeepGreen |
Abstract: | This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (KD) was previously proposed to carry this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, comparison (verification) and storage operations are conducted on biometric templates, extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of our proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) value of 22.2% on cross-device verification without KD. The same model achieved an EER of 21.9% with conventional KD, and only 14.7% EER when using our proposed template-driven KD. |
Uncontrolled Keywords: | biometrics, knowledge distillation, periocular verification |
Status: | Publisher's Version |
URN: | urn:nbn:de:tuda-tuprints-211196 |
Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
Divisions: | 20 Department of Computer Science > Fraunhofer IGD; 20 Department of Computer Science > Mathematical and Applied Visual Computing |
Date Deposited: | 11 Apr 2022 11:36 |
Last Modified: | 14 Nov 2023 19:04 |
SWORD Depositor: | Deep Green |
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/21119 |
PPN: | 500750130 |
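
The abstract above describes distilling knowledge at the template (pre-classification embedding) level rather than at the classification output. The following is a minimal illustrative sketch of such a template-driven distillation loss, assuming a PyTorch training setup; the function names, the MSE-on-normalized-embeddings similarity term, and the weighting factor `alpha` are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of a template-driven knowledge distillation loss (illustrative only).
import torch
import torch.nn.functional as F

def template_kd_loss(student_template, teacher_template, logits, labels, alpha=0.5):
    """Combine a standard classification loss with a template-similarity term.

    student_template / teacher_template: embeddings taken from the
    pre-classification (template) layer of the student / teacher network.
    """
    # Classification loss on the student's own predictions.
    cls_loss = F.cross_entropy(logits, labels)

    # Template-driven distillation: push the student's template towards the
    # teacher's template (here via MSE between L2-normalized embeddings,
    # one possible choice of similarity measure).
    s = F.normalize(student_template, dim=1)
    t = F.normalize(teacher_template, dim=1)
    kd_loss = F.mse_loss(s, t)

    return (1 - alpha) * cls_loss + alpha * kd_loss

# Usage outline (shapes only), with a frozen teacher:
# student_template = student_backbone(images)        # e.g. (B, 512)
# logits = student_classifier(student_template)      # (B, num_classes)
# with torch.no_grad():
#     teacher_template = teacher_backbone(images)    # e.g. (B, 512)
# loss = template_kd_loss(student_template, teacher_template, logits, labels)
```

At verification time only the student's template layer would be used to extract embeddings for comparison and storage, which is why the distillation target is the template rather than the classification output.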