Generating photo-realistic training data to improve face recognition accuracy

Daniel Sáez-Trigueros, Li Meng, Margaret Hartnett

Research output: Contribution to journal › Article › peer-review



Face recognition has become a widely adopted biometric in forensics, security and law enforcement thanks to the high accuracy achieved by systems based on convolutional neural networks (CNNs). However, to achieve good performance, CNNs need to be trained with very large datasets, which are not always available. In this paper, we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. A key novelty of our approach is the ability to generate both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set, both of which we use to augment face datasets. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with datasets augmented with those images can lead to increased recognition accuracy. Experimental results show that our method is more effective when augmenting small datasets. In particular, an absolute accuracy improvement of 8.42% was achieved when augmenting a dataset of fewer than 60k facial images.
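The core idea described in the abstract, mapping discrete identity labels into a latent space shaped like a simple prior so that both known and entirely new identities can be fed to a conditional generator, can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the dimensions, the lookup-table embedding, and the placeholder `generate` function are all assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
n_train_ids = 1000   # identities seen during training
latent_dim = 64      # size of the identity latent space

# The embedding network is reduced here to a lookup table: each training
# identity maps to a point in a latent space that follows a standard
# normal prior.
id_embeddings = rng.standard_normal((n_train_ids, latent_dim))

def identity_latent(label=None):
    """Return an identity latent vector.

    With a label: the embedding of a training subject (used to generate
    more images of that subject). Without a label: a fresh sample from
    the prior, standing in for a subject not in the training set.
    """
    if label is not None:
        return id_embeddings[label]
    return rng.standard_normal(latent_dim)

def generate(identity_z, non_identity_z):
    # Placeholder for the conditional generator G(z_id, z_other), which
    # would combine identity and non-identity attributes into an image.
    return np.concatenate([identity_z, non_identity_z])

z_known = identity_latent(label=42)              # existing subject
z_new = identity_latent()                        # unseen subject from the prior
img_like = generate(z_new, rng.standard_normal(16))
```

Because both kinds of latent live in the same space under the same prior, the generator does not need to distinguish training subjects from sampled new ones, which is what makes both augmentation modes possible with a single model.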
Original language: English
Pages (from-to): 86-94
Number of pages: 9
Journal: Neural Networks
Early online date: 27 Nov 2020
Publication status: Published - 1 Feb 2021


  • Image generation
  • Generative adversarial learning
  • Face and gesture recognition
  • Machine learning


