University of Hertfordshire


Retaining Expression on De-identified Faces

Research output: Chapter in Book/Report/Conference proceeding – Chapter (peer-reviewed)

Documents

  • Lily-M-AAM

    Accepted author manuscript, 634 KB, PDF document

Original language: English
Title of host publication: Speech and Computer
Editors: Alexey Karpov, Rodmonga Potapova, Iosif Mporas
Publisher: Springer International Publishing
Pages: 651-661
ISBN (Electronic): 978-3-319-66429-3
ISBN (Print): 978-3-319-66428-6
DOIs
Publication status: E-pub ahead of print - 13 Aug 2017
Event: 19th International Conference, SPECOM 2017 - Hatfield, United Kingdom
Duration: 12 Sep 2017 – 16 Sep 2017

Publication series

Name: Lecture Notes in Computer Science book series (LNCS, volume 10458)
Publisher: Springer

Conference

Conference: 19th International Conference, SPECOM 2017
Country: United Kingdom
City: Hatfield
Period: 12/09/17 – 16/09/17

Abstract

The extensive use of video surveillance, along with advances in face recognition, has raised concerns about the privacy of people identifiable in recordings. A face de-identification algorithm named k-Same, proposed in prior research, guarantees to thwart face recognition software. However, like many previous attempts at face de-identification, k-Same fails to preserve the utility of the original data, such as gender and expression. To overcome this, a new algorithm is proposed here that preserves data utility as well as protecting privacy. In terms of utility preservation, the new algorithm preserves not only the category of the facial expression (e.g., happy or sad) but also its intensity. This new algorithm for face de-identification has great potential, especially for real-world images and videos, since each facial expression in real life is a continuous motion consisting of images of the same expression at varying degrees of intensity.
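For context, the sketch below illustrates the baseline k-Same idea referenced in the abstract, not the expression-preserving variant introduced in this chapter: each face is replaced by the mean of a group of at least k faces, so a de-identified face maps back to at least k originals and naive recognition accuracy is bounded by 1/k. The feature representation (flattened vectors), the nearest-neighbour grouping strategy, and the function name k_same are illustrative assumptions.

```python
# Minimal sketch of k-Same-style de-identification (illustrative, not the
# paper's expression-preserving algorithm). Faces are assumed to be given
# as flattened feature vectors, e.g. pixel values or model parameters.
import numpy as np

def k_same(faces: np.ndarray, k: int) -> np.ndarray:
    """Replace each face with the mean of a group of >= k faces.

    faces: (n, d) array of n face vectors; requires n >= k.
    Returns an (n, d) array in which every row is a group mean, so any
    de-identified face corresponds to at least k original faces.
    """
    n = faces.shape[0]
    if n < k:
        raise ValueError("need at least k faces")
    deid = np.empty_like(faces, dtype=float)
    remaining = list(range(n))
    while remaining:
        if len(remaining) < 2 * k:
            # Too few faces left to form two full groups: fold them all
            # into one group so every group keeps at least k members.
            group = list(remaining)
        else:
            # Take an arbitrary seed face and its k-1 nearest remaining
            # neighbours (the seed itself has distance 0, so it is kept).
            seed = remaining[0]
            dists = np.linalg.norm(faces[remaining] - faces[seed], axis=1)
            nearest = np.argsort(dists)[:k]
            group = [remaining[i] for i in nearest]
        mean_face = faces[group].mean(axis=0)
        for idx in group:
            deid[idx] = mean_face
        remaining = [i for i in remaining if i not in group]
    return deid
```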

Notes

© Springer International Publishing AG 2017

ID: 12491600