Image redundancy reduction for neural network classification using discrete cosine transforms

Z. Pan, A.G. Rust, H. Bolouri

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



    High information redundancy and strong correlations in face images result in inefficiencies when such images are used directly for recognition. In this paper, the discrete cosine transform (DCT) is used to reduce image information redundancy, because only a subset of the transform coefficients is necessary to preserve the most important facial features, such as the hair outline, eyes, and mouth. We demonstrate experimentally that when DCT coefficients are fed into a backpropagation neural network for classification, high recognition rates can be achieved using only a small proportion (0.19%) of the available transform coefficients. This makes DCT-based face recognition more than two orders of magnitude faster than other approaches.
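    The coefficient-truncation idea behind the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the synthetic image, the 8×8 low-frequency block, and all names are assumptions; the paper's reported 0.19% figure refers to full-resolution face images.

    ```python
    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix: row k is the k-th cosine basis vector.
        k = np.arange(n)[:, None]
        x = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
        c[0, :] /= np.sqrt(2.0)  # DC row rescaled so that c @ c.T == I
        return c

    def dct2(img):
        # Separable 2-D DCT: transform the rows, then the columns.
        cr = dct_matrix(img.shape[0])
        cc = dct_matrix(img.shape[1])
        return cr @ img @ cc.T

    # Smooth synthetic gradient standing in for a grayscale face crop
    # (real face images are similarly dominated by low spatial frequencies).
    n = 32
    g = np.linspace(0.0, 1.0, n)
    img = np.outer(g, g)

    coeffs = dct2(img)

    # Keep only the low-frequency top-left 8x8 block as the feature vector
    # fed to the classifier: 64 of 1024 coefficients (~6% in this toy case).
    k = 8
    features = coeffs[:k, :k].ravel()

    # Fraction of signal energy retained by the truncated coefficients.
    retained = np.sum(features ** 2) / np.sum(coeffs ** 2)
    ```

    Because the DCT is orthogonal, total coefficient energy equals image energy, and for smooth inputs almost all of it concentrates in the low-frequency block; the truncated vector `features` is what would be presented to the backpropagation network in place of the raw pixels.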
    Original language: English
    Title of host publication: Procs of IEEE-INNS-ENNS Int Jt Conf on Neural Networks
    Subtitle of host publication: IJCNN 2000
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    ISBN (Print): 0-7695-0619-4
    Publication status: Published - 2000
    Event: IEEE-INNS-ENNS Int Joint Conf on Neural Networks - Como, Italy
    Duration: 24 Jul 2000 - 27 Jul 2000




    • backpropagation
    • face recognition


