Quantitative accuracy assessment of contextually filtered classified images

P. Zukowskyj, M.A. Bussell, C. Power, R. Teeuw

    Abstract

    The use of post-classification contextual filters to increase the overall accuracy of classified remotely sensed imagery is a well-known but under-used technique. Many practitioners have used a modal majority decision rule within a 3×3 kernel filter, but this may not be the optimum filter for achieving the highest classification accuracy from their imagery. Varying the decision rules and kernel sizes used in the filtering process may further improve the accuracy of the final product. This paper investigates these variations when applied to an example dataset and class system. Three classified images, derived from Landsat Thematic Mapper (TM) imagery, were used to assess a number of filters. The images were classified using the maximum likelihood method to produce geological outcrop information. Two hundred and twenty-one filters and variations were assessed, quantifying the effects of decision rule changes, window size, shape and weighting, probability thresholding prior to filtering, edge effects and multiple pass filtering. The optimal technique for this dataset and class system is a two-pass filter with both kernel sizes between 11×11 and 15×15 pixels, using a modal majority decision rule, operating on an image with edge effects reduced. This resulted in an increase in accuracy from 57.9% to 71.3%. The results indicate that complex filters may be no more effective than simple filters, and for many applications may be less suitable than a large simple majority filter due to the increase in computation time.
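
    To illustrate the technique the abstract describes, the sketch below applies a modal majority filter to a classified image: each pixel is replaced by the most frequent class label within the surrounding kernel window, applied twice with an 11×11 kernel to match the two-pass configuration reported as optimal. This is a minimal Python/NumPy sketch, not the authors' implementation; the array shape, class count, and the use of scipy.ndimage.generic_filter with nearest-edge padding (one simple way of reducing edge effects) are assumptions for illustration.

    import numpy as np
    from scipy import ndimage

    def modal_filter(classified, size):
        # Replace each pixel with the modal (most frequent) class label
        # in a size x size window; ties resolve to the lowest label.
        def window_mode(values):
            return np.argmax(np.bincount(values.astype(np.int64)))
        # mode='nearest' pads the border by repeating edge pixels --
        # an assumed, simple way of reducing edge effects.
        return ndimage.generic_filter(classified, window_mode,
                                      size=size, mode='nearest')

    # Two-pass filtering with an 11x11 kernel, within the 11x11 to 15x15
    # range the paper reports as optimal for its dataset.
    rng = np.random.default_rng(0)                   # placeholder data
    class_map = rng.integers(0, 5, size=(200, 200))  # hypothetical 5-class image
    smoothed = modal_filter(modal_filter(class_map, size=11), size=11)

    Note that generic_filter invokes the Python mode function once per pixel, so large kernels are slow; this mirrors the abstract's caveat that computation time can make elaborate filters less attractive than a simple majority filter.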
    Original language: English
    Pages (from-to): 3203-3222
    Journal: International Journal of Remote Sensing
    Volume: 22
    Issue number: 16
    Publication status: Published - 2001
