In this study we assessed a Kohonen network's ability to represent visual similarity between grayscale pictures and whether these representations were associated with human ratings of perceived similarity. We trained a Kohonen network (self-organizing map, SOM) with 370 standardized grayscale pictures drawn from 70 basic-level object categories (e.g. dog, apple, chair) and measured, for each category, the average Euclidean distance between the SOM output patterns to provide an index of the visual similarity between exemplars of the same basic-level category. We then asked human subjects to provide visual similarity ratings for the same categories of stimuli and compared these with the measures extracted from the SOM. The significant correlation between the SOM and human measures suggests that a SOM may be a useful way of modeling certain stages of human visual categorization. Interestingly, the human ratings showed category-specific differences in the level of similarity ascribed to living and nonliving things. However, this pattern was not reflected in the SOM representations of the same stimuli. This has important implications for theories of object recognition and, specifically, our understanding of category-specific naming impairments.
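The within-category similarity index described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual model: the grid size, learning schedule, neighborhood function, and the choice of unit-wise distances as the "output pattern" are all assumptions, and the data here are synthetic stand-ins for the picture stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a minimal Kohonen SOM; returns unit weights of shape (gx*gy, dim).
    Grid size and schedules are illustrative assumptions."""
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    w = rng.random((gx * gy, data.shape[1]))
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                 # linearly decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighborhood radius
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood
            w += lr * h[:, None] * (x - w)
            step += 1
    return w

def activation(w, x):
    """One plausible 'output pattern' for stimulus x: its distance to every unit."""
    return np.sqrt(((w - x) ** 2).sum(axis=1))

def within_category_distance(w, exemplars):
    """Average pairwise Euclidean distance between SOM output patterns,
    i.e. the per-category visual-similarity index described in the abstract."""
    acts = np.array([activation(w, x) for x in exemplars])
    n = len(acts)
    pairs = [np.linalg.norm(acts[i] - acts[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

# Synthetic stand-in categories: one with visually homogeneous exemplars,
# one with heterogeneous exemplars (16-dimensional "picture" vectors).
tight = rng.normal(0.5, 0.01, size=(5, 16))
loose = rng.normal(0.5, 0.30, size=(5, 16))
w = train_som(np.vstack([tight, loose]))
print(within_category_distance(w, tight) < within_category_distance(w, loose))
```

A lower index indicates that exemplars of a category produce more similar SOM output patterns, which is the quantity the study correlates with human similarity ratings.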
| Title of host publication | Proceedings of the 2nd IEEE International Conference on Intelligent Systems |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Publication status | Published - 2004 |