Omni-directional motion: pedestrian shape classification using neural networks and active contour models

Ken Tabb, S. George, N. Davey, R.G. Adams

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



This paper describes a hybrid vision system which, following initial user interaction, can detect and track objects in the visual field and classify them as human or non-human. The system incorporates an active contour model for detecting and tracking objects, a method of translating the contours into scale-, location- and resolution-independent vectors, and an error-backpropagation feedforward neural network for shape classification of these vectors. The network generates a confidence value for a given shape, determining how 'human' and how 'non-human' it considers the shape to be. This confidence value changes as the object moves around, providing a motion signature for the object. Previous work accommodated only lateral pedestrian movement across the visual field; this paper describes a system which accommodates all angles of pedestrian movement on the ground plane.
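As an illustration of the kind of contour encoding the abstract describes, the sketch below derives a scale- and location-independent descriptor from a closed contour by sampling centroid-to-boundary distances along evenly spaced axes and normalising them. This is a simplified stand-in for the paper's axis crossover vector, not the authors' exact encoding; the function name and the angular-bucketing scheme are assumptions for illustration.

```python
import math

def axis_crossover_vector(contour, n_axes=16):
    """Illustrative scale-/location-independent shape descriptor.

    contour: list of (x, y) points on a closed boundary.
    Returns n_axes values: the farthest centroid-to-boundary
    distance found near each axis direction, normalised so the
    largest distance is 1.0 (hence independent of scale and of
    where the shape sits in the image).
    """
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    dists = [0.0] * n_axes
    for x, y in contour:
        # angle of this boundary point around the centroid, in [0, 2*pi)
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(ang / (2 * math.pi / n_axes)) % n_axes
        d = math.hypot(x - cx, y - cy)
        dists[k] = max(dists[k], d)
    m = max(dists) or 1.0
    return [d / m for d in dists]
```

Because the distances are measured from the shape's own centroid and divided by their maximum, translating or uniformly scaling the contour leaves the vector unchanged, which is the property the paper requires before feeding shapes to the classifier.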
Original language: English
Title of host publication: Proc. of Image and Vision Computing New Zealand (IVCNZ)
Publication status: Published - 2001
Event: Image and Vision Computing New Zealand (IVCNZ) - Dunedin, New Zealand
Duration: 26 Nov 2001 - 28 Nov 2001


Conference: Image and Vision Computing New Zealand (IVCNZ)
Country/Territory: New Zealand


  • snake
  • active contour model
  • shape classification
  • neural network
  • omni-directional
  • axis crossover vector
  • ground plane


