Using single layer networks for discrete, sequential data: an example from natural language processing

C. Lyon, R. Frank

    Research output: Contribution to journal › Article › peer-review



    Natural Language Processing (NLP) is concerned with processing ordinary, unrestricted text. This work takes a new approach to a traditional NLP task, using neural computing methods. A parser which has been successfully implemented is described. It is a hybrid system, in which neural processors operate within a rule-based framework. The neural processing components belong to the class of Generalized Single Layer Networks (GSLN). In general, supervised, feed-forward networks need more than one layer to process data. However, in some cases the data can be pre-processed with a non-linear transformation and then presented in a linearly separable form for subsequent processing by a single-layer net. Such networks offer advantages of functional transparency and operational speed. For our parser, the initial stage of processing maps linguistic data onto a higher-order representation, which can then be analysed by a single-layer network. This transformation is supported by information-theoretic analysis. Three different algorithms for the neural component were investigated. Single-layer nets can be trained by finding weight adjustments based on (a) factors proportional to the input, as in the Perceptron, (b) factors proportional to the existing weights, or (c) an error-minimization method. In our experiments generalization ability varied little between methods; method (b) is used for the prototype parser, which is available via telnet.
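    The abstract's central claim, that a non-linear pre-processing step can present data in a linearly separable form for a single-layer net, can be illustrated with a minimal sketch. The XOR task and product-term feature map below are illustrative assumptions, not the paper's actual linguistic representation, and the update rule shown is method (a), the classic Perceptron rule, rather than method (b) used in the prototype parser.

    ```python
    # Illustrative sketch only: XOR is not linearly separable in its raw
    # form, but a hypothetical higher-order feature map (adding the
    # product term) makes it solvable by a single-layer net.

    def phi(x1, x2):
        # Non-linear pre-processing: map inputs to a higher-order representation.
        return (x1, x2, x1 * x2)

    def train_single_layer(data, epochs=20, lr=1.0):
        # Perceptron-style training: weight adjustments proportional to
        # the input (method (a) in the abstract).
        n = len(data[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, target in data:
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - pred  # -1, 0, or +1
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    mapped = [(phi(*x), y) for x, y in xor]
    w, b = train_single_layer(mapped)
    preds = [1 if sum(wi * xi for wi, xi in zip(w, phi(*x))) + b > 0 else 0
             for x, _ in xor]
    # preds == [0, 1, 1, 0]
    ```

    Without the product term the same training loop cannot separate XOR, which is why the mapping to a higher-order representation is the key step before the single-layer net is applied.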
    Original language: English
    Pages (from-to): 196-214
    Journal: Neural Computing and Applications
    Issue number: 4
    Publication status: Published - 1997


