TY - JOUR
T1 - Performance-guided neural network for rapidly self-organising active network management
AU - Lee, Sin Wee
AU - Palmer-Brown, Dominic
AU - Roadknight, Christopher M.
N1 - Funding Information:
This work is funded by British Telecom (BT) Laboratories. We would like to thank Ian Marshall for his support and comments.
PY - 2004/10
Y1 - 2004/10
N2 - We present a neural network for real-time learning and mapping of patterns using an external performance indicator. In a non-stationary environment where new patterns are introduced over time, the learning process utilises a novel snap-drift algorithm that performs fast, convergent, minimalist learning (snap) when the overall network performance is poor, and slower, more cautious learning (drift) when the performance is good. Snap is based on a modified form of Adaptive Resonance Theory (CGIP 37(1987)54), and drift is based on Learning Vector Quantization (LVQ) (Proc. IJCNN 1(1990a)545). The two are combined within a semi-supervised learning system that shifts its learning style whenever it receives a significant change in performance feedback. The learning is capable of rapid re-learning and re-stabilisation in response to changes in external feedback or input patterns. We have incorporated this algorithm into the design of a modular neural network system, Performance-guided Adaptive Resonance Theory (P-ART) (Proc. IJCNN 2(2003)1412; Soft Computing Systems: Design, Management and Applications, IOS Press, Netherlands, 2002, pp. 21-31). Simulation results show that the system discovers alternative solutions in response to significant changes in the input patterns and/or in the environment, which may require similar patterns to be treated differently over time. The simulations involve optimising the selection of network services in a non-stationary, real-time active computer network environment, in which the factors influencing the required selections are subject to change.
AB - We present a neural network for real-time learning and mapping of patterns using an external performance indicator. In a non-stationary environment where new patterns are introduced over time, the learning process utilises a novel snap-drift algorithm that performs fast, convergent, minimalist learning (snap) when the overall network performance is poor, and slower, more cautious learning (drift) when the performance is good. Snap is based on a modified form of Adaptive Resonance Theory (CGIP 37(1987)54), and drift is based on Learning Vector Quantization (LVQ) (Proc. IJCNN 1(1990a)545). The two are combined within a semi-supervised learning system that shifts its learning style whenever it receives a significant change in performance feedback. The learning is capable of rapid re-learning and re-stabilisation in response to changes in external feedback or input patterns. We have incorporated this algorithm into the design of a modular neural network system, Performance-guided Adaptive Resonance Theory (P-ART) (Proc. IJCNN 2(2003)1412; Soft Computing Systems: Design, Management and Applications, IOS Press, Netherlands, 2002, pp. 21-31). Simulation results show that the system discovers alternative solutions in response to significant changes in the input patterns and/or in the environment, which may require similar patterns to be treated differently over time. The simulations involve optimising the selection of network services in a non-stationary, real-time active computer network environment, in which the factors influencing the required selections are subject to change.
KW - Adaptive resonance theory
KW - Learning vector quantization
KW - P-ART
KW - Performance indicator
KW - Snap-drift
UR - http://www.scopus.com/inward/record.url?scp=10244243685&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2004.03.001
DO - 10.1016/j.neucom.2004.03.001
M3 - Article
AN - SCOPUS:10244243685
SN - 0925-2312
VL - 61
SP - 5
EP - 20
JO - Neurocomputing
JF - Neurocomputing
IS - 1-4
ER -