TY - GEN
T1 - Correlated activity pruning (CAPing)
AU - Roadknight, C. M.
AU - Palmer-Brown, D.
AU - Mills, G. E.
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 1997.
PY - 1997
Y1 - 1997
N2 - The generalisation ability of an Artificial Neural Network (ANN) is dependent on its architecture. An ANN with the correct architecture will learn the task presented by the training set but also acquire rules that are general enough to correctly predict outputs for unseen test set examples. To obtain this optimum network architecture it is often necessary to apply a laborious ‘trial and error’ approach. One approach that helps to achieve optimum network architecture in a more intelligent way is pruning. Such methods benefit from the learning advantages of larger networks while reducing the amount of overtraining or memorisation within these networks. Sietsma and Dow (1988) describe an interactive pruning method that uses several heuristics to identify units that fail to contribute to the solution and can therefore be removed with no degradation in performance. This approach removes units with constant outputs over all the training patterns, as these are not participating in the solution. Also, units with identical or opposite activations for all patterns can be combined. The approach to merging hidden units detailed in Sietsma and Dow’s paper is useful; however, it only covers perfectly correlated, binary activations.
AB - The generalisation ability of an Artificial Neural Network (ANN) is dependent on its architecture. An ANN with the correct architecture will learn the task presented by the training set but also acquire rules that are general enough to correctly predict outputs for unseen test set examples. To obtain this optimum network architecture it is often necessary to apply a laborious ‘trial and error’ approach. One approach that helps to achieve optimum network architecture in a more intelligent way is pruning. Such methods benefit from the learning advantages of larger networks while reducing the amount of overtraining or memorisation within these networks. Sietsma and Dow (1988) describe an interactive pruning method that uses several heuristics to identify units that fail to contribute to the solution and can therefore be removed with no degradation in performance. This approach removes units with constant outputs over all the training patterns, as these are not participating in the solution. Also, units with identical or opposite activations for all patterns can be combined. The approach to merging hidden units detailed in Sietsma and Dow’s paper is useful; however, it only covers perfectly correlated, binary activations.
UR - http://www.scopus.com/inward/record.url?scp=84925134357&partnerID=8YFLogxK
U2 - 10.1007/3-540-62868-1_176
DO - 10.1007/3-540-62868-1_176
M3 - Conference contribution
AN - SCOPUS:84925134357
SN - 3540628681
SN - 9783540628682
T3 - Lecture Notes in Computer Science
SP - 591
EP - 592
BT - Computational Intelligence
A2 - Reusch, Bernd
PB - Springer-Verlag
T2 - 5th Fuzzy Days International Conference on Computational Intelligence, CI 1997
Y2 - 28 April 1997 through 30 April 1997
ER -