TY - JOUR

T1 - A Computational Study on Circuit Size vs. Circuit Depth

AU - Lappas, G.

AU - Frank, R.

AU - Albrecht, A.

N1 - Original article can be found at: http://www.worldscinet.com/ijait/mkt/archive.shtml. Copyright World Scientific Publishing Company. DOI: 10.1142/S0218213006002606. [Full text of this article is not available in the UHRA]

PY - 2006

Y1 - 2006

N2 - We investigate the circuit complexity of classification problems in a machine learning setting, i.e. we attempt to find some rule that allows us to calculate a priori the number of threshold gates that is sufficient to achieve a small error rate after training a circuit on sample data ${\mathcal S}_L$. The particular threshold gates are computed by a combination of the classical perceptron algorithm with a specific type of stochastic local search. The circuit complexity is analysed for depth-two and depth-four threshold circuits, where we introduce a novel approach to compute depth-four circuits. For the problems we selected and investigated from the UCI Machine Learning Repository, we obtain depth-two and depth-four circuits of approximately the same size at the best classification rates on test samples, and the rates themselves differ only marginally between the two types of circuits. Based on classical results from threshold circuit theory and our experimental observations on problems that are not linearly separable, we suggest an upper bound of $8\cdot \sqrt{2^n/n}$ threshold gates as sufficient for a small error rate, where $n := \log|{\mathcal S}_L|$.

U2 - 10.1142/S0218213006002606

DO - 10.1142/S0218213006002606

M3 - Article

SN - 0218-2130

VL - 15

SP - 143

EP - 161

JO - International Journal on Artificial Intelligence Tools

JF - International Journal on Artificial Intelligence Tools

IS - 2

ER -