Higher-order neural networks for invariant pattern recognition






This study investigated the recognition of translated, scaled, and rotated patterns using third-order neural networks. Conventional neural networks perform poorly when the input image undergoes one of these transformations or a combination of them. Common approaches to invariant recognition include training with a large number of transformed patterns, extracting invariant features before training, or using a network architecture that is itself invariant to the transformations. The weights of higher-order neural networks can instead be handcrafted to achieve invariance without a large number of training patterns: invariance is embedded in a higher-order neural network by assigning a shared weight to geometrically similar groups of input pixels. Although this weight sharing reduces the total number of weights, the combinatorial growth in the number of higher-order terms limits the practical implementation of higher-order neural networks. The number of adjustable weights can be reduced further by relaxing the similarity criterion used to assign the weights. The proposed relaxed similarity criterion is based on a simple leader clustering algorithm. Clustering the third-order terms reduced the number of weights and improved the robustness of the higher-order neural network against local distortions caused by the limited input image size.
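The weight-sharing idea described above can be illustrated with a minimal sketch. In a third-order network, each output term multiplies three input pixels, and the weight for a pixel triple can be keyed by a property of the triangle the three pixels form that is unchanged by translation, scaling, and rotation, such as its interior angles. The sparse dictionary image representation, the angle quantization resolution, and the function names below are assumptions for illustration, not the implementation used in the study:

```python
import math
from itertools import combinations

def triangle_angles(p1, p2, p3):
    """Sorted interior angles of the triangle (p1, p2, p3); these are
    invariant to translation, scaling, and rotation of the point set."""
    def d(a, b):
        return math.dist(a, b)
    a, b, c = d(p2, p3), d(p1, p3), d(p1, p2)
    def angle(opp, s1, s2):
        # Law of cosines, clamped against floating-point drift.
        return math.acos(max(-1.0, min(1.0, (s1 * s1 + s2 * s2 - opp * opp) / (2 * s1 * s2))))
    return tuple(sorted((angle(a, b, c), angle(b, a, c), angle(c, a, b))))

def third_order_output(image, weights, resolution=10):
    """Third-order sum: for every triple of nonzero pixels, look up the
    shared weight for its (quantized) angle class and accumulate
    w(angles) * x_i * x_j * x_k.

    image:   dict mapping (x, y) -> pixel value
    weights: dict keyed by quantized angle triples
    """
    total = 0.0
    pixels = [(p, v) for p, v in image.items() if v != 0]
    for (p1, v1), (p2, v2), (p3, v3) in combinations(pixels, 3):
        try:
            key = tuple(round(a * resolution) for a in triangle_angles(p1, p2, p3))
        except ZeroDivisionError:
            continue  # skip degenerate triples (coincident points)
        total += weights.get(key, 0.0) * v1 * v2 * v3
    return total
```

Because a translated, scaled, or rotated copy of a pattern produces the same angle triples, every pixel triple maps to the same shared weight and the network output is unchanged, which is the sense in which the invariance is built into the weights rather than learned from transformed examples.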
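The leader clustering step mentioned above can also be sketched. A leader algorithm makes a single pass over the items: each item joins the first existing leader within a tolerance, or becomes a new leader itself, so nearby angle classes end up sharing one weight. The choice of distance measure (maximum absolute angle difference) and the threshold value are assumptions for illustration:

```python
def leader_cluster(keys, threshold):
    """One-pass leader clustering of angle triples.

    Each key joins the first leader whose angles all lie within
    `threshold` of its own; otherwise it becomes a new leader.
    Returns a dict mapping each key to its leader, so triples assigned
    to the same leader share one adjustable weight.
    """
    leaders = []
    assignment = {}
    for key in keys:
        for leader in leaders:
            if max(abs(a - b) for a, b in zip(key, leader)) <= threshold:
                assignment[key] = leader
                break
        else:
            leaders.append(key)
            assignment[key] = key
    return assignment
```

Raising the threshold merges more angle classes into each leader, which is how relaxing the similarity criterion trades a small loss of discrimination for a further reduction in the number of adjustable weights.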



Optical pattern recognition, Neural networks (Computer science), Computer algorithms