Using sigmoidal nodes to train hard-limiter networks
Burgin, Jonathan Ronald
It is often desirable for a neural network's output to be hard-limited, since such networks are easy to implement in hardware and produce simple, binary outputs. However, existing methods for training hard-limiter neural networks are inefficient and often very time consuming. The most popular training method, error back-propagation, requires a continuously differentiable activation function; hard-limiter functions are not continuously differentiable and therefore cannot be used with this technique. Sigmoidal networks can be trained with back-propagation, and if their weights were applicable to hard-limiter networks, this could provide an easy way to train them. A technique for training hard-limiter neural networks using sigmoidal neural networks was investigated. Several different problems were examined with this technique, with promising results. One of the problems was font recognition: the training technique greatly decreased training time compared to a standard sigmoidal network, and the hard-limiter network's performance essentially matched that of the sigmoidal network. The hard-limiter networks performed well using the weights obtained from the sigmoidal networks, but did not always generalize well on some of the problems investigated. The technique worked well on most of the problems and should be considered a viable way to train hard-limiter networks. Further investigation is needed to address the generalization problem and to determine when training should stop, since traditional methods for avoiding overtraining do not apply to this technique.
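The core idea described above can be sketched in a few lines: train a small network with sigmoidal activations (so back-propagation applies), then reuse the learned weights in a network whose activations are replaced by hard limiters. The sketch below is an illustrative reconstruction, not the thesis's actual implementation; the XOR task, network size, learning rate, and iteration count are all assumptions chosen for brevity (the thesis used problems such as font recognition).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Continuously differentiable activation, usable with back-propagation.
    return 1.0 / (1.0 + np.exp(-x))

def hard_limit(x):
    # Step activation: 1 if the net input is non-negative, else 0.
    # Not differentiable, so it cannot be trained with back-propagation directly.
    return (x >= 0).astype(float)

# Toy stand-in task: XOR, with a 2-3-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)

lr = 2.0
for _ in range(10000):
    # Forward pass with sigmoid activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error gradient through the sigmoids).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

# Reuse the trained weights in a hard-limiter network.
h_hard = hard_limit(X @ W1 + b1)
out_hard = hard_limit(h_hard @ W2 + b2)
```

Because the hidden sigmoid values lie in (0, 1) while the hard-limiter ones are exactly 0 or 1, the two networks need not always agree, which is one plausible source of the generalization issues the abstract mentions.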