Inversion for explanation capability of neural networks and query-based learning
Neural network inversion is the process of obtaining the set of neural network inputs that produce a specific output. Network inversion can be used to generate an explanation of neural network behavior. Neural networks are known for their powerful capability to model real systems by learning from examples; a well-known drawback, however, is their "black box" character. By explanation capability we mean expressing the knowledge learned by the neural network in the form of comprehensible rules, so that the network's decisions are understandable to humans. Network inversion is also the core of query-based learning (QBL), an active learning technique in which training data are selectively generated to cover regions of the input space with high information content. This dissertation explores the use of network inversion in these two areas. Different means of inversion are presented, and gradient descent inversion of the probabilistic neural network (PNN) is derived. A new technique is proposed that generates an explanation of the neural network's decision when the network is used for classification; the technique can generate rules with any desired fidelity. A survey of existing neural network explanation algorithms is presented, and rule extraction is analyzed from an information theory point of view. The new explanation technique is applied to benchmark problems as well as to a real aerospace problem. A causality index, which provides a preliminary neural network explanation, is analyzed and compared with the proposed explanation technique. QBL is applied to two real aerospace problems: the first is a decision problem, and the second is a mapping problem with continuous output. Sigmoid scaling and jitter are explored as means of improving QBL.
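To make the idea of gradient descent inversion concrete, the following is a minimal sketch (not the dissertation's PNN derivation): the weights of a toy single-layer sigmoid network are held fixed, and gradient descent is run over the *input* so that the network's output approaches a specified target. All weights, targets, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def invert(W, b, target, steps=5000, lr=0.5, x0=None):
    """Gradient-descent inversion: find an input x such that
    sigmoid(W @ x + b) approximates `target`, with W and b held fixed.
    Minimizes L = 0.5 * ||y(x) - target||^2 by descending on x."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=W.shape[1]) if x0 is None else x0.copy()
    for _ in range(steps):
        y = sigmoid(W @ x + b)
        err = y - target                      # dL/dy
        grad_x = W.T @ (err * y * (1.0 - y))  # chain rule through the sigmoid
        x -= lr * grad_x
    return x

# Toy "trained" network with fixed, illustrative weights.
W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.0, -0.5])
target = np.array([0.8, 0.3])

x_inv = invert(W, b, target)
print(sigmoid(W @ x_inv + b))  # output is close to the target
```

The same loop, seeded from many random starting points, recovers the *set* of inputs mapping to a given output region; that set is what both the rule-extraction and the QBL query-generation applications operate on.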