This paper describes a special type of dynamic neural network called the Recursive Neural Network (RNN). The RNN is a single-input, single-output nonlinear dynamical system with three subnets: one nonrecursive subnet and two recursive subnets. The nonrecursive subnet feeds current and previous input samples through a multi-layer perceptron with second-order input units (SOMLP). In a similar fashion, the two recursive subnets feed previous output samples back through SOMLPs. The outputs of the three subnets are summed to form the overall network output. The purpose of this paper is to describe the architecture of the RNN, to derive a learning algorithm for the network based on a gradient search, and to provide examples of its use. The work in this paper extends previous work on the RNN, in which the network contained only two subnets: a nonrecursive subnet and a recursive subnet. Here we have added a second recursive subnet. In addition, both subnets in the previous RNN had linear input units, whereas all three subnets here have second-order input units. In many cases this allows the RNN to solve problems more efficiently, that is, with a smaller overall network. Furthermore, the use of the RNN for inverse modeling and control was never fully developed in the past. Here, for the first time, we derive the complete learning algorithm for the case where the RNN is used in the general model-following configuration. This configuration includes the following as special cases: system modeling, nonlinear filtering, inverse modeling, nonlinear prediction, and control.
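The forward pass described above (input and output tap-delay lines feeding three SOMLPs whose outputs are summed) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hidden-layer sizes, tanh activations, the number of delay taps, and the choice to give both recursive subnets the same output delay line are all assumptions made here for concreteness; the gradient-based learning algorithm derived in the paper is not shown.

```python
import numpy as np

def second_order_features(v):
    """Augment a tap vector with its pairwise products (second-order input units)."""
    pairs = np.outer(v, v)[np.triu_indices(len(v))]  # all v_i * v_j, i <= j
    return np.concatenate([v, pairs])

class SOMLP:
    """One-hidden-layer perceptron whose input layer sees second-order features."""
    def __init__(self, n_in, n_hidden, rng):
        n_feat = n_in + n_in * (n_in + 1) // 2  # linear taps + pairwise products
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_feat))
        self.b1 = np.zeros(n_hidden)
        self.w2 = 0.1 * rng.standard_normal(n_hidden)

    def __call__(self, v):
        h = np.tanh(self.W1 @ second_order_features(v) + self.b1)
        return float(self.w2 @ h)

class RecursiveNN:
    """RNN sketch: one nonrecursive subnet on input taps, two recursive
    subnets on previous-output taps; the three subnet outputs are summed."""
    def __init__(self, n_x=3, n_y=2, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.n_x, self.n_y = n_x, n_y
        self.net_x  = SOMLP(n_x, n_hidden, rng)  # nonrecursive subnet
        self.net_y1 = SOMLP(n_y, n_hidden, rng)  # recursive subnet 1
        self.net_y2 = SOMLP(n_y, n_hidden, rng)  # recursive subnet 2

    def run(self, x):
        x_buf = np.zeros(self.n_x)  # x(n), x(n-1), ...
        y_buf = np.zeros(self.n_y)  # y(n-1), y(n-2), ...
        out = []
        for sample in x:
            x_buf = np.roll(x_buf, 1); x_buf[0] = sample
            y = self.net_x(x_buf) + self.net_y1(y_buf) + self.net_y2(y_buf)
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            out.append(y)
        return np.array(out)
```

Because the output is fed back through nonlinear subnets, the exact gradient of an output error with respect to the weights depends on past outputs; this recursion through time is what makes the dynamic (gradient-search) learning algorithm derived in the paper necessary.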
The University of New Mexico
Recursive neural network, dynamic neural network, recurrent neural network, dynamic backpropagation, multi-layer perceptron, nonlinear control, nonlinear filter
Abdallah, Chaouki T.; Don Hush; and Bill Horne. "The recursive neural network." (2012). http://digitalrepository.unm.edu/ece_fsp/153