An Online Growing-and-Pruning Algorithm for a Feedforward Neural Network in Nonlinear System Modeling
Abstract
In recent decades, researchers and practitioners have devoted considerable attention to the optimal design of feedforward neural networks (FNNs). However, it remains a challenge to obtain an FNN with a compact structure that still meets specific performance requirements. Building on previous work in nonlinear system modeling, this paper investigates the interrelation of hidden nodes and the diversity of samples within sliding windows, and develops a new adaptive growing-and-pruning algorithm for FNNs (AGPA-FNN). More specifically, AGPA-FNN dynamically captures the interactions between hidden nodes through local sensitivity analysis and a mutual information method, and seamlessly prunes redundant hidden nodes through a weight-decay penalty mechanism. AGPA-FNN thus avoids disrupting the network structure and forgetting previously acquired knowledge as a consequence of sudden weight changes. For the learning algorithm, an effective technique (AOGD) is developed to accelerate gradient descent by maintaining sample diversity and a suitable learning rate within each window. Experimental investigations show that the proposed AGPA-FNN can effectively self-adjust its hidden nodes in response to data changes and ultimately achieve a compact structure, while outperforming other approaches in terms of generalization performance.
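To make the growing-and-pruning idea concrete, the following is a minimal Python sketch, not the AGPA-FNN algorithm itself. It assumes a single sigmoid hidden layer trained by windowed gradient descent, uses the variance of each node's output contribution as a stand-in for the local sensitivity measure and the pairwise correlation of hidden activations as a stand-in for mutual information, and applies a weight-decay penalty before removing a node. All thresholds (grow_err, sens_min, corr_max, decay, prune_w) and the RAN-style growth heuristic are illustrative placeholders, not values or formulas from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowPruneFNN:
    """Single-hidden-layer sigmoid FNN trained on sliding windows (illustrative)."""

    def __init__(self, n_in, n_hidden=3, lr=0.05):
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input -> hidden weights
        self.b = np.zeros(n_hidden)                            # hidden biases
        self.v = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output weights
        self.lr = lr

    def hidden(self, X):
        return sigmoid(X @ self.W.T + self.b)                  # shape (n, n_hidden)

    def predict(self, X):
        return self.hidden(X) @ self.v

    def train_window(self, X, y, grow_err=0.05, sens_min=1e-3,
                     corr_max=0.95, decay=0.9, prune_w=1e-2):
        # 1) Windowed gradient-descent update of all weights (squared-error loss).
        H = self.hidden(X)
        e = self.predict(X) - y
        grad_pre = (e[:, None] * self.v) * H * (1.0 - H)        # dE/d(pre-activation)
        self.v -= self.lr * (H.T @ e) / len(y)
        self.W -= self.lr * (grad_pre.T @ X) / len(y)
        self.b -= self.lr * grad_pre.mean(axis=0)

        # 2) Grow: if the window error is large, add a node seeded from the
        #    worst-fit sample (an illustrative RAN-style heuristic).
        mse = float(np.mean(e ** 2))
        if mse > grow_err:
            worst = int(np.argmax(np.abs(e)))
            self.W = np.vstack([self.W, X[worst][None, :]])
            self.b = np.append(self.b, 0.0)
            self.v = np.append(self.v, -float(e[worst]))

        # 3) Prune: flag nodes with a tiny output contribution (sensitivity
        #    stand-in) or a near-duplicate activation pattern (mutual-information
        #    stand-in), decay their outgoing weights, and drop them only once
        #    the decayed weight is negligible.
        H = self.hidden(X)
        contrib_var = np.var(H * self.v, axis=0)
        C = np.corrcoef(H.T) if H.shape[1] > 1 else np.ones((1, 1))
        redundant = contrib_var < sens_min
        for i in range(len(self.v)):
            for j in range(i):
                if abs(C[i, j]) > corr_max:
                    redundant[i] = True                         # keep the earlier twin
        self.v[redundant] *= decay                              # weight-decay penalty
        keep = ~(redundant & (np.abs(self.v) < prune_w))
        if keep.sum() >= 1:
            self.W, self.b, self.v = self.W[keep], self.b[keep], self.v[keep]
        return mse

# Toy usage: stream a nonlinear target through fixed-size sliding windows.
net = GrowPruneFNN(n_in=1)
x_stream = rng.uniform(-3.0, 3.0, size=(600, 1))
y_stream = np.sin(x_stream[:, 0]) + 0.05 * rng.normal(size=600)
for start in range(0, 600, 30):                                 # window size 30
    X, y = x_stream[start:start + 30], y_stream[start:start + 30]
    mse = net.train_window(X, y)
print("hidden nodes:", len(net.v), "| last-window MSE:", round(mse, 4))
```

Decaying the outgoing weight of a flagged node over several windows, rather than deleting it outright, mirrors the point made above: gradual weight decay avoids abruptly breaking the network structure and losing already acquired knowledge.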