Neural Networks in Control Systems

by Panos J. Antsaklis

The ever-increasing technological demands of our modern society require innovative approaches to highly demanding control problems. Artificial neural networks with their massive parallelism and learning capabilities offer the promise of better solutions, at least to some problems. By now, the control community has heard of neural networks and wonders if these networks can be used to provide better control solutions to old problems or perhaps solutions to control problems that have withstood our best efforts.

Background

Neural networks have the potential for very complicated behavior. They consist of many interconnected simple nonlinear systems, which are typically modeled by sigmoid functions. The massive interconnections of the rather simple neurons that make up the human brain provided the original motivation for the neural network models. The terms artificial neural networks and connectionist models are typically used to distinguish them from the biological networks of neurons of living organisms. Interest in neural networks has made a comeback in this decade after a period of relative inactivity following the shortcomings of early neural networks (the single-layer perceptron), which were publicized in the late 1960s. The renewed interest was due, in part, to powerful new neural models, the multilayer perceptron and the feedback model of Hopfield, and to learning methods such as back-propagation; but it was also due to advances in hardware that have brought within reach the realization of neural networks with very large numbers of nodes.

In a neural network, the simple nonlinear elements called nodes or neurons are interconnected, and the strengths of the interconnections are denoted by parameters called weights. These weights are adjusted, depending on the task at hand, to improve performance. They can be assigned new values in two ways: either determined via some prescribed off-line algorithm, remaining fixed during operation, or adjusted via a learning process. Learning is accomplished by, first, adjusting these weights step by step (typically to minimize some objective function) and, then, storing these best values as the actual strengths of the interconnections. The interconnections and their strengths provide the memory, which is necessary in a learning process.
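To make the weight-adjustment idea concrete, the following minimal sketch (written in Python for this introduction and not drawn from any of the articles; the data, step size, and node details are purely illustrative) shows a single sigmoid node whose weights are nudged step by step down the gradient of a squared-error objective.

```python
import numpy as np

def sigmoid(x):
    # Typical smooth nonlinearity used at each node
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative data: inputs and desired outputs for one node (an OR-like task)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)      # interconnection strengths (weights)
b = 0.0                                # bias term
eta = 0.5                              # learning-rate (step size)

for epoch in range(2000):
    y = sigmoid(X @ w + b)             # node output
    err = y - d                        # deviation from the desired response
    grad_w = X.T @ (err * y * (1 - y)) # gradient of the squared-error objective
    grad_b = np.sum(err * y * (1 - y))
    w -= eta * grad_w                  # adjust the weights step by step ...
    b -= eta * grad_b                  # ... to reduce the objective function

print("learned weights:", w, "bias:", b)
```

A larger network simply repeats this kind of node many times and adjusts all the interconnection strengths together.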

The ability to learn is one of the main advantages that make neural networks so attractive. They also have the capability of performing massive parallel processing, in contrast to von Neumann machines, the conventional digital computers in which instructions are executed sequentially. Neural networks can also provide, in principle, significant fault tolerance, since damage to a few links need not significantly impair the overall performance. The benefits are most dramatic when a large number of nodes is used and the network is implemented in hardware. The hardware implementation of neural networks is currently a very active research area; optical and more conventional means of implementing these large networks have been suggested.

Neural networks are characterized by their network topology, that is, by the number of interconnections; by their node characteristics, which are classified by the type of nonlinear elements used; and by the kind of learning rules implemented. A clear and concise general introduction to neural networks is given in [1], where the emphasis is on pattern recognition, an area that is particularly well suited for neural network applications. Neural networks have been the topic of a number of special issues [2], [3], and these are good sources of recent developments in other areas. Collections of neural network papers with emphasis on control applications have appeared in [4], [5].

Control Technology

The use of neural networks in control systems can be seen as a natural step in the evolution of control methodology to meet new challenges. Looking back, the evolution in the control area has been fueled by three major needs: the need to deal with increasingly complex systems, the need to accomplish increasingly demanding design requirements, and the need to attain these requirements with less precise advance knowledge of the plant and its environment, that is, the need to control under increased uncertainty. Today, the need to control, in a better way, increasingly complex dynamical systems under significant uncertainty has led to a reevaluation of the conventional control methods, and it has made the need for new methods quite apparent. It has also led to a more general concept of control, one that includes higher-level decision making, planning, and learning, capabilities that are necessary when higher degrees of system autonomy are desirable. These ideas are elaborated upon in [6].

In view of this, it is not surprising that the control community is seriously and actively searching for ideas to deal effectively with the increasingly challenging control problems of our modern society. Need is the mother of invention, and this has been true in control since the times of Ktesibios and his water clock with its feedback mechanism in the third century B.C. [7], the earliest feedback device on record. So the use of neural networks in control is a rather natural step in this evolution. Neural networks appear to offer new and promising directions toward better understanding and perhaps even solving some of our most difficult control problems. History, of course, has made clear that neural networks will be accepted and used if they solve problems that have previously been impossible or very difficult to solve; they will be rejected, and will be just a fast-fading novelty, if they do not prove useful. The challenge is to find the best way to fully utilize this powerful new tool in control; the jury is still out, as their best uses have not yet been decided. It is hoped that this special issue will raise interest in neural networks and will provide challenges and food for thought.

Special Issue

This special issue contains 11 articles. Early versions of most of these articles were presented at conferences on control, robotics, or neural networks in 1989. In selecting these articles, the emphasis was placed on presenting as varied and current a picture as possible. Additional articles were commissioned specifically for this special issue to make the exposition more complete and self-contained. Applications were emphasized, but rigor was also valued. Complete proofs of the results, however, were not included; nevertheless, the authors take full responsibility for their claims! Please remember that this is a window with a view toward control applications of neural networks. It was opened originally to include papers from the 1989 American Control Conference, and then it was widened to give a more comprehensive picture. Nevertheless, it is still a window. This is not a survey issue; this is a special issue designed to raise interest, to be thought provoking, to generate ideas, and, I hope, also a bit of controversy. Neural networks are very powerful tools. Let's tame them, modify them to better fit our needs, and use them most effectively, all in the best engineering tradition.

There are several topics covered in the articles in this special issue. The first article, by A. N. Michel and J. A. Farrell, introduces mathematical models of neural networks and discusses algorithms to assign the weights in associative memories. The next article, by D. Nguyen and B. Widrow, introduces applications by using neural networks to model and control a highly nonlinear system, a trailer truck backing up to a loading dock. Modeling of chemical processes is addressed in the third article, by N. V. Bhat, P. Minderman, T. McAvoy, and N. Wang; such processes are typically very complex, and neural networks do offer a very attractive alternative, as these models are, perhaps, better learned than fully detailed. System identification in the time and frequency domains is the topic of the next article, by R. Chu, R. Shoureshi, and M. Tenorio. In order to effectively use neural networks in control problems, the neural controllers must be compared with conventional ones; this is the direction taken in the fifth article, by L. G. Kraft and D. P. Campagna, where a neural controller and certain conventional adaptive controllers are applied to the same simple system and the results are compared. The sixth article, by F. C. Chen, discusses a method to introduce neural networks to enhance self-tuning controllers so as to be able to deal with large classes of nonlinear systems; the back-propagation learning algorithm is used. In the seventh article, by S. R. Naidu, E. Zafiriou, and T. J. McAvoy, neural networks and back-propagation are used for sensor failure detection in chemical process control systems. Additional information about learning algorithms in neural networks is given in the next article, by S. C. Huang and Y. F. Huang; back-propagation is discussed and certain extensions are introduced. The next two articles are experimental applications of neural networks to control complex systems in real time. The pitch attitude of an underwater telerobot is regulated in the ninth article, by R. M. Sanner and D. L. Akin, and the experimental results are presented. Mobile robots with many sensors learn to interact in the next article, by S. Nagata, M. Sekiguchi, and K. Asakawa; the robots demonstrate their abilities by playing a form of the cops-and-robbers game. The interaction of rule-based systems and neural networks is studied by D. A. Handelman, S. H. Lane, and J. J. Gelfand in the last article, and a controller integrating the two is developed; it is used to teach a two-link robot manipulator a tennislike swing. A more detailed description of the articles follows.

The mathematical framework necessary for in-depth studies of several system and control applications of neural networks is set in the first article, by A. N. Michel and J. A. Farrell, titled "Associative Memories via Artificial Neural Networks," where mathematical models are introduced and methods are described to design associative memories using feedback neural networks. Neural networks with full feedback interconnections are of interest here. Their dynamical behavior, studied via differential equations, exhibits stable equilibrium states, each of which attracts neighboring states as the network evolves in time. This time evolution toward the equilibrium points can be seen as the attraction of an imperfect pattern toward the correct one, stored as a stable equilibrium. Several design methods are presented to appropriately assign the weights so that the resulting networks behave as associative memories. A neural network so designed can be useful in control, for example, as an advanced dictionary of different control algorithms: when certain operating conditions are present, they are matched against stored conditions, and the control action corresponding to the conditions that most closely match the current operating conditions is selected. Other applications of associative memories to control are, of course, possible.
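As a generic illustration of the weight-assignment idea (and not the specific design methods of the article), the sketch below uses the classical outer-product rule to store two made-up bipolar patterns in a small discrete Hopfield-type network; a corrupted probe is then attracted back toward the stored pattern.

```python
import numpy as np

# Two illustrative bipolar (+1/-1) patterns to be stored as stable equilibria
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])

n = patterns.shape[1]
# Outer-product (Hebbian) weight assignment; zero the self-connections
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(x, sweeps=10):
    # Asynchronous updates: the state is attracted toward a stored pattern
    x = x.copy()
    for _ in range(sweeps):
        for i in range(n):
            h = W[i] @ x
            if h > 0:
                x[i] = 1
            elif h < 0:
                x[i] = -1
            # on a zero field the node keeps its current value
    return x

# An "imperfect pattern": the first stored pattern with one element flipped
probe = np.array([1, -1, 1, -1, 1, 1])
print(recall(probe))   # recovers the first stored pattern
```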

A method to use neural networks to control highly nonlinear systems is presented by D. Nguyen and B. Widrow in "Neural Networks for Self-Learning Control Systems." Feedforward, multilayered neural networks are used, and learning via the back-propagation algorithm is employed to determine the neural network weights, first to model the plant and then to design the controller. First, a neural network emulator learns to identify the dynamic characteristics of the system. The controller, another multilayered network, then learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. Learning continues, and the emulator and controller improve as they track the physical process. The power of this approach is demonstrated by using the method to steer a trailer truck while backing up to a loading dock.
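The two-stage scheme can be pictured with a drastically simplified sketch, written for this introduction rather than taken from the article: a small emulator network first learns a made-up scalar plant from input/output samples, and a simple parameterized controller is then tuned against the frozen emulator. Finite-difference gradients stand in here for back-propagation through the emulator, and every plant, network, and training detail is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative unknown scalar plant: x_{k+1} = f(x_k, u_k)
def plant(x, u):
    return 0.8 * np.sin(x) + 0.5 * u

# --- Stage 1: an emulator network learns the plant from input/output data ---
W1 = rng.normal(scale=0.5, size=(8, 2)); b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def emulator(x, u):
    h = np.tanh(W1 @ np.array([x, u]) + b1)
    return w2 @ h + b2, h

eta = 0.02
for _ in range(20000):
    x, u = rng.uniform(-2, 2), rng.uniform(-2, 2)
    y, h = emulator(x, u)
    e = y - plant(x, u)                         # prediction error
    w2 -= eta * e * h                           # gradient steps on 0.5*e**2
    b2 -= eta * e
    dh = e * w2 * (1 - h**2)
    W1 -= eta * np.outer(dh, np.array([x, u]))
    b1 -= eta * dh

# --- Stage 2: a simple controller is tuned against the frozen emulator ------
# Controller (deliberately simple here): u = k[0]*x + k[1], tuned so that the
# emulator's predicted state is driven toward the origin.
k = np.array([0.0, 0.0])

def predicted_cost(k):
    x, c = 1.5, 0.0
    for _ in range(15):
        x, _ = emulator(x, k[0] * x + k[1])
        c += x**2
    return c

for _ in range(200):
    # finite-difference gradient instead of back-propagation through the emulator
    g = np.array([(predicted_cost(k + d) - predicted_cost(k - d)) / 2e-3
                  for d in (np.array([1e-3, 0]), np.array([0, 1e-3]))])
    k -= 0.01 * g

# --- The tuned controller applied to the actual plant -----------------------
x = 1.5
for step in range(10):
    x = plant(x, k[0] * x + k[1])
print("final state with tuned controller:", x)
```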

The main emphasis in the next two articles is on system modeling. The modeling of nonlinear chemical systems using neural networks and learning is addressed by N. V. Bhat, P. Minderman, T. McAvoy, and N. Wang in "Modeling Chemical Process Systems via Neural Computation." Back-propagation is used to learn the nonlinear neural network model from plant input/output data and to interpret biosensor data. Typical chemical processes to be controlled are rather complex, and, frequently, the relationships are perhaps better learned than fully detailed. Two reactor examples are considered: a steady-state reactor and a dynamic pH stirred tank system; the interpretation of sensor data is illustrated using a fluorescence spectra example.

Two methods for identification of dynamical systems are described in "Neural Networks for System Identification" by R. Chu, R. Shoureshi, and M. Tenorio. First, a technique for assigning weights in a Hopfield network is developed to perform system identification in the time domain; it involves the minimization of a least-mean-square error criterion on the state variable estimates. System identification in the frequency domain is also illustrated, and it is shown that transfer functions of dynamical plants can be identified via neural networks.
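Stripped of the Hopfield-network machinery of the article, the underlying time-domain idea is least-squares fitting of model parameters to measured data; the toy sketch below (written for this introduction, with a made-up first-order plant) identifies two parameters by gradient descent on the mean-square one-step prediction error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "true" discrete-time plant: x_{k+1} = a*x_k + b*u_k
a_true, b_true = 0.9, 0.4
u = rng.uniform(-1, 1, size=200)
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k]

# Identify (a, b) by gradient descent on the mean-square prediction error
a_hat, b_hat, eta = 0.0, 0.0, 0.05
for _ in range(500):
    e = a_hat * x[:-1] + b_hat * u - x[1:]   # one-step prediction errors
    a_hat -= eta * np.mean(e * x[:-1])       # gradient of 0.5*mean(e**2) w.r.t. a
    b_hat -= eta * np.mean(e * u)            # gradient w.r.t. b

print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")   # close to 0.9 and 0.4
```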

Conventional adaptive controllers and neural network-based controllers are compared in the article by L. G. Kraft and D. P. Campagna entitled "A Comparison Between CMAC Neural Network Control and Two Traditional Adaptive Control Systems." If neural network controllers are to be used in the control of dynamic systems, they must be evaluated against controllers that are designed using conventional control theory. A self-tuning regulator and a model reference adaptive controller are compared with a neural cerebellar model articulation controller. They are all used to control the same simple system, and the results are tabulated and discussed at length.

A method to provide adaptive control for nonlinear systems is introduced in "Back-Propagation Neural Networks for Nonlinear Self-Tuning Adaptive Control" by F. C. Chen. The author uses a neural network and the back-propagation algorithm to alter and enhance a self-tuning controller so that it can deal with unknown, feedback linearizable, nonlinear systems. Simulations of a nonlinear plant controlled by such neural controllers are included to illustrate the method.

Neural networks and back-propagation are proposed by S. R. Naidu, E. Zafiriou, and T. McAvoy for sensor failure detection in "Use of Neural Networks for Sensor Failure Detection During the Operation of a Control System." The ability to reliably detect failures is essential if a certain degree of autonomy is to be attained. Process control systems are of main interest here. Backpropagation is used for sensor failure detection, and the algorithm is compared via simulations with other fault-detection algorithms.

Most of the neural network applications seem to incorporate some form of learning. Learning is discussed by S. C. Huang and Y. F. Huang in "Learning Algorithms for Perceptrons Using Back-Propagation with Selective Updates." The ability to learn is one of the main advantages of neural networks. Learning algorithms are discussed in general, with the main emphasis on supervised algorithms. The back-propagation algorithm used in feedforward types of networks is discussed at length, and an extension is presented. These learning algorithms are applied, for illustration, to a perceptron associative memory.
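For readers who have not seen back-propagation written out, the following compact sketch (again in Python, with an illustrative two-input task and arbitrary network sizes, not taken from the article) trains a two-layer feedforward network by propagating the output error backward and descending the gradient of the squared error.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative supervised-learning task: XOR, not solvable by a single layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)   # output layer
eta = 0.5

for epoch in range(20000):
    # forward pass through the feedforward network
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # backward pass: propagate the output error back through the layers
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # gradient-descent updates of all weights and biases
    W2 -= eta * H.T @ dY;  b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH;  b1 -= eta * dH.sum(axis=0)

print(np.round(Y, 2))   # outputs should approach the XOR targets 0, 1, 1, 0
```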

R. M. Sanner and D. L. Akin in "Neuromorphic Pitch Attitude Regulation of an Underwater Telerobot," present the experimental results of using trained neural networks to regulate the pitch attitude of an underwater telerobot. These experimental results are a follow-up of their previous work involving computer simulations only. The neural network performed as predicted in simulations; however, it was observed that unacceptable delays can be introduced if a single serial microprocessor is used to calculate the control action. Hardware implementations of neural networks are seen as necessary.

The control of mobile robots is the topic addressed by S. Nagata, M. Sekiguchi, and K. Asakawa in "Mobile Robot Control by a Structured Hierarchical Neural Network." Neural networks are used to process data from many sensors for the real-time control of mobile robots and to provide the necessary learning and adaptation capabilities for responding to the environmental changes in real time. For this, a structured hierarchical neural network and its learning algorithm are used, and the network is divided into two parts connected with each other via short memory units. This approach is applied to several robots, which learn to interact and participate in a form of the cops-and-robbers game.

D. A. Handelman, S. H. Lane, and J. J. Gelfand, in "Integrating Neural Networks and Knowledge-Based Systems for Intelligent Robotic Control," address the issues involved when integrating these quite distinct systems, which offer very different capabilities. To demonstrate the integration technique and the interaction of the two systems, a two-link robot manipulator is taught how to make a tennislike swing. The rule-based system first determines how to make a successful swing using rules alone. It then teaches a neural network to perform the task. The rule-based system continues to evaluate the neural network performance, and, if changes in the operating conditions make it necessary, it retrains the neural network.

It has been a pleasure bringing these articles to you. I am sure you will find them interesting and perhaps useful in your work. If there is a message to be stressed here, which I hope has become apparent by now, it is this: Neural networks in control must be studied by using mathematical rigor in the tradition of our discipline. Only in this way can we harvest the full benefits of these powerful new tools. Only in this way can we create something lasting and useful for the years to come.

Acknowledgements

I am indebted to the authors and referees for all their efforts put forth in developing this special issue. I would also like to thank the Magazine Editor, Herb Rauch, for his help and his apparently boundless energy, which was an example to me throughout this period.

References

[1] R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE Acoustics, Speech, Signal Proc. Mag., pp. 4-22, Apr. 1987.
[2] B. D. Shriver, Guest Editor of Special Issue on Artificial Neural Systems, IEEE Computer, vol. 21, no. 3, Mar. 1988.
[3] N. El-Leithy and R. W. Newcomb, Guest Editors of Special Issue on Neural Networks, IEEE Trans. Circ. Syst., vol. 36, no. 5, May 1989.
[4] B. Bavarian, Guest Editor of Special Section on Neural Networks for Systems and Control, IEEE Contr. Syst. Mag., vol. 8, no. 2, pp. 3-31, Apr. 1988.
[5] Special Section on Neural Networks for Control Systems, IEEE Contr. Syst. Mag., vol. 9, no. 3, pp. 25-59, Apr. 1989.
[6] P. J. Antsaklis, K. M. Passino, and S. J. Wang, "Towards Intelligent Autonomous Control Systems: Architecture and Fundamental Issues," J. Intell. Robotic Syst., vol. 1, pp. 315-342, 1989; a shorter version appeared in the Proceedings of the American Control Conference, pp. 602-607, Atlanta, GA, June 15-17, 1988.
[7] O. Mayr, The Origins of Feedback Control, Cambridge, MA: MIT Press, 1970.



Panos J. Antsaklis received his diploma in mechanical and electrical engineering from the National Technical University of Athens, Greece, in 1972, and his M.S. and Ph.D. degrees in electrical engineering from Brown University in 1974 and 1977, respectively. After holding faculty positions at Brown University, Rice University, and Imperial College, University of London, he joined the University of Notre Dame, where he is currently a Full Professor in the Department of Electrical and Computer Engineering. In the summer of 1986, he was a NASA Faculty Fellow at the Jet Propulsion Laboratory, Pasadena, California. He was a Senior Visiting Scientist at the Laboratory for Information and Decision Systems of MIT during his sabbatical leave in 1987. His research interests are in multivariable systems and control theory, discrete event systems, adaptive learning and reconfigurable control, autonomous systems, and neural networks. He has published a number of technical results in those areas.

Dr. Antsaklis has served as Associate Editor of the IEEE Transactions on Automatic Control and is currently Chairman of the Technical Committee on Theory and head of the Working Group on Control Systems in the Technical Committee on Intelligent Control of the IEEE Control Systems Society. He is also an Associate Editor of the new IEEE Transactions on Neural Networks.