Learning is an important dimension or attribute of intelligent control [1]. Highly autonomous behavior is a very desirable characteristic of advanced control systems, so that they perform well under changing conditions in the plant and the environment (even in the control goals) without external intervention. This requires the ability to adapt to changes that significantly affect the operating region of the system. Adaptive behavior of this type is not typically offered by conventional control systems; additional decision-making abilities must be added to meet the increased control requirements. The controller's capacity to learn from past experience is an integral part of such highly autonomous controllers. The goal of introducing learning methods in control is to broaden the region of operability of conventional control systems. Therefore, the ability to learn is one of the fundamental attributes of autonomous intelligent behavior [1,2].
Machine Learning
The ability of man-made systems to learn from experience and, based on that experience, improve their performance is the focus of machine learning. Learning can be seen as the process whereby a system alters its actions to perform a task more effectively as its knowledge related to the task increases. The actions that a system takes depend on the nature of the system. For example, a control system may change the type of controller used, or vary the parameters of the controller, after learning that the current controller does not perform satisfactorily in a changing environment. Similarly, a robot may need to change its visual representation of the surroundings after learning of new obstacles in the environment. The type of action taken thus depends on the nature of the system and the type of learning system implemented. The ability to learn entails such issues as knowledge acquisition, knowledge representation, and some level of inference capability. Learning is considered fundamental to intelligent behavior, and in particular the computer modeling of learning processes has been the subject of research in the field of machine learning for the past 25 years [3,4].
Learning Control
The problem of learning in automatic control systems was studied extensively in the past, especially in the late '60s, and it has been the topic of numerous papers and books; see, for example, [5-9]. References [5,7,9] provide surveys of the early learning techniques. All of these problems involve a process of classification in which all or part of the required prior information is unknown or incompletely known. The elements or patterns presented to the control system are collected into groups that correspond to different pattern classes or regions [9]. Thus, learning was viewed as the estimation or successive approximation of the unknown quantities of a function [5]. The approaches developed for such learning problems can be separated into two categories: deterministic and stochastic.
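To make the classical stochastic viewpoint concrete, here is a minimal Python sketch, not taken from any of the cited papers, of learning as successive approximation of the unknown parameters of a function from noisy observations, in the Robbins-Monro spirit of the techniques surveyed in [5,9]. The linear-in-parameters model, step-size schedule, and synthetic data are illustrative assumptions.

```python
import numpy as np

def stochastic_approximation(samples, n_params, gamma0=1.0):
    """Successively approximate the unknown parameters theta of a
    linear-in-parameters function y = phi . theta from noisy samples,
    using a decreasing (Robbins-Monro style) step size."""
    theta = np.zeros(n_params)                 # initial guess for the unknowns
    for k, (phi, y) in enumerate(samples, start=1):
        gamma = gamma0 / k                     # steps satisfy sum gamma = inf, sum gamma^2 < inf
        error = y - phi @ theta                # prediction error on the new sample
        theta = theta + gamma * error * phi    # correct the estimate along the regressor
    return theta

# Hypothetical example: recover theta_true = [2.0, -1.0] from noisy data.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
samples = []
for _ in range(500):
    phi = rng.normal(size=2)                        # regressor (pattern features)
    y = phi @ theta_true + 0.1 * rng.normal()       # noisy measurement
    samples.append((phi, y))

print(stochastic_approximation(samples, n_params=2))  # approaches theta_true
```

A deterministic counterpart would replace the noisy samples and decreasing gains with exact measurements and a fixed iterative correction.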
Where can learning be used in the control of systems? As mentioned, learning plays an essential role in the autonomous control of systems; see, for example, [2], also [10,11]. There are many areas in control where learning can be used to advantage, and these needs can be briefly classified as follows: (1) learning about the plant; that is, learning how to incorporate changes and how to derive new plant models; (2) learning about the environment; this can be done using methods ranging from passive observation to active experimentation; (3) learning about the controller; for example, learning how to adjust certain controller parameters to enhance performance; and (4) learning new design goals and constraints.
Learning and Adaptive Control
What is the relation between adaptive control and learning control [10]? Learning is achieved, in a certain sense, when an adaptive control algorithm is used to adapt the controller parameters so that, for example, stability is maintained. In this case, the system learns, and the knowledge acquired is the new value of the parameters. Note, however, that if the same changes occur again later and the system is described by exactly the same parameters identified earlier, the adaptive control algorithm must still recalculate the controller and perhaps the plant parameters, since nothing was kept in memory. So, in that sense, the system has not learned. It has certainly learned what to do when certain types of changes take place; in particular, it was told exactly what to do, that is, it was given the adaptive algorithm, and this is knowledge acquired by rote learning. The knowledge represented by the new values of the controller and plant parameters, and the circumstances under which these values are appropriate, is not retained. So a useful rule of thumb is that a learning controller requires memory where past knowledge is stored so that it can be used when a similar situation arises.
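As a minimal sketch of this rule of thumb, the Python fragment below wraps an adaptive-tuning routine with a memory keyed by the observed operating condition: when a previously seen condition recurs, the stored controller parameters are recalled instead of being re-identified. The class name, quantization scheme, and tuning routine are hypothetical illustrations, not any published architecture.

```python
class LearningController:
    """An adaptive controller becomes a learning controller once past
    knowledge is retained: a memory maps operating conditions to
    controller parameters, so re-adaptation is skipped when a similar
    situation arises."""

    def __init__(self, adapt_fn, resolution=0.1):
        self.adapt_fn = adapt_fn      # any routine that tunes parameters for a condition
        self.resolution = resolution  # coarseness of "similar situation" matching
        self.memory = {}              # condition signature -> stored parameters

    def _signature(self, condition):
        # Quantize the measured operating condition so nearby conditions
        # share one memory cell ("similar situation").
        return tuple(round(c / self.resolution) for c in condition)

    def parameters_for(self, condition):
        key = self._signature(condition)
        if key in self.memory:                # learned before: recall, don't re-adapt
            return self.memory[key]
        params = self.adapt_fn(condition)     # otherwise adapt as usual...
        self.memory[key] = params             # ...and retain the result
        return params

# Hypothetical use: the (possibly expensive) tuning routine runs only once
# per operating region; later visits to that region hit the memory.
ctrl = LearningController(adapt_fn=lambda cond: {"kp": 1.0 + cond[0], "ki": 0.5})
p1 = ctrl.parameters_for((0.52, 1.01))   # adapts and stores
p2 = ctrl.parameters_for((0.54, 1.03))   # recalled from memory, no re-adaptation
```

A purely adaptive controller corresponds to the same loop with the memory removed: every recurrence of a condition triggers a fresh identification.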
Regarding terminology, it is perhaps beneficial at this point to bring in a bit of history. In the '60s, adaptive control and learning received a lot of attention in the control literature. It was not always clear, however, what was meant by those terms. The comment by Y. Tsypkin in [8] describes quite clearly the atmosphere of the period: "It is difficult to find more fashionable and attractive terms in the modern theory of automatic control than the terms of adaptation and learning. At the same time, it is not simple to find any other concepts which are less complex and more vague." Adaptation, learning, and self-organizing systems and control were competing terms for similar research areas, and K.S. Fu says characteristically in [5], "The use of the word 'adaptive' has been intentionally avoided here ... adaptive and learning are behavior-descriptive terms, but feedback and self-organizing are structure, or system configuration, descriptive terms. Nevertheless the terminology war is still going on.... It is certainly not the purpose of this paper to get involved with such a war."
The term "pattern recognition" also was appearing with adaptive, learning, and self-organizing systems in the control literature of that era. It is obvious that there was no agreement as to the meaning of these terms and their relation. Today, pattern recognition is a research discipline in its own right, developing and using an array of methods ranging from conventional algorithms to artificial intelligence methods implemented via symbolic processing. The term selforganizing system is not being used as much today in the control literature. Adaptive control has gained renewed popularity in the past few decades, mainly emphasizing studies in the convergence of adaptive algorithms and in the stability of adaptive systems; the systems considered are primarily systems described by differential (or difference) equations where the coefficients are (partially) unknown. In an attempt to enhance the applicability of adaptive control methods, learning control has been recently reintroduced in control literature; see, for example, [12,13] for learning methods in control with emphasis on neural networks.
Special Issue
This special issue on intelligent learning control emphasizes the importance of learning in the control of systems today. Reay et al. use fuzzy adaptive systems to learn nonlinear current profiles in a 4 kW, four-phase switched reluctance motor that minimize torque ripple. Simulations supported by experiments demonstrate torque ripple reduction in the actual system. Learning mechanisms are currently implemented via a variety of algorithms; predominant among them are neural network algorithms [12,14]. Failure diagnosis and accommodation abilities are essential in control systems with a high degree of autonomy. Polycarpou and Vemuri develop a methodology, involving neural networks, to detect faults, learn the new system model, and reconfigure the control law. Changes in the system dynamics are monitored by an on-line approximation model, and nonlinear estimation and stable learning algorithms are developed. Simulations of the Van der Pol oscillator illustrate the results. Lemmon et al. discuss algorithms that originated in the computational learning area. In particular, learning algorithms that infer Boolean functions from finite-sized example sets are used to learn certain control concepts and provide control laws in a computationally efficient way. Applications of the methodology to the stabilization of a communications satellite and the supervision of a discrete event system illustrate the approach. Narendra et al. present a control methodology for highly uncertain systems that involves switching to an appropriate controller followed by adaptation. An application to a robotic manipulator illustrates the approach. Michel et al. examine the effect of parameter perturbations, transmission delays, and interconnection constraints on the accuracy and qualitative properties of recurrent neural nets. Hopfield-like neural networks implemented as associative memories are used. Kim et al. present a methodology based on genetic algorithms to optimize the performance of fuzzy net controllers. It employs a genetic algorithm optimizer and high-performance simulations to significantly reduce the design cycle time; two design examples illustrate the approach. In the final feature article, Shenoi et al. report on the design and implementation of a practical learning fuzzy controller using inexpensive hardware.
Acknowledgements
I would like to thank the authors for their contributions and all the reviewers of the papers for their time, effort, and valuable feedback. I would especially like to thank the Editor of IEEE Control Systems, Steve Yurkovich, who worked tirelessly to make this issue a success. Without his efforts this special issue on Intelligent Learning Control would not have been possible.
References
[1] "Defining Intelligent Control," Report of the Task Force on Intelligent Control, RJ. Antsaklis, Chair, IEEE Control Systems Magazine, pp. 4-5 & 58-66, June 1994.
[2] RJ. Antsaklis and K.M. Passino, eds., An Introduction to Intelligent and Autonomous Control, 448 pp., Kluwer Academic Publishers,1993.
[3] R.S. Michalski, J.G. Carbonell, and T.M. Mitchell, Machine Learning: An Artificial Intelligence Approach, Tioga, Palo Alto, 1983. Also R.S. Michalski and G. Tecuci, eds., Machine Learning: A MultistrategyApproach, vol. IV, Morgan-Kaufmann, 1994.
[4] J.W. Shavlik and T.G. Dietterich, eds., Readings in Machine Learning, Morgan-Kaufmann, 1990.
[5] K.S. Fu, "Learning Control SystemsÑReview and Outlook," IEEE Transactions on Automatic Control, vol. AC- 15, pp. 210-221, April 1970.
[6] J.M. Mendel, and K.S. Fu, Adaptive, Learning and Pattern Recognition Systems, Academic Press, New York, 1970.
[7] J. Sklansky, "Learning Systems for Automatic Control," IEEE Transactions on Automatic Control, vol. AC-11, pp. 6-19, January 1966.
[8] Y. Tsypkin, Adaptation and Learning in Automatic Systems, Academic Press, New York, 1971.
[9] Y. Tsypkin, "Self-Learning: What Is It?," IEEE Transactions on Automatic Control, vol. AC-13, pp. 608-612, December 1968.
[10] Panel discussion on "Machine Learning in a Dynamic World," Proc. of the 3rd IEEE International Symposium on Intelligent Control, Arlington, VA, August 24-26, 1988.
[11] M.D. Peek and P.J. Antsaklis, "Parameter Learning for Performance," IEEE Control Systems Magazine, pp. 3-11, December 1990.
[12] D.A. White and D.A. Sofge, eds., Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, Van Nostrand Reinhold, 1992.
[13] J. Farrell, T. Berger, and B. Appleby, "Using Learning Techniques to Accommodate Unanticipated Faults," IEEE Control Systems Magazine, Special Issue on Intelligent Control, K. Passino, Guest Editor, pp. 40-49, June 1993.
[14] Special issues on Neural Networks in Control Systems of the IEEE Control Systems Magazine, vol. 10, no. 3, pp. 3-87, April 1990; also vol. 12, no. 3, pp. 8-57, April 1992. P.J. Antsaklis, Guest Editor.
Panos J. Antsaklis received his Diploma in Mechanical and Electrical Engineering from the National Technical University of Athens (NTUA), Greece, in 1972 and his M.S. and Ph.D. degrees in Electrical Engineering from Brown University in 1974 and 1977, respectively. He is currently professor of electrical engineering at the University of Notre Dame. He has held faculty positions at Brown University, Rice University, and Imperial College, University of London. During sabbatical leaves, he has been senior visiting scientist at LIDS of MIT in 1987 and at Imperial College in 1992; he also was visiting professor at NTUA in 1992 and at the Technical University of Crete, Greece, in 1993. His research interests are in systems and control theory, with emphasis on the control of hybrid and discrete event systems; autonomous, intelligent, and learning control systems; neural networks; and methodologies for reconfigurable control. He is co-editor (with K.M. Passino) of the recent volume An Introduction to Intelligent and Autonomous Control (Kluwer Academic, 1993). He was the program chair of the 30th IEEE Conference on Decision and Control (CDC) in England in 1991, and has been associate editor of the IEEE Transactions on Automatic Control and of the IEEE Transactions on Neural Networks. He was the guest editor of the 1990 and 1992 special issues on "Neural Networks in Control Systems" of the IEEE Control Systems Magazine (CSM). He served as the general chair of the 1993 8th IEEE International Symposium on Intelligent Control in Chicago, and he is the general chair of the 1995 34th CDC in New Orleans. He has been an elected member of the IEEE Control Systems Society (CSS) Board of Governors since 1991 and is CSS vice president-conferences for 1994 and 1995. He is an IEEE Fellow.