From Conventional to Intelligent Control
Evolution and Quest for Autonomy
The first feedback device on record was the water clock invented by the Greek Ktesibios in Alexandria, Egypt, around the 3rd century B.C. It was certainly a successful device: water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 A.D.! The first mathematical model to describe plant behavior for control purposes is attributed to J.C. Maxwell, of Maxwell's equations fame, who in 1868 used differential equations to explain instability problems encountered with James Watt's flyball governor; the governor had been introduced in the late 18th century to regulate the speed of steam engines. Control theory made significant strides over the past 120 years, with the use of frequency domain methods and Laplace transforms in the 1930s and 1940s and the development of optimal control methods and state space analysis in the 1950s and 1960s. These advances, followed by progress in stochastic, robust, and adaptive control methods from the 1960s to the present, have made it possible to control far more complex dynamical systems, and to control them much more accurately, than the original flyball governor.
When J.C. Maxwell used mathematical modeling and methods to explain the instability problems encountered with James Watt's flyball governor, he demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena and signaled the beginning of mathematical system and control theory. It also signaled the end of the era of intuitive invention. The performance of the flyball governor had been sufficient to meet the control needs of the day. As time progressed and more demands were placed on the device, there came a point when a better and deeper understanding of the governor became necessary, as it started exhibiting undesirable and unexplained behavior, in particular oscillations. This is quite typical of man-made systems even today: systems based on intuitive invention rather than quantitative theory can be rather limited. To control highly complex and uncertain systems we need a deeper understanding of the processes involved and systematic design methods; that is, we need quantitative models and design techniques. Such a need is quite apparent in intelligent autonomous control systems, and in particular in hybrid control systems.
Conventional control design methods: Conventional control systems are designed using mathematical models of physical systems. A mathematical model that captures the dynamical behavior of interest is chosen, and control design techniques are then applied, aided by Computer Aided Design (CAD) packages, to produce the mathematical model of an appropriate controller. The controller is then realized in hardware or software and used to control the physical system. The procedure may take several iterations. The mathematical model of the system must be "simple enough" to be analyzed with available mathematical techniques, and "accurate enough" to describe the important aspects of the relevant dynamical behavior. Typically, such a model approximates the behavior of the plant in the neighborhood of an operating point.
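To make this design loop concrete, here is a minimal sketch in Python for a hypothetical first-order linear model valid near an operating point; the plant coefficients, the desired pole location, and the simulation settings are all illustrative assumptions, not values from the text. It walks through the same steps: pick the model, design the controller (here, a proportional gain chosen by pole placement), realize it, and test the closed loop.

    # Hypothetical linearized plant near an operating point:
    #   dx/dt = a*x + b*u   (x: output deviation, u: input deviation)
    a, b = -1.0, 2.0                 # illustrative model coefficients

    # Design step: choose a proportional gain k so that the closed-loop
    # pole a - b*k lands at a desired location p_des.
    p_des = -5.0
    k = (a - p_des) / b

    # Realize-and-test step: simulate the closed loop with Euler integration.
    dt, T = 0.001, 2.0
    x, trajectory = 1.0, []          # start displaced from the operating point
    for _ in range(int(T / dt)):
        u = -k * x                   # feedback law u = -k*x
        x += dt * (a * x + b * u)
        trajectory.append(x)

    print(f"gain k = {k:.2f}, final deviation = {trajectory[-1]:.5f}")

In an actual design the model would come from physics or identification and the controller from a CAD package; the sketch only illustrates the model, design, realize, and test iteration described above, and its controller is only trustworthy while the plant stays near the operating point where the linear model holds.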
The control methods and the underlying mathematical theory were developed to meet the ever-increasing control needs of our technology. The need to achieve demanding control specifications for increasingly complex dynamical systems has been addressed by using more complex mathematical models and by developing more sophisticated design algorithms. The use of highly complex mathematical models, however, can seriously inhibit our ability to develop control algorithms. Fortunately, simpler plant models, for example linear models, can often be used in the control design; this is possible because the feedback used in control can tolerate significant model uncertainties. Controllers can, for example, be designed to meet the specifications around an operating point where the linear model is valid; a scheduler then combines these local designs into a controller that accomplishes the control objectives over the whole operating range. This is in fact the method typically used for aircraft flight control. When the uncertainties in the plant and environment are large, fixed feedback controllers may not be adequate, and adaptive controllers are used. Note that adaptive control in conventional control theory has a specific and rather narrow meaning. In particular, it typically refers to adapting to variations in the constant coefficients of the equations describing the linear plant: the new coefficient values are identified and then used, directly or indirectly, to reassign the values of the constant coefficients in the equations describing the linear controller. Adaptive controllers provide wider operating ranges than fixed controllers, and so conventional adaptive control systems can be considered to have higher degrees of autonomy than control systems employing fixed feedback controllers. There are many cases, however, where conventional adaptive controllers are not adequate to meet the needs and novel methods are necessary.
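As a concrete illustration of the "identify, then reassign the controller coefficients" idea described above, the sketch below implements a simple indirect adaptive loop for a scalar discrete-time plant. The plant itself, the assumption that the input gain b is known, the use of recursive least squares for identification, and all numerical values are illustrative choices, not a method prescribed in the text.

    import numpy as np

    # Hypothetical scalar discrete-time plant with one unknown coefficient 'a':
    #   x[k+1] = a*x[k] + b*u[k] + noise
    # 'b' is taken as known only to keep the sketch short.
    a_true, b = 0.95, 0.5     # 'a_true' is unknown to the controller
    a_des = 0.2               # desired closed-loop pole

    a_hat, P = 0.0, 100.0     # recursive least-squares estimate of 'a' and its covariance
    x = 1.0
    rng = np.random.default_rng(0)

    for k in range(200):
        # Step 1 (reassign the controller coefficient): pick the gain K from the
        # current estimate so that the estimated closed loop has pole a_hat - b*K = a_des.
        K = (a_hat - a_des) / b
        u = -K * x

        x_next = a_true * x + b * u + 1e-3 * rng.standard_normal()  # plant response

        # Step 2 (identify the plant coefficient): scalar recursive least squares
        # on the prediction error of the model x_next = a_hat*x + b*u.
        e = x_next - (a_hat * x + b * u)
        g = P * x / (1.0 + P * x * x)
        a_hat += g * e
        P -= g * x * P

        x = x_next

    print(f"estimated a = {a_hat:.3f} (true value {a_true}), final state = {x:.4f}")

Gain scheduling, mentioned above in connection with flight control, replaces the online identification with a precomputed family of controllers indexed by measured operating conditions; both approaches keep the linear design machinery but extend its operating range.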
High Autonomy Control Systems: There are cases where we need to significantly increase the operating range of control systems. We must be able to deal effectively with significant uncertainties in models of increasingly complex dynamical systems, in addition to increasing the validity range of our control methods. We need to cope with significant unmodeled and unanticipated changes in the plant, in the environment, and in the control objectives. This involves the use of advanced decision-making processes to generate control actions so that a certain performance level is maintained even when there are drastic changes in the operating conditions. The need to use intelligent methods in autonomous control stems from the need for an increased level of autonomous decision-making ability in achieving complex control tasks. Note that intelligent methods are not strictly necessary to increase a control system's autonomy; it is possible to attain higher degrees of autonomy using methods that are not considered intelligent, as is the case in current adaptive control practice. It appears, however, that to achieve the highest degrees of autonomy, intelligent methods are indeed necessary.
For more information on intelligent control, see Defining Intelligent Control.
For more information on control systems and the IEEE Control Systems Society, see IEEE CSS.