WHAT IS ADAPTIVE CONTROL?
In everyday language, "to adapt" means to change a behavior to conform to new circumstances. Intuitively, an adaptive controller is thus a controller that can modify its behavior in response to changes in the dynamics of the process and the character of the disturbances. Since ordinary feedback also attempts to reduce the effects of disturbances and plant uncertainty, the question of the difference between feedback control and adaptive control immediately arises. Over the years there have been many attempts to define adaptive control formally. At an early symposium in 1961 a long discussion ended with the following suggestion: "An adaptive system is any physical system that has been designed with an adaptive viewpoint." A renewed attempt was made by an IEEE committee in 1973. It proposed a new vocabulary based on notions like self-organizing control (SOC) system, parameter-adaptive SOC, performance-adaptive SOC, and learning control system. However, these efforts were not widely accepted. A meaningful definition of adaptive control, which would make it possible to look at a controller's hardware and software and decide whether or not it is adaptive, is still lacking. However, there appears to be a consensus that a constant-gain feedback system is not an adaptive system.
In this book we take the pragmatic attitude that an adaptive controller is a controller with adjustable parameters and a mechanism for adjusting the parameters. The controller becomes nonlinear because of the parameter adjustment mechanism. It has, however, a very special structure. Since general nonlinear systems are difficult to deal with, it makes sense to consider special classes of nonlinear systems. An adaptive control system can be thought of as having two loops. One loop is a normal feedback with the process and the controller. The other loop is the parameter adjustment loop. A block diagram of an adaptive system is shown in Fig. 1.1. The parameter adjustment loop is often slower than the normal feedback loop.
A control engineer should know about adaptive systems because they have useful properties, which can be profitably used to design control systems with improved performance and functionality.
A Brief History
In the early 1950s there was extensive research on adaptive control in connection with the design of autopilots for high-performance aircraft (see Fig. 1.2). Such aircraft operate over a wide range of speeds and altitudes. It was found that ordinary constant-gain, linear feedback control could work well in one operating condition but not over the whole flight regime. A more sophisticated controller that could work well over a wide range of operating conditions was therefore needed. After a significant development effort it was found that gain scheduling was a suitable technique for flight control systems. The interest in adaptive control diminished partly because the adaptive control problem was too hard to deal with using the techniques that were available at the time.
In the 1960s there was much research in control theory that contributed to the development of adaptive control. State space and stability theory were introduced. There were also important results in stochastic control theory. Dynamic programming, introduced by Bellman, increased the understanding of adaptive processes. Fundamental contributions were also made by Tsypkin, who showed that many schemes for learning and adaptive control could be described in a common framework. There were also major developments in system identification. A renaissance of adaptive control occurred in the 1970s, when different estimation schemes were combined with various design methods. Many applications were reported, but theoretical results were very limited.
In the late 1970s and early 1980s, proofs for stability of adaptive systems appeared, albeit under very restrictive assumptions. The efforts to merge ideas of robust control and system identification are of particular relevance. Investigation of the necessity of those assumptions sparked new and interesting research into the robustness of adaptive control, as well as into controllers that are universally stabilizing. Research in the late 1980s and early 1990s gave new insights into the robustness of adaptive controllers. Investigations of nonlinear systems led to significantly increased understanding of adaptive control. Lately, it has also been established that adaptive control has strong relations to ideas on learning that are emerging in the field of computer science.
There have been many experiments on adaptive control in laboratories and industry. The rapid progress in microelectronics was a strong stimulation. Interaction between theory and experimentation resulted in a vigorous development of the field. As a result, adaptive controllers started to appear commercially in the early 1980s. This development is now accelerating. One result is that virtually all single-loop controllers that are commercially available today allow adaptive techniques of some form. The primary reason for introducing adaptive control was to obtain controllers that could adapt to changes in process dynamics and disturbance characteristics. It has been found that adaptive techniques can also be used to provide automatic tuning of controllers.
1.2 LINEAR FEEDBACK
Feedback by itself has the ability to cope with parameter changes. The search for ways to design systems that are insensitive to process variations was in fact one of the driving forces for inventing feedback. Therefore it is of interest to know the extent to which process variations can be dealt with by using linear feedback. In this section we discuss how a linear controller can deal with variations in process dynamics.
Robust High-Gain Control
A linear feedback controller can be represented by the block diagram in Fig. 1.3. The feedback transfer function Gfb is typically chosen so that disturbances acting on the process are attenuated and the closed-loop system is insensitive to process variations. The feedforward transfer function Gff is then chosen to give the desired response to command signals. The system is called a two-degree-of-freedom system because the controller has two transfer functions that can be chosen independently. The fact that linear feedback can cope with significant variations in process dynamics can be seen from the following intuitive argument. Consider the system in Fig. 1.3. The transfer function from ym to y is
T = GpGfb / (1 + GpGfb)
Taking derivatives with respect to Gp, we get
dT/T = (1 / (1 + GpGfb)) · dGp/Gp
The closed-loop transfer function T is thus insensitive to variations in the process transfer function for those frequencies at which the loop transfer function
L = GpGfb
is large. To design a robust controller, one thus attempts to find Gfb such that the loop transfer function is large for those frequencies at which there are large variations in the process transfer function. For those frequencies where |L(iω)| ≈ 1, however, it is necessary that the variations be moderate for the system to have sufficient robustness properties.
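The sensitivity formula above can be checked numerically. The following sketch uses a hypothetical process Gp(s) = 1/(s + 1) and a pure gain feedback Gfb(s) = K (neither taken from the text, chosen only for illustration) and evaluates |1/(1 + L(iω))|, the factor by which a relative process variation dGp/Gp is attenuated in the closed loop.

```python
def loop_gain(w, K=10.0):
    """L(iw) = Gp(iw) * Gfb(iw) for the illustrative choices
    Gp(s) = 1/(s + 1) and Gfb(s) = K."""
    s = 1j * w
    return K / (s + 1.0)

def sensitivity(w, K=10.0):
    """|1/(1 + L(iw))|: how much a relative process variation
    dGp/Gp shows up in the closed-loop transfer function at
    frequency w, per dT/T = (1/(1 + L)) dGp/Gp."""
    return abs(1.0 / (1.0 + loop_gain(w, K)))

# Where the loop gain is large (low frequencies), process variations
# are strongly attenuated; where |L| is small they pass straight
# through to the closed loop.
low = sensitivity(0.01)     # |L| ~ 10, so attenuation ~ 1/11
high = sensitivity(1000.0)  # |L| ~ 0.01, so essentially no attenuation
```

Raising K widens the frequency band over which the attenuation is effective, which is the essence of robust high-gain design.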
Judging Criticality of Process Variations
We now consider some specific examples to develop some intuition for judging the effects of parameter variations. The following example illustrates that significant variations in open-loop step responses may have little effect on the closed-loop performance.
EXAMPLE 1.1 Different open-loop responses
Consider systems with the open-loop transfer functions
G0(s) = 1 / ((s + 1)(s + a))
where a = -0.01, 0, and 0.01. The dynamics of these processes are quite different, as is illustrated in Fig. 1.4(a). Notice that the responses are significantly different. The system with a = 0.01 is stable; the others are unstable. The initial parts of the step responses, however, are very similar for all systems. The closed-loop systems obtained by introducing the proportional feedback with unit gain, that is, u = uc - y, give the step responses shown in Fig. 1.4(b). Notice that the responses of the closed-loop systems are virtually identical. Some insight is obtained from the frequency responses. Bode diagrams for the open and closed loops are shown in Fig. 1.5. Notice that the Bode diagrams for the open-loop systems differ significantly at low frequencies but are virtually identical for high frequencies. Intuitively, it thus appears that there is no problem in designing a controller that will work well for all systems, provided that the closed-loop bandwidth is chosen to be sufficiently high. This is also verified by the Bode diagrams for the closed-loop systems shown in Fig. 1.5(b), which are practically identical. Also compare the step responses of the closed-loop systems in Fig. 1.4(b).
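The closeness of the closed-loop responses in Example 1.1 can be verified directly from the poles. With unit feedback u = uc - y, the closed-loop transfer function is G0/(1 + G0), whose poles are the roots of (s + 1)(s + a) + 1 = s² + (1 + a)s + (a + 1). A minimal sketch:

```python
import cmath

def closed_loop_poles(a):
    """Poles of G0/(1 + G0) for G0 = 1/((s + 1)(s + a)) under
    unit proportional feedback: roots of s^2 + (1+a)s + (a+1)."""
    b, c = 1.0 + a, a + 1.0
    d = cmath.sqrt(b * b - 4.0 * c)
    return ((-b + d) / 2.0, (-b - d) / 2.0)

for a in (-0.01, 0.0, 0.01):
    p1, p2 = closed_loop_poles(a)
    # The open-loop pole at s = -a flips from unstable (a = -0.01)
    # to stable (a = 0.01), yet the closed-loop poles barely move:
    # all three cases give poles near -0.5 +/- 0.87j.
    assert p1.real < 0 and p2.real < 0
```

This makes the point of the example concrete: a parameter change that switches the open loop between stable and unstable shifts the closed-loop poles by only about 0.01.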
The next example illustrates that process variations may be significant even if changes in the open-loop step responses are small.
EXAMPLE 1.2 Similar open-loop responses
Consider systems with the open-loop transfer functions
G0(s) = 400(1 - sT) / ((s + 1)(s + 20)(1 + sT))
with T = 0, 0.015, and 0.03. The open-loop step responses are shown in Fig. 1.6(a). Figure 1.6(b) shows the step responses for the closed-loop systems obtained with the feedback u = uc - y. Notice that the open-loop responses are very similar but that the closed-loop responses differ considerably. The frequency responses give some insight. The Bode diagrams for the open- and closed-loop systems are shown in Fig. 1.7. Notice that the frequency responses of the open-loop systems are very close for low frequencies but differ considerably in the phase at high frequencies. It is thus possible to design a controller that works well for all systems provided that the closed-loop bandwidth is chosen to be sufficiently small. At the crossover frequency chosen in the example there are, however, significant variations that show up in the Bode diagrams of the closed-loop systems in Fig. 1.7(b) and in the step responses of the closed-loop system in Fig. 1.6(b).
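The deterioration in Example 1.2 can be quantified with a Routh-Hurwitz test. With unit feedback, the closed-loop characteristic polynomial is (s + 1)(s + 20)(1 + Ts) + 400(1 - Ts) = T·s³ + (1 + 21T)s² + (21 - 380T)s + 420. For a cubic a3s³ + a2s² + a1s + a0 with positive coefficients, stability requires a2·a1 - a3·a0 > 0, and that difference serves as a rough stability margin. The value T = 0.04 below is not from the text; it is added to show the margin actually crossing zero:

```python
def routh_margin(T):
    """Routh-Hurwitz margin a2*a1 - a3*a0 for the closed-loop
    characteristic polynomial
        (s+1)(s+20)(1+Ts) + 400(1-Ts)
          = T s^3 + (1+21T) s^2 + (21-380T) s + 420.
    Positive margin (with positive coefficients) means the closed
    loop is stable; the margin shrinking toward zero means poorly
    damped responses."""
    a3, a2, a1, a0 = T, 1.0 + 21.0 * T, 21.0 - 380.0 * T, 420.0
    return a2 * a1 - a3 * a0

# The margin shrinks rapidly as the right-half-plane zero moves in,
# matching the deteriorating closed-loop responses in Fig. 1.6(b),
# and goes negative (instability) a little above T = 0.03.
margins = {T: routh_margin(T) for T in (0.015, 0.03, 0.04)}
```

So the open-loop step responses barely distinguish the three systems, while the closed-loop stability margin drops by roughly a factor of four between T = 0.015 and T = 0.03.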
The examples discussed show that to judge the consequences of process variations from open-loop dynamics, it is better to use frequency responses than time responses. It is also necessary to have some information about the desired crossover frequency of the closed-loop system. Intuitively, it may be expected that a process variation that changes dynamics from unstable to stable is very severe. Example 1.1 shows that this is not necessarily the case.
EXAMPLE 1.3 Integrator with unknown sign
Consider a process whose dynamics is described by
G0(s) = kp/s
where the gain kp can assume both positive and negative values. This is a very severe variation because the phase of the system can change by 180°. This process cannot be controlled by a linear controller with a rational transfer function. This can be seen as follows. Let the controller transfer function be S(s)/R(s), where R(s) and S(s) are polynomials. Assume that deg R ≥ deg S. The characteristic polynomial of the closed-loop system is then
P(s) = sR(s) + kpS(s)
Without loss of generality it can be assumed that the coefficient of the highest power of s in the polynomial R(s) is 1. The coefficient of the highest power of s of P(s) is thus also 1. The constant coefficient of the polynomial kpS(s) is proportional to kp and can thus be either positive or negative. A necessary condition for P(s) to have all roots in the left half-plane is that all coefficients are positive. Since kp can be both positive and negative, the polynomial P(s) will always have a zero in the right half-plane for some value of kp.
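The argument can be illustrated with any fixed rational controller. The sketch below uses the hypothetical choice S(s)/R(s) = (s + 2)/(s + 3), picked only for illustration, giving P(s) = s(s + 3) + kp(s + 2) = s² + (3 + kp)s + 2kp; flipping the sign of kp makes the constant coefficient 2kp negative, forcing a root into the right half-plane:

```python
import cmath

def closed_loop_poles(kp):
    """Closed-loop poles for G0(s) = kp/s under the illustrative
    controller S(s)/R(s) = (s+2)/(s+3). Characteristic polynomial:
    P(s) = s(s+3) + kp(s+2) = s^2 + (3+kp)s + 2kp."""
    b, c = 3.0 + kp, 2.0 * kp
    d = cmath.sqrt(b * b - 4.0 * c)
    return ((-b + d) / 2.0, (-b - d) / 2.0)

# For kp > 0 both poles lie in the left half-plane; for kp < 0 the
# negative constant coefficient guarantees a right-half-plane root.
# The same sign flip defeats any fixed linear controller.
stable = all(p.real < 0 for p in closed_loop_poles(1.0))
unstable = any(p.real > 0 for p in closed_loop_poles(-1.0))
```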
1.3 EFFECTS OF PROCESS VARIATIONS
The standard approach to control system design is to develop a linear model for the process for some operating condition and to design a controller having constant parameters. This approach has been remarkably successful. A fundamental property is also that feedback systems are intrinsically insensitive to modeling errors and disturbances. In this section we illustrate some mechanisms that give rise to variations in process dynamics. We also show the effects of process variations on the performance of a control system.
The examples are deliberately simplified; they do not pose significant control problems in themselves, but they do illustrate some of the difficulties that might occur in real systems.
A very common source of variations is that actuators, like valves, have a nonlinear characteristic. This may create difficulties, which are illustrated by the following example.
EXAMPLE 1.4 Nonlinear valve
A simple feedback loop with a proportional and integral (PI) controller, a nonlinear valve, and a process is shown in Fig. 1.8. Let the static valve characteristic be
v = f(u) = u^4,  u ≥ 0
Linearizing the system around a steady-state operating point shows that the incremental gain of the valve is f'(u), and hence the loop gain is proportional to f'(u). The system can perform well at one operating level and poorly at another. This is illustrated by the step responses in Fig. 1.9. The controller is tuned to give a good response at low values of the operating level. For higher values of the operating level the closed-loop system even becomes unstable. One way to handle this type of problem is to feed the control signal u through an inverse of the nonlinearity of the valve. It is often sufficient to use a fairly crude approximation (see Example 9.1). This can be interpreted as a special case of gain scheduling, which is treated in detail in Chapter 9.
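A minimal sketch of both effects, using the valve characteristic f(u) = u^4 from the example (the operating levels 0.3 and 1.5 are chosen here only for illustration):

```python
def valve(u):
    """Static valve characteristic v = f(u) = u**4, u >= 0."""
    return u ** 4

def incremental_gain(u):
    """Linearized (incremental) valve gain f'(u) = 4*u**3, which
    multiplies the loop gain at operating level u."""
    return 4.0 * u ** 3

def compensated(u):
    """Feed the control signal through the exact inverse u -> u**(1/4)
    before the valve; the composite map is then the identity, so the
    loop gain no longer depends on the operating level. In practice a
    crude approximation of the inverse is often sufficient."""
    return valve(u ** 0.25)

# The incremental gain varies by two orders of magnitude between a
# low and a high operating level, which is why a controller tuned at
# a low level can go unstable at a high one.
low_gain = incremental_gain(0.3)   # 4 * 0.027 = 0.108
high_gain = incremental_gain(1.5)  # 4 * 3.375 = 13.5
```

With the inverse in the loop, the effective actuator gain is unity at every operating level, which is exactly the gain-scheduling interpretation mentioned above.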
Flow and Speed Variations
Systems with flows through pipes and tanks are common in process control. The flows are often closely related to the production rate. Process dynamics thus change when the production rate changes, and a controller that is well tuned for one production rate will not necessarily work well for other rates. A simple example illustrates what may happen.
EXAMPLE 1.5 Concentration control
Consider concentration control for a fluid that flows through a pipe, with no mixing, and through a tank, with perfect mixing. A schematic diagram of the process is shown in Fig. 1.10. The concentration at the inlet of the pipe is cin. Let the pipe volume be Vd and let the tank volume be Vm. Furthermore, let the flow be q and let the concentration in the tank and at the outlet be c. A mass balance gives
Vm dc(t)/dt = q(t)(cin(t - τ) - c(t))    (1.3)
where
τ = Vd/q(t)
T = Vm/q(t)    (1.4)
For a fixed flow, that is, when q(t) is constant, the process has the transfer function
G0(s) = e^(-sτ) / (1 + sT)    (1.5)
The dynamics are characterized by a time delay and first-order dynamics. The time constant T and the time delay τ are inversely proportional to the flow q.
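The inverse dependence on flow is easy to make concrete. The volumes below (Vd = 1 for the pipe, Vm = 5 for the tank, in consistent units) are hypothetical values chosen only for illustration:

```python
def process_parameters(q, Vd=1.0, Vm=5.0):
    """Time delay and time constant of the concentration dynamics
    G0(s) = exp(-s*tau)/(1 + s*T) at flow q, per Eqs. (1.4):
    tau = Vd/q (plug flow through the pipe), T = Vm/q (mixed tank).
    Vd and Vm are illustrative, not values from the text."""
    tau = Vd / q
    T = Vm / q
    return tau, T

# Halving the production rate doubles both the time delay and the
# time constant, so a controller tuned at the nominal flow is
# detuned whenever the production rate changes.
tau_nom, T_nom = process_parameters(1.0)  # tau = 1.0, T = 5.0
tau_low, T_low = process_parameters(0.5)  # tau = 2.0, T = 10.0
```

This is why flow (production-rate) changes are such a common motivation for gain scheduling and adaptation in process control.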
Excerpted from ADAPTIVE CONTROL by Karl Johan Åström, Björn Wittenmark. Copyright © 2008 Karl Johan Åström and Björn Wittenmark. Excerpted by permission of Dover Publications, Inc..