
Adaptive control


Adaptive control is the control method used by a controller that must adapt to a controlled system whose parameters vary or are initially uncertain.[1][2] For example, as an aircraft flies, its mass slowly decreases as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control differs from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters: robust control guarantees that the control law need not be changed as long as the changes stay within given bounds, whereas adaptive control is concerned with a control law that changes itself.

Parameter estimation


The foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in concurrent learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
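
As a concrete illustration of the estimation step, the following is a minimal sketch (not taken from the article) of recursive least-squares estimation of the parameters of a linearly parameterized model y = θᵀφ; the model, data, and numerical values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = np.array([2.0, -0.5])   # unknown parameters to be identified
    theta_hat = np.zeros(2)              # running estimate
    P = 1e3 * np.eye(2)                  # covariance of the estimate
    lam = 0.99                           # forgetting factor (lam = 1: no forgetting)

    for t in range(500):
        phi = rng.normal(size=2)                     # regressor (persistently exciting here)
        y = theta_true @ phi + 0.01 * rng.normal()   # measured output with small noise

        # Standard RLS update: gain, parameter estimate, covariance
        K = P @ phi / (lam + phi @ P @ phi)
        theta_hat = theta_hat + K * (y - theta_hat @ phi)
        P = (P - np.outer(K, phi @ P)) / lam

    print("estimate:", theta_hat, "true:", theta_true)

A gradient-descent estimator follows the same pattern but replaces the covariance-weighted gain K with a fixed adaptation gain multiplying the regressor.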

Classification of adaptive control techniques


In general, one should distinguish between:

  1. Feedforward adaptive control
  2. Feedback adaptive control

as well as between

  1. Direct methods
  2. Indirect methods
  3. Hybrid methods

Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller parameters.[3] Hybrid methods rely on both estimation of parameters and direct modification of the control law.
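
To make the distinction concrete, the sketch below (an illustration, not taken from the article) shows the indirect, certainty-equivalence approach for a hypothetical scalar plant dx/dt = a·x + u with unknown a: a gradient estimator produces an estimate â, and the controller uses â as if it were the true value to place the closed-loop pole at -a_m. A direct counterpart, the "MIT rule", is sketched after the classification list below. All numerical values are illustrative assumptions.

    # Indirect (certainty-equivalence) adaptive control of dx/dt = a*x + u, a unknown.
    a_true, am, gamma = 1.5, 2.0, 5.0   # unknown plant parameter, desired pole, adaptation gain
    dt, x, a_hat = 1e-3, 1.0, 0.0

    for step in range(10000):
        u = -(a_hat + am) * x           # control law computed FROM the estimate (indirect)
        x_dot = a_true * x + u          # true plant dynamics (a_true is hidden from the controller)

        # Gradient update driven by the prediction error (assumes x_dot is measurable,
        # a simplification made only for this sketch)
        a_hat += dt * gamma * (x_dot - (a_hat * x + u)) * x
        x += dt * x_dot

    print(f"x = {x:.4f}, a_hat = {a_hat:.3f} (true a = {a_true}; without persistent "
          f"excitation the estimate need not converge to the true value)")

Note that the closed loop can be stabilized even when â does not converge to the true parameter, which is why persistent excitation matters for parameter convergence but not necessarily for regulation.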


There are several broad categories of feedback adaptive control (classification can vary):

  • Dual adaptive controllers – based on dual control theory
    • Optimal dual controllers – difficult to design
    • Suboptimal dual controllers
  • Nondual adaptive controllers
    • Adaptive pole placement
    • Extremum-seeking controllers
    • Iterative learning control
    • Gain scheduling
    • Model reference adaptive controllers (MRACs) – incorporate a reference model defining desired closed loop performance
      • Gradient optimization MRACs – use a local rule for adjusting parameters when performance differs from the reference. Ex.: the "MIT rule" (a sketch follows this list).
      • Stability optimized MRACs
    • Model identification adaptive controllers (MIACs) – perform system identification while the system is running
      • Cautious adaptive controllers – use current SI to modify control law, allowing for SI uncertainty
      • Certainty equivalent adaptive controllers – take current SI to be the true system, assume no uncertainty
        • Nonparametric adaptive controllers
        • Parametric adaptive controllers
          • Explicit parameter adaptive controllers
          • Implicit parameter adaptive controllers
    • Multiple models – use a large number of models distributed over the region of uncertainty; based on the responses of the plant and of the models, the model closest to the plant according to some metric is chosen at every instant.[4]
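
As referenced in the classification above, the following is a minimal sketch of the "MIT rule", a direct gradient-adaptation MRAC scheme, for a hypothetical first-order plant with unknown input gain k; the controller adapts a feedforward gain θ so that the closed loop follows a first-order reference model. The plant, reference model, and numerical values are illustrative assumptions.

    # Direct MRAC via the "MIT rule" for a first-order plant with unknown gain k:
    #   plant:            dy/dt  = -a*y  + k*u
    #   reference model:  dym/dt = -a*ym + k0*r
    #   controller:       u = theta * r          (adjustable feedforward gain)
    # The tracking error is e = y - ym; the MIT rule moves theta along the negative
    # gradient of J = e**2 / 2, using ym/k0 as the sensitivity of e w.r.t. theta.
    a, k, k0, gamma = 1.0, 2.0, 1.0, 0.5   # plant pole, unknown gain, model gain, adaptation rate
    dt = 1e-3
    y = ym = theta = 0.0

    for step in range(int(40.0 / dt)):
        t = step * dt
        r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference command
        u = theta * r                           # adjustable control law
        e = y - ym                              # tracking error w.r.t. the reference model

        # Forward-Euler integration of plant, reference model, and MIT-rule update
        y += dt * (-a * y + k * u)
        ym += dt * (-a * ym + k0 * r)
        theta += dt * (-gamma * e * ym / k0)    # gradient ("MIT rule") adaptation

    print(f"adapted gain theta = {theta:.3f} (ideal value k0/k = {k0 / k:.3f})")

The gradient rule itself offers no general stability guarantee, which motivates the stability-based (Lyapunov) MRAC designs mentioned above.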

Some special topics in adaptive control can be introduced as well:

  1. Adaptive control based on discrete-time process identification
  2. Adaptive control based on the model reference control technique[5]
  3. Adaptive control based on continuous-time process models
  4. Adaptive control of multivariable processes[6]
  5. Adaptive control of nonlinear processes
  6. Concurrent learning adaptive control, which relaxes the condition on persistent excitation for parameter convergence for a class of systems[7][8]

More recently, adaptive control has been merged with intelligent techniques such as fuzzy logic and neural networks, giving rise to new concepts such as fuzzy adaptive control.

Applications


When designing adaptive control systems, special consideration of convergence and robustness issues is necessary. Lyapunov stability is typically used to derive control adaptation laws and show convergence.

Typical applications of adaptive control include:

  • Self-tuning of subsequently fixed linear controllers during the implementation phase for one operating point;
  • Self-tuning of subsequently fixed robust controllers during the implementation phase for the whole range of operating points;
  • Self-tuning of fixed controllers on request if the process behaviour changes due to ageing, drift, wear, etc.;
  • Adaptive control of linear controllers for nonlinear or time-varying processes;
  • Adaptive control or self-tuning control of nonlinear controllers for nonlinear processes;
  • Adaptive control or self-tuning control of multivariable controllers for multivariable processes (MIMO systems).

Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.

A particularly successful application of adaptive control has been adaptive flight control.[9][10] This body of work has focused on guaranteeing stability of a model reference adaptive control scheme using Lyapunov arguments. Several successful flight-test demonstrations have been conducted, including fault tolerant adaptive control.[11]


References

  1. ^ Annaswamy, Anuradha M. (3 May 2023). "Adaptive Control and Intersections with Reinforcement Learning". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93. doi:10.1146/annurev-control-062922-090153. ISSN 2573-5144. Retrieved 4 May 2023.
  2. ^ Cao, Chengyu; Ma, Lili; Xu, Yunjun (2012). "Adaptive Control Theory and Applications". Journal of Control Science and Engineering. 2012 (1): 1–2. doi:10.1155/2012/827353.
  3. ^ Astrom, Karl (2008). Adaptive Control. Dover. pp. 25–26.
  4. ^ Narendra, Kumpati S.; Han, Zhuo (August 2011). "Adaptive Control Using Collective Information Obtained from Multiple Models". IFAC Proceedings Volumes. 18 (1): 362–367. doi:10.3182/20110828-6-IT-1002.02237.
  5. ^ Lavretsky, Eugene; Wise, Kevin (2013). Robust and Adaptive Control. Springer London. pp. 317–353. ISBN 9781447143963.
  6. ^ Tao, Gang (2014). "Multivariable adaptive control: A survey". Automatica. 50 (11): 2737–2764. doi:10.1016/j.automatica.2014.10.015.
  7. ^ Chowdhary, Girish; Johnson, Eric (2011). "Theory and flight-test validation of a concurrent learning adaptive controller". Journal of Guidance, Control, and Dynamics. 34 (2): 592–607. Bibcode:2011JGCD...34..592C. doi:10.2514/1.46866.
  8. ^ Chowdhary, Girish; Muehlegg, Maximillian; Johnson, Eric (2014). "Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation". International Journal of Control. 87 (8): 1583–1603.
  9. ^ Lavretsky, Eugene (2015). "Robust and Adaptive Control Methods for Aerial Vehicles". Handbook of Unmanned Aerial Vehicles. pp. 675–710. doi:10.1007/978-90-481-9707-1_50. ISBN 978-90-481-9706-4.
  10. ^ Kannan, Suresh K.; Chowdhary, Girish Vinayak; Johnson, Eric N. (2015). "Adaptive Control of Unmanned Aerial Vehicles: Theory and Flight Tests". Handbook of Unmanned Aerial Vehicles. pp. 613–673. doi:10.1007/978-90-481-9707-1_61. ISBN 978-90-481-9706-4.
  11. ^ Chowdhary, Girish; Johnson, Eric N; Chandramohan, Rajeev; Kimbrell, Scott M; Calise, Anthony (2013). "Guidance and control of airplanes under actuator failures and severe structural damage". Journal of Guidance, Control, and Dynamics. 36 (4): 1093–1104. Bibcode:2013JGCD...36.1093C. doi:10.2514/1.58028.

Further reading

  • B. Egardt, Stability of Adaptive Controllers. New York: Springer-Verlag, 1979.
  • I. D. Landau, Adaptive Control: The Model Reference Approach. New York: Marcel Dekker, 1979.
  • P. A. Ioannou and J. Sun, Robust Adaptive Control. Upper Saddle River, NJ: Prentice-Hall, 1996.
  • K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Englewood Cliffs, NJ: Prentice Hall, 1989; Dover Publications, 2004.
  • S. Sastry and M. Bodson, Adaptive Control: Stability, Convergence and Robustness. Prentice Hall, 1989.
  • K. J. Astrom and B. Wittenmark, Adaptive Control. Reading, MA: Addison-Wesley, 1995.
  • I. D. Landau, R. Lozano, and M. M’Saad, Adaptive Control. New York, NY: Springer-Verlag, 1998.
  • G. Tao, Adaptive Control Design and Analysis. Hoboken, NJ: Wiley-Interscience, 2003.
  • P. A. Ioannou and B. Fidan, Adaptive Control Tutorial. SIAM, 2006.
  • G. C. Goodwin and K. S. Sin, Adaptive Filtering Prediction and Control. Englewood Cliffs, NJ: Prentice-Hall, 1984.
  • M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design. Wiley Interscience, 1995.
  • P. A. Ioannou and P. V. Kokotovic, Adaptive Systems with Reduced Models. Springer Verlag, 1983.
  • Annaswamy, Anuradha M.; Fradkov, Alexander L. (2021). "A historical perspective of adaptive control and learning". Annual Reviews in Control. 52: 18–41. arXiv:2108.11336. doi:10.1016/j.arcontrol.2021.10.014. S2CID 237290042.