ECE 517:
Nonlinear and Adaptive Control
Fall 2013 Lecture Notes
Daniel Liberzon
November 20, 2013
Disclaimers
Don’t print future lectures in advance as the material is always in the process of being
updated. You can consider the material here stable 2 days after it was presented in class.
These lecture notes are posted for class use only.
This is a very rough draft which contains many errors.
I don’t always give proper references to sources from which results are taken. A lack of
reference does not mean that the result is original. In fact, all results presented in these
notes (with possible exception of some simple examples) were borrowed from the literature
and are not mine.
Contents

1 Introduction
  1.1 Motivating example
  1.2 Course logistics
2 Weak Lyapunov functions
  2.1 LaSalle and Barbalat
  2.2 Connection with observability
  2.3 Back to the adaptive control example
3 Minimum-phase systems and universal regulators
  3.1 Universal regulators for scalar plants
    3.1.1 The case b > 0
    3.1.2 General case: non-existence results
    3.1.3 Nussbaum gains
  3.2 Relative degree and minimum phase
    3.2.1 Stabilization of nonlinear minimum-phase systems
  3.3 Universal regulators for higher-dimensional plants
4 Lyapunov-based design
  4.1 Control Lyapunov functions
    4.1.1 Sontag’s universal formula
  4.2 Back to the adaptive control example
5 Backstepping
  5.1 Integrator backstepping
  5.2 Adaptive integrator backstepping
6 Parameter estimation
  6.1 Gradient method
  6.2 Parameter estimation: stable case
  6.3 Unstable case: adaptive laws with normalization
    6.3.1 Linear plant parameterizations (parametric models)
    6.3.2 Gradient method
    6.3.3 Least squares
    6.3.4 Projection
  6.4 Sufficiently rich signals and parameter identification
  6.5 Case study: model reference adaptive control
    6.5.1 Direct MRAC
    6.5.2 Indirect MRAC
7 Input-to-state stability
  7.1 Weakness of certainty equivalence
  7.2 Input-to-state stability and stabilization
    7.2.1 ISS backstepping
  7.3 Adaptive ISS controller design
    7.3.1 Adaptive ISS backstepping
    7.3.2 Modular design
8 Stability of slowly time-varying systems
  8.1 Stability
  8.2 Application to adaptive stabilization
  8.3 Detectability
9 Switching adaptive control
  9.1 The supervisor
    9.1.1 Multi-estimator
    9.1.2 Monitoring signal generator
    9.1.3 Switching logic
  9.2 Example: linear systems
  9.3 Modular design objectives and analysis steps
    9.3.1 Achieving detectability
    9.3.2 Achieving bounded error gain and non-destabilization
10 Singular perturbations
  10.1 Unmodeled dynamics
  10.2 Singular perturbations
  10.3 Direct MRAC with unmodeled dynamics
11 Conclusion
1 Introduction
The meaning of “nonlinear” should be clear by exclusion, even if you have only studied linear
systems so far.
The meaning of “adaptive” is less clear and takes longer to explain.
From www.webster.com:
Adaptive: showing or having a capacity for or tendency toward adaptation.
Adaptation: the act or process of adapting.
Adapt: to become adapted.
Perhaps it’s easier to first explain the class of problems adaptive control studies: modeling uncertainty. This
includes (but is not limited to) the presence of unknown parameters in the model of the plant.
There are many specialized techniques in adaptive control, and details of analysis and design
tend to be challenging. We’ll try to extract fundamental concepts and ideas, of interest not only
in adaptive control. The presentation of adaptive control results will mostly be at the level of
examples, not general theory.
The pattern will be: general concept in nonlinear systems/control, followed by its application
in adaptive control. Or, even better: a motivating example/problem in adaptive control, then
the general treatment of the concept or technique, then back to its adaptive application. Overall,
the course is designed to provide an introduction to further studies both in nonlinear systems and
control and in adaptive control.
1.1 Motivating example
Example 1 Consider the scalar system

    ˙x = θx + u

where x is the state, u is the control, and θ is an unknown fixed parameter.
A word on notation: There’s no consistent notation in adaptive control literature for the true
value of the unknown parameters. When there is only one parameter, θ is a fairly standard symbol.
Sometimes it’s denoted as θ∗ (to further emphasize that it is the actual value of θ). In other sources,
p∗ is used. When there are several unknown parameters, they are either combined into a vector
(θ, θ∗, p∗, etc.) or written individually using different letters such as a, b, and so on. Estimates of
the unknown parameters commonly have hats over them, e.g., ˆθ, and estimation errors commonly
have tildes over them, e.g., ˜θ = ˆθ − θ.
Goal: regulation, i.e., make x(t) → 0 as t → ∞.
If θ < 0, then u ≡ 0 works.
If θ > 0 but is known, then the feedback law

    u = −(θ + 1)x

gives ˙x = −x ⇒ x → 0. (Instead of +1 we can use any other positive number.)
But if (as is the case of interest) θ is unknown, this u is not implementable.
−→ Even in this simplest possible example, it’s not obvious what to do.
Adaptive control law:

    ˙ˆθ = x²                (1)
    u = −(ˆθ + 1)x          (2)

Here (1) is the tuning law: it “tunes” the feedback gain.
Closed-loop system:

    ˙x = (θ − ˆθ − 1)x
    ˙ˆθ = x²
Intuition: the growth of ˆθ dominates the linear growth of x, and eventually the feedback gain
ˆθ + 1 becomes large enough to overcome the uncertainty and stabilize the system.
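This intuition is easy to check in simulation. Below is a minimal sketch using forward-Euler integration of the closed loop; the true parameter θ = 2, the initial conditions x(0) = 1, ˆθ(0) = 0, the step size, and the horizon are all arbitrary illustrative choices, not values from the notes:

```python
# Forward-Euler simulation of the closed-loop system
#   x'         = (theta - theta_hat - 1) * x
#   theta_hat' = x^2
# with an "unknown" true parameter theta = 2 (arbitrary test value).

def simulate(theta=2.0, x0=1.0, theta_hat0=0.0, dt=1e-3, T=20.0):
    x, theta_hat = x0, theta_hat0
    for _ in range(int(T / dt)):
        dx = (theta - theta_hat - 1.0) * x   # closed-loop x dynamics
        dth = x * x                          # tuning law for theta_hat
        x += dt * dx
        theta_hat += dt * dth
    return x, theta_hat

x_final, theta_hat_final = simulate()
print("x(T) =", x_final, " theta_hat(T) =", theta_hat_final)
```

The state x converges to zero while the gain ˆθ settles at a finite value; notably, that limit value need not equal the true θ.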
Analysis: let’s try to find a Lyapunov function.
If we take

    V := x²/2

then its derivative along the closed-loop system is

    ˙V = (θ − ˆθ − 1)x²

and this is not guaranteed to be negative.
Besides, V should be a function of both states of the closed-loop system, x and ˆθ.
Actually, with the above V we can still prove stability, although the analysis is more intricate.
We’ll see this later.
Let’s take

    V(x, ˆθ) := x²/2 + (ˆθ − θ)²/2          (3)
The choice of the second term reflects the fact that in principle, we want to have an asymptotically
stable equilibrium at x = 0, ˆθ = θ. In other words, we can think of ˆθ as an estimate of θ. However,
the control objective doesn’t explicitly require that ˆθ → θ.
With V given by (3), we get

    ˙V = (θ − ˆθ − 1)x² + (ˆθ − θ)x² = −x²          (4)
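As a sanity check of the cancellation in (4), the chain-rule computation ˙V = x ˙x + (ˆθ − θ) ˙ˆθ can be evaluated at arbitrary points; the test values below are illustrative, not from the notes:

```python
# Check the identity
#   d/dt [ x^2/2 + (theta_hat - theta)^2/2 ] = -x^2
# along x' = (theta - theta_hat - 1) x and theta_hat' = x^2.

def v_dot(x, theta_hat, theta):
    dx = (theta - theta_hat - 1.0) * x        # closed-loop x dynamics
    dth = x * x                               # tuning law
    return x * dx + (theta_hat - theta) * dth # chain rule applied to V

# The cross terms cancel at every point, leaving -x^2.
for x, th_hat, th in [(1.0, 0.0, 2.0), (-0.5, 3.0, -1.0), (2.0, 2.0, 2.0)]:
    assert abs(v_dot(x, th_hat, th) + x * x) < 1e-12
print("V_dot equals -x^2 at all test points")
```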
Is this enough to prove that x(t) → 0?
Recall:
Theorem 1 (Lyapunov) Let V be a positive definite C¹ function. If its derivative along solutions
satisfies

    ˙V ≤ 0          (5)

everywhere, then the system is stable. If

    ˙V < 0          (6)

everywhere (except at the equilibrium being studied), then the system is asymptotically stable. If
in the latter case V is also radially unbounded (i.e., V → ∞ as the state approaches ∞ along any
direction), then the system is globally asymptotically stable.
From (4) we certainly have (5), hence we have stability (in the sense of Lyapunov). In particular,
both x and ˆθ remain bounded for all time (by a constant depending on initial conditions).
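This boundedness can also be observed numerically: along a simulated trajectory, V never increases, so both |x(t)| and |ˆθ(t) − θ| stay below sqrt(2V(0)). A quick check (the values θ = 2, x(0) = 1, ˆθ(0) = 0 are the same arbitrary illustrative choices as before):

```python
# Along the closed loop, V(t) = x^2/2 + (theta_hat - theta)^2/2 is nonincreasing,
# hence |x(t)| <= sqrt(2 V(0)) for all t (a bound depending on initial conditions).

def V(x, theta_hat, theta):
    return 0.5 * x * x + 0.5 * (theta_hat - theta) ** 2

theta, x, theta_hat, dt = 2.0, 1.0, 0.0, 1e-3
v_prev = V(x, theta_hat, theta)
bound = (2.0 * v_prev) ** 0.5                 # sqrt(2 V(0))
for _ in range(20000):                        # simulate up to T = 20
    x, theta_hat = (x + dt * (theta - theta_hat - 1.0) * x,
                    theta_hat + dt * x * x)   # one Euler step (old values on RHS)
    v = V(x, theta_hat, theta)
    assert v <= v_prev + 1e-9                 # V does not increase
    assert abs(x) <= bound + 1e-9             # stability bound on x
    v_prev = v
print("bound sqrt(2 V(0)) =", bound, "; final V =", v_prev)
```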
On the other hand, we don’t have (6) because ˙V = 0 for (x, ˆθ) with x = 0 and ˆθ arbitrary.
It seems plausible that at least convergence of x to 0 should follow from (4). This is indeed true,
but proving this requires knowing a precise result about weak (nonstrictly decreasing) Lyapunov
functions. We will learn/review such results and will then finish the example.
Some observations from the above example:
• Even though the plant is linear, the control is nonlinear (because of the square terms). To
analyze the closed-loop system, we need nonlinear analysis tools.
• The control law is dynamic as it incorporates the tuning equation for ˆθ. Intuitively, this
equation “learns” the unknown value of θ, providing estimates of θ.
• The standard Lyapunov stability theorem is not enough, and we need to work with a weak
Lyapunov function. As we will see, this is typical in adaptive control, because the estimates
(ˆθ) might not converge to the actual parameter values (θ). We will discuss techniques
for making the parameter estimates converge. However, even without this we can achieve
regulation of the state x to 0 (i.e., have convergence of those variables that we care about).
So, what is adaptive control?
We can think of the tuning law in the above example as an adaptation block in the overall
system—see figure. (The diagram is a bit more general than what we had in the example.)