AIAA 2011-6286
AIAA Guidance, Navigation, and Control Conference
08 - 11 August 2011, Portland, Oregon
DOI: 10.2514/6.2011-6286

Improving Adaptation Performance For Systems With Slow Dynamics

Jonathan A. Muse∗
U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433
When a system has slow modes, achieving good tracking performance in the presence of
modeling error often requires a high adaptation gain, which produces unnecessary high
frequency control effort that can excite unmodeled dynamics. This
paper introduces an adaptive control architecture that allows fast adaptation for systems
with both fast and slow modes. Fast adaptation is achieved using a high bandwidth state
emulator to train the adaptive element. The state emulator allows the drift part of the
adaptation dynamics to be set arbitrarily. This allows a control designer to shift the adaptive
process dynamics to a more favorable set and represents a new strategy for improving the
adaptation process in that no modification terms need to be added to the adaptive law to
improve adaptation. Though not required for system stability, the system tracking error
is kept small via a low bandwidth feedback on the emulator tracking error. The usefulness
of the architecture is illustrated on a nonlinear model for wing rock and a linear model of
a Boeing 747.
I. Introduction
As adaptive control theory has advanced over the past few years, it has reemerged as a dominant force
in the control research community and has sparked much interest in industry.1–7 Model reference adaptive
control (MRAC) has numerous advantages over modern linear model-based control design methods. Classical
methods are limited by uncertainties and nonlinearity. Robust control design reduces the effect of uncertainty
and nonlinearity at the expense of reduced performance. Adaptive control offers the possibility of achieving
a much higher degree of robust performance, particularly in applications that are dominated by the presence
of uncertain flexible dynamics.8, 9 However, a major disadvantage of adaptive control is that it lacks an
accepted means of quantifying the behavior of the control signal a priori. Hence, most adaptive control laws
require a more extensive verification and validation process due to the time varying and nonlinear manner
in which their gains are adapted. Adaptation can also produce unacceptable transients, which
can be made worse by actuator limitations10 and can yield a transient response that exceeds the practical
limits of the plant. One case where adaptive control can lead to unacceptable control input is when fast
adaptation is desired for a system with both fast and slow modes.
In this case, achieving good tracking performance in the presence of modeling error often requires a
high adaptation gain, which leads to unnecessary high frequency control effort that can
excite unmodeled dynamics. This paper introduces a generalization of the adaptive control architecture for
slow reference models previously developed.11, 12 This architecture also allows fast adaptation for systems
with slow reference models. Fast adaptation is achieved using a high bandwidth state emulator to train the
adaptive element. The state emulator allows the drift part of the adaptation dynamics to be set arbitrarily.
This allows a control designer to shift the adaptive process dynamics to a more favorable set and represents
a new strategy for improving the adaptation process in that no modification terms need to be added to
the adaptive law to improve adaptation. Though not required for system stability, the system tracking
error is kept small via a low bandwidth feedback on the emulator tracking error. The previous concept11, 12
only allowed the adaptation drift dynamics to be altered through the range of the control effort. However,
significantly shifting a slow mode in a dynamic system through the range of the control may require large
gain which can degrade performance in a real system (even though the effect of the gain only enters the
∗Research Aerospace Engineer, Control Design and Analysis Branch, Member AIAA.
This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States.
control channel at low frequencies). It turns out that constructing stable adaptation dynamics using standard
adaptive laws does not require the difference between the drift dynamics of the system and the drift dynamics
of adaptation error dynamics to be in the range of the control. The usefulness of the new architecture for
systems with fast and slow modes is illustrated on the same nonlinear model for wing rock and a linear model
of a Boeing 747 used in the previous work. This allows a comparison that shows that similar performance is
possible from the new architecture without the potential limitations of the old architecture.
II. Mathematical Preliminaries

II.A. Projection Operator
The adaptive architecture derived here can, with minor modifications, use any adaptive control law of choice.
However, the projection operator13 allows one to obtain stronger results since asymptotic convergence of
the system to the reference model can be retained for perfect parameterization while the adaptive weights
are guaranteed bounded with a known bound. The following definition of the projection operator is taken
from Refs. 13 and 14.
Definition II.1. Consider a convex compact set with a smooth boundary given by

    \Omega_c \equiv \{\theta \in R^n : f(\theta) \le c\}, \quad 0 \le c \le 1    (1)

where f : R^n \to R is the following smooth convex function

    f(\theta) = \frac{\|\theta\|^2 - \theta_{max}^2}{\epsilon_\theta \theta_{max}^2}    (2)

where \theta_{max} is the norm bound imposed on the parameter vector \theta, and \epsilon_\theta denotes the convergence tolerance
of our choice. Let the true value of the parameter \theta, denoted by \theta^*, belong to \Omega_0, i.e. \theta^* \in \Omega_0. The projection
operator is defined as

    Proj(\theta, y) =
        y                                                                          if f(\theta) < 0
        y                                                                          if f(\theta) \ge 0 and \nabla f^T y \le 0
        y - \frac{\nabla f}{\|\nabla f\|}\left\langle \frac{\nabla f}{\|\nabla f\|}, y \right\rangle f(\theta)    if f(\theta) \ge 0 and \nabla f^T y > 0
    (3)
The next lemma13 allows one to guarantee that adaptive parameters updated with the adaptive law
\dot{\theta}(t) = Proj(\theta(t), y(t)) are contained in a compact invariant set for all t \ge 0.

Lemma II.1. The projection operator Proj(\theta, y) as defined in (3) does not alter y if \theta belongs to the set
\Omega_0 \equiv \{\theta \in R^n : f(\theta) \le 0\}. In the set \{0 \le f(\theta) \le 1\}, if \nabla f^T y > 0, the projection operator subtracts a
vector normal to the boundary \{\theta \in R^n : f(\theta) = c\} so that there is a smooth transformation from
the original vector field y to an inward or tangent vector field for c = 1. Thus, if \theta is the adaptive parameter
and \dot{\theta}(t) = Proj(\theta(t), y(t)), then \theta(t) can never leave \Omega_c.
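The invariance property of Lemma II.1 can be checked numerically. The sketch below is illustrative only: the values of \theta_{max}, \epsilon_\theta, and the update signal y are assumptions, not values from the paper. It integrates \dot{\theta} = Proj(\theta, y) with a forward Euler step and verifies that \theta never leaves \Omega_1.

```python
import numpy as np

# Sketch of the projection operator of Definition II.1 / Eq. (3).
# theta_max and eps_theta are assumed design constants.
theta_max = 2.0
eps_theta = 0.5

def f(theta):
    # Smooth convex function from Eq. (2)
    return (np.dot(theta, theta) - theta_max**2) / (eps_theta * theta_max**2)

def grad_f(theta):
    return 2.0 * theta / (eps_theta * theta_max**2)

def proj(theta, y):
    # Piecewise definition from Eq. (3)
    ft = f(theta)
    g = grad_f(theta)
    if ft < 0 or np.dot(g, y) <= 0:
        return y
    gn = g / np.linalg.norm(g)
    return y - gn * np.dot(gn, y) * ft

# Euler integration of theta_dot = Proj(theta, y) with a constant
# outward-pushing update signal; theta must stay inside Omega_1,
# i.e. ||theta|| <= theta_max * sqrt(1 + eps_theta).
theta = np.zeros(2)
y = np.array([5.0, 3.0])
dt = 1e-3
for _ in range(20000):
    theta = theta + dt * proj(theta, y)

assert np.linalg.norm(theta) <= theta_max * np.sqrt(1.0 + eps_theta) + 1e-6
```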
The next lemma is useful for proving that a chosen Lyapunov function candidate’s time derivatives are
non-positive for the adaptive laws applied in this paper.
Lemma II.2. Given the vectors y = [y_1, \cdots, y_n]^T \in R^n, \theta = [\theta_1, \cdots, \theta_n]^T \in R^n, and \theta^* = [\theta_1^*, \cdots, \theta_n^*]^T \in R^n,
where \theta^* is the constant true value of the parameter \theta. Then

    (\theta - \theta^*)^T (Proj(\theta, y) - y) \le 0    (4)

Proof. Note that

    (\theta - \theta^*)^T (Proj(\theta, y) - y) =
        0                                                                              f(\theta) < 0
        0                                                                              f(\theta) \ge 0, \nabla f^T y \le 0
        (\theta^* - \theta)^T \frac{\nabla f}{\|\nabla f\|}\left\langle \frac{\nabla f}{\|\nabla f\|}, y \right\rangle f(\theta)    f(\theta) \ge 0, \nabla f^T y > 0
    (5)

Since the angle between \theta^* - \theta and \nabla f is greater than 90 degrees by definition of the projection operator,
we have the result.
This property can be applied when \theta and y are matrices.14 \theta and y become matrices in this paper when
the uncertainty has dimension greater than one. In this case, the projection operator is defined column-wise
as

    Proj(\Theta, Y) = (Proj_1(\theta_1, y_1) \;\; \ldots \;\; Proj_N(\theta_N, y_N))    (6)

where

    Y = (y_1 \;\ldots\; y_N) \in R^{n \times N}  and  \Theta = (\theta_1 \;\ldots\; \theta_N) \in R^{n \times N}

and y_i and \theta_i are vectors in R^n. Note that each vector \theta_i may have a different projection bound \theta_{max}.
To see how the previous lemma is applied with this new definition of the projection operator in a stability
proof, consider the following expression

    -2\hat{e}^T(t) P D \tilde{W}^T(t)\beta(x(t)) + 2\,tr\big(\tilde{W}^T(t)\Gamma^{-1}\dot{\tilde{W}}(t)\big)    (7)

where \hat{e}(t) is the error state, \tilde{W} is the weight estimation error matrix, \hat{W} is the uncertainty weight estimate
matrix, \beta(x) is a set of basis functions, P and D are matrices, and \Gamma is a constant positive definite matrix.
This arrangement of variables will appear in this paper. Equation (7) can be rewritten in the form

    2\,tr\big[\tilde{W}^T(t)\big(\Gamma^{-1}\dot{\tilde{W}}(t) - \beta(x(t))\hat{e}^T(t)PD\big)\big]

If the adaptive law is defined as

    \dot{\hat{W}}(t) = \Gamma\, Proj\big(\hat{W}(t), \beta(x(t))\hat{e}^T(t)PD\big)

then, using the properties of the trace operator and the projection operator, one has that

    2\,tr\big[\tilde{W}^T(t)\big(\Gamma^{-1}\dot{\tilde{W}}(t) - \beta(x(t))\hat{e}^T(t)PD\big)\big]
        = 2\sum_{j=1}^{N} \tilde{W}_j^T(t)\Big(Proj_j\big(\hat{W}_j, (\beta(x(t))\hat{e}^T(t)PD)_j\big) - (\beta(x(t))\hat{e}^T(t)PD)_j\Big) \le 0

This is a fundamental property of the projection operator and will be used in the paper.
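This fundamental trace inequality can be spot-checked numerically. The sketch below uses arbitrary random test data (an assumption for illustration, not data from the paper) and verifies tr(\tilde{W}^T(Proj(\hat{W}, Y) - Y)) \le 0 column by column, as in Eq. (6), provided the true weight columns lie inside \Omega_0.

```python
import numpy as np

# Numeric spot-check of the column-wise projection property:
# tr( (What - Wtrue)^T (Proj(What, Y) - Y) ) <= 0 for the convex f of Eq. (2).
rng = np.random.default_rng(0)
theta_max, eps_theta = 1.0, 0.3

def f(th):
    return (th @ th - theta_max**2) / (eps_theta * theta_max**2)

def proj_col(th, y):
    # One column of the operator in Eq. (3)
    g = 2.0 * th / (eps_theta * theta_max**2)   # gradient of f
    if f(th) < 0 or g @ y <= 0:
        return y
    gn = g / np.linalg.norm(g)
    return y - gn * (gn @ y) * f(th)

for _ in range(1000):
    # true weight columns lie inside Omega_0 (norm <= theta_max)
    W_true = rng.uniform(-0.4, 0.4, size=(3, 4))
    W_hat = rng.normal(size=(3, 4))
    Y = rng.normal(size=(3, 4))
    P = np.column_stack([proj_col(W_hat[:, j], Y[:, j]) for j in range(4)])
    val = np.trace((W_hat - W_true).T @ (P - Y))
    assert val <= 1e-12
```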
II.B. Useful L Stability Properties

Proofs in this paper use input-output stability to show stability and bounded transients. Excellent references
for L stability theory include, but are not limited to, Refs. 15 and 16. L_p stability studies input-output maps of the form

    y(t) = Hu(t)

where H is an operator that describes y : [0, \infty) \to R^n in terms of u : [0, \infty) \to R^m. A piecewise continuous
signal u(t) exists in the space L_p if

    \|u\|_{L_p} = \left(\int_0^\infty \|u(t)\|_q^p \, dt\right)^{1/p} < \infty, \quad 1 \le p < \infty

or, for p = \infty, if

    \|u\|_{L_\infty} = \sup_{t \ge 0} \|u(t)\|_q < \infty

where q specifies the spatial norm defined by

    \|x\|_q = (|x_1|^q + \cdots + |x_n|^q)^{1/q}, \quad x \in R^n, \quad 1 \le q < \infty
or, when q = \infty,

    \|x\|_\infty = \max_{i=1...n} |x_i|, \quad x \in R^n

The input-output mapping H : L^m \to L^m cannot be defined properly for all signals (i.e. unstable systems).
Usually, H is defined as a mapping from the extended space L_e^m to the extended space L_e^m,
where

    L_e^m = \{u : u_\tau \in L^m, \;\; \forall \tau \in [0, \infty)\}

and

    u_\tau(t) =
        u(t)    0 \le t \le \tau
        0       t > \tau
The next lemma16 shows that the L_1 norm of a system's convolution kernel provides an upper
bound on the induced gain for all signals in L_p.

Lemma II.3. Consider the system defined by the causal convolution operator

    y(t) = \int_0^t h(t - \sigma)u(\sigma)\,d\sigma

where y(t) is the system output, h(t) is the convolution kernel, and u(t) is the system input. If u \in L_p and
h \in L_1, then

    \|y\|_{L_p} \le \|h\|_{L_1}\|u\|_{L_p}

where p \in [1, \infty].
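Lemma II.3 can be illustrated numerically. The kernel and input below are arbitrary illustrative choices (not from the paper): for h(t) = e^{-2t}, \|h\|_{L_1} = 1/2, so \|y\|_{L_\infty} \le 0.5\,\|u\|_{L_\infty} for any bounded u.

```python
import numpy as np

# Discrete check of ||y||_Linf <= ||h||_L1 * ||u||_Linf (Lemma II.3).
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
h = np.exp(-2.0 * t)                  # convolution kernel, ||h||_L1 ~= 0.5
u = np.sign(np.sin(3.0 * t))          # bounded input, ||u||_Linf = 1

y = np.convolve(h, u)[: len(t)] * dt  # causal convolution integral

h_l1 = np.sum(np.abs(h)) * dt         # discrete approximation of ||h||_L1
assert np.max(np.abs(y)) <= h_l1 * np.max(np.abs(u)) + 1e-9
```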
III. Slow Systems Architecture
Consider the uncertain nonlinear dynamical system defined by

    \dot{x}(t) = Ax(t) + B\Lambda u(t) + Bf(x(t))    (8)

where A \in R^{n \times n}, B \in R^{n \times m} is a matrix with full column rank, x(t) \in R^n is the system state, u(t) \in R^m is
the system control input, \Lambda \in R^{m \times m} is a constant unknown positive definite matrix, and f : R^n \to R^m is
an unknown function of the system state. In this case, it is assumed that \Lambda can be decomposed as

    \Lambda = I + \delta\Lambda

where \delta\Lambda \le I, and that \{A, B\} is a stabilizable pair. The previous assumptions are sufficient conditions
for system controllability. It is also assumed that f(x) can be linearly parameterized to within a bounded
approximation error as

    f(x) = W^T\beta(x) + \epsilon(x), \quad \forall x \in R^n    (9)

where \beta : R^n \to R^j is a set of known locally Lipschitz functions, W \in R^{j \times m} is a set of constant but unknown
ideal weights, and \epsilon : R^n \to R^m is unknown, locally Lipschitz continuous, and bounded by \epsilon^* \in R^+ as

    \|\epsilon(x)\| \le \epsilon^* < \infty, \quad \forall x \in R^n    (10)
Suppose that the total control effort is defined as

    u(t) = u_n(t) - u_{ad}(t)    (11)

where u_{ad}(t) will be defined shortly and

    u_n(t) = -K_x x(t) + K_r r(t)    (12)

is an existing nominal control law that achieves the desired response of the system assuming that \Lambda = I and
the system uncertainty is zero. This nominal control law defines the ideal system behavior as

    \dot{x}_m(t) = A_m x_m(t) + B_m r(t), \quad x_m(0) = x(0)    (13)

where A_m = A - BK_x is assumed to be Hurwitz and B_m = BK_r (the reference model often used in MRAC
architectures).
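The reference model of Eq. (13) is an ordinary stable linear system and is easy to sanity-check in simulation. The sketch below uses the second-order design values that appear later in the wing rock example (\omega_n = 1 rad/s, \zeta = 0.707); the unit step command is an arbitrary illustrative input.

```python
import numpy as np

# Reference model x_m_dot = Am x_m + Bm r with Am = A - B Kx, Bm = B Kr.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
wn, zeta = 1.0, 0.707
Kx = np.array([[wn**2, 2.0 * zeta * wn]])
Kr = wn**2

Am = A - B @ Kx          # Hurwitz by design
Bm = B * Kr

# Euler simulation of a unit step command
xm = np.zeros(2)
dt, r = 1e-3, 1.0
for _ in range(int(20.0 / dt)):
    xm = xm + dt * (Am @ xm + Bm[:, 0] * r)

assert np.all(np.linalg.eigvals(Am).real < 0)   # stable reference model
assert abs(xm[0] - 1.0) < 1e-2                  # unity DC gain for this design
```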
A state emulator (structurally similar to a series-parallel model18) will be used to separate the adaptation
process from the control realization. Let the state emulator in this paper be defined as

    \dot{\hat{x}}(t) = \bar{A}(\hat{x}(t) - x(t)) + Ax(t) + B\hat{\Lambda}u(t) + B\hat{W}^T(t)\beta(x(t))    (14)

where \hat{x} \in R^n is the state emulator state, \bar{A} \in R^{n \times n} is any Hurwitz matrix, \hat{W} is the uncertainty weight
estimate, and \hat{\Lambda} \in R^{m \times m} is an estimate for \Lambda (the term estimate is used loosely). If the state emulator
tracking error is defined as \hat{e}(t) = x(t) - \hat{x}(t), the weight update laws are defined as

    \dot{\hat{W}}(t) = \Gamma_W\, Proj\big(\hat{W}(t), \beta(x(t))\hat{e}^T(t)PB\big)
    \dot{\delta\hat{\Lambda}}(t) = \Gamma_\Lambda\, Proj\big(\delta\hat{\Lambda}(t), B^TP\hat{e}(t)u^T(t)\big)    (15)

where Proj(\cdot, \cdot) is the projection operator13 defined using a known bound on the unknown weights, and P \in R^{n \times n}
such that P = P^T > 0 is the solution of the Lyapunov equation

    \bar{A}^TP + P\bar{A} + Q = 0    (16)

where Q \in R^{n \times n} is any matrix such that Q = Q^T > 0, and \hat{\Lambda} = I + \delta\hat{\Lambda}. Using these definitions, the adaptive
control signal is given by

    u_{ad}(t) = \big(I + \delta\hat{\Lambda}\big)^{-1}\big[\delta\hat{\Lambda}u_n(t) + \hat{W}^T(t)\beta(x(t)) + u_{ad_s}(t)\big]    (17)

where u_{ad_s}(t) is defined in the Laplace domain as

    U_{ad_s}(s) = F_c(s)\hat{E}(s)    (18)

F_c(s) is defined as

    F_c(s) = G_c(s)\big(B^TB\big)^{-1}B^T\big(sI - \bar{A}\big)    (19)

and G_c(s) is a low pass filter with the following realization

    \dot{x}_c(t) = A_c x_c(t) + B_c u_c(t)
    y_c(t) = C_c x_c(t)    (20)

Since G_c(s) is strictly proper, F_c(s) is proper and realizable. This defines the complete adaptive control
architecture.
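The design step in Eqs. (14)-(16) reduces to picking a fast Hurwitz \bar{A} and solving one Lyapunov equation. A minimal sketch (the particular \bar{A} below is an assumed second-order choice anticipating Eq. (51), not a prescribed value):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve A_bar^T P + P A_bar + Q = 0 (Eq. (16)) for the P used in Eq. (15).
wn, zeta = 10.0, 0.707
A_bar = np.array([[0.0, 1.0],
                  [-wn**2, -2.0 * zeta * wn]])   # Hurwitz by construction
Q = np.eye(2)

# scipy solves A X + X A^H = Q_rhs, so pass A_bar^T and -Q
P = solve_continuous_lyapunov(A_bar.T, -Q)

assert np.allclose(A_bar.T @ P + P @ A_bar + Q, 0.0)
assert np.all(np.linalg.eigvalsh(P) > 0)          # P = P^T > 0
```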
Next, the state emulator error dynamics are examined. Let the weight estimation errors be defined as
\tilde{W} = \hat{W} - W and \delta\tilde{\Lambda} = \delta\hat{\Lambda} - \delta\Lambda; then the state emulator error dynamics can be expressed as

    \dot{\hat{e}}(t) = \bar{A}\hat{e}(t) - B\big(\tilde{W}^T(t)\beta(x(t)) + \delta\tilde{\Lambda}(t)u(t) - \epsilon(x(t))\big)    (21)

The above dynamics are used to train the adaptive weights. Note how the above dynamics differ from
those typically used in adaptive control. In this case, the arbitrarily selected matrix \bar{A} replaces the matrix
A_m in a standard set of adaptation error dynamics. This gives a degree of freedom in the adaptive design
that is fundamentally different than most modifications to adaptive laws. Most modifications developed to
improve the performance of an adaptation process do so by changing how the weights
update via modifications to the weight update law (i.e., by modifying equation (15)). In contrast, this
method changes the adaptive law training signal to improve adaptation performance.

The following theorem shows that \hat{e}(t) remains bounded.
Theorem III.1. Consider the uncertain nonlinear dynamical system in equation (8), the state emulator
defined in equation (14), the adaptive weight update laws defined in equation (15), and the control signal
defined in equations (11), (12), and (17). Let Q \in R^{n \times n} be such that Q = Q^T > 0 and let P \in R^{n \times n} be the
solution of the Lyapunov equation in equation (16). Then the emulator error, \hat{e}(t), is bounded.
Moreover, if \hat{e}(0) = 0 and the unknown ideal weights satisfy \|W\|_F \le \hat{W}_{F,max} and \|\delta\Lambda\|_F \le \delta\hat{\Lambda}_{F,max},
where \|\cdot\|_F represents the Frobenius norm and \hat{W}_{F,max} and \delta\hat{\Lambda}_{F,max} are the maximum allowed Frobenius
norms of \hat{W}(t) and \delta\hat{\Lambda}(t) set by each respective projection operator, then \hat{e}(t) is bounded \forall t \ge 0 by

    \|\hat{e}(t)\|_2 \le 2\sqrt{\frac{1}{\lambda_{min}(P)}\left[\frac{\lambda_{max}(P)\|PB\|_F^2(\epsilon^*)^2}{\lambda_{min}^2(Q)} + \frac{\hat{W}_{F,max}^2}{\lambda_{min}(\Gamma_W)} + \frac{\delta\hat{\Lambda}_{F,max}^2}{\lambda_{min}(\Gamma_\Lambda)}\right]}    (22)
Proof. First, boundedness of \hat{e}(t) is shown. Consider the Lyapunov function candidate

    V(\hat{e}(t), \delta\tilde{\Lambda}(t), \tilde{W}(t)) = \hat{e}^T(t)P\hat{e}(t) + tr\big(\delta\tilde{\Lambda}(t)\Gamma_\Lambda^{-1}\delta\tilde{\Lambda}^T(t)\big) + tr\big(\tilde{W}^T(t)\Gamma_W^{-1}\tilde{W}(t)\big)    (23)

where P is the solution of the Lyapunov equation in equation (16). Computing the Lyapunov derivative of
V along the system trajectories, one has that

    \dot{V} = 2\hat{e}^T(t)P\dot{\hat{e}}(t) + 2\,tr\big(\tilde{W}^T(t)\Gamma_W^{-1}\dot{\tilde{W}}(t)\big) + 2\,tr\big(\delta\tilde{\Lambda}^T(t)\Gamma_\Lambda^{-1}\dot{\delta\tilde{\Lambda}}(t)\big)    (24)

Substituting the emulator error dynamics in equation (21) and applying properties of the trace operator,
equation (24) becomes

    \dot{V} = \hat{e}^T(t)\big(\bar{A}^TP + P\bar{A}\big)\hat{e}(t) + 2\hat{e}^T(t)PB\epsilon(x(t))
        + 2\,tr\big[\tilde{W}^T(t)\big(\Gamma_W^{-1}\dot{\hat{W}}(t) - \beta(x(t))\hat{e}^T(t)PB\big)\big]
        + 2\,tr\big[\delta\tilde{\Lambda}^T(t)\big(\Gamma_\Lambda^{-1}\dot{\delta\hat{\Lambda}}(t) - B^TP\hat{e}(t)u^T(t)\big)\big]    (25)

Next, applying the adaptive laws from equation (15) and making use of the properties of the projection
operator from Section II.A, the derivative of equation (23) along the system trajectories is bounded by

    \dot{V} \le -\hat{e}^T(t)Q\hat{e}(t) + 2\hat{e}^T(t)PB\epsilon(x(t))    (26)

Since \|\epsilon(x)\|_2 \le \epsilon^* \;\; \forall x \in R^n,

    \|\hat{e}(t)\| > \frac{2\|PB\|_F\,\epsilon^*}{\lambda_{min}(Q)}    (27)

implies that \dot{V} \le 0, whence \hat{e}(t) is ultimately bounded.
Next, the bound for \hat{e}(t) is derived.11 The adaptive weights are bounded due to the projection operator.
The projection operator ensures that there exist bounds \tilde{W}_{F,max} and \delta\tilde{\Lambda}_{F,max} such that

    \|\tilde{W}(t)\|_F \le \tilde{W}_{F,max}  and  \|\delta\tilde{\Lambda}(t)\|_F \le \delta\tilde{\Lambda}_{F,max}    (28)

where \|X\|_{F,max} is the maximum Frobenius norm of X(t), \forall t \ge 0. This implies that the Lyapunov-like
function derivative satisfies \dot{V} \le 0 outside the compact set

    \alpha = \left\{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) : \|\hat{e}\|_2 \le \frac{2\|PB\|_F\,\epsilon^*}{\lambda_{min}(Q)}\right\}
        \cap \left\{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) : \|\tilde{W}\|_F \le \tilde{W}_{F,max}\right\}
        \cap \left\{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) : \|\delta\tilde{\Lambda}\|_F \le \delta\tilde{\Lambda}_{F,max}\right\}
V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) cannot grow outside this set. Hence, the evolution of V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) is upper
bounded by

    V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le \max_{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) \in \alpha} V(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}), \quad t \ge 0    (29)

From the definition of the Lyapunov-like candidate in equation (23),

    V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) = \hat{e}^T(t)P\hat{e}(t) + tr\big(\tilde{W}^T(t)\Gamma_W^{-1}\tilde{W}(t)\big) + tr\big(\delta\tilde{\Lambda}(t)\Gamma_\Lambda^{-1}\delta\tilde{\Lambda}^T(t)\big)    (30)

Therefore, from equation (29), \forall t \ge 0,

    V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le \max_{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) \in \alpha} V(\hat{e}, \tilde{W}, \delta\tilde{\Lambda})
        \le \max_{(\hat{e}, \tilde{W}, \delta\tilde{\Lambda}) \in \alpha} \big[\lambda_{max}(P)\|\hat{e}\|_2^2 + tr(\tilde{W}^T\Gamma_W^{-1}\tilde{W}) + tr(\delta\tilde{\Lambda}^T\Gamma_\Lambda^{-1}\delta\tilde{\Lambda})\big]    (31)

Using properties of the trace operator,

    tr(\tilde{W}^T\Gamma_W^{-1}\tilde{W}) \le \frac{1}{\lambda_{min}(\Gamma_W)}\,tr(\tilde{W}^T\tilde{W})
    tr(\delta\tilde{\Lambda}^T\Gamma_\Lambda^{-1}\delta\tilde{\Lambda}) \le \frac{1}{\lambda_{min}(\Gamma_\Lambda)}\,tr(\delta\tilde{\Lambda}^T\delta\tilde{\Lambda})    (32)

Applying these inequalities, from the definition of the set \alpha(\cdot, \cdot),

    V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le \frac{4\lambda_{max}(P)\|PB\|_F^2(\epsilon^*)^2}{\lambda_{min}^2(Q)} + \frac{\tilde{W}_{F,max}^2}{\lambda_{min}(\Gamma_W)} + \frac{\delta\tilde{\Lambda}_{F,max}^2}{\lambda_{min}(\Gamma_\Lambda)}    (33)

Noting that

    \|\tilde{W}(t)\|_F \le \|W\|_F + \|\hat{W}(t)\|_F \le 2\hat{W}_{F,max}, \quad \forall t \ge 0    (34)

and

    \|\delta\tilde{\Lambda}(t)\|_F \le \|\delta\Lambda\|_F + \|\delta\hat{\Lambda}(t)\|_F \le 2\delta\hat{\Lambda}_{F,max}, \quad \forall t \ge 0    (35)

where \hat{W}_{F,max} and \delta\hat{\Lambda}_{F,max} are known bounds set by the projection operator, it is found that

    V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le \frac{4\lambda_{max}(P)\|PB\|_F^2(\epsilon^*)^2}{\lambda_{min}^2(Q)} + \frac{4\hat{W}_{F,max}^2}{\lambda_{min}(\Gamma_W)} + \frac{4\delta\hat{\Lambda}_{F,max}^2}{\lambda_{min}(\Gamma_\Lambda)}    (36)

Since equation (30) implies that

    \lambda_{min}(P)\|\hat{e}(t)\|^2 \le V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t))    (37)

the result in equation (22) is obtained.
Using the previous theorem showing that the emulator error is bounded, it can be shown that the system
tracking error, e(t) = x_m(t) - x(t), is bounded and converges to zero asymptotically when \epsilon(x) = 0. Toward
this goal, the system dynamics in equation (8) can be rewritten as

    \dot{x}(t) = Ax(t) + B\Lambda u(t) + B\big(W^T\beta(x(t)) + \epsilon(x(t))\big)
             = A_m x(t) + B_m r(t) - Bu_{ad}(t) + B\delta\Lambda u(t) + B\big(W^T\beta(x(t)) + \epsilon(x(t))\big)

Defining the tracking error between the reference model and the system state as e(t) = x_m(t) - x(t),
the tracking error dynamics are computed as

    \dot{e}(t) = A_m e(t) + Bu_{ad}(t) - B\big[\delta\Lambda u(t) + W^T\beta(x(t)) + \epsilon(x(t))\big]
The choice of u_{ad}(t) in equation (17) implies the following relationship

    u_{ad}(t) = \delta\hat{\Lambda}(t)u(t) + \hat{W}^T(t)\beta(x(t)) + u_{ad_s}(t)    (38)

which implies that the tracking error dynamics, e(t), can be expressed as

    \dot{e}(t) = A_m e(t) + Bu_{ad_s}(t) + B\big[\delta\tilde{\Lambda}(t)u(t) + \tilde{W}^T(t)\beta(x(t)) - \epsilon(x(t))\big]
From the state emulator error dynamics in equation (21) and the fact that B has full column rank, the
following relationship is obtained

    \tilde{W}^T(t)\beta(x(t)) + \delta\tilde{\Lambda}(t)u(t) - \epsilon(x(t)) = -\big(B^TB\big)^{-1}B^T\big(\dot{\hat{e}}(t) - \bar{A}\hat{e}(t)\big)    (39)

This implies that

    \dot{e}(t) = A_m e(t) + Bu_{ad_s}(t) - B\big(B^TB\big)^{-1}B^T\big(\dot{\hat{e}}(t) - \bar{A}\hat{e}(t)\big)

This system is linear and its Laplace transform exists. Assuming that \hat{e}(0) = 0, in the Laplace domain

    sE(s) = A_m E(s) + BU_{ad_s}(s) - B\big(B^TB\big)^{-1}B^T\big(sI - \bar{A}\big)\hat{E}(s)    (40)

where E(s), \hat{E}(s), and U_{ad_s}(s) are the Laplace transforms of e(t), \hat{e}(t), and u_{ad_s}(t), respectively. Using some
algebra and the definition of U_{ad_s}(s), this simplifies to

    sE(s) = A_m E(s) + B\big(G_c(s) - I\big)\big(B^TB\big)^{-1}B^T\big(sI - \bar{A}\big)\hat{E}(s)

or equivalently

    E(s) = (sI - A_m)^{-1}B\big(G_c(s) - I\big)\big(B^TB\big)^{-1}B^T\big(sI - \bar{A}\big)\hat{E}(s)    (41)

The above transfer function is proper since (sI - A_m)^{-1}B is strictly proper and the matrix polynomial
(B^TB)^{-1}B^T(sI - \bar{A}) does not contain any polynomial of s with order greater than 1. Using equation (41),
the desired properties of e(t) are derived in the following theorem.
Theorem III.2. Consider the uncertain nonlinear dynamical system in equation (8), the state emulator
defined in equation (14), the adaptive weight update laws defined in equation (15), and the control signal
defined in equations (11), (12), and (17). Assume that the statements and conditions of Theorem III.1 hold
and let F be a realization of equation (41) such that

    F = \begin{bmatrix} A_F & B_F \\ C_F & D_F \end{bmatrix}    (42)

Then, assuming that the initial state of F is zero, the system tracking error, e(t), is bounded as

    \|e\|_{L_\infty} \le \max_{i=1...n}\big[\|f_i\|_{L_1} + \|D_{F_i}\|_1\big]\|\hat{e}\|_{L_\infty}    (43)

where D_{F_i} is the ith row of D_F, the convolution kernel is defined as

    f(t) = C_F e^{A_F t} B_F

and f_i is the ith row of f. Moreover, if \epsilon(x) = 0 and r(t) is bounded, \hat{e}(t) \to 0 as t \to \infty and e(t) \to 0 as
t \to \infty.
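The high pass weighting G_c(s) - I in equation (41) can be illustrated numerically. The sketch below uses an assumed two-state, single-input example (the matrices and filter pole are illustrative choices, not values from the paper) and shows that the frequency-response gain from \hat{E}(s) to E(s) is attenuated at low frequency, where G_c(j\omega) \approx 1.

```python
import numpy as np

# Gain of E(s)/E_hat(s) from Eq. (41) with Gc(s) = p/(s+p) first order.
A_m = np.array([[0.0, 1.0], [-1.0, -1.414]])       # slow reference dynamics
A_bar = np.array([[0.0, 1.0], [-100.0, -14.14]])   # fast emulator dynamics
B = np.array([[0.0], [1.0]])
p = 10.0                                           # assumed filter pole (rad/s)

def tf_gain(w):
    s = 1j * w
    Gc = p / (s + p)
    M = np.linalg.solve(s * np.eye(2) - A_m, B)        # (sI - Am)^-1 B
    pinv = np.linalg.inv(B.T @ B) @ B.T                # (B^T B)^-1 B^T
    F = M * (Gc - 1.0) @ pinv @ (s * np.eye(2) - A_bar)
    return np.max(np.abs(F))

# Gc - 1 is high pass, so the low-frequency gain is much smaller
assert tf_gain(0.01) < tf_gain(5.0)
```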
Proof. The first claim, equation (43), is immediate from equation (41) by applying standard
L stability arguments (for details, see Chapter 4 of An H∞ Norm Minimization Approach For Adaptive
Control11).

Now suppose that \epsilon(x) = 0. With \epsilon(x) = 0, the emulator error dynamics become

    \dot{\hat{e}}(t) = \bar{A}\hat{e}(t) - B\big(\tilde{W}^T(t)\beta(x(t)) + \delta\tilde{\Lambda}(t)u(t)\big)    (44)

and the derivative along the system trajectories of the Lyapunov function in equation (23) reduces to

    \dot{V}(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le -\lambda_{min}(Q)\|\hat{e}(t)\|^2

From the definition of the Lyapunov function, this implies that

    \hat{e}^T(t)P\hat{e}(t) \le V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) \le V(\hat{e}(0), \tilde{W}(0), \delta\tilde{\Lambda}(0))    (45)

Whence, V(\hat{e}(t), \tilde{W}(t), \delta\tilde{\Lambda}(t)) and \hat{e}(t) are uniformly bounded. Integrating the Lyapunov derivative bound,
a new bound is obtained as

    0 \le \int_0^t \hat{e}^T(\tau)Q\hat{e}(\tau)\,d\tau \le V(\hat{e}(0), \tilde{W}(0), \delta\tilde{\Lambda}(0))    (46)

due to the fact that \hat{e}^TQ\hat{e} is non-negative. Since x(t) is also bounded, \dot{\hat{e}}(t) is bounded
uniformly in t for all t \ge 0. Hence, \hat{e}(t) is uniformly continuous in t on [0, \infty), which implies that
\hat{e}^T(t)Q\hat{e}(t) is uniformly continuous in t on [0, \infty). Hence, Barbalat's Lemma16 implies
that \hat{e}^T(t)Q\hat{e}(t) \to 0 as t \to \infty, and equation (46) implies that \hat{e} \in L_2. From the previously derived norm
bound, it is known that

    \|e\|_{L_2} \le \max_{i=1...n}\big[\|f_i\|_{L_1} + \|D_{F_i}\|_1\big]\|\hat{e}\|_{L_2}    (47)

Hence, e \in L_2, which implies that e(t) \to 0 as t \to \infty.
The previous theorem yields some important insight. First, the control signal u_{ad_s}(t) is not strictly
necessary (i.e. one could choose u_{ad_s}(t) = 0). The use of the emulator error to train the adaptive law allows
one to set the adaptation dynamics independently of the reference model dynamics without additional
control effort. However, in practice, u_{ad_s}(t) may be necessary to achieve tight tracking bounds. Another
important thing to notice from the theorem is that G_c(s) - I in equation (41) acts as a high pass filter that
can be used to attenuate the system tracking error relative to the emulator error. Note that the properties
of the adaptive law in this paper are analogous to those possessed by many adaptive laws in the literature.
IV. Wing Rock Example
Consider the following idealized model of aircraft wing rock dynamics.19

    \dot{x} = Ax + B(u + f(x))
    y = Cx    (48)

where

    A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C^T = \begin{bmatrix} 1 \\ 0 \end{bmatrix}    (49)

u is the control moment, f(x) is a set of unknown system nonlinearities, x = [\phi \;\; \dot{\phi}]^T, and \phi is the aircraft
roll angle. The system nonlinearities are defined as

    f(x) = 0.1 + \phi + 2\dot{\phi} - 0.6|\phi|\dot{\phi} + 0.1|\dot{\phi}|\dot{\phi} + 0.2\phi^3    (50)
This model choice assumes that the unknown high frequency gain is unity and is the same as that used in previous
work.12 It is used again in this paper to allow a comparison between the old and new approach. Suppose
that it is desired that the system behave like a second order system with natural frequency \omega_n = 1 rad/s and
damping ratio \zeta = 0.707. With this choice, the nominal control law defined in (12) is given by K_x = [\omega_n^2 \;\; 2\zeta\omega_n]
and K_r = \omega_n^2. The closed loop is unstable when only this nominal control law
is used.
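This instability is easy to reproduce in simulation. The sketch below applies only the nominal control u_n = -K_x x + K_r r to the wing rock dynamics of Eqs. (48)-(50) (the step size, horizon, and safety cutoff are arbitrary choices): the destabilizing \phi + 2\dot{\phi} terms inside f(x) overpower the slow nominal loop, so the 10 degree step command is not tracked.

```python
import numpy as np

# Wing rock plant under nominal-only control (no adaptation).
wn, zeta = 1.0, 0.707
Kx = np.array([wn**2, 2.0 * zeta * wn])
Kr = wn**2

def f(x):
    phi, phid = x
    return 0.1 + phi + 2*phid - 0.6*abs(phi)*phid + 0.1*abs(phid)*phid + 0.2*phi**3

def step(x, r, dt):
    u = -Kx @ x + Kr * r            # nominal control law, Eq. (12)
    xdot = np.array([x[1], u + f(x)])
    return x + dt * xdot

x = np.array([0.0, 0.0])
r = np.deg2rad(10.0)                # 10 degree roll step command
for _ in range(int(20.0 / 1e-3)):
    x = step(x, r, 1e-3)
    if np.linalg.norm(x) > 1e3:     # stop once divergence is evident
        break

# the nominal-only loop fails to settle at the commanded 10 degrees
assert np.linalg.norm(x) > 1.0
```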
For comparison purposes, the performance of a standard adaptive architecture13 was also investigated.
The equations for the standard adaptive control law used can be recovered by setting Ā = Am and uads (t) = 0
in equations (14) and (17) since the emulator dynamics collapse to the reference model dynamics in equation
(13). The system nonlinearity representation used in both the standard adaptive law and the new architecture
was a radial basis function neural network with 121 basis functions uniformly distributed over a unit cube
and an added bias term. If the desired closed loop performance was fast enough, the standard adaptive law
performed well. For illustration purposes, the closed loop system response and control effort for the standard
adaptive law with a reference model natural frequency of ωn = 5 rad/s is shown in Figures 1 and 2 for a
10 degree step command. However, when the reference model natural frequency was lowered to the desired
natural frequency of ωn = 1 rad/s, the performance of the standard law degraded significantly. One can
vary the choice of adaptive gain over several orders of magnitude and achieve the same trend in the results.
Good tracking and a low frequency control signal cannot be obtained simultaneously. An example of good
tracking with a poor quality control signal was obtained when the adaptive gains are set to ΓW = 10000
and ΓΛ = 10. This result is shown in Figures 3 and 4. In this case, the control signal was highly oscillatory
and was relatively large in amplitude. This causes the system to oscillate somewhat around the reference model
trajectory and could excite unmodeled dynamics in a real system.
The new adaptive architecture was designed so that \bar{A} has the form

    \bar{A} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}    (51)

where \omega_n = 10 rad/s and \zeta = 0.707. G_c(s) from equation (20) was implemented as a first order filter with
the filter pole selected as 10 1/s. The performance of the new adaptive control architecture for the same 10
degree step command is shown in Figures 3 and 4 using the same choices for \Gamma_W and \Gamma_\Lambda. In this case, the
tracking performance is similar but the control signal is smooth. The F_c transfer function used to compute
u_{ad_s} remains low bandwidth. The frequency response of F_c(s) is shown in Figure 5. Note that a potentially
better choice of \bar{A} is possible, but \bar{A} was chosen to be the same as in previous work.12 This allows direct
comparison between the results.
V. Boeing 747 Example
The next example, based on a model for a Boeing 747, introduces a set of dynamics for which it is difficult for
the standard projection based adaptive law to perform well. Consider the following model of the longitudinal
dynamics of a Boeing 747.20

    \dot{x} = Ax + B(\delta_e + W^T x)
    y_t = C_t x

where

    A = \begin{bmatrix}
        -0.006868 & 0.01395 & 0 & -32.2 \\
        -0.09055 & -0.3151 & 773.98 & 0 \\
        0.0001187 & -0.001026 & -0.4285 & 0 \\
        0 & 0 & 1 & 0
    \end{bmatrix}, \quad
    B = \begin{bmatrix} -0.000187 \\ -17.85 \\ -1.158 \\ 0 \end{bmatrix}, \quad
    C_t^T = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

W is a set of unknown weights, x = [\Delta u \;\; w \;\; q \;\; \Delta\theta]^T, \Delta u is the change in the x-body velocity, w is the
z-body velocity, q is the pitch rate, \Delta\theta is the change in pitch angle, and \delta_e is the elevator command. y_t and
C_t describe the desired tracking variable, \Delta\theta. For demonstration purposes, suppose the unknown weights
are given by

    W = \begin{bmatrix} -0.01 & -0.01 & -0.01 & -0.01 \end{bmatrix}^T
where the known magnitude bound on each weight is 0.03. The baseline control design was computed from
LQR theory. To create the LQR design, the dynamics were augmented with an integrator in the
control design to ensure that the nominal system has zero steady state error in the tracking variable to a
change in pitch command. The augmented matrices used in the design are given by

    A_{aug} = \begin{bmatrix} 0 & C_t \\ 0 & A \end{bmatrix}    (52)

and

    B_{aug} = \begin{bmatrix} 0 & B^T \end{bmatrix}^T

From the definitions of these matrices, the augmented state vector for the system is given by \bar{x} = [x_{int} \;\; x^T]^T,
where x_{int} is the integrator state. The LQR weighting matrices for the baseline LQR design are

    Q_{A_m} = diag([1 \times 10^6 \;\; 0.1 \;\; 6 \;\; 1 \;\; 1 \times 10^3]), \quad R_{A_m} = 1 \times 10^5    (53)

where Q_{A_m} is the state weighting matrix and R_{A_m} is the control weighting matrix. In order to realize a
tracking control law from the computed feedback gain, the integrator error dynamics are given by

    \dot{x}_{int}(t) = \Delta\theta(t) - r(t)    (54)
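The integrator-augmented LQR design of Eqs. (52)-(54) can be sketched as follows. The matrices below are the B747 values as reconstructed above (treat the entry placement as approximate), and the gain is recovered from the continuous-time algebraic Riccati equation as K = R^{-1}B^TP.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# B747 longitudinal model (reconstructed values; approximate)
A = np.array([[-0.006868, 0.01395, 0.0, -32.2],
              [-0.09055, -0.3151, 773.98, 0.0],
              [0.0001187, -0.001026, -0.4285, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([[-0.000187], [-17.85], [-1.158], [0.0]])
Ct = np.array([[0.0, 0.0, 0.0, 1.0]])

# Integrator augmentation, Eq. (52)
A_aug = np.block([[np.zeros((1, 1)), Ct], [np.zeros((4, 1)), A]])
B_aug = np.vstack([np.zeros((1, 1)), B])

# Baseline weighting matrices, Eq. (53)
Q = np.diag([1e6, 0.1, 6.0, 1.0, 1e3])
R = np.array([[1e5]])

P = solve_continuous_are(A_aug, B_aug, Q, R)
K = np.linalg.solve(R, B_aug.T @ P)        # 1x5 LQR gain Kx

# LQR guarantees a Hurwitz closed loop
eigs = np.linalg.eigvals(A_aug - B_aug @ K)
assert np.all(eigs.real < 0)
```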
where r(t) is the desired change in pitch angle. This design is considered fixed and unchangeable. Strictly
speaking, this does not fit the form of the system model used in this paper. However, the modifications to
the equations are simple and are often made in adaptive control; they are sometimes referred
to as the extended system dynamics. In this case, the nominal control law takes the form

    u_n(t) = -K_x\bar{x}(t)    (55)

where K_x is the computed LQR gain, and the state emulator takes the form

    \dot{\hat{x}}(t) = \bar{A}(\hat{x}(t) - \bar{x}(t)) + A_{aug}\bar{x}(t) + B_{aug}\hat{\Lambda}u(t) + B_{aug}\hat{W}^T(t)\beta(\bar{x}(t)) + B_m r(t)    (56)

with

    B_m = \begin{bmatrix} -1 & K_{x_5}B^T \end{bmatrix}^T    (57)

and K_{x_5} is the 5th element of the LQR feedback gain (i.e., the feedback on \Delta\theta) computed for this example
(B_m is used to create the reference model in equation (13)). The previously derived theory still applies, as
the differential equations for the error signals e(t) and \hat{e}(t) remain the same when e(t) = x_m(t) - \bar{x}(t) and
\hat{e}(t) = \bar{x}(t) - \hat{x}(t).
The Ā matrix used for the new architecture is selected as the same Ā used in previous work12 to allow
direct comparison (the new theory allows a more general selection of Ā if desired). To compute Ā an LQR
design is computed using the same integrator augmented system. The computed LQR gain is referred to as
K̄ and is computed using the following weighting matrices (Baug is used again in the LQR design)
(
)
QĀ = diag [1x106 0 60 1 1x105 ] , RĀ = 10
(58)
Using this gain, Ā = Aaug − Baug K̄.
The main effect of the K̄ design is to penalize the “control effort” less. This in effect speeds up the
emulator error dynamics allowing one to increase the adaptation gain. The filter Gc (s) used in equation
(20) was randomly chosen to be a first order filter each with a pole at 30 1/s. This yields the frequency
response for Fc(s) shown in Figure 6. Since the uncertainty is linear, the standard adaptive law and the new adaptive architecture are implemented with β(x̄) = x̄ for simplicity (the standard adaptive law equations were implemented by choosing Ā = Am and uads = 0). This is the form used in classical adaptive control and ensures asymptotic convergence of the system tracking error. Evaluating the Lyapunov equation for each adaptive law, assuming that Q = I, the resulting P matrices are similar in that the ratio of the norms, ∥PAm∥/∥PĀ∥, is 1.3. This implies that the effect of PAm and PĀ on β(x)e^T P B was, loosely speaking, the same in terms of effective gain. For each example, a square wave reference command with a 10° amplitude and a frequency of 0.1 Hz was given to the ∆θ command channel. The control signal of the standard adaptive law oscillated with high frequency for all adaptive gains above Γstd = 1x10^−7. The tracking performance and control signal for this adaptive gain are shown in Figures 7 and 8, respectively. For this gain, the tracking performance was extremely poor. At an adaptive gain of Γstd = 1x10^−2, the tracking performance became acceptable, but the control signal contained large high frequency content. The tracking performance and control signal for this adaptive gain are also shown in Figures 7 and 8, respectively. For the new adaptive control architecture, the adaptive gain was set at Γnew = 1x10^−2. Figures 9 and 10 show the tracking performance and the adaptive control effort from the new control architecture. The control signal in this case appears perfectly smooth, and the tracking performance was better. A comparison of the frequency content between the standard adaptive law with Γstd = 1x10^−2 and the new architecture is shown in Figure 11.
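The Lyapunov-equation comparison above (the paper reports ∥PAm∥/∥PĀ∥ = 1.3) can be sketched as follows. The two Hurwitz matrices below are small placeholders rather than the paper's Am and Ā, so the ratio they produce is illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_P(A_hurwitz, Q=None):
    """Solve A^T P + P A = -Q for the P matrix used in the adaptive law."""
    if Q is None:
        Q = np.eye(A_hurwitz.shape[0])
    # scipy solves a X + X a^H = q, so pass a = A^T and q = -Q
    return solve_continuous_lyapunov(A_hurwitz.T, -Q)

# Placeholder Hurwitz matrices standing in for Am and Abar
Am   = np.array([[0.0, 1.0], [-1.0, -2.0]])   # poles at s = -1, -1
Abar = np.array([[0.0, 1.0], [-4.0, -4.0]])   # poles at s = -2, -2

P_Am, P_Abar = lyapunov_P(Am), lyapunov_P(Abar)
ratio = np.linalg.norm(P_Am) / np.linalg.norm(P_Abar)
```

A faster Ā generally yields a smaller P, so the ratio exceeds one; a ratio near unity indicates the two adaptive laws see roughly the same effective gain through the βe^T P B term.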
Note that it was previously shown that σ-modification21 and e-modification22 do not improve the performance11 of the standard adaptive law. These results are similar to previous results12 but do not rely on a feedback signal comprised of K̄ to achieve tight tracking. This is beneficial because the large K̄ in the previous work could cause the control law to perform poorly in a realistic system.
One of the benefits of being able to adapt fast, from a practical perspective, is the ability to compensate
for faster varying uncertainty. Assuming that the ideal weights now vary as

θnew(t) = θ + 2θ · squarewave(0.01πt),    (59)

the system controlled by the new architecture was simulated using the same adaptive parameters as used previously (where θ denotes the ideal weights defined in equation (52)). The standard adaptive system, even when it has a smooth control signal, cannot compensate for constant unknown weights. However, the method in this paper allowed the control system to compensate for this harsh, time-varying uncertainty with a relatively smooth control signal. These results are shown in Figures 12 and 13. The frequency content of the control signal is shown in Figure 14. Even though there are large step changes in the unknown weights, the frequency content is similar to the case with constant weights. It is interesting to note that this same example was used in previous studies;12 in this case, however, the control signal was visibly smoother.
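The weight variation of equation (59) and the single-sided amplitude spectrum plotted in Figures 11 and 14 can be sketched as follows. The unit-amplitude square wave and the specific θ values used in any test are assumptions of the sketch.

```python
import numpy as np

def theta_new(t, theta):
    """Equation (59): ideal weights jump between 3*theta and -theta
    with a unit-amplitude square wave (assumed form of squarewave())."""
    square = np.sign(np.sin(0.01 * np.pi * t))
    return theta + 2.0 * theta * square

def single_sided_spectrum(u, dt):
    """Single-sided amplitude spectrum |U(f)| of a sampled signal, as used
    for the control-signal frequency-content plots."""
    n = len(u)
    U = np.fft.rfft(u) / n
    amp = np.abs(U)
    amp[1:] *= 2.0   # fold negative frequencies in (DC bin not doubled)
    f = np.fft.rfftfreq(n, dt)
    return f, amp
```

Applying single_sided_spectrum to the sampled adaptive control signal uad reproduces the style of comparison in Figures 11 and 14: a smooth control signal shows its energy concentrated near the command frequency rather than spread across high frequencies.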
VI.

Conclusion

When fast adaptation is desired for systems possessing slow dynamics, undesirable high magnitude oscillatory control signals can result. An adaptive control architecture is presented that allows the possibility of fast adaptation without oscillatory control signals and with smooth weight convergence. It offers a new perspective for improving adaptive control system performance without using modification terms. Simulation results using a model for wing rock and a Boeing 747 model illustrate the method's potential usefulness.
References
1 Sharma, M. A., Calise, A. J., and Corban, E., “Application of an Adaptive Autopilot Design to a Family of Guided
Munitions,” AIAA Guidance, Navigation, and Control Conference, Aug. 2000.
2 Calise, A., Lee, S., and Sharma, M., “Development of a reconfigurable flight control law for the X-36 tailless fighter
aircraft,” AIAA Guidance, Navigation, and Control Conference, 2000.
3 Calise, A., Lee, S., and Sharma, M., “Development of a reconfigurable flight control law for tailless aircraft,” Journal of
Guidance, Control, and Dynamics, Vol. 24, No. 5, 2001, pp. 896–902.
4 Brinker, J. and Wise, K., “Flight testing of reconfigurable control law on the X-36 tailless aircraft,” Journal of Guidance, Control, and Dynamics, Vol. 24, No. 5, 2001, pp. 903–909.
5 Wise, K., Lavretsky, E., Zimmerman, J., Francis Jr, J., Dixon, D., and Whitehead, B., “Adaptive flight control of a
sensor guided munition,” AIAA Guidance, Navigation, and Control Conference, No. AIAA-2005-6385, 2005.
6 Johnson, E., Calise, A., and Corban, J., “Reusable launch vehicle adaptive guidance and control using neural networks,”
AIAA Guidance, Navigation and Control Conference, Vol. 4381, Montreal, Canada: AIAA, 2001.
7 Johnson, E., Calise, A., El-Shirbiny, H., and Rysdyk, R., “Feedback linearization with neural network augmentation
applied to X-33 attitude control,” Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2000.
8 Yang, B. J., Adaptive Output Feedback Control of Flexible Systems, Ph.D. thesis, Georgia Institute of Technology, School
of Aerospace Engineering, April 2004.
9 Muse, J. and Calise, A., “Adaptive Attitude and Vibration Control of the NASA Ares Crew Launch Vehicle,” AIAA
Guidance, Navigation and Control Conference and Exhibit, Honolulu, Hawaii, 2008.
10 Karason, S. and Annaswamy, A., “Adaptive control in the presence of input constraints,” IEEE Transactions on Automatic Control, Vol. 39, No. 11, 1994, pp. 2325–2330.
11 Muse, J., An H-Infinity Norm Minimization Approach for Adaptive Control, Ph.D. thesis, Georgia Institute of Technology, School of Aerospace Engineering, July 2010.
12 Muse, J. and Calise, A., “Adaptive Control for Systems with Slow Reference Models,” AIAA Infotech, Atlanta, Georgia,
2010.
13 Pomet, J. B. and Praly, L., “Adaptive nonlinear regulation: estimation from the Lyapunov equation,” IEEE Transactions on Automatic Control, Vol. 37, No. 6, 1992, pp. 729–740.
14 Lavretsky, E., “Adaptive Control,” Lecture notes for CDS 270 , 2010.
15 Desoer, C. and Vidyasagar, M., Feedback Systems: Input-Output Properties, Academic Press, 1975.
16 Khalil, H., Nonlinear Systems, Prentice Hall, Upper Saddle River, NJ, 2002.
17 Haddad, W. and Chellaboina, V., Nonlinear dynamical systems and control: A Lyapunov-based approach, Princeton
University Press, 2008.
18 Narendra, K. and Parthasarathy, K., “Identification and Control of Dynamical Systems Using Neural Networks,” IEEE
Transactions on Neural Networks, Vol. 1, No. 1, 1990, pp. 4–27.
19 Volyanskyy, K., Calise, A., and Yang, B., “A novel Q-modification term for adaptive control,” Proceedings of the American
Control Conference, 2006, pp. 5.
20 Etkin, B. and Reid, L., Dynamics of Flight: Stability and Control, John Wiley & Sons, New York, 1996.
21 Ioannou, P. and Kokotovic, P., “Instability analysis and improvement of robustness of adaptive control,” Automatica,
Vol. 20, No. 5, 1984, pp. 583–594.
22 Narendra, K. and Annaswamy, A., “A new adaptive law for robust adaptation without persistent excitation,” IEEE
Transactions on Automatic Control, Vol. 32, No. 2, 1987, pp. 134–145.
23 Muse, J. A., “A Method For Enforcing State Constraints in Adaptive Control,” AIAA Guidance, Navigation, and Control
Conference, Portland, Oregon, August 2011.
24 Muse, J. A., “An Adaptive Law With Tracking Error Dependent Adaptive Gain Adjustment Mechanism,” AIAA Guidance, Navigation, and Control Conference, Portland, Oregon, August 2011.
Figure 1. Roll track response of the standard projection based adaptive law with a reference model natural frequency of 5 rad/s.

Figure 2. Adaptive control effort of the standard projection based adaptive law with a reference model natural frequency of 5 rad/s.

Figure 3. Roll track response of the old adaptive architecture versus the new adaptive architecture with a reference model natural frequency of 1 rad/s.

Figure 4. Adaptive control effort of the old adaptive architecture versus the new adaptive architecture with a reference model natural frequency of 1 rad/s.

Figure 5. Frequency response of Fc(s) for the wing rock example.

Figure 6. Frequency response of Fc(s) for the Boeing 747 example.

Figure 7. Pitch track performance for the standard adaptive law with Γstd = 1x10^−2 and Γstd = 1x10^−7.

Figure 8. Adaptive control effort for the standard adaptive law with Γstd = 1x10^−2 and Γstd = 1x10^−7.

Figure 9. Pitch track performance for the new architecture with Γnew = 1x10^−2.

Figure 10. Adaptive control effort for the new architecture with Γnew = 1x10^−2.

Figure 11. Single-sided amplitude spectrum of δe for Γnew = 1x10^−2 and Γstd = 1x10^−2.

Figure 12. Pitch track performance for the new architecture with Γnew = 1x10^−2 and time-varying ideal weights.

Figure 13. Adaptive control effort for the new architecture with Γnew = 1x10^−2 and time-varying ideal weights.

Figure 14. Single-sided amplitude spectrum of δe for Γnew = 1x10^−2 with time-varying unknown system weights.