Simulation of a Reconfigurable Adaptive Control Architecture
By
RYAN JOHN RAPETTI
B.S. LeTourneau University 2002
M.S. University of California, Davis 2006
DISSERTATION
Submitted in partial satisfaction of the requirements for the degree of
DOCTOR OF PHILOSOPHY
in
Mechanical and Aeronautical Engineering
in the
OFFICE OF GRADUATE STUDIES
of the
UNIVERSITY OF CALIFORNIA at DAVIS
Approved:
Professor Nesrin Sarigul-Klijn (chair)
Professor Ron Hess
Professor Harry Cheng
Committee in Charge
2010
© 2010 Ryan John Rapetti
All Rights Reserved.
Abstract
A set of algorithms and software components is developed to investigate the use of a priori models
of damaged aircraft to improve control of similarly damaged aircraft. An addition to Model
Predictive Control called state trajectory extrapolation is also developed to deliver good handling
qualities in nominal and off-nominal aircraft. System identification algorithms are also used to
improve model accuracy after a damage event. Simulations were run to demonstrate the efficacy of the
algorithms and software components developed herein. The effect of model order on system identification convergence and performance is also investigated. A feasibility study for flight testing is
also conducted. A preliminary hardware prototype was developed, as was the necessary software to
integrate the avionics and ground station systems. Simulation results show significant improvement
in both tracking and cross-coupling performance when a priori control models are used, and further
improvement when identified models are used.
Contents

1 Introduction
  1.1 Motivation
  1.2 Literature Search
    1.2.1 Predictive Control
    1.2.2 Adaptive Critic
    1.2.3 Neural Nets
    1.2.4 Damage Compensation
    1.2.5 System Identification
2 Problem Definition and Objectives
  2.1 Problem Description
    2.1.1 Predictive Control
      2.1.1.1 Cost Function
      2.1.1.2 Optimization Methods
      2.1.1.3 System Models
      2.1.1.4 Closed Loop Stability & Robustness
    2.1.2 Neural Networks
    2.1.3 Damage Compensation
    2.1.4 Adaptive Critics
  2.2 Objectives
    2.2.1 Proposed Control Architecture
      2.2.1.1 Damage Database
      2.2.1.2 Cost Function Speculation
      2.2.1.3 State Trajectory Extrapolation (STEX)
      2.2.1.4 System Identification
    2.2.2 Stability Work
    2.2.3 Objective Summary
      2.2.3.1 Damage Compensation Database
      2.2.3.2 Dynamic Cost Function
      2.2.3.3 State Trajectory Extrapolation
3 System ID Background
  3.1 Derivation of Algorithms
    3.1.1 Least Squares Estimator
    3.1.2 Fisher Model Maximum Likelihood Estimator
    3.1.3 Bayes Model Maximum Likelihood Estimator
    3.1.4 Recursive Least Squares Estimator
4 Kalman Filter Background
  4.1 Introduction & Motivation
  4.2 Kalman Filtering in Short
  4.3 Kalman Filter Equations
  4.4 Kalman Filter Structure for Aircraft
  4.5 Kalman Filter Additions
    4.5.1 Bias Estimation
    4.5.2 Scale Factor Estimation
    4.5.3 Noise Characterization and Modeling
    4.5.4 Magnetometers
5 Predictive Controller Development
  5.1 System Model
    5.1.1 Kinematic Equations of Motion
    5.1.2 Aerodynamic Equations of Motion
  5.2 Damage Database
  5.3 System Identification
  5.4 State Trajectory Extrapolation
    5.4.1 Introduction and Theory
    5.4.2 Point Mass Implementation
    5.4.3 Rate Command
    5.4.4 Rate Command with Pitch and Roll Closure
    5.4.5 Pitch and Roll Command
  5.5 Objective Function
  5.6 Constraints
    5.6.1 Constrained Optimization
    5.6.2 U/V Functions
6 Hardware and Software Feasibility Study
  6.1 Hardware Development
    6.1.1 Gumstix Control Computer
    6.1.2 Phoenix AX IMU
    6.1.3 Additional Hardware
  6.2 Software Description
    6.2.1 Simulator
    6.2.2 OpenGL Visualization
    6.2.3 MPC Class
    6.2.4 Flight Management Software
    6.2.5 Utilities
7 3D Trajectory Development
  7.1 Waypoint to Waypoint Transition
  7.2 Final Leg Solution
    7.2.1 Numerical Solution of Vectorial Governing Equation
    7.2.2 Successive Approximation Solution
8 Simulation Results
  8.1 Gyroscope
    8.1.1 Derivation of Euler's Equations
    8.1.2 Analytical Solution with Semi Body-Fixed Coordinate System
    8.1.3 Numerical Solution with Body-Fixed Coordinate System
    8.1.4 Gyro Simulation Results
  8.2 Aircraft Simulation Results
    8.2.1 Simulation Parameters
    8.2.2 Test Case Outline
    8.2.3 Results
      8.2.3.1 Test Case 1: Nominal Aircraft, Nominal Control
      8.2.3.2 Test Case 2: Damaged Aircraft, Nominal Control
      8.2.3.3 Test Case 3: Damaged Aircraft, Database Controller
      8.2.3.4 Test Case 4: System ID
      8.2.3.5 Test Case 5: Damaged Aircraft, Identified Model
9 Conclusions
  9.1 Discussion of Results
  9.2 Objective Achievement
  9.3 Concluding Remarks
List of Figures

2.1 Neural Network Schematic
2.2 Neuron Schematic
2.3 Proposed Control Architecture
5.1 Control System Block Diagram
5.2 Aircraft Body Fixed Coordinates
5.3 U constraint function. Axes are nondimensional
5.4 V constraint function. Axes are nondimensional
6.1 Gumstix Overo Air
7.1 Waypoint Transition Schematic
7.2 Flattened Final Leg Diagram
8.1 Pitch response to a slow sine pitch input
8.2 Roll response to a slow sine pitch input
8.3 Roll response to a slow sine roll input
8.4 Pitch response to a slow sine roll input
8.5 Pitch response to a fast sine pitch input
8.6 Roll response to a fast sine roll input
8.7 Pitch response to a fast sine roll input
8.8 Pitch response to a pitch doublet input
8.9 Pitch response to a pitch doublet input
8.10 Roll response to a pitch doublet input
8.11 Pitch and roll response to steady state pitch and roll input
8.12 Pitch and roll response to a steady state input with overlarge damage
8.13 Pitch and roll response to steady state input
8.14 Pitch response to pitch doublet input
8.15 Roll response to pitch doublet input
8.16 System ID parameter with chirp input
8.17 System ID with straight and level flight
8.18 System ID with more regressors and straight & level flight
8.19 System ID with more regressors and a pitch chirp
8.20 Pitch response to a pitch doublet input
8.21 Roll response to pitch doublet
Nomenclature

A    Direction Cosine Matrix
E    Euler Angles
FM   Force and Moment Vector
G    Body Rate to Euler Rate Transformation Matrix
I    Moment of Inertia Tensor
L    Sensor Noise Covariance Matrix
S    Planform Area
b    Wingspan
c    Mean Aerodynamic Chord
g    Gravity Acceleration Vector
m    Mass
p    X Body Angular Velocity
q    Y Body Angular Velocity
r    Z Body Angular Velocity
u    X Body Velocity
v    Y Body Velocity
w    Z Body Velocity
x    X Position Coordinate
y    Y Position Coordinate
z    Z Position Coordinate
α    Angle of Attack
β    Sideslip Angle
δa   Aileron Deflection
δe   Elevator Deflection
δr   Rudder Deflection
δt   Throttle Deflection
Λ    Input Transition Matrix
ω    Body Angular Velocity Vector
ωp   Point mass trajectory parameter
φ    Roll Angle
ψ    Heading Angle
τ    Body Torque
θ    Pitch Angle, Parameter Vector
Ca   Aerodynamic Coefficient Matrix
Fc   Force Constraint
Fm   Force on point mass
Fx   X Body Force
Fy   Y Body Force
Fz   Z Body Force
Mx   X Body Moment
My   Y Body Moment
Mz   Z Body Moment
am   Point mass acceleration
Hb   Angular momentum
pm   Point mass position coordinate
rp   Position Vector
ui   Control Input Vector
Vb   Velocity, body coordinates
vc   Velocity Constraint
vm   Point mass velocity
Vt   Total Velocity
xi   State Vector
yi   Output Vector

ARMA   Auto Regressive Moving Average
CARIMA Controlled Auto Regressive Integrated Moving Average
CARMA  Controlled Auto Regressive Moving Average
ECEF   Earth Centered, Earth Fixed
GPC    Generalized Predictive Control
GPS    Global Positioning System
IMU    Inertial Measurement Unit
LQR    Linear Quadratic Regulator
LTI    Linear Time Invariant
MIMO   Multi-Input, Multi-Output
MLFN   Multi-Layer Feedforward Network
MPC    Model Predictive Control
NMP    Non-Minimum Phase
RLS    Recursive Least Squares
STEX   State Trajectory Extrapolation
Chapter 1
Introduction
1.1 Motivation
Damaged aircraft have been the subject of much research. The loss of stability and handling
qualities under damage results in a significant decrease in safety for the passengers and crew of the
aircraft, even when the aircraft is airworthy in the strictest sense of the term. The control authority
required for safe landing may be available, but due to changes in the aircraft, the pilot may not be
capable of learning to allocate the control authority in a way that will land the aircraft safely. In
more extreme cases, the aircraft may be airworthy but unstable with the nominal control system.
In these cases, an airworthy aircraft may crash, resulting in large numbers of preventable casualties.
Adaptive and reconfigurable control attempt to solve this problem, with varying results. Several
points emerge from the current state of research. First, persistency of excitation is an ironclad
requirement of most adaptive control schemes. The simple, unavoidable truth is that any control
system that synthesizes the control input entirely based on a set of input/output data must keep
that data set full of relevant data with a reasonable signal-to-noise ratio. If this condition is not
met, the control synthesis will fail. There are ways around this problem, either by integrating
more information about the system into the control synthesis, or by altering the way that the
input/output data set is managed. Normal linear control does not suffer from the persistency
of excitation problem because the structure of the control system (poles and zeros, statespace
or any other equivalent linear time invariant structure) contains information about the system
being controlled, and hence avoids the problem. Direct adaptive control attempts to synthesize a
controller with very limited structure, in a push for a "universal" controller that can be applied
to any system. But by eliminating the structure, the information about the system is lost, and
persistency of excitation becomes a necessity.
Second, there is a strong push to use only inertial sensors for the control synthesis. While
this is useful for retrofitting the system to existing aircraft, there are many lines of research that
derive useful, accurate information about aircraft performance and airworthiness from a small set of
additional sensors. This information could be integrated into the control synthesis to improve the
performance of the aircraft and the control synthesis process, as well as allowing the control system
to implement the aircraft equivalent of a limp: a different set of maneuvers designed to minimize
structural or other damage.
With all of this in mind, this work attempts to create a more global perspective on damage
adaptation. Most control theory centers on control. The obvious nature of the statement masks its
importance. Flying an airplane goes far beyond synthesizing an input from a state, and therefore
aircraft control should as well. Aircraft control can never be separated from the extreme intolerance
of the problem to mistakes and failure, and the large scale problem of getting from airport to airport
safely, even in the presence of unforeseen events. Naturally, the problem must still be broken down
into smaller pieces, but it is critical that when deconstructing the problem the context of aircraft
control and the specific challenges and idiosyncrasies of the problem are not lost. In this sense,
controls engineers ought not think in terms of algorithms so much as structures and systems. The
future of aircraft control lies in creating robust, damage tolerant control structures that synthesize
large amounts of information, advise the pilot, oversee the health of the aircraft and finally control
the aircraft. In this sense the traditional domain of the controls engineer becomes a small part
of the overall task. In truth, this has always been the case. In the past, controls engineers had
the luxury of focusing only on the mathematical side and left the decision making to the pilot.
With increasingly crowded airspace, public demands for increased safety, and airline demands for
faster, more compact and efficient air traffic control, we can no longer compartmentalize our design
process.
An essential component of any damage compensating control system is some type of adaptation
of the control law. How this is done varies tremendously, but is largely based on a time history
of the system states or outputs and control vector, some baseline control system structure, and
possibly the original control system. From this data, a new control system is synthesized. In the
strictest sense, all control works this way: synthesize a control input from a time series of plant
outputs. Adaptive control is characterized by the process used in synthesizing the control input.
All current adaptive techniques can be classified based on how they manage the input/output
data set and how they synthesize a new controller. Typically, some type of forgetting is used to
manage the data set, since the number of input/output pairs grows very quickly. In the area of
control synthesis, two major classes are in use: direct and indirect. Direct adaptive control derives
the control gains from a time history of tracking error. Indirect adaptive control attempts to
identify the dynamics of the system, and synthesize a control system from this identified model.
The data set management process for an indirect adaptive system is more complex, and depends
on how the model is implemented. This work introduces several innovations to this process. First,
the addition of a new data source to the control system synthesis is proposed. By compiling
a database of aerodynamic characteristics for different damage cases, the control synthesis can
use prior knowledge (effectively, the collective experience of years of wind tunnel, modeling and
simulation work) to design the new control system. This can be done within a few seconds of damage
detection, possibly less, depending on the processing and access speed of the control computer.
Second, some new strategies for managing the input/output data set are also explored. Naive
techniques for data set management lead to persistency of excitation problems, which is simply
the control synthesis "forgetting" the aircraft's performance at operating points the aircraft has not
operated at recently. Hence, a common problem with data set management techniques that only
account for the age of each part of the data set is that once the system settles, the data set contracts
until it has knowledge about only a single operating point, and most of that knowledge is sensor
noise and wind turbulence. At this point, the control synthesis is no longer stable and the entire
system can fail. More robust data set management techniques can ensure that important data is
not forgotten and new data is given sufficient weight, without resorting to constant maneuvers to
maintain the control synthesis data set, since that is what persistency of excitation actually is:
an ad hoc and highly inefficient method of maintaining the size and range of the control synthesis
data set. Thirdly, a novel method of control synthesis is explored, using model predictive control
with a dynamic utility function to optimize the performance of the aircraft for its new, damaged
performance characteristics. Since the new aircraft has different handling qualities and stress limits,
the control system must account for these changes. A dynamic cost function and a new technique
called state trajectory extrapolation can be used to achieve this.
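To make the data set management idea concrete, consider the following sketch of a retention
policy that scores samples by novelty in regressor space rather than age. It is a minimal
illustration, assuming a fixed-capacity buffer (of at least two samples) and a simple
nearest-neighbor redundancy test; it is not the specific implementation developed in later chapters.

    import numpy as np

    class InformationWeightedDataset:
        """Retain input/output pairs by novelty rather than age.

        A new sample displaces the most redundant stored sample only if
        it adds information, so the set keeps covering old operating
        points instead of contracting onto the current one.
        """
        def __init__(self, capacity, novelty_threshold=0.1):
            self.capacity = capacity           # assumed >= 2
            self.threshold = novelty_threshold
            self.X = []                        # regressor vectors
            self.y = []                        # measured outputs

        def add(self, x, y):
            x = np.asarray(x, dtype=float)
            if len(self.X) < self.capacity:
                self.X.append(x)
                self.y.append(y)
                return
            # Reject the sample if we already have data near this operating point.
            if min(np.linalg.norm(x - xi) for xi in self.X) < self.threshold:
                return
            # Otherwise evict the stored sample most crowded by its neighbors.
            crowding = [min(np.linalg.norm(xi - xj)
                            for j, xj in enumerate(self.X) if j != i)
                        for i, xi in enumerate(self.X)]
            i_evict = int(np.argmin(crowding))
            self.X[i_evict] = x
            self.y[i_evict] = y

Under a policy of this kind, settling into steady flight stops adding near-duplicate samples
instead of flushing out the data from other operating points, which is exactly the failure mode
described above.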
1.2 Literature Search

1.2.1 Predictive Control
Model predictive control (MPC) for linear systems reached its peak in Clarke et al.'s [8, 9] 1987
two-part paper introducing Generalized Predictive Control (GPC). It differs from previous incarnations
of predictive control in its flexibility in control and prediction horizons. It allows the prediction
horizon to start late to reduce computation in the case of a plant with known dead time, and
allows the prediction horizon to extend beyond the control horizon, which improves robustness.
They also outline some techniques for choosing the horizons for best performance. A set of simple
simulations shows the effectiveness of GPC when combined with recursive least squares (RLS) system
identification. In the plots, the RLS system ID converged very quickly, maintaining
ideal response despite plant variations.
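For reference, the GPC cost function introduced in [8] has the now-standard form

    J = \sum_{j=N_1}^{N_2} \left[ \hat{y}(t+j \mid t) - w(t+j) \right]^2
        + \sum_{j=1}^{N_u} \lambda(j) \left[ \Delta u(t+j-1) \right]^2

where N1 and N2 bound the prediction horizon, Nu is the control horizon, w is the reference, and
λ weights the control increments. The flexibility described above comes from the freedom to choose
N1, N2, and Nu independently.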
The major component that is lacking in this paper is any rigorous stability work. Stability is
achieved by the correct choice of control horizons and a measure of luck. Stability is not guaranteed.
Even the method of choosing the horizons is not shown rigorously, but in a purely conceptual form
that makes little mention of stability. The rules simply state that the prediction should start at
the end of the plant's dead time and end at roughly the end of the rise time. The control horizon
should be equal to the number of unstable or badly damped poles. These rules seem to work for
most systems, but for aircraft control something with more substance and mathematical rigor is
required.
Another gap is the lack of a nonlinear formulation. The CARIMA model used in this work is basically a
z-transform, and hence linear. Given that there is no stability work anyway, the lack of a nonlinear
model is fairly irrelevant. The implementation is all that changes. The optimization becomes
nonlinear, and the control and prediction horizons are somewhat harder to set. Other than that the
fundamentals are identical. Really, this is one of the prime advantages of predictive control. The
method works just as well for nonlinear systems as it does for linear ones. The basic conceptual
framework is independent of how the model is represented, which offers a great deal of design
freedom.
Over the next several years a great deal of work was done by various groups regarding stability
concerns. Several groups added stability proofs given constraints or terminal state weighting, plus
nonlinear cases with and without constraints. One interesting paper on the subject by Sistu and
Bequette [34] describes a framework for the stability analysis of nonlinear systems. The typical
method is to use the cost function as a Lyapunov function and show stability from there. Sistu [34]
uses a different method: three of them, in fact. The first is by finding the closed loop state transition
matrix, and showing that the spectral radius is less than one, hence the system is locally stable.
The problem is that "local" is not formally defined and could be arbitrarily small. Hence the proof
is only really useful for linear time invariant systems, where the stability is truly global. Fortunately
Sistu [34] has two more methods, contraction mapping and Lyapunov's Direct method. Contraction
Mapping is not taken far in this work. The authors show the method, but it apparently can only
be used on specific systems and hence cannot be used for a general stability proof. They also show
an outline for Lyapunov's direct method with a numerical solution. Naturally this precludes a
general, analytical solution, but offers the ability to analyze the stability of a given system without
any artificial cost function additions. Therefore it becomes possible to solve for the stability range
for the given system offline. If the stable region is acceptable, then the analysis is done and the
controller can be implemented with only the nominal cost function. In fact, state constraints could
be used to keep the system out of the unstable regions.
In contrast, a survey paper by D. Q. Mayne et al[26] outlines ways to prove stability with cost
function modifications, as well as several other interesting results. The primary stability result
from this paper is the set of four axioms that, when satisfied, guarantee asymptotic stability within
a specified region. These are not original, but are distilled from the body of work preceding the
review. They then go into a lengthy discussion of how these axioms can be used to prove stability
for linear and nonlinear, constrained and unconstrained systems with various combinations of
state constraints and terminal time weighting. They also show that a feedback linearizable system
without state constraints inherits all the stability and robustness properties of infinite horizon
control. There is also significant discussion of robustness. The primary change here is to move to a
feedback model predictive control strategy. Every version of MPC uses feedback to get the current
state of the system. However, the control is calculated without respect to the state of the system
when the control is applied. It still affects the system, but not until the next sampling instant. In
this way open loop MPC in effect adds a one time step delay to the feedback. To avoid this and
improve robustness, one can formulate the optimization problem to search the space of all control
strategies, instead of control inputs. This will ensure a great deal of robustness since the noise and
model mismatches can be parametrized as best as possible and the control strategies optimized
for these conditions. The problem is that the optimization is almost completely intractable. The
search space is simply too large to find anything close to an optimal solution, since the search is
over the highly discontinuous, unbounded space of all possible functions rather than the smaller,
continuous, bounded space of control inputs. One possible solution is to use a suboptimal search
that assumes that the control strategy is linear and finds the best linear control strategy for the
current (predicted) operating point. The final verdict was that while stability was very mature and
well developed, the robustness properties of model predictive control were still (as of 2000) not well
understood.
Several more practical works have been done in recent years. Soloway & Haley have done
several papers using MPC with neural nets as system models. In 1997 they published work [18]
showing that Neural Generalized Predictive Control could control a nonlinear system in real time
at reasonable servo rates. In this paper they develop an input/output formulation of GPC, rather
than the usual full state feedback formulation. Most notably they design a tracking system, rather
than the regulators developed in the theoretical stability papers. They show good results from
simulations of a simple nonlinear system represented by Duffing's equation, with the addition of a
nonminimum phase zero for interest. The neural network accurately models the system, and they
use Newton-Raphson as the cost function optimizer. They do benchmarks on the code for different
horizon lengths and show the maximum servo rate. They also do a rough estimate of relative speed
between Newton-Raphson and simple gradient descent, showing that Newton-Raphson is 6-12 times
faster. The notable fact is that stability is never mentioned, let alone proved. This will be a common
theme throughout Soloway's work.
A second work was done by the same authors [17] in 1997. They designed an active flutter
suppression system with GPC. They did not use a neural network as the plant model. An ARMA
(Auto Regressive Moving Average) linear model was used instead. They also formulated the problem
as a SISO system with an accelerometer for output measurement. The flutter was suppressed by
about 12dB at the flutter frequency. The most novel aspect of this work was the use of a cost
function term to suppress low frequency movement of the control surface. Since the wing mockup
in the wind tunnel is fixed at the root, the control surface can "wander" without changing the
accelerometer output if it wanders slowly enough. To prevent this, a z-transform for a low pass
filter was added to the cost function, adding a penalty for low frequency control moves without
penalizing the high frequency control inputs required for flutter suppression. The author finds this
to be the most significant contribution of this work because it highlights the open-ended adaptability
of GPC through cost function modifications.
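A minimal sketch of this kind of frequency-weighted penalty is given below, assuming a first-order
discrete low-pass filter; the filter pole and weight are illustrative values, not those used in [17].

    def low_frequency_penalty(u_seq, alpha=0.95, weight=10.0):
        """Penalize slow drift in a candidate control sequence.

        A first-order low-pass filter (pole at alpha) is run over the
        sequence and the filtered signal is penalized. Slow control
        wander passes through the filter and is charged; fast flutter
        suppression inputs are attenuated and go nearly free.
        """
        y = 0.0
        cost = 0.0
        for u in u_seq:
            y = alpha * y + (1.0 - alpha) * u   # low-pass filtered control
            cost += weight * y ** 2
        return cost

Adding a term like this to the cost function discourages low frequency control surface wander
without blunting the high frequency inputs that do the actual flutter suppression.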
A third work by the same authors [33] is also reviewed, mainly for its recent publication
and the stability work that is included. Unfortunately it only covers the linear, time invariant case.
They do, however, show how to develop a reconfiguration strategy that guarantees stability given
redundant actuators and a priority for each output/actuator combination. It does this by developing
a strategy to reconfigure the control surfaces and control system gains such that the system remains
stable, if stability is attainable with the actuators still available. An interesting result, though still a
reduced order linear model for a nonlinear system, so robustness is still important and unaddressed
in this work. However it is notable as a standard damage reconfiguration strategy. The most
important point is that only control surface failures are considered. Changes in the dynamics of
the system are not examined at all. This is a common limitation in the literature.
A more recent work [19] uses predictive control and an identified subspace predictor to create
an adaptive predictive controller. The subspace predictor uses a least squares estimation to predict
the output of the system over the prediction horizon, given the current control sequence. This is a
useful technique, since it is apparently somewhat more efficient than deriving a state space model
and then integrating. The advantage likely stems from the fact that the model is linear, and hence
the state transition matrix is not time varying, which allows for a faster state propagation. The
method also uses a set of precomputed models that are blended in using a weighting scheme. This is
the first time the author has encountered the use of precomputed models for damage compensation,
though the authors are still relying on input/output data pairs to detect the model changes and
generate the weighting. The identification is essentially an exponentially weighted least squares
algorithm, with no provision for maintaining high signal to noise ratio data for the identification,
or stopping adaptation once a good model is identified. Another downside is that the identification
is applied to the entire aircraft, not just the damaged portion as in this work. A further limitation
is that the model is linear and the identification is relied upon to adapt to nonlinearities. Hence a
stop adapting strategy cannot be applied with this framework.
Another work [4] uses an unscented Kalman filter to identify the system model. The extended
Kalman filter is usually sufficient to estimate the aircraft states, since the nonlinearities are limited.
However, when trying to identify a dynamic model the nonlinearities are large and significant,
so an unscented Kalman filter offers worthwhile improvements in identification over an extended
Kalman filter. It also removes the necessity of computing the Jacobian, which can be difficult to
debug. Since a Kalman filter has better data set management than a least squares strategy, it does
not diverge as easily when the aircraft spends long periods of time in one state. However, since
persistency of excitation is an issue of data, if the data are not available, the identification must and
will diverge. Even if the dynamics of the model parameters are made very slow (recall that the time
dynamics of the model parameters are typically modeled as a random walk), they will still wander
when there is no data to keep them in place. In addition, the slower the dynamics are, the longer
the filter will take to converge on the correct model. Hence, even with a Kalman filter, data set
management is critical.
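The random walk parameter model mentioned above is conventionally written as

    \theta_{k+1} = \theta_k + w_k, \qquad w_k \sim \mathcal{N}(0, Q_\theta)

where θ is the parameter vector and the process noise covariance Q_θ sets how quickly the filter
lets the parameters drift: a small Q_θ gives slow, stable adaptation but slow convergence after a
damage event, which is exactly the trade-off discussed above.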
1.2.2 Adaptive Critic
Significant work has been done in the area of adaptive critic control, which is related to MPC in
that both are optimal strategies. The difference is that adaptive critics have no internal model.
The action network is initially trained offline on the optimal solution of the cost function that is
used by MPC, but online the critic updates the action network directly, without a model. This has
been shown to be effective for several applications, including flight control of aircraft and missiles,
and jet engine and turbogenerator control.
Han and Balakrishnan [2] simulated an over-the-shoulder air to air missile launch, controlled by
an adaptive critic. The formulation is very general and allows for a range of initial conditions without
loss of optimality. The problem is both mathematically and practically difficult. Mathematically
because the formulation is a free final time, state constrained optimization, a notoriously difficult
problem, and practically because the missile has to come around 180 degrees over the top of the
launching aircraft without loss of air speed as quickly as possible so that the target aircraft has
no time to evade. The major shortcoming here is the lack of any stability and robustness work.
The action network is trained on a stable cost function with terminal time weighting, but that
is only conclusive insofar as the network perfectly emulates the solution of the cost function, a
dubious assumption at best. Besides that, the critic network will change the action network, further
invalidating stability. Adaptive Critics are a very new strategy in controls, so it is unsurprising that
the stability work is still immature.
Ferrari and Stengel [12] developed an adaptive critic based flight control system with some interesting properties. Mainly, they use a gain scheduled controller to initialize the neural net, so it
starts out with a good baseline. Then the critic network fills in the spaces between the linearizations
to improve performance. The interesting addition here is that the network is large enough that this
additional online training does not unlearn the old information. It also uses a novel algorithm for
training that converges more quickly and does not have a jump in error as training begins. Once
again, there is no mention of stability.
A further paper will be mentioned, more for an incidental aspect than the work on adaptive
critics. KrishnaKumar & Kulkarni [24] used an adaptive critic strategy to control a jet turbine
engine to maximize stall margin and other performance measures. What was interesting was a
passing comment about their system model for the simulation, which had several model versions
corresponding to various damaged conditions. This is similar in concept to the damage database
proposed for this dissertation, the difference being that in KrishnaKumar's work it was simply part
of the simulation, while for the proposed control architecture it will be part of the system model
for predictive control.
1.2.3 Neural Nets
Ferrari & Stengel[14] have also done some work in Neural Nets, specifically some algebraic training
methods that show some promise. The neural net that they work with is slightly different from what
is often used, in that only the hidden layer has the squashing function. Typically the output layer
also has this, along with some scaling gains so that the output can be larger than one. Removing
this layer retains most of the advantages but makes the algebra easier. Retaining the layer would
be possible; it would just require more work. Their first algorithm will match the training data
exactly as long as there are as many hidden nodes as training data pairs, or the data are sufficiently
consistent that a single node can reproduce multiple patterns. This may overfit the data, but
leaves plenty of degrees of freedom for further learning. They also show an extension by which
gradient data can also be incorporated, which results in an even better fit. They also show a way
to approximately fit data in case the training set is too large to assign one hidden node per pair.
The beauty of these methods is that the nonlinear system is cast into a linear system that can
be solved in whatever manner seems convenient. This is the principle underlying the approximate method.
It simply becomes an overdetermined system that can be solved with a pseudoinverse, resulting
in minimized mean square error. They compare these methods with standard neural net training
algorithms in the areas of computation time, lines of Matlab code, and final error. The algebraic
method is better in every area, taking only 8 lines of code, giving 0 error and using ~2 orders of
magnitude less computation to achieve convergence. All in all a great many advantages. The two
downsides would be that there is not an online training version given, so this method can only
be used offline, and that this method is only good for noiseless systems. Hence it is not the best
method for adaptive control and system identification. However, it is perfectly acceptable for initial
offline training of the system model.
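The linear-solve idea can be sketched as follows. This is a simplified variant, with the
hidden-layer weights fixed at random values and only the output weights solved for, rather than the
exact-matching construction of [14]; it shows only how the fit reduces to an overdetermined linear
system solved by pseudoinverse.

    import numpy as np

    def algebraic_fit(X, y, n_hidden=32, seed=0):
        """Cast neural net training as a linear least squares problem.

        With the hidden layer fixed, the network output is linear in the
        output weights, so they are found in one shot with a
        pseudoinverse (minimum mean square error on the training set).
        """
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))  # fixed input weights
        b = rng.normal(size=n_hidden)                # fixed input biases
        S = np.tanh(X @ W + b)                       # hidden layer outputs
        v = np.linalg.pinv(S) @ y                    # linear solve for output weights
        return lambda Xq: np.tanh(Xq @ W + b) @ v

    # Example: fit a smooth scalar function of two inputs.
    X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
    f = algebraic_fit(X, np.sin(3 * X[:, 0]) * X[:, 1])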
This method could also be generalized for online training, though noise would have to be included, as well as some kind of forgetting factor so that the training set does not grow excessively.
Since the training set determines the usefulness of the resulting network, it is essential to develop a
good training set for online training. The real key is how to decide which new data points to keep
and which of the old ones to drop. Typically older points are dropped by a "forgetting factor", but
it seems that older points are not necessarily less useful or valid. This is only true if the idea is to
adapt to changes. When the objective is to model a function, older points are no less useful. The
problem is simply the size of the data set rapidly becomes intractable if points are not discarded.
That said, perhaps recording the probability distribution of the data instead of the data itself could
alleviate this problem. One would then add a bias in the training algorithm so that the network
would have less error in regions where the variance of the data is small and allow more model
variance where the data variance is large. Knowledge of the accuracy of each prediction from the
neural net system model could also be factored into the control law, improving robustness.
1.2.4 Damage Compensation
Damage Compensation is a fairly new idea, and there are many ideas on how best to do it. Most
are some combination of system identification and control law modification. Usually damage is
inferred from changes in the response of the system to control inputs. A novel use of this technique
with some important additions is reviewed below.
Wilson, Lages and Mah [38] use a variation of system identification (ID) to detect thruster
failures on the X-38 reentry vehicle. They discovered that simple system ID could not detect
failures with sufficient accuracy. The difficulty is that a bad thruster only affects the system when
it is fired. This is bad in that only active thrusters can be tested for failure. This is good because
it narrows the system ID problem to thrusters currently firing. The authors sought to constrain
the problem by integrating this a priori information into the system ID algorithm. They computed,
offline, the disturbing acceleration for any possible combination of thruster failures for all thruster
firing patterns. Hence the failure detection becomes a pattern matching problem instead of a system
ID problem. The actual disturbing acceleration is calculated for each time step and statistically
compared to the database of calculated accelerations for the current thruster firing configuration.
When a sufficient likelihood of a thruster failure is detected and identified, the controller simply
reconfigures, based on prior knowledge. One interesting trick used here was to automatically change
thruster firing patterns to isolate a failed thruster. Done correctly, in many cases this will reconfigure
the thrusters automatically as a direct result of isolating the failure.
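A minimal sketch of the pattern matching idea is given below; the database layout and the Gaussian
residual test are illustrative assumptions, not the statistical machinery of [38].

    import numpy as np

    def detect_thruster_failure(a_measured, firing_pattern, database, sigma):
        """Match the measured disturbing acceleration against precomputed
        accelerations for each failure hypothesis under the current
        firing pattern.

        database[firing_pattern] maps each failure hypothesis (including
        'none') to its precomputed disturbing acceleration vector; sigma
        is the accelerometer noise standard deviation.
        """
        best, best_ll = None, -np.inf
        for failure, a_predicted in database[firing_pattern].items():
            r = np.asarray(a_measured) - np.asarray(a_predicted)
            ll = -0.5 * float(r @ r) / sigma ** 2   # Gaussian log-likelihood
            if ll > best_ll:
                best, best_ll = failure, ll
        return best, best_ll

The key point survives even in this toy form: all of the hard physics is done offline, and the
online problem is reduced to comparing one measured vector against a short list of precomputed ones.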
The major point to take away from this work is the use of precomputed data in the control
system. By doing some computation beforehand and including other knowledge and constraints, the
sensitivity and effectiveness of the control system was greatly improved. The variation of the thrust
from one firing to the next, blowdown as the propellant tanks emptied, the lack of observability
of individual thrusters and general noise in the system made system ID impossible, but by adding
a priori data and a knowledge of the system and nature of the faults, the fault detection system
was effective. The system also was equipped with temperature sensors on each thruster, but they
lacked the response time to effectively detect a failure. It is notable that if pressure sensors were
used, failures could be detected instantly with near perfect accuracy. A strain gage on the nozzle
mounting strut would also serve the purpose.
1.2.5 System Identification
The current state of system identification is well covered in Klein and Morelli [23]. Since this work
is not attempting to develop new ways of synthesizing the model parameters from the data set, a
good survey of current, proven system ID techniques is all that is required. Morelli and Klein do
exactly this. Most of the algorithms presented in the book are easily applicable to any data set,
no matter how the input/output data pairs are managed. The notable exceptions are the recursive
methods and frequency domain methods, where the dataset is inherent to the algorithm. There
are several categories of algorithms, distinguished by the system models, output models and cost
function.
Chapter 2
Problem Definition and Objectives
2.1 Problem Description

2.1.1 Predictive Control
Predictive control is a control strategy that uses a model of the controlled system to predict a
control input series that will minimize a cost function over some finite period of time. In that
sense it is an optimal control strategy. The word "optimal" is used here in a very general sense: the
control series minimizes some as yet unspecified cost function over a finite period of time. There
is a great degree of design freedom in this methodology since the type of model, the cost function
and the length of the control and prediction horizons are all independently adjustable. This allows
enough design freedom for nearly any application. The principal freedom is the cost function.
The basic formulation balances the state error with the control effort. This is standard optimal
control and is sufficient for most applications, particularly since if a linear cost function is used on
a linear time invariant (LTI) system model, time invariant solutions can be developed, i.e., a Linear
Quadratic Regulator (LQR). However LTI systems are very limiting and will not be used herein.
In most cases the best that can be hoped for is a linear, time varying system, which will necessitate
reoptimization of the cost function at each time step, since only the first control input of the optimal
series calculated at each time step will actually be sent to the plant. It is this fact that explains why
this control methodology is surprisingly robust to modeling errors. As long as the model predicts
something with the correct sign and nearly the correct magnitude relative to what the actual system will
do, the controller can control the plant, particularly if the plant is open loop stable. "Nearly" is
not a fully rigorous definition, but a fully quantitative analysis is beyond the scope of this work.
At every time step the optimizer recalculates the optimal control series, which brings up the major
drawback of predictive control: its computational complexity. The fact that the controller must
solve a fairly large nonlinear optimization problem within the sampling time of the system places
severe demands on the control computer. This fact has relegated predictive control to the process
control industry until recently. With modern computers becoming so inexpensive and so small,
sufficient processing power to run a predictive control scheme at reasonable sampling rate (and
reasonable size) is now available at reasonable cost. A common objection to predictive control is
the fact that performance is dependent on the accuracy of the model. This is certainly true, and
a mismodeled system will have poorer performance and potential stability issues. However, every
control methodology is dependent, implicitly or explicitly, on a plant model. Hence this objection
is applicable to any control strategy. The most important question is how dependent performance
and stability are on the modeling accuracy. With the principal disadvantages more or less taken
care of, a discussion of the advantages is in order.
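The receding horizon mechanics described above fit in a few lines. The sketch below is a generic
illustration, with the model, stage cost, and optimizer (here scipy's general-purpose minimizer)
standing in for the specific formulation developed in Chapter 5.

    import numpy as np
    from scipy.optimize import minimize

    def mpc_step(x0, model, stage_cost, u_guess):
        """One receding-horizon step: optimize the whole control series,
        send only the first input to the plant, and reoptimize at the
        next sampling instant."""
        horizon, n_inputs = u_guess.shape

        def total_cost(u_flat):
            u_seq = u_flat.reshape(horizon, n_inputs)
            x, J = x0, 0.0
            for u in u_seq:
                x = model(x, u)        # predict one step ahead
                J += stage_cost(x, u)  # accumulate cost over the horizon
            return J

        res = minimize(total_cost, u_guess.ravel(), method="BFGS")
        u_opt = res.x.reshape(horizon, n_inputs)
        return u_opt[0], u_opt         # apply u_opt[0]; keep rest as next guess

Because only the first input is ever applied, modeling errors are corrected at every sampling
instant, which is the source of the robustness noted above.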
2.1.1.1 Cost Function
The core of any predictive control strategy is the cost function. It represents the control law as
an implicit function of the system dynamics, the states and the aforementioned control series. It
can include any quantity that is always positive, so any measure of performance one can formulate
in a positive definite manner can be minimized. The basic set of cost function terms are the state
error and the control effort. This represents a simple, workable optimization problem. With proper
weighting term selection, any balance of control effort versus speed can be achieved. Two very
common additions to the cost function are the control increment and the control limits. A term
proportional to the control increment will place a cost on the speed of control actuation and is useful
for preventing control input overdrive and resulting model/system mismatch. The second common
addition is input or state limiting terms. This is typically done with a U shaped nonlinear function
that causes the cost function to be nearly infinite if the control or state strays beyond a certain set
value, but is very small and nearly constant otherwise. In this way the control input can have any
value within the specified range without penalty, but has tremendous penalty outside of that range.
This can be applied to the states (think no-go zone for rotary wing aircraft) or the control inputs.
One can also formulate the problem as a constrained optimization, which achieves nearly the same
ends in a slightly different manner. The only downside is divergence if the constraints cannot be
met. With the nonlinear method, the cost function just grows rapidly. It would also be possible
to make a Lagrangian strategy robust to infeasibility with slack variables in the constraints. One
of the more interesting additions in the literature is a term derived from the z-transform of a low
pass filter, which penalizes low frequency control inputs [17]. A further point to consider is that this
function can be changed in real time, while the system is active, to alter performance. This idea
is more or less uninvestigated in the literature, as far as the author can tell. The programming
challenges for such a design are significant, but the gains could be dramatic. This will be discussed
further in later sections.
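A sketch of a cost function assembled from the terms described above is shown below; the quadratic
weights and the even-power barrier standing in for the U-shaped limiting function are illustrative
choices, not the formulation developed later.

    import numpy as np

    def stage_cost(x, u, u_prev, x_ref, Q, R, S, u_limit, barrier_weight=1e3):
        """State error + control effort + control increment + soft limits."""
        e = x - x_ref
        J = float(e @ Q @ e)               # state tracking error
        J += float(u @ R @ u)              # control effort
        du = u - u_prev
        J += float(du @ S @ du)            # control increment (actuation speed)
        # U-shaped soft limit: nearly flat for |u| < u_limit, steep outside,
        # and continuously differentiable everywhere.
        J += barrier_weight * float(np.sum((u / u_limit) ** 8))
        return J

Raising the exponent sharpens the "wall" of the constraint; lowering the weight softens it for
limits that may acceptably be crossed for a short time in an emergency.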
2.1.1.2 Optimization Methods
The optimization of the cost function makes one further demand on the terms. They must be
continuously differentiable at all points with respect to all variables. This is a condition required
of nearly every optimization strategy. There are some from the Neural Network field that allow
optimization of discrete systems, but this is done by assuming the discontinuities are noisy continuous functions, so the optimization is less than optimal. The actual optimization can be done
by any optimization method. For linear systems with simple cost functions the problem is one of
quadratic optimization. For nonlinear systems with complex cost functions a more robust strategy
is indicated to avoid local minima and maxima. Standard gradient descent methods are typically
used and work well for most applications.
2.1.1.3 System Models
The second major component of a model predictive control scheme is the model. The overall strategy
makes no demands on how the model is represented, though CARMA and more often CARIMA
models are used. Neural networks are also very popular due to the maturity of learning algorithms
for such models and their ability to represent any smooth function to any degree of accuracy, given
sufficient neurons. Hence they are logical choices for a nonlinear adaptive control scheme. Naturally,
statespace models can be used for MIMO linear systems, and locally linearized MIMO formulations
would be analyzed as statespace systems. Nonlinear polynomials can also be used, though concerns
about numerical stability and update methods for adaptive control must be addressed. Another
excellent possibility is to use interpolating splines, such as b-splines, with a lookup table. This
has the advantage of being easily updatable, and with the correct formulation, new points can
be added and removed dynamically, which means that even the order of the model is adaptive.
Another useful trait is that the derivative information is also readily available with a minimum of
computation. The downside here is that the cross derivative terms must be set manually, which
can be very difficult for high order systems. Another possibility is a structured model where the
algebraic structure of the model is retained, which has the useful effect of drastically constraining
the system identification, which makes the process faster, more accurate and more robust.
2.1.1.4 Closed Loop Stability & Robustness
Closed loop stability is difficult to show for predictive control due to the implicit nature of the
control law. As a result of this, stability proofs for this type of control were slow in development.
Most are some variation of terminal state weighting or terminal state constraint. Terminal state
weighting is interesting, because it allows a finite horizon method to emulate the infinite horizon
solution by telescoping the cost beyond the horizon into the last term. The catch is that this
telescoping can only be approximate, and can be difficult to compute. In many cases it may not
even be possible. A version of terminal constraint known as Dual Mode control uses predictive
control to drive the system to a particular region of the state space (the constraint region), where
a simple gain controller that has been chosen for its stabilizing characteristics takes over, hence the
name dual mode. A special case is the equality constraint, where the error is required to go to zero
at the end of the horizon. All of these methods will ensure stability for certain combinations of
constraints and linear/nonlinear behavior or model/system mismatch.
The proofs for these results are arrived at using Lyapunov stability concepts. Many proofs use
the cost function as a Lyapunov function and prove stability, or find the stable region, with the
chosen cost function. This can be done in different ways. The first is by finding the derivative (or
difference in the discrete case) and solving for the region where the derivative is negative, proving
stability. The second way is to show that the cost function (discrete formulation only, of course)
forms a monotonically decreasing series in time, which implies a negative difference. Other methods
do not use the cost function as the Lyapunov function. These do not require state constraints or
terminal costs, but do not extend well to uncertain nonlinear systems. In fact, no proof of stability
in the presence of model uncertainty and nonlinearity could be found for an implementable system.
There are methods that search the space of all possible controllers for the optimal controller instead
of the optimal control input. The problem is that while such a strategy can be shown to stabilize
an uncertain system, no workable implementation has been shown thus far. A method has been
implemented, but takes over an hour to run a simple simulation at nowhere near real time on
a modern desktop PC. For embedded systems, the ability to run this sort of control strategy is
unlikely in the near future. There is plainly a great deal of work to be done in this area.
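For the discrete-time case, the standard argument sketched above can be stated compactly: take the
optimal cost as the Lyapunov function, V(x_k) = J*(x_k), and choose the terminal ingredients so that

    V(x_{k+1}) \le V(x_k) - \ell(x_k, u_k^*)

where ℓ is the positive definite stage cost and u_k^* the first optimal input. Then V decreases
monotonically along closed loop trajectories, and asymptotic stability follows within the feasible
region. (The notation here is generic, not tied to any one of the papers above.)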
Interestingly, most of the implemented controllers in the literature do not prove stability. They
simply assume that optimality implies stability and implement the controller. Optimality does not,
of course, imply stability, but it is a good sign. A major part of the problem is that the stability
of any nonlinear system is confined to a finite region. In a regulation problem, this is easy to deal
with by simply assuring that the setpoint and the initial condition are in this region. For a tracking
system, more work is required. Since the setpoint is moving, the shape of the stable region is not
static, and will almost certainly vary depending on where the setpoint is and how fast it is changing.
It becomes very difficult to guarantee that the setpoint will not drag the stable region away from
the current state, leaving the system unstable. Apparently, stability is difficult to prove but easy
to get, since nonlinear predictive control seems to work, at least in simulation and very often in
practice for process control.
2.1.2 Neural Networks
Neural Nets make excellent system models for several reasons. First, they are universal function
approximators. Given enough neurons, a neural network can approximate any smooth function to
any degree of accuracy. They also extend easily to multi input/output formulations. This property
alone makes them good system models. In addition, learning algorithms for neural nets are very
well developed, so making the model adaptive is a simple extension. Neural networks come in
many flavors, but the most common in the controls field is the multi layer feed forward network
(MLFN). The diagram below illustrates the basic structure. The size of the hidden layer determines
the complexity of the functions the network can approximate. Naturally the sizes of the input and
output layers are set by the nature of the problem.
Figure 2.1: Neural Network Schematic
Each of the neurons in the diagram above (except the input layer, for reasons that will be obvious
momentarily) is identical in function. They take a weighted sum of the inputs to that layer (hence
the difference: no inputs to the input layer) and run the sum through a saturation function, typically
a sigmoid, hyperbolic tangent or Gaussian, though sometimes domain specific functions are used,
such as sinusoids or logs, depending on the problem. For example, if the Euler angles are inputs, it
makes sense to put sinusoids in the network. A diagram of a simple neuron is given below.
Figure 2.2: Neuron Schematic
The function of the network is determined entirely by the weights on each link and the structure
of the network. Training the network is done by adjusting the weights until the network has
"learned" the function. Generally everything is fully interconnected, so the number of weights
grows very quickly with the number of neurons. This is the primary downside to neural nets. The
complexity grows so fast that implementing a neural net as an artificial intelligence is impractical,
hence the lack of recent interest in AI circles. For simple system models however, they work well
enough and have been used successfully.
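A minimal sketch of the forward pass of such an MLFN is given below, with one tanh hidden layer and
a linear output layer; the layer sizes are illustrative.

    import numpy as np

    def mlfn_forward(x, W_in, b_in, W_out, b_out):
        """One-hidden-layer feedforward network: weighted sums squashed
        by tanh at the hidden layer, then a linear output layer."""
        h = np.tanh(W_in @ x + b_in)   # hidden layer with saturation
        return W_out @ h + b_out       # linear output (can exceed +/-1)

    # Illustrative sizes: 4 inputs, 10 hidden neurons, 2 outputs.
    rng = np.random.default_rng(0)
    W_in, b_in = rng.normal(size=(10, 4)), rng.normal(size=10)
    W_out, b_out = rng.normal(size=(2, 10)), rng.normal(size=2)
    y = mlfn_forward(np.array([0.1, -0.2, 0.3, 0.0]), W_in, b_in, W_out, b_out)

Everything the network "knows" lives in the weight arrays; training adjusts them until the map from
input to output reproduces the target function.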
2.1.3 Damage Compensation
This is a rather new field and has seen a dramatic surge in interest in recent years, for obvious
reasons. The primary focus thus far has been on actuator failures. A failed actuator changes the
system model in very simple, well defined ways, so adapting and compensating is relatively easy,
particularly when predictive control is used, where the solution is to update the control constraints
and move on, assuming the system is still controllable. The optimization takes care of the rest. Even
Lyapunov based controllers merely update the dynamic inversion, and potentially the command
filter and the control gains, depending on the strategy. The primary difficulty here is detecting the
failure, since the controller has no way of detecting a broken actuator except to command a move
and see if it actually happens. This means that excitation is a key part of failure detection. This is
a fundamental requirement of any adaptive control scheme that uses performance data to update
the control law. There has been limited work on cases where the damage is more extensive than a
control surface, extending to a loss of wing area or other major structure.
2.1.4 Adaptive Critics
The term Adaptive Critic originates in the Artificial Intelligence field, so the terminology in the control field is still somewhat in flux. Most of the literature refers to an adaptive critic as an algorithm
that assesses the performance of another algorithm (which does the actual control) and makes changes
to the watched algorithm in a way that will improve performance. A typical implementation uses
neural networks for both the control law and the critic. The control law network, often called the
action network, is trained to emulate the solution to the infinite horizon optimal control problem
within the typical operating range of the system. More specifically it outputs an optimal set of
gains. There is a subtle difference here from the usual predictive controller. It does not output a
control signal, it outputs a control policy, which is far better. That makes this design more like a
Feedback Model Predictive Controller than anything else. Recall that this problem is too difficult
to do efficiently in its exact form. In this case the solution is only approximate, but still sufficient,
especially when paired with a critic to optimize its performance. The critic actually approximates
the derivative of the infinite horizon cost function for the current state. While this is zero, the
solution is optimal. In this way, the critic is able to keep track of the optimality of the control law
and adjust it accordingly.
2.2 Objectives

2.2.1 Proposed Control Architecture
The proposed control architecture has several components. The overall control strategy is nonlinear
predictive control. The system model will likely be a neural network or some structured model,
and there will certainly be a system identification step in the algorithm. The novel aspects are the use
of a precalculated damage model database and some new additions to the cost function. A basic
block diagram of a representative system is shown below.
Figure 2.3: Proposed Control Architecture
2.2.1.1 Damage Database
The central problem of adaptive control is balancing the convergence speed with robustness to
disturbance-induced misidentification. The core problem is that the control system must infer the
changes in the system. It has no way of measuring the damage directly. By adding dedicated
damage sensors and a database of model updates for given damage conditions, the model can be
updated instantly, with no convergence wait. The learning rate for the system ID can be slowed
to a more reasonable and stable level because the burden of reacting to large shifts in performance
is removed and transferred to the damage database, which directly measures and quantifies the
damage.
The damage compensation is a two-step process. The first step uses a network of sensors to detect and quantify the damage in some way. Exactly how is not important, just that the system translates the output of a set of sensors into a format that can be fed into the damage database, which holds model updates for a set of precomputed damage conditions. Whether these model updates are arrived at by high fidelity computational methods or experimental methods is not important, just that the database can translate from the quantified damage to the updated model. Then the new model is scheduled into the predictive controller, finishing the process. Most likely, damage updates will be applied to the model immediately, rather than scheduled at the end of the prediction horizon. A sketch of the lookup step appears below.
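As a concrete illustration of that lookup step, the sketch below maps a quantified damage report to the nearest precomputed model. All of the names (DamageReport, AeroModel, DamageDatabase) and the nearest-neighbor lookup are illustrative assumptions, not the actual implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct DamageReport {      // assumed output of the sensor/quantification layer
    double spanLocation;   // fraction of semispan, 0 = root, 1 = tip
    double size;           // normalized damage size
};

struct AeroModel { /* coefficient matrix, constraints, etc. */ };

class DamageDatabase {
public:
    // Return the precomputed model whose damage case is closest to the report.
    const AeroModel& lookup(const DamageReport& r) const {
        std::size_t best = 0;
        double bestDist = 1e300;
        for (std::size_t i = 0; i < cases_.size(); ++i) {
            const double d = std::hypot(cases_[i].spanLocation - r.spanLocation,
                                        cases_[i].size - r.size);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return models_[best];
    }
private:
    std::vector<DamageReport> cases_;  // precomputed damage conditions
    std::vector<AeroModel>    models_; // corresponding model updates
};
```

A production database would interpolate between cases rather than snap to the nearest one, but the interface is the essential point: quantified damage in, model update out.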
The end result is that the control system has a more or less correct model for the damaged system instantly, with no convergence time and complete certainty that the system ID is not chasing gusts of wind or turbulence. The difficulty lies in the dedicated damage sensors. For real-world applications the author envisions a combination of short range radar, broken-wire and structural sensors. The structural sensors are useful for health monitoring anyway, and will be useful in the next section.
The radar is a simple bolt-on addition, and the broken-wire sensors can be glued right to the
aircraft's skin. The only dedicated sensors are easy to apply, and the most difficult sensors have
multiple uses, as will be discussed in the next section.
2.2.1.2 Cost Function Speculation
What makes predictive control so versatile is the ability to specify any cost function that satisfies some simple requirements. One of the first additions was state constraints, either as hard constraints in the optimization or as nonlinear additions to the cost function. The second option offers the ability to specify how "hard" the constraint is. For some applications, the system absolutely must not enter certain regions of the state space. In others, entering the region might simply be hard on the aircraft, but in a pinch is acceptable for a short time. By tuning the cost function parameters, nearly any constraint behavior can be achieved. A minimal sketch of such a soft constraint term is shown below.
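The function name and the quadratic penalty shape here are assumptions chosen for illustration; the stiffness parameter is the tuning knob that sets how "hard" the constraint is.

```cpp
#include <algorithm>

// Soft state constraint: zero cost inside [lo, hi], quadratic growth outside.
// A large stiffness approximates a hard constraint; a small one permits brief
// excursions that are merely "hard on the aircraft".
double softConstraintPenalty(double x, double lo, double hi, double stiffness) {
    const double violation = std::max({0.0, lo - x, x - hi});
    return stiffness * violation * violation;
}
```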
In a more speculative vein, it would be possible to link the structural health monitoring sensors
into the cost function and put a penalty on airframe stress. Under normal operating conditions this
would not affect the system, but with a large hole in the wing it would relieve the pilot from having
to worry about how badly the control inputs were stressing the airframe. The control system would
handle it all, leaving the pilot free to fly as aggressively as necessary to avoid threats, secure in the
knowledge that the already damaged airframe will not be damaged further by sudden maneuvers.
Another possibility is to make the cost function dynamic, changing it online to modify the behavior of the aircraft. A good application of this is a partially disabled control surface, which perhaps has only a limited range of movement or limited speed. The control constraints in the cost function can be updated to reflect this and get the maximum performance out of the aircraft. If the control surface is only partially locked, it makes sense to use whatever control authority is still available.
A further use of a dynamic cost function would be to alter the behavior in real time in response
to a higher level optimization. One of the difficulties in designing the cost function is how each
term is weighted. The design criteria specify a particular response or some other performance
metric, but the cost function has a weighting matrix. That translation, from design specification
to system parameter, is still done more or less manually, by trial and error. If a statistical record of performance were kept during operation, a second optimization could be conducted to alter the cost function weighting to satisfy the design criteria automatically. The criteria would have to be expressed numerically and statistically, but this is likely already the case.
2.2.1.3 State Trajectory Extrapolation (STEX)
A critical problem when using predictive control is generating the desired states. Much of the time,
predictive controllers are used as regulators, attempting to drive all states, which are typically
deviations from trim, back to zero. For systems that must be highly maneuverable, or cannot be
properly trimmed due to damage, some kind of desired state trajectories for the controller to follow
must be generated. A common solution is to compute a single desired state and have the controller drive the system to that state. The downside of this is that the method for computing this desired state is not well defined, and it causes the control horizon to significantly affect the way that the system handles. It also makes the handling sensitive to the state and input weighting. Since the controller is always being told to follow a step input, the way the controller tries to follow this trajectory is strongly dependent on how the states are weighted, and the speed at which it does so is dependent on how long it has to get there: the control horizon. Hence the handling and speed come to depend strongly on the weighting and on the prediction and control horizons.
A better way has to be developed, one that makes the handling depend on physical parameters
rather than tuning values with little or no physical significance. To this end, we will abandon
step input methods, and generate feasible, smooth trajectories parametrized by pilot input. This
method will have several significant advantages, as will be discussed in a later section.
2.2.1.4 System Identification
The database can never be perfect due to the irregular and uncertain nature of damage, so an
adaptive step is required for best performance. As mentioned, the major difficulty with system ID
is convergence versus stability. With the database, the balance can be shifted well toward stability, because the database preconverges the model to a point where the aircraft can still fly, and the system ID step then fine-tunes the model for best performance. This architecture retains the best aspects of both methods while eliminating most of the disadvantages. The only downside left is the
computational complexity, but it is assumed that any system that needs this level of performance
can afford a powerful control computer.
2.2.2 Stability Work
The foundation of any work in the area of stability and robustness is to specify exactly what stability means. The typical definition is that if a bounded input can result in an unbounded output, the system is unstable; if the output remains bounded, the system is stable. This is very basic control theory. This definition works very well for linear systems, and for many nonlinear systems as well.
There are problems, however. The largest is that an unbounded output implies infinite energy in
the system, which is physically impossible. Hence, the very characteristic that we are testing for is
physically impossible. What is actually going on is that instability in the mathematical model is
assumed to imply a different kind of instability in the physical system. We have two definitions for
instability- one that is mathematical and very well defined, and one that is physical and somewhat
less well defined. What is more, the physical definition varies depending on the system in question,
and the operating conditions. The bucking and pitching exhibited by a Baja 1000 truck at 90mph
through a washboarded corner would be considered unstable if found in an aircraft, but for a race
driver it is perfectly normal. Hence for any given system, the design engineer must determine what
acceptable performance is, and then apply that criteria to the system. The major problem that
appears in nonlinear control and stability analysis is that with a sufficiently high fidelity model, an
unbounded output becomes impossible by the very nature of the model, regardless of the control
system or the handling characteristics. This makes it essential that any stability proof show not
just that the Lyanpunov function is bounded, but must also show specifically what those bounds
are, and that these are consistent with acceptable performance.
With this in mind, it is possible to start developing a new way of looking at stability. The
problem does not lie with the physical definition. This has always varied from problem to problem.
The issue is how to translate that definition to some kind of mathematical formulation that we can
apply to control laws. The most obvious solution is to formulate a set of constraints that define
acceptable behavior, and then find the regions in which these constraints are satisfied. In the case of
predictive control this means finding the regions in which an optimal solution exists, which implies
that the constraints can be satisfied within those regions. There are still some difficulties. One
would have to show that not only does the optimal solution exist, but that it continues to exist for
a sufficiently long period of time. What sufficiently long means is still unclear. It is implied that
if the optimal solution exists, then it exists up to the end of the prediction horizon, though this
may or may not be a sufficient period of time. This also does not apply to non-predictive control laws, since they lack a prediction horizon. There is a tie-in here with traditional predictive control
stability proofs. Terminal constraints force the system to zero at the end of the horizon, which
implies that it will stay there if the system is open loop stable and minimum phase. This may make
the existence of an optimal solution imply stability, though this is hardly a rigorous analysis.
2.2.3 Objective Summary
There are three truly novel objectives presented here. They are summarized below for easy reference.
2.2.3.1 Damage Compensation Database
The eventual goal of this line of research would be to use a precomputed database of system models to adapt to different damage scenarios. The database could be created based on high fidelity simulations or on empirical data for dynamic performance under different damage conditions. When
damage is detected by a network of dedicated damage sensors, the damage is quantified and fed
into the database, which changes the system model to reflect the damaged system. Hence the
predictive control system has the correct model and can control the system in an optimal manner,
even in the presence of damage. It is essential to note here that actually compiling a database
for any given aircraft is a major undertaking far beyond the scope of this dissertation. For the
purposes of this work the damaged model will be created for the one damage case of interest. The
sensor networks and damage quantification layers are also assumed to be outside the scope of this
work, so these components will be assumed to be functioning outside of the simulation. However,
as a proof of concept that precomputed models can be used to improve aircraft performance under
damage conditions, a simulated damage case will be developed, with a corresponding imperfect
control model, to show in simulation that performance can be improved in this fashion.
2.2.3.2 Dynamic Cost Function
Dynamically altering the cost function is a new idea with great potential. It offers the possibility of updating constraints and weighting based on damage or other actuator failure. It also allows dynamic alteration of the type of controller (rate command vs. turn/climb rate) based on the needs of the situation.
2.2.3.3 State Trajectory Extrapolation
In the interest of generating feasible desired state trajectories and moving handling quality specification from the cost function to a more physical structure, a state trajectory extrapolation algorithm
was developed. It uses the pilot stick input to infer pilot intention and generates a trajectory
consistent with both the pilot's intention and the aircraft constraints.
Chapter 3
System ID Background
3.1 Derivation of Algorithms
For simplicity and without loss of generality, the algorithms will be derived for the single output case. Since nothing prevents multiple instances of the algorithm from being run in parallel (in fact, this is more efficient on multiprocessor systems), there is no loss in considering the outputs one at a time.
3.1.1 Least Squares Estimator
The simplest method is the least squares estimator, which minimizes the sum of squared error between the measurements and the model outputs, optimized over the space of output model parameters. For an output model that is linear in the parameters, the problem is a simple quadratic optimization, so the pseudoinverse can be used to solve for the parameters. For a nonlinear output model, an iterative optimization method is used. This method also has a frequency domain version; the implementation is very similar, but the Fourier transform of the data series is used. The derivation of the time domain linear least squares estimator is as follows.
First, the output model is algebraic:

$$y = H\theta \qquad (3.1)$$
where $\theta$ is the vector of model parameters and $H$ is the known matrix of regressors. Since $y$ is a vector of outputs, each of which depends on a number of parameters and regressors, the structure of $H$ is somewhat complicated. $\theta$ contains all the system parameters in a single vector, so $H$ will have
many, many repeated entries. For example, both the Z force and the Y moment depend on the angle of attack, so $\alpha$ will appear at least twice in the regressor matrix. The same is true of the control inputs. Also note that this is the output model. The actual dynamic model of the aircraft itself can be nonlinear without necessitating a nonlinear output model. In fact, each of the regressors can be a nonlinear function of the states and other flight parameters. The mathematics can be simplified by looking at the case where $y$ is a vector of data for a single output and $H$ is replaced with $X$, a matrix with rows made up of regressors for each time in the data set.
$$X = \begin{bmatrix} 1 & x_0(t_0) & \cdots & x_{n_p}(t_0) \\ 1 & x_0(t_1) & \cdots & x_{n_p}(t_1) \\ \vdots & \vdots & & \vdots \\ 1 & x_0(t_N) & \cdots & x_{n_p}(t_N) \end{bmatrix} \qquad (3.2)$$
The 1 in the $X$ matrix is the bias regressor. In general, least squares problems are formulated as a sum of squares of errors:
$$J(\theta) = \sum_{i=0}^{N} e(t_i)\,e(t_i) \qquad (3.3)$$
If $z(t_i)$ is the measurement of the output at time $t_i$,
$$z(t_i) = y(t_i) + v(t_i) \qquad (3.4)$$
where $v(t_i)$ is a zero mean measurement noise term. Since $y$ and $z$ are vectors of outputs and measurements respectively,
$$J(\theta) = \tfrac{1}{2}(z - y)^T(z - y) \qquad (3.5)$$

without loss of generality. Using

$$y = X\theta \qquad (3.6)$$

we see that

$$J(\theta) = \tfrac{1}{2}(z - X\theta)^T(z - X\theta) \qquad (3.7)$$
Since the measurements and regressors are constants with respect to the parameters, we can take the partial derivative with respect to $\theta$ and set it equal to zero:
$$\frac{\partial J}{\partial \theta} = -X^T z + X^T X \theta = 0 \qquad (3.8)$$

Solving for $\theta$:

$$\theta = (X^T X)^{-1} X^T z \qquad (3.9)$$
This is a basic least squares estimator for offline computations.
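A minimal offline implementation of (3.9) is sketched below, using the Eigen linear algebra library (an assumption; the actual code base is not shown here). A QR factorization is used in place of the explicit normal-equation inverse for numerical robustness.

```cpp
#include <Eigen/Dense>

// Batch least squares: solves min_theta ||z - X*theta||^2, i.e. equation (3.9),
// without explicitly forming (X^T X)^{-1}.
Eigen::VectorXd leastSquares(const Eigen::MatrixXd& X, const Eigen::VectorXd& z) {
    return X.colPivHouseholderQr().solve(z);
}
```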
3.1.2 Fisher Model Maximum Likelihood Estimator
The Fisher model uses a different measurement model, which assumes a covariance for the noise vector. This is one place where there is a loss of generality in the single output model, which essentially assumes a diagonal covariance matrix. However, that is assumed to be the case in most practical applications anyway, so the single output case loses very little in practice. In any case, the Fisher measurement model is identical to the least squares model, but assumes a covariance for the noise series. Note that this covariance refers to the correlation of the scalar noise signal with itself in time, rather than with other members of a noise vector. This was not explicitly stated in the reference text, and took a certain amount of consideration to infer. Since this is a maximum likelihood estimator, it attempts to find the parameter vector that makes the output series most likely to occur. The probability of a given output series $z$ given a parameter vector $\theta$ with zero mean measurement noise with (temporal) covariance $R$ is:
$$L(z; \theta) = \frac{1}{\sqrt{(2\pi)^N |R|}} \, e^{-\frac{1}{2}(z - X\theta)^T R^{-1} (z - X\theta)} \qquad (3.10)$$
Noting that the $\theta$ that maximizes the above also minimizes
$$J(\theta) = \tfrac{1}{2}(z - X\theta)^T R^{-1} (z - X\theta) \qquad (3.11)$$
which is merely the weighted least squares problem, solved similarly to the ordinary least squares problem:
$$J(\theta) = \tfrac{1}{2}\left(z^T R^{-1} z - (X\theta)^T R^{-1} z - z^T R^{-1} X\theta + (X\theta)^T R^{-1} X\theta\right) \qquad (3.12)$$

$$J(\theta) = \tfrac{1}{2} z^T R^{-1} z - z^T R^{-1} X\theta + \tfrac{1}{2}\theta^T X^T R^{-1} X\theta \qquad (3.13)$$
Taking the partial derivative
$$\frac{\partial J}{\partial \theta} = -X^T R^{-1} z + X^T R^{-1} X \theta = 0 \qquad (3.14)$$

$$\theta = (X^T R^{-1} X)^{-1} X^T R^{-1} z \qquad (3.15)$$
This is the solution to the weighted least squares problem, which is useful when the variances of the noise signals are known and the noise is uncorrelated.
3.1.3 Bayes Model Maximum Likelihood Estimator
The Bayes model assumes that the parameter vector $\theta$ is a random variable with mean $\theta_p$ and covariance $\Sigma_p$. Like the Fisher model, it assumes zero mean noise with (temporal) covariance $R$.
From Bayes' rule:

$$p(\theta|z) = \frac{p(z|\theta)\,p(\theta)}{p(z)} \qquad (3.16)$$

Assuming $v$ and $\theta$ are independent,

$$p(\theta) = \frac{1}{\sqrt{(2\pi)^{n_p} |\Sigma_p|}} \, e^{-\frac{1}{2}(\theta - \theta_p)^T \Sigma_p^{-1} (\theta - \theta_p)} \qquad (3.17)$$

and

$$p(z|\theta) = \frac{1}{\sqrt{(2\pi)^N |R|}} \, e^{-\frac{1}{2}(z - X\theta)^T R^{-1} (z - X\theta)} \qquad (3.18)$$

Applying Bayes' rule:

$$p(\theta|z) = \frac{e^{-\frac{1}{2}(z - X\theta)^T R^{-1} (z - X\theta) - \frac{1}{2}(\theta - \theta_p)^T \Sigma_p^{-1} (\theta - \theta_p)}}{p(z)\,\sqrt{(2\pi)^{N+n_p} |\Sigma_p|\,|R|}} \qquad (3.19)$$

Any $\theta$ that maximizes $p(\theta|z)$ will minimize

$$J(\theta) = \tfrac{1}{2}(\theta - \theta_p)^T \Sigma_p^{-1} (\theta - \theta_p) + \tfrac{1}{2}(z - X\theta)^T R^{-1} (z - X\theta) \qquad (3.20)$$
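Setting the gradient of (3.20) to zero, by the same steps used to obtain (3.14) and (3.15), gives the closed-form estimate. This step is not in the source derivation, but it is the standard result and is added here for completeness:

$$\hat{\theta} = \left(\Sigma_p^{-1} + X^T R^{-1} X\right)^{-1}\left(\Sigma_p^{-1}\theta_p + X^T R^{-1} z\right)$$

As the prior covariance $\Sigma_p$ grows (an uninformative prior), this reduces to the Fisher solution (3.15).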
3.1.4 Recursive Least Squares Estimator
It is often advantageous to be able to observe the estimate of the parameters in real time. An excellent example is evaluating the effectiveness of the identification for a particular maneuver while the aircraft is still in the air. This can be extremely helpful in maximizing the usefulness of (often expensive) flight testing time. While small scale UAVs are relatively cheap to fly, flight time, pilot time and analysis time are all in short supply, so getting the most out of the few hours available is still very important. For that reason, recursive estimators are highly useful additions to the system ID toolbox. Recursive estimators also require fewer computational resources than a Kalman filter based identification, since they require no matrix inversion. Using the standard measurement model:
$$z = X\theta + v \qquad (3.21)$$

with

$$E(v) = 0 \qquad (3.22)$$

and

$$E(vv^T) = \sigma^2 I \qquad (3.23)$$
The recursive least squares estimator uses a three-step process. There are two primary data items that are updated: the parameter estimate $\hat{\theta}(k)$ and the dispersion matrix $D(k)$. The derivation is surprisingly lengthy and will not be given here, as an excellent treatment is given in [23]. The first step is to compute the gain matrix $K(k+1)$:
$$K(k+1) = D(k)\,x(k+1)\left[1 + x^T(k+1)\,D(k)\,x(k+1)\right]^{-1} \qquad (3.24)$$
Then the gain can be used to update the parameter estimate.
$$\hat{\theta}(k+1) = \hat{\theta}(k) + K(k+1)\left[z(k+1) - x^T(k+1)\,\hat{\theta}(k)\right] \qquad (3.25)$$
Finally, the dispersion matrix must be updated as well.
$$D(k+1) = D(k) - D(k)\,x(k+1)\left[1 + x^T(k+1)\,D(k)\,x(k+1)\right]^{-1} x^T(k+1)\,D(k) \qquad (3.26)$$
If the variance is desired as well, it can be computed as

$$\Sigma(k) = \sigma^2\,D(k) \qquad (3.27)$$
This assumes that the error variance is known. If it is not, then it must be estimated; see [23], p. 267 for more detail. A sketch of the full update cycle follows.
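This compact sketch of the three-step cycle (3.24) through (3.26) uses Eigen (an assumption). Because the regressor $x$ is a vector, the bracketed inverse in (3.24) is a scalar division, which is exactly why no matrix inversion is needed.

```cpp
#include <Eigen/Dense>

struct RlsEstimator {
    Eigen::VectorXd theta; // parameter estimate, theta_hat(k)
    Eigen::MatrixXd D;     // dispersion matrix, D(k)

    // One recursive update with regressor x(k+1) and measurement z(k+1).
    void update(const Eigen::VectorXd& x, double z) {
        const double denom = 1.0 + x.dot(D * x);   // scalar bracket in (3.24)
        const Eigen::VectorXd K = (D * x) / denom; // gain, (3.24)
        theta += K * (z - x.dot(theta));           // estimate update, (3.25)
        D -= K * (x.transpose() * D);              // dispersion update, (3.26)
    }
};
```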
Chapter 4
Kalman Filter Background
4.1 Introduction & Motivation
The Kalman filter equations have been derived ad nauseam in the literature, so the full derivation will be omitted here. The equations will be given for completeness and to facilitate an explanation of how exactly they are used to create an optimal estimate of the state vector. The explanation and discussion given here are offered in the hope that they will be useful, as the precise formulation commonly used for aircraft (modular GPS, gyros and accelerometers) is not well covered in the literature. Most papers skip too many intermediate steps in the interest of space, and most textbooks either take a very general approach, derive the equations and move on, or cover an overly simple case that is not illustrative of the problems and idiosyncrasies of filter design for aircraft. Also, many textbooks are so old that they do not cover recent developments in sensors or filter architecture, and hence present a structure that is not applicable to modern hardware. A common example is handling the GPS pseudorange estimation within the aircraft state estimation. Most systems use a standalone GPS unit, preprogrammed with high performance proprietary filtering algorithms, allowing the control designer to take the GPS as a black box with specified output (lat/lon/alt or ECEF) and noise parameters.
As a response to these issues, the author offers this chapter in the hope that it will be useful
for continuing the present work, and for extending it to new hardware. For that reason, several
potential extensions to the current architecture are covered as a starting point for improving the
performance of the state estimation.
4.2 Kalman Filtering in Short
In the most basic sense, a Kalman filter attempts to take two estimates of a system state and fuse them into a better estimate. In most cases, the two estimates are an open loop estimate derived from the previous state and the measured inputs, and the state as measured by the sensors. Hence a critical part of the Kalman filter is prediction. It is also important to remember that the filter must keep track of the covariance of both the measurements and the states. This has the useful side effect of maintaining information on the accuracy of the estimate at all times. The Kalman filter can then compute the optimal estimate, based on the two separate (and independent) estimates of the state. This new estimate will be more accurate than either the prediction or the measurement alone. This rather general explanation covers the basic idea of how a Kalman filter works and should be a good starting point from which to launch into the equations.
4.3 Kalman Filter Equations
As mentioned above, the derivation and proof of optimality have been given many times and are available in many textbooks and online (Wikipedia has a decent derivation), so only the final equations are given here. First, the predict step. The task here is to take the previous state, the previous state covariance matrix, and the previous inputs, and predict the state and covariance at the current time step. The most critical aspect here is the output: the predicted state and covariance. Different flavors of filter will compute this in different ways, but all eventually end up with an estimate and covariance. A linear Kalman filter will use the state transition matrix like so:
$$x_{n+1} = F x_n + \Gamma u_n + v \qquad (4.1)$$

where $v$ is the process noise, $N(0, Q_n)$, and $x_n$ is $N(0, P_n)$ (normally distributed, zero mean, with covariance $Q_n$ and $P_n$ respectively). The state covariance $P_{n+1}$ will be

$$P_{n+1} = F P_n F^T + Q_n \qquad (4.2)$$
Such is the predict step for the linear Kalman filter. For nonlinear systems of the form (noting that the output is the next state and not the state derivatives)

$$x_{n+1} = f(x_n, u_n) + v \qquad (4.3)$$
the extended Kalman filter linearizes the system about the previous estimate like so:

$$F_n = \left.\frac{\partial f}{\partial x}\right|_{x_n, u_n} \qquad (4.4)$$

Then use the nonlinear model to predict the next state, and predict the covariance matrix like so:

$$P_{n+1} = F_n P_n F_n^T + Q_n \qquad (4.5)$$
The unscented Kalman filter uses the unscented transform to predict the state prediction covariance. It takes an optimal scattering of points, propagates them through the nonlinear model, and then uses the resulting output points to recover the covariance. This method is much more accurate, especially for highly nonlinear systems. Interestingly, it is not significantly more computationally demanding. Irritatingly, it requires just enough additional computational time to preclude implementation on our hardware, as proven by attempts to run it in real time. A sketch of the transform is given below.
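For reference, this compact sketch of the transform uses Eigen and the common scaled sigma-point parameters (all assumptions; this is the textbook form, not our flight code). Process noise addition is omitted for brevity.

```cpp
#include <Eigen/Dense>
#include <functional>
#include <vector>

// Unscented prediction: draw 2n+1 sigma points from the Cholesky factor of P,
// push them through the nonlinear model f, and recover the predicted mean and
// covariance from the weighted outputs.
void unscentedPredict(const Eigen::VectorXd& x, const Eigen::MatrixXd& P,
                      const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& f,
                      Eigen::VectorXd& xPred, Eigen::MatrixXd& Ppred) {
    const int n = static_cast<int>(x.size());
    const double alpha = 1e-3, beta = 2.0, kappa = 0.0;  // common defaults
    const double lambda = alpha * alpha * (n + kappa) - n;
    const Eigen::MatrixXd S = ((n + lambda) * P).llt().matrixL();

    std::vector<Eigen::VectorXd> y(2 * n + 1);
    y[0] = f(x);
    for (int i = 0; i < n; ++i) {
        y[1 + i]     = f(x + S.col(i));   // sigma point x + column of sqrt
        y[1 + n + i] = f(x - S.col(i));   // mirrored sigma point
    }

    const double w0m = lambda / (n + lambda);
    const double w0c = w0m + (1.0 - alpha * alpha + beta);
    const double wi  = 0.5 / (n + lambda);

    xPred = w0m * y[0];
    for (int i = 1; i <= 2 * n; ++i) xPred += wi * y[i];

    Ppred = w0c * (y[0] - xPred) * (y[0] - xPred).transpose();
    for (int i = 1; i <= 2 * n; ++i)
        Ppred += wi * (y[i] - xPred) * (y[i] - xPred).transpose();
}
```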
The second step of a Kalman filter is to blend the two estimates of the state. One of the more powerful features of the Kalman filter is that it is possible to estimate the state even with an incomplete measurement of the state vector. Even if it is only possible to measure a subset, or a linear or nonlinear combination of the states, it is (often) possible to estimate the state vector. As long as the system is observable (refer to most any controls text for a more rigorous development of observability), the filter can estimate the states.
Now we must introduce the measurement model. We assume for the linear case that the measurements are a linear combination of the states plus a noise vector:

$$z_n = H_n x_n + w \qquad (4.6)$$

where $z_n$ is the measurement and $w$ is $N(0, R)$.
Similarly, the nonlinear version is:

$$z_n = h(x_n) + w \qquad (4.7)$$
Linearizing,

$$H_n = \left.\frac{\partial h}{\partial x}\right|_{x_n} \qquad (4.8)$$
The first step is to compute the covariance of the predicted measurements. Linearly, this can be done like so:

$$S_n = H_n P_n H_n^T + R \qquad (4.9)$$

Next, we must compute the Kalman gain:

$$K_n = P_n H_n^T S_n^{-1} \qquad (4.10)$$
Now we can update the state estimate:

$$x_n = x_n + K_n(z_n - H_n x_n) \qquad (4.11)$$

or for the nonlinear case:

$$x_n = x_n + K_n(z_n - h(x_n)) \qquad (4.12)$$

where $z_n$ is the measurement vector. Now we can also update the state covariance matrix:
$$P_n = (I - K_n H_n) P_n \qquad (4.13)$$
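A minimal sketch of one predict/update cycle, equations (4.1)-(4.2) and (4.9)-(4.13), is given below using Eigen (an assumption; the struct layout and variable names are illustrative).

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

struct KalmanFilter {
    VectorXd x;             // state estimate
    MatrixXd P;             // state covariance
    MatrixXd F, G, Q, H, R; // transition, input (Gamma), noise, output matrices

    void predict(const VectorXd& u) {
        x = F * x + G * u;             // (4.1), noise-free propagation
        P = F * P * F.transpose() + Q; // (4.2)
    }

    void update(const VectorXd& z) {
        const MatrixXd S = H * P * H.transpose() + R;       // (4.9)
        const MatrixXd K = P * H.transpose() * S.inverse(); // (4.10)
        x += K * (z - H * x);                               // (4.11)
        P = (MatrixXd::Identity(P.rows(), P.cols()) - K * H) * P; // (4.13)
    }
};
```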
These are the basic equations for a linear and an extended Kalman filter. The equations themselves are quite simple; the difficulty arises in trying to decide what $f$ and $h$ are, what our measurements are, and what the states should be. In that vein, we can begin our discussion of how to apply these equations to a real aircraft. First, the equations of motion for an aircraft are typically given in the continuous form:
$$\dot{x} = f(x, u) + v \qquad (4.14)$$
This means that the Jacobian equations above are not quite valid. Since $f$ is a continuous function, we must integrate it (numerically) like so to get the state transition matrix:

$$F_n = I + \frac{\partial f}{\partial x}\,\Delta t \qquad (4.15)$$

This is simply Euler's method of integration for a multivariate system (the partial is taken at the current state estimate, of course). There is a similar equation for the measurement equation, but
for aircraft in general h is linear, magnetometers being one of the more notable exceptions.
4.4 Kalman Filter Structure for Aircraft
Now we must venture into the specifics of how to structure a Kalman filter for an aircraft. A key topic here is what the states, the inputs, and the measurements are. One is tempted to say that the measurements should simply be whatever sensors the aircraft might have: accelerometers, gyros, GPS, magnetometers, angle of attack sensors, etc. This calls attention to something so often left out of general treatments of Kalman filtering: generally, the gyros and accelerometers are taken as inputs to the system. One would expect them to be measurements, since they are, in the strictest sense, measurements. They are, however, taken as inputs. Doing this has several advantages. First, the exact aerodynamic characteristics no longer enter into the Kalman filter, so the filter can be (mostly) aircraft independent. Second, this also allows the IMU to be tested on the bench. If the aerodynamics were included, the filter could only be tested on a flying aircraft, or in simulation. Depending on the extent to which you can include hardware in the loop, this may be a highly attractive attribute. Third, it significantly reduces the number of states that must be estimated, which is advantageous both in terms of stability and computational complexity. We will look at this further in a moment. First, we will look at a naive choice for the state vector and inputs.
$$x = \begin{bmatrix} x & y & z & u & v & w & \phi & \theta & \psi & p & q & r \end{bmatrix}^T, \qquad u = \begin{bmatrix} \delta_t & \delta_e & \delta_a & \delta_r \end{bmatrix}^T \qquad (4.16)$$
This would be a relatively standard set of equations for simulating an aircraft. However, it runs afoul of the problem listed above: it is not aircraft independent, and will not work on the bench. It also has another critical problem. The accelerometers measure the specific force applied to the aircraft. This is not a state. If we add it to the state vector, we then need to compute its derivative (since that is what $f$ does). What is the derivative of the force? The unpleasant solution is to model the force as a random walk ($\dot{F} = 0 + v$), which does not work very well and results in an oversize state vector. The other solution is to not include the forces as states at all, but to call them highly nonlinear measurements of the angle of attack, sideslip, and airspeed. The disadvantage of this is that $h(x)$ becomes larger and contains more varying terms, and the highly nonlinear and noisy nature of aerodynamic forces will introduce significant error into the measurement of the actual states, in this case $u$, $v$, $w$ (which can be formed into AOA, sideslip and airspeed, and thence into forces). Also, you would need to formulate the aerodynamic model as
¦¦g(Fx,Fv,Fz)
?
(4.17)
W
which is going to be quite difficult, and possibly singular or ambiguous. The real solution is to
consider the following system:
?
y
Fx
Z
Fy
U
Fz
V
(4.18)
P
W
Q
r
?
This trims the state vector to 9 entries, and solves the problem of how to model the forces. It keeps $h$ linear, and maintains aircraft independence. It also means that disturbances like turbulence are measured, which significantly improves accuracy. In fact, it makes integration error the primary contributor to the process noise (the only exception to this being the bias noises, which will be discussed later). However, it does introduce one complication. The Kalman filter assumes that the inputs are known. In this case, they are only measured. We must account for this in the process noise. This is done by propagating the measurement noise through the system dynamics and finding
an equivalent process noise matrix that includes the sensor noise.

$$Q_{eq} = \Gamma L \Gamma^T + Q \qquad (4.19)$$

where $L$ is the sensor noise covariance and $\Gamma = \partial f / \partial u$. The system dynamics are easily derived
from the definition of the reference frames and Newton's 2nd law.
$$\dot{r}_p = A \cdot \begin{bmatrix} u & v & w \end{bmatrix}^T \qquad (4.20)$$

$$\dot{v}_b = F/m - \begin{bmatrix} p & q & r \end{bmatrix}^T \times \begin{bmatrix} u & v & w \end{bmatrix}^T \qquad (4.21)$$

$$\dot{\Theta} = G(\phi, \theta, \psi) \cdot \begin{bmatrix} p & q & r \end{bmatrix}^T \qquad (4.22)$$
where $A$ and $G$ are the direction cosine matrix from the body axes to the world axes and the body rate to Euler rate transform matrix, respectively. Naturally, it would be possible to use a quaternion formulation instead, but for our low-performance aircraft it is not worth the effort.
At this point we must discuss a significant issue. Since the inertial measurements are being used as inputs, they cannot also be used in the Kalman filter (the predicted and measured states would not be independent), so in what sense are we using a Kalman filter here? The answer lies in the GPS. The position of the aircraft is the only measurement that the Kalman filter is aware of. The inertial data is used only as inputs for the state prediction. In general, GPS data cannot be had at 50 Hz. In this case, it is only available at 1 Hz. That means that the Kalman filter will only run at 1 Hz. The state prediction will still run at 50 Hz, however, since updated state data must be available for control purposes. Since the prediction step runs constantly, the state covariance matrix will grow until the Kalman filter runs, at which point it will shrink, then grow, then shrink again in a sort of sawtooth pattern, as the sensor noise compounds until the Kalman filter can correct the error.

In any case, the measurement model is simply a 3x9 matrix (an identity block for the position states, zeros elsewhere), because the GPS measures the position directly. There may need to be a coordinate transform in there somewhere, but it is basically a pass-through. A sketch of this timing structure is given below.
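This sketch reuses the KalmanFilter struct from the previous section. The hardware-interface functions (imuRead, gpsFixAvailable, gpsRead, waitForNextTick) are hypothetical placeholders, not part of the actual avionics code.

```cpp
#include <Eigen/Dense>

// Hypothetical hardware hooks (assumed, declared but not defined here):
Eigen::VectorXd imuRead();   // accelerometer and gyro inputs for the prediction
bool gpsFixAvailable();      // true roughly once per second when a fix arrives
Eigen::VectorXd gpsRead();   // GPS position measurement
void waitForNextTick();      // blocks until the next 50 Hz frame

void navigationLoop(KalmanFilter& kf) {
    while (true) {
        kf.predict(imuRead());     // 50 Hz: P grows between GPS fixes
        if (gpsFixAvailable())
            kf.update(gpsRead());  // 1 Hz: P shrinks, giving the sawtooth
        waitForNextTick();
    }
}
```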
4.5 Kalman Filter Additions

4.5.1 Bias Estimation
The above description assumes that the inertial measurements have no bias. In general (and to a great extent with the IMU used for this work) this will not be true. It is possible to model the biases as states and use the Kalman filter to estimate them. They are modeled as (very, very slow) random walks,

$$\dot{b} = 0 + v$$

and simply added to the equations of motion. This can work well, especially to check the
calibration. The problem lies with the observability of these states. Without a direct way to sense
the orientation, it is not possible to distinguish a bias in the accelerometers from tilt, especially
roll. Pitch and yaw both affect the way that the position changes over time, but in the standard
dynamics roll does not. Hence even when the aircraft is moving Y accelerometer bias and roll angle
cannot be distinguished. Hence, the bias will tend to drift if no other way of detecting the roll angle
is used. For that reason, a good calibration is often better than trying to estimate accelerometer
biases. A similar argument can be made for the gyro biases. In the end, the best option is often
to get the absolute best calibration possible, and then try the filter both with and without bias
estimation, and find out empirically which works best for the given situation.
4.5.2 Scale Factor Estimation
Another type of error that can be modeled is scale factor error. It is possible for the inertial sensors to have incorrect scale values, resulting in incorrect acceleration or angular rate readings. This is a particularly difficult problem, because gyro scale factors cannot be measured easily without a rate table that can apply a known angular rate for calibration. Without one, the only way to measure them is to rotate the sensors through a known angle and estimate the scale factors from the measured rotation angle. Such a strategy is not nearly as accurate as a proper rate table calibration, but in the absence of such specialized and expensive equipment, it is better than nothing at all.
Given these problems, it seems attractive to attempt to estimate these unknown scale factors. There is a critical problem with such a strategy. The accelerometer biases and scale factors must be estimated based on GPS data. Then the gyro biases and scale factors must be estimated based on the corrected accelerometer signals (because the accelerometers are being used to infer orientation).
There are so many steps, and the error compounds at each step, resulting in very poor estimation.
The end result can often be worse than simply calibrating carefully and counting on the GPS data
to correct any drift. The exception to this rule is when magnetometers are used. Because there are
more measurements, more states can be estimated accurately.
4.5.3 Noise Characterization and Modeling
Another common addition to a Kalman filter is to model the noise more extensively, specifically to include the time correlation of the $v$ and $w$ noise series. This allows for the modeling of quantization error (which has a very specific and well known time correlation) as well as other time correlation effects. The balance that must be struck with any Kalman filter is between ever more accurate modeling and ever larger state vectors, as more and more parameters must be estimated. The fundamental problem is that given an error between predicted and measured states, the filter must assign the blame for that error to one or more of the unmeasured states. The more unmeasured states that affect the measurement, the more difficult it is to assign the blame correctly. This is why adding bias and scale factor states can reduce the overall accuracy of the filter: too many possible sources of error.
4.5.4 Magnetometers
Magnetometers are integrated into the Kalman filter described above as additional measurements. Magnetometers measure the strength of the local magnetic field, and by arranging three sensors in an orthogonal triad, it is possible to measure the exact orientation of the earth's magnetic field in the body coordinate system. This is measured as a vector, which can be normalized into a unit vector that can be used to help determine the orientation of the aircraft. Implementing this is something that the author has not attempted, but the first method to try would be to implement the transform from the Euler angles or quaternions to what the magnetic field (as measured by the sensors attached to the aircraft) would be in that orientation. Given the magnetic declination and inclination for the current location, this should be a matter of running the magnetic field vector (from a standard magnetic model) through the direction cosine matrix from the coordinate system that the field is expressed in (probably earth centered, earth fixed) to the body coordinates. This corresponds to what the sensors measure, which is in general how to integrate a measurement into the Kalman filter: write the new measurements as a function of the states, implement them in $h(x)$, and that should be it.
Chapter 5
Predictive Controller Development
Most control theory constructs are built around one of two basic paradigms: continuous linear ODEs, or discrete time difference equations. The tools associated with each mathematical construct (Laplace transforms and Z-transforms) are then used to analyze and design a controller that will modify the response of the system in a desirable way. An interesting consequence of this fact is that most control engineers tend to think in terms of continuous systems, even though such systems are almost never used to implement the controllers that they have designed. In fact, the Z-transform is used mainly to allow designs in the continuous domain to be implemented in the discrete domain. The author suspects that this attachment to continuous systems originates from the days of analog computers, when linear ODEs could be directly implemented. Control theory got started on vacuum tubes, and has never really left. Of course, the tools available to controls engineers in the continuous domain are considerable; Bode plots, transfer functions and root locus (among many, many others) have been excellent and reliable tools for controls engineers for many years. That said, the author would like to explore some other paradigms for control design that break completely from the standard linear ODE form.
A word on stability proofs: "proof" is an entirely relative term. Models are not the real system, so a proof only goes as far as the model is congruent to reality. The model is never quite right, and even proofs of robustness still assume a model structure (they have to, for without structure, analysis is impossible), so as soon as the structure is no longer congruent to reality the proof is no longer valid. It is certainly better than nothing, but it is the difference between certainty and probability. If your financial adviser tells you that an investment is a sure thing, then you ought to be comfortable betting your entire life savings on it. However, if the investment has only a very
high probability of success, then we ought not bet everything on it. Stability proofs give us a high probability of stability, not certainty. This can be easily demonstrated by observing airplane crashes: they exist, therefore the control systems in place were only very stable, not completely stable. Unfair, you say? If a proof does not cover all possible situations, then it is not a proof at all. Stability proofs do not prove safety. Nothing proves safety. We can only have probabilities. So what is the point of all this? Mainly, that the lack of a stability proof ought not disqualify a control system if safety can be established in other ways: extensive testing and simulation, for example. This is required anyway, as the FAA understands that there is no substitute for actually testing a system under many different conditions.
From a traditional standpoint, the controller developed here is an output feedback predictive
controller with independent control and prediction horizons. Output feedback is the most notable
part of the structure, since typically state feedback is preferred. In addition, the outputs used are
not the traditional IMU outputs used when we typically talk about output feedback. The exact
feedback structure will be discussed further in later sections. The block structure looks like this:
[Figure 5.1: Control System Block Diagram. Pilot commands drive the state trajectory extrapolation; the damage database feeds the system model used for state trajectory prediction; desired and predicted outputs form the output error, which the cost function/constraints and optimizer turn into the control sequence and control input applied to the plant.]
Several mathematical constructs are used in this work. The first is partitioning the model
into certain and uncertain components. Some aspects of a dynamic model do not change. Euler's
equations are an excellent example of this. The equations hold no matter what happens to the
aircraft. The moments of inertia, mass and aerodynamic characteristics can all change in the case
of damage, but Euler's and Newton's equations remain as valid as the day they were first derived.
By keeping the certain and uncertain aspects of the model separate, the system identification
algorithm can confine its adjustments to the uncertain aspects of the system. This allows the
system ID to be applied to only what has changed significantly. This constrains the identification,
which improves the speed and accuracy of the identification process. Second, the concept of state
trajectory extrapolation is introduced. Finally, a damage database is used to preconverge the
identification and improve the modeling of the damage immediately after a damage event.
5.1 System Model
The system model has two main components. The first is the body fixed equations of motion, which relate the forces and moments to the angular velocity, acceleration, and other states and state derivatives. These were derived from [30]. The second major component is the aerodynamic model, which relates the aerodynamic state of the aircraft to the forces and moments acting upon it. The aero model is based on linear coefficients and regressors and can then be combined with the previously discussed Euler model. The result is a full nonlinear model with the ability to reuse the Euler equation code and swap the aero code out dynamically while the controller is running.
5.1.1 Kinematic Equations of Motion
The kinematic equations of motion describe the motion of a 6 degree of freedom body subjected to forces and moments. The forces and moments are expressed in the body frame. The state space form is:

$$\dot{x} = f(x, u) \qquad (5.1)$$

The coordinates are illustrated in the figure below:
[Figure 5.2: Aircraft Body Fixed Coordinates. X: roll axis; Y: pitch axis; Z: yaw axis.]
The state vector $x$ is:

$$x = \begin{bmatrix} r_x & r_y & r_z & u & v & w & \phi & \theta & \psi & p & q & r \end{bmatrix}^T \qquad (5.2)$$
The input vector is:

$$u = \begin{bmatrix} F_x & F_y & F_z & M_x & M_y & M_z \end{bmatrix}^T \qquad (5.3)$$
CHAPTER 5. PREDICTIVE CONTROLLER DEVELOPMENT
43
As mentioned, all the forces and moments are expressed in the body reference frame. It is easier to deal with the equations three state derivatives at a time, as vectors. The first equation is simple enough:

$$\dot{r} = \begin{bmatrix} \cos\psi\cos\theta & \sin\phi\sin\theta\cos\psi - \sin\psi\cos\phi & \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi \\ \sin\psi\cos\theta & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi \\ -\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta \end{bmatrix} v_b \qquad (5.4)$$
Next, we can write Newton's second law in vector form:

$$\dot{v}_b = \frac{1}{m} F - \omega \times v_b + g \qquad (5.5)$$
Again, note that the $v_b$ used here is the vector velocity expressed in the body frame. Next we can move to the Euler angle rates:

$$\dot{\phi} = p + q \sin\phi \tan\theta + r \cos\phi \tan\theta \qquad (5.6)$$

$$\dot{\theta} = q \cos\phi - r \sin\phi \qquad (5.7)$$

$$\dot{\psi} = q \sin\phi \sec\theta + r \cos\phi \sec\theta \qquad (5.8)$$
Finally, we get to the Euler equation for rotational motion:

$$\dot{\omega} = I^{-1} \left( \tau - \omega \times (I \omega) \right) \qquad (5.9)$$
This completes the kinematic equations of motion for the aircraft. Because of the way that the
Euler equation is formulated, it can include all the product moments of inertia, which is useful for
simulating damage.
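A sketch of equations (5.4)-(5.9) as a single state-derivative function is given below, using Eigen (an assumption; the struct layout and gravity-in-body-frame argument are illustrative choices, not the actual simulation code).

```cpp
#include <Eigen/Dense>
#include <cmath>
using Eigen::Matrix3d;
using Eigen::Vector3d;

struct State6Dof {
    Vector3d r;     // world position
    Vector3d vb;    // body-frame velocity
    Vector3d euler; // phi, theta, psi
    Vector3d omega; // body rates p, q, r
};

// State derivatives per (5.4)-(5.9). I may contain product-of-inertia terms,
// which is what allows damaged mass distributions to be simulated.
State6Dof derivatives(const State6Dof& x, const Vector3d& F, const Vector3d& tau,
                      double m, const Matrix3d& I, const Vector3d& gBody) {
    const double sph = std::sin(x.euler(0)), cph = std::cos(x.euler(0));
    const double sth = std::sin(x.euler(1)), cth = std::cos(x.euler(1));
    const double sps = std::sin(x.euler(2)), cps = std::cos(x.euler(2));
    const double p = x.omega(0), q = x.omega(1), r = x.omega(2);

    Matrix3d R; // body-to-world direction cosine matrix, (5.4)
    R << cps * cth, sph * sth * cps - sps * cph, cph * sth * cps + sph * sps,
         sps * cth, cph * cps + sph * sth * sps, cph * sth * sps - sph * cps,
         -sth,      sph * cth,                   cph * cth;

    State6Dof dx;
    dx.r  = R * x.vb;                                               // (5.4)
    dx.vb = F / m - x.omega.cross(x.vb) + gBody;                    // (5.5)
    dx.euler << p + q * sph * (sth / cth) + r * cph * (sth / cth),  // (5.6)
                q * cph - r * sph,                                  // (5.7)
                q * sph / cth + r * cph / cth;                      // (5.8)
    dx.omega = I.inverse() * (tau - x.omega.cross(I * x.omega));    // (5.9)
    return dx;
}
```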
5.1.2 Aerodynamic Equations of Motion
Aerodynamics can be particularly difficult to model, especially when damage is present. Airflows can be difficult to model accurately even under ideal conditions, and become more difficult still under damage. As a result, any model is going to be subject to a certain degree of inaccuracy. For relatively tame systems, linear models can be adequate, if carefully designed. The model is built around a linear structure, where each force or moment is a dot product of a regression vector and a parameter vector. The regressors are nondimensional and nonlinear functions of the states:
$$x_r = \begin{bmatrix} \alpha & \beta & \dfrac{V}{V_0} & \dfrac{pb}{2V} & \dfrac{q\bar{c}}{2V} & \dfrac{rb}{2V} & \dfrac{\dot{\alpha}\bar{c}}{2V} & \dfrac{\dot{\beta}b}{2V} & \delta_t & \delta_a & \delta_e & \delta_r \end{bmatrix}^T \qquad (5.10)$$
Not all of these regressors will be used, mostly due to sensor limitations. Specifically, angle of attack rate is very difficult to measure to a level of accuracy that will allow it to be separated from pitch rate. Data collinearity between pitch rate and angle of attack rate is significant and very difficult to decouple. Implementation is done with a matrix of coefficients that can be multiplied with the regression vector to yield the forces and moments. It is very simple to implement and tune, and much of the code can be reused for the system ID process. The equation is:
$$F_M = C_a \cdot x_r \qquad (5.11)$$
where $F_M$ is the concatenation of the body forces and moments. Naturally, this is a very simple aerodynamic model. Fortunately, the aircraft in question (a small scale UAV) is also simple, and operates well inside the flow regime where a simple model is adequate. Aside from that, the predictive controller is model-agnostic. All the controller needs is a way to propagate the state, given an initial state and control sequence. If this model proves insufficient, a better one can be developed and implemented with a minimum of difficulty.
The data for this model was derived from computational simulations, which yielded the longitudinal dynamics. The control coefficients were not derived, and had to be guessed at, assuming the speed of the aircraft and reasonable control deflections. A similar procedure was used to estimate the lateral coefficients. One of the many upsides of predictive control is that all of the work in tuning the controller and getting adequate performance is almost entirely independent of the aircraft. If a more accurate model is developed, it can be loaded into the controller, and the control law is instantly updated. This makes a good transition into the next section.
5.2 Damage Database
As mentioned above, the control law is derived from the system model in an iterative fashion, so changing the model is equivalent to changing the control law. This has a critical application to damage compensation. Once the aircraft is damaged, the extent to which performance is recovered is the extent to which the new system can be modeled, minus any new constraints due to the damage. Since this process always takes time, and often requires maneuvers specifically designed to improve the identification process, there is always a gap time during which the pilot must fly the plane with a suboptimal control system. There is a way around this, however. If a model for the damaged system already exists, or can be generated from a priori data, the control system can get a first pass approximation of the damaged system model immediately, with no gap time at all. It is analogous to gain scheduling in some ways, with damage as a flight condition parameter. The resulting control system will not be perfect, but it should be a good start. Implementation is accomplished through the class based structure of the model predictive controller code. The class simply has a function pointer to the aerodynamic model and a set function that is used to change the function that the pointer points to. In terms of implementation and use, swapping the aero model for a damaged aero model can be accomplished with a single line of code, as sketched below. Alternatively, the entire controller can be swapped out. The methods are equivalent insofar as the rest of the components (6DOF model, state trajectory extrapolator) are the same. In fact, given how C++ dispatches a function pointer, it is exactly the same code in memory that runs either way.
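The sketch below shows the shape of that mechanism; the class and function names are illustrative stand-ins for the actual controller code.

```cpp
#include <Eigen/Dense>
using Eigen::VectorXd;

// An aero model maps the regressor vector to body forces and moments, (5.11).
using AeroFn = VectorXd (*)(const VectorXd& xr);

VectorXd nominalAero(const VectorXd& xr);     // C_a * xr for the intact aircraft
VectorXd damagedWingAero(const VectorXd& xr); // precomputed damaged-wing model

class PredictiveController {
public:
    void setAeroModel(AeroFn f) { aero_ = f; } // the single-line model swap
    VectorXd forcesAndMoments(const VectorXd& xr) const { return aero_(xr); }
private:
    AeroFn aero_ = &nominalAero;
};

// When the damage database fires:
//   controller.setAeroModel(&damagedWingAero);
```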
The missing link in the chain is detecting and quantifying the damage. That task is more of a health monitoring task, and therefore is left to others [29]. However, some discussion will be given here of the difficulties and possible solutions in actually developing a true damage database and a damage detection and quantification system. One critical aspect of a damage database is some method of reducing the enormous size of such a construct. Developing controllers for every possible damage case is not practical, so a method of "interpolating" models or controllers is necessary. One important tool is to recognize that damage can be normalized by location and size, so knowing what a hole will do in one part of the wing gives good insight into what it will do in another part of the wing. The first step in reducing the size of the database is testing or high fidelity simulation to develop a better understanding of how the forces generated by damage change as the damage is moved and changed in size. Once this understanding is developed, it should be possible to store only a few coefficients and still be able to plug damage size, type and location into the database and get a good damage model.
5.3 System Identification
After a damage event, most of the aircraft is still the same. The aspects of the aerodynamics that the damage changes are the only things that need to be identified. Mathematically, this means identifying the forces, or more specifically, the aerodynamic coefficients for the forces the hole produces. Also, because the location and size of the hole are assumed to be known, it is necessary only to identify the drag and lift (or in this case, loss of lift) forces and possibly the Y moment due to the hole. Measuring these coefficients is a multistep process. First, the forces must be computed from the inertial measurements. Since we are trying to estimate two forces and possibly a moment, and have six measurements to do it with, we have an overdetermined system. It would be useful to use all the information and get a best estimate of the aerodynamic coefficients. This sounds much like a Kalman filter. The full development was given in the Kalman filter background chapter; the specific implementation for identifying the damage coefficients is given here. The forces produced by a hole or other damage are assumed to fit the model:
$$F_a = \bar{q}S(C_{L_0} + C_{L_\alpha}\alpha) \qquad (5.12)$$

$$F_d = \bar{q}S(C_{D_0} + C_{D_\alpha}\alpha + C_{D_v}V_t) \qquad (5.13)$$

where $F_d$ is the drag force and $F_a$ is the loss-of-lift force. Forming these into a vector,

$$F = \begin{bmatrix} -F_d & 0 & F_a \end{bmatrix}^T \qquad (5.14)$$

$$M = r_d \times F \qquad (5.15)$$
These are the measurements used for the online Kalman identification. They are, of course, deviations from the computed force from the online aerodynamic model. The forces and moments can be measured using the inertial sensors, with one caveat. The forces are measured directly by the accelerometers, but the moments require more effort. From the kinetics of a rigid body:

$$\tau = I\dot{\omega} + \omega \times I\omega \qquad (5.16)$$

$I$ (the moment of inertia tensor) is known from ground measurements and damage quantification and localization, and $\omega$ is measured, so all that is needed is $\dot{\omega}$. This is more difficult, as numerical
differentiation is required. This introduces a significant amount of noise if not done carefully. A quadratic, discrete, time domain smooth differentiation algorithm can be derived as such:

$$\dot{z}_i = \frac{1}{\Delta t}\left(\tfrac{3}{2} z_i - 2 z_{i-1} + \tfrac{1}{2} z_{i-2}\right) \qquad (5.17)$$
Frequency domain filtering is also an option, of course, but the phase introduced by the filter has a tendency to skew the time coherence of the measurements. This can be solved by using a zero phase filter, but that is not an option for real time use, though it works well for offline data preprocessing. Even time domain smoothing results in a certain amount of phase lag. For this task the equation is:

$$\dot{\omega}_i = \frac{1}{\Delta t}\left(\tfrac{3}{2} \omega_i - 2 \omega_{i-1} + \tfrac{1}{2} \omega_{i-2}\right) \qquad (5.18)$$
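Equations (5.17) and (5.18) reduce to a one-line backward difference; the sketch below is an assumed form of such a helper, not the flight code itself.

```cpp
// Three-point smooth backward difference, equations (5.17)/(5.18):
// derivative ~= (3/2*z_i - 2*z_{i-1} + 1/2*z_{i-2}) / dt. One-sided, so it is
// usable in real time, at the cost of a small amount of lag.
double smoothDerivative(double zi, double zim1, double zim2, double dt) {
    return (1.5 * zi - 2.0 * zim1 + 0.5 * zim2) / dt;
}
```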
Now that the forces and moments can be measured, we can structure our Kalman filter to estimate the aerodynamic coefficients:

$$x = \begin{bmatrix} C_{D_0} & C_{D_\alpha} & C_{D_v} & C_{L_0} & C_{L_\alpha} \end{bmatrix}^T \qquad (5.19)$$

$$z = \begin{bmatrix} \Delta F^T & \Delta M^T \end{bmatrix}^T \qquad (5.20)$$
where $\Delta F$ and $\Delta M$ are the deviations of the measured forces and moments from the expected aerodynamic forces and moments. These are modeled as:

$$\Delta F = F_{meas} - F_{aero} + v_F \qquad (5.21)$$

$$\Delta M = M_{meas} - M_{aero} + v_M \qquad (5.22)$$
Once the noise is characterized, this Kalman filter structure can be used to estimate the aerodynamic characteristics of the damage. The dynamics are a random walk:

$$x_{n+1} = x_n + w \qquad (5.23)$$
A measurement model is also required. Since the aerodynamic coefficients are generally used as
coefficients, it can be slightly counter-intuitive to think of them as states. The H matrix is:
$$H = \bar{q}S \begin{bmatrix} -1 & -\alpha & -V_t & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & \alpha \\ 0 & 0 & 0 & d_y & \alpha d_y \\ -d_z & -\alpha d_z & -V_t d_z & -d_x & -\alpha d_x \\ d_y & \alpha d_y & V_t d_y & 0 & 0 \end{bmatrix} \qquad (5.24)$$
This assumes that the drag force acts toward the rear of the aircraft, as a drag force should, and the loss-of-lift force toward the local aircraft down. The Kalman filter shows its flexibility here, because we can simply program in an estimate of the numerical derivative noise in addition to the measurement noise and get an optimal estimate of our aerodynamic coefficients. Given the above equations, we can implement a working Kalman filter for damage estimation. Note that a full treatment of the Kalman filter was already given, so the actual equations are not repeated here.
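For concreteness, a sketch of the measurement matrix (5.24) follows, with Eigen assumed and $r_d = (d_x, d_y, d_z)$ as the damage location; the moment rows come directly from $M = r_d \times F$.

```cpp
#include <Eigen/Dense>

// Measurement matrix H of (5.24): rows are (Fx, Fy, Fz, Mx, My, Mz), columns
// are the coefficient states (CD0, CDa, CDv, CL0, CLa). Drag acts aft (-x),
// loss of lift acts toward local aircraft down (+z).
Eigen::Matrix<double, 6, 5> damageH(double qbar, double S, double alpha,
                                    double Vt, double dx, double dy, double dz) {
    Eigen::Matrix<double, 6, 5> H;
    H <<  -1.0,       -alpha,       -Vt,    0.0,         0.0,
           0.0,          0.0,       0.0,    0.0,         0.0,
           0.0,          0.0,       0.0,    1.0,       alpha,
           0.0,          0.0,       0.0,     dy,  alpha * dy,
           -dz,  -alpha * dz,  -Vt * dz,    -dx,  -alpha * dx,
            dy,   alpha * dy,   Vt * dy,    0.0,         0.0;
    return qbar * S * H;
}
```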
5.4 State Trajectory Extrapolation

5.4.1 Introduction and Theory
State trajectory extrapolation is a novel concept that grew out of the need for a full state feedback tracking system. Since most control systems are regulatory in nature, either because the goal actually is to drive the states to zero or because the model is formulated as deviations from the desired state, there is no need to derive a complete, kinematically and kinetically consistent state trajectory. The usual solution has some drawbacks, however. First, the actual behavior of the system ends up depending strongly on the weighting matrices and the control horizon, and this dependence is not always as intuitive as one would like, especially with damaged, cross coupled aircraft. The end result is that tuning a predictive controller can be very difficult and time consuming. Another subtle point deserves attention. Often, the control sequence will have to generate errors in one state to maintain acceptable errors in another. No small part of the difficulty arises out of the fact that it is not possible for the state error to be zero anywhere other than the final state. Take a single degree of freedom point mass with a force input:

$$m \cdot a_m + c \cdot v_m = F_m \qquad (5.25)$$
Further, assume that we would like to have positional control over the mass, so using the velocity
as the only state is not an option. The state vector is:

$$x = \begin{bmatrix} p_m & v_m \end{bmatrix}^T \qquad (5.26)$$

and the equations of motion are:

$$\dot{p}_m = v_m \qquad (5.27)$$

$$\dot{v}_m = -\frac{c}{m} v_m + \frac{F_m}{m} \qquad (5.28)$$

The $A$ matrix is:

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -c/m \end{bmatrix} \qquad (5.29)$$

The $B$ matrix is:

$$B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix} \qquad (5.30)$$
So let us say that we want to move the mass from $p_m(0) = 0$ to $p_m(t_f) = 1$: a simple step input for position. The naive strategy is to simply set the desired position to 1, the velocity to zero, and start the controller. The controller will then use the force input to drive the position to one. However, there is a problem. We have also told it to keep the velocity at zero. Now the controller has two conflicting objectives, so it will have to accept velocity error to eventually achieve its position goals. Conflicting objectives are acceptable, and often show up as consequences of a physical situation, and cannot be avoided. However, this is not the case for our point mass system. We have absolutely no problem with velocity in between the start and end points. One solution might be to not weight the velocity at all. This has a major downside: the optimal strategy is then to do a "flyby" of the target state, where the position matches but the velocity is large, so the mass flies right by the target state right around the end of the control horizon. This is due to the fact that in this case the weighting stops right around the time that the position error starts to grow rapidly. Naturally, a predictive controller only sends the first control input to the system, so it will compensate better and better as the mass nears its target. However, this problem can make tuning more difficult and compromise stability, because the initial trajectory computed by the controller
is not the trajectory that will eventually be followed. This is another characteristic that we would
like to see in any motion planning system for predictive control- the trajectory that is computed at
the initial point is the trajectory that will be flown, barring modeling errors. This brings to mind
dynamic programming, which builds plans from the end point back to the initial condition.
Another solution is to vary the weighting from the beginning of the horizon to the end, so that initially the velocity is lightly weighted, but is weighted more strongly as the mass gets close to its target. This still is not an ideal solution, especially since the control horizon might not extend all the way to the target state and still respect the constraints. This method adds a lot of complexity without really solving the problem. It also adds even more tuning parameters that will not allow intuitive tuning of the controller. Hence this is really a non-solution.
The best solution is to generate a desired state trajectory that is kinematically consistent (e.g.
does not have zero velocity with a changing position) and will achieve the goal, in this case a
position change. This becomes analogous to motion planning problems for robots. The common
solution to such problems is to create a map for pm(t) and differentiate it to get vm(t) and am(t).
While this works very well for point mass systems, aircraft are not as kinematically simple. On
the other hand, we are not necessarily looking for time optimality, which simplifies the problem
dramatically. Back to our point mass system. What is needed is a way of automatically generating
a path from the start point to the end point that will respect velocity and acceleration constraints.
Probably the simplest solution to our 1-D problem is the bang-bang solution. The issue is that
this is excellent for automated systems, but is not so useful for manual control systems. Also,
the damping of the system interferes with the solution because the acceleration/deceleration is not
constant. This significantly complicates the solution.
A critical point to realize is that for any given trajectory, the prediction horizon may or may
not extend all the way to the end point. Removing this requirement has the effect of decoupling
the prediction horizon from the speed of the system response. A predictive controller that always
ends the move at the prediction horizon is like a boss who always sets deadlines exactly one week
out, regardless of the task. For small projects, the employees will move slower than they could.
For big projects, they will be in such a hurry that they will do poor work to meet the deadline.
Predictive control is not quite that bad, since getting the move done quicker reduces the objective
function, but the length of the control horizon will still affect the speed of the response. However,
by generating a feasible, constraint-respecting trajectory independent of any specific time frame,
we can specify all sorts of handling qualities completely independently of weighting matrices and
prediction horizons. In this scheme, weighting state or output error ever more heavily simply drives
the system to follow the feasible trajectory ever more closely. Hence the system tends to converge
to the handling that we have specified for it, which is especially useful when damage occurs and
the handling must be modified to match the new system capabilities. Changing the extrapolation
will change the handling, without having to attempt to tune the weighting matrices in flight.
To summarize, we need several characteristics for any extrapolation system:
1. Dynamic/Kinematic/Constraint Consistency
2. Temporal Invariance
3. Delivers Desired Handling
The first refers to generating paths that will respect constraints and be consistent with the dynamics
and kinematics of the system in question, the second requires that no matter where along the
trajectory we regenerate the trajectory, we will generate the same trajectory that was generated
in the first place (assuming the pilot/operator input has not changed, and there is no error in the
current state). The third is self explanatory.
5.4.2 Point Mass Implementation
To implement this system for the 1-D point mass system, we must decide what our control input
should do. The simplest is to use the input to control velocity. To do that, we first need to decide
on our velocity profile. The simplest way is to assume a constant force until the velocity reaches the
desired value, and from there maintain a constant velocity. One handy addition is to go ahead and
predict the desired position as well. This closes the position loop and ensures that if a zero velocity
is commanded, the system will hold position, even in the presence of a disturbance. However, for
simplicity we will consider a simple position command system. This will be applicable later, when
we discuss a rate command system for an aircraft. In this case, we need to be aware that we should
not expect the system to run right at its maximum force input regularly. This is rarely good for a
system, and leaves it no reserve force for unforeseen disturbances. We should leave enough excess
force in reserve to handle the likely disturbances that we may encounter. We will also need to be
aware that because of the type of system the velocity may be limited by either a straight velocity
limit, or by the force not being able to overcome the damping force. We will also need to be aware
that we need some velocity in reserve, for the same reasons we need force reserve. We will define
Fc and vc, the cruise force and velocity, respectively. These are assumed to be the force input and
velocity that allow enough reserve to handle normal disturbances.
For this version of the trajectory extrapolation, we will start with a function for position, and
differentiate it to get velocity. This will represent our trajectory. There will be some complications
to this, but we will discuss those as we go. What function we use will affect the time optimality of
the motion. By choosing the right function, we can get a bang-bang solution. The difficulty lies in
the multiple special cases. Depending on the distance of the move and the velocity constraint, the
velocity constraint may or may not become active, which will change the shape of the path. Also,
damping introduces an asymmetry to the path, which makes things even more difficult. Trying
to keep the force input and/or velocity pegged at the limit can be quite difficult to do. For this
reason, a simpler, suboptimal (time-wise) solution is formulated. We will represent the path as
a sinusoid, and assume that the initial and final velocities should be zero. This simplifies the
derivation significantly, and if nonzero velocity at the boundaries is desired, it can be added to the
derivation later. The path is assumed to be:
p_m(t) = A_1 \sin(\omega_p t + \phi) + A_0    (5.31)

subject to:

p_m(0) = p_0    (5.32)

p_m(t_f) = p_1    (5.33)

\dot{p}_m(0) = \dot{p}_m(t_f) = 0    (5.34)

\dot{p}_m(t) \le v_c, \quad 0 \le t \le t_f    (5.35)

F(t) \le F_c, \quad 0 \le t \le t_f    (5.36)

We can get the velocity easily enough by differentiating the position:

\dot{p}_m(t) = A_1 \omega_p \cos(\omega_p t + \phi)    (5.37)

And from (5.34) we can find \phi:

\dot{p}_m(0) = A_1 \omega_p \cos(\phi) = 0    (5.38)

This can only be true (other than the trivial cases A_1 = 0 or \omega_p = 0) when

\phi = \pi/2    (5.39)

Now we can use the other half of (5.34):

\dot{p}_m(t_f) = A_1 \omega_p \cos(\omega_p t_f + \pi/2) = 0    (5.40)

Again, ignoring the trivial cases:

\cos(\omega_p t_f + \pi/2) = 0    (5.41)

The argument of the cosine must be some odd multiple of \pi/2. To keep the trajectory as simple as possible and to prevent overshoot, we will use 3\pi/2.

\omega_p t_f + \pi/2 = 3\pi/2    (5.42)

\omega_p = \pi/t_f    (5.43)

Now, our position is:

p_m(t) = A_1 \sin(\pi t/t_f + \pi/2) + A_0    (5.44)

Next, we can use (5.32) and (5.33) to get A_1 and A_0.

p_m(0) = p_0 = A_1 \sin(\pi/2) + A_0    (5.45)

p_m(t_f) = p_1 = A_1 \sin(3\pi/2) + A_0    (5.46)

Simplifying:

p_0 = A_0 + A_1    (5.47)

p_1 = A_0 - A_1    (5.48)

Adding and subtracting, we get:

(p_0 + p_1)/2 = A_0    (5.49)

-(p_1 - p_0)/2 = A_1    (5.50)

Now our position is:

p_m(t) = -\frac{p_1 - p_0}{2} \sin(\pi t/t_f + \pi/2) + \frac{p_0 + p_1}{2}    (5.51)
The only degree of freedom left is the final time tf. This will be constrained by either the
velocity limit or the control input limit. The easiest to derive is the velocity limit, so we will start
there. The velocity is:
\dot{p}_m(t) = -\frac{p_1 - p_0}{2} \cdot \frac{\pi}{t_f} \cos(\pi t/t_f + \pi/2)    (5.52)

This will have its maximum magnitude on the active interval when the cosine equals -1, or when \pi t/t_f + \pi/2 = \pi. We can set this equal to the cruise velocity per (5.35) and get:
\frac{p_1 - p_0}{2} \cdot \frac{\pi}{t_f} = v_c    (5.53)

Solving for t_f:

t_f = \frac{\pi (p_1 - p_0)}{2 v_c}    (5.54)
This is the minimum time to complete the maneuver that will respect the velocity constraint.
Now we must find a corresponding minimum time that will respect the force constraint. This is
more difficult because now the system dynamics come into play. Using the equation of motion and
the specified motion, the necessary force input is:
F(t) = m \ddot{p}_m(t) + c \dot{p}_m(t)    (5.55)

The acceleration is a differentiation away:

\ddot{p}_m(t) = \frac{p_1 - p_0}{2} \left(\frac{\pi}{t_f}\right)^2 \sin(\pi t/t_f + \pi/2)    (5.56)

So now the force is:

F(t) = m \frac{p_1 - p_0}{2} \left(\frac{\pi}{t_f}\right)^2 \sin(\pi t/t_f + \pi/2) - c \frac{p_1 - p_0}{2} \frac{\pi}{t_f} \cos(\pi t/t_f + \pi/2)    (5.57)
Making a few substitutions to simplify the equation:
F(t) = A \sin(\theta) + B \cos(\theta)    (5.58)

where

A = m \frac{p_1 - p_0}{2} \left(\frac{\pi}{t_f}\right)^2    (5.59)

B = -c \frac{p_1 - p_0}{2} \frac{\pi}{t_f}    (5.60)

\theta = \pi t/t_f + \pi/2    (5.61)

Now, we only care about the magnitude of the force, not at what time it occurs, and we know the magnitude of this sum is:

F_{max} = \sqrt{A^2 + B^2}    (5.62)
Rearranging to solve for tf and recalling our cruise force:
F_c^2 = \frac{(p_1 - p_0)^2}{4} \left[ m^2 \left(\frac{\pi}{t_f}\right)^4 + c^2 \left(\frac{\pi}{t_f}\right)^2 \right]    (5.63)

Multiplying through by t_f^4:

\frac{4 F_c^2}{(p_1 - p_0)^2} t_f^4 - c^2 \pi^2 t_f^2 - \pi^4 m^2 = 0    (5.64)

This is a quadratic in t_f^2, which we can solve easily enough:

t_f^2 = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}    (5.65)

t_f^2 = \pi^2 \cdot \frac{c^2 + \sqrt{c^4 + 16 m^2 F_c^2 / (p_1 - p_0)^2}}{8 F_c^2 / (p_1 - p_0)^2}    (5.66)

Given that all the parameters are positive, and that we only want positive roots, we can see that the radical will always be larger than c^2, so we know that we only want the + in the \pm. We can also multiply top and bottom by (p_1 - p_0)^2:

t_f^2 = \pi^2 \cdot \frac{c^2 (p_1 - p_0)^2 + (p_1 - p_0) \sqrt{c^4 (p_1 - p_0)^2 + 16 m^2 F_c^2}}{8 F_c^2}    (5.67)
This is about as simplified as this is going to get. In planning the move, both the velocity limited
and force limited maneuver times would be computed, and the larger of the two used to generate the
motion plan. Interestingly, this can also generate a control sequence that will follow the trajectory.
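In code, this planning step is only a few lines. The following is a minimal sketch, with illustrative names that are not taken from the flight code:

#include <cmath>
#include <algorithm>

// Sketch: minimum maneuver time for the sinusoidal motion plan,
// combining the velocity limit (5.54) and the force limit (5.67).
double maneuverTime(double p0, double p1, double m, double c,
                    double vc, double Fc)
{
    const double pi = 3.14159265358979;
    double dp = std::fabs(p1 - p0);

    // Velocity-limited time, eq. (5.54)
    double tfVel = pi * dp / (2.0 * vc);

    // Force-limited time, eq. (5.67)
    double rad = std::sqrt(c*c*c*c * dp*dp + 16.0 * m*m * Fc*Fc);
    double tfForce = std::sqrt(pi*pi * (c*c * dp*dp + dp * rad)
                               / (8.0 * Fc*Fc));

    // The larger of the two respects both constraints.
    return std::max(tfVel, tfForce);
}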
Given that, why would we apply a controller to the system, in particular a computationally intensive
predictive controller? Besides disturbance rejection and robustness to modeling errors, imagine that
there is a nonlinear actuator such that
F = g(u)    (5.68)
Imagine now that we only have control of u. By using a predictive controller with the motion plan, we can convert F(t) into u(t) such that the mass moves the way that we want and obeys all the constraints. Essentially we are using predictive control to invert the nonlinear actuator dynamics, which raises a critical question: since we are inverting, how does this method handle nonminimum phase behavior? Let us add to the point mass dynamics:
F(t) - k \dot{F}(t) = m \ddot{p} + c \dot{p}    (5.69)
Assuming the same motion plan, let us look at what we get:
F(t) - k \dot{F}(t) = \sqrt{A^2 + B^2} \sin(\pi t/t_f + \psi)    (5.70)
Where \psi is a phase that we can compute later if we need it, and \sqrt{A^2 + B^2} is the magnitude of the force discussed above. Mostly we are concerned with magnitude, so we will leave the phase alone for now. Basically what we have here is a first order differential equation in F. If F(t) is taken to be a cosine function, F(t) = F_c \cos(\pi t/t_f + \psi), then we can find the minimum time. Now we can plug F(t) in:
F_c \cos(\pi t/t_f + \psi) + k F_c \frac{\pi}{t_f} \sin(\pi t/t_f + \psi) = \sqrt{A^2 + B^2} \sin(\pi t/t_f + \psi)    (5.71)
So, looking at magnitudes only, we can equate:

\sqrt{F_c^2 + k^2 F_c^2 (\pi/t_f)^2} = \sqrt{A^2 + B^2}    (5.72)

Squaring both sides:

F_c^2 + F_c^2 k^2 (\pi/t_f)^2 = A^2 + B^2    (5.73)

Using the definitions (5.59) and (5.60), we get:

F_c^2 + F_c^2 k^2 \frac{\pi^2}{t_f^2} = m^2 \frac{(p_1 - p_0)^2}{4} \left(\frac{\pi}{t_f}\right)^4 + c^2 \frac{(p_1 - p_0)^2}{4} \left(\frac{\pi}{t_f}\right)^2    (5.74)
It is possible, though tedious, to take this derivation all the way to its conclusion. There
is, however, a good conceptual way to understand how this constrained trajectory extrapolation
would handle nonminimum phase systems. The problem with any type of inversion used on an NMP
(nonminimum phase) system is that to drive the system beyond a certain bandwidth the control
input starts to grow without bound, resulting in violation of the modeling assumptions. Instability
often follows, depending on the system. However, if the inversion is constrained, the input growth is
stopped. The key here is the blending of the trajectory extrapolator and the inversion into a single
algorithm, allowing the desired trajectory to be generated with the input constraints in mind. As
long as the input constraints are observed, the major problems of NMP systems can be avoided.
5.4.3 Rate Command
The first control system developed for an aircraft was a simple rate command system. The stick
controls the body rates, sideslip and airspeed, but none of the orientation loops are closed. This
means that modeling error tends to produce drift in the orientations, and even aside from modeling
error, the pitch tends to drift as well. This is all solved by closing the orientation loops, which is
surprisingly difficult and will be discussed later. The key challenge is the state trajectory extrapolation. In this case, it is surprisingly easy. The stick inputs are scaled and shifted as needed to get
the desired body rates, sideslip angle and airspeed. These are then strung together into a constant
trajectory that the cost function can then work with and weight against the predicted trajectory.
No incorporation of the current state is required, making the state trajectory extrapolation rather
trivial, since the orientations are not predicted. The output vector is:
Y = [p, q, \beta, V_t]^T    (5.75)
These are the only states that are predicted and weighted. Because predictive control will allow
nonzero error when a nonzero control input is required to drive the error to zero, drift, especially in
the pitch channel, is inevitable. Another difficulty is that trying to compensate for the drift with
a stick input is more difficult than might be expected, since the stick commands a pitch rate. As
a result, setting a stick input to precisely balance the drift can be rather difficult, both for a pilot
and an autopilot. The drift can also be minimized by weighting the pitch rate more heavily versus
the elevator input, though the drift cannot be eliminated entirely in this way. This highlights an
advantage to state trajectory extrapolation- normally, changing the weighting of a state will tend
to change its response speed. In this case, it only forces the system to follow the extrapolated state
more closely, and if the extrapolated state has the response that we are looking for, then following
it more closely can only improve the quality of the response. However, heavy weighting will still
tend to cause the controller to overreact to disturbances, so there is a limit on how much drift can
be eliminated in this way. A better way is to close the pitch and roll loops.
5.4.4 Rate Command with Pitch and Roll Closure
Closing the pitch and roll loops will greatly reduce the tendency of the pitch channel to drift,
substituting instead a small orientation steady state error, which is far more manageable from a
control and handling quality standpoint. The state trajectory extrapolation for this controller is
a multistep process, and took a great deal of trial and error to arrive at. To start, the desired
body rates are inferred again by scaling and translating the stick input. The total velocity and
sideslip make appearances again, but now the desired body rates must be used to generate a desired
orientation trajectory. The Euler rate to body rate equations 5.6, 5.7, 5.8 are used to generate this
Euler angle trajectory.
This is a consistent, feasible trajectory that respects body rate constraints. For its initial
condition, some problems arise. If we use the current orientation, the steady state error that MPC
allows becomes random walk, resulting in drift. Since that is what this controller is intended to
eliminate, a different initial condition must be found. An excellent choice is to have a persistent
desired orientation that is updated solely by the desired body rates. In C this is easy to do with
a static variable. However, this method is often unstable, as the desired trajectory always forces
the system to first make up any steady state error, and then track the desired trajectory. The
initial step of steady state error is often enough to drive the system unstable, or almost completely
uncontrollable. The solution is to add another step. A low pass filter is introduced, using the
current orientation as an initial condition, and the desired trajectory as an input series. Hence
a new trajectory is generated: one that starts at the current orientation, but also converges to
a persistent desired orientation that does not drift due to tracking error. This works well, but
introduces another problem. Now the body rates are no longer consistent with the orientation
series, since what was generated before has now been blended with the current orientation. Now
we must numerically differentiate the desired trajectory and use the inverse of the equations above
to get a set of body rates that will result in the orientation trajectory that was generated. There
is a small caveat- the r body rate and yaw angle cannot be weighted or predicted. Doing so results
in instability, as any trajectory that attempts to predict these two states in particular becomes
infeasible and unstable. This may be the author's own problem, but both states can be eliminated
from the output vector with no loss of controllability or stability. The output vector used is:
Y = [\phi, \theta, p, q, \beta, V_t]^T    (5.76)
The three step process is:
1. Generate desired orientation trajectory, with persistent desired state as initial condition, and
desired body rates
2. Blend above with current state using low pass filter
3. Regenerate consistent body rates to match filtered orientation trajectory
The resulting orientation trajectory is dynamically consistent, starts at the current orientation, and
always converges to a desired orientation that does not drift due to steady state error. The controller
then achieves drift-free, smooth, very predictable performance with good handling qualities.
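For the roll axis alone, the three step process can be sketched as follows; the filter form and all names are illustrative, and the full implementation applies the same pattern to all the Euler angles through equations (5.6)-(5.8):

#include <vector>

// Sketch of the three step extrapolation for a single axis (roll).
// phiNow: current roll angle; pDes: desired roll rate from the stick.
struct ExtrapOut {
    std::vector<double> phi;  // desired roll trajectory
    std::vector<double> p;    // consistent desired roll rates
};

ExtrapOut extrapolateRoll(double phiNow, double pDes, int N,
                          double dt, double tau /* filter constant */)
{
    static double phiPersist = 0.0;  // persistent desired orientation

    ExtrapOut out;
    out.phi.resize(N);
    out.p.resize(N);

    // Step 1: integrate the desired rate from the persistent state.
    // Step 2: low pass blend it toward the current orientation.
    double raw   = phiPersist;
    double filt  = phiNow;           // filter initial condition
    double alpha = dt / (tau + dt);
    for (int k = 0; k < N; ++k) {
        raw  += pDes * dt;
        filt += alpha * (raw - filt);
        out.phi[k] = filt;
    }
    // The persistent state advances only with the desired rate, so
    // tracking error cannot make it drift.
    phiPersist += pDes * dt;

    // Step 3: numerically differentiate the filtered trajectory to
    // recover a consistent set of rates (trivial for roll alone; the
    // real code maps back through the Euler rate equations).
    out.p[0] = (out.phi[0] - phiNow) / dt;
    for (int k = 1; k < N; ++k)
        out.p[k] = (out.phi[k] - out.phi[k - 1]) / dt;
    return out;
}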
5.4.5 Pitch and Roll Command
An easier to fly (and easier to implement) controller is to use the stick to command pitch angle
and roll angle. Sideslip and velocity are weighted the same as previous controllers, but now the
desired orientation series is simply constant. The rest of the algorithm is identical to the rate
command version- low pass blend the desired orientation with the current orientation, generate a
set of consistent body rates, run predictive controller.
5.5 Objective Function
A model predictive controller was chosen as the primary control algorithm for the flight simulations.
The core of a predictive controller is the cost function, which determines how the optimization
balances the opposing goals of tracking and minimum control effort. It can also have a great deal
to do with the way that the aircraft will handle. Typically, the cost function is a quadratic in the
state errors and control inputs, plus a terminal state error weighting term. For this work, a similar
system will be used, plus some additional terms that will be discussed in the next section.
Predictive control requires that the cost function be minimized, so an optimization system is
required. For this work, a commercial optimization package is used to optimize the control sequence.
The package is written by Vanderplaats R&D (www.vrand.com) and is called Design Optimization
Tools (DOT). It is a highly robust package that computes gradients automatically, which saves
a great deal of programming. All that it requires is a properly designed loop, the value of the
objective function, and the address of the first entry of the design variable vector. Given these, it
can control the loop, which recomputes the objective function using the design variables that DOT
has access to, and solve the optimization problem. Overall, the system is remarkably easy to work
with.
The cost function is implemented with pointers to functions, which allows the behavior of the
system to be easily modified in real time. There is a J1 function, which takes the predicted state,
desired state and input for each time step up to the horizon and returns the weight for that time
step. A Jt function does the same for the final time step (terminal state weighting). The CL (closed
loop) function does the state trajectory extrapolation, given a stick position and current state. It
is also possible to overload the entire cost function, but that option was not needed for this work.
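As an illustration, a per-timestep weighting function of the kind passed in through these pointers might look like the following sketch (types and names are illustrative, not the actual flight code):

#include <vector>

typedef std::vector<double> Vec;

// Per-step cost: quadratic state error plus quadratic input penalty,
// with diagonal weights Wx and Wu.
double J1(const Vec& x, const Vec& xDes, const Vec& u,
          const Vec& Wx, const Vec& Wu)
{
    double J = 0.0;
    for (size_t i = 0; i < x.size(); ++i) {
        double e = x[i] - xDes[i];
        J += Wx[i] * e * e;       // tracking error term
    }
    for (size_t i = 0; i < u.size(); ++i)
        J += Wu[i] * u[i] * u[i]; // control effort term
    return J;
}

The controller holds a pointer with this signature and calls it once per predicted time step, summing the results; the Jt pointer is called once more for the terminal state.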
5.6 Constraints
For the most part, state constraints are built into the state trajectory extrapolation. For example,
the pitch and roll command system cannot command more than a specific roll angle, and the
dynamic consistency of the desired trajectory ensures that overshoot is highly unlikely. In addition,
the optimization package builds in design variable constraints, so the control inputs are constrained
rather firmly as well. However, there are occasions where constraining unextrapolated states is
desirable. There are two major ways of doing this- constrained optimization and U/V functions.
5.6.1 Constrained Optimization
Constrained optimization uses Lagrange multipliers to constrain the states. This can be difficult,
because the states that we might want to constrain are related dynamically to the design variables.
Simulations with this strategy were difficult to stabilize, and often did not detect constraint violation. The author suspects that this is related to the integral relationship between the design variables (control inputs) and the constrained outputs (dynamic states). Software bugs are also suspected, but
since constraints are mostly implemented in the state trajectory extrapolation, fixing this was not
a high priority. In any case, another method offers better results.
5.6.2 U/V Functions
This technique uses additional nonlinear weighting of the states to constrain them. A common
implementation [17] uses a U shaped function (flat on the bottom, but steeply increasing to infinity
near the edge) to create a situation where the state can vary freely within a certain range (the
flat bottom) but as soon as the state begins to move out of the safe envelope, the weight rapidly
increases, forcing the controller to drive the state back into the flat range. This tends to constrain
the state to the flat bottom of the function.
Figure 5.3: U constraint function. Axes are nondimensional.
This has the advantage of giving the controller a slope to follow to get the state back where
it needs to be, and not being a full hard constraint. The state is not forbidden from leaving the
safe range, it is simply very expensive. One problem is the asymptotic nature of the function. If somehow the state does leave the range (due to a disturbance, or in the numerical gradient calculations, for example), the weighting will now force the state to stay out of the safe range, with the same force that it was using to keep the state in. This will likely result in immediate failure (a roll rate or angle of attack out of the safe range, and held there, will result in almost certain catastrophic failure). While unlikely, this is not a situation to be taken lightly. A possible solution to this is to make
the function defined, and sloping into the desired region, everywhere. This ensures that no matter
what the state value is, even if it is only a hypothetical state in a gradient calculation, the function
always slopes into the safe region. This allows recovery from any constraint violation. There are
several good ways of doing this. The first is a procedural method in which, when the constraint is nearly violated, a linear function of the state is returned, with a high slope to drive the state back into the safe region. This is simple and computationally efficient, but has a minor problem- if the constraint is ever violated, the optimization will try to drive a state with a purely linear cost to its lowest value, which does not work. On the plus side, the steepness of the linear function is easily
tunable, resulting in a variable hardness for the constraint. A more complicated, but potentially
better solution is to use a quadratic, with roots set at each limit (or slightly inside the constraint
value) and a variable multiplier:
V(x) = \begin{cases} K (x - a)(x - b) & x < a \text{ or } x > b \\ 0 & a \le x \le b \end{cases}    (5.77)
With this function, the bottom is flat, deactivating the constraint when the variable is in the safe range, and any optimization done when the state is outside the desired range will tend to drive the state back to the middle of the range, rather than to \pm\infty. And K still allows the hardness of the constraint to be tuned. All in all, this is the safer and more versatile choice. A nondimensional plot is shown below; the left and right corners are, of course, adjustable to set the valid range for the variable.
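A minimal sketch of this penalty in code (names illustrative):

double vPenalty(double x, double a, double b, double K)
{
    // Flat (zero cost) inside the safe range [a, b].
    if (x >= a && x <= b)
        return 0.0;
    // Quadratic growth outside the range, per equation (5.77); the
    // gradient always points back toward [a, b]. K tunes hardness.
    return K * (x - a) * (x - b);
}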
Figure 5.4: V constraint function. Axes are nondimensional.
Chapter 6
Hardware and Software Feasibility
Study
A feasibility study was conducted to test the possibility of flight testing these algorithms. This
chapter covers two major topics: First, the hardware that was developed for flight testing will be
detailed. Second, the software will be discussed. The algorithms are covered in other sections, so
for this section the software will be covered from a software engineering perspective- interfaces and
class hierarchy rather than vectors and filters. Some of the software detailed here is part of the
simulation work, but is discussed here to place it in the context of flight testing, as the software
was written specifically to support flight testing in the future through careful interface design
and an architecture specifically designed to facilitate code reuse and implementation on embedded
hardware. Because of the large software component of this work, it is important to detail the large
quantity of code that was written.
6.1 Hardware Development
6.1.1 Gumstix Control Computer
The onboard computer is a Gumstix Overo Air single board computer. This board is based on the
Texas Instruments OMAP series system-on-a-chip, and includes an ARM Cortex-A8 CPU, 256MB
flash and 256MB RAM on a single chip. The Air version adds a Bluetooth/802.11(g) wireless
communication module. The chip also includes limited floating point support, which is helpful. All
of these components fit onto a PCB somewhat smaller than a stick of gum. The silver card slot on
the left side of Figure 6.1 is for a MicroSD card, which should give a sense of just how tiny these
boards are.
Figure 6.1: Gumstix Overo Air
Since the base board provides no solderable interconnects, an expansion board is required to
interface to other components. The gumstix Summit board provided (at the time) the best, smallest
option to break out the necessary lines from the main board. The Summit includes a large number
of ports that are not used, including LCD signals, a DVI port, and audio in and out. At the time
that the board was purchased, it was the smallest option to gain access to the Overo's serial ports
and supply power to the computer, which is all that is actually needed for this application. Gumstix
has since added a smaller board to its inventory, but too late to be useful. In any case, the Summit
is small enough to be functional and fits into the fuselage of the aircraft, even if it does so rather
snugly.
The built in WiFi is used as the communications hardware for the aircraft, replacing the old
900MHz, 56Kbps modem. It offers higher data rates, but challenges with unpredictable transport
delays and difficulties operating in ad-hoc mode introduced delays for flight testing. Switching
from TCP to UDP eliminates some of the overhead and would likely stabilize some of the delay
issues, but there will still be enough delay that running a control algorithm on the ground station is
very likely to be nonfunctional, which leads to the next difficulty. MPC is a very computationally
intensive control method. The 2.4 GHz workstation on which this code was developed is capable of running the code in realtime, but not by a large margin, and only when the control and prediction horizons are kept relatively small (3 or 4 time steps). Fortunately, horizons of 3 or 4 timesteps end up performing the best anyway. Regardless, these algorithms are far too demanding to run on a 600 MHz single board computer with limited hardware floating point support.
6.1.2 Phoenix AX IMU
The IMU used for the aircraft is the Phoenix AX autopilot, manufactured by O-Navi. The system was
actually designed as a full autopilot, complete with servo outputs, but the onboard processor is
so slow (32MHz) that any significant control algorithm will rapidly overtax the processor. Also,
writing code for the phoenix is very difficult, since everything must run directly on the processor,
with no operating system services whatsoever. Hence filesystem operations and serial I/O can be
very difficult to implement, especially since O-Navi provides almost no software support for their
product. Some startup assembly code and a very basic C file for simple output in ASCII is all that
is provided. No software tools (compiler, binary translation utilities) were provided. JTAG access
was included, as was a parallel/JTAG dongle and basic flash writer, but that is all. To keep costs
low, GCC was used to generate binary code for the Phoenix. GCC will work for this, but a custom
linker script had to be written to tell GCC how to do symbol relocation and memory management.
In the end, the code worked, but only for basic I/O tasks. Data could be read from the ADCs,
and the GPS packets read from the onboard GPS, and then sent out to the gumstix via the serial
port, but any further computations quickly ran into problems. GCC did not seem to be managing
the stack correctly, and so function return values never worked correctly. However, once the I/O
was working correctly the phoenix did most of what it needed to do. Another deficiency of the
Phoenix is that it is not fully regulated. Changes in input voltage drastically change the gyro and
accelerometer biases and scale factor, necessitating an external regulator for an autopilot with 3
onboard switching regulators.
6.1.3 Additional Hardware
To control the aircraft servos, a Pololu servo board was used. It uses a simple serial protocol to set
position for up to eight servos from a single board. Boards can be chained to use even more servos,
though only four are needed to fly a basic aircraft. The packet format is fairly simple, and a library
was written to allow an easy, single function interface that sets the entire control vector with a
single function call. This keeps the control software particularly simple and easy to read. The one
difficulty in using this hardware is the serial voltage level. It requires a 5V serial signal, but the
Gumstix uses 1.8V logic, which is not enough to register a logic 1 on the servo control board. In
consequence, a level shifting IC had to be integrated as part of an adapter board.
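For illustration, the single call interface might look like the following sketch; the framing shown is the Pololu "Mini SSC" style (0xFF, channel, target), which is assumed here and may differ from the protocol mode actually used:

#include <stdint.h>
#include <stddef.h>
#include <unistd.h>

// Sketch: set the entire control vector with one call by sending one
// packet per servo channel over an already-open serial descriptor.
int setControls(int fd, const uint8_t* targets, size_t nServos)
{
    for (size_t ch = 0; ch < nServos; ++ch) {
        uint8_t pkt[3] = { 0xFF, (uint8_t)ch, targets[ch] };
        if (write(fd, pkt, sizeof(pkt)) != (ssize_t)sizeof(pkt))
            return -1;  // serial write failed
    }
    return 0;
}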
This adapter board was designed and fabricated to do a variety of integration tasks. Since the
gumstix is 1.8V logic, and the pololu and phoenix both operate at 5V, some level shifting was
required. In addition, an emergency control system was required, in case the entire gumstix system
were to fail. All of the components have to be connected, and the battery power regulated down for
the pololu and gumstix. To this end, an integration board was built. It integrates a level shifting
IC, a power regulator, and a relay. Everything plugs into the board and has signals routed and
power supplied. The emergency control system uses an 8PDT relay, which controls whether the
RC receiver or the pololu servo board signals are routed to the servos. The relay coil is wired to
an RC controlled switch, ensuring that as long as the RC system is functional, the aircraft can be
controlled. It adds some bulk to the system, but a manageable amount.
6.2 Software Description
6.2.1 Simulator
For simulation work, a simulator is necessary. It might be asked why a simulator was written in
C++ rather than simply using Matlab or Simulink. First, Matlab software does not port easily. If
Mathworks has a target for the architecture in question, then it is possible to get binaries for that
system. But if not, it is not possible to get a C file that can be compiled by another compiler. Matlab
returns binaries only, and has no target for the gumstix system. Because the original intent was to
fly the software, it absolutely had to be portable to ARM/Linux. Secondly, Matlab and even more
so Simulink lack the flexibility to effectively implement an entire flight management system, and Matlab does not cleanly allow itself to be integrated into another project, mostly due to its extensive license management infrastructure. Developing a fast, portable, simple flight management package is not possible in Matlab, and developing just the control algorithms in Matlab and integrating them into the flight management package is not practical either, for the reasons mentioned above.
Hence the only option is to develop the entire package in C++. In the end, the major challenges
to implementing in C++ are syntactical, because C++ has a more detailed and verbose syntax
than Matlab. The vector and matrix operations are also important, but those were developed in
C++ for another project, and therefore were already available. Regarding syntax, the author is
sufficiently experienced that the syntax of C/C++ poses no challenge. In addition, MPC is a control
methodology that lends itself particularly well to a procedural implementation, and particularly
poorly to an input/output Simulink methodology. An aircraft simulator is also relatively easy
code, merely somewhat intimidating to the uninitiated. The core is the equations of motion, which
are implemented in two parts: the six degree of freedom rigid body equations of motion, and the
aerodynamic model. The rigid body equations are easy enough, the main challenges being getting
the equations right and keeping track of where each motion variable is in the state vector. The
aerodynamic model is simple once the regressors are chosen and the coefficients determined. Then
add a 4th order Runge-Kutta integration, and the core of the simulator is complete. All that is
left is generating desired control sequences and outputting the large number of relevant parameters
to a data file. A major service that Matlab supplies is plotting. For this case, plots are generated
using gnuplot scripts that draw their data from the aforementioned data file. Gnuplot creates very
good plots that can easily be formatted, labeled and embedded in documents. The actual raw data
is also stored, which is useful for debugging and archival purposes.
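The integration core is a generic fourth order Runge-Kutta step; a sketch (names illustrative):

#include <vector>

typedef std::vector<double> Vec;

// One RK4 step of xdot = f(x, u) with fixed step h. Deriv is any
// callable returning the state derivative for a state/input pair.
template <typename Deriv>
Vec rk4Step(const Vec& x, const Vec& u, double h, Deriv f)
{
    // Helper: r = a + s * b, element-wise.
    auto axpy = [](const Vec& a, const Vec& b, double s) {
        Vec r(a.size());
        for (size_t i = 0; i < a.size(); ++i)
            r[i] = a[i] + s * b[i];
        return r;
    };
    Vec k1 = f(x, u);
    Vec k2 = f(axpy(x, k1, h / 2.0), u);
    Vec k3 = f(axpy(x, k2, h / 2.0), u);
    Vec k4 = f(axpy(x, k3, h), u);
    Vec xn(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        xn[i] = x[i] + (h / 6.0) * (k1[i] + 2.0*k2[i] + 2.0*k3[i] + k4[i]);
    return xn;
}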
6.2.2 OpenGL Visualization
For realtime visualization, an OpenGL flight simulator was also written. It communicates with
the simulator or the flight management software over TCP/IP or UDP/IP, and interfaces with a
joystick. This allows it to be used as a simulator interface or a flight testing ground station, with no
porting or rewriting required. The OpenGL rendering is relatively simple, with a simple HUD and
a grid for the ground, and a blue sky box. The HUD displays all the important flight parameters-
pitch, roll, heading, angle of attack, sideslip, altitude, and airspeed. It allows easy testing of the
equations of motion, and allows the handling qualities of the control algorithm to be tested. There
is also a configuration menu, which allows the pilot to configure various aircraft parameters from the
simulator/ground station interface. OpenGL is also a cross platform library, which helps keep the
software portable. This type of interface would have been extremely difficult to develop in Matlab,
since the serial interface between the simulator and the flight management software requires bit
level manipulation, and Matlab simply does not offer the level of control required. But by allowing
the visualization to interface with the simulator directly or through the flight management package,
debugging becomes easier and the software simply more flexible and useful.
6.2.3 MPC Class
The MPC algorithm used in this work is implemented as a C++ class. This allows multiple
instances to be instantiated and encapsulates the data in a way that keeps application and library
code simple and easy to read and maintain. The MPC code also seems to lend itself surprisingly well
to a class based procedural implementation. Doing the same task with a signal based methodology
(like Simulink) would be extremely difficult. Even working with straight Matlab would be more
difficult than using C++ and a good vector library. It also makes reusing code and changing behavior dynamically much easier. When looking at the closed loop behavior of a system using MPC, several components are unique to any given control system, while others are common to any MPC
system. This class builds all the common functionality into the class, and breaks out the system
specific functionality into small, self contained pieces. For example, any MPC system must have an
integration algorithm (assuming it accepts continuous equations of motion), so that functionality is
written into the class. For similar reasons, so is the state propagation. The objective function, in
most cases, is a sum of quadratically weighted tracking errors and control inputs over the prediction
horizon, plus a terminal weighting term. The loops over the prediction horizon are common to most
MPC implementations, but the actual weighting is not. However, the weighting for each timestep
is typically identical, just with a different state and input for each step. Hence each term in the
objective function is a single function of the state, the desired state, and the control input. This
way, the function can be exposed to the user as a function pointer that the user can implement and
pass into the MPC class. This can be done dynamically, changing the behavior on the fly. This
form is followed throughout the code. Implement what is common, use a user supplied function
pointer to implement what is system specific or needs to be changed dynamically.
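The pattern can be sketched as follows (member and type names are illustrative, not the actual class):

#include <vector>

typedef std::vector<double> Vec;

// Sketch: common MPC machinery in the class, system-specific pieces
// supplied through function pointers.
class Mpc {
public:
    typedef Vec    (*DerivFn)(const Vec& x, const Vec& u);
    typedef double (*StepCostFn)(const Vec& x, const Vec& xDes,
                                 const Vec& u);

    DerivFn    deriv;     // equations of motion (user supplied)
    StepCostFn stepCost;  // per-timestep weighting (the J1 function)
    StepCostFn termCost;  // terminal weighting (the Jt function)

    // Common functionality: propagate the model over the horizon and
    // sum the user-supplied cost terms.
    double objective(const Vec& x0, const std::vector<Vec>& uSeq,
                     const std::vector<Vec>& xDes, double dt) const
    {
        Vec x = x0;
        double J = 0.0;
        const size_t N = uSeq.size();
        for (size_t k = 0; k < N; ++k) {
            x = integrate(x, uSeq[k], dt);
            J += (k + 1 < N) ? stepCost(x, xDes[k], uSeq[k])
                             : termCost(x, xDes[k], uSeq[k]);
        }
        return J;
    }

private:
    Vec integrate(const Vec& x, const Vec& u, double dt) const {
        Vec xd = deriv(x, u);              // user supplied dynamics
        Vec xn(x.size());
        for (size_t i = 0; i < x.size(); ++i)
            xn[i] = x[i] + dt * xd[i];     // Euler here; RK4 in practice
        return xn;
    }
};

The state trajectory extrapolation and rigid body/aerodynamic model pointers follow the same pattern.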
Perhaps the most critical functionality, in terms of system behavior, is the state trajectory
extrapolation. This function must take the stick input and generate a state or output trajectory
for the system to follow. The way that it does this determines the way that the system will handle
and how stable it will be. The actual implementation of such a system is highly procedural, and
includes several loops and other logic. This function also implements the constraints on the desired
trajectory, and therefore is something that might need to be changed in flight in response to damage.
Hence it is best implemented by the class as a function pointer supplied by the user.
The rigid body equations and aerodynamic model are also broken out, of course. It might seem
easier and more compact to pass in the moment of inertia and mass, but since a simulator already
has the rigid body equations, it is easier to simply pass the entire function in. Besides, the rigid
body equations are relatively simple anyway.
The code for the MPC algorithm, when divided in this fashion, is very compact- only a few
hundred lines. A single controller's worth of user defined functionality adds less than a hundred
more. The entire controller is easily divided into two easy to navigate source files. The size and
simplicity of the code is a testament to how amenable to object oriented implementation MPC
really is. Trying to implement such a controller in Simulink would have been far more difficult and
counter-intuitive.
6.2.4 Flight Management Software
Control of an actual aircraft turns out to involve far more serial interfacing and data shuffling than traditional controls topics would suggest. Actually commanding a control move takes a great deal of support
software, and the flight management software provides this. It handles all of the I/O tasks for
the control system, reading and formatting data from the IMU, and formatting and writing data
to command control moves, as well as coordinating all communication with the ground station.
The latest version is multithreaded, with two threads. One handles reading IMU data, running
the Kalman filter, running the control algorithm, and commanding the control move. The other handles all communication with the ground station. This keeps the system simple, and avoids many problems with communication loss. It is absolutely critical that there be no blocking calls within
I/O can be unreliable. By going multithreaded, blocking calls can be used without fear, so long
as blocked I/O or dropped connections can be detected and handled by the control system. Doing
this is relatively easy with the right system calls- namely, poll() and select(). This is another area
where Matlab simply does not offer the low-level control required by the application.
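For example, a read with a timeout can be implemented along these lines (a generic POSIX sketch, not the actual flight code):

#include <poll.h>
#include <unistd.h>

// Read up to n bytes from fd, but give up after timeoutMs so a lost
// connection can never stall the control loop indefinitely.
ssize_t readWithTimeout(int fd, void* buf, size_t n, int timeoutMs)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;

    int r = poll(&pfd, 1, timeoutMs);
    if (r <= 0)
        return -1;            // timeout or error: report, don't block
    return read(fd, buf, n);  // data is ready; this will not block
}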
6.2.5 Utilities
Vector & Matrix Library A rich and useful vector library was written to do the large amount
of linear algebra and vector/matrix operations required by the algorithms. It is a class based
wrapper around the Gnu Scientific Library vector, matrix and linear algebra routines. The library includes a large number of linear algebra routines, including vector/matrix multiplication,
addition/subtraction, scaling, matrix inverse, matrix square root (Cholesky factorization, useful for
Unscented Kalman filtering), matrix transpose, column access, matrix/vector quilting (lay a matrix over the top of another matrix), element wise trigonometric functions, as well as copy constructor and assignment operator overloads to ensure deep copies and avoid memory allocation problems. A
test suite was also written to test most of the functionality, which is very useful when expanding
functionality.
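The wrapper pattern can be sketched as follows (a minimal fragment; the actual class is much richer):

#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>

// Minimal RAII wrapper around gsl_vector with deep-copy semantics.
class Vector {
public:
    explicit Vector(size_t n) : v_(gsl_vector_calloc(n)) {}
    Vector(const Vector& o) : v_(gsl_vector_alloc(o.v_->size)) {
        gsl_vector_memcpy(v_, o.v_);           // deep copy
    }
    Vector& operator=(const Vector& o) {       // assumes equal sizes
        if (this != &o)
            gsl_vector_memcpy(v_, o.v_);
        return *this;
    }
    ~Vector() { gsl_vector_free(v_); }         // no leaks

    double get(size_t i) const { return gsl_vector_get(v_, i); }
    void   set(size_t i, double x) { gsl_vector_set(v_, i, x); }
    double dot(const Vector& o) const {
        double r = 0.0;
        gsl_blas_ddot(v_, o.v_, &r);           // BLAS level 1 dot
        return r;
    }
private:
    gsl_vector* v_;
};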
Optimization Class For optimization duties, a software package called DOT (Design Optimization Tools) was used. It is shipped as a binary, and its interface is a function that controls a loop variable, allowing DOT to drive execution of the objective function without actually having to call the objective function directly, which avoids many possible runtime errors.
The difficulty is that the user must allocate all the working memory for the function. This avoids
interlingual incompatibilities, since the function is written in FORTRAN. Since the user has to
do the memory management, running more than one instance of the optimization can be difficult.
To avoid this problem, a wrapper class was written to encapsulate the data and handle all of the
memory management. This allows an arbitrary number of instances of the class to be instantiated and used at the same time. Because all the memory used by the DOT function is external, it is also completely reentrant and thread safe, which is also useful. The optimization class itself is an abstract base class, with the objective function as a pure virtual function. To use the optimizer
in another class, it is only necessary to inherit from the base class and define the objective function.
To make the system easier to use, a less general but easier to use version was developed. It inherits
from the base class and defines an objective function that calls a user supplied function pointer with
the design variables and a user defined void pointer, to pass in an arbitrary parameter. The result
is a very simple, easy to use optimization system built around a very reliable, fast optimization
algorithm.
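The class hierarchy can be sketched like this (illustrative; the call into the DOT binary itself is elided):

#include <vector>

// Abstract base: derived classes define the objective.
class Optimizer {
public:
    virtual ~Optimizer() {}
    virtual double objective(const std::vector<double>& x) = 0;

    void run(std::vector<double>& x) {
        // In the real class this loop is driven by DOT, which updates
        // x and a status flag on each pass; only the shape is shown.
        // while (status != done) { double J = objective(x); /* DOT */ }
        (void)x;
    }
};

// Convenience version: routes the objective to a user supplied
// function pointer, with a void* for an arbitrary parameter.
class FnOptimizer : public Optimizer {
public:
    typedef double (*ObjFn)(const std::vector<double>& x, void* user);
    FnOptimizer(ObjFn f, void* user) : f_(f), user_(user) {}
    double objective(const std::vector<double>& x) { return f_(x, user_); }
private:
    ObjFn f_;
    void* user_;
};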
Chapter 7
3D Trajectory Development
As part of the flight testing feasibility study, a flight trajectory representation was developed. It is
designed as a general purpose trajectory representation, with maximum generality and flexibility.
The trajectory is represented by a series of waypoints connected by straight segments and constant
radius turns. A path is fully represented by the start, intermediate (waypoints) and end points
plus start and end orientations, but this representation is not convenient for application to an
autopilot. An autopilot generally requires location and heading deviations, and this representation
of the path does not readily form these quantities. For this reason, an intermediate form of the
path was derived. Since the path is a series of alternating straight segments and turns, the location
and heading of each path transition is derived whenever the independent path parameters are
changed. With the locations and headings of all the segment transitions, the information necessary
to feed back into the autopilot is more easily derived. The novel aspect of this formulation is
the transformation from the waypoint representation to the intermediate form. It is actually done
without using any trigonometric functions. The method is entirely vector based, which means that
the intermediate form can be derived using linear vector operations (cross/dot product, matrix
addition/multiplication/inversion). One transformation in 3D must be solved numerically, but the
hope is that an analytical solution will be found at some point. An analytical solution exists for
the 2D case. A more mathematical treatment is given below.
7.1 Waypoint to Waypoint Transition
There are two types of transitions. The first and simplest is from waypoint to waypoint. In this case,
the starting point, starting orientation and ending point are specified and the ending orientation
and turn-to-straight transition location need to be determined. See the schematic below.

Figure 7.1: Waypoint Transition Schematic
It is notable that there are no angles on this diagram, only vectors, because the algorithm is vector based, not angle based. V_0, X_0, and X_1 are known, and X_a, V_1, L, and N must be derived. The entire maneuver lies in the plane defined by the vectors V_0 and X_1 - X_0. N represents the vector normal to this plane and must be derived first:

N = \frac{V_0 \times (X_1 - X_0)}{\|V_0 \times (X_1 - X_0)\|}    (7.1)

All the values on the RHS of this equation are known, so the calculation is straightforward. The only pitfall is when there is no turn, but this situation can easily be detected heuristically. When V_0 and (X_1 - X_0) are parallel, N does not exist. However, it is also irrelevant, so it can simply be set arbitrarily. The next step is to compute L, which takes a few steps. First, we must form a vector that points from X_0 toward the center of the turn. This can be done by crossing V_0 into N:

U = V_0 \times N    (7.2)

Since these vectors are perpendicular unit vectors, no normalization is needed. Next, the center of the turn can be computed:

C = X_0 + R \, U    (7.3)

From geometry, we know that the distance from C to X_1 is:

d = \|X_1 - C\|    (7.4)

From the Pythagorean theorem (C, X_a, and X_1 form a right triangle):

L = \sqrt{d^2 - R^2}    (7.5)

Now we must define a vector pointing from X_a toward the turn center:

W = V_1 \times N    (7.6)

Next, a vector traverse of the path:

X_0 + R \, U - R \, W + L \, V_1 = X_1    (7.7)

Substituting eq. (7.6):

-R \, (V_1 \times N) + L \, V_1 = X_1 - X_0 - R \, U    (7.8)

Since the cross product can be expressed as a matrix multiplication, we can convert the vector N into the skew symmetric matrix N_m:

[L \, I + R \, N_m] \, V_1 = X_1 - X_0 - R \, U    (7.9)

Solving for V_1:

V_1 = [L \, I + R \, N_m]^{-1} \, [X_1 - X_0 - R \, U]    (7.10)

X_a can be easily derived from here with:

X_a = C - R \, W    (7.11)

X_a = X_1 - L \, V_1    (7.12)

Comparing the results of these two computations can serve as a check. Eq. (7.10) can be used to
build the intermediate representation up to the final waypoint.
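A sketch of this computation in code follows; a small 3-vector type is assumed, and the 3x3 solve is written out with Cramer's rule (all names illustrative):

#include <cmath>

struct V3 { double x, y, z; };
static V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 scale(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static V3 cross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double norm(V3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Waypoint-to-waypoint solve, eqs. (7.1)-(7.12). Returns Xa and
// writes V1. The no-turn (parallel) case is omitted for brevity.
V3 wypt(V3 X0, V3 V0, V3 X1, double R, V3* V1out)
{
    V3 d10 = sub(X1, X0);
    V3 Nv  = cross(V0, d10);
    V3 N   = scale(Nv, 1.0 / norm(Nv));         // eq (7.1)
    V3 U   = cross(V0, N);                      // eq (7.2)
    V3 C   = add(X0, scale(U, R));              // eq (7.3)
    double d = norm(sub(X1, C));                // eq (7.4)
    double L = std::sqrt(d*d - R*R);            // eq (7.5)

    // M = L*I + R*Nm, with Nm the skew symmetric matrix of N.
    double M[3][3] = {
        { L,      -R*N.z,   R*N.y },
        { R*N.z,   L,      -R*N.x },
        {-R*N.y,   R*N.x,   L     } };
    V3 b = sub(sub(X1, X0), scale(U, R));       // RHS of eq (7.9)

    // Solve M * V1 = b by Cramer's rule.
    double det =
        M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
      - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
      + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]);
    double rhs[3] = {b.x, b.y, b.z};
    double v1[3];
    for (int col = 0; col < 3; ++col) {
        double Mc[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Mc[i][j] = (j == col) ? rhs[i] : M[i][j];
        v1[col] = ( Mc[0][0]*(Mc[1][1]*Mc[2][2] - Mc[1][2]*Mc[2][1])
                  - Mc[0][1]*(Mc[1][0]*Mc[2][2] - Mc[1][2]*Mc[2][0])
                  + Mc[0][2]*(Mc[1][0]*Mc[2][1] - Mc[1][1]*Mc[2][0]) ) / det;
    }
    *V1out = {v1[0], v1[1], v1[2]};
    return sub(X1, scale(*V1out, L));           // eq (7.12)
}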
7.2 Final Leg Solution
For the final leg, the problem is slightly different: there is an additional constraint, the ending orientation. Because of this, another degree of freedom is needed. See the diagram below:
Figure 7.2: Flattened Final Leg Diagram
By adding a second turn, the path gains the flexibility necessary to match an ending orientation. This diagram is by necessity in 2D, but physically the problem can be 3D. There are two planes represented in the diagram: one defined by X_a, X_b, X_0 and another defined by X_a, X_b, X_1. These have been flattened by folding the second plane around the line X_a X_b so that the schematic looks flat. The 3D problem does not yet have an analytical solution, though work is ongoing in this area. The governing equation for the 3D case is:

X_0 + R \, U_0 - R \, W_0 + V_{01} + R \, W_1 - R \, U_1 = X_1    (7.13)

This equation is very similar to the waypoint case, but with extra terms. The vectors U_0, W_0, U_1, W_1 are computed similarly to the waypoint case, and for the numerical work V_{01} is not a unit vector.
7.2.1 Numerical Solution of Vectorial Governing Equation
The difficulty in solving the 3D vector equation arises in computing N_0 and N_1. For the 3D case, these are not readily computed. However, for the 2D case, they are easily derived. Since the direction of the turn is always the same relative to the normal vector, and for the 2D case there is only one plane, the family of Dubins curves for the problem geometry is defined by whether the normal vectors are into or out of the page. One can easily see that there are four ways of arranging the vectors with the constraint that they must be normal to the plane: [out,out; out,in; in,out; in,in], which corresponds to the turn set [R,R; R,L; L,R; L,L]. Finding the optimal path is then a matter of computing all four combinations and choosing the shortest. This is not the most efficient approach, but it does find the solution, and work is ongoing to remove this requirement. The 3D solution
is found with a numerical strategy. The above vector equation can be rearranged to equal zero
when the solution is found, and hence the residual dotted with itself gives a least squares objective
function that can be driven to zero. The design variables in this case are the components of V01
and the above equation can be manipulated to be a function of only these components, giving a
simple 3 variable optimization problem. The guess value for this optimization problem is computed
by projecting the 3D problem onto the plane defined by Vo and Xi -X0. The 2D problem can then
be solved as described below, and the components of Vnused to solve the optimization problem.
The problem is solved by first projecting the 3D problem onto the plane defined byVó and
Xi — Xo, forming two important vectors: N and V1'. These are the plane normal vector and the
normalized projection of Vi onto the plane respectively. As before, L is derived geometrically:
L = VZ(Ci - C0) · (Ci - Co) - (R* N1 - R* N0) · (R* Ni - R* N0)
(7.14)
The second set of terms takes care of the geometric differences between the cases: if the turns are in the same direction (LL or RR), L is just the distance from turn center to turn center. Otherwise, the Pythagorean theorem is used to find the diagonal. Simply note that the second term is either 0 or (2R)^2, depending on whether N_0 and N_1 point in the same direction or in opposite directions. Next, since N_0 and N_1 are known for all four curves in the Dubins family, we can derive the four solutions based on:
V_{01} = [L \, I + R \, N_0 - R \, N_1]^{-1} \, [X_1 - X_0 - R \, U_0 + R \, U_1]    (7.15)

where N_0 and N_1 are promoted to their skew symmetric matrix forms, as in eq. (7.9).
For the set [N_0, N_1] = [N, N; N, -N; -N, N; -N, -N], where N is the plane normal vector discussed above. From this family of solutions the shortest (optimal) path can be chosen. This guess value is then plugged into the residual equation:
r_e = X_0 - X_1 + R \, U_0 - R \, W_0 + L \, V_{01} + R \, W_1 - R \, U_1    (7.16)
And the objective function

J = r_e \cdot r_e    (7.17)

is optimized for the quantity L \, V_{01}'. Note that since V_{01}' is a unit vector, only two of its components are independent; adding L brings the number of unknowns up to three.
For transport category aircraft, the planar guess will be very close to the real solution and the
optimization will converge quickly. Until an analytical solution is found, this method is functional.
7.2.2 Successive Approximation Solution
The final leg also has another numerical solution that does not require a guess value. It can be
described as solving the waypoint to waypoint problem multiple times, with each successive solution
approximating the actual solution more closely. Now consider a function that, given X_0, V_0, and X_1, would return X_a. In pseudocode: Xa = wypt(X0, V0, X1). This is the waypoint problem above. Note the points X_a and X_b in the figure above. Now consider the following pseudocode applied to the final leg problem:

Xa = wypt(X0, V0, X1)
loop until converged:
    Xb = wypt(X1, -V1, Xa)
    Xa = wypt(X0, V0, Xb)
end
These five lines will find the correct value for Xa and Xb in only a few iterations. The overall
code is an order of magnitude or more faster than the algorithm described above, besides being
dramatically simpler. This code has now been extensively tested, and is robust to colinearity and
other unusual situations. The only situation that the code is unable to handle is when the start
and end point are too close together for this type of path.
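Using the wypt sketch from Section 7.1, the loop can be written directly (the convergence tolerance and iteration cap are illustrative choices):

// Successive approximation for the final leg. V1 is the required
// final orientation (unit vector); Xa and Xb are the turn-to-straight
// transition points of the two turns.
void finalLeg(V3 X0, V3 V0, V3 X1, V3 V1, double R, V3* Xa, V3* Xb)
{
    V3 v1tmp;
    *Xa = wypt(X0, V0, X1, R, &v1tmp);
    for (int it = 0; it < 20; ++it) {            // iteration cap
        *Xb = wypt(X1, scale(V1, -1.0), *Xa, R, &v1tmp);
        V3 prev = *Xa;
        *Xa = wypt(X0, V0, *Xb, R, &v1tmp);
        if (norm(sub(*Xa, prev)) < 1e-9)         // converged
            break;
    }
}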
Chapter 8
Simulation Results
8.1 Gyroscope
A gyroscope was simulated to verify the accuracy of the Euler equation derivation. The gyroscope
was chosen to verify the equations for two main reasons. First, it is a full 3D system. It tests the
entire set of equations, not just a simplified, 2D set of equations. Second, an analytical solution for
the motion of a gyroscope exists, which allows for easy verification. The major difficulty is that the
coordinate systems are slightly different. The analytical gyroscope solution assumes an asymmetric
body and a non-body-fixed coordinate system. The aircraft equations assume an asymmetric body
and a body fixed coordinate system. Hence a coordinate transformation will be necessary to compare
the results for the two models. To aid in this process, a 3D visualization program has been written
to animate the motion of the gyros. This allows easy evaluation of the new EOM. If the gyros stay
in sync, the equations are correct and are implemented correctly. The rendering engine was written
in C++ with the GLUT OpenGL library. It is not particularly general, simply a sky box with a
grid and the gyros animated within. However, it is sufficient to see the motion of the gyros and
compare the simulation results.
8.1.1 Derivation of Euler's Equations
From angular momentum, we have:

\tau = \frac{dH_A}{dt}    (8.1)

Since the angular momentum is easiest to express in a rotating frame, we can expand the equation to:

\frac{dH_A}{dt} + \omega \times H_A = \tau    (8.2)

Now we can expand further:

\frac{d}{dt}\left(I_1 \omega_1 \hat{i} + I_2 \omega_2 \hat{j} + I_3 \omega_3 \hat{k}\right) + \left(\omega_1 \hat{i} + \omega_2 \hat{j} + \omega_3 \hat{k}\right) \times \left(I_1 \omega_1 \hat{i} + I_2 \omega_2 \hat{j} + I_3 \omega_3 \hat{k}\right) = \tau    (8.3)

Collecting terms,

I_1 \dot{\omega}_1 \hat{i} + I_2 \dot{\omega}_2 \hat{j} + I_3 \dot{\omega}_3 \hat{k} + \omega_2 \omega_3 (I_3 - I_2) \hat{i} + \omega_3 \omega_1 (I_1 - I_3) \hat{j} + \omega_1 \omega_2 (I_2 - I_1) \hat{k} = \tau    (8.4)

Splitting into directional components:

I_1 \dot{\omega}_1 + \omega_2 \omega_3 (I_3 - I_2) = \tau_1    (8.5)

I_2 \dot{\omega}_2 + \omega_3 \omega_1 (I_1 - I_3) = \tau_2    (8.6)

I_3 \dot{\omega}_3 + \omega_1 \omega_2 (I_2 - I_1) = \tau_3    (8.7)
8.1.2 Analytical Solution with Semi Body-Fixed Coordinate System
The equations can be solved for a steady state solution with a semi body-fixed coordinate system.
The coordinate system 1 axis is aligned with the spin axis of the gyro, while the 2 axis is always
vertical. Hence the angular momentum, in these coordinates, can be expressed by:
H_B = I_1 \omega_1 \hat{i}    (8.8)

Even though the system will be rotating around the 2 axis, we are concerned with the angular momentum expressed in the coordinate frame of interest. From Newton's laws:

\tau = \frac{dH_B}{dt} + \Omega \times H_B    (8.9)

Substituting:
\tau = I_1 \dot{\omega}_1 \hat{i} + (\omega_2 \hat{j}) \times (I_1 \omega_1 \hat{i})    (8.10)

Taking the cross product:

\tau = I_1 \dot{\omega}_1 \hat{i} - I_1 \omega_1 \omega_2 \hat{k}    (8.11)

Splitting into components:

\tau_1 = I_1 \dot{\omega}_1    (8.12)

\tau_2 = 0    (8.13)

\tau_3 = -I_1 \omega_1 \omega_2    (8.14)
We are only interested in the third equation, since the change in spin speed is assumed to be zero. Solving for \omega_2:

\omega_2 = -\frac{\tau_3}{I_1\omega_1}    (8.15)

If the moment comes from gravity:

\omega_2 = \frac{m g l}{I_1\omega_1}    (8.16)
The moment is actually negative, so the minus sign disappears.
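As a quick sanity check on equation (8.16), take a hypothetical gyro with m = 0.5 kg, l = 0.05 m, I_1 = 5 \times 10^{-4} kg·m^2, and \omega_1 = 200 rad/s (illustrative values only, not the simulation parameters):

\omega_2 = \frac{m g l}{I_1 \omega_1} = \frac{(0.5)(9.81)(0.05)}{(5 \times 10^{-4})(200)} \approx 2.45 \ \mathrm{rad/s}

The precession is slow compared to the spin, as expected for a fast-spinning gyro.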
8.1.3 Numerical Solution with Body-Fixed Coordinate System

This demonstration uses the standard aircraft coordinates, applied to a gyroscope. From an aircraft control perspective, this is a simulation of an aircraft doing very high speed barrel rolls with zero airspeed. This is an odd situation, but one that tests the accuracy and numerical stability of the equations. Deriving the equations is simple, but the equations derived are left as vector equations, rather than split into components, so the derivation will be given here.
First, linear momentum and some nomenclature:

v_B = [u, v, w]^T    (8.17)

\omega = [p, q, r]^T    (8.18)

Since we're working in body fixed coordinates:

L_B = m v_B    (8.19)

F = \frac{dL_B}{dt} + \omega \times L_B    (8.20)

Rearranging:

m\dot{v}_B = F - \omega \times (m v_B)    (8.21)

Solving for \dot{v}_B:

\dot{v}_B = \frac{F}{m} - \omega \times v_B    (8.22)
This formulation is identical to the typically used body fixed equations, but for one difference in
implementation. The equations are used as a single vector equation. Since the simulation code uses
a very capable vector library, it is much easier to leave the system formulated as a vector equation,
as this limits bugs and equation formatting errors.
Now for the rotational component:
H_B = I\omega    (8.23)

\tau = \frac{dH_B}{dt} + \omega \times H_B    (8.24)

\tau = I\dot\omega + \omega \times (I\omega)    (8.25)
Rearranging:

\dot\omega = I^{-1}(\tau - \omega \times (I\omega))    (8.26)
Once again, these are equivalent to Euler's equations, but expressed as a vector equation in a way that lends itself easily to numerical integration. It also makes no assumptions about product moments of inertia, which can be useful.
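In code, the two vector equations collapse into a pair of one-line derivatives. The sketch below is illustrative only, assuming a hypothetical Vec3/Mat3 vector library; it is not the simulator's actual source.

struct Vec3 { double x = 0, y = 0, z = 0; };
struct Mat3 { double m[3][3]; };

Vec3 operator/(const Vec3& a, double s) { return {a.x / s, a.y / s, a.z / s}; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 operator*(const Mat3& M, const Vec3& v) {
    return {M.m[0][0] * v.x + M.m[0][1] * v.y + M.m[0][2] * v.z,
            M.m[1][0] * v.x + M.m[1][1] * v.y + M.m[1][2] * v.z,
            M.m[2][0] * v.x + M.m[2][1] * v.y + M.m[2][2] * v.z};
}
Mat3 inverse(const Mat3& M);  // 3x3 matrix inverse, implemented elsewhere

struct BodyRates {
    Vec3 vbDot;     // body-frame acceleration
    Vec3 omegaDot;  // body-frame angular acceleration
};

// Rigid body derivatives in vector form, equations (8.22) and (8.26).
BodyRates rigidBodyDerivatives(const Vec3& vb,     // body velocity (u, v, w)
                               const Vec3& omega,  // body rates (p, q, r)
                               const Vec3& F,      // total body-frame force
                               const Vec3& tau,    // total body-frame moment
                               double m,           // mass
                               const Mat3& I)      // full inertia tensor
{
    BodyRates d;
    d.vbDot    = F / m - cross(omega, vb);                      // (8.22)
    d.omegaDot = inverse(I) * (tau - cross(omega, I * omega));  // (8.26)
    return d;
}

Because I is the full inertia tensor, the same two lines handle nonzero products of inertia with no extra code.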
8.1.4 Gyro Simulation Results
The gyros track one another fairly well, but since the full 6DOF version must be attached to a pivot point, some complications arise. To avoid algebraic loops, it is necessary to model the attachment point with a spring force. It turns out, however, that the various modes associated with the spring attachment do not transfer energy back into the gyro's spin. The gyro tends to oscillate, and the oscillations steadily drain energy from the gyro spin, reducing the spin speed. Adding damping reduces the oscillation but also bleeds energy, further reducing rotation rates. As a result, the movements do not match perfectly, but they do track fairly well. This is strong evidence that the equations are properly derived and are modeling the system accurately, in full nonlinear, six degree of freedom form. Given this, we can comfortably apply these equations to the aircraft, secure in the knowledge that any remaining bugs or modeling errors do not lie in the rigid body dynamics. A screen capture of the gyros in motion is included below.
[Figure: screen capture of the two gyros animated in the 3D visualization]

8.2 Aircraft Simulation Results
Given the important nature of the simulation results and their close relationship to the dissertation objectives, a short summary of those objectives is useful. First, a damage database is to be used to compensate for a damaged aircraft. Second, the objective function or another component of the predictive controller, aside from the system model, is to be dynamically changed to alter performance. Third, a state trajectory extrapolator is to be developed to deliver stability and good handling qualities, even under damage.
To accomplish these objectives, a simulator was written. The simulator uses logic within the
main loop to select between a damaged and undamaged model for simulation and control, as well
as select maneuvers and activate and deactivate the system ID. All of this can be done while the
simulator is running. The MPC controller that was written is also dynamically configurable, with the weighting, STEX, and model all adjustable at run time. The STEX written for the controller is able to generate a desired orientation trajectory that is both stable and responsive, and highly resistant to drift. The result is a very good controller that flies well and is easy to handle, even for an unskilled pilot such as the author.
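A skeletal C++ sketch of this run-time reconfigurability follows. The class and member names are hypothetical, chosen only to illustrate how the model, the STEX, and the weighting can be swapped while the simulation loop is running; the actual simulator is organized around its own types.

#include <memory>
#include <utility>

struct State; struct Controls; struct Trajectory;
struct Weights { double pitch = 1.0, roll = 1.0, effort = 0.1; };  // illustrative

// Interface satisfied by the nominal, database, and identified models.
struct Model {
    virtual State step(const State& x, const Controls& u) const = 0;
    virtual ~Model() = default;
};

// State trajectory extrapolator: maps stick input and current state to a
// desired state trajectory over the prediction horizon.
struct Stex {
    virtual Trajectory extrapolate(const State& x, const Controls& stick) const = 0;
    virtual ~Stex() = default;
};

class MpcController {
public:
    void setModel(std::shared_ptr<const Model> m) { model_ = std::move(m); }  // on damage detection
    void setStex(std::shared_ptr<const Stex> s)   { stex_ = std::move(s); }   // change command type
    void setWeights(const Weights& w)             { weights_ = w; }
    Controls update(const State& x, const Controls& stick);  // one MPC solve per frame
private:
    std::shared_ptr<const Model> model_;
    std::shared_ptr<const Stex>  stex_;
    Weights weights_;
};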
8.2.1 Simulation Parameters
The parameters used for the simulation will be detailed in this section. First, the precise geometry and mass properties of the aircraft used for this work (a Kadet EP-42 model aircraft) will be listed for completeness and in the interest of repeatability.
Table 8.1: Aircraft Geometry and Mass Properties

Variable   Value                Description
S          0.210 m^2            Planform Area
c          0.203 m              Mean Aerodynamic Chord
b          1.067 m              Wingspan
m          1.02 kg              Mass
Ixx        163107e-7 kg·m^2     X Moment of Inertia
Iyy        313515e-7 kg·m^2     Y Moment of Inertia
Izz        447671e-7 kg·m^2     Z Moment of Inertia
Ixz        14560e-7 kg·m^2      XZ Product of Inertia
Next, all of the nonzero aerodynamic coefficients are included for the same reasons as above.

Table 8.2: Aerodynamic Coefficients

Variable     Value      Generates   From
C_Fz_alpha   -0.065     Fz          alpha
C_Fx_Vt      -0.01      Fx          Vt
C_Fx_dt      0.4        Fx          delta_t
C_Mx_da      -0.01      Mx          delta_a
C_My_alpha   -0.00012   My          alpha
C_My_de      0.01       My          delta_e
C_My_q       -0.5       My          q
C_Mz_beta    0.0001     Mz          beta
C_Mz_r       -0.1       Mz          r
C_Mz_dr      0.001      Mz          delta_r
The damage is modeled as a patch of wing with its own aerodynamic coefficients. The nominal
model is left unchanged, and the forces produced by the hole are reflected back to the CG, which
introduces moments as well. This is a simple, linear method of modeling the damage forces. Reality
is, of course, nonlinear. But this work is meant to be more of a proof of concept: if it can show
that having a database of precomputed models can improve stability and handling, then its goals
are largely accomplished. The fact that its models are somewhat simplistic is not entirely germane
to the task of showing that precomputed models can be useful, and in a production situation
there would be whole teams of flight test engineers and aerodynamicists developing highly accurate
models for the database. For this work, simple models will be used to show the effectiveness of the
algorithms. The next table covers the geometry of the damage: location relative to the CG in body
coordinates and the planform area of the damage. The location refers to the X, Y, and Z distances
from the aircraft CG to the center of pressure of the damage, and the planform area is the size of
the affected region of the wing.
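The sketch below illustrates the reflection of the patch force to the CG. The function and member names, the regressor set, and the dynamic pressure handling are assumptions for illustration, but the structure mirrors the method just described: patch forces from the patch's own coefficients and area, and a moment from the cross product with the patch location. Vec3 and cross() are as in the earlier sketches.

// Damage patch loads: forces from the patch's own linear coefficients,
// reflected back to the CG as a force plus the moment r x F.
struct DamagePatch {
    Vec3   r;      // patch center of pressure relative to the CG (body axes)
    double area;   // patch planform area Sd
    double cFzA;   // patch C_Fz_alpha
    double cFxVt;  // patch C_Fx_Vt
};

void damageLoads(const DamagePatch& d,
                 double q,      // dynamic pressure
                 double alpha,  // angle of attack
                 double vt,     // total airspeed
                 Vec3& force, Vec3& moment)
{
    // Same linear model form as the nominal aerodynamics.
    force.x = q * d.area * d.cFxVt * vt;
    force.y = 0.0;
    force.z = q * d.area * d.cFzA * alpha;
    // Reflecting the force to the CG introduces a moment as well.
    moment = cross(d.r, force);
}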
Table 8.3: Damage Geometry

Variable   Value     Description
Rx         0.0 cm    Damage X location
Ry         25.0 cm   Damage Y location
Rz         -2.0 cm   Damage Z location
Sd         4 cm^2    Damage Planform Area
Next, we have the aerodynamic coefficients of the damage, conforming to the same type of
model used for the nominal aerodynamic model. There are two sets of coefficients. The first is the
actual data, used for the simulation. The second is the data used by the damage controller (the model stored in the database). These models are not identical because no database can encompass all possible damage, so there will always be some degree of mismatch. The author has no illusions
about always having the exact model for any possible damage, but getting relatively close always
improves performance, and hence is worthwhile.
Table 8.4: Damage Coefficients - Actual

Variable     Value    Generates   From
C_Fz_alpha   0.065    Fz          alpha
C_Fx_Vt      -0.02    Fx          Vt

Table 8.5: Damage Coefficients - Database

Variable     Value    Generates   From
C_Fz_alpha   0.05     Fz          alpha
C_Fx_Vt      0.035    Fx          Vt
The above data should be enough to replicate the work done here, at least as far as model data
goes.
8.2.2 Test Case Outline
A series of test cases were run to illustrate the process of flying normally, sustaining damage, and compensating for that damage. The parameters and characteristics of each test case are given in the tables below. The first table covers the maneuvers used to test the performance of the control system. Each of the first four maneuvers is applied to both the pitch and roll channels, resulting in eight runs for each test case described below. With five test cases, there are a great many runs to do: 32 regular runs (no system ID) plus a system ID run means 33 runs with an average of 3 or 4 plots each, which is far too many to show in full. Therefore only a selection will be included. The baseline case (test case ID 1) will be heavily covered, as this establishes the basic performance of the system. The second shows robustness, but for the sake of space, only the pitch channel for a smaller selection of maneuvers will be shown.
Table 8.6: Maneuver Description

ID   Name                 Description
1    Slow Sin             Sinusoidal, one cycle in 10 sec
2    Fast Sin             Sinusoidal, two cycles in 10 sec
3    Step Doublet         15 deg amplitude, 4 sec width
4    Straight and Level   5 deg up
5    Chirp                Linear chirp for system ID
Test case 1 establishes baseline performance. Test case 2 illustrates the robustness of the controller to damage with only a nominal model. The third test case shows the improvement with a
database model. The system ID run shows the convergence of the model when a Kalman ID is used to identify the damage parameters. The final case shows the performance with an identified model.
Table 8.7: Test Case Outline

ID   Sim Model   Control Model   System ID   Maneuvers
1    Nominal     Nominal         inactive    1,2,3,4
2    Damaged     Nominal         inactive    1,2,3,4
3    Damaged     Database        inactive    1,2,3,4
4    Damaged     Database        active      5
5    Damaged     Identified      inactive    1,2,3,4
8.2.3 Results

8.2.3.1 Test Case 1: Nominal Aircraft, Nominal Control
We will start the results with a simple test case: a slow sine pitch input. The aircraft will complete a sinusoidal pitch doublet over a ten second simulation.
Figure 8.1: Pitch response to a slow sine pitch input
There is a small amount of phase lag, which is to be expected. Otherwise, the tracking performance is excellent. The magnitude is within a degree of the commanded pitch. The phase lag exists because the stick input does not have the ability to look ahead at what the trajectory will look like in the future. The state trajectory extrapolation cannot really look ahead; it simply has to make a best estimate of what the pilot or autopilot wants to do based on the stick input. Hence it tends to lag a bit when the input is not constant. Let us look at the roll channel with the same pitch input, to see what sort of cross coupling exists.
Figure 8.2: Roll response to a slow sine pitch input
Noting that the Y axis is in degrees, the cross coupling is basically nonexistent, which is no surprise for a nominal, symmetric aircraft. Next, we will look at a slow sine input on the roll channel. The roll output:
Figure 8.3: Roll response to a slow sine roll input
Again, excellent tracking. Not a surprising result for a nominal aircraft. For this case, the pitch
angle was commanded at five degrees, so we will now look at the pitch output to examine that cross
coupling.
Figure 8.4: Pitch response to a slow sine roll input
Again, the scale tells the story. There is less than 0.5 degrees of cross coupling error, which is easily within the constraints of good handling. At this point, the nominal orientation performance has been largely established for a slow sinusoidal input: tight tracking with negligible cross coupling. Moving on to a fast sinusoidal pitch input test case:
Figure 8.5: Pitch response to a fast sine pitch input
Still good tracking, but starting to show a small amount of overshoot, perhaps one or two
degrees. This is not unusual for aircraft, and is not fatal, either for safety or handling qualities.
The cross coupling behavior has been established and will not be shown for this maneuver. Same
maneuver, but for roll:
Figure 8.6: Roll response to a fast sine roll input
The overshoot is not present in the roll channel. In fact, the roll comes up slightly short, perhaps
a half degree, for each cycle. Again, cross coupling is not a problem for the nominal aircraft, but a
pitch plot will be included for completeness.
Figure 8.7: Pitch response to a fast sine roll input
The cross coupling has increased, but is still very small, less than a half degree. In addition, on
any aircraft the pitch channel tends to be more coupled to roll than vice versa, so this is worst case
cross coupling performance. Now we will examine a step doublet on the pitch channel.
Figure 8.8: Pitch response to a pitch doublet input
The pitch response shows no steady state error after the settling time. A small amount of
overshoot is present, but only a few degrees, which is quite reasonable. Settling time is roughly two
seconds, which is also very reasonable, and does not saturate the elevator.
8.2.3.2 Test Case 2: Damaged Aircraft, Nominal Control
Next we will damage the aircraft, but leave the nominal control system in place. This establishes the
robustness of the MPC algorithm, and demonstrates the ability of the control method to maintain
stability for the time required to detect and quantify the damage. We will limit the plots shown here to a pitch doublet, its cross coupling, and steady state error. One of the primary dangers of wing damage is that the asymmetry tends to couple the roll output to pitch input, so that is a situation of prime importance. To start, the pitch output for a pitch doublet input:
Figure 8.9: Pitch response to a pitch doublet input
Performance is still very good, even with the damage. Settling time has increased by a half
second, and there is still a degree of overshoot. Now cross coupling for the same maneuver:
Figure 8.10: Roll response to a pitch doublet input
Remarkably, there is very little cross coupling. From experience with the open loop damaged
model, flying the damaged aircraft is very tricky, and it is difficult to maintain straight and level
flight with the wing damage. That the controller is capable of this level of performance with an
off-nominal system is surprising. Next we will look at steady state error.
Figure 8.11: Pitch and roll response to steady state pitch and roll input
The steady state error is very small; it is essentially negligible. The pitch runs at 5 degrees because it is commanded there. For the sake of completeness, a second run was made with the downward force due to the damage multiplied by a factor of ten (C_Fz_alpha multiplied by ten, in other words). This is a fairly unrealistic damage case, but it illustrates the resistance of the MPC algorithm to steady state error. The damage case is unrealistic because the value shown in Table 8.4 corresponds to a negation of the lift on that section of the wing, so multiplying it by ten results in a downward force ten times stronger than what that patch of wing is capable of producing in lift.
Figure 8.12: Pitch and roll response to a steady state input with overlarge damage
Note the still small steady state error, about 1.2 degrees. Even the maximum error is only 1.8
degrees. The pitch has less than 0.1 degree of error.
8.2.3.3 Test Case 3: Damaged Aircraft, Database Controller

We will now switch to the control model from the database, which is not the exact model, but is closer than the nominal model. We will start with the steady state error maneuver.
Figure 8.13: Pitch and roll response to steady state input
The steady state error is smaller than it was before, down to 0.05 degrees from 0.1 degrees, which indicates that the system is working as it should, improving performance with a better knowledge of the way the aircraft will respond. A roughly 50% reduction in steady state error is realized by using a damage database. Now let us look at the doublet response:
Figure 8.14: Pitch response to pitch doublet input
Pitch response looks good, better than the nominal controller with less overshoot and steady
state error. Now the cross coupling:
Figure 8.15: Roll response to pitch doublet input
Very small cross coupling here, still small enough to be negligible. The transient spikes slightly higher than before, but the steady state is otherwise better, settling at 0.04 degrees, which is again more than 50% better than the nominal controller.
8.2.3.4 Test Case 4: System ID
System ID is a subject all its own. Tuning the algorithms to operate at maximum efficiency, designing adequate maneuvers, and performing the final processing of the data are all things typically done offline. For the sort of system ID that is required for a full implementation of this work, much of this process must be automated. In particular, maneuver design is highly dependent on what is being identified, and typically the precision of an autopilot actually works against good identification. Hence, automating system ID for a wide range of damage situations can be very difficult. Even developing a strategy for automated system ID would be a dissertation all by itself. The damage must be detected and localized, a set of likely regressors compiled, a maneuver designed to maximize variation of those regressors, the maneuver flown, and finally the data processed to discover which regressors are most relevant, and to get the best fit, most parsimonious model of the damage. Developing a complete system to reliably do all of that automatically is very difficult. Not impossible, but beyond the scope of this work. For that reason, the system ID done here is admittedly simple. The algorithms themselves are fully functional and well tested, but the other components of a true system ID framework (regressor estimation, maneuver design, and a good analysis of the noise characteristics) are not present. In practice, these are the most important and difficult aspects of system ID. The algorithms are simple and well understood, but getting the data to run the algorithms on is system dependent and has a tremendous effect on the quality of the identification. With all of this in mind, it is clear that any system ID done here would be too simple to be of much use, so the results will be presented mainly for completeness, not because the author considers them to be particularly relevant or illustrative of how the system would actually work in practice. The system ID used is an online Kalman ID, which uses a Kalman filter to identify parameters. A recursive least squares algorithm was also written, but it is used mainly as a benchmark for the Kalman ID.
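For reference, a minimal sketch of the parameter-identification form of the Kalman filter is given below, written for a single scalar measurement with the parameters modeled as a random walk. This is the textbook algorithm, not the dissertation's exact implementation; the two-parameter sizing, the names, and the noise values are assumptions.

#include <array>

constexpr int N = 2;  // e.g. theta = {C_Fz_alpha, C_Fx_Vt} for the damage ID
using VecN = std::array<double, N>;
using MatN = std::array<std::array<double, N>, N>;

// One update of Kalman filter parameter ID for the scalar measurement
// z = h . theta + noise, where h holds the regressors for this sample
// (e.g. alpha, Vt) and z the measured force or moment coefficient.
void kalmanIdUpdate(VecN& theta, MatN& P,  // estimate and covariance
                    const VecN& h,         // regressor vector
                    double z,              // measurement
                    double r,              // measurement noise variance
                    double q)              // random-walk process noise
{
    // Time update: the parameters are a random walk, so only P grows.
    for (int i = 0; i < N; ++i) P[i][i] += q;

    // Predicted measurement and innovation variance s = h'Ph + r.
    double zhat = 0.0, s = r;
    VecN Ph{};
    for (int i = 0; i < N; ++i) {
        zhat += h[i] * theta[i];
        for (int j = 0; j < N; ++j) Ph[i] += P[i][j] * h[j];
    }
    for (int i = 0; i < N; ++i) s += h[i] * Ph[i];

    // Measurement update with gain k = Ph / s.
    double innov = z - zhat;
    for (int i = 0; i < N; ++i) theta[i] += (Ph[i] / s) * innov;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            P[i][j] -= (Ph[i] / s) * Ph[j];  // P = P - k h' P
}

Each sample costs only a handful of multiplies for a small parameter vector, which is what makes the online form practical inside the control loop.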
The Kalman ID produces the following parameter histories:
Figure 8.16: System ID parameter with chirp input
The results here are far from realistic. No noise is included, but even so the convergence is quite fast, and the ID corrects the database nearly perfectly within one second. What is interesting is that system ID usually requires at least one cycle of a given mode to identify a parameter, regardless of noise. That was the experience when testing these algorithms. For interest, we will now try straight and level flight and see what that does to the system ID.
Figure 8.17: System ID with straight and level flight
This should be the worst case for system ID, but the identification is not particularly slowed down. If anything, it is slightly faster. This bodes well for identifying only the damage, rather than the whole system. However, any conclusions drawn here must be taken with the proverbial grain of salt. This work does not have to derive the body angular accelerations, which is a critical and difficult step in measuring the forces and moments on the aircraft. Other work done for this dissertation clearly demonstrated that numerical differentiation of the body rates is a very significant source of noise, and significantly degrades the system ID. In any case, all that can be concluded from this result is that identifying only the damage reduces the number of parameters to identify and makes the system ID much less sensitive to maneuver design. The primary reason that this identification works so well is that each output (Fz and Fx) is a function of only one regressor, so there can be no collinearity problems; collinearity would clearly be a problem if the regressors never changed, as they would in straight and level flight. Let us take a look at what happens when we add a few more regressors:
Figure 8.18: System ID with more regressors and straight and level flight
None of the parameters are converging to their proper values. Now there are collinear regressors, and the identification does worse than nothing: it is now misidentifying parameters. Now, a better chirp maneuver.
Figure 8.19: System ID with more regressors and a pitch chirp
The C_Fx coefficients are wandering about, not converging at all, and most of the other parameters are not even close to their actual values. With ideal measurements and a maneuver that should give a good identification, the system ID still does not converge. This is using the exact same data as Figure 8.16, where with a reduced regression vector the system ID converged to a perfect identification in under a second. Large regression vectors are the enemy of good system ID. A small regression vector was almost impossible to subvert, but a large one is equally difficult to get right. Unfortunately, we do not know a priori what the best regression vector is for any given damage. That is typically a postprocessing task done by a trained engineer with [23] in hand. Automating the process is not easy. Also keep in mind that even this more general model is not being asked to locate the damage. If it were, there would be three more parameters to further dilute the identification. In addition, if the location parameters are added to the identification vector, the identification becomes nonlinear, which can make the process even more difficult. This merely highlights the principle that the fewer parameters that must be identified, the better.
8.2.3.5 Test Case 5: Damaged Aircraft, Identified Model
Moving on to the performance with an identified model, it is assumed that the system ID worked correctly and was able to identify the model well. As is clear above, it can do so, but only when the regression vector is chosen intelligently and the maneuver is well designed. In any case, we now simply need to look at the basic performance with the new, even better identified model. Starting with another pitch doublet on the pitch channel, we can look at the pitch tracking:
Figure 8.20: Pitch response to a pitch doublet input
Still good performance, on par with the nominal performance. Honestly, the performance is
largely indistinguishable from the database model, which is mainly an indication of how robust the
system really is. Cross coupling for the same maneuver:
Figure 8.21: Roll response to pitch doublet
The roll coupling transient peaks at roughly 1/3 what it did with the nominal control system.
In addition, the system settles with 90% less steady state error with the better model than with
the nominal control system.
Chapter 9
Conclusions
9.1 Discussion of Results
The results show the effectiveness of a database strategy for adaptive control, and the applicability
of predictive control to such a strategy. The ability to simply swap models in and out of the
controller makes an indirect adaptive control scheme almost trivial, and makes the database system
particularly easy to implement and test, since the model used for simulation can be used for control,
with no code changes. Once the sim model has been tested, it is also ready for use in the predictive
controller. This greatly limits debugging and testing problems with the controllers, and is easy to
adapt to existing simulation models.
The state trajectory extrapolation proved to be an effective method of adapting a control
methodology rooted in regulation of chemical plants to an aircraft control tracking system. It
provided handling characteristics largely independent of weighting, and remarkably robust and
stable performance, both for preprogrammed maneuvers and for manual flight.
Strangely, the effectiveness of the database and identified models for control was partially masked
by the robustness of predictive control. No adaptation at all still delivered at least adequate control
and handling, for an aircraft that was very difficult to control open loop. Given that predictive
control is popular for process control in chemical plants mainly for its robustness to model mismatch,
it should not be too surprising that it handles that situation well in aircraft. Performance was still
degraded, so a better model has the potential to improve performance.
Integrating a database-derived model improved control and handling, beyond what even the
robustness of the controller could provide. The performance was not bad even with the nominal
model, but tracking and cross coupling were both improved significantly by installing a database-derived approximate model of the damage.
System ID proved to be a difficult task, and the importance of limiting the number of parameters
that needed to be identified was highlighted. Limiting the size of the identification vector was critical
in getting good convergence and limiting the sensitivity of the identification to the type of maneuver
being flown. This makes the task of identifying a model of a damaged aircraft under less than ideal
conditions far easier to do.
Integrating an even more accurate identified model improved the tracking and cross coupling performance to nearly nominal levels without excessive control utilization, and fully eliminated any drift or steady state orientation error due to the damage. This would make a pilot's job much easier: the pilot no longer needs to devote a significant portion of their attention to keeping the aircraft straight and level, and can instead devote more effort to emergency planning, leaving the control to the intelligent flight control system.
9.2 Objective Achievement
Damage Database
In this work, the concept of preconverged models for aircraft control adaptation was tested and
found to be effective. Steady state and transient error were found to be dramatically reduced (90%
reduction in steady state cross coupling error, 60% reduction in transient cross coupling error) and
pitch channel transient error was reduced by 50% over the nominal controller.
State Trajectory Extrapolation
A state trajectory extrapolation scheme was also developed and found to be effective, delivering solid tracking performance in every simulation. Good handling qualities and stability were also verified through the use of an interactive flight simulator, both for a rate command and a pitch and roll command control system. It was found in simulation that the handling qualities (response speed, tracking, and general handling feel) were almost entirely determined by the state trajectory extrapolation system in use. The handling was rendered nearly independent of the weighting.
Dynamic MPC
Both the system model and the state trajectory extrapolation can be altered dynamically to improve performance or safety. The effectiveness of changing the model used for control has been demonstrated. The state trajectory extrapolation is also dynamic, and is even capable of changing the type of control system on the fly, from a rate command to a pitch and roll command system, for example.
9.3 Concluding Remarks
The motivation of this work was to look at new ways of doing integrated intelligent flight control.
Primarily, it was to be an excursion into modern control methods that are more amenable to a highly
integrated flight management system. A dissertation is an opportunity to explore new territory and
new ideas and ways of solving problems. The author sees nonlinear control, in particular predictive
control, as a route to a more integrated, more intelligent form of flight control, one that does not
try to deconstruct and compartmentalize the flight control problem, but rather takes a synthesis
approach and integrates all available information in an intelligent way to control the aircraft in the
safest manner possible. Given the demands of modern flight control (high traffic, damage tolerance, high levels of intelligent automation, airspace integration with UAVs, and perhaps even autonomous transport aircraft), linear control is starting to find its limits. Constraints do not integrate well,
abstract data often cannot be integrated at all, and adaptation does not always integrate as well
as we would like, particularly with indirect strategies. If flight, and flight control, is going to make
the next steps in its evolution, better ways of controlling the aircraft and synthesizing the vast
quantities of information available need to be developed. In fact, the problem of how to synthesize
all of this information is probably the most important challenge facing the flight control research
community today. The author believes that predictive control has the potential to play a key role
in solving this problem.
The framework for any flight control design methodology is immensely complex, and develops
over decades. Linear control design and flight control in general have a long history and very well
developed tools and techniques for design. These tools take time to develop, and modern control is
still, to a great extent, in its infancy. Time will see better design methods and more assurances of
stability. The author hopes that this work is a small step towards these better methodologies, and
a new way of approaching flight control, one that will enhance the safety, efficiency, and comfort of
flight in the future.
Bibliography
[1] Adaptive Generalized Predictive Control Subject to Input Constraints.
[2] S. N. Balakrishnan and Dongchen Han. State constrained agile missile control with adaptive-critic-based neural networks. IEEE Transactions on Control Systems Technology, 10(4), 2002.
[3] V. Bobal, J. Böhm, J. Fessl, and J. Machacek. Digital Self-Tuning Controllers. Springer-Verlag, 1 edition, 2005.
[4] Mark E. Campbell and Shelby Brunke. Nonlinear estimation of aircraft models for on-line
control customization. In Aerospace Conference, 2001, IEEE Proceedings, 2001.
[5] D. W. Clarke. Self-tuning control of nonminimum-phase systems. Automatica, 20, 1984.
[6] D. W. Clarke. Adaptive predictive control. Annual Reviews in Control, 20, 1997.
[7] D. W. Clarke and C. Mohtadi. Properties of generalized predictive control. Automatica, 25,
1989.
[8] D. W. Clarke, C. Mohtadi, and P. S. Tuffs. Generalized predictive control - Part I. The basic algorithm. Automatica, 23(2):137-148, 1987.
[9] D. W. Clarke, C. Mohtadi, and P. S. Tuffs. Generalized predictive control - Part II. Extensions and interpretations. Automatica, 23, 1987.
[10] Moritz Diehl and Jakob Bjornberg. Robust dynamic programming for min-max model predictive control of constrained uncertain systems. IEEE Transactions on Automatic Control,
49(12):2253-2257, 2004.
[11] Guy A. Dumont and Mihai Huzmezan. Concepts, methods and techniques in adaptive control.
2002.
[12] Silvia Ferrari and Robert F. Stengel. An adaptive critic global controller. 2002.
[13] Silvia Ferrari and Robert F. Stengel. Online adaptive critic flight control. Journal of Guidance, Control and Dynamics, 27, 2004.
[14] Silvia Ferrari and Robert F. Stengel. Smooth function approximation using neural networks. IEEE Transactions on Neural Networks, 16(1):24-38, 2005.
[15] Peter J. Gawthrop, Huseyin Demircioglu, and Irma I. Siller-Alcala. Multivariable continuous-time generalized predictive control: A state-space approach to linear and nonlinear systems. Technical report, Center for Systems and Control, February 1998.
[16] Pamela Haley and Don Soloway. Pilots rate generalized predictive control for reconfiguration. Technical report, NASA Ames.
[17] Pamela Haley and Don Soloway. Generalized predictive control for active flutter suppression.
IEEE Control Systems, 17, 1997.
[18] Pamela Haley and Don Soloway. Neural generalized predictive control: A Newton-Raphson implementation. Technical report, NASA Langley, February 1997.
[19] R. Hallouzi and M. Verhaegen. Fault-tolerant subspace predictive control applied to a Boeing 747 model. Journal of Guidance, Control and Dynamics, 31(4):873-883, July-August 2008.
[20] Lars Imsland, Rolf Findeisen, Eric Bullinger, Frank Allgower, and Bjarne A. Foss. A note on
stability, robustness and performance of output feedback nonlinear model predictive control.
Journal of Process Control, 13:633-644, 2003.
[21] Huairui Guo, Jionghua Jin, and Shiyu Zhou. Statistical process control based supervisory generalized predictive control of thin film deposition processes. Journal of Manufacturing Science and Engineering, 128, February 2006.
[22] R. Kennel, A. Linder, and M. Linke. Generalized predictive control (GPC) - ready for drive applications? Volume 4, 2001.
[23] Vladislav Klein and Eugene A. Morelli. Aircraft System Identification: Theory and Practice.
AIAA Education Series. AIAA, 1 edition, 2006.
[24] Nilesh V. Kulkarni and K. KrishnaKumar. Intelligent engine control using an adaptive critic.
IEEE Transactions on Control Systems Technology, 11(2), 2003.
[25] D. Q. Mayne, S. V. Rakovic, R. Findeisen, and F. Allgower. Robust output feedback model
predictive control of constrained linear systems. Automatica, 42:1217-1222, 2006.
[26] D. Q. Mayne, J. Rawlings, C. Rao, and P. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789-814, 2000.
[27] Prashant Mhaskar, Nael H. El-Farra, and Panagiotis D. Christofides. Robust hybrid control
of nonlinear systems. Automatica, 41:209-217, 2005.
[28] Jung-Wook Park, Ronald G. Harley, and Ganesh Kumar Venayagamoorthy. Adaptive-critic-based optimal neurocontrol for synchronous generators using MLP/RBF neural networks. IEEE Transactions on Industry Applications, 39(5), 2003.
[29] N. Sarigul-Klijn, I. Lopez, S. I. Beak, and B. Kuo. "Smart" monitoring of structural health in flight environment (invited keynote paper). In Proceedings of the ICCEE, Japan, 2008.
[30] N. Sarigul-Klijn, P. Nespeca, T. Marchelli, and M. Sarigul-Klijn. An approach to predict flight dynamics and stability derivatives of distressed aircraft. In AIAA Atmospheric Flight Mechanics Conference and Exhibit. AIAA, 2008.
[31] N. Sarigul-Klijn, R. Rapetti, A. Jordan, I. Lopez, M. Sarigul-Klijn, and P. Nespeca. Intelligent flight-trajectory generation to maximize safe-outcome probability after a distress event. Journal of Aircraft, 47(1):255-267, 2010.
[32] S. I. Shah, C. Mohtadi, and D. W. Clarke. Multivariable adaptive control without a priori knowledge of the delay matrix. Systems and Control Letters, 9, 1987.
[33] Jianjun Shi, Atul G. Kelkar, and Don Soloway. Stable reconfigurable generalized predictive control with application to flight control. Journal of Dynamic Systems, Measurement, and Control, 128, June 2006.
[34] P.B. Sistu and B.W. Bequette. Nonlinear model predictive control closed-loop stability analysis.
AIChE Journal, 42, 1996.
[35] M. B. Subrahmanyam. H-infinity design of F/A-18A automatic carrier landing system. Journal of Guidance, Control and Dynamics, 17(1):187-191, 1994.
[36] Qian Wang and Robert F. Stengel. Robust control of nonlinear systems with parametric
uncertainty. Automatica, 38, 2002.
[37] Qian Wang and Robert F. Stengel. Robust nonlinear flight control of a high performance
aircraft. IEEE Transactions on Control Systems Technology, 13, 2005.
[38] Edward Wilson, Chris Lages, and Robert Mah. Gyro-based maximum-likelihood thruster fault detection and identification. In Proceedings of the 2002 American Control Conference, May 2002.