Every April, SUAI traditionally holds a large scientific conference for undergraduate and postgraduate
students. The first space flight in the world was performed by Yuri Gagarin on 12 April 1961, and this
date is celebrated in many countries as the World Day of Aviation and Astronautics.
Our university was founded in 1941, and many of its graduates actively participated in the first space
launches and in the development of aerospace vehicles.
Today SUAI has become a major educational and scientific centre with thirteen faculties, 47 chairs, over
fifty directions of training, and postgraduate and doctoral degree courses.
The Chair of Aerospace Instruments and Measuring-Computing Complexes (Chair N11) is still the leading
department of SUAI, defining the main direction of aerospace student training in the university.
Aviation, space vehicles, orbital stations, interplanetary probes and other artificial flying objects are
considered the highest technological achievements of terrestrial civilization. Many countries
develop advanced aerospace technologies, but Russia is still among the leaders.
From rigid rivalry in the space arena we are moving toward the internationalization of projects. We are
trying to develop international cooperation and the world market of aerospace vehicles and services. The
volume of this market is increasing rapidly.
Qualified aerospace experts are needed everywhere, and they are highly valued on every continent. We are
able to educate such experts. Many applicants from different countries come to our university for training
and graduate degrees. In recent years the chair has consolidated its educational resources with the International
Institute for Advanced Aerospace Technologies (IIAAT of SUAI), created in 1998
and having an excellent reputation thanks to a number of very successful projects. This gives students the
opportunity to study in English and to participate in real R&D.
This Volume IV contains 15 papers from the regular session of the conference. All papers were presented
in English by foreign students from different countries. Most of the authors are master's students
from Nigeria who were supported by the National Space Research and Development Agency
(NASRDA), Abuja, Nigeria. The efforts of all contributing authors in studying and investigating the
problems related to aerospace technology development are highly commendable. For most of the
students the tasks under consideration are new; that is why the statement of the problems and the
possible methods for their solution form an essential part of each paper.
We hope the knowledge and experience obtained while investigating real problems of aerospace
technologies will be useful for the future successful scientific careers of all the students who prepared
papers for this volume.
Prof. Dr. Alexander Nebylov,
Head of Department #11 of SUAI,
Director of IIAAT SUAI,
Honored Scientist of the Russian Federation
Dr. Olufemi Agboola, Engr. Kunle Fasade, Adetoro M.A. Lanre
National Space Research and Development Agency, Abuja, P.M.B 437, Nigeria
Phone no: +2348033489278,
Abstract: This paper presents numerical results of a study concerned with the simultaneous optimization of the ascent
trajectory of a flying vehicle and some significant vehicle design parameters. Besides the trajectory, models are
implemented that relate empirically the mass interrelationships among the various propulsion subsystems as functions
of the velocity at the end of the trajectory, the flight range and other performance indices. With this model, one can
estimate the vehicle parameters and the interrelationships between the various units of the propulsion subsystem. The models
and methods adopted for the mass budget and trajectory optimization were taken from lecture notes used during the
educational collaboration between the International Institute for Advanced Aerospace Technologies of St. Petersburg
State University of Aerospace Instrumentation, Russia, and the Center for Space Transport and Propulsion, Epe, Nigeria.
Keywords: Trajectory optimization, launch vehicles, mission analysis, ground trace, coordinate systems, mass
2.1 Mathematical Model of the flying Vehicle
Trajectory optimization is the process of designing a
trajectory that minimizes or maximizes some measure of a
performance index within prescribed constraint boundaries [1].
The selection of flight profiles that yield the greatest performance
plays a substantial role in the preliminary design of flying vehicles,
since the use of ad-hoc profiles or control policies to evaluate
competing configurations may inappropriately penalize the
performance of one configuration over another. Thus, to
guarantee the selection of the best vehicle design as a function of
trajectory optimization and mass budget, it is important to optimize
the profile and control policy for each configuration early in the
design process. It is the purpose of this paper to implement a
design process for trajectory optimization and propulsion-system
mass budget estimation for preliminary analysis and study at the
Center for Space Transport and Propulsion, Lagos, Nigeria. In
addition, a mass budget predictor is designed and implemented as a
tool to perform quick structural and propulsion mass budgeting and
evaluation. It also gives, at a glance, a vivid picture of a good
design against a bad one.
The vehicle under consideration is modeled as a rigid body and its
motion is restricted to two degrees of freedom. The two-dimensional
mathematical model used to describe the vehicle motion is
based on the equations of motion of the center of mass over an
oblate, rotating Earth and is taken from Ref. [3]. Fig. 1 shows the
reference vehicle [2]. Its motion is considered to lie in the XLYL
plane, the position of the vehicle is characterized by the spherical
coordinates (R, λ), and the vehicle's rotation is represented by
θ = θ(t). The state variables are the inertial velocity components
Vλ, VR, the position variables λ, R, and an appropriate equation
for the mass change of the vehicle during ascent:

Ṙ = VR ,   (1)

λ̇ = Vλ/R − ΩE ,   (2)

V̇R = [Vλ²/R]Local + Fx/m(t) ,   (3)

V̇λ = [−VλVR/R]Local + Fy/m(t) ,   (4)
where ΩE is the Earth's rotational speed and Fx, Fy are the
vector sums of all the external forces. The subscript Local
indicates inertial variables in the local horizontal coordinate
system. The engine model can be represented by equation (5):

T = g0 Isp ṁ(t) − Ae Pa .   (5)

The thrust force of the vehicle is a function of time as given in
equation (5) and Ref. [3]; g0 is the gravitational acceleration at
sea level, Isp is the specific impulse, ṁ(t) is the mass flow, and
Ae is the engine exit area of the propulsion system; Pa is the
ambient pressure as a function of altitude.
Pd and Aref are the dynamic pressure and reference area for the
aerodynamic forces.
Fig.1. Representation of the vehicle in spherical coordinate frame
The mathematical model was solved in Simulink using a
Runge-Kutta method as the solver. The Simulink
block diagram for equations (1)-(4) is shown in Fig.2.
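As a cross-check of the Simulink implementation, equations (1)-(4) can also be integrated directly. The sketch below is a minimal Python version using SciPy's Runge-Kutta solver; the constant-thrust figures, the simple pitch program and the neglect of drag are illustrative assumptions, not the paper's actual guidance profile or vehicle data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (assumptions, not the paper's vehicle data)
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
RE = 6378e3              # Earth radius, m
OMEGA_E = 7.2921159e-5   # Earth rotation rate, rad/s
G0 = 9.80665             # sea-level gravity, m/s^2
ISP = 300.0              # specific impulse, s (assumed)
MDOT = 500.0             # propellant mass flow, kg/s (assumed)
M0 = 136e3               # gross lift-off mass, kg

def ascent_rhs(t, y):
    """Planar ascent equations (1)-(4); state y = [R, lam, VR, Vlam, m]."""
    R, lam, VR, Vlam, m = y
    T = G0 * ISP * MDOT              # vacuum thrust; exit-pressure term ignored
    g = MU / R**2                    # local gravity
    # Simple pitch program: thrust tilts from vertical toward horizontal (assumption)
    theta = np.radians(max(0.0, 90.0 - 0.5 * t))
    Fx = T * np.sin(theta) - m * g   # net radial force (drag ignored)
    Fy = T * np.cos(theta)           # transverse force
    dR = VR                          # eq. (1)
    dlam = Vlam / R - OMEGA_E        # eq. (2)
    dVR = Vlam**2 / R + Fx / m       # eq. (3)
    dVlam = -Vlam * VR / R + Fy / m  # eq. (4)
    dm = -MDOT                       # propellant consumption
    return [dR, dlam, dVR, dVlam, dm]

# Initial conditions as in equations (6)-(9)
y0 = [RE, 0.0, 0.0, RE * OMEGA_E, M0]
sol = solve_ivp(ascent_rhs, (0.0, 120.0), y0, method="RK45", max_step=1.0)
altitude_km = (sol.y[0, -1] - RE) / 1e3
print(f"altitude after 120 s: {altitude_km:.1f} km")
```

The same right-hand side can be dropped into any fixed-step RK4 loop if SciPy is unavailable.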
3.1 Solution
Initial Conditions. The vehicle is simulated using the initial
conditions of CSTP (Epe, Nigeria), with the ground track represented
in Fig.3:

Rinitial = RE + hlaunchpad ,   (6)

λinitial = λlaunchpad ,   (7)

VRinitial = 0 ,   (8)

Vλinitial = RE · ΩE ,   (9)

where RE (6378 km) is the radius of the Earth. The initial position
is at launchpad altitude above RE, at geographical longitude 6° E
and latitude 5° N, with the initial velocity components
[VRinitial, Vλinitial] as described in equations (8)
and (9). The simulation result of this paper and the nominal
plot of the adopted vehicle are shown in Fig.4 and Fig.5 for
comparison. The results indicate that the method adopted in this
paper is close to the real result.
Fig.2. Nonlinear Simulink block diagram for the mathematical model
2.2 Mass Models for Engine and Tank Sizing
Simple models describing the interrelations of the propulsion system
are taken from Ref. [1]. Given an initial mass budget, the flight
range and the specific impulse, the mass budget of each individual
component can be estimated using the approach adopted in this
paper. Although the model supports liquid fuel, simulation
results on an existing solid-fuel vehicle also gave good accuracy.
The models for the mass budget [4] were implemented in a
graphical user interface using popular
computer-aided design software. The GUI implementation makes
parameter adjustment and evaluation easy. The
GUI display is shown in Fig.9. The model
will optimize the vehicle design given adequate information on the
initial vehicle mass, thrust-to-weight ratio, flight range, velocity at the
end of flight, and specific impulse of the fuel.
Fig.3.Ground track of CSTP-Nigeria
Optimization of the ascent trajectory of a simple flying vehicle is
presented. An example is given for a popular flying vehicle, and the
obtained data are compared with reference data. The vehicle is to
deliver a payload of 1000 kg to an altitude of 400 km with a gross lift-off
mass of about 136 tonnes. The task is to design the trajectory and
to estimate the mass budget of the vehicle using a realizable
propellant with a higher specific impulse.
Fig.4.Simulation result of this paper
Acceleration due to gravity = 9.81 m/s²
K = 1.12 [4]
Launch angle = 40 degrees = 0.698 radian
3.4 Design procedures
The estimation of the various parameters is implemented in the GUI
of Fig.9, which was used to estimate the data required for the initial
vehicle analysis. Plotting and graphical display were not incorporated
into the GUI, so separate software was used to plot the data generated
from it; examples are the plot of launch mass versus payload mass
in Fig.7 and of launch mass against specific
impulse in Fig.8. The details of the design calculations are shown in the GUI
software developed for the budget estimator.
Fig.5. Reference plot of the adopted vehicle configuration
3.2 Design case for mass budget
Summary of customer's requirements:
Payload mass = 1000 kg
Flight distance = 1000 km
The following notation is used in the design and
implementation of this problem: K is dimensionless and is
determined from the empirical plot of K against L (flight
distance in kilometers) in Fig.6; Lell is the elliptical length;
Vid the ideal velocity; Vend the velocity at the end of flight;
and Isp the specific impulse.
Fig.7. Launch mass with varying mass of payload
Fig.6. Empirical selection of K as a function of flight length (L) [4]
3.3 Design considerations
Propellant choice: ethyl alcohol (75 % alcohol + 25 %
H2O) + liquid oxygen
• optimum mixture ratio = 1.43
• specific impulse = 204 s
• characteristic velocity = 2001 m/s = 2.001 km/s
Initial start variables:
• Payload mass = 1000 kg
• Flight distance = 1000 km
• Radius of the Earth = 6370 km
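The mass-budget relations themselves come from the course notes [4] and are embedded in the GUI. As a stand-in illustration, the sketch below sizes a single-stage vehicle for the 1000 kg payload case using the classical Tsiolkovsky rocket equation with the K = 1.12 loss factor quoted above; the 3000 m/s end-of-flight velocity and the 10 % structural fraction are assumptions, not the paper's data.

```python
import math

G0 = 9.80665  # sea-level gravity, m/s^2

def launch_mass(m_payload, v_end, isp, k=1.12, struct_frac=0.10):
    """Estimate gross launch mass for a single-stage vehicle.

    v_end: velocity at the end of flight, m/s (assumed value)
    k: empirical loss factor from the K-vs-L plot (Fig. 6)
    struct_frac: structure mass as a fraction of propellant mass (assumption)
    """
    v_id = k * v_end                            # ideal velocity including losses
    mass_ratio = math.exp(v_id / (G0 * isp))    # Tsiolkovsky rocket equation
    # Final mass = payload + structure, structure scales with propellant;
    # solve for launch mass m0 by fixed-point iteration.
    m0 = m_payload * mass_ratio
    for _ in range(50):
        m_prop = m0 * (1.0 - 1.0 / mass_ratio)  # propellant mass
        m_struct = struct_frac * m_prop         # structural mass
        m0 = (m_payload + m_struct) * mass_ratio
    return m0

m0 = launch_mass(m_payload=1000.0, v_end=3000.0, isp=204.0)
print(f"estimated launch mass: {m0/1000:.1f} t")
```

Sweeping `v_end` or `isp` in this function reproduces the qualitative trends of Fig.7 and Fig.8: launch mass grows with payload mass and shrinks rapidly as specific impulse increases.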
Fig.8. Launch mass against specific impulses
Fig.9. Developed graphical user interface for budget estimator
The mathematical models described in the previous equations
of motion and mass budget were used to implement this
interface. The Matlab programming language was used, and the
program was later converted to an executable file as standalone
software. The generated data were exported to Mathcad
software to obtain graphical views of the final results. The
results of this example show that as the payload mass
increases, the mass of the launch vehicle also increases
(Fig.7). Additionally, Fig.8 shows that increasing the
specific impulse significantly reduces the total mass of the
launch vehicle.
The method used in this paper is from lecture notes used
during the postgraduate program at St. Petersburg State
University of Aerospace Instrumentation, St. Petersburg,
Russia. The authors and the students from the agency on
this program appreciate the contributions of the
management and staff of the National Space Research and
Development Agency, Abuja, and the entire management and
staff of the Center for Space Transport and Propulsion, Epe,
Lagos State. The personal contributions of the staff and
management of the International Institute for Advanced
Aerospace Technologies of SUAI, St. Petersburg, Russia, are
gratefully acknowledged. Special thanks go to Prof.
Alexander Nebylov (Director of the Institute), Prof. Alexander
Panferov and Dr. Sergey Brodsky.
The research in this direction has given us the opportunity
to implement trajectory optimization and mass budgeting with
the aid of computer-aided design software for our own use.
The main applications of these findings are analysis, mass budget
estimation and training modules for a better understanding of
vehicle design and constraints. Future work will include
improvement of the GUI template for mass budget estimation
and visualization; improvement of the models to include
expected sources of errors and corrections; extension of the
motion to three-dimensional trajectories; and implementation of
anticipated sources of error such as wind, thrust
misalignment and vehicle deflection, which cause the real
trajectory to deviate from the nominal one.
[1] WELL, K. H., MARKL, A., and MEHLEM, K., ASTOS: A
Trajectory Analysis and Optimization Software for
Launch and Reentry Vehicles, Paper IAF-97-V4.04,
48th International Astronautical Congress, Turin,
Italy, 1997.
[2] E., Multidisciplinary Design Tool for System and
Mission Optimization of Launch Vehicles, 6th
AIAA/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization,
Bellevue, Washington, USA, 1996.
[3] BUHL, W., EBERT, K., and WOLFF, H., Technical
Report 2, Modeling: Advanced Launcher Trajectory
Optimization Software Technical Documentation,
European Space Technology and Research
Center, Netherlands, 1999.
[4] Postgraduate Lecture Notes of the International Institute for
Advanced Aerospace Technologies of SUAI,
St. Petersburg, Russia, 2009.
3.5 Development of Graphical User Interface
This paper and the education collaboration were supported
by National Space Research and Development Agency,
Abuja, Nigeria.
Adetoro M.A Lanre
International Institute for Advanced Aerospace Technologies of SUAI, 67, Bolshaya Morskaya,
Saint-Petersburg, 190000, RUSSIA
Phone no: +79523933920
Abstract: In this paper, an effective trajectory control system is developed to control a satellite's
motion, using the Euler-Hill equations of relative motion. Trajectory control of a satellite is crucial to any
mission with very high pointing accuracy, such as a geosynchronous communication satellite mission. Models of
space-environment perturbations and sensor noise are taken into consideration from the start of the control
synthesis, so as to account for the uncertainties in the linearized system dynamics provided by the modified
Hill-Clohessy-Wiltshire (HCW) equations [17]. An onboard observer design is also implemented to estimate
the state vector in anticipation of sensor noise or failures. The theoretical results are supported by
simulations based on Simulink block diagrams and Matlab m-files.
Keywords: Trajectory control, Apogee firing mode, unified propulsion subsystem, optimum, controller,
control system design.
In a geostationary orbit mission, there are two major stages
that concern the control of a satellite's orbit. The first is
the transfer from the Geosynchronous Transfer Orbit (GTO)
to the final Geostationary Earth Orbit (GEO). The second
stage is the so-called mission stage, which is generally
expected to last more than fifteen years. In this paper, only
the mission stage of the geosynchronous trajectory will be
considered.
A common problem in the operation of any satellite is to
guide it and position it on a desired trajectory (orbit).
Numerous forces, such as the gravitational force, three-body
effects and solar radiation pressure, tend to disturb the satellite's
motion and perturb it into other, undesired trajectories [2].
Geostationary-orbit satellite missions are typically designed
to occupy a particular orbit about which periodic trajectory
(station-keeping) maneuvers are conducted.
In the geosynchronous orbit, the primary parameters of a
satellite trajectory that change with time are the inclination
(North-South), the longitude (East-West), and the eccentricity
(deviation from a circular orbit). However, in the case of
relative motion (with respect to the desired orbit), these
adjustments will be interpreted as x, y and z corrections in
a rotating coordinate system placed on the
satellite. Hence, the task of the proposed trajectory control is
to periodically minimize the errors of these three primary
parameters and their derivatives with minimal fuel
consumption [19].
Significant research advances have been made recently,
and numerous methods have been developed for trajectory control
in the literature. These approaches consider many
different aspects of the trajectory control problem for
geostationary orbit, including relative satellite orbit control,
coupled mission objectives, and global fuel minimization. One
approach is transition-matrix inversion, which controls using
the Battin matrix method [33].
This approach calculates the velocity change ∆V required at
the current time to achieve some desired state in the future.
It has been in the literature for decades for use in
correcting the orbits of geostationary satellites.
Another approach is the impulsive/heuristic method, the
traditional, classical orbital maneuver. Ref. [33] contains
mathematical descriptions of the Lambert problem of finding
an orbit connecting two states in a specified time, as well as
methods for finding optimal one- and two-impulse burns to a
desired trajectory. These approaches have the advantage of
low risk, because the methods have decades of established
usage (heritage) in the field.
A straightforward approach to controlling the trajectory of
a satellite whose dynamics can be
modeled linearly is to use a Linear Quadratic Regulator
(LQR) [24]. This approach has the advantage that all the
tools of linear control can be used to analyze stability and
other properties, and it can therefore operate with
minimal risk when implemented. However, an LQR controller
would be expected to fire continuously in response to system
uncertainty, incurring a fuel penalty compared with the maneuver-planning
controllers mentioned earlier. To prevent continuous firing, an
LQR controller would likely be combined with a deadband.
Some other approaches, such as Lyapunov controllers and
PID controllers [28, 31], require that control be applied
continuously, a strategy both prone to high fuel use and
difficult to implement when thrusting requires attitude
adjustment. Other approaches, such as the impulsive
thrusting scheme introduced in [32], require the satellite to thrust
at previously specified times and directions in the orbit; the
disadvantage is that many potential maneuvers will not be fuel-optimal.
As presented above, satellite trajectory control has been an
essential interest in space technology. This paper
concentrates on special scenarios that are only mentioned,
but not really investigated, in the previous literature. We
concentrate on unmodelled plant uncertainties right from the
beginning of the control synthesis. This issue concerns
space orbit dynamics: control theory under the
condition of perturbations existing in space.
The contributions of this project are: first, a systematic
investigation of the perturbed plant equation using optimal
control; second, the implementation of control constraints,
filtering and observer analysis together with their
performance; and third, the development of a Simulink simulation
platform for this controller.
The delta-v associated with these East-West station-keeping
burns depends upon the longitude and the station-keeping requirement
(allowable drift), but it is typically only about 2 m/s per year.
The satellite will use low-power thrusters as its principal
actuators during trajectory control. The thrusters will be fired
symmetrically in pairs in order to track the desired
trajectory, with maneuvers accelerating the satellite east or
west and inclination-correction firings accelerating it
south or north [2].
The trajectory of the satellite will be described as a
geostationary orbit at an altitude of about 35,786 km. The payload
is for communication, and therefore the satellite will
point to the Earth's surface with good accuracy in
inclination, eccentricity and longitude positioning.
The satellite contains all of the platform equipment and
provides the stable platform on which the payload
equipment is mounted. Two solar-array wings, one each on the
south and north faces, rotate continuously about the
vehicle pitch axis to track the sun during orbital motion.
The payload is hard-mounted on the earth-facing panel for
unobstructed earth coverage and maximum alignment
stability. To provide near omnidirectional coverage during
emergency situations, the telemetry and command antennas
are mounted on both the earth and anti-earth decks [12]. The body
coordinate system used for the satellite is such that the z
axis is aligned with the nadir direction, the x axis is the
forward velocity direction and the y axis is normal to the
orbit in the on-orbit configuration.
During the mission orbit phase the satellite will use the
earth sensor to measure both roll and pitch; yaw is
not directly sensed, but will be determined from the orbit-rate
coupling with roll. A yaw gyro will also be used to sense rate
information during trajectory control maneuvers. 10 N
thrusters will be used both for inclination maneuvers and for
east-west eccentricity corrections. In trajectory control, the
infrared earth sensor will be used for roll and pitch attitude
measurement, with the sun sensor or gyro integration used for
yaw attitude, or gyro integration used for three-axis
attitude measurement.
Various results show that the magnitude of the North-South
oscillation grows at a rate of about 1.5 km/day. Therefore, we
would have to apply an impulsive delta-v in the orbit-normal
direction about once every 50 days. Past results also
show that the North-South relative velocity grows at a
constant rate of about 0.08 m/s per day. Thus, on the 50th
day, the delta-v required would be 4 m/s. This translates into
an annual delta-v budget of about 30 m/s. When the effects of the
moon are included, the North-South oscillation grows
somewhat larger, to about 2 km/day, which is 1 degree per
year. It then requires about 50 m/s of annual delta-v to
perform North-South station keeping. The East-West motion
is caused by accelerations acting in the xy plane of the
rotating frame. These include the gravity of the sun
and moon, as well as the gravitational harmonics of the
asymmetric earth. The effect of earth oblateness depends on
the longitude of the station; the general result is a drift in the
longitudinal, or East-West, direction. The longitudinal
displacement (y direction) grows at a rate of about 2.6 km
per day. It would therefore take 28 days to reach our
73.6 km boundary. Periodic burns in the y direction are
applied to bound this drift.
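The North-South budget figures quoted above follow from simple arithmetic; the sketch below recomputes them from the stated 0.08 m/s-per-day relative-velocity growth and 50-day burn cadence.

```python
# Recompute the North-South station-keeping budget quoted in the text.
NS_RATE = 0.08   # N-S relative velocity growth, m/s per day (from the text)
CADENCE = 50     # days between impulsive orbit-normal burns (from the text)

dv_per_burn = NS_RATE * CADENCE           # m/s per maneuver
burns_per_year = 365 / CADENCE
dv_annual = dv_per_burn * burns_per_year  # m/s per year

print(f"delta-v per burn: {dv_per_burn:.1f} m/s")   # 4.0 m/s
print(f"annual N-S budget: {dv_annual:.1f} m/s")    # 29.2 m/s, quoted as ~30
```

The same one-line calculation with the lunar-perturbed growth rate reproduces the larger ~50 m/s annual figure.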
In general, all model development for satellite motion
requires the specification of a reference frame [6]. Three
coordinate systems are used in this paper in applying the
Euler-Hill equations. The first is the geocentric inertial frame,
which locates the orbital plane with respect to the Earth; the
second is the local-vertical local-horizontal (LVLH) frame, which
locates the orbit within the orbital plane, and with which the
components of the external forces are aligned; and the third is the
spherical coordinate system, used to transform Newton's
law of motion governing the satellite from the geocentric
to the spherical coordinate system.
A geocentric inertial system is centered at the Earth; the
fundamental plane is the equator, the unit vector Ox is
directed from the Earth's center along the vernal equinox, Oz
is normal to the fundamental plane, positive in the direction of
the geographic north pole, and Oy completes the setup (see
Fig.1). This reference frame is usually referred to as Earth-centered
inertial (ECI). The inertial system is used to define the
satellite position and velocity vectors from the center of the Earth,
denoted by r and v respectively [3].
The local-vertical local-horizontal (LVLH) frame is a rotating coordinate
system with origin at the center of mass of the satellite. The
fundamental plane is the orbital plane. The unit vector Ox is
directed from the spacecraft radially outward, Oz is normal to
the fundamental plane, positive in the direction of the
(instantaneous) angular momentum vector, and Oy
completes the setup (Fig.2).
Fig.1. Geocentric inertial frame
Fig.2. Local-vertical local-horizontal (LVLH)
As mentioned above, the unperturbed geostationary orbit of
the reference point (chief) is described by equation (4) and the
perturbed real satellite orbit (deputy) by equation (5), as
presented below:
6.1 Modelling the system
The dynamics modeling of spacecraft in formation has been
studied in previous work such as [15]. Considering the
chief and deputy satellite formation shown in Fig. 3,
the relative motion will be described. In order to implement
effective trajectory control, it is necessary to model the
equations of motion of the satellite. The Euler-Hill equations,
which describe the relative motion between satellites in
neighboring orbits, are used. By considering an ideal (desired)
circular orbit of a point and a real perturbed satellite
orbit, the relative distance between any two points on the two
orbits can be calculated through the Euler-Hill
equations for a dynamical system.
Using the assumptions of point masses, two-body interaction,
the sole effect of the gravitational force, and a spherical Earth,
the Keplerian two-body equation of satellite motion can be
written as follows:

r̈ + (μ/r³) r = 0 ,   (1)

where r = [x y z]ᵀ is the position vector in ECI, r is the
magnitude of the radius of the orbit in meters, and μ is GM,
where G is the Universal Gravitational Constant and M is the
mass of the Earth. Transforming equation (1) from ECI to
polar coordinates, the equations below are derived:

r̈ = r θ̇² − μ/r² ,   (2)

θ̈ = −2 ṙ θ̇ / r ,   (3)

where θ is the angle of rotation in polar coordinates, in radians.
To model the satellite relative motion coherently, in both the
unperturbed and perturbed settings, the geostationary orbit is
taken as the chief and the perturbed real satellite orbit deviating
from it as the deputy. The chief/deputy notation implies a more
general setup, in which the trajectory of the satellite under
investigation is modeled with respect to a reference point on the
geostationary orbit, as shown in Fig.3, using two equations, one
for the chief and the second for the deputy:

r̈₁ = r₁ θ̇² − μ/r₁² ,   (4)

r̈₂ = r₂ θ̇² − μ/r₂² + f ,   (5)

where r₂ is the real satellite radius, r₁ is the radius of the
ideal point on the geostationary orbit, and f = [fx fy fz] is the
perturbation due to the thrusters and environmental forces acting
on the real satellite orbit in the x, y and z directions. Setting
δ = r₂ − r₁, the vector of the relative position of the real
satellite with respect to the corresponding point on the
geostationary orbit, the corresponding relative acceleration can
be defined as:

δ̈ = (r₂ θ̇² − μ/r₂²) − (r₁ θ̇² − μ/r₁²) + f .   (6)

Transforming equation (6) to the rotating spherical (LVLH)
coordinates, equations (7)-(9) can be derived [20]:

ẍ = 2Ω ẏ(t) + 3Ω² x(t) + fx ,   (7)

ÿ = −2Ω ẋ(t) + fy ,   (8)

z̈ = −Ω² z(t) + fz ,   (9)

where Ω = √(μ/r³) is the angular velocity of the geostationary
orbit in radians per second.
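Equations (7)-(9) admit closed-form solutions, but a quick numerical check of the unforced (f = 0) relative motion is easy to run; the sketch below uses the geostationary rate quoted later in the paper and an assumed 100 m cross-track offset, which should oscillate harmonically and return after one orbital period.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 7.273e-5  # geostationary orbital rate, rad/s (value used in the paper)

def hcw_rhs(t, s):
    """Unforced Hill-Clohessy-Wiltshire equations (7)-(9), with f = 0.
    State s = [x, y, z, vx, vy, vz] in the LVLH frame."""
    x, y, z, vx, vy, vz = s
    ax = 2 * OMEGA * vy + 3 * OMEGA**2 * x   # eq. (7)
    ay = -2 * OMEGA * vx                     # eq. (8)
    az = -OMEGA**2 * z                       # eq. (9)
    return [vx, vy, vz, ax, ay, az]

# Start 100 m off-track in z: the out-of-plane motion is a pure oscillation.
s0 = [0.0, 0.0, 100.0, 0.0, 0.0, 0.0]
period = 2 * np.pi / OMEGA                   # one orbital period (~23.93 h)
sol = solve_ivp(hcw_rhs, (0.0, period), s0, rtol=1e-10, atol=1e-12)
print(f"z after one period: {sol.y[2, -1]:.2f} m")  # returns to ~100 m
```

The z channel is decoupled from x and y, which is why it oscillates independently at exactly the orbital rate.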
6.2 State Space Representation
With the assumptions of small deviations from the nominal trajectory,
minimal control forces from the thrusters and negligible
space-environment disturbances, a simplified linearized
model of the satellite motion that can be used as the basis for
the trajectory control design can be derived from equations (7)-(9).
At this point it is assumed that the system always works in
the neighborhood of the stationary working point. Therefore
a constant state matrix A and input matrix B, described by
equation (10), can be derived for the system in state-space form.
These matrices can be found by differentiating the three
nonlinear equations in spherical coordinates with respect to
the states and inputs using Jacobian linearization [15]. To
sum up, we obtain the following state equation:
 A11
x& = 
 A21
A12 
 B1 
x +  u ,
A22 
 B2 
The output matrix is given by:
Fig.3. Chief/deputy relative trajectory to be minimized
y = diag ([1 1 1 1 1 1]) x + Du
is the state vector of the system defined as
where x
 fx
z& ] , the
is first order time
f z  and y is the output. A is state matrix,
inspected through the values of Cdiag .Since there are no
columns with only zeroes, all states are connected to an
output and the system is therefore observable.
6.5 Trajectory control subsystem architecture
B is the control matrix, C is the output matrix and D is
the feed forward matrix. For easy representation the
subscripts A11 , A12 , A11 , A12 , A11 , B1 and B2 are defined
as below.
A11 = O 3 x3 , A12 = I 3 x3 ,
3Ω 2
A21 =  0
 0
0 
0  , A22=
0 −Ω 2 
2Ω 0 
 0
 −2Ω 0 0 ,
 0
0 0
B1 = O3 x3 , B2 = I3 x3 . The numerical value of Ω for
geostationary orbit is 7.273x10^-5 radian per second.
6.3. Stability of plant open loop simulation
From the matrices above the eigenvalues of matrix A for
the state space system represented by equation (8) is
[ ± jΩ ± jΩ 0 0] radian/seconds, so the system is
asymptotically unstable. The plant has two double pole
imaginary eigenvalues. The MATLAB script used to simulate
the initial conditions response of the trajectory shows that it is
asymptotically unstable and therefore in the absence of
control actuator, the real satellite trajectory oscillate around
the ideal orbit for any small deviation from the desired orbit
[10].Therefore a control system has to be designed in order
to make it asymptotically stable.
Fig.4. Control loop architecture
IRES=Infra red earth sensors=sun sensor; RIGA=Rate
integrated gyro assembly; OBC=onboard computer;
LAE=Liquid apogee engine; MW=momentum wheel;
RW=Reaction wheel; OBDH=On board and data handling
system; TT and C=Tracking telemetry and Telecommand;
SADA=Solar array drive assembly;SSE=Sun sensor
WE=Reaction wheel assembly.
Typical components of the trajectory control will include sun
and Earth sensors, reaction and momentum wheels(attitude
control actuator), Thrusters(trajectory control actuator), Rate
integrated gyro assembly (RIGA), on board computer and the
electronics required to process signals from the above
6.4. Controllability and observability
To be able to design a controller and an observer for the
plant, it is important to test the controllability and observability
of the linear model. To do this, the state matrix is
transformed into diagonal form so that no coupling between the
states remains. The diagonal transformation is performed by
applying the modal matrix M, containing the eigenvectors
of A, to the system matrices according to the rules of similarity
transforms: A_diag = M^-1 A M, B_diag = M^-1 B and
C_diag = C M. When looking at the system in this form it is
possible to consider each state individually to determine
whether a particular state is controllable and/or observable.
Furthermore, it shows that the states in the linear model can
be decoupled from each other. The two columns of the B
matrix are linearly independent, which makes it possible to
control part of the states while keeping the rest of the states
at their respective equilibria. This strategy, however, demands
an infinitely fast controller and a linear system.
The controllability of the linear system is found by
inspecting the values of B_diag. This inspection concludes
that all the states are controllable, since there are no rows
containing only zeros: every state is connected to at least one
input, though not necessarily to all three. Thereby the
system is controllable. The observability of the system is
verified in the same way by inspecting C_diag.
6.6. Trajectory controller algorithm
Computer hardware and software on board the spacecraft will
decode the input signals from the sensors to determine the
trajectory of the spacecraft. A particular type of computer
algorithm, known as an "observer", will be used to estimate
the parameters of some of the desired variables of the spacecraft
trajectory dynamics [4]. An algorithm of this type uses a
model of the spacecraft and a time history of the trajectory
information to generate an estimate of the spacecraft
parameters. The control algorithm then uses these estimated
parameters, along with the current and desired trajectory, to
compute the appropriate actuator torques. In order to be able to
control the satellite, a translatory acceleration needs to act
on the body of the satellite.
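The diagonal-transformation controllability test described in Section 6.4 can be sketched numerically. A minimal illustration in Python, using a made-up two-state plant rather than the paper's satellite model:

```python
import numpy as np

# Illustrative sketch of the diagonal-form controllability test --
# the matrices below are assumed examples, not the paper's model.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # 2-state plant with distinct eigenvalues
B = np.array([[0.0],
              [1.0]])

# Similarity transform with the modal matrix M of eigenvectors of A:
# A_diag = M^-1 A M,  B_diag = M^-1 B  (valid since A is diagonalizable)
eigvals, M = np.linalg.eig(A)
A_diag = np.linalg.inv(M) @ A @ M
B_diag = np.linalg.inv(M) @ B

# A state is reachable from the input iff its row of B_diag is nonzero;
# the system is controllable when no row contains only zeros.
controllable = np.all(np.any(np.abs(B_diag) > 1e-12, axis=1))
print(controllable)
```

The same check on the columns of C_diag gives observability; for repeated eigenvalues a Jordan form or the standard rank test would be needed instead.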
The following requirements will drive the selection of the
control laws: (1) accuracy, (2) robustness, (3) energy
efficiency and (4) computational simplicity. This approach will
also include the use of reliable, redundant, cross-strapped,
heritage, space-qualified equipment and, above all, the
implementation of fault detection and isolation schemes [20].
6.7. Problem Formulation
The problem of this satellite trajectory can be formulated on
the basis of Fig. 5 as follows: we are given the perturbed real
satellite trajectory, whose behavior is considered
unsatisfactory. We are also given a second system M, the
desired (unperturbed) trajectory, whose behavior is
considered ideal. This system M is called the reference model.
Fig.5. General configuration of the control problem
The problem is to find an appropriate controller which
minimizes a specific cost function. The cost function is
designated by the letter J and its typical form is given by

J = (1/2) ∫ from t0 to tf [ e^T(t) Q e(t) + u^T(t) R u(t) ] dt ,

where e(t) = y(t) − ym(t) is the error between the desired
behavior (output) ym(t) of the reference model and the
actual behavior (output) y(t) of the given system. The solution
of Equation (10) is the optimal solution to the problem. The
field of modern control engineering based upon the
minimization of cost functions is called optimal control. The
weight matrices are defined here as: Q, a positive semi-definite
matrix, and R, a positive definite matrix. Q and R are
constraints on the system state variables and the control
input: they are, respectively, the state and control weighting
matrices that define the tradeoff between regulation
performance and control effort. Note that it is their relative
rather than their absolute values that matter. Larger values of R
impose larger constraints on the control input and make the
system slower, with smaller feedback gains; the same applies to Q.
7.1 LQR controller – design and analysis
The Linear Quadratic Regulator (LQR) is an optimal control
approach based on a linear approximation of a dynamical
system of the form

ẋ = Ax + Bu + N(x) ,

where A and B are matrices of appropriate dimensions and N
denotes the effects due to nonlinearities and unmodeled
dynamics. The control law for u is obtained in this approach
by ignoring the term N and minimizing the performance index
in equation (10), where tf is the final time and Q ≥ 0 and
R > 0 are, respectively, the state and control weight matrices.
For an autonomous system, constant weight matrices, and
tf → ∞, the minimization of Eq. (10) is achieved by the
following control law:

u = − Kx ,

where K = R^-1 B^T P and P satisfies the Algebraic
Riccati Equation (ARE)

A^T P + PA − P B R^-1 B^T P + Q = 0 .

The solution to the ARE is positive definite if the pair (A, B)
is controllable and the pair (A, C) is observable [79]. Positive
definiteness of P guarantees closed-loop stability, i.e.,
asymptotic stability of the system

ẋ = (A − BK) x .

Numerical solvers of the ARE are standard in software
packages such as MATLAB.
7.2. Design and analysis of dynamical process noise
This dynamical process noise will be a function of the
fidelity of the dynamical model used for the controller
implementation. Although the space environment is very well
known, approximate linear models are often used because of
their computational simplicity and ease of implementation.
These models usually account for the effects of Earth's
gravity modeled as a point mass, but may also take into
account higher-order gravity terms (e.g., J2). In addition,
some linear models also include terms for drag [10].
However, in practice, dynamical models used for computing
the control must always truncate or ignore some
perturbations in order to be feasible. Typically, at least some
of the higher-order gravitational terms, advanced
atmospheric models, solar wind, and third-body effects of the
Moon, the Sun and planets are excluded. These exclusions
combine to form the dynamical model uncertainty N(x), which
denotes the effects due to nonlinearities and unmodeled
dynamics in equation (13).
The dynamic model designed in this paper is used as
space environment perturbation. The introduction of this
makes the plant equation more realistic. In Matlab/Simulink
the perturbation is modeled by using a random number
generator, which generates normally distributed random
numbers with a 0 mean and a variance of 1. This signal is
amplified with a factor of A/3, where A is the normalized
maximum amplitude of the expected disturbance in space. A
noise generator is included in the complete Simulink diagram. Since
about 99.73% of a Gaussian distributed stochastic signal lies inside a
window of 6 times the standard deviation (σ), the
amplification factor can be calculated as follows:
6σ = 2A , i.e. σ = A/3 .
The sampling time of the random number generator is
given to be 1 second so as to meet up with the simulation
time of the control strategy.
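The noise-generation scheme above can be sketched outside Simulink as well; a minimal Python sketch, with the disturbance amplitude A_max assumed for illustration:

```python
import numpy as np

# Sketch of the process-noise generator described above: zero-mean
# Gaussian samples scaled so that ~99.7% of values (3 sigma each side)
# stay inside the expected disturbance amplitude A_max.
A_max = 1e-4          # normalized maximum disturbance amplitude (assumed)
sigma = A_max / 3.0   # from 6*sigma = 2*A_max
rng = np.random.default_rng(0)

t_sample = 1.0                       # 1-second sampling, as in the text
noise = rng.normal(0.0, sigma, 600)  # one sample per simulation second
print(noise.shape)
```

In Simulink the same role is played by the Random Number block with a 1 s sample time and variance sigma squared.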
7.3. LQR simulations
The control strategy of Linear Quadratic Regulator is to be
designed for the plant. It will be designed to operate around
the reference model trajectory. The initial condition for the
plant is x(0) = [800 m, 1000 m, 1200 m, 0.02 m/s, 0.02 m/s,
0.002 m/s]. The perturbed linearized equation will be used for
the LQR simulation. The Linear Quadratic Regulator is a
controller design optimized to minimize the cost function
stated in equation (12). In the equation (12), the time interval
is defined to be longer than the external excitation.
Optimal feedback control guarantees the change of the
operating point within t ∈ [0, tf] from the initial state to a
neighborhood of the origin x = 0. The optimal control problem
can be solved by various methods, such as the Euler-Lagrange
method, the Hamilton-Jacobi-Bellman equation or Pontryagin's
minimum principle. By employing Pontryagin's minimum
principle with the Riccati transformation, the sufficient condition
for the optimal control is equation (12).
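The ARE-based LQR synthesis used in this work (solved in MATLAB in the paper) can be sketched with SciPy; the plant below is an illustrative double integrator, not the satellite model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative LQR design: double-integrator plant (assumed example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting, Q >= 0
R = np.array([[1.0]])  # control weighting, R > 0

# Solve the ARE  A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # control law u = -K x

# Closed-loop matrix A - BK must have all eigenvalues in the left half plane
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.all(closed_loop_eigs.real < 0))
```

For this plant and these weights the gain works out to K = [1, sqrt(3)], a standard textbook result, which makes the sketch easy to verify.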
Assume the satellite is equipped with two three-axis
sensors, one for position measurement in the x, y, z directions
and the other for velocity measurement in the x, y and z
directions. This assumption means that all the states are
available for feedback control. Additionally, the measurement
equation (9) shows that all the entries of C are zero except
on the diagonal; the performance criteria for the state
will then be chosen based on
this premise. One simple method to initialize the LQ design is
suggested by Bryson's rule [15]. In practice, an appropriate
choice to obtain acceptable values of x and u is to initially
choose diagonal matrices Q and R such that
qii = 1/(maximum acceptable value of xi^2), i.e. q11, q22, ..., q66,
and rii = 1/(maximum acceptable value of ui^2), i.e. r11, r22, r33 [6].
Bryson's rule was used to start the iteration, assuming no
constraints on u and x. The first simulation using this rule gave
unsatisfactory results, so further selection of the matrices Q and
R was based on the requirement of minimal fuel-energy
utilization. After some trial-and-error iterations and simulations,
the matrices were selected as Q=diag([1 1 1 1 1 1]) and
R=diag([1/Om^2 1/Om^2 1/Om^2])*60, where Om is equal to
7.273x10^-5 radian per second.
The eigenvalues of the closed-loop system are: -0.2621 +
0.2458i, -0.2621 - 0.2458i, -0.2621 + 0.2457i, -0.2621 - 0.2457i,
-0.2621 + 0.2457i, -0.2621 - 0.2457i. This corresponds to a time
constant of about 3.8 s and a frequency of about 0.32 rad/s for
the closed-loop system.
The approximate gain from a given state to a given input of the
closed-loop system can be estimated from the diagonal
weighting matrices Q and R as sqrt(qii/rii) [10]. By inserting the
corresponding elements, the approximate gain from the state x
to the inputs u can be determined. Note that at this gain the
state vector and the input tend to their operating point, which is
a consequence of designing the LQR gain from Q and R. It is
not the absolute magnitudes of Q and R that determine the
gain, but the ratio between them. Comparing the eigenvalues of
the closed-loop system with the open-loop properties of the
original unstable system shows that the system has become
asymptotically stable: the slowest open-loop time constant of
13749.4 s has become more than 3618 times faster.
Fig.6 shows how the LQR controller has been implemented.
The controller operates around the linearization point of the
desired trajectory, which is why the deviation of the initial
condition from the operating point is fed back through the
state-feedback matrix. The response of the system can be seen
in Fig.8, Fig.9 and Fig.10. The simulations show that the
controller is stable, though some oscillations occur when the
system is forced far away from the linearization point, which
confirms that the LQR regulator works as expected only for a
linear system operating close to the nominal trajectory.
Fig.6. Perturbed plant model with LQR control scheme
Fig.7. Illustration of satellite trajectory from initial conditions
to track the chief orbit in LVLH coordinates
Fig.8. Position errors of the satellite trajectory while tracking
the desired trajectory in LVLH
Fig.10. Control effort versus time in the x, y and z directions
during trajectory control
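Bryson's rule described above can be sketched directly; the maximum acceptable excursions below are assumed for illustration (loosely based on the initial conditions and the thrust limit quoted in the text):

```python
import numpy as np

# Sketch of Bryson's rule: diagonal Q and R built from maximum
# acceptable state and input excursions (values are assumptions).
max_pos = 1200.0   # m, largest acceptable position error
max_vel = 0.02     # m/s, largest acceptable velocity error
max_u = 0.5        # N, thrust limit from Section 7.4

q_diag = [1.0 / max_pos**2] * 3 + [1.0 / max_vel**2] * 3
r_diag = [1.0 / max_u**2] * 3
Q = np.diag(q_diag)   # 6x6 state weighting
R = np.diag(r_diag)   # 3x3 control weighting
print(Q.shape, R.shape)
```

As the text notes, these values are only a starting point: the final Q and R come from trial-and-error iterations, and only their ratio affects the resulting gain.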
7.4 Actuator saturation
In the real world the control object's actuators have a finite
operating interval. In this case it means that there is a
maximal and a minimal operating range for the thruster at
any given time. However, the LQR controller designed so far
would be expected to fire continuously in response to system
uncertainty, incurring a fuel penalty over the maneuver-planning
controller mentioned earlier. This is visible in the plot of
Fig.10. To prevent continuous firing, an LQR controller would
likely be combined with a deadband. This has not been taken
into account in the model, nor in the previous literature [4].
Including this in the model will
put constraints on the control input. In view of this, the control
thrust to the rocket engine is limited to the interval from -0.5
to 0.5 Newton. The interval is implemented in the Simulink
model by a saturation box.
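The saturation (and the deadband suggested above) can be sketched as a simple clamp; the ±0.5 N limit is from the text, while the deadband width is an assumed value:

```python
import numpy as np

# Sketch of thrust saturation plus a deadband: the +/-0.5 N limit
# comes from the text, the deadband threshold is hypothetical.
U_MAX = 0.5       # N, saturation limit
DEADBAND = 0.01   # N, assumed threshold below which we do not fire

def limit_thrust(u):
    u = np.asarray(u, dtype=float)
    u = np.where(np.abs(u) < DEADBAND, 0.0, u)  # suppress tiny commands
    return np.clip(u, -U_MAX, U_MAX)            # hard actuator limits

print(limit_thrust([-0.8, 0.005, 0.3]))
```

In the Simulink model the clamp corresponds to the saturation box; the deadband would be an additional block in front of it.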
7.5 Estimator Design
Fig.9. Velocity errors of the satellite versus time in the x, y and z directions
The major difficulty associated with any control signal that
makes use of the system state is that exact measurements of
the states are rarely available. The premise of the LQR is
that all states be available for the feedback. In many real
implementations, only a few state variables of a given system
are measurable and it is necessary to estimate the
unmeasured states. An estimator estimates the state
variables from the control inputs and system outputs. The
estimator can be designed as a Luenberger observer or as a
Kalman-Bucy filter (an optimal observer). These observers use a
linear feedback of the difference between the measured output
and the computed output to reduce the estimation error. The well
known and well-publicized 60-degree guaranteed phase
margins of LQR full state feedback, for example, were found
to disappear when Kalman filters were used to estimate
unmeasurable states in the LQG control method [2]. Although
certain industries have successfully applied modern control
methods to control problems through careful attention to
robustness margins [3], modern control techniques are still
not widely used. In the case of linear dynamics, the
celebrated Kalman-Bucy filter provides a solution, whereby,
given a good model of the dynamics and some knowledge of
any measurement and process noise entering the system of
interest, an optimal estimate of the states can be computed
causally, in real time [13]. In view of this, the Kalman-Bucy filter
will be used as the estimator in this paper. Estimation makes it
possible to use state feedback techniques to solve many
important control problems, such as pole assignment, input–
output decoupling, and model matching. The dynamics of the
estimated state is [19]:
dx̂/dt = Ax̂ + Bu + L(y − Cx̂) ,
where x̂ is the estimated state vector and L is the observer
gain vector. The observer gain can be obtained from a full-order
observer or a Luenberger observer, if the rank condition
of the observability matrix is satisfied. In this paper, a Kalman
observer is used to account for the system and measurement
noises. Including process and measurement noises, the
system equation can be modified as
ẋ = Ax + Bu + w ,
y = Cx + v .
Assuming the two noises are uncorrelated, zero-mean,
stationary white Gaussian processes,

E{w} = E{v} = 0 , E{w w^T} = Q0 , E{v v^T} = R0 ,

where Q0 and R0 are the process and sensor noise covariance
matrices respectively; Q0 is positive semi-definite and R0 is
positive definite. The Kalman observer obtains an estimate of
the state that minimizes the mean square of the estimation
error, i.e. its covariance. The gain L is determined from the ARE

A P + P A^T + G Q0 G^T − P C^T R0^-1 C P = 0 ,
L = P C^T R0^-1 .

Fig.11. Complete Simulink block platform for this controller
7.6 LQG and Estimator Design Simulation
The combination of a Kalman filter and LQR is called LQG
control. The LQG method can provide optimal control for a
linear dynamic system perturbed by Gaussian noise.
The simulation process can be defined as follows: the
desired trajectory generated by the unperturbed plant
equation is the control input; the difference between the
desired state vector and the optimal state estimate given by
the Kalman filter is forwarded to the controller gain, and the
resulting control force goes to the propagator for the next
time-step state generation.
The plant to be controlled is the state-space model
synthesized in equations (8) and (9), with the performance
index given in equation (11). The observed state is the control
output. The simulation time was set to 600 seconds, since
before half of that period there is no significant difference
between the desired trajectory and the actual trajectory of the
chief/deputy architecture; in addition, the final state x(tf) is
free and the admissible controls and states are unbounded.
We design the feedback control law and plot the time
histories of the estimated states and the control using
Pontryagin's minimum principle [14].
As in the LQR case, the optimal observer gains depend on
the relative values of the weighting matrices. The control
feedback gains are fixed at those obtained in the LQR design,
and the observer gains are determined from equation (19).
The results of estimation using the Kalman filter are shown in
Fig.13, Fig.14, Fig.15 and Fig.16. The plant model of the
observer has zero initial conditions, while the plant itself has
the initial conditions stated earlier in the LQR control
synthesis.
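The observer-gain computation from the ARE can be sketched with SciPy's CARE solver applied to the dual system; the two-state plant and noise covariances below are assumed for illustration, not taken from the paper's model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative Kalman observer gain on an assumed 2-state plant.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])      # only position is measured
Q0 = np.eye(2) * 1e-3           # process noise covariance (G = I assumed)
R0 = np.array([[1e-2]])         # measurement noise covariance

# Observer ARE  A P + P A' + Q0 - P C' R0^-1 C P = 0, solved via the
# dual system (A', C') with the standard controller-ARE solver.
P = solve_continuous_are(A.T, C.T, Q0, R0)
L = P @ C.T @ np.linalg.inv(R0)  # Kalman gain

# The estimation-error dynamics A - LC must be stable
print(np.all(np.linalg.eigvals(A - L @ C).real < 0))
```

Pairing this L with the LQR gain K from the earlier section gives the LQG controller described in the text.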
Fig.12. Unperturbed Geostationary Orbit
Fig.13. LQG control and estimated and measured position
error x
Fig.14. Magnified estimated and measured position error x
Fig.16. Magnified estimated and measured position error y
7.7 Simulation Results
The simulation results are presented here; the simulated
scenarios and the corresponding plots have already been
explained. In the scenario simulations we choose typical initial
conditions of a typical geostationary orbit where trajectory
control is periodically executed and minimization of the energy
function is emphasized. As expected, the LQR gives the best
performance for the linear dynamic system, while the LQG
weakens the effects of sensor noise, sensor failure and
plant-model uncertainties on the trajectory control problem.
The simulations included the perturbations due to the space
environment which the satellite will experience in practical
situations.
The simulation is an improved model which visualizes
the ability of the proposed control system to track a
predefined trajectory, and its performance in compensating
and reducing the errors between the chief and deputy
trajectories during the control actuation period.
The results show how the satellite, starting from arbitrary
initial conditions outside the desired trajectory, picks up
its pace toward the desired trajectory with minimum
control effort. We ran simulations where the real
satellite (deputy) tracks the chief trajectory
(geostationary orbit) using the geostationary orbit
parameters. The model gives the advantage of altering
the desired path and initial conditions as we wish; we
can plot various trajectories and check the
performance and the capability of the proposed control
system. The model was also simulated using low Earth
orbit parameters and gave satisfactory results. This
indicates that the trajectory control model can be
implemented for any arbitrary circular orbit.
Fig.15. Estimated and measured position error y
The approach implemented in this paper can be used to
generate optimized trajectory control that satisfies
performance constraints [6, 26–30]. The approach has a
number of advantages compared with previous findings: it
easily incorporates realistic constraints on thrusting and
control performance; it generates plans that closely
approximate fuel-optimal "bang-off-bang" solutions rather
than the continuous thrusting plans that inevitably arise
from previous LQR, H∞ and Lyapunov controller
syntheses; and it allows for piecewise-linear cost
functions, such as optimal fuel use. This approach
also models the linearized plant equation with uncertainty
from the beginning of the control synthesis, as expected
for an ideal satellite in orbit.
Future work includes the analysis of other control
schemes as well as the consideration of mathematical
models of space environment perturbations (e.g.
atmospheric drag, the J2 effect, etc.). Additionally,
analysis of the stabilization conditions and the
implementation of an unscented Kalman filter, due to the
nonlinear nature of the problem, will also be considered.
[1] Kaplan, M. Modern Spacecraft Dynamics and Control,
John Wiley and Sons, New York, 1976.
[2] Astrom, K.J. and B. Wittenmark. Adaptive Control,
2nd Ed. Addison-Wesley, New York, 1995.
[3] Battin R.H. An Introduction to the Mathematics and
Methods of Astrodynamics. American Institute of
Aeronautics and Astronautics, New York, 1987.
[4] Li, Yong, Wu, Hong-Xin. On the slew maneuvers of a
flexible space structure with a saturating actuator,
proceedings of intelligent Autonomous control in
aerospace, 1995.
[5] Maciejowski J.M. Multivariable Feedback Design,
Addison-Wesley, Wokingham, England, 1989.
[6] Morris Driels, Linear Control Systems Engineering, The
McGraw-Hill Companies, Inc, 1996.
[7] Gu, Petkov and Konstantinov. Robust Control Design
with Matlab, Spinger-Verlag London Limited, 2005.
[8] Richard C.Dorf Robert H. Bishop, Modern Control
Systems, Addison Wesley Longman Inc, 1998.
[9] Bang, Hyochoong, Lee, Jung-Shin and Eun, Youn-Ju.
Nonlinear Attitude Control for a Rigid Spacecraft by
Feedback Linearization, 1999.
[10] Ning Xiaolin and Fang Jiancheng. An Autonomous
Celestial Navigation Method For Geosynchronous
Satellite, 16th IFAC Symposium on Automatic
Control In Aerospace,2004.
[11] Bao Liu. Modern control theory, China machine
press, 2000.
[12] KyleT A, Srinivas R. V. Spacecraft Formation flying,
[13] Sidi M.J. Spacecraft Dynamics and Control.Cambridge,
[14] Willey J. L., James R. W. Space Mission and Analysis.
Springer-Verlag, 1999.
[15] Lewis F. L., Syrmos V. L. Optimal Control. 2nd
edition. John Wiley & Sons, 1995.
[16] Franklin G. F., Powell J. D., Emami-Naeini A. Feedback
Control of Dynamic Systems. Prentice-Hall, 2002.
[17] Roubal J., Augusta P. and Havlena V. A Brief
Introduction to Control Design Demonstrated on
Laboratory Model Servo DR300 – Amira.
Acta Electrotechnica et Informatica, No. 5, Vol. 4,
[18] Yang C.D., Kung C.C. Nonlinear H∞ flight control of
general six-degree-of-freedom motions, Journal
of Guidance, Control and Dynamics, 23, No. 2,
2000, -288.
[19] Wertz (Ed.) J.R. “Spacecraft Attitude Determination
and Control”, Reidel, Dordrecht, 1978.
[20] Wie B., Weiss H., Arapostathis A. Quaternion Feedback
Regulator for Spacecraft Eigenaxis Rotations,
Journal of Guidance, Control and Dynamics,
12, No. 3, 1989, pp. 375-380.
[21] Wu C.S., Chen B.S. Unified design for H2, H∞ and mixed
control of spacecraft, Journal of Guidance, Control
and Dynamics, 22, No. 6, 1999, pp. 884-896.
[22] Naasz, B. “Classical Element Feedback Control for
Spacecraft Orbital Manoeuvres”, S. Thesis,
Department of Aerospace Engineering, Virginia
Polytechnic Institute and State University, 2002
Aro Habeeb Olalekan
National Space Research and Development Agency,
Obasanjo Space Centre, Airport Road,
P.M.B 437 Garki Abuja, FCT Nigeria
Tel: +79117739883; e-mail:
Abstract: A successfully designed control system should always be able to maintain stability and
performance level, to a certain degree, in spite of a priori uncertainty in the system dynamics and/or in the
working environment. Robustness is the ability of a designed control system to maintain its performance and
stability characteristics under the effect of all types of influences. The design of such systems using the
methods of mathematical simulation substantially reduces the cost and time of design as well as
increases the effectiveness of such designs. This makes it possible not only to set up the mathematical
models of the robust control systems, but also to obtain the required performance using quality control
algorithms in accordance with the stated goal.
Keywords: flight control system, launcher, nonlinear systems, nonlinear dynamics, motion control, robust
control, flexible object, control methods, simulation and mathematical models.
A control system (or plant, or process) is an interconnection of
components that performs certain tasks and yields a desired
response, i.e. generates a desired signal (the output) when it
is driven by a manipulating signal (the input).
A control system is a causal, dynamic system: the
output depends not only on the present input but also on the
input at previous times. A control system can be of either
open-loop or closed-loop type.
Designing a control system is a creative process involving a
number of choices and decisions. These choices depend on
the properties of the system that is to be controlled and on the
requirements that are to be satisfied by the controlled system.
The decisions imply compromises between conflicting
requirements. The design of a control system involves the
following steps:
Characterize the system boundary, that is, specify the
scope of the control problem and of the system to be
controlled.
Establish the type and the placement of actuators in the
system, and thus specify the inputs that control the
system.
Formulate a model for the dynamic behaviour of the
system, possibly including a description of its
uncertainty.
Decide on the type and the placement of sensors in the
system, and thus specify the variables that are available
for feedforward or feedback.
Formulate a model for the disturbances and noise
signals that affect the system.
Specify or choose the class of command signals that
are to be followed by certain outputs.
Decide upon the functional structure and the character
of the controller, also in dependence on its technical
implementation.
Specify the desirable or required properties and
qualities of the control system.
Robustness is of crucial importance in modern control
system design because real engineering systems are
vulnerable to external disturbance and measurement noise
and there are always differences between mathematical
models used for design and the actual system. The effect of
external disturbances and measurement noise as well as the
differences between mathematical models used for design
and the actual system is termed a priori uncertainty.
The differences between mathematical models used for the
design and the actual system are called dynamic
perturbations (∆). Dynamic perturbations are due to some
factors namely:
The initial conditions (IC) of a system may not be
accurately specified or may be completely unknown;
A mathematical model of any real system is always just
an approximation of the true physical reality of the
system dynamics. Typical sources of the discrepancy
include unmodelled (usually high-frequency) dynamics,
neglected nonlinearities in the modelling, effects of
deliberately reduced-order models, and system-parameter
variations due to environmental changes and
wear-and-tear factors.
Gp(s) = G0(s) + ∆(s) ,
where Gp(s) denotes the actual, perturbed system
dynamics, G0(s) denotes the nominal model description of
the system and ∆(s) denotes the dynamic perturbation of
the system.
As mentioned earlier, there are two types of control
systems, namely open-loop and closed-loop systems.
An open-loop system uses a controller or control actuator to
obtain the desired response. In an open-loop system, the
output has no effect on the input. In contrast to an open-loop
system, a closed-loop control system uses sensors to
measure the actual output to adjust the input in order to
achieve desired output. The measure of the output is called
the feedback signal, and a closed-loop system is also called a
feedback system.
Robust systems belong to the closed-loop type of systems.
Feedback controllers are used to achieve robustness in the system.
In classical single-input single-output control (SISO),
robustness is achieved by ensuring good gain and phase
margins. Due to the increasing complexity of physical
systems under control and rising demands on system
properties, most industrial control systems are no longer
single-input and single-output (SISO) but multi-input and
multi-output (MIMO) systems with a high interrelationship
(coupling) between these channels. The number of (state)
variables in a system could be very large as well. These
systems are called multivariable systems.
When multivariable design techniques were first developed
in the 1960s, the emphasis was placed on achieving good
performance, and not on robustness. These multivariable
techniques were based on linear quadratic performance
criteria and Gaussian disturbances, and proved to be
successful in many aerospace applications where accurate
mathematical models can be obtained, and descriptions for
external disturbances/noise based on white noise are
considered appropriate. However, application of such
methods commonly referred to as the linear quadratic
Gaussian (LQG) methods, to other industrial problems made
apparent the poor robustness properties exhibited by LQG
controllers. This led to a substantial research effort to develop
a theory that could explicitly address the robustness issue in
feedback design.
The appearance of new methods and the development of
known methods for creating robust laws of control under the
conditions of a priori uncertainty, i.e., with incomplete
information about the parameters or the characteristics of the
object, testifies to the fact that the problem of
robustness in control systems is not yet fully solved and
research on it is going on all the time.
Due to its importance, many methods of robust control
system design have been developed that examine
the different approaches to robustness in controlling classes
of objects with parametric uncertainty. Such works are based
on different methods, but I would mention mainly the most
widely used methods at present, namely:
the H∞ method,
the µ-synthesis / analysis method,
the Linear Matrix Inequalities (LMI) method.
The H∞ approach and the µ-synthesis / analysis method
are well developed and elegant; they are a well-quantified
extension of the classical control design methods, fully
applied in the frequency domain. They provide systematic
design procedures of robust controllers for linear systems,
though the extension to nonlinear cases is being actively
researched. In the H∞ approach, the designer specifies from
the beginning a model of the system uncertainty, such as an
additive perturbation and/or an output disturbance, that is most
suited to the problem at hand. A constrained optimisation is
then performed to maximise the robust stability of the closed-loop
system to the type of uncertainty chosen, the constraint
being the internal stability of the feedback system. In most
cases, it is sufficient to seek a feasible controller such
that the closed-loop system achieves certain robust stability.
Performance objectives can also be included in the
optimisation cost function.
Robust stability means the stability of all possible models
that are described by the combination of the nominal model
and the uncertainty.
3.1 The phase and gain margins
The phase φm and gain gm margins are used as
robustness measures to evaluate classical controller design
(SISO). Both values can be clearly interpreted in terms of
Nyquist plot and Bode diagram of a closed-loop SISO system.
The Nyquist criterion states that a system is stable if the polar
curve does not encircle the point -1 (for an open-loop stable
system). The Bode diagram of a SISO system is plotted using
the MATLAB command "bode(system)" in the MATLAB
environment. The stability of the system is evaluated by
analysing both margins of the system.
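These margins can also be checked numerically outside MATLAB; a Python sketch for an illustrative open loop L(s) = 1/(s(s+1)(s+2)), an assumed example rather than a system from this paper:

```python
import numpy as np

# Numerical gain/phase margins for the assumed loop
# L(s) = 1 / (s (s+1) (s+2)), evaluated on a frequency grid.
w = np.logspace(-2, 2, 200000)
s = 1j * w
L = 1.0 / (s * (s + 1.0) * (s + 2.0))
mag = np.abs(L)
phase = np.unwrap(np.angle(L))   # radians, starts near -pi/2

# Phase crossover (phase = -180 deg) -> gain margin = 1/|L|
i_pc = np.argmin(np.abs(phase + np.pi))
gain_margin = 1.0 / mag[i_pc]

# Gain crossover (|L| = 1) -> phase margin = 180 deg + phase
i_gc = np.argmin(np.abs(mag - 1.0))
phase_margin_deg = 180.0 + np.degrees(phase[i_gc])

print(round(gain_margin, 1), round(phase_margin_deg, 1))
```

For this loop the phase crossover lies at ω = √2 rad/s, giving a gain margin of 6 (about 15.6 dB) and a phase margin of roughly 53 degrees, which matches what MATLAB's margin-style analysis would report.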
3.2 The small-gain theorem
The small-gain theorem plays an important role in H∞
optimisation methods, and it is used in the derivation of
stability conditions.
Consider two stable linear time-invariant systems with
transfer functions G1(s) and G2(s) connected in a feedback
loop (Figure 1), i.e. G1 ∈ H∞ and G2 ∈ H∞. Then the
closed-loop system is internally stable if
||G1G2||∞ < 1 and ||G2G1||∞ < 1 .
Fig. 1. Feedback configuration
A closed-loop system of the plant G and controller K is
robustly stable if it remains stable for all possible, under a
certain definition, perturbations on the plant. This implies that
K is a stabilising controller for the nominal plant G, since we
always assume that the perturbation set includes zero (no
perturbation).
Let us consider the case of additive perturbation (Figure 2),
where ∆(s) is the perturbation, a "full" matrix, unknown but
stable.
Fig. 2. Additive perturbation configuration
It is easy to work out that the transfer function from the
signal v to u is
Tuv = –K(1 + GK)^-1 .
The controller K should stabilise the nominal plant G. Hence,
from the small-gain theorem, it can be concluded that for
stable ∆(s) the closed-loop system is robustly stable if K
stabilises the nominal plant and the following holds:
||∆K(1 + GK)^-1||∞ < 1 , or ||K(1 + GK)^-1||∞ ||∆||∞ < 1 .
Given a priori knowledge of the perturbation,
∆(s) = ∆̃(s)W2(s) ,
where ∆̃(s) is from the unit-norm perturbation set, the robust
stabilisation condition becomes, in a strengthened form,
||W2K(1 + GK)^-1||∞ < 1 .
If it is required to find a controller that robustly stabilises the
largest possible set of perturbations, in the sense of the
∞-norm, it is then clear that we need to solve the following
minimisation problem:
minimise ||W2K(1 + GK)^-1||∞ over all stabilising K .
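The strengthened condition ||W2K(1+GK)^-1||∞ < 1 can be checked numerically on a frequency grid; the plant, controller and weight below are assumed examples, not systems from this paper:

```python
import numpy as np

# Grid-based check of the robust stability condition
# ||W2 K (1+GK)^-1||_inf < 1 for an assumed SISO example.
w = np.logspace(-3, 3, 10000)
s = 1j * w
G = 1.0 / (s + 1.0)                 # nominal plant (assumed)
K = 5.0                             # proportional controller (stabilising)
W2 = 0.1 * (s + 1.0) / (s + 10.0)   # additive-uncertainty weight (assumed)

T = W2 * K / (1.0 + G * K)          # W2 K (1 + G K)^-1 on the grid
hinf_norm = np.max(np.abs(T))       # grid approximation of the inf-norm
print(hinf_norm < 1.0)
```

Here the norm evaluates to about 0.5, so this controller robustly stabilises all additive perturbations bounded by W2; a proper synthesis would instead minimise this norm over K.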
3.3 Performance analysis and enhancement
One of the goals of feedback control is to achieve the
performance requirement on the whole family of models (i.e. robust
performance) with a single controller. Robust stability and
nominal performance are prerequisites for achieving robust
performance.
Performance enhancing goals include such things as
disturbance attenuation, sensitivity reduction, the reduction of
nonlinear effects and command tracking enhancement.
The Robust Control Toolbox in MATLAB is a collection of
functions and tools that helps to analyze and design multi-input
and multi-output (MIMO) control systems with a priori
uncertainties. It computes controllers that optimize worst-case
performance and identifies worst-case parameter values. The
toolbox allows simplifying models and reducing the order of
complex models with model-reduction tools that minimize
additive and multiplicative error bounds.
The main objective of this research is the development of a
cheap application methodology of robust control synthesis for
sounding rocket altitude stabilization. This methodology is
required to meet the criteria listed below:
- It must be cheap to develop. Since a sounding rocket does not
cost as much as a conventional rocket used to place a payload
into orbit, the budget for a sounding rocket project is relatively
low; to work within the limited funds available, the cost of the
controller has to be taken into consideration;
- Simplicity is another factor to be considered in the
choice of robust control methodology;
- Effectiveness and practicability, coupled with a history of
successful real applications, are also to be considered in the
choice of methodology.
The linearised model of the perturbed motion of a sounding
rocket takes the form:

Θ̇ = aΘα α + aΘδz δz + F̄y(t),
θ̈ = aθθ̇ θ̇ + aθα α + aθδz δz + M̄z(t),
α = θ − Θ,
ny = anyα α + anyδz δz,
where δoz is the desired angle of the fins' deflection (the servo
actuator reference), ωδz is the natural frequency and ξδz is
the damping coefficient of the servo actuator.
The sounding rocket transfer function takes the form:

Gny δoz(s) = ny(s) / δoz(s).
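The linearised model above can be assembled numerically as a state-space system and the transfer function ny(s)/δoz(s) extracted with SciPy. All coefficient values below are assumed for the sketch, since the paper gives no numbers:

```python
import numpy as np
from scipy import signal

# Sketch under assumed, illustrative coefficient values: assemble the
# linearised short-period model
#   dΘ/dt   = aΘα α + aΘδz δz
#   d²θ/dt² = aθθ' dθ/dt + aθα α + aθδz δz
#   α       = θ − Θ
#   ny      = anyα α + anyδz δz
# with state x = [Θ, θ, dθ/dt] and input δz, then extract ny(s)/δz(s).

a_ta, a_td = 0.8, 0.05                 # aΘα, aΘδz (assumed values)
a_qq, a_qa, a_qd = -1.2, -4.0, -6.0    # aθθ', aθα, aθδz (assumed)
a_na, a_nd = 12.0, 0.7                 # anyα, anyδz (assumed)

A = np.array([[-a_ta, a_ta, 0.0],
              [0.0,   0.0,  1.0],
              [-a_qa, a_qa, a_qq]])
B = np.array([[a_td], [0.0], [a_qd]])
C = np.array([[-a_na, a_na, 0.0]])     # ny = anyα (θ − Θ) + anyδz δz
D = np.array([[a_nd]])

num, den = signal.ss2tf(A, B, C, D)
print("ny(s)/δz(s) numerator:  ", np.round(num[0], 4))
print("ny(s)/δz(s) denominator:", np.round(den, 4))
```

The denominator is the characteristic polynomial of A, so the same state-space object can also be reused directly for the H∞ synthesis experiments described later.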
In this research the object of study is likely not to be
available for direct experimentation because of cost and
safety, so I would start with high-fidelity simulation of
the finite-dimensional linear time-invariant model using
powerful application programs such as the Robust Control Toolbox in
MATLAB, and use SIMULINK in the design of the model of
the stabilization system.
In this project the H∞ sub-optimal method is going to be
implemented, combined with the μ synthesis/analysis method,
based on the following features:
aΘα = (P + Y^α)/(mV),  aΘδz = Y^δz/(mV),
aθθ̇ = Mz^ωz/Jz,  aθα = Mz^α/Jz,  aθδz = Mz^δz/Jz,
anyα = (Q + Y^α)/(mg),  anyδz = Y^δz/(mg),
F̄y(t) = Fy(t)/(mV),  M̄z(t) = Mz(t)/Jz,
Y^α = cy^α qS,  Y^δz = cy^δz qS,  Mz^α = mz^α qSL,
Mz^ωz = mz^ωz qSL²/V,  Mz^δz = mz^δz qSL.
The equation describing the rotation of the fins is:

δ̈z + 2ξδz ωδz δ̇z + ωδz² δz = ωδz² δoz.
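The fin servo equation is a standard second-order lag. A minimal sketch of its step response, with assumed natural frequency and damping:

```python
import numpy as np
from scipy import signal

# Sketch of the fin servo actuator dynamics, with assumed natural
# frequency and damping (the paper gives no numbers):
#   δz'' + 2 ξ ωn δz' + ωn² δz = ωn² δz°
# i.e. the transfer function δz(s)/δz°(s) = ωn² / (s² + 2 ξ ωn s + ωn²).

wn = 30.0   # natural frequency, rad/s (assumed)
xi = 0.7    # damping coefficient (assumed)

servo = signal.TransferFunction([wn**2], [1.0, 2.0 * xi * wn, wn**2])
t, dz = signal.step(servo, T=np.linspace(0.0, 0.5, 1000))

print(f"steady-state deflection: {dz[-1]:.3f} (reference = 1)")
print(f"peak overshoot: {100.0 * (dz.max() - 1.0):.1f} %")
```

With ξ = 0.7 the response is lightly underdamped (a few percent overshoot), which is a common compromise between speed and oscillation for fin actuators.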
Fig.3. Block diagram of the stabilisation system
- Based on a powerful mathematical background;
- Systematic and simple application procedures;
- Good history of real applications;
- Clear and logical design steps that are similar to classical methods;
- Developed and applied by leading Russian professionals for the last decade;
- Well-developed tools and published literature;
- Availability of a modern programming language (Matlab) for its implementation;
- Modified and extended for different applications.
It is intended in the course of the research to obtain a
sufficient quantity of material in order to give an objective
evaluation of the possibility of applying different procedures
to the solution of the given problem. The choice of the object of
research would also determine to a great extent the
experimental part of this thesis. In fact, the real physical
structure of a sounding rocket might not be available for
experimental purposes because of several factors:
- Licensing from the internal affairs ministry and other
agencies responsible for granting licenses to build a real-life
model of a rocket;
- Safety: since the sounding rocket is a solid-propellant
rocket, safety is a vital factor in dealing with highly
inflammable and combustible propellant;
- Cost, too, is a vital factor in carrying out a real-life
experimental project.
Different robust methods of approach might also be
considered, together with the methods stated above, during the
course of this research in order to achieve the desired
stability and performance characteristics of the system.
[1] Dullerud, G.E. and Paganini, F.G. A Course in Robust Control Theory. Springer, USA, 2000.
[2] Siouris, G.M. Missile Guidance and Control Systems. Springer, USA, 2004.
[3] Petkov, G., Konstantinov, M.M. and others. Robust Control Design with Matlab. Springer, USA, 2005.
[4] Astrom, K.J. and Murray, R.M. Feedback Systems. Princeton and Oxford, UK, 2008.
[5] Nebylov, A.V. Ensuring Control Accuracy. Lecture Notes in Control and Information Sciences. Springer, USA, 1998.
[6] Sanchez-Pena, R. and Sznaier, M. Robust Systems. Theory and Applications. Wiley, USA, 1998.
[7] Yanushevsky, R. Modern Missile Guidance. CRC, USA.
Adeniyi Gbadebo Omoniyi
International Institute for Advanced Aerospace Technologies
of Saint Petersburg State University of Aerospace Instrumentation
Saint Petersburg, Russian Federation
e-mail: , Tel.: +79117672101, +2348134762916
Abstract: Investigating gas dynamics, detecting dusted gas flow dynamics and flaws in aerospace vehicle engines
and materials, or unveiling cracks due to dusted flow without destroying the test engine or burning chamber, has always
been a challenge for engineers and the aerospace industry. To avoid damaging the samples being examined,
scientists have developed various non-destructive techniques such as the X-ray test facility. This paper investigates the
theory, principles of operation and sensitivity of the X-ray test facility for investigations of dusted flow. Also, the transient
response and dynamic response of the various measurement systems accompanying the X-ray testing (the pressure and
temperature measurement systems) and applicable to dusted gas flow were investigated. The X-ray test facility and the effective
simulation and measurement of pressure and temperature investigated in this paper give a precise picture of the dynamics of heat
defects and flow, pressure fluctuation and temperature profile in dusted flow.
Keywords: Gas dynamic, Dusted flow, X-ray Test Facility, Pressure Transducer, Temperature Sensor, Transfer
Function, Dynamic Error, Transient Response, Step Response.
The study of internal gas dynamics of dusty flow is important in
connection with processes which occur in the aerospace vehicle
engine during operation and the problem of thermo- effect arising from
generating the propelling force for a flying vehicle. Dusted flow
explains many features of the aerospace propulsion system. By
dusted flow one means the flow of a mixture of gas and small
particles (ash or fine powder) such that the particles carry a very
high temperature. Thus, from the internal gas dynamics point of view,
dusted flow is a two-phase flow of a gas and particles. Several
researchers have applied modern techniques to investigation of the
internal dynamics of gas flow in aerospace vehicle’s engine. Many
others have tried to solve the problems of dusted gas flow through
finite volume under some boundary conditions by using the techniques
of differential equations. Due to the nonlinear character of the basic
equations governing these problems, the complexity of internal
flow geometries such as nozzles, and the nonlinear nature
inherent in the strong coupling of turbulent flow and chemical
reaction [5], the attention of researchers in recent years has remained
on applicable and accurate investigation techniques for the prediction
and estimation of gas dynamic parameters and even thermal effects. X-ray
test facility together with some other complementary techniques such
as temperature measurement system (temperature sensor) and
pressure measurement system (pressure sensor) effective for dusted
flow problems are really drawing attention. This paper
presents the whole mechanism of operation of the X-ray test facility for
detection and investigation of the internal gas dynamic processes of dusted
flow, using both analytical approaches and simulation, together with the
dynamics of both the temperature sensor and the pressure sensor.
Detecting flaws in materials or unveiling cracks due to dusted flow
without destroying either one of them has always been a challenge for
engineers and aerospace industries. To avoid damaging the samples
being examined, scientists have developed various non-destructive
techniques. X-ray tomography is used as a precision method of materials
characterization and defect location to ensure high reliability of
aerospace hardware and conformance to design requirements.
This sophisticated and proven technology is employed for non-destructive
evaluation (NDE) of materials and structures. X-ray tomography can
be used to produce both two-dimensional and three-dimensional
images of structures, materials, parts, and components. These images
provide information that is useful for inspection, evaluation, and
diagnostics of complex hardware. X-ray tomography uses high
intensity X-ray sources, sensitive electronic detectors, sophisticated
computer control and analysis techniques, and automated
manipulation systems capable of translating and rotating test objects,
to produce cross-sectional images of precision aerospace components
[1]. This test method can be used to verify tolerances, to determine
relative material densities, to locate inclusions or defects, and to
measure the extent of erosion and ablation in composite materials.
The equipment and procedure is applicable to metals, composites, or
combination metal/composite structures. Cross-sectional images are
generated by rotating and translating test objects slowly in the
X-ray beam; air traps in aerospace vehicle propellant and
possible cracks can also be investigated. The technique has been
used for verification of nozzle integrity, thermocouple location, location
of high or low density inclusions, delaminations or debonds and
profound material density variations [8].
Projection Radiography is the first method developed using
penetrating photons. It is based on the interaction of photons inside
the material. Photons are more or less absorbed depending on the
density of electrons of the tested sample. A sensitive film is placed
behind the sample on the opposite side of the X-ray source and it
shows variations in intensity due to density difference inside the
material (Fig. 1). The typical energy for the X-ray photons used in this kind
of experiment is between 20 keV and 200 keV, which is far larger than the
visible photon energy of only a few eV [10]. A different
technique, called Compton Backscattering Imaging (CBI), is based on
backscattered photons. The information is not given by photons which
pass through the sample, as in transmission radiography, but by
photons which are reflected back on the same side of the source. The
detector senses photons coming back from the sample. These
photons have interacted with the medium (Compton interaction) (Fig.
2) and are reflected back with a different energy. The energy of
backscattered photons depends on the medium with which they
interact. By counting the number of photons coming back, information
about the target can be deduced. This technique has an advantage
compared to transmission radiography because both sides of the
object do not need to be accessed. However, surface imperfections and
ridges affect the CBI image quality. These features can prevent
first-collided photons from reaching the detectors and lead to
multiple-collision photons entering the detectors instead. Because of this
obstruction and corruption, the CBI image of an object with unknown
surface imperfections does not uniquely represent the surface and the
regions underneath. Furthermore, slow-running complex algorithms
are necessary to extract useful information. The shallow first-scattered
component of the backscattered field, represented by the thick (red)
arrow, is the dominant signal contribution and clearly overwhelms the
contribution of the deeper penetrating and/or multiple-collided photons,
which are usually considered as noise (Fig. 3 and Fig. 4); this imposes a
constraint on detector size, detector collimation, and mode of detector
operation [2, 9].
Compton scatter can be understood as a relativistic billiard ball
collision between incident photons and the electrons after which the
photon energy and angle change. The energy of the scattered photon
is given by (1):

α' = α / (1 + α(1 − μ)),

where α and α' are the incident and scattered photon energies,
respectively, expressed in terms of the electron rest mass, and μ is the
cosine of the scattering angle.
The Klein-Nishina differential scattering cross section gives the
probability of scatter from free electrons in a given direction as (2):

σ(α, μ) dμ = π r₀² (α'/α)² (α/α' + α'/α + μ² − 1) dμ,

Fig.1. Radiography method

Fig.2. Compton backscattering interaction

where r₀ is the classical electron radius; the unit is cm². The
above two equations form the basis for defining Compton scatter from
a material. The scattering probability from a material is given by (3):

Ps = σs/σt = σs/(σa + σs),
where σ s is the scattering cross section and σt is the total cross
section of the material. The total cross section is the sum of the
absorption cross section, σ a and the scattering cross section σ s . σ a
is the sum of the photoelectric cross section which is high at very low
energies and the pair production cross section which increases with
the photon energy above the 1.02 MeV threshold.
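Equations (1) and (2) are straightforward to evaluate numerically. A small sketch (photon energies in units of the electron rest mass, 511 keV; the 160 keV source energy is taken from the text):

```python
import numpy as np

# Sketch of equations (1)-(2): Compton-scattered photon energy and the
# Klein-Nishina differential cross section. Energies are in units of the
# electron rest mass (alpha = E / 511 keV); mu is cos(theta).

R0 = 2.8179403262e-13  # classical electron radius, cm

def scattered_energy(alpha, mu):
    """Equation (1): scattered photon energy alpha'."""
    return alpha / (1.0 + alpha * (1.0 - mu))

def klein_nishina(alpha, mu):
    """Equation (2): differential cross section, cm^2 per unit mu."""
    ap = scattered_energy(alpha, mu)
    return np.pi * R0**2 * (ap / alpha)**2 * (alpha / ap + ap / alpha + mu**2 - 1.0)

# A 160 keV source photon backscattered straight back (theta = 180 deg, mu = -1)
alpha0 = 160.0 / 511.0
print(f"backscattered energy: {511.0 * scattered_energy(alpha0, -1.0):.1f} keV")
print(f"KN cross section at mu = -1: {klein_nishina(alpha0, -1.0):.3e} cm^2")
```

The backscattered energy is always below the incident energy, which is why the detector-side energy window in a CBI setup is set below the source energy.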
The LPX-160 is a rugged, commercial X-ray tube designed for field
inspections. The LPX-160 has maximum X-ray spectrum energy of
160 kVp and a maximum power level of 800 watts. The optimum X-ray
source energy for the detection with backscattered X-rays is in the
range of 120 kVp to 160 kVp. Good imaging quality requires about two
million source X-rays per pixel and this translates into an electric
energy requirement of one joule per pixel. The X-ray energy chosen
by the user depends on the material sample. If the energy is too small,
photons are stopped before the flaw and if it is too large, photons go
through the sample without interacting with anything. The typical
energy range for plastic is between 50 and 150 kV [6], and for
aluminum the energy used is 75 kV. The number of photons emitted
by the X-ray tube depends on the adjustable current. In most cases as
many photons as possible are desirable; however, too many photons
can saturate the detectors [6].
(2) Moving Platform
Fig.3. Representation of the shallow first scattered component of the
backscattered field
Fig.4. Multiple scattering in backscattering field
2.1 Basic Procedures of X-ray Test Facility and Its Components
The required procedures and basic components of the X-ray test
facility are illustrated as follows:
2.1.1 Image Acquisition System
The flaw detection image acquisition system consists of four
principal units: an industrial X-ray generator; a moving platform
assembly; radiation detectors photo multiplier, pre-amplifier,
associated counting devices and a PC that controls the image
acquisition process [3].
The moving platform allows a scanning, 2D (x-y) motion in the plane
perpendicular to the X-ray beam as well as the possibility of adjusting
the vertical distance between the X-ray generator and the target
(sample). The moving platform used is a motorized positioning
table, which forms the base to which the top sample table is
connected. A piece of lead has been positioned on the top of the table
to avoid noise-backscattered photons due to the platform [11]. The
actual sweep of the x-ray beam is controlled by a dual motor and
servo amplifier system. These motors are fully controllable with a
motion controller that constantly reads the position of the
motors with on-board encoders [1]. This motion controller can be
programmed or controlled “on-the-fly” by a computer through a RS232
connection and ASCII messages. LabVIEW builds the strings and
communicates with the controller. The controller allows a large
number of operating modes and settings, and the accuracy of the
position/motion of this platform well exceeds the requirements of these
experiments.
(3) Detector
The setup is equipped with detectors that emit light when penetrated
by a photon backscattered by the target [6]. Scintillation detectors
(The radiation entering the detector excites the NaI medium and,
during the relaxation process, photons in the visible range are emitted)
function by turning radiation energy into visible light, which is
subsequently collected and converted into an electrical signal (Fig.6).
The process by which visible light is produced from incident radiation
takes place in the scintillation material, the portion of the detector
which interacts with the radiation and the process is referred to as
fluorescence [3, 11].
(4) Photo multiplier
(1) X-Ray Source
The flaw detection image acquisition process requires the
target to be moving under or in front of an X-ray beam (the
X-ray generator is static). The X-rays are produced, for instance, by a
LORAD LPX200 generator (Fig. 5) [3]. X-ray generator provides an
excellent source from the standpoint of performance, size and weight.
Photomultiplier tubes (Fig. 7) collect the visible scintillation photons
and convert them into a measurable electric signal. The total process
progresses in three distinct stages. Photons from the scintillation
crystal impinge upon the photocathode region of the tube where they
are converted to electrons. These electrons are then channeled down
the electron multiplier where they are proportionately multiplied by
several (typically 5-7) orders of magnitude [6]. After this multiplication
process, the electrons are then collected at the anode end of the tube
where they have effectively become a now measurable electric signal
proportionate to the incident scintillation photons and thus, to the
original radiant energy deposition.
Fig.5.The LORAD LPX-160 and the rotating x-ray source collimator.
(5) Preamplifier
The preamplifier serves as an intermediate signal amplification step
between the detector and the analytical circuit used to process the
detected signal. The circuit components and time constant of the
preamplifier have important implication on the detector behaviour as a
whole [4].
2.1.2 Modes of Operation
The two possible modes of operation of signal generation in this
technology are: Pulse mode and current/integral mode.
2.1.3 Image Processing and Filtering
The filtering involves three filters: the most basic filter is called a
“rectangular window”. It gives the same weight to all pixels in the
averaging process. The second filter is an adaptive filter. This means
that if the error associated with a given pixel is below a user- defined
threshold, the filter will not do anything. Otherwise, an averaging
process will occur. The goal here is to avoid unnecessary filtering. It
is well known that averaging reduces the error, but it also
removes small features and smooths out an image. Since the
error for each pixel is known, this error can be compared to
a threshold value. The third filter is also an adaptive filter and is
identical in principle to the one described above.
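The adaptive filter described above can be sketched in a few lines; the 3×3 rectangular window and the error map below are assumptions for illustration:

```python
import numpy as np

# Minimal sketch of the adaptive averaging filter described above
# (assumed 3x3 rectangular window): a pixel is replaced by the local
# mean only when its associated error exceeds a user-defined threshold,
# so low-error pixels keep their small features.

def adaptive_filter(image, error, threshold):
    """Average only the pixels whose error estimate exceeds threshold."""
    padded = np.pad(image, 1, mode="edge")
    # local 3x3 mean via shifted sums (rectangular window, equal weights)
    local_mean = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return np.where(error > threshold, local_mean, image)

img = np.array([[10.0, 10.0, 10.0],
                [10.0, 50.0, 10.0],
                [10.0, 10.0, 10.0]])
err = np.array([[0.0, 0.0, 0.0],
                [0.0, 9.0, 0.0],
                [0.0, 0.0, 0.0]])

out = adaptive_filter(img, err, threshold=5.0)
print(out)  # only the noisy centre pixel is averaged with its neighbours
```

Pixels whose error stays below the threshold pass through untouched, which is exactly the "avoid unnecessary filtering" behaviour the text describes.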
Fig.7. Photograph of the photomultiplier assembly
2.2 Pressure Measurement System and Its Instrumentation
(Pressure Transducer)
It is critically important to measure accurately the transient pressure
present in dusted flow. The accuracy of the measured unsteady
pressure is directly related to the transducer's frequency performance,
so a fundamental understanding of transient pressure measurement
capabilities (Fig. 8 and Fig. 9) is vital to improved engine design. The
analyses below are based on linear second-order models of the system.
2.2.1 Mathematical Models of a Pressure Measurement System
Various theoretical models of pressure measurement systems have
been derived. Two analytical prediction models of connected volumes
have been extensively used in the high pressure combustion process:
Small Perturbation Analysis and Lumped-Parameter Analysis. The
small perturbation model (S.P.) is developed to analyze the dynamic
response of a set of various diameter tubes and dead volumes
connected in a series above the pressure sensor surface.
Fig.8. General-tube volume system
Fig.6. Bicron NaI Scintillator detector
The Navier-Stokes equations are simplified by assuming the flow
field can be linearized with very small sinusoidal perturbations about
the mean values, as follows (4-7):

P = Ps + P̃ e^(iωn t),   T = Ts + T̃ e^(iωn t),   U = Ũ e^(iωn t),   V = Ṽ e^(iωn t),

where Ps and Ts are the mean values and P̃, T̃, Ũ, Ṽ are the perturbation
amplitudes of pressure, temperature, flow velocity and volume,
respectively. The equations are further simplified to the recursive
relation between the pressure amplitudes of adjacent tube-volume
sections (8-11):

Pj / Pj−1 = { cosh(φj Lj) + (Vtj / Vj)(σj + 1/k) nj φj Lj sinh(φj Lj)
+ [Vt,j+1 φj+1 Lj J0(αj) J2(αj+1) sinh(φj Lj)] / [Vtj φj Lj+1 J0(αj+1) J2(αj) sinh(φj+1 Lj+1)] · [cosh(φj+1 Lj+1) − Pj+1 / Pj] }⁻¹,

φj = (ωn / a0j) √( J0(αj) / J2(αj) ) √( γ / nj ),

nj = [1 + ((γ − 1)/γ) · J2(αj √Pr) / J0(αj √Pr)]⁻¹,

αj = i √i Rj √(ωn ρ / μ),

where d is the diameter, Jn is the Bessel function of the first kind of
order n, k is the polytropic constant, L is the tube length, P is the
pressure, Pr is the Prandtl number, T is the temperature, t is the time,
u, v are the velocity components, V is the volume, α is the shear wave
number, φ is the dissipation factor, ωn is the natural frequency
(rad/sec), μ is the absolute viscosity of the fluid, ρ is the density, γ is
the specific heat ratio, σ is the dimensionless increase in transducer
volume due to diaphragm deflection, the subscript s denotes the mean
value, and the subscript t denotes the tube.

Using lumped-parameter analysis, with gas as the fluid medium, the
compressibility of the gas in the volume becomes the major system
effect when the pressure sensor is stiff. The volume is assumed to be
enclosed by rigid walls. This allows treatment as a lumped-parameter
system with fluid properties considered constant along the length of
the tube for small disturbances. This analysis yields a second-order
differential equation with constant coefficients (12):

J C d²Pm/dt² + R C dPm/dt + Pm = Pt,

where J is the inertance, C is the capacitance, R is the resistance, Pm
is the pressure change in the reservoir, and Pt is the pressure
disturbance. Each coefficient term is defined as follows (13-15):

J = 4ρL / (π dt²),   R = 128 μ L / (π dt⁴),   C = V / (γP),

where V is the volume of the reservoir including the volume of the
pressure transducer. When the changes in temperature and pressure
are small, all coefficients become constants. Then, substituting (13),
(14) and (15) into (12), the following differential equation is
obtained for the measured pressure (16):

(4ρLV / (π dt² γP)) d²Pm/dt² + (128 μLV / (π dt⁴ γP)) dPm/dt + Pm = Pt.

Fig.9. Pressure Sensor
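The lumped-parameter model (12)-(16) can be exercised numerically. The sketch below builds the transfer function Pm(s)/Pt(s) = 1/(JCs² + RCs + 1) from tube and volume values assumed for illustration, and checks its step response:

```python
import numpy as np
from scipy import signal

# Sketch of the lumped-parameter model (12)-(16) with assumed values:
# build Pm(s)/Pt(s) = 1 / (J C s^2 + R C s + 1) and inspect its step
# response. All physical parameters below are illustrative assumptions.

rho = 1.2        # gas density, kg/m^3            (assumed)
mu = 1.8e-5      # absolute viscosity, Pa*s       (assumed)
gamma = 1.4      # specific heat ratio            (assumed)
P0 = 1.0e5       # mean pressure, Pa              (assumed)
L = 0.1          # tube length, m                 (assumed)
dt = 1.0e-3      # tube diameter, m               (assumed)
V = 1.0e-6       # reservoir + transducer volume, m^3 (assumed)

J = 4.0 * rho * L / (np.pi * dt**2)        # inertance   (13)
R = 128.0 * mu * L / (np.pi * dt**4)       # resistance  (14)
C = V / (gamma * P0)                       # capacitance (15)

sys = signal.TransferFunction([1.0], [J * C, R * C, 1.0])
t, pm = signal.step(sys, T=np.linspace(0.0, 0.05, 5000))

wn = 1.0 / np.sqrt(J * C)                  # natural frequency, rad/s
print(f"natural frequency: {wn / (2 * np.pi):.1f} Hz")
print(f"steady-state Pm/Pt: {pm[-1]:.3f}")
```

The steady-state gain is exactly 1, so any static pressure is measured without bias; the natural frequency and damping determine how fast fluctuations can be followed.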
2.3 Temperature Measurement System in a Dusted Flow
(Temperature Sensor)
Dusted flow is a flow regime in the flying vehicle engine in which the
sensed temperature changes rapidly, so it becomes crucial to
understand the transient response characteristics of the thermocouple
and its dynamic error. Unavoidable dynamic errors occur during the
measurement of any temperature changing with time. The temperature
sensor dynamic error is defined as the difference between the sensor
temperature and the temperature measured by another, inertia-free
sensor exhibiting the same static errors; it is that part of the
systematic error which varies with time. To simplify the problem,
assume that the static error equals zero (17-18):

Δθdyn(t) = ϑT(t) − ϑ(t),
Δθdyn(s) = ϑT(s) − ϑ(s),

where ϑT is the sensor temperature and ϑ is the measured temperature;
below, ϑ is also simply referred to as temperature. The dynamic
properties of a temperature sensor are described by the following
equation (19):

F( y⁽ⁿ⁾(t), y⁽ⁿ⁻¹⁾(t), ..., y(t), ϑ⁽ᵐ⁾(t), ϑ⁽ᵐ⁻¹⁾(t), ..., ϑ(t) ) = 0,

where y(t), ..., y⁽ⁿ⁾(t) are the sensor output signal and its time
derivatives, and ϑ(t), ..., ϑ⁽ᵐ⁾(t) are the measured temperature
and its time derivatives. For linear dynamic behavior, the equation
becomes (20):

Σ (i = 0 to n) ai dⁱy(t)/dtⁱ = Σ (j = 0 to m) bj dʲϑ(t)/dtʲ,

where ai and bj are constant coefficients and m < n. The transfer
function GT(s) can be used to describe and present the dynamic
properties of a temperature sensor. It is the ratio of the Laplace
transform of the sensor output signal, y(s), to the Laplace transform of
the measured temperature signal, ϑ(s), when the initial conditions
are zero, so that (21):

GT(s) = y(s) / ϑ(s).

Taking equation (20) into account, the transfer function can then be
given by (22):

GT(s) = ( Σ (j = 0 to m) bj sʲ ) / ( Σ (i = 0 to n) ai sⁱ ) = L(s) / M(s).

The operational transfer function GT(s) is thus presented as the
ratio of the polynomials L(s) and M(s). The sensor transfer function
GT(s) can be expressed as a product of the transfer function of the
thermal conversion stage, FT(s), and of the coefficient KT representing
the properties of the electrical conversion stage and called the sensor
gain, so that (23-25):

GT(s) = KT FT(s),   FT(s) = ϑT(s) / ϑ(s),   KT = y(s) / ϑT(s).

The initial conditions at t = 0 are given by θT = ϑT − ϑa and
ϑ − ϑa > 0. According to Newton's law, when the sensor is immersed
in the medium, the heat transferred to the sensor in the time interval
dt is (26):

dQ = αA(θ − θT) dt,

where α is the heat transfer coefficient between sensor and medium
and A is the heat exchange area. The heat stored in the sensor is (27):

dQ = mc dθT,

where m is the mass of the sensor, and c is the specific heat of the
sensor material. It follows that (28-29):

αA(θ − θT) dt = mc dθT,

(mc / αA) dθT/dt + θT = θ.

Introducing the notation (30):

mc / (αA) = NT,

equation (29) can be expressed [7] as (31):

NT dθT/dt + θT = θ,

where NT, called the sensor time constant at the given heat
transfer condition α = const, is expressed in time units. Taking
the Laplace transform, the equation becomes (32):

NT s θT(s) + θT(s) = θ(s),

where s is the Laplace operator. Defining the transfer function of the
thermal stage of the sensor as (33):

FT(s) = θT(s) / θ(s),

equation (32) becomes (34):

FT(s) = 1 / (1 + NT s).

The sensor transfer function is (35-36):

GT(s) = y(s) / ϑ(s),   GT(s) = KT FT(s) = KT / (1 + NT s).
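The first-order sensor model (36) and the dynamic error definition (17) can be illustrated directly; the time constant NT below is assumed:

```python
import numpy as np

# Sketch of the first-order sensor model (36) with an assumed time
# constant NT: response of the sensor temperature to a unit step of the
# medium temperature, and the resulting dynamic error (17).

NT = 2.0                                   # sensor time constant, s (assumed)
t = np.linspace(0.0, 10.0, 1001)

theta = np.ones_like(t)                    # medium temperature (unit step)
theta_T = 1.0 - np.exp(-t / NT)            # first-order sensor temperature
dyn_error = theta_T - theta                # dynamic error, equation (17)

idx = np.searchsorted(t, NT)
print(f"sensor reading at t = NT: {theta_T[idx]:.3f}")
print(f"dynamic error at t = NT:  {dyn_error[idx]:.3f}")
```

At t = NT the sensor has reached only about 63% of the step, which is the error the corrector discussed next is meant to remove.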
Equation (36) shows that the transfer function of an idealized
temperature sensor is that of a first-order inertia. To eliminate the
dynamic error of a temperature sensor, a corrector with a transfer
function GC(s) can be used. The dynamic error will be zero if (37):

G(s) = GT(s) GC(s) = KT KC.

Thus the transfer function of a perfect corrector should be (38):

GC(s) = KT KC / GT(s),

where KC is the corrector gain. If the sensor is approximated by a
first-order inertia element, with the transfer function given by
equation (36), the transfer function of the corrector should be (39):

GC(s) = KC (1 + s NC),

where NC = NT. Since each practical corrector has some inertia,
which has to be considered in the transfer function of equation (39),
the final corrector transfer function is (40):

GC(s) = KC (1 + s NC) / (1 + s NC / k),

where k is the correction coefficient, k > 1. Satisfying the condition
NC = NT, the transfer function of the sensor-corrector set will be (41):

G(s) = GT(s) GC(s) = KT KC (1 + s NC) / ((1 + s NT)(1 + s NC / k)) = KT KC / (1 + s NC / k).

Fig.10. Experimental set-up for X-ray testing of flow and heat defect in
the dusted flow

3.1 X-ray Testing Set-Up

The experimental set-up for the investigation of the internal gas
dynamics of dusted flow is shown in Fig. 10. The set-up involves the
X-ray source, a LORAD LPX200, which has an energy of 160 kVp and a
power level of 800 watts. For this investigation the energy is limited to
the range of 75-80 kVp and the power to 500 watts (for Titanium
materials). The test object is a gas generator. The gas generator is an
aerospace vehicle engine which burns metalized fuel, oxidizer and
other additives (grain) to produce large volumes of relatively hot gases
with hot particles and the propelling force for the vehicle. These
particles are small metals with a great tendency to carry heat to the
wall of the generator. The generator used in the experiment comprises
the combustion chamber and the nozzle and is made of Titanium.
Titanium is a chemical element with the symbol Ti and atomic number
22, sometimes called the "space age metal"; it has a low density and is
a strong, lustrous, corrosion-resistant transition metal with a silver
color. The Titanium used here is alloyed with iron and aluminum to
produce strong lightweight alloys. The projection of the gas generator
onto the X-ray beam is shown in Fig. 11, together with the internal
view. The simulation was carried out using the X-ray compass program.

Fig. 11. Test object (gas generator)
3.2 Pressure Fluctuation during Flow

Pressure measurement simulation in dusted gas flow was based on
the differential equation (12). The model for the simulation is given
as (42):

d²Pm/dt² = (π dt² γP / (4ρLV)) (Pt − Pm) − (32μ / (ρ dt²)) dPm/dt.

Using Matlab-Simulink, the Simulink model for the investigation is
shown in Fig. 12. Also, the response of the system was predicted
with the transfer function given as (43):

Pm(s) / Pt(s) = 1 / (J C s² + R C s + 1),

where J, C and R were defined earlier. The dusted flow parameters
used for the investigation are shown in Table 1. The effect of the
pressure disturbance, which is assumed to be noise, on the pressure
change was predicted.

Table 1. Potassium-based grain parameters
Grain parameter: Value (dimensions)
Tube diameter (dt): 45 ( )
Density (ρ): ( )
Length (L): 1.0 (m)
Volume (V): 0.5 (m2)
Viscosity (μ): 73.4 × 10⁻⁸ ( )
Pressure disturbance (Pj): ( )
Initial pressure (P): 3.45 × ( )
3.3 Transient Response and Dynamic Error of the Thermocouple

A thermocouple is the temperature sensor used for the investigation in
this paper. It was chosen because it is a first-order device, meaning
that its control will be easier. The simulation of the transient response
and the dynamic error of the thermocouple was carried out based on
the following mathematical models. The Biot number, which is the
ratio of the thermal resistance to heat transfer by convection to the
thermal resistance to heat transfer by conduction, is (44):

Bi = h (V / A) / k,

where h is the heat transfer coefficient by convection, V is the
volume of the immersed portion, k is the thermal conductivity of the
thermocouple and A is the surface area in contact with the fluid. The
thermocouple is a first-order device and can be modeled as a lumped
heat capacity system. The solution of the energy balance of such a
system yields the transient response of the thermocouple (45):

(Tb − T∞) / (T0 − T∞) = exp(−hAt / (ρ cp V)),

where Tb is the bulk temperature of the thermocouple, T0 is the initial
temperature of the thermocouple and T∞ is the dusted fluid
temperature. When heat transfer is assumed to be governed solely by
conduction, the problem is presented in dimensionless form (46):

∂Ti* / ∂Fio = ∇² Ti*,

where t* is the dimensionless time, Ti* is the dimensionless
temperature, and Fio is the Fourier number (also known as a
dimensionless time):

Fio = αi t / D²MeasuredDomain,   αi = k / (cp ρ),

where αi is the thermal diffusivity of the thermocouple and
DMeasuredDomain is the characteristic size of the measured domain.
The initial and boundary conditions are (47-49):

T(r, t = 0) = T0 for r < R,   T(r, t = 0) = T1 for r > R,

with the conduction flux −kTC ∂TTC/∂r imposed at r = R,

where R is the characteristic length (the radius of the thermocouple
wire). Heat conduction with no thermal resistance to heat flow is
assumed at the thermocouple surface, and heat transfer by conduction
is assumed to be prevalent thereafter. The closed form of the solution
for the fluid-immersed thermocouple is of the form (50):

(Tb − T1) / (T0 − T1) = exp(−B* (αD t / R²)ⁿ).

Since the thermocouple is a first-order device, n is approximately
equal to 1, while B* varies over 1.582, 1.724, 1.830 and 1.833 [7].
Fig.12. Simulation of the pressure measurement system for dusted gas flow
The result of the simulation of flow in the gas generator
reveals turbulent flow (Fig. 13 and Fig. 14) with a great amount
of heat distribution concentrated along the wall of the
generator. The material of the design gave good capability
of withstanding the heat produced, with no heat defect at the
center line of flow but much defect along the wall. However,
the heat defect was seen to be concentrated mostly from the
chamber to the throat (Fig. 15). The flow was uniform,
indicating a good combination of the aerospace flying vehicle
grain ingredients. The X-ray image produced corresponds to
the discontinuity in the density of the gas generator material.
Changes in the density or thickness of the defect
correspond to the dose rate (intensity of radiation
transmitted along the section of the material). The brighter
the image, the higher the dose rate of the incident radiation.
The lower density region (defect region) appears brighter
than the higher density region. The resolution of the image
produced by the X-ray facility was also good; the intensity of
the resolution was noticed to increase with the time of exposure.
Pressure change over time during flow is given in the figure
16. Initially the pressure changes due to pressure
disturbance resulting from turbulent flow coupled with
chemical reaction. After a while, the change is stable.
Meaning, pressure change is a function of the pressure
disturbance. In design, selection of aerospace vehicle grain
combination which minimizes the pressure disturbance is a
solution to stabilizing burning pressure change. In the response
result (Fig.17), the stable response was obtained at time constant
of 1 and chi equal 0.85.
Thermocouple being a first order device is expected to give
a time response similar to that of a first order, but on the
contrary, it gives a response shown in figure 18. The time
constant was fast at the initial stage but the system gets to
stability lately. This abnormality is as a result of the effect of
harsh heat environment of dusted gas flow. Meaning, for
thermocouple to be effective and sensitive to measurement
of dusted gas flow, its thermal diffusivity must be far greater
than the thermal diffusivity of the dusted flow. Also, the
dynamic error and dynamics of correction were investigated.
The result shows that the correction model gave solution
for the device good response to dusted flow (Fig.19).
Fig.13. Initial flow within the chamber
Fig.14. Flow within the chamber after a certain period of time
Fig.15. Flow towards the duct exit
Fig.16. Pressure changes with time under the influence of the pressure disturbance
Fig.17. Pressure transducer time response
Fig.18. Thermocouple response against Fourier number with variable Biot (Bi) and n (order)
Fig.19. Dynamic error response investigation
The following conclusions could be drawn from this paper:
(1) An X-ray test facility giving good image representation of the internal gas dynamics of dusted flow, with good image resolution and visibility, was designed. The program also calculated the model correctly.
(2) An effective simulation of pressure measurement and of the dynamic behavior of the pressure transducer applicable to dusted gas flow investigation was designed.
(3) The thermocouple was proved to be the most effective temperature sensor for dusted gas flow when its thermal diffusivity is far greater than the thermal diffusivity of the dusted gas stream.
[1] Dugan, E., Jacobs, A., Shedlock, D. and Dan E.
Detection of Defects in Foam Thermal Insulation
Using Lateral Migration Backscatter X-ray
Radiography. Proceedings of SPIE 49th Annual
Meeting, Symposium on Optical Science and
Technology, Penetrating Radiation Systems and
Applications VI, Vol. 5541, 2004.
[2] Dugan, E., Jacobs, A., Houssay, L. and Ekdahl, D.
Detection of Flaws and Defects Using Lateral
Migration X-ray Radiography. Proceedings of SPIE
48th Annual Meeting, Symposium on Optical
Science and Technology, Penetrating Radiation
Systems and Applications V, Vol. 5199, pp. 47-61,
San Diego, 2003.
[3] Dugan, E., Jacobs, A., Su, Z., Houssay, L., Ekdahl, D.,
and Brygoo, S. Development and Field Testing of a
Mobile Backscatter X-ray Lateral Migration
Radiography Land Mine. SPIE Proceedings on
Detection And Remediation Technologies for Mines
and Mines like Targets VII Vol 4742, pp120-131,
Orlando, 2002.
[4] Dugan, S., Brygoo, D., Ekdahl, L., Houssay W and Su,
Z. Lateral Migration Radiography: A New X-ray
Backscatter Imaging Technique. Proceedings of
SPIE 47th Annual Meeting, Symposium on Optical
Science and Technology, Penetrating Radiation
Systems and Applications IV, Vol. 4786, pp. 1-16,
Seattle, 2002.
[5] Gou, O.V. and Kopchenov, V.I. Numerical investigation
of gasdynamic structure in the duct at supersonic
conditions at the entrance. Aeromechanikai
Gasovaya Dinamika, No. 1, pp 28-39, 2001.
[6] Jacobs, E., Dugan, S., Brygoo, D., Ekdahl, L., Houssay, I. and Su, Z. Lateral Migration Radiography: A New X-ray Backscatter Imaging Technique. Proceedings of SPIE 48th Annual Meeting, Symposium on Optical Science and Technology, Penetrating Radiation Systems and Applications V, Vol. 5199, pp. 47-61, San Diego, 2003.
[7] Michalski, L., Eckersdorf, K., Kucharski, J., and McGhee, J. Dynamics of Temperature Sensors. John Wiley & Sons Ltd, ISBN 0-471-86779-9 (Hardback); 0-470-84613-5 (Electronic), 2001.
[8] Salazar, S. Material Flaw Detection Characteristics of X-Ray Lateral Migration Radiography. Master's Project, University of Florida, 2000.
[9] Su, Z., Jacobs, A., Dugan, E., and Ekdahl, D. X-Ray
Lateral Migration Radiography System for the
Application of Landmine Detection. Proceedings of
SPIE 45th Annual Meeting International Symposium
on Optical Science and Technology, Vol. 4142, pp
150-160, San Diego, July, 2001.
[10] Wells, C., Su, Z., Allard, A., Salazar, S., Dugan, E., and
Jacobs, A. Suitability of Simulated Landmines for
Detection Measurements Using X-Ray Lateral
Migration Radiography. SPIE Proceedings on
Detection And Remediation Technologies for Mines
and Mines like Targets V, Vol. 4038, pp 578-589,
Orlando, FL, April, 2000.
[11] Wells, C., Su, Z., Moore, J., Dugan, E., and Jacobs, A.
Lateral Migration Radiography Measured Image
Signatures for the Detection and Identification of
Buried Landmines. Technical Report, ARO Grant,
No. DAAG-55-98-1-0400, University of Florida,
Johnson Opadere
International Institute for Advanced Aerospace Technologies of Saint Petersburg State University of Aerospace Instrumentation.
Abstract: This paper presents a fault-tolerant intelligent actuator system capable of detecting and isolating faults. A Triple Modular Redundancy scheme is implemented with redundant processors and actuators. An architecture hinged on the integration of smart sensors and actuators to achieve fault location and isolation is demonstrated. A reconfiguration algorithm for actualization of the desired actuator outputs, feasible with a summator, is also presented. A plurality of smart sensor systems is shown to provide plant characteristic monitoring in the form of feedbacks to enhance system reconfiguration and faulty actuator isolation. The transfer function of the system is also described.
Keywords: Fault-tolerance, Smart Sensors, Intelligent Actuators, Triple Modular Redundancy
As control systems must operate under extreme, harsh, unpredictable and unforeseen conditions, fault tolerance is important in control systems for mission-critical and safety-critical applications such as spacecraft avionics, fly-by-wire aircraft avionics, drive-by-wire automotive electronics, railway signaling systems, medical and life-support systems, and nuclear and chemical processes. The reliability, efficiency and robustness of a control system, among other things, rely largely on the sensing and actuating components. Faults at component level summarily imply system failure; the actuator, as one of the main control system components, therefore has to be unquestionably reliable. According to [4], an intelligent actuator is generally an entity that can have an effect on a mechatronic system or its surroundings (in contrast to a sensor, which can measure characteristics of the system or its surroundings) and possesses the capability to process information. Actuators may fail during system operation, and actuator failures may lead to performance deterioration or even overall system failure. Actuator failures are often uncertain in their failure patterns, failure time instants and failure values, which introduces not only signal uncertainties but also structure uncertainties into the controlled system. For certain safety-critical systems, such as a payload launch system, actuator failures, if not handled properly, may result in disasters. Control system designers are thus tasked with the responsibility of designing not only a high-performance system but also one that is reliably robust, such that failures are mitigated or, at the very least, reduced to within acceptable limits.
The paper focuses on the design of a TMR-based architecture to provide fault location and isolation at the actuation component level of the control system. The focus of the work is on the intelligent actuator, due to its unparalleled advantages over the traditional actuator. TMR fault-tolerant system design remains one of the most widely used fault-tolerant approaches in control applications; fault tolerance has been implemented by hardware triplication for processors, sensors and even traditional actuators. As in traditional actuators, fault-tolerant design is imperative for the robustness and reliability of an intelligent actuator. Section 2 discusses TMR as an embedded architecture. The actual design architecture employed in this work is presented in Section 3. Section 4 describes the transfer function of the overall intelligent actuator system.
Typically, all methods of fault-tolerant design include some form of redundancy. The configuration and integration of such systems with redundancies are feasible with embedded processors: the fault-tolerance tasks of computation, reconfiguration, identification and isolation are hinged on the embedded processors. This means that the system that is finally built is more complicated than that needed simply to perform the required task. There are three main categories of hardware redundancy. Static redundancy is based on voting on the outputs of a number of modules to mask the effects of a fault within these units; the simplest form of this arrangement consists of three modules and a voter and is termed a triple modular redundant (TMR) system. Dynamic redundancy, on the other hand, is based on fault detection rather than fault masking; it is achieved by using two modules and some sort of comparison of their outputs that can detect possible faults. This method has a lower component count but is not suitable for real-time applications. Hybrid redundancy uses a combination of voting, fault detection and module switching, thus combining static and dynamic redundancy [3]. This paper presents an architecture synonymous with hybrid redundancy that exhibits fault detection, isolation and reconfiguration.
The fault-tolerant embedded architecture for the actuator employed in this paper is based physically on a Triple Modular Redundant (TMR) system with "intelligence" to provide fault isolation and reconfiguration capabilities by the processor-actuator subsystem in the event of faults. A successful triplication of processors has been done by [5] to provide a fault-tolerant sensor in the TMR configuration. The configuration can be implemented with various sensor and processor sizes, as well as in System-on-Chip (SoC) design. The overall synopsis of the work is the presentation of a TMR intelligent actuator system, which goes further to implement a fault-tolerant actuator incorporating sets of embedded smart sensors. The "intelligence" characteristics of the actuator system are actively influenced by the integration of the embedded processors and smart sensors.
This work presents a TMR architecture for a fault-tolerant intelligent actuator. Emphasis is placed on the topology; however, the protocols for interconnection and communication between the processors, actuators and smart sensors are not included in this paper.
The design consists of three major components: three processors, three actuators and four sets of TMR smart sensor systems. The TMR scheme is applied to the aforementioned major components to achieve overall system robustness and reliability.
3.1 The Processor-Actuator TMR subsystem
In this work the three processors (controllers) are all interconnected, tolerating one faulty processor. Conventional system-level diagnosis theory establishes that a system of n processors is always diagnosable if

n = 2f + 1,

where f is the maximum number of faulty processors; with a three-processor system, this means no more than one faulty processor is allowed [2]. Similarly for the actuators, an N-modular redundancy scheme can tolerate

f = (N − 1)/2

faults, where N (an odd number) is the number of modules. In this work N = 3, so no more than one faulty actuator is tolerated [5].
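The two sizing rules above can be written down directly; this is an illustrative helper, not part of the paper's design:

```python
def max_faulty_processors(n):
    """System-level diagnosability n = 2f + 1  =>  f = (n - 1) // 2."""
    return (n - 1) // 2

def tolerated_actuator_faults(N):
    """N-modular redundancy with majority voting tolerates (N - 1) // 2 faults."""
    assert N % 2 == 1, "N should be odd for a majority voter"
    return (N - 1) // 2

# For the triplicated system in this work (n = N = 3), both rules give 1
print(max_faulty_processors(3), tolerated_actuator_faults(3))
```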
The triplicated processors work on TMR 3-2, which tolerates a single failure: at least 2-out-of-3 components must be working. Analyzing the reliability of the triplicated processors based on TMR 3-2 by binomial expansion,

B(r; n, p) = C(n, r) p^r (1 − p)^(n−r),

R = B(3; 3) + B(2; 3) = C(3, 3) p³(1 − p)⁰ + C(3, 2) p²(1 − p)¹,

R = 3p² − 2p³ = 3R_m² − 2R_m³.
Fig.1. Internal architecture of each smart sensor system (TSSS)

The voter logic is:
If A = B = C, then X = A or B or C (no failure);
If A = B ≠ C, then X = A or B (C failed);
If A ≠ B = C, then X = B or C (A failed);
If A = C ≠ B, then X = A or C (B failed).

For constant failure rate,

R(t) = 3e^(−2λt) − 2e^(−3λt).
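The voter decision logic translates into a few lines of code; a minimal sketch (the fault-attribution return value is an assumption of this illustration, not the paper's implementation):

```python
def tmr_vote(a, b, c):
    """2-out-of-3 majority voter with simple fault attribution.

    Returns (voted_output, suspected_faulty_module); disagreement of
    all three modules cannot be masked by a majority voter.
    """
    if a == b == c:
        return a, None            # no failure
    if a == b:
        return a, "C"             # C failed
    if b == c:
        return b, "A"             # A failed
    if a == c:
        return a, "B"             # B failed
    return None, "unresolvable"   # more than one fault: voter cannot mask

print(tmr_vote(5, 5, 9))
```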
The three actuators work on the TMR 3-2-1 approach, which implies that if 3 or 2 or 1 actuators are functioning, the overall system has not failed; that is, at least 1-out-of-3 components must be working. The reliability of the three TMR actuators based on the TMR 3-2-1 approach is:

R = B(3; 3) + B(2; 3) + B(1; 3) = C(3, 3) p³(1 − p)⁰ + C(3, 2) p²(1 − p)¹ + C(3, 1) p¹(1 − p)²,

R = 3p² − 2p³ + 3p(1 − p)² = p³ − 3p² + 3p.

And for constant failure rate,

R(t) = e^(−3λt) − 3e^(−2λt) + 3e^(−λt).
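The closed-form reliabilities can be cross-checked against the binomial sums they were derived from; a small verification sketch (the values of p and λ are arbitrary):

```python
import math

def binomial(r, n, p):
    """B(r; n, p) = C(n, r) p^r (1 - p)^(n - r)."""
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

def reliability_tmr_3_2(p):
    """TMR 3-2: at least 2-of-3 working. R = 3p^2 - 2p^3."""
    return 3 * p**2 - 2 * p**3

def reliability_tmr_3_2_1(p):
    """TMR 3-2-1: at least 1-of-3 working. R = p^3 - 3p^2 + 3p = 1 - (1-p)^3."""
    return p**3 - 3 * p**2 + 3 * p

p = 0.9
# Closed forms agree with B(3;3)+B(2;3) and B(3;3)+B(2;3)+B(1;3) respectively
print(reliability_tmr_3_2(p), reliability_tmr_3_2_1(p))
```

Substituting p = e^(−λt) reproduces the constant-failure-rate expressions R(t) given above.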
3.2 The TMR smart sensor system
The smart sensor is chosen over the traditional sensor for integration in the intelligent actuator system design due to its prominent "intelligence" advantages. The presence of a controller/processor in a smart sensor allows corrections for different undesirable sensor characteristics, including input offset and span variation, non-linearity and cross-sensitivity. As these corrections are carried out in software, no additional hardware is required and calibration becomes an electronic process. Thus it is possible to calibrate batches of sensors during production without the need to remove a sensor from its current environment or test fixture. Due to the existence of the processor within the package, it is possible to have digital communication via a standard bus and a built-in self-test. The smart sensor enables self-calibration, computation, communication and multi-sensing. The internal TMR scheme for each set of the four smart sensor systems is as illustrated in Fig.1.
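A minimal sketch of the software corrections described above (offset, span and a quadratic non-linearity term; all coefficient values are hypothetical, obtained in a batch calibration):

```python
def calibrate(raw, offset, span, nonlin):
    """Software correction applied in the smart sensor's processor.

    Removes the input offset, rescales for span (gain) variation and
    compensates a quadratic non-linearity; coefficients come from
    calibration and are stored in the sensor, no extra hardware needed.
    """
    linear = (raw - offset) * span
    return linear + nonlin * linear**2

# Hypothetical coefficients for one sensor of a production batch
reading = calibrate(2.48, offset=0.02, span=10.0, nonlin=-1.2e-4)
print(reading)
```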
Fig.2. The fault-tolerant intelligent actuator system

3.3 The TMR intelligent actuator architecture and the reconfiguration algorithm
The overall system architecture of the intelligent actuator is as shown in Fig.2. The triplicated processors (P1, P2 and P3) enhance the TMR scheme for the actuators. The intelligence of the system is additionally enhanced by the incorporated TMR Smart Sensor Systems (TSSS). The three actuators are denoted A1, A2 and A3. TSSS1, TSSS2 and TSSS3 sense the outputs of A1, A2 and A3 respectively. The smart sensors double as the triggers for the respective "normally closed" switches that connect the actuator outputs to the voters; this paper does not insist that the switches be implemented in hardware. Upon detection and location of a faulty actuator, the system isolates the faulty component of the TMR scheme, leaving it a duplicated system: the TSSS triggers the corresponding switch open to disconnect the actuator when, in the event of failure, its output is not sensed. As a result, the system provides actuator fault tolerance by fault masking, detection and isolation. Isolation eliminates the potential additional drag torque and enhances the overall system performance.
The function of TSSS4 is to monitor the plant's output. The output from TSSS4 provides feedback of the plant output to the processors. Output characteristics of the plant are transferred to the high-level system by the processors for control purposes. The feedback also serves as an impetus for the overall system reconfiguration.
The summator adds the outputs of the plurality of actuators. Much work has been done on summing pluralities of actuator outputs; it is possible to perform force, torque, position, velocity and even flow summation. The desired output from the summator is 3x, where x is the magnitude of the output of each actuator if and only if all three actuators are working perfectly as desired. In the event of isolation of one or two actuators, the system is reconfigured to still obtain a magnitude of 3x at the summator. With the reconfiguration and isolation, the possible output magnitudes of the actuators are according to the transition matrix (6).
[  x     x     x  ]            [ 3x ]
[ 3x/2  3x/2   0  ]            [ 3x ]
[ 3x/2   0   3x/2 ]  [A1]      [ 3x ]
[  0   3x/2  3x/2 ]  [A2]  =   [ 3x ]    (6)
[ 3x     0     0  ]  [A3]      [ 3x ]
[  0    3x     0  ]            [ 3x ]
[  0     0    3x  ]            [ 3x ]
[  0     0     0  ]            [  0 ]
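The reconfiguration rule behind transition matrix (6), i.e. the surviving actuators sharing the load so the summator still sees 3x, can be sketched as follows (an illustration, not the paper's Stateflow implementation):

```python
def reconfigure(working, x=1.0):
    """Command magnitudes for actuators A1..A3 after isolation.

    `working` is a tuple of booleans from the TSSS fault detection.
    Surviving actuators share the load equally so the summator still
    delivers 3x; with no survivors the system has failed (output 0).
    """
    k = sum(working)
    if k == 0:
        return (0.0, 0.0, 0.0)
    share = 3.0 * x / k
    return tuple(share if ok else 0.0 for ok in working)

print(reconfigure((True, True, False), x=2.0))
```

Each non-failure row of matrix (6) corresponds to one value of `working`; the all-zero last row is the all-failed case.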
The MATLAB/SIMULINK model used to verify the reconfiguration algorithm is illustrated in Fig.3. A Stateflow chart was designed for the algorithm, and the transition matrix was verified with the Stateflow chart. From the transition matrix and the simulation result, the summator outputs the desired magnitude when all three, 2-out-of-3 or 1-out-of-3 actuators are functioning correctly. The system fails only when all three actuators fail, as shown in the last row of the transition matrix (6).
Fig.3. The reconfiguration algorithm in MATLAB/SIMULINK
The reliability of this design with a plurality of actuators was validated with BlockSim 7 (Fig.4). The design shows better reliability than a single actuator: the blue curve is the intelligent actuator system with 3 actuators and the black curve is the single-actuator system.
Malakhanov and Opadere have described the transfer function of the TMR smart sensor system (TSSS) [5]. This paper goes further, taking the TSSS transfer function to describe the overall transfer function of the system in this work. The overall component-level transfer function of the system is as shown in Fig.5.
Fig.4. Reliability graph (reliability R(t) versus time, ReliaSoft BlockSim 7)
Fig.5. The system components' inherent transfer function model
W_C1, W_C2, W_C3 represent the transfer functions of the three parallel processors; W_A1, W_A2, W_A3 denote the transfer functions of the actuators; W_F1, W_F2, W_F3 and W_F4 are the transfer functions of the smart sensor feedbacks; and W_P is the plant's transfer function. The closed-loop transfer functions of the three processor-actuator channels are:

W_CA1(s) = W_C1(s)W_A1(s) / (1 + W_C1(s)W_A1(s)W_F1(s)),

W_CA2(s) = W_C2(s)W_A2(s) / (1 + W_C2(s)W_A2(s)W_F2(s)),

W_CA3(s) = W_C3(s)W_A3(s) / (1 + W_C3(s)W_A3(s)W_F3(s)),

W_CA(s) = W_CA1(s) + W_CA2(s) + W_CA3(s).

With the plant and the TSSS4 feedback, the system transfer function is

W_S(s) = W_CA(s)W_P(s) / (1 + W_CA(s)W_P(s)W_F4(s)).

For each TMR smart sensor system containing digital sensors, the transfer function of the system depends on the transfer functions of the sensors and on the time taken by the digital sensors for analogue-to-digital conversion, t_ADC, the delay in the processor's request for data from the digital sensor, t_req, the delay in sending the data from the sensor to the microprocessor, t_send, the delay of the first voting, t_voteA, and the delay of the second voting, t_voteB. The total time delay is

τ_s = n(t_ADC + t_req + t_send) + t_voteA + t_voteB,

where n is the number of sensors that make up each TMR smart sensor system. Here n = 3, so

τ_s = 3(t_ADC + t_req + t_send) + t_voteA + t_voteB.

For TSSS1, TSSS2 and TSSS3, an additional time delay τ_is is expended on isolating faulty actuators; hence the time delay for each of them is τ_s + τ_is, and the total time delay for the four smart sensor systems is 4τ_s + 3τ_is. Furthermore, the time delay in the system processor is

τ_p = n(t_ADC + t_req + t_send) + t_vote; here n = 3, so τ_p = 3(t_ADC + t_req + t_send) + t_vote.

The overall system delay τ can be expressed as

τ = 4τ_s + 3τ_is + τ_p.

W_s(s) is the system transfer function, and W_ss(s) is the transfer function of the system taking the delay τ into account. The system exhibits the characteristic of the transform

L(f(t − τ)) = f(s)e^(−τs),

so that

W_ss(s) = W_s(s)e^(−τs).

From W_ss(s) = W_s(s)e^(−τs), the discrete-time transfer function for a zero-order hold, W(z), of the TMR fault-tolerant system can be expressed as

W(z) = ((z − 1)/z) Z{ L^(−1)[ (W_ss(s)/s) e^(−τs) ] }.    (9)

Applying the Taylor series for the exponential function,

e^(−τs) = 1 − τs + (τs)²/2! − (τs)³/3! + ...,

the Z-transform of this series is

Z{L^(−1){e^(−τs)}} = 1 − ((z − 1)/z)(τ/T) + ((z − 1)/z)²(τ²/(2T²)) − ...,

where T is the sampling time. Equation (9), which represents the discrete-time transfer function for a zero-order hold of the TMR fault-tolerant system, then becomes, to first order,

W(z) = ((z − 1)/z) [1 − ((z − 1)/z)(τ/T)] Z{ L^(−1)[ W_ss(s)/s ] },

and to second order,

W(z) = ((z − 1)/z) [1 − ((z − 1)/z)(τ/T) + ((z − 1)/z)²(τ²/(2T²))] Z{ L^(−1)[ W_ss(s)/s ] }.
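The delay bookkeeping above is simple to encode; a sketch with hypothetical timing values (microseconds are assumed, the numbers are illustrative only):

```python
def smart_sensor_delay(t_adc, t_req, t_send, t_vote_a, t_vote_b, n=3):
    """tau_s = n*(t_ADC + t_req + t_send) + t_voteA + t_voteB."""
    return n * (t_adc + t_req + t_send) + t_vote_a + t_vote_b

def processor_delay(t_adc, t_req, t_send, t_vote, n=3):
    """tau_p = n*(t_ADC + t_req + t_send) + t_vote."""
    return n * (t_adc + t_req + t_send) + t_vote

def system_delay(tau_s, tau_is, tau_p):
    """Overall delay: tau = 4*tau_s + 3*tau_is + tau_p."""
    return 4 * tau_s + 3 * tau_is + tau_p

# Hypothetical component timings in microseconds
tau_s = smart_sensor_delay(20.0, 5.0, 10.0, 2.0, 2.0)
tau_p = processor_delay(20.0, 5.0, 10.0, 2.0)
tau = system_delay(tau_s, tau_is=50.0, tau_p=tau_p)
print(tau_s, tau_p, tau)
```

The resulting τ is the value substituted into e^(−τs) and its truncated series when forming W(z).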
The presented actuator system design architecture provides fault detection, location and isolation; hence system reliability and robustness are enhanced. Such performance in an actuator system has been shown to be realizable with a Triple Modular Redundancy scheme. The "intelligence" characteristics of the smart sensor system have been demonstrated to enable fault location and isolation in an actuator system. Plant monitoring and system reconfiguration are made possible by the closed loop through the smart sensor system. The transfer function of the system has been described.
[1] Braunl (2003).
[2] Fuhrman, C.P., et al. Fault Tolerance with Multiple Task Redundancy.
[3] Touloupis, E., et al. A TMR-Processor Architecture for Safety-Critical Automotive Applications.
[4] Seybold, L., et al. Intelligent Actuators for the Future of Individual Mobility. 7th Workshop on Advanced Control and Diagnosis, Zielona Góra, 2009.
[5] Opadere, J. Mathematical Model of Triple Modular Redundancy Fault-Tolerant Sensor System. IFAC Workshop on Aerospace Guidance, Navigation and Flight Control Systems, Samara, 2009.
Aliyu B. Kisabo
International Institute for Advanced Aerospace Technologies (IIAAT) of
St. Petersburg State University of Aerospace Instrumentation (SUAI),
Saint Petersburg, Russia. Tel.: +79117817521
Abstract: This paper explores the application of the Linear Quadratic Gaussian (LQG) control synthesis method for an autopilot of a generic Expendable Launch Vehicle (ELV) during ascent flight, in the pitch plane. It gives an introduction to the fundamental aspects of LQG control theory and shows an application option through an example. The movement of an Expendable Launch Vehicle (ELV) can be modelled with a linear time-invariant dynamic system, which must be controlled by a flight controller, especially during the boost phase. The synthesis and the analysis of the stability and other qualitative control parameters were done using Matlab/Simulink.
Keywords: LQG controller, Expendable Launch Vehicle (ELV), Autopilot, Matlab/Simulink
In this paper we give a short overview of the so-called Linear Quadratic Gaussian theory, which can be consulted in more detail in [1] and [2]. This article discusses an example of a pitch-plane autopilot design for a generic Expendable Launch Vehicle (ELV). In order to realize the LQG (Linear Quadratic Gaussian) autopilot we employed the separation principle, which means that we were able to design the LQG controller in two steps: first, the design of the LQR (Linear Quadratic Regulator), and then a state estimator, an LQE (Linear Quadratic Estimator), applying a modified cost function.
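As a sketch of the first step of the separation principle, the LQR gain can be obtained by backward Riccati iteration; the discrete double-integrator model below is a toy stand-in, not the ELV model of this paper (the LQE half of the LQG design is the dual problem):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Steady-state discrete LQR gain via backward Riccati iteration.

    Minimises sum x'Qx + u'Ru subject to x[k+1] = A x[k] + B u[k],
    with the state feedback u = -K x.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)   # Riccati difference equation
    return K

# Toy double-integrator "pitch" model (illustrative only)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# Closed-loop matrix A - B K should have all eigenvalues inside the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print(np.max(np.abs(eigs)))
```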
1.1 Purpose of Work
This paper is motivated by an ESA research project on the stability of FLC (Fuzzy Logic Control) concerning, in particular, potential aerospace applications [3]. It is intended that the LQG autopilot synthesized in this research (LQG being one of the celebrated results in the control field) will be compared with the FL counterpart that will be developed. The comparison will be on the basis of control law synthesis, nominal and robustness performance, and the simulated pitch responses.
1.2 Selection Point of Design
In this work we consider the atmospheric ascent flight, and the design selection point is traditionally where the load created by the non-zero angle-of-attack is most extreme. The most difficult design points during a boost-phase trajectory are the transonic region and the region of maximum dynamic pressure. The transonic region is characterized by large fluctuations in the aerodynamic derivative coefficients; these dramatic changes create elevated levels of uncertainty in the linearized equations and emphasize the importance of controller quality in this region. The maximum dynamic pressure on the vehicle also places increased importance on controller quality: during this section of the trajectory, the structural load, characterized by the product of dynamic pressure and angle-of-attack (Qα), serves as a physical limitation that must be accommodated by the control law. Historically, Mach 1.0 occurs 72 seconds into the boost phase, and maximum dynamic pressure, Q, occurs at exactly 82 seconds; for the ELV these two points occur within seconds of each other, approximately 72 seconds after launch [4]. Therefore, only one point (Mach 1.0 and Q = 480 psf) of the mission trajectory is used.
1.3 Design Specification
An autopilot is a closed-loop system inside a Guidance, Navigation and Control (GNC) system. The primary role of an autopilot is to stabilize the vehicle and to control the attitude angle. It is also of prime importance to minimize the deviation from the nominal trajectory and to limit the induced angle-of-attack in response to wind. The vehicle flight control system is required to meet two conflicting requirements: to minimize trajectory deviations, since excessive guidance correction results in a payload penalty, and to minimize angle-of-attack excursions in order to reduce the resulting bending moment, in other words, to ensure the structural integrity of the vehicle. Depending on which requirement the emphasis is placed on, the control structure will be different. The final decision must be made by considering the specific mission (what is the worst tolerable deviation from the nominal trajectory at the end of the flight?) and the structural configuration of the airframe (what is the maximum lateral force that it can withstand?). Solving such a trade-off problem is of course out of the scope of this work; rather, the emphasis has been put on the problem of minimizing the deviation from the nominal trajectory [5].
1.4 Expendable Launch vehicle (ELV)
An expendable launch vehicle (ELV) is used to carry a payload
into space. The vehicles are designed to be used only once (i.e.
they are “expended” during a single flight), and their components
are not recovered after launch. The vehicle typically consists of
several rocket stages, discarded one by one as the vehicle gains
altitude and speed. The ELV design differs from that of reusable
launch systems, where the vehicle is launched and recovered more
than once. Reuse might seem to make systems like the Space
Shuttle more cost effective than ELVs, but in practice launches
using ELVs have been less expensive than Shuttle launches. Most
satellites are currently launched using expendable launchers; they
are perceived as having a low risk of mission failure, a short time to
launch and a relatively low cost. A Shuttle orbiter is a major
national asset, and its high cost (far more than a single expendable
launch vehicle) and presence of a crew require stringent "man
rated" flight safety precautions that increase launch and payload
costs. Only five orbiters were built, and the unexpected loss of two
(Challenger and Columbia) significantly impacted the capacity and
viability of the Shuttle program. Each loss also resulted in an extended hiatus in Shuttle flights, compared to that following most expendable launch failures, each of which impacted only that model of launcher. For these reasons it is generally agreed that the Space Shuttle has not delivered on its original promise to reduce the costs of constructing and launching payloads into orbit. The Shuttle was originally intended to replace expendable launchers in the launching of satellites, but after the loss of Challenger it was reserved for previously planned missions.

Fig.1. A typical Expendable Launch Vehicle

The paper is organized as follows: the mathematical description of the ELV is presented in Section 2, including the state-space representation and the non-linear dynamics modelling in Simulink. A system analysis is carried out in Section 3; here, Dirac impulse investigation together with the controllability and observability of the system are considered. The adopted control strategy is discussed in Section 4. Then, descriptions of the validation tests and numerical simulations, as well as the presentation and analysis of their results, are given in Section 5. The final remarks based on the results obtained are presented in Section 6, and the perspective of future work concludes the paper in Section 7.

The equations of motion of an ELV are complicated by the fact that the vehicle has time-varying mass and inertia. There can also be relative motion between various masses within the vehicle and the origin of the body axes, such as fuel sloshing, engine gimbal rotation and vehicle flexibility. Expendable launch vehicle dynamics are generally described by 'short-period' equations of motion during the atmospheric flight. Although the 'short-period' equations are the result of a linearization process, the lateral dynamics on the pitch and yaw axes can be coupled by the vehicle roll rate; in fact, when the launch vehicle has a non-negligible roll rate, an action in the pitch plane produces an effect in the yaw plane.

There are many possible frames of reference in which to present the forces and moments that influence the motion of the vehicle in its pitch plane. The three sets of axes used in this study are shown in Fig.2. The earth-fixed coordinate frame (X_E − Z_E) coincides with the trajectory plane, with the X_E-axis and Z_E-axis coinciding with the local horizontal and vertical lines of the earth at the launch pad. In the non-rotational vehicle reference system (X_b, Z_b), the X_b-axis lies along the longitudinal axis of the vehicle, positive in the nominal direction of positive thrust acceleration, and the Z_b-axis completes a standard right-handed system. The trajectory inertial system (X_I, Z_I) is tangent to the nominal trajectory of the vehicle; it is obtained from the earth-fixed coordinate frame by a rotation of θ_0 (nominal attitude) about the Z_E × X_E axis.

Neglecting the effects of bending and sloshing and the lags due to actuators and instrumentation, the linearized equations of motion of the vehicle have been derived in detail in [6] and take the form:

Force equation:
M z̈ = (T_T − D)θ − L_α α − T_C δ_C,

Moment equation:
θ̈ = μ_α α + μ_C δ_C,

Angle of attack:
α = θ + ż/v + α_W,

Gust angle of attack:
α_W = −(v_W / v),

Aerodynamic moment coefficient:
μ_α = L_α l_α / I,

Control moment coefficient:
μ_C = T_C l_C / I,

where δ_C is the thrust vector deflection angle. The following assumptions are also made:
- Acceleration due to gravity is constant.
- Engine deflections are small enough (δ_C << 1) that sin δ_C = δ_C and cos δ_C = 1.
- Inertial positions do not affect attitude dynamics directly.
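A minimal sketch of the open-loop moment equation θ̈ = μ_α α + μ_C δ_C with δ_C = 0, α ≈ θ (no wind, drift term neglected) and illustrative coefficients (not the Table 1 vehicle data) shows the open-loop divergence of the attitude error:

```python
def simulate_pitch(mu_alpha, mu_c, theta0=0.01, t_end=5.0, dt=0.001):
    """Explicit-Euler integration of theta'' = mu_alpha*alpha + mu_c*delta_c
    with delta_c = 0 and alpha ~= theta; coefficients are illustrative only.
    """
    theta, theta_dot = theta0, 0.0
    for _ in range(int(t_end / dt)):
        theta_ddot = mu_alpha * theta + mu_c * 0.0   # no control deflection
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta

# Positive aerodynamic moment coefficient: the attitude error diverges
final = simulate_pitch(mu_alpha=2.0, mu_c=4.0)
print(final)
```

With μ_α > 0 the uncontrolled attitude error grows roughly as cosh(√μ_α · t), which is why a stabilizing autopilot is required.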
2.1 Simulink Model of the non-linear ELV
The sets of parameters in Table 1 are for a generic launch vehicle based on the work of Ramnath [7] and are used as the input parameters for the simulation of the non-linear ELV that gave the response in figure 2.1. These parameters were evaluated for a flight time t = 72 sec. It should be noted that the values of the aerodynamic moment coefficient, the control moment coefficient and the forward velocity presented in Table 1 were recalculated for this work.
Table 1: ELV simulation parameters
Fig.2. Launcher model in the pitch plane
where D − drag; I − moment of inertia of the vehicle about the pitch axis; lC − distance from the mass centre of the vehicle to the engine swivel point; lα − distance from the mass centre of the vehicle to the centre of pressure; Lα − aerodynamic load per unit angle of attack; M − mass of the vehicle; TC − control thrust; v − forward velocity of the vehicle; vW − wind velocity; Z − normal displacement of the vehicle relative to the inertial frame; α − angle of attack; θ − perturbation attitude error; θ0 − nominal attitude angle relative to the earth-fixed reference; θ̇ − attitude rate; θ̈ − attitude acceleration. The launch azimuth is assumed to be directly East.
The derivations and model assumptions that lead to the linearized equations of motion for the LV stated in this section follow those stated by Greensite [6]. The major assumptions are outlined as:
- The inertial origin is at the launch pad, on a flat-Earth coordinate system.
- The body-axes origin is at the C.G. of the vehicle.
- Only rigid-body motions are considered in the pitch plane (3-DOF).
- The cross products of inertia are negligible, as the vehicle is assumed symmetric.
- Aerodynamic forces are found via look-up tables.
(Table 1 values: 3.516e7 N; 1.363e6 N; 2.634e6 N; 754.877 m/s)
This is due to the fact that Ramnath considered a Space Shuttle in his work, whereas ours is an ELV with a different distance from the mass centre to the engine swivel point and to the centre of pressure; the forward velocity was evaluated from the expression:

μα / v = 0.01958.
It can be clearly seen from Figure 2 that all the state vectors behave in an unstable manner, i.e., as time increases each state vector increases too. This unstable trend is seen in all three state vectors, although the lateral drift grows in the opposite direction to the others. Wind velocity was not considered as part of the input to the ELV during the simulation because this would require adequate modeling of wind, which is outside the scope of this research. Wind, being an external disturbance, does not influence the stability of the ELV (stability is a function of the inherent properties of a system).
Fig.3. Simulink model of the non-linear ELV
The model described in Figure 3 is based on the equations of motion (2), (3) and the lateral-drift equation:

Z̈ = −[(TT − D)/M + g·cos θ0]·θ − (Lα/M)·α + (TC/M)·δC,
2.2 State Space Model of ELV

The actuator deflection δ is a known input to the plant, whereas the wind-induced angle-of-attack αW is an unknown input that would have to be estimated; this input-estimation problem is, unfortunately, outside the scope of this research. The attitude angle (θ) and the body rate (θ̇) are available on-line during flight. The lateral drift (Ż) is estimated rather than measured, which reduces cost and weight significantly since no separate lateral-drift sensor is needed. The estimated lateral drift is used to limit the vehicle angle-of-attack (α) in the high-dynamic-pressure (Q) regions of the ascent, because vehicle lateral loads are proportional to the product of Q and the total angle-of-attack α. All launch vehicles are inherently unstable and require an attitude stabilization system for successful operation [10].

Thus, for a linear time-invariant system described by the equations

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),    x(0) = x0,

the pitch-plane model of the ELV, with the actuator deflection as the only input to the plant, is given as [9]:

ẋ = [0 1 0; 14.7805 0 0.01958; −100.858 1 −0.1256]·x + [0; 3.485; 20.42]·δ,
y = [1 0 0]·x,    x = [θ, θ̇, Ż]ᵀ.

Fig.4. State-vector response of the non-linear ELV to a step signal (time in seconds)
It is of interest to analyze the pure open-loop dynamics. In this section we study the transient response to the Dirac delta (impulse) function as a test input. The growth of the pitch angle with time, depicted in Fig.6, is due to the eigenvalues of the system, s1,2,3 = −3.9143, 3.7807, 0.0080.

Fig.5. Open-loop model of the ELV in Simulink

The Simulink model for the Dirac-impulse investigation [11] is shown in Fig.5 and the result is shown in Fig.6.
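This divergence can also be checked without Simulink. A minimal numpy sketch of the impulse response, using the A and B matrices of equation (6); since the eigenvalues of A turn out to be real and distinct, a plain eigen-decomposition suffices:

```python
import numpy as np

# A and B from equation (6); states x = [theta, theta_dot, Z_dot]
A = np.array([[0.0,      1.0,  0.0],
              [14.7805,  0.0,  0.01958],
              [-100.858, 1.0, -0.1256]])
B = np.array([0.0, 3.485, 20.42])

# For an impulse input, x(t) = e^{At} B; with A = V diag(w) V^{-1},
# e^{At} B = V diag(e^{w t}) V^{-1} B.
w, V = np.linalg.eig(A)
c = np.linalg.solve(V, B)

def impulse_state(t):
    return np.real(V @ (np.exp(w * t) * c))

# The positive eigenvalue (about +3.78) dominates, so the state diverges:
for t in (0.5, 1.0, 2.0):
    print(t, np.linalg.norm(impulse_state(t)))
```

The computed eigenvalues of A reproduce the values s1,2,3 quoted above, which confirms the reconstructed state matrix.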
The Linear Quadratic Gaussian (LQG) control is a technique for designing optimal dynamic regulators based on the state-space model of the system. The technique searches for a trade-off between regulation performance and control effort, and it takes into account process disturbances and measurement noise. Basically, the LQG approach addresses the problem where the system dynamic model is perturbed by a dynamical noise w, and the state observation is corrupted by a measurement noise v affecting the sensor data acquisition. The LQG regulator comprises an optimal state-feedback gain and a Kalman-filter state estimator. The technique requires a linearized state-space model of the system with the addition of the noise effects, as shown in Equation (9). The dynamic model is given by:

ẋ(t) = Ax(t) + Bu(t) + Gw,
y(t) = Cx(t) + v.

Fig.6. Open-loop response of the ELV
3.1 Controllability and Observability
A system is said to be controllable if a control vector u(t) exists that will transfer the system from any initial state x(t0) to some final state x(t) in a finite time interval. Consider a system described as in equation (5); a sufficient condition for complete state controllability is that the n × n matrix

M = [B : AB : … : A^(n−1)B]

contains n linearly independent row or column vectors, i.e. is of rank n (the matrix is non-singular, with a non-zero determinant). Equation (7) is called the controllability matrix.

The system described by equation (5) is completely observable if the n × n matrix

N = [Cᵀ : AᵀCᵀ : … : (Aᵀ)^(n−1)Cᵀ]

is of rank n, i.e. is non-singular with a non-zero determinant. Equation (8) is called the observability matrix [12].

For the hypothetical ELV described by equation (6), the following result was obtained with respect to controllability and observability: rank(M) = 3 and rank(N) = 3, equal to the system order n = 3. Thus, our linear model of the ELV is controllable and observable, and because both Kalman rank conditions hold, the realization is minimal as well.
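These rank tests can be reproduced in a few lines of numpy, using the matrices of equation (6) (rank tolerances are numpy defaults):

```python
import numpy as np

# Plant of equation (6): states x = [theta, theta_dot, Z_dot], input delta
A = np.array([[0.0,      1.0,  0.0],
              [14.7805,  0.0,  0.01958],
              [-100.858, 1.0, -0.1256]])
B = np.array([[0.0], [3.485], [20.42]])
C = np.array([[1.0, 0.0, 0.0]])

n = A.shape[0]

# Kalman rank tests: M = [B, AB, A^2 B] and N = [C; CA; CA^2]
M = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
N = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(N))  # 3 3
```

Both matrices have full rank 3, matching the result reported in the text.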
With equation (9) in perspective, we analyzed the dynamic behavior of our ELV with external and measurement noise. We suppose an external stochastic disturbance (turbulent air) and an internal noise acting at the same time. Thus, in our case, there are two types of noise: we calculate with an external stochastic noise (for example, the wind turbulence) and an internal random measurement noise.
The two random (external disturbance and sensor) noises are
white, Gaussian (normal) zero-mean stationary vector processes.
We know the covariance matrices, and the external disturbance
noise has a disturbance input matrix G . To find a controller which
stabilizes our plant, we have to solve a stochastic optimal control problem. The LQG design consists of obtaining the feedback control law in the form u = −Kx̂, which optimizes the regulation index given by a quadratic performance criterion:

J = ∫ (xᵀQx + uᵀRu) dt,
where Q and R are weighting matrices that define the trade-off between regulation performance and control effort, i.e. the relative weight of how fast the state x(t) goes to zero versus the magnitude of the control effort u. A first selection for the matrices Q and
R is given by Bryson's rule: select Q and R diagonal, with

Qii = 1 / (maximum acceptable value of zi)²,  i ∈ {1, 2, …, l},
Rjj = 1 / (maximum acceptable value of uj)²,  j ∈ {1, 2, …, m},
where Q is an l × l symmetric positive-definite matrix and R an m × m symmetric positive-definite matrix. In essence, Bryson's rule scales the variables that appear in equation (10) so that the maximum acceptable value for each term is one. This is especially important when the units used for the different components of u and x make the values of these variables numerically very different from each other.
Although Bryson's rule sometimes gives good results, it is often just the starting point of a trial-and-error iterative design procedure aimed at obtaining desirable properties for the closed-loop system.
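As an illustration only (the limits below are assumed values, not taken from the paper), Bryson's rule reduces to two lines:

```python
import numpy as np

# Assumed maximum acceptable excursions for the three states
# [theta, theta_dot, Z_dot] and for the input delta (illustrative only):
z_max = np.array([0.05, 0.10, 1.0])
u_max = np.array([0.20])

# Bryson's rule: Q_ii = 1/z_max_i^2 and R_jj = 1/u_max_j^2, so every term
# of the cost integrand is at most one when a variable sits at its limit.
Q = np.diag(1.0 / z_max**2)   # diag(400, 100, 1)
R = np.diag(1.0 / u_max**2)   # diag(25)
```

These diagonal matrices would then be the starting point of the iterative tuning mentioned above.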
Considering a time-invariant system, the gain matrix K is obtained by solving the algebraic Riccati equation (12) and taking K = R⁻¹BᵀP:

PA + AᵀP + Q − PBR⁻¹BᵀP = 0.

The Matlab function [K, P, E] = lqr(A, B, Q, R) was used to obtain the solution of equation (12); hence the appropriate gain matrix K for the situation at hand was obtained, together with the corresponding closed-loop eigenvalues of the system.
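Outside Matlab, equation (12) can be solved directly. A sketch using the classical Hamiltonian eigenvector method on the plant of equation (6), with illustrative weights Q = I and R = 1 rather than the values actually used in the paper:

```python
import numpy as np

def care(A, B, Q, R):
    """Solve PA + A'P + Q - P B R^-1 B' P = 0 via the stable
    invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    U = V[:, w.real < 0]          # eigenvectors of the n stable eigenvalues
    return np.real(U[n:] @ np.linalg.inv(U[:n]))

A = np.array([[0.0,      1.0,  0.0],
              [14.7805,  0.0,  0.01958],
              [-100.858, 1.0, -0.1256]])
B = np.array([[0.0], [3.485], [20.42]])
Q = np.eye(3)                     # illustrative weights, not the paper's
R = np.array([[1.0]])

P = care(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P    # K = R^-1 B' P
print(np.linalg.eigvals(A - B @ K).real.max() < 0)  # True: closed loop stable
```

The same P would be returned by Matlab's lqr (up to numerics); the point is only that the Riccati solution stabilizes A − BK.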
The Kalman filter is used as an observer to obtain the estimate of the state vector, i.e. x̂. The filter equation is given by:

dx̂/dt = Ax̂ + Bu + L(y − ŷ),

where L is the Kalman gain, given as L = PCᵀR⁻¹ and obtained from the solution P of equation (14), the unique positive-definite solution of the algebraic Riccati equation (ARE):

AP + PAᵀ + GQGᵀ − PCᵀR⁻¹CP = 0.

The Matlab function [L, P, E] = lqe(A, G, C, QN, RN) was used to solve equation (14) and hence obtain the Kalman gain L. The schematic description of the LQG regulation problem is shown in Fig.8.

Fig.8. The LQG controlled plant

Equation (13) is called a full-order observer for the process. Full-order observers have two inputs, the process control input u and its measured output y, and a single output, the state estimate x̂. Note that the observer is itself a dynamic system whose design must also have good stability characteristics (preferably asymptotic stability).
The Simulink model for the filter implemented in this research is shown in Fig.7.
Different choices of QN and RN result in different estimator gains L. When RN is very small compared to QN, the measurement noise is necessarily small, so the optimal estimator interprets a large deviation of ŷ from y as an indication that the estimate x̂ is bad and needs to be corrected. In practice this leads to a large gain matrix L and fast poles of the observer's closed-loop matrix. When RN is very large, the measurement noise is large, so the optimal estimator is much more conservative in reacting to deviations of ŷ from y. This generally leads to a smaller gain matrix and slow poles of the observer's closed-loop matrix.
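This behaviour can be checked numerically through the duality between lqe and lqr; the sketch below assumes G = I and QN = I for illustration:

```python
import numpy as np

def care(A, B, Q, R):
    """Riccati solver via the stable subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -B @ np.linalg.inv(R) @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    U = V[:, w.real < 0]
    return np.real(U[n:] @ np.linalg.inv(U[:n]))

A = np.array([[0.0,      1.0,  0.0],
              [14.7805,  0.0,  0.01958],
              [-100.858, 1.0, -0.1256]])
C = np.array([[1.0, 0.0, 0.0]])
QN = np.eye(3)                              # assumed process-noise covariance

def estimator_poles(RN):
    P = care(A.T, C.T, QN, RN)              # duality: lqe(A, I, C, QN, RN)
    L = P @ C.T @ np.linalg.inv(RN)         # L = P C' RN^-1
    return np.linalg.eigvals(A - L @ C)

fast = estimator_poles(np.array([[1e-4]]))  # small RN -> large L, fast poles
slow = estimator_poles(np.array([[1e2]]))   # large RN -> small L, slow poles
print(fast.real.max() < slow.real.max() < 0.0)  # True
```

Both estimators are stable, but the small-RN one places its poles much further into the left half-plane, exactly as described above.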
A plant-observer arrangement was used to synthesize the LQG
autopilot in Simulink; this is shown in figure 9. The proposed
autopilot, based on the LQG regulation, was tested by using
numerical simulations. Although several simulations over a wide
range of initial conditions were performed, just one case is shown.
The parameters and specifications considered in this simulation are
shown in Table 2.
Fig.7. Simulink model of the Kalman filter
Fig. 9. Simulink Model of the LQG Autopilot
The noise source input matrix G is a diagonal matrix
containing the open-loop eigenvalues of the ELV [14]. Unlike
other design methods, the full-state feedback system does not
compare the output to the reference; instead, it compares all
states multiplied by the control matrix ( Kx ) to the reference.
Thus, we should not expect to see the output equal to the input.
To obtain the desired output, we need to scale the reference
input so that the output equals the reference. This was easily
solved by introducing a feed-forward scaling factor in the
simulation model (Fig. 9).
The time variation of the state variables (θ, θ̇, Ż) of the realized simulation was obtained for an initial condition of {0.05, 0.017, 0}. Fig.10 shows the response without disturbance, and Fig.11 the response when these disturbances were applied. It can be seen from these results that the stability of all the state vectors is preserved. It should also be noted that these results agree with the 'theory' of conflicting design requirements put forward in Section 1.3. Execution of an angle-of-attack manoeuvre during flight results in lateral drift; this disturbing phenomenon is well depicted in the lateral-drift response in both figures as a deviation from the nominal value. The Kalman filter can be seen to give a good estimate of the state vector. The pitch response plot shows that stabilization of the ELV occurs at about 3.5 s; this is an acceptable result from a practical standpoint.
Table 2: LQG autopilot simulation parameters: diag{0.01, 0.01, 0.01}; diag{10, 10, 10}; diag{3.9143, 3.7807, 0.008}
Fig.10. Autopilot response without simulated perturbation
The first question to ask after designing an LQG controller is whether or not the closed-loop system will be stable. To answer this question we collect all the equations that define the closed-loop system; writing r for the external reference input, they can be put in matrix form as:

d/dt [x; x̂] = [A, −BK; LC, A − BK − LC]·[x; x̂] + [B; 0]·r,
y = [C, 0]·[x; x̂].
Separation Principle: if both the state-feedback regulator dynamics A − BK and the state-estimator dynamics A − LC are asymptotically stable, then so is the closed loop [15]. For our situation, the closed-loop eigenvalues of equation (15) include:

−12.6184 ± 23.0489i,
−0.7798 ± 1.3131i.
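The separation principle can be verified numerically for equation (15): the spectrum of the assembled closed-loop matrix equals the union of eig(A − BK) and eig(A − LC). The gains below are computed with illustrative identity weights, not the values used in the paper:

```python
import numpy as np

def care(A, B, Q, R):
    """Riccati solver via the stable subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -B @ np.linalg.inv(R) @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    U = V[:, w.real < 0]
    return np.real(U[n:] @ np.linalg.inv(U[:n]))

A = np.array([[0.0,      1.0,  0.0],
              [14.7805,  0.0,  0.01958],
              [-100.858, 1.0, -0.1256]])
B = np.array([[0.0], [3.485], [20.42]])
C = np.array([[1.0, 0.0, 0.0]])

K = B.T @ care(A, B, np.eye(3), np.eye(1))      # regulator gain (R = I)
L = care(A.T, C.T, np.eye(3), np.eye(1)) @ C.T  # estimator gain (duality)

# Closed-loop matrix of equation (15), states [x; x_hat]:
Acl = np.block([[A,     -B @ K],
                [L @ C,  A - B @ K - L @ C]])

union = np.concatenate([np.linalg.eigvals(A - B @ K),
                        np.linalg.eigvals(A - L @ C)])
gap = max(np.abs(np.linalg.eigvals(Acl) - z).min() for z in union)
print(gap < 1e-6)  # True: eig(Acl) = eig(A - BK) U eig(A - LC)
```

The identity follows from the similarity transform that block-triangularizes equation (15) in the (x, x − x̂) coordinates.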
Fig. 11. Autopilot response with simulated perturbation
The obtained numerical simulation results lead to the conclusion that the control strategy based on the LQG theory stabilizes the system exponentially and that the synthesized autopilot is asymptotically stable. The former is deduced from the simulated response of the pitch angle and pitch-angle rate, while the latter is based on the closed-loop eigenvalues of the system. This has been demonstrated for the critical time instant (the high-dynamic-pressure region) of the flight. This is an interesting result, because the problem of ELV attitude stabilization constitutes a highly nonlinear control problem, and the application of linear approaches is attractive in terms of real-time implementation. In the LQG design, the required states are estimated, eliminating the need for additional sensors, but at the cost of the time and number of calculations involved.
[1] Roland S. Burns. 'Advanced Control Engineering.' Department of Mechanical & Marine Engineering, University of Plymouth, UK, 2001.
[2] Albertos P. and A. Sala. 'Multivariable Control Systems: An Engineering Approach.' Springer, 2004.
[3] Jose M. Giron-Sierra, Guillermo Ortega. 'A survey of stability of fuzzy logic control with aerospace application.' IFAC 15th Triennial World Congress, Barcelona, Spain, 2002.
[4] Greensite A.L. 'Analysis and Design of Space Vehicle Flight Control Systems.' Volume II, Spartan Books, New York.
[5] Lt. Hague Tyler Nicklause. 'An Application of H2/H∞ Robust Control Synthesis to Launch Vehicle Ascent.' Master of Science thesis, Department of Aeronautics and Astronautics, MIT, June 2000.
[6] Greensite A.L. 'Analysis and Design of Space Vehicle Flight Control Systems.' Volume II, Spartan Books, New York.
[7] Ramnath, R.V. 'A New Approach in the Design of Time Varying Control Systems with Application to Space Shuttle Boost.' Proc. IFAC Congress, Geneva, 1973.
[8] Amol A. Khalate, Abdul Naseer, Sheelu Jose and M. V. Dhekane. 'Multi-Objective Control of Aerodynamically Unstable Launcher.' Proceedings of the 15th Mediterranean Conference on Control & Automation, Athens, Greece, July 27-29, 2007.
[9] Sreeja S, A. Dinesh Pai, V. R. Lalithambika, K. Sivan. 'Launch Vehicle Trajectory Reconstruction Considering Wind Estimation from Flight Data Using Kalman Filter.' June.
[10] Philippe Le Fur. 'An Asymptotic Approach in the Preliminary Design of Launch Vehicle Control Systems.' Master of Science thesis, Department of Aeronautics and Astronautics, MIT, January 1989.
[11] Steven T. Karris. 'Introduction to Simulink with Engineering Applications.' Orchard Publications, 2006.
[12] Robert L. Williams II, Douglas A. Lawrence. 'Linear State-Space Control Systems.' 2007.
[13] Balázs Kulcsár. 'LQG/LTR Controller Design for an Aircraft Model.' Department of Control and Transport Automation, Budapest University of Technology and Economics, Hungary, November 10, 2000.
[14] Ashwin Dhabale, R. N. Banavar and M. V. Dhekane. 'LQG Controller Design from Reduced Order Models for a Launch Vehicle.' April 2006.
[15] João Pedro Hespanha. Lecture notes: LQG/LQR Controller Design. University of California, Santa Barbara (UCSB).
The synthesized control law should be applied to the non-linear model of the ELV; thus, an Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) based observer needs to be designed. Also, this paper does not address issues such as slosh, sensor characteristics, flexibility, etc. In further studies such effects could be taken into account.
Ibe Tochukwu Cosmas
International Institute for Advanced Aerospace Technologies of Saint Petersburg State University of Aerospace
Instrumentation, Saint Petersburg, Russian Federation
Tel: +79522063563, E-mail:
Abstract: Disturbances in the form of vibrations can degrade the performance of sensitive equipment. As technology advances, more and more precision is expected from instruments used in a broad range of applications (such as machining, precision pointing, and space applications). Completely removing the source of vibration is impossible, as external disturbances or vibration-generating equipment will always be present. Therefore, the goal should be to isolate the vibrations at the interfaces between the vibration source and the sensitive equipment. Vibration isolation uses either passive or active control, or a combination of both. Active control (piezoceramic, electromagnetic, magnetostrictive, and voice-coil actuators are possible candidates) is used in many space applications, in areas like positioning and the reduction of vibration transmission at low frequencies. Passive control attenuates high-frequency inputs. The advantage of semi-active control is that it requires low external power, provides passive energy dissipation if the semi-active part fails, and has inherent stability. A variable-damping device using semi-active control would approach the performance of an active actuator in reducing low-frequency vibrations while offering several advantages. Therefore, this paper focuses on studying semi-active vibration control by looking at the different connector models used in accomplishing vibration isolation, positioning and precision of equipment.
Keywords: Passive Vibration, Active Vibration, Actuator, Sensors, Damper, Connector, Coupling, Decoupling.
This paper addresses the issue of how semi-active
variable-damping coupled with passive damping can improve
vibration isolation on flexible structures. The usage of this in
controlling precision and position equipment is further
investigated (future study). Semi-active control is based on
smart-material dampers, specifically magnetorheological (MR) dampers.
1.2 Vibrations in Space Applications
bus, whereas one of the major design considerations is to reduce the cost by moving toward lighter and cheaper spacecraft buses [16]. These two trends seem to be incompatible, as a lighter and cheaper bus will be unable to provide the quiet environment required for precision pointing.
The solution is therefore to provide the isolation and
suppression either between the noise generating equipment
and the bus or between the sensitive equipment and the bus.
In the first case, the vibrations generated by machinery are
propagating into the host structure. In the second case, the
host structure may have disturbances affecting the sensitive equipment.
The vibration problem in space applications can be divided into launch operation and on-orbit operation.
1.2.1 Launch Operation
Launch dynamics are a major factor affecting the design of the spacecraft structure. Ensuring launch survival may often be a more difficult design problem than establishing the desired performance in orbit. The major concern during the launch operation is to make sure structural and acoustical vibrations do not damage the payload. The main purposes of launch isolation are to enable more sensitive equipment to be launched, reduce the risk of equipment failures [7], and reduce the cost and the mass of the spacecraft, which in turn increases the mass margin for additional payload [5]. Passive isolation has been successfully used during several launch events, such as the Hubble Space Telescope servicing missions [16]. Active, passive, and hybrid isolators have been extensively studied, and several designs in the literature show improved performance of active and hybrid isolation over purely passive isolation [11,16]. Current and future satellite systems require increasing precision from the spacecraft bus.

2.1 Vibration Control Strategies
Structures and mechanical systems should be designed to
enable better performance under different types of loading:
particularly dynamic and transient loads. There are three
fundamental control strategies to regulate or control the
response of a system: passive control, active control, and
semi-active control. Traditionally, vibration isolation has
focused on passive control. For passive control, mechanical
devices such as energy dissipation devices or isolators are
added to a mechanical system to increase energy dissipation
and improve the performance of the system [15]. The primary
advantage of passive damping systems is that they will not
induce instability under any degree of model uncertainty [8].
However, the performance of such a system is limited
because system parameters, especially damping cannot be
varied. This causes passive systems to behave differently
under changing conditions; therefore, they cannot always
meet the design requirements. Unlike passive systems,
active systems can constantly supply and vary the flow of
energy into the system. Based on the change in the
instantaneous operating conditions as measured by sensors,
the properties of the system can be adjusted [6]. The heart of
the control system is the actuator, which behaves as an
artificial muscle and can potentially affect the system in an
intelligent manner. This enables the active system to
command arbitrary control forces [10]. The actuators that are
typically used in active control are pneumatic, hydraulic,
electromagnetic, or intelligent material actuators. The
practical disadvantages to these types of control approaches
are the large power requirements needed for these actuators
to do work on relatively stiff and massive structures and the
limitations of the actuators. Traditional piezoelectric stacks
can only generate a few micrometers of displacement when
they are not being loaded. When they are loaded, the
displacement is severely impaired. The current designs
implementing intelligent material actuators can only control
micro vibrations [3].
Electromechanical actuators have the necessary force and
displacement capabilities for many applications. However,
their implementation is often not practical because of their
large weight, electrical demands, and limited bandwidth.
Semi-active control has been developed as a compromise
between passive and active control. Karnopp and co-workers
first proposed varying the properties of a passive element by
using active control, which was termed semi-active control. A
semi-active control system is incapable of injecting energy
into a system comprising the structure and actuator, but can
achieve favorable results through selective energy
dissipation [12]. Instead of directly opposing a primary
disturbance, semi-active vibration control is used to apply a
secondary force, which counteracts the effects of the
disturbance by altering the properties of the system, such as
stiffness and damping [4]. The adjustment of mechanical
properties, which is based on feedback from the excitation
and/or from the measured response, is what differentiates
semi-active control from passive control. A controller
monitors the feedback measurements and an appropriate
command signal is generated for the semi-active devices.
Unlike an active system, the control forces developed are
related to the motion of the structure. Furthermore, the
stability of the semi-active system is guaranteed as the
control forces typically oppose the motion of the structure
and can only remove energy from the system [13].
The isolation mount consists of a linear spring in parallel with a passive damper. The corner frequency of the system is

ωn = (k/M)^(1/2),

and the amount of damping in the isolator is defined by the damping ratio ξ, where c/M = 2ξωn. The transfer function, in the Laplace domain, between the disturbance displacement xd and the payload displacement xc is given by

xc(s)/xd(s) = (1 + 2ξs/ωn) / (1 + 2ξs/ωn + s²/ωn²).

Substituting s = jω, the transmissibility between the two masses is found:

|xc(jω)/xd(jω)| = [ (1 + (2ξω/ωn)²) / ((1 − ω²/ωn²)² + (2ξω/ωn)²) ]^(1/2).    (2)

Fig. 2. Transmissibility FRF of a passive damper for various values of damping ratio

2.2 Single-Axis Passive Isolator
2.2 Single-Axis Passive Isolator
Consider the single-axis isolator shown in Fig.1, where m is the mass of the sensitive equipment, and k and c are the stiffness and damping of the isolator, respectively.
Fig 2. shows a general plot for the transmissibility frequency
response function (FRF) of (2) where the abscissa is the ratio
between the disturbing frequency ω and the natural
frequency ωn. From Fig.2 it can be seen that the critical frequency, where the curve crosses over the 0 dB line, is equal to ω = √2·ωn. This corner frequency separates amplification from isolation. At the natural frequency of the system a resonance appears, which is controlled by the value of the damping ratio. In the low-frequency range, well below the resonance, the displacement of the payload essentially follows the displacement of the disturbance source. Beyond the corner frequency, where isolation begins, the curve rolls off and the displacement of the payload decreases gradually in the high-frequency range [1].

Fig.1. Sensitive equipment mounted on a vibrating structure via a passive isolator
Increasing the damping ratio ξ reduces the resonance at the natural frequency, but it also compromises the high-frequency isolation. As a result, the design of a passive damper involves a trade-off between the resonant-peak reduction and the high-frequency attenuation. Rearranging (4), one can calculate the natural frequency as given by

ωn = ωd / (1 + 1/T)^(1/2).
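The trade-off, and the damping-independent crossover at ω = √2·ωn, follow directly from equation (2); a small numpy check:

```python
import numpy as np

def transmissibility(w, wn, zeta):
    """|x_c/x_d| of equation (2) for the spring-damper isolation mount."""
    r = w / wn
    return np.sqrt((1 + (2*zeta*r)**2) / ((1 - r**2)**2 + (2*zeta*r)**2))

wn = 10.0                         # illustrative natural frequency, rad/s
for zeta in (0.05, 0.2, 0.5):
    peak = transmissibility(wn, wn, zeta)      # resonant peak (falls with zeta)
    roll = transmissibility(10*wn, wn, zeta)   # high-frequency value (rises with zeta)
    print(zeta, round(peak, 3), round(roll, 4))

# The crossover is independent of zeta: T = 1 exactly at w = sqrt(2)*wn
print(round(transmissibility(np.sqrt(2)*wn, wn, 0.3), 6))  # 1.0
```

More damping lowers the resonant peak but raises the high-frequency transmissibility, which is exactly the design trade-off described above.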
2.3 Passive isolator selection
Isolation is attained primarily by maintaining the proper
relationship between the disturbing frequency and the system
natural frequency. In order to design a passive isolator for a
vibrating system, the following technique can be followed:
1. Determine the minimum disturbing frequency ωd . For
rotating equipment, the disturbing frequency is equal to the
angular speed of rotation. If there are several disturbing
frequencies one should take into account the one with the
minimum frequency which is the most important.
2. Determine the maximum isolator natural frequency ωn that can provide isolation. This natural frequency can be calculated by using the following equation:

ωn = ωd / √2 = 0.707·ωd.

If the natural frequency of the isolator exceeds the value calculated from this equation, the isolation will not perform properly; it is even quite possible that it will amplify the vibrations. This step narrows the range of isolators to be selected from, but it tells nothing about the level of isolation.
3. Determine, specifically, what natural frequency of the isolator gives the required level of isolation. The desired level of isolation determines the transmissibility of the system; if the level of isolation is 80%, then the transmissibility T is 0.2. The transmissibility FRF can be calculated from the following formula:

T = [ (1 + (2ξωd/ωn)²) / ((1 − ωd²/ωn²)² + (2ξωd/ωn)²) ]^(1/2).

For an undamped isolator, formula (7) gives the natural frequency as a function of the disturbance frequency ωd and the required attenuation T:

ωn = ωd / (1 + 1/T)^(1/2);    (7)

it is valid only when ωd/ωn > 1.

4. Eventually, knowing the required natural frequency of the isolator, one can select the material and dimensions of the passive isolator that fit the system, according to the application and the surrounding conditions. There are many commercial catalogues to select from.
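The selection steps above can be sketched numerically; the disturbing frequency below is an assumed example value:

```python
import numpy as np

w_d = 100.0                      # minimum disturbing frequency, rad/s (assumed)

# Step 2: maximum isolator natural frequency that still isolates
wn_max = w_d / np.sqrt(2)        # = 0.707 * w_d

# Step 3: natural frequency for a required isolation level (undamped isolator).
# 80% isolation -> transmissibility T = 0.2, and wn = w_d / sqrt(1 + 1/T).
T_req = 0.2
wn = w_d / np.sqrt(1 + 1/T_req)

print(round(wn_max, 2), round(wn, 2))   # 70.71 40.82

# Check: with xi = 0, T at w_d equals 1/((w_d/wn)^2 - 1) = 0.2
print(round(1 / ((w_d/wn)**2 - 1), 3))  # 0.2
```

An isolator would then be chosen from a catalogue with a natural frequency at or below the step-3 value.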
3.1 Basic Connector Model

In this simple model of the connector, shown in Fig.3, the parameters and variables used are:
x1, the displacement of the top mass, m1;
x0, the displacement of the bottom mass, m0;
k1, the equivalent stiffness of the connector;
b1, the actuator friction and damping, modelled as a viscous damper;
FA, the force generated by the actuator; and
FD, the input disturbance force applied to the bottom mass.
Neglecting the damping (ξ = 0), T reads:

T = 1 / |1 − (ω/ωn)²|.

For ξ > 0, at the resonance ω/ωn = 1 the transmissibility T has its maximum value, which is related to the damping ratio ξ by

Tmax ≈ 1/(2ξ).
Fig. 3. Basic connector model
In this model, if the damping required for vibration control is
provided by the active control of the actuator, then the
dynamic behavior of the system is determined by the
dynamic response of the actuator and the controller. The
main disadvantage of this is that the dynamic response of the
actuator may not match the desired dynamic performance of
the system. Also, all the energy will be dissipated by the
actuator. This can overheat the actuator. Furthermore, the
power efficiency of the system is reduced when active
devices (actuator) are used to simulate passive ones (spring,
damper) [2]. If passive control is preferred, the active
actuator may be replaced by a spring- damper pair.
The equations of motion for the basic connector are given in (8) and then put into matrix form in (9):

m0·ẍ0 + b1·(ẋ0 − ẋ1) + k1·(x0 − x1) = FD − FA,
m1·ẍ1 + b1·(ẋ1 − ẋ0) + k1·(x1 − x0) = FA.
stiffness, k1, is 5,000 N/m. The state-space formulation is carried out, and the Bode characteristic is shown in Fig.5. The damping is varied to see how the response of the system changes with varying damping. The same system is modeled in Simulink as shown in Fig.4. The frequency response obtained by the MATLAB and the Simulink models is the same, as shown in Fig.6.

In matrix form (9), the equations of motion read:

[m0 0; 0 m1]·[ẍ0; ẍ1] + [b1 −b1; −b1 b1]·[ẋ0; ẋ1] + [k1 −k1; −k1 k1]·[x0; x1] = [FD − FA; FA].

The state-space formulation (10) uses the state vector x = [x0, x1, ẋ0, ẋ1]ᵀ and the input vector u = [FD, FA]ᵀ; in matrix form (11), ẋ = Ax + Bu, y = Cx, with

A = [0 0 1 0; 0 0 0 1; −k1/m0 k1/m0 −b1/m0 b1/m0; k1/m1 −k1/m1 b1/m1 −b1/m1],
B = [0 0; 0 0; 1/m0 −1/m0; 0 1/m1],
C = [1 0 0 0; 0 1 0 0].

Fig.4. Simulink model of the basic connector

Next, the frequency response of the system between the displacement of m1 and FD, the input disturbance force applied to the bottom mass m0, is found. FA = 0 for the case where vibration isolation is carried out by the passive spring-damper pair incorporated into the system. The bottom mass, m0, is 38 kg, and the top mass, m1, is 16 kg.

Fig.5. Bode diagram of the basic connector model (magnitude in dB, phase in deg, frequency in rad/sec)
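The passive frequency response can be reproduced from the state-space model (11); a numpy sketch with the stated masses and stiffness, and b1 = 200 Ns/m picked from the swept damping values:

```python
import numpy as np

m0, m1, k1, b1 = 38.0, 16.0, 5000.0, 200.0

# Equation (11): states [x0, x1, x0_dot, x1_dot], inputs [FD, FA]
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-k1/m0,  k1/m0, -b1/m0,  b1/m0],
              [ k1/m1, -k1/m1,  b1/m1, -b1/m1]], dtype=float)
B = np.array([[0, 0], [0, 0], [1/m0, -1/m0], [0, 1/m1]])
C = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

def x1_per_fd(w):
    """|x1/FD| at frequency w (rad/s), with FA = 0 (purely passive case)."""
    G = C @ np.linalg.solve(1j*w*np.eye(4) - A, B[:, [0]])
    return abs(G[1, 0])

# The relative (spring) mode sits near sqrt(k1*(1/m0 + 1/m1)), about 21 rad/s
w_res = np.sqrt(k1*(1/m0 + 1/m1))
for w in (0.2*w_res, w_res, 5*w_res):
    print(round(w, 1), x1_per_fd(w))
```

The magnitude falls off above the spring-mode frequency, reproducing the roll-off seen in the Bode plot of Fig.5.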
The coupling spring stiffness is much lower than the
actuator stiffness. Therefore, most of the deflection occurs in
the coupling stage. The main disadvantage of this model is
that when the actuator applies a force, the coupling stage
has to deform before this force can be transferred to the
payload and this introduces a delay and degrades the
dynamic performance.
The damping element attenuates some of the high-frequency vibrations, but it also increases the reaction time of the system.
Also, some of the energy provided by the actuator is dissipated in the damper instead of being transmitted to the payload [2]. The equations of motion for the connector model with the coupling stage are given in (12) and then put into matrix form in (13).
m0·ẍ0 + b1(ẋ0 − ẋ1) + k1(x0 − x1) = FD − FA
m1·ẍ1 + b1(ẋ1 − ẋ0) + k1(x1 − x0) + b2(ẋ1 − ẋ2) + k2(x1 − x2) = FA
m2·ẍ2 + b2(ẋ2 − ẋ1) + k2(x2 − x1) = 0    (12)
Fig.6. Frequency response of basic connector model (|X1/F|, b1 varied from 50 to 800 Ns/m)
A coupling stage may be introduced to improve the vibration isolation performance of the first model as shown in Fig.7. The additional components are:
k2, the stiffness of the coupling stage,
b2, the damping in the coupling stage, and
m2, the mass of the coupling stage.
[m0   0   0][ẍ0]   [ b1    −b1      0 ][ẋ0]   [ k1    −k1      0 ][x0]   [FD − FA]
[ 0  m1   0][ẍ1] + [−b1  b1+b2   −b2][ẋ1] + [−k1  k1+k2   −k2][x1] = [  FA   ]
[ 0   0  m2][ẍ2]   [  0   −b2     b2 ][ẋ2]   [  0   −k2     k2 ][x2]   [   0   ]
    (13)
The state-space formulation is given in (14) and then put into matrix form in (15).
x1 = x0,  x2 = x1,  x3 = x2,  x4 = ẋ0,  x5 = ẋ1,  x6 = ẋ2    (14)
ẋ1 = x4
ẋ2 = x5
ẋ3 = x6
ẋ4 = ẍ0 = (1/m0)[FD − FA − b1x4 + b1x5 − k1x1 + k1x2]
ẋ5 = ẍ1 = (1/m1)[FA + b1x4 − (b1 + b2)x5 + b2x6 + k1x1 − (k1 + k2)x2 + k2x3]
ẋ6 = ẍ2 = (1/m2)[b2x5 − b2x6 + k2x2 − k2x3]
The displacement of the spring in the coupling stage can be controlled to generate a force.
Fig.7. Two-stage connector model
The force applied by the connector to the payload is a function of the coupling stage displacement and velocity, and also of the coupling stage stiffness and damping coefficients, F = k2(x2 − x1) + b2(ẋ2 − ẋ1).
In matrix form, ẋ = Ax + Bu:
[ẋ1]   [   0         0         0       1        0         0   ][x1]   [  0      0  ]
[ẋ2]   [   0         0         0       0        1         0   ][x2]   [  0      0  ]
[ẋ3] = [   0         0         0       0        0         1   ][x3] + [  0      0  ][FD]
[ẋ4]   [−k1/m0    k1/m0       0    −b1/m0    b1/m0       0  ][x4]   [1/m0  −1/m0][FA]
[ẋ5]   [ k1/m1 −(k1+k2)/m1  k2/m1   b1/m1 −(b1+b2)/m1  b2/m1][x5]   [  0    1/m1 ]
[ẋ6]   [   0      k2/m2   −k2/m2     0      b2/m2    −b2/m2][x6]   [  0      0  ]
    (15)
y = Cx, where
C = [1 0 0 0 0 0
     0 1 0 0 0 0
     0 0 1 0 0 0].
Fig.9. Bode diagram of two-stage connector model
Next, the frequency response of the system between the displacement of m2 and FD, the input disturbance force applied to the bottom mass m0, is found. FA = 0 for the case where the force actuator is used only for positioning and the vibration isolation is carried out by the passive damping and stiffness in the system. The bottom mass, m0, is 23 kg, the intermediate mass, m1, is 15 kg, and the top mass, m2, is 16 kg. The stiffness constants, k1 and k2, are 10,000 and 5,000 N/m, respectively, and the damping coefficient, b1, is 100 Ns/m. The state-space formulation is carried out in MATLAB, and the Bode characteristic is shown in Fig.9. The damping coefficient, b2, is varied to see how the response of the system changes with varying damping. The same system is modeled in Simulink as shown in Fig.8. The frequency responses obtained from the MATLAB and Simulink models are identical and are shown in Fig.10.
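The two-stage matrices of (15) need not be typed element by element: they can be assembled programmatically from the mass, damping and stiffness matrices of (13). A minimal Python/NumPy sketch (b2 = 200 Ns/m is an assumed value from the swept range; the other parameters are from the text):

```python
import numpy as np

# Two-stage connector parameters from the text; b2 assumed from the sweep
M = np.diag([23.0, 15.0, 16.0])                 # m0, m1, m2
k1, k2, b1, b2 = 10_000.0, 5_000.0, 100.0, 200.0
K = np.array([[ k1, -k1,      0.0],
              [-k1,  k1 + k2, -k2],
              [0.0, -k2,       k2]])
Bd = np.array([[ b1, -b1,      0.0],
              [-b1,  b1 + b2, -b2],
              [0.0, -b2,       b2]])

# Convert M x'' + Bd x' + K x = F u into first-order form x_dot = A x + B u
Minv = np.linalg.inv(M)
n = 3
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ Bd]])
F = np.array([[1.0], [0.0], [0.0]])             # disturbance FD acts on the bottom mass
Bss = np.vstack([np.zeros((n, 1)), Minv @ F])
C = np.hstack([np.eye(n), np.zeros((n, n))])    # outputs: the three displacements
print(A.shape, Bss.shape, C.shape)
```

The block structure [0 I; −M⁻¹K −M⁻¹B] reproduces the A matrix of (15) term by term, and the same construction scales directly to the four-mass decoupling-stage model.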
Fig.10. Frequency response of two-stage connector model (|X2/F|, b2 varied from 50 to 800 Ns/m)
In order to eliminate high-frequency vibrations which are
transmitted to the actuator through the connector stiffness
element, a decoupling stage is introduced to the model as
shown in Fig.11.
Additional components are:
k3, the stiffness element in the decoupling stage,
b3, the viscous damping element in the decoupling stage, and
m3, the mass of the decoupling stage.
Fig.8. Simulink model of two-stage connector model
 •• 
• 
0 x0   b1 −b1
0 0 x0 
0 x1  −b1 b1+b2 −b2 0 x1 
 ••  +
 • .
0 x2   0 −b2 b1+b3 −b3 x2 
m3••   0 0 −b3 b3  • 
x3 
x3 
k1+k3 −k1 0 −k3 x0  FD−FA 
 −k1 k1+k2 −k2 0 x1   F 
  = A 
 0 −k2 k2 0 x2   0 
  
0 k3 x3   FD 
 −k3
The state-space formulation is given in (19) and then put into matrix form in (20).
x1 = x0,  x2 = x1,  x3 = x2,  x4 = x3
Fig.11. Connector model with decoupling stage
Unlike the coupling stage damper, the decoupling stage
damper will not transmit the disturbances to the actuator and
it will be used to attenuate high-frequency vibrations. The
decoupling stage stiffness needs to be very high as it will be
supporting the decoupling stage damper. If it is not stiff
enough, the decoupling stage spring will deform instead
of the damper dissipating energy. The force applied by the
connector to the payload is a combination of the force
generated in the coupling and decoupling stages. This force
is a function of the displacement and velocity of the coupling
stage and the velocity of decoupling stage:
x5 = ẋ0,  x6 = ẋ1,  x7 = ẋ2,  x8 = ẋ3    (19)
ẋ1 = x5
ẋ2 = x6
ẋ3 = x7
F = k2(x2 − x1) + b2(ẋ2 − ẋ1) + b3(ẋ2 − ẋ3).    (16)
The problem with this model is that the decoupling stage
damper will always be dissipating energy, both from the
disturbance and from the actuator. In the latter case, some of
the energy of the actuator which would normally be used to
move the payload will be dissipated in the damper. The
decoupling stage will also increase the overall mass and
inertia of the connector [2]. The equations of motion for the connector model with the decoupling stage are given in (17) and then put into matrix form in (18).
m0·ẍ0 + b1(ẋ0 − ẋ1) + k1(x0 − x1) + k3(x0 − x3) = FD − FA
m1·ẍ1 + b1(ẋ1 − ẋ0) + k1(x1 − x0) + b2(ẋ1 − ẋ2) + k2(x1 − x2) = FA
m2·ẍ2 + b2(ẋ2 − ẋ1) + k2(x2 − x1) + b3(ẋ2 − ẋ3) = 0
m3·ẍ3 + b3(ẋ3 − ẋ2) + k3(x3 − x0) = FD.    (17)
ẋ4 = x8
ẋ5 = ẍ0 = (1/m0)[FD − FA − b1x5 + b1x6 − (k1 + k3)x1 + k1x2 + k3x4]
ẋ6 = ẍ1 = (1/m1)[FA + b1x5 − (b1 + b2)x6 + b2x7 + k1x1 − (k1 + k2)x2 + k2x3]
ẋ7 = ẍ2 = (1/m2)[b2x6 − (b2 + b3)x7 + b3x8 + k2x2 − k2x3]
ẋ8 = ẍ3 = (1/m3)[FD + b3x7 − b3x8 + k3x1 − k3x4]
Next, the frequency response of the system between the displacement of m2 and FD, the input disturbance force applied to the bottom mass m0, is found. FA = 0 for the case where the force actuator is used only for positioning and the vibration isolation is carried out by the passive damping and stiffness in the system. The bottom mass, m0, is 23 kg, the intermediate mass, m1, is 15 kg, the top mass, m2, is 16 kg, and the decoupling stage mass, m3, is 10 kg. The stiffness constants, k1, k2, and k3, are 10,000, 5,000, and 10,000 N/m, respectively, and the damping coefficient, b1, is 100 Ns/m. The coupling stage damping coefficient, b2, and the decoupling stage damping coefficient, b3, are varied to see how the response of the system changes with varying damping.
In matrix form, ẋ = Ax + Bu, with state vector x = [x1 … x8]ᵀ and input u = [FD FA]ᵀ:

A = [      0            0          0         0        1         0           0         0
           0            0          0         0        0         1           0         0
           0            0          0         0        0         0           1         0
           0            0          0         0        0         0           0         1
     −(k1+k3)/m0     k1/m0        0      k3/m0   −b1/m0     b1/m0          0         0
       k1/m1     −(k1+k2)/m1   k2/m1       0      b1/m1  −(b1+b2)/m1     b2/m1       0
           0         k2/m2    −k2/m2       0        0       b2/m2     −(b2+b3)/m2  b3/m2
       k3/m3            0         0     −k3/m3      0         0          b3/m3    −b3/m3]

B = [  0      0
       0      0
       0      0
       0      0
     1/m0  −1/m0
       0    1/m1
       0      0
     1/m3     0 ]
    (20)
y = Cx, where
C = [1 0 0 0 0 0 0 0
     0 1 0 0 0 0 0 0
     0 0 1 0 0 0 0 0
     0 0 0 1 0 0 0 0].
Fig.13. Bode diagram of decoupling stage connector (from F1 to X3)
The same system is modelled in Simulink as shown in
Fig.12. The state-space formulation is done in MATLAB and
the Bode characteristic and frequency response are shown in
Fig.13 and Fig.14.
Fig.14. Frequency response of decoupling stage connector (|X2/F|, b2 varied from 50 to 800 Ns/m)
By comparing the various connector models as shown in Fig.15, we arrived at the following conclusions. For the model with the decoupling stage, in addition to the high-frequency and the low-frequency modes, two other modes are included in the comparison.
Fig.15. Comparison of frequency response of basic, two-stage and decoupling stage connectors
Fig.12. Simulink model of decoupling stage connector
The first of these two modes is when the decoupling stage damping is set to its maximum value as the coupling stage damping is varied. The second mode is when the coupling stage damping is set to its minimum value as the decoupling stage damping is varied. It can be seen from Fig.15 that none of the four modes of the connector model with the decoupling stage has a performance superior to that of the two-stage connector model that would make it worthwhile to incorporate the decoupling stage into the connector leg model. Therefore, the two-stage connector model is seen as a very good model to be incorporated into flexible structures due to its simplicity in design and its performance, which is superior to that of the basic connector model and comparable to that of the connector model with the decoupling stage,
respectively. However, it would be desirable to combine the
advantages of the models above in one model. At low
frequencies, the model with no decoupling stage provides
minimum energy loss. However, at high frequencies, it does
not provide enough energy dissipation and the decoupling
stage is necessary. A connector with optimal performance at
both low frequencies and high-frequencies can be achieved
by varying the amount of damping in both the coupling and
decoupling stage dampers.
[1] Abu Hanieh, A., Active isolation and damping of
vibrations via Stewart platform, Ph.D. Dissertation,
Universite Libre de Bruxelles, Department of ME
and Robotics, Brussels, Belgium, 2003.
[2] Baiges-Valentin, I., Dynamic modeling of parallel
manipulators, Ph.D. dissertation, University of
Florida, Mechanical Engineering Department,
Gainesville, Florida, 1996.
[3] Bamford, R., Kuo, C. P., Glaser, R., and Wada, B. K.,
AIAA/ASME/ASCE/AHS Structures, Structural
Dynamics & Materials Conf. Collection of Tech.
Papers, v 5. AIAA, NY, USA, 1995. pp. 3278-3284.
[4] Brennan, M. J., Day, M. J., and Randall, R. J.,
“Experimental investigation into the semiactive and
active control of longitudinal vibrations in a large
tie-rod structure,” Journal of Vibration and
Acoustics, Transactions of the ASME, v.120, #1,
1998, p. 1-12.
[5] Denoyer, K. K. and Johnson, C., “Recent achievements
in vibration isolation systems for space launch and
on-orbit applications,” Proceedings of the 52nd
International Astronautical Congress, Toulouse,
France, October 1-5, 2001.
[6] Hac, A. and Youn, I., “Optimal semi-active suspension
with preview based on a quarter car model,”
Journal of Vibration, Acoustics, Stress, and
Reliability in Design, v.114, #1, 1992, p. 84-92.
[7] Johnson, C. D., Wilke, P. S., and Darling, K. R., “Multiaxis whole-spacecraft vibration isolation for small
launch vehicles,” Proc. of SPIE- The Intern. Society
for Optical Eng., v 4331, 2001, p.153-161.
[8] Lane, J. S. and Ferri, A. A., “Control of a flexible
structure using combined active and semi-active
elements,” AIAA/ASME/ASCE/ AHS Structures,
Structural Dynamics & Materials Conference Collection of Tech. Papers, AIAA, NY, USA, v 2,
1995, p. 719-728.
[9] Lee, J., Investigations of Quality Indices of In-Parallel
Platform Manipulators and Development of Web
Based Analysis Tool, University of Florida,
Department of ME, Gainesville, Florida, 2000.
[10] Lee, J.K. and Clark, W.W., “Semi-active control of
flexural vibrations with an mr fluid actuator,”
Proceedings of SPIE – The International Society
for Optical Eng., v 3672, 1999, p.167-174.
[11] Lee-Glauser, G. J., Ahmadi, G., and Layton, J. B.,
“Satellite active and passive vibration control during
liftoff,” Journal of Spacecraft and Rockets, v 33, #3,
1996, p.428-432.
[12] Scruggs, J. and Lindner, D., “Active energy control in
civil structures,” Proc. of SPIE -The Intern. Soc. for
Optical Eng., v 3671, 1999, p.194-205.
[13] Symans, M.D. and Constantinou, M.C., “Experimental
testing and analytical modelling of semi-active fluid
dampers for seismic protection,” J. of Intel. Material
Systems and Structures, v 8, # 8,1997, p.644-657.
[14] Thomson, W. T., Theory of vibration with applications,
Prentice Hall, 3rd Edition, N.J., 1988.
[15] Taniwangsa, W. and Kelly, J.M., “Experimental testing
of a semi-active control scheme for vibration
suppression,” Proceedings of SPIE - The Intern.
Soc. for Optical Eng., v 3045, 1997, p.130-139.
[16] Winthrop, M.F. and Cobb, R.G., “Survey of state-of-the-art
vibration isolation research and technology for
space applications,” Proceedings of SPIE.
Ogun Funmilayo Adebimpe
International Institute for Advanced Aerospace Technologies of
St. Petersburg State University of Aerospace Instrumentation (IIAAT SUAI),
Saint-Petersburg, Russia, Tel. +79117672035
Abstract: This research work covers the design principles of flow and level measurement in flying vehicles (limited to small launchers and satellites), various flow and level sensing methods, static and dynamic characteristics of measurement elements, mathematical modeling of capacitance flowmeters, and future work.
Keywords: sensor, measurement, capacitance, level and flow
1. Introduction
Fuel is a very vital part of any vehicle, and the proper management of fuel determines the success or failure of a mission. A real-life example is the US Boeing Delta 4-Heavy rocket, where errant readings of the fuel sensors shut down the main engines prematurely, significantly reducing the amount of thrust delivered to its upper stage and resulting in a lower orbit than planned [1]. The supercritical environment of high temperature and pressure in which flight vehicles operate, and the nature of some of the fuels, such as hydrazine and cryogenic propellants like oxygen, make the design of sensors complex. In most cases accurate flow measurement is directly related to fuel conservation, which is also proportional to profit or reduced cost.
A sensor is a device that measures a physical quantity and
converts it to a signal which can be read by an observer or by an
instrument. Sensors play a vital role in space mission providing
engineers with essential data concerning the performance of
spacecraft such as satellite.
A fuel quantity sensor measures the level of fuel in a tank or
vessel and a fuel consumption sensor measures the rate of
consumption by measuring the mean value of consumption of fluid
per unit of time. A sensor is also part of a measurement system in
which the accuracy of the entire system is important not just the
sensor alone.
2.1 The Sensing element
This is in contact with the process and gives an output which depends in some way on the variable to be measured, e.g. a thermocouple, where the millivolt e.m.f. depends on temperature, or an orifice plate, where the pressure drop depends on flow rate. If there is more than one sensing element in a system, the element in contact with the process is termed the primary sensing element, the others secondary sensing elements.
1.1 Scope of Work
The flying vehicles under consideration are small launch vehicles during flight, and satellites and micro-satellites of small dimensions, i.e. 30 cm by 30 cm; the sensing method considered is the capacitive flowmeter.
A sensor does not function by itself; it is always part of a larger system that may incorporate many other detectors, signal conditioners, signal processors, memory devices, data recorders, and actuators. The measurement system consists of several elements or blocks, and it is possible to identify four types of element [2].
2.2 The Signal conditioning element
This takes the output of the sensing element and converts it into a form more suitable for further processing, usually a d.c. voltage, d.c. current or frequency signal, e.g. an amplifier, which amplifies millivolts to volts, or an oscillator, which converts an impedance change into a variable-frequency voltage.
2.3 The Signal Processing element
This takes the output of the conditioning element and converts it
into a form more suitable for presentation e.g. Analogue-to-digital
converter (ADC) which converts a voltage into a digital form for input
to a computer, Computer which calculates the measured value of
the variable from the incoming digital data.
Typical calculations are: computation of the total mass of product gas from flow rate and density data, and correction for sensing element non-linearity.
2.4 Data Presentation element
This presents the measured value in a form which can be easily recognised by the observer, e.g. a simple pointer-scale, chart recorder, alphanumeric display, or visual display unit (VDU). The data presentation element is the final element in the measurement system; its function is to communicate the measured value of the variable to an observer. These elements can be broadly classified into displays, charts and recorders; a choice can be made between digital and analog types of display, and digital displays greatly increase accuracy.
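The four-element chain described above (sensing, conditioning, processing, presentation) can be illustrated with a minimal sketch. The gain, reference voltage and resolution below are assumed illustrative values, not taken from the paper.

```python
def amplify(mv, gain=1000.0):
    """Signal conditioning element: amplify a millivolt-level signal to volts."""
    return mv * gain / 1000.0

def adc(voltage, v_ref=5.0, bits=12):
    """Signal processing element: analogue-to-digital conversion (assumed 12-bit, 5 V)."""
    code = int(round(voltage / v_ref * (2 ** bits - 1)))
    return max(0, min(code, 2 ** bits - 1))

# Sensing (thermocouple-like 2.5 mV) -> conditioning -> processing -> presentation
reading = adc(amplify(2.5))
print(reading)  # 2048
```

The presentation element would then display `reading` (or the value scaled back to engineering units) to the observer.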
Fig.1. Instrument model with amplifier, analog to digital converter, and computer output [3]
3. Level Measurement
There are many widely varying methods for the measurement of
liquid level. Level measurement is an important part of process
control. Level sensing can be single point, continuous, direct, or
indirect Continuous level monitoring measures the level of the liquid
on an uninterrupted basis. In this case, the level of the material will
be constantly monitored, and hence the volume can be continuously
monitored, if the cross-sectional area of the container is known [4].
Level sensing devices can be divided into four main categories.
(1) Direct sensing, in which the actual level is monitored
(2) Indirect sensing, in which a property of the liquid, such as
pressure, is sensed to determine the liquid level;
(3) Single point measurement, in which it is only necessary to detect
the presence or absence of a liquid at a specific level;
and (4) free-flowing solid level sensing
3.4 Ultrasonic or sonic devices
This can be used for single point or continuous level measurement of a liquid or a solid. A pulse of sonic waves (approximately 10 kHz) or ultrasonic waves (more than 20 kHz) from the transmitter is reflected from the surface of the liquid to the receiver, and the time for the echo to reach the receiver is measured. The time delay gives the distance from the transmitter and receiver to the surface of the liquid, from which the liquid level can be calculated, knowing the velocity of ultrasonic waves (approximately 340 m/s). Since there is no contact with the liquid, this method can be used for solids, and for corrosive and volatile liquids.
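The echo-time calculation described above is straightforward: the round-trip distance is c·t, so the distance to the surface is half of that. A minimal sketch (the sensor mounting height is an assumed example value):

```python
def liquid_level(echo_time_s, sensor_height_m, c=340.0):
    """Ultrasonic level: distance to surface = c * t / 2; level = mounting height minus that.

    c is the assumed speed of sound in air (~340 m/s, as in the text).
    """
    distance_to_surface = c * echo_time_s / 2.0
    return sensor_height_m - distance_to_surface

# Sensor mounted 2.0 m above the tank bottom; echo returns after 5 ms
print(liquid_level(0.005, 2.0))  # 2.0 - 340*0.005/2 = 1.15 m
```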
3.5 Indirect Level Sensing
A commonly used method of indirectly measuring a liquid level is to measure the hydrostatic pressure at the bottom of the container. The level can be extrapolated from the pressure and the specific weight of the liquid. The level of liquid can also be measured using displacers, capacitive probes, bubblers, resistive tapes, or by weight measurement.
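The hydrostatic method reduces to h = P / (ρg), i.e. level is the gauge pressure at the tank bottom divided by the specific weight of the liquid. A minimal sketch with assumed water-like properties:

```python
def level_from_pressure(p_pa, rho=1000.0, g=9.81):
    """Indirect level sensing: h = P / (rho * g).

    rho (kg/m^3) and g (m/s^2) are assumed values for water at sea level.
    """
    return p_pa / (rho * g)

# A 19.62 kPa gauge reading at the bottom of a water tank
print(round(level_from_pressure(19620.0), 3))
```

For fuels the density must be known (and, in flight, corrected for temperature), which is one reason this is classed as an indirect method.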
3.2 The Sight glass or gauge
This is the simplest method for direct visual reading; the sight glass is normally mounted vertically adjacent to the container, and the liquid level can then be observed directly in the sight glass. The ends of the glass are connected to the top and bottom of the tank via shutoff valves, as would be used with a pressurized container (boiler) or a container with volatile, flammable, hazardous, or pure liquids.
3.6 Single point sensing
This is used for liquids that are non-volatile and conducting; examples are thermal probes, conductive probes, beam-breaking probes, free-flowing solids detectors, and paddle wheels.
3.1 Direct Level Sensing
A number of techniques are used for direct level sensing, such as direct visual indication using a sight glass or a float, and ultrasonic devices.
3.3 Float sensors
There are two types of floats: the angular arm and the pulley. The float material is less dense than the liquid, and floats up and down on top of the material being measured. This method can be used with either liquids or free-flowing solids. The advantages of float sensors are that they are almost independent of the density of the liquid or solid being monitored, are accurate and robust, and have a linear output with level height. The pulley-type float sensor has a linear radial scale that can be replaced with a potentiometer to obtain an electrical signal.
Experimental observations have shown that two distinct types of
flow can exist. The first is laminar flow, or viscous or streamline flow
.Here the particles move in a highly ordered manner, retaining the
same relative positions in successive cross-sections. The second
type of flow is turbulent flow. This is highly disordered each particle
moves randomly in three dimensions and occupies different relative
positions in successive cross-sections. As a result, the velocity and
pressure at a given point in the pipe are both subject to small
random fluctuations with time about their mean values. The
Reynolds number tells us whether the flow in a given situation is
laminar or turbulent. It is a dimensionless number given as
Re = ρvl / η,
where l is a characteristic length, e.g. the pipe diameter. The Reynolds number represents the ratio of inertial forces (proportional to vlρ) to viscous forces (proportional to η); thus a low value of Re implies laminar flow and a high value turbulent flow. The following is an approximate guide:
Re < 2 × 10³: laminar flow,
2 × 10³ < Re < 10⁴: transition region,
Re > 10⁴: turbulent flow.
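The Reynolds-number guide above is easy to encode. A minimal sketch (the water properties and pipe size in the example are assumed illustrative values):

```python
def reynolds(rho, v, l, eta):
    """Re = rho * v * l / eta — ratio of inertial to viscous forces."""
    return rho * v * l / eta

def regime(re):
    """Classify flow using the approximate guide from the text."""
    if re < 2e3:
        return "laminar"
    if re < 1e4:
        return "transition"
    return "turbulent"

# Water (rho ~ 1000 kg/m^3, eta ~ 1e-3 Pa.s) at 1 m/s in a 50 mm pipe
re = reynolds(1000.0, 1.0, 0.05, 1e-3)
print(re, regime(re))  # 50000.0 turbulent
```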
Some common types of flowmeters are described below.
4.1 Differential pressure flowmeters
These are the most common industrial flowmeters for clean liquids
and gases. Here a constriction is placed in the pipe and the
differential pressure developed across the constriction is measured.
The main problem is to accurately infer volume flowrate from the
measured differential pressure (D/P).
Some general characteristics of differential pressure flowmeters are given below. They have no moving parts, and are robust, reliable and easy to maintain, widely established and accepted. There is always a permanent pressure loss due to frictional effects; the cost of the extra pumping energy may be significant for large pipes. These devices are non-linear, i.e. Q ∝ √∆P; this limits the useful range of a meter to between 25% and 100% of maximum flow. At lower flows the differential pressure measurement is below 6% of full scale and is clearly inaccurate. They can only be used for clean fluids, where there is well-established turbulent flow, i.e. Re > 10⁴ approximately; they are not generally used if solids are present, except for the Venturi with dilute slurries.
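The square-root law explains the limited rangeability quoted above: at 25% of maximum flow, the differential pressure is only (0.25)² ≈ 6% of full scale. A minimal sketch (k is an assumed meter constant, not a value from the text):

```python
import math

def dp_flowrate(dp_pa, k=0.01):
    """Differential-pressure meter: Q = k * sqrt(dP), with k an assumed meter constant."""
    return k * math.sqrt(dp_pa)

# Full-scale d/p of 10 kPa; 6.25% of full scale gives only a quarter of max flow
q_max = dp_flowrate(10_000.0)
print(dp_flowrate(625.0) / q_max)  # 0.25
```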
A typical flowmeter system consists of the differential pressure sensing element, differential pressure transmitter, interface circuit and microcontroller. For a transmitter giving a d.c. current output signal (typically 4 to 20 mA) the interface circuit consists of an amplifier acting as a current-to-voltage converter and an analogue-to-digital converter. For a resonator transmitter giving a sinusoidal output of variable frequency the interface circuit consists of a Schmitt trigger and a binary counter. Examples of differential pressure flowmeters are the Venturi, the orifice plate and the Dall meter.
The total number of machine cycles during T is
∫ f dt = K ∫ Q dt = KV,
i.e. the total count is proportional to volume. A good example of this is the turbine flowmeter.
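Since the total pulse count equals KV, the delivered volume follows from a single division. A minimal sketch (the meter factor below is an assumed illustrative value):

```python
# Total volume from a turbine-meter pulse count: count = K * V, so V = count / K
K = 1200.0        # pulses per cubic metre (assumed meter factor)
pulses = 30_000   # pulses accumulated over the interval T
volume_m3 = pulses / K
print(volume_m3)  # 25.0
```

In practice the counter is read periodically and K is taken from the meter's calibration certificate.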
4.3 Vortex flowmeters
The operating principle of the vortex flowmeter is based on the
natural phenomenon of vortex shedding. When a fluid flows over
any body, boundary layers of slower moving fluid move over the
surface of the body. If the body is streamlined, as in an aerofoil,
then the boundary layers can follow the contours of the body and
remain attached to the body over most of its surface. The boundary
layers become detached at the separation points which are well to
the rear of the body, and the resulting wake is small. If, however,
the body is unstreamlined, i.e. bluff, e.g. a rectangular, circular or
triangular cylinder, then the boundary layers cannot follow the
contours and become separated much further upstream. The
resulting wake is now much larger. The vortex mechanism produces
approximately sinusoidal variations in fluid velocity and pressure, at
the shedding frequency f, in the vicinity of the bluff body. Three bluff body shapes are commonly used, employing three different methods of vortex detection: piezoelectric, thermal and ultrasonic. Vortex flowmeters give a frequency pulse
output signal proportional to flow rate like turbine meters; however,
they have no moving parts and are more reliable. The bluff body
provides an obstruction to flow so that vortex meters can only be
used with clean liquids and gases; there is also a permanent
pressure loss. A typical range of vortex flowmeters covers pipe
diameters of 15, 25, 40, 50, 100, 150 and 200 mm [2].The accuracy
(including non-linearity, hysteresis and repeatability) for liquids is
±0.65% of flow rate for pipe Reynolds numbers greater than 20 000.
For gases and steam, the accuracy is ±1.35% of flow rate for pipe
Reynolds numbers greater than 15 000.
4.2 Mechanical flowmeters
A mechanical flowmeter is a machine which is placed in the path of the flow and made to move by the flow. The number of machine cycles per second, f, is proportional to the volume flow rate Q, i.e. f = KQ, so that measurement of f yields Q. Mechanical flowmeters are also often used to measure the total volume V = ∫ Q dt of fluid delivered during a time interval T.
5. Static and dynamic characteristics of measurement elements
This is concerned with static or steady-state characteristics; these are the relationships which may occur between the output O and input I of an element when I is either at a constant value or changing slowly [2].
5.1 Systematic characteristics
Systematic characteristics are those that can be exactly quantified by mathematical or graphical means; they are explained below.
5.2 Range
The input range of an element is specified by the minimum and maximum values of I, i.e. I_MIN to I_MAX. The output range is specified by the minimum and maximum values of O, i.e. O_MIN to O_MAX.
5.3 Span
Span is the maximum variation in input or output, i.e. input span is I_MAX − I_MIN, and output span is O_MAX − O_MIN.
5.4 Ideal straight line
An element is said to be linear if corresponding values of I and O lie on a straight line. The ideal straight line connects the minimum point A(I_MIN, O_MIN) to the maximum point B(I_MAX, O_MAX):
O_IDEAL = KI + a,    (1)
where K is the ideal straight-line slope, which is also the sensitivity for a linear element, and a is the ideal straight-line intercept, also called the zero bias.
5.5 Non-linearity
In many cases the straight-line relationship defined by equation (1) is not obeyed and the element is said to be non-linear. Non-linearity can be defined in terms of a function N(I) which is the difference between actual and ideal straight-line behaviour, i.e.
N(I) = O(I) − (KI + a).    (2)
Non-linearity is often quantified in terms of the maximum non-linearity expressed as a percentage of full-scale deflection (f.s.d.), i.e. as a percentage of span.
5.6 Environmental effects
In general, the output O depends not only on the signal input I but on environmental inputs such as ambient temperature, atmospheric pressure, relative humidity, supply voltage, etc.
A modifying input I_M causes the linear sensitivity of an element to change. K is the sensitivity at standard conditions when I_M = 0. If the input is changed from the standard value, then I_M is the deviation from standard conditions, i.e. (new value − standard value). The sensitivity changes from K to K + K_M·I_M, where K_M is the change in sensitivity for unit change in I_M.
An interfering input I_I causes the straight-line intercept or zero bias to change. a is the zero bias at standard conditions when I_I = 0. If the input is changed from the standard value, then I_I is the deviation from standard conditions, i.e. (new value − standard value). The zero bias changes from a to a + K_I·I_I, where K_I is the change in zero bias for unit change in I_I. K_M and K_I are referred to as environmental coupling constants or sensitivities. Thus the model must be corrected such that
O = KI + a + N(I) + K_M·I_M·I + K_I·I_I.    (3)
5.7 Hysteresis
For a given value of I, the output O may be different depending on whether I is increasing or decreasing. Hysteresis is the difference between these two values of O:
H(I) = O(I)_I↓ − O(I)_I↑.
Maximum hysteresis is quantified as a percentage of f.s.d., i.e. as
(Ĥ / (O_MAX − O_MIN)) × 100%.
5.8 Resolution
Some elements are characterised by the output increasing in a
series of discrete steps or jumps in response to a continuous
increase in input. Resolution is defined as the largest change in I
that can occur without any corresponding change in O .
5.9 Generalised model of a system element
If hysteresis and resolution effects are not present in an element but environmental and non-linear effects are, then the steady-state output O of the element is in general given by equation (3), i.e.
O = KI + a + N(I) + K_M·I_M·I + K_I·I_I.
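The generalised static model above can be written directly as a function. The coefficient values in the example are assumed illustrative numbers, not calibration data from the text:

```python
def element_output(I, K=1.0, a=0.0, N=lambda I: 0.0, K_M=0.0, I_M=0.0, K_I=0.0, I_I=0.0):
    """Generalised static model of an element: O = K*I + a + N(I) + K_M*I_M*I + K_I*I_I.

    K: sensitivity, a: zero bias, N: non-linearity function,
    I_M/I_I: modifying and interfering input deviations from standard conditions.
    """
    return K * I + a + N(I) + K_M * I_M * I + K_I * I_I

# Element at standard conditions, then with a modifying input 10 units above standard
print(element_output(5.0, K=2.0, a=0.5))
print(element_output(5.0, K=2.0, a=0.5, K_M=0.01, I_M=10.0))
```

Comparing the two outputs shows the sensitivity change K → K + K_M·I_M produced by the modifying input.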
5.10 Dynamic Characteristics
Under static conditions, a sensor is fully described by its transfer
function, span, calibration, and so forth. However, when an input
stimulus varies, a sensor response generally does not follow with
perfect accuracy [5]. The reason is that both the sensor and its
coupling with the source of stimulus cannot always respond
instantly. In other words, a sensor may be characterized by a time-dependent characteristic, which is called a dynamic characteristic. These are most conveniently summarised using a transfer
function G(s). If a sensor does not respond instantly, it may indicate
values of stimuli which are somewhat different from the real; that is,
the sensor responds with a dynamic error. If a sensor is a part of a
control system which has its own dynamic characteristics, the
combination may cause, at best, a delay in representing a true
value of a stimulus or, at worst, cause oscillations.
The transfer function for a first-order element is
G(s) = 1 / (1 + τs),
where τ is the time constant in seconds and G(s) is the transfer function. A first-order system step response is given by
S = S_m(1 − e^(−t/τ)),
where S_m is the steady-state output, t is time, and e is the base of the natural logarithm. Substituting t = τ, we get
S/S_m = 1 − e⁻¹ ≈ 0.6321;
in other words, after an elapse of time equal to one time constant, the response reaches about 63% of its steady-state level. Similarly, it can be shown that after two time constants the response will reach 86.5%, and after three time constants 95%. The capacitance flowmeter is an example of a first-order measurement element [5].
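The 63%/86.5%/95% figures follow directly from the step response formula. A minimal sketch (the time constant used is an assumed example value):

```python
import math

def first_order_step(t, tau, s_m=1.0):
    """S(t) = S_m * (1 - exp(-t/tau)): step response of a first-order element."""
    return s_m * (1.0 - math.exp(-t / tau))

tau = 2.0  # assumed example time constant, seconds
for n in (1, 2, 3):
    print(f"t = {n}*tau: {100 * first_order_step(n * tau, tau):.1f}% of steady state")
```

Whatever τ is, the response always reaches the same fractions of S_m at t = τ, 2τ and 3τ, which is why the time constant alone characterises a first-order element.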
The capacitance method of sensing is chosen because of its wide
range of application. The simplest capacitor or condenser consists
of two parallel metal plates separated by a dielectric or insulating
material. The capacitance of this parallel plate capacitor is given by:
ε εA
C= 0 ,
where ε 0 is the permittivity of free space (vacuum) of magnitude
8.85 pF m−1, ε is the relative permittivity or dielectric constant of the
insulating material, A m2 is the area of overlap of the plates, and d
m is their separation. We see from eqn(9) that C can be changed by
changing either d, A or ε. So we can have variable area, distance or
dielectric types of capacitance elements. The dielectric constant of a
liquid is usually different from that of the air or other gas above it. When one or more pairs of electrodes are immersed in a liquid, the variation in dielectric due to the rising or falling liquid level causes a change in capacitance between the electrode pairs. The tank wall, if metallic, and the liquid, if it has a reasonable dielectric constant, can be used as a pair of electrodes. If the liquid has a low dielectric constant and the tank is non-conducting, a pair of co-axial cylinders is used [6].
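The level-sensing principle just described can be sketched numerically: the wetted and dry portions of a pair of plates act as capacitors in parallel. The plate dimensions and the kerosene-like dielectric constant below are assumed for illustration only:

```python
EPS0 = 8.854e-12  # F/m, permittivity of free space

def parallel_plate_capacitance(area_m2, gap_m, eps_r):
    """C = eps0 * eps_r * A / d for a parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

def level_probe_capacitance(width_m, height_m, gap_m, level_m,
                            eps_liquid, eps_gas=1.0):
    """Two vertical plates partly immersed in liquid: the wetted and dry
    portions act as capacitors in parallel, so C varies linearly with level."""
    wet = parallel_plate_capacitance(width_m * level_m, gap_m, eps_liquid)
    dry = parallel_plate_capacitance(width_m * (height_m - level_m), gap_m, eps_gas)
    return wet + dry

# Example: 10 cm x 30 cm plates, 2 mm gap, liquid with eps_r ~ 2.1 (assumed):
empty = level_probe_capacitance(0.10, 0.30, 0.002, 0.00, 2.1)
full = level_probe_capacitance(0.10, 0.30, 0.002, 0.30, 2.1)
```

The ratio of the full to the empty reading equals the liquid's dielectric constant, which is what makes the method usable as a level (and hence quantity) gauge.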
6.1 Mathematical Model of Capacitive Sensor
Consider a pair of co-axial cylindrical electrodes of length L and radii R1 and R2, carrying an initial charge (amount of charge)

Q0 = C U0,

where C is the capacitance and U0 the initial voltage. With the volume of the space between the cylinders denoted V0 = 2πL(R2 − R1), the charge per unit volume is

ρV = Q0 / V0 = C U0 / (2πL(R2 − R1)).

As the fluid flows with velocity v, the charge carried out of the sensing volume in time ∂t is

∂Q = −∂V ρV = −2πv ∂t (R2 − R1) C U0 / (2πL(R2 − R1)) = −(v C U0 / L) ∂t,

so that the corresponding voltage change is

∂U = ∂Q / C = −U0 (v/L) ∂t,

and hence

U(t) = U0 e^(−(v/L) t),

with the time constant

τ = L / v,   [m] / [m/s] = [s],

where τ is the time constant in seconds; the corresponding first-order expression is the transfer function of the flowmeter.
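The exponential discharge derived above can be checked numerically; a minimal sketch of U(t) = U0 e^(−(v/L)t), with illustrative values:

```python
import math

def sensor_voltage(t, u0, velocity, length):
    """U(t) = U0 * exp(-(v/L)*t): voltage on the capacitive flowmeter
    electrodes as charged fluid is swept out; time constant tau = L/v [s]."""
    tau = length / velocity  # seconds, since [m] / [m/s] = [s]
    return u0 * math.exp(-t / tau)

# After one time constant (t = tau) the voltage has fallen to U0/e ~ 36.8%:
u_tau = sensor_voltage(2.0, 1.0, velocity=1.0, length=2.0)  # tau = 2 s, t = 2 s
```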
The research project is ongoing. In future, topics such as the static and dynamic errors of the measurement element will be considered, especially taking into account measurement errors during flight, together with an experimental setup for flow measurement using capacitance sensing, which includes developing an equivalent circuit, analysis, modeling and simulation of results using software such as Matlab and LabVIEW. The findings of this research will give the necessary recommendations for modeling accurate measurement of fuel in flight using the capacitance method of sensing flow.
Fig.2: A pair of co-axial cylinders used to measure flowrate

The capacitance of two co-axial cylinders is

C = 2πεε0 L / ln(R2/R1),

and the volume of the space between the two cylinders is

V0 = VR2 − VR1 = 2πR2 L − 2πR1 L = 2πL(R2 − R1).
1. Sensor Glitch Cuts Boeing's First Delta 4-Heavy Flight Short, Spaceflight Now, January 2005.
2. Bentley, John P., Principles of Measurement Systems, Fourth Edition, England, 2005.
3. Webster, John G., Measurement, Instrumentation and Sensors Handbook, CRC Press LLC, 1999.
4. Dunn, William C., Introduction to Instrumentation, Sensors and Process Control, London, 2006.
5. Fraden, Jacob, Handbook of Modern Sensors: Physics, Designs and Applications, Third Edition, San Diego, California, 2004.
6. Nebylov, A.V., Editor. Aerospace Sensors. Momentum Press, USA (to be printed), 2010.
Bawa, John Majinga
National Space Research and Development Agency (NASRDA),
Obasanjo Space Center, Airport Road, Abuja, Nigeria.
Abstract: The Global Positioning System (GPS) is a US space-based global navigation satellite system. It provides reliable positioning, navigation, and timing services to users worldwide on a continuous basis in all weather. The Global Positioning System can show your exact position on Earth anytime, anywhere, in any weather. GPS was originally a military project but is considered a dual-use technology, meaning it has significant applications for both the military and the civilian industry. GPS signals can be detected by GPS receivers, which calculate their locations anywhere on Earth to within less than a meter by determining distances from the GPS satellites. This paper outlines the requirements for GPS tracking of sounding rockets. Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data, including the chemical makeup and physical processes taking place in the atmosphere, the natural radiation surrounding the Earth, and data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities.
Keywords: GPS, GLONASS, tracking, navigation system, data processing, antenna
The Global Positioning System (GPS) is a space-based global navigation satellite system. It provides reliable positioning, navigation, and timing services to users worldwide on a continuous basis in all weather, and can show the user's exact position on Earth anytime, anywhere. The signals can be detected by GPS receivers, which calculate their locations anywhere on Earth to within less than a meter by determining distances from the GPS satellites. First launched in 1978, the development of a global navigation system dates back to the 1960s, when The Aerospace Corporation was a principal participant in the conception and development of GPS, a technology that has significantly enhanced the capabilities of the US military and continues to find new uses and applications in daily life. GPS belongs to a larger class of radio navigation systems that allow the user to determine his range and/or direction from a known transmitting station by measuring the differential time of travel of the signal. The Global Orbiting Navigation Satellite System (GLONASS), developed by Russia, has an almost equivalent structure and is used for satellite radio navigation purposes similar to GPS.
On sounding rockets and launch vehicles, GPS can serve multiple purposes for reasonable cost and effort, besides providing essential information for flight safety operations during the propelled flight phase. Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data, including the chemical makeup and physical processes taking place in the atmosphere, the natural radiation surrounding the Earth, and data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. Sounding rockets are projectiles launched into the upper atmosphere to reach altitudes of between 40 and 200 km. They are developed primarily for space and Earth science research activities. Sounding rockets are as important to the space sciences as ever, furnishing our most powerful means of obtaining vertical profiles of atmospheric properties. Many space agencies in the world continue to depend on sounding rockets for research in aeronomy, meteorology, ionospheric physics, and many other disciplines. GPS delivers valuable information for post-mission trajectory determination and performance assessment, as well as highly accurate timing information for event triggering or time tagging of measurements.
The launch of sounding rockets is traditionally supported and monitored by ground-based tracking radars. They provide trajectory information for a variety of applications, including range safety monitoring, scientific data analysis and payload recovery. Due to the pronounced operations and maintenance costs of radar systems, interest in alternative GPS-based tracking and navigation systems has continuously increased. The term navigation is used for real-time processing of the positioning data. The advantages of GPS include the provision of both position and velocity information, the high accuracy and onboard availability of navigation data, as well as the provision of a precise timing reference.
The Global Positioning System (GPS) is a space-based radio navigation system that provides consistent position, navigation, and timing services to civilian users on a continuous worldwide basis. A GPS receiver provides exact location and time information for an unlimited number of users, in all weather, day and night, anywhere in the world.
Fig. 1. GPS satellite constellations

The Global Positioning System (GPS) consists of three segments, namely:
I. The space segment.
II. The control segment.
III. The user segment.

The space segment: The basic GPS Space Segment consists of 24 operational satellites. The satellites are placed in 6 orbital planes with 4 operational satellites in each plane. The satellite orbital planes have an inclination, relative to the equator, of 55°, and the orbit height is about 20,200 km. The satellites complete an orbit in approximately 12 hours. The relative phasing of satellites from one orbital plane to the next is 40°.

Fig.2. The GPS Space Segment

The GPS satellites continuously broadcast on two L-band frequencies (L1 and L2). Superimposed on these carriers are two coded signals unique to each satellite: a precision code (P-code) Pseudo-Random Noise (PRN) signal with a 10.23 MHz chip rate (i.e., f0) and a coarse/acquisition code (C/A-code) PRN signal with a 1.023 MHz chip rate (i.e., f0/10). The L1 frequency contains both the P-code and the C/A-code together with the navigation message, while the L2 frequency contains the P-code only. Furthermore, the 'navigation message' is modulated on both carriers at a rate of 50 Hz. In general, the signal structure can be described by the following equation:

GPS(t) = Ac C(t) D(t) sin(2π fL1 t) + Ap P(t) D(t) cos(2π fL1 t) + Bp P(t) D(t) cos(2π fL2 t),

where Ac, Ap and Bp denote the amplitudes of the C/A-code on L1, of the P-code on L1 and of the P-code on L2, respectively; C(t) is the C/A-code, P(t) is the P-code, and D(t) is the navigation message.
The control segment: The GPS Control Segment consists of one Master Control Station (MCS) at Falcon AFB in Colorado Springs (USA), plus five monitor stations at the MCS, Hawaii, Kwajalein, Diego Garcia and Ascension. All monitor stations except Hawaii and Falcon are also equipped with ground antennas for communication with the GPS satellites.
The user segment: The User Segment consists of a variety of military and civilian users with GPS receivers connected to GPS antennas. GPS receivers are specifically designed to receive, decode and process the satellite signals, which can be used to determine position anywhere in the world.
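The position fix described above can be sketched as an iterative least-squares solution of the range equations. The satellite coordinates and error-free ranges below are toy values, and the receiver clock bias is deliberately ignored for brevity:

```python
import numpy as np

def trilaterate(sat_pos, ranges, x0=None, iters=10):
    """Gauss-Newton least-squares position fix from ranges to satellites at
    known positions (receiver clock bias and measurement noise ignored)."""
    sat_pos = np.asarray(sat_pos, dtype=float)
    x = np.zeros(3) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - sat_pos                    # vectors receiver -> satellites
        rho = np.linalg.norm(diff, axis=1)    # predicted ranges
        H = diff / rho[:, None]               # Jacobian of rho w.r.t. x
        dx, *_ = np.linalg.lstsq(H, ranges - rho, rcond=None)
        x = x + dx
    return x

# Toy geometry: four satellites at ~20,000 km, receiver near the origin.
sats = [[20e6, 0, 0], [0, 20e6, 0], [0, 0, 20e6], [12e6, 12e6, 12e6]]
truth = np.array([1e6, 2e6, 3e6])
meas = np.linalg.norm(truth - np.asarray(sats), axis=1)
est = trilaterate(sats, meas)
```

In a real receiver a fourth unknown, the clock bias, is estimated jointly from pseudoranges to at least four satellites; the structure of the iteration is the same.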
The Global Positioning System (GPS) was originally a military project, but serves as a dual-use technology, meaning it has significant applications for both the military and the civilian industry. A GPS receiver may serve multiple purposes on a sounding rocket during the individual mission phases. Possible applications include:
Range safety: During the boosted ascent trajectory, the GPS position and velocity measurements allow rapid recognition of guidance errors and real-time prediction of the instantaneous impact point (IIP). Based on this information, the range safety officer may decide on the termination of an abnormal flight.
Geolocation and time tagging: Absolute position and timing data collected jointly with the science measurements are essential for the study of regional and temporal variations in the atmosphere and magnetosphere. In the case of multiple payloads separated during a mission or flown simultaneously on different rockets, GPS can provide highly accurate relative state vector and timing information for science data synchronization.
Event triggering: Using absolute time and position data, experiments and service systems may be precisely activated at the desired flight stage. A GPS receiver may thus take over functions traditionally performed by mechanical timers and barometric switches.
Recovery: During the final descent and parachute phase, a GPS receiver can continuously relay the instantaneous payload position to the control center to allow a rapid and reliable recovery even in the presence of pronounced wind fields.
Performance and trajectory analysis: The position and velocity measurements of a GPS receiver can be used to compare the actual performance of a boost motor with pre-mission models and to infer the aerodynamic properties of the sounding rocket.
Aside from the high accuracy of the basic navigation and timing information, which is nowadays already available with single-frequency Coarse/Acquisition (C/A) code receivers, GPS has the additional benefit of onboard data availability. This offers the prospect of increased autonomy in future sounding rocket systems and may, for example, be applied for onboard geocoding or onboard IIP prediction.
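The IIP prediction mentioned above can be illustrated with a deliberately simplified ballistic model. This sketch assumes a flat, non-rotating Earth with no aerodynamic drag, which is far cruder than any operational range-safety algorithm:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def instantaneous_impact_point(pos, vel):
    """Predict where the vehicle would land if thrust stopped now, assuming
    a flat, non-rotating Earth and no drag. pos = (x, y, z) in metres with
    z up; vel = (vx, vy, vz) in m/s. Returns (x_impact, y_impact, t_fall)."""
    x, y, z = pos
    vx, vy, vz = vel
    # Positive root of z + vz*t - 0.5*g*t^2 = 0 gives the fall time:
    t = (vz + math.sqrt(vz * vz + 2.0 * G0 * z)) / G0
    return (x + vx * t, y + vy * t, t)

# Example: 1 km altitude, moving 50 m/s downrange with no vertical speed:
iip = instantaneous_impact_point((0.0, 0.0, 1000.0), (50.0, 0.0, 0.0))
```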
A Global Positioning System (GPS) tracking system for highly dynamic platforms involves some additional components. For the reception of the GPS signal, suitable antennas and low-noise amplifiers (LNA) are required. Antennas for sounding rockets should provide near omni-directional coverage and be mounted on the payload module to allow GPS tracking during all flight phases, independent of the instantaneous body orientation. These requirements are almost ideally met by wrap-around antennas, which consist of properly coupled individual patches mounted like a belt around the circumference of the sounding rocket. Despite their superior technical characteristics, wrap-around antennas exhibit various drawbacks. They are difficult to design and manufacture for large diameters and are typically available only for tubes with a maximum diameter of 4 inches; in addition they are expensive and are subject to United States (US) export restrictions, which further limit or complicate their use for European providers of sounding rocket services.
Triggered by the above limitations and drawbacks, various alternative antenna systems that support tailored mission requirements at reduced system engineering cost have therefore been developed by the authors and qualified on European sounding rockets. Among the potential alternatives are tip antennas and blade antennas.
Tip antenna: For Global Positioning System (GPS) tracking during the first flight phase, a helical antenna mounted in the tip of the rocket cone has been designed and manufactured by DLR. The tip antenna provides near-hemispherical coverage during the ascent trajectory. As long as the rocket keeps its initial zenith-facing attitude, an antenna mounted in the rocket tip ensures the best possible signal reception. Tip antennas were first used in 1998 on the Maxus-3 flight. Due to the particular antenna placement, the GPS signal reception is independent of the rocket's spin about the body axis, and therefore provides an ideal choice for GPS-based range safety systems that require reliable tracking during the boost phase of a sounding rocket flight.
Fig. 3. Tip antenna
Blade antenna combination: In an effort to minimize antenna switches and improve the overall mission coverage, the use of combined blade antennas has been studied by DLR. This type of antenna is widely employed in aeronautical and aerospace applications for telemetry and telecommand data transmission. Blade antennas are known for their resistance to high temperature and mechanical stress. To achieve near full-sky visibility, a minimum of two antennas is required, attached to the body or payload segment of the rocket opposite each other and connected to the receiver via a power combiner. The coverage resembles that of a wrap-around antenna but exhibits pronounced gaps due to destructive interference of the signals from the two antennas. Compared to wrap-around antennas, blade antenna combination systems can be manufactured at less than 10% of the overall system cost.
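The interference gaps noted above can be estimated from a simple two-element array model. This sketch assumes isotropic elements fed in phase; the tube diameter is an illustrative value, not one from the paper:

```python
import math

L1_WAVELENGTH = 0.1903  # m, GPS L1 carrier wavelength (c / 1575.42 MHz)

def combined_gain(theta_rad, spacing_m, wavelength=L1_WAVELENGTH):
    """Normalized field of two in-phase isotropic radiators separated by
    `spacing_m` (the tube diameter), versus angle from the broadside plane.
    Nulls appear where the path difference spacing*sin(theta) = lambda/2."""
    psi = math.pi * spacing_m / wavelength * math.sin(theta_rad)
    return abs(math.cos(psi))

# For an assumed 0.35 m tube, the first null is at sin(theta) = 0.5*lambda/d:
theta_null = math.asin(0.5 * L1_WAVELENGTH / 0.35)
g_null = combined_gain(theta_null, 0.35)    # destructive interference (gap)
g_broadside = combined_gain(0.0, 0.35)      # constructive interference
```

Because the tube diameter is several wavelengths at L1, such nulls recur at several angles, which is the origin of the pronounced coverage gaps mentioned in the text.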
Furthermore, this approach does not require special treatment of the rocket structure for mounting. The detailed shape of the combined antenna diagram depends on the tube diameter and can be measured in ground experiments. The simple design makes the antennas environmentally robust and enables tracking even during the hot reentry phase. The disadvantages of the blade antenna are its linear polarization, which implies a 3 dB gain loss and increased multipath errors in the launch site environment. Furthermore, the large bandwidth of the blade antennas makes GPS reception more sensitive to jamming.
To overcome the disadvantages of blade antennas, specific flush-mounted patch antenna devices have been designed and implemented by Kayser-Threde. These can be fitted into specific recesses in the walls of the rocket and are used to replace the dual blade antenna combination.
Fig. 4. Blade antenna for GPS applications
Global Positioning System (GPS) tracking of sounding rockets differs in various ways from terrestrial GPS applications. Evidently, the receiver must be operated at higher speeds and altitudes than imposed by the International Traffic in Arms Regulations (ITAR), which restrict the use of publicly available receivers to heights of less than 60,000 feet and velocities of less than 1,000 knots (about 500 m/s). For the use of commercial receiver systems onboard a sounding rocket, the hardcoded firmware restrictions have to be disabled by the manufacturer, and an appropriate waiver by the US government is required for the receiver operation.
Out of the various receivers that have been tested in a signal simulator environment or flown in early trials with the altitude and velocity constraints deactivated, the Ashtech G12 HDMA receiver has emerged as the adopted standard for sounding rocket applications. It has likewise been applied by KT in the Texus and Maxus programs from 2000 onwards. Within these flights, good tracking performance has generally been obtained, even though dropouts related to antenna switching have been observed in some cases, indicating problems in signal acquisition after occasional interrupts. Given these technical limitations and the high cost, a development program for an independent GPS tracking system was initiated at DLR in 1999. Within this programme a Mitel GPS Orion receiver has been specifically modified and adapted to support sounding rocket missions.
The Orion receiver itself has been built by DLR based on Mitel design information. It makes use of the GP2000 chipset, which comprises a GP2015 RF down-converter, a DW9255 SAW filter, a GP2021 correlator and a 32-bit ARM-60B microprocessor. The size and the open-source policy make the Orion receiver particularly interesting for the envisaged application on a sounding rocket. Most functions of the interface board are taken over by the onboard power system and data handling system, allowing a total receiver size of roughly 10 x 5 x 1 cm.
Before the signal is processed by the receiver, it is pre-amplified and filtered at the antenna, and subsequently down-shifted in frequency to a manageable level for processing. The mixed signal is given by

Sγ(t) SP(t) = A cos(2π fLO t) cos(2π fS t + φ(t)) = (A/2)[cos(2π(fS − fLO)t + φ(t)) + cos(2π(fS + fLO)t + φ(t))],

where Sγ(t) is the pure signal sinusoid generated by the receiver oscillator, fLO is the local oscillator frequency, SP(t) is the satellite signal with frequency fS, and A is an amplitude. The satellite signal is then shifted to an intermediate frequency (IF), and appropriate filters are applied to control the amplitude of the signal for subsequent processing. The signal then passes to the main signal processing part of the receiver.

Fig.5. GPS receiver

Each GPS satellite carries an atomic clock to provide timing information for the signals it transmits. The clocks oscillate at a particular frequency. The relationship between the phase φ, the frequency f, and the time t is

f(t) = dφ(t)/dt,

where t represents true time. The phase of an oscillating signal is the angle it sweeps out over time (0 ≤ φ ≤ 2π) and has the units of cycles. The frequency of the signal is the rate at which the phase changes in time and has the units of Hertz (cycles per second). Integrating,

φ(t) = φ(t0) + ∫[t0, t] f(t) dt,

where t0 is some initial time. Let τ denote the time indicated by the oscillator, related to the phase by

τ(t) = (φ(t) − φ0) / f0,

where f0 is some nominal (constant) frequency. Since the initial indicated time does not coincide with the initial phase (φ(t0) ≠ φ0), the oscillator clock time τ and the true time t differ from each other both in scale and in origin. The true time reflects the atomic clock time in the U.S.A., which by 2000 differed from Coordinated Universal Time (UTC) by 13 seconds; the GPS true time is calibrated against U.S. atomic time. This reflects the fact that the times indicated on satellite and receiver clocks are not perfectly uniform and must be calibrated by master clocks on Earth. The relationship between the phase-time τ and the true time t is

τ(t) = t − t0 + τ(t0) + δτ(t),

where

δτ(t) = (1/f0) ∫[t0, t] δf(t) dt.

The abbreviated form is

τ(t) = t − Δτ(t).

The signal is a carrier wave (sinusoidal wave) modulated in phase by binary codes that represent interpretable data. It can be represented mathematically by

S(t) = A C(t) D(t) cos(2π f t),

where f is the frequency of the carrier wave and A is the amplitude of the signal. The code sequence C(t) is a step function taking the values (1, −1), also known as chips or bits. D(t) represents the data message.

This paper has described the navigation and tracking of a sounding rocket using GPS. GPS technology has exciting potential for bringing location awareness to many mainstream consumer platforms and making lives more convenient and safe. These opportunities bring a new set of technical challenges that can be solved through better infrastructure, improved receiver architectures and the use of high-performance, low-cost and low-voltage process technologies.
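The modulated-carrier model S(t) = A C(t) D(t) cos(2πft) can be sampled directly; a toy sketch with a short made-up ±1 code (not an actual C/A sequence, and with scaled-down frequencies for readability):

```python
import math

def gps_signal_sample(t, code, data, f_carrier, chip_rate, bit_rate, amp=1.0):
    """One sample of S(t) = A*C(t)*D(t)*cos(2*pi*f*t), where C(t) and D(t)
    are step functions over {+1, -1} advancing at chip_rate and bit_rate."""
    c = code[int(t * chip_rate) % len(code)]   # current chip of the code C(t)
    d = data[int(t * bit_rate) % len(data)]    # current bit of the message D(t)
    return amp * c * d * math.cos(2.0 * math.pi * f_carrier * t)

toy_code = [1, -1, 1, 1, -1]   # made-up +/-1 spreading code
toy_data = [1]                 # constant data bit
s0 = gps_signal_sample(0.0, toy_code, toy_data,
                       f_carrier=100.0, chip_rate=10.0, bit_rate=1.0)
```

The step functions flip the sign of the carrier, which is exactly the binary phase modulation described in the text.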
GPS consists of three main segments: Space, Control and User. The Space Segment nominally consists of 24 satellites orbiting at an altitude of approximately 20,200 km above the Earth's surface. These satellites transmit microwave signals that allow GPS users to determine their location, velocity and time, in real time or post-mission. The GPS satellites are distributed in six orbital planes with nominally four satellites in each plane. The six orbital planes have approximately 55 degrees of inclination with respect to the equatorial plane. GPS satellites were initially designed to transmit carrier signals on two L-band frequencies: L1 = 1575.42 MHz and L2 = 1227.60 MHz.
The Russian Global Navigation Satellite System (GLONASS) has a constellation of 24 satellites, which continuously transmit signals on two carrier frequencies. The satellite signals can be received by users anywhere on the Earth's surface to determine their position and velocity using code pseudorange and carrier phase measurements. The GLONASS Space Segment comprises 24 satellites distributed over three orbital planes, each satellite identified by its slot number. The three orbital planes are separated by 120 degrees, and the satellites in the same orbital plane are separated by 45 degrees. The satellite orbits are nearly circular, with an inclination of about 64.8 degrees, a semi-major axis of 25,440 km and a period of 11 h 15 m 44 s.
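The quoted period can be cross-checked against the semi-major axis via Kepler's third law; a quick consistency sketch (the GPS semi-major axis below is an approximate assumed value):

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter

def orbital_period_hours(semi_major_axis_m):
    """Keplerian orbital period T = 2*pi*sqrt(a^3 / mu), returned in hours."""
    return 2.0 * math.pi * math.sqrt(semi_major_axis_m ** 3 / MU_EARTH) / 3600.0

glonass_period = orbital_period_hours(25440e3)  # ~11.2 h, near 11 h 15 m 44 s
gps_period = orbital_period_hours(26560e3)      # ~12 h, matching the GPS figure
```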
Fig.6. GLONASS Constellation with Three Orbital Planes (Zheng)
[1] Ashby, Neil. Relativity and the Global Positioning System. Physics Today, 2002.
[2] Bull, B., Markgraf, M., Montenbruck, O. A Low-Cost GPS System for Real-Time Tracking of Sounding Rockets. German Space Operations Centre, D-82234 Wessling, Germany, 2004.
[3] Cai, Changsheng. Precise Point Positioning Using Dual-Frequency GPS and GLONASS Measurements. 2009.
[4] Chen, K. Real-Time Precise Point Positioning and Its Potential Applications. Proceedings of ION GNSS 2004, Sep. 21-24, 2004, Long Beach, California, 2004.
[5] Markgraf, Markus, Montenbruck, Oliver. A Miniature GPS Tracking System for Scientific and Commercial Rocket Launches. DLR, German Space Operations Centre (GSOC), 2006.
[6] Markgraf, M., Hassenpflug, F. A Flexible GPS Antenna Concept for Sounding Rockets. 2001.
[7] Moen, J., Bekkeng, J.K., Pedersen, A., Aase, J.G., Blix, T.A., Lester, M., Pryse, S.E. ICI-1: a New Sounding Rocket to Observe Micro-Scale Physics in the Cusp Ionosphere. ESA SP-530, 2003.
[8] National Aeronautics and Space Administration. NASA.
[9] Montenbruck, Oliver, Turner, Peter, Engler, Wolfgang. GPS Tracking of Sounding Rockets: a European Perspective.
[10] Enderle, W. GPS Receiver on JAESat. Cooperative Research Centre for Satellite Systems, Queensland University of Technology, Australia, 2003.
Adebayo Matthew Abioye
International Institute for Advanced Aerospace Technologies of
State University of Aerospace Instrumentation,
67, Bolshaya Morskaya, St.-Petersburg, 190000, Russia
(Tel: +79500188350, +234838124262; e-mail:
ABSTRACT: Approaches are considered to the mathematical description of, and control algorithms for, the bending modes of flexible aerospace vehicles subjected to significant dynamic loadings and to the resistance of the environment (inherent forces, moment forces and environmental forces). The elastic properties of an aerospace vehicle determine the occurrence of fluctuations in the control system at various resonance frequencies. Elastic oscillations are usually described by partial differential equations or by ordinary differential equations of great dimension. Deformation of the body results in the appearance of local angles of attack and slide; as a result, local forces and moments of forces arise. These forces and moments act simultaneously with the changes of the local angles of attack and slide, and are the cause of amplification or attenuation of the elastic oscillations. This phenomenon is known as aeroflexibility. A divergent bending mode can lead to structural failure and eventual destruction of the aerospace design, so taking these effects into account is of great significance for the control of space stations and space probes, airplanes and other aerospace vehicles, which are generally subject to considerable dynamic loads.
Keywords: bending modes, flexibility, oscillation, vehicle dynamics, stability, control, time response, frequency response
Investigation and calculation of deformations in an aerospace construction is a very complex problem of mechanics, since the construction contains thousands of different configuration items. These deformations result from the operation of distributed loads. For this reason, a real aerospace construction is represented by a simplified scheme, or engineering model. Such simplified models make it possible to investigate the main properties of flexible aerospace constructions in flight: the influence of the distributed aerodynamic forces on the flexible oscillations, and the influence of many other factors, such as sloshing, the burning of rigid propellant, and the application of the control forces and moments. The following assumptions are made in deriving the equations of motion of an aerospace construction in the pitch plane with elasticity taken into account. It is assumed that the body of the aerospace construction deforms as an elastic rod, and the calculation of its flexural deformations starts from the appropriate equations of the strength of materials. We neglect the effects of compressive strain, stretching, shear and torsion; the equation of pure bending of the aerospace vehicle is used. It is also assumed that the particles of the aerospace construction lying in any flat cross section at any instant of time retain their positional relationship.

The differential equation of bending oscillations of the aerospace vehicle's body in the pitch plane can be obtained by constructing the equation of oscillations of a hollow elastic rod undergoing pure bending in the plane z = 0. From the principle of d'Alembert, the equation of oscillations of an elastic rod can be deduced from the equations of its elastic equilibrium, with the addition of all environmental forces acting on the rod (inertial forces). For the case of pure bending in the plane z = 0, the complete equations of the elastic rod take the form illustrated in Fig.1.

Fig.1. Transverse oscillation of an elemental beam
2.1. Mathematical model of the plant
During the motion of the aerospace construction the particles of a flat cross section do not change their positional relationship; this position is used as a reference in the course of the analysis of the strength of materials. For the complete equations of the object's motion in space, the model must be supplemented by the rigid-body equations; the rigid-body assumption in aerospace vehicle analysis eliminates the need to consider the forces acting between the individual elements of mass, and thereby eliminates the effect of aeroelasticity. The beam statics relations are

dQ/dx = qy;   dM/dx = Q − mz;   By d²y/dx² = M,   (1)

where qy is the distributed cross-sectional force acting on the beam; Q is the shear force arising in the elastic beam; mz is the distributed moment of the external forces acting on the rod; By is the flexural rigidity; y is the sag of the rod; and M is the bending moment. The bending oscillations of the airframe in the pitch plane in the absence of the distributed cross-forces qy and moments mz are called the natural oscillations of the airframe. In this case the bending oscillation differential equation is [1]

∂²/∂x²(By ∂²y/∂x²) − ∂/∂x(jz ∂³y/∂x∂t²) + µ ∂²y/∂t² = 0,   (2)
with the boundary conditions

By ∂²y/∂x² = 0;   ∂/∂x(By ∂²y/∂x²) − jz ∂³y/∂x∂t² = 0   (3)

for x = a and x = b, where a and b are the extreme cross sections of the beam, µ is the distributed mass, and jz is the distributed moment of inertia of the beam. These conditions express the fact that the shear force and the bending moment are zero at the extreme ends of the aerospace construction. Considering the differential equation (2) with the boundary conditions (3), and neglecting the slow variation of the functions µ and jz within the chosen time interval due to fuel tank discharge, we assume these functions depend on the single variable x. The homogeneous differential equation (2) with the homogeneous conditions (3) has the apparent solution y(x, t) = 0, which is traditionally called the trivial solution of this boundary problem. Nontrivial solutions of the differential equation (2) that satisfy the boundary conditions (3) are sought as the product of two functions:

y(x, t) = P(x) g(t).   (4)

Substituting (4) into (2) gives

d²/dx²(By d²P/dx²) g + [−d/dx(jz dP/dx) + µP] d²g/dt² = 0,   (5)

or, dividing,

d²/dx²(By d²P/dx²) / [−d/dx(jz dP/dx) + µP] = −(d²g/dt²)/g.

The left part of this equality does not depend on the variable t, and the right part does not depend on x, so both quotients must be equal to some constant, denoted τ. Consequently the function P(x) satisfies the differential equation

d²/dx²(By d²P/dx²) = τ [−d/dx(jz dP/dx) + µP],   (6)

and the function g(t) satisfies the differential equation

d²g/dt² + τ g = 0.   (7)

The constant τ is a real nonnegative number [1].

For τ = 0 the differential equation (6) degenerates to d²/dx²(By d²P/dx²) = 0, whose integration gives By d²P/dx² = A1 + A2 x. To satisfy the boundary conditions (3), both A1 and A2 must be equal to zero. Setting A1 = A2 = 0 yields the differential equation d²P/dx² = 0, which has two linearly independent solutions:

P = 1 and P = x.   (8)

Both solutions satisfy the boundary conditions, and consequently the boundary problem formed by the differential equation (2) and the boundary conditions (3) has the eigenvalue τ = 0, corresponding to the two eigenfunctions (8). The first eigenfunction determines translational airframe movement in the pitch plane; the second determines airframe rotation about the axis OZ. These movements do not lead to airframe deformation, so the zero eigenvalue does not determine the natural bending oscillations of the airframe. All other eigenvalues of the considered boundary problem are positive. Let these positive values be denoted τj, j = 1, 2, …, numbered in ascending order, with corresponding eigenfunctions Pj(x). This approach permits the calculation of the shapes, slopes, eigenfrequencies and other parameters of the flexible oscillations for an arbitrary number of modes of different constructions [1].

Fig.2. Animation of elastic vehicle flight
Fig.3 shows a hypothetical representation of a flexible elastic vehicle undergoing structural deformation as a result of bending over a wide spectrum of frequencies.
Fig.3. The plot of the first four modes of bending oscillation
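For a uniform free-free beam (constant By and µ, with the rotary inertia jz neglected), the eigenfrequencies of the modes sketched in Fig.3 follow from the classical characteristic equation cos(βl) cosh(βl) = 1, with ωj = βj² √(By/µ). A minimal numerical sketch, illustrative rather than the paper's own computation:

```python
import math

def free_free_beam_roots(n=3):
    """First n nonzero roots x = beta*l of cos(x)*cosh(x) = 1, the
    characteristic equation of a uniform free-free bending beam."""
    f = lambda x: math.cos(x) * math.cosh(x) - 1.0
    roots = []
    # Brackets chosen around the known locations of the first three roots:
    for lo, hi in [(4.0, 5.0), (7.0, 8.0), (10.0, 11.0)][:n]:
        for _ in range(80):              # bisection on the bracket
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

betas = free_free_beam_roots()  # ~[4.730, 7.853, 10.996]
```

The two rigid-body solutions P = 1 and P = x correspond to the double root β = 0 excluded here, matching the discussion of the zero eigenvalue above.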
The approach to the design and construction of the guidance system consists of two parts. The first part is the development of a control law for damping certain flexible oscillations; these oscillations are in the pass band of the actuator. The plant with this type of control law is called a rigid-body plant, and its mathematical model is simplified. At the second stage of the construction the simplified model of the plant is used. For a SISO system we can write the common case for an arbitrary interval of time in the following form:
W(s) = W1(s) W2(s),

W1(s) = K (s + ω11)…(s + ω1j) / ((s − ω31)…(s + ω3j)),

W2(s) = (s² + 2ξ21ω21s + ω21²)…(s² + 2ξ2kω2ks + ω2k²) / ((s² + 2ξ41ω41s + ω41²)…(s² + 2ξ4kω4ks + ω4k²)),

where K, ξmn, ωmn are constant numbers. Some numerical calculations
are presented below using the transfer functions (TF) of the rigid
and flexible models.
Fig.4 shows a closed loop comprising a sensor of angular rate
(feedback), the controller, the actuator and the plant. An M-file
run in Matlab gives the frequency and time responses of the
closed-loop system containing the three modes of elastic
oscillation of the flexible vehicle.
3.1 The program for analysis of the system properties

% Bode plot and time response of a closed-loop system of a launch
% vehicle at the 30th second (30 s) of flight
s = tf('s');                          % definition of s as Laplace variable
w_plant = minreal(rss_30, 0.1);       % TF of the plant, 0.1 is for accuracy (tol)
w_act = 1/(.1^2*s^2 + 2*.7*.1*s + 1); % TF of the actuator
% TF of the feedback
w0 = w_plant*w_feedback*w_act;        % TF of the open loop
bode(w0), grid;                       % Bode plot
w_cl = w_act*w_plant/(1 + w0);        % TF of the closed loop
figure;
impulse(w_cl), grid;  % time response of the system: settling time, overshoot, etc.
The transfer functions of the rigid and flexible models at the 30th
second of the hypothetical flight, W30rigid and W30flex, are

W30rigid(s) = −0.525 (s² + 0.05401s + 0.0007627) / ((s − 0.03135)(s + 0.01475)(s² + 0.01588s + 1.766)),  (13)

W30flex(s) = W30rigid(s) · (s² + 0.4057s + 887)(s² + 134.8s + 2.225e4)(s² − 129s + 2.295e4) / ((s² + 1.138s + 2587)(s² + 2.787s + 1.304e4)(s² + 8.92s + 3.594e4)).  (14)

For the TFs (13) and (14) the Bode plot and the time response of
rudder deflection are presented below.

Fig.5. Time response of acceleration on impulse of the flexible vehicle
The input is the deflection of the rudder, commonly represented by
δ; other inputs could be the sum of all environmental forces and
moments. The output could be the pitch angle, the angle of attack,
the slope of the trajectory, the angular velocity, or the sudden
vertical acceleration called g-load or overload. The time responses
represent all three modes of flexible oscillation. These
oscillations are the reason for the appearance of large stresses on
aerospace vehicles; as a result, it is imperative to suppress them.
The plots below are obtained from the Matlab M-file.
Fig.6. Frequency response of the flexible vehicle
3.2. Analysis of the responses

Figs. 5 and 6 show the time and frequency responses of the three
modes of elastic oscillation of the plant, which is marginally
stable. The closed-loop model of the system was stable for the
first three modes of elastic oscillation of the plant, but optimal
stability was not achieved; for this reason an optimal controller
using a linear quadratic regulator (LQR) will be employed in the
next stage of this work, and the errors due to measurement and
environment will also be investigated. Separated multipliers in the
numerator and the denominator describe different components of the
plant motion and separate modes of elastic oscillation.
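As a sanity check of such transfer functions, one can evaluate them directly on the imaginary axis. The sketch below (plain Python, with coefficients taken from the rigid-model TF (13) quoted in this section; the helper `tf_value` is our own, not from the paper's Matlab code) computes the static gain and the gain at ω = 1 rad/s:

```python
import cmath

def tf_value(s, num_factors, den_factors):
    """Evaluate a transfer function given as products of polynomial
    factors; each factor is a coefficient list [a_n, ..., a_0]."""
    def poly(coeffs, s):
        v = 0j
        for c in coeffs:          # Horner's scheme
            v = v * s + c
        return v
    num = 1.0 + 0j
    for f in num_factors:
        num *= poly(f, s)
    den = 1.0 + 0j
    for f in den_factors:
        den *= poly(f, s)
    return num / den

# Rigid-model TF (13): -0.525 (s^2 + 0.05401 s + 0.0007627) /
#   ((s - 0.03135)(s + 0.01475)(s^2 + 0.01588 s + 1.766))
num = [[-0.525], [1, 0.05401, 0.0007627]]
den = [[1, -0.03135], [1, 0.01475], [1, 0.01588, 1.766]]

dc_gain = tf_value(0j, num, den).real      # static (s = 0) gain
mag = abs(tf_value(1j * 1.0, num, den))    # gain at omega = 1 rad/s
print(dc_gain, mag)
```

Sweeping `1j * w` over a logarithmic grid of frequencies yields the data for a Bode magnitude plot of the same TF.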
The TF can be simplified into a sum of simple fractions [1]:

W(s) = c1/(s − p1) + c2/(s − p2) + … + cn/(s − pn) + r(s).

For a real plant the order of the numerator is less than that of
the denominator, so r(s) = 0. A real pole corresponds to an
aperiodic component of the plant movement, while a complex pair of
poles corresponds to an oscillatory movement of the object, i.e. a
mode of elastic oscillation. Such pairs of poles are combined to
obtain TFs of the oscillatory parts with real coefficients. Each TF
is then transformed into a system of differential equations of the
first order, and the resulting equations are given in state-space
form:
ẋ(t) = A x(t) + B u(t) + G w(t),
y(t) = C x(t) + v(t),

where x is the state vector of the plant and is a function of time
t; A is the state matrix, B is the control input matrix, u is the
input signal, G is the coefficient of the process noise w, y is the
measurement and v is the measurement noise.
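The state-space model can be integrated numerically. A minimal sketch for a single elastic mode written as a damped oscillator (the values of ξ and ω are illustrative, not taken from the paper; the noise terms w and v are omitted):

```python
# Minimal state-space simulation of one elastic mode:
#   x1' = x2
#   x2' = -w**2 * x1 - 2*xi*w * x2 + u
# integrated with forward Euler at a small fixed step.

def simulate(xi, w, u, t_end, dt=1e-4):
    x1, x2 = 0.0, 0.0          # modal displacement and rate
    t = 0.0
    while t < t_end:
        dx1 = x2
        dx2 = -w * w * x1 - 2.0 * xi * w * x2 + u
        x1 += dt * dx1
        x2 += dt * dx2
        t += dt
    return x1, x2

xi, w = 0.05, 2.0              # illustrative damping ratio and frequency
x1, x2 = simulate(xi, w, 1.0, t_end=60.0)
print(x1)   # settles toward the static deflection u / w**2 = 0.25
```

The same loop extends directly to the vector case ẋ = Ax + Bu by replacing the two scalar derivatives with a matrix-vector product.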
3.3 Frequency response (Bode diagram) and time response

Knowing the phase and gain of a transfer function across a wide
range of frequencies provides a large amount of information about
that function, and a Bode plot shows this information graphically.
Determining the appropriate range of frequencies for a Bode plot
takes some care. At very high frequency, the energy required to
accelerate the inertia increases beyond what the system can
produce, and beyond that frequency there is little interest in the
system output [7]. Conversely, below some low frequency the system
behaves nearly the same. The Bode plot shows the phase and gain
between these frequency extremes. The two most common performance
measures are response and stability. Command response measures how
quickly a system follows the command; in the time domain, its most
common measure is the settling time to a step (the time constant).
Closed-loop responsiveness is commonly measured in the frequency
domain as the bandwidth, the frequency at which the gain has fallen
to −3 dB, i.e. to about 70%. It can also be described as the
frequency at which the gain becomes so small that the magnitude of
the output response becomes insignificant [11]. In the time domain,
stability is most commonly measured from the step response; the key
characteristic is overshoot, the ratio of the peak to the commanded
change. The amount of overshoot acceptable in applications varies
from 0% to perhaps 30%. There is a correlation between the time and
frequency domains: settling time correlates to bandwidth, and
overshoot correlates to peaking. At this junction the natural
question is why both domains are needed. The answer is that in
realistic systems the time domain is more difficult to interpret,
since many phenomena in control systems occur as combinations of
frequencies. Finally, observe that the time-domain measures rely on
the step command, because a step command has large frequency
content.
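The −3 dB bandwidth described above is easy to compute numerically. A sketch for a first-order lag 1/(Ts + 1) (an illustrative plant, chosen because its bandwidth is known to be 1/T):

```python
import cmath, math

def gain_db(G, w):
    # Gain of transfer function G on the imaginary axis, in dB
    return 20.0 * math.log10(abs(G(1j * w)))

def bandwidth(G, w_lo=1e-4, w_hi=1e4):
    """Frequency where the gain has fallen 3 dB below the low-frequency
    gain (bisection in log-frequency; assumes a monotone roll-off)."""
    target = gain_db(G, w_lo) - 3.0
    for _ in range(100):
        w_mid = math.sqrt(w_lo * w_hi)   # geometric midpoint
        if gain_db(G, w_mid) > target:
            w_lo = w_mid
        else:
            w_hi = w_mid
    return w_mid

T = 0.5                      # illustrative time constant
G = lambda s: 1.0 / (T * s + 1.0)
bw = bandwidth(G)
print(bw)                    # close to 1 / T = 2 rad/s
```

For a first-order lag the −3 dB point sits almost exactly at ω = 1/T, which makes this a convenient self-check before applying the routine to higher-order TFs.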
Vehicle control/structure interactions commonly take one of the
following forms: transient phenomena, such as the motion of a
satellite resulting from sudden bending of a flexible boom caused
by solar radiation; unstable motion, such as may occur when the
attitude control system of two docked spacecraft senses and
responds in a destabilizing fashion to bending in the docking
attachment; and stable limit-cycle oscillations, such as when a
spacecraft attitude control system is driven into nonlinear
operation at or beyond the boundary of a region of locally unstable
linear operation. Design and flight experience has shown that the
control-system parameters pertinent to control-system/structure
interaction include: time constants (or natural frequencies) and
damping ratios of the elements of the control loop; nonlinear
system parameter values (saturation limits, dead-zone widths,
hysteresis characteristics, etc.); and sample intervals and
quantization increments (digital systems). Pertinent structural
parameters are: modal frequencies and damping ratios; mode shapes;
inertial properties (masses, moments of inertia); and local
flexibility characteristics. Other significant factors affecting
control-system/structure interaction are: solar radiation and
pressure; magnetic torques; gravitational torques; docking and
other manoeuvres; dynamics of contained liquids; appendage
deployment; crew motion; and operation of propulsive devices. These
factors are significant only in space, not in atmospheric flight,
where the atmospheric influence is many times stronger than the
other factors.
5.1 Flexibility

A missile with a large length-to-diameter ratio undergoes
structural bending while executing a maneuver, due to the
aerodynamic load distribution and the vehicle's initial parameters.
The flexed shape depends on the EI distribution along the length
and gives rise to changes in the local angle of attack, which in
turn change the aerodynamic loads. This causes a mismatch between
the performance of the control system in flight and in simulation.
5.2. Stability

Stability is the property of a system being in static or dynamic
equilibrium. Flight vehicles comprise aircraft, launch vehicles and
missiles. These bodies have six degrees of freedom in space; in
addition, they have many high-frequency modes and nonlinearities
which make control system design complicated. The real motion of an
aerospace vehicle is a simultaneous combination of translational
and rotational motion. Translation has to do with how high the
vehicle goes or how far away it lands, and rotation has to do with
stability. Rotational motion occurs only when the missile is in a
disturbed position. The division of the aerospace vehicle's motion
into these two parts leads directly to the separate consideration
of the center of gravity (C.G.) and the center of pressure (C.P.).
Translational forces act through the center of gravity of the
vehicle, while forces associated with rotational motion are all
forces that do not act through the aerospace
vehicle's center of gravity. For a system in static equilibrium the
total force on each particle is fi = 0. Summing the work exerted by
these forces through an arbitrary virtual displacement δri of the
system yields an expression for the virtual work, which must be
zero since the forces are zero:

δW = Σ fi · δri = 0.

Separating the forces into applied forces fi and constraint forces
ci yields

δW = Σ fi · δri + Σ ci · δri = 0.

This leads to the principle of virtual work for applied forces,
which states that the forces applied to a static system do no
virtual work:

δW = Σ fi · δri = 0.

There is a corresponding principle for accelerating systems,
D'Alembert's principle, which forms the basis of Lagrangian
mechanics. When the principle is applied to the individual
particles of a rigid body it can be generalized: if a rigid body in
equilibrium is subjected to compatible virtual displacements, the
total virtual work of all external forces is zero; conversely, if
the total virtual work of the external forces acting on a rigid
body is zero, then the body is in equilibrium. In classical
mechanics a rigid body is considered a continuous mass
distribution, while in quantum mechanics it is considered a
collection of point masses.

5.3 Tensors

Tensor is the general name given to quantities that transform in a
prescribed way when the coordinate system is rotated. A rigid body
is an idealized system of particles; it is assumed that the rigid
body does not undergo any change in size or shape. Translation of
the body means that every line in the body remains parallel to its
original position at all times. A rigid body can also be treated as
a particle whose mass is concentrated at the centre of mass. Under
the rigid-body assumption the aeroelastic effect is not included in
the equations, since the forces acting between individual elements
of mass are eliminated. This assumption allows the airframe motion
to be described completely by translation of the centre of gravity
and by rotation about that point [10].

5.4 Time constant ('T')

The time constant is a common feature in aerospace systems and must
be small. It is a function of the inertia of the system; good
values range between 0.1 and 3 seconds, and in practice it can be
less than 0.1 seconds. If it is too small, it can cause
oscillation. Good actuators should have a small time constant 'T'.

5.5 Damping ratio

The damping ratio estimates the effect of flexibility. It is the
ratio of an aerodynamic parameter of the flexible vehicle to that
of the rigid one, e.g.

ζCNα = CNα flex / CNα rigid.

5.6 Oscillation

Repetitive motions that are somewhat clean and regular and that
occur at relatively low frequency are commonly called oscillations,
while any repetitive motion, even at higher frequency, with low
amplitude and irregular, random behaviour falls into the general
class of vibration.
5.7 Natural frequency

An engineering system, when given an initial disturbance and
allowed to execute free vibrations without a subsequent forcing
excitation, will tend to do so at a particular preferred frequency
while maintaining a particular preferred geometric shape. This
frequency is termed the natural frequency of the system, and the
corresponding shape is the mode shape. The motion of an oscillating
system can be represented in terms of the natural frequencies and
mode shapes of the dynamic system; once the modes are determined,
they can be used to understand the dynamic nature of the system and
also in design and control. In general, energy dissipation is a
nonlinear phenomenon. Properties such as mass (inertia),
flexibility (spring-like effect) and damping (energy dissipation)
are to a large extent continuously distributed throughout practical
mechanical devices and structures. Representation (i.e. modelling)
of these distributed-parameter (or continuous) vibrating systems
requires independent variables in space (spatial coordinates) in
addition to time; such models are partial differential equations in
time and space, and lumped-parameter models approximate the
continuous system.
The state-space model is the standard model used by control
engineers; most linear control system analysis and design methods
are given in state-space form. Models of linear systems are
described by differential equations, which can be organised in a
standard form called the state-space representation: a set of
first-order differential equations with unit coefficient at the
first derivative. The model can also be transformed into a transfer
function by applying the Laplace or Fourier transformation. The
state space carries information about the internal structure of the
model, while the transfer function describes the model in terms of
its input-output properties. State space is also more convenient
and creates fewer numerical difficulties than the TF when one deals
with high-order models [9].

State-space representation:

ẋ(t) = A x(t) + B u(t) + G w(t),  (17)
y(t) = C x(t) + v(t).  (18)

A linear time-invariant system of finite dimension is described by
the linear constant-coefficient differential equations (17) and
(18), with initial state x(0) = x0. In the above, the N-dimensional
vector x is called the state vector, x0 is the initial condition of
the state, the s-dimensional vector u is the system input and the
r-dimensional vector y is the system output. A, B and C are real
constant matrices of appropriate dimensions (A is N × N, B is
N × s, and C is r × N). We call the triple (A, B, C) the system
state-space representation.
Besides the state-space representation, a system can be represented
by its transfer function, defined simply as the relationship
between the system output and the system input. It is represented
in the form

y(s) = G(s) u(s),

where y(s) and u(s) are the Laplace transforms of the output and
input respectively, and s is the Laplace variable.
The motivation for state space is to convert a coupled system of
higher-order ordinary differential equations, for example the
representation of the dynamics of a mechanical system, into a
coupled set of first-order differential equations. From the
resulting system of equations the TF can be constructed from the
mathematical model for the design and simulation of the system.
State space is one of the modern approaches to the design and
simulation of most dynamic systems, aerospace vehicles included.
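The equivalence between the two descriptions can be verified numerically: the transfer function recovered from (A, B, C) as G(s) = C(sI − A)⁻¹B must match the TF obtained directly from the differential equation. A sketch for an illustrative mass-spring-damper plant m·ẍ + c·ẋ + k·x = u (not a model from the paper):

```python
import cmath

# Second-order plant  m*x'' + c*x' + k*x = u  in state-space form
m, c, k = 1.0, 0.4, 4.0
A = [[0.0, 1.0],
     [-k / m, -c / m]]
B = [0.0, 1.0 / m]
C = [1.0, 0.0]

def tf_from_ss(s):
    # G(s) = C (sI - A)^{-1} B, with the 2x2 inverse written out
    a11, a12 = s - A[0][0], -A[0][1]
    a21, a22 = -A[1][0], s - A[1][1]
    det = a11 * a22 - a12 * a21
    x0 = (a22 * B[0] - a12 * B[1]) / det   # first row of (sI-A)^{-1} B
    x1 = (-a21 * B[0] + a11 * B[1]) / det  # second row
    return C[0] * x0 + C[1] * x1

def tf_direct(s):
    # The same plant as a transfer function: 1 / (m s^2 + c s + k)
    return 1.0 / (m * s * s + c * s + k)

s = 1j * 1.7        # arbitrary test point on the imaginary axis
err = abs(tf_from_ss(s) - tf_direct(s))
print(err)          # agreement to machine precision
```

For larger N the explicit 2×2 inverse is replaced by a linear solve of (sI − A)x = B, but the identity being checked is the same.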
A partial differential equation approach to the mathematical
modelling of bending-mode oscillation was presented, and the first
four modes of bending oscillation were obtained. A closed-loop
system was constructed, giving time and frequency responses that
depict the first three modes of elastic oscillation of the flexible
aerospace vehicle. Technical descriptions of terminologies
frequently used in bending-mode control of flexible aerospace
vehicles were also discussed in this paper. The subsequent part of
this work will be the design and construction of an optimal
controller which will guarantee the stability of the three modes of
oscillation simultaneously, together with simulation of each mode
and investigation of the errors of measurement and environmental
disturbances.
My gratitude goes to the almighty God, who makes this publication
possible. I thank Prof. A.V. Nebylov and Dr. S.A. Brodsky for their
guidance and useful advice in the course of my study. Special
thanks go to my supervisor, Prof. A.I. Panferov, for his
generosity, love and guidance at all times. I thank my wife,
children, my aged mother and the entire members of my family for
their understanding during my absence as a result of this
programme. Lastly, my special thanks and appreciation go to my
agency, the National Space Research and Development Agency, for
giving me the opportunity to be part of this programme.
[1] Panferov A.I., Nebylov A.V. and Brodsky S.A. (2008).
Mathematical Models of Flexible Missile and Software for Control
System Design and Simulation.
[2] Panferov A.I., Nebylov A.V. and Brodsky S.A. (2008).
Mathematical models and software for flexible vehicles simulation.
Proceedings of the 16th IFAC Symposium on Automatic Control in
Aerospace, Oxford, vol. 2, 416-421.
[3] Nebylov A.V., Panferov A.I. and Brodsky S.A. (2005). Flexible
aerospace vehicles simulation and nonlinear control synthesis.
Proceedings of the 16th IFAC World Congress, Prague.
[4] Beards C. (1995). Engineering Vibration Analysis with
Application to Control Systems. Arnold.
[5] Hatch, Michael R. (2001). Vibration Simulation Using MATLAB
and ANSYS. Chapman & Hall/CRC, USA. NASA Space Vehicle Design
Criteria (Guidance and Control).
[6] Bower A.F. Applied Mechanics of Solids.
[7] George Ellis. Control System Design Guide, third edition.
[8] John H. Blakelock. Automatic Control of Aircraft and Missiles,
third edition.
[9] Wodek K. Gawronski. Advanced Structural Dynamics and Active
Control of Structures. Mechanical Engineering Series, Springer.
[10] George M. Siouris. Missile Guidance and Control Systems.
[11] Michael V. Cook. Flight Dynamics Principles: A Linear Systems
Approach to Aircraft Stability and Control, second edition.
Elsevier Aerospace Engineering Series.
Lawal Ismail Olusegun
The National Space Research and Development Agency,
Obasanjo Space Centre, Airport Road P.M.B. 437 Garki, Abuja FCT, Nigeria
Tel: +7(952)3855682, +234-80362-31535; e-mail:
Abstract: In order to point the antenna toward the Earth and the solar panels toward the Sun, the attitude, or
orientation, of a spacecraft must be properly controlled. This can be achieved using reaction wheels. For this reason,
attitude control algorithms were developed that receive data from vehicle sensors and derive the appropriate
commands to the actuators to rotate the vehicle to the desired attitude. The experimental nature of flying newly
developed digital systems hardware, and the need for an appropriate controller design to obtain a controlled system,
make it imperative to be able to modify flight algorithms during the mission.
Discrete-time systems have now become standard in most
dynamical applications with the advent of digital computers,
which are used to process sampled-data systems for estimation
and control purposes. The mechanism that acts on the sensor
output and supplies numbers to the digital computer is the
analog-to-digital (A/D) converter. Then, the numbers are
processed through numerical subroutines and sent to the
dynamical system input through the digital-to-analog (D/A)
converter. This allows the use of software driven systems to
accommodate the estimation/control aspect of a dynamical
system, which can be modified simply by uploading new
subroutines to the computer.
The performance of a spacecraft control system is limited by
the performance of its sensors and actuators. Pointing accuracy
is how tightly the spacecraft points at its target and knowledge is
how well one knows the attitude error. For example, a passively
stabilized spacecraft with a star sensor might have very high
attitude knowledge but relatively poor pointing accuracy. Jitter is
measured in a variety of ways but ultimately it is a measure of
how much the attitude changes in a given period of time. For
example, if one uses noisy sensors and low resolution D/A
converters, you are likely to have a lot of jitter. Jitter is attitude
motion that is not controllable by the control system. It may be
beyond the bandwidth of the controller or hidden by a notch filter
in the control loop. The pointing numbers may be given for the
spacecraft as a whole or for individual payloads. If the latter is
the case, you have the additional option of putting some of the
payloads on articulated platforms with their own sensors to
improve the pointing of those payloads. In a dual-spin spacecraft
only the despun platform has a pointing requirement; the spinning
part is only required to store momentum (and sometimes generate
power). Pointing requirements apply only to
the payload. It is not unusual to collocate the payload and
attitude control sensors on the same composite structure (i.e.
structurally stiff with low coefficients of thermal expansion) to
minimize errors due to structural vibration and thermal distortion.
2.1 Attitude Control System Architecture
Attitude control systems can be passive or active. Active means
that we take a measurement and compute a control action which
is accomplished with an actuator. The computation can be
analog or digital although most modern spacecraft use digital
control. A passive system uses the physics of the spacecraft and
possibly a control actuator to perform control. This work will
describe the application of digital control system analysis and
design techniques to improve performance of an instrument. The
basic approaches are the Mathematical Modelling of the
instrument, Stability Analysis, Controller Design to Achieve
Satisfactory Performance, and Computer Simulation of the system.
The combination of theoretical analysis and computer simulation to
provide an effective and better solution will be demonstrated.
Fig. 1 shows a hierarchy of typical attitude control systems.
Fig.1. Description AD&C subsystem block diagram
The Attitude Determination and Control (AD&C) subsystem is
an integral part of all phases of this mission. The AD&C
hardware includes: torque reaction wheels, magnetic torque
coils, star angle sensors, micro-electro-mechanical-structure
(MEMS) sensor and magnetometers. The attitude control
software is state-of-the-art and includes algorithms for despin,
Sun and Earth orientation and capture, network control, reaction
wheel desaturation, attitude recovery and intra-spacecraft
communication. During initial tip-off from the launch vehicle,
spacecraft spin rate is determined and despin algorithms are
employed as needed to reduce rotational velocity. After despin,
the AD&C subsystem determines the orientation of the
spacecraft relative to the Sun and Earth. A control algorithm is
then activated, which uses torques produced by reaction wheels
or magnetic torque coils to turn the solar panel toward the Sun
and the bottom of the spacecraft towards the Earth. Reaction
wheel desaturation is accomplished by using magnetic torque
coils. This operation is accomplished in a manner which does not
interrupt mission operations. Safing algorithms that identify
anomalous spacecraft attitude scenarios, and procedures for
recovery of nominal mission operations, are part of the AD&C
software package.
2.2 Attitude Determination and Control (AD&C)
Design Specifications

This mission requires a stable platform from which experimental
data can be referenced. The AD&C subsystem design requirements flow
down from two sources: payload requirements and bus requirements.

Attitude Requirements

Table 1 shows the operational requirements for the AD&C subsystem
as determined by payload and bus needs.
Table 1. Operational Requirements for the AD&C Subsystem
(attitude requirements of ±5° and attitude knowledge of ±1.0° about
each of the three spacecraft axes)
Each of these angles is to be considered an error angle from the
nominal mission attitude, with the spacecraft x-axis pointing to
ram and the -y-axis pointing as close as possible to the Sun vector
in figure 2. All AD&C measurements and specifications are
referenced to the spacecraft axes; in other words, the AD&C
subsystem is responsible for pointing the spacecraft and measuring
the spacecraft's attitude. The -y, -z corner of the -x face of the
spacecraft will be used as the reference location for analysis and
calibration. Except in orbital emergencies where the safety of the
mission results in loss of attitude control, this requirement must
be observed. It is therefore not a time-averaged specification but
an attitude window not to be violated at any time during nominal
mission operations.

Fig. 3. Control mode flow graph (high-rate, medium-rate and
low-rate de-spin, Sun/Earth acquisition and nominal mission
operations, with safing states: fire code reset, power safing, loss
of attitude, low power)
2.3 Attitude Determination and Control (AD&C)
Subsystem Philosophy

Figure 2 shows the baseline coordinate system used for this
discussion. Figure 3 shows the three control states, or modes, that
have been identified. These modes are executed through software
algorithms and can be modified as needed from the ground station.

Fig. 2. Coordinate System

2.4 Attitude Determination and Control (AD&C)
Subsystem architecture component

An illustrative block diagram of the AD&C subsystem architecture
component is found in figure 4; it comprises the motion reference
units, the sensor interface electronics, the control processor and
the actuator drive electronics.

Fig. 4. AD&C Subsystem architecture
2.4.1 Instrument Identification
The performance of a spacecraft control system is limited by
the performance of its sensors and actuators. A mission AD&C
requirement determines the type and resolution of sensors and
effectors selected for this mission. A functional description of
each component and its intended use follows. A detailed
description of each sensor and effectors’ design can be found
under separate cover. Each hardware component of this
subsystem contains a 'smart' interface with the Flight Computer.
This interface which is programmed into firmware allows the flight
computer to interact with hardware components only when
needed to interrogate or control the device. Firmware algorithms
preprocess sensor data to reduce the data throughput to the
flight computer. Raw sensor data can be read by the Flight
Computer as well as preprocessed data. Communication is by
RS-422 9600 baud serial protocol with defined packets.
2.4.2 Attitude Sensors
Three attitude sensor units are used to determine the attitude
of the space vehicle.
Magnetometer

During the early phases of the mission, tip-off and despin, a
three-axis magnetometer will be used to measure the spin rate of
the spacecraft. With the intended orbit being polar, the spin rate
about all three axes of the spacecraft can be determined by
geomagnetic field sampling and software computation. Measurement of
the ambient magnetic field will also be used to control the
magnetic torque coils to aid in despin and reaction wheel
desaturation.
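The spin-rate determination from geomagnetic field sampling can be illustrated with a simplified planar sketch (illustrative values; the real algorithm works with three-axis vectors and a field model): the rotation angle between successive body-frame field vectors, divided by the sampling interval, estimates the spin rate.

```python
import math

def spin_rate_from_field(samples, dt):
    """Estimate the spin rate from consecutive 2-D magnetometer
    readings: the angle between successive field vectors divided
    by dt (the ambient field is assumed constant between samples)."""
    rates = []
    for (ax, ay), (bx, by) in zip(samples, samples[1:]):
        na = math.hypot(ax, ay)
        nb = math.hypot(bx, by)
        cosang = max(-1.0, min(1.0, (ax*bx + ay*by) / (na * nb)))
        rates.append(math.acos(cosang) / dt)
    return sum(rates) / len(rates)

# Simulated body-frame readings of a fixed field while spinning
true_rate, dt = 0.2, 0.5          # rad/s and sampling interval, illustrative
samples = [(math.cos(true_rate*dt*k), math.sin(true_rate*dt*k))
           for k in range(20)]
est = spin_rate_from_field(samples, dt)
print(est)   # close to the true 0.2 rad/s
```

In three dimensions the same idea applies per axis, with the cross product of successive field vectors giving the rotation axis as well as the angle.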
Sun Angle Sensors
Mission attitude requirements are given as maximum deviations from
the nominal attitude based on the Sun and Earth vectors. The 2-axis
sun angle sensor is the primary instrument used to establish errors
in roll and yaw relative to the sun vector, defined as the
direction to the Sun's center. The instrument is made up of six
coarse sensors with a 180° field of view and better than 10°
resolution, and a fine sensor with a 25° field of view and better
than 0.5° resolution.
Fig. 5. Sun Angle Sensors
Motion Reference Units
Motion Reference Units are single- or multi-axis motion sensors.
They utilize Micro-Electro-Mechanical-Structure (MEMS) sensor
technology. These sensors are revolutionizing inertial sensor
technology by bringing together micro-electronics and
micro-machining technology to make complete, highly accurate
systems-on-a-chip. Typical applications for Motion Reference Units
are antenna motion compensation and stabilization, and dynamic
positioning, which provides information to the computer pertaining
to the spacecraft position and the magnitude and direction of
environmental forces affecting its position.
2.4.3 Attitude Actuators
Two sets of actuators which apply rotation torques are incorporated
into the AD&C subsystem. These torques are controlled by the flight
computer to orient the space vehicle's attitude.
Magnetic Torque Coils
Three Orthogonal Torque Coils are to be mounted on the exterior
of the spacecraft. These coils will be shorted to dissipate high
speed rotational energy and commutated under software control
to despin the spacecraft. In the event of loss of reaction wheels,
the spacecraft attitude can be modified and controlled with
magnetic field torque.
Torque Reaction Wheels
A suite of four torque reaction wheels in an equilateral
tetrahedral geometry makes up the primary attitude effectors for
nominal mission operations. Each wheel is self-contained and
independently controllable. For the current mass properties the
four wheels have a total momentum storage capability giving a
dynamic control range of ±2 RPM; power usage goes up considerably
for momentum storage beyond 1 RPM. Each wheel has its own optimal
set-point speed controller, eliminating the need for flight
computer interaction in the speed control loop.
3.1 Drive Electronics
The reaction wheel contains an integrated controller and drive
electronics. Advantages of an integrated controller are reduced
mass and volume by the elimination of external control/drive
boxes and external wiring. The reaction wheel controller
architecture is based on the wheel developed for the program. A
digital signal processor (DSP) implements a precision motion
control algorithm for control of speed and torque. The velocity of
the flywheel is measured by dual optical reflective sensors and
fed back to the DSP digitally. The advantages of digital
implementation are elimination of time consuming analog trims,
and unit to unit consistency. The output of the DSP directly drives
an H bridge motor driver. The bridge implements three phase
drive with true four-quadrant operation. The design includes an
adaptive gate drive feature that permits use of the controller with
different bus voltages without requiring component changes. One
of the goals was to produce a design that could be tailored for
the radiation environment by substitution of critical components.
This was accomplished by analyzing the design to determine
which components were most susceptible to radiation effects.
Parts were then selected so that several different radiation
hardness levels could be obtained in the same package type.
The circuit board was then designed so that parts could be
upgraded without layout changes.
3.2 Control Software
There are three sets of software associated with the AD&C
subsystem. The first is the firmware programmed into each
hardware component that handles data communications with the
flight computer and preprocesses sensor data. This software can
and will be tested as part of the qualification and acceptance
testing of each hardware device. The second set of software
algorithms reside in the read only memory (ROM) code section of
the main flight computer. This software typically uses
preprocessed sensor data for attitude determination and
generates high level commands to the reaction wheels and
torque coils. The third set of software algorithm is in uploadable
code. A fully functional and tested control algorithm based on the
network will be in memory at launch but activation of this
controller will require a command from the ground.
4.1 Plant Modeling
Instrument-carrying spacecraft usually require attitude control so
that antennas, sensors and solar panels are properly oriented.
Antennas are usually pointed towards a particular location on
earth, while solar panels need to be oriented toward the sun for
maximum power generation. To gain insight into the full three-axis
attitude control system, it is helpful to consider one axis at a
time. The control force $F_c$ from the wheels produces a moment $F_c d$
about the mass center; with small disturbance moments $M_D$ acting on
the spacecraft, applying Newton's law yields the equation of motion (1):

$$F_c d + M_D = I\ddot{\theta}.$$   (1)

Model in state-variable form

We define the attitude and the angular velocity of the satellite as
the state variables, so that $x = [\theta\;\;\omega]^T$. With $M_D = 0$, the single
second-order equation (1) can then be written in an equivalent
way as two first-order equations, $\dot{\theta} = \omega$ and $\dot{\omega} = F_c d/I$,
using $\dot{x} = Fx + Gu$:

$$\begin{bmatrix}\dot{\theta}\\ \dot{\omega}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}\theta\\ \omega\end{bmatrix} + \begin{bmatrix}0\\ d/I\end{bmatrix}F_c.$$

The input and output of the spacecraft are $\Omega = F_c d + M_D$ and $\theta$
respectively, with the output equation $\theta = Hx + Ju = [1\;\;0]\,x + 0$.
The matrices of the state-variable form are

$$F = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}, \quad G = \begin{bmatrix}0\\ d/I\end{bmatrix}, \quad H = [1\;\;0], \quad J = 0.$$

Since the output of the system is $\theta$, the plant transfer function can
be obtained as

$$\frac{\theta(s)}{\Omega(s)} = \frac{1}{I}\cdot\frac{1}{s^2},$$

where $\Omega = F_c d + M_D$. The digital equivalent of the plant with a
zero-order hold and sampling time $T_s = 0.1$ s is

$$\frac{0.005(z + 1)}{(z - 1)^2}.$$
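As a quick check of the quoted digital equivalent, the sketch below (not from the paper; it assumes unit inertia I = 1) compares the exact zero-order-hold update of the rigid-body plant with the difference equation implied by 0.005(z + 1)/(z − 1)²:

```python
import math

# Sketch (assumed I = 1, Ts = 0.1 s): verify that the ZOH discretization
# of theta'' = u/I matches the stated digital equivalent
# 0.005(z + 1)/(z - 1)^2.
Ts, I = 0.1, 1.0

def simulate_exact(u):
    """Exact discrete update of the double integrator under ZOH input."""
    theta, omega, y = 0.0, 0.0, []
    for uk in u:
        y.append(theta)
        theta += Ts * omega + (Ts**2 / (2 * I)) * uk
        omega += (Ts / I) * uk
    return y

def simulate_tf(u):
    """Difference equation of 0.005(z+1)/(z-1)^2:
       y[k] = 2 y[k-1] - y[k-2] + 0.005 (u[k-1] + u[k-2])."""
    y = []
    for k in range(len(u)):
        yk = (2 * (y[k - 1] if k >= 1 else 0.0)
              - (y[k - 2] if k >= 2 else 0.0)
              + 0.005 * ((u[k - 1] if k >= 1 else 0.0)
                         + (u[k - 2] if k >= 2 else 0.0)))
        y.append(yk)
    return y

u = [math.sin(0.3 * k) for k in range(50)]
err = max(abs(a - b) for a, b in zip(simulate_exact(u), simulate_tf(u)))
```

The two simulations agree to machine precision, confirming that the ZOH equivalent of $1/(Is^2)$ at $T_s = 0.1$ s is indeed $(T_s^2/2I)(z+1)/(z-1)^2 = 0.005(z+1)/(z-1)^2$ for $I = 1$.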
Fig. 6. Spacecraft control schematic
4.2 Actuator Modeling
Simply rotor of a motor driving by an armature-controlled DC
motor, so if reaction wheels turns in one direction, the satellite
will rotate in the opposite direction and measurements were
taking. The motor equations give the torque T(t) on the rotor in
terms of the armature current ia and express the back emf
voltage in terms of the shaft’s rotational velocity ω = θ& m . Thus
T = Kt × ia, e = Ke × θ&m.
Let assume that the rotor has inertia Jm, Kt equals the torque
constant, Ke equals the electric constant and viscous friction
coefficient b. by applying Newton’s laws yields
Jmθ&&m + bθ&m = Kt × ia = T(t).
Analysis of the electric circuit, including the back emf voltage,
shows the electrical equation to be
di (t)
L a a + R aia (t) + Va (t) = Ke × θ&m. were ω = θ& m .
Ra, La and ia are the resistor, inductor and the current of the
motor. With s substituted for d/dt and the transfer function for the
motor is readily found to be
θm (s)
Va (s) s {L a s + R a } { Jms + b} + K t K e 
But in this case, the transfer function between the motor input
and the output speed ω = θ& m is required. In such cases, the
transfer function would be
θm (s)
Ω (s)
Va (s)
Va (s)
Ω (s)
Va (s)
θm (s)
Va (s)
{La s + R a } { Jms + b} + K t K e 
For further simplification we have the equivalent transfer function
of the system
Va (s) s T + s { 2ξT} + 1
The actuator transfer function is referred to as

$$\frac{K_t}{a_1 s^2 + a_2 s + a_3},$$

with $a_1 = 0.06\,\mathrm{sec}^{-2}$, $a_2 = 0.4\,\mathrm{sec}^{-1}$, $a_3 = 1\,\mathrm{sec}$ and $K_t = 4$. Its digital
equivalent with a zero-order hold and sampling time $T_s = 0.1$ s is

$$\frac{0.32(z + 1)}{z^2 - 2z + 1}.$$

The combination of the plant and actuator transfer functions is

$$\frac{K_t}{a_1 s^4 + a_2 s^3 + a_3 s^2},$$

where $a_1 = 0.06\,\mathrm{sec}^{-4}$, $a_2 = 0.4\,\mathrm{sec}^{-3}$, $a_3 = 1\,\mathrm{sec}^{-2}$ and $K_t = 4$. The digital
equivalent with a zero-order hold is

$$\frac{0.0003 z^3 + 0.003 z^2 + 0.003 z + 0.0003}{z^4 - 3.8 z^3 + 5.5 z^2 - 3.7 z + 1}.$$

Fig. 7. Step response for transfer function of plant and actuator

Fig. 8. Frequency response for the transfer function of plant and actuator

Fig. 9. Nyquist plot for transfer function of plant and actuator

4.3. Stability of the System: Digital Attitude Controller
Dynamic elements (or compensation) are typically added to
feedback controllers to improve the system's stability and error
characteristics, because the process itself cannot be made to
have acceptable characteristics with proportional feedback alone.
In this section we discuss these in terms of their frequency-response
characteristics; in most cases the compensation will be
implemented in a microprocessor, using techniques for converting
the continuous compensation D(s) into a form that can be coded in
the computer. To specifically address the issue of achieving a
balance between good system response and control effort, a most
effective and widely used technique of linear control system
design, the optimal Linear Quadratic Regulator (LQR), was adopted.
It is used to find the control such that a performance index is
minimized for the system. The cost function for this controller is
made up of several terms:
$$J = \phi P\phi' + wQw' + uRu',$$

where $\phi$ is the position error, $w$ is the velocity error, and $u$ is the
reaction wheel drive. The first term is a penalty for error in angular
position $\phi$, scaled by matrix P. Next, any angular velocity $w$,
scaled by matrix Q, is penalized. Third, in order to keep any one
reaction wheel from saturating in speed, a penalty is placed on
the absolute value of each of the four reaction wheel drive
signals $u$, scaled by matrix R. Only the angular velocity
error and the reaction wheel drive were considered in this paper.
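The LQR machinery behind the Matlab call used below can be sketched in a few lines. This is an illustrative discrete-time Riccati recursion on the one-axis double-integrator plant with placeholder weights (the paper's actual design uses Matlab's `lqr` on its fourth-order model):

```python
import math

# Illustrative sketch (placeholder weights, not the paper's): steady-state
# discrete-time LQR gain via the backward Riccati recursion, for the
# one-axis double-integrator plant discretized at Ts = 0.1 s.
Ts = 0.1
Phi = [[1.0, Ts], [0.0, 1.0]]    # state transition for (theta, omega)
Gam = [Ts**2 / 2, Ts]            # ZOH input vector (unit inertia)
Q = [[1.0, 0.0], [0.0, 1.0]]     # state penalty (assumed)
R = 1.0                          # control penalty (assumed)

def mm(A, B):                    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dlqr(Phi, Gam, Q, R, iters=500):
    """Iterate P <- Q + Phi'P Phi - S*K'K, with K = (Gam'P Phi)/S."""
    P = [row[:] for row in Q]
    K = [0.0, 0.0]
    PhiT = [[Phi[j][i] for j in range(2)] for i in range(2)]
    for _ in range(iters):
        PPhi = mm(P, Phi)
        PG = [P[0][0] * Gam[0] + P[0][1] * Gam[1],
              P[1][0] * Gam[0] + P[1][1] * Gam[1]]
        S = R + Gam[0] * PG[0] + Gam[1] * PG[1]      # R + Gam'P Gam
        K = [(Gam[0] * PPhi[0][0] + Gam[1] * PPhi[1][0]) / S,
             (Gam[0] * PPhi[0][1] + Gam[1] * PPhi[1][1]) / S]
        PhiTPPhi = mm(PhiT, PPhi)
        P = [[Q[i][j] + PhiTPPhi[i][j] - S * K[i] * K[j] for j in range(2)]
             for i in range(2)]
    return K

K = dlqr(Phi, Gam, Q, R)
# spectral radius of the closed loop Phi - Gam*K (must be < 1)
A = [[Phi[i][j] - Gam[i] * K[j] for j in range(2)] for i in range(2)]
tra = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tra * tra - 4 * det
rho = (max(abs(tra + math.sqrt(disc)), abs(tra - math.sqrt(disc))) / 2
       if disc >= 0 else math.sqrt(det))
```

The recursion converges to the same stabilizing gain Matlab's `dlqr` would return for these matrices; the closed-loop spectral radius below 1 confirms stability.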
Design Procedure

The first step in creating a controller is to define the state
variables, their range, and their sampling intervals for the control
function. This can be augmented by knowledge of the
underlying dynamic equations:

$$\dot{x} = Fx + Gu, \qquad z = H_1 x.$$

From the Matlab function K = lqr(F, G, Q, ρ), Matlab calculates
the loop gain K = [0.8530 0.5060 3.5915 4.1670].
The transfer function for the controller when ρ = 1 is

$$\frac{b_3 s^3 + b_2 s^2 + b_1 s + b_0}{s^4 + a_1 s^3 + a_2 s^2},$$

with $a_1 = 0.66\,\mathrm{sec}^{-3}$, $b_3 = 3.4\,\mathrm{sec}^3$, $b_2 = 8.1\,\mathrm{sec}^2$, $b_1 = 57.5\,\mathrm{sec}$, $b_0 = 66.7$.
Its digital equivalent is

$$\frac{0.4 z^3 - 1 z^2 + 1 z - 0.3}{z^4 - 3.8 z^3 + 5.5 z^2 - 3.7 z + 1}.$$
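As an illustration of how such a quoted digital controller would run as uploadable code, the sketch below realizes it as a difference equation executed once per sample. The coefficients are the rounded values printed above (the exact design values would come from the Matlab workflow), so only the realization itself, not its closed-loop behavior, is demonstrated:

```python
# Sketch: realizing the quoted digital controller
# D(z) = (0.4 z^3 - z^2 + z - 0.3) / (z^4 - 3.8 z^3 + 5.5 z^2 - 3.7 z + 1)
# as the difference equation run once per sample (Ts = 0.1 s):
#   u[k] = 3.8 u[k-1] - 5.5 u[k-2] + 3.7 u[k-3] - u[k-4]
#        + 0.4 e[k-1] - e[k-2] + e[k-3] - 0.3 e[k-4]
b = [0.0, 0.4, -1.0, 1.0, -0.3]    # numerator, aligned to z^4
a = [1.0, -3.8, 5.5, -3.7, 1.0]    # denominator

def controller(e):
    """Run the controller over an input (error) sequence."""
    y = [0.0] * len(e)
    for k in range(len(e)):
        acc = 0.0
        for i in range(1, 5):
            if k - i >= 0:
                acc += b[i] * e[k - i] - a[i] * y[k - i]
        y[k] = acc
    return y

impulse = [1.0] + [0.0] * 9
resp = controller(impulse)         # first samples: 0, 0.4, 0.52, ...
```

Note the one-sample delay (the numerator order is lower than the denominator order), which is what makes the law implementable in real time: the output at sample k uses only past inputs.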
Fig. 10. Unit step-response for the controller transfer function

Fig. 11. Root locus for the controller transfer function

It is interesting to note the stable symmetric root locus for the
controller.

Fig. 12. Nyquist plot for the controller transfer function

The Nyquist plot has a good phase margin and gain margin. These
excellent stability properties are a general feature of LQR designs.

Fig. 13. Frequency response for the controller transfer function

4.4 Full Order Compensator Design for the Spacecraft
Attitude Control
If we take the control law design, combine it with the estimator
design and implement the control law by using the estimated
state variables, the design is complete for a regulator that is able
to reject disturbances but has no reference input to be tracked. In
doing so we will compute the closed-loop characteristic equation
and the open-loop controller transfer function for different value
of q, where q is a scalar design parameter.
Use these results to compare the state-space designs with root
locus and frequency-response designs in digital form. We place
the control poles at s = −0.707 ± 0.707j (ωn = 1 rad/sec, ξ = 0.707) and
place the estimator poles at ωn = 5 rad/sec and ξ = 0.5. The
corresponding desired (control) characteristic equation is αc(s) =
s² + s√2 + 1; for the choice of ρ = 1 the state feedback gain is found to be
K = [0.8530 0.5060 3.5915 4.1670].

For the estimator design, the desired estimator characteristic
polynomial is αe(s) = s² + 5s + 25. The Linear Quadratic
Estimator (LQE) is used with Rv = 1 and Q1 = Γ·1·Γ', where Γ (gam) is
q·G for the chosen value of the scalar q. The estimator gain L is
computed with the lqe Matlab function as L = lqe(F, gam, H, Q1, Rv). The
resulting estimator feedback gain is
L = [−10.7924; 5.2045; 6.2215; 1.7280] for q = 1.
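The role of the estimator gain can be illustrated on the one-axis plant. The sketch below is not the paper's LQE design: for brevity it substitutes a deadbeat observer gain L = [2, 1/Ts] for the double integrator, which drives the estimation error to zero in two samples, showing how the measured output corrects the predicted state:

```python
# Simplified sketch of a state estimator for the one-axis plant
# x[k+1] = Phi x[k] + Gam u[k], y[k] = x1[k]. The paper computes its
# gain with Matlab's lqe; here a deadbeat gain L = [2, 1/Ts] is used
# instead (an assumption for illustration), making (Phi - L*H) nilpotent.
Ts = 0.1
Phi = [[1.0, Ts], [0.0, 1.0]]
Gam = [Ts**2 / 2, Ts]
L = [2.0, 1.0 / Ts]

def step(x, u):
    """One discrete-time plant update."""
    return [Phi[0][0] * x[0] + Phi[0][1] * x[1] + Gam[0] * u,
            Phi[1][0] * x[0] + Phi[1][1] * x[1] + Gam[1] * u]

x, xhat = [1.0, -0.5], [0.0, 0.0]    # true state vs. estimate
errs = []
for k in range(5):
    u = 0.1                          # arbitrary known input
    innov = x[0] - xhat[0]           # measurement residual y - H*xhat
    x_next = step(x, u)
    pred = step(xhat, u)             # predict, then correct with L*innov
    xhat = [pred[0] + L[0] * innov, pred[1] + L[1] * innov]
    x = x_next
    errs.append(max(abs(x[0] - xhat[0]), abs(x[1] - xhat[1])))
```

With (Φ − LH) nilpotent the error vanishes after two corrections regardless of the initial mismatch; an LQE gain trades this speed for noise averaging.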
The compensator transfer function is

$$D_c(s) = \frac{b_3 s^3 + b_2 s^2 + b_1 s + b_0}{a_1 s^4 + a_2 s^3 + a_3 s^2 + a_4 s + a_5},$$

with $a_1 = 1\,\mathrm{sec}^{-4}$, $a_2 = 11.3\,\mathrm{sec}^{-3}$, $a_3 = 80\,\mathrm{sec}^{-2}$, $a_4 = 11.3\,\mathrm{sec}^{-1}$, $a_5 = 1031$ and
$b_3 = 23\,\mathrm{sec}^3$, $b_2 = 26\,\mathrm{sec}^2$, $b_1 = 388.8\,\mathrm{sec}$, $b_0 = 266.7$. Its digital equivalent is

$$\frac{0.3696 z^3 - z^2 + z - 0.2916}{z^4 - 3.776 z^3 + 5.488 z^2 - 3.647 z + 1}.$$
The frequency response of the compensated system (sysk) shows a gain
margin of 15.4 dB at 31.4 rad/sec and a phase margin of 53.6° (delay
margin of 1.98 samples) at 4.73 rad/sec; the closed loop is stable.
Failure detection and accommodation is usually the most
difficult task in a digital system design, and sufficient resources
were allocated to it in this paper by modeling the plant and actuator.
Algorithms were developed to validate all the control
system models. Simulations were carried out to test the
functionality and performance of the system, with a focus
on controller and estimator design for the spacecraft attitude control.
Attitude and orbit dynamics and control coupling is an area
requiring significant future work. As spacecraft pointing
requirements and formation relative position requirements
become stricter, the tasks of attitude and orbit determination and
control subsystems become more and more coupled. We can no
longer consider the dynamics and control of spacecraft in terms
of two subsystems. Rather, we must explore the concept of
unified attitude and orbit dynamics and control.
Fig.14. Frequency response for the Full Order Compensator
Design for the Spacecraft Attitude Control
Ajayi Abidemi Obatoyinbo
International Institute for Advanced Aerospace Technologies of SUAI.
Saint Petersburg State University of Aerospace Instrumentation, Saint Petersburg, Russia. Tel: +79522063259, +2348036663550.
Abstract: This paper considers methods for optimizing the accuracy of micromechanical gyroscopes and
accelerometers and for controlling the working frequencies of the device. It is shown that improvement of the
characteristics of micromechanical devices may be achieved only by using an automatic system which tunes and
stabilizes the main parameters of the device. This paper presents a circuit of oscillation excitation with a reference
generator whose frequency is automatically adjusted to the mechanical resonance frequency of the sensitive
element with respect to the excitation axis, and a parametric study of the controller design scheme for a
gyro-accelerometer to achieve robust performance under parameter variations. In particular, an integral- and
derivative-based controller design method is suggested to achieve the desired performances of stability margin and
uniformity of scale for both gyroscopes and accelerometers with uncertainties of quality factor and resonant
frequency, demonstrating the usefulness of the proposed design scheme. Various tests will be carried out using
analytical calculations and experimental data, and a model of sensor measurement will be developed and used for
the final calculation of the Earth's rotation rate. In this case a number of measurements will be performed and an
average value of the Earth's rotation rate is derived using calculations, experimental data and simulations in Matlab.
Keywords: Microelectro-mechanical Systems (MEMS) Gyroscope, Accelerometer, Accuracy, Resonance, Control
and Stabilization.
The idea to use microelectronics technology to manufacture a
gyroscope with low weight, small size, low power consumption
and cost apparently belongs to experts from the Draper
Laboratory in the USA. They were the first to develop
prototypes of micromechanical gyroscopes and to publish their
results. At the present time, many companies produce silicon
micromechanical gyros of their own design. The primary market
for such gyros is civilian applications. These micro gyros are
used in vehicle stability and braking automation, for
optical axis stabilization, and in medical equipment. All micro gyros
have low accuracy because of natural frequency oscillation of
the device's mechanical part. Analytical calculations and
experimental data show that improvement of the
characteristics of micro gyros may be achieved only through the
use of auto-tuning systems and stabilization of the main
parameters of the device. Such parameters are the frequency and
oscillation amplitude of an oscillator. Apart from that, it is
necessary to stabilize the difference of the natural frequencies of
axial and equatorial oscillations, and also to ensure damping of
equatorial oscillations with high accuracy.
Great advances in micro-electromechanical system (MEMS)
technology enable researchers to develop smaller and lighter
inertial sensors for applications including inertial navigation systems (INS),
automotive systems, and consumer devices. MEMS sensors
have various advantageous properties such as miniature size,
low power consumption, high rigidity, and easy fabrication with
low cost by mass-production processes. Among these properties,
the size of the sensor is crucial in practical applications. If the
size of a device becomes smaller, the area of its applications can
be expanded considerably. The multi-sensor with single-structure
is one of the solutions to reducing the size of sensors. The
performance of the MEMS gyro-accelerometer is normally
affected by its manufacturing errors, e.g. irregularity of thickness,
width and vacuum level, which are the sources of uncertainties in
parameters such as resonant frequency and quality factor.
Therefore, a control scheme is needed to guarantee the
performance under those uncertainties. In this study, a controller
design that considers different operating principles of gyro and
accelerometer will be proposed. The force rebalance control is
used both for the gyro and the accelerometer to achieve
performance goals by force feedback loop that regulates the
motion of the proof mass within some limited level. With this control
scheme, we can achieve performance goals for both the gyro
and accelerometer under some uncertainties in the system parameters.

A common element of all known designs of microgyros is an
oscillating mass in an elastic suspension. Angular velocity of the
instrument relative to inertial space results in the appearance
of Coriolis forces which give rise to an additional
oscillating motion. Measurement and analysis of the
parameters of this motion allows one to determine the angular speed of the base.
According to the type of movement of the sensing mass,
microgyros can be divided into four subclasses. In the LL scheme,
the sensitive element makes a linear motion during excitation of
the primary oscillations and a linear motion in another plane at
the origin of Coriolis forces. In devices constructed by the RR scheme,
both vibration modes of the sensitive element are angular. Combined
linear and angular vibration modes of the sensitive element are realized in the
LR and RL schemes.
A classical example of construction of an LL-type micromechanical
gyro is the well-known scheme with divided masses
of the sensing element, designed by the Draper Laboratory. Later,
engineers of the same laboratory licensed and published
the symmetrical scheme of a microgyro with a ring-shaped
sensing mass. According to the above classification, this
scheme belongs to the RR type. In this scheme, a system of
elastic suspension of the sensitive element consisting of
four main radial and two additional torsion bars is used. This
scheme provides the necessary elastic properties of the suspension
along the axis of excitation of the primary oscillations and the axis
of the output signal, together with maximum rigidity.

Excitation of primary oscillations in the microgyro is
implemented with the help of a comb-structured electrostatic
force driver. Measurement of secondary oscillations is carried
out with the help of a capacitive transducer, one plate of which is
mounted on the case of the device while the other plate is on the
sensitive element. For creation of additional control efforts
about the measuring axis, an electrostatic force driver of a
design similar to the capacitive transducer, together with a transducer
of the rotation angle of the sensitive element, is provided. Variations
of the RR scheme became the basis for many recent designs.
The major metrology characteristics of microgyros are sensitivity
and working frequency band. As a first approximation, the movement
of the sensitive element of an RR-microgyroscope under the influence
of the angular speed of the base $\omega_y$ can be described by equation (1):

$$\ddot{\alpha} + 2\xi\omega\dot{\alpha} + \omega^2\alpha = \chi\gamma_0\nu\cos\nu t\cdot\omega_y,$$   (1)

where $\omega$ is the resonant frequency of the secondary oscillations,
$\nu$ is the Coriolis-force driving frequency, $\gamma_0$ is the established
amplitude of the primary oscillations, and $\omega_y$ forces the oscillations
of the sensitive element on the output coordinate.

Fig. 1 above shows the Simulink block for secondary oscillations
of the RR-type.

The solution of equation (1) will be found analytically in the form (2):

$$\alpha(t) = \alpha_o(t) + \alpha_f(t),$$   (2)

where $\alpha_f(t)$ is the forced movement of the sensitive element and
$\alpha_o(t)$ is its own movement. The equation for the forced movement is
shown below (with $k_g = \chi\gamma_0$):

$$\alpha_f(t) = \frac{k_g\nu}{(\nu^2-\omega^2)^2 + 4\omega^2\nu^2\xi^2}\left[-\omega_y(\nu^2-\omega^2)\cos\nu t + 2\omega_y\nu\xi\omega\sin\nu t\right],$$

and the equation for the own movement is shown below:

$$\alpha_o(t) = \frac{k_g\nu\,e^{-\xi\omega t}}{\sqrt{1-\xi^2}\left[(\nu^2-\omega^2)^2 + 4\omega^2\nu^2\xi^2\right]}\left[-\omega_y\xi(\omega^2+\nu^2)\sin\!\left(\omega\sqrt{1-\xi^2}\,t\right) + \sqrt{1-\xi^2}\,\omega_y(\nu^2-\omega^2)\cos\!\left(\omega\sqrt{1-\xi^2}\,t\right)\right].$$

For convenience in the subsequent analysis, we introduce a new variable
$\Delta = \nu - \omega$ describing the size of the frequency off-tuning between
the primary and secondary oscillations. Summation of the expressions
describing the forced and own movements of the sensitive element in view
of this new variable allows us to write the common solution of the
equation as shown below (5):

$$\alpha(t) = A_i(t)\sin\nu t + A_q(t)\cos\nu t,$$   (5)
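The forced-solution amplitude can be checked numerically. The sketch below (illustrative parameters, not the paper's) integrates equation (1) with a fixed rate input and compares the steady-state peak against the amplitude implied by the forced movement:

```python
import math

# Numeric check (assumed parameters) that integrating eq. (1) reproduces
# the steady-state amplitude of the forced solution:
# |alpha_f| = k_g*nu*w_y / sqrt((nu^2 - w^2)^2 + 4*w^2*nu^2*xi^2).
w, nu, xi = 10.0, 12.0, 0.1      # secondary resonance, drive freq, damping
kg, wy = 1.0, 0.5                # k_g = chi*gamma_0; input rate amplitude

def deriv(t, a, ad):
    return ad, kg * nu * math.cos(nu * t) * wy - 2 * xi * w * ad - w * w * a

t, a, ad, dt, peak = 0.0, 0.0, 0.0, 1e-3, 0.0
t_end = 20.0                     # transient e^(-xi*w*t) is negligible by then
while t < t_end:                 # fixed-step RK4 integration
    k1 = deriv(t, a, ad)
    k2 = deriv(t + dt / 2, a + dt / 2 * k1[0], ad + dt / 2 * k1[1])
    k3 = deriv(t + dt / 2, a + dt / 2 * k2[0], ad + dt / 2 * k2[1])
    k4 = deriv(t + dt, a + dt * k3[0], ad + dt * k3[1])
    a += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    ad += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt
    if t > t_end - 2 * math.pi / nu:   # scan the final drive period
        peak = max(peak, abs(a))

A_analytic = kg * nu * wy / math.sqrt((nu*nu - w*w)**2 + 4*w*w*nu*nu*xi*xi)
A_numeric = peak
```

After the own movement has decayed, the simulated peak matches the analytic forced amplitude to within the integration tolerance.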
where the amplitudes of the in-phase and quadrature components are

$$A_i(t) = \frac{k_g\nu}{\left[\nu^2 - (\nu-\Delta)^2\right]^2 + 4(\nu-\Delta)^2\nu^2\xi^2}\Big\{2\xi\nu(\nu-\Delta)\omega_y + e^{-\xi\omega t}\left[\nu^2 - (\nu-\Delta)^2\right]\omega_y\sin\Delta t - e^{-\xi\omega t}\left[\nu^2 + (\nu-\Delta)^2\right]\xi\omega_y\cos\Delta t\Big\};$$

$$A_q(t) = \frac{k_g\nu\,\omega_y}{\left[\nu^2 - (\nu-\Delta)^2\right]^2 + 4(\nu-\Delta)^2\nu^2\xi^2}\Big\{-\left[\nu^2 - (\nu-\Delta)^2\right] + e^{-\xi\omega t}\xi\left[\nu^2 + (\nu-\Delta)^2\right]\sin\Delta t + e^{-\xi\omega t}\left[\nu^2 - (\nu-\Delta)^2\right]\cos\Delta t\Big\}.$$

Therefore the resultant equation of the full movement contains two
components that can be separated by phase: an in-phase component
(relative to the driving-force frequency) and a quadrature component.
The amplitudes of the in-phase $A_i(t)$ and quadrature $A_q(t)$ components
are in turn fading harmonic processes, occurring at frequencies
numerically equal to the off-tuning of the primary and secondary
oscillation modes of the microgyroscope. Because the exponential terms
fade to zero as time increases, the steady-state amplitudes of these
components may be obtained as shown below (6):

$$A_i(t\to\infty) = \frac{2\xi k_g\nu^2\omega_y(\nu-\Delta)}{\left[\nu^2-(\nu-\Delta)^2\right]^2 + 4(\nu-\Delta)^2\nu^2\xi^2}, \qquad A_q(t\to\infty) = \frac{-k_g\nu\omega_y\left[\nu^2-(\nu-\Delta)^2\right]}{\left[\nu^2-(\nu-\Delta)^2\right]^2 + 4(\nu-\Delta)^2\nu^2\xi^2}.$$   (6)

In Fig. 1 the sensitivity functions of the gyroscope for the in-phase
and quadrature components are given as functions of the off-tuning
frequency. The plots show that the gyroscope has the greatest
sensitivity in the in-phase component at small off-tunings, and in the
quadrature component at large off-tunings. Separation into in-phase and
quadrature components is possible with the use of a synchronous
detector; the reference signals are the measured oscillations in the
excitation channel, in phase and at a right angle. Here we have an
important particular case of the gyroscope operating mode that can be
named the "resonant mode". In this mode both working frequencies of
the gyro are matched to one value. In this case the amplitudes of the
oscillations are equal to (7):

$$A_q(t) = 0, \qquad A_i(t) = \frac{\chi\gamma_0\,\omega_y}{2\xi\nu}\left(1 - e^{-\xi\nu t}\right),$$   (7)

so that $\alpha(t) = A_i(t)\sin\nu t$.

For further inquiry it is necessary to consider the gyro in-phase
and quadrature frequency characteristics. Those characteristics
can be obtained as the gyro response to a harmonic input signal,
given by $\omega_y(t) = \omega_y\sin\Omega t$. Then equation (1) becomes (8):

$$\ddot{\alpha} + 2\xi\omega\dot{\alpha} + \omega^2\alpha = \chi\gamma_0\nu\,\omega_y^M,$$   (8)

where $\omega_y^M = \omega_y\sin\Omega t\cos\nu t$ is the modulated harmonic
input signal. The transfer function corresponding to equation (8) is
shown below (9):

$$W_\alpha(s) = \frac{\chi\gamma_0\nu}{s^2 + 2\xi\omega s + \omega^2}.$$   (9)

Its solution can be written as (10):

$$\alpha(t) = A(\Omega,\nu)\sin[\nu t + \psi(\Omega,\nu)] = U(\Omega,\nu)\sin\nu t + V(\Omega,\nu)\cos\nu t,$$   (10)

where $A(\Omega,\nu)$ and $\psi(\Omega,\nu)$ are the amplitude and phase
characteristics of the micromechanical gyroscope, describing the full
amplitude and phase change of the oscillations for both the in-phase
and quadrature components, and

$$U(\Omega,\nu) = A(\Omega,\nu)\cos\psi(\Omega,\nu), \qquad V(\Omega,\nu) = A(\Omega,\nu)\sin\psi(\Omega,\nu)$$

are the real and imaginary characteristics.

Having written down $\omega_y^M$ in the basis of the lateral frequencies (11):

$$\omega_y^M = \frac{\omega_y}{2}\left[\sin(\Omega+\nu)t - \sin(\nu-\Omega)t\right] = \frac{\omega_y}{2}\left[\sin(\Omega+\nu)t + \sin(\Omega-\nu)t\right],$$   (11)

equation (10) can be written down in the form (12):

$$\alpha(t) = \frac{1}{2}\left[U(\Omega+\nu)\sin(\Omega+\nu)t + V(\Omega+\nu)\cos(\Omega+\nu)t + U(\Omega-\nu)\sin(\Omega-\nu)t + V(\Omega-\nu)\cos(\Omega-\nu)t\right],$$   (12)

where $U(\Omega+\nu)$ and $V(\Omega+\nu)$ are the frequency characteristics
obtained from (9) at $s = j(\Omega+\nu)$, and $U(\Omega-\nu)$ and $V(\Omega-\nu)$
are the similar characteristics obtained at $s = j(\Omega-\nu)$.

After transformation, equation (12) becomes (13):

$$\alpha(t) = \frac{1}{2}\big\{\left[U(\Omega+\nu) - U(\Omega-\nu)\right]\cos\Omega t + \left[V(\Omega-\nu) - V(\Omega+\nu)\right]\sin\Omega t\big\}\sin\nu t + \frac{1}{2}\big\{\left[U(\Omega+\nu) + U(\Omega-\nu)\right]\sin\Omega t + \left[V(\Omega+\nu) + V(\Omega-\nu)\right]\cos\Omega t\big\}\cos\nu t.$$   (13)

Let us write down the expressions for the in-phase and quadrature
amplitudes of the MMG oscillations as (14), (15):

$$U(\Omega,\nu,t) = A_i = \frac{U(\Omega+\nu) - U(\Omega-\nu)}{2}\cos\Omega t - \frac{V(\Omega+\nu) - V(\Omega-\nu)}{2}\sin\Omega t;$$   (14)

$$V(\Omega,\nu,t) = A_q = \frac{U(\Omega+\nu) + U(\Omega-\nu)}{2}\sin\Omega t + \frac{V(\Omega+\nu) + V(\Omega-\nu)}{2}\cos\Omega t.$$   (15)

As follows from (14) and (15), the amplitudes of the in-phase and
quadrature components of the MMG oscillations are harmonic processes
occurring at the frequency of the input signal, and they describe the
envelope of the modulated output signal. The amplitudes of the
in-phase and quadrature envelopes in (14) and (15) are the real and
imaginary frequency characteristics of the MMG corresponding to the
envelope transfer functions. For the quadrature component,

$$W_q\left[j(\Omega+\nu)\right] = \frac{1}{2}\left\{W_\alpha\left[j(\Omega+\nu)\right] + W_\alpha\left[j(\Omega-\nu)\right]\right\},$$

and for the in-phase component,

$$W_i(s) = \frac{2\nu\left(s + \xi(\nu+\Delta)\right)\chi\gamma_0\nu}{\left[s^2 + 2\xi(\nu+\Delta)s + (\nu+\Delta)^2 - \nu^2\right]^2 + 4\nu^2\left(\xi(\nu+\Delta) + s\right)^2}.$$
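The synchronous-detector separation into in-phase and quadrature components described above can be sketched numerically. The example below (illustrative parameters, not the paper's) simulates equation (1) and demodulates the steady-state output against sin νt and cos νt:

```python
import math

# Sketch of synchronous (phase-sensitive) detection: simulate eq. (1),
# multiply the steady-state output by sin(nu t) and cos(nu t), and
# average over whole drive periods to recover A_i and A_q.
# Parameters are assumed for illustration only.
w, nu, xi = 10.0, 11.0, 0.1      # secondary resonance, drive, damping
kg, wy = 1.0, 0.5                # k_g = chi*gamma_0; input rate

def deriv(t, a, ad):
    return ad, kg * nu * math.cos(nu * t) * wy - 2 * xi * w * ad - w * w * a

dt, t, a, ad = 2e-4, 0.0, 0.0, 0.0
t_start = 20.0                            # transient decayed (xi*w = 1)
t_end = t_start + 8 * 2 * math.pi / nu    # demodulate over 8 whole periods
si = sq = n = 0
while t < t_end:                          # fixed-step RK4 integration
    k1 = deriv(t, a, ad)
    k2 = deriv(t + dt / 2, a + dt / 2 * k1[0], ad + dt / 2 * k1[1])
    k3 = deriv(t + dt / 2, a + dt / 2 * k2[0], ad + dt / 2 * k2[1])
    k4 = deriv(t + dt, a + dt * k3[0], ad + dt * k3[1])
    a += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    ad += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt
    if t >= t_start:
        si += a * math.sin(nu * t)
        sq += a * math.cos(nu * t)
        n += 1
A_i, A_q = 2 * si / n, 2 * sq / n         # demodulated amplitudes

# steady-state values implied by the forced solution
D = (nu * nu - w * w) ** 2 + 4 * w * w * nu * nu * xi * xi
A_i_ref = 2 * xi * w * kg * nu * nu * wy / D
A_q_ref = -kg * nu * wy * (nu * nu - w * w) / D
```

The demodulated values match the steady-state amplitudes of (6), showing how the phase-sensitive rectifiers in the excitation channel recover the two components that carry the rate information.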
The amplitude frequency characteristics of the MMG for the in-phase
and quadrature components, constructed using equations (12)
and (13), are presented in Fig. 2. The analysis of the shown
sensitivity functions and frequency characteristics allows us to
draw the following conclusions:
− the quadrature component has a uniform
characteristic in the working range of frequencies in a
direct-measurement device and is used for formation
of the output signal of the microgyro;
− the working frequency band of the MMG and its
sensitivity are determined by the size of the frequency off-tuning
between the primary and secondary oscillations: if the working
frequency band is increased, the sensitivity of the
device proportionally decreases, and vice versa at narrowing;
− stability of the basic metrological characteristics
depends on the stability of the frequency of the primary
oscillations and their amplitude, on the accuracy of the break
frequencies of the primary and secondary oscillations, and
on the quality factor of the secondary oscillator.
After transformations, in view of the above expressions and the
substitutions $j\Omega = s$, $\omega = (\nu+\Delta)$, we have:

$$W_q(s) = \frac{\chi\gamma_0\nu\left(s^2 + 2\xi(\nu+\Delta)s + (\nu+\Delta)^2 - \nu^2\right)}{\left[s^2 + 2\xi(\nu+\Delta)s + (\nu+\Delta)^2 - \nu^2\right]^2 + 4\nu^2\left(\xi(\nu+\Delta) + s\right)^2}.$$

We use the following method to get the envelope transfer function
for the in-phase component. Generally, let $W(j\Omega) = U(\Omega) + jV(\Omega)$.
Therefore, as was shown, $\alpha = U(\Omega)\sin\Omega t + V(\Omega)\cos\Omega t$.
If we introduce the transfer function $W^*(j\Omega) = jW(j\Omega)$, then
$W^*(j\Omega) = -V(\Omega) + jU(\Omega)$ and, therefore,
$\alpha = -V(\Omega)\sin\Omega t + U(\Omega)\cos\Omega t$. From here it follows that
in equation (14), for the in-phase component of the envelope of the
MMG oscillations, the real and imaginary frequency characteristics
correspond to this transfer function. Having made the necessary
transformations, we obtain the in-phase transfer function $W_i(s)$
presented above.
Fig. 2. Dependence of the frequency characteristics of a
micromechanical gyroscope on the frequency off-tuning between the
primary and secondary oscillations:
а) for the quadrature component;
b) for the in-phase component.
Thus, when designing an MMG of increased precision, it is necessary
to adopt constructive and system solutions which allow the
metrological characteristics to be stabilized.

Substantial improvement of the MMG characteristics can be
obtained in a device of the compensation type. By virtue of the
features of the dynamics of the sensitive element, the mode of
identifying the Coriolis force is better realized with the help of two
feedback contours, by applying to the sensitive element controlling
influences at the excitation frequency, generated from information on
the amplitudes of the in-phase and quadrature components of the output
signal. It is thus supposed that there are moment (force) transducers
providing controlled movement of the sensitive element, i.e.
operation of the device in a compensation mode.

A function chart of the MMG with two feedback contours is
presented in Figure 3 below. One voltage applied to the moment
transducer is in phase with the angle of turn of the sensitive element,
and another with the angular speed of this movement; the complex
signal submitted to the moment transducer is formed as the sum of
these voltages.

The structure and parameters of regulator 1, included in the
internal feedback contour, are chosen so as to suppress the
quadrature component of the MMG output signal to the maximal
degree. The device therefore works in a mode of "quasi-resonant
adjustment". In this case, as in the mode of physical coincidence of
the natural frequencies of the primary and secondary oscillations,
the MMG sensitivity achieves its maximal value, and the output
signal has only an in-phase component.

The second feedback contour uses this signal to provide the
working frequency band and to linearize the static characteristic of
the MMG by minimizing an equivalent time constant of the device with
respect to the envelope of the input signal. At synthesis of the
regulator structures and the choice of their parameters at the
initial design stage, it is expedient to use amplitude models of
the dynamics of the controlled MMG movement. As a result, the transfer
functions for the envelope of the sensitive-element oscillations are
obtained in the form:
$$W_i(s) = \frac{k_i(T_i s + 1)}{T_0^2 s^2 + 2\xi_0 T_0 s + 1}, \qquad W_q(s) = \frac{k_q}{T_0^2 s^2 + 2\xi_0 T_0 s + 1},$$

where

$$k_i = \frac{\chi\gamma_0\nu\xi}{4(\xi^2\nu^2 + \Delta^2)}, \quad k_q = \frac{\chi\gamma_0\Delta}{4(\xi^2\nu^2 + \Delta^2)}, \quad T_i = \frac{1}{\xi\nu}, \quad T_0 = \frac{1}{\sqrt{\xi^2\nu^2 + \Delta^2}}, \quad \xi_0 = \frac{\xi\nu}{\sqrt{\xi^2\nu^2 + \Delta^2}}.$$

Fig. 3. A function chart of a micromechanical gyroscope of the
compensation type

As follows from the chart, signals proportional to the amplitudes of
the in-phase and quadrature components are formed by phase-sensitive
rectifiers. Their work is provided by submission of reference voltages
from the system of excitation of the primary MMG oscillations. The
regulators of the first and second feedback channels form two control
functions on the basis of these signals. These signals are transformed
into alternating-current voltages by the modulators MOD1 and МОD2,
one of which is phased with the angle of turn of the sensitive element
about the excitation axis.

If the controlled influences are considered, then the amplitudes of
the in-phase and quadrature components in Laplace images will be found
from equations (18):

$$A_i(s) = \frac{1}{s^2 + 2\xi\omega s + \omega^2}\left[\left(k_{0y} + k_{1y}s\right)\omega_y(s) + \left(k_{1q}s + k_{0q}\right)U_y^q(s) + k_{0i}U_y^i(s)\right],$$

$$A_q(s) = \frac{1}{s^2 + 2\xi\omega s + \omega^2}\left[n_{0y}\,\omega_y(s) + n_{0}\,U_y^q(s) + \left(n_{0i} + n_{1i}s\right)U_y^i(s)\right],$$   (18)

where $U_y^i$ is the controlling influence phased with the angle of turn of
the sensitive element about the excitation axis, and $U_y^q$ is that phased
with the angular speed of the primary oscillations.

The structures of the regulators of the feedback channels are formed
by the common equations:

$$U^q(s) = W_{q1}(s)A_q(s) + W_{q2}(s)A_i(s), \qquad U^i(s) = W_{i1}(s)A_q(s) + W_{i2}(s)A_i(s).$$
The choice of the transfer functions $W_{i1}(s)$, $W_{i2}(s)$, $W_{q1}(s)$ and
$W_{q2}(s)$ is ambiguous and is determined by the accepted technique of
synthesis, and also by the requirements imposed on the feedback
contours. However, good enough results are obtained already with the
use of the most elementary laws of regulation:

$$U_y^i(s) = k^i_{fb1}A_q(s) + k^i_{fb2}A_i(s), \qquad U_y^q(s) = \left(k^q_{fb2} + \frac{k^q_{fb3}}{s}\right)A_i(s).$$

It is necessary to say that the specified limits are quite enough for
automatic identification of the frequency off-tunings caused by the
conditions of exploitation. It is possible to consider that use of the
suggested feedback structure in the MMG will give the possibility to
resolve the basic contradiction in MMG design between increasing the
sensitivity of the device and realizing the required band of working
frequencies. For this reason it will be possible to approach the
characteristics of the device to those of gyroscopic sensors (GAS) of
the middle class of accuracy.
The dynamics of the sensitive element in the mode of resonant
adjustment is described by transfer function (18). Thus, if the action
of the internal feedback contour is ideal, the open-loop transfer
function of the microgyroscope can be written as:

$$W(s) = \frac{\chi\gamma_0\left(k^q_{fb2}s + k^q_{fb3}\right)k}{(T_g s + 1)(T_f s + 1)s}.$$

If we choose $k^q_{fb2}/k^q_{fb3} = T_g$, we get

$$W(s) = \frac{\chi\gamma_0 k^q_{fb3}k}{(T_f s + 1)s},$$

and the closed-loop transfer function takes the standard form

$$\frac{k_g}{T_g^2 s^2 + 2\xi_g T_g s + 1},$$

where $k_g$ is the steepness of the output characteristic of the
gyroscope and $T_g = \sqrt{T_f/(k^q_{fb3}k)}$ is the time constant of the
gyroscope.

The steepness of the characteristic and the frequency properties do
not depend on the parameters of the sensitive element in the
compensation-type device, but are determined only by the parameters of
the regulator and the moment transducer. This means that by realizing
the circuit of a microgyroscope with the specified feedbacks, it is
possible to achieve substantial improvement in the stability of its
metrological characteristics in real conditions of exploitation. Fig. 3
above shows the amplitudes of the in-phase component in the internal
regulation contour and in the external one. They were obtained during
mathematical modeling: with the chosen parameters of regulator 1, the
quasi-resonant adjustment of the MMG is realized, and at inclusion of
regulator 2 the set speed of the measuring instrument associated with
its working frequency band is reached.

The structure of MMG movement control by the circuit in Figure 3
appears effective at a physical off-tuning of the natural frequencies
of the primary and secondary oscillations of not more than 10%. At
more essential initial frequency off-tunings, owing to power
restrictions of the transducer of controlling influences, it is not
possible to realize the "quasi-resonant adjustment" mode of the MMG.

For reliable excitation of the primary oscillations, a scheme with a
reference generator whose frequency is automatically tuned to the
frequency of the mechanical resonance of the sensitive element about
the excitation axis is used. The best results are achieved in versions
where the self-tuning contour of the stable reference frequency is
constructed on the basis of a phase detector detecting the phase shift
between the assigned and excited oscillations of the sensitive element.

The sensitivity of the phase method of measurement of the frequency
off-tuning can be estimated using a simplified model of the dynamics
of motion of the sensitive element under the effect of the harmonic
moment created by the driver from the signal of the reference
generator (20):

$$\ddot{\gamma} + 2\xi\nu\dot{\gamma} + \nu^2\gamma = k_d\sin\omega_g t,$$   (20)
where ν is the natural undamped frequency of the sensitive element, ξ is the relative damping ratio, and k_d is the force coefficient of the electrostatic motor.
The solution of equation (20) can be written in the form:

γ = A(t)·cos(ω_g·t + φ(t)),

where A(t), φ(t) are the amplitude and phase of the output signal relative to the quadrature equivalent of the reference-generator signal.
After the transient ends, a constant phase shift is established. Its value depends on the magnitude of the frequency mismatch and is determined by a particular solution of equation (21). The sensitivity of the phase method reliably allows the frequency mismatch to be fixed to within hundredths of a hertz at a phase-detector resolution of 1-2
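The quoted sensitivity can be illustrated with a short numeric sketch (the resonance frequency and damping ratio below are hypothetical MEMS-like values, not the authors' parameters): for the second-order model (20), the steady-state phase lag of γ relative to the drive is atan2(2ξνω_g, ν² − ω_g²), and near resonance a detuning of hundredths of a hertz already shifts the phase by degrees when ξ is small.

```python
import math

def steady_state_phase(nu, xi, omega):
    """Phase lag of gamma relative to the drive k_d*sin(omega*t) for
    gamma'' + 2*xi*nu*gamma' + nu^2*gamma = k_d*sin(omega*t)."""
    return math.atan2(2 * xi * nu * omega, nu**2 - omega**2)

# illustrative values: 3 kHz resonance, damping ratio 1e-4
nu = 2 * math.pi * 3000.0
xi = 1e-4

# phase change produced by a 0.01 Hz detuning from resonance
dphi = steady_state_phase(nu, xi, 2 * math.pi * 3000.01) - \
       steady_state_phase(nu, xi, nu)
print(math.degrees(dphi))  # about 1.9 degrees per 0.01 Hz
```

With these assumed values a phase detector of degree-level resolution already resolves a mismatch of hundredths of a hertz, which agrees with the estimate in the text.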
In its phase relations, the reference-generator signal models the angular-rate motion of the sensitive element, which means it may be used as the reference in the demodulation scheme of the output signal of the microgyro.
microgyro, is used to change the suspension rigidity along the measuring axis by electrical means.
[Simulink diagram: amplitude detector block (In1/Out1) with To Workspace outputs gama (deg) and gama' (deg/s).]
Fig. 5. Functional diagram of the amplitude stabilization system.
Fig. 6. Transients in the amplitude stabilization contour.
Fig. 4. Simulink model of the primary oscillations contour.
When choosing the phase-detector circuit, one must take into account the time required for synchronizing the frequencies of the reference generator and the mechanical resonance of the sensitive element, as well as the efficiency of second-harmonic suppression in its output signal. As studies have shown, the best results are obtained by a circuit that computes the cross-correlation function between the position signal of the sensitive element and the reference-generator signal. A correlation session is carried out over one period of the reference-generator signal; after it ends, the phase-shift estimate is stored and used in the contour until the next update. For real microgyroscope designs, the correlation algorithm reduces the synchronization time in the contour to about 1 s.
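The correlation session described above can be sketched as follows (a simplified illustration with made-up sampling parameters; the real detector operates on hardware signals): the phase of the position signal relative to the reference generator is recovered from in-phase and quadrature correlation sums accumulated over exactly one reference period.

```python
import math

def phase_shift(signal, f_ref, fs):
    """Correlate `signal` with the in-phase and quadrature references
    over exactly one period (n = fs / f_ref samples) and return the
    phase of the signal relative to sin(2*pi*f_ref*t)."""
    n = int(round(fs / f_ref))
    i_sum = q_sum = 0.0
    for k in range(n):
        t = k / fs
        i_sum += signal[k] * math.sin(2 * math.pi * f_ref * t)
        q_sum += signal[k] * math.cos(2 * math.pi * f_ref * t)
    return math.atan2(q_sum, i_sum)

# synthetic check: a 3 kHz position signal shifted by 30 degrees
fs, f = 300000.0, 3000.0
sig = [math.sin(2 * math.pi * f * k / fs + math.radians(30.0))
       for k in range(int(fs / f))]
print(math.degrees(phase_shift(sig, f, fs)))  # about 30
```

Correlating over exactly one period is what suppresses the second harmonic in the estimate, since the in-phase and quadrature sums of any integer number of full periods of the double-frequency term cancel.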
A simple version of the contour for stabilizing the amplitude of the excited oscillations of the sensitive element is shown in Fig. 6. It measures the oscillation amplitude with the help of an amplitude detector; on the basis of these measurements, the voltage applied to the electrostatic motor is shaped. To keep the stabilization error of the contour low, the control unit realizes an integral law of regulation. By changing the circuit parameters it is possible to control the transient period (Fig. 5) and the sensitivity to disturbances.
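A minimal discrete-time sketch of this contour (all gains and time constants below are illustrative assumptions, not design values) shows the integral law pulling the detected amplitude to the setpoint:

```python
# Amplitude stabilization contour: the amplitude-detector output is
# compared with the setpoint and an integral regulation law shapes the
# drive voltage for the electrostatic motor. The envelope of the excited
# oscillations is modeled as a first-order plant with gain g and time
# constant tau (hypothetical values).

def simulate(setpoint=1.0, g=2.0, tau=0.05, ki=40.0, dt=1e-3, steps=2000):
    amp, u = 0.0, 0.0           # detected amplitude, integrator state
    for _ in range(steps):
        error = setpoint - amp  # stabilization error
        u += ki * error * dt    # integral law of regulation
        amp += dt * (g * u - amp) / tau  # envelope dynamics
    return amp

print(simulate())  # settles near the setpoint
```

The integral law is chosen precisely because it drives the steady-state stabilization error to zero; a purely proportional law would leave a constant amplitude offset.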
As shown earlier, to maintain the required metrological characteristics, a certain difference between the frequencies of the primary and secondary oscillations of the sensitive element should be maintained during operation of the device. As studies have shown, at a small damping index the amplitude of the secondary oscillations varies harmonically with a frequency equal to the frequency mismatch between the primary and secondary oscillations. This property may be used to construct an appropriate meter. The frequency-difference detector is built on phase measurements between the signal of a difference-frequency reference generator and the detected signal of the measuring channel of the microgyro. Since the difference-frequency reference generator must be synchronized with the excitation reference generator, in practice it may be realized on the basis of frequency dividers. The selected difference-frequency signal is used either for correction in the output signal of the
In the construction of modern MMA, the schematic layouts used are, as a rule, those that served as macro-analogues in known accelerometers; they can be divided into several main groups (Fig. 7).
Fig. 7. Types of MMA construction (central support and other suspension layouts).
These diagrams show that the types of mechanical deformation of the elastic suspension of a microaccelerometer can be classified as follows:
1. By feedback: direct-conversion (without feedback) and compensation type; by the number of measuring axes: a) single-component (measuring the projection of acceleration in one direction), with one measuring axis; b) two-component, with two measuring axes; c) three-component, with three measuring axes.
2. By the nature of motion of the inertial mass (IM): a) axial accelerometers with translational motion of the IM; b) pendulum accelerometers with angular displacement of the IM.
3. By the type of elastic deformation of the mechanical suspension of the IM: a) simple deformation, bending (cantilever suspension); b) simple deformation, torsion (torsion suspension); c) complex deformation, bending with torsion (symmetric suspension). The nature of the deformation is shown in the figure above.
4. By the way mechanical strain is converted into an electrical signal (the output pickoff).
5. By the shape of the output signal: a) analog and b) discrete (frequency or digital), with the output in the form of pulse repetition frequency, pulse number or pulse duration.
In the design of a modern microaccelerometer it is not always possible to separately identify the inertial mass, the suspension scheme and the information pickoff, as in the case of the macro-analogue. Microminiaturization leads to the integration of several functions in individual design elements, which determines the final architecture. A very important, if not decisive, role is played by the choice of the method of transforming the mechanical stress or strain of the microaccelerometer elements under the influence of the applied acceleration. Let us consider these methods in more detail.
efficient conversion of non-electric quantities into electric signals, and a device with a simple kinematic scheme. Strain gauges have relatively low weight and good zero stability, are insensitive to varying magnetic fields, and allow precision static calibration. Consider the strain effect in more detail. Under longitudinal mechanical stress, a strain gauge changes its cross-sectional area, length and electrical resistivity. The change in total resistance is due to two factors: the change in geometric dimensions and the change in resistivity. In the case of small strains Δl/l, the relative change in resistance obeys

ΔR/R = K·Δl/l.

The coefficient K is a measure of the sensitivity of the strain gauge. Since Poisson's ratio for metals is about 0.3, while the contribution of the resistivity change to the overall change in gauge resistance is about 20%, the coefficient K for wire strain gauges is approximately equal to 2.
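These figures can be checked with one line of arithmetic, assuming, as the text states, μ ≈ 0.3 and a 20% resistivity share for metal gauges (and, for comparison, a 98% resistivity share for semiconductors):

```python
def gauge_factor(mu=0.3, resistivity_share=0.20):
    """Total gauge factor K = (geometric part) / (geometric share):
    the geometric part of dR/R is (1 + 2*mu) * dl/l, and it supplies
    the fraction (1 - resistivity_share) of the total change."""
    geometric = 1.0 + 2.0 * mu
    return geometric / (1.0 - resistivity_share)

print(gauge_factor())                        # metals: about 2
print(gauge_factor(resistivity_share=0.98))  # semiconductors: about 80
```

The second value falls inside the 50...150 range quoted below for semiconductor gauges, so the stated shares and sensitivities are mutually consistent.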
A similar effect is observed in semiconductor materials under mechanical stress. The essential difference from metals is that the coefficient is large: nearly 98% of the overall resistance change is due to the change of electrical resistivity, and only 2% to the change in geometric dimensions. As a result, the sensitivity K of semiconductor strain gauges reaches values of about 50...150. Such high sensitivity is a significant advantage of these gauges. Their main disadvantage is the dependence of resistance and sensitivity on temperature. Semiconductor strain gauges can provide high reliability, accuracy and vibration resistance, small size and mass, structural simplicity, and a unified output signal of mechanoelectric converters. For a long time these devices were made of single crystals of silicon, which has sufficiently high elasticity, strength and mechanical properties. The development of integrated technology has opened new possibilities in gauge manufacturing: a high-quality silicon wafer can serve as the elastic element, with the strain gauges formed on it; the sensing elements are made by local diffusion. This eliminates the glue joints previously used to connect the sensitive and elastic elements. Such devices constitute a new generation of transducers of mechanical quantities.
7.1 Transducers: Tensoresistive Semiconductor Converters
The relationship between the mechanical and electrical properties of conductors and semiconductors is determined by two major phenomena: the strain effect and the piezoelectric effect. It should be noted that transducers based on strain effects in semiconductors are unilateral, i.e. they convert mechanical energy into electrical energy. The physical nature of the strain effect is related to the interaction of electrons with the crystal lattice. In homogeneous semiconductors, the shift of energy levels makes the resistance depend on strain, i.e. they exhibit the tensoresistive effect. The first integrated sensors were built on the basis of this effect, using a semiconductor strain gauge as the sensing element. Strain gauges are devices that change their electrical resistance as a result of mechanical strain. The resistance change of the piezoresistors is converted into an electrical signal that carries information about the mechanical action on the deformable elastic element, to which the strain gauge must be rigidly and mechanically connected. The advantage of the strain-gauge method of measuring mechanical parameters is the possibility of
Fig. 8. Basic Scheme of a MEMS Accelerometer
7.2 The Basic Architecture of a Microaccelerometer
Capacitive transducers are used in micromechanical silicon accelerometers with an elastically suspended inertial mass. The design schemes of silicon micromechanical accelerometers with electrostatic force-rebalance or capacitive transducers have an inertial mass held by a set of elastic elements; the mass can be located between the electrodes or above them. The measured acceleration causes a rotation or translational motion of the inertial mass, changing the air gap between the electrodes in proportion to the acceleration.
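The gap-change pickoff described above can be sketched numerically (all dimensions and the spring constant below are invented round numbers, not data from a real device): the mass deflects by x = m·a/k, and a differential pair of gap capacitors converts x into a capacitance difference that is nearly linear in the acceleration while x is much smaller than the gap.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def delta_c(a, m=1e-9, k=1.0, d=2e-6, area=1e-7):
    """Differential capacitance change for acceleration a (m/s^2)."""
    x = m * a / k                  # static deflection of the mass
    c1 = EPS0 * area / (d - x)     # the gap narrows on one side
    c2 = EPS0 * area / (d + x)     # and widens on the other
    return c1 - c2

# the response is nearly linear for deflections far smaller than the gap
print(delta_c(9.81))   # capacitance change for 1 g, in farads
```

The differential arrangement both doubles the sensitivity and cancels the even-order nonlinearity of a single gap capacitor.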
7.3 Axial Accelerometer with an Elastic Mechanical Suspension of the Inertial Mass and a Comb-Structure Capacitance Transducer
The appearance of the most common sensor of this type is shown in perspective in Fig. 9. An example of this design is the ADXL50 accelerometer from Analog Devices, typically built as an acceleration-measuring system on a monolithic integrated circuit. It contains a polysilicon sensing element 2 microns thick, manufactured by surface micromachining. The sensor can measure both positive and negative accelerations up to a maximum of ±50 g. To prevent overload and failure of the MMA, motion limiters are provided (the second limiter is not shown). Electrical connections are made using buried implanted p-type conductors 7, placed under a layer of silicon dioxide 9; this layer also serves as insulation for the other elements of the electrical circuits.
Fig. 9. Appearance of accelerometer in perspective.
[1] Kumar, K., Barbour, N., Elwell, J. (1995). Emerging Low Cost Inertial Sensors. In: Proc. 2nd St. Petersburg Intern. Conf. on Gyroscopic Technology and Navigation, May 24-25, 1995. CSRI "Elektropribor", St. Petersburg. Pt. 1, pp. 3-15.
[2] Severov, L.A., Ponomarev, V.K., Panferov, A.I., Kucherkov, S.G., Sorokin, A.V. (1998). Micromechanical gyros: designs, characteristics, technologies, paths of development. In: News of High Schools. Instrument Making. St. Petersburg. Vol. 41, No. 1-2, pp. 57-72. (In Russian)
[3] Barbour, N. et al. (1996). Micro-Electromechanical Instrument and Systems Development at Draper Laboratory. In: 3rd Saint Petersburg International Conference on Integrated Navigation Systems. CSRI "Elektropribor", St. Petersburg. Pt. 1, pp. 3-10.
[4] Evstifeev, M.I. (2000). The state of development and prospects of micromechanical gyros. In: Navigation and Motion Control. CSRI "Elektropribor", St. Petersburg, pp. 54-71. (In Russian)
[5] Ponomarev, V. (2004). Amplitude models of the dynamic motion for the guided micromechanical gyroscopes. In: III Int. Symposium "Aerospace Technologies", June 2-4, 2004, Saint Petersburg. (In Russian)
[6] Yazdi, N. et al. (1998). Micromachined Inertial Sensors. In: Proceedings of the IEEE. Vol. 86, No. 8, pp. 1640-1658.
[7] Atsjukovskij, V.A. (1966). Capacitive Converters of Displacement. Energia, Moscow. (In Russian)
[8] Severov, L., Ponomarev, V., Panferov, A., Kucherkov, S. (2004). ... micromechanical gyroscopes. In: III Int. Symposium "Aerospace Technologies", June 2-4, 2004, Saint Petersburg. (In Russian)
[9] Severov, L., Ponomarev, V., Panferov, A. (2004). The Problems of Characteristic Stabilization for ... In: 16th IFAC Symposium on Automatic Control in Aerospace, Russia, pp. 266-270.
[10] Severov, L., Ponomarev, V., Panferov, A. (2002). Phase automatic regulator of the reference-generator frequency of a micromechanical gyroscope. In: Collected articles of the XXIII Conference in Memory of N.N. Ostryakov, Saint Petersburg. (In Russian)
[11] Severov, L., Ponomarev, V., Panferov, A. (2002). Review and paths of perfection of micromechanical gyroscopes. In: Collected articles of the Symposium "Aerospace Technologies", Saint Petersburg. (In Russian)
[12] Lukyanov, D.P., Luchinin, V.V., Skvortsov, V.Yu., Shevel'ko, M.M. (2001). Experimental study of a surface-acoustic-wave microaccelerometer layout. St. Petersburg Journal of Electronics, No. 4, pp. 63-64.
[13] Lukyanov, D.P., Ladychuk, I.Y., Mayzelis, A.Y. et al. (2002). Microaccelerometer and gyroscope on SAW. In: Proceedings of the XXIII Conference in Memory of N.N. Ostryakov. Gyroscopy and Navigation, No. 4, p. 41.
[14] D.P., Filatov, Yu.V. et al. (2004). In: Proceedings of the 11th Saint Petersburg International Conference on Integrated Navigation Systems, May 24-26, 2004. P.
[15] Varadan, V.K. et al. (2000). Design and development of a MEMS-IDT gyroscope. Smart Mater. Struct., 9, pp. 898-905.
Rao Vandana Parankusam
State University of Aerospace Instrumentation,
67, Bolshaya Morskaya, Saint-Petersburg, 190000 Russia
Fax: +7 905 22 33 534, E-mail:
Abstract: Smooth landing and take-off of sea planes depends on determining the main direction of the wind and the sea
wave distribution. The key problem consists in automation of choosing the landing trajectory direction regarding the main
direction of sea waves spread. A new concept of determining these important parameters through digital image
processing and analysis has been proposed here.
Keywords: Ekranoplanes, digital image processing, wave direction, wind direction, algorithms, simulation, analysis,
The purpose of any image is to present the observer with a set of visual data that promotes the best understanding of the scene under consideration. Depending on the problem, the image can be analyzed in various ways. To understand the general arrangement of objects in the image, it should be considered as a whole; conversely, to obtain more information on separate details it is necessary to concentrate on them (Martinez F., 1990). Digital image processing encompasses the processes that extract attributes from images (Gonzalez R.C., et al., 2006). Ekranoplanes belong to the class of aviation without aerodromes: for takeoff and landing they do not require special runways, only a comparatively small area of water or level land. The type of surface creating the screen ("ekrano") effect is not important for them; they can easily move over frozen water surfaces, snow plains, impassable terrain, etc. (Belavin N.I., 1977). These special features of Ekranoplanes, and the fact that they can fly over sea surfaces, make it possible to use digital image processing to find different flight parameters (speed and height of the plane, direction of wind, etc.) from digital images of the disturbed sea surface. Only the direction of wind is considered here. Takeoff and landing in a stormy sea are the most difficult and dangerous stages of flight of sea planes and Ekranoplanes (Nebylov A.V., et al., 2007). It is necessary to minimize the influence of sea waves on the flying object by optimizing the landing approach according to the direction of propagation of the sea waves (Nebylov A.V., Wilson P.A., 2002).
The main target is to determine the general wave direction.
This could be done on the basis of surface monitoring by radars,
radio altimeters, other kinds of on-board altimeters, etc.
However, the purpose of this work is to propose different and
novel algorithms for optimizing vehicle direction during take off
and landing maneuvers by using results of numerical analysis of
sea wave distribution obtained after processing and analyzing
digital images of the sea surface over which the airplane is flying.
In real life the digital images of sea surface are obtained from a
digital camera which is directed downwards from the vehicle. The
best location of the camera is the nose of the Ekranoplane which
is usually above the water level. Thus, we receive digital images
that give us the exact top view of the disturbed sea surface,
devoid of any errors caused by angle of the camera.
Theoretically, for simplicity of determining the accuracy of the proposed algorithms, it is necessary to simulate a mathematical model of the disturbed sea surface. The sea surface is generally an irregular stochastic field, not a single system of simple cylindrical waves. Usually, Pearson's model is used as the 3-D model of the irregular sea. Here sea roughness is presented as a superposition of sets of elementary two-dimensional cylindrical waves with various amplitudes, frequencies, phases and directions. Thus in the stationary coordinate system 0x0y0z0, where 0x0z0 is the horizontal plane and the axis 0x0 coincides with the wind direction, the ordinate of the wave surface changes according to the law:

y0(t, x0, z0) = Σi Σj aij·cos(Ωi·t − ki·x0·cos χj − ki·z0·sin χj + εij),

where aij is the amplitude, ki the wave number, Ωi the angular frequency, εij the random phase, uniformly distributed in the interval [0, 2π], and χj the direction of the elementary waves relative to the wind direction (axis 0x0).
The basic experimentally determined characteristic of fully developed wind-driven sea waves is the power spectrum Sζ(ω), describing the frequency distribution of the energy of the wave-surface point elevation process, together with the law of angular distribution of wave energy, usually accepted in the form (Nebylov A.V., et al., 2007):

Θ(χ) = (2/π)·cos²(χ).
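The model above can be sketched directly in code (a minimal illustration with invented amplitudes and wave numbers; the actual simulation was done in MATLAB): each component gets a random phase εij in [0, 2π] and an amplitude shaped by the cos² angular distribution, and the Ωi·t term is dropped for the "frozen" surface used later.

```python
import math
import random

random.seed(1)

def make_components(n_dir=3, n_freq=2, wind_dir=0.0):
    """Build n_dir * n_freq elementary cylindrical waves (illustrative
    amplitudes and wave numbers, cos^2-weighted directions)."""
    comps = []
    for i in range(n_freq):
        k = 0.05 * (i + 1)  # wave number, rad/m (made-up values)
        for j in range(n_dir):
            chi = wind_dir + math.pi * (j + 1) / (n_dir + 1) - math.pi / 2
            a = math.sqrt((2 / math.pi) * math.cos(chi - wind_dir) ** 2)
            eps = random.uniform(0.0, 2 * math.pi)
            comps.append((a, k, chi, eps))
    return comps

def elevation(x, z, comps):
    """Frozen surface: y0 = sum a*cos(-k*x*cos(chi) - k*z*sin(chi) + eps)."""
    return sum(a * math.cos(-k * x * math.cos(chi) - k * z * math.sin(chi) + eps)
               for a, k, chi, eps in comps)

comps = make_components()
grid = [[elevation(x, z, comps) for x in range(64)] for z in range(64)]
print(len(comps), len(grid))  # 6 wave components on a 64 x 64 grid
```

With n_dir = 3 direction components and n_freq = 2 frequency components this reproduces the 6-component wave used in the simulation figures below.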
Fig. 2. Input for the components of waves.
Fig. 1. Process of research.
The process of research involves the following steps:
- The first step is to simulate the disturbed sea surface according to the above mathematical model of the sea. Simulation facilitates changing the parameters of the sea waves, such as the amplitude, frequency, phase and direction of the elementary waves.
- As a result of simulation we obtain a digital image, either gray-scale or binary (according to the user's choice).
- This digital image is then analyzed to obtain the general wave direction. Three new and different algorithms have been proposed for this purpose; they form the core of this project, and they and their results are explained further below.
- The next step is to determine the accuracy of these algorithms. They are fine-tuned until the error is less than one degree. The method of testing the error is simple: the input wave direction of the simulation is compared with the computed general wave direction, and the difference, i.e. the error, should be less than one degree.
- After this stage all the algorithms are compared using different input parameters, and their results are tabulated.
- The last step is to draw a conclusion on the most effective and accurate algorithm.
Fig.3. Input for all the parameters
The process of research was carried out using the MATLAB Image Processing Toolbox, one of the most effective software packages for digital image processing, intended for work with structured matrix systems (Dyakonov V., Irina А., 2002).
For research into the influence of sea waves on a vehicle moving quickly over the sea, it is more convenient to use a model of the "frozen" wave surface. Hence the simulated waves have no time component, and it is sufficient to use image processing instead of video processing. The images show the steps of simulation. For ease of simulation, a GUI interface is provided to change all the input parameters.
Fig.4. Result - Gray scale image of wave
Fig.5. Result - 3-D image of wave
A provision has been made to change the number of harmonics, or wave components. It is based on the simple relation

N.W. = N.D. × N.F.,

where N.W. is the number of wave components, N.D. the number of direction components, and N.F. the number of frequency components.
Fig. 7. Simulating a 6-component wave - 2nd step.
Fig.2, Fig.3, Fig.6 and Fig. 7 display the flexibility of the
simulation panel.
Analysis is carried out with two different variations:
1. With the simulated digital images
2. With real digital photographs of various sea surfaces
5.1 Analysis with simulated images
The process of analysis starts after obtaining the images. The first algorithm is named the "transition method": it counts the black-and-white transitions in the image. The second and third algorithms are based on correlation analysis. The transitions are counted, or the correlation analysis is done, along a "scan" line, which is a thin line segment of the image. The general direction of the waves is the direction in which the least number of transitions is obtained, or in which the minimum correlation interval is obtained. For this purpose the image needs to be scanned in different directions. The method includes the following steps:
Fig.6 Simulating a 6-component wave – 1st step
Fig.8. Multiple scan lines in the same direction.
Fig.11. The analysis panel showing the right answer
Fig.9. Multiple scan lines in the same direction.
The number of transitions obtained in each scan is compared, and the direction in which we obtain the fewest transitions is the general wave direction. The wind direction is always perpendicular to the direction of the wave.
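The transition count over scan lines can be sketched as follows (a simplified Python reimplementation on a synthetic striped image; the original runs in the MATLAB Image Processing Toolbox, and the scan-line geometry here is an assumption): the candidate angle whose scan line crosses the fewest black/white boundaries runs along the wave crests.

```python
import math

def transitions_along(img, angle_deg, step=1.0):
    """Count value changes along a line through the image centre."""
    h, w = len(img), len(img[0])
    cx, cy = w / 2.0, h / 2.0
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    count, prev = 0, None
    t = -min(w, h) / 2.0
    while t < min(w, h) / 2.0:
        x, y = int(cx + t * dx), int(cy + t * dy)
        if 0 <= x < w and 0 <= y < h:
            v = img[y][x]
            if prev is not None and v != prev:
                count += 1
            prev = v
        t += step
    return count

def wave_direction(img, step_angle=1.0):
    """Direction (degrees) with the fewest black/white transitions."""
    angles = [i * step_angle for i in range(int(180 / step_angle))]
    return min(angles, key=lambda a: transitions_along(img, a))

# synthetic binary wave: stripes whose crests run at 30 degrees
theta = math.radians(30.0)
img = [[1 if math.sin(0.3 * (x * math.sin(theta) - y * math.cos(theta))) > 0 else 0
        for x in range(128)] for y in range(128)]
print(wave_direction(img))  # close to 30
```

With a 1-degree step the minimum lands within a few degrees of the 30-degree crest direction of this synthetic image; rasterization produces a small plateau of near-minimal counts around the true direction.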
From Fig. 10 and Fig. 11 the complete procedure and its results can be seen. The analysis panel provides a display to compare all the methods against the same simulation parameters. Accordingly, knowing the general input wave direction, a conclusion on the most accurate method for various input conditions can be drawn.
It should be noted that the wave direction thus obtained is always relative to the heading of the plane.
5.2. Analysis with real digital images
After ensuring that the algorithms work for different conditions of simulation, they are finally tested with real digital images. For this, a provision has been made in the "simulate and analysis" GUI to select the necessary images for analysis.
Fig.12. Provision for selecting required images
Here is an example: the image below was used to test the "transition method" algorithm.
Fig. 13. Digital gray scale image of the Indian Ocean
Fig.10. The whole simulation and analysis panel
The algorithm does not work for a step angle of less than 1 degree. If the general wind direction (the input angle) is exactly divisible by the step angle, there are no errors. Hence a step angle of 1 degree is the best choice.
The accuracy does not depend on the format of the image, gray-scale or binary.
It is observed from the results that the code has been optimized in such a way as to be independent of the input parameters; in other words, the algorithms work in "all weather" conditions.
In practice, DSP image processors can be used to execute the algorithms, for example the TMS320C6000 manufactured by Texas Instruments.
Fig.14. Result of the final wind direction
To determine the accuracy of the algorithm, it is necessary to observe the influence of all input parameters: frequency, amplitude, direction, phase, number of scan lines, step angle and the number of wave harmonics involved.
Here a sample of the results obtained by experimenting with the transition method for a single-harmonic wave is given. Observations for a wave with one harmonic:
- A change in amplitude does not affect accuracy; the algorithm was tested with waves of amplitude from 1 to 20 m.
- A change in frequency does not affect accuracy. Since neither frequency nor amplitude affects accuracy, the height of flight does not affect accuracy either. The algorithm was tested for frequencies ranging from 1 to
- The phase can be random; changing the phase does not affect the accuracy.
- The table shows the effect of changing the number of scan lines. It can be concluded that for waves with one harmonic the best number of scan lines is 2.
(Table 1 below lists the scan line at which the answer is obtained.)
An effort has thus been made to determine the general wave and wind direction from digital images. The transition method is very simple and easy to implement, as it involves only simple calculations. The algorithms based on correlation analysis are found to be more accurate where multiple harmonics are involved. The choice of algorithm finally depends on the computation time, the resources available and the accuracy required.
Future work may consist of determining other navigation parameters, such as the velocity and height of the plane, wind velocity, etc., from digital images, with probable additions of further ancillary equipment.
[1] Martinez F. (1990). Synthesis of Images: Principles, Hardware, Software. Moscow, Radio i Svyaz. (In Russian)
[2] Gonzalez R.C., et al. (2006). Digital Image Processing Using MATLAB. India, Pearson Education.
[3] Belavin N.I. (1977). Ekranoplanes. Leningrad, Sudostroenie. (In Russian)
[4] Nebylov A.V., et al. (2007). Sea plane landing control at wave disturbances. In: IFAC
[5] Nebylov A.V. (1994). Measuring Parameters of Flight Close to the Sea Surface. GUAP, Saint Petersburg. (In Russian)
[6] Nebylov A.V., Wilson P.A. (2002). Ekranoplanes: Controlled Flight Close to the Sea. WIT Press.
[7] Dyakonov V., Irina A. (2002). MATLAB Signal and Image Processing. St. Petersburg, Piter. (In Russian)
Table 1. Influence of the number of scan lines.
Sukrit Sharan*
International Institute for Advanced Aerospace Technologies of SUAI,
St. Petersburg, Russia
Abstract: In this paper a scientific and technological analysis is presented of the viability, advantages and highly promising prospects for the realization of an autonomous and intelligent Wing-In-Ground effect vehicle, mainly for coastal patrolling and search & rescue operations. The necessary autopilots and automatic control systems are proposed and described, especially considering the various modes of motion of such vehicles. The special phase radio-altimeter created at IIAAT for such purposes and missions is also described. Some alternative sources of power and energy for round-the-clock sea surveillance by such systems are also suggested for these highly promising, highly advanced next-generation intelligent amphibious vehicles.
Keywords: Autonomous & Intelligent Vehicles, Automatic Control, Integrated Navigation Systems and Data Fusion.
Thousands of years ago humans strived to let civilization sail and navigate, and constructed ships. A century ago, the Wright brothers helped fulfil the great dream of letting humans fly. It is ironic that we now strive hard to take humans out of such vehicles and let man-made systems sail and fly without humans, yet according to human intent, for these are the demands of the future. Autonomy is commonly described as the ability to make decisions without human intervention. Thus, the main aim of autonomy is to teach machines to be "smart" and act more like human beings. One may note that the development of autonomous vehicles is closely associated with developments in the fields of artificial intelligence, expert systems, neural networks, machine learning and natural language processing.
However, a great part of the contribution to the field of autonomy may also be attributed to control sciences, and not just computer science. In the near future the two fields will merge to a much greater degree, and specialists, practitioners and researchers from both disciplines will work together to bolster rapid technological development in the area.
Smart Systems are defined as miniaturized devices that
incorporate functions of sensing, actuation and control. They are
capable of describing and analysing a situation, and taking
decisions based on the available data in a predictive or adaptive
manner, thereby performing smart actions. In most cases the
“smartness” of the system can be attributed to autonomous
operation based on closed loop control, energy efficiency and
networking capabilities.
Because unmanned vehicles are not burdened with the physiological and psychological limitations of human pilots, they can be designed for maximized on-station times. The maximum flight duration of unmanned aerial vehicles varies widely. The endurance of internal-combustion-engine vehicles depends strongly on the percentage of fuel burned as a fraction of total weight, and so is largely independent of vehicle size. Of course, a few types of conventional UAVs (Unmanned Aerial Vehicles) are able to perform very low-altitude flight close to the sea surface, but stabilization and control at extremely low altitude is a very complex task. WIG (Wing-In-Ground effect) or ekranoplane vehicles, by contrast, are specially designed for such low-altitude motion; if appropriately designed, their stability is strongly supported by the dynamic air cushion itself. They have many other important advantages, the most important being the ability to integrate many properties, characteristics and functions of both a marine vessel and an aircraft in a single transport vehicle.
A significant amount of development has already taken place in the field of Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vessels (USVs). These continuous developments have now paved the way for developing unmanned WIGs, which aim to integrate many properties, characteristics and functions of both the UAVs and the USVs. The task is in many cases simpler and more advantageous than for unmanned aerial vehicles.
2.1. Prospective Utilities
The main area of use for such vehicles is coastal and maritime surveillance. Round-the-clock patrolling is a major issue for any coastal region, including anti-pirate, anti-smuggling and anti-illegal-immigration tasks. Generally the navy, air surveillance, AWACS and surface-monitoring satellites work together to monitor such activities in many developed parts of the world. Unmanned WIGs provide the opportunity to contribute to these joint tasks. In many regions, however, such autonomous, intelligent and smart unmanned ekranoplanes may be sufficient and self-dependent to solve such problems, in places where ships, hydrofoil vehicles and helicopters are too slow and aircraft are not able to carry out very low-altitude tasks.
Anti-terrorist and peace-keeping operations: the vehicles may be equipped with ammunition for critical situations, with no human pilot's or sailor's life at risk. Another feature of ekranoplanes is that they have a very low radio and infrared signature and are difficult to detect.
They may be deployed either singly for surveillance of different
regions, or a number of such vehicles may be used for joint and
combined autonomous operations. Special algorithms have to be
developed when a group of such vehicles must work together.
Other applications include reconnaissance, sea-loitering and
air-loitering. The vehicles are less prone to mine threats while
engaged in patrolling, and thus may be used for purposes where
human life cannot be threatened and ships and hydrofoils may not
be risked. Such craft may also launch micro aerial vehicles to
broaden the coverage of search and rescue operations. Ecological
monitoring is another important task; the vehicles may be used
extensively where ships, boats and hydrofoils are too slow and
aircraft find it difficult to carry out extremely low flight, for
example for marine surveys, geological monitoring, and the
servicing and maintenance of oil rigs. Some large variants may
also be used for urgent transportation of goods, though this may
be considered merely a secondary option, as well as for commercial
operation in rough weather conditions where the life of a pilot or
sailor cannot be endangered.
Search and rescue operations are a further application: a few such
vehicles may be deployed in different regions for round-the-clock
SAR missions.
In regions which are more prone to natural calamities such as
tsunami, volcanic eruptions or earthquakes near an island, the
main idea is again to minimize the threat to human life by using
unmanned and uninhabited vehicles, which may also be used to
monitor such regions. An unmanned ekranoplane can go into "the
dull, the dirty and the dangerous" situations and areas; the main
advantage is that, being unmanned, it can execute potentially
dangerous environmental monitoring. Type B or C ekranoplanes are
able to execute their pre-programmed mission tasks not only at
extremely low altitudes, but also at high and, in some cases,
aviation-comparable altitudes. Such vehicles may be used to carry
out extensive R&D activities and to study the complexities of the
ground effect. They will carry neither passengers nor crew, thus
providing a broader and safer opportunity to experiment and do
research even on very large variants.
2.2. Important Advantages of the Autonomous & Intelligent Unmanned Ekranoplanes
Besides the many important advantages of ekranoplanes, such as the
absence of the need for a runway and the possibility to perform
special operations using the amphibian property (they can take off
from and land on the surface of land, water or ice), there are many
other specific advantages which can be fulfilled by unmanned
ekranoplanes. These include:
- Cost of construction, maintenance and exploitation below that of
piloted or manned ekranoplanes;
- Reduced requirements for engine reliability and, therefore, the
possibility of fuller use of engine service life as compared to a
manned WIG;
- Reduction of the quantity and complexity of other types of
patrolling vehicles (patrol boats, ships, hydro-acoustic helicopters
and hydrofoils), as many of their tasks for a particular region may
be solved by such autonomous WIG craft;
- Reduction of the requirement to commission large numbers of
officers for naval surveillance, as many of these tasks may be
solved by the autonomous, 'smart' and intelligent unmanned
ekranoplanes.
3.1 Autonomy technologies that will become important to the
development of Intelligent WIGs
Sensor systems development: combining information from different
sensors for use on board the vehicle.
Communications: handling communication and coordination between
multiple agents in the presence of incomplete and imperfect
information.
Motion planning (also called path planning): determining an
optimal path for the vehicle while meeting objectives and
constraints, such as obstacles.
Trajectory generation: determining an optimal control maneuver to
follow a given path or to go from one location to another.
Task allocation and scheduling: determining the optimal
distribution of tasks amongst a group of agents, with time and
equipment constraints.
Cooperative tactics: formulating an optimal sequence and spatial
distribution of activities between agents in order to maximize the
chance of success in any given mission scenario.
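A standard motion-planning primitive such as grid-based A* search can illustrate the path-planning item above; the occupancy grid, unit step costs and 4-neighbourhood below are illustrative assumptions, not part of any fielded WIG autopilot.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 2-D occupancy grid (0 = free, 1 = obstacle);
    returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(c):                        # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = count()                    # tie-breaker so the heap never compares parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:        # already expanded at equal or lower cost
            continue
        came_from[cell] = parent
        if cell == goal:             # reconstruct the path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cell))
    return None

# a toy 3x3 sea area with an obstacle wall across the middle row
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

A real planner would use motion-dependent costs and obstacle inflation, but the search skeleton is the same.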
3.2 Usage As a Launch Platform for Micro Aerial Vehicles
in the Coastal Region
The term micro air vehicle (MAV) refers to a new breed of
remotely controlled aircraft (UAV) significantly smaller than
similar craft obtainable with the current state of the art. The
target dimension for MAVs today is approximately six inches (15
centimeters), and the development of insect-size aircraft is
reportedly expected in the near future. Potential coastal
surveillance, patrolling and search and rescue operations may be
the driving factors to broaden and deepen the missions.
Three types of MAVs are under investigation: the airplane-like
fixed-wing model, the bird- or insect-like ornithopter
(flapping-wing) model, and the helicopter-like rotating-wing model.
3.3 Embedded Software for Autonomous, Intellectual WIG
Exploitation is also cheaper compared to manned ekranoplanes, and
the thrust needed for propulsion is less than for manned
ekranoplanes of comparable size, as there are no life-support
systems and no wages to pay to pilots.
In the case of an emergency, a pilotless ekranoplane may simply
alight on the sea surface, which increases its survivability. It
may be equipped with special systems for transmitting distress
signals using GLONASS or GPS SNS over the next few days, so that
the unmanned WIG vehicle may later be recovered, repaired and
re-used.
The first requirement for physical autonomy is the ability of an
unmanned vehicle to take care of itself without posing a threat to
its environment while still trying to maximize its mission success
rate. According to the study carried out in IIAAT, the following
tasks may form an important part of what is implemented by the
embedded software of an unmanned ekranoplane:
Embedded Power PC real time solution;
Advanced System: Configurable to Wing-In-Ground Effect
mode and also during Free Flight mode;
Waypoint Navigation (GLONASS, GPS, D-GPS, INS);
Special Fail Safe Navigation Mode;
Integrated Systems;
Dynamic real time gains, limits etc. adjustments;
Space Time Adaptive Processing;
Dual extended Kalman Filtering for precision navigation;
Compressed digital image transferring;
Autonomous Launch or Autonomous Take-Off from land, water
or ice surface, Autonomous Flight in different modes as well as
floating on the sea surface, Autonomous Landing;
Return Home Mode for communication loss, GLONASS/GPS
signal loss, etc.;
Communication Relay Mode;
Customisable software filters for noise reduction in sensors;
Auto target coordinate detection through INS support;
Fault-tolerant embedded software;
Special Programs for supporting the joint operations of many
such unmanned Ekranoplanes working together.
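The Fail Safe Navigation and Return Home behaviours listed above can be sketched as a small mode-selection routine; the mode names, thresholds and health flags below are hypothetical placeholders for illustration, not the actual embedded logic.

```python
from enum import Enum

class Mode(Enum):
    MISSION = 1        # follow the pre-programmed waypoints
    RETURN_HOME = 2    # autonomous return on prolonged signal loss
    LOITER = 3         # hold position while a fault is assessed

def select_mode(link_ok, gnss_ok, fault, loss_time_s, loiter_limit_s=60.0):
    """Pick the flight mode from health flags (illustrative logic only).

    link_ok / gnss_ok: communication and GLONASS/GPS health flags;
    fault: a detected onboard fault;
    loss_time_s: seconds since a signal was lost (0 when healthy)."""
    if not link_ok or not gnss_ok:
        # brief dropouts are tolerated by loitering; long ones trigger
        # the Return Home mode mentioned in the feature list above
        return Mode.LOITER if loss_time_s < loiter_limit_s else Mode.RETURN_HOME
    if fault:
        return Mode.LOITER
    return Mode.MISSION

# two minutes without a communication link forces a return home
mode = select_mode(link_ok=False, gnss_ok=True, fault=False, loss_time_s=120.0)
```

A production supervisor would add hysteresis and more states (launch, alighting, floating), but the priority ordering of the checks carries the idea.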
3.4 Computational Intelligence:
Neural Networks;
Fuzzy Systems;
Evolutionary Computation;
Hybrid Intelligent Systems.
In order to fulfil its operations and missions successfully, the
following points must be given proper attention, and the
corresponding systems, algorithms and programs must be carefully
developed taking into consideration the necessities of the
vehicle's various modes of motion:
Robust and Optimal Control Methods & Algorithms;
Special Automatic Control Systems and Flight Parameters
Monitoring System;
Artificial Intelligence;
Pre-Programmed Flight Control;
Usage of GLONASS and GPS SNS for Navigation;
Limited Ground Station Control for special modes and Asynchronous
Learning Environments;
Network-Centric and Networking Operational Effectiveness;
Robust acquisition of re-locatable targets using MMW sensors;
Smart Antennas;
Antenna Arrays;
Beam Forming.
The WIG effect is an interesting physical phenomenon with
multilateral characteristics, having both positive and negative
influence on flight at extremely low altitudes. For motion of a WIG
craft near the surface it is necessary to take into account a
series of specific physical characteristics related to the
influence of the WIG effect on aerodynamic forces and moments. It
is well known that for an essential action of the WIG effect, the
altitude of ekranoplane flight has to be less than half of the wing
chord. The size of an ekranoplane should allow maintaining an
optimal altitude in cruise mode without being too limited by the
height of sea waves. Unfortunately, an ekranoplane has an essential
instability of motion in the longitudinal plane, and a perfect
automatic control system is necessary first of all for providing
flight stability. For heavy machines, automatic control systems are
definitely required. For smaller ekranoplanes, many attempts to
exclude any automation of motion control are known, but only the
grim necessity to lower the cost of commercial vehicles causes such
attempts, which certainly degrade the safety of motion. Trouble-free
motion close to the disturbed sea surface may be guaranteed by the
application of special methods and means of navigation and control,
which have the capability to solve the following specific tasks:
- precise control of the altitude of motion with an error not above
3-10 cm;
- restricting the angles of airframe inclination to prevent
undesirable contact of the extreme points of the body with the water;
- ensuring vehicle stability under the action of the complex
non-linear aerodynamic effects near the surface;
- non-contact measurement, tracking and prediction of the ordinates
and biases of the sea-wave field for raising the effectiveness of
motion control.
Certainly, automated control for a WIG craft is a necessity to
prevent accidents, and the automation of motion control has to be
available for all ekranoplanes.
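An altitude-hold loop of the kind required here (errors of a few centimetres) can be sketched with a simple PID law acting on flap deflection; the gains, actuator limit and toy first-order plant below are invented for illustration and are far simpler than a real ekranoplane control law.

```python
class AltitudeHold:
    """PID altitude controller sketch (gains are illustrative only)."""
    def __init__(self, kp=2.0, ki=0.5, kd=0.2, flap_limit=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.flap_limit = flap_limit     # rad, actuator saturation
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, h_ref, h_meas, dt):
        error = h_ref - h_meas           # altitude error, metres
        self.integral += error * dt
        deriv = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        # clamp the command to the flap deflection limit
        return max(-self.flap_limit, min(self.flap_limit, u))

# drive a crude first-order altitude response toward a 0.8 m reference
ctl, h, dt = AltitudeHold(), 0.5, 0.02
for _ in range(500):                     # 10 s of simulated flight
    h += dt * (2.0 * ctl.step(0.8, h, dt))   # toy plant: h_dot = 2 * flap
final_error = abs(0.8 - h)
```

In a real system the error signal would come from the filtered radio-altimeter estimate rather than a raw measurement, and the loop would be coordinated with the pitch and speed channels discussed below.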
The primary source for power and energy definitely has to be
the engines installed on the vehicle. Apart from them another
important alternate source of power may be solar energy. Solar
Electric Cells may be installed on the vehicle thus helping in
solving long-term energy problems and also increasing the range
and endurance. When multiple unmanned WIGs are together
being used for joint operations, a new method to execute in-flight
re-fuelling from one autonomous vehicle to another may be an
interesting topic for research in the future. Some contemporary
research activities in the world have also proposed the idea of
nuclear energy as a power source, but for such autonomous vehicles
the use of nuclear energy is very risky and hazardous. In principle
it is simpler to construct unmanned vehicles running on nuclear
fuel than manned ones, yet even the current level of cutting-edge
technology does not ensure safe methods for such an implementation.
It is possible to execute altitude control by changing the wing
lift force through:
a) trailing-edge flap deflection;
b) elevator deflection (thus varying pitch);
c) change of flight speed by means of engine thrust.
As the drag, and therefore the flight speed, changes with
pitch-angle variation, version b) demands the presence of a
velocity stabilization system. Thus all channels of the control
complex substantially participate in maintaining the demanded
motion of the ekranoplane in the longitudinal plane. The synthesis
of control laws can be fulfilled under several criteria, but their
general structure appears almost similar in the majority of cases.
The estimations of the vehicle stabilization errors, linear and
angular rates, and also wave disturbances, accurately filtered,
have to be used in the formation of control signals. Now the main
problem of accurate measurement instruments for WIG craft has been
solved with the development in IIAAT of the Phase Radio Altimeters
(RA), specially designed for low flight altitudes (Figure 1).
The flight parameters measuring system was designed for an
experimental WIG craft. It is intended for controlling and
recording the following flight parameters:
- altitude of flight up to 5 m with an accuracy of 5 cm;
- speed up to 180 m/s with an accuracy of 0.1-0.2 m/s;
- roll and pitch angles with an accuracy of 0.1-0.2 deg;
- vertical overloads up to 3g with an accuracy of 0.06g;
- significant sea-wave height up to 1.5 m with an accuracy of 5 cm.
The primary sensors were three special radio altimeters, a vertical
reference system, a multi-antenna DGPS receiver, etc. They were
integrated with the aim of improving accuracy and providing
fault-tolerance properties.
The characteristics of the RA are:
- measured altitude (distance): 0-10 m;
- measurement error: not more than 5 cm at sea state number 0-5;
- measured parameter frequency range: 0-50 Hz;
- operating RF: X-band (9000 MHz);
- radiated power: 20 mW;
- power consumption: 2 W;
- output signal: digital and analog;
- mass: 1.2 kg.

Fig. 1. Precise Phase Radio Altimeter (RA).

The complex filtration of the RA and vertical accelerometer
measurements in the altitude channel is used for the measurement of
altitude, pitch and roll. With the use of the three described radio
altimeters, the integrated system for measuring the parameters of
motion close to the sea surface can be built; a compact INS is also
included in the system. This INS involves three angular-rate
sensors, three linear accelerometers, a calculator and a
temperature transmitter for compensation of the temperature drift
of the angular-rate sensors and accelerometers.

The structure of the integration algorithm for the altimeter and
vertical accelerometer output signals involves:
- the unit for recalculation of the altimeter outputs to the point
of Inertial Unit (IU) installation (Unit 1);
- the unit for recalculation of the altitude estimations to the
centre of gravity (CG) and to the points of altimeter installation
(Unit 2);
- the altitude filter (Filter 1);
- the vertical acceleration filter (Filter 2).

The recalculation of the altimeter outputs to the CG is executed
by the formula

h_CG,k = h_k − x_k·ψ + z_k·θ − y_k,

where the index k = n, l, r (n: nose altimeter, l: left-side
altimeter, r: right-side altimeter). For recalculation of the
altitudes from the CG to the point of INS installation, the relation

ĥ_IU = med(h_CG,k + x_INS·ψ − z_INS·θ + y_INS)

is used, where med(·) is the operation of median selection. The
formula for recalculation of the filtered altitude value from the
point of IU installation to the CG (Unit 2) looks like

ĥ_CG = ĥ_IU − x_INS·ψ + z_INS·θ − y_INS.

Separately, the problem of automatic estimation of the general
direction of sea-wave propagation was solved with the use of the
three radio altimeter outputs; this is important for the
optimization of the landing approach and splashdown modes.

The altitude and vertical acceleration filters have transfer
functions of the form

H1(s) = (τ²k₃s² + 2τk₃s + k₃) / (s³ + τ²k₃s² + 2τk₃s + k₃),

where k₃ = 0.035 s⁻³ and τ = 4.035 s. In discrete time the
structure of the filters is described by the transfer functions

H1(z) = (b2¹z² + b1¹z + b0¹) / (z³ + a2¹z² + a1¹z + a0¹),
H2(z) = (b2²z² + b1²z + b0²) / (z³ + a2²z² + a1²z + a0²).
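A minimal sketch of how a third-order discrete filter of this form can be implemented and applied to the altitude channel; the demonstration coefficients (three poles at z = 0.5, unity DC gain) are invented for illustration and are not the actual Filter 1 coefficients.

```python
from collections import deque

class IIR3:
    """Third-order IIR section implementing
    H(z) = (b2 z^2 + b1 z + b0) / (z^3 + a2 z^2 + a1 z + a0),
    i.e. the difference equation
    y[n] = -a2 y[n-1] - a1 y[n-2] - a0 y[n-3]
           + b2 x[n-1] + b1 x[n-2] + b0 x[n-3]."""
    def __init__(self, b, a):
        self.b, self.a = b, a
        self.x = deque([0.0] * 3, maxlen=3)   # x[n-1], x[n-2], x[n-3]
        self.y = deque([0.0] * 3, maxlen=3)   # y[n-1], y[n-2], y[n-3]

    def step(self, xn):
        b2, b1, b0 = self.b
        a2, a1, a0 = self.a
        yn = (-a2 * self.y[0] - a1 * self.y[1] - a0 * self.y[2]
              + b2 * self.x[0] + b1 * self.x[1] + b0 * self.x[2])
        self.x.appendleft(xn)
        self.y.appendleft(yn)
        return yn

# low-pass the altitude channel with an illustrative unity-DC-gain
# filter: denominator (z - 0.5)^3 = z^3 - 1.5 z^2 + 0.75 z - 0.125
filt1 = IIR3(b=(0.125, 0.0, 0.0), a=(-1.5, 0.75, -0.125))
for _ in range(200):
    h_filtered = filt1.step(1.0)   # constant 1 m altitude input
```

With unity DC gain the filtered output settles to the steady input, which is the sanity check one would also apply to the real Filter 1 and Filter 2 coefficients.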
It remains an important open question whether the classification of
such vehicles will fall under the authority of the IMO or of the
ICAO. Type A WIGs are completely under the guidelines and authority
of the IMO, Type B are partially under the IMO and partially under
the ICAO, and Type C WIGs are mainly under the authority of the
ICAO. However, special guidelines will have to be laid down for the
probable realization of such autonomous vehicles in the future.
The applications of such autonomous, intelligent and unmanned
vehicles may be broadened to more extensive fields upon the
successful and reliable development of the maritime surveillance
vehicle. Future autonomous and unmanned ekranoplanes may include
very large cargo transport vehicles, up to the size of ships or
even larger, using the ground-effect principle and the dynamic air
cushion and their advantages, but moving at a speed ten times
faster than that of a cargo vessel. Compared with UAVs of the same
size, unmanned ekranoplanes are generally cheaper to construct,
maintain and operate, so commercialization of such vehicles may be
an interesting idea in the future for multiple utilities.
Another future development may be in the field of modifiable
aero-hydrodynamic structures and the possibility of making a
temporary dive under the sea surface (for example during a storm or
other critical environmental situations) and then coming out and
continuing in the conventional mode of motion.
Fig. 2. Block-diagram of the integrated measuring system
(Units 1-4 and Filters 1-4).
The measuring system allows tracking the profiles of the sea waves
ξn, ξl, ξr at three points, corresponding to the points of radio
altimeter installation at the nose and both sides of the vehicle,
with an accuracy of 10 cm at sea state number 4.
As noted above, developments in the field of Unmanned Aerial
Vehicles (UAVs) and Unmanned Surface Vessels (USVs) have paved the
path for Unmanned WIGs, which aim to integrate many properties,
characteristics and functions of both. Autonomous, intelligent and
unmanned WIG-effect vehicles provide a very important and useful
method to solve the problems of maritime surveillance, coastal
patrolling and extensive search and rescue operations.
It remains a significant question whether future developments
of autonomy technology, the perception of this technology and
most importantly the political environment surrounding the
application of such technology will limit the development and
utility of autonomy for the applications in UAVs, USVs or
Unmanned Amphibious Wing-In-Ground Effect Vehicles.
Compared to the manufacturing of Unmanned Vehicle
hardware for both UAVs and USVs, the market for autonomy
technology is fairly immature and undeveloped. Because of this,
autonomy and intelligent systems have been, and may continue to be,
important bases for future unmanned vehicle developments, and the
overall value and rate of expansion of the future unmanned vehicle
market could be largely driven by advances made in the field of
autonomy and intelligence.
Future research will concentrate on making such unmanned
ekranoplanes more intelligent by providing appropriate Artificial
Intelligence: conventional AI and computational AI.
Benzerrouk Hamza
IIAAT trainee from Algeria
International Institute for Advanced Aerospace Technologies of Saint-Petersburg State University
of Aerospace Instrumentation, Fax: +7 812 494 7018
Abstract: New intelligent algorithms for navigation on the surface of the planet Mars, especially for an astronaut, are
proposed in this paper. The autonomous navigation system for the astronaut is based on an inertial measurement unit and
a laser telemeter, which permit him to localize and navigate in known and unknown environments, constructing a map in
real time using robust adaptive SLAM algorithms. The estimation of the astronaut's position is done using nonlinear
adaptive filters such as the extended Kalman filter.
Keywords: Mars planet, navigation, SLAM, Kalman filter.
New intelligent algorithms for navigation on the surface of the
planet Mars, especially for an astronaut, are offered in this
paper. Though Mars is our second nearest neighbouring
planet, travelling there is both expensive and dangerous.
There have been many successful missions to the red planet,
but many more have failed in launch, in transit, and upon
arrival. To solve these problems, thinking about permanent
infrastructure for the exploration of the planet has been
started. Possibilities for this infrastructure include the
deployment of Mars orbiting communication relay satellites
and permanent unmanned ground stations. In 1999, JPL
proposed the deployment of a navigation and communication
satellite constellation for Mars to be called the Mars Network
(Hastrup et al., 1999). Certainly, let us expect that in the future
such a constellation of satellites will be used; but at this time
it is advisable to consider an autonomous navigation system for the
astronaut based on a redundant inertial measurement unit and a
redundant laser telemeter, which permit the astronaut to localise
and navigate in known and unknown environments, constructing a map
in real time using robust adaptive SLAM algorithms (H. Jacky Chang,
C. S. George Lee, Yung-Hsiang Lu, and Y. Charlie Hu, 2004). The
estimation of the astronaut's position is done using nonlinear
adaptive filters such as the extended Kalman filter, the
sigma-point Kalman filters, and the divided-difference Kalman
filters (Van der Merwe, R., E. Wan, and S. J. Julier, 2004).
The locations of past, present and future lander missions
on the surface of Mars are shown in Fig.1.
The contour lines indicate zero elevation, with the northern
hemisphere being generally below zero elevation and the
southern hemisphere above. Viking 1 and 2 are labelled V1
and V2, the Mars Exploration Rovers are labelled MER1 and
MER2, and the four Netlanders are labelled NL1-NL4. The
target landing sites of two failed lander missions are also
indicated: Beagle 2 (2004) and Polar Lander (1999). Polar
Lander will be used in a simulation scenario in Chapter 6.
Mars geodesy is concerned primarily with two tasks: the
establishment of a spatial reference system for the planet
and the determination of the Mars gravity field (F. Forget,
F. Costard, Ph. Lognonné, 2008).
Fig.2. Labeled topographic map of Mars generated by the
Mars Orbiting Laser Altimeter (MOLA) (from MOLA Science
Team, 2004)
To better appreciate the difficulties and the differences between
the Earth and Mars, the following table gives the main properties
of the two planets.
Fig.1. Map of Mars showing the locations of past, present,
and future lander missions
Table 1. Physical properties of Mars and Earth, from (Lodders &
Fegley Jr., 1998), (King, 2001) and (Duxbury et al., 2002)

Property                        | Mars value     | Earth value
Mean Radius                     | 3389500 m      | 6371010 m
Equatorial Radius               | 3396190 m      | 6378136 m
Polar Radius                    | 3376200 m      | 6356753 m
Mean Density                    | 3933.5 kg m⁻³  | 5515 kg m⁻³
Surface Gravity                 | 3.69 m s⁻²     | 9.78 m s⁻²
Tropical Orbit Period           | 686.973 days   | 365.242 days
Mean Orbital Radius             | 1.52 AU        | 1.0 AU
Inclination of Equator to Orbit |                |
Length of Day                   | 24.6597 hours  | 24.0000 hours
Surface Pressure                | 6.363 hPa      | 1013 hPa
Mean Surface Temperature        | 214 K          | 288 K

Fig.5. The flank of Endurance crater, surveyed by Opportunity,
June - December 2004 (NASA)
As for the inertial measurement unit, it will be very interesting
to test the different models of the inertial equations using a
model of the gravity of Mars, and to establish, as for the Earth
model, the phenomena affecting the inertial sensors (accelerometers
and gyroscopes or gyrometers), as well as the altitude variation
with pressure and temperature. In an area such as in Fig.6,
navigation using natural beacons is possible, using the laser
telemeter to correct the errors of the astronaut's inertial
measurement unit.
In an environment such as shown in Fig.3, the astronaut could not
use the telemeter, only the inertial measurement unit. In this
situation, let us expect the use of satellite navigation, as in the
following picture, and integrate the two systems as on the Earth
(Sukkarieh, S., 1999).
The number of satellites will be less than in the GPS constellation
around the Earth (K. O'Keefe, 2004), so let us consider using, in
addition to the artificial satellites, the two small natural
satellites of Mars (Phobos and Deimos).
Fig.6. Area at the mouth of Ares Vallis
Fig.3. An expanse of sand, traversed by Opportunity
(September 2005)
Fig.4. Navigation using Mars Network
A specific navigation system is proposed for the astronaut in
ultimate conditions, when it is assumed that he has no radio
navigation system and only the last known point of communication
between him and the reference station. The proposed solution
consists of an integrated system which uses a Redundant Inertial
Measurement Unit (RIMU) with three (03) baro-altimeters and three
(03) laser telemeters to avoid objects (static or dynamic), and
which can be used for real-time map building of the unknown
environment. Concerning altimetry map correction, it is assumed
that a map of the planet to be explored exists and is available, as
for Mars exploration. To use the RADAR or laser as well as
possible, let us define the area where the astronaut is located
according to Fig.3, Fig.5 and Fig.6. Three other altimeters must be
added to have another observation of the altitude of the terrain
explored by the astronaut. Thus the map-matching principle can be
applied to correct the inertial navigation system using a nonlinear
adaptive filter, and certainly with a non-Gaussian noise model.
A redundant inertial system is proposed to augment reliability and
to permit the detection of a possible failure of one of the three
(03) inertial measurement units using an appropriate integration
algorithm (Savage, P.G., 1998).
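The detection of a failed unit among the three redundant IMUs can be sketched with simple median voting; the threshold and the scalar sensor interface below are illustrative assumptions rather than the integration algorithm cited above.

```python
def vote_triplex(readings, threshold):
    """Median-vote three redundant sensor readings.

    Returns (fused_value, failed_index), where failed_index is the
    index of a reading deviating from the median by more than
    `threshold`, or None if all three agree (illustrative logic;
    a single-fault assumption is made)."""
    med = sorted(readings)[1]                    # median of three
    failed = None
    for i, r in enumerate(readings):
        if abs(r - med) > threshold:
            failed = i
    healthy = [r for i, r in enumerate(readings) if i != failed]
    return sum(healthy) / len(healthy), failed

# unit 2 has drifted far from the other two accelerometer outputs
fused, failed = vote_triplex([9.81, 9.80, 12.4], threshold=0.5)
```

In practice the test would run per axis and per sensor type, with the threshold derived from the sensors' noise specifications.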
The base station tracks the astronaut and transmits the measured
position to the "on-spacesuit" navigation system, which uses it as
an observation for the integration adaptive filter. After 10-15 km
of distance, the navigation mode changes and uses natural beacons
(rocks) of acceptable volume as observations, and the SLAM concept
is applied to correct and estimate the position of the astronaut
(K. Wee Lee, W. Sardha Wijesoma, Javier I. Guzman, 2006).
5.1 Description
The use of three other laser telemeters to calculate distances
permits real-time map building and the implementation of SLAM
(Simultaneous Localization and Mapping) (H. Durrant-Whyte, 2006).
The noises affecting the lasers can be multiplicative or additive
impulsive noises, so the classic nonlinear estimation algorithms
must be modified and made robust to this noise environment.
5.2 Nonlinear filtering
Kalman filters are widely used in state estimation problems.
However, the use of Kalman filters requires prior knowledge about
the system under consideration, and if this is not appropriately
known, the estimation process will not be optimal and may even
diverge. In real-world applications these assumptions are not
always accurate, since there are applications for which the exact
system model is hard to obtain and only approximations to the real
model are available. Many works have addressed this problem
(C. Seong Yun; W. Sik Choi, 2006), and it is proposed to extend
these solutions to the nonlinear filter class: the extended Kalman
filter, the SPKF and the DDF (Simandl, M. and J. Dunik, 2006).
Fig.8. Sensor positions on the spacesuit.

At first, it must be defined how to calculate the mean of the
positions, velocities and attitudes of the three (03) IMUs, and how
to integrate the RIMU, altimeter and laser telemeter outputs
through robust nonlinear filters. The general diagram of the system
and its operation is shown in Fig.7.

The general nonlinear state model in continuous time is

Ẋ(t) = f(X(t), U(t), W(t)),
Z(t) = h(X(t), V(t)).

In the case of additive noises, the model is

Ẋ(t) = f(X(t), U(t)) + W(t),
Z(t) = h(X(t)) + V(t).

The state model in the case of the IMU/RADAR mode is given by the
following equations.

1. The first model:

N(k+1) = N(k) + T·V(k)·cos(Ψ(k)) + w_N(k),
E(k+1) = E(k) + T·V(k)·sin(Ψ(k)) + w_E(k),
Ψ(k+1) = Ψ(k) + ω(k)·T + w_Ψ(k),
where:
T - integration time step;
N - North distance;
E - East distance;
Ψ - azimuth of the astronaut;
ω - angular rate.
Fig.9. Radar tracking of the astronaut and real-time correction of
the inertial measurement system.

Really, different models can be used depending on the area where
the astronaut is located. In our work, let us consider a solution
without a space navigation system, based only on the inertial
system and a base station equipped with a RADAR (LIDAR) to track
the position of the astronaut. The observation equation is

h(x, v) = √((x(t) − x_bs(t))² + (y(t) − y_bs(t))²) + v(t),

where (x_bs, y_bs) is the base station position. In discrete time
we obtain

h(x_k, v_k) = √((x(k) − x_bs(k))² + (y(k) − y_bs(k))²) + v(k).
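A minimal sketch of an extended Kalman filter combining the first model with the range observation above; the noise covariances, initial errors and simulated trajectory are invented for illustration and are not the tuning used in the experiments.

```python
import numpy as np

T = 1.0                                   # integration time step, s
x_bs, y_bs = 0.0, 0.0                     # assumed base-station position

def f(x, V, omega):
    """State transition of the first model: state is (N, E, Psi)."""
    N, E, Psi = x
    return np.array([N + T * V * np.cos(Psi),
                     E + T * V * np.sin(Psi),
                     Psi + omega * T])

def h(x):
    """Range from the base station to the astronaut."""
    return np.hypot(x[0] - x_bs, x[1] - y_bs)

def ekf_step(x, P, z, V, omega, Q, R):
    # --- predict: linearize f around the prior estimate ---
    N, E, Psi = x
    F = np.array([[1.0, 0.0, -T * V * np.sin(Psi)],
                  [0.0, 1.0,  T * V * np.cos(Psi)],
                  [0.0, 0.0, 1.0]])
    x = f(x, V, omega)
    P = F @ P @ F.T + Q
    # --- update with the scalar range measurement ---
    r = h(x)
    H = np.array([[(x[0] - x_bs) / r, (x[1] - y_bs) / r, 0.0]])
    S = H @ P @ H.T + R                   # innovation covariance, 1x1
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * (z - r)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x_true = np.array([100.0, 50.0, 0.3])     # invented true initial state
x_est, P = x_true + np.array([5.0, -5.0, 0.1]), np.eye(3) * 25.0
Q, R = np.eye(3) * 0.01, np.array([[1.0]])
for _ in range(50):
    x_true = f(x_true, 1.0, 0.01)         # walking at 1 m/s, slow turn
    z = h(x_true) + rng.normal(0.0, 1.0)  # noisy range from the RADAR
    x_est, P = ekf_step(x_est, P, z, 1.0, 0.01, Q, R)
pos_error = np.hypot(x_est[0] - x_true[0], x_est[1] - x_true[1])
```

With a single range measurement the cross-range component is only weakly observable, which is exactly why the paper adds bearing observations and beacon-based SLAM in the later models.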
The following figures are preliminary results using
Extended Kalman filter.
unbiased 2D North-East trajectory
Biased 2D North-Easttrajectory
RADAR trajectory
EKF trajectory
AEKF trajectory
East position in meter
Let as observe that in the case of RADAR correction in a
near area ,about 10-15Km, estimation of the positions
(North,East) and Azimuth is convenient and acceptable. But
let we see in the case where the RADAR can’t detect the
astronaut motion . there a second model of observation is
used to correct the inertial drift.
Two different cases arise. In the first, the astronaut is in a known environment and uses artificial beacons to measure distances and observe his position. In the second, the astronaut evolves in an unknown environment and applies the SLAM concept to localise himself, using a laser measurement model to detect and range the natural beacons on the map. The corresponding models are given below.
Fig.10. North-East trajectory of the astronaut (unbiased, biased, RADAR, EKF, and AEKF 2D trajectories).

The second model - 1st case:

System equations:
N(k+1) = N(k) + T·V(k)·cos(Ψ(k)) + wN(k)
E(k+1) = E(k) + T·V(k)·sin(Ψ(k)) + wE(k)
Ψ(k+1) = Ψ(k) + ω(k)·T + wΨ(k)
Observation equations:
h(xk, vk) = √[(x(k) − xart(k))² + (y(k) − yart(k))²] + v1(k),
Φ(xk, v'k) = tan⁻¹[(y(k) − yart(k)) / (x(k) − xart(k))] + v'1(k),

where (xart, yart) is the position of the artificial beacons.
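The range/bearing pair can be sketched as follows (atan2 is used instead of the plain arctangent so the quadrant is resolved; the function name is illustrative):

```python
import numpy as np

def range_bearing(x, beacon):
    """Noise-free range/bearing observation of one beacon:
      h   = sqrt((x - x_b)^2 + (y - y_b)^2)
      Phi = atan2(y - y_b, x - x_b)

    x = [x, y, psi] is the astronaut state; beacon = (x_b, y_b).
    In the filter, the measurement noises v(k) and v'(k) would be
    added to these two values.
    """
    dx, dy = x[0] - beacon[0], x[1] - beacon[1]
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```

For a state one metre north and one metre east of a beacon at the origin this returns a range of √2 and a bearing of π/4.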
Fig.11. North position versus time.
This can be understood better from the following results:
The second model - 2nd case:

System equations:
N(k+1) = N(k) + T·V(k)·cos(Ψ(k)) + wN(k)
E(k+1) = E(k) + T·V(k)·sin(Ψ(k)) + wE(k)
Ψ(k+1) = Ψ(k) + ω(k)·T + wΨ(k)
Observation equations:
h(xk, vk) = √[(x(k) − xnat(k))² + (y(k) − ynat(k))²] + v2(k),
Φ(xk, v'k) = tan⁻¹[(y(k) − ynat(k)) / (x(k) − xnat(k))] + v'2(k),

Fig.12. East position versus time.
where (xnat, ynat) is the position of the natural beacons.
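In the natural-beacon case the SLAM front end must decide which mapped beacon a laser return belongs to. A common minimal choice for this step (not detailed in the paper; the gate radius is an assumed tuning value) is gated nearest-neighbour association:

```python
import numpy as np

def associate(measured_xy, landmarks, gate=2.0):
    """Nearest-neighbour data association for natural beacons.

    measured_xy : (x, y) position implied by one laser return.
    landmarks   : list of (x, y) estimates of beacons already mapped.
    gate        : maximum accepted distance [m] (assumed value).
    Returns the index of the matched landmark, or None when the return
    should start a new landmark in the map.
    """
    if len(landmarks) == 0:
        return None
    d = np.linalg.norm(np.asarray(landmarks) - measured_xy, axis=1)
    i = int(np.argmin(d))
    return i if d[i] < gate else None
```

Returns matched inside the gate update the corresponding landmark and the astronaut state through the filter; unmatched returns are added to the map as new natural beacons.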
These results are illustrated below:

Fig.13. Azimuth versus time (unbiased, biased, EKF, and AEKF).
Fig.14. North-East trajectory estimation using natural beacons (unbiased and biased trajectories, laser positioning, EKF trajectory, and landmark positions).

Fig.15. North position versus time (unbiased, biased, laser, and EKF).

Fig.17. East position versus time (unbiased, biased, laser, and EKF).

In Figs.14-17 one can observe that estimating the astronaut's trajectory using the laser and natural beacons is a useful and convenient method for navigation on the surface of Mars.

A possible conclusion is that three different ways are proposed for astronaut navigation. The first is to use the base station (landing mark) for detection and ranging with RADAR or LIDAR: the inertial measurement unit in the astronaut's spacesuit estimates the position, velocity, and azimuth of the astronaut at each time step T and receives a correction from the base station every interval T' < T, so that, using nonlinear filtering techniques such as the EKF and SPKF, accurate navigation and localisation are achieved, as shown in the figures above. The second way consists of using artificial beacons (reflectors) as reference stations and measuring the distance and azimuth between the astronaut and the beacons (laser + gyroscope); from these observations the nonlinear filter estimates the astronaut's trajectory and provides accurate positioning and navigation data. The third way is the most interesting: it uses natural beacons (rocks), without any prior knowledge of the external environment, measuring their distance and azimuth with the laser and gyroscopes; here too, the preliminary results confirm that this approach deserves further investigation.
References

[1] Cho Seong Yun and Wan Sik Choi (2006). Robust positioning technique in low-cost DR/GPS for land navigation. IEEE Transactions on Instrumentation and Measurement, 55(4), pp. 1132-1142.
[2] F. Forget, F. Costard, and Ph. Lognonné (2008). Planet Mars (original French edition 'La Planète Mars'). Springer.
[3] H. Jacky Chang, C. S. George Lee, Yung-Hsiang Lu, and Y. Charlie Hu (2004). A Computationally Efficient SLAM Algorithm Based on Logarithmic-Map Partitioning. Proceedings of the 2004 International Conference on Intelligent Robots and Systems. School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-2035.
[4] Hugh Durrant-Whyte and Tim Bailey (2006). Simultaneous Localisation and Mapping (SLAM): Part I. The Essential Algorithms.
[5] Kwang Wee Lee, W. Sardha Wijesoma, and Javier Ibañez-Guzmán (2006). On the Observability and Observability Analysis of SLAM. Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 9-15, Beijing, China.
[6] Kyle O'Keefe (2004). Simulation and Evaluation of the Performance of the Proposed Mars Network Constellation for Positioning, Orbit Improvement, and Establishment of a Spatial Reference Frame for Mars. PhD thesis, UCGE Reports Number 20191, University of Calgary.
[7] Savage, P. G. (1998). Strapdown Inertial Navigation Integration Algorithm Design Part 1: Attitude Algorithms. Journal of Guidance, Control, and Dynamics, 21(1):19-28.
[8] Simandl, M. and J. Duník (2006). Design of derivative-free smoothers and predictors. In: Preprints of the 14th IFAC Symposium on System Identification, Newcastle.
[9] Sukkarieh, S. (1999). Aided Inertial Navigation Systems for Autonomous Land Vehicles. PhD thesis, Australian Centre for Field Robotics, The University of Sydney.
[10] Van der Merwe, R., E. Wan, and S. J. Julier (2004). Sigma-Point Kalman Filters for Nonlinear Estimation and Sensor Fusion: Applications to Integrated Navigation. In Proceedings of the AIAA Guidance, Navigation & Control Conference, Providence, RI.