Robust Tube-based Predictive Control for Visual Servoing of
Constrained Differential-Drive Mobile Robots
Fan Ke, Zhijun Li, Senior Member, IEEE, and Chenguang Yang, Senior Member, IEEE
Abstract: This work proposes a control strategy for the vision-based control problem of a nonholonomic, constrained differential-drive mobile robot with bounded disturbance, using a robust tube-based model predictive control (MPC) method. The proposed control strategy mainly consists of an ancillary state feedback controller and an MPC controller for a nominal robotic system. Firstly, the state-error kinematics of the nominal system is converted into a chained-form system, and its MPC optimization is formulated as a quadratic programming (QP) problem solved by a linear variational inequality-based primal-dual neural network (LVI-PDNN). Next, the gain scheduling of the ancillary state feedback is obtained by solving a robust pole assignment problem, also using an LVI-PDNN. The MPC generates an optimal state trajectory for the nominal robotic system free of uncertainties, while the ancillary state feedback control forces the state variables to remain within a designed invariant tube. Finally, extensive experimental results on stabilization control of the nonholonomic mobile robot are provided to verify the effectiveness of the proposed robust tube-based MPC.

Index Terms: Mobile robots, image-based servoing, tube-based model predictive control, neural network optimization.
I. INTRODUCTION
Vision-based control is one of the main research fields for autonomous mobile robots; it controls the position and orientation of a robot by tightly coupling the control loop with the visual feedback obtained from cameras [1]-[3]. Many approaches to vision-based control of mobile robots have been investigated. For example, an adaptive tracking controller was proposed for a nonholonomic mobile robot with an uncalibrated camera [5]. A motion-estimation technique was proposed in [4] that needs only three features to obtain a unique solution; the method can also exploit the continuity of motion to eliminate the ambiguity even when only two feature points are visible. In [6], a robust stabilization control was proposed to steer the mobile robot to the desired posture without precise camera intrinsic parameters or depth information. In order to achieve robust autonomous navigation based on visual
servoing, various methods have been explored, such as specific geometric features (line orientations and vanishing points) [7], image memory or planning of the image trajectory [8], [9], catadioptric, stereo, and omnidirectional cameras [10], [11], and embedded velocity fields [12].

Manuscript received February 22, 2017; revised August 6, 2017; accepted August 30, 2017. This work was supported in part by National Natural Science Foundation of China grants (Nos. 61573147, 91520201, 61625303), Guangzhou Research Collaborative Innovation Projects (No. 2014Y200507), Guangdong Science and Technology Research Collaborative Innovation Projects (Nos. 2014B090901056, 2015B020214003), Guangdong Science and Technology Plan Project (Application Technology Research Foundation) (No. 2015B020233006), and the National High-Tech Research and Development Program of China (863 Program) (Grant No. 2015AA042303). Corresponding author: Zhijun Li (zjli@ieee.org).
F. Ke, Z. Li, and C. Yang are with the College of Automation Science and Engineering, South China University of Technology, Guangzhou, China. Email: zjli@ieee.org.
However, it should be noted that the above studies did not consider constraints internal to the robot, such as velocity limits and actuator saturation [13]. Owing to its ability to cope with complex nonlinear or linear systems subject to various constraints, MPC has received considerable attention. In [14], an MPC method was proposed to stabilize a nonholonomic mobile robot (NMR) in the presence of both kinematic and dynamic constraints. MPC has also been applied to unmanned aerial vehicles [15] and humanoid robots [16]. However, these works did not explicitly consider the handling of external disturbances in the actual implementation, and their robust stability relies on the nominal system itself being robust. Unfortunately, such inherent robustness is not always present in predictive controllers [17]. In this paper, the proposed tube-based MPC explicitly suppresses the effect of external disturbances by introducing a state feedback term into the designed control.
Tube-based MPC is a two-layer control strategy composed of an inner state feedback loop that robustifies the system trajectory and an open-loop MPC with constraint conditions [18], [19]. The first layer is an auxiliary state feedback control law that keeps all possible trajectories of the uncertain system within an admissible sequence of sets (a tube), determined so that the constraints are satisfied. The second layer solves a constrained MPC problem that generates a nominal trajectory to the final goal. In the tube-MPC context, the actual trajectory obtained under the state feedback control generates a sequence of states that always lies inside a tube centered on the nominal trajectory. Thus, tube-based MPC is attractive for practical implementation due to its relatively low computational complexity.
On the other hand, solving the optimization problem in real time is always important for MPC implementation. The target cost function of MPC is converted into a quadratic programming problem. Traditionally, the sequential quadratic programming (SQP) method requires repeated calculation of the Hessian matrix of the Lagrangian to solve a QP, which brings extra computational complexity [20]. Because neural networks are inherently parallel and distributed information processors, neurodynamic optimization has become a recognized efficient method for dealing with complex computation. The primal-dual neural network (PDNN) can be treated as a promising computational tool for solving quadratic programming (QP) problems [21], [14].
Fig. 1. Coordinate systems relationship (world, robot, and camera frames; feature point P; initial and desired positions of the robot).
This paper describes a robust tube-MPC controller based on a visual servoing control model for nonholonomic mobile robots subject to bounded external disturbances. The proposed tube-based MPC approach introduces a state feedback controller into the nonlinear system to compensate for the mismatch between the real evolution of the system and the nominal system caused by the bounded uncertainties. The feedback gain is solved online by using two linear variational inequality-based primal-dual neural networks (LVI-PDNNs), so that the state variables of the mobile robot are constrained within an allowed bounded tube, while a nominal MPC, subject to tightened constraint sets, is applied to stabilize the nominal system.
The main contributions of the paper are summarized as follows:
1) The proposed tube-based MPC approach keeps the state variables of the mobile robot within an allowed bounded tube by introducing a state feedback controller into the closed-loop nonlinear system, where the feedback gain is solved online by using two LVI-PDNNs, to compensate for the effect of bounded uncertainties such as external disturbances from the light source and vibration of the camera.
2) Compared with similar approaches in which the tube boundary is computed offline, the boundary of the feasible tube is enlarged, and the time-varying parameter system reduces conservativeness in the proposed tube-based MPC.
3) Considering that several feature points may be unavailable in an actual implementation, only one feature point is used to obtain the relative position of the robot.
4) The tube-based MPC successfully stabilizes the vision-based mobile robot under various constraints, including the velocity increment limit (acceleration limit), the velocity limits, and the field-of-view limit of the onboard camera.
II. PROBLEM FORMULATION
A. Kinematic Model
We first define the relationship among three coordinate frames, namely the robot frame, the camera frame, and the world frame, shown in Fig. 1, where $O_w X_w Y_w Z_w$ is the world frame attached to the ground, $O_r X_r Y_r Z_r$ is the robot frame fixed to the mobile robot, and $O_c X_c Y_c Z_c$ is the camera frame rigidly attached to the camera; $v_r, w_r$ and $v_c, w_c$ are the linear and angular velocities of the robot and the camera, respectively, with the linear velocities oriented along the respective Z-axes. The transformation matrix between the two frames is given by [22]
$$H_c^r = \begin{bmatrix} R_c^r & d_c^r \\ 0 & 1 \end{bmatrix} \qquad (1)$$

where $d_c^r = [d_x, d_y, d_z]^T$, $H_c^r \in \mathbb{R}^{4\times4}$ represents the homogeneous transformation, $R_c^r \in \mathbb{R}^{3\times3}$ is the rotation matrix, $d_c^r \in \mathbb{R}^{3}$ is the relative position vector between the robot frame and the camera frame, and $d_x, d_y, d_z$ are the relative displacements between the two frames. In order to facilitate the experiment, we fix the camera on the mobile robot base. Thus, the relationship between the robot velocity and the camera velocity follows from (1) as

$$\xi_r^r = [0, 0, v_r, 0, w_r, 0]^T = \begin{bmatrix} R_c^r & s(d_c^r) R_c^r \\ 0_{3\times3} & R_c^r \end{bmatrix} \xi_c^c \qquad (2)$$

where $\xi_c^c = [v_x, v_y, v_z, w_x, w_y, w_z]^T$ is the camera velocity vector expressed in the camera frame, $\xi_r^r$ is the robot velocity vector expressed in the robot frame, $s(d_c^r)$ is the skew-symmetric matrix of the vector $d_c^r$, and $w_r$ and $v_r$ are the rotational and translational velocities of the robot on the ground. Then, the wheel speeds can be obtained as
$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} L/2 & L/2 \\ L/l & -L/l \end{bmatrix}^T \begin{bmatrix} v_r \\ w_r \end{bmatrix} \qquad (3)$$

where $w_1$ and $w_2$ are the speeds of the two wheels, $l$ is the distance between the two wheels, and $L$ is the wheel diameter.
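As a quick illustration of (3), the following Python sketch maps a commanded robot velocity pair $(v_r, w_r)$ to the two wheel speeds; the wheel diameter and wheel separation used here are illustrative placeholders rather than the paper's measured values.

```python
import numpy as np

# Wheel geometry; the numbers are illustrative placeholders, not the paper's values.
WHEEL_DIAMETER_L = 0.128   # L in (3), wheel diameter [m]
WHEEL_SEPARATION_l = 0.45  # l in (3), distance between the two wheels [m]

def wheel_speeds(v_r, w_r, L=WHEEL_DIAMETER_L, l=WHEEL_SEPARATION_l):
    """Map the robot velocities (v_r, w_r) to wheel speeds (w1, w2) as in (3)."""
    T = np.array([[L / 2.0, L / 2.0],
                  [L / l,  -L / l]]).T
    return T @ np.array([v_r, w_r])

print(wheel_speeds(0.3, 0.0))   # pure forward motion
print(wheel_speeds(0.0, 0.5))   # pure rotation
```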
B. Camera Projection Model
We assume that P is a feature point in the world frame whose coordinates are $(X_c, Y_c, Z_c)$ relative to the camera frame. Correspondingly, we define p as the projection of the feature point on the image plane, with image-plane coordinates $(r', c')$ and pixel coordinates $(r, c)$. The pixel coordinates of the image-plane origin are $(r_0, c_0)$, and the focal length of the camera is denoted by $f$. The transformation between these coordinates is given by
$$r = f_x \frac{X_c}{Z_c} + r_0, \quad c = f_y \frac{Y_c}{Z_c} + c_0, \qquad r' = s_x (r - r_0), \quad c' = s_y (c - c_0) \qquad (4)$$
where $s_y$ and $s_x$ are the vertical and horizontal dimensions of a pixel, respectively, and $f_x = f/s_x$, $f_y = f/s_y$. The normalized image coordinates are defined as
$$\begin{bmatrix} x & y & 1 \end{bmatrix}^T = \begin{bmatrix} \dfrac{r-r_0}{f_x} & \dfrac{c-c_0}{f_y} & 1 \end{bmatrix}^T \qquad (5)$$
where $x$, $y$ are the normalized image coordinates. Taking the derivative of (5), we obtain

$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} \dot{r}/f_x \\ \dot{c}/f_y \end{bmatrix} = \begin{bmatrix} \dfrac{\dot{X}_c - x\dot{Z}_c}{Z_c} \\ \dfrac{\dot{Y}_c - y\dot{Z}_c}{Z_c} \end{bmatrix} \qquad (6)$$
The moving velocity of p can be described as

$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = L \, \xi_c^c = \begin{bmatrix} -\dfrac{1}{Z_c} & 0 & \dfrac{x}{Z_c} & xy & -(1+x^2) & y \\ 0 & -\dfrac{1}{Z_c} & \dfrac{y}{Z_c} & 1+y^2 & -xy & -x \end{bmatrix} \xi_c^c \qquad (7)$$
where $\xi_c^c$ has been defined in (2), and $Z_c$ in the interaction matrix of (7) denotes the depth information. Since the robot moves in a two-dimensional plane, the linear and angular velocities of the camera are along the Z-axis and about the Y-axis of the camera frame, respectively. Thus, the image Jacobian matrix $L$ reduces to

$$L = \begin{bmatrix} \dfrac{x}{Z_c} & -(1+x^2) \\ \dfrac{y}{Z_c} & -xy \end{bmatrix} \qquad (8)$$
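For concreteness, a minimal Python sketch of the pixel normalization (5) and the reduced image Jacobian (8) is given below; the camera intrinsics, depth, and camera velocities are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def normalized_coords(r, c, r0, c0, fx, fy):
    """Normalized image coordinates (x, y) of a pixel (r, c), as in (5)."""
    return (r - r0) / fx, (c - c0) / fy

def reduced_image_jacobian(x, y, Zc):
    """2x2 image Jacobian of (8) for planar motion with inputs (v_c, w_c)."""
    return np.array([[x / Zc, -(1.0 + x ** 2)],
                     [y / Zc, -x * y]])

# Illustrative intrinsics, depth, and camera velocities (not the paper's values).
x, y = normalized_coords(r=350.0, c=200.0, r0=320.0, c0=240.0, fx=520.0, fy=520.0)
L = reduced_image_jacobian(x, y, Zc=1.2)
xdot, ydot = L @ np.array([0.2, 0.1])   # [v_c, w_c]
print(xdot, ydot)
```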
C. Error Model

The underlying concept of vision-based control is that the controller continuously adjusts the velocities of the mobile robot according to the position errors of the feature point, so that the robot moves from the initial position to the desired position. The following two state variables are introduced:

$$\xi_1 = \frac{x}{y} = \frac{r - r_0}{c - c_0}\,\frac{f_y}{f_x}, \qquad \xi_2 = \frac{1}{y} = \frac{f_y}{c - c_0} \qquad (9)$$

It is clear that (9) is singular when $y = 0$, so we set the camera optical center higher or lower than the feature point, i.e., $y < 0$ or $y > 0$. By utilizing (7), (8) and (9), we obtain the kinematics model as [4]

$$\dot{\xi}_1 = -\xi_2 w_c, \qquad \dot{\xi}_2 = -\frac{1}{Z_c}\xi_2 v_c + \xi_1 w_c \qquad (10)$$

Using (4) and (9), it is clear that $\frac{1}{Z_c}\xi_2 = \frac{1}{Z_c}\frac{Z_c}{Y_c} = \frac{1}{Y_c}$, where $Y_c$ is the height of the camera frame origin relative to the feature point and can be obtained by measurement. Thus, (10) can be rewritten as

$$\dot{\xi}_1 = -\xi_2 w_c, \qquad \dot{\xi}_2 = -\frac{1}{Y_c} v_c + \xi_1 w_c \qquad (11)$$

By using (5), we can define the desired variables in terms of (9) as $\xi_1^* = \frac{r^* - r_0}{c^* - c_0}\frac{f_y}{f_x}$ and $\xi_2^* = \frac{f_y}{c^* - c_0}$, where $(r^*, c^*)$ represents the desired pixel coordinates of the feature point in the image plane. The control objective then becomes the construction of appropriate velocities $w_c$ and $v_c$ that guarantee $\xi_1 \to \xi_1^*$ and $\xi_2 \to \xi_2^*$.

If the state variables were only the coordinates of the feature point, the control model would be singular; moreover, we would only obtain the position of the robot relative to the feature point, which does not determine a unique position of the robot in the world frame. Thus, a new global state variable $\theta$ is introduced, with the relative angle updated as $\theta(k+1) = \theta(k) + \Delta\theta(k)$, where $\theta(k)$ is the relative angle at time $k$, $\Delta\theta(k) = \arctan\frac{L_1 - L_2}{2\,Dis}$, $L_i = \frac{2\pi \cdot Rad \cdot Pos_i}{4\,Red \cdot Cnt}$, $i = 1, 2$, $Dis$ is the distance between the two wheels, $Rad$ is the wheel radius of the robot, $Pos_i$ is the total number of encoder counts of the left or right wheel during $\Delta t$, $Red$ is the reduction ratio of the motor, and $Cnt$ is the resolution of the encoder, so that the location of the robot can be uniquely determined. The error signals can then be presented as

$$\begin{bmatrix} e_1 \\ e_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \left( \begin{bmatrix} \xi_1 \\ \xi_2 \end{bmatrix} - \begin{bmatrix} \xi_1^* \\ \xi_2^* \end{bmatrix} \right) \qquad (12)$$

and

$$e_0 = \theta - \theta^* \qquad (13)$$

where $\theta \in \mathbb{R}$ represents the rotation angle of the robot and $\theta^*$ is the desired angle. Considering (11), (12) and (13), we obtain the error system

$$\dot{e}_0 = w_c, \qquad \dot{e}_1 = -e_2 w_c, \qquad \dot{e}_2 = -\frac{1}{Y_c} v_c + e_1 w_c \qquad (14)$$

To facilitate the nominal MPC design, let $u_1 = w_c$ and $u_2 = -\frac{1}{Y_c} v_c + e_1 w_c$; then the chained-form system is written as

$$\dot{e}_0 = u_1, \qquad \dot{e}_1 = -e_2 u_1, \qquad \dot{e}_2 = u_2 \qquad (15)$$

We then split (15) into the two subsystems

$$\dot{e}_0 = u_1 \qquad (16)$$

$$\dot{e} = \begin{bmatrix} -e_2 u_1 \\ u_2 \end{bmatrix} \qquad (17)$$

where $e = [e_1, e_2]^T$ is the state of the system (17). We can rewrite the two subsystems (16) and (17) as

$$\dot{e}_0 = f_1(e_0) + g_1(e_0) u_1 \qquad (18)$$

$$\dot{e} = f_2(e, u_1) + g_2(e) u_2 \qquad (19)$$

where $f_1(e_0) = 0$, $g_1(e_0) = 1$, and $f_2(e, u_1) \in \mathbb{R}^2$ and $g_2(e) \in \mathbb{R}^2$ are defined as $f_2(e, u_1) = [-u_1 e_2, 0]^T$, $g_2(e) = [0, 1]^T$. It should be noted that the second subsystem (19) is not necessarily controllable when the input of system (18) lies on the singular manifold: the control input $u_1(0) = 0$ makes the state transition matrix of system (19) a zero matrix, so the system becomes uncontrollable. Hence, we prevent this by keeping $e_0(t)$ off the singular manifold. Integrating an exponentially decaying term, the control input can be written as

$$u_1 = \bar{u}_1 + \lambda e^{-\alpha t} \qquad (20)$$

where $\alpha \in \mathbb{R}$ is a positive constant, $\lambda \in \mathbb{R}$ is a nonzero constant, and $\bar{u}_1$ is the optimal input for (18). The convergence and amplitude of $u_1$ depend mainly on $\bar{u}_1$, because $\lambda e^{-\alpha t}$ is an exponentially decaying term; obviously, as $t \to \infty$, $u_1 \to \bar{u}_1$. Note that the decaying term slows the convergence of the input $u_1$ to zero, so that the subsystem (19) keeps its controllability until the state of the system reaches the desired state.

For the design of the ancillary state feedback gains, if we treat $w_c$ as a time-varying parameter obtained in real time, we can transform the model (19) into a linear parameter-varying (LPV) model:

$$e(k+1) = A(w(k)) e(k) + B u(k) \qquad (21)$$

where $A(w(k)) = \begin{bmatrix} 1 & -T w_c(k) \\ T w_c(k) & 1 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ -\frac{T}{Y_c} \end{bmatrix}$, $T$ represents the sampling time, $w_c$ is the angular velocity about the Y-axis, and $u = v_c$ is the linear velocity of the camera.
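To make the LPV discretization concrete, the following minimal Python sketch builds $A(w(k))$ and $B$ from (21) and propagates the error state one step; $T = 0.2$ s matches the sampling time used later in the experiments, while $Y_c$, the state, and the inputs are illustrative assumptions.

```python
import numpy as np

def lpv_matrices(w_c, T=0.2, Yc=0.5):
    """Discrete LPV matrices A(w(k)) and B of (21); Yc is illustrative."""
    A = np.array([[1.0, -T * w_c],
                  [T * w_c, 1.0]])
    B = np.array([[0.0],
                  [-T / Yc]])
    return A, B

# One step of e(k+1) = A(w(k)) e(k) + B u(k) with an illustrative state and input.
e = np.array([[0.4], [0.1]])
A, B = lpv_matrices(w_c=0.3)
e_next = A @ e + B * 0.05        # u(k) = v_c = 0.05
print(e_next.ravel())
```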
III. TUBE-BASED MPC

The proposed control can be described as the combination of a nominal MPC and an ancillary feedback control law. The nominal MPC produces a feasible trajectory, free of external environmental disturbance, for the future predicted states and inputs. Meanwhile, the ancillary feedback control maintains the state variables close to the reference (nominal) trajectory. Fig. 2 illustrates the control architecture. The control input can be presented as $u = \bar{u} + \tilde{v}$, where $\tilde{v} = K(x - \bar{x})$ is the control input generated by the ancillary feedback controller, $\bar{u}$ is the control input generated by the nominal MPC, and $K \in \mathbb{R}^{m \times n}$ is a constant state feedback gain matrix.

Fig. 2. Control scheme of the system (the nominal MPC drives the nominal system, and the ancillary feedback $\tilde{v} = K(x - \bar{x})$ acts on the actual system).

A. The Nominal Model Predictive Controller

To illustrate the operations and inclusion relations between sets, and for convenience in the later discussion of the robust tube-based MPC scheme, we introduce some basic concepts.

Definition 3.1: Given two sets $A, B \subset \mathbb{R}^n$, $A \oplus B := \{a + b \,|\, a \in A, b \in B\}$ and $A \ominus B := \{a \,|\, a \oplus B \subseteq A\}$ are defined as the Minkowski set addition and the Pontryagin set subtraction, respectively.

The nonlinear uncertain system model in discrete-time form to be controlled is presented as

$$x(k+1) = f(x(k), u(k)) + d(k) \qquad (22)$$

where $x \in \mathbb{R}^n$ is the state variable, $u \in \mathbb{R}^m$ is the control input of the system, and $d \in \mathbb{R}^n$ is a bounded white-noise disturbance; $f(\cdot)$ is a nonlinear function that is twice continuously differentiable for all $(x, u)$ with $f(0, 0) = 0$. We assume all state variables are available. The state and control constraints of the system are defined as $x \in \mathcal{X} \subset \mathbb{R}^n$ and $u \in \mathcal{U} \subset \mathbb{R}^m$, where $\mathcal{U}$ and $\mathcal{X}$ are compact sets containing the origin in their interiors. The disturbance is bounded, $d \in \mathcal{D}$, with $\mathcal{D}$ a compact convex set containing the origin in its interior. We define the nominal (reference) system corresponding to (22) as

$$\bar{x}(k+1) = f(\bar{x}(k), \bar{u}(k)) \qquad (23)$$

where $\bar{u} \in \mathbb{R}^m$ is the nominal control and $\bar{x} \in \mathbb{R}^n$ is the nominal state. We assume the solution of (23) at time $k$ is $\bar{\phi}(k; \bar{x}, \bar{u})$ when the control sequence is $\bar{u} = \{u(0), u(1), \ldots, u(k)\}$ and the initial state is $\bar{x}(0)$. The nominal trajectory is a feasible trajectory for the nominal system that enables the state variables to satisfy the constraints under the ancillary state feedback controller. The nominal optimal control problem $\bar{V}_N(\cdot)$ for the nominal robotic system (23) is described as

$$\min_{\bar{u}} \bar{V}_N(\bar{x}(0), x_d) = \min_{\bar{u}} \sum_{j=k}^{k+N-1} \ell(\bar{x}(j) - x_d, \bar{u}(j))$$
$$\text{s.t.} \quad \bar{u}(k) \in \bar{\mathcal{U}}, \; \bar{x}(k) \in \bar{\mathcal{X}}, \; k = 0, 1, \ldots, N \qquad (24)$$

where $\bar{x}(k) = \bar{\phi}(k; \bar{x}(0), \bar{u})$, $\bar{x}(0)$ is the initial state, $x_d$ is the desired state for the nominal system, and $\bar{\mathcal{U}}$ and $\bar{\mathcal{X}}$ are the nominal input and state constraints, respectively; the function $\ell(x, u) = \|x\|_Q^2 + \|u\|_R^2$, where $Q \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{m \times m}$ are symmetric positive definite weighting matrices.

Definition 3.2: A set $\Omega$ is a control invariant set for (22) if, for any state variable $x(k) \in \Omega$, there exists a feasible control input $u(k) \in \mathcal{U}$ such that $x(k+1) \in \Omega$ for any $d(k) \in \mathcal{D}$ and $k \ge 0$.

Definition 3.3: A sequence of sets $\{\Xi_k\}$ is an invariant tube for (22) if, for any state variable $x(k)$ satisfying $x(k) \in \Xi_k$, there exists a control input $u(k) \in \mathcal{U}$ such that $x(k+1) \in \Xi_{k+1}$ for any $d(k) \in \mathcal{D}$.

At sampling time $k$, we define the optimal control sequence and the nominal state trajectory as $\bar{x}^* = \{\bar{x}(k), \bar{x}(k+1), \ldots, \bar{x}(k+N-1)\}$ and $\bar{u}^* = \{\bar{u}(k), \bar{u}(k+1), \ldots, \bar{u}(k+N_u-1)\}$, where $\bar{u}^*$ is the optimal control sequence obtained by solving (24) and $\bar{x}^*$ is the nominal state trajectory obtained by applying $\bar{u}^*$.

B. Gain Scheduling

By rewriting (22) as a linear time-variant model $x(k+1) = A x(k) + B u(k) + d(k)$, we can design a robust feedback control as

$$u = \bar{u}^* + K(x - \bar{x}^*) \qquad (25)$$

where the notations $x$, $\bar{x}^*$, $K$, $u$, $\bar{u}^*$ have been defined above. Combining Definition 3.1 with (25), we have the following proposition.

Proposition 3.1: Assume $S$ is a disturbance invariant set for $x(k+1) = A_K x(k) + d(k)$, where $A_K = A + BK$. If $x \in \bar{x} \oplus S$ and $u = \bar{u} + K(x - \bar{x})$, then $x(k+1) \in \bar{x}(k+1) \oplus S$ for all $d \in \mathcal{D}$, where $x(k+1) = A x(k) + B u(k) + d(k)$ and $\bar{x}(k+1) = A \bar{x}(k) + B \bar{u}(k)$.

Proof: Replacing $u$ in (22) with the expression in (25), we obtain

$$x(k+1) - \bar{x}(k+1) = (A + BK)(x(k) - \bar{x}(k)) + d(k) \qquad (26)$$

Let $A_K = A + BK$ and let $S$ be a disturbance invariant set for the uncertain system (22), i.e., $x - \bar{x} \in S$ holds, so that $A_K S \oplus \mathcal{D} \subseteq S$ according to Definition 3.1. It is desirable that $S$ be as small as possible (the minimal disturbance invariant set is $\bigoplus_{k=0}^{\infty} A_K^k \mathcal{D}$) to reduce conservativeness. We thus obtain $x(k+1) - \bar{x}(k+1) \in S$, i.e.,
$x(k+1) \in \bar{x}(k+1) \oplus S$, where the set $S$ is computed as an outer, disturbance-invariant, polytopic approximation of the minimal disturbance invariant set [18].

Proposition 3.1 shows that the states $x(k)$ of the system (22) remain close to the states $\bar{x}(k)$ of the nominal system (23) (for all $\bar{u}$, $x(k) - \bar{x}(k) \in S$) under the state feedback control policy $u(k) = \bar{u}(k) + K(k)(x(k) - \bar{x}(k))$, which compensates for the effect of the bounded uncertainties.

According to Definitions 3.2 and 3.3 and Proposition 3.1, we obtain an invariant tube $\{\Xi_k\} = \bar{x}^* \oplus S$, where $S$ is a robust invariant set for the controlled system; the set $S$ is robustly exponentially stable for the controlled uncertain system (22) with $d \in \mathcal{D}$ [18], so that the state trajectory of the uncertain system (22) robustly converges to the invariant tube $\{\Xi_k\}$.

Considering the above problems, we adopt a state-feedback gain scheduling technique in the control design; i.e., the time-varying state feedback control [24] is presented as

$$u(k) = \bar{u}^*(k) + \tilde{v}(k) \qquad (27)$$

where $\tilde{v}(k) = K(k)(x(k) - \bar{x}^*(k))$ and $K(k)$ is a feedback gain matrix whose value varies with the instant $k$. Designing the feedback gain $K(k)$ to achieve robustness against parametric perturbations is desirable, since accurate model parameters $A(w(k))$ and $B(w(k))$ can hardly be obtained in practice. To obtain robust gains in real time, robust pole assignment using neural network optimization is an effective approach [25]. The time-varying LPV model (21) can be translated into a discrete-time LPV model during each sampling interval $T$, i.e.,

$$\bar{e}(k+1) = A \bar{e}(k) + B \bar{u}(k) \qquad (28)$$

where the matrices $A$ and $B$ are the time-varying parameters obtained from $A(w(k))$ and $B(w(k))$ at sampling time $k$; the state feedback control $\bar{u}(k) = K \bar{e}(k)$ can then be applied if $(A, B)$ is controllable and $B$ has full column rank. Thus, the closed-loop system can be described as

$$\bar{e}(k+1) = (A + BK)\bar{e}(k) \qquad (29)$$

Let the real pseudo-diagonal matrix $\Lambda \in \mathbb{R}^{n \times n}$ be defined as the desired eigenvalue matrix, whose eigenvalues all have negative real parts. Then, by solving the following optimization problem, we can obtain a robust feedback gain matrix $K$:

$$\min \; \phi_2^2(Z) \qquad \text{s.t.} \quad Z\Lambda - AZ = BG \qquad (30)$$

where $Z \in \mathbb{R}^{n \times n}$ is the variable matrix associated with the eigensystem and $G \in \mathbb{R}^{m \times n}$ is also a variable; the state feedback gain is then calculated as $K = GZ^{-1}$, and $\phi_2(Z) = \|Z\|_2 \|Z^{-1}\|_2 = (\lambda_{max}(Z^T Z)/\lambda_{min}(Z^T Z))^{1/2}$ is the spectral condition number, where $\lambda_{min}$ and $\lambda_{max}$ are the smallest and largest eigenvalues, respectively. It can be shown that if (28) is controllable and $\Lambda$ and $A$ have no common eigenvalue, the solution $Z$ of (30) is generically a nonsingular matrix with respect to the parameter $G$. In conclusion, the robust feedback control gain in (27) can be obtained by repeatedly solving (30) at each sampling time.
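As a rough numerical companion to Proposition 3.1, the sketch below bounds the minimal disturbance invariant set $\bigoplus_k A_K^k \mathcal{D}$ by a box using interval arithmetic; this is only an outer box approximation under an assumed closed-loop matrix and box disturbance set, not the polytopic construction of [18].

```python
import numpy as np

def invariant_set_box_bound(A_K, d_halfwidth, n_terms=50):
    """Box outer bound on the minimal disturbance invariant set
    S = D + A_K D + A_K^2 D + ... (Minkowski sums), for a box disturbance set
    D = {d : |d_i| <= d_halfwidth_i}. Interval arithmetic gives an outer
    approximation only; a sketch, not the set computation of [18]."""
    s = np.zeros_like(d_halfwidth, dtype=float)
    M = np.eye(len(d_halfwidth))
    for _ in range(n_terms):
        s += np.abs(M) @ d_halfwidth   # half-widths of the box hull of A_K^k D
        M = A_K @ M
    return s   # |x - xbar| <= s componentwise, for all admissible disturbances

# Illustrative closed-loop matrix and disturbance bound (not the paper's values).
A_K = np.array([[0.8, 0.1],
                [-0.1, 0.7]])
print(invariant_set_box_bound(A_K, np.array([0.02, 0.02])))
```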
C. Tube-based Model Predictive Control

The continuous-time model (15) can be written in discrete-time form as

$$x(k+1) = f(x(k)) + g(x(k)) u(k) \qquad (31)$$

subject to $x(k) \in \mathcal{X}$, $k = 1, 2, \ldots, N$, and $u(k) \in \mathcal{U}$, $k = 1, 2, \ldots, N_u$, where $x$ denotes the state variable vector and $u$ the control input; $f(\cdot) \in \mathbb{R}^m$ and $g(\cdot) \in \mathbb{R}^m$ are continuous nonlinear functions with $f(0) = 0$. $N_u$ denotes the control horizon and $N$ the prediction horizon, with $1 \le N$ and $0 \le N_u \le N$.

In order to steer the state variables of the system (31) to the origin with the proposed control, we define the following cost function:

$$\Phi(k) = \sum_{j=1}^{N} \| x(k+j|k) \|_Q^2 + \sum_{j=0}^{N_u - 1} \| \Delta u(k+j|k) \|_R^2 \qquad (32)$$

In this quadratic form, $R \in \mathbb{R}^{N_u \times N_u}$ and $Q \in \mathbb{R}^{mN \times mN}$ are appropriate symmetric positive definite weighting matrices, where $m = 1$ or $2$, $\|\cdot\|$ denotes the Euclidean norm, $x(k+j|k)$ denotes the predicted future state, and $\Delta u(k+j|k)$ denotes the increment of the system input. In control-theoretic terms, stability is guaranteed if the control and prediction horizons in the stage cost are chosen large enough. For the system (31), a quadratic program can be obtained from the cost function (32).

The two subsystems (18) and (19) can be rewritten in discrete form as

$$e_0(k+1) = e_0(k) + T u_1(k) = f_1(e_0(k)) + g_1(e_0(k), u_1(k)) \qquad (33)$$

$$e(k+1) = \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} + \begin{bmatrix} -T u_1 e_2 \\ T u_1 e_1 \end{bmatrix} + \begin{bmatrix} 0 \\ -\frac{T}{Y_c} \end{bmatrix} u_2 = f_2(e(k), u_1(k)) + g_2(e(k), u_1(k)) \qquad (34)$$

subject to the constraints $u_{1min} \le u_1(k) \le u_{1max}$, $u_{2min} \le u_2(k) \le u_{2max}$, $\Delta u_{1min} \le \Delta u_1(k) \le \Delta u_{1max}$, $\Delta u_{2min} \le \Delta u_2(k) \le \Delta u_{2max}$, $e_{0min} \le e_0(k) \le e_{0max}$, and $e_{min} \le e(k) \le e_{max}$, where $e = [e_1, e_2]^T$ denotes the state variable vector of the subsystem (34), $\Delta u(k)$ denotes the increment of the control input at instant $k$, and $u_{min}, \Delta u_{min}, e_{min}$ and $u_{max}, \Delta u_{max}, e_{max}$ are the lower/upper bounds of the signals and states, respectively.
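The prediction used by the nominal MPC can be sketched by rolling the discrete subsystems (33)-(34) forward over a candidate input sequence, as below; $Y_c$, the initial errors, and the input sequences are illustrative, and the input/state constraints are not enforced in this sketch.

```python
import numpy as np

def predict_horizon(e0, e, u1_seq, u2_seq, T=0.2, Yc=0.5):
    """Roll the discrete subsystems (33)-(34) forward over the control sequence.
    T matches the experimental sampling time; Yc and all states/inputs here are
    illustrative, and input/state constraints are not enforced in this sketch."""
    traj = []
    e0, e = float(e0), np.asarray(e, dtype=float).copy()
    for u1, u2 in zip(u1_seq, u2_seq):
        e0 = e0 + T * u1                                     # (33)
        e = e + np.array([-T * u1 * e[1], T * u1 * e[0]]) \
              + np.array([0.0, -T / Yc]) * u2                # (34)
        traj.append((e0, e.copy()))
    return traj

for step in predict_horizon(0.5, [0.4, 0.1],
                            u1_seq=[0.2, 0.2, 0.1], u2_seq=[0.05, 0.0, 0.0]):
    print(step)
```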
The following vectors are defined: $\bar{e}_0(k) = [e_0(k+1|k), \ldots, e_0(k+N|k)]^T \in \mathbb{R}^N$, $\bar{e}(k) = [e(k+1|k), \ldots, e(k+N|k)]^T \in \mathbb{R}^{2N}$, $\bar{u}(k) = [u(k|k), \ldots, u(k+N_u-1|k)]^T \in \mathbb{R}^{N_u}$, and $\Delta\bar{u}(k) = [\Delta u(k|k), \ldots, \Delta u(k+N_u-1|k)]^T \in \mathbb{R}^{N_u}$.

We denote the vector $[x_1, x_2]^T = [\bar{e}_0, \bar{e}]^T$; then, the predicted output is described as

$$\tilde{x}_i(k) = G_i \Delta\bar{u}_i(k) + \tilde{f}_i + \tilde{g}_i \qquad (35)$$
where $i = 1$ or $2$,

$$G_i = \begin{bmatrix} g_i(x_i(k|k-1)) & \cdots & 0 \\ g_i(x_i(k+1|k-1)) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ g_i(x_i(k+N-1|k-1)) & \cdots & g_i(x_i(k+N-1|k-1)) \end{bmatrix}$$
$$\tilde{f}_i = \begin{bmatrix} f_i(x_i(k|k-1)) \\ f_i(x_i(k+1|k-1)) \\ \vdots \\ f_i(x_i(k+N-1|k-1)) \end{bmatrix}, \qquad \tilde{g}_i = \begin{bmatrix} g_i(x_i(k|k-1))\, u_i(k-1) \\ g_i(x_i(k+1|k-1))\, u_i(k-1) \\ \vdots \\ g_i(x_i(k+N-1|k-1))\, u_i(k-1) \end{bmatrix}$$

Fig. 3. Block diagram of the robust MPC scheme.

Fig. 4. The structure of the primal-dual neural network.

Thus, the original optimization objective (32) can be re-expressed as

$$\min \; \| G_i \Delta\bar{u}_i(k) + \tilde{f}_i + \tilde{g}_i \|_Q^2 + \| \Delta\bar{u}_i(k) \|_R^2 \qquad (36)$$

subject to $\Delta\bar{u}_{min} \le \Delta\bar{u}_i(k) \le \Delta\bar{u}_{max}$, $\bar{u}_{min} \le \bar{u}_i(k-1) \le \bar{u}_{max}$, $\bar{u}_{min} \le \bar{u}_i(k-1) + \tilde{I}\Delta\bar{u}_i(k) \le \bar{u}_{max}$, and $\tilde{x}_{imin} \le \tilde{f}_i + \tilde{g}_i + G_i \Delta\bar{u}_i(k) \le \tilde{x}_{imax}$, where

$$\tilde{I} = \begin{bmatrix} I & 0 & \cdots & 0 \\ I & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ I & I & \cdots & I \end{bmatrix} \in \mathbb{R}^{N_u \times N_u}$$

Then, the optimization objective (36) of the nominal system is translated into a QP problem. We assume the dimension parameter for the $i$-th subsystem, $i = 1, 2$, is the integer $h = 1$ or $2$. The QP problem can be expressed as

$$\min \; \frac{1}{2} \Delta\bar{u}_i(k)^T W_{1i} \Delta\bar{u}_i(k) + c_{1i}^T \Delta\bar{u}_i(k) \qquad (37)$$

subject to

$$E_{1i} \Delta\bar{u}_i \le b_{1i} \qquad (38)$$
$$\Delta\bar{u}_{min} \le \Delta\bar{u}_i \le \Delta\bar{u}_{max} \qquad (39)$$

where the coefficients are $W_{1i} = 2(G_i^T Q_i G_i + R_i) \in \mathbb{R}^{N_u \times N_u}$, $c_{1i} = 2 G_i^T Q_i (\tilde{g}_i + \tilde{f}_i) \in \mathbb{R}^{N_u}$, $E_{1i} = [-\tilde{I}, \tilde{I}, -G_i, G_i]^T \in \mathbb{R}^{(2N_u + 2hN) \times N_u}$, and $b_{1i} = [-\bar{u}_{min} + \bar{u}_i(k-1), \bar{u}_{max} - \bar{u}_i(k-1), -\tilde{x}_{imin} + \tilde{f}_i + \tilde{g}_i, \tilde{x}_{imax} - \tilde{f}_i - \tilde{g}_i]^T \in \mathbb{R}^{2N_u + 2hN}$.

IV. NEURODYNAMIC APPROACH

A. First PDNN Optimization

For the constraints (38) and (39), we define the dual decision vector $\zeta \in \mathbb{R}^{4N_u + 4N}$. Hence, the decision vector $\omega$ of the primal-dual neural network and its bounds $\omega^{\pm}$ can be expressed as

$$\omega := \begin{bmatrix} \Delta\bar{u} \\ \zeta \end{bmatrix}, \quad \omega^{+} := \begin{bmatrix} \Delta\bar{u}_{max} \\ +\varpi^{+} \end{bmatrix}, \quad \omega^{-} := \begin{bmatrix} \Delta\bar{u}_{min} \\ -\varpi^{+} \end{bmatrix} \qquad (40)$$

where $\omega^{+}$ is the upper bound and $\omega^{-}$ is the lower bound; for any $i$, the elements $\varpi_i^{+} \gg 0$ of $\varpi^{+}$ are sufficiently large to represent $+\infty$. Thus, we can define $\Omega$ as the convex set $\Omega = \{\omega \,|\, \omega^{-} \le \omega \le \omega^{+}\}$.

We define the matrix $M$ and the vector $p$ as

$$p = \begin{bmatrix} c_{1i} \\ -b_{1i} \end{bmatrix}, \qquad M = \begin{bmatrix} W_{1i} & -E_{1i}^T \\ E_{1i} & 0 \end{bmatrix} \qquad (41)$$

Then, the following lemma on the optimization of (37) can be stated.

Lemma 4.1 [21]: The quadratic program (37) with the corresponding constraints (38) and (39) is equivalent to finding a vector $\omega^* \in \Omega$ that satisfies the linear variational inequality

$$(\omega - \omega^*)^T (M \omega^* + p) \ge 0, \quad \forall \omega \in \Omega \qquad (42)$$

where $\Omega$, $M$, and $p$ are defined in (40) and (41), respectively. Moreover, the linear variational inequality (42) can be transformed into the following system of piecewise linear equations:

$$P_{\Omega}(\omega - (M\omega + p)) - \omega = 0 \qquad (43)$$

where $P_{\Omega}(\cdot)$ is the projection operator onto $\Omega$, defined as $P_{\Omega}(\omega) = [P_{\Omega}(\omega_1), \cdots, P_{\Omega}(\omega_{6N_u+4N})]^T$, $\forall i \in \{1, \cdots, 6N_u + 4N\}$, with

$$P_{\Omega}(\omega_i) = \begin{cases} \omega_i^{-}, & \omega_i < \omega_i^{-} \\ \omega_i, & \omega_i^{-} \le \omega_i \le \omega_i^{+} \\ \omega_i^{+}, & \omega_i > \omega_i^{+} \end{cases}$$

To solve the linear projection equation (43), we may build a dynamical system using the dual dynamical system design approach. However, the matrix $M$ is asymmetric, which would lead to an unstable system. Thus, motivated by design experience, we propose the following modified dynamical neural network to solve (43):

$$\dot{\omega} = \gamma (I + M^T)\{P_{\Omega}(\omega - (M\omega + p)) - \omega\} \qquad (44)$$

where $\gamma$ is a designed positive parameter, by adjusting which the convergence rate of the system can be tuned. If we set $\Phi = I + M^T$, $S(\omega) = \omega - (M\omega + p)$, and $C(\omega) = \omega$, (44) can be simplified as $\dot{\omega} = \gamma \Phi (P_{\Omega}(S(\omega)) - C(\omega))$.
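To make the neurodynamics concrete, the following sketch integrates the projection dynamics (44) with a forward-Euler step on a small made-up QP; for simplicity the constraint is an equality, so the symmetric dual bounds of (40) apply directly, and $\gamma$, the step size, and all problem data are illustrative assumptions rather than the controller's actual QP.

```python
import numpy as np

def lvi_pdnn(W, c, E, b, x_min, x_max, gamma=5.0, dt=1e-3, steps=20000, varpi=1e6):
    """Euler integration of the PDNN dynamics (44) for a toy QP
    min 0.5 x'Wx + c'x  s.t.  Ex = b,  x_min <= x <= x_max.
    M and p follow the structure of (41); the equality constraint keeps the
    symmetric dual bounds of (40) exact for this sketch."""
    n, m = W.shape[0], E.shape[0]
    M = np.block([[W, -E.T], [E, np.zeros((m, m))]])
    p = np.concatenate([c, -b])
    lo = np.concatenate([x_min, -varpi * np.ones(m)])
    hi = np.concatenate([x_max, +varpi * np.ones(m)])
    omega = np.zeros(n + m)
    for _ in range(steps):
        proj = np.clip(omega - (M @ omega + p), lo, hi)   # P_Omega(omega - (M omega + p))
        omega = omega + dt * gamma * (np.eye(n + m) + M.T) @ (proj - omega)
    return omega[:n]

# Made-up problem data; the minimizer of this toy QP is [0.25, 1.75].
x = lvi_pdnn(W=np.diag([2.0, 2.0]), c=np.array([-2.0, -5.0]),
             E=np.array([[1.0, 1.0]]), b=np.array([2.0]),
             x_min=np.array([-10.0, -10.0]), x_max=np.array([10.0, 10.0]))
print(x)
```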
Remark 4.1: The MPC approach is iteratively transformed into a constrained quadratic programming (QP) problem that takes the constraints into account, and the resulting convex optimization problem is solved in real time by a primal-dual neural network (PDNN). The neural network structure is shown in Fig. 4, where $\Phi_i$ represents the $i$-th row of the scaling matrix $\Phi$. When the dimension of the input $\omega$ is $k = 6N_u + 4N$, the circuit for the PDNN (44) consists of $k$ integrators, $k$ limiters (the piecewise-linear activation functions), $2k^2$ multipliers, and $5k$ summers, so the proposed PDNN method requires $O(7(6N_u + 4N) + 2(6N_u + 4N)^2)$ operations. To solve the QP (37)-(39), a traditional sequential quadratic programming (SQP) method using gradient descent is usually adopted, whose computational burden involves repeated calculation of the Hessian matrix. The traditional QP solution needs $O(N^4 + N + (6N_u + 4N) N_u^2 + (7N_u + 4N)^3)$ operations; this is impractical to run online for mobile robot systems owing to the inefficiency of the numerical algorithm. In terms of computational complexity, the proposed PDNN approach therefore reduces the computational cost, and the MPC solved by the PDNN has a much smaller complexity than the SQP method.
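Evaluating the two operation-count expressions of Remark 4.1 at the horizons used later in the experiments ($N_u = 2$, $N = 3$) gives a rough feel for the gap; the snippet below simply evaluates those expressions and makes no claim beyond them.

```python
def pdnn_ops(Nu, N):
    k = 6 * Nu + 4 * N
    return 7 * k + 2 * k ** 2          # integrators/limiters/summers + multipliers

def sqp_ops(Nu, N):
    return N ** 4 + N + (6 * Nu + 4 * N) * Nu ** 2 + (7 * Nu + 4 * N) ** 3

# Horizons used later in the experiments (Nu = 2, N = 3)
print(pdnn_ops(2, 3), sqp_ops(2, 3))
```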
B. Second PDNN Optimization

Using vectorization techniques and the Kronecker product, we can rewrite the equation $AZ - Z\Lambda + BG = 0$ as

$$Hz = 0 \qquad (45)$$

where $z = [\mathrm{vec}(Z)^T, \mathrm{vec}(G)^T]^T \in \mathbb{R}^{n(n+m)}$ and $H = [I_n \otimes A - \Lambda^T \otimes I_n \,|\, I_n \otimes B] \in \mathbb{R}^{nn \times n(n+m)}$. Hence, the optimization problem (30) can be written in standard quadratic form as

$$\min \; \phi^2(Z) = \frac{1}{2} \phi^T(Z)\, W\, \phi(Z) \qquad (46)$$
$$\text{s.t.} \quad Hz = 0 \qquad (47)$$

where $\phi(Z) = (\lambda_{max}(Z^T Z)/\lambda_{min}(Z^T Z))^{1/2}$, $W = 2$ represents an appropriate weighting matrix, $z = [z_1^T, z_2^T, \ldots, z_n^T, g_1^T, \ldots, g_m^T]^T$, $z_i$ is the $i$-th column of $Z$, and $g_i$ is the $i$-th column of $G$. Notice that $Z^T Z$ is symmetric and positive definite. For the constraint (47), the corresponding dual decision vector is defined as $\zeta_2 \in \mathbb{R}^{n(n+m)}$, and the primal-dual decision vector is $\omega_2 = [\phi(Z), \zeta_2^T]^T \in \mathbb{R}^{n(n+m)+1}$. Owing to its pseudo-convexity, it is suitable to apply the following piecewise linear equation for solving (46):

$$\dot{\omega}_2 = \gamma_2 (I + M_2^T)\{P_{\Omega}(\omega_2 - (M_2 \omega_2 + p_2)) - \omega_2\} \qquad (48)$$

where the coefficient matrix $M_2 = \begin{bmatrix} W & -H^T \\ H & 0 \end{bmatrix} \in \mathbb{R}^{(n(n+m)+1) \times (n(n+m)+1)}$, the vector $p_2 = 0$, $\gamma_2$ is a positive proportional constant, and $P_{\Omega}$ is the discontinuous vector projection function whose components have been defined in (43).
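The vectorization step behind (45) can be checked numerically: the sketch below assembles $H$ with Kronecker products and verifies that $H[\mathrm{vec}(Z); \mathrm{vec}(G)]$ equals $\mathrm{vec}(AZ - Z\Lambda + BG)$ for made-up matrices (column-major vec assumed; all data are illustrative).

```python
import numpy as np

def build_H(A, B, Lam):
    """H = [kron(I_n, A) - kron(Lam.T, I_n) | kron(I_n, B)] as in (45), so that
    H [vec(Z); vec(G)] = vec(AZ - Z Lam + B G) with column-major vec()."""
    n = A.shape[0]
    return np.hstack([np.kron(np.eye(n), A) - np.kron(Lam.T, np.eye(n)),
                      np.kron(np.eye(n), B)])

# Illustrative matrices (not the paper's data), just to check the identity.
A = np.array([[1.0, 0.06], [-0.06, 1.0]])
B = np.array([[0.0], [-0.4]])
Lam = np.diag([0.5, 0.6])
Z = np.array([[1.0, 0.2], [0.1, 1.0]])
G = np.array([[0.3, -0.7]])

z = np.concatenate([Z.flatten(order="F"), G.flatten(order="F")])
lhs = build_H(A, B, Lam) @ z
rhs = (A @ Z - Z @ Lam + B @ G).flatten(order="F")
print(np.allclose(lhs, rhs))   # True
```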
C. Third PDNN Optimization

To obtain the eigenvalues and eigenvectors of the real symmetric matrix $Z^T Z$, assume that $u$ is an eigenvector of $Z^T Z$ with corresponding eigenvalue $\lambda$. Then

$$Z^T Z u = \lambda u \qquad (49)$$

Multiplying both sides of (49) by $u^T$ gives

$$\lambda u^T u = u^T Z^T Z u \qquad (50)$$

Further, since $\lambda = u^T u$ and $Z^T Z$ is symmetric and positive definite, we obtain the final objective function:

$$\lambda^2 = u^T Z^T Z u \qquad (51)$$

The eigenvalue problem for the matrix $A = Z^T Z$ can be rewritten as the following optimization problem:

$$\min \; \frac{1}{2} u^T W_3 u \qquad (52)$$
$$\text{subject to} \quad u_{min} \le u \le u_{max}, \quad E_3 u \le b_3 \qquad (53)$$

where $W_3 = 2A \in \mathbb{R}^{n \times n}$, $E_3 = [-I, I]^T \in \mathbb{R}^{2n \times n}$, and $b_3 = [-u_{min}, u_{max}]^T \in \mathbb{R}^{2n}$. According to [27], if $A = Z^T Z$, the equilibrium vector $u_{min}$ is the eigenvector of $A$ associated with its smallest eigenvalue, and correspondingly the smallest eigenvalue is $\lambda_{min} = u_{min}^T u_{min}$; if $A = Z^T Z - \lambda_{min} I$, the equilibrium vector $u_{max}$ is the eigenvector associated with the largest eigenvalue, and correspondingly the largest eigenvalue is $\lambda_{max} = u_{max}^T u_{max} + \lambda_{min}$.

For the constraints (53), the corresponding dual decision vector is defined as $\zeta_3 \in \mathbb{R}^{n}$, and the primal-dual decision vector is $\omega_3 = [u^T, \zeta_3^T]^T \in \mathbb{R}^{2n}$. The neurodynamic optimization model for solving (52) can be defined as

$$\dot{\omega}_3 = \gamma_3 (I + M_3^T)\{P_{\Omega}(\omega_3 - (M_3 \omega_3 + p_3)) - \omega_3\} \qquad (54)$$

where $M_3 = \begin{bmatrix} W_3 & -E_3^T \\ E_3 & 0 \end{bmatrix}$ and $p_3 = \begin{bmatrix} 0 \\ -b_3 \end{bmatrix}$.

From Fig. 3, the proposed robust MPC scheme is based on one LVI-PDNN and two interactive PDNNs. In every sampling period, the nominal MPC problem, transformed into a QP problem, is computed by the neural network (44), and the ancillary feedback control gain $K$ is obtained by combining the neural networks (48) and (54) to solve the robust pole assignment problem.
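The algebra that the combined networks exploit can be illustrated directly: any pair $(Z, G)$ satisfying $Z\Lambda - AZ = BG$ with $Z$ nonsingular yields a gain $K = GZ^{-1}$ that places the eigenvalues of $A + BK$ at $\mathrm{diag}(\Lambda)$, while (30) additionally seeks the $G$ that best conditions $Z$. The sketch below solves the constraint for an arbitrary (non-optimized) $G$ by ordinary linear algebra; it is not the paper's neurodynamic solver, and all matrices are illustrative.

```python
import numpy as np

def solve_sylvester_for_Z(A, B, Lam, G):
    """Solve AZ - Z Lam = -B G for Z via the Kronecker form used in (45)."""
    n = A.shape[0]
    S = np.kron(np.eye(n), A) - np.kron(Lam.T, np.eye(n))
    vecZ = np.linalg.solve(S, -(np.kron(np.eye(n), B) @ G.flatten(order="F")))
    return vecZ.reshape(n, n, order="F")

# Illustrative data (not the paper's). An arbitrary, non-optimized choice of G
# still assigns the poles, but typically yields a poorly conditioned Z,
# which is exactly what the objective of (30) penalizes.
A = np.array([[1.0, 0.06], [-0.06, 1.0]])
B = np.array([[0.0], [-0.4]])
Lam = np.diag([0.5, 0.6])
G = np.array([[1.0, 1.0]])

Z = solve_sylvester_for_Z(A, B, Lam, G)
K = G @ np.linalg.inv(Z)
eigZ = np.linalg.eigvalsh(Z.T @ Z)
print(np.sort(np.linalg.eigvals(A + B @ K)).real)   # approx. [0.5, 0.6]
print(np.sqrt(eigZ.max() / eigZ.min()))             # phi_2(Z) for this naive G
```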
V. EXPERIMENT VERIFICATION

A. Robotic System Description

The proposed tube-based MPC method has been implemented on the developed differential-drive robot shown in Fig. 5. The robot has two differential driving wheels, two support wheels for balance, a 24-V battery, and a Microsoft Kinect camera; although the Kinect is an RGB-D camera, only the RGB camera is used because the visual stabilization of the robot is based on image-based visual servoing (IBVS). The wheels are mounted on a chassis of length 45 cm and have a radius of 6.4 cm. The wheels are driven by motors with a rated torque of 72.1 mNm/A at 5200 rpm.
Fig. 5. The developed mobile robot used in the experiments and the different experimental road surface conditions: a) smooth ceramic tile ground; b) rough concrete ground; c) epoxy resin ground; d) flat marble floor.

Fig. 8. Experimental environment: rough concrete floor, λ = -0.65, α = 1.05. (Shown: the evolution of the pixel coordinates r and c, the control inputs wc and vc of the camera, the system state errors e1 and e2, and the rotation angle of the camera.)
To control the motors accurately, each motor is equipped with a reduction gear with a ratio of 85.33 and an incremental encoder counting 1024 pulses per turn. The commands to the wheels are sent mainly over a CAN bus through Elmo drives.

B. Experiment Results

In each experiment, the parameters of the proposed model predictive control are chosen as R1 = 0.1I, Q1 = 0.1I, R2 = 100I, Q2 = 10I, Nu = 2, N = 3. We choose the sampling time as T = 0.2 s according to the program running time.
Fig. 6. The different results of the four experiments.

Fig. 7. Experimental environment: epoxy resin ground, λ = 0.66, α = 0.60.

Fig. 9. Experimental environment: flat marble floor, λ = -0.3, α = 0.5.

Fig. 10. Experimental environment: smooth tile floor, λ = 0.47, α = 0.95.

(Figs. 7-10 each show the evolution of the pixel coordinates r and c, the control inputs wc and vc of the camera, the system state errors e1 and e2, and the rotation angle of the camera.)
In each experiment, the parameters λ and α of the exponential decay term need to be adjusted whenever the initial state or target state changes, until the state error converges to zero. The boundaries of the inputs are chosen as $u_{1max} = [0.5, \ldots, 0.5]^T$, $u_{1min} = -u_{1max}$, $u_{2max} = [1, \ldots, 1]^T$, $u_{2min} = -u_{2max}$, $\Delta u_{1max} = [0.5, \ldots, 0.5]^T$, $\Delta u_{1min} = -\Delta u_{1max}$, $\Delta u_{2max} = [1, \ldots, 1]^T$, $\Delta u_{2min} = -\Delta u_{2max}$. The boundaries of the state variables are $[e_{0max}, e_{max}]^T = [1, 6, 6, \ldots, 1, 6, 6] \in \mathbb{R}^{3N}$ and $[e_{0min}, e_{min}]^T = [-1, -6, -6, \ldots, -1, -6, -6] \in \mathbb{R}^{3N}$.

Since the road surface condition affects the dynamic conditions, we conducted comparative experiments on different road surfaces, namely rough concrete ground, smooth ceramic tile ground, epoxy resin ground, and flat marble ground, as shown in Fig. 5. In each experiment the initial states are different, but the initial input vector of the robot is the same; we set the initial input vector as $[w_r(0), v_r(0)]^T = [0, 0]^T$.
Figs. 7-10 show the four experimental results, which include the evolution of the pixel coordinates, the control inputs, the state errors, and the rotation angle of the camera. In each experiment, the control parameters were chosen appropriately for the individual case, i.e., for the different dynamic conditions and initial states. From these figures we can see that the error state e2 of the uncertain system tracks the error state of the nominal system well and, correspondingly, the pixel coordinate c also shows good tracking performance. The error state e1 of the uncertain system does not track the nominal error state as well, owing to the robot's speed limitation and the disturbance, but the errors of the uncertain system eventually converge to those of the nominal system, which shows that the feedback control law can force the actual system states to track the nominal system states in the presence of disturbances. Moreover, the greater the friction coefficient of the ground, the greater the impact of the disturbance on the robot, resulting in a larger oscillation amplitude. Nevertheless, the state errors of the actual system approach zero gradually at an exponential convergence rate, so that the current image approaches the desired one, which indicates that the nominal MPC can drive the nominal system states to the desired values. Finally, Fig. 6 compares the results of the four experiments according to the rolling friction coefficients of the four ground materials. It is clear that the larger the friction coefficient, the longer the response time of the system and the greater the oscillation amplitude of the linear velocity of the robot.
VI. CONCLUSIONS

In this paper, a robust MPC method has been proposed for vision-based mobile robots. Firstly, the kinematics is transformed into a chained form and partitioned into two subsystems. Then, a quadratic programming (QP) optimization problem is formulated within the MPC framework and computed by an LVI-PDNN. Moreover, the gain scheduling of the ancillary state feedback is obtained by solving a robust pole assignment problem using LVI-PDNNs. Finally, extensive experimental results are provided to verify the effectiveness of the proposed tube-based MPC approach on the actual mobile robotic system.
REFERENCES

[1] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," IEEE Trans. Robot. Autom., vol. 12, no. 5, pp. 651-670, Oct. 1996.
[2] F. Chaumette and S. Hutchinson, "Visual servo control part I: basic approaches," IEEE Robot. Autom. Mag., vol. 13, no. 4, pp. 82-90, Dec. 2006.
[3] F. Chaumette and S. Hutchinson, "Visual servo control part II: advanced approaches," IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 109-118, Mar. 2007.
[4] X. Zhang, Y. Fang, and X. Liu, "Motion-estimation-based visual servoing of nonholonomic mobile robots," IEEE Trans. Robot., vol. 27, no. 6, pp. 1167-1175, Dec. 2011.
[5] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 31, no. 3, pp. 341-352, Jun. 2001.
[6] Z.-Y. Liang and C.-L. Wang, "Robust stabilization of nonholonomic chained form systems with uncertainties," Acta Automat. Sinica, vol. 37, no. 2, pp. 129-142, 2011.
[7] Z. Zhang, R. Weiss, and A. R. Hanson, "Visual servoing control of autonomous robot calibration and navigation," J. Robot. Syst., vol. 16, no. 6, pp. 313-328, 1999.
[8] A. Remazeilles and F. Chaumette, "Image-based robot navigation from an image memory," Robot. Auton. Syst., vol. 55, no. 4, pp. 345-356, 2007.
[9] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. V. Gool, "Vision based intelligent wheel chair control: the role of vision and inertia sensing in topological navigation," J. Robot. Syst., vol. 21, no. 2, pp. 85-94, 2004.
[10] G. L. Mariottini and D. Prattichizzo, "Image-based visual servoing with central catadioptric cameras," Int. J. Robot. Res., vol. 27, no. 1, pp. 41-56, 2008.
[11] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Trans. Robot. Autom., vol. 16, no. 6, pp. 890-898, Dec. 2000.
[12] R. Kelly, E. Bugarin, and V. Sanchez, "Image-based visual control of nonholonomic mobile robots via velocity fields: case of partially calibrated inclined camera," in Proc. 45th IEEE Conf. Decis. Control, San Diego, CA, Dec. 2006, pp. 3071-3076.
[13] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," Int. J. Control, Autom. Syst., vol. 10, no. 2, pp. 421-429, 2012.
[14] Z. Li, C. Yang, C.-Y. Su, J. Deng, and W. Zhang, "Vision-based model predictive control for steering of a nonholonomic mobile robot," IEEE Trans. Control Syst. Technol., vol. 24, no. 2, pp. 553-564, Mar. 2016.
[15] A. T. Hafez, A. J. Marasco, S. N. Givigi, M. Iskandarani, S. Yousefi, and C. A. Rabbath, "Solving multi-UAV dynamic encirclement via model predictive control," IEEE Trans. Control Syst. Technol., vol. 23, no. 6, pp. 2251-2265, 2015.
[16] C. M. Best, M. T. Gillespie, P. Hyatt, L. Rupert, V. Sherrod, and M. D. Killpack, "A new soft robot control method: using model predictive control for a pneumatically actuated humanoid," IEEE Robot. Autom. Mag., vol. 23, no. 3, pp. 75-84, 2016.
[17] P. Scokaert, J. Rawlings, and E. Meadows, "Discrete-time stability with perturbations: application to model predictive control," Automatica, vol. 33, no. 3, pp. 463-470, 1997.
[18] W. Langson, I. Chryssochoos, S. Rakovic, and D. Mayne, "Robust model predictive control using tubes," Automatica, vol. 40, no. 1, pp. 125-133, 2004.
[19] C. J. Ostafew, A. P. Schoellig, and T. D. Barfoot, "Learning-based nonlinear model predictive control to improve vision-based mobile robot path-tracking in challenging outdoor environments," in Proc. 2014 IEEE Int. Conf. Robot. Autom. (ICRA), 2014, pp. 4029-4036.
[20] G. Torrisi, S. Grammatico, R. S. Smith, and M. Morari, "A variant to sequential quadratic programming for nonlinear model predictive control," in Proc. IEEE Conf. Decis. Control, 2016.
[21] Z. Li, J. Deng, R. Lu, Y. Xu, J. Bai, and C.-Y. Su, "Trajectory-tracking control of mobile robot systems incorporating neural-dynamic optimized model predictive approach," IEEE Trans. Syst., Man, Cybern.: Syst., vol. 46, no. 6, pp. 740-749, 2016.
[22] Y. Wang, H. Lang, and C. W. Silva, "A hybrid visual servo controller for robust grasping by wheeled mobile robots," IEEE/ASME Trans. Mechatronics, vol. 15, no. 5, pp. 2323-2334, Oct. 2010.
[23] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. New York: Wiley, 2006.
[24] D. Mayne, E. Kerrigan, E. van Wyk, and P. Falugi, "Tube-based robust nonlinear model predictive control," Int. J. Robust Nonlinear Control, vol. 21, pp. 1341-1353, 2011.
[25] X. Le and J. Wang, "Robust pole assignment for synthesizing feedback control systems using recurrent neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 2, pp. 383-393, Feb. 2014.
[26] Y. Xia and J. Wang, "A recurrent neural network for nonlinear convex optimization subject to nonlinear inequality constraints," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 7, pp. 1385-1394, Jul. 2004.
[27] Y. Liu, Z. You, and L. Cao, "A simple functional neural network for computing the largest and smallest eigenvalues and corresponding eigenvectors of a real symmetric matrix," Neurocomputing, vol. 67, pp. 369-383, Aug. 2005.
Fan Ke received the B.Eng. degree from the College of Automation Science and Engineering, Lanzhou University of Technology, Lanzhou, China, in 2015. He is currently working toward the Master's degree in the College of Automation Science and Engineering, South China University of Technology. His research interests are model predictive control, mobile robots, neural network control, and optimization.
Zhijun Li (M'07-SM'09) received the Ph.D. degree in mechatronics from Shanghai Jiao Tong University, P. R. China, in 2002. From 2003 to 2005, he was a postdoctoral fellow in the Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications, Tokyo, Japan. From 2005 to 2006, he was a research fellow in the Department of Electrical and Computer Engineering, National University of Singapore, and Nanyang Technological University, Singapore. Since 2012, he has been a Professor in the College of Automation Science and Engineering, South China University of Technology, Guangzhou, China.
Since 2016, he has been Co-Chair of the Technical Committee on Biomechatronics and Biorobotics Systems (B2S), IEEE Systems, Man and Cybernetics Society, and of the Technical Committee on Neuro-Robotics Systems, IEEE Robotics and Automation Society. He serves as an Editor-at-Large of the Journal of Intelligent & Robotic Systems and as an Associate Editor of several IEEE Transactions. He was the General Chair and Program Chair of the 2016 and 2017 IEEE Conference on Advanced Robotics and Mechatronics (IEEE-ARM), respectively. Dr. Li's current research interests include service robotics, tele-operation systems, nonlinear control, and neural network optimization.
Chenguang Yang (M'10-SM'16) received the B.Eng. degree in measurement and control from Northwestern Polytechnical University, Xi'an, China, in 2005, and the Ph.D. degree in control engineering from the National University of Singapore, Singapore, in 2010. He is a Senior Lecturer with the Zienkiewicz Centre for Computational Engineering, Swansea University, UK. He received postdoctoral training in the Department of Bioengineering at Imperial College London, UK. His major research interests lie in robotics, automation, and computational intelligence.