Accepted Manuscript
Finite-time stability and finite-time synchronization of neural network dual approach
Anna Michalak, Andrzej Nowakowski
PII: S0016-0032(17)30527-6
DOI: 10.1016/j.jfranklin.2017.08.054
Reference: FI 3179
To appear in: Journal of the Franklin Institute
Received date: 13 March 2017
Revised date: 20 June 2017
Accepted date: 3 August 2017
Please cite this article as: Anna Michalak, Andrzej Nowakowski, Finite-time stability and finite-time
synchronization of neural network - dual approach, Journal of the Franklin Institute (2017), doi:
10.1016/j.jfranklin.2017.08.054
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
Finite-time stability and finite-time synchronization of neural network - dual approach

Anna Michalak^a,∗, Andrzej Nowakowski^b

^a Department of Econometrics, Faculty of Economics and Sociology, University of Lódź, Rewolucji 1905 r. No. 41, 90-214 Lódź, Poland
^b Faculty of Mathematics and Computer Science, University of Lódź, S. Banacha No. 22, 90-238 Lódź, Poland
Abstract

In this paper we develop new dual tools to study finite-time stability and finite-time synchronization for nonlinear neural networks whose defining functions depend on time. First we study finite-time stability of Hopfield neural networks in which the activation functions and the strength functions depend on time. Next we deal with a nonlinearly coupled network containing a nonlinear coupling function, and we prove, using duality tools alone, that it synchronizes in finite time with the reference trajectory. We do not use any controller, as is done in most papers on synchronization. We simply build a dual Lyapunov function in a dual space, and then in that dual space we study stability and synchronization in a way similar to classical finite-time stability. However, the dual Lyapunov function is quite different from the classical one, and therefore the obtained results are both different and new.
Keywords: stability, dual Lyapunov stability, finite-time stability, dual finite-time stability, neural network, finite-time synchronization
1. Introduction
The object of this paper is to provide a new method to study the stability in finite time of Hopfield neural networks of the type

x′_i(t) = −∑_{j=1}^{n} a_{ij} x_j(t) + ∑_{j=1}^{n} b_{ij}(t) g_j(t, x_j(t)) + I_i(t),   i = 1, …, n,   (1)
in which n is the number of units in a neural network, xi (t) is the state of the ith
unit at the time t, aij represents the rate with which the ith unit will reset its
potential to the resting state in isolation when disconnected from the network
∗ Corresponding author
Email addresses: anna.michalak@uni.lodz.pl (Anna Michalak),
annowako@math.uni.lodz.pl (Andrzej Nowakowski)
Preprint submitted to Elsevier
October 20, 2017
and external inputs; g_j(t, x_j) denotes the conversion of the membrane potential of the jth unit into its firing rate; b_{ij}(t) denotes the strength of the jth unit on the ith unit at time t; and I_i(t) denotes the external bias on the ith unit at time t. We write g_j as dependent on t, even though we already use b_{ij}(t), because we want to allow for a more general network. We will also consider a nonlinear coupled network with N nodes

x′_i(t) = h(t, x_i(t)) + k(t) ∑_{j=1}^{N} d_{ij}(t) Φ(x_j(t), x_i(t)),   i = 1, …, N,   (2)
where x_i = [x^1_i, …, x^n_i] ∈ R^n stands for the state vector of node i, Φ(x_j, x_i) is the nonlinear coupling function, h : [0, ∞) × R^n → R^n is a Carathéodory map, and k : [0, ∞) → R denotes the coupling strength. This type of neural network is applied to signal and image processing, pattern recognition and optimization, as well as to associative memory. The simplest and most striking interaction between dynamical systems is their synchronization. Individual dynamical systems with quite different trajectories can be brought to follow exactly or approximately the same trajectory through interaction in a network. Stable synchrony can be achieved in many different types of asymmetrically or symmetrically connected oscillator networks, as long as the coupling between the nodes is strong enough. Since neural network systems may also exhibit oscillatory or chaotic behavior, synchronization of coupled neural networks (see [7], [6], [5]) has attracted tremendous attention from researchers in a wide range of disciplines. Synchronization (drive-response) was first proposed in [23]. A chaotic system, called the driver (or master), generates a signal sent over a channel to a responder (or slave), which uses this signal to synchronize itself with the driver. Aihara [1] first introduced chaotic neural networks to simulate the chaotic behavior of biological neurons. Synchronization of chaotic neural networks has attracted considerable attention due to its successful applications in combinatorial optimization, associative memory, secure communication [22] and pattern recognition [9]. The study of synchronization of coupled neural networks is also an important step both for understanding brain science and for designing coupled neural networks for real-world applications. This is why such networks have been the object of intensive study by many authors in recent times (see [15], [17], [18], [10], [30], [13], [29], [27], [28] and references therein). In order to obtain synchronization results for neural networks in finite time, those authors developed a proper Lyapunov–Krasovskii functional, then employed the properties of the sign function and proposed a class of discontinuous control laws. The resulting controllers are usually very simple and can be easily implemented in practical applications. Recently, in [14], nonfragile exponential synchronization for complex dynamical networks with time-varying coupling delay was investigated. By constructing a suitable augmented Lyapunov function, a sufficient condition in the form of sampled-data synchronization criteria is developed. The finite-time cluster synchronization of nonlinearly coupled complex networks consisting of discontinuous Lur'e systems is studied in [25]. By designing a finite-time pinning controller, some sufficient conditions are obtained for cluster synchronization of Lur'e networks. The settling time for achieving the cluster synchronization is estimated by applying finite-time stability theory.
To study (1) we start with a general differential equation

x′(t) = f(t, x(t)).   (3)

It is interesting that classical optimal control theory provides several examples of systems that exhibit convergence to the equilibrium in finite time [24]. A well-known example is the double integrator with a bang-bang time-optimal feedback control [2]. These examples typically involve discontinuous dynamics. This is why we allow f to depend on time in a merely measurable way.
One of the main areas of application of neural networks is associative memory and signal and image processing, where the capacity of a neural network for pattern recognition is particularly useful. We are interested in finite-time recognition, which means that the neural network can handle and solve the problem within a finite time horizon. So it is obvious that we need to know not only that the neural network is stable; we need to know much more, because we are also interested in finite-time stability. There is not much knowledge about this issue in the literature. Some general results on finite-time stability for differential equations or inclusions can be found in [3], [20]; however, applications of these results to neural networks of type (1) or (2) meet some difficulties. This is why the authors of the above-mentioned papers develop their own methodology for each class of neural networks to achieve synchronization of neural networks.
We follow the idea of stability in this paper. However, we present an entirely new methodology, a dual approach to finite-time stability:

• We define a dual space and develop dual notions in it; in fact, we continue the dual approach to Lyapunov stability of first-order differential equations introduced in [21].

• The idea of the dual approach is that we move the investigation of stability of equation (3) to the so-called dual space P.

• In the dual space we define a dual Lyapunov function.

• The dual Lyapunov function has a different form from all classical ones.

• In some cases the dual Lyapunov function makes it easier to establish finite-time stability (see Examples 1 and 2).
We show that under some weakened restrictions on b_{ij}, g_i and h, we can successfully apply our dual approach to the investigation of local finite-time stability of the networks described by (1) or (2). The advantage of this approach is that the dual Lyapunov function seems easier to handle and turns out to be more regular, even though the data of the differential equation are less regular. What is more, we show (in the Examples section) that a very classical Hopfield neural network from [8] can be treated with the presented dual method. A disadvantage of our method is that, to apply it to a concrete neural network, we first have to decide in which spaces we want to work; but then everything proceeds automatically (see the proofs of the main theorems). We would like to underline that in all the papers cited above, the methods used to study finite-time stability apply a kind of Lyapunov function in a direct way. Even in the very recent paper [18], a quadratic Lyapunov function is used to obtain finite- and fixed-time synchronization. We should stress once more that we investigate nonautonomous differential equations, while most of the cited papers study only the autonomous case. Lastly, we would like to say a few words about the paper [20], which also treats the problem of finite-time stability for nonautonomous general ordinary differential equations. The approach presented there is of direct type, i.e. a classical Lyapunov-like function is applied. The main advantage of the methodology developed in that paper is that the Lyapunov function is allowed to be nondifferentiable. Instead of differentiability, a kind of contingent derivative and a generalized Fréchet derivative are used. This allows the assumptions on the Lyapunov function to be weakened; however, they are difficult to apply to neural networks of type (1).

2. Preliminaries and duality
We start with the investigation of the finite-time stability of the equation

x′ = f(t, x)   (4)

and next we apply the obtained results to the neural network described by (1). In the general case (given by equation (4)) we assume that:

• the function f : [0, +∞) × X → R^n is Lebesgue measurable with respect to the first variable and continuous with respect to the second variable; X is an open domain of R^n such that 0 ∈ X;

• f(t, 0) = 0 for t ≥ 0;

• there exists an ascending family of compact sets Q_k ⊂ X with non-empty interior such that 0 ∈ int Q_1, ⋃_{k∈N} int Q_k = X, and there exist m_k ∈ L^∞([0, ∞)), k ∈ N, such that for almost all t ≥ 0 and all x ∈ Q_k, f satisfies |f(t, x)| < m_k(t).
By L^∞([0, ∞)) we denote the set of all essentially bounded measurable functions, and by B(0, 1) an open unit ball with center 0.
Our aim is to study the finite-time stability of equation (4) at the origin. Let us observe that if one needs to investigate stability at a different point, e.g. ȳ, one can simply transform f(t, x) by putting g(t, y) = f(t, y − ȳ(t)) + ȳ′(t) and then investigate the stability of the equation y′ = g(t, y) at the origin.
According to the paper [21], let us consider an open set P in R^n containing 0, called the dual space, and a function W : [0, +∞) × P → R of the variables (t, y) such that

W ∈ C¹([0, +∞) × P).   (5)

By ′ we denote a derivative with respect to the variable t, by W_t the derivative with respect to the first variable of W, and by W_y the derivative of W with respect to the second variable.
Assume that W satisfies:

• for all t ≥ 0, W_y(t, 0) = 0,   (6)

• W_y(·, ·) is locally Lipschitz and W_yy(·, ·) exists in [0, +∞) × P,   (7)

• for all t ≥ 0, W(t, 0) = 0 and −W_y(t, P) ⊂ P,   (8)

• −W_y(t, P) ⊂ X.   (9)
Proving the stability theorems, we will show that assumptions (8) and (9) are not difficult to verify, as the function W is very flexible. By AC(J, R^n) we denote the collection of all absolutely continuous functions from J into R^n, where J is an interval in R.
We denote P = {y ∈ AC(I_y, R^n) : I_y an interval in [0, +∞), y(t) ∈ P for t ∈ I_y, y satisfies equality (10)}, where

(d/dt) W_y(t, y(t)) = −f(t, −W_y(t, y(t)))   a.e. in I_y.   (10)
The elements of the set P will be called solutions of (10). Remembering that f(t, 0) = 0 for t ≥ 0 and (6), we observe that y(t) ≡ 0 is a solution of (10), i.e. the set P is not empty because 0 ∈ P.
Now we introduce the dual Lyapunov function, denoted by T. Let us define the function T : [0, +∞) × P → R as follows (see [21]):

T(t, y) = W(t, y) − y W_y(t, y).   (11)
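To make (11) concrete, here is a small symbolic check (a sketch in Python/SymPy, using the power-type candidate W(t, y) = C(t)(|y|^β + |y|^{αβ}) that appears later in Theorem 15, restricted to the scalar case y > 0; the constant C0 is dropped since it does not affect the identity). The computation confirms that T = W − y W_y collapses to C(t)((1 − β)y^β + (1 − αβ)y^{αβ}), the expression used in the proofs.

```python
import sympy as sp

t, y = sp.symbols('t y', positive=True)
beta, alpha = sp.symbols('beta alpha', positive=True)
C = sp.Function('C')(t)  # unspecified time-dependent coefficient

# power-type dual function (scalar case, y > 0)
W = C * (y**beta + y**(alpha*beta))
Wy = sp.diff(W, y)

# dual Lyapunov function (11): T = W - y*W_y
T = sp.simplify(W - y*Wy)
expected = C * ((1 - beta)*y**beta + (1 - alpha*beta)*y**(alpha*beta))
assert sp.simplify(sp.expand(T - expected)) == 0
```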
Remark 1. Note that the dual Lyapunov function (11) differs from all those existing in the literature except the one in [21]. It consists of two parts, W and y W_y, and exactly this form makes it more flexible in applications.

Definition 2. We say that (4) is stable at the origin if for any t0 ≥ 0 and ε > 0 there exists δ > 0 such that any solution of (4) satisfying |x(t0)| < δ is extendable to [t0, +∞) and |x(t)| < ε for t ≥ t0.
We denote by

X = {x ∈ AC(I_x, R^n) : I_x ⊂ [0, +∞), x(t) ∈ X, x′(t) = f(t, x(t)) for t ∈ I_x, x is maximal}

and by

X∞ = {x ∈ AC(I_x, R^n) : I_x ⊂ (0, +∞), x(t) ∈ X for t ∈ I_x, x′(t) = f(t, x(t)) for a.a. t ∈ I_x}.

It is obvious that X∞ ⊂ X.
Notation 3. For any t0 ≥ 0, x0 ∈ X \ {0} and φ ∈ X such that φ(t0) = x0, we denote by c_φ(t0, x0) a finite number (if it exists) such that:

• φ(t) ∈ X for t ∈ (t0, c_φ(t0, x0)),
• lim_{t→c_φ(t0,x0)−} φ(t) = 0.

Let

τ_φ(t0, x0) = c_φ(t0, x0) if it exists, and τ_φ(t0, x0) = +∞ otherwise.   (12)
Now we are ready to introduce a settling-time function (see [20]), which we will use in the definition of finite-time stability.

Definition 4. A mapping S : [0, +∞) × X → R ∪ {+∞} will be called a settling-time function if

1. S(t0, 0) = t0 for any t0 ≥ 0,
2. for any t0 ≥ 0 and x0 ∈ X \ {0}, S(t0, x0) = sup{τ_φ(t0, x0) : φ ∈ X such that φ(t0) = x0}.
Definition 5. We say that (4) is finite-time stable at the origin if it is stable at the origin and for each t0 ≥ 0 there exists δ > 0 such that for any solution of (4) satisfying |x(t0)| < δ, S(t0, x(t0)) is finite.

We also take advantage of the following definitions.

Definition 6. A real map ζ : [0, +∞) → [0, +∞) is said to belong to class K1 if it is increasing and ζ(0) = 0.

Definition 7. Let M be the class of measurable, bounded and nonnegative functions c : [0, +∞) → [0, +∞) such that there exists t_c ≥ 0 with ∫_{t_c}^{+∞} c(τ) dτ = +∞.
We assume that the dual Lyapunov function T satisfies the condition

T(t, y) ≥ a_1(|y|)   (13)

for all t ≥ 0, all y ∈ P and some a_1 ∈ K1.
In order to link the primary space X with the dual space P we introduce the following operator (the so-called transition operator):

x = −W_y(t, y) for all t ≥ 0 and y ∈ P.   (14)

From assumption (9) we have that if y ∈ P and x = −W_y(t, y), then x ∈ X. If y ∈ P is a solution of (10), then the function x(t) = −W_y(t, y(t)), t ∈ I_y, is a solution of (4). We then say that x corresponds to y.
Now we have to make the following assumptions.

Assumption 8. Let W : [0, +∞) × P → R satisfy the following: for each ε > 0 and each t0 ≥ 0 there exists ε1 > 0 such that for all t ≥ t0 and y ∈ P, if |y| < ε1, then |W_y(t, y)| < ε.

Assumption 9. Let W : [0, +∞) × P → R satisfy the following: for each x ∈ X there exists y ∈ P such that I_x ⊂ I_y and for each t ∈ I_x we have x(t) = −W_y(t, y(t)).

Assumption 10. Let W : [0, +∞) × P → R satisfy the following: for all t0 ≥ 0 and δ1 > 0 there exists δ > 0 such that for each x ∈ X, if |x(t0)| < δ, then there exists y ∈ P such that |y(t0)| < δ1.
Let us also recall Proposition 2 from [21], which is crucial in proving stability.

Proposition 11. If W : [0, +∞) × P → R satisfies (5), (7), (8), (9) and for almost all t ≥ 0 and each z ∈ −W_y(t, P) the condition

(∂/∂t)(W(t, z) − z W_y(t, z)) − z W_yy(t, z) f(t, z) ≤ 0   (15)

holds, then for any y ∈ P the function

I_y ∋ t ↦ T(t, −W_y(t, y(t)))

is nonincreasing.
The next conclusion is immediate.

Proposition 12. If there exists a function W : [0, +∞) × P → R satisfying (5), (6), (7), (8), (9), Assumptions 8 and 10, (15), and T defined in (11) satisfies (13), then for arbitrary t0 ∈ [0, +∞) and arbitrary solution x ∈ X∞ such that x(s) = 0 for some s ∈ [t0, b) (b could be infinity), we have x(t) = 0 for all t ∈ [s, b).
Now we recall the Cauchy problem with initial condition and the comparison lemma, which are taken from [20]. This lemma will be important in the proof of Theorem 14.
Let c ∈ M, z ∈ R, t ∈ [0, +∞), σ ∈ (0, 1) and consider the following Cauchy problem (for details see [20]):

u̇ = −c(s) sign(u) |u|^σ,   s ∈ [t, +∞),
u(t) = z.   (16)

Note that the right-hand side is a measurable function with respect to s ∈ [0, +∞) and continuous with respect to u ∈ R.
Remark 13 (Comparison Lemma; for details see [20]). Notice that for any c ∈ M and t ≥ 0 the function C_t : s ↦ ∫_t^s c(τ) dτ, s ∈ [0, +∞), is nondecreasing and absolutely continuous, and that ∫_t^∞ c(τ) dτ = ∞. So, for any z ∈ R and σ ∈ (0, 1) there exists t̄ ≥ t such that

C_t(t̄) = ∫_t^t̄ c(τ) dτ = |z|^{1−σ}/(1 − σ).   (17)

Let

t_{c,z} = inf{ t̄ ≥ t : ∫_t^t̄ c(τ) dτ = |z|^{1−σ}/(1 − σ) }.   (18)

It is easy to verify (see [20]) that the following function is a solution of the Cauchy problem (16):

μ_{t,z}(s) = sign(z) (|z|^{1−σ} − (1 − σ) ∫_t^s c(τ) dτ)^{1/(1−σ)}   for s ∈ [t, t_{c,z}), z ≠ 0,
μ_{t,z}(s) = 0   for s ≥ t_{c,z}, z ≠ 0,
μ_{t,z}(s) = 0   for s ≥ t, z = 0.   (19)

Moreover, note that μ_{t,z}(s) ≠ 0 for t ≥ 0, s ∈ [t, t_{c,z}) and z ≠ 0.
Let us observe that the settling-time function for (16) is given by

S(t, z) = inf C_t^{−1}(|z|^{1−σ}/(1 − σ))

for z ∈ R, σ ∈ (0, 1) and C_t given by (17).
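The settling-time formula for (16) can be checked numerically. Below is a sketch (Python, forward Euler, with the simplifying assumptions of constant c(s) ≡ c0, start time t = 0 and z > 0; the step size and tolerance are ad hoc choices): the first time the approximate solution reaches 0 should agree with t_{c,z} = |z|^{1−σ}/((1 − σ)c0) from (18). For z = 1, σ = 0.5, c0 = 1 the exact solution (19) is u(s) = (1 − s/2)², which settles at s = 2.

```python
import math

def settle_time_euler(z, sigma, c0, dt=1e-5, t_max=10.0):
    """First time the Euler approximation of u' = -c0*sign(u)*|u|**sigma reaches 0."""
    u, s = z, 0.0
    while s < t_max:
        if u <= 0.0:          # assumes z > 0: first touch of 0 is the settling time
            return s
        u -= c0 * math.copysign(abs(u)**sigma, u) * dt
        s += dt
    return math.inf

z, sigma, c0 = 1.0, 0.5, 1.0
analytic = z**(1 - sigma) / ((1 - sigma) * c0)   # formula (18) with constant c
numeric = settle_time_euler(z, sigma, c0)
assert abs(numeric - analytic) < 0.05            # both close to 2.0 here
```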
3. Dual approach to finite-time stability
In this section we will prove the main theorem concerning local finite-time Lyapunov stability.
Theorem 14. If there exists a function W : [0, +∞) × P → R satisfying (5), (6), (7), (8), (9), (13), Assumptions 8, 9 and 10, and there exist c ∈ M and α ∈ (0, 1) such that for (t, z) ∈ [0, +∞) × (−W_y(t, P)) we have

(∂/∂t)(W(t, z) − z W_y(t, z)) − z W_yy(t, z) f(t, z) + c(t)(T(t, z))^α ≤ 0,   (20)

then (4) is finite-time stable at the origin. Moreover, the settling time for a trajectory starting at (t0, x0) near (t0, 0) is estimated by

S(t0, x0) ≤ t_{c,T(t0,−W_y(t,y0))},

where y0 is a point in the dual space such that x0 = −W_y(t0, y0).
Proof. Since (20) implies (15), we know that (4) is stable at the origin (see Theorem 4 of [21]). Take any x ∈ X such that |x(t)| < δ. According to Assumption 10 there exists y ∈ P satisfying Assumption 9 such that |x(t)| = |W_y(t, y(t))| < δ and for s ∈ [t, +∞) ∩ I_x we have x(s) = −W_y(s, y(s)). Let t0 ≥ 0 and ε > 0, and let S_ε be the sphere of B̄(0, ε), where B̄(0, ε) ⊂ P and B̄(0, ε) ⊂ X. Let

γ = inf_{t ≥ t0} inf_{z ∈ S_ε} T(t, z).

Then, according to Theorem 4 of [21], T(t, −W_y(t, y(t))) < γ. Let φ be an arbitrary solution of (4) belonging to X such that φ(t) = x. From Proposition 11 we know that the function T_φ : s ↦ T(s, φ(s)) is nonincreasing for s ∈ (t, b), b ∈ (t, +∞) ∪ {+∞}. So, for these s we have

T(s, φ(s)) = T_φ(s) ≤ T_φ(t) = T(t, x) < γ,

which means that |φ(s)| < ε for s ≥ t. From (20) we have that

(d/ds) T(s, φ(s)) ≤ −c(s)(T(s, φ(s)))^α   for s ∈ [t, +∞).   (21)

Let μ_{t,T(t,−W_y(t,y))}, given by formula (19), be the solution of (16) with initial condition μ_{t,T(t,−W_y(t,y))}(t) = T(t, −W_y(t, y)). Since

T_φ(t) = T(t, φ(t)) = T(t, −W_y(t, y)) = μ_{t,T(t,−W_y(t,y))}(t),

the comparison lemma (Remark 13) applied to (16), (21) and T_φ yields

T_φ(s) ≤ μ_{t,T(t,−W_y(t,y))}(s)   for s ∈ [t, +∞).   (22)

Because T is nonnegative, by formulae (19) and (22) it follows that T_φ(s) = 0 for s ∈ [t_{c,T(t,x)}, +∞), where t_{c,T(t,x)} is given by formula (18). Because of (13), for s ∈ [t_{c,T(t,x)}, +∞) we have φ(s) = 0. As a consequence, by the arbitrariness of the choice of φ, we get S(t, x) ≤ t_{c,T(t,x)} < ∞, and for a trajectory starting at (t0, x0) near (t0, 0)

S(t0, x0) ≤ t_{c,T(t0,−W_y(t,y0))},

where y0 is a point in the dual space such that x0 = −W_y(t0, y0).
4. Finite-time stability of neural network

Now we are ready to apply the finite-time stability theorem proved in the previous section to the investigation and prediction of the stability of the neural networks described by (1) and (2).
First, let us rewrite the network (1) in the following matrix form:

x′(t) = −A(t)x(t) + B(t)g(t, x(t)) + I(t),   (23)

where x = (x_1, …, x_n), A = (a_{ij}), B = (b_{ij}), g(t, x) = (g_1(t, x_1), …, g_n(t, x_n)), I = (I_1, …, I_n) and B(t)g(t, 0) = −I(t); g is a Carathéodory function, and A, B, I are measurable in t. We show that under some assumptions on A, B, g we can infer from Theorem 14 finite-time stability results for problem (23).
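A matrix form like (23) is straightforward to integrate numerically. The following is an illustrative sketch (Python/NumPy, forward Euler; the matrices A, B, the sigmoid g and the horizon below are toy choices of ours, not data from the paper), in which I(t) is chosen so that B(t)g(t, 0) = −I(t) holds, making the origin an equilibrium as required above:

```python
import numpy as np

def simulate(A, B, g, I, x0, t0=0.0, t1=5.0, dt=1e-3):
    """Forward-Euler integration of x'(t) = -A x + B(t) g(t, x) + I(t), cf. (23)."""
    x, t = np.asarray(x0, float).copy(), t0
    while t < t1:
        x += dt * (-A @ x + B(t) @ g(t, x) + I(t))
        t += dt
    return x

# Toy 2-unit instance (illustrative choices only):
A = np.diag([1.0, 1.0])
B = lambda t: np.exp(-2*t) * np.array([[0.5, -0.2], [0.1, 0.4]])
g = lambda t, x: np.tanh(x)
I = lambda t: -B(t) @ g(t, np.zeros(2))     # enforces B(t) g(t,0) = -I(t)
x = simulate(A, B, g, I, x0=[0.3, -0.2])
assert np.all(np.abs(x) < 0.1)              # trajectory has decayed toward the origin
```

Here tanh(0) = 0, so I(t) vanishes and the decay is driven by the −Ax term once the coupling e^{−2t}B has died out.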
Theorem 15. Assume that there exist numbers β ≥ 2 and α ∈ (0, 1) with αβ > 1, integrable functions b(·), d(·) : (0, ∞) → R_+ and a measurable function a(·) : (0, ∞) → R_+ such that

a(t)|z|² ≤ zA(t)z,
zB(t)g(t, z) ≤ d(t)|z|²,
I(t)z (β(β − 1)|z|^{β−2} + αβ(αβ − 1)|z|^{αβ−2}) ≤ b(t)|z|^{αβ}

for t ∈ (0, ∞), z ∈ B(0, 1). Then the neural network described by (1) is finite-time stable at the origin. Moreover, for a trajectory starting at (t0, x0) near (t0, 0) we have the following estimate of the settling time:

S(t0, x0) ≤ t_{c,T(t0,−W_y(t,y0))},

where y0 ∈ P is such that x0 = −W_y(t0, y0), with

W(t, y) = (1/C0) C(t) (|y|^β + |y|^{αβ}),

t ∈ (0, ∞), y ∈ P = B(0, 1), β ≥ 2, D > 0 sufficiently large,

C(t) = −exp(−Dβ ∫_0^t (d(s) + b(s)) ds),

C0 = sup{2β |C(t)| : t ∈ (0, ∞)},

and t_{c,T(t0,−W_y(t,y0))} is defined by

t_{c,T(t0,−W_y(t,y0))} = inf{ t ≥ t0 : ∫_{t0}^t c(τ) dτ = |T(t0, −W_y(t, y0))|^{1−α}/(1 − α) }.   (24)
Proof. For

W(t, z) = (1/C0) C(t) (|z|^β + |z|^{αβ}),   t ∈ (0, ∞), z ∈ B(0, 1), β ≥ 2,

let us take D > 0 sufficiently large, C(t) = −exp(−Dβ ∫_0^t (d(s) + b(s)) ds) and C0 = sup{2β |C(t)| : t ∈ (0, ∞)} < ∞. Then for P = B(0, 1), X = B(0, 1) all the assumptions on W are satisfied. We have to check that (20) is satisfied as well. To this end, let us use the notation and the assumptions of the theorem. Then, for c(t) = η 2^{−α} (C0)^α ((1 − β)C(t))^{−α}, where η > 0 is sufficiently small, we obtain the following estimates:

(∂/∂t)(W(t, z) − zW_y(t, z)) − zW_yy(t, z) f(t, z) + c(t)(T(t, z))^α
= (1/C0)((1 − β)C′(t)|z|^β + (1 − αβ)C′(t)|z|^{αβ})
− (1/C0) C(t) (β(β − 1)|z|^{β−2} z + αβ(αβ − 1)|z|^{αβ−2} z)(−A(t)z + B(t)g(t, z) + I(t))
+ c(t) ((1/C0) C(t) ((1 − β)|z|^β + (1 − αβ)|z|^{αβ}))^α   (25)
≤ (1/C0)|z|^β (1 − β)C′(t) + (1/C0)|z|^{αβ} (1 − αβ)C′(t)
+ (1/C0) a(t)C(t)(β(β − 1)|z|^β + αβ(αβ − 1)|z|^{αβ})
− (1/C0) C(t)(d(t)(β(β − 1)|z|^β + αβ(αβ − 1)|z|^{αβ}) + b(t)|z|^{αβ}) + η|z|^{αβ}
≤ (1/C0)(−C(t)) ( −Dβ(β − 1)(d(t) + b(t) + (1/D)a(t))|z|^β − Dβ(αβ − 1)(d(t) + b(t) + (α/D)a(t))|z|^{αβ}
+ d(t)β(β − 1)|z|^β + (d(t)αβ(αβ − 1) + b(t)(αβ − 1))|z|^{αβ} ) + η|z|^{αβ}.

Let us notice that in the last part of the above inequalities the first two summands are negative and the last three are positive but bounded. Hence we infer that for D sufficiently large, the right-hand side of (25) is less than zero. Thus, by Theorem 14 from Section 3, we get the first assertion of the theorem. The last assertion of the theorem is a direct consequence of the second part of Theorem 14.
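The decay mechanism in the proof rests on the identity C′(t) = −Dβ(d(t) + b(t))C(t), which produces the dominant negative terms in the estimate above. A quick symbolic confirmation (Python/SymPy sketch, with d and b left as unspecified functions and the constant C0 omitted since it cancels):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
D, beta = sp.symbols('D beta', positive=True)
d, b = sp.Function('d'), sp.Function('b')

# C(t) = -exp(-D*beta * int_0^t (d(s)+b(s)) ds) as in Theorem 15
C = -sp.exp(-D*beta*sp.Integral(d(s) + b(s), (s, 0, t)))

# verify the identity C'(t) = -D*beta*(d(t)+b(t))*C(t)
assert sp.simplify(C.diff(t) + D*beta*(d(t) + b(t))*C) == 0
```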
Now we show the stability of the nonlinear coupled network with N nodes. Recall problem (2):

x′_i(t) = h(t, x_i(t)) + k(t) ∑_{j=1}^{N} d_{ij}(t) Φ(x_j(t), x_i(t)),   i = 1, …, N,   (26)

where N is the number of nodes in the neural network. We assume that h(t, 0) = 0 for all t > 0 and Φ(0, 0) = 0, where h is a Carathéodory function, Φ is continuous, the coupling strength k > 0, and k, d_{ij} are measurable. We will again apply Theorem 14 to the above system to prove finite-time synchronization. We do it by proving that each subsystem x_i is finite-time stable to zero with the same settling time. This finally means that x_i = x_j = 0 for all i, j and t > T_1, for some T_1 fixed for all i, i.e. the coupled network is synchronized in finite time. Hence, if for the reference state we take

x*(t) = (1/N) ∑_{i=1}^{N} x_i(t),

then the coupled network (26) is synchronized in finite time to x* (see [19]).
Theorem 16. Assume that there exist numbers β ≥ 2 and α ∈ (0, 1) with αβ > 1 and functions a(·), b(·) ∈ L¹((0, ∞); R_+) with exp(−∫_0^t (a(s) + b(s)) ds) bounded on (0, ∞), such that z_i h(t, z_i) ≤ a(t)|z_i|² for each i = 1, …, N and

∑_{i=1}^{N} z_i k(t) ∑_{j=1}^{N} d_{ij}(t) Φ(z_i, z_j) ≤ ∑_{i=1}^{N} |z_i|² b(t)

for t ∈ (0, ∞), z_i ∈ B(0, 1), i = 1, …, N. Then the neural network described by (2) is synchronized in finite time. Moreover, for any trajectory of the subsystem x_i starting at (t0, x0) near (t0, 0) we have the following estimate of the settling time:

S(t0, x0) ≤ t_{c,T(t0,−W_{y_i}(t,0,…,y_{0i},…,0))},

where y_{0i} ∈ P is such that x0 = −W_{y_i}(t0, 0, …, y_{0i}, …, 0), with

W(t, 0, …, y_{0i}, …, 0) = (1/(C0 + 2N)) H(t) (|y_{0i}|^β + |y_{0i}|^{αβ}),

t ∈ (0, ∞), P = B(0, 1), β ≥ 2, D > 0 sufficiently large,

H(t) = −exp(−D ∫_0^t (a(s) + b(s)) ds),

C0 = sup{exp(−βD ∫_0^t (a(s) + b(s)) ds) : t ∈ (0, ∞)},

and t_{c,T(t0,−W_{y_i}(t,0,…,y_{0i},…,0))} is defined by

t_{c,T(t0,−W_{y_i}(t,0,…,y_{0i},…,0))} = inf{ t ≥ t0 : ∫_{t0}^t c(τ) dτ = |T(t0, −W_{y_i}(t, 0, …, y_{0i}, …, 0))|^{1−α}/(1 − α) }.   (27)

Proof. For

W(t, z_1, …, z_N) = (1/(C0 + 2N)) H(t) ∑_{i=1}^{N} (|z_i|^β + |z_i|^{αβ}),   t ∈ (0, ∞), z_i ∈ B(0, 1), β ≥ 2,

let us take D > 0 sufficiently large,

C0 = sup{exp(−βD ∫_0^t (a(s) + b(s)) ds) : t ∈ (0, ∞)} < ∞

and H(t) = −exp(−D ∫_0^t (a(s) + b(s)) ds). Then for P = B(0, 1)^N, X = B(0, 1)^N all the assumptions on W are satisfied. We have to check whether (20) is also satisfied. Let us take c(t) = η(2N)^{−α}(C0 + 2N)^α((1 − β)H(t))^{−α}, η > 0. Then we have the following:
(∂/∂t)(W(t, z_1, …, z_N) − ∑_{i=1}^{N} z_i W_{z_i}(t, z_1, …, z_N)) − ∑_{i=1}^{N} z_i W_{z_i z_i}(t, z_1, …, z_N) f(t, z_i) + c(t)(T(t, z_1, …, z_N))^α
= (1/(C0 + 2N)) H′(t) (∑_{i=1}^{N} |z_i|^β (1 − β) + ∑_{i=1}^{N} |z_i|^{αβ} (1 − αβ))
− (1/(C0 + 2N)) ∑_{i=1}^{N} (z_i H(t)) (β(β − 1)|z_i|^{β−2} + αβ(αβ − 1)|z_i|^{αβ−2}) (h(t, z_i) + k(t) ∑_{j=1}^{N} d_{ij}(t) Φ(z_i, z_j))
+ c(t) (H(t) (1/(C0 + 2N)) ∑_{i=1}^{N} (|z_i|^β (1 − β) + (1 − αβ)|z_i|^{αβ}))^α
≤ −(1/(C0 + 2N)) (−H(t)) D(a(t) + b(t)) ∑_{i=1}^{N} ((β − 1)|z_i|^β + (αβ − 1)|z_i|^{βα})
+ (1/(C0 + 2N)) (−H(t)) (a(t) + b(t)) ∑_{i=1}^{N} ((β − 1)|z_i|^β + αβ(αβ − 1)|z_i|^{βα})
+ η ∑_{i=1}^{N} |z_i|^{αβ}.   (28)

Similarly as in the proof of the previous theorem (Theorem 15), we infer that for D sufficiently large the right-hand side of (28) is less than zero. Thus by Theorem 14 from Section 3 we get that the vector (x_1, …, x_N) is zero for t > T_1 = t_{c,T(t0,−W_{y_i}(t,0,…,y_{0i},…,0))} defined by (27). Note that for all i,

t_{c,T(t0,−W_{y_i}(t,0,…,y_{0i},…,0))} = t_{c,T(t0,x0)},

and hence the network (26) is synchronized in finite time to the reference state x*.
5. Examples and numerical simulations

1. We begin with the Hopfield neural network model (see [8]):

c_i dx_i(t)/dt = ∑_{j=1}^{n} T_{ij}(t) g(t, x_j(t)) − x_i(t)/R_i + Ĩ_i(t),   i = 1, …, n,   (29)

where x_i is the state of the ith neural cell, g(t, x_i) is a nonlinear transfer function of x_i, Ĩ_i is an input, T_{ij} simulates the connection of the cells, c_i is the total input capacitance and R_i an input resistance. We transform (29) to the form of (23) by setting x = (x_1, …, x_n), A = (a_{ij}) with a_{ij} = 0 for i ≠ j and a_{ii} = 1/(R_i c_i), B = (b_{ij}) with b_{ij}(t) = T_{ij}(t)/c_i, g(t, x) = (g(t, x_1), …, g(t, x_n)), I = (Ĩ_1/c_1, …, Ĩ_n/c_n), T_{ij}(t) g(t, 0) = −I_i(t). Then we have

dx/dt = B(t)g(t, x) − Ax + I(t).

Let us assume in Theorem 15 that β = 2, α = 0.9, d(t) = b(t) = 1 and

a(t) = 0.01 for t ∈ (0, 1],   a(t) = t^{−2} for t ∈ (1, ∞).

Moreover, assume that

T_{ij}(t) x_j g(t, x_j) ≤ c_j² exp(−2t) |x_j|²,   t ∈ (0, ∞), |x_j| < 1, i, j = 1, …, n,

and

Ĩ_i(t) x_i ≤ c_i exp(−2t) |x_i|²,   t ∈ (0, ∞), |x_i| < 1, i = 1, …, n.

Let us observe that all assumptions of Theorem 15 are satisfied; thus the Hopfield network (29) is finite-time stable. Moreover, from (24) we can calculate the settling time. Both functions in (24), ∫_{t0}^t c(τ) dτ and |T(t0, −W_y(t, y0))|^{1−α}/(1 − α), are continuous. Let t0 = 1, y0 = 1, η = 0.1 and D = 4. Then for t = 1 we have ∫_{t0}^t c(τ) dτ = 0 and |T(t0, −W_y(t, y0))|^{1−α}/(1 − α) > 0. On the other hand, for t = 1.1 we have the following estimates:

∫_{t0}^t c(τ) dτ ∈ (2.5 · 10^{10}, 5 · 10^{10})

and

|T(t0, −W_y(t, y0))|^{1−α}/(1 − α) ∈ (5 · 10^{−2}, 7.5 · 10^{−2}).

So we can deduce that t_{c,T(t0,−W_y(t,y0))} in (24) is finite and belongs to (1, 1.1).
Note that in order to get finite-time stability, the functions Ĩ_i and T_{ij} cannot be constant functions.
ACCEPTED MANUSCRIPT
2. The second example concerns coupled network. We consider network (26)
with Φ(xj (t), xi (t)) = xj (t) − xi (t) i.e.
N
X
j=1
dij (t)(xj (t) − xi (t))T , i = 1, ..., N.
(30)
CR
IP
T
x0i (t) = h(t, xi (t)) + k(t)
Notice that coupled network (30) differs from most coupled networks we
can meet in literature because our coupling strength k, coupling matrix
(dij ) and the function h depend on time (see e.g. [15], [17], [18], [10] and
references therein). Assumptions concerning h are the same as in Theorem
16 . Assume that dij , i, j = 1, ..., N are measurable in time and that they
satisfy dij = −dji for i 6= j. Then
xi (t)
i=1
N
X
j=1
dij (t)(xj (t) − xi (t)) =
N
X
i,j=1
dij (t)xi (t)xi (t).
AN
US
N
X
This is why our second assumption in Theorem 16 takes place:R there is
t
b(·) ∈ L1 ((0, ∞); R+ ) which together with a(·) makes exp − 0 (a(s) +
b(s))ds bounded on (0, ∞), such that
k(t)
N
X
i,j=1
dij (t)zi zi ≤ b(t)
N
X
i=1
2
|zi | , zi ∈ B(0, 1), i = 1, . . . , N.
PT
Example 17.
ED
M
Therefore all assumptions of Theorem 16 are satisfied and hence coupled
network (30) is in finite time synchronized.
3. We begin with simulation of Hopfield neural network. We show that if the
bias I(t) is strongly decreasing function then the network is finite-time
stable while if it is constant function then the network is not stable in
finite time. Next changing I(t) to slowly decreasing function 1/(t + 1) the
network is unstable as well while if it is stronger decreasing i.e. 1/(t2 + 1)
then it is stable in finite time.
1
1
(|x1 + 1| − |x1 − 2|) − 0.3x1 + 0.25
,
t+1
t+1
0 < x1 (0) < 1.
CE
x01 = 0.25
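Example 17 is easy to reproduce numerically. A minimal sketch (Python, forward Euler; the initial value 0.5, horizon and tolerance are our own illustrative choices): with the decaying bias, the trajectory decays toward the origin.

```python
def x1dot(t, x1):
    # right-hand side of Example 17
    return 0.25/(t+1)*(abs(x1+1) - abs(x1-2)) - 0.3*x1 + 0.25/(t+1)

x1, t, dt = 0.5, 0.0, 1e-3       # initial condition with 0 < x1(0) < 1
while t < 30.0:
    x1 += dt * x1dot(t, x1)
    t += dt
assert abs(x1) < 0.01            # trajectory has decayed toward the origin
```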
The next two examples relate to synchronization of two networks.

Example 18.

x′_1 = −1.2 x_1 + (2/(t + 1))(1 + tanh(x_1)) − (0.2/(t + 1))(1 + tanh(x_2)) + 1.8/(t + 1),   0 < x_1(0) < 1,
x′_2 = −x_2 + (3.8/(t + 1))(1 + tanh(x_1)) + (2.5/(t + 1))(1 + tanh(x_2)) + 6.3/(t + 1),   0 < x_2(0) < 1.

Example 19.

x′_1 = −x_1 + (1/(log(t) + 1))(|x_1 + 1| − |x_1 − 1|) + (5/(t + 1))(|x_2 + 1| − |x_2 − 1|),   0 < x_1(0) < 1,
x′_2 = −x_2 + (1/(log(t) + 1))(|x_1 + 1| − |x_1 − 1|) + (3/(t + 1))(|x_2 + 1| − |x_2 − 1|),   0 < x_2(0) < 1.
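Example 18 can likewise be simulated directly. A sketch (Python, forward Euler; the initial values, horizon and thresholds are our own illustrative choices): since the time-varying terms decay like 1/(t + 1), both states approach the origin and, in particular, approach each other.

```python
import math

def f(t, x1, x2):
    # right-hand side of Example 18
    dx1 = -1.2*x1 + 2/(t+1)*(1 + math.tanh(x1)) - 0.2/(t+1)*(1 + math.tanh(x2)) + 1.8/(t+1)
    dx2 = -x2 + 3.8/(t+1)*(1 + math.tanh(x1)) + 2.5/(t+1)*(1 + math.tanh(x2)) + 6.3/(t+1)
    return dx1, dx2

x1, x2, t, dt = 0.9, 0.1, 0.0, 1e-3    # initial conditions with 0 < x(0) < 1
while t < 100.0:
    d1, d2 = f(t, x1, x2)
    x1 += dt*d1
    x2 += dt*d2
    t += dt
assert abs(x1) < 0.5 and abs(x2) < 0.5  # both states have become small
assert abs(x1 - x2) < 0.5               # and close to each other
```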
Figure 1: Example 17
6. Conclusion

We have developed a new method, the so-called dual approach, to study finite-time stability and finite-time synchronization of neural networks. "Dual approach" means that we move all considerations to another space and define in this space notions similar to those of the primal setting. However, this causes, among other things, the Lyapunov function to be different. In Theorem 15 we proved that under some standard restrictions on the data of the Hopfield network, the network is stable in finite time. We illustrated that theorem on an example of a classical Hopfield network and its simulations. In Theorem 16 we showed that the problem of synchronization can also be treated with the same general result, i.e. Theorem 14. We also gave some simulations of synchronization. The advantage of the dual approach is that we deal with a different Lyapunov function, defined in the dual space; hence the restrictions on the data of our networks differ from those in existing results in the literature.
References
[1] K. Aihara, T. Takabe, M. Toyoda, Chaotic neural networks, Phys. Lett. A.
144 (6-7) (1990) 333–340.
Figure 2: Example 18
[2] M. Athans, P. L. Falb, Optimal Control: An Introduction to the Theory
and Its Applications, McGraw-Hill, New York, 1966.
[3] A. Bacciotti, L. Rosier, Lyapunov Functions and Stability in Control Theory, 2nd Edition, Springer, London, 2001.
[4] S. P. Bhat, D.S. Bernstein, Finite-time stability of continuous autonomous
systems, SIAM Journal on Control and Optimization 38(3) (2000) 751–766.
[5] J. Cao, G. Chen, P. Li, Global synchronization in an array of delayed neural
networks with hybrid coupling, IEEE Trans. Syst. Man, Cybern B, Cybern.
38(2) (2008) 488–498.
[6] J. Cao, J. Lu, Adaptive synchronization of neural networks with or without
time-varying delay, Chaos 16(1) (2006) 013133.
[7] G. Chen, J. Zhou, Z. Liu, Global synchronization of coupled delayed neural
networks and applications to chaotic CNN models, Int. J. Bifurcation Chaos
14(7) (2004) 2229–2240.
[8] J.J. Hopfield, Neurons with graded response have collective computational
properties like those of two-state neurons, Proc. Natl. Acad. Sci., USA
81(10) (1984) 3088–3092.
Figure 3: Example 19
[9] F.C. Hoppensteadt, E.M. Izhikevich, Pattern recognition via synchronization in phase-locked loop neural networks, IEEE Trans. Neural Netw. 11
(3) (2000) 734–738.
[10] Y. Li, X.Yang, L. Shi, Finite-time synchronization for competitive neural
networks with mixed delays and non-identical perturbations, Neurocomputing 185 (2016) 242–253.
[11] X. Lin, X. Li, S. Li, Y. Zou, Finite-time stability of switched nonlinear systems with finite-time unstable subsystems, Journal of the Franklin Institute
352 (2015) 1192–1214.
[12] X. Liu, J. Cao, Local synchronization of one-to-one coupled neural networks
with discontinuous activations, Cogn Neurodyn 5(1) (2011) 13–20.
[13] X. Liu, X.Yu, H. Xi, Finite-time synchronization of neutral complex networks with Markovian switching based on pinning controller, Neurocomputing 153 (2015) 148–158.
[14] Y. Liu, B.Z. Guo, J.H. Park, S.M. Lee, Nonfragile exponential synchronization of delayed complex dynamic networks with memory sampled-data
control, IEEE Trans Neural Netw Learn Syst. PP(99) (2016) 1–11.
[15] W. Lu, T. Chen, Dynamic behaviors of Cohen–Grossberg neural networks
with discontinuous activation functions, Neural Networks 18(3) (2005) 231–
242.
[16] W. Lu, T. Chen, Synchronization of coupled connected neural networks
with delays, IEEE Trans. Circuits Syst. I, Reg. Papers 51(12) (2004) 2491–
2503.
[17] J. Lu, D.W.C. Ho, Globally exponential synchronization and synchronizability for general dynamic networks, IEEE Trans. Syst. Man, Cybern B,
Cybern. 40(2) (2010) 350–361.
[18] W. Lu, X. Liu, T. Chen, A note on finite-time and fixed-time stability,
Neural Networks 81 (2016) 11–15.
[19] W. Lu, T. P. Chen, Synchronization analysis of linearly coupled networks of discrete time systems, Physica D: Nonlinear Phenomena 198 (2004) 148–168.
[20] R. Matusik, Finite time stability of neural network, Ph.D. Thesis, 2016 (in
Polish).
[21] A. Michalak, Dual approach to Lyapunov stability, Nonlinear Analysis:
Theory, Methods and Applications 85 (2013) 174–179.
[22] V. Milanović, M.E. Zaghloul, Synchronization of chaotic neural networks and applications to communications, Int. J. Bifurcation Chaos 6 (1996) 2571–2585.
[23] L. M. Pecora, T.J. Carroll, Synchronization in chaotic systems, Phys. Rev.
Lett. 64 (8) (1990) 821–824.
[24] E.P. Ryan, Optimal Relay and Saturating Control System Synthesis, IEEE
Control Engineering Series 14, London, 1982.
[25] Z. Tang, J.H. Park, H. Shen, Finite-time cluster synchronization of Lur’e
networks: a nonsmooth approach, IEEE Trans. Syst., Man, Cybern., Syst.
PP (99) (2017) 1–12.
[26] W. Wu, T. Chen, Global synchronization criteria of linearly coupled neural network systems with time-varying coupling, IEEE Trans Neural Netw
19(2) (2008) 319–332.
[27] X. Yang, D.W.C. Ho, Synchronization of Delayed Memristive Neural Networks: Robust Analysis Approach, IEEE Trans Cybern 46 (2016) 3377 –
3387.
[28] X. Yang, D.W.C. Ho, J. Lu, Q. Song, Finite-time cluster synchronization
of T–S fuzzy complex networks with discontinuous subsystems and random
coupling delays, IEEE Trans. Fuzzy Syst. 23(6) (2015) 2302 – 2316.
[29] X. Yang, J. Lu, Finite-time synchronization of coupled networks with
Markovian topology and impulsive effects, IEEE Trans. Autom. Control
61(8) (2016) 2256 – 2261.
[30] X. Yang, Q. Song, J. Liang, B. He, Finite-time synchronization of coupled
discontinuous neural networks with mixed delays and nonidentical perturbations, Journal of the Franklin Institute 352(10) (2015) 4382–4406.
[31] Z. Zhang, Z. Zhang, H. Zhang, Finite-time stability analysis and stabilization for uncertain continuous-time system with time-varying delay, Journal
of the Franklin Institute 352(3) (2015) 1296–1317.