ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING
Asia-Pac. J. Chem. Eng. 2008; 3: 630–637
Published online 22 September 2008 in Wiley InterScience
(www.interscience.wiley.com) DOI:10.1002/apj.206
Special Theme Research Article
On information transmission in linear feedback tracking systems†
Hui Zhang* and Youxian Sun
State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Department of Control Science and Engineering,
Zhejiang University, Hang Zhou, China
Received 24 July 2008; Accepted 25 July 2008
ABSTRACT: Information transmission in discrete-time linear time-invariant (LTI) feedback tracking systems was investigated by using the measures of directed information and mutual information. It was proved that, for a pair of an extraneous input and an internal variable, the directed information (rate) is always equal to the mutual information (rate); for a pair of internal variables, the former is smaller than the latter. Furthermore, the feedback changes the information transmission between internal variables, while it has no influence on the information transmission from extraneous variables to internal variables. Considerations for system design are discussed. © 2008 Curtin University of Technology and John Wiley & Sons, Ltd.
KEYWORDS: directed information; mutual information; linear tracking system; information transmission; feedback
INTRODUCTION
As a new measure of information transmission, the so-called directed information defined by Massey,[1] which is different from the traditional measure of mutual information defined by Shannon,[2] is attracting attention in the fields of information theory[3] and control systems with communication constraints.[4,5] It was demonstrated[1] that for finite-state channels with or without memory, the directed information and the mutual information between channel input and output are identical if the channel is used without feedback; when there is feedback from the channel output to the encoder, the directed information is strictly smaller than the mutual information. The key point here is that 'causal independence' does not mean 'statistical independence'.[1,3]
On the other hand, information-theoretic approaches to the analysis and design of control systems (with no communication constraints) have been attracting more and more attention recently.[6–12] The attempts at investigating the relation between control and information make it important to study the information transmission in feedback systems. For example, as measures concerning information transmission, the entropy rate and the mutual information rate play important roles in stochastic control and estimation problems.[6,8,11,12] However, the role of the directed information (rate), which measures the real information transmission in causal systems, has not been discussed for control.

*Correspondence to: Hui Zhang, State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Department of Control Science and Engineering, Zhejiang University, Hang Zhou, 310027, China. E-mail: zhanghui iipc@zju.edu.cn
†This work is supported by the National Natural Science Foundation of China (60674028).
In this paper, we investigate the relation between the directed information (rate) and the mutual information (rate) in linear feedback tracking control systems (with no communication constraints). The sample space of the random variables is continuous. Our work leads to the conclusions that, in measuring information transmission between extraneous inputs and internal variables, the directed information is always equal to the mutual information, while for pairs of internal variables the former is identical with or smaller than the latter. Furthermore, by comparing the open- and closed-loop systems, we show that the feedback changes the information transmission between internal variables, while it has no influence on the information transmission from extraneous variables to internal variables. This is slightly different from the conclusion for communication channels, which states that for continuous-alphabet Gaussian channels with colored noise, the capacity (defined as the maximum mutual information between message and channel output) is increased by feedback.[13] Information-theoretic preliminaries and the system under discussion are presented in Section 2, together with the notations. Section 3 gives the main results. Section 4 concludes with a discussion.
NOTATIONS AND PRELIMINARIES
Definitions and Lemmas concerning information

In this paper, we denote the vector of the sequence of a (discrete-time) stochastic process ξ(k) ∈ R, (k = 1, 2, ...), as

ξ^n = [ξ(n), ξ(n−1), ..., ξ(1)]^T    (1)

The entropy rate[13] of a stationary stochastic process X(k),

H(X) =: lim_{n→∞} (1/n) H(X^n)    (2)

describes the per-unit-time information or uncertainty of X, where H(X^n) is the entropy of X^n; the mutual information rate[13] between two stationary stochastic processes X and Y,

I(X; Y) =: lim_{n→∞} (1/n) I(X^n; Y^n)    (3)

describes the time-average information transmitted between the processes X and Y, where I(X^n; Y^n) is the mutual information of X^n and Y^n. The notation '=:' means definition. The directed information[1,3] from the sequence X^n to the sequence Y^n is defined as

I(X^n → Y^n) =: Σ_{k=1}^{n} I(X^k; y(k) | Y^{k−1})    (4)

while the directed information rate is

I(X → Y) =: lim_{n→∞} (1/n) I(X^n → Y^n)    (5)
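To make the distinction between Eqns (3) and (4) concrete, the following minimal numerical sketch (not taken from the paper) computes both measures exactly for a jointly Gaussian toy channel y(k) = x(k) + v(k) whose input uses feedback, x(k) = w(k) + β y(k−1); the horizon n, the gain β, and the unit noise variances are illustrative assumptions:

```python
# Sketch: exact mutual vs. directed information for an assumed Gaussian
# feedback channel. All parameters below are illustrative assumptions.
import numpy as np

n, beta = 8, 0.9
# Each variable is a linear combination of the 2n independent unit-variance
# Gaussians [w(1..n), v(1..n)]; any covariance is then rows @ rows.T.
X = np.zeros((n, 2 * n))   # coefficient rows of x(1), ..., x(n)
Y = np.zeros((n, 2 * n))   # coefficient rows of y(1), ..., y(n)
for k in range(n):
    if k > 0:
        X[k] = beta * Y[k - 1]   # feedback term beta*y(k-1)
    X[k, k] += 1.0               # message innovation w(k)
    Y[k] = X[k]                  # channel: y(k) = x(k) + v(k)
    Y[k, n + k] = 1.0            # channel noise v(k)

def logdet(rows):
    return np.linalg.slogdet(rows @ rows.T)[1]

def cond_var(target, given):
    # Conditional variance of a scalar Gaussian via the Schur complement.
    s_aa = float(target @ target)
    if given.shape[0] == 0:
        return s_aa
    s_ab = given @ target
    return s_aa - float(s_ab @ np.linalg.solve(given @ given.T, s_ab))

# Mutual information: I(X^n; Y^n) = (logdet Sx + logdet Sy - logdet Sxy)/2
I_mut = 0.5 * (logdet(X) + logdet(Y) - logdet(np.vstack([X, Y])))
# Directed information, Eqn (4): sum over k of I(X^k; y(k) | Y^{k-1})
I_dir = sum(
    0.5 * np.log(cond_var(Y[k], Y[:k])
                 / cond_var(Y[k], np.vstack([Y[:k], X[:k + 1]])))
    for k in range(n)
)
print(f"I(X^n;Y^n) = {I_mut:.4f}  >=  I(X^n->Y^n) = {I_dir:.4f}")
# With beta = 0 (no feedback) the two values coincide; with feedback the
# directed information is strictly smaller, as noted by Massey.[1]
```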
Some conclusions concerning entropy are stated as follows.

Lemma 1: Let x, y, z be random vectors with appropriate (not necessarily equal) dimensions, and let f(·) be a deterministic map. Then

H(x + f(y) | y) = H(x | y)    (6)

H(x | f(y) + z, y) = H(x | z, y)    (7)

where H(·|·) denotes the conditional entropy.[13]

Proof: See Appendix A1.
Lemma 2:[14] Let F(z) ∈ RH∞ be the transfer function of a discrete-time, single-input single-output (SISO), invertible linear time-invariant (LTI) system with variables taking values in continuous spaces, where RH∞ denotes the set of all stable, proper and rational transfer functions.[15] Suppose the stochastic input x(k) ∈ R (k = 1, 2, ...) is stationary. Then the entropy rate of the system output y(k) ∈ R is

H(y) = H(x) + (1/4π) ∫_{−π}^{π} ln |F(e^{iω})|² dω    (8)

Remark 3: The second term on the right-hand side of Eqn (8) reflects the variation of the time-average information of the signal after it is transmitted through the system F(z), and was defined as the variety of the system F(z).[11] Denote it as

V(F) =: (1/4π) ∫_{−π}^{π} ln |F(e^{iω})|² dω    (9)

Intrinsically, the system variety is caused by the system dynamics, or, memory. When V(F) = 0, we say that the system F(z) is entropy preserving.

Remark 4: The proof of Lemma 2[14] is based on the fact that for sequences of the input x^n ∈ R^n and output y^n ∈ R^n with y^n = F_n x^n, where F_n is the invertible linear transformation matrix defined by the system F(z), H(y^n) = H(x^n) + ln det J, where J is the Jacobian matrix of F_n. However, if the number of samples of x(k) and y(k) is finite, then H(y^n) = H(x^n). In this case, Eqn (8) is modified to H(y) = H(x), i.e. the system is always entropy preserving.
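As a quick numerical illustration of Eqn (9) (the first-order system below is an assumed example, not one from the paper), the variety of a stable F(z) = (z − z0)/(z − p0) can be evaluated by quadrature; Jensen's formula predicts V(F) = ln|z0| when the zero lies outside the unit circle and V(F) = 0 (entropy preserving) when it lies inside:

```python
# Sketch: the variety V(F) of Eqn (9) by quadrature, for an assumed F(z).
import numpy as np

z0, p0 = 1.5, 0.4                       # assumed zero and stable pole
w = np.linspace(-np.pi, np.pi, 200001)
F = (np.exp(1j * w) - z0) / (np.exp(1j * w) - p0)
# (1/(4*pi)) * integral over [-pi, pi] equals half the average value
V = 0.5 * np.mean(np.log(np.abs(F) ** 2))
print(f"V(F) = {V:.6f}, ln|z0| = {np.log(abs(z0)):.6f}")  # both ~0.405465
```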
The system under discussion

In this paper we discuss the information transmission in the discrete-time SISO LTI tracking system shown in Fig. 1, where r(k), d(k), u(k), y(k) ∈ R are the reference input, disturbance, control signal and output, respectively; k is the time index; and C(z) and P(z) are proper rational transfer functions of the controller and plant, respectively.

Figure 1. Feedback LTI tracking system with disturbance.

Assumptions 5:
(a) The reference input r and the disturbance d are mutually independent stationary random sequences. The system has zero initial conditions (i.e., for k ≤ 0, the variables in the system are zero).
(b) The closed-loop system is well-posed and internally stable. Well-posedness requires the transfer function 1 + L(z), where L(z) = P(z)C(z), to be one-to-one and onto, i.e. invertible.[15,16]
(c) The open-loop transfer function L(z) has no pole-zero cancellation and no pole or zero on the unit circle in the complex plane, and can be represented as

L(z) = l_0 ∏_{i=1}^{p} (z − z_i) / ∏_{j=1}^{q} (z − p_j)    (10)

where l_0 ≠ 0 is the leading coefficient of the numerator of L(z) when the denominator of L(z) is monic, and is chosen to stabilize the closed-loop system; the z_i and p_j are the zeros and poles of L(z), respectively.

The closed-loop transfer functions are S(z) = [1 + L(z)]^{−1} and T(z) = 1 − S(z). The response of S(z) to the disturbance in a finite time interval can be represented as

y_d(k) = Σ_{i=1}^{k} s_{k−i} d(i)    (11)

where the s_{k−i} are response weighting parameters. The signal y_d(k) and the vector y_d^k are linear functions of the input sequence d^k. Denote them as

y_d(k) =: S_k d^k,    y_d^k =: 𝒮_k d^k    (12)
respectively, where

S_k = [s_0  s_1  ···  s_{k−1}],

𝒮_k = [ S_k
        0  S_{k−1}
        ⋮        ⋱
        0  ···  0  S_1 ]    (13)

are linear deterministic maps. It is seen that in our system 𝒮_k is an invertible transformation matrix. We also denote the linear maps T_k and 𝒯_k corresponding to the transfer function T(z), L_k and ℒ_k corresponding to the open-loop transfer function L(z), P_k and 𝒫_k corresponding to P(z), and U_k and 𝒰_k corresponding to U(z) = C(z)S(z), all in the same sense as S_k and 𝒮_k. For discrimination, we denote the variables in the open-loop system with the subscript 'o'. For example, u_o denotes the control variable in the open-loop system, while u denotes that in the closed-loop system; r denotes the reference input in both the open- and closed-loop systems because it is the same in both cases.
By using these notations, we can write

y_o(k) = L_k r^k + d(k),    y_o^k = ℒ_k r^k + d^k    (14)

for the open-loop system. For the closed-loop system, we have

y(k) = y_r(k) + y_d(k),    y^k = y_r^k + y_d^k    (15)

u(k) = u_r(k) − u_d(k),    u^k = u_r^k − u_d^k    (16)

where y_r(k) =: T_k r^k, y_r^k =: 𝒯_k r^k, u_r(k) =: U_k r^k, u_d(k) =: U_k d^k, u_r^k =: 𝒰_k r^k, u_d^k =: 𝒰_k d^k. We also have

y(k) = P_k u^k + d(k),    y^k = 𝒫_k u^k + d^k    (17)

for both the open- and closed-loop systems.

Lemma 6:[17] Under the conditions stated in Assumptions 5, the Bode integral of the sensitivity function S(z) of the closed-loop stable discrete-time LTI system shown in Fig. 1 satisfies

∫_{−π}^{π} ln |S(e^{jω})| dω = 2π ( Σ_{i=1}^{m} ln |p_i^u| − ln |κ + 1| )    (18)

where the p_i^u, i = 1, ..., m, are the unstable (i.e. outside the unit disk in the complex plane) poles of L(z), and κ = lim_{z→∞} L(z). Note that in the above equation κ = 0 if the open-loop transfer function L(z) is strictly proper (p < q in Eqn (10)), and κ = l_0 if L(z) is biproper (p = q in Eqn (10)).
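A numerical sanity check of Eqn (18) can be done by quadrature; the loop below, L(z) = K/(z − a) with one unstable open-loop pole, is an assumed example rather than one from the paper. Since this L is strictly proper, κ = 0 and the predicted value of the integral is 2π ln|a|; the closed-loop pole sits at a − K, inside the unit circle:

```python
# Sketch: the discrete Bode sensitivity integral of Eqn (18), assumed loop.
import numpy as np

a, K = 1.5, 1.0                            # assumed unstable pole and gain
w = np.linspace(-np.pi, np.pi, 400001)
z = np.exp(1j * w)
S = 1.0 / (1.0 + K / (z - a))              # sensitivity function S = 1/(1+L)
bode = 2 * np.pi * np.mean(np.log(np.abs(S)))
print(f"integral = {bode:.6f}, 2*pi*ln|a| = {2 * np.pi * np.log(a):.6f}")
```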
INFORMATION TRANSMISSION IN LTI CONTROL SYSTEMS
The tracking system shown in Fig. 1 is, to a large degree, similar to a channel with intersymbol interference (ISI).[3] The reference input r, control signal u, and system output y can be considered as the source message, encoded channel input, and channel output, respectively.

If the open-loop system is stable, the spectrum of the output is

Φ_{y_o}(ω) = |L(e^{jω})|² Φ_r(ω) + Φ_d(ω)    (19)

where Φ_r and Φ_d are the spectra of r and d, respectively. For the closed-loop system,

Φ_y(ω) = |T(e^{jω})|² Φ_r(ω) + |S(e^{jω})|² Φ_d(ω) = |S(e^{jω})|² [ |L(e^{jω})|² Φ_r(ω) + Φ_d(ω) ]    (20)
Hence, y can be considered as the response of the system S(z) to the stationary input y_o. Then, by Lemma 2 and Lemma 6, we get the following conclusion.

Proposition 7: For the open-loop stable feedback tracking system satisfying Assumptions 5, the entropy rates of the outputs of the open- and closed-loop systems are related by

H(y) = H(y_o) + V(S) = H(y_o) − ln |κ + 1|    (21)

where V(S) =: (1/4π) ∫_{−π}^{π} ln |S(e^{iω})|² dω is the variety of the system S(z). If the open-loop transfer function L(z) is strictly proper (p < q in Eqn (10)), then

H(y) = H(y_o)    (22)
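Proposition 7 can also be checked numerically. The sketch below assumes white unit-spectrum r and d and a biproper loop L(z) = l_0(z − z_1)/(z − p_1) (so κ = l_0); the Gaussian entropy rate of a process is evaluated from its spectrum as H = (1/4π) ∫ ln(2πe Φ(ω)) dω:

```python
# Sketch of Proposition 7 under assumed white inputs and an assumed loop.
import numpy as np

l0, z1, p1 = 1.0, 0.25, 0.5               # assumed biproper open loop
w = np.linspace(-np.pi, np.pi, 200001)
z = np.exp(1j * w)
L = l0 * (z - z1) / (z - p1)
S = 1.0 / (1.0 + L)
Phi_yo = np.abs(L) ** 2 + 1.0             # Eqn (19) with white inputs
Phi_y = np.abs(S) ** 2 * Phi_yo           # Eqn (20)

def H_rate(Phi):
    # Gaussian entropy rate from the spectrum (Kolmogorov-Szego formula)
    return 0.5 * np.mean(np.log(2 * np.pi * np.e * Phi))

print(f"H(y) - H(yo) = {H_rate(Phi_y) - H_rate(Phi_yo):.6f}, "
      f"-ln|kappa + 1| = {-np.log(abs(l0 + 1)):.6f}")
```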
Remark 8: It can be seen from Lemma 6 and Proposition 7 that, for a stable and strictly proper L(z), the feedback does not change the output uncertainty. For a biproper L(z), the output uncertainty is reduced by feedback if |κ + 1| > 1. However, from Remark 4 we know that if the number of samples of the system variables is finite, then the system S(z) is always entropy preserving, and the feedback does not change the output uncertainty even if L(z) is biproper.
Information transmission from extraneous inputs to internal variables

In this section, we investigate the information transmission between two pairs of extraneous and internal variables, (r, y) and (d, u), respectively.

Proposition 9: Suppose the system shown in Fig. 1 satisfies the conditions stated in Assumptions 5. Then, if the open-loop system L(z) is stable,

I(r; y_o) = I(r → y_o) = H(y_o) − H(d)    (23)

and,

I(r; y) = I(r → y) = H(y) − H(d) − V(S)    (24)

I(r; y) = I(r → y) = I(r; y_o) = I(r → y_o)    (25)

Proof: See A2.

We then consider the information transmission between d and u.

Proposition 10: Suppose the system shown in Fig. 1 satisfies the conditions stated in Assumptions 5; then

I(d → u) = I(d; u) = H(u) − H(u_r)    (26)

Proof: See A3.

Remark 11: Propositions 9 and 10 state that feedback does not change the information transmission from extraneous inputs to internal variables. This is slightly different from the conclusion for the communication channel, which states that for continuous-alphabet Gaussian channels with colored noise, the capacity (defined as the maximum mutual information between message and channel output) is increased by feedback.[13] Furthermore, by an analysis similar to the proofs of Propositions 9 and 10, it can be concluded that for pairs of extraneous input variable and internal variable (such as (r, y), (d, y), (r, u), and (d, u)), mutual information and directed information are equivalent in both the open- and closed-loop systems. This property rests on the fact that the feedback does not change the statistical (in)dependence between the future inputs and the current (and previous) internal variables. Specifically, let e(k) = r(k) − y(k) be the tracking error in the closed-loop system, so that e(z) = S(z)[r(z) − d(z)]. We can get

I(r^n → e^n) = I(r^n; e^n) = H(e^n) − H(𝒮_n d^n)    (27)

Then, if the system is Gaussian,

I(r → e) = I(r; e) = H(r − d) − H(d) = (1/4π) ∫_{−π}^{π} ln (1 + Φ_r/Φ_d) dω    (28)

Hence, the information transmission from r to e is determined uniquely by the signal-to-noise ratio.
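Eqn (28) says this information rate is fixed by the two spectra alone, independently of C(z) and P(z). A small quadrature sketch under assumed spectra (an AR(1) reference and a white disturbance; all parameter values are illustrative assumptions):

```python
# Sketch of Eqn (28) with assumed spectra.
import numpy as np

rho, sig_r2, sig_d2 = 0.8, 1.0, 0.5       # assumed AR(1) pole and variances
w = np.linspace(-np.pi, np.pi, 200001)
Phi_r = sig_r2 / np.abs(1 - rho * np.exp(-1j * w)) ** 2   # AR(1) spectrum
Phi_d = sig_d2 * np.ones_like(w)                          # white disturbance
I_re = 0.5 * np.mean(np.log(1 + Phi_r / Phi_d))           # (1/(4*pi))*integral
print(f"I(r -> e) = {I_re:.4f} nats per time step")
# For a white reference as well, this reduces to 0.5*ln(1 + sig_r2/sig_d2).
```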
As stated in Ref. [1], the directed information is not symmetric. Considering a 'fictitious' directed information from an internal signal to an extraneous input signal will throw light on an interesting relation. Define the fictitious directed information (rate) from u^n to d^n as

I(u^n → d^n) =: Σ_{k=1}^{n} I(u^k; d(k) | d^{k−1}),    I(u → d) =: lim_{n→∞} (1/n) I(u^n → d^n)    (29)
We have

I(u^n → d^n) = Σ_{k=1}^{n} I((u^{k−1}, u(k)); d(k) | d^{k−1})
 =(a) Σ_{k=1}^{n} [ I(u^{k−1}; d(k) | d^{k−1}) + I(u(k); d(k) | d^{k−1}, u^{k−1}) ]
 = Σ_{k=1}^{n} [ H(d(k) | d^{k−1}) − H(d(k) | d^{k−1}, u^{k−1}) + I(u(k); d(k) | d^{k−1}, u^{k−1}) ]
 =(b) Σ_{k=1}^{n} [ H(d(k) | d^{k−1}) − H(d(k) | d^{k−1}, u_r^{k−1}) + I(u(k); d(k) | d^{k−1}, u^{k−1}) ]
 =(c) Σ_{k=1}^{n} I(u(k); d(k) | d^{k−1}, u^{k−1})    (30)

where (a) is based on the chain rule of mutual information,[13] (b) is based on (7), and (c) on the fact that d and r are mutually independent. On the other hand, with the chain rule of mutual information,

I(d^n → u^n) = Σ_{k=1}^{n} [ I(d^{k−1}; u(k) | u^{k−1}) + I(d(k); u(k) | u^{k−1}, d^{k−1}) ]    (31)

where I(d^{k−1}; u(k) | u^{k−1}) = H(d^{k−1} | u^{k−1}) − H(d^{k−1} | u^{k−1}, u(k)) ≥ 0, with equality if and only if d(k) is white. Hence,

I(d^n → u^n) ≥ I(u^n → d^n),    I(d → u) ≥ I(u → d)    (32)

with equalities if and only if d(k) is white.

Information transmission between internal variables

Only the pair of the control variable and the output is considered in this section.

Proposition 12: Suppose the system shown in Fig. 1 satisfies the conditions stated in Assumptions 5. If the open-loop system with L(z) is stable,

I(u_o → y_o) = I(u_o; y_o) = H(y_o) − H(d)    (33)

In the closed-loop system,

I(u → y) ≤ I(u; y)    (34)

with equality if and only if d is white, where

I(u → y) = H(y) − H(d) + I(u → d)    (35)

I(u; y) = H(y) − H(d) + I(d → u)    (36)

Proof: See A4.

Remark 13: Although in the closed-loop system the directed information and the mutual information may be identical, the feedback changes the information transmission between internal variables even if the disturbance is white. This can be seen in the following relation, derived from Eqns (33), (35), and (21):

I(u → y) − I(u_o → y_o) = V(S) + I(u → d)    (37)

where the quantity on the right-hand side of the equality is not identically zero in general. Let us consider the special case where d is white, C(z) is invertible, and the system is Gaussian. With Eqns (26), (32), and Lemma 2 we get

I(u → d) = I(d → u) = (1/4π) ∫_{−π}^{π} ln (1 + Φ_d/Φ_r) dω

and hence

I(u → y) − I(u_o → y_o) = V(S) + (1/4π) ∫_{−π}^{π} ln (1 + Φ_d/Φ_r) dω    (38)

If S(z) is entropy preserving, the variation of information transmission caused by feedback is a constant.
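A minimal numerical sketch of Eqn (38) in this special case (white Gaussian r and d, invertible C(z)); the loop and the variances below are assumptions carried over from the earlier sketches:

```python
# Sketch: the feedback-induced change in internal information transmission.
import numpy as np

l0, z1, p1 = 1.0, 0.25, 0.5               # assumed biproper L(z)
sig_r2, sig_d2 = 1.0, 0.5                 # assumed white-input variances
w = np.linspace(-np.pi, np.pi, 200001)
z = np.exp(1j * w)
S = 1.0 / (1.0 + l0 * (z - z1) / (z - p1))
V_S = 0.5 * np.mean(np.log(np.abs(S) ** 2))   # variety of S(z), Eqn (9)
snr = 0.5 * np.log(1 + sig_d2 / sig_r2)       # the (1/(4*pi))-integral term
print(f"I(u->y) - I(uo->yo) = {V_S + snr:.4f} nats per time step")
```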
CONCLUDING REMARKS
For pairs of system extraneous inputs and internal variables (including the system output), the directed information (rate) is always equal to the mutual information (rate); for pairs of internal variables, the former is smaller than or equal to the latter. Moreover, the feedback changes the information transmission between internal variables, while it has no influence on the information transmission from extraneous variables to internal variables. Our conclusion is slightly different from that for the communication channel.[13]

In designing communication systems, one always intends to shape the probability distribution of the channel input so as to maximize the information transmission from channel input to output, i.e. to achieve the channel capacity. However, in our tracking system, neither maximizing the information transmission from r to y nor maximizing the information transmission from u to y is a suitable way to obtain good tracking performance. The key structural distinction between the tracking system and the communication channel is that in a communication system a postfilter, the decoder, is applied before the transmitted signal is received by the user, whereas in a tracking system there is no postfilter. From Proposition 9 (Eqn (24)) and Proposition 12 (Eqn (35)) we see that maximizing I(r → y) or I(u → y) may make H(y) too large. This implies that the output may contain more information, or uncertainty, than the reference signal. Therefore, maximizing I(r → y) or I(u → y) cannot be used directly in tracking control systems. Another intuitive choice is to minimize the information transmission from the reference to the tracking error. However, I(r → e) is not a suitable performance function either, because I(r → e) is determined by the signal-to-noise ratio (Remark 11) and is independent of the system parameters. In our viewpoint, a rational choice is to adopt I(r → y) as an auxiliary performance function, as discussed in Ref. [12].
APPENDICES

A1. Proof of Lemma 1

Let η = x + f(y). Denote the semi-open intervals

η_0 ≤ η < η_0 + dη,    x_0 ≤ x < x_0 + dx    (A1.1)

for the random variables η and x, respectively. For a fixed y = y_0, the random events η_0 ≤ η < η_0 + dη and x_0 ≤ x < x_0 + dx are one-to-one. The probability

P(η_0 ≤ η < η_0 + dη | y = y_0) = p(η|y) |dη|    (A1.2)

equals the probability

P(x_0 ≤ x < x_0 + dx | y = y_0) = p(x|y) |dx|    (A1.3)

where |dx| denotes |dx_1| · |dx_2| ··· |dx_n| and n is the dimension of x. Then

p(η|y) = p(x|y) |dx|/|dη| = p(x|y) |dx^T/dη^T|    (A1.4)

Since dη^T/dx^T = I, we have |dx^T/dη^T| = |det[dx^T/dη^T]| = 1, and thus

p(η|y) = p(η − f(y)|y) = p(x|y)    (A1.5)

Moreover, p(η, y) = p(x, y). Then, by the definition of conditional entropy,[13] Eqn (6) is arrived at.

The proof of (7) is given as:

H(x | f(y) + z, y)
 = H(x|y) − I(x; f(y) + z | y)
 = H(x|y) − H(f(y) + z | y) + H(f(y) + z | x, y)
 = H(x|y) − H(z|y) + H(z|x, y)
 = H(x|y) − I(x; z|y)
 = H(x | z, y)    (A1.6)

where the third equality is based on (6).

A2. Proof of Proposition 9

First, we consider the open-loop system (i.e. there is no feedback in Fig. 1) with L(z) stable. With Eqn (14) we get the mutual information between r^n and y_o^n as

I(r^n; y_o^n) = H(y_o^n) − H(y_o^n | r^n) = H(y_o^n) − H(ℒ_n r^n + d^n | r^n) = H(y_o^n) − H(d^n)    (A2.1)

where the third equality is based on (6) and the fact that d and r are mutually independent. Then the mutual information rate is

I(r; y_o) = H(y_o) − H(d)    (A2.2)

The directed information is

I(r^n → y_o^n) = Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(y_o(k) | y_o^{k−1}, r^k) ]
 = Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(L_k r^k + d(k) | y_o^{k−1}, r^k) ]
 =(d) Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(d(k) | ℒ_{k−1} r^{k−1} + d^{k−1}, r^k) ]
 =(e) Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(d(k) | d^{k−1}) ]    (A2.3)

where (d) is based on (6), and (e) is based on (7) and the fact that d and r are independent. Then the directed information rate is

I(r → y_o) = H(y_o) − H(d)    (A2.4)
Second, let us consider the closed-loop system. For the sequences r^n and y^n, we have

I(r^n; y^n) = H(y^n) − H(y^n | r^n) = H(y^n) − H(𝒯_n r^n + y_d^n | r^n) = H(y^n) − H(y_d^n)    (A2.5)

where the third equality is based on Eqn (6) and the fact that d and r are independent. Then the mutual information rate between r and y is

I(r; y) = H(y) − H(y_d)    (A2.6)

With Lemma 2 and Eqn (12), we have

I(r; y) = H(y) − H(d) − V(S)    (A2.7)

The directed information from r to y is

I(r^n → y^n) = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(y(k) | y^{k−1}, r^k) ]
 =(f) Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(T_k r^k + y_d(k) | 𝒯_{k−1} r^{k−1} + y_d^{k−1}, r^k) ]
 =(f) Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(y_d(k) | 𝒯_{k−1} r^{k−1} + y_d^{k−1}, r^k) ]
 =(g) Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(y_d(k) | y_d^{k−1}) ]    (A2.8)

where the equalities (f) are based on Eqn (6), and (g) is based on (7) and the fact that d and r are independent. By the property of the entropy rate[13] we have

I(r → y) = H(y) − H(y_d) = H(y) − H(d) − V(S)    (A2.9)

With Eqns (A2.2)–(A2.9) and Proposition 7, we get the conclusions.

A3. Proof of Proposition 10

We have

I(d^n; u^n) = H(u^n) − H(u^n | d^n) = H(u^n) − H(u_r^n − 𝒰_n d^n | d^n) = H(u^n) − H(u_r^n)    (A3.1)

and

I(d^n → u^n) = Σ_{k=1}^{n} [ H(u(k) | u^{k−1}) − H(u(k) | u^{k−1}, d^k) ]
 = Σ_{k=1}^{n} [ H(u(k) | u^{k−1}) − H(u_r(k) − U_k d^k | u_r^{k−1} − 𝒰_{k−1} d^{k−1}, d^k) ]
 = Σ_{k=1}^{n} [ H(u(k) | u^{k−1}) − H(u_r(k) | u_r^{k−1}) ]    (A3.2)

Then, with the definition of the entropy rate, we get

I(d → u) = I(d; u) = H(u) − H(u_r)    (A3.3)

A4. Proof of Proposition 12

We first consider the open-loop system with L(z) stable (this implies that P(z) is stable). In this case, we have

I(u_o^n; y_o^n) = H(y_o^n) − H(y_o^n | u_o^n) = H(y_o^n) − H(𝒫_n u_o^n + d^n | u_o^n) = H(y_o^n) − H(d^n)    (A4.1)

with Eqn (6) and the fact that d and u are independent in the open-loop system. And

I(u_o^n → y_o^n) = Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(P_k u_o^k + d(k) | y_o^{k−1}, u_o^k) ]
 = Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(d(k) | 𝒫_{k−1} u_o^{k−1} + d^{k−1}, u_o^k) ]
 = Σ_{k=1}^{n} [ H(y_o(k) | y_o^{k−1}) − H(d(k) | d^{k−1}) ]    (A4.2)

Then we consider the closed-loop system. With Eqns (17) and (6), it can be understood that the mutual information is

I(u^n; y^n) = H(y^n) − H(y^n | u^n) = H(y^n) − H(𝒫_n u^n + d^n | u^n) = H(y^n) − H(d^n) + I(d^n; u^n)    (A4.3)
The directed information in the closed-loop system is

I(u^n → y^n) = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(y(k) | y^{k−1}, u^k) ]
 = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(P_k u^k + d(k) | y^{k−1}, u^k) ]
 = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(d(k) | 𝒫_{k−1} u^{k−1} + d^{k−1}, u^k) ]
 = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(d(k) | d^{k−1}, u^k) ]
 = Σ_{k=1}^{n} [ H(y(k) | y^{k−1}) − H(d(k) | d^{k−1}) + I(d(k); u^k | d^{k−1}) ]
 = H(y^n) − H(d^n) + I(u^n → d^n)    (A4.4)

Then (33)–(36) are arrived at by using (26), (32), and (A4.1)–(A4.4).
REFERENCES

[1] J.L. Massey. Causality, feedback and directed information. In Proceedings 1990 International Symposium on Information Theory and its Applications, Waikiki, Hawaii, Nov. 1990; 27–30.
[2] C.E. Shannon. Bell Syst. Tech. J., 1948; 27, 379–423, 623–656.
[3] S.-H. Yang. The capacity of communication channels with memory. PhD Dissertation, Harvard University, Cambridge, Massachusetts, May 2004.
[4] N. Elia. IEEE Trans. Automat. Contr., 2004; 49(9), 1477–1488.
[5] S. Tatikonda. Control under Communication Constraints. PhD Dissertation, Massachusetts Institute of Technology, Cambridge, Massachusetts, Sept. 2000.
[6] S. Engell. Kybernetes, 1984; 13, 73–77.
[7] G.N. Saridis. IEEE Trans. Automat. Contr., 1988; 33, 713–721.
[8] A.A. Stoorvogel, J.H. van Schuppen. System identification with information theoretic criteria. In Identification, Adaptation, Learning (Eds.: S. Bittanti, G. Picci), Springer: Berlin, 1996; pp. 289–338.
[9] L. Wang. J. Syst. Sci. Complexity, 2001; 14(1), 1–16.
[10] H.L. Weidemann. Entropy analysis of feedback control systems. In Advances in Control Systems, Vol. 7, Academic Press: New York, 1969; pp. 225–255.
[11] H. Zhang, Y.-X. Sun. Bode integrals and laws of variety in linear control systems. In Proceedings 2003 American Control Conference, Denver, Colorado, 2003.
[12] H. Zhang, Y.-X. Sun. Information theoretic interpretations for H∞ entropy. In Proceedings 2005 IFAC World Congress, Prague, 2005.
[13] S. Ihara. Information Theory for Continuous Systems, World Scientific: Singapore, 1993.
[14] A. Papoulis. Probability, Random Variables, and Stochastic Processes, 3rd edn, McGraw-Hill: New York, 1991.
[15] K. Zhou. Essentials of Robust Control, Prentice-Hall: Upper Saddle River, NJ, 1998.
[16] J.C. Willems. The Analysis of Feedback Systems, The MIT Press: Cambridge, Massachusetts, 1971.
[17] B.-F. Wu, E.A. Jonckheere. IEEE Trans. Automat. Contr., 1992; 37(11), 1797–1802.