How to Count Targets Given Only the Number of Measurements
Roy L. Streit
Metron, Inc.
Reston, VA 20190 USA
streit@metsci.com or r.streit@ieee.org
Abstract—The Bayes-optimal distribution of the number of targets is derived, given only the numbers of measurements in a sequence of K consecutive sensor scans. Target states and spatial properties of measurements are completely ignored. A backward recursion for the joint probability generating function of the target-measurement count is derived using a branching process model and independent causal influence assumptions. The counting model can be generalized to spatial branching processes.
Keywords—Branching processes, Bayes estimation, Probability generating function, Independent causal influence, Spatial branching processes, Point processes
I. INTRODUCTION
Counting targets is hard because targets can leave the region
of regard and new ones arrive at any time, because sensors can
fail to detect targets that are present, and because spurious
clutter measurements (false alarms) can be superimposed with
target-originated measurements. When the spatial distributions
are part of the mix, the problem is harder.
This paper presents a branching process model of total target count, or canonical number, given only the numbers of measurements in a sequence of K ≥ 1 sensor scans. The spatial properties of targets and measurements are ignored completely in order to focus exclusively on counting issues.
Such a restriction may seem far too severe to be of interest in applications, but such is not the case: the counting model presented here can be extended to target and measurement counts in spatial grids, and the gridded model becomes, in the limit of infinitesimal grid cells, a spatial branching process. This and related topics are the subject of ongoing work [4, 8] and are outside the scope of the present paper. Spatial branching processes and Monte Carlo methods for K = 1 problems are studied in a series of papers by Caron and his colleagues; see [5] and the references cited therein.
The joint probability generating function (PGF) of the branching process model of target count is derived in closed form as a nested sequence of compositions of the PGFs that define the specific, or constituent, target and measurement processes. Said more carefully, the PGF of the joint target-measurement count distribution is the output of a K-step backward recursion. The backward recursion for the joint PGF is new.
The PGF of the Bayes-optimal posterior distribution of target count conditioned on measurement count is the ratio of derivatives of the joint PGF. The conditional mean and variance of target count can be derived from the PGF of the Bayes posterior distribution.

The key concept behind the backward recursion for the joint PGF is the notion of independent causal influence (ICI). The concept was apparently first proposed for conditional logic modeling by Pearl under the name "causal independence" (see [1, 2, 3] and the references therein). The ICI assumption is natural for counting purposes because it means that when there are several sources of objects, the total number of objects is the sum of the numbers of objects generated by each source.

Branching processes are an established field of discrete random variables; standard texts are [6] and [7]. They rely heavily on classical PGF methods introduced by Laplace. Their history dates to the classic 1874 paper by Watson and Galton on the extinction of family names over multiple generations. In the twentieth century, branching processes were applied to problems in nuclear chain reactions and cosmic ray cascades.

Section II presents the branching process model for target count using a sequence of sensor measurement counts. Section III gives several examples. Conclusions and directions for future work are discussed in Section IV. The Appendix gives an overview of multivariate PGFs.

(This work was supported in part by the US Office of Naval Research under grant N0001412C0259.)
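A one-line PGF fact underlies the ICI assumption described above: if a total count is the sum of independent counts, its PGF is the product of the component PGFs. The following minimal numerical check is a sketch, not from the paper; the Poisson and binomial parameters are arbitrary assumptions, and scipy is used only for the pmfs.

```python
import numpy as np
from scipy.stats import poisson, binom

# PGF of a sum of independent counts = product of the PGFs.
# Polynomial multiplication of PGF coefficients is pmf convolution.
n = np.arange(40)
pmf1 = poisson.pmf(n, 2.0)               # N1 ~ Poisson(2), truncated
pmf2 = binom.pmf(np.arange(6), 5, 0.3)   # N2 ~ Binomial(5, 0.3)

pmf_sum = np.convolve(pmf1, pmf2)        # pmf of N1 + N2

x = 0.7                                  # any point in the unit disc
G1 = np.polynomial.polynomial.polyval(x, pmf1)
G2 = np.polynomial.polynomial.polyval(x, pmf2)
Gsum = np.polynomial.polynomial.polyval(x, pmf_sum)
assert abs(Gsum - G1 * G2) < 1e-12       # G_{N1+N2}(x) = G_{N1}(x) G_{N2}(x)
```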
II. BRANCHING PROCESS MODEL FOR TARGET COUNT
In the target branching process model, a target in the previous time step generates a random number of targets in the current time step. New targets are added to the process, and this number too is random. This gives the total number of targets at the current time step. Each such target "causes" a branching process of measurements in the sensor. A random number of new measurements are added to the target-originated measurement branching process. This gives the total number of measurements at the current time.
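A forward simulation makes this generative story concrete. The sketch below samples one realization of the count process under illustrative modeling choices (Bernoulli survival and detection, Poisson births and clutter); all parameter values are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_counts(K, lam0=5.0, surv=0.9, birth=0.5, pd=0.8, clut=2.0):
    """One draw of the target/measurement count process A_k, B_k."""
    a = rng.poisson(lam0)                     # A_0: prior target count
    A, B = [a], []
    for _ in range(K):
        survivors = rng.binomial(a, surv)     # each parent leaves 0 or 1 descendant
        a = survivors + rng.poisson(birth)    # C_k: spontaneous new targets
        detections = rng.binomial(a, pd)      # B|A: one measurement per detected target
        b = detections + rng.poisson(clut)    # D_k: superposed clutter measurements
        A.append(a)
        B.append(b)
    return A, B

A, B = simulate_counts(K=3)
print("targets A_0..A_3:", A, " measurements B_1..B_3:", B)
```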
A. Conditional independence assumptions
The backbone of the generative model of target count is its conditional independence structure. This fundamental structure is visualized using a directed graph called a Bayes network. These networks help conceptualize the relationships among the many variables in the model.
The branching process model for target count is depicted by the directed graph shown in Fig. 1. The nodes of the graph are discrete random variables whose outcomes are nonnegative numbers. The measurement times are $t_0 < t_1 < \cdots < t_K$. The time $t_0$ is interpreted as the time at which the prior distribution on target count is specified; the times $t_1, \ldots, t_K$ are sensor measurement times. Nodes are discrete random variables defined on the set of nonnegative integers, $\mathbb{N}$. Nodes $A_0, A_1, \ldots, A_K$ are the total numbers of targets at the measurement times; nodes $B_1, \ldots, B_K$ are the numbers of target-generated measurements; nodes $C_1, \ldots, C_K$ are the numbers of new targets that appear spontaneously and are superimposed with targets in the corresponding $A$-nodes; and nodes $D_1, \ldots, D_K$ are the numbers of clutter measurements that are superimposed with target-generated measurements in the corresponding $B$-nodes.

[Figure 1. The branching process target and measurement count network. Nodes are discrete random variables. Conditional independence assumptions are depicted as the directed edges of a Bayesian network.]

Realizations of these random variables are denoted by
$$a = (a_0, a_1, \ldots, a_K) \in \mathbb{N}^{K+1}, \quad b = (b_1, \ldots, b_K) \in \mathbb{N}^{K}, \quad c = (c_1, \ldots, c_K) \in \mathbb{N}^{K}, \quad d = (d_1, \ldots, d_K) \in \mathbb{N}^{K}. \tag{1}$$

The probability of a realization is denoted by $p(a, b, c, d)$. The conditional independence assumptions depicted in Fig. 1 are equivalent to the factorization
$$p(a,b,c,d) = p(a_0)\, p(c_1)\, p(d_1)\, p(a_1 | a_0, c_1)\, p(b_1 | a_1, d_1) \cdots p(c_K)\, p(d_K)\, p(a_K | a_{K-1}, c_K)\, p(b_K | a_K, d_K), \tag{2}$$
where $p(a_0) = \Pr[A_0 = a_0]$ and, for $k = 1, \ldots, K$,
$$\begin{aligned}
p(a_k | a_{k-1}, c_k) &= \Pr[A_k = a_k \mid A_{k-1} = a_{k-1},\, C_k = c_k] \\
p(b_k | a_k, d_k) &= \Pr[B_k = b_k \mid A_k = a_k,\, D_k = d_k] \\
p(c_k) &= \Pr[C_k = c_k] \\
p(d_k) &= \Pr[D_k = d_k].
\end{aligned} \tag{3}$$

The PGF of $p(a,b,c,d)$ has $4K + 1$ complex variables, one for each node of the graph. They are denoted by
$$x = (x_0, x_1, \ldots, x_K), \quad y = (y_1, \ldots, y_K), \quad z = (z_1, \ldots, z_K), \quad w = (w_1, \ldots, w_K).$$
The joint PGF is
$$G(x, y, z, w) = \sum p(a, b, c, d)\, x_0^{a_0} x_1^{a_1} \cdots x_K^{a_K}\, y_1^{b_1} \cdots y_K^{b_K}\, z_1^{c_1} \cdots z_K^{c_K}\, w_1^{d_1} \cdots w_K^{d_K}, \tag{4}$$
where
$$\sum = \sum_{a_0} \sum_{a_1} \cdots \sum_{a_K}\, \sum_{b_1} \cdots \sum_{b_K}\, \sum_{c_1} \cdots \sum_{c_K}\, \sum_{d_1} \cdots \sum_{d_K}. \tag{5}$$
The sums on the right hand side are from $0$ to $\infty$. Substituting the factorization (2) into (4) and nesting the sums in (5) gives
$$\begin{aligned}
G(x, y, z, w) = \sum_{a_0} p(a_0)\, x_0^{a_0} &\sum_{c_1} p(c_1)\, z_1^{c_1} \sum_{a_1} p(a_1 | a_0, c_1)\, x_1^{a_1} \sum_{d_1} p(d_1)\, w_1^{d_1} \sum_{b_1} p(b_1 | a_1, d_1)\, y_1^{b_1} \\
\cdots &\sum_{c_K} p(c_K)\, z_K^{c_K} \sum_{a_K} p(a_K | a_{K-1}, c_K)\, x_K^{a_K} \sum_{d_K} p(d_K)\, w_K^{d_K} \sum_{b_K} p(b_K | a_K, d_K)\, y_K^{b_K}.
\end{aligned} \tag{6}$$
This form reveals, albeit implicitly, a relationship between the joint PGFs of the prior variables
$$G_0^{\#A}(x_0) = \sum_{a_0=0}^{\infty} p(a_0)\, x_0^{a_0}, \qquad G_k^{\#C}(z_k) = \sum_{c_k=0}^{\infty} p(c_k)\, z_k^{c_k}, \qquad G_k^{\#D}(w_k) = \sum_{d_k=0}^{\infty} p(d_k)\, w_k^{d_k}, \quad k = 1, \ldots, K, \tag{7}$$
and the conditional PGFs
$$G_k^{A|AC}(x_k \mid a_{k-1}, c_k) = \sum_{a_k=0}^{\infty} p(a_k | a_{k-1}, c_k)\, x_k^{a_k}, \qquad G_k^{B|AD}(y_k \mid a_k, d_k) = \sum_{b_k=0}^{\infty} p(b_k | a_k, d_k)\, y_k^{b_k}. \tag{8}$$
The conditional PGFs (8) correspond to the time evolution of target count and to the measurement process, respectively.
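The factorization (2) and the joint PGF (4) can be exercised directly for K = 1 by truncated enumeration. In the sketch below the component distributions are illustrative assumptions (Poisson priors, Bernoulli survival and detection, newborns and clutter passing through); the run confirms that the PGF evaluates to 1 at the all-ones point and reproduces the expected target count.

```python
import numpy as np
from scipy.stats import poisson, binom

# Truncated enumeration of the K = 1 joint distribution via the
# factorization (2).  Assumed component models (for illustration):
# A0 ~ Poisson(3); C1 ~ Poisson(1); D1 ~ Poisson(2);
# A1 | a0, c1 = Binomial(a0, 0.9) + c1   (survival 0.9, newborns persist)
# B1 | a1, d1 = Binomial(a1, 0.8) + d1   (detection 0.8, clutter added)
LAM_A, LAM_C, LAM_D, SURV, PD, N = 3.0, 1.0, 2.0, 0.9, 0.8, 25

G_at_1 = 0.0   # PGF (4) at x0 = x1 = y1 = z1 = w1 = 1 (should be ~1)
EA1 = 0.0      # E[A1]
for a0 in range(N):
    pa0 = poisson.pmf(a0, LAM_A)
    for c1 in range(N):
        pc1 = poisson.pmf(c1, LAM_C)
        for s in range(a0 + 1):             # surviving A0 targets
            pa1 = binom.pmf(s, a0, SURV)
            wgt = pa0 * pc1 * pa1           # p(a0) p(c1) p(a1|a0,c1)
            G_at_1 += wgt                   # the d1 and b1 sums equal 1
            EA1 += wgt * (s + c1)
print(G_at_1, EA1, SURV * LAM_A + LAM_C)    # ~1, and EA1 ~ 0.9*3 + 1 = 3.7
```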
B. Branching process models with ICI assumptions

Consider targets in node $A_{k-1}$, $1 \le k \le K$. Each such target is a "parent" that produces a random number of "descendent" targets in node $A_k$. The parent target in $A_{k-1}$ is not counted in node $A_k$. The PGF of the random number of descendants from a single parent is denoted $G_k^{A|A^-}(x_k)$. The PGF is assumed the same for all parent targets in node $A_k$. Added to these targets is a random number of new targets generated in node $C_k$. The PGF of the number of newborn targets is $G_k^{A|C}(x_k)$, and it is assumed the same for all new parent targets in node $C_k$. The branching process in node $A_k$ is, under the ICI assumption for targets, the superposition of the descendent processes due to targets in node $A_{k-1}$ and new sources in $C_k$. The PGF of the superposition of conditionally independent processes is the product of their conditional PGFs. Thus, the PGF of the number of targets in node $A_k$ is
$$G_k^{A|AC}(x_k \mid a_{k-1}, c_k) = \left(G_k^{A|A^-}(x_k)\right)^{a_{k-1}} \left(G_k^{A|C}(x_k)\right)^{c_k}, \tag{9}$$
where the PGFs $G_k^{A|A^-}(x_k)$ and $G_k^{A|C}(x_k)$ are assumed known. They are called constituent PGFs because they are defined by the needs of the application. The product form in (9) is due to the ICI assumption.

Similarly, each target in node $A_k$ produces a random number $b_k$ of measurements in node $B_k$. (Extended targets, e.g., may produce more than one measurement.) The PGF of this number is assumed known; it is denoted $G_k^{B|A}(y_k)$. Each clutter source in node $D_k$ produces measurements in node $B_k$; the PGF is assumed known and denoted $G_k^{B|D}(y_k)$. The measurement branching process in node $B_k$ is, thus, the superposition of target-originated measurements from node $A_k$ and clutter-originated measurements from node $D_k$. Under the ICI assumption, these random numbers are mutually independent. The conditional PGF of the total number of measurements is
$$G_k^{B|AD}(y_k \mid a_k, d_k) = \left(G_k^{B|A}(y_k)\right)^{a_k} \left(G_k^{B|D}(y_k)\right)^{d_k}. \tag{10}$$
The simple forms of the conditional PGFs (9) and (10) are the result of the ICI assumption applied, respectively, to targets and measurements.

C. Backward recursion for the joint PGF

The following steps replace the sums over the last four nodes of the Bayes net by a composition of PGFs. Substituting (10) with $k = K$ into the sum over $b_K$ in (6) gives
$$S_b = \sum_{b_K} p(b_K | a_K, d_K)\, y_K^{b_K} = G_K^{B|AD}(y_K \mid a_K, d_K) = \left(G_K^{B|A}(y_K)\right)^{a_K} \left(G_K^{B|D}(y_K)\right)^{d_K}. \tag{11}$$
Substituting (11) into (6) gives the sum over $d_K$ as
$$\begin{aligned}
S_d &= \sum_{d_K} p(d_K)\, w_K^{d_K} \left(G_K^{B|A}(y_K)\right)^{a_K} \left(G_K^{B|D}(y_K)\right)^{d_K} \\
&= \left(G_K^{B|A}(y_K)\right)^{a_K} \sum_{d_K} p(d_K) \left(w_K\, G_K^{B|D}(y_K)\right)^{d_K} \\
&= \left(G_K^{B|A}(y_K)\right)^{a_K} G_K^{\#D}\!\left(w_K\, G_K^{B|D}(y_K)\right). 
\end{aligned} \tag{12}$$
Substituting (12) into (6) gives the sum over $a_K$ as
$$\begin{aligned}
S_a &= \sum_{a_K} p(a_K | a_{K-1}, c_K)\, x_K^{a_K} \left(G_K^{B|A}(y_K)\right)^{a_K} G_K^{\#D}(-) \\
&= G_K^{A|AC}\!\left(x_K\, G_K^{B|A}(y_K) \,\big|\, a_{K-1}, c_K\right) G_K^{\#D}(-) \\
&= \left(G_K^{A|A^-}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right)^{a_{K-1}} \left(G_K^{A|C}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right)^{c_K} G_K^{\#D}(-),
\end{aligned} \tag{13}$$
where $G_K^{\#D}(-) \equiv G_K^{\#D}\!\left(w_K\, G_K^{B|D}(y_K)\right)$. The second step in (13) uses the first definition in (8), while the last step uses (9). The next sum is over $c_K$. Substituting (13) gives
$$\begin{aligned}
S_c &= \left(G_K^{A|A^-}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right)^{a_{K-1}} \left\{\sum_{c_K} p(c_K)\, z_K^{c_K} \left(G_K^{A|C}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right)^{c_K}\right\} G_K^{\#D}(-) \\
&= \left(G_K^{A|A^-}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right)^{a_{K-1}} G_K^{\#C}\!\left(z_K\, G_K^{A|C}\!\left(x_K\, G_K^{B|A}(y_K)\right)\right) G_K^{\#D}(-).
\end{aligned} \tag{14}$$
The PGFs in (14) move outside the remaining summations. Applying the same procedure from $K$ backwards to $1$ gives the summation over all nodes except $A_0$ in terms of the composition of known PGFs. The final sum over $A_0$ is straightforward. The backward recursion is:

Algorithm.
  $\mathcal{A}_{K+1} = 1$
  For $k = K$ to $k = 1$ in steps of $-1$:
    $X_k = G_k^{B|A}(y_k)\, \mathcal{A}_{k+1}$
    $\mathcal{A}_k = G_k^{A|A^-}(x_k X_k)$
    $\mathcal{C}_k = G_k^{\#C}\!\left(z_k\, G_k^{A|C}(x_k X_k)\right)$
    $\mathcal{D}_k = G_k^{\#D}\!\left(w_k\, G_k^{B|D}(y_k)\right)$
  End
  $\mathcal{A}_0 = G_0^{\#A}(x_0\, \mathcal{A}_1)$

The joint PGF is
$$G = \mathcal{A}_0 \prod_{k=1}^{K} \mathcal{C}_k\, \mathcal{D}_k.$$

The number of factors in the joint PGF is the same as the number of "prior" nodes in the graphical model, i.e., nodes all of whose edges point to other nodes. Each factor corresponds to a pathway by which measurements are generated, and each is a composition of (assumed known) constituent PGFs.
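The backward recursion translates directly into code. The following sketch is an assumption-laden transcription, not code from the paper: it takes the constituent PGFs as callables, assumes scalar (real or complex) arguments, and mirrors the notation of the algorithm above.

```python
def joint_pgf(x, y, z, w, G0A, GA_prev, GA_birth, GB_tgt, GB_clut, GC, GD):
    """Backward recursion for the joint PGF G(x, y, z, w).

    x = (x0, ..., xK); y, z, w = (.1, ..., .K) as sequences.
    GA_prev[k-1] = G_k^{A|A-},  GA_birth[k-1] = G_k^{A|C},
    GB_tgt[k-1]  = G_k^{B|A},   GB_clut[k-1]  = G_k^{B|D},
    GC[k-1] = G_k^{#C},  GD[k-1] = G_k^{#D},  G0A = G_0^{#A}.
    """
    K = len(y)
    A = 1.0                                   # A_{K+1} = 1
    factors = []
    for k in range(K, 0, -1):                 # k = K, ..., 1
        Xk = GB_tgt[k - 1](y[k - 1]) * A      # X_k = G_k^{B|A}(y_k) A_{k+1}
        A = GA_prev[k - 1](x[k] * Xk)         # A_k = G_k^{A|A-}(x_k X_k)
        Ck = GC[k - 1](z[k - 1] * GA_birth[k - 1](x[k] * Xk))
        Dk = GD[k - 1](w[k - 1] * GB_clut[k - 1](y[k - 1]))
        factors += [Ck, Dk]
    out = G0A(x[0] * A)                       # A_0 = G_0^{#A}(x_0 A_1)
    for f in factors:                         # G = A_0 * prod C_k D_k
        out *= f
    return out
```

With proper PGFs and all arguments equal to one, `joint_pgf` returns 1, which reproduces the normalization check used repeatedly below.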
For $K = 1$ and $K = 2$, the joint PGFs are
$$G(x, y, z, w) = G_0^{\#A}\!\left(x_0\, G_1^{A|A^-}\!\left(x_1\, G_1^{B|A}(y_1)\right)\right) G_1^{\#C}\!\left(z_1\, G_1^{A|C}\!\left(x_1\, G_1^{B|A}(y_1)\right)\right) G_1^{\#D}\!\left(w_1\, G_1^{B|D}(y_1)\right) \tag{15}$$
and
$$\begin{aligned}
G(x, y, z, w) = {} & G_0^{\#A}\!\left(x_0\, G_1^{A|A^-}\!\left(x_1\, G_1^{B|A}(y_1)\, G_2^{A|A^-}\!\left(x_2\, G_2^{B|A}(y_2)\right)\right)\right) \\
& \times G_1^{\#C}\!\left(z_1\, G_1^{A|C}\!\left(x_1\, G_1^{B|A}(y_1)\, G_2^{A|A^-}\!\left(x_2\, G_2^{B|A}(y_2)\right)\right)\right) G_1^{\#D}\!\left(w_1\, G_1^{B|D}(y_1)\right) \\
& \times G_2^{\#C}\!\left(z_2\, G_2^{A|C}\!\left(x_2\, G_2^{B|A}(y_2)\right)\right) G_2^{\#D}\!\left(w_2\, G_2^{B|D}(y_2)\right),
\end{aligned} \tag{16}$$
respectively. As a check, note that $G = 1$ when all the variables in the joint PGF are equal to one.

III. EXAMPLES

The joint PGF of the branching process model is characterized by a small number of PGFs which are specified in an application-specific manner. The examples in this paper are representative of the wealth of possibilities, but by no means exhaustive.

The prior distributions need not be all of the same kind (e.g., all Poisson). If the priors are Poisson distributed, the PGFs are
$$G_0^{\#A}(x_0) = \exp\!\left(-\lambda_0^A + \lambda_0^A x_0\right), \qquad G_k^{\#C}(z_k) = \exp\!\left(-\lambda_k^C + \lambda_k^C z_k\right), \qquad G_k^{\#D}(w_k) = \exp\!\left(-\lambda_k^D + \lambda_k^D w_k\right), \quad k = 1, \ldots, K, \tag{17}$$
where the arguments of the PGFs are (univariate) complex variables. The mean numbers are $\lambda_0^A$, $\lambda_k^C$, and $\lambda_k^D$, respectively. If the priors are binomially distributed, the PGFs are
$$\begin{aligned}
G_0^{\#A}(x_0) &= \left(q_0^A + p_0^A x_0\right)^{N_0^A}, && q_0^A + p_0^A = 1, \quad N_0^A \ge 0, \\
G_k^{\#C}(z_k) &= \left(q_k^C + p_k^C z_k\right)^{N_k^C}, && q_k^C + p_k^C = 1, \quad N_k^C \ge 0, \quad k = 1, \ldots, K, \\
G_k^{\#D}(w_k) &= \left(q_k^D + p_k^D w_k\right)^{N_k^D}, && q_k^D + p_k^D = 1, \quad N_k^D \ge 0, \quad k = 1, \ldots, K,
\end{aligned} \tag{18}$$
with, respectively, mean numbers $p_0^A N_0^A$, $p_k^C N_k^C$, and $p_k^D N_k^D$. (All of the $q$ and $p$ parameters are nonnegative.) If no new targets are assumed, then $G_k^{\#C}(s) = 1$. If no clutter is assumed, then $G_k^{\#D}(s) = 1$.

The target count evolution model comprises two PGFs. One is analogous to a Markov evolution model because it relates how targets in the current time step generate targets in the next time step. A reasonable model for targets such as ships and aircraft is
$$G_k^{A|A^-}(x_k) = \alpha_{k-1}^A + \beta_{k-1}^A x_k, \qquad \alpha_{k-1}^A \ge 0, \quad \beta_{k-1}^A \ge 0, \quad \alpha_{k-1}^A + \beta_{k-1}^A = 1, \quad k = 1, \ldots, K, \tag{19}$$
where $\beta_{k-1}^A$ is the probability that a target at time $t_{k-1}$ survives to time $t_k$, i.e., generates exactly one target at time $t_k$. If targets persist indefinitely (that is, parent targets generate one and only one descendant), then $\beta_{k-1}^A = 1$ and $G_k^{A|A^-}(x_k) = x_k$. Splitting is reasonable in some applications, in which case terms are added to the right hand side of (19).

The other PGF relates how newborn targets in node $C_k$ generate targets in node $A_k$. It is reasonable to suppose that
$$G_k^{A|C}(x_k) = x_k, \quad k = 1, \ldots, K. \tag{20}$$
In words, (20) says that a newborn target generates one and only one target in node $A_k$. Problems in which newborn targets generate multiple targets in $A_k$ can be accommodated by adding terms to (20).

The measurement model also comprises two PGFs. The traditional measurement model for a point target is that it produces one measurement if detected and none if not. The PGF of the number of measurements per target is
$$G_k^{B|A}(y_k) = \alpha_k^B + \beta_k^B y_k, \qquad \alpha_k^B + \beta_k^B = 1, \quad \alpha_k^B \ge 0, \quad \beta_k^B \ge 0, \quad k = 1, \ldots, K. \tag{21}$$
The parameter $\beta_k^B$ is often called the probability of target detection. This model is inappropriate if the target is extended, i.e., produces more than one point measurement. This can happen because of target size and sensor resolution. If the number of target-generated measurements is random with specified PGF, then that PGF replaces the PGF (21). Said another way, to model extended targets add terms to the PGF (21).

The other PGF relates how points in a clutter-origin node $D_k$ correspond to clutter points that are superposed with target-generated measurements in node $B_k$. A reasonable model is that a point in $D_k$ generates one and only one point in $B_k$, that is,
$$G_k^{B|D}(y_k) = y_k. \tag{22}$$
Including more terms in (22) may accommodate known clutter points (e.g., shipwrecks in a sonar application); however, such problems are not considered here.
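The constituent choices (17) and (19)-(22) are easy to assemble for K = 1. The sketch below composes the one-scan joint PGF as in (15) with assumed parameter values, then checks the normalization G = 1 and the prior mean E[A1] = β0^A λ0^A + λ1^C by a finite difference.

```python
import math

# Constituent PGFs (17), (19)-(22) for K = 1; parameter values assumed.
lamA, lamC, lamD = 3.0, 1.0, 2.0    # Poisson prior means
bA, bB = 0.9, 0.8                   # survival and detection probabilities

G0A      = lambda s: math.exp(lamA * (s - 1.0))   # (17)
GC1      = lambda s: math.exp(lamC * (s - 1.0))
GD1      = lambda s: math.exp(lamD * (s - 1.0))
GA_prev  = lambda s: (1 - bA) + bA * s            # (19)
GA_birth = lambda s: s                            # (20)
GB_tgt   = lambda s: (1 - bB) + bB * s            # (21)
GB_clut  = lambda s: s                            # (22)

def G(x0, x1, y1, z1, w1):          # one-scan joint PGF, composed as in (15)
    u = x1 * GB_tgt(y1)
    return G0A(x0 * GA_prev(u)) * GC1(z1 * GA_birth(u)) * GD1(w1 * GB_clut(y1))

print(G(1, 1, 1, 1, 1))                          # = 1.0 (normalization check)
h = 1e-6                                         # E[A1] = dG/dx1 at all ones
print((G(1, 1 + h, 1, 1, 1) - G(1, 1, 1, 1, 1)) / h, bA * lamA + lamC)
```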
A. Example 1. One-scan problem with Poisson priors

The one-scan problem is defined by setting $K = 1$. (Equivalently, the one-scan problem can be defined for any $K \ge 2$ by setting $x_k = y_k = z_k = w_k = 1$ for $k = 2, \ldots, K$.) Assumptions (19)-(22) simplify the one-scan PGF (15) to
$$G \equiv G_0^{\#A}\!\left(x_0\left(\alpha_0^A + \beta_0^A x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right)\right) G_1^{\#C}\!\left(z_1 x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right) G_1^{\#D}(w_1 y_1). \tag{23}$$
Under the Poisson assumptions (17), $G$ becomes
$$G = \exp\!\left(-\lambda_0^A - \lambda_1^C - \lambda_1^D + \pi\right), \tag{24}$$
where $\pi$ is the multivariate polynomial
$$\pi = \lambda_0^A\left(\alpha_0^A x_0 + \beta_0^A \alpha_1^B x_0 x_1 + \beta_0^A \beta_1^B x_0 x_1 y_1\right) + \lambda_1^C\left(\alpha_1^B x_1 z_1 + \beta_1^B x_1 y_1 z_1\right) + \lambda_1^D\left(y_1 w_1\right). \tag{25}$$
In the absence of measurements, the expected number of targets in node $A_1$ is
$$E[A_1] = \left.\frac{d}{dx_1} G\right|_{x_0 = x_1 = y_1 = z_1 = w_1 = 1} = \lambda_0^A \beta_0^A + \lambda_1^C,$$
a result that matches intuition perfectly: the expected number of targets in node $A_1$ is the expected number in $A_0$ that generate a target in $A_1$ plus the expected number of new targets from $C_1$.

The conditional PGF of the number of targets in node $A_1$ given $b_1$ measurements in node $B_1$ is found from the marginal distribution over $A_0$, $C_1$, and $D_1$. Marginalization can be done either before or after conditioning. Marginalizing first is the easier choice. Letting $x_0 = z_1 = w_1 = 1$ gives
$$\pi = \lambda_0^A\left(\alpha_0^A + \beta_0^A \alpha_1^B x_1 + \beta_0^A \beta_1^B x_1 y_1\right) + \lambda_1^C\left(\alpha_1^B x_1 + \beta_1^B x_1 y_1\right) + \lambda_1^D\left(y_1\right).$$
Using well known properties of PGFs (see the Appendix), the conditional expectation of $A_1$ is a normalized derivative of the marginalized PGF $G$. Differentiating $b_1$ times with respect to $y_1$ gives
$$\left.\frac{d^{b_1}}{dy_1^{b_1}} G\right|_{y_1 = 0} = \kappa(x_1)\left[\lambda_0^A \beta_0^A \beta_1^B x_1 + \lambda_1^C \beta_1^B x_1 + \lambda_1^D\right]^{b_1} = \kappa(x_1)\left(\lambda_1^D + \beta_1^B E[A_1]\, x_1\right)^{b_1}, \tag{26}$$
where (25) was used in the last step, and where
$$\kappa(x_1) = \exp\!\left(-\lambda_0^A - \lambda_1^C - \lambda_1^D + \lambda_0^A\left(\alpha_0^A + \beta_0^A \alpha_1^B x_1\right) + \lambda_1^C\left(\alpha_1^B x_1\right)\right).$$
The conditional PGF is (26) divided by $\kappa(1)\left(\lambda_1^D + \beta_1^B E[A_1]\right)^{b_1}$, that is,
$$G(x_1 | b_1) = \exp\!\left(-\alpha_1^B E[A_1] + \alpha_1^B E[A_1]\, x_1\right)\left(\frac{\lambda_1^D + \beta_1^B E[A_1]\, x_1}{\lambda_1^D + \beta_1^B E[A_1]}\right)^{b_1}. \tag{27}$$
As a check, note that $G(1 | b_1) = 1$.

The PGF is a product of conditionally independent discrete distributions. The exponential is the PGF of a Poisson distribution of the number of targets in $A_1$ that do not generate measurements in $B_1$. The expected number of such targets is $\alpha_1^B E[A_1]$. The other term is the PGF of a Bernoulli trial (e.g., a coin toss) with
$$\Pr\!\left[\text{a measurement in node } B_1 \text{ is generated by a target in node } A_1\right] = \frac{\beta_1^B E[A_1]}{\lambda_1^D + \beta_1^B E[A_1]}. \tag{28}$$
The numerator on the right hand side is the expected number of measurements generated by targets in $A_1$, and the denominator is the expected total number of measurements in node $B_1$ from all sources. Raising the Bernoulli PGF to the power $b_1$ gives the PGF of the number of targets that produce measurements in node $B_1$. Thus, the result (27) accords well with intuition.

The expected number of targets in $A_1$ conditioned on $b_1$ is
$$E[A_1 | b_1] = \left.\frac{d}{dx_1} G(x_1 | b_1)\right|_{x_1 = 1} = E[A_1]\left(\alpha_1^B + b_1\, \frac{\beta_1^B}{\lambda_1^D + \beta_1^B E[A_1]}\right). \tag{29}$$
This result is intuitive: it is the sum of the means of the conditionally independent random variables whose PGFs are the factors of the product (27).

The variance of the number of targets in $A_1$ conditioned on $b_1$ is
$$\mathrm{Var}[A_1 | b_1] = \alpha_1^B E[A_1] + \frac{b_1\, \lambda_1^D\, \beta_1^B E[A_1]}{\left(\lambda_1^D + \beta_1^B E[A_1]\right)^2}. \tag{30}$$
This result follows from (27) and the fact that the variance of the sum of independent random variables is the sum of their variances. Here, one variable is Poisson, so its variance is $\alpha_1^B E[A_1]$. The other is binomial with $b_1$ (Bernoulli) trials; its variance is $b_1 p q$, where $q = 1 - p$ and $p$ is the right hand side of (28).
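The closed forms (29) and (30) can be cross-checked by brute-force Bayes over the same one-scan model. The sketch below uses illustrative parameter values and arbitrary truncation bounds; it builds the prior pmf of A1 by enumeration, weights it by the likelihood of observing b1 measurements, and compares the posterior moments with (29) and (30).

```python
import numpy as np
from scipy.stats import poisson, binom

lamA, lamC, lamD, bA, bB, b1 = 3.0, 1.0, 2.0, 0.9, 0.8, 6
EA1 = bA * lamA + lamC                      # prior mean of A1, from (25)

# Prior pmf of A1 = Binomial(A0, bA) + C1, with Poisson A0 and C1.
N = 40
pa1 = np.zeros(N)
for a0 in range(N):
    for s in range(a0 + 1):
        for c1 in range(N - s):
            pa1[s + c1] += (poisson.pmf(a0, lamA) * binom.pmf(s, a0, bA)
                            * poisson.pmf(c1, lamC))

# Posterior over A1: likelihood of b1 = detections + clutter.
post = np.zeros(N)
for a1 in range(N):
    for det in range(min(a1, b1) + 1):
        post[a1] += pa1[a1] * binom.pmf(det, a1, bB) * poisson.pmf(b1 - det, lamD)
post /= post.sum()
mean = (np.arange(N) * post).sum()
var = ((np.arange(N) - mean) ** 2 * post).sum()

denom = lamD + bB * EA1
print(mean, EA1 * ((1 - bB) + b1 * bB / denom))                # eq. (29)
print(var, (1 - bB) * EA1 + b1 * lamD * bB * EA1 / denom**2)   # eq. (30)
```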
B. Example 2. One-scan problem with binomial priors

Under the binomial assumptions (18), the PGF is a polynomial. For one scan, from (23),
$$G = \left[q_0^A + p_0^A x_0\left(\alpha_0^A + \beta_0^A x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right)\right]^{N_0^A} \left[q_1^C + p_1^C z_1 x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right]^{N_1^C} \left[q_1^D + p_1^D w_1 y_1\right]^{N_1^D}. \tag{31}$$
In the absence of measurements, the expected number of targets in the first scan is
$$E[A_1] = \left.\frac{d}{dx_1} G\right|_{x_0 = x_1 = y_1 = z_1 = w_1 = 1} = p_0^A \beta_0^A N_0^A + p_1^C N_1^C, \tag{32}$$
a result easily derived directly. The conditional PGF given $b_1$ measurements in node $B_1$ is determined from the appropriate marginal PGF; thus, letting $x_0 = z_1 = w_1 = 1$ gives
$$G = \pi^A(x_1, y_1)\, \pi^C(x_1, y_1)\, \pi^D(x_1, y_1), \tag{33}$$
where
$$\begin{aligned}
\pi^A(x_1, y_1) &= \left[q_0^A + p_0^A\left(\alpha_0^A + \beta_0^A x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right)\right]^{N_0^A} \\
\pi^C(x_1, y_1) &= \left[q_1^C + p_1^C x_1\left(\alpha_1^B + \beta_1^B y_1\right)\right]^{N_1^C} \\
\pi^D(x_1, y_1) &= \left[q_1^D + p_1^D y_1\right]^{N_1^D}.
\end{aligned}$$
Using the Leibnitz chain rule twice gives the derivative of order $b_1$ with respect to $y_1$ as
$$\left.\frac{d^{b_1} G}{dy_1^{b_1}}\right|_{y_1 = 0} = \sum_{i=0}^{b_1} \sum_{j=0}^{i} \binom{b_1}{i} \binom{i}{j} \left.\frac{d^{j} \pi^A}{dy_1^{j}}\right|_{y_1 = 0} \left.\frac{d^{i-j} \pi^C}{dy_1^{i-j}}\right|_{y_1 = 0} \left.\frac{d^{b_1 - i} \pi^D}{dy_1^{b_1 - i}}\right|_{y_1 = 0},$$
where, for $r = 0, 1, \ldots$,
$$\begin{aligned}
\left.\frac{d^{r} \pi^A}{dy_1^{r}}\right|_{y_1 = 0} &= N_0^A\left(N_0^A - 1\right) \cdots \left(N_0^A - r + 1\right) \left[q_0^A + p_0^A\left(\alpha_0^A + \beta_0^A \alpha_1^B x_1\right)\right]^{N_0^A - r} \left(p_0^A \beta_0^A \beta_1^B x_1\right)^{r}, \\
\left.\frac{d^{r} \pi^C}{dy_1^{r}}\right|_{y_1 = 0} &= N_1^C\left(N_1^C - 1\right) \cdots \left(N_1^C - r + 1\right) \left[q_1^C + p_1^C \alpha_1^B x_1\right]^{N_1^C - r} \left(p_1^C \beta_1^B x_1\right)^{r}, \\
\left.\frac{d^{r} \pi^D}{dy_1^{r}}\right|_{y_1 = 0} &= N_1^D\left(N_1^D - 1\right) \cdots \left(N_1^D - r + 1\right) \left(q_1^D\right)^{N_1^D - r} \left(p_1^D\right)^{r}.
\end{aligned}$$
The conditional PGF is the normalized derivative:
$$G(x_1 | b_1) = \left.\frac{d^{b_1} G}{dy_1^{b_1}}\right|_{y_1 = 0} \div \left.\frac{d^{b_1} G}{dy_1^{b_1}}\right|_{y_1 = 0;\ x_1 = 1}. \tag{34}$$
The expected number of targets in node $A_1$ is the first derivative with respect to $x_1$, namely, $E[A_1 | b_1] = G^{(1)}(1 | b_1)$. An explicit expression for $E[A_1 | b_1]$ is omitted.

Note that $G(x_1 | b_1)$ is a polynomial of degree $N_0^A + N_1^C$ in $x_1$ regardless of the number of measurements. It follows that $E[A_1 | b_1] \le N_0^A + N_1^C$, as anticipated given the model.
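Equation (34) normalizes a y1-derivative of the polynomial PGF (33), and for polynomials that derivative is just a coefficient slice. The sketch below builds (33) as a 2-D coefficient array (rows index powers of x1, columns powers of y1) with assumed parameter values, extracts the y1^{b1} slice, and checks the degree bound N0^A + N1^C; scipy's convolve2d does the polynomial products.

```python
import numpy as np
from scipy.signal import convolve2d

def poly_pow(base, n):
    """Raise a 2-D coefficient array (x1-power by y1-power) to integer n."""
    out = np.ones((1, 1))
    for _ in range(n):
        out = convolve2d(out, base)
    return out

# Illustrative binomial-prior parameters (assumptions, not from the paper).
N0A, N1C, N1D = 4, 2, 3
q0A, p0A = 0.3, 0.7
q1C, p1C = 0.6, 0.4
q1D, p1D = 0.5, 0.5
a0A, b0A = 0.1, 0.9          # alpha_0^A, beta_0^A (survival)
a1B, b1B = 0.2, 0.8          # alpha_1^B, beta_1^B (detection)

piA = poly_pow(np.array([[q0A + p0A * a0A, 0.0],
                         [p0A * b0A * a1B, p0A * b0A * b1B]]), N0A)
piC = poly_pow(np.array([[q1C, 0.0],
                         [p1C * a1B, p1C * b1B]]), N1C)
piD = poly_pow(np.array([[q1D, p1D]]), N1D)
G = convolve2d(convolve2d(piA, piC), piD)    # coeffs of x1^i y1^j, eq. (33)

b1 = 2                                       # observed measurement count
coef = G[:, b1]                              # prop. to d^{b1}G/dy1^{b1} at y1=0
coef = coef / coef.sum()                     # conditional PGF coefficients
EA1_post = (np.arange(len(coef)) * coef).sum()
print(np.nonzero(coef)[0].max(), N0A + N1C)  # degree <= N0A + N1C
print(EA1_post)                              # E[A1 | b1]
```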
C. Example 3. Two-scan problem: Poisson priors; Persistent targets; No clutter; No new targets

No clutter and no new targets are modeled by letting $G_k^{\#C}(z_k) = 1$ and $G_k^{\#D}(w_k) = 1$ for $k = 1, 2$. Persistent targets are modeled as $\beta_0^A = \beta_1^A = 1$. Substituting these expressions and (19)-(22) into (16) gives the two-scan PGF
$$G \equiv G_0^{\#A}\!\left(x_0 x_1 x_2\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_2^B + \beta_2^B y_2\right)\right). \tag{35}$$
As a check, note that $G = 1$ for $x_0 = x_1 = x_2 = y_1 = y_2 = 1$. Invoking the Poisson assumption (17) gives
$$G = \exp\!\left(-\lambda_0^A + \lambda_0^A x_0 x_1 x_2\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_2^B + \beta_2^B y_2\right)\right).$$
The quantity of interest is the expected number of targets in node $A_2$, so nodes $A_0$ and $A_1$ are marginalized out of the problem by letting $x_0 = x_1 = 1$. Now, suppose that $b_1 \ge 0$ and $b_2 \ge 0$ measurements are observed in nodes $B_1$ and $B_2$, respectively. From the Appendix, the conditional PGF is the normalized derivative
$$G(x_2 | b_1, b_2) = \left.\frac{d^{b_1 + b_2} G}{dy_1^{b_1}\, dy_2^{b_2}}\right|_{y_1 = y_2 = 0} \div \left.\frac{d^{b_1 + b_2} G}{dy_1^{b_1}\, dy_2^{b_2}}\right|_{y_1 = y_2 = 0;\ x_2 = 1}.$$
Direct calculation (details omitted) gives the conditional PGF
$$G(x_2 | b_1, b_2) = x_2^{b_1} \exp\!\left(-\lambda_0^A \alpha_1^B \alpha_2^B + \lambda_0^A \alpha_1^B \alpha_2^B x_2\right) \frac{\displaystyle\sum_{i=0}^{b_2} \binom{b_1}{b_2 - i} \frac{1}{i!}\left(\lambda_0^A \alpha_1^B \alpha_2^B\right)^{i} e^{-\lambda_0^A \alpha_1^B \alpha_2^B}\, x_2^{i}}{\displaystyle\sum_{i=0}^{b_2} \binom{b_1}{b_2 - i} \frac{1}{i!}\left(\lambda_0^A \alpha_1^B \alpha_2^B\right)^{i} e^{-\lambda_0^A \alpha_1^B \alpha_2^B}}, \tag{36}$$
where the binomial coefficient is defined to be $0$ if $b_2 > b_1$ and $i = 0, 1, \ldots, b_2 - b_1 - 1$.

$G(x_2 | b_1, b_2)$ is the product of three PGFs. The first is that of a random variable that produces exactly $b_1$ targets in node $A_2$. The next two account for targets that produce measurements in node $B_2$ but not in node $B_1$. The (a priori) distribution of the number of targets that produce no measurements in nodes $B_1$ and $B_2$ is Poisson distributed with mean $\lambda_0^A \alpha_1^B \alpha_2^B$. Its PGF is the second factor in (36). The third PGF is a polynomial of degree $b_2$. The coefficient of $x_2^{i}$ is the probability that $i$ targets produce measurements in $B_2$ but not in $B_1$, multiplied by a binomial coefficient that counts the number of ways that the $b_1$ known targets produce $b_2 - i$ measurements in $B_2$. Summing over $i$ and normalizing gives the PGF of the number of targets that are first observed at time step 2. Thus, the PGF (36) accords with intuition.

The conditional expected number of targets is the derivative of (36) with respect to $x_2$ at $x_2 = 1$. Equivalently, it is the sum of the means of the mutually independent variables whose PGFs are the factors of (36). The conditional mean is $b_1 + \lambda_0^A \alpha_1^B \alpha_2^B$ plus the mean of the polynomial factor in (36).

D. Example 4. General two-scan problem with clutter and new targets: Poisson priors

Substituting (19)-(22) into (16) gives the two-scan PGF
$$\begin{aligned}
G = {} & G_0^{\#A}\!\left(x_0\left(\alpha_0^A + \beta_0^A x_1\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right)\right)\right) \\
& \times G_1^{\#C}\!\left(z_1 x_1\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right)\right) G_1^{\#D}(w_1 y_1) \\
& \times G_2^{\#C}\!\left(z_2 x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right) G_2^{\#D}(w_2 y_2). 
\end{aligned} \tag{37}$$
Under Poisson assumptions, $G$ takes the form
$$G \equiv \exp\!\left(-\lambda_0^A - \lambda_1^C - \lambda_1^D - \lambda_2^C - \lambda_2^D + \pi\right),$$
where
$$\begin{aligned}
\pi = {} & \lambda_0^A x_0\left(\alpha_0^A + \beta_0^A x_1\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right)\right) \\
& + \lambda_1^C z_1 x_1\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right) + \lambda_1^D\left(w_1 y_1\right) \\
& + \lambda_2^C z_2 x_2\left(\alpha_2^B + \beta_2^B y_2\right) + \lambda_2^D\left(w_2 y_2\right).
\end{aligned}$$
As a check, note that $G = 1$ when all the variables are equal to one. Conditioned on $b_1 \ge 0$ and $b_2 \ge 0$ measurements in nodes $B_1$ and $B_2$, respectively, the problem is to find the conditional PGF $G(x_2 | b_1, b_2)$ of the number of targets in node $A_2$. Marginalizing over nodes $A_0$ and $A_1$, $C_1$ and $C_2$, and $D_1$ and $D_2$ is equivalent to setting all variables except $x_2$, $y_1$, and $y_2$ equal to one. The desired conditional PGF is computed from the derivatives of $G$ with
$$\begin{aligned}
\pi = {} & \lambda_0^A\left(\alpha_0^A + \beta_0^A\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right)\right) \\
& + \lambda_1^C\left(\alpha_1^B + \beta_1^B y_1\right)\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right) + \lambda_1^D\left(y_1\right) \\
& + \lambda_2^C x_2\left(\alpha_2^B + \beta_2^B y_2\right) + \lambda_2^D\left(y_2\right).
\end{aligned}$$
This is a straightforward calculation. The derivative of order $b_1$ with respect to $y_1$ evaluated at $y_1 = 0$ is
$$\left.\frac{d^{b_1} G}{dy_1^{b_1}}\right|_{y_1 = 0} = e^{\kappa(y_2, x_2)}\left[\gamma(y_2, x_2)\right]^{b_1}, \tag{38}$$
where
$$\kappa(y_2, x_2) = -\lambda_0^A - \lambda_1^C - \lambda_1^D - \lambda_2^C - \lambda_2^D + \lambda_0^A\left(\alpha_0^A + \beta_0^A \alpha_1^B\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right)\right) + \lambda_1^C \alpha_1^B\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right) + \lambda_2^C x_2\left(\alpha_2^B + \beta_2^B y_2\right) + \lambda_2^D\left(y_2\right)$$
and
$$\gamma(y_2, x_2) = \left(\lambda_0^A \beta_0^A + \lambda_1^C\right) \beta_1^B\left(\alpha_1^A + \beta_1^A x_2\left(\alpha_2^B + \beta_2^B y_2\right)\right) + \lambda_1^D.$$
The Leibnitz chain rule gives the derivative of (38) of order $b_2$ with respect to $y_2$ as
$$\left.\frac{d^{b_1 + b_2} G}{dy_1^{b_1}\, dy_2^{b_2}}\right|_{y_1 = y_2 = 0} = \sum_{i=0}^{b_2} \binom{b_2}{i} \left\{\frac{d^{i}}{dy_2^{i}}\, e^{\kappa(y_2, x_2)}\right\}_{y_2 = 0} \left\{\frac{d^{b_2 - i}}{dy_2^{b_2 - i}}\left[\gamma(y_2, x_2)\right]^{b_1}\right\}_{y_2 = 0}. \tag{39}$$
For $r = 0, 1, \ldots$,
$$\left\{\frac{d^{r}}{dy_2^{r}}\, e^{\kappa(y_2, x_2)}\right\}_{y_2 = 0} = e^{\kappa(0, x_2)}\left[\left(\left(\lambda_0^A \beta_0^A + \lambda_1^C\right) \alpha_1^B \beta_1^A + \lambda_2^C\right) \beta_2^B x_2 + \lambda_2^D\right]^{r}$$
and
$$\left\{\frac{d^{r}}{dy_2^{r}}\left[\gamma(y_2, x_2)\right]^{b_1}\right\}_{y_2 = 0} = b_1\left(b_1 - 1\right) \cdots \left(b_1 - r + 1\right)\left[\gamma(0, x_2)\right]^{b_1 - r}\left(\left(\lambda_0^A \beta_0^A + \lambda_1^C\right) \beta_1^B \beta_1^A \beta_2^B x_2\right)^{r}.$$
Close examination of (39) reveals that it is the product of an exponential of a term that is linear in $x_2$ and a polynomial in $x_2$ of degree $b_1 + b_2$. Normalizing the derivative (39) gives the conditional PGF $G(x_2 | b_1, b_2)$. The conditional mean of the number of targets in node $A_2$ is the derivative
$$E[A_2 | b_1, b_2] = G^{(1)}(1 | b_1, b_2).$$
Details are omitted.

IV. CONCLUDING REMARKS

Branching processes that count targets given a sequence of the number of measurements in K ≥ 1 scans are investigated in this paper. The total target count, or canonical number, is formulated in the absence of spatial information. The joint K-scan PGF of target and measurement counts is derived via a new backward recursion involving the PGFs of the priors and the constituent counting processes.

The concept of independent causal influence (ICI) plays a pivotal role in the derivation of the K-scan PGF. Under the ICI assumption the PGFs of the conditioning processes are simple "power law" expressions that enable the backward recursion for the PGF to be derived in closed form. The ICI assumption is applied to targets as well as measurements.

The PGF of the Bayes posterior distribution of target canonical number is a ratio of derivatives of the K-scan PGF. Several examples are given, including both binomial and Poisson distributions. The derivatives of the posterior PGF provide useful summary statistics of the posterior distribution of target number. The first derivative evaluated at one gives the conditional expected number of targets. Second-order cross-derivatives are the subject of ongoing work [8].

REFERENCES

[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers, 1988.
[2] D. Heckerman and J. S. Breese, "Causal Independence for Probability Assessment and Inference Using Bayesian Networks," IEEE Trans. on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 26, 826-831, 1996.
[3] D. Heckerman, "Causal Independence for Knowledge Acquisition and Inference," Proc. of the Ninth Conference on Uncertainty in Artificial Intelligence, Washington, DC, 122-127, Morgan Kaufmann Publishers, July 1993.
[4] R. Streit, "Branching Process for Estimating Spatial Distribution of Target Count," IEEE Transactions on Aerospace and Electronic Systems, submitted.
[5] F. Caron, P. Del Moral, A. Doucet, and M. Pace, "On the Conditional Distributions of Spatial Point Processes," Advances in Applied Probability, vol. 43, 2011, 301-307.
[6] T. E. Harris, The Theory of Branching Processes, Springer, 1963.
[7] K. B. Athreya and P. E. Ney, Branching Processes, Springer-Verlag, 1972. (Reprinted by Dover Publications, 2004.)
[8] A. O. Bozdogan, R. Streit, and M. Efe, "Palm Intensity for Track Extraction," in preparation.

APPENDIX. BASIC PROPERTIES OF GENERATING FUNCTIONS

Let $A$ be a discrete random variable with outcomes in the nonnegative integers $\mathbb{N} = \{0, 1, 2, \ldots\}$. Let $p(a) = \Pr[A = a]$. The PGF of $A$ is defined by the Maclaurin series
$$G(x) = \sum_{a=0}^{\infty} p(a)\, x^{a}.$$
Because $G(1) = \sum_{a=0}^{\infty} p(a) = 1$, $G(x)$ is an analytic function in the complex $x$-plane whose domain of analyticity includes the closed disc $|x| \le 1$. Since $G(x)$ is analytic,
$$G^{(n)}(0) \equiv \left.\frac{d^{n}}{dx^{n}}\, G(x)\right|_{x = 0} = n!\, p(n),$$
so that $p(n) = \frac{1}{n!}\, G^{(n)}(0)$. The mean of $A$ is the first derivative evaluated at $x = 1$, that is, $E[A] = G^{(1)}(1)$.

Multivariate PGFs are defined similarly. For example, the PGF of the joint discrete random variable $(A, B, C, D)$ with outcomes $(a, b, c, d) \in \mathbb{N}^{4}$ is
$$G(x, y, z, w) = \sum_{a=0}^{\infty} \sum_{b=0}^{\infty} \sum_{c=0}^{\infty} \sum_{d=0}^{\infty} p(a, b, c, d)\, x^{a} y^{b} z^{c} w^{d},$$
where $p(a, b, c, d)$ is the joint probability distribution. The series converges when the variables $x, y, z, w$ are each in the closed unit disc. Differentiating term by term gives
$$p(a, b, c, d) = \frac{1}{a!\, b!\, c!\, d!}\, G^{(a, b, c, d)}(0, 0, 0, 0),$$
where $G^{(a, b, c, d)}(x, y, z, w)$ denotes the partial derivative of $G$ with respect to $x, y, z, w$. Conditioning on $B = b$ gives the trivariate random variable $ACD \mid b$. Bayes Theorem and differentiation give its PGF as
$$G(x, z, w \mid b) = \sum_{a=0}^{\infty} \sum_{c=0}^{\infty} \sum_{d=0}^{\infty} p(a, c, d \mid b)\, x^{a} z^{c} w^{d} = \frac{1}{\alpha}\, G^{(0, b, 0, 0)}(x, 0, z, w),$$
where $\alpha = G^{(0, b, 0, 0)}(1, 0, 1, 1)$.
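The appendix identities are easy to exercise numerically when the PGF is a polynomial. The toy bivariate pmf below is an arbitrary assumption; the sketch checks G(1,1) = 1 and recovers the conditional pmf of A given B = b by the normalized-derivative rule.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Toy joint pmf p(a, b) on {0,1,2} x {0,1,2} (arbitrary, normalized).
p = np.array([[0.10, 0.05, 0.05],
              [0.20, 0.15, 0.05],
              [0.10, 0.20, 0.10]])   # rows: a, cols: b

# Bivariate PGF G(x, y) = sum p(a,b) x^a y^b; check G(1, 1) = 1.
print(P.polyval2d(1.0, 1.0, p))

# Conditional PGF of A given B = b: differentiate b times in y, set y = 0,
# and normalize.  For a polynomial, d^b/dy^b at y = 0 picks out the y^b
# coefficient column (times b!, which cancels in the normalization).
b = 1
col = p[:, b]                        # proportional to G^{(0,b)}(x, 0)
cond = col / col.sum()               # conditional pmf of A | B = b
E_A_given_b = (np.arange(3) * cond).sum()
print(cond, E_A_given_b)             # matches Bayes: p(a|b) prop. to p(a,b)
```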