Adv. Appl. Prob. 49, 388–410 (2017)
doi:10.1017/apr.2017.6
© Applied Probability Trust 2017
ASYMPTOTICS FOR THE TIME OF
RUIN IN THE WAR OF ATTRITION
PHILIP A. ERNST,∗ Rice University
ILIE GRIGORESCU,∗∗ University of Miami
Abstract
We consider two players, starting with m and n units, respectively. In each round, the
winner is decided with probability proportional to each player’s fortune, and the opponent
loses one unit. We prove an explicit formula for the probability p(m, n) that the first player
wins. When m ∼ Nx0, n ∼ Ny0, we prove the fluid limit as N → ∞. When x0 = y0,
z → p(N, N + z√N) converges to the standard normal cumulative distribution function
and the difference in fortunes scales diffusively. The exact limit of the time of ruin τN
is established as (T − τN) ∼ N^{−β} W^{1/β}, with β = 1/4 and T = x0 + y0. Modulo a constant,
W ∼ χ₁²(z0²/T²).
Keywords: War of attrition; gambler’s ruin; evolutionary game theory; diffusive scaling;
noncentered chi-squared
2010 Mathematics Subject Classification: Primary 60G40
Secondary 91A60
1. Introduction
In this paper we develop a ruin problem (brought to our attention by Robert W. Chen and
Larry Shepp) which derives inspiration from two important branches in the applied probability
community. The first of the two is the renowned ‘war of attrition’, a game theoretic model
developed by John Maynard Smith (see [23] and [24]), and later generalized in [6], and which
has proven to be critical for understanding animal conflict and behavior, particularly within the
context of evolutionary stable strategies (see [11], [13], and [26], among others). The second
of the two is the classical gambler’s ruin problem, dating to the work of de Moivre in 1711 [7],
and which has been extensively studied and furthered in [8], [10], [12], [22], [28], and [30],
among others. A further modification of the classical gambler’s ruin recently appeared in [16].
In all of the above two-player scenarios, the losing player must give one unit (a point, a
dollar, etc.) to his/her opponent. A very compelling reformulation of this appeared in 1979
[15] and was called the ‘attrition ruin problem’. In the setup of [15], the two opponents are in
attrition and aim to wear the other down over time. However, unlike the classical gambler’s
ruin, when the losing player loses a point, the point does not go to his/her opponent; the point
is simply discarded and the winning player stays as is. Kaigh [15] claimed that this model was
in many ways better suited for modeling games between opponents in contests such as board
games and in best-of-seven series. Yet, despite this model’s apparent applicability, it has only
been mentioned once in the literature [18]. We surmise that the reason is that the model is
Received 24 August 2016; revision received 13 January 2017.
∗ Postal address: Department of Statistics, Rice University, 6100 Main Street, Houston, TX 77005, USA.
Email address: philip.ernst@rice.edu
∗∗ Postal address: Department of Mathematics, University of Miami, 1365 Memorial Drive, Coral Gables, FL 33146,
USA.
388
Downloaded from https://www.cambridge.org/core. The Libraries - University of Missouri-St Louis, on 27 Oct 2017 at 07:15:51, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/apr.2017.6
limited by one of its key assumptions: there is a constant probability of winning and losing on
each turn.
We thus formulate the following model below, which, like the model in [23], involves a ‘war’
between units, and like the model in [15], is an attrition ruin problem. The game is played by
two armies, which for convenience we call army A and army B, starting with a pair (m, n) of
soldiers, where m and n are nonnegative integers, designating the assets of armies A and B,
respectively. At discrete times, as long as m, n > 0, army A (with m units) wins with probability
p = m/(m + n) and its number of units remains the same. When this happens, army B loses
one unit and (m, n) → (m, n − 1). Furthermore, with probability n/(m + n), army B wins and
(m, n) → (m − 1, n). Let p(m, n) be the ‘ruin’ probability; namely, the probability that army
A is reduced to 0 soldiers before army B is reduced to 0 soldiers.
It is important to note that our model is in some ways similar to the work on ruin probabilities
for correlated insurance claims. For references to this very active literature, we refer the reader
to [1]–[3], and [20], among others.
In Section 2 we prove an explicit formula for p(m, n), namely

p(m, n) = Σ_{j=0}^{n} (−1)^j (n − j)^{m+n} / (j! (m + n − j)!),   m ≥ 0, n ≥ 0, m + n > 0.
We then present our first key result (Theorem 1): as m → ∞,

p(m, m + x√m) → Φ(x),   −∞ < x < ∞,

where Φ is the standard normal cumulative distribution function (CDF). A difficult and essential
part of Theorem 1 is completed by noting a surprising connection between p(m, n) and the
Eulerian numbers. In Section 3 we take a closer look at the model in [15], where A and B lose
a unit with fixed probabilities independent of m and n. The models studied can be viewed as
random walks in the first quadrant, for which we refer the reader to [19], [21], and [29]. In
Section 4 we consider the continuous-time version of the model presented in Section 2 with Xt
and Yt denoting the assets of the two players at time t ≥ 0 and T (m, n) denoting the duration of
the game, i.e. the time when one of the players' assets is reduced to 0, assuming exponentially
distributed interevent times. We prove a fluid limit in Theorem 3 based on the law of large
numbers scale. Namely, if the two players start with m = X_0^N ∼ Nx0, n = Y_0^N ∼ Ny0,
T = x0 + y0, and t → Nt, then N^{−1}(X_{Nt}^N, Y_{Nt}^N) converges in distribution, as N → ∞, to
a pair of coupled solutions of time-inhomogeneous differential equations exploding in a finite
(nonrandom) time τ ≤ T . The result reveals that x0 − y0 = 0 is critical, implying that τ = T
and a finer scaling (diffusive) is available, when Z0N = X0N − Y0N = N 1/2 z0 . The difference
scales under t → N t (Theorem 4) to a diffusion bearing similarities to the Brownian bridge.
If τN = N^{−1} T(X_0^N, Y_0^N) is the scaled time of ruin then Theorem 5 (the second key result
of our work) and Corollary 2 determine exactly the limiting distribution of the residual time
T − τN ∼ N^{−β} W^{1/β}, where (modulo a known constant) W ∼ χ₁²(3z²/T), the noncentral
chi-squared distribution with one degree of freedom and noncentrality parameter 3z²/T. When
z = 0 this is simply χ₁². Remarkably, if T − τN is seen as a fluctuation term from the
deterministic limit T, then the scale is non-Gaussian (Gaussian would correspond to β = 1/2, as in
the classical central limit theorem), being equal to β = 1/4 instead.
The idea of the proof of Theorem 5 is to determine a sufficiently large family of martingales
for the limiting diffusion, in order to evaluate the moments of the residual time. These are
obtained as confluent hypergeometric functions indexed by a continuous parameter in (39).
The most challenging technical difficulty, as is the case with scaling limits, is the replacement
of the smooth martingales with their discrete approximations. This is done in Section 6, where
careful estimates are carried out near the singularity τN .
2. Ruin probabilities
For m > 0 and n > 0, the recurrence

p(m, n) = (n/(m + n)) p(m − 1, n) + (m/(m + n)) p(m, n − 1)   (1)
immediately holds. The corresponding boundary conditions are

p(m, 0) = 0 for m > 0,   p(0, n) = 1 for n > 0.   (2)
We now find an explicit form of p(m, n) in Proposition 1.
Proposition 1. The function

p(m, n) = Σ_{j=0}^{n} (−1)^j (n − j)^{m+n} / (j! (m + n − j)!),   m ≥ 0, n ≥ 0, m + n > 0,   (3)

satisfies (1) and (2), which uniquely determine p(m, n).
Proof. The proof is by induction. First, note that (3) agrees with (2) for m = 0. In addition, (3)
agrees with (2) for n = 0. This completes the base case. We now proceed with the induction
step. As defined in (3), p(m, n) must satisfy (1), since

(n/(m + n)) p(m − 1, n) + (m/(m + n)) p(m, n − 1)
= (1/(m + n)!) Σ_{j=0}^{n−1} C(m + n − 1, j) (−1)^j [n (n − j)^{m+n−1} + m (n − j − 1)^{m+n−1}]
= (1/(m + n)!) Σ_{j=0}^{n−1} [n C(m + n − 1, j) − m C(m + n − 1, j − 1)] (−1)^j (n − j)^{m+n−1}
= Σ_{j=0}^{n} (−1)^j (n − j)^{m+n} / (j! (m + n − j)!),   (4)

where C(·, ·) denotes the binomial coefficient and we have used the combinatorial identity

n C(m + n − 1, j) − m C(m + n − 1, j − 1) = (n − j) C(m + n, j).

Since (4) agrees with p(m, n) in (3), (1) follows. This completes the proof.
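The identity just proved is easy to check mechanically. The following snippet (ours, for illustration only) evaluates (3) in exact rational arithmetic and verifies the recurrence (1) and the boundary conditions (2) on a small grid.

```python
from fractions import Fraction
from math import factorial

def p_closed(m, n):
    # Closed form (3), evaluated exactly with rational arithmetic
    return sum(Fraction((-1) ** j * (n - j) ** (m + n),
                        factorial(j) * factorial(m + n - j))
               for j in range(n + 1))

# boundary conditions (2)
assert all(p_closed(m, 0) == 0 for m in range(1, 8))
assert all(p_closed(0, n) == 1 for n in range(1, 8))
# recurrence (1)
for m in range(1, 8):
    for n in range(1, 8):
        rhs = (Fraction(n, m + n) * p_closed(m - 1, n)
               + Fraction(m, m + n) * p_closed(m, n - 1))
        assert p_closed(m, n) == rhs
print(p_closed(1, 1))  # 1/2
```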
Note that (3) is not suitable for analysis with large n. We thus turn to examining p(m, n)
for m and n both large. We begin with Lemma 1 below. As we shall see, the closed form of (5)
in Lemma 1 is essential for proving the asymptotic formula (8) in Theorem 1.
Lemma 1. For (x, y) ∈ [0, 1) × [0, 1), the generating function φ satisfies

φ(x, y) = Σ_{m=0}^∞ Σ_{n=0}^∞ p(m, n) x^m y^n = xe^{−x}/(xe^{−x} − ye^{−y}) + (y/(1 − y)) (1/(y − x)).   (5)
Before supplying the proof, we pause to mention a key idea: we 'guessed' the correct right-hand side of the above equation by noting that, for small m and n, the p(m, n) are related to the Eulerian numbers A_{m,n} (see [25]) by

A_{n+m,n} = (n + m)! (p(m, n) − p(m + 1, n − 1)),   m > 0.
Proof. To calculate p(m, n) for m and n both large, consider the generating function
appearing after the first equality of (5). Without loss of generality, p(0, 0) = 1. After lengthy
algebraic manipulation, it can be shown, using (1), that

x(1 − y)φ_x + y(1 − x)φ_y = y/(1 − y)²,   φ_x = ∂φ/∂x,   φ_y = ∂φ/∂y,   (6)
holds. Furthermore, lengthy algebraic manipulation can be employed to show that

ψ(x, y) = xe^{−x}/(xe^{−x} − ye^{−y}) + (y/(1 − y)) (1/(y − x))
also satisfies (6) when φ is replaced by ψ. We now let (x0 , y0 ) be an arbitrary point of
(0, 1) × (0, 1) and let, for t ≤ 0, x = xt and y = yt be trajectories defined by
ẋ_t = x_t(1 − y_t),   ẏ_t = y_t(1 − x_t).   (7)

The derivative of φ(x_t, y_t) − ψ(x_t, y_t) is 0 along the trajectory, as φ and ψ satisfy (6). Along the
trajectories of (7), for some constant c > 0,

x_t e^{−x_t} = c y_t e^{−y_t}
must hold. Since x0 (1 − y0 ) > 0 and y0 (1 − x0 ) > 0, as t decreases below 0, then xt and yt
will also decrease. Thus, for some t0 < 0, xt0 = yt0 = 0. Given that φ(xt , yt ) − ψ(xt , yt ) is
constant along the trajectory, we have, for any arbitrary point (x0 , y0 ),
φ(x_0, y_0) − ψ(x_0, y_0) = φ(x_{t_0}, y_{t_0}) − ψ(x_{t_0}, y_{t_0}) = φ(0, 0) − ψ(0, 0) = 0,

where φ(0, 0) = ψ(0, 0). Thus, φ ≡ ψ, and, for (x, y) ∈ [0, 1) × [0, 1), (5) holds. This
concludes the proof.
We now proceed to prove our first key result (Theorem 1).
Theorem 1. As m → ∞,

p(m, m + x√m) → Φ(x),   −∞ < x < ∞,   (8)

where Φ is the standard normal CDF.
Proof. It seems intractable to proceed directly from (3), since the terms become large in
absolute value. Instead, we proceed with Lévy's convergence theorem. To apply it, we
first define the following characteristic functions:
φ_m(u) = Σ_{n=0}^∞ [p(m, n) − p(m, n − 1)] e^{inu} = E(e^{iξ_m u}),   (9)

where the ξ_m are random variables such that P(ξ_m ≤ n) = p(m, n).
To prove the theorem, we must show that the distribution of (ξ_m − m)/√m is asymptotically
normal. Equivalently, by Lévy's convergence theorem, we must show that, for −∞ < θ < ∞,

φ_m(θ/√m) e^{−iθ√m} = E(e^{iθ(ξ_m − m)/√m}) → e^{−θ²/2}.   (10)
From (5), we observe that

φ(x, y)(1 − y) = Σ_{m=0}^∞ Σ_{n=0}^∞ [p(m, n) − p(m, n − 1)] x^m y^n
= Σ_{m=0}^∞ φ_m(u) x^m   (y = e^{iu})
= xe^{−x}(1 − y)/(xe^{−x} − ye^{−y}) + y/(y − x).   (11)
Letting x = z, we employ Cauchy's formula in (11) and obtain

φ_m(u) = (1/(2πi)) ∮_C (1/z^{m+1}) [ze^{−z}(1 − e^{iu})/(ze^{−z} − e^{iu} e^{−e^{iu}}) + e^{iu}/(e^{iu} − z)] dz,

where C is a contour surrounding 0, with |z| < 1 on C. We now shift the contour out to
|z| = R → ∞. By doing so, we pick up the residues at each of the zeros z_k of ze^{−z} − e^{iu − e^{iu}},
except z = e^{iu}, since the latter is a removable pole of the integrand, where

|z_1| ≤ |z_2| ≤ · · · ,   z_k e^{−z_k} = e^{iu − e^{iu}},   z_k ≠ e^{iu}.
It is straightforward to check that z = e^{iu} is the only solution of modulus 1. We calculate the
residues and obtain

φ_m(u) = Σ_{k=1}^∞ (1/z_k^m) (1 − e^{iu})/(1 − z_k) ∼ (1/z_1^m) (1 − e^{iu})/(1 − z_1),   (12)

since |z_k| > |z_1| for k > 1.
We now return to (10) and make the substitution u = θ/√m. Then

z_1 e^{−z_1} = e^{iu − 1 − iu + u²/2} = e^{u²/2 − 1},

and, thus,

z_1 = 1 + ε,   ε = ±iθ/√m.   (13)
Substituting (13) into (12), we obtain

φ_m(θ/√m) e^{−iθ√m} ∼ e^{−θ²/2},

which is (10). This completes the proof.
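As a numerical illustration (ours, not from the paper), the recurrence (1) is stable in floating point and lets one tabulate p(m, n) for moderately large m, where the Gaussian limit (8) is already visible; the two printed columns agree to within roughly 0.05 at m = 400.

```python
from math import erf, sqrt

def ruin_table(M, N):
    # p(m, n) for 0 <= m <= M, 0 <= n <= N via the recurrence (1)
    p = [[0.0] * (N + 1) for _ in range(M + 1)]
    for n in range(N + 1):
        p[0][n] = 1.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            p[m][n] = (n * p[m - 1][n] + m * p[m][n - 1]) / (m + n)
    return p

Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF

m = 400
tab = ruin_table(m, m + 80)
for x in (-1.0, 0.0, 1.0):
    n = round(m + x * sqrt(m))
    print(x, round(tab[m][n], 3), round(Phi(x), 3))
```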
3. Simple random war
We now consider the model in [15], in which each army loses a soldier independently of
the sizes m and n. Let the success probability of each round be 1/2, independent of m and n. We refer to
such a model as a 'simple random war'. We note at the outset that the results of this
section up until (14) can also be obtained via the framework of a simple symmetric random
walk; however, for consistency, we employ the framework of Section 2.
Let q(m, n) denote the probability that army A is reduced to 0 soldiers before army B is
reduced to 0 soldiers. Then q(m, n) satisfies the recurrence
q(m, n) = (1/2) q(m − 1, n) + (1/2) q(m, n − 1),

with

q(m, 0) = 0 for m > 0,   q(0, n) = 1 for n > 0,
as the boundary conditions. By induction, we easily have
n−1
1
m+k−1
q(m, n) =
.
k
2m+k
(14)
k=0
This result is in agreement with [15, Equation (3), p. 23]. However, the author of [15] does not
consider asymptotics. We now proceed with Theorem 2.
Theorem 2. As m → ∞,

q(m, m + x√m) → Φ(x/√2),   −∞ < x < ∞.
Proof. We first define the characteristic functions analogous to (9) as

ψ_m(u) = Σ_{n=1}^∞ [q(m, n) − q(m, n − 1)] e^{inu} = E(e^{iη_m u}),

where η_m is a random variable with P(η_m ≤ n) = q(m, n). Summing by Newton's formula,

(1 − x)^{−m} = Σ_{n=0}^∞ C(m + n − 1, n) x^n,

we find, letting x = e^{iu},

ψ_m(u) = e^{iu} (2 − e^{iu})^{−m}.
Making the substitution u = θ/√m, we have, for −∞ < θ < ∞,

ψ_m(θ/√m) e^{−iθ√m} = E(e^{iθ(η_m − m)/√m}) → e^{−θ²}.

Invoking Lévy's convergence theorem, the result follows.
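The closed form for the characteristic function above can be checked directly: by (14), q(m, n) − q(m, n − 1) is the k = n − 1 term of the sum, and the resulting series can be compared with e^{iu}(2 − e^{iu})^{−m}. The check below is ours; the truncation level is an illustrative choice.

```python
import cmath
from math import comb

def char_fn_series(m, u, nmax=4000):
    # sum_{n>=1} [q(m,n) - q(m,n-1)] e^{inu}; by (14) the bracket equals
    # C(m+n-2, n-1) / 2^(m+n-1)
    return sum(comb(m + n - 2, n - 1) / 2 ** (m + n - 1) * cmath.exp(1j * n * u)
               for n in range(1, nmax))

m, u = 6, 0.7
closed = cmath.exp(1j * u) * (2 - cmath.exp(1j * u)) ** (-m)
print(abs(char_fn_series(m, u) - closed))  # tiny
```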
Table 1: Values of p(m, n) and q(m, n).

Initial capital              Win probability            Win probability
 m+n      m      n        p(m,n)   1−p(m,n)         q(m,n)   1−q(m,n)
   20      8     12        0.939     0.061           0.820     0.180
   20      9     11        0.779     0.221           0.676     0.324
   20     10     10        0.500     0.500           0.500     0.500
  100     45     55        0.958     0.042           0.843     0.157
  100     48     52        0.755     0.245           0.656     0.344
  100     50     50        0.500     0.500           0.500     0.500
  200     90    110        0.993     0.007           0.922     0.078
  200     95    105        0.890     0.110           0.761     0.239
  200    100    100        0.500     0.500           0.500     0.500
 1000    480    520        0.986     0.014           0.897     0.103
 1000    490    510        0.863     0.137           0.737     0.263
 1000    500    500        0.500     0.500           0.500     0.500
 2000    960   1040        0.999     0.001           0.963     0.037
 2000    980   1020        0.939     0.061           0.815     0.185
 2000   1000   1000        0.500     0.500           0.500     0.500

Values of p(m, n) and q(m, n) for various values of m and n are shown above in Table 1.
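Table 1 can be reproduced with a few lines of dynamic programming over the two recurrences (the code is ours; the array names are illustrative):

```python
def win_tables(M, N):
    # p(m,n): proportional-odds model of Section 2; q(m,n): fair-coin model of Section 3
    p = [[0.0] * (N + 1) for _ in range(M + 1)]
    q = [[0.0] * (N + 1) for _ in range(M + 1)]
    for n in range(1, N + 1):
        p[0][n] = q[0][n] = 1.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            p[m][n] = (n * p[m - 1][n] + m * p[m][n - 1]) / (m + n)
            q[m][n] = 0.5 * (q[m - 1][n] + q[m][n - 1])
    return p, q

p, q = win_tables(12, 12)
print(round(p[8][12], 3), round(q[8][12], 3))  # compare with the first row of Table 1
```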
4. Scaling limit
We start with a scaling of the problem. First, assume that the random evolution is a pure
jump process in continuous time. Let (Xt , Yt )t≥0 be the joint process of the fortunes at time
t ≥ 0 of both players and q(x, y) = x/(x + y), x + y > 0. At times t > 0 separated by
independent and identically distributed exponential waiting times of mean 1, the process is
updated according to the transition matrix
(X_t, Y_t) = (X_{t−}, Y_{t−} − 1) with probability q,
(X_t, Y_t) = (X_{t−} − 1, Y_{t−}) with probability 1 − q,   (15)

where q = q(X_{t−}, Y_{t−}). We shall assume that the process is defined on a filtered probability
space (Ω, F, P, (F_t)_{t≥0}) satisfying the usual conditions. Note that

X_t + Y_t = X_0 + Y_0 − N_t,   (16)

where (N_t)_{t≥0} is the Poisson process of intensity 1 defined by the exponential waiting times.
The scale at which we define the process is seen as macroscopic in the sense of (8) in
Theorem 1. More precisely, we introduce a scaling factor N > 0 such that
t_micro → N t_macro,   x_micro = N x_macro,   y_micro = N y_macro,

so that the quantities t, X_t, and Y_t from (15) are microscopic, or, equivalently, amplified by a
factor of N . The macroscopic quantities will survive in the limit as N → ∞ and are of order 1.
At a macroscopic scale, the sum of the two processes decreases in steps of size 1/N according
to a Poisson process with sped up rate N . The time-space scaling is ‘Eulerian’ since time and
space have the same scaling x/t = const., and is not diffusive (when x²/t = const.). It is the
difference between the two fortunes of the players that shall scale in a diffusive (equivalently,
parabolic) manner. Theorem 3 refers to the process under Eulerian scaling, while in the critical
scale of equal initial fortunes, the second-order approximation is reflected in Theorem 4. For
references regarding the approximation of Markov processes with differential equations, we
refer the reader to the classical text [9].
Henceforth, let t denote the macroscopic time and let lower case letters denote the other
macroscopic quantities. We shall also suppress the subscript 'macro'. Denote

x_t^N = N^{−1} X_{Nt},   y_t^N = N^{−1} Y_{Nt},   N_t^N = N^{−1} N_{Nt}.   (17)
We assume the initial conditions

lim_{N→∞} x_0^N = x_0,   lim_{N→∞} y_0^N = y_0,   (18)

and let

T_N = z_0^N = x_0^N + y_0^N,   T = z_0 = x_0 + y_0.   (19)

As in the discrete-time case, the process ends at time

τ_N = inf{t > 0 | x_t^N ∧ y_t^N = 0},   T_N = T(N x_0^N, N y_0^N) = N τ_N,   (20)

where the capitalized notation designates the time of ruin before scaling.
Theorem 3 shows the criticality of the x0 − y0 = 0 case and motivates the necessity of a
finer scale for the difference.
Theorem 3. Under (17) and (18), the processes (x_t^N), (y_t^N), t ∈ [0, T), converge jointly in
probability to the deterministic solutions of the affine equation

u̇_t = u_t/(T − t) − 1,   u_t = u_0 T/(T − t) + ((T − t)² − T²)/(2(T − t)),   x_t + y_t = T − t,   (21)

with initial values u_0 = x_0, respectively u_0 = y_0. Without loss of generality, assume that
x_0 ≥ y_0. The time to extinction is

τ = T − √(x_0² − y_0²) ≤ T if x_0 ≥ y_0,   with τ = T if x_0 = y_0.   (22)
Proof. The proof is given in Section 7.1.
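A quick Monte Carlo check of (22) is possible (the code is ours; all parameter values are illustrative): the number of jumps until ruin, divided by N, should be close to T − √(x0² − y0²) when x0 > y0.

```python
import random
from math import sqrt

def jump_count_to_ruin(m, n, rng):
    # Number of jumps until one army is wiped out; since jumps occur at rate N
    # after the time speed-up, the macroscopic ruin time is about jumps / N.
    jumps = 0
    while m > 0 and n > 0:
        if rng.random() < m / (m + n):
            n -= 1
        else:
            m -= 1
        jumps += 1
    return jumps

N, x0, y0 = 20_000, 0.6, 0.4
tau_hat = jump_count_to_ruin(int(N * x0), int(N * y0), random.Random(0)) / N
tau = (x0 + y0) - sqrt(x0 ** 2 - y0 ** 2)  # prediction (22)
print(round(tau_hat, 3), round(tau, 3))  # both near 0.553
```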
Corollary 1. If x_0 = y_0, then τ_N → T in probability.
Proof. We know that the process (x_·^N, y_·^N) converges in distribution to the deterministic
pair of solutions (x_·, y_·) of the ordinary differential equations in (21). Fix t > 0. For any Borel set F with
P(x_t y_t ∈ ∂F) = 0, where ∂F is the boundary of F, we have

lim_{N→∞} P(x_t^N y_t^N ∈ F) = P(x_t y_t ∈ F).

The product x_t y_t is deterministic and, hence, has a delta distribution. If t < τ from (22), then
x_t y_t ≠ 0 and 0 is a continuity point of the distribution. Since {τ_N ≤ t} = {x_t^N y_t^N = 0}, for
F = {0}, we have shown that if t < τ then lim_{N→∞} P(τ_N ≤ t) = 0. When x_0 = y_0 and
τ = T, we have τ_N ≤ T_N almost surely, since the process decreases only from the initial value. Since
lim_{N→∞} T_N = T, τ_N → T in distribution. Since T is nonrandom, the convergence holds in probability.

Proposition 2. Under the same assumptions as in Theorem 3, and using the same notation as
in (1), if x_0 − y_0 ≠ 0,

lim_{N→∞} p(N x_0^N, N y_0^N) = 1_{x_0 < y_0}.
Proof. The result can be proven directly or as a corollary of Theorem 1. In the continuous-time
setting, p(m, n) is defined identically in distribution over the skeleton Markov chain at
jump times. Suppose x_0 < y_0. By noting that p(m, ·) is increasing and that n = N y_0^N exceeds
m + x√m for every fixed x as N → ∞ (here x is as in Theorem 1), we see that the limit must be
greater than any value Φ(x), x ∈ R, and, thus, equal to 1. The opposite case is obtained by complementarity.
It is now clear that x_0 − y_0 = 0 is the critical case. We now require some additional
assumptions on the initial states:

z_t^N = N^{1/2}(x_t^N − y_t^N),   lim_{N→∞} z_0^N = z_0.   (23)

The last condition requires us to have

x_0 = y_0,   T = 2x_0.   (24)
Theorem 4. Under (23) and (24), in addition to the conditions of Theorem 3, the difference
process (z_t^N) converges in distribution, as a process on the Skorokhod space of right-continuous
paths with left limits, to the diffusion

dz_t = z_t/(T − t) dt + dW_t,   0 ≤ t < T,   starting at z_0.   (25)

Furthermore, (25) can be solved explicitly as the Gaussian process

z_t = (1 − t/T)^{−1} [ z_0 + ∫_0^t (1 − s/T) dW_s ].   (26)

Proof. The proof is given in Section 7.2.
Remark 1. Note that (25) is similar to the stochastic differential equation (SDE) satisfied by
the Brownian bridge, with the difference that the drift has a positive sign (alternatively, positive
direction) relative to the sign of z_t. This is significant, leading to (26), which shows that the
mean (when z_0 ≠ 0) and the standard deviation of the process z_t are of size (1 − t/T)^{−1}.
Unlike the Brownian bridge, the mean and standard deviation do not shrink to 0 as t → T. In
fact, as t → T, |z_t| → ∞ almost surely.
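The first two moments of (26) are available in closed form, and one can sanity-check them against the moment equations implied by (25). In the sketch below (ours; T and z0 are arbitrary illustrative values), the mean m(t) = z0 T/(T − t) should solve m′ = m/(T − t), and the variance v(t) = (T³ − (T − t)³)/(3(T − t)²), from the Itô isometry, should solve v′ = 2v/(T − t) + 1.

```python
# Closed-form mean and variance of the solution (26), checked against the
# moment ODEs implied by the SDE (25): m'(t) = m(t)/(T-t), v'(t) = 2v(t)/(T-t) + 1.
T, z0, h = 2.0, 0.5, 1e-6   # illustrative values; h is a finite-difference step

mean = lambda t: z0 * T / (T - t)
var = lambda t: (T ** 3 - (T - t) ** 3) / (3 * (T - t) ** 2)

for t in (0.3, 1.0, 1.7):
    dm = (mean(t + h) - mean(t - h)) / (2 * h)   # central differences
    dv = (var(t + h) - var(t - h)) / (2 * h)
    assert abs(dm - mean(t) / (T - t)) < 1e-4
    assert abs(dv - (2 * var(t) / (T - t) + 1)) < 1e-4
print("moment ODEs for (25)-(26) verified numerically")
```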
4.1. The C-tightness
We begin with a definition.
Definition 1. A family of right-continuous with left limits processes (η_t^N), defined on t ∈
[0, T), with values in R^d, d ≥ 1, indexed by N > 0, is said to be C-tight if, for any T′ ∈ (0, T) and
ε > 0, conditions (27) and (28) below are satisfied. To simplify notation, we let | · | denote the
Euclidean norm, irrespective of dimension. We have

lim_{M→∞} limsup_{N→∞} P(sup_{t∈[0,T′]} |η_t^N| > M) = 0,   (27)

lim_{δ→0} limsup_{N→∞} P(sup_{t,t′∈[0,T′], |t−t′|<δ} |η_t^N − η_{t′}^N| > ε) = 0.   (28)
In the proof of Theorem 3, we need to briefly consider d = 2, for η_t^N = (x_t^N, y_t^N), but in the
proofs of Theorems 4 and 5, we only need to consider d = 1.
Condition (27) of the definition states uniform boundedness on any compact time interval
and condition (28) states the uniform modulus of continuity of the family of processes based on
the Arzelà–Ascoli theorem. The term C-tightness refers to the fact that the processes, in our case
(17) and (23), defined as pure jump processes, live on the Skorokhod space D = D([0, T ), R),
endowed with the J1 metric as a Polish space. Conditions (27) and (28) guarantee that the
probability laws PN of (ηtN ) are precompact on D and that their limit points are continuous
paths in the subspace C([0, T ), R) ⊆ D([0, T ), R). For further references, we refer the reader
to [5] and [14].
We now let φ(x) be a real, nonnegative function having at most exponential growth.
Possibilities for φ(x) include the absolute values of polynomials as well as other standard
test functions (e.g. φ(x) = |x|^m, m even, or φ(x) = exp(λx), λ > 0). We shall now replace
condition (27) with the stronger condition

limsup_{N→∞} E[sup_{t∈[0,T′]} φ(η_t^N)] < ∞,   (29)

since condition (27) then becomes a consequence of Markov's inequality.
Let f ∈ C_b^{1,2}([0, T) × R^d) be a test function with bounded derivatives. Recall that the
process is sped up by a factor of N, which appears in front of the time integrals below. We give
(see, for example, [17]) the analogue of Itô's differential formula for pure jump processes. It
states that (M_t^{N,f}) and (M̄_t^{N,f}) are (F_t)-martingales for t ∈ [0, τ_N]. The jumps J_±^N(f) and
their probabilities, given by q_s^N, correspond to the dynamics in (15) after the scaling (17). They
are

M_t^{N,f} = f(t, x_t^N, y_t^N) − f(0, x_0^N, y_0^N)
− ∫_0^t [∂_s f(s, x_s^N, y_s^N) + N(J_+^N(f) q_s^N + J_−^N(f)(1 − q_s^N))] ds   (30)

and

M̄_t^{N,f} = (M_t^{N,f})² − ∫_0^t N[(J_+^N(f))² q_s^N + (J_−^N(f))²(1 − q_s^N)] ds,   (31)

where

J_+^N(f) = f(s, x_s^N, y_s^N − 1/N) − f(s, x_s^N, y_s^N),
J_−^N(f) = f(s, x_s^N − 1/N, y_s^N) − f(s, x_s^N, y_s^N),
q_s^N = x_s^N/(x_s^N + y_s^N).
5. The residual time to extinction in the critical case
Throughout the entirety of this section we shall assume that x0N + y0N = TN = T , a slightly
stronger initial condition than (19). Consistent with the scaling in (23), we note that, for a
given N, the process (ztN ) lives in the angle bounded by |z/(TN − t)| < N 1/2 , within the small
error appearing in (33). Let τN , defined in (20), be the time, on the macroscopic scale, when
one of the players reaches 0. Then
τN = τN (T , z0 ) = inf{t > 0 | |ztN | = (xtN + ytN )N 1/2 },
xtN
+ ytN
=
TN − NtN
= TN − t + o(N
−1/2
(32)
(33)
).
Equations (23) and (24) show that the time to ruin is a hitting time of a set depending on the
initial state z_0^N and the total fortune at time 0, given by T_N. We shall write τ_N for τ_N(T, z_0) when
the other dependence is not essential. Before scaling, the time to ruin is

T_N = T_N(N x_0^N, N y_0^N) = N τ_N.

From (17) and (18), recall that X_0 + Y_0 = N T_N and lim_{N→∞} T_N = T > 0. Corollary 1
shows that, in the critical case, lim_{N→∞} (T − τ_N) = 0 in probability. A more refined scaling
is necessary to estimate the order of magnitude of the residual time of ruin T − τ_N. Theorem 5
will establish the exact power β = 1/4 such that S_N → S in distribution, where

T − τ_N ∼ S_N N^{−β}   ⟺   N T − T_N ∼ S_N N^{1−β},   T = 2x_0,   (34)

and S > 0 has a distribution θ(T, z_0), which will be determined exactly in Corollary 2.
To evaluate the distribution of T − τ_N, we need to construct a family of martingales for the
limiting process (z_t) from (25), indexed by a parameter ρ and adapted to the filtration of the
process. This family has the form f(T − t, z_t), given by

f(a, u) = a^ρ g(a^{−1/2} u),   a > 0,   u, ρ ∈ R,   (35)

where

g″(u) + 3u g′(u) − 2ρ g(u) = 0.   (36)

A power series solution to (36) is of the form

g_ρ(u) = Σ_{k≥0} a_k u^k,   a_{k+2} = ((2ρ − 3k)/((k + 2)(k + 1))) a_k,   k = 0, 1, . . . .   (37)

We note the presence of two independent solutions, one even and one odd, which occurs because
the coefficients satisfy a two-step recurrence. The solution is convergent on the real line
and is a linear combination of the odd and even solutions, with coefficients a_0 and a_1 determined
by the initial conditions of the second-order ordinary differential equation (36).

Equation (36) can be solved after the substitution g(u) = h(u²/2), with h obtained from a
Kummer function in (39). This implies that g = g_ρ, depending on the exponent ρ, is even in u.
We have

x h″(x) + (1/2 + 3x) h′(x) − ρ h(x) = 0.
Let w(z) = M(a, b, z) be the solution of the Kummer equation

z w″(z) + (b − z) w′(z) − a w(z) = 0,   b = 1/2,   a = 1/2 + ρ/3,

w(z) = Σ_{n=0}^∞ (a^{(n)}/(b^{(n)} n!)) z^n,
where a^{(0)} = 1 and, for integers k ≥ 1, a^{(k)} = Π_{j=0}^{k−1} (a + j) is the rising factorial. The
Kummer transformation

e^{−z} M(a, b, z) = M(b − a, b, −z)   (38)

yields

h_ρ(x) = e^{−3x} w(3x) = e^{−3x} M(1/2 + ρ/3, 1/2, 3x) = M(−ρ/3, 1/2, −3x).   (39)
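The transformation (38)–(39) is easy to verify numerically; the helper below is ours and sums the Kummer series directly in pure Python (in practice one would use, e.g., mpmath's `hyp1f1`).

```python
from math import exp

def kummer_M(a, b, z, terms=200):
    # M(a, b, z) = sum_{n>=0} a^{(n)} z^n / (b^{(n)} n!), summed term by term
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        s += term
    return s

rho, x = 2.5, 1.3   # illustrative values
lhs = exp(-3 * x) * kummer_M(0.5 + rho / 3, 0.5, 3 * x)
rhs = kummer_M(-rho / 3, 0.5, -3 * x)
print(abs(lhs - rhs))  # the two forms of (39) agree
```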
It is remarkable that these functions are generalized Laguerre polynomials for −ρ/3 ∈ Z, which
will become more transparent after we identify the moments of the limiting random variable S.
As z → ∞,

M(a, b, z) ∼ Γ(b) [e^z z^{a−b}/Γ(a) + (−z)^{−a}/Γ(b − a)].   (40)

Given (40) and the power series form of the Kummer function at z = 0, it follows that, for any
ρ > 0, there exist 0 < c_1(ρ) < c_2(ρ) such that

c_1(ρ) |z|^{ρ/3} ≤ h_ρ(z) ≤ c_2(ρ) |z|^{ρ/3}   for all z.   (41)

Working with the Kummer function's integral representation or with the contiguous relations,
there exists a positive c_3(ρ) such that

|h_ρ^{(k)}(z)| ≤ c_3(ρ) (|z| ∨ 1)^{ρ/3 − k},   k ≥ 0.
The above being established, we present the main result of this section. It identifies the
distribution θ(T, z_0) on (0, ∞) of the scaled asymptotic residual time defined in (34).

Theorem 5. If, in addition to the conditions of Theorem 4, the initial value T_N = T, then the
scaled residual time S_N = N^{1/4}(T − τ_N) converges in distribution as N → ∞ to a positive
random variable S. Its distribution θ(T, z_0) has the exact moments

E[S^q] = (2/3)^{q/4} (Γ(1/2 + q/4)/Γ(1/2)) T^{3q/4} h_{3q/4}(z_0²/(2T)),   q > 0.   (42)

The above fully determines the distribution on (0, ∞).

Proof. The result is proven in Section 6. Regarding the identification of S, equation (42) is
established in Section 6.8 and the positivity of S is proven in Section 6.9.
Remark 2. We note that (42) is obtained from (39) by setting q = 4ρ/3.
In fact, the fourth power of S, modulo a constant, has a classical distribution.

Corollary 2. The distribution of the adjusted asymptotic residual time R = 3T^{−3} S⁴ is the
noncentral chi-squared distribution with k = 1 degrees of freedom and noncentrality parameter
3z_0²/T. In the z_0 = 0 case, we have h_ρ(0) = 1 and R is χ₁², i.e. R is the square of a standard
normal.

Proof. Set ρ and q such that m = q/4 = ρ/3, with m ≥ 0 an integer. Using (38) and (39), we
see that

h_{3m}(w) = M(−m, 1/2, −3w) = (m! Γ(1/2)/Γ(1/2 + m)) L_m^{(−1/2)}(−3w),   w = z_0²/(2T),   (43)
where L_m^{(−1/2)}(x), x ∈ R, is the mth generalized Laguerre polynomial of type α = −1/2, the
latter defined as

L_m^{(α)}(x) = (x^{−α} e^x/m!) (d^m/dx^m)(x^{m+α} e^{−x}) = C(m + α, m) M(−m, α + 1, x).
m=0 λ Lm (x), when x = −3w, α = − 2 , combined with (42), is
then equal to the moment generating function of a random variable with moments given as
(1/2 + m)
1
(−1/2)
m! Lm
(−3w) =
h3m (w) = M −m, , −3w .
(44)
(1/2)
2
This is equal (see [27]) to

M(λ) = Σ_{m=0}^∞ λ^m L_m^{(−1/2)}(−3w) = e^{3wλ/(1−λ)}/(1 − λ)^{1/2},   (45)

which is equivalent to saying that it is the moment generating function of (1/2)R′, where R′ ∼ χ₁²(6w),
the noncentered chi-squared distribution with one degree of freedom and noncentrality parameter 6w = 3z_0²/T.
Alternatively, R′ is the square of a normal with nonzero mean and unit variance.

The result immediately follows by combining (43)–(45) with (42).
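The generating function identity (45) can be verified numerically with the standard three-term recurrence for the generalized Laguerre polynomials (the code below is ours; w and λ are illustrative values):

```python
from math import exp, sqrt

def laguerre(m, alpha, x):
    # generalized Laguerre polynomial L_m^{(alpha)}(x) via the three-term recurrence
    if m == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, m):
        prev, cur = cur, ((2 * k + 1 + alpha - x) * cur - (k + alpha) * prev) / (k + 1)
    return cur

w, lam = 0.4, 0.3   # illustrative; the series requires |lam| < 1
series = sum(lam ** m * laguerre(m, -0.5, -3 * w) for m in range(80))
closed = exp(3 * w * lam / (1 - lam)) / sqrt(1 - lam)
print(abs(series - closed))  # small
```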
Corollary 3. The residual time up to extinction before scaling, N T − T_N, is of order N^{3/4}.
More precisely,

N^{−3/4} [N T − T_N(N x_0, N x_0 − N^{1/2} z_0)] → S in distribution.
Proof. This is an immediate consequence of Theorem 5 and the scaling in (17).
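Since exactly one unit is lost per round, the unscaled duration satisfies $T_N = m + n - (\text{winner's leftover fortune})$, so Corollary 3 says the winner's leftover fortune is of order $N^{3/4}$. A small Monte Carlo sketch of the original chain (ours, for illustration; the seed and sample sizes are arbitrary):

```python
import random

def residual(m, n, rng):
    """Run the war of attrition from fortunes (m, n): each round the winner
    is drawn with probability proportional to fortune and the opponent
    loses one unit.  Returns the winner's leftover fortune, i.e. m + n - T_N."""
    x, y = m, n
    while x > 0 and y > 0:
        if rng.random() < x / (x + y):
            y -= 1           # X wins the round
        else:
            x -= 1           # Y wins the round
    return x + y             # exactly one of x, y is zero here

rng = random.Random(1)
N = 200
samples = [residual(N, N, rng) for _ in range(200)]
mean_res = sum(samples) / len(samples)
# Corollary 3 predicts the leftover fortune is of order N^{3/4} (about 53 here)
print("mean leftover fortune for m = n =", N, ":", mean_res)
assert all(0 < r < 2 * N for r in samples)
```

The sample mean should be of order $N^{3/4}$; making $N$ sixteen times larger should roughly multiply it by 8.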
6. Proof of Theorem 5
The goal of this section is to prove Theorem 5, the second key result of our paper. To do so,
we first state and prove Propositions 3, 4, and 5. Throughout, let the scaled residual times be
  $\hat S_N = N^{\beta}\big(T - N^N_{\tau_N}\big), \qquad S_N = N^{\beta}(T - \tau_N), \qquad \beta = \tfrac14.$
6.1. Preliminary bounds
Proposition 3. The following inequality holds:
  $E\big[|\hat S_N - S_N|^2\big] \le T N^{-1/2}.$
Furthermore, if one of the families of random variables (ŜN )N>0 , (SN )N>0 is tight with limit
S in distribution, then so is the other.
Proof. Doob’s maximal inequality applied to the square-integrable martingale NtN − t (the
compensated Poisson process scaled by N ) satisfies
E sup |NτNN − τN |2 ≤ T N −1 .
(46)
s∈[0,T ]
We obtain the desired result by taking the expected value of |ŜN − SN |2 , noting that the
supremum dominates the value at t = τN , and multiplying with N 2β . The exponent of N
is 2 × 41 − 1 = − 21 . The second statement of the proposition thus follows.
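The variance bound behind (46) is easy to probe directly: the compensated martingale $N_t^N - t$ has $E[(N_T^N - T)^2] = T/N$, and Doob's $L^2$ inequality controls the supremum with its usual factor 4. The sketch below is ours (seeded simulation, conservative bound $4T/N$), not the paper's computation:

```python
import random

def sup_sq(N, T, rng):
    """One sample of sup_{t<=T} (N_t^N - t)^2 for the compensated Poisson
    process N_t^N = Poisson(Nt)/N, tracking extremes at jump times."""
    t, count, worst = 0.0, 0, 0.0
    while True:
        t_next = t + rng.expovariate(N)        # next jump of Poisson(N.)
        if t_next > T:
            return max(worst, (count / N - T) ** 2)
        # extremes occur just before and just after a jump
        worst = max(worst, (count / N - t_next) ** 2)
        count += 1
        worst = max(worst, (count / N - t_next) ** 2)
        t = t_next

rng = random.Random(3)
N, T, reps = 400, 2.0, 300
est = sum(sup_sq(N, T, rng) for _ in range(reps)) / reps
print("E[sup (N_s^N - s)^2] approx", est, "; Doob bound 4T/N =", 4 * T / N)
assert est <= 4 * T / N
```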
Proposition 4. Let $a, b > 0$ with $b/a > \ln 2/\ln\tfrac32$. For any $x \ge y \ge 1$,

  $\big[(x+y-1)^a (x-y+1)^b - (x+y)^a (x-y)^b\big]\, x + \big[(x+y-1)^a (x-y-1)^b - (x+y)^a (x-y)^b\big]\, y \ge 0.$   (47)
Proof. Divide both sides by $(x+y-1)^a (x-y)^b (x+y)$. The inequality becomes

  $\dfrac{x}{x+y}\Big(1 + \dfrac{1}{x-y}\Big)^b + \dfrac{y}{x+y}\Big(1 - \dfrac{1}{x-y}\Big)^b \ge \Big(1 + \dfrac{1}{x+y-1}\Big)^a.$

Let $f(u) = u^b$, a convex function for $u \ge 0$. Jensen's inequality implies that it is sufficient to
show that

  $\Big(1 + \dfrac{1}{x+y}\Big)^b \ge \Big(1 + \dfrac{1}{x+y-1}\Big)^a.$

For $z = 1/(x+y)$, noting that $1 + 1/(x+y-1) = (1-z)^{-1}$, the above expression is equivalent to

  $(1+z)^b (1-z)^a \ge 1, \qquad 0 < z \le \tfrac12, \qquad \dfrac{b}{a} > \dfrac{\ln 2}{\ln(3/2)}.$

The logarithm $g(z) = b \ln(1+z) + a \ln(1-z)$ is well defined since $0 < z < 1$; it has $g(0) = 0$
and, in all cases, $g'(0) = b - a > 0$. Since $g$ is concave and its critical point is $z_0 = (b-a)/(b+a)$, it is the case that
$z_0 > \tfrac12$ when $b/a > 3$ and, thus, $g'(z) > 0$ on $z \in (0, \tfrac12]$. This shows that the above inequality
holds. If $b/a \le 3$, the minimum will be achieved at an endpoint. We only have to verify the
inequality at $z = \tfrac12$, where the function takes the value $b \ln\tfrac32 - a \ln 2$, which is nonnegative by
assumption.
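Inequality (47), read with the convention $u^b = (u^2)^{b/2}$ used in Proposition 5, can also be probed by brute force on a grid; the sketch below is ours, with sample exponent pairs all satisfying $b/a > \ln 2/\ln(3/2) \approx 1.71$:

```python
import math

def lhs47(x, y, a, b):
    """Left-hand side of inequality (47); powers of possibly negative
    bases are read as u**b = (u*u)**(b/2), as in Proposition 5."""
    p = lambda u, e: (u * u) ** (e / 2.0)
    return ((x + y - 1) ** a * p(x - y + 1, b) - (x + y) ** a * p(x - y, b)) * x + \
           ((x + y - 1) ** a * p(x - y - 1, b) - (x + y) ** a * p(x - y, b)) * y

threshold = math.log(2) / math.log(1.5)          # about 1.7095
for a, b in [(1.0, 2.0), (0.5, 1.0), (2.0, 7.0)]:
    assert b / a > threshold                     # hypothesis of Proposition 4
    worst = min(lhs47(i / 4.0, j / 4.0, a, b)
                for i in range(4, 41) for j in range(4, i + 1))
    assert worst >= -1e-9, (a, b, worst)
print("inequality (47) holds on the sampled grid")
```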
Proposition 5. Let $a, b > 0$ be as in Proposition 4. Then $H_t^N(a, b) = (x_t^N + y_t^N)^a (z_t^N)^b$,
where $z^b := (z^2)^{b/2}$, is an $\mathcal F_t$-submartingale. Using Doob's maximal inequality, for $p > 1$,

  $E\Big[\sup_{s\in[0,\tau_N]} H_s^N(a, b)^p\Big] \le \Big(\dfrac{p}{p-1}\Big)^p N^{pc}\, E\big[\hat S_N^{(a+b)p}\big], \qquad c = \dfrac{b-a}{4} = \dfrac{b}{2} - \dfrac{a+b}{4}.$   (48)
Proof. We base the proof on (47). After dividing by $N$ everywhere, we note that the $ds$ term
in (30) applied to $H_t^N(a, b)$ yields the left-hand side of the expression in (47). If the integrand
is nonnegative, $H_t^N(a, b)$ is a submartingale. Without loss of generality, assume that $x \ge y$.
By construction, we stop at $\tau_N$ when the minimum is 0, but we know it is impossible that both $x$
and $y$ equal 0 at the same time. With these remarks, to verify that the conditions of (47)
are met, we also note that the expression under the integral is calculated only before $\tau_N$, i.e.
$N x_t^N \ge N y_t^N \ge 1$ for $t < \tau_N$. We then simply apply (47), proving the submartingale claim. We
thus have an inequality between the values of the submartingale at time 0 and at $\tau_N$. The stopping
time is uniformly bounded since $\tau_N \le T$ for all $N$. We employ Doob's maximal inequality
in the $L^p$ norm. Taking into account (32), the inequality is easily verified. We conclude by
verifying the exponent of $N$: at $\tau_N$ a factor of $N^{1/2}$ enters the factor with exponent $b$ only.
To compensate the power $N^{\beta(a+b)}$, where $\beta = \tfrac14$, we must subtract, giving the formula for $c$
in (48).
6.2. Plan of the proof of Theorem 5
Sections 6.3–6.10 supply the proof. We shall apply (54) to the function $f(x_t^N + y_t^N, z_t^N)$
from (35). Ideally, we would like to work with $f(T - t, z_t^N)$, but this would add errors
that would hinder the proof. Note that $z_t^N = \sqrt N (x_t^N - y_t^N)$, so that the application of Itô's
formula is direct in the form given in the setup for the process $(x_t^N, y_t^N)$. On the other hand,
$x_t^N + y_t^N = T_N - N_t^N$, and Proposition 3 will be necessary to replace $T_N - N^N_{\tau_N}$ with $T - \tau_N$.
The family of functions f , for all ρ ∈ R, constructed to satisfy (36) and (37), will nullify
the time integral in Itô’s formula for (zt ), making it a martingale for the limiting diffusion.
Of course, we are using the formula for (ztN ), and an error term is expected. After showing
that this error term vanishes as N → ∞, uniformly in t ∈ [0, τN ], we apply an optional
stopping argument to evaluate the martingale at both ends t = 0 and t = τN .
6.3. The differential formula
The function $g_\rho$ in (37) that defines $f$ relates to the Kummer function $h = h_\rho$ associated
to $\rho$ by $g(u) = h(\tfrac12 u^2)$, present in (39). Itô's formula, (30) and (31), yields the terms

  $\dfrac{x_s^N}{x_s^N + y_s^N}\Big[f\Big(x_s^N + y_s^N - \dfrac1N,\; z_s^N + \dfrac{1}{\sqrt N}\Big) - f\big(x_s^N + y_s^N,\, z_s^N\big)\Big]$
  $\quad + \dfrac{y_s^N}{x_s^N + y_s^N}\Big[f\Big(x_s^N + y_s^N - \dfrac1N,\; z_s^N - \dfrac{1}{\sqrt N}\Big) - f\big(x_s^N + y_s^N,\, z_s^N\big)\Big].$

All terms present in the Taylor formula of order two (we include the second derivative to
match (61)), including the error terms in Lagrange form, depend on the first three derivatives
of $f$ with respect to $z$ and the second derivative with respect to $t$. They are of the form
$a^{\rho-i} u^{i'} h_\rho^{(k)}(u^2/(2a))$, $0 \le k \le 3$, and $0 \le i, i' \le 4$, with $a = x_s^N + y_s^N$ and $u = z_s^N$.
As $z \to \infty$, the Kummer functions satisfy (40) and (41) and, based on (51) and the choices of
$a, b$ in (39), the function $h_\rho^{(k)}(z)$, and by consequence $z \mapsto f(T - t, z)$, are of polynomial order
$\tfrac13\rho - k$ as $z \to \infty$. After inspecting the derivatives, we see that, in the asymptotic formula,
modulo constants independent of $N$, $a$, $u$, but possibly not of $\rho$,

  $|\partial_{aa} f(a, u)| \le c_{21}(\rho)\, a^{2\rho/3 - 2}\, u^{2\rho/3}, \qquad |\partial_z^3 f(a, u)| \le c_{22}(\rho)\, a^{2\rho/3}\, u^{2\rho/3 - 3}.$   (49)

Additionally, the term $\partial_{aa} f$ carries a factor of $N \times (1/N)^2$ and the term $\partial_z^{(3)} f$ a factor of $N \times
(1/\sqrt N)^3$. This shows that (54) is a martingale plus an error term $E^N(f) = E_1^N(f) + E_2^N(f)$.
6.4. The error terms from the space derivatives and time derivatives
We start with the error term issued from the time derivatives present in (49),

  $E_1^N(f) \le N^{-1}\, T\, C_{21}(\rho) \sup_{s\in[0,\tau_N]} H_s^N\big(\tfrac13\rho - j,\ \tfrac43\rho - j'\big), \qquad j = 2,\ j' = 0.$

We continue with the error term issued from the space derivatives present in (49),

  $E_2^N(f) \le N^{-1/2}\, T\, C_{22}(\rho) \sup_{s\in[0,\tau_N]} H_s^N\big(\tfrac13\rho - j,\ \tfrac43\rho - j'\big), \qquad j = 0,\ j' = 3.$

For $\rho > \tfrac94$, let

  $p = \dfrac{2\rho/3 + 2\rho/3}{2\rho/3 + 2\rho/3 - j - j'} > 1.$

Above, we have written the sum of the two terms $2\rho/3$ because one comes from the power of $a =
T - \tau_N$, which is $\rho - \rho/3 = 2\rho/3$, while the power of $u = z$ is $2 \times \rho/3$. The exponent $\rho/3$ is
the asymptotic order of magnitude of the Kummer functions $h_\rho$.
6.5. Bounds based on Proposition 5
The error comes from the derivatives in time and space of $f$ in Taylor's formula. Then,
with $C_{21}(\rho)$ and $C_{22}(\rho)$ constants incorporating the usual terms in the Taylor series of order up
to three, as well as $c_{21}(\rho)$ and $c_{22}(\rho)$, and with the factor $T$ coming from the time integral, we
calculate, with $p = p_1$ corresponding to the second derivative in time, i.e. $j = 2$, $j' = 0$,

  $\big(E[(E_1^N(f))^p]\big)^{1/p} \le N^{-1}\, T\, C_{21}(\rho)\Big(E\Big[\sup_{s\in[0,\tau_N]} H_s^N\big(\tfrac23\rho - j,\ \tfrac23\rho - j'\big)^p\Big]\Big)^{1/p} \le N^{-1+c_1}\, T\, C_{21}(\rho)\big(E\big[\hat S_N^{4\rho/3}\big]\big)^{1/p},$

with $c_1$ calculated for $a = \tfrac23\rho - j$, $b = \tfrac23\rho - j'$ and $j = 2$, $j' = 0$, yielding

  $c_1 - 1 = \tfrac14(j - j') - 1 = -\tfrac12.$

Similarly, we calculate, with $p = p_2$ corresponding to the third derivative in space, i.e. $j = 0$,
$j' = 3$,

  $\big(E[(E_2^N(f))^p]\big)^{1/p} \le N^{-1/2}\, T\, C_{22}(\rho)\Big(E\Big[\sup_{s\in[0,\tau_N]} H_s^N\big(\tfrac13\rho - j,\ \tfrac43\rho - j'\big)^p\Big]\Big)^{1/p} \le N^{-1/2+c_2}\, T\, C_{22}(\rho)\big(E\big[\hat S_N^{4\rho/3}\big]\big)^{1/p},$

with $c_2$ calculated for $a = \tfrac23\rho - j$, $b = \tfrac23\rho - j'$, giving, for $j = 0$, $j' = 3$,

  $c_2 - \tfrac12 = \tfrac14(j - j') - \tfrac12 = -\tfrac54.$
6.6. Equation based on the optional stopping theorem
For fixed $N$, $\tau_N < T$ almost surely, because $x_t^N$ and $y_t^N$ cannot reach zero at the same time,
and $T_N - N^N_{\tau_N} = x^N_{\tau_N} + y^N_{\tau_N}$. We are ready to apply the optional stopping theorem to equate the
initial value with the final value at $t = \tau_N$. We obtain

  $E\Big[\big(T - N^N_{\tau_N}\big)^\rho\, g_\rho\Big(\dfrac{z^N_{\tau_N}}{\sqrt{T - N^N_{\tau_N}}}\Big)\Big] = T^\rho\, g_\rho\Big(\dfrac{z_0}{\sqrt T}\Big) + E[E^N(f)].$   (50)

As mentioned in (32), at $\tau_N$ we have the identity

  $|z^N_{\tau_N}| = \sqrt N\,\big(T_N - N^N_{\tau_N}\big) \approx \sqrt N\,(T_N - \tau_N),$

obtained from algebraic manipulation of (23). The replacement of the Poisson process with the
value $\tau_N$ will be carried out based on Doob's maximal inequality, given that $\tau_N \le T$ uniformly
in $N > 0$, as in Proposition 3. We have used the fact that $g(u)$ is even, being a function of $u^2$,
having equal values at $z = \pm(T - t)$; these correspond to player X (with +), respectively Y
(with −), being the winner. Relation (50) can be written in terms of the Kummer function
$h = h_\rho$ associated to $\rho$ by $g(u) = h(\tfrac12 u^2)$ as follows:

  $E\Big[(T - \tau_N)^\rho\, h_\rho\Big(\dfrac{N(T - \tau_N)}{2}\Big)\Big] = T^\rho\, h_\rho\Big(\dfrac{z_0^2}{2T}\Big) + E[E^N(f)].$   (51)
6.7. Tightness of $(\hat S_N)_{N>0}$ and $(S_N)_{N>0}$
Denote $v_N = E[\hat S_N^{4\rho/3}]$. Note that (41) yields a lower bound

  $c_1(\rho)\, v_N \le E\Big[\big(T - N^N_{\tau_N}\big)^\rho\, g_\rho\Big(\dfrac{z^N_{\tau_N}}{\sqrt{T - N^N_{\tau_N}}}\Big)\Big],$

and that the upper bound is

  $E[E^N(f)] \le N^{-1/2}\, T\, C_{21}(\rho)\, v_N^{1/p_1} + N^{-5/4}\, T\, C_{22}(\rho)\, v_N^{1/p_2}.$

This implies that, with the obvious meaning of the constants,

  $c_1(\rho)\, v_N \le T^\rho\, g_\rho\Big(\dfrac{z_0}{\sqrt T}\Big) + \bar c_0 N^{-1/2} + \bar c_1 N^{-1/2}(v_N + 1),$

showing that

  $\limsup_{N\to\infty} v_N = \limsup_{N\to\infty} E\big[\hat S_N^{4\rho/3}\big] \le c_1(\rho)^{-1}\, T^\rho\, g_\rho\Big(\dfrac{z_0}{\sqrt T}\Big) < \infty.$

This proves that $(\hat S_N)$ is a tight family of nonnegative random variables. Let $S$ be a limit point
in the distribution sense. We know from Proposition 3 that $(S_N)$ is also tight.
6.8. Distribution of the positive part $S_+$ of the limiting distribution
We proceed to prove that any limit point $S$ has distribution completely determined by (42)
and, as a consequence, it is unique in law. We conclude that $S_N \xrightarrow{\ d\ } S$. To simplify notation, we
keep the same notation $\hat S_N$ for the subsequence converging to $S$. Fix a sufficiently small $\varepsilon > 0$.
First we determine the moments of $S \vee \varepsilon$.
Returning to (51), we now know that the error term vanishes as $N \to \infty$ and

  $E\big[\big(T - N^N_{\tau_N}\big)^\rho\, h_\rho\big(\tfrac12 N (T - N^N_{\tau_N})\big)\big] = N^{-\rho/4}\, E\big[\hat S_N^\rho\, h_\rho\big(\tfrac12 N^{3/4} \hat S_N\big)\big]$
  $\quad = N^{-\rho/4}\, E\big[(\hat S_N \vee \varepsilon)^\rho\, h_\rho\big(\tfrac12 N^{3/4} (\hat S_N \vee \varepsilon)\big)\big] + C(N, \varepsilon),$

where $C(N, \varepsilon) \le c_2(\rho)\, \varepsilon^{4\rho/3}$.
At this point, our goal is to determine the distribution of the limit $S$. As we see, the last
relation involves only the deterministic functions $h_\rho$ and the random variables $\hat S_N$. We shall appeal to the Skorokhod
representation theorem (see [4]): there exists a probability space supporting all $(S_N)$ and $S$, such
that all preserve the same distributions and $S_N \to S$ almost surely.
The random variables satisfy $\hat S_N \vee \varepsilon \ge \varepsilon$, and $N^{3/4}(\hat S_N \vee \varepsilon) \to \infty$ pointwise. Due to this fact and
(40), we know the limit exactly, i.e.

  $\lim_{N\to\infty} \dfrac{h_\rho\big(\tfrac12 N^{3/4}(\hat S_N \vee \varepsilon)\big)}{\big(N^{3/4}(\hat S_N \vee \varepsilon)\big)^{\rho/3}} = \Big(\dfrac32\Big)^{\rho/3}\Big(\dfrac{\Gamma(1/2 + \rho/3)}{\Gamma(1/2)}\Big)^{-1} \quad \text{almost surely}.$

Then

  $\lim_{N\to\infty} N^{-\rho/4}\, E\big[(\hat S_N \vee \varepsilon)^\rho\, h_\rho\big(\tfrac12 N^{3/4}(\hat S_N \vee \varepsilon)\big)\big] = \Big(\dfrac32\Big)^{\rho/3}\Big(\dfrac{\Gamma(1/2 + \rho/3)}{\Gamma(1/2)}\Big)^{-1} E\big[(S \vee \varepsilon)^{4\rho/3}\big].$

By dominated convergence, if $S_+ = S\, \mathbf 1_{\{S>0\}}$, then

  $E\big[S_+^{4\rho/3}\big] = \lim_{\varepsilon\downarrow 0} E\big[(S \vee \varepsilon)^{4\rho/3}\big] = \Big(\dfrac23\Big)^{\rho/3}\, \dfrac{\Gamma(1/2 + \rho/3)}{\Gamma(1/2)}\, T^\rho\, h_\rho\Big(\dfrac{z_0^2}{2T}\Big) \quad \text{for all } \rho > \tfrac94,$   (52)

which is essentially (42).
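In the $z_0 = 0$ case (so $h_\rho \equiv 1$), formula (52) can be cross-checked against Corollary 2, which gives $S = (T^3/3)^{1/4}\,|Z|^{1/2}$ for a standard normal $Z$. A quick numerical sketch (ours), using the fractional moment formula $E|Z|^p = 2^{p/2}\,\Gamma((p+1)/2)/\sqrt\pi$:

```python
from math import gamma, sqrt, pi

T = 2.0
for rho in (2.5, 3.0, 4.0, 5.5):                  # any rho > 9/4
    # right-hand side of (52) with z0 = 0, i.e. h_rho(0) = 1
    rhs = (2.0 / 3.0) ** (rho / 3) * gamma(0.5 + rho / 3) / gamma(0.5) * T ** rho
    # E[S^{4 rho/3}] from Corollary 2: S = (T^3/3)^{1/4} |Z|^{1/2},
    # using E|Z|^p = 2^{p/2} Gamma((p + 1)/2) / sqrt(pi)
    p = 2 * rho / 3
    lhs = (T ** 3 / 3) ** (rho / 3) * 2 ** (p / 2) * gamma((p + 1) / 2) / sqrt(pi)
    assert abs(lhs - rhs) < 1e-10 * rhs
print("fractional moments (52) match the chi-squared representation at z0 = 0")
```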
6.9. Uniqueness of the moments and $S_+ = S$
The next step is to show that a random variable with moments given by (52) is uniquely
determined when the moments are known on an interval $(3, \infty)$. This holds due to the Mellin
transform theorem (see [31, Theorem 6a, p. 243]). Finally, we want to show that a distribution
with those moments has exactly the moments on the right-hand side of (52) for all $\rho > 0$; this
is proven in the derivation of (45).
The final step is to show that such a distribution does not concentrate at 0. This holds because
we can now use all moments down to $q = 0$ for $S_+$. For $q = 0$, we obtain the 0th moment,
equal to the total mass of $S_+$. This is equal to 1, which shows that $S = S_+$, or $\mathbb P(S = 0) = 0$.
6.10. Special case $z_0 = 0$
In the $z_0 = 0$ case, we obtain $S^2 \sim c_0 |Z|$, $Z \sim N(0, 1)$, with the constant $c_0$ determined in
Corollary 2. This concludes the proof of Theorem 5.
7. Proofs of Theorems 3 and 4
7.1. Proof of Theorem 3
The processes $(x_t^N)$ and $(y_t^N)$ are bounded in the interval $[0, T]$ by construction, so condition
(27) immediately holds. The test function can be assumed to be simply continuous in all
derivatives since the time and state spaces are compact. Let $f(t, x, y) = x$; the case $f(t, x, y) = y$
is analogous. Equations (30) and (31) show that there exists $m_N(t', t) = O(1/N)$, uniformly
in $t, t'$, such that

  $x_t^N - x_{t'}^N = -\int_{t'}^{t} (1 - q_s^N)\, ds + m_N(t', t)(t - t').$
This shows that condition (28) holds. Conditions (27) and (28) guarantee that any limit point
of the tight sequence is actually continuous.
For a general test function $f(t, x, y) = f(t, x)$, let

  $A_t f(t, x) = \Big(\dfrac{x}{T - t} - 1\Big)\, \partial_x f(t, x),$   (53)

corresponding to (25). Given a path $(\eta_t)$ in the Skorokhod space and $t \in [0, T']$, $0 < T' < T$,
it follows that the functional $\Phi$,

  $\eta \mapsto \Phi(\eta) = \sup_{t\in[0,T']} \Big| f(t, \eta_t) - f(0, \eta_0) - \int_0^t \big[\partial_s f(s, \eta_s) + A_s f(s, \eta_s)\big]\, ds \Big|,$

is continuous and bounded. Let $x_\cdot$ be a limit point of the tight sequence $(x_\cdot^N)_{N>0}$. The quadratic
variation is easily verified to be of order $1/N$ according to (31). To explain the operator $A_t$
in (53), note that the first-order term in the Taylor series at $x = x_s^N$ with change $-1/N$ is

  $-\partial_x f(s, x_s^N)(1 - q_s^N) = -\partial_x f(s, x_s^N)\, \dfrac{T_N - N_s^N - x_s^N}{T_N - N_s^N} = A_s f(s, x_s^N) + e_N,$

with $E[e_N^2] \to 0$ uniformly in time. The error term is controlled by (46).
From the continuity theorem, $E[\Phi(x_\cdot)] = 0$. This reasoning is repeated in more detail in
the proof of Theorem 4 in Section 7.2. It follows that $x_\cdot$ is a path (possibly random) satisfying,
almost surely,

  $f(t, x_t) - f(0, x_0) - \int_0^t \big[\partial_s f(s, x_s) + A_s f(s, x_s)\big]\, ds = 0.$
Since x· is continuous, the equality must be satisfied for all t. There is only one possible
continuous solution to this ordinary differential equation posed in integral (weak) form. One
can verify the existence and uniqueness of the strong solution of (21) by standard results for
differential equations with continuous, Lipschitz coefficients on $[0, T']$, $0 < T' < T$. The
solution as well as the extinction time are elementary.
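As a sanity check on the fluid limit (ours, not from the paper; it assumes the drift $x/(T-t) - 1$ read off from (53)), the ODE $x'(t) = x/(T-t) - 1$, $x(0) = x_0$, is solved in closed form by $x(t) = (T-t)/2 + C/(T-t)$ with $C = T(x_0 - y_0)/2$, which puts the extinction time at $T - \sqrt{T(x_0 - y_0)}$ when $x_0 > y_0$. A Runge-Kutta integration confirms the candidate solution:

```python
T, x0, y0 = 2.0, 1.2, 0.8          # hypothetical initial data, T = x0 + y0
C = T * (x0 - y0) / 2.0

def exact(t):
    # candidate closed form: x(t) = (T - t)/2 + C/(T - t)
    return (T - t) / 2.0 + C / (T - t)

def rhs(t, x):
    return x / (T - t) - 1.0        # drift read off from (53)

# classical RK4 up to t_end, safely before the extinction time ~1.106
t, x, dt, t_end = 0.0, x0, 1e-4, 0.8
while t < t_end - 1e-12:
    k1 = rhs(t, x)
    k2 = rhs(t + dt / 2, x + dt * k1 / 2)
    k3 = rhs(t + dt / 2, x + dt * k2 / 2)
    k4 = rhs(t + dt, x + dt * k3)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt
assert abs(x - exact(t_end)) < 1e-8
print("RK4 vs closed form at t = 0.8:", x, exact(t_end))
```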
7.2. Proof of Theorem 4
First, we recall that, due to (16) and (23), the joint process $(z_t^N, N_t^N)$ is a Markovian pure
jump process. We start by writing the differential equation for the process $(z_t^N)$ from (23),
following with (30) and (31). Let $f \in C^{1,2}([0, \infty) \times \mathbb R, \mathbb R)$ be a test function. Note that the
processes $(x_t^N)$, $(y_t^N)$, and $(z_t^N)$ are bounded by $N^{1/2}$ (the first two are simply bounded of order
one, uniformly in $N$) almost surely, and so the condition on the finite expected value of all
martingales in Itô's formula given below is automatically satisfied as long as $N$ remains fixed.
In the limit we shall work with the diffusion (25), which is not uniformly bounded, even on a
compact time interval $[0, T']$, $0 < T' < T$.
Appearing in front of the integrals below, the factor $N$ marks the scaling of the time variable.
The corresponding Itô formula (30) is

  $f(t, z_t^N) - f(0, z_0^N) = \int_0^t \big[(\mathrm I) + N(\mathrm{II}) + N(\mathrm{III})\big]\, ds + M_t^{N,f}, \qquad 0 \le t \le \tau_N,$   (54)

where

  $(\mathrm I) = \partial_s f(s, z_s^N),$
  $(\mathrm{II}) = q_s^N \Big[f\Big(s, z_s^N + \dfrac{1}{\sqrt N}\Big) - f(s, z_s^N)\Big],$
  $(\mathrm{III}) = (1 - q_s^N)\Big[f\Big(s, z_s^N - \dfrac{1}{\sqrt N}\Big) - f(s, z_s^N)\Big],$
  $q_s^N = \dfrac{x_s^N}{x_s^N + y_s^N} = \dfrac{x_s^N}{T_N - N_s^N},$

with the last term $M_t^{N,f}$ from (54) such that the two processes $M_t^{N,f}$ and

  $(M_t^{N,f})^2 - N \int_0^t \big[(J_1) + (J_2)\big]\, ds,$   (55)

where

  $J_1 = q_s^N \Big[f\Big(s, z_s^N + \dfrac{1}{\sqrt N}\Big) - f(s, z_s^N)\Big]^2,$
  $J_2 = (1 - q_s^N)\Big[f\Big(s, z_s^N - \dfrac{1}{\sqrt N}\Big) - f(s, z_s^N)\Big]^2,$

are $(\mathcal F_t)$-martingales. The following five steps complete the proof.
Step 1. First, we shall prove an estimate for the factor $(x_t^N + y_t^N)^{-1}$ appearing in the
denominator of the probability $q$ from (15). Fix $T' \in (0, T)$.
Let $\phi(x)$ be a positive function. Note that the process runs until $\tau_N \le T_N = x_0^N + y_0^N$,
making the denominator $x_t^N + y_t^N = T_N - N^{-1} N_{Nt}$ positive at all times. We proceed to
replace $t$ by $t \wedge \tau_N$ and obtain

  $E\big[\phi\big((x_{t\wedge\tau_N}^N + y_{t\wedge\tau_N}^N)^{-1}\big)\big] = E\Big[\phi\Big(\dfrac{N}{N T_N - N_{Nt}}\Big)\Big] \le \sum_{m=0}^{N T_N - 1} \phi\Big(\dfrac{N}{N T_N - m}\Big)\, P(N_{Nt} = m)$
  $\quad \le \phi\Big(\dfrac{N}{N T_N - N (t+T')/2}\Big)\, P\Big(N_{Nt} \le N\, \dfrac{t+T'}{2}\Big) + \phi(N)\, P\Big(N_{Nt} > N\, \dfrac{t+T'}{2}\Big)$
  $\quad \le \phi\Big(\dfrac{1}{T_N - (t+T')/2}\Big) + \phi(N) \exp\Big(-I\Big(\dfrac{t+T'}{2}\Big) N\Big),$

where $I(\cdot)$ is the large deviations rate function for the Poisson process $N_{Nt}$. For any exponential $\phi$ dominated by the large deviations rate function $I(\cdot)$ for the linear Poisson process of
intensity 1, the inequality leads to an upper bound

  $E\big[\phi\big((x_{t\wedge\tau_N}^N + y_{t\wedge\tau_N}^N)^{-1}\big)\big] \le C(\phi, t),$   (56)

where the constant $C(\phi, t)$ does not depend on $N > 0$, but $\lim_{t\to T} C(\phi, t) = +\infty$. Setting
$t = T'$, we obtain (56) on $[0, T']$.
Step 2. Let $f(t, z) = z$ in Itô's formula. It follows that

  $z_t^N = z_0^N + N \int_0^t \dfrac{1}{\sqrt N}\, \dfrac{x_s^N - y_s^N}{T_N - N_s^N}\, ds + M_t^{N,z},$   (57)

and, by (23), the integrand (including the prefactor $N$) is equal to $z_s^N/(T_N - N_s^N)$. It is straightforward that (57) directly
yields (25), since the martingale part converges, in distribution, to a Brownian motion.
However, this does not close the argument. It is easy to square the equality and obtain, after
employing the Cauchy-Schwarz inequality, the Hölder inequality for the time integral, as well
as the bound $t \le T'$,

  $\dfrac13 [z_t^N]^2 \le [z_0^N]^2 + T' \int_0^t \Big[\dfrac{z_s^N}{T_N - N_s^N}\Big]^2 ds + [M_t^{N,z}]^2.$

We recall that we only need to work with a compact subinterval $[0, T'] \subseteq [0, T)$. Combining
this with (56) for $\phi(x) = x^2$, and denoting the constant by $C(2, T')$ since we shall only
concentrate on the time interval $[0, T']$, we obtain

  $\dfrac13 [z_t^N]^2 \le [z_0^N]^2 + T'\, C(2, T') \int_0^t [z_s^N]^2\, ds + [M_t^{N,z}]^2.$   (58)

After inspecting the quadratic variation part of (55), we see that we can bound the last square
term in (58) by $C_2 T'$. We do so by Doob's maximal inequality, obtaining

  $\dfrac13 [z_t^N]^2 \le [z_0^N]^2 + T'\, C(2, T') \int_0^t \sup_{0\le s'\le s} [z_{s'}^N]^2\, ds + C_2 T'.$

We now take the supremum over time on the left-hand side, apply Doob's maximal inequality
to the martingale part, and observe that Grönwall's lemma yields the bound

  $E\Big[\sup_{0\le t\le T'} \phi(z_t^N)\Big] \le 3\big([z_0^N]^2 + C_2 T'\big)\, e^{T' C(2,T') T'} =: C_3(T'), \qquad \phi(z) = z^2,$   (59)

which is (29).
Remark 3. The dependence on $T' < T$ is essential in the bound, since $C(2, T')$ blows up as
$T' \uparrow T$.
Step 3. We shall obtain a bound similar to (59) for exponential functions. We start with
$\phi(z) = e^{\lambda z}$, $\lambda > 0$. The difficult term in Itô's formula is the time integrand

  $N \exp(\lambda z_s^N)\Big[\dfrac{x_s^N}{T_N - N_s^N}\Big(\exp\Big(\dfrac{\lambda}{\sqrt N}\Big) - 1\Big) + \dfrac{y_s^N}{T_N - N_s^N}\Big(\exp\Big(-\dfrac{\lambda}{\sqrt N}\Big) - 1\Big)\Big],$

where the prefactor $N$ is from the time scaling. Given that $|e^h - 1 - h| \le c_4 h^2 e^{|h|}$, and
considering again the general bound (56) from step 1, we see that an upper bound for the above
term reduces to a constant depending on $T'$ multiplied by

  $\lambda e^{\lambda z_s^N} \big[z_s^N + c_4 \lambda^2\big].$   (60)

We have used the Cauchy-Schwarz inequality followed by Doob's maximal inequality. The
martingale part is of a smaller order of magnitude. Combining the bound obtained in (60)
with the bound on the quadratic variation, and employing notation analogous to that of step 2,
together with (59), we obtain

  $E\Big[\sup_{0\le t\le T'} e^{2\lambda z_t^N}\Big] \le 3\big(\phi(z_0^N) + C_5(T')\big)\, e^{C(4,T')T'} =: C_4(T').$
Step 4. To verify the modulus of continuity in (28), we subtract (57) at two times $s$ and $t$ and
observe that, under the time integral, we have, after simplification with the scaling constants, a
term bounded above by

  $E\Big[\sup_{s,t\in[0,T'],\ |t-s|<\delta} \int_s^t \dfrac{z_{s'}^N}{T_N - N_{s'}^N}\, ds'\Big] \le \delta\, E\Big[\dfrac{1}{T_N - N_{T'}^N} \sup_{s'\in[0,T']} |z_{s'}^N|\Big],$

where we used the observation that the function $s \mapsto (T_N - N_s^N)^{-1}$ is nondecreasing. The
Cauchy-Schwarz inequality, with (56) for $\phi(x) = |x|^2$ and the bound (59), proves (28) for the
time integral term. The martingale part is straightforward due to Doob's maximal inequality
applied to the second moment. We have thus proven that $(z_t^N)$ is tight.
Step 5. Keeping $T' \in (0, T)$, we note that the limit process (25) is uniquely defined by its
martingale problem. This holds for less regular coefficients, but here we have smooth, bounded,
Lipschitz coefficients. Thus, to identify the limit, we proceed in the standard way. First,
we assume that $(z_t)$ is a continuous path process found as a possible limit point of the tight
sequence $(z_t^N)$. Second, we shall show that for any $f \in C_c^{1,2}([0, \infty) \times \mathbb R, \mathbb R)$ as in (54), only
with compact support for convenience (this is a determining class for the martingale problem),

  $f(t, z_t) - f(0, z_0) - \int_0^t \Big[\partial_s f(s, z_s) + \dfrac{z_s}{T - s}\, \partial_z f(s, z_s) + \dfrac12\, \partial_{zz} f(s, z_s)\Big]\, ds$   (61)

is an $(\mathcal F_t)$-martingale. Comparing (61) with (54), we see that, when we replace $z_t$ with $z_t^N$
in (61), we obtain a martingale $M_t^{N,f}$ plus an error term $E_N$.
The error term needs to go to 0 in $L^1$ norm, uniformly in $t \in [0, T']$. However, the error term is less
complicated here. The function $f$ has compact support and, thus, satisfies a uniform bound
in $N$ and $t \in [0, T']$. Simply apply Taylor's formula in Lagrange form with degree two, and
error of degree three:

  $\big|f(s, z + \varepsilon) - f(s, z) - \varepsilon\, \partial_z f(s, z) - \tfrac12 \varepsilon^2\, \partial_{zz} f(s, z)\big| \le C(f)\, \varepsilon^3, \qquad \varepsilon = \dfrac{1}{\sqrt N}.$

Note that the terms containing $\varepsilon$ are exactly the second-order operator corresponding to the
diffusion (25). Terms (II) and (III) in (54) yield the error

  $E_N \le N \int_0^t C(f)\, N^{-3/2}\, ds \le N^{-1/2}\, C(f)\, T'.$
The rest of the proof is technical but standard. If $\Phi(\eta(\cdot))$ is a continuous bounded functional
of a path $\eta \in D([0, T'], \mathbb R)$ and $(z_\cdot^N) \xrightarrow{\ d\ } (z_\cdot)$, the continuity theorem implies that

  $\lim_{N\to\infty} E[\Phi(z_\cdot^N)] = E[\Phi(z_\cdot)].$

In our case, we fix $s, t$, $0 \le s \le t \le T'$, and a bounded function $\Psi(\eta(\cdot))$ measurable with
respect to $\mathcal F_s$. Then, by writing

  $\Phi(\eta(\cdot)) := \Big[f(t, \eta_t) - f(s, \eta_s) - \int_s^t L_{s'} f(s', \eta_{s'})\, ds'\Big]\, \Psi(\eta(\cdot)),$

we can verify that the functional is continuous and bounded in the $J_1$ topology on the Skorokhod
space. Its expected value is equal to the error term exactly, and when we pass to the limit we
find that the expected value with respect to the law of the limiting process vanishes. It follows
that $(z_t)$ satisfies the martingale problem (61) and, by uniqueness, it is the solution to (25). This
completes the proof of Theorem 4.
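The limit diffusion identified above, $dz_t = z_t/(T - t)\, dt + dB_t$ on $[0, T']$, is easy to simulate. The sketch below is ours (parameters arbitrary); it checks the Euler-Maruyama sample mean against $E[z_t] = z_0 T/(T - t)$, which solves the averaged equation $m'(t) = m(t)/(T - t)$:

```python
import random, math

rng = random.Random(7)
T, z0 = 2.0, 1.0                       # illustrative values; T' = 1 < T
dt, t_end, n_paths = 0.002, 1.0, 4000
steps = int(round(t_end / dt))

total = 0.0
for _ in range(n_paths):
    z, t = z0, 0.0
    for _ in range(steps):
        # Euler-Maruyama step for dz_t = z_t/(T - t) dt + dB_t
        z += z / (T - t) * dt + rng.gauss(0.0, math.sqrt(dt))
        t += dt
    total += z
mean = total / n_paths
expected = z0 * T / (T - t_end)        # solves m'(t) = m(t)/(T - t), m(0) = z0
print("sample mean of z_1:", mean, "vs E[z_1] =", expected)
assert abs(mean - expected) < 0.1
```

The blow-up of the drift as $t \uparrow T$ is visible here too: the deterministic mean $z_0 T/(T - t)$ diverges, which is why the argument above is confined to compact subintervals $[0, T']$.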
Acknowledgements
We would like to thank Larry Shepp and Robert W. Chen for introducing this problem to
us. We also wish to thank Larry Shepp for his kind assistance with Section 2 and Min Kang
for helpful discussions. We thank Quan Zhou for his assistance with numerics. Finally, we are
extremely grateful to an anonymous referee whose invaluable suggestions greatly improved the
quality of this work.
References
[1] Albrecher, H. and Boxma, O. J. (2004). A ruin model with dependence between claim sizes and claim
intervals. Insurance. Math. Econom. 35, 245–254.
[2] Asmussen, S. and Albrecher, H. (2010). Ruin Probabilities (Adv. Ser. Statist. Sci. Appl. Prob. 14), 2nd edn.
World Scientific, Hackensack, NJ.
[3] Asmussen, S., Schmidli, H. and Schmidt, V. (1999). Tail probabilities for non-standard risk and queueing
processes with subexponential jumps. Adv. Appl. Prob. 31, 422–447.
[4] Billingsley, P. (1995). Probability and Measure, 3rd edn. John Wiley, New York.
[5] Billingsley, P. (1999). Convergence of Probability Measures. John Wiley, New York.
[6] Bishop, D. T. and Cannings, C. (1978). A generalized war of attrition. J. Theoret. Biol. 70, 85–124.
410
P. A. ERNST AND I. GRIGORESCU
[7] De Moivre, A. (1710). De mensura sortis seu; de probabilitate eventuum in ludis a casu fortuito pendentibus.
Phil. Trans. 27, 213–264.
[8] Dubins, L. E. and Savage, L. J. (1965). How to Gamble if You Must. Inequalities for Stochastic Processes.
McGraw-Hill, New York.
[9] Ethier, S. N. and Kurtz, T. G. (1986). Markov Processes: Characterization and Convergence. John Wiley,
New York.
[10] Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Vol. I, 3rd edn. John Wiley, New
York.
[11] Haigh, J. (1989). How large is the support of an ESS? J. Appl. Prob. 26, 164–170.
[12] Hald, A. (2003). A History of Probability and Statistics and Their Applications before 1750. John Wiley, New
York.
[13] Hines, W. G. S. (1981). Multispecies population models and evolutionarily stable strategies. J. Appl. Prob. 18,
507–513.
[14] Jacod, J. and Shiryaev, A. N. (1987). Limit Theorems for Stochastic Processes. Springer, Berlin.
[15] Kaigh, W. D. (1979). An attrition problem of gambler’s ruin. Math. Mag. 52, 22–25.
[16] Katriel, G. (2014). Gambler’s ruin: the duration of play. Stoch. Models 30, 251–271.
[17] Kipnis, C. and Landim, C. (1999). Scaling Limits of Interacting Particle Systems. Springer, Berlin.
[18] Kozek, A. S. (1995). A rule of thumb (not only) for gamblers. Stoch. Process. Appl. 55, 169–181.
[19] Kurkova, I. and Raschel, K. (2015). New steps in walks with small steps in the quarter plane: series expressions
for the generating functions. Ann. Combinatorics 19, 461–511.
[20] Mikosch, T. and Samorodnitsky, G. (2000). Ruin probability with claims modeled by a stationary ergodic
stable process. Ann. Prob. 28, 1814–1851.
[21] Raschel, K. (2014). Random walks in the quarter plane, discrete harmonic functions and conformal mappings.
Stoch. Process. Appl. 124, 3147–3178.
[22] Ross, S. M. (1983). Stochastic Processes. John Wiley, New York.
[23] Smith, J. M. (1974). The theory of games and the evolution of animal conflicts. J. Theoret. Biol. 47, 209–221.
[24] Smith, J. M. and Price, G. R. (1973). The logic of animal conflict. Nature 246, 15–18.
[25] Stanley, R. P. (2011). Enumerative Combinatorics (Camb. Stud. Adv. Math. 49), Vol. I, 2nd edn. Cambridge
University Press.
[26] Taylor, P. D. (1979). Evolutionary stable strategies with two types of player. J. Appl. Prob. 16, 76–83.
[27] Thangavelu, S. (1993). Lectures on Hermite and Laguerre Expansions (Math. Notes 42). Princeton University
Press.
[28] Uspensky, J. V. (1937). Introduction to Mathematical Probability. McGraw-Hill, New York.
[29] Van Leeuwaarden, J. S. H. and Raschel, K. (2013). Random walks reaching against all odds the other side
of the quarter plane. J. Appl. Prob. 50, 85–102.
[30] Whitworth, W. A. (1901). Choice and Chance, with One Thousand Exercises. Hafner.
[31] Widder, D. (1946). The Laplace Transform. Princeton University Press.