Theory of Stochastic Processes
Vol. 12 (28), no. 3–4, 2006, pp. 151–186
D. S. SILVESTROV AND M. O. DROZDENKO
NECESSARY AND SUFFICIENT CONDITIONS FOR
WEAK CONVERGENCE OF FIRST-RARE-EVENT TIMES
FOR SEMI-MARKOV PROCESSES. I
Necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes with a finite set of states are obtained. These results are applied to risk processes and give necessary and sufficient conditions for stable approximation of ruin probabilities, including the case of diffusion approximation.
2000 Mathematics Subject Classification. Primary 60K15, 60F17, 60K20.
Key words and phrases. Weak convergence, semi-Markov processes, first-rare-event time, limit theorems, necessary and sufficient conditions.
1. Introduction
Limit theorems for random functionals similar to first-rare-event times, known under such names as first hitting times, first passage times, first record times, etc., were studied by many authors.
The case of Markov chains and semi-Markov processes with finite set
of states is the most deeply investigated. We mention here the works
by Simon and Ando (1961), Kingman (1963), Darroch and Seneta (1965,
1967), Keilson (1966, 1979), Korolyuk (1969), Korolyuk and Turbin (1970,
1976), Silvestrov (1970, 1971, 1974, 1980), Anisimov (1970, 1971a, 1971b),
Turbin (1971), Masol and Silvestrov (1972), Zakusilo (1972a, 1972b), Kovalenko (1973), Latouch and Louchard (1978), Shurenkov (1980a, 1980b), Gut and Holst (1984), Brown and Shao (1987), Alimov and Shurenkov (1990a, 1990b), Hasin and Haviv (1992), Elĕıko and Shurenkov (1995), Kijima (1997), Gyllenberg and Silvestrov (1999, 2000a).
The case of Markov chains and semi-Markov processes with countable
and an arbitrary phase space was treated in works by Gusak and Korolyuk
(1971), Silvestrov (1974, 1980, 1981, 1995, 2000), Korolyuk and Turbin
(1978, 1982), Kaplan (1979), Aldous (1982), Korolyuk D. and Silvestrov
(1983, 1984), Kartashov (1987, 1991, 1995), Anisimov (1988), Silvestrov and
Velikii (1988), Silvestrov and Abadov (1991, 1993), Motsa and Silvestrov
(1996), Korolyuk V.V. and Korolyuk V.S. (1999).
We refer to the book by Silvestrov (2004) for the detailed list of references.
The main feature of most previous results is that they give sufficient conditions for convergence of such functionals. As a rule, those conditions involve assumptions which imply convergence of distributions of sums of i.i.d. random variables distributed as sojourn times of the semi-Markov process (for every state) to some infinitely divisible laws, plus some ergodicity condition for the imbedded Markov chain, plus a condition of vanishing probabilities of occurrence of the rare event during one transition step of the semi-Markov process.
Our results are related to the model of semi-Markov processes with a finite set of states. In this paper, we consider the case of stable type asymptotics for distributions of sojourn times. Instead of conditions based on “individual” distributions of sojourn times, we use more general and weaker conditions imposed on distributions (of sojourn times) averaged by the stationary distribution of the imbedded Markov chain. Moreover, we show that these conditions are not only sufficient but also necessary for the weak convergence of first-rare-event times, and describe the class of all possible limiting laws not concentrated in zero. The results presented in the paper give some kind of a “final solution” for limit theorems for first-rare-event times for semi-Markov processes with a finite set of states.
The paper is organized in the following way. In Section 2, we formulate and prove our main Theorem 1, which describes the class of all possible limiting distributions for first-rare-event times for semi-Markov processes and gives necessary and sufficient conditions for weak convergence to distributions from this class. Several lemmas describing asymptotic cyclic solidarity properties for sum-processes defined on Markov chains are used in the proof of Theorem 1. These lemmas and their proofs are collected in Section 3.
Applications to counting processes generated by the corresponding flows of rare events and to random geometric sums, as well as necessary and sufficient conditions for stable approximation of non-ruin probabilities, are given in the second part of the present paper.
2. Main results
Let (η_n, κ_n, ζ_n), n = 0, 1, . . . be a Markov renewal process, i.e. a homogeneous Markov chain with phase space Z = X × [0, +∞) × Y (here X = {1, 2, . . . , m}, and Y is some measurable space with σ-algebra of measurable sets B_Y) and transition probabilities,
(1) P{η_{n+1} = j, κ_{n+1} ≤ t, ζ_{n+1} ∈ A / η_n = i, κ_n = s, ζ_n = y}
    = P{η_{n+1} = j, κ_{n+1} ≤ t, ζ_{n+1} ∈ A / η_n = i}
    = Q_{ij}(t, A), i, j ∈ X, s, t ≥ 0, y ∈ Y, A ∈ B_Y.
The characteristic property, which specifies Markov renewal processes in the class of general multivariate Markov chains (η_n, κ_n, ζ_n), is (as shown in (1)) that the transition probabilities depend only on the current position of the first component η_n.
As is known, the first component η_n of the Markov renewal process is also a homogeneous Markov chain with the phase space X and transition probabilities p_{ij} = Q_{ij}(+∞, Y), i, j ∈ X.
Also, the first two components of the Markov renewal process (namely η_n and κ_n) can be associated with the semi-Markov process η(t), t ≥ 0 defined as,
η(t) = η_n for τ_n ≤ t < τ_{n+1}, n = 0, 1, . . . ,
where τ_0 = 0 and τ_n = κ_1 + . . . + κ_n, n ≥ 1.
Random variables κ_n represent inter-jump times for the process η(t). As far as the random variables ζ_n are concerned, they are so-called “flag variables” and are used to record “rare” events.
Let D_ε, ε > 0 be a family of measurable subsets of Y that are “small” in some sense. Then the events {ζ_n ∈ D_ε} can be considered as “rare”.
Let us introduce the random variables
ν_ε = min(n ≥ 1 : ζ_n ∈ D_ε),
and
ξ_ε = Σ_{n=1}^{ν_ε} κ_n.
The random variable ν_ε counts the number of transitions of the imbedded Markov chain η_n up to the first appearance of the “rare” event, while the random variable ξ_ε can be interpreted as the first-rare-event time for the semi-Markov process η(t).
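As an informal computational illustration of these definitions (not part of the original argument), the following Python sketch simulates a small Markov renewal process and returns the pair (ν_ε, ξ_ε). The two-state transition matrix, the exponential sojourn-time distributions, and the rare-event probabilities are all hypothetical choices, and the flag variables are drawn independently of the next state given the current one, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: m = 2 states; the "rare" event {zeta_n in D_eps}
# occurs at a step with a small probability depending on the current state.
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])          # transition matrix of eta_n
p_eps = np.array([0.001, 0.002])    # P_i{zeta_1 in D_eps}
mean_sojourn = [1.0, 2.0]           # parameters of G_i (exponential here)

def first_rare_event_time(i0):
    """Return (nu_eps, xi_eps) for the initial state i0."""
    i, nu, xi = i0, 0, 0.0
    while True:
        kappa = rng.exponential(mean_sojourn[i])   # kappa_{n+1} given eta_n = i
        rare = rng.random() < p_eps[i]             # zeta_{n+1} in D_eps ?
        j = rng.choice(2, p=P[i])                  # eta_{n+1}
        nu += 1
        xi += kappa
        if rare:
            return nu, xi                          # nu_eps and xi_eps
        i = j

print(first_rare_event_time(0))
```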
Let us consider the distribution function of the first-rare-event time ξ_ε, under a fixed initial state of the imbedded Markov chain η_n,
F_i^{(ε)}(u) = P_i{ξ_ε ≤ u}, u ≥ 0.
Here and henceforth, P_i and E_i denote, respectively, conditional probability and expectation calculated under the condition that η_0 = i.
We give necessary and sufficient conditions for weak convergence of the distribution functions F_i^{(ε)}(u u_ε), where u_ε > 0, u_ε → ∞ as ε → 0 is a non-random normalizing function, and describe the class of possible limiting distributions.
The problem is solved under the four general model assumptions.
The first assumption A guarantees that the last summand in the random sum ξ_ε is negligible under any normalization u_ε, i.e. κ_{ν_ε}/u_ε →^P 0 as ε → 0:
A: lim_{t→∞} lim_{ε→0} P_i{κ_1 > t / ζ_1 ∈ D_ε} = 0, i ∈ X.
Let us introduce the probabilities of occurrence of rare event during one
transition step of the semi-Markov process η(t),
piε = Pi{ζ1 ∈ Dε}, i ∈ X.
The second assumption B, imposed on probabilities piε, specifies inter-
pretation of the event {ζn ∈ Dε} as “rare” and guarantees the possibility
for such event to occur:
B: 0 < max_{1≤i≤m} p_{iε} → 0 as ε → 0.
The third assumption C is a standard ergodicity condition for the em-
bedded Markov chain ηn:
C: ηn, n = 0, 1, . . . is an ergodic Markov chain with the stationary dis-
tribution πi, i ∈ X.
Let us define the probability which is the result of averaging of the probabilities of occurrence of the rare event in one transition step by the stationary distribution of the imbedded Markov chain η_n,
p_ε = Σ_{i=1}^{m} π_i p_{iε}.
We will say that a positive function w_ε, ε > 0 is from the class W if (a1) w_ε → ∞ as ε → 0, and (a2) there exists a sequence 0 < ε_n → 0 such that w_{ε_{n+1}}/w_{ε_n} → 1 as n → ∞.
The fourth assumption D is a kind of regularity condition for the corresponding normalizing functions:
D: u_ε, v_ε = p_ε^{−1} ∈ W.
Condition D is not restrictive. For example it holds if uε and vε are
continuous functions of ε satisfying (a1).
Let us also introduce the distribution functions of the sojourn times κ_1 for the semi-Markov process η(t),
G_i(t) = P_i{κ_1 ≤ t}, t ≥ 0, i ∈ X,
and the distribution function which is the result of averaging of the distribution functions of sojourn times by the stationary distribution of the imbedded Markov chain η_n,
G(t) = Σ_{i=1}^{m} π_i G_i(t), t ≥ 0.
Now we are in position to formulate the necessary and sufficient conditions
for weak convergence of distribution functions of first-rare-event times ξε.
Let 0 < γ ≤ 1 and a > 0. Let also Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt be the Gamma function.
The necessary and sufficient conditions of convergence, mentioned above,
have the following form:
E_γ: t[1 − G(t)] / ∫_0^t s G(ds) → (1 − γ)/γ as t → ∞.
F_{a,γ}: ∫_0^{u_ε} s G(ds) / (p_ε u_ε) → a γ/Γ(2 − γ) as ε → 0.
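As an informal numerical illustration (not part of the paper), condition E_γ can be checked for a concrete choice of the averaged distribution G. The sketch below assumes a Pareto-type distribution G(t) = 1 − t^{−γ} for t ≥ 1 and evaluates the ratio in E_γ for increasing t; it should approach (1 − γ)/γ.

```python
gamma_ = 0.5   # assumed tail index, 0 < gamma_ < 1

def tail(t):
    # 1 - G(t) for the assumed Pareto-type distribution on [1, oo)
    return t ** (-gamma_)

def truncated_moment(t):
    # integral_0^t s G(ds) = gamma_ * (t**(1 - gamma_) - 1) / (1 - gamma_),
    # the closed form for this particular G (density gamma_*s**(-gamma_-1) on [1, t])
    return gamma_ * (t ** (1 - gamma_) - 1) / (1 - gamma_)

for t in [1e2, 1e4, 1e6]:
    print(t, t * tail(t) / truncated_moment(t), (1 - gamma_) / gamma_)
```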
We use the symbol ⇒ to show weak convergence of distribution functions
(pointwise convergence in points of continuity of the limiting distribution
function).
The main result of the paper is the following theorem.
Theorem 1. Let conditions A, B, C, and D hold. Then:
(i) The class of all possible limiting distribution functions not concentrated in zero (in the sense of weak convergence) for the distribution functions of first-rare-event times F_i^{(ε)}(u u_ε) coincides with the class of distribution functions F_{a,γ}(u) with Laplace transforms φ_{a,γ}(s) = 1/(1 + a s^γ), 0 < γ ≤ 1, a > 0.
(ii) Conditions E_γ and F_{a,γ} are necessary and sufficient for the following relation of weak convergence to hold (for some or every i ∈ X, respectively, in the statements of necessity and sufficiency),
(2) F_i^{(ε)}(u u_ε) ⇒ F_{a,γ}(u) as ε → 0.
Remark 1. F_{a,γ}(u), for 0 < γ ≤ 1 and a > 0, is the distribution function of a random variable ξ(ρ), where (b1) ξ(t), t ≥ 0 is a non-negative homogeneous stable process with independent increments and the Laplace transform E e^{−sξ(t)} = e^{−a s^γ t}, s, t ≥ 0, (b2) ρ is an exponentially distributed random variable with parameter 1, and (b3) the random variable ρ and the process ξ(t), t ≥ 0 are independent. In particular, F_{a,1}(u) is an exponential distribution function with parameter a.
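The mixed-stable form of the limiting law in Remark 1 can be illustrated numerically. The sketch below (an illustration only, not part of the paper) samples ξ(ρ) by representing a one-sided γ-stable variable through Kanter's representation and then compares the empirical Laplace transform with 1/(1 + a s^γ); the parameter values and the sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_, a = 0.7, 1.5            # assumed parameters: 0 < gamma_ < 1, a > 0
n = 200_000

# One-sided gamma_-stable variables with Laplace transform exp(-s**gamma_),
# generated via Kanter's representation (U uniform on (0, pi), E exponential).
U = rng.uniform(0.0, np.pi, n)
E = rng.exponential(1.0, n)
X = (np.sin(gamma_ * U) / np.sin(U) ** (1.0 / gamma_)
     * (np.sin((1.0 - gamma_) * U) / E) ** ((1.0 - gamma_) / gamma_))

rho = rng.exponential(1.0, n)             # the exponential "random time" rho
xi_rho = (a * rho) ** (1.0 / gamma_) * X  # xi(rho): E exp(-s*xi(t)) = exp(-a*s**gamma_*t)

for s in [0.5, 1.0, 2.0]:
    print(s, np.mean(np.exp(-s * xi_rho)), 1.0 / (1.0 + a * s ** gamma_))
```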
Remark 2. The distribution function F_{a,γ}(u), for 0 < γ ≤ 1 and a > 0, is continuous. Thus, the weak convergence pointed out in the statement (ii) of Theorem 1 means that F_i^{(ε)}(u u_ε) → F_{a,γ}(u) as ε → 0 for every u ≥ 0.
Proof. We split the proof of Theorem 1 into several steps.
As the first step, we obtain an appropriate representation for the first-
rare-event time ξε in the form of geometric type random sum of random
variables connected with cyclic returns of the semi-Markov process η(t) in
a fixed state i ∈ X.
Let τi(n) be the number of transitions after which the imbedded Markov
chain ηn reaches a state i ∈ X for the n-th time,
τi(n) = min{k > τi(n − 1) : ηk = i}, n = 1, 2, . . . ,
where τi(0) = 0. For simplicity, we will also write τi(1) as τi.
Let βi(n) be the duration of the n-th i-cycle between the moments of
(n − 1)-th and n-th return of the semi-Markov process η(t) in the state i,
β_i(n) = Σ_{k=τ_i(n−1)+1}^{τ_i(n)} κ_k, n = 1, 2, . . . .
For simplicity, we will also write β_i(1) as β_i. The moments of return of the semi-Markov process η(t) to a fixed state i ∈ X are regenerative moments for this process. Due to this property, β_i(n), n = 1, 2, . . . are independent random variables, identically distributed for n ≥ 2. As far as the random variable β_i(1) is concerned, it has the same distribution as β_i(2) if the initial distribution of the imbedded Markov chain η_n is concentrated in the state i. Otherwise, the distribution of β_i(1) can differ from the distribution of β_i(2).
Let us also introduce the random variable ν_{iε} which counts the number of cycles ended before the moment ν_ε,
ν_{iε} = max{n : τ_i(n) ≤ ν_ε}.
Finally, let β̃_{iε} be the duration of the residual sub-cycle between the moment of the last return of the semi-Markov process η(t) to the state i before the first-rare-event time ξ_ε and the time ξ_ε,
β̃_{iε} = Σ_{n=τ_i(ν_{iε})+1}^{ν_ε} κ_n.
Now, the following representation, in the form of a random sum, can be written down for the first-rare-event time ξ_ε,
(3) ξ_ε = Σ_{n=1}^{ν_{iε}} β_i(n) + β̃_{iε}.
It should be noted that the random index ν_{iε} and the summands β_i(n), n = 1, 2, . . ., and β̃_{iε} are not independent random variables. However, they are conditionally independent with respect to the indicator random variables χ_{iε}(n) = χ(τ_i(n − 1) < ν_ε ≤ τ_i(n)), n = 1, 2, . . .. This is best seen when the representation formula (3) is rewritten in terms of Laplace transforms.
Let us introduce the Laplace transforms of the first-rare-event time,
Φ_{iε}(s) = E_i exp{−sξ_ε}, s ≥ 0, i ∈ X.
Let us denote by q_{iε} the probability of occurrence of the rare event during the first i-cycle,
q_{iε} = P_i{ν_ε ≤ τ_i}, i ∈ X.
Let us also introduce the conditional Laplace transform of the duration of the first i-cycle β_i under the condition ν_ε > τ_i of non-occurrence of the rare event in the first i-cycle,
ψ_{iε}(s) = E_i{exp{−sβ_i} / ν_ε > τ_i}, s ≥ 0,
and the conditional Laplace transform of the duration of the residual sub-cycle β̃_{iε} under the condition ν_ε ≤ τ_i of occurrence of the rare event in the first i-cycle,
ψ̃_{iε}(s) = E_i{exp{−sβ̃_{iε}} / ν_ε ≤ τ_i}, s ≥ 0.
The Markov renewal process (η_n, κ_n, ζ_n) regenerates at the moments of return to every state i, and ν_ε is a Markov moment for this process. Due to these properties, the representation formula (3) takes, in terms of Laplace transforms, the following form,
(4) Φ_{iε}(s) = E_i exp{−sξ_ε} = Σ_{n=0}^{∞} (1 − q_{iε})^n q_{iε} ψ_{iε}(s)^n ψ̃_{iε}(s)
             = q_{iε} ψ̃_{iε}(s) / (1 − (1 − q_{iε}) ψ_{iε}(s))
             = ψ̃_{iε}(s) / (1 + (1 − q_{iε})(1 − ψ_{iε}(s))/q_{iε}), s ≥ 0.
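The geometric summation and the algebraic identity behind formula (4) can be double-checked mechanically; the short sketch below (an illustration only, with q, ψ, ψ̃ treated as abstract symbols or arbitrary numerical values) verifies the last equality in (4) symbolically and the geometric series numerically.

```python
import sympy as sp

q, psi, psit = sp.symbols('q psi psitilde', positive=True)

# closed form of the geometric sum in (4) ...
geo = q * psit / (1 - (1 - q) * psi)
# ... and the equivalent form used later in the proof
alt = psit / (1 + (1 - q) * (1 - psi) / q)
print(sp.simplify(geo - alt))          # prints 0

# numerical check of the geometric summation itself (arbitrary values)
qv, pv, ptv = 0.01, 0.95, 0.97
partial = sum(((1 - qv) * pv) ** n * qv * ptv for n in range(10_000))
print(partial, qv * ptv / (1 - (1 - qv) * pv))
```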
As the second step, we prove that the weak convergence for the first-rare-
event times is invariant with respect to the choice of initial distribution of
the imbedded Markov chain ηn.
At this stage, we are interested in solidarity statements concerning the relation of weak convergence,
(5) F_i^{(ε)}(u u_ε) ⇒ F(u) as ε → 0,
where (c1) F(u) is a distribution function concentrated on the non-negative half-line but not concentrated in zero, and (c2) u_ε is a positive normalizing function such that u_ε → ∞ as ε → 0.
We shall prove that, under conditions A, B and C, (d) the assumption
that relation (5) holds for some i ∈ X implies that this relation holds for
every i ∈ X and, in this case, (e) the limiting distribution function F (u) is
the same for all i ∈ X.
In terms of Laplace transforms relation (5) is equivalent to the relation,
(6) Φiε(s/uε) → Φ(s) as ε → 0, s ≥ 0,
where (f) Φ(s) is a Laplace transform of some non-negative random variable,
(g) Φ(s) < 1 for s > 0 (this is equivalent to the requirement that the
corresponding limiting distribution function is not concentrated in zero).
Thus, in order to prove the solidarity statement formulated above, we
should prove that, under conditions A, B and C, (h) the assumption that
relation (6) holds for some i ∈ X implies that this relation holds for every
i ∈ X and, in this case, (i) the limiting Laplace transform Φ(s) is the same
for all i ∈ X.
In what follows, we use several lemmas describing asymptotic cyclic solidarity properties for functionals defined on trajectories of Markov renewal processes (η_n, κ_n, ζ_n).
It will be proved in Lemma 1 that conditions B and C imply the following asymptotic relation, for every i ∈ X,
(7) q_{iε} ∼ p_ε/π_i as ε → 0.
Here and henceforth relation a(ε) ∼ b(ε) as ε → 0 means that
a(ε)/b(ε) → 1 as ε → 0.
It follows from (7) that, for every i ∈ X,
(8) qiε → 0 as ε → 0.
It will be shown in Lemma 2, with the use of (7), that conditions A, B, and C imply the following asymptotic relation, for every i ∈ X,
(9) ψ̃_{iε}(s/u_ε) → 1 as ε → 0, s ≥ 0.
Relation (9) implies that, under conditions A, B, and C, for every i ∈ X,
(10) Φ_{iε}(s/u_ε) ∼ 1 / (1 + (1 − q_{iε})(1 − ψ_{iε}(s/u_ε))/q_{iε}) as ε → 0, s ≥ 0.
It follows from relations (8) and (10) that, under conditions A, B, and C, relation (6) holds, for given i ∈ X, if and only if,
(11) (1 − ψ_{iε}(s/u_ε))/q_{iε} → ς(s) as ε → 0, s ≥ 0,
where ς(s) is a function such that (j) 1/(1 + ς(s)) is a Laplace transform of some non-negative random variable, and (k) ς(s) > 0 for s > 0.
Obviously, the limiting functions in relations (6) and (11) are connected by the following relation,
(12) Φ(s) = 1/(1 + ς(s)), s ≥ 0.
To simplify the following asymptotic analysis and to make it possible to
use later powerful Tauberian theorems and theorems about regularly vary-
ing functions, we shall now try to replace the conditional Laplace transform
ψiε(s) in the relation (11) by the unconditional Laplace transform of the
duration of the first i-cycle βi,
ψi(s) = Ei exp{−sβi}, s ≥ 0.
The Laplace transform ψi(s) can obviously be represented in the following
form,
(13) ψi(s) = (1 − qiε)ψiε(s) + qiεψ̂iε(s), s ≥ 0,
where ψ̂iε(s) is the conditional Laplace transform of the duration of the first
i-cycle βi under condition νε ≤ τi of occurrence of the rare event in the first
i-cycle,
ψ̂iε(s) = Ei{exp{−sβi}/νε ≤ τi}, s ≥ 0.
Relation (13) can be re-written in the following form,
(14) (1 − ψ_i(s/u_ε))/q_{iε} = (1 − q_{iε})(1 − ψ_{iε}(s/u_ε))/q_{iε} + q_{iε}(1 − ψ̂_{iε}(s/u_ε))/q_{iε}, s ≥ 0.
It will be shown in Lemma 3 that conditions A, B, and C imply that,
for every i ∈ X,
(15) ψ̂iε(s/uε) → 1 as ε → 0, s ≥ 0.
It follows from relation (15) that, under conditions A, B, and C, relation (11) holds, for given i ∈ X, if and only if,
(16) (1 − ψ_i(s/u_ε))/q_{iε} → ς(s) as ε → 0, s ≥ 0,
where ς(s) is a function such that (j) 1/(1 + ς(s)) is a Laplace transform of some non-negative random variable, and (k) ς(s) > 0 for s > 0.
It will be shown in Lemma 4 that, under conditions B and C, (l) the
assumption that relation (16) holds for some i ∈ X implies that this relation
holds for every i ∈ X and, in this case, (m) the limiting function ς(s) is
the same for all i ∈ X, (n) ς(s) is a cumulant of an infinitely divisible law
concentrated on non-negative half-line and not concentrated in zero.
Note that, in this case, (o1) the function 1/(1 + ς(s)) is a Laplace transform of the random variable ξ(ρ), where (o2) ξ(t), t ≥ 0 is a non-negative homogeneous process with independent increments and the Laplace transform E e^{−sξ(t)} = e^{−ς(s)t}, (o3) ρ is an exponentially distributed random variable with parameter 1, (o4) the random variable ρ is independent of the process ξ(t), t ≥ 0, and (o5) ς(s) > 0 for s > 0. These properties are consistent with requirements (j) and (k).
Let us introduce the Laplace transforms for the sojourn times κ_1,
ϕ_i(s) = E_i e^{−sκ_1} = ∫_0^∞ e^{−st} G_i(dt), s ≥ 0,
and the corresponding Laplace transform averaged by the stationary distribution of the imbedded Markov chain η_n,
ϕ(s) = Σ_{i=1}^{m} π_i ϕ_i(s) = ∫_0^∞ e^{−st} G(dt), s ≥ 0.
Finally, it will be shown in Lemma 5 that, under conditions A, B, and C, relation (16) holds, for given i ∈ X, if and only if,
(17) (1 − ϕ(s/u_ε))/p_ε → ς(s) as ε → 0, s ≥ 0,
where (p) ς(s) is a cumulant of an infinitely divisible law concentrated on the non-negative half-line and not concentrated in zero.
Relation (17) is the final point in the series of solidarity statements concerning the distributions of first-rare-event times and based on conditions A, B, and C.
The third and last step in the proof is more or less standard. It is based on an accurate use of theorems about regularly varying functions, Tauberian and Abelian theorems applied to the Laplace transform ϕ(s), and the central criterion of convergence for sums of independent random variables. Here, conditions D, E_γ, and F_{a,γ} are involved.
We prove, in Lemma 6, that, under condition D, (r) the limiting cumulant in relation (17) can only be of the form ς(s) = a s^γ, where 0 < γ ≤ 1 and a > 0, and that (s) conditions E_γ, F_{a,γ} are necessary and sufficient for relation (17) to hold with the limiting cumulant ς(s) = a s^γ.
This completes the proof of Theorem 1. □
3. Cyclic conditions of convergence
In this section, we prove Lemmas 1–6 used in the proof of Theorem 1. These lemmas present a series of so-called cyclic solidarity conditions of convergence connected with the first-rare-event times and, as we think, have a value of their own.
The first lemma describes the asymptotic behavior of the probability of occurrence of the rare event during one i-cycle.
Lemma 1. Let conditions B, C hold. Then, for every i ∈ X,
(18) q_{iε} ∼ p_ε/π_i as ε → 0.
Proof. Let us define the probabilities of occurrence of the rare event before the first hitting of the state i by the imbedded Markov chain, under the condition that the initial state of this Markov chain is η_0 = j,
q_{jiε} = P_j{ν_ε ≤ τ_i}, i, j ∈ X.
By the definition,
(19) qiiε = qiε, i ∈ X.
The probabilities q_{jiε}, j ∈ X satisfy, for every i ∈ X, the following system of linear equations,
(20) q_{jiε} = p_{jε} + Σ_{k≠i} p^{(ε)}_{jk} q_{kiε}, j ∈ X,
where
p^{(ε)}_{jk} = P_j{η_1 = k, ζ_1 ∉ D_ε}, j, k ∈ X.
System (20) can be rewritten, for every i ∈ X, in the following matrix form,
(21) q_{iε} = p_ε + _iP^{(ε)} q_{iε},
where
q_{iε} = ⎡ q_{1iε} ⎤ ,  p_ε = ⎡ p_{1ε} ⎤ ,
         ⎢    ⋮    ⎥          ⎢   ⋮    ⎥
         ⎣ q_{miε} ⎦          ⎣ p_{mε} ⎦
and
_iP^{(ε)} = ⎡ p^{(ε)}_{11}  . . .  p^{(ε)}_{1(i−1)}  0  p^{(ε)}_{1(i+1)}  . . .  p^{(ε)}_{1m} ⎤
            ⎢     ⋮                      ⋮           ⋮         ⋮                       ⋮      ⎥
            ⎣ p^{(ε)}_{m1}  . . .  p^{(ε)}_{m(i−1)}  0  p^{(ε)}_{m(i+1)}  . . .  p^{(ε)}_{mm} ⎦ .
Let us show that the matrix I − _iP^{(ε)} has an inverse for all ε small enough and, therefore, the solution of the system (21) has the following form, for every i ∈ X,
(22) q_{iε} = [I − _iP^{(ε)}]^{−1} p_ε.
Let us also introduce the matrix
_iP^{(0)} = ⎡ p_{11}  . . .  p_{1(i−1)}  0  p_{1(i+1)}  . . .  p_{1m} ⎤
            ⎢   ⋮              ⋮        ⋮       ⋮                ⋮   ⎥
            ⎣ p_{m1}  . . .  p_{m(i−1)}  0  p_{m(i+1)}  . . .  p_{mm} ⎦ .
Let us introduce the random variable δ_{ik}, which is the number of visits of the state k by the imbedded Markov chain η_n up to the first visit to the state i,
δ_{ik} = Σ_{n=1}^{τ_i} χ(η_{n−1} = k), i, k ∈ X.
As is known, due to the ergodicity of the Markov chain ηn, (a1) Ejδik < ∞
for all j, i, k ∈ X. Moreover, for every i ∈ X, there exists the inverse matrix,
(23) [I − _iP^{(0)}]^{−1} = ‖ E_j δ_{ik} ‖.
Let us also introduce the random variable δ_{ikε}, which is the number of visits of the state k by the imbedded Markov chain η_n before the first visit to the state i or the occurrence of the first rare event,
δ_{ikε} = Σ_{n=1}^{τ_i ∧ ν_ε} χ(η_{n−1} = k), i, k ∈ X.
It follows from the definition of these random variables that (a2) 0 ≤ δ_{ikε} ≤ δ_{ik} and, therefore, (a3) E_j δ_{ikε} ≤ E_j δ_{ik} < ∞ for all j, i, k ∈ X. Moreover, the matrix [_iP^{(ε)}]^n = ‖ P_j{η_n = k, ν_ε ∧ τ_i > n} ‖ for n ≥ 1, and, therefore,
(24) [I − _iP^{(ε)}]^{−1} = I + _iP^{(ε)} + (_iP^{(ε)})^2 + · · · = ‖ E_j δ_{ikε} ‖.
Condition B implies that the random variables ν_ε →^P ∞ as ε → 0 and, therefore, (a4) the random variables δ_{ikε} →^P δ_{ik} as ε → 0. It follows from (a2) and (a4) that, for every i, j, k ∈ X,
(25) E_j δ_{ikε} → E_j δ_{ik} as ε → 0.
One can prove (b1) the existence of the inverse matrix [I − _iP^{(ε)}]^{−1} for all ε small enough, and the convergence relation (b2) [I − _iP^{(ε)}]^{−1} → [I − _iP^{(0)}]^{−1} as ε → 0, in the following simpler way. As was mentioned above, the inverse matrix [I − _iP^{(0)}]^{−1} exists. This means that (b3) det(I − _iP^{(0)}) ≠ 0. Condition B obviously implies that (b4) _iP^{(ε)} → _iP^{(0)} as ε → 0. Thus, det(I − _iP^{(ε)}) ≠ 0 for all ε small enough. Moreover, (b4) implies (b2), since the elements of the inverse matrix [I − _iP^{(ε)}]^{−1} are continuous rational functions of the elements of the matrix I − _iP^{(ε)}. These rational functions have a non-zero denominator, namely det(I − _iP^{(ε)}).
Using relations (19) and (24) we get the following formula,
(26) q_{iε} = Σ_{k=1}^{m} E_i δ_{ikε} p_{kε}.
As is known, the following formula holds, since the Markov chain η_n is ergodic,
(27) E_i δ_{ik} = π_k/π_i, i, k ∈ X.
Using formulas (26) and (27) we get,
(28) |q_{iε} − p_ε/π_i| / (p_ε/π_i) ≤ Σ_{k=1}^{m} |E_i δ_{ikε} − π_k/π_i| · π_i p_{kε} / Σ_{j=1}^{m} π_j p_{jε}
        ≤ Σ_{k=1}^{m} |E_i δ_{ikε} − π_k/π_i| · π_i/π_k → 0 as ε → 0.
Relation (28) implies the asymptotic relation (18). The proof of the lemma is complete. □
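To see relation (18) at work numerically, one can solve the linear system (21)–(22) for a small, entirely hypothetical embedded chain and compare q_{iε} with p_ε/π_i as ε decreases. The sketch below assumes, for illustration only, that the rare-event flag is drawn independently of the next state given the current one, so that p^{(ε)}_{jk} = p_{jk}(1 − p_{jε}).

```python
import numpy as np

# Hypothetical 3-state embedded chain; all numbers are illustrative assumptions.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

# stationary distribution pi of the embedded chain
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

for eps in [1e-2, 1e-3, 1e-4]:
    p_i_eps = eps * np.array([1.0, 2.0, 0.5])   # p_{i,eps} = P_i{zeta_1 in D_eps}
    p_eps = pi @ p_i_eps                        # averaged probability p_eps
    for i in range(3):
        iP = P * (1.0 - p_i_eps)[:, None]       # p^{(eps)}_{jk}, assuming independence
        iP[:, i] = 0.0                          # zero out the i-th column
        q = np.linalg.solve(np.eye(3) - iP, p_i_eps)   # system (21)-(22)
        print(eps, i, q[i] * pi[i] / p_eps)     # ratio q_{i,eps} / (p_eps / pi_i) -> 1
```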
Lemma 2. Let conditions A, B, C hold. Then, for any normalization
function 0 < uε → ∞ as ε → 0, and for i ∈ X,
(29) ψ̃iε(s/uε) → 1 as ε → 0, s ≥ 0.
Proof. Let us introduce the Laplace transforms,
ψ̃_{jiε}(s) = E_j exp{−sβ̃_{iε}} χ(ν_ε ≤ τ_i), s ≥ 0, i, j ∈ X.
Obviously,
(30) ψ̃_{iε}(s) = ψ̃_{iiε}(s)/q_{iε}, s ≥ 0, i ∈ X.
Let us also introduce the Laplace transforms,
p^{(ε)}_{jk}(s) = E_j e^{−sκ_1} χ(ζ_1 ∉ D_ε, η_1 = k), s ≥ 0, j, k ∈ X,
and
p^{(ε)}_{j}(s) = E_j e^{−sκ_1} χ(ζ_1 ∈ D_ε) = ϕ̂_{jε}(s) p_{jε}, s ≥ 0, j ∈ X,
where
ϕ̂_{jε}(s) = E_j{e^{−sκ_1} / ζ_1 ∈ D_ε}, s ≥ 0, j ∈ X.
The functions ψ̃_{jiε}(s/u_ε), j ∈ X satisfy, for every s ≥ 0 and i ∈ X, the following system of linear equations,
(31) ψ̃_{jiε}(s/u_ε) = p^{(ε)}_{j}(s/u_ε) + Σ_{k≠i} p^{(ε)}_{jk}(s/u_ε) ψ̃_{kiε}(s/u_ε), j ∈ X.
System (31) can be rewritten in the following equivalent matrix form,
(32) Ψ̃^{(ε)}_{i}(s/u_ε) = p^{(ε)}(s/u_ε) + _iP^{(ε)}(s/u_ε) Ψ̃^{(ε)}_{i}(s/u_ε),
where
Ψ̃^{(ε)}_{i}(s) = ⎡ ψ̃_{1iε}(s) ⎤ ,  p^{(ε)}(s) = ⎡ p^{(ε)}_{1}(s) ⎤ ,
                 ⎢      ⋮      ⎥                 ⎢       ⋮        ⎥
                 ⎣ ψ̃_{miε}(s) ⎦                 ⎣ p^{(ε)}_{m}(s) ⎦
and
_iP^{(ε)}(s) = ⎡ p^{(ε)}_{11}(s)  . . .  p^{(ε)}_{1(i−1)}(s)  0  p^{(ε)}_{1(i+1)}(s)  . . .  p^{(ε)}_{1m}(s) ⎤
               ⎢       ⋮                        ⋮             ⋮          ⋮                          ⋮        ⎥
               ⎣ p^{(ε)}_{m1}(s)  . . .  p^{(ε)}_{m(i−1)}(s)  0  p^{(ε)}_{m(i+1)}(s)  . . .  p^{(ε)}_{mm}(s) ⎦ .
Let us show that, for every s ≥ 0 and i ∈ X, the matrix I − _iP^{(ε)}(s/u_ε) has an inverse for all ε small enough and, therefore, the solution of the system (32) has the following form,
(33) Ψ̃^{(ε)}_{i}(s/u_ε) = [I − _iP^{(ε)}(s/u_ε)]^{−1} p^{(ε)}(s/u_ε).
Condition B implies, in an obvious way, that, for every s ≥ 0 and j, k ∈ X,
(34) p^{(ε)}_{jk}(s/u_ε) = E_j exp{−sκ_1/u_ε} χ(ζ_1 ∉ D_ε, η_1 = k) → E_j χ(η_1 = k) = p_{jk} as ε → 0.
Thus, (c1) _iP^{(ε)}(s/u_ε) → _iP^{(0)} as ε → 0, for every s ≥ 0 and i ∈ X. It was shown in the proof of Lemma 1 that the inverse matrix [I − _iP^{(0)}]^{−1} exists. Thus, (c1) implies that (c2) there exists, for every s ≥ 0 and i ∈ X, the inverse matrix [I − _iP^{(ε)}(s/u_ε)]^{−1} for all ε small enough. Moreover, for every s ≥ 0 and i ∈ X,
(35) [I − _iP^{(ε)}(s/u_ε)]^{−1} = ‖ Δ^{(ε)}_{jik}(s) ‖ → [I − _iP^{(0)}]^{−1} = ‖ E_j δ_{ik} ‖ as ε → 0.
Taking into account formulas (30), (33) and the definition of p^{(ε)}_{j}(s), we get, for every s ≥ 0 and i ∈ X,
(36) ψ̃_{iiε}(s/u_ε) = Σ_{k=1}^{m} Δ^{(ε)}_{iik}(s) ϕ̂_{kε}(s/u_ε) p_{kε}.
Condition A implies that, for every s ≥ 0 and k ∈ X,
(37) ϕ̂_{kε}(s/u_ε) → 1 as ε → 0.
Indeed, using condition A, we get, for any v > 0,
(38) 0 ≤ lim sup_{ε→0} (1 − ϕ̂_{kε}(s/u_ε)) ≤ 1 − exp{−sv} + lim sup_{ε→0} P_k{κ_1 > v u_ε / ζ_1 ∈ D_ε} = 1 − exp{−sv} → 0 as v → 0.
Relations (35) and (37) imply that, for every s ≥ 0 and i, k ∈ X,
(39) Δ^{(ε)}_{iik}(s) ϕ̂_{kε}(s/u_ε) → E_i δ_{ik} = π_k/π_i as ε → 0.
Using relation (39) we get, for every s ≥ 0 and i ∈ X,
(40) |ψ̃_{iiε}(s/u_ε) − p_ε/π_i| / (p_ε/π_i) ≤ Σ_{k=1}^{m} |Δ^{(ε)}_{iik}(s) ϕ̂_{kε}(s/u_ε) − π_k/π_i| · π_i p_{kε} / Σ_{j=1}^{m} π_j p_{jε}
        ≤ Σ_{k=1}^{m} |Δ^{(ε)}_{iik}(s) ϕ̂_{kε}(s/u_ε) − π_k/π_i| · π_i/π_k → 0 as ε → 0.
Relation (40) means that, for every s ≥ 0 and i ∈ X,
(41) ψ̃_{iiε}(s/u_ε) ∼ p_ε/π_i as ε → 0.
Finally, using relation (18) given in Lemma 1, formula (30), and relation (41), we get, for every s ≥ 0 and i ∈ X,
(42) ψ̃_{iε}(s/u_ε) = ψ̃_{iiε}(s/u_ε)/q_{iε} → 1 as ε → 0.
The proof is complete. □
Lemma 3. Let conditions A, B, C hold. Then for any normalization
function 0 < uε → ∞ as ε → 0, and for i ∈ X,
(43) ψ̂iε(s/uε) → 1 as ε → 0, s ≥ 0.
Proof. The following representation can be written, for every i ∈ X,
ψ̂_{iε}(s) = q_{iε}^{−1} E_i exp{−sβ_i} χ(ν_ε ≤ τ_i)
          = Σ_{k=1}^{m} q_{iε}^{−1} E_i exp{−s(Σ_{n=1}^{ν_ε} κ_n + Σ_{n=ν_ε+1}^{τ_i} κ_n)} χ(ν_ε ≤ τ_i, η_{ν_ε} = k)
          = Σ_{k=1}^{m} q_{iε}^{−1} E_i exp{−sξ_ε} χ(ν_ε ≤ τ_i, η_{ν_ε} = k) ψ_{ki}(s),
where ψ_{ki}(s) = E_k exp{−sβ_i} is the Laplace transform of the time needed by the semi-Markov process to reach the state i from the state k.
Obviously, ψ_{ki}(s/u_ε) → 1 as ε → 0 for every s ≥ 0 and k ∈ X. Thus, for every s ≥ 0 and i ∈ X,
(44) ψ̂_{iε}(s/u_ε) ∼ Σ_{k=1}^{m} q_{iε}^{−1} E_i exp{−sξ_ε/u_ε} χ(ν_ε ≤ τ_i, η_{ν_ε} = k)
                   = q_{iε}^{−1} E_i exp{−sξ_ε/u_ε} χ(ν_ε ≤ τ_i)
                   = ψ̃_{iε}(s/u_ε) → 1 as ε → 0.
The proof is complete. □
In what follows we assume that η0 = j and shall mark the corresponding
processes based on the Markov renewal process (ηn, κn, ζn) by the index j
in order to distinguish the cases with different initial states η0.
Let us introduce, for every i, j ∈ X, the following “cyclic” stochastic process,
(45) ξ_{jiε}(t) = Σ_{n=1}^{[t q_{iε}^{−1}]+1} β_i(n)/u_ε, t ≥ 0.
Note that ξ_{jiε}(t) is a step sum-process with independent increments. Indeed, by the definition, the random variables β_i(n), n = 1, 2, . . . are independent and,
(46) E exp{−sβ_i(n)} = ψ_{ji}(s) for n = 1, and ψ_{ii}(s) for n ≥ 2,
where
ψ_{ji}(s) = E_j exp{−sβ_i}, s ≥ 0, i, j ∈ X.
We are interested in proving some solidarity statements concerning two asymptotic relations.
The first one is the following relation of weak convergence,
(47) ξ_{jiε}(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero, non-decreasing, and stochastically continuous process with the initial value ξ(0) = 0.
The second one is the following asymptotic relation,
(48) (1 − ψ_i(s/u_ε))/q_{iε} → ς(s) as ε → 0, s ≥ 0,
where (e) ς(s) > 0 for s > 0.
The following lemma presents a variant of a so-called solidarity proposition concerning weak convergence for the cyclic step sum-processes ξ_{jiε}(t).
Lemma 4. Let conditions B, C hold and η_0 = j. Then: (α) the assumption that the relation of weak convergence (47) holds for some i, j ∈ X implies that this relation holds for every i, j ∈ X; (β) the limiting process ξ(t), t ≥ 0 in (47) is the same for any i, j ∈ X; (γ) ξ(t), t ≥ 0 is a non-zero and non-decreasing homogeneous process with independent increments; (δ) relation (47) holds for given i, j ∈ X if and only if relation (48) holds for the same i; (ε) the limiting function ς(s) in (48) is the same for any i ∈ X; (ζ) ς(s) is a cumulant of the process ξ(t), t ≥ 0, i.e. E e^{−sξ(t)} = e^{−ς(s)t}, s, t ≥ 0; (η) under condition D, conditions E_γ and F_{a,γ} (with the function p_ε replaced by q_{iε} in these conditions), imposed on the distribution of the random variable β_i, are necessary and sufficient for relation (48) to hold; (θ) the cumulant ς(s) = a s^γ in this case.
Proof. Let us first prove that (f) the assumption that (47) holds for given
i, j ∈ X implies that this relation holds for the same i and every j ∈ X,
moreover the limiting process ξ(t), t ≥ 0 does not depend on j.
Indeed, the pre-limiting process ξjiε(t) can be represented in the form of
the following sum,
(49) ξjiε(t) = βi(1)/uε + ξ′iε(t), t ≥ 0,
where
ξ′_{iε}(t) = Σ_{n=2}^{[t q_{iε}^{−1}]+1} β_i(n)/u_ε, t ≥ 0.
The random variable β_i(1)/u_ε and the process ξ′_{iε}(t), t ≥ 0 are independent. The distribution of the random variable β_i(1)/u_ε depends on j, while the finite-dimensional distributions of the process ξ′_{iε}(t), t ≥ 0 do not depend on j. But the random variables β_i(1)/u_ε →^P 0 as ε → 0, for every j ∈ X, or, equivalently, (f1) the random variables ξ_{jiε}(t) − ξ′_{iε}(t) →^P 0 as ε → 0, for every t > 0 and j ∈ X. Thus, the assumption that (47) holds for given i, j ∈ X implies the weak convergence of the process ξ′_{iε}(t), t ≥ 0 to the same limiting process. This convergence, due to (f1), implies that (f2) the process ξ_{jiε}(t), t ≥ 0 weakly converges to the same limiting process, for every j ∈ X; moreover, the finite-dimensional distributions of the limiting process do not depend on j since it is so for the pre-limiting process ξ′_{iε}(t), t ≥ 0.
Let us now prove that (g) the assumption that (47) holds for given i, j ∈
X implies that this relation holds for the same j and every i ∈ X, moreover
the limiting process ξ(t), t ≥ 0 does not depend on i.
Note that two partial solidarity propositions (f) and (g), formulated
above, imply the solidarity statements (α) and (β) formulated in Lemma 4.
To prove the proposition (g), let us introduce, for j ∈ X, the following step sum-processes based on the sojourn times of the semi-Markov process η(t),
(50) ξ_{jε}(t) = Σ_{n=1}^{[t p_ε^{−1}]} κ_n/u_ε, t ≥ 0.
Let us also introduce, for i, j ∈ X, the processes μ_{jiε}(t) which count (in the scale p_ε) the number of transitions of the semi-Markov process η(t) that occur in the first [t q_{iε}^{−1}] + 1 cycles,
μ_{jiε}(t) = p_ε τ_i([t q_{iε}^{−1}] + 1), t ≥ 0.
The process ξjiε(t) can be represented, for every i, j ∈ X, in the form of
superposition of the processes introduced above,
(51) ξjiε(t) = ξjε(μjiε(t)), t ≥ 0.
Let us now consider the following relation of weak convergence for the processes ξ_{jε}(t),
(52) ξjε(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is non-zero, non-decreasing, and stochastically contin-
uous process with the initial value ξ(0) = 0.
Let us now prove that (g1) relation (47) holds, for given i, j ∈ X, if and
only if the relation (52) holds, for the same j, moreover the limiting process
ξ(t), t ≥ 0 can be taken the same in both relations.
Note that (g1) implies (g). Indeed, due to its “if and only if” character, the relation
(52) for given j ∈ X implies that (47) should hold for the same j and every
i ∈ X, and with the same limiting process. Moreover, the limiting process
in (52) does not depend on i since the pre-limiting process ξjε(t), t ≥ 0 does
not depend on i.
We display the proof of (g1) for one-dimensional distributions. The proof
for multi-dimensional distributions is similar.
Let us first prove that (g2) the weak convergence of random variables
ξjε(t) in (52), assumed to hold for every t > 0 and given j ∈ X, implies the
weak convergence of random variables ξjiε(t) in (47) for every t > 0, the
same j and every i ∈ X, moreover the limiting random variable ξ(t) can be
taken the same in both relations.
The process μ_{jiε}(t) can be represented, for every i, j ∈ X, in the form of a sum-process with independent increments,
(53) μ_{jiε}(t) = p_ε([q_{iε}^{−1}] + 1) Σ_{n=1}^{[t q_{iε}^{−1}]+1} α_i(n) / ([q_{iε}^{−1}] + 1), t ≥ 0,
where α_i(n) = τ_i(n) − τ_i(n − 1), n = 1, 2, . . .. Indeed, the random variables α_i(n), n ≥ 1 are independent and,
(54) E exp{−sα_i(n)} = ϑ_{ji}(s) for n = 1, and ϑ_{ii}(s) for n ≥ 2,
where
ϑ_{ji}(s) = E_j exp{−sα_i(1)}, s ≥ 0, i, j ∈ X.
Since the Markov chain η_n is ergodic, E_i α_i(1) = π_i^{−1}. Thus, using the standard weak law of large numbers for i.i.d. random variables with finite mean, the asymptotic relation (18) given in Lemma 1, and the representation (53), we get, for every t > 0 and i, j ∈ X,
(55) μ_{jiε}(t) →^P π_i t E_i α_i(1) = t as ε → 0.
Let us choose an arbitrary t > 0 and a sequence 0 < cn < t, n = 1, 2, . . .
such that cn → 0 as n → ∞.
By the definition, the processes ξjiε(t), ξjε(t), and μjiε(t) are non-negative
and non-decreasing. Taking into account this fact and the representation
(51), we get, for every t > 0, i, j ∈ X, any real-valued x, and n ≥ 1,
(56) P{ξ_{jiε}(t) > x} = P{ξ_{jiε}(t) > x, μ_{jiε}(t) ≤ t + c_n} + P{ξ_{jiε}(t) > x, μ_{jiε}(t) > t + c_n}
        ≤ P{ξ_{jε}(t + c_n) > x} + P{μ_{jiε}(t) > t + c_n}.
Let U_t be the set of continuity points of the distribution functions of the limiting random variables ξ(t) and ξ(t ± c_n), n = 1, 2, . . . in (52). This set is the real line R except at most a countable set of points.
Using the estimate (56), relation (55), and the assumptions that relation (52) holds for one-dimensional distributions, for every t > 0 and given j ∈ X, and that the limiting process ξ(t) in (52) is stochastically continuous, we get, for every t > 0, the same j, and every i ∈ X,
(57) lim sup_{ε→0} P{ξ_{jiε}(t) > x} ≤ lim_{n→∞} lim sup_{ε→0} (P{ξ_{jε}(t + c_n) > x} + P{μ_{jiε}(t) > t + c_n})
        = lim_{n→∞} P{ξ(t + c_n) > x} = P{ξ(t) > x}, x ∈ U_t,
or, equivalently,
(58) lim inf_{ε→0} P{ξ_{jiε}(t) ≤ x} ≥ P{ξ(t) ≤ x}, x ∈ U_t.
We can also employ the following estimate, for every t > 0, i, j ∈ X, any real x, and n ≥ 1,
(59) P{ξ_{jiε}(t) ≤ x} ≤ P{ξ_{jε}(t − c_n) ≤ x} + P{μ_{jiε}(t) ≤ t − c_n}.
Then, using the estimate (59), relation (55), and the assumptions that
relation (52) holds for one-dimensional distributions, for every t > 0 and
given j ∈ X, and that the limiting process ξ(t) in (52) is stochastically
continuous, we get, for every t > 0, the same j, and every i ∈ X,
(60) lim sup_{ε→0} P{ξ_{jiε}(t) ≤ x} ≤ P{ξ(t) ≤ x}, x ∈ U_t.
Relations (58) and (60) imply that P{ξ_{jiε}(t) ≤ x} → P{ξ(t) ≤ x} as ε → 0, x ∈ U_t. Since the set U_t is dense in R, this relation implies that, for every t > 0, given j (for which relation (52) is assumed to hold), and every i ∈ X,
(61) ξ_{jiε}(t) ⇒ ξ(t) as ε → 0.
Let us now prove that (g3) the weak convergence of random variables
ξjiε(t) in (47), assumed to hold for every t > 0 and given i, j ∈ X, implies
the weak convergence of random variables ξ_{jε}(t) in (52) for every t > 0 and the same j; moreover, the limiting random variable ξ(t) can be taken the same in both relations.
Let us choose an arbitrary t > 0 and a sequence 0 < dn < t, n = 1, 2, . . .
such that dn → 0 as n → ∞.
Using again that the processes ξjiε(t), ξjε(t), and μjiε(t) are non-negative
and non-decreasing, and the representation (51), we get, for every t > 0,
given i, j ∈ X, any real-valued x, and n ≥ 1,
(62) P{ξ_{jε}(t) > x} = P{ξ_{jε}(t) > x, μ_{jiε}(t + d_n) > t} + P{ξ_{jε}(t) > x, μ_{jiε}(t + d_n) ≤ t}
        ≤ P{ξ_{jiε}(t + d_n) > x} + P{μ_{jiε}(t + d_n) ≤ t}.
Let Vt be the set of continuity points for the distribution functions of the
limiting random variables ξ(t) and ξ(t ± dn), n = 1, 2, . . . in (47). This set
is the real line R except at most a countable set of points.
Using the estimate (62), relation (55), and the assumptions that relation
(47) holds for one-dimensional distributions, for every t > 0 and given i, j ∈
X, and that the limiting process ξ(t) in (47) is stochastically continuous,
we get, for every t > 0 and the same j,
(63) lim sup_{ε→0} P{ξ_{jε}(t) > x} ≤ lim_{n→∞} lim sup_{ε→0} (P{ξ_{jiε}(t + d_n) > x} + P{μ_{jiε}(t + d_n) ≤ t})
        = lim_{n→∞} P{ξ(t + d_n) > x} = P{ξ(t) > x}, x ∈ V_t,
or, equivalently,
(64) lim inf_{ε→0} P{ξ_{jε}(t) ≤ x} ≥ P{ξ(t) ≤ x}, x ∈ V_t.
We can also employ the following estimate, for every t > 0, i, j ∈ X, any real-valued x, and n ≥ 1,
(65) P{ξ_{jε}(t) ≤ x} ≤ P{ξ_{jiε}(t − d_n) ≤ x} + P{μ_{jiε}(t − d_n) > t}.
Then, using the estimate (65), relation (55), and the assumptions that
relation (47) holds for one-dimensional distributions, for every t > 0 and for
given i, j ∈ X, and that the limiting process ξ(t) in (47) is stochastically
continuous, we get, for every t > 0 and the same j,
(66) lim sup_{ε→0} P{ξ_{jε}(t) ≤ x} ≤ P{ξ(t) ≤ x}, x ∈ V_t.
Relations (64) and (66) imply that P{ξ_{jε}(t) ≤ x} → P{ξ(t) ≤ x} as ε → 0, x ∈ V_t. Since the set V_t is dense in R, this relation implies that, for every t > 0 and given j (for which relation (47) is assumed to hold),
(67) ξ_{jε}(t) ⇒ ξ(t) as ε → 0.
The proof of statements (α) and (β) formulated in Lemma 4 is complete.
As was mentioned above, ξ_{jiε}(t) − ξ′_{iε}(t) →^P 0 as ε → 0, for every t ≥ 0, and, therefore, the weak convergence for the processes ξ_{jiε}(t), t ≥ 0 and ξ′_{iε}(t), t ≥ 0 is equivalent.
The statement (γ) follows directly from the definition of the sum-process ξ′_{iε}(t), t ≥ 0, since the random variables β_i(n), n ≥ 2 are independent and identically distributed and ξ′_{iε}(t), t ≥ 0 is a homogeneous step sum-process with independent increments. As is known, the class of possible limiting processes (in the sense of weak convergence) for such step sum-processes coincides with the class of stochastically continuous homogeneous processes with independent increments.
Moreover, as is known, the weak convergence of finite-dimensional distributions follows in this case from the weak convergence of one-dimensional distributions. The statements (δ) and (ε) follow, in an obvious way, from the following formula,
(68) E exp{−sξ′_{iε}(t)} = ψ_i(s/u_ε)^{[t q_{iε}^{−1}]}, s, t ≥ 0, i ∈ X.
Indeed, (68) implies that, for given t > 0 and i ∈ X, the random variables ξ′_{iε}(t) converge weakly to some non-zero limiting random variable if and only if relation (48) holds and, in this case,
(69) E exp{−sξ′_{iε}(t)} = ψ_i(s/u_ε)^{[t q_{iε}^{−1}]} ∼ exp{−(1 − ψ_i(s/u_ε)) t q_{iε}^{−1}} → exp{−ς(s)t} as ε → 0, s ≥ 0,
where ς(s) > 0 for s > 0.
According to the remarks above, the random variable ξ(t) has, for every t > 0, an infinitely divisible distribution, and ς(s)t is the cumulant of this random variable. This proves the statement (ζ).
The proofs of the two last statements (η) and (θ) of Lemma 4 are given in Lemma 6. □
Remark 3. The proof presented above shows that the only property of the quantities q_{iε} and p_ε used in the proof of Lemma 4 is (h) 0 < π_i q_{iε} ∼ p_ε → 0 as ε → 0, i ∈ X. Lemma 4 and its proof remain valid if any functions q_{iε} and p_ε satisfying the assumption (h) are used in the formulas (45) and (50) defining, respectively, the processes ξ_{jiε}(t), t ≥ 0 and ξ_{jε}(t), t ≥ 0, and in the expression (1 − ψ_i(s/u_ε))/q_{iε} used in the asymptotic relation (48). In this case, conditions A and B in Lemma 4 can be replaced by the simpler assumption (h), while condition C should remain.
The proof of Lemma 4 is based on the proposition about equivalence of
weak convergence of the cyclic step sum-processes ξjiε(t), t ≥ 0 introduced
in (45) and the step sum-processes ξjε(t), t ≥ 0 introduced in (50).
Let us now formulate the proposition about equivalence of the relation of
weak convergence (52) for processes ξjε(t), t ≥ 0 and the following asymp-
totic relation formulated in terms of averaged Laplace transforms ϕ(s),
(70) (1 − ϕ(s/u_ε))/p_ε → ς(s) as ε → 0, s ≥ 0,
where (k) ς(s) > 0 for s > 0.
Lemma 5. Let conditions B, C hold, and η_0 = j. Then: (ι) the relation of weak convergence (47) holds, for given i, j ∈ X, if and only if the relation of weak convergence (52) holds, for the same j; (κ) the limiting process ξ(t), t ≥ 0 is the same in relations (47) and (52); (λ) the assumption that the relation of weak convergence (52) holds for some j ∈ X implies that this relation holds for every j ∈ X; (μ) the limiting process ξ(t), t ≥ 0 in (52) is the same for any j ∈ X; (ν) ξ(t), t ≥ 0 is a non-zero and non-decreasing homogeneous process with independent increments; (ξ) relation (52) holds for given j ∈ X if and only if relation (70) holds; (π) the limiting function ς(s) in (70) is a cumulant of the process ξ(t), t ≥ 0, i.e. E e^{−sξ(t)} = e^{−ς(s)t}, s, t ≥ 0; (ρ) under condition D, conditions E_γ and F_{a,γ} are necessary and sufficient for relation (70) to hold; (σ) the cumulant ς(s) = a s^γ in this case.
Proof. The statements (ι) – (ν) have been already verified in the proof of
Lemma 4.
Let us introduce conditional distribution functions for sojourn times κn
for the semi-Markov process η(t),
Gij(t) = P{κ1 ≤ t/η0 = i, η1 = j}, t ≥ 0, i, j ∈ X.
Obviously,
Q_{ij}(t) = p_{ij} G_{ij}(t), t ≥ 0, i, j ∈ X,
and
G_i(t) = Σ_{j=1}^{m} Q_{ij}(t) = Σ_{j=1}^{m} p_{ij} G_{ij}(t), t ≥ 0, i ∈ X.
Note that one can choose Gij(t) as arbitrary distribution functions con-
centrated on the positive half-line if pij = 0. This does not affect transition
probabilities Qij(t) and distribution functions Gi(t).
As is known from the theory of semi-Markov processes, the sojourn times κ_n are conditionally independent with respect to the values of the imbedded Markov chain η_n. More precisely, this means that, for any t_1, . . . , t_n ≥ 0, i_0, i_1, . . . , i_n, n = 1, 2, . . .,
(71) P{κ_1 ≤ t_1, . . . , κ_n ≤ t_n / η_0 = i_0, . . . , η_n = i_n} = G_{i_0 i_1}(t_1) × · · · × G_{i_{n−1} i_n}(t_n).
As in the proof of Lemma 4, we assume that η0 = j.
It follows from relation (71) that the process ξ_{jε}(t) has, for every j ∈ X, the same finite-dimensional distributions as the following process ξ̆_{jε}(t) (we use the symbol d= to show this stochastic equality),
(72) ξ_{jε}(t) = Σ_{n=1}^{[t p_ε^{−1}]} κ_n/u_ε, t ≥ 0  d=  ξ̆_{jε}(t), t ≥ 0,
where
(73) ξ̆_{jε}(t) = Σ_{n=1}^{[t p_ε^{−1}]} κ_n(η_{n−1}, η_n)/u_ε, t ≥ 0,
and
(i1) {ηn, n = 1, 2, . . .} is a Markov chain with a state space X and the
matrix of transition probabilities ‖pij‖;
(i2) κn(i, j), i, j ∈ X, n ≥ 1 are mutually independent random variables;
(i3) P{κn(i, j) ≤ t} = Gij(t), t ≥ 0 for i, j ∈ X, n ≥ 1;
(i4) the set of random variables {κn(i, j), i, j ∈ X, n ≥ 1} and the
Markov chain {ηn, n = 1, 2, . . .} are independent.
It follows from the stochastic equality (72) that (j) the relation of weak convergence (52), treated in Lemma 4, is equivalent to the following relation,
(74) ξ̆_{jε}(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero, non-decreasing, and stochastically continuous process with the initial value ξ(0) = 0.
Let us define, for every j, i, k ∈ X, the counting random variables for the random sequence η̄_n = (η_{n−1}, η_n), n = 1, 2, . . .,
ν_{jn}(i, k) = Σ_{r=1}^{n} χ{(η_{r−1}, η_r) = (i, k)}, n = 0, 1, . . . .
It follows from the defining properties (i1)–(i4) listed above that the process ξ̆_{jε}(t) has, for every j ∈ X, the same finite-dimensional distributions as the following process ξ̃_{jε}(t),
(75) ξ̆_{jε}(t), t ≥ 0  d=  ξ̃_{jε}(t), t ≥ 0,
where
(76) ξ̃_{jε}(t) = Σ_{(i,k)∈X̃} Σ_{n=1}^{ν_{j[t p_ε^{−1}]}(i,k)} κ_n(i, k)/u_ε, t ≥ 0,
and
X̃ = {(i, k) ∈ X × X : p_{ik} > 0}.
Note that the definition of the process ξ̃_{jε}(t) takes into account that the random variables ν_{jn}(i, k) = 0, n = 0, 1, . . . with probability 1 if p_{ik} = 0.
The stochastic equalities (72) and (75) let us replace the processes ξjε(t)
by the processes ξ̃jε(t) when studying their weak convergence.
It follows from the stochastic equality (75) that (k) the relation of weak convergence (52), treated in Lemma 5, is also equivalent to the following relation,
(77) ξ̃_{jε}(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
Let us also introduce the following step sum-processes,
(78) ξ̂_ε(t) = Σ_{(i,k)∈X̃} Σ_{n=1}^{[t π_i p_{ik} p_ε^{−1}]} κ_n(i, k)/u_ε, t ≥ 0.
We are also interested in the following relation of weak convergence,
(79) ξ̂ε(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
Let us prove the equivalence of relations (77) and (79). This means that
(l) the relation (77) holds for some j ∈ X if and only if the relation (79)
holds, and, moreover, the limiting process can be taken the same in both
relations.
We display the proof for one-dimensional distributions. The proof for
multi-dimensional distributions is similar.
Let us prove that (l1) the assumption that relation (79) holds for every t > 0 implies that relation (77) holds for every t > 0 and j ∈ X; moreover, the limiting random variable ξ(t) can be taken the same in both relations.
The law of large numbers for ergodic Markov chains implies that, for every t > 0 and j, i, k ∈ X,
(80) ν_{j[t p_ε^{−1}]}(i, k) / p_ε^{−1} →^P π_i p_{ik} t as ε → 0.
Let us choose an arbitrary t > 0 and a sequence 0 < cn < t, n = 1, 2, . . .
such that cn → 0 as n → ∞.
The processes Σ_{n=1}^{[t p_ε^{−1}]} κ_n(i, k)/u_ε, t ≥ 0 and p_ε ν_{j[t p_ε^{−1}]}(i, k), t ≥ 0 are non-negative and non-decreasing, for every j, i, k ∈ X. Taking into account this fact and the representation (76), we get,
for every t > 0, j ∈ X, any real-valued x, and n ≥ 1,
(81) P{ξ̃_{jε}(t) > x} = P{ξ̃_{jε}(t) > x, ∩_{(i,k)∈X̃} A^{(ε)}_{jik}(t, t + c_n)} + P{ξ̃_{jε}(t) > x, ∪_{(i,k)∈X̃} Ā^{(ε)}_{jik}(t, t + c_n)}
        ≤ P{ξ̂_ε(t + c_n) > x} + Σ_{(i,k)∈X̃} P{Ā^{(ε)}_{jik}(t, t + c_n)},
where
A^{(ε)}_{jik}(t, s) = {ν_{j[t p_ε^{−1}]}(i, k) ≤ s π_i p_{ik} p_ε^{−1}}, t, s > 0, j, i, k ∈ X.
Note that (80) implies that, for every 0 < t < s and j ∈ X, (i, k) ∈ X̃,
(82) P{A^{(ε)}_{jik}(s, t)} + P{Ā^{(ε)}_{jik}(t, s)} → 0 as ε → 0.
Let Yt be the set of continuity points for the distribution functions of the
limiting random variables ξ(t) and ξ(t ± cn), n = 1, 2, . . . in (79). This set
is the real line R except at most a countable set of points.
Using the estimate (81), relation (82), and the assumptions that relation
(79) holds for one-dimensional distributions, for every t > 0, and that the
limiting process ξ(t) in (79) is stochastically continuous, we get, for every
t > 0 and j ∈ X,
(83) lim sup_{ε→0} P{ξ̃_{jε}(t) > x} ≤ lim_{n→∞} lim sup_{ε→0} (P{ξ̂_ε(t + c_n) > x} + Σ_{(i,k)∈X̃} P{Ā^{(ε)}_{jik}(t, t + c_n)})
        = lim_{n→∞} P{ξ(t + c_n) > x} = P{ξ(t) > x}, x ∈ Y_t,
or, equivalently,
(84) lim inf_{ε→0} P{ξ̃_{jε}(t) ≤ x} ≥ P{ξ(t) ≤ x}, x ∈ Y_t.
Similarly, we can get, for every t > 0 and j ∈ X,
(85) lim sup_{ε→0} P{ξ̃_{jε}(t) ≤ x} ≤ P{ξ(t) ≤ x}, x ∈ Y_t.
Relations (84) and (85) imply that P{ξ̃_{jε}(t) ≤ x} → P{ξ(t) ≤ x} as ε → 0, x ∈ Y_t, for every j ∈ X. Since the set Y_t is dense in R, this relation implies that, for every t > 0 and j ∈ X,
(86) ξ̃_{jε}(t) ⇒ ξ(t) as ε → 0.
We omit details in the proof of the inverse proposition (l2): the assumption that relation (77) holds for every t > 0 and given j ∈ X implies that relation (79) holds for every t > 0 and, moreover, the limiting random variable ξ(t) can be taken the same in both relations.
Let us choose an arbitrary t > 0 and a sequence 0 < dn < t, n = 1, 2, . . .
such that dn → 0 as n → ∞.
Analogously to (81), we get the following estimate, “inverse” to (81), for every t > 0, real-valued x, and n ≥ 1,
(87) P{ξ̂_ε(t) > x} = P{ξ̂_ε(t) > x, ∩_{(i,k)∈X̃} Ā^{(ε)}_{jik}(t + d_n, t)} + P{ξ̂_ε(t) > x, ∪_{(i,k)∈X̃} A^{(ε)}_{jik}(t + d_n, t)}
        ≤ P{ξ̃_{jε}(t + d_n) > x} + Σ_{(i,k)∈X̃} P{A^{(ε)}_{jik}(t + d_n, t)}.
Let Zt be the set of continuity points for the distribution functions of the
limiting random variables ξ(t) and ξ(t ± dn), n = 1, 2, . . . in (77). This set
is the real line R except at most a countable set of points.
Using the estimate (87), relation (82), and the assumptions that relation
(77) holds for one-dimensional distributions, for every t > 0 and given
j ∈ X, and that the limiting process ξ(t) in (77) is stochastically continuous,
we get, for every t > 0 and x ∈ Zt,
(88) lim sup_{ε→0} P{ξ̂_ε(t) > x} ≤ lim_{n→∞} lim sup_{ε→0} (P{ξ̃_{jε}(t + d_n) > x} + Σ_{(i,k)∈X̃} P{A^{(ε)}_{jik}(t + d_n, t)})
        = lim_{n→∞} P{ξ(t + d_n) > x} = P{ξ(t) > x}.
The continuation of the proof of the proposition (l2) is analogous to that given above in the proof of the proposition (l1).
Let us now introduce the step sum-process,
(89) ξ̆*_ε(t) = Σ_{n=1}^{[t p_ε^{−1}]} κ*_n(η′_n, η″_n)/u_ε, t ≥ 0,
where
(m1) {η̄*_n = (η′_n, η″_n), n = 1, 2, . . .} is a sequence of i.i.d. random vectors which take values (i, j) with probabilities π_i p_{ij} for i, j ∈ X;
(m2) κ*_n(i, j), i, j ∈ X, n ≥ 1 are mutually independent random variables;
(m3) P{κ*_n(i, j) ≤ t} = G_{ij}(t), t ≥ 0 for i, j ∈ X, n ≥ 1;
(m4) the set of random variables {κ*_n(i, j), i, j ∈ X, n ≥ 1} and the random sequence {η̄*_n, n = 1, 2, . . .} are independent.
We are interested in the following relation of weak convergence,
(90) ξ̆∗ε (t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
Let us define, for every i, k ∈ X, the counting random variables for the random sequence η̄*_n = (η′_n, η″_n), n = 1, 2, . . .,
ν*_n(i, k) = Σ_{r=1}^{n} χ{(η′_r, η″_r) = (i, k)}, n = 0, 1, . . . .
It follows from the defining properties (m1)–(m4) listed above that the process ξ̆*_ε(t) has the same finite-dimensional distributions as the following process ξ̃*_ε(t),
(91) ξ̆*_ε(t), t ≥ 0  d=  ξ̃*_ε(t), t ≥ 0,
where
(92) ξ̃*_ε(t) = Σ_{(i,k)∈X̃} Σ_{n=1}^{ν*_{[t p_ε^{−1}]}(i,k)} κ*_n(i, k)/u_ε, t ≥ 0.
It follows from the stochastic equality (91) that (n) the relation of weak convergence (90) is equivalent to the following relation,
(93) ξ̃*_ε(t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
Let us also introduce the following step sum-processes,
(94) ξ̂*_ε(t) = Σ_{(i,k)∈X̃} Σ_{n=1}^{[t π_i p_{ik} p_ε^{−1}]} κ*_n(i, k)/u_ε, t ≥ 0.
Let us also consider the following relation of weak convergence,
(95) ξ̂∗ε (t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0,
where (d) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
We state that the relations (93) and (95) are equivalent. This means that (o) relation (93) holds if and only if relation (95) holds; moreover, the limiting stochastic process ξ(t), t ≥ 0 can be taken the same in both relations.
By the definition, χ{(η′_r, η″_r) = (i, k)}, r = 1, 2, . . . are i.i.d. random variables taking the values 1 and 0 with probabilities π_i p_{ik} and 1 − π_i p_{ik}. Thus, by the standard weak law of large numbers, for every t > 0 and i, k ∈ X,
(96) ν*_{[t p_ε^{−1}]}(i, k) / p_ε^{−1} →^P π_i p_{ik} t as ε → 0.
A careful analysis of the proof of the proposition (l) about the equivalence of the relations of weak convergence (77), for the processes ξ̃_{jε}(t), t ≥ 0, and (79), for the processes ξ̂_ε(t), t ≥ 0, shows that conditions (i2)–(i4) were used in this proof plus the asymptotic relation (80), which is a weak law of large numbers for the corresponding frequency random variables for the random sequence η_n. Condition (i1) was used, together with condition C, only as a condition providing the asymptotic relation (80).
These remarks show that the proof given for the proposition (l) can simply be replicated in order to prove the proposition (o). Indeed, conditions (m2) – (m4) replace, in this case, conditions (i2) – (i4), and the asymptotic relation (96), implied by condition (m1), replaces the asymptotic relation (80).
Now let us use the following stochastic equality that obviously follows from comparison of conditions (i2) – (i4) and (m2) – (m4),

(97)    ξ̂_ε(t), t ≥ 0  =^d  ξ̂*_ε(t), t ≥ 0.
The propositions (l) and (o), combined with the stochastic equalities (72), (75), (91), and (97), imply that (p) the relation of weak convergence (51), treated in Lemma 5, holds if and only if relation (90) holds; moreover, the limiting stochastic process ξ(t) can be taken the same in both relations.
We are now in a position to make the last step in the proof. Conditions (m1) – (m4) imply that κ*_n(η′_n, η″_n), n = 1, 2, . . ., are i.i.d. random variables.
Moreover, the corresponding distribution has the following form,

    P{κ*_1(η′_1, η″_1) ≤ t} = ∑_{i,k∈X} G_{ik}(t) π_i p_{ik} = ∑_{i∈X} π_i ∑_{k∈X} G_{ik}(t) p_{ik} = ∑_{i∈X} π_i G_i(t) = G(t),   t ≥ 0.   (98)
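A quick numerical sanity check of the mixture identity (98), again under the assumed two-state data (exponential G_{ik}), compares the empirical distribution function of κ*_1(η′_1, η″_1) with the mixture G(t) = ∑_i π_i ∑_k p_{ik} G_{ik}(t):

# Illustrative check of (98): the law of kappa*_1(eta'_1, eta''_1) is the mixture
# sum_{i,k} pi_i p_ik G_ik, here with exponential G_ik on a two-state set X.
import numpy as np

rng = np.random.default_rng(2)
pi = np.array([0.6, 0.4])
p = np.array([[0.7, 0.3], [0.5, 0.5]])
mean_G = np.array([[1.0, 2.0], [0.5, 1.5]])    # assumed exponential means of G_ik

n = 200_000
pairs = rng.choice(4, size=n, p=(pi[:, None] * p).ravel())
i_idx, k_idx = np.divmod(pairs, 2)
kappa = rng.exponential(mean_G[i_idx, k_idx])

for t in [0.5, 1.0, 2.0, 4.0]:
    G_t = float(np.sum(pi[:, None] * p * (1.0 - np.exp(-t / mean_G))))   # mixture CDF
    print(t, round(float((kappa <= t).mean()), 4), round(G_t, 4))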
The statements (ξ) and (π) follow, in an obvious way, from the proposition (p). Indeed, ξ̆*_ε(t), t ≥ 0, is a step sum-process based on i.i.d. random variables and, therefore,

(99)    E exp{−s ξ̆*_ε(t)} = ϕ(s/u_ε)^{[t p_ε^{-1}]},   s, t ≥ 0.
Relation (99) implies that, for any given t > 0, the random variables ξ̆*_ε(t) converge weakly to some non-zero limiting random variable if and only if relation (70) holds and, in this case,

    E exp{−s ξ̆*_ε(t)} = ϕ(s/u_ε)^{[t p_ε^{-1}]} ∼ exp{−(1 − ϕ(s/u_ε)) t p_ε^{-1}} → exp{−ς(s) t}  as ε → 0,  s ≥ 0,   (100)

where ς(s) > 0 for s > 0.
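For completeness, here is a one-line sketch (not in the original argument) of the asymptotic equivalence used in (100): relation (70) together with p_ε → 0 implies ϕ(s/u_ε) → 1 and, hence,

    [t p_ε^{-1}] log ϕ(s/u_ε) = −[t p_ε^{-1}] (1 − ϕ(s/u_ε)) (1 + o(1)) → −ς(s) t  as ε → 0,

since log x = −(1 − x)(1 + o(1)) as x → 1.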
The random variable ξ(t) has, for every t > 0, an infinitely divisible distribution, as a weak limit of sums of i.i.d. random variables, and ς(s)t is the cumulant of ξ(t).
The statements (ρ) and (σ) of Lemma 5 are proved in Lemma 6. □
Remark 4. The proof presented above shows that the only property of the quantities p_ε used in the proof of Lemma 5 was (r) 0 < p_ε → 0 as ε → 0. Lemma 5 and its proof remain valid if any function p_ε satisfying the assumption (r) is used in the formulas (45) and (50), defining, respectively, the process ξ_{jε}(t), t ≥ 0, and in the expression (1 − ϕ(s/u_ε))/p_ε used in the asymptotic relation (70). In this case, condition B in Lemma 5 can be replaced by the simpler assumption (r).
Remark 5. The proof presented above can be applied to any sum-process of conditionally independent random variables ξ̆*_ε(t), t ≥ 0, defined by formula (89), under the assumption that (s1) conditions (m2) – (m4) hold. Condition (m1) can be replaced by the general assumption that (s2) {η̄*_n = (η′_n, η″_n), n = 1, 2, . . .} is a sequence of random vectors taking values in the space X × X such that the weak law of large numbers holds in the form of the asymptotic relation (96). Also, (s3) the positivity of π_i is not needed, and (s4) any function satisfying assumption (r) can be taken as p_ε. Under the assumptions (s1) – (s4), the asymptotic relation (70) is a necessary and sufficient condition for weak convergence of the processes ξ̆*_ε(t), t ≥ 0. The limiting process is a non-negative homogeneous process with independent increments with the cumulant ς(s), which appears in (70). Moreover, under condition D, conditions E_γ and F_{a,γ} are necessary and sufficient for relation (70) to hold, and the cumulant ς(s) = a s^γ in this case.
In conclusion, let us make some bibliographical remarks concerning the solidarity statements formulated in Lemmas 4 and 5. It should be noted that solidarity statements similar to the statements (ι) – (ν) given in Lemma 5 can be found, for example, in Loève (1955), Chung Kai Lai (1960), Pyke and Schaufele (1964), Silvestrov (1970, 1974), Silvestrov and Poleščuk (1974), and Pyke (1999).
However, other solidarity statements given in Lemmas 4 and 5 in the form of necessary and sufficient conditions imposed on cyclic or averaged characteristics, as well as their extensions formulated in Remarks 4 – 6, have not been pointed out in the literature.
Let us now complete the proof of Theorem 1 and Lemmas 4 and 5 by clarifying the role of conditions D, E_γ and F_{a,γ}. In the main, Lemma 6, formulated below, combines statements known in the literature, in particular those given in Feller (1966). A new element is the form of the conditions, which unites the cases of degenerate and stable convergence and gives a convenient description of the balancing condition connecting the functions u_ε and p_ε.
Let us introduce the step sum-process with i.i.d. random summands,

    ξ_ε(t) = ∑_{n=1}^{[t p_ε^{-1}]} ξ_n / u_ε,   t ≥ 0,

where ξ_n, n = 1, 2, . . ., are i.i.d. non-negative random variables with the Laplace transform

    E exp{−s ξ_1} = ϕ(s) = ∫_0^∞ e^{−st} G(dt),   s ≥ 0.
Lemma 6. Let condition D hold. Then, (τ) the processes ξ_ε(t), t ≥ 0, weakly converge to a non-zero and non-negative process if and only if relation (70) holds; (υ) the limiting process is, in this case, a non-negative homogeneous process with independent increments with the Laplace transform E e^{−s ξ(t)} = e^{−ς(s) t}, s, t ≥ 0; (φ) conditions E_γ, F_{a,γ} are necessary and sufficient for relation (70) to hold; (χ) the limiting cumulant in relation (70) takes, in this case, the form ς(s) = a s^γ, where 0 < γ ≤ 1 and a > 0.
Proof. The statements (τ ) and (υ) are well-known and, in fact, they are
explained above, in (99) and (100).
Let 0 < γ ≤ 1, a > 0, and let L(t) be a slowly varying function. Let us introduce the conditions:

G_γ: 1 − ϕ(s) ∼ s^γ L(1/s) as 0 < s → 0;

H_{a,γ}: L(u_ε) / (p_ε u_ε^γ) → a as ε → 0.
We shall use the fact that (t) conditions G_γ and H_{a,γ} are necessary and sufficient for relation (70) to hold. It should be noted that this proposition is known, and we give its proof just to keep the text self-contained.
If these conditions hold, then, for every s > 0,

    (1 − ϕ(s/u_ε)) / p_ε ∼ (L(u_ε) / (p_ε u_ε^γ)) · s^γ · (L(u_ε/s) / L(u_ε)) → a s^γ  as ε → 0.   (101)
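As an illustration of conditions G_γ and H_{a,γ} and of relation (101) (an assumed example, not part of the proof), take the Pareto-type distribution with 1 − G(t) = t^{−γ} for t ≥ 1, for which 1 − ϕ(s) ∼ Γ(1 − γ) s^γ as s → 0, i.e. L(1/s) → Γ(1 − γ):

# Illustrative numeric check of (101) for 1 - G(t) = t^(-g), t >= 1, with gamma = g.
# Then 1 - phi(s) ~ Gamma(1 - g) * s^g as s -> 0, and with the balancing choice of
# p_eps below the ratio (1 - phi(s/u_eps))/p_eps should approach a * s^g.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

g, a, s = 0.6, 2.0, 1.5        # assumed tail index, limit constant, and test point s

def one_minus_phi(x):
    # 1 - phi(x) = x * integral_0^inf e^{-x t} (1 - G(t)) dt, split at t = 1
    part1 = (1.0 - np.exp(-x)) / x                                       # 1 - G(t) = 1 on [0, 1)
    part2, _ = quad(lambda t: np.exp(-x * t) * t ** (-g), 1.0, np.inf, limit=200)
    return x * (part1 + part2)

for eps in [1e-2, 1e-3, 1e-4]:
    u_eps = 1.0 / eps
    p_eps = Gamma(1.0 - g) * eps ** g / a      # so that L(u_eps)/(p_eps*u_eps^g) -> a
    print(eps, round(one_minus_phi(s / u_eps) / p_eps, 4), "target:", round(a * s ** g, 4))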
On the other hand, let us assume that (70) holds. Define the auxiliary function ϕ̃(s) = 1 − ϕ(1/s). The function ϕ̃(s) is monotonically decreasing. Due to D, there exists ε_n → 0 as n → ∞ such that p_{ε_n}/p_{ε_{n+1}} → 1 as n → ∞. Relation (70) implies that p_{ε_n}^{-1} ϕ̃(u_{ε_n} s) → ς(1/s) > 0 as n → ∞, for s > 0. Thus, by a known criterion (see, for example, Feller (1966)), the function ϕ̃(s) is regularly varying, i.e. ϕ̃(s) = s^ρ L(s), where L(s) is a slowly varying function, and ς(1/s) = a s^ρ, where −∞ < ρ < +∞ and a is a positive constant. These representations can be rewritten in the following equivalent form,

(102)    1 − ϕ(s) = s^γ L(1/s),   ς(s) = a s^γ,   s > 0,
where −∞ < γ = −ρ < ∞ and a > 0. Since the function e^{−ς(s)} = e^{−a s^γ} must be the Laplace transform of some non-negative and non-zero random variable, only the values 0 < γ ≤ 1 are admissible. In this case, e^{−a s^γ} is the Laplace transform of the non-negative stable law with parameter γ. The case γ ≤ 0 is obviously excluded. The case γ > 1 is also excluded since, for any non-negative and non-zero random variable ξ, the corresponding Laplace transform satisfies E e^{−sξ} ≥ e^{−sδ} P{ξ ≤ δ}, δ > 0. Therefore, E e^{−sξ} cannot decay in s at the super-exponential rate e^{−a s^γ}.
The condition G_γ follows from (102). To verify condition H_{a,γ}, we just repeat the calculations given in (101), now based on the assumed relation (70) and the proved representation (102). Relation (101) coincides with condition H_{a,γ} if s = 1.
Let us now show that (u) conditions Gγ and Ha,γ are equivalent to con-
ditions Eγ and Fa,γ .
We first consider the case γ = 1. To simplify the notation, let us write ξ_{nε} = ξ_n/u_ε. The case γ = 1 corresponds to the situation when the limiting process

    ξ(t) = at,   t ≥ 0,

is a non-random linear function. According to the central convergence criterion for sums of i.i.d. random variables (see, for example, Loève (1955)), the necessary and sufficient conditions for weak convergence of such sums (which, automatically, are equivalent to G_1 and H_{a,1}) take the following form:

I: p_ε^{-1} P{ξ_{1ε} > u} → 0 as ε → 0, u > 0;

J: p_ε^{-1} E ξ_{1ε} χ(ξ_{1ε} ≤ v) → a as ε → 0, for some v > 0.
Note that, under condition I, condition J either holds or does not hold simultaneously for all v > 0. Indeed, I implies that p_ε^{-1} E ξ_{1ε} χ(v′ < ξ_{1ε} ≤ v″) ≤ v″ p_ε^{-1} P{ξ_{1ε} > v′} → 0 as ε → 0, for any 0 < v′ < v″ < ∞. Taking this remark into account, we can transform conditions I and J into the following equivalent form,
I′: P{ξ_{1ε} > u} / E ξ_{1ε} χ(ξ_{1ε} ≤ u) → 0 as ε → 0, u > 0;

J′: p_ε^{-1} E ξ_{1ε} χ(ξ_{1ε} ≤ 1) → a as ε → 0.
It is easy to check that P{ξ_{1ε} > u} = 1 − G(u u_ε) and E ξ_{1ε} χ(ξ_{1ε} ≤ u) = ∫_0^{u u_ε} s G(ds) / u_ε. Thus, conditions I′ and J′ can be rewritten as

I′: u_ε (1 − G(u u_ε)) / ∫_0^{u u_ε} s G(ds) → 0 as ε → 0, u > 0;

J′: ∫_0^{u_ε} s G(ds) / (p_ε u_ε) → a as ε → 0.
Condition J′ is identical to F_{a,1}. Condition E_1 implies I′, as is easily seen by setting y = u u_ε in E_1. It remains to show that I′ implies E_1.
Since u_ε ∈ W, there exists a sequence 0 < ε_n → 0 as n → ∞ such that u_{ε_{n+1}}/u_{ε_n} → 1 as n → ∞. For any t, u > 0, we define n(t) = max(n : u u_{ε_n} ≤ t). By definition, (w1) u u_{ε_{n(t)}} ≤ t < u u_{ε_{n(t)+1}} for t > 0, and (w2) n(t) → ∞ as t → ∞. Thus, (w3) u_{ε_{n(t)+1}}/u_{ε_{n(t)}} → 1 as t → ∞. Using I′ and (w1) – (w3), we get

    t(1 − G(t)) / ∫_0^t s G(ds) ≤ u u_{ε_{n(t)+1}} (1 − G(u u_{ε_{n(t)}})) / ∫_0^{u u_{ε_{n(t)}}} s G(ds)
        = (u_{ε_{n(t)+1}} / u_{ε_{n(t)}}) · u u_{ε_{n(t)}} (1 − G(u u_{ε_{n(t)}})) / ∫_0^{u u_{ε_{n(t)}}} s G(ds) → 0  as t → ∞.   (103)
Let us now consider the case 0 < γ < 1. Due to the corresponding Tauberian theorem (see, for example, Feller (1966)), condition G_γ is equivalent to the condition

K_γ: 1 − G(t) ∼ t^{−γ} L(t) / Γ(1 − γ) as t → ∞.
Due to the corresponding theorem about regularly varying functions (see, for example, Feller (1966)), condition K_γ is equivalent to the following condition,

K′_γ: t[1 − G(t)] / ∫_0^t [1 − G(s)] ds → 1 − γ as t → ∞.
Since ∫_0^t [1 − G(s)] ds = t[1 − G(t)] + ∫_0^t s G(ds), t ≥ 0, condition K′_γ can be rewritten in the following equivalent form,

K″_γ: t[1 − G(t)] / ∫_0^t s G(ds) → (1 − γ)/γ as t → ∞.
Condition K″_γ is identical to condition E_γ. Let us show that, under condition E_γ, conditions F_{a,γ} and H_{a,γ} are equivalent. Indeed, K_γ and K″_γ (equivalent to E_γ) imply the following asymptotic relation,
    ∫_0^{u_ε} s G(ds) / (p_ε u_ε) ∼ (u_ε (1 − G(u_ε)) / (u_ε p_ε)) · γ/(1 − γ)
        ∼ (L(u_ε) / (p_ε u_ε^γ Γ(1 − γ))) · γ/(1 − γ) = (L(u_ε) / (p_ε u_ε^γ)) · γ/Γ(2 − γ)  as ε → 0.   (104)
This completes the proof. □
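Before moving on, the chain of equivalences K_γ ⇔ K′_γ ⇔ K″_γ (= E_γ) can be checked directly for the Pareto-type distribution 1 − G(t) = t^{−γ}, t ≥ 1, used above (an assumed example):

# Illustrative check of E_gamma (= K''_gamma) for 1 - G(t) = t^(-g), t >= 1.
# Here int_0^t s G(ds) = g*(t^(1-g) - 1)/(1 - g), so the ratio tends to (1 - g)/g.
g = 0.6    # assumed tail index gamma in (0, 1)

for t in [1e1, 1e3, 1e5, 1e7]:
    tail_term = t * t ** (-g)                        # t * (1 - G(t))
    trunc_mean = g * (t ** (1 - g) - 1) / (1 - g)    # int_0^t s G(ds)
    print(t, round(tail_term / trunc_mean, 4), "target:", round((1 - g) / g, 4))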
Remark 6. The proof of Lemma 6 can be applied, in the same way, to the asymptotic relation (48). Thus, under condition D, conditions E_γ and F_{a,γ} (with the function p_ε replaced by q_{iε} in these conditions), imposed on the distribution of the random variable β_i, are necessary and sufficient for relation (48) to hold.
Remark 7. As follows from the proof presented above, the assumption (x1) u_ε ∈ W can be omitted in the statements of necessity in Lemma 6; the assumption (x2) p_ε^{-1} ∈ W can be omitted in the statement of sufficiency in Lemma 6, for the case γ = 1; and the assumption (x3) u_ε, p_ε^{-1} ∈ W, i.e. condition D, can be omitted in the statement of sufficiency in Lemma 6, for the case 0 < γ < 1. Consequently, these assumptions can be omitted in the corresponding statements in Lemmas 4 – 5 and Theorem 1.
Remark 8. The simplest variant of the normalization functions is u_ε = ε^{-1}. In this case, due to condition H_{a,γ}, the function p_ε^{-1} can be taken in the form

    p_ε^{-1} = a ε^{−γ} / L(ε^{−1}).

Both functions belong to the class W. In this case, condition D can be omitted in Lemmas 4 – 6 and Theorem 1.
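For instance, for a Pareto-type distribution with 1 − G(t) = t^{−γ}, t ≥ 1 (the example used in the sketches above), one has L(t) ≡ Γ(1 − γ), and this prescription reads u_ε = ε^{−1}, p_ε^{-1} = a ε^{−γ}/Γ(1 − γ), which is precisely the balancing used in the numerical check of relation (101).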
Remark 9. As follows from the proof of Lemma 6, conditions E_γ and F_{a,γ} can be replaced in Lemmas 4 – 6 and Theorem 1 by the equivalent conditions G_γ and H_{a,γ}.
Remark 10. In the case 0 < γ < 1, condition E_γ is equivalent to the condition K_γ, which means that the distribution G(t) belongs to the domain of attraction of the stable law with parameter γ. In the case γ = 1, the condition E_1 is a necessary and sufficient condition for the distribution G(t) to belong to the domain of attraction of the degenerate law, as given, for example, in Feller (1966). Lemma 6 just unifies these conditions for both cases and gives a convenient form for the additional balancing condition that should connect the normalization function u_ε and the function p_ε^{-1} determining the number of summands in the sum ξ_ε(t).
Remark 11. The specific Markov property (1) possessed by the Markov renewal process (η_n, κ_n, ζ_n) implies, in an obvious way, that

(y)    P_i{κ_{ν_ε} > t} = ∑_{j∈X} P_i{η_{ν_ε−1} = j} P_j{κ_1 > t | ζ_1 ∈ D_ε}.

It follows from (y) that, under condition A, for any normalization u_ε,

(z)    the random variables κ_{ν_ε}/u_ε →^P 0 as ε → 0.
This relation implies that the first-rare-event times ξ_ε = ∑_{n=1}^{ν_ε} κ_n can be replaced in Theorem 1 by the modified first-rare-event times ξ′_ε = ∑_{n=1}^{ν_ε−1} κ_n and, moreover, by any random variable ξ″_ε such that ξ′_ε ≤ ξ″_ε ≤ ξ_ε.
References
1. Aldous, D.J. (1982) Markov chains with almost exponential hitting times. Stoch.
Proces. Appl., 13, 305–310
2. Alimov, D., Shurenkov, V.M. (1990a) Markov renewal theorems in triangular
array model. Ukr. Mat. Zh., 42, 1443–1448 (English translation in Ukr. Math.
J., 42, 1283–1288)
3. Alimov, D., Shurenkov, V.M. (1990b) Asymptotic behavior of terminating
Markov processes that are close to ergodic. Ukr. Mat. Zh., 42, 1701–1703 (Eng-
lish translation in Ukr. Math. J., 42 1535–1538)
4. Anisimov, V.V. (1971a) Limit theorems for sums of random variables on a
Markov chain, connected with the exit from a set that forms a single class in the
limit. Teor. Veroyatn. Mat. Stat., 4, 3–17 (English translation in Theory Probab.
Math. Statist., 4, 1–13)
5. Anisimov, V.V. (1971b) Limit theorems for sums of random variables in array
of sequences defined on a subset of states of a Markov chain up to the exit time.
Teor. Veroyatn. Mat. Stat., 4, 18–26 (English translation in Theory Probab.
Math. Statist., 4, 15–22)
6. Anisimov, V.V. (1988) Random Processes with Discrete Components. Vysshaya
Shkola and Izdatel’stvo Kievskogo Universiteta, Kiev
7. Anisimov, V.V., Zakusilo, O.K. and Donchenko, V.S. (1987) Elements of Queue-
ing and Asymptotical Analysis of Systems. Lybid’, Kiev
8. Brown, M., Shao, Y. (1987) Identifying coefficients in spectral representation for
first passage-time distributions. Prob. Eng. Inf. Sci., 1, 69–74
9. Chung, Kai Lai (1960) Markov Chains with Stationary Transition Probabilities.
Fundamental Principles of Mathematical Sciences, 104, Springer, Berlin
10. Darroch, J., Seneta, E. (1965) On quasi-stationary distributions in absorbing
discrete-time finite Markov chains. J. Appl. Probab., 2, 88–100
11. Darroch, J., Seneta, E. (1967) On quasi-stationary distributions in absorbing
continuous-time finite Markov chains. J. Appl. Probab., 4, 192–196
12. Elĕıko, Ya.I., Shurenkov, V.M. (1995) Transient phenomena in a class of matrix-
valued stochastic evolutions. Teor. Ǐmovirn. Mat. Stat., 52, 72–76 (English
translation in Theory Probab. Math. Statist., 52, 75–79)
13. Feller, W. (1966, 1971) An Introduction to Probability Theory and Its Applica-
tions, Vol. II. Wiley Series in Probability and Statistics, Wiley, New York
14. Gusak, D.V., Korolyuk, V.S. (1971) Asymptotic behaviour of semi-Markov
processes with a decomposable set of states. Teor. Veroyatn. Mat. Stat., 5, 43–50
(English translation in Theory Probab. Math. Statist., 5, 43–51)
15. Gut, A., Holst, L. (1984) On the waiting time in a generalized roulette game.
Statist. Probab. Lett., 2, No. 4, 229–239
16. Gyllenberg, M., Silvestrov, D.S. (1999) Quasi-stationary phenomena for semi-
Markov processes. In: Janssen, J., Limnios, N. (eds) Semi-Markov Models and
Applications. Kluwer, Dordrecht, 33–60
17. Gyllenberg, M., Silvestrov, D.S. (2000a) Nonlinearly perturbed regenerative
processes and pseudo-stationary phenomena for stochastic systems. Stoch. Pro-
ces. Appl., 86, 1–27
18. Gyllenberg, M., Silvestrov, D.S. (2000b) Cramér–Lundberg approximation for
nonlinearly perturbed risk processes. Insurance: Math. Econom., 26, 75–90
19. Hassin, R., Haviv, M. (1992) Mean passage times and nearly uncoupled Markov
chains. SIAM J. Discrete Math., 5, 386–397
20. Iglehart, D.L. (1969). Diffusion approximation in collective risk theory. J. Appl.
Probab., 6, 285–292
21. Kaplan, E.I. (1979) Limit theorems for exit times of random sequences with
mixing. Teor. Veroyatn. Mat. Stat., 21, 53–59 (English translation in Theory
Probab. Math. Statist., 21, 59–65)
22. Kartashov, N.V. (1987) Estimates for the geometric asymptotics of Markov times
on homogeneous chains. Teor. Veroyatn. Mat. Stat., 37, 66–77 (English transla-
tion in Theory Probab. Math. Statist., 37, 75–88)
23. Kartashov, N.V. (1991) Inequalities in Renyi’s theorem. Theory Probab. Math.
Statist., 45, 23–28
24. Kartashov, N.V. (1995) Strong Stable Markov Chains, VSP, Utrecht and TBiMC,
Kiev
25. Keilson, J. (1966) A limit theorem for passage times in ergodic regenerative
processes. Ann. Math. Statist., 37, 866–870
26. Keilson, J. (1978) Markov Chain Models – Rarity and Exponentiality. Applied
Mathematical Sciences, 28, Springer, New York
27. Kijima, M. (1997) Markov Processes for Stochastic Modelling. Stochastic Mod-
eling Series. Chapman & Hall, London
28. Kingman, J.F. (1963) The exponential decay of Markovian transition probabili-
ties. Proc. London Math. Soc., 13, 337–358
29. Korolyuk, D.V., Silvestrov D.S. (1983) Entry times into asymptotically reced-
ing domains for ergodic Markov chains. Teor. Veroyatn. Primen., 28, 410–420
(English translation in Theory Probab. Appl., 28, 432–442)
30. Korolyuk, D.V., Silvestrov D.S. (1984) Entry times into asymptotically receding
regions for processes with semi-Markov switchings. Teor. Veroyatn. Primen., 29,
539–544 (English translation in Theory Probab. Appl., 29, 558–563)
31. Korolyuk, V.S. (1969) On asymptotical estimate for time of a semi-Markov
process being in the set of states. Ukr. Mat. Zh., 21, 842–845
32. Korolyuk, V.S., Korolyuk, V.V. (1999) Stochastic Models of Systems. Ma-
thematics and its Applications, 469, Kluwer, Dordrecht
33. Korolyuk, V.S., Turbin, A.F. (1970) On the asymptotic behaviour of the oc-
cupation time of a semi-Markov process in a reducible subset of states. Teor.
Veroyatn. Mat. Stat., 2, 133–143 (English translation in Theory Probab. Math.
Statist., 2, 133–143)
34. Korolyuk, V.S., Turbin, A.F. (1976) Semi-Markov Processes and its Applica-
tions. Naukova Dumka, Kiev
35. Korolyuk, V.S., Turbin, A.F. (1978) Mathematical Foundations of the State
Lumping of Large Systems. Naukova Dumka, Kiev (English edition: Mathemat-
ics and its Applications, 264, Kluwer, Dordrecht (1993))
36. Korolyuk, V.S., Turbin, A.F. (1982) Markov Renewal Processes in Problems of
System Reliability. Naukova Dumka, Kiev
37. Kovalenko, I.N. (1973) An algorithm of asymptotic analysis of a sojourn time
of Markov chain in a set of states. Dokl. Acad. Nauk Ukr. SSR, Ser. A, No. 6,
422–426
38. Latouche, G., Louchard, G. (1978) Return times in nearly decomposable stochastic
processes. J. Appl. Probab., 15, 251–267
39. Loève, M. (1955, 1963) Probability Theory. Van Nostrand, Toronto and Prince-
ton
40. Masol, V.I., Silvestrov, D.S. (1972) Record values of the occupation time of a
semi-Markov process. Visnik Kiev. Univ., Ser. Mat. Meh., 14, 81–89
41. Motsa, A.I., Silvestrov, D.S. (1996) Asymptotics of extremal statistics and
functionals of additive type for Markov chains. In: Klesov, O., Korolyuk,
V., Kulldorff, G., Silvestrov, D. (eds) Proceedings of the First Ukrainian–
Scandinavian Conference on Stochastic Dynamical Systems, Uzhgorod, 1995.
Theory Stoch. Proces., 2(18), No. 1-2, 217–224
42. Pyke, R. (1999) The solidarity of Markov renewal processes. In: Janssen, J.,
Limnios, N. (eds) Semi-Markov Models and Applications. Kluwer, Dordrecht,
3–21
43. Pyke, R., Schaufele, R. (1964) Limit theorems for Markov renewal processes.
Ann. Math. Statist., 35, 1746–1764
44. Shurenkov, V.M. (1980a) Transition phenomena of the renewal theory in
asymptotical problems of theory of random processes 1. Mat. Sbornik, 112, 115–
132 (English translation in Math. USSR: Sbornik, 40, No. 1, 107–123 (1981))
45. Shurenkov, V.M. (1980b) Transition phenomena of the renewal theory in
asymptotical problems of theory of random processes 2. Mat. Sbornik, 112, 226–
241 (English translation in Math. USSR: Sbornik, 40, No. 2, 211–225 (1981))
46. Silvestrov, D.S. (1970) Limit theorems for semi-Markov processes and their
applications. 1, 2. Teor. Veroyatn. Mat. Stat., 3, 155–172, 173–194 (English
translation in Theory Probab. Math. Statist., 3, 159–176, 177–198)
47. Silvestrov, D.S. (1971) Limit theorems for semi-Markov summation schemes. 1.
Teor. Veroyatn. Mat. Stat., 4, 153–170 (English translation in Theory Probab.
Math. Statist., 4, 141–157)
48. Silvestrov, D.S. (1974) Limit Theorems for Composite Random Functions.
Vysshaya Shkola and Izdatel’stvo Kievskogo Universiteta, Kiev
49. Silvestrov, D.S. (1980) Semi-Markov Processes with a Discrete State Space. Li-
brary for an Engineer in Reliability, Sovetskoe Radio, Moscow
50. Silvestrov, D.S. (1981) Theorems of large deviations type for entry times of a se-
quence with mixing. Teor. Veroyatn. Mat. Stat., 24, 129–135 (English translation
in Theory Probab. Math. Statist., 24, 145–151)
51. Silvestrov, D.S. (1995) Exponential asymptotic for perturbed renewal equations.
Teor. Ǐmovirn. Mat. Stat., 52, 143–153 (English translation in Theory Probab.
Math. Statist., 52, 153–162)
52. Silvestrov, D.S. (2000a) Nonlinearly perturbed Markov chains and large devia-
tions for lifetime functionals. In: Limnios, N., Nikulin, M. (eds) Recent Advances
in Reliability Theory: Methodology, Practice and Inference. Birkhäuser, Boston,
135–144
53. Silvestrov, D.S. (2000b) Perturbed renewal equation and diffusion type approxi-
mation for risk processes. Teor. Ǐmovirn. Mat. Stat., 62, 134–144 (English trans-
lation in Theory Probab. Math. Statist., 62, 145–156)
54. Silvestrov D.S. (2004) Limit Theorems for Randomly Stopped Stochastic
Processes, Springer, London
55. Silvestrov, D.S., Abadov, Z.A. (1991) Uniform asymptotic expansions for expo-
nential moments of sums of random variables defined on a Markov chain and
distributions of entry times. 1. Teor. Veroyatn. Mat. Stat., 45, 108–127 (English
translation in Theory Probab. Math. Statist., 45, 105–120)
56. Silvestrov, D.S., Abadov, Z.A. (1993) Uniform representations of exponential
moments of sums of random variables defined on a Markov chain, and of distri-
butions of passage times. 2. Teor. Veroyatn. Mat. Stat., 48, 175–183 (English
translation in Theory Probab. Math. Statist., 48, 125–130)
57. Silvestrov, D.S., Poleščuk V.S. (1974) Cyclic conditions for the convergence of
sums of random variables defined on a recurrent Markov chain. Dokl. Acad. Nauk
Ukr. SSR, Ser. A, No. 9, 790–792
58. Silvestrov, D.S., Velikii, Yu.A. (1988) Necessary and sufficient conditions for
convergence of attainment times. In: Zolotarev, V.M., Kalashnikov, V.V. (eds)
Stability Problems for Stochastic Models. Trudy Seminara, VNIISI, Moscow,
129–137 (English translation in J. Soviet. Math., 57, 3317–3324 (1991))
59. Simon, H.A., Ando, A. (1961) Aggregation of variables in dynamic systems.
Econometrica, 29, 111–138
60. Turbin, A.F. (1971) On asymptotic behavior of time of a semi-Markov process
being in a reducible set of states. Linear case. Teor. Veroyatn. Mat. Stat., 4,
179–194 (English translation in Theory Probab. Math. Statist., 4, 167–182)
61. Zakusilo, O.K. (1972a) Thinning semi-Markov processes. Teor. Veroyatn. Mat.
Stat., 6, 54–59 (English translation in Theory Probab. Math. Statist., 6, 53–58)
62. Zakusilo, O.K. (1972b) Necessary conditions for convergence of semi-Markov
processes that thin. Teor. Veroyatn. Mat. Stat., 7, 65–69 (English translation in
Theory Probab. Math. Statist., 7, 63–66)
Department of Mathematics and Physics, Mälardalen University, Box
883, SE-721 23 Väster̊as, Sweden
E-mail address: dmitrii.silvestrov@mdh.se
E-mail address: myroslav.drozdenko@mdh.se