Theory of Stochastic Processes
Vol. 12 (28), no. 3–4, 2006, pp. 239–254
FREDRIK STENBERG, RAIMONDO MANCA, AND DMITRII SILVESTROV
SEMI-MARKOV REWARD MODELS FOR DISABILITY INSURANCE
A semi-Markov model for disability insurance is described. Statistical evidence for the relevance of the semi-Markov setting is given. Higher order semi-Markov backward reward models are introduced. Applications of these models to the profit-risk analysis of disability insurance contracts are considered.
1. Introduction
Several authors have discussed the use of Markov process techniques in insurance. We cite only some of them: Hoem (1969, 1988), Consael and Sonnenscheim (1978), Moller (1992), Norberg (1993), Waters (1984), Wolthuis (1994), and the books Haberman and Pitacco (1999) and Wolthuis (2003). For a wider bibliography, the reader can refer to Hoem (1988) and to the two cited books.
Markov models, in both discrete and continuous time settings, clearly work well in the insurance environment. However, there is a problem that cannot be solved within the framework of Markov models: transition times between states must have geometric or exponential distributions in discrete and continuous time Markov models, respectively. These distributions possess the so-called memoryless property, which may make them inappropriate in some applications.
An alternative is the use of more general semi-Markov models, for which transition times can be arbitrarily distributed on the positive half-line.
The first application of semi-Markov models in the insurance field was proposed by Janssen (1966). Hoem (1972) proposed a non-homogeneous semi-Markov model. Iosifescu Manu (1972) defined non-homogeneous semi-Markov processes in a different way and only from the theoretical side. Janssen and De Dominicis (1984) and De Dominicis and Manca (1984) gave applications of non-homogeneous semi-Markov processes following the Iosifescu-Manu approach. De Dominicis and Manca (1986) gave the definition of non-homogeneous semi-Markov rewards and applied them to disability insurance problems.
2000 Mathematics Subject Classification. Primary 62P05; Secondary 60K15.
Key words and phrases. Semi-Markov process, discrete time, higher order reward, disability, insurance.
This work was supported by the Graduate School in Mathematics and Computing (FMB), Sweden, Sparbanken Stiftelsen Nya, the Knowledge Foundation, MIUR 2004133218, and Università La Sapienza.
Semi-Markov insurance applications were given in pension schemes by Sahin and Balcer (1979), Balcer and Sahin (1986), De Dominicis, Manca, and Granata (1991), and Janssen and Manca (1997). Other applications of semi-Markov models in health insurance were given in CMIR12 (1991) and more recently in Janssen and Manca (2002, 2004). In particular, the last two papers develop the application of semi-Markov processes in insurance, giving great relevance to the reward processes. In Blasi, Janssen and Manca (2004), the multiple state insurance model was seen as an example of the concept of generalized homogeneous stochastic annuities. The definition of this financial tool was given by means of homogeneous semi-Markov reward processes. The latest results in this area can be found in Stenberg, Manca, and Silvestrov (2005) and Janssen and Manca (2006).
An insurance contract ensures the holder benefits in the future from some random events occurring at some random moments in time. We refer to the discounted cash flow that occurs between the counterparties as the discounted accumulated reward, where both the benefits and the premiums are considered to be rewards. When developing an insurance contract between the writer and the receiver, the following question must be asked: how shall the reward structure of the contract be determined? Different reward structures will lead to significant changes in such characteristics as the expectation, variance, skewness, and kurtosis of the implied discounted accumulated reward. For both parties it is of great importance to be able to handle not only the accumulated reward but also the risk of the insurance contracts.
In this paper, we start from previous results given in Janssen and Manca (2004, 2006), where expectations of accumulated rewards for semi-Markov disability insurance models were studied. We improve these results in several directions.
First, we develop a method for calculating not only expectations but also higher order moments of accumulated rewards for disability insurance contracts. This approach lets one compare contracts not only on the basis of expected accumulated rewards, as in the works mentioned above, but also perform a full-scale profit-risk analysis and comparison of insurance contracts based, for example, on various criteria combining expected rewards and variances of rewards for different contracts. We illustrate this by presenting examples based on real data.
It should be noted that such results related to higher order semi-Markov reward models have not been pointed out in the literature. The most general results of this type relate to simpler semi-Markov rewards accumulated up to the first hitting time of some domain and can be found in Silvestrov (1980a, 1980b, 1996). In that model, accumulated rewards depend only on initial states, and the moments of different orders satisfy systems of linear equations that are recursive in the order of the moments. In the model considered in the present paper, moments of different orders for rewards accumulated up to a fixed time depend not only on initial states but also on the time variable. This makes the corresponding systems of linear equations more complicated. Here, they are "doubly recursive", in the order of moments and in time. We show, however, that effective numerical matrix algorithms can be developed. These results, we think, have their own value beyond the framework of the present work.
Secondly, we pay more attention to the analysis of statistical evidence confirming the relevance of the semi-Markov setting for disability insurance applications.
2. Semi-Markov reward models
Let us consider a discrete time Markov renewal process $(\eta_n, \kappa_n)$, $n = 0, 1, \ldots$, that is, a homogeneous discrete time Markov chain with the phase space $E \times \{0, 1, \ldots\}$, where $E = \{1, \ldots, m\}$, and transition probabilities

$$Q_{ij}(t) = P\{\eta_{n+1} = j, \kappa_{n+1} \le t \mid \eta_n = i, \kappa_n = s\} = P\{\eta_{n+1} = j, \kappa_{n+1} \le t \mid \eta_n = i\}. \quad (1)$$
A semi-Markov process $\eta(t)$, $t \ge 0$, can be associated with the Markov renewal process $(\eta_n, \kappa_n)$. In this case, the random variables $\eta_n$ are interpreted as the positions of the process at the moments of jumps, $\kappa_n$ as the inter-jump times, $\tau_n = \kappa_1 + \cdots + \kappa_n$ as the moments of jumps, and $\nu(t) = \max(n : \tau_n \le t)$ as the number of jumps in the time interval $[0, t]$. The semi-Markov process is defined in the following way:

$$\eta(t) = \eta_{\nu(t)}, \quad t \ge 0. \quad (2)$$

It is also useful to introduce the process $\kappa(t)$, $t \ge 0$, which counts the time between the moment of the last jump occurring before moment $t$ and the moment $t$:

$$\kappa(t) = t - \tau_{\nu(t)}, \quad t \ge 0. \quad (3)$$
We shall also use the following transition characteristics, which have an obvious probabilistic interpretation:

$$p_{ij} = Q_{ij}(\infty), \quad b_{ij}(t) = Q_{ij}(t) - Q_{ij}(t-1), \quad Q_i(t) = \sum_{j \in E} Q_{ij}(t). \quad (4)$$

We exclude instant transitions, i.e., we assume that the probabilities $Q_i(0) = 0$, $i \in E$.

We admit the case where the semi-Markov process $\eta(t)$ has absorbing states. If a state $i$ is absorbing, then the transition probabilities $p_{ij} = 0$, $j \ne i$, and the probabilities $Q_{ii}(t) = 0$, $t = 0, 1, \ldots$.
Let us also define the conditional transition characteristics

$$b_{ij,u}(t) = \begin{cases} \dfrac{b_{ij}(u+t)}{1 - Q_i(u)}, & \text{if } 1 - Q_i(u) \ne 0, \\[4pt] 0, & \text{if } 1 - Q_i(u) = 0 \text{ and } t = 1, 2, \ldots \text{ or } j \ne i, \\[2pt] 1, & \text{if } 1 - Q_i(u) = 0 \text{ and } t = 0, \; j = i, \end{cases} \quad (5)$$

and

$$1 - Q_{i,u}(t) = \begin{cases} \dfrac{1 - Q_i(u+t)}{1 - Q_i(u)}, & \text{if } 1 - Q_i(u) \ne 0, \\[4pt] 0, & \text{if } 1 - Q_i(u) = 0. \end{cases} \quad (6)$$
We will consider two major types of rewards: permanence and instant rewards. The permanence reward $\psi_i(t)$ is associated with remaining continuously in state $i$ for a time not less than $t$, while the instant reward $\gamma_{ij}(t)$ is collected when a transition from state $i$ to state $j$ occurs after remaining continuously in state $i$ for exactly time $t$.

The simplest case is where the permanence reward in state $i$ at time $t$, $\psi_i(t) = \psi_i$, depends only on the current state $i$, and the instant reward $\gamma_{ij}(t) = \gamma_{ij}$ depends only on the state $i$ before the jump and the state $j$ after the jump.
Let $e^{-t\delta}$ denote the discount factor for $t$ periods with common fixed continuously compounded interest rate $\delta$. Let $\xi(s,t)$, $0 \le s \le t < \infty$, denote the accumulated discounted reward during the time interval $(s,t]$, defined by the following relation:

$$\xi(s,t) = \sum_{s < u \le t} e^{-u\delta}\,\psi_{\eta(u)}(\kappa(u)) + \sum_{s < \tau_n \le t} e^{-\tau_n \delta}\,\gamma_{\eta_{n-1},\eta_n}(\kappa_n). \quad (7)$$
Let us also denote by $\xi_{i,u}(s,t)$ a random variable whose distribution coincides with the conditional distribution of the random variable $\xi(s,t)$ given that at moment $s$ the process is in state $i$ and the last jump (before $s$) occurred at the moment $s-u$, if $1 - Q_i(u) > 0$, or $0$, if $1 - Q_i(u) = 0$. By definition, the random variables $\xi(t,t) = 0$ for all $t$ and, therefore, we apply the convention that $\xi_{i,u}(t,t) = 0$ for any $i, u, t$.
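To make relation (7) concrete, here is a minimal Monte Carlo sketch in Python. All model parameters below are hypothetical placeholders (not the paper's data); the sketch assumes the simplified factorization $Q_{ij}(t) = p_{ij}B_i(t)$, with sojourn times drawn independently of the destination state, the convention adopted for the case study in Section 3. Periods up to and including a jump epoch are paid at the pre-jump state's rate, matching the definition of $a_{i,u}$ in (10) below.

```python
# A minimal Monte Carlo sketch of the accumulated discounted reward (7);
# hypothetical two-state model, not the paper's estimates.
import numpy as np

rng = np.random.default_rng(0)
m, T = 2, 10
delta = np.log(1.03)                       # continuously compounded rate
p = np.array([[0.1, 0.9],                  # embedded chain; p[0,0] > 0 allows
              [0.0, 1.0]])                 # virtual transitions; state 1 absorbing
b = np.array([[0.4, 0.4, 0.2],             # sojourn pmf of state 0 on {1, 2, 3}
              [0.0, 0.0, 0.0]])            # state 1: no further jumps
psi = np.array([1000.0, 0.0])              # constant permanence rewards per period
gamma = np.zeros((m, m))                   # instant rewards (zero in this sketch)

def simulate_xi(i0, t_end):
    """One realisation of xi(0, t_end), starting in state i0 at a jump epoch."""
    xi, t, i = 0.0, 0, i0
    while t < t_end:
        if b[i].sum() == 0:                # absorbing state: sit out the horizon
            sojourn, j = t_end - t + 1, i
        else:                              # sojourn drawn independently of j
            sojourn = rng.choice(len(b[i]), p=b[i]) + 1
            j = rng.choice(m, p=p[i])
        for u in range(t + 1, min(t + sojourn, t_end) + 1):
            xi += np.exp(-u * delta) * psi[i]      # discounted permanence reward
        t += sojourn
        if t <= t_end:
            xi += np.exp(-t * delta) * gamma[i, j] # instant reward at the jump
        i = j
    return xi

sample = [simulate_xi(0, T) for _ in range(20000)]
print(np.mean(sample), np.var(sample))     # to compare with V^(1), V^(2) below
```

The same hypothetical model is reused in the sketches after Theorems 1 and 2, whose recursions the Monte Carlo estimates should reproduce up to sampling error.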
Let us use the symbol $\alpha \stackrel{d}{=} \beta$ to denote that two random variables $\alpha$ and $\beta$ have the same distribution.

Lemma 1. The accumulated discounted reward process $\xi(s,t)$ satisfies the following stochastic relation, for $i, j \in E$, $u = 0, 1, \ldots$, $0 \le s \le t \le T < \infty$:

$$\xi_{i,u}(s,t) \stackrel{d}{=} e^{-\delta s}\,\xi_{i,u}(0, t-s). \quad (8)$$

Lemma 1 shows that we can reduce consideration to the study of the random variables $\xi_{i,u}(0,t)$.
The objects of our interest are the power moments

$$V^{(k)}_{i,u}(t) = E[\xi_{i,u}(0,t)]^k, \quad 0 \le t \le T, \; k = 1, 2, \ldots. \quad (9)$$

To simplify formulas, we introduce the notations

$$a_{i,u}(t) = \sum_{s=1}^{t} \psi_i(s+u)\,e^{-\delta s}, \quad \tilde{a}_{ij,u}(t) = a_{i,u}(t) + e^{-\delta t}\gamma_{ij}(t+u). \quad (10)$$
We shall also use more compact matrix notation. Let $\mathbf{V}^{(k)}_u(t)$ denote the $m \times 1$ vector with $E[\xi_{i,u}(t)]^k$ at position $i$; $\mathbf{D}_u(t)$ the $m \times m$ diagonal matrix with entries $1 - Q_{i,u}(t)$ on the main diagonal and zeros elsewhere; $\mathbf{B}_u(t)$ the $m \times m$ matrix with element $b_{ij,u}(t)$ in position $\langle i,j \rangle$; $\mathbf{A}^{(k)}_u(t)$ the diagonal matrix with $(a_{i,u}(t))^k$ on the main diagonal and zeros elsewhere; $\tilde{\mathbf{A}}^{(k)}_u(t)$ the $m \times m$ matrix with element $(\tilde{a}_{ij,u}(t))^k$ in position $\langle i,j \rangle$; and $\mathbf{1}_m$ the $m \times 1$ vector with all components equal to 1.
Theorem 1. The vectors of expected discounted accumulated rewards $\mathbf{V}^{(1)}_u(t)$, $u, t = 0, \ldots, T$, are uniquely determined by the following recursion relation:

$$\mathbf{V}^{(1)}_u(t) = \mathbf{D}_u(t)\mathbf{A}^{(1)}_u(t)\mathbf{1}_m + \sum_{s=1}^{t} (\mathbf{B}_u(s) \cdot \tilde{\mathbf{A}}^{(1)}_u(s))\mathbf{1}_m + \sum_{s=1}^{t} e^{-\delta s}\mathbf{B}_u(s)\mathbf{V}^{(1)}_0(t-s), \quad (11)$$

where $\cdot$ denotes the entrywise (Hadamard) product of matrices. This relation should be used in the following recursion order: (a) for $u = 0$ sequentially for $t = 0, 1, \ldots, T$; (b) for every $u = 1, \ldots, T$ sequentially for $t = 0, 1, \ldots, T$.
Proof. Let us introduce the random variable $\vartheta_{i,u}$, which has the same distribution as the time to the next jump given that the process $\eta(s)$ has already spent time $u$ in state $i$, and let $\zeta_{i,u}$ denote the corresponding state the process $\eta(s)$ ends up in after the jump. According to the definitions, these random variables have the following distributions:

$$P\{\vartheta_{i,u} > t\} = 1 - Q_{i,u}(t), \quad (12)$$

and

$$P\{\vartheta_{i,u} = t, \zeta_{i,u} = j\} = b_{ij,u}(t). \quad (13)$$
Let us construct a stochastic relation for the random variables $\xi_{i,u}(0,t)$. We have to consider two cases: either no jump occurs before moment $t$, or at least one jump occurs between moment $0$ and moment $t$. Using indicator variables $\chi(\cdot)$ for random events, we can write down the following stochastic relation:

$$\begin{aligned} \xi_{i,u}(0,t) \stackrel{d}{=}{}& \chi(\vartheta_{i,u} > t)\,a_{i,u}(t) + \sum_{j \in E}\sum_{s=1}^{t} \chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)\,\tilde{a}_{ij,u}(s) \\ &+ \sum_{j \in E}\sum_{s=1}^{t} e^{-\delta s}\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)\,\xi_{j,0}(0, t-s), \quad i \in E, \; t = 0, \ldots, T, \end{aligned} \quad (14)$$

where the random variables $\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)$ and $\xi_{j,0}(0, t-s)$ are independent.
The first term in (14) represents the discounted reward received for remaining in state $i$ for $t$ moments. The second term is the sum over all possible times when the first jump can occur and the corresponding reward for remaining in state $i$ for this amount of time, including the possible instant reward at the jump. The third term is due to the fact that the process restarts and possesses the Markov property at moments of jumps. All terms are conditioned with the help of the indicator random variables denoting when and where the first jump will occur.
The corresponding relation for expectations can now be obtained using this stochastic relation and the independence relationships mentioned above:

$$E[\xi_{i,u}(0,t)] = (1 - Q_{i,u}(t))\,a_{i,u}(t) + \sum_{j \in E}\sum_{s=1}^{t} b_{ij,u}(s)\,\tilde{a}_{ij,u}(s) + \sum_{j \in E}\sum_{s=1}^{t} e^{-\delta s}\,b_{ij,u}(s)\,E[\xi_{j,0}(0, t-s)], \quad i \in E, \; u, t = 0, \ldots, T. \quad (15)$$

This relation, rewritten in matrix form, is equivalent to relation (11). □
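For readers who wish to implement Theorem 1, here is a minimal numerical sketch of recursion (11), using the same hypothetical two-state model as in the Monte Carlo sketch above; it is an illustration under those assumptions, not the paper's implementation.

```python
# A minimal sketch of recursion (11); hypothetical model with
# Q_ij(t) = p_ij B_i(t), constant permanence rewards, no instant rewards.
import numpy as np

m, T = 2, 10
delta = np.log(1.03)
p = np.array([[0.1, 0.9], [0.0, 1.0]])        # embedded chain, state 1 absorbing
pmf = np.zeros((m, 2 * T + 2))
pmf[0, 1:4] = [0.4, 0.4, 0.2]                 # sojourn pmf of state 0 on {1,2,3}
cdf = np.cumsum(pmf, axis=1)                  # B_i(t); state 1 never jumps
psi = np.array([1000.0, 0.0])
gamma = np.zeros((m, m))

Q = p[:, :, None] * cdf[:, None, :]           # Q_ij(t)
Qi = Q.sum(axis=1)                            # Q_i(t)
bij = np.diff(Q, axis=2, prepend=0.0)         # b_ij(t) = Q_ij(t) - Q_ij(t-1)

def b_u(u, s):                                # b_{ij,u}(s), relation (5); the
    surv = 1.0 - Qi[:, u]                     # degenerate t = 0 branch is not
    out = np.zeros((m, m))                    # needed since we only call s >= 1
    ok = surv > 0
    out[ok] = bij[ok, :, u + s] / surv[ok, None]
    return out

def surv_u(u, t):                             # 1 - Q_{i,u}(t), relation (6)
    surv = 1.0 - Qi[:, u]
    return np.where(surv > 0, (1.0 - Qi[:, u + t]) / np.where(surv > 0, surv, 1.0), 0.0)

def a_u(u, t):                                # a_{i,u}(t), relation (10); with
    s = np.arange(1, t + 1)                   # constant psi it does not depend on u
    return (psi[:, None] * np.exp(-delta * s)).sum(axis=1)

V1 = np.zeros((T + 1, T + 1, m))              # V1[u, t, i] = V^(1)_{i,u}(t)
for u in range(T + 1):                        # recursion order (a), then (b)
    for t in range(T + 1):
        V1[u, t] = surv_u(u, t) * a_u(u, t)   # D_u(t) A_u(t) 1_m
        for s in range(1, t + 1):
            B = b_u(u, s)
            At = a_u(u, s)[:, None] + np.exp(-delta * s) * gamma  # ã_{ij,u}(s)
            V1[u, t] += (B * At).sum(axis=1)                      # (B · Ã) 1_m
            V1[u, t] += np.exp(-delta * s) * (B @ V1[0, t - s])
print(V1[0, T])                               # E[xi_{i,0}(0, T)], i = 0, 1
```

Running the Monte Carlo sketch on the same model should reproduce `V1[0, T]` for the starting state up to sampling error.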
The following theorem gives an analogous result for higher order moments.
Theorem 2. The vectors of $k$-order moments of discounted accumulated rewards $\mathbf{V}^{(k)}_u(t)$, $u, t = 0, \ldots, T$, are uniquely determined, for $k = 1, 2, \ldots$, by the following recursion relation:

$$\begin{aligned} \mathbf{V}^{(k)}_u(t) ={}& \mathbf{D}_u(t)\mathbf{A}^{(k)}_u(t)\mathbf{1}_m + \sum_{s=1}^{t} (\mathbf{B}_u(s) \cdot \tilde{\mathbf{A}}^{(k)}_u(s))\mathbf{1}_m \\ &+ \sum_{s=1}^{t}\sum_{l=1}^{k-1} \binom{k}{l} e^{-\delta s(k-l)} (\mathbf{B}_u(s) \cdot \tilde{\mathbf{A}}^{(l)}_u(s))\mathbf{V}^{(k-l)}_0(t-s) \\ &+ \sum_{s=1}^{t} e^{-k\delta s}\mathbf{B}_u(s)\mathbf{V}^{(k)}_0(t-s). \end{aligned} \quad (16)$$

This relation should be used in the following recursion order:
(a) for $k = 1$ and $u = 0$ sequentially for $t = 0, 1, \ldots, T$;
(b) for $k = 1$ and every $u = 1, \ldots, T$ sequentially for $t = 0, 1, \ldots, T$;
(c) for $k = 2$ and $u = 0$ sequentially for $t = 0, 1, \ldots, T$;
(d) for $k = 2$ and every $u = 1, \ldots, T$ sequentially for $t = 0, 1, \ldots, T$;
(e) and so on for every higher moment order $k > 2$: for $u = 0$ sequentially for $t = 0, 1, \ldots, T$,
(f) and then for every $u = 1, \ldots, T$ sequentially for $t = 0, 1, \ldots, T$.
Proof. The following stochastic relation can be written down for the random variables $\xi^k_{i,u}(0,t)$:

$$\begin{aligned} \xi^k_{i,u}(0,t) \stackrel{d}{=}{}& \chi(\vartheta_{i,u} > t)(a_{i,u}(t))^k + \sum_{j \in E}\sum_{s=1}^{t} \chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)(\tilde{a}_{ij,u}(s))^k \\ &+ \sum_{j \in E}\sum_{s=1}^{t}\sum_{l=1}^{k-1} \binom{k}{l} e^{-\delta s(k-l)}(\tilde{a}_{ij,u}(s))^{l}\,\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)\,\xi^{k-l}_{j,0}(0, t-s) \\ &+ \sum_{j \in E}\sum_{s=1}^{t} e^{-\delta s k}\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)\,\xi^k_{j,0}(0, t-s), \quad i \in E, \; u, t = 0, \ldots, T, \end{aligned} \quad (17)$$

where the random variables $\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)$ and $\xi_{j,0}(0, t-s)$ are independent.

Here we have used the facts that the products of indicators satisfy $\chi(\vartheta_{i,u} > t)\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j) = 0$ for $s \le t$, and $\chi(\vartheta_{i,u} = s, \zeta_{i,u} = j)\chi(\vartheta_{i,u} = s', \zeta_{i,u} = j') = 0$ if $j \ne j'$ or $s \ne s'$, while powers of these indicators coincide with the indicators themselves.
The corresponding relation for expectations can now be obtained using this stochastic relation and the independence relationships mentioned above:

$$\begin{aligned} E[\xi_{i,u}(0,t)]^k ={}& (1 - Q_{i,u}(t))(a_{i,u}(t))^k + \sum_{j \in E}\sum_{s=1}^{t} b_{ij,u}(s)(\tilde{a}_{ij,u}(s))^k \\ &+ \sum_{j \in E}\sum_{s=1}^{t}\sum_{l=1}^{k-1} \binom{k}{l} e^{-\delta s(k-l)}\,b_{ij,u}(s)(\tilde{a}_{ij,u}(s))^{l}\,E[\xi_{j,0}(0, t-s)]^{k-l} \\ &+ \sum_{j \in E}\sum_{s=1}^{t} e^{-\delta s k}\,b_{ij,u}(s)\,E[\xi_{j,0}(0, t-s)]^k, \quad i \in E, \; u, t = 0, \ldots, T. \end{aligned} \quad (18)$$

This relation, rewritten in matrix form, is equivalent to relation (16). □
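To show how relation (16) and its recursion order translate into an algorithm, the following sketch extends the previous one; it reuses the hypothetical model arrays `m`, `T`, `delta`, `gamma` and the helper functions `b_u`, `surv_u`, `a_u` defined in the sketch after Theorem 1, so it is again an illustration, not the paper's implementation.

```python
# A sketch of the k-th moment recursion (16); run after the Theorem 1 sketch,
# whose model arrays and helper functions are reused here.
import numpy as np
from math import comb

K = 2                                         # compute moments up to order 2
V = np.zeros((K + 1, T + 1, T + 1, m))        # V[k, u, t, i] = V^(k)_{i,u}(t)
V[0] = 1.0                                    # zeroth moment is identically 1
for k in range(1, K + 1):                     # recursion order (a)-(f)
    for u in range(T + 1):
        for t in range(T + 1):
            V[k, u, t] = surv_u(u, t) * a_u(u, t) ** k
            for s in range(1, t + 1):
                B = b_u(u, s)
                At = a_u(u, s)[:, None] + np.exp(-delta * s) * gamma
                V[k, u, t] += (B * At ** k).sum(axis=1)
                for l in range(1, k):         # mixed terms of relation (16)
                    V[k, u, t] += (comb(k, l) * np.exp(-delta * s * (k - l))
                                   * ((B * At ** l) @ V[k - l, 0, t - s]))
                V[k, u, t] += np.exp(-k * delta * s) * (B @ V[k, 0, t - s])

mean, second = V[1, 0, T], V[2, 0, T]
print(mean, second - mean ** 2)               # E and Var of xi_{i,0}(0, T)
```

The variance printed here is exactly what feeds the profit-risk characteristics used in Section 3.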
3. Disability insurance
In the papers by Janssen and Manca (2002, 2004) it is shown how to apply continuous time semi-Markov reward processes in multiple life insurance. In the paper by Blasi, Janssen and Manca (2004), a real case study based on historical disability data is described. We extend these studies in the directions listed in the introduction.

The historical data give the disability history of 840 persons with silicosis who lived in Campania, a region of Italy. Each individual with silicosis was examined by a doctor. The doctor determined approximately the degree of disability, in percent, for each patient, ranging from 0% to 100%. Depending on the degree of disability, the policy maker has determined 5 possible states, which differ by reward payments. A "death" state, in which no rewards are paid, should also be added to the model. These states are categorized in Table 1.
Given the 6 states defined above, we attach a reward policy to the disability degree. The rewards given in this example represent the amount of money paid per time period to the disabled person as a function of his degree of illness; the duration of one time period is a year. Table 1 also defines two variants of the rewards (per year, in Euro) used for two different insurance contracts.
Table 1. Disability states and rewards.

State   Disability degree   Contract I   Contract II
1       [0%, 10%)           1000         1200
2       [10%, 30%)          1500         1600
3       [30%, 50%)          2000         2000
4       [50%, 70%)          2500         2400
5       [70%, 100%]         3000         2800
6       death               0            0
This subdivision is similar to the ones used in Yntema (1962) and Janssen (1966), who were the first to apply, respectively, the Markov and semi-Markov environments to disability problems. A transition between states occurs after a visit to the doctor, which can be seen as the check deciding which state the disabled person is in. This naturally gives an example where virtual transitions are possible, i.e., where the individual has become neither sufficiently better nor sufficiently worse to change state.

The data mentioned above give, for every individual, the durations of the time intervals between visits to the doctor and the states of disability determined by the doctor during these visits. It should be taken into account that an actual change in the rewards paid to the individual, resulting from one or several visits to the doctor during the same period of time, can occur only at the end of this period.
This means that the reward process can be considered in discrete time, and times between transitions should be counted in numbers of years during which the individual was classified and paid according to a given disability class.

A specific feature of the original data is that every observed trajectory ends with a visit to a doctor at which either a new disability state was determined, or a new degree of disability in percent was determined that did not cause a change of disability state (a virtual transition), or no actual new state or degree of disability was determined. A part of the latter cases should be interpreted as transitions to the "death" state.
The original data mentioned above can be transformed into data representing stepwise realisations of the discrete time process described above. We use a semi-Markov model to describe this process. Therefore, we a priori accept the Markov property at moments of transitions. In order to keep a reasonable relation between the original "sample size" and the number of parameters of the model that must be estimated from these data, we also a priori accept a simple semi-Markov model in which the distributions of transition (sojourn) times depend only on the current state of the process and not on the "destination" state into which the process moves after the transition.
The difference between the semi-Markov model described above and a Markov model lies in the assumption about the distributions of sojourn times. We do not impose a geometric form on these distributions but prefer to use distributions estimated in a non-parametric way from the sample data.

In this case, the semi-Markov model is determined by a $6 \times 6$ matrix of transition probabilities $P = \|p_{ij}\|$ for the so-called embedded Markov chain controlling transitions in the phase space, and 5 discrete distributions $b_i = \langle b_i(1), \ldots, b_i(T), \bar{b}_i(T+1) \rangle$. Here $p_{ij}$ is the probability of a transition from disability state $i$ to disability state $j$; $b_i(t)$ is the probability that such a transition occurs after $t$ years; $\bar{b}_i(T+1) = 1 - b_i(1) - \cdots - b_i(T)$ is the corresponding tail probability; and $T$ is the time horizon of the actual reward studies. In our example, we take $T = 10$.
Note that state 6 is an absorbing state and, therefore, the probabilities $p_{6i} = 0$, $i = 1, \ldots, 5$, and the probabilities $b_6(t) = 0$, $t = 1, 2, \ldots$.
The transition matrix $P$ is estimated from the original data in the following way. First we construct a matrix counting the numbers $n_{ij}$ of jumps between the states in the observed trajectories; this is done by adding 1 in position $\langle i,j \rangle$ each time a person who was in state $i$ makes a visit to the doctor and the doctor assigns a disability degree that corresponds to state $j$. By normalizing these numbers by the corresponding numbers $n_i = \sum_j n_{ij}$ of visits in state $i$, we get sample estimates $\hat{p}'_{ij}$ of the conditional transition probabilities $p'_{ij}$ for states $i, j = 1, \ldots, 5$. Here $p'_{ij}$ is the probability of a transition from state $i$ to state $j$ conditioned on the event that the disabled person did not die during the corresponding sojourn time in state $i$. To get estimates $\hat{p}_{ij}$ of the probabilities $p_{ij}$, one should multiply the sample estimates $\hat{p}'_{ij}$ by the quantities $1 - \hat{p}_{i6}$, where $\hat{p}_{i6}$ is a sample estimate of the probability of death during the sojourn time in state $i$.
Unfortunately, the original data do not contain exact information about death cases. These cases are hidden among the cases where no actual new disability state was determined during the final visit to a doctor. We estimate the transition probabilities $p_{i6}$, $i = 1, \ldots, 5$, in the following way. First, the mean values $m_i$ of the sojourn times are estimated by the standard sample means $\hat{m}_i$ for every disability state $i = 1, \ldots, 5$. Here the actual observed values of the sojourn times create the corresponding samples (recall that the a priori assumption of independence of these times from the destination states was accepted). Then the transition probabilities $p_{i6}$ are estimated by the products $\hat{p}_{i6} = p\hat{m}_i$, where $p$ is an average one-year death probability for a reasonable age range for the given type of insurance contract. For simplicity, the age range is chosen as 50 years (from 20 to 70 years), and the natural value $p = 0.02$ is used in this case. In fact, some demographic data related to the actual historical period of observation and the given geographical region could be used, but this would require a special analysis which is beyond the goal of this paper.
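The estimation procedure just described is straightforward to mechanize. Below is a minimal sketch; the jump counts and sojourn samples are hypothetical placeholders (not the study data), while $p = 0.02$ is the average one-year death probability used in the text.

```python
# A minimal sketch of the estimation of P described above; the counts and
# sojourn samples are hypothetical placeholders, not the study data.
import numpy as np

p_year = 0.02                                # average one-year death probability
n_counts = np.array([[0, 37, 0, 0, 0],       # n_ij: observed jumps i -> j
                     [0, 200, 126, 6, 2],    # (placeholder values)
                     [0, 7, 280, 115, 5],
                     [0, 7, 12, 180, 111],
                     [0, 0, 0, 2, 100]], dtype=float)
sojourns = {i: np.array([1, 2, 2, 3, 5]) for i in range(5)}  # placeholder samples

n_i = n_counts.sum(axis=1)                   # visits observed in each state
p_cond = n_counts / n_i[:, None]             # \hat p'_ij, conditional on survival
m_hat = np.array([sojourns[i].mean() for i in range(5)])     # \hat m_i
p_death = np.minimum(1.0, p_year * m_hat)    # \hat p_i6 = p * \hat m_i
P_hat = np.zeros((6, 6))
P_hat[:5, :5] = p_cond * (1 - p_death)[:, None]  # \hat p_ij = \hat p'_ij (1 - \hat p_i6)
P_hat[:5, 5] = p_death
P_hat[5, 5] = 1.0                            # state 6 ("death") is absorbing
print(P_hat.round(4))                        # rows sum to one by construction
```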
Table 2 represents the value of the estimate $\hat{P}$ of the transition matrix $P$ obtained from the sample data in the way described above.
Table 2. Estimated transition matrix P̂ = ||p̂ij||.
j = 1 j = 2 j = 3 j = 4 j = 5 j = 6
i = 1 0.0000 0.9489 0.0000 0.0000 0.0000 0.0511
i = 2 0.0000 0.5532 0.3483 0.0154 0.0051 0.0779
i = 3 0.0000 0.0156 0.6376 0.2628 0.0104 0.0736
i = 4 0.0000 0.0211 0.0352 0.5354 0.3311 0.0772
i = 5 0.0000 0.0000 0.0000 0.0183 0.9132 0.0685
i = 6 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000
The transition matrix for the embedded Markov chain can be visualized; see Figure 1, which shows by arrows the transitions with positive transition probabilities.
The probabilities $b_i(t)$, $t = 1, \ldots, T$, and $\bar{b}_i(T+1)$ are estimated with the use of the natural frequency estimates $\hat{b}_i(t)$ and $\hat{\bar{b}}_i(t)$: in the first case, the quotients of the numbers $n_i(t)$ of cases in which the sojourn time in disability state $i$ took exactly the value $t$, and in the second case greater than $t$, respectively, normalized by the total number $n_i$ of visits into state $i$ observed in the sample data.

Table 3 represents the estimates of the distributions $b_i$, $i = 1, \ldots, 5$, obtained from the sample data in the way described above.
In order to confirm the relevance of the semi-Markov setting, we use the p-value technique for checking the "semi-Markov" hypothesis, which means that the sojourn times are not geometrically distributed. As was mentioned above, a natural estimator for the probability $b_i(t)$ is $\hat{b}_i(t) = n_i(t)/n_i$, where $n_i(t)$ is the number of cases in our sample data in which the sojourn time in state $i$ took the value $t$.
[Figure 1. Visualization of the transition matrix for the embedded Markov chain: arrows indicate the transitions between states 1-6 with positive transition probabilities.]
Table 3. Estimated distributions b̂i.
t = 1 t = 2 t = 3 t = 4 t = 5 t = 6 t = 7 t = 8 t = 9 t = 10 t > 10
b̂1(t) 0.0000 0.4444 0.5556 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
b̂2(t) 0.0855 0.2124 0.2080 0.1799 0.1091 0.1224 0.0265 0.0162 0.0177 0.0029 0.0192
b̂3(t) 0.0464 0.2781 0.2362 0.1965 0.0905 0.0817 0.0265 0.0155 0.0044 0.0066 0.0177
b̂4(t) 0.0323 0.2452 0.2258 0.2129 0.1097 0.1032 0.0258 0.0065 0.0129 0.0065 0.0194
b̂5(t) 0.0164 0.4098 0.2295 0.2131 0.0328 0.0328 0.0164 0.0164 0.0000 0.0164 0.0164
Note that the quantities $n_i(t)$ are random numbers. Under the "geometric" hypothesis, the equality $b_i(1)(1 - b_i(1)) = b_i(2)$ must hold. Thus, the statistic $\hat{b}_i(1)(1 - \hat{b}_i(1)) - \hat{b}_i(2)$ should take small values for sample sizes large enough. Moreover, by evaluating the asymptotic variance of this statistic and by applying some results for statistics with random sample size, it can be shown that the statistic

$$\sqrt{n_i}\,\bigl(\hat{b}_i(1)(1 - \hat{b}_i(1)) - \hat{b}_i(2)\bigr)\Big/\sqrt{b_i(1)(1 - b_i(1))^2(2 - b_i(1))}$$

should have asymptotically the standard normal distribution. Since the probability $b_i(1)$ is unknown, it can be replaced by its estimate $\hat{b}_i(1)$. So, the statistic

$$\hat{S}_i = \sqrt{n_i}\,\bigl(\hat{b}_i(1)(1 - \hat{b}_i(1)) - \hat{b}_i(2)\bigr)\Big/\sqrt{\hat{b}_i(1)(1 - \hat{b}_i(1))^2(2 - \hat{b}_i(1))}$$

should have, under the "geometric" hypothesis, approximately the standard normal distribution. The corresponding proof can be accomplished, for example, with the use of results given in Silvestrov (2004).
The proposition about the asymptotic normality of the statistic $\hat{S}_i$ can be utilized in the following way. If $\hat{s}_i$ is the value of this statistic obtained from the sample data, one can calculate the quantity $2(1 - F(|\hat{s}_i|))$, where $F(s)$ is the standard normal distribution function. If this value is small enough, this is evidence for rejecting the hypothesis that the distribution $b_i$ is geometric, i.e., evidence in favour of the "semi-Markov" hypothesis.
This method, applied, for example, to disability state 2, gives the values 58, 144, and 678 for the statistics $n_2(1)$, $n_2(2)$, and $n_2$, respectively. Consequently, $\hat{s}_2 = -9.440$ and $2(1 - F(|\hat{s}_2|)) \approx 3.7 \cdot 10^{-21}$. According to the remarks above, this value can be interpreted as strong evidence in favour of the semi-Markov hypothesis.
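The computation of $\hat{S}_i$ and the corresponding p-value can be reproduced directly from the counts just reported. The following short check uses only the standard library; note that $2(1 - F(x)) = \mathrm{erfc}(x/\sqrt{2})$ for the standard normal $F$.

```python
# A check of the test statistic \hat S_i, using the counts for disability
# state 2 reported in the text: n2(1) = 58, n2(2) = 144, n2 = 678.
from math import sqrt, erfc

def geometric_test(n1, n2_count, n):
    """Return (s_hat, two-sided p-value) for the 'geometric' hypothesis."""
    b1, b2 = n1 / n, n2_count / n              # \hat b_i(1), \hat b_i(2)
    num = sqrt(n) * (b1 * (1 - b1) - b2)
    den = sqrt(b1 * (1 - b1) ** 2 * (2 - b1))
    s_hat = num / den
    p_value = erfc(abs(s_hat) / sqrt(2))       # = 2(1 - F(|s_hat|))
    return s_hat, p_value

print(geometric_test(58, 144, 678))            # approx (-9.440, 3.7e-21)
```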
Let $e^{-t\delta}$ denote the discount factor for $t$ periods with common fixed continuously compounded interest rate $\delta$. Recall that we denote by $\xi_{i,u}(s,t)$, $s \le t$, the accumulated discounted reward during the time interval $(s,t]$ given that at time $s$ the process is in state $i \in E$ and the previous jump occurred $u$ moments ago. In Section 2, we developed a method for finding the higher order moments $E[\xi_{i,u}(s,t)]^k$, $k = 0, 1, \ldots$, $0 \le s \le t \le T$, as functionals of the transition characteristics of the corresponding semi-Markov process.
The expectations $E[\xi_{i,u}(s,t)]$ and variances $Var[\xi_{i,u}(s,t)]$ are objects of special interest. These characteristics can be used in a profit-risk analysis and comparison of insurance contracts, which has obvious advantages over a profit analysis and comparison based only on expectations of accumulated rewards. For example, one can use the combined risk-profit characteristic

$$C^{(a)}_{i,u}(s,t) = E[\xi_{i,u}(s,t)] - a\sqrt{Var[\xi_{i,u}(s,t)]}$$

with a reasonably chosen weighting parameter $a$ for the comparison of contracts over the time interval $(s,t]$. For example, the values 1, 2, and 3 for the parameter $a$ are inspired by the so-called one-, two-, and three-sigma rules.
Table 4 presents the characteristics $E[\xi_{1,0}(0,t)]$, $Var[\xi_{1,0}(0,t)]$, and $C^{(3)}_{1,0}(0,t)$ for the two different contracts with rewards given in Table 1. The simple interest rate of 3% per year is chosen and transformed into the continuously compounded interest rate $\delta = \log(1 + 0.03)$.
It is readily seen from the data given in Table 4 that, for the case $i = 1$, $u = 0$, contract I has a slightly better expectation of accumulated rewards but a significantly worse variance of accumulated rewards than contract II over the time interval $(0, 10]$, while the combined risk-profit characteristic $C^{(3)}_{1,0}(0,10)$ takes a better value for contract II.

A comparison of these contracts made on the simpler basis of the expectations of accumulated rewards $E[\xi_{1,0}(0,10)]$ would recommend choosing contract I, while a more accurate comparison taking risk factors into account and based on the combined risk-profit characteristic $C^{(3)}_{1,0}(0,10)$ would recommend choosing contract II.
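As a quick numerical illustration, the characteristic $C^{(3)}_{1,0}(0,10)$ can be recomputed from the $t = 10$ row of Table 4:

```python
# Recomputing the three-sigma characteristic from the t = 10 row of Table 4.
from math import sqrt

def c_char(mean, var, a=3):
    """Combined risk-profit characteristic C^(a) = E - a * sqrt(Var)."""
    return mean - a * sqrt(var)

print(round(c_char(11339, 7760581)))   # contract I:  about 2982
print(round(c_char(10142, 5062041)))   # contract II: about 3392, the better value
```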
As explained in Haberman and Pitacco (1999), the Markov environment can take into account neither the previous evolution of the system nor the time $u$ spent in a state before a transition. The semi-Markov environment lets one evaluate characteristics of accumulated rewards as functions of the state $i$ and the time $u$ spent in a given disability state.

Table 5 shows that the expected values of accumulated rewards can significantly depend on the time $u$ spent in the given disability state 2 for contract I.
Table 4. Profit-risk characteristics of discounted accumulated rewards for contracts I and II.

         E[ξ_{1,0}(0,t)]     Var[ξ_{1,0}(0,t)]      C^{(3)}_{1,0}(0,t)
t        I        II         I          II          I        II
1 970 1067 0 0 970 1067
2 1912 2103 0 0 1912 2103
3 2998 3163 77470 34106 2163 2609
4 4263 4257 251952 162127 2757 3049
5 5500 5316 636019 434561 3108 3339
6 6714 6343 1286450 879215 3312 3530
7 7907 7337 2270228 1531392 3387 3624
8 9076 8300 3645316 2425160 3348 3628
9 10220 9235 5462352 3592225 3208 3549
10 11339 10142 7760581 5062041 2982 3392
Table 5. Expectations and variances of accumulated discounted rewards.

         E[ξ_{2,u}(0,t)]                Var[ξ_{2,u}(0,t)]
t        u = 0    u = 1    u = 2        u = 0      u = 1      u = 2
1 1456 1456 1456 0 0 0
2 2875 2886 2891 21910 59292 75512
3 4268 4291 4303 137129 287425 357793
4 5636 5671 5688 441487 783425 944535
5 6978 7023 7048 1025020 1631242 1925198
6 8292 8348 8375 1964034 2906036 3373795
7 9580 9640 9669 3326448 4670956 5335672
8 10836 10900 10932 5168873 6964287 7850892
In conclusion, we would like to note that some alternative semi-Markov models may also be considered. In particular, we also examined a model with a more complicated mechanism of transitions, where transitions into the "death" state 6 and the other disability states 1, ..., 5 occur on the basis of "competition" between death transitions and transitions to other disability states. More precisely, the sojourn time in a state $i$ is modelled as the minimum of two independent variables. The first one is a random variable with some unknown distribution $\tilde{b}_i$ given in non-parametric form, and the other is a geometrically distributed random variable with parameter $p$, where again $p$ is the average one-year death probability for a reasonable age range for the given type of insurance contract. In this case, the distributions of transition times to the "death" state differ from the distributions of transition times to the other disability states.
We can report that the actual estimates of transition characteristics and expected accumulated rewards are very close to those obtained for the basic semi-Markov model. In particular, the differences in expected accumulated rewards between the two models lie within the limits of -2% to +1%.
It should be noted that the distributions of sojourn times are formed by very complicated mechanisms determined by the stochastic disability dynamics of individuals, the rules of the insurance medical service in a country, and many other factors. That is why non-parametric semi-Markov models may be involved. It should, however, be noted that these models have a disadvantage, since they may involve too many parameters that must be estimated from sample data.
Our conjecture is that some semi-parametric models for the distributions of sojourn times could also be applied as an alternative; in particular, the so-called "burned" geometric distribution, which has a non-geometric form for the probabilities $b_i(t)$ for a few small values of $t$ and a geometric form for the probabilities $b_i(t)$ otherwise. Looking at the sample data available in the example described above, we conjecture that such a model could possibly be used for the distributions of sojourn times for disability states 2, 3, and 4.
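As an illustration only, one possible fitting scheme for such a "burned" geometric distribution (an assumption of ours, not a procedure from the paper) keeps the head of the distribution non-parametric for $t \le t_0$ and fits a geometric tail by maximum likelihood:

```python
# A sketch of one possible way to fit a "burned" geometric distribution:
# non-parametric head for t <= t0, geometric tail beyond t0. The sojourn
# sample is a hypothetical placeholder, and t0 is an arbitrary cut-off.
import numpy as np

def fit_burned_geometric(sample, t0):
    """Head probabilities for t = 1..t0 and a geometric rate q for the tail."""
    sample = np.asarray(sample)
    head = np.array([(sample == t).mean() for t in range(1, t0 + 1)])
    tail = sample[sample > t0] - t0              # shifted tail observations
    q = 1.0 / tail.mean() if len(tail) else 0.0  # MLE for a geometric tail
    tail_mass = 1.0 - head.sum()
    # pmf for t > t0: tail_mass * q * (1 - q) ** (t - t0 - 1)
    return head, q, tail_mass

sample = [1, 2, 2, 2, 3, 3, 4, 5, 6, 8, 9, 12]   # placeholder sojourn times
head, q, mass = fit_burned_geometric(sample, t0=3)
print(head, q, mass)
```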
4. Conclusions
In this paper, a first step towards the application of higher order backward semi-Markov rewards in insurance has been made. Reward processes describe the total revenues generated by stochastic financial operations. Not only the first moment was considered: we also showed how to develop algorithms to calculate higher order moments of discounted accumulated rewards, and a real-world example was given in Section 3. Calculating higher moments is important, since it gives the tools to compare different insurance contracts and to analyze not only the profit but also the risk properties of individual contracts. Our setup also makes it possible to see how small changes in the underlying parameters, such as the duration of the contract, fees, and benefits, influence the cash flow between the counterparties. When developing new insurance policies, such properties are of great importance.
References
1. Balcer Y., Sahin I. Pension accumulation as a semi-Markov reward process, with applications to pension reform. In: J. Janssen (ed.), Semi-Markov Models. Plenum: N.Y., (1986), 181-200.
2. Blasi A., Janssen J., Manca R. Generalized Discrete Time Homogeneous Stochastic Annuities and Multi-State Insurance Model. Proceedings of IME 2004, Roma, (2004).
3. CMIR12. Continuous Mortality Investigation Report 12: The analysis of permanent health insurance data. The Institute of Actuaries and the Faculty of Actuaries, (1991).
4. Consael R., Sonnenscheim J. Théorie mathématique des assurances des personnes. Modèle markovien. Mitteilungen der Vereinigung schweizerischer Versicherungsmathematiker, 78, (1978), 75-93.
5. Çinlar E. Markov renewal theory. Advances in Applied Probability 1, (1969)
123-187.
6. Christofides N. Graph Theory. An Algorithmic Approach. Academic Press: New
York - London. (1975).
7. De Dominicis R., Manca R. An algorithmic approach to non-homogeneous semi-Markov processes. Communications in Statistics: Simulation and Computation, 13, (1984), 113-127.
8. De Dominicis R., Manca R. Some new results on the transient behaviour of semi-Markov reward processes. Methods of Operations Research, 54, (1986), 387-397.
9. De Dominicis R., Manca R., Granata L. The dynamics of pension funds in a
stochastic environment. Scandinavian Actuarial Journal, (1991).
10. Haberman S., Pitacco E. Actuarial Models for Disability Insurance. Chapman and Hall, (1999).
11. Hoem J. M. Markov chain models in life insurance. Blätter der Deutschen Gesellschaft für Versicherungsmathematik, 9, (1969), 91-107.
12. Hoem J. M. Inhomogeneous semi-Markov processes, select actuarial tables, and duration-dependence in demography. In: T.N.E. Greville (ed.), Population Dynamics, Academic Press: NY, (1972), 251-296.
13. Hoem J. M. The versatility of the Markov chain as a tool in the mathematics of life insurance. Transactions of the 23rd Congress of Actuaries, Volume R, (1988), 141-202.
14. Iosifescu Manu A. Non homogeneous semi-Markov processes. Stud. Cerc. Mat., 24, (1972), 529-533.
15. Janssen J. Application des processus semi-markoviens à un probléme d’invalidité,
Bulletin de l’Association Royale des Actuaries Belges 63, (1966), 35-52.
16. Janssen J., De Dominicis R. Finite non-homogeneous semi-Markov processes,
Insurance: Mathematics and Economics 3, (1984) 157-165.
17. Janssen J., Manca R. A realistic non-homogeneous stochastic pension funds
model on scenario basis. Scandinavian Actuarial Journal, (1997), 113-137.
18. Janssen J., Manca R. General actuarial models in a semi-Markov environment.
Proceedings of ICA Cancun 2002, (2002).
19. Janssen J., Manca R. Discrete Time Non-Homogeneous Semi-Markov Reward
Processes, Generalized Stochastic Annuities and Multi-State Insurance Model.
Proceedings of XXVIII AMASES Modena (2004).
20. Janssen J., Manca R. Applied semi-Markov Processes Springer: New York.
(2006).
21. Janssen J., Manca R., Volpe di Prignano E., Continuous time non homogeneous
semi-Markov reward processes and multi-state insurance application. Proceedings
of IME 2004, (2004).
22. Lévy P. Processus semi-markoviens. Proceedings of the International Congress of Mathematicians, Amsterdam, (1954).
23. Moller C.M., Numerical evaluation of Markov transition probabilities based on
the discretized product integral. Scandinavian Actuarial Journal (1992) 76-87.
24. Norberg R. Identities for present values of life insurance benefits. Scandinavian
Actuarial Journal (1993), 100-106.
25. Sahin I., Balcer Y. Stochastic models for a pensionable service. Operations Research, 27, (1979), 888-903.
26. Silvestrov D. S. Mean hitting times for semi-Markov processes, and queueing networks. Elektronische Informationsverarbeitung und Kybernetik, 16, (1980), 399-415.
27. Silvestrov D. S. Semi-Markov Processes with a Discrete State Space. Sovetskoe
Radio: Moscow. (1980).
28. Silvestrov D. S. Recurrence relations for generalised hitting times for semi-
Markov processes. The Annals of Applied Probability. 6, (1996), 617-649.
29. Silvestrov D. S. Limit Theorems for Randomly Stopped Stochastic Processes.
Springer: London. (2004).
30. Stenberg F., Manca R., Silvestrov D. Discrete Time Backward Semi-Markov Reward Processes and an Application to Disability Insurance Problems. Research Reports MdH/IMa, 2005-1, ISSN 1404-4978, (2005), 1-44.
31. Waters H. An approach for the study of multiple state models. Journal of the
Institute of Actuaries 116, (1984), 611-624.
32. Wolthuis H. Actuarial equivalence. Insurance Mathematics and Economics, 15,
(1994), 163-179.
33. Wolthuis H. Life Insurance Mathematics (The Markovian Model) IAE. Univer-
siteit van Amsterdam, Amsterdam II edition: Amsterdam, (2003).
34. Yntema L. A markovian treatment of silicosis. Acta III Conferencia Int. De
Actuarios y Estadisticos de la Seguridad Social. Madrid, (1965).
Department of Mathematics and Physics, Mälardalen University, P.O.
Box 883, 721 23 Västerås, Sweden
E-mail address: fredrik.stenberg@mdh.se
Department of Mathematics for Economic, Financial and Insurance Decision, Rome University "La Sapienza", via del Castro Laurenziano 9, 00161
Roma, Italy
E-mail address: raimondo.manca@uniroma1.it
Department of Mathematics and Physics, Mälardalen University, P.O.
Box 883, 721 23 Västerås, Sweden
E-mail address: dmitrii.silvestrov@mdh.se