Bounds for a sum of random variables under a mixture of normals

In two papers, Dhaene et al. (2002), Insurance: Mathematics and Economics 31, pp. 3–33 and pp. 133–161, an approximation for sums of random variables (rv's) was derived for the case where the distributions of the components are lognormal and known, but the stochastic dependence structure is unknown or too cumbersome to work with. In finance and actuarial science much attention is paid to regime switching models. In this paper we give an approximation for sums under a mixture of normals and consider approximate evaluation of the provision under a switching regime.

Bibliographic Details
Date: 2007
Main authors: Kukush, A., Pupashenko, M.
Format: Article
Language: English
Published: Інститут математики НАН України, 2007
Online access: http://dspace.nbuv.gov.ua/handle/123456789/4515
Journal title: Digital Library of Periodicals of National Academy of Sciences of Ukraine
ISSN: 0321-3900
Citation: Bounds for a sum of random variables under a mixture of normals / A. Kukush, M. Pupashenko // Theory of Stochastic Processes. — 2007. — Vol. 13 (29), No. 4. — P. 82–97. — Bibliography: 3 titles. — In English.

Full text:

Theory of Stochastic Processes, Vol. 13 (29), no. 4, 2007, pp. 82–97

ALEXANDER KUKUSH AND MYKHAILO PUPASHENKO

BOUNDS FOR A SUM OF RANDOM VARIABLES UNDER A MIXTURE OF NORMALS

In two papers, Dhaene et al. (2002), Insurance: Mathematics and Economics 31, pp. 3–33 and pp. 133–161, an approximation for sums of random variables (rv's) was derived for the case where the distributions of the components are lognormal and known, but the stochastic dependence structure is unknown or too cumbersome to work with. In finance and actuarial science much attention is paid to regime switching models. In this paper we give an approximation for sums under a mixture of normals and consider approximate evaluation of the provision under a switching regime.

A. Kukush is supported by a grant from the authorities of K.U. Leuven, Belgium, and by the Swedish Institute grant SI-01424/2007.
2000 Mathematics Subject Classification: 62P05.
Key words and phrases: convex stochastic order, bounds for provision, regime switching model.

1. Introduction

In an insurance context, one is often interested in the distribution function of a sum of random variables (rv's). Such a sum appears when considering the aggregate claims of an insurance portfolio over a certain reference period. It also appears when considering discounted payments related to a single policy or a portfolio at different future points in time. The assumption of mutual independence of the components of the sum is very convenient from a computational point of view, but it is not always realistic. In the papers Dhaene et al. (2002a) and (2002b) an approximation for a sum of rv's was derived for the case where the distributions of the components are lognormal and known, but the stochastic dependence structure is unknown or too cumbersome to work with. In this paper we consider the case of a switching regime, which can represent a change in the economic environment; see Yang (2006). The distribution of the components is a mixture of lognormal distributions.

The paper is organized as follows. In Sections 2 and 3 we give upper and lower bounds for a sum under a mixture of arbitrary distributions. In Sections 4 and 5 we derive bounds for the provision under the switching regime. In Section 6 we give a numerical illustration of the lower and upper bounds, and Section 7 concludes.

We consider a problem similar to Section 4.1 of Dhaene et al. (2002b). We want to bound the sum
$$
S := \sum_{i=1}^{n} \alpha_i e^{-(Y_1 + \dots + Y_i)},
$$
where $\alpha_i \in \mathbb{R}$, $i = \overline{1,n}$. In Dhaene et al. (2002b) it was assumed that the vector $(Y_1, \dots, Y_n)$ has a multivariate normal distribution. The random variable (r.v.) $S$ is then a linear combination of dependent lognormal rv's. The computation of the upper and lower bounds in Dhaene et al. (2002b) is based on the concept of comonotonicity. In this paper we give an approximation for sums under a mixture of arbitrary distributions and consider approximate evaluation of the provision under the switching regime. We then calculate the bounds for a mixture of normals combined in a linear and in a Markovian way.
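Since the rest of the paper works with bounds on $S$ rather than with $S$ itself, it may help to fix the notation with a direct Monte Carlo simulation of $S$ under the multivariate normal assumption of Dhaene et al. (2002b). This is only an illustrative sketch, not part of the paper; the drift and covariance values are hypothetical.

```python
# Illustrative Monte Carlo sketch of S = sum_i alpha_i * exp(-(Y_1 + ... + Y_i))
# under a multivariate normal model for (Y_1, ..., Y_n); parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_S(alpha, mean, cov, n_sim=100_000):
    Y = rng.multivariate_normal(mean, cov, size=n_sim)   # rows are draws of (Y_1, ..., Y_n)
    cum_Y = np.cumsum(Y, axis=1)                         # column i holds Y_(i) = Y_1 + ... + Y_i
    return (np.asarray(alpha) * np.exp(-cum_Y)).sum(axis=1)

n = 5
alpha = np.ones(n)
mean = np.full(n, 0.07)        # hypothetical drift
cov = 0.01 * np.eye(n)         # hypothetical covariance
S_draws = simulate_S(alpha, mean, cov)
print(S_draws.mean(), np.quantile(S_draws, 0.95))
```

The convex-order bounds of Sections 2–5 are of interest precisely when such a direct simulation of the joint law, or the resulting distribution of $S$, is unavailable or too cumbersome.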
2. Upper bound for a sum

Let $X_1, \dots, X_n$ be rv's and let $\Lambda$ be some r.v. with a given cdf, such that we know the conditional cdfs of the r.v. $X_i$, given $\Lambda = \lambda$, for all $i = \overline{1,n}$ and all possible values of $\lambda$. Denote by $F^{-1}_{X_i|\Lambda}(U)$ the r.v. $f_i(U, \Lambda)$, where $U$ is uniform $(0,1)$ and $f_i(u, \lambda) = F^{-1}_{X_i|\Lambda=\lambda}(u)$; here $F^{-1}$ stands for a generalized inverse of a cdf $F$.

Definition 1. Consider two random variables $X$ and $Y$. Then $X$ is said to precede $Y$ in the convex order sense, notation $X \leq_{cx} Y$, if and only if
$$
E[X] = E[Y] \quad \text{and} \quad E[(X - d)_+] \le E[(Y - d)_+] \ \text{for all } d \in \mathbb{R},
$$
where $(x - d)_+ = \max(x - d, 0)$.

It can be proven that $X \leq_{cx} Y$ if and only if $E[g(X)] \le E[g(Y)]$ for all convex functions $g$, provided the expectations exist.

Theorem 9 from Dhaene et al. (2002a) states that if $U$ is uniform $(0,1)$ and independent of $\Lambda$, then
$$
\sum_{i=1}^{n} X_i \ \leq_{cx} \ \sum_{i=1}^{n} F^{-1}_{X_i|\Lambda}(U). \tag{1}
$$

Assume the following.

(i) $\Lambda = \Phi(X_1, \dots, X_n)$, where $\Phi$ is a nonrandom function.

(ii) The joint distribution of $(X_1, \dots, X_n)$ equals $\sum_{j=1}^{N} p_j \mu^X_j$, where $0 < p_j < 1$, $j = \overline{1,N}$, $\sum_{j=1}^{N} p_j = 1$, and the $\mu^X_j$ are probability measures on $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$, $\mathcal{B}(\mathbb{R}^n)$ being the Borel $\sigma$-field.

Here the $p_j$ have the meaning of prior probabilities, and $\mu^X_j$ is the conditional distribution of $(X_1, \dots, X_n)$ given that it belongs to a class $A_j$, $j = \overline{1,N}$.

Due to condition (i), the joint distribution of $(X_1, \dots, X_n, \Lambda)$ equals
$$
\sum_{j=1}^{N} p_j \mu^{X\Lambda}_j, \tag{2}
$$
where $\mu^{X\Lambda}_j$ is the conditional distribution of $(X_1, \dots, X_n, \Lambda)$ given that $(X_1, \dots, X_n)$ belongs to the class $A_j$, $j = \overline{1,N}$.

Now we find the distribution of $X_i$ given $\Lambda = \lambda$. The joint distribution of $(X_i, \Lambda)$ equals
$$
\sum_{j=1}^{N} p_j \mu^{X_i\Lambda}_j, \tag{3}
$$
where $\mu^{X_i\Lambda}_j$ is the conditional distribution of $(X_i, \Lambda)$ given that $(X_1, \dots, X_n)$ belongs to the class $A_j$, $j = \overline{1,N}$. Suppose that
$$
d(\mu^{X_i\Lambda}_j) = \rho^{X_i\Lambda}_j(x_i, \lambda)\, dx_i\, d\lambda, \tag{4}
$$
i.e., the measure $\mu^{X_i\Lambda}_j$ has a density. Then the conditional density of $X_i$ given $\Lambda = \lambda$ equals
$$
\rho_{X_i|\Lambda=\lambda}(x_i) = \frac{\sum_{j=1}^{N} p_j \rho^{X_i\Lambda}_j(x_i, \lambda)}{\int \sum_{j=1}^{N} p_j \rho^{X_i\Lambda}_j(x_i, \lambda)\, dx_i},
\qquad
\rho_{X_i|\Lambda=\lambda}(x_i) = \sum_{j=1}^{N} q_j(\lambda)\, \rho^{j}_{X_i|\Lambda=\lambda}(x_i). \tag{5}
$$
Thus the conditional density of $X_i$ given $\Lambda = \lambda$ is a mixture of the partial conditional densities, with the posterior probabilities $q_j(\lambda)$ instead of the prior probabilities $p_j$:
$$
q_j(\lambda) = \frac{p_j \int \rho^{X_i\Lambda}_j(x_i, \lambda)\, dx_i}{\int \sum_{j=1}^{N} p_j \rho^{X_i\Lambda}_j(x_i, \lambda)\, dx_i}, \tag{6}
$$
$$
\rho^{j}_{X_i|\Lambda=\lambda}(x_i) = \frac{\rho^{X_i\Lambda}_j(x_i, \lambda)}{\int \rho^{X_i\Lambda}_j(x_i, \lambda)\, dx_i}. \tag{7}
$$
At the end of Section 3 we will explain that $q_j(\lambda)$ does not depend on $i$. The cdf $F_{X_i|\Lambda=\lambda}$ can be computed from (5)–(7):
$$
F_{X_i|\Lambda=\lambda}(z) = \int_{-\infty}^{z} \rho_{X_i|\Lambda=\lambda}(x_i)\, dx_i, \quad z \in \mathbb{R}.
$$
This can be applied, e.g., when under the class $A_j$
$$
(\log X_1, \dots, \log X_n) \sim N(m_j, S_j), \tag{8}
$$
and
$$
\Lambda = \sum_{i=1}^{n} \beta_i \log X_i, \quad \beta_i \in \mathbb{R}, \ i = \overline{1,n}. \tag{9}
$$
Then $q_j(\lambda)$ and $\rho^{j}_{X_i|\Lambda=\lambda}(x_i)$ can be computed directly.

3. Lower bound for a sum

Theorem 10 from Dhaene et al. (2002a) states that for any r.v. $\Lambda$,
$$
\sum_{i=1}^{n} E[X_i|\Lambda] \ \leq_{cx} \ \sum_{i=1}^{n} X_i. \tag{10}
$$
We assume (i) and (ii). From (5)–(7) we obtain that the conditional density of $X_i$ given $\Lambda$ equals
$$
\rho_{X_i|\Lambda}(x_i) = \sum_{j=1}^{N} q_j(\Lambda)\, \rho^{j}_{X_i|\Lambda}(x_i).
$$
Here $q_j(\Lambda) = q_j(\lambda)|_{\lambda=\Lambda}$ and
$$
\rho^{j}_{X_i|\Lambda}(x_i) = \frac{\rho^{X_i\Lambda}_j(x_i, \Lambda)}{\int \rho^{X_i\Lambda}_j(x_i, \Lambda)\, dx_i}.
$$
Then
$$
E[X_i|\Lambda] = \int x_i\, \rho_{X_i|\Lambda}(x_i)\, dx_i = \sum_{j=1}^{N} q_j(\Lambda) \int x_i\, \rho^{j}_{X_i|\Lambda}(x_i)\, dx_i,
\qquad
E[X_i|\Lambda] = \sum_{j=1}^{N} q_j(\Lambda)\, E_j[X_i|\Lambda]. \tag{11}
$$
Here $E_j[X_i|\Lambda]$ is the conditional expectation of $X_i$ given $\Lambda$, provided $(X_1, \dots, X_n)$ belongs to the class $A_j$. Now, (10) and (11) imply that
$$
\sum_{i=1}^{n} \sum_{j=1}^{N} q_j(\Lambda)\, E_j[X_i|\Lambda] \ \leq_{cx} \ \sum_{i=1}^{n} X_i. \tag{12}
$$
Formula (6) can be rewritten as
$$
q_j(\lambda) = \frac{p_j\, \rho^{\Lambda}_j(\lambda)}{\sum_{j=1}^{N} p_j\, \rho^{\Lambda}_j(\lambda)}. \tag{13}
$$
Here $\rho^{\Lambda}_j(\lambda)$ is the density of $\Lambda$ provided $(X_1, \dots, X_n)$ belongs to the class $A_j$. Thus the posterior probability $q_j(\lambda)$ does not depend on $i$. Relation (12) simplifies to
$$
\sum_{j=1}^{N} q_j(\Lambda) \sum_{i=1}^{n} E_j[X_i|\Lambda] \ \leq_{cx} \ \sum_{i=1}^{n} X_i. \tag{14}
$$
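When each class-conditional density $\rho^{\Lambda}_j$ of $\Lambda$ is available in closed form (for the normal mixtures treated below it is a normal density), the posterior weights in (13) are a one-line Bayes computation. A minimal sketch, with hypothetical two-class parameters:

```python
# Posterior weights q_j(lambda) of (13) for normal class-conditional densities of Lambda.
# The priors, means and standard deviations below are hypothetical.
import numpy as np
from scipy.stats import norm

def posterior_weights(lam, priors, means, stds):
    """q_j(lam) = p_j * rho_j(lam) / sum_k p_k * rho_k(lam)."""
    dens = norm.pdf(lam, loc=means, scale=stds)   # rho^Lambda_j(lam), j = 1, ..., N
    w = np.asarray(priors) * dens
    return w / w.sum()

q = posterior_weights(lam=0.3, priors=[0.25, 0.75], means=[0.2, 0.4], stds=[0.3, 0.1])
print(q, q.sum())                                 # weights sum to 1
```

Evaluated at the random value $\Lambda$, these weights multiply the class-conditional expectations in the lower bound (14).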
4. Approximate evaluation of provisions under switching regime: lower bound

Let $Y_1, \dots, Y_n$ be an i.i.d. sequence. We deal with the sum
$$
S := \sum_{i=1}^{n} \alpha_i e^{-(Y_1 + \dots + Y_i)}, \tag{15}
$$
where $\alpha_i \in \mathbb{R}$, $i = \overline{1,n}$. Let $Y_{(i)} := Y_1 + \dots + Y_i$ and $Y^{(i)} := Y_{i+1} + \dots + Y_n$.

Theorem 1 from Dhaene et al. (2002b) states the following.

Theorem 1. Let $S$ be given in (15), where the random vector $(Y_1, \dots, Y_n)$ has a multivariate normal distribution. Consider the conditioning r.v. $\Lambda' = \sum_{i=1}^{n} \beta_i Y_i$. Then the lower bound $S^l$ and the upper bound $S^u$ are given by
$$
S^l = \sum_{i=1}^{n} \alpha_i \exp\bigl[-E[Y_{(i)}] - r_i \sigma_{Y_{(i)}} \Phi^{-1}(V) + (1 - r_i^2)\, \sigma^2_{Y_{(i)}}/2\bigr],
$$
$$
S^u = \sum_{i=1}^{n} \alpha_i \exp\bigl[-E[Y_{(i)}] - r_i \sigma_{Y_{(i)}} \Phi^{-1}(V) + \mathrm{sign}(\alpha_i)\sqrt{1 - r_i^2}\, \sigma_{Y_{(i)}} \Phi^{-1}(U)\bigr],
$$
where $U$ and $V$ are mutually independent uniform $(0,1)$ rv's, $\Phi$ is the cdf of the $N(0,1)$ distribution, and $r_i$ is defined by
$$
r_i = r(Y_{(i)}, \Lambda') = \frac{\mathrm{cov}[Y_{(i)}, \Lambda']}{\sigma_{Y_{(i)}}\, \sigma_{\Lambda'}}.
$$

In this paper we consider the sum from (15) for a mixture of normal distributions combined in both a linear and a Markovian way.

4.1. Mixture of N independent normals in linear way

Let the distribution of $Y_1$ be a mixture of $N$ normals:
$$
\sum_{i=1}^{N} \pi_i\, N(\mu_i, \sigma_i^2), \tag{16}
$$
where $\pi_i > 0$, $i = \overline{1,N}$, $\sum_{i=1}^{N} \pi_i = 1$, $(\mu_i, \sigma_i^2) \neq (\mu_j, \sigma_j^2)$ for $i \neq j$, and $\sigma_i > 0$, $i = \overline{1,N}$. The joint distribution of $Y_1, \dots, Y_n$ is
$$
\Bigl( \sum_{i=1}^{N} \pi_i\, N(\mu_i, \sigma_i^2) \Bigr)^{n},
$$
where the power corresponds to a product of measures. We consider the conditioning r.v.
$$
\Lambda := \sum_{i=1}^{n} Y_i.
$$
By (10) we have
$$
\sum_{i=1}^{n} \alpha_i\, E[e^{-(Y_1 + \dots + Y_i)}|\Lambda] \ \leq_{cx} \ S. \tag{17}
$$
Consider the joint distribution of $Y_{(i)} := Y_1 + \dots + Y_i$ and $\Lambda = Y_{(i)} + Y^{(i)}$, where $Y^{(i)} := Y_{i+1} + \dots + Y_n$. The joint distribution of $Z_i := (Y_{(i)}, Y^{(i)})$ equals $\mathcal{L}(Y_{(i)}) \times \mathcal{L}(Y^{(i)})$, where $\mathcal{L}(\cdot)$ stands for the probability law. Now,
$$
\mathcal{L}(Y_{(i)}) = \sum_{k_1+\dots+k_N=i} \binom{i}{k_1 \dots k_N} \pi_1^{k_1} \cdots \pi_N^{k_N}\, N\Bigl( \sum_{j=1}^{N} k_j \mu_j,\ \sum_{j=1}^{N} k_j \sigma_j^2 \Bigr), \tag{18}
$$
$$
\mathcal{L}(Y^{(i)}) = \mathcal{L}(Y_{(n-i)}) = \sum_{l_1+\dots+l_N=n-i} \binom{n-i}{l_1 \dots l_N} \pi_1^{l_1} \cdots \pi_N^{l_N}\, N\Bigl( \sum_{j=1}^{N} l_j \mu_j,\ \sum_{j=1}^{N} l_j \sigma_j^2 \Bigr), \tag{19}
$$
where
$$
\binom{i}{k_1 \dots k_N} = \frac{i!}{k_1! \cdots k_N!}.
$$
Let $U_1 \sim N(m_1, \tau_1^2)$ and $U_2 \sim N(m_2, \tau_2^2)$ be independent. Then $(U_1, U_1 + U_2) \sim N(m_1,\ m_1 + m_2,\ \tau_1^2,\ \tau_1^2 + \tau_2^2,\ \rho)$ with
$$
\rho = \frac{E(U_1 - m_1)(U_1 + U_2 - m_1 - m_2)}{\tau_1 \sqrt{\tau_1^2 + \tau_2^2}} = \frac{E(U_1 - m_1)^2}{\tau_1 \sqrt{\tau_1^2 + \tau_2^2}} = \frac{\tau_1}{\sqrt{\tau_1^2 + \tau_2^2}}.
$$
Therefore, using (18) and (19):
$$
(Y_{(i)}, \Lambda) \sim \sum_{k_1+\dots+k_N=i}\ \sum_{l_1+\dots+l_N=n-i} \binom{i}{k_1 \dots k_N} \binom{n-i}{l_1 \dots l_N} \pi_1^{k_1+l_1} \cdots \pi_N^{k_N+l_N}
\times N\Bigl( \sum_{j=1}^{N} k_j \mu_j,\ \sum_{j=1}^{N} (k_j + l_j)\mu_j,\ \sum_{j=1}^{N} k_j \sigma_j^2,\ \sum_{j=1}^{N} (k_j + l_j)\sigma_j^2,\ \sqrt{\frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2}} \Bigr). \tag{20}
$$
In particular,
$$
\Lambda = Y_{(n)} \sim \sum_{k_1+\dots+k_N=n} \binom{n}{k_1 \dots k_N} \pi_1^{k_1} \cdots \pi_N^{k_N}\, N\Bigl( \sum_{j=1}^{N} k_j \mu_j,\ \sum_{j=1}^{N} k_j \sigma_j^2 \Bigr),
$$
but this can be rewritten based on (20):
$$
\Lambda \sim \sum_{k_1+\dots+k_N=i}\ \sum_{l_1+\dots+l_N=n-i} \binom{i}{k_1 \dots k_N} \binom{n-i}{l_1 \dots l_N} \pi_1^{k_1+l_1} \cdots \pi_N^{k_N+l_N}\, N\Bigl( \sum_{j=1}^{N} (k_j + l_j)\mu_j,\ \sum_{j=1}^{N} (k_j + l_j)\sigma_j^2 \Bigr). \tag{21}
$$
Now we use (11). In our case the prior probabilities for the joint distribution of $(Y_{(i)}, \Lambda)$ are
$$
p_{k_1 \dots k_N l_1 \dots l_N} = \binom{i}{k_1 \dots k_N} \binom{n-i}{l_1 \dots l_N} \pi_1^{k_1+l_1} \cdots \pi_N^{k_N+l_N},
$$
and the posterior probabilities given $\Lambda$ are, see (13),
$$
q_{k_1 \dots k_N l_1 \dots l_N}(\Lambda) = \frac{p_{k_1 \dots k_N l_1 \dots l_N} \cdot \rho^{\Lambda}_{k_1 \dots k_N l_1 \dots l_N}(\Lambda)}{\sum_{k_1+\dots+k_N=i} \sum_{l_1+\dots+l_N=n-i} p_{k_1 \dots k_N l_1 \dots l_N} \cdot \rho^{\Lambda}_{k_1 \dots k_N l_1 \dots l_N}(\Lambda)}, \tag{22}
$$
where, according to (21), $\rho^{\Lambda}_{k_1 \dots k_N l_1 \dots l_N}(\Lambda)$ is the density at the point $\Lambda$ of $N\bigl( \sum_{j=1}^{N} (k_j + l_j)\mu_j,\ \sum_{j=1}^{N} (k_j + l_j)\sigma_j^2 \bigr)$.

Next we need
$$
E_{k_1 \dots k_N l_1 \dots l_N}(e^{-Y_{(i)}}|\Lambda). \tag{23}
$$
Under the class $A_{k_1 \dots k_N l_1 \dots l_N}$ the joint distribution of $(Y_{(i)}, \Lambda)$ is, cf. (20),
$$
N\Bigl( \sum_{j=1}^{N} k_j \mu_j,\ \sum_{j=1}^{N} (k_j + l_j)\mu_j,\ \sum_{j=1}^{N} k_j \sigma_j^2,\ \sum_{j=1}^{N} (k_j + l_j)\sigma_j^2,\ \sqrt{\frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2}} \Bigr). \tag{24}
$$
We use the following well-known Regression Theorem.

Theorem 2 (Regression Theorem). Let $(\xi_1, \xi_2) \sim N(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$. Then the conditional distribution of $\xi_1$ given $\xi_2 = z$ is
$$
f_{\xi_1|\xi_2}(x|z) \sim N\bigl(m(z),\ \sigma_1^2 (1 - \rho^2)\bigr), \quad m(z) = E(\xi_1|\xi_2 = z) = \mu_1 + \frac{\rho \sigma_1}{\sigma_2}(z - \mu_2).
$$
If the joint distribution of $Y_{(i)}$ and $\Lambda$ equals (24), the conditional density satisfies
$$
f_{Y_{(i)}|\Lambda}(y|\lambda) \sim N\bigl(m(\lambda),\ \tilde{\sigma}_1^2 (1 - \rho^2)\bigr), \tag{25}
$$
$$
\tilde{\sigma}_1^2 := \sum_{j=1}^{N} k_j \sigma_j^2, \qquad \tilde{\sigma}_1^2 (1 - \rho^2) = \frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2} \Bigl( \sum_{j=1}^{N} l_j \sigma_j^2 \Bigr), \tag{26}
$$
and
$$
m(\lambda) = \tilde{\mu}_1 + \frac{\rho \tilde{\sigma}_1}{\tilde{\sigma}_2}(\lambda - \tilde{\mu}_2) = \sum_{j=1}^{N} k_j \mu_j + \sqrt{\frac{\rho^2 \tilde{\sigma}_1^2}{\tilde{\sigma}_2^2}} \Bigl( \lambda - \sum_{j=1}^{N} (k_j + l_j)\mu_j \Bigr). \tag{27}
$$
Here
$$
\frac{\rho^2 \tilde{\sigma}_1^2}{\tilde{\sigma}_2^2} = \Bigl( \frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2} \Bigr)^2,
$$
and thus
$$
m(\lambda) = \sum_{j=1}^{N} k_j \mu_j + \frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2} \Bigl( \lambda - \sum_{j=1}^{N} (k_j + l_j)\mu_j \Bigr). \tag{28}
$$
As a result we have the conditional density $f_{Y_{(i)}|\Lambda}(y|\lambda)$ under the class $A_{k_1 \dots k_N l_1 \dots l_N}$, when the joint distribution of $(Y_{(i)}, \Lambda)$ equals (24). Next,
$$
E_{k_1 \dots k_N l_1 \dots l_N}(e^{-Y_{(i)}}|\Lambda) = E\bigl[e^{-m(\Lambda) + \tilde{\sigma}_1 \sqrt{1-\rho^2}\, Z}\,\big|\,\Lambda\bigr],
$$
where $Z \sim N(0,1)$ is independent of $\Lambda$. Then
$$
E_{k_1 \dots k_N l_1 \dots l_N}(e^{-Y_{(i)}}|\Lambda) = e^{-m(\Lambda)}\, e^{\tilde{\sigma}_1^2 (1-\rho^2)/2}.
$$
Finally,
$$
S \ \geq_{cx} \ \sum_{i=1}^{n} \alpha_i \sum_{k_1+\dots+k_N=i}\ \sum_{l_1+\dots+l_N=n-i} q_{k_1 \dots k_N l_1 \dots l_N}(\Lambda)\,
\exp\Bigl\{ -m(\Lambda) + \frac{1}{2}\, \frac{\sum_{j=1}^{N} k_j \sigma_j^2}{\sum_{j=1}^{N} (k_j + l_j)\sigma_j^2} \Bigl( \sum_{j=1}^{N} l_j \sigma_j^2 \Bigr) \Bigr\},
$$
where $q_{k_1 \dots k_N l_1 \dots l_N}$ is given in (22), and $m(\Lambda)$ is given in (28) with $\Lambda$ plugged in instead of $\lambda$.
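The lower bound just derived is an explicit function of $\Lambda$, so it can be evaluated by enumerating the compositions $(k_1,\dots,k_N)$ of $i$ and $(l_1,\dots,l_N)$ of $n-i$. The sketch below is not the authors' code; it evaluates this function at a fixed value of $\Lambda$, with the parameter values taken from the two-component example of Section 6. Feeding simulated values of $\Lambda$ through it produces draws of the lower-bound random variable.

```python
# Sketch: evaluate the Section 4.1 lower bound sum_i alpha_i * E[e^{-Y_(i)} | Lambda = lam]
# for a linear mixture of N normals, using formulas (22), (26), (28) above.
import numpy as np
from math import comb, exp
from scipy.stats import norm

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def multinomial(n, ks):
    out, rem = 1, n
    for k in ks:
        out *= comb(rem, k)
        rem -= k
    return out

def lower_bound_at(lam, alpha, pi, mu, sigma2):
    n, N = len(alpha), len(pi)
    total = 0.0
    for i in range(1, n + 1):
        weights, terms = [], []
        for k in compositions(i, N):                     # class of Y_(i)
            for l in compositions(n - i, N):             # class of Y^(i)
                kl = [a + b for a, b in zip(k, l)]
                prior = (multinomial(i, k) * multinomial(n - i, l)
                         * np.prod([p ** c for p, c in zip(pi, kl)]))
                m_lam = sum(c * m for c, m in zip(kl, mu))       # class mean of Lambda
                v_lam = sum(c * s for c, s in zip(kl, sigma2))   # class variance of Lambda
                weights.append(prior * norm.pdf(lam, m_lam, np.sqrt(v_lam)))
                vk = sum(c * s for c, s in zip(k, sigma2))
                vl = sum(c * s for c, s in zip(l, sigma2))
                m_cond = sum(c * m for c, m in zip(k, mu)) + vk / v_lam * (lam - m_lam)  # (28)
                terms.append(exp(-m_cond + 0.5 * vk * vl / v_lam))  # E_{k,l}[e^{-Y_(i)} | Lambda]
        q = np.array(weights) / sum(weights)              # posterior weights (22)
        total += alpha[i - 1] * float(q @ np.array(terms))
    return total

print(lower_bound_at(lam=0.35, alpha=[1.0] * 5,
                     pi=[0.25, 0.75], mu=[0.04, 0.08], sigma2=[0.07, 0.01]))
```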
4.2. Mixture of N independent normals in Markovian way

In Yang (2006) a simple discrete-time model consisting of one bank account and one risky asset is considered. Trading of assets is allowed only at the beginning of each time period. The distribution of the return of the risky asset depends on the market mode, which can switch among a finite number of states. Switching of the regime can represent a change in the economic environment. The regime is assumed to switch among a finite number of possible states in a Markovian way.

We consider a problem similar to that of Section 4.1. We want to bound the sum
$$
S := \sum_{i=1}^{n} \alpha_i e^{-(Y^{\xi_1}_1 + \dots + Y^{\xi_i}_i)}, \tag{29}
$$
where $Y^{\xi_1}_1, \dots, Y^{\xi_n}_n$ are rv's and $\{\xi_1, \dots, \xi_n\}$ is a finite-state, time-homogeneous Markov chain with phase space $S = \{1, \dots, s\}$. The transition probability matrix is denoted by $P = (\tilde{p}_{ij})_{i,j=1}^{s}$, where $\sum_{j=1}^{s} \tilde{p}_{ij} = 1$ for all $i = \overline{1,s}$. Denote also $P(\xi_1 = k) = \tilde{q}_k$, $k = \overline{1,s}$, with $\sum_{k=1}^{s} \tilde{q}_k = 1$. Let the conditional distribution of $Y^{\xi_i}_i$ given $\xi_i = k$ be
$$
\mathcal{L}(Y^{\xi_i}_i \,|\, \xi_i = k) = \mathcal{L}(Y^k_i) = \mathcal{L}(Y^k_1) = N(\mu_k, \sigma_k^2), \quad k = \overline{1,s}, \ \forall i = \overline{1,n}, \tag{30}
$$
where $(\mu_i, \sigma_i^2) \neq (\mu_j, \sigma_j^2)$ for $i \neq j$, $\sigma_i > 0$, $i = \overline{1,s}$, and the normals are independent. Therefore $Y^k_1, \dots, Y^k_n$ are i.i.d. rv's for each $k = \overline{1,s}$.

We consider the conditioning r.v.
$$
\Lambda := \sum_{i=1}^{n} Y^{\xi_i}_i.
$$
By (10) we have
$$
\sum_{i=1}^{n} \alpha_i\, E[e^{-(Y^{\xi_1}_1 + \dots + Y^{\xi_i}_i)}|\Lambda] \ \leq_{cx} \ S. \tag{31}
$$
Consider the joint distribution of $Y_{(i)} := Y^{\xi_1}_1 + \dots + Y^{\xi_i}_i$ and $\Lambda = Y_{(i)} + Y^{(i)}$, where $Y^{(i)} := Y^{\xi_{i+1}}_{i+1} + \dots + Y^{\xi_n}_n$. Let
$$
A_{k_1 \dots k_s m} = \bigl\{ (i_1, \dots, i_m) \in \{1, \dots, s\}^m : \{i_1, \dots, i_m\} = \{\underbrace{1, \dots, 1}_{k_1}, \dots, \underbrace{s, \dots, s}_{k_s}\} \bigr\}, \tag{32}
$$
with $k_1 + \dots + k_s = m$, $0 \le k_i \le m$, $i = \overline{1,s}$, for every $m = \overline{0,n}$. Then introduce $a_{k_1 \dots k_s m} = a_{k_1 \dots k_s m}(\tilde{q}_1, \dots, \tilde{q}_s, P)$ as
$$
a_{k_1 \dots k_s m} = \sum_{(i_1, \dots, i_m) \in A_{k_1 \dots k_s m}} \tilde{q}_{i_1}\, \tilde{p}_{i_1 i_2}\, \tilde{p}_{i_2 i_3} \cdots \tilde{p}_{i_{m-1} i_m}. \tag{33}
$$
We have
$$
\mathcal{L}(Y_{(i)}) = \sum_{k_1+\dots+k_s=i} a_{k_1 \dots k_s i}\, N\Bigl( \sum_{j=1}^{s} k_j \mu_j,\ \sum_{j=1}^{s} k_j \sigma_j^2 \Bigr), \tag{34}
$$
$$
\mathcal{L}(Y^{(i)}) = \mathcal{L}(Y_{(n-i)}) = \sum_{l_1+\dots+l_s=n-i} a_{l_1 \dots l_s\, n-i}\, N\Bigl( \sum_{j=1}^{s} l_j \mu_j,\ \sum_{j=1}^{s} l_j \sigma_j^2 \Bigr). \tag{35}
$$
Similarly to Section 4.1 we obtain, using (34) and (35),
$$
(Y_{(i)}, \Lambda) \sim \sum_{k_1+\dots+k_s=i}\ \sum_{l_1+\dots+l_s=n-i} a_{k_1 \dots k_s i}\, a_{l_1 \dots l_s\, n-i}\,
N\Bigl( \sum_{j=1}^{s} k_j \mu_j,\ \sum_{j=1}^{s} (k_j + l_j)\mu_j,\ \sum_{j=1}^{s} k_j \sigma_j^2,\ \sum_{j=1}^{s} (k_j + l_j)\sigma_j^2,\ \sqrt{\frac{\sum_{j=1}^{s} k_j \sigma_j^2}{\sum_{j=1}^{s} (k_j + l_j)\sigma_j^2}} \Bigr). \tag{36}
$$
In particular,
$$
\Lambda = Y_{(n)} \sim \sum_{k_1+\dots+k_s=n} a_{k_1 \dots k_s n}\, N\Bigl( \sum_{j=1}^{s} k_j \mu_j,\ \sum_{j=1}^{s} k_j \sigma_j^2 \Bigr),
$$
but this can also be written from (36):
$$
\Lambda \sim \sum_{k_1+\dots+k_s=i}\ \sum_{l_1+\dots+l_s=n-i} a_{k_1 \dots k_s i}\, a_{l_1 \dots l_s\, n-i}\, N\Bigl( \sum_{j=1}^{s} (k_j + l_j)\mu_j,\ \sum_{j=1}^{s} (k_j + l_j)\sigma_j^2 \Bigr). \tag{37}
$$
Now we use (11). In our case the prior probabilities for the joint distribution of $(Y_{(i)}, \Lambda)$ are
$$
p_{k_1 \dots k_s l_1 \dots l_s} = a_{k_1 \dots k_s i}\, a_{l_1 \dots l_s\, n-i},
$$
and the posterior probabilities given $\Lambda$ are, see (13),
$$
q_{k_1 \dots k_s l_1 \dots l_s}(\Lambda) = \frac{p_{k_1 \dots k_s l_1 \dots l_s} \cdot \rho^{\Lambda}_{k_1 \dots k_s l_1 \dots l_s}(\Lambda)}{\sum_{k_1+\dots+k_s=i} \sum_{l_1+\dots+l_s=n-i} p_{k_1 \dots k_s l_1 \dots l_s} \cdot \rho^{\Lambda}_{k_1 \dots k_s l_1 \dots l_s}(\Lambda)}, \tag{38}
$$
where, according to (37), $\rho^{\Lambda}_{k_1 \dots k_s l_1 \dots l_s}(\Lambda)$ is the density at the point $\Lambda$ of $N\bigl( \sum_{j=1}^{s} (k_j + l_j)\mu_j,\ \sum_{j=1}^{s} (k_j + l_j)\sigma_j^2 \bigr)$.

Next we need
$$
E_{k_1 \dots k_s l_1 \dots l_s}(e^{-Y_{(i)}}|\Lambda), \tag{39}
$$
i.e., the conditional expectation provided $(Y_{(i)}, \Lambda)$ has the following distribution, cf. (36):
$$
N\Bigl( \sum_{j=1}^{s} k_j \mu_j,\ \sum_{j=1}^{s} (k_j + l_j)\mu_j,\ \sum_{j=1}^{s} k_j \sigma_j^2,\ \sum_{j=1}^{s} (k_j + l_j)\sigma_j^2,\ \sqrt{\frac{\sum_{j=1}^{s} k_j \sigma_j^2}{\sum_{j=1}^{s} (k_j + l_j)\sigma_j^2}} \Bigr). \tag{40}
$$
Similarly to Section 4.1 we have
$$
S \ \geq_{cx} \ \sum_{i=1}^{n} \alpha_i \sum_{k_1+\dots+k_s=i}\ \sum_{l_1+\dots+l_s=n-i} q_{k_1 \dots k_s l_1 \dots l_s}(\Lambda)\,
\exp\Bigl\{ -m(\Lambda) + \frac{1}{2}\, \frac{\sum_{j=1}^{s} k_j \sigma_j^2}{\sum_{j=1}^{s} (k_j + l_j)\sigma_j^2} \Bigl( \sum_{j=1}^{s} l_j \sigma_j^2 \Bigr) \Bigr\}.
$$
Here $q_{k_1 \dots k_s l_1 \dots l_s}$ is given in (38), and $m(\Lambda)$ is given in (28) with $\Lambda$ plugged in instead of $\lambda$ and $s$ instead of $N$.
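Compared with the linear mixture, the only new ingredient is the set of path weights $a_{k_1 \dots k_s m}$ in (33). A minimal brute-force sketch with a hypothetical two-state chain (the enumeration over all $s^m$ paths is practical only for small $m$; for larger $m$ the same weights can be accumulated by dynamic programming over the current state and the occupancy vector):

```python
# Sketch of the occupancy weights a_{k_1...k_s, m} from (33): the probability that the
# chain (xi_1, ..., xi_m) spends exactly k_j steps in state j. Parameters are hypothetical.
from itertools import product
from collections import Counter, defaultdict

def occupancy_weights(m, q0, P):
    """Return {(k_1, ..., k_s): a_{k_1...k_s, m}} for an s-state chain."""
    s = len(q0)
    weights = defaultdict(float)
    for path in product(range(s), repeat=m):
        prob = q0[path[0]]
        for a, b in zip(path[:-1], path[1:]):
            prob *= P[a][b]
        counts = Counter(path)
        weights[tuple(counts.get(j, 0) for j in range(s))] += prob
    return dict(weights)

q0 = [0.5, 0.5]                      # hypothetical initial distribution of xi_1
P = [[0.9, 0.1], [0.2, 0.8]]         # hypothetical transition matrix
a = occupancy_weights(3, q0, P)
print(a, sum(a.values()))            # the weights sum to 1
```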
5. Approximate evaluation of provisions under switching regime: upper bound

5.1. Mixture of N independent normals in linear way

We keep the notation of Section 4.1. From (1) we have
$$
S \ \leq_{cx} \ \sum_{i=1}^{n} F^{-1}_{\alpha_i X_i|\Lambda}(U), \qquad X_i := e^{-Y_{(i)}}, \ i = \overline{1,n},
$$
where $U$ is uniform $(0,1)$ and independent of $\Lambda$. Now assume that $\alpha_i \neq 0$ for all $i = \overline{1,n}$. Then
$$
F_{\alpha_i X_i|\Lambda=\lambda}(z) = P\{\alpha_i e^{-Y_{(i)}} \le z \,|\, \Lambda = \lambda\}.
$$
If $\alpha_i > 0$, then for $z > 0$
$$
F_{\alpha_i X_i|\Lambda=\lambda}(z) = P\Bigl\{Y_{(i)} \ge -\log\frac{z}{\alpha_i} \,\Big|\, \Lambda = \lambda\Bigr\} = \bar{F}_{Y_{(i)}|\Lambda=\lambda}\Bigl(-\log\frac{z}{\alpha_i}\Bigr).
$$
Here we suppose that the conditional distribution is continuous, and $\bar{F} = 1 - F$ is the survival function. Otherwise, if $\alpha_i < 0$, then for $z < 0$
$$
F_{\alpha_i X_i|\Lambda=\lambda}(z) = P\Bigl\{Y_{(i)} \le -\log\frac{z}{\alpha_i} \,\Big|\, \Lambda = \lambda\Bigr\} = F_{Y_{(i)}|\Lambda=\lambda}\Bigl(-\log\frac{z}{\alpha_i}\Bigr).
$$
In all cases we need the conditional distribution of $Y_{(i)}$ given $\Lambda = \lambda$. Such a distribution, for the class $A_{k_1 \dots k_N l_1 \dots l_N}$, is given in (25), and the resulting conditional law of $Y_{(i)}$ given $\Lambda = \lambda$ is the mixture of normals
$$
\sum_{k_1+\dots+k_N=i}\ \sum_{l_1+\dots+l_N=n-i} q_{k_1 \dots k_N l_1 \dots l_N}(\lambda)\,
N\bigl( m_{k_1 \dots k_N l_1 \dots l_N}(\lambda),\ \tilde{\sigma}_1^2(k_1 \dots k_N)\bigl(1 - \rho^2(k_1 \dots k_N l_1 \dots l_N)\bigr) \bigr).
$$
Here $m_{k_1 \dots k_N l_1 \dots l_N}(\lambda)$ is given in (28), and $\tilde{\sigma}_1^2(k_1 \dots k_N)(1 - \rho^2(k_1 \dots k_N l_1 \dots l_N))$ is given in (26). Thus everything is ready to compute the upper bound numerically.

5.2. Mixture of N independent normals in Markovian way

The upper bound in this case can be taken from Section 5.1, where we have to plug in $q_{k_1 \dots k_s l_1 \dots l_s}(\lambda)$ from (38) instead of $q_{k_1 \dots k_N l_1 \dots l_N}(\lambda)$, and $m_{k_1 \dots k_N l_1 \dots l_N}(\lambda)$ from (28) and $\tilde{\sigma}_1^2(k_1 \dots k_N)(1 - \rho^2(k_1 \dots k_N l_1 \dots l_N))$ from (26) with $s$ instead of $N$.
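Concretely, "computing the upper bound numerically" amounts to inverting the conditional mixture-of-normals cdf of $Y_{(i)}$ given $\Lambda = \lambda$ and applying the sign convention above. A minimal sketch of that inversion follows; the mixture weights, means and standard deviations in it are placeholders for the quantities $q_{k_1 \dots l_N}(\lambda)$, (28) and (26) computed as in Section 4.1 (or as in Section 4.2 with $s$ instead of $N$).

```python
# Sketch of the comonotonic upper-bound term F^{-1}_{alpha_i X_i | Lambda = lam}(u),
# where Y_(i) | Lambda = lam is a mixture of normals. Component parameters are placeholders.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_cdf(y, weights, means, stds):
    return float(np.dot(weights, norm.cdf(y, loc=means, scale=stds)))

def mixture_quantile(p, weights, means, stds):
    lo = min(means) - 10 * max(stds)                 # bracketing interval for root finding
    hi = max(means) + 10 * max(stds)
    return brentq(lambda y: mixture_cdf(y, weights, means, stds) - p, lo, hi)

def upper_bound_term(u, alpha_i, weights, means, stds):
    """F^{-1}_{alpha_i e^{-Y_(i)} | Lambda = lam}(u), alpha_i != 0."""
    p = 1.0 - u if alpha_i > 0 else u                # survival vs. cdf case of Section 5.1
    return alpha_i * np.exp(-mixture_quantile(p, weights, means, stds))

w = np.array([0.3, 0.7])                             # placeholder posterior weights q_{k,l}(lam)
m = np.array([0.10, 0.25])                           # placeholder conditional means, cf. (28)
s = np.array([0.20, 0.10])                           # placeholder conditional std devs, cf. (26)
print(upper_bound_term(u=0.95, alpha_i=1.0, weights=w, means=m, stds=s))
```

Summing $\alpha_i$-terms of this form over $i$, with one common uniform draw $U$ plugged in for $u$ and one simulated value of $\Lambda$, gives one draw of the comonotonic upper bound.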
6. Numerical illustrations

In this section we illustrate numerically the bounds derived for
$$
S = \sum_{i=1}^{5} \alpha_i e^{-(Y_1 + Y_2 + \dots + Y_i)}.
$$
We assume that the random variables $Y_i$ are i.i.d. with distribution $\pi_1 N(\mu_1, \sigma_1^2) + \pi_2 N(\mu_2, \sigma_2^2)$. The conditioning random variable $\Lambda$ is defined as above:
$$
\Lambda = \sum_{i=1}^{5} Y_i.
$$
In the numerical illustration we choose the parameters of the normal distributions involved as follows:
$$
\pi_1 = 0.25, \ \mu_1 = 0.04, \ \sigma_1^2 = 0.07, \qquad \pi_2 = 0.75, \ \mu_2 = 0.08, \ \sigma_2^2 = 0.01.
$$
First we show the cdfs of $S$ (solid black line), the lower bound $S^l$ (dashed line), and the upper bound $S^u$ (dotted line) for the payments $\alpha_k = 1$, $k = \overline{1,5}$.

[Figure: cdfs of S (solid), S^l (dashed), S^u (dotted) for unit payments.]

In order to have a better view of the behaviour of the lower bound $S^l$ and the upper bound $S^u$ in the tails, we consider a QQ-plot in which the quantiles of $S^l$ and $S^u$ are plotted against the quantiles of $S$ obtained by simulation. The lower bound $S^l$ and the upper bound $S^u$ are good approximations for $S$ if the plotted points $(F^{-1}_S(p), F^{-1}_{S^l}(p))$ and $(F^{-1}_S(p), F^{-1}_{S^u}(p))$, for all values of $p$ in $(0,1)$, do not deviate too much from the straight line $y = x$. Hereafter we present a QQ-plot illustrating the accuracy of the approximations. The dashed line represents the quantiles of the lower bound versus the "exact" quantiles, the dotted line represents the quantiles of the upper bound versus the "exact" quantiles, and the solid black line is the straight line $y = x$.

[Figure: QQ-plot of the quantiles of S^l (dashed) and S^u (dotted) against the quantiles of S, with the line y = x (solid).]

Some quantiles are presented in the following table.

  p       F^{-1}_{S^l}(p)   F^{-1}_S(p)   F^{-1}_{S^u}(p)
  0.95    5.1384            5.2228        5.2146
  0.975   5.4018            5.4581        5.4490
  0.99    5.6312            5.8464        5.8273
  0.995   5.8545            6.0262        6.0346
  0.999   6.2806            6.2921        6.3544

Next we consider a series of negative and positive payments:
$$
\alpha_k = \begin{cases} -1, & k = 1, 2, \\ \ \ 1, & k = 3, 4, 5. \end{cases}
$$

[Figure: cdfs of S (solid), S^l (dashed), S^u (dotted) for the mixed payments.]

The corresponding QQ-plot and quantiles are as follows.

[Figure: QQ-plot of the quantiles of S^l (dashed) and S^u (dotted) against the quantiles of S, with the line y = x (solid).]

  p       F^{-1}_{S^l}(p)   F^{-1}_S(p)   F^{-1}_{S^u}(p)
  0.95    1.0212            1.0995        1.4312
  0.975   1.2001            1.2432        1.7314
  0.99    1.3094            1.3675        1.9813
  0.995   1.4340            1.4234        2.3021
  0.999   1.4873            1.8813        3.2011

The solid black line in the first and third figures is the "exact" cdf of $S$, which was obtained by generating 10,000 quasi-random paths.

7. Conclusions

In the papers Dhaene et al. (2002a) and (2002b) approximations for sums of rv's were derived when the distributions of the components are lognormal and known, but the stochastic dependence structure is unknown or too cumbersome to work with. Any distribution can be approximated by a mixture of normals in the sense of weak convergence. We considered the case of a mixture of N normals and obtained more complicated formulas than those of Dhaene et al. (2002b). We also considered the case of switching among a finite number of possible states in a Markovian way. The results can be applied in finance and actuarial science; see Yang (2006).

Acknowledgement

The authors are grateful to Prof. J. Dhaene (Belgium) for support and fruitful discussions.

References

1. J. Dhaene, M. Denuit, M. J. Goovaerts, R. Kaas, and D. Vyncke (2002a), The concept of comonotonicity in actuarial science and finance: theory. Insurance: Mathematics and Economics 31, 3–33.
2. J. Dhaene, M. Denuit, M. J. Goovaerts, R. Kaas, and D. Vyncke (2002b), The concept of comonotonicity in actuarial science and finance: applications. Insurance: Mathematics and Economics 31, 133–161.
3. H. Yang (2006), Optimal portfolio strategy under regime switching model. Proceedings of the Twelfth International Conference on Computational and Applied Mathematics ICCAM 2006, Leuven, Belgium.
Department of Mathematical Analysis, Kyiv National Taras Shevchenko University, Vladimirskaya st. 64, 01033 Kyiv, Ukraine.
E-mail: alexander kukush@univ.kiev.ua

Department of Probability Theory and Mathematical Statistics, Kyiv National Taras Shevchenko University, Vladimirskaya st. 64, 01033 Kyiv, Ukraine.
E-mail: myhailo.pupashenko@gmail.com