Convergence of option rewards for Markov type price processes

A general price process represented by a two-component Markov process is considered. Its first component is interpreted as a price process and the second one as an index process controlling the price component. American type options with pay-off functions, which admit power type upper bounds, are studied. Both the transition characteristics of the price processes and the pay-off functions are assumed to depend on a perturbation parameter δ ≥ 0 and to converge to the corresponding limit characteristics as δ → 0. Results about the convergence of reward functionals for American type options for perturbed processes are presented for models with continuous and discrete time as well as asymptotically uniform skeleton approximations connecting reward functionals for continuous and discrete time models.


Bibliographic details
Date: 2007
Authors: Silvestrov, D., Jönsson, H., Stenberg, F.
Format: Article
Language: English
Published: Institute of Mathematics, National Academy of Sciences of Ukraine, 2007
Online access: http://dspace.nbuv.gov.ua/handle/123456789/4523
Journal title: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Cite as: Convergence of option rewards for Markov type price processes / D. Silvestrov, H. Jönsson, F. Stenberg // Theory of Stochastic Processes. — 2007. — Vol. 13 (29), no. 4. — pp. 189–200. — Bibliogr.: 29 refs. — English.

ISSN: 0321-3900
Theory of Stochastic Processes, Vol. 13 (29), no. 4, 2007, pp. 189–200

D. SILVESTROV, H. JÖNSSON, AND F. STENBERG

CONVERGENCE OF OPTION REWARDS FOR MARKOV TYPE PRICE PROCESSES

A general price process represented by a two-component Markov process is considered. Its first component is interpreted as a price process and the second one as an index process controlling the price component. American type options with pay-off functions, which admit power type upper bounds, are studied. Both the transition characteristics of the price processes and the pay-off functions are assumed to depend on a perturbation parameter δ ≥ 0 and to converge to the corresponding limit characteristics as δ → 0. Results about the convergence of reward functionals for American type options for perturbed processes are presented for models with continuous and discrete time as well as asymptotically uniform skeleton approximations connecting reward functionals for continuous and discrete time models.

2000 Mathematics Subject Classification: 60J05, 60H10, 91B28, 91B70.

1. Introduction

This paper is devoted to the study of conditions for convergence of reward functionals for American type options for Markov type price processes controlled by stochastic indices.

Markov type price processes controlled by stochastic indices, and option pricing for such processes, have been studied by many authors. The corresponding references can be found in the report by Silvestrov, Jönsson and Stenberg (2006). We would also like to refer to the recent book by Peskir and Shiryaev (2006) for an account of various models of stochastic price processes and optimal stopping problems for options.

We consider a variant of a Markov type price process controlled by a stochastic index, as introduced in Kukush and Silvestrov (2000, 2001, 2004). We are interested in a two-component process Z^{(δ)}(t) = (Y^{(δ)}(t), X^{(δ)}(t)),
Key words and phrases: reward, convergence, optimal stopping, American option, skeleton approximation, Markov type price process, stochastic index.

where the first component Y^{(δ)}(t) is a real-valued càdlàg process and the second component X^{(δ)}(t) is a measurable process with a general measurable phase space. The first component is interpreted as a log-price process, while the second component is interpreted as a stochastic index controlling the log-price process.

The process X^{(δ)}(t) can be a global price index "controlling" market prices, or a jump process representing some market regime index (indicating, for example, a growing, declining, or stable market situation, or a high, moderate, or low level of volatility) modulating the log-price process Y^{(δ)}(t).

The log-price process Y^{(δ)}(t), as well as the corresponding price process S^{(δ)}(t) = e^{Y^{(δ)}(t)}, is not itself assumed to be a Markov process, but the two-component process Z^{(δ)}(t) is assumed to be a continuous time Markov process. Thus, the component X^{(δ)}(t) represents information which, added to the information represented by the log-price process Y^{(δ)}(t), makes the two-component process (Y^{(δ)}(t), X^{(δ)}(t)) a Markov process.

In the literature, the values of options in discrete time markets have been used to approximate the value of the corresponding option in continuous time. The seminal paper by Cox, Ross, and Rubinstein (1979) showed convergence of European option values for the binomial tree model to the Black-Scholes value for geometric Brownian motion.
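The binomial-to-Black-Scholes convergence mentioned above is easy to check numerically. The following is a minimal Python sketch (not from the paper; the parameter values are purely illustrative) that prices a European call on a Cox-Ross-Rubinstein tree by backward induction and compares it with the closed-form Black-Scholes value as the number of steps grows.

```python
import math

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes price of a European call (standard closed form)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def crr_call(s0, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein binomial price of the same call with n steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal pay-offs, then backward induction through the tree
    v = [max(s0 * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    for _ in range(n):
        v = [disc * (q * v[j + 1] + (1 - q) * v[j]) for j in range(len(v) - 1)]
    return v[0]
```

With s0 = k = 100, r = 0.05, σ = 0.2, t = 1, the binomial value approaches the Black-Scholes value as n increases, illustrating the Cox-Ross-Rubinstein convergence result.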
Further results on convergence of the values of European and American options can be found in Barone-Adesi and Whaley (1987), Lamberton (1993), Amin and Khanna (1994), Cutland, Kopp, Willinger, and Wyman (1997), Mulinacci and Pratelli (1998), Prigent (2003), Nieuwenhuis and Vellekoop (2004), Silvestrov and Stenberg (2004), Dupuis and Wang (2005), Jönsson (2005), Coquet and Toldo (2007), and Stenberg (2007). In particular, Amin and Khanna (1994) gave conditions for convergence of the values of American options in a discrete-time model to the value of the option in a continuous-time model, under the assumption that the sequence of processes describing the value of the underlying asset converges weakly to a diffusion. Martingale based methods were used by Prigent (2003) in a book covering recent results on weak convergence in financial markets, both for European and American type options. We would also like to mention the papers by Mackevičius (1973, 1975), Fährmann (1978, 1979, 1982), Dochviri and Shashiashvili (1992), and Dochviri (1988, 1993), where convergence in optimal stopping problems is studied for general Markov processes.

Our results differ from the results obtained in the papers mentioned above in several respects: by the generality of the models for price processes and the non-standard pay-off functions, and by the conditions of convergence.

We consider a so-called triangular array model, in which the processes under consideration depend on a small perturbation parameter δ ≥ 0. It is assumed that the transition probabilities of the perturbed processes Z^{(δ)}(t) converge in some sense to the corresponding transition probabilities of the limiting process Z^{(0)}(t) as δ → 0. This lets one consider the processes Z^{(δ)}(t) as perturbed modifications of the corresponding limit process Z^{(0)}(t).
We also consider American type options with non-standard pay-off functions g^{(δ)}(t, s), which are assumed to be non-negative functions with not more than polynomial growth. The pay-off functions are also assumed to converge to the corresponding limit pay-off functions g^{(0)}(t, s) as δ → 0.

As is well known, the optimal stopping moment for the exercise of an American option has the form of the first hitting time into the optimal price-time stopping domain. It is worth noting that, under the general assumptions on the pay-off functions listed above, the structure of the reward functions and the corresponding optimal stopping domain can be rather complicated. For example, as shown in Kukush and Silvestrov (2000), Jönsson (2001), and Jönsson, Kukush, and Silvestrov (2004, 2005), the optimal stopping domains can possess a multi-threshold structure.

Despite this complexity, we can prove convergence of the reward functionals, which represent the optimal expected rewards in the class of all Markov stopping moments.

We do not directly involve the condition of finite-dimensional weak convergence for the corresponding processes, which is characteristic of general limit theorems for Markov type processes. Our conditions also do not use any assumptions about convergence of auxiliary processes in probability, which are characteristic of martingale based methods. The latter type of conditions usually involves some special imbedding constructions placing the perturbed and limiting processes on one probability space, which may be difficult to realise for complex models of price processes. Instead of the conditions mentioned above, we introduce general conditions of local uniform convergence for the corresponding transition probabilities. These conditions do imply finite-dimensional weak convergence for the price processes and can be effectively used in applications.
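As an illustration of the stopping domains mentioned above, the sketch below (my own example, not the paper's construction) computes, for a discrete-time binomial model of a standard American put, the largest asset price at each time step at which immediate exercise is optimal. For the standard put this yields a single threshold per time step; the multi-threshold phenomenon studied in the cited papers arises for more general, non-monotone pay-offs.

```python
import math

def put_stopping_boundary(s0, k, r, sigma, t, n):
    """For a CRR binomial skeleton of an American put, return, for each time
    step 0..n-1, the largest asset price at which immediate exercise is
    optimal (None if no node at that step is in the stopping domain).
    Illustrative model and parameters."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    prices = lambda m: [s0 * u**j * d**(m - j) for j in range(m + 1)]
    v = [max(k - s, 0.0) for s in prices(n)]   # terminal pay-offs
    boundary = [None] * n
    for m in range(n - 1, -1, -1):
        cont = [disc * (q * v[j + 1] + (1 - q) * v[j]) for j in range(m + 1)]
        v = []
        exercised = []
        for s, c in zip(prices(m), cont):
            ex = max(k - s, 0.0)
            if ex >= c and ex > 0:   # node lies in the stopping domain
                exercised.append(s)
            v.append(max(ex, c))
        boundary[m] = max(exercised) if exercised else None
    return boundary
```

For a put, exercise is optimal only strictly below the strike, so every threshold returned lies below k.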
We also use conditions of exponential moment compactness for the increments of the log-price processes, which are natural for applications to Markov type processes.

Our approach is based on the use of skeleton approximations for price processes given in Kukush and Silvestrov (2001), where continuous time reward functionals have been approximated by their analogues for imbedded skeleton type discrete time models. In that paper, skeleton approximations were given in a form suitable for applications to continuous price processes. We improve these approximations to a form that lets us apply them to càdlàg price processes and, moreover, give them in a form asymptotically uniform as the perturbation parameter δ → 0. Another important element of our approach is a recursive method for the asymptotic analysis of reward functionals for discrete time models developed in Jönsson (2005). Key examples of price processes controlled by semi-Markov indices, and corresponding convergence results, are also given in Silvestrov and Stenberg (2004) and Stenberg (2007).

2. Price processes controlled by stochastic indices

Let Z^{(δ)}(t) = (Y^{(δ)}(t), X^{(δ)}(t)), t ≥ 0 be, for every δ ≥ 0, a Markov process with the phase space Z = R^1 × X, where R^1 is the real line and (X, B_X) is a measurable space, with transition probabilities P^{(δ)}(t, z, t+u, A) and an initial distribution P^{(δ)}(A). We assume that the process Z^{(δ)}(t), t ≥ 0 is defined on a probability space (Ω^{(δ)}, F^{(δ)}, P^{(δ)}). Note that these spaces can be different for different δ, i.e., we consider a triangular array model.

It is useful to note that Z is also a measurable space with the σ-field of measurable sets B_Z = σ(B_1 × B_X), where B_1 is the Borel σ-field in R^1, and the transition probabilities and the initial distribution are probability measures on B_Z. We assume that the process Z^{(δ)}(t), t ≥ 0 is a measurable process, i.e., Z^{(δ)}(t, ω) is a measurable function in (t, ω) ∈ [0, ∞) × Ω^{(δ)}.
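A minimal simulation sketch of such a two-component process may be helpful. Here the index X is an assumed two-state volatility-regime chain (the paper allows a general measurable phase space X), and Y is a log-price whose increment law is modulated by the current regime, so that Y alone is not Markov while the pair (Y, X) is.

```python
import math
import random

def simulate_two_component(t_grid, p_switch, sigmas, mu, seed=0):
    """Simulate a discrete-time skeleton of a two-component process
    Z(t) = (Y(t), X(t)): X is a two-state regime index (0 or 1) and Y is a
    log-price with drift mu and regime-dependent volatility sigmas[X].
    p_switch * dt is a first-order approximation of the switch probability.
    Illustrative model, not the paper's general construction."""
    rng = random.Random(seed)
    x, y = 0, 0.0                      # initial regime and log-price
    path = [(y, x)]
    for k in range(1, len(t_grid)):
        dt = t_grid[k] - t_grid[k - 1]
        if rng.random() < p_switch * dt:
            x = 1 - x                  # regime switch
        # the increment law of Y depends on the current regime x
        y += mu * dt + sigmas[x] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append((y, x))
    return path
```

The corresponding price path is recovered as S(t) = exp(Y(t)), mirroring the relation between Y^{(δ)} and S^{(δ)} above.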
Also, we assume that the first component Y^{(δ)}(t), t ≥ 0 is a càdlàg process, i.e., a process that is almost surely continuous from the right and has limits from the left at all points t ≥ 0.

We interpret the component Y^{(δ)}(t) as a log-price process and the component X^{(δ)}(t) as a stochastic index controlling the log-price process Y^{(δ)}(t). Let us also define a price process S^{(δ)}(t) = exp{Y^{(δ)}(t)}, t ≥ 0, and consider the two-component process V^{(δ)}(t) = (S^{(δ)}(t), X^{(δ)}(t)), t ≥ 0. Due to the one-to-one mapping and continuity properties of the exponential function, V^{(δ)}(t) is also a measurable Markov process, with the phase space V = (0, ∞) × X, and its first component S^{(δ)}(t), t ≥ 0 is a càdlàg process. The process V^{(δ)}(t) has the transition probabilities Q^{(δ)}(t, z, t+u, A) = P^{(δ)}(t, z, t+u, ln A) and the initial distribution Q^{(δ)}(A) = P^{(δ)}(ln A), where ln A = {y ∈ R^1 : y = ln s, s ∈ A}, A ∈ B_+, and B_+ is the Borel σ-algebra of subsets of (0, ∞).

3. Main results

Let g^{(δ)}(t, s), (t, s) ∈ [0, ∞) × (0, ∞) be, for every δ ≥ 0, a pay-off function. We assume that g^{(δ)}(t, s) is a non-negative measurable (Borel) function.

Let F^{(δ)}_t, t ≥ 0 be the natural filtration of σ-fields associated with the process Z^{(δ)}(t), t ≥ 0. We shall consider Markov moments τ^{(δ)} with respect to the filtration F^{(δ)}_t, t ≥ 0. This means that τ^{(δ)} is a random variable which takes values in [0, ∞] and has the property {ω : τ^{(δ)}(ω) ≤ t} ∈ F^{(δ)}_t, t ≥ 0. It is useful to note that F^{(δ)}_t, t ≥ 0 is also the natural filtration of σ-fields associated with the process V^{(δ)}(t), t ≥ 0.

Let us denote by M^{(δ)}_{max,T} the class of all Markov moments τ^{(δ)} ≤ T, where T > 0, and consider a class of Markov moments M^{(δ)}_T ⊆ M^{(δ)}_{max,T}. The goal functional that is the subject of our studies is the reward functional, i.e., the maximal expected pay-off over the class of Markov moments M^{(δ)}_T,

Φ(M^{(δ)}_T) = sup_{τ^{(δ)} ∈ M^{(δ)}_T} E g^{(δ)}(τ^{(δ)}, S^{(δ)}(τ^{(δ)})).
(1)

Note that we do not impose on the pay-off functions g^{(δ)}(t, s) any monotonicity conditions. However, it is worth noting that the cases where the pay-off function g^{(δ)}(t, s) is non-decreasing or non-increasing in the argument s correspond to call and put American type options, respectively.

The functional Φ(M^{(δ)}_T) can take the value +∞. However, we shall impose below conditions C1 and C2 on the price processes and pay-off functions which will guarantee that, for all δ small enough, Φ(M^{(δ)}_{max,T}) < ∞. We are interested in conditions which would also imply the following convergence relation: Φ(M^{(δ)}_{max,T}) → Φ(M^{(0)}_{max,T}) as δ → 0.

The first condition assumes the absolute continuity of the pay-off functions and imposes power type upper bounds on their partial derivatives:

A1: There exists δ_0 > 0 such that for every 0 ≤ δ ≤ δ_0: (a) the function g^{(δ)}(t, s) is absolutely continuous in t with respect to the Lebesgue measure for every fixed s ∈ (0, ∞), and in s with respect to the Lebesgue measure for every fixed t ∈ [0, T]; (b) for every s ∈ (0, ∞), the partial derivative |∂g^{(δ)}(t, s)/∂t| ≤ K_1 + K_2 s^{γ_1} for almost all t ∈ [0, T] with respect to the Lebesgue measure, where 0 ≤ K_1, K_2 < ∞ and γ_1 ≥ 0; (c) for every t ∈ [0, T], the partial derivative |∂g^{(δ)}(t, s)/∂s| ≤ K_3 + K_4 s^{γ_2} for almost all s ∈ (0, ∞) with respect to the Lebesgue measure, where 0 ≤ K_3, K_4 < ∞ and γ_2 ≥ 0; (d) for every t ∈ [0, T], the function g^{(δ)}(t, 0) = lim_{s→0} g^{(δ)}(t, s) ≤ K_5, where 0 ≤ K_5 < ∞.

Note that condition A1 (a) admits the case where the corresponding partial derivatives exist at points from [0, T] or (0, ∞), respectively, except on some subsets with zero Lebesgue measure, while conditions A1 (b) and (c) admit the case where the corresponding upper bounds hold at points from the sets where the corresponding derivatives exist, except on some subsets (of these sets) with zero Lebesgue measure.
It is useful to note that condition A1 implies that the function g^{(δ)}(t, s) is continuous in the argument (t, s) ∈ [0, T] × (0, ∞).

The second condition is the standard condition of pointwise convergence for the pay-off functions:

A2: g^{(δ)}(t, s) → g^{(0)}(t, s) as δ → 0, for every (t, s) ∈ [0, T] × (0, ∞).

Let us now formulate the conditions assumed for the transition probabilities and the initial distributions of the process Z^{(δ)}(t). The symbol ⇒ is used below to denote weak convergence of probability measures, i.e., convergence of their values on sets of continuity for the corresponding limit measure.

The first condition assumes weak convergence of the transition probabilities, locally uniform with respect to initial states from some sets, and also that the corresponding limit measures are concentrated on these sets:

B1: There exist measurable sets Z_t ⊆ Z, t ∈ [0, T] such that: (a) P^{(δ)}(t, z_δ, t+u, ·) ⇒ P^{(0)}(t, z, t+u, ·) as δ → 0, for any z_δ → z ∈ Z_t as δ → 0 and 0 ≤ t < t+u ≤ T; (b) P^{(0)}(t, z, t+u, Z_{t+u}) = 1 for every z ∈ Z_t and 0 ≤ t < t+u ≤ T.

The typical example is where the sets Z̄_t = ∅. In this case, condition B1 (b) automatically holds. Another typical example is where Z_t = Y_t × X, where the sets Ȳ_t are at most finite or countable. In this case, the assumption that the measures P^{(0)}(t, z, t+u, A × X), A ∈ B_1 have no atoms implies that condition B1 (b) holds.

The second condition assumes weak convergence of the initial distributions to some distribution that is assumed to be concentrated on the sets of convergence for the corresponding transition probabilities:

B2: (a) P^{(δ)}(·) ⇒ P^{(0)}(·) as δ → 0; (b) P^{(0)}(Z_0) = 1, where Z_0 is the set introduced in condition B1.

The typical example is again where the set Z̄_0 is empty. In this case, condition B2 (b) holds automatically.
Also, in the case where Z_0 = Y_0 × X and Ȳ_0 is an at most finite or countable set, the assumption that the measures P^{(0)}(A × X), A ∈ B_1 have no atoms implies that condition B2 (b) holds. Condition B2 holds, for example, if the initial distributions P^{(δ)}(A) = χ_A(z_0) are concentrated at a point z_0 ∈ Z_0, for all δ ≥ 0. This condition also holds if the initial distributions P^{(δ)}(A) = χ_A(z_δ) for δ ≥ 0, where z_δ → z_0 as δ → 0 and z_0 ∈ Z_0.

As usual, we use the notations E_{z,t} and P_{z,t} for expectations and probabilities calculated under the condition that Z^{(δ)}(t) = z. Let us define, for β, c, T > 0, an exponential moment modulus of compactness for the càdlàg process Y^{(δ)}(t), t ≥ 0,

Δ_β(Y^{(δ)}(·), c, T) = sup_{0 ≤ t ≤ t+u ≤ t+c ≤ T} sup_{z ∈ Z} E_{z,t}(e^{β|Y^{(δ)}(t+u) − Y^{(δ)}(t)|} − 1).

We also need the following conditions of exponential moment compactness for the log-price processes:

C1: lim_{c→0} lim_{δ→0} Δ_β(Y^{(δ)}(·), c, T) = 0 for some β > γ = max(γ_1, γ_2 + 1), where γ_1 and γ_2 are the parameters introduced in condition A1, and

C2: lim_{δ→0} E e^{β|Y^{(δ)}(0)|} < ∞, where β is the parameter introduced in condition C1.

The following theorem, presenting conditions for convergence of the reward functionals Φ(M^{(δ)}_{max,T}), is the first main result of the present paper.

Theorem 1. Let conditions A1, A2, B1, B2, C1, and C2 hold. Then,

Φ(M^{(δ)}_{max,T}) → Φ(M^{(0)}_{max,T}) < ∞ as δ → 0. (2)

Let Π = {0 = t_0 < t_1 < ... < t_N = T} be a partition of the interval [0, T] and d(Π) = max{t_k − t_{k−1}, k = 1, ..., N}. We consider the class M^{(δ)}_{Π,T} of all Markov moments τ^{(δ)} from M^{(δ)}_{max,T} which take only the values t_0, t_1, ..., t_N and are such that the event {ω : τ^{(δ)}(ω) = t_k} ∈ σ[Z^{(δ)}(t_0), ..., Z^{(δ)}(t_k)] for k = 0, ..., N. By definition, M^{(δ)}_{Π,T} ⊆ M^{(δ)}_{max,T} and, therefore, Φ(M^{(δ)}_{Π,T}) ≤ Φ(M^{(δ)}_{max,T}) < ∞ for all δ small enough if conditions C1 and C2 hold.
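For a binomial skeleton of the price process, the discrete-time reward over a partition Π can be computed by backward induction, allowing exercise only at the partition points. The sketch below (an illustrative model and pay-off, not the paper's construction) also exhibits the monotonicity under refinement: a finer partition admits more stopping moments, hence a reward at least as large.

```python
import math

def american_put_on_partition(s0, k, r, sigma, t, n, exercise_steps):
    """Reward over the class of stopping moments restricted to a partition:
    backward induction on an n-step CRR skeleton where early exercise is
    only allowed at the steps in `exercise_steps` (the terminal step n is
    always an exercise opportunity). Illustrative parameters."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    allowed = set(exercise_steps) | {n}
    v = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for m in range(n - 1, -1, -1):
        # continuation values one step back
        v = [disc * (q * v[j + 1] + (1 - q) * v[j]) for j in range(m + 1)]
        if m in allowed:
            # exercise is permitted at this partition point
            v = [max(v[j], max(k - s0 * u**j * d**(m - j), 0.0))
                 for j in range(m + 1)]
    return v[0]
```

With an empty `exercise_steps` the value reduces to the European put (exercise only at T); adding partition points can only increase the reward, in line with the inclusion of the corresponding classes of stopping moments.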
It is also readily seen in which way the reward functional Φ(M^{(δ)}_{Π,T}) corresponds to the model of American type options in discrete time.

The following theorem presents the second main result of the present paper. It gives an asymptotically uniform skeleton approximation of the reward functional in the continuous time model by the corresponding reward functional in the corresponding discrete time model.

Theorem 2. Conditions A1, C1, and C2 imply that there exist constants L′, L′′ < ∞ and δ_1 such that the following skeleton approximation inequality holds, for 0 ≤ δ ≤ δ_1,

Φ(M^{(δ)}_{max,T}) − Φ(M^{(δ)}_{Π,T}) ≤ L′ d(Π) + L′′ (Δ_β(Y^{(δ)}(·), d(Π), T))^{(β−γ)/β}. (3)

It is useful to note that explicit expressions for the constants L′, L′′ and δ_1 are given in the proof of Theorem 2.

Let us now formulate conditions of convergence for the discrete time reward functionals Φ(M^{(δ)}_{Π,T}). We first give conditions which provide convergence of the reward functionals Φ(M^{(δ)}_{Π,T}) for a given partition Π = {0 = t_0 < t_1 < ... < t_N = T} of the interval [0, T]. In this case, it is natural to use conditions based on the transition probabilities between the sequential moments of this partition and the values of the pay-off functions at the moments of this partition.

We replace conditions A1 and A2 by a simpler condition, which is implied by conditions A1 and A2:

A3: There exists δ_0 > 0 such that, for every 0 ≤ δ ≤ δ_0, the function g^{(δ)}(t_n, s) ≤ K_6 + K_7 s^γ, for n = 0, ..., N and s ∈ (0, ∞), for some γ ≥ 1 and constants K_6, K_7 < ∞.

Note that, in the continuous time case, the derivatives of the pay-off functions were involved in condition A1. The corresponding assumptions implied continuity of the pay-off functions. These assumptions play an essential role in the proof of Theorem 2. In the discrete time case, the derivatives of the pay-off functions are not involved. In this case, the pay-off functions can be discontinuous.
This is compensated by a stronger assumption concerning the convergence of the pay-off functions. This assumption requires locally uniform convergence for the pay-off functions on some sets, which later will be assumed to have value 1 for the corresponding limit transition probabilities and the limit initial distribution:

A4: There exists a measurable set S_{t_n} ⊆ (0, ∞) for every n = 0, ..., N, such that g^{(δ)}(t_n, s_δ) → g^{(0)}(t_n, s) as δ → 0 for any s_δ → s ∈ S_{t_n} and n = 0, ..., N.

Obviously, condition A4 can be rewritten in terms of the function g^{(δ)}(t, e^y), (t, y) ∈ [0, ∞) × R^1:

A′4: There exists a measurable set Y′_{t_n} ⊆ R^1 for every n = 0, ..., N, such that g^{(δ)}(t_n, e^{y_δ}) → g^{(0)}(t_n, e^y) as δ → 0 for any y_δ → y ∈ Y′_{t_n} and n = 0, ..., N.

It is obvious that the sets S_{t_n} and Y′_{t_n} are connected by the relations Y′_{t_n} = ln S_{t_n}, n = 0, ..., N. Let us also denote Z′_{t_n} = Y′_{t_n} × X.

The typical examples are where the sets Ȳ′_{t_n} = ∅ or where the Ȳ′_{t_n} are finite or countable sets. For example, if the pay-off functions g^{(δ)}(t, e^y) are monotonic functions in y, the pointwise convergence g^{(δ)}(t, e^y) → g^{(0)}(t, e^y) as δ → 0, y ∈ Y*_{t_n}, for every n = 0, ..., N, where the Y*_{t_n} are some countable sets dense in R^1, implies the locally uniform convergence required in condition A′4 for the sets Y′_{t_n}, which are the sets of continuity points of the limit functions g^{(0)}(t_n, e^y), as functions of y, for every n = 0, ..., N. Due to the monotonicity of these functions, the sets Ȳ′_{t_n} are at most countable.

We replace convergence condition B1 by a simpler condition, which is implied by condition B1:

B3: There exist measurable sets Z_{t_n} ⊆ Z, n = 0, ..., N such that (a) P^{(δ)}(t_n, z_δ, t_{n+1}, ·) ⇒ P^{(0)}(t_n, z, t_{n+1}, ·) as δ → 0, for any z_δ → z ∈ Z_{t_n} as δ → 0 and n = 0, ..., N − 1; (b) P^{(0)}(t_n, z, t_{n+1}, Z′_{t_{n+1}} ∩ Z_{t_{n+1}}) = 1 for every z ∈ Z_{t_n} and n = 0, ..., N − 1, where the Z′_{t_{n+1}} are the sets introduced in condition A′4.
The typical example is where the sets Z̄′_{t_n} ∪ Z̄_{t_n} = ∅. In this case, condition B3 (b) automatically holds. Another typical example is where Z′_{t_n} = Y′_{t_n} × X and Z_{t_n} = Y_{t_n} × X, where the sets Ȳ′_{t_n} and Ȳ_{t_n} are at most finite or countable. In this case, the assumption that the measures P^{(0)}(t, z, t+u, A × X), A ∈ B_1 have no atoms implies that condition B3 (b) holds.

As far as condition B2 is concerned, this condition can be replaced by the condition of weak convergence for the initial distributions to some distribution that is assumed to be concentrated on the intersections of the sets of convergence for the corresponding transition probabilities and pay-off functions:

B4: (a) P^{(δ)}(·) ⇒ P^{(0)}(·) as δ → 0; (b) P^{(0)}(Z′_{t_0} ∩ Z_{t_0}) = 1, where Z′_{t_0} and Z_{t_0} are the sets introduced in conditions A′4 and B3.

The typical example is where the sets Z̄′_{t_0} ∪ Z̄_{t_0} = ∅. In this case, condition B4 (b) automatically holds. Another typical example is where Z′_{t_0} = Y′_{t_0} × X and Z_{t_0} = Y_{t_0} × X, where the sets Ȳ′_{t_0} and Ȳ_{t_0} are at most finite or countable. In this case, the assumption that the measures P^{(0)}(A × X), A ∈ B_1 have no atoms implies that condition B4 (b) holds. Condition B4 holds, for example, if the initial distributions P^{(δ)}(A) = χ_A(z_0) are concentrated at a point z_0 ∈ Z′_{t_0} ∩ Z_{t_0}, for all δ ≥ 0. This condition also holds if the initial distributions P^{(δ)}(A) = χ_A(z_δ) for δ ≥ 0, where z_δ → z_0 as δ → 0 and z_0 ∈ Z′_{t_0} ∩ Z_{t_0}.

We also weaken condition C1 by replacing it by a simpler condition, which is implied by condition C1:

C3: lim_{δ→0} sup_{z ∈ Z} E_{z,t_n}(e^{β|Y^{(δ)}(t_{n+1}) − Y^{(δ)}(t_n)|} − 1) < ∞, n = 0, ..., N − 1, for some β > γ, where γ is the parameter introduced in condition A3.

Condition C2 does not change and takes the following form:

C4: lim_{δ→0} E e^{β|Y^{(δ)}(t_0)|} < ∞, where β is the parameter introduced in condition C3.

The following theorem presents the third main result of the present paper.

Theorem 3.
Let conditions A3, A4, B3, B4, C3, and C4 hold. Then the following asymptotic relation holds for the partition Π = {0 = t_0 < t_1 < ... < t_N = T} of the interval [0, T]:

Φ(M^{(δ)}_{Π,T}) → Φ(M^{(0)}_{Π,T}) as δ → 0. (4)

The following theorem and its proof are based on the fact that the conditions of Theorem 1 imply that the conditions of Theorem 3 hold for any partition Π of the interval [0, T].

Theorem 4. Let conditions A1, A2, B1, B2, C1, and C2 hold. Then the following asymptotic relation holds for any partition Π = {0 = t_0 < t_1 < ... < t_N = T} of the interval [0, T]:

Φ(M^{(δ)}_{Π,T}) → Φ(M^{(0)}_{Π,T}) as δ → 0. (5)

The proofs of Theorems 1 – 4 can be found in the report by Silvestrov, Jönsson, and Stenberg (2006). Here, we only show the final step in these proofs, i.e., the way in which Theorems 2 and 4 imply Theorem 1. Let Π_N = {0 = t_{0,N} < t_{1,N} < ... < t_{N,N} = T} be a sequence of partitions such that d(Π_N) → 0 as N → ∞. Relations (3) and (5) imply that

lim_{δ→0} |Φ(M^{(δ)}_{max,T}) − Φ(M^{(0)}_{max,T})| ≤ lim_{N→∞} lim_{δ→0} (|Φ(M^{(δ)}_{max,T}) − Φ(M^{(δ)}_{Π_N,T})| + |Φ(M^{(0)}_{max,T}) − Φ(M^{(0)}_{Π_N,T})| + |Φ(M^{(δ)}_{Π_N,T}) − Φ(M^{(0)}_{Π_N,T})|) = 0. (6)

References

1. Amin, K., Khanna, A., Convergence of American option values from discrete- to continuous-time financial models, Math. Finance, 4, no. 4, (1994), 289–304.
2. Barone-Adesi, G., Whaley, R., Efficient analytical approximation of American option values, J. Finance, 42, (1987), 301–310.
3. Coquet, F., Toldo, S., Convergence of values in optimal stopping and convergence of optimal stopping times, Electr. J. Probab., 12, (2007), 207–228.
4. Cox, J., Ross, S., Rubinstein, M., Option pricing: A simplified approach, J. Financ. Econom., 7, (1979), 229–263.
5. Cutland, N.J., Kopp, P.E., Willinger, W., Wyman, M.C., Convergence of Snell envelopes and critical prices in the American put, In: Dempster, M.A.H. et al. (eds.) Mathematics of Derivative Securities, Publ. Newton Inst., Cambridge Univ. Press, (1997), 126–140.
6.
Dochviri, V.M., On optimal stopping with incomplete data, In: Probability Theory and Mathematical Statistics, Kyoto, 1986. Lecture Notes in Mathematics, 1299, Springer, Berlin, (1988), 64–68.
7. Dochviri, V.M., Optimal stopping of a homogeneous nonterminating standard Markov process on a finite time interval, In: Trudy Mat. Inst. Steklov., 202, Statist. Upravlen. Sluchain. Protsessami, 120–131. (English translation in Proc. Steklov Inst. Math., 202, no. 4, (1993), 97–106.)
8. Dochviri, V., Shashiashvili, M., On the optimal stopping of a homogeneous Markov process on a finite time interval, Math. Nachr., 156, (1992), 269–281.
9. Dupuis, P., Wang, H., On the convergence from discrete time to continuous time in an optimal stopping problem, Ann. Appl. Probab., 15, (2005), 1339–1366.
10. Fährmann, H., Zur Konvergenz der optimalen Werte der Gewinnfunktion beim Abbruch von Zufallsprozessen im Falle von unvollständiger Information, Math. Operationsforsch. Statist., Ser. Statist., 9, no. 2, (1978), 241–253.
11. Fährmann, H., On the convergence of the value in optimal stopping of random sequences with incomplete data, Zastos. Mat., 16, no. 3, (1979), 415–428.
12. Fährmann, H., Convergence of values in optimal stopping of partially observable random sequences with quadratic rewards, Theory Probab. Appl., 27, (1982), 386–391.
13. Jönsson, H., Monte Carlo studies of American type options with discrete time, Theory Stoch. Process., 7(23), no. 1-2, (2001), 163–188.
14. Jönsson, H., Optimal Stopping Domains and Reward Functions for Discrete Time American Type Options, Mälardalen University, Ph.D. Thesis 22, (2005).
15. Jönsson, H., Kukush, A.G., Silvestrov, D.S., Threshold structure of optimal stopping strategies for American type options. I, Theor. Ĭmovirn. Mat. Stat., 71, (2004), 113–123. (English translation in Theory Probab. Math. Statist., 71, 93–103.)
16.
Jönsson, H., Kukush, A.G., Silvestrov, D.S., Threshold structure of optimal stopping strategies for American type options. II, Theor. Ĭmovirn. Mat. Stat., 72, (2005), 42–53. (English translation in Theory Probab. Math. Statist., 72, 47–58.)
17. Kukush, A.G., Silvestrov, D.S., Structure of optimal stopping strategies for American type options, In: Uryasev, S. (ed.) Probabilistic Constrained Optimization: Methodology and Applications, Kluwer, (2000), 173–185.
18. Kukush, A.G., Silvestrov, D.S., Skeleton approximation of optimal stopping strategies for American type options with continuous time, Theory Stoch. Process., 7(23), no. 1-2, (2001), 215–230.
19. Kukush, A.G., Silvestrov, D.S., Optimal pricing of American type options with discrete time, Theory Stoch. Process., 10(26), no. 1-2, (2004), 72–96.
20. Lamberton, D., Convergence of the critical price in the approximation of American options, Math. Finance, 3, no. 2, (1993), 179–190.
21. Mackevičius, V., Convergence of the prices of games in problems of optimal stopping of Markovian processes, Lit. Mat. Sb., 13, no. 1, (1973), 115–128.
22. Mackevičius, V., Convergence of the prices of games in problems of optimal stopping of Markovian processes, Lith. Math. Trans., 14, no. 1, (1975), 83–96.
23. Mulinacci, S., Pratelli, M., Functional convergence of Snell envelopes: Applications to American options approximations, Finance Stochast., 2, (1998), 311–327.
24. Nieuwenhuis, J.W., Vellekoop, M.H., Weak convergence of tree methods to price options on defaultable assets, Decis. Econom. Finance, 27, (2004), 87–107.
25. Peskir, G., Shiryaev, A., Optimal Stopping and Free-Boundary Problems, Birkhäuser, Basel, (2006).
26. Prigent, J.-L., Weak Convergence of Financial Markets, Springer, New York, (2003).
27. Silvestrov, D., Jönsson, H., Stenberg, F., Convergence of option rewards for Markov type price processes controlled by stochastic indices. 1, Research Report 2006-1,
Department of Mathematics and Physics, Mälardalen University, (2006).
28. Silvestrov, D.S., Stenberg, F., A price process with stochastic volatility controlled by a semi-Markov process, Comm. Statist., 33, no. 3, (2004), 591–608.
29. Stenberg, F., Semi-Markov Models for Insurance and Option Rewards, Mälardalen University, Ph.D. Thesis 38, (2007).

Department of Mathematics and Physics, Mälardalen University, Box 883, SE-72123, Västerås, Sweden
E-mail address: dmitrii.silvestrov@mdh.se

Eurandom, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
E-mail address: jonsson@eurandom.tue.nl

Department of Mathematics and Physics, Mälardalen University, Box 883, SE-72123, Västerås, Sweden
E-mail address: fredrik.stenberg@mdh.se