Asymptotically optimal estimator of the parameter of semi-linear autoregression
The difference equations ξk = af(ξk−1) + εk, where (εk) is a square integrable difference martingale, and the differential equation dξ = −af(ξ)dt + dη, where η is a square integrable martingale, are considered. A family of estimators depending, besides the sample size n (or the observation period, if time is continuous), on some random Lipschitz functions is constructed. Asymptotic optimality of these estimators is investigated.
Date: 2007
Format: Article
Language: English
Published: Інститут математики НАН України, 2007
Online access: http://dspace.nbuv.gov.ua/handle/123456789/4475
Citation: Asymptotically optimal estimator of the parameter of semi-linear autoregression / D. Ivanenko // Theory of Stochastic Processes. — 2007. — Vol. 13 (29), No. 1-2. — P. 77-85. — Bibliography: 7 titles. — English. ISSN 0321-3900.
Theory of Stochastic Processes
Vol.13 (29), no.1-2, 2007, pp.77-85
DMYTRO IVANENKO
ASYMPTOTICALLY OPTIMAL ESTIMATOR OF
THE PARAMETER OF SEMI-LINEAR
AUTOREGRESSION
The difference equations ξk = af(ξk−1) + εk, where (εk) is a square integrable difference martingale, and the differential equation dξ = −af(ξ)dt + dη, where η is a square integrable martingale, are considered. A family of estimators depending, besides the sample size n (or the observation period, if time is continuous), on some random Lipschitz functions is constructed. Asymptotic optimality of these estimators is investigated.
1. Introduction
Discrete time
We consider the difference equation
ξk = af(ξk−1) + εk, k ∈ N, (1)
where ξ0 is a prescribed random variable, f is a prescribed nonrandom function, a is an unknown scalar parameter and (εk) is a square integrable difference martingale with respect to some flow (Fk, k ∈ Z+) of σ-algebras such that the random variable ξ0 is F0-measurable. In the detailed form, the assumption about (εk) means that, for any k, εk is Fk-measurable,

Eε²k < ∞ (2)

and

E(εk|Fk−1) = 0. (3)

The word "semi-linear" in the title means that the right-hand side of (1) depends linearly on a but not on ξk−1.
2000 Mathematics Subject Classifications. Primary 62F12. Secondary 60F05.
Key words and phrases. Martingale, estimator, optimization, convergence.
We use the notation: l.i.p. – limit in probability; →d – the weak convergence of finite-dimensional distributions of random functions, in particular convergence in distribution of random variables.
Let, for each k ∈ Z+, hk = hk(ω, x) be an Fk−1 ⊗ B-measurable function (that is, the sequence (hk) is predictable) such that

E[(|ξk+1| + |af(ξk+1)|)|hk(ξk)|] + E|hk(ξk)| < ∞.

Then from (1) – (3) we have E(ξk+1 − af(ξk))hk(ξk) = 0, whence

a = (Eξk+1hk(ξk))(Ef(ξk)hk(ξk))⁻¹

provided Ef(ξk)hk(ξk) ≠ 0. This prompts the estimator
ǎn = (∑_{k=0}^{n−1} ξk+1hk(ξk)) (∑_{k=0}^{n−1} f(ξk)hk(ξk))⁻¹, (4)

coinciding with the LSE if hk(x) = f(x) for all k.
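As a quick numerical illustration (not part of the paper), estimator (4) can be computed on simulated data. The sketch below assumes f(x) = sin x, i.i.d. standard Gaussian noise, and the LSE choice hk = f; all these concrete choices are assumptions for the example only.

```python
import numpy as np

# Minimal sketch (illustrative assumptions): model (1) with f(x) = sin x,
# i.i.d. N(0, 1) noise eps_k, true parameter a = 0.5, and the LSE choice
# h_k(x) = f(x) in estimator (4).
rng = np.random.default_rng(0)

a_true, n = 0.5, 10_000
f = np.sin                       # f in Lip(1), f(0) = 0, and |a|*1 < 1

xi = np.zeros(n + 1)
eps = rng.normal(size=n)         # square integrable martingale differences
for k in range(n):
    xi[k + 1] = a_true * f(xi[k]) + eps[k]

fx = f(xi[:-1])                  # mu_k = h_k(xi_k) with h_k = f
a_hat = np.sum(xi[1:] * fx) / np.sum(fx * fx)   # estimator (4)
```

For large n the value `a_hat` is close to `a_true`; any other predictable choice of hk plugs into the two sums in the same way.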
Continuous time
We consider the differential equation
dξ(t) = −af(ξ(t))dt + dη(t), t ∈ R, (5)
where η(t) is a local square integrable martingale w.r.t. a flow (F(t)) such
that the random variable ξ(0) is F(0)-measurable.
Let h(t, x) be a predictable random function such that for all t ∈ R+
E [(|ξ(t)| + |af(ξ(t))|) |h(t, ξ(t))|] + E|h(t, ξ(t))| < ∞.
Let us multiply (5) by h(t, ξ(t)) and integrate from 0 to T. The same rationale as in the discrete case yields the estimator

ǎT = −(∫_0^T h(t, ξ(t))dξ(t)) (∫_0^T f(ξ(t))h(t, ξ(t))dt)⁻¹, (6)

coinciding with the LSE if h(t, x) = f(x).
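A discretized version of (6) can be sketched the same way. The Euler scheme below is an illustration only; f(x) = sin x and η a standard Brownian motion (so dm(t) = dt) are assumed for concreteness.

```python
import numpy as np

# Sketch (illustrative assumptions): Euler discretization of (5) with
# f(x) = sin x, eta a standard Brownian motion, and h(t, x) = f(x),
# i.e. the LSE version of estimator (6).
rng = np.random.default_rng(1)

a_true, T, dt = 0.7, 500.0, 0.01
N = int(T / dt)
f = np.sin

xi = np.zeros(N + 1)
dEta = rng.normal(0.0, np.sqrt(dt), size=N)   # increments of eta
for k in range(N):
    xi[k + 1] = xi[k] - a_true * f(xi[k]) * dt + dEta[k]

h_vals = f(xi[:-1])                     # h(t, xi(t)) = f(xi(t))
num = np.sum(h_vals * np.diff(xi))      # discretized integral of h d(xi)
den = np.sum(f(xi[:-1]) * h_vals * dt)  # discretized integral of f*h dt
a_hat = -num / den                      # estimator (6)
```

On the simulated path the recursion holds exactly, so `a_hat` deviates from `a_true` only by the martingale term, which shrinks as T grows.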
Asymptotic normality of √n(Ǎn − A), where Ǎn is the LSE of a matrix parameter A, was proved in [1] under the assumptions of ergodicity and stationarity of (ξn). Convergence in distribution of this normalized deviation was proved in [2] with the use of stochastic calculus. Ergodicity and even stationarity of (εk) were not assumed in [2], so the limiting distribution could be other than normal.
The goal of the article is to choose a sequence (hk) (if time is discrete) or a function h(t, ·) (if time is continuous) so as to minimize the value of some random functional Vn which, as we shall see in Section 3, is asymptotically close in distribution to some numerical characteristic of the estimator (in case the latter is asymptotically normal, this characteristic coincides with the variance).
2. The main results
Discrete time
Denote σ²k = E[ε²k|Fk−1], μk = hk(ξk). Let Lip(C) denote the class of functions satisfying the Lipschitz condition with some constant C and equal to zero at the origin, Lip = ⋃_{C>0} Lip(C), and let H(C) denote the class of all predictable random functions on Z+ × R (discrete time) or R+ × R (continuous time) whose realizations hk(·) (respectively h(t, ·)) belong, as functions of x, to Lip(C), H = ⋃_{C>0} H(C). Predictability means P ⊗ B-measurability in (ω, t, x) (the σ-algebra P is defined in [4, p. 28], [6, p. 13]).
We seek (h̃k) ∈ H minimizing the functional

Vn(h0, . . . , hn−1) = [(1/n) ∑_{k=0}^{n−1} σ²k+1μ²k] / [(1/n) ∑_{k=0}^{n−1} f(ξk)μk]². (7)
Theorem 1. Let

Vn(h̃0, . . . , h̃n−1) = min_{h0,...,hn−1 ∈ H} Vn(h0, . . . , hn−1). (8)

Then

σ²k+1μ̃k ∑_{i=0}^{n−1} f(ξi)μ̃i = f(ξk) ∑_{i=0}^{n−1} σ²i+1μ̃²i, k = 0, . . . , n − 1. (9)
Proof. To obtain the necessary conditions (9) for an extremum of the functional Vn, we vary [3] just one of the functions hk, k = 0, . . . , n − 1, leaving the other functions unchanged. Thus we regard Vn(h0, . . . , hn−1) as a functional depending on only one function: Vn(h0, . . . , hn−1) = Ṽn(hk).

Let us choose some scalar function g ∈ H and denote gλ(x) = h̃k(x) + λ(g(x) − h̃k(x)), v(λ) = Ṽn(gλ).

Obviously gλ ∈ H, so the minimum of v(λ) is attained at zero and therefore

v′(0) = 0. (10)

The expression for the left-hand side is

v′(0) = 2n(g(ξk) − μ̃k) (σ²k+1μ̃k ∑_{i=0}^{n−1} f(ξi)μ̃i − f(ξk) ∑_{i=0}^{n−1} σ²i+1μ̃²i) (∑_{i=0}^{n−1} f(ξi)μ̃i)⁻³.

Hence, in view of (10), we obtain the k-th equation of system (9).

It remains to apply this argument to each function hk, k = 0, . . . , n − 1.
Remark. The Lipschitz condition was not used in the proof. It will be
required in Section 3.
Corollary 1. Let f ∈ Lip(C) and let there exist a constant q > 0 such that σ²k ≥ q for all k. Then hi(x) = f(x)/σ²i+1, i = 0, . . . , n − 1, is a solution to problem (8).
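The optimality in Corollary 1 can be checked numerically: for any realization of the data, the functional (7) evaluated at hk = f/σ²k+1 is no larger than at any other choice, by the Cauchy–Bunyakovsky inequality applied pathwise. A sketch with assumed f(x) = sin x and a deterministic, time-varying noise scale:

```python
import numpy as np

# Sketch (illustrative assumptions): heteroscedastic model (1) with
# f(x) = sin x and deterministic conditional standard deviations
# sigma_{k+1} bounded below by q = 0.5. We compare the functional V_n
# of (7) at the optimal h_k = f / sigma_{k+1}^2 and at the LSE h_k = f.
rng = np.random.default_rng(2)

a_true, n = 0.5, 20_000
f = np.sin
sig = 0.5 + np.abs(np.sin(np.arange(n)))   # sigma_{k+1} >= q = 0.5

xi = np.zeros(n + 1)
for k in range(n):
    xi[k + 1] = a_true * f(xi[k]) + sig[k] * rng.normal()

def V(mu):
    """Functional (7) evaluated at the values mu_k = h_k(xi_k)."""
    G = np.mean(sig**2 * mu**2)
    Q = np.mean(f(xi[:-1]) * mu)
    return G / Q**2

mu_lse = f(xi[:-1])               # h_k = f
mu_opt = f(xi[:-1]) / sig**2      # h_k = f / sigma_{k+1}^2
assert V(mu_opt) <= V(mu_lse)     # holds pathwise, by Cauchy-Bunyakovsky
```

The inequality here is deterministic, not just asymptotic: V(h) ≥ [(1/n)∑ f(ξk)²/σ²k+1]⁻¹ for every h, with equality at the choice of Corollary 1.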
Continuous time
Let m denote the quadratic characteristic of η.
We shall choose h̃ = h̃(ω, t, x) from H(C) (C is independent of t) so as to minimize the value of the functional

VT(h) = [(1/T) ∫_0^T h(t, ξ(t))²dm(t)] / [(1/T) ∫_0^T f(ξ(t))h(t, ξ(t))dt]². (11)
Theorem 2. Let

VT(h̃) = min_{h∈H} VT(h). (12)

Then for all g ∈ H

∫_0^T h̃(t, ξ(t))g(t, ξ(t))dm(t) ∫_0^T f(ξ(t))h̃(t, ξ(t))dt = ∫_0^T f(ξ(t))g(t, ξ(t))dt ∫_0^T h̃(t, ξ(t))²dm(t). (13)
Proof. Let us choose some scalar function g ∈ H and denote gλ(t, x) = h̃(t, x) + λg(t, x), v(λ) = VT(gλ).

Obviously gλ(t, ·) ∈ H, so the minimum of v(λ) is attained at zero and therefore

v′(0) = 0. (14)

The expression for the left-hand side is

v′(0) = 2T (∫_0^T f(ξ(t))h̃(t, ξ(t))dt)⁻³ × (∫_0^T f(ξ(t))h̃(t, ξ(t))dt ∫_0^T h̃(t, ξ(t))g(t, ξ(t))dm(t) − ∫_0^T f(ξ(t))g(t, ξ(t))dt ∫_0^T h̃(t, ξ(t))²dm(t)).

Hence, in view of (14), we come to (13).
Corollary 2. Let f ∈ Lip(C), let m be absolutely continuous w.r.t. the Lebesgue measure, and let there exist a constant q > 0 such that ṁ(t) ≥ q for all t. Then h(t, x) = f(x)/ṁ(t) is a solution to problem (12).
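The same pathwise check works in continuous time: with dm(t) = ṁ(t)dt, the discretized functional (11) at h = f/ṁ is never larger than at any other h, again by the Cauchy–Bunyakovsky inequality. In the sketch below, f = sin and ṁ(t) = 1 + 0.5 sin t are illustrative assumptions:

```python
import numpy as np

# Sketch (illustrative assumptions): Euler scheme for (5) where eta has
# quadratic characteristic m with density mdot(t) = 1 + 0.5*sin(t) >= 0.5,
# simulated via Gaussian increments of variance mdot(t)*dt. Discretized
# functional (11) is compared at h = f/mdot and at h = f.
rng = np.random.default_rng(4)

a_true, T, dt = 0.7, 100.0, 0.01
N = int(T / dt)
t = np.arange(N) * dt
f = np.sin
mdot = 1.0 + 0.5 * np.sin(t)       # density of m, bounded below by q = 0.5

xi = np.zeros(N + 1)
for k in range(N):
    xi[k + 1] = xi[k] - a_true * f(xi[k]) * dt \
        + np.sqrt(mdot[k] * dt) * rng.normal()

def V(h_vals):
    """Discretized functional (11) at the path values h(t, xi(t))."""
    G = np.sum(h_vals**2 * mdot * dt) / T
    Q = np.sum(f(xi[:-1]) * h_vals * dt) / T
    return G / Q**2

h_lse = f(xi[:-1])
h_opt = f(xi[:-1]) / mdot
assert V(h_opt) <= V(h_lse)        # pathwise Cauchy-Bunyakovsky inequality
```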
3. An illustration
Denote E0 = E(· | F0), Qn = (1/n) ∑_{k=0}^{n−1} f(ξk)μk, Gn = (1/n) ∑_{k=1}^{n} σ²kμ²k−1, and introduce the conditions
CP1. For any r ∈ N and any uniformly bounded sequence (αk) of R-valued Borel functions on Rʳ

(1/n) ∑_{k=r}^{n−1} (αk(εk−r+1, . . . , εk) − E0αk(εk−r+1, . . . , εk)) →P 0,

(1/n) ∑_{k=r}^{n−1} (σ²kαk(εk−r+1, . . . , εk) − E0σ²kαk(εk−r+1, . . . , εk)) →P 0.
CP2. For such r and (αk) the sequences

((1/n) ∑_{k=r}^{n−1} E0αk(εk−r+1, . . . , εk), n = r + 1, . . .),

((1/n) ∑_{k=r}^{n−1} E0σ²kαk(εk−r+1, . . . , εk), n = r + 1, . . .)

converge in probability.
Denote f0(x) = x and, for r ≥ 1,
fr(x0, . . . , xr) = af(fr−1(x0, . . . , xr−1)) + xr.
Then
ξk = fr(ξk−r, εk−r+1, . . . , εk), r < k.
Lemma 1. Let conditions (2), (3), CP1 and CP2 be fulfilled. Suppose also that

lim_{N→∞} lim_{n→∞} (1/n) ∑_{k=1}^{n} Eε²k I{|εk| > N} = 0, (15)

that there exists an F0-measurable random variable υ such that for all k

σ²k ≤ υ, (16)

and that there exist positive numbers C, C1 such that

|a|C < 1, (17)

f ∈ Lip(C), (hk) ∈ H(C1). Then

(Gn, Qn) →d (G, Q). (18)
Proof. Denote ξʳk = fr(0, εk−r+1, . . . , εk), μʳk = hk(ξʳk), Qʳn = (1/n) ∑_{k=r}^{n−1} f(ξʳk)μʳk, Gʳn = (1/n) ∑_{k=r}^{n} σ²k(μʳk−1)². We claim that conditions (2), (3), (15), (16), (17) and the relation

(Qʳn, Gʳn) →d (Qʳ, Gʳ) as n → ∞ (19)

imply (18).
Let Xr denote (x1, . . . , xr) ∈ Rʳ. Then, under the assumptions on f and hk, for any N > 0

lim_{r→∞} sup_{|x|≤N, Xr∈Rʳ} |fr(x, Xr) − fr(0, Xr)| = 0,

whence with probability 1 for any k

lim_{r→∞} sup_{|x|≤N, Xr∈Rʳ} |f(fr(x, Xr))hk(fr(x, Xr)) − f(fr(0, Xr))hk(fr(0, Xr))| = 0, (20)

lim_{r→∞} sup_{|x|≤N, Xr∈Rʳ} |hk(fr(x, Xr))² − hk(fr(0, Xr))²| = 0.

These relations were proved in [5].
Let us prove that conditions (2), (3), (15), (16) and (17) imply that almost surely

lim_{r→∞} lim_{n→∞} E0|Qn − Qʳn| = 0,  lim_{r→∞} lim_{n→∞} E0|Gn − Gʳn| = 0. (21)
By (20), for any N > 0

lim_{r→∞} lim_{n→∞} (1/n) ∑_{k=r}^{n−1} E|f(ξk)μk − f(ξʳk)μʳk| I{|ξk| ≤ N} = 0. (22)
Denote χᴺk = I{|ξk| > N}, Iᴺk = I{|εk| > (1 − C)N}, bᴺk = E0|ξk|²χᴺk. Due to (17) and because (hk) ∈ H(C1),

E0|f(ξk)μk|χᴺk ≤ CC1bᴺk.
Hence and from (2), (3), (15)–(17) we get, by Corollary 1 of [5],

lim_{N→∞} lim_{n→∞} (1/n) ∑_{k=0}^{n−1} E0|f(ξk)μk|χᴺk = 0. (23)
Further, for k ≥ r,

E0|f(ξʳk)μʳk| = E0|f(fr(0, εk−r+1, . . . , εk))||hk(fr(0, εk−r+1, . . . , εk))|,

whence

E|f(ξʳk)μʳk|χᴺk ≤ CC1E(∑_{i=0}^{r−1} Cⁱ|εk−i|)²χᴺk. (24)
Writing the Cauchy – Bunyakovsky inequality

(∑_{i=0}^{r−1} Cⁱ|εk−i|)² ≤ ∑_{j=0}^{r−1} Cʲ ∑_{i=0}^{r−1} Cⁱ|εk−i|²,

we get, for an arbitrary L > 0,

E(∑_{i=0}^{r−1} Cⁱ|εk−i|)²χᴺk ≤ (1 − C)⁻¹ (E ∑_{i=0}^{r−1} Cⁱε²k−i I{|εk−i| > L} + L²P{|ξk| > N} ∑_{i=0}^{r−1} Cⁱ). (25)
In view of (2) and (3), Lemma 1 of [5] together with (17) and (15) implies that

lim_{N→∞} lim_{n→∞} (1/n) ∑_{k=0}^{n} P{|ξk| > N} = 0. (26)
Obviously, for arbitrary nonnegative numbers u0, . . . , ur−1, v1, . . . , vn−1,

∑_{k=r}^{n−1} ∑_{i=0}^{r−1} ui vk−i ≤ ∑_{i=0}^{r−1} ui ∑_{j=1}^{n−1} vj,
so conditions (17) and (15) imply that

lim_{L→∞} sup_r lim_{n→∞} (1/n) ∑_{k=r}^{n−1} E ∑_{i=0}^{r−1} Cⁱε²k−i I{|εk−i| > L} = 0,
whence, in view of (24) – (26),

lim_{N→∞} sup_r lim_{n→∞} (1/n) ∑_{k=r}^{n−1} E|f(ξʳk)μʳk|χᴺk = 0.

Combining this with (22) and (23), we arrive at the first relation of (21). The proof of the second relation of (21) is similar.
Condition CP1 implies that

lim_{r→∞} lim_{n→∞} E0|Qʳn − E0Qʳn| = 0,  lim_{r→∞} lim_{n→∞} E0|Gʳn − E0Gʳn| = 0.

Under condition CP2 the sequences (E0Gʳn, n ∈ N) and (E0Qʳn, n ∈ N) converge in probability for any r ∈ N. Thus relation (19) holds.

From (19) and (21) we obtain that the sequence ((Qʳ, Gʳ), r ∈ N) converges in distribution to some limit (Q, G), and relation (18) holds.
By construction, Vn(h0, . . . , hn−1) = GnQn⁻². The value Qn = 0 is excluded by the choice of the tuple (h0, . . . , hn−1) minimizing Vn.

Corollary 3. Let the conditions of Lemma 1 be fulfilled and Q ≠ 0 a.s. Then Vn →d V, where V = GQ⁻².
With the use of stochastic analysis in mind, we introduce the continuous-time processes ǎn(t) = ǎ[nt] and the flows Fn(t) = F[nt].
Theorem 3. Let the conditions of Lemma 1 be fulfilled. Then √n(ǎn(·) − a) →d β(·), where β is a continuous local martingale with initial value 0 and quadratic characteristic

⟨β⟩(t) = tV. (27)
Proof. Denote Yn(t) = (1/√n) ∑_{k=1}^{[nt]} εkμk−1. Then, because of (4),

√n(ǎn(t) − a) = Yn(t)Qn⁻¹. (28)
By construction and conditions (2), (3), (17), Yn is a locally square integrable martingale with quadratic characteristic ⟨Yn⟩(t) = n⁻¹[nt]G[nt].

It was proved in [5] that under conditions (2), (3), (15), (16), (17) and (18), √n(ǎn(·) − a) →d Y(·)Q⁻¹, where Y is a continuous local martingale w.r.t. some flow (F(t), t ∈ R+) such that ⟨Y⟩(t) = tG and the random variable Q is F(0)-measurable (and so is G, as can be seen from the expression for ⟨Y⟩). In view of Lemma 1, it remains to note that Vn = ⟨Yn⟩(1)Qn⁻² and V = ⟨Y⟩(1)Q⁻².
Remark. This theorem explains the form of the functional (7). In the most general case (without conditions CP1 and CP2) the denominator of (28) in the limit is an F(0)-measurable random variable, and the numerator tends to the quadratic characteristic, at the point t = 1, of the continuous local martingale Y. Thus the numerator of (7) is the quadratic characteristic at t = 1 of the pre-limit martingale Yn, and the denominator satisfies the law of large numbers. Minimizing the pre-limit variance over (hk) ∈ H(C1), we lessen the value of the limit variance of the normalized deviation of estimator (4).
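The limit theorem can also be probed by Monte Carlo, at t = 1 and in the simplest setting. With i.i.d. N(0, 1) noise (so σ²k = 1) and hk = f, the limit V reduces to 1/E f(ξ)² (cf. Corollary 4), and the empirical variance of √n(ǎn − a) should be of that size. All concrete choices below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch (illustrative assumptions): model (1) with f = sin,
# i.i.d. N(0, 1) noise (sigma_k^2 = 1) and h_k = f. Then V = 1 / E f(xi)^2,
# and sqrt(n)(a_n - a) should have variance close to V.
rng = np.random.default_rng(3)

a_true, n, reps = 0.5, 1_000, 300
f = np.sin

devs = []
for _ in range(reps):
    xi = np.zeros(n + 1)
    eps = rng.normal(size=n)
    for k in range(n):
        xi[k + 1] = a_true * f(xi[k]) + eps[k]
    fx = f(xi[:-1])
    a_hat = np.sum(xi[1:] * fx) / np.sum(fx * fx)   # estimator (4), LSE form
    devs.append(np.sqrt(n) * (a_hat - a_true))

V_emp = np.var(devs)                # empirical variance of the deviations
V_theory = 1.0 / np.mean(fx * fx)   # estimate of 1 / E f(xi)^2 (last path)
```

With these sample sizes `V_emp` and `V_theory` agree to within Monte Carlo error; the agreement tightens as `n` and `reps` grow.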
Let further hk(x) = f(x)/σ²k+1. Recall that (hk, k = 0, . . . , n − 1) is a solution to problem (8). For such hk we have
Corollary 4. Let the conditions of Corollary 1 and Theorem 3 be fulfilled. Then

V = (lim_{r→∞} l.i.p._{n→∞} (1/n) ∑_{k=r}^{n−1} E0[f(ξʳk)²/σ²k+1])⁻¹.
Proof. Obviously Vn = Qn⁻¹. By Lemma 1, Qn →d Q, where Q = lim_{r→∞} l.i.p._{n→∞} E0Qʳn. To complete the proof it remains to note that

E0Qʳn = (1/n) ∑_{k=r}^{n−1} E0[f(ξʳk)²/σ²k+1].
4. An example
Suppose that f ∈ Lip(C), hk ∈ H(C1) and condition (17) is fulfilled. Let also εn = γnbn(ξn−1), where (γn) is a sequence of independent random variables with zero mean and variances ς²n, |γk| ≤ C2, bn ∈ H(C3) and C + C2C3 < 1. Let also Eξ²0 < ∞. For Fk we take the σ-algebra generated by ξ0, γ1, . . . , γk. Then σ²k = ς²kbk(ξk−1)² and (εn) satisfies (2), (3).
Denote further

f̂r(x0, . . . , xr) = af(f̂r−1(x0, . . . , xr−1)) + xrbr(f̂r−1(x0, . . . , xr−1)),

ξ̂ʳk = f̂r(0, γk−r+1, . . . , γk), μ̂ʳk = hk(ξ̂ʳk), Q̂ʳn = (1/n) ∑_{k=r}^{n−1} f(ξ̂ʳk)μ̂ʳk,
Ĝʳn = (1/n) ∑_{k=r}^{n−1} ς²k+1bk+1(ξ̂ʳk)²(μ̂ʳk)². Similarly to the proof of Lemma 1 we obtain

lim_{r→∞} lim_{n→∞} E0|Gn − Ĝʳn| = 0,  lim_{r→∞} lim_{n→∞} E0|Qn − Q̂ʳn| = 0.
The summands in Ĝʳn and Q̂ʳn are nonrandom functions of γk−r+1, . . . , γk, so they satisfy the law of large numbers in Bernstein's form.

If, besides, (εn) satisfies CP2 and Q ≠ 0 a.s., then Theorem 3 asserts (27).
If, herein, f(x)/(ς²k+1bk+1(x)²) ∈ Lip, then h̃k(x) = f(x)/(ς²k+1bk+1(x)²) is a solution to problem (8), and

V = (lim_{r→∞} l.i.p._{n→∞} (1/n) ∑_{k=r}^{n−1} E0[f(ξ̂ʳk)²/(ς²k+1bk+1(ξ̂ʳk)²)])⁻¹.
Example. Let bn = b, hn = h, and let the γn be i.i.d. random variables. In view of the expressions for Q̂ʳn and Ĝʳn we may confine ourselves to the case αk = α. By the Stone – Weierstrass theorem for σ-compact spaces [7, p. 317], α can be approximated, uniformly on compacta, by finite linear combinations of functions of the kind g1(x1) . . . gr(xr). By the choice of Fk and the assumptions on (γn), E0g1(γk−r+1) . . . gr(γk) = ∏_{i=1}^{r} Egi(γ1), whence condition CP2 follows.
Acknowledgement. The author is grateful to A. Yurachkivsky for helpful advice.
Bibliography

1. Dorogovtsev, A. Ya., Estimation Theory for Parameters of Random Processes (Russian), Kyiv University Press, Kyiv (1982).
2. Yurachkivsky, A. P., Ivanenko, D. O., Matrix parameter estimation in an autoregression model with non-stationary noise (Ukrainian), Theory Probab. Math. Statist. 72 (2005), 158–172.
3. Elsholz, L. E., Differential Equations and the Calculus of Variations (Russian), Nauka, Moscow (1969).
4. Chung, K. L., Williams, R. J., Introduction to Stochastic Integration (Russian translation), Mir, Moscow (1987).
5. Yurachkivsky, A. P., Ivanenko, D. O., Matrix parameter estimation in an autoregression model, Theory of Stochastic Processes 12(28), no. 1-2 (2006), 154–161.
6. Liptser, R. Sh., Shiryaev, A. N., Theory of Martingales (Russian), Nauka, Moscow (1986).
7. Kelley, J., General Topology (Russian translation), Nauka, Moscow (1981).
Department of Mathematics and Theoretical Radiophysics,
Kyiv National Taras Shevchenko University, Kyiv, Ukraine
E-mail address: ida@univ.kiev.ua