Theory of Stochastic Processes
Vol.13 (29), no.1-2, 2007, pp.122-131
ANDRII MALENKO
EFFICIENCY COMPARISON OF TWO CONSISTENT ESTIMATORS IN NONLINEAR REGRESSION MODEL WITH SMALL MEASUREMENT ERRORS
We study a nonlinear measurement model where the response variable has a density belonging to the exponential family. We consider two consistent estimators: the Corrected Score (CS) and the Quasi Score (QS) ones. Their relative efficiency is compared with respect to asymptotic covariance matrices. We derive expansions of these matrices for small error variances. It is shown that the QS estimator is more efficient than the CS one. The polynomial and Poisson regression models are studied in more detail.
1. Introduction
In this paper we consider a general nonlinear regression model with errors in the variables, where the response variable has a density belonging to the exponential family. It is well known that ignoring measurement error leads to inconsistent estimators. We consider two consistent estimators: the Corrected Score (CS) one and the Quasi Score (QS) one.

There are a number of papers dealing with these estimators. Kukush et al. (2006) prove that the asymptotic covariance matrix (ACM) of the QS estimator is not greater than the ACM of the CS one in the Loewner order, and give conditions for strict inequality. In Kukush and Schneeweiss (2005) it is proved that the ACMs are equal up to $O(\sigma_\delta^4)$, where $\sigma_\delta^2$ is the error variance tending to zero. The goal of this paper is to compare the terms of the expansions of order $\sigma_\delta^4$.
We denote by $\mathbf{E}$ the expectation of random variables, vectors, or matrices; $\mathbf{V}$ stands for the variance. The expectation $\mathbf{E} f(z, \beta)$ is taken under the same parameter $\beta$ of the distribution of $z$ as the $\beta$ in the argument of $f$, unless otherwise specified. Derivatives are denoted by subscripts; vector derivatives are column vectors of partial derivatives. The superscript $t$ means transposition; the symmetrization operation $[A]^S := A + A^t$ makes sense for square matrices. We denote convergence in distribution of random vectors by $\xrightarrow{d}$.

2000 Mathematics Subject Classification: 62J10, 62J02, 62J12, 62F12.
Key words and phrases: errors-in-variables models, corrected score, quasi score.
The paper is organized as follows. In the next section the model is described. Section 3 introduces the estimators. In Section 4 we derive expansions of the difference of the ACMs. In Sections 5 and 6 we consider two particular models, and Section 7 concludes.
2. General model
Let $(\Omega, \mathcal{F}, \mathsf{P})$ be a probability space. We study a nonlinear errors-in-variables model, as considered in Kukush and Schneeweiss (2005). Let $\nu$ be a $\sigma$-finite measure on the Borel $\sigma$-field on $\mathbb{R}$. We observe a random variable $y$ with conditional density $f(y|\eta)$ with respect to the measure $\nu$. The density belongs to an exponential family,
\[
f(y|\eta) = \exp\left\{ \frac{y\eta - C(\eta)}{\varphi} + c(y, \varphi) \right\}. \tag{1}
\]
The function $C(\cdot)$ is smooth enough, $C'' > 0$, and $c(y, \varphi)$ is measurable and does not depend on $\eta$. The parameter $\varphi > 0$ is the dispersion parameter of $y$; it is supposed to be known.
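For orientation, recall the standard exponential-family identities that underlie the moment calculations below; differentiating $\int f(y|\eta)\,\nu(dy) = 1$ with respect to $\eta$ once and twice gives
\[
\mathbf{E}(y|\eta) = C'(\eta), \qquad \mathbf{V}(y|\eta) = \varphi\, C''(\eta).
\]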
Assume that $\eta = \eta(\xi, \beta)$, where $\xi$ is a random latent regressor and $\beta$ is an unknown parameter vector. We observe the noisy variable $x = \xi + \delta$, where $\xi$ and $\delta$ are independent; $\delta$ is called the measurement error.

Let, for $i = 1, \dots, n$, the random vectors $(y_i, \xi_i, \delta_i)$ be i.i.d., $\xi_i \sim N(\mu_\xi, \sigma_\xi^2)$, $\delta_i \sim N(0, \sigma_\delta^2)$, where the parameters $\mu_\xi$, $\sigma_\xi$, and $\sigma_\delta$ are known, $\sigma_\xi > 0$, $\sigma_\delta > 0$, and $\xi_i$, $\delta_i$ are independent.

Suppose that $\beta \in \Theta$, where $\Theta$ is a compact set in $\mathbb{R}^k$. The vector $\beta$ is to be estimated from the observations $(y_i, x_i)$, $i = 1, \dots, n$.
Introduce the following smoothness assumptions.

(i) The true value of $\beta$ is an interior point of the set $\Theta$.

(ii) $C(\cdot) \in C^{(6)}(\mathbb{R})$, and there exist constants $A, B > 0$ such that
\[
\forall \xi \in \mathbb{R}\ \forall \beta \in \Theta : \quad \left| C^{(i)}(\eta(\xi, \beta)) \right| \le A \cdot e^{B|\xi|}, \quad i = 1, \dots, 6.
\]

(iii) $\eta(\cdot, \cdot) \in C^{(4,1)}(\mathbb{R} \times \Theta)$, and there exist constants $A, B > 0$ such that
\[
\forall \xi \in \mathbb{R}\ \forall \beta \in \Theta : \quad \left\| \frac{\partial^{i+j}}{\partial \xi^i\, \partial \beta^j}\, \eta(\xi, \beta) \right\| \le A \cdot e^{B|\xi|}, \quad i = 0, \dots, 4,\ j = 0, 1.
\]
3. Estimators
Several consistent estimators of $\beta$ are proposed in the literature, see Carroll et al. (1995). We will consider and compare the Corrected Score (CS) and the Quasi Score (QS) ones.

The Quasi Score method is based on the conditional expectation and conditional variance of the response variable $y$ given $x$:
\[
m(x, \beta) := \mathbf{E}(y|x) = \mathbf{E}[C'(\eta(\xi, \beta))\,|\,x],
\]
\[
v(x, \beta) := \mathbf{V}(y|x) = \mathbf{V}[C'(\eta(\xi, \beta))\,|\,x] + \varphi\, \mathbf{E}[C''(\eta(\xi, \beta))\,|\,x].
\]
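For instance, in the Poisson model of Section 6 ($C(\eta) = e^\eta$, $\eta = \beta_0 + \beta_1\xi$, $\varphi = 1$), the conditional law $\xi|x \sim N(\mu(x), \tau^2)$ computed in Section 4 and the lognormal moment formula $\mathbf{E}[e^{\beta_1\xi}\,|\,x] = e^{\beta_1\mu(x) + \beta_1^2\tau^2/2}$ give the closed forms
\[
m(x, \beta) = \exp\Big( \beta_0 + \beta_1\mu(x) + \tfrac{1}{2}\beta_1^2\tau^2 \Big), \qquad
v(x, \beta) = m(x, \beta) + \big( e^{\beta_1^2\tau^2} - 1 \big)\, m^2(x, \beta).
\]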
The estimator $\hat{\beta}_Q$ is defined as a measurable solution to the equation
\[
\sum_{i=1}^{n} \frac{y_i - m(x_i, \beta)}{v(x_i, \beta)} \cdot m_\beta(x_i, \beta) = 0. \tag{2}
\]
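As an illustration, here is a minimal numerical sketch of solving (2) for the Poisson specification of Section 6; the sample size, parameter values, and helper names are assumptions made for the example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import root

# Minimal sketch of the QS estimator (2) in the Poisson model of Section 6.
# All concrete values below (n, b_true, mu, s2_xi, s2_d) are illustrative.
rng = np.random.default_rng(0)
n, b_true = 5000, np.array([0.3, 0.5])
mu, s2_xi, s2_d = 0.0, 1.0, 0.04              # known nuisance parameters
xi = rng.normal(mu, np.sqrt(s2_xi), n)        # latent regressor
x = xi + rng.normal(0.0, np.sqrt(s2_d), n)    # observed noisy regressor
y = rng.poisson(np.exp(b_true[0] + b_true[1] * xi))

s2_x = s2_xi + s2_d
mu_x = x - (s2_d / s2_x) * (x - mu)           # E(xi | x)
tau2 = s2_d - s2_d**2 / s2_x                  # V(xi | x)

def quasi_score(b):
    # m(x, b) = E(y|x) and v(x, b) = V(y|x) in closed form (lognormal moments)
    m = np.exp(b[0] + b[1] * mu_x + 0.5 * b[1]**2 * tau2)
    v = m + (np.exp(b[1]**2 * tau2) - 1.0) * m**2
    m_b = m * np.vstack([np.ones(n), mu_x + b[1] * tau2])  # dm/db, shape (2, n)
    return (m_b * (y - m) / v).sum(axis=1)

beta_qs = root(quasi_score, x0=np.zeros(2)).x
print(beta_qs)  # close to b_true for small s2_d
```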
We will say that, for a sequence of random variables $\{U_n : n \ge 1\}$, a sequence of statements $A_n(U_n(\omega))$, $\omega \in \Omega$, holds eventually, if
\[
\exists \Omega_0 \subset \Omega,\ \mathsf{P}(\Omega_0) = 1,\ \forall \omega \in \Omega_0\ \exists N = N(\omega)\ \forall n \ge N : A_n(U_n(\omega)) \text{ holds}.
\]
Consider the following assumptions.

(iv) For some $A, B > 0$, $\forall \xi \in \mathbb{R}\ \forall \beta \in \Theta : C''(\eta(\xi, \beta)) \ge A \cdot e^{-B|\xi|}$.

(v) The equation $\mathbf{E}[v^{-1}(m_0 - m)m_\beta] = 0$, $\beta \in \Theta$, has the only solution $\beta = \beta_0$. Here $\beta_0$ is the true value of the parameter $\beta$, $m_0 := m(x, \beta_0)$, $m = m(x, \beta)$, and $v = v(x, \beta)$.

(vi) The matrix $\mathbf{E}\, m_\beta m_\beta^t$ is positive definite at the true point $\beta = \beta_0$.
Theorem 1. Let conditions (i) to (vi) hold true. Then:
a) eventually, equation (2) has a solution $\hat{\beta}_Q \in \Theta$;
b) eventually, the solution to equation (2) is unique;
c) the estimator $\hat{\beta}_Q$ is strictly consistent, i.e., $\hat{\beta}_Q \to \beta$ a.s., as $n \to \infty$.
The theorem is proved in Kukush and Schneeweiss (2005). The next
statement about the asymptotic normality is also proved there.
Theorem 2. Let conditions (i) to (vi) hold. Then $\hat{\beta}_Q$ is asymptotically normal with ACM $\Sigma_Q = \Phi^{-1}$, where
\[
\Phi = \mathbf{E}\, \frac{m_\beta(x, \beta)\, m_\beta^t(x, \beta)}{v(x, \beta)}.
\]
To define the Corrected Score we consider the likelihood score function in the error-free model. Denote
\[
\psi(y, \xi, \beta) = y\,\eta_\beta - C'(\eta)\,\eta_\beta, \tag{3}
\]
where $\eta$ and the derivative $\eta_\beta$ are taken at the point $(\xi, \beta)$. To find the ML estimator of $\beta$ from the observations $(y_i, \xi_i)$, $i = 1, \dots, n$, one should solve the equation
\[
\frac{1}{n} \sum_{i=1}^{n} \psi(y_i, \xi_i, \beta) = 0, \quad \beta \in \Theta.
\]
Consider the limit equation
\[
\mathbf{E}\left[ \big( C'(\eta(\xi, \beta_0)) - C'(\eta(\xi, \beta)) \big)\, \eta_\beta(\xi, \beta) \right] = 0, \quad \beta \in \Theta, \tag{4}
\]
where $\beta_0$ is the true value of the parameter $\beta$.
Assume the following identifiability condition for the error-free model.

(vii) Equation (4) has a unique solution $\beta = \beta_0$.

We introduce the corrected score function $\psi_c(y, x, \beta)$ such that
\[
\mathbf{E}(\psi_c(y, x, b)\,|\,y, \xi) = \psi(y, \xi, b), \quad b \in \Theta.
\]
Denote $f_1(x, \beta) = \eta_\beta(x, \beta)$, $f_2(x, \beta) = C'(\eta(x, \beta))\,\eta_\beta(x, \beta)$. We search for functions $f_{ic}(x, \beta)$, $i = 1, 2$, such that
\[
\mathbf{E}(f_{ic}(x, \beta)\,|\,\xi) = f_i(\xi, \beta), \quad i = 1, 2. \tag{5}
\]
Then $\psi_c(y, x, \beta) = y f_{1c}(x, \beta) - f_{2c}(x, \beta)$.
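For example, in the Poisson model ($C(\eta) = e^\eta$, $\eta = \beta_0 + \beta_1\xi$), using $\mathbf{E}\, e^{\beta_1\delta} = e^{\beta_1^2\sigma_\delta^2/2}$ and $\mathbf{E}\, \delta e^{\beta_1\delta} = \beta_1\sigma_\delta^2 e^{\beta_1^2\sigma_\delta^2/2}$, one can check directly that (5) is solved by
\[
f_{1c}(x, \beta) = \binom{1}{x}, \qquad
f_{2c}(x, \beta) = e^{\beta_0 + \beta_1 x - \beta_1^2\sigma_\delta^2/2} \binom{1}{x - \beta_1\sigma_\delta^2}.
\]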
Suppose that:

(viii) The functions $f_{ic}$ in (5) are defined in a neighborhood of $\Theta$.
(ix) For small enough $\sigma_\delta$ the following relations hold true:
\[
\left\| \frac{\partial^j}{\partial \beta^j} f_{ic} - \left( \frac{\partial^j}{\partial \beta^j} f_i - \frac{1}{2}\sigma_\delta^2\, \frac{\partial^j}{\partial \beta^j} (f_i)_{xx} + \frac{1}{8}\sigma_\delta^4\, \frac{\partial^j}{\partial \beta^j} (f_i)_{x^4} \right) \right\| \le C \cdot e^{A|x|} \sigma_\delta^6,
\]
for $i = 1, 2$ and $j = 0, 1$ and for some fixed $A > 0$, $C = \mathrm{const}$.
The last condition holds true for the polynomial and Poisson models. It is closely related to a series expansion of the solution to deconvolution problems like (5), which is presented in Stefanski (1989).
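Formally, for sufficiently smooth $f(\cdot, \beta)$ the solution of (5) is given by the series
\[
f_c(x, \beta) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\frac{\sigma_\delta^2}{2} \right)^k \frac{\partial^{2k}}{\partial x^{2k}} f(x, \beta),
\]
since $\mathbf{E}[g(\xi + \delta)\,|\,\xi] = \sum_{k \ge 0} \frac{(\sigma_\delta^2/2)^k}{k!}\, g^{(2k)}(\xi)$ for $\delta \sim N(0, \sigma_\delta^2)$; its first three terms are exactly those appearing in (ix).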
The Corrected Score estimator $\hat{\beta}_C$ is defined as a solution to the equation
\[
\frac{1}{n} \sum_{i=1}^{n} \psi_c(y_i, x_i, \beta) = 0, \quad \beta \in \Theta.
\]
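Analogously to the QS sketch above, here is a sketch of the CS estimator in the Poisson model, using the corrected functions $f_{1c}$, $f_{2c}$ displayed after (5); again, all concrete values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import root

# Sketch of the CS estimator in the Poisson model: psi_c(y, x, b) = y*f1c - f2c
# with f1c = (1, x)^t and f2c = exp(b0 + b1*x - b1^2*s2_d/2) * (1, x - b1*s2_d)^t.
rng = np.random.default_rng(1)
n, b_true = 5000, np.array([0.3, 0.5])        # illustrative values
mu, s2_xi, s2_d = 0.0, 1.0, 0.04
xi = rng.normal(mu, np.sqrt(s2_xi), n)
x = xi + rng.normal(0.0, np.sqrt(s2_d), n)
y = rng.poisson(np.exp(b_true[0] + b_true[1] * xi))

def corrected_score(b):
    f1c = np.vstack([np.ones(n), x])
    g = np.exp(b[0] + b[1] * x - 0.5 * b[1]**2 * s2_d)
    f2c = g * np.vstack([np.ones(n), x - b[1] * s2_d])
    return (y * f1c - f2c).sum(axis=1)

beta_cs = root(corrected_score, x0=np.zeros(2)).x
print(beta_cs)  # consistent despite the measurement error
```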
As $n \to \infty$, the left-hand side of this equation converges a.s. to $\mathbf{E}\,\psi_c(y, x, \beta) = \mathbf{E}\,\psi(y, \xi, \beta)$, so in the limit we obtain exactly equation (4).

Asymptotic properties of $\hat{\beta}_C$ are studied in Kukush and Schneeweiss (2005). Under conditions (vii) to (ix), $\hat{\beta}_C$ is strictly consistent and asymptotically normal, and its ACM is given by the sandwich formula
\[
\Sigma_C = A_c^{-1} \cdot B_c \cdot A_c^{-1},
\]
where the matrices $A_c$ and $B_c$ are
\[
A_c = \mathbf{E}\, C''(\eta)\, \eta_\beta \eta_\beta^t, \quad \eta = \eta(\xi, \beta), \qquad
B_c = \mathbf{E}\, \psi_c(y, x, \beta)\, \psi_c^t(y, x, \beta).
\]
4. Approximation of $\Sigma_C$ and $\Sigma_Q$
The reader can find an exact comparison of $\Sigma_Q$ and $\Sigma_C$ in Kukush et al. (2006). In Kukush and Schneeweiss (2005) it is proved that under conditions (i) to (ix),
\[
\Sigma_Q - \Sigma_C = O(\sigma_\delta^4), \quad \text{as } \sigma_\delta^2 \to 0.
\]
That is, for small $\sigma_\delta^2$ the asymptotic efficiencies of these estimators are equal up to $O(\sigma_\delta^4)$.

Under stronger conditions on $C(\eta)$ and $\eta(\xi, \beta)$, we can find further terms of the expansions of $\Sigma_Q$ and $\Sigma_C$.
Theorem 3. Let conditions (i) to (ix) hold, and let the following condition hold as well.

(x) The matrix $S = \mathbf{E}\, C'' \eta_\beta \eta_\beta^t$ is positive definite.

Then, as $\sigma_\delta^2 \to 0$, we have
\[
\Sigma_C - \Sigma_Q = \sigma_\delta^4\, S^{-1} \cdot \Delta \cdot S^{-1} + O(\sigma_\delta^6), \tag{6}
\]
where
\[
\begin{aligned}
\varphi\Delta ={}& \mathbf{E}\, C''^3 \eta_x^4 \eta_\beta \eta_\beta^t
- \mathbf{E}[C''^2 \eta_x^2 \eta_\beta \eta_\beta^t]\, S^{-1}\, \mathbf{E}[C''^2 \eta_x^2 \eta_\beta \eta_\beta^t] \\
&+ \varphi \Big( \mathbf{E}\Big[ \frac{1}{\sigma_x^2}\, C''^2 \eta_x^2 \eta_\beta \eta_\beta^t
+ C^{(3)2} \eta_x^4 \eta_\beta \eta_\beta^t
+ 2 C'' C^{(3)} \eta_x^3 [\eta_{x\beta} \eta_\beta^t]^S \\
&\qquad\quad + 3 C''^2 \eta_x^2 \eta_{x\beta} \eta_{x\beta}^t
+ 2 C'' C^{(3)} \eta_x^2 \eta_{xx} \eta_\beta \eta_\beta^t
+ 3 C''^2 \eta_x \eta_{xx} [\eta_{x\beta} \eta_\beta^t]^S
+ C''^2 \eta_{xx}^2 \eta_\beta \eta_\beta^t \Big] \\
&\qquad - \mathbf{E}[C''^2 \eta_x^2 \eta_\beta \eta_\beta^t]\, S^{-1}\, \mathbf{E}[C'' \eta_{x\beta} \eta_{x\beta}^t]
- \mathbf{E}[C'' \eta_{x\beta} \eta_{x\beta}^t]\, S^{-1}\, \mathbf{E}[C''^2 \eta_x^2 \eta_\beta \eta_\beta^t] \Big) \\
&+ \varphi^2 \Big( \mathbf{E}\Big[ \frac{1}{\sigma_x^2}\, C'' \eta_{x\beta} \eta_{x\beta}^t
- C^{(3)} \eta_{xx} \eta_{x\beta} \eta_{x\beta}^t
+ C'' \eta_{xx\beta} \eta_{xx\beta}^t
+ \Big( \frac{C^{(3)2}}{C''} - C^{(4)} \Big) \eta_x^2 \eta_{x\beta} \eta_{x\beta}^t \Big] \\
&\qquad - \mathbf{E}[C'' \eta_{x\beta} \eta_{x\beta}^t]\, S^{-1}\, \mathbf{E}[C'' \eta_{x\beta} \eta_{x\beta}^t] \Big).
\end{aligned} \tag{7}
\]
Here we evaluate the function $\eta$ and its derivatives at the point $(x, \beta)$, and the function $C$ and its derivatives at the point $\eta(x, \beta)$, where $\beta$ is the true parameter.
Proof. The idea of the proof is to approximate each ACM by summands of similar structure, namely, expectations of products of the functions $C$, $\eta$, and their derivatives at the point $(x, \beta)$.
4.1°. Approximation of $\Sigma_Q$. We approximate the functions $m(x, \beta)$ and $v(x, \beta)$. Note that they can be expressed in terms of summands of the form $\mathbf{E}[f(\xi, \beta)|x]$, where the function $f(\xi, \beta)$ and its derivatives are bounded by $Ce^{B|\xi|}$ uniformly for all $\beta \in \Theta$ because of conditions (ii) and (iii).

Since $\xi|x \sim N(\mu(x), \tau^2)$, where $\mu(x) = x - \frac{\sigma_\delta^2}{\sigma_x^2}(x - \mu)$ and $\tau^2 = \sigma_\delta^2 - \frac{\sigma_\delta^4}{\sigma_x^2}$ (throughout, $\mu := \mu_\xi$ and $\sigma_x^2 := \sigma_\xi^2 + \sigma_\delta^2$ is the variance of $x$), we have, for $\gamma \sim N(0, 1)$ independent of $x$,
\[
\mathbf{E}[f(\xi, \beta)|x] = \mathbf{E}[f(\mu(x) + \tau\gamma, \beta)|x] = \mathbf{E} f(\mu(t) + \tau\gamma, \beta)\Big|_{t=x}.
\]
Denote $\alpha = (x - \mu)\sigma_x^{-2}$. Expanding the function $f(\mu(x) + \tau\gamma, \beta)$ into a Taylor series around the point $x$ and taking the expectation w.r.t. $\gamma$, we have:
\[
\begin{aligned}
\mathbf{E}[f(\mu(x) + \tau\gamma, \beta)|x] ={}& f(x, \beta) - \alpha\sigma_\delta^2 f_x(x, \beta)
+ \frac{\tau^2 + \alpha^2\sigma_\delta^4}{2} f_{xx}(x, \beta) \\
&- \frac{\alpha\sigma_\delta^4}{2} f_{xxx}(x, \beta) + \frac{\sigma_\delta^4}{8} f_{x^4}(x, \beta) + r(x, \beta, \sigma_\delta),
\end{aligned} \tag{8}
\]
and there exists a constant $A$ such that for all $\beta \in \Theta$ and for small enough $\sigma_\delta^2$, $\mathbf{E}|r(x, \beta, \sigma_\delta)| \le A\sigma_\delta^6$. Here the expectation is taken w.r.t. $x \sim N(\mu, \sigma_x^2)$.
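To see where the coefficients in (8) come from, note that $\mathbf{E}\gamma = \mathbf{E}\gamma^3 = 0$, $\mathbf{E}\gamma^2 = 1$, $\mathbf{E}\gamma^4 = 3$, and $\tau^4 = \sigma_\delta^4 + O(\sigma_\delta^6)$. Expanding first in $\tau\gamma$,
\[
\mathbf{E}[f(\mu(x) + \tau\gamma, \beta)|x] = f(\mu(x)) + \frac{\tau^2}{2} f_{xx}(\mu(x)) + \frac{3\tau^4}{4!} f_{x^4}(\mu(x)) + \dots,
\]
and then expanding $\mu(x) = x - \alpha\sigma_\delta^2$ around $x$, e.g. $f(\mu(x)) = f(x) - \alpha\sigma_\delta^2 f_x(x) + \frac{\alpha^2\sigma_\delta^4}{2} f_{xx}(x) + \dots$ and $\frac{\tau^2}{2} f_{xx}(\mu(x)) = \frac{\tau^2}{2} f_{xx}(x) - \frac{\alpha\sigma_\delta^4}{2} f_{xxx}(x) + O(\sigma_\delta^6)$, and collecting terms yields exactly (8).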
To approximate $m(x, \beta)$ we use (8) with $f(x, \beta) = C'(\eta)$, $\eta = \eta(x, \beta)$. We rewrite $v(x, \beta) = A_1(x, \beta) + \varphi A_2(x, \beta)$, where
\[
A_1(x, \beta) = \mathbf{E}[C'^2(\eta(\xi, \beta))|x] - m^2(x, \beta), \qquad
A_2(x, \beta) = \mathbf{E}[C''(\eta(\xi, \beta))|x].
\]
We use (8) with $f(x, \beta) = C'^2(\eta)$ and $f(x, \beta) = C''(\eta)$, respectively.

Because of (iv), the random variable $v^{-1}(x, \beta)$ is well-defined and bounded from above by $\mathrm{const} \cdot e^{B|x|}$ uniformly in $\beta \in \Theta$. The random matrix $m_\beta(x, \beta)\, m_\beta^t(x, \beta)$ is also majorized by $\mathrm{const} \cdot e^{B|x|}$ uniformly in $\beta$.

In the approximation of $\Phi$ we have summands of the form $\mathbf{E}\,\alpha^k h(x, \beta)$, $k = 1, 2$, where the function $h(x, \beta)$ satisfies conditions (ii) and (iii). To transform these summands we use the partial integration formulas:
\[
\mathbf{E}\,\alpha h(x, \beta) = \mathbf{E}\, h_x(x, \beta), \qquad
\mathbf{E}\,\alpha^2 h(x, \beta) = \sigma_x^{-2}\, \mathbf{E}\, h(x, \beta) + \mathbf{E}\, h_{xx}(x, \beta).
\]
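Both formulas are instances of Stein's identity $\mathbf{E}\,(x - \mu)h(x) = \sigma_x^2\, \mathbf{E}\, h_x(x)$ for $x \sim N(\mu, \sigma_x^2)$. Since $\alpha = (x - \mu)\sigma_x^{-2}$, the first formula is immediate; applying the identity to $(x - \mu)h(x)$ gives
\[
\mathbf{E}\,(x - \mu)^2 h(x) = \sigma_x^2\, \mathbf{E}\big[ h(x) + (x - \mu)h_x(x) \big] = \sigma_x^2\, \mathbf{E}\, h(x) + \sigma_x^4\, \mathbf{E}\, h_{xx}(x),
\]
which is the second formula after division by $\sigma_x^4$.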
Summarizing, we have:
\[
\Phi = \varphi^{-1} S - \frac{\sigma_\delta^2}{2}\, Q + \frac{\sigma_\delta^4}{8}\, T + O(\sigma_\delta^6), \quad \text{as } \sigma_\delta^2 \to 0.
\]
To invert $\Phi$ we use the following expansion: as $\delta \to 0$,
\[
(A - \delta B + \delta^2 C)^{-1} = A^{-1} + \delta A^{-1} B A^{-1} + \delta^2 A^{-1}(B A^{-1} B - C) A^{-1} + O(\delta^3), \tag{9}
\]
which holds true for all square matrices $A$, $B$, $C$ of the same size, where $A$ is nonsingular.
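A quick numerical sanity check of (9) on random matrices (a sketch; the size and the perturbation scale are arbitrary):

```python
import numpy as np

# Verify that the inverse of A - d*B + d^2*C matches expansion (9) to O(d^3).
rng = np.random.default_rng(2)
k, d = 4, 1e-3
A = rng.normal(size=(k, k)) + k * np.eye(k)   # keep A nonsingular
B, C = rng.normal(size=(k, k)), rng.normal(size=(k, k))

exact = np.linalg.inv(A - d * B + d**2 * C)
Ai = np.linalg.inv(A)
approx = Ai + d * Ai @ B @ Ai + d**2 * Ai @ (B @ Ai @ B - C) @ Ai
print(np.max(np.abs(exact - approx)))         # of order d^3
```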
Based on (x), we apply (9) with $A = \varphi^{-1} S$, $B = \frac{1}{2} Q$, $C = \frac{1}{8} T$, $\delta = \sigma_\delta^2$. We have
\[
\Sigma_Q = \varphi S^{-1} + \frac{\sigma_\delta^2}{2}\, \varphi^2 S^{-1} Q S^{-1} + \frac{\sigma_\delta^4}{8}\, \varphi^2 S^{-1}(2\varphi\, Q S^{-1} Q - T) S^{-1} + O(\sigma_\delta^6).
\]
4.2°. Approximation of $\Sigma_C$. To expand $A_c$ we use the following general result. Let the function $g(x, \beta)$ satisfy the condition
\[
\left| \frac{\partial^i}{\partial x^i}\, g(x, \beta) \right| \le \mathrm{const} \cdot e^{C_2|x|}, \quad i = 0, \dots, 6,
\]
with some positive constant $C_2$, which may depend on $\beta$; let $\xi \sim N(\mu, \sigma_\xi^2)$ and $\delta \sim N(0, \sigma_\delta^2)$ be independent, $x = \xi + \delta$. Then
\[
\mathbf{E}\, g(\xi, \beta) = \mathbf{E}\, g(x, \beta) - \frac{\sigma_\delta^2}{2}\, \mathbf{E}\, g_{xx}(x, \beta) + \frac{\sigma_\delta^4}{8}\, \mathbf{E}\, g_{x^4}(x, \beta) + O(\sigma_\delta^6), \quad \text{as } \sigma_\delta \to 0.
\]
We set $g(x, \beta) = C''(\eta(x, \beta))\, \eta_\beta(x, \beta)\, \eta_\beta^t(x, \beta)$. Then
\[
A_c = S - \frac{1}{2}\sigma_\delta^2 A_{c2} + \frac{1}{8}\sigma_\delta^4 A_{c4} + O(\sigma_\delta^6),
\]
where
\[
A_{c2} = \mathbf{E}\,(C'' \eta_\beta \eta_\beta^t)_{xx}(x, \beta), \qquad
A_{c4} = \mathbf{E}\,(C'' \eta_\beta \eta_\beta^t)_{x^4}(x, \beta).
\]
We apply (9):
\[
A_c^{-1} = S^{-1} + \frac{1}{2}\sigma_\delta^2\, S^{-1} A_{c2} S^{-1} + \frac{1}{8}\sigma_\delta^4\, S^{-1}(2 A_{c2} S^{-1} A_{c2} - A_{c4}) S^{-1} + O(\sigma_\delta^6).
\]
To approximate the matrix $B_c$ we use condition (ix):
\[
\psi_c(y, x, \beta) \approx y f_1 - f_2 - \frac{1}{2}\sigma_\delta^2\, (y f_1 - f_2)_{xx} + \frac{1}{8}\sigma_\delta^4\, (y f_1 - f_2)_{x^4}.
\]
The remainder in the last approximate equality is bounded by $\mathrm{const} \cdot (|y| + 1)\, e^{A|x|} \sigma_\delta^6$.
Since $y f_1 - f_2 = (y - C'(\eta))\,\eta_\beta$, the approximation of $\psi_c \psi_c^t$ contains terms with factors $(y - C'(\eta))^k$, $k = 1, 2$. We get rid of them and finally obtain
\[
B_c = \varphi S - \frac{1}{2}\sigma_\delta^2 B_{c2} + \frac{1}{8}\sigma_\delta^4 B_{c4} + O(\sigma_\delta^6).
\]
We have
\[
\begin{aligned}
\Sigma_C = \varphi S^{-1} &+ \frac{1}{2}\sigma_\delta^2\, S^{-1}(2\varphi A_{c2} - B_{c2}) S^{-1} \\
&+ \frac{1}{8}\sigma_\delta^4\, S^{-1}\big( B_{c4} - 2\varphi A_{c4} + (6\varphi A_{c2} - 2 B_{c2}) S^{-1} A_{c2} - 2 A_{c2} S^{-1} B_{c2} \big) S^{-1} + O(\sigma_\delta^6).
\end{aligned}
\]
Finally, we write the difference between $\Sigma_C$ and $\Sigma_Q$ and simplify it. □
Lemma. Let $F$ and $G$ be two random matrices of the same size such that $\mathbf{E}\, G G^t$ is positive definite. Then $\mathbf{E}\, F F^t - \mathbf{E}\, F G^t\, (\mathbf{E}\, G G^t)^{-1}\, \mathbf{E}\, G F^t$ is a positive semidefinite matrix. Moreover, it is the zero matrix if, and only if, $F = H G$ a.s., where $H = \mathbf{E}\, F G^t\, (\mathbf{E}\, G G^t)^{-1}$.

Proof. Consider the matrix $A = F - H G$. The matrix $A A^t$ is positive semidefinite a.s. Its expectation equals
\[
\mathbf{E}\,(F - HG)(F - HG)^t = \mathbf{E}\, F F^t - H\, \mathbf{E}\, G F^t - \mathbf{E}\, F G^t\, H^t + H\, \mathbf{E}\, G G^t\, H^t \ge 0.
\]
After substitution of $H$ we have the lemma proved. □
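The Lemma is a matrix Cauchy-Schwarz inequality; a small Monte Carlo sketch (with arbitrarily chosen $F$ and $G$) illustrating the positive semidefiniteness:

```python
import numpy as np

# Empirical check of the Lemma: E[FF^t] - E[FG^t](E[GG^t])^{-1} E[GF^t] >= 0.
rng = np.random.default_rng(3)
n = 200_000
Z = rng.normal(size=(n, 2))
F, G = Z**3, Z                     # rows are realizations of random vectors
EFF = F.T @ F / n
EFG = F.T @ G / n
EGG = G.T @ G / n
M = EFF - EFG @ np.linalg.solve(EGG, EFG.T)
print(np.linalg.eigvalsh(M))       # eigenvalues >= 0 up to Monte Carlo error
```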
We rewrite $\Delta$ from (7) in the form
\[
\varphi\Delta = \mathbf{E}\, F F^t - \mathbf{E}\, F G^t \left( \mathbf{E}\, G G^t \right)^{-1} \mathbf{E}\, G F^t + \varphi L + \varphi^2 M,
\]
where
\[
F = (C'')^{3/2}\, \eta_x^2\, \eta_\beta, \qquad G = (C'')^{1/2}\, \eta_\beta.
\]
Here $C'' = C''(\eta)$ and $\eta = \eta(x, \beta)$. By the Lemma, the constant ($\varphi$-free) term of the matrix polynomial $\varphi\Delta$ (it is a polynomial w.r.t. $\varphi$) is positive semidefinite. It can be zero if, and only if,
\[
\left( C'' \eta_x^2 \cdot I - \mathbf{E}\left[ C''^2 \eta_x^2 \eta_\beta \eta_\beta^t \right] \cdot S^{-1} \right) \eta_\beta = 0 \quad \text{a.s.}, \tag{10}
\]
where $S$ comes from condition (x), and $I$ is the identity matrix. Thus
\[
\lim_{\varphi \to 0+} \left[ \varphi \lim_{\sigma_\delta^2 \to 0+} \sigma_\delta^{-4}\, (\Sigma_C - \Sigma_Q) \right]
\]
is a positive semidefinite matrix. It is zero iff condition (10) holds.
5. Polynomial model
The polynomial measurement error model has the form
\[
\begin{cases}
y_i = \beta_0 + \beta_1 \xi_i + \dots + \beta_m \xi_i^m + \varepsilon_i, \\
x_i = \xi_i + \delta_i,
\end{cases}
\qquad i = 1, \dots, n. \tag{11}
\]
Here $m \ge 1$, the $\varepsilon_i$ are i.i.d., $\varepsilon_i \sim N(0, \sigma_\varepsilon^2)$, the $\varepsilon_i$ are independent of $\xi_i$ and $\delta_i$, and $\{\xi_i\}$, $\{\delta_i\}$ are the same as in Section 2.

The model (11) belongs to the exponential family (1) with the functions $C(\eta) = \eta^2/2$, $\eta(\xi, \beta) = \beta_0 + \beta_1 \xi + \dots + \beta_m \xi^m$, and $\varphi = \sigma_\varepsilon^2$. The unknown parameter is $\beta = (\beta_0, \dots, \beta_m)^t$.

Conditions (ii) to (iv) are fulfilled. Conditions (v) to (ix) are explained in Kukush and Schneeweiss (2005). The matrix $S$ from condition (x) is the Gram matrix of the random vector $\zeta(x) = (1, x, \dots, x^m)^t$, $S = \mathbf{E}\, \zeta\zeta^t$, and therefore it is positive definite.
We apply Theorem 3 and have
\[
\begin{aligned}
\varphi\Delta ={}& \mathbf{E}\, \eta_\xi^4 K
+ \varphi\, \mathbf{E}\big( \sigma_x^{-2} \eta_\xi^2 K + 3\eta_\xi^2 K_1 + 3\eta_\xi \eta_{\xi\xi} K_s + \eta_{\xi\xi}^2 K \big) \\
&+ \varphi^2\, \mathbf{E}\big( \sigma_x^{-2} K_1 + K_2 \big)
- \mathbf{E}\big( \eta_\xi^2 K + \varphi K_1 \big) \cdot S^{-1} \cdot \mathbf{E}\big( \eta_\xi^2 K + \varphi K_1 \big),
\end{aligned}
\]
\[
K := \zeta\zeta^t, \quad K_1 := \zeta'\zeta'^t, \quad K_2 := \zeta''\zeta''^t, \quad K_s := [\zeta'\zeta^t]^S.
\]
The leading term of $\varphi\Delta$ (the coefficient of $\varphi^2$) is zero, i.e.,
\[
\mathbf{E}\big( \sigma_x^{-2} K_1 + K_2 \big) = \mathbf{E} K_1 \cdot S^{-1} \cdot \mathbf{E} K_1. \tag{12}
\]
To prove this fact we use the orthonormal Hermite polynomials
\[
h_i(x) = \frac{(-\sigma_x)^i}{\sqrt{i!}} \exp\left\{ \frac{(x - \mu)^2}{2\sigma_x^2} \right\} \frac{d^i}{dx^i} \exp\left\{ -\frac{(x - \mu)^2}{2\sigma_x^2} \right\}, \qquad \mathbf{E}\, h_i(x) h_j(x) = \delta_{ij}.
\]
Denote $h = (h_0(x), \dots, h_m(x))^t$. Then there exists a lower triangular nonsingular matrix $B$ such that $\zeta = Bh$. We have $h_i'(x) = h_{i-1}(x)\sqrt{i}/\sigma_x$, $i \ge 1$; therefore $h' = Dh$, where the matrix $D$ has zero components except $d_{i,i-1} = \sqrt{i}/\sigma_x$. We substitute into (12) the following expressions:
\[
\zeta' = B \cdot D \cdot h, \qquad \zeta'' = B \cdot D^2 \cdot h, \qquad \mathbf{E}\, h h^t = I.
\]
Then (12) holds due to the equality $D D^t + \sigma_x^2\, D^2 D^{2t} = \sigma_x^2\, D D^t D D^t$.
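The key matrix identity can also be checked numerically for a given degree $m$ (a sketch; $m$ and $\sigma_x$ are arbitrary):

```python
import numpy as np

# Check D D^t + s_x^2 * D^2 (D^2)^t = s_x^2 * (D D^t)(D D^t)
# for the shift matrix D with entries d_{i,i-1} = sqrt(i)/s_x.
m, sx = 5, 1.7
D = np.zeros((m + 1, m + 1))
for i in range(1, m + 1):
    D[i, i - 1] = np.sqrt(i) / sx
lhs = D @ D.T + sx**2 * (D @ D) @ (D @ D).T
rhs = sx**2 * (D @ D.T) @ (D @ D.T)
print(np.max(np.abs(lhs - rhs)))   # 0 up to floating point
```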
Thus $\varphi\Delta$ is linear in $\varphi$, i.e., $\varphi\Delta = A + \varphi B$. By the Lemma we have $A \ge 0$. Moreover, for $\beta_m \ne 0$ one can easily prove that $A$ is positive definite. Next, it was proved in Kukush et al. (2006) that $\Sigma_C \ge \Sigma_Q$; thus $B$ is positive semidefinite. So we can summarize that, for $\beta_m \ne 0$,
\[
\lim_{\sigma_\delta^2 \to 0+} \sigma_\delta^{-4}\, (\Sigma_C - \Sigma_Q)
\]
is a positive definite matrix. Thus QS is more efficient than CS for small measurement error variance.
6. Poisson measurement error model
In the Poisson model the conditional distribution of $y$ given $\eta$ belongs to the exponential family (1) with the functions $C(\eta) = e^\eta$ and $\eta(\xi, \beta) = \beta_0 + \beta_1 \xi$, and the constant $\varphi = 1$. Here $\nu$ is the counting measure, $\nu(A) = \#(A \cap \{0, 1, 2, \dots\})$, $A \in \mathcal{B}(\mathbb{R})$. The unknown parameter is $\beta = (\beta_0, \beta_1)^t$.

It is easy to check that conditions (ii), (iii), (iv), (vi), and (vii) hold true. The matrix $S = \mathbf{E}[e^\eta \eta_\beta \eta_\beta^t]$ is positive definite.

The matrix $\Delta$ is equal to
\[
\Delta = \beta_1^2 \sigma_x^{-2}\, \mathbf{E}\, e^{2\eta} \eta_\beta \eta_\beta^t + \beta_1^4 A_1 + A_2,
\]
where $A_1$ is positive semidefinite and $A_2$ is positive definite for $\beta_1 \ne 0$. Then, for $\beta_1 \ne 0$, the difference $\Sigma_C - \Sigma_Q$ is positive definite for small $\sigma_\delta^2$, and the first positive definite term of the expansion of this difference is the term of order $\sigma_\delta^4$. For $\beta_1 = 0$ we have $\Delta = 0$ and $\Sigma_C = \Sigma_Q + O(\sigma_\delta^6)$.
7. Conclusions
In this paper we considered a nonlinear regression model with normal measurement errors and compared the efficiency of two consistent estimators of the unknown parameter. All nuisance parameters, that is, the measurement error variance $\sigma_\delta^2$, the parameters $\mu_\xi$ and $\sigma_\xi^2$ of the distribution of the latent variable, and the dispersion parameter $\varphi$, were supposed to be known.

We considered two consistent estimators, the Quasi Score (QS) and the Corrected Score (CS) ones. We found expansions of their ACMs up to $O(\sigma_\delta^6)$ and proved that in the polynomial and Poisson regression models the difference between the ACMs of CS and QS is positive definite for small measurement errors. Kukush and Schneeweiss (2005) proved that choosing the CS estimator instead of the QS one results in a negligible loss of efficiency (up to the order $O(\sigma_\delta^4)$). In this paper we showed that QS is more efficient than CS when the comparison is carried to the order $O(\sigma_\delta^6)$. This result can be useful for the selection of an estimator if one knows a priori that the measurement error variance is small.

The author is grateful to Prof. A. Kukush for the problem statement and discussions.
Bibliography

1. Carroll, R. J., Ruppert, D., and Stefanski, L. A. (1995). Measurement Error in Nonlinear Models. Chapman and Hall, London.
2. Kukush, A., and Schneeweiss, H. (2005). Comparing different estimators in a nonlinear measurement error model. I. Mathematical Methods of Statistics, 14, 53-79.
3. Kukush, A., Malenko, A., and Schneeweiss, H. (2006). Optimality of the quasi-score-like estimator in a mean-variance model. Discussion Paper 384, SFB 386, University of Munich.
4. Stefanski, L. A. (1989). Unbiased estimation of a nonlinear function of a normal mean with application to measurement error models. Communications in Statistics, Series A, 18, 4335-4358.
Department of Probability Theory and Mathematical Statistics,
Kyiv National Taras Shevchenko University, Kyiv, Ukraine