Comparing the efficiency of estimates in concrete errors-in-variables models under unknown nuisance parameters

We consider a regression of y on x given by a pair of mean and variance functions with a parameter vector θ to be estimated that also appears in the distribution of the regressor variable x. The estimation of θ is based on an extended quasi score (QS) function. Of special interest is the case where the distribution of x depends only on a subvector α of θ, which may be considered a nuisance parameter. A major application of this model is the classical measurement error model, where the corrected score (CS) estimator is an alternative to the QS estimator. Under unknown nuisance parameters we derive conditions under which the QS estimator is strictly more efficient than the CS estimator. We focus on the loglinear Poisson, the Gamma, and the logit model.


Bibliographic Details
Date: 2007
Authors: Kukush, A., Malenko, A., Schneeweiss, H.
Format: Article
Language: English
Published: Institute of Mathematics of the NAS of Ukraine, 2007
Online Access: http://dspace.nbuv.gov.ua/handle/123456789/4514
Journal Title: Digital Library of Periodicals of National Academy of Sciences of Ukraine
ISSN: 0321-3900
Cite as: Comparing the efficiency of estimates in concrete errors-in-variables models under unknown nuisance parameters / A. Kukush, A. Malenko, H. Schneeweiss // Theory of Stochastic Processes. — 2007. — Vol. 13 (29), No. 4. — P. 69–81. — Bibliogr.: 11 titles. — English.

Full text:

Theory of Stochastic Processes, Vol. 13 (29), no. 4, 2007, pp. 69–81

ALEXANDER KUKUSH, ANDRII MALENKO, AND HANS SCHNEEWEISS

COMPARING THE EFFICIENCY OF ESTIMATES IN CONCRETE ERRORS-IN-VARIABLES MODELS UNDER UNKNOWN NUISANCE PARAMETERS

We consider a regression of y on x given by a pair of mean and variance functions with a parameter vector θ to be estimated that also appears in the distribution of the regressor variable x. The estimation of θ is based on an extended quasi score (QS) function. Of special interest is the case where the distribution of x depends only on a subvector α of θ, which may be considered a nuisance parameter. A major application of this model is the classical measurement error model, where the corrected score (CS) estimator is an alternative to the QS estimator. Under unknown nuisance parameters we derive conditions under which the QS estimator is strictly more efficient than the CS estimator. We focus on the loglinear Poisson, the Gamma, and the logit model.

Alexander Kukush is supported by the Swedish Institute grant SI-01424/2007. Invited lecture.
2000 Mathematics Subject Classifications: 62J05, 62J12, 62F12, 62F10, 62H12, 62J10.
Key words and phrases: mean-variance model, measurement error model, quasi score estimator, corrected score estimator, nuisance parameter, optimality property.

1. Introduction

Suppose that the relation between a response variable y and a covariate (or regressor) x is given by a pair of conditional mean and variance functions:

E(y|x) =: m(x, θ),   V(y|x) =: v(x, θ).   (1)

Here θ is an unknown d-dimensional parameter vector to be estimated. The parameter θ belongs to an open parameter set Θ. The variable x has a density ρ(x, θ) with respect to a σ-finite measure ν on a Borel σ-field on the real line. We assume that v(x, θ) > 0 for all x and θ and that all the functions are sufficiently smooth. Such a model is called a mean-variance model, cf. Carroll et al. (1995). We want to estimate θ on the basis of an i.i.d. sample (x_i, y_i), i = 1, ..., n.

General statements and results on the polynomial EIVM can be found in Shklyar et al. (2007) for known nuisance parameter α and in Kukush et al. (2006) for unknown α. Here we consider other special cases that can be treated as the mean-variance model (1), namely the loglinear Poisson, the Gamma, and the logit model. We focus on the case of unknown mean and variance of the latent variable. The case of known nuisance parameters is considered in Kukush and Schneeweiss (2006).

We assume regularity conditions which make it possible to differentiate integrals with respect to parameters and which guarantee that the considered estimators, generated by unbiased scores, are consistent and asymptotically normal with asymptotic covariance matrices given by the sandwich formula, see Carroll et al. (1995). These regularity conditions are discussed in Kukush and Schneeweiss (2005) for a nonlinear measurement error model. See also the discussion concerning the sandwich formula in Schervish (1995), p. 428.

We use the symbol E to denote the expectation of random variables, vectors, and matrices and V to denote the variance or the covariance matrix. We often omit the arguments of functions, e.g., instead of ρ(x, θ) we write ρ for simplicity. All vectors are considered to be column vectors. We use subscripts to indicate partial derivatives, e.g., ρ_θ = ∂ρ/∂θ. For a scalar function the derivative with respect to a vector is a column vector, and for a vector-valued function it is a matrix. We compare real matrices in the Loewner order, i.e., for symmetric matrices A and B of equal size, A < B and A ≤ B mean that B − A is positive definite and positive semidefinite, respectively.

The paper is organized as follows. Section 2 contains general results on mean-variance models and the measurement error model. In Section 3 the special cases of the Poisson, Gamma, and logit EIVM are treated, and Section 4 concludes.
2. General results

The estimation of θ in the mean-variance model (1) cannot be performed by the maximum likelihood (ML) approach, because the conditional distribution of y given x is by assumption not known. Instead, an estimator of θ is based on an unbiased estimating (or score) function, which we suppose to be given. A typical example of such an estimating function is a member of a general class of estimating functions. Let L be the class of all unbiased linear-in-y score functions (for short: linear score (LS) functions):

S_L(x, y; θ) := y g(x, θ) − h(x, θ),   (2)

where unbiasedness means that ∀ θ ∈ Θ: E S_L(x, y; θ) = 0. Here g and h are vector-valued functions of dimension d, the same dimension as θ. The expectation is meant to be carried out under the same θ as the θ of the argument.

The estimator of θ based on S_L is called the linear score (LS) estimator θ̂_L and is given as the solution to the equation

Σ_{i=1}^{n} S_L(x_i, y_i; θ̂_L) = 0.

Under general conditions θ̂_L exists and is consistent and asymptotically normal. The asymptotic covariance matrix (ACM) Σ_L of θ̂_L is given by the sandwich formula, cf. Heyde (1997),

Σ_L = A_L^{−1} B_L A_L^{−⊤},   A_L = −E S_{Lθ},   B_L = E S_L S_L^⊤.   (3)

A_L is supposed to be nonsingular (this is the identifiability condition).

A quasi score function is defined as follows, see Kukush et al. (2006):

S_Q(x, y; θ) := (y − m) m_θ / v + l_θ,   l := log ρ(x, θ).   (4)

The QS estimator θ̂_Q of θ is defined as the solution to the equation

Σ_{i=1}^{n} S_Q(x_i, y_i; θ̂_Q) = 0.   (5)

The quasi score function (4) belongs to L; therefore the estimator θ̂_Q is consistent and asymptotically normal under regularity conditions.

Theorem 2.1 (Optimality of QS). Let S_L be a score function from the class L and S_Q be the quasi score function (4). Then Σ_Q ≤ Σ_L. Moreover, Σ_L = Σ_Q for all θ if, and only if, θ̂_L = θ̂_Q a.s.

Theorem 2.2 (Strict optimality of QS). Under the conditions of Theorem 2.1,

rank(Σ_L − Σ_Q) = rank[ (m g_i − h_i, v g_i)^⊤, (l_{θ_i}, m_{θ_i})^⊤, i = 1, ..., d ] − d,   (6)

where rank[·] is the maximum number of linearly independent random vectors inside the square brackets. In particular, Σ_Q < Σ_L if, and only if, the random vectors in (6) are linearly independent.

If

span{ (m g_i − h_i, v g_i)^⊤, i = 1, ..., d } ∩ span{ (l_{θ_i}, m_{θ_i})^⊤, i = 1, ..., d } = { (0, 0)^⊤ },

then

rank(Σ_L − Σ_Q) = rank[ (h_i, g_i)^⊤, i = 1, ..., d ].

As an immediate consequence, we have the following corollary.

Corollary 2.1. A sufficient condition for Σ_Q < Σ_L is that the random variables

{ (m g − h)_i, i = 1, ..., d, l_{θ_j}, j ∈ B_θ }   (7)

are linearly independent, where { l_{θ_j}, j ∈ B_θ } is a basis of span{ l_{θ_j}, j = 1, ..., d }.

The reader can find the proofs of Theorems 2.1 and 2.2 in Kukush et al. (2006).
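As an informal illustration of the sandwich formula (3), the following sketch (ours, not part of the paper; NumPy assumed) approximates the ACM of a score estimator by replacing the expectations in A_L and B_L with sample averages at the estimate. The function name sandwich_acm and the toy Poisson quasi score in the usage example are purely illustrative.

```python
import numpy as np

def sandwich_acm(score, x, y, theta_hat, eps=1e-6):
    """Empirical sandwich matrix A^{-1} B A^{-T} for the estimator defined by
    sum_i score(x_i, y_i, theta) = 0, evaluated at theta_hat.
    Returns an approximation of the ACM of sqrt(n)*(theta_hat - theta);
    divide by n for the approximate covariance of theta_hat itself."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    n, d = len(x), len(theta_hat)
    A = np.zeros((d, d))
    B = np.zeros((d, d))
    for xi, yi in zip(x, y):
        s = np.asarray(score(xi, yi, theta_hat), dtype=float)
        B += np.outer(s, s)                    # accumulates E[S S^T]
        for j in range(d):                     # forward difference for column j of S_theta
            th = theta_hat.copy()
            th[j] += eps
            A[:, j] -= (np.asarray(score(xi, yi, th)) - s) / eps   # A = -E[S_theta]
    A /= n
    B /= n
    Ainv = np.linalg.inv(A)
    return Ainv @ B @ Ainv.T

# Toy usage: quasi score for a Poisson mean exp(t0 + t1*x) without measurement error,
# where m = v, so the score reduces to (y - m)*(1, x)^T and l_theta vanishes.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.poisson(np.exp(0.5 + 0.3 * x))
qs = lambda xi, yi, t: (yi - np.exp(t[0] + t[1] * xi)) * np.array([1.0, xi])
print(sandwich_acm(qs, x, y, np.array([0.5, 0.3])))
```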
2.1 Measurement error model

A measurement error model is a model where the response variable y depends on a latent (unobservable) variable ξ with distribution p(ξ, α). Here θ is split into two subvectors,

θ = (β^⊤, α^⊤)^⊤,   β ∈ R^k,   α ∈ R^{d−k}.   (8)

In such a case we call β the unknown parameter of interest and α the unknown nuisance parameter. The variable ξ can be observed only indirectly via a surrogate variable x, which is related to ξ through a measurement equation of the form

x = ξ + δ,   (9)

where the measurement error δ is independent of ξ and y and E δ = 0. Additionally, we assume δ ∼ N(0, σ_δ²) with known σ_δ².

The dependence of y on ξ is either given by a conditional distribution of y given ξ or simply by a conditional mean function supplemented by a conditional variance function:

E(y|ξ) = m*(ξ, β),   V(y|ξ) = v*(ξ, β).   (10)

Note that m* and v* do not depend on α. From (10) we can derive the conditional mean and variance functions of y given x:

m(x, β, α) := E(y|x) = E[m*(ξ, β)|x],   (11)
v(x, β, α) := V(y|x) = E[v*(ξ, β)|x] + V[m*(ξ, β)|x].   (12)

To compute these, we need to know the conditional distribution of ξ given x, which we can derive from the unconditional distribution of ξ, p(ξ, α), and the measurement equation (9).

The quasi score function (4) takes the form

S_Q = [ (y − m) v^{−1} m_β ;  (y − m) v^{−1} m_α + l_α ],   (13)

where the two blocks are stacked into one column vector.

An important special case for p(ξ, α) is the normal distribution ξ ∼ N(μ_ξ, σ_ξ²), σ_ξ² > 0. In this case, x ∼ N(μ, σ²) with μ = μ_ξ, σ² = σ_ξ² + σ_δ², α = (μ, σ)^⊤, and ξ|x ∼ N(μ(x), τ²) with

μ(x) = K x + (1 − K) μ,   (14)
τ² = K σ_δ²,   (15)

where K = σ_ξ²/σ² is the reliability ratio, 0 < K < 1. The subvector l_α in the score function S_Q takes the special form

l_α = (l_μ, l_σ)^⊤ = ( (x − μ)/σ²,  (x − μ)²/σ³ − 1/σ )^⊤.   (16)

Among the linear score functions, the so-called corrected score (CS) function is of particular interest. It is given by special functions g and h. Suppose we can find functions g = g(x, β) and h = h(x, β) such that

E[g|ξ] = v*^{−1} m*_β,   (17)
E[h|ξ] = m* v*^{−1} m*_β.   (18)

Then, because of E(y g − h) = E E[(y g − h)|y, ξ] = E (y − m*) v*^{−1} m*_β = 0,

S_C := ( y g − h, l_α )^⊤

is a linear score function within the class L. It is called the corrected score function of the measurement error model. In a number of important cases such functions g and h can be found in closed form. But there are also cases where g and h do not exist, see Stefanski (1989).
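For the normal regressor case, equations (14)–(16) translate directly into code. The following sketch (ours; function names are illustrative, NumPy assumed) computes the reliability ratio K, the parameters of the conditional distribution of ξ given x, and the nuisance score subvector l_α.

```python
import numpy as np

def conditional_latent_params(x, mu, sigma2_xi, sigma2_delta):
    """Return (mu_of_x, tau2): parameters of xi | x ~ N(mu(x), tau^2), cf. (14)-(15)."""
    sigma2 = sigma2_xi + sigma2_delta          # Var(x) = sigma_xi^2 + sigma_delta^2
    K = sigma2_xi / sigma2                     # reliability ratio, 0 < K < 1
    mu_of_x = K * x + (1.0 - K) * mu           # (14)
    tau2 = K * sigma2_delta                    # (15)
    return mu_of_x, tau2

def l_alpha(x, mu, sigma):
    """Nuisance score subvector (l_mu, l_sigma) from (16)."""
    return np.array([(x - mu) / sigma**2,
                     (x - mu)**2 / sigma**3 - 1.0 / sigma])
```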
2.2 Pre-estimation

In the measurement error model with θ^⊤ = (β^⊤, α^⊤), we could also define a modified QS estimator, which is based on a score function that instead of (13) consists of the two subvectors (y − m) v^{−1} m_β and l_α, implying an estimator of α which uses the second subvector only. This means that α would be pre-estimated using only the data x_i, not the data y_i. We can then substitute the resulting estimator α̂ in the first subvector, (y − m) v^{−1} m_β, and use this to estimate β. We might call this estimator of β a QS estimator with pre-estimated nuisance parameters or simply a pre-estimated QS estimator. Such a two-step estimation procedure is, of course, simpler to apply than the one we propose, but according to Theorem 2.1 it is at most as efficient and often less efficient than the latter one.

There are, however, cases where pre-estimation of the nuisance parameter is in accordance with our QS approach and does not reduce the efficiency of QS. Suppose that

m_α = A m_β   (19)

with some nonrandom matrix A (which may depend on θ).

Corollary 2.2. Suppose that in a model with nuisance parameters as described by (8), (9) condition (19) holds. Then a sufficient condition for Σ_Q^{(β)} < Σ_L^{(β)} is that the two systems of random variables {m_{β_i}, i = 1, ..., k} and {(m g − h)_i, i = 1, ..., k, l_{α_j}, j = 1, ..., d − k} are both linearly independent.

For later use, we formulate an extension of Corollary 2.2, which deals with the case where only part of m_α is linearly related to m_β. It can be proved in the same way as Corollary 2.2.

Corollary 2.3. Suppose that in a model with nuisance parameters the nuisance parameter vector α is subdivided into two subvectors α′ ∈ R^r and α″ ∈ R^{d−k−r} such that m_{α″} = A m_β with some nonrandom matrix A. Suppose further that there exists a nonrandom nonsingular square matrix B such that l̃_{α″} := B l_{α″} is a function of x and α″ only. Let θ′ = (β^⊤, α′^⊤)^⊤. Then a sufficient condition for Σ_Q^{(θ′)} < Σ_L^{(θ′)} is that the two systems of random variables {m_{β_i}, i = 1, ..., k, m_{α_j}, j = 1, ..., r} and {(m g − h)_i, i = 1, ..., k, l_{α_j}, j = 1, ..., d − k} are both linearly independent.

The proofs of Corollaries 2.2 and 2.3 can be found in Kukush et al. (2006).

3. Special cases

Consider the mean-variance measurement error model of Section 2.1 and assume that the error-free mean function m* is a function of a linear predictor in ξ:

m*(ξ, β) = m̃(β_0 + β_1 ξ),   β = (β_0, β_1)^⊤.   (20)

The mean function m = m(x, β, α) can then be computed as follows:

m = E(m*|x) = E[ m̃{β_0 + β_1(K x + (1 − K) μ + τ γ)} | x ],   (21)

where γ ∼ N(0, 1) and γ is independent of x. This is a Generalized Linear Model (GLM). We have m_μ = β_1 (1 − K) m_{β_0}, and thus by Corollary 2.3 the QS estimator of μ is just the empirical mean, μ̂ = (1/n) Σ_{i=1}^{n} x_i.

Now suppose that in the GLM

m̃″ = c_0 m̃′   (22)

with some constant c_0. Then by Corollary 2.3 we obtain that the QS estimator of σ² is just the empirical variance. This property holds for the Poisson and the Gamma models, but it does not hold for the logit one. We give an indirect proof of the fact that in the logit model the QS estimator of σ² is not the empirical variance, and σ has to be estimated together with the other unknown parameters.

3.1 Poisson model

In the loglinear Poisson measurement error model, y|ξ ∼ Po(λ) with λ = exp(β_0 + β_1 ξ), and x = ξ + δ. Here m* = v* = λ. For QS, we have, cf. Shklyar and Schneeweiss (2005),

m(x, θ) = exp{ β_0 + β_1 μ(x) + β_1² τ²/2 },
v(x, θ) = m²(x, θ)(e^{β_1² τ²} − 1) + m(x, θ),

with μ(x) and τ² from (14) and (15), respectively. The β-component of the CS function is, cf. Shklyar and Schneeweiss (2005),

S_C^{(β)} = y g − h,   g = (1, x)^⊤,   h = exp{ β_0 + β_1 x − β_1² σ_δ²/2 } (1, x − σ_δ² β_1)^⊤.

We know that μ and σ² can be pre-estimated, and therefore Σ_C − Σ_Q is of the block form

Σ_L − Σ_Q = [ Σ_L^{(β)} − Σ_Q^{(β)}, 0 ; 0, 0 ].   (23)

We can apply Corollary 2.2. For β_1 ≠ 0, the variables {(m g − h)_0, (m g − h)_1, l_μ, l_σ} are linearly independent, since the functions {1, x, x², e^{β_1 K x}, e^{β_1 x}, x e^{β_1 K x}, x e^{β_1 x}} are linearly independent. For the same reason, m_{β_0} and m_{β_1} are linearly independent for β_1 ≠ 0:

m_{β_0} = e^{const} · e^{β_1 K x},   m_{β_1} = const · e^{β_1 K x} + const · x e^{β_1 K x}.

Thus, by Corollary 2.2, Σ_Q^{(β)} < Σ_C^{(β)} for β_1 ≠ 0.
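A hedged sketch (ours, not part of the paper; NumPy assumed) of the Section 3.1 ingredients for the loglinear Poisson EIVM: the conditional mean and variance m(x, θ) and v(x, θ) entering the quasi score, and the β-component of the corrected score. The error variance σ_δ² is taken as known, as in the paper; function names are illustrative.

```python
import numpy as np

# theta = (b0, b1, mu, sigma); sigma^2 = Var(x), sigma_delta^2 known.
def poisson_mean_var(x, b0, b1, mu, sigma, sigma_delta):
    """m(x, theta) and v(x, theta) of Section 3.1."""
    K = 1.0 - sigma_delta**2 / sigma**2        # reliability ratio sigma_xi^2 / sigma^2
    mu_x = K * x + (1.0 - K) * mu              # (14)
    tau2 = K * sigma_delta**2                  # (15)
    m = np.exp(b0 + b1 * mu_x + 0.5 * b1**2 * tau2)
    v = m**2 * (np.exp(b1**2 * tau2) - 1.0) + m
    return m, v

def poisson_cs(x, y, b0, b1, sigma_delta):
    """beta-component of the corrected score: y*g - h with g = (1, x)^T and
    h = exp{b0 + b1*x - b1^2*sigma_delta^2/2} * (1, x - sigma_delta^2*b1)^T."""
    g = np.array([1.0, x])
    h = np.exp(b0 + b1 * x - 0.5 * b1**2 * sigma_delta**2) \
        * np.array([1.0, x - sigma_delta**2 * b1])
    return y * g - h
```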
3.2 Gamma model

In the loglinear Gamma measurement error model, y|ξ follows a Gamma distribution G(ω, π) with ω = exp(β_0 + β_1 ξ), π > 0, and x = ξ + δ:

f(y|η) = (1/Γ(π)) (π/ω)^π y^{π−1} exp(−yπ/ω),   y > 0.

Here m* = ω and v* = π^{−1} ω², where π^{−1} corresponds to the dispersion parameter φ, which, according to Kukush et al. (2006), we can assume to be known. For QS, we have

m(x, θ) = exp{ β_0 + β_1 μ(x) + β_1² τ²/2 },
v(x, θ) = (1 + 1/π) exp{ 2β_0 + 2β_1 μ(x) + 2β_1² τ² } − exp{ 2β_0 + 2β_1 μ(x) + β_1² τ² }.

The β-component of the CS function is

S_C^{(β)} = y g − h,   g = exp{ −β_0 − β_1 x − β_1² σ_δ²/2 } (1, x + β_1 σ_δ²)^⊤,   h = (1, x)^⊤,

cf. Kukush et al. (2005). As in Section 3.1, we can apply Corollary 2.2. For β_1 ≠ 0, the variables {(m g − h)_0, (m g − h)_1, l_μ, l_σ} are linearly independent, since the functions {1, x, x², e^{β_1(1−K)x}, x e^{β_1(1−K)x}} are linearly independent. In addition, as in Section 3.1, m_{β_0} and m_{β_1} are linearly independent for β_1 ≠ 0. Thus, by Corollary 2.2, Σ_Q^{(β)} < Σ_C^{(β)} for β_1 ≠ 0.

3.3 Logit model

In the logit measurement error model, y is a binary variable following a binomial distribution, the mean of which is a logistic function of a linear predictor in ξ:

y ∼ B(1, π),   π = H(η) = (1 + e^{−η})^{−1},   η = β_0 + β_1 ξ,   x = ξ + δ.

For this model, m* = π, v* = π(1 − π). For QS, we need the mean and variance functions of y given x, which are given by

m = E[ {1 + exp(−β_0 − β_1(K x + (1 − K) μ + τ γ))}^{−1} | x ],   v = m(1 − m),   (24)

where γ ∼ N(0, 1) and γ is independent of x.

We can then construct the quasi score function (13) for θ = (β_0, β_1, μ, σ)^⊤ with l_α from (16). As y is binary, the QS estimator of θ is just the ML estimator. Note that, according to the properties of the GLM, the QS estimator of μ is the empirical mean x̄. We cannot say the same for the QS estimator of σ², see below.

To find the CS estimator, we start from the maximum likelihood score function for β in the error-free model, which is given by

S_M^{(β)} = ( y − 1/(1 + e^{−η}) ) (1, ξ)^⊤.

Due to complex zeros in the denominator one cannot solve the deconvolution problem E(S_C^{(β)}|y, ξ) = S_M^{(β)}. Therefore we construct a modified corrected score (C*S) function for β, as a function S_{C*}^{(β)} = S_{C*}^{(β)}(y, x, β) such that

E(S_{C*}^{(β)}|y, ξ) = S_M^{(β)} (1 + e^{−η}) = { y(1 + e^{−η}) − 1 }(1, ξ)^⊤.

S_{C*}^{(β)} is of the form S_{C*}^{(β)} = y g_c − h_c, where g_c and h_c are functions of x and β such that

E(g_c|ξ) = (1 + e^{−β_0−β_1 ξ})(1, ξ)^⊤,   E(h_c|ξ) = (1, ξ)^⊤.

The solutions to these deconvolution problems are

g_c = ( 1 + e^{a−β_1 x},  x + (x + β_1 σ_δ²) e^{a−β_1 x} )^⊤,   h_c = (1, x)^⊤,   (25)

where a = −β_0 − β_1² σ_δ²/2. The function S_{C*}^{(β)} has to be supplemented by the subvector l_α, which yields the conventional estimators of the nuisance parameters μ and σ²: μ̂_{C*} = x̄ and σ̂²_{C*} = s_x².
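The modified corrected score pieces (25) translate directly into code. The sketch below (ours, NumPy assumed) evaluates the β-component y g_c − h_c of the C*S function for one observation; σ_δ² is assumed known and the function name is illustrative.

```python
import numpy as np

def logit_cstar(x, y, b0, b1, sigma_delta):
    """beta-component of the modified corrected score (C*S) from (25)."""
    a = -b0 - 0.5 * b1**2 * sigma_delta**2
    e = np.exp(a - b1 * x)
    gc = np.array([1.0 + e, x + (x + b1 * sigma_delta**2) * e])   # g_c in (25)
    hc = np.array([1.0, x])                                        # h_c in (25)
    return y * gc - hc
```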
In addition to the QS and C*S estimators, we also consider the conditional score (DS) estimator, cf. Carroll et al. (1995). Let z = x + y σ_δ² β_1, η* = β_0 + β_1 z. Then

E(y|z) = m* := H(η* − β_1² σ_δ²/2),   V(y|z) = v* := H(1 − H).

The conditional score function for β is then given by, cf. Carroll et al. (1995),

S_D^{(β)} = (y − m*)(1, z)^⊤.

It is obviously unbiased. By using the fact that y is binary, the conditional score function can be written as a linear function of y: S_D^{(β)} = y g_d − h_d, where

g_d = { 1 − H(β_0 + β_1 x + β_1² σ_δ²/2) }(1, x + β_1 σ_δ²)^⊤ + H(β_0 + β_1 x − β_1² σ_δ²/2)(1, x)^⊤,
h_d = −H(β_0 + β_1 x − β_1² σ_δ²/2)(1, x)^⊤.

If S_D^{(β)} is supplemented by the subvector (l_μ, l_σ)^⊤, then DS is a member of the class L of linear score functions. The conditional score estimators of μ and σ² are μ̂_D = x̄ and σ̂²_D = s_x².

Now, according to Theorem 2.1,

Σ_Q ≤ Σ_{C*} and Σ_Q ≤ Σ_D.   (26)

But we can also compare Σ_{C*}^{(β,σ)} and Σ_D^{(β,σ)} to Σ_Q^{(β,σ)}, where these matrices are the ACMs of the corresponding estimators of (β_0, β_1, σ)^⊤. Since μ̂_{C*} = μ̂_D = μ̂_Q, we have for the μ-components Σ_{C*}^{(μ)} = Σ_D^{(μ)} = Σ_Q^{(μ)}, and thus by (26),

rank( Σ_{C*}^{(β,σ)} − Σ_Q^{(β,σ)} ) = rank( Σ_{C*} − Σ_Q ),
rank( Σ_D^{(β,σ)} − Σ_Q^{(β,σ)} ) = rank( Σ_D − Σ_Q ).

Theorem 3.1. In the logit model, Σ_Q^{(β,σ)} ≤ Σ_{C*}^{(β,σ)} and Σ_Q^{(β,σ)} ≤ Σ_D^{(β,σ)}. When β_1 ≠ 0, the inequalities become strict. In particular, for β_1 ≠ 0, Σ_Q^{(σ)} < Σ_{C*}^{(σ)} and Σ_Q^{(σ)} < Σ_D^{(σ)}.

This means that in the logit model σ̂²_Q is an asymptotically more efficient estimator of σ² than σ̂²_{C*} = σ̂²_D = s_x².

3.4 Proof of Theorem 3.1

The first statement is a direct consequence of Theorem 2.1. So we need only prove the strict inequalities under β_1 ≠ 0. First we prove the linear independence of [l_μ, l_σ, (m g_c − h_c)_0, (m g_c − h_c)_1], then the linear independence of [l_μ, l_σ, (m g_d − h_d)_0, (m g_d − h_d)_1], and finally the linear independence of [m_{β_0}, m_{β_1}, m_σ], where l_μ ∝ x − μ, l_σ ∝ (x − μ)² − σ². By Corollary 2.3 with α′ = σ and α″ = μ, these facts will yield that Σ_Q^{(β,σ)} < Σ_{C*}^{(β,σ)} and Σ_Q^{(β,σ)} < Σ_D^{(β,σ)}.

Consider the case β_1 > 0 (the case β_1 < 0 can be treated similarly).

1) From (24) we have, as x → −∞,

m(x) = E[H(β_0 + β_1 ξ)|x] ∼ exp{ β_0 + β_1(K x + (1 − K)μ) } E e^{β_1 τ γ} = C e^{β_1 K x}.

Together with (25) it follows that (m g_c − h_c)(x) ∼ const · e^{β_1(K−1)x}(1, x)^⊤ as x → −∞. Thus the functions l_μ, l_σ, (m g_c − h_c)_0, (m g_c − h_c)_1 have different asymptotic behavior as x → −∞ and are therefore linearly independent.

2) As to the asymptotic behavior of (m g_d − h_d), we have, as x → −∞, (g_d)_0 → 1, (g_d)_1 ∼ x, (h_d)_0 ∼ const · e^{β_1 x}, (h_d)_1 ∼ const · x e^{β_1 x}, and thus

(m g_d − h_d)_0 ∼ const · e^{β_1 K x},   (m g_d − h_d)_1 ∼ const · x e^{β_1 K x}.

Again the functions l_μ, l_σ, (m g_d − h_d)_0, (m g_d − h_d)_1 have different asymptotic behavior as x → −∞ and are therefore linearly independent.

3) We have, by (24),

m_{β_0} = E[H′|x] = E[ H′{β_0 + β_1(K x + (1 − K)μ + τγ)} | x ],
m_{β_1} = (K x + (1 − K)μ) E[H′|x] + τ² E[H″|x],
m_σ = β_1 K_σ (x − μ) E[H′|x] + β_1 τ τ_σ E[H″|x],

where H^{(i)} = H^{(i)}(β_0 + β_1 ξ). This system of equations can also be written in matrix form:

( m_{β_0}, m_{β_1}, m_σ )^⊤ = [ 1, 0, 0 ; (1 − K)μ, K, τ² ; −β_1 K_σ μ, β_1 K_σ, β_1 τ τ_σ ] ( E[H′|x], x E[H′|x], E[H″|x] )^⊤.   (27)

Because of τ² = K σ_δ², see (15), and K_σ ≠ 0, the transformation matrix on the right-hand side of (27) is nonsingular if β_1 ≠ 0. By the properties of the logistic function, we have H′ = H − H², H″ = H′ − 2(H² − H³). Therefore the vector on the right-hand side of (27) is a nonsingular linear transformation of the vector of functions (f_1(x), f_2(x), f_3(x))^⊤, where

f_1(x) = E[H − H²|x],   f_2(x) = x E[H − H²|x],   f_3(x) = E[H² − H³|x].

To prove the linear independence of [m_{β_0}, m_{β_1}, m_σ] it thus suffices to show that [f_1, f_2, f_3] are linearly independent. But this is guaranteed by the fact that these functions have different asymptotic behavior as x → −∞. Indeed, E[H^r|x] ∼ const · e^{rβ_1 K x}, and thus

f_1(x) ∼ const · e^{β_1 K x},   f_2(x) ∼ const · x e^{β_1 K x},   f_3(x) ∼ const · e^{2β_1 K x}.  □

4. Conclusions

We studied the Poisson, the Gamma, and the logit errors-in-variables models with unknown nuisance parameters. For the Poisson and the Gamma models, we showed that the quasi score estimator of β is strictly more efficient than the corrected score estimator of β. For the logit model, we proved that the compound quasi score estimator of β and σ is strictly more efficient than both the corrected score and the conditional score estimators of β and σ. In particular, in the logit model the quasi score estimator of σ differs from the empirical variance of x. For the Gamma and the Poisson models the quasi score estimator of σ coincides with the empirical variance of x. All three models are GLMs; therefore the quasi score estimator of μ is just the empirical mean of x.
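To illustrate the efficiency comparison numerically, the following rough Monte Carlo sketch (ours, not from the paper; NumPy and SciPy assumed) simulates the loglinear Poisson EIVM of Section 3.1 and compares the empirical spread of the CS estimator of β_1 with that of the QS estimator with pre-estimated nuisance parameters, which by the GLM argument of Section 3 should agree with the full QS estimator in this model. All names and the chosen parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(1)
b0, b1, mu_xi, s2_xi, s2_d, n, reps = 0.3, 0.7, 0.0, 1.0, 0.3, 300, 200

def cs_eq(beta, x, y):
    """Corrected score equations for the Poisson model (Section 3.1)."""
    b0_, b1_ = beta
    e = np.exp(b0_ + b1_ * x - 0.5 * b1_**2 * s2_d)
    return [np.sum(y - e), np.sum(y * x - e * (x - s2_d * b1_))]

def qs_eq(beta, x, y):
    """Quasi score beta-equations with mu and sigma^2 pre-estimated from x."""
    b0_, b1_ = beta
    s2 = x.var()
    K = 1.0 - s2_d / s2
    mu_x = K * x + (1.0 - K) * x.mean()
    t2 = K * s2_d
    m = np.exp(b0_ + b1_ * mu_x + 0.5 * b1_**2 * t2)
    v = m**2 * (np.exp(b1_**2 * t2) - 1.0) + m
    w = (y - m) / v
    return [np.sum(w * m), np.sum(w * m * (mu_x + b1_ * t2))]   # m_beta = m*(1, mu(x)+b1*tau^2)

cs_est, qs_est = [], []
for _ in range(reps):
    xi = rng.normal(mu_xi, np.sqrt(s2_xi), n)
    x = xi + rng.normal(0.0, np.sqrt(s2_d), n)
    y = rng.poisson(np.exp(b0 + b1 * xi))
    cs_est.append(fsolve(cs_eq, [b0, b1], args=(x, y))[1])
    qs_est.append(fsolve(qs_eq, [b0, b1], args=(x, y))[1])

print("empirical var of beta1_hat:  CS %.5f   QS %.5f" % (np.var(cs_est), np.var(qs_est)))
```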
References

1. Carroll, R. J., Ruppert, D., and Stefanski, L. A., Measurement Error in Nonlinear Models, Chapman and Hall, London, (1995).
2. Heyde, C. C., Quasi-Likelihood and Its Application, Springer, New York, (1997).
3. Kukush, A. and Schneeweiss, H., Comparing different estimators in a nonlinear measurement error model. I, Mathematical Methods of Statistics, (2005), 14, 53–79.
4. Kukush, A., Schneeweiss, H., and Shklyar, S., Quasi Score is more efficient than Corrected Score in a general nonlinear measurement error model, Discussion Paper 451, SFB 386, Universität München, (2005).
5. Kukush, A. and Schneeweiss, H., Asymptotic optimality of the quasi-score estimator in a class of linear score estimators, Discussion Paper 477, SFB 386, Universität München, (2006).
6. Kukush, A., Malenko, A., and Schneeweiss, H., Optimality of the quasi-score estimator in a mean-variance model with applications to measurement error models, Discussion Paper 494, SFB 386, Universität München, (2006).
7. Schneeweiss, H. and Kukush, A., Comparing the efficiency of structural and functional methods in measurement error models. Submitted.
8. Schervish, M. J., Theory of Statistics, Springer, New York, (1995).
9. Shklyar, S. and Schneeweiss, H., A comparison of asymptotic covariance matrices of three consistent estimators in the Poisson regression model with measurement errors, Journal of Multivariate Analysis, (2005), 94 (2), 250–270.
10. Shklyar, S., Schneeweiss, H., and Kukush, A., Quasi Score is more efficient than Corrected Score in a polynomial measurement error model, Metrika, (2007), 65, 275–295.
11. Stefanski, L., Unbiased estimation of a nonlinear function of a normal mean with application to measurement error models, Communications in Statistics, Part A - Theory and Methods, (1989), 18, 4335–4358.

Department of Mathematical Analysis, Kyiv National Taras Shevchenko University, Kyiv, Ukraine.
E-mail address: alexander kukush@univ.kiev.ua

Department of Probability Theory and Mathematical Statistics, Kyiv National Taras Shevchenko University, Kyiv, Ukraine.
E-mail address: exipilis@yandex.ru

University of Muenchen, Germany.
E-mail address: Hans.Schneeweiss@stat.uni-muenchen.de