A goodness-of-fit test for a multivariate errors-in-variables model

A multivariate errors-in-variables model AX ≈ B is considered, where the data matrices A and B are observed with errors, and a matrix parameter X is to be estimated. A goodness-of-fit test based on the moment estimator is constructed. The proposed test is asymptotically chi-squared under the null hypothesis. The power of the test is discussed.

Saved in:
Bibliographic details
Date: 2006
Main authors: Kukush, A.; Polekha, M.
Format: Article
Language: English
Published: Інститут математики НАН України, 2006
ISSN: 0321-3900
Online access: http://dspace.nbuv.gov.ua/handle/123456789/4458
Institution: Digital Library of Periodicals of the National Academy of Sciences of Ukraine
Cite as: A goodness-of-fit test for a multivariate errors-in-variables model / A. Kukush, M. Polekha // Theory of Stochastic Processes. — 2006. — Vol. 12 (28), no. 3–4. — pp. 63–74. — Bibliography: 6 titles. — English.

Full text

Theory of Stochastic Processes
Vol. 12 (28), no. 3–4, 2006, pp. 63–74

ALEXANDER KUKUSH AND MARIA POLEKHA

A GOODNESS-OF-FIT TEST FOR A MULTIVARIATE ERRORS-IN-VARIABLES MODEL

A multivariate errors-in-variables model $AX \approx B$ is considered, where the data matrices $A$ and $B$ are observed with errors, and a matrix parameter $X$ is to be estimated. A goodness-of-fit test based on the moment estimator is constructed. The proposed test is asymptotically chi-squared under the null hypothesis. The power of the test is discussed.

2000 Mathematics Subject Classification. Primary 62H15.
Key words and phrases. Multivariate errors-in-variables model, goodness-of-fit test, moment estimator, asymptotically chi-squared, power of the test.

1. Introduction

Errors-in-variables (EIV) models are important in practical applications, and it is reasonable to develop appropriate goodness-of-fit tests for such models. Consistent estimators for a multivariate errors-in-variables model under various conditions are presented in [1–3]. A goodness-of-fit test is constructed in [4] for a linear structural EIV model, where the distribution of the latent variable and the error distributions are normal. A polynomial EIV model is considered in [5] without the normality assumption. The present paper adapts the results of [5] to a multivariate errors-in-variables model.

We use the following notation: $\|A\|$ is the Frobenius norm of a matrix $A$, and $I_p$ is the identity matrix of size $p$. The symbols $E$, $D$, and $\mathrm{cov}$ denote the expectation of a random matrix, the variance of a random variable, and the variance-covariance matrix of a random vector, respectively. $O_p(1)$ denotes a sequence of stochastically bounded random variables, and $o_p(1)$ is a sequence of random variables converging to 0 in probability. All vectors in the paper are column vectors.

The paper is organized as follows. In Section 2 we introduce the model and construct an estimator. In Section 3 we present a goodness-of-fit test and show that it is asymptotically chi-squared with $p$ degrees of freedom under the null hypothesis. We introduce a local alternative and investigate the power of the test in Section 4. Section 5 concludes, and the proofs of the results are presented in the Appendix.

2. The model and the estimator

Consider the model of observations

(1) $A_0X = B_0$, $A = A_0 + \tilde A$, $B = B_0 + \tilde B$,

where $A_0 \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{n \times p}$, $B_0 \in \mathbb{R}^{m \times p}$. Here the data matrices $A$, $B$ are observed, $A_0$, $B_0$ are unknown nonrandom matrices, and $\tilde A$, $\tilde B$ are matrices of random errors. Let $A^T = [a_1 \dots a_m]$, $B^T = [b_1 \dots b_m]$, and we use similar notation for the rows of $A_0$, $B_0$, $\tilde A$, $\tilde B$. Rewrite the model (1) as a multivariate linear model:

(2) $X^T a_i^0 = b_i^0$, $b_i = b_i^0 + \tilde b_i$, $a_i = a_i^0 + \tilde a_i$, $i = 1, \dots, m$.

We assume the following conditions:
a) the sequences of error vectors $\{\tilde a_i, i \ge 1\}$ and $\{\tilde b_i, i \ge 1\}$ are two i.i.d. centered sequences of random errors, independent of each other;
b) for all $i$, $\tilde a \stackrel{d}{=} \tilde a_i$, $\tilde b \stackrel{d}{=} \tilde b_i$, and $E\tilde a = 0$, $E\tilde b = 0$;
c) $\mathrm{cov}(\tilde a) =: S_{\tilde a}$ is known and $\mathrm{cov}(\tilde b) =: S_{\tilde b}$ is unknown.

The adjusted least squares (ALS) estimator of the matrix parameter $X$ is

$\hat X := (A^TA - E\tilde A^T\tilde A)^{-1}A^TB = \Big(\sum_{i=1}^m \big(a_ia_i^T - E\tilde a_i\tilde a_i^T\big)\Big)^{-1}\sum_{i=1}^m a_ib_i^T$,

i.e.,

(3) $\hat X = \bar H^{-1}\,\overline{ab^T}$, where $H_i := a_ia_i^T - E\tilde a_i\tilde a_i^T$.

Hereafter the bars denote averages over $i = 1, \dots, m$, e.g., $\overline{ab^T} = \frac1m\sum_{i=1}^m a_ib_i^T$ and $\bar H = \overline{aa^T} - E\tilde a\tilde a^T$.
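As a numerical illustration (ours, not part of the original paper), the estimator (3) can be computed directly from the data matrices. A minimal sketch, assuming the known row-error covariance $S_{\tilde a}$ is supplied as `S_a_tilde` and using $E\tilde A^T\tilde A = m S_{\tilde a}$:

```python
import numpy as np

def als_estimator(A, B, S_a_tilde):
    """ALS estimator (3): X_hat = H_bar^{-1} * mean_i(a_i b_i^T).

    A: (m, n) observed regressors, B: (m, p) observed responses,
    S_a_tilde: (n, n) known covariance of the row errors.
    H_bar = A^T A / m - S_a_tilde, since E(A~^T A~) = m * S_a_tilde.
    """
    m = A.shape[0]
    H_bar = A.T @ A / m - S_a_tilde
    rhs = A.T @ B / m                       # mean_i of a_i b_i^T
    try:
        return np.linalg.solve(H_bar, rhs)
    except np.linalg.LinAlgError:
        # Singular H_bar: fall back to the pseudoinverse, as in the
        # remark after Lemma 1 below.
        return np.linalg.pinv(H_bar) @ rhs
```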
Lemma 1 [6]. Assume that the following conditions are satisfied.

(i) $E\|\tilde a\|^4 < \infty$, $E\|\tilde b\|^4 < \infty$.

(ii) There exists $V := \lim_{m\to\infty}\overline{a^0a^{0T}}$, and $V$ is positive definite.

Then $\bar H$ is nonsingular with probability tending to 1, and

(4) $\hat X \xrightarrow{P} X$ as $m \to \infty$,

(5) $\hat S_{\tilde b} := \overline{bb^T} - \overline{ba^T}\hat X \xrightarrow{P} S_{\tilde b}$ as $m \to \infty$.

Under the conditions of Lemma 1 the estimator $\hat X$ is well defined for $m \ge m_0(\omega)$ a.s. If the matrix $\bar H = \bar H(m,\omega)$ is singular, the estimator is taken to be $\hat X = \bar H^{\dagger}\,\overline{ab^T}$, where $\bar H^{\dagger}$ is the pseudoinverse matrix.

3. Construction of the test

For the response vector $b$ and the corresponding latent vector $a^0$ we consider the following hypotheses.

$H_0$: there exists a matrix $X \in \mathbb{R}^{n\times p}$ for which the equality holds:

(6) $E(b - X^Ta^0) = 0$;

$H_1$: for every matrix $X \in \mathbb{R}^{n\times p}$,

(7) $E(b - X^Ta^0)$ is not identically equal to 0.

We want to construct a test statistic for the null hypothesis using the observations $a_i$ and $b_i$, $i = 1, \dots, m$. Let $w(a^0)$ be a scalar weight function. Then under the null hypothesis we have the equality $E[(b - X^Ta^0)w(a^0)] = 0$. We will construct a vector polynomial $s(a)$ such that under $H_0$ the following relation holds:

(8) $E[(b - X^Ts(a))w(a)] = 0$.

Such a construction is possible if one chooses $w(a) = e^{\lambda^Ta}$, $a \in \mathbb{R}^n$, where $\lambda = (\lambda_1, \dots, \lambda_n)^T$ is fixed with $\lambda_k \ne 0$, $k = 1, \dots, n$. We fix such a $\lambda$ and assume that the corresponding exponential moment of $\tilde a$ exists and satisfies the condition

(iii) $E[(1 + \|\tilde a\|)e^{\lambda^T\tilde a}] < \infty$.

For the chosen weight function, relation (8) holds if for every $a^0$ one has

$a^0 \cdot E(e^{\lambda^T\tilde a}) = E\big(s(a^0 + \tilde a)e^{\lambda^T\tilde a}\big)$.

Then (8) holds for $s(a) = a - E(\tilde ae^{\lambda^T\tilde a})/E(e^{\lambda^T\tilde a})$. Denote $\mu_0 = E(e^{\lambda^T\tilde a})$ and $\mu_1 = E(\tilde ae^{\lambda^T\tilde a})$; then $s(a) = a - \mu_1/\mu_0$.

Define a statistic of the score type:

(9) $T_m^0 = \frac1m\sum_{i=1}^m\big(b_i - \hat X^Ts(a_i)\big)e^{\lambda^Ta_i} = \overline{(b - \hat X^Ts(a))e^{\lambda^Ta}}$.
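For concreteness (our illustration, not the paper's), the statistic (9) can be evaluated once $\mu_0$ and $\mu_1$ are available; for instance, for Gaussian errors $\tilde a \sim N(0, S_{\tilde a})$ one has $\mu_0 = e^{\lambda^TS_{\tilde a}\lambda/2}$ and $\mu_1 = S_{\tilde a}\lambda\,\mu_0$, so that $s(a) = a - S_{\tilde a}\lambda$. A minimal sketch:

```python
import numpy as np

def score_statistic(A, B, X_hat, lam, mu0, mu1):
    """Score-type statistic (9):
    T0_m = mean_i (b_i - X_hat^T s(a_i)) * exp(lam' a_i).

    lam: (n,) vector with nonzero components; mu0 = E exp(lam' a~) and
    mu1 = E a~ exp(lam' a~) are the known error moments, so that
    s(a) = a - mu1 / mu0.
    """
    w = np.exp(A @ lam)                     # weights e^{lam' a_i}, shape (m,)
    S = A - mu1 / mu0                       # rows s(a_i)
    resid = B - S @ X_hat                   # rows b_i - X_hat^T s(a_i)
    return (resid * w[:, None]).mean(axis=0)
```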
We introduce further assumptions in order to derive an asymptotic expansion of $\sqrt m\,T_m^0$.

(iv) $E[(1 + \|\tilde a\|^2)e^{2\lambda^T\tilde a}] < \infty$. This condition is stronger than (iii).

For an arbitrary function $f(a^0)$ we denote $M(f(a^0)) = \lim_{m\to\infty}\overline{f(a^0)}$, provided the limit exists and is finite; $a^{0(j)}$ is the $j$th component of the vector $a^0$.

(v) $M\big((a^{0(j)})^l(a^{0(k)})^re^{\lambda^Ta^0}\big)$ exists for all $l, r \ge 0$, $l + r \le 2$, $j, k = 1, \dots, n$.

(vi) $\|\overline{a^0a^{0T}} - V\| = o(m^{-1/4})$ as $m \to \infty$.

Lemma 2. Assume (i), (ii), and (iv) to (vi). Then

(10) $\sqrt m\,T_m^0 = \frac1{\sqrt m}\sum_{i=1}^m\tilde b_i\big(e^{\lambda^Ta_i} - a_i^Tf\big) + X^T\frac1{\sqrt m}\sum_{i=1}^m\eta_i + o_p(1)$,

where $\eta_i := (a_i^0 - s(a_i))e^{\lambda^Ta_i} + (H_i - a_i^0a_i^T)f$ are independent random vectors with expectation 0, $H_i = a_ia_i^T - E\tilde a_i\tilde a_i^T$, $f := V^{-1}M(a^0e^{\lambda^Ta^0})\mu_0$, and the matrix $V$ comes from (ii).

We need some more assumptions in order to apply the central limit theorem in the Lyapunov form to the statistic $\sqrt m\,T_m^0$.

(vii) $\exists\,\delta > 0$: $E[(1 + \|\tilde a\|^{2+\delta})e^{(2+\delta)\lambda^T\tilde a}] < \infty$ and $E\|\tilde b\|^{2+\delta} < \infty$.

(viii) $M\big((a^{0(j)})^l(a^{0(k)})^re^{\lambda^Ta^0}\big)$ exists for all $l, r \ge 0$, $l + r \le 3$, and $M\big((a^{0(j)})^l(a^{0(k)})^re^{2\lambda^Ta^0}\big)$ exists for all $l, r \ge 0$, $l + r \le 2$; $j, k = 1, \dots, n$.

(ix) $\exists\,\delta > 0$: $\overline{\|a^0\|^{4+\delta}} + \overline{e^{(2+\delta)\lambda^Ta^0}} + \overline{\|a^0\|^{2+\delta}e^{(2+\delta)\lambda^Ta^0}} \le \mathrm{const}$.

(x) $\overline{e^{2\lambda^Ta^0}\|a^0\|^4} = o(m)$ as $m \to \infty$.

Condition (vii) absorbs conditions (iii) and (iv), and condition (viii) absorbs condition (v). Condition (ix) means that the higher empirical moments are bounded.

Lemma 3. Assume (ii) and (vii) to (ix). Then $\sqrt m\,T_m^0 \xrightarrow{d} N(0, \Sigma_T)$, where

$\Sigma_T := S_{\tilde b}\cdot M\big[E(e^{\lambda^Ta} - a^Tf)^2\big] + X^T[I_n,\,f^T\otimes I_n]\cdot M(U)\cdot[I_n,\,f^T\otimes I_n]^TX$,

$M(U) := \lim_{m\to\infty}\overline{\mathrm{cov}(Z(a))}$, $Z(a_i) := \begin{bmatrix}(a_i^0 - s(a_i))e^{\lambda^Ta_i}\\ \mathrm{vec}(H_i) - \mathrm{vec}(a_i^0a_i^T)\end{bmatrix}$, $i = 1, \dots, m$,

the symbol $\otimes$ is the Kronecker product, and the vector $f$ comes from Lemma 2.

Under the conditions of Lemma 3 and condition (x), a consistent estimator $\hat\Sigma_T$ of $\Sigma_T$ is constructed:

(11) $\hat\Sigma_T := \hat S_{\tilde b}\cdot\overline{(e^{\lambda^Ta} - a^T\hat f)^2} + \hat X^T[I_n,\,\hat f^T\otimes I_n]\cdot\widehat{\mathrm{cov}}\begin{bmatrix}(a^0 - s(a))e^{\lambda^Ta}\\ \mathrm{vec}(H - a^0a^T)\end{bmatrix}\cdot[I_n,\,\hat f^T\otimes I_n]^T\hat X$,

where $\hat f$ and $\widehat{\mathrm{cov}}$ are the approximations described below.

A. Since $\bar H \xrightarrow{P} V$ and $\overline{s(a)e^{\lambda^Ta}} \xrightarrow{P} M(a^0e^{\lambda^Ta^0})\mu_0$ as $m \to \infty$, we take the estimator $\hat f = \bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}}$.

B. $M\Big(\mathrm{cov}\begin{bmatrix}(a^0 - s(a))e^{\lambda^Ta}\\ \mathrm{vec}(H - a^0a^T)\end{bmatrix}\Big) = M\begin{pmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{12}^T & \Sigma_{22}\end{pmatrix}$.

We want to construct $\hat\Sigma_{ij}$ for $M(\Sigma_{ij})$, $i, j = 1, 2$, based on the observations $a_i$, $i = 1, \dots, m$. We need the following auxiliary statement.

Lemma 4. Let $k \ge 0$, let $p(a^0)$ be a polynomial of degree $k$, and let $\{a_i^0, i \ge 1\}$ be a sequence of nonrandom vectors in $\mathbb{R}^n$ satisfying the condition

(xi) $\overline{(1 + \|a^0\|^{2k})e^{2\lambda^Ta^0}} = o(m)$ as $m \to \infty$.

Let $a_i = a_i^0 + \tilde a_i$, $i \ge 1$, where the vectors $\tilde a_i$ satisfy conditions a) and b) and the following condition:

(xii) $E[(1 + \|\tilde a\|^{2k})e^{2\lambda^T\tilde a}] < \infty$.

Assume also that the limit $M(p(a^0)e^{\lambda^Ta^0}) = \lim_{m\to\infty}\frac1m\sum_{i=1}^mp(a_i^0)e^{\lambda^Ta_i^0}$ exists and is finite. Then there exists a polynomial $p_1(a)$ of degree $k$, $a \in \mathbb{R}^n$, such that

(12) $\frac1m\sum_{i=1}^mp_1(a_i)e^{\lambda^Ta_i} \xrightarrow{P} M(p(a^0)e^{\lambda^Ta^0})$ as $m \to \infty$.

Consider the matrix

$\overline{\Sigma_{11}} = \overline{a^0a^{0T}e^{2\lambda^Ta^0}}\,Ee^{2\lambda^T\tilde a} - \overline{E\big(s(a)e^{2\lambda^Ta}\big)a^{0T}} - \overline{a^0E\big(s(a)^Te^{2\lambda^Ta}\big)} + \overline{E\big(s(a)s(a)^Te^{2\lambda^Ta}\big)} =: U_1 - U_2 - U_2^T + U_3.$

Next,

$E(aa^Te^{2\lambda^Ta}) = e^{2\lambda^Ta^0}\big(a^0a^{0T}m_1 + a^0m_2^T + m_2a^{0T} + m_3\big)$,
$E(ae^{2\lambda^Ta}) = e^{2\lambda^Ta^0}(a^0m_1 + m_2)$,
$E(e^{2\lambda^Ta}) = e^{2\lambda^Ta^0}m_1$,

where $m_1 = Ee^{2\lambda^T\tilde a}$, $m_2 = E\tilde ae^{2\lambda^T\tilde a}$, $m_3 = E\tilde a\tilde a^Te^{2\lambda^T\tilde a}$. Then, by Lemma 4, the estimator of $U_1$ equals

$\hat U_1 = \overline{aa^Te^{2\lambda^Ta}} - \overline{ae^{2\lambda^Ta}}\,\frac{m_2^T}{m_1} - \frac{m_2}{m_1}\,\overline{a^Te^{2\lambda^Ta}} - \overline{e^{2\lambda^Ta}}\Big(\frac{m_3}{m_1} - \frac{2m_2m_2^T}{m_1^2}\Big).$

Again from the previous expressions and the identity

$E(s(a)a^Te^{2\lambda^Ta}) = E\big(s(a)e^{2\lambda^Ta}\big)a^{0T} + e^{2\lambda^Ta^0}\big(a^0m_2^T + m_3 - (\mu_1/\mu_0)m_2^T\big)$

we get the approximation

$\hat U_2 = \overline{s(a)a^Te^{2\lambda^Ta}} - \overline{ae^{2\lambda^Ta}}\,\frac{m_2^T}{m_1} - \overline{e^{2\lambda^Ta}}\Big(\frac{m_3 - (\mu_1/\mu_0)m_2^T}{m_1} - \frac{m_2m_2^T}{m_1^2}\Big).$

The next approximation is $\hat U_3 = \overline{s(a)s(a)^Te^{2\lambda^Ta}}$. Finally,

$\hat\Sigma_{11} := \hat U_1 - \hat U_2 - \hat U_2^T + \hat U_3.$

In a similar way one can construct the other approximations $\hat\Sigma_{ij}$ and obtain the approximation (11). The test statistic is then defined as

$T_m^2 = m\cdot\|\hat\Sigma_T^{-1/2}T_m^0\|^2.$

Since $\hat\Sigma_T$ is a consistent estimator of $\Sigma_T$, we obtain by Lemma 3 the following theorem.

Theorem 1. Suppose that the conditions of Lemma 3 and condition (x) are satisfied. Assume as well that at least one of the following two conditions holds:

(xiii) $M[E(e^{\lambda^Ta} - a^Tf)^2] > 0$, and $S_{\tilde b}$ is positive definite;

(xiv) $n \ge p$, $\mathrm{rank}\,X = p$, and the matrix $M(U) = M\Big(\mathrm{cov}\begin{bmatrix}(a^0 - s(a))e^{\lambda^Ta}\\ \mathrm{vec}(H - a^0a^T)\end{bmatrix}\Big)$ is nonsingular.

Then $T_m^2 \xrightarrow{d} \chi_p^2$ under the hypothesis $H_0$.

Let $\alpha \in (0,1)$ and let $\chi_{p\alpha}^2$ be the corresponding quantile of the $\chi_p^2$ distribution, i.e., $P\{\chi_p^2 > \chi_{p\alpha}^2\} = \alpha$. Based on Theorem 1, we construct the following goodness-of-fit test with asymptotic confidence probability $1 - \alpha$: if $T_m^2 \le \chi_{p\alpha}^2$, we accept the hypothesis $H_0$; if $T_m^2 > \chi_{p\alpha}^2$, we reject the null hypothesis.
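The decision rule can be sketched as follows (our illustration; `Sigma_T_hat` is assumed to be the estimator (11) computed beforehand):

```python
import numpy as np
from scipy.stats import chi2

def gof_decision(T0_m, Sigma_T_hat, m, alpha=0.05):
    """Goodness-of-fit decision of Section 3.

    Computes T2_m = m * ||Sigma_hat^{-1/2} T0_m||^2
                  = m * T0_m' Sigma_hat^{-1} T0_m
    and compares it with the quantile chi2_{p,alpha}, where
    P(chi2_p > chi2_{p,alpha}) = alpha.
    """
    p = T0_m.shape[0]
    T2_m = m * float(T0_m @ np.linalg.solve(Sigma_T_hat, T0_m))
    crit = chi2.ppf(1 - alpha, df=p)
    return T2_m, crit, bool(T2_m > crit)    # True = reject H0
```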
4. The power properties of the test

Consider the following sequence of local alternatives:

(13) $H_{1,m}$: $b_i = X^Ta_i^0 + \frac{g(a_i^0)}{\sqrt m} + \tilde b_i$, $a_i = a_i^0 + \tilde a_i$, $i = 1, \dots, m$,

where $g: \mathbb{R}^n \to \mathbb{R}^p$ is a nonlinear vector function which satisfies the conditions:

(xv) $M(g(a^0)e^{\lambda^Ta^0})$ and $M(g(a^0)a^{0T})$ exist;

(xvi) $\overline{\|g(a^0)\|^2\,(1 + \|a^0\|^2 + e^{2\lambda^Ta^0})} = o(m)$ as $m \to \infty$.

Then under $H_{1,m}$ we have

$\frac1{\sqrt m}\sum_{i=1}^m\big(b_i - \hat X^Ts(a_i)\big)e^{\lambda^Ta_i} \xrightarrow{d} N(C, \Sigma_T)$,

where the vector $C$ is found below.

Now we define the noncentral chi-squared distribution $\chi_p^2(\tau)$ with $p$ degrees of freedom and noncentrality parameter $\tau$.

Definition. For $p \ge 1$ and $\tau \ge 0$, let $\chi_p^2(\tau) \stackrel{d}{=} \|N(\tau e, I_p)\|^2$, where $e \in \mathbb{R}^p$, $\|e\| = 1$; equivalently, $\chi_p^2(\tau) \stackrel{d}{=} (\gamma_1 + \tau)^2 + \sum_{i=2}^p\gamma_i^2$, where the $\gamma_i$ are independent standard normal variables.

Theorem 2. Suppose that all the conditions of Theorem 1 and conditions (xv), (xvi) are satisfied. Then, under $H_{1,m}$,

(14) $T_m^2 \xrightarrow{d} \chi_p^2\big(\|\Sigma_T^{-1/2}C\|\big)$, where $C := \mu_0\big(M(g(a^0)e^{\lambda^Ta^0}) - M(g(a^0)a^{0T})\,V^{-1}M(a^0e^{\lambda^Ta^0})\big)$.

Here $\chi_p^2(\|\Sigma_T^{-1/2}C\|)$ is a noncentral chi-squared random variable with $p$ degrees of freedom and noncentrality parameter $\|\Sigma_T^{-1/2}C\|$.

From Theorem 2 we can find the asymptotic power of the test under the local alternative (13). It is easy to see that the asymptotic power of the test is an increasing function of $\|\Sigma_T^{-1/2}C\|$: the larger $\|\Sigma_T^{-1/2}C\|$, the more powerful the test.
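In particular, the asymptotic power under (13) equals $P\{\chi_p^2(\tau) > \chi_{p\alpha}^2\}$ with $\tau = \|\Sigma_T^{-1/2}C\|$, which can be evaluated numerically; a sketch (our illustration; note that SciPy parametrizes the noncentral chi-squared by the sum of squared means, i.e. by $\tau^2$):

```python
from scipy.stats import chi2, ncx2

def asymptotic_power(tau, p, alpha=0.05):
    """Asymptotic power of the test under the local alternative (13).

    tau = ||Sigma_T^{-1/2} C|| in the paper's convention
    chi2_p(tau) =d ||N(tau * e, I_p)||^2; SciPy's ncx2 expects nc = tau**2.
    """
    crit = chi2.ppf(1 - alpha, df=p)         # P(chi2_p > crit) = alpha
    return ncx2.sf(crit, df=p, nc=tau ** 2)  # P(chi2_p(tau) > crit)
```

As expected, this function is increasing in `tau`, in line with the monotonicity noted above.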
Since in the present paper the vector $\lambda$ is chosen arbitrarily and the function $g$ is unknown, it is reasonable to consider the next two problems.

1) Assume that the weight function $w(a) = e^{\lambda^Ta}$ is fixed. We discuss for which $g$ the power is the largest. For simplicity we suppose that $\{a_i^0, i \ge 1\}$ are i.i.d. random vectors, independent of $\{\tilde a_i, \tilde b_i, i \ge 1\}$, and $a^0 \stackrel{d}{=} a_i^0$. Then

$\|\Sigma_T^{-1/2}C\| = \mu_0\big\|\Sigma_T^{-1/2}\big[E(g(a^0)e^{\lambda^Ta^0}) - E(g(a^0)a^{0T})E(a^0a^{0T})^{-1}E(a^0e^{\lambda^Ta^0})\big]\big\| = \mu_0\big\|E\big(\Sigma_T^{-1/2}g(a^0)h_\lambda(a^0)\big)\big\|.$

Here $h_\lambda$ is defined from the expansion $e^{\lambda^Ta^0} = z^Ta^0 + h_\lambda(a^0)$, $z \in \mathbb{R}^n$, with $Eh_\lambda(a^0)(v^Ta^0) = 0$ for all $v \in \mathbb{R}^n$. The ratio $\|\Sigma_T^{-1/2}C\|^2/\|\Sigma_T^{-1/2}g(a^0)\|_{L_2}^2$ is maximal if $g(a^0) = h_\lambda(a^0)w$ for a certain nonrandom $w \in \mathbb{R}^p$, $w \ne 0$. We have

$h_\lambda(a^0) = e^{\lambda^Ta^0} - E(e^{\lambda^Ta^0}a^{0T})\,E(a^0a^{0T})^{-1}a^0,$

and its consistent estimator is

$\hat h_\lambda(a^0) = e^{\lambda^Ta^0} - \frac1{\mu_0}\,\overline{e^{\lambda^Ta}s(a)^T}\,\bar H^{-1}a^0,$

since $\overline{e^{\lambda^Ta}s(a)^T}/\mu_0 \xrightarrow{P} E(e^{\lambda^Ta^0}a^{0T})$ and $\bar H \xrightarrow{P} E(a^0a^{0T})$. The function $\hat h_\lambda(a^0)w$, $w \ne 0$, is an asymptotically optimal choice of the function $g$ for the local alternative (13) when the weight function is fixed.

2) Now consider the second problem. Let the function $g$ be fixed, and let us choose the weight function $w(a) = e^{\lambda^Ta}$ optimally. We have to maximize the function $\|\Sigma_T^{-1/2}C(\lambda)\|^2$ over $\lambda \in \mathbb{R}^n$ with nonzero components. Here the vector function $C = C(\lambda)$ is given in (14), provided all the corresponding moments of the random vectors $\{\tilde a_i, a_i^0, i \ge 1\}$ exist. This is a nonlinear problem, and it can be solved numerically. Of course, one has to use approximations of $\|\Sigma_T^{-1/2}C(\lambda)\|^2$ constructed from the data.

5. Conclusion

We constructed a goodness-of-fit test for a multivariate errors-in-variables model in which the covariance structure of the errors $\tilde b$ is unknown, while the exponential moments and the covariance structure of the errors $\tilde a$ are known. Using an exponential weight function, we obtained a statistic which is asymptotically chi-squared under the null hypothesis. A local alternative hypothesis is introduced, under which the test statistic has a noncentral chi-squared asymptotic distribution. We discussed for which local alternatives the power of the test is the largest.

Appendix

Proof of Lemma 1. First we prove (4). With probability tending to 1 as $m \to \infty$, we have $\bar H\hat X = \overline{ab^T}$. Hence

(15) $(\overline{a^0a^{0T}})^{-1}\big(\overline{a^0a^{0T}} + \overline{\tilde aa^{0T}} + \overline{a^0\tilde a^T} + \overline{\tilde a\tilde a^T} - E\tilde a\tilde a^T\big)\hat X = (\overline{a^0a^{0T}})^{-1}\big(\overline{a^0a^{0T}}X + \overline{\tilde aa^{0T}}X + \overline{a^0\tilde b^T} + \overline{\tilde a\tilde b^T}\big),$

or $V_m^{-1}\bar H\hat X = V_m^{-1}\overline{ab^T}$, where $V_m := \overline{a^0a^{0T}}$ is nonsingular for $m > m_0$ and $V_m \to V$ as $m \to \infty$. We show that

(16) $V_m^{-1}\big(\overline{a^0a^{0T}} + \overline{\tilde aa^{0T}} + \overline{a^0\tilde a^T} + \overline{\tilde a\tilde a^T} - E\tilde a\tilde a^T\big) \xrightarrow{P} I_n$,

(17) $V_m^{-1}\big(\overline{\tilde aa^{0T}}X + \overline{a^0\tilde b^T} + \overline{\tilde a\tilde b^T}\big) \xrightarrow{P} 0$.

We deal with each summand in (16) separately. We have $\|V_m^{-1}\,\overline{\tilde aa^{0T}}\| \le \|V_m^{-1}\|\cdot\|\overline{\tilde aa^{0T}}\|$. Since $V_m$ is a nonsingular matrix, $\|V_m^{-1}\| \le \mathrm{const}\cdot\lambda_{\min}^{-1}(V_m)$. Since the errors $\tilde a_i$ are independent and centered,

$E\|\overline{\tilde aa^{0T}}\|^2 = E\Big\|\frac1m\sum_{i=1}^m\tilde a_ia_i^{0T}\Big\|^2 = \frac1{m^2}\sum_{j,k=1}^nE\Big(\sum_{i=1}^m\tilde a_{ij}a_{ik}^0\Big)^2 = \frac1{m^2}\sum_{j,k=1}^n\sum_{i=1}^mE\tilde a_{ij}^2\,(a_{ik}^0)^2 \le \frac1m\cdot\|S_{\tilde a}\|\cdot\mathrm{const}$

by (ii); therefore $\|\overline{\tilde aa^{0T}}\| = O_p(1)/\sqrt m$. By (ii) we have

(18) $\|(\overline{a^0a^{0T}})^{-1}\,\overline{a^0\tilde a^T}\| = \dfrac{O_p(1)}{\sqrt m\,\lambda_{\min}(V_m)}$.

(19) Similarly, $\|(\overline{a^0a^{0T}})^{-1}\,\overline{\tilde aa^{0T}}\| = \dfrac{O_p(1)}{\sqrt m\,\lambda_{\min}(V_m)}$.

Next, from (i) we get

$E\|\overline{\tilde a\tilde a^T} - E\tilde a\tilde a^T\|^2 = \frac1{m^2}\sum_{j,k=1}^nE\Big(\sum_{i=1}^m(\tilde a_{ij}\tilde a_{ik} - E\tilde a_{ij}\tilde a_{ik})\Big)^2 = \frac1{m^2}\sum_{j,k=1}^n\sum_{i=1}^mE(\tilde a_{ij}\tilde a_{ik} - E\tilde a_{ij}\tilde a_{ik})^2 = \frac{O(1)}m.$

Therefore

(20) $\|(\overline{a^0a^{0T}})^{-1}(\overline{\tilde a\tilde a^T} - E\tilde a\tilde a^T)\| = \dfrac{O_p(1)}{\sqrt m\,\lambda_{\min}(V_m)}$.

By assumption b) and (ii) we get

(21) $E\|\overline{a^0\tilde b^T}\|^2 = \frac1{m^2}\sum_{j,k=1}^nE\Big(\sum_{i=1}^ma_{ij}^0\tilde b_{ik}\Big)^2 = \frac{O(1)}m$, thus $\|(\overline{a^0a^{0T}})^{-1}\,\overline{a^0\tilde b^T}\| = \dfrac{O_p(1)}{\sqrt m\,\lambda_{\min}(V_m)}$.

Similarly we obtain for the last residual:

(22) $\|(\overline{a^0a^{0T}})^{-1}\,\overline{\tilde a\tilde b^T}\| = \dfrac{O_p(1)}{\sqrt m\,\lambda_{\min}(V_m)}$.

Therefore, relations (18) to (20) yield the convergence (16), and relations (19), (21), and (22) yield the convergence (17). Then (15) implies the desired convergence $\hat X \xrightarrow{P} X$ as $m \to \infty$.

Now we prove the convergence (5). We have

$\overline{bb^T} = \overline{(X^Ta^0 + \tilde b)(X^Ta^0 + \tilde b)^T} = X^T\overline{a^0a^{0T}}X + X^T\overline{a^0\tilde b^T} + \overline{\tilde ba^{0T}}X + \overline{\tilde b\tilde b^T},$

$\overline{ba^T} = \overline{(X^Ta^0 + \tilde b)a^T} = X^T\overline{a^0a^{0T}} + X^T\overline{a^0\tilde a^T} + \overline{\tilde b\tilde a^T} + \overline{\tilde ba^{0T}},$

and $\hat X = X + o_p(1)$; then

$\hat S_{\tilde b} = X^T\overline{a^0\tilde b^T} + \overline{\tilde b\tilde b^T} - X^T\overline{a^0\tilde a^T}X - \overline{\tilde b\tilde a^T}X + o_p(1).$

From the proof of the first part, $X^T\overline{a^0\tilde b^T} - X^T\overline{a^0\tilde a^T}X - \overline{\tilde b\tilde a^T}X = O_p(1)/\sqrt m$. Moreover, $\overline{\tilde b\tilde b^T} \xrightarrow{P} E\tilde b\tilde b^T = S_{\tilde b}$. As a result we obtain $\hat S_{\tilde b} \xrightarrow{P} S_{\tilde b}$ as $m \to \infty$. □
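A quick Monte Carlo sanity check of the convergences (4) and (5) (our illustration only; Gaussian errors and a random design are used here for convenience, whereas the paper treats a nonrandom design):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 20000, 2, 1
X = np.array([[1.0], [-2.0]])               # true parameter
A0 = rng.uniform(-1.0, 1.0, size=(m, n))    # latent design points
S_a = 0.1 * np.eye(n)                       # known covariance of a~
A = A0 + rng.multivariate_normal(np.zeros(n), S_a, size=m)
B = A0 @ X + 0.2 * rng.standard_normal((m, p))   # S_b = 0.04 I_p, unknown
H_bar = A.T @ A / m - S_a
X_hat = np.linalg.solve(H_bar, A.T @ B / m)      # estimator (3)
S_b_hat = B.T @ B / m - (B.T @ A / m) @ X_hat    # estimator (5)
print(X_hat.ravel())   # approaches [1, -2] as m grows
print(S_b_hat)         # approaches 0.04
```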
Proof of Lemma 2. We substitute the estimator (3) into the statistic (9):

(23) $T_m^0 = \overline{(b - \hat X^Ts(a))e^{\lambda^Ta}} = \overline{be^{\lambda^Ta}} - \overline{ba^T}\,\bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}} = \overline{\tilde b\big(e^{\lambda^Ta} - a^T\bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}}\big)} + X^T\big(\overline{a^0e^{\lambda^Ta}} - \overline{a^0a^T}\,\bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}}\big) =: F + X^TG.$

First we investigate the vector $\sqrt mF$. Since $\bar H = \overline{aa^T} - E\tilde a\tilde a^T \xrightarrow{P} V$ as $m \to \infty$, we denote $\Lambda = \bar H - V$, $\Lambda \approx 0$. The approximate equality "$\approx$" means equality up to summands converging to 0 in probability. Thus $\bar H^{-1} = (I_n + V^{-1}\Lambda)^{-1}V^{-1} = V^{-1} - V^{-1}\Lambda V^{-1} + r_m$, where $\|r_m\| = \|\Lambda\|^2O_p(1)$. We show that $\sqrt m\,\|\Lambda\|^2 \approx 0$. From (i), (ii), and (vi) we have

(24) $E\|\bar H - V\|^2 = E\|\overline{\tilde aa^{0T}} + \overline{a^0\tilde a^T} + \overline{\tilde a\tilde a^T} - E\tilde a\tilde a^T + \overline{a^0a^{0T}} - V\|^2 \le \frac{O(1)}m + \frac{o(1)}{\sqrt m}.$

Therefore $\sqrt m\,\|\Lambda\|^2 \approx 0$ and $\|r_m\| = o_p(1)/\sqrt m$. Moreover,

$\sqrt m\,\overline{\tilde ba^T} = \sqrt m\big(\overline{\tilde b\tilde a^T} + \overline{\tilde ba^{0T}}\big) = \sqrt m\cdot\frac{O_p(1)}{\sqrt m} = O_p(1),$

$\overline{s(a)e^{\lambda^Ta}} = \overline{(a^0 + \tilde a - \mu_1/\mu_0)e^{\lambda^T\tilde a}e^{\lambda^Ta^0}} \xrightarrow{P} M(a^0e^{\lambda^Ta^0})\mu_0,$

therefore $\overline{s(a)e^{\lambda^Ta}} = O_p(1)$. Then we get $\sqrt m\,\overline{\tilde ba^T}\,\bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}} \approx \sqrt m\,\overline{\tilde ba^T}\,V^{-1}M(a^0e^{\lambda^Ta^0})\mu_0$. This implies

(25) $\sqrt mF \approx \sqrt m\,\overline{\tilde b\big(e^{\lambda^Ta} - a^TV^{-1}M(a^0e^{\lambda^Ta^0})\mu_0\big)} = \sqrt m\,\overline{\tilde b(e^{\lambda^Ta} - a^Tf)},$

where $f$ is the vector defined in Lemma 2.

Next, consider $\sqrt mG$, where $G$ comes from (23):

$\sqrt mG \approx \sqrt m\big(\overline{a^0e^{\lambda^Ta}} - \overline{a^0a^T}\,V^{-1}\,\overline{s(a)e^{\lambda^Ta}} + \overline{a^0a^T}\,V^{-1}\Lambda V^{-1}\,\overline{s(a)e^{\lambda^Ta}}\big).$

Since $m^{1/4}\|\Lambda\| \approx 0$, we have $m^{1/4}(\overline{a^0a^T} - V) \approx 0$ and $\overline{s(a)e^{\lambda^Ta}} = O_p(1)$. Then

$\sqrt m\,\overline{a^0a^T}\,V^{-1}\Lambda V^{-1}\,\overline{s(a)e^{\lambda^Ta}} \approx \sqrt m\,\Lambda V^{-1}\,\overline{s(a)e^{\lambda^Ta}} = \sqrt m\big(\bar HV^{-1}\,\overline{s(a)e^{\lambda^Ta}} - \overline{s(a)e^{\lambda^Ta}}\big),$

$\sqrt mG \approx \sqrt m\big(\overline{(a^0 - s(a))e^{\lambda^Ta}} + \overline{(H - a^0a^T)}\,V^{-1}\,\overline{s(a)e^{\lambda^Ta}}\big).$

Since $\sqrt m\,\overline{(H - a^0a^T)} = O_p(1)$ and $\overline{s(a)e^{\lambda^Ta}} \xrightarrow{P} M(a^0e^{\lambda^Ta^0})\mu_0$, we also have

(26) $\sqrt mG \approx \sqrt m\,\overline{\big((a^0 - s(a))e^{\lambda^Ta} + (H - a^0a^T)f\big)}.$

Using (25) and (26), we obtain (10). □

Proof of Lemma 3. By Lemma 2 we have

(27) $\sqrt m\,T_m^0 \approx \frac1{\sqrt m}\sum_{i=1}^mz_i,$

where $z_i := \tilde b_i(e^{\lambda^Ta_i} - a_i^Tf) + X^T\big((a_i^0 - s(a_i))e^{\lambda^Ta_i} + (H_i - a_i^0a_i^T)f\big)$ are independent random vectors and $Ez_i = 0$. Represent the vectors $z_i$ as

$z_i = \tilde b_i(e^{\lambda^Ta_i} - a_i^Tf) + X^T[I_n,\,f^T\otimes I_n]\begin{bmatrix}(a_i^0 - s(a_i))e^{\lambda^Ta_i}\\ \mathrm{vec}(H_i) - \mathrm{vec}(a_i^0a_i^T)\end{bmatrix}.$

Then

$Ez_iz_i^T = S_{\tilde b}\cdot E(e^{\lambda^Ta_i} - a_i^Tf)^2 + X^T[I_n,\,f^T\otimes I_n]\cdot\mathrm{cov}\begin{bmatrix}(a_i^0 - s(a_i))e^{\lambda^Ta_i}\\ \mathrm{vec}(H_i) - \mathrm{vec}(a_i^0a_i^T)\end{bmatrix}\cdot[I_n,\,f^T\otimes I_n]^TX,$

therefore $\lim_{m\to\infty}\frac1m\sum_{i=1}^mEz_iz_i^T = \Sigma_T$. The limit exists due to conditions (i), (ii), (vi), and (viii). Conditions (vii) and (ix) guarantee the following boundedness: $\exists\,\delta > 0$: $\frac1m\sum_{i=1}^mE\|z_i\|^{2+\delta} \le \mathrm{const}$. Thus all the conditions of the CLT in the Lyapunov form are satisfied, and $\frac1{\sqrt m}\sum_{i=1}^mz_i \xrightarrow{d} N(0, \Sigma_T)$. From this and from (27), using the Slutsky lemma we get Lemma 3. □

Proof of Lemma 4. First we prove by induction that there exists a polynomial $p_1(a)$, $a \in \mathbb{R}^n$, of degree $k$ such that

(28) $E(p_1(a)e^{\lambda^Ta}) = p(a^0)e^{\lambda^Ta^0}.$

1. Let $p(a^0)$ be a polynomial of degree 0. Since $Ee^{\lambda^Ta} = e^{\lambda^Ta^0}Ee^{\lambda^T\tilde a} = e^{\lambda^Ta^0}\mu_0$, the polynomial of degree 0, $p_1(a) = p(a^0)\mu_0^{-1}$, satisfies (28).

2. Suppose that for every polynomial $p(a^0)$ of degree less than $k$ there exists $p_1(a)$ with $\deg p_1 < k$ such that (28) is satisfied.

3. We prove the existence of such a polynomial for degree $k$. We have $E(p(a)e^{\lambda^Ta}) = p(a^0)e^{\lambda^Ta^0}\mu_0 + e^{\lambda^Ta^0}E\big(p^*(a^0, \tilde a)e^{\lambda^T\tilde a}\big)$, where $p^*$ is some polynomial of two variables. The expectation $E(p^*(a^0, \tilde a)e^{\lambda^T\tilde a})$ can be represented as a polynomial $p_2(a^0)$ with $\deg p_2 < \deg p = k$. Therefore, by part 2 of the proof, for $p_2(a^0)/\mu_0$ there exists a polynomial $p_1^*(a)$ of degree less than $k$ such that (28) is satisfied. Moreover,

$E\big(p(a)e^{\lambda^Ta}/\mu_0 - p_1^*(a)e^{\lambda^Ta}\big) = p(a^0)e^{\lambda^Ta^0}.$

Therefore $p_1(a) := p(a)/\mu_0 - p_1^*(a)$, $\deg p_1 = k$, satisfies (28).

Now we prove the convergence (12) for the constructed polynomial $p_1(a)$. In fact, we have to prove the equality

(29) $\frac1m\sum_{i=1}^mp_1(a_i)e^{\lambda^Ta_i} - \frac1m\sum_{i=1}^mp(a_i^0)e^{\lambda^Ta_i^0} = o_p(1).$

Consider the difference $\frac1m\sum_{i=1}^m\big(p_1(a_i)e^{\lambda^Ta_i} - p(a_i^0)e^{\lambda^Ta_i^0}\big) =: \frac1m\sum_{i=1}^mz_i$, $Ez_i = 0$,

$D\Big(\frac1m\sum_{i=1}^mz_i\Big) = E\Big(\frac1m\sum_{i=1}^mz_i\Big)^2 = \frac1{m^2}\sum_{i=1}^mEz_i^2,$

$Ez_i^2 = E\big(p_1(a_i)e^{\lambda^Ta_i}\big)^2 - \big(p(a_i^0)e^{\lambda^Ta_i^0}\big)^2 \le E\big(p_1^2(a_i)e^{2\lambda^Ta_i}\big), \quad i = 1, \dots, m.$

By condition (xii) we have

$Ez_i^2 \le \mathrm{const}\cdot E\big[(1 + \|a_i\|^{2k})e^{2\lambda^Ta_i}\big] \le \mathrm{const}\cdot E\big[(1 + \|a_i^0\|^{2k} + \|\tilde a_i\|^{2k})e^{2\lambda^Ta_i^0}e^{2\lambda^T\tilde a_i}\big] \le \mathrm{const}\cdot(1 + \|a_i^0\|^{2k})e^{2\lambda^Ta_i^0}.$

Then by condition (xi) we get, as $m \to \infty$,

$\frac1{m^2}\sum_{i=1}^mEz_i^2 \le \frac{\mathrm{const}}{m^2}\sum_{i=1}^m(1 + \|a_i^0\|^{2k})e^{2\lambda^Ta_i^0} = \frac{\mathrm{const}}m\,\overline{(1 + \|a^0\|^{2k})e^{2\lambda^Ta^0}} \to 0.$

Thus we obtain (29), and as a result we get the convergence (12). □

Proof of Theorem 1. From the conditions of Theorem 1 we get that $\Sigma_T$ is positive definite. Then $m\,\|\Sigma_T^{-1/2}T_m^0\|^2 \xrightarrow{d} \chi_p^2$. Since $\hat\Sigma_T$ is a consistent estimator of $\Sigma_T$, we have $T_m^2 = m\,\|\hat\Sigma_T^{-1/2}T_m^0\|^2 \xrightarrow{d} \chi_p^2$ under the null hypothesis. □

Proof of Theorem 2. Assume the hypothesis $H_{1,m}$. Then

(30) $\hat X = \bar H^{-1}\,\overline{ab^T} + \bar H^{-1}\frac1{\sqrt m}\,\overline{ag(a^0)^T},$

where the first summand corresponds to the responses without the drift term. Since $\bar H^{-1}\,\overline{ab^T} \xrightarrow{P} X$ under $H_0$, and since $\bar H^{-1} = O_p(1)$, $\frac1{\sqrt m}\,\overline{a^0g(a^0)^T} \to 0$, and $E\big\|\frac1{\sqrt m}\,\overline{\tilde ag(a^0)^T}\big\|^2 \to 0$ as $m \to \infty$, we obtain from (30) that $\hat X \xrightarrow{P} X$ under $H_{1,m}$. However, for the statistic $T_m^0$ we have

(31) $\sqrt m\,T_m^0\big|_{H_{1,m}} = \sqrt m\,T_m^0\big|_{H_0} + \big(\overline{g(a^0)e^{\lambda^Ta}} - \overline{g(a^0)a^T}\,\bar H^{-1}\,\overline{s(a)e^{\lambda^Ta}}\big),$

where $T_m^0|_{H_{1,m}}$ and $T_m^0|_{H_0}$ are the values of $T_m^0$ under the corresponding hypotheses $H_{1,m}$ and $H_0$. Now consider the last summand in (31). By conditions (xv) and (xvi) we have

$\overline{g(a^0)e^{\lambda^Ta}} \approx \overline{g(a^0)Ee^{\lambda^Ta}} = \mu_0\,\overline{g(a^0)e^{\lambda^Ta^0}} \to \mu_0M(g(a^0)e^{\lambda^Ta^0})$

and

$\overline{s(a)e^{\lambda^Ta}} \xrightarrow{P} \mu_0M(a^0e^{\lambda^Ta^0}), \quad \bar H^{-1} \xrightarrow{P} V^{-1}, \quad \overline{g(a^0)a^T} \approx \overline{g(a^0)Ea^T} = \overline{g(a^0)a^{0T}} \to M(g(a^0)a^{0T}),$

as $m \to \infty$. Relation (31) yields

(32) $\sqrt m\,T_m^0\big|_{H_{1,m}} \xrightarrow{d} N(C, \Sigma_T),$

where $C$ is the vector defined in (14). The conditions of Theorem 1 are satisfied, therefore $\Sigma_T > 0$.
From (32) we have the convergence

(33) $m\,\big\|\Sigma_T^{-1/2}\,T_m^0\big|_{H_{1,m}}\big\|^2 \xrightarrow{d} \chi_p^2\big(\|\Sigma_T^{-1/2}C\|\big).$

Further, due to conditions (xv) and (xvi) we have $\hat S_{\tilde b} \xrightarrow{P} S_{\tilde b}$ under $H_{1,m}$, and $\hat\Sigma_T \xrightarrow{P} \Sigma_T$ under $H_{1,m}$ (only the observations $a_i$, $i = 1, \dots, m$, were used in the construction of $\widehat{\mathrm{cov}}$ in (11), and they do not change under the local alternative $H_{1,m}$). Thus, by relation (33), $T_m^2\big|_{H_{1,m}} \xrightarrow{d} \chi_p^2(\|\Sigma_T^{-1/2}C\|)$. □

References

1. Kukush, A. and Van Huffel, S., Consistency of element-wise weighted total least squares estimator in a multivariate errors-in-variables model AX = B, Metrika, 59 (2004), no. 1, 75–97.
2. Kukush, A., Markovsky, I., and Van Huffel, S., Consistency of the structured total least squares estimator in a multivariate errors-in-variables model, Journal of Statistical Planning and Inference, 133 (2005), no. 2, 315–358.
3. Kukush, A., Markovsky, I., and Van Huffel, S., Estimation in a linear multivariate measurement error model with clustering in the regressor, Internal Report 05-170, ESAT-SISTA, K.U.Leuven (Leuven, Belgium), 2005.
4. Zhu, L., Cui, H., and Ng, K. W., Testing lack-of-fit for linear errors-in-variables model, Acta Appl. Math. (to appear).
5. Kukush, A. G. and Cheng, C.-L., A goodness-of-fit test for a polynomial errors-in-variables model, Ukrainian Mathematical Journal, 56 (2004), no. 4, 527–543.
6. Cheng, C.-L. and Schneeweiss, H., Polynomial regression with errors in the variables, J. R. Statist. Soc. B, 60 (1998), 189–199.

Kyiv National Taras Shevchenko University, Kyiv, Ukraine
E-mail address: alexander kukush@univ.kiev.ua

Kyiv National Taras Shevchenko University, Kyiv, Ukraine
E-mail address: poleha@bigmir.net