Central Limit Theorem for Linear Eigenvalue Statistics of the Wigner and Sample Covariance Random Matrices
Saved in:
Date: 2011
Author: Shcherbina, M.
Format: Article
Language: English
Published: Фізико-технічний інститут низьких температур ім. Б.І. Вєркіна НАН України, 2011
Journal: Журнал математической физики, анализа, геометрии
Online access: http://dspace.nbuv.gov.ua/handle/123456789/106671
Cite as: Central Limit Theorem for Linear Eigenvalue Statistics of the Wigner and Sample Covariance Random Matrices / M. Shcherbina // Журнал математической физики, анализа, геометрии. — 2011. — Vol. 7, No. 2. — P. 176–192. — Bibliography: 15 titles. — English.
Journal of Mathematical Physics, Analysis, Geometry
2011, vol. 7, No. 2, pp. 176–192
Central Limit Theorem for Linear Eigenvalue Statistics
of the Wigner and Sample Covariance Random Matrices
M. Shcherbina
Mathematics Division, B. Verkin Institute for Low Temperature Physics and Engineering
National Academy of Sciences of Ukraine
47 Lenin Ave., Kharkiv 61103, Ukraine
E-mail: Shcherbi@ilt.kharkov.ua
Received January 20, 2011
We consider two classical ensembles of random matrix theory, the Wigner matrices and the sample covariance matrices, and prove the Central Limit Theorem for linear eigenvalue statistics under rather weak (compared with previously known results) conditions on the number of derivatives of the test functions and also on the number of moments of the matrix entries. Moreover, we develop a universal method which allows one to obtain automatically bounds for the variance of differentiable test functions whenever there is a bound for the variance of the trace of the resolvent of the random matrix. The method is applicable not only to the Wigner and sample covariance matrices, but to any ensemble of Hermitian or real symmetric random matrices.
Key words: random matrices, Wigner matrix, sample covariance matrix,
Central Limit Theorem.
Mathematics Subject Classification 2000: 15A52 (primary); 15A57
(secondary).
1. Introduction
The Wigner Ensemble of real symmetric matrices is a family of n × n real symmetric matrices M of the form

\[
M = n^{-1/2}W, \tag{1.1}
\]

where $W = \{w^{(n)}_{jk}\}_{j,k=1}^{n}$ with $w^{(n)}_{jk} = w^{(n)}_{kj} \in \mathbb{R}$, $1 \le j \le k \le n$, and $w^{(n)}_{jk}$, $1 \le j \le k \le n$, are independent random variables such that

\[
E\{w^{(n)}_{jk}\} = 0, \quad E\{(w^{(n)}_{jk})^2\} = 1, \; j \ne k, \quad E\{(w^{(n)}_{jj})^2\} = w_2. \tag{1.2}
\]
© M. Shcherbina, 2011
Here and below we denote by $E\{\cdot\}$ the averaging with respect to all random parameters of the problem. Let $\{\lambda^{(n)}_j\}_{j=1}^{n}$ be the eigenvalues of $M$. Since the pioneering work of Wigner [15] it is known that if we consider the linear eigenvalue statistic corresponding to any continuous test function $\varphi$:

\[
\mathcal{N}_n[\varphi] = \sum_{j=1}^{n}\varphi(\lambda^{(n)}_j), \tag{1.3}
\]

then $n^{-1}\mathcal{N}_n[\varphi]$ converges in probability to the limit

\[
\lim_{n\to\infty} n^{-1}\mathcal{N}_n[\varphi] = \int \varphi(\lambda)\,\rho_{sc}(\lambda)\,d\lambda, \tag{1.4}
\]

where $\rho_{sc}(\lambda)$ is the famous semicircle density

\[
\rho_{sc}(\lambda) = \frac{1}{2\pi}\sqrt{4-\lambda^2}\,\mathbf{1}_{[-2,2]}(\lambda).
\]
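Purely as an illustrative numerical sketch (not part of the paper), one can sample a Wigner matrix as in (1.1) and compare the normalized linear statistic $n^{-1}\mathcal{N}_n[\varphi]$ with the semicircle integral (1.4); numpy is assumed, and all function names, the seed, and the test function are choices of the sketch only.

```python
import numpy as np

def wigner_matrix(n, rng):
    """Real symmetric Wigner matrix M = n^{-1/2} W with unit-variance off-diagonal entries."""
    w = rng.standard_normal((n, n))
    W = np.triu(w) + np.triu(w, 1).T          # symmetrize; the limiting law does not depend on the diagonal
    return W / np.sqrt(n)

def semicircle_integral(phi, num=20000):
    """Numerically evaluate the integral of phi(x) * rho_sc(x) over [-2, 2]."""
    x = np.linspace(-2.0, 2.0, num)
    rho = np.sqrt(np.maximum(4.0 - x**2, 0.0)) / (2.0 * np.pi)
    return np.trapz(phi(x) * rho, x)

rng = np.random.default_rng(0)
n = 2000
phi = lambda x: x**2 + np.cos(x)              # a smooth test function
eig = np.linalg.eigvalsh(wigner_matrix(n, rng))
print("n^{-1} N_n[phi] =", phi(eig).mean())
print("semicircle law  =", semicircle_integral(phi))
```

The two printed numbers should be close for large $n$, in agreement with (1.4).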
The result of this type, which is the analog of the Law of Large Numbers of classical probability theory, is normally the first step in studies of the eigenvalue distribution for any ensemble of random matrices. For the Wigner ensemble this result, obtained initially in [15] for Gaussian $W = \{w^{(n)}_{jk}\}_{j,k=1}^{n}$, was improved in [11], where the convergence of $\mathcal{N}_n(\lambda)$ to the semicircle law was shown under minimal conditions on the distribution of $W = \{w^{(n)}_{jk}\}_{j,k=1}^{n}$ (Lindeberg type conditions).
The second classical ensemble which we consider in the paper is the sample covariance matrix of the form

\[
M = n^{-1}XX^{*}, \tag{1.5}
\]

where $X$ is an $n \times m$ matrix whose entries $\{X^{(n)}_{jk}\}_{j=1,\dots,n,\;k=1,\dots,m}$ are independent random variables satisfying the conditions

\[
E\{X^{(n)}_{jk}\} = 0, \qquad E\{(X^{(n)}_{jk})^2\} = 1. \tag{1.6}
\]
Corresponding results on the convergence of normalized linear eigenvalue statis-
tics to integrals with the Marchenko–Pastur distribution were obtained in [10].
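As a hedged numerical illustration of this convergence (not taken from the paper), one can compare $n^{-1}\mathcal{N}_n[\varphi]$ for the model (1.5)-(1.6) with the corresponding Marchenko-Pastur integral for $c = m/n \ge 1$; all names, sizes, and the test function below are assumptions of the sketch.

```python
import numpy as np

def sample_covariance_matrix(n, m, rng):
    """M = n^{-1} X X^* for an n x m matrix X with i.i.d. standard normal entries, cf. (1.5)-(1.6)."""
    X = rng.standard_normal((n, m))
    return X @ X.T / n

def marchenko_pastur_integral(phi, c, num=20000):
    """Integral of phi against the MP density on [(1-sqrt(c))^2, (1+sqrt(c))^2], for c = m/n >= 1."""
    a_minus, a_plus = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
    x = np.linspace(a_minus, a_plus, num)
    rho = np.sqrt(np.maximum((a_plus - x) * (x - a_minus), 0.0)) / (2.0 * np.pi * x)
    return np.trapz(phi(x) * rho, x)

rng = np.random.default_rng(1)
n, m = 1500, 3000                       # c = m/n = 2
phi = lambda x: np.log1p(x)
eig = np.linalg.eigvalsh(sample_covariance_matrix(n, m, rng))
print("n^{-1} N_n[phi]      =", phi(eig).mean())
print("Marchenko-Pastur law =", marchenko_pastur_integral(phi, m / n))
```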
Central Limit Theorem (CLT) for fluctuations of linear eigenvalue statistics
is a natural second step in studies of the eigenvalue distribution of any ensem-
ble of random matrices. That is why there are a lot of papers, devoted to the
proofs of CLT for different ensembles of random matrices (see [1, 2, 6, 7, 9,
12–14]). CLT for the traces of resolvents for the classical Wigner and sample
covariance matrices was proved by Girko in 1975 (see [5] and references therein),
but the expression for the variance found by him was rather complicated. A simple expression for the covariance of the resolvent traces for the Wigner matrix in the case $E\{(w^{(n)}_{ii})^2\} = 2$ was found in [8]. CLT for polynomial test functions for some generalizations of the Wigner and sample covariance matrices was proved in [1] by using moment methods. CLT for real analytic test functions for the Wigner and sample covariance matrices was established in [2] under the additional assumptions that $E\{(w^{(n)}_{ii})^2\} = 2$, $E\{(w^{(n)}_{jk})^4\} = 3E^2\{(w^{(n)}_{jk})^2\} = 3$ (or $E\{(X^{(n)}_{jk})^4\} = 3E^2\{(X^{(n)}_{jk})^2\}$ for the model (1.5)). In the recent paper [9] CLT for the linear eigenvalue statistics of the Wigner and sample covariance matrix ensembles was proved under the assumptions that $E\{(w^{(n)}_{ii})^2\} = 2$ and that the third and the fourth moments of all entries are the same, but $E\{(w^{(n)}_{jk})^4\}$ is not necessarily 3. Moreover, the test functions studied in [9] are not supposed to be real analytic. It was assumed that the Fourier transform $\widehat{\varphi}$ of the test function $\varphi$ satisfies the inequality

\[
\int (1 + |k|^5)\,|\widehat{\varphi}(k)|\,dk < \infty, \tag{1.7}
\]

which means that $\varphi$ has more than 5 bounded derivatives.
In the present paper we prove CLT for the Wigner ensemble (1.1) under the following assumptions on the matrix entries:

\[
E\{(w^{(n)}_{jk})^4\} = w_4, \qquad \sup_{n}\,\sup_{1\le j<k\le n} E\{|w^{(n)}_{jk}|^{4+\varepsilon_1}\} = w_{4+\varepsilon_1} < \infty, \quad \varepsilon_1 > 0. \tag{1.8}
\]

We consider the test functions from the space $\mathcal{H}_s$, possessing the norm (cf. (1.7))

\[
\|\varphi\|_s^2 = \int (1 + 2|k|)^{2s}\,|\widehat{\varphi}(k)|^2\,dk, \quad s > 3/2, \qquad
\widehat{\varphi}(k) = \frac{1}{2\pi}\int e^{ikx}\varphi(x)\,dx. \tag{1.9}
\]
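As a hedged numerical illustration of definition (1.9) (not from the paper), the $\mathcal{H}_s$ norm of a rapidly decaying test function can be approximated via a discrete Fourier transform; the function name, the grid size, and the example function are assumptions of the sketch.

```python
import numpy as np

def h_s_norm(phi, s, half_width=40.0, num=2**16):
    """Approximate ||phi||_s from (1.9) for a function decaying fast at infinity.
    phi_hat(k) = (2*pi)^{-1} * integral of exp(ikx) phi(x) dx, approximated by a Riemann sum / FFT."""
    x, dx = np.linspace(-half_width, half_width, num, endpoint=False, retstep=True)
    k = 2.0 * np.pi * np.fft.fftfreq(num, d=dx)         # angular frequencies matching exp(ikx)
    # |phi_hat(k)|: the grid-offset phase factor drops out once the modulus is taken
    phi_hat_abs = np.abs(np.fft.ifft(phi(x))) * num * dx / (2.0 * np.pi)
    dk = 2.0 * np.pi / (num * dx)                       # uniform spacing of the k-grid
    return np.sqrt(np.sum((1.0 + 2.0 * np.abs(k))**(2 * s) * phi_hat_abs**2) * dk)

print("||exp(-x^2)||_{3/2+0.1} approx", h_s_norm(lambda x: np.exp(-x**2), 1.6))
```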
Theorem 1. Consider the Wigner model with entries satisfying condition (1.8). Let the real valued test function $\varphi$ satisfy the condition $\|\varphi\|_{3/2+\varepsilon} < \infty$, $\varepsilon > 0$. Then $\mathcal{N}^{\circ}_n[\varphi] = \mathcal{N}_n[\varphi] - E\{\mathcal{N}_n[\varphi]\}$ converges in distribution to the Gaussian random variable with zero mean and the variance

\[
\begin{aligned}
V[\varphi] ={}& \frac{1}{2\pi^2}\int_{-2}^{2}\int_{-2}^{2}
\Big(\frac{\varphi(\lambda_1)-\varphi(\lambda_2)}{\lambda_1-\lambda_2}\Big)^2\,
\frac{4-\lambda_1\lambda_2}{\sqrt{4-\lambda_1^2}\sqrt{4-\lambda_2^2}}\,d\lambda_1\,d\lambda_2 \\
&+ \frac{\kappa_4}{2\pi^2}\Big(\int_{-2}^{2}\varphi(\mu)\,\frac{2-\mu^2}{\sqrt{4-\mu^2}}\,d\mu\Big)^2
+ \frac{w_2-2}{4\pi^2}\Big(\int_{-2}^{2}\frac{\varphi(\mu)\,\mu}{\sqrt{4-\mu^2}}\,d\mu\Big)^2,
\end{aligned}
\tag{1.10}
\]

where $\kappa_4 = w_4 - 3$.
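Purely as a hedged numerical sketch (not part of the argument): for Gaussian entries one has $\kappa_4 = 0$ and $w_2 = 2$, so only the first term of (1.10) survives, and it can be compared with a Monte Carlo estimate of $\mathrm{Var}\{\mathcal{N}_n[\varphi]\}$. All names, sample sizes, and the test function below are assumptions of the sketch; the two printed values should be of comparable size for moderate $n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 400, 300
phi = lambda x: x**3 - 2.0 * x                    # smooth test function
dphi = lambda x: 3.0 * x**2 - 2.0                 # its derivative, for the diagonal of the difference quotient

def wigner_goe_like(n):
    """Wigner matrix (1.1) with Gaussian entries: off-diagonal variance 1, diagonal variance w_2 = 2."""
    w = rng.standard_normal((n, n))
    W = np.triu(w, 1) + np.triu(w, 1).T + np.sqrt(2.0) * np.diag(rng.standard_normal(n))
    return W / np.sqrt(n)

stats = np.array([phi(np.linalg.eigvalsh(wigner_goe_like(n))).sum() for _ in range(trials)])
print("empirical Var{N_n[phi]}        approx", stats.var(ddof=1))

# First term of (1.10), evaluated by a midpoint rule that keeps away from the edge singularity.
m = 2000
lam = np.linspace(-2.0, 2.0, m + 1)
lam = 0.5 * (lam[1:] + lam[:-1])
L1, L2 = np.meshgrid(lam, lam, indexing="ij")
diff = L1 - L2
DQ = np.where(np.abs(diff) > 1e-12, (phi(L1) - phi(L2)) / np.where(diff == 0.0, 1.0, diff), dphi(L1))
V = (DQ**2 * (4.0 - L1 * L2) / np.sqrt((4.0 - L1**2) * (4.0 - L2**2))).sum() * (4.0 / m)**2 / (2.0 * np.pi**2)
print("first term of V[phi] in (1.10) approx", V)
```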
Let us note that, similarly to the result of [9], it is easy to check that Theorem 1 remains valid if the second condition of (1.8) is replaced by the Lindeberg type condition for the fourth moments of the entries of $W$:

\[
\lim_{n\to\infty} L^{(4)}_n(\tau) = 0, \quad \forall \tau > 0, \tag{1.11}
\]

where

\[
L^{(4)}_n(\tau) = \frac{1}{n^2}\sum_{j,k=1}^{n} E\{(w^{(n)}_{jk})^4\,\mathbf{1}_{|w^{(n)}_{jk}| > \tau\sqrt{n}}\}. \tag{1.12}
\]

The proof will be the same as for Theorem 1, but everywhere below $n^{-\varepsilon_1/2}$ will be replaced by $L_n(\tau)/\tau^{\gamma}$ with some positive $\gamma$.
The proof of Theorem 1 is based on a combination of the resolvent approach with martingale bounds for the variance of the resolvent traces, used before by many authors, in particular by Girko (see [5] and references therein). An important advantage of our approach is that it is shown by the martingale difference method (see Proposition 2 below) that

\[
\mathrm{Var}\{\mathrm{Tr}\,G(z)\} \le C/|\Im z|^4, \qquad G(z) = (M - z)^{-1}, \tag{1.13}
\]

while in the previous papers the martingale method was used only to obtain bounds of the type $\mathrm{Var}\{\mathrm{Tr}\,G(z)\} \le nC(z)$. The bound (1.13) will be combined with the following inequality.
Proposition 1. For any $s > 0$ and any Hermitian or real symmetric matrix $M$,

\[
\mathrm{Var}\{\mathcal{N}_n[\varphi]\} \le C_s\|\varphi\|_s^2 \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int_{-\infty}^{\infty}\mathrm{Var}\{\mathrm{Tr}\,G(x+iy)\}\,dx. \tag{1.14}
\]
The proposition allows one to transform bounds for the variances of the resolvent traces into bounds for the variances of linear eigenvalue statistics of $\varphi \in \mathcal{H}_s$, where the value of $s$ depends on the exponent of $|\Im z|$ in the r.h.s. of (1.13). It is important that Proposition 1 has a rather general form and therefore is applicable to any ensemble of random matrices for which bounds of the type (1.13) (possibly with a different exponent of $|\Im z|$) are available. This makes Proposition 1 an important tool in proofs of CLT for linear eigenvalue statistics of different random matrices. The idea of Proposition 1 was taken from the paper [7], where a similar argument was used to study the first order correction terms of $n^{-1}E\{\mathcal{N}_n[\varphi]\}$ for matrix models. Having Proposition 1 in mind, one can prove CLT for any set of test functions dense in $\mathcal{H}_s$ and then extend the result to the whole of $\mathcal{H}_s$ by the standard procedure (see Proposition 3). In the present paper we use for this purpose a set of convolutions of integrable functions with the Poisson kernel (see (2.32) and (2.3)). This choice simplifies considerably the argument in the proof of CLT and makes the proof shorter than those in the previous papers [1, 2, 9].
The result for sample covariance matrices is very similar. We assume that the moments of the entries of $X$ from (1.5) satisfy the bounds

\[
E\{(X^{(n)}_{jk})^4\} = X_4, \qquad \sup_{n}\,\sup_{1\le j<k\le n} E\{|X^{(n)}_{jk}|^{4+\varepsilon_1}\} = X_{4+\varepsilon_1} < \infty, \quad \varepsilon_1 > 0. \tag{1.15}
\]
Theorem 2. Consider a random matrix (1.5)–(1.6) with the entries of $X$ satisfying condition (1.15). Let the real valued test function $\varphi$ satisfy the condition $\|\varphi\|_{3/2+\varepsilon} < \infty$, $\varepsilon > 0$. Then $\mathcal{N}^{\circ}_n[\varphi]$, in the limit $m, n \to \infty$, $m/n \to c \ge 1$, converges in distribution to the Gaussian random variable with zero mean and the variance

\[
\begin{aligned}
V_{SC}[\varphi] ={}& \frac{1}{2\pi^2}\int_{a_-}^{a_+}\int_{a_-}^{a_+}
\Big(\frac{\Delta\varphi}{\Delta\lambda}\Big)^2\,
\frac{\big(4c - (\lambda_1 - a_m)(\lambda_2 - a_m)\big)\,d\lambda_1\,d\lambda_2}
{\sqrt{4c - (\lambda_1 - a_m)^2}\,\sqrt{4c - (\lambda_2 - a_m)^2}} \\
&+ \frac{\kappa_4}{4c\pi^2}\Big(\int_{a_-}^{a_+}\varphi(\mu)\,\frac{\mu - a_m}{\sqrt{4c - (\mu - a_m)^2}}\,d\mu\Big)^2,
\end{aligned}
\tag{1.16}
\]

where $\dfrac{\Delta\varphi}{\Delta\lambda} = \dfrac{\varphi(\lambda_1)-\varphi(\lambda_2)}{\lambda_1-\lambda_2}$, $\kappa_4 = X_4 - 3$ is the fourth cumulant of the entries of $X$, $a_\pm = (1\pm\sqrt{c})^2$, and $a_m = \frac{1}{2}(a_+ + a_-)$.
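Again purely as a hedged numerical sketch (not part of the paper): for Gaussian entries of $X$ one has $\kappa_4 = 0$, so only the first term of (1.16) survives, and it can be compared with a Monte Carlo estimate. All names, sizes, and the test function are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, trials = 300, 600, 300                        # c = m/n = 2
c = m / n
phi = lambda x: np.log1p(x)
dphi = lambda x: 1.0 / (1.0 + x)

stats = []
for _ in range(trials):
    X = rng.standard_normal((n, m))
    stats.append(phi(np.linalg.eigvalsh(X @ X.T / n)).sum())
print("empirical Var{N_n[phi]}           approx", np.var(stats, ddof=1))

# First term of (1.16) by a midpoint rule; for Gaussian entries kappa_4 = 0.
a_minus, a_plus = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
a_m = 0.5 * (a_plus + a_minus)
grid = np.linspace(a_minus, a_plus, 2001)
lam = 0.5 * (grid[1:] + grid[:-1])
dl = lam[1] - lam[0]
L1, L2 = np.meshgrid(lam, lam, indexing="ij")
diff = L1 - L2
DQ = np.where(np.abs(diff) > 1e-12, (phi(L1) - phi(L2)) / np.where(diff == 0.0, 1.0, diff), dphi(L1))
W = (4 * c - (L1 - a_m) * (L2 - a_m)) / np.sqrt((4 * c - (L1 - a_m)**2) * (4 * c - (L2 - a_m)**2))
print("first term of V_SC[phi] in (1.16) approx", (DQ**2 * W).sum() * dl**2 / (2.0 * np.pi**2))
```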
2. Proofs
P r o o f of Proposition 1. Consider the operator $D_s$ defined by

\[
\widehat{D_s f}(k) = (1 + 2|k|)^{s}\widehat{f}(k). \tag{2.1}
\]

It is easy to see that for fixed $n$, $\mathrm{Var}\{\mathcal{N}_n[\varphi]\}$ is a bounded quadratic form in the Hilbert space $\mathcal{H}$ of functions with the inner product $(u, v)_s = (D_s u, D_s v)$, where the symbol $(\cdot,\cdot)$ means the standard inner product of $L_2(\mathbb{R})$. Hence there exists a positive self-adjoint operator $\mathcal{V}$ such that

\[
\mathrm{Var}\{\mathcal{N}_n[\varphi]\} = (\mathcal{V}\varphi, \varphi) = \mathrm{Tr}\,(\Pi_\varphi\mathcal{V}\Pi_\varphi),
\]

where $\Pi_\varphi$ is the projection on the vector $\varphi$:

\[
(\Pi_\varphi f)(x) = \varphi(x)\,(f, \varphi)\,\|\varphi\|_0^{-1},
\]

where $\|\cdot\|_0$ means the norm (1.9) with $s = 0$. We can write

\[
\mathrm{Tr}\,(\Pi_\varphi\mathcal{V}\Pi_\varphi) = \mathrm{Tr}\,(\Pi_\varphi D_s D_s^{-1}\mathcal{V}D_s^{-1}D_s\Pi_\varphi).
\]

But it is easy to see that

\[
(D_s\Pi_\varphi f)(x) = (D_s\varphi)(x)\,(f, \varphi)\,\|\varphi\|_0^{-1},
\]

hence

\[
\|D_s\Pi_\varphi\| = \|D_s\varphi\|_0 = \|\varphi\|_s.
\]
Therefore we can write

\[
\mathrm{Var}\{\mathcal{N}_n[\varphi]\} = \mathrm{Tr}\,(\Pi_\varphi D_s D_s^{-1}\mathcal{V}D_s^{-1}D_s\Pi_\varphi)
\le \|D_s\Pi_\varphi\|^2\,\mathrm{Tr}\,(D_s^{-1}\mathcal{V}D_s^{-1}). \tag{2.2}
\]
But since for any $u, v \in L_2(\mathbb{R})$ we have

\[
\begin{aligned}
\Gamma(2s)\,(D_s^{-2}u, v) &= \Gamma(2s)\int (1 + 2|k|)^{-2s}\,\widehat{u}(k)\widehat{v}(k)\,dk \\
&= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int e^{-2|k|y}\,\widehat{u}(k)\widehat{v}(k)\,dk
= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\,(P_y * u,\,P_y * v) \\
&= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int dx\int\!\!\int P_y(x-\lambda)P_y(x-\mu)\,u(\lambda)v(\mu)\,d\lambda\,d\mu,
\end{aligned}
\]

where the symbol $*$ means the convolution of functions, and $P_y$ is the Poisson kernel

\[
P_y(x) = \frac{y}{\pi(x^2 + y^2)}. \tag{2.3}
\]
This implies

\[
\Gamma(2s)\,D_s^{-2}(\lambda, \mu) = \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int dx\,P_y(x-\lambda)P_y(x-\mu), \tag{2.4}
\]
and so

\[
\begin{aligned}
\Gamma(2s)\,\mathrm{Tr}\,(D_s^{-1}\mathcal{V}D_s^{-1})
&= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int dx\,\big(\mathcal{V}P_y(x-\cdot),\,P_y(x-\cdot)\big) \\
&= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int dx\,\mathrm{Var}\{\mathcal{N}_n[P_y(x-\cdot)]\} \\
&= \int_{0}^{\infty} dy\,e^{-y}y^{2s-1}\int dx\,\mathrm{Var}\{\Im\,\mathrm{Tr}\,G(x+iy)\}.
\end{aligned}
\]
This relation combined with (2.2) proves (1.14).
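The computation above rests on the fact that the linear statistic of the Poisson kernel is, up to the factor $1/\pi$, the imaginary part of the resolvent trace: $\sum_j P_y(x - \lambda_j) = \pi^{-1}\Im\,\mathrm{Tr}\,G(x+iy)$. A hedged numerical check of this identity (the names and the toy spectrum below are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = rng.standard_normal(50)                                    # any real spectrum will do
x, y = 0.3, 0.7

poisson_statistic = np.sum(y / (np.pi * ((x - lam)**2 + y**2)))  # sum_j P_y(x - lambda_j), cf. (2.3)
resolvent_trace = np.sum(1.0 / (lam - (x + 1j * y)))             # Tr G(x + iy) for this spectrum
print(poisson_statistic, resolvent_trace.imag / np.pi)           # the two numbers coincide
```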
In what follows we need to estimate $E\{|w^{(n)}_{jk}|^8\}$ (see the proof of Proposition 2). Hence, if $\varepsilon_1 < 4$, it is convenient to consider the truncated matrix

\[
\widetilde{M}^{(\tau)} = \{\widetilde{M}^{(\tau)}_{ij}\}_{i,j=1}^{n}, \qquad
\widetilde{M}^{(\tau)}_{ij} = M_{ij}\mathbf{1}_{|M_{ij}|\le\tau}, \qquad
\widetilde{M}^{(\tau)\circ} = \widetilde{M}^{(\tau)} - E\{\widetilde{M}^{(\tau)}\}. \tag{2.5}
\]
Lemma 1. Let $\widetilde{\mathcal{N}}_n[\varphi] = \mathrm{Tr}\,\varphi(\widetilde{M}^{(\tau)\circ})$ be the linear eigenvalue statistic of the matrix $\widetilde{M}^{(\tau)\circ}$, corresponding to a test function $\varphi$ with bounded first derivative. Then

\[
\big|E\{e^{ix\mathcal{N}^{\circ}_n[\varphi]}\} - E\{e^{ix\widetilde{\mathcal{N}}^{\circ}_n[\varphi]}\}\big|
\le o(1) + C|x|\,\|\varphi'\|_\infty L_n(\tau)/\tau^3.
\]
P r o o f. Consider the matrix $M(t) = \widetilde{M} + t(M - \widetilde{M})$. Let $\{\lambda_i(t)\}$ be the eigenvalues of $M(t)$ and $\{\psi_i(t)\}$ be the corresponding eigenvectors. Then

\[
\begin{aligned}
E\big\{\big|\mathcal{N}_n[\varphi] - \widetilde{\mathcal{N}}_n[\varphi]\big|\big\}
&= \int_{0}^{1} dt\,E\Big\{\sum_i\big|\varphi'(\lambda_i(t))\,\lambda_i'(t)\big|\Big\}
\le \|\varphi'\|_\infty\int_{0}^{1} dt\,E\Big\{\sum_i\big|(M'(t)\psi_i(t), \psi_i(t))\big|\Big\} \\
&\le \|\varphi'\|_\infty E\big\{\mathrm{Tr}\,|M - \widetilde{M}|\big\}
= \|\varphi'\|_\infty\sum_k E\Big\{\Big|\sum_{ij} u^*_{kj}(M - \widetilde{M})_{ij}u_{jk}\Big|\Big\} \\
&\le \|\varphi'\|_\infty E\Big\{\sum_{ij}\big|(M - \widetilde{M})_{ij}\big|\Big\}
\le \sup|\varphi'|\,L_n(\tau)/\tau^3,
\end{aligned}
\]

where $M'(t) = \frac{d}{dt}M(t) = M - \widetilde{M}$, $U = \{u_{ik}\}$ is the unitary matrix such that $M - \widetilde{M} = U^*\Lambda U$, where $\Lambda$ is a diagonal matrix and $|M - \widetilde{M}| = U^*|\Lambda|U$. Hence,

\[
\big|E\{e^{ix\mathcal{N}^{\circ}_n[\varphi]}\} - E\{e^{ix\widetilde{\mathcal{N}}^{\circ}_n[\varphi]}\}\big|
\le 2\,\mathrm{Pr}\{\widetilde{M}^{(\tau)} \ne M\} + |x|\big(E\{\mathcal{N}_n[\varphi]\} - E\{\widetilde{\mathcal{N}}_n[\varphi]\}\big)
\le o(1) + C|x|\,\|\varphi'\|_\infty L_n(\tau)/\tau^3.
\]
It follows from Lemma 1 that for our purposes it suffices to prove CLT for $\widetilde{\mathcal{N}}^{\circ}_n[\varphi]$. Hence, starting from this point, we will assume that $M$ is replaced by $\widetilde{M}^{(\tau)\circ}$, but to simplify notations we will write $M$ instead of $\widetilde{M}^{(\tau)\circ}$, just assuming below that the matrix entries of $W$ satisfy the conditions

\[
E\{w_{jk}\} = 0, \quad E\{w_{jk}^2\} = 1 + o(1)\;(j \ne k), \quad E\{w_{jj}^2\} = w_2 + o(1), \tag{2.6}
\]
\[
E\{w_{jk}^4\} = w_4 + o(1), \qquad
E\{|w_{jk}|^6\} \le w_{4+\varepsilon_1}n^{1-\varepsilon_1/2}, \qquad
E\{|w_{jk}|^8\} \le w_{4+\varepsilon_1}n^{2-\varepsilon_1/2}. \tag{2.7}
\]

Here and below we also omit the superindex $(n)$ of the matrix entries $w^{(n)}_{jk}$ and $X^{(n)}_{jk}$.
Proposition 2. If the conditions (2.6) are satisfied, then for $\gamma_n = \mathrm{Tr}\,G$ and any $1 > \delta > 0$ we have

\[
\mathrm{Var}\{\gamma_n\} \le Cn^{-1}\sum_{i=1}^{n}E\{|G_{ii}(z)|^{1+\delta}\}\big/|\Im z|^{3+\delta}, \qquad
\mathrm{Var}\{\gamma_n\} \le C/|\Im z|^4. \tag{2.8}
\]

If the conditions of (2.7) are also satisfied, then

\[
E\{|\gamma_n^{\circ}|^4\} \le Cn^{-1-\varepsilon_1/2}/|\Im z|^{12}. \tag{2.9}
\]
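Before the proof, a hedged numerical illustration (not part of the argument) of the second bound in (2.8): for Gaussian entries the sample variance of $\mathrm{Tr}\,G(z)$ stays bounded as $n$ grows, in contrast to the $O(n)$ bounds mentioned in the Introduction. The sizes, the seed, and the spectral point below are arbitrary choices of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
z = 1.0 + 1.0j                                     # a spectral point with Im z = 1

def trace_resolvent(n):
    """Tr G(z) for one sample of the Wigner matrix (1.1) with Gaussian entries (w_2 = 2)."""
    w = rng.standard_normal((n, n))
    W = np.triu(w, 1) + np.triu(w, 1).T + np.sqrt(2.0) * np.diag(rng.standard_normal(n))
    eig = np.linalg.eigvalsh(W / np.sqrt(n))
    return np.sum(1.0 / (eig - z))

for n in (100, 200, 400, 800):
    samples = np.array([trace_resolvent(n) for _ in range(200)])
    var = np.mean(np.abs(samples - samples.mean())**2)   # variance of a complex random variable
    print(f"n = {n:4d}:  Var{{Tr G(z)}} approx {var:.4f}")
```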
P r o o f. Denote $E_{\le k}$ the averaging with respect to $\{w_{ij}\}_{1\le i\le j\le k}$. Then, according to the standard martingale method (see [4]), we have

\[
\mathrm{Var}\{\gamma_n\} = \sum_{k=1}^{n}E\{|E_{\le k-1}\{\gamma_n\} - E_{\le k}\{\gamma_n\}|^2\}. \tag{2.10}
\]
Denote $E_k$ the averaging with respect to $\{w_{ki}\}_{1\le i\le n}$. Then, using the Schwarz inequality, we obtain

\[
|E_{\le k-1}\{\gamma_n\} - E_{\le k}\{\gamma_n\}|^2
= |E_{\le k-1}\{\gamma_n - E_k\{\gamma_n\}\}|^2
\le E_{\le k-1}\{|\gamma_n - E_k\{\gamma_n\}|^2\}.
\]

Hence

\[
\mathrm{Var}\{\gamma_n\} \le \sum_{k=1}^{n}E\{|\gamma_n - E_k\{\gamma_n\}|^2\}. \tag{2.11}
\]
Let us estimate the first summand (with $k = 1$) of the above sum. The other ones can be estimated similarly. Denote by $M^{(1)}$ the $(n-1)\times(n-1)$ matrix which is the main bottom $(n-1)\times(n-1)$ minor of $M$, and set

\[
G^{(1)} = (M^{(1)} - z)^{-1}, \qquad m^{(1)} = n^{-1/2}(w_{12}, \dots, w_{1n}) \in \mathbb{R}^{n-1}. \tag{2.12}
\]

We will use the identities

\[
\mathrm{Tr}\,G - \mathrm{Tr}\,G^{(1)}
= -\,\frac{1 + (G^{(1)}G^{(1)}m^{(1)}, m^{(1)})}{z + n^{-1/2}w_{11} + (G^{(1)}m^{(1)}, m^{(1)})}
=: -\,\frac{1 + B(z)}{A(z)}, \tag{2.13}
\]
\[
G_{11} = -A^{-1}, \qquad G_{ii} - G^{(1)}_{ii} = -(G^{(1)}m^{(1)})_i^2/A,
\]

where $(\cdot,\cdot)$ means the standard inner product in $\mathbb{C}^{n-1}$.
The first identity of (2.13) yields that it suffices to estimate $E\{|BA^{-1} - E_1\{BA^{-1}\}|^2\}$ and $E\{|A^{-1} - E_1\{A^{-1}\}|^2\}$. We will estimate the first expression; the second one can be estimated similarly. Denote $\xi^{\circ}_1 = \xi - E_1\{\xi\}$ for any random variable $\xi$ and note that for any $a$ independent of $\{w_{1i}\}$ we have

\[
E_1\{|\xi^{\circ}_1|^2\} \le E_1\{|\xi - a|^2\}.
\]
Hence it suffices to estimate

\[
\left|\frac{B}{A} - \frac{E_1\{B\}}{E_1\{A\}}\right|
= \left|\frac{B^{\circ}_1}{E_1\{A\}} - \frac{A^{\circ}_1}{E_1\{A\}}\,\frac{B}{A}\right|
\le \left|\frac{B^{\circ}_1}{E_1\{A\}}\right| + \left|\frac{A^{\circ}_1}{\Im z\,E_1\{A\}}\right|.
\]
Let us also use the identities that follow from the spectral theorem:

\[
\Im(G^{(1)}m^{(1)}, m^{(1)}) = \Im z\,(|G^{(1)}|^2 m^{(1)}, m^{(1)}), \qquad
\Im\,\mathrm{Tr}\,G^{(1)} = \Im z\,\mathrm{Tr}\,|G^{(1)}|^2, \tag{2.14}
\]

where $|G^{(1)}| = (G^{(1)}G^{(1)*})^{1/2}$. The first relation yields, in particular, that $|B/A| \le |\Im z|^{-1}$. Moreover, using the second identity of (2.14), we have

\[
\frac{n^{-1}\mathrm{Tr}\,|G^{(1)}|^2}{|z + n^{-1}\mathrm{Tr}\,G^{(1)}|^2}
= \frac{(n^{-1}\mathrm{Tr}\,|G^{(1)}|^2)^{\delta}\,(n^{-1}\mathrm{Tr}\,|G^{(1)}|^2)^{1-\delta}}
{|z + n^{-1}\mathrm{Tr}\,G^{(1)}|^{1+\delta}\,|z + n^{-1}\mathrm{Tr}\,G^{(1)}|^{1-\delta}}
\le C\,\frac{|\Im z|^{-1-\delta}}{|E_1\{A\}|^{1+\delta}}. \tag{2.15}
\]
Since

\[
A^{\circ}_1 = n^{-1/2}w_{11} + n^{-1}\sum_{i\ne j}G^{(1)}_{ij}w_{1i}w_{1j} + n^{-1}\sum_{i}G^{(1)}_{ii}(w_{1i}^2)^{\circ}, \tag{2.16}
\]
\[
E_1\{|A^{\circ}_1|^2\} \le Cn^{-2}\,\mathrm{Tr}\,|G^{(1)}|^2 + Cn^{-1},
\]
we get, by (2.15) and the second identity of (2.14),

\[
E_1\Big\{\Big|\frac{A^{\circ}_1}{E_1\{A\}}\Big|^2\Big\} \le C\big(|\Im z|\,|E_1\{A\}|\big)^{-1-\delta}. \tag{2.17}
\]

Similarly,

\[
E_1\Big\{\Big|\frac{B^{\circ}_1}{E_1\{A\}}\Big|^2\Big\}
\le \frac{Cn^{-2}\,\mathrm{Tr}\,|G^{(1)}|^4}{|z + n^{-1}\mathrm{Tr}\,G^{(1)}|^2}
\le \frac{C|\Im z|^{-2}n^{-2}\,\mathrm{Tr}\,|G^{(1)}|^2}{|z + n^{-1}\mathrm{Tr}\,G^{(1)}|^2}
\le C\,\frac{n^{-1}|\Im z|^{-3-\delta}}{|E_1\{A\}|^{1+\delta}}.
\]

Then, using the Jensen inequality $|E_1\{A\}|^{-1} \le E_1\{|A|^{-1}\}$ and the second identity of (2.13), we conclude that

\[
E\{|(\gamma_n(z))^{\circ}_1|^2\} \le \frac{C}{n|\Im z|^{3+\delta}}\,E\{|G_{11}(z)|^{1+\delta}\}.
\]
Then (2.11) implies (2.8).
To prove (2.9), we use the inequality similar to (2.11) (see [4])

\[
E\{|\gamma_n^{\circ}|^4\} \le Cn\sum_{k=1}^{n}E\{|\gamma_n - E_k\{\gamma_n\}|^4\}. \tag{2.18}
\]

Thus, in view of (2.13), it is enough to check that

\[
E_1\{|A^{\circ}_1|^4\} \le Cn^{-1-\varepsilon_1/2}|\Im z|^{-4}, \qquad
E_1\{|B^{\circ}_1|^4\} \le Cn^{-1-\varepsilon_1/2}|\Im z|^{-8}. \tag{2.19}
\]

The first relation here evidently follows from (2.16), if we take the fourth power of the r.h.s., average with respect to $\{w_{1i}\}$, and take into account (2.7). The second relation can be obtained similarly.
Proposition 2 gives the bound for the variance of the linear eigenvalue statistics for the functions $\varphi(\lambda) = (\lambda - z)^{-1}$. We are going to extend the bound to a wider class of test functions.
Lemma 2. If $\|\varphi\|_{3/2+\varepsilon} < \infty$, with any $\varepsilon > 0$, then

\[
\mathrm{Var}\{\mathcal{N}_n[\varphi]\} \le C_{\varepsilon}\|\varphi\|_{3/2+\varepsilon}^2. \tag{2.20}
\]
P r o o f. In view of Proposition 1 we need to estimate

\[
I(y) = \int_{-\infty}^{\infty}\mathrm{Var}\{\gamma_n(x + iy)\}\,dx.
\]

Take $\delta = \varepsilon/2$ in (2.8). Then we need to estimate

\[
\int_{-\infty}^{\infty}E\{|G_{jj}(x + iy)|^{1+\varepsilon/2}\}\,dx, \qquad j = 1, \dots, n.
\]
We do this for $j = 1$; for other $j$ the estimates are the same. The spectral representation

\[
G_{11} = \int\frac{\mathcal{N}_{11}(d\lambda)}{\lambda - x - iy}
\]

and the Jensen inequality yield

\[
\int_{-\infty}^{\infty}|G_{11}(x + iy)|^{1+\varepsilon/2}\,dx
\le \int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}\frac{\mathcal{N}_{11}(d\lambda)}{(|x-\lambda|^2 + y^2)^{(1+\varepsilon/2)/2}}
\le C|y|^{-\varepsilon/2}.
\]

Taking $s = 3/2 + \varepsilon$ in (1.14), we get

\[
\mathrm{Var}\{\mathcal{N}_n[\varphi]\} \le C\|\varphi\|_{3/2+\varepsilon}^2\int_{0}^{\infty}e^{-y}y^{2+2\varepsilon}\,y^{-3-\varepsilon}\,dy
\le C\|\varphi\|_{3/2+\varepsilon}^2.
\]
To simplify formulas we will assume below that $\{w_{jk}\}_{1\le j<k\le n}$ are i.i.d. and $\{w_{jj}\}_{1\le j\le n}$ are i.i.d. Note that this assumption does not change the proof in any essential way; it just allows us to write the bounds only for $G_{11}$ instead of all $G_{ii}$.
The next lemma collects relations which we need to prove CLT.
Lemma 3. Using the notations of (2.13), we have, uniformly in $z_1, z_2$ with $\Im z_{1,2} > a$ for any $a > 0$:

\[
E\{(A^{\circ})^3\},\; E\{|A^{\circ}|^4\},\; E\{(B^{\circ})^3\},\; E\{|B^{\circ}|^4\} = O(n^{-1-\varepsilon_1/2}), \tag{2.21}
\]
\[
nE_1\{A^{\circ}(z_1)A^{\circ}(z_2)\}
= \frac{2}{n}\,\mathrm{Tr}\,G^{(1)}(z_1)G^{(1)}(z_2) + w_2
+ \frac{\kappa_4}{n}\sum_{i}G^{(1)}_{ii}(z_1)G^{(1)}_{ii}(z_2)
+ (\gamma^{(1)}_n)^{\circ}(z_1)(\gamma^{(1)}_n)^{\circ}(z_2)/n, \tag{2.22}
\]
\[
nE_1\{A^{\circ}(z_1)B^{\circ}(z_2)\} = n\,\frac{d}{dz_2}\,E_1\{A^{\circ}(z_1)A^{\circ}(z_2)\}, \tag{2.23}
\]
\[
\mathrm{Var}\{nE_1\{A^{\circ}(z_1)A^{\circ}(z_2)\}\} = O(n^{-1}), \qquad
\mathrm{Var}\{nE_1\{A^{\circ}(z_1)B^{\circ}(z_2)\}\} = O(n^{-1}), \tag{2.24}
\]
\[
E\{|(\gamma^{(1)}_n)^{\circ}(z) - \gamma^{\circ}_n(z)|^4\} = O(n^{-1-\varepsilon_1/2}), \tag{2.25}
\]

where $\gamma^{(1)}_n = \mathrm{Tr}\,G^{(1)}$. Moreover,

\[
\mathrm{Var}\{G^{(1)}_{ii}(z_1)\} = O(n^{-1}), \qquad
|E\{G^{(1)}_{ii}(z_1)\} - E\{G_{ii}(z_1)\}| = O(n^{-1}), \tag{2.26}
\]
\[
|E\{\gamma^{(1)}_n(z)\}/n - f(z)| = O(n^{-1}), \qquad
|E^{-1}\{A(z)\} + f(z)| = O(n^{-1}), \tag{2.27}
\]

where $f(z) = \int\rho_{sc}(\lambda)(\lambda - z)^{-1}\,d\lambda$ is the Stieltjes transform of the semicircle law.
P r o o f. Note that since $\Im z\,\Im(G^{(1)}m^{(1)}, m^{(1)}) \ge 0$, we can use the bound

\[
|\Im A| \ge |\Im z| \;\Rightarrow\; |A^{-1}| \le |\Im z|^{-1} \le a^{-1}. \tag{2.28}
\]
Relations (2.21) follow from the representations

\[
A^{\circ} = A^{\circ}_1 + n^{-1}(\gamma^{(1)}_n)^{\circ}(z), \qquad
B^{\circ} = B^{\circ}_1 + n^{-1}\frac{d}{dz}(\gamma^{(1)}_n)^{\circ}(z), \tag{2.29}
\]

combined with (2.19) and with (2.9) applied to $\gamma^{(1)}_n$. Relations (2.22) and (2.23) follow from (2.16) and (2.29), if we take the products of the r.h.s. of (2.16) with different $z$ and average with respect to $\{w_{1i}\}$. Relation (2.25) follows from (2.13), (2.21), and (2.28).

The first relation of (2.26) is the analog of the relation

\[
\mathrm{Var}\{G_{ii}(z_1)\} = \mathrm{Var}\{G_{11}(z_1)\} = O(n^{-1}) \tag{2.30}
\]

if in the latter we replace the matrix $M$ by $M^{(1)}$. But since $G_{11}(z_1) = -A^{-1}(z_1)$, (2.30) follows from (2.21) and (2.28). The second relation of (2.26) follows from (2.13).

The first relation of (2.27) follows from the above bound for $n^{-1}E\{\gamma_n - \gamma^{(1)}_n\}$ and the well known estimate (see, e.g., [8])

\[
n^{-1}E\{\gamma_n\} - f(z) = O(n^{-1}).
\]

The second one of (2.27) is a corollary of the above estimate and of the relation

\[
E^{-1}\{A(z)\} = (z + E\{\gamma^{(1)}_n\}/n)^{-1} = (z + f(z))^{-1} + O(n^{-1}) = -f(z) + O(n^{-1}).
\]

Finally, we obtain the first bound of (2.24) from (2.22), (2.26), (2.25), and the identity

\[
\mathrm{Tr}\,G^{(1)}(z_1)G^{(1)}(z_2) = \mathrm{Tr}\,\frac{G^{(1)}(z_1) - G^{(1)}(z_2)}{z_1 - z_2}. \tag{2.31}
\]
The second bound of (2.24) follows from the first one, (2.23), and the Cauchy theorem.
P r o o f of Theorem 1. We first prove Theorem 1 for the functions $\varphi_\eta$ of the form

\[
\varphi_\eta = P_\eta * \varphi_0, \qquad \int|\varphi_0(\lambda)|\,d\lambda \le C < \infty, \tag{2.32}
\]

where $P_\eta$ is the Poisson kernel defined in (2.3). One can easily see that

\[
\mathcal{N}^{\circ}_n[\varphi_\eta] = \frac{1}{\pi}\int\varphi_0(\mu)\,\Im\gamma^{\circ}_n(z_\mu)\,d\mu, \qquad z_\mu = \mu + i\eta. \tag{2.33}
\]
Set

\[
Z_n(x) = E\{e^{ix\mathcal{N}^{\circ}_n[\varphi]}\}, \qquad e(x) = e^{ix\mathcal{N}^{\circ}_n[\varphi]}, \qquad
Y_n(z, x) = E\{\mathrm{Tr}\,G(z)\,e^{\circ}(x)\}. \tag{2.34}
\]

Then

\[
\frac{d}{dx}Z_n(x) = \frac{1}{2\pi}\int\varphi_0(\mu)\big(Y_n(z_\mu, x) - Y_n(\bar{z}_\mu, x)\big)\,d\mu. \tag{2.35}
\]
On the other hand, using the symmetry of the problem and the notations of (2.13), we have

\[
Y_n(z, x) = E\{\mathrm{Tr}\,G(z)\,e^{\circ}(x)\} = nE\{G_{11}(z)e^{\circ}(x)\}
= -nE\{(A^{-1})^{\circ}e_1(x)\} - nE\{(A^{-1})^{\circ}(e(x) - e_1(x))\} = T_1 + T_2, \tag{2.36}
\]

where

\[
e_1(x) = e^{ix(\mathcal{N}^{(1)}_{n-1}[\varphi])^{\circ}}, \qquad
(\mathcal{N}^{(1)}_{n-1}[\varphi])^{\circ} = (\mathrm{Tr}\,\varphi(M^{(1)}))^{\circ}
= \int d\mu\,\varphi_0(\mu)\,\Im(\gamma^{(1)}_n)^{\circ}(z_\mu).
\]

Let us use the representation

\[
A^{-1} = \frac{1}{E\{A\}} - \frac{A^{\circ}}{E^2\{A\}} + \frac{(A^{\circ})^2}{E^3\{A\}} - \frac{(A^{\circ})^3}{E^4\{A\}} + \frac{(A^{\circ})^4}{A\,E^4\{A\}}. \tag{2.37}
\]
Since $e_1(x)$ does not depend on $\{w_{1i}\}$, using that $E\{\dots\} = E\{E_1\{\dots\}\}$, we obtain, in view of (2.37) and (2.21),

\[
T_1 = \frac{E\{nE_1\{A^{\circ}(z)\}\,e^{\circ}_1(x)\}}{E^2\{A\}}
- \frac{E\{nE_1\{(A^{\circ}(z))^2\}\,e^{\circ}_1(x)\}}{E^3\{A\}} + O(n^{-\varepsilon_1/2}).
\]

Relations (2.24) imply

\[
|E\{nE_1\{(A^{\circ}(z))^2\}\,e^{\circ}_1(x)\}|
\le \mathrm{Var}^{1/2}\{nE_1\{(A^{\circ}(z))^2\}\}\,\mathrm{Var}^{1/2}\{e^{\circ}_1(x)\} = O(n^{-1/2}),
\]

thus

\[
T_1 = E^{-2}\{A\}\,E\{\gamma^{(1)}_n e^{\circ}_1(x)\} + O(n^{-\varepsilon_1/2}).
\]

But the Schwarz inequality and (2.25) yield

\[
|E\{(\gamma^{(1)}_n)^{\circ}e_1(x)\} - E\{\gamma^{\circ}_n e(x)\}|
\le E\{|(\gamma^{(1)}_n)^{\circ} - \gamma^{\circ}_n|(1 + |x||\gamma^{\circ}_n|)\}
\le C(1 + |x|)E^{1/2}\{|(\gamma^{(1)}_n)^{\circ} - \gamma^{\circ}_n|^2\} = O(n^{-1/2}). \tag{2.38}
\]

Thus, we have

\[
T_1 = E^{-2}\{A(z)\}\,Y_n(z, x) + O(n^{-1/2}) = f^2(z)\,Y_n(z, x) + O(n^{-\varepsilon_1/2}). \tag{2.39}
\]
To find $T_2$, we write

\[
e(x) - e_1(x) = ix\int\varphi_0(\mu)
\Big(\Im\big(\gamma^{\circ}_n - (\gamma^{(1)}_n)^{\circ}\big)\,e_1(x)
+ O\big((\gamma^{\circ}_n - (\gamma^{(1)}_n)^{\circ})^2\big)\Big)\,d\mu.
\]
Using the Schwarz inequality, (2.13), (2.25), and (2.22), we conclude that the term $O((\gamma^{\circ}_n - (\gamma^{(1)}_n)^{\circ})^2)$ gives the contribution $O(n^{-\varepsilon_1/4})$. Then, since $e_1(x)$ does not depend on $\{w_{1i}\}$, we average first with respect to $\{w_{1i}\}$ and obtain, in view of (2.25),

\[
\begin{aligned}
T_2 &= -\frac{ixn}{\pi}\int d\mu\,\varphi_0(\mu)\,
E\Big\{e_1(x)\,(A^{-1})^{\circ}(z)\,\Im\big(\gamma_n - \gamma^{(1)}_n\big)^{\circ}(z_\mu)\Big\} + O(n^{-\varepsilon_1/4}) \\
&= \frac{ixn}{\pi}\int d\mu\,\varphi_0(\mu)\,
E\Big\{e_1(x)\,E_1\Big\{(A^{-1})^{\circ}(z)\,\Im\Big(\frac{1 + B(z_\mu)}{A(z_\mu)}\Big)^{\circ}\Big\}\Big\} + O(n^{-\varepsilon_1/4}) \\
&= \frac{ixn}{\pi}\int d\mu\,\varphi_0(\mu)\,
E\Big\{(A^{-1})^{\circ}(z)\,\Im\Big(\frac{1 + B(z_\mu)}{A(z_\mu)}\Big)^{\circ}\Big\}\,E\{e_1(x)\} + O(n^{-\varepsilon_1/4}).
\end{aligned}
\]
Using (2.37) and (2.21), we conclude that only the terms linear in $B^{\circ}$ and $A^{\circ}$ give a nonvanishing contribution, hence in view of (2.23) and (2.24) we obtain

\[
\begin{aligned}
D_n(z, z_\mu) :={}& nE\Big\{(A^{-1})^{\circ}(z)\big((1 + B(z_\mu))A^{-1}(z_\mu)\big)^{\circ}\Big\} \\
={}& \frac{\big(1 + E\{B(z_\mu)\}\big)\,nE_1\{A^{\circ}(z)A^{\circ}(z_\mu)\}}{E^2\{A(z)\}E^2\{A(z_\mu)\}}
- \frac{nE_1\{A^{\circ}(z)B^{\circ}(z_\mu)\}}{E^2\{A(z)\}E\{A(z_\mu)\}} + O(n^{-\varepsilon_1/2}) \\
={}& f^2(z)f^2(z_\mu)(1 + f'(z_\mu))
\Big(E\Big\{\frac{2}{n}\,\mathrm{Tr}\,G^{(1)}(z)G^{(1)}(z_\mu)\Big\}
+ \frac{\kappa_4}{n}E\Big\{\sum_{i}G^{(1)}_{ii}(z)G^{(1)}_{ii}(z_\mu)\Big\} + w_2\Big) \\
&- f^2(z)f(z_\mu)\,\frac{d}{dz_\mu}
\Big(E\Big\{\frac{2}{n}\,\mathrm{Tr}\,G^{(1)}(z)G^{(1)}(z_\mu)\Big\}
+ \frac{\kappa_4}{n}E\Big\{\sum_{i}G^{(1)}_{ii}(z)G^{(1)}_{ii}(z_\mu)\Big\}\Big) + O(n^{-\varepsilon_1/2}),
\end{aligned}
\]
where we also used (2.27) to replace $E^{-1}\{A(z)\}$ by $f(z)$ and $E\{B(z_\mu)\}$ by $f'(z_\mu)$. The identity (2.31) yields
\[
\begin{aligned}
D_n(z, z_\mu) ={}& 2f^2(z)f^2(z_\mu)(1 + f'(z_\mu))\Big(\frac{f(z) - f(z_\mu)}{z - z_\mu} + w_2/2\Big)
+ 2f^2(z)f(z_\mu)\,\frac{d}{dz_\mu}\Big(\frac{f(z) - f(z_\mu)}{z - z_\mu}\Big) \\
&+ \kappa_4\Big(f^3(z)f^3(z_\mu)(1 + f'(z_\mu)) + f^3(z)f(z_\mu)f'(z_\mu)\Big) + O(n^{-\varepsilon_1/2}).
\end{aligned}
\tag{2.40}
\]
In addition, similarly to (2.38), we have
E{e1(x)} = Zn(x) + O(n−1/2).
Hence, relations (2.36)–(2.40) imply

\[
Y_n(z, x) = f^2(z)Y_n(z, x)
+ ixZ_n(x)\int d\mu\,\varphi_0(\mu)\,\frac{D_n(z, z_\mu) - D_n(z, \bar{z}_\mu)}{2i\pi} + o(1),
\]
\[
Y_n(z, x) = ixZ_n(x)\int d\mu\,\varphi_0(\mu)\,\frac{C_n(z, z_\mu) - C_n(z, \bar{z}_\mu)}{2i\pi} + o(1), \qquad
C_n(z, z_\mu) := \frac{D_n(z, z_\mu)}{1 - f^2(z)}. \tag{2.41}
\]
Using the relations

\[
f(z)(f'(z) + 1) = \frac{f(z)}{1 - f^2(z)} = -\frac{1}{\sqrt{z^2 - 4}}, \qquad
f'(z) = -\frac{f(z)}{\sqrt{z^2 - 4}},
\]

we can transform $C_n(z, z_\mu)$ to the form

\[
\begin{aligned}
C_n(z, z_\mu) ={}& \frac{1}{(z - z_\mu)^2}\Big(\frac{zz_\mu - 4}{(z^2 - 4)^{1/2}(z_\mu^2 - 4)^{1/2}} - 1\Big)
+ \frac{(w_2 - 2)f(z)f(z_\mu)}{(z^2 - 4)^{1/2}(z_\mu^2 - 4)^{1/2}} \\
&+ 2\kappa_4\,\frac{f^2(z)f^2(z_\mu)}{(z^2 - 4)^{1/2}(z_\mu^2 - 4)^{1/2}} + o(1)
=: C(z, z_\mu) + o(1).
\end{aligned}
\tag{2.42}
\]
Now, taking into account (2.35), (2.41), and (2.42), we obtain the equation

\[
\frac{d}{dx}Z_n(x) = -xV[\varphi_0, \eta]Z_n(x) + o(1), \tag{2.43}
\]
\[
V[\varphi_0, \eta] = \frac{1}{4\pi^2}\int\!\!\int\varphi_0(\mu_1)\varphi_0(\mu_2)
\Big(C(z_{\mu_1}, z_{\mu_2}) + C(\bar{z}_{\mu_1}, \bar{z}_{\mu_2})
- C(z_{\mu_1}, \bar{z}_{\mu_2}) - C(\bar{z}_{\mu_1}, z_{\mu_2})\Big)\,d\mu_1\,d\mu_2.
\]
Now, if we consider

\[
\widetilde{Z}_n(x) = e^{x^2 V[\varphi_0,\eta]/2}\,Z_n(x),
\]

then (2.43) yields, for any $|x| \le C$,

\[
\frac{d}{dx}\widetilde{Z}_n(x) = o(1),
\]

and since $\widetilde{Z}_n(0) = Z_n(0) = 1$, we obtain, uniformly in $|x| \le C$,

\[
\widetilde{Z}_n(x) = 1 + o(1) \;\Rightarrow\; Z_n(x) = e^{-x^2 V[\varphi_\eta]/2} + o(1). \tag{2.44}
\]
Thus, we have proved CLT for the functions of the form (2.32). To extend CLT to a
wider class of functions we use
Proposition 3. Let $\{\xi^{(n)}_l\}_{l=1}^{n}$ be a triangular array of random variables, let

\[
\mathcal{N}_n[\varphi] = \sum_{l=1}^{n}\varphi(\xi^{(n)}_l)
\]

be its linear statistics, corresponding to a test function $\varphi : \mathbb{R}\to\mathbb{R}$, and let

\[
V_n[\varphi] = \mathrm{Var}\{\mathcal{N}_n[\varphi]\}
\]

be the variance of $\mathcal{N}_n[\varphi]$. Assume that:

(a) there exists a vector space $\mathcal{L}$ endowed with a norm $\|\cdot\|$ such that $V_n$ is defined on $\mathcal{L}$ and admits the bound

\[
V_n[\varphi] \le C\|\varphi\|^2, \quad \forall\varphi\in\mathcal{L}, \tag{2.45}
\]

where $C$ does not depend on $n$;

(b) there exists a dense linear manifold $\mathcal{L}_1\subset\mathcal{L}$ such that CLT is valid for $\mathcal{N}_n[\varphi]$, $\varphi\in\mathcal{L}_1$, i.e., if $Z_n[x\varphi] = E\{e^{ix\mathcal{N}^{\circ}_n[\varphi]}\}$ is the characteristic function of $n^{-1/2}\mathcal{N}^{\circ}_n[\varphi]$, then there exists a continuous quadratic functional $V : \mathcal{L}_1\to\mathbb{R}_+$ such that, uniformly in $x$ varying on any compact interval,

\[
\lim_{n\to\infty}Z_n[x\varphi] = e^{-x^2V[\varphi]/2}, \quad \forall\varphi\in\mathcal{L}_1. \tag{2.46}
\]

Then $V$ admits a continuous extension to $\mathcal{L}$ and CLT is valid for all $\mathcal{N}_n[\varphi]$, $\varphi\in\mathcal{L}$.
P r o o f. Let $\{\varphi_k\}$ be a sequence of elements of $\mathcal{L}_1$ converging to $\varphi\in\mathcal{L}$. Then, in view of the inequality $|e^{ia} - e^{ib}| \le |a - b|$, the linearity of $\mathcal{N}^{\circ}_n[\varphi]$ in $\varphi$, the Schwarz inequality, and (2.45), we have

\[
\big|Z_n(x\varphi) - Z_n(x\varphi_k)\big|
\le |x|\,E\big\{\big|\mathcal{N}^{\circ}_n[\varphi] - \mathcal{N}^{\circ}_n[\varphi_k]\big|\big\}
\le |x|\,\mathrm{Var}^{1/2}\{\mathcal{N}_n[\varphi - \varphi_k]\}
\le C|x|\,\|\varphi - \varphi_k\|.
\]

Now, passing first to the limit $n\to\infty$ and then $k\to\infty$, we obtain the assertion.
The proposition and Lemma 2 allow us to complete the proof of Theorem 1.
P r o o f of Theorem 2. The proof of Theorem 2 can be performed in the same way as that of Theorem 1. We start from the proposition which is the analog of Proposition 2.
Proposition 4. Let γn = TrG(z), where G(z) = (M − z)−1 and M is a sample
covariance matrix (1.5) with entries satisfying (1.6) and (1.15). Then inequalities (2.8)
hold.
Taking into account Proposition 4, on the basis of Proposition 1 and Lemma 2 we
obtain immediately the bound (2.20) for the variance of linear eigenvalue statistics of
sample covariance matrices. Then one can use the same method as in the proof of
Theorem 1 to prove CLT for ϕη of (2.32) or just use the result of [9] for the functions,
satisfying conditions (1.7). Then Proposition 3 implies immediately the assertion of
Theorem 2.
Thus, to complete the proof of Theorem 2, it remains to prove Proposition 4.
P r o o f of Proposition 4. Similarly to the proof of Proposition 2, we use the identity (2.10), where this time $E_{\le k}$ means the averaging with respect to $\{X_{jl}\}_{l=1,\dots,m,\;j\le k}$. Then we obtain (2.11) with $E_k$ meaning the averaging with respect to $\{X_{kl}\}_{l=1,\dots,m}$. Denote $M^{(1)} = X^{(1)}X^{(1)*}$, where the $(n-1)\times m$ matrix $X^{(1)}$ is made from the rows of $X$ from the second to the last one. Then denote

\[
G^{(1)} = (M^{(1)} - z)^{-1}, \qquad m^{(1)} = (M_{12}, \dots, M_{1n}),
\]

and use (2.13) with these $G^{(1)}$ and $m^{(1)}$.
To obtain the estimate for $E_1\{|\gamma_n - E_1\{\gamma_n\}|^2\}$ we need (as in the proof of Proposition 2) to estimate $E_1\{|A^{\circ}_1|^2\}/(\Im z\,E_1\{A\})^2$ and $E_1\{|B^{\circ}_1|^2\}/(E_1\{A\})^2$. Since $G^{(1)}$ does not depend on $\{X_{1i}\}_{i=1,\dots,m}$, averaging with respect to $\{X_{1i}\}_{i=1,\dots,m}$, using the Jensen inequality and (2.47), we get

\[
E_1\{A\} = \frac{1}{n}\,\mathrm{Tr}\,G^{(1)}M^{(1)}, \qquad
\Im E_1\{A\} = \frac{\Im z}{n}\,\mathrm{Tr}\,G^{(1)}M^{(1)}G^{(1)*},
\]
\[
\begin{aligned}
E_1\{|A^{\circ}_1|^2\} &\le \frac{2 + X_4}{n^2}\,\mathrm{Tr}\,G^{(1)}M^{(1)}G^{(1)*}M^{(1)} \\
&\le \frac{C}{n}\,E_1^{1-\delta}\big\{n^{-1}\mathrm{Tr}\,G^{(1)}M^{(1)}G^{(1)*}\big\}\,
E_1^{\delta}\big\{n^{-1}\mathrm{Tr}\,G^{(1)}(M^{(1)})^{(1+\delta)/\delta}G^{(1)*}\big\} \\
&\le \frac{C}{n|\Im z|^{2\delta}}\,E_1^{1-\delta}\big\{n^{-1}\mathrm{Tr}\,G^{(1)}M^{(1)}G^{(1)*}\big\}\,
E_1^{\delta}\big\{n^{-1}\mathrm{Tr}\,(M^{(1)})^{(1+\delta)/\delta}\big\}.
\end{aligned}
\]
But it is known (see [3] and references therein) that for any fixed $\delta > 0$

\[
E\big\{n^{-1}\mathrm{Tr}\,(M^{(1)})^{(1+\delta)/\delta}\big\}
\le \big(2 + \sqrt{c}\,\big)^{(1+\delta)/\delta} + o(1). \tag{2.47}
\]

Combining this bound with the above inequality and repeating the argument of Proposition 2, we obtain the bound (2.17). The bound for $E_1\{|B^{\circ}_1|^2\}/E_1^2\{A\}$ can be obtained similarly.
References
[1] G. W. Anderson and O. Zeitouni, A CLT for a Band Matrix Model. — Probab.
Theory Related Fields 134 (2006), No. 2, 283–338.
[2] Z. Bai and J.W. Silverstein, CLT for Linear Spectral Statistics of Large-Dimensional
Sample Covariance Matrices. — Ann. Probab. 32 (2004), No. 1A, 553–605.
[3] Z. Bai and J.W. Silverstein, Spectral Analysis of Large Dimensional Random Matrices. Second edition. Springer Series in Statistics. Springer, New York (2010).
[4] S.W. Dharmadhikari, V. Fabian, and K. Jogdeo, Bounds on the Moments of Mar-
tingales. — Ann. Math. Statist. 39 (1968), 1719–1723.
[5] V.L. Girko, Theory of Stochastic Canonical Equations, vols. I. II. Kluwer, Dordrecht
(2001).
[6] A. Guionnet, Large Deviations Upper Bounds and Central Limit Theorems for
Non-Commutative Functionals of Gaussian Large Random Matrices. — Ann. Inst.
H. Poincaré Probab. Statist. 38 (2002), 341–384.
[7] K. Johansson, On Fluctuations of Eigenvalues of Random Hermitian Matrices. —
Duke Math. J. 91 (1998), 151–204.
[8] A. Khorunzhy, B. Khoruzhenko, and L. Pastur, Random Matrices with Independent
Entries: Asymptotic Properties of the Green Function. — J. Math. Phys. 37 (1996),
5033–5060.
[9] A. Lytova and L. Pastur, Central Limit Theorem for Linear Eigenvalue Statistics
of Random Matrices with Independent Entries. — Ann. Probab. 37 (2009), No. 5,
1778–1840.
[10] V. Marchenko and L. Pastur, The Eigenvalue Distribution in Some Ensembles of
Random Matrices. — Math. USSR Sb. 1 (1967), 457–483.
[11] L. Pastur, On the Spectrum of Random Matrices. — Teor. Math. Phys. 10 (1972),
67–74.
[12] Ya. Sinai and A. Soshnikov, Central Limit Theorem for Traces of Large Random
Symmetric Matrices with Independent Matrix Elements. — Bol. Soc. Brasil. Mat.
(N.S.) 29 (1998), 1–24.
[13] A. Soshnikov, Central Limit Theorem for Traces for Local Linear Statistics in
Classical Compact Groups and Related Combinatorial Identities. — Ann. Probab.
28 (2000), 1353–1370.
[14] M. Shcherbina and B. Tirozzi, Central Limit Theorem for Fluctuations of Linear
Eigenvalue Statistics of Large Random Graphs. — J. Math. Phys. 51 (2010), No. 2,
02523–02542.
[15] E.P. Wigner, On the Distribution of the Roots of Certain Symmetric Matrices. —
Ann. Math. 67 (1958), 325–327.