Central Limit Theorem for Linear Eigenvalue Statistics of Orthogonally Invariant Matrix Models

We prove a central limit theorem for linear eigenvalue statistics of orthogonally invariant ensembles of random matrices with a one-interval limiting spectrum. We consider ensembles with real analytic potentials and test functions with two bounded derivatives.

Saved in:
Bibliographic Details
Date: 2008
Author: Shcherbina, M.
Format: Article
Language: English
Published: B. Verkin Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine, 2008
Journal: Журнал математической физики, анализа, геометрии (Journal of Mathematical Physics, Analysis, Geometry)
Online access: http://dspace.nbuv.gov.ua/handle/123456789/106500
Repository: Digital Library of Periodicals of the National Academy of Sciences of Ukraine
Cite as: Central Limit Theorem for Linear Eigenvalue Statistics of Orthogonally Invariant Matrix Models / M. Shcherbina // Журнал математической физики, анализа, геометрии. — 2008. — Vol. 4, No. 1. — P. 171-195. — Bibliogr.: 18 titles. — English.

Repositories

Digital Library of Periodicals of National Academy of Sciences of Ukraine
Full text:

Journal of Mathematical Physics, Analysis, Geometry 2008, vol. 4, No. 1, pp. 171-195

Central Limit Theorem for Linear Eigenvalue Statistics of Orthogonally Invariant Matrix Models

M. Shcherbina
Mathematical Division, B. Verkin Institute for Low Temperature Physics and Engineering, National Academy of Sciences of Ukraine, 47 Lenin Ave., Kharkiv, 61103, Ukraine
E-mail: shcherbi@ilt.kharkov.ua

Received November 12, 2007

We prove a central limit theorem for linear eigenvalue statistics of orthogonally invariant ensembles of random matrices with a one-interval limiting spectrum. We consider ensembles with real analytic potentials and test functions with two bounded derivatives.

Key words: orthogonally invariant matrix models, linear eigenvalue statistics, central limit theorem.
Mathematics Subject Classification 2000: 15A52 (primary); 15A57 (secondary).

Dedicated to Leonid Andreevich Pastur and Vladimir Aleksandrovich Marchenko, our teachers and colleagues

1. Introduction and Main Result

In this paper we consider the ensembles of $n\times n$ real symmetric matrices $M$ with the probability distribution

$$P_n(M)\,dM = Z_{n,\beta}^{-1}\exp\Big\{-\frac{n\beta}{2}\,\mathrm{Tr}\,V(M)\Big\}\,dM, \qquad (1.1)$$

where $Z_{n,\beta}$ is the normalization constant, $V:\mathbb{R}\to\mathbb{R}_+$ is a Hölder function satisfying the condition

$$|V(\lambda)| \ge 2(1+\epsilon)\log(1+|\lambda|), \qquad (1.2)$$

and $dM$ means the Lebesgue measure on the algebraically independent entries of $M$. In the case of real symmetric matrices $\beta=1$. But since it is interesting to compare the results with the case of Hermitian matrix models, where $\beta=2$, we keep the parameter $\beta$ in (1.1).

Let $\{\lambda_i\}_{i=1}^n$ be the eigenvalues of $M$. Then it is well known (see [9]) that the joint distribution of $\{\lambda_i\}_{i=1}^n$ has the density

$$p_n(\lambda_1,\dots,\lambda_n) = Q_{n,\beta}^{-1}\exp\Big\{-\frac{n\beta}{2}\sum_{j=1}^n V(\lambda_j)\Big\}\prod_{1\le j<k\le n}|\lambda_j-\lambda_k|^{\beta}, \qquad (1.3)$$

where $Q_{n,\beta}$ is the normalizing constant. The Normalized Counting Measure (NCM) of eigenvalues for any interval $\Delta\subset\mathbb{R}$ is defined as

$$N_n(\Delta) = \#\{\lambda_l\in\Delta\}/n. \qquad (1.4)$$

It is known [3, 8] that for any $\Delta$, $N_n(\Delta)$ converges weakly in probability to a non-random measure $N(\Delta)$, and the limiting measure $N$ can be found as the unique minimum of a certain functional on the set of nonnegative unit measures. The extremum point equation for this functional, in the case of Hölder $V'$, gives us

$$V'(\lambda) = 2\int\frac{\rho(\mu)\,d\mu}{\lambda-\mu}, \quad \lambda\in\sigma, \qquad (1.5)$$

where $\rho$ is the density of $N$ and $\sigma$ is the support of $N$.

For any $\varphi:\mathbb{R}\to\mathbb{R}$ consider the linear statistic

$$\mathcal{N}_n[\varphi] = \varphi(\lambda_1)+\dots+\varphi(\lambda_n).$$

It follows from the results of [3, 8] that if $V$ is a Hölder function, then

$$\lim_{n\to\infty} n^{-1}\mathcal{N}_n[\varphi] = \int\varphi(\lambda)\,N(d\lambda).$$

Consider the fluctuation of the linear eigenvalue statistics

$$\dot{\mathcal{N}}_n[\varphi] = \mathcal{N}_n[\varphi] - \mathbf{E}\{\mathcal{N}_n[\varphi]\}. \qquad (1.6)$$

For polynomial $V$ it was proved by Johansson [8] that if the limiting spectrum is $\sigma=[-2,2]$, then for any $\beta$ and any $\varphi\in C^1[-2-d,2+d]$ the fluctuation $\dot{\mathcal{N}}_n[\varphi]$ converges in distribution, as $n\to\infty$, to a Gaussian random variable. The limiting variance is the limit, as $n\to\infty$, of

$$\mathrm{Var}_n[\varphi;V] = \mathbf{E}\{\dot{\mathcal{N}}_n^2[\varphi]\} = n(n-1)\int d\lambda_1\,d\lambda_2\,p^{(n)}_{2,\beta}(\lambda_1,\lambda_2)\varphi(\lambda_1)\varphi(\lambda_2) + n\int d\lambda_1\,p^{(n)}_{1,\beta}(\lambda_1)\varphi^2(\lambda_1) - n^2\Big(\int d\lambda_1\,p^{(n)}_{1,\beta}(\lambda_1)\varphi(\lambda_1)\Big)^2$$
$$\to \frac{1}{2\beta\pi^2}\int\!\!\int d\lambda\,d\mu\,\Big(\frac{\varphi(\lambda)-\varphi(\mu)}{\lambda-\mu}\Big)^2\frac{4-\lambda\mu}{\sqrt{4-\lambda^2}\sqrt{4-\mu^2}}.$$

Here and below we denote by $p^{(n)}_{l,\beta}$ the $l$th marginal density

$$p^{(n)}_{l,\beta}(\lambda_1,\dots,\lambda_l) = \int d\lambda_{l+1}\cdots d\lambda_n\,p_n(\lambda_1,\dots,\lambda_n). \qquad (1.7)$$

For Hermitian matrix models these results can be easily generalized to non-analytic $V$ under the conditions that $\sigma=[-2,2]$ and $V^{(4)}\in L_2[-2-\epsilon,2+\epsilon]$.
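As a quick check of this limiting expression, take for instance the test function $\varphi(\lambda)=\lambda$: the difference quotient is identically 1, and the double integral can be evaluated explicitly,

$$\frac{1}{2\beta\pi^2}\int_{-2}^{2}\!\!\int_{-2}^{2}\frac{4-\lambda\mu}{\sqrt{4-\lambda^2}\sqrt{4-\mu^2}}\,d\lambda\,d\mu = \frac{1}{2\beta\pi^2}\Big[4\Big(\int_{-2}^{2}\frac{d\lambda}{\sqrt{4-\lambda^2}}\Big)^2 - \Big(\int_{-2}^{2}\frac{\lambda\,d\lambda}{\sqrt{4-\lambda^2}}\Big)^2\Big] = \frac{4\pi^2}{2\beta\pi^2} = \frac{2}{\beta}.$$

For $\beta=1$ and the Gaussian potential $V(\lambda)=\lambda^2/2$ this agrees with the direct computation: under (1.1) the diagonal entries of $M$ are independent $N(0,2/n)$, so $\mathrm{Var}\{\mathrm{Tr}\,M\} = n\cdot(2/n) = 2$.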
A key role in the proof of the CLT, as well as in most studies of Hermitian matrix models, belongs to the orthogonal polynomial technique, which allows one to write all marginal densities as

$$p^{(n)}_{l,2}(\lambda_1,\dots,\lambda_l) = \frac{(n-l)!}{n!}\det\{K_n(\lambda_j,\lambda_k)\}_{j,k=1}^{l}, \qquad (1.8)$$

where

$$K_n(\lambda,\mu;V) = \sum_{l=0}^{n-1}\psi^{(n)}_l(\lambda)\psi^{(n)}_l(\mu) \qquad (1.9)$$

is the reproducing kernel of the orthonormal system

$$\psi^{(n)}_l(\lambda) = w_n^{1/2}(\lambda)\,p^{(n)}_l(\lambda), \quad l=0,\dots, \qquad (1.10)$$

and $p^{(n)}_l$, $l=0,\dots$, are the orthogonal polynomials on $\mathbb{R}$ associated with the weight $w_n(\lambda)=e^{-nV(\lambda)}$,

$$\int p^{(n)}_l(\lambda)\,p^{(n)}_m(\lambda)\,w_n(\lambda)\,d\lambda = \delta_{l,m}.$$

In the Hermitian case it can be proved that

$$\frac{d^2}{dt^2}\log\mathbf{E}\{e^{t\dot{\mathcal{N}}_n[\varphi]}\} = \mathrm{Var}\{\mathcal{N}_n[\varphi;V+t\varphi/n]\} = \frac12\int d\lambda_1\,d\lambda_2\,\big(\varphi(\lambda_1)-\varphi(\lambda_2)\big)^2K_n^2(\lambda_1,\lambda_2;V+t\varphi/n). \qquad (1.11)$$

Hence, to prove the CLT one has to study the last integral, or to prove that $K_n$ does not depend on the "small perturbation" $t\varphi/n$ in the limit $n\to\infty$. For unitary matrix models this is true only in the case (see [8]) when the support of $N$ (the limiting NCM) consists of one interval. If the limiting support consists of two or more intervals, then the r.h.s. of (1.11) has no limit as $n\to\infty$ (see [11]).

In the case of real symmetric matrix models the situation is more complicated. According to the result of [18], to study the marginal densities we need to study a matrix kernel of the form

$$\widehat{K}_{n,1}(\lambda,\mu) = \begin{pmatrix} S_n(\lambda,\mu) & S_{nd}(\lambda,\mu)\\[2pt] IS_n(\lambda,\mu)-\epsilon(\lambda-\mu) & S_n(\mu,\lambda)\end{pmatrix}, \qquad (1.12)$$

where

$$S_n(\lambda,\mu) = -\sum_{i,j=0}^{n-1}\psi^{(n)}_i(\lambda)\,\big(\mathcal{M}^{(0,n)}\big)^{-1}_{i,j}\,\big(n\,\epsilon\psi^{(n)}_j\big)(\mu), \qquad (1.13)$$

with

$$\mathcal{M}^{(0,n)} = \{\mathcal{M}_{j,l}\}_{j,l=0}^{n-1}, \qquad \mathcal{M}_{j,l} = n\big(\psi^{(n)}_j,\epsilon\psi^{(n)}_l\big). \qquad (1.14)$$

Here and below we denote

$$\epsilon(\lambda) = \tfrac12\,\mathrm{sign}(\lambda), \qquad \epsilon f(\lambda) = \int\epsilon(\lambda-\mu)f(\mu)\,d\mu. \qquad (1.15)$$

If we know $\widehat{K}_n(\lambda,\mu)$, then

$$p^{(n)}_{l,1}(\lambda_1,\dots,\lambda_l) = \frac{(n-l)!}{n!}\,\frac{\partial^l}{\partial\varphi(\lambda_1)\dots\partial\varphi(\lambda_l)}\,{\det}^{1/2}\{I+\widehat{K}_n\widehat\varphi\},$$

where $\widehat\varphi$ is the operator of multiplication by $\varphi$ and $\widehat{K}_n: L_2[\mathbb{R}]\oplus L_2[\mathbb{R}]\to L_2[\mathbb{R}]\oplus L_2[\mathbb{R}]$ is the integral operator with the matrix kernel $\widehat{K}_n(\lambda,\mu)$. In particular,

$$p^{(n)}_{1,1}(\lambda) = \frac{1}{2n}\,\mathrm{tr}\,\widehat{K}_n(\lambda,\lambda),$$
$$p^{(n)}_{2,1}(\lambda,\mu) = \frac{1}{4n(n-1)}\Big[\mathrm{tr}\,\widehat{K}_n(\lambda,\lambda)\,\mathrm{tr}\,\widehat{K}_n(\mu,\mu) - 2\,\mathrm{tr}\,\widehat{K}_n(\lambda,\mu)\widehat{K}_n(\mu,\lambda)\Big]. \qquad (1.16)$$

Below we will also use the following representation of the variance $\mathrm{Var}\{\mathcal{N}_n[\varphi_1];V\}$:

Proposition 1.

$$\mathrm{Var}\{\mathcal{N}_n[\varphi_1];V\} = \frac14\int d\lambda_1\,d\lambda_2\,\big(\varphi_1(\lambda_1)-\varphi_1(\lambda_2)\big)^2\,\mathrm{tr}\Big(\widehat{K}_n(\lambda_1,\lambda_2)\widehat{K}_n(\lambda_2,\lambda_1)\Big). \qquad (1.17)$$

The structure of the matrix kernel $\widehat{K}_n$ has been studied only for a few particular ensembles. The GOE was considered in [18]. The case $V(\lambda)=\lambda^{2m}$ for natural $m$ was studied in [6]. Ensembles with $V(\lambda)=\frac14\lambda^4-\frac{a}{2}\lambda^2$ were studied in [17].

Let us state our main conditions.

C1: $V(\lambda)$ satisfies (1.2) and is an even analytic function in

$$[d,d_1] = \{z:\ -2-2d\le\Re z\le 2+2d,\ |\Im z|\le d_1\}, \quad d,d_1>0. \qquad (1.18)$$

C2: The support $\sigma$ of the IDS of the ensemble consists of a single interval: $\sigma=[-2,2]$.

C3: The DOS $\rho(\lambda)$ is strictly positive at the interior points $\lambda\in(-2,2)$ and $\rho(\lambda)\sim|\lambda\mp2|^{1/2}$ as $\lambda\sim\pm2$.

C4: The function

$$u(\lambda) = 2\int\log|\lambda-\mu|\,\rho(\mu)\,d\mu - V(\lambda) \qquad (1.19)$$

attains its maximum if and only if $\lambda\in\sigma$.

It is proved in [2] that these conditions imply that

$$\rho(\lambda) = \frac{1}{2\pi}\,P(\lambda)\sqrt{4-\lambda^2}\,\mathbf{1}_{\sigma}(\lambda), \qquad (1.20)$$

where

$$P(z) = \frac{1}{2\pi i}\oint_{\mathcal{L}}\frac{V'(z)-V'(\zeta)}{z-\zeta}\,\frac{d\zeta}{(\zeta^2-4)^{1/2}} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{V'(z)-V'(2\cos y)}{z-2\cos y}\,dy. \qquad (1.21)$$

Here the contour $\mathcal{L}\subset[d,d_1]$, and $\mathcal{L}$ encircles the interval $(-2,2)$. It is evident that $P$ is an analytic function in $[2d/3,2d_1/3]$ and $P(\lambda)\ge\delta>0$, $\lambda\in\sigma$.
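For orientation, in the Gaussian case $V(\lambda)=\lambda^2/2$ one has $V'(z)-V'(2\cos y)=z-2\cos y$, so (1.21) gives $P\equiv1$ and (1.20) reduces to the semicircle law:

$$P(z)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{z-2\cos y}{z-2\cos y}\,dy = 1, \qquad \rho(\lambda)=\frac{1}{2\pi}\sqrt{4-\lambda^2}\,\mathbf{1}_{[-2,2]}(\lambda).$$

One checks directly that this density satisfies (1.5) with $V'(\lambda)=\lambda$, since the principal-value integral $2\int_{-2}^{2}\rho(\mu)\,d\mu/(\lambda-\mu)$ equals $\lambda$ for $\lambda\in[-2,2]$.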
Under these conditions it was proved in [16] that there exists an $n$-independent constant $C$ such that for even $n$, $\|(\mathcal{M}^{(0,n)})^{-1}\|\le C$ and

$$S_n(\lambda,\mu) = K_n(\lambda,\mu) + r_n(\lambda,\mu) + \tilde r_n(\lambda,\mu), \qquad (1.22)$$

where

$$r_n(\lambda,\mu) = n\sum_{|k|,|j|\le 2\log^2 n} A^{(n)}_{j,k}\,\psi^{(n)}_{n+j}(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\mu), \qquad (1.23)$$

$$\tilde r_n(\lambda,\mu) = \sum_{j,k=0}^{n-1}E^{(n)}_{j,k}\,\psi^{(n)}_j(\lambda)\,\epsilon\psi^{(n)}_k(\mu), \qquad |E^{(n)}_{j,k}|\le e^{-c\log^2 n}. \qquad (1.24)$$

Here and below we denote by $c,C,C_0,C_1,\dots$ positive $n$-independent constants (different in different formulas). Besides,

$$IS_n(\lambda,\mu) = \int\epsilon(\lambda-\lambda')K_n(\lambda',\mu)\,d\lambda' + Ir_n(\lambda,\mu) + I\tilde r_n(\lambda,\mu), \qquad (1.25)$$

where

$$Ir_n(\lambda,\mu) = \int\epsilon(\lambda-\lambda')\,r_n(\lambda',\mu)\,d\lambda', \qquad I\tilde r_n(\lambda,\mu) = \int\epsilon(\lambda-\lambda')\,\tilde r_n(\lambda',\mu)\,d\lambda', \qquad (1.26)$$

and

$$S_{nd}(\lambda,\mu) = -\frac{\partial}{\partial\mu}K_n(\lambda,\mu) + \frac{\partial}{\partial\mu}r_n(\lambda,\mu) + \frac{\partial}{\partial\mu}\tilde r_n(\lambda,\mu). \qquad (1.27)$$

The main result of the present paper is

Theorem 1. Consider the orthogonally invariant ensemble of random matrices defined by (1.1)-(1.3) with $V$ satisfying conditions C1-C4. Then for any $\varphi\in C^1[-2-\varepsilon,2+\varepsilon]$, growing not faster than a polynomial at infinity, the fluctuations of the linear statistics (1.6) converge in distribution, as $n\to\infty$, to a Gaussian random variable with zero mean and variance $\mathrm{Var}[\varphi;V]$, where

$$\mathrm{Var}[\varphi;V] = \lim_{n\to\infty}\mathrm{Var}_n[\varphi;V]. \qquad (1.28)$$

2. Proof of the Main Results

P r o o f  o f  P r o p o s i t i o n 1. By the definition and (1.16) we have

$$\mathrm{Var}_n[\varphi;V] = n(n-1)\int d\lambda\,d\mu\,p^{(n)}_{2,1}(\lambda,\mu)\varphi(\lambda)\varphi(\mu) + n\int d\lambda\,p^{(n)}_{1,1}(\lambda)\varphi^2(\lambda) - n^2\int d\lambda\,d\mu\,p^{(n)}_{1,1}(\lambda)p^{(n)}_{1,1}(\mu)\varphi(\lambda)\varphi(\mu)$$
$$= -\frac12\int d\lambda\,d\mu\,\mathrm{tr}\big(\widehat K_n(\lambda,\mu)\widehat K_n(\mu,\lambda)\big)\varphi(\lambda)\varphi(\mu) + \frac12\int d\lambda\,\mathrm{tr}\,\widehat K_n(\lambda,\lambda)\,\varphi^2(\lambda). \qquad (2.1)$$

But since

$$\int d\lambda\,p^{(n)}_{1,1}(\lambda) = 1, \qquad \int d\mu\,p^{(n)}_{2,1}(\lambda,\mu) = p^{(n)}_{1,1}(\lambda),$$

we obtain

$$\frac1{2n}\int d\lambda\,\mathrm{tr}\,\widehat K_n(\lambda,\lambda) = 1, \qquad \int d\mu\,\mathrm{tr}\big(\widehat K_n(\lambda,\mu)\widehat K_n(\mu,\lambda)\big) = \mathrm{tr}\,\widehat K_n(\lambda,\lambda).$$

Using these relations in (2.1), we get (1.17).
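Written out, the step from (2.1) to (1.17) is the elementary identity obtained by expanding the square and using the symmetry of $\mathrm{tr}\big(\widehat K_n(\lambda,\mu)\widehat K_n(\mu,\lambda)\big)$ in $\lambda,\mu$ together with the two relations above:

$$\frac14\int\!\!\int\big(\varphi(\lambda)-\varphi(\mu)\big)^2\,\mathrm{tr}\big(\widehat K_n(\lambda,\mu)\widehat K_n(\mu,\lambda)\big)\,d\lambda\,d\mu = \frac12\int\varphi^2(\lambda)\,\mathrm{tr}\,\widehat K_n(\lambda,\lambda)\,d\lambda - \frac12\int\!\!\int\varphi(\lambda)\varphi(\mu)\,\mathrm{tr}\big(\widehat K_n(\lambda,\mu)\widehat K_n(\mu,\lambda)\big)\,d\lambda\,d\mu,$$

which is exactly the r.h.s. of (2.1).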
The proof of Theorem 1 is based on the following lemma.

Lemma 1. Suppose that for any $\varphi\in C^1[\sigma_d]$, where $\sigma_d=[-2-d,2+d]$,

$$\mathrm{Var}_n[\varphi;V] \le C\max_{\sigma_d}|\varphi'|^2, \qquad (2.2)$$

and that for any polynomial $\varphi$ and any $|t|\le A$

$$\mathbf{E}\{e^{it\dot{\mathcal{N}}_n[\varphi]}\} \to e^{-t^2\mathrm{Var}[\varphi;V]/2}. \qquad (2.3)$$

Then for any $\varphi\in C^1[\sigma_d]$ the limit in (1.28) exists and (2.3) is valid.

P r o o f. Since $\varphi\in C^1[\sigma_d]$, for any $\varepsilon>0$ there exist $\varphi_1$ and $\varphi_2$ such that $\varphi=\varphi_1+\varphi_2$, $\varphi_1$ is a polynomial and $|\varphi_2'|\le\varepsilon$. It follows from (2.2) and the Schwarz inequality that there exists $C>0$, independent of $\varepsilon$ and $n$, such that

$$|\mathrm{Var}_n[\varphi;V]-\mathrm{Var}_n[\varphi_1;V]| \le C\varepsilon.$$

Besides, for any other choice $\tilde\varphi_1$ and $\tilde\varphi_2$ such that $\varphi=\tilde\varphi_1+\tilde\varphi_2$, $|\tilde\varphi_2'|\le\varepsilon_1$, we have

$$|\mathrm{Var}_n[\tilde\varphi_1;V]-\mathrm{Var}_n[\varphi_1;V]| \le C(\varepsilon+\varepsilon_1).$$

Hence, for any choice of polynomials $\{\varphi_{1,n}\}_{n=1}^{\infty}$ such that $\max|\varphi'-\varphi_{1,n}'|\to0$ as $n\to\infty$, the sequence $\mathrm{Var}_n[\varphi_{1,n};V]$ is a Cauchy sequence and has a limit independent of the choice of $\varphi_{1,n}$. This implies the existence of the limit in (1.28) and that for any $\varphi_1,\varphi_2\in C^1[\sigma_d]$

$$|\mathrm{Var}[\varphi_1;V]-\mathrm{Var}[\varphi_2;V]| \le C\max_{\sigma_d}|\varphi_1'-\varphi_2'|. \qquad (2.4)$$

To prove (2.3) for an arbitrary $\varphi$, we fix $\varepsilon>0$, choose $\varphi_1$ and $\varphi_2$ as above, and, using the finite increments formula and the Schwarz inequality, write

$$\big|\mathbf{E}\{e^{it\dot{\mathcal{N}}_n[\varphi_1+\varphi_2]}\}-\mathbf{E}\{e^{it\dot{\mathcal{N}}_n[\varphi_1]}\}\big| \le |t|\,\big|\mathbf{E}\{\dot{\mathcal{N}}_n[\varphi_2]\,e^{it\dot{\mathcal{N}}_n[\varphi_1+\theta\varphi_2]}\}\big| \le A\,\mathrm{Var}_n^{1/2}[\varphi_2;V] \le CA\varepsilon.$$

Hence, taking the limit $n\to\infty$, we get

$$e^{-t^2\mathrm{Var}[\varphi_1;V]/2}-CA\varepsilon \le \liminf_{n\to\infty}\mathbf{E}\{e^{it\dot{\mathcal{N}}_n[\varphi]}\} \le \limsup_{n\to\infty}\mathbf{E}\{e^{it\dot{\mathcal{N}}_n[\varphi]}\} \le e^{-t^2\mathrm{Var}[\varphi_1;V]/2}+CA\varepsilon.$$

Thus, using (2.4), we get (2.3) for any $\varphi\in C^1[\sigma_d]$.

The next lemma will help us to prove (2.3) for polynomial $\varphi$.

Lemma 2. Let $\{\Phi_n(t)\}_{n=1}^{\infty}$ be a sequence of analytic, uniformly bounded functions in the disk $B_A=\{t:|t|\le A\}$. Assume also that $\Phi_n(t)\to\Phi(t)$ for any real $t$, and that $\Phi(t)$ is also an analytic function in $B_A$. Then $\Phi_n(t)\to\Phi(t)$ for all $t\in B_A$.

P r o o f. The proof of the lemma is very simple. By the Arzelà theorem, the sequence $\{\Phi_n(t)\}$ is weakly compact in $B_A$. But by the uniqueness theorem, the limit of any subsequence $\{\Phi_{n_k}(t)\}$ convergent in $B_A$ must coincide with $\Phi(t)$. Hence we obtain the assertion of the lemma.

P r o o f  o f  T h e o r e m 1. According to the results of [2] and [13], if we restrict the integration in (1.3) to $|\lambda_i|\le2+d$, consider the polynomials $\{p^{(n,d)}_k\}_{k=0}^{\infty}$ orthogonal on the interval $\sigma_d=[-2-d,2+d]$ with the weight $e^{-nV}$, and set $\psi^{(n,d)}_k=e^{-nV/2}p^{(n,d)}_k$, then for $k\le n(1+\varepsilon)$ with some $\varepsilon>0$

$$\sup_{\lambda\in\sigma_d}|\psi^{(n,d)}_k(\lambda)-\psi^{(n)}_k(\lambda)| \le e^{-nC}, \qquad \sup_{|\lambda|\ge2+d/2}|\psi^{(n)}_k(\lambda)| \le e^{-nC}. \qquad (2.5)$$

Hence, if $\mathcal{M}^{(0,n)}_d$ and $S_{n,d}$ are constructed as in (1.14) and (1.13) for $\sigma_d$, then

$$\|\mathcal{M}^{(0,n)}_d-\mathcal{M}^{(0,n)}\| \le e^{-nC}, \qquad \max_{\sigma_d}|S_{n,d}(\lambda,\mu)-S_n(\lambda,\mu)| \le e^{-nC}.$$

Therefore from the very beginning we can take all the integrals in (1.3), (1.7), (1.17), (1.15) and (1.14) over the interval $\sigma_d$, and then study $\mathcal{M}^{(0,n)}_d$ and $S_{n,d}(\lambda,\mu)$ instead of $\mathcal{M}^{(0,n)}$ and $S_n(\lambda,\mu)$. To simplify the notation we omit the index $d$ below. Besides, everywhere below integrals without limits mean integrals over $\sigma_d$, and the symbols $(\cdot,\cdot)_2$ and $\|\cdot\|_2$ mean the standard scalar product in $L_2[\sigma_d]$ and the corresponding norm.

We use Lemma 2 to prove that for polynomial $\varphi$

$$\Phi_n(t) = \mathbf{E}\{e^{t\dot{\mathcal{N}}_n[\varphi]}\} \to e^{t^2\mathrm{Var}[\varphi;V]/2}, \quad n\to\infty,$$

where $\mathrm{Var}[\varphi;V]$ is defined in (1.28). It is evident that $|\Phi_n(t)|\le|\Phi_n(|t|)|+|\Phi_n(-|t|)|$. Hence, to obtain the uniform bound for $\{\Phi_n(t)\}_{n=1}^{\infty}$ with $t\in B_A$ we only need a uniform bound for $\{\Phi_n(t)\}_{n=1}^{\infty}$ with $t\in[-A,A]$. And to find the latter bound, and also to prove the convergence of $\{\Phi_n(t)\}_{n=1}^{\infty}$ for real $t$, it is enough to prove that the sequence $\{\Phi_n''(t)\}_{n=1}^{\infty}$ is uniformly bounded for $t\in[-A,A]$ and that

$$\lim_{n\to\infty}\Phi_n''(t) = \mathrm{Var}[\varphi;V], \quad t\in[-A,A]. \qquad (2.6)$$

But it is easy to see that

$$\Phi_n''(t) = \mathrm{Var}_n[\varphi;V+t\varphi/n]. \qquad (2.7)$$

In other words, for our purpose it is enough to prove that under the conditions of Theorem 1

$$\lim_{n\to\infty}\big(\mathrm{Var}_n[\varphi;V+t\varphi/n]-\mathrm{Var}_n[\varphi;V]\big) = 0. \qquad (2.8)$$

First, let us transform the expression for $\mathrm{Var}_n[f;V+t\varphi/n]$ given by Proposition 1. Using (1.22)-(1.27) and integrating by parts in the terms containing $\frac{\partial}{\partial\mu}K_n(\lambda,\mu)$, we get

$$2\,\mathrm{Var}_n[f;V+t\varphi/n] = \int d\lambda\,d\mu\,S_n(\lambda,\mu)S_n(\mu,\lambda)\Delta^2 f - \int d\lambda\,d\mu\,\frac{\partial}{\partial\mu}S_n(\lambda,\mu)\big(IS_n(\mu,\lambda)-\epsilon(\mu-\lambda)\big)\Delta^2 f$$
$$= 2\int d\lambda\,d\mu\,K_n^2(\lambda,\mu)\Delta^2 f + 3\int d\lambda\,d\mu\,K_n(\lambda,\mu)r_n(\lambda,\mu)\Delta^2 f + \int d\lambda\,d\mu\,r_n(\lambda,\mu)r_n(\mu,\lambda)\Delta^2 f$$
$$\quad - \int d\lambda\,d\mu\,\frac{\partial}{\partial\mu}r_n(\lambda,\mu)\big(IK_n(\mu,\lambda)-\epsilon(\mu-\lambda)\big)\Delta^2 f - \int d\lambda\,d\mu\,\frac{\partial}{\partial\mu}r_n(\lambda,\mu)\,Ir_n(\mu,\lambda)\,\Delta^2 f$$
$$\quad - 2\int d\lambda\,d\mu\,K_n(\lambda,\mu)\big(IK_n(\mu,\lambda)-\epsilon(\mu-\lambda)\big)\Delta f\,f'(\lambda) - 2\int d\lambda\,d\mu\,K_n(\lambda,\mu)\,Ir_n(\mu,\lambda)\,\Delta f\,f'(\lambda) + O(\max|f|^2e^{-c\log^2 n})$$
$$= 2I_1+3I_2+I_3-I_4-I_5-2I_6-2I_7+O(\max|f|^2e^{-c\log^2 n}), \qquad (2.9)$$

where

$$\Delta f = f(\lambda)-f(\mu), \qquad (2.10)$$

and $O(\max|f|^2e^{-c\log^2 n})$ is the contribution of the terms containing integrals of $\tilde r_n(\lambda,\mu)$ from (1.24). Note that all the integrated (boundary) terms here contain $\psi^{(n)}_k(\pm(2+d))=O(e^{-nc})$ (see (2.5)). Hence their contribution is $O(e^{-nc})$.

To proceed further, let us recall that, by standard arguments, the functions $\{\psi^{(n)}_l\}$ satisfy the recursion formula

$$\lambda\psi^{(n)}_l(\lambda) = J^{(n)}_{l+1}\psi^{(n)}_{l+1}(\lambda) + q^{(n)}_l\psi^{(n)}_l(\lambda) + J^{(n)}_l\psi^{(n)}_{l-1}(\lambda), \quad l=0,1,\dots, \qquad J^{(n)}_0=0. \qquad (2.11)$$

The Jacobi matrix $\mathcal{J}^{(n)}$ defined by this recursion plays an important role in our proof.
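For orientation, in the Gaussian case $V(\lambda)=\lambda^2/2$ (and $t=0$) the functions $\psi^{(n)}_l$ are rescaled Hermite functions for the weight $e^{-n\lambda^2/2}$, and the recursion coefficients are explicit:

$$J^{(n)}_l = \sqrt{l/n}, \qquad q^{(n)}_l = 0, \qquad\text{so that}\qquad J^{(n)}_{n+j} = \sqrt{1+j/n} = 1+\frac{j}{2n}+O\Big(\frac{j^2}{n^2}\Big),$$

which is consistent with the expansion (2.12) of Lemma 3 below, since $P\equiv1$ for this potential.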
Lemma 3. Consider $\psi^{(n)}_j$ and $J^{(n)}_j$, $q^{(n)}_j$ defined by (2.11) for the potential $V+t\varphi/n$. Under the conditions of Theorem 1 there exists $\tilde\varepsilon>0$ such that for all $|j|\le\tilde\varepsilon n$

$$J^{(n)}_{n+j} = 1+\frac{c^{(1)}t+j}{2P(2)n}+r^{(1)}_j, \qquad q^{(n)}_{n+j} = \frac{c^{(0)}t}{2P(2)n}+r^{(0)}_j, \qquad |r^{(\nu)}_j|\le C\Big(\frac{j^2}{n^2}+n^{-4/3}\Big), \quad \nu=0,1, \qquad (2.12)$$

and for $|j|\le n^{1/5}$

$$\epsilon\psi^{(n)}_{n+j-1}-\epsilon\psi^{(n)}_{n+j+1} = 2n^{-1}\sum_{k>0}R_{j-k}\,\psi^{(n)}_{n+k} + n^{-1}\varepsilon^{(n)}_j, \qquad \|\varepsilon^{(n)}_j\|_2 \le n^{-1/9}, \qquad (2.13)$$

where

$$R_j = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{e^{ijx}\,dx}{P(2\cos x)}, \qquad (2.14)$$

and the function $P$ is defined in (1.21). Moreover, there exists $\mathcal{M}^*_{n-j,n-k}$ such that for any $|j|,|k|\le n^{1/5}$

$$\mathcal{M}_{n-j,n-k} = \mathcal{M}^*_{n-j,n-k}+O(n^{-1/9}), \qquad \mathcal{M}^*_{n-j,n-k} = M_{k-j+1}-\tfrac12\big(1+(-1)^j\big)M_{-1}, \qquad (2.15)$$

with

$$M_k = \big(1+(-1)^k\big)\sum_{j=k}^{\infty}R_j, \qquad M_{-1} = 2\sum_{j=-\infty}^{\infty}R_j. \qquad (2.16)$$

The proof of the lemma is given in the next section. On the basis of the lemma we can now prove that the last two integrals in the r.h.s. of (2.9) ($I_6$ and $I_7$) disappear in the limit $n\to\infty$. Using the Christoffel-Darboux formula, it is sufficient to prove that for any polynomials $f,g$ and any $|j|,|k|\le\log^2 n$

$$\int d\lambda\,d\mu\,\big(\psi^{(n)}_n(\lambda)\psi^{(n)}_{n-1}(\mu)-\psi^{(n)}_n(\mu)\psi^{(n)}_{n-1}(\lambda)\big)\big(IK_n(\lambda,\mu)-\epsilon(\lambda-\mu)\big)f(\lambda)g(\mu) \to 0,$$
$$n\int d\lambda\,d\mu\,\big(\psi^{(n)}_n(\lambda)\psi^{(n)}_{n-1}(\mu)-\psi^{(n)}_n(\mu)\psi^{(n)}_{n-1}(\lambda)\big)\,\epsilon\psi^{(n)}_{n+k}(\lambda)\,\epsilon\psi^{(n)}_{n+j}(\mu)\,f(\lambda)g(\mu) \to 0. \qquad (2.17)$$

We use the fact that

$$IK_n(\lambda,\mu)-\epsilon(\lambda-\mu) = \sum_{k=n}^{\infty}\epsilon\psi^{(n)}_k(\lambda)\,\psi^{(n)}_k(\mu) \qquad (2.18)$$

in the weak sense. Besides, using the recursion formula (2.11), we easily obtain that for a polynomial $f$ of degree $l$

$$f(\lambda)\psi^{(n)}_{n-\nu}(\lambda) = \sum_{|j|\le l} f_{n-\nu,j}\,\psi^{(n)}_{n-\nu+j}(\lambda), \quad \nu=0,1, \qquad (2.19)$$

where, according to (2.12), the coefficients $f_{n-\nu,j}$ have finite limits as $n\to\infty$. Using (2.18) and (2.19) in the first integral of (2.17) and integrating with respect to $\mu$, we obtain that the first integral is equal to a finite sum of terms of the form

$$\int d\lambda\,\epsilon\psi^{(n)}_{n+j}(\lambda)\,\psi^{(n)}_{n-\nu}(\lambda)\,g(\lambda). \qquad (2.20)$$

But using a representation of the type (2.19) for the polynomial $g$ we easily obtain that every term of the type (2.20) is equal to a finite sum of terms

$$\int d\lambda\,\epsilon\psi^{(n)}_{n+j}(\lambda)\,\psi^{(n)}_{n+j'}(\lambda) = n^{-1}\mathcal{M}_{n+j',n+j}. \qquad (2.21)$$

Since by (2.15) the $\mathcal{M}_{n+j',n+j}$ have finite limits as $n\to\infty$, we obtain the first line of (2.17). To prove that the second integral in (2.17) tends to zero, we also use (2.19) and its analog for $g$. Then we obtain that the second integral is a finite sum, with convergent coefficients, of the terms

$$n\int d\lambda\,d\mu\,\epsilon\psi^{(n)}_{n+k}(\lambda)\psi^{(n)}_{n+k'}(\lambda)\,\epsilon\psi^{(n)}_{n+j}(\mu)\psi^{(n)}_{n+j'}(\mu) = n^{-1}\mathcal{M}_{n+k',n+k}\mathcal{M}_{n+j',n+j}.$$

Similarly to the above we conclude that all these terms tend to zero, and so the second integral in (2.17) tends to zero.

Lemma 4. Consider the coefficients $A^{(n)}_{j,k}$ from (1.23) defined for the potential $V+t\varphi/n$. Under the conditions of Theorem 1, for any $|j|,|k|\le\log^2 n$ there exists $A_{j,k}$ independent of $t$ and such that

$$|A^{(n)}_{j,k}-A_{j,k}| \le Cn^{-1/9}. \qquad (2.22)$$

Moreover, there exist $n$-independent $c,C$ such that

$$|A_{j,k}| \le Ce^{-c(|j|+|k|)}. \qquad (2.23)$$

We prove this lemma in the next section. According to the above arguments, it is now clear that to prove Theorem 1 it is enough to prove that for any polynomial $f$ the limits of all the integrals $I_\nu$ ($\nu=1,\dots,5$) from (2.9) exist. The existence of the limit of $I_1$ follows from the result of [8].
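For later reference, the Christoffel-Darboux formula used repeatedly below reads, in the normalization (2.11),

$$K_n(\lambda,\mu) = \sum_{l=0}^{n-1}\psi^{(n)}_l(\lambda)\psi^{(n)}_l(\mu) = J^{(n)}_n\,\frac{\psi^{(n)}_n(\lambda)\psi^{(n)}_{n-1}(\mu)-\psi^{(n)}_{n-1}(\lambda)\psi^{(n)}_n(\mu)}{\lambda-\mu},$$

so that bounds on integrals involving $K_n$ reduce to bounds on integrals involving $\psi^{(n)}_n$ and $\psi^{(n)}_{n-1}$ (recall that $J^{(n)}_n=1+O(n^{-1})$ by (2.12)).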
Using representation (1.23) and the Christoffel-Darboux formula, it is easy to see that $I_2$ can be represented as a sum of terms of the form

$$T_{j,k} := n\int d\lambda\,d\mu\,\big(\psi^{(n)}_n(\lambda)\psi^{(n)}_{n-1}(\mu)-\psi^{(n)}_n(\mu)\psi^{(n)}_{n-1}(\lambda)\big)\,\psi^{(n)}_{n-j}(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\mu)\,\frac{\Delta^2 f}{\lambda-\mu}. \qquad (2.24)$$

It is evident that if $f$ is a polynomial of degree $l$, then

$$\frac{\Delta^2 f}{\lambda-\mu} = \sum_{|p|,|q|\le 2l-1}\tilde f_p(\lambda)\,\tilde g_q(\mu),$$

where $\tilde f_p$ and $\tilde g_q$ are fixed polynomials of degree less than $2l$. Since we have the bound (2.23), it is sufficient to prove that the limit exists for any fixed $j,k$ as $n\to\infty$. But using (2.19) for $\tilde f_p$ and $\tilde g_q$ and integrating with respect to $\mu$, we reduce the existence of the limit of $T_{j,k}$ to the existence of the limits of $\mathcal{M}_{n-j',n+k}$ for any fixed $j',k$, which follows from Lemma 3.

The existence of the limits of $I_3$ and $I_5$ can be obtained in the same way. To find the limit of $I_4$ we first use relation (2.18), then (2.19) for $f$, and observe that after integration with respect to $\lambda$ only a finite number of terms in the r.h.s. of (2.18) gives a nonzero contribution. Hence, as above, we reduce the problem to the existence of the limits $\mathcal{M}_{n-j,n+k}$, which follows from Lemma 3.

To complete the proof of the theorem it remains to prove the estimate (2.2). It is clear that for this purpose it is enough to prove similar estimates for all the terms $I_\nu$, $\nu=1,\dots,7$, in (2.9). For $I_1$ we have, by the Christoffel-Darboux formula,

$$\int d\lambda\,d\mu\,K_n^2(\lambda,\mu)\Delta^2 f \le \max_{\lambda\in\sigma_d}|f'|^2\int d\lambda\,d\mu\,K_n^2(\lambda,\mu)(\lambda-\mu)^2 = 2\big(J^{(n)}_n\big)^2\max_{\lambda\in\sigma_d}|f'|^2.$$

To prove the estimates for the other $I_\nu$ we first prove the following auxiliary statement.

Proposition 2. For any $g$ with $g'$ bounded in $\sigma_d$ and any $|j|,|k|\le2\log^2 n$

$$\Big|n\int d\lambda\,g(\lambda)\,\psi^{(n)}_{n+j}(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\lambda)\Big| \le C\Big(\max_{\sigma_d}|g'|+\max_{\sigma_d}|g|\Big). \qquad (2.25)$$

P r o o f  o f  P r o p o s i t i o n 2. We start with a simple relation, which follows from the definition of the operator $\epsilon$ (see (1.15)). For any integrable $f,g$

$$\int d\lambda\,\epsilon f(\lambda)\,\epsilon g(\lambda) = \frac14(\mathbf{1}_{\sigma_d},f)_2(\mathbf{1}_{\sigma_d},g)_2 - \frac12\int_{\sigma_d}d\lambda\,d\mu\,|\lambda-\mu|\,f(\lambda)g(\mu). \qquad (2.26)$$

In particular, using the simple observation that $\frac12|\lambda-\mu|=(\lambda-\mu)\epsilon(\lambda-\mu)$ and then the definition (1.14), we get

$$\int d\lambda\,\epsilon\psi^{(n)}_j(\lambda)\,\epsilon\psi^{(n)}_k(\lambda) = \frac14(\mathbf{1}_{\sigma_d},\psi^{(n)}_j)_2(\mathbf{1}_{\sigma_d},\psi^{(n)}_k)_2 - \frac1n\Big(J^{(n)}_{j+1}\mathcal{M}_{j+1,k}+J^{(n)}_j\mathcal{M}_{j-1,k}-J^{(n)}_{k+1}\mathcal{M}_{j,k+1}-J^{(n)}_k\mathcal{M}_{j,k-1}\Big). \qquad (2.27)$$

Since for odd $k$ we have $(\mathbf{1}_{\sigma_d},\psi^{(n)}_k)_2=0$, this relation together with (2.15) immediately gives us that for odd $|k|\le n^{1/5}$

$$\int d\lambda\,\big(\epsilon\psi^{(n)}_{n+k}(\lambda)\big)^2 \le \frac Cn. \qquad (2.28)$$

For even $k$ the same relation can be obtained if we apply the analog of (2.27) to $f(\lambda)=\lambda\psi^{(n)}_{n+k}(\lambda)=J^{(n)}_{n+k+1}\psi^{(n)}_{n+k+1}(\lambda)+J^{(n)}_{n+k}\psi^{(n)}_{n+k-1}(\lambda)$ and then use (2.13). Note also that since (2.5) yields

$$|\epsilon\psi^{(n)}_{n+k}(2+\tilde\lambda)-\epsilon\psi^{(n)}_{n+k}(2+d/2)| \le e^{-nc}, \qquad d/2\le\tilde\lambda\le d,$$

by (2.28) we have

$$n\big(\epsilon\psi^{(n)}_{n+k}(2+d)\big)^2\,\frac d2 \le n\int d\lambda\,\big(\epsilon\psi^{(n)}_{n+k}(\lambda)\big)^2 + o(1) \le C. \qquad (2.29)$$

The last bound and (2.28) imply one more useful estimate, valid for any $f$ with a bounded derivative:

$$\int d\lambda\,\Big(\epsilon\big(f\psi^{(n)}_{n+k}\big)(\lambda)\Big)^2 \le \frac Cn\Big(\max_{\sigma_d}|f|+\max_{\sigma_d}|f'|\Big)^2. \qquad (2.30)$$

Indeed, using the fact that $\psi^{(n)}_{n+k}=(\epsilon\psi^{(n)}_{n+k})'$ and integrating by parts in (2.30), it is easy to obtain

$$\epsilon\big(f\psi^{(n)}_{n+k}\big)(\lambda) = f(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\lambda) - \frac12 f(2+d)\,\epsilon\psi^{(n)}_{n+k}(2+d) - \frac12 f(-2-d)\,\epsilon\psi^{(n)}_{n+k}(-2-d) - \epsilon\big(f'\,\epsilon\psi^{(n)}_{n+k}\big)(\lambda).$$

Now, taking the square of the r.h.s. and using (2.29) and (2.28), we obtain (2.30).

To prove Proposition 2 we consider three cases: (a) $j-k$ is even; (b) $k$ is even and $j$ is odd; (c) $k$ is odd and $j$ is even.
(a) Using (2.13), it is easy to get that

$$\Big|n\int d\lambda\,g(\lambda)\psi^{(n)}_{n+j}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda) - n\int d\lambda\,g(\lambda)\psi^{(n)}_{n+k}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda)\Big| \le C|k-j|\max_{\sigma_d}|g(\lambda)|.$$

Then, integrating by parts in the second integral, we obtain

$$n\int d\lambda\,g(\lambda)\psi^{(n)}_{n+k}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda) = \frac n2\,g(\lambda)\big(\epsilon\psi^{(n)}_{n+k}(\lambda)\big)^2\Big|_{-2-d}^{2+d} - \frac n2\int d\lambda\,g'(\lambda)\big(\epsilon\psi^{(n)}_{n+k}(\lambda)\big)^2.$$

Relation (2.25) now follows from (2.29) and (2.28).

(b) Since for even $k$ we have $\epsilon\psi^{(n)}_{n+k}(0)=0$, using the result of [4] on the asymptotics of orthogonal polynomials, it is easy to get that for any $|\lambda|\le1$

$$|\epsilon\psi^{(n)}_{n+k}(\lambda)| = \Big|\int_0^{\lambda}\psi^{(n)}_{n+k}(\mu)\,d\mu\Big| \le \frac Cn.$$

Hence, if we define

$$\tilde g(\lambda) = g(\lambda)\lambda^{-1}\,\mathbf{1}_{|\lambda|>1} + \tfrac12\big[g(1)(1+\lambda)+g(-1)(1-\lambda)\big]\mathbf{1}_{|\lambda|\le1},$$

so that $g(\lambda)=\tilde g(\lambda)\lambda$ for $|\lambda|\ge1$, then

$$n\Big|\int d\lambda\,g(\lambda)\psi^{(n)}_{n+j}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda) - \int d\lambda\,\lambda\tilde g(\lambda)\psi^{(n)}_{n+j}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda)\Big| \le C\max_{\sigma_d}|g|. \qquad (2.31)$$

It is evident that $|\tilde g'(\lambda)|\le|g'(\lambda)|+|g(\lambda)|$. Thus, using the recursion relation (2.11), we replace the last integral by

$$n\int d\lambda\,\tilde g(\lambda)\Big(J^{(n)}_{n+j+1}\psi^{(n)}_{n+j+1}(\lambda)+J^{(n)}_{n+j}\psi^{(n)}_{n+j-1}(\lambda)\Big)\,\epsilon\psi^{(n)}_{n+k}(\lambda),$$

and we are again in case (a).

(c) Integrating by parts, we get

$$n\int d\lambda\,g(\lambda)\psi^{(n)}_{n+j}(\lambda)\epsilon\psi^{(n)}_{n+k}(\lambda) = n\,g(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\lambda)\,\epsilon\psi^{(n)}_{n+j}(\lambda)\Big|_{-2-d}^{2+d} - n\int d\lambda\,g'(\lambda)\,\epsilon\psi^{(n)}_{n+j}(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\lambda) - n\int d\lambda\,g(\lambda)\,\epsilon\psi^{(n)}_{n+j}(\lambda)\,\psi^{(n)}_{n+k}(\lambda).$$

The bounds for the first two terms in the r.h.s. were found above, and the last integral corresponds to case (b). Thus we have proved (2.25).

To find the bound for $I_2$ in (2.9) we use the Christoffel-Darboux formula. Then we are faced with the problem of finding bounds for the terms $T_{j,k}$ of (2.24). But since for any $\mu$ the function $\Delta^2 f\,(\lambda-\mu)^{-1}$ has a derivative bounded uniformly with respect to $\lambda,\mu$, we can apply the bound (2.25) for any fixed $\mu$. We get

$$T_{j,k} \le C\max_{\sigma_d}|f'|^2\int d\mu\,|\psi^{(n)}_n(\mu)|\,|\psi^{(n)}_{n-k}(\mu)| \le C\max_{\sigma_d}|f'|^2,$$

where the last bound is valid because of the Schwarz inequality. The estimates for $I_3$ and $I_5$ follow directly from (2.25) and (2.23). For $I_6$ we use the Christoffel-Darboux formula and then the Schwarz inequality. Thus we get

$$|I_6|^2 \le C\max_{\sigma_d}|f'(\lambda)|^4\Big(\int d\lambda\sum_{k=0}^{n-1}\big(\epsilon\psi^{(n)}_k(\lambda)\big)^2 + C\Big).$$

Here the sum with respect to $k$ appears due to the integration with respect to $\mu$ of $IK_n^2(\lambda,\mu)$, and $C$ appears due to the integration of $\epsilon^2(\lambda-\mu)$. But from (2.27) it is easy to see that

$$\int d\lambda\sum_{k=0}^{n-1}\big(\epsilon\psi^{(n)}_k(\lambda)\big)^2 = \frac14\sum_{k=0}^{n-1}(\mathbf{1}_{\sigma_d},\psi^{(n)}_k)_2^2 - \int d\lambda\,d\mu\,K_n(\lambda,\mu)(\lambda-\mu)\epsilon(\lambda-\mu).$$

It follows from the Bessel inequality that the sum in the r.h.s. is bounded by $(\mathbf{1}_{\sigma_d},\mathbf{1}_{\sigma_d})_2$. In the second integral we apply the Christoffel-Darboux formula and then (2.15). For $I_7$ we apply the Christoffel-Darboux formula and then the Schwarz inequality. We obtain

$$|I_7| \le nC\max_{\sigma_d}|f'|^2\Big(\sum_{j,k,j',k'}A_{j,k}A_{j',k'}\int d\lambda\,d\mu\,\epsilon\psi^{(n)}_{n+j}(\lambda)\epsilon\psi^{(n)}_{n+j'}(\lambda)\,\epsilon\psi^{(n)}_{n+k}(\mu)\epsilon\psi^{(n)}_{n+k'}(\mu)\Big)^{1/2} \le C\max_{\sigma_d}|f'|^2, \qquad (2.32)$$

where the last inequality follows from (2.28).

Now we are left to prove the bound for $I_4$ (see (2.9)). Note that because of (2.5) and (1.12)-(1.16), the integrals over $[2+d/2,2+d]$ and $[-2-d,-2-d/2]$ in (2.9) give $O(e^{-nc})$ terms. Hence, without loss of generality, we can replace the function $f$ on these intervals by a linear one, so that the new function is continuous with a bounded derivative and satisfies $f(2+d)=f(-2-d)=0$. Then, integrating by parts with respect to $\mu$, we need to control only the terms which do not contain $f(\mu)$.
But for odd $k$ we have $\epsilon\psi^{(n)}_k(-2-d)=0$, and if $j$ and $k$ are even, then $\epsilon\psi^{(n)}_k(\lambda)\epsilon\psi^{(n)}_j(\lambda)$ is an even function and so $\epsilon\psi^{(n)}_k(\lambda)\epsilon\psi^{(n)}_j(\lambda)\big|_{-2-d}^{2+d}=0$. Hence, integrating by parts in $I_4$, we obtain that all the integrated terms disappear. Thus,

$$I_4 = -I_2 + 2\int d\lambda\,d\mu\,r_n(\lambda,\mu)\big(IK_n(\lambda,\mu)-\epsilon(\lambda-\mu)\big)f'(\mu)\Delta f = -I_2 + 2I_{4,1}.$$

The bound for $I_2$ was found above. Hence, we need to find the bound for $I_{4,1}$. From the definitions (1.14) it is evident that $\mathcal{M}_{j,k}=-\mathcal{M}_{k,j}$, and therefore from (1.13) we derive $IS_n(\lambda,\mu)=-IS_n(\mu,\lambda)$,

$$IK_n(\lambda,\mu) = -IK_n(\mu,\lambda) - Ir_n(\lambda,\mu) - Ir_n(\mu,\lambda).$$

Now, if we replace $IK_n(\lambda,\mu)$ by the above expression, then the terms containing $Ir_n(\lambda,\mu)$ and $Ir_n(\mu,\lambda)$ can be easily estimated by using (2.25) and (2.23). Hence we are left to prove the bound for

$$\Big|\int d\lambda\,d\mu\,r_n(\lambda,\mu)\,IK_n(\mu,\lambda)\,\tilde f(\lambda)\tilde g(\mu)\Big| = n\Big|\sum_{j,k}A_{j,k}\sum_{l=0}^{n-1}\big(\tilde f\psi^{(n)}_{n-j},\epsilon\psi^{(n)}_l\big)_2\big(\tilde g\,\epsilon\psi^{(n)}_{n+k},\psi^{(n)}_l\big)_2\Big|$$
$$\le n\sum_{j,k}|A_{j,k}|\,\big\|\epsilon\big(\tilde f\psi^{(n)}_{n-j}\big)\big\|_2\,\big\|\tilde g\,\epsilon\psi^{(n)}_{n+k}\big\|_2 \le C\Big(\max_{\sigma_d}|\tilde f|+\max_{\sigma_d}|\tilde f'|\Big)\max_{\sigma_d}|\tilde g|,$$

where the last bound follows from (2.28), (2.30) and (2.22)-(2.23). The term with $\epsilon(\lambda-\mu)$ can be estimated in a similar way. This completes the proof of Theorem 1.

3. Auxiliary Results

P r o o f  o f  L e m m a 3. It is proved in [16] that for $t=0$ representation (2.12) implies (2.13) and (2.15). If we know (2.12) for $t\ne0$, then the proofs of (2.13) and (2.15) coincide with those of [16]. Hence we only need to prove (2.12). The idea is to use the perturbation expansion of the string equations:

$$V_t'(\mathcal{J}^{(n)})_{k,k} = 0, \qquad J^{(n)}_k\,V_t'(\mathcal{J}^{(n)})_{k,k+1} = \frac{k+1}{n}. \qquad (3.1)$$

Here and below in the proof of Lemma 3 we denote $V_t=V+t\varphi/n$, and by $\mathcal{J}^{(n)}$ the semi-infinite Jacobi matrix defined in (2.11). Relations (3.1) can be easily obtained from the identities

$$\int\Big(e^{-nV_t(\lambda)}\big(p^{(n)}_k(\lambda)\big)^2\Big)'\,d\lambda = 0, \qquad \int\Big(e^{-nV_t(\lambda)}\,p^{(n)}_{k+1}(\lambda)\,p^{(n)}_k(\lambda)\Big)'\,d\lambda = 0.$$

We consider (3.1) as a system of nonlinear equations with respect to the coefficients $J^{(n)}_k$, $q^{(n)}_k$. To obtain the zero-order expression for $J^{(n)}_{n+k}$ we use the following lemma, proved in [15].

Lemma 5. Under conditions C1-C3, for small enough $\tilde\varepsilon$, uniformly in $k$ with $|k|\le\tilde\varepsilon n$,

$$\big|q^{(n)}_{n+k}\big|,\ \big|J^{(n)}_{n+k}-1\big| \le C\Big(n^{-1/4}\log^{1/2}n+(|k|/n)^{1/2}\Big). \qquad (3.2)$$

Denote by $\mathcal{J}^{(0)}$ the infinite Jacobi matrix with constant coefficients

$$\mathcal{J}^{(0)}_{k,k+1}=\mathcal{J}^{(0)}_{k+1,k}=1, \qquad \mathcal{J}^{(0)}_{k,k}=0, \qquad (3.3)$$

and for any positive $n^{1/3}\ll N<n$ define an infinite Jacobi matrix $\tilde{\mathcal{J}}^{(N)}$ with the entries

$$\tilde J_k = \begin{cases} J^{(n)}_{n+k}-1, & |k|<N,\\ 0, & \text{otherwise},\end{cases} \qquad \tilde q_k = \begin{cases} q^{(n)}_{n+k}, & |k|<N,\\ 0, & \text{otherwise}.\end{cases} \qquad (3.4)$$

Define a periodic function $\tilde v_t(\lambda)=\tilde v_t(\lambda+4+2d)$ with $\tilde v_t^{(4)}\in L_2[\sigma_d]$ and such that $\tilde v_t(\lambda)=V_t'(\lambda)$ for $|\lambda|\le2+d/2$. Consider the standard Fourier expansion of the function $\tilde v_t$:

$$\tilde v_t(\lambda) = \sum_{j=-\infty}^{\infty}v_{tj}\,e^{ij\kappa\lambda}, \qquad \kappa = \frac{\pi}{2+d}. \qquad (3.5)$$

The first step in the proof of (2.12) is the following lemma.
Lemma 6. If $V$ satisfies conditions C2-C3 and $V^{(4)}\in L_2[\sigma_d]$, then for any $n^{1/3}\ll N<n$ and any $|k|\le N/2$

$$V_t'(\mathcal{J}^{(n)})_{n+k,n+k} = \frac tn\varphi'(\mathcal{J}^{(0)})_{k,k} + \sum_l\mathcal{P}_{k-l}(t)\,\tilde q_l + \tilde r^{(0)}_k + O(\|\tilde{\mathcal{J}}\|/n) + O(N^{-7/2}),$$
$$V_t'(\mathcal{J}^{(n)})_{n+k,n+k+1} = 1-\tilde J_k + \frac tn\varphi'(\mathcal{J}^{(0)})_{k,k+1} + \sum_l\mathcal{P}_{k-l}(t)\,\tilde J_l + \tilde r^{(1)}_k + O(\|\tilde{\mathcal{J}}\|/n) + O(N^{-7/2}), \qquad (3.6)$$

where, for $\nu=0,1$,

$$\tilde r^{(\nu)}_k = \sum_{j=-\infty}^{\infty}v_{tj}(ij\kappa)^2\int_0^1 ds_1\int_0^{1-s_1}ds_2\,\Big(e^{ij\kappa s_1\mathcal{J}^{(0)}}\,\tilde{\mathcal{J}}\,e^{ij\kappa s_2\mathcal{J}^{(0)}}\,\tilde{\mathcal{J}}\,e^{ij\kappa(1-s_1-s_2)(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})}\Big)_{k,k+\nu}, \qquad (3.7)$$

with $v_{tj}$, $\kappa$ defined in (3.5), and

$$\mathcal{P}_l(t) = \frac1\pi\int_{-\pi}^{\pi}\Big(P\big(2\cos(x/2)\big)+t\tilde\varphi\big(2\cos(x/2)\big)/n\Big)e^{ilx}\,dx, \qquad (3.8)$$

with $P$ defined in (1.21) and $\tilde\varphi$ some polynomial with coefficients depending on $\varphi$.

P r o o f  o f  L e m m a 6. By Proposition 1 of [16] it is enough to obtain (3.6) for $\tilde v_t(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})_{n+k,n+k+\nu}$. Using the spectral theorem, we have

$$\tilde v_t(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})_{k,k+\nu} = \sum_{j=-\infty}^{\infty}v_{tj}\Big(e^{ij\kappa(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})}\Big)_{k,k+\nu}.$$

Applying the Duhamel formula twice, we get for $\nu=0,1$

$$\tilde v_t(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})_{k,k+\nu} = \tilde v_t(\mathcal{J}^{(0)})_{k,k+\nu} + \sum_{j=-\infty}^{\infty}v_{tj}(ij\kappa)\int_0^1 ds\,\Big(e^{ij\kappa s\mathcal{J}^{(0)}}\,\tilde{\mathcal{J}}\,e^{ij\kappa(1-s)\mathcal{J}^{(0)}}\Big)_{k,k+\nu} + \tilde r^{(\nu)}_k. \qquad (3.9)$$

To find the first term in (3.9) we use the relation which follows from the coincidence $\tilde v_t(\lambda)=V_t'(\lambda)$, $\lambda\in[-2,2]$, and from (1.5):

$$\tilde v_t(\mathcal{J}^{(0)})_{n+k,n+k+\nu} = \frac1{2\pi}\int_{-\pi}^{\pi}\tilde v_t(2\cos x)\cos(\nu x)\,dx = \frac1{2\pi}\int_{-\pi}^{\pi}\Big(V'(2\cos x)+\frac tn\varphi'(2\cos x)\Big)\cos(\nu x)\,dx$$
$$= \frac1\pi\int_{-\pi}^{\pi}dx\int_{-2}^{2}\frac{\cos(\nu x)\,\rho(\mu)\,d\mu}{2\cos x-\mu} + \frac t{2\pi n}\int_{-\pi}^{\pi}\varphi'(2\cos x)\cos(\nu x)\,dx = \nu + \frac{t\,c^{(\nu)}}{n}. \qquad (3.10)$$

Besides, since by the spectral theorem

$$\Big(e^{ij\kappa s\mathcal{J}^{(0)}}\Big)_{k,l} = \frac1{2\pi}\int_{-\pi}^{\pi}e^{2ij\kappa s\cos x}\,e^{i(k-l)x}\,dx = i^{\,k-l}J_{k-l}(2j\kappa s), \qquad (3.11)$$

where $J_m(z)$ is the Bessel function, and since $V'$ is an odd function, we get for any $l$ and any integer $\nu$

$$\sum_{j=-\infty}^{\infty}v_{0j}(ij\kappa)\int_0^1 ds\,\Big(e^{ij\kappa s\mathcal{J}^{(0)}}\Big)_{k,l}\Big(e^{ij\kappa(1-s)\mathcal{J}^{(0)}}\Big)_{l-\nu,k+1-\nu} = \frac1{(2\pi)^2}\int_{-\pi}^{\pi}\!\!\int_{-\pi}^{\pi}dx\,dy\,\frac{V'(2\cos x)-V'(2\cos y)}{2\cos x-2\cos y}\,\cos\big((k-l)(x-y)+y\big) = 0.$$

Hence, the terms linear in $\tilde J_k$ in the first equation of (3.6) and the terms linear in $\tilde q_k$ in the second equation give only a contribution of order $tn^{-1}\|\tilde{\mathcal{J}}\|$. Besides, we derive from (3.9) that the operator $\mathcal{P}$ from the second line of (3.6) can be represented in the form

$$\mathcal{P}_{k-l}(t) = \delta_{k,l} + \int_0^1 ds\sum_{j=-\infty}^{\infty}v_{tj}(ij\kappa)\Big(e^{ij\kappa s\mathcal{J}^{(0)}}\,E^{(n+l)}\,e^{ij\kappa(1-s)\mathcal{J}^{(0)}}\Big)_{k,k+1},$$

where $E^{(l)}$ denotes the matrix with entries $E^{(l)}_{k,m}=\delta_{k,l}\delta_{m,l+1}+\delta_{k,l+1}\delta_{m,l}$. It is easy to see that $\mathcal{P}(t)$ is a Toeplitz matrix, so its entries can be represented in the form

$$\mathcal{P}_{l,k}(t) = \mathcal{P}_{l-k}(t) = \frac1{2\pi}\int_{-\pi}^{\pi}e^{i(l-k)x}F(x;t)\,dx, \qquad F(x;t) = \sum_l\mathcal{P}_l(t)\,e^{ilx}.$$

Thus, we obtain

$$F(x;t) = 1 + \sum_j(ij\kappa)v_{tj}\int_0^1 ds_1\sum_l\frac1{4\pi^2}\int_{-\pi}^{\pi}\!\!\int_{-\pi}^{\pi}e^{il(-x_1+x_2+x)}\big(1+e^{-i(x_1+x_2)}\big)\exp\big\{2ij\kappa[s_1\cos x_1+(1-s_1)\cos x_2]\big\}\,dx_1\,dx_2$$
$$= 1 + \frac1{2\pi}\int_{-\pi}^{\pi}\frac{\tilde v_t(2\cos x_1)-\tilde v_t(2\cos(x_1-x))}{\cos x_1-\cos(x_1-x)}\,\big(1+\cos(2x_1-x)\big)\,dx_1$$
$$= 1 + \frac1{2\pi}\int_{-\pi}^{\pi}\tilde v_t(2\cos x_1)\Big(\frac{1+\cos(2x_1-x)}{\cos x_1-\cos(x_1-x)}+\frac{1+\cos(2x_1+x)}{\cos x_1-\cos(x_1+x)}\Big)dx_1 = P\big(2\cos(x/2)\big)+P\big(-2\cos(x/2)\big)+t\tilde\varphi\big(2\cos(x/2)\big)/n, \qquad (3.12)$$

where in the last line (3.10) and (1.21) are used. For the linear operator in the first line of (3.6) the calculations are similar. Lemma 6 is proved.

Let us now use (3.6) in (3.1). We obtain for $|k|\le N/2$

$$\sum_l\mathcal{P}_{k-l}(t)\,\tilde q_l = -\frac{t\,c^{(0)}}{n} - \tilde r^{(0)}_k + O(\|\tilde{\mathcal{J}}\|/n) + O(N^{-7/2}),$$
$$\sum_l\mathcal{P}_{k-l}(t)\,\tilde J_l = \frac{k+1}{n} - \frac{t\,c^{(1)}}{n} + \tilde J_k^2 - \tilde r^{(1)}_k + O(\|\tilde{\mathcal{J}}\|/n) + O(N^{-7/2}),$$
where $c^{(0)}$ and $c^{(1)}$ are defined in (3.10). We would like to consider this system of equations as two linear equations in $l_2$. For this purpose we set, for $|k|>N/2$,

$$\tilde r^{(0)}_k = \sum_l\mathcal{P}_{k-l}(t)\,\tilde q_l, \qquad \tilde r^{(1)}_k = \sum_l\mathcal{P}_{k-l}(t)\,\tilde J_l - \frac{k+1}{n} - \tilde J_k^2.$$

It follows from (3.8) that the operator $\mathcal{P}$ has a bounded inverse whose entries can be represented in the form

$$\big(\mathcal{P}^{-1}\big)_{k-l} = \frac1{4\pi}\int_{-\pi}^{\pi}\Big(P\big(2\cos(x/2)\big)+t\tilde\varphi\big(2\cos(x/2)\big)/n\Big)^{-1}e^{i(k-l)x}\,dx. \qquad (3.13)$$

Then

$$\tilde q_l = -\sum_k\mathcal{P}^{-1}_{l-k}(0)\Big(\frac{t\,c^{(0)}}{n}+O(\|\tilde{\mathcal{J}}\|/n)+\tilde r_k+O(N^{-7/2})\Big),$$
$$\tilde J_l = \sum_k\mathcal{P}^{-1}_{l-k}(0)\Big(\frac{k+1}{n}+\tilde J_k^2-\frac{t\,c^{(1)}}{n}+O(\|\tilde{\mathcal{J}}\|/n)-\tilde r_k+O(N^{-7/2})\Big). \qquad (3.14)$$

Moreover, since by assumption $v_0$ has a fourth derivative in $L_2[-2,2]$, so does $P$ (see [10]). Therefore, using the standard bound for the tails of the Fourier expansion of a function $f$ with $f^{(p)}\in L_2[-\pi,\pi]$,

$$\sum_{|j|>M}|f_j| \le M^{-p+1/2}\Big(\sum|f_j|^2 j^{2p}\Big)^{1/2} \le CM^{-p+1/2}, \qquad (3.15)$$

we have for any $M$

$$\sum_{|l|>M}|\mathcal{P}^{-1}_l| \le M^{-7/2}, \qquad \sum_{|l|>M}|l|\,|\mathcal{P}^{-1}_l| \le M^{-5/2}, \qquad \sum_{|l|>M}|l|^2\,|\mathcal{P}^{-1}_l| \le M^{-3/2}. \qquad (3.16)$$

Besides, since $\mathcal{P}^{-1}_l=\mathcal{P}^{-1}_{-l}$, we have

$$\sum_k\mathcal{P}^{-1}_{l-k}\,\frac{k+1}{n} = \frac{l+1}{n}\sum_k\mathcal{P}^{-1}_{l-k} = \frac1{2P(2)}\cdot\frac{l+1}{n}. \qquad (3.17)$$

Using the trivial bound

$$\Big|\Big(e^{ij\kappa s_1\mathcal{J}^{(0)}}\,\tilde{\mathcal{J}}\,e^{ij\kappa s_2\mathcal{J}^{(0)}}\,\tilde{\mathcal{J}}\,e^{ij\kappa(1-s_1-s_2)(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})}\Big)_{k,k+1}\Big| \le \|\tilde{\mathcal{J}}\|^2 \qquad (3.18)$$

and (3.2), we first obtain the rather crude bound

$$|\tilde r^{(\nu)}_k| \le C\big(|k|/n+n^{-1/2}\log^2 n\big), \quad \nu=0,1. \qquad (3.19)$$

This bound combined with (3.14) and (3.15) gives us

$$|\tilde q_k|,\ |\tilde J_k| \le C\big(|k|/n+n^{-1/2}\log^2 n+N^{-7/2}\big). \qquad (3.20)$$

Now we use a bound valid for any Jacobi matrix $\mathcal{J}$ with coefficients $\mathcal{J}_{k,k+1}=\mathcal{J}_{k+1,k}=a_k\in\mathbb{R}$, $|a_k|\le A$: there exist positive constants $C_0,C_1,C_2$, depending on $A$, such that the matrix elements of $e^{it\mathcal{J}}$ satisfy the inequality

$$\big|(e^{it\mathcal{J}})_{k,j}\big| \le C_0\,e^{-C_1|k-j|+C_2t}. \qquad (3.21)$$

This bound follows from the representation

$$(e^{it\mathcal{J}})_{k,j} = -\frac1{2\pi i}\oint_{l}e^{itz}R_{k,j}(z)\,dz,$$

where $R=(\mathcal{J}-z)^{-1}$, and from the Combes-Thomas type bound on the resolvent of the Jacobi matrix (see [14])

$$|R_{k,j}(z)| \le \frac2{|\Im z|}\,e^{-C_1'|\Im z|\,|k-j|} + \frac8{|\Im z|^2}\,e^{-C_1'|\Im z|(M-1)}. \qquad (3.22)$$

Let us choose

$$M = \frac{C_1}{4C_2\kappa}\,n^{1/3}, \qquad (3.23)$$

where $C_1$ and $C_2$ are the constants from (3.21) and $\kappa=\pi(2+d)^{-1}$. Then (3.21) guarantees that for any $l,l'$ with $|l-l'|>n^{1/3}$ and any $j$ with $|j|<M$, $|t|\le1$,

$$\big|(e^{itj\kappa\mathcal{J}^{(0)}})_{l,l'}\big|,\ \big|(e^{itj\kappa(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})})_{l,l'}\big| \le Ce^{\kappa C_2M-C_1|l-l'|} \le Ce^{-C_1n^{1/3}/3}\,e^{-C_1|l-l'|/3}. \qquad (3.24)$$

Now we split the sum in (3.7) into two parts, $|j|<M$ and $|j|\ge M$:

$$\tilde r^{(\nu)}_k = \sum_{j=-\infty}^{\infty}v_{tj}(ij\kappa)^2\sum_{l_1,l_2}\int_0^1 ds_1\int_0^{1-s_1}ds_2\,\Big(e^{ij\kappa s_1\mathcal{J}^{(0)}}\tilde{\mathcal{J}}\Big)_{k,l_1}\Big(e^{ij\kappa s_2\mathcal{J}^{(0)}}\Big)_{l_1,l_2}\Big(\tilde{\mathcal{J}}e^{ij\kappa(1-s_1-s_2)(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})}\Big)_{l_2,k+\nu} = \sum_{|j|<M}+\sum_{|j|\ge M}. \qquad (3.25)$$

Then (3.24) allows us to write

$$\sum_{|j|<M} = \sum_{|j|<M}v_{tj}(ij\kappa)^2\sum_{l_1,l_2=k-[n^{1/3}]}^{k+[n^{1/3}]}\int_0^1 ds_1\int_0^{1-s_1}ds_2\,\Big(e^{ij\kappa s_1\mathcal{J}^{(0)}}\tilde{\mathcal{J}}\Big)_{k,l_1}\Big(e^{ij\kappa s_2\mathcal{J}^{(0)}}\Big)_{l_1,l_2}\Big(\tilde{\mathcal{J}}e^{ij\kappa(1-s_1-s_2)(\mathcal{J}^{(0)}+\tilde{\mathcal{J}})}\Big)_{l_2,k+\nu} + O(e^{-Cn^{1/3}/3}).$$

Hence, using (3.18), we now obtain

$$\Big|\sum_{|j|<M}\Big| \le C\max_{l:\,|l-k|\le n^{1/3}}|\tilde J_l|^2. \qquad (3.26)$$

For $\sum_{|j|\ge M}$ we use (3.18) combined with (3.20) and (3.15) for the function $V'$.
Then we get

$$\Big|\sum_{|j|\ge M}\Big| \le CM^{-3/2}\big((N/n)^2+n^{-1}\log^4 n\big) \le Cn^{-1/2}(N/n)^2 \qquad (3.27)$$

and therefore

$$|\tilde r^{(\nu)}_k| \le C\Big(\big((|k|+n^{1/3})/n\big)^2+n^{-1}\log^4 n+N^{-7/2}+n^{-1/2}(N/n)^2\Big). \qquad (3.28)$$

Using this bound in (3.14), we obtain (2.12), but with the bound for $r^{(\nu)}_k$ now of the form

$$|r^{(\nu)}_k| \le C\big((k/n)^2+n^{-1}\log^4 n+N^{-7/2}+n^{-1/2}(N/n)^2\big). \qquad (3.29)$$

Now, using (2.12) with (3.29) in (3.26) and setting $N=2[n^{1/2}]$, we obtain the bound of (2.12) for $|k|\le n^{1/2}$. Then, setting $N=2[n^{3/4}]$ and again using (2.12) with (3.29) in (3.26), we obtain the bound of (2.12) for $n^{1/2}<|k|\le n^{3/4}$. Finally, setting $N=2[\tilde\varepsilon n]$, we obtain the bound of (2.12) for $n^{3/4}<|k|\le\tilde\varepsilon n$.

P r o o f  o f  L e m m a 4. Relation (2.22) is proved in [16]. To prove (2.23) we need some extra definitions. We denote by $\mathcal{H}=l_2(-\infty,\infty)$ the Hilbert space of all infinite sequences $\{x_i\}_{i=-\infty}^{\infty}$ with the standard scalar product $(\cdot,\cdot)$ and norm $\|\cdot\|$. Let also $\{e_i\}_{i=-\infty}^{\infty}$ be the standard basis in $\mathcal{H}$, and let $I^{(-\infty,n)}$ be the orthogonal projection operator defined by

$$I^{(-\infty,n)}e_i = \begin{cases}e_i, & i<n,\\ 0, & \text{otherwise}.\end{cases} \qquad (3.30)$$

For any infinite matrix $\mathcal{A}=\{\mathcal{A}_{i,j}\}$ we denote

$$\mathcal{A}^{(-\infty,n)} = I^{(-\infty,n)}\mathcal{A}I^{(-\infty,n)}, \qquad \big(\mathcal{A}^{(-\infty,n)}\big)^{-1} = I^{(-\infty,n)}\big(I-I^{(-\infty,n)}+\mathcal{A}^{(-\infty,n)}\big)^{-1}I^{(-\infty,n)}, \qquad (3.31)$$

so that $(\mathcal{A}^{(-\infty,n)})^{-1}$ is a block operator which is inverse to $\mathcal{A}^{(-\infty,n)}$ on the space $I^{(-\infty,n)}\mathcal{H}$ and zero on $(I-I^{(-\infty,n)})\mathcal{H}$.

Besides, we will say that a matrix $\mathcal{A}^{(-\infty,n)}$ is of exponential type if there exist constants $C$ and $c$ such that

$$|\mathcal{A}_{n-j,n-k}| \le Ce^{-c(|j|+|k|)}. \qquad (3.32)$$

Define the infinite Toeplitz matrices $\mathcal{P}$ and $\mathcal{V}^-$ by their entries

$$\mathcal{P}_{j,k} = \frac1{2\pi}\int_{-\pi}^{\pi}e^{i(j-k)x}\,P(2\cos x)\,dx, \qquad \mathcal{V}^-_{j,k} = \frac{\mathrm{sign}(k-j)}{2\pi}\int_{-\pi}^{\pi}e^{i(j-k)x}\,V'(2\cos x)\,dx, \qquad (3.33)$$

and let the entries of $\mathcal{R}$ be defined by (2.14). Then, as proved in [16], for $|j|,|k|\le2\log^2 n$

$$\big(\mathcal{M}^{(0,n)}\big)^{-1}_{n-j,n-k} = \big((\mathcal{R}^{(-\infty,n)})^{-1}\mathcal{D}^{(-\infty,n)}\big)_{n-j,n-k} + b_{n-j}a_{n-k} + O(n^{-1/10}), \qquad (3.34)$$

where

$$a_k = \big((\mathcal{R}^{(-\infty,n)})^{-1}e_{n-1}\big)_k, \qquad b_j = \big((\mathcal{R}^{(-\infty,n)})^{-1}r^*\big)_j,$$

and the vector $r^*\in I^{(0,n)}\mathcal{H}$ has components $r^*_{n-i}=R_i$ ($i=2,4,\dots$) with $R_i$ defined by (2.14). Let us prove that

$$F^{(-\infty,n)} := (\mathcal{R}^{(-\infty,n)})^{-1}\mathcal{D}^{(-\infty,n)} - \mathcal{V}^{-(-\infty,n)} \qquad (3.35)$$

is of exponential type. It is proved in [16] (see Prop. 1) that

$$|\mathcal{R}^{-1}_{n-j,n-k}| \le Ce^{-c|j-k|}, \qquad \big|(\mathcal{R}^{(-\infty,n)})^{-1}_{n-j,n-k}-\mathcal{R}^{-1}_{n-j,n-k}\big| \le C\min\{e^{-c|j|},e^{-c|k|}\} \le Ce^{-c(|j|+|k|)/2}. \qquad (3.36)$$

Hence,

$$|F^{(-\infty,n)}_{n-j,n-k}| \le \Big|\sum_{l\ge1}\mathcal{P}_{n-j,n-l}\mathcal{D}_{n-l,n-k}-\mathcal{V}^-_{n-j,n-k}\Big| + Ce^{-c|j|}\sum_{l\ge1}e^{-c|l|}e^{-c|l-k|} \le \Big|\sum_{l\ge0}\mathcal{P}_{n-j,n-l}\Big|\,\delta_{k,1} + C'e^{-c(|j|+|k|)/2} \le C_1\,e^{-c(|j|+|k|)/2}.$$

Besides, (3.36) implies

$$|a_k| \le Ce^{-c|k|}, \qquad |b_j| \le Ce^{-c|j|}. \qquad (3.37)$$

It is easy to see that

$$-\frac12\sum_k\mathcal{V}^{(n)}_{k,j}\,\epsilon\psi^{(n)}_k = \frac1n\big(\epsilon\psi^{(n)}_j\big)' = \frac1n\psi^{(n)}_j,$$

where we denote $\mathcal{V}_{j,k}=\mathrm{sign}(k-j)V'(\mathcal{J}^{(n)})_{j,k}$, and that for $j,k\le2\log^2 n$

$$\big(\mathcal{M}^{(-\infty,n)}\big)^{-1}_{n-j,n-k} = \mathcal{V}_{n-j,n-k}+O(e^{-c\log^2 n}).$$

Hence, if we denote

$$A^{(n)}_{j,k} = \big(\mathcal{M}^{(-\infty,n)}\big)^{-1}_{n-j,n-k}-\mathcal{V}_{n-j,n-k}, \qquad A_{j,k} = F^{(-\infty,n)}_{n-j,n-k}+b_{n-j}a_{n-k},$$

then $S_n$ is indeed represented in the form (1.22); (2.22) is valid because of (2.12) and (3.34), and (2.23) is valid because we have proved that $F^{(-\infty,n)}$ is of exponential type and because of (3.37).

References

[1] S. Albeverio, L. Pastur, and M. Shcherbina, On Asymptotic Properties of Certain Orthogonal Polynomials. Mat. Fiz., Analiz, Geom. 4 (1997), 263-277.
[2] S. Albeverio, L. Pastur, and M. Shcherbina, On the 1/n Expansion for Some Unitary Invariant Ensembles of Random Matrices. Commun. Math. Phys. 224 (2001), 271-305.
[3] A. Boutet de Monvel, L. Pastur, and M. Shcherbina, On the Statistical Mechanics Approach in the Random Matrix Theory: Integrated Density of States. J. Stat. Phys. 79 (1995), 585-611.
[4] P. Deift, T. Kriecherbauer, K. McLaughlin, S. Venakides, and X. Zhou, Uniform Asymptotics for Polynomials Orthogonal with Respect to Varying Exponential Weights and Applications to Universality Questions in Random Matrix Theory. Commun. Pure Appl. Math. 52 (1999), 1335-1425.
[5] P. Deift, T. Kriecherbauer, K. McLaughlin, S. Venakides, and X. Zhou, Strong Asymptotics of Orthogonal Polynomials with Respect to Exponential Weights. Commun. Pure Appl. Math. 52 (1999), 1491-1552.
[6] P. Deift and D. Gioev, Universality in Random Matrix Theory for Orthogonal and Symplectic Ensembles. Preprint arXiv:math-ph/0411075.
[7] P. Deift and D. Gioev, Universality at the Edge of the Spectrum for Unitary, Orthogonal, and Symplectic Ensembles of Random Matrices. Preprint arXiv:math-ph/0507023.
[8] K. Johansson, On Fluctuations of Eigenvalues of Random Hermitian Matrices. Duke Math. J. 91 (1998), 151-204.
[9] M.L. Mehta, Random Matrices. Acad. Press, New York, 1991.
[10] N.I. Muskhelishvili, Singular Integral Equations. P. Noordhoff, Groningen, 1953.
[11] L. Pastur, Limiting Laws of Linear Eigenvalue Statistics for Unitary Invariant Matrix Models. J. Math. Phys. 47 (2006), 103303.
[12] L. Pastur and M. Shcherbina, Universality of the Local Eigenvalue Statistics for a Class of Unitary Invariant Random Matrix Ensembles. J. Stat. Phys. 86 (1997), 109-147.
[13] L. Pastur and M. Shcherbina, On the Edge Universality of the Local Eigenvalue Statistics of Matrix Models. Mat. Fiz., Analiz, Geom. 10 (2003), 335-365.
[14] M. Reed and B. Simon, Methods of Modern Mathematical Physics, Vol. IV. Acad. Press, New York, 1978.
[15] M. Shcherbina, Double Scaling Limit for Matrix Models with Non Analytic Potentials. Preprint arXiv:cond-mat/0511161.
[16] M. Shcherbina, On Universality for Orthogonal Ensembles of Random Matrices. Preprint arXiv:math-ph/0701046.
[17] A. Stojanovic, Universality in Orthogonal and Symplectic Invariant Matrix Models with Quartic Potentials. Math. Phys., Anal., Geom. 3 (2002), 339-373.
[18] C.A. Tracy and H. Widom, Correlation Functions, Cluster Functions, and Spacing Distributions for Random Matrices. J. Stat. Phys. 92 (1998), 809-835.