A Probablistic Origin for a New Class of Bivariate Polynomials

We present here a probabilistic approach to the generation of new polynomials in two discrete variables. This extends our earlier work on the 'classical' orthogonal polynomials in a previously unexplored direction, resulting in the discovery of an exactly soluble eigenvalue problem corresponding to a bivariate Markov chain with a transition kernel formed by a convolution of simple binomial and trinomial distributions. The solution of the relevant eigenfunction problem, giving the spectral resolution of the kernel, leads to what we believe to be a new class of orthogonal polynomials in two discrete variables. Possibilities for the extension of this approach are discussed.

Bibliographic Details
Date: 2008
Authors: Hoare, M.R., Rahman, M.
Format: Article
Language: English
Published: Institute of Mathematics of the National Academy of Sciences of Ukraine, 2008
Journal title: Symmetry, Integrability and Geometry: Methods and Applications
Online access: http://dspace.nbuv.gov.ua/handle/123456789/148000
Cite as: A Probablistic Origin for a New Class of Bivariate Polynomials / M.R. Hoare, M. Rahman // Symmetry, Integrability and Geometry: Methods and Applications. — 2008. — Vol. 4. — Bibliogr.: 24 refs. — English.

Repositories

Digital Library of Periodicals of National Academy of Sciences of Ukraine
fulltext Symmetry, Integrability and Geometry: Methods and Applications SIGMA 4 (2008), 089, 18 pages A Probablistic Origin for a New Class of Bivariate Polynomials? Michael R. HOARE and Mizan RAHMAN 1 School of Mathematics and Statistics, Carleton University, Ottawa, ON K1S 5B6, Canada E-mail: mrahman@math.carleton.ca Received September 15, 2008, in final form December 15, 2008; Published online December 19, 2008 Original article is available at http://www.emis.de/journals/SIGMA/2008/089/ Abstract. We present here a probabilistic approach to the generation of new polyno- mials in two discrete variables. This extends our earlier work on the ‘classical’ orthogonal polynomials in a previously unexplored direction, resulting in the discovery of an exactly soluble eigenvalue problem corresponding to a bivariate Markov chain with a transition kernel formed by a convolution of simple binomial and trinomial distributions. The solution of the relevant eigenfunction problem, giving the spectral resolution of the kernel, leads to what we believe to be a new class of orthogonal polynomials in two discrete variables. Possibilities for the extension of this approach are discussed. Key words: cumulative Bernoulli trials; multivariate Markov chains; 9−j symbols; transition kernel; Askey–Wilson polynomials; eigenvalue problem; trinomial distribution; Krawtchouk polynomials 2000 Mathematics Subject Classification: 33C45; 60J05 1 Introduction Some thirty years ago we published several papers [6, 10, 11, 12, 15, 16] in which we described a class of statistical models which gave rise to the ‘classical’ orthogonal polynomials and a variety of associated formulas which had previously been known only in the abstract. The key to this was to define simple Markov chains using certain ‘Urn models’ and variants of Bernoulli trials, whose transition kernels provided soluble eigenvalue problems that in turn yielded the polynomials of interest as eigenfunctions. This was carried out for both continuous and discrete variables and led to a scheme in which the quintet of discrete-single-variable orthogonal polynomials (Hahn, Gonin, Krawtchouk, Meixner, Charlier) could be inter-related by suitable limits and substitutions. These in turn underlay the better-known continuous-variable sets (Laguerre, Jacobi). As well as solving the defining eigenvalue problems, we were able to discover results relating, amongst others, to ‘ladder-operators’, the Factorization Method, and dual polynomials, some effectively new, others known in disguised form in the ‘Bateman-project’ era. The success of these methods opens up a more interesting prospect, namely that of actually discovering new polynomial systems through statistical models, or perhaps, less ambitiously, distinguishing specially important cases within the generality of those already known. While the scope for this in the case of the single variable was predictably limited, a whole new opportunity presents itself when functions of more than one variable are considered. While a plethora of special functions of several variables has emerged in the last decades, in various stages of genera- lity, few have been related to tangible structures, such as might have their origin in physical, statistical or combinatorial models. With the experience of the single variable results, there is ?This paper is a contribution to the Special Issue on Dunkl Operators and Related Topics. 
The full collection is available at http://www.emis.de/journals/SIGMA/Dunkl operators.html 1Supported partially by an NSERC Grant #A6197. mailto:mrahman@math.carleton.ca http://www.emis.de/journals/SIGMA/2008/089/ http://www.emis.de/journals/SIGMA/Dunkl_operators.html 2 M.R. Hoare and M. Rahman reason to hope that the eigenvalue problems based on one of these might lead to a specially interesting class of solutions illuminating the hitherto rather unstructured world of multivariate special functions. In this paper we shall describe our first steps in this direction. We consider an extension to two discrete variables of the ‘Cumulative Bernoulli Trials’, first described in 1983 [10], and show that this leads to a soluble eigenvalue problem via a transition-kernel involving the bi- and trinomial distributions. The resulting eigenfunctions prove to be an adaptation of the ‘9 − j symbols’ known in theoretical physics, where they form a central idea in the theory of angular momentum. Once again a mathematical structure proves to pervade the natural world in the most surprising places. Here we shall concentrate on the mathematical content of this result, exploring the probability of a non-trivial explicitly soluble multivariate Markov Chain, possibly for the first time, is a most satisfying result in itself. See [5, 11, 12] for related ideas. The statistical model. The idea behind ‘Cumulative Bernoulli Trials’ (CBTs) is extraordi- narily simple, for all that it seems to have appeared only as late as the 1970s. Whereas ordinary Bernoulli Trials (BTs) represent the outcome of a set of success/failure events with an assigned success probability, Cumulative Bernoulli Trials (CBTs) allow for the possibility that successes in an initial trial can be ‘saved’ while further trials are carried out with the ‘failures’ to increase the number of ‘successes’. The elementary properties of such a scheme are detailed in [10]. The simplest realization of the CBT process for descriptive purposes involves trials by ‘throwing’ a set of dice with a defined success criterion. Consider the case of six ‘Poker dice’ with faces marked as usual from ‘Ace’ to ‘nine’. Then the occurrence of k ‘Ace’ on a single throw has the binomial probability b(k, N ;α) = ( N k ) αk(1− α)N−k, (1.1) where in the example given N = 6 and α = 1/6. In [14] we showed that, if the ‘failed’ dice are re-thrown n times with the successes saved, the probability of i successes altogether is: b(n)(i,N ;α) = b(i,N ; 1− (1−α)n), a result that could be generalized to the case where α is not constant on successive throws. This result is not needed in the present paper, but will serve to motivate the present results. Realization of a bivariate Markov chain. For convenience we shall continue in the language of Bernoulli Trials with dice, though this is not essential to the structure of the problem. Consider thus a set of N dice, each with a given number of faces, n. The faces can be marked in any way, but of these two (or possibly more), for example red and black colours, are designated ‘interesting’ outcomes and are scored as ‘successes’ when all or some of N dice are thrown. The chain can be described. Note that they are anti-correlated, since obviously getting more red reduces the chances of getting many black and vice versa. Thus the Markov chain is a non-trivial extension from the single variable, not simply a multi-dimensional case. The sides that turn up which are not red or black are ‘failures’ and can be called ‘blanks’. 
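Before moving to the bivariate chain, the single-variable cumulative identity quoted above, b^(n)(i, N; alpha) = b(i, N; 1 - (1 - alpha)^n), is straightforward to check numerically. The sketch below is not from the paper: it is a minimal Monte Carlo illustration under our own choices (Python; N = 6, alpha = 1/6, three rounds; the helper names binom_pmf and simulate_cbt are ours).

import math
import random

def binom_pmf(k, N, a):
    # b(k, N; a): probability of exactly k successes in N Bernoulli trials
    return math.comb(N, k) * a**k * (1 - a)**(N - k)

def simulate_cbt(N, a, rounds, trials=200_000):
    # Cumulative Bernoulli Trials: successes are saved, and only the failed
    # dice are re-thrown in each subsequent round.
    counts = [0] * (N + 1)
    for _ in range(trials):
        successes = 0
        for _ in range(rounds):
            remaining = N - successes
            successes += sum(random.random() < a for _ in range(remaining))
        counts[successes] += 1
    return [c / trials for c in counts]

N, a, rounds = 6, 1 / 6, 3   # six dice, success = 'Ace', three cumulative throws
empirical = simulate_cbt(N, a, rounds)
exact = [binom_pmf(i, N, 1 - (1 - a)**rounds) for i in range(N + 1)]
for i in range(N + 1):
    print(f"i={i}: simulated {empirical[i]:.4f}   exact {exact[i]:.4f}")

The two columns agree to Monte Carlo accuracy, reflecting the fact that n cumulative rounds at success probability alpha are equivalent to a single binomial trial at probability 1 - (1 - alpha)^n.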
The probabilities of getting red or black on throwing a single die are assigned as α1, α2 and in the case of the dice are related to n in an obvious way. Example. Standard ‘Poker dice’ n = 6; black = ‘Ace’, red = ‘King’, α1 = α2 = 1/6, N = 5. Possible ranges 0 ≤ i1 ≤ N − i2; 0 ≤ i2 ≤ N − i1. Thus the possible states of the system are: (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (1,0), (1,1), (1,2), (1,3), (1,4), (2,0), (2,1), (2,2), (2,3), (3,0), (3,1), (3,2), (4,0), (4,1), (5,0), 21 in all, constituting the state space. Returning now to a general N,α1, α2, consider an initial (i1, i2) with N − i1 − i2 ‘blanks’. Step 1. The i1 ‘black’ dice are thrown, giving k1 ≤ i1 ‘black successes’, the i2 ‘red’ dice are thrown, giving k2 ≤ i2 ‘red successes’, and these are saved. The probabilities are the respective binomials b(k1, i1;α1) and b(k2, i2;α2). A Probablistic Origin for a New Class of Bivariate Polynomials 3 Step 2. The i1 + i2 − k1 − k2 ‘blanks’ from the previous step are added to the N − i1 − i2 original ‘blanks’ giving N − k1 − k2 ‘blanks’ in all. Step 3. The collected N − k1 − k2 ‘blanks’ are thrown and the ‘red’ and ‘black’ successes recorded as p1, p2 respectively. These can be at different probabilities β1, β2. The outcome is given by the trinomial b2(p1, p2, N − k1 − k2;β1, β2). Evidently: 0 ≤ p1 ≤ N − k1 − k2 − p2; 0 ≤ p2 ≤ N − k1 − k2 − p1. Now redefine p1 = j1 − k1; p2 = j2 − k2 where now k1 ≤ j1 ≤ N − j2; k2 ≤ j2 ≤ N − j1. Step 4. Combine the ‘successes’ j1 − k1 and j2 − k2 with the ‘successes’ held over from Step 1. The ‘score’ of ‘successes’ will now be (j1, j2), with N − j1 − j2 ‘blanks’ and the above process will have led to the transition (i1, i2)→ (j1, j2). The transition probability for this will be the kernel K(j1, j2; i1, i2), which clearly defines the transition kernel of a bivariate Markov chain giving the probability of arriving at state (j1, j2) from state (i1, i2). Step 5. Repeat the whole process Steps 1, to 4, to generate the chain. In the ‘Poker dice’ example the transition matrix will have 212 = 441 elements, many of which will, however, be zero. The sequence of states is Markovian by virtue of the ‘memory’ carried over from the original to final states as guaranteed by the sequence above. The process will be defined by the transition kernel K(j1, j2; i1, i2) the form of which follows by summing all possible pathways, leading to the convolution: K(j1, j2; i1, i2) = min(i1,j1)∑ k1=0 min(i2,j2)∑ k2=0 b(k1, i1;α1) × b(k2, i2;α2)b2(j1 − k1, j2 − k2, N − k1 − k2;β1, β2), (1.2) where b(·, ·; ·) is the simple binomial as before and the trinomial b2(·, ·, ·; ·, ·) is b2(i1, i2, N ; p, q) = pi1qi2 [1− p− q]N−i1−i2 ( N ! i1!i2!(N − i1 − i2)! ) . (1.3) Thus the kernel is explicitly K(j1, j2; i1, i2) = i1!i2!β j1 1 βj2 2 [1− β1 − β2]N−j1−j2 (1− α1)i1(1− α2)i2 (N − j1 − j2)! × min(i1,j1)∑ k1=0 min(i2,j2)∑ k2=0 ( α1 1− α1 )k1 ( α2 1− α2 )k2 × 1 βk1 1 βk2 2 (N − k1 − k2)! (i1 − k1)!(i2 − k2)!(j1 − k1)!(j2 − k2)!k1!k2! . (1.4) Fig. 1 will be helpful in following the steps outlined above. Having established the proba- bilistic background to the kernel K we shall now consider its eigenvalue problem on the discrete state-space (i1, i2) with 0 ≤ ii, i2 ≤ N .∑ j1 ∑ j2 K(i1, i2; j1, j2)Ψm,n(j1, j2) = λm,nΨm,n(i1, i2). (1.5) This is the form we shall take as origin of the eigenvalue problem to be investigated. For the moment we may note the crucial property that K is a function of four parameters 0 ≤ α1, α2, β1, β2 ≤ 1. 
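Since the kernel (1.2)-(1.4) and the eigenvalue problem (1.5) are completely explicit, they can be explored numerically. The following sketch is ours, not the authors': it assembles K from the convolution form (1.2) for the 'Poker dice' example (N = 5, alpha1 = alpha2 = beta1 = beta2 = 1/6, a choice made only for illustration), checks that every row of the 21 x 21 transition matrix sums to 1, and lists the leading eigenvalues.

import math
import numpy as np

def binom(k, n, a):
    # binomial probability b(k, n; a), cf. (1.1)
    return math.comb(n, k) * a**k * (1 - a)**(n - k)

def trinom(p1, p2, n, q1, q2):
    # trinomial probability b2(p1, p2, n; q1, q2), cf. (1.3)
    if p1 < 0 or p2 < 0 or p1 + p2 > n:
        return 0.0
    coeff = math.factorial(n) // (math.factorial(p1) * math.factorial(p2) * math.factorial(n - p1 - p2))
    return coeff * q1**p1 * q2**p2 * (1 - q1 - q2)**(n - p1 - p2)

def kernel(j1, j2, i1, i2, N, a1, a2, beta1, beta2):
    # K(j1, j2; i1, i2) as the convolution (1.2) over the saved successes k1, k2
    total = 0.0
    for k1 in range(min(i1, j1) + 1):
        for k2 in range(min(i2, j2) + 1):
            total += binom(k1, i1, a1) * binom(k2, i2, a2) * trinom(j1 - k1, j2 - k2, N - k1 - k2, beta1, beta2)
    return total

N, a1, a2, beta1, beta2 = 5, 1/6, 1/6, 1/6, 1/6
states = [(i1, i2) for i1 in range(N + 1) for i2 in range(N + 1 - i1)]   # the 21 states listed above
K = np.array([[kernel(j1, j2, i1, i2, N, a1, a2, beta1, beta2) for (j1, j2) in states]
              for (i1, i2) in states])

print("row sums:", K.sum(axis=1).round(12))         # each row sums to 1, so K is a stochastic kernel
eig = np.sort(np.linalg.eigvals(K).real)[::-1]
print("largest eigenvalues:", eig[:6].round(6))     # 1 = lambda_{0,0}; the rest lie in (0, 1)

The spectrum obtained this way can be compared with the closed-form eigenvalues derived in Sections 4-6; for this degenerate choice alpha1 = alpha2 = alpha, Section 6 predicts lambda_{m,n} = alpha^(m+n), i.e. 1, 1/6, 1/6, 1/36, and so on.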
Before embarking on this, however, we shall need to spend some time on an excursion, placing the problem in the context of present-day theory of multivariate orthogonal polynomials. Only then can we tackle the eigenfunction problem for K. 4 M.R. Hoare and M. Rahman Figure 1. Schematic representation of the Markov chain for cumulative binomial and trinomial trials. Note that the dotted lines refer to stochastic outcomes while the solid arrows indicate counts carried forward. 2 Multivariate orthogonal polynomials of a discrete variable The modern era of single-variable classical orthogonal polynomials probably began with Wil- son’s [22, 23] observation that Wilson’s 6 − j symbols which are known as the Racah [18] coefficients, are constant multiples of the 4F3 polynomial: Rn(x) := 4F3 [ −n, n + α + β + 1,−x, x + γ −N α + 1,−N, β + γ + 1 ; 1 ] , (2.1) x, n = 0, 1, . . . , N , and that they have a q-analogue of the form 4Φ3 [ q−n, abqn+1, q−x, cqx−N aq, q−N , bcq ; q, q ] , (2.2) see also Askey and Wilson [2, 3]. For q-analogue notations see, for example, Gasper and Rah- man [9]. An idea that originated in the theory of quantum angular momentum in Physics was taken up by two mathematicians and transformed into a rich new area of research in orthogonal polynomials of a single variable. The q-polynomials (2.2) and their continuous versions have been the object of great interest over the last 25 years in the field of Special Functions. They have found applications in many different fields, including Statistical Mechanics, Quantum Group Theory, Representation Theory, Approximation Theory, and Combinatorics. It is known in the theory of quantum angular momenta that the 6 − j symbols are the coupling coefficients for 3 angular momenta, and that their orthogonality property follows from the unitary nature of the coupling transformations. As a hypergeometric orthogonality this can be written in the form N∑ x=0 fm(x)fn(x) = δm,n, (2.3) where the orthonormal functions fm(x) are defined by fm(x) = (ρ(x)hm)1/2Rm(x) (2.4) A Probablistic Origin for a New Class of Bivariate Polynomials 5 with ρ(x) = γ −N + 2x γ −N (γ −N,α + 1, β + γ + 1,−N)x x!(γ −N − α,−N − β, γ + 1)x , (2.5) and hm = (β + 1, α + 1− γ)N (α + β + 2,−γ)N (α + β + 1 + 2m) α + β + 1 (α + β + 1, α + 1, β + γ + 1,−N)m m!(β + 1, α− γ + 1, N + α + β + 2)m . (2.6) Physicists have also given us the 9−j symbols, the coupling coefficients for 4 angular momenta, including their orthogonality, which can be written ∑ x ∑ y (2x + 1)(2y + 1)(2m + 1)(2n + 1)  a b x c d y m n e   a b x c d y m′ n′ e  = δm,m′δn,n′ , (2.7) where a b x c d y m n e  = ∑ k (2k + 1)W (aecn; km)W (aeby; kx)W (bync; kd), (2.8) W (aeby; kx) = ∆(abx)∆(byk)∆(xye)∆(aek)(2a)!(a+b+e−y)!(a+b+e+y+1)! (a+b−x)!(a−b+x)!(b+y−k)!(b−y+k)!(x−y+e)!(y−x+e)!(a+e−k)!(a−e+k)! × 4F3 [ k − a− e,−k − a− e− 1, x− a− b,−x− a− b− 1 −2a, y − a− b− e,−y − a− b− e− 1 ; 1 ] , (2.9) with the “triangle” function: ∆(abc) = { (a + b− c)!(a− b + c)!(b + c− a)! (a + b + c + 1)! }1/2 , (2.10) where the implicit assumption is that a, b, c satisfy the triangle inequality, and that the expres- sions with the factorial symbols in (2.9) and (2.10) are all nonnegative integers. The W -functions above are just the normalized polynomials fn(x) in different notation. The symmetry properties of the 6− j symbols enable the physicists to transform the W -functions in a number of different ways. 
These relations are, of course, equivalent to the Whipple formula [21] for the terminating and balanced 4F3 functions: 4F3 [ −n, a, b, c d, e, f ; 1 ] = (e− a, f − a)n (e, f)n 4F3 [ −n, a, d− b, d− c, d, 1 + a− e− n, 1 + a− f − n ; 1 ] , (2.11) where the “balancedness” is indicated in the condition a + b + c + 1 = d + e + f + n. (2.12) Applying (2.11) several times on the W -function in (2.8) we reduce it to a form that is convenient for our purposes a b x c d y m n e  = ∆(abx)∆(cdy)∆(xye)∆(acm)∆(bdn)∆(mne) (a + b− x)!(b− a + x)!(x + y − e)!(y − x + e)!(a + c−m)!(c− a + m)! × ( (2b)! )2(2c)!(a + c + n− e)!(a + b + y − e)!(b + c + y − n)! (b + d− n)!(b− d + n)!(c− d + y)!(d− c + y)!(m + n− e)!(n−m + e)! 6 M.R. Hoare and M. Rahman × (a + c + n + e + 1)!(a + b + y + e + 1) (c + n− b− y)! × ∑ k (2k + 1)(e− a + k)!(y − b + k)!(n− c + k)!(−1)b+y−k (b + y − k)!(b + y + k + 1)!(b− y + k)!(a + e− k)!(a− e− k)! × 1 (c−n+k)!(a+e+k+1)!4 F3 [ k − b− y,−k − b− y − 1, x− a− b,−x− a− b− 1 e− a− b− y,−e− a− b− y − 1,−2b ; 1 ] × 4F3 [ k − b− y,−k − b− y − 1, n− b− d, d + n + 1− b −2b, n− b− c− y, n + c− b− y + 1 ; 1 ] × 4F3 [ k − c− n,−k − c− n− 1,m− a− c,−m− a− c− 1 −2c, e− a− c− n,−e− a− c− n− 1 ; 1 ] . (2.13) For a detailed account of the 6− j and 9− j symbols see, for example, Edmonds [7]. In order to identify these 9− j symbols as normalized orthogonal polynomials in 2 discrete variables, we replace a + b− x, c + d− y, a + c−m and b + d− n by x, y, m and n, respectively, and set a + b + c + d− e = N, (2.14) and assume that N takes only nonnegative integer values. Now we rewrite (2.13) in a somewhat more suggestive form: Fm,n(x, y; a, b, c, d) := [ (2a + 2b + 1− 2x)(2a + 2c + 1− 2m)(2b + 2d + 1− 2n)(2c + 2d + 1− 2y) ]1/2 ×  a b a + b− x c d c + d− y a + c−m b + d− n, a + b + c + d−N  = Am,n(x, y; a, b, c, d) N−y∑ `=0 2` + 2y − 2b− 2c− 2d− 1 2y − 2b− 2c− 2d− 1 (2y − 2b− 2c− 2d− 1)` `! × (N + y − 2a− 2b− 2c− 2d− 1,−2b, y − n− 2c, y −N)`(−1)` (2a + 1 + y −N, 2y − 2c− 2d, n + y − 2b− 2d, N + y − 2b− 2c− 2d)` ×4 F3 [ −`, ` + 2y − 2b− 2c− 2d− 1,−x, x− 2a− 2b− 1 −2b, N + y − 2a− 2b− 2c− 2d− 1, y −N ; 1 ] ×4 F3 [ −`, ` + 2y − 2b− 2c− 2d− 1,−n, 2d + 1− n −2b, y − n− 2c, y − n + 1 ; 1 ] ×4 F3 [ n− y − `, n + y + `− 2b− 2c− 2d− 1,−m,m− 2a− 2c− 1 −2c,N + n− 2a− 2b− 2c− 2d− 1, n−N ; 1 ] , (2.15) where Am,n(x, y; a, b, c, d) = (−y)n (−N)n (2b + 2c + 2d−N − y)! (2b + 2c + 2d− 2y)! × {( N m,n )( N x, y ) (2a− x)!(2a−m)!(2b)!(2b)!(2c)!(2d− n)! (2a + y −N)!(2a + y −N)!(2b− x)!(2b− n)!(2c− y)!(2d− y)! + (2a + 2b + 1− 2x) (2a + 2b + 1− x) (2a + 2b + y − x−N)! (2a + 2b− x)! ×(2a + 2c + 1− 2m) (2a + 2c + 1−m) (2a + 2c + n−m−N)! (2a + 2c−m)! (2b + 2d + 1− 2n) (2b + 2d + 1− n) (2b + 2d− n− y)! (2b + 2d− n)! ×(2c + 2d + 1− 2y) (2c + 2d + 1− y) (2c + 2d− 2y)! (2c + 2d− y)! (2c + 2d− 2y)!(2b + 2d− n− y)! (2c + 2d + x− y −N)!(2b + 2d + m− n−N)! A Probablistic Origin for a New Class of Bivariate Polynomials 7 × (2a + 2b + 2c + 2d + 1−N − n)!(2a + 2b + 2c + 2d + 1−N − n)! (2a + 2b + 2c + 2d + 1− x− y −N)!(2a + 2b + 2c + 2d + 1−N −m− n)! }1/2 . (2.16) By repeated use of (2.11) to transform the balanced 4F3 series in (2.15) it is possible to reduce the sum over ` to a very-well-poised 4F3[;−1] series which can be summed by a standard summation formula, see Bailey [5, 4.4(3)], thereby transforming the 9 − j symbol to a triple series. But the resulting expression is not very helpful, mainly because the ensuing 4F3 series are no longer balanced, as the 4F3 series in (2.15) are, and hence are not easily transformable. 
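Whipple's transformation (2.11) for a terminating, balanced 4F3 is the workhorse of the reductions used here, and it is easy to test numerically. The snippet below is our own illustration (the helper names and the particular parameter values, chosen only to satisfy the balance condition (2.12), are not from the paper).

import math

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def f43(a, b, c, d, e, f, n):
    # terminating, unit-argument 4F3[-n, a, b, c; d, e, f; 1]
    return sum(poch(-n, k) * poch(a, k) * poch(b, k) * poch(c, k)
               / (poch(d, k) * poch(e, k) * poch(f, k) * math.factorial(k))
               for k in range(n + 1))

# balance (Saalschuetz) condition (2.12): a + b + c + 1 = d + e + f + n
n, a, b, c, d, e = 4, 0.3, 0.7, 1.1, 0.9, 0.6
f = a + b + c + 1 - n - d - e

lhs = f43(a, b, c, d, e, f, n)
rhs = (poch(e - a, n) * poch(f - a, n) / (poch(e, n) * poch(f, n))
       * f43(a, d - b, d - c, d, 1 + a - e - n, 1 + a - f - n, n))
print(lhs, rhs)   # the two sides of (2.11) agree to rounding error

Repeating the experiment with other balanced parameter sets (avoiding zeros in the lower Pochhammer symbols) gives the same agreement.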
Besides, for the purposes of this paper reduction to a triple series is not at all useful. For some time the commonly held belief was that an orthogonal polynomial in 2 variables should be expressible as a double series, and so should be the 9− j symbols. The results of this paper seem to indicate that it is not necessarily true. We can, in fact show, without going into detailed calculation, that even the weight function in 2 variables, or the normalization constant, need not be a compact expression. For example, when m = 0, n = 0, (2.15) gives us the weight function for the polynomials wx,y(a, b, c, d) = ( N x, y ) (2a− x)!(2a)!(2b)!(2c)!(2d)! (2a + y −N)!(2a + y −N)!(2b− x)!(2c− y)!(2d− y)! × (2a + 2b + 1− 2x) 2a + 2b + 1− x (2a + 2b + y − x−N)! (2a + 2b− x)! (2a + 2c−N)! (2a + 2c)! 2c + 2d + 1− 2y 2c + 2d + 1− y × (2c + 2d− 2y)! (2c + 2d− y)! (2c + 2d− 2y)! (2c + 2d + x− y −N)! (2b + 2d−N)! (2b + 2d)! × (2a + 2b + 2c + 2d + 1−N)! (2a + 2b + 2c + 2d + 1− x− y −N)! × 3F 2 2 [ x + y −N, 2a + 2b + 1 + y − x−N, y − 2c 2a + 1 + y −N, 2y − 2c− 2d ; 1 ] . (2.17) Because of the self-dual character of the polynomials in (2.15) it is clear that the normalization constant is an expression similar to (2.17) with b ↔ c and x, y replaced by m, n, respectively. No matter how closely one tries to identify the 5 parameters in (2.17) with those of (2.5) it would be stretching one’s imagination to think of (2.17) as a 2-dimensional extension of the simple product form that one has in (2.5). In general, the 3F2 series in (2.17) is not summable because it is not balanced (balancedness would require the unnatural condition 2b+2d+1 = N). In view of this reality it is hardly surprising that the 9− j symbols are, at best, expressible as triple series. There are a number of limiting cases in which the 3F2 series can be summed. The case that corresponds to the Hahn polynomials [13] arises when any one of the 2 parameters a, c approaches ∞ (the same, of course, is true for b or d, but to see that one has to transform the above 3F2 series first). For example, if a → ∞, then, by use of Gauss’ summation formula [8] one finds the weight function as( N x, y ) (2b)!(2c)!(2d)!(2d− y)!(2b + 2d−N)!(2c + 2d + x− y −N)! (2b− x)!(2c− y)!(2d + x−N)!2(2b + 2d)!(2c + 2d− y)! 2c + 2d + 1 2c + 2d + 1− y , which is a 2-dimensional extension of the weight function for the Hahn polynomials. Compare this with those, in, say [19] and [20]. The limit case we are interested in in this paper is obtained in the limit t→∞ after setting 2a = p1t, 2b = p2t, 2c = p3t, 2d = p4t. (2.18) We get, as the weight function for the corresponding polynomials, the expression ρx,y(p1, p2, p3, p4) = lim t→∞ wx,y(p1t/2, p2t/2, p3t/2, p4t/2) 8 M.R. Hoare and M. Rahman = ( N x, y ) ηx 1ηy 2(1− η1 − η2)N−x−y, (2.19) which is a trinomial distribution, with η1 = p1p2(p1 + p2 + p3 + p4) (p1 + p2)(p1 + p3)(p2 + p4) , (2.20) η2 = p3p4(p1 + p2 + p3 + p4) (p1 + p3)(p4 + p2)(p4 + p3) . (2.21) Because of the orthonormality of the 9− j symbols, it is guaranteed that 1− η2 − η2 = (p1p4 − p2p3)2 (p1 + p2)(p1 + p3)(p4 + p2)(p4 + p3) , (2.22) which, of course follows also from (2.20) and (2.21). The corresponding orthonormal functions obtained from the limit of (2.15) are Rm,n(x, y; p1, p2, p3, p4) = {( N x, y )( N m,n )}1/2 { p2N−2y−x−m 1 px+n 2 py+m 3 py−n 4 (p1 + p2)y−N (p1 + p3)n−N ×(p2 + p4)N−m−2y(p3 + p4)N−x−2y(p1 + p2 + p3 + p4)m−n+x+y }1/2 ×(p2 + p3 + p4)y−N (−y)n (−N)n × N−y∑ `=0 (y −N)` `! 
{ p2p3(p1 + p2 + p3 + p4) p1(p2 + p4)(p3 + p4) }` F [ −`,−x y −N ; (p1 + p2)(p2 + p3 + p4) p2(p1 + p2 + p3 + p4) ] ×F [ −`,−n y − n + 1 ;−p4(p2 + p3 + p4) p2p3 ] F [ n− y − `,−m n−N ; (p1 + p3)(p2 + p3 + p4) p3(p1 + p2 + p3 + p4) ] . (2.23) In Section 3 we will show that Rm,n(x, y; p1, p2, p3, p4) = { b2(x, y;N ; η1, η2)b2(m,n;N ; η̄1, η̄2)(1− η1 − η2)−N }1/2 Pm,n(x, y), (2.24) where b2(x, y;N ; η1, η2) = ( N x, y ) ηx 1ηy 2(1− η1 − η2)N−x−y, (2.25) and η̄1 = p1p3(p1 + p2 + p3 + p4) (p1 + p2)(p1 + p3)(p3 + p4) , (2.26) η̄2 = p2p4(p1 + p2 + p3 + p4) (p1 + p2)(p2 + p4)(p4 + p3) , (2.27) (it is easily verified that 1− η̄1 − η̄2 = 1− η1 − η2), Pm,n(x, y) = ∑ i ∑ j ∑ k ∑ ` (−m)i+j(−n)k+`(−x)i+k(−y)j+` i!j!k!`!(−N)i+j+k+` tiujvkw`, (2.28) A Probablistic Origin for a New Class of Bivariate Polynomials 9 with t = (p1 + p2)(p1 + p3) p1(p1 + p2 + p3 + p4) , u = (p1 + p3)(p4 + p3) p3(p1 + p2 + p3 + p4) , (2.29) v = (p1 + p2)(p2 + p4) p2(p1 + p2 + p3 + p4) , w = (p4 + p2)(p4 + p3) p4(p1 + p2 + p3 + p4) . One can think of the parameters η̄1, η̄2 as dual to η1, η2, and the polynomials Pm,n(x, y) as self-dual. The normalization constant in m, n is (1 − η̄1 − η̄2)−N b2(m,n;N ; η̄1, η̄2), and that in x, y is b2(x, y;N ; η1, η2) (1− η1 − η2)−N . The polynomials are very different from those obtainable as limits of (3.9) or (3.10) of Rahman [19], or from the limits of the 2-dimensional case of Tratnik’s [20] polynomials. Given η1, η2 and the trinomial distribution (2.25) it would be almost impossible to construct the set of polynomials Pm,n(x, y) that has 4 parameters. After having established (2.24) in Section 3 we shall proceed in Section 4 to show that the eigenfunctions Ψm,n of (1.5) are precisely the polynomials Pm,n and the eigenvalues λm,n are certain nonlinear functions of the 4 parameters α1, α2, β1, β2 introduced in Section 1. These functions can be determined from symmetry considerations by requiring that for balance at equilibrium (when λ0,0 = 1), one must have Ψ0,0(j1, j2)K(i1, i2; j1, j2) = Ψ0,0(i1, i2)K(j1, j2; i1, i2), (2.30) where K is defined in (1.4), and hence Ψ0,0(i1, i2) must be of the form Ψ0,0(i1, i2) = ( N ii, i2 ) ηii 1 ηi2 2 (1− η1 − η2)N−i1−i2 , (2.31) with η1(1− α1) β1 = η2(1− α2) β2 = 1− η1 − η2 1− β1 − β2 , (2.32) which leads to η1(1− α1) β1 = η2(1− α2) β2 = 1− η1 − η2 1− β1 − β2 = D−1, (2.33) D = 1 + α1β1 1− α1 + α2β2 1− α2 . (2.34) 3 A bivariate extension of Krawtchouk polynomials In order to reduce (2.23) to (2.24)–(2.29) we shall make frequent use of 3 well-known transfor- mation formulas: F [ a, b c ;x ] = (1− x)−aF [ a, c− b c ; x x− 1 ] = (1− x)c−a−bF [ c− a, c− b c ;x ] , (3.1) F [ −n, b c ;x ] = (c− b)n (c)n F [ −n, b 1 + b− c− n ; 1− x ] , n = 0, 1, 2, . . . , (3.2) F1(a; b, c; d;x, y) = (1− y)−aF1 ( a; b, d− b− c; d; y − x y − 1 , y y − 1 ) = (1− x)−aF1 ( a; d− b− c, c; d, x x− 1 , x− y x− 1 ) , (3.3) 10 M.R. Hoare and M. Rahman where F1(a; b, c; d;x, y) = ∑ i ∑ j (a)i+j(b)i(c)j i!j!(d)i+j xiyj (3.4) is an Appell function, see, for example, Erde’lyi et al. [8]. First, by (3.1) F [ −m,n− y − ` n−N ; (p1 + p3)(p2 + p3 + p4) p3(p1 + p2 + p3 + p4) ] = [ − p1(p2 + p4) p3(p1 + p2 + p3 + p4) ]m F [ −m, ` + y −N n−N ; (p1 + p3)(p2 + p3 + p4) p1(p2 + p4) ] , (3.5) and F [ −x,−` y −N ; (p1 + p2)(p2 + p3 + p4) p2(p1 + p2 + p3 + p4) ] = [ − p1(p3 + p4) p2(p1 + p2 + p3 + p4) ]x F [ −x, ` + y −N y −N ; (p1 + p2)(p2 + p3 + p4) p1(p3 + p4) ] . 
(3.6) Since∑ ` (i + y −N, j + y −N)` `!(y −N)` (−`)k { p2p3(p1 + p2 + p3 + p4) p1(p2 + p4)(p3 + p4) }` = (i + y −N, j + y −N)k (y −N)k { −p2p3(p1 + p2 + p3 + p4) p1(p2 + p4)(p3 + p4) }k × F [ i + k + y −N, j + k + y −N k + y −N ; p2p3(p1 + p2 + p3 + p4) p1(p2 + p4)(p3 + p4) ] = (i + y −N, j + y −N)k (y −N)k { p2p3(p1 + p2 + p3 + p4) (p2 + p3 + p4)(p2p3 − p1p4) }k (3.7) × { (p2 + p3 + p4)(p1p4 − p2p3) p1(p2 + p4)(p3 + p4) }N−y−i−j F [ −i,−j k + y −N ; p2p3(p1 + p2 + p3 + p4) p1(p2 + p4)(p3 + p4) ] , we have as the contribution from (3.6) (y −N)k ∑ i (−x, k + y −N)i i!(y −N)i (−i)` [ (p1 + p2)(p2 + p4) p1p4 − p2p3 ]i = (−x)`(y −N)k+` (y −N)` [ −(p1 + p2)(p2 + p4) p1p4 − p2p3 ]` × F [ `− x, k + ` + y −N ` + y −N ; (p1 + p2)(p2 + p4) p1p4 − p2p3 ] = [ −p2(p1 + p2 + p3 + p4) p1p4 − p2p3 ]x [ (p1 + p2)(p2 + p4) p2(p1 + p2 + p3 + p4) ]` (−x)`(y −N)k+` (y −N)` × F [ −k, `− x ` + y −N ; (p1 + p2)(p2 + p4) p2(p1 + p2 + p3 + p4) ] , (3.8) and that from (3.5): ∑ j (−m)j(y −N)j+k j!(n−N)j (−j)` { p1 + p3)(p3 + p4) p1p4 − p2p3 }j A Probablistic Origin for a New Class of Bivariate Polynomials 11 = { −p3(p1 + p2 + p3 + p4) p1p4 − p2p3 }m{ (p1 + p3)(p3 + p4) p3(p1 + p2 + p3 + p4) }` (−m)`(y −N)k+` (n−N)` × F [ `−m,n− y − k ` + n−N ; (p1 + p3)(p3 + p4) p3(p1 + p2 + p3 + p4) ] . (3.9) Collecting the expressions on the right sides of (3.7)–(3.9) we find that the series part of (2.33) equals{ p1(p3 + p4) p1p4 − p2p3 }x{(p1p4 − p2p3)(p2 + p3 + p4) p1(p1 + p4)(p3 + p4) }N−y { p1(p2 + p4) p1p4 − p2p3 }m × ∑ i ∑ j ∑ ` (−m)j+`(−x)i+` i!j!`!(y −N)i+`(n−N)j+` { (p1 + p2)(p2 + p4) p2(p1 + p2 + p3 + p4) }i × { (p1 + p3)(p3 + p4) p3(p1 + p2 + p3 + p4) }j { (p1 + p2)(p1 + p3) p1(p1 + p2 + p3 + p4 }` Si,j,`, (3.10) where Si,j,` = ∑ k (−n)k(y −N)k+`(n− y − k)j(−k)i k!(y − n + 1)k { p4(p1 + p2 + p3 + p4) p1p4 − p2p3 }k = (−1)i+j { p4(p1 + p2 + p3 + p4) p1p4 − p2p3 }i (−n)i(y −N)i+` (y − n + 1)i−j × F [ i− n, i + ` + y −N y − n + 1 + i− j ; p4(p1 + p2 + p3 + p4) p1p4 − p2p3 ] = (−1)i+j { p4(p1 + p2 + p3 + p4) p1p4 − p2p3 }` (−n)i(y −N)i+` (y − n + 1)n−j (N + 1− n− j − `)n−i × F [ i− n, i + ` + y −N i + j + `−N ;−(p2 + p4)(p3 + p4) p1p4 − p2p3 ] by(3.2) = [ p4(p1 + p2 + p3 + p4) p1p4 − p2p3 ]n (−N)n(−n)i(y −N)i+`(n−N)j+`(−y)j (−y)n(−N)i+j+` × F [ i− n, j − y i + j + `−N ; (p2 + p4)(p3 + p4) p4(p1 + p2 + p3 + p4) ] . (3.11) Using (3.10) and (3.11) in (2.23), and simplifying the coefficients, we finally obtain (2.24)–(2.29). Note that, by (3.3), Pm,n(x, y) = ∑ i ∑ j (−m)i+j(−x)i(−y)j i!j!(−N)i+j tiujF1(−n; i− x, j − y; i + j −N ; v, w) = (1− v)n ∑ i ∑ j (−m)i+j(−x)i(−y)j i!j!(−N)i+j tiuj × F1 ( −n;x + y −N, j − y; i + j −N ; v v − 1 , v − w v − 1 ) = (1− w)n ∑ i ∑ j (−m)i+j(−x)i(−y)j i!j!(−N)i+j tiuj × F1 ( −n; i− x, x + y −N ; i + j −N ; w − v w − 1 , w w − 1 ) (3.12) which will be very useful in the next section. 12 M.R. Hoare and M. Rahman For notational simplicity let us adopt the symbol F (2) 1 (a, a′; b, c; d;λ, µ, ν, ρ) for the iterate of F1, i.e. F (2) 1 (a, a′; b, c; d;λ, µ, ν, ρ) := ∑ i ∑ j ∑ k ∑ ` (a)i+j(a′)k+`(b)i+k(c)j+` i!j!k!`!(d)i+j+k+` λiµjνkρ`. (3.13) 4 Eigenvalues and eigenfunctions of K(i1, i2; j1, j2) From (1.4) and (2.25) it follows that K(i1, i2; j1, j2) = b2(i1, i2;N ;β1, β2)(1− α1)j1(1− α2)j2 × F3 ( −i1,−i2,−j1,−j2;−N ; α1 β1(α1 − 1) , α2 β2(α2 − 1) ) , (4.1) where the Appell function F3 is defined by F3(a, b, a′, b′; c;x, y) = ∑ r ∑ s (a, a′)r(b, b′)s r!s!(c)r+s xrys. 
(4.2) Since our objective is to show that Pm,n(j1, j2) are the eigenfunctions of K(i1, i2; j1, j2) for certain choices of the parameters t, u, v and w, it sufficies to compute the sum Qm,n(i1, i2) := ∑ j1 ∑ j2 b2(j1, j2;N ; η1, η2)(1− α1)j1(1− α2)j2 (4.3) × ∑ r ∑ s (−i1,−j1)r(−i2,−j2)s r!s!(−N)r+s ( α1 β1(α1 − 1) )r( α2 β2(α2 − 1) )s Pm,n(j1, j2). We shall do the j1-sum first. To facilitate the summing process we use the first of the two formulas in (3.12) to obtain Pm,n(j1, j2) = (1− t)m(1− v)n ∑ i ∑ j ∑ k ∑ ` (−m)i+j(−n)k+` i!j!k!`! × (j1 + j2 −N)i+k(−j2)j+` (−N)i+j+k+` ( t t− 1 )i( t− u t− 1 )j( v v − 1 )k(v − w v − 1 )` . (4.4) Since∑ j1 ( N j1, j2 ) (η1(1− α1)) j1 (1− η1 − η2) N−j1−j2 (−j1)r(j1 + j2 −N)i+k = ( N j2 ) (η1(1− α1)) r (1− η1 − η2)i+k(1− α1η1 − η2)N−j2−r−i−k(j2 −N)r+i+k, (4.5) the r.h.s. of (4.3) can be written as (1− t)m(1− v)n ∑ j2 ( N j2 ) (η2(1− α2))j2(1− α1η1 − η2)N−j2 × ∑ r ∑ s (−i1, j2 −N)r(−i1,−j2)s r!s!(−N)r+s ( − α1η1 β1(1− α1η1 − η2) )r A Probablistic Origin for a New Class of Bivariate Polynomials 13 × ( α2 β2(α2 − 1) )s F (2) 1 ( −m,−n; r + j2 −N,−j2;−N ; t(1− η1 − η2) (t− 1)(1− α1η1 − η2) , ( t− u t− 1 ) , ( v(1− η1 − η2) (v − 1)(1− α1η1 − η2) ) , ( v − w v − 1 )) , (4.6) by using the transformation formulas F (2) 1 (a, a′; b, c; d;λ, µ, ν, ρ) = (1− λ)−a(1− ν)−a′F (2) 1 ( a, a′; d− b− c, c; d; λ λ− 1 , λ− µ λ− 1 , ν ν − 1 , ν − ρ ν − 1 ) (4.7) = (1− µ)−a(1− ρ)−a′F (2) 1 ( a, a′; b, d− b− c; d; µ− λ µ− 1 , µ µ− 1 , ρ− ν ρ− 1 , ρ ρ− 1 ) , (4.8) which are direct consequences of (3.3). By (4.8), the F (2) 1 series in (4.6) transforms to( 1− u 1− t )m(1− w 1− v )n F (2) 1 (−m,−n; r + j2 −N,−r;−N ; t′, u′, v′, w′), (4.9) where t′ = (t− u)(1− α1η1 − η2)− t(1− η1 − η2) (1− u)(1− α1η1 − η2) = tβ1 − u(1− β2) (1− u)(1− β2) , (4.10) u′ = t− u 1− u , (4.11) v′ = (v − w)(1− α1η1 − η2)− v(1− η1 − η2) (1− w)(1− α1η1 − η2) = vβ1 − w(1− β2) (1− w)(1− β2) , (4.12) w′ = v − w 1− w , (4.13) where the expressions for t′, v′ in terms of β1, β2 are obtained from (2.32) and (2.33). With (4.9) we may now carry out the sum over j2. We have∑ j2 ( N j2 ) (η2(1− α2)) j2 (1− α1η1 − η2)N−j2(−j2)s(j2 −N)r+i+k = ( η1(1− α2) 1− α1η1 − α2η2 )s( 1− α1η1 − η2 1− α1η1 − α2η2 )r+i+k (1− α1η1 − α2η2)N (−N)r+s+i+k = βs 2(1− β2)r+i+kD−N (−N)r+s+i+k. (4.14) So the expression in (4.6) can be written as (1− u)m(1− w)n(1− α1η1 − α2η2)N ∑ r ∑ s (−i1)r(−i2)s r!s! ( α1 α1 − 1 )r ( α2 α2 − 1 )s (4.15) × F (2) 1 ( −m,−n; r + s−N,−r;−N ; tβ1 − u(1− β2) (1− u)(1− β2) , t− u 1− u , vβ1 − w(1− β2) (1− w)(1− β2) , v − w 1− w ) . We are just one transformation away from the form where we can do the r and s summations. First, we interchange the first and third parameters, then the second and the fourth, so that the parameters r + s − N and −r are also interchanged (although this step is not necessary), followed by an application of (4.8). The end result is that (4.15) transform to (1− α1η1 − α2η2)N { 1− α1η1 − α2η2 − η1t(1− α1)− η2u(1− α2) 1− α1η1 − α2η2 }m 14 M.R. Hoare and M. Rahman × { 1− α1η1 − α2η2 − η1v(1− α1)− η2w(1− α2) 1− α1η1 − α2η2 }n (4.16) × ∑ r ∑ s (−i1)r(−i1)s r!s! ( α1 α1 − 1 )r ( α2 α2 − 1 )s F (2) 1 (−m,−n;−r,−s;−N ;λ, µ, ν, ρ), with λ = (1− β1)t− β2u 1− β1t− β2u , (4.17) µ = u(1− β2)− β1t 1− β1t− β2u , (4.18) ν = η2(α2 − 1)(v − w)− v(1− η1 − η2) vη1(1− α1) + wη2(1− α2)− (1− α1η1 − α2η2) , (4.19) ρ = (1− α1η1 − α2η2)(v − w)− v(1− η1 − η2) vη1(1− α1) + wη2(1− α2)− (1− α1η1 − α2η2) . (4.20) Now,∑ r (−i1)r r! ( α1 α1 − 1 )r (−r)i+k = αi+k 1 (1− α1)−i1(−i1)i+k, and ∑ s (−i2)s s! 
( α2 α2 − 1 )s (−s)j+` = αj+` 2 (1− α2)−i2(−i2)j+`, so that the double sum in (4.16) reduces to (1− α1)−i1(1− α2)−i2F (2) 1 (−m,−n;−i1,−i2;−N ;α1λ, α2µ, α1ν, α2ρ). (4.21) Note that b2(i1, i2;N ;β1, β2)(1− α1)−i1(1− α2)−i2(1− α1η1 − α2η2)N = ( N i1, i2 ) βi1 1 βi2 2 (1− β1 − β2)N−i1−i2 ( η1 β1(1− α1η − α2η2) )i1 × ( η2 β2(1− α1η1 − α2η2) )i2 (1− α1η1 − α2η2)N = ( N i1, i2 ) ηi1 1 ηi2 2 (1− η1 − η2)N−i1−i2 = b2(i1, i2;N ; η1, η2) by (2.32)–(2.34). (4.22) So the polynomial Pm,n(i1, i2) is indeed an eigenfunction of K(i1, i2; j1, j2) with the corres- ponding eigenvalue λm,n = { 1− α1η1 − α2η2 − η1t(1− α1)− η2u(1− α2) 1− α1η1 − α2η2 }m × { 1− α1η1 − α2η2 − η1v(1− α1)− η2w(1− α2) 1− α1η1 − α2η2 }n = (1− β1t− β2u)m(1− β1v − β2w)n, (4.23) provided we can find solutions for the parameters t, u, v, w in terms of α1, α2, β1, β2 such that t = λα1, u = µα2, v = να1, w = ρα2. (4.24) This is, of course, elementary algebra, which will be carried out in the next two sections. A Probablistic Origin for a New Class of Bivariate Polynomials 15 5 Eigenvalues and eigenfunctions: the nondegenerate case α1 6= α2 The relations between the parameters, i.e., (4.24), can be expressed in the form t = α1 t(1− β1)− β2u 1− β1t− β2u , (5.1) u = α2 u(1− β1)− β1t 1− β1t− β2u , (5.2) v = α1 v(1− β1)− β2u 1− β1v − β2w , (5.3) w = α2 w(1− β2)− β1v 1− β1v − β2w . (5.4) From (5.1) 1− β1t− β2u = α1(1− t) α1 − t . (5.5) which, on substitution in (5.2) gives β2u = −β1t + t(1− α1) t− α1 . (5.6) Combination of (5.6) and (5.1) or (5.2) gives the quadratic relation for t: β1(α1 − α2)(t− α1)2 − (1− α1)(α1 − α2 + α1β1 + α2β2)(t− α1) + α1(1− α1)2 = 0. (5.7) If α1 > α2, then both roots are real and positive with the discriminant ∆ given by ∆ = (α1 − α2 + α1β1 + α2β2)2 − 4α1β1(α1 − α2) = (α1 − α2 + α2β2 − α1β1)2 + 4α1α2β1β2, (5.8) which is > 0 since the parameters, being probabilities, are necessarily in (0, 1). The roots are t− α1 = ( α1 − α2 + α1β1 + α2β2 ±∆1/2 ) (1− α1) 2(α1 − α2)β1 . (5.9) From (5.1) and (5.3) it is clear that v satisfies the same equation as (5.7), so we may take t−α1 and v − α1, having one of the signs indicated in (5.9) (it is immaterial which sign we assign to each). However, if α1 < α2, the equation is u− α2 = ( α2 − α1 + α1β1 + α2β2 ±∆1/2 ) (1− α2) 2β2(α2 − α1) . (5.10) Since K is a transition probability it is necessarily positive and less than 1, and hence 0 < λm,n < 1. This implies that 1 − β1t − β2u and 1 − β1v − β2w must both be in (0, 1). From (5.1)–(5.4) it is clear that λm,n = ( α1 ( 1− t α1 − t ))m( α2 ( 1− w α2 − w ))n . (5.11) Obviously we have to choose values of t and w such that (i) either t > 1 or t < α1, (ii) either w > 1 or w < α2. (5.12) 16 M.R. Hoare and M. Rahman A straightforward calculation, however, shows that in both cases the end-result is the same, i.e. α1 1− t α1 − t = α2 1− w α2 − w = 1 2 { α1(1− β1) + α2(1− β2) + ∆1/2 } , (5.13) irrespective of whether or not α1 − α2 is +ve or −ve. So the eigenvalues are λm,n = { α1(1− β1) + α2(1− β2) + ∆1/2 2 }m+n . (5.14) 6 The degenerate case α1 = α2 When α1 = α2 = α, say, we subtract (5.2) from (5.1) to get t− u = α t− u 1− β1t− β2u . (6.1) So, either t = u or 1 − β1t − β2u = α. Since we must assume that 0 < α < 1, the second alternative is impossible as it would require α = 1, which can be seen from either (5.1) or (5.2). So we must conclude that t = u, (6.2) in which case it follows that ∆1/2 = α(β1 + β2), so from (5.14), we have λm,n = αm+n. (6.3) Similarly it follows that v = w, and that, ultimately t = u = v = w. 
(6.4) The eigenfunction in this degenerate case reduces to F (2) 1 (−m,−n;−i1,−i2;−N ; t, t, t, t) = ∑ r ∑ s (−m)r+s(−ii)r(−i2)s r!s!(−N)r+s tr+sF1(−n; r − i1, s− i2; r + s−N ; t, t) = ∑ r ∑ s (−m)r+s(−i1)r(−i2)s r!s!(−N)r+s tr+s 2F1(−n, r + s− i1 − i2; r + s−N ; t) by [4, 9.5(1)] = (1− t)n ∑ r ∑ s (−m)r+s(−i1)r(−i2)s r!s!(−N)r+s tr+s × 2F1 ( −n, i1 + i2 −N ; r + s−N ; t t− 1 ) by (3.1) = (1− t)n ∑ k (−n, i1 + i2 −N)k k!(−N)k ( t t− 1 )k F1(−m;−i1,−i2; k −N ; t, t) = (1− t)m+nF1 ( i1 + i2 −N ;−m,−n;−N ; t t− 1 , t t− 1 ) = (1− t)m+n 2F1 ( −m− n, i1 + i2 −N ;−N ; t t− 1 ) = 2F1(−m− n,−i1 − i2;−N ; t). (6.5) A Probablistic Origin for a New Class of Bivariate Polynomials 17 It also follows from (5.7) that t = 1− α(1− β1 − β2) β1 + β2 . (6.6) So, in this special case, the eigenfunction is essentially a single-variable Krawtchouk polynomial of degree m + n in i1 + i2. 7 Concluding remarks and acknowledgements It is nearly 3 years since the first draft of this paper was prepared, then sent away for private circulation to professional friends and colleagues. Since then it was brought to our attention that a number of publications exist in the literature of both orthogonal polynomials and probability- statistics, that are closely related to what we have done in this paper. The earliest among them was, as far as we know, the 1971 paper of R.C. Griffiths [10], see also [11] and [12], where he considers a (persumably more general) class of transition density expansions of the so-called Lancaster type. He used probability generating functions to characterize bivariable distributions with identical multinominal marginals, with the transition density having orthogonal polyno- mials as eigenfunctions, quite akin to what we have attempted to do here. One might argue, as one of the referees of our paper has pointed out, that Griffiths’ paper says more about the probabilistic nature of the model than ours do. However, our principal motivation in delving into the Quantum Angular Momentum literature is to get a handle on the problem of how to fit the four probability parameters into a trinomial-distribution-based cumulative Bernoulli model that has only two independent parameters. 9− j symbols of Angular Momentum theory provided us with a 4-parameter representation of the two probabilities in the trinomial distribution. Fortunately, one of us (MR) had the good fortune of meeting Dr. Griffiths in an Orthogonal Polynomial meeting in France in 2007, and had the benefit of a fruitful discussion on what we had done in our paper, and what he had done much earlier. The authors gratefully acknowledge the help he provided us with reprints of his papers. We are also grateful to the first referee for pointing out the importance of discussing Dr. Griffiths’ work in this paper, which we had intended to do in a subsequent publication. There was yet another eye-opening experience for MR when he met Dr. Zhedanov of Donetsk Institute for Physics and Technology in Ukraine and talked about the present paper. It turned out that Dr. Zhedanov [24] also found a very similar 2-variable Krawtchouk polynomial by considering the oscillator algebra of the 9− j symbols. However, his polynomials are not exactly the same as ours, but a limiting case of. It is obvious that he would have found the same polynomials as we have, had he chosen to work with the full SU(2) algebra of the 9− j symbols. In the latest SIDE8 meeting in Montreal, June’08, Professor M. 
Noumi pointed out to MR that a multidimensional version of our 2-variable Krawtchouk polynomial was found by Aomoto and Gelfand [1], and later by Mizukawa [17], who gave a zonal spherical functions proof of the orthogonality of the polynomials. We owe our gratitude to Dr. Noumi as well. References [1] Aomoto K., Kita M., Theory of hypergeometric functions, Springer, Tokyo, 1994 (in Japanese). [2] Askey R., Wilson J.A., A set of orthogonal polynomials that generalize the Racah coefficients or 6 − j symbols, SIAM J. Math. Anal. 10 (1979), 1008–1016. [3] Askey R., Wilson J.A., A set of hypergeometric orthogonal polynomials, SIAM J. Math. Anal. 13 (1982), 651–655. [4] Askey R., Wilson J.A., Some basic hypergeometric polynomials that generalize Jacobi polynomials, Mem. Amer. Math. Soc. 319 (1985), 1–55. 18 M.R. Hoare and M. Rahman [5] Bailey W.N., Generalized hypergeometric series, Cambridge University Press, Cambridge, 1935 (reprinted by Stechert-Hafner, New York, 1964). [6] Cooper R.D., Hoare M.R., Rahman M., Stochastic processes and special functions: on the probabilistic origin of some positive kernels associated with classical orthogonal polynomials, J. Math. Anal. Appl. 61 (1977), 262–291. [7] Edmonds A.R., Angular momentum in quantum mechanics, 2nd ed., Princeton University Press, Princeton, New Jersey, 1960. [8] Erdélyi et. al., Higher transcendental functions, Vol. I, McGraw-Hill, New York, 1953. [9] Gasper G., Rahman M., Basic hypergeometric series, 2nd ed., Encyclopedia of Mathematics and Its Appli- cations, Vol. 96, Cambridge University Press, Cambridge, 2004. [10] Griffiths R.C., Orthogonal polynomials on the multinomial distribution, Austral. J. Statist. 13 (1971), 27–35, Corregenda, Austral. J. Statist. 14 (1972), 270. [11] Griffiths R.C., Orthogonal polynomials on the negative multinomial distribution, J. Multivariate Anal. 5 (1975), 271–277. [12] Griffiths R.C., Orthogonal polynomials on the multinomial, Notes: Version 3.0, 04/09/2006, unpublished (Private communication). [13] Hahn W., Über Orthogonalpolynome, die q-Differenzengleichungen genügen, Math. Nachr. 2 (1949), 4–34. [14] Hoare M.R., Rahman M., Cumulative Bernoulli trials and Krawtchouk processes, Stochastic Process. Appl. 16 (1983), 113–139. [15] Hoare M.R., Rahman M., Cumulative hypergeometric processes: a statistical role for the nFn−1 functions, J. Math. Anal. Appl. 135 (1988), 615–626. [16] Hoare M.R., Rahman M., Distributive processes in discrete systems, Phys. A 97 (1979), 1–41. [17] Mizukawa H., Zonal spherical functions on the complex reflection groups and (n+1, m+1)-hypergeometric functions, Adv. Math. 184 (2004), 1–17. [18] Racah G., Theory of complex spectra. II, Phys. Rev. 62 (1942), 438–462. [19] Rahman M., Discrete orthogonal systems corresponding to Dirichlet distribution, Utilitas Math. 20 (1981), 261–272. [20] Tratnik M.V., Some multivariable orthogonal polynomials of the Askey tableau-discrete families, J. Math. Phys. 32 (1991), 2337–2342. [21] Whipple F.J.W., Well-poised series and other generalized hypergeometric series, Proc. Lond. Math. Soc. (2) 25 (1926), 525–544. [22] Wilson J.A., Hypergeometric series, recurrence relations and some new orthogonal functions, Thesis, Univ. of Wisconsin, Madison, 1978. [23] Wilson J.A., Some hypergeometric orthogonal polynomials, SIAM J. Math. Anal. 11 (1980), 690–701. [24] Zhedanov A., 9j-symbols of the oscillator algebra and Krawtchouk polynomials in two variables, J. Phys. A: Math. Gen. 30 (1997), 8337–8353. 