|
Characters of classical groups, Schur-type functions and discrete splines
G. I. Olshanski$^{a,b,c}$
$^a$ Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia
$^b$ Skolkovo Institute of Science and Technology, Moscow, Russia
$^c$ National Research University Higher School of Economics, Moscow, Russia
Abstract:
We study a spectral problem related to finite-dimensional characters of the groups $Sp(2N)$, $SO(2N+1)$ and $SO(2N)$, which form the classical series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$, respectively. Irreducible characters of these three series are given by $N$-variate symmetric polynomials. The spectral problem in question consists in the decomposition of characters after their restriction to subgroups of the same type but smaller rank $K<N$. The main result of the paper is the derivation of explicit determinantal formulae for the coefficients of the expansion.
In fact, first we compute these coefficients in greater generality — for the multivariate symmetric Jacobi polynomials depending on two continuous parameters. Next, we show that the formulae simplify drastically in the three special cases of Jacobi polynomials corresponding to characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$. In particular, we show that these coefficients are then given by piecewise polynomial functions. This is where a link with discrete splines arises.
For characters of the series $\mathcal{A}$ (that is, of the unitary groups $U(N)$) similar results were obtained previously by Borodin and this author [5], and then reproved by Petrov [39] by another method. The case of symplectic and orthogonal characters is more intricate.
Bibliography: 58 titles.
Keywords:
characters of classical groups, Schur functions, discrete splines, generalized hypergeometric series.
Received: 02.03.2023 and 05.05.2023
§ 1. Introduction

This introductory section is structured as follows. We begin with a brief description of the problem and its history (§ 1.1) and a discussion of spline functions (§§ 1.2–1.4). Then we introduce a few necessary definitions (§§ 1.5 and 1.6). After that, in §§ 1.7–1.9, the main results are stated. Various comments and bibliographic notes are collected in § 1.10.

1.1. Stochastic matrices related to irreducible characters

Let $G$ be a finite or compact group and $\{\chi_{\nu,G}\}$ be the set of its irreducible characters, where the indices $\nu$ are appropriate labels. Let us regard the set $\{\chi_{\nu,G}\}$ (or simply the corresponding set $\{\nu\}$ of labels) as a kind of dual object $\widehat{G}$ to the group $G$. Then one would like to assign to any morphism $\phi\colon H\to G$ a dual ‘morphism’ $\widehat\phi$ from $\widehat{G}$ to $\widehat{H}$; how can we do this? A reasonable solution is as follows. It is more convenient to work with normalized irreducible characters
$$
\begin{equation*}
\widetilde\chi_{\nu,G}:=\frac{\chi_{\nu,G}}{\dim\nu}, \qquad \dim\nu:=\chi_{\nu,G}(e).
\end{equation*}
\notag
$$
The pullback of $\widetilde\chi_{\nu,G}$ under $\phi$ is a normalized, positive definite class function on $H$, and so it can be written as a convex combination of normalized irreducible characters $\widetilde\chi_{\varkappa,H}$ of the group $H$:
$$
\begin{equation*}
\widetilde\chi_{\nu,G}\circ\phi=\sum_\varkappa\Lambda^G_H(\nu,\varkappa) \widetilde\chi_{\varkappa,H},
\end{equation*}
\notag
$$
where the $\Lambda^G_H(\nu,\varkappa)$ are some coefficients. These coefficients obviously form a stochastic matrix $\Lambda^G_H$ of the format $\{\nu\}\times\{\varkappa\}$, and we regard $\Lambda^G_H$ as the required dual ‘morphism’ $\widehat\phi$ from $\widehat G$ to $\widehat H$. This is justified by the fact that stochastic matrices (more generally, Markov kernels) can be viewed as a natural generalization of ordinary maps.$^1$

$^1$ The idea to consider Markov kernels as morphisms also arises in other situations; see [25], for instance.

Example 1.1. Take $G=S(n)$, the symmetric group on the set $\{1,\dots,n\}$; then the corresponding set $\{\nu\}$ is $\mathbb{Y}_n$, the set of Young diagrams with $n$ boxes. Next, for $k<n$ let $H=S(k)$ be the subgroup of $S(n)$ fixing the points $k+1,\dots, n$; the corresponding set $\{\varkappa\}$ is $\mathbb{Y}_k$. Then
$$
\begin{equation}
\Lambda^{S(n)}_{S(k)}(\nu,\varkappa)= \begin{cases} \dfrac{\dim\varkappa\, \dim(\nu/\varkappa)}{\dim\nu} &\text{if }\varkappa\subset\nu, \\ 0 &\text{otherwise,} \end{cases}
\end{equation}
\tag{1.1}
$$
where $\dim (\,\cdot\,)$ is the number of standard tableaux of given (skew) shape. For this quantity one can obtain a determinantal expression which can be transformed into the following form:
$$
\begin{equation}
\Lambda^{S(n)}_{S(k)}(\nu,\varkappa)=\frac{\dim\varkappa}{n^{\downarrow k}} s^*_\varkappa(\nu_1,\nu_2,\dots),
\end{equation}
\tag{1.2}
$$
where
$$
\begin{equation*}
n^{\downarrow k}:=n(n-1)\dotsb(n-k+1)
\end{equation*}
\notag
$$
and $s^*_\varkappa$ is the shifted Schur function indexed by $\varkappa$; these functions form a basis of the algebra of shifted symmetric functions; see [29], Theorem 8.1, and [7], Proposition 6.5.

Formula (1.2) makes it possible to find the asymptotics of $\Lambda^{S(n)}_{S(k)}(\nu,\varkappa)$ for fixed $\varkappa$ and growing $\nu$. As an application, one obtains a relatively simple proof of Thoma’s theorem about characters of the infinite symmetric group (see [22] and [7]). Another notable fact is that the quantity
$$
\begin{equation*}
\frac{n^{\downarrow k}}{\dim\varkappa}\Lambda^{S(n)}_{S(k)}(\nu,\varkappa),
\end{equation*}
\notag
$$
viewed as a function of the partition $\nu$, has a similarity with the Schur function (which is not obvious from the initial definition (1.1)).

In [5] we raised the problem of studying the stochastic matrices $\Lambda^{U(N)}_{U(K)}$ related to unitary group characters; here the matrix entries $\Lambda^{U(N)}_{U(K)}(\nu,\varkappa)$ are indexed by tuples of integers
$$
\begin{equation*}
\nu=(\nu_1\geqslant\dots\geqslant\nu_N) \quad\text{and}\quad \varkappa=(\varkappa_1\geqslant\dots\geqslant\varkappa_K), \qquad K<N.
\end{equation*}
\notag
$$
In that work we were guided by a remarkable analogy$^2$ between the infinite symmetric group $S(\infty)$ and the infinite-dimensional unitary group $U(\infty)$. We obtained in [5] a determinantal ‘Schur-type’ expression for the matrix entries $\Lambda^{U(N)}_{U(K)}(\nu,\varkappa)$ and applied it to a novel derivation of the classification theorem for the characters of the infinite-dimensional unitary group $U(\infty)$.

$^2$ For more details about this analogy, see [6]. For applications of the stochastic matrices $\Lambda^{U(N)}_{U(K)}$, see the expository paper [35].

The aim of the present paper is to extend the results of [5] to other series of compact classical groups, that is, the symplectic groups $Sp(2N)$ (series $\mathcal{C}$) and the orthogonal groups $SO(2N+1)$ and $SO(2N)$ (series $\mathcal{B}$ and $\mathcal{D}$). As often happens in representation theory, working with the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$ turns out to be harder than with the series $\mathcal{A}$. The present paper is focused on the combinatorial aspects of the problem, and the asymptotic analysis is deferred to a separate publication.

1.2. The $\mathrm{B}$-spline

Given an $N$-tuple of real numbers $y_1>\dots>y_N$, we define a function of a variable $x\in\mathbb{R}$ by
$$
\begin{equation}
M_N(x;y_1,\dots,y_N):=(N-1)\sum_{i\colon y_i\geqslant x} \frac{(y_i-x)^{N-2}}{\prod\limits_{r\colon r\ne i}(y_i-y_r)}.
\end{equation}
\tag{1.3}
$$
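As a quick illustration (a hypothetical numerical sketch, not part of the paper; the knots and grid size are arbitrary choices), one can evaluate (1.3) directly and test the normalization stated in property (iv) below:

```python
import numpy as np

def b_spline(x, y):
    """Evaluate M_N(x; y_1, ..., y_N) from (1.3); the knots y must be strictly decreasing."""
    N = len(y)
    s = 0.0
    for i in range(N):
        if y[i] >= x:
            denom = np.prod([y[i] - y[r] for r in range(N) if r != i])
            s += (y[i] - x) ** (N - 2) / denom
    return (N - 1) * s

y = [3.0, 1.5, 0.0, -2.0]                      # knots y_1 > y_2 > y_3 > y_4
xs = np.linspace(y[-1], y[0], 20001)
vals = np.array([b_spline(x, y) for x in xs])
print(vals.min() >= 0.0)                       # positivity on (y_4, y_1), property (iv)
print((xs[1] - xs[0]) * vals.sum())            # Riemann sum of the density: ~1.0
```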
Note that the number of terms on the right-hand side depends on the position of the variable $x$ relative to the parameters $y_1,\dots,y_N$. The function $x\mapsto M_N(x; y_1,\dots,y_N)$ has the following properties:
- (i) it vanishes outside the interval $(y_N,y_1)$;
- (ii) it is piecewise polynomial: on each interval $(y_{i+1},y_i)$ inside $(y_N,y_1)$ it is given by a polynomial of degree $N-2$;
- (iii) it has $N-3$ continuous derivatives at each point $y_i$;
- (iv) it is positive on $(y_N,y_1)$ and
$$
\begin{equation*}
\int_{y_N}^{y_1}M_N(x;y_1,\dots,y_N)\,dx=1,
\end{equation*}
\notag
$$
so that $M_N(x;y_1,\dots,y_N)\,dx$ is a compactly supported probability measure on $\mathbb{R}$.

The function $x\mapsto M_N(x;y_1,\dots,y_N)$ is called the $\mathrm{B}$-spline with knots $y_1,\dots,y_N$ (‘B’ is an abbreviation of ‘basic’). The paper [11] by Curry and Schoenberg contains remarkable results about the $\mathrm{B}$-spline. For more details about spline functions, see Schumaker’s monograph [46].

The next remark (due to Okounkov) relates $\mathrm{B}$-splines to random matrices. Let $H(N)$ be the space of $N\times N$ Hermitian matrices. The unitary group $U(N)$ acts on $H(N)$ by conjugations. Let $\mathcal O(y_1,\dots,y_N)\subset H(N)$ denote the set of matrices with eigenvalues $y_1,\dots,y_N$. It is a $U(N)$-orbit and carries a (unique) $U(N)$-invariant probability measure, which we denote by $P(y_1,\dots,y_N)$.

Remark 1.2. Consider the projection $\mathcal O(y_1,\dots,y_N)\to\mathbb{R}$ assigning to a matrix $X\in \mathcal O(y_1,\dots,y_N)$ its upper leftmost entry $X_{11}$. The pushforward of the measure $P(y_1,\dots,y_N)$ under this projection is the measure $M_N(x;y_1,\dots,y_N)\,dx$. This fact can easily be derived from Theorem 2 in [11]; see [38], Remark 8.2.

1.3. Discrete $\mathrm{B}$-splines

Throughout the paper we use the standard notation for the Pochhammer symbol (also known as the raising factorial power)
$$
\begin{equation*}
(x)_m:=x(x+1)\dotsb(x+m-1)=\frac{\Gamma(x+m)}{\Gamma(x)}, \qquad m=0,1,2,\dots\,.
\end{equation*}
\notag
$$
By the discrete $\mathrm{B}$-spline with integral knots $y_1>\dots>y_N$ we mean the function on $\mathbb{Z}$ defined by
$$
\begin{equation}
M^{\mathrm{discr}}_N(x;y_1,\dots,y_N):=(N-1)\sum_{i\colon y_i\geqslant x}\frac{(y_i-x+1)_{N-2}}{\prod\limits_{r\colon r\ne i}(y_i-y_r)}.
\end{equation}
\tag{1.4}
$$
This agrees with the definition in Schumaker [46], § 8.5, up to minor changes. Note that the right-hand side of (1.4) is not affected if, instead of $y_i\geqslant x$, we impose the weaker condition $y_i+N-2\geqslant x$: the reason is that the function $x\mapsto (y-x+1)_{N-2}$ vanishes at the points $y+1,\dots,y+N-2$.

Formula (1.4) is very similar to (1.3), only the variable $x$ now ranges over $\mathbb{Z}$ rather than $\mathbb{R}$, and ordinary powers are replaced by raising factorial powers. The discrete $\mathrm{B}$-spline has properties similar to properties (i)–(iv) above. In particular, it determines a finitely supported probability measure on $\mathbb{Z}$ (its support is the lattice interval $y_N+N-2\leqslant x\leqslant y_1$).

1.4. A link with characters of $U(N)$

Let $\operatorname{Sign}_N\subset \mathbb{Z}^N$ denote the set of $N$-tuples of integers $\nu=(\nu_1,\dots,\nu_N)$ subject to the inequalities $\nu_1\geqslant\dots\geqslant\nu_N$. Elements $\nu\in\operatorname{Sign}_N$ are called signatures of length $N$. They parametrize the irreducible characters of the unitary group $U(N)$; these characters can be thought of as symmetric $N$-variate Laurent polynomials (also known as rational Schur functions) and are denoted by $\chi_{\nu,N}(u_1,\dots,u_N)$. We also introduce the normalized characters
$$
\begin{equation}
\widetilde\chi_{\nu,N}(u_1,\dots,u_N) :=\frac{\chi_{\nu,N}(u_1,\dots,u_N)}{\chi_{\nu,N}(1,\dots,1)}, \qquad \nu\in\operatorname{Sign}_N.
\end{equation}
\tag{1.5}
$$
Observe that $u\mapsto \widetilde\chi_{\nu,N}(u,1,\dots,1)$ is a univariate Laurent polynomial and consider its expansion in the monomials $u^k$, which we write in the form
$$
\begin{equation}
\widetilde\chi_{\nu,N}(u,1,\dots,1)=\sum_{k\in\mathbb Z}\Lambda^N_1(\nu,k) u^k.
\end{equation}
\tag{1.6}
$$
The coefficients $\Lambda^N_1(\nu,k)$ are nonnegative real numbers which sum to $1$ for every fixed $\nu\in\operatorname{Sign}_N$. Thus, we can view $\Lambda^N_1(\nu,\,\cdot\,)$ as a finitely supported probability distribution on $\mathbb{Z}$. In combinatorial terms, the quantity $\chi_{\nu,N}(1,\dots,1)$ (the dimension of the character $\chi_{\nu,N}$) is equal to the number of triangular Gelfand-Tsetlin patterns with top row $\nu$, whereas $\Lambda^N_1(\nu,k)$ is the fraction of the patterns with bottom entry $k$.

Proposition 1.3 (see [5], formula (7.10)). For any signature $\nu\in\operatorname{Sign}_N$ the distribution $\Lambda^N_1(\nu,\,\cdot\,)$ is the discrete $\mathrm{B}$-spline with knots $y_i=\nu_i-i+1$:
$$
\begin{equation}
\Lambda^N_1(\nu,k)=M_N^{\mathrm{discr}}(k; \nu_1,\nu_2-1,\dots,\nu_N-N+1), \qquad k\in\mathbb Z.
\end{equation}
\tag{1.7}
$$
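Proposition 1.3 is easy to test by brute force for small $N$. The following hypothetical sketch (illustrative only, not from [5]) enumerates Gelfand-Tsetlin patterns with a given top row and compares the resulting fractions with the right-hand side of (1.7):

```python
from itertools import product
from math import prod

def poch(x, m):
    """Pochhammer symbol (x)_m."""
    return prod(x + j for j in range(m))

def M_discr(x, y):
    """Discrete B-spline (1.4) with integer knots y_1 > ... > y_N."""
    N = len(y)
    return (N - 1) * sum(
        poch(y[i] - x + 1, N - 2) / prod(y[i] - y[r] for r in range(N) if r != i)
        for i in range(N) if y[i] >= x
    )

def bottom_entry_counts(nu):
    """Count Gelfand-Tsetlin patterns with top row nu, grouped by the bottom entry."""
    counts = {}
    def descend(row):
        if len(row) == 1:
            counts[row[0]] = counts.get(row[0], 0) + 1
            return
        # the next row interlaces with the current one: row[i+1] <= next[i] <= row[i]
        for nxt in product(*[range(row[i + 1], row[i] + 1) for i in range(len(row) - 1)]):
            descend(nxt)
    descend(tuple(nu))
    return counts

nu = (4, 2, 1)                                  # a signature of length N = 3
counts = bottom_entry_counts(nu)
total = sum(counts.values())
knots = [nu[i] - i for i in range(len(nu))]     # y_i = nu_i - i + 1 (0-based index here)
for k in sorted(counts):
    print(k, counts[k] / total, M_discr(k, knots))   # the last two columns agree
```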
Formally, formula (7.10) in [5] assumes that $k\geqslant1$; however, the whole picture is invariant under the simultaneous shift of all coordinates by an arbitrary integer, so this constraint can be dropped. Another derivation of (1.7) can be obtained from the proof of Theorem 1.2 in Petrov [39].

1.5. Schur-type functions

Let $\operatorname{Sign}^+_N\subset\operatorname{Sign}_N$ denote the set of positive signatures of length $N$: these are $N$-tuples of integers $\nu=(\nu_1,\dots,\nu_N)$ subject to the constraints $\nu_1\geqslant\dots\geqslant \nu_N\geqslant0$. Let $\phi_0(x)\equiv1, \phi_1(x), \phi_2(x),\dots$ be an infinite sequence of functions of a variable $x$. For a signature $\nu\in\operatorname{Sign}^+_N$ we define a symmetric function of $N$ variables $x_1,\dots,x_N$ by
$$
\begin{equation}
\phi_{\nu, N}(x_1,\dots,x_N) :=\frac{\det[\phi_{\nu_i+N-i}(x_j)]_{i,j=1}^N}{\det[\phi_{N-i}(x_j)]_{i,j=1}^N}.
\end{equation}
\tag{1.8}
$$
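For concreteness, here is a minimal sympy sketch of (1.8) (an illustration under the assumption $\phi_n(x)=x^n$, the case mentioned at the end of this subsection, in which (1.8) reduces to an ordinary Schur polynomial):

```python
import sympy as sp

def schur_type(nu, phi, xs):
    """phi_{nu,N}(x_1,...,x_N) as the determinant ratio (1.8)."""
    N = len(xs)
    num = sp.Matrix(N, N, lambda i, j: phi(nu[i] + N - 1 - i, xs[j]))
    den = sp.Matrix(N, N, lambda i, j: phi(N - 1 - i, xs[j]))
    return sp.cancel(num.det() / den.det())

x1, x2, x3 = sp.symbols('x1 x2 x3')
power = lambda n, x: x**n                       # phi_n(x) = x^n
print(sp.expand(schur_type((2, 1, 0), power, (x1, x2, x3))))
# prints the ordinary Schur polynomial s_(2,1)(x1, x2, x3)
```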
Note that $\phi_{\varnothing, N}(x_1,\dots,x_N)\equiv1$, where $\varnothing:=(0,\dots,0)$. The definition (1.8) fits in the formalism proposed by Nakagawa, Noumi, Shirakawa and Yamada [28] as an alternative approach to Macdonald’s 9th variation of Schur functions [26]. The latter term is historically justified, but slightly inconvenient to use. For this reason we prefer to call functions (1.8) Schur-type functions. If $\phi_n$ is a polynomial of degree $n$ ($n=0,1,2,\dots$), then the denominator on the right-hand side is proportional to the Vandermonde determinant
$$
\begin{equation*}
V(x_1,\dots,x_N):=\prod_{1\leqslant i<j\leqslant N}(x_i-x_j),
\end{equation*}
\notag
$$
which implies that the functions $\phi_{\nu,N}$ are symmetric polynomials. Such Schur-type functions are sometimes called generalized Schur polynomials; see Sergeev and Veselov [47], for example. Note that these polynomials form a basis in the algebra of $N$-variate symmetric polynomials. In the particular case when $\phi_n(x)=x^n$ ($n=0,1,2,\dots$) we obtain ordinary Schur polynomials. Throughout our paper various Schur-type functions $\phi_{\nu,N}$ appear. Sometimes they are polynomials, and sometimes they are not.

1.6. Stochastic matrices $\Lambda^N_K$ related to Jacobi polynomials

We are mainly interested in characters of the compact classical groups
$$
\begin{equation}
Sp(2N)\quad (\text{series }\mathcal C), \qquad SO(2N+1)\quad (\text{series }\mathcal B)\quad\text{and} \quad SO(2N)\quad (\text{series }\mathcal D).
\end{equation}
\tag{1.9}
$$
However, a substantial part of our results holds true in the broader context of multivariate Jacobi polynomials. Recall that the classical Jacobi polynomials $P^{(a,b)}_n(x)$ are orthogonal polynomials with weight function $(1-x)^a(1+x)^b$ on $[-1,1]$ (Szegő [50]). The corresponding $N$-variate Jacobi polynomials are defined by
$$
\begin{equation*}
P^{(a,b)}_{\nu,N}(x_1,\dots,x_N):=\frac{\det[P^{(a,b)}_{\nu_i+N-i}(x_j)]_{i,j=1}^N} {V(x_1,\dots,x_N)}, \qquad \nu\in\operatorname{Sign}^+_N.
\end{equation*}
\notag
$$
These polynomials are an instance of generalized Schur polynomials (up to constant factors); they are also a particular case of the more general $3$-parameter family of orthogonal polynomials associated with the root system $BC_N$ (see, for example, Lassalle [24] or Heckman’s lectures in [21]). The three distinguished cases of Jacobi parameters
$$
\begin{equation}
(a,b)=\biggl(\frac12,\frac12\biggr), \biggl(\frac12,-\frac12\biggr),\text{ or } \biggl(-\frac12,-\frac12\biggr)
\end{equation}
\tag{1.10}
$$
correspond to characters of the groups (1.9) (in the same order). More precisely, set $x_i=\frac12(u_i+u^{-1}_i)$ and regard $u_1^{\pm1},\dots,u^{\pm1}_N$ as the matrix eigenvalues. Then the polynomials $P^{(a,b)}_{\nu,N}$, suitably renormalized, turn into irreducible characters, with the understanding that in the case of the series $\mathcal{D}$ and $\nu_N>0$ one must take the sum of two ‘twin’ irreducible characters (see, for example, Okounkov and Olshanski [30]). Constant factors can be neglected here, because we deal with normalized characters and the normalized polynomials
$$
\begin{equation}
\widetilde P^{(a,b)}_{\nu, N}(x_1,\dots,x_N):=\frac{P^{(a,b)}_{\nu, N}(x_1,\dots,x_N)}{P^{(a,b)}_{\nu, N}(1,\dots,1)}.
\end{equation}
\tag{1.11}
$$
Definition 1.4. With each couple $(N,K)$ of natural numbers $N>K\geqslant1$ we associate a matrix $\Lambda^N_K$ of the format $\operatorname{Sign}^+_N\times\operatorname{Sign}^+_K$: the matrix entries $\Lambda^N_K(\nu,\varkappa)$ are the coefficients in the expansion
$$
\begin{equation}
\widetilde P^{(a,b)}_{\nu, N}(x_1,\dots,x_K, 1,\dots,1)=\sum_{\varkappa\in\operatorname{Sign}^+_K}\Lambda^N_K(\nu,\varkappa)\widetilde P^{(a,b)}_{\varkappa, K}(x_1,\dots,x_K).
\end{equation}
\tag{1.12}
$$
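For very small $N$ and $K$ the coefficients in (1.12) can be computed symbolically. The following hypothetical sympy sketch (with $N=2$, $K=1$ and $\nu=(1,0)$; these choices are ours) restricts the normalized bivariate Jacobi polynomial and expands it in the normalized univariate ones; the resulting coefficients are nonnegative and sum to $1$, in agreement with the discussion below:

```python
import sympy as sp
from sympy import jacobi, symbols

x, a, b = symbols('x a b')
nu = (1, 0)                                     # N = 2, K = 1

# restriction of the normalized polynomial (1.11) to (x, 1), as in (1.12)
num = sp.Matrix(2, 2, lambda i, j: jacobi(nu[i] + 1 - i, a, b, (x, 1)[j]))
P = sp.cancel(num.det() / (x - 1))              # P^{(a,b)}_{nu,2}(x, 1)
P = sp.cancel(P / P.subs(x, 1))                 # normalization (1.11)

# expand in the normalized univariate Jacobi polynomials, top degree first
Lam, rem = {}, sp.expand(P)
for k in range(nu[0], -1, -1):
    Pk = sp.expand(jacobi(k, a, b, x) / jacobi(k, a, b, 1))
    c = sp.cancel(rem.coeff(x, k) / Pk.coeff(x, k))
    Lam[k] = sp.simplify(c)
    rem = sp.expand(rem - c * Pk)

print(Lam)                                      # Lambda^2_1(nu, k) for k = 0, 1
print(sp.simplify(sum(Lam.values())))           # 1
```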
The matrix depends on the Jacobi parameters $(a,b)$, but we suppress them to simplify the notation. Our assumptions on the Jacobi parameters are the following:
$$
\begin{equation}
a>-1,\qquad b>-1\quad\text{and}\quad a+b\geqslant-1.
\end{equation}
\tag{1.13}
$$
Here the first two inequalities ensure the integrability of the weight function. The third inequality is an additional technical assumption; it is obviously satisfied for the three special values (1.10).

Our goal is to find explicit formulae for the quantities $\Lambda^N_K(\nu,\varkappa)$, with the emphasis on the three distinguished cases (1.10) corresponding to characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$.

One can think of $\Lambda^N_K(\nu,\varkappa)$ as a function of the variable $\nu\in\operatorname{Sign}^+_N$, with $\varkappa\in\operatorname{Sign}^+_K$ being an index. Or, conversely, as a function of $\varkappa$ indexed by $\nu$. The first point of view is motivated by asymptotic representation theory, where one is interested in large $N$ limits (Okounkov and Olshanski [31], [32]). The second point of view has its origins in spectral problems of classical representation theory and leads, in the distinguished cases (1.10), to multidimensional discrete splines.

From the branching rule for multivariate Jacobi polynomials (see [32], Proposition 7.5) and the condition $a+b\geqslant-1$ it follows that the coefficients $\Lambda^N_K(\nu,\varkappa)$ are nonnegative (in the three special cases (1.10) this also follows from the classical branching rules for symplectic and orthogonal characters; see Zhelobenko [57]). Next, the row sums of the matrix $\Lambda^N_K$ are equal to $1$ (to see this, substitute $x_1=\dots=x_K=1$ into (1.12)). This means that $\Lambda^N_K(\nu,\,\cdot\,)$ is a probability distribution on the set $\operatorname{Sign}^+_K$ for any fixed $\nu\in\operatorname{Sign}^+_N$. In other words, $\Lambda^N_K$ is a stochastic matrix, and its entries $\Lambda^N_K(\nu,\varkappa)$ can be viewed as transition probabilities between the sets $\operatorname{Sign}^+_N$ and $\operatorname{Sign}^+_K$.

We proceed to the description of the main results (Theorems A–D).

1.7. A Cauchy-type identity involving $\Lambda^N_K$ (Theorem A)

Throughout the paper we use the notation
$$
\begin{equation*}
L:=N-K+1 \quad\text{and}\quad \varepsilon:=\frac{a+b+1}2.
\end{equation*}
\notag
$$
We often use the parameters $(a,\varepsilon)$ instead of $(a,b)$. Given a positive integer $N$ and $\nu\in\operatorname{Sign}^+_N$ we set
$$
\begin{equation}
F_N(t;\nu;\varepsilon):=\prod_{i=1}^N\frac{t^2-(N-i+\varepsilon)^2}{t^2-(\nu_i+N-i+\varepsilon)^2}.
\end{equation}
\tag{1.14}
$$
This is an even rational function of $t$. We call it the characteristic function of the signature $\nu$. We also set
$$
\begin{equation}
d_N(\nu;\varepsilon):=\prod_{1\leqslant i<j\leqslant N}((\nu_i+N-i+\varepsilon)^2-(\nu_j+N-j+\varepsilon)^2).
\end{equation}
\tag{1.15}
$$
We need $d_N(\nu;\varepsilon)$ to be nonzero. Because of this (and for some other reasons) we have imposed the additional constraint $a+b\geqslant-1$, meaning that $\varepsilon\geqslant0$. This guarantees that $d_N(\nu;\varepsilon)\ne0$. In the distinguished cases (1.10) we have $\varepsilon=1,1/2,0$. Next, we introduce the sequence of functions
$$
\begin{equation}
g_k(t)=g_k(t;a,\varepsilon,L) :={}_4F_3\biggl[\begin{matrix} -k,\, k+2\varepsilon,\, L,\, L+a\\-t+L+\varepsilon,\, t+L+\varepsilon,\, a+1\end{matrix}\biggm|1\biggr], \qquad k=0,1,2,\dots\,.
\end{equation}
\tag{1.16}
$$
The right-hand side is a balanced (= Saalschützian) hypergeometric series (Bailey [3], § 2.5). Because $k$ is a nonnegative integer, the series terminates and represents a rational function of the variable $t$. Because of the symmetry $g_k(t)=g_k(-t)$, it is actually a rational function of $t^2$. Note that $g_0(t)\equiv1$. Note also that in the limit as $L\to\infty$, combined with a change of the variable $t$, the functions $g_k(t)$ degenerate into Jacobi polynomials; see § 9.3.

From the sequence $\{g_k(t)\}$ we form Schur-type functions in accordance with (1.8):
$$
\begin{equation}
G_{\varkappa,K}(t_1,\dots,t_K)=\frac{\det[g_{\varkappa_i+K-i}(t_j)]_{i,j=1}^K} {\det[g_{K-i}(t_j)]_{i,j=1}^K}, \qquad \varkappa\in\operatorname{Sign}^+_K\,.
\end{equation}
\tag{1.17}
$$
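The terminating series (1.16) is straightforward to evaluate exactly. The sketch below (a hypothetical illustration with arbitrary test values) does this with rational arithmetic and checks, for $L=1$, the closed form that appears later in formula (4.24) of § 4:

```python
from math import prod
from fractions import Fraction

def poch(x, m):
    """Pochhammer symbol (x)_m."""
    return prod(x + j for j in range(m))

def g(k, t, a, eps, L):
    """g_k(t; a, eps, L): the terminating 4F3 series from (1.16)."""
    s = Fraction(0)
    for m in range(k + 1):                      # the factor (-k)_m vanishes for m > k
        num = poch(-k, m) * poch(k + 2 * eps, m) * poch(L, m) * poch(L + a, m)
        den = poch(-t + L + eps, m) * poch(t + L + eps, m) * poch(a + 1, m) * poch(1, m)
        s += Fraction(num) / Fraction(den)
    return s

a, eps, k = 2, 1, 5
t = Fraction(13, 2)                             # a generic rational point
print(g(k, t, a, eps, 1) == Fraction(t**2 - eps**2, t**2 - (k + eps)**2))   # True
```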
Theorem A. The following identity holds true:
$$
\begin{equation*}
\prod_{j=1}^KF_N(t_j;\nu;\varepsilon)=\sum_{\varkappa\in\operatorname{Sign}^+_K} \frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}\,G_{\varkappa,K}(t_1,\dots,t_K).
\end{equation*}
\notag
$$
The sum on the right-hand side is finite, and the quantities $\Lambda^N_K(\nu,\varkappa)$ are uniquely determined by this formula. The proof is presented in § 4.

This result has the form of a Cauchy-type identity connecting two families of multivariate functions, both indexed by elements $\varkappa\in\operatorname{Sign}^+_K$. Specifically, these functions are $\nu\mapsto \Lambda^N_K(\nu,\varkappa)/d_K(\varkappa;\varepsilon)$ and $G_{\varkappa,K}(t_1,\dots,t_K)$. In the first family we take the shifted coordinates $n_i:=\nu_i+N-i$, where $i=1,\dots,N$, as the variables. Then on the other side of the identity we obtain a double product over two sets of variables,
$$
\begin{equation*}
\prod_{i=1}^N\prod_{j=1}^K \frac{t_j^2-(N-i+\varepsilon)^2}{t_j^2-(n_i+\varepsilon)^2},
\end{equation*}
\notag
$$
which is separately symmetric with respect to the permutations of $n_1,\dots,n_N$ and $t_1,\dots,t_K$ — just as in the classical Cauchy identity.

1.8. A determinantal formula for the matrix entries $\Lambda^N_K(\nu,\varkappa)$ (Theorem B)

To state the result we need a few definitions. Given an integer $L\geqslant2$, we consider the infinite grid
$$
\begin{equation}
\mathbb A(\varepsilon,L):=\{A_1,A_2,\dots\}\subset \mathbb R_{>0}, \qquad A_m:=L+\varepsilon+m-1, \quad m=1,2,\dots,
\end{equation}
\tag{1.18}
$$
and we denote by $\mathcal{F}(\varepsilon,L)$ the vector space whose elements are even rational functions $f(t)$ of the complex variable $t$, which are regular at $t=\infty$ and such that their only singularities are simple poles contained in the set $(-\mathbb{A}(\varepsilon,L))\cup \mathbb{A}(\varepsilon,L)$. Obviously,
$$
\begin{equation*}
\mathcal F(\varepsilon,2)\supset \mathcal F(\varepsilon,3)\supset\dotsb,
\end{equation*}
\notag
$$
and all these spaces have countable dimension. We show that the functions $g_k(t)=g_k(t;a,\varepsilon,L)$ form a basis of $\mathcal{F}(\varepsilon, L)$. Given $\phi\in\mathcal{F}(\varepsilon,L)$, we denote by $(\phi:g_k)$ the coefficients in the expansion
$$
\begin{equation}
\phi(t)=\sum_{k=0}^\infty (\phi:g_k) g_k(t).
\end{equation}
\tag{1.19}
$$
Theorem B. With the notation introduced above, we have the following determinantal formula:
$$
\begin{equation}
\frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}=\det\bigl[(g_{K-j}F_N: g_{\varkappa_i+K-i})\bigr]_{i,j=1}^K.
\end{equation}
\tag{1.20}
$$
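The case $K=1$ of (1.20) is small enough to verify directly: the determinant is then the single coefficient $(F_N:g_{\varkappa_1})$, so the formula states that $F_N=\sum_k\Lambda^N_1(\nu,k)\,g_k$. The following hypothetical sympy sketch tests this for $N=2$, $\nu=(2,0)$ and $a=b=1/2$ (so $\varepsilon=1$ and $L=2$), with $\Lambda^2_1$ computed from (1.12) as in the sketch in § 1.6:

```python
import sympy as sp
from sympy import jacobi, symbols, Rational

x, t = symbols('x t')
a = b = Rational(1, 2)                          # then eps = 1, and L = N - K + 1 = 2
eps, N, L = (a + b + 1) / 2, 2, 2
nu = (2, 0)

# Lambda^2_1(nu, k) via the expansion (1.12)
num = sp.Matrix(2, 2, lambda i, j: jacobi(nu[i] + 1 - i, a, b, (x, 1)[j]))
P = sp.cancel(num.det() / (x - 1))
P = sp.cancel(P / P.subs(x, 1))
Lam, rem = {}, sp.expand(P)
for k in range(nu[0], -1, -1):
    Pk = sp.expand(jacobi(k, a, b, x) / jacobi(k, a, b, 1))
    c = sp.cancel(rem.coeff(x, k) / Pk.coeff(x, k))
    Lam[k] = c
    rem = sp.expand(rem - c * Pk)

def poch(z, m):
    return sp.prod(z + j for j in range(m))

def g(k):                                       # g_k(t; a, eps, L) from (1.16)
    return sum(poch(-k, m) * poch(k + 2 * eps, m) * poch(L, m) * poch(L + a, m)
               / (poch(-t + L + eps, m) * poch(t + L + eps, m)
                  * poch(a + 1, m) * sp.factorial(m))
               for m in range(k + 1))

F = sp.prod((t**2 - (N - i + eps)**2) / (t**2 - (nu[i - 1] + N - i + eps)**2)
            for i in (1, 2))                    # characteristic function (1.14)
print(sp.cancel(F - sum(Lam[k] * g(k) for k in Lam)))    # 0
```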
The proof is presented in § 4.

Note that the characteristic function (1.14) lies in the space $\mathcal{F}(\varepsilon,N)$. More generally, under our assumption that $L=N-K+1$, the function $g_{K-j}F_N$ lies in $\mathcal{F}(\varepsilon,L)$ for any $j=1,\dots, K$. This implies that the quantities $(g_{K-j}F_N:g_{\varkappa_i+K-i})$ are well defined.

The determinantal formula (1.20) resembles the classical Jacobi-Trudi formula for the Schur functions (or rather its version for Macdonald’s 9th variation of Schur functions). This result is deduced from Theorem A in the same way as the classical Jacobi-Trudi formula is deduced from Cauchy’s identity.

Theorem A and Theorem B show that if we treat $\nu$ as a variable and $\varkappa$ as a parameter, then the functions
$$
\begin{equation*}
\nu\mapsto \frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}
\end{equation*}
\notag
$$
share two fundamental properties of Schur functions: Cauchy’s identity and the Jacobi-Trudi identity.

1.9. The computation of the matrix entries $\Lambda^N_K(\nu,\varkappa)$ (Theorems C and D)

Our subsequent actions are driven by the desire to find an explicit expression for the entries of the $K\times K$ matrix on the right-hand side of formula (1.20). This leads us to the problem of computing the coefficients $(\phi:g_k)$ of the expansion (1.19) for a given function $\phi\in\mathcal{F}(\varepsilon,L)$. Solving this problem will allow us to find the matrix entries $\Lambda^N_K(\nu,\varkappa)$ explicitly from the determinantal formula (1.20), because the matrix entries on the right-hand side are of the form $(\phi:g_k)$, where $\phi=g_{K-j}F_N\in\mathcal{F}(\varepsilon,L)$ and $k=\varkappa_i+K-i$.

Our approach to this problem is as follows. For a function $\phi\in \mathcal{F}(\varepsilon,L)$ we denote by $\operatorname{Res}_{t=A_m}{\phi(t)}$ its residue at the point $t=A_m\in\mathbb{A}(\varepsilon,L)$. Because $\phi(t)$ is rational, it has only finitely many poles. We also need the most natural and simplest basis of the space $\mathcal{F}(\varepsilon,L)$, which is formed by the functions
$$
\begin{equation}
e_0(t)\equiv1\quad\text{and} \quad e_m(t):=\frac1{t-A_m}-\frac1{t+A_m}, \quad m\in\mathbb Z_{\geqslant1}.
\end{equation}
\tag{1.21}
$$
By analogy with (1.19) we denote by $(e_m:g_k)$ the transition coefficients between the bases $\{e_m\}$ and $\{g_k\}$. Next, it is not difficult to show that for any $\phi\in \mathcal{F}(\varepsilon,L)$,
$$
\begin{equation}
(\phi:g_k)=\begin{cases} \displaystyle \sum_{m\geqslant k}\operatorname*{Res}_{t=A_m}(\phi(t))(e_m:g_k), & k\geqslant1, \\ \displaystyle \phi(\infty)+\sum_{m\geqslant1}\operatorname*{Res}_{t=A_m}(\phi(t))(e_m: g_0), & k=0 \end{cases}
\end{equation}
\tag{1.22}
$$
(see Proposition 5.1). The sums in (1.22) are in fact finite, because the number of poles is finite.

Theorem C. The transition coefficients $(e_m:g_k)$ have an explicit expression in terms of a terminating hypergeometric series of type ${}_4F_3$.

A more detailed formulation of this result is presented in Theorem 5.2. Combining (1.22) with Theorem C we obtain an expression for the matrix entries on the right-hand side of (1.20). The final result looks complicated because it involves the residues of the functions $\phi=g_{K-j}F_N$, which are given by certain hypergeometric series of type ${}_4F_3$, and also the transition coefficients, which are given by some other ${}_4F_3$ series. However, the situation simplifies radically for symplectic and orthogonal characters.

Theorem D. Consider the three distinguished cases (1.10) of Jacobi parameters that correspond to characters of the classical groups of type $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$. Then the matrix entries $(g_{K-j}F_N:g_{\varkappa_i+K-i})$ on the right-hand side of (1.20) admit an explicit elementary expression.

A detailed formulation of this result is presented in Theorem 8.1. It turns out that in the three distinguished cases the two families of ${}_4F_3$ series (for the functions $g_k(t)$ and for the coefficients $(e_m:g_k)$) are miraculously summed explicitly. This is shown in Theorems 6.1 and 7.1, respectively. In the particular case $K=1$ we obtain symplectic and orthogonal versions of the discrete $\mathrm{B}$-spline (see § 8.2).

1.10. Notes

1. The present paper is a continuation of the work [5] by Borodin and this author. In [5] similar results were obtained in type $\mathcal{A}$, that is, for characters of the unitary groups $U(N)$. However, the case of symplectic and orthogonal characters, and especially that of multivariate Jacobi polynomials, is more difficult.

2. Part of the results of [5] was reproved and extended by Petrov [39]. His method is very different; it allows one to compute the correlation kernel of a two-dimensional determinantal point process generated by the stochastic matrices $\Lambda^N_{N-1}$ related to characters of unitary groups. The explicit expression for the matrix elements of $\Lambda^N_K$ from [5] is then obtained as a direct corollary. Moreover, Petrov also obtained a $q$-version of these results. On the other hand, Petrov’s approach does not produce a Cauchy-type identity.

3. In a scaling limit, the stochastic matrices $\Lambda^N_K$ of all four types $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ and $\mathcal{D}$ degenerate into certain continuous Markov kernels, which are related to corner processes in random matrix theory. These Markov kernels are given by determinantal expressions involving continuous spline functions; see this author [33], Faraut [16], and Zubov [58]. The results of these works do not rely on [5]. On the other hand, they can be derived from the earlier results of Defosseux [12] about the correlation functions of corner processes.

4. The restriction problem for characters of classical groups and multivariate Jacobi polynomials was also considered by Gorin and Panova [20], but from a different point of view. Namely, those authors were interested in finding explicit formulae for the resulting functions, and they did not deal with the spectral expansion. Our approach describes the dual picture, related to that of [20] by a Fourier-type transform. This reveals such aspects of the problem as the Cauchy-type identity from Theorem A or the connection with discrete splines, which do not arise in the context of [20]. It seems to me that both approaches complement each other well. In the case of unitary group characters it is not too difficult (at least, for $K=1$) to derive the formulae of [5] from those of [20], but in the case of characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$ this does not seem to be an easy task.

5. The present work, as well as [5], originated from a problem in asymptotic representation theory. In the case of characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$ our formulae for the matrix elements $\Lambda^N_K(\nu, \varkappa)$ are well suited for making the large $N$ limit transition in the spirit of [5], § 8, which leads to one more approach to the classification of extremal characters of the infinite-dimensional symplectic and orthogonal groups.$^3$ In connection with this topic, see also § 9.1 below. Earlier works on this subject are Boyer [8], Pickrell [40], Okounkov and Olshanski [31] and Gorin and Panova [20].

$^3$ I decided to defer this material to a separate publication so as not to increase the size of the present paper.

6. An aspect of the present work which seems to be of interest is its connection with classical analysis. Such connections have already arisen in various problems concerning representations of infinite-dimensional groups. Here are some examples. The link with the $\mathrm{B}$-spline and its discrete version adds one more item to this list of connections with classical analysis. Note that the large $N$ limit transition for the $\mathrm{B}$-spline, studied by Curry and Schoenberg [11], is connected directly with the asymptotic approach to the classification of spherical functions for $U(\infty)\ltimes H(\infty)$. Likewise, a similar asymptotic problem for the discrete $\mathrm{B}$-spline is connected with the classification of characters of $U(\infty)$. Next, not so long ago, spline theorists also came up with a $q$-deformation of the $\mathrm{B}$-spline: the first paper on this topic is Simeonov and Goldman [48]; of the subsequent works on this topic, note Budakçi and Oruç [10]. As pointed out in this author’s paper [36], this new version also arises in the representation-theoretic context related to the works by Gorin [18], Petrov [39] and Gorin and this author [19].

1.11. The organization of the paper

The short sections (§§ 2 and 3) contain some preparatory material. Then we proceed to the proofs of Theorems A and B (§ 4) and Theorem C (§ 5). Sections 6 and 7 are devoted to simplifications of the hypergeometric series in the three distinguished cases (1.10). In § 8 we deduce Theorem D from these results. As a corollary, we obtain symplectic and orthogonal versions of the discrete $\mathrm{B}$-spline. The last section (§ 9) contains a few remarks, in particular, an example of a biorthogonal system of rational functions.
§ 2. Multiparameter and dual Schur functions

Here we state a few results from [37], § 4, which are used in what follows. (As pointed out in [37], these results can also be extracted from the earlier paper by Molev [27].)

Definition 2.1 (multiparameter Schur polynomials). Let $(c_0,c_1,c_2,\dots)$ be an infinite sequence of parameters and consider the monic polynomials
$$
\begin{equation*}
(x\mid c_0,c_1,\dots)^m:=(x-c_0)\dotsb(x-c_{m-1}), \qquad m=0,1,2,\dots\,.
\end{equation*}
\notag
$$
The $N$-variate multiparameter Schur polynomials are defined by
$$
\begin{equation*}
S_{\mu, N}(x_1,\dots,x_N\mid c_0,c_1,\dots) :=\frac{\det[(x_i\mid c_0,c_1,\dots)^{\mu_r+N-r}]_{i,r=1}^N}{V(x_1,\dots,x_N)}, \qquad \mu\in\operatorname{Sign}^+_N.
\end{equation*}
\notag
$$
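A hypothetical sympy sketch of this definition (illustrative; the helper names are ours), which also exhibits the reduction to ordinary Schur polynomials noted below:

```python
import sympy as sp

def mono(x, cs, m):
    """(x | c_0, c_1, ...)^m = (x - c_0) ... (x - c_{m-1})."""
    return sp.prod(x - cs[j] for j in range(m))

def S_multi(mu, xs, cs):
    """Multiparameter Schur polynomial S_{mu,N}(x | c) of Definition 2.1."""
    N = len(xs)
    num = sp.Matrix(N, N, lambda i, r: mono(xs[i], cs, mu[r] + N - 1 - r))
    V = sp.prod(xs[i] - xs[j] for i in range(N) for j in range(i + 1, N))
    return sp.cancel(num.det() / V)

x1, x2 = sp.symbols('x1 x2')
c = sp.symbols('c0:4')
print(sp.expand(S_multi((2, 0), (x1, x2), (0, 0, 0, 0))))   # x1**2 + x1*x2 + x2**2
print(sp.expand(S_multi((1, 0), (x1, x2), c)))              # x1 + x2 - c0 - c1
```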
This is a particular case of generalized Schur polynomials (see § 1.5). If $c_0=c_1=\dots=0$, they turn into the conventional Schur polynomials.

Definition 2.2 (dual Schur functions). We apply the definition of Schur-type functions (§ 1.5) by taking
$$
\begin{equation*}
\phi_m(y)=\frac1{(y\mid c_1,c_2,\dots)^m}
\end{equation*}
\notag
$$
(note the shift by $1$ in the indexing of the parameters). The corresponding $N$-variate functions are denoted by $\sigma_{\mu, N}(y_1,\dots,y_N\mid c_1,c_2,\dots)$:
$$
\begin{equation}
\sigma_{\mu, N}(y_1,\dots,y_N\mid c_1,c_2,\dots) :=\frac{\det\biggl[\dfrac1{(y_j\mid c_1,c_2,\dots)^{\mu_r+N-r}}\biggr]_{j,r=1}^N} {\det\biggl[\dfrac1{(y_j\mid c_1,c_2,\dots)^{N-r}}\biggr]_{j,r=1}^N}.
\end{equation}
\tag{2.1}
$$
Following Molev [27] we call them ($N$-variate) dual Schur functions. If $c_1=c_2=\dots=0$, then they turn into conventional Schur polynomials in the variables $y_1^{-1},\dots,y_N^{-1}$.

Lemma 2.3 (see [37], Lemma 4.5). The dual Schur functions (2.1) possess the following stability property:
$$
\begin{equation}
\begin{aligned} \, \notag &\sigma_{\mu, N}(y_1,\dots,y_N\mid c_1,c_2,\dots)\big|_{y_N=\infty} \\ &\qquad=\begin{cases} \sigma_{\mu, N-1}(y_1,\dots,y_{N-1}\mid c_2,c_3,\dots), & \ell(\mu)\leqslant N-1, \\ 0, & \ell(\mu)=N. \end{cases} \end{aligned}
\end{equation}
\tag{2.2}
$$
Lemma 2.4 (see [37], Lemma 4.6). One has
$$
\begin{equation}
\det\biggl[\frac1{(y_j\mid c_1,c_2,\dots)^{N-r}}\biggr]_{j,r=1}^N = \frac{(-1)^{N(N-1)/2}\,V(y_1,\dots,y_N)}{\prod_{j=1}^N(y_j-c_1)\dotsb(y_j-c_{N-1})}.
\end{equation}
\tag{2.3}
$$
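Identity (2.3) is easy to confirm symbolically for a fixed small $N$; here is a hypothetical sympy check for $N=3$:

```python
import sympy as sp

N = 3
ys = sp.symbols('y1:4')
cs = sp.symbols('c1:3')                 # only c_1, ..., c_{N-1} enter (2.3) for N = 3

def mono(y, m):
    """(y | c_1, c_2, ...)^m = (y - c_1) ... (y - c_m)."""
    return sp.prod(y - cs[j] for j in range(m))

lhs = sp.Matrix(N, N, lambda j, r: 1 / mono(ys[j], N - 1 - r)).det()
V = sp.prod(ys[i] - ys[j] for i in range(N) for j in range(i + 1, N))
rhs = (-1) ** (N * (N - 1) // 2) * V / sp.prod(mono(y, N - 1) for y in ys)
print(sp.simplify(lhs - rhs))           # 0
```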
Lemma 2.5 (see [37], Lemma 4.7). The dual Schur functions in $N$ variables form a topological basis in the subalgebra of $\mathbb{C}[[y_1^{-1},\dots,y_N^{-1}]]$ formed by the symmetric power series.

Proposition 2.6 (Cauchy-type identity; see [37], Proposition 4.8). For $K\leqslant N$ one has
$$
\begin{equation}
\begin{aligned} \, \notag &\sum_{\mu\in\operatorname{Sign}^+_K}S_{\mu, N}(x_1,\dots,x_N\mid c_0,c_1,\dots) \sigma_{\mu, K}(y_1,\dots,y_K\mid c_{N-K+1},c_{N-K+2},\dots) \\ &\qquad =\prod_{j=1}^K\frac{(y_j-c_0)\dotsb(y_j-c_{N-1})}{(y_j-x_1)\dotsb(y_j-x_N)}, \end{aligned}
\end{equation}
\tag{2.4}
$$
where both sides are regarded as elements of the algebra of formal series in $y_1^{-1},\dots, y_K^{-1}$.
§ 3. Coherency property for special multiparameter Schur polynomials

In the next proposition we use the normalized Jacobi polynomials $\widetilde P^{(a,b)}_{\nu,N}$ defined in (1.11), the multiparameter Schur polynomials (Definition 2.1) corresponding to the special sequence of parameters
$$
\begin{equation*}
(\varepsilon^2,(\varepsilon+1)^2, (\varepsilon+2)^2,\dots)
\end{equation*}
\notag
$$
and the conventional Schur polynomials $S_{\mu,N}$. Recall that $\varepsilon=(a+b+1)/2$. Given $\nu\in\operatorname{Sign}^+_N$, we set
$$
\begin{equation}
n_i:=\nu_i+N-i, \qquad 1\leqslant i\leqslant N.
\end{equation}
\tag{3.1}
$$
Proposition 3.1 (binomial formula for Jacobi polynomials). Let $\nu\in\operatorname{Sign}^+_N$. Then
$$
\begin{equation}
\begin{aligned} \, \notag &\widetilde P^{(a,b)}_{\nu, N}(1+\alpha_1,\dots,1+\alpha_N) \\ &\qquad =\sum_{\mu\in\operatorname{Sign}^+_N}\frac{S_{\mu, N}((n_1+\varepsilon)^2,\dots,(n_N+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2, \dots)}{C(N,\mu; a)}S_{\mu, N}(\alpha_1,\dots,\alpha_N), \end{aligned}
\end{equation}
\tag{3.2}
$$
where
$$
\begin{equation}
C(N,\mu;a)= 2^{|\mu|}\, \prod_{i=1}^N \frac{\Gamma(\mu_i+N-i+1)\Gamma(\mu_i+N-i+a+1)} {\Gamma(N-i+1)\Gamma(N-i+a+1)}.
\end{equation}
\tag{3.3}
$$
See Okounkov and Olshanski [30], Theorem 1.2, and [32], Proposition 7.4, for the proof.

Let $\nu\in\operatorname{Sign}^+_N$ and $\varkappa\in\operatorname{Sign}^+_K$, where $N>K\geqslant1$. Recall that the quantities $\Lambda^N_K(\nu,\varkappa)$ are the coefficients in the expansion
$$
\begin{equation}
\widetilde P^{(a,b)}_{\nu, N}(x_1,\dots,x_K,1,\dots,1) =\sum_{\varkappa\in\operatorname{Sign}^+_K}\Lambda^N_K(\nu,\varkappa) \widetilde P^{(a,b)}_{\varkappa, K}(x_1,\dots,x_K).
\end{equation}
\tag{3.4}
$$
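In the simplest case $N=1$ the binomial formula (3.2) can be checked directly: then $S_{\mu,1}(x\mid c_0,c_1,\dots)=(x\mid c_0,c_1,\dots)^{\mu}$ and $S_{\mu,1}(\alpha)=\alpha^{\mu}$. A hypothetical sympy verification:

```python
import sympy as sp
from sympy import jacobi, symbols

alpha, a, b = symbols('alpha a b')
eps = (a + b + 1) / 2

nu = 3                                  # a signature of length 1, so n = nu
lhs = jacobi(nu, a, b, 1 + alpha) / jacobi(nu, a, b, 1)

rhs = 0
for mu in range(nu + 1):                # the factor with j = nu kills all mu > nu
    S = sp.prod((nu + eps) ** 2 - (eps + j) ** 2 for j in range(mu))
    C = 2 ** mu * sp.factorial(mu) * sp.rf(a + 1, mu)    # C(1, mu; a) from (3.3)
    rhs += S / C * alpha ** mu

print(sp.simplify(lhs - rhs))           # 0
```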
Next, let $\mu\in\operatorname{Sign}^+_K$. We can also regard $\mu$ as a signature of length $N$ by appending $N-K$ zeros (this occurs on the left-hand side of relation (3.5) below). By analogy with (3.1) we also set
$$
\begin{equation*}
k_i:=\varkappa_i+K-i, \qquad 1\leqslant i\leqslant K.
\end{equation*}
\notag
$$
Theorem 3.2 (coherency property). With the above notation, the following relation holds:
$$
\begin{equation}
\begin{aligned} \, \notag &\frac{S_{\mu, N}((n_1+\varepsilon)^2,\dots,(n_N+\varepsilon)^2\mid\varepsilon^2, (\varepsilon+1)^2, \dots)}{C(N,\mu;a)} \\ &\qquad =\sum_{\varkappa\in\operatorname{Sign}^+_K} \Lambda^N_K(\nu,\varkappa)\frac{S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid\varepsilon^2, (\varepsilon+1)^2,\dots)}{C(K,\mu;a)}. \end{aligned}
\end{equation}
\tag{3.5}
$$
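Here is a hypothetical sympy check of (3.5) in the smallest nontrivial case $N=2$, $K=1$, $\mu=(1)$, with $\Lambda^2_1$ computed from the expansion (3.4); for this $\mu$ one has $S_{(1),2}(x_1,x_2\mid c)=x_1+x_2-c_0-c_1$, $S_{(1),1}(x\mid c)=x-c_0$, $C(2,(1);a)=4(a+2)$ and $C(1,(1);a)=2(a+1)$:

```python
import sympy as sp
from sympy import jacobi, symbols

x, a, b = symbols('x a b')
eps = (a + b + 1) / 2
nu = (1, 0)
n = (nu[0] + 1, nu[1])                  # n_i = nu_i + N - i with N = 2

# Lambda^2_1(nu, k) from the expansion (3.4)
num = sp.Matrix(2, 2, lambda i, j: jacobi(nu[i] + 1 - i, a, b, (x, 1)[j]))
P = sp.cancel(num.det() / (x - 1))
P = sp.cancel(P / P.subs(x, 1))
Lam, rem = {}, sp.expand(P)
for k in range(nu[0], -1, -1):
    Pk = sp.expand(jacobi(k, a, b, x) / jacobi(k, a, b, 1))
    c = sp.cancel(rem.coeff(x, k) / Pk.coeff(x, k))
    Lam[k] = c
    rem = sp.expand(rem - c * Pk)

# the two sides of (3.5) for mu = (1)
lhs = ((n[0] + eps)**2 + (n[1] + eps)**2 - eps**2 - (eps + 1)**2) / (4 * (a + 2))
rhs = sum(Lam[k] * ((k + eps)**2 - eps**2) / (2 * (a + 1)) for k in Lam)
print(sp.simplify(lhs - rhs))           # 0
```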
Comments. 1. The series on the right-hand side terminates, because for any $\nu$ there are only finitely many $\varkappa$ for which $\Lambda^N_K(\nu,\varkappa)\ne0$. Indeed, a necessary condition for $\Lambda^N_K(\nu,\varkappa)\ne0$ is $\varkappa_1\leqslant\nu_1$, as seen from the branching rule for multivariate Jacobi polynomials (see [32], Proposition 7.5).

2. A similar relation holds in the case of type $\mathcal{A}$ (see [5], (5.6), and [29], (10.30)).

3. Let $\nu\in\operatorname{Sign}^+_N$ be fixed, and let $\mu$ range over $\operatorname{Sign}^+_K$. Then $\Lambda^N_K(\nu,\,\cdot\,)$ is the unique finitely supported solution of the system of linear equations produced by the coherency relations (3.5). This follows from the fact that the multiparameter Schur polynomials on the right-hand side of (3.5) form a basis of the algebra of symmetric $K$-variate polynomials, and this algebra separates the $K$-point configurations of the form
$$
\begin{equation*}
(x_1,\dots,x_K)=((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2)
\end{equation*}
\notag
$$
corresponding to the signatures $\varkappa\in\operatorname{Sign}^+_K$.

Proof of Theorem 3.2. We apply the binomial formula (3.2) and the definition (3.4). We make the change $N\to K$ and $\nu\to \varkappa$; then equation (3.2) turns into
$$
\begin{equation*}
\begin{aligned} \, &\widetilde P^{(a,b)}_{\varkappa, K}(1+\alpha_1,\dots,1+\alpha_K) \\ &\qquad =\sum_{\mu\in\operatorname{Sign}^+_K}\frac{S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2,(\varepsilon+1)^2,\dots)} {C(K,\mu;a)}S_{\mu, K}(\alpha_1,\dots,\alpha_K). \end{aligned}
\end{equation*}
\notag
$$
Substituting this into (3.4) and interchanging the order of summation gives
$$
\begin{equation}
\begin{aligned} \, \notag &\widetilde P^{(a,b)}_{\nu, N}(1+\alpha_1,\dots,1+\alpha_K,1,\dots,1) \\ \notag &\quad =\sum_{\mu\in\operatorname{Sign}^+_K}\biggl(\sum_{\varkappa\in\operatorname{Sign}^+_K} \Lambda^N_K(\nu,\varkappa)\frac{S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)}{C(K,\mu;a)}\biggr) \\ &\quad\qquad \times S_{\mu, K}(\alpha_1,\dots,\alpha_K). \end{aligned}
\end{equation}
\tag{3.6}
$$
On the other hand, specializing $\alpha_{K+1}=\dots=\alpha_N=0$ in the binomial formula (3.2) gives
$$
\begin{equation}
\begin{aligned} \, \notag &\widetilde P^{(a,b)}_{\nu, N}(1+\alpha_1,\dots,1+\alpha_K, 1,\dots,1) \\ &\qquad=\sum_{\mu\in\operatorname{Sign}^+_K}\frac{S_{\mu, N}((n_1+\varepsilon)^2,\dots,(n_N+\varepsilon)^2\mid \varepsilon^2,(\varepsilon+1)^2,\dots)} {C(N,\mu;a)}S_{\mu, K}(\alpha_1,\dots,\alpha_K). \end{aligned}
\end{equation}
\tag{3.7}
$$
Comparing (3.6) with (3.7) and equating the coefficients of the Schur polynomials $S_{\mu, K}(\alpha_1,\dots,\alpha_K)$ we arrive at the required formula (3.5).
Theorem 3.2 is proved.
§ 4. Cauchy-type identity and determinantal formula for the matrix entries $\Lambda^N_K(\nu,\varkappa)$: proof of Theorems A and B

In this section we fix Jacobi parameters $(a,b)$ such that $a>-1$, $b>-1$ and $a+b\geqslant-1$, so that the parameter $\varepsilon:=\frac12(a+b+1)$ is nonnegative. We deal with the functions $g_k(t)$, the Schur-type functions $G_\varkappa(t_1,\dots,t_K)$, the characteristic function $F_N(t;\nu;\varepsilon)$ of a signature $\nu\in\operatorname{Sign}^+_N$, and the space $\mathcal{F}(\varepsilon,L)$ related to the grid $\mathbb{A}(\varepsilon,L)$ consisting of the points $A_m:=L+\varepsilon+m-1$, where $m=1,2,\dots$ . All these objects were defined in §§ 1.7 and 1.8. We assume that $N>K$ and $L=N-K+1$, so that $L\geqslant2$.

Lemma 4.1. For each $\nu\in\operatorname{Sign}^+_N$ the function $\prod_{j=1}^K F_N(t_j;\nu;\varepsilon)$ can be written in a unique way as a finite linear combination of the functions $G_\varkappa(t_1,\dots,t_K)$, where $\varkappa$ ranges over $\operatorname{Sign}^+_K$.

Proof. Step 1. From the definition (1.16) of the functions $g_k(t)$ it can be seen that they are even, rational, and regular at $t=\infty$. The function $g_0(t)$ is the constant $1$. If $k\geqslant1$, then the singularities of $g_k(t)$ are simple poles contained in the set $\{\pm A_1,\dots, \pm A_k\}$; moreover, the residues at $\pm A_k$ are nonzero. It follows that the functions $g_k(t)$ form a basis of $\mathcal{F}(\varepsilon,L)$.
Step 2. The claim of the lemma is obviously equivalent to the following: there exists a unique finite expansion of the form
$$
\begin{equation}
\det[g_{K-i}(t_j)]_{i,j=1}^K\prod_{j=1}^KF_N(t_j;\nu;\varepsilon) =\sum_{k_1>\dots>k_K\geqslant0}(\cdots)\det[g_{k_i}(t_j)]_{i,j=1}^K,
\end{equation}
\tag{4.1}
$$
where the dots denote some coefficients.
We write the left-hand side as
$$
\begin{equation*}
\det[g_{K-i}(t_j)F_N(t_j;\nu;\varepsilon)]_{i,j=1}^K.
\end{equation*}
\notag
$$
By virtue of Step 1, the existence and uniqueness of the expansion (4.1) are reduced to the following claim, concerning functions of a single variable $t$: for each $m=0,\dots, K-1$, the function $g_m(t) F_N(t;\nu;\varepsilon)$ lies in the space $\mathcal{F}(\varepsilon,L)$.
Step 3. Let us prove the latter claim. It is clear that $ g_m(t) F_N(t;\nu;\varepsilon)$ is even, rational and regular at infinity. It remains to examine its singularities. From the definition (1.14) of $F_N(t;\nu;\varepsilon)$ it follows that its singularities are simple poles contained in the set
$$
\begin{equation*}
\{\pm (N+\varepsilon), \pm(N+\varepsilon+1), \pm(N+\varepsilon+2), \dots\},
\end{equation*}
\notag
$$
while the singularities of $g_m(t)$ with $m\ne0$ are simple poles contained in the set
$$
\begin{equation*}
\{\pm (L+\varepsilon), \pm(L+\varepsilon+1),\dots, \pm(L+\varepsilon+m-1)\}.
\end{equation*}
\notag
$$
Since $m\leqslant K-1$ and $L=N-K+1$, these sets are disjoint. Furthermore, they are contained in $(-\mathbb{A}(\varepsilon,L))\cup\mathbb{A}(\varepsilon,L)$. Thus, the product $ g_m(t) F_N(t;\nu;\varepsilon)$ has only simple poles, all of which are contained in $(-\mathbb{A}(\varepsilon,L))\cup\mathbb{A}(\varepsilon,L)$. This proves that $g_m(t) F_N(t;\nu;\varepsilon)$ lies in the space $\mathcal{F}(\varepsilon,L)$.
Lemma 4.1 is proved.

For $\varkappa\in\operatorname{Sign}^+_K$ we set
$$
\begin{equation}
\begin{aligned} \, \notag d_K(\varkappa;\varepsilon) &:=\prod_{1\leqslant i<j\leqslant K}\frac{(k_i+\varepsilon)^2-(k_j+\varepsilon)^2}{(k^0_i+\varepsilon)^2-(k^0_j+\varepsilon)^2} \\ &=\prod_{1\leqslant i<j\leqslant K}\frac{(\varkappa_i+K-i+\varepsilon)^2 -(\varkappa_j+K-j+\varepsilon)^2}{(K-i+\varepsilon)^2-(K-j+\varepsilon)^2}. \end{aligned}
\end{equation}
\tag{4.2}
$$
We will show that the precise form of the expansion in Lemma 4.1 is as follows:
$$
\begin{equation}
\prod_{j=1}^KF_N(t_j;\nu;\varepsilon)=\sum_{\varkappa\in\operatorname{Sign}^+_K}\frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}\,G_\varkappa(t_1,\dots,t_K).
\end{equation}
\tag{4.3}
$$
(This is Theorem A in § 1.7.) Before proceeding to the proof we need some preparations. In the next lemma we deal with a particular case of the multiparameter Schur polynomials (Definition 2.1) and dual Schur functions (Definition 2.2). We assume that $\mu,\varkappa\in\operatorname{Sign}^+_K$ and write $\mu\subseteq\varkappa$ if $\mu_i\leqslant\varkappa_i$ for all $i=1,\dots,K$. Lemma 4.2. For $\varkappa\in\operatorname{Sign}^+_K$ one has
$$
\begin{equation}
\frac{G_\varkappa(t_1,\dots,t_K; a,\varepsilon,L)}{d_K(\varkappa;\varepsilon)}=\sum_{\mu\colon \mu\subseteq\varkappa} A_{\mu,\varkappa}\, \sigma_{\mu, K}(t_1^2,\dots,t_K^2\mid (L+\varepsilon)^2, (L+\varepsilon+1)^2,\dots),
\end{equation}
\tag{4.4}
$$
where the coefficients $A_{\mu,\varkappa}$ are given by
$$
\begin{equation}
\begin{aligned} \, \notag A_{\mu,\varkappa}: &=\prod_{i=1}^K\frac{(L)_{m_i}(L+a)_{m_i}(a+1)_{K-i}(K-i)!}{(a+1)_{m_i}m_i!\,(L)_{K-i}(L+a)_{K-i}} \\ &\qquad \times S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots) \end{aligned}
\end{equation}
\tag{4.5}
$$
with $m_i:=\mu_i+K-i$. Due to the constraint $\mu\subseteq\varkappa$, the expansion (4.4) is finite.

Here is an immediate corollary of the lemma.

Corollary 4.3. The rational function on the left-hand side of (4.4), viewed as a function of the variables $t_1^{-1},\dots,t_K^{-1}$, is regular about the point $(0,\dots,0)$ and its value at this point equals $1$.

Indeed, this follows from the fact that the coefficient $A_{\varnothing,\varkappa}$ corresponding to the signature $\varnothing=(0,\dots,0)$ equals $1$.

Proof of Lemma 4.2. Step 1. We rewrite the definition (1.16) of the function $g_k(t)$ in the form
$$
\begin{equation*}
g_k(t)=\sum_{m=0}^\infty X(k,m)Y(t,m),
\end{equation*}
\notag
$$
where
$$
\begin{equation}
X(k,m):=\frac{(-k)_m(k+2\varepsilon)_m(L)_m(L+a)_m}{(a+1)_m m!}
\end{equation}
\tag{4.6}
$$
and
$$
\begin{equation}
Y(t,m):=\frac1{(-t+\varepsilon+L)_m(t+\varepsilon+L)_m}.
\end{equation}
\tag{4.7}
$$
The idea is to separate the terms depending on $t$ from those depending on $k$. In fact, the formally infinite series terminates because of the factor $(-k)_m$.
From this representation it follows that for $\varkappa\in\operatorname{Sign}^+_K$,
$$
\begin{equation}
\det[g_{k_i}(t_j)]_{i,j=1}^K=\sum_{m_1>\dots>m_K\geqslant0} \det[X(k_i,m_r)]_{i,r=1}^K\det[Y(t_j,m_r)]_{j,r=1}^K.
\end{equation}
\tag{4.8}
$$
Let $\mu\in\operatorname{Sign}^+_K$ be the signature corresponding to the tuple $(m_1,\dots,m_K)$, which means that $\mu_i=m_i-(K-i)$ for $i=1,\dots,K$. Observe that $\det[X(k_i,m_r)]_{i,r=1}^K=0$ unless $m_i\leqslant k_i$ for all $i=1,\dots,K$. Indeed, suppose the opposite; then there exists an index $s$ such that $m_s>k_s$. It follows that $m_i>k_r$ whenever $i\leqslant s\leqslant r$. Due to the factor $(-k)_m$ in (4.6), for any such pair $(i,r)$ the corresponding entry $X(k_r,m_i)$ vanishes. But this implies in turn that the determinant vanishes.
We have proved that the summation in (4.8) in fact ranges over the signatures $\mu\subseteq\varkappa$.
Step 2. In particular, for $\varkappa=\varnothing$ the sum (4.8) reduces to a single summand:
$$
\begin{equation}
\det[g_{K-i}(t_j)]_{i,j=1}^K=\det[X(K-i,K-r)]_{i,r=1}^K\det[Y(t_j,K-r)]_{j,r=1}^K.
\end{equation}
\tag{4.9}
$$
From (4.8), (4.9) and the definition (1.17) of the function $G_\varkappa(t_1,\dots,t_K)$ we obtain
$$
\begin{equation}
G_\varkappa(t_1,\dots,t_K)=\sum_{\mu\colon \mu\subseteq\varkappa}\frac{\det[X(k_i,m_r)]_{i,r=1}^K}{\det[X(K-i,K-r)]_{i,r=1}^K}\frac{\det[Y(t_j,m_r)]_{j,r=1}^K}{\det[Y(t_j,K-r)]_{j,r=1}^K}.
\end{equation}
\tag{4.10}
$$
We divide both sides of (4.10) by $d_K(\varkappa;\varepsilon)$. Then the left-hand side is the same as in (4.4). We are going to show that
$$
\begin{equation}
\frac1{d_K(\varkappa;\varepsilon)}\,\frac{\det[X(k_i,m_r)]_{i,r=1}^K}{\det[X(K-i,K-r)]_{i,r=1}^K} =(-1)^{|\mu|}A_{\mu,\varkappa}
\end{equation}
\tag{4.11}
$$
and
$$
\begin{equation}
\frac{\det[Y(t_j,m_r)]_{j,r=1}^K}{\det[Y(t_j,K-r)]_{j,r=1}^K}=(-1)^{|\mu|}\sigma_{\mu, K}(t_1^2,\dots,t_K^2\mid (L+\varepsilon)^2, (L+\varepsilon+1)^2,\dots).
\end{equation}
\tag{4.12}
$$
This will give us the required equality (4.4).
Step 3. Let us prove (4.11). From the definition of $X(k,m)$ (see (4.6)) we obtain
$$
\begin{equation}
\begin{aligned} \, \notag \frac{\det[X(k_i,m_r)]_{i,r=1}^K}{\det[X(K-i,K-r)]_{i,r=1}^K} &=(\text{the product in (4.5)}) \\ &\qquad\times\frac{\det[(-k_i)_{m_r}(k_i+2\varepsilon)_{m_r}]} {\det[(-(K-i))_{K-r}(K-i+2\varepsilon)_{K-r}]}. \end{aligned}
\end{equation}
\tag{4.13}
$$
Observe that
$$
\begin{equation}
\begin{aligned} \, \notag (-k)_m(k+2\varepsilon)_m &=\prod_{\ell=0}^{m-1}(-k+\ell)(k+2\varepsilon+\ell) =(-1)^m\prod_{\ell=0}^{m-1}((k+\varepsilon)^2-(\varepsilon+\ell)^2) \\ &=(-1)^m((k+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)^m. \end{aligned}
\end{equation}
\tag{4.14}
$$
It follows that
$$
\begin{equation}
\frac{\det[(-k_i)_{m_r}(k_i+2\varepsilon)_{m_r}]}{\det[(-(K-i))_{K-r}(K-i+2\varepsilon)_{K-r}]} =(-1)^{|\mu|}\frac{\det[((k_i+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)^{m_r}]}{\det[((K-i+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)^{K-r}]}.
\end{equation}
\tag{4.15}
$$
Next, we observe that
$$
\begin{equation*}
\det[((K-i+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)^{K-r}]=\prod_{1\leqslant i<j\leqslant K}((K-i+\varepsilon)^2-(K-j+\varepsilon)^2),
\end{equation*}
\notag
$$
and use the definition (4.2) of $d_K(\varkappa;\varepsilon)$. This allows us to write the left-hand side of (4.11) as
$$
\begin{equation*}
(\text{the product in (4.5)}) \times (-1)^{|\mu|}\frac{\det[((k_i+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)^{m_r}]}{\prod_{1\leqslant i<j\leqslant K}((k_i+\varepsilon)^2-(k_j+\varepsilon)^2)}.
\end{equation*}
\notag
$$
From the definition of the multiparameter Schur polynomials (see Definition 2.1) we conclude that the resulting expression is equal to $(-1)^{|\mu|} A_{\mu,\varkappa}$, as required.
Step 4. Let us prove (4.12). Similarly to (4.14) we have
$$
\begin{equation}
(-t+\varepsilon+L)_m(t+\varepsilon+L)_m=(-1)^m (t^2\mid (L+\varepsilon)^2,(L+\varepsilon+1)^2,\dots)^m.
\end{equation}
\tag{4.16}
$$
From this and the definition of $Y(t,m)$ (see (4.7)) we obtain
$$
\begin{equation}
\frac{\det[Y(t_j,m_r)]_{j,r=1}^K}{\det[Y(t_j,K-r)]_{j,r=1}^K} =(-1)^{|\mu|}\frac{\det\biggl[\dfrac1{(t_j^2\mid (\varepsilon+L)^2,(\varepsilon+L+1)^2,\dots)^{m_r}} \biggr]}{\det\biggl[\dfrac1{(t_j^2\mid (\varepsilon+L)^2,(\varepsilon+L+1)^2,\dots)^{K-r}}\biggr]}.
\end{equation}
\tag{4.17}
$$
By the definition of dual Schur functions (Definition 2.2) this equals the right-hand side of (4.12).
Lemma 4.2 is proved.

Proof of Theorem A. Step 1. We begin with the coherency relation (3.5), which we write in the form
$$
\begin{equation*}
\begin{aligned} \, &S_{\mu, N}((n_1+\varepsilon)^2,\dots,(n_N+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2, \dots) \\ &\quad =\sum_{\varkappa\in\operatorname{Sign}^+_K} \Lambda^N_K(\nu,\varkappa)S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)\frac{C(N,\mu;a)}{C(K,\mu;a)}. \end{aligned}
\end{equation*}
\notag
$$
Here $\mu\in\operatorname{Sign}^+_K$ is arbitrary; recall also that the sum is finite, because for each fixed $\nu$ the quantity $\Lambda^N_K(\nu,\varkappa)$ is nonzero only for finitely many $\varkappa$.
We multiply both sides by
$$
\begin{equation*}
\sigma_{\mu, K}(t^2_1,\dots,t^2_K\mid (\varepsilon+N-K+1)^2,\,(\varepsilon+N-K+2)^2,\,\dots)
\end{equation*}
\notag
$$
and sum over all $\mu\in\operatorname{Sign}^+_K$, which makes sense in the algebra of formal power series in $t^{-2}_1,\dots,t^{-2}_K$ due to Lemma 2.5. The resulting equality has the form
$$
\begin{equation}
\begin{aligned} \, \notag &\sum_{\mu\in\operatorname{Sign}^+_K}S_{\mu, N}((n_1+\varepsilon)^2,\dots,(n_N+\varepsilon)^2\mid \varepsilon^2,(\varepsilon+1)^2,\dots) \\ \notag &\qquad\qquad \times \sigma_{\mu, K}(t^2_1,\dots,t^2_K\mid (\varepsilon+N-K+1)^2,\,(\varepsilon+N-K+2)^2,\,\dots) \\ \notag &\qquad=\sum_{\mu\in\operatorname{Sign}^+_K}\sum_{\varkappa\in\operatorname{Sign}^+_K} \Lambda^N_K(\nu,\varkappa)S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2,(\varepsilon+1)^2,\dots) \\ &\qquad\qquad \times \frac{C(N,\mu;a)}{C(K,\mu;a)} \sigma_{\mu, K}(t^2_1,\dots,t^2_K\mid (\varepsilon+N-K+1)^2,\,(\varepsilon+N-K+2)^2,\,\dots). \end{aligned}
\end{equation}
\tag{4.18}
$$
We will show that this equation can be transformed into (4.3).
Step 2. We examine the left-hand side of (4.18). We apply to it the Cauchy-type identity (see (2.4))
$$
\begin{equation*}
\begin{aligned} \, &\sum_{\mu\colon\ell(\mu)\leqslant K}S_{\mu, N}(x_1,\dots,x_N\mid c_0,c_1,\dots) \sigma_{\mu, K}(y_1,\dots,y_K\mid c_{N-K+1},c_{N-K+2},\dots) \\ &\qquad =\prod_{j=1}^K\frac{(y_j-c_0)\cdots(y_j-c_{N-1})}{(y_j-x_1)\cdots(y_j-x_N)}, \end{aligned}
\end{equation*}
\notag
$$
where we specialize
$$
\begin{equation*}
x_1:=(n_1+\varepsilon)^2, \quad \dots, \quad x_N:=(n_N+\varepsilon)^2, \qquad y_1:=t_1^2, \quad \dots, \quad y_K:=t_K^2
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
c_i:=(\varepsilon+i)^2, \qquad i=0,1,\dots\,.
\end{equation*}
\notag
$$
Then the result gives us the left-hand side of (4.3).
Step 3. We proceed now to the right-hand side of (4.18). Here we can change the order of summation, because $\varkappa$ actually ranges over a finite set depending only on $\nu$. Then we obtain the double sum
$$
\begin{equation*}
\sum_{\varkappa\in\operatorname{Sign}^+_K}\Lambda^N_K(\nu,\varkappa) \sum_{\mu\in\operatorname{Sign}^+_K}(\cdots),
\end{equation*}
\notag
$$
where the inner sum over $\mu$ has the form
$$
\begin{equation}
\begin{aligned} \, \notag &\sum_{\mu\in\operatorname{Sign}^+_K} S_{\mu, K}((k_1+\varepsilon)^2,\dots,(k_K+\varepsilon)^2\mid \varepsilon^2, (\varepsilon+1)^2,\dots)\frac{C(N,\mu;a)}{C(K,\mu;a)} \\ &\qquad\qquad \times \sigma_{\mu, K}(t^2_1,\dots,t^2_K\mid (\varepsilon+N-K+1)^2,\,(\varepsilon+N-K+2)^2,\,\dots). \end{aligned}
\end{equation}
\tag{4.19}
$$
We will prove that this sum equals
$$
\begin{equation*}
\frac{G_\varkappa(t_1,\dots,t_K)}{d_K(\varkappa;\varepsilon)},
\end{equation*}
\notag
$$
which implies in turn that the right-hand side of (4.18) coincides with the right-hand side of (4.3).
Comparing (4.19) with the result of Lemma 4.2 we see that it remains to check the equality
$$
\begin{equation}
\frac{C(N,\mu;a)}{C(K,\mu;a)}=\prod_{i=1}^K \frac{(L)_{m_i}(L+a)_{m_i}(a+1)_{K-i}(K-i)!}{(a+1)_{m_i}m_i!\, (L)_{K-i}(L+a)_{K-i}}.
\end{equation}
\tag{4.20}
$$
The quantities on the left-hand side were defined in (3.3); they are
$$
\begin{equation}
C(N,\mu;a)= 2^{|\mu|}\, \prod_{i=1}^N \frac{\Gamma(\mu_i+N-i+1)\Gamma(\mu_i+N-i+a+1)} {\Gamma(N-i+1)\Gamma(N-i+a+1)}
\end{equation}
\tag{4.21}
$$
and
$$
\begin{equation}
C(K,\mu;a)= 2^{|\mu|}\, \prod_{i=1}^K \frac{\Gamma(\mu_i+K-i+1)\Gamma(\mu_i+K-i+a+1)} {\Gamma(K-i+1)\Gamma(K-i+a+1)}.
\end{equation}
\tag{4.22}
$$
Since $\ell(\mu)\leqslant K$, the product in (4.21) can in fact be restricted to $i=1,\dots,K$. After that, equality (4.20) is readily checked.
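For readers who wish to see the check in action, here is a short Python sketch confirming (4.20) numerically; following § 4, we read the abbreviations as $m_i=\mu_i+K-i$ and $L=N-K+1$ (an assumption on notation carried over from earlier sections), and the parameter values below are arbitrary test choices.

```python
# Numerical check of (4.20); here m_i = mu_i + K - i and L = N - K + 1.
from math import gamma, factorial, prod

def poch(x, n):  # rising factorial (x)_n
    return prod(x + j for j in range(n))

def C(n, mu, a):  # the quantity (4.21)/(4.22); mu is padded with zeros
    m = mu + [0] * (n - len(mu))
    return 2.0 ** sum(m) * prod(
        gamma(m[i] + n - i) * gamma(m[i] + n - i + a)
        / (gamma(n - i) * gamma(n - i + a)) for i in range(n))

N, K, a = 7, 3, 0.5
mu = [4, 2, 1]
L = N - K + 1
lhs = C(N, mu, a) / C(K, mu, a)
rhs = prod(
    poch(L, m_i) * poch(L + a, m_i) * poch(a + 1, K - i) * factorial(K - i)
    / (poch(a + 1, m_i) * factorial(m_i) * poch(L, K - i) * poch(L + a, K - i))
    for i, m_i in ((i, mu[i - 1] + K - i) for i in range(1, K + 1)))
print(lhs, rhs)  # the two values agree
```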
Theorem A is proved.
Remark 4.4. The main ingredients of the proof of Theorem A are two formulae involving Schur-type functions: the Cauchy-type identity (2.4) and the coherency relation (3.5). A similar mechanism works in the case of unitary groups (see [5]).
Remark 4.5. In the statement of Theorem A it was assumed that $N>K$. Here we examine what happens for $N=K$. Looking at the proof one sees that it works for $N=K$, with the understanding that $\Lambda^K_K(\nu,\varkappa)=\delta_{\nu,\varkappa}$. Then the result reduces to
$$
\begin{equation*}
\prod_{j=1}^KF_K(t_j;\nu;\varepsilon)=\sum_{\varkappa\in \operatorname{Sign}^+_K}\delta_{\nu,\varkappa}\frac{G_\varkappa(t_1,\dots,t_K;a,\varepsilon,1)}{d_K(\varkappa;\varepsilon)},
\end{equation*}
\notag
$$
where we have used the more extended notation $G_\varkappa(t_1,\dots,t_K;a,\varepsilon,L)$ instead of $G_\varkappa(t_1,\dots,t_K)$ and then have specialized $L$ to $1$, because $N=K$ means that $L=1$. We rewrite this equality as
$$
\begin{equation*}
G_\varkappa(t_1,\dots,t_K;a,\varepsilon,1)=d_K(\varkappa;\varepsilon) \prod_{j=1}^KF_K(t_j;\varkappa;\varepsilon)
\end{equation*}
\notag
$$
or, in a more extended form (we use the definition (1.14)),
$$
\begin{equation}
G_\varkappa(t_1,\dots,t_K;a,\varepsilon,1)=\prod_{j=1}^K\prod_{i=1}^K \frac{t_j^2-(k^0_i+\varepsilon)^2}{t_j^2-(k_i+\varepsilon)^2}\cdot \prod_{1\leqslant i<j\leqslant K}\frac{(k_i+\varepsilon)^2-(k_j+\varepsilon)^2}{(k_i^0+\varepsilon)^2-(k_j^0+\varepsilon)^2},
\end{equation}
\tag{4.23}
$$
where $k_i:=\varkappa_i+K-i$ and $k^0_i:=K-i$. Formula (4.23) can be checked directly as follows. For $L=1$, the definition (1.16) simplifies drastically and takes the form
$$
\begin{equation}
g_k(t;a,\varepsilon,1):={}_3F_2 \biggl[\begin{matrix} -k,\, k+2\varepsilon,\, 1 \\ -t+1+\varepsilon,\, t+1+\varepsilon \end{matrix} \biggm|1\biggr] =\frac{t^2-\varepsilon^2}{t^2-(k+\varepsilon)^2},
\end{equation}
\tag{4.24}
$$
where the second equality follows from a well-known summation formula due to Saalschütz (Bailey [3], § 2.2, (1)). The resulting expression (4.24) is precisely the special case of (4.23) corresponding to $K=1$. Next, using (4.24), for $K\geqslant2$ we obtain
$$
\begin{equation*}
\det[g_{k_i}(t_j;a,\varepsilon,1)]_{i,j=1}^K =\prod_{j=1}^K(t_j^2-\varepsilon^2)\cdot\det\biggl[\frac1{t_j^2-(k_i+\varepsilon)^2}\biggr]_{i,j=1}^K.
\end{equation*}
\notag
$$
The determinant on the right is a Cauchy determinant. It follows that
$$
\begin{equation*}
\det[g_{k_i}(t_j;a,\varepsilon,1)]_{i,j=1}^K=(\cdots) \prod_{i,j=1}^K\frac1{t_j^2-(k_i+\varepsilon)^2}\cdot\prod_{1\leqslant i<j\leqslant K}((k_i+\varepsilon)^2-(k_j+\varepsilon)^2),
\end{equation*}
\notag
$$
where the dots denote an expression which depends only on the $t_j$ but does not depend on the $k_i$. Dividing this by a similar expression for $k_i=k_i^0$ we finally obtain (4.23).
Recall that the functions $g_k(t)$ constitute a basis of $\mathcal{F}(\varepsilon,L)$ (step 1 of the proof of Lemma 4.1). Given a function $\phi\in \mathcal{F}(\varepsilon,L)$, we denote by $(\phi:g_k)$ the $k$th coefficient ($k=0,1,2,\dots$) in the expansion of $\phi$ in the basis $\{g_k\}$.
Proof of Theorem B. We have to prove that the following determinantal formula holds:
$$
\begin{equation}
\frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)} =\det[(g_{K-j}F_N\colon g_{k_i})]_{i,j=1}^K,\qquad \nu\in\operatorname{Sign}^+_N,\quad \varkappa\in\operatorname{Sign}^+_K.
\end{equation}
\tag{4.25}
$$
This is Theorem B in § 1.8.
Recall that, according to (4.3),
$$
\begin{equation}
\prod_{j=1}^KF_N(t_j;\nu;\varepsilon) =\sum_{\varkappa\in\operatorname{Sign}^+_K}\frac{\Lambda^N_K(\nu,\varkappa)} {d_K(\varkappa;\varepsilon)}\,G_\varkappa(t_1,\dots,t_K), \qquad t_1,\dots,t_K\in\mathbb C,
\end{equation}
\tag{4.26}
$$
and, by the definition (1.17), that
$$
\begin{equation}
G_{\varkappa,K}(t_1,\dots,t_K)=\frac{\det[g_{\varkappa_i+K-i}(t_j)]_{i,j=1}^K} {\det[g_{K-i}(t_j)]_{i,j=1}^K}.
\end{equation}
\tag{4.27}
$$
Substituting (4.27) into (4.26) and multiplying both sides by $\det[g_{K-i}(t_j)]_{i,j=1}^K$ we obtain
$$
\begin{equation}
\det[g_{K-i}(t_j)F_N(t_j;\nu;\varepsilon)]_{i,j=1}^K=\sum_{k_1>\dots>k_K\geqslant0} \frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}\, \det[g_{k_i}(t_j)]_{i,j=1}^K.
\end{equation}
\tag{4.28}
$$
Next, recall that the functions $g_{K-i}(t)F_N(t;\nu;\varepsilon)$ lie in the space $\mathcal{F}(\varepsilon,L)$ (see the proof of Lemma 4.1, step 3).
Now we abbreviate
$$
\begin{equation*}
h_{K-i}(t):=g_{K-i}(t)F_N(t;\nu;\varepsilon).
\end{equation*}
\notag
$$
It follows from the above that there exists a unique expansion
$$
\begin{equation*}
\det[h_{K-i}(t_j)]_{i,j=1}^K=\sum_{k_1>\dots>k_K\geqslant0} c(k_1,\dots,k_K) \det[g_{k_i}(t_j)]_{i,j=1}^K,
\end{equation*}
\notag
$$
which is valid for all $t_1,\dots,t_K$. Furthermore, the coefficients of this expansion are given by
$$
\begin{equation*}
c(k_1,\dots,k_K)=\det[(h_{K-j}\colon g_{k_i})]_{i,j=1}^K.
\end{equation*}
\notag
$$
It follows that (4.28) implies (4.25).
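The expansion of $\det[h_{K-i}(t_j)]$ and the determinantal formula for its coefficients are an instance of the Cauchy-Binet identity. The following Python sketch illustrates the mechanism numerically; the monomials $t^k$ serve here as a stand-in for the basis $\{g_k\}$ (an illustrative assumption, not the actual basis).

```python
# Cauchy-Binet illustration: if h_i = sum_k C[i, k] g_k, then
# det[h_i(t_j)] = sum_{k_1 > ... > k_K} det(C[:, k]) * det[g_{k_i}(t_j)].
import itertools
import numpy as np

K, M = 3, 6
rng = np.random.default_rng(0)
C = rng.standard_normal((K, M))   # coefficients of h_1, ..., h_K in g_0, ..., g_{M-1}
t = rng.standard_normal(K)        # sample points t_1, ..., t_K
G = np.array([[tj ** k for tj in t] for k in range(M)])   # G[k, j] = g_k(t_j), g_k = t^k

lhs = np.linalg.det(C @ G)        # det[h_i(t_j)]
rhs = sum(np.linalg.det(C[:, S]) * np.linalg.det(G[S, :])
          for S in map(list, itertools.combinations(range(M), K)))
print(lhs, rhs)                   # agree up to rounding
```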
Theorem B is proved.
A determinantal formula similar to (4.25) holds for unitary groups; see [5], Proposition 6.2. Notice that (4.25) resembles the classical Jacobi-Trudi formula for the Schur symmetric polynomials, and the above argument is similar to the derivation of this formula from Cauchy's identity.
§ 5. Expansion in the basis $\{g_k(t)\}$ in the general case: proof of Theorem C
Recall that we deal with the functions defined by (1.16):
$$
\begin{equation*}
g_k(t;a,\varepsilon,L):={}_4F_3\biggl[\begin{matrix}-k,\, k+2\varepsilon,\, L,\, L+a\\-t+L+\varepsilon,\, t+L+\varepsilon,\, a+1\end{matrix}\biggm|1\biggr], \qquad k=0,1,2,\dots\,.
\end{equation*}
\notag
$$
Here $L\geqslant2$ is a positive integer, and $a>-1$ and $\varepsilon\geqslant0$ are real parameters. We keep these parameters fixed and abbreviate $g_k(t):=g_k(t;a,\varepsilon,L)$. We keep to the notation introduced in §§ 1.8 and 1.9. In particular, $\{e_m\colon m\in\mathbb{Z}_{\geqslant0}\}$ is the basis of $\mathcal{F}(\varepsilon,L)$ defined in (1.21) and $\operatorname{Res}_{t=A_m}(\phi(t))$ denotes the residue of $\phi(t)$ at the point $t=A_m$.
The Jacobi-Trudi-type formula presented in Theorem B reduces the computation of the matrix entries $\Lambda^N_K(\nu,\varkappa)$ to the following one-dimensional problem (it was already stated in § 1.9).
Problem. Given a function $\phi\in\mathcal{F}(\varepsilon,L)$, how can we compute the coefficients $(\phi:g_k)$ in the expansion (1.19)? Specifically, we need this for the functions $\phi=g_{K-j}F_N$.
In the present section we study the problem in the case of general Jacobi parameters (as before, the only constraints are the ones in (1.13)).
Proposition 5.1. For any $\phi\in \mathcal{F}(\varepsilon,L)$ one has
$$
\begin{equation}
(\phi:g_k)=\begin{cases} \displaystyle \sum_{m\geqslant k}\operatorname*{Res}_{t=A_m}(\phi(t))(e_m:g_k), & k\geqslant1, \\ \displaystyle \phi(\infty)+\sum_{m\geqslant1}\operatorname*{Res}_{t=A_m}(\phi(t))(e_m: g_0), & k=0. \end{cases}
\end{equation}
\tag{5.1}
$$
Proof. We write the expansion of $\phi$ in the basis $\{e_m\}$ as
$$
\begin{equation*}
\phi=\sum_{m\geqslant0}(\phi: e_m)e_m.
\end{equation*}
\notag
$$
From this we obtain
$$
\begin{equation}
(\phi: g_k)=\sum_{m\geqslant0}(\phi: e_m)(e_m:g_k), \qquad k\in\mathbb Z_{\geqslant0}.
\end{equation}
\tag{5.2}
$$
On the other hand, from the definition of the functions $e_m(t)$ it follows that
$$
\begin{equation}
(\phi: e_0)=\phi(\infty), \qquad (\phi: e_m)=\operatorname*{Res}_{t=A_m}(\phi(t)), \quad m\geqslant1.
\end{equation}
\tag{5.3}
$$
Recall that $g_0=1$ and the only poles of the function $g_k$ with index $k\geqslant1$ are the points $\pm A_\ell$ with $1\leqslant \ell\leqslant k$. Therefore, for each $k\geqslant0$ the coefficients $(g_k:e_\ell)$ vanish unless $\ell\leqslant k$. This triangularity property implies in turn that the coefficients $(e_m:g_k)$ vanish unless $m\geqslant k$. Thus, we can rewrite (5.2) as
$$
\begin{equation*}
(\phi: g_k)=\sum_{m\geqslant k}(\phi: e_m)(e_m:g_k), \qquad k\in\mathbb Z_{\geqslant0}.
\end{equation*}
\notag
$$
In combination with (5.3), this yields (5.1).
The proposition is proved.
To apply Proposition 5.1 we must know the transition coefficients $(e_m:g_k)$ for $m\geqslant k\geqslant0$ and $m\geqslant1$. They are computed in the next theorem.
Theorem 5.2. (i) For $m\geqslant1$ and $k\geqslant1$,
$$
\begin{equation*}
\begin{aligned} \, (e_m:g_k) &=2(L+\varepsilon+m-1)(2L+2\varepsilon+m-1)_{k-1}\frac{(m-1)!}{(m-k)!} \\ &\qquad \times \frac{(a+1)_k}{(L)_k(L+a)_k(k+2\varepsilon)_k} \\ &\qquad \times {}_4F_3\biggl[\begin{matrix} k-m,\; k+1,\; k+a+1,\; 2L+2\varepsilon+m+k-2 \\ L+k,\; L+a+k,\; 2k+2\varepsilon+1\end{matrix}\biggm|1\biggr]. \end{aligned}
\end{equation*}
\notag
$$
(ii) For $m\geqslant1$ and $k=0$,
$$
\begin{equation*}
\begin{aligned} \, (e_m:g_0) &=-\frac{2(L+\varepsilon+m-1)(a+1)}{L(L+a)(2\varepsilon+1)} \\ &\qquad \times{}_4F_3\biggl[\begin{matrix} 1-m,\; 1,\;a+2,\; 2L+2\varepsilon+m-1 \\ L+1,\; L+a+1,\; 2\varepsilon+2\end{matrix}\biggm|1\biggr]. \end{aligned}
\end{equation*}
\notag
$$
This theorem (in combination with Proposition 5.1) is an extended version of Theorem C from § 1.9. The proof is based on three lemmas. To state them we need to introduce the auxiliary rational functions
$$
\begin{equation*}
f_\ell(t):=\frac1{(-t+L+\varepsilon)_\ell(t+L+\varepsilon)_\ell}, \qquad \ell\in\mathbb Z_{\geqslant0}.
\end{equation*}
\notag
$$
From the proof of Proposition 5.1 we know that the transition matrix between the bases $\{e_m\}$ and $\{g_k\}$ is triangular with respect to the natural order on the index set $\mathbb{Z}_{\geqslant0}$.
Lemma 5.3. (i) The functions $f_\ell$ form a basis of $\mathcal{F}(\varepsilon,L)$.
(ii) The transition matrices between all three bases, $\{e_m\}$, $\{g_k\}$ and $\{f_\ell\}$, are triangular.
Proof. Note that $f_0=1$. Next, if $\ell\geqslant1$, then the function $f_\ell(t)$ vanishes at infinity and its singularities are precisely simple poles at the points $\pm A_m$, $m=1,\dots,\ell$. It follows that the functions $f_\ell$ lie in the space $\mathcal{F}(\varepsilon,L)$. The same properties also imply that the transition coefficients $(f_\ell: e_m)$ vanish unless $m\leqslant\ell$. Moreover, $(f_\ell: e_m)\ne0$ for $m=\ell$. This means that $\{f_\ell\}$ is a basis and the transition matrix between $\{f_\ell\}$ and $\{e_m\}$ is triangular. This implies in turn that all the transition matrices in question are triangular too.
The lemma is proved.
We write $(\phi:f_\ell)$ for the coefficients in the expansion of a function $\phi\in\mathcal{F}(\varepsilon,L)$ in the basis $\{f_\ell\}$. From Lemma 5.3 we obtain
$$
\begin{equation}
(e_m:g_k)=\sum_{\ell=k}^m(e_m:f_\ell)(f_\ell:g_k), \qquad m\geqslant k.
\end{equation}
\tag{5.4}
$$
The purpose of the next two lemmas is to compute the coefficients $(e_m:f_\ell)$ and $(f_\ell:g_k)$.
Lemma 5.4. Let $m\in\mathbb{Z}_{\geqslant1}$. Then the following hold.
(i) We have
$$
\begin{equation}
(e_m:f_0)=0.
\end{equation}
\tag{5.5}
$$
(ii) For $\ell\geqslant1$ we have
$$
\begin{equation*}
(e_m:f_\ell)=2(-1)^{\ell}(L+\varepsilon+m-1)\prod_{j=1}^{\ell-1}(2L+2\varepsilon+m+j-2)(m-j).
\end{equation*}
\notag
$$
Note that the triangularity property is ensured by the product $\prod_{j=1}^{\ell-1}(m-j)$.
Proof of Lemma 5.4. (i) The functions $e_m(t)$ and $f_\ell(t)$ with nonzero indices vanish at $t=\infty$. This implies (i).
(ii) Let $z$ and $a_1,a_2,\dots$ be formal variables. Then the next identity is easily proved by induction on $m$:
$$
\begin{equation*}
\frac1{z-a_m}=\frac1{z-a_1}+\frac{a_m-a_1}{(z-a_1)(z-a_2)} +\dots+\frac{(a_m-a_1)\dotsb(a_m-a_{m-1})}{(z-a_1)\dotsb(z-a_m)},
\end{equation*}
\notag
$$
that is, the coefficients of the expansion are
$$
\begin{equation}
\biggl(\frac1{z-a_m}:\frac1{(z-a_1)\dotsb(z-a_\ell)}\biggr)=\prod_{j=1}^{\ell-1}(a_m-a_j), \qquad m=1,2,\dots\,.
\end{equation}
\tag{5.6}
$$
Now observe that
$$
\begin{equation*}
e_m(t)=\frac{2(L+\varepsilon+m-1)}{t^2-(L+\varepsilon+m-1)^2}, \qquad m=1,2,\dots,
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
f_\ell(t)=\frac{(-1)^\ell}{(t^2-(L+\varepsilon)^2)\dotsb(t^2-(L+\varepsilon+\ell-1)^2)}.
\end{equation*}
\notag
$$
So we set
$$
\begin{equation*}
z=t^2 \quad\text{and}\quad a_m=(L+\varepsilon+m-1)^2
\end{equation*}
\notag
$$
and use (5.6). This proves (ii).
The lemma is proved.
Lemma 5.5. The following formula holds:
$$
\begin{equation*}
(f_\ell:g_k)=\frac{2(k+\varepsilon)\Gamma(a+\ell+1) \Gamma(k+2\varepsilon)(-\ell)_k}{(L)_\ell(L+a)_\ell\Gamma(a+1)\Gamma(k+2\varepsilon+\ell+1)k!}.
\end{equation*}
\notag
$$
Note that the triangularity property is ensured by the factor $(-\ell)_k$.
Proof of Lemma 5.5. The functions $g_k(t)$ can be written in the form
$$
\begin{equation}
g_k(t):=\sum_{\ell=0}^k\frac{(-k)_\ell(k+2\varepsilon)_\ell}{(a+1)_\ell \ell!}\,\widetilde f_\ell(t), \qquad k\in\mathbb Z_{\geqslant0},
\end{equation}
\tag{5.7}
$$
where
$$
\begin{equation*}
\widetilde f_\ell(t):=(L)_\ell(L+a)_\ell f_\ell(t).
\end{equation*}
\notag
$$
We compare (5.7) with the well-known formula for Jacobi polynomials (Erdélyi [15], § 10.8)
$$
\begin{equation*}
\widetilde P^{(a,b)}_k(x):=\frac{\Gamma(a+1)k!}{\Gamma(k+a+1)}\,P^{(a,b)}_k(x) =\sum_{\ell=0}^k\frac{(-k)_\ell(k+2\varepsilon)_\ell}{(a+1)_\ell \ell!}\biggl(\frac{1-x}2\biggr)^\ell.
\end{equation*}
\notag
$$
The coefficients in these two expansions are the same, which implies that the required coefficients $(\widetilde f_\ell:g_k)$ coincide with the coefficients in the expansion of $(\frac12(1-x))^\ell$ in the polynomials $\widetilde P^{(a,b)}_k(x)$. This expansion can easily be derived using Rodrigues’s formula for Jacobi polynomials:
$$
\begin{equation}
\begin{aligned} \, \biggl(\frac{1-x}2\biggr)^\ell &=\sum_{k=0}^\ell\frac{2(k+\varepsilon) \Gamma(a+\ell+1)\Gamma(k+2\varepsilon)(-\ell)_k}{\Gamma(k+a+1) \Gamma(k+\ell+2\varepsilon+1)}\,P^{(a,b)}_k(x) \notag \\ &=\sum_{k=0}^\ell\frac{2(k+\varepsilon)\Gamma(a+\ell+1)\Gamma(k+2\varepsilon) (-\ell)_k}{\Gamma(a+1)\Gamma(k+\ell+2\varepsilon+1)k!}\, \biggl(\frac{\Gamma(a+1)k!}{\Gamma(k+a+1)}\,P^{(a,b)}_k(x)\biggr). \end{aligned}
\end{equation}
\tag{5.8}
$$
The first equality in (5.8) can be found in handbooks; see [9], § 5.12.2.1 (it is § 5.11.2.5 in the Russian edition (2006)) and [15], 10.20 (3) (but note that the expression in the latter reference contains a typo: namely, the factor $\Gamma(2n+\alpha+\beta+1)$ should be replaced by $2n+\alpha+\beta+1$).
From the second equality in (5.8) we obtain
$$
\begin{equation*}
(f_\ell:g_k)=\frac{(\widetilde f_\ell:g_k)}{(L)_\ell(L+a)_\ell}=\frac{2(k+\varepsilon)\Gamma(a+\ell+1) \Gamma(k+2\varepsilon)(-\ell)_k}{(L)_\ell(L+a)_\ell\Gamma(a+1) \Gamma(k+\ell+2\varepsilon+1)k!}.
\end{equation*}
\notag
$$
The lemma is proved.
Proof of Theorem 5.2. Our goal is to perform the summation in (5.4) explicitly by using the formulae from Lemmas 5.4 and 5.5.
(i) We examine the case when $m\geqslant1$ and $k\geqslant 1$, and set $\ell=k+n$.
Let us rewrite the formulae in Lemmas 5.4 and 5.5:
$$
\begin{equation*}
\begin{aligned} \, (e_m:f_\ell) &=2(-1)^{\ell}(L+\varepsilon+m-1)\prod_{j=1}^{\ell-1}(2L+2\varepsilon+m+j-2)(m-j) \\ &=2(-1)^{k}(L+\varepsilon+m-1)(2L+2\varepsilon+m-1)_{k-1}\frac{(m-1)!}{(m-k)!} \\ &\qquad\times(2L+2\varepsilon+m+k-2)_n(k-m)_n \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\begin{aligned} \, (f_\ell:g_k) &=\frac{2(k+\varepsilon)\Gamma(a+\ell+1) \Gamma(k+2\varepsilon)(-\ell)_k}{(L)_\ell(L+a)_\ell\Gamma(a+1) \Gamma(k+2\varepsilon+\ell+1)k!} \\ &=\frac{\Gamma(k+2\varepsilon)(-1)^k(a+1)_k}{(L)_k(L+a)_k\Gamma(2k+2\varepsilon)} \frac{(a+k+1)_n(k+1)_n}{(L+k)_n(L+a+k)_n(2k+2\varepsilon+1)_nn!}. \end{aligned}
\end{equation*}
\notag
$$
Next, we put aside the factors that do not depend on $n$ and then sum the resulting expressions over $n=0,\dots,m-k$, which corresponds to summation over $\ell=k,\dots,m$ in (5.4). Then we obtain
$$
\begin{equation*}
\begin{aligned} \, &\sum_{n=0}^{m-k}\frac{(a+k+1)_n(k+1)_n(2L+2\varepsilon+m+k-2)_n(k-m)_n} {(L+k)_n(L+a+k)_n(2k+2\varepsilon+1)_nn!} \\ &\qquad={}_4F_3\biggl[\begin{matrix} k-m,\; k+1,\; k+a+1,\; 2L+2\varepsilon+m+k-2 \\ L+k,\; L+a+k,\; 2k+2\varepsilon+1\end{matrix}\biggm|1\biggr]. \end{aligned}
\end{equation*}
\notag
$$
Taking the remaining factors into account gives us the required expression.
(ii) Now we examine the case when $m\geqslant1$ and $k=0$. The computation is similar to the previous one. Since $(e_m:f_0)=0$, formula (5.4) reduces to
$$
\begin{equation*}
(e_m:g_0)=\sum_{\ell=1}^m(e_m:f_\ell)(f_\ell:g_0).
\end{equation*}
\notag
$$
It is convenient to set $n:=\ell-1$, so that $n$ ranges from $0$ to $m-1$. The lemmas show that
$$
\begin{equation*}
(e_m:f_{n+1})=-2(L+\varepsilon+m-1)(2L+2\varepsilon+m-1)_n(1-m)_n
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\begin{aligned} \, (f_{n+1}:g_0) &=\frac{2\varepsilon\Gamma(2\varepsilon)\Gamma(a+n+2)}{(L)_{n+1}(L+a)_{n+1} \Gamma(a+1)\Gamma(2\varepsilon+n+2)} \\ &=\frac{a+1}{L(L+a)(2\varepsilon+1)}\,\frac{(a+2)_n}{(L+1)_n(L+a+1)_n(2\varepsilon+2)_n}. \end{aligned}
\end{equation*}
\notag
$$
It follows that
$$
\begin{equation*}
(e_m:g_0)=-\frac{2(L+\varepsilon+m-1)(a+1)}{L(L+a)(2\varepsilon+1)} \sum_{n=0}^{m-1}\frac{(1-m)_n(2L+2\varepsilon+m-1)_n(a+2)_n} {(L+1)_n(L+a+1)_n(2\varepsilon+2)_n},
\end{equation*}
\notag
$$
which is the required expression.
Theorem 5.2 is proved.
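As a sanity check, one can verify numerically that the coefficients of Theorem 5.2 indeed expand $e_m(t)$ in the basis $\{g_k\}$. Here is a Python sketch with arbitrary test parameters; all series are terminating, so only finite sums occur.

```python
# Check of Theorem 5.2: e_m(t) = sum_{k=0}^m (e_m : g_k) g_k(t).
from math import factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def g(k, t, a, eps, L):  # the terminating 4F3 from (1.16)
    return sum(poch(-k, j) * poch(k + 2*eps, j) * poch(L, j) * poch(L + a, j)
               / (poch(-t + L + eps, j) * poch(t + L + eps, j)
                  * poch(a + 1, j) * factorial(j))
               for j in range(k + 1))

def E(m, k, a, eps, L):  # (e_m : g_k) as given by Theorem 5.2
    if k == 0:
        s = sum(poch(1 - m, n) * poch(a + 2, n) * poch(2*L + 2*eps + m - 1, n)
                / (poch(L + 1, n) * poch(L + a + 1, n) * poch(2*eps + 2, n))
                for n in range(m))
        return -2 * (L + eps + m - 1) * (a + 1) / (L * (L + a) * (2*eps + 1)) * s
    s = sum(poch(k - m, n) * poch(k + 1, n) * poch(k + a + 1, n)
            * poch(2*L + 2*eps + m + k - 2, n)
            / (poch(L + k, n) * poch(L + a + k, n)
               * poch(2*k + 2*eps + 1, n) * factorial(n))
            for n in range(m - k + 1))
    return (2 * (L + eps + m - 1) * poch(2*L + 2*eps + m - 1, k - 1)
            * factorial(m - 1) / factorial(m - k)
            * poch(a + 1, k) / (poch(L, k) * poch(L + a, k) * poch(k + 2*eps, k)) * s)

a, eps, L, m = 0.4, 0.7, 3, 4
for t in (1.3, 2.7, 5.1):
    em = 2 * (L + eps + m - 1) / (t**2 - (L + eps + m - 1)**2)
    print(em, sum(E(m, k, a, eps, L) * g(k, t, a, eps, L) for k in range(m + 1)))
```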
§ 6. Elementary expression for the functions $g_k$ in the case of symplectic and orthogonal characters
Recall that we are dealing with the functions defined by (1.16):
$$
\begin{equation*}
g_k(t;a,\varepsilon,L):={}_4F_3\biggl[\begin{matrix}-k,\, k+2\varepsilon,\, L,\, L+a\\-t+L+\varepsilon,\, t+L+\varepsilon,\, a+1\end{matrix}\biggm|1\biggr], \qquad k=0,1,2,\dots\,.
\end{equation*}
\notag
$$
In this section we consider the three special cases
$$
\begin{equation*}
(a,\varepsilon)= \biggl(\frac12,1\biggr), \biggl(\frac12, \frac12\biggr),\text{ or } \biggl(-\frac12,0\biggr),
\end{equation*}
\notag
$$
which correspond to the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$, and we introduce the alternate notation
$$
\begin{equation*}
\begin{gathered} \, g_k^{(\mathcal C)}(t;L):=g_k\biggl(t; \frac12, 1,L\biggr) ={}_4F_3\biggl[\begin{matrix}-k,\, k+2,\, L,\, L+\frac12 \\ -t+L+1,\, t+L+1,\, \frac32\end{matrix}\biggm|1\biggr], \\ g_k^{(\mathcal B)}(t;L):=g_k\biggl(t; \frac12, \frac12,L\biggr) ={}_4F_3\biggl[\begin{matrix}-k,\, k+1,\, L,\, L+\frac12 \\ -t+L+\frac12,\, t+L+\frac12,\, \frac32\end{matrix}\biggm|1\biggr] \end{gathered}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
g_k^{(\mathcal D)}(t;L):=g_k\biggl(t; -\frac12, 0,L\biggr) ={}_4F_3\biggl[\begin{matrix}-k,\, k,\, L,\, L-\frac12 \\ -t+L,\,t+L,\, \frac12\end{matrix}\biggm|1\biggr].
\end{equation*}
\notag
$$
Theorem 6.1. The above three hypergeometric series admit closed elementary expressions
$$
\begin{equation}
g^{(\mathcal C)}_k(t;L) =\frac1{2(k+1)(1-2L)t}\biggl[\frac{(t-L)_{k+2}}{(t+L+1)_k} -\frac{(-t-L)_{k+2}}{(-t+L+1)_k}\biggr],
\end{equation}
\tag{6.1}
$$
$$
\begin{equation}
g^{(\mathcal B)}_k(t;L) =\frac1{2(k+1/2)(1-2L)}\biggl[\frac{(t-L+1/2)_{k+1}}{(t+L+1/2)_k} +\frac{(-t-L+1/2)_{k+1}}{(-t+L+1/2)_k}\biggr]
\end{equation}
\tag{6.2}
$$
and
$$
\begin{equation}
g^{(\mathcal D)}_k(t;L) =\frac12\biggl[\frac{(t-L+1)_k}{(t+L)_k}+\frac{(-t-L+1)_k}{(-t+L)_k}\biggr].
\end{equation}
\tag{6.3}
$$
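The closed forms (6.1)–(6.3) are easy to test numerically against the terminating ${}_4F_3$ sums from (1.16); a Python sketch with sample values of the arguments:

```python
# Numerical check of (6.1)-(6.3) against the terminating 4F3 from (1.16).
from math import factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def g4F3(k, t, a, eps, L):
    return sum(poch(-k, j) * poch(k + 2*eps, j) * poch(L, j) * poch(L + a, j)
               / (poch(-t + L + eps, j) * poch(t + L + eps, j)
                  * poch(a + 1, j) * factorial(j))
               for j in range(k + 1))

L, k, t = 4, 3, 2.35
gC = (poch(t - L, k + 2) / poch(t + L + 1, k)
      - poch(-t - L, k + 2) / poch(-t + L + 1, k)) / (2 * (k + 1) * (1 - 2*L) * t)
gB = (poch(t - L + 0.5, k + 1) / poch(t + L + 0.5, k)
      + poch(-t - L + 0.5, k + 1) / poch(-t + L + 0.5, k)) / (2 * (k + 0.5) * (1 - 2*L))
gD = 0.5 * (poch(t - L + 1, k) / poch(t + L, k) + poch(-t - L + 1, k) / poch(-t + L, k))

print(g4F3(k, t, 0.5, 1.0, L), gC)    # series C, formula (6.1)
print(g4F3(k, t, 0.5, 0.5, L), gB)    # series B, formula (6.2)
print(g4F3(k, t, -0.5, 0.0, L), gD)   # series D, formula (6.3)
```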
Remark 6.2. At the very beginning of this research I expected that summation formulae for the series $g^{(\mathcal{C})}_k(t;L)$, $g^{(\mathcal{B})}_k(t;L)$ and $g^{(\mathcal{D})}_k(t;L)$ should exist, so I first attempted to find them in the literature. As a result of this search I managed to find the first formula in the handbook [41] (p. 556 of the Russian edition, § 7.5.3, formula 42). Unfortunately, that handbook does not provide any proof or a suitable reference. Then I asked Eric Rains, and he sent me with amazing speed a unified derivation of all three formulae in his letter [42]. I am very grateful to him for this help. I reproduce his argument below in a more detailed form, but with a different proof for the next lemma.
Lemma 6.3. (i) Suppose $A$ is an even nonpositive integer. Then
$$
\begin{equation}
{}_3F_2\biggl[\begin{matrix}A,\, B,\, D\\ \frac12(A+B+1), \, E \end{matrix}\biggm|1\biggr]= {}_4F_3\biggl[\begin{matrix}\frac12 A,\, \frac12 B,\, E-D, \, D\\ \frac12(A+B+1), \, \frac12 E, \frac12(E+1) \end{matrix}\biggm|1\biggr].
\end{equation}
\tag{6.4}
$$
(ii) Suppose $A$ is an odd negative integer. Then
$$
\begin{equation}
{}_3F_2\biggl[\begin{matrix}A,\, B,\, D\\ \frac12(A+B+1), \, E \end{matrix}\biggm|1\biggr]= \frac{E-2D}E\,{}_4F_3\biggl[\begin{matrix}\frac12(A+1),\, \frac12(B+1),\, E-D, \, D\\ \frac12(A+B+1), \, \frac12(E+1), \frac12 E +1 \end{matrix}\biggm|1\biggr].
\end{equation}
\tag{6.5}
$$
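Since both sides of (6.4) and (6.5) terminate, the lemma can be tested numerically before we turn to the proof; a Python sketch with arbitrary test parameters:

```python
# Numerical check of (6.4) (A an even nonpositive integer) and (6.5) (A odd negative).
from math import factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def hyp(uppers, lowers, nmax):  # terminating pFq at unit argument
    return sum(prod(poch(u, n) for u in uppers)
               / (prod(poch(l, n) for l in lowers) * factorial(n))
               for n in range(nmax + 1))

B, D, E = 0.7, 1.9, 5.3
A = -6  # even nonpositive: identity (6.4)
print(hyp([A, B, D], [(A + B + 1) / 2, E], -A),
      hyp([A / 2, B / 2, E - D, D],
          [(A + B + 1) / 2, E / 2, (E + 1) / 2], (-A) // 2))
A = -5  # odd negative: identity (6.5)
print(hyp([A, B, D], [(A + B + 1) / 2, E], -A),
      (E - 2 * D) / E * hyp([(A + 1) / 2, (B + 1) / 2, E - D, D],
                            [(A + B + 1) / 2, (E + 1) / 2, E / 2 + 1],
                            (-(A + 1)) // 2))
```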
Proof. (i) This is formula (3.6) in Krattenthaler and Rao [23]. As explained there, it is obtained from Gauss’s quadratic transformation formula
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}A,\, B\\ \frac12(A+B+1) \end{matrix}\biggm|z\biggr] ={}_2F_1\biggl[\begin{matrix}\frac12 A,\, \frac12 B\\ \frac12(A+B+1) \end{matrix} \biggm|4z(1-z)\biggr]
\end{equation}
\tag{6.6}
$$
(see [15], § 2.11, formula 2) by the following simple procedure: 1) convert the hypergeometric series on both sides into (finite) sums; 2) multiply both sides of the equation by $z^{D-1}(1-z)^{E-D-1}$; 3) integrate term by term with respect to $z$ for $0\leqslant z\leqslant 1$; 4) interchange integration and summation; 5) use the beta integral to evaluate the integrals inside the sums; 6) convert the sums back into hypergeometric series.
(ii) We replace (6.6) with another quadratic transformation formula (see [15], § 2.11, formula 19):
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}A,\, B\\ \frac12(A+B+1) \end{matrix}\biggm|z\biggr] =(1-2z)\,{}_2F_1\biggl[\begin{matrix}\frac12(A+1),\, \frac12(B+1)\\ \frac12(A+B+1) \end{matrix}\biggm|4z(1-z)\biggr],
\end{equation}
\tag{6.7}
$$
then write $1-2z=(1-z)-z$ and apply the same procedure.
Note also that (6.7) can easily be obtained from (6.6) by differentiating with respect to $z$.
The lemma is proved.
Proof of Theorem 6.1. We apply the well-known transformation formula
$$
\begin{equation*}
\begin{aligned} \, &{}_4F_3\biggl[\begin{matrix}-k, \alpha_1,\alpha_2,\alpha_3\\\beta_1,\beta_2,\beta_3 \end{matrix}\biggm|1\biggr] =\frac{(-1)^k(\alpha_1)_k(\alpha_2)_k(\alpha_3)_k}{(\beta_1)_k(\beta_2)_k(\beta_3)_k} \\ &\qquad\qquad \times {}_4F_3\biggl[\begin{matrix}-k, \; -\beta_1-k+1,\; -\beta_2-k+1,\; -\beta_3-k+1 \\ -\alpha_1-k+1,\; -\alpha_2-k+1,\; -\alpha_3-k+1 \end{matrix}\biggm|1\biggr], \end{aligned}
\end{equation*}
\notag
$$
which is obtained by summing a terminating series in the reverse order. This yields
$$
\begin{equation}
\begin{aligned} \, g_k(t;a,\varepsilon,L)&=\frac{(-1)^k(k+2\varepsilon)_k(L)_k(L+a)_k} {(-t+L+\varepsilon)_k(t+L+\varepsilon)_k(a+1)_k} \notag \\ &\times {}_4F_3\biggl[\begin{matrix}-k,\; -k-a,\; t-L-\varepsilon-k+1,\; -t-L-\varepsilon-k+1\\ -2k-2\varepsilon+1,\; -L-k+1,\; -L-k-a+1\end{matrix}\biggm|1\biggr]. \end{aligned}
\end{equation}
\tag{6.8}
$$
In the three special cases under consideration the hypergeometric series on the right-hand side has the same form as in (6.4) or (6.5). However, the application of these formulae requires some caution, as will soon become clear. Let us examine the three cases separately.
Case ($\mathcal{C}$): $a=1/2$ and $\varepsilon=1$. The series ${}_4 F_3$ in (6.8) has the same form as the one on the right-hand side of (6.5), with the parameters
$$
\begin{equation*}
A=-2k-1, \qquad B=-2k-2, \qquad D=-t-L-k \quad\text{and}\quad E=-2L-2k,
\end{equation*}
\notag
$$
so that the corresponding series ${}_3 F_2$ on the left-hand side is
$$
\begin{equation}
{}_3F_2\biggl[\begin{matrix}-2k-1,\, -2k-2,\, -t-L-k\\ -2k-1, \, -2L-2k \end{matrix}\biggm|1\biggr].
\end{equation}
\tag{6.9}
$$
How can we interpret this expression? It is tempting to reduce it directly to a ${}_2 F_1$ series by removing the parameter $-2k-1$ from the upper and lower rows, but this would give an incorrect result. A correct argument is as follows.
We return to the initial ${}_4F_3$ series in (6.8), keep $a$ as a parameter, but exclude $\varepsilon$ by imposing the linear relation
$$
\begin{equation*}
-2k-2\varepsilon+1=(-k)+(-k-a)-\frac12, \quad \text{that is},\quad 2\varepsilon=a+\frac32.
\end{equation*}
\notag
$$
The point is that the resulting ${}_4F_3$ series is still of the form (6.5). Next, it is a rational function of the parameter $a$ and has no singularity at $a=1/2$ for generic $L$. Therefore, we can apply identity (6.5) and then pass to the limit as $a$ tends to $1/2$. This leads to the conclusion that (6.9) must be interpreted as
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}-2k-2,\, -t-L-k\\ -2L-2k \end{matrix}\biggm|1\biggr] - (\text{the last term of the series expansion}).
\end{equation}
\tag{6.10}
$$
Applying the Chu-Vandermonde identity
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}-N,\, \alpha\\ \beta \end{matrix}\biggm|1\biggr]=\frac{(\beta-\alpha)_N}{(\beta)_N}, \qquad N=0,1, 2,\dots, \quad \beta\ne0,-1,\dots, -N
\end{equation}
\tag{6.11}
$$
(see [2], Corollary 2.2.3) to the ${}_2F_1$ series, we see that (6.10) is equal to
$$
\begin{equation*}
\frac{(t-L-k)_{2k+2}-(-t-L-k)_{2k+2}}{(-2L-2k)_{2k+2}}.
\end{equation*}
\notag
$$
Next, we have to multiply this by
$$
\begin{equation*}
\frac{(-1)^k(k+2)_k(L)_k(L+1/2)_k}{(-t+L+1)_k(t+L+1)_k(3/2)_k}\cdot \frac{-2L-2k}{2t},
\end{equation*}
\notag
$$
where the first fraction comes from (6.8) and the second fraction comes from ${E/(E-2D)}$ (see (6.5)). After simplification this finally gives the required expression (6.1).
Case ($\mathcal{B}$): $a=1/2$ and $\varepsilon=1/2$. Now the ${}_4 F_3$ series in (6.8) has the same form as on the right-hand side of (6.4), with the parameters
$$
\begin{equation*}
A=-2k, \qquad B=-2k-1, \qquad D=-t-L-k+\frac12 \quad\text{and}\quad E=-2L-2k+1,
\end{equation*}
\notag
$$
so that the corresponding ${}_3 F_2$ series on the left-hand side is
$$
\begin{equation*}
{}_3F_2\biggl[\begin{matrix}-2k,\, -2k-1,\, -t-L-k+\frac12\\ -2k, \, -2L-2k+1 \end{matrix}\biggm|1\biggr].
\end{equation*}
\notag
$$
By the same argument as above the correct elimination of the parameter $-2k$ leads to
$$
\begin{equation*}
{}_2F_1\biggl[\begin{matrix}-2k-1,\, -t-L-k+\frac12\\ -2L-2k+1 \end{matrix}\biggm|1\biggr] - (\text{the last term of the series expansion}).
\end{equation*}
\notag
$$
Applying the Chu-Vandermonde identity (6.11) to this ${}_2F_1$ series we obtain
$$
\begin{equation*}
\frac{(t-L-k+1/2)_{2k+1}+(-t-L-k+1/2)_{2k+1}}{(-2L-2k+1)_{2k+1}}.
\end{equation*}
\notag
$$
Next, we multiply this by
$$
\begin{equation*}
\frac{(-1)^k(k+1)_k(L)_k(L+1/2)_k}{(-t+L+1/2)_k(t+L+1/2)_k(3/2)_k},
\end{equation*}
\notag
$$
and after simplification this finally gives the required expression (6.2).
Case ($\mathcal{D}$): $a=-1/2$ and $\varepsilon=0$. Again, the ${}_4 F_3$ series in (6.8) has the same form as on the right-hand side of (6.4), but now the parameters are
$$
\begin{equation*}
A=-2k, \qquad B=-2k+1, \qquad D=-t-L-k+1 \quad\text{and}\quad E=-2L-2k+2.
\end{equation*}
\notag
$$
According to (6.4), this leads to the ${}_3 F_2$ series
$$
\begin{equation*}
{}_3F_2\biggl[\begin{matrix}-2k,\, -2k+1,\, -t-L-k+1\\ -2k+1, \, -2L-2k+2 \end{matrix}\biggm|1\biggr].
\end{equation*}
\notag
$$
Here the correct elimination of the parameter $-2k+1$ is achieved by the limit transition
$$
\begin{equation}
\lim_{a\to-1/2}{}_3F_2\biggl[\begin{matrix}-2k,\, -2k-2a,\, -t-L-k+\frac34-\frac12 a\\ -2k-a+\frac12, \, -2L-2k+2 \end{matrix}\biggm|1\biggr].
\end{equation}
\tag{6.12}
$$
The limit in (6.12) can be taken termwise. The result differs from the expansion of the series
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}-2k,\, -t-L-k+1\\ -2L-2k+2 \end{matrix}\biggm|1\biggr]
\end{equation}
\tag{6.13}
$$
in the last term only. Namely, in (6.12), the last term is
$$
\begin{equation*}
\lim_{a\to-1/2}\frac{(-2k)_{2k}(-2k-2a)_{2k}(-t-L-k+3/4-a/2)_{2k}} {(-2k-a+1/2)_{2k}(-2L-2k+2)_{2k}(2k)!} =2\, \frac{(-t-L-k+1)_{2k}}{(-2L-2k+2)_{2k}},
\end{equation*}
\notag
$$
while the last term in (6.13) is
$$
\begin{equation}
\frac{(-t-L-k+1)_{2k}}{(-2L-2k+2)_{2k}}.
\end{equation}
\tag{6.14}
$$
We conclude that the limit in (6.12) is equal to the sum of (6.13) and (6.14):
$$
\begin{equation}
{}_2F_1\biggl[\begin{matrix}-2k,\, -t-L-k+1\\ -2L-2k+2 \end{matrix}\biggm|1\biggr] +\frac{(-t-L-k+1)_{2k}}{(-2L-2k+2)_{2k}}.
\end{equation}
\tag{6.15}
$$
Using the Chu-Vandermonde identity (6.11) we obtain that (6.15) equals
$$
\begin{equation*}
\frac{(t-L-k+1)_{2k}+(-t-L-k+1)_{2k}}{(-2L-2k+2)_{2k}}.
\end{equation*}
\notag
$$
Next, we multiply this by
$$
\begin{equation*}
\frac{(-1)^k(k)_k(L)_k(L-1/2)_k}{(-t+L)_k(t+L)_k(1/2)_k}
\end{equation*}
\notag
$$
and after simplification we obtain the required expression (6.3).
Theorem 6.1 is proved.
§ 7. Elementary expression for the transition coefficients $(e_m:g_k)$ in the case of symplectic and orthogonal characters
It will be convenient for us to introduce the alternate notation
$$
\begin{equation*}
E(m,k)=E(m,k;a,\varepsilon,L)
\end{equation*}
\notag
$$
for the transition coefficients $(e_m:g_k)$. As in § 6, we examine the three distinguished cases when the parameters $(a,\varepsilon)$ correspond to characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$. We show that the formulae obtained in Theorem 5.2 then simplify: the ${}_4F_3$ hypergeometric series that appear in them admit closed elementary expressions. To distinguish between the three cases we introduce the additional superscripts $(\mathcal{C})$, $(\mathcal{B})$ and $(\mathcal{D})$, respectively.
Theorem 7.1. (i) If $m\geqslant k\geqslant1$, then
$$
\begin{equation*}
\begin{aligned} \, E^{(\mathcal C)}(m,k) &:=E\biggl(m,k;\frac12,1,L\biggr) \\ &=\frac{2(k+1)(m-1)!\,(2L-2)(2L-1)(L+m)(2L+m-k-3)!} {(m-k)!\,(2L+m)!}, \\ E^{(\mathcal B)}(m,k) &:=E\biggl(m,k;\frac12,\frac12,L\biggr) \\ &=\frac{2(k+1/2)(m-1)!\,(2L-2)(2L-1)(2L+m-k-3)!}{(m-k)!\,(2L+m-1)!} \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
E^{(\mathcal D)}(m,k) :=E\biggl(m,k;-\frac12,0,L\biggr)=\frac{2(m-1)!\,(2L-2)(2L+m-k-3)!}{(m-k)!\,(2L+m-2)!}.
\end{equation*}
\notag
$$
(ii) If $m\geqslant1$ and $k=0$, then
$$
\begin{equation*}
\begin{aligned} \, E^{(\mathcal C)}(m,0) &:=E\biggl(m,0;\frac12,1,L\biggr)=-\frac{2(m+4L-3)(L+m)}{(2L+m)(2L+m-1)(2L+m-2)}, \\ E^{(\mathcal B)}(m,0) &:=E\biggl(m,0;\frac12,\frac12,L\biggr)=-\frac{2m+6L-5}{(2L+m-1)(2L+m-2)} \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
E^{(\mathcal D)}(m,0) :=E\biggl(m,0;-\frac12,0,L\biggr)=-\frac{2}{2L+m-2}.
\end{equation*}
\notag
$$
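These elementary expressions can be cross-checked numerically against the general ${}_4F_3$ formula of Theorem 5.2 at the three special parameter pairs; a Python sketch:

```python
# Theorem 7.1 vs. Theorem 5.2: E(m, k) at the three special parameter pairs.
from math import factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def E(m, k, a, eps, L):  # Theorem 5.2(i), valid for m >= k >= 1
    s = sum(poch(k - m, n) * poch(k + 1, n) * poch(k + a + 1, n)
            * poch(2*L + 2*eps + m + k - 2, n)
            / (poch(L + k, n) * poch(L + a + k, n)
               * poch(2*k + 2*eps + 1, n) * factorial(n))
            for n in range(m - k + 1))
    return (2 * (L + eps + m - 1) * poch(2*L + 2*eps + m - 1, k - 1)
            * factorial(m - 1) / factorial(m - k)
            * poch(a + 1, k) / (poch(L, k) * poch(L + a, k) * poch(k + 2*eps, k)) * s)

L, m, k = 5, 6, 2
EC = (2 * (k + 1) * factorial(m - 1) * (2*L - 2) * (2*L - 1) * (L + m)
      * factorial(2*L + m - k - 3) / (factorial(m - k) * factorial(2*L + m)))
EB = (2 * (k + 0.5) * factorial(m - 1) * (2*L - 2) * (2*L - 1)
      * factorial(2*L + m - k - 3) / (factorial(m - k) * factorial(2*L + m - 1)))
ED = (2 * factorial(m - 1) * (2*L - 2) * factorial(2*L + m - k - 3)
      / (factorial(m - k) * factorial(2*L + m - 2)))
print(E(m, k, 0.5, 1.0, L), EC)   # series C
print(E(m, k, 0.5, 0.5, L), EB)   # series B
print(E(m, k, -0.5, 0.0, L), ED)  # series D
```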
Recall that $E(m,k)=0$ for $k>m$, so that the theorem covers all possible cases. The proof is based on the following lemma.
Lemma 7.2. Let $n=0,1,2,\dots$ . Then the following two formulae hold:
$$
\begin{equation}
{}_4F_3\biggl[\begin{matrix}-n,\, A,\, A+\frac12,\, 2B+n \\B,\, B+\frac12,\, 2A+1\end{matrix}\biggm|1\biggr] =\frac{\Gamma(2B-2A+n)\Gamma(2B)}{\Gamma(2B-2A)\Gamma(2B+n)}
\end{equation}
\tag{7.1}
$$
and
$$
\begin{equation}
{}_4F_3\biggl[\begin{matrix}-n,\, A+\frac12,\, A+1,\, 2B+n \\B+\frac12,\, B+1,\, 2A+1\end{matrix}\biggm|1\biggr] =\frac{B}{(B+n)}\,\frac{\Gamma(2B-2A+n)\Gamma(2B)}{\Gamma(2B-2A)\Gamma(2B+n)}.
\end{equation}
\tag{7.2}
$$
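Both formulae involve only terminating series and gamma factors, so they are easy to test numerically before we turn to the proof; a Python sketch with arbitrary parameter values:

```python
# Numerical check of (7.1) and (7.2) for generic A, B and small n.
from math import gamma, factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def hyp4F3(uppers, lowers, nmax):  # terminating 4F3 at unit argument
    return sum(prod(poch(u, j) for u in uppers)
               / (prod(poch(l, j) for l in lowers) * factorial(j))
               for j in range(nmax + 1))

A, B, n = 0.7, 2.3, 5
lhs1 = hyp4F3([-n, A, A + 0.5, 2*B + n], [B, B + 0.5, 2*A + 1], n)
rhs1 = gamma(2*B - 2*A + n) * gamma(2*B) / (gamma(2*B - 2*A) * gamma(2*B + n))
lhs2 = hyp4F3([-n, A + 0.5, A + 1, 2*B + n], [B + 0.5, B + 1, 2*A + 1], n)
rhs2 = B / (B + n) * rhs1
print(lhs1, rhs1)   # identity (7.1)
print(lhs2, rhs2)   # identity (7.2)
```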
Proof. For the first formula, see Slater [49], p. 65, (2.4.2.2), and p. 245 (III.20). The second formula is derived from the first using the transformation (2.4.1.7) in [49], which holds for any balanced terminating series ${}_4F_3(1)$. In slightly modified notation it reads
$$
\begin{equation}
{}_4F_3\biggl[\begin{matrix}-n,\, a_1,\, a_2,\, x \\b_1,\, b_2,\, y\end{matrix}\biggm|1\biggr]=\frac{(b_1-x)_n(b_1-u)_n}{(b_1)_n(b_1-x-u)_n} {}_4F_3\biggl[\begin{matrix}-n,\, a_1-u,\, a_2-u,\, x \\b_1-u,\, b_2-u,\, y\end{matrix}\biggm|1\biggr],
\end{equation}
\tag{7.3}
$$
where
$$
\begin{equation*}
u:=a_1+a_2-y.
\end{equation*}
\notag
$$
We specialize it to
$$
\begin{equation*}
a_1=A+\frac12,\ \ \ a_2=A+1, \ \ \ x=2B+n, \ \ \ b_1=B+\frac12, \ \ \ b_2=B+1 \ \ \ \text{and}\ \ \ y=2A+1.
\end{equation*}
\notag
$$
For other ways to derive the formulae, see [17], (3.20) and (3.21), and further references therein.
The lemma is proved.
Proof of Theorem 7.1. (i) Let us show that Lemma 7.2 can be applied to the ${}_4F_3$ series displayed in claim (i) of Theorem 5.2.
Indeed, the series in question is
$$
\begin{equation*}
{}_4F_3\biggl[\begin{matrix} k-m,\; k+1,\; k+a+1,\; 2L+2\varepsilon+m+k-2 \\ L+k,\; L+a+k,\; 2k+2\varepsilon+1\end{matrix}\biggm|1\biggr].
\end{equation*}
\notag
$$
Look at the triple $(k+1, k+a+1, 2k+2\varepsilon+1)$. It takes the following form:
$$
\begin{equation*}
(k+1, k+a+1, 2k+2\varepsilon+1)= \begin{cases} \bigl(k+1, k+\frac32, 2k+3\bigr) & \text{in case } (\mathcal C), \\ \bigl(k+1, k+\frac32, 2k+2\bigr) & \text{in case }(\mathcal B), \\ \bigl(k+1, k+\frac12, 2k+1\bigr) & \text{in case }(\mathcal D). \end{cases}
\end{equation*}
\notag
$$
It follows that formula (7.1) is applicable in case $(\mathcal C)$, while formula (7.2) is applicable in cases $(\mathcal B)$ and $(\mathcal D)$. This leads to the required expressions.
(ii) Now we turn to the ${}_4F_3$ series from part (ii) of Theorem 5.2, where we are again interested in the three special cases. Formulae (7.1) and (7.2) are no longer applicable to these series. Suitable summation formulae can perhaps be extracted from the literature, but this author has not succeeded in finding them. Here is another way to solve the problem.
Observe that $e_m(\infty)=0$ for each $m\geqslant1$, while $g_k(\infty)=1$ for all $k$. It follows that
$$
\begin{equation*}
E(m,0)=-\sum_{k=1}^m E(m,k), \qquad m=1,2,\dots\,.
\end{equation*}
\notag
$$
Therefore, the formulae in claim (ii) are equivalent to the following three identities:
$$
\begin{equation*}
\begin{aligned} \, &\sum_{k=1}^m \frac{(2k+2)(m-1)!\,(2L-2)(2L-1)(L+m)(2L+m-k-3)!} {(m-k)!\,(2L+m)!} \\ &\qquad=\frac{2(m+4L-3)(L+m)}{(2L+m)(2L+m-1)(2L+m-2)}, \\ &\sum_{k=1}^m\frac{(2k+1)(m-1)!\,(2L-2)(2L-1)(2L+m-k-3)!}{(m-k)!\,(2L+m-1)!} \\ &\qquad=\frac{2m+6L-5}{(2L+m-1)(2L+m-2)} \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\sum_{k=1}^m \frac{2(m-1)!\,(2L-2)(2L+m-k-3)!}{(m-k)!\,(2L+m-2)!}=\frac{2}{2L+m-2}.
\end{equation*}
\notag
$$
Set $M:=2L-2$; after simplification one can rewrite these identities as
$$
\begin{equation*}
\begin{aligned} \, S^{(\mathcal C)}(m,M) &:=M\sum_{k=1}^m \frac{(M+m-k-1)!\,(k+1)}{(m-k)!}=\frac{(M+m-1)!\,(m+2M+1)}{(m-1)!\,(M+1)}, \\ S^{(\mathcal B)}(m,M) &:=M\sum_{k=1}^m \frac{(M+m-k-1)!\,(2k+1)}{(m-k)!}=\frac{(M+m-1)!\,(2m+3M+1)}{(m-1)!\,(M+1)} \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
S^{(\mathcal D)}(m,M) :=M\sum_{k=1}^m \frac{(M+m-k-1)!\,}{(m-k)!}=\frac{(M+m-1)!}{(m-1)!}.
\end{equation*}
\notag
$$
The identity for $S^{(\mathcal{D})}(m,M)$ is checked by induction on $m$, using the relation
$$
\begin{equation*}
S^{(\mathcal D)}(m+1,M)=S^{(\mathcal D)}(m,M)+\frac{M(M+m-1)!}{m!}.
\end{equation*}
\notag
$$
Next, the other two sums, $S^{(\mathcal{B})}(m,M)$ and $S^{(\mathcal{C})}(m,M)$, are reduced to $S^{(\mathcal{D})}(m,M)$ using the relations
$$
\begin{equation*}
\begin{aligned} \, &(2m+1)S^{(\mathcal D)}(m,M)-S^{(\mathcal B)}(m,M) \\ &\qquad=2M\sum_{k=1}^{m-1}\frac{(M+m-k-1)!}{(m-k-1)!}=\frac{2M}{M+1}S^{(\mathcal D)}(m-1,M+1) \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\begin{aligned} \, &(m+1)S^{(\mathcal D)}(m,M)-S^{(\mathcal C)}(m,M) \\ &\qquad=M\sum_{k=1}^{m-1}\frac{(M+m-k-1)!}{(m-k-1)!}=\frac{M}{M+1}S^{(\mathcal D)}(m-1,M+1). \end{aligned}
\end{equation*}
\notag
$$
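The three summation identities can also be confirmed directly in exact integer arithmetic; a Python sketch:

```python
# Exact check of the identities for S^(C), S^(B) and S^(D).
from math import factorial

def SD(m, M): return M * sum(factorial(M + m - k - 1) // factorial(m - k)
                             for k in range(1, m + 1))
def SB(m, M): return M * sum((2*k + 1) * factorial(M + m - k - 1) // factorial(m - k)
                             for k in range(1, m + 1))
def SC(m, M): return M * sum((k + 1) * factorial(M + m - k - 1) // factorial(m - k)
                             for k in range(1, m + 1))

for m in range(1, 9):
    for M in range(1, 9):
        base = factorial(M + m - 1) // factorial(m - 1)
        assert SD(m, M) == base
        assert SC(m, M) * (M + 1) == base * (m + 2*M + 1)
        assert SB(m, M) * (M + 1) == base * (2*m + 3*M + 1)
print("all three identities verified")
```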
Theorem 7.1 is proved.
§ 8. Proof of Theorem D and application to discrete splines
8.1. Proof of Theorem D
The results of computations in §§ 6 and 7 are summarized below in Theorem 8.1 (this is a detailed version of Theorem D in § 1.9). To state it we recall the relevant definitions and notation.
$\bullet$ We deal with the entries $\Lambda^N_K(\nu,\varkappa)$ of the stochastic matrix $\Lambda^N_K$, where $\nu$ ranges over $\operatorname{Sign}^+_N$, $\varkappa$ ranges over $\operatorname{Sign}^+_K$, and $N>K\geqslant1$; see Definition 1.4. Originally, the matrix depends on a pair $(a,b)$ of Jacobi parameters, but it is convenient for us to replace the second parameter $b$ by $\varepsilon:=\frac12(a+b+1)$.
$\bullet$ We are particularly interested in the three distinguished cases, which are linked to characters of the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$. In terms of the parameters $(a,\varepsilon)$ this means that
$$
\begin{equation}
(a,\varepsilon) =\begin{cases} \bigl(\frac12,1\bigr), \\ \bigl(\frac12,\frac12\bigr), \\ \bigl(-\frac12,0\bigr), \end{cases}
\end{equation}
\tag{8.1}
$$
respectively.
$\bullet$ We set $L:=N-K+1$, so that $L$ is an integer $\geqslant2$. In (1.18) we introduced a grid $\mathbb{A}(\varepsilon,L)$ on $\mathbb{R}_{>0}$ depending on $\varepsilon$ and $L$:
$$
\begin{equation*}
\mathbb A(\varepsilon,L):=\{A_1,A_2,\dots\}, \quad\text{where } A_m:=L+\varepsilon+m-1, \quad m=1,2,\dots\,.
\end{equation*}
\notag
$$
$\bullet$ In § 1.8 we introduced the space $\mathcal{F}(\varepsilon,L)$ formed by even rational functions with simple poles in $(-\mathbb{A}(\varepsilon,L))\cup\mathbb{A}(\varepsilon,L)$. For a rational function $\phi\in \mathcal{F}(\varepsilon,L)$, we denote by $\operatorname{Res}_{t=A_m}(\phi(t))$ its residue at the point $t=A_m$ of the grid $\mathbb{A}(\varepsilon,L)$.
$\bullet$ In (1.16) we introduced the even rational functions $g_k(t)=g_k(t;a,\varepsilon,L)$ with indices $k=0,1,2,\dots$ and parameters $(a,\varepsilon)$. In the general case $g_k(t)$ is given by the terminating hypergeometric series ${}_4F_3$. For the special values (8.1) these functions admit explicit elementary expressions (Theorem 6.1).
$\bullet$ In Theorem 5.2 we computed the transition coefficients $(e_m:g_k)$, renamed to $E(m,k)$ in the beginning of § 7. For the special values (8.1) these coefficients admit an explicit elementary expression (Theorem 7.1).
$\bullet$ In (1.14), to each signature $\nu\in\operatorname{Sign}^+_N$ we assigned its characteristic function
$$
\begin{equation}
F_N(t)=F_N(t;\nu;\varepsilon):=\prod_{i=1}^N\frac{t^2-(N-i+\varepsilon)^2}{t^2-(\nu_i+N-i+\varepsilon)^2}.
\end{equation}
\tag{8.2}
$$
$\bullet$ For $\varkappa\in\operatorname{Sign}^+_K$ we abbreviate $k_i:=\varkappa_i+K-i$, where $i=1,\dots,K$, and set
$$
\begin{equation*}
d_K(\varkappa;\varepsilon):=\prod_{1\leqslant i<j\leqslant K}((k_i+\varepsilon)^2-(k_j+\varepsilon)^2).
\end{equation*}
\notag
$$
This agrees with the definition (1.15).
Theorem 8.1. In the three distinguished cases (8.1) the following formula holds:
$$
\begin{equation}
\frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}=\det[M(i,j)]_{i,j=1}^K,
\end{equation}
\tag{8.3}
$$
where $[M(i,j)]$ is a $K\times K$ matrix whose entries are given by the following elementary expressions, which are in fact finite sums:
$$
\begin{equation}
M(i,j)=\sum_{m\geqslant k_i}\operatorname*{Res}_{t=A_m}\bigl(g_{K-j}(t)F_N(t)\bigr)E(m,k_i), \qquad k_i\geqslant1,
\end{equation}
\tag{8.4}
$$
and
$$
\begin{equation}
M(i,j)=1+\sum_{m\geqslant1}\operatorname*{Res}_{t=A_m}\bigl(g_{K-j}(t)F_N(t)\bigr)E(m,0), \qquad k_i=0
\end{equation}
\tag{8.5}
$$
(here we use the fact that $g_{K-j}(\infty)F_N(\infty)=1$).
Proof. By Theorem B,
$$
\begin{equation*}
\frac{\Lambda^N_K(\nu,\varkappa)}{d_K(\varkappa;\varepsilon)}=\det[(g_{K-j}F_N: g_{k_i})]_{i,j=1}^K.
\end{equation*}
\notag
$$
Next, recall that Proposition 5.1 gives a summation formula for the transition coefficients $(\phi:g_k)$ (see (5.1)). We set $\phi=g_{K-j}F_N$ and $k=k_i$. Then (5.1) takes the form indicated in (8.4) and (8.5). From Theorems 6.1 and 7.1 we obtain elementary expressions for the functions $g_{K-j}(t)$ and the coefficients $E(m,k_i)$. The functions $F_N(t)$ are also given by an elementary expression (see (8.2)).
The theorem is proved.
8.2. A symplectic and an orthogonal version of the discrete $\mathrm{B}$-spline
We write the formulae in Theorem 8.1 for the particular case $K=1$ in a more explicit form.
Corollary 8.2. Let $\nu\in\operatorname{Sign}^+_N$ and $k\in\operatorname{Sign}^+_1=\{0,1,2,\dots\}$.
(i) For $(a,\varepsilon)=(1/2,1)$ (the series $\mathcal{C}$)
$$
\begin{equation*}
\begin{aligned} \, &\Lambda^N_1(\nu,k) =2(k+1)(N-1)(2N-1) \\ &\times\sum_{i\colon \nu_i-i+1\geqslant k}\frac{(\nu_i-i+2-k)_{2N-3}}{(\nu_i+N-i+1)\prod_{r\colon r\ne i}((\nu_i+N-i+1)^2-(\nu_r+N-r+1)^2)}, \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad k\geqslant1, \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\begin{aligned} \, \Lambda^N_1(\nu,0) &=1-\sum_{i\colon \nu_i-i\geqslant0}\frac{2(\nu_i+4N-i-2)(\nu_i+N-i+1)}{(\nu_i+2N-i+1)(\nu_i+2N-i)(\nu_i+2N-i-1)} \\ &\qquad\qquad\qquad\qquad\times\operatorname*{Res}_{t=\nu_i+N-i+1} F_N(t). \end{aligned}
\end{equation*}
\notag
$$
(ii) For $(a,\varepsilon)=(1/2,1/2)$ (the series $\mathcal{B}$)
$$
\begin{equation*}
\begin{aligned} \, &\Lambda^N_1(\nu,k)=2\biggl(k+\frac12\biggr)(N-1)(2N-1) \\ &\times\sum_{i\colon \nu_i-i+1\geqslant k}\frac{(\nu_i-i+2-k)_{2N-3}}{(\nu_i+N-i+1/2)\prod_{r\colon r\ne i}((\nu_i+N-i+1/2)^2-(\nu_r+N-r+1/2)^2)}, \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad k\geqslant1, \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\Lambda^N_1(\nu,0)=1-\sum_{i\colon \nu_i-i\geqslant0}\frac{2(\nu_i-i+1)+6N-5}{(\nu_i+2N-i)(\nu_i+2N-i-1)} \operatorname*{Res}_{t=\nu_i+N-i+1/2} F_N(t).
\end{equation*}
\notag
$$
(iii) For $(a,\varepsilon)=(-1/2,0)$ (the series $\mathcal{D}$)
$$
\begin{equation*}
\Lambda^N_1(\nu,k)=2(N-1)\,\sum_{i\colon \nu_i-i+1\geqslant k}\frac{(\nu_i-i+2-k)_{2N-3}}{\prod_{r\colon r\ne i}((\nu_i+N-i)^2-(\nu_r+N-r)^2)}, \qquad k\geqslant1,
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\Lambda^N_1(\nu,0)=1-2\sum_{i\colon \nu_i-i\geqslant0}\frac1{\nu_i+2N-i-1} \operatorname*{Res}_{t=\nu_i+N-i} F_N(t).
\end{equation*}
\notag
$$
Proof. In the case $K=1$ the formulae in Theorem 8.1 are slightly simplified. Namely, the factor $d_K(\varkappa;\varepsilon)$ on the left-hand side of (8.3) disappears (because it becomes an empty product and hence is equal to $1$), and the factor $g_{K-j}(t)$ on the right-hand side of (8.4) and (8.5) also disappears (because it turns into $g_0(t)\equiv1$). Taking this into account we obtain
$$
\begin{equation*}
\Lambda^N_1(\nu,k)= \begin{cases} \displaystyle \sum_{m\geqslant k}\operatorname*{Res}_{t=A_m}(F_N(t)) E(m,k), & k\geqslant1, \\ \displaystyle 1+\sum_{m\geqslant 1}\operatorname*{Res}_{t=A_m}(F_N(t)) E(m,0), & k=0. \end{cases}
\end{equation*}
\notag
$$
Further, we now have $A_m=N+\varepsilon+m-1$, because $K=1$ implies that $L=N$. Then we substitute the values of the coefficients $E(m,k)$ from Theorem 7.1 and compute the residues of $F_N(t)$ from (8.2), which is easy. This leads to the expressions above.
The corollary is proved.
Putting aside the endpoint $k=0$, we see that for fixed $\nu$ the quantity $\Lambda^N_1(\nu,k)$ is given by a piecewise polynomial function in $k$ of degree independent of $\nu$ (this degree is $2N-2$ for the series $\mathcal{C}$ and $\mathcal{B}$, and $2N-3$ for the series $\mathcal{D}$). Note that the structure of the above formulae is similar to that of the discrete $\mathrm{B}$-spline; cf. (1.7) and (1.4). Likewise, for $K\geqslant2$ the coefficients $\Lambda^N_K(\nu,\varkappa)$ (where $K=2,\dots,N-1$) can be written as the determinants of $K\times K$ matrices whose entries are one-dimensional piecewise polynomial functions.
The appearance of piecewise polynomial expressions is not too surprising. Similar effects arise in other spectral problems in representation theory, such as weight multiplicities or the decomposition of tensor products (see, for example, Billey, Guillemin and Rassart [4] and Rassart [43]). A specific feature of our problem, however, is that the description of the multidimensional picture can be expressed in terms of one-dimensional spline-type functions and, moreover, we end up with elementary formulae whose structure is much simpler than, say, that of the Kostant partition function for weight multiplicities.
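For illustration, the following Python sketch evaluates the weights of Corollary 8.2, (iii) (the series $\mathcal D$) for a sample signature: it checks that they are nonnegative and sum to $1$, and that they reproduce the expansion $F_N(t)=\sum_{k}\Lambda^N_1(\nu,k)\,g_k(t)$, which is the case $K=1$ of (4.26).

```python
# Weights Lambda^N_1(nu, k) of Corollary 8.2, (iii) for a sample signature nu.
from math import factorial, prod

def poch(x, n):  # rising factorial (x)_n
    return prod(x + j for j in range(n))

N = 4
nu = [5, 3, 3, 1]                              # a point of Sign^+_4
tp = [nu[i] + N - 1 - i for i in range(N)]     # t_i = nu_i + N - i (0-based i)

def F(t):  # the characteristic function (8.2) with eps = 0
    return prod((t**2 - (N - 1 - i)**2) / (t**2 - tp[i]**2) for i in range(N))

def resF(i):  # residue of F at its pole t = tp[i]
    return (prod(tp[i]**2 - (N - 1 - r)**2 for r in range(N))
            / (2 * tp[i] * prod(tp[i]**2 - tp[r]**2 for r in range(N) if r != i)))

def Lam(k):  # Lambda^N_1(nu, k) for the series D
    if k >= 1:
        return 2 * (N - 1) * sum(
            poch(nu[i] - i + 1 - k, 2*N - 3)
            / prod(tp[i]**2 - tp[r]**2 for r in range(N) if r != i)
            for i in range(N) if nu[i] - i >= k)
    return 1 - 2 * sum(resF(i) / (nu[i] + 2*N - 2 - i)
                       for i in range(N) if nu[i] - i >= 1)

def gD(k, t):  # g_k via (1.16) with (a, eps) = (-1/2, 0) and L = N
    return sum(poch(-k, j) * poch(k, j) * poch(N, j) * poch(N - 0.5, j)
               / (poch(-t + N, j) * poch(t + N, j) * poch(0.5, j) * factorial(j))
               for j in range(k + 1))

w = [Lam(k) for k in range(nu[0] + 1)]
print(min(w) >= 0, abs(sum(w) - 1) < 1e-12)    # a stochastic vector
for t in (1.3, 2.45):                          # the K = 1 case of (4.26)
    print(F(t), sum(w[k] * gD(k, t) for k in range(len(w))))
```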
§ 9. Concluding remarks
9.1. Contour integral representation
The large-$N$ limit transition mentioned in the introduction (§ 1.10, remark 5) is based on the possibility of transforming the sums in (8.4) and (8.5) into contour integrals. This contour integral representation is deduced from the next proposition, which is also of some independent interest.
Proposition 9.1. We keep to the assumptions and notation of § 8.1. Let $E(m,k)$ denote the transition coefficients given by the formulae in Theorem 7.1. We assume that $L\in\{2,3,\dots\}$ is fixed.
(i) There exists a function $R(t,k)$ of the variables $t\in\mathbb{C}$ and $k\in\{0,1,2,\dots\}$ such that $R_k(t):=R(t,k)$ is a rational function of $t$ for each fixed value of $k$ and
$$
\begin{equation*}
E(m,k)=R(A_m,k), \qquad m=1, 2,3,\dots\,.
\end{equation*}
\notag
$$
(ii) These properties determine $R(t,k)$ uniquely. (iii) For $k=1,2,\dots$ the function $R_k(t)$ does not have poles in the right half-plane
$$
\begin{equation*}
\mathcal H(\varepsilon,L):=\{t\in\mathbb C\colon \operatorname{Re}t>L+\varepsilon-1\}.
\end{equation*}
\notag
$$
We recall that
$$
\begin{equation}
A_m=L+\varepsilon+m-1, \quad \text{where } \varepsilon=1,\frac12,0.
\end{equation}
\tag{9.1}
$$
Proof of Proposition 9.1. (i) To define $R(t,k)$ we use (9.1) as a hint and simply replace $m$ by $t-L-\varepsilon+1$ in the formulae of Theorem 7.1. For $k=0$ we obviously obtain a rational function of $t$. For $k\geqslant1$ we write down the result by using the gamma function instead of factorials (to distinguish between the series $\mathcal{C}$, $\mathcal{B}$ and $\mathcal{D}$ we add the corresponding superscript, as before):
$$
\begin{equation*}
\begin{aligned} \, R^{(\mathcal C)}(t,k) &=2(k+1)(2L-2)(2L-1)\,\frac{t\,\Gamma(t-L)\Gamma(t+L-k-2)} {\Gamma(t-L-k+1)\Gamma(t+L+1)}, \\ R^{(\mathcal B)}(t,k) &=2\biggl(k+\frac12\biggr)(2L-2)(2L-1)\,\frac{\Gamma(t-L+1/2)\Gamma(t+L-k-3/2)} {\Gamma(t-L-k+3/2)\Gamma(t+L+1/2)} \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
R^{(\mathcal D)}(t,k) =2(2L-2)\,\frac{\Gamma(t-L+1)\Gamma(t+L-k-1)} {\Gamma(t-L-k+2)\Gamma(t+L)}.
\end{equation*}
\notag
$$
The property of rationality becomes obvious from these expressions: we use the fact that if $\alpha$ and $\beta$ are two constants such that $\alpha-\beta\in\mathbb{Z}$, then the ratio $\Gamma(t+\alpha)/\Gamma(t+\beta)$ is a rational function of $t$.
(ii) The uniqueness claim is obvious.
(iii) In each of the three variants there are different ways to factorize the expression containing the four gamma functions into the product of two fractions of the form
$$
\begin{equation*}
\frac{\Gamma(t+\alpha_1)}{\Gamma(t+\beta_1)}\cdot \frac{\Gamma(t+\alpha_2)}{\Gamma(t+\beta_2)},
\end{equation*}
\notag
$$
where $\alpha_1-\beta_1$ and $\alpha_2-\beta_2$ are integers. For our purposes it is convenient to form the first fraction by taking the second $\Gamma$-factor in the numerator and the first $\Gamma$-factor in the denominator. Then the first fraction is a polynomial in $t$. As for the second fraction, it has the form
$$
\begin{equation*}
\frac{\Gamma(t-L-\varepsilon+1)}{\Gamma(t+L+\varepsilon)}=\prod_{j=1}^{2L+2\varepsilon-1}\frac1{t-(L+\varepsilon-j)}
\end{equation*}
\notag
$$
and therefore it is regular in $\mathcal H(\varepsilon,L)$.
The proposition is proved.
9.2. A biorthogonal system of rational functions
Let $\mathcal{F}^0(\varepsilon,L)$ denote the codimension-$1$ subspace of $\mathcal{F}(\varepsilon,L)$ formed by the functions vanishing at infinity. As in the previous subsection, we fix $L\geqslant2$ and assume that $\varepsilon$ takes one of the three values $1$, $1/2$ and $0$. Because $g_k(\infty)=1$, the functions $g^0_k(t):=g_k(t)-1$, where $k=1,2,\dots$, form a basis of $\mathcal{F}^0(\varepsilon,L)$. The functions $R(t,k)$ and the half-plane $\mathcal H(\varepsilon,L)$ were introduced in Proposition 9.1.
Proposition 9.2. The two systems of rational functions $\{g_k^0(t)\}$ and $\{R_k(t)\}$ are biorthogonal in the following sense:
$$
\begin{equation*}
\frac1{2\pi i}\oint_C g^0_\ell(t)R_k(t)\,dt=\delta_{k\ell}, \qquad k,\ell=1,2,\dots,
\end{equation*}
\notag
$$
where for $C$ one can take an arbitrary simple contour in $\mathcal H(\varepsilon,L)$ which goes in the positive direction and encircles all the poles of $g_\ell(t)$ lying in $\mathcal H(\varepsilon,L)$.
Proof. The left-hand side is equal to the sum of residues of the integrand. Because $R_k(t)$ is regular in $\mathcal H(\varepsilon,L)$, we can replace $g^0_\ell(t)$ by $g_\ell(t)$ and write this sum as
$$
\begin{equation}
\sum_{m=1}^\infty \Bigl(\,\operatorname*{Res}_{t=A_m}g_\ell(t)\Bigr) R_k(A_m)=\sum_{m=1}^\infty \Bigl(\,\operatorname*{Res}_{t=A_m}g_\ell(t)\Bigr)E(m,k),
\end{equation}
\tag{9.2}
$$
where equality holds because $R_k(A_m)=E(m,k)$ (see Proposition 9.1, (i)).
On the other hand, by the definition of the coefficients $E(m,k)=(e_m:g_k)$, for any function $f\in\mathcal{F}(\varepsilon,L)$ one has
$$
\begin{equation*}
(f:g_k)=\sum_{m=1}^\infty \Bigl(\,\operatorname*{Res}_{t=A_m}f(t)\Bigr) E(m,k), \qquad k=1,2,3, \dots\,.
\end{equation*}
\notag
$$
Applying this to $f=g_\ell$ we conclude that (9.2) is equal to $(g_\ell:g_k)=\delta_{k\ell}$, as required.
The proposition is proved.
9.3. The degeneration $g_k(t) \to \widetilde P^{(a,b)}_k(x)$
Let
$$
\begin{equation*}
\widetilde P^{(a,b)}_k(x)=\frac{P^{(a,b)}_k(x)}{P^{(a,b)}_k(1)}, \qquad k=0,1,2,\dots,
\end{equation*}
\notag
$$
be the Jacobi polynomials with parameters $(a,b)$, normalized at the point $x=1$. Next, consider the rational functions $g_k(t;a,\varepsilon,L)$ given by the terminating hypergeometric series (1.16). Recall that $\varepsilon=\frac12(a+b+1)$. We rescale $t=sL$, where $s$ is a new variable, which is related to $x$ via
$$
\begin{equation*}
x=\frac{s^2+1}{s^2-1}=\frac12\biggl(\frac{s+1}{s-1}+\frac{s-1}{s+1}\biggr).
\end{equation*}
\notag
$$
Under these assumptions, the following limit relation holds:
$$
\begin{equation*}
\lim_{L\to\infty}g_k(sL; a,\varepsilon,L)=\widetilde P^{(a,b)}_k(x), \qquad k=0,1,2,\dots\,.
\end{equation*}
\notag
$$
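A quick numerical illustration of this degeneration (a sketch; the parameter values are arbitrary test choices):

```python
# g_k(sL; a, eps, L) approaches the normalized Jacobi polynomial as L grows.
from math import factorial, prod

def poch(x, n):
    return prod(x + j for j in range(n))

def g(k, t, a, eps, L):  # (1.16) as a finite sum
    return sum(poch(-k, j) * poch(k + 2*eps, j) * poch(L, j) * poch(L + a, j)
               / (poch(-t + L + eps, j) * poch(t + L + eps, j)
                  * poch(a + 1, j) * factorial(j))
               for j in range(k + 1))

a, b, k, s = 0.5, 0.3, 3, 2.0
eps = (a + b + 1) / 2
x = (s*s + 1) / (s*s - 1)
jacobi = sum(poch(-k, j) * poch(k + a + b + 1, j)
             / (poch(a + 1, j) * factorial(j)) * ((1 - x) / 2) ** j
             for j in range(k + 1))  # 2F1 form of the normalized Jacobi polynomial
for L in (10, 100, 1000):
    print(L, g(k, s * L, a, eps, L), jacobi)  # the difference shrinks as L grows
```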
This is easily verified on the basis of (1.16) and the expression for the Jacobi polynomials in terms of the Gauss hypergeometric function.
9.4. A three-term recurrence relation for the functions $g_k(t)$
Wilson's thesis [55] contains a list of three-term recurrence relations satisfied by every terminating balanced hypergeometric series
$$
\begin{equation*}
F={}_4F_3\biggl[\begin{matrix}a,\, b,\, c,\, d \\e,\, f,\, g\end{matrix}\biggm|1\biggr].
\end{equation*}
\notag
$$
One of them (formula (4.9) in [55]) reads
$$
\begin{equation*}
\begin{aligned} \, &\frac{a(e-b)(f-b)(g-b)}{a-b+1}(F(a^+,b^-)-F) \\ &\qquad-\frac{b(e-a)(f-a)(g-a)}{b-a+1}(F(a^-,b^+)-F)+cd(a-b)F=0. \end{aligned}
\end{equation*}
\notag
$$
Here $F(a^+,b^-)$ denotes the series obtained from $F$ by replacing $(a,b)$ with $(a+1,b-1)$, and similarly for $F(a^-,b^+)$. This formula is applicable to the ${}_4F_3$ series defining the rational functions $g_k(t)$ (see (1.16)). It follows that the functions $g_k(t)$ satisfy a three-term recurrence relation, which is of the type investigated by Zhedanov [56].
Bibliography
1. M. Aissen, I. J. Schoenberg and A. M. Whitney, “On the generating functions of totally positive sequences. I”, J. Anal. Math., 2 (1952), 93–103
2. G. E. Andrews, R. Askey and R. Roy, Special functions, Encyclopedia Math. Appl., 71, Cambridge Univ. Press, Cambridge, 1999, xvi+664 pp.
3. W. N. Bailey, Generalized hypergeometric series, Cambridge Tracts in Math. and Math. Phys., 32, Cambridge Univ. Press, Cambridge, 1935, vii+108 pp.
4. S. Billey, V. Guillemin and E. Rassart, “A vector partition function for the multiplicities of $\mathfrak{sl}_k\mathbb C$”, J. Algebra, 278:1 (2004), 251–293
5. A. Borodin and G. Olshanski, “The boundary of the Gelfand-Tsetlin graph: a new approach”, Adv. Math., 230:4–6 (2012), 1738–1779
6. A. Borodin and G. Olshanski, “The Young bouquet and its boundary”, Mosc. Math. J., 13:2 (2013), 193–232
7. A. Borodin and G. Olshanski, Representations of the infinite symmetric group, Cambridge Stud. Adv. Math., 160, Cambridge Univ. Press, Cambridge, 2017, vii+160 pp.
8. R. P. Boyer, “Characters and factor representations of the infinite dimensional classical groups”, J. Operator Theory, 28:2 (1992), 281–307
9. Yu. A. Brychkov, Handbook of special functions. Derivatives, integrals, series and other formulas, CRC Press, Boca Raton, FL, 2008, xx+680 pp.
10. G. Budakçi and H. Oruç, “Further properties of quantum spline spaces”, Mathematics, 8:10 (2020), 1770, 10 pp.
11. H. B. Curry and I. J. Schoenberg, “On Pólya frequency functions. IV. The fundamental spline functions and their limits”, J. Anal. Math., 17 (1966), 71–107
12. M. Defosseux, “Orbit measures, random matrix theory and interlaced determinantal processes”, Ann. Inst. Henri Poincaré Probab. Stat., 46:1 (2010), 209–249
13. A. Edrei, “On the generating functions of totally positive sequences. II”, J. Anal. Math., 2 (1952), 104–109
14. A. Edrei, “On the generating function of a doubly infinite, totally positive sequence”, Trans. Amer. Math. Soc., 74 (1953), 367–383
15. A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher transcendental functions, Based, in part, on notes left by H. Bateman, v. 2, McGraw-Hill Book Company, Inc., New York–Toronto–London, 1953, xvii+396 pp.
16. J. Faraut, “Rayleigh theorem, projection of orbital measures and spline functions”, Adv. Pure Appl. Math., 6:4 (2015), 261–283
17. I. Gessel and D. Stanton, “Strange evaluations of hypergeometric series”, SIAM J. Math. Anal., 13:2 (1982), 295–308
18. V. Gorin, “The $q$-Gelfand-Tsetlin graph, Gibbs measures and $q$-Toeplitz matrices”, Adv. Math., 229:1 (2012), 201–266
19. V. Gorin and G. Olshanski, “A quantization of the harmonic analysis on the infinite-dimensional unitary group”, J. Funct. Anal., 270:1 (2016), 375–418
20. V. Gorin and G. Panova, “Asymptotics of symmetric polynomials with applications to statistical mechanics and representation theory”, Ann. Probab., 43:6 (2015), 3052–3132
21. G. Heckman and G. Schlichtkrull, Harmonic analysis and special functions on symmetric spaces, Perspect. Math., 16, Academic Press, Inc., San Diego, CA, 1994, xii+225 pp.
22. S. Kerov, A. Okounkov and G. Olshanski, “The boundary of the Young graph with Jack edge multiplicities”, Int. Math. Res. Not. IMRN, 1998:4 (1998), 173–199
23. C. Krattenthaler and K. Srinivasa Rao, “Automatic generation of hypergeometric identities by the beta integral method”, J. Comput. Appl. Math., 160:1–2 (2003), 159–173
24. M. Lassalle, “Polynômes de Jacobi généralisés”, C. R. Acad. Sci. Paris Sér. I Math., 312:6 (1991), 425–428
25. F. W. Lawvere, The category of probabilistic mappings, 1962, https://ncatlab.org/nlab/files/lawvereprobability1962.pdf
26. I. G. Macdonald, “Schur functions: theme and variations”, Séminaire Lotharingien de combinatoire, 28th session (Saint-Nabor 1992), Publ. Inst. Rech. Math. Av., Univ. Louis Pasteur, Dép. Math., Inst. Rech. Math. Av., Strasbourg, 1992, 5–39
27. A. I. Molev, “Comultiplication rules for the double Schur functions and Cauchy identities”, Electron. J. Combin., 16:1 (2009), R13, 44 pp.
28. J. Nakagawa, M. Noumi, M. Shirakawa and Y. Yamada, “Tableau representation for Macdonald's ninth variation of Schur functions”, Physics and combinatorics, 2000 (Nagoya), World Sci. Publ., River Edge, NJ, 2001, 180–195
29. A. Okounkov and G. Olshanskii, “Shifted Schur functions”, St. Petersburg Math. J., 9:2 (1998), 239–300
30. A. Okounkov and G. Olshanski, “Shifted Schur functions. II. The binomial formula for characters of classical groups and its applications”, Kirillov's seminar on representation theory, Amer. Math. Soc. Transl. Ser. 2, 181, Adv. Math. Sci., 35, Amer. Math. Soc., Providence, RI, 1998, 245–271
31. A. Okounkov and G. Olshanski, “Asymptotics of Jack polynomials as the number of variables goes to infinity”, Int. Math. Res. Not. IMRN, 1998:13 (1998), 641–682
32. A. Okounkov and G. Olshanski, “Limits of $BC$-type orthogonal polynomials as the number of variables goes to infinity”, Jack, Hall-Littlewood and Macdonald polynomials, Contemp. Math., 417, Amer. Math. Soc., Providence, RI, 2006, 281–318
33. G. Olshanski, “Projections of orbital measures, Gelfand-Tsetlin polytopes, and splines”, J. Lie Theory, 23:4 (2013), 1011–1022
34. G. I. Ol'shanskii, “Unitary representations of the infinite-dimensional classical groups $U(p,\infty)$, $SO(p,\infty)$, $Sp(p,\infty)$ and the corresponding motion groups”, Funct. Anal. Appl., 12:3 (1978), 185–195
35. G. Olshanski, “The Gelfand-Tsetlin graph and Markov processes”, Proceedings of the international congress of mathematicians (Seoul 2014), v. IV, Kyung Moon Sa, Seoul, 2014, 431–453; http://www.icm2014.org/en/vod/proceedings.html; arXiv: 1404.3646
36. G. I. Olshanski, “Extended Gelfand-Tsetlin graph, its $q$-boundary, and $q$-B-splines”, Funct. Anal. Appl., 50:2 (2016), 107–130
37. G. Olshanski, “Interpolation Macdonald polynomials and Cauchy-type identities”, J. Combin. Theory Ser. A, 162 (2019), 65–117
38. G. Olshanski and A. Vershik, “Ergodic unitarily invariant measures on the space of infinite Hermitian matrices”, Contemporary mathematical physics, F. A. Berezin memorial volume, Amer. Math. Soc. Transl. Ser. 2, 175, Adv. Math. Sci., 31, Amer. Math. Soc., Providence, RI, 1996, 137–175
39. L. Petrov, “The boundary of the Gelfand-Tsetlin graph: new proof of Borodin-Olshanski's formula, and its $q$-analogue”, Mosc. Math. J., 14:1 (2014), 121–160
40. D. Pickrell, “Separable representations for automorphism groups of infinite symmetric spaces”, J. Funct. Anal., 90:1 (1990), 1–26
41. A. P. Prudnikov, Yu. A. Brychkov and O. I. Marichev, Integrals and series, v. 3, More special functions, Gordon and Breach Sci. Publ., New York, 1990, 800 pp.
42. E. M. Rains, A letter to the author, June 13, 2013
43. E. Rassart, “A polynomiality property for Littlewood-Richardson coefficients”, J. Combin. Theory Ser. A, 107:2 (2004), 161–179
44. I. J. Schoenberg, “Metric spaces and completely monotone functions”, Ann. of Math. (2), 39:4 (1938), 811–841
45. I. J. Schoenberg, “On Pólya frequency functions. I. The totally positive functions and their Laplace transforms”, J. Anal. Math., 1 (1951), 331–374
46. L. L. Schumaker, Spline functions: basic theory, Cambridge Math. Lib., 3rd ed., Cambridge Univ. Press, Cambridge, 2007, xvi+582 pp.
47. A. N. Sergeev and A. P. Veselov, “Jacobi-Trudy formula for generalized Schur polynomials”, Mosc. Math. J., 14:1 (2014), 161–168; arXiv: 0905.2557
48. P. Simeonov and R. Goldman, “Quantum B-splines”, BIT, 53:1 (2013), 193–223
49. L. J. Slater, Generalized hypergeometric functions, Cambridge Univ. Press, Cambridge, 1966, xiii+273 pp.
50. G. Szegő, Orthogonal polynomials, Amer. Math. Soc. Colloq. Publ., 23, Rev. ed., Amer. Math. Soc., Providence, RI, 1959, ix+421 pp.
51. E. Thoma, “Die unzerlegbaren, positiv-definiten Klassenfunktionen der abzählbar unendlichen, symmetrischen Gruppe”, Math. Z., 85 (1964), 40–61
52. A. M. Vershik and S. V. Kerov, “Asymptotic theory of characters of the symmetric group”, Funct. Anal. Appl., 15:4 (1981), 246–255
53. A. M. Vershik and S. V. Kerov, “Characters and factor representations of the infinite unitary group”, Soviet Math. Dokl., 26:3 (1982), 570–574
54. D. Voiculescu, “Représentations factorielles de type $II_1$ de $U(\infty)$”, J. Math. Pures Appl. (9), 55:1 (1976), 1–20
55. J. A. Wilson, Hypergeometric series recurrence relations and some new orthogonal functions, Ph.D. Thesis, Univ. of Wisconsin, Madison, WI, 1978, 67 pp.
56. A. Zhedanov, “Biorthogonal rational functions and the generalized eigenvalue problem”, J. Approx. Theory, 101:2 (1999), 303–329
57. D. P. Želobenko, Compact Lie groups and their representations, Transl. Math. Monogr., 40, Amer. Math. Soc., Providence, RI, 1973, viii+448 pp.
58. D. I. Zubov, “Projections of orbital measures for classical Lie groups”, Funct. Anal. Appl., 50:3 (2016), 228–232