Izvestiya: Mathematics, 2023, Volume 87, Issue 6, Pages 1117–1147
DOI: https://doi.org/10.4213/im9150e

A functional realization of the Gelfand–Tsetlin base

D. V. Artamonov

Lomonosov Moscow State University, Moscow, Russia
Abstract: A realization of a finite dimensional irreducible representation of the Lie algebra $\mathfrak{gl}_n$ in the space of functions on the group $\mathrm{GL}_n$ is considered. It is proved that the functions corresponding to Gelfand–Tsetlin diagrams are linear combinations of certain new functions of hypergeometric type, which are closely related to $A$-hypergeometric functions. These new functions are solutions of a system of partial differential equations obtained from the Gelfand–Kapranov–Zelevinsky system by an “antisymmetrization”. The coefficients in the constructed linear combinations are hypergeometric constants, that is, values of certain hypergeometric functions at the point where all arguments are equal to one.
Keywords: the Gelfand–Tsetlin base, hypergeometric functions, the Gelfand–Kapranov–Zelevinsky system.
Received: 07.02.2021
Revised: 04.10.2022
Document Type: Article
UDC: 517.986.68
Language: English
Original paper language: English

§ 1. Introduction

In 1950, Gelfand and Tsetlin published a short paper [1], where they gave an indexation of the base vectors in an irreducible finite-dimensional representation of the Lie algebra $\mathfrak{gl}_n$ and presented formulas for the action of the generators of the algebra in this base. That paper contains no derivation of the presented formulas, and it was not translated into English. Nevertheless, the results became known in the West, and there appeared attempts to reproduce the construction of the base vectors and to reprove the formulas for the action of the generators. In 1963, there appeared a paper by Biedenharn and Baird [2], where this was done.

In Biedenharn and Baird’s paper [2], in the case $\mathfrak{gl}_3$, a very interesting derivation of the Gelfand–Tsetlin formulas is given. Consider a realization of a representation in the space of functions on the group $\mathrm{GL}_3$. Then a function corresponding to a Gelfand–Tsetlin base vector can be expressed through the Gauss hypergeometric function $F_{2,1}$, and the formulas for the action of the generators turn out to be consequences of the contiguous relations for this function.

In [3], this approach is used to obtain explicit formulas for the Clebsch–Gordan coefficients for the algebra $\mathfrak{gl}_3$. This approach is also used to obtain an explicit construction of an infinite-dimensional representation of $\mathfrak{gl}_3$ (see [4]). There exist generalizations of the results of [2] to the case of quantum algebras (see [5], [6]). Recently, a generalization to the case $\mathfrak{sp}_4$ was obtained [7].

In the 1960s, it was not possible to obtain a generalization of these constructions to the case $\mathfrak{gl}_n$, since at that time the theory of multivariate hypergeometric functions was not well developed. The theory of $A$-hypergeometric functions did not exist yet, and the system of equations for these functions, that is, the Gelfand–Kapranov–Zelevinsky system (the GKZ system for short), was not known. All these objects appeared only in the 1980s. In the present paper, using these results, we generalize the results of Biedenharn and Baird to the case of general $n$.

The main result of the present paper is a formula for a function on the group that corresponds to a Gelfand–Tsetlin base vector (Theorems 5, 6). This result makes it possible to give a new derivation of the formulas for the action of generators, to obtain formulas for the Clebsch–Gordan coefficients, and so on.

The passage from the case $n=3$ to the case $n>3$ requires new ideas and methods. In the case $n=3$, a formula for a function $f$ corresponding to a Gelfand–Tsetlin diagram is derived using a presentation of $f$ as the result of an application of lowering operators to a highest vector. It is not possible to generalize these considerations from the case $\mathfrak{gl}_3$ to the case $\mathfrak{gl}_n$, $n>3$, since the formulas for the lowering operators $\nabla_{n,k}$ become very complicated (see [8]). Also, in the case $n\geqslant 4$ there appears a new difficulty: the arguments of the function $f$, which are minors of a matrix, are not independent, they satisfy the Plücker relations.

A possible way to overcome these difficulties is to use ideas from complex analysis to find an analogue in the case $n\geqslant 4$ of the function corresponding to a Gelfand–Tsetlin diagram in the case $n=3$. This is done in the present paper. A function of an element $g\in \mathrm{GL}_n$, by analogy with the case $n=3$, is written as a function of minors of $g$. We note that in the case $\mathfrak{gl}_3$, the considered function can be written as an $A$-hypergeometric function. By analogy with the case $\mathfrak{gl}_3$, for $n>3$ we try to find the function of interest as an $A$-hypergeometric function of minors. Such a function is defined as the sum of a series called a $\Gamma$-series. A $\Gamma$-series is a sum of monomials divided by factorials of their exponents, and the set of exponents of the monomials in this series is a shifted lattice in the space of all possible exponents. It turns out that one can associate with a Gelfand–Tsetlin diagram a system of equations that defines a shifted lattice in the space of exponents (see § 4.1). Thus, to each Gelfand–Tsetlin diagram there corresponds a $\Gamma$-series (which is actually a finite sum). It is proved that the constructed functions belong to a canonical embedding of an irreducible finite dimensional representation into the functional representation and form a base in it. But this approach does not yet give a solution of the proposed problem. Even in the case $\mathfrak{gl}_4$, the constructed functions do not correspond to Gelfand–Tsetlin base vectors. Nevertheless, the constructed base is related to the Gelfand–Tsetlin base by a transformation which is upper-triangular relative to some order on diagrams (see § 6).

In order to prove that the constructed $\Gamma$-series form a base in a representation, a new system of PDEs (partial differential equations) is constructed. This system is called the antisymmetrized GKZ system (A-GKZ for short, see § 5). We construct a base in the space of its polynomial solutions. It turns out that there is a bijective correspondence between the constructed $\Gamma$-series and the constructed basic solutions of the A-GKZ system (Theorem 2).

Then we show that the constructed basic solutions also belong to a canonical embedding of an irreducible finite dimensional representation of $\mathfrak{gl}_n$ into the functional representation and form a base in it. This base is related to the Gelfand–Tsetlin base by a lower-triangular transformation (see § 7.2). We express a function corresponding to a Gelfand–Tsetlin diagram through these basic solutions.

Since the Gelfand–Tsetlin base is orthogonal relative to an invariant scalar product, the passage from the base consisting of basic solutions of the A-GKZ system to the Gelfand–Tsetlin base is nothing but an orthogonalization transformation. To write this transformation explicitly, one needs to find the scalar products between the basic solutions (see § 7.1). When this is done, one finds a lower-triangular change of coordinates that diagonalizes the bilinear form of the considered scalar product (see § 7.4). Finally, when one has the diagonalizing change of coordinates, one can find the corresponding orthogonal base, which is nothing but the Gelfand–Tsetlin base (see Theorems 5, 6).

In these theorems, the functions corresponding to the Gelfand–Tsetlin base vectors are expressed through the basic solutions of the A-GKZ system using numeric coefficients written as sums of certain series. In § 7.5, we try to convince the reader that the obtained formula for the functions corresponding to the Gelfand–Tsetlin diagrams is good enough. The basic solutions of the A-GKZ system are Horn hypergeometric functions, and in § 7.5 the coefficients occurring in Theorems 5, 6 are discussed. It is shown that they are hypergeometric constants, that is, values of generalized hypergeometric functions (in the Horn sense) at the point where all their arguments equal $1$. These generalized hypergeometric functions are expressed (see (7.13)) through the Horn functions associated with the $A$-hypergeometric functions constructed in § 4.1.

Remark 1. One can overcome some difficulties caused by the Plücker relations by considering, as in [2], a realization using the creation and annihilation operators (or a realization based on the Weyl construction). But the usage of dependent minors has some fundamental advantages. For example, in this case we have a simple description of the space of functions forming an irreducible representation [8], and also the presence of the relations suggests some fundamental steps in the construction of the functions of interest.

§ 2. Preliminary facts

In this section, some basic objects and constructions are introduced. The theorem for the case $\mathfrak{gl}_3$ is formulated; this result was a starting point for the present paper.

2.1. A realization in the space of functions on a group

In the paper, Lie groups and algebras over the field $\mathbb{C}$ are considered.

Functions on the group $\mathrm{GL}_n$ form a representation of the group $\mathrm{GL}_n$. An element $X\in \mathrm{GL}_{n}$ acts on a function $f(g)$, $g\in \mathrm{GL}_n$, by a right shift

$$ \begin{equation} (Xf)(g)=f(gX). \end{equation} \tag{2.1} $$

Passing to an infinitesimal version of this action, we find the action of the Lie algebra $\mathfrak{gl}_n$ on the space of all functions.

Every irreducible finite-dimensional representation can be realized as a sub-representation in the space of functions. Let $[m_{1},\dots,m_{n}=0]$ be the highest weight, then in the space of functions there exists a highest vector with such a weight which is written as follows.

Let $a_{i}^{j}$, $i,j=1,\dots,n$, be the matrix element functions on the group $\mathrm{GL}_{n}$. Here $j$ is a row index and $i$ is a column index. Also put

$$ \begin{equation} a_{i_1,\dots,i_k}:=\operatorname{det}(a_i^j)_{i=i_1,\dots,i_k}^{j=1,\dots,k}. \end{equation} \tag{2.2} $$

That is, one takes the determinant of the sub-matrix in the matrix $(a_i^j)$ formed by the rows $1,\dots,k$ and the columns $i_1,\dots,i_k$. The operator $E_{i,j}$ acts on this determinant by changing the column indices

$$ \begin{equation} E_{i,j}a_{i_1,\dots,i_k}=a_{\{i_1,\dots,i_k\}|_{j\mapsto i}}, \end{equation} \tag{2.3} $$
where $\cdot\big|_{j\mapsto i}$ denotes the substitution of $i$ for $j$. If the index $j$ does not occur in $\{i_1,\dots,i_k\}$, then one obtains zero.
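The action (2.3) can be verified directly in small cases. The following minimal Python sketch (using sympy; the size $n=3$ and the chosen indices are illustrative assumptions) compares the infinitesimal version of the right shift (2.1) applied to a minor with the column substitution prescribed by (2.3):

```python
import sympy as sp

n = 3                                           # illustrative size
t = sp.Symbol('t')
g = sp.Matrix(n, n, lambda r, c: sp.Symbol(f'g{r+1}{c+1}'))

def a(cols, mat):
    # the determinant (2.2): rows 1,...,k and the listed columns
    return mat.extract(list(range(len(cols))), [c - 1 for c in cols]).det()

def E(i, j):
    m = sp.zeros(n, n)
    m[i - 1, j - 1] = 1
    return m

# E_{2,3} acting on a_{1,3} through the infinitesimal version of the right shift (2.1)
i, j, cols = 2, 3, (1, 3)
lhs = sp.diff(a(cols, g * (sp.eye(n) + t * E(i, j))), t).subs(t, 0)
rhs = a(tuple(i if c == j else c for c in cols), g)   # the substitution j -> i from (2.3)
print(sp.simplify(lhs - rhs))                         # expected: 0
```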

Using (2.3), one can easily see that the vector

$$ \begin{equation} v_0=\frac{a_{1}^{m_{1}-m_{2}}}{(m_1-m_2)!}\,\frac{a_{1,2}^{m_{2}-m_{3}}}{(m_2-m_3)!} \,\cdots\,\frac{a_{1,2,\dots,n-1}^{m_{n-1}}}{m_{n-1}!} \end{equation} \tag{2.4} $$
is a highest vector for the algebra $\mathfrak{gl}_{n}$ with weight $[m_{1},m_{2},\dots,m_{n-1},0]$. Thus, we have a canonical embedding of an irreducible finite-dimensional representation into the functional representation.

If one considers a highest weight with non-zero component $m_n$, then in all formulas below one must change $m_{n-1}\mapsto m_{n-1}-m_{n}$ and multiply all expressions by $a_{1,2,\dots,n}^{m_n}$. To make the formulas less cumbersome, we put $m_n=0$.

2.2. The Gelfand–Tsetlin base

Consider the chain of subalgebras $\mathfrak{gl}_n\supset\mathfrak{gl}_{n-1}\supset\dots\supset \mathfrak{gl}_1$. Let us be given an irreducible representation $V_{\mu_n }$ of the algebra $\mathfrak{gl}_n$ with the highest weight $\mu_n$. When one restricts the algebra $\mathfrak{gl}_n\downarrow \mathfrak{gl}_{n-1}$, the representation ceases to be irreducible and splits into a direct sum of irreducible representations of $\mathfrak{gl}_{n-1}$. Every irreducible representation of $\mathfrak{gl}_{n-1}$ can occur in this direct sum with multiplicity not greater than one [8]. Hence

$$ \begin{equation*} V_{\mu_n}=\sum_{\mu_{n-1}} V_{\mu_n;\mu_{n-1}}, \end{equation*} \notag $$
where $\mu_{n-1}$ is a possible $\mathfrak{gl}_{n-1}$-highest weight and $ V_{\mu_n;\mu_{n-1}}$ is a representation of $\mathfrak{gl}_{n-1}$ with the highest weight $\mu_{n-1}$. The sum is taken over all $\mathfrak{gl}_{n-1}$-highest weights occurring in the decomposition of $V_{\mu_n }$ into irreducible representations.

When one continues restrictions $\mathfrak{gl}_{n-1}\downarrow \mathfrak{gl}_{n-2}$ and so on, one gets a splitting of the following type:

$$ \begin{equation} V=\sum_{\mu_{n-1}} \sum_{\mu_{n-2}}\cdots \sum_{\mu_1}V_{\mu_n;\dots;\mu_1}. \end{equation} \tag{2.5} $$
Here $V_{\mu_n;\dots;\mu_1}$ is an irreducible finite dimensional representation of the algebra $\mathfrak{gl}_1$, thus $\operatorname{dim}V_{\mu_n;\dots;\mu_1}=1$. When one chooses base vectors in all the spaces $V_{\mu_n;\dots;\mu_1}$, one gets a base in $V_{\mu_n}$, which is the Gelfand–Tsetlin base.

The base vectors are encoded by sets of highest weights $(\mu_n;\dots;\mu_1)$ appearing in splitting (2.5). Writing these highest weights one under another as

$$ \begin{equation*} \begin{pmatrix} m_{1,n}&& m_{2,n}&&\dots&& m_{n,n} \\ & m_{1,n-1}&& m_{2,n-1}&\dots& m_{n-1,n-1} \\ &&&&\dots \\ &&&&m_{1,1} \end{pmatrix}, \end{equation*} \notag $$
we obtain a diagram that is called the Gelfand–Tsetlin diagram. We denote it by $(m_{i,j})$. For the elements of this diagram, the betweenness condition holds: every element of a row lies, in the numeric sense, between its two neighbours in the row above, that is, $m_{i,j+1}\geqslant m_{i,j}\geqslant m_{i+1,j+1}$. The converse is also true: every diagram for which the betweenness condition holds appears as a Gelfand–Tsetlin diagram in the splitting (2.5).
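As an illustration of the betweenness condition, the following small Python sketch enumerates all Gelfand–Tsetlin diagrams with a given top row (the sample top row $(2,1,0)$ is an arbitrary choice); for this weight the count is $8$, the dimension of the corresponding representation of $\mathfrak{gl}_3$:

```python
from itertools import product

def satisfies_betweenness(rows):
    # rows[0] is the top row; each next row is one entry shorter
    return all(rows[j][i] >= rows[j + 1][i] >= rows[j][i + 1]
               for j in range(len(rows) - 1)
               for i in range(len(rows[j + 1])))

def gt_diagrams(top):
    # enumerate all Gelfand-Tsetlin diagrams with the given top row
    diagrams = [[tuple(top)]]
    for length in range(len(top) - 1, 0, -1):
        new = []
        for d in diagrams:
            prev = d[-1]
            ranges = [range(prev[i + 1], prev[i] + 1) for i in range(length)]
            for row in product(*ranges):
                if all(row[i] >= row[i + 1] for i in range(length - 1)):
                    new.append(d + [row])
        diagrams = new
    return diagrams

ds = gt_diagrams((2, 1, 0))
print(len(ds), all(satisfies_betweenness(d) for d in ds))   # 8 True
```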

2.3. $A$-hypergeometric functions

2.3.1. A $\Gamma$-series

One can find information about a $\Gamma$-series in [9].

Let $B\subset \mathbb{Z}^N$ be a lattice, let $\gamma\in \mathbb{Z}^N$ be a fixed vector. Define a hypergeometric $\Gamma$-series in variables $z_1,\dots,z_N$ by the formula

$$ \begin{equation} \mathcal{F}_{\gamma}(z)=\sum_{b\in B}\frac{z^{b+\gamma}}{\Gamma(b+\gamma+1)}, \end{equation} \tag{2.6} $$
where $z=(z_1,\dots,z_N)$. In the numerator and in the denominator, the multi-index notation is used
$$ \begin{equation*} z^{b+\gamma}:=\prod_{i=1}^N z_i^{b_i+\gamma_i},\qquad \Gamma(b+\gamma+1):=\prod_{i=1}^N\Gamma(b_i+\gamma_i+1). \end{equation*} \notag $$

For the $\Gamma$-series considered in the present paper, the vectors of exponents $b+\gamma$ have integer coordinates. In this case, instead of $\Gamma$-functions, it is reasonable to use shorter notation with factorials. Hence in the denominator instead of $\Gamma(b+\gamma+1)$ we shall write

$$ \begin{equation*} (b+\gamma)!:=\prod_{i=1}^N(b_i+\gamma_i)!\,. \end{equation*} \notag $$

We shall use the convention that the factorial of a negative integer equals infinity.

We need the following properties of $\Gamma$-series.

1. A vector $\gamma$ can be changed to $\gamma+b$, $b\in B$, the series remaining unchanged.

2. $\partial \mathcal{F}_{\gamma,B}(z)/\partial z_i=\mathcal{F}_{\gamma-e_i,B}(z)$, where $e_i=(0,\dots,1_{\text{at the place }i},\dots,0)$.

3. Let

$$ \begin{equation*} F_{2,1}(a_1,a_2,b_1;z)=\sum_{n\in\mathbb{Z}^{\geqslant 0}}\frac{(a_1)_n(a_2)_n}{(b_1)_n\, n!}z^n \end{equation*} \notag $$
be the Gauss hypergeometric series, where $(a)_n=\Gamma(a+n)/\Gamma(a)$ is the Pochhammer symbol. Then, for $\gamma=(-a_1,-a_2,b_1-1,0)$ and $B=\mathbb{Z}\langle (-1,-1,1,1)\rangle $,
$$ \begin{equation*} \begin{gathered} \, \mathcal{F}_{\gamma}(z_1,z_2,z_3,z_4) =cz_1^{-a_1}z_2^{-a_2}z_3^{b_1-1}F_{2,1}\biggl(a_1,a_2,b_1;\frac{z_3z_4}{z_1z_2}\biggr), \\ c=\frac{1}{\Gamma(1-a_1)\Gamma(1-a_2)\Gamma(b_1)}. \end{gathered} \end{equation*} \notag $$
A sum of a $\Gamma$-series (when it converges) is called an $A$-hypergeometric function.
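Property 3 can be checked computationally in a terminating case. The following sketch (using sympy; the parameter values $a_1=-2$, $a_2=-3$, $b_1=2$ are an arbitrary terminating choice) builds the finite $\Gamma$-series for $B=\mathbb{Z}\langle(-1,-1,1,1)\rangle$ and compares it with the right-hand side of property 3:

```python
import sympy as sp
from math import factorial, prod

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')

# gamma = (-a1, -a2, b1-1, 0) = (2, 3, 1, 0) for a1 = -2, a2 = -3, b1 = 2
gamma, v = (2, 3, 1, 0), (-1, -1, 1, 1)

F = sp.S(0)
for t in range(-10, 11):
    e = [gamma[i] + t * v[i] for i in range(4)]
    if min(e) < 0:                       # factorial of a negative integer = infinity
        continue
    F += z1**e[0] * z2**e[1] * z3**e[2] * z4**e[3] / prod(map(factorial, e))

# right-hand side of property 3
a1, a2, b1 = -2, -3, 2
x = z3 * z4 / (z1 * z2)
gauss = sum(sp.rf(a1, n) * sp.rf(a2, n) / (sp.rf(b1, n) * factorial(n)) * x**n
            for n in range(4))           # terminates after n = 2
c = sp.Rational(1, factorial(-a1) * factorial(-a2) * factorial(b1 - 1))
rhs = c * z1**(-a1) * z2**(-a2) * z3**(b1 - 1) * gauss
print(sp.simplify(F - rhs))              # expected: 0
```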

2.3.2. The Gelfand–Kapranov–Zelevinsky system

An $A$-hypergeometric function satisfies a system of partial differential equations which consists of equations of two types.

1. Let $a=(a_1,\dots,a_N)$ be a vector orthogonal to the lattice $B$, then

$$ \begin{equation} a_1z_1\, \frac{\partial}{\partial z_1}\mathcal{F}_{\gamma}+\dots+a_Nz_N\, \frac{\partial}{\partial z_N}\mathcal{F}_{\gamma}=(a_1\gamma_1+\dots+a_N\gamma_N)\mathcal{F}_{\gamma}. \end{equation} \tag{2.7} $$
It is enough to consider only the base vectors of the orthogonal complement to $B$.

2. Let $b\in B$ and $b=b_+-b_-$, where all coordinates of $b_+$, $b_-$ are non-negative. Let us select non-zero elements in these vectors $b_+=(\dots, b_{i_1},\dots,b_{i_k},\dots)$, $b_-=(\dots, b_{j_1},\dots,b_{j_l},\dots)$. Then

$$ \begin{equation} \biggl(\frac{\partial }{\partial z_{i_1}}\biggr)^{b_{i_1}}\cdots\biggl(\frac{\partial}{\partial z_{i_k}}\biggr)^{b_{i_k}} \mathcal{F}_{\gamma}=\biggl(\frac{\partial }{\partial z_{j_1}}\biggr)^{b_{j_1}} \cdots\biggl(\frac{\partial }{\partial z_{j_l}}\biggr)^{b_{j_l}} \mathcal{F}_{\gamma}. \end{equation} \tag{2.8} $$
It is enough to consider only the base vectors $b\in B$.
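Both types of equations can be verified directly in a small example. In the sketch below (sympy; the data $\gamma=(2,3,1,0)$, $B=\mathbb{Z}\langle(-1,-1,1,1)\rangle$ are the same illustrative choice as in the previous sketch), the vectors $(1,-1,0,0)$ and $(1,0,1,0)$ are two vectors orthogonal to $B$, and $b=(-1,-1,1,1)$ is the generator of $B$:

```python
import sympy as sp
from math import factorial, prod

z = sp.symbols('z1:5')                       # (z1, z2, z3, z4)
gamma, v = (2, 3, 1, 0), (-1, -1, 1, 1)      # illustrative data

F = sp.S(0)
for t in range(-10, 11):
    e = [gamma[i] + t * v[i] for i in range(4)]
    if min(e) < 0:
        continue
    F += sp.Mul(*[z[i]**e[i] for i in range(4)]) / prod(map(factorial, e))

# homogeneity equations (2.7): (1,-1,0,0) and (1,0,1,0) are orthogonal to B
for a in [(1, -1, 0, 0), (1, 0, 1, 0)]:
    lhs = sum(a[i] * z[i] * sp.diff(F, z[i]) for i in range(4))
    rhs = sum(a[i] * gamma[i] for i in range(4)) * F
    print(sp.simplify(lhs - rhs))            # expected: 0

# equation (2.8) for the generator b = (-1,-1,1,1): b_+ = e_3 + e_4, b_- = e_1 + e_2
print(sp.simplify(sp.diff(F, z[2], z[3]) - sp.diff(F, z[0], z[1])))   # expected: 0
```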

2.4. The case of $\mathfrak{gl}_3$

The results presented in this section were obtained in [2]; from a modern point of view, they are reformulated in [10].

A Gelfand–Tsetlin diagram in the case $\mathfrak{gl}_3$ can be written as follows:

$$ \begin{equation} \begin{pmatrix} m_{1} && m_{2} &&0\\ &k_{1}&& k_{2}\\&&h_{1} \end{pmatrix}. \end{equation} \tag{2.9} $$

Let us give a formula for the function corresponding to the Gelfand–Tsetlin diagram (2.9).

Theorem 1. Order the determinants (2.2) as follows:

$$ \begin{equation*} a=(a_{1},a_{2},a_{3},a_{1,2},a_{1,3},a_{2,3}), \end{equation*} \notag $$
take a lattice
$$ \begin{equation*} B=\mathbb{Z}\langle (1,-1,0,0,-1,1)\rangle, \end{equation*} \notag $$
and let $\gamma=(h_{1}-m_{2},\,k_{1}-h_{1},\, m_{1}-k_{1},\, k_{2},\, m_{2}-k_{2},\,0)$. Then to the diagram (2.9) there corresponds the function $\mathcal{F}_{\gamma}(a)$.

This $\Gamma$-series can be expressed through the Gauss hypergeometric function (see property 3 for the $\Gamma$-series). In this form, the above theorem was obtained in [2]. One can easily see that the considered $\Gamma$-series is a finite sum.
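The following sketch makes Theorem 1 concrete for one diagram (the values $m_1=4$, $m_2=2$, $k_1=3$, $k_2=1$, $h_1=2$ are an illustrative choice satisfying the betweenness condition); it builds the finite $\Gamma$-series in symbolic variables $a_X$ and then substitutes the actual minors (2.2) of a generic $3\times 3$ matrix:

```python
import sympy as sp
from math import factorial, prod

# diagram (m1 m2 0 / k1 k2 / h1); the numeric values are an illustrative choice
m1, m2, k1, k2, h1 = 4, 2, 3, 1, 2
gamma = (h1 - m2, k1 - h1, m1 - k1, k2, m2 - k2, 0)
v = (1, -1, 0, 0, -1, 1)                              # generator of the lattice B

a = sp.symbols('a1 a2 a3 a12 a13 a23')                # ordered as in Theorem 1
F = sp.S(0)
for t in range(-10, 11):
    e = [gamma[i] + t * v[i] for i in range(6)]
    if min(e) < 0:                                    # only finitely many t survive
        continue
    F += sp.Mul(*[a[i]**e[i] for i in range(6)]) / prod(map(factorial, e))
print(F)                                              # a polynomial in the a_X

# substitute the actual minors (2.2) of a generic 3 x 3 matrix
g = sp.Matrix(3, 3, lambda r, c: sp.Symbol(f'g{r+1}{c+1}'))
minor = lambda cols: g.extract(list(range(len(cols))), [c - 1 for c in cols]).det()
subs = dict(zip(a, [minor(c) for c in [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]]))
F_on_group = sp.expand(F.subs(subs))                  # the function on GL_3 from Theorem 1
```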

Note that the shifted lattice $\gamma+B$ can be defined by the following conditions on the sums of exponents of the determinants that contain certain indices:

$$ \begin{equation} \begin{aligned} \, &\text{the sum of exponents of minors that contain $1$, or $2$, or $3$}=m_{1}; \\ &\text{the sum of exponents of minors that contain $1$ and $2$, or $1$ and $3$, or $2$ and $3$}=m_{2}; \\ &\text{the sum of exponents of minors that contain $1$ or $2$}=k_{1}; \\ &\text{the sum of exponents of minors that contain $1$ and $2$}=k_{2}; \\ &\text{the sum of exponents of minors that contain $1$}=h_{1}. \end{aligned} \end{equation} \tag{2.10} $$

These equations have a natural explanation from the point of view of the representation theory. Take a function of determinants that corresponds to a Gelfand–Tsetlin diagram, decompose it into a sum of products of determinants.

The last equation guarantees that this function is a weight vector for the operator $E_{1,1}$ with the weight $m_{1,1}$.

The third and the fourth equations guarantee that after an application of the raising operator from $\mathfrak{gl}_2$ (in the needed power), we get the highest vector for the algebra $\mathfrak{gl}_2$ with the highest weight $[k_1,k_2]$.

The first and the second conditions guarantee that after an application of the raising operator from $\mathfrak{gl}_3$ (in the needed power), we get the highest vector for the algebra $\mathfrak{gl}_3$ with highest weight $[m_1,m_2,0]$.

Because of the presence of relations between determinants, these conditions do not necessarily hold for the summands of the functions corresponding to a Gelfand–Tsetlin diagram. But Theorem 1 states that these elementary considerations do give the right answer in the case $\mathfrak{gl}_3$.

§ 3. The Gelfand–Tsetlin lattice

Now we proceed to finding an analogue of Theorem 1 in the case $n>3$. Let us first write an analogue of equations (2.10) in the case $n>3$, and let us investigate the obtained lattice.

Consider a complex linear space whose coordinates are indexed by proper subsets $X\subset\{1,\dots,n\}$. We denote the coordinates by $z_X$, $X=\{i_1,\dots,i_l\}$, and denote the corresponding unit vector by $e_{X}$.

Definition 1. Let $B$ be the lattice defined by the following equations. Fix $1\leqslant p\leqslant q\leqslant n$. A vector $v$ belongs to $B$ if and only if, for every such pair $(p,q)$, the sum of its coordinates indexed by subsets $X$ that contain at least $p$ indices not exceeding $q$ equals $0$:

$$ \begin{equation*} v\in B \quad\Longleftrightarrow\quad \sum_{X\colon X \text{ contains }\geqslant p \text{ indices, each } \leqslant q} v_X=0. \end{equation*} \notag $$

Everywhere below $B$ denotes this lattice.

Take a standard scalar product in $\mathbb{Z}^N$:

$$ \begin{equation*} \langle x,y\rangle :=\sum_{i=1}^N x_iy_i,\qquad x,y\in \mathbb{Z}^N. \end{equation*} \notag $$

The definition of $B$ says that the orthogonal complement to $B$ is spanned by the vectors $\chi_p^q$ defined as follows:

$$ \begin{equation} (\chi_p^q)_X=\begin{cases} 1 &\text{if } X \text{ contains at least } p \text{ indices, each } \leqslant q, \\ 0 &\text{otherwise}. \end{cases} \end{equation} \tag{3.1} $$

Let us find a base in $B$.

Definition 2. Let $X\subset \{1,\dots,n\}$. One says that $s<X$ if every element of $X$ is greater than $s$. By convention, the relation $s<\varnothing$ always holds.

Consider vectors

$$ \begin{equation} \begin{aligned} \, v_{i,j,x,X} &=(\dots,1_{z_{1,\dots,i-1,i,X}},\dots,-1_{z_{1,\dots,i-1,j,X}},\dots, \nonumber \\ &\qquad-1_{z_{1,\dots,i-1,i,xX}},\dots,1_{z_{1,\dots,i-1,j,xX}},\dots). \end{aligned} \end{equation} \tag{3.2} $$

A base in the lattice $B$ is constructed as a subset $\mathcal{I}$ in the set of all vectors (3.2). More precisely, $\mathcal{I}$ is a subset of vectors of type (3.2) such that the following conditions hold. Firstly, for all chosen vectors $v_{i,j,x,X}$, we have $i<j<x<X$. Secondly, the following condition holds. For fixed $i,j$, consider a graph whose vertices correspond to subsets $X\subset\{j+2,\dots,n\}$ and edges correspond to pairs $X,xX $ (not necessary $x<X$), where we use the notation

$$ \begin{equation*} xX:=\{x\}\cup X. \end{equation*} \notag $$

Thus, the edges correspond to vectors (3.2). Let us be given a subset $\mathcal{I}$ of vectors of type (3.2). To this subset there corresponds a graph whose edges are defined by the vectors from the subset and whose vertices are the ends of these edges. The second condition is the following. We require that

a) the subset $\mathcal{I}$ is such that for all $i,j$ the corresponding graph is a tree,

b) the subset $\mathcal{I}$ is maximal with respect to extensions preserving the property a).

Proposition 1. The chosen vectors (3.2) form a base in $B$.

Note that the formulas obtained for the base vectors are very close to the formulas appearing in the construction of hypergeometric functions associated with Grassmannians in Plücker coordinates [9].

Proof of Proposition 1. 1. Let us prove that the vectors from $\mathcal{I}$ are linearly independent.

Let us begin with a general remark. Given a set of linearly independent vectors $\{v_{\alpha}\}$, the set consisting of differences $\{v_{\alpha_p,\alpha_q}=v_{\alpha_p}-v_{\alpha_q}\}$ is linearly independent if and only if it does not contain cyclic subsets $v_{\alpha_{p_1},\alpha_{p_2}}, v_{\alpha_{p_2},\alpha_{p_3}},\dots, v_{\alpha_{p_n},\alpha_{p_1}}$. The corresponding vanishing non-trivial linear combination is just a sum of these vectors.

Now consider the vectors of type

$$ \begin{equation} v_{i,j;X}=(\dots,1_{z_{i}},\dots,-1_{z_{j}},\dots,-1_{z_{iX}},\dots,1_{z_{jX}},\dots). \end{equation} \tag{3.3} $$

Let us find when a set of vectors of such type is linearly independent. Let us apply a projection of these vectors onto the subspace spanned by coordinates of type $z_{i}$. Consider the obtained vectors

$$ \begin{equation*} v_{i,j}=(\dots,1_{z_{i}},\dots,-1_{z_{j}},\dots). \end{equation*} \notag $$

Applying the general remark to the subset of vectors $\{v_{\alpha}\}=\{e_{z_{i}}\}$, we find that the set of vectors $v_{i,j}$ is linearly dependent if and only if it contains a subset of the form

$$ \begin{equation*} v_{i_1,i_2},\ v_{i_2,i_3},\ \dots,\ v_{i_n,i_1}. \end{equation*} \notag $$

The corresponding vanishing non-trivial linear combination is proportional to the sum of these vectors. The sum of the projections of the vector

$$ \begin{equation*} (\dots,1_{z_{i}},\dots,-1_{z_{j}},\dots,-1_{z_{iX}},\dots,1_{z_{jX}},\dots) \end{equation*} \notag $$
onto the other coordinates can vanish only in the case when all these vectors have the same set $X$. If there are no such subsets, then the set of vectors (3.3) is linearly independent. In particular, in the set of vectors of type $\{v_{i,j;\{1,\dots,i-1,X\}}\}$ there are no such subsets. Thus, it is linearly independent.

Now let us apply the general remark to the set $\{v_{\alpha}\} =\{v_{i,j;\{1,\dots,i-1,X\}}\}$, which, as we have just shown, is linearly independent.

The vectors of type (3.2) are vectors of type $v_{\alpha_i,\alpha_j}$ for the following set $\{v_{\alpha}\}$:

$$ \begin{equation} v_{i,j,x,X}=v_{i,j;\{1,\dots,i-1,X\}}-v_{i,j;\{1,\dots,i-1,xX\}}. \end{equation} \tag{3.4} $$

Thus, the set of vectors $\{v_{i,j,x,X}\}$ is linearly dependent if and only if it contains a cyclic subset. Let us be given a cyclic subset consisting of $q$ vectors. After reordering, we have

$$ \begin{equation*} i_1=i_2=\dots=i_q,\qquad j_1=j_2=\dots=j_q. \end{equation*} \notag $$
That is, the cyclic subset consists of vectors $\{v_{i,j,x_l,X_l}\}$, $l=1,\dots,q$. Also, one gets that the graph whose vertices correspond to the subsets $X_l$, $x_lX_l$ and whose edges correspond to the pairs $\overline{X_l, x_lX_l}$ is topologically a circle.

For the vectors (3.2), such situations are prohibited. Thus, these vectors are linearly independent.

2. The vectors from $\mathcal{I}$ are orthogonal to all the vectors $\chi_p^q$.

The vectors of type (3.2) have non-zero coordinates only with indices $z_{1,\dots,i-1,i,X}$, $z_{1,\dots,i-1,j,X}$, $z_{1,\dots,i-1,i,xX}$, $z_{1,\dots,i-1,j,xX}$, these coordinates are $(1,-1,-1,1)$. Let us write these coordinates for the vectors $\chi_p^q$, and let us check that they are orthogonal to $(1,-1,-1,1)$.

Let $q\geqslant j$. Then, in the case $p\leqslant i$, the coordinates $z_{1,\dots,i-1,i,X}$, $z_{1,\dots,i-1,j,X}$, $z_{1,\dots,i-1,i,x,X}$, $z_{1,\dots,i-1,j,x,X}$ of the vector $\chi_p^q$ are equal to $(1,1,1,1)$, thus this vector is orthogonal to $(1,-1,-1,1)$. And in the case $p>i$, the considered coordinates of the vector $\chi_p^q$ coincide with the numbers $(\chi_{p-i}^q)_{z_{X}}$, $(\chi_{p-i}^q)_{z_{X}}$, $(\chi_{p-i}^q)_{z_{x,X}}$, $(\chi_{p-i}^q)_{z_{x,X}}$. These numbers form a vector which is orthogonal to $(1,-1,-1,1)$.

Let $i\leqslant q< j$. Then the coordinates $z_{1,\dots,i-1,i,X}$, $z_{1,\dots,i-1,j,X}$, $z_{1,\dots,i-1,i,x,X}$, $z_{1,\dots,i-1,j,x,X}$ of the vector $\chi_p^q$ in the case $p\leqslant i-1$ are equal to $(1,1,1,1)$, and in the case $p> i-1$ these coordinates of the vector $\chi_p^q$ coincide with the numbers $(\chi_{p-i+1}^q)_{z_{i}}$, $(\chi_{p-i+1}^q)_{z_{i+1}}$, $(\chi_{p-i+1}^q)_{z_{i,x}}$, $(\chi_{p-i+1}^q)_{z_{i+1,x}}$. In both cases they are orthogonal to the vector $(1,-1,-1,1)$.

Finally, for $q\leqslant i-1$, the coordinates $z_{1,\dots,i-1,i,X}$, $z_{1,\dots,i-1,j,X}$, $z_{1,\dots,i-1,i,x,X}$, $z_{1,\dots,i-1,j,x,X}$ of the vector $\chi_p^q$ are equal to $(1,1,1,1)$. These numbers form a vector orthogonal to $(1,-1,-1,1)$.

3. The number of vectors in $\mathcal{I}$ equals the rank of the lattice $B$. Indeed, $B$ is embedded into the space of dimension $2^n-1$. The lattice is defined by $n+(n- 1)+\dots+1$ equations. Thus, the rank of $B$ equals $2^n-1-(1+\dots+n)$.

The number of vectors (3.2) can be found in the following manner. Note that the number of edges in a tree equals the number of vertices minus $1$. Thus, the number of vectors (3.2) with fixed $i$ equals the number of choices of $j$, $x$ and $X$. So, the number of vectors (3.2) equals the sum over $i$ from $1$ to $n-2$ of the number of subsets of $\{i+1,\dots,n\}$ that consist of at least two elements:

$$ \begin{equation*} \sum_{i=1}^{n-2}(2^{n-i}-1-(n-i))=\sum_{t=2}^{n-1}(2^t-1-t)=2^n-1-(1+\dots+n). \end{equation*} \notag $$
Proposition 1 is proved.
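The rank computation in this proof can be cross-checked numerically. The sketch below (sympy; coordinates are indexed by the non-empty subsets of $\{1,\dots,n\}$, so the ambient dimension is $2^n-1$ as in step 3) compares $2^n-1-\operatorname{rk}\{\chi_p^q\}$ with the count of vectors (3.2) obtained above:

```python
import itertools
import sympy as sp

for n in range(2, 6):
    # coordinates indexed by the non-empty subsets of {1,...,n}; dimension 2^n - 1
    subsets = [set(c) for k in range(1, n + 1)
               for c in itertools.combinations(range(1, n + 1), k)]
    chi = sp.Matrix([[1 if len([i for i in X if i <= q]) >= p else 0 for X in subsets]
                     for q in range(1, n + 1) for p in range(1, q + 1)])
    rank_B = len(subsets) - chi.rank()      # B is the integer kernel of the chi's
    count = sum(2**(n - i) - 1 - (n - i) for i in range(1, n - 1))
    print(n, rank_B, count)                 # the two numbers coincide: 0, 1, 5, 16
```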

In what follows, we use an index $\alpha$ and a number $k$; let us give their definitions.

Definition 3. Choose a base $\mathcal{I}$ in the lattice $B\subset \mathbb{Z}^N$. Then $k$ is the number of vectors in the base and $\alpha$ is an index running through the set $\{1,\dots,k\}$.

This choice of the base is assumed to be fixed in the remaining part of the paper.

If a base vector is written as

$$ \begin{equation*} v_{i,j,x,X}=(\dots,1_{{iX}},\dots,-1_{{jX}},\dots,-1_{{ixX}},\dots,1_{{jxX}},\dots), \end{equation*} \notag $$
then we define
$$ \begin{equation} v^{\alpha}_+=e_{iX}^{}+e_{jxX}^{},\qquad v^{\alpha}_-=e^{}_{jX}+e^{}_{ixX},\qquad v^{\alpha}_0=e^{}_{xX}+e^{}_{ijX}. \end{equation} \tag{3.5} $$
Hence
$$ \begin{equation*} v^{\alpha}=v^{\alpha}_+-v^{\alpha}_-. \end{equation*} \notag $$
With each base vector $v^{\alpha}$ we associate the vector
$$ \begin{equation} r^{\alpha}:=v^{\alpha}_0-v^{\alpha}_+. \end{equation} \tag{3.6} $$

§ 4. Functions associated with the lattice

In this section, we introduce some functions of hypergeometric type associated with the Gelfand–Tsetlin lattice $B$.

4.1. $A$-hypergeometric functions

4.1.1. A $\Gamma$-series associated with the lattice

Consider a Gelfand–Tsetlin diagram $(m_{i,j})$. To this diagram there corresponds a shifted lattice $\gamma+B$ defined by the following equations.

Definition 4. A vector $x\in \mathbb{Z}^N$ belongs to $\gamma+B$ if and only if, for all $1\leqslant p\leqslant q\leqslant n$, the sum of its coordinates indexed by sets $X$ that contain at least $p$ indices not exceeding $q$ equals $m_{p,q}$.

The shift vector $\gamma$ is not uniquely defined. Below, we will introduce functions $\mathcal{F}_{\gamma}$ that do not depend on the choice of $\gamma$ and the functions $J_{\gamma}^s$, $F_{\gamma}$ that do depend on the choice of $\gamma$.

To a shifted lattice there corresponds an $A$-hypergeometric function written as a $\Gamma$-series:

$$ \begin{equation} \mathcal{F}_{\gamma}(z)=\sum_{x\in \gamma+B}\frac{z^x}{x!} =\sum_{t\in\mathbb{Z}^k}\frac{z^{\gamma+tv}}{(\gamma+tv)!}, \end{equation} \tag{4.1} $$
where we use a fixed base $\mathcal{I}=\{v^1,\dots,v^k\}$ in the lattice $B$ and we use a notation $tv:=t_1v^1+\dots+t_kv^k$. The second expression for $\mathcal{F}_{\gamma}(z)$ in (4.1) is used below to define more general functions $J^s_{\gamma}(z)$ (see (4.2)).

Due to the properties of a $\Gamma$-series, expression (4.1) does not depend on the choice of $\gamma\ \operatorname{mod} B$.

Note that the sum in expression (4.1) is finite. Indeed, the base vectors $v^1,\dots,v^k$ have the following property: each of them has both positive and negative coordinates. Thus, only for a finite number of $t\in\mathbb{Z}^k$ does the vector of exponents $\gamma+tv$ have only non-negative coordinates. Since in (4.1) we divide by the factorials of the exponents, only such summands are non-zero.

Also note that if a diagram does not satisfy the betweenness condition, then (4.1) equals $0$.
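As a small consistency check of Definition 4, the following sketch verifies that in the case $\mathfrak{gl}_3$ the shift vector of Theorem 1 satisfies all the equations of Definition 4 (the diagram entries are an illustrative choice, and the coordinate for the full set $\{1,2,3\}$ is set to zero since $m_{3,3}=0$):

```python
from itertools import combinations

n = 3
subsets = [frozenset(c) for k in range(1, n + 1) for c in combinations(range(1, n + 1), k)]

# an illustrative diagram (m1 m2 0 / k1 k2 / h1) and its entries m_{p,q}
m1, m2, k1, k2, h1 = 4, 2, 3, 1, 2
m = {(1, 3): m1, (2, 3): m2, (3, 3): 0, (1, 2): k1, (2, 2): k2, (1, 1): h1}

# the shift vector of Theorem 1, extended by zero on the coordinate {1, 2, 3}
gamma = {frozenset({1}): h1 - m2, frozenset({2}): k1 - h1, frozenset({3}): m1 - k1,
         frozenset({1, 2}): k2, frozenset({1, 3}): m2 - k2, frozenset({2, 3}): 0,
         frozenset({1, 2, 3}): 0}

for q in range(1, n + 1):
    for p in range(1, q + 1):
        total = sum(gamma[X] for X in subsets if len([i for i in X if i <= q]) >= p)
        assert total == m[(p, q)], (p, q)
print("the equations of Definition 4 hold for this diagram")
```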

The constructed functions $\mathcal{F}_{\gamma} $, into which one substitutes the determinants $a_X$, are direct analogues, in the case $\mathfrak{gl}_n$, $n\geqslant 4$, of the functions that in the case $\mathfrak{gl}_3$ correspond to the Gelfand–Tsetlin diagrams. Thus, it is natural to ask whether the same correspondence holds in the case $\mathfrak{gl}_n$, $n\geqslant 4$. Unfortunately, this conjecture is not true. Nevertheless, the construction of $\mathcal{F}_{\gamma} $ is an important step toward the construction of the functions corresponding to the Gelfand–Tsetlin diagrams.

4.1.2. Why a $\Gamma$-series?

Let us give arguments (not only an analogy with the case $n=3$), why it is natural to begin a search for a function corresponding to a Gelfand–Tsetlin diagram with a construction of a $\Gamma$-series.

Let us present this function as a sum of products of determinants. Which products can occur in this sum? Naive considerations at the end of § 2.4 show that it is natural to expect that the exponents of these products satisfy the system from Definition 4. A $\Gamma$-series is the simplest function for which the exponents form a set defined by the system from Definition 4.

It is worth mentioning that there exists a construction of hypergeometric functions associated with homogeneous spaces [11]. Since a representation can be realized as a space of sections of a bundle over a flag variety [12], it is natural to ask whether it is useful to take the hypergeometric functions associated with flag varieties, and whether they coincide with the functions constructed in the present paper. It turns out that the hypergeometric functions associated with flag varieties are not related to the functions constructed in the present paper. The hypergeometric functions associated with flag varieties can be considered as functions of Plücker coordinates. The Plücker coordinates are coordinates in the complex linear space whose coordinates are indexed by proper subsets $X\subset \{1,\dots,n\}$. The hypergeometric functions associated with flag varieties are then related to $A$-hypergeometric functions that are defined by a lattice $E$ in the space $\mathbb{C}^N$, which is the lattice of characters of a torus action on $\mathbb{C}^N$ obtained from the standard action of an $n$-dimensional torus on $\mathbb{C}^n$ (see [9]). But the lattice $B$ does not coincide with $E$, since the corresponding torus action on $\mathbb{C}^N$ cannot be obtained from the standard action of the $n$-dimensional torus on $\mathbb{C}^n$. One can show that $B \varsubsetneq E$. From this inclusion it follows that the dimension of the space of hypergeometric functions associated with a flag variety is lower than the dimension of a representation.

Note also the following curiosity. With the lattice $B$ one can associate an action of a torus (for which this lattice is a lattice of characters) of dimension $n+(n- 1)+\dots+1$ (see [11]). There exists another construction of an action of this torus called the Gelfand–Tsetlin action [13], but it has nothing to do with the problems considered in the present paper since this is an action on another space.

4.2. A function $J_{\gamma}^s$

We need functions that are hypergeometric in the Horn sense (see [14]), which are generalizations of a $\Gamma$-series.

Recalling that $k=\operatorname{rk}B$,  we put

$$ \begin{equation} J_{\gamma}^s(z):=\sum_{t\in\mathbb{Z}^k}\frac{(t+1)\cdots(t+s)}{(\gamma+tv)!}z^{\gamma+tv}. \end{equation} \tag{4.2} $$

The multi-index notation is used: $(t+1)\cdots(t+s):=\prod_{\alpha=1}^k(t_{\alpha}+1)\cdots(t_{\alpha}+s_{\alpha})$.

Note that these functions depend on the choice of the shift vector $\gamma$ in the class $\gamma\, \operatorname{mod} B$.

In the lattice $B$, the base (3.2) is fixed, and the notation $\alpha$ for the index enumerating the base vectors was introduced (see Definition 3). With each base vector $v^{\alpha}=v_{i,j,x,X}$ of type (3.2) one associates a differential operator $\mathcal{O}_{\alpha}$, called the GKZ operator:

$$ \begin{equation*} \mathcal{O}_{\alpha}:= \frac{\partial^2 }{\partial z_{1,\dots,i-1,i ,X}\,\partial z_{1,\dots,i-1,j,xX}}- \frac{\partial^2 }{\partial z_{1,\dots,i-1,j ,X}\, \partial z_{1,\dots,i-1,i,xX}}. \end{equation*} \notag $$

This differential operator appears in equation (2.8) of the GKZ system associated with the lattice $B$.

Lemma 1. For $\alpha\in\{1,\dots,k\}$,

$$ \begin{equation*} \mathcal{O}_{\alpha}J_{\gamma}^s(z)=s_{\alpha}J_{\gamma-v^{\alpha}_+}^{s-e_{\alpha}}(z). \end{equation*} \notag $$

Proof. Denote $(t+1)\cdots(t+s)$ by $c_t$. Using the multi-index notation, one can write shortly $\mathcal{O}_{\alpha}=\partial^2/\partial z^{v^{\alpha}_+} - \partial^2/\partial z^{v^{\alpha}_-}$. Hence
$$ \begin{equation*} \begin{aligned} \, \frac{\partial^2}{\partial z^{v^{\alpha}_+}}J_{\gamma}^s(z) &=\sum_{t\in\mathbb{Z}^k}\frac{c_tz^{\gamma+tv-v^{\alpha}_+}}{(\gamma+tv-v^{\alpha}_+)!}, \\ \frac{\partial^2}{\partial z^{v^{\alpha}_-}}J_{\gamma}^s(z) &=\sum_{t\in\mathbb{Z}^k}\frac{c_tz^{\gamma+tv-v^{\alpha}_-}}{(\gamma+tv-v^{\alpha}_-)!}= \sum_{t\in\mathbb{Z}^k}\frac{c_{t-e_{\alpha}}z^{\gamma+tv-v^{\alpha}_+}}{(\gamma+tv-v^{\alpha}_+)!}, \end{aligned} \end{equation*} \notag $$
where $t-e_{\alpha}=(t_1,\dots,t_{\alpha}-1,\dots,t_k)$, where $e_{\alpha}:=(0,\dots,1_{\text{at the place } \alpha},\dots,0)$. So, we have
$$ \begin{equation*} \mathcal{O}_{\alpha}J_{\gamma}^s(z)=\sum_{t\in\mathbb{Z}^k} \frac{(c_t-c_{t-e_{\alpha}})z^{\gamma+tv-v^{\alpha}_+}}{(\gamma+tv-v^{\alpha}_+)!}. \end{equation*} \notag $$
But $c_t-c_{t-e_{\alpha}}=(t+1)\cdots(t+s)-t\cdots(t+s-e_{\alpha}) =s_{\alpha}(t+1)\cdots(t+s-e_{\alpha})$, proving the lemma.
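Lemma 1 can also be checked symbolically in the smallest case $\mathfrak{gl}_3$, where $k=1$. In the sketch below (sympy; the shift vector $\gamma=(0,2,0,0,2,0)$ and the value $s=2$ are illustrative choices), $\mathcal{O}_{\alpha}$ is the GKZ operator associated with the single basis vector $v=(1,-1,0,0,-1,1)$:

```python
import sympy as sp
from math import factorial, prod

# coordinates ordered as (a1, a2, a3, a12, a13, a23); k = 1 for gl_3
z = sp.symbols('z1 z2 z3 z12 z13 z23')
v      = (1, -1, 0, 0, -1, 1)                  # the single basis vector of B
v_plus = (1, 0, 0, 0, 0, 1)                    # v_+ = e_1 + e_{23}

def J(gamma, s, tmax=10):
    # the finite sum (4.2) for k = 1; a negative exponent kills the term
    total = sp.S(0)
    for t in range(-tmax, tmax + 1):
        e = [gamma[i] + t * v[i] for i in range(6)]
        if min(e) < 0:
            continue
        coeff = prod(t + i for i in range(1, s + 1))
        total += coeff * sp.Mul(*[z[i]**e[i] for i in range(6)]) / prod(map(factorial, e))
    return total

O = lambda f: sp.diff(f, z[0], z[5]) - sp.diff(f, z[1], z[4])   # the GKZ operator for v

gamma, s = (0, 2, 0, 0, 2, 0), 2               # illustrative choices
lhs = O(J(gamma, s))
rhs = s * J(tuple(gamma[i] - v_plus[i] for i in range(6)), s - 1)
print(sp.simplify(lhs - rhs))                  # expected: 0
```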

The following analogy is worth pointing out:

$$ \begin{equation*} \frac{d}{dz_{{\alpha}}}z^s=s_{\alpha}z^{s-e_{\alpha}}\quad\longleftrightarrow\quad \mathcal{O}_{\alpha} J_{\gamma}^s=s_{\alpha}J_{\gamma-v^{\alpha}_+}^{s-e_{\alpha}}. \end{equation*} \notag $$
This analogy makes possible the following construction. We have
$$ \begin{equation*} e^z=\sum_{s\in\mathbb{Z}_{\geqslant 0}^k}\frac{1}{s!}z^s \quad \Longrightarrow\quad \frac{d}{dz}e^z=e^z. \end{equation*} \notag $$
Consider the hyperexponent
$$ \begin{equation*} e^{J}:=\sum_{s\in\mathbb{Z}_{\geqslant 0}^k}\frac{1}{s!}J^s_{\delta+sv_+}\quad \Longrightarrow\quad \mathcal{O}_{\alpha} e^{J}= e^{J}. \end{equation*} \notag $$

Continue this construction. Consider the operator $\partial^2/\partial z^{v_0}$. Define a modified hyperexponent:

$$ \begin{equation*} \operatorname{me}^{J}:=\sum_{s\in\mathbb{Z}_{\geqslant 0}^k}\frac{(-1)^s}{s!}J^s_{\delta-sr}, \end{equation*} \notag $$
where $r_{\alpha}=v^{\alpha}_0-v^{\alpha}_+$. It satisfies the equations
$$ \begin{equation*} \mathcal{O}_{\alpha}\operatorname{me}^{J}=-\frac{\partial^2}{\partial z^{v^{\alpha}_0}}\operatorname{me}^{J}. \end{equation*} \notag $$

Note that since functions of complex variables are considered, the operator $\mathcal{O}_{\alpha} $ is essentially the Laplace operator, and $\mathcal{O}_{\alpha}+\partial^2/\partial z^{v_0}$ is a wave operator.

§ 5. The antisymmetrized GKZ system

We introduce the antisymmetrized GKZ system, which is an important instrument in further considerations. This is a system of partial differential equations. In this section, we construct a base in the space of its polynomial solutions.

We have introduced the GKZ operator:

$$ \begin{equation*} \mathcal{O}_{\alpha}= \frac{\partial^2 }{\partial z_{1,\dots,i-1,i ,X}\, \partial z_{1,\dots,i-1,j,xX}}- \frac{\partial^2 }{\partial z_{1,\dots,i-1,j ,X}\, \partial z_{1,\dots,i-1,i,xX}}. \end{equation*} \notag $$

Let us associate with it an antisymmetrized GKZ operator (an A-GKZ operator):

$$ \begin{equation} \begin{aligned} \, \overline{\mathcal{O}}_{\alpha} &:=\frac{\partial^2 }{\partial z_{1,\dots,i-1,i ,X}\, \partial z_{1,\dots,i-1,j,xX}}- \frac{\partial^2 }{\partial z_{1,\dots,i-1,j ,X}\, \partial z_{1,\dots,i-1,i,xX}} \nonumber \\ &\qquad+\frac{\partial^2 }{\partial z_{1,\dots,i-1,x,X}\, \partial z_{1,\dots,i-1,i,jX}}. \end{aligned} \end{equation} \tag{5.1} $$
 

Definition 5. The antisymmetrized GKZ system (the A-GKZ system) is the following system of partial differential equations:

$$ \begin{equation} \overline{\mathcal{O}}_{\alpha}F=0. \end{equation} \tag{5.2} $$

Note that in this system the analogues of the homogeneity conditions (2.7) from the GKZ system are omitted.

Consider the functions

$$ \begin{equation} F_{\gamma}(z):=\operatorname{me}^J=\sum_{s\in\mathbb{Z}_{\geqslant 0}^k}\frac{(-1)^s}{s!} J_{\gamma-sr}^s(z), \end{equation} \tag{5.3} $$
where $s!=\prod_i s_i!$\,. Differentiating (4.2), we obtain
$$ \begin{equation} \frac{\partial}{\partial z_{X}}J_{\gamma}^s(z)=J_{\gamma-e_{X}}^s(z),\qquad \frac{\partial}{\partial z_{X}}F_{\gamma}(z)=F_{\gamma-e_{X}}(z), \end{equation} \tag{5.4} $$
where $X\subset \{1,\dots,n\}$ is a proper subset, and $e_{X}$ is a unit vector corresponding to the coordinate $z_X$.
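In the case $\mathfrak{gl}_3$ the series (5.3) is a finite sum, and the fact that it solves (5.2) can be verified symbolically. The following sketch (sympy; the shift vector $\gamma=(1,0,1,1,0,0)$ is an illustrative choice) builds $F_{\gamma}$ from the functions $J^s$ and applies the A-GKZ operator to it:

```python
import sympy as sp
from math import factorial, prod

z = sp.symbols('z1 z2 z3 z12 z13 z23')
v      = (1, -1, 0, 0, -1, 1)                  # basis vector of B for gl_3
v_plus = (1, 0, 0, 0, 0, 1)                    # v_+ = e_1 + e_{23}
v_zero = (0, 0, 1, 1, 0, 0)                    # v_0 = e_3 + e_{12}
r = tuple(v_zero[i] - v_plus[i] for i in range(6))

def J(gamma, s, tmax=8):
    total = sp.S(0)
    for t in range(-tmax, tmax + 1):
        e = [gamma[i] + t * v[i] for i in range(6)]
        if min(e) < 0:
            continue
        total += prod(t + i for i in range(1, s + 1)) \
                 * sp.Mul(*[z[i]**e[i] for i in range(6)]) / prod(map(factorial, e))
    return total

def F(gamma, smax=8):                          # the series (5.3); only finitely many s survive
    return sum(sp.Rational((-1)**s, factorial(s))
               * J(tuple(gamma[i] - s * r[i] for i in range(6)), s) for s in range(smax))

Fg = F((1, 0, 1, 1, 0, 0))                     # an illustrative shift vector
AGKZ = sp.diff(Fg, z[0], z[5]) - sp.diff(Fg, z[1], z[4]) + sp.diff(Fg, z[2], z[3])
print(sp.expand(Fg), sp.simplify(AGKZ))        # the A-GKZ operator annihilates F_gamma
```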

Let us prove the following result.

Theorem 2. Consider a set of vectors $\{\gamma_p\}$, $\gamma_p\in\mathbb{Z}^N$, that satisfies the following conditions.

1. For every vector $\gamma_p$, there exists $b\in B$ such that the vector $\gamma_p+b$ has only non-negative coordinates.

2. The set $\{\gamma_p\}$ is a maximal set of vectors satisfying condition 1 such that no two of them are congruent modulo $B$.

Then the corresponding functions $F_{\gamma_p}$ form a base in the space of polynomial solutions of the A-GKZ system.

The function $F_{\gamma}$ is called an irreducible solution of system (5.2). Note that this solution is defined by the vector $\gamma$, rather than by the class $\gamma+B$.

Proof of Theorem 2. It was shown above (see § 4.2) that the function $F_{\gamma}(z)$ is a solution of the A-GKZ system.

For a monomial $z^{\gamma}$, the vector of exponents $\gamma$ is called a support of this monomial. A support of a function, presented as a sum of a power series, is a union of supports of monomials occurring in this series with non-zero coefficients. A support of a function $F$ is denoted as $\operatorname{supp} F$.

Consider a solution $F$, and represent $\operatorname{supp}F$ as a union of subsets of type $\gamma+B$. For each such subset, we take in $F$ the monomials whose supports belong to this subset. Denote the resulting function by $F^{\gamma}$. If this function satisfies the system $\mathcal{O}_{\alpha}(F^{\gamma})=0$ for all $\alpha$, then the corresponding support is called extreme (or an extreme point in $\operatorname{supp} F$). The term “point” is justified: when one considers the support $\operatorname{mod} B$, it becomes a point.

An irreducible solution $F_{\delta}$ has a unique extreme point $\delta+B$.

To prove the theorem let us first prove the following lemma.

Lemma 2. Every polynomial solution of system (5.2) can be presented as a linear combination of irreducible solutions.

Proof. Consider an arbitrary solution $F$ and decompose it into a sum of functions $F^{\gamma}$ with supports $\gamma+B$.

Let us introduce a partial order on the sets $\gamma+B$. We put

$$ \begin{equation} \gamma+B\preceq\delta+B\quad \text{if}\quad \gamma+sr=\delta\ \operatorname{mod} B,\ \ s\in\mathbb{Z}^k_{\geqslant 0}. \end{equation} \tag{5.5} $$

Since only polynomial solutions are considered, there exist summands $F^{\gamma}$ with supports $\gamma+B$ which are maximal with respect to the order. Let us prove that these summands are extreme. Indeed, we have

$$ \begin{equation*} \overline{\mathcal{O}}_{\alpha}F^{\gamma}=\mathcal{O}_{\alpha}F^{\gamma}+\frac{\partial^2}{\partial z^{v_0^{\alpha}}}F^{\gamma}. \end{equation*} \notag $$

If $\operatorname{supp}F^{\gamma}=\gamma+B$, then $\operatorname{supp}(\mathcal{O}_{\alpha}F^{\gamma})=\gamma-v_{\alpha}^++B$, and $\operatorname{supp}(\partial^2 F^{\gamma}/\partial z^{v^{\alpha}_0})=\gamma-v^{\alpha}_0+B$. Since $\overline{\mathcal{O}}_{\alpha}F=0$, by considering the supports, we conclude that the summand $\mathcal{O}_{\alpha}F^{\gamma}$ (if non-zero) cancels with one of the expressions of type $\partial^2 F^{\delta}/\partial z^{v_0^{\alpha}}$ or $\mathcal{O}_{\alpha}(F^{\delta})$. By considering supports, one concludes that $\mathcal{O}_{\alpha}F^{\gamma}$ cannot cancel with an expression of the same type for another $\delta$. This is why $\mathcal{O}_{\alpha}F^{\gamma}$ cancels with some $\partial^2 F^{\delta}/\partial z^{v^{\alpha}_0}$. Hence $\operatorname{supp} F^{\delta}-v_{0}^{\alpha}=\gamma-v^{\alpha}_+$. This means that $\operatorname{supp} F^{\delta}=\gamma+v_{0}^{\alpha}-v_+^{\alpha}+B$. Thus, $\operatorname{supp} F^{\delta}\succ \gamma+B$. But the support $\gamma+B$ is maximal, hence this situation is not possible. Hence $\mathcal{O}_{\alpha}F^{\gamma}=0$. We have thus proved that maximal points are extreme.

Hence an arbitrary solution has extreme points. The corresponding functions $F^{\gamma}$ have supports of type $\gamma+B$. So, we have

$$ \begin{equation*} F^{\gamma}=\sum_{t\in\mathbb{Z}^k}c_t\frac{z^{\gamma+tv}}{(\gamma+tv)!} \end{equation*} \notag $$
for some numeric coefficients $c_t$. Since $F^{\gamma}$ is non-zero, there exists $b\in B$ such that the vector $\gamma+b$ has only non-negative coordinates. Also, since $F^{\gamma}$ is annihilated by the operators $\mathcal{O}_{\alpha}$, one can conclude that all the $c_t$ are equal. Hence $F^{\gamma}$ is a $\Gamma$-series up to multiplication by a constant.

Let us describe a procedure that transforms a solution $V$ of the A-GKZ system into a solution $W$ of the same system.

1. Take an extreme point $\gamma+B$ in $\operatorname{supp} V$ and take the corresponding irreducible solution $F_{\gamma}$.

2. Subtract from $V$ all the constructed $F_{\gamma}$ with a coefficient such that the summands $V^{\gamma}$ (defined by $V$ by analogy with the definition of $F^{\gamma}$ for $F$) in $V$ with supports $\gamma+B$ are cancelled. This is possible since both in $F_{\gamma}$ and in $V$ the summands with support in $\gamma+B$ form a function proportional to a $\Gamma$-series.

3. The obtained solution is denoted by $W$.

The solution $W$ thus constructed has the following property: the extreme points in $\operatorname{supp}W$ are strictly lower than these extreme points of $\operatorname{supp}V$ with respect to the order $\preceq$.

Now let us operate as follows. Take a solution $F$, apply to it the procedure and obtain a new solution. Take its extreme points and apply the procedure again, and so on.

Let us show that after a finite number of steps this procedure gives the zero function. To prove this, it is enough to show that the supports of the functions that appear at all steps are subsets of some finite set.

For this purpose, for every summand $F^{\gamma}$ in $F$ with a maximal support $\gamma+B$, let us find the vectors $s^{\gamma}=(s^{\gamma}_{\alpha})\in\mathbb{Z}^k_{\geqslant 0}$ such that $\gamma-s^{\gamma}r+b:=\gamma-\sum_{\alpha}s^{\gamma}_{\alpha}r^{\alpha}+b$ has only non-negative coordinates for some $b\in B$.

This set is finite. Indeed, consider the functionals $\chi^m_u$ introduced in (3.1). They are defined by their action on the base vectors $e_X$, where $X\subset\{1,\dots,n\}$, by the following rule:

$$ \begin{equation} \chi^m_u(e_X)=\begin{cases} 1 &\text{if in $X$ there are $\geqslant u$ indices, each }\leqslant m, \\ 0 &\text{otherwise}. \end{cases} \end{equation} \tag{5.6} $$

One can note that $\chi_u^m(b)=0$ for $b\in B$. For a vector $r^{\alpha}$, defined by (3.6), we have

$$ \begin{equation*} \chi_{u}^m(r^{\alpha})=\begin{cases} -1 &\text{for } u=i,\, j\leqslant m<x, \\ 1 &\text{for } u=i+1,\, j\leqslant m<x, \\ 0 &\text{otherwise}. \end{cases} \end{equation*} \notag $$

Consider first the vectors $r^{\alpha}$ with the largest value of $i$. When one subtracts these vectors $r^{\alpha}$ from the vector $\gamma$ with positive coefficients, the value of $\chi_{i+1}^m$ diminishes, while the subtraction of the vectors $r^{\alpha}$ with smaller values of $i$ has no effect on $\chi_{i+1}^m$. If one then adds some $b\in B$, then $\chi_{i+1}^m$ remains unchanged. Thus, we come to a conclusion: if one could subtract from the vector $\gamma$ the vectors $r^{\alpha}$ with the largest value of $i$ an unbounded number of times, then at some step one would obtain a vector on which $\chi_{i+1}^m$ is negative. But such a vector cannot have only non-negative coordinates, since on such vectors the functional (5.6) is non-negative.

Then one considers the vectors $r^{\alpha}$ with the next (smaller) value of $i$ and the functionals $\chi_{i+1}^m$. One concludes that these vectors, too, can be subtracted from $\gamma$ only a finite number of times, and so on.

Introduce a notation:

$$ \begin{equation} M_{\gamma}=\bigcup\{\gamma-sr+B\}, \end{equation} \tag{5.7} $$
where the union is taken over all the vectors $s^{\gamma}=(s^{\gamma}_{\alpha})$ obtained above.

We have $\operatorname{supp}F_{\gamma}\subset M_{\gamma}$, since $F_{\gamma}=\sum_{s\in\mathbb{Z}^k_{\geqslant 0}}(-1)^sJ_{\gamma-sr}^s/s!$ and $\operatorname{supp} J_{\gamma-sr}^s=\gamma-sr+B$, and the function $J_{\gamma-sr}^s$ is non-zero if and only if its support contains vectors with only non-negative coordinates.

Also note that if $\delta\prec\gamma$, then $M_{\delta}\subset M_{\gamma}$.

We have $\operatorname{supp} F\subset \bigcup_{\gamma} M_{\gamma}$, where the union is taken over all extreme points $\gamma$. Indeed, suppose the opposite: there exists $\delta\in \operatorname{supp}F$ such that $\delta\notin \bigcup_{\gamma} M_{\gamma}$. Consider $F^{\delta}$. Apply the arguments from the proof of the fact that the maximal points are extreme (see the beginning of the proof of Lemma 2). Then one concludes that if $\mathcal{O}_{\alpha}F^{\delta}\neq 0$, then $\delta'=\delta+r^{\alpha}\in \operatorname{supp} F\,\operatorname{mod} B$. Also, $\delta\prec\delta'$ and $\delta'\notin \bigcup_{\gamma} M_{\gamma}$, since otherwise the lower support $\delta$ would also belong to $\bigcup_{\gamma} M_{\gamma}$. So, one can find bigger points (relative to the order $\prec$) in the support that do not belong to $\bigcup_{\gamma} M_{\gamma}$. One can do so until a point $\delta''\in \operatorname{supp} F$ is obtained such that $\mathcal{O}_{\alpha}F^{\delta''}=0$ for all $\alpha$. But this is an extreme point, and hence it belongs to $\bigcup_{\gamma} M_{\gamma}$, a contradiction.

Thus, at each step of the procedure the support belongs to the set $\bigcup_{\gamma} M_{\gamma}$, where the union is taken over the extreme points of the support of the function $F$. This set is finite. Since the support becomes smaller at each step, after a finite number of steps the empty set is obtained. This means that we have presented the function $F$ as a linear combination of the functions $F_{\gamma}$. Lemma 2 is proved.

Lemma 3. The functions $F_{\gamma}$ from Theorem 2 are linearly independent.

Proof. Indeed, let
$$ \begin{equation} \sum_pc_pF_{\gamma_p}=0. \end{equation} \tag{5.8} $$

Among the sets $\gamma_1+B$, $\gamma_2+B$, $\dots$ choose a maximal element with respect to the order $\prec$. Consider the corresponding summand $c_iF_{\gamma_i}$ and, inside it, the part $\mathcal{F}_{\gamma_i}$ with support $\gamma_i+B$. Due to condition 1 from Theorem 2, $\mathcal{F}_{\gamma_i}\neq 0$. Since $\gamma_i+B$ is maximal, $\mathcal{F}_{\gamma_i}$ cannot cancel with any other summand in (5.8), a contradiction.

This completes the proof of Theorem 2.

Remark 2. The solution $F_{\gamma}$ has the following property. Its support is the set $M_{\gamma}$ of type (5.7). If one represents this function as a sum of a series and takes the summands with support $\gamma+B$, one obtains $\mathcal{F}_{\gamma}$. In a natural sense, this is the simplest solution of the A-GKZ system that is generated by the solution $\mathcal{F}_{\gamma}$ of the GKZ system.

§ 6. The Gelfand–Kapranov–Zelevinsky base

Consider functions $\mathcal{F}_{\gamma}(z)$ corresponding to shift vectors obtained in the following manner. Consider the set of all possible Gelfand–Tsetlin diagrams for an irreducible finite dimensional representation of $\mathfrak{gl}_n$, construct the corresponding shifted lattices and take the corresponding shift vectors $\gamma$. Instead of the variables $z_X$, let us substitute into these functions the determinants $a_X$. The resulting function is denoted by $\mathcal{F}_{\gamma}(a)$.

Proposition 2. The functions $\mathcal{F}_{\gamma}(a)$ belong to the representation with the highest vector (2.4).

Proof. In the book [8], it is proved that a function on the group belongs to the representation with the highest vector (2.4) if and only if the following conditions hold.

1. $L^{-}f(a)=0$, where $L^-$ is the left infinitesimal shift by a negative root element. Such a shift acts on the row indices of a determinant. If one writes its action explicitly, one sees that this condition always holds for a function of type $f(a_X)$.

2. $L_{i,i} f(a)=m_if(a)$, $i=1,2,\dots,n$, where the operators $L_{i,i}$ are left infinitesimal shifts by the elements $E_{i,i}$.

3. $(L_i^+)^{q_i+1}f(a)=0$, $i=2,3,\dots,n$. The operators $L_i^+$ are left infinitesimal shifts by positive simple root elements, that is by elements $E_{i-1,i}$, and $q_{i}=m_{i-1}-m_{i}$.

Conditions 2 and 3 for a polynomial in determinants mean that in each monomial the sum of exponents of determinants of order $i$ equals $m_{i}-m_{i+1}$. These conditions hold for a function $\mathcal{F}_{\gamma}(a)$ corresponding to the shift vectors described above. Thus, it belongs to the representation with the highest vector (2.4). This proves Proposition 2.

Let us show that these functions form a base that we call the Gelfand–Kapranov–Zelevinsky base. Let us also find its relation to the Gelfand–Tsetlin base.

6.1. The proof of the fact that the functions $\mathcal{F}_{\gamma}(a)$ form a base in a representation

In the representation with the highest vector (2.4), there are vectors $\mathcal{F}_{\gamma}(a)$ indexed by Gelfand–Tsetlin diagrams. To prove that they form a base, it is sufficient to prove that they are linearly independent. If the variables $a_X$ were independent, then the proof of the linear independence of the functions $\mathcal{F}_{\gamma}(a)$ would be very simple. The problem is that the determinants $a_X$ satisfy the Plücker relations (these are all the relations between the determinants of a square matrix, see [15]).

The strategy to overcome this difficulty is the following. We define a “canonical form” of $\mathcal{F}_{\gamma}(a)$ with respect to the Plücker relations. Using this “canonical form”, we derive that the linear span of the functions $\mathcal{F}_{\gamma}(a)$ is a representation. Using the irreducibility property, we conclude that $\mathcal{F}_{\gamma}(a)$ form a base in the representation with the highest vector (2.4).

To realize this strategy, one notes the following. With a base vector

$$ \begin{equation*} v_{i,j,x,X}=(\dots,1_{z_{iX}},\dots,-1_{z_{jX}},\dots,-1_{z_{ixX}},\dots,1_{z_{jxX}},\dots) \end{equation*} \notag $$
one associates a Plücker relation of the following type:
$$ \begin{equation} a_{1,\dots,i-1,i,X}a_{1,\dots,i-1,j,x,X}-a_{1,\dots,i-1,j,X}a_{1,\dots,i-1,i,x,X} +a_{1,\dots,i-1,x,X}a_{1,\dots,i-1,i,j,X}=0. \end{equation} \tag{6.1} $$

Denote the ideal generated by these relations by $\mathrm{IP}$ (we do not discuss the question whether it coincides with the ideal $\mathrm{Pl}$ generated by all Plücker relations).
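The relations (6.1) are easy to verify symbolically. The sketch below (sympy; the matrix size and the chosen indices are illustrative) checks two instances, for $i=1$, $j=2$, $x=3$ with $X=\varnothing$ and with $X=\{4\}$, on a generic $3\times 4$ matrix:

```python
import sympy as sp

g = sp.Matrix(3, 4, lambda r, c: sp.Symbol(f'g{r+1}{c+1}'))      # a generic 3 x 4 matrix
a = lambda cols: g.extract(list(range(len(cols))), [c - 1 for c in cols]).det()

# (6.1) for i = 1, j = 2, x = 3 and X = {} (first line) or X = {4} (second line)
rel1 = a((1,)) * a((2, 3)) - a((2,)) * a((1, 3)) + a((3,)) * a((1, 2))
rel2 = a((1, 4)) * a((2, 3, 4)) - a((2, 4)) * a((1, 3, 4)) + a((3, 4)) * a((1, 2, 4))
print(sp.expand(rel1), sp.expand(rel2))                          # both expand to 0
```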

Instead of the determinants $a_X$, consider independent variables $A_X$ (we also write $A$ for the set of all variables $A_X$). We also introduce the following notation. Consider the polynomial in variables $A_X$

$$ \begin{equation*} f(A)=\sum_{\beta}c_{\beta}A^{u_{\beta}},\qquad c_{\beta}\in\mathbb{C}, \end{equation*} \notag $$
where $\beta$ is some index enumerating monomials of this polynomial, $u_{\beta}$ is a vector of exponents of the corresponding monomial and $A^{u_{\beta}}$ is a multi-index notation for a monomial. With the polynomial we associate the differential operator obtained by the substitution $A_X\mapsto d/dA_X$:
$$ \begin{equation*} f\biggl(\frac{d}{dA}\biggr)=\sum_{\beta}c_{\beta}\biggl(\frac{d}{dA}\biggr)^{u_{\beta}},\qquad c_{\beta}\in\mathbb{C}. \end{equation*} \notag $$

Hence

$$ \begin{equation*} f(a)=0 \ \operatorname{mod} \mathrm{IP} \end{equation*} \notag $$
if and only if the differential operator $f(d/dA)$ acts as zero on the space of solutions of the A-GKZ system.

We denote this action by

$$ \begin{equation}  f(A)\curvearrowright F(A):=f\biggl(\frac{d}{dA}\biggr) F(A). \end{equation} \tag{6.2} $$

Since one has the base $F_{\gamma}(A) $ in the space of solutions of the A-GKZ system, the following result holds.

Lemma 4. $f(a)=0 \,\operatorname{mod}\mathrm{IP}$ if and only if

$$ \begin{equation*} f\biggl(\frac{d}{dA}\biggr)\curvearrowright F_{\gamma}(A)=0. \end{equation*} \notag $$

  Note that in the above formula the equality to zero is assumed in the ordinary sense, not $\operatorname{mod} \mathrm{IP}$. Let us find an explicit formula for the action $\mathcal{F}_{\delta}(A)\curvearrowright F_{\gamma}(A)$.

First of all, the following relation between binomial coefficients takes place.

Proposition 3 (see [16]).

$$ \begin{equation*} \binom{N}{t+l}=\sum_{N=N_1+N_2}\binom{N_1}{t}\binom{N_2-1}{l-1}. \end{equation*} \notag $$
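
This identity (a Vandermonde-type convolution) can be checked numerically; the following sketch reads the sum as running over $N_1\geqslant 0$, $N_2\geqslant 1$ with $N_1+N_2=N$.

```python
# Numeric sanity check of Proposition 3:
# C(N, t+l) = sum over N1 + N2 = N of C(N1, t) * C(N2 - 1, l - 1).
from math import comb

def rhs(N, t, l):
    # N1 runs over 0..N-1, so that N2 = N - N1 >= 1
    return sum(comb(N1, t) * comb(N - N1 - 1, l - 1) for N1 in range(N))

for N in range(1, 12):
    for t in range(N):
        for l in range(1, N - t + 1):
            assert comb(N, t + l) == rhs(N, t, l)
```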

Corollary 1.

$$ \begin{equation} \binom{(t_i+l_i)+k_i}{t_i+l_i}=\sum_{s_i\in\mathbb{Z}_{\geqslant 0}} \binom{l_i+s_i-1}{l_i-1}\binom{t_i+k_i-s_i}{t_i-s_i}+\cdots. \end{equation} \tag{6.3} $$

Now let us prove the following result.

Lemma 5.

$$ \begin{equation} \mathcal{F}_{\gamma}\biggl(\frac{d}{dA}\biggr)F_{\omega}(A)=\sum_{s\in\mathbb{Z}_{\geqslant 0}^k} \frac{(-1)^s}{s!}J_{\gamma+v}^s(1)F_{\omega-\gamma-sr}(A), \end{equation} \tag{6.4} $$
where $J_{\gamma+v}^s(1)$ is the result of substituting $1$ for all arguments, and $\gamma+v:=\gamma+\sum_{\alpha}v_{\alpha}$.

Proof. Write:
$$ \begin{equation*} \mathcal{F}_{\gamma}\biggl(\frac{d}{dA}\biggr)=\sum_{l\in\mathbb{Z}^k}\frac{(d/d A)^{\gamma+lv}}{(\gamma+lv)!}. \end{equation*} \notag $$
Let us find the action of the operator $(d/d A)^{\gamma+lv}$ on the summand $J_{\omega-pr}^p(A)$ of $F_{\omega}$. Using rule (5.4), we get
$$ \begin{equation*} \biggl(\frac{d}{d A}\biggr)^{\gamma+lv}J_{\omega-pr}^p(A)=J_{\omega-\gamma-pr-lv}^p(A). \end{equation*} \notag $$

Now consider in detail $J_{\omega-\gamma-pr-lv}^p(A)$. Let us use a notation

$$ \begin{equation*} \binom{\tau+p}{p}:=\prod_{i=1}^k \binom{\tau_i+p_i}{p_i}. \end{equation*} \notag $$
We have
$$ \begin{equation*} \frac{1}{p!}J_{\omega-\gamma-pr-lv}^p(A) =\sum_{\tau\in\mathbb{Z}^k}\frac{\binom{\tau+p}{p}A^{\omega-\gamma-pr-lv+\tau v}}{(\omega-\gamma-pr-lv+\tau v)!} =\sum_{t\in\mathbb{Z}^k}\frac{\binom{t+l+p}{p}A^{\omega-\gamma-pr+t v}}{(\omega-\gamma-pr+tv)!}. \end{equation*} \notag $$

Let us apply equality (6.3). Since

$$ \begin{equation*} \sum_{t\in\mathbb{Z}^k}\frac{\binom{t+p-s}{p-s}A^{\omega-\gamma-pr+t v}}{(\omega-\gamma-pr+tv)!}=\frac{1}{(p-s)!}J^{p-s}_{\omega-\gamma-pr}(A), \end{equation*} \notag $$
we have
$$ \begin{equation*} \frac{1}{p!}J_{\omega-\gamma-(p-s)r-lv}^p(A)=\sum_{s\in\mathbb{Z}^k_{\geqslant 0}}\binom{l+s-1}{s-1}\frac{1}{(p-s)!}J^{p-s}_{\omega-\gamma-(p-s)r}(A), \end{equation*} \notag $$
where
$$ \begin{equation*} \binom{l+s-1}{s-1}:=\prod_{i=1}^k\binom{l_i+s_i-1}{s_i-1}. \end{equation*} \notag $$

Now take the expression for $(d/d A)^{\gamma+lv}(1/p!)\,J_{\omega-pr}^p(A)$, multiply it by $(-1)^p$, and sum over $p$; then we obtain that

$$ \begin{equation*} \biggl(\frac{d}{d A}\biggr)^{\gamma+lv}F_{\omega}(A)=\sum_{s\in\mathbb{Z}^k_{\geqslant 0}}\binom{l-1+s}{l-1}F_{\omega-\gamma-sr}(A)\cdot (-1)^s. \end{equation*} \notag $$

Summing over $l$, we have

$$ \begin{equation*} \begin{aligned} \, \mathcal{F}_{\gamma}\biggl(\frac{d}{dA}\biggr)F_{\omega}(A) &=\sum_{s\in\mathbb{Z}^k_{\geqslant 0}}\biggl(\sum_l\frac{\binom{l-1+s}{l-1}}{(\gamma+lv)!}\biggr)F_{\omega-\gamma-sr}(A)\cdot (-1)^s \\ &=\sum_{s\in\mathbb{Z}^k_{\geqslant 0}}\frac{(-1)^s}{s!}J^s_{\gamma+v}(1)F_{\omega-\gamma-sr}(A), \end{aligned} \end{equation*} \notag $$
proving Lemma 5.

Thus, we arrive at the following result.

Lemma 6.

$$ \begin{equation} \mathcal{F}_{\gamma}(A)=\sum_s \frac{1}{s!}J_{\gamma+v}^s(1)A^{\gamma+sr} \ \operatorname{mod} \mathrm{IP}. \end{equation} \tag{6.5} $$

Note that when one adds the vector $r^{\alpha}$ to a shift vector, the vector $[0, \dots, -1, \dots, 1, \dots, 0]$ is added to some row of the Gelfand–Tsetlin diagram.

Now using these results, we prove that the functions $\mathcal{F}_{\gamma}(a)$ for the chosen $\gamma$ form a base in the representation.

Let us write $E_{i,j}$ as the differential operator

$$ \begin{equation*} E_{i,j}=\sum_X a_{i,X}\, \frac{\partial}{\partial a_{j,X}}, \end{equation*} \notag $$
where the summation is taken over the subsets $X\subset\{1,\dots,n\}$ that do not contain $i$ and $j$.

Hence

$$ \begin{equation} E_{i,j}\mathcal{F}_{\gamma}(a)=\sum_X a_{i,X}\mathcal{F}_{\gamma-e_{j,X}}(a). \end{equation} \tag{6.6} $$
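
For a concrete illustration (with hypothetical variable names): for $n=3$ one has $E_{1,2}=a_1\,\partial/\partial a_2+a_{1,3}\,\partial/\partial a_{2,3}$; the following sketch applies this operator to a monomial in the determinant variables.

```python
# A minimal sympy sketch (illustration only): the operator
# E_{1,2} = a1 d/da2 + a13 d/da23 for n = 3 acting on a monomial.
import sympy as sp

a1, a2, a3, a12, a13, a23 = sp.symbols("a1 a2 a3 a12 a13 a23")

def E12(f):
    return sp.expand(a1 * sp.diff(f, a2) + a13 * sp.diff(f, a23))

monomial = a2**2 * a12 * a23
print(E12(monomial))   # = 2*a1*a2*a12*a23 + a2**2*a12*a13
```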

An application of (6.5) shows that

$$ \begin{equation} E_{i,j}\mathcal{F}_{\gamma}(a)=\sum_X \sum_{s} c_{X,s}\mathcal{F}_{\gamma-e_{j,X}+e_{i,X}+sr}(a),\qquad c_{X,s}\in\mathbb{C}. \end{equation} \tag{6.7} $$

So, it is proved that the span of all $\mathcal{F}_{\gamma}$ is a representation of the algebra $\mathfrak{gl}_n$. This representation is generated by vectors indexed by the Gelfand–Tsetlin diagrams and is contained in the representation with the highest vector (2.4). Using the arguments of irreducibility and dimension, one gets the following result.

Theorem 3. Consider the set of all Gelfand–Tsetlin diagrams $(m_{i,j})$ for an irreducible finite dimensional representation of $\mathfrak{gl}_n$, construct a shifted lattice for each diagram. For each shifted lattice fix a presentation in the form $\gamma+B$ and take the corresponding shift vectors  $\gamma$. Then the functions $\mathcal{F}_{\gamma}(a)$ form a base in the representation with the highest vector (2.4).

Or, equivalently, the functions $\mathcal{F}_{\gamma}(A)$ of the independent variables $A$ form a base in the representation $\operatorname{mod} \mathrm{IP}$.

6.2. A triangular relation between the Gelfand–Tsetlin and the Gelfand–Kapranov–Zelevinsky bases

We write $G_{\gamma}(a)$ for the function associated with the Gelfand–Tsetlin base vector corresponding to a diagram $(m_{i,j})$ to which the shift vector $\gamma$ corresponds (see Theorem 3).

The following result is a consequence of (6.7).

Theorem 4. The Gelfand–Kapranov–Zelevinsky base $\mathcal{F}_{\gamma}(a)$ is related to the Gelfand–Tsetlin base $G_{\gamma}(a)$ by a transformation upper-triangular with respect to the order (5.5).

Proof. Take a shift vector $\gamma=\gamma_1$ corresponding to a diagram $(m_{i,j})$, and consider the function $G_{\gamma_1}$ and the function $\mathcal{F}_{\gamma_1}$ corresponding to this diagram.

Let us proceed with the following constructions.

The first step is as follows. The number $m_{1,2}-m_{1,1}$ is the maximal power of the operator $E_{1,2}$ which, being applied to $\mathcal{F}_{\gamma_1}$, does not give zero. This follows from (2.3). The same is true for the function $G_{\gamma_1}$; this follows from the formulas for the action of the generators in the Gelfand–Tsetlin base. An application of $E_{1,2}^{m_{1,2}-m_{1,1}}$ to $\mathcal{F}_{\gamma_1}$ and $G_{\gamma_1}$ gives functions which we denote by $\mathcal{F}_{\gamma_2}$ and $G_{\gamma_2}$. They correspond to the diagram $\gamma_2$ obtained from $\gamma_{1}$ by the change

$$ \begin{equation*} m_{1,1}\mapsto m_{1,2}. \end{equation*} \notag $$

Let us now describe the $k$th step of the construction. We are given a diagram $\gamma_{k-1}$ which is maximal with respect to $\mathfrak{gl}_{k-1}$. Note that $m_{1,k}-m_{1,k-1}$ is the maximal power of $E_{1,k}$ which, being applied to $\mathcal{F}_{\gamma_{k-1}}$ or $G_{\gamma_{k-1}}$, gives non-zero functions. The reason is the same as above. As a result of these actions, we get the functions $\mathcal{F}_{\gamma_{k-1,1}}$, $G_{\gamma_{k-1,1}}$, corresponding to a diagram $\gamma_{k-1,1}$, which is obtained from $\gamma_{k-1}$ by the change

$$ \begin{equation*} m_{1,k-1}\mapsto m_{1,k}, \quad \dots,\quad m_{1,1} \mapsto m_{1,k}. \end{equation*} \notag $$

Next, $m_{2,k}-m_{2,k-1}$ is the maximal power of $E_{2,k}$ which, being applied to $\mathcal{F}_{\gamma_{k-1,1}}$ or $G_{\gamma_{k-1,1}}$, gives non-zero functions. As a result, we get the functions $\mathcal{F}_{\gamma_{k-1,2}}$, $G_{\gamma_{k-1,2}}$ corresponding to the diagram  $\gamma_{k-1,2}$ obtained from $\gamma_{k-1,1}$ by the change

$$ \begin{equation*} m_{2,k-1}\mapsto m_{2,k},\quad \dots,\quad m_{2,2} \mapsto m_{2,k} \end{equation*} \notag $$
and so on. After $k-1$ such transformations, we get a $\mathfrak{gl}_{k}$-highest diagram $\gamma_{k-1,k-1}$, which we denote by $\gamma_{k}$. This concludes the $k$th step.

Finally, we get a $\mathfrak{gl}_{n-1}$-highest vector, for which (see [2]) the GKZ vector coincides with the Gelfand–Tsetlin vector, and so

$$ \begin{equation} E_{n-2,n-1}^{m_{n-2,n-1}-m_{n-2,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}}\mathcal{F}_{\gamma}= E_{n-2,n-1}^{m_{n-2,n-1}-m_{n-2,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}}G_{\gamma}. \end{equation} \tag{6.8} $$

Now let us “remove” operators in the left- and right-hand sides of (6.8). Let us remove the operator $ E_{n-2,n-1}^{m_{n-2,n-1}-m_{n-2,n-2}}$. In the above notation, we have

$$ \begin{equation*} \mathcal{F}_{\gamma_{n-2,n-1}}=G_{\gamma_{n-2,n-1}}+f, \end{equation*} \notag $$
where, for $f$, the following is true.

1. This vector is $\mathfrak{gl}_{n-2}$-highest with the same highest weight as $\mathcal{F}_{\gamma_{n-2,n-1}}$ and $G_{\gamma_{n-2,n-1}}$.

2. It has the same weight as $\mathcal{F}_{\gamma_{n-2,n-1}}$ and $G_{\gamma_{n-2,n-1}}$.

3. $E_{n-2,n-1}^{m_{n-2,n-1}-m_{n-2,n-2}}f=0$.

From these facts one concludes that $f$ is a sum of $\mathfrak{gl}_{n-2}$-highest vectors corresponding to Gelfand–Tsetlin diagrams whose rows $n$ and $n-2$ coincide with the corresponding rows of the diagram for the shift vector $\gamma_{n-2,n-1}$, while row $n-1$ is obtained from row $n-1$ of that diagram by adding vectors of type $(\dots,-1,\dots,+1)$. But adding such a vector to a row of a diagram is equivalent to adding the vectors $r_i$ to the shift vector corresponding to the diagram. That is, $f$ is a sum of functions of type $G_{\gamma_{n-2,n-1}+sr}$, where $sr:=s_1r_1+\dots+s_kr_k$ for some $s\in\mathbb{Z}_{\geqslant 0}^k$.

We have $G_{\gamma_{n-2,n-1}}=E_{n-3,n-1}^{m_{n-3,n-1}-m_{n-3,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}}G_{\gamma}$. Since the diagram corresponding to $\gamma_{n-2,n-1}+sr$ differs from that corresponding to $\gamma_{n-2,n-1}$ only in the row $n-1$, we also have

$$ \begin{equation*} G_{\gamma_{n-2,n-1}+sr}=E_{n-3,n-1}^{m_{n-3,n-1}-m_{n-3,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}} G_{\gamma+sr}. \end{equation*} \notag $$

So, removing the operator $E_{n-2,n-1}^{m_{n-2,n-1}-m_{n-2,n-2}}$ in (6.8), we find that

$$ \begin{equation} \begin{aligned} \, &E_{n-3,n-1}^{m_{n-3,n-1}-m_{n-3,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}}\mathcal{F}_{\gamma} \nonumber \\ &\qquad= E_{n-3,n-1}^{m_{n-3,n-1}-m_{n-3,n-2}}\dots E_{1,2}^{m_{1,2}-m_{1,1}}\biggl(G_{\gamma}+\sum_{j}\mathrm{const}_j\cdot G_{\gamma+s^jr}\biggr), \end{aligned} \end{equation} \tag{6.9} $$
where $s^j\in\mathbb{Z}_{\geqslant 0}^k$, and the addition of $s^{j}r$ to the shift vector corresponds to a change of row $n-1$. Continuing this process, one concludes that $\mathcal{F}_{\gamma}$ is expressed through the vectors $G_{\gamma+sr}$, $s\in\mathbb{Z}^k_{\geqslant 0}$, that is, the base $\mathcal{F}_{\gamma}$ is related to the Gelfand–Tsetlin base $G_{\gamma}$ by an upper-triangular transformation. Theorem 4 is proved.

Remark 3. It is known that the function $G_{\gamma}$ corresponding to a Gelfand–Tsetlin diagram can be obtained by applying lowering operators to the highest vector $v_0$ (see [8]):

$$ \begin{equation*} G_{\gamma}=\prod_{k=2}^n\prod_{i=k-1}^1\nabla_{k,i}^{m_{k,i}-m_{k-1,i}} v_0. \end{equation*} \notag $$

The vectors of the Gelfand–Kapranov–Zelevinsky base can be obtained in the same manner, but one needs to use the following lowering operators:

$$ \begin{equation*} \widetilde{\nabla}_{k,i}=a_{1,\dots,i-1,k}\, \frac{\partial}{\partial a_{1,\dots,i-1,i}}. \end{equation*} \notag $$

In a certain sense, $\widetilde{\nabla}_{k,i}$ is a simplification of the operators $\nabla_{k,i}$ (see the formula for $\nabla_{k,i}$ in [8]).

§ 7. A function corresponding to a diagram

In the case $n\geqslant 4$, the functions $\mathcal{F}_{\gamma}(a)$ are not the Gelfand–Tsetlin vectors. To find a function which is a Gelfand–Tsetlin vector, let us consider irreducible solutions $F_{\gamma}(a)$.

Let us prove the following result.

Lemma 7. Consider shift vectors $\gamma$ corresponding to all possible Gelfand–Tsetlin diagrams of an irreducible finite-dimensional representation of $\mathfrak{gl}_n$. Then the functions $F_{\gamma}(A)$ of independent variables $A_X$ form a base in the representation of the algebra $\mathfrak{gl}_n$ with the highest vector (2.4) (in which we change $a_X\mapsto A_X$). The generators of the algebra act by the rule (6.6).

In other words, the functions $F_{\gamma}(A)$ span a representation even without using the relations between the determinants.

Proof of Lemma 7. Consider the ideal $\mathrm{Pl}$ of all relations between the determinants $a_X$. It is an ideal in the ring of polynomials in independent variables $A_X$. The action of $E_{i,j}$ preserves this ideal. Note that $\mathrm{IP}\subset \mathrm{Pl}$.

To every polynomial in variables $A_X$, there corresponds a differential operator obtained by the substitution $A_X\mapsto d/d A_X$. To the ideals $\mathrm{IP}\subset \mathrm{Pl}$ in the polynomial ring there correspond ideals $D_{\mathrm{IP}}\subset D_{\mathrm{Pl}}$ in the ring of differential operators with constant coefficients.

Consider the spaces $\mathrm{Sol}_{D_{\mathrm{IP}}}$ and $\mathrm{Sol}_{D_{\mathrm{Pl}}}$ of polynomial solutions for these ideals, that is, the spaces of functions annihilated by all operators from the corresponding ideal. We have the inclusion

$$ \begin{equation*} \mathrm{Sol}_{D_{\mathrm{IP}}}\supset \mathrm{Sol}_{D_{\mathrm{Pl}}}. \end{equation*} \notag $$

The action of $E_{i,j}$ preserves the ideal $\mathrm{Pl}$, and hence the space $\mathrm{Sol}_{ D_{\mathrm{Pl}}}$ is invariant under the action of $\mathfrak{gl}_n$.

Consider the highest weight $[m_1,\dots,m_n]$, and take the finite-dimensional linear spaces

$$ \begin{equation} \mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{IP}}}\supset \mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{Pl}}} \end{equation} \tag{7.1} $$
of solutions such that the sum of exponents of the variables $A_X$ with $|X|=i$ equals $m_i-m_{i-1}$. Let us make the following observations.

1. Since the action of $E_{i,j}$ preserves these sums of exponents, the space on the right of (7.1) is invariant under the action of $E_{i,j}$.

2. Both spaces in (7.1) contain the monomial (2.4) (in which one changes $a_X\,{\mapsto}\, A_X$). Indeed, consider any basic Plücker relation (in particular, relation (6.1)). No summand of such a relation contains two variables occurring in (2.4); hence every summand contains a variable that does not occur in (2.4). Transforming the relations into differential operators, we see that each summand of these operators annihilates (2.4).

As a corollary of the above properties 1, 2, we see that the space on the right of (7.1) contains an irreducible representation with the highest weight $[m_1,\dots,m_n]$. Hence its dimension is greater than or equal to the dimension of this representation.

3. Due to Theorem 2, there exists a base of type $\{F_{\gamma_{p}}(A)\}$ in $\mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{IP}}}$, where the vectors $\gamma_p$ form a maximal subset of the set of all vectors $\gamma\in \mathbb{Z}^N$, considered $\operatorname{mod} B$, such that:

1) all the coordinates become non-negative after the addition of a vector from $B$,

2) $\sum_{X\colon |X|=i}\gamma_X=m_i-m_{i-1}$.

The number of vectors $\gamma_{p}$ is equal to the number of $\operatorname{mod} B$ independent solutions of the system from Definition 4 that are constructed from all possible Gelfand–Tsetlin diagrams $(m_{i,j})$ with a fixed upper row $[m_1,\dots,m_n]$.

Hence the dimension of $\mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{IP}}}$ equals the dimension of the irreducible representation with highest weight $[m_1,\dots,m_n]$. One also sees that the vectors $\gamma_p$ are exactly the shift vectors corresponding to all possible Gelfand–Tsetlin diagrams, as stated in the formulation of the lemma.

Using the conclusions from properties 1, 2 and from property 3, we find that the dimension of the space on the right of (7.1) is not smaller than that of the space on the left of (7.1). Since the space on the right is contained in the space on the left, we have

$$ \begin{equation*} \mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{IP}}}= \mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{Pl}}}. \end{equation*} \notag $$

As was pointed out in property 3, the span $\langle F_{\gamma}(A)\rangle $ of the functions listed in the formulation is the space $\mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{IP}}}$. It is a representation of the algebra $\mathfrak{gl}_n$, since $\mathrm{Sol}^{m_1,\dots,m_n}_{D_{\mathrm{Pl}}}$ has this property. Moreover, $\langle F_{\gamma}(A)\rangle $ contains (2.4) and has the same dimension as the irreducible representation generated by (2.4). Hence $\langle F_{\gamma}(A)\rangle $ coincides with this representation. Lemma 7 is proved.

As a corollary, we find that the functions $F_{\gamma}(a)$ on the group also span a representation. It contains the highest vector (2.4). Hence, taking the shift vectors corresponding to different Gelfand–Tsetlin diagrams, we get functions $F_{\gamma}(a)$ that form a base in this representation. Due to Theorem 3, the bases $F_{\gamma}(a)$ and $\mathcal{F}_{\gamma}(a)$ are related by an invertible linear transformation $\operatorname{mod} \mathrm{IP}$.

Below we prove that the base $F_{\gamma}(a)$ is related to the Gelfand–Tsetlin base by a lower-triangular transformation relative to the order (5.5).

To prove this fact and to find this transformation explicitly, we use an invariant scalar product. The Gelfand–Tsetlin base is orthogonal relative to this scalar product. Thus, the transformation from the base $F_{\gamma}(a)$ to the Gelfand–Tsetlin base is a lower-triangular transformation that diagonalizes the quadratic form of the scalar product.

7.1. An invariant scalar product in the functional representation

On a finite dimensional irreducible representation of $\mathrm{GL}_n$ there exists a unique (up to multiplication by a constant) Hermitian product $\{\,{\cdot}\,,{\cdot}\,\}$ invariant under the action of $U_n$. This product defines an invariant $\mathbb{C}$-bilinear scalar product $(x,y):=\{x,\overline{y}\}$. The invariance of $\{\,{\cdot}\,,{\cdot}\,\}$ with respect to the action of the group $U_n\subset \mathrm{GL}_n(\mathbb{C})$ means that the following equality for the $\mathbb{C}$-bilinear scalar product takes place:

$$ \begin{equation} (E_{i,j}v,w)=(v,E_{j,i}w). \end{equation} \tag{7.2} $$

Thus, on an irreducible finite dimensional representation of $\mathrm{GL}_n$, there exists a unique invariant $\mathbb{C}$-bilinear scalar product for which (7.2) holds. Let us find it in terms of functional realization.

Consider first the space $V$ spanned by the independent variables $A_p$, $p=1,\dots,n$, on which the algebra $\mathfrak{gl}_n$ acts by the rule

$$ \begin{equation*} E_{i,j} A_p=\delta_{j,p}A_i, \end{equation*} \notag $$
where $\delta_{j,p} $ is the Kronecker delta.

Consider the $\mathbb{C}$-bilinear scalar product $\langle \, ,\rangle$

$$ \begin{equation*} \langle A_p,A_q\rangle =\delta_{p,q}. \end{equation*} \notag $$

We claim that this product is invariant. Indeed,

$$ \begin{equation*} \langle E_{i,j}A_p,A_q\rangle =\delta_{j,p}\delta_{i,q},\qquad \langle A_p,E_{j,i}A_q\rangle =\delta_{i,q}\delta_{p,j}, \end{equation*} \notag $$
proving the claim.

Now consider the following construction. Let $V$ be a representation space of $\mathfrak{gl}_n$ with an invariant scalar product $\langle\, ,\rangle $. Then on $V^{\otimes n}$ there exists an invariant scalar product given by the rule

$$ \begin{equation*} \langle v_{i_1}\otimes\dots\otimes v_{i_n},w_{j_1}\otimes\dots\otimes w_{j_n}\rangle =\langle v_{i_1},w_{j_1}\rangle \cdots\langle v_{i_n},w_{j_n}\rangle . \end{equation*} \notag $$

There exists a projection

$$ \begin{equation*} \pi \colon V^{\otimes n} \to \mathrm{Sym}^n(V), \end{equation*} \notag $$
which commutes with the action of $\mathfrak{gl}_n$. It has a right inverse
$$ \begin{equation*} \pi^{-1}\colon \mathrm{Sym}^n(V)\to V^{\otimes n},\qquad v_{i_1}\cdots v_{i_n}\mapsto \frac{1}{n!} \sum_{\sigma\in S_n} v_{i_{\sigma(1)}}\otimes\dots\otimes v_{i_{\sigma(n)}}. \end{equation*} \notag $$

The map $\pi^{-1}$ also commutes with the action of $\mathfrak{gl}_n$. Thus, we obtain an invariant scalar product on $\mathrm{Sym}^n(V)$ given by the rule

$$ \begin{equation} \langle v_{i_1}\cdots v_{i_n},w_{j_1}\cdots w_{j_n}\rangle :=\langle \pi^{-1}(v_{i_1}\cdots v_{i_n}),\pi^{-1}(w_{j_1}\cdots w_{j_n})\rangle . \end{equation} \tag{7.3} $$

Let us return to the space $V$ spanned by the variables $A_p$, and apply to it the rule (7.3). As a result, we find that the monomials $A^{\gamma}$ (the multi-index notation is used) are orthogonal. The scalar product is

$$ \begin{equation} \begin{aligned} \, &\langle A^{\gamma},A^{\gamma}\rangle =\gamma!,\quad\text{where} \quad (\gamma_1,\dots,\gamma_N)!=\gamma_1!\cdots\gamma_N!, \\ &\langle A^{\gamma},A^{\delta}\rangle =0\quad\text{for}\quad \gamma\neq \delta. \end{aligned} \end{equation} \tag{7.4} $$

Note that the scalar product (7.4) can be written as

$$ \begin{equation} \langle A^{\gamma},A^{\delta}\rangle =A^{\gamma}\curvearrowright A^{\delta}\big|_{A=0}, \end{equation} \tag{7.5} $$
where
$$ \begin{equation*} A^{\gamma}\curvearrowright A^{\delta}:=\biggl(\frac{d}{dA}\biggr)^{\gamma}A^{\delta}. \end{equation*} \notag $$
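
A small sanity check of (7.4) and (7.5) (an illustration only, with three variables) is as follows.

```python
# Check (illustration only) that (d/dA)^gamma A^delta at A = 0 equals
# gamma! when gamma = delta and 0 otherwise, cf. (7.4)-(7.5).
import sympy as sp
from math import factorial

A = sp.symbols("A1 A2 A3")

def pairing(gamma, delta):
    expr = sp.Integer(1)
    for v, d in zip(A, delta):
        expr *= v**d
    for v, g in zip(A, gamma):
        expr = sp.diff(expr, v, g)
    return expr.subs({v: 0 for v in A})

assert pairing((2, 0, 1), (2, 0, 1)) == factorial(2) * factorial(0) * factorial(1)
assert pairing((2, 0, 1), (1, 1, 1)) == 0
```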

Let us construct an invariant scalar product on the space of functions on the group $\mathrm{GL}_n$ that form an irreducible finite dimensional representation. The functions are written as expressions depending on determinants. The difficulty is that the determinants are not independent variables (they obey the Plücker relations).

To overcome this difficulty, note that $V=\operatorname{span}(\mathcal{F}_{\gamma}(a))=\operatorname{span}(F_{\gamma}(a))$, where $\gamma$ runs over the shift vectors of all possible Gelfand–Tsetlin diagrams of one representation. Both $\{\mathcal{F}_{\gamma}(a)\}$ and $\{F_{\gamma}(a)\}$ are bases.

It is enough to define a scalar product between base vectors. We put

$$ \begin{equation} (F_{\gamma}(a),F_{\delta}(a)):=\langle F_{\gamma}(A),F_{\delta}(A)\rangle =F_{\gamma}\curvearrowright F_{\delta}\big|_{A=0}. \end{equation} \tag{7.6} $$

Proposition 4. Formula (7.6) defines an invariant scalar product in the representation with highest vector (2.4).

Proof. This definition is correct, since the functions $F_{\gamma}(A)$ of the independent variables $A$ span a representation (without using the equivalence $\operatorname{mod} \mathrm{IP}$). Since $\langle \, ,\rangle $ is invariant, the scalar product thus obtained is invariant. This proves Proposition 4.

7.2. A relation between the base $F_{\gamma}(a)$ and the Gelfand–Tsetlin base

Note that in general $(\mathcal{F}_{\gamma}(a),\mathcal{F}_{\delta}(a))\neq \langle \mathcal{F}_{\gamma}(a),\mathcal{F}_{\delta}(a)\rangle =\mathcal{F}_{\gamma}\curvearrowright \mathcal{F}_{\delta}\big|_{A=0}$. To find $(\mathcal{F}_{\omega}(a),\mathcal{F}_{\delta}(a))$, we have to express $\mathcal{F}_{\omega}(A)$ and $\mathcal{F}_{\delta}(A)$ via the functions $F_{\gamma}(A)$ modulo $\mathrm{IP}$.

Proposition 5. $\langle pl,F_{\delta}\rangle =0$, where $pl\in \mathrm{IP}$.

Proof. We have $\langle pl,F_{\delta}\rangle =pl\curvearrowright F_{\delta}\big|_{A=0}$. But the generators of the ideal $\mathrm{IP}$ act on $F_{\delta}$ as zero (since $F_{\delta}$ is a solution of the A-GKZ system). This proves Proposition 5.

So, we have

$$ \begin{equation*} (\mathcal{F}_{\gamma}(a),F_{\delta}(a))= \langle \mathcal{F}_{\gamma}(a),F_{\delta}(a)\rangle. \end{equation*} \notag $$

Earlier, the order (5.5) was defined on the sets $\gamma+B$. One can also consider (5.5) as an order on the shift vectors.

Let us prove the following result.

Proposition 6. $(\mathcal{F}_{\gamma}(a),F_{\delta}(a))=\begin{cases} 0 &\text{if } \delta \prec \gamma, \\ \dfrac{(-1)^u}{u!}J^u_{\delta}(1) &\text{if }\delta+ur=\gamma,\, u\in\mathbb{Z}^k_{\geqslant 0}. \end{cases} $

Proof. We have to evaluate $\mathcal{F}_{\gamma}\curvearrowright F_{\delta}\big|_{A=0}$. In order to obtain a non-zero result, a constant must occur among the summands in $\mathcal{F}_{\gamma}\curvearrowright F_{\delta}$. Since, $\operatorname{mod} B$, we have $\operatorname{supp}\mathcal{F}_{\gamma}=\gamma$ and $\operatorname{supp} F_{\delta}=\bigcup_{s\in\mathbb{Z}_{\geqslant 0}^k}(\delta-sr)$, the result is non-zero only if $\delta+ur=\gamma\,\operatorname{mod} B$ for some $u\in\mathbb{Z}^k_{\geqslant 0}$.

Let $\delta+ur=\gamma$. In this case, the constant that appears under the action $\mathcal{F}_{\gamma}\curvearrowright F_{\delta}$ equals

$$ \begin{equation*} \sum_{t\in\mathbb{Z}^k} \frac{(-1)^u}{u!}\, \frac{(t+1)\cdots(t+u)}{(\delta+tv)!} =\frac{(-1)^u}{u!}J^u_{\delta}(1). \end{equation*} \notag $$
This proves Proposition 6.

Corollary 2. The base $F_{\gamma}(a)$ is related to the Gelfand–Tsetlin base by a lower-triangular transformation.

Proof. On the one hand, the base $\mathcal{F}_{\gamma}$ is related to the Gelfand–Tsetlin base vectors by an upper-triangular transformation. On the other hand, the scalar product $(\mathcal{F}_{\gamma}(a),F_{\delta}(a))$ is non-zero only in the case $\delta\preceq \gamma$. Since the Gelfand–Tsetlin base is orthogonal, the claim follows.

7.3. The scalar products of functions $F_{\gamma}(a)$

Let us find the scalar product $(F_{\gamma},F_{\omega})=\langle F_{\gamma},F_{\omega}\rangle$. Considering the supports of the functions, one concludes that this scalar product is non-zero only in the case $\gamma=\delta+l_1r\,\operatorname{mod} B$, $\omega=\delta+l_2r\,\operatorname{mod} B$ for some $\delta$ and $l_1,l_2\in\mathbb{Z}^k_{\geqslant 0}$.

Also, by considering the supports and using the expression $F_{\gamma}\curvearrowright F_{\delta}\big|_{A=0}$, one concludes that $\langle F_{\delta+l_1r},F_{\delta+l_2r}\rangle$, $l_1,l_2\in\mathbb{Z}^k_{\geqslant 0}$, equals

$$ \begin{equation*} \sum_{u\in \mathbb{Z}^k_{\geqslant 0},\, t\in\mathbb{Z}^k}(-1)^{l_1+l_2} \frac{(t+1)\cdots(t+u+l_1)(t+1)\cdots(t+u+l_2)}{(\delta-ur+tv)!\,(u+l_1)!\,(u+l_2)!}, \end{equation*} \notag $$
where it is assumed that $\min(l_1,l_2):=\{\min(l_1^i,l_2^i)\}=0$. Consider the functions
$$ \begin{equation} J_{\delta}^{u+l_1;u+l_2}(A):=\sum_{t\in\mathbb{Z}^k} \frac{(t+1)\cdots(t+u+l_1)(t+1)\cdots(t+u+l_2)}{(\delta+tv)!\,(u+l_1)!\,(u+l_2)!}A^{\delta+tv}. \end{equation} \tag{7.7} $$

We also introduce the functions

$$ \begin{equation} F_{\delta}^{l_1,l_2}(A):=\sum_{u\in\mathbb{Z}^k_{\geqslant 0}} \frac{(-1)^{l_1+l_2}J_{\delta-ur}^{u+l_1;u+l_2}(A)}{(u+l_1)!\,(u+l_2)!}. \end{equation} \tag{7.8} $$

We have

$$ \begin{equation*} \langle F_{\delta+l_1r},F_{\delta+l_2r}\rangle =F_{\delta}^{l_1,l_2}(1), \end{equation*} \notag $$
where it is assumed that $\min(l_1,l_2)=0$.

7.4. A function corresponding to a Gelfand–Tsetlin vector

Definition 6. Let $\delta_0$ be a shift vector which is a minimal element with respect to the order (5.5).

Then an arbitrary shift vector under consideration can be written as follows:

$$ \begin{equation*} \delta_0+mr,\qquad m\in\mathbb{Z}^k_{\geqslant 0}. \end{equation*} \notag $$

This implies that the scalar product $\langle F_{\gamma},F_{\omega}\rangle $ is non-zero only if the following relation holds: $\gamma=\delta+l_1r$, $\omega=\delta+l_2r$, $l_1,l_2\in\mathbb{Z}^k_{\geqslant 0}$. In contrast to the previous subsection, this relation for $\gamma$ and $\omega$ holds exactly, rather than $\operatorname{mod}B$.

Consider the quadratic form corresponding to the scalar product in the base $F_{\gamma}$:

$$ \begin{equation*} q=\sum_{\gamma,\delta} x_{\gamma}x_{\delta}\langle F_{\gamma},F_{\delta}\rangle, \end{equation*} \notag $$
where the summation is taken over all chosen shift vectors.

Let us write explicitly the lower-triangular change of coordinates that diagonalizes this quadratic form. Fix an arbitrary $\delta$ and consider the space $\operatorname{span}(F_{\gamma},\, \gamma\preceq \delta)$. Then the summands of $q$ that contain $x_{\delta}$ are written as follows:

$$ \begin{equation*} q=F_{\delta}^{0,0}(1)x_{\delta}^2+\sum_{l\in\mathbb{Z}^k_{\geqslant 0},\, l\neq 0} 2F_{\delta-lr}^{l,0}(1)x_{\delta-lr}x_{\delta}+\cdots. \end{equation*} \notag $$

An application of the Lagrange algorithm shows that the diagonalizing change of variables looks as follows:

$$ \begin{equation} x'_{\delta}=\sum_{l\in\mathbb{Z}^k_{\geqslant 0}}F_{\delta-lr}^{l,0}(1)x_{\delta-lr}. \end{equation} \tag{7.9} $$
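
The structure of this change of variables is that of an ordinary completion of the square; the following generic sketch (with hypothetical coefficients, not the specific values of the paper) shows one step of Lagrange's algorithm.

```python
# One step of Lagrange's algorithm (generic illustration, cf. (7.9)):
# after passing to x0' = c00*x0 + c01*x1, no cross terms with x0 remain.
import sympy as sp

x0, x1, c00, c01, c11 = sp.symbols("x0 x1 c00 c01 c11")
q = c00 * x0**2 + 2 * c01 * x0 * x1 + c11 * x1**2

x0_new = c00 * x0 + c01 * x1
remainder = sp.simplify(q - x0_new**2 / c00)
print(sp.factor(remainder))   # x1**2*(c00*c11 - c01**2)/c00, free of x0
```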

Since $\delta$ is arbitrary, one comes to the following conclusion. Let $G_{\delta}$ be a function that corresponds to a Gelfand–Tsetlin diagram (corresponding to the shift vector $\delta$). We have an expression of the diagonalizing variables through the initial variables. So, we have an expression of $F_{\delta}$ via $G_{\delta}$.

Theorem 5.

$$ \begin{equation} \begin{gathered} \, F_{\delta}(A)=\sum_{l\in\mathbb{Z}^k_{\geqslant 0}}C_{\delta}^l\cdot G_{\delta-lr}(A), \nonumber \\ C_{\delta}^l=F_{\delta-lr}^{l,0}(1)=\sum_{u\in\mathbb{Z}^k_{\geqslant 0},\, t\in\mathbb{Z}^k} \frac{(-1)^{l}(t+1)\cdots(t+u+l)(t+1)\cdots(t+u)}{(\delta-(l+u)r+tv)!\, (u+l)!\, u!}. \end{gathered} \end{equation} \tag{7.10} $$

The inverse of (7.10) is given in the following theorem.

Theorem 6.

$$ \begin{equation} \begin{gathered} \, G_{\delta}(A)=\sum_{l\in\mathbb{Z}^k_{\geqslant 0}}S_{\delta}^l\cdot F_{\delta-lr}(A), \nonumber \\ S_{\delta}^0=\frac{1}{C_{\delta}^0},\qquad S_{\delta}^l=-\frac{C_{\delta}^l}{C_{\delta}^0 C_{\delta-lr}^0},\quad l\neq 0. \end{gathered} \end{equation} \tag{7.11} $$

Remark 4. Unfortunately, we do not know how to deduce from (7.11) that, in the case $n=3$, $G_{\gamma}=\mathcal{F}_{\gamma}$ modulo the Plücker relations. Note that the case $n=3$ is very specific. For example, using the formula (65) from [3], it is possible to construct another base in the space of solutions of the A-GKZ system:

$$ \begin{equation*} \widetilde{F}_{\gamma}(a)=a_3^{\gamma_3}a_{1,2}^{\gamma_{1,2}}\sum_{s\in\mathbb{Z}_{\geqslant 0}}\mathrm{const}_s\cdot (a_1a_{2,3}-a_2a_{1,3})^s\mathcal{F}_{\gamma-sv^+}(a). \end{equation*} \notag $$

This base does not have an analogue in the case $n>3$. Using it instead of $F_{\delta}$, one can conclude that, in the case $n=3$, $G_{\gamma}=\mathcal{F}_{\gamma}$ modulo the Plücker relations.

7.5. The coefficients in Theorems 5 and 6

Formulas (7.10) and (7.11) involve the expressions $C_{\delta}^{l}=F_{\delta-lr}^{l,0}(1)$. Each such expression is defined as the sum of a series (actually, the sum is finite). In this subsection, we consider these series in detail.

7.5.1. An expression for $J_{\delta}^{u+l_1;u+l_2}(A)$

First of all, let us show that the function $J_{\delta}^{u+l_1;u+l_2}(A)$ can be expressed through simpler functions $J_{\delta}^{v}$.

Consider the expression $c_t^{a,b}:=(t+1)\cdots(t+a)(t+1)\cdots(t+b)$ and represent it as a linear combination of the expressions $c_t^c:=(t+1)\cdots(t+c)$. We have the equality

$$ \begin{equation*} c_t^{a,b}=\sum_{c=0}^{a+b} k_c c_{t}^{c}. \end{equation*} \notag $$

Let us find the coefficients $k_c$ in this equality. We use the operator $O$ defined by

$$ \begin{equation*} Oc_t:=c_t-c_{t-1}, \end{equation*} \notag $$
and the operator of substitution $\big|_{t=-1}$. Let us use the rules
$$ \begin{equation*} \begin{aligned} \, O c_t^{a,b} &=a\cdot c_t^{a-1,b}+b\cdot c_{t}^{a,b-1}-ab\cdot c_t^{a-1,b-1}, \\ O c_t^c &=c\cdot c_t^{c-1}. \end{aligned} \end{equation*} \notag $$
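
These two rules can be verified directly; the following sketch checks them symbolically for small $a$, $b$, $c$ (an illustration only).

```python
# Symbolic check (small cases only) of the difference rules for
# c_t^{a,b} = (t+1)...(t+a)(t+1)...(t+b) and c_t^c = (t+1)...(t+c),
# where O f(t) := f(t) - f(t-1).
import sympy as sp

t = sp.Symbol("t")

def c_single(c):
    expr = sp.Integer(1)
    for i in range(1, c + 1):
        expr *= t + i
    return expr

def c_double(a, b):
    return c_single(a) * c_single(b)

def O(f):
    return sp.expand(f - f.subs(t, t - 1))

for a in range(1, 4):
    for b in range(1, 4):
        rhs = (a * c_double(a - 1, b) + b * c_double(a, b - 1)
               - a * b * c_double(a - 1, b - 1))
        assert sp.expand(O(c_double(a, b)) - rhs) == 0

for c in range(1, 5):
    assert sp.expand(O(c_single(c)) - c * c_single(c - 1)) == 0
```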

Note that $c_{t}^{a,b}\big|_{t=-1}\neq 0$ only if $a=b=0$ and $c_{t}^{0,0}\big|_{t=-1}=1$, analogously, $c_{t}^{c}\big|_{t=-1}\neq 0$ only if $c=0$ and $c_{t}^{0}\big|_{t=-1}=1$.

Using these facts, we find that $O^pc_{t}^{a,b}\big|_{t=-1}{\neq}\, 0$ under the condition $\max(a,b)\leqslant p\leqslant a+b$, and in this case $O^pc_{t}^{a,b}\big|_{t=-1}=a!\,b!\,(-1)^{a+b-p}$. On the other hand, $O^pc_{t}^{c}\big|_{t=-1}\neq 0$ only for $c=p$, and in this case $O^cc_{t}^{c}=c!$. Hence

$$ \begin{equation*} k_c=\frac{a!\, b!}{c!}(-1)^{a+b-c}. \end{equation*} \notag $$

Applying this relation to the function $J_{\delta}^{u+l_1,u+l_2}(A)$ defined by (7.7) we obtain

$$ \begin{equation*} J_{\delta}^{u+l_1,u+l_2}(A)=\sum_c\frac{(u+l_1)!\,(u+l_2)!}{c!}(-1)^{2u+l_1+l_2-c}J_{\delta}^c(A), \end{equation*} \notag $$
where $u+\max(l_1,l_2)\leqslant c\leqslant 2u+l_1+l_2$.

For the functions $F_{\delta}^{l_1,l_2}(A)$ defined by (7.8), we have

$$ \begin{equation*} F_{\delta}^{l_1,l_2}(A)=\sum_{u\in\mathbb{Z}_{\geqslant 0}^k}\, \sum_{u+\max(l_1,l_2)\leqslant c\leqslant 2u+l_1+l_2}\frac{(-1)^cJ_{\delta-ur}^c(A)}{c!}. \end{equation*} \notag $$

Consider the scalar product $\langle F_{\delta},F_{\delta}\rangle =F_{\delta}^{0,0}(1)$. It is written as a sum of values at $A=1$ of the following functions:

$$ \begin{equation*} \begin{alignedat}{2} &\frac{1}{0!}\,J_{\delta}^0(A), \\ -&\frac{1}{1!}\,J_{\delta-r^{\alpha}}^{e_{\alpha}}(A),&\qquad &\frac{1}{2!}\,J_{\delta-r^{\alpha}}^2(A), \\ &\frac{1}{2!}\,J_{\delta-2r^{\alpha}}^{2e_{\alpha}}(A),&\qquad -&\frac{1}{3!}\,J_{\delta-3r^{\alpha}}^{3 e_{\alpha}}(A), \qquad\frac{1}{4!}\,J_{\delta-4r^{\alpha}}^{4e_{\alpha}}(A). \end{alignedat} \end{equation*} \notag $$

Here the index $\alpha$ is fixed; one needs to consider such shifts for all indices $\alpha=1,\dots,k$.

Consider the scalar product $\langle F_{\delta},F_{\delta-r^{\alpha}}\rangle =F_{\delta-r^{\alpha}}^{e_{\alpha},0}(1)$. It is presented as a sum of values at $A=1$ of the following functions:

$$ \begin{equation*} \begin{alignedat}{2} &{-}\frac{1}{1!}\,J_{\delta-r^{\alpha}}^{e_{\alpha}}(A), \\ &\hphantom{-} \frac{1}{2!}\,J_{\delta-2r^{\alpha}}^{2e_{\alpha}}(A),&\qquad &{-}\frac{1}{3!}\,J_{\delta-2r^{\alpha}}^{3e_{\alpha}}(A), \\ &{-}\frac{1}{3!}\,J_{\delta-3r^{\alpha}}^{3e_{\alpha}}(A),&\qquad &\hphantom{-}\frac{1}{4!}\,J_{\delta-3r^{\alpha}}^{4e_{\alpha}}(A), \qquad -\frac{1}{5!}\,J_{\delta-3r^{\alpha}}^{5e_{\alpha}}(A). \end{alignedat} \end{equation*} \notag $$

So, we have

$$ \begin{equation} F_{\delta-lr}^{l,0}(A)=\sum_{u\in \mathbb{Z}^k_{\geqslant 0}}\sum_{0\leqslant s\leqslant u} (-1)^{l+u}\frac{J^{l+u+s}_{\delta-(l+u)r}(A)}{(l+u+s)!}. \end{equation} \tag{7.12} $$

7.5.2. A relation to the Horn functions

The functions $J_{\gamma}^s(A)$ admit the following description in the language of Horn functions. Let $\zeta\in\mathbb{C}^k$. Then

$$ \begin{equation*} H(\zeta)=\sum_{t\in \mathbb{Z}^k}c(t)\zeta^t,\qquad c(t)\in\mathbb{C}, \end{equation*} \notag $$
is called a Horn series if each ratio $c(t+e_{\alpha})/c(t)$ is a rational function of $t$, in other words, if
$$ \begin{equation*} \frac{c(t+e_{\alpha})}{c(t)}=\frac{P_{\alpha}(t)}{Q_{\alpha}(t)},\qquad\alpha=1,\dots,k, \end{equation*} \notag $$
where $P_{\alpha},Q_{\alpha}$ are polynomials (see [14], [9]).
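
A standard example (not from the paper): the Gauss hypergeometric series, with coefficients $c(t)=(a)_t(b)_t/((c)_t\,t!)$, is a Horn series, since $c(t+1)/c(t)=(a+t)(b+t)/((c+t)(t+1))$ is rational in $t$. This can be checked numerically as follows.

```python
# Numeric check (illustration only) that the Gauss hypergeometric coefficients
# form a Horn series: c(t+1)/c(t) = (a+t)(b+t)/((c+t)(t+1)).
from fractions import Fraction
from math import factorial

def rising(x, n):
    out = Fraction(1)
    for i in range(n):
        out *= x + i
    return out

def coeff(a, b, c, t):
    return rising(a, t) * rising(b, t) / (rising(c, t) * factorial(t))

a, b, c = Fraction(1, 2), Fraction(3, 4), Fraction(5, 3)
for t in range(8):
    ratio = coeff(a, b, c, t + 1) / coeff(a, b, c, t)
    assert ratio == (a + t) * (b + t) / ((c + t) * (t + 1))
```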

With a $\Gamma$-series $\mathcal{F}_{\gamma}(A)$ one associates the following Horn series. Let us write

$$ \begin{equation*} \begin{aligned} \, \mathcal{F}_{\gamma}(A) &=\sum_{x=\gamma+t_1v_1+\dots+t_kv_k}\frac{A^x}{x!} =\sum_{t}\frac{A^{\gamma+tv}}{(\gamma+tv)!}=A^{\gamma}\sum_{t} \frac{(A^{v})^{t}}{(\gamma+t_1v_1+\dots+t_kv_k)!} \\ &=A^{\gamma}\sum_{t}\frac{\zeta^t}{(\gamma+tv)!}=A^{\gamma}H_{\gamma}(\zeta), \end{aligned} \end{equation*} \notag $$
where
$$ \begin{equation*} \zeta_1=A^{v_1},\quad \dots,\quad \zeta_k=A^{v_k}. \end{equation*} \notag $$

Then

$$ \begin{equation*} \begin{aligned} \, J^{s}_{\gamma}(A) &=\sum_{t}\frac{(t+1)\cdots(t+s)A^{\gamma+tv}}{(\gamma+tv)!} =A^{\gamma}\sum_{t}\frac{(t+1)\cdots(t+s)\zeta^{t}}{(\gamma+tv)!} \\ &=A^{\gamma}\biggl(\frac{d}{d\zeta}\biggr)^s(\zeta^sH_{\gamma}(\zeta)). \end{aligned} \end{equation*} \notag $$

So, we have

$$ \begin{equation} F_{\delta-lr}^{l,0}(1)=\sum_{u\in \mathbb{Z}^k_{\geqslant 0}}\sum_{0\leqslant s\leqslant u}\frac{(-1)^{l+u}}{(l+u+s)!} \biggl(\frac{d}{d\zeta}\biggr)^s \bigl(\zeta^{l+u+s}H_{\delta-(l+u)r}(\zeta)\bigr)\big|_{\zeta=1}. \end{equation} \tag{7.13} $$

Remark 5. In (7.13), $F_{\delta-lr}^{l,0}(1)$ is represented as a value at 1 of a sum of derivatives of a Horn function. It is also worth noting that such an expression is itself a Horn function. Thus, $F_{\delta-lr}^{l,0}(1)$ is a value at 1 of a function of hypergeometric type, that is, it is a hypergeometric constant.


Bibliography

1. I. M. Gelfand and M. S. Tsetlin, “Finite-dimensional representations of groups of unimodular matrices”, Dokl. Akad. Nauk SSSR, 71:5 (1950), 825–828 (Russian)
2. G. E. Baird and L. C. Biedenharn, “On the representations of semisimple Lie groups. II”, J. Math. Phys., 4:12 (1963), 1449–1466
3. D. V. Artamonov, “Clebsh–Gordan coefficients for the algebra $\mathfrak{gl}_3$ and hypergeometric functions”, St. Petersburg Math. J., 33:1 (2022), 1–22
4. P. A. Valinevich, “Construction of the Gelfand–Tsetlin basis for unitary principal series representations of the algebra $\mathfrak{sl}_n(\mathbb C)$”, Theoret. and Math. Phys., 198:1 (2019), 145–155
5. V. K. Dobrev and P. Truini, “Polynomial realization of $U_q(\mathrm{sl}(3))$ Gel'fand–(Weyl)–Zetlin basis”, J. Math. Phys., 38:7 (1997), 3750–3767
6. V. K. Dobrev, A. D. Mitov, and P. Truini, “Normalized $U_q(\mathrm{sl}(3))$ Gel'fand–(Weyl)–Zetlin basis and new summation formulas for $q$-hypergeometric functions”, J. Math. Phys., 41:11 (2000), 7752–7768
7. D. V. Artamonov, “A Gelfand–Tsetlin-type basis for the algebra $\mathfrak{sp}_4$ and hypergeometric functions”, Theoret. and Math. Phys., 206:3 (2021), 243–257
8. D. P. Želobenko, Compact Lie groups and their representations, Transl. Math. Monogr., 40, Amer. Math. Soc., Providence, RI, 1973
9. I. M. Gelfand, M. I. Graev, and V. S. Retah, “General hypergeometric systems of equations and series of hypergeometric type”, Russian Math. Surveys, 47:4 (1992), 1–88
10. D. V. Artamonov, “Formula for the product of Gauss hypergeometric functions and applications”, J. Math. Sci. (N.Y.), 249 (2020), 817–826
11. I. M. Gelfand, A. V. Zelevinskii, and M. M. Kapranov, “Hypergeometric functions and toral manifolds”, Funct. Anal. Appl., 23:2 (1989), 94–106
12. J. Kamnitzer, “Geometric constructions of the irreducible representations of $GL_n$”, Geometric representation theory and extended affine Lie algebras (Ottawa 2009), Fields Inst. Commun., 59, Amer. Math. Soc., Providence, RI, 2011, 1–18
13. V. Guillemin and S. Sternberg, “The Gelfand–Cetlin system and quantization of the complex flag manifolds”, J. Funct. Anal., 52:1 (1983), 106–128
14. T. M. Sadykov and A. K. Tsikh, Hypergeometric and algebraic functions of several variables, Nauka, Moscow, 2014 (Russian)
15. E. Miller and B. Sturmfels, Combinatorial commutative algebra, Grad. Texts in Math., 227, Springer-Verlag, New York, 2005
16. R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete mathematics. A foundation for computer science, 2nd ed., Addison-Wesley Publ. Co., Reading, MA, 1994
