Известия Российской академии наук. Серия математическая, 2023, Volume 87, Issue 5, Pages 5–40
DOI: https://doi.org/10.4213/im9389
(Mi im9389)
 

Fermions from classical probability and statistics defined by stochastic independence

L. Accardi$^a$, Yu. G. Lu$^b$

$^a$ Centro Vito Volterra, Università di Roma Tor Vergata, Roma, Italy
$^b$ Dipartimento di Matematica, Università degli Studi di Bari, Bari, Italy
Abstract: The case study of fermions and the attempt to deduce their structure from classical probability open new ways for classical and quantum probability, in particular, for the notion of stochastic coupling which, on the basis of the example of fermions, we enlarge to the notion of algebraic coupling, and for the various notions of stochastic independence. These notions are shown to be strictly correlated with algebraic and stochastic couplings. This approach allows one to expand considerably the notion of open system. The above statements will be illustrated with some examples. The last section shows how, from these new stochastic couplings, new statistics emerge alongside the known Maxwell–Boltzmann, Bose–Einstein and Fermi–Dirac statistics.
Bibliography: 5 titles.
Keywords: fermions, Pauli exclusion principle, stochastic independences, algebraic constraints.
Received: 12.06.2022
Revised: 07.09.2022
English version:
Izvestiya: Mathematics, 2023, Volume 87, Issue 5, Pages 855–890
DOI: https://doi.org/10.4213/im9389e
Publication type: Article
UDC: 536.931
Language of publication: English

§ 1. Introduction

It is now known that the Heisenberg commutation relations (CCR) are a special manifestation of a universal phenomenon in classical probability: the quantum decomposition of any classical random field with all moments into a sum of $3$ operators (creation, annihilation and preservation) satisfying commutation relations uniquely determined by the orthogonal polynomial decomposition associated with the random field and characterizing the given field up to stochastic equivalence.

The Heisenberg commutation relations characterize the Gaussian class.
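For orientation, recall the standard example behind this statement: for a standard Gaussian random variable, the monic orthogonal polynomials are the Hermite polynomials $H_n$, whose three-term recurrence $xH_n(x) = H_{n+1}(x) + nH_{n-1}(x)$ yields the quantum decomposition
$$ \begin{equation*} X = a^+ + a^-,\qquad a^+H_n := H_{n+1},\quad a^-H_n := nH_{n-1},\quad a^0 = 0, \end{equation*} \notag $$
so that $[a^-,a^+]H_n = (n+1)H_n - nH_n = H_n$, i.e., $[a^-,a^+]=1$: the Heisenberg commutation relation in $1$ degree of freedom.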

In this paper we discuss the analogous problem for the Fermi commutation relations, and we prove that, compared with bosons, a new feature appears. This feature suggests a modification of the classical probabilistic notion of stochastic coupling, as well as new problems connecting the traditional statistics of physics (Maxwell–Boltzmann, Bose–Einstein and Fermi–Dirac) with new notions of stochastic independence.

The classical probabilistic roots of boson quantum theory are quickly reviewed in Section 1.1 in order to illustrate the analogous program for fermions.

It is known that fermions are related to Bernoulli random variables [1]. In Section 2 we prove that, for a single random variable (a system with $1$ degree of freedom), the Fermi anti-commutation relations (CAR) can be deduced from the quantum decomposition of a classical Bernoulli random variable given by the theory of orthogonal polynomials.

However, in the same section we also prove that this result cannot hold for a vector valued random variable (random field) taking values in a space of dimension strictly greater than $1$ (many degrees of freedom).

In Section 3 we prove that the multi-dimensional CAR follow from a simple and meaningful (physically and probabilistically) algebraic condition: the weak form of the Pauli exclusion principle. In addition to this principle, the proof exploits the fact that in the case of a single Bernoulli random variable the CAR are satisfied.

The idea is to start from a family of classical Bernoulli random variables (a classical Bernoulli process) and to embed isomorphically the quantum algebra of each single Bernoulli variable, constructed in Section 2, into a general $*$-algebra. One then proves that the CAR follow by adding to this condition a mathematical formulation of the (weak) Pauli exclusion principle.

More explicitly:

– One starts from a family of classical (Bernoulli) random variables $(X_v)_{v\in V}$ ($V$ an index set) and to each of them one associates its quantum algebra $\mathcal{A}_v$ constructed in Section 2.

– One embeds each $\mathcal{A}_v$ into a larger $*$-algebra $\mathcal{A}$ imposing two algebraic constraints: the weak Pauli exclusion principle and the requirement that the embedding $\mathcal{A}_v\to \mathcal{A}$ is a $*$-isomorphism onto its image.

– Then one proves (Theorem 4) that the images of these embeddings satisfy the weak form of the Pauli exclusion principle if and only if the CAR are satisfied.

– The proof of the theorem is independent of any state on the algebra $\mathcal{A}$, hence the mixed moments (correlators) of these random variables in any state on $\mathcal{A}$ will satisfy the constraints coming from the CAR: these are strong algebraic restrictions.

Suppose that a state $\varphi$ on $\mathcal{A}$ is compatible with the statistics of the given family of classical Bernoulli random variables in the sense that its restriction to the image of each algebra $\mathcal{A}_v$ coincides with the state on $\mathcal{A}_v$ induced by the classical probability distribution of $X_v$.

If the algebra $\mathcal{A}$ were commutative, this situation would be described by the classical notion of stochastic coupling of classical probability measures (see Section 4). But, since the algebra $\mathcal{A}$ cannot be commutative because it contains a CAR algebra, the example of fermions highlights the existence of a new class of stochastic couplings that we could call quantum couplings of classical random variables.

In fact, in the fermion case, one starts with a family of classical Bernoulli random variables, but the requirement that defines classical stochastic couplings, namely, that the marginals of the coupled state coincide with the probability distributions of the single classical variables, is replaced by the requirement that the restrictions of the coupled state to the images of the canonical quantum algebras $\mathcal{A}_v$ (quantum marginals) coincide with the restrictions to $\mathcal{A}_v$ of the quantum states uniquely determined by the classical probability distributions of the single random variables.

Furthermore, in our discussion of fermions, the stochastic coupling procedure is split into two steps. In the first step, statistical information is limited to the distributions of the single Bernoulli variables, while the embeddings into the larger algebra $\mathcal{A}$ are purely algebraic and no special state, i.e., no statistics, is fixed on $\mathcal{A}$; in this step the CAR are deduced. In the second step, assuming that the images of the algebras $\mathcal{A}_v$ under the embeddings generate $\mathcal{A}$, one studies the structure of the states compatible with the CAR (this step is not discussed in the present paper for fermions, but in Section 5 the analogue of this study is performed in the simpler boolean and monotone situations).

This two-tier situation, first the introduction of embeddings with purely algebraic constraints and then the study of the states compatible with these constraints, has been known in quantum probability for a long time and arose in connection with several notions of stochastic independence.

To our knowledge, the first observation that purely algebraic embeddings can encode statistical information occurred, simultaneously and independently, in two papers (both submitted in 1997): one by R. Lenczewski [2], who realized several kinds of stochastic independence and conditional independence (including boolean independence) using special embeddings in tensor products of $*$-algebras, and another by V. Liebscher [3], who dealt only with monotone independence, but in a more concrete and explicit way. Section 5.2 uses Lenczewski's characterization of boolean independence, and Section 5.3 uses Liebscher's.

The last part of the present paper goes in the direction converse to that of the first. While the first part tackles the problem of deducing algebraic relations (the CAR) from statistical assumptions (the quantum algebra of each Bernoulli random variable), the second is devoted to looking for statistical implications of algebraic relations.

The idea is to understand which statistical information is hidden in the algebraic constraints characterizing different types of stochastic independence. We prove that the answer to this question naturally leads to several different extensions of the known Maxwell–Boltzmann, Bose–Einstein and Fermi–Dirac statistics. In Section 6, in particular in Subsections 6.1.1 and 6.1.2, we describe some of these extensions, and in Section 6.2 we apply Boltzmann's method to the new statistics to deduce the maximum entropy distribution on macro-configurations.

1.1. A brief reminder of the boson case

Let $V$ be a real vector space. Recall the identifications:

$$ \begin{equation*} \begin{aligned} \, &\text{classical }V\text{-valued random variable }X \\ &\qquad\equiv\text{linear map } X\colon v\in V \to X_v\in \{\text{random variables} \colon (\Omega, \mathcal{F},P)\to \mathbb{R}\} \\ &\qquad\equiv\text{classical random field over }V, \end{aligned} \end{equation*} \notag $$
where $(\Omega, \mathcal{F},P)$ is a probability space and $X_v$ denotes the component of $X$ along the vector $v$. In what follows, we restrict our attention to the case of $V$-valued random variables with all moments, i.e.,
$$ \begin{equation*} \int_{\mathbb{R}}|x|^n\, P(dx) < \infty\quad\forall\, n\in\mathbb{N}, \end{equation*} \notag $$
and we say that two such fields are moment equivalent if they have the same moments. From now on the term random field over $V$ will mean $V$-valued random field with all moments and moment equivalent fields are identified.

Recall that a classical algebraic probability space is a pair $(\mathcal{A}, \varphi)$, where $\mathcal{A}$ is a $*$-algebra and $\varphi$ is a state on $\mathcal{A}$. All random fields over $V$ can be concretely realized as functions on the same classical algebraic probability space defined as follows.

Let $V$ be a real vector space and denote by $V^*$ its algebraic dual. For $v\in V$, the coordinate function along $v\in V$ is defined by

$$ \begin{equation} X_v\colon u^*\in V^*\to X_v(u^*):=u^*(v)\in \mathbb{R}. \end{equation} \tag{1} $$
The $n$th powers ($n\in\mathbb{N}$) of the coordinate functions
$$ \begin{equation} X_v^n\colon u^*\in V^*\to (X_v(u^*))^n\in \mathbb{R},\qquad v\in V, \quad n\in \mathbb{N}, \end{equation} \tag{2} $$
are called monomial functions (or simply monomials) on $V$ of degree $n$. Denote
$$ \begin{equation} \mathcal{P}_{V} := \mathbb{C}-\text{linear span of the monomials on $V$}, \end{equation} \tag{3} $$
$\mathcal{P}_{V}$ is a commutative $*$-algebra for the point-wise operations, with involution given by complex conjugation, called the polynomial algebra on $V$.

Zero-degree monomials are identified with the identity of $\mathcal{P}_{V}$:

$$ \begin{equation} X_v^0 = 1_{\mathcal{P}_{V}}\quad \forall \, v\in V. \end{equation} \tag{4} $$

Definition 1. Let $\varphi$ be a state on $\mathcal{P}_{V}$. The pair $((\mathcal{P}_{V}, \varphi), (X_v)_{v\in V})$ is called a classical random field over $V$ with all moments.

Every classical random field over $V$ defines a quantization scheme, and this definition is canonical, i.e., it does not depend on artificial constructions but is naturally deduced from the sole assumption that all moments exist. In this sense one can say that the mathematical roots of quantization lie in classical probability. This statement is illustrated by the following two theorems (see [4] and the bibliography therein).

Theorem 1. Let $((\mathcal{P}_{V}, \varphi), (X_v)_{v\in V})$ be a classical random field over $V$ with all moments, identified with the linear map $v\mapsto X_v$. Then the $*$-algebra

$$ \begin{equation*} \mathcal{P}_{X} := \textit{the complex polynomial algebra in }\{X_v \colon v\in V\} \end{equation*} \notag $$
(for point-wise addition, multiplication and complex conjugation) is a semi-Hilbert space for the scalar product:
$$ \begin{equation*} \begin{aligned} \, \langle P(X),Q(X)\rangle &:= \int_{\Omega} \overline{P(X)(\omega)}Q(X)(\omega)\, P(d\omega) \\ &=: \varphi(P(X)^*Q(X))\quad\forall\, P(X),Q(X)\in \mathcal{P}_{X}, \end{aligned} \end{equation*} \notag $$
and, denoting
$$ \begin{equation*} \mathcal{P}_{X} \cdot\Phi_0,\qquad \Phi_0:= \textit{the constant function} = 1, \end{equation*} \notag $$
the $\varphi$-cyclic space associated with the classical algebraic probability space $(\mathcal{P}_{X}, \varphi)$, the space $\mathcal{P}_{X} \cdot\Phi_0$ admits a canonical gradation given by
$$ \begin{equation*} \mathcal{P}_{X} \cdot\Phi_0 = \bigoplus_{n\in \mathbb{N}}(\mathcal{P}_{X;n}, \langle \,{\cdot}\,,{\cdot}\, \rangle_{n}), \end{equation*} \notag $$
where $\mathcal{P}_{X;n}$ is the space of orthogonal polynomials of the process $X$. Moreover, each classical random variable $X_v$ ($v\in V$), considered as a multiplication operator on $\mathcal{P}_{X} \cdot\Phi_0$, has the following canonical quantum decomposition:
$$ \begin{equation} X_v = a^+_v + a^0_v + a^-_v, \end{equation} \tag{5} $$
where the CAP operators $a^+_v$, $a^0_v$, $a^-_v$ (creation, preservation, annihilation) enjoy the following properties:
$$ \begin{equation*} \begin{gathered} \, (a^+_v)^* = a^-_v,\qquad (a^0_v)^* = a^0_v, \\ a^{\pm}_v(\mathcal{P}_{X;n}) \subseteq\mathcal{P}_{X;n\pm 1},\qquad a^0_v (\mathcal{P}_{X;n}) \subseteq\mathcal{P}_{X;n}, \\ a^-_v\Phi_0=0\quad\textit{(Fock property)} \end{gathered} \end{equation*} \notag $$
and satisfy the following (Type II) commutation relations: for any $u,v\in V$,
$$ \begin{equation} [a^+_{u},a^+_v] = 0 \quad \textit{(creators commute)}, \end{equation} \tag{6} $$
$$ \begin{equation} [a^+_{u},a^-_v] + [a^0_{u},a^0_v] + [a^-_{u},a^+_v] = 0, \end{equation} \tag{7} $$
$$ \begin{equation} [a^+_{u},a^0_v] + [a^0_{u},a^+_v] = 0. \end{equation} \tag{8} $$

Remark 1. The CAP operators also satisfy another set of commutation relations (Type I), which are more similar to the usual quantum mechanical ones, but in this paper we will not discuss them.

Theorem 2. Let $V$ be a real vector space (not necessarily finite dimensional), $V_{\mathbb{C}}\equiv V\mathbin{\dot{+}}iV$ its complexification, and $\langle \,{\cdot}\,,{\cdot}\, \rangle_{V_{\mathbb{C}}}$ a semiscalar product on $V_{\mathbb{C}}$, real valued on $V$ and not identically equal to zero. For a symmetric random field $X$ over $V$, the following statements are equivalent:

(i) the CAP operators $a^+$, $a^-$ associated with $X$ satisfy the Heisenberg commutation relations

$$ \begin{equation} [a_v,a^+_u]= \langle v,u\rangle_{V_{\mathbb{C}}} \cdot 1; \end{equation} \tag{9} $$

(ii) $X$ is the standard unit classical Gaussian field over the real Hilbert space $(\overline{V}, \langle \,{\cdot}\,, {\cdot}\,\rangle_{\overline{V}})$ obtained by completing $V$ with respect to the semiscalar product obtained by restriction to $V$ of the semiscalar product on $V_{\mathbb{C}}$;

(iii) there is an isomorphism from the closure, in the $L^2$-space of $X$, of the algebra of polynomials in $X$ to the Boson Fock space $\Gamma(V_{\mathbb{C}}, \langle \,{\cdot}\,, {\cdot}\,\rangle_{V_{\mathbb{C}}})$ which intertwines the corresponding vacuum states and creation-annihilation operators.

The two theorems stated above prove and clarify the statement made at the beginning of this section, namely:

– the combination of classical probability with the theory of orthogonal polynomials naturally leads to a generalized quantization procedure applicable to all classical random variables with all moments;

– Boson quantum mechanics is obtained by restricting the general probabilistic quantization to the class of Gaussian measures.

1.2. A problem about Fermi systems

Suggested by the above discussion of the boson case, the following question naturally arises: what about fermions? More explicitly, once it is understood that the Heisenberg commutation relations follow from a simple probabilistic requirement (existence of all moments) applied to a particular family of classical probability measures (the Gauss–Poisson class), can we find a similar family of classical probability measures and a simple probabilistic requirement from which to deduce the Fermi anti-commutation relations?

Definition 2. Let $D$ be a set. The Fermi algebra over $D$ is the $C^*$-algebra, denoted $\operatorname{CAR}(D)$, with generators $\{a_j, a^+_j\}_{j\in D}$ and relations

$$ \begin{equation} (a_j^+)^*=a^+_j,\quad \{a^+_j,a_k\}=\delta_{jk},\quad \{a_j,a_k\}=\{a^+_j,a^+_k\}=0,\qquad j,k\in D, \end{equation} \tag{10} $$
where $\{\,{\cdot}\,,{\cdot}\,\}$ denotes the anti-commutator defined by
$$ \begin{equation} \{a,b\} := ab+ba \quad (\, = \{b,a\}). \end{equation} \tag{11} $$
The relations (10) are called Fermi (or canonical) anti-commutation relations.

This problem remained open for many years; in recent years a natural solution finally emerged. A goal of the present paper is to describe a positive answer to the question above.

Remark 2. The solution of the above problem needs new ideas with respect to the probabilistic quantization scheme described by Theorems 1 and 2.

In fact, Theorem 1 shows that in the probabilistic quantization scheme the CAP operators must satisfy the Type II commutation relations. In particular, (6) implies that

$$ \begin{equation} [a^+_{u},a^+_v] = 0 \quad (\textit{creators commute}) \quad \forall\, u,v\in V. \end{equation} \tag{12} $$
But we know from (10) that, in the Fermi case, creators anti-commute. Therefore, if $\operatorname{dim}(V)\geqslant 2$ (i.e., if the number of fermions is not smaller than $2$), fermions cannot be included in the probabilistic quantization scheme. In conclusion: a theory of Fermi systems with more than $1$ degree of freedom cannot be included in the probabilistic quantization scheme.

§ 2. Fermi systems with $1$ degree of freedom

The case $\operatorname{dim}(V) = 1$ cannot be covered by Remark 2 because in this case the Type II commutation relations are identically satisfied. Therefore, it is natural to ask oneself: can the probabilistic quantization scheme give some information at least in the case of a single fermion?

It is known that, in the Fock representation, the vacuum distribution of a Fermi field operator with $1$ degree of freedom is that of a mean zero classical Bernoulli random variable.

Definition 3. The support of a random variable is the support of its probability distribution. In particular, the support of a discrete real valued random variable (the only ones considered in this paper) is the set of values it assumes with non-zero probability.

2.1. The canonical quantum decomposition of a classical generalized Bernoulli random variable implies the Fermi anti-commutation relations for $1$ degree of freedom

Definition 4. Any symmetric classical random variable $X_1$ with support $\{-s,+s\}$ ($s\in\mathbb{R}$, $s>0$) is called a Bernoulli random variable with symmetric range. If the support of the classical random variable $X_1$ consists of two arbitrary different complex numbers $\{s_1,s_2\}$, one speaks of a generalized Bernoulli random variable.

In what follows, we only consider real valued random variables.

Theorem 3. The CAP operators in the canonical quantum decomposition of any classical generalized Bernoulli random variable $X$

$$ \begin{equation} X=a^+ + a^0 + a \end{equation} \tag{13} $$
are characterized by
$$ \begin{equation} a\Phi_0=0, \qquad a\Phi_1=\omega_1\Phi_0, \end{equation} \tag{14} $$
$$ \begin{equation} a^+\Phi_0=\Phi_1,\qquad a^+\Phi_1=0, \end{equation} \tag{15} $$
$$ \begin{equation} a^0\Phi_0=\alpha_0\Phi_0,\qquad a^0\Phi_1=\alpha_1\Phi_1, \end{equation} \tag{16} $$
where $\{\Phi_0, \Phi_1\}$ are the monic orthogonal polynomials of $X$, $\alpha_0$ is the mean of $X$, $\omega_1$ the variance and $\alpha_1=E(X(X-\alpha_0)^2)/ \omega_1$, $E$ denoting expectation with respect to the probability distribution of $X$.

In particular, $a$ and $a^+$ satisfy the Fermi anti-commutation relations (CAR relations) for $1$ degree of freedom:

$$ \begin{equation} \{a,a\} = \{a^+,a^+\}= 0, \end{equation} \tag{17} $$
$$ \begin{equation} aa^+ + a^+a = \{a,a^+\} = \omega_1. \end{equation} \tag{18} $$

Proof. Let $\mu$ denote the probability distribution of $X$ and $\{s_1, s_2\}\subset \mathbb{R}$ be its support. Then $L^2(\mathbb{R}, \mu)$ is identified with the set of functions $ f\colon\{s_1,s_2\}\to\mathbb{C}$ with scalar product
$$ \begin{equation*} \langle f_1, f_2\rangle =\overline{f}_1(s_1) f_2(s_1)\mu_{s_1} +\overline{f}_1(s_2)f_2(s_2)\mu_{s_2}. \end{equation*} \notag $$
The functions $\delta_{s_1}:=\chi_{\{s_1\}}$, $\delta_{s_2}:=\chi_{\{s_2\}}$ are an orthogonal basis of $L^2(\mathbb{R}, \mu)$ because
$$ \begin{equation*} \langle \delta_{s_1}, \delta_{s_2}\rangle =\delta_{s_1}(s_1)\delta_{s_2}(s_1) \mu_{s_1}+\delta_{s_1}(s_2) \delta_{s_2}(s_2)\mu_{s_2}=0, \end{equation*} \notag $$
and their squared norms are given by
$$ \begin{equation*} \|\delta_{s_j}\|^2=\mu_{s_j},\qquad j=1,2, \end{equation*} \notag $$
therefore, the vectors
$$ \begin{equation*} e_j:=\frac{\delta_{s_j}}{\sqrt{\mu_{s_j}}},\qquad j=1,2, \end{equation*} \notag $$
are an ortho-normal basis of $L^2(\mathbb{R}, \mu)$. Notice that
$$ \begin{equation*} 1=\delta_{s_1}+ \delta_{s_2}\quad (\mu\text{-a.e.}) \end{equation*} \notag $$
because
$$ \begin{equation*} \|1-(\delta_{s_1}+ \delta_{s_2})\|^2 =\bigl(1-(\delta_{s_1}+\delta_{s_2})(s_1)\bigr)^2\mu_{s_1} + \bigl(1-(\delta_{s_1}+\delta_{s_2})(s_2) \bigr)^2\mu_{s_2}=0. \end{equation*} \notag $$
Defining
$$ \begin{equation*} \Phi_0:=\delta_{s_1}+ \delta_{s_2} = 1 \quad (\mu\text{-a.e.}), \end{equation*} \notag $$
we know that
$$ \begin{equation} \Phi_1:=X\Phi_0-\alpha_0\Phi_0 \perp\Phi_0. \end{equation} \tag{19} $$
Therefore, the set $\{\Phi_0, \Phi_1\}$ is the monic orthogonal polynomial basis of $L^2(\mathbb{R}, \mu)$. Denote by $(a^+, a^0, a)$ the CAP operators of $X$. By their definition,
$$ \begin{equation*} a\Phi_0=0,\quad a^+\Phi_0=\Phi_1, \qquad\langle \Phi_i,a^+\Phi_j\rangle=\langle a\Phi_i, \Phi_j\rangle \quad \forall\, i,j\in\{0,1\}. \end{equation*} \notag $$
So one gets the first identities in (14) and (15). Moreover, by definition of the annihilator, $a\Phi_1\in\mathbb{C}\cdot \Phi_0$, i.e., $a\Phi_1=x\Phi_0$ with $x\in\mathbb{C}$. Since
$$ \begin{equation*} \langle a\Phi_1,\Phi_0\rangle =\overline{x}\|\Phi_0\|^2 =\langle \Phi_1, a^+\Phi_0\rangle=\|\Phi_1\|^2=\omega_1, \end{equation*} \notag $$
it follows that $x=\overline{x}=\omega_1$ and $a\Phi_1=\omega_1\Phi_0$, which is the second identity in (14). By definition of the creator
$$ \begin{equation*} a^+\Phi_0=\Phi_1,\qquad a^+\Phi_1 \perp \Phi_0, \end{equation*} \notag $$
and since
$$ \begin{equation*} \langle a^+\Phi_1,\Phi_1\rangle=\langle \Phi_1, a\Phi_1\rangle =\langle \Phi_1, \omega_1\Phi_0\rangle =0, \end{equation*} \notag $$
one has also $a^+\Phi_1 \perp \Phi_1$; therefore, $a^+\Phi_1=0$, which is the second identity in (15). The Jacobi relations imply that $X\Phi_0 = \Phi_1 + \alpha_0\Phi_0$, and the first identity in (16) follows by taking the scalar product of both sides of this identity with $\Phi_0$. Finally, again using the Jacobi relations, one deduces that
$$ \begin{equation*} X\Phi_1 = \Phi_2 + \alpha_1\Phi_1 + \omega_1\Phi_0 \stackrel{\mu\text{-a.e.}}{=} \alpha_1\Phi_1 + \omega_1\Phi_0 \end{equation*} \notag $$
(the polynomial $\Phi_2$ vanishes $\mu$-a.e. because $L^2(\mathbb{R},\mu)$ is $2$-dimensional).
Therefore,
$$ \begin{equation*} \begin{aligned} \, \alpha_1\omega_1 &= \alpha_1\langle\Phi_1, \Phi_1 \rangle = \langle\Phi_1, X\Phi_1\rangle \stackrel{(19)}{=} \langle (X-\alpha_0)\Phi_0,X(X-\alpha_0)\Phi_0\rangle \\ &= \langle \Phi_0,X(X-\alpha_0)^2\Phi_0\rangle = E(X(X-\alpha_0)^2), \end{aligned} \end{equation*} \notag $$
which is the second identity in (16). From the identities (14)–(16), one deduces that
$$ \begin{equation*} a^2\Phi_1 = \omega_1 a\Phi_0=0,\qquad a^2\Phi_0 = 0 \quad \Longleftrightarrow\quad 2 a^2 = \{a,a\}=0, \end{equation*} \notag $$
which is equivalent to (17). Similarly,
$$ \begin{equation*} aa^+\Phi_0= a\Phi_1 = \omega_1\Phi_0,\qquad a^+a\Phi_1 = \omega_1a^+\Phi_0 = \omega_1\Phi_1, \end{equation*} \notag $$
which is equivalent to (18). $\Box$
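As a concrete illustration of Theorem 3 (added here; not part of the original argument), take the standard Bernoulli random variable with support $\{0,1\}$ and $P(X=1)=p\in(0,1)$. Then
$$ \begin{equation*} \alpha_0 = E(X) = p,\qquad \omega_1 = \operatorname{Var}(X) = p(1-p),\qquad \alpha_1 = \frac{E(X(X-p)^2)}{p(1-p)} = \frac{p(1-p)^2}{p(1-p)} = 1-p, \end{equation*} \notag $$
since only the point $x=1$ contributes to $E(X(X-p)^2)$. In particular, $\{a,a^+\}=p(1-p)$, while for the symmetric case with support $\{-1,+1\}$ one gets $\alpha_0=\alpha_1=0$ and $\{a,a^+\}=\omega_1=1$.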

Remark 3. The relation $ \{a^+_j,a_k\}=\delta_{j,k}$ (compare (10) with (18)) can be, and in many cases is, replaced by $ \{a^+_j,a_k\}=\omega_j\delta_{j,k}$, where the $\omega_j$ are strictly positive numbers. Mathematically the change is trivial because it can be reabsorbed into the definition of the $a^{\varepsilon}_j$ ($\varepsilon\in\{+,-\}$). However, the probabilistic interpretation is important because it shows that (for symmetric $X$) the anti-commutator $ \{a^+_j,a_j\}$ is given by the Fock vacuum variance $\omega_j$ of the classical random variable $X_j=a^+_j+a_j$ (field variable). Notice that the $1$-dimensional version of the boson commutation relations (9) leads to the same statement with the commutator replacing the anti-commutator.

Remark 4. Notice that

$$ \begin{equation*} \langle \delta_{s_1}+ \delta_{s_2}, \delta_{s_1}- \delta_{s_2}\rangle=\|\delta_{s_1}\|^2- \|\delta_{s_2}\|^2=\mu_{s_1}-\mu_{s_2}, \end{equation*} \notag $$
so $\delta_{s_1}+ \delta_{s_2}$ and $\delta_{s_1}- \delta_{s_2}$ are orthogonal only when $\mu$ is the uniform probability measure on $\{s_1, s_2\}$.

Remark 5. The following proposition shows that the standard representation of the CAR in terms of Pauli matrices also follows from the orthogonal polynomial approach.

Proposition 1. The matrices, in the $(\Phi_0,\Phi_1)$-basis, of the CAP operators of the classical generalized Bernoulli random variable $X$ described in Theorem 3 are:

$$ \begin{equation} \begin{cases} A^+=\sqrt{\omega_1}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} &(\textit{creator matrix}), \\ A=\sqrt{\omega_1}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} &(\textit{annihilator matrix}), \\ A^0=\begin{pmatrix} \alpha_0 & 0 \\ 0 & \alpha_1 \end{pmatrix} &(\textit{preservator matrix}). \end{cases} \end{equation} \tag{20} $$
In particular,
$$ \begin{equation} X = \begin{pmatrix} \alpha_0 & \sqrt{\omega_1} \\ \sqrt{\omega_1} & \alpha_1 \end{pmatrix} \quad (\textit{generalized field operator}). \end{equation} \tag{21} $$

Proof. The map
$$ \begin{equation*} x\Phi_0+ y\Phi_1\to \begin{pmatrix} x \\ y \end{pmatrix}\in\mathbb{C}^2 \end{equation*} \notag $$
is a vector space isomorphism characterized by
$$ \begin{equation} \Phi_0\to\begin{pmatrix} 1 \\ 0 \end{pmatrix} =:e_0,\qquad \Phi_1\to \begin{pmatrix} 0 \\ 1 \end{pmatrix}=:\sqrt{\omega_1}\, e_1, \end{equation} \tag{22} $$
i.e.,
$$ \begin{equation} e_i:={\omega_1}^{-i/2}\Phi_i\quad \forall \, i\in\{0,1\}. \end{equation} \tag{23} $$
Then, by virtue of the identities $\langle \Phi_0,\Phi_0\rangle =1$ and $\langle \Phi_1,\Phi_1\rangle =\omega_1$, one obtains $\langle e_0,e_0\rangle =\langle e_1,e_1 \rangle =1$. Let us define, for $\varepsilon\in\{+,0,-\}$,
$$ \begin{equation} a^{\varepsilon}_{i,j}:={\omega_1}^{-(i+j)/2}\langle \Phi_i, a^{\varepsilon}\Phi_j\rangle, \qquad i,j\in\{0,1\}, \end{equation} \tag{24} $$
Then the matrix of the image of $a^{\varepsilon}$ under the isomorphism (22) (denoted by the same symbol $a^{\varepsilon}$) is
$$ \begin{equation*} a^{\varepsilon}=\sum_{i,j}e_{i,i}a^{\varepsilon}e_{j,j} =\sum_{i,j}\langle e_i, a^{\varepsilon}e_j\rangle e_{i,j}, \end{equation*} \notag $$
where $(e_{i,j})$ is the system of matrix units in the $2\times 2$ complex matrices defined by the basis (22), i.e., $e_{i,j}:=e_ie_j^*$, and (24) becomes
$$ \begin{equation*} a^{\varepsilon}_{i,j}= \langle e_i, a^{\varepsilon}e_j\rangle\quad \forall\, i,j\in\{0,1\}. \end{equation*} \notag $$
Since
$$ \begin{equation*} \begin{aligned} \, a_{0,0} &=\langle e_0,a e_0\rangle=\langle \Phi_0,a \Phi_0\rangle=0, \\ a_{0,1} &=\langle e_0,a e_1\rangle ={\omega_1}^{-1/2} \langle \Phi_0,a \Phi_1\rangle=\sqrt{\omega_1}, \\ a_{1,0} &=\langle e_1,a e_0\rangle={\omega_1}^{-1/2} \langle \Phi_1,a \Phi_0\rangle=0, \\ a_{1,1} &=\langle e_1, a e_1\rangle={\omega_1}^{-1}\langle \Phi_1, a\Phi_1\rangle=0, \end{aligned} \end{equation*} \notag $$
one has
$$ \begin{equation*} A\equiv(a_{i,j})=\begin{pmatrix} 0 & \sqrt{\omega_1} \\ 0 & 0 \end{pmatrix} =\sqrt{\omega_1}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \end{equation*} \notag $$
which is the second identity in (20). Similarly, from
$$ \begin{equation*} \begin{aligned} \, a^+_{0,0} &=\langle e_0,a^+ e_0\rangle=\langle\Phi_0,a^+ \Phi_0\rangle=0, \\ a^+_{0,1} &=\langle e_0,a^+ e_1\rangle ={\omega_1}^{-1/2} \langle \Phi_0,a^+ \Phi_1\rangle=0, \\ a^+_{1,0} &=\langle e_1,a^+ e_0\rangle={\omega_1}^{-1/2} \langle \Phi_1,a^+ \Phi_0\rangle=\sqrt{\omega_1}, \\ a^+_{1,1} &=\langle e_1, a^+e_1\rangle={\omega_1}^{-1}\langle \Phi_1, a^+\Phi_1\rangle=0 \end{aligned} \end{equation*} \notag $$
it follows that
$$ \begin{equation*} A^+\equiv(a_{ij}^+)=\begin{pmatrix} 0 & 0 \\ \sqrt{\omega_1} & 0 \end{pmatrix} =\sqrt{\omega_1}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \end{equation*} \notag $$
which is the first identity in (20). Clearly
$$ \begin{equation*} \begin{gathered} \, A^2=(A^+)^2=0, \\ AA^+=\omega_1\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad A^+A=\omega_1\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \\ \{A,A^+\}=AA^++ A^+A=\omega_1> 0. \end{gathered} \end{equation*} \notag $$
Finally, $a^0$ is diagonal in the $(\Phi_0,\Phi_1)$-basis and one has
$$ \begin{equation*} A^0=\begin{pmatrix} \alpha_0 & 0 \\ 0 & \alpha_1 \end{pmatrix}, \end{equation*} \notag $$
which is the third identity in (20). $\Box$
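The matrices (20), (21) and the relations (17), (18) are easy to verify numerically. The following sketch (ours, not part of the paper; the parameter value $p=0.3$ is illustrative) checks them for a Bernoulli variable with support $\{0,1\}$ and $P(X=1)=p$, for which $\alpha_0=p$, $\omega_1=p(1-p)$, $\alpha_1=1-p$:

```python
import numpy as np

# Illustrative data: Bernoulli variable with support {0,1} and P(X=1) = p.
p = 0.3
alpha0, omega1, alpha1 = p, p * (1 - p), 1 - p

# CAP matrices in the (Phi_0, Phi_1)-basis, cf. (20)
Ap = np.sqrt(omega1) * np.array([[0., 0.], [1., 0.]])   # creator
A  = np.sqrt(omega1) * np.array([[0., 1.], [0., 0.]])   # annihilator
A0 = np.diag([alpha0, alpha1])                          # preservator

anti = lambda x, y: x @ y + y @ x                       # anti-commutator {x, y}

assert np.allclose(anti(A, A), 0)                       # (17): {a, a} = 0
assert np.allclose(anti(Ap, Ap), 0)                     # (17): {a+, a+} = 0
assert np.allclose(anti(A, Ap), omega1 * np.eye(2))     # (18): {a, a+} = omega_1

X = Ap + A0 + A                                         # field operator, cf. (21)
assert np.allclose(X, [[alpha0, np.sqrt(omega1)], [np.sqrt(omega1), alpha1]])
```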

§ 3. The Fermi anti-commutation relations

3.1. A basic principle of classical probability

If $\operatorname{dim}(V) =: d > 1$ ($d\in\mathbb{N}\cup\{+\infty\}$), a classical field $X$ over $V$ describes a system with $d$ degrees of freedom. In particular, one can think of $X$ as describing a system of $d$ particles. A classical random field is a probabilistic description of such a system. Definition 1 implicitly contains a basic principle of classical probability, namely:

all constraints among the particles described by $X$ (interactions) are included into the state $\varphi$.

For example, tensor independence corresponds to absence of constraints among the particles. If $\operatorname{dim}(V) = 2$, $\{e_1, e_2\}$ is a linear basis of $V$ and $X_{e_1}$, $X_{e_2}$ are subject to the constraint $X_{e_1}^2 + X_{e_2}^2 = 1$, the coordinate functions (1) do not satisfy this constraint, but in classical probability this constraint is expressed by the fact that the support of any measure associated to $\varphi$ must be contained in the unit circle.

The basic principle of classical probability just formulated has the strong limitation of binding the model to a single state $\varphi$. In quantum theory one distinguishes between two types of constraints:

$\bullet$ kinematical (i.e., purely algebraic) constraints,

$\bullet$ statistical constraints.

Moreover, there are plenty of examples in which the kinematical constraints uniquely determine the state $\varphi$. The best known of these examples is given by the Fock states.

This leads us to the yin of quantum probability: algebra implies statistics, which will be the core of the second part of this paper. The yang of quantum probability: statistics implies algebra, is exemplified by Theorems 1 and 2 (and many other illustrations are available).

The above mentioned distinction between algebraic and statistical constraints suggests splitting the definition of a random field into two steps:

(i) definition of a linear map $X\colon v\in V \to X_v\in \mathcal{A}$, where $\mathcal{A}$ is a $*$-algebra,

(ii) introduction of a state $\varphi$ on $\mathcal{A}$.

In this approach, some constraints are introduced as algebraic relations among different $X_v$ or the associated CAP operators, independently of any state (see Definition 5 for an example).

In what follows, we will introduce this approach also in classical probability starting from the special case of fermions.

3.2. Algebraic symmetric Bernoulli fields

Definition 5. Let $\mathcal{A}$ be a $*$-algebra and $V$ a real vector space. An algebraic symmetric Bernoulli field is a linear map $X\colon V \to \mathcal{A}$ with the following properties:

(i) for each $v\in V$, there exist $a^+_v, a_v\in \mathcal{A}$ such that

$$ \begin{equation} X_v = a^+_v + a_v,\qquad (a^+_v)^* = a_v; \end{equation} \tag{25} $$

(ii) the $*$-algebra generated by $a^{\pm}_v$ is $*$-isomorphic to the $*$-algebra generated by the CAP operators of a Bernoulli random variable with symmetric range $\{-s_v, s_v\}$, where $s_v > 0 $.

Remark 6. Definition 5 implies that, for fixed $v\in V$, the operators $a^+_v$, $a_v$ in (25) enjoy all the properties enjoyed by the CAP operators of the corresponding Bernoulli random variable with symmetric range. In particular, they satisfy the relations (17) and (18) with $\omega_1=1$.

3.3. The condition $\{a_{e_i},a_{e_j}\} =0$

Lemma 1. Let $V$ be a vector space and $X\colon V \to \mathcal{A}$ be an algebraic symmetric Bernoulli field. Then

$$ \begin{equation} a_v^2=(a_v^+)^2=0\quad\forall\, v\in V \end{equation} \tag{26} $$
and
$$ \begin{equation} \{a_{u},a_v\} =0\quad\forall\, u,v\in V. \end{equation} \tag{27} $$

Proof. Let (25) be the quantum decomposition of $X_v$. By definition of algebraic symmetric Bernoulli field, $a_v$ is the image, under a $*$-isomorphism, of a nilpotent operator, hence it has the same property. This proves (26). Consequently, for all $u,v\in V$,
$$ \begin{equation*} 0= a_{u+v}^2 = (a_{u} + a_v)^2 = a_{u}^2 + a_v^2 + a_{u}a_v + a_va_{u} =\{a_{u}, a_v\}. \end{equation*} \notag $$
$\Box$

Remark 7. It is a crucial point that $X\colon V \to \mathcal{A}$ is a linear map. In general, given $\{a_i, a^+_i,a_j,a^+_j\}$ with $i\ne j$, the relations

$$ \begin{equation*} (a^+_i)^2=a_i^2=(a^+_j)^2=a_j^2=0,\qquad \{a^+_i,a_j\} =\delta_{i,j} \end{equation*} \notag $$
do not imply $\{a_i,a_j\} =0$ for $i\neq j$. The following lemma gives a concrete example.

Lemma 2. The two matrices

$$ \begin{equation*} A_1=\frac{1}{2}\begin{pmatrix} 1 & i\\ i & -1 \end{pmatrix},\qquad A_2=\frac{1}{2}\begin{pmatrix} -1 & i \\ i & 1 \end{pmatrix} \end{equation*} \notag $$
are nilpotent, i.e.,
$$ \begin{equation} A_1^2=A_2^2=0 \end{equation} \tag{28} $$
and, by denoting $A_h^+:=A_h^*$, satisfy
$$ \begin{equation*} \{ A_k,A_h^+\} = \delta_{k,h}. \end{equation*} \notag $$
However,
$$ \begin{equation} \{A_1,A_2\} =-\mathbf{1}. \end{equation} \tag{29} $$

Proof. Relation (28) follows by a simple calculation. Moreover,
$$ \begin{equation} A_1A_1^+ =A_1A_1^{\ast} =\frac{1}{4}\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}\begin{pmatrix} 1 & -i \\ -i & -1 \end{pmatrix} =\frac{1}{2}\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix},\qquad A_1^+A_1 =\frac{1}{2}\begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}, \end{equation} \tag{30} $$
$$ \begin{equation} A_2A_2^+ =A_2A_2^{\ast} =\frac{1}{4}\begin{pmatrix} -1 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} -1 & -i \\ -i & 1 \end{pmatrix} =\frac{1}{2}\begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix},\qquad A_2^+A_2 =\frac{1}{2}\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix}, \end{equation} \tag{31} $$
hence
$$ \begin{equation} \{A_k,A_k^+\} =\mathbf{1}\quad \forall\, k=1,2. \end{equation} \tag{32} $$
Secondly,
$$ \begin{equation} A_1A_2^+ =A_1A_2^{\ast} =\frac{1}{4}\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix} \begin{pmatrix} -1 & -i \\ -i & 1 \end{pmatrix}=\mathbf{0}, \end{equation} \tag{33} $$
$$ \begin{equation} A_2A_1^+=(A_1A_2^{\ast})^{\ast}=\mathbf{0}, \end{equation} \tag{34} $$
$$ \begin{equation} A_2^+A_1 =A_2^{\ast}A_1=\frac{1}{4}\begin{pmatrix} -1 & -i \\ -i & 1 \end{pmatrix}\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}=\mathbf{0},\qquad A_1^+A_2=(A_2^{\ast}A_1)^{\ast}=\mathbf{0}, \nonumber \end{equation} \notag $$
and so
$$ \begin{equation} \{ A_k,A_h^+\} =\mathbf{0}\quad \forall\, k\neq h. \end{equation} \tag{35} $$
Therefore, we have the Pauli exclusion principle (i.e., (28)) and $\{A_k,A_h^+\} =\delta_{k,h}\mathbf{1}$ (i.e., (32) and (35)).

Finally, (29) follows from the following equalities:

$$ \begin{equation*} \begin{aligned} \, A_1A_2 &=\frac{1}{4}\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}\begin{pmatrix} -1 & i \\ i & 1 \end{pmatrix}=\frac{1}{4} \begin{pmatrix} -2 & 2i \\ -2i & -2 \end{pmatrix}, \\ A_2A_1 & =\frac{1}{4}\begin{pmatrix} -1 & i \\ i & 1 \end{pmatrix}\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}=\frac{1}{4} \begin{pmatrix} -2 & -2i \\ 2i & -2 \end{pmatrix}. \end{aligned} \end{equation*} \notag $$
$\Box$
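A direct numerical check of Lemma 2 (our sketch, added for convenience):

```python
import numpy as np

A1 = 0.5 * np.array([[1, 1j], [1j, -1]])
A2 = 0.5 * np.array([[-1, 1j], [1j, 1]])
anti = lambda x, y: x @ y + y @ x
I = np.eye(2)

assert np.allclose(A1 @ A1, 0) and np.allclose(A2 @ A2, 0)  # (28): nilpotency
assert np.allclose(anti(A1, A1.conj().T), I)                # (32): {A_1, A_1^+} = 1
assert np.allclose(anti(A2, A2.conj().T), I)                # (32): {A_2, A_2^+} = 1
assert np.allclose(anti(A1, A2.conj().T), 0)                # (35): {A_1, A_2^+} = 0
assert np.allclose(anti(A1, A2), -I)                        # (29): {A_1, A_2} = -1
```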

3.4. Pauli exclusion principle for a family of classical Bernoulli random variables

Definition 6. One says that a quantum decomposition $X_i=a^+_i+a_i$ $(i\in V)$ satisfies

$\bullet$ the weak Pauli exclusion principle if

$$ \begin{equation} \begin{gathered} \, (a_i^+)^2=0\quad\forall\, i\in V, \notag \\ a_i^+ a_j^\epsilon a_i^+=0\quad\forall\, \epsilon\in\{-,+\} \text{ and different } i,j\in V, \end{gathered} \end{equation} \tag{36} $$

$\bullet$ the Pauli exclusion principle if, for any $n\in\mathbb{N}$ and $\varepsilon:=(\varepsilon(1),\dots,\varepsilon(n))\in\{-,+\}^n$,

$$ \begin{equation} a^{\varepsilon(n)}_{j_{n}} \cdots a^{\varepsilon(k)}_{j_k} \cdots a^{\varepsilon(h)}_{j_h} \cdots a^{\varepsilon(1)}_{j_1} = 0, \end{equation} \tag{37} $$
whenever there exists a pair of indices $h<k$ in $\{1,\dots,n\}$ such that
$$ \begin{equation*} \begin{cases} j_h=j_k \quad\text{and}\quad \varepsilon(h)=\varepsilon(k)=+, \\ j_m\ne j_h\quad \forall\, m\in (h,k):=\{h+1,\dots,k-1\}. \end{cases} \end{equation*} \notag $$

Remark 8. Condition (37) expresses the fact that processes which include creation of two particles in the same state are impossible.

Remark 9. Taking adjoints of both sides of (37), one sees that, if the Pauli exclusion principle is satisfied, processes which include either creation or annihilation of two particles in the same state are impossible. In other words, (37) is equivalent to

$$ \begin{equation} a^{\varepsilon(n)}_{j_{n}}\cdots a^{\varepsilon(m)}_{j_m} \cdots a^{\varepsilon(m)}_{j_m}\cdots a^{\varepsilon(1)}_{j_1} = 0. \end{equation} \tag{38} $$
Similarly, (36) is equivalent to
$$ \begin{equation} \begin{gathered} \, (a_i^\varepsilon)^2=0\quad \forall\, \varepsilon\in\{-,+\} \text{ and } i\in V, \notag \\ a_i^\varepsilon a_j^\epsilon a_i^\varepsilon=0\quad \forall\, \varepsilon,\epsilon\in\{-,+\} \text{ and different } i,j\in V. \end{gathered} \end{equation} \tag{39} $$
Notice that this condition is purely algebraic, i.e., independent of any state.

3.5. The condition on $X_v^2$

Proposition 2. A generalized Bernoulli random variable $X$ with range $\{x_-, x_+\}$ and probabilities $\{p_-, p_+\}\subset (0,1)$ is symmetric (i.e., its odd moments vanish) iff

$$ \begin{equation} x_- = - x_+\quad\textit{and}\quad p_+ = p_- = \frac12. \end{equation} \tag{40} $$

Proof. By definition, $X$ is symmetric iff
$$ \begin{equation} \begin{aligned} \, &0 = E(X^{2n+1}) \quad\Longleftrightarrow\quad x_-^{2n+1}p_-+x_+^{2n+1}p_+=0 \nonumber \\ &\Longleftrightarrow\quad -x_-^{2n+1}p_-=x_+^{2n+1}p_+\qquad\forall \, n\in\mathbb{N}. \end{aligned} \end{equation} \tag{41} $$
Since, by assumption, $p_-, p_+\neq 0$ and $x_-\neq x_+$, (41) implies that $x_-$ and $x_+$ must both be non-zero. In this case, (41) is equivalent to
$$ \begin{equation} -\biggl(\frac{x_-}{x_+}\biggr)^{2n+1} = \frac{p_+}{p_-}\quad\forall\, n\in\mathbb{N}, \end{equation} \tag{42} $$
and this equality holds iff $x_- = - x_+$ and $p_+ = p_- = 1/2$. $\Box$

The following proposition characterizes the generalized classical Bernoulli random variables which satisfy only the first identity in (40).

Proposition 3. A generalized classical Bernoulli random variable $X$ with distribution $\mu$ has symmetric range in the sense of Definition 4 iff, for some constant $s\in\mathbb{R}$,

$$ \begin{equation} X^2=s^2. \end{equation} \tag{43} $$

Proof. $X$ is a Bernoulli random variable with values in $\{-s,+s\}$ iff it has the form
$$ \begin{equation} X=s\bigl(\chi_{\{+s\}}- \chi_{\{-s\}}\bigr) \quad(\mu\text{-a.e.}). \end{equation} \tag{44} $$
Therefore,
$$ \begin{equation*} X^2=s^2\bigl(\chi_{\{+s\}}^2+\chi_{\{-s\}}^2\bigr) =s^2\bigl(\chi_{\{+s\}}+\chi_{\{-s\}}\bigr)=s^2. \end{equation*} \notag $$
Conversely, if
$$ \begin{equation*} X=s_+\chi_{\{s_+\}}+ s_-\chi_{\{s_-\}} \end{equation*} \notag $$
is a generalized Bernoulli random variable,
$$ \begin{equation*} X^2=s^2_+\chi_{\{s_+\}}+ s^2_-\chi_{\{s_-\}}. \end{equation*} \notag $$
Since $\chi_{\{s_+\}}$ and $\chi_{\{s_-\}}$ are linearly independent, the identity (43) holds iff
$$ \begin{equation*} s^2=s^2_+=s_-^2. \end{equation*} \notag $$
Thus, $s_+=\pm s$, $s_-=\pm s$. Combining these remarks with the fact that $s_+\ne s_-$, we obtain that $X$ must have the form (44), so it has symmetric range. $\Box$

Definition 7. A linear basis $e\equiv(e_j)_{j\in D}$ of $V$ is said to be $X$-algebraically normalized if, for any $j\in D$, $X_{e_j}^2=1$.

Proposition 4. Any linear basis $e\equiv(e_j)_{j\in D}$ of $V$ can be supposed to be $X$-algebraically normalized up to multiplication of each vector of the basis by a strictly positive number. Moreover, if $e\equiv(e_j)_{j\in D}$ is a linear basis also satisfying the weak Pauli exclusion principle, then for any $i\ne j\in D$,

$$ \begin{equation} \{a_{e_j},a^+_{e_i}\} + \{a_{e_i},a^+_{e_j}\}=0. \end{equation} \tag{45} $$

Proof. From Definition 5, Proposition 3, and the linearity of the map $v\in V\,{\mapsto}\, X_v$ it follows that if $v\in V\setminus\{0\}$, then $X_{v/s_v}$ satisfies $X_{v/s_v}^2=1$. The first statement of the proposition implies that one can fix an $X$-algebraically normalized linear basis $e\equiv(e_j)_{j\in D}$ of $V$ satisfying the weak Pauli exclusion principle. Let us also fix two arbitrary indices $i\ne j\in D$. If $v:=v_ie_i+ v_je_j\in V$, possibly replacing $v$ by $v/s_v$, one can suppose that $X_v^2=1$, or equivalently
$$ \begin{equation} \begin{aligned} \, 1 &= X_v^2 =X_{v_ie_i+ v_je_j}^2 = (v_iX_{e_i}+ v_jX_{e_j})^2 \notag \\ &= v_i^2X_{e_i}^2+ v_j^2X_{e_j}^2 + v_iv_j(X_{e_i}X_{e_j}+X_{e_j}X_{e_i}) \notag \\ &= v_i^2 + v_j^2 + v_iv_j(a^+_{e_i}+a_{e_i}) (a^+_{e_j}+a_{e_j})+ v_iv_j(a^+_{e_j} +a_{e_j})(a^+_{e_i}+a_{e_i}). \end{aligned} \end{equation} \tag{46} $$
Multiplying both sides of (46) on the left by $a^+_{e_j}$ and on the right by $a_{e_j}$, and using the fact that $(a^+_{e_j})^2=0$, $a_{e_j}^2=0$, one obtains that
$$ \begin{equation*} \begin{aligned} \, a^+_{e_j}a_{e_j} &= v_i^2a^+_{e_j}a_{e_j} + v_j^2 a^+_{e_j}a_{e_j} + v_iv_j(a^+_{e_j}a^+_{e_i}+a^+_{e_j}a_{e_i})(a^+_{e_j}a_{e_j}+a_{e_j}a_{e_j}) \\ &\qquad + v_iv_j(a^+_{e_j}a^+_{e_j}+a^+_{e_j} a_{e_j})(a^+_{e_i}a_{e_j}+a_{e_i}a_{e_j}) \\ &=v_i^2a^+_{e_j}a_{e_j}+v_j^2 a^+_{e_j}a_{e_j} + v_iv_j(a^+_{e_j}a^+_{e_i}a^+_{e_j} a_{e_j}+a^+_{e_j}a_{e_i}a^+_{e_j}a_{e_j}) \\ &\qquad+ v_iv_j(a^+_{e_j}a_{e_j}a^+_{e_i}a_{e_j} +a^+_{e_j}a_{e_j}a_{e_i}a_{e_j}). \end{aligned} \end{equation*} \notag $$
By the weak Pauli exclusion principle,
$$ \begin{equation*} a^+_{e_j}a^+_{e_i}a^+_{e_j}=a^+_{e_j}a_{e_i}a^+_{e_j} =a_{e_j}a^+_{e_i}a_{e_j} = a_{e_j}a_{e_i}a_{e_j} = 0. \end{equation*} \notag $$

Therefore,

$$ \begin{equation} a^+_{e_j}a_{e_j} = v_i^2a^+_{e_j}a_{e_j} + v_j^2 a^+_{e_j}a_{e_j} = (v_i^2 + v_j^2) a^+_{e_j}a_{e_j}. \end{equation} \tag{47} $$
A similar argument, multiplying both sides of (46) on the left by $a_{e_j}$ and on the right by $a^+_{e_j}$, gives
$$ \begin{equation} a_{e_j}a^+_{e_j} = v_i^2a_{e_j}a^+_{e_j} + v_j^2 a_{e_j}a^+_{e_j} = (v_i^2 + v_j^2) a_{e_j}a^+_{e_j}. \end{equation} \tag{48} $$

Adding (47) and (48) and using Remark 6, we conclude that

$$ \begin{equation*} 1 = a^+_{e_j}a_{e_j} + a_{e_j}a^+_{e_j} = (v_i^2 + v_j^2) (a^+_{e_j}a_{e_j} + a_{e_j}a^+_{e_j})=v_i^2 + v_j^2. \end{equation*} \notag $$

This, combined with (46), implies

$$ \begin{equation} \begin{aligned} \, 0 &= v_iv_j(a^+_{e_i}+a_{e_i}) (a^+_{e_j}+a_{e_j})+ v_iv_j(a^+_{e_j}+ a_{e_j})(a^+_{e_i}+a_{e_i}) \notag \\ &= v_iv_j(a^+_{e_i}a^+_{e_j} + a^+_{e_i}a_{e_j}+ a_{e_i}a^+_{e_j} + a_{e_i}a_{e_j}+a^+_{e_j}a^+_{e_i}+ a^+_{e_j}a_{e_i}+ a_{e_j}a^+_{e_i}+ a_{e_j}a_{e_i}) \notag \\ &= v_iv_j(\{a_{e_j},a^+_{e_i}\}+ \{a_{e_i},a^+_{e_j}\}+\{a^+_{e_i},a^+_{e_j}\} + \{a_{e_j},a_{e_i}\}) \notag \\ &\!\!\stackrel{(27)}{=} v_iv_j(\{a_{e_j},a^+_{e_i}\} + \{a_{e_i},a^+_{e_j}\}). \end{aligned} \end{equation} \tag{49} $$
Since $v_iv_j$ can be chosen to be non-zero, this is equivalent to (45). $\Box$

3.6. Deduction of the Fermi anti-commutation relations

Theorem 4. Let $X\colon V \to \mathcal{A}$ be an algebraic symmetric Bernoulli field, and let $\{a^+_{u}, a_v\}_{u,v\in V}$ be the associated family of creation–annihilation operators. Then, if $e\equiv(e_j)_{j\in D}$ is any $X$-algebraically normalized linear basis of $V$ that satisfies the weak Pauli exclusion principle, the family $\{a_{e_j}, a^+_{e_j}\}_{j\in D}$ (abbreviated $\{a_j, a^+_j\}_{j\in D}$) satisfies the Fermi anti-commutation relations $(a_j^+)^*=a_j$ and

$$ \begin{equation} \{a_j,a_k\}=\{a^+_j,a^+_k\}=0,\qquad j,k\in D, \end{equation} \tag{50} $$
$$ \begin{equation} \{a_k,a^+_j\}=\delta_{jk},\qquad j,k\in D. \end{equation} \tag{51} $$

Proof. The relation $(a_j^+)^*=a_j$ follows from Definition 5, and (50) follows from (27). Moreover, Remark 6 gives (51) for $j=k$. We now show (51) for two different indices, written below as $i\ne j$.

Notice that, in the case of $i\ne j$, starting from (45), i.e.,

$$ \begin{equation*} 0=\{a_{e_j},a^+_{e_i}\} + \{a_{e_i},a^+_{e_j}\}, \end{equation*} \notag $$
and multiplying on the left both sides by $a_{e_i}a^+_{e_i}$, one finds that
$$ \begin{equation*} \begin{aligned} \, 0&= a_{e_i}a^+_{e_i}(a_{e_j}a^+_{e_i}+ a^+_{e_i} a_{e_j}) + a_{e_i}a^+_{e_i}(a_{e_i}a^+_{e_j} + a^+_{e_j}a_{e_i}) \\ &= a_{e_i}a^+_{e_i}a_{e_j}a^+_{e_i} + a_{e_i}a^+_{e_i} a^+_{e_i} a_{e_j}+ a_{e_i}a^+_{e_i}a_{e_i}a^+_{e_j} + a_{e_i}a^+_{e_i}a^+_{e_j}a_{e_i}. \end{aligned} \end{equation*} \notag $$
Using the weak Pauli exclusion principle, this becomes
$$ \begin{equation} 0=a_{e_i}a^+_{e_i}\{a_{e_i}, a^+_{e_j}\}. \end{equation} \tag{52} $$
Similarly, multiplying on the left both sides of (45) by $a^+_{e_i}a_{e_i}$, and applying again the weak Pauli exclusion principle, one finds that
$$ \begin{equation} \begin{aligned} \, 0&= a^+_{e_i}a_{e_i}(a_{e_j}a^+_{e_i}+ a^+_{e_i} a_{e_j}) + a^+_{e_i}a_{e_i}(a_{e_i}a^+_{e_j} + a^+_{e_j}a_{e_i}) \notag \\ &= a^+_{e_i}a_{e_i}a_{e_j}a^+_{e_i} + a^+_{e_i} a_{e_i}a^+_{e_i} a_{e_j}+ a^+_{e_i}a_{e_i}a_{e_i}a^+_{e_j}+ a^+_{e_i}a_{e_i}a^+_{e_j}a_{e_i} \notag \\ &=a^+_{e_i}a_{e_i}a_{e_j}a^+_{e_i} + a^+_{e_i} a_{e_i}a^+_{e_i} a_{e_j}= a^+_{e_i}a_{e_i} \{a_{e_i}, a^+_{e_j}\}. \end{aligned} \end{equation} \tag{53} $$
Adding (52) and (53), one finds, for all $i\ne j$, that
$$ \begin{equation*} 0=a_{e_i}a^+_{e_i}\{a_{e_i}, a^+_{e_j}\} + a^+_{e_i}a_{e_i}\{a_{e_i},a^+_{e_j}\} =\{a_{e_i}, a^+_{e_i}\}\{a_{e_i}, a^+_{e_j}\}=\{a_{e_i}, a^+_{e_j}\}, \end{equation*} \notag $$
where the last identity follows from the relation $\{a_{e_i}, a^+_{e_i}\}=1$ already proved. $\Box$
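For comparison, the conclusion of Theorem 4 can be checked on the standard Jordan–Wigner realization of the CAR (a well-known construction, not discussed in the paper; the sketch below is ours and purely illustrative): it satisfies both the weak Pauli exclusion principle (36) and the relations (50), (51).

```python
import numpy as np
from functools import reduce

# Jordan-Wigner realization of the CAR for n modes (illustrative, n = 3).
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # one-mode annihilator, cf. (20)
I2 = np.eye(2)

def a(j, n):
    """Annihilator of mode j (0-based): sz x ... x sz x sm x I x ... x I."""
    return reduce(np.kron, [sz] * j + [sm] + [I2] * (n - j - 1))

anti = lambda x, y: x @ y + y @ x
n = 3
for j in range(n):
    for k in range(n):
        aj, ak = a(j, n), a(k, n)
        assert np.allclose(anti(aj, ak), 0)                                 # (50)
        assert np.allclose(anti(ak, aj.conj().T), (j == k) * np.eye(2**n))  # (51)

# Weak Pauli exclusion principle (36): a_0^+ a_1^eps a_0^+ = 0
a0p = a(0, n).conj().T
assert np.allclose(a0p @ a(1, n) @ a0p, 0)
assert np.allclose(a0p @ a(1, n).conj().T @ a0p, 0)
```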

§ 4. Fermions and stochastic couplings

The example of fermions naturally suggests an algebraic extension of the classical notion of stochastic coupling for the reasons explained below.

Definition 8. Let $D$ be a set. A stochastic coupling for the family $(X_i)_{i\in D}$ of classical random variables with state spaces $(S_i,\mathcal{B}_i)$ and probability distributions $\mu_{X_i}$ is a probability measure $\mu$ on $\prod_{i\in D}(S_i,\mathcal{B}_i)$ such that, denoting by $\pi_i\colon \prod_{i\in D}(S_i,\mathcal{B}_i)\to (S_i,\mathcal{B}_i)$ the $i$th projection, one has

$$ \begin{equation} \mu_{X_i}=\mu\circ\pi_i^{-1} \quad (i\text{th marginal of }\mu)\quad \forall\, i\in D. \end{equation} \tag{54} $$
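For example (an illustration added here), if $D=\{1,2\}$ and $X_1$, $X_2$ are symmetric Bernoulli random variables with values in $\{-1,+1\}$, the stochastic couplings of the pair are exactly the probability measures on $\{-1,+1\}^2$ of the form
$$ \begin{equation*} \mu(\varepsilon_1,\varepsilon_2) = \frac{1+c\,\varepsilon_1\varepsilon_2}{4},\qquad \varepsilon_1,\varepsilon_2\in\{-1,+1\},\quad c\in[-1,1], \end{equation*} \notag $$
each of which has the two uniform measures as marginals; the value $c=0$ gives the product coupling, and $c$ is the correlation of the coupled pair.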

The following is a formulation of the notion of coupling that includes the classical and the quantum case.

Definition 9. Let $D$ be a set. A stochastic coupling for the family of algebraic probability spaces $(\mathcal{A}_i, \varphi_i)_{i\in D}$ is a pair $((\mathcal{A}, \varphi), \, (j_i)_{i\in D})$ such that:

1) $(\mathcal{A}, \varphi)$ is an algebraic probability space,

2) for each $i\in D$, $j_i\colon \mathcal{A}_i\to \mathcal{A}$ is a $*$-homomorphism (not necessarily identity preserving) satisfying

$$ \begin{equation} \varphi\circ j_i = \varphi_i\quad\forall\, i\in D, \end{equation} \tag{55} $$
$$ \begin{equation} \mathcal{A} = \bigvee_{i\in D}\mathcal{A}_i \quad \text{algebraic span}. \end{equation} \tag{56} $$
If
$$ \begin{equation} \mathcal{A} = \bigotimes_{i\in D}\mathcal{A}_i \quad (\text{algebraic tensor product of $*$-algebras with identity}), \end{equation} \tag{57} $$
one speaks of tensor stochastic coupling.

Remark 10. Since the product coupling is well defined for any family of classical random variables, the set of couplings for a family of random variables is never empty and it is a closed convex set.

Remark 11. A special case of the coupling problem is the definition of a multi-dimensional extension of a random variable. In fact, such an extension is nothing but a coupling among several copies of a single random variable.

Motivated by the example of fermions, we introduce in classical probability the distinction between stochastic coupling and algebraic coupling, replacing the polynomial algebras $\mathcal{P}_{X_i}$ ($i\in D$) by the associated canonical quantum polynomial algebras $\mathcal{P}_{a^+_{{X_i}}, a^0_{{X_i}}, a_{{X_i}}}$ and the classical algebraic probability space $(\mathcal{A}, \varphi)$ by a not necessarily commutative $*$-algebra $\mathcal{A}$. This allows one to introduce, as in the fermion case, algebraic constraints among the random variables which are valid for any choice of the state on $\mathcal{A}$.

Definition 10. Let $V$ be a real vector space and $(X_v)_{v\in V}$ be a family of classical random variables with all moments, each on its own probability space. An algebraic coupling of the family $(X_v)$ is a pair $(\mathcal{A}, (j_v)_{v\in V})$ such that $\mathcal{A}$ is a $*$-algebra and, for each $v\in V$, $j_v\colon \mathcal{A}_{X_v}\to \mathcal{A}$ is an injective, but not necessarily identity preserving, $*$-homomorphism from the quantum algebra $\mathcal{A}_{X_v}$ of $X_v$ into $\mathcal{A}$. The algebraic coupling $(\mathcal{A}, (j_v)_{v\in V})$ is called classical if $\mathcal{A}$ is commutative, quantum if it is not, identity preserving if each $j_v$ ($v\in V$) is.

§ 5. Stochastic independences and algebraic couplings

Various notions of stochastic independence that have emerged in quantum probability have a natural interpretation in terms of algebraic stochastic couplings; in other words, they belong to the same family of constraints as the Pauli exclusion principle. In this section we illustrate this idea with examples of tensor couplings between pairs of algebraic probability spaces. We use the indices $S$ (system) and $E$ (environment) to make an intuitive bridge with the theory of quantum open systems, which are usually defined in terms of tensor embeddings. In fact, the theory of couplings, both algebraic and stochastic, is a non-trivial extension of the theory of open systems (see [5] for a detailed discussion of this issue).

In this section $D = \{S, E\}$, hence, according to Definition 9, a stochastic coupling for the pair of algebraic probability spaces $(\mathcal{A}_S,\varphi_S)$ and $(\mathcal{A}_E,\varphi_E)$ is given by an algebraic probability space $(\mathcal{A},\varphi)$ and two injective embeddings

$$ \begin{equation} j_S \colon \mathcal{A}_S\to \mathcal{A},\qquad j_E \colon \mathcal{A}_E\to \mathcal{A}, \end{equation} \tag{58} $$
satisfying the compatibility condition
$$ \begin{equation} \varphi\circ j_S = \varphi_S,\qquad \varphi\circ j_E = \varphi_E. \end{equation} \tag{59} $$
The minimality condition (56) becomes
$$ \begin{equation} \mathcal{A} = \mathcal{A}_S\vee \mathcal{A}_E \end{equation} \tag{60} $$
and implies that $\mathcal{A}$ is linearly spanned by the following $4$ kinds of alternating products:
$$ \begin{equation} j_S(a_{S,1})j_E(a_{E,1})\cdots j_S(a_{S,n})j_E(a_{E,n}) \quad SE\text{-products}, \end{equation} \tag{61} $$
$$ \begin{equation} j_S(a_{S,1})j_E(a_{E,1})\cdots j_S(a_{S,n-1}) j_E(a_{E,n-1})j_S(a_{S,n}) \quad SS\text{-products}, \end{equation} \tag{62} $$
$$ \begin{equation} j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n})j_S(a_{S,n}) \quad ES\text{-products}, \end{equation} \tag{63} $$
$$ \begin{equation} j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n-1}) j_S(a_{S,n-1})j_E(a_{E,n}) \quad EE\text{-products} \end{equation} \tag{64} $$
in which $n\in\mathbb{N}$, $a_{S,k}\in\mathcal{A}_S$, $a_{E,k}\in\mathcal{A}_E$, each $j_S(a_{S,k})$-factor is always followed by a $j_E(a_{E,k+1})$-factor or by nothing, and the same holds with the roles of $E$ and $S$ interchanged.

Remark 12. In several cases, condition (60) implies that the algebra $\mathcal{A}$ has no identity. In these cases a positive linear functional $\varphi$ on $\mathcal{A}$ is called a state if

$$ \begin{equation} \varphi(j_S(1_{\mathcal{A}_S}))= 1,\qquad \varphi(j_E(1_{\mathcal{A}_E}))= 1, \end{equation} \tag{65} $$
notice that the identities (65) are automatically satisfied if $\varphi$ is a stochastic coupling due to (59).

Remark 13. In many cases there are algebraic relations that allow one to reduce the $4$ kinds of alternating products (61)–(64) to simpler forms that we will call normal forms. In Sections 5.2 and 5.3 below, we produce some examples of these normal forms.

Motivated by this discussion, we distinguish the algebraic aspects of the coupling from the stochastic ones. In the special case considered in this section, this means that we first introduce the embeddings (58) as purely algebraic objects and then classify the possible states on $\mathcal{A}$ that satisfy (59).

5.1. $\Phi$-embeddings

In the rest of this section, the notation and assumptions are those introduced here. We suppose that

$$ \begin{equation} \mathcal{A}_{R}\text{ acts on a Hilbert space }\mathcal{H}_{R}\text{ with unit vector } \Phi_{R}\quad \forall\, R\in\{S,E\}. \end{equation} \tag{66} $$
It is known that, given a tensor product of two Hilbert spaces $\mathcal{H}_S\otimes\mathcal{H}_E$, there is no canonical way to embed either of the factors $\mathcal{H}_S$, $\mathcal{H}_E$ into their product $\mathcal{H}_S\otimes\mathcal{H}_E$. The simplest embeddings are the isometric ones. The examples in this section are built on a very special class of isometric embeddings, described in the following simple result, which we recall.

Lemma 3. For any choice of $\Phi_E\in\mathcal{H}_E$ with $\|\Phi_E \|=1$, the embedding

$$ \begin{equation} J_S\colon \xi\in\mathcal{H}_S\to\xi\otimes \Phi_E\in\mathcal{H}_S\otimes\mathcal{H}_E \end{equation} \tag{67} $$
is isometric and its adjoint is
$$ \begin{equation} J^*_S(\eta\otimes\psi)=\langle\Phi_E,\psi\rangle\eta, \qquad \eta\in\mathcal{H}_S,\quad \psi\in\mathcal{H}_E, \end{equation} \tag{68} $$
so that
$$ \begin{equation} j_S(a_S) := J_Sa_SJ^*_S=a_S\otimes P_{\Phi_E},\qquad a_S\in\mathcal{B}(\mathcal{H}_S), \end{equation} \tag{69} $$
where
$$ \begin{equation} P_{\Phi_E} = \Phi_E \Phi_E^*\quad (\textit{i.e.}, P_{\Phi_E}\psi= \langle\Phi_E,\psi \rangle\Phi_E \quad \forall\, \psi\in\mathcal{H}_E). \end{equation} \tag{70} $$

Proof. A simple calculation. $\Box$
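A quick numerical sanity check of (67)–(69) (our sketch; the finite dimensions and the random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 2, 3                                      # illustrative dimensions
PhiE = rng.normal(size=dE) + 1j * rng.normal(size=dE)
PhiE /= np.linalg.norm(PhiE)                       # unit vector Phi_E in H_E

# J_S : xi -> xi (x) Phi_E, cf. (67), as a (dS*dE) x dS matrix
JS = np.kron(np.eye(dS), PhiE.reshape(-1, 1))
assert np.allclose(JS.conj().T @ JS, np.eye(dS))   # J_S is isometric

P_PhiE = np.outer(PhiE, PhiE.conj())               # P_{Phi_E}, cf. (70)
aS = rng.normal(size=(dS, dS))                     # arbitrary a_S
# j_S(a_S) = J_S a_S J_S^* = a_S (x) P_{Phi_E}, cf. (69)
assert np.allclose(JS @ aS @ JS.conj().T, np.kron(aS, P_PhiE))
```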

Definition 11. For any unit vector $\Phi_E\in\mathcal{H}_E$, the embedding

$$ \begin{equation*} j_S \colon \mathcal{B}(\mathcal{H}_S) \to \mathcal{B}(\mathcal{H}_S)\otimes \mathcal{B}(\mathcal{H}_E) \end{equation*} \notag $$
defined by (69) above (i.e., $j_S(a_S) := a_S\otimes P_{\Phi_E}$ for all $a_S\in\mathcal{B} (\mathcal{H}_S)$) is called the $\Phi_E$-embedding. Similarly, for any unit vector $\Phi_S\in\mathcal{H}_S$, the embedding $j_E(a_E) := P_{\Phi_S}\otimes a_E$ for all $a_E\in\mathcal{B}(\mathcal{H}_E)$ is called the $\Phi_S$-embedding.

Remark 14. Using (69) one finds, for all $\eta_S\in \mathcal{H}_S$ and $\psi_E\in\mathcal{H}_E$,

$$ \begin{equation*} \langle j_S(a_S)(\eta_S\otimes\psi_E),\eta_S\otimes\psi_E \rangle = \langle a_S\eta_S,\eta_S\rangle |\langle \Phi_E,\psi_E\rangle |^2\quad\forall\, a_S\in\mathcal{B}(\mathcal{H}_S) \end{equation*} \notag $$
and similarly,
$$ \begin{equation*} \langle j_E(a_E)(\eta_S\otimes\psi_E),\eta_S\otimes\psi_E \rangle = \langle a_E\psi_E,\psi_E\rangle |\langle \Phi_S,\eta_S\rangle |^2\quad\forall\, a_E\in\mathcal{B} (\mathcal{H}_E), \end{equation*} \notag $$
which clearly indicates that these embeddings contain statistical information.

Let us introduce

$$ \begin{equation} \varphi_{R}(x) = \langle \Phi_{R},x \Phi_{R}\rangle\big|_{\mathcal{A}_{R}} \quad \forall\, x\in \mathcal{A}_{R},\quad \forall\, R\in\{S,E\}, \end{equation} \tag{71} $$
where $\big|_{\,\boldsymbol{\cdot}\,}$ denotes restriction. If the elements of $\mathcal{A}_S$ or $\mathcal{A}_E$ are unbounded operators, it is assumed that both restrictions make sense. Let the projectors $P_{\Phi_S}$ and $P_{\Phi_E}$ be defined as in (70). Then, for all $R\in\{S,E\}$, the following identities hold:
$$ \begin{equation} P_{\Phi_{R}}x P_{\Phi_{R}}=\varphi_{R}(x)P_{\Phi_{R}}\quad \forall\, x\in {\mathcal{A}_R}, \end{equation} \tag{72} $$
$$ \begin{equation} \varphi_{R}(P_{\Phi_R}) =1, \end{equation} \tag{73} $$
$$ \begin{equation} \varphi_{R}(P_{\Phi_R}x) =\varphi_{R}(xP_{\Phi_R}) =\varphi_{R}(x)\quad \forall\, x\in \mathcal{A}_{R}, \end{equation} \tag{74} $$
$$ \begin{equation} \varphi_{R}(P_{\Phi_R}xP_{\Phi_R}y) \stackrel{(72)}{=}\varphi_{R}(x) \varphi_{R}(P_{\Phi_R}y)\stackrel{(74)}{=} \varphi_{R}(x)\varphi_{R}(y), \end{equation} \tag{75} $$
$$ \begin{equation} \varphi_{R}(xP_{\Phi_R}yP_{\Phi_R}) \stackrel{(72)}{=}\varphi_{R}(y) \varphi_{R}(xP_{\Phi_R})\stackrel{(74)}{=} \varphi_{R}(x)\varphi_{R}(y), \end{equation} \tag{76} $$
$$ \begin{equation} \varphi_{R}(xP_{\Phi_R}y)\stackrel{(74)}{=} \varphi_{R}(xP_{\Phi_R}yP_{\Phi_R})\stackrel{(76)}{=} \varphi_{R}(x)\varphi_{R}(y). \end{equation} \tag{77} $$
On the algebraic tensor product
$$ \begin{equation} \mathcal{A} := \mathcal{A}_S\otimes \mathcal{A}_E \end{equation} \tag{78} $$
we define the state
$$ \begin{equation} \varphi (x):=(\varphi_S\otimes \varphi_E)(x) := \langle \Phi_S\otimes \Phi_E, x(\Phi_S\otimes \Phi_E )\rangle,\qquad x\in\mathcal{A}. \end{equation} \tag{79} $$
Notice that for all $a_S\in\mathcal{A}_S$ and $a_E\in\mathcal{A}_E$,
$$ \begin{equation*} \begin{aligned} \, \langle\Phi_S\otimes \Phi_E,((P_{\Phi_S}a_S)\otimes a_E)\Phi_S\otimes \Phi_E\rangle &=\langle \Phi_S, P_{\Phi_S}a_S\Phi_S\rangle\langle \Phi_E, a_E\Phi_E\rangle \\ &=\varphi_S(a_S)\varphi_E(a_E) \end{aligned} \end{equation*} \notag $$
and the same identity holds replacing $P_{\Phi_S}a_S\otimes a_E$ by one of the following expressions:
$$ \begin{equation*} a_SP_{\Phi_S}\otimes a_E,\quad P_{\Phi_S}a_SP_{\Phi_S}\otimes a_E,\quad a_S\otimes P_{\Phi_E}a_E,\quad a_S\otimes a_EP_{\Phi_E},\quad a_S\otimes P_{\Phi_E}a_EP_{\Phi_E}. \end{equation*} \notag $$
With a slight abuse of notation, we will write these identities as
$$ \begin{equation} \varphi((a_SP_{\Phi_S})\otimes a_E) =\varphi(a_S\otimes (P_{\Phi_E}a_E)) = \dots = \varphi_S(a_S)\varphi_E(a_E). \end{equation} \tag{80} $$

5.2. Boolean stochastic couplings

The Lenczewski [2] boolean embeddings are defined by the pair

$$ \begin{equation} j_S(b_S) := b_S\otimes P_{\Phi_E} \quad \forall\, b_S\in\mathcal{A}_S, \end{equation} \tag{81} $$
$$ \begin{equation} j_E(b_E) := P_{\Phi_S}\otimes b_E \quad \forall\, b_E\in\mathcal{A}_E. \end{equation} \tag{82} $$
Notice that, for all $ b_S\in\mathcal{A}_S$ and $ b_E\in\mathcal{A}_E$, denoting $\overline{b}_{R}:=j_{R}(b_{R})$ for $R\in\{S,E\}$ and $b_{R}\in\mathcal{A}_R$,
$$ \begin{equation*} \begin{aligned} \, \varphi (\overline b_S) &\stackrel{(79)}{=} \langle \Phi_S\otimes \Phi_E, (b_S\otimes P_{\Phi_E})(\Phi_S\otimes \Phi_E )\rangle \stackrel{(71), (73)}{=}\varphi_S (b_S), \\ \varphi (\overline b_E) &\stackrel{(79)}{=} \langle \Phi_S\otimes \Phi_E, (P_{\Phi_S}\otimes b_E)(\Phi_S\otimes \Phi_E )\rangle \stackrel{(71), (73)}{=} \varphi_E ({b}_E), \end{aligned} \end{equation*} \notag $$
i.e.,
$$ \begin{equation} \varphi (\overline{b}_{R})=\varphi_R ({b}_{R})\quad \forall\,R\in\{S,E\},\quad \forall\, b_{R}\in\mathcal{A}_R. \end{equation} \tag{83} $$
In other words, the pair $((\mathcal{A}, \varphi), \{j_S, j_E\})$ is a stochastic coupling for the pair of algebraic probability spaces $(\mathcal{A}_S, \varphi_S)$, $(\mathcal{A}_E, \varphi_E)$.

Remark 15. Notice that the embeddings (81) and (82) are not identity preserving because

$$ \begin{equation*} j_S(1_S) = 1_S\otimes P_{\Phi_E}\ne 1_S\otimes 1_E,\qquad j_E(1_E) = P_{\Phi_S}\otimes 1_E\ne 1_S\otimes 1_E. \end{equation*} \notag $$
However, if $\varphi$ is given by (79), then
$$ \begin{equation*} \varphi (j_S(1_S))=\varphi_S\otimes \varphi_E(1_S\otimes P_{\Phi_E}) =\varphi_S(1_S)\varphi_E(P_{\Phi_E})\stackrel{(73)}{=} 1 \end{equation*} \notag $$
and similarly for $\varphi (j_E(1_E))$. The following proposition and its corollary are known [2]; we give a new proof that better highlights the separation between algebraic and statistical constraints.

Proposition 5. In the case of the boolean embeddings given by (81) and (82), for any $n\in\mathbb{N}^*$, $\{b_{S,k}\colon k\leqslant n\}\subset \mathcal{A}_S$ and $\{b_{E,k}\colon k\leqslant n\}\subset \mathcal{A}_E$, the $4$ kinds of alternate products (61)–(64) take, respectively, the following $4$ normal forms:

$$ \begin{equation} SE\textit{-product in }(61) =(b_{S,1}P_{\Phi_S}\otimes P_{\Phi_E}b_{E,n}) \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}), \end{equation} \tag{84} $$
$$ \begin{equation} SS\textit{-product in }(62) =(b_{S,1}P_{\Phi_S}b_{S,n}\otimes P_{\Phi_E}) \prod_{k=2}^{n-1}\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}), \end{equation} \tag{85} $$
$$ \begin{equation} ES\textit{-product in }(63) = (P_{\Phi_S}b_{S,n}\otimes b_{E,1}P_{\Phi_E}) \prod_{k=1}^{n-1}\varphi_S( b_{S,k}) \prod_{h=2}^n\varphi_E( b_{E,h}), \end{equation} \tag{86} $$
$$ \begin{equation} EE\textit{-product in } (64) = (P_{\Phi_S}\otimes b_{E,1}P_{\Phi_E}b_{E,n}) \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=2}^{n-1}\varphi_E( b_{E,h}). \end{equation} \tag{87} $$

Proof. Relation (72) allows one to put the products (61)–(64) in normal form. In fact, in the case of $SE$-products, one has
$$ \begin{equation*} \begin{aligned} \, &\overline{b}_{S,1}\cdot \overline{b}_{E,1}\cdot \overline{b}_{S,2}\cdot \overline{b}_{E,2}\cdots \overline{b}_{S,n}\cdot \overline{b}_{E,n} \\ &=[b_{S,1}{\otimes}\, P_{\Phi_E}]\cdot[P_{\Phi_S}{\otimes}\, b_{E,1}]\cdot [b_{S,2}{\otimes}\, P_{\Phi_E}]\cdot [P_{\Phi_S}{\otimes}\, b_{E,2}] \cdots [b_{S,n}{\otimes}\, P_{\Phi_E}]\cdot [P_{\Phi_S}{\otimes}\, b_{E,n}] \\ &=b_{S,1}P_{\Phi_S}b_{S,2}P_{\Phi_S}\dots b_{S,n-1}P_{\Phi_S} b_{S,n}P_{\Phi_S}\otimes P_{\Phi_E}b_{E,1} P_{\Phi_E}b_{E,2}P_{\Phi_E}b_{E,3} \cdots P_{\Phi_E}b_{E,n} \\ &\!\!\stackrel{(72)}{=} b_{S,1}P_{\Phi_S}\otimes P_{\Phi_E}b_{E,n} \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}). \end{aligned} \end{equation*} \notag $$
This proves (84). For the $ES$-type products,
$$ \begin{equation*} \begin{aligned} \, &\overline{b}_{E,1}\cdot \overline{b}_{S,1}\cdot \overline{b}_{E,2}\cdot\overline{b}_{S,2}\cdots \overline{b}_{E,n}\cdot\overline{b}_{S,n} \\ &=[P_{\Phi_S}\otimes b_{E,1}]\cdot[b_{S,1}{\otimes}\, P_{\Phi_E}]\cdot [P_{\Phi_S} {\otimes}\, b_{E,2}]\cdot [b_{S,2}{\otimes}\, P_{\Phi_E}] \cdots [P_{\Phi_S} {\otimes}\, b_{E,n}]\cdot [b_{S,n}{\otimes}\, P_{\Phi_E}] \\ &=P_{\Phi_S}b_{S,1}P_{\Phi_S}b_{S,2}P_{\Phi_S}\cdots b_{S,n-1}P_{\Phi_S} b_{S,n}\otimes b_{E,1} P_{\Phi_E}b_{E,2}P_{\Phi_E}b_{E,3} \cdots P_{\Phi_E}b_{E,n}P_{\Phi_E} \\ &\!\!\stackrel{(72)}{=} P_{\Phi_S}b_{S,n}\otimes b_{E,1}P_{\Phi_E} \prod_{k=1}^{n-1}\varphi_S( b_{S,k}) \prod_{h=2}^n\varphi_E( b_{E,h}), \end{aligned} \end{equation*} \notag $$
which is (86). For the $EE$-type products,
$$ \begin{equation*} \begin{aligned} \, &\overline{b}_{E,1}\cdot \overline{b}_{S,2}\cdot \overline{b}_{E,2}\cdot \overline{b}_{S,3} \cdots \overline{b}_{S,n-1}\cdot \overline{b}_{E,n-1}\cdot\overline{b}_{S,n}\cdot\overline{b}_{E,n} \\ &=[P_{\Phi_S}\otimes b_{E,1}]\cdot [b_{S,2}\otimes P_{\Phi_E}]\cdot [P_{\Phi_S} \otimes b_{E,2}] \cdots [b_{S,n-1}\otimes P_{\Phi_E}]\cdot [P_{\Phi_S} \otimes b_{E,n-1}] \\ &\qquad\times [b_{S,n}\otimes P_{\Phi_E}]\cdot [P_{\Phi_S} \otimes b_{E,n}] \\ &=P_{\Phi_S}b_{S,2}P_{\Phi_S}\cdots b_{S,n-1}P_{\Phi_S} b_{S,n}P_{\Phi_S}\otimes b_{E,1} P_{\Phi_E}b_{E,2}P_{\Phi_E}b_{E,3} \cdots P_{\Phi_E}b_{E,n-1}P_{\Phi_E}b_{E,n} \\ &\!\!\stackrel{(72)}{=} P_{\Phi_S}\otimes b_{E,1}P_{\Phi_E}b_{E,n} \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=2}^{n-1}\varphi_E( b_{E,h}), \end{aligned} \end{equation*} \notag $$
which is (87). For the $SS$-type products,
$$ \begin{equation*} \begin{aligned} \, &\overline{b}_{S,1}\cdot \overline{b}_{E,1}\cdot \overline{b}_{S,2}\cdot \overline{b}_{E,2} \cdots \overline{b}_{S,n-1}\cdot \overline{b}_{E,n-1}\cdot\overline{b}_{S,n} \\ &=[b_{S,1}\otimes P_{\Phi_E}]\cdot [P_{\Phi_S} \otimes b_{E,1}] \cdots [b_{S,n-1}\otimes P_{\Phi_E}]\cdot [P_{\Phi_S} \otimes b_{E,n-1}]\cdot [b_{S,n}\otimes P_{\Phi_E}] \\ &=b_{S,1}P_{\Phi_S}b_{S,2}P_{\Phi_S}\cdots b_{S,n-1}P_{\Phi_S} b_{S,n}\otimes P_{\Phi_E}b_{E,1} P_{\Phi_E}b_{E,2}P_{\Phi_E}b_{E,3} \cdots P_{\Phi_E}b_{E,n-1}P_{\Phi_E} \\ &\!\!\stackrel{(72)}{=} b_{S,1}P_{\Phi_S}b_{S,n}\otimes P_{\Phi_E} \prod_{k=2}^{n-1}\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}), \end{aligned} \end{equation*} \notag $$
which is (85). $\Box$

Corollary 1. The $\varphi$-expectation of each of the $4$ types of products defined in (61)–(64) (simplified in Proposition 5) factorizes. Namely:

$$ \begin{equation} \varphi(SE\textit{-product in }(61)) =\prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^n\varphi(\overline b_{E,h}), \end{equation} \tag{88} $$
$$ \begin{equation} \varphi(SS\textit{-product in }(62)) = \prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^{n-1}\varphi(\overline b_{E,h}), \end{equation} \tag{89} $$
$$ \begin{equation} \varphi(ES\textit{-product in } (63)) =\prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^n\varphi(\overline{b}_{E,h}), \end{equation} \tag{90} $$
$$ \begin{equation} \varphi(EE\textit{-product in }(64)) = \prod_{k=2}^n\varphi(\overline b_{S,k}) \prod_{h=1}^n\varphi(\overline b_{E,h}). \end{equation} \tag{91} $$
Consequently, with respect to the state $\varphi$, the two algebras $\mathcal{A}_S$ and $\mathcal{A}_E$ are Boolean independent.

Proof. Thanks to Proposition 5:
$$ \begin{equation*} \begin{aligned} \, \varphi(SE\text{-product in }(61)) &\stackrel{(84)}{=} \varphi(b_{S,1}P_{\Phi_S}\otimes P_{\Phi_E}b_{E,n}) \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}) \\ &\stackrel{(79)}{=} \varphi_S(b_{S,1}P_{\Phi_S}) \varphi_E(P_{\Phi_E}b_{E,n}) \prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}) \\ &\stackrel{(74)}{=} \prod_{k=1}^n\varphi_S(b_{S,k}) \prod_{h=1}^n\varphi_E(b_{E,h})=\prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^n\varphi(\overline b_{E,h}), \end{aligned} \end{equation*} \notag $$
which is the formula (88);
$$ \begin{equation*} \begin{aligned} \, \varphi(ES\text{-product in }(63)) &\stackrel{(86)}{=} \varphi(P_{\Phi_S}b_{S,n}\otimes b_{E,1}P_{\Phi_E}) \prod_{k=1}^{n-1}\varphi_S( b_{S,k}) \prod_{h=2}^n\varphi_E( b_{E,h}) \\ &\!\!\!\!\!\stackrel{(79),(74)}{=} \prod_{k=1}^n\varphi_S( b_{S,k}) \prod_{h=1}^n\varphi_E( b_{E,h}) \stackrel{(83)}{=} \prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^n\varphi(\overline{b}_{E,h}), \end{aligned} \end{equation*} \notag $$
which is formula (90);
$$ \begin{equation*} \begin{aligned} \, \varphi(EE\text{-product in }(64)) &\stackrel{(87)}{=} \varphi(P_{\Phi_S}\otimes b_{E,1}P_{\Phi_E}b_{E,n})\prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=2}^{n-1}\varphi_E( b_{E,h}) \\ &\ =\prod_{k=2}^n\varphi_S( b_{S,k}) \prod_{h=1}^n\varphi_E(b_{E,h}) \stackrel{(83)}{=}\prod_{k=2}^n\varphi(\overline b_{S,k})\prod_{h=1}^n\varphi(\overline b_{E,h}), \end{aligned} \end{equation*} \notag $$
which is formula (91). Finally, one gets formula (89) as follows:
$$ \begin{equation*} \begin{aligned} \, \varphi(SS\text{-product in }(62)) &\stackrel{(85)}{=} \varphi(b_{S,1}P_{\Phi_S}b_{S,n}\otimes P_{\Phi_E}) \prod_{k=2}^{n-1}\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}) \\ &\stackrel{(79)}{=} \varphi_S(b_{S,1}P_{\Phi_S}b_{S,n}) \varphi_E(P_{\Phi_E}) \prod_{k=2}^{n-1}\varphi_S( b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}) \\ &\!\!\!\!\stackrel{(73), (77)}{=} \prod_{k=1}^n\varphi_S(b_{S,k}) \prod_{h=1}^{n-1}\varphi_E( b_{E,h}) \stackrel{(83)}{=} \prod_{k=1}^n\varphi(\overline b_{S,k}) \prod_{h=1}^{n-1}\varphi(\overline b_{E,h}). \end{aligned} \end{equation*} \notag $$
$\Box$
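As an editorial illustration (not part of the paper), the normal form (85) and the factorization (89) can be verified numerically for one representative type of alternate product; in the sketch below $\mathcal{A}_S$, $\mathcal{A}_E$ are full matrix algebras over the reals, $\Phi_S$, $\Phi_E$ are the first basis vectors, and all names, dimensions and seeds are ad hoc choices.

```python
# Editorial illustration (not from the paper): numerical check of the normal
# form (85) and of the factorization (89) for the boolean embeddings (81), (82).
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
dS, dE, n = 2, 3, 3

eS, eE = np.eye(dS)[0], np.eye(dE)[0]          # Phi_S, Phi_E
PS, PE = np.outer(eS, eS), np.outer(eE, eE)    # rank-one projectors
Phi = np.kron(eS, eE)                          # Phi_S (x) Phi_E

jS = lambda b: np.kron(b, PE)                  # (81)
jE = lambda b: np.kron(PS, b)                  # (82)
phi  = lambda x: Phi @ x @ Phi                 # the state (79), real case
phiS = lambda b: eS @ b @ eS
phiE = lambda b: eE @ b @ eE

bS = [rng.standard_normal((dS, dS)) for _ in range(n)]
bE = [rng.standard_normal((dE, dE)) for _ in range(n)]

# SS-type alternate product jS(b_{S,1}) jE(b_{E,1}) ... jS(b_{S,n})
prod_SS = reduce(np.matmul,
                 [f for k in range(n - 1) for f in (jS(bS[k]), jE(bE[k]))]
                 + [jS(bS[-1])])

# normal form (85)
nf = np.kron(bS[0] @ PS @ bS[-1], PE) \
     * np.prod([phiS(b) for b in bS[1:-1]]) * np.prod([phiE(b) for b in bE[:n - 1]])
assert np.allclose(prod_SS, nf)

# factorized expectation (89)
assert np.allclose(phi(prod_SS),
                   np.prod([phiS(b) for b in bS]) * np.prod([phiE(b) for b in bE[:n - 1]]))
```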

5.3. Monotone composite systems

In the notations of Section 5.1, we fix the Liebscher [3] monotone embeddings $j_S$, $j_E$ to be of the form

$$ \begin{equation*} j_S(b_S) := b_S\otimes P_{\Phi_E},\quad \Phi_E\text{-embedding}\quad \forall\, b_S\in\mathcal{B}(\mathcal{H}_S), \notag \end{equation*} \notag $$
$$ \begin{equation} j_E(b_E) := 1_S\otimes b_E, \quad \text{tensor embedding} \quad \forall\, b_E\in\mathcal{B}(\mathcal{H}_E). \end{equation} \tag{92} $$
While $j_E$ is identity preserving, $j_S$ is not, because $j_S(1_S)=1_S\otimes P_{\Phi_E}$. Because of (73), the state $\varphi$ defined by (79) satisfies the identity (83), i.e., the pair $((\mathcal{A}, \varphi), \{j_S, j_E\})$ is a stochastic coupling for the pair of algebraic probability spaces $(\mathcal{A}_S, \varphi_S)$, $(\mathcal{A}_E, \varphi_E)$.

For any $n\in\mathbb N$ and $b_{S,1}, b_{S,2}, \dots,b_{S,n} \in \mathcal{A}_S$, $b_{E,1}, b_{E,2}, \dots, b_{E,n} \in \mathcal{A}_E$, we use the notations

$$ \begin{equation} j_S(b_{S,j}) :=b_{S,j}\otimes P_{\Phi_E} \quad \forall\, b_{S,j} \in \mathcal{A}_S, \end{equation} \tag{93} $$
$$ \begin{equation} j_E(b_{E,j}) :=1_S\otimes b_{E,j} \quad \forall\, b_{E,j} \in \mathcal{A}_E . \end{equation} \tag{94} $$
As an easy consequence of the above choice of embeddings, one has
$$ \begin{equation*} \, b_{S,1}\cdots b_{S,n}\otimes P_{\Phi_E}= j_S(b_{S,1}\cdots b_{S,n}) =j_S(b_{S,1})\cdots j_S(b_{S,n}), \end{equation*} \notag $$
$$ \begin{equation} 1_S\otimes b_{E,1}\cdots b_{E,n} = j_E(b_{E,1}\cdots b_{E,n}) = j_E(b_{E,1})\cdots j_E(b_{E,n}). \end{equation} \tag{95} $$
Moreover, (79) and (74) tell us that
$$ \begin{equation*} \varphi(j_S(b_S))=\varphi (b_S\otimes P_{\Phi_E})=\varphi_S (b_S) \quad \forall\, b_S\in \mathcal{A}_S, \notag \end{equation*} \notag $$
$$ \begin{equation} \varphi(j_E(b_E))=\varphi (1_S\otimes b_E)=\varphi_E (b_E) \quad \forall\, b_E\in \mathcal{A}_E. \end{equation} \tag{96} $$

Also the following proposition and its corollary are known [3]; again we give a new proof that better highlights the separation between algebraic and statistical constraints.

Proposition 6. In the case of the monotone embeddings given by (93) and (94), for any $n\in\mathbb{N}^*$, $\{a_{S,k}\colon k\leqslant n\}\subset \mathcal{A}_S$ and $\{a_{E,k}\colon k\leqslant n\}\subset \mathcal{A}_E$, the $4$ kinds of alternate products (61)–(64) take, respectively, the following $4$ normal forms:

$$ \begin{equation} j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n})j_E(a_{E,n}) = a_{S,1}\cdots a_{S,n}\otimes P_{\Phi_E}a_{E,n} \prod_{k=1}^{n-1}\varphi_E(a_{E,k}), \end{equation} \tag{97} $$
$$ \begin{equation} \begin{split} &j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n-1})j_E(a_{E,n-1})j_S(a_{S,n}) \\ &\qquad = a_{S,1}\cdots a_{S,n}\otimes P_{\Phi_E}\prod_{k=1}^{n-1} \varphi_E(a_{E,k}), \end{split} \end{equation} \tag{98} $$
$$ \begin{equation} j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n})j_S(a_{S,n}) = a_{S,1}\cdots a_{S,n}\otimes a_{E,1}P_{\Phi_E} \prod_{k=2}^n\varphi_E(a_{E,k}), \end{equation} \tag{99} $$
$$ \begin{equation} \begin{split} &j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n-1}) j_S(a_{S,n-1})j_E(a_{E,n}) \\ &\qquad= a_{S,1}\cdots a_{S,n-1}\otimes a_{E,1}P_{\Phi_E}a_{E,n} \prod_{k=2}^{n-1}\varphi_E(a_{E,k}). \end{split} \end{equation} \tag{100} $$

Proof. The embeddings (93) and (94) satisfy the relation
$$ \begin{equation} \begin{aligned} \, j_S(b_{S,1})\, j_E(b_E)\, j_S(b_{S,2}) &=b_{S,1}b_{S,2}\otimes P_{\Phi_E}b_EP_{\Phi_E} \stackrel{(72)}{=}\varphi_E(b_E)\, j_S(b_{S,1}b_{S,2}) \notag \\ &\!\!\!\stackrel{(95), (96)}{=} \varphi(j_E(b_E))\, j_S(b_{S,1})j_S(b_{S,2})\quad \forall\, b_{S,1},b_{S,2}\in\mathcal{A}_S,\quad \forall\, b_E\in\mathcal{A}_E. \end{aligned} \end{equation} \tag{101} $$
The relation (101) allows one to put the products (61)–(64) in normal form. In fact, in the case of $SE$-products, one has
$$ \begin{equation} \begin{aligned} \, &j_S(a_{S,1})j_E(a_{E,1})j_S(a_{S,2})j_E(a_{E,2}) \cdots j_S(a_{S,n})j_E(a_{E,n}) \notag \\ &\qquad\stackrel{(101)}{=} j_S(a_{S,1}a_{S,2})j_E(a_{E,2})j_S(a_{S,3})j_E(a_{E,3}) \cdots j_S(a_{S,n})j_E(a_{E,n})\varphi_E(a_{E,1}) \end{aligned} \end{equation} \tag{102} $$
which is still an $SE$-product. By induction, the left-hand side of (102) is reduced to
$$ \begin{equation*} \begin{aligned} \, &j_S(a_{S,1}a_{S,2}\cdots a_{S,n})j_E(a_{E,n}) \prod_{k=1}^{n-1}\varphi_E(a_{E,k}) \\ &\qquad\stackrel{(92)}{=} a_{S,1}a_{S,2}\cdots a_{S,n}\otimes P_{\Phi_E}a_{E,n} \prod_{k=1}^{n-1}\varphi_E(a_{E,k}). \end{aligned} \end{equation*} \notag $$
This proves (97). For $SS$-products, the same argument leads to
$$ \begin{equation*} \begin{aligned} \, &j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n-1})j_E(a_{E,n-1})j_S(a_{S,n}) \\ &\qquad=j_S(a_{S,1}a_{S,2}\cdots a_{S,n})\prod_{k=1}^{n-1}\varphi_E(a_{E,k}) \stackrel{(92)}{=}a_{S,1}a_{S,2}\cdots a_{S,n} \otimes P_{\Phi_E}\prod_{k=1}^{n-1}\varphi_E(a_{E,k}). \end{aligned} \end{equation*} \notag $$
This proves (98). Similarly, for $ES$-products,
$$ \begin{equation*} \begin{aligned} \, j_E(a_{E,1})j_S(a_{S,1})\cdots j_E(a_{E,n})j_S(a_{S,n}) &\stackrel{(98)}{=} j_E(a_{E,1})j_S(a_{S,1}\cdots a_{S,n}) \prod_{k=2}^n\varphi_E(a_{E,k}) \\ &\stackrel{(92)}{=} a_{S,1}\cdots a_{S,n}\otimes a_{E,1}P_{\Phi_E}\prod_{k=2}^n\varphi_E(a_{E,k}). \end{aligned} \end{equation*} \notag $$
This proves (99). Finally, for $EE$-products, one finds
$$ \begin{equation*} \begin{aligned} \, &j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n-1})j_S(a_{S,n-1})j_E(a_{E,n}) \\ &\qquad\stackrel{(98)}{=} j_E(a_{E,1})\, j_S(a_{S,1}\cdots a_{S,n-1})\, j_E(a_{E,n}) \prod_{k=2}^{n-1}\varphi_E(a_{E,k}) \\ &\qquad\stackrel{(92)}{=} a_{S,1}\cdots a_{S,n-1}\otimes a_{E,1}P_{\Phi_E}a_{E,n} \prod_{k=2}^{n-1}\varphi_E(a_{E,k}). \end{aligned} \end{equation*} \notag $$
This proves (100). $\Box$

Corollary 2. Taking $\varphi$-expectations of the $4$ normal forms obtained in Proposition 6, one obtains:

$$ \begin{equation} \varphi(j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n})j_E(a_{E,n})) \nonumber \end{equation} \notag $$
$$ \begin{equation} \qquad=\varphi(j_S(a_{S,1})\cdots j_S(a_{S,n})) \prod_{h=1}^n\varphi(j_E(a_{E,h})), \end{equation} \tag{103} $$
$$ \begin{equation} \varphi(j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n-1})j_E(a_{E,n-1})j_S(a_{S,n})) \nonumber \end{equation} \notag $$
$$ \begin{equation} \qquad=\varphi(j_S(a_{S,1})\cdots j_S(a_{S,n})) \prod_{h=1}^{n-1}\varphi(j_E(a_{E,h})), \end{equation} \tag{104} $$
$$ \begin{equation} \varphi(j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n})j_S(a_{S,n})) \nonumber \end{equation} \notag $$
$$ \begin{equation} \qquad=\varphi(j_S(a_{S,1})\cdots j_S(a_{S,n})) \prod_{h=1}^n\varphi(j_E(a_{E,h})), \end{equation} \tag{105} $$
$$ \begin{equation} \varphi(j_E(a_{E,1})j_S(a_{S,1}) \cdots j_E(a_{E,n-1}) j_S(a_{S,n-1})j_E(a_{E,n})) \nonumber \end{equation} \notag $$
$$ \begin{equation} \qquad=\varphi(j_S(a_{S,1})\dots j_S(a_{S,n-1})) \prod_{h=1}^n\varphi(j_E(a_{E,h})). \end{equation} \tag{106} $$

Proof. One gets these 4 formulae just by applying (80), (96) and the 4 formulae in Proposition 6. For example, one gets (103) as follows:
$$ \begin{equation*} \begin{aligned} \, &\varphi(j_S(a_{S,1})j_E(a_{E,1}) \cdots j_S(a_{S,n})j_E(a_{E,n})) \\ &\qquad\stackrel{(97)}{=} \varphi(a_{S,1}a_{S,2}\cdots a_{S,n}\otimes P_{\Phi_E} a_{E,n})\prod_{k=1}^{n-1}\varphi_E(a_{E,k}) \\ &\qquad\stackrel{(80)}{=} \varphi_S(a_{S,1}a_{S,2}\cdots a_{S,n})\varphi_E(a_{E,n}) \prod_{k=1}^{n-1}\varphi_E(a_{E,k}) \\ &\ \qquad=\varphi_S(a_{S,1}a_{S,2}\cdots a_{S,n}) \prod_{k=1}^n\varphi_E(a_{E,k}) \\ &\qquad\stackrel{(96)}{=} \varphi(j_S(a_{S,1})\cdots j_S(a_{S,n})) \prod_{k=1}^n\varphi(j_E(a_{E,k})). \end{aligned} \end{equation*} \notag $$
$\Box$

Remark 16. By taking

$\bullet$ $n=1$ in (103) and (105);

$\bullet$ $n=2$ in (106),

one gets, for any $a_S\in \mathcal{A}_S$ and $a_E,b_E\in \mathcal{A}_E$,

$$ \begin{equation*} \varphi(j_S(a_S)j_E(a_E))=\varphi(j_S(a_S)) \varphi(j_E(a_E))= \varphi(j_E(a_E)j_S(a_S)), \end{equation*} \notag $$
$$ \begin{equation} \varphi(j_E(a_E)j_S(a_S)j_E(b_E)) = \varphi(j_E(a_E))\varphi(j_S(a_S))\varphi(j_E(b_E)). \end{equation} \tag{107} $$
By combining these equalities with (101), one recognizes the $\varphi$-monotone independence of the algebras $j_S(\mathcal{A}_S)\subset\mathcal{A}$ and $j_E(\mathcal{A}_E)\subset\mathcal{A}$.
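As an editorial illustration (not part of the paper), the relation (101) and the factorization (103) can be checked numerically in the same finite-dimensional setting as before (matrix algebras, first basis vectors; all names and parameters are ad hoc choices).

```python
# Editorial illustration (not from the paper): numerical check of the relation
# (101) and of the factorization (103) for the monotone embeddings (92)-(94).
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
dS, dE, n = 2, 3, 3

eS, eE = np.eye(dS)[0], np.eye(dE)[0]          # Phi_S, Phi_E
PE = np.outer(eE, eE)                          # rank-one projector (70)
Phi = np.kron(eS, eE)                          # Phi_S (x) Phi_E

jS = lambda b: np.kron(b, PE)                  # Phi_E-embedding
jE = lambda b: np.kron(np.eye(dS), b)          # tensor embedding
phi  = lambda x: Phi @ x @ Phi                 # the state (79), real case
phiE = lambda b: eE @ b @ eE

aS = [rng.standard_normal((dS, dS)) for _ in range(n)]
aE = [rng.standard_normal((dE, dE)) for _ in range(n)]

# (101): jS(b1) jE(b) jS(b2) = phi(jE(b)) jS(b1 b2)
assert np.allclose(jS(aS[0]) @ jE(aE[0]) @ jS(aS[1]),
                   phiE(aE[0]) * jS(aS[0] @ aS[1]))

# SE-type alternate product and the factorization (103)
se = reduce(np.matmul, [f for k in range(n) for f in (jS(aS[k]), jE(aE[k]))])
rhs = phi(reduce(np.matmul, [jS(a) for a in aS])) * np.prod([phi(jE(a)) for a in aE])
assert np.allclose(phi(se), rhs)
```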

§ 6. Statistical meaning of some stochastic independences

The following statistics are well known:

(S1) the Bose–Einstein (B–E) statistics;

(S2) the Fermi–Dirac (F–D) statistics;

(S3) the Maxwell–Boltzmann (M–B) statistics.

They arise as solutions, under different constraints, of the same problem, namely:

how to distribute $n$ balls in $m$ distinguishable boxes $U_1,\dots,U_m$.

In all $3$ cases the possible solutions of the problem are characterized by ordered $m$-tuples of non-negative integers $(N_1,\dots,N_m)$ such that $\sum_{j=1}^{m} N_j= n$, where $N_j$ is the number of balls in the $j$th box. Any such ordered $m$-tuple is called a macroscopic configuration (or macro-configuration), and the probabilities without constraints are uniquely determined by the requirement that all macroscopic configurations be equi-probable. For $k=1,2,3$, denote

$$ \begin{equation*} N_k(n,m):= \text{number of configurations according to the statistics } (\mathrm{S}k). \end{equation*} \notag $$
The constraints in each case are the following.

(S1) Bose–Einstein statistics: the balls are indistinguishable and each box can contain any number of balls. In this case,

$$ \begin{equation*} N_1(n,m)=\binom{n+m-1}{n}. \end{equation*} \notag $$

(S2) Fermi–Dirac statistics: the balls are indistinguishable and each box can contain at most $1$ ball (so that necessarily $n\leqslant m$). In this case,

$$ \begin{equation*} N_2(n,m) =\binom{m}{n}. \end{equation*} \notag $$

(S3) Maxwell–Boltzmann statistics: the balls are distinguishable (e.g., labelled with the indices $1,2,\dots,n$) and each box can contain any number of balls. In this case,

$$ \begin{equation*} N_3(n,m) =m^n. \end{equation*} \notag $$
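As an editorial aside (not part of the paper), the three counts above can be confirmed by brute-force enumeration for small $n$, $m$; the following Python sketch (all names and parameters are arbitrary choices) enumerates the $m^n$ placements of distinguishable balls and derives the B–E and F–D counts from the resulting macro-configurations.

```python
# Editorial illustration: brute-force check of N_1, N_2, N_3 for small n, m.
from itertools import product
from math import comb

def counts(n, m):
    macro = set()                    # macro-configurations (N_1, ..., N_m)
    n3 = 0                           # placements of distinguishable balls
    for assignment in product(range(m), repeat=n):   # ball i -> box assignment[i]
        macro.add(tuple(assignment.count(j) for j in range(m)))
        n3 += 1
    n1 = len(macro)                                  # occupancy vectors: B-E
    n2 = sum(max(occ) <= 1 for occ in macro)         # at most 1 per box: F-D
    return n1, n2, n3

for n, m in [(2, 2), (2, 3), (3, 4)]:
    n1, n2, n3 = counts(n, m)
    assert n1 == comb(n + m - 1, n)   # N_1, Bose-Einstein
    assert n2 == comb(m, n)           # N_2, Fermi-Dirac
    assert n3 == m ** n               # N_3, Maxwell-Boltzmann
```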

Notice that, in all $3$ cases, the number $N_k(n,m)$ is the same if:

– the balls are placed in the $m$ boxes simultaneously;

– the balls are placed in the $m$ boxes in $n$ steps and at each step one ball is placed;

– the balls are placed in the $m$ boxes in $r\leqslant n$ steps and $n_j$ balls are placed at the $j$th step for all $j\in\{1,\dots,r\}$, where $n_1+\dots+n_r=n$.

In other words: the specific way to place the $n$ balls in the $m$ boxes is irrelevant. This property will be called process–independence.

In particular, whether the balls are distinguishable or not, denoting them by $b_1,\dots,b_{n}$, process–independence means that, for any permutation $\sigma\in\mathcal{S}_{n}$ of the indices $\{1,\dots, n\}$, the number $N_k(n,m)$ ($k=1,2,3$) is unchanged if the $n$ balls are placed in the order: first $b_{\sigma(1)}$, second $b_{\sigma(2)}$, $\dots$, last $b_{\sigma(n)}$.

In what follows, we will consider the following modifications of these $3$ statistics, where the notion of macro-configuration remains the same.

(S4) Boolean statistics: a mixture of the F–D and M–B statistics (FDMB). The balls are distinguishable and each box can contain at most $1$ ball. In this case $n\leqslant m$ and the number of configurations is:

$$ \begin{equation*} N_{4}(n,m) =n!\, \binom {m}{n}. \end{equation*} \notag $$

(S5) An order-dependent modification of the Maxwell–Boltzmann statistics (MMB). The balls are distinguishable and each box can contain any number of balls. One places the $n$ balls in $n$ steps, one ball at each step. For any box containing at least $2$ balls, one counts not only which balls it contains but also the order in which the balls entered the box. In this case the number of configurations is:

$$ \begin{equation*} N_{5}(n,m)= n!\, \binom{n+m-1}{n} =n!\, N_1(n,m). \end{equation*} \notag $$

In fact, for any $n,m\in \mathbb{N}^*$, set

$$ \begin{equation*} [0,1,\dots,n]^m:=\biggl\{(k_1,\dots,k_m)\in\{0,1,\dots,n\}^m\colon \sum_{j=1}^mk_j=n\biggr\}. \end{equation*} \notag $$

Table 1

$$ \begin{equation*} \begin{array}{c|c|c|c} \text{box} & \begin{array}{c}\text{number of balls}\\ \text{that enter}\end{array} & \begin{array}{c}\text{number of different ways}\\ \text{to choose the entering balls}\end{array} & \begin{array}{c}\text{number of all orders}\\ \text{of the balls taken}\end{array} \\ \hline U_1 & n_1 & \dbinom{n}{n_1} & n_1! \\ U_2 & n_2 & \dbinom{n-n_1}{n_2} & n_2! \\ \vdots & \vdots & \vdots & \vdots \\ U_{m-1} & n_{m-1} & \dbinom{n-n_1-\dots-n_{m-2}}{n_{m-1}} & n_{m-1}! \\ U_m & n_m & \dbinom{n-n_1-\dots-n_{m-1}}{n_m} & n_m! \end{array} \end{equation*} \notag $$

For any $(n_1,\dots,n_m)\in[0,1,\dots,n]^m$, the counting is summarized in Table 1, where the fact that $(n_1,\dots,n_m)\in[0,1,\dots,n]^m$ implies

$$ \begin{equation*} n_m=n-n_1-\dots-n_{m-1} \quad\text{and hence}\quad \binom{n-n_1-\dots-n_{m-1}}{n_m}=1. \end{equation*} \notag $$
Therefore, since
$$ \begin{equation*} |[0,1,\dots,n]^m|=\binom{n+m-1}{n} \end{equation*} \notag $$
one gets
$$ \begin{equation} \begin{aligned} \, N_{5}(n,m) &=\sum_{(n_1,\dots,n_m)\in[0,1,\dots,n]^m} \binom{n}{n_1}n_1!\, \binom{n-n_1}{n_2}n_2!\,\cdots\binom{n-n_1-\dots-n_{m-2}}{n_{m-1}} \notag \\ &\qquad\times n_{m-1}!\,(n-n_1-\dots-n_{m-1})! \notag \\ &=\sum_{(n_1,\dots,n_m)\in[0,1,\dots,n]^m}n!=n!\, \binom{n+m-1}{n}. \end{aligned} \end{equation} \tag{108} $$
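As an editorial aside (not part of the paper), the values $N_4(n,m)=n!\binom{m}{n}$ and $N_5(n,m)=n!\binom{n+m-1}{n}$ can be confirmed by brute force for small $n\leqslant m$; in the sketch below (ad hoc names and parameters) an MMB outcome is counted by weighting each box-assignment with the number of possible entrance orders inside the boxes.

```python
# Editorial illustration: brute-force check of N_4 (FDMB) and N_5 (MMB).
from itertools import product
from math import comb, factorial, prod

def N4(n, m):   # distinguishable balls, at most one ball per box
    return sum(len(set(a)) == n for a in product(range(m), repeat=n))

def N5(n, m):   # distinguishable balls; entrance order inside each box counts
    total = 0
    for a in product(range(m), repeat=n):         # which box each ball occupies
        occ = (a.count(j) for j in range(m))
        total += prod(factorial(c) for c in occ)  # orders inside the boxes
    return total

for n, m in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    assert N4(n, m) == factorial(n) * comb(m, n)           # (S4)
    assert N5(n, m) == factorial(n) * comb(n + m - 1, n)   # (S5), i.e., (108)
```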

Remark 17. If the $n$ balls are placed in the order: first $b_{\sigma(1)}$, second $b_{\sigma(2)}$, $\dots$, last $b_{\sigma(n)}$, the above numbers $N_{4}(n,m)$ and $N_{5}(n,m)$ are independent of $\sigma\in\mathcal{S}_{n}$, i.e., process–independence holds in these cases as well.

6.1. Monotone statistics, not process–independent

In this section, $n$ distinguishable balls $b_1,\dots,b_{n}$ (i.e., labelled with the indices $1,2,\dots,n$) are placed in $m$ distinguishable boxes $U_1,\dots,U_m$ ($n\leqslant m$) in $n$ steps, and at each step ONE ball is placed randomly in one box. We introduce a new statistics, the monotone statistics ((S6) for short), in which the analogue of $N_k(n,m)$ ($k\in \{1,2,3,4,5\}$) depends on the order in which the balls are placed, so that process–independence does not hold.

6.1.1. Monotone statistics

One places the balls $b_1,\dots,b_{n}$ according to the following rules:

$\bullet$ P1: the balls are placed in the order: first $b_{\sigma(1)}$, second $b_{\sigma(2)}$, $\dots$, with $\sigma\in\mathcal{S}_{n}$;

$\bullet$ P2 (monotone (or ordered exclusion) principle): for any box $U_i$ and $j\in\{1,\dots,n-1\}$, if at the $j$th placement the ball $b_k$ enters the box $U_i$, then in all subsequent placements only the balls with index greater than $k$ (i.e., $b_h$ with $h>k$) can enter the box $U_i$.

In other words, property P2 means that, for any $\sigma\in\mathcal{S}_{n}$, if the balls are placed in the order: 1st $b_{\sigma(1)}$, 2nd $b_{\sigma(2)}$, $\dots$, last $b_{\sigma(n)}$, then for any $j\in\{1,\dots,n-1\}$ and $i\in\{ 1,\dots,m\}$,

$$ \begin{equation*} \begin{aligned} \, &b_{\sigma(j)}\text{ entered the box }U_i \\ &\quad \Longrightarrow\quad \forall\, h>j,\ b_{\sigma(h)}\text{ can enter the box } U_i \text{ only if }\sigma(h) >\sigma(j). \end{aligned} \end{equation*} \notag $$
In what follows, when the balls are placed in the order described in P1, we will say that they are placed in $\sigma$-order. It is clear that the number of different ways to distribute the $n$ balls in the $m$ distinguishable boxes now depends strongly on the order in which the $n$ balls are placed: if at the $j$th step the ball $b_{\sigma(j)}$ is placed, with $\sigma\in\mathcal{S}_{n}$, then the analogue of the numbers $N_k(n,m)$ ($k=1,2,3,4,5$) depends on $\sigma$ (i.e., on the order of the placements), and we denote it by $N_{6}^{\sigma}(n,m)$.

For any $h\in\{ 1,\dots,n\}$, define $\chi_{(h}\colon \{1,\dots,n\} \to\{0,1\}$ by

$$ \begin{equation*} \chi_{(h}(k) := \begin{cases} 1 &\text{if }k>h, \\ 0 &\text{if } k\leqslant h. \end{cases} \end{equation*} \notag $$
One could imagine $\chi_{(h}$ as $\chi_{(h,+\infty)}$. For any $n\geqslant 2$ and $\sigma\in \mathcal{S}_n$, for any fixed $1\leqslant k_1,\dots,k_n\leqslant m$, one defines
$$ \begin{equation} \begin{aligned} \, C_\sigma(k_1,\dots,k_n) &:=|\{\text{different ways to place the ball }b_{\sigma(i)}\text{ in the box }U_{k_i} \notag \\ &\qquad\text{ for all }i\in\{1,\dots,n\}\}| \end{aligned} \end{equation} \tag{109} $$
and
$$ \begin{equation} \begin{aligned} \, C'_\sigma(k_1,\dots,k_n) &:= |\{\text{different ways to place the ball }b_{\sigma(n)} \text{ in the box }U_{k_n},\ \text{knowing} \notag \\ &\qquad\text{that the ball } b_{\sigma(i)}\text{ is in the box }U_{k_i}\text{ for all }i\in\{1,\dots,n-1\}\}|. \end{aligned} \end{equation} \tag{110} $$
Since $1\leqslant k_1,\dots,k_n\leqslant m$ are already fixed, both $C_\sigma(k_1,\dots,k_n)$ and $C'_\sigma(k_1,\dots,k_n)$ take the value $1$ or $0$. Moreover,

$\bullet$ $C_\sigma(k_1,\dots,k_n)=0$ means that it is impossible to place, simultaneously for all $i\in\{1,\dots,n\}$, the ball $b_{\sigma(i)}$ in the box $U_{k_i}$; e.g., in case $n=2$ and $\sigma(2)<\sigma(1)$, the principle P2 gives $C_\sigma(k_1,k_2)\big|_{k_1=k_2}=0$; equivalently, $C_\sigma(k_1,\dots,k_n)=1$ means that one can place, for all $i\in\{1,\dots,n\}$, the ball $b_{\sigma(i)}$ in the box $U_{k_i}$; e.g., in case $n=2$, $C_\sigma(k_1,k_2)=1$ for any $\sigma$ whenever $k_1\ne k_2$;

$\bullet$ $C'_\sigma(k_1,\dots,k_n)=0$ means that the ball $b_{\sigma(i)}$ has entered the box $U_{k_i}$ for every $i\in\{1,\dots,n-1\}$, but the ball $b_{\sigma(n)}$ is not permitted to enter the box $U_{k_n}$, due to the order-exclusion property described by P2.

Theorem 5. For any $n\geqslant 2$, $m\in\mathbb{N}^*$ and $\sigma\in\mathcal{S}_n$, one has

$$ \begin{equation} C'_\sigma(k_1,\dots,k_n) =\prod_{1\leqslant i< n}\bigl(1- \chi_{(\sigma(n)}(\sigma(i))\delta_{k_i,k_n} \bigr), \end{equation} \tag{111} $$
$$ \begin{equation} C_\sigma(k_1,\dots,k_n) =\prod_{1\leqslant i<j\leqslant n}\bigl(1- \chi_{(\sigma(j)}(\sigma(i))\delta_{k_i,k_j} \bigr). \end{equation} \tag{112} $$
Consequently,
$$ \begin{equation} N_{6}^{\sigma}(n,m) =\sum_{1\leqslant k_1,\dots,k_{n}\leqslant m}\,\prod_{1\leqslant i<j\leqslant n} \bigl(1-\chi_{(\sigma(j)}(\sigma(i))\delta_{k_i,k_j} \bigr). \end{equation} \tag{113} $$

Remark 18. Clearly, for any $i<j$, $n\geqslant2$, $\sigma\in\mathcal{S}_n$ (so $\sigma(i)\ne \sigma(j)$), and $1\leqslant k_1,\dots,k_n\leqslant m$,

$$ \begin{equation*} 1- \chi_{(\sigma(j)}(\sigma(i))\delta_{k_i,k_j} =\chi_{(\sigma(i)}(\sigma(j)) +\chi_{(\sigma(j)}(\sigma(i))(1-\delta_{k_i,k_j}) \end{equation*} \notag $$
since $1=\chi_{\{1,\dots,n\}}$ as a function defined on $\{1,\dots, n\}$ and since $\chi_{\{\sigma(i)\}}(\sigma(j))=0$.

Before proving Theorem 5, we check formula (113) in some particular cases.

First, we look at the case $n=2$. In this case, the product

$$ \begin{equation} \prod_{1\leqslant i<j\leqslant n}\bigl(1-\chi_{(\sigma(j)}(\sigma(i)) \delta_{k_i,k_j} \bigr) \end{equation} \tag{114} $$
is $1-\chi_{(\sigma(2)}(\sigma(1))\delta_{k_1,k_2}$ and so the right-hand side of (113) is $\sum_{1\leqslant k_1,k_2\leqslant m}(1- \chi_{(\sigma(2)}(\sigma(1)) \delta_{k_1,k_2})$.

$\bullet$ If $(1,2) \xrightarrow{\sigma}(1,2)$, one has $\chi_{(\sigma(2)}(\sigma(1))=0$, so the right-hand side of (113) is $\sum_{1\leqslant k_1,k_2\leqslant m}1=m^2$.

$\bullet$ If $(1,2) \xrightarrow{\sigma}(2,1)$, one has $\chi_{(\sigma(2)}(\sigma(1))=1$ and so the right-hand side of (113) is $\sum_{ 1\leqslant k_1,k_2\leqslant m} (1-\delta_{k_1,k_2})=m(m-1)$.

Notice that a direct calculation, without using (113), gives the same results.

Second, let us look at the case $n=3$ for some particular $\sigma$.

Case 1. $(1,2,3) \xrightarrow{\sigma} (2,3,1) =(\sigma(1),\sigma(2),\sigma(3))$.

$\bullet$ Using (113): one has $\chi_{(\sigma(3)}(\sigma(2))=\chi_{(\sigma(3)}(\sigma(1))=1$ and $\chi_{(\sigma(2)}(\sigma(1))=0$ and so the product (114) is $(1-\delta_{k_1,k_3})(1-\delta_{k_2,k_3})$. Therefore, the right-hand side of (113) is $\sum_{1\leqslant k_1, k_2,k_3\leqslant m} (1-\delta_{k_1,k_3}) (1-\delta_{k_2,k_3}) =m^{3}-2m^2+m=m(m-1)^2$.

$\bullet$ Without using (113): the order to place the three balls is: $b_2$ then $b_3$ then $b_1$, so the left-hand side of (113), i.e., $N_{6}^{\sigma}(3,m)$, is equal to

$$ \begin{equation*} m(m-1)(m-2) +m\cdot 1\cdot(m-1)=m(m-1)^2. \end{equation*} \notag $$
In fact, first $b_2$ can enter any box; second $b_3$ can enter any box; third $b_1$ can enter any box, with the following exceptions:

– the $2$ boxes which contain $b_2$, $b_3$ if $b_2$, $b_3$ enter different boxes;

– the box which contains both $b_2$ and $b_3$ if $b_2$, $b_3$ enter the same box.

Case 2. The case $(1,2,3) \xrightarrow{\sigma} (3,1,2) =(\sigma(1),\sigma(2),\sigma(3))$.

$\bullet$ Using (113): one has $\chi_{(\sigma(3)}(\sigma(1))=\chi_{( \sigma(2)}(\sigma(1)) =1$ and $\chi_{(\sigma(3)}(\sigma(2))=0$, so the product (114) is $(1-\delta_{k_1,k_3})(1-\delta _{k_1,k_2})$. Therefore, the right-hand side of (113) is $\sum_{1\leqslant k_1,k_2,k_3\leqslant m} (1-\delta_{k_1,k_3}) (1-\delta_{k_1,k_2}) =m^{3}-2m^2+m=m(m-1)^2$.

$\bullet$ Without using (113): the order of placing the three balls is: $b_3$, then $b_1$, then $b_2$, so the monotone principle gives $N_{6}^{\sigma}(3,m) =m(m-1)^2$. In fact, $b_3$ can enter any box; then $b_1$ can enter any box except the one containing $b_3$; then $b_2$ can enter any box except the one containing $b_3$.
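Such checks can also be done programmatically. The following editorial sketch (not part of the proof; all names and parameters are ad hoc) compares the right-hand side of (113) with a direct recursive simulation of the placements allowed by P2, for all $\sigma\in\mathcal{S}_3$ and small $m$, and reproduces the value $m(m-1)^2$ of Case 1.

```python
# Editorial illustration: formula (113) versus direct simulation of rule P2.
from itertools import product, permutations

def N6_formula(sigma, n, m):
    # right-hand side of (113); sigma is a 0-based permutation of range(n)
    total = 0
    for k in product(range(m), repeat=n):
        total += all(not (sigma[i] > sigma[j] and k[i] == k[j])
                     for i in range(n) for j in range(i + 1, n))
    return total

def N6_direct(sigma, n, m):
    # place b_{sigma(1)}, ..., b_{sigma(n)} one at a time; by P2 a ball may
    # enter a box only if every ball already inside has a smaller label
    def rec(step, boxes):
        if step == n:
            return 1
        l = sigma[step]
        return sum(rec(step + 1, boxes[:j] + (boxes[j] + (l,),) + boxes[j + 1:])
                   for j in range(m) if all(x < l for x in boxes[j]))
    return rec(0, ((),) * m)

for n, m in [(2, 3), (3, 3), (3, 4)]:
    for sigma in permutations(range(n)):
        assert N6_formula(sigma, n, m) == N6_direct(sigma, n, m)

assert N6_formula((1, 2, 0), 3, 4) == 4 * 3 * 3    # Case 1 above with m = 4
```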

Proof of Theorem 5. Once (112) is proved, (113) follows easily:
$$ \begin{equation*} \begin{aligned} \, &N_{6}^{\sigma}(n,m) :=|\{\text{different ways of distributing } b_1,\dots,b_n \text{ in } U_1,\dots,U_m\text{ in }\sigma\text{-order}\}| \\ &=\sum_{1\leqslant k_1,\dots,k_n\leqslant m}\bigl|\bigl\{\text{different ways to place the ball } b_{\sigma(i)}\text{ in the box }U_{k_i} \\ &\qquad\qquad\qquad\qquad\text{for all }i\in\{1,\dots,n\} \bigr\}\bigr| \\ &=\sum_{1\leqslant k_1,\dots,k_n\leqslant m}C_\sigma(k_1,\dots,k_n)\stackrel{(112)}{=} \sum_{1\leqslant k_1,\dots,k_n\leqslant m}\prod_{1\leqslant i<j\leqslant n} \bigl(1-\chi_{(\sigma(j)} (\sigma(i))\delta_{k_i,k_j} \bigr). \end{aligned} \end{equation*} \notag $$
Therefore, our task is to prove (111) and (112). Moreover, since both sides of the equality in (111), as well as both sides of the equality in (112), take values in $\{0,1\}$, it suffices to prove
$$ \begin{equation} C'_\sigma(k_1,\dots,k_n)=0\quad\Longleftrightarrow\quad \prod_{1\leqslant i< n} \bigl(1-\chi_{(\sigma(n)} (\sigma(i))\delta_{k_i,k_{n}} \bigr)=0, \end{equation} \tag{115} $$
and, respectively,
$$ \begin{equation} C_\sigma(k_1,\dots,k_n)=1\quad\Longleftrightarrow\quad \prod_{1\leqslant i<j\leqslant n} \bigl(1-\chi_{(\sigma(j)} (\sigma(i))\delta_{k_i,k_j} \bigr)=1. \end{equation} \tag{116} $$

First of all, the definition of $C'_\sigma$ (i.e., (110)) and property P2 imply that

$$ \begin{equation*} \begin{aligned} \, &C'_\sigma(k_1,\dots,k_n)=0 \\ \stackrel{(110)}{\Longleftrightarrow}\quad &\text{the ball }b_{\sigma(i)}\text{ has entered }U_{k_i}\text{ for any } i\in\{1,\dots,n-1\}, \\ \qquad\ \ \ \ &\text{but the ball }b_{\sigma(n)}\text{ is forbidden to enter the box }U_{k_n} \\ \stackrel{\mathrm{P2}}{\Longleftrightarrow}\quad &\text{the ball }b_{\sigma(i)}\text{ has entered } U_{k_i}\text{ for any }i\in\{1,\dots,n-1\}, \\ \qquad\ \ \ \ &k_n\text{ is a certain }k_i\text{ with }i\in\{1,\dots,n-1\}\text{ and }\sigma(n)<\sigma(i) \\ \Longleftrightarrow\quad &\delta_{k_i,k_{n}} =1\text{ and }\chi_{(\sigma(n)} (\sigma(i))=1\text{ for a certain }i\in\{1,\dots,n-1\} \\ \Longleftrightarrow\quad &\prod_{1\leqslant i< n} \bigl(1-\chi_{(\sigma(n)} (\sigma(i))\delta_{k_i,k_{n}} \bigr)=0. \end{aligned} \end{equation*} \notag $$
Now we prove (116) by induction. For $n=2$, (116) becomes
$$ \begin{equation} \begin{aligned} \, C_\sigma(k_1,k_2)=1 \quad&\Longleftrightarrow\quad 1-\chi_{(\sigma(2)} (\sigma(1))\delta_{k_1,k_2} =1 \notag \\ &\Longleftrightarrow\quad \chi_{(\sigma(2)}(\sigma(1))\delta_{k_1,k_2}=0. \end{aligned} \end{equation} \tag{117} $$

$\bullet$ In case $k_1\ne k_2$, the last equality of (117) holds trivially. Similarly, the set

$$ \begin{equation} \{\text{different ways of distributing } b_{\sigma(1)} \text{ in } U_{k_1}\text{ and }b_{\sigma(2)} \text{ in } U_{k_2}\} \end{equation} \tag{118} $$
is clearly non-empty and so $C_\sigma(k_1,k_2)=1$.

$\bullet$ In case $k_1=k_2$ (equivalently, $\delta_{k_1,k_2}=1$), the last equality is equivalent to $\sigma(2)>\sigma(1)$. On the other hand, $C_\sigma(k_1,k_2)\big|_{k_1=k_2}=1$ means the set

$$ \begin{equation} \{\text{different ways of distributing both }b_{\sigma(1)}\text{ and }b_{\sigma(2)}\text{ in }U_{k_1}\} \end{equation} \tag{119} $$
is non-empty. Because of P2, this is equivalent to $\sigma(2)>\sigma(1)$.

Suppose that (116) is proved for $n\leqslant N$ and consider the case $n= N+1$. In this case, for any $\sigma\in \mathcal{S}_{N+1}$, the $N+1$ placements can be thought of as carried out in two steps:

$\bullet$ the balls $b_{\sigma(1)},\dots, b_{\sigma(N)}$ are placed in $\sigma$-order;

$\bullet$ the ball $b_{\sigma(N+1)}$ is placed.

So for any $1\leqslant k_1,\dots,k_{N+1}\leqslant m$, the multiplication rule (rule of product) gives

$$ \begin{equation} \begin{aligned} \, &\bigl|\bigl\{\text{different ways of distributing }b_{\sigma(i)}\text{ in }U_{k_i}\text{ for any } i\in\{1,\dots,N+1\} \bigr\}\bigr| \notag \\ &\qquad=\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(i)} \text{ in } U_{k_i}\text{ for any }i\in\{1,\dots,N\} \bigr\}\bigr| \notag \\ &\qquad\qquad\times\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(N+1)} \text{ in } U_{k_{N+1}}, \notag \\ &\qquad\qquad\qquad\text{knowing that }b_{\sigma(i)}\text{ is in }U_{k_i}\text{ for any } i\in\{1,\dots,N\} \bigr\}\bigr| \notag \\ &\qquad=\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(i)} \text{ in } U_{k_i}\text{ for any }i\in\{1,\dots,N\} \bigr\}\bigr| \notag \\ &\qquad\qquad\times C'_\sigma(k_1,\dots,k_{N+1}). \end{aligned} \end{equation} \tag{120} $$
Thanks to formula (111) (equivalently, formula (115)), one has
$$ \begin{equation*} C'_\sigma(k_1,\dots,k_{N+1}) =\prod_{1\leqslant i< N+1} \bigl(1- \chi_{(\sigma(N+1)}(\sigma(i))\delta_{k_i,k_{N+1}} \bigr). \end{equation*} \notag $$
Therefore, the proof will be completed if we can prove
$$ \begin{equation} \begin{aligned} \, &\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(i)} \text{ in } U_{k_i}\text{ for any }i\in\{1,\dots,N\} \bigr\}\bigr| \notag \\ &\qquad=\prod_{1\leqslant i<j\leqslant N} \bigl(1-\chi_{(\sigma(j)}(\sigma(i)) \delta_{k_i,k_j} \bigr). \end{aligned} \end{equation} \tag{121} $$
In order to get (121) (in fact, we are going to prove that both sides of (121) are equal to $1$ simultaneously), one defines $\tau\colon \{1,\dots,N\}\to\{1,\dots,N\}$ as follows:
$$ \begin{equation*} \tau (k):=\begin{cases} \sigma(k) &\text{if }\sigma(k)<\sigma(N+1), \\ \sigma(k)-1 &\text{if }\sigma(k)>\sigma(N+1). \end{cases} \end{equation*} \notag $$
Then

$\bullet$ $\tau\in \mathcal{S}_N$;

$\bullet$ for any $i,j\in\{1,\dots,N\}$,

$$ \begin{equation} \tau(i)<\tau(j)\quad\Longleftrightarrow\quad \sigma(i)<\sigma(j); \end{equation} \tag{122} $$
$\bullet$ by renaming the balls
$$ \begin{equation*} \begin{gathered} \, B_{\tau(k)}:=\begin{cases} b_{\sigma(k)} &\text{if }\sigma(k)<\sigma(N+1), \\ b_{\sigma(k)-1} &\text{if }\sigma(k)>\sigma(N+1), \end{cases} \\ \begin{split} &\text{the balls }b_{\sigma(1)},\dots, b_{\sigma(N)}\text{ are placed in }\sigma\text{-order} \\ &\quad\Longleftrightarrow\quad\text{the balls }B_{\tau(1)},\dots, B_{\tau(N)}\text{ are placed in }\tau\text{-order}. \end{split} \end{gathered} \end{equation*} \notag $$
Therefore, applying the induction hypothesis to $\tau\in\mathcal{S}_N$, for any $1\leqslant k_1,\dots,k_N\leqslant m$,
$$ \begin{equation} \begin{aligned} \, &\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(i)} \text{ in } U_{k_i}\text{ for any }i\in\{1,\dots,N\} \bigr\}\bigr|=1 \notag \\ &\quad\Longleftrightarrow\quad\bigl|\bigl\{\text{different ways of distributing } B_{\tau(i)} \text{ in } U_{k_i}\text{ for any }i\in\{1,\dots,N\} \bigr\}\bigr|=1 \notag \\ &\quad\Longleftrightarrow\quad\prod_{1\leqslant i<j\leqslant N} \bigl(1-\chi_{(\tau(j)}(\tau(i)) \delta_{k_i,k_j}\bigr)=1 \notag \\ &\quad\stackrel{(122)}{\Longleftrightarrow}\quad\prod_{1\leqslant i<j\leqslant N} \bigl( 1-\chi_{(\sigma(j)}(\sigma(i)) \delta_{k_i,k_j} \bigr)=1. \end{aligned} \end{equation} \tag{123} $$
$\Box$

6.1.2. A modification of monotone statistics

Naturally, one can place the balls $b_1,\dots,b_{n}$ according to the rule P1 and the following modification of P2:

$\widetilde{\mathrm{P2}}$: for any box $U_i$ and $j\in\{1,\dots,n-1\}$, if at the $j$th placement the ball $b_k$ enters the box $U_i$, then in all subsequent placements only the balls with index smaller than $k$ (i.e., $b_h$ with $h<k$) are permitted to enter the box $U_i$.

In other words, $\widetilde{\mathrm{P2}}$ means that, for any $\sigma\in\mathcal{S}_{n}$, if the balls are placed in the order: first $b_{\sigma(1)}$, second $b_{\sigma(2)}$, $\dots$, last $b_{\sigma(n)}$, then for any $j\in\{1,\dots,n-1 \}$ and $i\in\{ 1,\dots,m\}$,

$$ \begin{equation*} \begin{aligned} \, b_{\sigma(j)}\text{ entered the box } U_i\quad &\Longrightarrow\quad \forall\, h>j,\ \ b_{\sigma(h)}\text{ can enter the box }U_i \\ &\qquad\qquad\text{only if }\sigma(h) <\sigma(j). \end{aligned} \end{equation*} \notag $$

Since relabelling the balls by $b_k\mapsto b_{n+1-k}$ interchanges the rules P2 and $\widetilde{\mathrm{P2}}$, the modified statistics reduces to the monotone one with $\sigma$ replaced by $\rho\circ\sigma$, where $\rho(k):=n+1-k$. One gets the following:

$$ \begin{equation*} \begin{aligned} \, &\bigl|\bigl\{\text{different ways of distributing } b_{\sigma(1)},\dots,b_{\sigma(n)} \text{ placed in }\sigma\text{-order under }\widetilde{\mathrm{P2}} \bigr\}\bigr| \\ &\qquad=N_{6}^{\tau}(n,m)\big|_{\tau= \rho\circ\sigma}=\sum_{1\leqslant k_1,\dots,k_{n}\leqslant m} \prod_{1\leqslant i<j\leqslant n}\bigl(1-\chi_{(\rho(\sigma(j))}(\rho(\sigma(i)))\delta_{k_i,k_j} \bigr) \\ &\qquad=\sum_{1\leqslant k_1,\dots,k_{n}\leqslant m}\prod_{1\leqslant i<j\leqslant n}\bigl(1- \chi_{(\sigma(i)}(\sigma(j))\delta_{k_i,k_j}\bigr), \end{aligned} \end{equation*} \notag $$

which is formula (113) with the roles of $\sigma(i)$ and $\sigma(j)$ interchanged; here the last equality holds because $\rho(\sigma(i))>\rho(\sigma(j))$ is equivalent to $\sigma(i)<\sigma(j)$.
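The relabelling argument above can be tested in the same way; the following editorial sketch (not part of the paper; ad hoc names and small parameters) checks that the count under $\widetilde{\mathrm{P2}}$ in $\sigma$-order coincides with $N_6^{\rho\circ\sigma}(n,m)$.

```python
# Editorial illustration: the P2~ count equals N_6 with sigma replaced by
# rho o sigma, where rho reverses the order of the labels.
from itertools import product, permutations

def count(sigma, n, m, smaller):
    # admissible placements in sigma-order; smaller=False simulates P2
    # (only bigger labels may follow), smaller=True simulates P2~
    total = 0
    for k in product(range(m), repeat=n):
        bad = any(k[i] == k[j] and ((sigma[i] > sigma[j]) != smaller)
                  for i in range(n) for j in range(i + 1, n))
        total += not bad
    return total

n, m = 3, 4
rho = lambda l: n - 1 - l          # 0-based version of k -> n + 1 - k
for sigma in permutations(range(n)):
    assert count(sigma, n, m, True) == count(tuple(rho(s) for s in sigma),
                                             n, m, False)
```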

6.2. Maximum entropy

For any $j\in\{1,\dots,m\}$, at the $j$th level (energy level), one considers the number of ways to distribute $N_j$ balls in $G_j\geqslant N_j$ distinguishable boxes.

6.2.1. F–D and B–E statistics

We will extend the classical argument used for the F–D and B–E statistics to the new statistics; hence we briefly recall this argument. For any $j\in\{ 1,\dots,m\}$, the number of ways to distribute $N_j$ indistinguishable balls in $G_j\geqslant N_j$ distinguishable boxes is

$\bullet$ $\binom{G_j}{N_j}$, in F–D case;

$\bullet$ $\binom{G_j+N_j-1}{N_j}$, in B–E case.

So the total numbers of configurations, $W_{\mathrm{F}{-}\mathrm{D}}$ (in the F–D case) and $W_{\mathrm{B}{-}\mathrm{E}}$ (in the B–E case), are

$$ \begin{equation} W_{\mathrm{F}{-}\mathrm{D}}=\prod_j\binom{G_j}{N_j},\qquad W_{\mathrm{B}{-}\mathrm{E}}=\prod_j\binom{G_j+N_j-1}{N_j}. \end{equation} \tag{124} $$
Therefore, their logarithms are
$$ \begin{equation} \begin{aligned} \, S_{\mathrm{F}{-}\mathrm{D}} &:=\sum_j\log\binom{G_j}{N_j} \notag \\ &\simeq\sum_j\bigl( G_j(\log G_j-1) -N_j(\log N_j-1) -(G_j-N_j)(\log(G_j-N_j) -1) \bigr) \nonumber \\ &=\sum_j\bigl( G_j\log G_j-N_j\log N_j-(G_j-N_j)\log(G_j-N_j) \bigr) \end{aligned} \end{equation} \tag{125} $$
and
$$ \begin{equation} \begin{aligned} \, S_{\mathrm{B}{-}\mathrm{E}} &:=\sum_j\log\binom{G_j+N_j-1}{N_j} \notag \\ &\,\simeq\sum_j\bigl((G_j+N_j-1) \log(G_j+N_j-1)-N_j\log N_j-(G_j-1) \log(G_j-1) \bigr), \end{aligned} \end{equation} \tag{126} $$
where the asymptotics follow from Stirling's formula and the assumptions $N_j\gg1$, $G_j-N_j\gg1$ (so that $G_j\gg1$).

The argument proceeds by looking for the maximum of the various $W$ (equivalently, of their logarithms) under the following two additional conditions, for given positive $N$ and $E$, and given $\varepsilon_j$'s:

$$ \begin{equation} \sum_jN_j=N,\qquad \sum_jN_j\varepsilon_j=E. \end{equation} \tag{127} $$
Let
$$ \begin{equation} \begin{aligned} \, S_{f}(x_1,\dots,x_m) &:=\sum_j\bigl(G_j\log G_j-x_j\log x_j-(G_j-x_j) \log(G_j-x_j) \bigr), \notag \\ S_{e}(x_1,\dots,x_m) &:=\sum_j\bigl( (G_j+x_j-1)\log(G_j+x_j-1)-x_j\log x_j \notag \\ &\qquad\qquad-(G_j-1)\log(G_j-1)\bigr), \end{aligned} \end{equation} \tag{128} $$
and
$$ \begin{equation} s_1(x_1,\dots,x_m) :=N-\sum_jx_j,\qquad s_2(x_1,\dots,x_m) :=E-\sum_j\varepsilon_jx_j. \end{equation} \tag{129} $$
Then for any $k\in\{ 1,\dots,m\}$,
$$ \begin{equation} \begin{aligned} \, \frac{\partial S_{f}}{\partial x_k} &=-\log x_k-1 +\log(G_k-x_k) +1 =\log\frac{G_k-x_k}{x_k}, \notag \\ \frac{\partial S_{e}}{\partial x_k} &=\log(G_k+x_k-1)+1-\log x_k-1=\log\frac{G_k+x_k-1}{x_k}, \notag \\ \frac{\partial s_1}{\partial x_k} &=-1,\qquad \frac{\partial s_2}{\partial x_k} =-\varepsilon_k. \end{aligned} \end{equation} \tag{130} $$

So, by the method of Lagrange multipliers, one gets the critical point as follows:

$\bullet$ in the F–D case

$$ \begin{equation} \begin{aligned} \, \frac{\partial S_{f}}{\partial x_k}+\alpha\frac{\partial s_1}{\partial x_k}+\beta\frac{\partial s_2}{\partial x_k}=0\quad &\Longleftrightarrow\quad \log\frac{G_k-x_k}{x_k}=\alpha+\beta\varepsilon_k \notag \\ &\Longleftrightarrow\quad x_k=\frac{G_k}{ e^{\alpha+\beta\varepsilon_k}+1}; \end{aligned} \end{equation} \tag{131} $$

$\bullet$ in the B–E case

$$ \begin{equation} \begin{aligned} \, \frac{\partial S_{e}}{\partial x_k}+\alpha\frac{\partial s_1}{\partial x_k} +\beta\frac{\partial s_2}{\partial x_k} =0\quad &\Longleftrightarrow\quad \log\frac{G_k+x_k-1}{x_k}=\alpha+\beta\varepsilon_k \notag \\ &\Longleftrightarrow\quad x_k=\frac{G_k-1}{e^{\alpha+ \beta\varepsilon_k}-1} \simeq\frac{G_k}{e^{\alpha +\beta\varepsilon_k}-1}. \end{aligned} \end{equation} \tag{132} $$
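As an editorial illustration of the Lagrange-multiplier computation, the critical points (131) and (132) can be verified by finite differences; in the sketch below the degeneracies $G_j$, the energies $\varepsilon_j$ and the multipliers $\alpha$, $\beta$ are arbitrary sample values. Since $\partial s_1/\partial x_k=-1$ and $\partial s_2/\partial x_k=-\varepsilon_k$ by (130), stationarity reads $\partial S/\partial x_k=\alpha+\beta\varepsilon_k$.

```python
# Editorial illustration: finite-difference check of (131) and (132).
import math

G, eps = [40.0, 60.0, 80.0], [1.0, 2.0, 3.0]   # arbitrary sample values
alpha, beta, h = 0.5, 0.7, 1e-6

def S_f(x):   # F-D entropy, first line of (128)
    return sum(g * math.log(g) - xi * math.log(xi) - (g - xi) * math.log(g - xi)
               for g, xi in zip(G, x))

def S_e(x):   # B-E entropy, second line of (128)
    return sum((g + xi - 1) * math.log(g + xi - 1) - xi * math.log(xi)
               - (g - 1) * math.log(g - 1) for g, xi in zip(G, x))

def grad(S, x, k):                  # central finite difference in x_k
    xp, xm = x[:], x[:]
    xp[k] += h
    xm[k] -= h
    return (S(xp) - S(xm)) / (2 * h)

xf = [g / (math.exp(alpha + beta * e) + 1.0) for g, e in zip(G, eps)]          # (131)
xe = [(g - 1.0) / (math.exp(alpha + beta * e) - 1.0) for g, e in zip(G, eps)]  # (132)
for k in range(3):
    assert abs(grad(S_f, xf, k) - (alpha + beta * eps[k])) < 1e-5
    assert abs(grad(S_e, xe, k) - (alpha + beta * eps[k])) < 1e-5
```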

6.2.2. M–B, Boolean and modified M–B (MMB) statistics

In this section we apply the method described in the preceding section to the M–B, Boolean and MMB statistics.

For any $j\in\{ 1,\dots,m\}$, the number of ways to distribute $N_j$ distinguishable balls in $G_j\geqslant N_j$ distinguishable boxes is

$\bullet$ $G_j^{N_j}$, in M–B case;

$\bullet$ $N_j!\,\binom{G_j}{N_j}$, in Boolean case;

$\bullet$ $N_j!\,\binom{G_j+N_j-1}{N_j}$, in MMB case.

So the total numbers of configurations $W_{\mathrm{M}{-}\mathrm{B}}$ (in the M–B case), $W_{\mathrm{Bo}}$ (in the Boolean case) and $W_{\mathrm{MMB}}$ (in the MMB case) are

$$ \begin{equation*} W_{\mathrm{M}{-}\mathrm{B}} =\binom{N}{N_1,\dots,N_m}\prod_j G_j^{N_j}=N!\, \prod_j \frac{G_j^{N_j}}{N_j!}, \notag \end{equation*} \notag $$
$$ \begin{equation*} W_{\mathrm{Bo}} =\binom{N}{N_1,\dots,N_m}\prod_j N_j!\, \binom{G_j}{N_j}=N!\prod_j \binom{G_j}{N_j}=N!\,W_{\mathrm{F}{-}\mathrm{D}}, \end{equation*} \notag $$
$$ \begin{equation*} W_{\mathrm{MMB}} =\binom{N}{N_1,\dots,N_m}\prod_j N_j!\, \binom{G_j+N_j-1}{N_j}=N!\, \prod_j \binom{G_j+N_j-1}{N_j} \end{equation*} \notag $$
$$ \begin{equation} =N!\, W_{\mathrm{B}{-}\mathrm{E}}. \end{equation} \tag{133} $$
Correspondingly,
$$ \begin{equation*} \begin{aligned} \, S_{\mathrm{M}{-}\mathrm{B}} &=\log N!+\sum_j(N_j\log G_j-\log N_j!) \\ &\simeq\log N!+\sum_j(N_j\log G_j-N_j\log N_j+N_j), \\ S_{\mathrm{Bo}} &=\log N!+S_{\mathrm{F}{-}\mathrm{D}},\qquad S_{\mathrm{MMB}} =\log N!+S_{\mathrm{B}{-}\mathrm{E}}. \end{aligned} \end{equation*} \notag $$

Since $N$ is fixed, we know that

$$ \begin{equation} \text{the critical point for }S_{\mathrm{Bo}}=\text{the critical point for } S_{\mathrm{F}{-}\mathrm{D}}\text{ given in }(131), \end{equation} \tag{134} $$
$$ \begin{equation} \text{the critical point for }S_{\mathrm{MMB}}=\text{the critical point for } S_{\mathrm{B}{-}\mathrm{E}}\text{ given in }(132). \end{equation} \tag{135} $$
In order to find the critical points of $S_{\mathrm{M}{-}\mathrm{B}}$, one proceeds in analogy with the other cases, i.e., one sets

$$ \begin{equation*} S_{b}(x_1,\dots,x_m) :=\log N!+\sum_j(x_j\log G_j-x_j\log x_j+x_j). \end{equation*} \notag $$
Then for any $k\in\{ 1,\dots,m\}$,
$$ \begin{equation*} \frac{\partial S_{b}}{\partial x_k}=\log G_k-\log x_k=\log\frac{G_k}{x_k}, \end{equation*} \notag $$
and so
$$ \begin{equation} \begin{aligned} \, \frac{\partial S_{b}}{\partial x_k}+\alpha\frac{\partial s_1}{\partial x_k}+\beta\frac{\partial s_2}{\partial x_k}=0\quad &\Longleftrightarrow\quad \log\frac{G_k}{x_k} =\alpha+\beta\varepsilon_k \nonumber \\ &\Longleftrightarrow\quad x_k=\frac{G_k}{e^{\alpha+\beta\varepsilon_k}}. \end{aligned} \end{equation} \tag{136} $$
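A finite-difference check of (136), analogous to the previous editorial sketch and with the same arbitrary sample values:

```python
# Editorial illustration: finite-difference check of the M-B critical point (136).
import math

G, eps = [40.0, 60.0, 80.0], [1.0, 2.0, 3.0]   # arbitrary sample values
alpha, beta, h = 0.5, 0.7, 1e-6

def S_b(x):   # the x-dependent part of S_{M-B}; log N! is a constant
    return sum(xi * math.log(g) - xi * math.log(xi) + xi for g, xi in zip(G, x))

x = [g * math.exp(-(alpha + beta * e)) for g, e in zip(G, eps)]   # (136)
for k in range(len(G)):
    xp, xm = x[:], x[:]
    xp[k] += h
    xm[k] -= h
    dS = (S_b(xp) - S_b(xm)) / (2 * h)
    assert abs(dS - (alpha + beta * eps[k])) < 1e-5   # dS_b/dx_k = alpha + beta*eps_k
```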

References

1. L. Accardi, Yun-Gang Lu, “The $qq$-bit (I): Central limits with left $q$-Jordan–Wigner embeddings, monotone interacting Fock space, Azema random variable, probabilistic meaning of $q$”, Infin. Dimens. Anal. Quantum Probab. Relat. Top., 21:4 (2018), 1850030, 53 pp.
2. R. Lenczewski, “Unification of independence in quantum probability”, Infin. Dimens. Anal. Quantum Probab. Relat. Top., 1:3 (1998), 383–405.
3. V. Liebscher, “On a central limit theorem for monotone noise”, Infin. Dimens. Anal. Quantum Probab. Relat. Top., 2:1 (1999), 155–167.
4. L. Accardi, A. Boukas, Yun-Gang Lu, A. Teretenkov, “The non-linear and quadratic quantization programs”, Infinite dimensional analysis, quantum probability and applications (Al Ain, UAE, 2021), Springer Proc. Math. Stat., 390, Springer, Cham, 2022, 3–53.
5. L. Accardi, “Classical and quantum conditioning: mathematical and information theoretical aspects”, Quantum bio-informatics III. From quantum informatics to bio-informatics, QP-PQ: Quantum Probab. White Noise Anal., 26, World Sci. Publ., Hackensack, NJ, 2009, 1–16.
