Russian Mathematical Surveys
Russian Mathematical Surveys, 2024, Volume 79, Issue 3, Pages 375–457
DOI: https://doi.org/10.4213/rm10171e
(Mi rm10171)
 

Sequences of independent functions and structure of rearrangement invariant spaces

S. V. Astashkin$^{a,b,c,d}$

a Samara National Research University
b Lomonosov Moscow State University
c Moscow Center for Fundamental and Applied Mathematics
d Bahçesehir University, Istanbul, Turkey
Abstract: The main aim of the survey is to present results of the last decade on the description of subspaces spanned by independent functions in $L_p$-spaces and Orlicz spaces on the one hand, and in general rearrangement invariant spaces on the other. A new approach is proposed, which is based on a combination of results in the theory of rearrangement invariant spaces, methods of the interpolation theory of operators, and some probabilistic ideas. The problem of the uniqueness of the distribution of a function such that a sequence of its independent copies spans a given subspace is considered. A general principle is established for the comparison of the complementability of subspaces spanned by a sequence of independent functions in a rearrangement invariant space on $[0,1]$ and by pairwise disjoint copies of these functions in a certain space on the half-line $(0,\infty)$. As a consequence of this principle we obtain, in particular, the classical Dor–Starbird theorem on the complementability of subspaces spanned by independent functions in the $L_p$-spaces.
Bibliography: 103 titles.
Keywords: independent functions, $L_p$-space, rearrangement invariant space, Orlicz function, Orlicz space, $p$-convex function, $p$-concave function, Boyd indices, Rosenthal's inequalities, Kruglov property, $\mathcal K$-functional, complemented subspace, projection.
Funding: Russian Science Foundation, grant no. 23-71-30001.
This research was carried out at Lomonosov Moscow State University and supported by the Russian Science Foundation under grant no. 23-71-30001, https://rscf.ru/en/project/23-71-30001/.
Received: 14.02.2024
Document Type: Article
UDC: 517.982.22+517.518.34+519.2
MSC: Primary 46B09, 46B15, 46B20, 46E30; Secondary 46B26, 46B70
Language: English
Original paper language: Russian

1. Introduction

In the last chapter of his famous book Théorie des opérations linéaires [22] (1932), whose publication heralded the birth of the theory of Banach spaces, Banach stated the problem of the description of the set of pairs of numbers $(p,q)$, $1\leqslant p,q< \infty$, such that the function space $L_p=L_p[0,1]$ contains a subspace isomorphic to the sequence space $\ell_q$.

This problem was solved in part in subsequent papers by Banach and Mazur, and also by Paley [86]; however, till the 1950s the case $1\leqslant p<q<2$ was open. Finally, in 1958 Kadec gave an affirmative answer by showing that a sequence of independent copies of a step function $g$ such that $\xi^{(q)}(t)\leqslant g(t)\leqslant 2\xi^{(q)}(t)$, $0<t\leqslant 1$, where $\xi^{(q)}$ is a $q$-stable random variable, is equivalent in $L_p$ to the canonical basis of $\ell_q$ [59].

The fact that Banach’s problem was solved by the use of independent functions, one of the central objects of probability theory, heightened the interest in such functions among mathematicians investigating the structure of Banach spaces. Another reason for the growing interest in this topic was Rosenthal’s discovery in 1970 of subspaces of $L_p[0,1]$ spanned by independent functions that had a new isomorphism type (that is, they were not isomorphic to any previously known subspace) [96]. In this connection the problem of describing the subspaces of $L_p$ spanned by sequences of independent functions came into the spotlight. Working in this direction, the French mathematicians Bretagnolle and Dacunha-Castelle rediscovered Kadec’s result using sequences of identically and symmetrically distributed independent functions, and also proved that an Orlicz space $\ell_\psi$ can be embedded isomorphically in $L_p$ for $1\leqslant p<2$ if and only if the function $\psi$ is equivalent on $[0,1]$ to a $p$-convex and $2$-concave Orlicz function [34], [35], [40]. Slightly later, close results were established by Braverman [31]–[33] and, in the finite-dimensional case, by Kwapień and Schütt [71], who used combinatorial methods.

It is well known that the behaviour of sums of independent functions in the spaces $L_p$ for finite $p$ and in $L_\infty$ is quite different (for example, the sequence of Rademacher functions is equivalent to the canonical $\ell_2$-basis in $L_p[0,1]$ for $1\leqslant p< \infty$, and to the $\ell_1$-basis in $L_\infty[0,1]$). As a rule, the phenomena that arise cannot be explained ‘from within’ the $L_p$-scale. On the other hand, this can be done by going beyond this scale and considering, for example, Orlicz spaces or even general rearrangement invariant function spaces. A decisive step in this direction was made in Braverman’s monograph [32], which concentrated on the study of sequences of independent functions in rearrangement invariant spaces on the basis of an intensive application of both probabilistic techniques and function-theoretic methods.

In general, in our survey we continue Braverman’s line of research in [32], but we use slightly different methods. We propose an approach initiated by Astashkin and Sukochev in [18] and then developed in [20], [4], and [21]. It is based on a combination of results from the theory of rearrangement invariant spaces with methods of the interpolation theory of operators. The central role is played by the consistent application of relations between the norms of sums of independent functions and of disjoint copies of these functions, which were originally used in the case of $L_p$-spaces by Rosenthal in the paper mentioned above and were subsequently extended to the class of rearrangement invariant spaces by Johnson and Schechtman [58]. Later, Astashkin and Sukochev showed that these relations hold precisely in the rearrangement invariant spaces with the so-called Kruglov property, which had been introduced earlier by Braverman (see the survey [17] and the references there, as well as § 3.2).

In our opinion the approach proposed here has significant advantages over the methods in [32], which are based on complex analysis. Being more straightforward, it makes clearer the connection between the properties of the subspace of a space $X$ spanned by a sequence of identically distributed independent functions $\{f_k\}_{k=1}^\infty$ and the distribution of $f_1\in X$. Thus, one can ‘move slightly farther’: while in [32], owing to the constraints of the method, the author considered only ${\ell_q}$-estimates for sums of independent variables, that is, the case when their sequence is equivalent to the canonical $\ell_q$-basis for some $1\leqslant q<\infty$, our approach enables us to investigate efficiently more general $\ell_\psi$-estimates for a wide class of Orlicz functions $\psi$. We discuss this in §§ 4 and 5.

In § 5 we also consider a problem (stated originally in [18]) that is converse to the above one in a certain sense: the uniqueness of the distribution of a function such that a sequence of its independent copies spans the subspace in question. We include in our survey some recent results on the solution of this problem for the spaces $L_p$ and general rearrangement invariant spaces alike.

The ‘spectrum’ of subspaces of function spaces that are spanned by independent functions determines many geometric properties of these spaces, so the properties of independent functions and probabilistic inequalities related to them turn out to be an efficient tool for the investigation of the geometry of Banach spaces. As prominent examples of their use, we can mention works by Bourgain on $\Lambda(p)$-sets [28], by Gluskin on the diameter of the Minkowski compactum [50], and by Kashin on the decomposition of $L_2$ into a direct sum of subspaces on which the $L_1$- and $L_2$-norms are equivalent [65]. To date this topic has also been reflected in monographs; in this connection, without pretending to be complete, we mention [79], [57], [32], [66], and [90].

In our survey, in §§ 6 and 7 we consider questions relating to the complementability of subspaces spanned by independent functions. A beautiful classical result in this direction, the Dor–Starbird theorem, says that each subspace of $L_p$, $1\leqslant p<\infty$, that is spanned by independent functions and is isomorphic to $\ell_p$ is complemented in $L_p$ [44]. We will see that, in fact, this is a consequence of a comparison principle for the complementability of the subspaces spanned by sequences of independent functions in a rearrangement invariant space $X$ on $[0,1]$ and by pairwise disjoint copies of these functions in a certain space on the half-line $(0,\infty)$. Quite general as it is, this principle also enables us to obtain results of Dor–Starbird type for rearrangement invariant spaces satisfying a lower $p$-estimate. In the proofs of these and some other results we use methods close to the ones used in our survey [17] in Uspekhi Matematicheskikh Nauk 1 (2010), where, as an application, we characterized the rearrangement invariant spaces in which a vector-valued version of Khintchine’s inequality holds, and found the broadest sufficient conditions known to date ensuring that a rearrangement invariant space on $[0, 1]$ is isomorphic to some rearrangement invariant space on $(0,\infty)$ (a problem stated by Mityagin in [82] and actively investigated in the memoir [57]).

We should also mention two earlier surveys in Uspekhi Matematicheskikh Nauk. In the first of them, due to Gaposhkin [49], the focus is on the function-theoretic properties of sequences of both independent and weakly dependent (lacunary) systems of functions (convergence and absolute convergence, integrability, limit theorems, the law of the iterated logarithm, and so on). The other survey [89], by Peskir and Shiryaev, was published much later, in 1995. It is mainly devoted to the most important and, incidentally, simplest system of independent functions, the Rademacher system, which is treated from the point of view of the extension of its properties to general martingale sequences. Note that a detailed discussion of various aspects of the behaviour of this classical system in function spaces is the subject of the recent monograph [5].

Let us say a few words on another line of research, related directly to our survey but not included in it for reasons of space. This is the investigation of so-called strongly embedded subspaces of rearrangement invariant spaces $X$, that is, subspaces in which convergence with respect to the $X$-norm is equivalent to convergence in measure. Initiated by Rudin in his classical paper [98] on Fourier analysis on the circle $[0,2\pi)$, this line of research was immediately continued in several directions. In particular, Rosenthal proved the remarkable result that for each $p$, $1<p<2$, a subspace $H$ of $L_p$ is strongly embedded if and only if all functions in the unit ball of $H$ have equicontinuous norms in $L_p$ [97]. In the recent paper [6] (also see [7]) we considered the problem of extending Rosenthal’s theorem to the class of Orlicz function spaces $L_M$. Along with other results, we established a criterion, in terms of the Matuszewska–Orlicz indices of a function $M$, for functions in the unit ball of each subspace $H$ that is strongly embedded in $L_M$ and isomorphic to an Orlicz sequence space to have equicontinuous norms in $L_M$. Note that the properties of subspaces of $L_M$ spanned by independent or pairwise disjoint functions are crucial for the proof of this result. As subspaces of Orlicz spaces spanned by disjoint functions have a more complicated structure than in $L_p$, no Rosenthal-type theorem holds in general for Orlicz spaces $L_M$ lying between $L_1$ and $L_2$.

Recall the most important classical inequalities involving independent functions, which prompted further investigations. In 1923, in his paper “Über dyadische Brüche” [67] Khintchine proved his famous inequalities for the Rademacher functions

$$ \begin{equation*} r_n(t)=\operatorname{sign}\sin 2^n\pi t,\qquad 0\leqslant t\leqslant 1\quad (n\in\mathbb{N}). \end{equation*} \notag $$

Theorem 1.1 (Khintchine’s inequalities). For each $0<p<\infty$ there exist positive constants $A_p$ and $B_p$ such that for all $n\in{\mathbb N}$ and arbitrary $a=(a_k)_{k=1}^n\in\mathbb{R}^n$

$$ \begin{equation*} A_p\|a\|_{\ell_2} \leqslant \biggl\|\,\sum_{k=1}^n a_kr_k\biggr\|_{L_p[0,1]} \leqslant B_p\|a\|_{\ell_2}, \end{equation*} \notag $$
where $\|a\|_{\ell_2}:=\displaystyle\biggl(\,\sum_{k=1}^n|a_k|^2\biggr)^{1/2}$.

Theorem 1.1 shows that the $L_p$-norms of polynomials in the Rademacher system are equivalent to the norms of the sequences of their coefficients in ${\ell_2}$. From the standpoint of the geometry of spaces this means that in $L_p[0,1]$ ($0<p<\infty$) the Rademacher system is equivalent to the canonical basis in ${\ell_2}$. The proof of Theorem 1.1 is presented, for instance, in [66], Theorem 2.6, or [5], Theorem 1.4 (in addition, in [5] the reader can find various versions of these inequalities and some information about the optimal constants $A_p$ and $B_p$).
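The equivalence in Theorem 1.1 is easy to observe numerically. The following Monte Carlo sketch (the function names are ours, introduced only for illustration) estimates the $L_p$-norm of a Rademacher polynomial and compares it with $\|a\|_{\ell_2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rademacher(k, t):
    # r_k(t) = sign sin(2^k pi t): equals +1 when floor(2^k t) is even, -1 when odd
    return 1.0 - 2.0 * (np.floor(2.0 ** k * t).astype(np.int64) % 2)

def lp_norm_of_polynomial(a, p, n_samples=200_000):
    # Monte Carlo estimate of || sum_k a_k r_k ||_{L_p[0,1]}
    t = rng.random(n_samples)
    s = sum(a_k * rademacher(k + 1, t) for k, a_k in enumerate(a))
    return float(np.mean(np.abs(s) ** p) ** (1.0 / p))

a = np.array([0.5, -1.0, 2.0, 0.25, 1.5])
l2 = float(np.linalg.norm(a))
for p in (1.0, 2.0, 6.0):
    print(f"p = {p}: ratio to the l2-norm is {lp_norm_of_polynomial(a, p) / l2:.3f}")
```

For $p=2$ the ratio equals $1$ up to sampling error (the Rademacher functions are orthonormal); for $p<2$ it drops below $1$, and for $p>2$ it exceeds $1$, staying between the constants $A_p$ and $B_p$.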

One of the best known generalizations of Khintchine’s inequalities is given by the Marcinkiewicz–Zygmund inequalities [81], published in 1937.

Theorem 1.2 (Marcinkiewicz–Zygmund inequalities). Let $\{f_k\}_{k=1}^\infty$ be a system of independent functions on $[0,1]$ satisfying

$$ \begin{equation*} \|f_k\|_2=1,\quad \|f_k\|_\infty\leqslant M,\quad\textit{and}\quad \int_0^1 f_k(t)\,dt=0,\qquad k\in{\mathbb N}. \end{equation*} \notag $$
Then for each $p\geqslant 1$ there exists a positive constant $C_{M,p}$, which only depends on $M$ and $p$, such that for each $n\in{\mathbb N}$ and any $a=(a_k)_{k=1}^n\in\mathbb{R}^n$
$$ \begin{equation*} C_{M,p}^{-1}\|a\|_{\ell_2}\leqslant \biggl\|\,\sum_{k=1}^n a_kf_k\biggr\|_{L_p[0,1]}\leqslant C_{M,p}\|a\|_{\ell_2}. \end{equation*} \notag $$

Moreover, each system of independent functions satisfying the assumptions of Theorem 1.2 is equivalent to the Rademacher system in distribution ([5], Corollary 7.3).

2. Definitions, notation, and preliminary information

If $F_1$ and $F_2$ are two functions (quasinorms), then we write $F_1\preceq F_2$ to indicate that $F_1\leqslant CF_2$ for some positive constant $C$ independent of the arguments of $F_1$ and $F_2$ (or a part of these arguments, which must be clear from the context). In the case when $F_1\preceq F_2$ and $F_2\preceq F_1$ simultaneously, we write $F_1\asymp F_2$ and say that these functions (quasinorms) are equivalent. In particular, we say that two functions $F_1$ and $F_2$ on the half-line $(0,\infty)$ are equivalent at zero (or at infinity) if for some $t_0>0$ we have $F_1\asymp F_2$ for all $0<t\leqslant t_0$ (for $t\geqslant t_0$, respectively).

Throughout what follows, an embedding of one Banach space in another is meant to be continuous; that is, by writing $X_1\subset X_0$ we mean that if $x\in X_1$, then $x\in X_0$ and $\|x\|_{X_0}\leqslant C\|x\|_{X_1}$ for some $C>0$. If the concrete constant in this inequality is of importance, then we write $X_1\overset{C}{\subset} X_0$. Finally, when two Banach spaces $X_0$ and $X_1$ are isomorphic, we indicate this by $X_0\approx X_1$.

2.1. Independent functions

Definition 2.1. A set $\{f_k\}_{k=1}^n$ of measurable functions (random variables) defined on a probability space $(\Omega,\Sigma,{\mathsf P})$ is said to be independent if for any intervals $I_k$ of the line ${\mathbb R}$ we have

$$ \begin{equation*} {\mathsf P}\{\omega\in \Omega:\,f_k(\omega)\in I_k,\,k=1,\dots,n\}\;= \;\prod_{k=1}^n {\mathsf P}\{\omega\in \Omega\colon f_k(\omega)\in I_k\}. \end{equation*} \notag $$
We say that a sequence $\{f_k\}_{k=1}^\infty$ consists of independent functions if for each $n\in{\mathbb N}$ the set $\{f_k\}_{k=1}^n$ is independent.

Two measurable functions $f$ and $g$ are said to be identically distributed if

$$ \begin{equation*} {\mathsf P}\{\omega\in \Omega\colon f(\omega)>\tau\}= {\mathsf P}\{\omega\in \Omega\colon g(\omega)>\tau\} \quad\text{for each }\ \tau>0. \end{equation*} \notag $$

We say that a function $f$ is symmetrically distributed on $\Omega$ if $f$ and $-f$ are identically distributed. If $f$ is symmetrically distributed, then it has mean zero on $\Omega$, that is, $\displaystyle\int_\Omega f(\omega)\,d{\mathsf P}(\omega)=0$.

Let $f$ be an integrable function on a probability space $(\Omega,\Sigma,{\mathsf P})$. If $\Re$ is a $\sigma$-subalgebra of the $\sigma$-algebra $\Sigma$, then there exists a unique (up to measure zero) $\Re$-measurable integrable function ${\mathsf E}_\Re f$ that for all $B\in\Re$ satisfies the equality

$$ \begin{equation*} \int_Bf(\omega)\,d{\mathsf P}(\omega)= \int_B{\mathsf E}_\Re f(\omega)\,d{\mathsf P}(\omega) \end{equation*} \notag $$
(see [45], Theorem 1.1). We call ${\mathsf E}_\Re f$ the conditional expectation of $f$ with respect to the $\sigma$-algebra $\Re$. In particular, if $\Re=\{\Omega,\varnothing\}$, then
$$ \begin{equation*} {\mathsf E}_\Re f={\mathsf E}f:=\int_{\Omega}f(\omega)\,d{\mathsf P}(\omega). \end{equation*} \notag $$

In what follows the probability space is almost always the interval $[0,1]$ with Lebesgue measure $m$ defined on the $\sigma$-algebra of Lebesgue measurable sets.

One of the most important examples of sequences of identically and symmetrically distributed independent functions on $[0,1]$ is the sequence of Rademacher functions $r_k(t)=\operatorname{sign}\sin 2^k\pi t$, $k \in \mathbb{N}$, which we mention repeatedly below.

Another example is a sequence $\{\xi_k^{(r)}\}_{k=1}^\infty$ of $r$-stable random variables $(0< r\leqslant 2)$. They are identically distributed too, and the variable $\xi_1^{(r)}$ satisfies the following condition for some $c=c(r)$:

$$ \begin{equation*} {\mathsf E}(\exp(it\xi_1^{(r)}))=\exp(-c|t|^r),\qquad t\in\mathbb{R} \end{equation*} \notag $$
(see, for instance, [1], § 6.4). In the case when $r=2$ and $c=1/2$ we obtain the standard sequence of normally distributed (Gaussian) random variables.

The reader can find more information on the properties of independent functions in [62], [66], Chap. 2, [72], and [73], and on the properties of conditional expectation in [45], Chap. 1.

2.2. Rearrangement invariant spaces

Here we recall some concepts and results of the theory of rearrangement invariant spaces, which we use broadly below (see details in [64], [69], [79], and [25]).

Let $J=[0,1]$ or $J=(0,\infty)$, and let $S(J)$ denote the set of almost everywhere finite, measurable (with respect to Lebesgue measure $m$) real functions (equivalence classes) on $J$ with the natural algebraic operations and the natural almost-everywhere ordering.

For a function $x=x(t) \in S(J)$ we introduce its distribution function as considered in the theory of functions:

$$ \begin{equation*} n_{x}(\tau):=m\{ t\in J\colon |x(t)| > \tau \},\qquad \tau > 0. \end{equation*} \notag $$
It is non-negative, non-increasing, and right-continuous. Two functions $x,y \in S(J)$ are said to be equimeasurable if $n_{x}(\tau)=n_{y}(\tau)$ for all $\tau >0$.

It is clear that identically distributed functions are equimeasurable. In addition, each function $x \in S(J)$ is equimeasurable with its non-increasing left-continuous rearrangement

$$ \begin{equation*} x^{\ast}(t):=\inf\{\tau \geqslant 0\colon n_{x}(\tau) < t \},\qquad t\in J. \end{equation*} \notag $$
The functions $x^{\ast}$ and $n_{x}$ are mutually inverse in the generalized sense, namely,
$$ \begin{equation*} n_{x}(x^{\ast}(t))=t \quad \text{if } x^{\ast}(t)\text{ is a point of continuity of }n_{x}, \end{equation*} \notag $$
and conversely,
$$ \begin{equation*} x^{\ast}(n_{x}(\tau))=\tau \quad \text{if } n_{x}(\tau) \text{ is a point of continuity of }x^{\ast} \end{equation*} \notag $$
(see [69], § II.2).
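For a function taking finitely many values, both $n_x$ and $x^{\ast}$ can be computed directly from the definitions. A small sketch (the helper names are ours):

```python
def distribution(vals, weights, tau):
    # n_x(tau) = m{ t in J : |x(t)| > tau } for a step function taking
    # the value vals[i] on a set of measure weights[i]
    return sum(w for v, w in zip(vals, weights) if abs(v) > tau)

def rearrangement(vals, weights, t):
    # x*(t) = inf{ tau >= 0 : n_x(tau) < t }; for a step function the
    # infimum is attained at 0 or at one of the values |vals[i]|
    for tau in sorted({abs(v) for v in vals} | {0.0}):
        if distribution(vals, weights, tau) < t:
            return tau
    return 0.0  # unreachable for t > 0

# x takes the value -3 on a set of measure 0.2, 1 on measure 0.5, 2 on measure 0.3
vals, w = [-3.0, 1.0, 2.0], [0.2, 0.5, 0.3]
print([rearrangement(vals, w, t) for t in (0.1, 0.3, 0.6, 0.9)])
```

In accordance with the definition, here $x^{\ast}$ equals $3$ on $(0,0.2]$, $2$ on $(0.2,0.5]$, and $1$ on $(0.5,1]$.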

Definition 2.2. A Banach function space $X \subset S(J)$ is said to be rearrangement invariant (r. i.) if

(1) the relations $x \in X$, $y \in S(J)$, and $|y(t)|\leqslant |x(t)|$ almost everywhere imply that $y\in X$ and ${\|y\|}_X \leqslant \|x\|_X$;

(2) the relations $x \in X$ and $y \in S(J)$, and the fact that $x$ and $y$ are equimeasurable imply that $y\in X$ and ${\|y\|}_X=\|x\|_X$.

If $X$ is r. i., then the convergence $\|x_n-x\|_X\to 0$ $(x_n,x\in X)$ implies that $x_n\to x$ in the measure $m$ on finite-measure sets in $J$ ([64], Theorem 4.3.1).

Without loss of generality we always assume that $\|\chi_{[0,1]}\|_X=1$ (throughout, $\chi_A$ is the characteristic function of the set $A$). Then for each r. i. space $X$ on $[0,1]$ we have

$$ \begin{equation*} L_\infty[0,1]\overset{1}{\subset} X\overset{1}{\subset} L_1[0,1]. \end{equation*} \notag $$
In the case when $X$ is an r. i. space on $(0,\infty)$ we have the embeddings
$$ \begin{equation*} (L_1\cap L_\infty) (0,\infty)\overset{1}{\subset}X\overset{1}{\subset} (L_1+ L_\infty) (0,\infty). \end{equation*} \notag $$

If $X$ is an r. i. space on $J$, then the Köthe dual (or associated) space $X'$ consists of all $y\in S(J)$ for which

$$ \begin{equation*} \|y\|_{X'}:=\sup\biggl\{\int_{J}{x(t)y(t)\,dt}\colon \|x\|_{X} \leqslant 1\biggr\}\,<\,\infty. \end{equation*} \notag $$
The space $X'$ is also rearrangement invariant; it is isometrically embedded in the conjugate space $X^*$; moreover, $X'=X^*$ if and only if $X$ is separable. Each r. i. space $X$ is continuously embedded in the second Köthe dual, which we denote by $X''$. An r. i. space $X$ is said to be maximal if, given $x_n\in X$, $n \in \mathbb{N}$, and $x\in S(J)$ such that $\sup_{n \in \mathbb{N}} \|x_n\|_X<\infty$ and $x_n\to{x}$ almost everywhere, we have $x\in X$ and $\|x\|_X\leqslant \liminf_{n\to\infty}{\|x_n\|_X}$. A space $X$ is maximal if and only if its natural embedding in $X''$ is a surjective isometry ([64], Theorem 6.1.7). If an r. i. space $X$ is separable or maximal, then it is embedded isometrically in $X''$ ([64], Theorem 6.1.6).

Below we assume, as is often done (for instance, see [79]), that each r. i. space is separable or maximal.

Let $X$ be an r. i. space on $(0,\infty)$. For each $\tau>0$ the dilation operator ${\sigma}_\tau x(t):=x(t/\tau)$, $t>0$, is bounded in $X$, and moreover, $\|\sigma_\tau\|_{X\to X}\leqslant \max\{1,\tau\}$ (for instance, see [69], Theorem II.4.4). If $X$ is an r. i. space on $[0,1]$, then the same estimate holds for the norm of the modified dilation operator $\widetilde{\sigma}_\tau x(t):=x(t/\tau)\chi_{(0,\min\{1,\tau\})}(t)$, $0\leqslant t\leqslant 1$. Using this operator we define the upper and lower Boyd indices of the r. i. space $X$ on $[0,1]$:

$$ \begin{equation*} \mu_X=\lim_{\tau \to +0} \frac{\ln {\|\widetilde{\sigma}_\tau\|}_{X \to X}}{\ln \tau}\quad\text{and}\quad \nu_X=\lim_{\tau \to +\infty}\frac{\ln \|\widetilde{\sigma}_\tau\|_{X \to X}}{\ln \tau}\,. \end{equation*} \notag $$
We always have $0\leqslant\mu_X\leqslant\nu_X\leqslant 1$ ([69], § II.1, Theorems II.1.3 and II.4.5).
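As a simple check of these definitions, consider $X=L_p[0,1]$, $1\leqslant p<\infty$. The substitution $s=t/\tau$ yields

$$ \begin{equation*} \|\widetilde{\sigma}_\tau x\|_p^p=\int_0^{\min\{1,\tau\}} \biggl|x\biggl(\frac{t}{\tau}\biggr)\biggr|^p\,dt= \tau\int_0^{\min\{1,1/\tau\}}|x(s)|^p\,ds, \end{equation*} \notag $$
whence $\|\widetilde{\sigma}_\tau\|_{L_p \to L_p}=\tau^{1/p}$ for all $\tau>0$ (for $\tau>1$ the supremum over the unit ball is attained in the limit on functions concentrated near zero). Therefore, $\mu_{L_p}=\nu_{L_p}=1/p$.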

Here are some examples of r. i. spaces. First of all, these are the spaces $L_p=L_p(J)$, where $1 \leqslant p \leqslant \infty$:

$$ \begin{equation*} \|x\|_{p}:=\begin{cases} \biggl(\displaystyle\int_J{|x(t)|}^{p}\,dt \biggr)^{1/p}, &1 \leqslant p < \infty, \\ \displaystyle\operatorname*{ess\,sup}_{t\in J}|x(t)|, &p=\infty, \end{cases} \end{equation*} \notag $$
and a generalization of these, the family of spaces $L_{p,q}=L_{p,q}(J)$, $1<p<\infty$, $1 \leqslant q \leqslant \infty$:
$$ \begin{equation*} \|x\|_{L_{p,q}}:=\begin{cases} \biggl(\dfrac{q}{p}\displaystyle\int_J t^{q/p}x^{\ast}(t)^q\, \dfrac{dt}{t}\biggr)^{1/q}, & 1 \leqslant q < \infty, \\ \displaystyle\operatorname*{ess\,sup}_{t\in J} (t^{1/p}x^{\ast}(t)), & q=\infty. \end{cases} \end{equation*} \notag $$
Although the functional $x\mapsto \|x\|_{L_{p,q}}$ is not subadditive for $q>p$, it is equivalent to the norm $\|x\|'=\|x^{**}\|_{L_{p,q}}$, where $x^{**}(t)=\dfrac{1}{t}\displaystyle\int_0^t x^*(s)\,ds$. In addition, $L_{p,q_1}\subset L_{p,q_2}$ if $1\leqslant q_1\leqslant q_2\leqslant\infty$ and $L_{p,p}=L_p$ ([69], Lemma II.6.5 and the remark after it).
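For example, if $x=\chi_A$ with $m(A)=s$, then $x^{\ast}=\chi_{(0,s]}$, and for $1 \leqslant q<\infty$ we obtain

$$ \begin{equation*} \|x\|_{L_{p,q}}^q=\frac{q}{p}\int_0^s t^{q/p-1}\,dt=s^{q/p}, \end{equation*} \notag $$
that is, $\|\chi_A\|_{L_{p,q}}=m(A)^{1/p}$ for every $q$ (for $q=\infty$ the same follows from $\sup_{0<t\leqslant s}t^{1/p}=s^{1/p}$); this computation yields the formula $\phi_{L_{p,q}}(s)=s^{1/p}$ for the fundamental function of $L_{p,q}$.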

Let $\varphi$ be an increasing concave function on $J$ such that $\varphi(0) =0$ and $\varphi(1) = 1$. The Lorentz space $\Lambda(\varphi)$ and the Marcinkiewicz space $M(\varphi)$ consist of all measurable functions $x(t)$ on $J$ such that

$$ \begin{equation*} \|x\|_{\Lambda(\varphi)}:=\int_{J} x^*(t)\,d\varphi(t)< \infty\quad\text{and}\quad \|x\|_{M(\varphi)}:=\sup_{t\in J} \frac{1}{\varphi(t)}\int_{0}^{t}x^{\ast}(s)\,ds < \infty, \end{equation*} \notag $$
respectively.

If $\lim_{t\to +0}\varphi(t)=\lim_{t\to +\infty}\varphi(t)/t=0$, then the space $\Lambda(\varphi)$ is separable and maximal, whereas $M(\varphi)$ is maximal but not separable. We also have the following duality relations: $\Lambda(\varphi)'=M(\varphi)$ and $M(\varphi)'=\Lambda(\varphi)$ ([69], Theorems II.5.2 and II.5.4).

Let $X$ be an r. i. space on $[0,1]$. We let $X_0$ denote the separable part of $X$, that is, the closure of $L_\infty$ in $X$. The space $X_0$ is separable if and only if $X\ne L_\infty$ ([69], § II.4.5). In particular, the space $M(\varphi)_0$ can be characterized as the set of functions $x\in M(\varphi)$ such that $\lim_{t\to +0} \dfrac{1}{\varphi(t)}\displaystyle\int_{0}^{t}x^{\ast}(s)\,ds=0$ ([69], Lemma II.5.4).

The fundamental function $\phi_X$ of the r. i. space $X$ is defined by

$$ \begin{equation*} \phi_X(t):=\|\chi_A\|_X, \end{equation*} \notag $$
where $A$ is any measurable subset of $J$ such that $m(A)=t$. The function $\phi_X$ is quasiconcave (that is, $\phi_X(0)=0$, $\phi_X$ is non-decreasing, and $\phi_X(t)/t$ is non-increasing on its domain of definition). In particular,
$$ \begin{equation*} \begin{gathered} \, {\phi}_{L_{p,q}}(s)=s^{1/p}\qquad (1\leqslant q\leqslant\infty), \\ \phi_{\Lambda(\varphi)}(s)={{\varphi}}(s),\quad\text{and}\quad \phi_{M(\varphi)}(s)=\frac{s}{\varphi(s)}\,. \end{gathered} \end{equation*} \notag $$
If $\phi_X$ is the fundamental function of the r. i. space $X$, then $\Lambda({\Phi_X})\subset X\subset M(s/\Phi_X(s))$, where $\Phi_X$ denotes the least concave majorant of $\phi_X$ ([69], Theorems II.5.5 and II.5.7).

In a similar way we define an r. i. space of number sequences $\{a_k\}_{k=1}^\infty$ (for instance, see [9], § II.8). In particular, the fundamental function of an r. i. space of sequences $E$ is defined by

$$ \begin{equation*} \phi(n):=\biggl\|\,\sum_{k=1}^n e_k\biggr\|_E,\qquad n\in\mathbb{N}, \end{equation*} \notag $$
where the $e_k$ are the canonical unit vectors in sequence spaces. It is easy to see that $\{e_k\}_{k=1}^\infty$ is a symmetric basis in $E$ if $E$ is separable.

Yet another natural generalization of $L_p$-spaces is Orlicz spaces. Since these (function and sequence) spaces play an extremely important role in what follows, we collect the preliminaries on Orlicz spaces in a separate subsection.

2.3. Orlicz spaces

For a comprehensive description of the properties of Orlicz spaces, see [68], [92], and [80].

Let $M$ be an Orlicz function, that is, a (strictly) increasing convex continuous function on the half-line $[0,\infty)$ such that $M(0)=0$. Throughout what follows, without loss of generality we assume that $M(1)=1$. The Orlicz space $L_{M}:=L_M(J)$ consists of all measurable functions $x(t)$ on $J$ that have a finite Luxemburg norm

$$ \begin{equation*} \|x\|_{L_{M}}:=\inf\biggl\{\lambda > 0 \colon \int_J M\biggl(\frac{|x(t)|}{\lambda}\biggr) \, dt \leqslant 1\biggr\}. \end{equation*} \notag $$
In particular, if $M(u)=u^p$, $1\leqslant p<\infty$, then $L_M=L_p$ with the standard norm.
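Since the modular $\lambda\mapsto\displaystyle\int_J M\bigl(|x(t)|/\lambda\bigr)\,dt$ is non-increasing in $\lambda$, the Luxemburg norm of a step function can be found by bisection. A minimal numerical sketch (the function names are ours):

```python
def luxemburg_norm(vals, weights, M, iters=100):
    # ||x||_{L_M} = inf{ lam > 0 : integral of M(|x|/lam) <= 1 } for a step
    # function with value vals[i] on a set of measure weights[i]; the modular
    # is non-increasing in lam, so the infimum can be located by bisection
    def modular(lam):
        return sum(M(abs(v) / lam) * w for v, w in zip(vals, weights))
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if modular(mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

# sanity check: M(u) = u^p recovers the usual L_p norm
vals, w, p = [3.0, -1.0, 2.0], [0.2, 0.5, 0.3], 2.0
lp = sum(abs(v) ** p * wi for v, wi in zip(vals, w)) ** (1 / p)
lux = luxemburg_norm(vals, w, lambda u: u ** p)
print(lp, lux)
```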

Note that the definition of $L_M[0,1]$ depends (up to equivalence of norms) only on the behaviour of $M$ at infinity (that is, for large values of its argument). The fundamental function of this space is

$$ \begin{equation*} \phi_{L_M}(u)=\frac{1}{M^{-1}(1/u)}\,,\qquad 0<u\leqslant 1, \end{equation*} \notag $$
where $M^{-1}$ is the inverse function of $M$.

If $M$ is an Orlicz function, then the complementary (or Young conjugate) function is the function $M'$ defined by

$$ \begin{equation*} M'(u):=\sup_{t>0}(ut-M(t)),\qquad u \geqslant 0. \end{equation*} \notag $$
It is easy to see that $M'$ is an Orlicz function too, and its complementary function is equal to $M$.
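For example, let $M(t)=t^p/p$, where $1<p<\infty$. The supremum in the definition of $M'$ is attained at $t=u^{1/(p-1)}$, and

$$ \begin{equation*} M'(u)=u\cdot u^{1/(p-1)}-\frac{u^{p/(p-1)}}{p}= \biggl(1-\frac{1}{p}\biggr)u^{p'}=\frac{u^{p'}}{p'}\,, \qquad \frac{1}{p}+\frac{1}{p'}=1, \end{equation*} \notag $$
in accordance with the duality $L_p^*=L_{p'}$.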

Each Orlicz space $L_M(J)$ is maximal; the space $L_M[0,1]$ is separable if and only if $M$ satisfies the $\Delta_2^\infty$-condition ($M\in \Delta_2^\infty$), that is,

$$ \begin{equation*} \sup_{u\geqslant 1}\frac{M(2u)}{M(u)}<\infty; \end{equation*} \notag $$
in a similar way the space $L_M(0,\infty)$ is separable if and only if $M$ satisfies the $\Delta_2$-condition ($M\in \Delta_2$), that is,
$$ \begin{equation*} \sup_{u>0}\frac{M(2u)}{M(u)}<\infty. \end{equation*} \notag $$
In that case $L_M(J)^*=L_M(J)'=L_{M'}(J)$.

An important class of non-separable Orlicz spaces is formed by the so-called exponential spaces. If $p>0$, then $\exp L_p$ is the Orlicz space $L_{N_p}[0,1]$, where $N_p(u)$ is an arbitrary Orlicz function which is equivalent to the function $\exp(u^p)$ at infinity. The Köthe dual of the space $\exp L_p$ is the separable Orlicz space $L\log^{1/p}L$, which is constructed from an Orlicz function equivalent to the function $u\log^{1/p}u$ at infinity.

Note that the Rademacher system is equivalent in an r. i. space $X$ to the canonical $\ell_2$-basis if and only if the embedding $X\supset G$ holds, where $G:=(\exp L_2)_0$, that is, $G$ is the closure of $L_\infty$ in $\exp L_2$ (§ 2.2) [94] (also see [5], Theorem 2.3).

It is easy to verify that in terms of the Matuszewska–Orlicz indices at infinity,

$$ \begin{equation*} \alpha_{M}^{\infty}:=\sup\biggl\{p\colon \sup_{t,s \geqslant 1} \frac{M(t)s^{p}}{M(ts)} < \infty \biggr\}\quad\text{and}\quad \beta_{M}^{\infty}:=\inf\biggl\{p\colon \inf_{t,s \geqslant 1} \frac{M(t)s^{p}}{M(ts)} > 0 \biggr\}, \end{equation*} \notag $$
the Boyd indices of the Orlicz space $L_M[0,1]$ can be expressed as follows:
$$ \begin{equation*} \mu_{L_M}=\frac{1}{\beta_{M}^{\infty}}\quad\text{and}\quad \nu_{L_M}=\frac{1}{\alpha_{M}^{\infty}}\,. \end{equation*} \notag $$
Finally, $M\in \Delta_2^\infty$ if and only if $\beta_{M}^{\infty} <\infty$.

An Orlicz sequence space is defined similarly, namely, if $\psi$ is an Orlicz function, then the space $\ell_{\psi}$ consists of all sequences of real numbers $a=(a_{k})_{k=1}^{\infty}$ such that

$$ \begin{equation*} \|a\|_{\ell_{\psi}}:=\inf\biggl\{\lambda>0\colon \sum_{k=1}^{\infty} \psi\biggl(\frac{|a_{k}|}{\lambda}\biggr)\leqslant 1\biggr\}<\infty. \end{equation*} \notag $$
If $\psi(u)=u^p$, $p\geqslant 1$, then $\ell_\psi=\ell_p$ isometrically. The definition of the Orlicz space $\ell_{\psi}$ depends (up to equivalence of norms) only on the behaviour of $\psi$ at zero (that is, for small values of the argument).

The fundamental function of the Orlicz space $\ell_{\psi}$ can be calculated by the formula

$$ \begin{equation} \phi_{\ell_\psi}(n)=\frac{1}{\psi^{-1}(1/n)}\,,\qquad n \in \mathbb{N}. \end{equation} \tag{2.1} $$

The space $\ell_{\psi}$ is separable if and only if $\psi$ satisfies the $\Delta_{2}^{0}$-condition ($\psi\in \Delta_{2}^{0}$), that is, $\sup_{0<u\leqslant 1}{\psi(2u)}/{\psi(u)}<\infty$. In this case $\ell_{\psi}^*=\ell_{\psi}'=\ell_{{\psi}'}$, where ${\psi}'$ is the function complementary to $\psi$.

For each Orlicz function $\psi$ we define the Matuszewska–Orlicz indices at zero:

$$ \begin{equation*} \alpha_{\psi}^{0}:=\sup\biggl\{p \colon \sup_{0<t,s \leqslant 1} \frac{\psi(st)}{s^{p}\psi(t)} < \infty\biggr\}\quad\text{and} \quad \beta_{\psi}^{0}:=\inf \biggl\{p \colon \inf_{0<t, s \leqslant 1} \frac{\psi(st)}{s^{p}\psi(t)} > 0 \biggr\}. \end{equation*} \notag $$
Similarly to the Matuszewska–Orlicz indices at infinity, the inequalities $1 \leqslant \alpha_{\psi}^{0} \leqslant \beta_{\psi}^{0} \leqslant \infty$ hold (see, for instance, [78], Chap. 4).

2.4. Some concepts of the interpolation theory of operators

Some concepts and methods from the interpolation theory of operators play an important role in what follows.

A system $\vec X=(X_0,X_1)$ of two Banach spaces $X_0$ and $X_1$ is called a Banach couple if both spaces are continuously and linearly embedded in the same Hausdorff linear topological space. Given a Banach couple, we can define its intersection $X_0\cap X_1$ and sum $X_0+X_1$ as Banach spaces with the norms

$$ \begin{equation*} \|x\|_{X_0\cap X_1}=\max\{\|x\|_{X_0},\|x\|_{X_1}\} \end{equation*} \notag $$
and
$$ \begin{equation*} \|x\|_{X_0+X_1}=\inf\{\|x_0\|_{X_0}+\|x_1\|_{X_1}\colon x=x_0+x_1,x_i\in X_i,i=0,1\}, \end{equation*} \notag $$
respectively. For example, two r. i. spaces $X_0$ and $X_1$ on $J$ form a Banach couple because both are continuously and linearly embedded in the space $S(J)$ considered with the topology of convergence in Lebesgue measure on finite-measure subsets.

We call a Banach space $X$ an interpolation space with respect to a Banach couple $(X_0,X_1)$ if $X_0 \cap X_1 \subset X \subset X_0+X_1$ and for each linear operator $T\colon X_0+X_1\to X_0+X_1$ that is bounded in both $X_0$ and $X_1$ we have $T\colon X \to X$. In particular, by the classical Riesz–Thorin theorem ([26], Theorem 1.1.1) $L_p$ is an interpolation space with respect to the couple $(L_{p_0},L_{p_1})$ for any $1\leqslant p_0\leqslant p\leqslant p_1\leqslant\infty$.

One of the most important methods of constructing interpolation spaces, both in theory and in applications, uses the Peetre ${\mathcal K}$-functional, defined for a Banach couple $(X_0,X_1)$ as follows:

$$ \begin{equation*} {\mathcal K}(t,x;X_0,X_1):=\inf\{\|x_0\|_{X_0}+t\|x_1\|_{X_1}\colon x=x_0+x_1,x_i\in X_i\}, \end{equation*} \notag $$
where $x\in {X_0+X_1}$ and $t>0$. For each Banach couple, for fixed $x\in X_0+X_1$ the map $t\mapsto {\mathcal K}(t,x;X_0,X_1)$ defines a non-negative continuous increasing concave function ([26], Lemma 3.1.1).

The ${\mathcal K}$-functional of a Banach couple can only rarely be calculated explicitly. In what follows we need two such explicit formulae.

Let $(T,\Sigma,\mu)$ be an arbitrary space with $\sigma$-finite measure $\mu$. If $w$ is measurable and non-negative on $T$, then the weighted $L_1$-space $L_1(w)$ consists of all measurable functions $x$ on $T$ for which $\|x\|_{L_1(w)}:=\displaystyle\int_T|x(t)|w(t)\,d\mu<\infty$. For each couple $(L_1(w_1),L_1(w_2))$ we have

$$ \begin{equation} {\mathcal K}(t,x;L_1(w_1),L_1(w_2))= \int_T |x(s)|\min\{w_1(s),tw_2(s)\}\,d\mu(s),\qquad t>0 \end{equation} \tag{2.2} $$
(see [26], the proof of Theorem 5.4.1, or [36], Proposition 3.1.17).

The second formula gives the ${\mathcal K}$-functional of the couple $(L_1,L_\infty)$ of spaces defined on an arbitrary space with $\sigma$-finite measure:

$$ \begin{equation} {\mathcal K}(t,x;L_1,L_\infty)=\int_0^{t} x^*(s)\,ds,\qquad t>0 \end{equation} \tag{2.3} $$
(for instance, see [26], Theorem 5.2.1).
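The truncation structure behind (2.3) can be seen numerically in the discrete (counting-measure) setting, where (2.3) reads ${\mathcal K}(t,x;\ell_1,\ell_\infty)=\sum_{k\leqslant t}x^*_k$ for integer $t$. The sketch below is our own illustration; it relies on the standard observation that the truncations $x_1=\operatorname{clip}(x,-\lambda,\lambda)$, $x_0=x-x_1$ realize near-optimal decompositions, and minimizes $\|x_0\|_1+t\|x_1\|_\infty$ over a grid of truncation levels $\lambda$.

```python
def k_functional(t, x, grid=20000):
    """Approximate K(t, x; l1, l_inf) by minimizing ||x0||_1 + t*||x1||_inf
    over the truncation decompositions x1 = clip(x, -lam, lam), x0 = x - x1."""
    ax = sorted((abs(v) for v in x), reverse=True)
    best = float("inf")
    for i in range(grid + 1):
        lam = ax[0] * i / grid
        best = min(best, sum(max(a - lam, 0.0) for a in ax) + t * lam)
    return best

x = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0, 2.0]
xs = sorted((abs(v) for v in x), reverse=True)   # decreasing rearrangement x*
for t in range(1, 6):
    # discrete version of (2.3): K(t, x) = x*_1 + ... + x*_t for integer t
    assert abs(k_functional(t, x) - sum(xs[:t])) < 1e-9
print("K(t, x; l1, l_inf) matches the rearrangement formula (2.3)")
```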

The so-called property of $K$-divisibility of the $\mathcal{K}$-functional, established by Brudnyi and Krugljak [36], is central for the interpolation theory of operators. Its proof is based on the following important result.

Proposition 2.1 ([36], Proposition 3.2.5). Let $\varphi$ be an increasing concave function on $[0,\infty)$, $\varphi(0)=0$, and let $q>1$. Then there exists a sequence of positive numbers $\{t_i\}_{i=-m}^n$, $m,n\in \mathbb{N}\cup\{+\infty\}$, such that the following conditions (a)–(d) hold.

(a) $t_0=1$, $t_{i+1}/t_i\geqslant q$, $-m\leqslant i<n$.

(b) The following equalities hold:

$$ \begin{equation*} \frac{\varphi(t_{2i})}{t_{2i}}=q\cdot\frac{\varphi(t_{2i+1})}{t_{2i+1}}\quad\textit{and}\quad \varphi(t_{2i+2})=q\cdot \varphi(t_{2i+1}). \end{equation*} \notag $$

(c) $0\leqslant n\leqslant\infty$, and $n=+\infty$ if and only if

$$ \begin{equation} \varphi'(+\infty):=\lim_{t\to+\infty}\frac{\varphi(t)}{t}=0\quad\textit{and}\quad \varphi(+\infty):=\lim_{t\to +\infty}\varphi(t)=+\infty. \end{equation} \tag{2.4} $$
The first relation in (2.4) fails if and only if $n$ is even; in this case
$$ \begin{equation} \frac{\varphi(t)}{t}\geqslant\frac{1}{q}\cdot \frac{\varphi(t_n)}{t_n}\quad\textit{for}\quad t\geqslant t_{n}. \end{equation} \tag{2.5} $$
The second relation in (2.4) fails if and only if $n$ is odd; in this case
$$ \begin{equation} \varphi(t_n)\min\biggl\{1,\frac{t}{t_n}\biggr\}\geqslant \frac{1}{q}\cdot\varphi(t)\quad\textit{for}\ \ t\geqslant t_{n-1}. \end{equation} \tag{2.6} $$

(d) $m=+\infty$ if and only if

$$ \begin{equation} \varphi(+0)=0\quad\textit{and}\quad \varphi'(+0):=\lim_{t\to +0}\frac{\varphi(t)}{t}=+\infty. \end{equation} \tag{2.7} $$
The first relation in (2.7) fails if and only if $m$ is even; then
$$ \begin{equation} \varphi(t)\geqslant \frac{1}{q}\cdot\varphi(t_{-m})\quad\textit{for}\ \ t\leqslant t_{-m}. \end{equation} \tag{2.8} $$
The second relation in (2.7) fails if and only if $m$ is odd; then
$$ \begin{equation} \varphi(t_{-m})\min\biggl\{1,\frac{t}{t_{-m}}\biggr\}\geqslant \frac{1}{q}\cdot\varphi(t)\quad\textit{for}\ \ t\leqslant t_{-m+1}. \end{equation} \tag{2.9} $$

A Banach couple $(X_0,X_1)$ is said to be $\operatorname{Conv}_0$-abundant if for each increasing concave function $\varphi$ on $(0,\infty)$ such that $\varphi(0)=0$ and $\lim_{t\to +0}\varphi(t)=\lim_{t\to +\infty}\varphi(t)/t= 0$ there exists $x\in X_0+X_1$ satisfying the relation

$$ \begin{equation*} \varphi(t)\asymp {\mathcal K}(t,x;X_0,X_1),\qquad t>0, \end{equation*} \notag $$
with constants independent of $\varphi$.

A detailed presentation of concepts and results mentioned here and many other results and facts from interpolation theory can be found in [26], [36], [69], and [25].

3. Auxiliary results and constructions

In what follows, along with methods of the interpolation theory of operators we also use approaches related to the comparison of the norms of sums of independent functions and their disjoint copies, as well as the properties of convex and concave functions.

3.1. Rosenthal’s inequalities and Johnson–Schechtman theorem

In 1970, while studying the structure of complemented subspaces of $L_p=L_p[0,1]$, Rosenthal proved the following remarkable inequalities, which show that, up to equivalence, the $L_p$-norm of a sum of independent mean zero functions is determined by the $L_p$- and $L_2$-norms of the summands.

Theorem 3.1 ([96]). For every $p> 2$ there exists a constant $K_p>0$ such that for each sequence $\{f_k\}_{k=1}^\infty\subset L_p$ of independent functions satisfying $\displaystyle\int_0^1f_k(t)\,dt=0$, $k=1,2,\dots$, and each $n\in\mathbb N$

$$ \begin{equation} \begin{aligned} \, &\frac{1}{2}\max\biggl\{\biggl(\,\sum_{k=1}^n\|f_k\|_p^p\biggr)^{1/p}, \biggl(\,\sum_{k=1}^n\|f_k\|_2^2\biggr)^{1/2}\biggr\} \leqslant \biggl\|\,\sum_{k=1}^n f_k\biggr\|_p \\ &\qquad\qquad\qquad\qquad\qquad\leqslant K_p\max\biggl\{\biggl(\,\sum_{k=1}^n\|f_k\|_p^p\biggr)^{1/p}, \biggl(\,\sum_{k=1}^n\|f_k\|_2^2\biggr)^{1/2}\biggr\}. \end{aligned} \end{equation} \tag{3.1} $$

The expression compared with the norm $\biggl\|\,\displaystyle\sum_{k=1}^n f_k\biggr\|_p$ in (3.1) can also be represented in a slightly different way. For each sequence $\{f_k\}_{k=1}^\infty$ of functions on $[0,1]$, let $\{\bar{f}_k\}_{k=1}^\infty$ denote an arbitrary sequence of disjoint functions on the half-line $[0,\infty)$ such that for each $k=1,2,\dots$ the function $\bar{f}_k$ has the same distribution as $f_k$. For example, we can set

$$ \begin{equation*} \bar{f}_k(t)=f_k(t-k+1)\cdot \chi_{[k-1,k)}(t),\qquad t>0. \end{equation*} \notag $$
Then the function
$$ \begin{equation*} {\mathbf f}_n:=\sum_{k=1}^n \bar{f}_k, \end{equation*} \notag $$
called the disjoint sum of the functions $f_k$, $k=1,\dots,n$, has the following obvious property:
$$ \begin{equation} m\{t>0\colon {\mathbf f}_n(t)>\tau\}=\sum_{k=1}^n m\{t\in [0,1]\colon f_k(t)>\tau\},\qquad \tau>0. \end{equation} \tag{3.2} $$

Using this concept we can rewrite (3.1) as follows: if $p\geqslant 2$ and the independent functions $f_k$ have mean value zero on $[0,1]$, then, with constants depending only on $p$,

$$ \begin{equation} \biggl\|\,\sum_{k=1}^n f_k\biggr\|_p\asymp \|{\mathbf f}_n\|_{(L_p\cap L_2)[0,\infty)},\qquad n\in\mathbb{N}. \end{equation} \tag{3.3} $$
The dual result is also true: if $1\leqslant p\leqslant 2$ and the functions $f_k$ satisfy the same conditions, then
$$ \begin{equation} \biggl\|\,\sum_{k=1}^n f_k\biggr\|_p\asymp \|{\mathbf f}_n\|_{(L_p +L_2)[0,\infty)},\qquad n\in\mathbb{N}. \end{equation} \tag{3.4} $$
Here
$$ \begin{equation*} \|g\|_{(L_p\cap L_q)[0,\infty)}=\max\bigl\{\|g\|_{L_p[0,\infty)}, \|g\|_{L_q[0,\infty)}\bigr\} \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, \|g\|_{(L_p+L_q)[0,\infty)}&=\inf\bigl\{\|g_0\|_{L_p[0,\infty)}+ \|g_1\|_{L_q[0,\infty)}\colon \\ &\qquad\qquad\qquad g_0+g_1=g,\ g_0\in L_p[0,\infty),\ g_1\in L_q[0,\infty)\bigr\}. \end{aligned} \end{equation*} \notag $$

Thus, for some constants depending only on $p$ a sequence $\{f_k\}_{k=1}^\infty$ satisfying the above conditions is equivalent in $L_p[0,1]$ to the sequence $\{\bar{f}_k\}_{k=1}^\infty$ of disjoint copies of these functions in the space $(L_p\cap L_2)[0,\infty)$ for $p\geqslant 2$ and in $(L_p+L_2)[0,\infty)$ for $1\leqslant p\leqslant 2$.
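The bridge between (3.1) and (3.3) is the elementary identity $\|{\mathbf f}_n\|_r^r=\sum_{k=1}^n\|f_k\|_r^r$, valid because the $\bar{f}_k$ have disjoint supports and each $\bar{f}_k$ is equimeasurable with $f_k$. A quick sanity check with piecewise-constant functions on a uniform grid (an illustrative discretization of ours; the specific sample values are arbitrary):

```python
import math

p = 4.0                       # illustrative exponent p >= 2
n, m = 4, 200                 # n functions, each sampled at m points of [0,1]
# deterministic piecewise-constant stand-ins for f_1,...,f_n
fs = [[math.sin(17.0 * k + 0.37 * j) for j in range(m)] for k in range(n)]

def lr_norm(vals, r, cell):   # discretized L_r norm (cell = length of a grid cell)
    return (sum(abs(v) ** r for v in vals) * cell) ** (1.0 / r)

disjoint = [v for f in fs for v in f]        # the disjoint sum f_n, living on (0, n)
lhs = max(lr_norm(disjoint, p, 1.0 / m), lr_norm(disjoint, 2.0, 1.0 / m))
rhs = max(sum(lr_norm(f, p, 1.0 / m) ** p for f in fs) ** (1.0 / p),
          sum(lr_norm(f, 2.0, 1.0 / m) ** 2 for f in fs) ** (1.0 / 2))
# the (L_p \cap L_2)-norm of the disjoint sum equals the max expression of (3.1)
assert abs(lhs - rhs) < 1e-9
print("||f_n||_{Lp cap L2} coincides with the max expression from (3.1)")
```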

Stated in this form, Rosenthal’s inequalities lead to the following natural question: for what function spaces, apart from $L_p$, do similar inequalities hold? As equality (3.2) shows, r. i. spaces are the most natural candidates for inequalities of this type.

In the 1980s a number of papers discussed the problem of the comparison of sums of independent and disjoint functions in various spaces and the applications of such relations to the study of the geometry of these spaces. For instance, in 1988 Carothers and Dilworth [38] obtained analogues of (3.3) and (3.4) for the Lorentz spaces $L_{p,q}$, $1\leqslant p<\infty$, $0<q\leqslant \infty$ (see the definitions of these spaces in § 2.2). Around the same time Johnson and Schechtman [58], using an inequality due to Hoffmann-Jørgensen [54], made significant progress on this problem by extending inequalities (3.3) and (3.4) to general r. i. spaces. To state their result we introduce a useful construction (see [57] or [79], § 2.f), which we use repeatedly below. If $X$ is an r. i. space on $[0,1]$ and $1\leqslant p\leqslant\infty$, then the set $Z_X^p$ consists of all measurable functions $f$ on $(0,\infty)$ such that

$$ \begin{equation*} \|f\|_{Z_X^p}:=\|f^*\chi_{[0,1]}\|_X+ \|f^*\chi_{[1,\infty)}\|_{L_p[1,\infty)}<\infty. \end{equation*} \notag $$
The functional $f\mapsto \|f\|_{Z_X^p}$ is just a quasinorm (in general, the triangle inequality only holds with a constant greater than 1). Nevertheless, for each r. i. space $X$ it is equivalent to an appropriate symmetric norm. In fact, as $X\subset L_1[0,1]$, by the well-known relation (for instance, see [55] or [26], Exercise 5.7.3)
$$ \begin{equation*} \|f\|_{(L_1+L_p)(0,\infty)}\asymp \int_{{0}}^1f^*(t)\,dt+ \biggl(\int_{{1}}^\infty f^*(t)^p\,dt\biggr)^{1/p},\qquad f\in (L_1+L_p)(0,\infty), \end{equation*} \notag $$
the functional $f\mapsto \|f\|_{Z_X^p}$ is equivalent to the norm
$$ \begin{equation*} \|f\|_{Z_X^p}':=\|f^*\chi_{[0,1]}\|_{X}+\|f\|_{(L_1+L_p)[0,\infty)}. \end{equation*} \notag $$
Thus, $Z_X^p$ becomes an r. i. space on $[0,\infty)$.

Theorem 3.2 ([58], Theorem 1). Let $X$ be an r. i. space on $[0,1]$. Then there exists $C=C_X>0$ such that for each sequence of independent functions $\{f_k\}_{k=1}^\infty\subset X$ such that $\displaystyle\int_0^1f_k(t)\,dt=0$, $k=1,2,\dots$, we have

$$ \begin{equation} \|{\mathbf f}\|_{Z_X^2}\leqslant C\biggl\|\,\sum_{k=1}^\infty f_k\biggr\|_{X}, \end{equation} \tag{3.5} $$
where ${\mathbf f}$ is the disjoint sum of the functions $f_k$, $k=1,2,\dots$ .

If, in addition, $X\supset L_p$ for some $p<\infty$, then there exists $C_1=C_1(p)>0$ such that the reverse inequality holds for each sequence satisfying the same conditions:

$$ \begin{equation} \biggl\|\,\sum_{k=1}^\infty f_k\biggr\|_{X}\leqslant C_1\|{\mathbf f}\|_{Z_X^2}. \end{equation} \tag{3.6} $$

3.2. Kruglov property and comparison of the norms of sums of independent functions and their disjoint copies

The embedding $X\supset L_p$ for some $p<\infty$ mentioned in the second part of Theorem 3.2 (and indicating that $X$ is ‘distant’ from $L_\infty$ in a certain sense) is not necessary for inequality (3.6) to hold. It also holds for some (but certainly not all) r. i. spaces positioned ‘on the other side’ of the spaces $L_p$, $p<\infty$. Under certain conditions on a system of independent functions, for a certain class of Orlicz spaces this already follows from probabilistic constructions due to Kruglov [70], which are related to infinitely divisible distributions and, in particular, to an analysis of the classical Lévy–Khintchine formula. Slightly later, extending that observation of [70] to the whole class of r. i. spaces, Braverman introduced the notion of the Kruglov property and developed a new (relative to the method in [58]) ‘probabilistic’ approach to the comparison of sums of independent and disjoint functions (see [32]).

Let $f$ be a measurable function (a random variable) on $[0,1]$. We let $\pi(f)$ denote the sum $\displaystyle\sum_{i=1}^Nf_i$, where the $f_i$ are independent copies of $f$ and $N$ is a random variable which is independent of the sequence $\{f_i\}$ and has the Poisson distribution with parameter 1. Direct calculations show that the (probability) distribution function of $\pi(f)$ is given by the equality

$$ \begin{equation*} {\mathcal F}_{\pi(f)}(t)=\frac{1}{e}\biggl(\chi_{(0,\infty)}(t)+ {\mathcal F}_{f}(t)+\sum_{l=2}^\infty\frac{1}{l!} {\mathcal F}_{f}^{*l}(t)\biggr),\qquad t\in\mathbb{R}, \end{equation*} \notag $$
where we let ${\mathcal F}_{f}^{*l}$ denote the $l$-fold convolution of the distribution function ${\mathcal F}_{f}$.

We say that an r. i. space $X$ on $[0,1]$ has the Kruglov property (in brief, $X\in \mathbb{K}$) if the relation $f\in X$ implies that $\pi(f)\in X$.

Note that the converse implication $\pi(f)\in X\,\Longrightarrow\,f\in X$ is always true. In fact, by the definition of $\pi(f)$ we have $m\{|\pi(f)|\geqslant \tau\}\geqslant e^{-1}m\{|f|\geqslant \tau\}$ for all $\tau>0$, so, since $X$ is an r. i. space, it follows that $\|f\|_X\leqslant e\|\pi(f)\|_X$.
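The construction of $\pi(f)$ is easy to simulate. In the simplest case $f\equiv 1$ (our illustrative choice, not from the survey; the sample size and tolerances are likewise ours) the sum of $N$ independent copies of $f$ is just $N$ itself, so $\pi(f)=N$ has the Poisson distribution with parameter $1$ and $m\{\pi(f)=0\}=e^{-1}$; the sketch checks this and the lower bound $m\{|\pi(f)|\geqslant 1\}\geqslant e^{-1}m\{|f|\geqslant 1\}=e^{-1}$.

```python
import math
import random

rng = random.Random(0)        # fixed seed for reproducibility

def poisson1(rng):
    """Sample N ~ Poisson(1) by Knuth's product-of-uniforms method."""
    threshold, k, prod = math.exp(-1.0), 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

# For f identically 1 we have pi(f) = N, so P(pi(f) = 0) = e^{-1}
samples = [poisson1(rng) for _ in range(200_000)]
freq_zero = samples.count(0) / len(samples)
assert abs(freq_zero - math.exp(-1.0)) < 0.01
# the general lower bound m{|pi(f)| >= tau} >= e^{-1} m{|f| >= tau}, at tau = 1:
assert sum(s >= 1 for s in samples) / len(samples) >= math.exp(-1.0) - 0.01
print("empirical P(pi(f) = 0) is close to 1/e")
```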

Simplifying slightly, we can say that r. i. spaces possessing the Kruglov property are sufficiently ‘distant’ from the space $L_\infty$. For instance, it is known that all maximal r. i. spaces containing $L_p$ for some $p<\infty$ have this property ([32], Theorem I.2); in particular, spaces $X$ with positive lower Boyd index $\mu_X$ are such (see § 2.2). But not only they: for instance, the exponential Orlicz space $\exp L_p$ belongs to the class $\mathbb{K}$ if and only if $0<p\leqslant 1$. This follows from Kruglov’s theorem [70] mentioned above and the argument in the beginning of § II.4 of [32].

On the other hand, the Kruglov property is quite subtle and cannot be characterized in terms of embeddings. For example, $\exp L_1$, the minimal exponential Orlicz space with the Kruglov property, does not play this role in the class of all r. i. spaces: there exist both spaces $X\overset{\ne}{\subset} \exp L_1$ with the Kruglov property and spaces $X\overset{\ne}{\supset} \exp L_1$ without it (see details in [17], § 4.3).

The Kruglov property turns out to be a very useful tool in the study of the geometric properties of Banach spaces of measurable functions, and it plays an important role in the monograph [32]. This is due to the following result, showing that under a certain extra condition on a sequence of independent functions the Kruglov property ensures inequality (3.6).

Theorem 3.3 ([32], Lemma I.4). Let the r. i. space $X$ belong to the class $\mathbb{K}$. Then there exists a positive constant $C=C_X$ such that for each sequence of independent functions $\{f_k\}_{k=1}^\infty\subset X$ satisfying

$$ \begin{equation} \sum_{k=1}^\infty m\{f_k\ne 0\}\leqslant 1 \end{equation} \tag{3.7} $$
the inequality
$$ \begin{equation} \biggl\|\,\sum_{k=1}^\infty f_k\biggr\|_{X}\leqslant C\|{\mathbf f}\|_{X} \end{equation} \tag{3.8} $$
holds.

Remark 3.1. Since by (3.7) the ‘support’ of the disjoint sum ${\mathbf f}=\displaystyle\sum_{k=1}^\infty \bar{f}_k$ lies in $[0,1]$, we can consider arbitrary (not necessarily mean zero) independent functions and replace $\|{\mathbf f}\|_{Z_X^2}$ by $\|{\mathbf f}\|_{X}$.

In comparing Theorems 3.2 and 3.3 it is natural to ask about the possible extension of (3.6) to arbitrary r. i. spaces with the Kruglov property (or, equivalently, about the validity of Theorem 3.3 without condition (3.7) after an appropriate modification of the right-hand side of (3.8)). An affirmative answer was given by Astashkin and Sukochev, who used an operator approach developed by them in [12] and [13]. Here we only briefly comment on this result (details can be found in [17]).

On the space of all Lebesgue-measurable, almost everywhere finite functions on $[0,1]$ we can define a positive linear operator ${\mathbf K}$ which is bounded in an r. i. space $X$ if and only if $X\in \mathbb{K}$ (for this reason ${\mathbf K}$ is also called the Kruglov operator). This approach lets us take advantage of the operator language: for instance, we can use the interpolation theory of operators or investigate the more general case when the norms on the left- and right-hand sides of (3.6) are taken in different spaces. Based on this, not only have a number of questions directly concerning the Kruglov property been answered, but the property itself has also been used more efficiently in many cases (see [11], [16], [19], and [17]).

The original version of the following theorem, which improves both Theorem 3.2 and Theorem 3.3, was established in [14] on the basis of the technique of infinitely divisible distributions. Slightly later (see [19], Theorem 21, and [17], Theorem 25), by the use of Prohorov’s arcsine inequality [91] its proof was, first, significantly simplified and, second, extended to the case of quasi-Banach r. i. spaces.

Theorem 3.4. Let the r. i. space $X$ belong to the class $\mathbb{K}$. Then there exists a positive constant $\kappa^{}_X$ such that for any sequence $\{f_k\}_{k=1}^\infty\subset X$ of independent functions satisfying $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$, we have

$$ \begin{equation} \biggl\|\,\sum_{k=1}^\infty f_k\biggr\|_{X}\leqslant \kappa^{}_X\|{\mathbf f}\|_{Z_X^2}. \end{equation} \tag{3.9} $$

Thus, the generalized Rosenthal inequality holds in any r. i. space with the Kruglov property.

Remark 3.2. The constant $\kappa^{}_X$ is equal to the product of a universal constant by the norm of the Kruglov operator ${\mathbf K}$ in the space $X$ (see [19], Theorem 21, or [17], Theorem 25).

Remark 3.3. For separable r. i. spaces the result converse to Theorem 3.4 also holds. More precisely, if there exists a positive constant $C=C_X $ such that inequality (3.8) holds for an arbitrary sequence of independent functions $\{f_k\}_{k=1}^\infty\subset X$ satisfying (3.7), then $X\in \mathbb{K}$ and $\|{\mathbf K}\|_{X\to X}\leqslant C$ ([17], Theorem 23).

3.3. $p$-convex and $p$-concave Orlicz functions

Let $1\leqslant p<\infty$. We say that an Orlicz function $M$ is $p$-convex ($p$-concave) if the map $t \mapsto M(t^{1/p})$ is convex (concave, respectively) on $[0,\infty)$.

The following well-known result is a direct consequence of the definitions (for instance, see [69], § II.1).

Lemma 3.1. Let $\psi$ be an Orlicz function.

(i) If $\psi$ is $p$-convex, then the function $t\mapsto{\psi(t)}/{t^p}$, $t>0$, is increasing.

(ii) If $\psi$ is $p$-concave, then the function $t\mapsto{\psi(t)}/{t^p}$, $t>0$, is decreasing.

For the proof of the following characterization of the concepts introduced the reader can consult [84], Lemma 20, or [18], Lemma 5 (also see the definition of the Matuszewska–Orlicz indices in § 2.3).

Lemma 3.2. Let $1\leqslant p<\infty$, and let $\psi$ be an Orlicz function on $[0,\infty)$. Then the following results hold.

(i) $\psi$ is equivalent to a $p$-convex (respectively, a $p$-concave) function at zero if and only if $\psi(st)\leqslant C s^{p}\psi(t)$ (respectively, $s^p\psi(t)\leqslant C \psi(st)$) for some $C>0$ and all $0<t,s\leqslant 1$.

(ii) $\psi$ is equivalent to a $(p+\varepsilon)$-convex (respectively, a $(p-\varepsilon)$-concave) function at zero for some $\varepsilon>0$ if and only if $\alpha_\psi^0>p$ (respectively, $\beta_\psi^0<p$).

Lemma 3.3. Let $1\leqslant p<q<\infty$, and let $\psi$ be an Orlicz function. Then the following conditions are equivalent:

(a) $\psi$ is equivalent to a $p$-convex and $q$-concave function at zero and

$$ \begin{equation*} \lim_{t\to +0}\psi(t)/t^p=0; \end{equation*} \notag $$

(b) there exists an increasing concave function $\varphi$ on $(0,1]$ such that

$$ \begin{equation*} \lim_{t\to +0}\varphi(t)=0 \end{equation*} \notag $$
and
$$ \begin{equation*} \psi(t)\asymp t^p\varphi(t^{q-p}),\qquad 0\leqslant t\leqslant 1. \end{equation*} \notag $$

Proof. (a) $\Rightarrow$ (b). We assume without loss of generality that $\psi$ is itself $p$-convex and $q$-concave at zero. Set $\varphi_1(t):=t^{-p/(q-p)}\psi(t^{1/(q-p)})$, $0<t\leqslant 1$. Since $\psi$ is $p$-convex and $q$-concave, $\varphi_1(t^{q-p})=\psi(t)t^{-p}$, and $\varphi_1(t^{q-p})t^{p-q}=\psi(t)t^{-q}$, it follows from Lemma 3.1 that $\varphi_1$ is increasing and the function $\varphi_1(t)/t$ is decreasing. Hence $\varphi_1$ is quasiconvex and therefore equivalent on $[0,1]$ to its least concave majorant $\varphi$ (for instance, see [69], Theorem II.1.1). Hence
$$ \begin{equation*} \psi(t)=t^p\varphi_1(t^{q-p})\asymp t^p\varphi(t^{q-p}),\qquad 0\leqslant t\leqslant 1, \end{equation*} \notag $$
where $\varphi$ is an increasing concave function on $(0,1]$. In addition, by condition (a) we have
$$ \begin{equation*} \lim_{t\to +0}\varphi(t)=\lim_{t\to +0}\frac{\psi(t)}{t^p}=0, \end{equation*} \notag $$
which proves (b).

(b) $\Rightarrow$ (a). Let $\varphi$ be the function in condition (b). We define a function $\psi_1$ by setting $\psi_1(t):=t^p\varphi(t^{q-p})$ for $0\leqslant t\leqslant 1$ and $\psi_1(t)=\varphi(1)t^p$ for $t\geqslant 1$. It can readily be verified that $\psi_1(t^{1/p})/t$ is an increasing function, while $\psi_1(t^{1/q})/t$ is decreasing. Hence if $\psi_2(t):=\displaystyle\int_0^t \dfrac{\psi_1(s)}{s}\,ds$, $t>0$, then it follows from the equality

$$ \begin{equation*} \psi_2(t)=\int_0^{t^p}\frac{\psi_1(s^{1/p})}{ps}\,ds= \int_0^{t^q}\frac{\psi_1(s^{1/q})}{qs}\,ds,\qquad t>0, \end{equation*} \notag $$
that $\psi_2$ is a $p$-convex and $q$-concave Orlicz function. Furthermore, by the above equalities
$$ \begin{equation*} \frac{1}{q}\psi_1(t)\leqslant \psi_2(t)\leqslant \frac{1}{p}\psi_1(t),\qquad t>0. \end{equation*} \notag $$
Hence the assumptions and the definition of $\psi_1$ yield the equivalence of $\psi$ and $\psi_2$ at zero. Since
$$ \begin{equation*} \lim_{t\to +0}\frac{\psi(t)}{t^p}=\lim_{t\to +0}\varphi(t^{q-p})=0, \end{equation*} \notag $$
we obtain (a). $\Box$

Let $1 \leqslant p \leqslant\infty$. A Banach lattice $X$ is said to be $p$-convex (respectively, $p$-concave) if there exists $C>0$ such that for each $n\in\mathbb{N}$ and arbitrary vectors $x_{1},x_{2},\dots,x_{n}$ in $X$

$$ \begin{equation*} \biggl\|\biggl(\,\sum_{k=1}^n |x_k|^p\biggr)^{1/p}\biggr\|_X \leqslant C\biggl(\,\sum_{k=1}^n \|x_k\|_X^p\biggr)^{1/p}, \end{equation*} \notag $$
(respectively,
$$ \begin{equation*} \biggl(\,\sum_{k=1}^n\|x_k\|_X^p\biggr)^{1/p} \leqslant C\biggl\|\biggl(\,\sum_{k=1}^n|x_k|^p\biggr)^{1/p}\biggr\|_X\biggr); \end{equation*} \notag $$
for $p=\infty$ the expressions must be modified in the natural way.

It is obvious that each Banach lattice is $1$-convex and $\infty$-concave with constant $1$. In addition, for each measure space the space $L_p$ is $p$-convex and $p$-concave, also with constant $1$.

It is easy to verify that an Orlicz space $L_M[0,1]$ is $p$-convex (respectively, $p$-concave) if and only if the function $M$ is equivalent at infinity to some $p$-convex (respectively, $p$-concave) Orlicz function. In a similar way an Orlicz sequence space $\ell_\psi$ is $p$-convex (respectively, $p$-concave) if and only if the function $\psi$ is equivalent at zero to a $p$-convex (respectively, $p$-concave) Orlicz function.

3.4. Subspaces of Orlicz spaces spanned by independent identically distributed mean zero functions

For $1\leqslant p<2$ the structure of subspaces of the space $L_p$ is quite intricate, and no effective description of it has been found so far. Even if we limit ourselves to subspaces of $L_p$ with symmetric bases, we know only that they are certain $p$-averages of Orlicz spaces (see [40], and also see [71] for the corresponding finite-dimensional result; furthermore, the case of general r. i. function spaces was considered in [93]).

We know much more about subspaces of $L_p$ that are isomorphic to Orlicz sequence spaces. We will see in § 4 that they can be described as the closed linear spans of sequences of independent copies of mean zero functions. Here we solve the simpler inverse problem by showing that in a separable Orlicz function space on $[0,1]$ a sequence of this type is always equivalent to the canonical basis of some Orlicz sequence space. Apparently, such a description appeared originally in the paper [40] (see the theorem on p. X.8 there).

Let $M$ be an Orlicz function on $[0,\infty)$, $M\in \Delta_2^\infty$, and let $L_M=L_M[0,1]$ be an Orlicz space. Also set

$$ \begin{equation} \theta(u):=\begin{cases} u^2, & 0<u\leqslant 1, \\ M(u), & u >1. \end{cases} \end{equation} \tag{3.10} $$

Generally speaking, the function $\theta$ is not convex. Nevertheless, $\theta(t)/t$ is increasing and continuous, and since $M\in \Delta_2^\infty$, it follows that $\theta\in\Delta_2$. Hence $\theta$ is equivalent to the Orlicz function $\widetilde{\theta}(t):= \displaystyle\int_0^t\dfrac{\theta(u)}{u}\,du$ on $(0,\infty)$. In fact, on the one hand $\displaystyle\int_0^t\dfrac{\theta(u)}{u}\,du\leqslant \theta(t)$, $t>0$, while on the other hand, for some $c>0$ we have

$$ \begin{equation*} \int_0^t\frac{\theta(u)}{u}\,du\geqslant \int_{t/2}^t\frac{\theta(u)}{u}\,du\geqslant \theta\biggl(\frac{t}{2}\biggr) \geqslant c \theta(t),\qquad t>0. \end{equation*} \notag $$
Thus, we can define the Orlicz space $L_\theta=L_\theta[0,\infty)$.
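The two estimates above are easy to confirm numerically. With the illustrative choice $M(u)=u^3$ (ours; any $M\in\Delta_2^\infty$ would do), the sketch below checks $\theta(t/2)\leqslant\widetilde{\theta}(t)\leqslant\theta(t)$ on a few test points, so that $\theta$ and $\widetilde{\theta}$ are indeed equivalent there.

```python
M = lambda u: u ** 3          # illustrative Orlicz function; M is in Delta_2^inf

def theta(u):                 # the function (3.10)
    return u * u if u <= 1.0 else M(u)

def theta_tilde(t, steps=20_000):
    """Right Riemann sum for the Orlicz function  integral_0^t theta(u)/u du."""
    h = t / steps
    return sum(theta((i + 1) * h) / ((i + 1) * h) for i in range(steps)) * h

# the two estimates from the text: theta_tilde(t) <= theta(t)  and
# theta_tilde(t) >= theta(t/2), so theta is equivalent to theta_tilde
for t in [0.1, 0.5, 1.0, 2.0, 10.0]:
    tt = theta_tilde(t)
    assert theta(t / 2.0) <= tt <= theta(t) + 1e-9
print("theta is equivalent to its Orlicz regularization theta_tilde")
```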

We start by showing that $L_\theta$ coincides with the space $Z_{L_M}^2$ (see the definition in § 3.1). Closely related results were proved in [83], Theorem 1, [11], Proposition 2.2, and also in [52], in the special case when $L_M=L_1$.

Lemma 3.4. If $\theta$ is the function defined by (3.10), then up to equivalence of norms

$$ \begin{equation*} Z_{L_M}^2=L_\theta. \end{equation*} \notag $$

Proof. Since the spaces under consideration are rearrangement invariant, we assume in what follows without loss of generality that a function $g$ is non-negative and decreasing, that is, $g^*=g$.

On the one hand, for some universal constants, for $1\leqslant p<\infty$ and each r. i. space $X$ we have

$$ \begin{equation} \|g\|_{Z_X^p}\asymp\|g\chi_{[0,1]}\|_X+ \biggl(\,\sum_{k=1}^\infty g(k)^p\biggr)^{1/p}. \end{equation} \tag{3.11} $$
In fact, since $\|\chi_{[0,1]}\|_X=1$, it follows that
$$ \begin{equation*} \begin{aligned} \, \biggl(\int_1^\infty g(t)^p\,dt\biggr)^{1/p}&= \biggl(\,\sum_{k=1}^\infty\int_k^{k+1} g(t)^p\,dt\biggr)^{1/p}\leqslant \biggl(\,\sum_{k=1}^\infty g(k)^p\biggr)^{1/p} \\ &\leqslant g(1)+\biggl(\,\sum_{k=2}^\infty g(k)^p\biggr)^{1/p}\leqslant \|g\chi_{[0,1]}\|_X+\biggl(\int_1^\infty g(t)^p\,dt\biggr)^{1/p}, \end{aligned} \end{equation*} \notag $$
and relation (3.11) follows from the definition of the quasinorm in $Z_X^p$.

On the other hand we can show that (for some universal constants again)

$$ \begin{equation} \|g\|_{L_\theta}\asymp\|g\chi_{[0,1]}\|_{L_M}+ \biggl(\,\sum_{k=1}^\infty g(k)^2\biggr)^{1/2}. \end{equation} \tag{3.12} $$

First let $\|g\|_{L_\theta}\leqslant 1$, that is, $\displaystyle\int_0^\infty \theta(g(t))\,dt\leqslant 1$. Since $\theta$ is increasing, we have

$$ \begin{equation*} \theta(g(1))\leqslant \int_0^1 \theta(g(t))\,dt\leqslant 1. \end{equation*} \notag $$
Hence from (3.10) we obtain $g(1)\leqslant 1$ (the function $M$ is strictly increasing and $M(1)= 1$!), and therefore $\theta(g(1))=g(1)^2$ by the definition of $\theta$. Thus,
$$ \begin{equation*} \sum_{k=1}^\infty g(k)^2\leqslant \theta(g(1))+ \int_1^\infty \theta(g(t))\,dt\leqslant 2. \end{equation*} \notag $$
Furthermore, it follows from the above estimates that
$$ \begin{equation*} c_0:=m\{t>0\colon g(t)>1\}\leqslant 1. \end{equation*} \notag $$
Hence, bearing in mind that $M(t)\leqslant M(1)=1$ for $0\leqslant t\leqslant 1$, we obtain
$$ \begin{equation*} \int_0^1 M(g(t))\,dt\leqslant \int_0^{c_0} \theta(g(t))\,dt+ (1-c_0)\leqslant 2. \end{equation*} \notag $$
Now, as $M$ is convex, we have $\|g\chi_{[0,1]}\|_{L_M}\leqslant 2$. Thus, the above estimates show that the right-hand side of (3.12) is at most $2+\sqrt{2}$, so the $\geqslant$-inequality in (3.12) is established by homogeneity.

Conversely, let the right-hand side of (3.12) be at most $1$. Then, since $g(1)\leqslant 1$ and $c_0=m\{t>0\colon g(t)>1\}\leqslant 1$, we have

$$ \begin{equation*} \int_0^\infty \theta(g(t))\,dt\leqslant \int_0^{c_0} M(g(t))\,dt+(1-c_0)+\sum_{k=1}^\infty g(k)^2\leqslant 3. \end{equation*} \notag $$
Hence by the inequality $\theta(u/3)\leqslant \theta(u)/3$ we have $\|g\|_{L_\theta}\leqslant 3$. Thus, using the same argument as previously we obtain the reverse estimate, which proves (3.12).

To complete the proof it remains to observe that the statement of the lemma is a direct consequence of (3.11) and (3.12). $\Box$

Below we use repeatedly the following notation: if $f$ is a function on $[0,1]$ and $a=(a_k)_{k=1}^\infty$ is a sequence of numbers, then we denote by $a\,\,\overline\otimes\,\,f$ the disjoint sum of the functions $a_k f(t)$, $k=1,2,\dots$ (see § 3.1), that is,

$$ \begin{equation*} (a \,\overline\otimes\, f)(s):= \sum_{k=1}^\infty a_kf(s-k+1)\cdot\chi_{[k-1,k)}(s),\qquad s>0. \end{equation*} \notag $$

As before, let $M$ be an Orlicz function, $M\in \Delta_2^\infty$, and let $\theta$ be the function defined by (3.10). If $f\in L_M$, $f\ne 0$, then, similarly to $\theta$, the function $\psi$ defined by

$$ \begin{equation} \psi(u):=\int_0^1\theta(u|f(t)|)\,dt,\qquad u>0, \end{equation} \tag{3.13} $$
is equivalent to some Orlicz function.

Proposition 3.1. Let $f\in L_M$, and let $\{f_k\}_{k=1}^\infty$ be a sequence of independent functions equimeasurable with $f$ and satisfying $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$ . Then for some constants independent of $(a_k)\in \ell_\psi$, where the function $\psi$ is defined by (3.13), we have

$$ \begin{equation} \biggl\|\,\sum_{k=1}^\infty a_kf_k\biggr\|_{L_M}\asymp \|(a_k)\|_{{\ell_\psi}}, \end{equation} \tag{3.14} $$
that is, the sequence $\{f_k\}_{k=1}^\infty$ is equivalent in $L_M$ to the canonical basis of the Orlicz sequence space ${\ell_\psi}$.

Proof. As mentioned in § 2.3 already, the condition $M\in \Delta_2^\infty$ is equivalent to the fact that $\beta_M^\infty< \infty$ or, which is the same, to the inequality $\mu_{L_M}>0$. Hence the space $L_M$ has the Kruglov property (see § 3.2), so that by Theorems 3.2 and 3.4 and our notation, for some constants independent of $a_k\in\mathbb{R}$, $k=1,2,\dots$, we have
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty a_kf_k\biggr\|_{L_M}\asymp \|a \,\overline\otimes\, f\|_{Z_{L_M}^2}. \end{equation*} \notag $$
Hence, using Lemma 3.4 and taking account of the equimeasurability of the $f_k$, $k=1,2,\dots$, with $f$ and the definition of $\psi$ (see (3.13)) we obtain
$$ \begin{equation*} \begin{aligned} \, \biggl\|\,\sum_{k=1}^\infty a_kf_k\biggr\|_{L_M}&\asymp \|a \,\overline\otimes\, f\|_{L_\theta} \\ &=\inf\biggl\{\lambda >0\colon\int_{0}^{\infty}\theta \biggl(\frac{1}{\lambda}\sum_{k=1}^\infty |a_kf(t-k+1)| \chi_{[k-1,k)}(t)\biggr)\,dt\leqslant 1\biggr\} \\ &=\inf\biggl\{\lambda >0:\,\sum_{k=1}^\infty\int_{0}^{1}\theta \biggl(\frac{|a_k|\,|f(t)|}{\lambda}\biggr)\,dt\leqslant 1\biggr\}= \|(a_k)\|_{\ell_\psi}. \ \Box \end{aligned} \end{equation*} \notag $$
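The chain of equalities ending this proof can be sketched numerically: the Luxemburg norm of $a\,\overline\otimes\,f$ in $L_\theta$ on the half-line coincides with the $\ell_\psi$ norm of $(a_k)$, where $\psi(u)=\displaystyle\int_0^1\theta(u|f(t)|)\,dt$ as in (3.13). The concrete $\theta$ and $f$ below are our illustrative assumptions: $\theta(u)=u^2$ for $u\leqslant 1$ and $\theta(u)=u$ for $u\geqslant 1$, and $f(t)=t^{-1/3}$.

```python
# Illustrative sketch (assumed data: theta(u) = u^2 for u <= 1, theta(u) = u
# for u >= 1, and f(t) = t^(-1/3)): the L_theta norm of a (x) f equals the
# l_psi norm of (a_k) with psi(u) = int_0^1 theta(u |f(t)|) dt, as in (3.13).

def theta(u):
    return u * u if u <= 1.0 else u

def f(t):
    return t ** (-1.0 / 3.0)

N = 10_000
grid = [(i + 0.5) / N for i in range(N)]      # midpoint rule on (0, 1)

def psi(u):
    return sum(theta(u * f(t)) for t in grid) / N

a = [1.0, 0.4, 0.2, 0.1]

def luxemburg(modular):
    """inf{lam > 0 : modular(lam) <= 1}, by bisection in log scale."""
    lo, hi = 1e-6, 1e6
    for _ in range(60):
        mid = (lo * hi) ** 0.5
        lo, hi = (mid, hi) if modular(mid) > 1.0 else (lo, mid)
    return hi

# l_psi norm of (a_k)
norm_seq = luxemburg(lambda lam: sum(psi(abs(ak) / lam) for ak in a))
# L_theta norm of a (x) f, integrating theta(|a_k| f(t) / lam) interval by interval
norm_fun = luxemburg(
    lambda lam: sum(theta(abs(ak) * f(t) / lam) for ak in a for t in grid) / N)
assert abs(norm_seq - norm_fun) < 1e-6 * norm_seq
```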

Proposition 3.2. Let $f\in L_M$, and let $\{f_k\}_{k=1}^\infty$ be a sequence of independent functions equimeasurable with $f$ and such that $\displaystyle\int_0^1 f_k(t)\,dt= 0$, $k=1,2,\dots$ . Also assume that $\ell_\varphi$ is an Orlicz sequence space. Then the following conditions are equivalent.

(a) the sequence $\{f_k\}_{k=1}^{\infty}$ is equivalent in $L_M$ to the canonical basis $\{e_k\}_{k=1}^\infty$ of $\ell_\varphi$;

(b) with constants independent of $n\in\mathbb{N}$,

$$ \begin{equation} \biggl\|\,\sum_{k=1}^{n}f_k\biggr\|_{L_M}\asymp \biggl\|\,\sum_{k=1}^{n}e_k\biggr\|_{\ell_\varphi}; \end{equation} \tag{3.15} $$

(c) the following relation holds:

$$ \begin{equation} \frac{1}{\varphi^{-1}(t)}\asymp\|\widetilde{\sigma}_{1/t}f^*\|_{L_M}+ \biggl(\frac{1}{t}\int_{t}^1f^*(s)^2\,ds\biggr)^{1/2},\qquad 0<t\leqslant 1. \end{equation} \tag{3.16} $$

In particular, if $M(u)=u^p$, $1\leqslant p<\infty$, then conditions (a), (b), and (c) are equivalent to the following one:

(c$'$) the following relation holds:

$$ \begin{equation} \frac{1}{\varphi^{-1}(t)}\asymp \biggl(\frac{1}{t}\int_{0}^tf^*(s)^p\,ds\biggr)^{1/p}+ \biggl(\frac{1}{t}\int_{t}^1f^*(s)^2\,ds\biggr)^{1/2},\qquad 0<t\leqslant 1. \end{equation} \tag{3.17} $$

Proof. Again, without loss of generality we assume that $f^*=f$. Since the implication (a) $\Rightarrow$ (b) is obvious, we start by verifying that (b) $\Rightarrow$ (c).

If $s_n:=\displaystyle\sum_{k=1}^n e_k$, $n\in\mathbb{N}$, then on the one hand, by (2.1) (see § 2.3) we have

$$ \begin{equation} \|s_n\|_{\ell_\varphi}\asymp \frac{1}{\varphi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}. \end{equation} \tag{3.18} $$
On the other hand, by Lemma 3.4
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^n f_k\biggr\|_{L_M}\asymp \|s_n \,\overline\otimes\, f\|_{Z_{L_M}^2},\qquad n\in\mathbb{N}. \end{equation*} \notag $$
It is easy to verify that the function
$$ \begin{equation*} (s_n \,\overline\otimes\, f)(t)= \sum_{k=1}^n f(t-k+1)\chi_{[k-1,k)}(t),\qquad t>0, \end{equation*} \notag $$
is equimeasurable with $\sigma_n f(t):=f(t/n)$, $t>0$ (we identify $f$ defined on $[0,1]$ with the function $f\chi_{[0,1]}$ defined on $[0,\infty)$). Hence it follows from the equality $\sigma_nf\cdot \chi_{[0,1]}=\widetilde{\sigma}_{n}f$ (see § 2.2) that
$$ \begin{equation*} \begin{aligned} \, \biggl\|\,\sum_{k=1}^n f_k\biggr\|_{L_M}&\asymp \|\widetilde{\sigma}_{n}f\|_{L_M}+\|\sigma_nf\cdot \chi_{[1,\infty)}\|_{L_2} \\ &=\|\widetilde{\sigma}_{n}f\|_{L_M}+ \biggl(n\int_{1/n}^1f(s)^2\,ds\biggr)^{1/2},\qquad n\in\mathbb{N}. \end{aligned} \end{equation*} \notag $$
Thus, comparing this with (3.18), taking (3.15) into account, and bearing in mind that the functions involved are quasiconcave, we obtain (3.16).
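In connection with (3.18) we note that for the Luxemburg norm this relation is in fact an exact equality: the modular of $s_n$ at scale $\lambda$ is $n\varphi(1/\lambda)$, whence $\|s_n\|_{\ell_\varphi}=1/\varphi^{-1}(1/n)$. A numerical sketch, for the illustrative Orlicz function $\varphi(u)=u^{3/2}$ (our choice, not one fixed by the text):

```python
# Sketch of the equality ||s_n||_{l_phi} = 1/phi^{-1}(1/n): the modular of
# s_n = e_1 + ... + e_n at scale lam is n * phi(1/lam).  Illustrative
# assumption: phi(u) = u^{3/2}.

def phi(u):
    return u ** 1.5

def phi_inv(v):
    return v ** (1.0 / 1.5)

def luxemburg_sn(n):
    """inf{lam > 0 : n * phi(1/lam) <= 1}, by bisection in log scale."""
    lo, hi = 1e-9, 1e9
    for _ in range(100):
        mid = (lo * hi) ** 0.5
        lo, hi = (mid, hi) if n * phi(1.0 / mid) > 1.0 else (lo, mid)
    return hi

for n in (1, 2, 10, 1000):
    exact = 1.0 / phi_inv(1.0 / n)   # = n^{2/3} for this phi
    assert abs(luxemburg_sn(n) - exact) < 1e-9 * exact
```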

(c) $\Rightarrow$ (a). By Proposition 3.1 the sequence $\{f_k\}_{k=1}^{\infty}$ is equivalent in $L_M$ to the canonical basis of the Orlicz space $\ell_\psi$, where $\psi$ is defined by (3.13). Then relation (3.15) holds (for $\psi$ in place of $\varphi$), and therefore arguing as in the proof of the implication (b) $\Rightarrow$ (c) we see that

$$ \begin{equation*} \frac{1}{\psi^{-1}(t)}\asymp \|\widetilde{\sigma}_{1/t}f\|_{L_M}+ \biggl(\frac{1}{t}\int_t^1f(s)^2\,ds\biggr)^{1/2},\qquad 0<t\leqslant 1. \end{equation*} \notag $$
Hence we conclude from (3.16) that the Orlicz functions $\varphi$ and $\psi$ are equivalent on $(0,1]$, so that $\ell_\psi=\ell_\varphi$ up to equivalence of norms (see § 2.3). This proves (a).

To complete the proof of the proposition it remains to observe that the equivalence (3.17) is just relation (3.16) for $M(u)=u^p$, $1\leqslant p<\infty$. $\Box$

4. A description of subspaces of $L_p$-spaces that are isomorphic to Orlicz sequence spaces

By Proposition 3.1 each sequence $\{f_k\}_{k=1}^\infty$ of independent copies of a mean zero function $f$ in an Orlicz space $L_M:=L_M[0,1]$ is equivalent in $L_M$ to the canonical basis of a certain Orlicz sequence space $\ell_{\psi}$. For $L_p$-spaces we can go further: we find a description of the corresponding class of Orlicz functions $\psi$ (in its dependence on $p$) and also show that each subspace of $L_p$ that is isomorphic to some Orlicz sequence space $\ell_{\psi}$, $\ell_{\psi}\not\approx \ell_p$, can be obtained in this way.

Note that this problem is only non-trivial for $1\leqslant p<2$. In fact, for each mean zero function $f\in L_2$ a sequence of its independent copies $\{f_k\}_{k=1}^\infty$ consists of pairwise orthogonal functions, so by the classical result of Kadec and Pełczyński ([61], Corollary 4), in $L_p$ for $p\geqslant 2$ it is equivalent to the canonical ${\ell_2}$-basis.

Theorem 4.1. Let $1\leqslant p<2$, and let $\psi$ be an Orlicz function. Then the following conditions are equivalent:

(a) $\lim_{t\to +0}\psi(t)/t^p=0$, and $\psi$ is equivalent at zero to some $p$-convex and $2$-concave Orlicz function;

(b) there exists a function $f\in L_p$, $\displaystyle\int_0^1f(t)\,dt=0$, such that the sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent in $L_p$ to the canonical $\ell_{\psi}$-basis;

(c) there exists a function $f\in L_p$, $\displaystyle\int_0^1f(t)\,dt=0$, such that $\ell_{\psi}$ is isomorphic to a subspace of $L_p$ spanned by independent copies of $f$;

(d) the space $\ell_{\psi}$ is isomorphic to a subspace of $L_p$, but not to $\ell_p$.

In the 1960s–1970s the problem of the description of subspaces of $L_p$-spaces isomorphic to Orlicz sequence spaces was considered by Bretagnolle and Dacunha-Castelle. In particular, using the probabilistic approach they proved that assertions (a) and (d) in Theorem 4.1 are equivalent ([35], Theorem IV.3; also see the short note [34]). Subsequently, some of their results were re-discovered by Braverman, who, apart from probabilistic arguments, also made broad use of methods of complex analysis (see [31], Corollary 2.1, and [32], Chap. 3). Using combinatorial methods, a close result was also proved by Kwapień and Schütt [71] (also see [99]).

Here we prove Theorem 4.1 using another approach, based on the interpolation theory of operators (see § 2.4). It originates from the paper [18] and, based on more direct estimates, has certain advantages over the methods mentioned above. The main advantage is that, as a bonus, we reveal the intimate link existing between the function $f\in L_p$ whose independent copies we consider and the corresponding Orlicz function $\psi$ (see [18] and also Corollary 4.2).

In particular, the following result is a consequence of Theorem 4.1.

Corollary 4.1. Let ${\psi}$ be an Orlicz function. Then the following conditions are equivalent.

(a) $\lim_{t\to +0}\psi(t)/t=0$, and $\psi$ is equivalent to some $2$-concave Orlicz function at zero;

(b) there exists a function $f\in L_1$, $\displaystyle\int_0^1f(t)\,dt=0$, such that the sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent in $L_1$ to the canonical $\ell_{\psi}$-basis;

(c) there exists a function $f\in L_1$, $\displaystyle\int_0^1f(t)\,dt=0$, such that $\ell_{\psi}$ is isomorphic to a subspace of $L_1$ which is spanned by independent copies of $f$;

(d) the space $\ell_{\psi}$ is isomorphic to a subspace of $L_1$ but not to $\ell_1$.

To prove Theorem 4.1 we require two auxiliary results. The first reveals the close connection between the Orlicz functions arising in the description of subspaces of the space $L_p$ spanned by independent copies of mean zero functions and the $\mathcal {K}$-functional of a certain weighted $L_1$-couple.

For the definitions of the distribution function $n_{g}(\tau)$ of a measurable function $g$ and the Peetre ${\mathcal K}$-functional of a Banach couple, see §§ 2.2 and 2.4, respectively.

Proposition 4.1. Let $1\leqslant p<2$. Assume that an Orlicz function $\varphi$ satisfies the relation

$$ \begin{equation} \varphi(t)\asymp t^p{\mathcal K}(t^{2-p},h;L_1(u^{p-1}),L_1(u)),\qquad 0\leqslant t\leqslant 1, \end{equation} \tag{4.1} $$
where $h$ is a decreasing non-negative function on $[0,\infty)$ such that $h(0)\leqslant 1$.

If a measurable function $f$ on $[0,1]$ has the properties $\displaystyle\int_0^1f(s)\,ds=0$ and

$$ \begin{equation} m\{\tau>0\colon n_{f}(\tau)\ne h(\tau)\}=0, \end{equation} \tag{4.2} $$
then a sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent in $L_p$ to the canonical $\ell_{\varphi}$-basis.

Proof. First we show that there exists a function $f$ satisfying the assumptions of the proposition.

In fact, $h\in L_1(\min\{u,u^{p-1}\})$ by condition (4.1) and equality (2.2) for the ${\mathcal K}$-functional of a weighted $L_1$-couple. Therefore,

$$ \begin{equation*} \int_1^\infty h(u)\,du\leqslant \int_1^\infty h(u)u^{p-1}\,du\leqslant \|h\|_{L_1(\min\{u,u^{p-1}\})}<\infty. \end{equation*} \notag $$
Since $h$ is decreasing, it follows that $\lim_{t\to\infty}h(t)=0$. Thus, as $h(0)\leqslant 1$, we conclude that a function $f$ satisfying the assumption exists (see § 2.2). Furthermore, since
$$ \begin{equation*} \|f\|_p^p=p\int_0^\infty u^{p-1}n_f(u)\,du=p\int_0^\infty u^{p-1}h(u)\,du \leqslant p(1+\|h\|_{L_1(\min\{u,u^{p-1}\})})<\infty \end{equation*} \notag $$
(for instance, see [66], Appendix 1, Statement 1.1), we have $f\in L_p$.
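The layer-cake identity $\|f\|_p^p=p\displaystyle\int_0^\infty u^{p-1}n_f(u)\,du$ used here is easy to test numerically. A sketch for the illustrative function $f(t)=t^{-1/3}$, for which $n_f(u)=\min\{1,u^{-3}\}$ and both sides equal $2$ when $p=3/2$ (our choice of data):

```python
# Numerical sketch of the layer-cake identity ||f||_p^p = p int u^{p-1} n_f(u) du
# for the illustrative f(t) = t^(-1/3), n_f(u) = min(1, u^{-3}), p = 3/2;
# both sides equal int_0^1 t^{-1/2} dt = 2.

import math

p = 1.5
N = 200_000

# left-hand side: int_0^1 |f(t)|^p dt by the midpoint rule
lhs = sum(((i + 0.5) / N) ** (-p / 3.0) for i in range(N)) / N

# right-hand side: p int_0^infty u^{p-1} n_f(u) du, integrated in x = log(u)
rhs = 0.0
x0, x1, M = -25.0, 25.0, 200_000
hx = (x1 - x0) / M
for j in range(M):
    u = math.exp(x0 + (j + 0.5) * hx)
    rhs += p * u ** (p - 1.0) * min(1.0, u ** -3.0) * u * hx   # du = u dx

assert abs(lhs - 2.0) < 1e-2
assert abs(rhs - 2.0) < 1e-2
```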

Now, since the norm in an Orlicz sequence space depends (up to equivalence) only on the behaviour of the corresponding Orlicz function in a neighbourhood of zero (see § 2.3), in view of Proposition 3.1 it only remains to verify that the function $\varphi$ is equivalent at zero to the function $\psi$ defined by (3.13) in the case when $\theta(u)=u^2$ for $0<u\leqslant 1$ and $\theta(u)=u^p$ for $u\geqslant 1$.

Let us rewrite (3.13) as follows:

$$ \begin{equation*} \psi(u)=\int_0^\infty n_{\theta(u|f|)}(\tau)\,d\tau,\qquad u>0. \end{equation*} \notag $$
Direct calculations show that $n_{\theta(u|f|)}(\tau)=n_f(\tau^{1/2}/u)$ for $0<\tau\leqslant 1$ and $n_{\theta(u|f|)}(\tau)=n_f(\tau^{1/p}/u)$ for $\tau>1$. Hence
$$ \begin{equation*} \begin{aligned} \, \psi(u)&=\int_0^1 n_{f}\biggl(\frac{\tau^{1/2}}{u}\biggr)\,d\tau+ \int_1^\infty n_{f}\biggl(\frac{\tau^{1/p}}{u}\biggr)\,d\tau \\ &=2u^2\int_0^{1/u} s n_{f}(s)\,ds+pu^p\int_{1/u}^\infty s^{p-1} n_{f}(s)\,ds \\ &=\int_0^\infty h(s)\,d(\theta(us)),\qquad u>0. \end{aligned} \end{equation*} \notag $$
Now applying (2.2) to the case when the measure space is the half-line $(0,\infty)$ with Lebesgue measure, $w_1(u):=u^{p-1}$, and $w_2(u)=u$, for any $t>0$ we obtain
$$ \begin{equation*} \begin{aligned} \, \psi(t)&=t^p\biggl(p\int_{1/t}^\infty h(u)u^{p-1}\,du+ 2t^{2-p}\int_0^{1/t}h(u)u\,du\biggr) \\ &\asymp t^p\int_0^\infty h(u)\min\{u^{p-1},t^{2-p}u\}\,du \\ &=t^p{\mathcal K}\bigl(t^{2-p},h;L_1(u^{p-1}),L_1(u)\bigr) \end{aligned} \end{equation*} \notag $$
where the constants depend only on $p$. Since this and (4.1) imply that $\psi$ and $\varphi$ are equivalent at zero, the proof is complete.
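The final equivalence admits a direct numerical sketch. With $A=\displaystyle\int_{1/t}^\infty h(u)u^{p-1}\,du$ and $B=t^{2-p}\displaystyle\int_0^{1/t}h(u)u\,du$ the computation above gives $\psi(t)/t^p=pA+2B$, while by (2.2) the ${\mathcal K}$-functional equals $A+B$, since $\min\{u^{p-1},t^{2-p}u\}$ switches branches exactly at $u=1/t$; hence $p\,{\mathcal K}\leqslant \psi(t)/t^p\leqslant 2{\mathcal K}$. The data $h(u)=e^{-u}$, $p=3/2$ below are our illustrative assumptions.

```python
# Numerical sketch (illustrative data: h(u) = exp(-u), p = 3/2) of the
# relation psi(t)/t^p = p*A + 2*B, with K(t^(2-p), h; L_1(u^(p-1)), L_1(u))
# = A + B by (2.2), whence p*K <= psi(t)/t^p <= 2*K for 1 <= p < 2.

import math

p = 1.5

def integrate(F, x0=-25.0, x1=25.0, M=100_000):
    """Midpoint rule for int_0^infty F(u) du in the variable x = log(u)."""
    hx = (x1 - x0) / M
    total = 0.0
    for j in range(M):
        u = math.exp(x0 + (j + 0.5) * hx)
        total += F(u) * u * hx          # du = u dx
    return total

for t in (0.05, 0.3, 1.0):
    A = integrate(lambda u: math.exp(-u) * u ** (p - 1.0) if u >= 1.0 / t else 0.0)
    B = t ** (2.0 - p) * integrate(
        lambda u: math.exp(-u) * u if u < 1.0 / t else 0.0)
    K = integrate(lambda u: math.exp(-u) * min(u ** (p - 1.0), t ** (2.0 - p) * u))
    psi_over_tp = p * A + 2.0 * B
    assert abs(K - (A + B)) < 1e-6 * K
    assert p * K - 1e-9 <= psi_over_tp <= 2.0 * K + 1e-9
```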

Corollary 4.2. Let $1\leqslant p<2$ and $f\in L_p$, $\displaystyle\int_0^1f(s)\,ds=0$. Assume that an Orlicz function $\varphi$ satisfies

$$ \begin{equation*} {\varphi}(t)\asymp t^p{\mathcal K}(t^{2-p},n_f;L_1(u^{p-1}),L_1(u)),\qquad 0\leqslant t\leqslant 1. \end{equation*} \notag $$
Then a sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent in $L_p$ to the canonical $\ell_{\varphi}$-basis.

In the next statement we investigate the properties of the set of ${\mathcal K}$-functionals corresponding to the Banach couple

$$ \begin{equation*} \overrightarrow{\mathbb{X}}_{c,d}:=(L_1(u^{c}),L_1(u^d)),\qquad 0\leqslant c<d<\infty. \end{equation*} \notag $$
It is easy to show that this couple is $\operatorname{Conv}_0$-abundant (see § 2.4). However, we need a stronger result, namely, that as an element of the space $L_1(u^{c})+L_1(u^d)=L_1(\min\{u^c,u^d\})$ generating a prescribed concave function, we can take a non-negative function which is, moreover, bounded and nonincreasing.

Proposition 4.2. Let $0\leqslant c<d<\infty$. Then for each increasing concave function $\varphi(t)$ on $[0,1]$ such that $\lim_{t\to +0}\varphi(t)=0$ there exists a decreasing non-negative function $h\in L_1(\min\{u^c,u^d\})$ such that $h(0)\leqslant 1$ and

$$ \begin{equation} \varphi(t)\asymp {\mathcal K}(t,h;L_1(u^{c}),L_1(u^d)),\qquad 0<t\leqslant 1, \end{equation} \tag{4.3} $$
with constants depending only on the quantities $\varphi(1)$, $c$, and $d$.

We start with a technical lemma.

Let $\varphi$ be a function satisfying the assumptions of Proposition 4.2, let $q> 1$, and let $\{t_i\}_{i=-m}^n$ be a sequence of positive numbers satisfying the assumptions of Proposition 2.1. Also let $m=2r-1$ and $n=2l+1$, where $r\in \mathbb{N}\cup \{+\infty\}$ and $l\in \mathbb{N}\cup \{0\}$, and consider the function $\psi$ on $[0,\infty)$ defined by

$$ \begin{equation} \psi(t):=\sum_{i=-r}^l\varphi(t_{2i+1}) \min\biggl\{1,\frac{t}{t_{2i+1}}\biggr\}. \end{equation} \tag{4.4} $$

Lemma 4.1. The following inequalities hold:

$$ \begin{equation} \frac{1}{q}\,\varphi(t)\leqslant\varphi(t_{2i+1}) \min\biggl\{1,\frac{t}{t_{2i+1}}\biggr\}\quad\textit{for}\ \ t_{2i}\leqslant t\leqslant t_{2i+2},\ \ -r+1\leqslant i\leqslant l-1, \end{equation} \tag{4.5} $$
and
$$ \begin{equation} \frac{1}{q}\,\varphi(t)\leqslant\psi(t)\leqslant \frac{q+1}{q-1}\,\varphi(t),\qquad t\geqslant 0. \end{equation} \tag{4.6} $$

Proof. Since $\varphi(t)$ is increasing and $\varphi(t)/t$ is decreasing, by the equalities in part (b) of Proposition 2.1, for $t_{2i}\leqslant t\leqslant t_{2i+1}$ and $-r+1\leqslant i\leqslant l$ we obtain
$$ \begin{equation} \frac{\varphi(t)}{t}\leqslant\frac{\varphi(t_{2i})}{t_{2i}}= q\cdot\frac{\varphi(t_{2i+1})}{t_{2i+1}}\,; \end{equation} \tag{4.7} $$
in a similar way, for $t_{2i+1}\leqslant t\leqslant t_{2i+2}$ and $-r\leqslant i\leqslant l-1$ we have
$$ \begin{equation} \varphi(t)\leqslant\varphi(t_{2i+2})=q\cdot \varphi(t_{2i+1}). \end{equation} \tag{4.8} $$
Since inequality (4.5) is a direct consequence of these relations, it remains to verify (4.6).

Note that for each $t\geqslant 0$ the right-hand side of (4.5) is at most $\psi(t)$. Hence the left-hand inequality in (4.6) is obvious. As concerns the right-hand inequality, it is sufficient to prove it for $t=0$ and $t=t_{2i+1}$, $i=-r,\dots,l$. This is because the function $\varphi$ is concave on the half-line and $\psi$ is a concave piecewise linear function with knots at the points $t_{2i+1}$ and $ \varphi(t_{2i+1})$, $i=-r,\dots,l$.

First of all, it follows from the definition (4.4) that $\psi(0)=0\leqslant \varphi(0)$. Now, for fixed $k=-r,\dots,l$ consider the representation

$$ \begin{equation*} \psi(t_{2k+1})=\sum_{-r\leqslant i\leqslant k}\varphi(t_{2i+1})+ t_{2k+1}\sum_{k<i\leqslant l}\frac{\varphi(t_{2i+1})}{t_{2i+1}}=:S_1+S_2. \end{equation*} \notag $$
By the equality in (4.8) and the monotonicity of $\varphi$ we have
$$ \begin{equation*} \varphi(t_{2i+1})=\frac{\varphi(t_{2i+2})}{q}\leqslant \frac{\varphi(t_{2i+3})}{q}\,. \end{equation*} \notag $$
Hence
$$ \begin{equation*} S_1\leqslant (1+q^{-1}+q^{-2}+\cdots)\varphi(t_{2k+1})\leqslant \frac{q}{q-1}\,\varphi(t_{2k+1}). \end{equation*} \notag $$
In a similar way, taking the equality in (4.7) and the decrease of $\varphi(t)/t$ into account we obtain
$$ \begin{equation*} \frac{\varphi(t_{2i+1})}{t_{2i+1}}=\frac{1}{q}\cdot \frac{\varphi(t_{2i})}{t_{2i}}\leqslant \frac{1}{q}\cdot \frac{\varphi(t_{2i-1})}{t_{2i-1}}\,, \end{equation*} \notag $$
and therefore
$$ \begin{equation*} S_2\leqslant (q^{-1}+q^{-2}+\cdots)\varphi(t_{2k+1})\leqslant \frac{1}{q-1}\varphi(t_{2k+1}). \end{equation*} \notag $$
As a result, it follows from this estimate and the previous ones that
$$ \begin{equation*} \psi(t_{2k+1})=S_1+S_2\leqslant \frac{q+1}{q-1}\,\varphi(t_{2k+1}),\qquad k=-r,\dots,l, \end{equation*} \notag $$
so that inequality (4.6) is proved. $\Box$

Proof of Proposition 4.2. We start with the case $c=0$.

Setting $m_a(t):=\min\{1, t/a\}$ and $v_a(t):=(d+1)a^{1/d}\chi_{[0,a^{-1/d}]}(t)$, where $a>0$ is arbitrary, we check that

$$ \begin{equation} m_a(t)\leqslant {\mathcal K}(t,v_a;\overrightarrow{ \mathbb{X}}_{0,d}) \leqslant (d+1)m_a(t),\qquad t>0. \end{equation} \tag{4.9} $$
In fact, if $t\leqslant a$, then it follows from (2.2) that
$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t,\chi_{[0,a^{-1/d}]};\overrightarrow{\mathbb{X}}_{0,d})&= \int_0^\infty \chi_{[0,a^{-1/d}]}(u)\min\{1,tu^d\}\,du \\ &=t\int_0^{a^{-1/d}}u^d\,du=\frac{ta^{-1/d}}{a(d+1)}\,, \end{aligned} \end{equation*} \notag $$
and therefore ${\mathcal K}(t,v_a;\overrightarrow{ \mathbb{X}}_{0,d})=m_a(t)$, so that (4.9) is proved in this case.

If $t>a$, then we have

$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t,\chi_{[0,a^{-1/d}]};\overrightarrow{\mathbb{X}}_{0,d})&= t\int_0^{t^{-1/d}}u^d\,du+\int_{t^{-1/d}}^{a^{-1/d}}\,du \\ &=\frac{t^{-1/d}}{d+1}+{a^{-1/d}}-{t^{-1/d}} ={a^{-1/d}}-\frac{d}{d+1}\, t^{-1/d}. \end{aligned} \end{equation*} \notag $$
Thus, ${\mathcal K}(t,v_a;\overrightarrow{ \mathbb{X}}_{0,d})= d+1-d(a/t)^{1/d}$. Since $m_a(t)= 1$ in this case, as a result we arrive at (4.9) again.
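The bounds (4.9) and the closed-form expressions just obtained are easy to confirm numerically. A sketch with the illustrative parameters $d=2$ and $a=0.7$ (our choice):

```python
# Numerical sketch of (4.9): with m_a(t) = min{1, t/a} and
# v_a = (d+1) a^(1/d) on [0, a^(-1/d)], the K-functional
# K(t, v_a; L_1, L_1(u^d)) = int v_a(u) min{1, t u^d} du lies between
# m_a(t) and (d+1) m_a(t); it equals m_a(t) for t <= a and
# d + 1 - d (a/t)^(1/d) for t > a.  Parameters below are illustrative.

def k_functional(t, a, d, M=200_000):
    """Midpoint rule for int_0^{a^(-1/d)} (d+1) a^(1/d) min{1, t u^d} du."""
    top = a ** (-1.0 / d)
    hu = top / M
    total = 0.0
    for j in range(M):
        u = (j + 0.5) * hu
        total += (d + 1.0) * a ** (1.0 / d) * min(1.0, t * u ** d) * hu
    return total

d, a = 2.0, 0.7
for t in (0.1, 0.7, 3.0, 50.0):
    m = min(1.0, t / a)
    K = k_functional(t, a, d)
    assert m * (1 - 1e-3) <= K <= (d + 1.0) * m * (1 + 1e-3)
    if t > a:  # closed form from the text
        assert abs(K - (d + 1.0 - d * (a / t) ** (1.0 / d))) < 1e-3
```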

Let $\varphi$ be a function on $[0,1]$ satisfying the assumptions of Proposition 4.2. We extend it to the half-line $[0,\infty)$ by the formula

$$ \begin{equation} \varphi_1(t):=\varphi(t)\chi_{[0,1]}(t)+\varphi(1)\chi_{[1,\infty)}(t). \end{equation} \tag{4.10} $$
Then $\varphi_1$ is an increasing concave function on $[0,\infty)$ and $\lim_{t\to +0}\varphi_1(t)=0$.

Setting $q=4d+5$, we select a sequence $\{t_j\}_{j=-m}^n$, $m,n\in \mathbb{N}\cup\{+\infty\}$, of positive numbers satisfying conditions (a) and (b) in Proposition 2.1, that is, a sequence such that

$$ \begin{equation} t_0=1,\qquad \frac{t_{j+1}}{t_j}\geqslant q \quad \text{for}\ \ -m\leqslant j<n, \end{equation} \tag{4.11} $$
and for some suitable $i$
$$ \begin{equation} \frac{\varphi_1(t_{2i})}{t_{2i}}=q\cdot\frac{\varphi_1(t_{2i+1})}{t_{2i+1}} \quad \text{and}\quad \varphi_1(t_{2i+2})=q\cdot\varphi_1(t_{2i+1}). \end{equation} \tag{4.12} $$
By properties (c) and (d) in the same proposition, in this case $m=2r-1$ and $n=2l+1$, where $r\in \mathbb{N}\cup \{+\infty\}$ and $l\in \mathbb{N}\cup \{0\}$. In addition, by Lemma 4.1 we have the inequalities
$$ \begin{equation} \frac{1}{q}\cdot\varphi_1(t)\leqslant\varphi_1(t_{2i+1})m_{t_{2i+1}}(t) \quad\text{for}\ \ t_{2i}\leqslant t\leqslant t_{2i+2},\ \ -r+1\leqslant i\leqslant l-1, \end{equation} \tag{4.13} $$
and
$$ \begin{equation} \frac{1}{q}\cdot\varphi_1(t)\leqslant \sum_{i=-r}^l\varphi_1(t_{2i+1})m_{t_{2i+1}}(t)\leqslant \frac{q+1}{q-1}\,\varphi_1(t),\qquad t\geqslant 0. \end{equation} \tag{4.14} $$
Note that by (2.6) inequality (4.13) holds for $i=l$ for all $t\geqslant t_{2l}$ (respectively, if $r\in \mathbb{N}$, then by (2.8) inequality (4.13) holds for $i=-r$ for all $0<t<t_{-2r+2}$).

Using the arguments from the proof of Proposition 3.2.6 and Theorem 4.5.7 in the monograph [36] by Brudnyi and Krugljak we show that the function

$$ \begin{equation} {h}_1(t):=\sum_{i=-r}^l\varphi_1(t_{2i+1})v_{t_{2i+1}}(t),\qquad t\geqslant 0, \end{equation} \tag{4.15} $$
satisfies the inequalities
$$ \begin{equation} \frac{1}{2(4d+5)}\,\varphi_1(t)\leqslant {\mathcal K}(t,h_1;\overrightarrow{\mathbb{X}}_{0,d})\leqslant \frac{2d+3}{2}\,\varphi_1(t),\qquad t>0. \end{equation} \tag{4.16} $$

Since for fixed $t>0$ the functional $x\mapsto {\mathcal K}(t,x;X_0,X_1)$ is a norm on the sum $X_0+X_1$, by (4.9) and (4.14), for all $t\geqslant 0$ we obtain the upper bound

$$ \begin{equation} \begin{aligned} \, {\mathcal K}(t, h_1;\overrightarrow{ \mathbb{X}}_{0,d})&\leqslant (d+1)\sum_{i=-r}^l\varphi_1(t_{2i+1})m_{t_{2i+1}}(t) \nonumber \\ &\leqslant \frac{(d+1)(q+1)}{q-1}\,\varphi_1(t)=\frac{2d+3}{2}\,\varphi_1(t). \end{aligned} \end{equation} \tag{4.17} $$

Now we prove a reverse estimate. Taking the equality $m_{t_{2k+1}}(t_{2k+1})=1$ into account and using (4.9) we obtain

$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t_{2k+1},h_1;\overrightarrow{ \mathbb{X}}_{0,d})&\geqslant \varphi_1(t_{2k+1}){\mathcal K}(t_{2k+1},v_{t_{2k+1}}; \overrightarrow{\mathbb{X}}_{0,d}) \\ &\qquad-\sum_{i\ne k}\varphi_1(t_{2i+1}){\mathcal K}(t_{2k+1},v_{t_{2i+1}}; \overrightarrow{\mathbb{X}}_{0,d}) \\ &\geqslant \varphi_1(t_{2k+1})- (d+1)\sum_{i\ne k}\varphi_1(t_{2i+1})m_{t_{2i+1}}(t_{2k+1}). \end{aligned} \end{equation*} \notag $$
Since by (4.14) we have
$$ \begin{equation*} \sum_{i\ne k}\varphi_1(t_{2i+1})m_{t_{2i+1}}(t_{2k+1})\leqslant \varphi_1(t_{2k+1})\biggl(\frac{q+1}{q-1}-1\biggr)= \frac{1}{2(d+1)}\,\varphi_1(t_{2k+1}), \end{equation*} \notag $$
it follows that
$$ \begin{equation} {\mathcal K}(t_{2k+1},h_1;\overrightarrow{ \mathbb{X}}_{0,d})\geqslant \frac{1}{2}\,\varphi_1(t_{2k+1}). \end{equation} \tag{4.18} $$

Let us fix $t$. Three cases are possible: (i) $t_{2k}\leqslant t< t_{2k+2}$ for some $-r+1\leqslant k\leqslant l-1$, (ii) $0<t<t_{-2r+2}$, and (iii) $t\geqslant t_{2l}$. In cases (ii) and (iii) we set $k=-r$ and $k=l$, respectively. Then by the choice of $k$, since the function $t\mapsto {\mathcal K}(t,x;\vec{X})$ is concave, using (4.18) and (4.13) (also see the remark immediately after (4.14)), in each case we obtain

$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t,h_1;\overrightarrow{ \mathbb{X}}_{0,d})&\geqslant {\mathcal K}(t_{2k+1},h_1;\overrightarrow{ \mathbb{X}}_{0,d})m_{t_{2k+1}}(t) \\ &\geqslant \frac{1}{2}\,\varphi_1(t_{2k+1})m_{t_{2k+1}}(t)\geqslant \frac{1}{2q}\,\varphi_1(t) \\ &=\frac{1}{2(4d+5)}\,\varphi_1(t). \end{aligned} \end{equation*} \notag $$
Thus, this inequality and (4.17) imply (4.16).

Now using (4.11), (4.12), and the definition of $\varphi_1$ we obtain $l=0$ and $t_1=q$. Furthermore,

$$ \begin{equation*} \frac{q\varphi_1(t_{2i+1})}{t_{2i+1}}=\frac{\varphi_1(t_{2i})}{t_{2i}}= \frac{q\varphi_1(t_{2i-1})}{t_{2i}}\,,\qquad i\leqslant 0, \end{equation*} \notag $$
which yields
$$ \begin{equation*} \varphi_1(t_{2i-1})=\frac{t_{2i}}{t_{2i+1}}\,\varphi_1(t_{2i+1})\leqslant q^{-1}\varphi_1(t_{2i+1}),\qquad i\leqslant 0. \end{equation*} \notag $$
Since
$$ \begin{equation*} v_{t_{2i+1}}(t)\leqslant (d+1)t_{2i+1}^{1/d}\leqslant (d+1)t_1^{1/d}=(d+1)q^{1/d},\qquad t\geqslant 0, \end{equation*} \notag $$
it follows from the previous inequality and the definition (4.15) that for all $t\geqslant 0$
$$ \begin{equation*} {h}_1(t)\leqslant \varphi(1)\sum_{i=-r}^0 q^{i}v_{t_{2i+1}}(t)\leqslant \varphi(1)(d+1)q^{1/d}\sum_{k=0}^\infty q^{-k}\leqslant \varphi(1)(d+1)\,\frac{q^{1/d+1}}{q-1}\,. \end{equation*} \notag $$
Thus, the decreasing non-negative function
$$ \begin{equation*} h:=\frac{q^{-1/d-1}(q-1)}{\varphi(1)(d+1)}\,h_1 \end{equation*} \notag $$
belongs to the space $L_1+L_1(u^d)=L_1(\min\{1,u^d\})$ (just as $h_1$) and $h(0)\leqslant 1$. Moreover, in view of (4.10) and (4.16) we obtain
$$ \begin{equation*} {\mathcal K}(t,h;\overrightarrow{\mathbb{X}}_{0,d})\asymp\varphi(t),\qquad 0<t\leqslant 1, \end{equation*} \notag $$
for some constants depending only on $d$. So the proof for $c=0$ is complete.

Turning to the general case we start by showing that (using the same notation) for any $g\in L_1(\min\{u^{c},u^d\})$ and $t>0$

$$ \begin{equation} {\mathcal K}(t,g;\overrightarrow{\mathbb{X}}_{c,d})\asymp \int_0^{t^{d/(d-c)}}s^{-c/d-1} {\mathcal K}(s,g;\overrightarrow{\mathbb{X}}_{0,d})\,ds \end{equation} \tag{4.19} $$
where the constants depend only on $c$ and $d$.

First of all, by (2.2) and Fubini’s theorem

$$ \begin{equation*} \int_0^{t^{d/(d-c)}}s^{-c/d-1}{\mathcal K} (s,g;\overrightarrow{\mathbb{X}}_{0,d})\,ds=\int_0^\infty |g(u)|\int_0^{t^{d/(d-c)}}s^{-c/d-1}\min\{1,su^d\}\,ds\,du. \end{equation*} \notag $$
If $tu^d\leqslant u^c$, then $su^d\leqslant 1$ for all $0<s\leqslant t^{d/(d-c)}$. Therefore,
$$ \begin{equation*} \int_0^{t^{d/(d-c)}} s^{-c/d-1}\min\{1,su^d\}\,ds= u^d \int_0^{t^{d/(d-c)}} s^{-c/d}\,ds\asymp u^d t. \end{equation*} \notag $$
In the case when $tu^d> u^c$ we obtain
$$ \begin{equation*} \begin{aligned} \, \int_0^{t^{d/(d-c)}} s^{-c/d-1}\min\{1,su^d\}\,ds &= u^d \int_0^{u^{-d}} s^{-c/d}\,ds+ \int_{u^{-d}}^{t^{d/(d-c)}}s^{-c/d-1}\,ds \\ &=\frac{d}{d-c}u^c+\frac{d}{c}(u^c-t^{-c/(d-c)})\asymp u^c. \end{aligned} \end{equation*} \notag $$
Now, using (2.2) again we arrive at (4.19).
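The pointwise estimates above can be tested numerically: they show that for $g\geqslant 0$ the right-hand side of (4.19) lies between $\dfrac{d}{d-c}$ and $\dfrac{d}{d-c}+\dfrac{d}{c}$ times the left-hand side. The parameters $c=1/2$, $d=2$ and the test function $g(u)=e^{-u}$ below are our illustrative choices.

```python
# Numerical sketch of (4.19) for the illustrative g(u) = exp(-u): the
# right-hand side should lie between d/(d-c) and d/(d-c) + d/c times
# K(t, g; X_{c,d}), in line with the pointwise estimates in the text.

import math

c, d = 0.5, 2.0

def k_min_integral(w0, w1, x0=-20.0, x1=8.0, M=3_000):
    """(2.2): int_0^infty exp(-u) min{w0(u), w1(u)} du, in x = log(u)."""
    hx = (x1 - x0) / M
    total = 0.0
    for j in range(M):
        u = math.exp(x0 + (j + 0.5) * hx)
        total += math.exp(-u) * min(w0(u), w1(u)) * u * hx
    return total

def K_cd(t):
    return k_min_integral(lambda u: u ** c, lambda u: t * u ** d)

def K_0d(s):
    return k_min_integral(lambda u: 1.0, lambda u: s * u ** d)

t = 0.4
T = t ** (d / (d - c))
# outer integral int_0^T s^(-c/d-1) K_0d(s) ds, again in log coordinates
y0, My = -20.0, 1_000
hy = (math.log(T) - y0) / My
rhs = 0.0
for j in range(My):
    s = math.exp(y0 + (j + 0.5) * hy)
    rhs += s ** (-c / d) * K_0d(s) * hy   # ds = s dy
lhs = K_cd(t)
assert d / (d - c) * lhs * 0.98 <= rhs <= (d / (d - c) + d / c) * lhs * 1.02
```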

Next we show that the Banach couple $\overrightarrow{\mathbb{X}}_{c,d}$ is $\operatorname{Conv}_0$-abundant. To do this, in accordance with the Brudnyi–Krugljak inequality (see [36], Theorem 4.5.7) it is sufficient to find a function $f_0\in L_1(\min\{u^{c},u^d\})$ such that ${\mathcal K}(t, f_0;\overrightarrow{\mathbb{X}}_{c,d})\asymp t^{\alpha}$, $t>0$, for some $\alpha\in (0,1)$. Let us verify that the function $f_0(u)=u^{-\theta}$ satisfies this condition if $\theta\in (c+1,d+1)$. Using (2.2) again and making elementary calculations we obtain

$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t,u^{-\theta};\overrightarrow{\mathbb{X}}_{c,d})&= \int_0^\infty u^{-\theta}\min\{u^{c},tu^d\}\,du \\ &=\biggl(\frac{1}{\theta-1-c}+\frac{1}{d+1-\theta}\biggr) t^{(\theta-c-1)/(d-c)}. \end{aligned} \end{equation*} \notag $$
Since $\alpha:=(\theta-c-1)/(d-c)\in (0,1)$, it follows in view of the above observations that the couple $\overrightarrow{\mathbb{X}}_{c,d}$ is $\operatorname{Conv}_0$-abundant.
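This closed-form computation is easy to confirm numerically; the parameters $c=1/2$, $d=2$, $\theta=2$ below are illustrative.

```python
# Numerical sketch of the computation above: for f_0(u) = u^(-theta) with
# theta in (c+1, d+1), K(t, u^(-theta); X_{c,d}) equals
# (1/(theta-1-c) + 1/(d+1-theta)) t^alpha with alpha = (theta-c-1)/(d-c).
# Parameters are illustrative.

import math

c, d, theta = 0.5, 2.0, 2.0
alpha = (theta - c - 1.0) / (d - c)                        # = 1/3 here
const = 1.0 / (theta - 1.0 - c) + 1.0 / (d + 1.0 - theta)  # = 3 here

def K(t, x0=-25.0, x1=25.0, M=100_000):
    """int_0^infty u^(-theta) min{u^c, t u^d} du, midpoint rule in log(u)."""
    hx = (x1 - x0) / M
    total = 0.0
    for j in range(M):
        u = math.exp(x0 + (j + 0.5) * hx)
        total += u ** -theta * min(u ** c, t * u ** d) * u * hx
    return total

for t in (0.01, 0.2, 1.0):
    approx = K(t)
    assert abs(approx - const * t ** alpha) < 1e-2 * const * t ** alpha
```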

Now since $\lim_{t\to +0}\varphi(t)=0$ by assumption, the extension $\varphi_1$ of $\varphi$ to $(0,\infty)$ defined in (4.10) belongs to the cone $\operatorname{Conv}_0$. Hence there exists a function $w\in L_1(\min\{u^{c},u^d\})$ such that the equivalence

$$ \begin{equation} {\mathcal K}(t,w;\overrightarrow{\mathbb{X}}_{c,d})\asymp\varphi(t),\qquad 0\leqslant t\leqslant 1, \end{equation} \tag{4.20} $$
holds with constants depending on $c$ and $d$. Note that the function $t\mapsto {\mathcal K}(t,w;\overrightarrow{\mathbb{X}}_{c,d})$ is concave and increasing on $[0,1]$. In addition, by (4.20) we have
$$ \begin{equation*} \lim_{t\to +0}{\mathcal K}(t,w;\overrightarrow{\mathbb{X}}_{c,d})= \lim_{t\to +0}\varphi(t)=0 \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(1,w;\overrightarrow{\mathbb{X}}_{0,d})&= \int_0^1s^d|w(s)|\,ds+\int_1^\infty |w(s)|\,ds\leqslant \int_0^1s^d|w(s)|\,ds+\int_1^\infty s^{c}|w(s)|\,ds \\ &={\mathcal K}(1,w;\overrightarrow{\mathbb{X}}_{c,d})\asymp\varphi(1). \end{aligned} \end{equation*} \notag $$
Hence, applying the first part of the proof to the function $t\mapsto {\mathcal K}(t, w;\overrightarrow{\mathbb{X}}_{0,d})$ in place of $\varphi$ we can find a decreasing non-negative function $h\in L_1(\min\{1,u^d\})$, $h(0)\leqslant 1$, that satisfies the condition
$$ \begin{equation*} {\mathcal K}(t,h;\overrightarrow{\mathbb{X}}_{0,d})\asymp {\mathcal K}(t,w;\overrightarrow{\mathbb{X}}_{0,d}),\qquad 0<t\leqslant 1, \end{equation*} \notag $$
for constants depending only on $\varphi(1)$ and $d$. Hence, as a result, it follows from (4.19) and (4.20) that
$$ \begin{equation*} \begin{aligned} \, {\mathcal K}(t,h;\overrightarrow{\mathbb{X}}_{c,d}) &\asymp \int_0^{t^{d/(d-c)}}s^{-c/d-1} {\mathcal K}(s,h;\overrightarrow{\mathbb{X}}_{0,d})\,ds \\ &\asymp\int_0^{t^{d/(d-c)}} s^{-c/d-1} {\mathcal K}(s,w;\overrightarrow{\mathbb{X}}_{0,d})\,ds \\ &\asymp {\mathcal K}(t, w;\overrightarrow{\mathbb{X}}_{c,d}) \asymp\varphi(t),\qquad 0\leqslant t\leqslant 1, \end{aligned} \end{equation*} \notag $$
so that Proposition 4.2 is proved.

Proof of Theorem 4.1. (a) $\Rightarrow$ (b). If the Orlicz function $\psi$ satisfies the assumptions of part (a), then using in turn Lemma 3.3 and Proposition 4.2 we find a decreasing non-negative function $h\in L_1(\min\{u^{p-1},u\})$ such that $h(0)\leqslant 1$ and
$$ \begin{equation*} {\psi}(t)\asymp t^p\cdot{\mathcal K}(t^{2-p},h;L_1(u^{p-1}),L_1(u)),\qquad 0\leqslant t\leqslant 1. \end{equation*} \notag $$
Hence, by Proposition 4.1, if $f\in L_p$ satisfies the conditions $\displaystyle\int_0^1f(u)\,du= 0$ and $n_f(\tau)=h(\tau)$, $\tau>0$, then a sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent in $L_p$ to the canonical $\ell_{\psi}$-basis. Thus we have proved the implication (a) $\Rightarrow$ (b).

The implication (b) $\Rightarrow$ (c) is obvious. To prove that (c) implies (d) we must only verify that no subspace of $L_p$ spanned by independent copies $f_k$, $k=1,2,\dots$, of a function $f\in L_p$, $\displaystyle\int_0^1f(u)\,du=0$, can be isomorphic to the space $\ell_p$. To get a contradiction, assume that the closed linear span $[f_k]$ of such a sequence is isomorphic to $\ell_p$. Since $\{f_k\}_{k=1}^\infty$ is a symmetric basis in $[f_k]$ (that is, $\{f_{k}\}_{k=1}^\infty$ is equivalent in $L_p$ to the sequence $\{f_{\pi(k)}\}_{k=1}^\infty$ for every permutation $\pi:\mathbb{N}\to\mathbb{N}$) and the canonical basis in ${\ell_p}$ is perfectly homogeneous (for instance, see [100], Theorem 24.1), the sequence $\{f_k\}_{k=1}^\infty$ is equivalent to this basis. Hence by Proposition 3.4 in [44] there exist pairwise disjoint subsets $E_i$ of $[0,1]$ and a positive number $\delta$ such that $\displaystyle\int_{E_i}|f_i(t)|\,dt\geqslant\delta$, $i=1,2,\dots$ . Clearly, this property contradicts the identical distribution of the $f_i$, $i=1,2,\dots$: being equimeasurable copies of an integrable function, the $f_i$ are uniformly integrable, while $m(E_i)\to 0$ because the sets $E_i$ are pairwise disjoint. Hence the implication (c) $\Rightarrow$ (d) is also established.

To verify the last implication (d) $\Rightarrow$ (a) assume that an Orlicz sequence space $\ell_{\psi}$ is isomorphic to a subspace of $L_p$ which, in its turn, is not isomorphic to $\ell_p$. Since the space $L_p$ is $p$-convex and $2$-concave because $1\leqslant p<2$, by Theorem 1.d.7 in [79] the space $\ell_{\psi}$ also has this property. Thus, $\psi$ is equivalent at zero to a $p$-convex and $2$-concave Orlicz function $\psi_1$ (see § 3.3). Hence ${\psi}_1(t)t^{-p}$ is increasing on $[0,1]$, so that the limit $\lim_{t\to +0}{\psi}_1(t)t^{-p}$ exists. If it is distinct from zero, then by Lemma 3.2 we have $\psi(t)\asymp \psi_1(t)\asymp t^p$ at zero, that is, $\ell_{\psi}=\ell_p$ (up to norm equivalence). However, this contradicts the assumptions of part (d), so we have the equality

$$ \begin{equation*} \lim_{t\to +0}\frac{\psi(t)}{t^p}= \lim_{t\to +0}\frac{\psi_1(t)}{t^p}=0, \end{equation*} \notag $$
and we arrive at (a). $\Box$

5. Uniqueness problem for the distribution of a function whose independent copies span a prescribed subspace

Let $1\leqslant p<\infty$, and let $f\in L_p=L_p[0,1]$ be a function such that $\displaystyle\int_0^1 f(t)\,dt=0$. By Proposition 3.1 a sequence $\{f_k\}_{k=1}^\infty$ of independent copies of $f$ is equivalent to the canonical basis of some Orlicz space $\ell_\psi$. Can we say anything about $f$ in this case once we know $\psi$?

To indicate the natural limits of what can be said here, recall the well-known Kwapień–Rychlik theorem ([103], Theorem V.4.5). Let $\{f_k\}_{k=1}^\infty$ and $\{g_k\}_{k=1}^\infty$ be two sequences of independent symmetrically distributed functions on $[0,1]$ such that

$$ \begin{equation*} m\{t\in [0,1]\colon |g_k(t)|>\tau\}\leqslant Cm\biggl\{t\in [0,1]\colon|f_k(t)|>\frac{\tau}{C}\biggr\} \end{equation*} \notag $$
for some $C>0$ and all $k \in \mathbb{N}$ and $\tau>0$. Then for any $n \in \mathbb{N}$ and $\tau>0$, for all $a_k\in\mathbb{R}$, $k=1,\dots,n$, we have the inequality
$$ \begin{equation*} m\biggl\{t\in [0,1]\colon\biggl|\,\sum_{k=1}^n a_kg_k(t)\biggr|>\tau\biggr\} \leqslant 2Cm\biggl\{t\in [0,1]\colon\biggl|\,\sum_{k=1}^n a_kf_k(t)\biggr|> \frac{\tau}{C^2}\biggr\}. \end{equation*} \notag $$
From this theorem and the standard application of the symmetrization inequality (see, for instance, [102], Proposition V.2.2) we obtain the following result. Let $1\leqslant p<\infty$, and let $f,g\in L_p$ be functions such that $\displaystyle\int_0^1 f(t)\,dt=\displaystyle\int_0^1 g(t)\,dt=0$. If for some $C_1$ and $C_2$ the distribution functions $n_f$ and $n_g$ satisfy the condition
$$ \begin{equation} \frac{1}{C_1}n_g(C_2 \tau)\leqslant n_f(\tau)\leqslant C_1n_g\biggl(\frac{\tau}{C_2}\biggr),\qquad \tau>0, \end{equation} \tag{5.1} $$
then arbitrary sequences $\{f_k\}_{k=1}^\infty$ and $\{g_k\}_{k=1}^\infty$ of independent copies of $f$ and $g$, respectively, are equivalent in $L_p$ to the canonical basis of the same Orlicz space. In what follows we say that the distributions of the functions $f$ and $g$ for which (5.1) holds are quasi-equivalent.
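For illustration, condition (5.1) can be tested numerically on a finite grid of values of $\tau$. A minimal Python sketch, in which the concrete tails `n_f`, `n_g`, `n_h` are hypothetical examples: two tails of the same power order pass the test, while a strictly faster decay fails for the chosen constants.

```python
# A numerical sketch (not a proof) of checking the quasi-equivalence condition (5.1)
# on a finite grid of tau values; the distribution tails below are assumptions.

def quasi_equivalent(n_f, n_g, C1, C2, taus):
    """Check (1/C1) n_g(C2 tau) <= n_f(tau) <= C1 n_g(tau/C2) for every tau in taus."""
    return all(n_g(C2 * t) / C1 <= n_f(t) <= C1 * n_g(t / C2) for t in taus)

q = 1.5
n_f = lambda tau: min(1.0, tau ** (-q))        # tail of order tau**(-q)
n_g = lambda tau: min(1.0, 2.0 * tau ** (-q))  # same order of decay
n_h = lambda tau: min(1.0, tau ** (-2 * q))    # strictly faster decay

taus = [10.0 ** k for k in range(8)]
print(quasi_equivalent(n_f, n_g, C1=2.0, C2=1.0, taus=taus))  # True
print(quasi_equivalent(n_f, n_h, C1=2.0, C2=2.0, taus=taus))  # False
```

Of course, a finite grid can only refute (5.1); confirming it for all $\tau>0$ requires an argument such as the one above.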

Thus, relation (5.1) for some $C_1>0$ and $C_2>0$ is the most that we can infer when we know that sequences of independent copies of two functions $f$ and $g$ are equivalent to the canonical basis of the same space $\ell_\psi$. If the converse holds, that is, if for every function $g\in L_p$ the equivalence in $L_p$ of sequences of independent copies of $f$ and of $g$ implies (5.1), then, slightly abusing terminology, we say that the distribution of $f$ is unique.

We note straight away that such uniqueness is only possible for $1\leqslant p<2$. In fact, as already mentioned, if $2\leqslant p<\infty$, then a sequence $\{f_k\}_{k=1}^\infty$ of independent copies of any function $f\in L_p$ such that $\displaystyle\int_0^1 f(t)\,dt=0$ is equivalent in $L_p$ to the canonical $\ell_2$-basis (see [61]; for similar results on general r. i. spaces, see [15] and [56], Theorem 1.3).

The above definition can be extended without corrections to the case of an arbitrary r. i. space $X$ on $[0,1]$ in place of $L_p$. In the power-function case, that is, for $\psi(t)=t^q$, $1<q<2$, the question of the uniqueness of the distribution of a function whose independent copies span a prescribed subspace was (implicitly) considered by Braverman. He proved that in each r. i. space $X\supset L_{q,\infty}$ a sequence of independent mean zero functions $\{g_n\}$ equimeasurable with the function $g(t)=t^{-1/q}$, $0<t\leqslant 1$, is equivalent in $X$ to the canonical $\ell_q$-basis (see [32], Theorem III.3). Furthermore, if $L_{q,\infty}\subset X_0$, where $X_0$ is the separable part of $X$, then a sequence of independent copies of a mean zero function $f\in X$ is equivalent in $X$ to the canonical $\ell_q$-basis if and only if $n_f(\tau)\asymp \tau^{-q}$, $\tau>0$ (see [30] and [32], pp. 50–66). Thus, in this case the distribution of a function $f$ such that a sequence of independent copies of $f$ is equivalent in $X$ to the canonical $\ell_q$-basis is unique.

Here we will consider any subspaces spanned by independent copies of functions in a given r. i. space, which, as we know, are isomorphic to some Orlicz sequence spaces. In § 5.2 we prove Theorem 5.2 asserting that the distribution of a function $f\in L_p$, $1\leqslant p<2$, such that a sequence of independent copies of $f$ is equivalent to the canonical $\ell_\psi$-basis is unique, if $\psi$ is sufficiently ‘distant’, in a certain sense, from the ‘extreme’ functions $t^p$ and $t^2$. Moreover, in this case the functions $f$ and $1/\psi^{-1}$ have quasi-equivalent distributions ([20], Theorem 1.1).

Note that in Theorem 5.2 it is essential in general that $\psi$ is ‘distant’ from the ‘extrema’. In § 5.3 we give an example of two functions $f$ and $g$ in $L_1$ such that sequences of independent copies of both functions are equivalent in $L_1$ to the canonical basis of the same Orlicz space $\ell_\psi$, where $\psi(t)\asymp {t}/{\log(e/t)}$ at zero, but $\lim_{\tau\to+\infty}n_g(\tau)/n_f(\tau)=0$ (also see [20]).

In § 5.4, following [21] and [4] we consider similar questions for an arbitrary r. i. space $X$ on $[0,1]$. First we show that, provided that $\psi$ is submultiplicative (see (5.22)), the condition $1/\psi^{-1}\in X$ ensures that in $X$ a sequence of mean zero functions equimeasurable with $1/\psi^{-1}$ is equivalent to the canonical $\ell_\psi$-basis. Moreover, if $\psi$ is not submultiplicative, then there exists a space $X$ such that $1/\psi^{-1}\in X$, but the last result fails (see Theorem 5.4). Finally, we prove there Theorem 5.5 which extends Braverman’s results mentioned above to the case of submultiplicative Orlicz functions.

5.1. Conditions ensuring that independent functions equimeasurable with $1/\psi^{-1}$ span the space $\ell_\psi$ in $L_p$

Let $\psi$ be an Orlicz function, and let $\mathfrak{m}_{\psi}:=1/\psi^{-1}$. We will see in what follows that $\mathfrak{m}_{\psi}$ is the most natural candidate for a function (more precisely, for a mean zero function equimeasurable with $\mathfrak{m}_{\psi}$) whose independent copies span the Orlicz space $\ell_\psi$ in the given r. i. space.

To find the specific conditions for this, recall that a sequence of independent copies of a function $f\in L_p$, $\displaystyle\int_0^1f(t)\,dt=0$, is equivalent in $L_p$ to the canonical $\ell_\psi$-basis if and only if

$$ \begin{equation} \frac{1}{\psi^{-1}(t)}\asymp\biggl(\frac{1}{t}\int_{0}^t f^*(s)^p\,ds\biggr)^{1/p}+\biggl(\frac{1}{t}\int_{t}^1 f^*(s)^2\,ds\biggr)^{1/2},\qquad 0<t\leqslant 1 \end{equation} \tag{5.2} $$
(see Proposition 3.2). Thus, it remains to find out when (5.2) holds for $f=\mathfrak{m}_{\psi}$.
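Before turning to the general answer, a quick numerical sanity check, assuming the power function $\psi(t)=t^q$ with $p=1$ and $q=1.5$ (so that $\mathfrak{m}_\psi(t)=t^{-1/q}$): both integrals in (5.2) can be evaluated in closed form, and the ratio of the two sides stays bounded away from $0$ and $\infty$.

```python
# Numerical sketch: for psi(t) = t**q with p < q < 2 (here p = 1, q = 1.5, both
# assumptions of the sketch), f = m_psi(t) = t**(-1/q) satisfies (5.2).
p, q = 1.0, 1.5

def lhs(t):
    return t ** (-1.0 / q)                     # 1 / psi^{-1}(t)

def rhs(t):
    # closed forms of the two integrals in (5.2) for f*(s) = s**(-1/q)
    first = (t ** (-p / q) / (1.0 - p / q)) ** (1.0 / p)
    second = ((t ** (1.0 - 2.0 / q) - 1.0) / ((2.0 / q - 1.0) * t)) ** 0.5
    return first + second

ratios = [rhs(10.0 ** (-k)) / lhs(10.0 ** (-k)) for k in range(1, 10)]
print(min(ratios), max(ratios))   # the ratio stays within fixed bounds
```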

Theorem 5.1 ([20], Theorem 3.3). Let $1\leqslant p<2$, and let $\psi$ be a $p$-convex and $2$-concave Orlicz function. Then the following conditions are equivalent:

(a) relation (5.2) holds for $f=\mathfrak{m}_{\psi}$;

(b) there exists $\varepsilon>0$ such that $\psi$ is equivalent at zero to a $(p+\varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function.

To prove Theorem 5.1 we need two auxiliary results, the first of which provides necessary and sufficient conditions ensuring that the function $\mathfrak{m}_{\psi}^p$ is equivalent to its integral mean (in other words, to the Cesàro transform of $\mathfrak{m}_{\psi}^p$).

Proposition 5.1. Let $1\leqslant p<\infty$, and let $\psi$ be a $p$-convex Orlicz function, $\psi\in\Delta_2^0$. Then the following conditions are equivalent:

(i) there exists $\varepsilon>0$ such that $\psi$ is equivalent at zero to a $(p+\varepsilon)$-convex Orlicz function;

(ii) for some constant $C>0$
$$ \begin{equation} \biggl(\frac{1}{t}\int_0^t\mathfrak{m}_{\psi}^p(s)\,ds\biggr)^{1/p}\leqslant C\mathfrak{m}_{\psi}(t),\qquad 0<t\leqslant 1. \end{equation} \tag{5.3} $$

Proof. First of all, setting $\varphi(t)=t\mathfrak{m}_{\psi}^p(t)$, $0<t\leqslant 1$, we observe that condition (ii) is equivalent to the inequality
$$ \begin{equation} \int_0^t\frac{\varphi(s)\,ds}{s}\leqslant C\varphi(t),\qquad 0<t\leqslant 1, \end{equation} \tag{5.4} $$
for some positive constant $C$;

(i) $\Rightarrow$ (ii). Without loss of generality we can assume that $ \psi$ is itself $(p+\varepsilon)$-convex at zero, that is, on $[0,1]$. Then the function $t\mapsto ({\psi}^{-1}(t))^{p+\varepsilon}$, $t\in (0,1]$, is concave, which yields

$$ \begin{equation*} \frac{(\psi^{-1}(t))^{p+\varepsilon}}{(\psi^{-1}(st))^{p+\varepsilon}} \leqslant \frac{1}{s}\,,\qquad 0<s,t\leqslant 1. \end{equation*} \notag $$
By the definition of $\varphi$
$$ \begin{equation*} \sup_{0<t\leqslant 1}\frac{\varphi(st)}{\varphi(t)}= s\cdot\sup_{0<t\leqslant 1}\biggl(\frac{(\psi^{-1}(t))^{p+\varepsilon}} {(\psi^{-1}(st))^{p+\varepsilon}}\biggr)^{p/(p+\varepsilon)},\qquad 0<s\leqslant 1, \end{equation*} \notag $$
so we have
$$ \begin{equation*} \sup_{t\in(0,1)}\frac{\varphi(st)}{\varphi(t)}\leqslant s^{\varepsilon/(p+\varepsilon)},\qquad 0<s\leqslant 1. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \int_0^t\frac{\varphi(s)\,ds}{s}=\int_0^1\frac{\varphi(st)\,ds}{s}\leqslant \int_0^1s^{-p/(p+\varepsilon)}\,ds\cdot \varphi(t)= \frac{p+\varepsilon}{\varepsilon}\cdot\varphi(t), \end{equation*} \notag $$
which proves (5.4) and therefore establishes the implication (i) $\Rightarrow$ (ii).

(ii) $\Rightarrow$ (i). Since the function $\psi$ is $p$-convex, it follows that

$$ \begin{equation*} \frac{{\psi}(s)}{s^p}\leqslant\frac{{\psi}(t)}{t^p}\,,\qquad 0<s\leqslant t\leqslant 1. \end{equation*} \notag $$
Replacing here $s$ by $\psi^{-1}(s)$ and $t$ by $\psi^{-1}(t)$ we conclude that $\varphi$ is increasing on its domain.

As mentioned in the beginning of the proof, for some $C>0$ we have (5.4) by assumption. Let $s_0\in (0,e^{-2C})$ be arbitrary. Fixing it we will show that

$$ \begin{equation} \sup_{t\in(0,1]}\frac{\varphi(s_0t)}{\varphi(t)}<1. \end{equation} \tag{5.5} $$
In fact, supposing the converse we find $t\in(0,1)$ such that $\varphi(s_0t)>\varphi(t)/2$. Since $\varphi$ is increasing and $\log(s_0^{-1})>2C$ by the choice of $s_0$, it follows that
$$ \begin{equation*} \int_0^t\frac{\varphi(s)\,ds}{s}\geqslant \int_{s_0t}^t\frac{\varphi(s)\,ds}{s}\geqslant \varphi(s_0t)\log(s_0^{-1})>C\varphi(t), \end{equation*} \notag $$
which contradicts (5.4). Thus we have proved (5.5). By (5.5) there exists $a\in(0,1)$ such that
$$ \begin{equation} \varphi(s_0t)\leqslant a\varphi(t),\qquad 0<t\leqslant 1. \end{equation} \tag{5.6} $$
We can obviously assume that $a>s_0^{1/(1+p)}$. Hence there exists $\varepsilon\in(0,1)$ such that $a=s_0^{\varepsilon/(p+\varepsilon)}$.

Now, for each $s\in (0,1]$ there exists $n\in \mathbb{N}\cup\{0\}$ such that $s\in(s_0^{n+1},s_0^n)$. Since $\varphi$ is increasing, by (5.6) we have

$$ \begin{equation*} \varphi(st)\leqslant\varphi(s_0^nt)\leqslant s_0^{n\varepsilon/(p+\varepsilon)}\varphi(t)\leqslant s_0^{-\varepsilon/(p+\varepsilon)}s^{\varepsilon/(p+\varepsilon)} \varphi(t),\qquad 0<t\leqslant 1. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \varphi(st)\preceq s^{\varepsilon/(p+\varepsilon)}\varphi(t),\qquad 0<s,t\leqslant 1, \end{equation*} \notag $$
or, equivalently,
$$ \begin{equation*} (st)^{-\varepsilon/(p+\varepsilon)}\varphi(st)\preceq t^{-\varepsilon/(p+\varepsilon)}\varphi(t),\qquad 0<s,t\leqslant 1. \end{equation*} \notag $$
Hence by the definition of $\varphi$ we obtain
$$ \begin{equation*} \psi(st)\preceq s^{p+\varepsilon}\, \psi(t),\qquad 0<s,t\leqslant 1, \end{equation*} \notag $$
and now (i) is a consequence of Lemma 3.2. $\Box$
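The dichotomy in Proposition 5.1 can be observed numerically for the power functions $\psi(t)=t^r$ (an assumed family): the Cesàro mean of $\mathfrak{m}_\psi^p$ is comparable to $\mathfrak{m}_\psi^p$, with constant $r/(r-p)$, exactly when $r>p$, while in the border case $r=p$ the discrete sums below grow like $\log n$ under refinement, so that (5.3) fails.

```python
# Sketch illustrating Proposition 5.1 for psi(t) = t**r (an assumption):
# the Cesaro mean of m_psi**p versus m_psi**p on the power scale.
p = 1.0

def cesaro_ratio(r, t, n=100000):
    # midpoint rule for (1/t) int_0^t s**(-p/r) ds, divided by m_psi(t)**p = t**(-p/r)
    h = t / n
    integral = sum((h * (i + 0.5)) ** (-p / r) for i in range(n)) * h
    return (integral / t) / t ** (-p / r)

print(cesaro_ratio(1.5, 1e-3), cesaro_ratio(1.5, 1e-6))          # both near r/(r - p) = 3
print(cesaro_ratio(1.0, 1e-3, n=1000), cesaro_ratio(1.0, 1e-3))  # grows with n: no bound
```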

Since the dual statement below (cf. [29], Proposition 3.2) has a similar proof, we omit it.

Proposition 5.2. Let $1<q<\infty$, and let $\psi$ be a $q$-concave Orlicz function. Then the following conditions are equivalent:

(i) there exists $\varepsilon>0$ such that $\psi$ is equivalent at zero to a $(q-\varepsilon)$-concave Orlicz function;

(ii) for some constant $C>0$
$$ \begin{equation} \biggl(\frac{1}{t}\int_t^1\mathfrak{m}_{\psi}^q(s)\,ds\biggr)^{1/q}\leqslant C\mathfrak{m}_{\psi}(t),\qquad 0<t\leqslant 1. \end{equation} \tag{5.7} $$

Proof of Theorem 5.1. (b) $\Rightarrow$ (a). If for some $\varepsilon>0$ the function $\psi$ is equivalent at zero to some $(p+\varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function, then we have inequalities (5.3) and (5.7) by Propositions 5.1 and 5.2. As a result, relation (5.2) for $f=\mathfrak{m}_{\psi}$ follows from these inequalities and the following one, which is obvious because $\mathfrak{m}_{\psi}$ is decreasing:
$$ \begin{equation*} \mathfrak{m}_{\psi}(t)\leqslant \biggl(\frac{1}{t}\int_0^t \mathfrak{m}_{\psi}^p(s)\,ds\biggr)^{1/p} ,\qquad 0<t\leqslant 1. \end{equation*} \notag $$

(a) $\Rightarrow$ (b). Note that inequalities (5.3) and (5.7) follow directly from relation (5.2) for $f=\mathfrak{m}_{\psi}$. Hence using Propositions 5.1 and 5.2 we obtain (b). $\Box$

Theorem 5.1 and Proposition 3.2 yield the following result.

Corollary 5.1. For $1\leqslant p<2$ let $\psi$ be a $p$-convex and $2$-concave Orlicz function. Then the following conditions are equivalent:

(i) a sequence of mean zero functions equimeasurable with $\mathfrak{m}_{\psi}$ is equivalent in $L_p$ to the canonical $\ell_\psi$-basis;

(ii) for some $\varepsilon>0$ the function $\psi$ is equivalent at zero to a $(p+ \varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function.

5.2. The case of the space $L_p$

We show that some ‘extra’ convexity and concavity of $\psi$ (as in condition (b) in Theorem 5.1) is necessary and sufficient for the following implication: if a sequence of independent copies of a mean zero function $f$ is equivalent in $L_p$ to the canonical $\ell_\psi$-basis, then the distributions of $f$ and of the function $\mathfrak{m}_\psi = 1/\psi^{-1}$ are quasi-equivalent.

Theorem 5.2 ([20], Theorem 1.1). For $1\leqslant p<2$ let $\psi$ be an Orlicz function equivalent at zero to a $p$-convex and $2$-concave Orlicz function such that $\lim_{t\to +0}\psi(t)t^{-p}= 0$. Then the following conditions are equivalent:

(a) $\psi$ is equivalent at zero to a $(p+\varepsilon)$-convex and $(2-\varepsilon)$-concave function for some $\varepsilon>0$;

(b) if a sequence $\{f_k\}_{k=1}^{\infty}$ of independent copies of a mean zero function $f\in L_p$ satisfies the inequality

$$ \begin{equation} \frac{1}{C\psi^{-1}(1/n)}\leqslant\biggl\|\,\sum_{k=1}^{n}f_k\biggr\|_p \leqslant \frac{C}{\psi^{-1}(1/n)} \end{equation} \tag{5.8} $$
for some positive constant $C$ independent of $n\in\mathbb{N}$, then the functions $f$ and $\mathfrak{m}_\psi$ have quasi-equivalent distributions;

(c) if a sequence $\{f_k\}_{k=1}^{\infty}$ of independent copies of a mean zero function $f\in L_p$ is equivalent in $L_p$ to the canonical $\ell_{\psi}$-basis, then the functions $f$ and $\mathfrak{m}_\psi$ have quasi-equivalent distributions;

(d) $\mathfrak{m}_\psi\in L_p$, and a sequence of independent copies of a mean zero function equimeasurable with $\mathfrak{m}_\psi$ is equivalent in $L_p$ to the canonical $\ell_{\psi}$-basis.

We start with a technical lemma.

Lemma 5.1. Let $1\leqslant p<\infty$ and $1<q<\infty$, and let $\psi$ be an Orlicz function.

(i) If $\psi$ is $(q-\varepsilon)$-concave for some $\varepsilon\in(0,q-1)$, then
$$ \begin{equation*} \lim_{N\to\infty}N\sup_{t>0} \frac{\mathfrak{m}_{\psi}^q(Nt)}{\mathfrak{m}_{\psi}^q(t)}=0. \end{equation*} \notag $$

(ii) If $\psi$ is $(p+\varepsilon)$-convex for some $\varepsilon>0$, then
$$ \begin{equation*} \lim_{N\to\infty}\frac{1}{N}\sup_{t>0} \frac{\mathfrak{m}_{\psi}^p(t/N)}{\mathfrak{m}_{\psi}^p(t)}=0. \end{equation*} \notag $$

Proof. Assertions (i) and (ii) have similar proofs, so we only prove the first of them.

Since $\psi$ is $(q-\varepsilon)$-concave, the function $t\mapsto \psi(t)t^{\varepsilon-q}$ is decreasing for $t>0$. Hence the map

$$ \begin{equation*} t\mapsto t\mathfrak{m}_{\psi}^{q-\varepsilon}(t)= t(\psi^{-1}(t))^{\varepsilon-q},\qquad t>0, \end{equation*} \notag $$
also defines a decreasing function. Therefore,
$$ \begin{equation*} N^{q/(q-\varepsilon)}\sup_{t>0} \frac{\mathfrak{m}_{\psi}^q(Nt)}{\mathfrak{m}_{\psi}^q(t)}= \biggl(\,\sup_{t>0}\frac{Nt\mathfrak{m}_{\psi}^{q-\varepsilon}(Nt)} {t\mathfrak{m}_{\psi}^{q-\varepsilon}(t)}\biggr)^{q/(q-\varepsilon)}\leqslant 1, \end{equation*} \notag $$
which means that
$$ \begin{equation*} N\sup_{t>0}\frac{\mathfrak{m}_{\psi}^q(Nt)}{\mathfrak{m}_{\psi}^q(t)} \leqslant N^{-\varepsilon/(q-\varepsilon)}\to0\quad\text{as}\ \ N\to\infty. \ \Box \end{equation*} \notag $$

Proof of Theorem 5.2. Since
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^{n}e_k\biggr\|_{\ell_\psi}= \frac{1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}, \end{equation*} \notag $$
where the $e_k$ are vectors of the canonical basis of $\ell_\psi$ (see § 2.3), inequality (5.8) holds because the sequence $\{f_k\}_{k=1}^{\infty}$ is equivalent in $L_p$ to the canonical basis of $\ell_{\psi}$. Hence it is obvious that (b) $\Rightarrow$ (c).
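The displayed formula for $\bigl\|\sum_{k=1}^n e_k\bigr\|_{\ell_\psi}$ can be checked numerically against the definition of the Luxemburg norm $\|a\|_{\ell_\psi}=\inf\{\lambda>0\colon \sum_k\psi(|a_k|/\lambda)\leqslant 1\}$; in the sketch below $\psi(t)=t^{3/2}$ is an assumption, for which $1/\psi^{-1}(1/n)=n^{2/3}$.

```python
# Sketch: Luxemburg norm of the sum of the first n canonical basis vectors of an
# Orlicz sequence space, computed by bisection; psi(t) = t**1.5 is an assumption.

def psi(t):
    return t ** 1.5

def luxemburg_norm(a, lo=1e-9, hi=1e9, iters=200):
    # sum_k psi(|a_k|/lam) is decreasing in lam, so bisection applies
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(psi(abs(x) / mid) for x in a) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

n = 7
print(luxemburg_norm([1.0] * n), n ** (2.0 / 3.0))  # the two values agree
```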

Now assume that we have (c). By the conditions imposed on $\psi$ and Theorem 4.1 there exists a function $f\in L_p$, $\displaystyle\int_0^1f(u)\,du=0$, such that a sequence of its independent copies is equivalent in $L_p$ to the canonical $\ell_\psi$-basis. Then we have relation (5.2) by Proposition 3.2. On the other hand, by (c) the functions $f$ and $\mathfrak{m}_{\psi}$ have quasi-equivalent distributions. Hence $\mathfrak{m}_{\psi}\in L_p$ and a relation similar to (5.2) holds for each function equimeasurable with $\mathfrak{m}_{\psi}$. Thus, by Proposition 3.2 again, a sequence of independent copies of such a mean zero function is equivalent in $L_p$ to the canonical $\ell_\psi$-basis, which proves (d).

Since the implication (d) $\Rightarrow$ (a) follows from Proposition 3.2 and Corollary 5.1, it remains to prove that (a) $\Rightarrow$ (b).

Thus, assume that $\psi$ is equivalent at zero to a $(p+\varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function, $f\in L_p$, $\displaystyle\int_0^1 f(t)\,dt=0$, and a sequence $\{f_k\}_{k=1}^{\infty}$ of independent copies of $f$ satisfies condition (5.8) with a positive constant $C$ independent of $n\in \mathbb{N}$. Since the decreasing rearrangement $f^*$ is a generalized inverse function of the distribution function $n_f$ (see § 2.2), to prove that $f$ and $\mathfrak{m}_{\psi}$ have quasi-equivalent distributions it is sufficient to show that $f^*$ and $\mathfrak{m}_{\psi}$ are equivalent for small values of the argument. For simpler notation we assume without loss of generality that $f=f^*$.

First of all, by Proposition 3.2 we have inequality (5.2), which yields the estimate

$$ \begin{equation} f(t)\leqslant C_1 \mathfrak{m}_{\psi}(t), \qquad 0<t\leqslant 1, \end{equation} \tag{5.9} $$
for some $C_1>0$. Thus, it only remains to prove the reverse relation
$$ \begin{equation} \mathfrak{m}_{\psi}(t)\preceq f(t) \end{equation} \tag{5.10} $$
for all sufficiently small $t\in (0,1)$.

We need below a number of inequalities which hold owing to the assumptions or some results established already. First of all, relation (5.2) ensures that there exists a positive constant $C$ such that for each $t\in(0,1)$ we have at least one of the inequalities

$$ \begin{equation} \biggl(\frac{1}{t}\int_t^1f^2(s)\,ds\biggr)^{1/2}\geqslant \frac{1}{2C}\mathfrak{m}_{\psi}(t) \end{equation} \tag{5.11} $$
and
$$ \begin{equation} \biggl(\frac{1}{t}\int_0^tf^p(s)\,ds\biggr)^{1/p}\geqslant \frac{1}{2C}\mathfrak{m}_{\psi}(t). \end{equation} \tag{5.12} $$
Moreover, by Propositions 5.1 and 5.2 there exists $C_2>0$ such that
$$ \begin{equation} \frac{1}{t}\int_0^t\mathfrak{m}_{\psi}^p(s)\,ds\leqslant C_2^p\mathfrak{m}_{\psi}^p(t),\qquad 0<t\leqslant 1, \end{equation} \tag{5.13} $$
and
$$ \begin{equation} \frac{1}{t}\int_t^1\mathfrak{m}_{\psi}^2(s)\,ds\leqslant C_2^2\mathfrak{m}_{\psi}^2(t),\qquad 0<t\leqslant 1. \end{equation} \tag{5.14} $$
Finally, using Lemma 5.1 we select and fix $R$ sufficiently large so that
$$ \begin{equation} \sup_{t>0}\frac{\mathfrak{m}_{\psi}^2(Rt)}{\mathfrak{m}_{\psi}^2(t)} \leqslant \frac{1}{8RC^2C_1^2C_2^2}\quad\text{and}\quad \sup_{t>0}\frac{\mathfrak{m}_{\psi}^p({t}/{R})}{\mathfrak{m}_{\psi}^p(t)} \leqslant\frac{R}{2^{p+1}C^{p}C_1^pC_2^p}\,. \end{equation} \tag{5.15} $$

Let $t\in(0,1/R)$. First consider the case when (5.11) holds. Squaring both sides of this inequality and using (5.9) we obtain

$$ \begin{equation*} \begin{aligned} \, \frac{1}{4C^2}\mathfrak{m}_{\psi}^2(t) &\leqslant \frac{1}{t}\int_t^1f^2(s)\,ds=\frac{1}{t}\int_t^{Rt}f^2(s)\,ds+ \frac{1}{t}\int_{Rt}^1f^2(s)\,ds \\ &\leqslant(R-1)f^2(t)+\frac{C_1^2}{t}\int_{Rt}^1\mathfrak{m}_{\psi}^2(s)\,ds. \end{aligned} \end{equation*} \notag $$
Hence it follows from (5.14) that
$$ \begin{equation*} \frac{1}{4C^2}\mathfrak{m}_{\psi}^2(t)\leqslant (R-1)f^2(t)+RC_1^2C_2^2\mathfrak{m}_{\psi}^2(Rt), \end{equation*} \notag $$
and therefore, from the first inequality in (5.15) we obtain
$$ \begin{equation} (R-1)f^2\biggl(\frac{t}{R}\biggr)\geqslant (R-1)f^2(t)\geqslant \frac{1}{4C^2}\,\mathfrak{m}_{\psi}^2(t)-RC_1^2C_2^2\mathfrak{m}_{\psi}^2(Rt) \geqslant\frac{1}{8C^2}\,\mathfrak{m}_{\psi}^2(t). \end{equation} \tag{5.16} $$

Now let (5.12) hold. Then

$$ \begin{equation*} \frac{1}{2^pC^p}\mathfrak{m}_{\psi}^p(t)\leqslant \frac{1}{t}\int_0^tf^p(s)\,ds=\frac{1}{t}\int_0^{t/R}f^p(s)\,ds+ \frac{1}{t}\int_{t/R}^tf^p(s)\,ds. \end{equation*} \notag $$
Hence, using (5.9) and (5.13), as well as the decrease of $f$, we obtain
$$ \begin{equation*} \begin{aligned} \, \frac{1}{2^pC^p}\mathfrak{m}_{\psi}^p(t) &\leqslant \frac{C_1^p}{t}\int_0^{t/R}\mathfrak{m}_{\psi}^p(s)\,ds+ \biggl(1-\frac{1}{R}\biggr)f^p\biggl(\frac{t}{R}\biggr) \\ &\leqslant\frac{1}{R}C_1^pC_2^p\mathfrak{m}_{\psi}^p\biggl(\frac{t}{R}\biggr) +\biggl(1-\frac{1}{R}\biggr)f^p\biggl(\frac{t}{R}\biggr). \end{aligned} \end{equation*} \notag $$
From this estimate and the second inequality in (5.15) it follows that
$$ \begin{equation*} \biggl(1-\frac{1}{R}\biggr)f^p\biggl(\frac{t}{R}\biggr)\geqslant \frac{1}{2^pC^p}\mathfrak{m}_{\psi}^p(t)- \frac{1}{R}C_1^pC_2^p\mathfrak{m}_{\psi}^p\biggl(\frac{t}{R}\biggr) \geqslant\frac{1}{2^{p+1}C^p}\mathfrak{m}_{\psi}^p(t), \end{equation*} \notag $$
which, in combination with (5.16), shows that in both cases, with a constant independent of $t$, we have
$$ \begin{equation*} f\biggl(\frac{t}{R}\biggr)\succeq \mathfrak{m}_{\psi}(t),\qquad 0<t<\frac{1}{R}\,. \end{equation*} \notag $$
As $\psi$ is convex, we have $\mathfrak{m}_{\psi}(t)\asymp \mathfrak{m}_{\psi}(t/R)$, $0<t\leqslant 1$, where the constants depend on $R$. Hence it follows from the last estimate that (5.10) holds for $t\in (0,R^{-2})$. Thus the proof of the implication (a) $\Rightarrow$ (b), and therefore of Theorem 5.2, is complete. $\Box$

5.3. An example when the distribution of a function generating a subspace ‘near’ $L_1$ fails to be unique

Let $1\leqslant p<2$. Assume that an Orlicz function $\psi$ is equivalent at the origin to a $p$-convex and $2$-concave Orlicz function, $\lim_{t\to +0}{\psi(t)}{t^{-p}}=0$, but there exists no $\varepsilon>0$ such that this function is equivalent at the origin to a $(p+\varepsilon)$-convex Orlicz function (so that condition (a) in Theorem 5.2 is not satisfied). We show that then, in general, $1/\psi^{-1}\not\in L_p$ and the distribution of a function with independent copies equivalent in $L_p$ to the canonical basis of $\ell_\psi$ is not unique. We consider the case $p=1$ for simplicity.

Theorem 5.3 ([20], Proposition 5.3). There exist functions $f$ and $g$ in $L_1[0,1]$ such that

(a) for each $C>0$

$$ \begin{equation*} \limsup_{\tau\to +\infty}\frac{n_f(C\tau)}{n_g(\tau)}=\infty; \end{equation*} \notag $$

(b) sequences $\{f_k\}_{k=1}^\infty$ and $\{g_k\}_{k=1}^\infty$ of independent functions equimeasurable with $f$ and $g$, respectively, and such that $\displaystyle\int_0^1 f_k(t)\,dt= \displaystyle\int_0^1 g_k(t)\,dt=0$, $k=1,2,\dots$, are equivalent in $L_1$ to the canonical basis in the Orlicz space $\ell_\psi$, where $\psi(t)=t/\log_2(2/t)$.

Thus, the distributions of $f$ and $g$ are not quasi-equivalent, but sequences of independent mean zero functions equimeasurable with $f$ and $g$ are equivalent to the canonical basis of the same space $\ell_\psi$.

Proof. Let $\{E_k\}_{k=1}^\infty$ (respectively, $\{F_k\}_{k=1}^\infty$) be a sequence of pairwise disjoint measurable subsets of $(0,1)$ such that $m(E_k)=2^{-k-2^k}$ (respectively, $m(F_k)=4^{-k-4^k}$), $k\in\mathbb{N}$. We define two functions $f$ and $g$ by setting
$$ \begin{equation} f:=\sum_{k=1}^\infty 2^{2^k}\chi_{E_k}\quad\text{and}\quad g:=\sum_{k=1}^\infty 4^{4^k}\chi_{F_k}. \end{equation} \tag{5.17} $$

Assuming that (a) fails we find $C>0$ such that

$$ \begin{equation} n_f(C\tau)\leqslant Cn_g(\tau)\quad\text{for all}\ \ \tau>0, \end{equation} \tag{5.18} $$
and fix $k\in\mathbb{N}$ such that
$$ \begin{equation} 2^{2k+1}>\log_2 C +1. \end{equation} \tag{5.19} $$

It is easy to verify that we can select $\tau$ so that $\tau$ and $C\tau$ belong to the interval $(2^{2^{2k+1}},2^{2^{2k+2}})$. Because $2^{2^{2k+1}}=4^{4^k}$ and $2^{2^{2k+2}}=4^{2\cdot 4^{k}}<4^{4^{k+1}}$, taking the definitions of $f$ and $g$ into account, we obtain

$$ \begin{equation*} n_f(C\tau)=n_f(2^{2^{2k+1}})\geqslant 2^{-(2k+2)-2^{2k+2}} \end{equation*} \notag $$
and
$$ \begin{equation*} n_g(\tau)=n_g(4^{4^k})\leqslant 2\cdot 4^{-(k+1)-4^{k+1}}=2^{-(2k+1)-2^{2k+3}}. \end{equation*} \notag $$
Now it follows from (5.18) that
$$ \begin{equation*} 2^{2k+2+2^{2k+2}}\geqslant \frac{1}{C} 2^{2k+1+2^{2k+3}}, \end{equation*} \notag $$
or, equivalently,
$$ \begin{equation*} \log_2C\geqslant 2^{2k+2}-1, \end{equation*} \notag $$
in contradiction to the choice of $k$ (see (5.19)). Thus (a) is proved.
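Part (a) can also be observed numerically: along $\tau=4^{4^k}$ the ratio $n_f(\tau)/n_g(\tau)$ (here even with $C=1$) blows up super-exponentially. A sketch with exact rational arithmetic, the series defining $f$ and $g$ in (5.17) being truncated at a finite index (an approximation that is harmless along the chosen values of $\tau$):

```python
from fractions import Fraction

# Exact (truncated) tails of f and g from (5.17):
# n_f jumps at tau = 2**(2**j), n_g at tau = 4**(4**j).
def n_f(log2_tau):
    return sum(Fraction(1, 2 ** (j + 2 ** j)) for j in range(1, 12) if 2 ** j > log2_tau)

def n_g(log2_tau):
    return sum(Fraction(1, 4 ** (j + 4 ** j)) for j in range(1, 6) if 2 * 4 ** j > log2_tau)

# Along tau = 4**(4**k), i.e. log2 tau = 2 * 4**k, the ratio explodes:
ratios = [n_f(2 * 4 ** k) / n_g(2 * 4 ** k) for k in (1, 2, 3)]
print([float(r) for r in ratios])   # roughly 2**16, 2**64, 2**256
```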

To establish $(b)$ we need the following relation:

$$ \begin{equation} \int_0^1\min\{f(s),tf^2(s)\}\,ds\asymp\int_0^1\min\{g(s),tg^2(s)\}\,ds \asymp\frac{1}{\log_2(2/t)}\,,\qquad 0< t\leqslant 1. \end{equation} \tag{5.20} $$
Note first that it is sufficient to prove (5.20) for $0<t<1/4$. In this case we can find the greatest positive integer $m$ such that $2^{2^m}<1/t$. Then
$$ \begin{equation*} \begin{aligned} \, \int_0^1\min\{f(s),tf^2(s)\}\,ds&=\sum_{2^{2^k}\geqslant 1/t}2^{2^k} 2^{-k-2^k}+t\sum_{2^{2^k}<1/t}2^{2^{k+1}} 2^{-k-2^k} \\ &=\sum_{k=m+1}^{\infty}2^{-k}+t\sum_{k=1}^m2^{2^k-k}= 2^{-m}+t\sum_{k=1}^m2^{2^k-k}. \end{aligned} \end{equation*} \notag $$
Because
$$ \begin{equation*} \sum_{k=1}^m2^{2^k-k}\leqslant 2^{2^m-m}+(m-1)\,2^{2^{m-1}-m+1}\leqslant 2^{2^m-m}+2^{2^{m-1}}\leqslant 2 \cdot 2^{2^m-m}, \end{equation*} \notag $$
it follows that
$$ \begin{equation*} 2^{-m}\leqslant\int_0^1\min\{f(s),tf^2(s)\}\,ds\leqslant 2^{-m}+2t\cdot 2^{2^m-m}\leqslant 3\cdot 2^{-m}, \end{equation*} \notag $$
or, taking the definition of $m$ into account,
$$ \begin{equation*} \frac{1}{\log_2(1/t)}\leqslant\int_0^1\min\{f(s),tf^2(s)\}\,ds\leqslant \frac{6}{\log_2(1/t)}\,,\qquad 0<t< \frac{1}{4}\,. \end{equation*} \notag $$
Since we can also prove a quite similar inequality for the integral involving $g$, we have established the equivalence (5.20).
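The computation for $f$ can be repeated numerically: for dyadic $t=2^{-M}$ every term of the integral $\displaystyle\int_0^1\min\{f,tf^2\}\,ds$ is an exact power of $2$, and the product with $\log_2(2/t)=M+1$ stays between fixed bounds, as (5.20) predicts. The truncation index `K` is an assumption of the sketch.

```python
# Sketch checking the first equivalence in (5.20) at t = 2**(-M), f from (5.17).

def I_f(M, K=60):
    total = 0.0
    for k in range(1, K + 1):
        # k-th term: min(2**(2**k), t * 2**(2**(k+1))) * 2**(-k - 2**k), a power of 2
        e = min(2 ** k, 2 ** (k + 1) - M) - k - 2 ** k
        if e > -1074:                 # skip terms below double-precision underflow
            total += 2.0 ** e
    return total

for M in (4, 8, 16, 32, 64):
    print(M, I_f(M) * (M + 1))        # bounded above and below by fixed constants
```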

Now assume that two sequences $\{f_k\}_{k=1}^\infty$ and $\{g_k\}_{k=1}^\infty$ satisfy the assumptions of the theorem. We claim that in $L_1$ they are equivalent to the canonical basis of the Orlicz space $\ell_\psi$, where $\psi(t)=t/\log_2(2/t)$, that is, with constants independent of $n\in\mathbb{N}$ and $a_k\in\mathbb{R}$ we have

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^na_kf_k\biggr\|_{L_1}\asymp \biggl\|\,\sum_{k=1}^na_kg_k\biggr\|_{L_1}\asymp \|(a_k)_{k=1}^n\|_{\ell_{\psi}}. \end{equation*} \notag $$
Taking the equivalence (3.4) into account, it is sufficient to show that
$$ \begin{equation} \biggl\|\,\sum_{k=1}^n a_k\bar{f}_k\biggr\|_{L_1+L_2}\asymp \biggl\|\,\sum_{k=1}^n a_k\bar{g}_k\biggr\|_{L_1+L_2}\asymp \|(a_k)_{k=1}^n\|_{\ell_{\psi}},\qquad n\in\mathbb{N}, \end{equation} \tag{5.21} $$
where, as before, we can assume that
$$ \begin{equation*} \bar{f}_k(t)=f(t+k-1)\cdot\chi^{}_{[k-1,k)}(t),\quad \bar{g}_k(t)=g(t+k-1)\cdot\chi^{}_{[k-1,k)}(t),\qquad t>0. \end{equation*} \notag $$

It is easy to verify that $L_1+L_2$ coincides (up to equivalence of norms) with the Orlicz space $L_N$, where

$$ \begin{equation*} N(t)=\begin{cases} t^2,& t\in(0,1), \\ 2t-1,& t\geqslant 1. \end{cases} \end{equation*} \notag $$
Setting
$$ \begin{equation*} {\psi}_N(t):=\int_0^1N(tf(s))\,ds,\qquad t>0 \end{equation*} \notag $$
(cf. § 3.4 and, in particular, (3.13)) we obtain
$$ \begin{equation*} \begin{aligned} \, \biggl\|\,\sum_{k=1}^n a_k\bar{f}_k\biggr\|_{L_N}\leqslant 1 \quad&\Longleftrightarrow\quad \int_0^\infty N\biggl(\,\sum_{k=1}^n |a_k| \,|\bar{f}_k(s)|\biggr)\,ds\leqslant 1 \\ &\Longleftrightarrow\quad\sum_{k=1}^n \int_0^1N(|a_k|\,|f_k(s)|)\,ds \leqslant 1 \\ &\Longleftrightarrow\quad \sum_{k=1}^n {\psi}^{}_N(|a_k|)\leqslant 1 \\ &\Longleftrightarrow\quad \|(a_k)_{k=1}^n\|_{\ell_{\psi_N}}\leqslant 1. \end{aligned} \end{equation*} \notag $$
Hence for some constants independent of $n\in\mathbb{N}$ and $a_k\in\mathbb{R}$ we have the equivalence
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^n a_k\bar{f}_k\biggr\|_{L_1+L_2}\asymp \|(a_k)_{k=1}^n\|_{\ell_{\psi_N}}. \end{equation*} \notag $$
Since $N(t)\asymp\min\{t,t^2\}$, $t>0$, it follows from the definition of $\psi^{}_N$ that
$$ \begin{equation*} {\psi}_N(t)\asymp\int_0^1\min\{tf(s),(tf(s))^2\}\,ds. \end{equation*} \notag $$
Thus, taking (5.20) into account we obtain
$$ \begin{equation*} {\psi}_N(t)\asymp {\psi}(t)=\frac{t}{\log_2(2/t)}\,,\qquad 0<t\leqslant 1, \end{equation*} \notag $$
so that
$$ \begin{equation*} \bigg\|\sum_{k=1}^n a_k\bar{f}_k\bigg\|_{L_1+L_2}\asymp \|(a_k)_{k=1}^n\|_{\ell_{\psi}},\qquad n\in\mathbb{N}. \end{equation*} \notag $$
Since we can apply just the same argument to the sequence $\{\bar{g}_k\}$, relation (5.21) is proved, so Theorem 5.3 is too. $\Box$

Remark 5.1. The function ${\psi}$ in Theorem 5.3 fails condition (a) in Theorem 5.2 because it is not $(1+\varepsilon)$-convex for each $\varepsilon>0$. It is natural to wonder about the situation when $\psi(t)$ is ‘close’ to the other ‘extreme’ function $t^2$. The following result, proved in [31], Theorem 4.2, provides here a partial answer.

Let $h\colon [0,\infty)\to [0,\infty)$ be a function slowly varying at infinity, that is, let $\lim_{t\to\infty}h(st)/h(t)=1$ for each $s>0$. Assume that $\displaystyle\int_0^\infty\dfrac{h(t)\,dt}{t}<\infty$. It is easy to verify that this ensures that a function $f$ with distribution function $n_f(\tau)=\tau^{-2}h(\tau)$, $\tau\geqslant 0$, belongs to $L_2$. Then a sequence of independent copies of a mean zero function $g$ is equivalent in $L_p$, $1\leqslant p<2$, to the canonical basis in the Orlicz space $\ell_\psi$, where $\psi(s)=s^2\displaystyle\int_0^{1/s}\dfrac{h(t)\,dt}{t}$, if and only if $n_g(\tau)\asymp\tau^{-2}h(\tau)$ for large $\tau$. Thus, in particular, the distribution of a mean zero function with a sequence of independent copies equivalent in $L_p$, $1\leqslant p<2$, to the canonical $\ell_\psi$-basis, where $\psi(t)=t^2\log_2(2/t)$, is unique, in contrast to the result of Theorem 5.3.
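A concrete function $h$ meeting these hypotheses can be exhibited and checked numerically; the specific formula for `h` below is an assumption of the sketch, chosen so that $h$ is slowly varying at infinity while $h(t)/t$ is integrable both near $0$ and near $\infty$.

```python
import math

# Sketch: a sample h that is slowly varying at infinity with int_0^infty h(t) dt/t < infty.
def h(t):
    return (t * t / (1.0 + t * t)) / (1.0 + math.log1p(t)) ** 2

# slow variation: h(s t)/h(t) tends to 1 (slowly) as t -> infinity, for each fixed s
print([h(s * 1e8) / h(1e8) for s in (0.5, 2.0, 10.0)])

def tail_integral(a, b, n=20000):
    # midpoint rule for int_a^b h(t) dt/t = int h(e^u) du after substituting t = e^u
    lo, hi = math.log(a), math.log(b)
    du = (hi - lo) / n
    return sum(h(math.exp(lo + (i + 0.5) * du)) for i in range(n)) * du

print(tail_integral(1e-8, 1e8), tail_integral(1e-12, 1e12))  # nearly equal: convergence
```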

5.4. The case of an arbitrary rearrangement invariant space

For $1< q< 2$, let $\{g_n\}$ be a sequence of independent functions equimeasurable with $g(t)=t^{-1/q}$, $0<t\leqslant 1$, and such that $\displaystyle\int_0^1 g_n(t)\,dt=0$, $n \in \mathbb{N}$. As mentioned in the introduction to this section, using a rather serious arsenal of function-theoretical tools Braverman proved that in each r. i. space $X$ such that $X\supset L_{q,\infty}$, the sequence $\{g_n\}$ is equivalent to the canonical $\ell_q$-basis (see [32], Theorem III.3). Here (by using another method, which is much simpler in our opinion) we establish a similar result for Orlicz spaces $\ell_\psi$ in the case when the function $\psi$ is submultiplicative on $[0,1]$, that is, satisfies the condition

$$ \begin{equation} \psi(ts)\leqslant K\psi(t)\psi(s),\qquad 0<t,s\leqslant 1, \end{equation} \tag{5.22} $$
for some $K\geqslant 1$. The role of $L_{q,\infty}$ is taken in this proof by the Marcinkiewicz space corresponding to the function $\varphi(t):=t/{{\psi}^{-1}(t)}$.

We also show that under certain conditions on $\psi$ relation (5.22) is sharp in the following sense: if a sequence $\{f_n\}$ of independent mean zero functions equimeasurable with $\mathfrak{m}_{\psi}:={1}/{\psi}^{-1}$ on $[0,1]$ is equivalent to the canonical $\ell_\psi$-basis in the space $M(\varphi)$, then (5.22) holds.

Theorem 5.4 ([4], Theorem 3.8). Let $\psi$ be an Orlicz function which is equivalent at zero to a $(1+\varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function for some $\varepsilon>0$. If $\{f_n\}$ is a sequence of independent mean zero functions on $[0,1]$ equimeasurable with $\mathfrak{m}_{\psi}$, then the following conditions are equivalent:

(a) in each r. i. space $X$ such that $X\supset M(\varphi)$ the sequence $\{f_n\}$ is equivalent to the canonical $\ell_\psi$-basis;

(b) for each r. i. space $X$ such that $X\supset M(\varphi)$ the following equivalence holds with constants independent of $n\in \mathbb{N}$:

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^nf_k\biggr\|_X\asymp \frac{1}{{\psi}^{-1}(1/n)}\,; \end{equation*} \notag $$

(c) condition (5.22) holds.

Throughout what follows we assume without loss of generality that the function $\psi$ is itself $(1+\varepsilon)$-convex and $(2-\varepsilon)$-concave at zero for some $\varepsilon>0$, strictly increasing, and $\psi(1)=1$.

First we deduce an equivalent expression for the norm of a Marcinkiewicz space $M(\varphi)$.

Lemma 5.2. If an Orlicz function $\psi$ is $(1+\varepsilon)$-convex at zero for some $\varepsilon>0$ and $\varphi(t)=t/\psi^{-1}(t)$, then

$$ \begin{equation*} \|x\|_{M(\varphi)}\asymp\sup_{0<t\leqslant 1}x^*(t){\psi}^{-1}(t). \end{equation*} \notag $$

Proof. We define the dilation function ${\mathcal M}_\varphi$ of $\varphi$ by
$$ \begin{equation*} {\mathcal M}_\varphi(t):=\sup_{0<s\leqslant 1} \frac{\varphi(st)}{\varphi(s)}\,,\qquad 0<t\leqslant 1. \end{equation*} \notag $$

The function $\psi(t^{1/(1+\varepsilon)})$, $0<t\leqslant 1$, is convex on $[0,1]$ by assumption, so by Lemma 3.2

$$ \begin{equation*} \psi((st)^{1/(1+\varepsilon)})\leqslant t\psi(s^{1/(1+\varepsilon)}), \qquad 0<s,t\leqslant 1, \end{equation*} \notag $$
or
$$ \begin{equation*} \psi(uv)\leqslant v^{1+\varepsilon}\psi(u), \qquad 0<u,v\leqslant 1. \end{equation*} \notag $$
Turning to the inverse functions we obtain
$$ \begin{equation*} \psi^{-1}(s) t^{1/(1+\varepsilon)}\leqslant \psi^{-1}(st), \end{equation*} \notag $$
which shows that
$$ \begin{equation*} \varphi(st)=\frac{st}{\psi^{-1}(st)}\leqslant \frac{st^{\varepsilon/(1+\varepsilon)}}{\psi^{-1}(s)}\,. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} {\mathcal M}_\varphi(t)\leqslant t^{\varepsilon/(1+\varepsilon)},\qquad 0<t\leqslant 1, \end{equation*} \notag $$
so that ${\mathcal M}_\varphi(t)\to 0$ as $t\to +0$. Hence by Theorem II.5.3 in [69]
$$ \begin{equation*} \|x\|_{M(\varphi)}\asymp\sup_{0<t\leqslant 1}\frac{tx^*(t)}{\varphi(t)}= \sup_{0<t\leqslant 1}x^*(t){\psi}^{-1}(t). \ \Box \end{equation*} \notag $$
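For example, in the power case $\psi(t)=t^p$, $p>1$, we have $\psi^{-1}(t)=t^{1/p}$ and $\varphi(t)=t^{1-1/p}$, and Lemma 5.2 gives
$$ \begin{equation*} \|x\|_{M(\varphi)}\asymp\sup_{0<t\leqslant 1}t^{1/p}x^*(t), \end{equation*} \notag $$
that is, up to equivalence of norms, $M(\varphi)$ is the weak space $L_{p,\infty}$ on $[0,1]$.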

We use the next lemma in the proofs of two subsequent results, Theorems 5.4 and 5.5. As before, if $a=(a_n)_{n=1}^\infty$, then

$$ \begin{equation*} a \,\overline\otimes\, \mathfrak{m}_{\psi}(u):= \sum_{n=1}^\infty a_n \mathfrak{m}_{\psi}(u+n-1)\cdot\chi_{[n-1,n)}(u),\qquad u>0. \end{equation*} \notag $$

Lemma 5.3. Let $\psi$ be an Orlicz function satisfying (5.22). Then

$$ \begin{equation} (a\,\overline\otimes\, \mathfrak{m}_{\psi})^*\cdot\chi_{(0,1)}\leqslant K\|a\|_{\ell_{\psi}}\mathfrak{m}_{\psi}. \end{equation} \tag{5.23} $$

Proof. Let $a=(a_k)_{k=1}^\infty\in \ell_\psi$. We can assume without loss of generality that $a_k\geqslant 0$ for $k \in \mathbb{N}$ and (for homogeneity reasons) $\|a\|_{\ell_{\psi}}= 1$.

Since $\psi$ is increasing, $\psi(0)=0$, and $\psi(1)=1$, for each $t\geqslant 1$

$$ \begin{equation*} n_{\mathfrak{m}_{\psi}}(t)=m\biggl\{u\in [0,1]\colon \frac{1}{\psi^{-1}(u)}> t\biggr\}=m\biggl\{u\in [0,1]\colon \psi\biggl(\frac{1}{t}\biggr)>u\biggr\}= \psi\biggl(\frac{1}{t}\biggr), \end{equation*} \notag $$
so that
$$ \begin{equation*} n_{a \,\overline\otimes\, \mathfrak{m}_{\psi}} (t)= \sum_{k=1}^\infty n_{a_k \mathfrak{m}_{\psi}} (t)= \sum_{k=1}^\infty {\psi}\biggl(\frac{a_k}{t}\biggr). \end{equation*} \notag $$
Furthermore, for each $j \in \mathbb{N}$
$$ \begin{equation*} \psi(a_j) \leqslant \sum_{k=1}^\infty \psi(a_k)=1, \end{equation*} \notag $$
and therefore, as $\psi$ is increasing and $\psi(1)=1$, we have $a_j \leqslant 1$ for $j \in \mathbb{N}$, that is, $\|a\|_{\ell_\infty} \leqslant 1$. Hence if $t \geqslant 1$, then by (5.22)
$$ \begin{equation*} \psi\biggl(\frac{a_k}{t}\biggr)\leqslant K\psi(a_k)\psi\biggl(\frac{1}{t}\biggr). \end{equation*} \notag $$
It follows from this and the previous equalities that
$$ \begin{equation} n_{a\,\overline\otimes\, \mathfrak{m}_{\psi}}(t)\leqslant K\sum_{k=1}^\infty \psi(a_k)\psi\biggl(\frac{1}{t}\biggr)= K\psi\biggl(\frac{1}{t}\biggr)=Kn_{\mathfrak{m}_{\psi}}(t),\qquad t\geqslant 1. \end{equation} \tag{5.24} $$

Let $s \in (0,1)$. Since $\psi(1)=1$, we have $\{u \colon n_{\mathfrak{m}_{\psi}}(u)\leqslant s\}\subset (1,\infty)$. Therefore, if $n_{\mathfrak{m}_{\psi}}(u)\leqslant s$, then by (5.24)

$$ \begin{equation*} n_{a \,\overline\otimes\, \mathfrak{m}_{\psi}} (u)\leqslant Kn_{\mathfrak{m}_{\psi}}(u)\leqslant Ks. \end{equation*} \notag $$
Hence, using the definition of the non-increasing rearrangement of a measurable function and bearing in mind that $\mathfrak{m}_{\psi}$ is decreasing, we obtain
$$ \begin{equation*} \sigma_{1/K}(a\,\overline\otimes\,\mathfrak{m}_{\psi})^*\cdot \chi_{(0,1)} \leqslant \mathfrak{m}_{\psi}, \end{equation*} \notag $$
so that
$$ \begin{equation} (a\,\overline\otimes\, \mathfrak{m}_{\psi})^*\cdot\chi_{(0,1)}\leqslant \sigma_K\mathfrak{m}_{\psi}. \end{equation} \tag{5.25} $$

On the other hand, as $\psi$ is convex, we have the inequality $\psi(s/K)\leqslant \psi(s)/K$ for each $s>0$, or, setting $s=\psi^{-1}(t)$, the inequality $\psi(\psi^{-1}(t)/K)\leqslant t/K$ for each $t>0$. Since $\psi$ is increasing, it follows that $\psi^{-1}(t)/K \leqslant \psi^{-1}(t/K)$ for $t>0$, or, taking the definition of $\mathfrak{m}_{\psi}$ into account,

$$ \begin{equation*} \sigma_K\mathfrak{m}_{\psi}\leqslant K\mathfrak{m}_{\psi}. \end{equation*} \notag $$
As a result, the required inequality is a direct consequence of the last relation and inequality (5.25). $\Box$
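In the model power case the constant in (5.23) can be traced explicitly. If $\psi(t)=t^p$, $a_k\geqslant 0$, and $\|a\|_{\ell_p}=1$, then for $t\geqslant 1$
$$ \begin{equation*} n_{a\,\overline\otimes\,\mathfrak{m}_{\psi}}(t)= \sum_{k=1}^\infty\biggl(\frac{a_k}{t}\biggr)^p=t^{-p}= n_{\mathfrak{m}_{\psi}}(t), \end{equation*} \notag $$
so that $(a\,\overline\otimes\,\mathfrak{m}_{\psi})^*(u)=u^{-1/p}= \mathfrak{m}_{\psi}(u)$ for $u\in(0,1]$, and inequality (5.23) holds with $K=1$.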

Proof of Theorem 5.4. First we prove the implication (c) $\Rightarrow$ (a). By the assumptions of the theorem, Proposition 3.2, and Theorem 5.1, the following equivalence holds with constants independent of $a=(a_n)\in \ell_{\psi}$:
$$ \begin{equation} \biggl\|\,\sum_{n=1}^\infty a_nf_n\biggr\|_1\asymp \|a\|_{\ell_{\psi}}, \end{equation} \tag{5.26} $$
so, as $X\subset L_1[0,1]$, it is sufficient to show that for a constant independent of $a\in \ell_{\psi}$ we have
$$ \begin{equation} \biggl\|\,\sum_{n=1}^\infty a_nf_n\biggr\|_X\preceq\|a\|_{\ell_{\psi}}. \end{equation} \tag{5.27} $$

First of all, since $\psi$ is $(2-\varepsilon)$-concave at zero, the function $\psi(t^{1/(2-\varepsilon)})/t$ decreases on $[0,1]$, so that $\psi(t)\geqslant t^{2-\varepsilon}\geqslant t^2$, $0<t\leqslant 1$. Therefore, $\varphi(t)\geqslant t^{1/2}$, which means that $M(\varphi)\supset M(t^{1/2})\supset L_2$. Thus, by Theorem 3.2

$$ \begin{equation} \biggl\|\,\sum_{n=1}^\infty a_nf_n\biggr\|_{M(\varphi)}\asymp \|a \,\overline\otimes\,\mathfrak{m}_{\psi}\|_{Z_{M(\varphi)}^2}, \end{equation} \tag{5.28} $$
where $Z_{M(\varphi)}^2$ is the r. i. space on $(0,\infty)$ with the quasinorm
$$ \begin{equation} \|x\|_{Z_{M(\varphi)}^2}=\|x^*\chi_{[0,1]}\|_{M(\varphi)}+ \|x^*\chi_{(1,\infty)}\|_{2} \end{equation} \tag{5.29} $$
(see § 3.1). Since by the same result of Theorem 3.2 (also see (3.4)) we have
$$ \begin{equation*} \bigg\|\sum_{n=1}^\infty a_nf_n\bigg\|_1\asymp \|a \,\overline\otimes\, \mathfrak{m}_{\psi}\|_{(L_1+L_2)(0,\infty)}, \end{equation*} \notag $$
in view of (5.26) we obtain
$$ \begin{equation*} \|(a \,\overline\otimes\,\mathfrak{m}_{\psi})^*\cdot\chi_{(1,\infty)}\|_{2} \preceq\|a\,\overline\otimes\,\mathfrak{m}_\psi\|_{(L_1+L_2)(0,\infty)} \asymp \|a\|_{\ell_{\psi}}. \end{equation*} \notag $$
Moreover, by Lemma 5.2 we have the membership relation $\mathfrak{m}_{\psi}\in M(\varphi)$, so using Lemma 5.3 we obtain
$$ \begin{equation*} \|(a\,\overline\otimes\, \mathfrak{m}_{\psi})^*\cdot\chi_{[0,1]}\|_{M(\varphi)}\leqslant K\|\mathfrak{m}_{\psi}\|_{M(\varphi)}\|a\|_{\ell_{\psi}}\preceq \|a\|_{\ell_{\psi}} \end{equation*} \notag $$
for some constants independent of $a\in \ell_{\psi}$. As a result, the last inequalities, relations (5.28) and (5.29), and the embedding $X\supset M(\varphi)$ yield (5.27).

Since the implication (a) $\Rightarrow$ (b) is obvious, it remains to show that (b) implies (c).

Let $s_n=\displaystyle\sum_{k=1}^n e_k$, $n\in\mathbb{N}$. Then

$$ \begin{equation*} s_n \,\overline\otimes\, \mathfrak{m}_{\psi}(u)= \sum_{k=1}^n\mathfrak{m}_{\psi}(u+k-1)\cdot\chi_{[k-1,k)}(u),\qquad u>0. \end{equation*} \notag $$
Since the function $1/\psi^{-1}$ is decreasing, it follows that
$$ \begin{equation*} (s_n \,\overline\otimes\, \mathfrak{m}_{\psi})^*(u)\cdot\chi_{[0,1]}(u)= \frac{1}{\psi^{-1}(u/n)}\,,\qquad u\in [0,1]. \end{equation*} \notag $$
Therefore, using condition (b) for $X=M(\varphi)$ and taking (5.28) and (5.29) into account we obtain
$$ \begin{equation*} \begin{aligned} \, \biggl\|\frac{1}{{\psi}^{-1}(\cdot/n)}\biggr\|_{M(\varphi)}&= \|(s_n \,\overline\otimes\, \mathfrak{m}_{\psi})^*\cdot\chi_{[0,1]}\|_{M(\varphi)}\preceq \|s_n \,\overline\otimes\, \mathfrak{m}_{\psi}\|_{Z_{M(\varphi)}^2} \\ &\asymp\biggl\|\,\sum_{k=1}^n f_k\biggr\|_{M(\varphi)}\asymp \frac{1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}. \end{aligned} \end{equation*} \notag $$
Since by Lemma 5.2 we have
$$ \begin{equation*} \biggl\|\frac{1}{\psi^{-1}(\cdot/n)}\biggr\|_{M(\varphi)}\asymp \sup_{0<t\leqslant 1}\frac{\psi^{-1}(t)}{\psi^{-1}(t/n)}\,, \end{equation*} \notag $$
it follows from the previous relation that
$$ \begin{equation*} \frac{\psi^{-1}(t)}{\psi^{-1}(t/n)}\leqslant \frac{C_1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}, \end{equation*} \notag $$
or
$$ \begin{equation*} \psi^{-1}\biggl(\frac{1}{n}\biggr)\psi^{-1}(t)\leqslant C_1\psi^{-1}\biggl(\frac{t}{n}\biggr) \end{equation*} \notag $$
for some $C_1>0$ and all $t\in (0,1]$ and $n\in\mathbb{N}$. Note that $\psi\in\Delta_2^\infty$ by the assumptions of the theorem. Hence by the above inequality, for some $C_2>0$ and all $t\in (0,1]$ and $n\in\mathbb{N}$ we have
$$ \begin{equation*} \psi\biggl(\psi^{-1}\biggl(\frac{1}{n}\biggr)\psi^{-1}(t)\biggr)\leqslant \psi\biggl(C_1\psi^{-1}\biggl(\frac{t}{n}\biggr)\biggr)\leqslant \frac{C_2t}{n}\,. \end{equation*} \notag $$
Since this estimate is equivalent to (5.22), Theorem 5.4 is proved. $\Box$

Remark 5.2. An analysis of the proof of Theorem 5.4 shows that each of conditions (a), (b), and (c) is equivalent to the following equivalence with constants independent of $n\in \mathbb{N}$:

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^nf_k\biggr\|_{M(\varphi)}\asymp \frac{1}{\psi^{-1}(1/n)}\,,\quad\text{where}\ \ \varphi(t):=\frac{t}{\psi^{-1}(t)}\,. \end{equation*} \notag $$

In the second part of this subsection we show that if $\psi$ is a submultiplicative function and $\mathfrak{m}_\psi$ belongs to the separable part of an r. i. space $X$, then the distribution of a function with independent copies spanning the space $\ell_\psi$ in $X$ is unique.

Theorem 5.5 ([21], Theorem 4.7). Let the Orlicz function $\psi$ be equivalent at zero to a $(1+\varepsilon)$-convex and $(2-\varepsilon)$-concave Orlicz function for some $\varepsilon>0$ and let it satisfy (5.22). Assume that $X$ is an r. i. space on $[0,1]$ such that $\mathfrak{m}_\psi\in X_0$. If a sequence of independent copies of a function $f\in X$ such that $\displaystyle\int_0^1 f(t)\,dt=0$ is equivalent in $X$ to the canonical $\ell_{\psi}$-basis, then the distributions of the functions $f$ and $\mathfrak{m}_\psi$ are quasi-equivalent.

First we prove a number of auxiliary results. The first of them is proved by a change of variable in the integral.

Lemma 5.4. Let $\psi$ be an Orlicz function. Then $\mathfrak{m}_\psi\in L_1[0,1]$ if and only if

$$ \begin{equation} \int_0^1\frac{\psi'(u)\,du}{u}<\infty. \end{equation} \tag{5.30} $$
In this case
$$ \begin{equation*} \int_0^1{\mathfrak{m}_\psi}(s)\,ds=\int_0^1\frac{\psi'(s)\,ds}{s}\,. \end{equation*} \notag $$

For $r\geqslant 1$ let $\psi_r \colon [0,\infty) \to [0,\infty)$ be defined by

$$ \begin{equation} \psi_r(t):=\begin{cases} t^r, & 0 \leqslant t \leqslant 1, \\ 1+r(t-1), & t >1. \end{cases} \end{equation} \tag{5.31} $$
It is easy to verify that $\psi_r$ is an Orlicz function. Moreover, if $t > 1$, then
$$ \begin{equation*} t \leqslant 1+r(t-1)=\psi_r(t)=rt+(1-r) \leqslant \min\{t^r,rt\}, \end{equation*} \notag $$
so that
$$ \begin{equation} \min\{t^r,t\} \leqslant \psi_r(t) \leqslant \min\{t^r,rt\},\qquad t\geqslant 0. \end{equation} \tag{5.32} $$
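For example, for $r=2$ definition (5.31) gives $\psi_2(t)=t^2$ for $0\leqslant t\leqslant 1$ and $\psi_2(t)=2t-1$ for $t>1$, and (5.32) takes the form
$$ \begin{equation*} \min\{t^2,t\}\leqslant \psi_2(t)\leqslant \min\{t^2,2t\},\qquad t\geqslant 0; \end{equation*} \notag $$
say, at $t=2$ it reads $2\leqslant 3\leqslant 4$.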

Lemma 5.5. For $1\leqslant r<\infty$ let $\psi_r$ be the function defined by (5.31). Then $L_{\psi_r}=L_1+L_r$ and

$$ \begin{equation} \frac{1}{2r+2}\|f\|_{L_{\psi_r}}\leqslant \|f\|_{L_1+L_r}\leqslant 2\|f\|_{L_{\psi_r}},\qquad f\in L_1+L_r. \end{equation} \tag{5.33} $$

Proof. Without loss of generality we can assume that $f\geqslant0$ and $\|f\|_{L_{\psi_r}}=1$. Hence in view of (5.32) we have
$$ \begin{equation*} \|\min\{f,f^r\}\|_1\leqslant\|\psi_r(f)\|_1=1, \end{equation*} \notag $$
so that, setting $f_1=f\chi_{\{f>1\}}$ and $f_2=f\chi_{\{0\leqslant f\leqslant 1\}}$ we obtain a representation $f=f_1+f_2$, where $\|f_1\|_1\leqslant 1$ and $\|f_2\|_r\leqslant 1$. Thus,
$$ \begin{equation*} \|f\|_{L_1+L_r}\leqslant\|f_1\|_1+\|f_2\|_r\leqslant 2, \end{equation*} \notag $$
which yields the embedding $L_{\psi_r}\subset L_1+L_r$ and the right-hand inequality in (5.33).

In the opposite direction, if $f\in L_1+L_r$, then we can choose $f_1\in L_1$ and $f_2\in L_r$ so that

$$ \begin{equation*} f=f_1+f_2\quad\text{and}\quad \|f_1\|_1+\|f_2\|_r\leqslant 2\|f\|_{L_1+L_r}. \end{equation*} \notag $$
Hence, by the right-hand inequality in (5.32)
$$ \begin{equation*} \|f_2\|_{L_{\psi_r}}\leqslant\|f_2\|_r\leqslant 2\|f\|_{L_1+L_r} \end{equation*} \notag $$
and
$$ \begin{equation*} \|f_1\|_{L_{\psi_r}}\leqslant r\|f_1\|_1\leqslant 2r\|f\|_{L_1+L_r}. \end{equation*} \notag $$
Thus, using the triangle inequality we obtain
$$ \begin{equation*} \|f\|_{L_{\psi_r}}\leqslant \|f_1\|_{L_{\psi_r}}+ \|f_2\|_{L_{\psi_r}}\leqslant (2r+2)\|f\|_{L_1+L_r}, \end{equation*} \notag $$
which yields the embedding $L_1+L_r\subset L_{\psi_r}$ and the left-hand inequality in (5.33). $\Box$

The next result is a considerable simplification of Lemma 7 and Theorem 8 in [83].

Lemma 5.6. Let $r>1$, and let $\psi$ be an Orlicz function satisfying (5.30). Also let $\psi_r$ be the function defined by (5.31), and let

$$ \begin{equation*} N(t):=\int_0^1{\psi}_r(t{\mathfrak{m}_\psi}(s))\,ds,\qquad t>0, \end{equation*} \notag $$
where $\mathfrak{m}_\psi(s)=1/\psi^{-1}(s)$, $s\in(0,1]$. Then the following equivalence holds with constants independent of $c\in \ell_N$:
$$ \begin{equation*} \|c\,\overline\otimes\,\mathfrak{m}_\psi\|_{L_1+L_r} \asymp \|c\|_{\ell_N}. \end{equation*} \notag $$

Proof. First of all, we verify that the function $N$ is well defined and is an Orlicz function.

In fact, it follows from (5.32) that $\psi_r(t)\leqslant rt$ for all $t>0$. Hence, bearing in mind that $\psi$ satisfies (5.30), from Lemma 5.4 we obtain

$$ \begin{equation*} N(t)\leqslant rt\int_0^1{\mathfrak{m}_\psi}(s)\,ds<\infty,\qquad t>0. \end{equation*} \notag $$
Now, as $\psi_r$ is convex, it follows that
$$ \begin{equation*} {\psi}_r([\lambda t_1+(1-\lambda)t_2]{\mathfrak{m}_\psi}(s))\leqslant \lambda {\psi}_r(t_1{\mathfrak{m}_\psi}(s))+ (1-\lambda){\psi}_r(t_2{\mathfrak{m}_\psi}(s)) \end{equation*} \notag $$
whenever $s,\lambda\in[0,1]$ and $t_1,t_2\geqslant 0$. Integrating this with respect to $s\in [0,1]$ we obtain
$$ \begin{equation*} N(\lambda t_1+(1-\lambda)t_2)\leqslant \lambda N(t_1)+(1-\lambda)N(t_2). \end{equation*} \notag $$
Hence $N$ is a convex function. Since $N$ is increasing and $N(0)=0$, we see that it is an Orlicz function.

Let $c=(c_k)_{k=1}^\infty\in \ell_N$, $\|c\|_{\ell_N}=1$, and $c_k\geqslant 0$, $k \in \mathbb{N}$. Then

$$ \begin{equation*} 1=\sum_{k=1}^\infty N(c_k)=\sum_{k=1}^\infty \int_0^1 \psi_r(c_k \mathfrak{m}_\psi(s))\,ds= \int_0^\infty \psi_r(c\,\overline\otimes\,\mathfrak{m}_\psi)(u)\,du. \end{equation*} \notag $$
Therefore, $\|c \,\overline\otimes\,\mathfrak{m}_\psi\|_{L_{\psi_r}}=1$, so $\|c\|_{\ell_N}=\|c\,\overline\otimes\, \mathfrak{m}_\psi\|_{L_{\psi_r}}$ by the positive homogeneity of norms. Now the required result follows from Lemma 5.5. $\Box$

Lemma 5.7. If $\psi$, $N$, and $\psi_r$ are the same functions as in Lemma 5.6, then

$$ \begin{equation*} N(t)=t^r+r t\int_0^t \frac{{\psi}(u)}{u^2}\, du+ r t^r\int_t^1 \frac{{\psi}(u)}{u^{r+1}}\,du,\qquad t\in(0,1]. \end{equation*} \notag $$

Proof. By the definition of $\mathfrak{m}_\psi$, for each $t\in(0,1]$ we have the equalities
$$ \begin{equation*} t\mathfrak{m}_\psi(s)\geqslant 1\ \, \Longleftrightarrow\ \, s\leqslant \psi(t)\qquad\text{and}\qquad t\mathfrak{m}_\psi(s)\leqslant 1\ \, \Longleftrightarrow\ \, s\geqslant \psi(t). \end{equation*} \notag $$
Hence by (5.31)
$$ \begin{equation*} \psi_r(t\mathfrak{m}_\psi(s))=(t\mathfrak{m}_\psi(s))^r,\qquad s\in (\psi(t),1], \end{equation*} \notag $$
and
$$ \begin{equation*} \psi_r(t\mathfrak{m}_\psi(s))=1-r+rt\mathfrak{m}_\psi(s),\qquad s\in(0,\psi(t)]. \end{equation*} \notag $$
Consequently,
$$ \begin{equation*} N(t)=\int_0^{\psi(t)}(rt\mathfrak{m}_\psi(s)-r+1)\,ds+ \int_{\psi(t)}^1(t\mathfrak{m}_\psi(s))^r\,ds. \end{equation*} \notag $$
Inserting $s=\psi(u)$ into the last equality we obtain
$$ \begin{equation*} N(t)=(1-r)\psi(t)+r\int_0^t\frac{t\psi'(u)\,du}{u}+ \int_t^1\frac{t^r\psi'(u)\,du}{u^r}\,,\qquad t\in(0,1], \end{equation*} \notag $$
and, after integration by parts,
$$ \begin{equation*} \begin{aligned} \, N(t)&=(1-r)\psi(t)+rt\biggl(\frac{\psi(u)}{u}\bigg|_0^t+ \int_0^t\frac{\psi(u)}{u^2}\,du\biggr)+ t^r\biggl(\frac{\psi(u)}{u^r}\bigg|_t^1+ r\int_t^1\frac{\psi(u)}{u^{r+1}}\, du\biggr) \\ &=rt\biggl(-\lim_{u\to 0}\frac{\psi(u)}{u}+ \int_0^t\frac{\psi(u)}{u^2}\,du\biggr)+ t^r\biggl(1+r\int_t^1\frac{\psi(u)}{u^{r+1}}\,du\biggr). \end{aligned} \end{equation*} \notag $$
Since
$$ \begin{equation*} \frac{\psi(u)}{u}\leqslant \int_0^{\psi(u)}\,\frac{ds}{\psi^{-1}(s)}\,, \end{equation*} \notag $$
it follows from (5.30) that $\lim_{u\to 0}\psi(u)/u=0$. $\Box$
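To illustrate Lemma 5.7 in the model power case, let $\psi(u)=u^p$ with $1<p<r$. Then (5.30) obviously holds, and the formula of the lemma gives
$$ \begin{equation*} N(t)=t^r+\frac{r}{p-1}\,t^{p}+ \frac{r}{r-p}\,(t^{p}-t^{r})\asymp t^p=\psi(t),\qquad t\in(0,1], \end{equation*} \notag $$
since $t^r\leqslant t^p$ on $(0,1]$; this is a special case of equivalence (5.34) established below.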

Lemma 5.8. Let $\psi$ be an Orlicz function, and let $\varepsilon>0$.

(i) If $\psi$ is $(1+\varepsilon)$-convex on $[0,1]$, then

$$ \begin{equation*} \int_0^t \frac{\psi(u)}{u^2}\,du\leqslant \frac{\psi(t)}{\varepsilon t}\,,\qquad t>0. \end{equation*} \notag $$

(ii) If $r>1$ and the function $\psi$ is $(r-\varepsilon)$-concave on $[0,1]$, then

$$ \begin{equation*} \int_t^1\frac{\psi(u)}{u^{r+1}}\,du\leqslant \frac{\psi(t)}{\varepsilon t^r}\,,\qquad 0<t\leqslant 1. \end{equation*} \notag $$

Proof. (i) If $u\in(0,t]$, then by Lemma 3.1 (i)
$$ \begin{equation*} \frac{\psi(u)}{u^{1+\varepsilon}}\leqslant \frac{\psi(t)}{t^{1+\varepsilon}}\,, \end{equation*} \notag $$
so that
$$ \begin{equation*} \frac{\psi(u)}{u^2}\leqslant \frac{\psi(t)}{t^{1+\varepsilon}u^{1-\varepsilon}}\,. \end{equation*} \notag $$
Hence
$$ \begin{equation*} \int_0^t\frac{\psi(u)\,du}{u^2}\leqslant \int_0^t\frac{\psi(t)\,du}{t^{1+\varepsilon}u^{1-\varepsilon}}= \frac{\psi(t)}{t^{1+\varepsilon}}\int_0^tu^{\varepsilon-1}\,du= \frac{\psi(t)}{\varepsilon t}\,. \end{equation*} \notag $$

(ii) Let $u\in [t,1]$. Then by Lemma 3.1 (ii) we have

$$ \begin{equation*} \frac{\psi(u)}{u^{r-\varepsilon}}\leqslant \frac{\psi(t)}{t^{r-\varepsilon}}\,, \end{equation*} \notag $$
and therefore
$$ \begin{equation*} \frac{\psi(u)}{u^{r+1}}\leqslant \frac{\psi(t)}{t^{r-\varepsilon}u^{1+\varepsilon}}\,. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \int_t^1\frac{\psi(u)\,du}{u^{r+1}}\leqslant \int_t^1\frac{\psi(t)\,du}{t^{r-\varepsilon}u^{1+\varepsilon}}= \frac{\psi(t)}{t^{r-\varepsilon}}\int_t^1u^{-\varepsilon-1}\,du\leqslant \frac{\psi(t)}{\varepsilon t^r}\,. \ \Box \end{equation*} \notag $$

Lemma 5.9. Let $r>1$. Assume that $\psi$ is an Orlicz function that is $(1+\varepsilon)$-convex and $(r-\varepsilon)$-concave on $[0,1]$ for some $\varepsilon>0$. Then

$$ \begin{equation*} \|c\,\overline\otimes\,\mathfrak{m}_\psi\|_{L_1+L_r} \asymp\|c\|_{\ell_{\psi}} \end{equation*} \notag $$
for some constants independent of $c\in \ell_{\psi}$.

Proof. First we verify that $\psi$ satisfies condition (5.30). In fact, as $\psi$ is $(1+\varepsilon)$-convex, the function $\psi(t^{1/(1+\varepsilon)})$ is convex, so its derivative $(1+\varepsilon)^{-1}\psi'(t^{1/(1+\varepsilon)}) t^{-\varepsilon/(1+\varepsilon)}$ is increasing for $t\in [0,1]$. Hence it is easy to see that $\psi'(t)\leqslant \psi'(1)t^{\varepsilon}$, $t\in (0,1]$, which in turn yields (5.30).

Thus, similarly to Lemma 5.6 we can define the Orlicz function $N$. We show that

$$ \begin{equation} \psi(t)\asymp N(t),\qquad 0<t\leqslant 1. \end{equation} \tag{5.34} $$

By Lemma 3.1 (ii) the function $\psi(t)t^{-r+\varepsilon}$ is decreasing on $[0,1]$. Hence (since we can assume that $r-\varepsilon>1$) it follows that

$$ \begin{equation*} \int_0^t \frac{\psi(u)}{u^2}\,du \geqslant \frac{\psi(t)}{t^{r-\varepsilon}}\int_0^t u^{r-\varepsilon-2}\,du= \frac{\psi(t)}{(r-\varepsilon-1)t}\,, \end{equation*} \notag $$
so that Lemma 5.7 ensures the inequality $\psi\preceq N$ on $[0,1]$.

On the other hand, in view of Lemmas 5.7 and 5.8

$$ \begin{equation*} N(t)\leqslant t^r+{2r}{\varepsilon}^{-1}\psi(t),\qquad 0<t\leqslant 1. \end{equation*} \notag $$
Furthermore, since $\psi(1)=1$, using Lemma 3.1 (ii) again we see that $\psi(t)\geqslant t^{r-\varepsilon}\geqslant t^r$, $0<t\leqslant 1$. This and the previous inequality mean that $N\preceq \psi$ on $(0,1]$. Thus we have proved (5.34), and the required result follows from Lemma 5.6. $\Box$

Lemma 5.10. Let $\psi$ be an Orlicz function and $X$ be an r. i. space on $[0,1]$. Assume that for some measurable function $f$ on $[0,1]$ the estimate

$$ \begin{equation*} \|c\,\overline\otimes\, f\|_{Z_X^2} \preceq \|c\|_{\ell_{\psi}},\qquad c\in \ell_{\psi}, \end{equation*} \notag $$
holds. Then
$$ \begin{equation*} f^*(t)\preceq{\mathfrak{m}_\psi}(t),\qquad t\in (0,1]. \end{equation*} \notag $$

Proof. Since $Z_X^2 \subset L_1+L_{\infty}$, it follows from the assumptions of the lemma that
$$ \begin{equation*} \|c\,\overline\otimes\, f\|_{L_1+L_{\infty}} \preceq \|c\|_{\ell_{\psi}}. \end{equation*} \notag $$
If $s_n=\displaystyle\sum_{k=1}^n e_k$, $n\in\mathbb{N}$, then $(s_n \,\overline\otimes\, f)^*= \sigma_nf^*$, and therefore by (2.3)
$$ \begin{equation*} \|s_n\,\overline\otimes\, f\|_{L_1+L_{\infty}}= {\mathcal K}(1,\sigma_nf^*;L_1,L_\infty)= \int_0^1\sigma_n f^*(s)\,ds=n\int_0^{1/n}f^*(s)\,ds. \end{equation*} \notag $$
Thus, by the above estimate
$$ \begin{equation*} n\int_0^{1/n}f^*(s)\,ds\preceq\|s_n\|_{\ell_{\psi}}= \frac{1}{\psi^{-1}(1/n)}={\mathfrak{m}_\psi}\biggl(\frac{1}{n}\biggr),\qquad n\in\mathbb{N}. \end{equation*} \notag $$

Now, if $t\in (0,1]$, then $t\in(1/(n+1),1/n]$ for some $n\in\mathbb{N}$ and the following estimate holds with a constant independent of $t$:

$$ \begin{equation*} f^*(t)\leqslant\frac{1}{t}\int_0^tf^*(s)\,ds\leqslant (n+1)\int_0^{1/n}f^*(s)\,ds\preceq {\mathfrak{m}_\psi}\biggl(\frac{1}{n}\biggr)\leqslant {\mathfrak{m}_\psi}(t). \quad\Box \end{equation*} \notag $$

Lemma 5.11. If $X$ is an r. i. space on $[0,1]$, then for each $1\leqslant r<2$ and any $f\in Z_X^r$ we have

$$ \begin{equation*} \|f\|_{Z_X^2}\preceq\|f\|_{Z_X^{\infty}}^{1-r/2}\|f\|_{Z_X^r}^{r/2}. \end{equation*} \notag $$

Proof. Recall that by definition
$$ \begin{equation*} \|f\|_{Z_X^q}=\|f^*\chi_{(0,1)}\|_X+\|f^*\chi_{(1,\infty)}\|_q,\quad 1\leqslant q<\infty,\quad\text{and}\quad \|f\|_{Z_X^\infty}=\|f^*\chi_{(0,1)}\|_X. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \|f^*\chi_{(0,1)}\|_X=\|f^*\chi_{(0,1)}\|_X^{1-r/2}\|f^*\chi_{(0,1)}\|_X^{r/2} \leqslant\|f\|_{Z_X^{\infty}}^{1-r/2}\|f\|_{Z_X^r}^{r/2}, \end{equation*} \notag $$
and since $\|\chi_{[0,1]}\|_X=1$, for each $1\leqslant r<2$ we obtain
$$ \begin{equation*} \|f^*\chi_{(1,\infty)}\|_2^2=\int_1^\infty f^*(t)^2\,dt \leqslant f^*(1)^{2-r}\int_1^\infty f^*(t)^r\,dt\leqslant \|f^*\chi_{(0,1)}\|_{X}^{2-r}\|f^*\chi_{(1,\infty)}\|_r^r. \end{equation*} \notag $$
Hence
$$ \begin{equation*} \|f^*\chi_{(1,\infty)}\|_2\leqslant \|f\|_{Z_X^{\infty}}^{1-r/2}\|f\|_{Z_X^r}^{r/2}. \end{equation*} \notag $$
Since the required estimate follows from the above relations, the proof is complete. $\Box$

Lemma 5.12. Let $\psi$ be an Orlicz function that is $(1+\varepsilon)$-convex and $(2-\varepsilon)$-concave on $[0,1]$ for some $\varepsilon>0$. Assume that $X$ is an r. i. space on $[0,1]$ such that $\mathfrak{m}_\psi\in X$. If a function $f\in X$ satisfies the condition

$$ \begin{equation*} \|c\,\overline\otimes\, f\|_{Z_X^2}\asymp\|c\|_{\ell_{\psi}},\qquad c\in \ell_{\psi}, \end{equation*} \notag $$
then
$$ \begin{equation*} \|c\,\overline\otimes\,f\|_{Z_X^{\infty}}\asymp \|c\|_{\ell_{\psi}},\qquad c\in \ell_{\psi}. \end{equation*} \notag $$

Proof. Fix $r\in(1,2)$ such that $\psi$ is $(r-\delta)$-concave for some $\delta>0$; such an $r$ exists because $\psi$ is $(2-\varepsilon)$-concave. Then by Lemma 5.11
$$ \begin{equation*} \|c\,\overline\otimes\, f\|_{Z_X^2}\preceq \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}^{1-r/2} \|c\,\overline\otimes\, f\|_{Z_X^r}^{r/2}. \end{equation*} \notag $$
In addition, by Lemma 5.10 we have $f^*(t)\preceq {\mathfrak{m}_\psi}(t)$, $t\in (0,1]$, so that by Lemma 5.9
$$ \begin{equation*} \begin{aligned} \, \|c \,\overline\otimes\, f\|_{Z_X^r}&\asymp \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}+ \|c \,\overline\otimes\, f\|_{L_1+L_r}\preceq \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}+ \|c \,\overline\otimes\, {\mathfrak{m}_\psi}\|_{L_1+L_r} \\ &\asymp\|c\,\overline\otimes\, f\|_{Z_X^{\infty}}+ \|c\|_{\ell_{\psi}}. \end{aligned} \end{equation*} \notag $$
Thus, in view of the assumptions of the lemma and the previous estimate we have
$$ \begin{equation*} \|c\|_{\ell_{\psi}}\asymp\|c \,\overline\otimes\, f\|_{Z_X^2}\preceq \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}^{1-r/2} (\|c\,\overline\otimes\, f\|_{Z_X^{\infty}}+ \|c\|_{\ell_{\psi}})^{r/2}. \end{equation*} \notag $$
Since $r<2$, it follows that $\|c\|_{\ell_{\psi}}\preceq \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}$: indeed, if $\|c\,\overline\otimes\, f\|_{Z_X^{\infty}}\leqslant \|c\|_{\ell_{\psi}}$, then the above estimate gives $\|c\|_{\ell_{\psi}}\preceq \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}^{1-r/2}\|c\|_{\ell_{\psi}}^{r/2}$, and it remains to divide both sides by $\|c\|_{\ell_{\psi}}^{r/2}$ and raise them to the power $(1-r/2)^{-1}$. As the reverse estimate
$$ \begin{equation*} \|c\|_{\ell_{\psi}}\asymp\|c \,\overline\otimes\, f\|_{Z_X^2} \geqslant\|c\,\overline\otimes\, f\|_{Z_X^{\infty}} \end{equation*} \notag $$
is obvious, the proof is complete. $\Box$

Lemma 5.13. Let $X$ be an r. i. space on $[0,1]$. If $\{x_n\}_{n=1}^\infty\subset X$, $\|x_n\|_1\to0$ as $n\to \infty$ and $x_n^*\leqslant x^*$, $n=1,2,\dots$, for some $x\in X_0$, then $\|x_n\|_X\to 0$ as $n\to \infty$.

Proof. Since $x\in X_0$, for an arbitrary $\varepsilon>0$ there exists $\delta>0$ such that $\|x^*\chi_{(0,\delta]}\|_X<\varepsilon$. In addition, $x_n\to0$ in measure as $n\to \infty$. Therefore, for all sufficiently large $n\in\mathbb{N}$ we have $m(A_n)<\delta$, where $A_n:=\{t\colon |x_n(t)|>\varepsilon\}$, so that
$$ \begin{equation*} \begin{aligned} \, \|x_n\|_X &\leqslant \|x_n\chi_{A_n}\|_X+\|x_n\chi_{[0,1]\setminus A_n}\|_X \leqslant \|x_n^*\chi_{(0,\delta]}\|_X+\varepsilon\|\chi_{[0,1]}\|_X \\ &\leqslant \|x^*\chi_{(0,\delta]}\|_X+\varepsilon\|\chi_{[0,1]}\|_X \leqslant 2\varepsilon. \ \Box \end{aligned} \end{equation*} \notag $$
Proof of Theorem 5.5. As before, assume that $\psi$ is $(1+\varepsilon)$-convex and $(2-\varepsilon)$-concave on $[0,1]$.

First, since $\mathfrak{m}_\psi\in X_0$ by assumption and $\psi$ is $(1+\varepsilon)$-convex on $[0,1]$, by Lemma 5.2 we have the embedding $X\supset M(\varphi)$, where $\varphi(t)=t/\psi^{-1}(t)$, $0<t\leqslant 1$. Second, $\psi$ is $(2-\varepsilon)$-concave on $(0,1]$, and therefore $M(\varphi)\supset L_2$ (see the proof of Theorem 5.4). Thus, $X\supset L_2$, and since by assumption a sequence of independent copies of the function $f\in X$ satisfying $\displaystyle\int_0^1 f(t)\,dt=0$ is equivalent in $X$ to the canonical basis of $\ell_{\psi}$, using Theorem 3.2 we obtain

$$ \begin{equation*} \|c\,\overline\otimes\, f\|_{Z_X^{2}}\asymp \|c\|_{\ell_{\psi}},\qquad c\in \ell_{\psi}. \end{equation*} \notag $$
Hence, applying Lemma 5.12 in its turn, we have
$$ \begin{equation*} \|(c\,\overline\otimes\, f)^*\cdot\chi_{[0,1]}\|_X= \|c\,\overline\otimes\, f\|_{Z_X^{\infty}}\asymp \|c\|_{\ell_{\psi}},\qquad c\in \ell_{\psi}. \end{equation*} \notag $$
In addition, $f^*\preceq{\mathfrak{m}_\psi}$ on $(0,1]$ by Lemma 5.10, so that by Lemma 5.3 we have
$$ \begin{equation*} (c\,\overline\otimes\, f)^*\cdot\chi_{(0,1)}\preceq (c\,\overline\otimes\, {\mathfrak{m}_\psi})^*\cdot\chi_{(0,1)} \preceq \|c\|_{\ell_{\psi}}{\mathfrak{m}_\psi}. \end{equation*} \notag $$
Setting $c=\displaystyle\sum_{k=1}^n e_k$ in the last two relations, $n\in\mathbb{N}$, we obtain, as before,
$$ \begin{equation*} \|\sigma_n f^*\cdot\chi_{(0,1)}\|_X\asymp \frac{1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}, \end{equation*} \notag $$
and
$$ \begin{equation*} \sigma_n f^*\cdot\chi_{(0,1)}\preceq \frac{1}{\psi^{-1}(1/n)}{\mathfrak{m}_\psi},\qquad n\in\mathbb{N}. \end{equation*} \notag $$
Therefore, if
$$ \begin{equation*} x_n:=\psi^{-1}\biggl(\frac{1}{n}\biggr)\sigma_nf^*\cdot\chi_{(0,1)},\qquad n\in\mathbb{N}, \end{equation*} \notag $$
then
$$ \begin{equation} x_n^*\preceq{\mathfrak{m}_\psi}\quad\text{and}\quad \|x_n\|_X\asymp 1,\quad n\in\mathbb{N}. \end{equation} \tag{5.35} $$

Suppose there exists a subsequence $\{x_{n_k}\}\subset \{x_n\}$ such that $\|x_{n_k}\|_{1}\to 0$. Then as ${\mathfrak{m}_\psi}\in X_0$ by assumption, we have $\|x_{n_k}\|_{X}\to 0$ as $k\to \infty$ by the first inequality in (5.35) and Lemma 5.13, in contradiction to the second relation in (5.35). Thus, $\|x_n\|_1\asymp 1$, and therefore

$$ \begin{equation*} \|\sigma_n f^*\cdot\chi_{(0,1)}\|_1\asymp \frac{1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}, \end{equation*} \notag $$
or
$$ \begin{equation*} n\int_0^{1/n}f^*(s)\,ds\asymp \frac{1}{\psi^{-1}(1/n)}\,,\qquad n\in\mathbb{N}. \end{equation*} \notag $$
Now, if $t\in (0,1]$, then $t\in(1/(n+1),1/n]$ for some $n\in\mathbb{N}$. Because
$$ \begin{equation*} \frac{1}{t}\int_0^t f^*(s)\,ds\geqslant n\int_0^{1/(n+1)}f^*(s)\,ds \succeq\frac{1}{\psi^{-1}(1/(n+1))}\asymp \frac{1}{\psi^{-1}(t)} \end{equation*} \notag $$
and, conversely,
$$ \begin{equation*} \frac{1}{t}\int_0^t f^*(s)\,ds\leqslant (n+1)\int_0^{1/n} f^*(s)\,ds \preceq\frac{1}{\psi^{-1}(1/n)}\asymp \frac{1}{\psi^{-1}(t)}\,, \end{equation*} \notag $$
for some $C>1$ we have
$$ \begin{equation} C^{-1}\frac{t}{\psi^{-1}(t)}\leqslant \int_0^t f^*(s)\,ds\leqslant C\frac{t}{\psi^{-1}(t)}\,,\qquad t\in(0,1]. \end{equation} \tag{5.36} $$

Now fix $R$ such that ${R}^{\varepsilon}>(2C^2)^{1+\varepsilon}$. Since $\psi$ is a $(1+\varepsilon)$-convex function on $[0,1]$ and $2C^2R^{-1}<1$, it follows from Lemma 3.1 (i) that

$$ \begin{equation*} R\cdot \psi\biggl(\frac{2C^2}{R}u\biggr)\leqslant R\cdot\biggl(\frac{2C^2}{R}\biggr)^{1+\varepsilon}\psi(u)\leqslant \psi(u),\qquad u\in(0,1], \end{equation*} \notag $$
or, turning to inverse functions,
$$ \begin{equation*} \psi^{-1}\biggl(\frac{t}{R}\biggr)\geqslant \frac{2C^2}{R}{\psi}^{-1}(t),\qquad t\in(0,1]. \end{equation*} \notag $$
Bearing in mind that $2C^2\leqslant {R}$, from (5.36) and the last inequality we obtain
$$ \begin{equation*} \begin{aligned} \, C^{-1}\frac{t}{{\psi}^{-1}(t)} &\leqslant \int_0^t f^*(s)\,ds \leqslant t\biggl(1-\frac{1}{R}\biggr)f^*\biggl(\frac{t}{R}\biggr)+ \int_0^{t/R} f^*(s)\,ds \\ &\leqslant t f^*\biggl(\frac{t}{R}\biggr)+C\frac{t}{R\psi^{-1}(t/R)} \leqslant t f^*\biggl(\frac{t}{R}\biggr)+ (2C)^{-1}\frac{t}{\psi^{-1}(t)}\,, \end{aligned} \end{equation*} \notag $$
and therefore
$$ \begin{equation*} t f^*\biggl(\frac{t}{R}\biggr)\geqslant (2C)^{-1}\frac{t}{\psi^{-1}(t)}\,,\qquad t\in(0,1]. \end{equation*} \notag $$
Thus, as $\psi^{-1}$ is concave, we have
$$ \begin{equation*} f^*(t)\geqslant (2C{R})^{-1}\frac{1}{\psi^{-1}(t)}\,,\qquad t\in(0,R^{-1}]. \end{equation*} \notag $$
Since in the opposite direction, in view of (5.36) we have
$$ \begin{equation*} f^*(t)\leqslant\frac{1}{t}\int_0^tf^*(s)\,ds\leqslant \frac{C}{\psi^{-1}(t)}=C{\mathfrak{m}_\psi}(t),\qquad t\in(0,1], \end{equation*} \notag $$
as a result, we obtain
$$ \begin{equation*} f^*(t)\asymp {\mathfrak{m}_\psi}(t),\qquad t\in(0,R^{-1}], \end{equation*} \notag $$
which yields the required quasi-equivalence of the distributions of $f$ and $\mathfrak{m}_\psi$. $\Box$

Corollary 5.2 ([32], Theorem III.2). Let $1<q<2$, and let $X$ be an r. i. space such that a $q$-stable random variable $\xi^{(q)}$ belongs to $X_0$. If a sequence of independent copies of a function $f\in X$, $\displaystyle\int_0^1 f(t)\,dt=0$, is equivalent in $X$ to the canonical $\ell_q$-basis, then for some $C>0$ and all $\tau\geqslant 1$

$$ \begin{equation*} C^{-1}\tau^{-q}\leqslant n_f(\tau)\leqslant C\tau^{-q}. \end{equation*} \notag $$
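In the framework of Theorem 5.5 this corollary corresponds to the power case $\psi(t)=t^q$ (an interpretation offered for orientation): then $\mathfrak{m}_\psi(t)=t^{-1/q}$, so that $n_{\mathfrak{m}_\psi}(\tau)=\tau^{-q}$ for $\tau\geqslant 1$, while the distribution of a $q$-stable random variable satisfies $n_{\xi^{(q)}}(\tau)\asymp \tau^{-q}$ as $\tau\to\infty$. Thus the quasi-equivalence of the distributions of $f$ and $\mathfrak{m}_\psi$ yields precisely the stated two-sided power bound for $n_f(\tau)$.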

6. Complementability of subspaces spanned by independent functions

Recall that a (closed linear) subspace $E$ of a Banach space $X$ is said to be complemented if there exists a bounded linear projection $P\colon X\to X$ such that $P(X)=E$.

Complementability of subspaces is one of the central topics in the investigation of the geometric properties of Banach spaces. If we limit ourselves to separable Banach lattices of measurable functions $X$, then one reason for this is the fact that each symmetric basic sequence that spans a complemented subspace of $X$ is equivalent in $X$ either to the canonical $\ell_2$-basis or to a sequence of pairwise disjoint functions (for instance, see [57], Lemma 8.10).

Questions relating to complementability were the focus of the classical papers [23] by Banach and Masur, where it was shown that there exist non-complemented subspaces, and [46] by Fichtenholz and Kantorovich, where the authors showed that $C[0,1]$ is not a complemented subspace of $L_\infty[0,1]$. We also mention the papers [87], [74], and [76] by Pełczyński and Lindenstrauss, containing deep results on complementability in classical Banach spaces, [77] by Lindenstrauss and Tzafriri, where they showed that each infinite-dimensional Banach space not isomorphic to a Hilbert space contains a non-complemented subspace, and the survey by Kadec and Mityagin [60], containing, in particular, a detailed proof of results from [77] (also see the references there). Some aspects of the complementability property of subspaces and its applications were considered in [57], [78], [79], [69], [41], [66], [32], [103], [85], [1], and [5] (the list is not intended to be exhaustive).

6.1. Dor–Starbird theorem

In particular, considerable attention has been paid to the complementability of subspaces spanned by independent functions. As shown in [95] (and, independently, in [79], Theorem 2.b.4, (ii)) the closed linear span $[r_k]$ of the Rademacher functions $r_k(t)=\operatorname{sign}\sin 2^k\pi t$, $k=1,2,\dots$, is complemented in an r. i. space $X$ on $[0,1]$ if and only if $G\subset X\subset G'$, where $G:=(\exp L_2)_0$, that is, $G$ is the closure of $L_\infty$ in the exponential Orlicz space $\exp L_2$ (see § 2.3), and $G'$ is its Köthe dual (which coincides with the Orlicz space $L\log^{1/2}L:=L_{N_2'}$, $N_2'(u)\asymp u\log^{1/2}(e/u)$ at infinity). In particular, for $L_p[0,1]$ the last condition holds if and only if $1< p<\infty$. This result was soon generalized by Braverman to sequences of independent functions that are identically distributed or uniformly bounded (see [32] and [29]). Even before that, in 1979 Dor and Starbird [44] proved the following deep result on the complementability in $L_p$, $1\leqslant p<\infty$, of the closed linear span $[f_k]$ of independent functions, provided that $[f_k]\approx \ell_p$.

Theorem 6.1 (Dor–Starbird). Let $1 \leqslant p <\infty$, and let $\{f_k\}_{k=1}^\infty$ be a sequence of independent functions in the space $L_p=L_p[0,1]$. If $[f_k]\approx \ell_p$, then the subspace $[f_k]$ is complemented in $L_p$.

If $p\ne 2$, then the Dor–Starbird theorem is sharp in the following sense: none of its assumptions can be dropped. Indeed, first of all, $L_p$ contains non-complemented subspaces isomorphic to $\ell_p$. For $p > 2$ this was proved by Rosenthal [96] in 1970, and for $1 < p < 2$ by Bennett, Dor, Goodman, Johnson, and Newman [24] in 1977. Finally, in 1981 Bourgain solved a long-standing problem by producing a non-complemented subspace of $L_1$ that is isomorphic to $\ell_1$ [27].

Second, for each $p\ne 2$ the space $L_p$ contains non-complemented subspaces spanned by sequences of identically and symmetrically distributed independent functions. If $p > 2$, then such subspaces $X_{w,p}$ were constructed by Rosenthal in the same paper [96]. For $1\leqslant p < 2$ a similar assertion follows from Kadec’s result that a sequence of independent $q$-stable random variables $\{\xi_k^{(q)}\}$ is equivalent in $L_p$ to the canonical $\ell_q$-basis in the case when $1\leqslant p < q < 2$ (see [59] and also § 1). In fact, suppose that the subspace $E:=[\xi_k^{(q)}]$ is complemented in $L_p$. Then the dual space $E^*$ embeds isomorphically in $L_p^*=L_{p'}$, where $1/p+1/p'=1$. Furthermore, $E^*\approx \ell_{q'}$, where $1/q+1/q'=1$. However, since $p'>2$, $q'\ne p'$, and $q'\ne 2$, this is in contradiction with the Kadec–Pełczyński alternative, which states that each infinite-dimensional subspace of $L_r$, $r>2$, is either isomorphic to $\ell_2$ or contains a subspace isomorphic to $\ell_r$ ([61], Corollary 2).

The proof of Theorem 6.1 presented in [44] has an intricate structure; it is based on a number of results, one of the most important of which is the following analytic characterization of the situation when a sequence $\{f_n\}$ of independent functions in $L_p$ is equivalent to the canonical $\ell_p$-basis.

Proposition 6.1 ([44], Proposition 3.5). Let $\{f_n\}_{n=1}^\infty$ be a sequence of independent functions in $L_p=L_p[0,1]$, $1\leqslant p<\infty$, $p\ne 2$, none of which is almost everywhere equal to a constant. Also assume that all of the conditions below related to the given value of $p$ are satisfied:

(i) if $1\leqslant p<2$, then there exist $\delta>0$ and subsets $E_n$, $n=1,2,\dots$, of the interval $[0,1]$ such that

$$ \begin{equation*} \sum_{n=1}^\infty m(E_n)<\infty \quad\textit{and}\quad \int_{E_n}|f_n(t)|^p\,dt\geqslant\delta^p, \qquad n=1,2,\dots; \end{equation*} \notag $$

(ii) if $2< p<\infty$, then

$$ \begin{equation*} \sum_{n=1}^\infty\|f_n\|_2^{2p/(p-2)}<\infty; \end{equation*} \notag $$

(iii) if $1< p<\infty$, $1/p+1/p'=1$, then

$$ \begin{equation*} \sum_{n=1}^\infty\biggl|\int_0^1 f_n(t)\,dt\biggr|^{p'}<\infty. \end{equation*} \notag $$

Then the sequence $\{f_n\}$ is equivalent in $L_p$ to the canonical $\ell_p$-basis.

Conversely, if $1\leqslant p<\infty$, $p\ne 2$, and $\{f_n\}$ is a sequence in $L_p$ that is equivalent there to the canonical $\ell_p$-basis, then assertions (i), (ii), and (iii) hold. Moreover, (i) holds for all $1\leqslant p<\infty$, $p\ne 2$, and, as $E_n$ in (i), we can take sets of the form $\{t\colon |f_n(t)|\geqslant \beta_n\}$ for suitable positive constants $\beta_n$.

In this section we show that the Dor–Starbird theorem is in fact a consequence of some comparison principle for the complementability of subspaces spanned by sequences of mean zero independent functions in an r. i. space $X$ on $[0,1]$ and sequences of their pairwise disjoint copies in the space $Z_X^2$ on $(0,\infty)$. Being quite general, this principle enables one to derive results of Dor–Starbird type for a certain class of r. i. spaces, as well as a number of corollaries on the complementability of subspaces spanned by independent functions in the spaces $L_p$.

6.2. A comparison principle for the complementability of subspaces spanned by sequences of independent functions and their disjoint copies

Recall that the class $\mathbb{K}$ of r. i. spaces with the Kruglov property was defined in § 3.2.

Theorem 6.2 ([2]). Let $X$ be an r. i. space on $[0,1]$, $\{f_k\}_{k=1}^\infty\subset X$ be an arbitrary sequence of independent functions, and $\{\bar{f}_k\}_{k=1}^\infty$ be a sequence of pairwise disjoint copies of these functions defined on the half-line $[0,\infty)$.

If $X\in\mathbb{K}$ and the subspace $[f_k]$ is complemented in $X$, then $[\bar{f}_k]$ is complemented in $Z_X^2$. In the case when, additionally, $X'\in\mathbb{K}$, the converse also holds: if the subspace $[\bar{f}_k]$ is complemented in $Z_X^2$, then so is the subspace $[f_k]$ in $X$.

Theorem 6.2 follows from Propositions 6.2 and 6.3; to prove these we need some auxiliary results.

The proofs of the first two of them are standard, and we omit them.

Lemma 6.1 (for instance, see [44], Fact 2.2). Let $X$ be a Banach space and $Y$ and $E$ be subspaces of $X$, and let $E$ be finite-dimensional. Then $Y$ is complemented in $X$ if and only if $Y+E$ is.

Lemma 6.2 ([9], Lemma 1). If $X$ is a separable (or maximal) r. i. space on $[0,1]$, then $Z_X^2$ is a separable (maximal, respectively) r. i. space on $[0,\infty)$. Moreover, $(Z_X^2)'=Z_{X'}^2$.

Recall that a Banach lattice $X$ admits a lower $p$-estimate ($1\leqslant p < \infty$) if there exists a positive constant $C_X $ such that for any $n\in\mathbb{N}$, for arbitrary pairwise disjoint elements $x_1,\dots,x_n$ of $X$,

$$ \begin{equation*} \biggl(\,\sum_{k=1}^n \|x_k\|_X^p\biggr)^{1/p}\leqslant C_X\biggl\|\,\sum_{k=1}^n x_k\biggr\|_X. \end{equation*} \notag $$
In particular, each $p$-concave Banach lattice admits a lower $p$-estimate (see § 3.3). It is also obvious that $L_p$ admits a lower $p$-estimate but no lower $q$-estimate for any $q< p$.
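The last remark about $L_p$ can be checked numerically (a sketch of mine, not from the text): for pairwise disjoint functions the lower $p$-estimate in $L_p$ is in fact an equality with $C_X=1$, since $\int|\sum_k x_k|^p\,dm=\sum_k\int|x_k|^p\,dm$ when the supports are disjoint. The specific functions and grid below are arbitrary choices.

```python
# Hedged numerical sketch: for pairwise disjoint x_1, x_2, x_3 in L_p[0,1],
# ||x_1 + x_2 + x_3||_p^p = ||x_1||_p^p + ||x_2||_p^p + ||x_3||_p^p,
# i.e. L_p admits a lower p-estimate with constant C_X = 1.
import numpy as np

p = 3.0
grid = np.linspace(0.0, 1.0, 1_000_001)
dx = grid[1] - grid[0]

def lp_norm(x):
    # Riemann-sum approximation of the L_p[0,1] norm
    return (np.sum(np.abs(x) ** p) * dx) ** (1.0 / p)

# three functions supported on the disjoint intervals [0,1/3), [1/3,2/3), [2/3,1)
xs = [np.where((grid >= a) & (grid < b), np.sin(7 * grid) + 2.0, 0.0)
      for a, b in [(0.0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.0)]]

lhs = sum(lp_norm(x) ** p for x in xs) ** (1.0 / p)
rhs = lp_norm(sum(xs))
print(lhs, rhs)   # equal up to floating-point error
```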

Lemma 6.3 ([2]). Assume that an r. i. space $X$ on $[0,1]$ admits a lower $p$-estimate, where $p>2$. Then there exists a positive constant $B_X $ such that if $\{x_k\}\subset Z_X^2$ and $\operatorname{supp}x_k\subset [k-1,k]$ for all $k=1,2,\dots$, then

$$ \begin{equation*} \biggl(\,\sum_{k=1}^\infty\|x_k^*\|_X^p\biggr)^{1/p}\leqslant B_X\biggl\|\,\sum_{k=1}^\infty x_k\biggr\|_{Z_X^2}. \end{equation*} \notag $$

Proof. First we show that the space $L_{p,1}:=L_{p,1}[0,1]$ is continuously embedded in each r. i. space admitting a lower $p$-estimate, that is,
$$ \begin{equation} L_{p,1}\subset X\quad\text{and}\quad \|x\|_X\preceq \|x\|_{L_{p,1}}\quad\text{for all}\ \ x\in L_{p,1}. \end{equation} \tag{6.1} $$

For each $t\in (0,1]$ we can find $n\in{\mathbb N}$ such that $1/2<nt\leqslant 1$. Since $\chi_{(0,nt)}=\displaystyle\sum_{k=1}^n\chi_{(t(k-1),tk)}$ and $X$ admits a lower $p$-estimate, for the fundamental function $\phi_X$ of the space $X$ we have

$$ \begin{equation*} \phi_X(nt)=\|\chi_{(0,nt)}\|_X\geqslant C_X^{-1}\biggl(\,\sum_{k=1}^n\|\chi_{(t(k-1),tk)}\|_X^p\biggr)^{1/p}= C_X^{-1}\phi_X(t) n^{1/p}. \end{equation*} \notag $$
Hence from the choice of $n$ and the relation $\phi_X(1)=1$ it follows that
$$ \begin{equation*} \phi_X(t)\leqslant C_X\phi_X(nt)n^{-1/p}\leqslant C_X\,2^{1/p}t^{1/p}=C't^{1/p}, \end{equation*} \notag $$
which yields the embedding $\Lambda(t^{1/p})\subset \Lambda(\phi_X)$ for Lorentz spaces. Thus, as $L_{p,1}=\Lambda(t^{1/p})$ isometrically and $\Lambda(\phi_X)\subset X$ (see § 2.2), we obtain (6.1).

Now, by analogy with the definition of the spaces $Z_X^{p}$ in § 3.1 we introduce the r. i. space $Z_X^{p,1}$ of all measurable functions $f$ on $(0,\infty)$ such that

$$ \begin{equation*} \|f\|_{Z_X^{p,1}}=\|f^*\chi_{[0,1]}\|_X+ \|f^*\chi_{[1,\infty)}\|_{L_{p,1}[0,\infty)}<\infty. \end{equation*} \notag $$
Next we show that for some positive constant $C$, which depends only on $X$ and $p$, for an arbitrary sequence $\{x_k\}\subset Z_X^{p,1}$ such that $\operatorname{supp}x_k\subset [k-1,k]$, $k \in \mathbb{N}$, we have the inequality
$$ \begin{equation} \biggl(\,\sum_{k=1}^\infty\|x_k^*\|_X^p\biggr)^{1/p}\leqslant C\biggl\|\,\sum_{k=1}^\infty x_k\biggr\|_{Z_X^{p,1}}. \end{equation} \tag{6.2} $$

First of all, by the definition of the functional $f\mapsto\|f\|_{Z_X^{p,1}}$ we have

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty x_k\biggr\|_{Z_X^{p,1}}= \biggl\|\biggl(\,\sum_{k=1}^\infty u_k\biggr)^*\biggr\|_X+ \biggl\|\,\sum_{k=1}^\infty v_k\biggr\|_{L_{p,1}[0,\infty)}, \end{equation*} \notag $$
where $x_k=u_k+v_k$, the functions $u_k$ and $v_k$ are disjoint for each $k\in \mathbb{N}$, and
$$ \begin{equation*} m\biggl(\operatorname{supp}\sum_{k=1}^\infty u_k\biggr)\leqslant 1. \end{equation*} \notag $$
Furthermore, since the spaces $X$ and $L_{p,1}(0,\infty)$ admit lower $p$-estimates (for instance, see [39] or [63]) and $m(\operatorname{supp}v_k)\leqslant 1$, it follows that
$$ \begin{equation*} \biggl\|\biggl(\,\sum_{k=1}^\infty u_k\biggr)^*\biggr\|_X\geqslant C_X^{-1}\biggl(\,\sum_{k=1}^\infty \|u_k^*\|_X^p\biggr)^{1/p} \end{equation*} \notag $$
and
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty v_k\biggr\|_{L_{p,1}[0,\infty)}\geqslant C_{p}^{-1}\biggl(\,\sum_{k=1}^\infty \|v_k^*\|_{L_{p,1}[0,1]}^p\biggr)^{1/p}. \end{equation*} \notag $$
Thus, denoting the constant in the inequality in (6.1) by $A$ we obtain
$$ \begin{equation*} \begin{aligned} \, \biggl\|\,\sum_{k=1}^\infty x_k\biggr\|_{Z_X^{p,1}} &\geqslant \min\{C_X^{-1},C_{p}^{-1}\}\biggl(\,\sum_{k=1}^\infty \bigl(\|u_k^*\|_X+\|v_k^*\|_{L_{p,1}[0,1]}\bigr)^p\biggr)^{1/p} \\ &\geqslant A^{-1}\min\{C_X^{-1},C_{p}^{-1}\}\biggl(\,\sum_{k=1}^\infty \bigl(\|u_k^*\|_X+\|v_k^*\|_{X}\bigr)^p\biggr)^{1/p} \\ &\geqslant C^{-1}\biggl(\,\sum_{k=1}^\infty\|x_k^*\|_{X}^p\biggr)^{1/p}, \end{aligned} \end{equation*} \notag $$
and the proof of (6.2) is complete.

Finally, as $p>2$ and $\|\chi_{[0,1]}\|_X=1$, it follows that

$$ \begin{equation*} \begin{aligned} \, \|f^*\chi_{[1,\infty)}\|_{L_{p,1}[0,\infty)}&= \frac{1}{p}\int_0^\infty f^*(t+1)t^{1/p-1}\,dt \\ &\leqslant \frac{1}{p}\biggl(f^*(1)p+ \biggl(\int_1^\infty f^*(t)^2\,dt\biggr)^{1/2} \biggl(\int_1^\infty t^{2/p-2}\,dt\biggr)^{1/2}\biggr) \\ &\leqslant \|f^*\chi_{[0,1]}\|_X+ (p(p-2))^{-1/2}\|f^*\chi_{[1,\infty)}\|_{L_{2}[1,\infty)}, \end{aligned} \end{equation*} \notag $$
from which, comparing the norms in $Z_X^{p,1}$ and $Z_X^2$ we obtain the inequality $\|f\|_{Z_X^{p,1}}\preceq\|f\|_{Z_X^2}$ for $f\in Z_X^2$, where the constant depends only on $X$ and $p$. Since this and (6.2) yield the required inequality, the lemma is proved. $\Box$

The proof of the next result is quite analogous, so we leave it out.

Lemma 6.4. Assume that an r. i. space $X$ admits a lower $2$-estimate and $X\supset L_2[0,1]$. Then there exists a positive constant $B_X$ such that if $\{x_k\}\subset Z_X^2$ and $\operatorname{supp}x_k\subset [k-1,k]$, $k=1,2,\dots$, then

$$ \begin{equation*} \biggl(\,\sum_{k=1}^\infty\|x_k^*\|_X^2\biggr)^{1/2}\leqslant B_X \biggl\|\,\sum_{k=1}^\infty x_k\biggr\|_{Z_X^2}. \end{equation*} \notag $$

The last lemma shows that the complementability of a subspace in an r. i. space on the half-line which is spanned by disjoint functions with supports on the intervals $[k-1,k]$, $k=1,2,\dots$, has some stability properties.

Lemma 6.5. If $g_k$ and $h_k$ are equimeasurable functions in an r. i. space $Z$ on $[0,\infty)$, $k=1,2,\dots$, and $\operatorname{supp}g_k\cup\operatorname{supp}h_k\subset [k-1,k]$, then the subspaces $[g_k]$ and $[h_k]$ are simultaneously complemented or non-complemented in $Z$.

Proof. Assume, for example, that $[g_k]$ is complemented in $Z$, and let $Q$ be a bounded projection in $Z$ with image precisely equal to $[g_k]$. Then (see [69], Theorem II.2.1, or [25], Proposition 2.7.4) for each $\delta>0$ and each $k=1,2,\dots$ there exist a bijective (up to nullsets) measure-preserving map $w_k\colon [k-1,k)\to [k-1,k)$ and a measurable function $\varepsilon_k\colon[k-1,k)\to \{\pm 1\}$ such that
$$ \begin{equation} \|h_k-\varepsilon_k g_k(w_k)\|_{Z[k-1,k]}<\delta\,2^{-k},\qquad k=1,2,\dots\,. \end{equation} \tag{6.3} $$
It is easy to see that the map $w\colon [0,\infty)\to [0,\infty)$ defined by
$$ \begin{equation*} w(t):=\sum_{k=1}^\infty w_k(t)\chi_{[k-1,k)}(t), \end{equation*} \notag $$
is also measure preserving. Hence the operator
$$ \begin{equation*} T_{w,\varepsilon}x(t):=x(w(t))\cdot\varepsilon(t),\quad\text{where}\ \ \varepsilon(t):=\sum_{k=1}^\infty\varepsilon_k(t)\chi_{[k-1,k)}(t), \end{equation*} \notag $$
is an isometry of $Z$ and $P:=T_{w,\varepsilon}\cdot Q\cdot T_{w,\varepsilon}^{-1}$ is a bounded projection of $Z$ onto the subspace $[T_{w,\varepsilon}g_k]=[\varepsilon g_k(w)]$. Thus, this subspace is also complemented in $Z$.

Note that, as follows from (6.3), we have $\displaystyle\sum_{k=1}^\infty\|h_k-\varepsilon g_k(w)\|_Z\leqslant\delta$. Since the property of complementability of a subspace in a Banach space which is spanned by a sequence of vectors in this space is stable under small perturbations of these vectors (for instance, see [1], Theorem 1.3.9), we can choose a sufficiently small $\delta>0$ so that the last inequality ensures the complementability of $[h_k]$ in $Z$. $\Box$

Proposition 6.2 ([2]). Let $X\in\mathbb{K}$ be an r. i. space on $[0,1]$. Then if the subspace $[f_k]$ spanned by a sequence of independent functions $\{f_k\}_{k=1}^\infty\subset X$ is complemented in $X$, then the subspace $[\bar{f}_k]$ spanned by a sequence of pairwise disjoint copies of these functions $\{\bar{f}_k\}_{k=1}^\infty$ is complemented in $Z_X^2$.

Proof. First of all, by Lemma 6.1 we can (and will) assume that $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$ . Furthermore, it is easy to verify that the subspace
$$ \begin{equation*} \overline{N}(X):=\biggl\{y\in Z_X^2\colon\int_{k-1}^ky(s)\,ds=0\ \text{for all}\ k=1,2,\dots\biggr\} \end{equation*} \notag $$
is closed in $Z_X^2$. Finally, since it follows from [69] (§ II.3.2 and Theorem II.4.3) that the linear operator
$$ \begin{equation} Uy(t):=y(t)-\sum_{k=1}^\infty\int_{k-1}^k y(s)\,ds\cdot\chi_{[k-1,k)}(t) \end{equation} \tag{6.4} $$
is a bounded projection in $Z_X^2$ whose image coincides with $\overline{N}(X)$, this subspace is complemented in $Z_X^2$.

Let $\Sigma_k$ denote the $\sigma$-algebra of subsets of $[0,1]$ generated by the function $f_k$, and $\mathsf{E}_k$ denote the associated operator of conditional expectation, that is, $\mathsf{E}_k x:=\mathsf{E}(x\,|\,{\Sigma_k})$ (see § 2.1). If $y_k(s):=y(s+k-1)$, $0\leqslant s\leqslant 1$, where $y\in\overline{N}(X)$ is an arbitrary function, then, by the independence of the $\sigma$-algebras $\Sigma_k$, the functions $\mathsf{E}_ky_k$ are also independent and

$$ \begin{equation*} \int_0^1\mathsf{E}_ky_k(t)\,dt=\int_0^1 y_k(t)\,dt= \int_{k-1}^k y(s)\,ds=0. \end{equation*} \notag $$
Therefore, as $X\in\mathbb{K}$, for all $n\in{\mathbb N}$, by Theorem 3.4 we have
$$ \begin{equation} \biggl\|\,\sum_{k=1}^n\mathsf{E}_ky_k\biggr\|_X\leqslant \kappa^{}_X \biggl\|\,\sum_{k=1}^n(\mathsf{E}_ky_k) (t-k+1)\,\chi_{[k-1,k)}(t)\biggr\|_{Z_X^2}. \end{equation} \tag{6.5} $$
On the other hand, since the operators $\mathsf{E}_k$ are bounded in $L_p[0,1]$ for $1\leqslant p\leqslant\infty$ (for instance, see [45], Remark 1.14), the operator
$$ \begin{equation} Ty(t):=\sum_{k=1}^\infty(\mathsf{E}_ky_k)(t-k+1)\, \chi_{[k-1,k)}(t) \end{equation} \tag{6.6} $$
is bounded with norm 1 in both $L_1[0,\infty)$ and $L_\infty[0,\infty)$. By Lemma 6.2 the space $Z_{X}^2$ is either separable or maximal because so is $X$. Hence it is an interpolation space with constant 1 with respect to the couple $(L_1[0,\infty),L_\infty[0,\infty))$ ([69], Theorems II.4.9 and II.4.10). Therefore, the operator $T$ is bounded in $Z_{X}^2$ and also has norm 1 there (see § 2.4). Thus, for each $y\in Z_X^2$ we have
$$ \begin{equation} \biggl\|\,\sum_{k=1}^n(\mathsf{E}_ky_k)(\,\cdot\,-k+1)\, \chi_{[k-1,k)}\biggr\|_{Z_X^2}\leqslant \|y\|_{Z_X^2},\qquad n\in\mathbb{N}. \end{equation} \tag{6.7} $$

Now we verify that the operator

$$ \begin{equation} Vy(t):=\sum_{k=1}^\infty\mathsf{E}_ky_k(t),\qquad 0\leqslant t\leqslant 1, \end{equation} \tag{6.8} $$
is bounded from $\overline{N}(X)$ to $X$.

If $X$, and therefore also $Z_X^2$ (see Lemma 6.2), is separable, then by inequality (6.7) the series on the right-hand side of (6.6) converges in $Z_X^2$ for each function $y\in Z_X^2$. Thus, by (6.5) the series on the right-hand side of (6.8) converges in $X$ and

$$ \begin{equation} \|Vy\|_X\leqslant \kappa^{}_X\|Ty\|_{Z_X^2}\leqslant \kappa^{}_X\|y\|_{Z_X^2}. \end{equation} \tag{6.9} $$

Now let $X$ be a maximal space. Then by inequalities (6.5) and (6.7)

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^n\mathsf{E}_ky_k\biggr\|_X\leqslant \kappa^{}_X\|y\|_{Z_X^2},\qquad n\in\mathbb{N}, \end{equation*} \notag $$
so that in view of Corollary V.2.3 in [102] the series on the right-hand side of (6.8) converges almost everywhere on $[0,1]$. Since $X$ is maximal, it follows that $Vy\in X$ and (6.9) holds again.

Now, by assumption there exists a bounded projection $R\colon X\to [f_k]$. Consider the operator $S:=LRVU\colon Z_X^2\to [\bar{f}_k]$, where $L\colon [f_k]\to [\bar{f}_k]$ is the operator defined by

$$ \begin{equation*} L\biggl(\,\sum_{k=1}^\infty a_k{f}_k\biggr):= \sum_{k=1}^\infty a_k\bar{f}_k\qquad (a_k\in{\mathbb R}). \end{equation*} \notag $$
Since the $f_k$ are independent and $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$, by inequality (3.5) $L$ is bounded from the span $[f_k]$ to $[\bar{f}_k]$. Hence the operator $S$ is also bounded and its image coincides with the subspace $[\bar{f}_k]$. Furthermore, since $\mathsf{E}_k f_i=0$ for $k\ne i$ and $\mathsf{E}_i f_i=f_i$, for all $i\in \mathbb N$ we obtain
$$ \begin{equation*} S\bar{f}_i=LRV\bar{f}_i=LR f_i=Lf_i=\bar{f}_i. \end{equation*} \notag $$
As a result, $S$ is a bounded projection from $Z_X^2$ onto $[\bar{f}_k]$. $\Box$
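The key identities $\mathsf{E}_k f_i=0$ for $k\ne i$ and $\mathsf{E}_k f_k=f_k$, on which the projection property of $S$ rests, can be seen in the following toy finite model (my own construction, not part of the proof): on $\{-1,1\}^3$ with the uniform measure the coordinate functions are independent and mean zero, and conditioning on the $k$-th coordinate averages out the others.

```python
# Toy sketch (my construction): on the probability space {-1,1}^3 with the
# uniform measure, f_k(w) = w_k are independent mean-zero functions, and the
# conditional expectation E_k (averaging over atoms with the same k-th
# coordinate) satisfies E_k f_k = f_k and E_k f_i = 0 for i != k.
import itertools
import numpy as np

pts = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 equiprobable atoms
f = [pts[:, k].astype(float) for k in range(3)]              # f_k(w) = w_k

def E(k, x):
    # conditional expectation given the sigma-algebra generated by w_k
    out = np.empty_like(x)
    for s in (-1, 1):
        mask = pts[:, k] == s
        out[mask] = x[mask].mean()
    return out

print([np.allclose(E(k, f[k]), f[k]) for k in range(3)])     # E_k f_k = f_k
print([np.allclose(E(0, f[i]), 0.0) for i in (1, 2)])        # E_0 f_i = 0
```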

If, in addition, $X'\in\mathbb{K}$, then the converse result also holds.

Proposition 6.3 ([2]). Let $X$ be an r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in \mathbb{K}$. If $\{f_k\}_{k=1}^\infty\subset X$ is a sequence of independent functions such that $[\bar{f}_k]$ is complemented in $Z_X^2$, then the subspace $[f_k]$ is complemented in $X$.

Proof. As in the previous proof, we can assume that $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$ . Now, the subspace
$$ \begin{equation*} N(X):=\biggl\{x\in X\colon\int_0^1 x(s)\,ds=0\biggr\} \end{equation*} \notag $$
is complemented in $X$. In fact, if $Qx(t):=x(t)-\displaystyle\int_0^1 x(s)\,ds$, then by the inequality $\|x\|_{1}\leqslant \|x\|_X$, $x\in X$ (see [69], Theorem II.4.1, or § 2.2 here) we have $\|Qx\|_X\leqslant 2\|x\|_X$. Furthermore, $Qx=x$ for all $x\in N(X)$.
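Both properties of $Q$ admit a minimal numerical check (a discretized sketch of mine; the random test function is an arbitrary choice): subtracting the mean is idempotent, fixes mean-zero functions, and at most doubles the $L_1$ norm.

```python
# Minimal check (my own discretization, not from the text): Qx = x - mean(x)
# satisfies Q(Qx) = Qx, so Q is a projection onto mean-zero functions, and
# ||Qx||_1 <= ||x||_1 + |mean(x)| <= 2 ||x||_1.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)      # values of x on a uniform grid of [0,1]

def Q(x):
    return x - x.mean()              # subtract the integral over [0,1]

qx = Q(x)
print(np.allclose(Q(qx), qx))                       # idempotent
print(np.abs(qx).mean() <= 2 * np.abs(x).mean())    # norm bound with constant 2
```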

As before, let $\mathsf{E}_k$ be the operator of conditional expectation with respect to the $\sigma$-algebra of subsets of $[0,1]$ generated by the function $f_k$. On $N(X)$ we consider the operator

$$ \begin{equation*} Wx(t):=\sum_{k=1}^\infty(\mathsf{E}_k x)(t-k+1)\chi_{[k-1,k)}(t),\qquad t>0, \end{equation*} \notag $$
and show that it is bounded from $N(X)$ to $Z_X^2$. Since by Lemma 6.2 the space $Z_X^2$ is separable or maximal depending on $X$, it is isometrically embedded in its second Köthe dual $(Z_X^2)''$ (see § 2.2). Moreover, by the same lemma $(Z_{X}^2)'=Z_{X'}^2$, and therefore
$$ \begin{equation} \begin{aligned} \, \|Wx\|_{Z_X^2}&=\sup\biggl\{\int_0^\infty\sum_{k=1}^\infty(\mathsf{E}_kx) (s-k+1)\chi_{[k-1,k)}(s)y(s)\,ds\colon\|y\|_{Z_{X'}^2}\leqslant 1\biggr\} \nonumber \\ &=\sup\biggl\{\,\sum_{k=1}^\infty\int_0^1(\mathsf{E}_kx)(s) y_k(s)\,ds\colon\|y\|_{Z_{X'}^2}\leqslant 1\biggr\}, \end{aligned} \end{equation} \tag{6.10} $$
where $y_k(s):=y(s+k-1)$, $0\leqslant s\leqslant 1$. By the properties of conditional expectation (for instance, see [45], Chap. 1) and the relation $x\in N(X)$ we have the equalities
$$ \begin{equation*} \begin{aligned} \, \int_0^1(\mathsf{E}_kx)(s)y_k(s)\,ds &= \int_0^1\mathsf{E}_k(\mathsf{E}_kx\cdot y_k)(s)\,ds= \int_0^1(\mathsf{E}_kx)(s)\cdot(\mathsf{E}_ky_k)(s)\,ds \\ &=\int_0^1\mathsf{E}_k(x\cdot\mathsf{E}_ky_k)(s)\,ds= \int_0^1 x(s)\cdot\mathsf{E}_ky_k(s)\,ds \\ &=\int_0^1x(s)\cdot\biggl((\mathsf{E}_ky_k)(s)- \int_0^1 y_k(t)\,dt\biggr)\,ds,\qquad k \in \mathbb{N}, \end{aligned} \end{equation*} \notag $$
and therefore it follows from (6.10) that
$$ \begin{equation} \begin{aligned} \, \|Wx\|_{Z_X^2}&=\sup\biggl\{\,\sum_{k=1}^\infty\int_0^1x(s) \biggl((\mathsf{E}_ky_k)(s)-\int_0^1 y_k(t)\,dt\biggr)\,ds\colon \|y\|_{Z_{X'}^2}\leqslant 1\biggr\} \nonumber \\ &\leqslant \|x\|_X\sup\biggl\{\biggl\|\,\sum_{k=1}^\infty \biggl((\mathsf{E}_ky_k)(s)-\int_0^1 y_k(t)\,dt\biggr)\biggr\|_{X'}\colon \|y\|_{Z_{X'}^2}\leqslant 1\biggr\}. \end{aligned} \end{equation} \tag{6.11} $$
The functions $g_k(s):=(\mathsf{E}_ky_k)(s)-\displaystyle\int_0^1 y_k(t)\,dt$ are independent and $\displaystyle\int_0^1g_k(s)\,ds=0$, $k=1,2,\dots$ . Hence, as $X'\in\mathbb{K}$, it follows from Theorem 3.4 that for each $n\in \mathbb{N}$
$$ \begin{equation} \begin{aligned} \, \biggl\|\,\sum_{k=1}^n g_k\biggr\|_{X'} &\leqslant \kappa^{}_{X'}\biggl\|\,\sum_{k=1}^n\biggl((\mathsf{E}_ky_k)(s-k+1)- \int_0^1y_k(t)\,dt\biggr)\chi_{[k-1,k)}(s)\biggr\|_{Z_{X'}^2} \nonumber \\ &\leqslant \kappa^{}_{X'}\|Ty\|_{Z_{X'}^2}+\kappa^{}_{X'}\biggl\|\,\sum_{k=1}^n \int_{k-1}^k y(t)\,dt\cdot\chi_{[k-1,k)}\biggr\|_{Z_{X'}^2}, \end{aligned} \end{equation} \tag{6.12} $$
where $T$ is the operator defined by (6.6).

As in the proof of Proposition 6.2, we have

$$ \begin{equation*} \|Ty\|_{Z_{X'}^2}\leqslant \|y\|_{Z_{X'}^2},\qquad y\in {Z_{X'}^2}. \end{equation*} \notag $$
In addition, by [69], § II.3.2 and Theorem II.4.3,
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty\int_{k-1}^k y(t)\,dt\cdot \chi_{[k-1,k)}\biggr\|_{Z_{X'}^2}\leqslant\|y\|_{Z_{X'}^2}. \end{equation*} \notag $$
Therefore, since the space $X'$ is maximal, it follows from (6.12) that
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty\biggl((\mathsf{E}_ky_k)(s)- \int_0^1 y_k(t)\,dt\biggr)\biggr\|_{X'}\leqslant 2\kappa^{}_{X'}\|y\|_{Z_{X'}^2},\qquad y\in Z_{X'}^2, \end{equation*} \notag $$
and so $W$ is bounded from $N(X)$ to $Z_X^2$ by (6.11).

By assumption there exists a bounded projection $N\colon Z_X^2\to [\bar{f}_k]$. Consider the operator $S:=MNWQ\colon X\to [f_k]$, where $M\colon[\bar{f}_k]\to [f_k]$ is the operator defined by

$$ \begin{equation*} M\biggl(\,\sum_{k=1}^\infty a_k\bar{f}_k\biggr)= \sum_{k=1}^\infty a_k{f}_k\qquad (a_k\in{\mathbb R}). \end{equation*} \notag $$
Since the $f_k$ are independent, $\displaystyle\int_0^1 f_k(t)\,dt=0$ for $k=1,2,\dots$, and $X\in\mathbb{K}$, using Theorem 3.4 again, we conclude that $M$ is bounded from the span $[\bar{f}_k]$ to $[f_k]$. Thus, $S$ is bounded and its image coincides with the span $[f_k]$. In addition, bearing in mind that $\mathsf{E}_k f_i=0$ for $k\ne i$ and $\mathsf{E}_i f_i=f_i$, for all $i\in{\mathbb N}$ we obtain
$$ \begin{equation*} Sf_i=MNWf_i=MN\bar{f}_i=M\bar{f}_i=f_i. \end{equation*} \notag $$
Thus, $S$ is a bounded projection of $X$ onto $[f_k]$. $\Box$

Remark 6.1. Neither of the assumptions $X\in\mathbb{K}$ and $X'\in\mathbb{K}$ can in general be omitted in Proposition 6.3. In fact, the Rademacher system $\{r_k\}$ spans a complemented subspace of an r. i. space $X$ if and only if $G\subset X\subset G'$, where, as before, $G$ is the closure of $L_\infty$ in the exponential Orlicz space $\exp L_2$ (see [95] or [79], Theorem 2.b.4 (ii)). On the other hand, a sequence of disjoint copies of the functions $r_k$, $k=1,2,\dots$ (just as the sequence $\{\chi_{[k-1,k]}\}_{k=1}^\infty$; see Lemma 6.5), spans a complemented subspace of $Z_X^2$ for each r. i. space $X$.

Corollary 6.1. Let $X$ be an r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in\mathbb{K}$. Then if $\{f_k\}$ and $\{g_k\}$ are two sequences of independent functions in $X$ such that $f_k$ and $g_k$ are equimeasurable for each $k \in \mathbb{N}$, then the subspaces $[f_k]$ and $[g_k]$ are simultaneously complemented or non-complemented in $X$.

Proof. If $\{\bar{f}_k\}$ and $\{\bar{g}_k\}$ are sequences of disjoint copies of the sequences $\{f_k\}$ and $\{g_k\}$, respectively, then the function $\bar{f}_k$ is equimeasurable with $\bar{g}_k$ for each $k \in \mathbb{N}$. Hence by Lemma 6.5 the subspaces $[\bar{f}_k]$ and $[\bar{g}_k]$ are simultaneously complemented or not in $Z_X^2$. Using Theorem 6.2 we obtain the required result. $\Box$

Corollary 6.2. Let $X$ be an r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in\mathbb{K}$, and let $\{f_k\}\subset X$ be a sequence of independent functions such that $\displaystyle\sum_{k=1}^\infty m(\operatorname{supp}f_k)\leqslant 1$. If $\operatorname{supp}\bar{f}_k\subset [0,1]$ for all $k \in \mathbb{N}$, then the subspaces $[f_k]$ and $[\bar{f}_k]$ are simultaneously complemented or non-complemented in $X$.

Proof. First of all, by the definition of $Z_X^2$ and the assumptions of the corollary we can indeed assume that $\operatorname{supp}\bar{f}_k\subset [0,1]$ for $k \in \mathbb{N}$. Furthermore, $X$ can be regarded as a complemented subspace of $Z_X^2$. Thus, the span $[\bar{f}_k]$ is complemented in $Z_X^2$ if and only if it is so in $X$. It remains to apply Theorem 6.2. $\Box$

6.3. Dor–Starbird theorem for some class of symmetric spaces

We say that a sequence $\{f_k\}$ in an r. i. space $X$ on $[0,1]$ is almost disjoint if there exist $\delta>0$ and a sequence of sets $A_k \subset [0,1]$ such that

$$ \begin{equation*} \sum_{k=1 }^\infty m(A_k)<\infty\quad\text{and}\quad \|f_k\chi_{A_k}\|_X \geqslant\delta,\quad k=1,2,\dots\,. \end{equation*} \notag $$

The following version of the Dor–Starbird theorem holds in the class of r. i. spaces admitting a lower $p$-estimate.

Theorem 6.3 ([2]). Let $1\leqslant p<\infty$. Assume that an r. i. space $X$ admits a lower $p$-estimate, let $X'\in\mathbb{K}$, and let $\{f_k\}\subset X$ be a sequence of independent functions that is equivalent in $X$ to the canonical basis of ${\ell_p}$. Assume that at least one of the following conditions is satisfied:

(a) $p>2$;

(b) $p=2$ and $X\supset L_2$;

(c) the sequence $\{f_k\}$ is almost disjoint.

Then the subspace $[f_k]$ is complemented in $X$.

Proof. Since $X$ admits a lower $p$-estimate, it follows that $X=L_1[0,1]$ for $p=1$ and $X\supset L_{p,1}[0,1]$ for $1< p<\infty$ (see the beginning of the proof of Lemma 6.3). Therefore, $X\in\mathbb{K}$ (see § 3.2), and thus $X$ satisfies all the assumptions of Theorem 6.2.

Below we assume without loss of generality that $\|f_k\|_X=1$, $k=1,2,\dots$ . First we let $\displaystyle\int_0^1f_k(s)\,ds=0$, $k=1,2,\dots$, and consider three cases depending on the condition among (a)–(c) that is actually satisfied.

(a) $p>2$. Let $\bar{f}_k$, $\operatorname{supp}\bar{f}_k\subset [k-1,k]$, be disjoint copies of the functions $f_k$, $k=1,2,\dots$ . Since $\|\bar{f}_k\|_{Z_{X}^2}=\|f_k\|_X=1$, there are functions $g_k\in Z_{X'}^2=(Z_{X}^2)'$ such that

$$ \begin{equation*} \|g_k\|_{Z_{X'}^2}=1,\quad \operatorname{supp}g_k\subset [k-1,k],\quad \text{and}\quad\int_0^\infty \bar{f}_k(s)g_k(s)\,ds=1,\quad k=1,2,\dots\,. \end{equation*} \notag $$
On the space $Z_X^2$ we consider the operator
$$ \begin{equation*} Px(t):=\sum_{k=1}^\infty\int_0^\infty x(s)g_k(s)\,ds\cdot\bar{f}_k(t),\qquad t>0. \end{equation*} \notag $$
By inequality (3.5), Theorem 3.2, the fact that $\{f_k\}$ is equivalent in $X$ to the canonical basis of $\ell_p$, and Lemma 6.3, we obtain
$$ \begin{equation*} \begin{aligned} \, \|Px\|_{Z_X^2} &\preceq \biggl\|\,\sum_{k=1}^\infty \int_0^\infty x(s) g_k(s)\,ds\cdot f_k\biggr\|_X\asymp \biggl(\,\sum_{k=1}^\infty \biggl|\int_0^\infty x(s)g_k(s)\,ds\biggr|^p\biggr)^{1/p} \\ &\leqslant \biggl(\,\sum_{k=1}^\infty \|g_k\|_{Z_{X'}^2}^p \|x\chi_{[k-1,k]}\|_{Z_{X}^2}^p\biggr)^{1/p}= \biggl(\,\sum_{k=1}^\infty\|(x\chi_{[k-1,k]})^*\|_{X}^p\biggr)^{1/p} \\ &\leqslant B_X\|x\|_{Z_{X}^2}. \end{aligned} \end{equation*} \notag $$
Thus, as $P\bar{f}_k=\bar{f}_k$, $k=1,2,\dots$, the operator $P$ is a bounded projection of $Z_X^2$ onto $[\bar{f}_k]$, and so the required result is a direct consequence of Theorem 6.2.

(b) $p=2$. The proof is quite similar to case (a), but in place of Lemma 6.3 we must use Lemma 6.4.

(c) As in case (a), first we prove that $[\bar{f}_k]$ is complemented in $Z_X^2$, and then we use Theorem 6.2.

By assumption there exist $\delta>0$ and a sequence of sets $A_k \subset [0,1]$ such that $\displaystyle\sum_{k=1}^\infty m(A_k)<\infty$ and $\|f_k\chi_{A_k}\|_X \geqslant\delta$, $k=1,2,\dots$ . By the definition of $X'$ there exist functions $h_k\in X'$ such that $\|h_k\|_{X'}\leqslant\delta^{-1}$, $\operatorname{supp}h_k\subset A_k$, and

$$ \begin{equation*} \int_0^1f_k(s)h_k(s)\,ds=\int_{A_k}f_k(s)h_k(s)\,ds=1,\qquad k=1,2,\dots\,. \end{equation*} \notag $$
Now, if $\bar{f}_k(t)=f_k(t-k+1)\cdot\chi_{[k-1,k]}(t)$, $\bar{h}_k(t)=h_k(t-k+1)\cdot\chi_{[k-1,k]}(t)$, then
$$ \begin{equation*} \int_0^\infty \bar{h}_k(t)\bar{f}_k(t)\,dt=1,\qquad k=1,2,\dots, \end{equation*} \notag $$
and
$$ \begin{equation*} \int_0^\infty \bar{h}_k(t)\bar{f}_i(t)\,dt=0,\qquad k\ne i \end{equation*} \notag $$
(that is, the sequences $\{\bar{f}_k\}$ and $\{\bar{h}_k\}$ are biorthogonal) and also
$$ \begin{equation} \|\bar{h}_k\|_{Z_{X'}^2}=\|h_k\|_{X'}\leqslant\delta^{-1},\qquad k=1,2,\dots\,. \end{equation} \tag{6.13} $$

Let $s_0:=\displaystyle\sum_{k=1}^\infty m(A_k)$, and assume that $s_0>1$ (if $s_0\leqslant 1$, then the argument is only simpler). Since $m(\operatorname{supp}\sigma_{s_0^{-1}}\bar{h}_k)\leqslant s_0^{-1} m(A_k)$, $k=1,2,\dots$, it follows that

$$ \begin{equation*} \sum_{k=1}^\infty m(\operatorname{supp}\sigma_{s_0^{-1}}\bar{h}_k)\leqslant s_0^{-1}\sum_{k=1}^\infty m(A_k)\leqslant 1. \end{equation*} \notag $$
Hence, as $\|\sigma_\tau\|_{Y\to Y}\leqslant\max\{1,\tau\}$ ($\tau>0$) for each r. i. space $Y$ ([69], Corollary II.4.1), by the definition of the norm of $Z_{X'}^2$ we have
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty a_k\bar{h}_k\biggr\|_{Z_{X'}^2}= \biggl\|\sigma_{s_0}\biggl(\,\sum_{k=1}^\infty a_k\sigma_{s_0^{-1}} \bar{h}_k\biggr)\biggr\|_{Z_{X'}^2}\leqslant s_0 \biggl\|\biggl(\,\sum_{k=1}^\infty a_k\sigma_{s_0^{-1}} \bar{h}_k\biggr)^*\biggr\|_{X'}. \end{equation*} \notag $$
Note that the space $X'$ admits an upper $p'$-estimate, where $1/p+1/p'=1$, that is, for some positive constant $C_{X'}$ and any disjoint elements $y_1,\dots,y_n$ of $X'$, where $n\in\mathbb{N}$ is arbitrary,
$$ \begin{equation*} \biggl\|\,\sum_{k=1}^n y_k\biggr\|_{X'}\leqslant C_{X'}\biggl(\,\sum_{k=1}^n\|y_k\|_{X'}^{p'}\biggr)^{1/p'} \end{equation*} \notag $$
([78], Proposition 1.f.5). Hence, as the functions $\sigma_{s_0^{-1}}\bar{h}_k$, $k=1,2,\dots$, are pairwise disjoint and we have
$$ \begin{equation*} \|(\sigma_{s_0^{-1}}\bar{h}_k)^*\|_{X'}\leqslant \|\bar{h}_k^*\|_{X'}=\|{h}_k\|_{X'}\leqslant\delta^{-1} \end{equation*} \notag $$
(see (6.13)), it follows from the above relation that
$$ \begin{equation} \begin{aligned} \, \biggl\|\,\sum_{k=1}^\infty a_k\bar{h}_k\biggr\|_{Z_{X'}^2} &\leqslant C_{X'}s_0\biggl(\,\sum_{k=1}^\infty |a_k|^{p'}\|(\sigma_{s_0^{-1}} \bar{h}_k)^*\|_{X'}^{p'}\biggr)^{1/{p'}} \nonumber \\ &\leqslant C_{X'}s_0\delta^{-1} \biggl(\,\sum_{k=1}^\infty |a_k|^{p'}\biggr)^{1/p'} \end{aligned} \end{equation} \tag{6.14} $$
(if $p=1$, then $\biggl(\,\displaystyle\sum_{k=1}^\infty |a_k|^{p'}\biggr)^{1/p'}$ must be replaced by $\sup_{k=1,2,\dots}|a_k|$).

On the other hand, since $X\in\mathbb{K}$ and $\displaystyle\int_0^1f_k(s)\,ds=0$, $k=1,2,\dots$, from Theorems 3.2 and 3.4 we obtain

$$ \begin{equation*} \biggl\|\,\sum_{k=1}^\infty a_kf_k\biggr\|_X\asymp \biggl\|\,\sum_{k=1}^\infty a_k\bar{f}_k\biggr\|_{Z_X^2}, \end{equation*} \notag $$
so that, by assumption, the sequence $\{\bar{f}_k\}$ is equivalent in $Z_X^2$ to the canonical $\ell_p$-basis. Thus, by inequality (6.14) and the well-known complementability criterion for subspaces spanned by sequences equivalent to the canonical $\ell_p$-basis (for instance, see [44], Fact 2.1), the subspace $[\bar{f}_k]$ is complemented in $Z_X^2$. As a result, $[{f}_k]$ is complemented in $X$ by Theorem 6.2.

In the general case consider the functions $u_k:=f_k-\displaystyle\int_0^1 f_k(s)\,ds$, $k=1,2,\dots$ . They are independent and $\displaystyle\int_0^1 u_k(s)\,ds=0$, $k=1,2,\dots$ . We show that the sequence $\{u_k\}$ satisfies the same conditions from the statement of the theorem as $\{f_k\}$ does.

To demonstrate that $\{u_k\}$ is equivalent to the canonical $\ell_p$-basis we use the well-known result due to Gohberg and Markus [51], which claims that it is sufficient to verify the condition $\displaystyle\sum_{k=1}^\infty\|f_k- u_k\|_X^{p'}< \infty$, where, as before, $1/p+1/p'=1$.

In fact, if $\{f_k\}$ is equivalent in $X$ with some constant $C$ to the canonical basis of $\ell_p$, then because $X\overset{1}{\subset} L_1[0,1]$ (see § 2.2), we obtain

$$ \begin{equation*} \begin{aligned} \, \sum_{k=1}^\infty\|f_k-u_k\|_X^{p'}&= \sum_{k=1}^\infty\biggl|\int_0^1 f_k(s)\,ds\biggr|^{p'} \\ &\leqslant \sup\biggl\{\biggl(\int_0^1 \biggl|\,\sum_{k=1}^\infty a_kf_k(s)\biggr|\,ds\biggr)^{p'}\colon \sum_{k=1}^\infty |a_k|^p\leqslant 1\biggr\} \\ &\leqslant\sup\biggl\{\biggl\|\,\sum_{k=1}^\infty a_kf_k\biggr\|_X^{p'}\colon \sum_{k=1}^\infty |a_k|^p\leqslant 1\biggr\}\leqslant C^{p'}. \end{aligned} \end{equation*} \notag $$
It also follows from the last estimate that $\displaystyle\int_0^1 f_k(s)\,ds\to 0$ as $k\to\infty$. Thus the condition that the sequence $\{u_k\}$ is almost disjoint holds (starting from some index) for the same sets $A_k$ as in the case of $\{f_k\}$, but maybe for slightly smaller $\delta>0$.

As a result, since the complementability of $[u_k]$ yields the complementability of $[f_k]$ by Lemma 6.1, the proof of the theorem reduces to the case already considered. $\Box$

Remark 6.2. The ‘almost disjointness’ of the sequence $\{f_k\}\subset X$ in part (c) of Theorem 6.3 (for $1\leqslant p<2$) is essential. In fact, a sequence of independent $p$-stable random variables is equivalent in $L_r$ to the canonical basis of $\ell_p$ for $1\leqslant r<p<2$; on the other hand it spans a non-complemented subspace of $L_r$ (for instance, see [1], Theorems 6.4.18 and 6.4.21).

Returning to $L_p$-spaces we recall that in this case the formally stronger assumption of Theorem 6.3 on the equivalence of the sequence $\{f_k\}_{k=1}^\infty$ to the canonical $\ell_p$-basis is indeed equivalent to the assumption of Theorem 6.1 that $[f_k]\approx \ell_p$.

In fact, let $f_k$ be independent functions, and let $\|f_k\|_p=1$, $k=1,2,\dots$, and $[f_k]\approx {\ell_p}$. As before, we can assume that $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$ . In each r. i. space independent mean zero functions form an unconditional basic sequence ([32], Proposition I.14). Hence if $p=1$ or $p=2$, then the required result follows directly from the fact that a normalized unconditional basis in $\ell_1$ (see [75]) or in an arbitrary separable Hilbert space (see [101], Proposition 1.1), respectively, is unique (up to equivalence). For $2<p<\infty$ it follows from Theorem 4 and Lemma 7 in [96] that either $\{f_k\}$ is equivalent in $L_p$ to the canonical basis of $\ell_p$, or $[f_k]$ contains a subspace isomorphic to $\ell_2$, which is impossible in our case. Finally, if $1<p<2$, then the equivalence in $L_p$ of $\{f_k\}$ to the canonical basis of $\ell_p$ follows from the arguments in the beginning of the proof of Theorem A in [44] (see pp. 168–169 there).

Thus, for $X=L_p$, $p\geqslant 2$, the Dor–Starbird theorem is a consequence of Theorem 6.3. If $1\leqslant p < 2$, then in addition the sequence $\{f_k\}$ must be almost disjoint in $L_p$, which follows from the well-known result due to Dor.

Theorem 6.4 ([43], Theorem B). Let $1\leqslant p < \infty$, $p\ne 2$, and let $\{f_k\}_{k=1}^\infty\subset L_{p}$. Assume that either

(i) $1\leqslant p < 2$, $\|f_i\|_{p}\leqslant 1$, $i=1,2,\dots$, and for all $a_i\in\mathbb{R}$

$$ \begin{equation*} \biggl\|\,\sum_{i=1}^n a_if_i\biggr\|_{p}\geqslant c\|(a_i)_{i=1}^n\|_{\ell_p}, \end{equation*} \notag $$
or

(ii) $2<p < \infty$, $\|f_i\|_{p}\geqslant 1$, $i=1,2,\dots$, and for all $a_i\in\mathbb{R}$

$$ \begin{equation*} \biggl\|\,\sum_{i=1}^n a_if_i\biggr\|_{p}\leqslant C\|(a_i)_{i=1}^n\|_{\ell_p}. \end{equation*} \notag $$

Then the sequence $\{f_i\}$ is almost disjoint in $L_p$.

Now consider the more general case of the spaces $L_{p,q}\!=L_{p,q}[0,1]$, $1<p<\infty$ and $1\leqslant q<\infty$ (see § 2.2). It is known (see, for instance, [48], Theorem 5.1, and [37], Lemma 3.1) that from each sequence of disjoint functions in $L_{p,q}$ we can extract a subsequence equivalent to the canonical $\ell_q$-basis which spans a complemented subspace of $L_{p,q}$. Since $L_{p,q}\in\mathbb{K}$ and $(L_{p,q})'=L_{p',q'}\in\mathbb{K}$, where $1/p+1/p'=1$ and $1/q+1/q'=1$ (see § 3.2), it follows from Corollary 6.2 that each sequence of independent functions $\{f_k\}$ in $L_{p,q}$ such that $\displaystyle\sum_{k=1}^\infty m(\operatorname{supp}f_k)\leqslant 1$ contains a subsequence $\{f_{k_i}\}$ with linear span $[f_{k_i}]$ complemented in $L_{p,q}$.

If the supports of the $f_k$, $k=1,2,\dots$, are arbitrary, then taking account of the facts that $L_{p,q}$ admits a lower $\max\{p,q\}$-estimate (see [39] or [42], Theorem 3), $L_{p_2,q_2}\subset L_{p_1,q_1}$ for $p_1<p_2$, and $L_{p,p}=L_p$ for $1<p<\infty$, from Theorem 6.3 we obtain the following result.

Theorem 6.5. Let $1<p\leqslant q<\infty$, and let $\{f_k\}\subset L_{p,q}$ be a sequence of independent functions that is equivalent in $L_{p,q}$ to the canonical $\ell_q$-basis. Then the subspace $[f_k]$ is complemented in $L_{p,q}$ if either $q\geqslant 2$ or the sequence $\{f_k\}$ is almost disjoint.

Let us present a consequence of the above theorem. On the one hand, if $p\in [2,\infty)$, then by a classical result of Kadec and Pełczyński ([61], Corollary 1) each subspace of $L_p$ isomorphic to $\ell_2$ is complemented in $L_p$. On the other hand it is known ([24], Theorem 3.3) that for each $p\in (1,2)$ the space $L_p$ contains non-complemented subspaces isomorphic to $\ell_2$. Theorem 6.5 yields the following result, showing that the result of Kadec and Pełczyński extends to the values $p\in (1,2)$, provided that we limit ourselves to subspaces spanned by independent functions.

Corollary 6.3 ([2]). If $1<p<2$ and $\{f_k\}\subset L_{p}$ is a sequence of independent functions which is equivalent in $L_{p}$ to the canonical basis of $\ell_2$, then the subspace $[f_k]$ is complemented in $L_{p}$.

6.4. Subspaces spanned by identically distributed independent functions

Using Theorem 6.2 and some results from the paper [53] by Semenov and Hernández we can relatively simply characterize the complementability property of subspaces spanned by identically distributed independent functions in an r. i. space with the Kruglov property.

Theorem 6.6 ([2]). Let $X$ be a separable r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in\mathbb{K}$. Let $f_k\in X$, $k=1,2,\dots$, be independent identically distributed functions. Then the sequence $\{f_k\}$ spans a complemented subspace of $X$ if and only if it is equivalent in $X$ to the canonical $\ell_2$-basis.

Proof. Consider the sequence $\{g_k\}$ of integer translations of the function $f_1$:
$$ \begin{equation*} g_k(t):=f_1(t-k+1)\cdot\chi_{[k-1,k)}(t),\qquad t>0,\quad k=1,2,\dots\,. \end{equation*} \notag $$
By Lemma 6.2 the space $Z_X^2$ is separable if $X$ is. Hence an easy analysis of the proofs of Theorems 2.1 and 2.2 in [53] shows that the subspace $[g_k]$ is complemented in $Z_X^2$ if and only if the sequence $\{g_k\}$ is equivalent in $Z_X^2$ to the sequence $\{\chi_{[k-1,k)}\}_{k=1}^\infty$, which is in its turn equivalent in $Z_X^2$ to the canonical $\ell_2$-basis.

On the other hand, as the functions $f_k$, $k=1,2,\dots$, are identically distributed, the sequence $\{g_k\}$ is equivalent to the sequence $\{\bar{f}_k\}$ in $Z_X^2$, where $\bar{f}_k(t)=f_k(t- k+ 1)\cdot\chi_{[k-1,k)}(t)$. Furthermore, by Lemma 6.5 the subspaces $[g_k]$ and $[\bar{f}_k]$ are simultaneously complemented or non-complemented in this space. Since the sequence $\{f_k\}$ is in its turn equivalent in $X$ to the sequence $\{\bar{f}_k\}$ in $Z_X^2$ by Theorems 3.2 and 3.4 (as usual, we can assume that $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$), the required result now follows from the above facts and Theorem 6.2. $\Box$

Remark 6.3. The example of the sequence of Rademacher functions shows that the assumption $X'\in\mathbb{K}$ cannot be dropped in Theorem 6.6 (see Theorem 1.1).

The following result is yet another characterization of the Hilbert space $L_2$ in terms of the complementability of its subspaces.

Theorem 6.7 ([2]). Let $X$ be a separable r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in\mathbb{K}$. Then the following conditions are equivalent:

(i) for any sequences of identically distributed independent functions $\{f_k\}\subset X$ and $\{f_k'\}\subset X'$ the subspaces $[f_k]$ and $[f_k']$ are complemented in $X$ and $X'$, respectively;

(ii) $X=L_2[0,1]$.

Proof. It is sufficient to show that (i) implies (ii). Note that by Theorem 6.2 condition (i) is equivalent to the following one: for any sequences $\{g_k\}\subset Z_X^2$ and $\{g_k'\}\subset Z_{X'}^2$ of identically distributed functions such that $\operatorname{supp}g_k\cup\operatorname{supp}g_k'\subset [k-1,k)$, $k=1,2,\dots$, the subspaces $[g_k]$ and $[g_k']$ are complemented in $Z_X^2$ and $(Z_X^2)'=Z_{X'}^2$, respectively. Then by Theorem 5.4 in [53] we have $Z_X^2=L_p[0,\infty)$ for some $1<p<\infty$. Hence it follows from the definition of $Z_X^2$ that $X=L_2[0,1]$. $\Box$

6.5. Subspaces spanned by uniformly bounded independent functions

In connection with the following result also see Theorem 1.2.

Theorem 6.8 ([2]). Let $X$ be an r. i. space on $[0,1]$ such that $X\in\mathbb{K}$ and $X'\in\mathbb{K}$. If $\{f_k\}\subset X$ is a sequence of independent functions such that $\displaystyle\int_0^1 f_k(t)\,dt=0$, $k=1,2,\dots$,

$$ \begin{equation} M:=\sup_{k=1,2,\dots}\|f_k\|_{\infty}<\infty,\quad\textit{and}\quad \alpha:=\inf_{k=1,2,\dots}\|f_k\|_{2}>0, \end{equation} \tag{6.15} $$
then the subspace $[f_k]$ is complemented in $X$.

Proof. By Theorem 6.2 it is sufficient to show that the subspace $[\bar{f}_k]$ is complemented in $Z_X^2$. By Lemma 6.5 we can assume that $\bar{f}_k\geqslant 0$. On the space $Z_X^2$ consider the operator
$$ \begin{equation*} Px(t):=\sum_{k=1}^\infty\biggl(\int_{k-1}^k \bar{f}_k(s)\,ds\biggr)^{-1} \int_{k-1}^k {x}(s)\,ds\cdot \bar{f}_k(t),\qquad t>0. \end{equation*} \notag $$
Then $P\bar{f}_k=\bar{f}_k$, $k=1,2,\dots$ . Furthermore, in view of (6.15) we have
$$ \begin{equation*} \int_{k-1}^k \bar{f}_k(s)\,ds\geqslant \frac{1}{M}\int_{k-1}^k \bar{f}_k(s)^2\,ds \geqslant \frac{\alpha^2}{M}\,,\qquad k=1,2,\dots, \end{equation*} \notag $$
so that
$$ \begin{equation*} Px(t) \leqslant \frac{M}{\alpha^2}\sum_{k=1}^\infty \biggl|\int_{k-1}^k {x}(s)\,ds\biggr|\cdot \bar{f}_k(t) \leqslant \biggl(\frac{M}{\alpha}\biggr)^2\sum_{k=1}^\infty \biggl|\int_{k-1}^k {x}(s)\,ds\biggr|\cdot \chi_{[k-1,k]}(t). \end{equation*} \notag $$
Thus, taking account of the boundedness of averaging operators in r. i. spaces again (see [69], § II.3.2 and Theorem II.4.3) we see that $\|Px\|_{Z_X^2}\!\leqslant(M/\alpha)^2 \|x\|_{Z_X^2}$. This means that $[\bar{f}_k]$ is complemented in $Z_X^2$. $\Box$

Remark 6.4. Theorems 6.6 and 6.8 reveal the extent to which the investigation of subspaces spanned by identically distributed (and, accordingly, uniformly bounded) independent functions in an r. i. space with the Kruglov property simplifies in comparison with the general case considered before by Braverman in Chap. IV of his monograph [32]. In particular, the reader can find there (see Theorem 4.2) a much more ‘laborious’ proof of the following result (which is stronger than Theorem 6.6): a subspace of a separable r. i. space $X$ that is spanned by a sequence $\{f_k\}$ of identically distributed independent functions is complemented in $X$ if and only if $G\subset X\subset G'$ and $[f_k]\approx \ell_2$. In [29] a similar stronger version of Theorem 6.8 was stated (without a proof); the conditions $X\in\mathbb{K}$ and $X'\in\mathbb{K}$ were replaced there by the embeddings $G\subset X\subset G'$. Note in conclusion that a weaker version of the last statement was proved in [32], Theorem 4.1: the norm of the space $L_2$ in the second expression in (6.15) is replaced in it by the norm of $L_1$.

7. Approximation of finite-dimensional subspaces of rearrangement invariant spaces which are spanned by independent functions

Let $X$ be a Banach space. For an arbitrary subspace $E$ of $X$, $\dim E=n$, and any $M>1$ let $k=k_X(E,M)$ be the least number such that there exists a bounded linear operator $U\colon X\to X$ such that $Ux=x$ for $x\in E$, $\|U\|\leqslant M$, and $\dim U(X)\leqslant k$. Also set

$$ \begin{equation*} k_X(n,M):=\sup\{k_X(E,M)\colon E\text{ is a subspace of } X,\dim E=n\}. \end{equation*} \notag $$
For the space $L_p=L_p[0,1]$, $1\leqslant p\leqslant\infty$, the uniformity function of the bounded approximation property $k_p(n,M):=k_{L_p}(n,M)$ was introduced in 1988 by Figiel, Johnson, and Schechtman [47]. Even earlier, without defining this quantity explicitly, Pełczyński and Rosenthal [88] proved that it is finite for any $1\leqslant p\leqslant\infty$, $n\in\mathbb{N}$, and $M>1$. Moreover, [88] contains an argument, due to Kwapień, which means in fact that the order of the quantity $k_p(n,1+\varepsilon)$ does not exceed $(n/\varepsilon)^{Cn}$ for some $C>0$.

In [47] a number of estimates for $k_1(E,M)$ were also established in the case when $E$ is a subspace spanned by a system of independent functions. For example, if $E$ is the linear span of $n$ standard Gaussian or Rademacher random variables, then

$$ \begin{equation*} k_1(E,M)\geqslant \exp(\delta M^{-2}n), \end{equation*} \notag $$
where $\delta>0$ is some constant. On the other hand, if $1<p<2$ and $E$ is a subspace spanned by $n$ $p$-stable random variables, then
$$ \begin{equation*} k_1(E,M)\geqslant \exp(\delta M^{-2}n^{2/p'}),\quad\text{where}\ \ \frac{1}{p}+\frac{1}{p'}=1. \end{equation*} \notag $$
In particular, it immediately follows from these estimates that for each fixed $M$ the uniformity function $k_1(n,M)$ has an exponential lower bound with respect to $n$. On the other hand, according to a conjecture in [58], for each $1<p<\infty$ there exists $M=M(p)$ such that $k_p(n,M)$ has a polynomial upper estimate with respect to $n$. In support of this conjecture Johnson and Schechtman [58] showed that there exists $M=M(p)$ such that if $E$ is the linear span of arbitrary $n$ independent functions, then
$$ \begin{equation*} k_p(E,M)\leqslant Cn\log(n+1), \end{equation*} \notag $$
where $C>0$ is a constant independent of $1<p<\infty$ and $n\in\mathbb{N}$.

We will see here that a similar estimate for $k_X(E,M)$ holds for a wide class of r. i. spaces, provided that the functions spanning $E$ are identically distributed. In particular, it holds for the Orlicz space $L \log^\beta L$ in the case when $\beta\geqslant 1$ and $E$ is a subspace spanned by $n$ symmetric $p$-stable random variables, $1<p<2$, in sharp contrast to the exponential lower bound for $k_1(E,M)$ mentioned above.

Below we need some definitions and results from the paper [10] on the complementability of finite-dimensional subspaces of r. i. spaces on $[0,1]$ spanned by dilations and translations.

Let $X$ be an r. i. space on $[0,1]$. Given a function $a \in X$, set

$$ \begin{equation*} \begin{gathered} \, a_{n,k}(t):=a(2^{n}t-k+1)\cdot\chi_{[(k-1)2^{-n},k2^{-n})}(t), \\ n=0,1,2,\dots,\qquad k=1,2,\dots,2^n. \end{gathered} \end{equation*} \notag $$
For each $n=0,1,2,\dots$ let $Q_{a,n}$ denote the linear span of the functions $a_{n,k}$, $k=1,2,\dots,2^n$. Then an r. i. space $X$ on $[0,1]$ belongs to the class $\mathcal{N}_{0}$ if for each function $a \in X$ there exists a sequence of projections $\{P_{n}\}_{n=0}^{\infty}$ in $X$ such that
$$ \begin{equation*} P_{n}(X)=Q_{a, n}\quad\text{and}\quad D_X:=\sup_{n=0,1,2,\dots}\|P_{n}\|_{X \to X} < \infty. \end{equation*} \notag $$

By [10], Theorem 4, an r. i. space $X$ belongs to the class ${\mathcal N}_0$ if and only if the operator of tensor product $x\otimes y(s,t)=x(s)y(t)$ is bounded from $X \times X$ to the space $X([0,1]\times [0,1])$. This description of ${\mathcal N}_{0}$, in combination with some known results on the boundedness of the operator of tensor product in r. i. spaces (see the references in [10]), provides sufficient information about which r. i. spaces belong to this class. In particular, an Orlicz space $L_M$ belongs to ${\mathcal N}_{0}$ if and only if there exist $C>0$ and $u_0>0$ such that

$$ \begin{equation*} M(uv)\leqslant CM(u)M(v)\quad\text{for } u,v\geqslant u_0. \end{equation*} \notag $$
The analogous condition
$$ \begin{equation*} \psi(st)\leqslant C\psi(s)\psi(t),\qquad 0\leqslant s,t\leqslant 1, \end{equation*} \notag $$
is necessary and sufficient for a Lorentz space $\Lambda(\psi)$ to belong to ${\mathcal N}_{0}$. A Marcinkiewicz space $M(\psi)$ belongs to ${\mathcal N}_{0}$ if and only if $\psi'\otimes\psi'\in M(\psi)([0,1]\times [0,1])$ (here $\psi'$ is the derivative of $\psi$).
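The simplest (standard) illustration of the Lorentz-space criterion above, included here only for orientation, is given by power functions, which satisfy the required submultiplicativity condition with constant $C=1$:

```latex
\psi(t)=t^{\alpha},\quad 0<\alpha\leqslant 1:\qquad
\psi(st)=(st)^{\alpha}=s^{\alpha}t^{\alpha}=\psi(s)\,\psi(t),
\qquad 0\leqslant s,t\leqslant 1 .
```

In particular, the Lorentz spaces $\Lambda(t^{1/p})$, $1<p<\infty$, which coincide up to equivalence of norms with the spaces $L_{p,1}$, belong to ${\mathcal N}_{0}$.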

Theorem 7.1 ([3], Theorem 1). Let $Y$ be an r. i. space on $[0,\infty)$ such that $\|x\|_Y=\|x^*\|_X$ if $m(\operatorname{supp}x)\leqslant 1$, where $X$ is an r. i. space on $[0,1]$ and $X\in \mathcal{N}_0$. If $\{f_i\}_{i=1}^n\subset Y$ is an arbitrary sequence of pairwise disjoint identically distributed functions such that $m(\operatorname{supp}f_i)\leqslant 1$, $i=1,\dots,n$, then for each $\varepsilon>0$ there exists a projection $P\colon Y\to Y$ such that

$$ \begin{equation*} \begin{gathered} \, Pf_i=f_i,\qquad i=1,\dots,n, \\ \|P\|\leqslant D_X+1+\varepsilon,\quad\textit{and}\quad \dim P(Y)\leqslant Cn\log(n+1), \end{gathered} \end{equation*} \notag $$
where $C=C(\varepsilon)$ is independent of $X$ and $n\in\mathbb{N}$.

Proof. Without loss of generality assume that $\|f_i\|_Y=1$ for all $i=1,\dots,n$.

Since $m(\operatorname{supp}f_i)\leqslant 1$, $i=1,\dots,n$, and $\|\chi_{[0,1]}\|_X=1$, for any $\delta>0$ (which will be specified below) we have

$$ \begin{equation*} \|f_i\chi_{\{|f_i|\leqslant\delta/n\}}\|_Y\leqslant \frac{\delta}{n}\,\|\chi_{[0,1]}\|_X=\frac{\delta}{n}\,. \end{equation*} \notag $$
So if $\delta$ is sufficiently small, then, as $\varepsilon>0$ is arbitrary, by the theorem on small perturbations ([1], Theorem 1.3.9) we can assume that $m\{0<|f_i|<\delta/n\}= 0$ for all $i=1,\dots,n$. Therefore,
$$ \begin{equation*} f_i=g_i+h_i, \end{equation*} \notag $$
where
$$ \begin{equation*} g_i:=f_i\chi_{\{|f_i|>n\}}\quad\text{and}\quad h_i:=f_i\chi_{\{{\delta}/{n}\leqslant |f_i|\leqslant n\}},\quad i=1,\dots,n. \end{equation*} \notag $$

Using the inequality $m(\operatorname{supp}f_i)\leqslant 1$ again, we obtain

$$ \begin{equation*} m(\operatorname{supp}g_i)=m\{|f_i|>n\}\leqslant \frac{1}{n} \|f_i\|_{L_1(\operatorname{supp}f_i)}\leqslant \frac{1}{n} \|f_i^*\|_X=\frac{1}{n}\|f_i\|_Y=\frac{1}{n}\,. \end{equation*} \notag $$
This yields the estimate $m(G)\leqslant 1$ for the set $G:=\displaystyle\bigcup\limits_{i=1}^n\operatorname{supp}g_i$. Thus, by assumption the norms of the spaces $Y$ and $X$ coincide on functions with support in $G$.

Set $a=\biggl(\,\displaystyle\sum_{i=1}^n g_i\biggr)^*$, and let $\{a_i\}_{i=1}^n$ be a sequence of disjoint shifts of a function $\sigma_na(t)=a(t/n)$, $0\leqslant t\leqslant 1$, on $[0,1]$. Then, as $X\in {\mathcal N}_{0}$, there exists a projection $Q'\colon \,X\to X$ whose image coincides with the linear span $[a_i]_{i=1}^n$ and such that $\|Q'\|\leqslant D_X$ (also see [10]). Since $a_i$ and $g_i$ are identically distributed for each $i=1,\dots,n$, using the arguments from the proof of Lemma 6.5 it is easy to show that there exists a projection $Q\colon Y\to Y$ with image equal to the linear span $F_1:=[g_i]_{i=1}^n$ and such that $\|Q\|\leqslant D_X+\varepsilon/2$ and $Qx=0$ in the case when $G\cap \operatorname{supp}x=\varnothing$.

Now, following the proof of Theorem 10 in [58], we represent each function $h_i$ as a sum of $m$ disjoint functions (where $m$ is approximately the ratio $2\log(n/\delta)/\log(1+ \delta)$) whose moduli are ‘almost’ equal to the characteristic functions of some sets.

We start with the following observation. Let $l\in\mathbb{N}$ be arbitrary, the sets $B_i\subset (0,\infty)$, $i=1,\dots,n$, be pairwise disjoint, and $z_i$ be functions such that $\operatorname{supp}z_i\subset B_i$ and $|z_i(s)|/|z_i(t)|\leqslant 1+\delta$ for all $s,t\in B_i$, $i=1,\dots,l$. Then the operator

$$ \begin{equation*} W'x(t):=\sum_{i=1}^l\biggl(\int_{B_i}|z_i(s)|\,ds\biggr)^{-1} \int_{B_i} x(s)\,ds\cdot |z_i(t)| \end{equation*} \notag $$
is defined on $Y$ as a projection with range equal to the subspace $[|z_i|]_{i=1}^l$. In addition, $|W'x(t)|\leqslant (1+\delta)|Ux(t)|$, where $U$ is the averaging operator
$$ \begin{equation*} Ux(t):=\sum_{i=1}^l\frac{1}{m(B_i)}\int_{B_i}x(s)\,ds\cdot\chi_{B_i}(t). \end{equation*} \notag $$
Since $\|U\|=1$ ([69], § II.3.2 and Theorem II.4.3), it follows from the previous estimate that $\|W'\|\leqslant 1+\delta$. Therefore, the operator
$$ \begin{equation*} Wx:=\sigma W'(\sigma x), \quad \text{where}\quad \sigma(t):= \operatorname{sign}\biggl(\,\sum_{i=1}^l z_i(t)\biggr), \end{equation*} \notag $$
is a projection onto $[z_i]_{i=1}^l$, the estimate $\|W\|\leqslant 1+\delta$ holds, and $Wx=0$ in the case when $\operatorname{supp}x \cap \biggl(\,\displaystyle\bigcup\limits_{i=1}^l B_i\biggr)=\varnothing$.

Now let $m$ be the smallest integer such that $n(1+\delta)^{-m}\leqslant\delta/n$. For all $i=1,\dots,n$ and $k=1,\dots,m$ we set

$$ \begin{equation*} A_{i,k}:=\{n(1+\delta)^{-k}<|f_i|\leqslant n(1+\delta)^{-k+1}\}\cap \biggl\{|f_i|\geqslant\frac{\delta}{n}\biggr\}. \end{equation*} \notag $$
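For orientation (a routine computation, not spelled out in the proof above), solving the defining inequality for the smallest such integer $m$ gives

```latex
n(1+\delta)^{-m}\leqslant\frac{\delta}{n}
\;\Longleftrightarrow\;
(1+\delta)^{m}\geqslant\frac{n^{2}}{\delta}
\;\Longleftrightarrow\;
m\geqslant\frac{\log(n^{2}/\delta)}{\log(1+\delta)}\,,
\qquad\text{so}\qquad
m=\biggl\lceil\frac{\log(n^{2}/\delta)}{\log(1+\delta)}\biggr\rceil .
```

Hence, for fixed $\delta$, we have $m=O(\log n)$, which is the source of the factor $\log(n+1)$ in the dimension estimate.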
Since the $A_{i,k}$, $i=1,\dots,n$, $k=1,\dots,m$, are pairwise disjoint sets, $\operatorname{supp}f_i\chi_{A_{i,k}}\subset A_{i,k}$, and $|f_i(s)|/|f_i(t)|\leqslant 1+\delta$ for all $s,t\in A_{i,k}$, $i=1,\dots,n$, $k=1,\dots,m$, as shown above, there exists a projection $R\colon Y\to Y$ with image equal to the span $F_2$ of the functions $f_i\chi_{A_{i,k}}$, $i=1,\dots,n$, $k=1,\dots,m$, and such that $\|R\|\leqslant 1+\delta$ and $Rx=0$ in the case when $\operatorname{supp}x \cap \operatorname{supp}h_i=\varnothing$ for all $i=1,\dots,n$. The subspaces $F_1$ and $F_2$ are pairwise disjoint, so $P:=Q+R$ is a projection bounded in $Y$ with image equal to the linear span of the set $F_1\cup F_2$, whose dimension is at most $n(m+1)\asymp n\log(n+1)$ (with constant depending on $\delta$), and such that
$$ \begin{equation*} \|P\|\leqslant D_X+1+\frac{\varepsilon}{2}+\delta. \end{equation*} \notag $$
Furthermore, $Pf_i=f_i$ for all $i=1,\dots,n$ by construction. Therefore, taking $\delta$ smaller than $\varepsilon/2$ we complete the proof. $\Box$

Using Theorem 7.1 and arguments similar to the ones in the proofs of Propositions 6.2 and 6.3 we can establish the following result.

Theorem 7.2 ([3], Theorem 2). Let $X$ be an r. i. space on $[0,1]$ such that $X\in {\mathcal N}_0\cap \mathbb{K}$ and $X'\in\mathbb{K}$. Then there exists a positive constant $M_X$ such that for each $n\in\mathbb{N}$ and any subspace $E$ of $X$ spanned by $n$ independent identically distributed functions there exists a projection $P\colon X\to X$ such that $\|P\|\leqslant M_X$, $Px=x$ for $x\in E$, and $\dim P(X)\leqslant Cn\log(n+1)$, where the constant $C$ is independent of $X$ and $n$. In particular, $k_X(E,M_X)\leqslant Cn\log(n+1)$.

Theorem 7.2 has specific corollaries for various classes of r. i. spaces. We limit ourselves to Orlicz spaces. Recall that the complementary Orlicz function of an Orlicz function $\Phi$ is defined by

$$ \begin{equation*} {\Phi}'(u):=\sup_{v>0}(uv-{\Phi}(v)),\qquad u\geqslant 0. \end{equation*} \notag $$
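For instance (a standard Young-conjugate computation, included here only for illustration), for a power function the supremum is attained at $v=u^{1/(p-1)}$, which gives

```latex
\Phi(u)=\frac{u^{p}}{p},\quad 1<p<\infty:\qquad
\Phi'(u)=\sup_{v>0}\Bigl(uv-\frac{v^{p}}{p}\Bigr)
=u\cdot u^{1/(p-1)}-\frac{u^{p/(p-1)}}{p}
=\frac{u^{p'}}{p'}\,,\qquad \frac1p+\frac1{p'}=1 .
```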

Corollary 7.1 ([3], Corollary 2). Let the Orlicz function $\Phi$ satisfy the following conditions:

(a) there exist $C>0$ and $u_0>0$ such that $\Phi(uv)\leqslant C\Phi(u)\Phi(v)$ for $u,v\geqslant u_0$;

(b) $\Phi'$ satisfies the $\Delta_2^\infty$-condition.

Then there exists a constant $M_{\Phi}$ which only depends on $\Phi$ such that for any $n\in\mathbb{N}$, for an arbitrary subspace $E$ of the Orlicz space $L_{\Phi}$ that is spanned by $n$ independent identically distributed functions we have

$$ \begin{equation*} k_{L_{\Phi}}(E,M_{\Phi})\leqslant Cn\log(n+1), \end{equation*} \notag $$
where the constant $C$ is independent of $\Phi$ and $n\in\mathbb{N}$.

Note that condition (b) in Corollary 7.1 holds for no Orlicz space $X=L \log^\beta L$, where $\beta>0$. Nevertheless, $X\in {\mathcal N}_0\cap \mathbb{K}$ and $X'=\exp L_{1/\beta}\in\mathbb{K}$ for $\beta\geqslant 1$ (§ 3.2). Therefore, from Theorem 7.2, we obtain the following result.

Corollary 7.2 ([3], Corollary 3). Let $\beta\geqslant 1$. Then there exists a constant $M_\beta$ such that for each $n\in\mathbb{N}$, for an arbitrary subspace $E$ of $X=L \log^\beta L$ spanned by $n$ independent identically distributed functions,

$$ \begin{equation} k_{X}(E,M_\beta)\leqslant Cn\log(n+1), \end{equation} \tag{7.1} $$
where the constant $C$ is independent of $\beta$ and $n\in\mathbb{N}$.

In particular, in this corollary we can take as $E$ a subspace spanned by $p$-stable random variables, $1<p<2$. Note that in this case inequality (7.1) is in sharp contrast with the exponential lower bound for $k_1(E,M)$ in [47], which we mentioned at the beginning of this section.

8. Open problems

The following question arises in connection with Theorems 5.2 and 5.3.

Problem 1. For $1\leqslant p<2$ let $\psi$ be an Orlicz function that is equivalent at the origin to a $p$-convex and $2$-concave Orlicz function and satisfies the condition $\lim_{t\to +0}\psi(t)t^{-p}= 0$. Find necessary and sufficient conditions on $\psi$ ensuring that the distribution of a function $f\in L_p$ such that a sequence of independent copies of $f$ is equivalent in $L_p$ to the canonical $\ell_\psi$-basis is unique (in the sense of the definition in § 5).

As concerns the following problem, see Proposition 6.3.

Problem 2. Let $X$ be an r. i. space on $[0,1]$ and $\{f_k\}$ be a sequence of independent functions in $X$ such that the subspace $[f_k]$ is complemented in $X$. Does this imply that the subspace $[\bar{f}_k]$ spanned by disjoint copies $\bar{f}_k$ of the functions ${f}_k$, $k=1,2,\dots$, is complemented in $Z_X^2$?

In connection with Theorem 6.5 it is natural to wonder whether or not an analogue of Theorem 6.4 on almost disjointness holds for the spaces $L_{q,r}$.

Problem 3. Let $1 < q <\infty$ and $1 \leqslant r < \infty$. Assume that a sequence $\{f_k\}\subset L_{q,r}$ is equivalent in $L_{q,r}$ to the canonical $\ell_r$-basis. Does this imply that the sequence $\{f_k\}$ is almost disjoint?

We also state the following more general problem.

Problem 4. Let $1 \leqslant r <\infty$, and let $X$ be an r. i. space on $[0,1]$. Find sufficient (necessary) conditions on $X$ for an arbitrary sequence $\{f_k\}\subset X$ equivalent in $X$ to the canonical $\ell_r$-basis to be almost disjoint.

In particular, it could be interesting to consider Problem 4 for the so-called $r$-disjointly homogeneous r. i. spaces $X$, that is, spaces $X$ in which each sequence of normalized pairwise disjoint functions contains a subsequence equivalent to the canonical $\ell_r$-basis.

Problem 5. Let $X=L \log^\beta L$. Find all $\beta>0$ for which there exists a constant $M_\beta$ such that for each $n\in\mathbb{N}$ the quantity $k_{X}(E,M_\beta)$, where $E$ is a subspace of $X$ spanned by $n$ symmetric $p$-stable variables for some $1<p<2$, satisfies estimate (7.1) from Corollary 7.2.

One can also pose a similar question for subspaces spanned by arbitrary $n$ independent identically distributed functions.

Problem 6. Let $2<p<\infty$. In [96] Rosenthal showed that each subspace of $L_p$ spanned by a sequence of independent mean zero functions (which can have different distributions in general) is isomorphic to a subspace of the direct sum $\ell_p\oplus \ell_2$. In addition, in [96] the reader can find examples of complemented and non-complemented subspaces of this type. Find necessary and sufficient conditions ensuring that such a subspace is complemented in $L_p$.

Problem 7. Let $X$ be a separable r. i. space on $[0,1]$ such that the operator of tensor product $x \otimes y(s, t) = x(s)y(t)$ is bounded from $X \times X$ to the space $X([0, 1]\times [0, 1])$, and let $\{f_k\}$ be a sequence of independent identically and symmetrically distributed functions such that $f_1\in X$. What are conditions ensuring that the closed span $X_f$ of the set of all functions of the form $x(t)\cdot f_k(s)$, $t,s\in [0,1]$, where $x\in X$, is a complemented subspace of $X([0,1]\times [0,1])$? Note that when $\{f_k\}$ is a sequence of Rademacher functions, such a subspace (denoted by $\operatorname{Rad}X$) is complemented if and only if the Boyd indices of $X$ are non-trivial, that is, the inequalities $0<\mu_X\leqslant\nu_X<1$ hold (see [79], Proposition 2.d.2, and [8]).

The last two questions relate to exponential Orlicz spaces which do not possess the Kruglov property.

Problem 8. By Theorem 1.3 in [56], if an r. i. space $X$ is an interpolation space with respect to the couple $(L_2,\exp L_2)$, then each sequence $\{f_k\}$ of independent identically distributed mean zero functions is equivalent in $X$ to the canonical basis in $\ell_2$. On the other hand, for each $p>2$ there exist sequences $\{f_k\}$ and $\{g_k\}$ with these properties that are equivalent in $\exp L_p$ to the canonical bases in $\ell_p$ and $\ell_{p',\infty}$, respectively (for instance, see [32]). The following question is natural to ask: what other sequence spaces can be obtained in a similar way?

We also state a more difficult problem.

Problem 9. Let $p>2$. Similarly to $L_p$-spaces (see Theorem 4.1), find the description of all subspaces of the space $\exp L_p$ that have the form $[f_k]$, where $\{f_k\}$ is a sequence of independent identically distributed mean zero functions.

The author is deeply obliged to F. A. Sukochev and D. V. Zanin for their assistance, and to M. Sh. Braverman, Y. Raynaud, E. M. Semenov, and F. L. Hernández for useful discussions of many questions treated in this paper.


Bibliography

1. F. Albiac and N. J. Kalton, Topics in Banach space theory, Grad. Texts in Math., 233, Springer, New York, 2006, xii+373 pp.
2. S. V. Astashkin, “On subspaces generated by independent functions in symmetric spaces with the Kruglov property”, St. Petersburg Math. J., 25:4 (2014), 513–527
3. S. V. Astashkin, “Approximation of subspaces of symmetric spaces generated by independent functions”, Math. Notes, 96:5 (2014), 625–633
4. S. V. Astashkin, “On symmetric spaces containing isomorphic copies of Orlicz sequence spaces”, Comment. Math., 56:1 (2016), 29–44
5. S. V. Astashkin, The Rademacher system in function spaces, Birkhäuser/Springer, Cham, 2020, xx+559 pp.
6. S. V. Astashkin, “The structure of subspaces in Orlicz spaces lying between $L^1$ and $L^2$”, Math. Z., 303:4 (2023), 91, 24 pp.
7. S. V. Astashkin, “On subspaces of Orlicz spaces spanned by independent copies of a mean zero function”, Izv. Math., 88:4 (2024), 601–625
8. S. V. Astashkin and M. Sh. Braverman, “A subspace of a symmetric space, generated by a Rademacher system with vector coefficients”, Operator equations in function spaces, Voronezh State University, Voronezh, 1986, 3–10 (Russian)
9. S. V. Astashkin and G. P. Curbera, “Rosenthal's space revisited”, Studia Math., 262:2 (2022), 197–224
10. S. V. Astashkin, L. Maligranda, and E. M. Semenov, “Multiplicator space and complemented subspaces of rearrangement invariant space”, J. Funct. Anal., 202:1 (2003), 247–276
11. S. V. Astashkin, E. M. Semenov, and F. A. Sukochev, “Banach–Saks type properties in rearrangement-invariant spaces with the Kruglov property”, Houston J. Math., 35:3 (2009), 959–973
12. S. V. Astashkin and F. A. Sukochev, “Comparison of sums of independent and disjoint functions in symmetric spaces”, Math. Notes, 76:4 (2004), 449–454
13. S. V. Astashkin and F. A. Sukochev, “Series of independent random variables in rearrangement invariant spaces: an operator approach”, Israel J. Math., 145 (2005), 125–156
14. S. V. Astashkin and F. A. Sukochev, “Series of independent, mean zero random variables in rearrangement-invariant spaces having the Kruglov property”, J. Math. Sci. (N. Y.), 148:6 (2008), 795–809
15. S. V. Astashkin and F. A. Sukochev, “Sequences of independent identically distributed functions in rearrangement invariant spaces”, Function spaces VIII, Banach Center Publ., 79, Inst. Math., Polish Acad. Sci., Warsaw, 2008, 27–37  mathscinet  zmath
16. S. V. Astashkin and F. A. Sukochev, “Best constants in Rosenthal-type inequalities and the Kruglov operator”, Ann. Probab., 38:5 (2010), 1986–2008  crossref  mathscinet  zmath
17. S. V. Astashkin and F. A. Sukochev, “Independent functions and the geometry of Banach spaces”, Russian Math. Surveys, 65:6 (2010), 1003–1081  mathnet  crossref  mathscinet  zmath  adsnasa
18. S. V. Astashkin and F. A. Sukochev, “Orlicz sequence spaces spanned by identically distributed independent random variables in $L_p$-spaces”, J. Math. Anal. Appl., 413:1 (2014), 1–19  crossref  mathscinet  zmath
19. S. Astashkin, F. A. Sukochev, and D. Zanin, “Disjointification inequalities in symmetric quasi-Banach spaces and their applications”, Pacific J. Math., 270:2 (2014), 257–285  crossref  mathscinet  zmath
20. S. Astashkin, F. Sukochev, and D. Zanin, “On uniqueness of distribution of a random variable whose independent copies span a subspace in $L_p$”, Studia Math., 230:1 (2015), 41–57  crossref  mathscinet  zmath
21. S. Astashkin, F. Sukochev, and D. Zanin, “The distribution of a random variable whose independent copies span $\ell_M$ is unique”, Rev. Mat. Complut., 35:3 (2022), 815–834  crossref  mathscinet  zmath
22. S. Banach, Théorie des opérations linéaires, Monogr. Mat., 1, Inst. Mat. PAN, Warszawa, 1932, vii+254 pp.  mathscinet  zmath
23. S. Banach and S. Mazur, “Zur Theorie der linearen Dimension”, Studia Math., 4 (1933), 100–112  crossref  zmath
24. G. Bennett, L. E. Dor, V. Goodman, W. B. Johnson, and C. M. Newman, “On uncomplemented subspaces of $L_p$, $1<p<2$”, Israel J. Math., 26:2 (1977), 178–187  crossref  mathscinet  zmath
25. C. Bennett and R. Sharpley, Interpolation of operators, Pure Appl. Math., 129, Academic Press, Inc., Boston, MA, 1988, xiv+469 pp.  mathscinet  zmath
26. J. Bergh and J. Löfström, Interpolation spaces. An introduction, Grundlehren Math. Wiss., 223, Springer-Verlag, Berlin–New York, 1976, x+207 pp.  crossref  mathscinet  zmath
27. J. Bourgain, “A counterexample to a complementation problem”, Compos. Math., 43:1 (1981), 133–144  mathscinet  zmath
28. J. Bourgain, “Bounded orthogonal systems and the $\Lambda(p)$-set problem”, Acta Math., 162:3-4 (1989), 227–245  crossref  mathscinet  zmath
29. M. Sh. Braverman, “Complementability of subspaces generated by independent functions in a symmetric space”, Funct. Anal. Appl., 16:2 (1982), 129–130  mathnet  crossref  mathscinet  zmath
30. M. Sh. Braverman, “Symmetric spaces and sequences of independent random variables”, Funct. Anal. Appl., 19:4 (1985), 315–316  mathnet  crossref  mathscinet  zmath
31. M. Sh. Braverman, “On some moment conditions for sums of independent random variables”, Probab. Math. Statist., 14:1 (1993), 45–56  mathscinet  zmath
32. M. Sh. Braverman, Independent random variables and rearrangement invariant spaces, London Math. Soc. Lecture Note Ser., 194, Cambridge Univ. Press, Cambridge, 1994, viii+116 pp.  crossref  mathscinet  zmath
33. M. Braverman, “Independent random variables in Lorentz spaces”, Bull. London Math. Soc., 28:1 (1996), 79–87  crossref  mathscinet  zmath
34. J. Bretagnolle and D. Dacunha-Castelle, “Mesures aléatoires et espaces d'Orlicz”, C. R. Acad. Sci. Paris Sér. A-B, 264 (1967), A877–A880  mathscinet  zmath
35. J. Bretagnolle and D. Dacunha-Castelle, “Application de l'étude de certaines formes linéaires aléatoires au plongement d'espaces de Banach dans des espaces $L^p$”, Ann. Sci. École Norm. Sup. (4), 2:4 (1969), 437–480  crossref  mathscinet  zmath
36. Yu. A. Brudnyĭ and N. Ya. Krugljak, Interpolation functors and interpolation spaces, v. I, North-Holland Math. Library, 47, North-Holland Publishing Co., Amsterdam, 1991, xvi+718 pp.  mathscinet  zmath
37. N. L. Carothers and S. J. Dilworth, “Geometry of Lorentz spaces via interpolation”, Texas functional analysis seminar 1985–1986 (Austin, TX, 1985–1986), Longhorn Notes, Univ. Texas, Austin, TX, 1986, 107–133  mathscinet  zmath
38. N. L. Carothers and S. J. Dilworth, “Inequalities for sums of independent random variables”, Proc. Amer. Math. Soc., 104:1 (1988), 221–226  crossref  mathscinet  zmath
39. J. Creekmore, “Type and cotype in Lorentz $L_{pq}$ spaces”, Nederl. Akad. Wetensch. Indag. Math., 43:2 (1981), 145–152  crossref  mathscinet  zmath
40. D. Dacunha-Castelle, “Variables aléatoires échangeables et espaces d'Orlicz”, Séminaire Maurey–Schwartz 1974–1975. Espaces $L^p$, applications radonifiantes et géométrie des espaces de Banach, École Polytech., Centre Math., Paris, 1975, Exp. X, XI, 21 pp.  mathscinet  zmath
41. J. Diestel, Sequences and series in Banach spaces, Grad. Texts in Math., 92, Springer-Verlag, New York, 1984, xii+261 pp.  crossref  mathscinet  zmath
42. S. J. Dilworth, “Special Banach lattices and their applications”, Handbook of the geometry of Banach spaces, v. 1, North-Holland Publishing Co., Amsterdam, 2001, 497–532  crossref  mathscinet  zmath
43. L. E. Dor, “On projections in $L_1$”, Ann. of Math. (2), 102:3 (1975), 463–474  crossref  mathscinet  zmath
44. L. E. Dor and T. Starbird, “Projections of $L_p$ onto subspaces spanned by independent random variables”, Compos. Math., 39:2 (1979), 141–175  mathscinet  zmath
45. R. J. Elliott, Stochastic calculus and applications, Appl. Math. (N. Y.), 18, Springer-Verlag, New York, 1982, ix+302 pp.  mathscinet  zmath
46. G. Fichtenholz and L. Kantorovitch (Kantorovich), “Sur les opérations linéaires dans l'espace des fonctions bornées”, Studia Math., 5 (1934), 69–98  crossref  zmath
47. T. Figiel, W. B. Johnson, and G. Schechtman, “Factorizations of natural embeddings of $l_p^n$ into $L_r$. I”, Studia Math., 89:1 (1988), 79–103  crossref  mathscinet  zmath
48. T. Figiel, W. B. Johnson, and L. Tzafriri, “On Banach lattices and spaces having local unconditional structure, with applications to Lorentz function spaces”, J. Approx. Theory, 13:4 (1975), 395–412  crossref  mathscinet  zmath
49. V. F. Gaposhkin, “Lacunary series and independent functions”, Russian Math. Surveys, 21:6 (1966), 1–82  mathnet  crossref  mathscinet  zmath  adsnasa
50. E. D. Gluskin, “Diameter of the Minkowski compactum is approximately equal to $n$”, Funct. Anal. Appl., 15:1 (1981), 57–58  mathnet  crossref  mathscinet  zmath
51. I. Ts. Gohberg and A. S. Markus, “Stability of bases in Banach and Hilbert spaces”, Izv. Akad. Nauk Mold. SSR, 1962, no. 5, 17–35 (Russian)  mathscinet
52. Y. Gordon, A. Litvak, C. Schütt, and E. Werner, “Geometry of spaces between polytopes and related zonotopes”, Bull. Sci. Math., 126:9 (2002), 733–762  crossref  mathscinet  zmath
53. F. L. Hernández and E. M. Semenov, “Subspaces generated by translations in rearrangement invariant spaces”, J. Funct. Anal., 169:1 (1999), 52–80  crossref  mathscinet  zmath
54. J. Hoffman-Jørgensen, “Sums of independent Banach space valued random variables”, Studia Math., 52 (1974), 159–186  crossref  mathscinet  zmath
55. T. Holmstedt, “Interpolation of quasi-normed spaces”, Math. Scand., 26:1 (1970), 177–199  crossref  mathscinet  zmath
56. Yong Jiao, F. Sukochev, and D. Zanin, “Sums of independent and freely independent identically distributed random variables”, Studia Math., 251:3 (2020), 289–315  crossref  mathscinet  zmath
57. W. B. Johnson, B. Maurey, G. Schechtman, and L. Tzafriri, Symmetric structures in Banach spaces, Mem. Amer. Math. Soc., 19, no. 217, Amer. Math. Soc., Providence, RI, 1979, v+298 pp.  crossref  mathscinet  zmath
58. W. B. Johnson and G. Schechtman, “Sums of independent random variables in rearrangement invariant function spaces”, Ann. Probab., 17:2 (1989), 789–808  crossref  mathscinet  zmath
59. M. I. Kadec, “Linear dimension of the spaces $L_p$ and $l_q$”, Uspekhi Mat. Nauk, 13:6(84) (1958), 95–98 (Russian)  mathnet  mathscinet  zmath
60. M. I. Kadets (Kadec) and B. S. Mityagin, “Complemented subspaces in Banach spaces”, Russian Math. Surveys, 28:6 (1973), 77–95  mathnet  crossref  mathscinet  zmath  adsnasa
61. M. I. Kadec and A. Pełczyński, “Bases, lacunary sequences and complemented subspaces in the spaces $L_{p}$”, Studia Math., 21 (1961/1962), 161–176  crossref  mathscinet  zmath
62. J.-P. Kahane, Some random series of functions, D. C. Heath and Co. Raytheon Education Co., Lexington, MA, 1968, viii+184 pp.  mathscinet  zmath
63. A. Kamińska and L. Maligranda, “Order convexity and concavity in Lorentz spaces $\Lambda_{p,w}$, $0<p<\infty$”, Studia Math., 160:3 (2004), 267–286  crossref  mathscinet  zmath
64. L. V. Kantorovich and G. P. Akilov, Functional analysis, Pergamon Press, Oxford–Elmsford, NY, 1982, xiv+589 pp.  mathscinet  zmath
65. B. S. Kašin, “Diameters of some finite-dimensional sets and classes of smooth functions”, Math. USSR-Izv., 11:2 (1977), 317–333  mathnet  crossref  mathscinet  zmath  adsnasa
66. B. S. Kashin and A. A. Saakyan, Orthogonal series, 2nd ed., Actuarial and Financial Center, Moscow, 1999, x+550 pp.  mathscinet  zmath; English transl. of 1st ed. Transl. Math. Monogr., 75, Amer. Math. Soc., Providence, RI, 1989, xii+451 pp.  crossref  mathscinet  zmath
67. A. Khintchine, “Über dyadische Brüche”, Math. Z., 18:1 (1923), 109–116  crossref  mathscinet  zmath
68. M. A. Krasnosel'skiĭ and Ya. B. Rutickiĭ, Convex functions and Orlicz spaces, P. Noordhoff Ltd., Groningen, 1961, xi+249 pp.  mathscinet  zmath
69. S. G. Kreĭn, Ju. I. Petunin, and E. M. Semenov, Interpolation of linear operators, Transl. Math. Monogr., 54, Amer. Math. Soc., Providence, RI, 1982, xii+375 pp.  mathscinet  zmath
70. V. M. Kruglov, “A note on infinitely divisible distributions”, Theory Probab. Appl., 15:2 (1970), 319–324  mathnet  crossref  mathscinet  zmath
71. S. Kwapień and C. Schütt, “Some combinatorial and probabilistic inequalities and their applications to Banach space theory”, Studia Math., 82:1 (1985), 91–106  crossref  mathscinet  zmath
72. S. Kwapień and W. A. Woyczyński, Random series and stochastic integrals: single and multiple, Probab. Appl., Birkhäuser Boston, Inc., Boston, MA, 1992, xvi+360 pp.  mathscinet  zmath
73. M. Ledoux and M. Talagrand, Probability in Banach spaces. Isoperimetry and processes, Ergeb. Math. Grenzgeb. (3), 23, Springer-Verlag, Berlin, 1991, xii+480 pp.  crossref  mathscinet  zmath
74. J. Lindenstrauss, “On complemented subspaces of $m$”, Israel J. Math., 5 (1967), 153–156  crossref  mathscinet  zmath
75. J. Lindenstrauss and A. Pełczyński, “Absolutely summing operators in ${\mathscr L}_p$ spaces and their applications”, Studia Math., 29 (1968), 275–326  crossref  mathscinet  zmath
76. J. Lindenstrauss and A. Pełczyński, “Contributions to the theory of the classical Banach spaces”, J. Funct. Anal., 8:2 (1971), 225–249  crossref  mathscinet  zmath
77. J. Lindenstrauss and L. Tzafriri, “On the complemented subspaces problem”, Israel J. Math., 9 (1971), 263–269  crossref  mathscinet  zmath
78. J. Lindenstrauss and L. Tzafriri, Classical Banach spaces, v. I, Ergeb. Math. Grenzgeb., 92, Sequence spaces, Springer-Verlag, Berlin–New York, 1977, xiii+188 pp.  mathscinet  zmath
79. J. Lindenstrauss and L. Tzafriri, Classical Banach spaces, v. II, Ergeb. Math. Grenzgeb., 97, Function spaces, Springer-Verlag, Berlin–New York, 1979, x+243 pp.  mathscinet  zmath
80. L. Maligranda, Orlicz spaces and interpolation, Sem. Mat., 5, Univ. Estad. Campinas, Dep. de Matemática, Campinas, SP, 1989, iii+206 pp.  mathscinet  zmath
81. J. Marcinkiewicz and A. Zygmund, “Remarque sur la loi du logarithme itéré”, Fund. Math., 29 (1937), 215–222  crossref  zmath
82. B. S. Mityagin, “The homotopy structure of the linear group of a Banach space”, Russian Math. Surveys, 25:5 (1970), 59–103  mathnet  crossref  mathscinet  zmath  adsnasa
83. S. Montgomery-Smith, “Rearrangement invariant norms of symmetric sequence norms of independent sequences of random variables”, Israel J. Math., 131 (2002), 51–60  crossref  mathscinet  zmath
84. S. Montgomery-Smith and E. Semenov, “Random rearrangements and operators”, Voronezh winter mathematical schools, Amer. Math. Soc. Transl. Ser. 2, 184, Adv. Math. Sci., 37, Amer. Math. Soc., Providence, RI, 1998, 157–183  crossref  mathscinet  zmath
85. P. F. X. Müller, Isomorphisms between $H^1$ spaces, IMPAN Monogr. Mat. (N. S.), 66, Birkhäuser Verlag, Basel, 2005, xiv+453 pp.  mathscinet  zmath
86. R. E. A. C. Paley, “Some theorems on abstract spaces”, Bull. Amer. Math. Soc., 42:4 (1936), 235–240  crossref  mathscinet  zmath
87. A. Pełczyński, “Projections in certain Banach spaces”, Studia Math., 19:2 (1960), 209–228  crossref  mathscinet  zmath
88. A. Pełczyński and H. P. Rosenthal, “Localization techniques in $L^{p}$ spaces”, Studia Math., 52:3 (1975), 263–289  mathscinet  zmath
89. G. Peshkir (Peskir) and A. N. Shiryaev, “The Khintchine inequalities and martingale expanding sphere of their action”, Russian Math. Surveys, 50:5 (1995), 849–904  mathnet  crossref  mathscinet  zmath  adsnasa
90. G. Pisier, Factorization of linear operators and geometry of Banach spaces, CBMS Regional Conf. Ser. in Math., 60, Amer. Math. Soc., Providence, RI, 1986, x+154 pp.  crossref  mathscinet  zmath
91. Yu. V. Prokhorov, “An extremal problem in probability theory”, Theory Probab. Appl., 4:2 (1959), 201–203  mathnet  crossref  mathscinet  zmath
92. M. M. Rao and Z. D. Ren, Theory of Orlicz spaces, Monogr. Textbooks Pure Appl. Math., 146, Marcel Dekker, Inc., New York, 1991, xii+449 pp.  mathscinet  zmath
93. Y. Raynaud and C. Schütt, “Some results on symmetric subspaces of $L^1$”, Studia Math., 89:1 (1988), 27–35  crossref  mathscinet  zmath
94. V. A. Rodin and E. M. Semyonov (Semenov), “Rademacher series in symmetric spaces”, Anal. Math., 1:3 (1975), 207–222  crossref  mathscinet  zmath
95. V. A. Rodin and E. M. Semenov, “Complementability of the subspace generated by the Rademacher system in a symmetric space”, Funct. Anal. Appl., 13:2 (1979), 150–151  mathnet  crossref  mathscinet  zmath
96. H. P. Rosenthal, “On the subspaces of $L^p$ ($p>2$) spanned by sequences of independent random variables”, Israel J. Math., 8 (1970), 273–303  crossref  mathscinet  zmath
97. H. P. Rosenthal, “On subspaces of $L^p$”, Ann. of Math. (2), 97:2 (1973), 344–373  crossref  mathscinet  zmath
98. W. Rudin, “Trigonometric series with gaps”, J. Math. Mech., 9:2 (1960), 203–227  crossref  mathscinet  zmath
99. C. Schütt, “On the embedding of 2-concave Orlicz spaces into $L^1$”, Studia Math., 113:1 (1995), 73–80  crossref  mathscinet  zmath
100. I. Singer, Bases in Banach spaces, v. I, Grundlehren Math. Wiss., 154, Springer-Verlag, New York–Berlin, 1970, viii+668 pp.  mathscinet  zmath
101. L. Tzafriri, “Uniqueness of structure in Banach spaces”, Handbook of the geometry of Banach spaces, v. 2, North-Holland, Amsterdam, 2003, 1635–1669  crossref  mathscinet  zmath
102. N. N. Vakhania, V. I. Tarieladze, and S. A. Chobanyan, Probability distributions on Banach spaces, Math. Appl. (Soviet Ser.), 14, D. Reidel Publishing Co., Dordrecht, 1987, xxvi+482 pp.  crossref  mathscinet  zmath
103. P. Wojtaszczyk, Banach spaces for analysts, Cambridge Stud. Adv. Math., 25, Cambridge Univ. Press, Cambridge, 1991, xiv+382 pp.  crossref  mathscinet  zmath

Citation: S. V. Astashkin, “Sequences of independent functions and structure of rearrangement invariant spaces”, Russian Math. Surveys, 79:3 (2024), 375–457