Random walks conditioned to stay nonnegative and branching processes in an unfavourable environment
V. A. Vatutin$^{a}$, C. Dong$^{b}$, E. E. Dyakonova$^{a}$
$^{a}$ Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
$^{b}$ Xidian University, Xi'an, P. R. China
Abstract:
Let $\{S_n,\,n\geqslant 0\}$ be a random walk with increments that belong (without centering) to the domain of attraction of an $\alpha$-stable law, that is, there exists a process $\{Y_t,\,t\geqslant 0\}$ such that $S_{[nt]}/a_{n}$ $\Rightarrow$ $Y_t$, $t\geqslant 0$, as $n\to\infty$ for some scaling constants $a_n$. Assuming that $S_{0}=o(a_n)$ and $S_n\leqslant \varphi (n)=o(a_n)$, we prove several conditional limit theorems for the distribution of the random variable $S_{n-m}$ given that $m=o(n)$ and $\min_{0\leqslant k\leqslant n}S_k\geqslant 0$. These theorems supplement the assertions established by Caravenna and Chaumont in 2013. Our results are used to study the population size of a critical branching process evolving in an unfavourable environment.
Bibliography: 28 titles.
Keywords:
random walks, stable law, conditional limit theorems, branching processes, unfavourable random environment.
Received: 13.03.2023 and 23.05.2023
§ 1. Introduction

Investigations of the conditional distributions of random walks conditioned to stay positive or nonnegative have a long history (see, for example, [4], [6]–[12], [14], [17], [18], [20] and [25]). The interest in various invariance principles for conditional processes, to which most of these works are devoted, is due not only to the natural development of the theory of random walks, but also to the widespread use of such results in the theory of branching processes evolving in constant and random environments alike (see [2], [19] and [26]), in statistical physics, in particular, in investigations of random polymers (see [13]), and also in other areas.

In this paper we also analyze the properties of random walks conditioned to stay nonnegative. Our research was motivated by the results due to Caravenna and Chaumont [11]. They showed that if the distribution of the step of a random walk belongs (without centering) to the domain of attraction of a stable law, then, provided that this walk stays nonnegative on an interval of length $n$, the distribution of the appropriately normalized excursion of length $n$ generated by this random walk converges to the distribution of the excursion of a stable Lévy process conditioned to stay nonnegative on the interval $[0,1]$. We supplement this result by a study of the behaviour of the appropriately rescaled trajectory of the random walk in a left neighbourhood of the right endpoint of the excursion of length $n$, namely, at the point $n-m$, where $m=o(n)$. It turns out that three different distributions occur in the limit, depending on the rate of convergence of $m/n$ to zero. These three regimes lead to three different rates of growth for the number of particles in critical branching processes evolving in an unfavourable random environment and conditioned to survive by a distant moment $n$. Our assertions for critical branching processes in random environment supplement the corresponding theorems established in [26], [27] and [28], where the distributions of the number of particles at times $m=o(n)$ and $m=[nt]$, $0<t\leqslant 1$, were considered under the condition that the processes survive by time $n$.

The paper is structured as follows. In § 2 we introduce the main concepts, describe our basic conditions imposed on random walks, recall some known local limit theorems, and prove auxiliary assertions for random walks conditioned to stay nonnegative. In § 3 several conditional limit theorems are presented for excursions of a lattice random walk conditioned to stay nonnegative. In § 4 we prove the corresponding conditional limit theorems for excursions of a random walk whose step has an absolutely continuous distribution. In § 5 we establish a conditional limit theorem for an almost surely convergent sequence of random variables. In § 6 we apply our results for random walks conditioned to stay nonnegative to the description of the distribution of the population size of a critical branching process evolving in an unfavourable environment and conditioned to survive by a distant moment of time.
§ 2. Notation and assumptions

For the unification of presentation and convenience of references we adhere mainly to the notation and assumptions introduced in [11]. In what follows we let $C_{1},C_{2},\dots$ denote some absolute constants, which do not necessarily coincide in distinct formulae. We introduce the sets $\mathbb{N}:=\{1,2,\dots\} $ and $\mathbb{N}_{0}:=\mathbb{N} \cup \{0\}$. Given two positive sequences $\{c_{n},\,n\in \mathbb{N}\} $ and $\{d_{n},\,n\in\mathbb{N}\}$, we write, as usual, $c_{n}\sim d_{n}$ if $c_{n}/d_{n}\to 1$, $c_{n}=o(d_{n})$ if $c_{n}/d_{n}\to 0$, and $c_{n}\asymp d_{n}$ if the ratio $c_{n}/d_{n}$ is bounded away from zero and infinity as $n\to\infty$.

Recall that a positive sequence $\{c_{n},\,n\in \mathbb{N}\} $ (or a real function $c(x)$) is called regularly varying at infinity with index $\gamma \in \mathbb{R}$, which is denoted by $c_{n}\in R_{\gamma}$ or $c(x)\in R_{\gamma}$, respectively, if $c_{n}\sim n^{\gamma}l(n) $ (respectively, $c(x)\sim x^{\gamma}l(x)$), where $l(x)$ is a slowly varying function, that is, a positive function such that $l(cx)/l(x)\to 1$ as ${x\to \infty}$ for any fixed $c>0$. Let
$$
\begin{equation*}
\begin{aligned} \, \mathcal{A} &:=\{0<\alpha <1,\, |\beta |<1\}\cup \{1<\alpha <2,\, |\beta |<1\} \\ &\qquad\cup \{\alpha =1,\, \beta=0\}\cup \{\alpha =2,\, \beta =0\} \end{aligned}
\end{equation*}
\notag
$$
be a subset of the space $\mathbb{R}^{2}$. Given a pair $(\alpha,\beta)\in \mathcal{A}$ and a random variable $X$, we write $X\in \mathcal{D}(\alpha,\beta)$ if the distribution of $X$ belongs (without centering) to the domain of attraction of a stable law with density $g(x)=g_{\alpha,\beta }(x)$, $x\in (-\infty,+\infty)$, and the characteristic function is
$$
\begin{equation*}
G(w)=\int_{-\infty}^{+\infty}e^{iwx}g(x)\,dx =\exp\biggl\{-c|w|^{\alpha}\biggl(1-i\beta \frac{w}{|w|}\tan \frac{\pi \alpha}{2}\biggr)\biggr\}, \qquad c>0.
\end{equation*}
\notag
$$
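For example, for $\alpha =2$ and $\beta =0$ we have $\tan (\pi \alpha/2)=0$, so the formula above reduces to
$$
\begin{equation*}
G(w)=e^{-cw^{2}},
\end{equation*}
\notag
$$
that is, $g_{2,0}$ is the density of a normal law with mean $0$ and variance $2c$.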
We consider a random walk
$$
\begin{equation*}
S_{0}=0, \qquad S_{n}=X_{1}+\dots+X_{n}, \quad n\geqslant 1,
\end{equation*}
\notag
$$
with independent identically distributed increments. Throughout this section we assume that the random walk $(\mathcal{S}=\{S_{n},\,n\in \mathbb{N}_{0}\},\,\mathbb{P})$ satisfies the following conditions.

Condition A1. The increments $X_{n}$, $n=1,2,\dots$, of $\mathcal{S}$ belong to $\mathcal D(\alpha,\beta)$. This means, in particular, that there is an increasing sequence of positive numbers
$$
\begin{equation*}
a_n:=n^{1/\alpha}\ell (n), \qquad n=1,2,\dots,
\end{equation*}
\notag
$$
where the sequence $\ell (1),\ell (2),\dots $ varies slowly at infinity, such that
$$
\begin{equation}
\mathcal{L}\biggl\{\frac{S_{[nt]}}{a_n},\,t\geqslant 0\biggr\} \quad\Longrightarrow\quad \mathcal{L}\{Y_t,\,t\geqslant 0\}
\end{equation}
\tag{2.1}
$$
as $n\to \infty $; the symbol $\Longrightarrow $ denotes convergence in distribution in the space $D[0,+\infty)$ with the Skorokhod topology, and the process $(\mathcal{Y}=\{Y_t,\,t\geqslant 0\},\,\mathbf{P})$ is strictly stable, that is, has stationary independent increments and marginal distributions described by the characteristic functions
$$
\begin{equation*}
\mathbf{E}e^{iwY_{t}}=G_{\alpha,\beta}(wt^{1/\alpha}), \qquad t\geqslant 0.
\end{equation*}
\notag
$$
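A direct consequence of this formula is the self-similarity relation
$$
\begin{equation*}
Y_{t}\stackrel{d}{=}t^{1/\alpha}Y_{1}, \qquad t\geqslant 0,
\end{equation*}
\notag
$$
which, together with the representation $a_{n}=n^{1/\alpha}\ell(n)$, explains, in particular, the relation $a_{\lambda m}\sim \lambda^{1/\alpha}a_{m}$ used in § 5.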
We know (see, for example, [5], p. 380) that if $X_{n}\stackrel{d}{=}X\in \mathcal{D}( \alpha,\beta) $ for all $n\in \mathbb{N}$, then the limit
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{P}(S_{n}>0) =\rho =\mathbf{P}(Y_{1}>0)
\end{equation*}
\notag
$$
exists, where
$$
\begin{equation*}
\rho =\frac{1}{2}+\frac{1}{\pi \alpha}\arctan\biggl(\beta \tan \frac{\pi \alpha}{2}\biggr).
\end{equation*}
\notag
$$
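For example, if $\beta =0$ (in particular, in the Brownian case $\alpha =2$, $\beta =0$), then
$$
\begin{equation*}
\rho =\frac{1}{2}+\frac{1}{\pi \alpha}\arctan 0=\frac{1}{2};
\end{equation*}
\notag
$$
moreover, for every pair $(\alpha,\beta)\in \mathcal{A}$ we have $\rho =\mathbf{P}(Y_{1}>0)\in (0,1)$, a property used repeatedly in what follows.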
Let $\Omega^{\mathrm{RW}}:=\mathbb{R}^{\mathbb{N}_{0}}$ be the space of right-continuous functions with jumps at integer points on the nonnegative half-axis, and let $\Omega :=D([0,\infty),\mathbb{R}) $ be the space of right-continuous real functions on $[0,\infty)$ that have finite limits from the left. The space $\Omega $ endowed with the Skorokhod topology becomes a Polish space with the corresponding $\sigma$-algebra of Borel sets. We also set $\Omega_{N}^{\mathrm{RW}}:=\mathbb{R}^{\{0,1,\dots,N\}}$ and $\Omega_t:=D([0,t],\mathbb{R})$ for $t\in (0,\infty)$. We let $\mathbb{P}_x$ denote the law of the random walk starting at the point $x\in \mathbb{R}$, that is, the probability law on $\Omega^{\mathrm{RW}}$ generated by the random walk $\mathcal{S}+x$, where $\mathbb{P}$ is the law of the random walk $\mathcal{S}$ starting at zero; given a process $\mathcal{Y}=\{Y_t,\,t\geqslant 0\} $ with distribution $\mathbf{P}$ on $\Omega $, we let $\mathbf{P}_a$ denote the distribution of the shifted process $\mathcal{Y}+a$ for $a\in \mathbb{R}$.

Condition A2. One of the following constraints holds.

$\bullet$ (The $(h;c)$-lattice case.) The measure $\mathbb{P}$ of the increment $X_1$ of the random walk is concentrated on the lattice $c+h\mathbb{Z}$, where the step $h>0$ is chosen to be maximum possible (that is, the distribution of $X_1$ is not concentrated on any lattice $c_{0}+h_{0}\mathbb{Z}$ for $h_{0}>h$ and $c_{0}\in \mathbb{R}$). Note that we can take $c\in [0,h)$.

$\bullet$ (The absolutely continuous case.) The measure $\mathbb{P}$ of the increment $X_1$ of the random walk is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$, and there exists $n\in \mathbb N$ such that the density $f_{n}(x):=\mathbb{P}(S_{n}\in dx)/dx$ of the random variable $S_{n}$ is bounded (and therefore $f_{n}(x)\in L^{\infty}$).

We set
$$
\begin{equation*}
L_{N}:=\min_{1\leqslant k\leqslant N}S_{k}.
\end{equation*}
\notag
$$
The law of the bridge of length $N\in \mathbb{N}$ of the random walk conditioned to stay nonnegative, starting at the point $x\in [0,\infty)$ and ending at $y\in [0,\infty)$, is the probability law on the set $\Omega_{N}^{\mathrm{RW}}$ defined by
$$
\begin{equation*}
\mathbb{P}_{x,y}^{\uparrow,N}(\,\cdot\,) :=\mathbb{P}_{x}(\cdot \mid L_{N}\geqslant 0,\,S_{N}=y).
\end{equation*}
\notag
$$
Clearly, in the lattice case, for the law $\mathbb{P}_{x,y}^{\uparrow,N}(\,\cdot\,)$ to be well defined, we need to assume that
$$
\begin{equation*}
q_{N}^{+}(x,y) :=\mathbb{P}_{x}(L_{N}\geqslant 0,\,S_{N}=y) >0.
\end{equation*}
\notag
$$
Similarly, in the absolutely continuous case we need to ensure the positivity of the function
$$
\begin{equation}
\begin{aligned} \, \notag &f_{N}^{+}(x,y) :=\frac{\mathbb{P}_{x}(L_{N-1}>0,\,S_{N}\in dy)}{dy} \\ &\quad=\int_{\mathcal{K}(N-1)}\biggl[f(s_{1}-x)\biggl( \prod_{i=2}^{N-1}f(s_{i}-s_{i-1})\biggr) f(y-s_{N-1})\biggr]ds_{1}\dotsb ds_{N-1}, \end{aligned}
\end{equation}
\tag{2.2}
$$
where $\mathcal{K}(N-1):=\{s_{1}>0,\dots,s_{N-1}>0\} $ is the domain of integration and $f(\,\cdot\,)=f_{1}(\,\cdot\,)$ is the density of the random walk increment $X_1$. For $t\in [0,\infty)$ and $a,b\in [0,\infty)$ we let $\mathbf{P}_{a,b}^{\uparrow,t}$ denote the law on the set $\Omega_{t}$ corresponding to the bridge of length $t$ of the Lévy process conditioned to stay nonnegative, starting at $a$ and ending at $b$, which can informally be defined by
$$
\begin{equation*}
\mathbf{P}_{a,b}^{\uparrow,t}(\,\cdot\,) :=\mathbf{P}_{a}( \cdot \mid Y_{s}\geqslant 0\ \forall\, s\in [0,t],\,Y_{t}=b)
\end{equation*}
\notag
$$
(see § 6.1 in [11], where it is explained in detail that such a law is well defined). In the case when $\alpha =2$ and $\rho =1/2$, that is, when the process $\mathcal{Y}$ is a standard Brownian motion, the law $\mathbf{P}_{0,0}^{\uparrow,1}(\,\cdot\,) $ specifies the distribution of the Brownian excursion. For $N\in \mathbb{N}$ we define the rescaling map $\phi_{N}\colon \Omega _{N}^{\mathrm{RW}}\to \Omega_{1}$ by
$$
\begin{equation*}
(\phi_{N}(\mathcal{S})) (t):=\frac{S_{[Nt]}}{a_{N}},
\end{equation*}
\notag
$$
where $\{a_{N},\,N\in \mathbb{N}\} $ is the scaling sequence appearing in Condition A1. Using this definition, we let $\mathbb{P}_{x,y}^{\uparrow,N}\circ \phi_{N}^{-1}$ denote the law induced on the set $\Omega_{1}:=D([0,1],\mathbb{R}) $ via $\mathbb{P}_{x,y}^{\uparrow,N}$ and $\phi_{N}$. The following important theorem was established in [11].

Theorem 1. Let $a,b\in [0,\infty)$, and let $\{x_{N},\,N\in \mathbb{N}\} $ and $\{y_{N},\,N\in \mathbb{N}\} $ be two nonnegative sequences such that $x_{N}/a_{N}\to a$ and $y_{N}/a_{N}\to b$ (in the $(h;c)$-lattice case we assume that $(y_{N}-x_{N}) \in Nc+h\mathbb{Z}$ for all $N\in \mathbb{N}$). If Conditions A1 and A2 hold, then
$$
\begin{equation*}
\mathbb{P}_{x_{N},y_{N}}^{\uparrow,N}\circ \phi_{N}^{-1} \quad\Longrightarrow\quad \mathbf{P}_{a,b}^{\uparrow,1}
\end{equation*}
\notag
$$
as $N\to \infty $. This theorem implies that if $a=b=0$, $X_{1}$ has a lattice distribution and $m=m(N)=o(N)$ as $N\to \infty $, then
$$
\begin{equation*}
\lim_{N\to \infty}\mathbb{P}_{x_{N}}\biggl(\frac{S_{N-m}}{a_{N}}\geqslant z\biggm|L_{N}\geqslant 0,\, S_{N}=y_{N}\biggr) =0
\end{equation*}
\notag
$$
for any $z>0$. Therefore, $S_{N-m}/a_{N}\to 0$ in probability as $N\to \infty $; thus, Theorem 1 gives little information about the distribution of the random variable $S_{N-m}$ in this situation. In this section we indicate a centering and a rescaling of the random variable $S_{N-m}$ that ensure the convergence of the transformed random variable to a proper nondegenerate distribution. It turns out that the form of the limit distribution depends substantially on the rate of change of the parameter $y_{N}$. In addition, to cover the absolutely continuous case as well, we will show that
$$
\begin{equation*}
\lim_{N\to \infty}\mathbb{P}_{x_{N}}\biggl( \frac{S_{N-m}-S_{N}}{a_{m}}\leqslant z\biggm|L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}\biggr) =A(z),
\end{equation*}
\notag
$$
where $A(z)$ is a proper nondegenerate distribution, the form of which is different in the cases when $y_{N}/a_{m}\to 0$, $y_{N}/a_{m}\to T\in (0,\infty)$ and $y_{N}/a_{m}\to \infty $. To prove the results stated we need some new notation; here and in what follows the infimum over the empty set is set equal to $+\infty$. We set $S_{0}:=0$ and $\tau _{0}^{\pm}:=0$; for $k\geqslant 1$ we let
$$
\begin{equation*}
\tau_{k}^{-}:=\inf \bigl\{n>\tau_{k-1}^{-}\colon S_{n}\leqslant S_{\tau_{k-1}^{-}}\bigr\}
\end{equation*}
\notag
$$
denote the weak lower ladder moments and
$$
\begin{equation*}
\tau_{k}^{+}:=\inf \bigl\{n>\tau_{k-1}^{+}\colon S_{n}\geqslant S_{\tau_{k-1}^{+}}\bigr\}
\end{equation*}
\notag
$$
denote the weak upper ladder moments. We set
$$
\begin{equation*}
H_{k}^{\pm}:=\pm S_{\tau_{k}^{\pm}}
\end{equation*}
\notag
$$
and introduce the parameter
$$
\begin{equation*}
\begin{aligned} \, \zeta &:=\mathbb{P}(H_{1}^{+}=0) =\sum_{n=1}^{\infty}\mathbb{P}(S_{1}<0,\,\dots,\,S_{n-1}<0,\,S_{n}=0) \\ &\,=\sum_{n=1}^{\infty}\mathbb{P}(S_{1}>0,\,\dots,\,S_{n-1}>0,\,S_{n}=0) =\mathbb{P}(H_{1}^{-}=0) \in (0,1). \end{aligned}
\end{equation*}
\notag
$$
To verify the third equality we must use the relation
$$
\begin{equation*}
\{S_{n}-S_{n-k},\,k=0,1,\dots,n\} \stackrel{d}{=}\{S_{k},\,k=0,1,\dots,n\}.
\end{equation*}
\notag
$$
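In more detail, set $\widetilde{S}_{k}:=S_{n}-S_{n-k}$, $k=0,1,\dots,n$; on the event $\{S_{n}=0\}$ we have $\widetilde{S}_{k}=-S_{n-k}$, and therefore
$$
\begin{equation*}
\begin{aligned} \, \mathbb{P}(S_{1}>0,\,\dots,\,S_{n-1}>0,\,S_{n}=0) &=\mathbb{P}(\widetilde{S}_{1}>0,\,\dots,\,\widetilde{S}_{n-1}>0,\,\widetilde{S}_{n}=0) \\ &=\mathbb{P}(S_{1}<0,\,\dots,\,S_{n-1}<0,\,S_{n}=0), \end{aligned}
\end{equation*}
\notag
$$
where the first equality holds by the duality relation above and the second because the two events coincide on $\{S_{n}=0\}$; summation over $n\geqslant 1$ yields the third equality in the definition of $\zeta$.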
For $x\geqslant 0$ we introduce the renewal functions
$$
\begin{equation*}
V^{\pm}(x):=\sum_{k=0}^{\infty}\mathbb{P}(H_{k}^{\pm}\leqslant x) =\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\mathbb{P}(\tau_{k}^{\pm}=n,\,\pm S_{n}\leqslant x).
\end{equation*}
\notag
$$
Note that $V^{\pm}(x)$ are right-continuous nondecreasing functions and
$$
\begin{equation}
V^{\pm}(0)=\sum_{k=0}^{\infty}\mathbb{P}(H_{k}^{\pm}=0) =\frac{1}{1-\zeta}.
\end{equation}
\tag{2.3}
$$
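Relation (2.3) can be seen as follows: since the ladder epochs $\tau_{k}^{\pm}$ are proper (see the next paragraph), by the strong Markov property the event $\{H_{k}^{\pm}=0\}$ requires $k$ successive weak ladder increments to vanish, each with probability $\zeta$ independently of the others, so that
$$
\begin{equation*}
\mathbb{P}(H_{k}^{\pm}=0)=\zeta^{k}, \qquad k\in \mathbb{N}_{0}, \quad\text{and}\quad V^{\pm}(0)=\sum_{k=0}^{\infty}\zeta^{k}=\frac{1}{1-\zeta}.
\end{equation*}
\notag
$$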
Furthermore, if Condition A1 holds, then $\lim_{n\to\infty}\mathbb{P}(S_n>0)=\rho\in (0,1)$; therefore, the random variables $\tau_1^+$ and $\tau_1^-$ are proper and (see, for example, [21] or [23]) we have
$$
\begin{equation}
\mathbb{P}(\tau_{1}^{+}>n) \in R_{-\rho}, \qquad V^{+}(x)\in R_{\alpha \rho},
\end{equation}
\tag{2.4}
$$
$$
\begin{equation}
\mathbb{P}(\tau_{1}^{-}>n) \in R_{-(1-\rho)}\quad\text{and} \quad V^{-}(x)\in R_{\alpha (1-\rho)}.
\end{equation}
\tag{2.5}
$$
Note that by [25], relations (15) and (31), and [11], relation (3.18), there are positive constants $\widehat{C}$, $\mathcal{C}^{+}$, and $\mathcal{C}^{-}$ such that
$$
\begin{equation*}
\begin{gathered} \, n\,\mathbb{P}(\tau_{1}^{-}>n) \,\mathbb{P}(\tau_{1}^{+}>n) \sim \widehat{C}, \\ V^{+}(a_{n})\sim \frac{\mathcal{C}^{+}}{1-\zeta}\, n\, \mathbb{P}(\tau_{1}^{-}>n)\quad\text{and} \quad V^{-}(a_{n})\sim \frac{\mathcal{C}^{-}}{1-\zeta}\, n\, \mathbb{P}(\tau_{1}^{+}>n) \end{gathered}
\end{equation*}
\notag
$$
as $n\to \infty $; consequently,
$$
\begin{equation}
\mathbb{P}(\tau_{1}^{+}>n) V^{+}(a_{n})\sim C^{\ast }:=\frac{\mathcal{C}^{+}\widehat{C}}{1-\zeta}\in (0,\infty),
\end{equation}
\tag{2.6}
$$
$$
\begin{equation}
\mathbb{P}(\tau_{1}^{-}>n) V^{-}(a_{n})\sim C^{\ast \ast }:=\frac{\mathcal{C}^{-}\widehat{C}}{1-\zeta}\in (0,\infty)
\end{equation}
\tag{2.7}
$$
and
$$
\begin{equation}
V^{+}(a_{n})V^{-}(a_{n})\sim nC^{\ast \ast \ast }:=\frac{\widehat{C}\mathcal{C}^{+}\mathcal{C}^{-}}{(1-\zeta)^{2}}n.
\end{equation}
\tag{2.8}
$$
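In more detail, (2.6) and (2.8) are obtained by multiplying the asymptotic relations displayed above:
$$
\begin{equation*}
\begin{aligned} \, \mathbb{P}(\tau_{1}^{+}>n) V^{+}(a_{n}) &\sim \frac{\mathcal{C}^{+}}{1-\zeta}\, n\,\mathbb{P}(\tau_{1}^{-}>n)\, \mathbb{P}(\tau_{1}^{+}>n) \sim \frac{\mathcal{C}^{+}\widehat{C}}{1-\zeta}, \\ V^{+}(a_{n})V^{-}(a_{n}) &\sim \frac{\mathcal{C}^{+}\mathcal{C}^{-}}{(1-\zeta)^{2}}\, n^{2}\,\mathbb{P}(\tau_{1}^{-}>n)\, \mathbb{P}(\tau_{1}^{+}>n) \sim \frac{\widehat{C}\mathcal{C}^{+}\mathcal{C}^{-}}{(1-\zeta)^{2}}\, n; \end{aligned}
\end{equation*}
\notag
$$
relation (2.7) is obtained in the same way.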
For $x\geqslant 0$ we introduce the left-continuous renewal functions
$$
\begin{equation*}
\underline{{V}}^{\pm}(x):=\sum_{k=0}^{\infty}\mathbb{P}(H_{k}^{\pm}<x) =\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\mathbb{P}(\tau_{k}^{\pm}=n,\, \pm S_{n}<x).
\end{equation*}
\notag
$$
If the distribution of the random variable $X_{1}$ is absolutely continuous, then
$$
\begin{equation*}
\underline{{V}}^{\pm}(x)=V^{\pm}(x).
\end{equation*}
\notag
$$
We will use this fact repeatedly when referring to the results obtained in [14] and [25].

In what follows we need various functions of the form $T(x_{N},y_{N})$ depending on the real parameters $x_{N}$ and $y_{N}$. Given a positive sequence $a_{n}$, $n=1,2,\dots$, for brevity we will write “$T(x_{N},y_{N})=o(1)$ or $T(x_{N},y_{N})\sim c>0$ uniformly with respect to $x_{N}=o(a_{N})$, $y_{N}=o(a_{m})$ as $\min(m,N)\to \infty$”, to indicate that “$T(x_{N},y_{N})=o(1)$ or $T(x_{N},y_{N})\sim c>0$ uniformly with respect to $x_{N}\in (0,\delta_{N}a_{N}]$, $y_{N}\in (0,\delta_{N}a_{m}]$ for any positive sequence $\delta_{N}\to 0$ as $\min(m,N)\to \infty$”.

To simplify the presentation below we recall several local limit theorems established in [11], § 4, which are valid in the lattice case. Let $g(\,\cdot\,)$ be the density of the random variable $Y_{1}$ and let $g^{+}(\,\cdot\,)$ be the density of the meander of the Lévy process $\mathcal{Y}$ at time $t=1$ (see [8]):
$$
\begin{equation*}
\int_{0}^{y}g^{+}(x)\,dx=\mathbf{P}_{0} \Bigl(Y_{1}\leqslant y\Bigm|\inf_{0\leqslant s\leqslant 1}Y_{s}\geqslant0\Bigr), \qquad y\geqslant 0
\end{equation*}
\notag
$$
(see Lemma 4 in [12], where the properties of this density were described in detail). Similarly, let $g^{-}(\,\cdot\,)$ be the density of the meander of the Lévy process $-\mathcal{Y}$ at time $t=1$. Finally, for $a,b\in [0,\infty)$ we introduce the function
$$
\begin{equation*}
C(a,b):=\mathbf{P}_{a}\Bigl(\inf_{0\leqslant s\leqslant 1}Y_{s}\geqslant 0\Bigm|Y_{1}=b\Bigr).
\end{equation*}
\notag
$$
Lemma 1. Assume that Conditions A1 and A2 (in the $(h;c)$-lattice case) are satisfied. Then for $x,y\geqslant 0$, where $(y-x) \in nc+h\mathbb{Z}$, the following relations hold for $g_{n}^{+}(x,y):=\mathbb{P}_{x}(L_{n}\geqslant 0,\,S_{n}=y)=q_{n}^{+}(x,y)$ as ${n\to \infty}$: (1) uniformly with respect to $x=o(a_{n})$ and $y\geqslant 0$,
$$
\begin{equation}
g_{n}^{+}(x,y)=\frac{h\mathbb{P}(\tau_{1}^{-}>n) }{a_{n}}V^{-}(x)\biggl(g^{+}\biggl(\frac{y}{a_{n}}\biggr) +o(1)\biggr) ;
\end{equation}
\tag{2.9}
$$
(2) uniformly with respect to $y=o(a_{n})$ and $x\geqslant 0$,
$$
\begin{equation}
g_{n}^{+}(x,y) =\frac{h\mathbb{P}(\tau_{1}^{+}>n) }{a_{n}}V^{+}(y)\biggl(g^{-}\biggl(\frac{x}{a_{n}}\biggr) +o(1)\biggr) ;
\end{equation}
\tag{2.10}
$$
(3) uniformly with respect to $y=o(a_{n})$ and $x=o(a_{n})$,
$$
\begin{equation}
g_{n}^{+}(x,y) =h(1-\zeta )\frac{g(0)}{na_{n}}V^{-}(x)V^{+}(y)(1+o(1)) ;
\end{equation}
\tag{2.11}
$$
(4) for any $T>1$, uniformly with respect to $x,y\in (T^{-1}a_{n}, Ta_{n}) $,
$$
\begin{equation}
g_{n}^{+}(x,y) =\frac{h}{a_{n}}g\biggl(\frac{y-x}{a_{n}}\biggr) C\biggl(\frac{x}{a_{n}},\frac{y}{a_{n}}\biggr) (1+o(1)).
\end{equation}
\tag{2.12}
$$
Note that (2.12) is a consequence of Liggett’s invariance principle for bridges (see [20]) and Gnedenko’s local theorem (see [16]). The renewal functions introduced above were defined in terms of characteristics of the weak ladder moments of random walks. It is worth noting that both the strict ladder moments $\{\widehat{\tau}_{k}^{\pm},\,k\geqslant 0\} $ and the quantities $\{\widehat{H}_{k}^{\pm},\,k\geqslant 0\} $ defined by the relations $\widehat{\tau}_{0}^{\pm}:=0$, $\widehat{H}_{0}^{\pm}:=0$ and
$$
\begin{equation*}
\widehat{\tau}_{k}^{\pm}:=\inf \{n>\widehat{\tau}_{k-1}^{\pm}\colon \pm S_{n}>\pm S_{\widehat{\tau}_{k-1}^{\pm}}\}\quad\text{and} \quad \widehat{H}_{k}^{\pm}:=\pm S_{\widehat{\tau}_{k}^{\pm}}
\end{equation*}
\notag
$$
for $k\geqslant 1$ are also often used to study the properties of random walks and branching processes in a random environment (see, for example, [1], [3], [14], [25] and [27]). The sequences $\{\widehat{H}_{k}^{\pm},\,k\geqslant 0\} $ generate the renewal functions
$$
\begin{equation*}
\widehat{V}^{\pm}(x) :=\sum_{k=0}^{\infty}\mathbb{P}(\widehat{H}_{k}^{\pm}\leqslant x) =\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\mathbb{P}(\widehat{\tau}_{k}^{\pm}=n,\,\pm S_{n}\leqslant x)
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\underline{\widehat{V}}^{\pm}(x) :=\sum_{k=0}^{\infty}\mathbb{P}(\widehat{H}_{k}^{\pm}<x) =\sum_{k=0}^{\infty}\sum_{n=0}^{\infty }\mathbb{P}(\widehat{\tau}_{k}^{\pm}=n,\,\pm S_{n}<x).
\end{equation*}
\notag
$$
It is known (see [15], § XII.1, (1.13)) that
$$
\begin{equation}
\widehat{V}^{\pm}(x)=(1-\zeta) V^{\pm}(x)\quad\text{and} \quad \underline{\widehat{V}}^{\pm}(x)=(1-\zeta) \underline{{V}}^{\pm}(x).
\end{equation}
\tag{2.13}
$$
In addition, it is true that (see [11], Appendix 3)
$$
\begin{equation*}
\mathbb{P}(\widehat{\tau}_{1}^{-}>n) \sim \frac{1}{1-\zeta}\, \mathbb{P}(\tau_{1}^{-}>n);
\end{equation*}
\notag
$$
due to symmetry,
$$
\begin{equation}
\mathbb{P}(\widehat{\tau}_{1}^{+}>n) \sim \frac{1}{1-\zeta}\, \mathbb{P}(\tau_{1}^{+}>n).
\end{equation}
\tag{2.14}
$$
In the absolutely continuous case results similar to Lemma 1 were also deduced in [11]. Lemma 2. Assume that Conditions A1 and A2 (in the absolutely continuous case) are satisfied. Then the following relations hold for $x,y\geqslant 0$ as $n\to \infty$: (1) uniformly with respect to $x=o(a_{n})$ and $y\geqslant 0$,
$$
\begin{equation}
f_{n}^{+}(x,y) =\frac{\mathbb{P}(\tau_{1}^{-}>n) }{a_{n}}V^{-}(x)\biggl(g^{+}\biggl(\frac{y}{a_{n}}\biggr) +o(1)\biggr) ;
\end{equation}
\tag{2.15}
$$
(2) uniformly with respect to $y=o(a_{n})$ and $x\geqslant 0$,
$$
\begin{equation}
f_{n}^{+}(x,y) =\frac{\mathbb{P}(\tau_{1}^{+}>n) }{a_{n}}V^{+}(y)\biggl(g^{-}\biggl(\frac{x}{a_{n}}\biggr) +o(1)\biggr) ;
\end{equation}
\tag{2.16}
$$
(3) uniformly with respect to $y=o(a_{n})$ and $x=o(a_{n})$,
$$
\begin{equation}
f_{n}^{+}(x,y) =\frac{g(0)}{na_{n}}V^{-}(x)V^{+}(y)( 1+o(1)) ;
\end{equation}
\tag{2.17}
$$
(4) for any $T>1$, uniformly with respect to $x,y\in (T^{-1}a_{n},Ta_{n}) $,
$$
\begin{equation}
f_{n}^{+}(x,y) =\frac{1}{a_{n}}g\biggl(\frac{y-x}{a_{n}}\biggr) C\biggl(\frac{x}{a_{n}},\frac{y}{a_{n}}\biggr) (1+o(1)).
\end{equation}
\tag{2.18}
$$
Relation (2.16) was not explicitly indicated in [11]. However, it is straightforward to derive it from (2.15) and (2.2) using symmetry, or, more precisely, by considering the random walk $-\mathcal{S}$ instead of $\mathcal{S}$, swapping $x$ and $y$, and replacing each symbol ‘$+$’ by ‘$-$’. Relation (2.18) is a consequence of Liggett’s invariance principle for bridges (see [20]) and the local theorem proved by Stone in [24]. Now we clarify the value of $C^{\ast}$ in (2.6). Lemma 3. Assume that Condition A1 is satisfied. Then
$$
\begin{equation*}
\frac{1}{C^{\ast}}=\int_{0}^{\infty}w^{\alpha \rho}g^{-}(w)\,dw.
\end{equation*}
\notag
$$
Remark 1. Note that the value of the constant $C^{\ast}$ is not universal in the following sense. If we replace the scaling sequence by $\overline{a}_{n}=ca_{n}$ for some $c>0$, then
$$
\begin{equation*}
\frac{S_{n}}{\overline{a}_{n}} \quad\Longrightarrow\quad \overline{Y}_{1}:=\frac{Y_{1}}{c},
\end{equation*}
\notag
$$
and therefore
$$
\begin{equation*}
\begin{aligned} \, \overline{g}^{-}(w)\, dw &=\mathbf{P}_{0}\Bigl(-\overline{Y}_{1}\in dw\Bigm| \inf_{0\leqslant s\leqslant 1}(-\overline{Y}_{s})\geqslant 0\Bigr) \\ &=\mathbf{P}_{0}\biggl(-\frac{Y_{1}}c\in dw\biggm|\inf_{0\leqslant s\leqslant 1}(-Y_{s})\geqslant 0\biggr) =cg^{-}(cw)\, dw. \end{aligned}
\end{equation*}
\notag
$$
Hence
$$
\begin{equation*}
\int_{0}^{\infty}w^{\alpha \rho}\overline{g}^{-}(w)\,dw =c\int_{0}^{\infty}w^{\alpha \rho}g^{-}(cw)\,dw =\frac{1}{c^{\alpha \rho}}\int_{0}^{\infty}w^{\alpha \rho}g^{-}(w)\,dw =\frac{1}{c^{\alpha \rho}C^{\ast}}
\end{equation*}
\notag
$$
and $C^{\ast}$ must be replaced by $c^{\alpha \rho}C^{\ast}$. Proof of Lemma 3. According to (2.7), (2.13), and (2.14) we have
$$
\begin{equation*}
C^{\ast \ast}=\lim_{n\to \infty}V^{-}(a_{n})\mathbb{P}(\tau_{1}^{-}>n) =\lim_{n\to \infty }\widehat{V}^{-}(a_{n})\mathbb{P}(\widehat{\tau}_{1}^{-}>n).
\end{equation*}
\notag
$$
Furthermore, let $U(w):=C^{\ast \ast}w^{\alpha (1-\rho)},w\geqslant 0$. It follows from the proof of Theorem 1.1 in [10] (see the reasoning between (3.11) and (3.12) and the definition (3.1) there) that
$$
\begin{equation*}
\mathbf{E}\Bigl[U(Y_{1})\Bigm|\inf_{0\leqslant s\leqslant 1}Y_{s}\geqslant 0\Bigr] =C^{\ast\ast}\int_{0}^{\infty}w^{\alpha (1-\rho)}g^{+}(w)\,dw=1.
\end{equation*}
\notag
$$
Considering the distribution of the meander of the process $-\mathcal{Y}$ at $t=1$ and noting that the positivity parameter of the process $-\mathcal{Y}$ is $1-\rho $, we can easily derive from this fact using symmetry considerations that
$$
\begin{equation*}
C^{\ast}=\lim_{n\to \infty}V^{+}(a_{n})\mathbb{P}(\tau_{1}^{+}>n) =\lim_{n\to \infty}\widehat{V}^{+}(a_{n})\mathbb{P}(\widehat{\tau}_{1}^{+}>n).
\end{equation*}
\notag
$$
Introducing the function $\overline{U}(w):=C^{\ast}w^{\alpha \rho}$ we arrive at the relation
$$
\begin{equation*}
\mathbf{E}\Bigl[\overline{U}(-Y_{1})\Bigm|\inf_{0\leqslant s\leqslant 1}(-Y_{s})\geqslant 0\Bigr] =C^{\ast}\int_{0}^{\infty}w^{\alpha \rho}g^{-}(w)\,dw=1.
\end{equation*}
\notag
$$
The lemma is proved. We set
$$
\begin{equation}
b_{n}:=\frac{1}{na_{n}}=\frac{1}{n^{1+1/\alpha}\ell (n)}.
\end{equation}
\tag{2.19}
$$
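Two elementary consequences of this definition are used repeatedly below:
$$
\begin{equation*}
na_{n}b_{n}=1, \qquad n\in \mathbb{N}, \quad\text{and}\quad b_{N-m}\sim b_{N} \quad \text{as } N\to\infty \text{ if } m=o(N),
\end{equation*}
\notag
$$
the second relation holding because $b_{n}\in R_{-(1+1/\alpha)}$.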
From [3], Proposition 2.3, and (2.13) it is straightforward to infer the following assertion, which supplements Lemmas 1 and 2. Lemma 4. Assume that Condition A1 is satisfied. Then there exists $C>0$ such that
$$
\begin{equation}
\mathbb{P}_{x}(y-1\leqslant S_{n}<y,\,L_{n}\geqslant 0) \leqslant Cb_{n}V^{-}(x)V^{+}(y)
\end{equation}
\tag{2.20}
$$
uniformly with respect to all $x,y\geqslant 0$ and $n\in \mathbb{N}$, which yields the estimate
$$
\begin{equation}
\mathbb{P}_{x}(S_{n}<y,\,L_{n}\geqslant 0) \leqslant Cb_{n}V^{-}(x)\sum_{z=0}^{y}V^{+}(z)
\end{equation}
\tag{2.21}
$$
in the $(h;0)$-lattice case and the inequality
$$
\begin{equation}
\mathbb{P}_{x}(S_{n}<y,\,L_{n}\geqslant 0) \leqslant Cb_{n}V^{-}(x)\int_{0}^{y}V^{+}(z)\,dz
\end{equation}
\tag{2.22}
$$
in the absolutely continuous case. Proof. It was shown in [3], Proposition 2.3, that if Condition A1 is satisfied, then there exists $C>0$ such that
$$
\begin{equation*}
\mathbb{P}_{x}(y-1\leqslant S_{n}<y,\,L_{n}\geqslant 0) \leqslant Cb_{n}\widehat{V}^{-}(x)\underline{\widehat{V}}^{+}(y)
\end{equation*}
\notag
$$
uniformly with respect to all $x,y\geqslant 0$ and all $n$. Now relation (2.20) follows from (2.13) and the inequalities
$$
\begin{equation*}
\widehat{V}^{-}(x)\leqslant V^{-}(x)\quad\text{and} \quad \underline{\widehat{V}}^{+}(y)\leqslant V^{+}(y).
\end{equation*}
\notag
$$
The second and third assertions in the lemma can be deduced using summation and integration, respectively.
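For example, in the $(1;0)$-lattice case, for an integer $y\geqslant 1$ summation of (2.20) over unit intervals gives
$$
\begin{equation*}
\mathbb{P}_{x}(S_{n}<y,\,L_{n}\geqslant 0) =\sum_{z=1}^{y}\mathbb{P}_{x}(z-1\leqslant S_{n}<z,\,L_{n}\geqslant 0) \leqslant Cb_{n}V^{-}(x)\sum_{z=1}^{y}V^{+}(z) \leqslant Cb_{n}V^{-}(x)\sum_{z=0}^{y}V^{+}(z);
\end{equation*}
\notag
$$
relation (2.22) can be deduced in a similar way in the absolutely continuous case.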
The lemma is proved.
§ 3. Limit theorems for the lattice case

Following [11], we consider below only the $(1;0)$-lattice case of Condition A2, that is, we assume that the distribution of $X_{1}$ is concentrated on the set $\mathbb{Z}$ and is aperiodic. The general $(h;c)$-lattice case requires only more cumbersome notation, while the proofs need no new ideas. We aim at studying the asymptotic behaviour of the probabilities
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}=y_{N})
\end{equation*}
\notag
$$
under the condition that $\max (x_{N},y_{N})/a_{N}\to 0$ and $m=o(N)$ as $N\to \infty $. We consider three cases separately: $y_{N}/a_{m}\to 0$, $y_{N}/a_{m}\to T\in (0,\infty)$ and $y_{N}/a_{m}\to \infty $.

3.1. The case when $y_{N}/a_{m}\to 0$

Lemma 5. Assume that Conditions A1 and A2 are satisfied and the random variable $X_{1}$ has a $(1;0)$-lattice distribution. If $\min (m,N)\to \infty $ so that $m=o(N)$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m}\mid L_{N}\geqslant 0,\,S_{N}=y_{N}) =A_{1}(z)(1+o(1))
\end{equation*}
\notag
$$
for any $z\in (0,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{m})$, where the function
$$
\begin{equation}
A_{1}(z):=C^{\ast}\int_{0}^{z}w^{\alpha \rho}g^{-}(w) \,dw, \qquad z\in [0,\infty),
\end{equation}
\tag{3.1}
$$
is a proper distribution due to (2.6) and Lemma 3: indeed, $A_{1}(\infty)=C^{\ast}\int_{0}^{\infty}w^{\alpha \rho}g^{-}(w)\,dw=1$.

Proof. According to (2.11),
$$
\begin{equation}
\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}=j) \sim (1-\zeta )b_{N}g(0)V^{-}(x_{N})V^{+}(j)
\end{equation}
\tag{3.2}
$$
as $N\to \infty$ uniformly with respect to $x_{N}=o(a_{N}) $ and $j=o(a_{N})$, which yields
$$
\begin{equation}
\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim (1-\zeta )b_{N}g(0)V^{-}(x_{N})\sum_{j=0}^{y_{N}}V^{+}(j)
\end{equation}
\tag{3.3}
$$
uniformly with respect to $x_{N}=o(a_{N}) $ and $y_{N}=o( a_{N}) $. This relation and (2.3) imply that
$$
\begin{equation}
\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim (1-\zeta)V^{-}(x_{N})\, \mathbb{P}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N})
\end{equation}
\tag{3.4}
$$
uniformly with respect to $x_{N}=o(a_{N}) $ and $y_{N}=o(a_{N}) $.
Now we fix $\varepsilon \in (0,1)$ and $z>\varepsilon$ and consider the asymptotic behaviour of the probability
$$
\begin{equation}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}=y_{N}) =R_{1}(\varepsilon,m,N)+R_{2}(\varepsilon,m,N),
\end{equation}
\tag{3.5}
$$
where
$$
\begin{equation*}
R_{1}(\varepsilon,m,N) :=\sum_{0\leqslant k\leqslant \varepsilon a_{m}} \mathbb{P}_{x_{N}}(S_{N-m}=k,\,L_{N-m}\geqslant 0)\, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0)
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
R_{2}(\varepsilon,m,N) := \sum_{\varepsilon a_{m}<k\leqslant za_{m}} \mathbb{P}_{x_{N}}(S_{N-m}=k,\,L_{N-m}\geqslant 0)\, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0).
\end{equation*}
\notag
$$
Using Lemma 4 we conclude that
$$
\begin{equation*}
\begin{aligned} \, R_{1}(\varepsilon,m,N) &\leqslant Cb_{m}V^{+}(y_{N})\sum_{0\leqslant k\leqslant\varepsilon a_{m}} \mathbb{P}_{x_{N}}(S_{N-m}=k,\,L_{N-m}\geqslant 0) V^{-}(k) \\ &\leqslant Cb_{m}V^{+}(y_{N})V^{-}(\varepsilon a_{m})\, \mathbb{P}_{x_{N}}(S_{N-m}\leqslant \varepsilon a_{m},\,L_{N-m}\geqslant 0) \\ &\leqslant C_{1}b_{m}b_{N-m}V^{+}(y_{N})V^{-}(x_{N})V^{-}(\varepsilon a_{m}) \sum_{0\leqslant k\leqslant \varepsilon a_{m}}V^{+}(k) \\ &\leqslant \varepsilon C_{1}a_{m}b_{m}b_{N-m}V^{+}(y_{N})V^{-}(x_{N})V^{-} (\varepsilon a_{m})V^{+}(\varepsilon a_{m}). \end{aligned}
\end{equation*}
\notag
$$
Applying (2.8) and (2.19) and using the equivalence $b_{N-m}\sim b_{N}$ for $m=o(N)$ as ${N\to \infty}$ we obtain
$$
\begin{equation*}
\begin{aligned} \, &a_{m}b_{m}b_{N-m}V^{+}(y_{N})V^{-}(x_{N})V^{-} (\varepsilon a_{m})V^{+}(\varepsilon a_{m}) \\ &\qquad \leqslant Ca_{m}b_{m}b_{N}V^{+}(y_{N})V^{-}(x_{N})V^{-}(a_{m})V^{+}(a_{m}) \\ &\qquad \leqslant 2CC^{\ast \ast \ast}a_{m}b_{m}b_{N}mV^{+}(y_{N})V^{-}(x_{N}) =2CC^{\ast \ast \ast}b_{N}V^{+}(y_{N})V^{-}(x_{N}). \end{aligned}
\end{equation*}
\notag
$$
Thus, we have
$$
\begin{equation}
R_{1}(\varepsilon,m,N)\leqslant \varepsilon C_{1}b_{N}V^{+}(y_{N})V^{-}(x_{N}).
\end{equation}
\tag{3.6}
$$
To estimate $R_{2}(\varepsilon,m,N)$ we note first that, by virtue of Lemma 1,
$$
\begin{equation}
\mathbb{P}_{x_{N}}(S_{N-m}=k,\,L_{N-m}\geqslant 0) \sim \frac{(1-\zeta )g(0)}{(N-m) a_{N-m}}V^{-}(x_{N})V^{+}(k)
\end{equation}
\tag{3.7}
$$
uniformly with respect to $k$, $\varepsilon a_{m}\leqslant k\leqslant za_{m}$, and $x_{N}=o(a_{N-m})=o(a_{N})$ and also that
$$
\begin{equation}
\mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0) =\frac{\mathbb{P}(\tau_{1}^{+}>m)}{a_{m}}V^{+}(y_{N})\biggl(g^{-} \biggl(\frac{k}{a_{m}}\biggr) +o(1)\biggr)
\end{equation}
\tag{3.8}
$$
uniformly with respect to $k$, $\varepsilon a_{m}\leqslant k\leqslant za_{m}$, and $y_{N}=o(a_{m})$. Therefore,
$$
\begin{equation*}
\begin{aligned} \, R_{2}(\varepsilon,m,N) &\sim \frac{(1-\zeta)g(0)}{Na_{N}}\, \frac{\mathbb{P}(\tau_{1}^{+}>m)}{a_{m}}V^{-}(x_{N})V^{+}(y_{N}) \\ &\qquad \times \sum_{\varepsilon a_{m}<k\leqslant za_{m}}V^{+}(k)g^{-} \biggl(\frac{k}{a_{m}}\biggr) \end{aligned}
\end{equation*}
\notag
$$
uniformly with respect to $k$, $\varepsilon a_{m}\leqslant k\leqslant za_{m}$, $y_{N}=o(a_{m})$, and $x_{N}=o(a_{N})$. Using (2.6) we infer the relation
$$
\begin{equation*}
\begin{aligned} \, R_{2}(\varepsilon,m,N) &\sim \frac{C^{\ast}(1-\zeta)g(0)}{Na_{N}}V^{-}(x_{N})V^{+}(y_{N}) \\ &\qquad \times \sum_{\varepsilon a_{m}<k\leqslant za_{m}} \frac{V^{+}(k)}{V^{+}(a_{m})}g^{-}\biggl(\frac{k}{a_{m}}\biggr) \frac{1}{a_{m}}. \end{aligned}
\end{equation*}
\notag
$$
It follows from (2.4) and the properties of regularly varying functions (see [22]) that
$$
\begin{equation*}
\frac{V^{+}(wa_{m})}{V^{+}(a_{m})}\to w^{\alpha \rho}
\end{equation*}
\notag
$$
as $m\to \infty $ uniformly with respect to $w$, $\varepsilon \leqslant w\leqslant z$. Consequently,
$$
\begin{equation*}
\sum_{\varepsilon a_{m}<k\leqslant za_{m}}\frac{V^{+}(k)}{V^{+}(a_{m})}g^{-}\biggl(\frac{k}{a_{m}}\biggr) \frac{1}{a_{m}}\sim \int_{\varepsilon}^{z}w^{\alpha \rho}g^{-}(w)\,dw
\end{equation*}
\notag
$$
as $m\to \infty $. As a result, we have
$$
\begin{equation*}
R_{2}(\varepsilon,m,N)\sim C^{\ast}(1-\zeta) g(0)b_{N}V^{-}(x_{N})V^{+}(y_{N})\int_{\varepsilon}^{z}w^{\alpha \rho }g^{-}(w)\,dw
\end{equation*}
\notag
$$
as $N\to \infty $ uniformly with respect to $y_{N}=o(a_{m})$ and $x_{N}=o(a_{N})$. Since $\varepsilon >0$ can be chosen to be arbitrarily small, based on (3.6) and (3.2) we deduce that
$$
\begin{equation}
\frac{\mathbb{P}_{x_{N}}(S_{N-m} \leqslant za_{m},\,L_{N}\geqslant0,\,S_{N}=y_{N})}{\mathbb{P}_{x_{N}} (L_{N}\geqslant 0,\,S_{N}=y_{N})} \sim C^{\ast}\int_{0}^{z}w^{\alpha \rho}g^{-}( w) \,dw=A_{1}(z)
\end{equation}
\tag{3.9}
$$
as $N\to\infty$ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{m})$.
The lemma is proved.

Corollary 1. Assume that Conditions A1 and A2 are satisfied and the random variable $X_{1}$ has a $(1;0)$-lattice distribution. If $\min (m,N)\to \infty $ so that $m=o(N)$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim A_{1}(z)\, \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N})
\end{equation*}
\notag
$$
for any $z\in (0,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{m})$.

Using summation over the possible values of $S_{N}$, we derive the required assertion from (3.9), (3.2) and (3.3).

3.2. The case when $y_{N}\asymp a_{m}$

Let $0<t_{0}<t_{1}<\infty $ be fixed positive numbers.

Lemma 6. Assume that Conditions A1 and A2 are satisfied and the random variable $X_{1}$ has a $(1;0)$-lattice distribution. If $\min (m,N)\to \infty $ so that $m=o(N)$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m}\mid L_{N}\geqslant 0,\,S_{N}=j) \sim A_{2}\biggl(z,\frac{j}{a_{m}}\biggr)
\end{equation*}
\notag
$$
for any $z\in (0,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$ and $j\in [t_{0}a_{m},t_{1}a_{m}]$, where
$$
\begin{equation*}
A_{2}(z,t):=t^{-\alpha \rho}\int_{0}^{z}w^{\alpha \rho}g(t-w)C(w,t)\,dw.
\end{equation*}
\notag
$$
Proof. Again, we use the decomposition (3.5). Estimates (3.6) and (3.7) remain as before. Furthermore, using (2.12) instead of (3.8) we obtain
$$
\begin{equation}
\mathbb{P}_{k}(S_{m}=j,\,L_{m}\geqslant 0) \sim \frac{1}{a_{m}}g\biggl( \frac{j-k}{a_{m}}\biggr) C\biggl(\frac{k}{a_{m}},\,\frac{j}{a_{m}}\biggr)
\end{equation}
\tag{3.10}
$$
as $m\to \infty $ uniformly with respect to $\varepsilon a_{m}\leqslant k\leqslant za_{m}$ and $j\in [ t_{0}a_{m},t_{1}a_{m}]$. In view of (3.7) and (3.10) it follows that
$$
\begin{equation*}
\begin{aligned} \, R_{2}(\varepsilon,m,N) &\sim \frac{(1-\zeta)g(0)}{(N-m) a_{N-m}}V^{-}(x_{N}) \\ &\qquad\times \sum_{\varepsilon a_{m}<k\leqslant za_{m}} V^{+}(k)\frac{1}{a_{m}}g\biggl(\frac{j-k}{a_{m}}\biggr) C\biggl(\frac{k}{a_{m}},\,\frac{j}{a_{m}}\biggr) \\ &\sim \frac{(1-\zeta)g(0)}{Na_{N}}V^{-}(x_{N})V^{+}(a_{m}) \\ &\qquad\times \sum_{\varepsilon a_{m}<k\leqslant za_{m}} \frac{V^{+}(k)}{V^{+}(a_{m})}\,\frac{1}{a_{m}}g\biggl(\frac{j-k}{a_{m}}\biggr) C\biggl(\frac{k}{a_{m}},\,\frac{j}{a_{m}}\biggr) \end{aligned}
\end{equation*}
\notag
$$
uniformly with respect to $x_{N}=o(a_{N})$ and $j\in [t_{0}a_{m},t_{1}a_{m}]$. Reasoning as in the proof of Lemma 5, we conclude that
$$
\begin{equation*}
\begin{aligned} \, &\sum_{\varepsilon a_{m}<k\leqslant za_{m}} \frac{V^{+}(k)}{V^{+}(a_{m})}\,\frac{1}{a_{m}}g\biggl(\frac{j-k}{a_{m}}\biggr) C\biggl(\frac{k}{a_{m}},\,\frac{j}{a_{m}}\biggr) \\ &\qquad\sim \int_{\varepsilon}^{z}w^{\alpha \rho }g\biggl(\frac{j}{a_{m}}-w\biggr) C\biggl(w,\frac{j}{a_{m}}\biggr)\,dw \end{aligned}
\end{equation*}
\notag
$$
as $m\to \infty $. Hence
$$
\begin{equation}
\begin{aligned} \, R_{2}(\varepsilon,m,N) &\sim (1-\zeta )g(0)b_{N}V^{-}(x_{N})V^{+}(a_{m}) \notag \\ &\qquad \times \int_{\varepsilon}^{z}w^{\alpha \rho} g\biggl(\frac{j}{a_{m}}-w\biggr) C\biggl(w,\frac{j}{a_{m}}\biggr)\,dw. \end{aligned}
\end{equation}
\tag{3.11}
$$
Since $\varepsilon >0$ can be chosen to be arbitrarily small and
$$
\begin{equation*}
V^{+}(a_{m})=V^{+}\biggl(\frac{a_{m}}{j}j\biggr) \sim \biggl(\frac{a_{m}}{j}\biggr)^{\alpha \rho}V^{+}(j)
\end{equation*}
\notag
$$
uniformly with respect to $j\in [t_{0}a_{m},t_{1}a_{m}]$, a combination of (3.6), (3.11) and (3.2) leads to the equality
$$
\begin{equation*}
\frac{\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}=j) }{\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}=j)} =A_{2}\biggl(z,\frac{j}{a_{m}}\biggr) (1+o(1)),
\end{equation*}
\notag
$$
which is valid uniformly with respect to $x_{N}=o(a_{N})$ and $j\in [ t_{0}a_{m},t_{1}a_{m}]$.
Lemma 6 is proved.

Corollary 2. Under the assumptions of Lemma 6, if $\min (m,N)\to \infty$, $m=o(N)$ and $y_{N}\sim Ta_{m}$, $T\in (0,\infty)$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) B(z,T)
\end{equation*}
\notag
$$
for any $z\in (0,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$, where
$$
\begin{equation}
B(z,T):=\frac{\alpha \rho +1}{T^{\alpha \rho +1}} \int_{0}^{z}w^{\alpha \rho}\,dw\int_{0}^{T}g(t-w) C(w,t)\, dt.
\end{equation}
\tag{3.12}
$$
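Note that $B(z,T)$ is related to the limit function from Lemma 6: interchanging the order of integration in (3.12) shows that
$$
\begin{equation*}
B(z,T)=\frac{\alpha \rho +1}{T^{\alpha \rho +1}}\int_{0}^{T}t^{\alpha \rho}A_{2}(z,t)\,dt,
\end{equation*}
\notag
$$
and it is in this form that $B(z,T)$ arises in the proof below (cf. (3.14)).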
Proof. In view of (2.21), for any $\varepsilon \in (0,1)$ we have
$$
\begin{equation}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant \varepsilon y_{N}) \notag \\ &\qquad \leqslant \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant \varepsilon y_{N})\leqslant C_{1}b_{N}V^{-}(x_{N})\sum_{0\leqslant j\leqslant \varepsilon y_{N}}V^{+}(j). \end{aligned}
\end{equation}
\tag{3.13}
$$
Furthermore, it follows from Lemma 6 and relations (2.11) and (2.4) that
$$
\begin{equation}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \notag \\ &\qquad \sim \sum_{\varepsilon y_{N}<j\leqslant y_{N}}\mathbb{P}_{x_{N}}( L_{N}\geqslant 0,\,S_{N}=j) A_{2}\biggl(z,\frac{j}{a_{m}}\biggr) \notag \\ &\qquad \sim (1-\zeta)g(0)b_{N}V^{-}(x_{N})\sum_{\varepsilon y_{N}<j\leqslant y_{N}}V^{+}(j)A_{2}\biggl(z,\frac{j}{a_{m}}\biggr) \notag \\ &\qquad =(1-\zeta)g(0)b_{N}V^{-}(x_{N})V^{+}(a_{m})\sum_{\varepsilon Ta_{m}<j\leqslant Ta_{m}}\frac{V^{+}(j)}{V^{+}(a_{m})}A_{2}\biggl(z,\frac{j}{a_{m}}\biggr) \notag \\ &\qquad \sim (1-\zeta )g(0)b_{N}V^{-}(x_{N})V^{+}(a_{m})a_{m}\int_{\varepsilon T}^{T}t^{\alpha \rho}A_{2}(z,t)\,dt \end{aligned}
\end{equation}
\tag{3.14}
$$
as $N\to\infty$. Note that
$$
\begin{equation*}
\begin{aligned} \, \int_{\varepsilon T}^{T}t^{\alpha \rho}A_{2}(z,t)\,dt &=\int_{\varepsilon T}^{T}dt\int_{0}^{z}w^{\alpha \rho}g(t-w) C(w,t)\,dw \\ &=\int_{0}^{z}w^{\alpha \rho}dw\int_{\varepsilon T}^{T}g(t-w) C(w,t)\,dt. \end{aligned}
\end{equation*}
\notag
$$
Since
$$
\begin{equation}
V^{+}(a_{m})a_{m}\sim V^{+}(T^{-1}y_{N})T^{-1}y_{N}\sim T^{-\alpha \rho -1}V^{+}(y_{N})y_{N}
\end{equation}
\tag{3.15}
$$
by (2.4), we have
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \\ &\qquad\sim (1-\zeta)g(0)b_{N}V^{-}(x_{N})V^{+}(y_{N})y_{N} T^{-\alpha \rho -1}\int_{0}^{z}w^{\alpha \rho}\,dw \int_{\varepsilon T}^{T}g(t-w) C(w,t) \,dt \end{aligned}
\end{equation*}
\notag
$$
owing to (3.14). Referring to (2.4) again, we see that
$$
\begin{equation*}
(\alpha \rho +1) \sum_{j=0}^{y_{N}}V^{+}(j)\sim y_{N}V^{+}(y_{N})
\end{equation*}
\notag
$$
as $y_{N}\to \infty $. Using (3.13) and (3.3) and letting $\varepsilon $ tend to zero, we deduce that
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) B(z,T)
\end{equation*}
\notag
$$
uniformly with respect to $x_{N}=o(a_{N})$.
The corollary is proved.

3.3. The case when $a_{m}=o(y_{N})$

Unlike §§ 3.1 and 3.2, where the distribution of the random variable $S_{N-m}$ was considered, here we look at the distribution of the difference $S_{N-m}-S_{N}$.

Lemma 7. Assume that Conditions A1 and A2 are satisfied and the random variable $X_{1}$ has a $(1;0)$-lattice distribution. If $\min (m,N)\to \infty $ and $a_{m}=o(y_{N})$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}-S_{N}\leqslant za_{m}\mid L_{N}\geqslant 0,S_{N}=y_{N}) \sim \mathbf{P}(Y_{1}\leqslant z)
\end{equation*}
\notag
$$
for any $z\in (-\infty,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{N})$, $a_{m}=o(y_{N})$, where $Y_{1}$ is defined by (2.1). Proof. We fix a sufficiently large $M$ and use the decomposition
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}-y_{N}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}=y_{N}) =R_{3}(M,m,N)+R_{4}(M,m,N),
\end{equation*}
\notag
$$
where
$$
\begin{equation*}
R_{3}(M,m,N) :=\sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}\mathbb{P}_{x_{N}}( S_{N-m}=k,\,L_{N-m}\geqslant 0) \, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0)
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
R_{4}(M,m,N) :=\sum_{y_{N}-Ma_{m}<k\leqslant y_{N}+za_{m}}\mathbb{P}_{x_{N}}(S_{N-m}=k,\,L_{N-m}\geqslant 0) \, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0).
\end{equation*}
\notag
$$
Note that
$$
\begin{equation*}
\begin{aligned} \, R_{3}(M,m,N) &\leqslant Cb_{N-m}V^{-}(x_{N})\sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}V^{+}(k)\, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0) \\ &\leqslant C_{1}b_{N}V^{-}(x_{N})V^{+}(y_{N})\sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}\mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0) \end{aligned}
\end{equation*}
\notag
$$
by virtue of (2.11). It is obvious that
$$
\begin{equation*}
\begin{aligned} \, &\sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}\mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0)=\sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}\mathbb{P}(S_{m}=y_{N}-k,\,L_{m}\geqslant -k) \\ &\qquad \leqslant \sum_{0\leqslant k\leqslant y_{N}-Ma_{m}}\mathbb{P}(S_{m}=y_{N}-k) \leqslant \mathbb{P}(S_{m}\geqslant Ma_{m}). \end{aligned}
\end{equation*}
\notag
$$
Since $\lim_{M\to \infty}\lim_{m\to \infty}\mathbb{P}(S_{m}\geqslant Ma_{m}) =0$, we have
$$
\begin{equation}
R_{3}(M,m,N)\leqslant \varepsilon (M)b_{N}V^{-}(x_{N})V^{+}(y_{N}),
\end{equation}
\tag{3.16}
$$
where $\varepsilon (M)\downarrow 0$ as $M\to \infty $.
Furthermore, by (2.11) and (2.4) we have
$$
\begin{equation*}
\begin{aligned} \, &R_{4}(M,m,N) \\ &\qquad\sim(1-\zeta )g(0)b_{N-m}V^{-}(x_{N})\sum_{y_{N}-Ma_{m}<k\leqslant y_{N}+za_{m}}V^{+}(k)\, \mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0) \\ &\qquad\sim(1-\zeta )g(0)b_{N}V^{-}(x_{N})V^{+}(y_{N})\sum_{y_{N}-Ma_{m}<k\leqslant y_{N}+za_{m}}\mathbb{P}_{k}(S_{m}=y_{N},\,L_{m}\geqslant 0) \end{aligned}
\end{equation*}
\notag
$$
uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{N})$. Thus, it remains to estimate the sum
$$
\begin{equation*}
\begin{aligned} \, &\sum_{y_{N}-Ma_{m}<k\leqslant y_{N}+za_{m}}\mathbb{P}_{k}( S_{m}=y_{N},\,L_{m}\geqslant 0) \\ &\qquad\qquad =\sum_{y_{N}-Ma_{m}<k\leqslant y_{N}+za_{m}}\mathbb{P}( S_{m}=y_{N}-k,\,L_{m}\geqslant -k) \\ &\qquad\qquad=\sum_{-Ma_{m}<j\leqslant za_{m}}\mathbb{P}(S_{m}=j,\,L_{m}\geqslant j-y_{N}). \end{aligned}
\end{equation*}
\notag
$$
Clearly,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}(S_{m}\in [-Ma_{m},za_{m}],\,L_{m} \geqslant za_{m}-y_{N}) \\ &\qquad\leqslant \sum_{-Ma_{m}<j\leqslant za_{m}}\mathbb{P}(S_{m}=j,\,L_{m}\geqslant j-y_{N})\leqslant \mathbb{P}(S_{m}\in [-Ma_{m},za_{m}]). \end{aligned}
\end{equation*}
\notag
$$
Furthermore,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}(S_{m}\in [-Ma_{m},za_{m}],\,L_{m}\geqslant za_{m}-y_{N}) \\ &\qquad \geqslant\mathbb{P}(S_{m}\in [-Ma_{m},za_{m}]) -\mathbb{P}(L_{m}<za_{m}-y_{N}). \end{aligned}
\end{equation*}
\notag
$$
Since $y_{N}/a_{m}\to \infty $, we have
$$
\begin{equation*}
\lim_{N\to \infty}\mathbb{P}\biggl( \frac{L_{m}}{a_{m}}<z-\frac{y_{N}}{a_{m}}\biggr) =0
\end{equation*}
\notag
$$
according to the invariance principle for random walks such that the distributions of their increments belong (without centering) to the domain of attraction of a stable law. Note that $b_{N-m}\sim b_{N}$ if $m=o(N)$ as $N\to \infty $; therefore,
$$
\begin{equation*}
R_{4}(M,m,N)\sim (1-\zeta )g(0)b_{N}V^{-}(x_{N})V^{+}(y_{N})\, \mathbf{P}(Y_{1}\in [-M,z])
\end{equation*}
\notag
$$
as $N\to \infty $. Since we can choose the parameter $M$ to be arbitrarily large and $\varepsilon (M)>0$ in (3.16) to be arbitrarily small, it is true that
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}-y_{N}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}=y_{N}) \\ &\qquad\sim (1-\zeta )g(0)b_{N}V^{-}(x_{N})V^{+}(y_{N})\, \mathbf{P}(Y_{1}\leqslant z) \end{aligned}
\end{equation*}
\notag
$$
uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{N})$. In view of (2.1) this implies the assertion of the lemma.

Corollary 3. Assume that Conditions A1 and A2 are satisfied and the random variable $X_{1}$ has a $(1;0)$-lattice distribution. If $m\to \infty $ and $a_{m}=o(y_{N})$, then
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}-S_{N}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \, \mathbf{P}(Y_{1}\leqslant z)
\end{equation*}
\notag
$$
for any $z\in (-\infty,\infty)$ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{N})$, $a_{m}=o(y_{N})$. Proof. By (2.21),
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}-S_{N}\leqslant za_{m},\,L_{N}\geqslant 0,\,S_{N}\leqslant \varepsilon y_{N}) \leqslant C_{1}b_{N}V^{-}(x_{N})\sum_{0\leqslant j\leqslant \varepsilon y_{N}}V^{+}(j)
\end{equation*}
\notag
$$
for any $\varepsilon \in (0,1)$. Furthermore, by Lemma 7 we have
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}-S_{N}\leqslant za_{m},\,L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \\ &\qquad \sim \sum_{\varepsilon y_{N}<j\leqslant y_{N}}\, \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}=j) \mathbf{P}(Y_{1}\leqslant z) \\ &\qquad \sim \mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \, \mathbf{P}(Y_{1}\leqslant z). \end{aligned}
\end{equation*}
\notag
$$
Owing to (2.11), we have
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim (1-\zeta )g(0)b_{N}V^{-}(x_{N})\sum_{j=0}^{y_{N}}V^{+}(j).
\end{equation*}
\notag
$$
Using (2.4) and letting $\varepsilon$ tend to zero we obtain the required assertion.
The corollary is proved.
§ 4. Limit theorems for the absolutely continuous case

In this section we establish analogues of Corollaries 1–3 for the absolutely continuous case.

Lemma 8. Assume that Conditions A1 and A2 (the absolutely continuous case) are satisfied and $\min (m,N)\to \infty$, $m=o(N)$. Then: (1) uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{m})$,
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m}\mid L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim A_{1}(z), \qquad z\in (0,\infty);
\end{equation*}
\notag
$$
(2) if $y_{N}\sim Ta_{m}$, $T\in (0,\infty)$, then, uniformly with respect to $x_{N}=o(a_{N})$,
$$
\begin{equation}
\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m}\mid L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim B(z,T), \qquad z\in (0,\infty);
\end{equation}
\tag{4.1}
$$
(3) if $a_{m}=o(y_{N})$, then, uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}=o(a_{N})$,
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(S_{N-m}-S_{N}\leqslant za_{m}\mid L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \sim \mathbf{P}(Y_{1}\leqslant z), \qquad z\in (-\infty,\infty).
\end{equation*}
\notag
$$
Proof. The proofs of assertions (1)–(3) repeat conceptually the arguments used for the corresponding assertions in Corollaries 1–3. Therefore, we only verify (4.1).
According to (2.20), we have
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,S_{N}<\varepsilon y_{N}) \\ &\qquad \leqslant\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}<\varepsilon y_{N}) \leqslant C_{1}b_{N-m}V^{-}(x_{N})\int_{0}^{\varepsilon y_{N}}V^{+}(w)\,dw \end{aligned}
\end{equation*}
\notag
$$
for any $\varepsilon \in (0,1)$. In a similar way
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(S_{N-m}<\varepsilon y_{N},\,L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) \\ &\qquad\leqslant \mathbb{P}_{x_{N}}(S_{N-m}<\varepsilon y_{N},\,L_{N-m}\geqslant 0) \leqslant C_{1}b_{N-m}V^{-}(x_{N})\int_{0}^{\varepsilon y_{N}}V^{+}(w)\,dw. \end{aligned}
\end{equation*}
\notag
$$
Furthermore,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(\varepsilon y_{N}\leqslant S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \\ &\qquad =\int_{\varepsilon y_{N}}^{za_{m}}\mathbb{P}_{x_{N}} (S_{N-m}\in dw,\,L_{N-m}\geqslant 0)\, \mathbb{P}_{w}(L_{m}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{m}\leqslant y_{N}). \end{aligned}
\end{equation*}
\notag
$$
By (2.18), for any $0<t_{0}<t_{1}<\infty $ we have
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{w}(L_{m}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{m}\leqslant y_{N}) \\ &\qquad=\int_{\varepsilon y_{N}}^{y_{N}}\mathbb{P}_{w}(L_{m}\geqslant 0,\,S_{m}\in dq)\sim \int_{\varepsilon y_{N}}^{y_{N}}\frac{1}{a_{m}}g\biggl( \frac{q-w}{a_{m}}\biggr) C\biggl(\frac{w}{a_{m}},\,\frac{q}{a_{m}}\biggr) \,dq \end{aligned}
\end{equation*}
\notag
$$
uniformly with respect to $w\in [t_{0}a_{m},t_{1}a_{m}]$. In view of (2.17),
$$
\begin{equation}
\frac{\mathbb{P}_{x_{N}}(S_{N-m}\in dw,\,L_{N-m}\geqslant 0)}{dw}\sim \frac{g(0)}{Na_{N}}V^{-}(x_{N})V^{+}(w)
\end{equation}
\tag{4.2}
$$
uniformly with respect to $x_{N}=o(a_{N-m})=o(a_{N})$ and $w=o(a_{N-m})$. Since $m=o(N)$, it is true that
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}_{x_{N}}(\varepsilon y_{N}\leqslant S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,\,\varepsilon y_{N}\leqslant S_{N}\leqslant y_{N}) \\ &\qquad \sim \frac{g(0)V^{-}(x_{N})}{Na_{N}}\int_{\varepsilon y_{N}}^{za_{m}}V^{+}(w)\,dw \int_{\varepsilon y_{N}}^{y_{N}} \frac{1}{a_{m}}g\biggl(\frac{q-w}{a_{m}}\biggr) C\biggl(\frac{w}{a_{m}},\,\frac{q}{a_{m}}\biggr) \,dq \\ &\qquad \sim \frac{g(0)V^{-}(x_{N})a_{m}}{Na_{N}}\int_{\varepsilon T}^{z}V^{+}(sa_{m})\, ds\int_{\varepsilon T}^{T}g(r-s) C(s,r)\, dr \\ &\qquad =\frac{g(0)V^{-}(x_{N})a_{m}V^{+}(a_{m})}{Na_{N}}\int_{\varepsilon T}^{z}\frac{V^{+}(sa_{m})}{V^{+}(a_{m})}\,ds\int_{\varepsilon T}^{T}g( r-s) C(s,r) \,dr \\ &\qquad \sim \frac{g(0)V^{-}(x_{N})a_{m}V^{+}(a_{m})}{Na_{N}} \int_{\varepsilon T}^{z}s^{\alpha \rho}\,ds\int_{\varepsilon T}^{T}g(r-s)C(s,r) \,dr. \end{aligned}
\end{equation*}
\notag
$$
Using the relation (3.15), the equivalence
$$
\begin{equation*}
(\alpha \rho +1) \int_{0}^{y_{N}}V^{+}(w)\,dw\sim\, y_{N}V^{+}(y_{N})
\end{equation*}
\notag
$$
and the representation
$$
\begin{equation*}
\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) =(1+o(1))\frac{g(0)}{Na_{N}}V^{-}(x_{N})\int_{0}^{y_{N}}V^{+}(w)\,dw,
\end{equation*}
\notag
$$
which follows from (4.2), and letting $\varepsilon$ tend to zero we see that
$$
\begin{equation*}
\frac{\mathbb{P}_{x_{N}}(S_{N-m}\leqslant za_{m},\,L_{N}\geqslant 0,S_{N}\leqslant y_{N})}{\mathbb{P}_{x_{N}}(L_{N}\geqslant 0,\,S_{N}\leqslant y_{N}) }\sim \frac{\alpha \rho +1}{T^{\alpha \rho +1}}\int_{0}^{z}s^{\alpha \rho }\,ds \int_{0}^{T}g(r-s) C(s,r) \,dr
\end{equation*}
\notag
$$
as $N\to \infty $ uniformly with respect to $x_{N}=o(a_{N})$ and $y_{N}\sim Ta_{m}=o(a_{N})$.
Relation (4.1) is proved, and the proof of Lemma 8 is complete.
§ 5. Limit theorem for almost surely convergent sequences

In this section we need a conditional measure $\mathbb{P}_x^{\uparrow}(\,\cdot\,)$ generating a random walk starting at $x$ and conditioned to stay nonnegative on the whole time axis (see, for example, [4], [10] and [25]). This new measure is defined as follows: for $x\geqslant 0$, any $N\in \mathbb{N}$ and each measurable set $B$ in the $\sigma $-algebra generated by the random variables $S_{1},\dots,S_{N}$ we set
$$
\begin{equation*}
\mathbb{P}_{x}^{\uparrow}(B):=\frac{1}{V^{-}(x)}\mathbb{E}_{x}[ V^{-}(S_{N})I\{B\} ;\,L_{N}\geqslant 0].
\end{equation*}
\notag
$$
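In particular, taking $x=0$ and using (2.3), that is, $V^{-}(0)=(1-\zeta)^{-1}$, we obtain
$$
\begin{equation*}
\mathbb{E}_{0}^{\uparrow}[H] =(1-\zeta)\, \mathbb{E}[HV^{-}(S_{N});\,L_{N}\geqslant 0]
\end{equation*}
\notag
$$
for every bounded random variable $H$ measurable with respect to the $\sigma$-algebra generated by $S_{1},\dots,S_{N}$; this identity is used several times in the proof of Theorem 2.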
For a function $\varphi (n)\to \infty $ as $n\to \infty $, we introduce the conditional means
$$
\begin{equation*}
I_{n}(z,m,\varphi):=\mathbb{E}[H_{n};\,S_{n-m}\leqslant za_{m}\mid S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0], \qquad z\in (0,\infty),
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
I_{n}^{\ast}(z,m,\varphi):=\mathbb{E}[H_{n};\,S_{n-m}-S_{n}\leqslant za_{m}\mid S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0], \qquad z\in (-\infty,\infty).
\end{equation*}
\notag
$$
Theorem 2. Assume that Conditions A1 and A2 are satisfied and $H_{1},H_{2},\dots$ is a uniformly bounded sequence of random variables adapted to a filtration $\widetilde{\mathcal{F}} = \{\widetilde{\mathcal{F}}_{k},\,k\in\mathbb{N}\}$ that converges $\mathbb{P}_{0}^{\uparrow}$-almost surely to a random variable $H_{\infty}$ as ${n\to \infty}$. Assume that the parameter $m=m(n)$ tends to infinity as $n\to \infty $ so that $m=o(n)$. Then the following hold: (1) if $\varphi (n)=o(a_{m})$, then
$$
\begin{equation}
\lim_{n\to \infty}I_{n}(z,m,\varphi)=A_{1}(z)\,\mathbb{E}_{0}^{\uparrow}[H_{\infty}] ;
\end{equation}
\tag{5.1}
$$
(2) if $\varphi (n)\sim Ta_{m},T\in (0,\infty)$, then
$$
\begin{equation}
\lim_{n\to \infty}I_{n}(z,m,\varphi)=B(z,T)\,\mathbb{E}_{0}^{\uparrow}[H_{\infty}] ;
\end{equation}
\tag{5.2}
$$
(3) if $m\to \infty $ and $a_{m}=o(\varphi (n))$, then
$$
\begin{equation}
\lim_{n\to \infty}I_{n}^{\ast}(z,m,\varphi)=\mathbf{P}(Y_{1}\leqslant z) \,\mathbb{E}_{0}^{\uparrow}[H_{\infty}].
\end{equation}
\tag{5.3}
$$
Remark 2. Theorem 2 generalizes Lemma 4 in [27], which considers the behaviour of the mean
$$
\begin{equation*}
\mathbb{E}[H_{n}\mid S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0]
\end{equation*}
\notag
$$
as $n\to\infty$. By (2.13) the measure
$$
\begin{equation*}
\mathbb{P}_{x}^{+}(B):=\frac{1}{\widehat{V}^{-}(x)}\, \mathbb{E}_{x}[\widehat{V}^{-}(S_{N})I\{B\};\,L_{N}\geqslant 0],
\end{equation*}
\notag
$$
which occurs under the limit sign in that lemma, coincides with the above measure $\mathbb{P}_{x}^{\uparrow}$. Proof of Theorem 2. We prove (5.1) for the $(1;0)$-lattice case. For fixed $1\leqslant k<n$ and $z\in (0,\infty)$ we consider the quantity
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{E}[H_{k};\,S_{n-m}\leqslant za_{m}\mid S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0] \\ &\qquad =\mathbb{E}\biggl[H_{k}\frac{\mathbb{P}_{S_{k}}(S_{n-m-k}' \leqslant za_{m},\,S_{n-k}'\leqslant \varphi (n),\,L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0)};L_{k}\geqslant 0\biggr], \end{aligned}
\end{equation*}
\notag
$$
where $\mathcal{S}'=\{S_{n}',\,n=0,1,2,\dots\} $ is a probabilistic copy of the random walk $\mathcal{S}$ that is independent of the set $\{S_{j},\,j=0,1,\dots,k\} $. Owing to Corollaries 1 and 2 and relation (2.21), there are constants $C$, $C_{1}$, $C_{2}$ and $C_{3}$ such that
$$
\begin{equation*}
\begin{aligned} \, &\frac{\mathbb{P}_{x}(S_{n-m-k}'\leqslant za_{m},\,S_{n-k}'\leqslant \varphi (n),L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi(n),\,L_{n}\geqslant 0)} \\ &\qquad \leqslant \frac{\mathbb{P}_{x}(S_{n-k}'\leqslant \varphi(n),\,L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi(n),\,L_{n}\geqslant 0)} \leqslant \frac{C_{1}\,b_{n-k}\,V^{-}(x)\sum_{j=0}^{\varphi(n)}V^{+}(j)}{Cb_{n}\sum_{j=0}^{\varphi (n)}V^{+}(j)}\leqslant C_{3}V^{-}(x) \end{aligned}
\end{equation*}
\notag
$$
for any fixed $k$ and all $n\geqslant k$ and $z>0$. Furthermore, taking account of Corollary 1, relation (3.4) and the definition (2.19), we see that
$$
\begin{equation}
\lim_{n\to \infty}\frac{\mathbb{P}_{x}(S_{n-m-k}'\leqslant za_{m},\,S_{n-k}'\leqslant \varphi (n),L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0)}=(1-\zeta )A_{1}(z)V^{-}(x)
\end{equation}
\tag{5.4}
$$
for any fixed $x\geqslant 0$ and $k\in \mathbb{N}$. In view of (2.3),
$$
\begin{equation*}
\begin{aligned} \, \mathbb{E}[H_{k}V^{-}(S_{k});\,L_{k}\geqslant 0] &=\frac{1}{1-\zeta}\cdot \frac{1}{V^{-}(0)}\, \mathbb{E}[H_{k}V^{-}(S_{k});\,L_{k}\geqslant0] \\ &=\frac{1}{1-\zeta}\, \mathbb{E}_{0}^{\uparrow}[H_{k}] <\infty. \end{aligned}
\end{equation*}
\notag
$$
Using the dominated convergence theorem and (5.4), we conclude that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\mathbb{E} \biggl[H_{k}\frac{\mathbb{P}_{S_{k}}(S_{n-m-k}'\leqslant za_{m}, \,S_{n-k}'\leqslant \varphi(n),\,L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0)};L_{k}\geqslant 0\biggr] \\ &\qquad =\mathbb{E}\biggl[H_{k}\cdot \lim_{n\to \infty} \frac{\mathbb{P}_{S_{k}}(S_{n-m-k}'\leqslant za_{m}, \,S_{n-k}'\leqslant \varphi (n),\,L_{n-k}'\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0)};L_{k}\geqslant 0\biggr] \\ &\qquad =A_{1}(z)\frac{1}{V^{-}(0)}\, \mathbb{E}[H_{k}V^{-}(S_{k});\,L_{k}\geqslant 0] =A_{1}(z)\, \mathbb{E}_{0}^{\uparrow}[H_{k}] \end{aligned}
\end{equation*}
\notag
$$
for any fixed $k$.
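In view of the representation at the beginning of the proof, this means that, for each fixed $k$,
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{E}[H_{k};\,S_{n-m}\leqslant za_{m}\mid S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0]=A_{1}(z)\, \mathbb{E}_{0}^{\uparrow}[H_{k}].
\end{equation*}
\notag
$$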
To avoid cumbersome formulae in the reasoning below, we introduce the notation
$$
\begin{equation*}
\Psi (\lambda n,\lambda m,z,\varphi) :=\{S_{\lambda (n-m)}\leqslant z,\,S_{\lambda n}\leqslant \varphi (n),\,L_{n\lambda}\geqslant 0\}
\end{equation*}
\notag
$$
for $\lambda \geqslant 1$. By virtue of (2.21), for each $\lambda >1$ we have
$$
\begin{equation*}
\begin{aligned} \, &|\mathbb{E}[(H_{n}-H_{k}) ;\Psi (\lambda n,\lambda m,za_{m},\varphi) ] | \leqslant \mathbb{E}[|H_{n}-H_{k}|;\,S_{\lambda n}\leqslant \varphi (n),L_{n\lambda}\geqslant 0] \\ &\qquad =\mathbb{E}[|H_{n}-H_{k}| \mathbb{P}_{S_{n}}(S_{(\lambda -1)n}' \leqslant \varphi (n),\,L_{n(\lambda -1)}'\geqslant 0);\,L_{n}\geqslant 0] \\ &\qquad \leqslant Cb_{n(\lambda -1)}\sum_{z=0}^{\varphi (n)}V^{+}(z)\cdot \mathbb{E}[|H_{n}-H_{k}|V^{-}(S_{n}),\,L_{n}\geqslant 0] \\ &\qquad =Cb_{n(\lambda -1)}\sum_{z=0}^{\varphi (n)}V^{+}(z)\cdot \frac{1}{1-\zeta}\, \mathbb{E}_{0}^{\uparrow}[|H_{n}-H_{k}|]. \end{aligned}
\end{equation*}
\notag
$$
Furthermore, based on Corollary 1 and the relation $a_{\lambda m}\sim \lambda^{1/\alpha}a_{m}$, which is valid as $m\to \infty $, we conclude that
$$
\begin{equation*}
\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi) ) \sim A_{1}(z\lambda^{-1/\alpha})\, \mathbb{P}(S_{n\lambda}\leqslant \varphi (n),\,L_{n\lambda}\geqslant 0)
\end{equation*}
\notag
$$
as $m,n\to \infty $ and $m=o(n)$. Using (2.19) and (3.3) for $x_{N}=0$, we deduce that
$$
\begin{equation*}
\begin{aligned} \, &\frac{|\mathbb{E}[(H_{n}-H_{k});\,\Psi (\lambda n,\lambda m,za_{m},\varphi) ] |}{\mathbb{P}( \Psi (\lambda n,\lambda m,za_{m},\varphi))} \\ &\qquad \leqslant C\mathbb{E}^{\uparrow}[| H_{n}-H_{k}|]\, \frac{b_{n(\lambda -1)}\sum_{z=0}^{\varphi (n)}V^{+}(z)}{C_{1}\,b_{n\lambda}\,\sum_{z=0}^{\varphi (n)}V^{+}(z)} \leqslant C_{2}\biggl(\frac{\lambda}{\lambda -1}\biggr) ^{1+1/\alpha}\mathbb{E}_{0}^{\uparrow}[|H_{n}-H_{k}|]. \end{aligned}
\end{equation*}
\notag
$$
First letting $n$ and then $k$ tend to infinity and applying the dominated convergence theorem, we see that the right-hand side of the previous relation tends to zero for each $\lambda >1$.
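Indeed, since the sequence $\{H_{n}\}$ is uniformly bounded and $H_{n}\to H_{\infty}$ $\mathbb{P}_{0}^{\uparrow}$-almost surely, two applications of the dominated convergence theorem give
$$
\begin{equation*}
\limsup_{n\to \infty}\mathbb{E}_{0}^{\uparrow}[|H_{n}-H_{k}|]\leqslant \mathbb{E}_{0}^{\uparrow}[|H_{\infty}-H_{k}|]\to 0, \qquad k\to \infty.
\end{equation*}
\notag
$$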
Based on this result, we arrive at the chain of equalities
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\mathbb{E}[H_{n}\mid \Psi (\lambda n,\lambda m,za_{m},\varphi) ] \\ &\qquad =\lim_{k\to \infty}\lim_{n\to \infty }\frac{\mathbb{E}[(H_{n}-H_{k}) ;\,\Psi (\lambda n,\lambda m,za_{m},\varphi) ]}{\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi))} \\ &\qquad \qquad +\lim_{k\to \infty}\lim_{n\to \infty }\frac{\mathbb{E}[H_{k};\Psi (\lambda n,\lambda m,za_{m},\varphi ) ]}{\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi))} \\ &\qquad =\lim_{k\to \infty}A_{1}(z\lambda^{-1/\alpha })\, \mathbb{E}_{0}^{\uparrow}[H_{k}] =A_{1}(z\lambda^{-1/\alpha })\, \mathbb{E}_{0}^{\uparrow}[H_{\infty}], \end{aligned}
\end{equation*}
\notag
$$
which is convenient to rewrite as
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{E}[H_{n};\Psi (\lambda n,\lambda m,za_{m},\varphi ) ] \\ &\qquad=(A_{1}(z\lambda^{-1/\alpha})\, \mathbb{E}_{0}^{\uparrow}[ H_{\infty}] +o(1)) \, \mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)). \end{aligned}
\end{equation*}
\notag
$$
Without loss of generality let $H_{\infty}>0$ and $A_{1}(z\lambda^{-1/\alpha}) \, \mathbb{E}_{0}^{\uparrow}[H_{\infty}] \leqslant 1$; in view of this assumption, we conclude that
$$
\begin{equation*}
\begin{aligned} \, &\bigl|\mathbb{E}[H_{n};\Psi (n,m,za_{m},\varphi) ] -A_{1}(z\lambda^{-1/\alpha})\, \mathbb{E}_{0}^{\uparrow}[H_{\infty }] \, \mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)) \bigr| \\ &\qquad \leqslant \bigl|\mathbb{E}[H_{n};\Psi (\lambda n,\lambda m,za_{m},\varphi) ] -A_{1}(z\lambda^{-1/\alpha})\, \mathbb{E}_{0}^{\uparrow}[H_{\infty}] \, \mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)) \bigr| \\ &\qquad\qquad +\bigl|\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)) -\mathbb{P}(\Psi (n,m,za_{m},\varphi)) \bigr|. \end{aligned}
\end{equation*}
\notag
$$
We have already proved that the first difference on the right-hand side of this inequality is of order
$$
\begin{equation*}
o(\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)))
\end{equation*}
\notag
$$
as $n\to \infty $; therefore, it is of order $o(\mathbb{P}(S_{n}\leqslant \varphi (n),\, L_{n}\geqslant 0)) $, since
$$
\begin{equation}
\lim_{n\to \infty}\frac{\mathbb{P}(S_{n\lambda}\leqslant \varphi (n),\,L_{n\lambda}\geqslant 0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}\geqslant 0)}=\lim_{n\to \infty}\frac{b_{n\lambda }}{b_{n}}=\lambda^{-(1+1/\alpha)}
\end{equation}
\tag{5.5}
$$
in view of (3.3) and (2.19).
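The two regular-variation facts used in this argument, namely $a_{\lambda m}\sim \lambda^{1/\alpha}a_{m}$ and (5.5), can be checked directly. A minimal computation, under the assumption (standard in the setting of Conditions A1 and A2) that $a_{n}=n^{1/\alpha}\ell_{a}(n)$ and $b_{n}=n^{-1-1/\alpha}\ell_{b}(n)$ with slowly varying functions $\ell_{a}$ and $\ell_{b}$:
$$
\begin{equation*}
\frac{a_{\lambda m}}{a_{m}}=\lambda^{1/\alpha}\,\frac{\ell_{a}(\lambda m)}{\ell_{a}(m)}\to \lambda^{1/\alpha}, \qquad
\frac{b_{n\lambda}}{b_{n}}=\lambda^{-(1+1/\alpha)}\,\frac{\ell_{b}(n\lambda)}{\ell_{b}(n)}\to \lambda^{-(1+1/\alpha)}
\end{equation*}
\notag
$$
as $m,n\to \infty $, by the defining property of slowly varying functions (see [5] and [22]).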
Furthermore, recalling relation (3.3), Corollary 1 and the definition (2.19) once again we obtain
$$
\begin{equation*}
\begin{aligned} \, &\bigl|\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)) -\mathbb{P}(\Psi (n,m,za_{m},\varphi)) \bigr| \\ &\qquad\leqslant \biggl|\mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi)) -g(0)A_{1}(z\lambda^{-1/\alpha })b_{n\lambda}\sum_{j=0}^{\varphi (n)}V^{+}(j)\biggr| \\ &\qquad\qquad+\biggl|\mathbb{P}(\Psi (n,m,za_{m},\varphi)) -g(0)A_{1}(z)b_{n}\sum_{j=0}^{\varphi (n)}V^{+}(j)\biggr| \\ &\qquad\qquad+g(0)\bigl|A_{1}(z\lambda^{-1/\alpha})b_{n\lambda}-A_{1}(z)b_{n}\bigr| \sum_{j=0}^{\varphi (n)}V^{+}(j) \\ &\qquad=o\biggl(b_{n}\sum_{j=0}^{\varphi (n)}V^{+}(j)\biggr) +g(0)b_{n}A_{1}(z) \biggl|\frac{A_{1}(z\lambda^{-1/\alpha})b_{n\lambda }}{A_{1}(z)b_{n}}-1\biggr| \sum_{j=0}^{\varphi (n)}V^{+}(j). \end{aligned}
\end{equation*}
\notag
$$
Letting $\lambda$ tend to $1$ and using (5.5) and the continuity of the function $A_{1}(z)$, we deduce that
$$
\begin{equation*}
\lim_{\lambda \downarrow 1}\lim_{n\to \infty}\frac{| \mathbb{P}(\Psi (\lambda n,\lambda m,za_{m},\varphi) ) -\mathbb{P}(\Psi (n,m,za_{m},\varphi)) |}{b_{n}\sum_{j=0}^{\varphi (n)}V^{+}(j)}=0.
\end{equation*}
\notag
$$
Combining the above estimates proves (5.1) in the $(1;0)$-lattice case.
To prove (5.2) for the $(1;0)$-lattice case we must repeat almost verbatim the arguments used in the verification of (5.1), replacing the reference to Corollary 1 by a reference to Corollary 2.
To verify (5.3) in the $(1;0)$-lattice case we repeat almost verbatim the arguments used to prove assertion (5.1), replacing the reference to Corollary 1 by a reference to Corollary 3.
To prove assertion (5.1) in the absolutely continuous case we replace $\sum_{j=0}^{\varphi (n)}V^{+}(j)$ in the above reasoning by $\displaystyle\int_{0}^{\varphi (n)}V^{+}(w)\,dw$ and use Lemma 8. Similar arguments also make it possible to verify (5.2) and (5.3) in the case when the random variable $X_{1}$ has an absolutely continuous distribution.
Theorem 2 is proved.
§ 6. Branching processes in random environment In this section we use the results obtained for random walks to study the population size of critical branching processes evolving in an unfavourable random environment. To describe the model under consideration and state the problems to be solved in detail, we introduce the space $\mathcal{M}$ $=\{\mathfrak{f}\} $ of all probability measures on the set $\mathbb{N}_{0}$. To simplify our notation we identify a measure $\mathfrak{f}=\{\mathfrak{f}(\{0\}),\mathfrak{f}(\{ 1\}),\dots\} \in \mathcal{M}$ with the corresponding probability generating function
$$
\begin{equation*}
f(s)=\sum_{k=0}^{\infty}\mathfrak{f}(\{k\})s^{k}, \qquad s\in [0,1],
\end{equation*}
\notag
$$
and do not distinguish between $\mathfrak{f}$ and $f$. The space $\mathcal{M}=\{\mathfrak{f}\}=\{f\}$, endowed with a metric induced by the distance in variation between distributions, becomes a Polish space. Let
$$
\begin{equation*}
F(s)=\sum_{j=0}^{\infty}F(\{j\}) s^{j}, \qquad s\in [0,1],
\end{equation*}
\notag
$$
be a random variable with values in $\mathcal{M}$, and let
$$
\begin{equation*}
F_{n}(s)=\sum_{j=0}^{\infty}F_{n}(\{j\}) s^{j}, \qquad s\in [0,1], \quad n\in \mathbb{N}:=\mathbb{N}_{0}\setminus \{0\},
\end{equation*}
\notag
$$
be a sequence of independent probabilistic copies of the random variable $F$. The infinite sequence $\mathcal{E}=\{F_{n},\,n\in \mathbb{N}\} $ is called a random environment. A sequence of nonnegative integer-valued random variables $\mathcal{Z}=\{Z_{n},\,n\in \mathbb{N}_{0}\} $ defined on some probability space $(\Omega,\mathcal{F},\mathbb{P})$ is called a branching process in a random environment (BPRE) if $Z_{0}$ is independent of $\mathcal{E}$ and, under the condition that $\mathcal{E}$ is fixed, the process $\mathcal{Z}$ is a Markov chain:
$$
\begin{equation*}
\mathcal{L}\bigl(Z_{n}\bigm| Z_{n-1}=z_{n-1},\, \mathcal{E}=(f_{1},f_{2},\dots)\bigr) =\mathcal{L}(\xi_{n1}+\dots +\xi_{nz_{n-1}})
\end{equation*}
\notag
$$
for all $n\in \mathbb{N}$, $z_{n-1}\in \mathbb{N}_{0}$, and $f_{1},f_{2},\ldots\in \mathcal{M}$, where $\xi_{n1},\xi_{n2},\dots $ is a sequence of independent identically distributed random variables with distribution $f_{n}$. Thus, $Z_{n-1}$ is the size of the $(n-1)$st generation of the branching process and $f_{n}$ is the distribution of the number of offspring of each particle in the $(n-1)$st generation. The sequence
$$
\begin{equation*}
S_{0}=0, \qquad S_{n}=X_{1}+\dots+X_{n}, \quad n\geqslant 1,
\end{equation*}
\notag
$$
where $X_{i}=\log F_{i}'(1)$, $i=1,2,\dots$, is called the associated random walk for the process $\mathcal{Z}$. Recall that, according to the classification introduced in [1], which is generally accepted now, a BPRE $\mathcal{Z}$ is called supercritical, subcritical or critical according to whether the associated random walk tends to $+\infty$, tends to $-\infty$ or oscillates (that is, $\limsup_{n\to\infty}S_{n}=+\infty$ and $\liminf_{n\to\infty}S_{n}=-\infty$ almost surely). In this section, we impose the following constraints on the properties of the BPRE to be analyzed. Condition B1. The step of the associated random walk satisfies Conditions A1 and A2. According to the above classification, Condition B1 implies the criticality of the BPRE under consideration. Our second condition on the properties of the random environment concerns the laws of particle reproduction. Let
$$
\begin{equation*}
\gamma (b)=\frac{\sum_{k=b}^{\infty}k^{2}F(\{k\}) }{\bigl(\sum_{i=0}^{\infty}iF(\{i\})\bigr)^{2}}.
\end{equation*}
\notag
$$
Condition B2. There are numbers $\varepsilon >0$ and $b\in\mathbb{N}$ such that
$$
\begin{equation*}
\mathbb{E}[(\log^{+}\gamma (b))^{\alpha +\varepsilon}]<\infty,
\end{equation*}
\notag
$$
where $\log^{+}x=\log (\max (x,1))$. It is known (see [1], Theorem 1.1 and Corollary 1.2) that if Conditions B1 and B2 are satisfied, then there are a constant $\theta \in (0,\infty)$ and a sequence $l(1),l(2),\dots$ slowly varying at infinity such that
$$
\begin{equation*}
\mathbb{P}(Z_{n}>0) \sim \theta n^{-(1-\rho)}l(n)
\end{equation*}
\notag
$$
as $n\to \infty $ and
$$
\begin{equation}
\lim_{n\to \infty}\mathbb{P}\biggl(\frac{\log Z_{[nt]}}{a_{n}}\leqslant x\biggm| Z_{n}\!>\!0\!\biggr) =\lim_{n\to \infty}\mathbb{P}\biggl(\frac{S_{[nt]}}{a_{n}}\leqslant x\biggm| Z_{n}\!>\!0\!\biggr) =\mathbf{P}(Y_{t}^{+}\leqslant x)
\end{equation}
\tag{6.1}
$$
for any $t\in [0,1] $ and $x\geqslant 0$, where $\mathcal{Y}^{+}=\{Y_{t}^{+},\,0\leqslant t\leqslant1\} $ denotes the meander of a strictly stable process $\mathcal{Y}$ of index $\alpha$. Thus, if a BPRE is critical, then, under the condition $Z_{n}>0$, the random variables $\log Z_{[nt]}$, $t\in (0,1]$, and $S_{n}$ (the value of the associated random walk that ensures the survival of the population up to time $n$) grow as $a_{n}$ multiplied by random factors. These results were supplemented in [28] by an analysis of the distributions of the appropriately rescaled random variables $\log Z_{[nt]}$, $t\in (0,1]$, considered under the condition that $Z_{n}>0$ and $S_{n}\leqslant \varphi (n)$, where $\varphi (n)\to \infty $ as $n\to \infty$ so that $\varphi (n)=o(a_{n})$. In this case, in view of (6.1), the event $\{S_{n}\leqslant \varphi(n)\} $ can be interpreted as unfavourable for the evolution of the critical branching process in random environment conditioned to survive by a distant moment of time. We introduce the notation
$$
\begin{equation*}
A_{\mathrm{u.s}}:=\{Z_{n}>0\text{ for all }n>0\}
\end{equation*}
\notag
$$
for the event meaning that the process survives throughout its existence and set
$$
\begin{equation*}
\tau_n:= \min\{k\geqslant 0\colon S_k=\min(0,L_n)\}.
\end{equation*}
\notag
$$
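Although it is not used in the proofs, the model just described is easy to simulate, and a simulation may help to visualize the interplay between the population size and the associated random walk. The following minimal Python sketch is purely illustrative: the lognormal law for $F'(1)$ and the geometric reproduction law are hypothetical choices made only to keep the sketch self-contained, and the only structural facts it relies on are the definition $S_{n}=X_{1}+\dots+X_{n}$, $X_{i}=\log F_{i}'(1)$, and the elementary identity $\mathbb{E}[Z_{n}\mid \mathcal{E},\,Z_{0}=1]=e^{S_{n}}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_environment(n):
    # One realization of the environment: the mean offspring numbers
    # F_1'(1), ..., F_n'(1).  Purely for illustration they are lognormal
    # with E[log F_i'(1)] = 0, so the associated random walk oscillates.
    return np.exp(rng.normal(0.0, 1.0, size=n))

def run_bpre(means, z0=1):
    # Given the environment, every particle of generation i-1 produces
    # offspring according to a geometric law on {0,1,2,...} with mean means[i-1].
    z, sizes = z0, [z0]
    for m in means:
        if z > 0:
            p = 1.0 / (1.0 + m)                          # geometric on {1,2,...} has mean 1/p
            z = int(rng.geometric(p, size=z).sum()) - z  # shift it to {0,1,2,...}
        sizes.append(z)
    return np.array(sizes)

n = 200
means = sample_environment(n)
S = np.concatenate(([0.0], np.cumsum(np.log(means))))    # associated random walk S_0,...,S_n
Z = run_bpre(means)                                      # population sizes Z_0,...,Z_n

print("survived up to time n:", Z[-1] > 0)
# Whenever the process is still alive, log Z_k typically stays close to S_k,
# in line with E[Z_k | environment, Z_0 = 1] = exp(S_k).
for k in (50, 100, 200):
    if Z[k] > 0:
        print(f"k={k}:  log Z_k = {np.log(Z[k]):.2f},  S_k = {S[k]:.2f}")
```

In a typical run the process either dies out quickly or, on the survival event, $e^{-S_{k}}Z_{k}$ stays of order one, which is the phenomenon formalized by the convergence (6.6) below.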
Note that, due to [27], Theorem 1, if Conditions B1 and B2 are satisfied, the distribution of the increments of the associated random walk is absolutely continuous, and $\varphi(n)=o(a_{n})$, then
$$
\begin{equation}
\mathbb{P}(S_{n}\leqslant \varphi (n),\,Z_{n}>0) \sim \Theta \, \mathbb{P}(S_{n}\leqslant \varphi (n),\,L_{n}>0) \sim \frac{\Theta}{na_{n}}\int_{0}^{\varphi (n)}V^{+}(w)\,dw
\end{equation}
\tag{6.2}
$$
as $n\to \infty $, where
$$
\begin{equation}
\Theta =\sum_{j=0}^{\infty}\sum_{k=1}^{\infty}\mathbb{P}(Z_{j}=k,\,\tau _{j}=j)\, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \in (0,\infty).
\end{equation}
\tag{6.3}
$$
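It is worth noting that the integral in (6.2) is regularly varying in $\varphi (n)$. A short computation, under the assumption (used via (2.4) in the proof of Theorem 4 below) that $V^{+}$ is regularly varying with index $\alpha \rho$, gives, for $y\in (0,1]$,
$$
\begin{equation*}
\frac{\displaystyle\int_{0}^{y\varphi (n)}V^{+}(w)\,dw}{\displaystyle\int_{0}^{\varphi (n)}V^{+}(w)\,dw}
=\frac{y\varphi (n)V^{+}(y\varphi (n))(1+o(1))}{\varphi (n)V^{+}(\varphi (n))(1+o(1))}\to y^{\alpha \rho +1}, \qquad n\to \infty,
\end{equation*}
\notag
$$
by Karamata's theorem (see [5] or [22]); combined with (6.2), this explains the form of the limit function $y^{\alpha \rho +1}$ in (6.4) below.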
A relation similar to (6.2), with $\displaystyle\int_{0}^{\varphi (n)}V^{+}(w)\,dw$ replaced by $\sum_{j=0}^{\varphi (n)}V^{+}(j)$, holds in the $(1;0)$-lattice case. Along with the asymptotic behaviour of the survival probability of a critical BPRE, the following theorem describes the rate of growth of the population size on the logarithmic scale. Theorem 3 (see [28]). Assume that Conditions B1 and B2 are satisfied. If $\varphi (n)\to \infty $ as $n\to \infty $ so that $\varphi (n)=o(a_{n})$, then
$$
\begin{equation}
\lim_{n\to \infty}\mathbb{P}\biggl(\frac{1}{\varphi (n)}\log Z_{n}\leqslant y\biggm| S_{n}\leqslant \varphi (n),\,Z_{n}>0\biggr) =y^{\alpha \rho +1}
\end{equation}
\tag{6.4}
$$
for any $y\in (0,1]$ and
$$
\begin{equation}
\lim_{n\to \infty}\mathbb{P}\biggl(\frac{1}{a_{n}}\log Z_{[ nt]}\leqslant x\biggm| S_{n}\leqslant \varphi (n),\,Z_{n}>0\biggr) =\mathbf{P}( Y_{t}^{++}\leqslant x)
\end{equation}
\tag{6.5}
$$
for any $t\in (0,1) $ and $x\in [0,\infty)$, where $\mathcal{Y}^{++}=\{Y_{t}^{++},\,0\leqslant t\leqslant1\} $ is the excursion of a strictly stable process $\mathcal{Y}$ of index $\alpha $. There is a substantial difference between the scaling orders in (6.4) and (6.5), which shows that there must be a phase transition in the rate of growth of the population size when we consider the process inside an interval $[n-m,n)$, where $m\to \infty $ and $m=o(n)$. Such a transition actually takes place, and the aim of this section is to prove that three distinct limit distributions occur when the random variables $\log Z_{n-m}$ are rescaled by $a_{m}$. The form of these distributions depends on which of the following three conditions holds as $\min (n-m,m)\to \infty $: the function $\varphi(n)$ is of order $o(a_{m})$; the function $\varphi (n)$ is proportional to $a_{m}$; $a_{m}=o(\varphi (n))$. Our main result is as follows. Recall that in the lattice case, for simplicity of presentation we agree to restrict ourselves to the case of a $(1;0)$-lattice. Theorem 4. Assume that Conditions B1 and B2 are satisfied, $\min (m,n)\to \infty $ and $m=o(n)$. Then: (1) if $\varphi (n)\to \infty $ as $n\to \infty $ so that $\varphi (n)=o(a_{m})$, then
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{P} \biggl(\frac{1}{a_{m}}\log Z_{n-m}\leqslant z\biggm| S_{n}\leqslant \varphi (n),\,Z_{n}>0\biggr) =A_{1}(z)
\end{equation*}
\notag
$$
for any $z\in (0,\infty)$, where $A_{1}(z)$ is defined in (3.1); (2) if $\varphi (n)\to \infty $ as $n\to \infty $ so that $\varphi (n)\sim Ta_{m}$, $T\in (0,\infty)$, then
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{P}\biggl(\frac{1}{a_{m}}\log Z_{n-m}\leqslant z\biggm| S_{n}\leqslant \varphi (n),\,Z_{n}>0\biggr) =B(z,T)
\end{equation*}
\notag
$$
for any $z\in [0,\infty)$, where $B(z,T)$ is defined in (3.12); (3) if $a_{m}=o(\varphi (n))$ and $\varphi (n)=o(a_{n})$, then
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{P}\biggl(\frac{1}{a_{m}}(\log Z_{n-m}-S_{n}) \leqslant z\biggm| S_{n}\leqslant \varphi (n),\,Z_{n}>0\biggr) =\mathbf{P}(Y_{1}\leqslant z)
\end{equation*}
\notag
$$
for any $z\in [0,\infty)$. Proof. For integers $0\leqslant r\leqslant n$, we consider the rescaled population size process $\mathcal{X}^{r,n}=\{X_{t}^{r,n},\,0\leqslant t\leqslant 1\} $ defined by
$$
\begin{equation*}
X_{t}^{r,n}=e^{-S_{r+[(n-r)t]}}Z_{r+[(n-r)t]}, \qquad 0\leqslant t\leqslant 1.
\end{equation*}
\notag
$$
It was shown in [28], Theorem 1, that if $r_{1},r_{2},\dots$ is a sequence of natural numbers such that $r_{n}\leqslant n$ and $r_{n}\to \infty $ and $\varphi (n)\to \infty $ as $n\to \infty $ so that $\varphi (n)=o(a_{n})$, then
$$
\begin{equation}
\mathcal{L}\{X_{t}^{r_{n},n},\,0\leqslant t\leqslant 1\mid S_{n}\leqslant \varphi (n),\,Z_{n}>0\} \quad \Longrightarrow\quad \mathcal{L}\{W_{t},\,0\leqslant t\leqslant1\}
\end{equation}
\tag{6.6}
$$
as $n\to \infty $, where $W_{t},0\leqslant t\leqslant 1$, is a stochastic process with almost surely constant trajectories, that is,
$$
\begin{equation*}
\mathbf{P}(W_{t}=W\text{ for all }t\in (0,1]) =1
\end{equation*}
\notag
$$
for some random variable $W$. In addition,
$$
\begin{equation}
\mathbf{P}(0<W<\infty) =1.
\end{equation}
\tag{6.7}
$$
Let
$$
\begin{equation*}
\widehat{Z}(k)=e^{-S_{k}}Z_{k}, \qquad k=0,1,2,\dots,
\end{equation*}
\notag
$$
and, for brevity, let
$$
\begin{equation}
R_{n}(x):=\{S_{n}\leqslant x,\,Z_{n}>0\}\quad\text{and} \quad Q_{n}(x):=\{S_{n}\leqslant x,\,L_{n}\geqslant 0\}.
\end{equation}
\tag{6.8}
$$
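The decompositions below rest on the elementary identity $\log Z_{k}=\log \widehat{Z}(k)+S_{k}$, valid on the event $\{Z_{k}>0\}$. In particular, on the event $\{\widehat{Z}(n-m)\in (M^{-1},M)\}$ we have
$$
\begin{equation*}
\bigl|\log Z_{n-m}-S_{n-m}\bigr|=\bigl|\log \widehat{Z}(n-m)\bigr|\leqslant \log M,
\end{equation*}
\notag
$$
so that, for fixed $M$, the behaviour of $\log Z_{n-m}$ on this event differs from that of $S_{n-m}$ by at most $\log M=o(a_{m})$.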
It follows from (6.6) that if $\min (m,n)\to \infty $ and $m=o(n)$, then
$$
\begin{equation*}
\lim_{n\to \infty}\mathbb{P}\bigl(\widehat{Z}(n-m)\leqslant x\bigm| R_{n}(\varphi(n))\bigr) =\mathbf{P}(W\leqslant x)
\end{equation*}
\notag
$$
for any continuity point $x\in (0,\infty)$ of the distribution of the random variable $W$. Using (6.7), we conclude that for any $\varepsilon >0$ there exist a number $M=M(\varepsilon)$ and integers $m_{0}$ and $n_{0}$ such that
$$
\begin{equation}
\mathbb{P}\bigl(\widehat{Z}(n-m)\in (M^{-1},M)\bigm| R_{n}(\varphi (n))\bigr) \geqslant 1-\varepsilon
\end{equation}
\tag{6.9}
$$
for all $m\geqslant m_{0}$ and $n-m\geqslant n_{0}$. For $z\in (0,\infty)$ we write the relation
$$
\begin{equation}
\begin{aligned} \, &\mathbb{P}\bigl(\log Z_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n))\bigr) \notag \\ &\qquad=\mathbb{P}\bigl(\log \widehat{Z}_{n-m}+S_{n-m}\leqslant za_{m},\,\widehat{Z}_{n-m}\in (M^{-1},M),\,R_{n}(\varphi (n))\bigr) \notag \\ &\qquad\qquad +\mathbb{P}\bigl(\log Z_{n-m}\leqslant za_{m},\,\widehat{Z}_{n-m}\notin (M^{-1},M),\,R_{n}(\varphi (n))\bigr) \end{aligned}
\end{equation}
\tag{6.10}
$$
and study separately the asymptotic behaviour of the terms on the right-hand side of this equality as $\min(m,n)\to \infty $, $m=o(n)$. In view of (6.9) it suffices to analyze the quantity
$$
\begin{equation*}
\begin{aligned} \, &\lim_{\min (m,n-m)\to \infty}\frac{\mathbb{P}(\log \widehat{Z}_{n-m}+S_{n-m}\leqslant za_{m},\,\widehat{Z}_{n-m}\in (M^{-1},M),\,R_{n}(\varphi (n)))}{\mathbb{P}(R_{n}(\varphi (n)))} \\ &\qquad\qquad \leqslant \lim_{\min (m,n-m)\to \infty}\frac{\mathbb{P}(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)))}{\mathbb{P}(R_{n}(\varphi (n)))}. \end{aligned}
\end{equation*}
\notag
$$
We fix an integer $J\geqslant 1$ and write the representation
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n))\bigr) \\ &\qquad=\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}>J\bigr) +\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}\leqslant J\bigr). \end{aligned}
\end{equation*}
\notag
$$
According to [27], Lemma 5,
$$
\begin{equation*}
\lim_{J\to \infty}\limsup_{n\to \infty }\frac{\mathbb{P}(R_{n}(\varphi (n)),\,\tau_{n}>J)}{\mathbb{P}(Q_{n}(\varphi (n)))}=0.
\end{equation*}
\notag
$$
Using this result and recalling (6.2), we infer that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{J\to \infty}\limsup_{n\to \infty} \frac{\mathbb{P}(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}>J)}{\mathbb{P}(R_{n}(\varphi (n)))} \\ &\qquad \leqslant \lim_{J\to \infty}\limsup_{n\to \infty} \frac{\mathbb{P}(R_{n}(\varphi (n)),\,\tau_{n}>J)}{\mathbb{P}(R_{n}(\varphi (n)))}=0. \end{aligned}
\end{equation*}
\notag
$$
For fixed $j\in [1,J]$,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau _{n}=j, \,S_{j}\leqslant -\sqrt{\varphi (n)}\,\bigr) \\ &\qquad \leqslant \mathbb{P}\bigl(S_{n}\leqslant \varphi (n),\,Z_{j}>0,\,\tau_{n}=j, \,S_{j}\leqslant -\sqrt{\varphi (n)}\,\bigr) \\ &\qquad =\mathbb{E}\bigl[\mathbb{P}(Z_{j}>0\mid \mathcal{E}),\,S_{n}\leqslant \varphi (n); \,\tau_{n}=j, \,S_{j}\leqslant -\sqrt{\varphi (n)}\,\bigr] \\ &\qquad \leqslant \mathbb{E}\bigl[e^{S_{j}},\,S_{n}\leqslant \varphi (n),\,\tau_{n}=j; \,S_{j}\leqslant -\sqrt{\varphi (n)}\,\bigr] =o(\mathbb{P}(Q_{n-j}(\varphi (n)))), \end{aligned}
\end{equation*}
\notag
$$
where the last equality was established in [28], Lemma 5. Furthermore,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P} \bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}=j,\,Z_{j}>K,\,S_{j}>-\sqrt{\varphi (n)}\,\bigr) \\ &\qquad \leqslant \mathbb{P}\bigl(S_{n}\leqslant \varphi (n),\,\tau_{n}=j, \,S_{j}>-\sqrt{\varphi (n)},\,Z_{j}>K\bigr) \\ &\qquad =\mathbb{P}\bigl(S_{n}-S_{j}\leqslant \varphi (n)-S_{j},\,\tau_{n}=j, \,S_{j}>-\sqrt{\varphi (n)},\,Z_{j}>K\bigr) \\ &\qquad \leqslant \mathbb{P}\bigl(S_{n}-S_{j}\leqslant \varphi (n)+\sqrt{\varphi(n)}, \,\tau_{n}=j,\,Z_{j}>K\bigr) \\ &\qquad =\mathbb{P}(\tau_{j}=j,\,Z_{j}>K)\, \mathbb{P}\bigl(S_{n-j}\leqslant \varphi (n)+\sqrt{\varphi (n)},\,L_{n-j}\geqslant 0\bigr) \\ &\qquad \leqslant \delta (K) \mathbb{P}\bigl(S_{n-j}\leqslant \varphi(n)+\sqrt{\varphi (n)},\,L_{n-j}\geqslant 0\bigr), \end{aligned}
\end{equation*}
\notag
$$
where we used the estimate
$$
\begin{equation}
\mathbb{P}(\tau_{j}=j,\,Z_{j}>K) \leqslant \mathbb{P}(Z_{j}>K) =\delta (K)\to 0
\end{equation}
\tag{6.11}
$$
as $K\to \infty $ at the last step.
From this point until the end of the proof we assume that the distribution of the random variable $X_{1}$ is absolutely continuous. To prove the required assertions in the case when $X_{1}$ has a lattice distribution, it is necessary to replace $\displaystyle\int $ by $\sum $ throughout what follows.
We consider the right-hand side of the equality
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},R_{n}(\varphi (n)), \,\tau_{n}=j,\,S_{j}>-\sqrt{\varphi (n)},\,Z_{j}\leqslant K\bigr) \\ &\qquad =\int_{-\sqrt{\varphi (n)}}^{0}\sum_{k=1}^{K} \mathbb{P}(S_{j}\in dq,\,Z_{j}=k,\,\tau_{j}=j) \\ &\qquad\qquad \times \mathbb{E}\bigl[\mathbb{P}(Z_{n-j}>0\mid \mathcal{E},\,Z_{0}=k) ; \,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi (n)-q)\bigr]. \end{aligned}
\end{equation*}
\notag
$$
Note that, due to the monotonicity of the survival probability, for any $k\in \mathbb{N}$ we have
$$
\begin{equation}
H_{n-j}(k):=\mathbb{P}(Z_{n-j}>0\mid \mathcal{E},\,Z_{0}=k)\to\mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid \mathcal{E},\,Z_{0}=k)=:H_{\infty}(k)
\end{equation}
\tag{6.12}
$$
$\mathbb{P}^{\uparrow}$-almost surely as $n-j\to \infty $. In addition, $H_{\infty}(k)>0$ $\mathbb{P}^{\uparrow}$-almost surely by Proposition 3.1 in [1]. Furthermore, for $q\in (-\sqrt{\varphi (n)},0]$ we have
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{E}\bigl[H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m}, \,Q_{n-j}\bigl(\varphi(n)+\sqrt{\varphi (n)}\,\bigr)\bigr] \\ &\qquad \geqslant \mathbb{E}\bigl[H_{n-j}(k); \,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi (n)-q)\bigr] \\ &\qquad \geqslant \mathbb{E}\bigl[H_{n-j}(k); \,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi (n))\bigr]. \end{aligned}
\end{equation*}
\notag
$$
By virtue of Theorem 2,
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{E}\bigl[H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi(n))\bigr] \\ &\qquad \sim \mathbb{P}\bigl(S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi(n))\bigr)\, \mathbb{E}^{\uparrow}[H_{\infty}(k)] \\ &\qquad =\mathbb{P}\bigl(S_{n-m-j}\leqslant za_{m}\bigm| Q_{n-j}(\varphi(n))\bigr) \, \mathbb{P}(Q_{n-j}(\varphi (n))) \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \end{aligned}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{E}\bigl[ H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}\bigl(\varphi(n)+\sqrt{\varphi (n)}\,\bigr)\bigr] \\ &\qquad \sim \mathbb{P}\bigl(S_{n-m-j}\leqslant za_{m}, \,Q_{n-j}\bigl(\varphi(n)+\sqrt{\varphi (n)}\,\bigr)\bigr) \, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \\ &\qquad =\mathbb{P}\bigl(S_{n-m-j}\leqslant za_{m}\bigm| Q_{n-j}\bigl(\varphi(n)+\sqrt{\varphi (n)}\,\bigr)\bigr) \\ &\qquad \qquad \times \mathbb{P}\bigl(Q_{n-j} \bigl(\varphi (n)+\sqrt{\varphi(n)}\,\bigr)\bigr)\, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \end{aligned}
\end{equation*}
\notag
$$
as $n\to \infty $.
Owing to the properties of regularly varying functions, the second equivalence in (6.2) and the asymptotic equivalence
$$
\begin{equation*}
\int_{0}^{\varphi (n)}V^{+}(w)\,dw \sim \frac{1}{\alpha \rho +1}\varphi(n)V^{+}(\varphi (n))
\end{equation*}
\notag
$$
as $n\to \infty $, which follows from (2.4), we have
$$
\begin{equation*}
\begin{aligned} \, 1 &\leqslant \lim_{n\to \infty}\sup_{-\sqrt{\varphi (n)}\leqslant q\leqslant0} \frac{\mathbb{P}(S_{n-j}\leqslant \varphi (n)-q,\,L_{n-j}\geqslant 0) }{\mathbb{P}(S_{n-j}\leqslant \varphi (n),\,L_{n-j}\geqslant 0)} \\ &= \lim_{n\to \infty}\frac{\mathbb{P}\bigl(S_{n-j}\leqslant \varphi(n)+\sqrt{\varphi (n)}, \,L_{n-j}\geqslant 0\bigr)}{\mathbb{P}(S_{n-j}\leqslant\varphi (n),\,L_{n-j}\geqslant 0)}=1. \end{aligned}
\end{equation*}
\notag
$$
This estimate in combination with (6.8) shows that
$$
\begin{equation*}
\begin{aligned} \, &\sup_{-\sqrt{\varphi (n)}\leqslant q\leqslant 0}\frac{\mathbb{E}[ H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m};\,Q_{n-j}(\varphi (n)-q)] }{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\quad \leqslant \sup_{-\sqrt{\varphi (n)}\leqslant q\leqslant 0}\frac{\mathbb{P}( Q_{n-j}(\varphi (n)-q))}{\mathbb{P}(Q_{n-j}(\varphi (n)))} =\frac{\mathbb{P}\bigl(S_{n-j}\leqslant \varphi (n)+\sqrt{\varphi (n)},\,L_{n-j}\geqslant 0\bigr)}{\mathbb{P}(S_{n-j}\leqslant \varphi (n),\,L_{n-j}\geqslant 0)}\leqslant C \end{aligned}
\end{equation*}
\notag
$$
for all $n\geqslant j$.
Now we consider separately the cases when $\varphi (n)=o(a_{m})$, $\varphi (n)\sim Ta_{m}$ and $a_{m}=o(\varphi (n))$.
1) Assume that $\varphi (n)=o(a_{m})$. In this case, applying (5.1) with $n-j$ in place of $n$ and with $H_{n-j}:=H_{n-j}(k)=\mathbb{P}(Z_{n-j}>0\mid \mathcal{E},\,Z_{0}=k)$, and recalling (6.12), for each $q\in [-\sqrt{\varphi (n)},0] $ we obtain
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbb{E}[H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi (n)-q)]}{\mathbb{P}(Q_{n-j}(\varphi(n)))} \\ &\qquad=A_{1}(z)\, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \lim_{n\to \infty}\frac{\mathbb{P}(Q_{n-j}(\varphi (n)-q))}{\mathbb{P}(Q_{n-j}(\varphi (n)))} =A_{1}(z)\, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}\,{=}\,k). \end{aligned}
\end{equation*}
\notag
$$
For $j\in \mathbb{N}_{0}$ and $K\in \mathbb{N}\cup\{\infty \} $ we set
$$
\begin{equation*}
\Theta_{j}(K):=\sum_{k=1}^{K}\mathbb{P}(Z_{j}=k,\,\tau_{j}=j)\, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k).
\end{equation*}
\notag
$$
Note that $\Theta_{j}(\infty)\leqslant \mathbb{P}(\tau_{j}=j) $.
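For the limit transitions below it is convenient to note that, by monotone convergence and the definition (6.3),
$$
\begin{equation*}
\lim_{K\to \infty}\Theta_{j}(K)=\Theta_{j}(\infty)\quad\text{for each } j\in \mathbb{N}_{0}, \qquad \sum_{j=0}^{\infty}\Theta_{j}(\infty)=\Theta <\infty.
\end{equation*}
\notag
$$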
Applying the dominated convergence theorem we conclude that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}=j, \,S_{j}>-\sqrt{\varphi (n)},\,Z_{j}\leqslant K\bigr)}{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =\lim_{n\to \infty}\int_{-\sqrt{\varphi(n)}}^{0} \sum_{k=1}^{K}\mathbb{P}(S_{j}\in dq,\,Z_{j}=k,\,\tau_{j}=j) \\ &\qquad \qquad \times \frac{\mathbb{E}[H_{n-j}(k);\,S_{n-m-j}\leqslant za_{m}, \,Q_{n-j}(\varphi (n)-q)]}{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =A_{1}(z)\int_{-\infty}^{0}\sum_{k=1}^{K} \mathbb{P}(S_{j}\in dq,\,Z_{j}=k,\,\tau_{j}=j) \, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \\ &\qquad =A_{1}(z)\Theta_{j}(K) \end{aligned}
\end{equation*}
\notag
$$
for fixed $K<\infty $.
Combining the above estimates and letting $K$ tend to infinity, we arrive at the relation
$$
\begin{equation*}
\lim_{n\to \infty}\frac{\mathbb{P}(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)),\,\tau_{n}=j)}{\mathbb{P}( Q_{n-j}(\varphi (n)))}=A_{1}(z)\Theta_{j}(\infty),
\end{equation*}
\notag
$$
or, in view of (6.2), at the relation
$$
\begin{equation*}
\lim_{n\to \infty}\frac{\mathbb{P}(S_{n-m}\leqslant za_{m}, \,S_{n}\leqslant \varphi (n),\,Z_{n}>0,\,\tau_{n}=j)} {\mathbb{P}(S_{n}\leqslant \varphi (n),\,Z_{n}>0)}=A_{1}(z)\frac{\Theta_{j}(\infty)}{\Theta}.
\end{equation*}
\notag
$$
Summing such equalities over $j$ and taking account of the definition of the quantity $\Theta $ in (6.3) we have
$$
\begin{equation*}
\lim_{n\to \infty}\frac{\mathbb{P}(\log Z_{n-m}\leqslant za_{m}, \,S_{n}\leqslant \varphi (n),\,Z_{n}>0)}{\mathbb{P}(S_{n}\leqslant\varphi (n),\,Z_{n}>0)}=A_{1}(z),
\end{equation*}
\notag
$$
as required.
2) Now assume that $\varphi (n)\sim Ta_{m}$, $T\in (0,\infty)$. Using (5.2) and the dominated convergence theorem, we see that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbb{P}\bigl(S_{n-m}\leqslant za_{m},\,R_{n}(\varphi (n)), \,\tau_{n}=j,\,S_{j}>-\sqrt{\varphi (n)},\,Z_{j}\leqslant K\bigr)}{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =\lim_{n\to \infty}\int_{-\sqrt{\varphi(n)}}^{0} \sum_{k=1}^{K}\mathbb{P}(S_{j}\in dq,\,Z_{j}=k,\,\tau_{j}=j) \\ &\qquad \qquad \times \frac{\mathbb{E}[H_{n-j}(k); \,S_{n-m-j}\leqslant za_{m},\,Q_{n-j}(\varphi (n)-q)]}{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =B(z,T)\int_{-\infty}^{0}\sum_{k=1}^{K}\mathbb{P} (S_{j}\in dq,\,Z_{j}=k,\,\tau_{j}=j) \, \mathbb{P}^{\uparrow}(A_{\mathrm{u.s}}\mid Z_{0}=k) \\ &\qquad =B(z,T)\Theta_{j}(K). \end{aligned}
\end{equation*}
\notag
$$
Letting $K$ tend to infinity, summing these relations over $j$, and taking account of (6.2) and the definition of $\Theta $, we obtain
$$
\begin{equation*}
\lim_{n\to \infty}\frac{\mathbb{P}(\log Z_{n-m}\leqslant za_{m}, \,S_{n}\leqslant \varphi (n),\,Z_{n}>0)}{\mathbb{P}(S_{n}\leqslant \varphi (n),\,Z_{n}>0)}=B(z,T),
\end{equation*}
\notag
$$
as required.
3) Finally, we consider the case when $m=m(n)\to \infty $ and $a_{m}=o(\varphi(n))$ as ${n\to \infty}$. We introduce the notation $S_{n-m,n}:=S_{n-m}-S_{n}$ and write the equality
$$
\begin{equation*}
\begin{aligned} \, &\mathbb{P}\bigl(\log Z_{n-m}-S_{n}\leqslant za_{m},\,R_{n}(\varphi (n))\bigr) \\ &\qquad =\mathbb{P}\bigl(\log \widehat{Z}_{n-m}+S_{n-m,n}\leqslant za_{m},\,\widehat{Z}_{n-m}\in (M^{-1},M),\,R_{n}(\varphi (n))\bigr) \\ &\qquad\qquad +\mathbb{P}\bigl(\log Z_{n-m}-S_{n}\leqslant za_{m},\,\widehat{Z}_{n-m}\notin (M^{-1},M),\,R_{n}(\varphi (n))\bigr) \end{aligned}
\end{equation*}
\notag
$$
for $z\in (-\infty,\infty)$. It is straightforward to verify that if we replace $S_{n-m}$ by $S_{n-m,n}$ and $S_{n-m-j}$ by $S_{n-m-j,n-j}$ in all relations between (6.10) and (6.11), then all estimates between (6.10) and (6.11) remain valid. Thus, we have
$$
\begin{equation*}
\lim_{J\to \infty}\lim_{n\to \infty} \frac{\mathbb{P}(\log Z_{n-m}-S_{n}\leqslant za_{m}, \,R_{n}(\varphi (n)),\,\tau_{n}>J)}{\mathbb{P}(R_{n}(\varphi (n)))}=0
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\lim_{K\to \infty}\lim_{n\to \infty} \frac{\mathbb{P}\bigl(\log Z_{n-m}\,{-}\,S_{n}\,{\leqslant}\, za_{m}, \,R_{n}(\varphi (n)),\,\tau_{n}\,{=}\,j,\,Z_{j}\,{>}\,K, \,S_{j}\,{>}\,{-}\sqrt{\varphi (n)}\,\bigr)}{\mathbb{P}(R_{n}(\varphi (n)))}=0
\end{equation*}
\notag
$$
for each fixed $j\in [0,J]$. Using (5.3) and the dominated convergence theorem, we conclude that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbb{P}\bigl(S_{n-m,n}\leqslant za_{m}, \,R_{n}(\varphi (n)),\,\tau_{n}=j,\,S_{j}>-\sqrt{\varphi (n)}, \,Z_{j}\leqslant K\bigr)}{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =\mathbf{P}(Y_{1}\leqslant z) \Theta_{j}(K). \end{aligned}
\end{equation*}
\notag
$$
Combining the above estimates and letting $K$ tend to infinity we see that
$$
\begin{equation*}
\begin{aligned} \, &\lim_{n\to \infty} \frac{\mathbb{P}\bigl(S_{n-m,n}\leqslant za_{m},\,R_{n}(\varphi (n)), \,\tau_{n}=j,\,S_{j}>-\sqrt{\varphi (n)}\,\bigr) }{\mathbb{P}(Q_{n-j}(\varphi (n)))} \\ &\qquad =\mathbf{P}(Y_{1}\leqslant z) \Theta_{j}(\infty). \end{aligned}
\end{equation*}
\notag
$$
Summing up the obtained equalities over $j$ and taking account of (6.2) and the definition of $\Theta $, we arrive at the relation
$$
\begin{equation*}
\lim_{n\to \infty}\frac{\mathbb{P}(\log Z_{n-m}-S_{n}\leqslant za_{m}, \,S_{n}\leqslant \varphi (n),\,Z_{n}>0)}{\mathbb{P} (S_{n}\leqslant \varphi (n),\,Z_{n}>0)}=\mathbf{P}(Y_{1}\leqslant z).
\end{equation*}
\notag
$$
Theorem 4 is proved. Acknowledgement The authors are grateful to the referee, whose comments made it possible to improve the presentation of the results in the paper.
Bibliography
1. V. I. Afanasyev, J. Geiger, G. Kersting and V. A. Vatutin, "Criticality for branching processes in random environment", Ann. Probab., 33:2 (2005), 645–673
2. V. I. Afanasyev, "Invariance principle for a critical Galton-Watson process attaining a high level", Theory Probab. Appl., 55:4 (2011), 559–574
3. V. I. Afanasyev, C. Böinghoff, G. Kersting and V. A. Vatutin, "Limit theorems for weakly subcritical branching processes in random environment", J. Theoret. Probab., 25:3 (2012), 703–732
4. J. Bertoin and R. A. Doney, "On conditioning a random walk to stay nonnegative", Ann. Probab., 22:4 (1994), 2152–2167
5. N. H. Bingham, C. M. Goldie and J. L. Teugels, Regular variation, Encyclopedia Math. Appl., 27, Cambridge Univ. Press, Cambridge, 1987, xx+491 pp.
6. E. Bolthausen, "On a functional central limit theorem for random walks conditioned to stay positive", Ann. Probab., 4:3 (1976), 480–485
7. A. Bryn-Jones and R. A. Doney, "A functional limit theorem for random walk conditioned to stay non-negative", J. London Math. Soc. (2), 74:1 (2006), 244–258
8. L. Chaumont, "Excursion normalisée, méandre et pont pour les processus de Lévy stables", Bull. Sci. Math., 121:5 (1997), 377–403
9. F. Caravenna, "A local limit theorem for random walks conditioned to stay positive", Probab. Theory Related Fields, 133:4 (2005), 508–530
10. F. Caravenna and L. Chaumont, "Invariance principles for random walks conditioned to stay positive", Ann. Inst. Henri Poincaré Probab. Stat., 44:1 (2008), 170–190
11. F. Caravenna and L. Chaumont, "An invariance principle for random walk bridges conditioned to stay positive", Electron. J. Probab., 18 (2013), 60, 32 pp.
12. L. Chaumont and R. A. Doney, "Invariance principles for local times at the maximum of random walks and Lévy processes", Ann. Probab., 38:4 (2010), 1368–1389
13. F. den Hollander, Random polymers, École d'Été de Probabilités de Saint-Flour XXXVII – 2007, Lecture Notes in Math., 1974, Springer-Verlag, Berlin, 2009, xiv+258 pp.
14. R. A. Doney, "Local behaviour of first passage probabilities", Probab. Theory Related Fields, 152:3–4 (2012), 559–588
15. W. Feller, An introduction to probability theory and its applications, v. 2, John Wiley & Sons, Inc., New York–London–Sydney, 1966, xviii+626 pp.
16. B. V. Gnedenko and A. N. Kolmogorov, Limit distributions for sums of independent random variables, Addison-Wesley Publishing Company, Inc., Cambridge, MA, 1954, ix+264 pp.
17. D. L. Iglehart, "Functional central limit theorems for random walks conditioned to stay positive", Ann. Probab., 2:2 (1974), 608–619
18. W. D. Kaigh, "An invariance principle for random walk conditioned by a late return to zero", Ann. Probab., 4:1 (1976), 115–121
19. G. Kersting and V. Vatutin, Discrete time branching processes in random environment, Math. Stat. Ser., John Wiley & Sons, London; ISTE, Hoboken, NJ, 2017, xiv+286 pp.
20. T. M. Liggett, "An invariance principle for conditioned sums of independent random variables", J. Math. Mech., 18:6 (1968), 559–570
21. B. A. Rogozin, "The distribution of the first ladder moment and height and fluctuation of a random walk", Theory Probab. Appl., 16:4 (1971), 575–595
22. E. Seneta, Regularly varying functions, Lecture Notes in Math., 508, Springer-Verlag, Berlin–New York, 1976, v+112 pp.
23. Ya. G. Sinai, "On the distribution of the first positive sum for a sequence of independent random variables", Theory Probab. Appl., 2:1 (1957), 122–129
24. C. Stone, "A local limit theorem for nonlattice multi-dimensional distribution functions", Ann. Math. Statist., 36:2 (1965), 546–551
25. V. A. Vatutin and V. Wachtel, "Local probabilities for random walks conditioned to stay positive", Probab. Theory Related Fields, 143:1–2 (2009), 177–217
26. V. Vatutin and E. Dyakonova, "Path to survival for the critical branching processes in a random environment", J. Appl. Probab., 54:2 (2017), 588–602
27. V. A. Vatutin and E. E. Dyakonova, Critical branching processes evolving in an unfavorable random environment, 2022, 15 pp., arXiv: 2209.13611
28. V. A. Vatutin and E. E. Dyakonova, "Population size of a critical branching process evolving in an unfavorable environment", Theory Probab. Appl., 68:3 (2023), 411–430