Sbornik: Mathematics, 2024, Volume 215, Issue 1, Pages 119–140
DOI: https://doi.org/10.4213/sm9923e

Asymptotic behaviour of the survival probability of almost critical branching processes in a random environment

V. V. Kharlamov

Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
Abstract: A generalization of the well-known result concerning the survival probability of a critical branching process in random environment Z_k is considered. The triangular array scheme of branching processes in random environment Z_{k,n} that are close to Z_k for large n is studied. The equivalence of the survival probabilities for the processes Z_{n,n} and Z_n is obtained under rather natural assumptions on the closeness of Z_{k,n} and Z_k.
Bibliography: 7 titles.
Keywords: random walks, branching processes, random environments.
This research was supported by the Russian Science Foundation under grant no. 19-11-00111, https://rscf.ru/en/project/19-11-00111/.
Received: 16.04.2023 and 18.07.2023
Document Type: Article
MSC: 60J80
Language: English
Original paper language: Russian

§ 1. Introduction

Branching processes in a random environment (BPREs) are a generalization of the well-known Galton–Watson branching processes. Unlike the latter, the distribution of the number of offspring in each generation of a BPRE depends on a random factor, called an environment. As in the case of Galton–Watson branching processes, the subcritical, critical and supercritical cases can be considered for BPREs. There are results concerning the survival probability for all three types of BPREs in the case when the environment consists of independent and identically distributed random elements.

In this paper we consider a critical BPRE. The asymptotic behaviour of the survival probability of the process was found in [1] in the case when the generating function of the offspring distribution of a single particle is linear fractional. Without any assumption on the explicit form of the offspring distribution of a single particle, some results were obtained in [2] and generalized in [3].

A sequence \Xi=\{\xi_i,\, i \in \mathbb{N}\} of independent and identically distributed random elements with values in a measurable space (Y, \mathcal{G}) is called a random environment. We denote by (\Omega, \mathcal{F}, \mathsf{P}_{\Xi}) the probability space on which the random environment \Xi is defined.

We consider a family of generating functions

\begin{equation*} \{f_y,\, y \in Y\} \end{equation*} \notag
and fix \omega \in \Omega. Set F_k :=f_{\xi_{k+1}}, k \geqslant 0, so that F_{k;\omega} is the generating function attached to generation k of the environment. The process Z=\{Z_k,\, k \geqslant 0\} with Z_0=1 in which, for a fixed environment, every particle of generation k produces offspring independently of the other particles with the generating function F_{k;\omega} is called a branching process in the random environment \Xi.

For each fixed \omega \in \Omega we introduce a probability measure \mathsf{P}_{\omega} on the space of sequences \mathbb{N}_0^{\infty}, where \mathbb{N}_0:=\{0\} \cup \mathbb{N}. For any G \subset \mathbb{N}_0^{\infty} and V \in \mathcal{F} we set

\begin{equation*} \mathsf{P}(Z \in G,\, \Xi \in V) :=\int_V \mathsf{P}_{\omega}(Z \in G)\, \mathsf{P}_{\Xi}(d\omega) \end{equation*} \notag
and extend the probability measure \mathsf{P} to (\mathbb{N}_0^{\infty} \times \Omega,\, \sigma(2^{\mathbb{N}_0^{\infty}} \times \mathcal{F})). Note that
\begin{equation*} \mathsf{P}(Z \in \mathbb{N}_0^{\infty},\, \Xi \in V)=\mathsf{P}_{\Xi}(V) \end{equation*} \notag
for any V \in \mathcal{F}.

We set

\begin{equation*} X_i :=\log F_{i-1}'(1), \qquad S_0 :=0\quad\text{and} \quad S_k :=X_1+\dots+X_k, \qquad i, k \in \mathbb{N}. \end{equation*} \notag
By the definition of \Xi the random variables X_i, i \geqslant 1, are independent and identically distributed. The sequence of random variables \{S_k,\, k \in \mathbb{N}\} is called the associated random walk.
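
In particular, the associated random walk describes the conditional mean population size: the standard branching-process identity

\begin{equation*} \mathsf{E}_{\omega} Z_k=\prod_{i=1}^{k} F_{i-1;\omega}'(1)=\exp\{S_k(\omega)\}, \qquad k \in \mathbb{N}, \end{equation*} \notag
which we record here only for orientation, explains why the asymptotic behaviour of the survival probability is governed by the fluctuations of S_k.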

Assumption 1. \mathsf{P}-almost surely, F_0'(1)<\infty, F_0''(1)<\infty and

\begin{equation*} \mathsf{E} X_1=0, \qquad \mathsf{D} X_1=\sigma^2 \in (0, \infty). \end{equation*} \notag

We introduce an operator T on the space of generating functions that maps a generating function f to the quantity

\begin{equation*} T(f)=\frac{f''(1)}{(f'(1))^2}. \end{equation*} \notag
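
For orientation we mention two elementary examples, which are given here only for illustration and are not used below: for the Poisson offspring law with mean \lambda and for the geometric law with parameter p \in (0, 1),

\begin{equation*} f(s)=e^{\lambda(s-1)}\colon \quad T(f)=\frac{\lambda^2}{\lambda^2}=1; \qquad f(s)=\frac{p}{1-(1-p)s}\colon \quad T(f)=\frac{2(1-p)^2/p^2}{((1-p)/p)^2}=2. \end{equation*} \notag
Thus T(f) is a normalized second factorial moment, and Assumption 2 below restricts the tail of T(F_0).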

Assumption 2. There exists \delta_1 \in (0, 1/2) such that

\begin{equation*} \sum_{j=1}^{\infty}\sqrt{\frac{h_j^0}{j}}<\infty, \quad\text{where } h_j^0 :=\mathsf{P}\bigl(T(F_0)> \exp\{j^{1/2-\delta_1}\}\bigr). \end{equation*} \notag

Theorem 1 (see [4], Theorem 5.1). Under Assumptions 1 and 2,

\begin{equation} \mathsf{P}(Z_n>0)\sim \Upsilon \frac{e^{-c_{-}}}{\sqrt{\pi n}}, \qquad n \to \infty, \end{equation} \tag{1.1}
where c_{-}=\sum_{k=1}^{\infty} k^{-1} (\mathsf{P}(S_k<0)-1/2) and \Upsilon is some positive constant.
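
We note for orientation that, combining (1.1) with (3.4) below, Theorem 1 can be read as

\begin{equation*} \mathsf{P}(Z_n>0) \sim \Upsilon\, \mathsf{P}(L_n \geqslant 0), \qquad n \to \infty, \quad\text{where } L_n :=\min\{S_0, \dots, S_n\}; \end{equation*} \notag
that is, the survival probability is asymptotically proportional to the probability that the associated random walk stays nonnegative up to time n.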

Remark 1. Theorem 1 follows from the results obtained in [3] and [4], but this fact requires a separate substantiation, which is presented in § 3.

In this paper we consider the triangular array scheme of branching processes \{Z_{k,n},\, {k \!\leqslant\! n}\} in the random environment \Xi. Our goal is to find conditions on Z_{k,n} under which the probabilities \mathsf{P}(Z_n>0) and \mathsf{P}(Z_{n,n}>0) are equivalent as n \to \infty.

The paper is structured as follows. In § 2 the main result is presented. In § 3 a theorem on survival probability for a BPRE is proved. In § 4 several assertions used to substantiate the main result are established.

§ 2. The main result

We consider a family of generating functions

\begin{equation*} \{f_{y,i,n},\, y \in Y,\, 0 \leqslant i<n\} \end{equation*} \notag
and fix \omega \in \Omega.

The set of random variables \{Z_{k,n},\,0 \leqslant k \leqslant n\} with Z_{0,n}=1, in which, for a fixed environment, every particle of generation i produces offspring independently of the other particles with the generating function F_{i,n;\omega}, F_{i,n} :=f_{\xi_{i+1},i,n}, is said to be a perturbed branching process in the random environment \Xi (PBPRE).

We state the following assumption on the smallness of perturbations.

Assumption 3. For all y \in Y, 0 \leqslant i<n and s \in [0, 1],

\begin{equation*} f_{y,i,n}(s) \to f_y(s), \qquad n \to \infty. \end{equation*} \notag

Remark 2. It follows from Assumption 3 that

\begin{equation*} \mathsf{P}_{\omega}\bigl((Z_{1,n}, \dots, Z_{k,n}) \in G\bigr) \to \mathsf{P}_{\omega}\bigl((Z_1, \dots, Z_k) \in G\bigr), \qquad n \to \infty, \end{equation*} \notag
for any \omega \in \Omega, k \in \mathbb{N} and G \subset \mathbb{N}_0^k.

We denote the deviations of the logarithms of the first moments of the generating functions by

\begin{equation*} a_{i,n}(\omega) :=\log F_{i-1,n;\omega}'(1)-\log F_{i-1;\omega}'(1); \quad\text{let}\quad b_{k,n}(\omega) :=\sum_{i=1}^k a_{i,n}(\omega). \end{equation*} \notag
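
Similarly to the identity \mathsf{E}_{\omega} Z_k=\exp\{S_k(\omega)\} for the original process, for a fixed environment we have

\begin{equation*} \mathsf{E}_{\omega} Z_{k,n}=\exp\{S_k(\omega)+b_{k,n}(\omega)\} \end{equation*} \notag
(this identity is used in (4.17) and (4.19) below), so that b_{k,n} measures the cumulative effect of the perturbation on the conditional mean population size.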

Assumption 4. For some \delta_2 \in (0, 1/2) and C_2>0 the sequence of events

\begin{equation*} Q_n :=\{\omega \in \Omega\colon |b_{k,n}(\omega)| \leqslant \theta(k),\, 1 \leqslant k \leqslant n\}, \end{equation*} \notag
where \theta(k) :=C_2 k^{1/2-\delta_2}, k \in \mathbb{N} and \theta(0) :=0, satisfies
\begin{equation*} \sqrt{n}\, \mathsf{P}(\overline{Q}_n) \to 0, \qquad n \to \infty. \end{equation*} \notag

Assumption 5. Let

\begin{equation*} \widehat{F}_j:=\sup_{n: n>j} T(F_{j,n}). \end{equation*} \notag
Then there exists \delta_3 \in (0, 1/2) such that
\begin{equation*} \sum_{j=1}^{\infty} h_j<\infty\quad\text{and} \quad \sum_{j=1}^{\infty} \sqrt{\frac{h_j}{j}}<\infty, \quad\text{where } h_j :=\mathsf{P}\bigl(\widehat{F}_j> \exp\{j^{1/2-\delta_3}\}\bigr). \end{equation*} \notag

The main result in this paper is as follows.

Theorem 2. Under Assumptions 1 and 3–5,

\begin{equation} \mathsf{P}(Z_{n,n}>0) \sim \mathsf{P}(Z_n>0) \sim \Upsilon \frac{e^{-c_{-}}}{\sqrt{\pi n}}, \qquad n \to \infty. \end{equation} \tag{2.1}
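
For illustration we mention a simple family of perturbations satisfying Assumptions 3–5 (this example is not used in what follows): binomial thinning, in which every particle of the original process is retained independently with probability c_n, that is,

\begin{equation*} f_{y,i,n}(s) :=f_y(1-c_n+c_n s) \quad\text{with, say, } c_n :=\exp\{-n^{-1/2-\delta_2}\}. \end{equation*} \notag
Then f_{y,i,n}(s) \to f_y(s) for all s \in [0,1], so Assumption 3 holds. Moreover, a_{i,n} \equiv \log c_n=-n^{-1/2-\delta_2}, whence |b_{k,n}|=k n^{-1/2-\delta_2} \leqslant k^{1/2-\delta_2} for 1 \leqslant k \leqslant n; thus Q_n=\Omega for C_2=1 and Assumption 4 holds trivially. Finally, thinning multiplies f_y'(1) by c_n and f_y''(1) by c_n^2, so that T(F_{j,n})=T(F_j) and \widehat{F}_j=T(F_j); hence Assumption 5 reduces to a tail condition on T(F_0) of the same form as Assumption 2.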

§ 3. Proof of Theorem 1

In what follows we use the notation K,K_1,\dots for positive constants, which are generally different in distinct assertions.

We prove the theorem on the basis of the proof of Theorem 5.1 in [4].

For an arbitrary nonnegative integer-valued random variable \zeta with generating function f we use the notation

\begin{equation*} f[c] :=\mathsf{P}(\zeta=c)\quad\text{and} \quad \varkappa(f; c) :=\frac{\sum_{y=c}^{\infty} y^2 f[y]}{(f'(1))^2}, \qquad c \in \mathbb{N}_0. \end{equation*} \notag

Assumption 6 (see [4], Assumption C). There exists c \in \mathbb{N}_0 such that

\begin{equation*} \mathsf{E} \bigl(\log^+ \varkappa(F_0; c)\bigr)^4<\infty. \end{equation*} \notag
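
We note in passing that for c=0 the quantity \varkappa is directly related to the operator T introduced in § 1:

\begin{equation*} \varkappa(f; 0)=\frac{\mathsf{E} \zeta^2}{(f'(1))^2} =\frac{f''(1)+f'(1)}{(f'(1))^2}=T(f)+\frac{1}{f'(1)}, \end{equation*} \notag
which indicates why a tail condition on T(F_0) (Assumption 2) can play the role of the moment condition in Assumption 6.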

Theorem 3 (see [4], Theorem 5.1). Relation (1.1) holds under Assumptions 1 and 6.

In the proof of Theorem 5.1 in [4], Assumption C is used only to prove Lemma 5.5. We prove this lemma in the case when Assumptions 1 and 2 hold.

The statement of Lemma 5.5 involves a certain measure \mathsf{P}^+. We will not dwell on the details of its construction: it was thoroughly described in [4], § 5.

By the definition of the measure \mathsf{P}^+,

\begin{equation} \mathsf{E}^+ Y_n=\mathsf{E}\bigl(Y_n U(S_n);\,L_n \geqslant 0\bigr) \end{equation} \tag{3.1}
for a \sigma(S_1, \dots, S_n)-measurable nonnegative random variable Y_n, where
\begin{equation*} U(x) :=\mathsf{I}\{x \geqslant 0\} + \sum_{n=1}^{\infty} \mathsf{P}\bigl(S_n \geqslant -x,\,\max\{S_1, \dots, S_n\}<0\bigr) \end{equation*} \notag
and
\begin{equation*} L_n :=\min\{S_0, \dots, S_n\}. \end{equation*} \notag
The function U(x) has a number of useful properties, which were described in [4], § 4.4.3. In particular, U(x) is a renewal function; hence
\begin{equation} U(x + y) \leqslant U(x) + U(y), \qquad x, y \geqslant 0. \end{equation} \tag{3.2}
Owing to Lemma 4.3 in [4], we have
\begin{equation} U(x) \sim \frac{x \sqrt{2}}{\sigma}\, e^{c_{-}}, \qquad x \to \infty. \end{equation} \tag{3.3}
By virtue of (3.3) and Assumption 1, the quantity \mathsf{E}\, U(X_1)^2 is finite.

Lemma 1 (see [4], Theorem 4.6). Let L_{k,n} :=\min\{S_k, \dots, S_{n}\}\,{-}\,S_k. Then, under Assumption 1,

\begin{equation} \mathsf{P}(L_{k,n+k} \geqslant 0) =\mathsf{P}(L_n \geqslant 0) \sim \frac{e^{-c_{-}}}{\sqrt{\pi n}}, \qquad n \to \infty, \end{equation} \tag{3.4}
for each k \in \mathbb{N}_0. In addition, there exists a constant K such that
\begin{equation} \mathsf{P}(L_{k,n} \geqslant 0) \leqslant \frac{K}{\max\{\sqrt{n-k}, K\}} \end{equation} \tag{3.5}
for all 0 \leqslant k \leqslant n.

Lemma 2. Assume that Assumptions 1 and 2 hold. Then the series

\begin{equation} \sum_{j=1}^{\infty} T(F_j) \exp\{-S_{j-1}\} \end{equation} \tag{3.6}
converges \mathsf{P}^+-almost surely.

Proof. It was deduced in the proof of Lemma 5.5 in [4] that by Assumption 1, for any \delta \in (0, 1/2) there exists a set \Omega''=\Omega''(\delta), \mathsf{P}^+(\Omega'')=1, such that for any \omega \in \Omega'' the inequality
\begin{equation} S_j(\omega) \geqslant D_1(\omega) j^{1/2-\delta} \end{equation} \tag{3.7}
holds for all j \in \mathbb{N} and some positive function D_1(\omega). We choose \delta :=\delta_1/2>0 and fix \omega \in \Omega''.

To estimate the sum (3.6) we need an estimate for the probability

\begin{equation*} \mathsf{P}^+\bigl(T(F_j)>x\bigr), \qquad x \geqslant 1. \end{equation*} \notag
By virtue of the independence of (S_j, L_j) and (T(F_j), X_{j+1}), in view of (3.1) and (3.2) we have
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}^+(T(F_j)>x) =\mathsf{E}\bigl(\mathsf{I}\{T(F_j)>x\}\, U(S_{j+1}) \, \mathsf{I}\{L_{j+1} \geqslant 0\}\bigr) \\ \notag &\qquad\leqslant \mathsf{E}\bigl((U(S_j) + U(X_{j+1}))\, \mathsf{I}\{T(F_j)>x\} \, \mathsf{I}\{L_j \geqslant 0\}\bigr) \\ \notag &\qquad=\mathsf{E}\bigl(U(S_j)\, \mathsf{I}\{L_j \geqslant 0\}\bigr)\, \mathsf{P}\big(T(F_j)>x\bigr) + \mathsf{E}\bigl(U(X_{j+1}) \, \mathsf{I}\{T(F_j)>x\}\bigr)\mathsf{P}(L_j \geqslant 0) \\ &\qquad=\mathsf{P}\bigl(T(F_j)>x\bigr)+ \mathsf{E}\bigl(U(X_{j+1}) \, \mathsf{I}\{T(F_j)>x\}\bigr)\mathsf{P}(L_j \geqslant 0). \end{aligned} \end{equation} \tag{3.8}

Using the Cauchy–Schwarz–Bunyakovsky inequality, Assumption 1 and relation (3.3), we obtain

\begin{equation} \begin{aligned} \, \notag \mathsf{E}\bigl(U(X_{j+1}) \mathsf{I}\{T(F_j)>x\}\bigr) &\leqslant \sqrt{\mathsf{E} U(X_{j+1})^2\, \mathsf{E}\, \mathsf{I}^2\{T(F_j)>x\}} \\ &=\sqrt{\mathsf{E} U(X_{j+1})^2\, \mathsf{P}\bigl(T(F_j)>x\bigr)} \leqslant K_1 \sqrt{\mathsf{P}\bigl(T(F_j)>x\bigr)}. \end{aligned} \end{equation} \tag{3.9}

Combining the estimates (3.8) and (3.9) and using Lemma 1 we infer that

\begin{equation} \sum_{j=1}^{\infty} \mathsf{P}^+(T(F_j)>x_j) \leqslant \sum_{j=1}^{\infty} h_j^0 + K_2 \sum_{j=1}^{\infty} \sqrt{\frac{h_j^0}{j}} \end{equation} \tag{3.10}
for x=x_j=\exp\{j^{1/2-\delta_1}\}. The second term on the right-hand side of (3.10) is finite by Assumption 2. We prove the finiteness of the first term on the right-hand side of (3.10).

Note that

\begin{equation} \sum_{j=1}^{\infty} h_j^0 =\sum_{j=1}^{\infty} \sqrt{j h_j^0}\, \sqrt{\frac{h_j^0}{j}} \leqslant \sup\Bigl\{\sqrt{j h_j^0}\Bigm| j \in \mathbb{N}\Bigr\} \sum_{j=1}^{\infty} \sqrt{\frac{h_j^0}{j}}. \end{equation} \tag{3.11}
If the first factor on the right-hand side of (3.11) is finite, then the first term on the right-hand side of (3.10) is too. Assume the contrary. Then there exists an increasing sequence \{n_k \in \mathbb{N} \mid k \in \mathbb{N}\} such that h_{n_k}^0>1/n_k. The sequence h_j^0 does not increase, which yields
\begin{equation} \sum_{j=[n_k/2]}^{n_k} \sqrt{\frac{h_j^0}{j}} \geqslant \sum_{j=[n_k/2]}^{n_k} \sqrt{\frac{2 h_{n_k}^0}{n_k}} \geqslant \sum_{j=[n_k/2]}^{n_k} \frac{\sqrt{2}}{n_k} \geqslant \frac{1}{\sqrt{2}}. \end{equation} \tag{3.12}
Since n_k \geqslant k for all k \in \mathbb{N} the relation (3.12) contradicts Assumption 2 in view of Cauchy’s criterion.

By virtue of the convergence of the two series on the right-hand side of (3.10) and the Borel–Cantelli lemma, there exists a set \Omega''', \mathsf{P}^+(\Omega''')=1, such that for any \omega \in \Omega''' there exists a positive function D_2(\omega) such that

\begin{equation} T(F_j) \leqslant D_2(\omega) \exp\{j^{1/2-\delta_1}\}. \end{equation} \tag{3.13}
Using (3.13) we obtain
\begin{equation*} \sum_{j=1}^{\infty} T(F_j) \exp\{-S_{j-1}\} \leqslant D_2(\omega) \sum_{j=1}^{\infty} \exp\{j^{1/2-\delta_1} - D_1(\omega) (j-1)^{1/2-\delta_1/2}\} < \infty \end{equation*} \notag
for \omega \in \Omega' :=\Omega'' \cap \Omega'''.

Lemma 2 is proved.

Proof of Theorem 1. By Lemma 2 and Corollary 5.7 in [4] there is a set \Omega' \subset \Omega, \mathsf{P}^+(\Omega')=1, such that
\begin{equation*} \liminf_{n \to \infty} \mathsf{P}_{\omega}(Z_n>0)>0 \end{equation*} \notag
for \omega \in \Omega'. It follows that
\begin{equation} \mathsf{P}^+\biggl(\bigcap_{n=1}^{\infty} \{Z_n>0\}\biggr)>0. \end{equation} \tag{3.14}
This relation replaces Lemma 5.8 in [4], which is used there in the proof of Theorem 5.1. The other assertions in [4] involved in the proof of Theorem 5.1, namely Lemmas 5.2 and 5.9 and Theorem 4.6, use only Assumption 1.

Theorem 1 is proved.

§ 4. Auxiliary assertions and the proof of the main result

Lemma 3. Let

\begin{equation*} J_0=\Omega\quad\textit{and} \quad J_k=\{S_i>S_k\ \forall\, i \in \{0, \dots, k-1\}\}, \quad k \in \mathbb{N}. \end{equation*} \notag
Then
\begin{equation} \mathsf{P}(Z_{n,n}>0)=\sum_{k=0}^n \sum_{l=1}^{+\infty} A_{k,l,n} B_{n-k,l,n}\quad\textit{and} \quad \mathsf{P}(Z_n>0) =\sum_{k=0}^n \sum_{l=1}^{+\infty} A_{k,l} B_{n-k,l}, \end{equation} \tag{4.1}
where
\begin{equation*} \begin{gathered} \, A_{k,l,n} :=\mathsf{P}(\{Z_{k,n}=l\} \cap J_k), \qquad B_{n-k,l,n} :=\mathsf{P}(Z_{n,n}>0,\, L_{k,n} \geqslant 0 \mid Z_{k,n}=l), \\ A_{k,l} :=\mathsf{P}(\{Z_k=l\} \cap J_k)\quad\textit{and} \quad B_{n-k,l} :=\mathsf{P}(Z_n>0,\,L_{k,n} \geqslant 0 \mid Z_k=l). \end{gathered} \end{equation*} \notag

Proof. We prove only the first part of (4.1). The second part is a special case of the first for F_{i,n} \equiv F_i.

Let

\begin{equation*} \tau_n :=\min\bigl\{\arg \min\{S_k \mid k \in \{0, \dots, n\}\}\bigr\} \end{equation*} \notag
denote the first moment of time at which the random walk attains its minimum on the set \{0, \dots, n\}. Then
\begin{equation} \mathsf{P}(Z_{n,n}>0) =\sum_{k=0}^n \sum_{l=1}^{+\infty} \mathsf{P}(Z_{n,n}>0,\, Z_{k,n}=l,\, \tau_n=k). \end{equation} \tag{4.2}
The event \{\tau_n=k\} coincides with the event
\begin{equation*} \{S_i>S_k \ \forall\, i \in \{0, \dots, k-1\},\,L_{k,n} \geqslant 0\} =J_k \cap \{L_{k,n} \geqslant 0\}. \end{equation*} \notag
By the independence of (S_1, \dots, S_k) and (L_{k,n}, Z_{n,n}) under the condition Z_{k,n}=l, we have
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}(Z_{n,n}>0,\, Z_{k,n}=l,\, \tau_n=k) \\ &\qquad =\mathsf{P}(\{Z_{k,n}=l\} \cap J_k)\, \mathsf{P}(Z_{n,n}>0, \, L_{k,n} \geqslant 0 \mid Z_{k,n}=l) \end{aligned} \end{equation} \tag{4.3}
for arbitrary natural k and l. Using (4.2) and (4.3) we arrive at the required equality (4.1).

The lemma is proved.

To prove Theorem 2 we divide the double sum in the representation from Lemma 3 into the two parts corresponding to the subsets of indices k \leqslant m and k>m, respectively. We show that the second part is negligible in comparison with the first for large m. For this purpose we need the following estimate.

Lemma 4. Let \widehat{L}_k denote \min\{S_i + \theta(i) \mid 0 \leqslant i \leqslant k\}. Then, under Assumption 1, for any \delta \in (0, 1/2) there exists a constant K such that

\begin{equation} \mathsf{E} \exp\{\widehat{L}_k\} \leqslant \frac{K}{k^{1/2-\delta}} \end{equation} \tag{4.4}
for all k \in \mathbb{N}.

Proof. We have
\begin{equation} \mathsf{E} \exp\{\widehat{L}_k\} =\mathsf{E}\bigl(\exp\{\widehat{L}_k\}; \, \widehat{L}_k \,{\leqslant}\, {-}\log k\bigr) + \mathsf{E}\bigl(\exp\{\widehat{L}_k\};\, \widehat{L}_k \,{>}\,{-}\log k\bigr) =: p_1 + p_2. \end{equation} \tag{4.5}
The inequality
\begin{equation} p_1 \leqslant \mathsf{E}\biggl(\frac{1}{k};\, \widehat{L}_k \leqslant-\log k\biggr) \leqslant \frac{1}{k} \end{equation} \tag{4.6}
holds.

We estimate the quantity p_2. We define random variables \nu_i recursively: set \nu_0=0 and, assuming that \nu_i has already been defined for some i \geqslant 0, put

\begin{equation} \nu_{i+1} :=\min\{j>\nu_i\mid S_j-S_{\nu_i} + \theta(j-\nu_i)<0\} \in \mathbb{N} \cup \{+\infty\}. \end{equation} \tag{4.7}
We set \rho_k :=\max\{r \geqslant 0 \mid \nu_r \leqslant k\}.

Then

\begin{equation} p_2 \leqslant \mathsf{P}\bigl(\widehat{L}_k >-\log k\bigr) \leqslant \mathsf{P}\bigl(\rho_k \leqslant k^{\delta/2}-1\bigr)+ \mathsf{P}\bigl(\widehat{L}_k >-\log k,\, \rho_k>k^{\delta/2}-1\bigr). \end{equation} \tag{4.8}

We estimate the first term on the right-hand side of (4.8). We set \Delta_0 \,{:=}\, 0 and \Delta_i :=\nu_i-\nu_{i-1} for i>0 and note that the \Delta_i are independent, identically distributed random variables for i>0. Since the events \{\rho_k \leqslant r\} and \{\nu_{r+1}>k\} coincide for an arbitrary r, Markov’s inequality and the subadditivity of the function g(x) :=x^{1/2-\delta/2}, x \geqslant 0, imply the relation

\begin{equation} \begin{aligned} \, \notag \mathsf{P}(\rho_k \leqslant r)=\mathsf{P}(\nu_{r+1}>k) &=\mathsf{P}\biggl(\biggl(\sum_{i=1}^{r+1} \Delta_i\biggr)^{1/2-\delta/2} > k^{1/2-\delta/2}\biggr) \\ &\leqslant \frac{\mathsf{E} \bigl(\sum_{i=1}^{r+1} \Delta_i\bigr)^{1/2-\delta/2} }{k^{1/2-\delta/2}} \leqslant \frac{(r+1) \, \mathsf{E} \Delta_1^{1/2-\delta/2}}{k^{1/2-\delta/2}}. \end{aligned} \end{equation} \tag{4.9}
It follows from Assumption 1 that g(i)=o(\mathsf{D} S_i) as i \to \infty. Owing to [5], Theorem 1,
\begin{equation*} \mathsf{P}(\Delta_1 \geqslant i) \leqslant \frac{K_1}{i^{1/2-\delta/4}} \end{equation*} \notag
for i \geqslant 1. Then
\begin{equation} \begin{aligned} \, \notag \mathsf{E} \Delta_1^{1/2-\delta/2} &=\sum_{i=1}^{\infty} (i^{1/2-\delta/2}-(i-1)^{1/2-\delta/2}) \, \mathsf{P}(\Delta_1 \geqslant i) \\ &\leqslant K_1 + \sum_{i=1}^{\infty} \frac{K_2}{i^{1+\delta/4}} =: K_3<\infty. \end{aligned} \end{equation} \tag{4.10}
By estimates (4.9) and (4.10) we have
\begin{equation} \mathsf{P}(\rho_k \leqslant k^{\delta/2}-1) \leqslant \frac{K_3 k^{\delta/2}}{k^{1/2-\delta/2}} =\frac{K_3}{k^{1/2-\delta}}. \end{equation} \tag{4.11}

Now we estimate the second term on the right-hand side of (4.8). For any i>0, by the subadditivity of \theta(i) we have

\begin{equation} \begin{aligned} \, \notag S_{\nu_i} + \theta(\nu_i)- (S_{\nu_{i-1}} + \theta(\nu_{i-1})) &=S_{\nu_i}-S_{\nu_{i-1}} + \theta(\nu_i)-\theta(\nu_{i-1}) \\ &\leqslant S_{\nu_i}-S_{\nu_{i-1}} + \theta(\nu_i-\nu_{i-1})=: \eta_i. \end{aligned} \end{equation} \tag{4.12}
Note that the \eta_i, i>0, are independent and identically distributed negative random variables. It follows that
\begin{equation} \begin{aligned} \, \notag \widehat{L}_k &=\min\{S_{\nu_i} + \theta(\nu_i)\mid 0 \leqslant i \leqslant \rho_k\} =S_{\nu_{\rho_k}} + \theta(\nu_{\rho_k}) \\ &=\sum_{i=1}^{\rho_k} \bigl(S_{\nu_i} + \theta(\nu_i) - (S_{\nu_{i-1}} + \theta(\nu_{i-1}))\bigr) \leqslant \sum_{i=1}^{\rho_k} \eta_i. \end{aligned} \end{equation} \tag{4.13}
By virtue of (4.13) and Markov's inequality, and since the \eta_i are negative, the second term on the right-hand side of (4.8) is estimated as follows:
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}(\widehat{L}_k >-\log k,\, \rho_k>k^{\delta/2}-1) \\ &\qquad\leqslant \mathsf{P}\biggl(\sum_{i=1}^{\rho_k} \eta_i >-\log k,\, \rho_k>k^{\delta/2}-1\biggr) \leqslant \mathsf{P}\biggl(\sum_{i=1}^{[k^{\delta/2}-1]} \eta_i >-\log k\biggr) \notag \\ &\qquad=\mathsf{P}\biggl(\exp\biggl\{\sum_{i=1}^{[k^{\delta/2}-1]} \eta_i\biggr\} > \frac{1}{k}\biggr) \leqslant k (\mathsf{E} e^{\eta_1})^{[k^{\delta/2}-1]}. \end{aligned} \end{equation} \tag{4.14}
Note that \mathsf{E} e^{\eta_1}<1, which yields
\begin{equation} k (\mathsf{E} e^{\eta_1})^{[k^{\delta/2}-1]} \leqslant \frac{K_4}{k}, \qquad k \in \mathbb{N}. \end{equation} \tag{4.15}
The estimates (4.5), (4.6), (4.8), (4.11), (4.14) and (4.15) imply the required inequality
\begin{equation*} \mathsf{E} \exp\{\widehat{L}_k\} \leqslant \frac{1}{k} + \frac{K_3}{k^{1/2-\delta}} + \frac{K_4}{k} \leqslant \frac{K}{k^{1/2-\delta}}. \end{equation*} \notag

Lemma 4 is proved.

Lemma 5. Let

\begin{equation*} Q_{k,n} :=\{\omega \in \Omega\colon|b_{i,n}(\omega)| \leqslant \theta(i), \, 1 \leqslant i \leqslant k\} \end{equation*} \notag
for k, n \in \mathbb{N}, k \leqslant n. Then under Assumption 1, for some constants \delta_2 \in (0, 1/2) and K>0 the inequality
\begin{equation*} \mathsf{P}(\{Z_{k,n}>0\} \cap Q_{k,n} \cap J_k) \leqslant \frac{K}{k^{1+\delta_2/8}} \end{equation*} \notag
holds for 1 \leqslant k \leqslant n.

Proof. We set
\begin{equation} q_1 :=\mathsf{P}\bigl(\{Z_{k,n}>0,\,S_k \leqslant -k^{1/2-\delta_2/2}\} \cap Q_{k,n} \cap J_k\bigr) \end{equation} \tag{4.16}
and
\begin{equation*} q_2 :=\mathsf{P}\bigl(\{Z_{k,n}>0,\,S_k>-k^{1/2-\delta_2/2}\} \cap Q_{k,n} \cap J_k\bigr). \end{equation*} \notag
By the definition of the event Q_{k,n} the first quantity is estimated as follows:
\begin{equation} \begin{aligned} \, \notag q_1 &\leqslant \mathsf{E}\bigl(Z_{k,n};\, \{S_k \leqslant -k^{1/2-\delta_2/2}\} \cap Q_{k,n} \cap J_k\bigr) \\ \notag &\leqslant \mathsf{E}\bigl(\exp\{S_k + \theta(k)\};\, S_k \leqslant -k^{1/2-\delta_2/2}\bigr) \\ &\leqslant \exp\{k^{1/2-\delta_2}(-k^{\delta_2/2} + k^{\delta_2-1/2} \theta(k))\} =\exp\{k^{1/2-\delta_2} (-k^{\delta_2/2} + C_2)\}. \end{aligned} \end{equation} \tag{4.17}
We introduce the notation
\begin{equation*} \widehat{\tau}_k=\min\biggl\{\arg \min\biggl\{S_i + \theta(i)\biggm| i \leqslant \biggl[\frac k3\biggr]\biggr\}\biggr\}. \end{equation*} \notag
Let \omega \in Q_{k,n}. Since the event \{Z_{k,n}>0\} is contained in \{Z_{\widehat{\tau}_k,n}>0\} and Z_{\widehat{\tau}_k,n} is integer valued, we have
\begin{equation} \mathsf{P}_{\omega}(Z_{k,n}>0) \leqslant \mathsf{P}_{\omega}(Z_{\widehat{\tau}_k,n}>0) \leqslant \mathsf{E}_{\omega} Z_{\widehat{\tau}_k,n}. \end{equation} \tag{4.18}
Owing to the properties of the branching process and the choice of \omega, we have
\begin{equation} \mathsf{E}_{\omega} Z_{\widehat{\tau}_k,n} =\exp\{S_{\widehat{\tau}_k}(\omega) \,{+}\, b_{\widehat{\tau}_k,n}(\omega)\} \leqslant \exp\{S_{\widehat{\tau}_k}(\omega) \,{+}\, \theta(\widehat{\tau}_k)\} =\exp\{\widehat{L}_{[k/3]}(\omega)\}. \end{equation} \tag{4.19}
By virtue of relations (4.18) and (4.19),
\begin{equation} \begin{aligned} \, \notag q_2 &\leqslant \mathsf{E}\bigl(\exp\{\widehat{L}_{[k/3]}\};\, \{S_k>-k^{1/2-\delta_2/2}\} \cap J_k\bigr) \\ \notag &\leqslant \mathsf{E}\biggl(\exp\{\widehat{L}_{[k/3]}\};\, S_i>S_k \ \forall\, i \in \biggl\{\biggl[\frac{2k}{3}\biggr], \dots, k-1\biggr\},\, S_k \in (-k^{1/2-\delta_2/2}, 0)\biggr) \\ &=\mathsf{E} Y \, \mathsf{I}\{S_k \in (-k^{1/2-\delta_2/2}, 0)\}, \end{aligned} \end{equation} \tag{4.20}
where
\begin{equation*} Y :=\exp\{\widehat{L}_{[k/3]}\}\, \mathsf{I}\biggl\{S_i>S_k\ \forall\, i \in \biggl\{\biggl[\frac{2k}3\biggr], \dots, k-1\biggr\}\biggr\}. \end{equation*} \notag

We consider the sigma algebra

\begin{equation*} \mathcal{H}_k :=\sigma(X_1, \dots, X_{[k/3]}, X_{[2 k/3] + 1}, \dots, X_k). \end{equation*} \notag
Since the random variable Y is \mathcal{H}_k-measurable, we have
\begin{equation} \begin{aligned} \, \notag \mathsf{E} Y \, \mathsf{I}\{S_k \in (-k^{1/2-\delta_2/2}, 0)\} &=\mathsf{E} \bigl(\mathsf{E}(Y \mathsf{I}\{S_k \in (-k^{1/2-\delta_2/2}, 0)\} \mid \mathcal{H}_k)\bigr) \\ &=\mathsf{E}(Y\, \mathsf{P}\bigl(S_k \in (-k^{1/2-\delta_2/2}, 0) \mid \mathcal{H}_k)\bigr). \end{aligned} \end{equation} \tag{4.21}
Note that
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}\bigl(S_k \in (-k^{1/2-\delta_2/2}, 0)\bigm| \mathcal{H}_k\bigr) \\ \notag &\qquad =\mathsf{P}\bigl(S_{[2 k/3]}-S_{[k/3]} + (S_{[k/3]} + S_k-S_{[2 k/3]}) \in (-k^{1/2-\delta_2/2}, 0) \bigm| \mathcal{H}_k\bigr) \\ &\qquad \leqslant \sup_{x \in \mathbb{R}} \mathsf{P}\bigl(S_{[2 k/3]}-S_{[k/3]} + x \in (0, k^{1/2-\delta_2/2}) \bigm| \mathcal{H}_k\bigr). \end{aligned} \end{equation} \tag{4.22}
Since the random variable S_{[2 k/3]}-S_{[k/3]} is independent of \mathcal{H}_k, the following identity holds \mathsf{P}-almost surely:
\begin{equation} \begin{aligned} \, \notag &\sup_{x \in \mathbb{R}} \mathsf{P}\bigl(S_{[2 k/3]}-S_{[k/3]} + x \in (0, k^{1/2-\delta_2/2})\bigm| \mathcal{H}_k\bigr) \\ &\qquad =\sup_{x \in \mathbb{R}}\mathsf{P}\bigl(S_{[2 k/3]-[k/3]} + x\in (0, k^{1/2-\delta_2/2})\bigr). \end{aligned} \end{equation} \tag{4.23}
By (4.21)–(4.23) we have
\begin{equation} \begin{aligned} \, \notag &\mathsf{E} Y \, \mathsf{I}\{S_k \in (-k^{1/2-\delta_2/2}, 0)\} \\ &\qquad \leqslant \mathsf{E} Y\sup_{x \in \mathbb{R}}\mathsf{P}\bigl(S_{[2 k/3]-[k/3]} + x\in (0, k^{1/2-\delta_2/2})\bigr)\leqslant q_{2,1} q_{2,2} q_{2,3}, \end{aligned} \end{equation} \tag{4.24}
where
\begin{equation*} \begin{gathered} \, q_{2,1} :=\mathsf{E} \exp\{\widehat{L}_{[k/3]}\}, \\ q_{2,2} :=\sup_{x \in \mathbb{R}}\mathsf{P}\bigl(S_{[2k/3]}-S_{[k/3]}\in (-x-k^{1/2-\delta_2/2}, -x)\bigr) \end{gathered} \end{equation*} \notag
and
\begin{equation*} q_{2,3} :=\mathsf{P}\biggl(S_i>S_k \ \forall\, i \in \biggl\{\biggl[\frac{2k}3\biggr], \dots, k-1\biggr\}\biggr). \end{equation*} \notag

We obtain upper estimates for each factor on the right-hand side of (4.24). Owing to Lemma 4, for \delta=3 \delta_2/8 we have

\begin{equation} q_{2,1} \leqslant \frac{K_1}{k^{1/2-3 \delta_2/8}}. \end{equation} \tag{4.25}
It follows from the concentration inequality (see [6], Ch. III, Theorem 9) that there exists a positive constant K_2 such that
\begin{equation} q_{2,2} \leqslant \frac{K_2 k^{1/2-\delta_2/2}}{\sqrt{k}} =\frac{K_2}{k^{\delta_2/2}}. \end{equation} \tag{4.26}
By Lemma 1 we have
\begin{equation} q_{2,3} \leqslant \frac{K_3}{\sqrt{k}}. \end{equation} \tag{4.27}
Inequalities (4.20) and (4.24)–(4.27) yield the estimate
\begin{equation} q_2 \leqslant q_{2,1} q_{2,2} q_{2,3} \leqslant \frac{K_4}{k^{1+\delta_2/8}}. \end{equation} \tag{4.28}

Finally, estimates (4.17) and (4.28) for q_1 and q_2 imply the inequality

\begin{equation*} \begin{aligned} \, \mathsf{P}(\{Z_{k,n}>0\} \cap Q_{k,n} \cap J_k) &=q_1 + q_2 \\ &\leqslant \exp\{k^{1/2-\delta_2} (-k^{\delta_2/2} + C_2)\} + \frac{K_4}{k^{1+\delta_2/8}}\leqslant \frac{K}{k^{1+\delta_2/8}}. \end{aligned} \end{equation*} \notag

Lemma 5 is proved.

Lemma 6. For m<n, under Assumptions 1 and 4,

\begin{equation} \sum_{k=m+1}^n \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n} \leqslant \frac{\alpha_m}{\sqrt{n}}, \end{equation} \tag{4.29}
where \alpha_m \to 0 as m \to \infty.

Proof. Owing to the proof of Lemma 3, we have
\begin{equation} \begin{aligned} \, \notag &\sum_{k=m+1}^n \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n} =\mathsf{P}(Z_{n,n}>0,\,\tau_n>m) \\ &\qquad =\mathsf{P}(\{Z_{n,n}>0,\, \tau_n>m\} \cap Q_n) + \mathsf{P}(\{Z_{n,n}>0,\,\tau_n>m\} \cap \overline{Q}_n). \end{aligned} \end{equation} \tag{4.30}
Note that
\begin{equation} \sqrt{n}\, \mathsf{P}(\{Z_{n,n}>0,\,\tau_n>m\} \cap \overline{Q}_n) \leqslant \sqrt{n}\, \mathsf{P}(\overline{Q}_n) \leqslant \widetilde{\alpha}_n \end{equation} \tag{4.31}
by Assumption 4, where \{\widetilde{\alpha}_n,\,n \in \mathbb{N}\} is a sequence of positive numbers decreasing to zero.

On the other hand, by virtue of the relation

\begin{equation*} \{\tau_n>m\}=\bigcup_{k=m+1}^n \{\tau_n=k\} =\bigcup_{k=m+1}^n J_k \cap \{L_{k,n} \geqslant 0\} \end{equation*} \notag
we have the estimate
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}(\{Z_{n,n}>0,\,\tau_n>m\} \cap Q_n) \\ \notag &\qquad \leqslant \sum_{k=m+1}^n \mathsf{P}(\{Z_{k,n}>0,\,L_{k,n} \geqslant 0\} \cap J_k \cap Q_{k,n}) \\ &\qquad =\sum_{k=m+1}^n \mathsf{P}(\{Z_{k,n}>0\}\cap J_k \cap Q_{k,n})\, \mathsf{P}(L_{k,n} \geqslant 0) \end{aligned} \end{equation} \tag{4.32}
for the first term on the right-hand side of (4.30).

Using (4.30)–(4.32) and Lemmas 1 and 5 we obtain

\begin{equation} \begin{aligned} \, \notag &\sum_{k=m+1}^n \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n} \\ \notag &\qquad\leqslant \mathsf{P}(\overline{Q}_n) + K_1 \sum_{k=m+1}^n \frac{1}{k^{1 + \delta_2/8}} \frac{K_2}{\max\{\sqrt{n-k}, K_2\}} \\ \notag &\qquad \leqslant \frac{\widetilde{\alpha}_n}{\sqrt{n}} + \sum_{k=m+1}^{n-[n/2]-1} \frac{K_3}{k^{1 + \delta_2/8} \sqrt{n-k}} + \sum_{k=n-[n/2]}^{n-1} \frac{K_3}{k^{1 + \delta_2/8} \sqrt{n-k}} + \frac{K_3}{n^{1 + \delta_2/8}} \\ &\qquad \leqslant \frac{\widetilde{\alpha}_m}{\sqrt{n}} + \frac{K_4}{\sqrt{n} m^{\delta_2/8}} + \frac{K_4}{n^{1/2 + \delta_2/8}} \int_{1/2}^1 \frac{d x}{x^{1 + \delta_2/8} \sqrt{1-x}} + \frac{K_4}{\sqrt{n} m^{\delta_2/8}} \leqslant \frac{\alpha_m}{\sqrt{n}}, \end{aligned} \end{equation} \tag{4.33}
where
\begin{equation*} \alpha_m=K_5 \biggl(\widetilde{\alpha}_m+ \frac{3}{m^{\delta_2/8}}\biggr). \end{equation*} \notag

Lemma 6 is proved.

Owing to Lemma 6, we have reduced the investigation of the asymptotic behaviour of the sequence \sum_{k=0}^n \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n} to examining the sums \sum_{k=0}^m \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n}. Now we constrain the values of l.

Lemma 7. Under Assumptions 1 and 3, for fixed m there exists a sequence of positive numbers \{\beta_M= \beta_M(m),\,M \in \mathbb{N}\} such that \beta_M \to 0 as M \to \infty and

\begin{equation} \sum_{k=0}^m \sum_{l=M+1}^{\infty} A_{k,l,n} B_{n-k,l,n} \leqslant \frac{\beta_M}{\sqrt{n}}. \end{equation} \tag{4.34}

Proof. We prove that for each k \in \mathbb{N} there exists a sequence of positive numbers \{\beta_M^{(k)},\,M \in \mathbb{N}\} tending to zero as M \to \infty such that
\begin{equation} \sup_{n\colon n \geqslant k} \mathsf{P}(Z_{k,n}>M)\leqslant \beta_M^{(k)}. \end{equation} \tag{4.35}
Assume that the left-hand side of (4.35) does not tend to zero for some k. Then there are a positive number \varepsilon and an increasing sequence of natural numbers \{M_r,\,r \in \mathbb{N}\} such that
\begin{equation} \sup_{n\colon n \geqslant k} \mathsf{P}(Z_{k,n}>M_r)> \varepsilon. \end{equation} \tag{4.36}
By virtue of (4.36), for each r \in \mathbb{N} there is n_r such that
\begin{equation*} \mathsf{P}(Z_{k,n_r}>M_r) \geqslant \varepsilon. \end{equation*} \notag

If \sup\{n_r \mid r \in \mathbb{N}\}<\infty, then there exists a natural number that is infinitely often repeated in the sequence \{n_r,\,r \in \mathbb{N}\}. We denote this number by n. By the definition of n there is an increasing sequence of natural numbers \widetilde{M}_r such that

\begin{equation} \mathsf{P}(Z_{k,n}>\widetilde{M}_r) \geqslant \varepsilon. \end{equation} \tag{4.37}
However, relation (4.37) contradicts the continuity of the probability measure.

If \sup\{n_r \mid r \in \mathbb{N}\}=\infty, then there exist increasing sequences of natural numbers \{\widetilde{n}_r,\,r \in \mathbb{N}\} and \{\widetilde{M}_r,\,r \in \mathbb{N}\} such that

\begin{equation} \mathsf{P}(Z_{k,\widetilde{n}_r}>\widetilde{M}_r) \geqslant \varepsilon. \end{equation} \tag{4.38}

We choose M>0 so that

\begin{equation} \mathsf{P}(Z_k>M)<\varepsilon. \end{equation} \tag{4.39}
We choose R>0 so that \widetilde{M}_r>M for r>R. Then, owing to (4.38),
\begin{equation} \mathsf{P}(Z_{k,\widetilde{n}_r}>M) \geqslant \mathsf{P}(Z_{k,\widetilde{n}_r}>\widetilde{M}_r) \geqslant \varepsilon \end{equation} \tag{4.40}
for all r>R. By virtue of Assumption 3 and Remark 2, the left-hand side of (4.40) tends to \mathsf{P}(Z_k>M) as r \to \infty, which yields the estimate \mathsf{P}(Z_k>M) \geqslant \varepsilon. However, this contradicts (4.39).

In both cases we arrive at a contradiction; hence (4.35) is true. By the inequality B_{n-k,l,n} \leqslant \mathsf{P}(L_{k,n} \geqslant 0), Lemma 1 and (4.35) we have

\begin{equation} \begin{aligned} \, \notag \sum_{k=0}^m \sum_{l=M+1}^{\infty} A_{k,l,n} B_{n-k,l,n} &\leqslant \sum_{k=0}^m \mathsf{P}(Z_{k,n}>M) \frac{K_1}{\max\{\sqrt{n-k}, K_1\}} \\ &\leqslant \sum_{k=0}^m \beta_M^{(k)} \min\biggl\{1, \frac{K_1}{\sqrt{n-k}}\biggr\}. \end{aligned} \end{equation} \tag{4.41}
Relation (4.41) makes it possible to deduce the required estimate (4.34).

Lemma 7 is proved.

Using Lemmas 6 and 7 we can reduce the problem to the examination of a finite number of combinations (k, l). In this case we can use Assumption 3 to switch from F_{i-1,n} to F_{i-1} for i \leqslant k.

Lemma 8. Under Assumptions 1 and 3, for fixed m and for any \varepsilon>0 there exist M=M(\varepsilon) and N=N(\varepsilon) such that

\begin{equation} \biggl|\sum_{k=0}^m \sum_{l=1}^{+\infty} A_{k,l,n} B_{n-k,l,n} - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}\biggr| \leqslant \frac{\varepsilon}{\sqrt{n}} \end{equation} \tag{4.42}
for each n>N.

Proof. We fix \varepsilon>0. By Lemma 7, there exist positive numbers M and N_1 such that
\begin{equation*} \sum_{k=0}^m \sum_{l=M+1}^{+\infty} A_{k,l,n} B_{n-k,l,n} \leqslant \frac{\varepsilon}{2 \sqrt{n}} \end{equation*} \notag
for any n>N_1 \geqslant 2 m. Then
\begin{equation} \begin{aligned} \, \notag &\biggl|\sum_{k=0}^m \sum_{l=1}^{+\infty} A_{k,l,n} B_{n-k,l,n} - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}\biggr| \\ &\qquad \leqslant \sum_{k=0}^m \sum_{l=1}^M |A_{k,l,n}-A_{k,l}| B_{n-k,l,n} + \frac{\varepsilon}{2 \sqrt{n}}. \end{aligned} \end{equation} \tag{4.43}
Applying Lemma 1 we obtain the estimate
\begin{equation} B_{n-k,l,n} \leqslant \mathsf{P}(L_{k,n} \geqslant 0) \leqslant \frac{K_1}{\sqrt{n-k}} \leqslant \frac{2 K_1}{\sqrt{n}} \end{equation} \tag{4.44}
for n>N_1.

It follows from Assumption 3 and Remark 2 that for each k \leqslant m and every realization of (\xi_1, \dots, \xi_k) the random variable Z_{k,n} converges to Z_k in distribution as n \to \infty. By Lebesgue's dominated convergence theorem we have

\begin{equation*} A_{k,l,n}=\mathsf{E}\bigl(\mathsf{I}_{J_k} \mathsf{P}(Z_{k,n}=l \mid \xi_1, \dots, \xi_k)\bigr) \to \mathsf{E}\bigl(\mathsf{I}_{J_k} \mathsf{P}(Z_k=l \mid \xi_1, \dots, \xi_k)\bigr)=A_{k,l} \end{equation*} \notag
as n \to \infty. Thus, there exists N \geqslant N_1 such that
\begin{equation} |A_{k,l,n}-A_{k,l}| \leqslant \frac{\varepsilon}{4 K_1 M (m+1)} \end{equation} \tag{4.45}
for n>N, k \leqslant m and l \leqslant M. We conclude from (4.43)–(4.45) that
\begin{equation} \begin{aligned} \, \notag &\biggl|\sum_{k=0}^m \sum_{l=1}^{+\infty} A_{k,l,n} B_{n-k,l,n} - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}\biggr| \\ &\qquad \leqslant \sum_{k=0}^m \sum_{l=1}^M \frac{\varepsilon}{2 M (m+1) \sqrt{n}} + \frac{\varepsilon}{2 \sqrt{n}} =\frac{\varepsilon}{\sqrt{n}}. \end{aligned} \end{equation} \tag{4.46}

Lemma 8 is proved.

It follows from Lemmas 6–8 that the main contribution to the asymptotic behaviour of the survival probability of a PBPRE is made by the sum \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}. It remains to establish a result on the asymptotic behaviour of B_{n-k,l,n}. Since the number of terms in the sum under consideration is finite, no uniformity of the asymptotics with respect to k and l is required.

In what follows we need an expression for the survival probability in the case when the environment is fixed.

Lemma 9 (see [4], Propositions 1.3 and 1.4). The survival probability of the PBPRE Z_{k,n} satisfies

\begin{equation} \begin{aligned} \, \notag &\mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1) \\ &\qquad =\frac{1}{\sum_{i=0}^r d_{i+k,k+r,n}(\omega) \exp\bigl\{- \sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\bigr\}}, \end{aligned} \end{equation} \tag{4.47}
where \sum_{j=k+1}^k :=0, the random variables on the right-hand side are evaluated at \omega,
\begin{equation*} \begin{gathered} \, d_{j,r,n}= \begin{cases} 1, & j=r, \\ \varphi_{F_{j,n}}(F_{j:r,n}(0)), & j<r, \end{cases} \\ \varphi_f(s)=\frac{1}{1-f(s)}-\frac{1}{f'(1) (1-s)}, \\ F_{j:r,n}(0)=F_{j,n}(F_{j+1:r,n}(0))\quad\textit{and} \quad F_{r:r,n}(0)=0. \end{gathered} \end{equation*} \notag
In addition, for j \leqslant r \leqslant n-k,
\begin{equation} d_{j-1,r,n} \leqslant T(F_{j-1,n})=\exp\{-2 (X_j + a_{j,n})\} F_{j-1,n}''(1). \end{equation} \tag{4.48}
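
As a simple consistency check of (4.47), which we record only for the reader's convenience, consider r=1. Then F_{k+1:k+1,n}(0)=0, d_{k+1,k+1,n}=1 and \exp\{-(X_{k+1}+a_{k+1,n})\}=1/F_{k,n}'(1), so the denominator in (4.47) equals

\begin{equation*} \varphi_{F_{k,n}}(0)+\frac{1}{F_{k,n}'(1)} =\frac{1}{1-F_{k,n}(0)}-\frac{1}{F_{k,n}'(1)}+\frac{1}{F_{k,n}'(1)} =\frac{1}{1-F_{k,n}(0)}, \end{equation*} \notag
and (4.47) returns \mathsf{P}_{\omega}(Z_{k+1,n}>0 \mid Z_{k,n}=1)=1-F_{k,n}(0), as it must.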

To prove the next assertion we must pass to the limit under the condition that some part of the trajectory of the sequence S_i, i \geqslant 0, is nonnegative. A tool making it possible to take the limit is the measure \mathsf{P}^+. The properties of this measure are described in § 3.

Lemma 10. Under Assumptions 1 and 5, for any k \in \mathbb{N}_0 there is a set \Omega', \mathsf{P}^+(\Omega')=1, such that for all \omega\in \Omega' and \varepsilon>0 there exists a parameter {R=R(\omega, \varepsilon)} such that

\begin{equation} \mathsf{I}_{Q_n}(\omega) \sum_{i=r}^{n-k} T(F_{i+k,n;\omega}) \exp\biggl\{-\sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\biggr\} \leqslant \varepsilon \end{equation} \tag{4.49}
for any r>R and n>k + r.

Proof. Our argument is similar to that in the proof of Lemma 2.

By virtue of Assumption 1, relation (3.7) for \delta :=\min\{\delta_2, \delta_3\}/2 holds for some \Omega''=\Omega''(\delta) such that \mathsf{P}^+(\Omega'')=1.

To estimate the sum (4.49) we need an estimate for the probability

\begin{equation*} \mathsf{P}^+(\widehat{F}_j>x) =\mathsf{P}^+\Bigl(\sup_{n\colon n>j} T(F_{j,n})> x\Bigr). \end{equation*} \notag
Owing to the independence of (S_j, L_j) and (\widehat{F}_j, X_{j+1}), in view of (3.1) and (3.2) we have
\begin{equation} \begin{aligned} \, \notag \mathsf{P}^+(\widehat{F}_j>x)&=\mathsf{E}\bigl(\mathsf{I}\{\widehat{F}_j>x\}\, U(S_{j+1}) \, \mathsf{I}\{L_{j+1} \geqslant 0\}\bigr) \\ \notag &\leqslant \mathsf{E}\bigl((U(S_j) + U(X_{j+1}))\, \mathsf{I}\{\widehat{F}_j>x\} \, \mathsf{I}\{L_j \geqslant 0\}\bigr) \\ \notag &=\mathsf{E}\bigl(U(S_j) \, \mathsf{I}\{L_j \geqslant 0\}\bigr)\, \mathsf{P}(\widehat{F}_j>x) + \mathsf{E}\bigl(U(X_{j+1})\, \mathsf{I}\{\widehat{F}_j>x\}\bigr)\, \mathsf{P}(L_j \geqslant 0) \\ &=\mathsf{P}(\widehat{F}_j>x)+ \mathsf{E}\bigl(U(X_{j+1})\, \mathsf{I}\{\widehat{F}_j>x\}\bigr)\, \mathsf{P}(L_j \geqslant 0). \end{aligned} \end{equation} \tag{4.50}

Using the Cauchy–Schwarz–Bunyakovsky inequality, Assumption 1 and (3.3) we obtain

\begin{equation} \begin{aligned} \, \notag \mathsf{E}(U(X_{j+1}) \mathsf{I}\{\widehat{F}_j>x\}) &\leqslant \sqrt{\mathsf{E} U(X_{j+1})^2\, \mathsf{E}\, \mathsf{I}^2\{\widehat{F}_j>x\}} \\ &=\sqrt{\mathsf{E} U(X_{j+1})^2\, \mathsf{P}(\widehat{F}_j>x)} \leqslant K_1 \sqrt{\mathsf{P}(\widehat{F}_j>x)} \end{aligned} \end{equation} \tag{4.51}
for x \geqslant 1.

From estimates (4.50) and (4.51), Lemma 1 and Assumption 5 we infer the inequality

\begin{equation} \sum_{j=0}^{\infty}\mathsf{P}^+(\widehat{F}_j>x_j)\leqslant 1 + \sum_{j=1}^{\infty} h_j + K_2 \sum_{j=1}^{\infty}\sqrt{\frac{h_j}{j}}<\infty \end{equation} \tag{4.52}
for x=x_j=\exp\{j^{1/2-\delta_3}\}. The convergence of the series (4.52) and the Borel–Cantelli lemma imply the existence of a set \Omega''', \mathsf{P}^+(\Omega''')=1, such that for any \omega \in \Omega''' there is a positive function D_3(\omega) such that
\begin{equation} T(F_{j,n;\omega}) \leqslant \widehat{F}_{j;\omega} \leqslant D_3(\omega) \exp\{j^{1/2-\delta_3}\}. \end{equation} \tag{4.53}
Using estimate (4.53), for \omega \in \Omega' :=\Omega'' \cap \Omega''' we infer that
\begin{equation} \begin{aligned} \, \notag &\mathsf{I}_{Q_n}(\omega) \sum_{i=r}^{n-k}T(F_{i+k,n;\omega}) \exp\biggl\{- \sum_{j=k+1}^{i+k}(X_j(\omega) + a_{j,n}(\omega))\biggr\} \\ \notag &\qquad \leqslant \mathsf{I}_{Q_n}(\omega) \exp\{S_k(\omega) + b_{k,n}(\omega)\} \sum_{i=r}^{n-k} T(F_{i+k,n;\omega}) \exp\{-S_{i+k}(\omega)-b_{i+k,n}(\omega)\} \\ \notag &\qquad \leqslant D_3(\omega) \exp\{S_k(\omega) + C_2 k^{1/2-\delta_2}\} \sum_{i=r}^{n-k} \exp\{(i+k)^{1/2-\delta_3}\} \\ &\qquad\qquad \times \exp\{- D_1(\omega) (i+k)^{1/2-\delta} + C_2 (i+k)^{1/2-\delta_2}\}. \end{aligned} \end{equation} \tag{4.54}
Since \delta<\delta_2 and \delta<\delta_3, the terms of the series on the right-hand side of (4.54) are exponentially small. It follows that for all \omega \in \Omega' and \varepsilon>0 there exists R=R(\omega, \varepsilon) such that the left-hand side of inequality (4.49) is at most \varepsilon for r>R and n>k+r.

Lemma 10 is proved.

Lemma 11. Under Assumptions 1 and 3–5,

\begin{equation} \sqrt{n} (B_{n-k,l,n}-B_{n-k,l}) \to 0, \qquad n \to \infty, \end{equation} \tag{4.55}
for all natural k and l.

Proof. Note that
\begin{equation} B_{n-k,l,n}=\mathsf{E}\bigl(f_l(\widetilde{\pi}_{k,n}) \bigm| L_{k,n} \geqslant 0\bigr)\, \mathsf{P}(L_{k,n} \geqslant 0), \end{equation} \tag{4.56}
where
\begin{equation*} f_l(x) :=1-(1-x)^l, \qquad \widetilde{\pi}_{k,n}(\omega) :=\mathsf{P}_{\omega}(Z_{n,n}>0 \mid Z_{k,n}=1). \end{equation*} \notag
In a similar way,
\begin{equation} \begin{gathered} \, B_{n-k,l}=\mathsf{E}\bigl(f_l(\widetilde{\pi}_{k,n}^{\,0}) \bigm| L_{k,n} \geqslant 0\bigr)\, \mathsf{P}(L_{k,n} \geqslant 0), \\ \widetilde{\pi}_{k,n}^{\,0}(\omega):=\mathsf{P}_{\omega}(Z_n>0 \mid Z_k=1). \end{gathered} \end{equation} \tag{4.57}

We prove that the difference of the first factors on the right-hand sides of (4.56) and (4.57) tends to zero. Note that

\begin{equation} \begin{aligned} \, \notag \frac{|B_{n-k,l,n}-B_{n-k,l}|}{\mathsf{P}(L_{k,n} \geqslant 0)} &\leqslant \mathsf{E}\bigl(|f_l(\widetilde{\pi}_{k,n}) - f_l(\widetilde{\pi}_{k,n}^{\,0})| \bigm| L_{k,n} \geqslant 0\bigr) \\ \notag &\leqslant l\, \mathsf{E}\bigl(|\widetilde{\pi}_{k,n} - \widetilde{\pi}_{k,n}^{\,0}| \bigm| L_{k,n} \geqslant 0\bigr) \\ &\leqslant l\, \mathsf{E}\bigl(\mathsf{I}_{Q_n} |\widetilde{\pi}_{k,n} - \widetilde{\pi}_{k,n}^{\,0}| \bigm| L_{k,n} \geqslant 0\bigr) + l \, \mathsf{P}(\overline{Q}_n \mid L_{k,n} \geqslant 0). \end{aligned} \end{equation} \tag{4.58}
By Lemma 1 and Assumption 4 the second term on the right-hand side of (4.58) admits the estimate
\begin{equation} l \, \mathsf{P}(\overline{Q}_n \mid L_{k,n} \geqslant 0) \leqslant l \frac{\mathsf{P}(\overline{Q}_n)}{\mathsf{P}(L_{k,n} \geqslant 0)}\to 0, \qquad n \to \infty. \end{equation} \tag{4.59}

We estimate the first term on the right-hand side of (4.58). Fix \varepsilon>0. By virtue of Lemmas 9 and 10 there exists a set \Omega' \subset \Omega, \mathsf{P}^+(\Omega')=1, such that for any \omega \in \Omega' there exists R=R(\omega, \varepsilon) such that

\begin{equation} \mathsf{I}_{Q_n}(\omega) \sum_{i=r+1}^{n-k} d_{i+k,k+r,n}(\omega) \exp\biggl\{-\sum_{j=k+1}^{i+k}(X_j(\omega) + a_{j,n}(\omega))\biggr\} \leqslant \frac{\varepsilon}{3} \end{equation} \tag{4.60}
for all r>R and n>k + r.

We fix \omega \in \Omega'. Note that

\begin{equation} \widetilde{\pi}_{k,n} =\mathsf{P}_{\omega}(Z_{n,n}>0 \mid Z_{k,n}=1) \leqslant \mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1). \end{equation} \tag{4.61}
Owing to Lemma 9 and relation (4.60), we have
\begin{equation} \begin{aligned} \, \notag \mathsf{I}_{Q_n}(\omega) \widetilde{\pi}_{k,n}(\omega) &=\mathsf{I}_{Q_n}(\omega)\, \mathsf{P}_{\omega}(Z_{n,n}>0 \mid Z_{k,n}=1) \\ \notag &=\frac{\mathsf{I}_{Q_n}(\omega)}{\sum_{i=0}^{n-k} d_{i+k,k+r,n}(\omega) \exp\bigl\{-\sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\bigr\}} \\ &\geqslant \frac{\mathsf{I}_{Q_n}(\omega)}{\sum_{i=0}^r d_{i+k,k+r,n}(\omega) \exp\bigl\{-\sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\bigr\} + \varepsilon/3}. \end{aligned} \end{equation} \tag{4.62}
If \omega \notin Q_n, then the last expression in (4.62) is zero. Otherwise, the denominator is at least 1, which yields
\begin{equation} \begin{aligned} \, \notag &\frac{\mathsf{I}_{Q_n}(\omega)}{\sum_{i=0}^r d_{i+k,k+r,n}(\omega) \exp\bigl\{-\sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\bigr\} + \varepsilon/3} \\ \notag &\qquad \geqslant \biggl(1-\frac{\varepsilon}{3}\biggr) \frac{\mathsf{I}_{Q_n}(\omega)}{\sum_{i=0}^r d_{i+k,k+r,n}(\omega) \exp\bigl\{-\sum_{j=k+1}^{i+k} (X_j(\omega) + a_{j,n}(\omega))\bigr\}} \\ &\qquad =\biggl(1-\frac{\varepsilon}{3}\biggr)\, \mathsf{I}_{Q_n}(\omega) \, \mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1). \end{aligned} \end{equation} \tag{4.63}
It follows from (4.61)–(4.63) that
\begin{equation} \mathsf{I}_{Q_n}(\omega) \bigl|\widetilde{\pi}_{k,n}(\omega) - \mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1)\bigr| \leqslant \frac{\varepsilon}{3}. \end{equation} \tag{4.64}

In view of the convergence of the sequence \widetilde{\pi}_{k,n}^{\,0}(\omega) as n \to \infty, there exists r>R such that

\begin{equation} |\widetilde{\pi}_{k,k+r}^{\,0}(\omega) - \widetilde{\pi}_{k,n}^{\,0}(\omega)| \leqslant \frac{\varepsilon}{3} \end{equation} \tag{4.65}
for each n>k+r.

Assumption 3 and Remark 2 imply that there exists N>k + r such that

\begin{equation} \bigl|\mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1) - \widetilde{\pi}_{k,k+r}^{\,0}(\omega)\bigr| \leqslant \frac{\varepsilon}{3} \end{equation} \tag{4.66}
for each n>N.

It follows from (4.64)–(4.66) that

\begin{equation} \begin{aligned} \, \notag &\mathsf{I}_{Q_n}(\omega) |\widetilde{\pi}_{k,n}(\omega)- \widetilde{\pi}_{k,n}^{\,0}(\omega)| \\ \notag &\quad\leqslant \mathsf{I}_{Q_n}(\omega)\bigl|\widetilde{\pi}_{k,n}(\omega)- \mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1)\bigr| \\ &\quad\qquad + \bigl|\mathsf{P}_{\omega}(Z_{k+r,n}>0 \mid Z_{k,n}=1) - \widetilde{\pi}_{k,k+r}^{\,0}(\omega)\bigr| + |\widetilde{\pi}_{k,k+r}^{\,0}(\omega)-\widetilde{\pi}_{k,n}^{\,0}(\omega)| \leqslant \varepsilon. \end{aligned} \end{equation} \tag{4.67}
Since \varepsilon is arbitrary, we conclude that the sequence of random variables {\mathsf{I}_{Q_n} |\widetilde{\pi}_{k,n}-\widetilde{\pi}_{k,n}^{\,0}|} converges to 0 \mathsf{P}^+-almost surely as n \to \infty. As this sequence is uniformly bounded, it follows from [4], Lemma 5.2, that
\begin{equation} \mathsf{E}\bigl(\mathsf{I}_{Q_n} |\widetilde{\pi}_{k,n}-\widetilde{\pi}_{k,n}^{\,0}| \bigm| L_{k,n} \geqslant 0\bigr) \to 0, \qquad n \to \infty. \end{equation} \tag{4.68}
Lemma 1 and relations (4.58), (4.59) and (4.68) yield the required assertion (4.55).

Lemma 11 is proved.

Lemma 12. Assumptions 3–5 imply Assumption 2.

Proof. Fix \omega \in \Omega and j \in \mathbb{N}_0. By Assumption 3 and Theorem 3.1.1 in [7] there exist a probability space (\widehat{\Omega}_{\omega}, \widehat{\mathcal{F}}_{\omega}, \widehat{\mathsf{P}}_{\omega}) and random variables \widehat{Y}_{\omega, n}, j<n, and \widehat{Y}_{\omega} defined on it such that F_{j;\omega} is the generating function for \widehat{Y}_{\omega}, F_{j,n;\omega} is the generating function for \widehat{Y}_{\omega,n}, and \widehat{Y}_{\omega,n} \to \widehat{Y}_{\omega} as n \to \infty \widehat{\mathsf{P}}_{\omega}-almost surely. We denote the mean on this space by \widehat{\mathsf{E}}_{\omega}. By Fatou’s lemma,
\begin{equation} \begin{aligned} \, \notag F_{j;\omega}''(1) &=\widehat{\mathsf{E}}_{\omega}\bigl(\widehat{Y}_{\omega}(\widehat{Y}_{\omega}-1)\bigr) \\ &\leqslant \liminf_{n \to \infty} \widehat{\mathsf{E}}_{\omega} \bigl(\widehat{Y}_{\omega,n} (\widehat{Y}_{\omega,n}-1)\bigr) =\liminf_{n \to \infty} F_{j,n;\omega}''(1). \end{aligned} \end{equation} \tag{4.69}
We consider \omega \in Q_n, n \in \mathbb{N}. For an arbitrary j<n we have
\begin{equation} \begin{aligned} \, \notag |a_{j+1,n}(\omega)| &=|b_{j+1,n}(\omega)-b_{j,n}(\omega)| \\ &\leqslant |b_{j+1,n}(\omega)| + |b_{j,n}(\omega)| \leqslant 2 C_2 (j+1)^{1/2-\delta_2}. \end{aligned} \end{equation} \tag{4.70}
Owing to the definition of a_{j+1,n},
\begin{equation} \frac{F_{j,n;\omega}'(1)}{F_{j;\omega}'(1)} =\exp\{a_{j+1,n}(\omega)\} \leqslant \exp\{2 C_2 (j+1)^{1/2-\delta_2}\}. \end{equation} \tag{4.71}
Inequalities (4.70) and (4.71) yield
\begin{equation} \frac{\mathsf{I}_{Q_n}(\omega)}{(F_{j;\omega}'(1))^2} \leqslant \exp\{4 C_2 (j+1)^{1/2-\delta_2}\} \frac{\mathsf{I}_{Q_n}(\omega)}{(F_{j,n;\omega}'(1))^2} \end{equation} \tag{4.72}
for arbitrary \omega \in \Omega and n \in \mathbb{N}. Note that, by virtue of Assumption 4, the sequence \mathsf{I}_{Q_n} converges to 1 in probability as n \to \infty. By Riesz’s theorem, there is a subsequence \mathsf{I}_{Q_{n_k}} that converges to 1 \mathsf{P}-almost surely as k \to \infty. It follows that
\begin{equation} \limsup_{n \to \infty} \mathsf{I}_{Q_n} \geqslant \limsup_{k \to \infty} \mathsf{I}_{Q_{n_k}} =1 \end{equation} \tag{4.73}
\mathsf{P}-almost surely. Using (4.69), (4.72) and (4.73), we obtain
\begin{equation} \begin{aligned} \, \notag T(F_j) &\leqslant \limsup_{n \to \infty} \mathsf{I}_{Q_n} \frac{F_j''(1)}{(F_j'(1))^2} \\ \notag &\leqslant \limsup_{n \to \infty} \exp\{4 C_2 (j+1)^{1/2-\delta_2}\} \frac{\mathsf{I}_{Q_n}}{(F_{j,n}'(1))^2} \liminf_{n \to \infty} F_{j,n}''(1) \\ \notag &\leqslant \exp\{4 C_2 (j+1)^{1/2-\delta_2}\} \limsup_{n \to \infty} \frac{\mathsf{I}_{Q_n} F_{j,n}''(1)}{(F_{j,n}'(1))^2} \\ & \leqslant \exp\{4 C_2 (j+1)^{1/2-\delta_2}\} \limsup_{n \to \infty} T(F_{j,n}) \end{aligned} \end{equation} \tag{4.74}
\mathsf{P}-almost surely. By (4.74),
\begin{equation} \begin{aligned} \, \notag &\mathsf{P}\bigl(T(F_j) > \exp\{4 C_2 (j+1)^{1/2-\delta_2} + j^{1/2-\delta_3}\}\bigr) \\ &\qquad\leqslant \mathsf{P}\Bigl(\limsup_{n \to \infty} T(F_{j,n}) > \exp\{j^{1/2-\delta_3}\}\Bigr) \leqslant \mathsf{P}\bigl(\widehat{F}_j > \exp\{j^{1/2-\delta_3}\}\bigr). \end{aligned} \end{equation} \tag{4.75}
For some N \in \mathbb{N} we have
\begin{equation} j^{1/2-\delta_1} \geqslant 4 C_2 (j+1)^{1/2-\delta_2} + j^{1/2-\delta_3}, \quad\text{where } \delta_1 :=\frac{\min\{\delta_2, \delta_3\}}{2} \in \biggl(0, \frac12\biggr), \end{equation} \tag{4.76}
for all j>N. From Assumption 5 and relations (4.75) and (4.76) we conclude that
\begin{equation} \begin{aligned} \, \notag \sum_{j=1}^{\infty} \sqrt{\frac{h_j^0}{j}} &=\sum_{j=1}^{\infty}\sqrt{\frac{\mathsf{P}(T(F_j)> \exp\{j^{1/2-\delta_1}\})}{j}} \\ \notag &\leqslant N + \sum_{j=N+1}^{\infty}\sqrt{\frac{\mathsf{P}(T(F_j) > \exp\{4 C_2 (j+1)^{1/2-\delta_2}+ j^{1/2-\delta_3}\})}{j}} \\ &\leqslant N + \sum_{j=N+1}^{\infty}\sqrt{\frac{\mathsf{P}(\widehat{F}_j > \exp\{j^{1/2-\delta_3}\})}{j}}< \infty. \end{aligned} \end{equation} \tag{4.77}

Lemma 12 is proved.

We have proved all the necessary auxiliary assertions and can now turn to the proof of the main result.

Proof of Theorem 2. Note that Lemmas 6 and 8 also apply in the case F_{i,n} \equiv F_i, that is, to the unperturbed process.

Fix \varepsilon>0. By Lemma 6 there exists m such that

\begin{equation} \sqrt{n}\, \biggl|\mathsf{P}(Z_{n,n}>0) - \sum_{k=0}^m \sum_{l=1}^{\infty} A_{k,l,n} B_{n-k,l,n}\biggr| \leqslant \frac{\varepsilon}{6} \end{equation} \tag{4.78}
and
\begin{equation} \sqrt{n}\, \biggl|\mathsf{P}(Z_n>0) - \sum_{k=0}^m \sum_{l=1}^{\infty} A_{k,l} B_{n-k,l}\biggr| \leqslant \frac{\varepsilon}{4} \end{equation} \tag{4.79}
for all n>m. Owing to Lemma 8 and the relations (4.78) and (4.79), there exist M, N_1>m such that
\begin{equation} \sqrt{n}\, \biggl|\mathsf{P}(Z_{n,n}>0) - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}\biggr| \leqslant \frac{\varepsilon}{3} \end{equation} \tag{4.80}
and
\begin{equation} \sqrt{n}\, \biggl|\mathsf{P}(Z_n>0) - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l}\biggr| \leqslant \frac{\varepsilon}{2} \end{equation} \tag{4.81}
for each n>N_1. By Lemma 11 there exists N>N_1 such that
\begin{equation} \sqrt{n}\, \biggl|\sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n} - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l}\biggr| \leqslant \frac{\varepsilon}{6} \end{equation} \tag{4.82}
for any n>N. It follows from (4.80)–(4.82) for n>N that
\begin{equation} \begin{aligned} \, &\sqrt{n}\, |\mathsf{P}(Z_{n,n}>0)-\mathsf{P}(Z_n>0)| \nonumber \\ &\qquad \leqslant \sqrt{n}\, \biggl|\mathsf{P}(Z_{n,n}>0) - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n}\biggr| \nonumber \\ &\qquad\qquad + \sqrt{n}\, \biggl|\sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l,n} - \sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l}\biggr| \nonumber \\ &\qquad\qquad + \sqrt{n}\, \biggl|\sum_{k=0}^m \sum_{l=1}^M A_{k,l} B_{n-k,l} - \mathsf{P}(Z_n>0)\biggr| \leqslant \varepsilon. \end{aligned} \end{equation} \tag{4.83}
Since \varepsilon>0 is arbitrary, we have
\begin{equation} \sqrt{n}\, |\mathsf{P}(Z_{n,n}>0)-\mathsf{P}(Z_n>0)| \to 0, \qquad n \to \infty. \end{equation} \tag{4.84}

Lemma 12 implies that Assumption 2 is fulfilled. By Assumptions 1 and 2, Theorem 1 is true for the BPRE \{Z_n,\,n \geqslant 0\}, which implies the convergence

\begin{equation} \sqrt{n}\, \mathsf{P}(Z_n>0) \to \Upsilon \frac{e^{-c_{-}}}{\sqrt{\pi}}, \qquad n \to \infty. \end{equation} \tag{4.85}
Relations (4.84) and (4.85) yield the required asymptotics (2.1).

Theorem 2 is proved.

Acknowledgements

The author is grateful to A. V. Shklyaev for his constant support of this research and to the anonymous referees for their comments, which made it possible to improve the presentation considerably.


Bibliography

1. M. V. Kozlov, “On the asymptotic behavior of the probability of non-extinction for critical branching processes in a random environment”, Theory Probab. Appl., 21:4 (1977), 791–804
2. J. Geiger and G. Kersting, “The survival probability of a critical branching process in a random environment”, Theory Probab. Appl., 45:3 (2001), 517–525
3. V. I. Afanasyev, J. Geiger, G. Kersting and V. A. Vatutin, “Criticality for branching processes in random environment”, Ann. Probab., 33:2 (2005), 645–673
4. G. Kersting and V. Vatutin, Discrete time branching processes in random environment, Math. Stat. Ser., John Wiley & Sons, London; ISTE, Hoboken, NJ, 2017, xiv+286 pp.
5. D. Denisov, A. Sakhanenko and V. Wachtel, “First-passage times for random walks with nonidentically distributed increments”, Ann. Probab., 46:6 (2018), 3313–3350
6. V. V. Petrov, Sums of independent random variables, Akademie-Verlag, Berlin, 1975, x+348 pp.
7. A. V. Skorokhod, “Limit theorems for stochastic processes”, Theory Probab. Appl., 1:3 (1956), 261–290
