Exact formulas in some boundary crossing problems
for integer-valued random walks
V. I. Lotov
Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk
Novosibirsk State University
Abstract:
For a wide class of integer-valued random walks,
we obtain exact expressions for the distribution
of the first excess over level
and the corresponding renewal function
as well as for the distribution of the trajectory supremum
if it is finite.
We discuss possibilities of obtaining explicit expressions
for pre-stationary and stationary distributions
of a random walk with switchings at the strip boundaries.
The research is based on the factorization representations
for the double moment generating functions of the distributions under study.
Keywords:
integer-valued random walk, factorization method,
excess over boundary, renewal function, oscillating random walk.
Received: 16.02.2022 Revised: 13.06.2022
§ 1. Introduction. Preliminary results

Let $X,X_1,X_2,\dots$ be a sequence of independent identically distributed random variables, $S_{n}=X_{1}+\dots+X_{n}$, $n\geqslant1$, $S_0=0$. The sequence $\{S_n,\, n\geqslant 0\}$ is called a random walk, and the set of points $\{(n,S_n),\, n\geqslant 0\}$ connected by line segments on the coordinate plane is called a trajectory of the random walk. A boundary crossing problem for a random walk is usually concerned with distributions related to hitting (or not hitting) the boundary of a certain set by a trajectory of the random walk.

If a random variable $X$ takes values only of the form $kh$, $k=0,\pm 1,\pm2,\dots$, for some $h>0$, then its distribution is usually called arithmetic. As a rule, in this case, when dealing with boundary crossing problems, it can be assumed without loss of generality that $h=1$. Thus, we come to considering random walks on the lattice of integers, and, in what follows, we assume that the random variables $X$, $X_j$ are integer-valued.

We are interested in exact formulas for the distribution of the first excess over the barrier of an integer-valued random walk and for the associated renewal function. We are also interested in explicit formulas for the distribution of the trajectory supremum in cases where it is finite. It is known that the presence of an excess over the boundary leads to certain difficulties in many problems, in particular, in the study of the distributions associated with the first exit time of the trajectory from a strip. It is much easier to deal with random walks without excess over the boundary or, in cases where an excess takes place, when it is distributed according to the exponential or geometric law. In the general case, the distribution of the excess cannot be found explicitly. For this reason, in many papers, various approximations of this distribution are studied: the limit distribution with infinitely receding barrier (see, for example, Ch. 10 in [1]), asymptotic expansions under the same condition (see [2]–[5]), moment estimates (see [6], [7]), etc.

In the present article, the starting point is the well-known factorization representation for the double moment generating function of the joint distribution of the first time of hitting a positive level and the value of the excess (Theorem 1). Further, in § 2, we analyze the possibilities of finding explicit expressions for the factorization components in the setting of integer-valued random walks. The main result of this section is contained in Theorem 2 and its corollary. Next, in § 3, the representation obtained in Corollary 1 is inverted with respect to the spatial variable. As a conclusion (Theorem 3, Corollaries 2 and 3), we give exact formulas for the distribution of the excess, the distribution of the trajectory supremum, and the renewal function. The concluding § 4 contains some applications of the above results to the study of random walks with switchings.

Consider some features that arise for integer-valued random walks. Instead of the characteristic function, here it is more convenient to use the moment generating function
$$
\begin{equation*}
\mathbf{E}\mu^X=\psi(\mu):=\sum_{k=-\infty}^{\infty}\mu^kp_k,\qquad p_k=\mathbf{P}(X=k),\quad |\mu|=1.
\end{equation*}
\notag
$$
We need the following factorization representation, which is often used in boundary crossing problems:
$$
\begin{equation*}
1-z\psi(\mu)= R_{-}(z,\mu)R_0(z) R_{+}(z, \mu),\qquad |\mu|=1,\quad |z|<1;
\end{equation*}
\notag
$$
here
$$
\begin{equation}
\begin{aligned} \, R_{-}(z,\mu) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{E}\bigl(\mu^{S_n};\, S_n<0\,\bigr)\biggr\}, \\ R_{+}(z,\mu) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{E}\bigl(\mu^{S_n};\, S_n> 0\,\bigr)\biggr\}, \\ R_{0}(z) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{P}\bigl(S_n=0\bigr)\biggr\}. \end{aligned}
\end{equation}
\tag{1}
$$
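Identity (1) lends itself to a direct numerical check. The following minimal sketch (Python; the two-point jump law, the values of $z$ and $\mu$, and the truncation level are illustrative assumptions, not taken from the paper) computes the three components by truncating the series in (1) and compares their product with $1-z\psi(\mu)$ at a point of the unit circle.

```python
import numpy as np

# Illustrative two-point jump law (an assumption for this sketch, not from the paper).
p = {1: 0.4, -1: 0.6}
z, mu = 0.7, np.exp(0.3j)        # |z| < 1, |mu| = 1
N = 200                          # truncation level for the series over n in (1)

# Distribution of S_n on the grid -N..N, built by repeated convolution.
supp = np.arange(-N, N + 1)
dist = np.zeros(2 * N + 1)
dist[N] = 1.0                    # S_0 = 0
step = np.zeros(2 * N + 1)
for k, pk in p.items():
    step[N + k] = pk

log_Rm = log_Rp = log_R0 = 0.0
for n in range(1, N + 1):
    dist = np.convolve(dist, step)[N:3 * N + 1]      # law of S_n; support stays within [-N, N]
    neg, pos, zero = supp < 0, supp > 0, supp == 0
    log_Rm -= z**n / n * np.sum(dist[neg] * mu**supp[neg])
    log_Rp -= z**n / n * np.sum(dist[pos] * mu**supp[pos])
    log_R0 -= z**n / n * np.sum(dist[zero])

psi = sum(pk * mu**k for k, pk in p.items())
print(abs((1 - z * psi) - np.exp(log_Rm) * np.exp(log_R0) * np.exp(log_Rp)))  # close to 0
```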
Other representations for the components of the above factorization are also available. Let
$$
\begin{equation*}
\eta_-=\inf\{n\geqslant 1\colon S_n<0\},\qquad \eta_+=\inf\{n\geqslant 1\colon S_n> 0\},\qquad \chi_{\pm}=S_{\eta_{\pm}},
\end{equation*}
\notag
$$
then
$$
\begin{equation}
R_{\pm}(z,\mu)=1-\mathbf{E}(z^{\eta_{\pm}}\mu^{\chi_{\pm}};\, \eta_{\pm}<\infty),\qquad |z|<1.
\end{equation}
\tag{2}
$$
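Representation (2) can be cross-checked against the series form (1) by simulation. A hedged sketch (same illustrative walk and the same $z$, $\mu$ as in the previous sketch; the truncation of $\eta_+$ is an assumption justified by the factor $z^{n}$) estimates $1-\mathbf{E}(z^{\eta_{+}}\mu^{\chi_{+}};\,\eta_{+}<\infty)$ by Monte Carlo; the result should be close to the value of $R_{+}(z,\mu)$ obtained from the truncated series.

```python
import random
import numpy as np

# Same illustrative walk as in the previous sketch (an assumption, not from the paper).
jumps, weights = [1, -1], [0.4, 0.6]
z, mu = 0.7, np.exp(0.3j)
rng = random.Random(2)

n_sim, acc = 200_000, 0.0
for _ in range(n_sim):
    s = 0
    for n in range(1, 400):                 # eta_+ truncated; the factor z**n makes the tail negligible
        s += rng.choices(jumps, weights=weights)[0]
        if s > 0:                           # first strict ascent: eta_+ = n, chi_+ = s
            acc += z**n * mu**s
            break
print(1 - acc / n_sim)                      # Monte Carlo estimate of R_+(z, mu) via (2)
```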
More details about factorization can be found in [1], [3] for both non-lattice and lattice distributions. The properties of the factorization components $R_{\pm}(z,\mu)$ are well known. For $|z|<1$, the positive component $R_{+}(z,\mu)$ is an analytic function of the variable $\mu$ in the unit disc $|\mu|<1$; it is continuous on the boundary, bounded, and does not vanish for $|\mu|\leqslant 1$. The negative component $R_{-}(z,\mu)$ has similar properties in the exterior of the unit disc. A factorization with such properties of its components is called canonical (see [1]).

Let $G$ be the set of functions $g$ of the form
$$
\begin{equation*}
g(\mu)=\sum_{k=-\infty}^{\infty}g_k\mu^k,\qquad |\mu|=1, \quad \text{where}\quad \sum_{k=-\infty}^{\infty}|g_k|<\infty.
\end{equation*}
\notag
$$
Denote by $S(m,n)$ the set of functions $g\in G$ such that
$$
\begin{equation*}
g(\mu)=\sum_{k=m}^{n}g_k\mu^k,\qquad m<n.
\end{equation*}
\notag
$$
It is clear that $\mathbf{E}(\mu^{S_n};\,S_n> 0)$, as a function of $\mu$, belongs to $S(1,\infty)$ for all $n$. We also note that
$$
\begin{equation*}
C_{\mp}(z):=\mp\sum_{n=1}^\infty\frac{z^n}{n}\mathbf{E}\bigl(\mu^{S_n};\,S_n> 0\bigr)\in S(1,\infty).
\end{equation*}
\notag
$$
Hence
$$
\begin{equation*}
R_{+}^{\pm 1}(z,\mu)=\exp\{C_{\mp}(z)\}=1+C_{\mp}(z)+\frac{C^2_{\mp}(z)}{2}+\cdots\in S(0,\infty).
\end{equation*}
\notag
$$
A similar argument shows that the functions $R_{-}(z,\mu)$ and $R_{-}^{-1}(z,\mu)$ belong to $S(-\infty,0)$. For each function $g\in G$, we define
$$
\begin{equation*}
[g(\mu)]^A=\sum_{k\in A}g_k\mu^k
\end{equation*}
\notag
$$
for each subset $A$ of the set of integers. Next, given an arbitrary natural number $b\geqslant 1$, we introduce the time $\tau$ of the first hitting of the level $b$ by the trajectory of the random walk $\{S_n\}$ and the excess $\chi$ over this level:
$$
\begin{equation*}
\tau=\tau(b)=\inf\{n\geqslant 1\colon S_n\geqslant b\},\qquad \chi=\chi(b)=S_{\tau}-b.
\end{equation*}
\notag
$$
We put $\tau=\infty$ if $S_n<b$ for all $n$. In this case, the value of $\chi$ remains undefined. The double moment generating function of the joint distribution of the pair $(\tau,S_{\tau})$ can be expressed in terms of the positive factorization component as follows (see, for example, [8], [9]).

Theorem 1. For $|z|<1$ and $|\mu|=1$,
$$
\begin{equation}
\mathbf{E}\bigl(z^{\tau}\mu^{S_{\tau}};\,\tau<\infty\bigr) =R_{+}(z,\mu)[R_{+}^{-1}(z,\mu)]^{[b,\infty)}.
\end{equation}
\tag{3}
$$
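To illustrate the objects entering the left-hand side of (3), here is a small Monte Carlo sketch (the jump law $\mathbf{P}(X=2)=0.3$, $\mathbf{P}(X=1)=0.2$, $\mathbf{P}(X=-1)=0.5$ and the level $b=5$ are assumptions chosen for illustration); since $\mathbf{E}X>0$ here, $\tau<\infty$ almost surely, and the empirical frequencies estimate $\mathbf{P}(\chi=i)$. The same example is revisited after Remark 2 in § 3, where the exact formula (14) is evaluated.

```python
import random

def first_excess(b, rng, n_max=10_000):
    """Return (tau, chi) for one trajectory, or (None, None) if the level b is not reached."""
    s = 0
    for n in range(1, n_max + 1):
        s += rng.choices([2, 1, -1], weights=[0.3, 0.2, 0.5])[0]
        if s >= b:
            return n, s - b
    return None, None

rng, b = random.Random(0), 5
excesses = [chi for _, chi in (first_excess(b, rng) for _ in range(100_000)) if chi is not None]
for i in range(3):
    print(i, excesses.count(i) / 100_000)    # estimates of P(chi = i)
```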
The conditions implying $\mathbf{P}(\tau<\infty)=1$ are well known (see, for example, [1]). In particular, if $a=\mathbf{E} X\geqslant 0$ exists, then $\mathbf{P}(\tau<\infty)=1$, and
$$
\begin{equation*}
\mathbf{P}(\tau<\infty)=\mathbf{P}\Bigl(\sup_{n\geqslant 0}S_n\geqslant b\Bigr)<1 \quad \text{if} \quad a<0.
\end{equation*}
\notag
$$
In [3], A. A. Borovkov and B. A. Rogozin, using the multi-step factorization method, obtained exhaustive asymptotic results for the joint distribution of the first hitting time of a receding barrier and the excess over this level for an integer-valued random walk; in this case, the time intervals between jumps were assumed to be random. As a first step of this method, the double moment generating functions of the distributions under study were also expressed in terms of factorization components, but these representations differed from (3). In [8] and subsequent works of the author of the present paper, representation (3) was used at the first step; it turned out to be more convenient when dealing with random walks with two boundaries. Asymptotic properties of this representation as $b\to\infty$ under Cramér-type conditions were studied. As a result, the principal part of this expression was singled out, and the remainder was estimated and found to be exponentially small compared to the principal part. Next, the principal part was subjected to an asymptotic inversion procedure, which also led to asymptotic results for the joint distribution of the pair $(\tau,\,S_{\tau})$.

In contrast to many asymptotic studies, the goal of the present paper is to obtain exact formulas for the distribution of the excess $\chi$ for those types of integer-valued random walks for which this is possible. Representation (3) is the starting point of our study. However, the straightforward use of (1) and (2) for the factorization components does not lead to further results. It is clear that, first of all, one should search for the factorization components in a form that would allow further inversion of the right-hand side of representation (3).
§ 2. Explicit form of factorization

Let us describe the class of integer-valued random walks for which the function $R_{+}(z,\mu)$ can be expressed explicitly in terms of zeros and poles of the function $1-z\psi(\mu)$. Following [1], consider the set $K_1$ of distributions whose probabilities $p_k= \mathbf{P}(X= k)$ have, for $k\geqslant 0$, the form
$$
\begin{equation}
p_k=\sum_{i=1}^d\sum_{j=1}^{l_i}a_{ij}k^{j-1}q_i^k,\qquad k=0,1,\dots\,.
\end{equation}
\tag{4}
$$
Here, $d$ and $l_i$ are natural numbers, the coefficients $a_{ij}$ are positive, $0<q_i<1$. We do not impose any restrictions on the probabilities $p_k$ if $k<0$. It is easily seen that the restriction of such distributions to the right semi-axis of the real axis corresponds to mixtures of geometric distributions with different $q_i$ and their convolutions. To the probabilities $p_k$ that satisfy condition (4), there correspond the rational moment generating functions
$$
\begin{equation*}
\mathbf{E} (\mu^X;\,X\geqslant 0)=\sum_{k=0}^\infty \mu^k p_k= \frac{R_m(\mu)}{Q_n(\mu)},
\end{equation*}
\notag
$$
where $R_m(\mu)$ and $Q_n(\mu)$ are polynomials of degree $m$ and $n$, respectively. Here, $m<n=\sum_{i=1}^d l_i$, and, for definiteness, we put
$$
\begin{equation*}
Q_n(\mu)=\prod_{i=1}^d(1-q_i \mu)^{l_i}.
\end{equation*}
\notag
$$
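As a quick illustration of (4) and the rationality claim, the following sketch (Python; the mixture weights and the parameters $q_1$, $q_2$ are assumptions for the example) compares the truncated series $\sum_{k\geqslant 0}\mu^kp_k$ for a mixture of two geometric laws with the corresponding rational function, whose denominator is $Q_2(\mu)=(1-q_1\mu)(1-q_2\mu)$.

```python
# Numeric sanity check for a mixture of two geometric laws on {0,1,2,...},
# i.e. (4) with d=2, l_1=l_2=1 (an assumed example, not from the paper).
alpha, q1, q2 = 0.4, 0.3, 0.6
p = lambda k: alpha * (1 - q1) * q1**k + (1 - alpha) * (1 - q2) * q2**k

mu = 0.8 + 0.5j                       # any point with |mu| < 1/q2
series = sum(p(k) * mu**k for k in range(2000))
closed = alpha * (1 - q1) / (1 - q1 * mu) + (1 - alpha) * (1 - q2) / (1 - q2 * mu)
print(abs(series - closed))           # ~ 0: E(mu^X; X >= 0) = R_m(mu) / Q_n(mu)
```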
We can consider another class $K_2$ of distributions with right-bounded support, that is, $\mathbf{P}(X\leqslant r)=1$ for some $r<\infty$. Obviously, this class is everywhere dense in the sense of weak convergence on the set of all integer-valued distributions. For distributions of this class,
$$
\begin{equation*}
\mathbf{E} (\mu^X;\,X\geqslant 0)=T_r(\mu)=\sum_{k=0}^r \mu^kp_k
\end{equation*}
\notag
$$
is a polynomial of degree $r$. Let $K$ be the set of distributions $P$ of the form
$$
\begin{equation*}
P=\alpha P_1+(1-\alpha)P_2,\qquad P_1\in K_1,\quad P_2\in K_2,\quad 0\leqslant\alpha\leqslant1.
\end{equation*}
\notag
$$
If the distribution $P$ of a random variable $X$ lies in $K$, then $\mathbf{E}(\mu^X;\,X\,{\geqslant}\,0)$ takes the form
$$
\begin{equation}
\mathbf{E}(\mu^X;\,X\geqslant 0) =\alpha\frac{R_m(\mu)}{Q_n(\mu)}+(1-\alpha)T_r(\mu) =\frac{P_s(\mu)}{Q_n(\mu)},
\end{equation}
\tag{5}
$$
where all the functions on the right-hand side are polynomials, the subscripts indicate their degrees, and $s=n+r$. We assume that $n+r>0$.

Here, the following remark about the equality (5) is worth making. The expectation $\mathbf{E}(\mu^X;\,X\geqslant 0)$ makes sense under the condition that $|\mu|$ is smaller than the smallest root of the polynomial $Q_n(\mu)$, which is necessarily real. However, the right-hand side of (5), as well as the whole function $\psi(\mu)$, can be analytically extended to the exterior of the unit disc. Therefore, in what follows, when dealing with the zeros and poles of the function $1-z\psi(\mu)$, we mean those of its analytic continuation in the exterior of the unit disc.

We claim that, for distributions from the class $K$, the factorization components can be expressed explicitly in terms of the zeros and poles of the function $1-z\psi(\mu)$ lying outside the unit disc. Note that, in [1], Ch. 12, § 6.2, the factorization is also considered for distributions of the class $K$, but the function $1-z\psi(\mu)$ was factorized at $z=1$. It is clear that it is impossible to decompose $1-\psi(\mu)$ into a product of two non-vanishing components with the required analyticity properties (the canonical factorization requirement), because $1-\psi(1)=0$. For this reason, in [1] the function $1-\psi(\mu)$ had first to be corrected at the point $\mu=1$, where it vanishes. In our case, a straightforward use of (3) is impossible for $z=1$ because of the obvious singularity of the function $R_+^{-1}(z,\mu)$ in the square brackets. For this reason, we factorize the function $1-z\psi(\mu)$, for which the canonicity property (non-vanishing of the factorization components in the domain of analyticity) is ensured precisely by the condition $|z|<1$.

Theorem 2. Let the distribution $P$ of a random variable $X$ belong to $K$ and let (5) hold. Then, for $|z|<1$, the function $1-z\psi(\mu)$ has exactly $s$ zeros $\mu_1(z),\dots,\mu_s(z)$ (counting multiplicities) in the domain $|\mu|>1$, and the representation
$$
\begin{equation*}
1-z\psi(\mu)=\frac{(1-z\psi(\mu))Q_n(\mu)}{\Lambda_s(z,\mu)}\, \frac{\Lambda_s(z,\mu)}{Q_n(\mu)},\quad \textit{where}\quad \Lambda_s(z,\mu)=\prod_{j=1}^s(\mu-\mu_j(z)),
\end{equation*}
\notag
$$
is a canonical factorization on the circle $|\mu|=1$. Proof. Using (5), we have
$$
\begin{equation*}
\psi(\mu)=\sum_{k=-\infty}^{-1}\mu^kp_k+\frac{P_s(\mu)}{Q_n(\mu)}.
\end{equation*}
\notag
$$
Changing to the random variable $-X$, we obtain
$$
\begin{equation*}
\mathbf{E} \mu^{-X}=\psi(\mu^{-1})= \sum_{k=-\infty}^{-1}\mu^{-k}p_{k} +\frac{P_s(\mu^{-1})}{Q_n(\mu^{-1})}.
\end{equation*}
\notag
$$
Let
$$
\begin{equation*}
P_s(\mu)=\sum_{k=0}^s a_k \mu^k,\qquad Q_n(\mu)=\sum_{k=0}^n b_k \mu^k,
\end{equation*}
\notag
$$
then
$$
\begin{equation}
P_s(\mu^{-1})=\frac{\sum_{k=0}^s a_{s-k}\mu^k}{\mu^s}, \qquad Q_n(\mu^{-1})=\frac{\sum_{k=0}^n b_{n-k}\mu^k}{\mu^n}
\end{equation}
\tag{6}
$$
and
$$
\begin{equation}
\frac{P_s(\mu^{-1})}{Q_n(\mu^{-1})} =\frac{\sum_{k=0}^s a_{s-k}\mu^k}{\mu^{s-n}\sum_{k=0}^n b_{n-k}\mu^k}.
\end{equation}
\tag{7}
$$
Note that the function
$$
\begin{equation*}
\frac{P_s(\mu)}{Q_n(\mu)}=\sum_{k=0}^{\infty}\mu^{k}p_{k}
\end{equation*}
\notag
$$
has no singularities in the disc $|\mu|\leqslant 1$. This means that the zeros of the polynomial $Q_n(\mu)$ are contained in the set $|\mu| > 1$ and, correspondingly, all zeros of the function $Q_n(\mu^{-1})$ lie in the open disc $|\mu|<1$. From representation (6) it follows that all zeros of the polynomial $\sum_{k=0}^n b_{n-k}\mu^k$ also lie in the open disc $|\mu|<1$. Hence, by (7), the function $P_s(\mu^{-1})/Q_n(\mu^{-1})$ has exactly $s$ poles (counting multiplicities) in the unit disc.
The function $\sum_{k=-\infty}^{-1}\mu^{-k}p_{k}$ is analytic in the unit disc, and so all the singularities of the function
$$
\begin{equation*}
1-z\psi(\mu^{-1})=1-z\sum_{k=-\infty}^{-1}\mu^{-k}p_{k}-z\frac{P_s(\mu^{-1})}{Q_n(\mu^{-1})}
\end{equation*}
\notag
$$
in the unit disc coincide with the above-mentioned $s$ poles.
Obviously, $|z\psi(\mu^{-1})|<1$ for $|z|<1$ and $|\mu|=1$, and so the function $1-z\psi(\mu^{-1})$ acquires no increment of the argument as the point $\mu$ traverses the unit circle. Therefore, by the argument principle, the function $1-z\psi(\mu^{-1})$ has exactly $s$ zeros $\lambda_1(z),\dots,\lambda_s(z)$ inside the unit disc (counting their multiplicities). So, we have shown that the function $1-z\psi(\mu)$ has exactly $s$ zeros $1/\lambda_1(z), \dots, 1/\lambda_s(z)$ in the set $|\mu|>1$. It remains to put $\mu_i(z)=1/\lambda_i(z)$, $i=1, \dots, s$.
Note that all numbers $\mu_i(z)$ are finite. This is immediate from (5). Indeed, $\mathbf{E}(\mu^X;\,X<0)\to 0$ and $R_m(\mu)/Q_n(\mu)\to 0$ as $|\mu|\to\infty$. Therefore, $1-z\psi (\mu)\to 1$ as $|\mu|\to\infty$ whenever the polynomial $T_r(\mu)$ is absent in (5). If the polynomial $T_r(\mu)$ appears in (5), then $|T_r(\mu)|\to\infty$ as $|\mu|\to\infty$ and $r\geqslant 1$. If $r=0$, then $|T_r(\mu)|\to c\leqslant 1$. In both these cases, the point at infinity cannot be a solution to the equation $1-z\psi(\mu)=0$.
By construction, the function $r_+(z,\mu):=\Lambda_s(z,\mu)/Q_n(\mu)$ satisfies all the required properties of the positive component of a canonical factorization: it is analytic with respect to $\mu$ in the unit disc, it is continuous on its boundary, and, what is very important, it does not vanish for $|\mu|\leqslant 1$. The function
$$
\begin{equation*}
r_-(z,\mu):=\frac{(1-z\psi(\mu))Q_n(\mu)}{\Lambda_s(z,\mu)}
\end{equation*}
\notag
$$
has the same properties on the set $|\mu|\geqslant 1$. The theorem is proved. If, in contrast to Theorem 2, we assume that the distribution of the random variable $-X$ belongs to the class $K$, then the rationality of the factorization component $R_-(z,\mu)$ can be established in the same way; this function will be expressed in terms of zeros and poles of the function $1-z\psi(\mu)$ lying in the unit disc. It is known [1] that the components of a canonical factorization are uniquely determined up to a constant factor possibly depending on $z$. Thus, we can assert that, for some $c(z)$,
$$
\begin{equation*}
R_+(z,\mu)=c(z)\frac{\Lambda_s(z,\mu)}{Q_n(\mu)}.
\end{equation*}
\notag
$$
A subsequent substitution of this expression into (3) cancels the function $c(z)$, and so there is no need to specify its value. From Theorems 1 and 2 we obtain the following result.

Corollary 1. Let the distribution $P$ of a random variable $X$ belong to the class $K$. Then, for $|z|<1$ and $|\mu|=1$,
$$
\begin{equation}
\mathbf{E}(z^{\tau}\mu^{S_{\tau}};\,\tau<\infty)=\frac{\Lambda_s(z,\mu)}{Q_n(\mu)} \biggl[\frac{Q_n(\mu)}{\Lambda_s(z,\mu)}\biggr]^{[b,\infty)}.
\end{equation}
\tag{8}
$$
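The zeros $\mu_1(z),\dots,\mu_s(z)$ entering (8) can be located numerically. A sketch for the right-bounded case (class $K_2$, so that $Q_n\equiv 1$ and $s=r$; the jump law and the value of $z$ are illustrative assumptions) clears the negative powers of $\mu$ and applies a standard polynomial root finder; by Theorem 2, exactly $r$ of the roots lie outside the unit disc.

```python
import numpy as np

# Assumed example: P(X=k) for k in {-2,...,2}, support bounded on the right by r = 2.
p = {-2: 0.35, -1: 0.25, 0: 0.1, 1: 0.2, 2: 0.1}
z = 0.95
kmin, kmax = min(p), max(p)

# (1 - z*psi(mu)) * mu**(-kmin) is a polynomial of degree kmax - kmin in mu.
coef = np.zeros(kmax - kmin + 1)          # coef[j] multiplies mu**j
coef[-kmin] = 1.0                         # the "1" contributes mu**(-kmin)
for k, pk in p.items():
    coef[k - kmin] -= z * pk
roots = np.roots(coef[::-1])              # np.roots expects the highest degree first
outside = roots[np.abs(roots) > 1]
print(sorted(outside, key=abs))           # exactly r = 2 zeros outside the unit disc, cf. Theorem 2
```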
§ 3. Distribution of excess and related problems

In order to derive explicit formulas for the distribution of the random variable $\chi$, we invert the above expression. We assume that the distribution $P$ of a random variable $X$ belongs to the class $K$. Let $1\leqslant\mu<q$, where $q$ is the smallest root of the polynomial $Q_n(\mu)$. It is easy to see that the function $\psi(\mu)$ is convex, $\psi(1)=1$, and the equation $\psi(\mu)=1/z$ has exactly one positive solution $\mu_1(z)>1$ of multiplicity one if $1-\delta<z<1$ for some small $\delta>0$. Under the conditions of Theorem 2, this equation has $s-1$ additional solutions $\mu_2(z),\dots,\mu_{s}(z)$; all of them are complex numbers exceeding $\mu_1(z)$ in absolute value. Obviously, $\mu_1(z)\to \mu_1(1)$ as $z\to 1$; moreover, $\mu_1(1)=1$ if $\mathbf{E} X\geqslant 0$, and $\mu_1(1)>1$ if $\mathbf{E} X<0$. We can also establish the existence of the numbers $\mu_2(z),\dots,\mu_{s}(z)$ for $z=1$ by using the argument principle or, simply, by extending them by continuity as $z\to 1$. Suppose that the zeros $\mu_2(z),\dots,\mu_s(z)$ of the polynomial $\Lambda_s(z,\mu)$ are also simple if $1-\delta<z<1$ for some small $\delta>0$.

Let us expand the rational function $Q_n(\mu)/\Lambda_s(z,\mu)$ into partial fractions: for $s>n$, we have a decomposition of the form
$$
\begin{equation}
\frac{Q_n(\mu)}{\Lambda_s(z,\mu)}=\sum_{k=1}^s\frac{c_k(z)}{\mu-\mu_k(z)},
\end{equation}
\tag{9}
$$
in which the coefficients can be evaluated by
$$
\begin{equation*}
c_k(z)=Q_n(\mu_k(z))\lim_{\mu\to\mu_k(z)}\frac{\mu-\mu_k(z)}{\Lambda_s(z,\mu)}.
\end{equation*}
\notag
$$
If $s=n$, then the right-hand side of (9) is augmented with a term independent of $\mu$, which disappears after evaluation of $[Q_n(\mu)/\Lambda_s(z,\mu)]^{[b,\infty)}$. Since $|\mu_k(z)|>1$, for $|\mu|=1$, we have
$$
\begin{equation*}
\begin{aligned} \, \biggl[\frac{Q_n(\mu)}{\Lambda_s(z,\mu)}\biggr]^{[b,\infty)} &=\sum_{k=1}^s\biggl[\frac{c_k(z)}{\mu-\mu_k(z)} \biggr]^{[b,\infty)} =-\sum_{k=1}^s \frac{c_k(z)}{\mu_k(z)}\biggl[\sum_{i=0}^{\infty}\mu^i\mu_k^{-i}(z)\biggr]^{[b,\infty)} \\ &=-\sum_{k=1}^s\frac{c_k(z)}{\mu_k(z)}\sum_{i=b}^{\infty}\mu^i\mu_k^{-i}(z) =\mu^b\sum_{k=1}^s\frac{c_k(z)}{\mu_k^b(z)}\, \frac{1}{\mu-\mu_k(z)}. \end{aligned}
\end{equation*}
\notag
$$
We set
$$
\begin{equation*}
M_k(z,\mu)=\frac{\Lambda_s(z,\mu)}{\mu-\mu_k(z)}=\prod_{1\leqslant j\leqslant s,\, j\neq k} (\mu-\mu_j(z)).
\end{equation*}
\notag
$$
From (8) we have
$$
\begin{equation}
\mathbf{E}\bigl(z^{\tau}\mu^{S_{\tau}};\,\tau<\infty\bigr) =\mathbf{E}(z^{\tau}\mu^{b+\chi};\,\tau<\infty) =\frac{\mu^b}{Q_n(\mu)}\sum_{k=1}^s\frac{c_k(z)M_k(z,\mu)}{\mu_k^b(z)}.
\end{equation}
\tag{10}
$$
The left-hand side of (10) is defined for $|z|\leqslant 1$, $|\mu|\leqslant 1$. The right-hand side is defined by continuity as follows: for $z=1$ it includes the numbers
$$
\begin{equation}
\mu_k=\mu_k(1),\quad c_k=c_k(1), \qquad k=1, \dots, s.
\end{equation}
\tag{11}
$$
Let
$$
\begin{equation*}
\begin{gathered} \, M_k(z,\mu)=\sum_{j=0}^{s-1} a_{kj}(z)\mu^j,\qquad k=1, \dots, s, \\ \frac{1}{Q_n(\mu)}=\sum_{i=0}^{\infty} d_i\mu^i. \end{gathered}
\end{equation*}
\notag
$$
Multiplying these functions, we have
$$
\begin{equation}
\frac{M_k(z,\mu)}{Q_n(\mu)}=\sum_{i=0}^{\infty} t_{ki}(z)\mu^i, \qquad t_{ki}(z)=\sum_{0\leqslant j\leqslant \min(i,s-1)} a_{kj}(z)d_{i-j},\quad i\geqslant 0,
\end{equation}
\tag{12}
$$
and, finally,
$$
\begin{equation*}
\mathbf{E} (z^{\tau}\mu^{\chi};\,\tau<\infty)=\sum_{k=1}^s\frac{c_k(z)}{\mu_k^b(z)} \sum_{i=0}^{\infty} t_{ki}(z)\mu^i=\sum_{i=0}^{\infty} \sum_{k=1}^s\frac{c_k(z)t_{ki}(z)}{\mu_k^b(z)} \mu^i.
\end{equation*}
\notag
$$
We also set
$$
\begin{equation}
a_{kj}=a_{kj}(1),\qquad t_{ki}=t_{ki}(1)=\sum_{0\leqslant j\leqslant \min(i,s-1)} a_{kj}d_{i-j}.
\end{equation}
\tag{13}
$$
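The convolution (12), (13) is straightforward to implement. A small helper (Python; the polynomial $Q_2(\mu)=(1-0.3\mu)(1-0.6\mu)$ and the sample coefficients of $M_k$ are assumptions for illustration) computes $t_{ki}$ from the coefficients $a_{kj}$ of $M_k$ and the power-series coefficients $d_i$ of $1/Q_n(\mu)$.

```python
def t_coeffs(a_k, d, n_terms):
    """t_{ki} from (12)-(13): a_k = [a_{k0}, ..., a_{k,s-1}], d = [d_0, d_1, ...]."""
    return [sum(a_k[j] * d[i - j] for j in range(min(i, len(a_k) - 1) + 1))
            for i in range(n_terms)]

# Example: Q_2(mu) = (1 - 0.3*mu)(1 - 0.6*mu), so d_i = (q1**(i+1) - q2**(i+1)) / (q1 - q2).
q1, q2 = 0.3, 0.6
d = [(q1 ** (i + 1) - q2 ** (i + 1)) / (q1 - q2) for i in range(50)]
print(t_coeffs([2.0, 1.0], d, 5))     # M_k(mu) = mu + 2, purely as an illustration
```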
For $z=1$, we have
$$
\begin{equation*}
\mathbf{E} (\mu^{\chi};\,\tau<\infty)=\sum_{i=0}^{\infty} \sum_{k=1}^s\frac{c_kt_{ki}}{\mu_k^b} \mu^i=\sum_{i=0}^{\infty} \mu^i\, \mathbf{P}(\chi=i).
\end{equation*}
\notag
$$
Thus, we have reached the following result.

Theorem 3. Let the distribution of a random variable $X$ belong to the class $K$ and let all zeros $\mu_1(z),\dots,\mu_s(z)$ of the function $1-z\psi(\mu)$ lying in the set $|\mu|>1$ be simple for $1-\delta<z<1$ with some $\delta>0$. Then
$$
\begin{equation}
\mathbf{P}(\chi=i)=\sum_{k=1}^s\frac{c_kt_{ki}}{\mu_k^b},\qquad i=0,1,\dots,
\end{equation}
\tag{14}
$$
where the numbers $c_k$ and $\mu_{k}$ are defined by (9) and (11), and the quantities $t_{ki}$ are defined by (13).

We also have the following results.

Corollary 2. Under the conditions of Theorem 3, if $\mathbf{E} X<0$, then
$$
\begin{equation}
\mathbf{P}\Bigl(\sup_{j\geqslant 0}S_j\geqslant b\Bigr) =\mathbf{P}(\tau<\infty) =\sum_{k=1}^s\frac{c_kM_k(1,1)}{Q_n(1)\mu_k^b}.
\end{equation}
\tag{15}
$$
This result is immediate from (10) for $z=1$ and $\mu=1$.

Corollary 3. Under the conditions of Theorem 3, let $a=\mathbf{E} X>0$. Then $\mathbf{E} \tau<\infty$, $\mathbf{E} S_{\tau}<\infty$, and the exact expression
$$
\begin{equation}
\mathbf{E} \tau=a^{-1}\mathbf{E} (b+\chi) =\frac{b}{a}+\frac{1}{a}\sum_{i=0}^{\infty}i\sum_{k=1}^s\frac{c_kt_{ki}}{\mu_k^b}
\end{equation}
\tag{16}
$$
for the renewal function $\mathbf{E} \tau=\mathbf{E} \tau(b)$ follows from the Wald identity.

Remark 1. Formula (14) becomes more involved if the function $1-z\psi(\mu)$ has multiple zeros. Assume that, for some $i$, the number $\mu_i(z)$ is a zero of multiplicity $j>1$. Then the expansion of $Q_n(\mu)/\Lambda_s(z,\mu)$ into partial fractions contains an expression of the form
$$
\begin{equation*}
\frac{d_{i1}(z)}{\mu-\mu_i(z)}+\frac{d_{i2}(z)}{(\mu-\mu_i(z))^2}+\dots+ \frac{d_{ij}(z)}{(\mu-\mu_i(z))^j}.
\end{equation*}
\notag
$$
Exact formulas for the coefficients $d_{i1}(z), \dots, d_{ij}(z)$ can be derived in various ways; the corresponding proofs are included in many textbooks. Our further analysis is similar to that used in the proof of Theorem 3 for the representation of the function $\mathbf{E} (z^{\tau}\mu^{\chi};\,\tau<\infty)$ as an expansion in powers of $\mu$. However, we now need to specify a way of evaluating expressions of the form
$$
\begin{equation*}
\biggl[\frac{1}{(\mu-\mu_i(z))^j}\biggr]^{[b,\infty)},\qquad j>1.
\end{equation*}
\notag
$$
This will be done via differentiation: for every $u$ with $|u|>|\mu|$ (here, the superscript $(j-1)$ denotes the derivative of order $j-1$), we have
$$
\begin{equation*}
\begin{aligned} \, \frac{1}{(\mu-u)^j} &=\frac{(-1)^{j-1}}{(j-1)!}\biggl(\frac{1}{\mu-u}\biggr)^{(j-1)}=\frac{(-1)^{j}}{(j-1)!} \biggl(\frac{1}{u}\sum_{i=0}^{\infty}\frac{\mu^i}{u^i}\biggr)^{(j-1)} \\ &=\frac{(-1)^{j}}{(j\,{-}\,1)!}\, \frac{1}{u} \sum_{i=j-1}^{\infty}\frac{i(i\,{-}\,1)\cdots(i\,{-}\,j\,{+}\,2)\mu^{i-j+1}}{u^i} \,{=}\,\frac{(-1)^{j}}{u}\! \sum_{i=j-1}^{\infty}\frac{C_i^{j-1}\mu^{i-j+1}}{u^i} \\ &=\frac{(-1)^{j}}{u} \sum_{k=0}^{\infty}\frac{C_{k+j-1}^{j-1}\mu^{k}}{u^{k+j-1}} = \frac{(-1)^{j}}{u^j} \sum_{k=0}^{\infty}\frac{C_{k+j-1}^{j-1}\mu^{k}}{u^{k}}, \end{aligned}
\end{equation*}
\notag
$$
that is,
$$
\begin{equation*}
\biggl[\frac{1}{(\mu-\mu_i(z))^j} \biggr]^{[b,\infty)}=\frac{(-1)^{j}}{\mu_i^j(z)} \sum_{k=b}^{\infty}\frac{C_{k+j-1}^{j-1}\mu^{k}}{\mu_i^{k}(z)}.
\end{equation*}
\notag
$$
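The expansion just derived is easy to verify numerically; in the sketch below (Python; the values of $u$, $\mu$ and $j$ are arbitrary illustrative choices with $|u|>|\mu|$), the truncated series reproduces $1/(\mu-u)^j$.

```python
from math import comb

u, mu, j = 2.5, 0.7, 3                       # illustrative values with |u| > |mu|
lhs = 1.0 / (mu - u) ** j
rhs = (-1) ** j / u ** j * sum(comb(k + j - 1, j - 1) * (mu / u) ** k for k in range(300))
print(lhs, rhs)                              # the two numbers agree to many digits
```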
A further representation of the function $\mathbf{E} (z^{\tau}\mu^{\chi};\,\tau<\infty)$ in powers of $\mu$ is omitted since it is too lengthy. Note that if, for $z=1$, all zeros of the function $1-z\psi(\mu)$ located outside the unit disc are simple, then the same property is preserved for $1-\delta<z<1$ with sufficiently small $\delta$. Finding the zeros of the function $1-\psi(\mu)$ is a separate problem; explicit formulas for them are generally unavailable, but, with the help of approximate methods, it is possible to localize them to any accuracy.

Remark 2. Let us consider separately the particular cases $P\in K_1$ and $P\in K_2$. Let $P\in K_1$, that is, in representation (5), we have
$$
\begin{equation*}
T_r(\mu)=0, \qquad \mathbf{E}(\mu^X;\,X\geqslant 0)=\frac{R_m(\mu)}{Q_n(\mu)},\quad m<n.
\end{equation*}
\notag
$$
In this case, the function $1-z\psi(\mu)$ has exactly $n$ zeros (counting their multiplicities) $\mu_1(z),\dots,\mu_n(z)$ in the set $|\mu|>1$; here $\Lambda_n(z,\mu)=\prod_{j=1}^n(\mu-\mu_j(z))$, and the conclusions of Theorem 3 and Corollaries 2 and 3 remain valid with $s$ replaced by $n$. Suppose now that $\mathbf{P}(X\leqslant r)=1$, $r\geqslant 1$, that is, $P\in K_2$. In this case, in (5), we put $R_m(\mu)=0$, $Q_n(\mu)=1$, and
$$
\begin{equation*}
\mathbf{E}(\mu^X;\,X\geqslant 0)=T_r(\mu) =\sum_{k=0}^r p_k\mu^k.
\end{equation*}
\notag
$$
In this case, the function $1-z\psi(\mu)$ has exactly $r$ zeros $\mu_1(z),\dots,\mu_r(z)$ (also counting their multiplicities) in the set $|\mu|>1$. Here,
$$
\begin{equation*}
\begin{gathered} \, \Lambda_r(z,\mu)=\prod_{j=1}^r(\mu-\mu_j(z)),\qquad \frac{1}{\Lambda_r(z,\mu)}=\sum_{k=1}^r\frac{c_k(z)}{\mu-\mu_k(z)}, \\ M_k(z,\mu)=\frac{\Lambda_r(z,\mu)}{\mu-\mu_k(z)}=\sum_{j=0}^{r-1}\, a_{kj}(z)\mu^j, \end{gathered}
\end{equation*}
\notag
$$
and (14)–(16) also hold with $s$ replaced by $r$ and with the coefficients $t_{ki}$ replaced by $a_{ki}$.
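As an illustration of Theorem 3 in the right-bounded case just described, the following sketch (Python; it reuses the assumed jump law $\mathbf{P}(X=2)=0.3$, $\mathbf{P}(X=1)=0.2$, $\mathbf{P}(X=-1)=0.5$ and $b=5$ from the Monte Carlo example in § 1) finds the zeros $\mu_k$, the partial-fraction coefficients $c_k$ and the coefficients $a_{ki}$ of $M_k$, and then evaluates (14); since $\mathbf{E}X>0$ here, the probabilities $\mathbf{P}(\chi=0)$ and $\mathbf{P}(\chi=1)$ must sum to one and should match the simulated frequencies.

```python
import numpy as np

# Assumed example (class K_2): P(X=2)=0.3, P(X=1)=0.2, P(X=-1)=0.5, level b=5.
p = {-1: 0.5, 1: 0.2, 2: 0.3}
b, r = 5, 2

# Zeros of 1 - psi(mu) with |mu| >= 1 (z = 1 is admissible by continuity since E X > 0).
kmin = min(p)
coef = np.zeros(max(p) - kmin + 1)           # coef[i] multiplies mu**i in (1 - psi(mu)) * mu**(-kmin)
coef[-kmin] = 1.0
for k, pk in p.items():
    coef[k - kmin] -= pk
roots = np.roots(coef[::-1])                 # np.roots expects the highest degree first
mus = sorted(roots[np.abs(roots) > 1 - 1e-9], key=abs)
assert len(mus) == r                         # mu_1 = 1, mu_2, ..., mu_r, cf. Theorem 2

# Partial fractions (9) with Q_n = 1: c_k = 1 / prod_{j != k} (mu_k - mu_j).
c = [1.0 / np.prod([mus[k] - mus[j] for j in range(r) if j != k]) for k in range(r)]

# a_{ki}: coefficients of M_k(1, mu) = prod_{j != k} (mu - mu_j); then formula (14).
excess = np.zeros(r)
for k in range(r):
    a_k = np.poly([mus[j] for j in range(r) if j != k])[::-1]    # a_{k0}, ..., a_{k,r-1}
    for i in range(r):
        excess[i] += (c[k] * a_k[i] / mus[k] ** b).real

print(excess, excess.sum())                  # P(chi = 0), P(chi = 1); the sum should be 1
```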
§ 4. Applications to the study of random walks with switchings

In [10], the author of the present paper introduced and studied an oscillating random walk with switchings whose jump distribution changes when hitting each of the two regulatory barriers. Let us recall its definition. Let $\{X_i^{(j)}\}_{i=1}^{\infty}$, $j = 1,2$, be two independent sequences of random variables that are independent and identically distributed for each fixed $j$, and let $S_0^{(1)}=S_0^{(2)}=0$,
$$
\begin{equation*}
S_n^{(j)}=X_1^{(j)}+\dots+X_n^{(j)},\qquad \mathbf{E} X_i^{(1)}>0,\qquad \mathbf{E} X_i^{(2)}<0.
\end{equation*}
\notag
$$
For an arbitrary number $b>0$, we set
$$
\begin{equation*}
N_1=\min\{n\geqslant 1\colon S_n^{(1)}\geqslant b\},\qquad N_2=\min\{n\geqslant 1\colon S_n^{(2)}\leqslant -b\}.
\end{equation*}
\notag
$$
On the time interval $0\leqslant n\leqslant T_1:=N_1+N_2$, the random walk $\{Y_n\}$ is defined by
$$
\begin{equation*}
Y_n = \begin{cases} \min\{S_n^{(1)}, b\}, &0\leqslant n\leqslant N_1, \\ \max\{b+S_{n-N_1}^{(2)}, 0\}, &N_1< n\leqslant N_1+N_2. \end{cases}
\end{equation*}
\notag
$$
For $n>T_1$, the trajectory evolves according to the same scheme. First, the elements of an independent copy of the sequence $\{X_i^{(1)}\}$ are taken as its jumps until the level $b$ is hit at some time $T_1+N_3$. We put $Y_{T_1+N_3}=b$ and then take independent copies of the elements of the sequence $\{X_i^{(2)}\}$ as jumps until zero is hit at some time $T_1+N_3+N_4=T_1+T_2$. We again put $Y_{T_1+T_2}=0$ and proceed in the same way to define the evolution of the trajectory on the subsequent time intervals of length $T_3,T_4,\dots$ .

Here, we continue the study of the above sequence $\{Y_n\}$ in the context of integer-valued random walks. As in some other boundary crossing problems, the factorization components play an important role here. Assume that the random variables $X_1^{(j)}$ take only integer values, the number $b>0$ is an integer, and $\psi_j(\mu)=\mathbf{E}\mu^{X_1^{(j)}}$, $j=1,2$. As above, we introduce the factorization identities
$$
\begin{equation*}
1-z\psi_j(\mu)=R_{+}^{(j)}(z,\mu)R_{0}^{(j)}(z)R_{-}^{(j)}(z,\mu),\qquad |z|<1,\quad |\mu|=1,
\end{equation*}
\notag
$$
where
$$
\begin{equation*}
\begin{aligned} \, R_{-}^{(j)}(z,\mu) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{E}\bigl(\mu^{S_n^{(j)}};\, S_n^{(j)}<0\bigr)\biggr\}, \\ R_{+}^{(j)}(z,\mu) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{E}\bigl(\mu^{S_n^{(j)}};\, S_n^{(j)}> 0\bigr)\biggr\}, \\ R_{0}^{(j)}(z) &=\exp\biggl\{-\sum_{n=1}^{\infty}\frac{z^n}{n} \mathbf{P}(S_n^{(j)}=0)\biggr\}. \end{aligned}
\end{equation*}
\notag
$$
Our problem is to find the double moment generating function
$$
\begin{equation*}
\Psi (z, \mu) =\sum_{n=0}^\infty\,z^n\mathbf{E} \mu^{Y_n},\qquad |z| < 1, \quad |\mu| = 1.
\end{equation*}
\notag
$$
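Before turning to the exact answer, it may help to see the process numerically. The sketch below (Python; the two jump laws, the level $b$ and the run length are assumptions chosen so that $\mathbf{E}X^{(1)}>0$ and $\mathbf{E}X^{(2)}<0$) simulates $\{Y_n\}$ and records its long-run frequencies, which, by the regeneration argument recalled at the end of this section, approximate the stationary probabilities $q_k$ of Theorem 5 below.

```python
import random

def step(y, regime, b, rng):
    """One step of Y_n; regime 1 has positive drift, regime 2 negative drift (assumed laws)."""
    if regime == 1:
        y += rng.choices([2, 1, -1], weights=[0.3, 0.2, 0.5])[0]
        if y >= b:
            return b, 2                      # switching at the upper barrier: Y = min(S, b)
    else:
        y += rng.choices([-2, -1, 1], weights=[0.3, 0.2, 0.5])[0]
        if y <= 0:
            return 0, 1                      # switching at the lower barrier: Y = max(S, 0)
    return y, regime

rng, b = random.Random(1), 10
y, regime, counts, n_steps = 0, 1, {}, 1_000_000
for _ in range(n_steps):
    y, regime = step(y, regime, b, rng)
    counts[y] = counts.get(y, 0) + 1
print({k: round(c / n_steps, 4) for k, c in sorted(counts.items())})   # empirical q_k
```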
Theorem 4. For $|z| < 1$ and $|\mu| = 1$,
$$
\begin{equation}
\begin{aligned} \, \Psi (z, \mu) &= \frac{R_+^{(1)}(z,\mu)[1/R_+^{(1)}(z,\mu)]^{[0,b)}}{(1 - z \psi_1(\mu))(1-g(z))} +\frac{\mu^bg_1(z)R_-^{(2)}(z,\mu)[1/R_-^{(2)}(z,\mu)]^{( -b,0]}}{(1 - z \psi_2(\mu))(1-g(z))} \nonumber \\ &=\frac{[1/R_+^{(1)}(z,\mu)]^{[0,b)}}{R_-^{(1)}(z,\mu)R_0^{(1)}(z)(1-g(z))} +\frac{\mu^bg_1(z)[1/R_-^{(2)}(z,\mu)]^{( -b,0]}}{R_+^{(2)}(z,\mu)R_0^{(2)}(z) (1-g(z))}, \end{aligned}
\end{equation}
\tag{17}
$$
where $g(z)=\mathbf{E}z^{T_1}=\mathbf{E}z^{N_1+N_2}$, $g_1(z)=\mathbf{E}z^{N_1}$. Proof. From the definition, it follows that the switchings of the process $Y_n$ occur at the times $t_k=N_1+\dots+N_k$; moreover, switching at level $b$ occurs for $k=2m+1$, and switching while hitting zero occurs for $k=2m$, $m=0,1,\dots$ . We assume that $t_0=0$.
Let $\nu (n)$ be the number of switchings in the sequence $Y_1, \dots, Y_n$. Then, for $|z| < 1$ and $|\mu| = 1$,
$$
\begin{equation*}
\begin{aligned} \, \Psi (z, \mu) &= \sum^{\infty}_{n=0}\sum_{m=0}^{[n/2]} z^n \, \mathbf{E}\bigl(\mu^{Y_n}; \, \nu(n)=2m\bigr) \\ &\qquad +\sum^{\infty}_{n=1}\sum_{m=0}^{[(n-1)/2]} z^n\, \mathbf{E}\bigl(\mu^{Y_n}; \, \nu(n) =2m+1\bigr) = I_1 + I_2. \end{aligned}
\end{equation*}
\notag
$$
The right-hand side of this relation is evaluated as follows:
$$
\begin{equation*}
\begin{aligned} \, I_1 &= \sum^{\infty}_{n=0}\sum_{m=0}^{[n/2]} z^n \, \mathbf{E}\bigl(\mu^{Y_n}; \, \nu(n) =2m\bigr) \\ &= \sum^{\infty}_{n = 0} z^n \sum^{[n/2]}_{m=0}\sum_{k=0}^{n} \mathbf{E}\bigl(\mu^{S_{n-k}^{(1)}}; \, t_{2m}=k,\, N_1 > n-k\bigr). \end{aligned}
\end{equation*}
\notag
$$
Note that, in this formula, the sums $S_{n-k}^{(1)}$ and the time $N_1$ of the first hitting of the level $b$ are understood as random variables based on the copy of the sequence $\{X_n^{(1)}\}$ which is used to construct the trajectory $Y_n$ after hitting zero at the time $t_{2m}=k$. Here, the behaviour of the process $Y_n$ up to the time $k$ inclusive is independent of the subsequent evolution of the trajectory. We next have
$$
\begin{equation*}
\begin{aligned} \, I_1&= \sum^{\infty}_{n = 0} z^n \sum^{[n/2]}_{m=0}\sum_{k=0}^{n} \mathbf{E}\bigl(\mu^{S_{n-k}^{(1)}}; \, N_1 > n-k\bigr)\, \mathbf{P}(t_{2m}=k) \\ &= \sum^{\infty}_{m = 0}\, \sum^{\infty}_{n=2m}z^n\sum_{k=0}^{n} \mathbf{E}\bigl(\mu^{S_{n-k}^{(1)}}; \, N_1 > n-k\bigr)\, \mathbf{P}(t_{2m}=k) \\ &=\sum^{\infty}_{m=0}\, \sum^{\infty}_{i = 2m} z^i \, \mathbf{P}(t_{2m}=i) \sum_{j=0}^{\infty}z^j\, \mathbf{E}\bigl(\mu^{S_{j}^{(1)}}; \, N_1 > j\bigr). \end{aligned}
\end{equation*}
\notag
$$
Let us now analyze the expressions on the right of this equality. Using the well-known identity
$$
\begin{equation*}
(1 - z \psi_1(\mu)) \sum^{\infty}_{n=0} z^n \mathbf{E}\bigl(\mu^{S_n^{(1)}}; \, N_1 > n\bigr) = 1 - \mathbf{E}\bigl(z^{N_1}\mu^{S_{N_1}^{(1)}}\bigr)
\end{equation*}
\notag
$$
(see [11], Ch. 18, § 1), we have
$$
\begin{equation*}
\sum^{\infty}_{n=0} z^n \mathbf{E}\bigl(\mu^{S_n^{(1)}}; \, N_1 > n\bigr) = (1 - z\psi_1(\mu))^{-1}\bigl(1 - \mathbf{E}\bigl(z^{N_1}\mu^{S_{N_1}^{(1)}}\bigr)\bigr).
\end{equation*}
\notag
$$
Next, employing the factorization representation (3), we get that
$$
\begin{equation*}
\mathbf{E}\bigl(z^{N_1}\mu^{S_{N_1}^{(1)}}\bigr) = R_+^{(1)}(z,\mu)\biggl[\frac1{R_+^{(1)}(z,\mu)}\biggr]^{[b,\infty)} =1-R_+^{(1)}(z,\mu)\biggl[\frac1{R_+^{(1)}(z,\mu)}\biggr]^{[0,b)}.
\end{equation*}
\notag
$$
So,
$$
\begin{equation*}
\sum^{\infty}_{n=0} z^n \mathbf{E}\bigl(\mu^{S_n^{(1)}}; \, N_1 > n\bigr) =\frac{R_+^{(1)}(z,\mu)[1/R_+^{(1)}(z,\mu)]^{[0,b)}}{1 - z\psi_1(\mu)}.
\end{equation*}
\notag
$$
Recall that $t_{2m}=T_1+\dots+T_m$, where the random variables $T_k$, $k\geqslant 1$, are independent, $T_k=N_{2k-1}+N_{2k}\geqslant 2 $. Denoting $g(z)=\mathbf{E}z^{T_1}$, we have
$$
\begin{equation*}
\sum^{\infty}_{i = 2m} z^i \, \mathbf{P}(t_{2m}=i)=\sum^{\infty}_{i = 2m} z^i \, \mathbf{P}(T_1+\dots+T_m=i)=g^m(z).
\end{equation*}
\notag
$$
Hence
$$
\begin{equation*}
\sum^{\infty}_{m=0}\, \sum^{\infty}_{i = 2m} z^i \, \mathbf{P}(t_{2m}=i)=\sum^{\infty}_{m=0} g^m(z)=\frac1{1-g(z)},\qquad |z|<1,
\end{equation*}
\notag
$$
and, as a result, we obtain
$$
\begin{equation*}
I_1=\frac{R_+^{(1)}(z,\mu)\big[ 1/R_+^{(1)}(z,\mu) \big]^{[0,b)}}{(1 - z \psi_1(\mu))(1-g(z))}.
\end{equation*}
\notag
$$
The sum $I_2$ is dealt with similarly. We have
$$
\begin{equation*}
\begin{aligned} \, I_2 &= \sum^{\infty}_{n=1}\sum_{m=0}^{[(n-1)/2]} z^n \mathbf{E}\bigl(\mu^{Y_n};\, \nu(n)=2m+1\bigr) \\ &= \sum^{\infty}_{n = 1} z^n \sum^{[(n-1)/2]}_{m=0}\sum_{k=1}^{n} \mathbf{E}\bigl(\mu^{b+S_{n-k}^{(2)}}; \, t_{2m+1}=k,\, N_2 > n-k\bigr) \\ &= \mu^b\sum^{\infty}_{n = 1} z^n \sum^{[(n-1)/2]}_{m=0}\sum_{k=1}^{n} \mathbf{E}\bigl(\mu^{S_{n-k}^{(2)}}; \, N_2> n-k\bigr)\, \mathbf{P}(t_{2m+1}=k) \\ &= \mu^b\sum^{\infty}_{m=0}\, \sum^{\infty}_{n =2m+1} z^n \sum_{k=1}^{n} \mathbf{E}\bigl(\mu^{S_{n-k}^{(2)}}; \, N_2 > n-k\bigr)\, \mathbf{P}(t_{2m+1}=k) \\ &=\mu^b\sum^{\infty}_{m=0}\, \sum^{\infty}_{i = 2m+1} z^i \, \mathbf{P}(t_{2m+1}=i)\sum_{j=0}^{\infty}z^j\, \mathbf{E}\bigl(\mu^{S_{j}^{(2)}}; \, N_2 > j\bigr). \end{aligned}
\end{equation*}
\notag
$$
Using the relations (see [8], [11])
$$
\begin{equation*}
\begin{gathered} \, (1 - z \psi_2(\mu)) \sum^{\infty}_{n=0} z^n \mathbf{E}\bigl(\mu^{S_n^{(2)}}; \, N_2 > n\bigr) = 1 - \mathbf{E}\bigl(z^{N_2}\mu^{S_{N_2 }^{(2)}}\bigr), \\ \mathbf{E}\bigl(z^{N_2 }\mu^{S_{N_2}^{(2)}}\bigr) = R_-^{(2)}(z,\mu) \biggl[ \frac1{R_-^{(2)}(z,\mu)} \biggr]^{(-\infty, -b]}=1-R_-^{(2)}(z,\mu) \biggl[ \frac1{R_-^{(2)}(z,\mu)} \biggr]^{( -b,0]}, \end{gathered}
\end{equation*}
\notag
$$
we infer that
$$
\begin{equation*}
\sum_{j=0}^{\infty}z^j \mathbf{E}\bigl(\mu^{S_{j}^{(2)}}; \, N_2 > j\bigr)= (1 - z\psi_2(\mu))^{-1}R_-^{(2)}(z,\mu) \biggl[ \frac1{R_-^{(2)}(z,\mu)} \biggr]^{(-b,0]}.
\end{equation*}
\notag
$$
Now, consider
$$
\begin{equation*}
\sum^{\infty}_{i = 2m+1} z^i \, \mathbf{P}(t_{2m+1}=i)=\sum^{\infty}_{i = 2m+1} z^i \, \mathbf{P}(T_1+\dots+T_m +N_{2m+1}=i)=g^m(z)g_1(z),
\end{equation*}
\notag
$$
where we set $g_1(z)=\mathbf{E}z^{N_{2m+1}}=\mathbf{E}z^{N_{1}}$. We have
$$
\begin{equation*}
\sum^{\infty}_{m=0} \sum^{\infty}_{i = 2m+1} z^i \, \mathbf{P}(t_{2m+1}=i)=\sum^{\infty}_{m=0}g^m(z)g_1(z)=\frac{g_1(z)}{1-g(z)},
\end{equation*}
\notag
$$
that is,
$$
\begin{equation*}
I_2=\frac{\mu^bg_1(z)R_-^{(2)}(z,\mu)[1/R_-^{(2)}(z,\mu)]^{( -b,0]}}{(1 - z \psi_2(\mu))(1-g(z))}.
\end{equation*}
\notag
$$
Combining $I_1$ and $I_2$, we arrive at (17). Theorem 4 is proved.

In [10], it was noted that $\{Y_n\}$ is a regenerative random process with regeneration periods $T_1,T_2,\dots$ . Since $\mathbf{E} T_1=\mathbf{E} N_1+\mathbf{E} N_2<\infty$, there exists a stationary distribution of the process $\{Y_n\}$:
$$
\begin{equation*}
Q(A)=\lim_{n\to \infty} \mathbf{P}(Y_n\in A).
\end{equation*}
\notag
$$
The moment generating function of this distribution was found in [10]. Theorem 5 (see [10]). Let $q_k=Q(\{k\})$ and let $b>0$ be an integer. Then, for $|\mu|=1$,
$$
\begin{equation}
\begin{aligned} \, \sum_{k=-\infty}^\infty \mu^{k}q_k &= \frac 1 {\mathbf{E}T_1}\lim_{z\to 1} \biggl\{\frac{R_{+}^{(1)}(z,\mu)[(R_{+}^{(1)})^{-1}(z,\mu)]^{[0,b)}}{1-z\psi_1(\mu)} \nonumber \\ &\qquad\qquad\qquad +\mu^b\frac{R_{-}^{(2)}(z,\mu)[(R_{-}^{(2)})^{-1}(z,\mu)] ^{(-b,0]}}{1-z\psi_2(\mu)}\biggr\} \nonumber \\ &= \frac1{\mathbf{E}T_1}\lim_{z\to 1} \biggl\{\frac{[(R_{+}^{(1)})^{-1}(z,\mu)]^{[0,b)}}{R_{0}^{(1)}(z)R_{-}^{(1)}(z,\mu)}+ \mu^b\frac{[(R_{-}^{(2)})^{-1}(z,\mu)]^{(-b,0]}}{R_{0}^{(2)}(z)R_{+}^{(2)}(z,\mu)}\biggr\}. \end{aligned}
\end{equation}
\tag{18}
$$
Observe that the same result follows from (17), since
$$
\begin{equation*}
\sum_{k=-\infty}^\infty \mu^{k}q_k=\lim_{z\to1}(1-z)\Psi (z, \mu).
\end{equation*}
\notag
$$
Theorem 2 now implies that if the distributions of $X_1^{(j)}$ and $-X_1^{(j)}$ for $j=1,2$ belong to the class $K$ (for instance, if $|X_1^{(j)}|\leqslant r$ for some $r<\infty$), then all the factorization components involved in (17), (18), as well as the resulting moment generating functions, are rational functions of the variable $\mu$. This means, in particular, that the above algorithm can be applied to them, including the expansion into partial fractions and the subsequent derivation of exact formulas for the probabilities $q_k$ and the functions
$$
\begin{equation*}
f_k(z):=\sum_{n=1}^\infty z^n\, \mathbf{P}(Y_n=k).
\end{equation*}
\notag
$$
The inversion of these moment generating functions with respect to the variable $z$ is a very difficult problem. At the same time, the knowledge of exact expressions for $f_k(z)$ provides additional possibilities for studying the characteristics of the distribution of the random variable $Y_n$.
Bibliography
1. A. A. Borovkov, Probability theory, Izd. stereotip., URSS, Moscow, 2021 (Russian); English transl. of 5th ed., Universitext, Springer, London, 2013.
2. A. A. Borovkov, "New limit theorems in boundary problems for sums of independent terms", Sibirsk. Mat. Zh., 3:5 (1962), 645–694; English transl. in Select. Transl. Math. Stat. Probab., 5, Amer. Math. Soc., Providence, RI, 1965, 315–372.
3. A. A. Borovkov and B. A. Rogozin, "Boundary value problems for some two-dimensional random walks", Teor. Veroyatnost. i Primenen., 9:3 (1964), 401–430; English transl. in Theory Probab. Appl., 9:3 (1964), 361–388.
4. V. I. Lotov, "On the asymptotics of the distribution of excess", Sib. Èlektron. Mat. Izv., 12 (2015), 292–299 (Russian).
5. V. I. Lotov, "On some boundary crossing problems for Gaussian random walks", Ann. Probab., 24:4 (1996), 2154–2171.
6. G. Lorden, "On excess over the boundary", Ann. Math. Statist., 41:2 (1970), 520–527.
7. A. A. Mogul'skii, "Absolute estimates for moments of certain boundary functionals", Teor. Veroyatnost. i Primenen., 18:2 (1973), 350–357; English transl. in Theory Probab. Appl., 18:2 (1973), 340–347.
8. V. I. Lotov, "Asymptotic analysis of the distributions in problems with two boundaries. I", Teor. Veroyatnost. i Primenen., 24:3 (1979), 475–485; English transl. in Theory Probab. Appl., 24:3 (1980), 480–491.
9. V. I. Lotov, "On an approach to two-sided boundary problems", Statistics and control of stochastic processes, Nauka, Moscow, 1989, 117–121 (Russian).
10. V. I. Lotov, "On a random walk with switchings", Sib. Èlektron. Mat. Izv., 15 (2018), 1320–1331.
11. W. Feller, An introduction to probability theory and its applications, v. 2, 2nd ed., John Wiley & Sons, Inc., New York–London–Sydney, 1971; Russian transl.: v. 2, Mir, Moscow, 1984.