Russian Mathematical Surveys

Russian Mathematical Surveys, 2024, Volume 79, Issue 2, Pages 189–227
DOI: https://doi.org/10.4213/rm10160e
(Mi rm10160)
 

On extensibility and qualitative properties of solutions to Riccati's equation

I. V. Astashovaab, V. A. Nikishova

a Lomonosov Moscow State University
b Plekhanov Russian State University of Economics
Abstract: We consider Riccati's equation on the real axis with continuous coefficients and non-negative discriminant of the right-hand side. We study the extensibility of its solutions to unbounded intervals. We obtain asymptotic formulae for its solutions in their dependence on the initial values and the properties of the functions representing roots of the right-hand side of the equation. We obtain results on the asymptotic behaviour of solutions defined near $\pm\infty$. We study the structure of the set of bounded solutions in the case when the roots of the right-hand side of the equation are $C^1$-functions which are different on the whole of their domain and tend monotonically to some limits as $x\to\pm\infty$. We extend, improve, or refine some well-known results.
Bibliography: 47 titles.
Keywords: Riccati's equation, non-negative discriminant, continuous coefficients, extensibility, qualitative properties, asymptotic properties.
Funding agency Grant number
Russian Science Foundation 20-11-20272-П
The research was supported by the Russian Science Foundation under grant no. 20-11-20272-$\Pi$, https://rscf.ru/en/project/20-11-20272/.
Received: 29.07.2023
Document Type: Article
UDC: 517.923
MSC: 34A34, 34D05
Language: English
Original paper language: Russian

1. Introduction

Riccati’s equation is an equation of the form

$$ \begin{equation} y'=R(x)y^2+Q(x)y+P(x), \end{equation} \tag{1.1} $$
where $R\not\equiv 0$. Equation (1.1) has applications in many fields, in particular, physics (the theory of gravitational waves [1], quantum mechanics [2], continuum mechanics [3]), financial mathematics [4], instrumentation [5], and mechanical engineering [6]. It also serves as a tool for solving problems in a wide variety of fields of mathematics, for example, differential geometry [7], [8]. The book [8] describes in detail geometric approaches to the study of the integrability of Riccati’s equation (including the matrix version) and applications of Riccati’s equation to problems in the calculus of variations. In [9] and [10] an equation of the form (1.1) was used to establish the limits of applicability of Chaplygin’s theorem on differential inequalities to a second-order linear equation. In [11] a four-parameter family of differential equations on a torus was considered. The function on the parameter space describing the curves preserving the rotation number of this equation satisfies the third Painlevé equation. This latter is shown to have a family of solutions that are also solutions of a certain Riccati equation obtained from Bessel’s equation by replacing the unknown function. A technique using Riccati’s equation was used in [12] to study the oscillation of solutions to some quasi-linear equations. Other applications of Riccati’s equation can be found in the references of the works cited.

We recall some well-known facts from the history of Riccati’s equation.

For the first time an equation of this type was mentioned by J. Bernoulli [13] in 1694. Namely, he considered a special case of equation (1.1), the equation

$$ \begin{equation} a^2 y'=y^2+x^2 \end{equation} \tag{1.2} $$
with $a\in\mathbb{R}$, $a\ne 0$.

Later, in his letters to Leibniz (see [14]) Bernoulli presented a solution to (1.2) for $a=1$ in the form of a quotient of the sums of two series. However, he failed to express it by quadratures. In 1724 Riccati [15] considered the equation

$$ \begin{equation} y'+a y^2=b x^{\alpha},\qquad a,b\ne 0, \end{equation} \tag{1.3} $$
now called the special Riccati equation.

In his note [16] on the article [15] D. Bernoulli presented (see also [17], Chap. I, § 8) an infinite sequence of values of $\alpha$ such that equation (1.3) is integrable by quadratures:

$$ \begin{equation*} \alpha_k=\frac{4k}{1-2k}\,,\qquad k\in\mathbb{Z}. \end{equation*} \notag $$

Later, Liouville [18] (1841) showed that there are no other values of $\alpha$ with this property. In the 18th and 19th centuries various representations of solutions to (1.3) by means of series and integrals were obtained (see, for example, [19]–[22]).

Riccati’s equation of the general form (1.1) was investigated by Euler. He showed [19] that if a particular solution $y_1$ is known, then the general solution can be obtained by two quadratures: first, by the substitution $y=y_1+\theta$ equation (1.1) is reduced to

$$ \begin{equation} \theta'=R(x){\theta}^2+(2R(x)y_1(x)+Q(x))\theta, \end{equation} \tag{1.4} $$
and then equation (1.4) is reduced to a linear one by the substitution $\theta=1/v$.
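Euler's two-quadrature scheme is easy to check numerically. The sketch below (our illustration, on assumed toy data not taken from the paper) applies it to $y'=y^2-1$ with the particular solution $y_1\equiv -1$: then $R\equiv 1$, $Q\equiv 0$, and the substitution $y=y_1+1/v$ leads to the linear equation $v'=-(2Ry_1+Q)v-R=2v-1$; the initial value $y(0)=0$ gives $v(0)=1/(y(0)-y_1(0))=1$, and the exact solution of the Riccati equation is $y(x)=-\tanh x$.

```python
import math

def rk4(f, v0, x0, x1, n):
    """Classical fourth-order Runge-Kutta for v' = f(x, v)."""
    h = (x1 - x0) / n
    x, v = x0, v0
    for _ in range(n):
        k1 = f(x, v)
        k2 = f(x + h / 2, v + h * k1 / 2)
        k3 = f(x + h / 2, v + h * k2 / 2)
        k4 = f(x + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return v

# integrate the LINEAR equation v' = 2v - 1 obtained by Euler's reduction
v_end = rk4(lambda x, v: 2 * v - 1, 1.0, 0.0, 2.0, 2000)
y_end = -1.0 + 1.0 / v_end       # reconstructed Riccati solution at x = 2
exact = -math.tanh(2.0)          # closed form for y' = y^2 - 1, y(0) = 0
print(y_end, exact)
```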

If we know two particular solutions to (1.1), then, according to Euler, the general solution can be obtained with the help of a single quadrature.

Subsequently, Weyr [23] and Picard [24] showed that the general solution to (1.1) is a linear-fractional function of an arbitrary constant, and deduced from this that for any four distinct particular solutions $y_1$, $y_2$, $y_3$, and $y_4$ to (1.1), the anharmonic ratio

$$ \begin{equation*} \frac{(y_1-y_2)}{(y_1-y_4)}:\frac{(y_3-y_2)}{(y_3-y_4)} \end{equation*} \notag $$
is independent of $x$. Thus, knowing three particular solutions to (1.1), we can obtain the general solution without quadratures.
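The invariance of the anharmonic ratio is easy to check numerically on a toy equation; the following sketch (our illustration, not from the sources cited) uses $y'=y^2$, whose solutions are $y_c(x)=-1/(x-c)$ for constants $c$:

```python
# Four particular solutions of y' = y^2, indexed by their pole c.
def y(c, x):
    return -1.0 / (x - c)

def anharmonic_ratio(x, c1, c2, c3, c4):
    """The Weyr-Picard cross-ratio of four solutions, evaluated at x."""
    y1, y2, y3, y4 = (y(c, x) for c in (c1, c2, c3, c4))
    return ((y1 - y2) / (y1 - y4)) / ((y3 - y2) / (y3 - y4))

r1 = anharmonic_ratio(5.0, 0, 1, 2, 3)
r2 = anharmonic_ratio(8.0, 0, 1, 2, 3)
print(r1, r2)  # the two values coincide: the ratio does not depend on x
```

A short computation shows that for these solutions the ratio equals $(c_2-c_1)(c_4-c_3)/((c_4-c_1)(c_2-c_3))$, here $-1/3$.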

Further results of interest contain relations satisfied by solutions to equation (1.1) as functions of their initial values [25], [26]. It is known [27] that Riccati’s equation with continuous periodic coefficients can have at most two periodic solutions.

Theorems proved in [28] refine, for Riccati’s equation of a special form, the classical theorem (see, for example, [29], Chap. 7, Theorem 6) on the continuous dependence of solutions on the right-hand side and initial conditions.

Some generalizations of the scalar differential equation (1.1) are also worth mentioning. Such generalizations include, for example, the matrix differential Riccati equation

$$ \begin{equation} Y'=YR(x)Y+YA(x)+B(x)Y+P(x) \end{equation} \tag{1.5} $$
for $R,A,B,P\colon X\subset\mathbb{R}\to M_n(\mathbb{R})$ and the unknown $n\times n$ matrix $Y(\,\cdot\,)$, where $M_n(\mathbb{R})$ is the space of real $n\times n$ matrices.

A broad overview of the available results on the matrix differential Riccati equation was given in [30].

Another generalization of (1.1) is the equation

$$ \begin{equation} y'=P_0(x)y^n+P_1(x)y^{n-1}+\cdots+P_{n-1}(x)y+P_n(x). \end{equation} \tag{1.6} $$
It was investigated, for example, in [31]–[33].

In [31] it was proved, in particular, that if $n\ne 1$, $P_0\equiv 1$, and the functions $P_i(\,\cdot\,)$, $i=1,\dots,n$, are continuous and bounded in a neighbourhood of $+\infty$, then equation (1.6) cannot have solutions $y(\,\cdot\,)$ with the property that $\lim_{x\to+\infty} y(x)=+\infty$.

As we will see below, for equation (1.1) with non-negative discriminant of the right-hand side, this result follows from Theorem 3.1.3. Many well-known results related to various analogues and generalizations of equation (1.1) were presented in [34].

Since even the special Riccati equation is not always integrable by quadratures (see also [35]–[37] on integrability cases of (1.1)), it is useful to study the qualitative properties of solutions to (1.1).

The qualitative and asymptotic properties of solutions to Riccati’s equation

$$ \begin{equation} y' = y^{2}+ Q(x)y+P(x) \end{equation} \tag{1.7} $$
with continuous coefficients were studied by many authors, in particular, see [38], Chap. XI, § 7, [34], and [39]–[42].

Suppose that $Q^{2}(x)-4P(x)\geqslant 0,\,x\in\mathbb{R}$; then the quadratic equation

$$ \begin{equation} y^{2} + Q(x)y + P(x) = 0 \end{equation} \tag{1.8} $$
has the continuous real roots
$$ \begin{equation} \alpha_{1,2}(x)=\frac{1}{2}\bigl(-Q(x) \mp \sqrt{Q^2(x)-4P(x)}\,\bigr),\qquad x\in\mathbb{R}, \end{equation} \tag{1.9} $$
and equation (1.7) can be written as
$$ \begin{equation} y'=(y-\alpha_1(x))(y-\alpha_2(x)). \end{equation} \tag{1.10} $$
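As a quick numerical illustration (with sample coefficients chosen by us so that the discriminant is positive everywhere), one can verify pointwise that the right-hand sides of (1.7) and (1.10) agree:

```python
import math

# Sample coefficients (an assumption): Q = -2 sin x, P = sin^2 x - 1,
# so that D = Q^2 - 4P = 4 > 0 on the whole real axis.
def Q(x): return -2.0 * math.sin(x)
def P(x): return math.sin(x) ** 2 - 1.0

def roots(x):
    """The continuous real roots (1.9) of the quadratic (1.8)."""
    D = Q(x) ** 2 - 4.0 * P(x)
    assert D >= 0.0
    a1 = 0.5 * (-Q(x) - math.sqrt(D))
    a2 = 0.5 * (-Q(x) + math.sqrt(D))
    return a1, a2

# check the factorization (1.10) at several points (x, y)
for x in (-2.0, 0.0, 1.3):
    a1, a2 = roots(x)
    for yv in (-1.0, 0.5, 3.0):
        rhs = yv * yv + Q(x) * yv + P(x)   # right-hand side of (1.7)
        factored = (yv - a1) * (yv - a2)   # right-hand side of (1.10)
        assert abs(rhs - factored) < 1e-12
```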

The case when the function $\alpha_2(x)$ is unbounded for $x\geqslant x_0$ was considered in [40] and [41].

Theorem A ([40], p. 18). Suppose that the function $\alpha_1(x)$ is unbounded for $x\geqslant x_0$, increases monotonically, and $\alpha_1 (x)<\alpha_2(x)$. If there exists a function $\beta\colon[x_0,+\infty)\to\mathbb{R}$ satisfying, for $x\geqslant x_0$, the inequalities

$$ \begin{equation*} \beta(x)>\alpha_2(x) \quad\textit{and}\quad \beta'(x)<(\beta(x)-\alpha_1(x))(\beta(x)-\alpha_2(x)), \end{equation*} \notag $$
then for any $\varepsilon>0$ such that $\alpha_2(x)-\varepsilon>\alpha_1(x)$, $x\geqslant x_0$, equation (1.10) has at least one solution $y_\varepsilon$ defined on $[x_0,+\infty)$ and such that
$$ \begin{equation*} \alpha_2(x)-\varepsilon<y_\varepsilon(x)<\beta(x). \end{equation*} \notag $$
Moreover, there exists at least one solution $y(\,\cdot\,)$ to (1.10) defined on $[x_0,+\infty)$ and such that
$$ \begin{equation*} \alpha_2(x)<y(x)<\beta(x). \end{equation*} \notag $$

Theorem B ([41], p. 239). If the functions $\alpha_1$ and $\alpha_2$ are positive and non-decreasing on $[x_0,+\infty)$, the function $\alpha_1$ is bounded on $[x_0,+\infty)$, and $\lim_{x\to+\infty}\alpha_2(x)=+\infty$, then each solution to equation (1.10) defined at a point $x_0$ is extensible onto $[x_0,+\infty)$.

Moreover, if $y(\,\cdot\,)$ is a solution to (1.10) which is positive on the interval $[x_0,+\infty)$, then either

$$ \begin{equation*} \lim_{x\to+\infty}y(x)=+\infty, \end{equation*} \notag $$
or
$$ \begin{equation*} \exp\biggl(-\int_{x_0}^x(\alpha_1(t)+\alpha_2(t))\,dt\biggr)< y(x)<\frac{\alpha_1(x)+\alpha_2(x)}{2}\,,\qquad x\geqslant x_0. \end{equation*} \notag $$

The author of Theorem A also studied an equation similar to (1.10) but with a greater number of factors on the right-hand side, namely,

$$ \begin{equation} y'=(y-f_1(x))(y-f_2(x))\cdots(y-f_n(x)), \end{equation} \tag{1.11} $$
where $n\geqslant 3$. In [43] he studied the existence of solutions to (1.11) satisfying certain conditions in the case when $f_i\in C[x_0,+\infty)$, $f_1(x)<\cdots<f_n(x)$ for $x\geqslant x_0$, and $f_n(x)\to +\infty$ as $x\to +\infty$.

Theorems on the qualitative properties of solutions to (1.10) in the case when the functions $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$ are bounded on the whole number line were presented in [42] and [34]. The behaviour of solutions to equation (1.10) with $C^1$-functions $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$, $\alpha_1(x)<\alpha_2(x)$, which are bounded on $\mathbb{R}$ and tend monotonically to some limits $\alpha_1^{\pm}\in\mathbb{R}$ and $\alpha_2^{\pm}\in\mathbb{R}$ as $x\to\pm\infty$, respectively, were studied in [39]. It was proved there that under the above conditions all bounded solutions to (1.10) have limits as $x\to\pm\infty$ and, in accordance with their possible values, are divided into the following four types:

$$ \begin{equation*} \begin{alignedat}{3} \text{Type I:}&&\quad y_{-}&=\alpha_1^-,&\quad y_{+}&=\alpha_1^+; \\ \text{Type II:}&&\quad y_{-}&=\alpha_2^-,&\quad y_{+}&=\alpha_1^+; \\ \text{Type III:}&&\quad y_{-}&=\alpha_2^-,&\quad y_{+}&=\alpha_2^+; \\ \text{Type IV:}&&\quad y_{-}&=\alpha_1^-,&\quad y_{+}&=\alpha_2^+, \end{alignedat} \end{equation*} \notag $$
where $y_{\pm}:=\lim_{x\to\pm\infty}y(x)\in \mathbb{R}$.

This article continues the study of the qualitative and asymptotic properties of solutions to equation (1.10). In many results here it is additionally assumed that

$$ \begin{equation} -\infty<m\leqslant \alpha_1(x)\leqslant\alpha_2(x)\leqslant M<+\infty,\qquad x\in\mathbb{R} \end{equation} \tag{1.12} $$
for some constants $m$ and $M$.

In the first part of the paper (see §§ 3.1 and 4.1) we study the dependence of the qualitative and asymptotic properties of solutions to (1.10) on the initial value $y(x_0)$. Some results from [42] and [34] are extended or refined, the result of the theorem on p. 17 in [40] is extended, and the result of the corollary on p. 240 in [41] is strengthened.

In the second part of the paper (see §§ 3.2 and 4.2) we consider the set of solutions to (1.10) defined near $+\infty$ and obtain results describing its structure. In particular, we prove that if equation (1.10) with bounded $\alpha_1$ and $\alpha_2$ such that $(\alpha_1+\alpha_2)/2\in C^1 [x_0,+\infty)$ has two solutions which are defined on $[x_0,+\infty)$ and have two different finite limits as $x\to +\infty$, then every other solution defined on $[x_0,+\infty)$ has a finite limit as $x\to +\infty$ which is equal to the limit of the smaller of the two solutions.

In the third part of the paper (see §§ 3.3 and 4.3) the results obtained in the first two parts are applied to study the structure of the set of bounded solutions to the equation in the case when $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$ are $C^1$-functions different on the whole of their domain and tending monotonically to finite limits as $x\to\pm\infty$.

The results obtained extend the ones of [39].

Some results obtained by the authors were presented in [44]–[46].

2. Notation and main definitions

In this paper we suppose for equation (1.7) that

$$ \begin{equation*} P,Q\in C(\mathbb{R}),\quad Q^{2}(x)-4P(x)\geqslant 0,\quad x\in\mathbb{R}. \end{equation*} \notag $$
In this case (1.7) can be written as (1.10) with continuous real roots (1.9).

Put

$$ \begin{equation} \alpha(x):=\frac{\alpha_1(x)+\alpha_2(x)}{2}\,,\qquad x\in\mathbb{R}. \end{equation} \tag{2.1} $$
At each point $x\in\mathbb{R}$ where the function $\alpha=-Q/2$ is differentiable we define the functions $U_0(x)$ and $Y_0(x)$ by
$$ \begin{equation} U_0(x):=Q'(x)-\frac{D(x)}{2}\,, \end{equation} \tag{2.2} $$
where
$$ \begin{equation} D(x):=Q^2(x)-4P(x), \end{equation} \tag{2.3} $$
and (see [39], § 3.4)
$$ \begin{equation} Y_0(x):=\frac{(\alpha_1(x)-\alpha_2(x))^2}{4}+\alpha'(x). \end{equation} \tag{2.4} $$

Note that if $Q(x)=-(\alpha_1(x)+\alpha_2(x))$ is a $C^1$-function for $x\in\Delta\subset\mathbb{R}$, then the functions $U_0(x)$ and $Y_0(x)$ are continuous for $x\in\Delta$.

When we say in this paper that a solution $y(\,\cdot\,)$ is defined on an interval $\Delta\subset\mathbb{R}$, we mean that $y(\,\cdot\,)$ is defined at each point in $\Delta$ (but $\Delta$ need not be the maximal possible domain of definition of the solution $y(\,\cdot\,)$).

As usual, we say that a function $f(\,\cdot\,)$ is monotonically increasing (decreasing) on an interval $\Delta\subset\mathbb{R}$ if for any $x_1, x_2\in\Delta$ such that $x_1< x_2$ the inequality $f(x_1)\leqslant\!(\geqslant)\,\, f(x_2)$ holds. A function $f(\,\cdot\,)$ is said to be strictly monotonically increasing (decreasing) on the interval $\Delta\subset\mathbb{R}$ if for any $x_1, x_2\in\Delta$ such that $x_1<x_2$ the inequality $f(x_1)<\!(>)\,\, f(x_2)$ holds.

Lemma 2.1 (corollary of Lemma 4.1 in [47]). If $x_0<\omega\leqslant+\infty$, $Q\in C^1 [x_0,\omega)$, and there exists a solution to (1.10) defined on $(\delta,\omega)$ for some $\delta<\omega$, then there exist $S_*\in [x_0,\omega)$ and a solution $y_*(\,\cdot\,)$ to this equation which is defined on $(S_*,\omega)$ and such that for any solution $y(\,\cdot\,)$ to (1.10) defined on $(S,\omega)$, where $S\geqslant x_0$, the following inequalities hold:

$$ \begin{equation*} S\geqslant S_*\quad\textit{and}\quad y(x)\leqslant y_*(x),\quad x\in(S,\omega). \end{equation*} \notag $$

The solution $y_*(x)$ in the last lemma is called a principal solution on the interval $(x_0,\omega)$.

Definition 1 [39]. A solution $y(\,\cdot\,)$ to equation (1.10) is called stabilizing if

$$ \begin{equation} \text{the finite limit } \lim_{x\to+\infty}y(x)=y_{+}\in \mathbb{R} \quad \text{exists} \end{equation} \tag{2.5} $$
and
$$ \begin{equation} \text{the finite limit } \lim_{x\to-\infty}y(x)=y_{-}\in \mathbb{R}\quad \text{exists}. \end{equation} \tag{2.6} $$

3. Main results

3.1. Extensibility and asymptotic behaviour of solutions as dependent on the mutual arrangement of their initial values and the roots of the right-hand side of the equation

Now we formulate a theorem extending the basic theorem of differential inequalities [10] for first-order equations and generalizing Theorem 7.3 in [42] (or, which is the same, the first statement of Theorem 5.7 in [34]).

Theorem 3.1.1. Consider the equation

$$ \begin{equation} y'=f(x,y), \end{equation} \tag{3.1} $$
where $f$ is continuous on its domain, which contains $[x_0,+\infty)$. If there exists a differentiable function $\beta\colon[x_0,+\infty)\to\mathbb{R}$ such that for any $x\geqslant x_0$ the inequality
$$ \begin{equation*} \beta' (x)>f(x,\beta(x)) \end{equation*} \notag $$
holds, then any solution $y(\,\cdot\,)$ to equation (3.1) such that $y(x_0)\leqslant\beta(x_0)$ satisfies the condition $y(x)<\beta(x)$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$ is the right-hand end-point of the maximal domain of the solution $y(\,\cdot\,)$.
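A numerical sanity check of Theorem 3.1.1 on assumed data (our choice, not from the paper): take $f(x,y)=-y$ and $\beta(x)=e^{-x/2}$, for which $\beta'(x)=-\beta(x)/2>-\beta(x)=f(x,\beta(x))$; a solution started at $y(x_0)=\beta(x_0)$ should stay strictly below $\beta$:

```python
import math

def f(x, y):
    return -y                     # sample right-hand side of (3.1)

beta = lambda x: math.exp(-x / 2)  # differentiable upper function

h, x, y = 1e-3, 0.0, beta(0.0)     # start exactly at y(x0) = beta(x0)
ok = True
for _ in range(5000):              # RK4 on (0, 5]
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
    ok = ok and (y < beta(x))      # y(x) < beta(x) for x > x0
print(ok)
```

Here the exact solution $y(x)=e^{-x}$ indeed lies below $e^{-x/2}$ for all $x>0$.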

Corollary 1. Under the assumptions of Theorem 3.1.1, if $\beta(x)$ is bounded above for $x\geqslant x_0$, then any solution $y(x)$ such that $y(x_0)\leqslant\beta(x_0)$ is bounded above for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.

Now we take $f(x,y)$ to be

$$ \begin{equation*} f(x,y)=y^2+Q(x)y+P(x) \end{equation*} \notag $$
and set $\beta=(\alpha_1+\alpha_2)/2$. Then we obtain the following result.

Corollary 2. If the function $Q(\,\cdot\,)$ is differentiable on $[x_0,+\infty)$, $Q'(x)<Q^2(x)/2-2P(x)$ for $x\geqslant x_0$, and a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)\leqslant-Q(x_0)/2$, then $y(x)<-Q(x)/2$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.

Remark 1. The second condition in Corollary 2 can be written as

$$ \begin{equation} U_0(x)<0 \end{equation} \tag{3.2} $$
for $x\geqslant x_0$. We have $U_0(x)=-2Y_0(x)$, so that condition (3.2) is just condition B from [39], § 3.3, at the point $x$.
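The identity $U_0=-2Y_0$ can be checked directly from (2.2)–(2.4); the sketch below verifies it numerically for the sample roots $\alpha_1=\sin x-2$ and $\alpha_2=\sin x+2$ (our choice of data), for which $Q=-(\alpha_1+\alpha_2)=-2\sin x$ and $D=(\alpha_1-\alpha_2)^2=16$:

```python
import math

def a1(x): return math.sin(x) - 2.0
def a2(x): return math.sin(x) + 2.0

def U0(x):
    Qp = -2.0 * math.cos(x)            # Q'(x) for Q = -2 sin x
    D = (a1(x) - a2(x)) ** 2           # D = Q^2 - 4P = (a1 - a2)^2
    return Qp - D / 2.0                # definition (2.2)

def Y0(x):
    alpha_prime = math.cos(x)          # alpha = (a1 + a2)/2 = sin x
    return (a1(x) - a2(x)) ** 2 / 4.0 + alpha_prime   # definition (2.4)

for x in (-1.0, 0.0, 2.5):
    assert abs(U0(x) + 2.0 * Y0(x)) < 1e-12
```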

Corollary 3. Suppose that $\alpha_1(x)=\alpha_2(x)$ for $x\geqslant x_0$, the function $\alpha_1$ is differentiable on $[x_0,+\infty)$, and $\alpha'_1(x)>0$ for $x\geqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)\leqslant\alpha_1(x_0)$, then $y(x)<\alpha_1(x)$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.

Theorem 3.1.2. Suppose $x_0\in\mathbb{R}$ and condition (1.12) holds. Then for any solution $y(\,\cdot\,)$ to equation (1.10) defined at $x_0$ the inequality $y(x)\geqslant\min(y(x_0),m)$ holds for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$.

Note that it follows from this theorem and Corollary 3.2 in [38] that in Corollaries 2 and 3 above, the right-hand endpoint $b$ of the maximal domain of the solution equals $+\infty$.

Theorem 3.1.3. Let $M_1, M_2\in\mathbb{R}$ satisfy $\alpha_1(x)\leqslant M_1$ and $\alpha_2(x)\leqslant M_2$ for $x\geqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies the conditions $y(x_0)>M_1$ and $y(x_0)> M_2$, then there exists $x^*\in\mathbb{R}$, $x^*>x_0$, such that $y(\,\cdot\,)$ is strictly increasing on $(x_0,x^*)$ and

$$ \begin{equation*} \lim_{x\to x^*-0}y(x)=+\infty. \end{equation*} \notag $$

Theorem 3.1.3 generalizes the statement of Theorem 7.1 in [42] on the behaviour of solutions to equation (1.10) to the right of the point $x_0$ (or, which is the same, the statement of Theorem 5.5 in [34] on the behaviour of solutions to the right of the point $t_0$) to the case when $\alpha_1\not\equiv\alpha_2$. The condition $\alpha'(t)>0$, $t\leqslant t_0$, must be added to the assumptions of Theorem 5.5 in [34] on the behaviour of solutions to the left of the point $t_0$. Namely, the following theorem holds.

Theorem 3.1.4. If condition (1.12) holds, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$, the function $\alpha$ is differentiable on $(-\infty, x_0)$, and $\alpha'(x)>0$ for $x\leqslant x_0$, then all solutions $y(\,\cdot\,)$ to equation (1.10) such that $y(x_0)>M$ satisfy

$$ \begin{equation*} y(x)\to m_1\geqslant m,\qquad x\to -\infty, \end{equation*} \notag $$
where $m_1\in\mathbb{R}$.

The following example shows that the condition $\alpha'(x)>0$, $x\leqslant x_0$, is essential in Theorem 3.1.4.

Example 1. Consider the solution $y(\,\cdot\,)$ to the equation

$$ \begin{equation*} y'=(y+\arctan x)^2 \end{equation*} \notag $$
with initial value $y(0)=0$. We have
$$ \begin{equation*} m=-\frac{\pi}{2}<-\arctan x<\frac{\pi}{2}=M,\qquad x\in\mathbb{R}, \end{equation*} \notag $$
and
$$ \begin{equation*} y'(0)=0>-1=(-\arctan x)'\big|_{x=0}. \end{equation*} \notag $$
Hence there exists $\delta>0$ such that
$$ \begin{equation*} y(x)>-\arctan x\quad\text{for}\ \ x\in(0,\delta]\quad\text{and}\quad y(x)<-\arctan x \quad\text{for}\ \ x\in[-\delta,0). \end{equation*} \notag $$
By the monotonicity of the function $-\arctan(\,\cdot\,)$, for $x\geqslant\delta$ we have the relations
$$ \begin{equation*} -\arctan x\leqslant -\arctan \delta=:M_1<y(\delta). \end{equation*} \notag $$
So, according to Theorem 3.1.3, there exists $x^*>\delta$ such that $\lim_{x\to x^*} y(x)=+\infty$. Similarly, using Theorem 3.1.3' (see below) we prove the existence of $x_*<-\delta$ such that $\lim_{x\to x_*} y(x)=-\infty$. So, $y(x_0)>M$ for some $x_0>0$ and the solution $y(x)$ is unbounded for $x\leqslant x_0$.
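The blow-up in Example 1 is easy to observe numerically; the sketch below (an illustration, with the stopping threshold and step size chosen arbitrarily) integrates $y'=(y+\arctan x)^2$, $y(0)=0$, by Runge–Kutta until the solution exceeds $10^6$:

```python
import math

def f(x, y):
    return (y + math.atan(x)) ** 2   # right-hand side of Example 1

h, x, y = 1e-4, 0.0, 0.0             # fixed-step RK4 from y(0) = 0
while y < 1e6 and x < 10.0:
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(x)   # approximate location of the blow-up point x*
```

Comparison with $z'=z^2$ (taking $z=y+\pi/4$ for $x\geqslant1$ and $z=y+\pi/2$ for all $x\geqslant0$) localizes the blow-up point in the interval $(1/(\pi/2),\,1+1/(\pi/4))\approx(0.64,\,2.28)$, consistent with the numerical value.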

Theorem 3.1.3'. Suppose that $m_1,m_2\in\mathbb{R}$ satisfy $\alpha_1(x)\geqslant m_1$ and $\alpha_2(x)\geqslant m_2$ for $x\leqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies the conditions $y(x_0)<m_1$ and $y(x_0)<m_2$, then there exists $x_*\in\mathbb{R}$ such that $x_*<x_0$, $y(\,\cdot\,)$ is strictly decreasing on $(x_*,x_0)$, and

$$ \begin{equation*} \lim_{x\to x_*+0}y(x)=-\infty. \end{equation*} \notag $$

Theorem 3.1.3' generalizes the statement of Theorem 7.2 in [42] (or, which is the same, Theorem 5.6 in [34]) on the behaviour of solutions to equation (1.10) to the left of the point $x_0$ to the case when $\alpha_1$ and $\alpha_2$ are different. To the assumptions of Theorem 7.2 in [42] on the behaviour of solutions to the right of the point $x_0$, the condition that the finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ exists should be added. Namely, the following theorem holds.

Theorem 3.1.5. If condition (1.12) holds, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$, the finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ exists, and a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)<m$, then either the graph of this solution intersects the graph of $\alpha(\,\cdot\,)$, or $\alpha(x)-y(x)\to +0$ as $x\to+\infty$.

The following example shows that the condition of the existence of a finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ in Theorem 3.1.5 is essential.

Example 2. Consider the function

$$ \begin{equation*} p(x)=\begin{cases} 16(k^2 (x-a_k))^2 (k^2 (x-a_k)-1)^2, & x\in [a_k,b_k], \\ 0, & x\in (-\infty,+\infty)\setminus \displaystyle\bigcup_{k=1}^{\infty}[a_k,b_k], \end{cases} \end{equation*} \notag $$
where
$$ \begin{equation*} a_k=(k-1)+\sum_{l=1}^{k-1}\frac{1}{l^2}\,,\quad b_k=(k-1)+\sum_{l=1}^{k}\frac{1}{l^2}\,,\qquad k\in\mathbb{N}. \end{equation*} \notag $$
We have $\displaystyle\int_{0}^{\infty}p(x)\,dx<\infty$, but $p(x)$ has no limit as $x\to+\infty$. Consider equation (1.10) for
$$ \begin{equation*} \alpha_1(x)=\alpha_2(x)=\alpha(x)=\int_{0}^{x}p(t)\,dt+ \arctan x+\sqrt{p(x)+\frac{1}{1+x^2}}\,. \end{equation*} \notag $$
Note that $\alpha(x)>-\pi/2=m$, $x\in\mathbb{R}$, the function $\alpha(x)$ has no limit as $x\to+\infty$, and $y_1(x)=\displaystyle\int_{0}^{x}p(t)\,dt+\arctan x$ is a solution to (1.10). We also have
$$ \begin{equation*} y'_1(x)=p(x)+\frac{1}{1+x^2}>0,\quad x\in\mathbb{R},\quad\text{and}\quad \lim_{x\to+\infty} y_1(x)=\int_{0}^{\infty}p(t)\,dt+\frac{\pi}{2}\,. \end{equation*} \notag $$
So the graph of the function $y_1(x)$ does not intersect the graph of $\alpha_1(x)$ and there is no limit of $y_1(x)-\alpha(x)$ as $x\to+\infty$. Let $y(\,\cdot\,)$ be a solution to (1.10) with $y(0)<m<y_1(0)=0$. We have $y(x)<y_1(x)<\alpha(x)$ for all $x\geqslant 0$, hence, according to Weierstrass’s theorem, a finite limit $\lim_{x\to+\infty}y(x)$ exists. Therefore, there is no finite limit of $y(x)-\alpha(x)$ as $x\to +\infty$.
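The two key properties of the bump function $p(\,\cdot\,)$ in Example 2, height $1$ on every bump but summable bump integrals, can be verified by exact rational arithmetic. On $[a_k,b_k]$ (an interval of length $1/k^2$) we have $p=16u^2(u-1)^2$ with $u=k^2(x-a_k)\in[0,1]$:

```python
from fractions import Fraction

def bump_integral():
    """Exact value of int_0^1 16 u^2 (u - 1)^2 du = 16 (1/5 - 1/2 + 1/3)."""
    return 16 * (Fraction(1, 5) - Fraction(1, 2) + Fraction(1, 3))

def bump_height():
    """Maximum of 16 u^2 (u - 1)^2 on [0, 1], attained at u = 1/2."""
    u = Fraction(1, 2)
    return 16 * u ** 2 * (u - 1) ** 2

print(bump_integral(), bump_height())  # 8/15 and 1
```

Hence $\displaystyle\int_0^\infty p(x)\,dx=\frac{8}{15}\sum_{k=1}^\infty\frac{1}{k^2}=\frac{8}{15}\cdot\frac{\pi^2}{6}<\infty$, while $p$ returns to the value $1$ on every bump and so has no limit as $x\to+\infty$.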

The following example shows that the statement of Theorem 5.8 in [34] on the boundedness of solutions to the left of the point $t_0$ need not hold for all continuous and bounded functions $\alpha_1$ and $\alpha_2$.

Example 3. Consider equation (1.10) for

$$ \begin{equation*} \begin{aligned} \, \alpha_1(x)&=\begin{cases} \dfrac{1}{x}, & x\geqslant 1, \vphantom{\biggl\}} \\ \dfrac{2}{x}, & x\leqslant -1, \vphantom{\biggl\}} \\ -\dfrac{3}{2}x^3+\dfrac{1}{4}x^2+3x-\dfrac{3}{4}, & x\in (-1,1); \end{cases} \end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, \alpha_2(x)&=\begin{cases} \dfrac{2}{x}, & x\geqslant 1, \vphantom{\biggl\}} \\ \dfrac{1}{x}, & x\leqslant -1,\vphantom{\biggl\}} \\ -\dfrac{3}{2}x^3-\dfrac{1}{4}x^2+3x+\dfrac{3}{4}, & x\in (-1,1). \end{cases} \end{aligned} \end{equation*} \notag $$

The functions $\alpha_{1,2}(\,\cdot\,)$ are bounded and therefore satisfy condition (1.12) for some constants $m,M\in\mathbb{R}$. By [31], pp. 244–245, no solution to equation (1.10) is defined on $[1,+\infty)$. Similarly, using the substitution $u(x)=-y(-x)$ we can prove that no solution is defined on $(-\infty,-1]$. Thus, for any solution $y(\,\cdot\,)$ there exist $x^*$, $x_*\in\mathbb{R}$ such that $x^*>x_*$, $\lim_{x\to x^*} y(x)=+\infty$, and $\lim_{x\to x_*} y(x)=-\infty$. Hence $y(x_0)>M$ for some $x_0\in(x_*,x^*)$, and the solution $y(x)$ is unbounded for $x\leqslant x_0$.

Theorem 3.1.6. 1. For functions $\alpha_1$ and $\alpha_2$ and a solution $y(\,\cdot\,)$ to equation (1.10), suppose that condition (1.12) holds, the function $\alpha_1$ increases monotonically on $[x_0,+\infty)$, $\alpha_1(x)<\alpha_2(x)$ for $x\geqslant x_0$, and $y_0=y(x_0)<\alpha_1(x_0)$. Then

2. Suppose that $\alpha_2$ decreases monotonically on $[x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) such that $y_0=y(x_0)>\alpha_2(x_0)$. Then there exists $x^*\in\mathbb{R}$, $x^*>x_0$, such that

3. For the functions $\alpha_1$ and $\alpha_2$ and a solution $y(\,\cdot\,)$ to equation (1.10) suppose that condition (1.12) holds, the function $\alpha_1$ decreases monotonically on $[x_0,+\infty)$, the function $\alpha_2$ increases monotonically on $[x_0,+\infty)$, and $\alpha_1(x_0)<y_0=y(x_0)<\alpha_2(x_0)$. Then

Note that Theorem 3.1.6 extends the theorem on p. 17 in [40].

Now we give an example to show that the assumption ‘$\alpha_1$ increases monotonically on $[x_0,+\infty)$’ is essential in part 1 of Theorem 3.1.6.

Example 4. Consider the equation

$$ \begin{equation*} y'=y(y-f_\varepsilon^\delta(x)),\qquad x\geqslant 0, \end{equation*} \notag $$
where
$$ \begin{equation*} f_\varepsilon^\delta(x)=\begin{cases} 16\varepsilon\biggl(\dfrac{x}{\delta}\biggr)^2 \biggl(\dfrac{x}{\delta}-1\biggr)^2-1, & x\in[0,\delta], \\ -1, & x\geqslant \delta, \end{cases} \end{equation*} \notag $$
$\varepsilon\in(0,1)$, and $\delta\in(0,+\infty)$.

For any $\varepsilon\in(0,1)$ and $\delta\in(0,+\infty)$ there exist $\xi_\varepsilon^\delta\in (\delta/2,\delta)$ and a solution $y_\varepsilon^\delta$ to the last equation defined on $[0,+\infty)$ such that

$$ \begin{equation*} y_\varepsilon^\delta(x)<f_\varepsilon^\delta(x)\quad\text{for } x\in [0,\xi_\varepsilon^\delta);\qquad y_\varepsilon^\delta(x)>f_\varepsilon^\delta(x)\quad\text{for } x\in (\xi_\varepsilon^\delta,+\infty). \end{equation*} \notag $$

Theorem 3.1.7. Let $Q\in C^1 [x_0,+\infty)$.

I. Suppose that $U_0(x)\geqslant 0$ on $[x_0,+\infty)$, $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx=\infty$, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0$. Then $y(\,\cdot\,)$ cannot be extended onto $[x_0,+\infty)$, and if condition (1.12) holds, then there exists $x^*>x_0$ such that $\lim_{x\to x^*-0}y(x)=+\infty$.

II. Suppose that $U_0(x)\geqslant 0$ on $[x_0,+\infty)$, $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$, condition (1.12) holds, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then

$$ \begin{equation} \begin{gathered} \, \nonumber \min(y_0,m)\leqslant y(x)\leqslant\alpha(x),\qquad x\geqslant x_0, \\ y(x)-\alpha(x)\to 0\quad\textit{as}\ \ x\to+\infty \end{gathered} \end{equation} \tag{3.3} $$
and, moreover,
$$ \begin{equation} \int_{x_0}^{\infty}(y(x)-\alpha(x))^2\,dx<\infty. \end{equation} \tag{3.4} $$

III. Suppose that $U_0(x)< 0$ on $[x_0,+\infty)$, condition (1.12) holds, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0$. Then the following statements hold:

Note that part I of Theorem 3.1.7 extends the statement of Corollary 1 on p. 240 in [41].

Now we give an example showing that the condition $U_0(x)\geqslant 0$, $x\geqslant x_0$, is essential in part I of Theorem 3.1.7.

Example 5. Consider the equation

$$ \begin{equation*} y'=y(y-1) \end{equation*} \notag $$
on $[0,+\infty)$. For this equation we have $U_0(x)=-1/2<0$ for $x\geqslant 0$, and the integral $\displaystyle\int_{0}^{\infty}U_0(x)\,dx$ diverges. Further, if a solution $y(\,\cdot\,)$ satisfies the inequality $y(0)\leqslant 1$, then $y(\,\cdot\,)$ is extensible onto $[0,+\infty)$.
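For Example 5 the extensibility claim can be confirmed with an explicit solution: with $y(0)=1/2$, separation of variables (a standard computation) gives $y(x)=1/(1+e^x)$, which is defined and bounded on all of $[0,+\infty)$ and tends to $0$. The sketch below verifies this numerically:

```python
import math

def y(x):
    """Explicit solution of y' = y(y - 1) with y(0) = 1/2."""
    return 1.0 / (1.0 + math.exp(x))

# check the ODE by a central difference at a few points
for x in (0.0, 1.0, 5.0):
    h = 1e-5
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv - y(x) * (y(x) - 1.0)) < 1e-8

print(y(50.0))   # bounded and decaying at +infinity
```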

Remark 3. Suppose that condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$. Then for $x\geqslant x_0$ the condition $U_0(x)\,\mathop{\geqslant\!(<)}\,0$ is equivalent to $\alpha'(x)\,\mathop{\leqslant\!(>)}\,0$. If $U_0(x)\geqslant 0$ for all $x\geqslant x_0$, then the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx=\infty$ is equivalent to $\lim_{x\to+\infty}\alpha(x)=-\infty$, while the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$ is equivalent to the existence of a finite limit $\lim_{x\to+\infty}\alpha(x)$. So, since $\alpha$ is bounded, if $U_0(x)\geqslant 0$ for $x\geqslant x_0$, then the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$ holds, and the situation described in part I of Theorem 3.1.7 cannot occur.

Theorem 3.1.8. If $Q\in C^1 [x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$, then the convergence in (3.3) holds if and only if

$$ \begin{equation*} \sup_{0<v<\infty}(1+v)^{-1}\biggl|\int_{x}^{x+v}\!\!U_0(s)\,ds\biggr| \to 0\quad\textit{as}\ \ x\to+\infty. \end{equation*} \notag $$

Corollary 4. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$. If condition (3.3) holds for $y(\,\cdot\,)$, then this condition also holds for all solutions defined in a neighbourhood of $+\infty$.

Corollary 5. If $Q\in C^1 [x_0,+\infty)$, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$, then the convergence in (3.3) holds if and only if

$$ \begin{equation*} \sup_{0<v<\infty}\frac{|\alpha(x+v)-\alpha(x)|}{1+v}\to 0\quad\textit{as}\ \ x\to+\infty. \end{equation*} \notag $$

Theorem 3.1.9. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then the following statements are equivalent:

Now we give an example showing that condition (3.3) in the last theorem is essential.

Example 6. Consider the function

$$ \begin{equation*} r(x)=\begin{cases} 16(k^2 (x-a_k))^2 (k^2 (x-a_k)-1)^2, & x\in [a_k,b_k], \\ 0, & x\in (-\infty,+\infty)\setminus \displaystyle\bigcup_{k=1}^{\infty}[a_k,b_k], \end{cases} \end{equation*} \notag $$
where
$$ \begin{equation*} a_k=(k-1)+\sum_{l=1}^{k-1}\frac{1}{l^2}\quad\text{and}\quad b_k=(k-1)+\sum_{l=1}^{k}\frac{1}{l^2}\,,\qquad k\in\mathbb{N}. \end{equation*} \notag $$
Set
$$ \begin{equation*} \alpha_1(x) =\int_{0}^{x}\bigl(r'(t)+r^2(t)-1\bigr)\,dt-1 \end{equation*} \notag $$
and
$$ \begin{equation*} \alpha_2(x) =\int_{0}^{x} \bigl(r'(t)+r^2(t)-1\bigr)\,dt+1. \end{equation*} \notag $$

In this case $y(x)=\displaystyle\int_{0}^{x}(r'(t)+r^2(t)-1)\,dt-r(x)$ is a solution to equation (1.10) defined on $(-\infty,+\infty)$. We have

$$ \begin{equation*} \int_{0}^{\infty}(y(x)-\alpha(x))^2\,dx=\int_{0}^{\infty}r^2(x)\,dx<\infty, \end{equation*} \notag $$
and
$$ \begin{equation*} \alpha(x)-y(x)=r(x) \not\to 0\quad\text{as}\ \ x\to+\infty. \end{equation*} \notag $$
Then, according to Theorem 3.1.9, the integral $\displaystyle\int_{0}^{\infty}U_0(x)\,dx$ diverges.
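The two properties of $r$ used here, namely $r(x)\not\to 0$ while $\displaystyle\int_0^\infty r^2(x)\,dx<\infty$, are easy to confirm numerically. The following sketch (plain Python, not part of the survey) evaluates one bump of $r$ in the normalized variable $u=k^2(x-a_k)\in[0,1]$:

```python
# Numerical check of Example 6: each bump of r reaches height 1 (so r does not
# tend to 0), while the bump widths 1/k^2 make the L^2 norm of r finite.

def bump_l2(k, n=10_000):
    # On [a_k, b_k] we have r(x) = 16 u^2 (u - 1)^2 with u = k^2 (x - a_k),
    # so the integral of r^2 over the bump is (1/k^2) * int_0^1 (16 u^2 (u-1)^2)^2 du.
    h = 1.0 / n
    return sum((16 * ((i + 0.5) * h)**2 * ((i + 0.5) * h - 1)**2) ** 2 * h
               for i in range(n)) / k**2

peak = max(16 * u**2 * (u - 1)**2 for u in (i / 1000 for i in range(1001)))
partial_l2 = sum(bump_l2(k) for k in range(1, 200))
print(peak)              # 1.0: every bump reaches height 1 (at u = 1/2)
print(partial_l2 < 1.0)  # True: partial sums of the L^2 norm stay bounded
```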

The following theorem gives a criterion for relation (3.4) to hold for some solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$, provided that $Q\in C^1 [x_0,+\infty)$.

Theorem 3.1.10. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then the following statements are equivalent:

If Statement (1) holds, then
$$ \begin{equation*} \lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T}dt\, \biggl(\int_{x_0}^{t}U_0(s)\,ds\biggr)=2\biggl[(\alpha(x_0)-y(x_0))- \int_{x_0}^{\infty}(y(x)-\alpha(x))^2\,dx\biggr] \end{equation*} \notag $$
and
$$ \begin{equation*} \frac{1}{T-x_0}\int_{x_0}^{T}\biggl|c- \frac{1}{2}\int_{x_0}^{t}U_0(s)\,ds\biggr|^2 dt\to 0 \quad\textit{as}\ \ T\to +\infty, \end{equation*} \notag $$
where
$$ \begin{equation*} c=(\alpha(x_0)-y(x_0))-\int_{x_0}^{\infty}(y(x)-\alpha(x))^2\,dx. \end{equation*} \notag $$

Corollary 6. Suppose that $Q\in C^1 [x_0,+\infty)$. If the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ diverges, but a finite limit

$$ \begin{equation*} \lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T}dt\, \biggl(\int_{x_0}^{t}U_0(s)\,ds\biggr) \end{equation*} \notag $$
exists, then for any solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$ the function $y(x)-\alpha(x)$ has no limit as $x\to+\infty$.

Corollary 7. Suppose that condition (1.12) holds, $x_0\in\mathbb{R}$, $Q\in C^1 [x_0,+\infty)$, and $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$.

1. If there exists no $a\in\mathbb{R}$ such that

$$ \begin{equation*} \lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T}|\alpha(t)-a|^2\,dt=0, \end{equation*} \notag $$
then equation (1.10) has no solution defined on $[x_0,+\infty)$.

2. If there exists $a\in\mathbb{R}$ such that

$$ \begin{equation*} \lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T}|\alpha(t)-a|^2\,dt=0, \end{equation*} \notag $$
and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$, then
$$ \begin{equation*} \lim_{x\to+\infty}y(x)=a \end{equation*} \notag $$
and relation (3.4) holds true.

Theorem 3.1.11. Suppose that $Q\in C^1 [x_0,+\infty)$ and equation (1.10) has solutions defined in a neighbourhood of $+\infty$. Then at most one solution $y(\,\cdot\,)$ defined on $[x_1,+\infty)$ satisfies the condition

$$ \begin{equation*} \textit{the integral}\ \ \int_{x_1}^{\infty}(y(x)-\alpha(x))\,dx\ \ \textit{converges} \end{equation*} \notag $$
for some $x_1\geqslant x_0$. In particular, at most one solution $y(\,\cdot\,)$ defined on $[x_1,+\infty)$, for some $x_1\geqslant x_0$, satisfies the condition
$$ \begin{equation*} \int_{x_1}^{\infty}|y(x)-\alpha(x)|\,dx<\infty. \end{equation*} \notag $$

Now we give an example showing that the first phrase ‘at most one’ in Theorem 3.1.11 cannot be replaced by ‘exactly one’ or ‘no’. This example shows also that the condition of the divergence of the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ in part I of Theorem 3.1.7 and the condition $U_0(x)\geqslant 0$, $x\geqslant x_0$, in part II of the same theorem are essential.

Example 7. Consider the equation

$$ \begin{equation*} y'=y\biggl(y-\frac{k}{x}\biggr),\qquad x\geqslant x_0:=1, \end{equation*} \notag $$
for $k>1$.

We have $\alpha(x)=\dfrac{k}{2x}$ , the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, the equation has solutions defined on $[x_0,+\infty)$, the function $y_*(x)=\dfrac{k-1}{x}$ is a principal solution on $(x_0,+\infty)$, and the following statements hold:

(1) if $1<k<2$, then $U_0(x)>0$ for $x\geqslant x_0$, $y_*(x)<k/(2x)=\alpha(x)$ for $x\geqslant x_0$, and, for any solution $y$ defined on $[x_1,+\infty)$ for $x_1\geqslant x_0$, the integral $\displaystyle\int_{x_1}^{\infty}(y(x)-\alpha(x))\,dx$ diverges;

(2) if $k=2$, then $U_0(x)=0$ for $x\geqslant x_0$, $y_*(x)=k/(2x)=\alpha(x)$ for $x\geqslant x_0$, and

$$ \begin{equation*} \int_{x_0}^{\infty}(y_*(x)-\alpha(x))\,dx=0<+\infty; \end{equation*} \notag $$

(3) if $k>2$, then $U_0(x)<0$ for $x\geqslant x_0$, $y_*(x)>k/(2x)=\alpha(x)$ for $x\geqslant x_0$, while for any solution $y$ defined on $[x_1,+\infty)$ for $x_1\geqslant x_0$ the integral $\displaystyle\int_{x_1}^{\infty}(y(x)-\alpha(x))\,dx$ diverges.
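These assertions can be cross-checked symbolically. The sketch below (assuming SymPy; not part of the survey) verifies that $y_*=(k-1)/x$ solves the equation and computes $U_0=Q'-D/2$ with $Q=-k/x$ and $P=0$:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)

# Example 7: y' = y (y - k/x); candidate principal solution y_* = (k-1)/x
y_star = (k - 1) / x
residual = sp.simplify(sp.diff(y_star, x) - y_star * (y_star - k / x))
print(residual)       # 0, so y_* is indeed a solution

# U_0 = Q' - D/2 with Q = -k/x and P = 0, hence D = Q^2 - 4P = k^2/x^2
Q = -k / x
U0 = sp.simplify(sp.diff(Q, x) - Q**2 / 2)
# U0 equals k(2-k)/(2 x^2): positive for k < 2, zero at k = 2, negative for k > 2,
# in agreement with statements (1)-(3)
print(sp.factor(U0))
```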

Theorem 3.1.12. Suppose that (1.12) holds, $Q\in C^1 [x_0,+\infty)$, $\alpha_1(x)<\alpha_2(x)$ for $x\geqslant x_0$, the function $\alpha_1(x)$ increases monotonically on $[x_0,+\infty)$, and $U_0(x)\geqslant 0$ for $x\geqslant x_0$. Then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, and for any solution $y(\,\cdot\,)$ to equation (1.10) defined on $[x_0,+\infty)$ relation (3.4) holds true.

Theorem 3.1.13. Suppose that (1.12) holds, $Q \in C^1 [x_0,+\infty)$, $\alpha_1(x_0)<\alpha_2(x_0)$, the function $\alpha_1(x)$ decreases monotonically on $[x_0,+\infty)$, the function $\alpha_2(x)$ increases monotonically on $[x_0,+\infty)$, and $U_0(x)\geqslant 0$ for $x\geqslant x_0$. Then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, and any solution $y(\,\cdot\,)$ to equation (1.10) defined on $[x_0,+\infty)$ satisfies (3.4).

3.2. On the structure of the set of solutions defined in a neighbourhood of $+\infty$

Theorem 3.2.1. Suppose that $Q\in C^1 [x_0,+\infty)$. If solutions $y_3<y_2<y_1$ to equation (1.10) are defined at the point $x_0$, and the solution $y_1$ is defined on $[x_0,+\infty)$, then the solutions $y_3$ and $y_2$ are extensible onto the same interval; moreover, the function $\dfrac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)}\geqslant 1$ decreases and has a finite limit as $x\to+\infty$, which equals 1 if $y_1$ is a principal solution on $(x_0,+\infty)$.

Theorem 3.2.2. Suppose that $Q\in C^1 [x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to+\infty$, then $y_1$ is a principal solution on $(x_0,+\infty)$ and each solution $y$ on $[x_0,+\infty)$ other than $y_1$ has the same limit at infinity as $y_2$ has.

Now we give an example showing that the condition of different limits is essential in Theorem 3.2.2.

Example 8. Put

$$ \begin{equation*} f(x)=16(x-n)^2 (x-n-1)^2+1,\qquad x\in [n,n+1],\quad n\in\mathbb{N}_0. \end{equation*} \notag $$
Consider the equation
$$ \begin{equation} y'=y(y-f(x)). \end{equation} \tag{3.5} $$
If a solution $y(\,\cdot\,)$ to equation (3.5) satisfies the condition $y(0)\leqslant 0$, then
$$ \begin{equation*} \lim_{x\to+\infty}y(x)=0. \end{equation*} \notag $$
However, equation (3.5) has a non-constant periodic solution.
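The claimed convergence to $0$ for solutions with $y(0)\leqslant 0$ is easy to observe numerically. Below is a rough sketch (classical fixed-step RK4 in plain Python; the step size and horizon are ad hoc choices, not from the survey):

```python
import math

def f(x):
    # f from Example 8: 16 u^2 (u - 1)^2 + 1 with u the fractional part of x
    u = x - math.floor(x)
    return 16 * u**2 * (u - 1)**2 + 1

def rhs(x, y):
    return y * (y - f(x))          # equation (3.5)

# classical RK4 from x = 0 to x = 10 with initial value y(0) = -0.5 <= 0
x, y, h = 0.0, -0.5, 0.01
for _ in range(1000):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, y + h * k1 / 2)
    k3 = rhs(x + h / 2, y + h * k2 / 2)
    k4 = rhs(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(-1e-3 < y < 0)               # True: the solution has crept up towards 0
```

Since $f\geqslant 1$, such a solution satisfies $|y|'\leqslant -|y|$ while it stays negative, so the decay towards $0$ is at least exponential.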

Theorem 3.2.3. Suppose that $Q\in C^1 [x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) defined on $[x_0,+\infty)$ have finite (maybe, equal) limits as $x\to+\infty$, then any solution $y(\,\cdot\,)$ satisfying $y(x_0)<y_1(x_0)$ is extensible onto $[x_0,+\infty)$ and has the same limit at infinity as $y_2$ has.

Remark 4. Note that in the case $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ the statements of Theorems 3.2.2 and 3.2.3 follow from Corollary 7.

3.3. Asymptotic behaviour at $\pm\infty$ of solutions to an equation the roots of whose right-hand side tend monotonically to finite limits

In this subsection we suppose that

$$ \begin{equation*} P,Q\in C^1(\mathbb{R})\quad\text{and}\quad \alpha_1 (x)<\alpha_2 (x),\quad x\in\mathbb{R}, \end{equation*} \notag $$
that
$$ \begin{equation} \text{ the limits } \lim_{x\to\pm\infty}\alpha_j(x)=:\alpha_j^{\pm}\in\mathbb{R},\qquad j=1,2,\ \text{ exist and are finite}, \end{equation} \tag{3.6} $$
and that
$$ \begin{equation} \begin{gathered} \, \text{there exists } A>0 \text{ such that for any } x\notin[-A,A] \\ \text{ the relations } \alpha'_1(x)\ne 0\quad\text{and}\quad \alpha'_2(x)\ne 0 \text{ hold true.} \end{gathered} \end{equation} \tag{3.7} $$

As shown in [39], in this case all bounded solutions are stabilizing, and all stabilizing solutions have a non-vanishing derivative near $\infty$ and can be divided into four types (see the discussion after Theorem B in § 1).

Theorem 3.3.1. If $\alpha_1^+\ne \alpha_2^+$, then equation (1.10) has a solution $y_{\rm I}$ such that

$$ \begin{equation*} \lim_{x\to+\infty}y_{\rm I}(x)=\alpha_1^+ \end{equation*} \notag $$
and there exists at most one solution $y$ such that
$$ \begin{equation*} \lim_{x\to+\infty}y(x)=\alpha_2^+. \end{equation*} \notag $$

Theorem 3.3.1'. If $\alpha_1^-\ne \alpha_2^-$, then equation (1.10) has a solution $y_{\rm II}$ such that

$$ \begin{equation*} \lim_{x\to-\infty}y_{\rm II}(x)=\alpha_2^- \end{equation*} \notag $$
and there exists at most one solution $y$ such that
$$ \begin{equation*} \lim_{x\to-\infty}y(x)=\alpha_1^-. \end{equation*} \notag $$

Theorem 3.3.2. Suppose equation (1.10) has a stabilizing solution of Type II. Then the following statements hold.

1. There exists a stabilizing solution $y_{\rm III}$ of Type III. If $\alpha_1^+\ne \alpha_2^+$, then such a solution is unique. If $\alpha_1^+\ne \alpha_2^+$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0\in\mathbb{R}$, then

  •
    $$ \begin{equation*} \lim_{x\to+\infty}y(x)=\alpha_1^+ \end{equation*} \notag $$
    if $y(x_0)<y_{\rm III}(x_0)$;
  •
    $$ \begin{equation*} \lim_{x\to x^*-0}y(x)=+\infty \end{equation*} \notag $$
    for some $x^*>x_0$ if $y(x_0)>y_{\rm III}(x_0)$.

2. There exists a stabilizing solution $y_{\rm I}$ of Type I. If $\alpha_1^-\ne \alpha_2^-$, then such a solution is unique. If $\alpha_1^-\ne \alpha_2^-$ and $y(\,\cdot\,)$ is a solution to the equation defined at the point $x_0\in\mathbb{R}$, then

  •
    $$ \begin{equation*} \lim_{x\to-\infty}y(x)=\alpha_2^- \end{equation*} \notag $$
    if $y(x_0)>y_{\rm I}(x_0)$;
  •
    $$ \begin{equation*} \lim_{x\to x_*+0}y(x)=-\infty \end{equation*} \notag $$
    for some $x_*<x_0$ if $y(x_0)<y_{\rm I}(x_0)$.

Note that Theorem 3.3.2 improves Theorem 2.1 in [39] and has an important corollary, namely, the following theorem.

Theorem 3.3.3 (see Fig. 1). If $\alpha_1^+\ne \alpha_2^+$, $\alpha_1^-\ne \alpha_2^-$, and equation (1.10) has a stabilizing solution of Type II, then it has unique solutions $y_{\rm I}$ and $y_{\rm III}$ of Types I and III, respectively. For any solution $y(\,\cdot\,)$ we have:

(1) if $y_{\rm I}<y<y_{\rm III}$, then $y(\,\cdot\,)$ is a stabilizing solution of Type II;

(2) if $y>y_{\rm III}$, then there exists $x^*\in\mathbb{R}$ such that $y(\,\cdot\,)$ is extensible onto the interval $(-\infty,x^*)$,

$$ \begin{equation*} y_-=\alpha_2^-,\quad\textit{and}\quad \lim_{x\to x^*-0}y(x)=+\infty; \end{equation*} \notag $$

(3) if $y<y_{\rm I}$, then there exists $x_*\in\mathbb{R}$ such that $y(\,\cdot\,)$ is extensible onto the interval $(x_*,+\infty)$,

$$ \begin{equation*} y_+=\alpha_1^+,\quad\textit{and}\quad \lim_{x\to x_*+0}y(x)=-\infty. \end{equation*} \notag $$

Theorem 3.3.4. If $\alpha_1^+\ne \alpha_2^+$ and equation (1.10) has a Type I solution, then it also has a Type II solution.

Note that Theorem 3.3.4 extends Theorem 2.2 in [39].

Theorem 3.3.4'. If $\alpha_1^-\ne \alpha_2^-$ and equation (1.10) has a Type III solution, then it also has a Type II solution.

Theorem 3.3.5. If equation (1.10) has stabilizing solutions of Type I and III, then it also has a stabilizing solution of Type II.

The last theorem improves Theorem 2.3 in [39].

Theorem 3.3.6. Under the conditions $\alpha_1^+\ne \alpha_2^+$ and $\alpha_1^-\ne \alpha_2^-$ equation (1.10) has a stabilizing solution of Type II if and only if it has stabilizing solutions of Types I and III.

Theorem 3.3.7. Under the conditions $\alpha_1^+\ne \alpha_2^+$ and $\alpha_1^-\ne \alpha_2^-$ equation (1.10) satisfies exactly one of the following assertions:

The following example shows that the set of equations with property (a) is not empty.

Example 9. Suppose $c_1,c_2\in\mathbb{R}$ and $c_2>c_1$. If $0<\varepsilon<\dfrac{\sqrt{2e}}{8}(c_2-c_1)^2$, then the equation

$$ \begin{equation} (y-\varepsilon e^{-x^2})'=(y-\varepsilon e^{-x^2}-c_1) (y-\varepsilon e^{-x^2}-c_2) \end{equation} \tag{3.8} $$
is just equation (1.10) in which the functions
$$ \begin{equation*} \alpha_{1,2}=\frac{c_1+c_2}{2}+\varepsilon e^{-x^2}\mp \frac{\sqrt{(c_2-c_1)^2+8\varepsilon x e^{-x^2}}}{2}\, \end{equation*} \notag $$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6) and (3.7). In this case equation (3.8) has a stabilizing solution of Type II.
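That $\alpha_{1,2}$ are precisely the roots of the right-hand side of (3.8) rewritten in the form (1.10) can be verified symbolically. The following sketch assumes SymPy and is not part of the survey:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
c1, c2, eps = sp.symbols('c1 c2 epsilon', positive=True)
A = eps * sp.exp(-x**2)

# Move (epsilon e^{-x^2})' to the right-hand side: (3.8) becomes y' = F(x, y)
F = (y - A - c1) * (y - A - c2) + sp.diff(A, x)

S = sp.sqrt((c2 - c1)**2 + 8 * eps * x * sp.exp(-x**2))
alpha1 = (c1 + c2) / 2 + A - S / 2
alpha2 = (c1 + c2) / 2 + A + S / 2

# alpha_{1,2} annihilate the quadratic F in y, i.e. they are its two roots
print(sp.simplify(F.subs(y, alpha1)), sp.simplify(F.subs(y, alpha2)))  # 0 0
```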

The following example presents classes of equations with properties (b) and (c).

Example 10. Suppose $k_0\in(0,16)$ and $n_0\in\mathbb{N}$. Put

$$ \begin{equation*} \begin{aligned} \, f_{k_0}^{n_0}(x)&=\begin{cases} -1, & x\leqslant 0, \\ k_0x^2(x-1)^2-1, & x\in [0,1], \\ -(16(x-1)^2(x-2)^2)^{n_0}-1, & x\in \biggl[1,\dfrac{3}{2}\biggr], \vphantom{\Biggl\}} \\ -2, & x\geqslant \dfrac{3}{2}\,, \end{cases} \\ g_{k_0}^{n_0}(x)&=\begin{cases} 0, & x\leqslant \dfrac{3}{2}, \vphantom{\biggl\}} \\ (16(x-1)^2(x-2)^2)^{n_0}-1, & x\in \biggl[\dfrac{3}{2}\,,2\biggr], \vphantom{\biggl\}} \\ -k_0(x-2)^2(x-3)^2-1, & x\in [2,3], \\ -1, & x\geqslant 3, \end{cases} \end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} h_{k_0}^{n_0}(x)=\frac{f_{k_0}^{n_0}(x)+g_{k_0}^{n_0}(x)}{2}\,. \end{equation*} \notag $$
We have proved the following:

(1) there exist $k_0\in(0,16)$ and $n_0\in\mathbb{N}$ such that if $0<\varepsilon<\dfrac{\sqrt{2e}}{8}\biggl(1-\dfrac{k_0}{16}\biggr)^2$, then the functions

$$ \begin{equation*} \alpha_{1,2}(x)=h_{k_0}^{n_0}(x)+\varepsilon e^{-x^2}\mp \frac{\sqrt{(g_{k_0}^{n_0}(x)-f_{k_0}^{n_0}(x))^2+8\varepsilon x e^{-x^2}}}{2} \end{equation*} \notag $$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6) and (3.7), while equation (1.10) with suitable $\alpha_{1,2}$ realizes case (b);

(2) there exist $k_0\in(0,16)$ and $n_0\in\mathbb{N}$ such that if $0<\varepsilon<\dfrac{\sqrt{2e}}{8}\biggl(1-\dfrac{k_0}{16}\biggr)^2$, then the functions

$$ \begin{equation*} \alpha_{1,2}(x)=h_{k_0}^{n_0}(x)+\varepsilon e^{-x^2}\mp \frac{\sqrt{(g_{k_0}^{n_0}(x)-f_{k_0}^{n_0}(x))^2+8\varepsilon x e^{-x^2}}}{2} \end{equation*} \notag $$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6) and (3.7), while equation (1.10) with suitable $\alpha_{1,2}$ realizes case (c).
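Continuity of the glued pieces can be spot-checked numerically. The sketch below (plain Python, not from the survey) uses the hypothetical sample values $k_0=8$, $n_0=2$; it reads the middle interval of $f_{k_0}^{n_0}$ as $[1,3/2]$ and the $[2,3]$ piece of $g_{k_0}^{n_0}$ as $-k_0(x-2)^2(x-3)^2-1$, which is what continuity of the glued functions forces:

```python
def f(x, k0=8, n0=2):
    # f_{k0}^{n0} with the middle interval read as [1, 3/2]
    if x <= 0:
        return -1.0
    if x <= 1:
        return k0 * x**2 * (x - 1)**2 - 1
    if x <= 1.5:
        return -(16 * (x - 1)**2 * (x - 2)**2) ** n0 - 1
    return -2.0

def g(x, k0=8, n0=2):
    # g_{k0}^{n0} with the [2, 3] piece read as -k0 (x-2)^2 (x-3)^2 - 1
    if x <= 1.5:
        return 0.0
    if x <= 2:
        return (16 * (x - 1)**2 * (x - 2)**2) ** n0 - 1
    if x <= 3:
        return -k0 * (x - 2)**2 * (x - 3)**2 - 1
    return -1.0

eps = 1e-9
print(all(abs(f(b - eps) - f(b + eps)) < 1e-6 for b in (0, 1, 1.5)))  # True
print(all(abs(g(b - eps) - g(b + eps)) < 1e-6 for b in (1.5, 2, 3)))  # True
```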

Remark 5. Consider the case of equation (1.10) for

$$ \begin{equation*} \alpha_1(x)=\alpha_2(x)=\alpha(x), \qquad x\in(-\infty,+\infty). \end{equation*} \notag $$
Then conditions (3.6) and (3.7) mean that the limits $\alpha_{\pm}:=\lim_{x\to\pm\infty}\alpha(x)$ exist and are finite and that $\alpha'(x)\ne 0$ for $|x|>A$. Now, if $y(x)$ is a stabilizing solution to equation (1.10), then $y_+=\alpha_+$ and $y_-=\alpha_-$. Each bounded solution to this equation is also stabilizing (and vice versa), and each stabilizing solution has a non-vanishing derivative in a neighbourhood of $\infty$. So, in the case when $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$ all four types of bounded solutions to equation (1.10) coincide, that is, we have a trivial classification of bounded solutions.

4. Proofs of main results

4.1. Extensibility and asymptotic behaviour of solutions as dependent on the mutual arrangement of their initial values and the roots of the right-hand side of the equation

Proof of Theorem 3.1.1. Case 1: $y(x_0)<\beta(x_0)$. Suppose there exists $x_1>x_0$ such that $y(x_1)\geqslant\beta(x_1)$. Then there exists $c\in (x_0,x_1]$ such that $y(c)=\beta(c)$. We can assume $c$ to be the leftmost of such points. Thus, $y(x)<\beta(x)$ whenever $x\in[x_0,c)$. Therefore, $y'(c)\geqslant\beta'(c)$. On the other hand the assumptions of the theorem yield
$$ \begin{equation*} \beta'(c)>f(c,\beta(c))=f(c,y(c))=y'(c). \end{equation*} \notag $$
This contradiction proves the theorem in this case.

Case 2: $y(x_0)=\beta(x_0)$. We have $y'(x_0)<\beta'(x_0)$. Hence in a right half-neighbourhood of the point $x_0$ we have the inequality $y(x)<\beta(x)$. Now the proof reduces to the previous case.

The theorem is proved.

Proof of Corollary 2. Put
$$ \begin{equation*} f(x,y):=y^2+Q(x)y+P(x)\quad\text{and}\quad \beta:=\frac{\alpha_1+\alpha_2}{2}\,. \end{equation*} \notag $$
We have
$$ \begin{equation*} f(x,y)=y^2+Q(x)y+P(x)=(y-\alpha_1)(y-\alpha_2). \end{equation*} \notag $$
According to Vieta’s theorem, $\beta(x)=-Q(x)/2$. Therefore,
$$ \begin{equation*} \beta'(x)=-\frac{Q'(x)}{2}\,. \end{equation*} \notag $$
On the other hand
$$ \begin{equation*} f(x,\beta(x))=(\beta(x)-\alpha_1(x))(\beta(x)-\alpha_2(x))= -\frac{(\alpha_1(x)-\alpha_2(x))^2}{4}\,. \end{equation*} \notag $$
So the inequality $\beta'(x)>f(x,\beta(x))$ holds if and only if
$$ \begin{equation*} -\frac{Q'(x)}{2}>-\frac{(\alpha_1(x)-\alpha_2(x))^2}{4}\,, \end{equation*} \notag $$
which is equivalent to
$$ \begin{equation*} Q'(x)<\frac{(\alpha_1(x)-\alpha_2(x))^2}{2}=\frac{D(x)}{2}\,. \end{equation*} \notag $$
Thus we obtain that the property
$$ \begin{equation*} Q'(x)<\frac{Q^2(x)}{2}-2P(x)\quad\text{ for all } x\geqslant x_0 \end{equation*} \notag $$
holds if and only if
$$ \begin{equation*} \beta' (x)>f(x,\beta(x))\quad\text{ for all } x\geqslant x_0. \end{equation*} \notag $$
We also have
$$ \begin{equation*} y(x_0)\leqslant-\frac{Q(x_0)}{2}=\beta(x_0). \end{equation*} \notag $$
Thus, all conditions of Theorem 3.1.1 are satisfied. Hence
$$ \begin{equation*} y(x)<-\frac{Q(x)}{2}\quad\text{ whenever } x\in(x_0,b), \end{equation*} \notag $$
where $b=\sup\operatorname{dom}y$. Corollary 2 is proved.
Proof of Corollary 3. We have
$$ \begin{equation*} \begin{aligned} \, U_0(x)=-2Y_0(x)&=-2\biggl[\frac{(\alpha_1(x)-\alpha_1(x))^2}{4}+ \frac{(\alpha_1(x)+\alpha_1(x))'}{2}\biggr] \\ &=-2\bigl[0+\alpha'_1(x)\bigr]=-2\alpha_1'(x). \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation*} U_0(x)<0\quad\text{ for any } x\geqslant x_0. \end{equation*} \notag $$
We also have
$$ \begin{equation*} y(x_0)<\alpha_1(x_0)=\frac{\alpha_1(x_0)+\alpha_1(x_0)}{2}= -\frac{Q(x_0)}{2}\,. \end{equation*} \notag $$
Thus, all conditions of Corollary 2 are satisfied. Hence
$$ \begin{equation*} y(x)<\alpha_1(x)\quad\text{ whenever } x\in(x_0,b), \end{equation*} \notag $$
where $b=\sup\operatorname{dom}y$. Corollary 3 is proved.

Justification of Remark 1. We have

$$ \begin{equation*} \begin{aligned} \, U_0=Q'-\frac{D}{2}&=-(\alpha_1+\alpha_2)'-\frac{(\alpha_1-\alpha_2)^2}{2}= -\biggl[(\alpha_1+\alpha_2)'+\frac{(\alpha_1-\alpha_2)^2}{2}\biggr] \\ &=-2\biggl[\frac{(\alpha_1-\alpha_2)^2}{4}+ \frac{(\alpha_1+\alpha_2)'}{2}\biggr]=-2Y_0. \end{aligned} \end{equation*} \notag $$
This yields that $U_0(x)<0$ iff $Y_0(x)>0$. The last inequality is just Condition B at the point $x$ (see [39], § 3.3).

Proof of Theorem 3.1.2. Suppose there exists $x_1>x_0$ such that $y(x_1)<\min(y_0,m)$. Then we have
$$ \begin{equation*} y(x_1)<m\quad\Longrightarrow\quad y(x_1)<\alpha_1(x_1)\leqslant\alpha_2(x_1), \end{equation*} \notag $$
so that
$$ \begin{equation*} y'(x_1)=(y(x_1)-\alpha_1(x_1))(y(x_1)-\alpha_2(x_1))>0. \end{equation*} \notag $$
Therefore, $y'(x)>0$ in a neighbourhood of the point $x_1$. In this neighbourhood $y(x)$ decreases strictly as $x$ decreases.

We choose $\widetilde{m}<m$ so that $y(x_1)< \widetilde{m}$. Then for all $x\in \mathbb{R}$ we have $\alpha_1(x)> \widetilde{m}$. Suppose there exists $x_2\in [x_0,x_1)$ such that $y(x_2)\geqslant\alpha_1(x_2)$. Then we obtain $y(x_2)>\widetilde{m}$ and $y(x_1)<\widetilde{m}$. Therefore, there exists $\xi\in(x_2, x_1)$ such that $y(\xi)=\widetilde{m}$. We select the rightmost of such points; it must satisfy $y'(\xi)\leqslant 0$.

On the other hand it follows from the form of the differential equation that

$$ \begin{equation*} y'(\xi)=(y(\xi)-\alpha_1(\xi))(y(\xi)-\alpha_2(\xi))>0. \end{equation*} \notag $$
This contradiction shows that our assumption fails and $y(x)<\alpha_1(x)$ for all $x\in [x_0,x_1]$. Hence $y'(x)>0$ for all $x\in [x_0, x_1]$. Thus, $y(x)$ increases strictly on $[x_0, x_1]$. This yields $y_0<y(x_1)$, which contradicts the condition $y(x_1)<\min(y_0,m)$. The theorem is proved.

Proof of Theorem 3.1.3. Just as in the proof of Theorem 3.1.2, it is easy to prove in our case the inequality $y'(x)>0$ whenever $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. So $y(x)$ is strictly increasing, and we have $y(x) > M_1$ and $y(x) > M_2$ for $x\in[x_0,b)$. For all $x\in[x_0,b)$ we have
$$ \begin{equation*} \frac{1}{(y(x) - M_1)(y(x) - M_2)} \geqslant \frac{1}{(y(x)-\alpha_1(x))(y(x) - \alpha_2(x))}. \end{equation*} \notag $$
Since $y'(x)>0$ for $x\in[x_0,b)$, for these $x$ it follows that
$$ \begin{equation*} \frac{y'(x)}{(y(x) - M_1)(y(x) - M_2)} \geqslant \frac{y'(x)}{(y(x) - \alpha_1(x))(y(x) - \alpha_2(x))}\,, \end{equation*} \notag $$
and integrating this inequality over $[x_0,x]$ we obtain
$$ \begin{equation*} \psi(y):=\int_{y_0}^{y}{\frac{ds}{(s-M_1)(s-M_2)}} \geqslant x-x_0. \end{equation*} \notag $$
Since $\psi(y)\leqslant \displaystyle\int_{y_0}^{\infty}\dfrac{ds}{(s-M_1)(s-M_2)}<\infty$, the solution cannot be extended to the right of the point $x_0+\displaystyle\int_{y_0}^{\infty}\dfrac{ds}{(s-M_1)(s-M_2)}$ . So taking [38], Corollary 3.1, into account we see that the solution tends to $+\infty$ at this point or even before it. The theorem is proved.
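For concrete bounds $M_1<M_2<y_0$ the barrier integral $\psi$ can be computed in closed form. A small SymPy sketch with the hypothetical sample values $M_1=0$, $M_2=1$, $y_0=2$ (not from the survey):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
M1, M2, y0 = 0, 1, 2   # hypothetical sample data with M1 < M2 < y0

# the integral bounding psi(y) in the proof of Theorem 3.1.3
psi_max = sp.integrate(1 / ((s - M1) * (s - M2)), (s, y0, sp.oo))
print(psi_max)         # log(2): the solution blows up no later than x0 + log 2
```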
Proof of Theorem 3.1.4. In fact, the solution $y(\,\cdot\,)$ is a monotonically increasing function on its domain. Since $y(x_0)>M\geqslant\alpha(x_0)$, from Corollary 3 and Remark 6 below we obtain $y(x)>\alpha(x)\geqslant m$ for $x\leqslant x_0$. Hence, by Weierstrass’s theorem $y\to m_1\geqslant m$ as $x\to -\infty$.
Proof of Theorem 3.1.5. In fact, if $y(x_0)<\alpha(x_0)$ and the graph of the solution $y(x)$ does not intersect the curve $y=\alpha(x)$ at $x\geqslant x_0$, then $y(x)<\alpha(x)$ for all $x\geqslant x_0$. Because of the monotonicity of $y(\,\cdot\,)$, by Weierstrass’s theorem the limit $\lim_{x\to +\infty}y(x)=:y_+\in\mathbb{R}$ exists, so that
$$ \begin{equation*} \int_{x_0}^{\infty}(\alpha(x)-y(x))^2\,dx<\infty. \end{equation*} \notag $$
If $y_+<\alpha_+$, then the last integral diverges. Therefore, $y_+=\alpha_+$ and
$$ \begin{equation*} \alpha(x)-y(x)\to +0\quad\text{as}\ \ x\to+\infty. \end{equation*} \notag $$
The theorem is proved.
Proof of Theorem 3.1.6. Consider part (1) of the theorem. First we prove that $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. Assume the contrary. Then there exists $x_1>x_0$ such that $y(x_1)>\alpha_1(x_1)$. Hence there exists $x_2\in(x_0,x_1)$ such that $y(x_2)=\alpha_1(x_2)$. We can choose the rightmost point with this property as $x_2$ and, without loss of generality, suppose that $y(x)>\alpha_1(x)$ for $x\in(x_2,x_1]$ and $y(x)<\alpha_2(x)$ for $x\in[x_2,x_1]$. So, if $x\in (x_2,x_1]$, then
$$ \begin{equation*} \alpha_1(x)<y(x)<\alpha_2(x). \end{equation*} \notag $$
Now, taking the form of our differential equation into account we obtain $y'(x)<0$ for $x\in(x_2,x_1]$, and therefore $y(x)$ decreases strictly on $[x_2,x_1]$. So $y(x_1)<y(x_2)$. We have
$$ \begin{equation*} \alpha_1(x_1)<y(x_1)<y(x_2)=\alpha_1(x_2). \end{equation*} \notag $$
Thus, we obtain the inequality $\alpha_1(x_1)<\alpha_1(x_2)$, which contradicts the monotonicity of the function $\alpha_1(x)$. Hence our assumption fails and $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$. The last statement yields $y'(x)\geqslant 0$ for $x\in[x_0,b)$, and therefore $y(x)$ increases monotonically on $[x_0,b)$.

Now we prove that $\lim_{x\to+\infty}y(x)=\lim_{x\to+\infty}\alpha_1(x)$. As proved above, the function $y(x)$ is bounded above on $[x_0,b)$: $y(x)\leqslant\alpha_1(x)\leqslant M$. It is increasing and therefore has a finite limit

$$ \begin{equation*} \lim_{x\to b}y(x)=:a\geqslant y_0. \end{equation*} \notag $$
Thus, $b=+\infty$. Because of the monotonicity and boundedness of $\alpha_1(x)$, it has a finite limit $\lim_{x\to+\infty}\alpha_1(x)=\alpha_1^+\geqslant a$.

We have

$$ \begin{equation*} y(x)=y_0+\int_{x_0}^x y'(x)\,dx,\qquad x\geqslant x_0. \end{equation*} \notag $$
So
$$ \begin{equation*} \int_{x_0}^{\infty} y'(x)\,dx<\infty. \end{equation*} \notag $$
Now,
$$ \begin{equation*} (\alpha_1(x)-y(x))^2\leqslant (\alpha_1(x)-y(x))(\alpha_2(x)-y(x))=y'(x),\qquad x\geqslant x_0, \end{equation*} \notag $$
and therefore $\displaystyle\int_{x_0}^{\infty}(\alpha_1(x)-y(x))^2\,dx<\infty$. Hence $a=\alpha_1^+$.

Now we prove part (2). Because of the monotonicity of $\alpha_2$, we have $\alpha_2(x)\leqslant \alpha_2(x_0)$ for all $x\geqslant x_0$. Hence, as $y(x_0)>\alpha_2(x_0)$, Theorem 3.1.3 yields that there exists $x^*>x_0$ such that $y(\,\cdot\,)$ increases strictly monotonically on $[x_0,x^*)$ and

$$ \begin{equation*} \lim_{x\to x^*-0}y(x)=+\infty. \end{equation*} \notag $$
The monotonicity of $y(\,\cdot\,)$ yields $y(x)\geqslant y(x_0)>\alpha_2(x_0)\geqslant\alpha_2(x)$ for $x>x_0$.

Now consider part (3). First we prove that $y(x)<\alpha_2(x)$ for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. We choose a number $c$ so that $c\in(y_0,\alpha_2(x_0))$. Then we have

$$ \begin{equation*} \alpha_2(x)\geqslant\alpha_2(x_0)>c>y_0,\qquad x\in[x_0,b). \end{equation*} \notag $$
Suppose there exists $x_1>x_0$ such that $y(x_1)\geqslant \alpha_2(x_1)>c$. We have $y(x_0)<c$ and $y(x_1)>c$. Hence there exists $\xi\in(x_0,x_1)$ such that $y(\xi)=c<\alpha_2(\xi)$. Without loss of generality, we choose the leftmost of such $\xi$. Then $y'(\xi)\geqslant 0$. On the other hand $y'(\xi)=(c-\alpha_1(\xi))(c-\alpha_2(\xi))<0$. We obtain a contradiction. Therefore, $y(x)<\alpha_2(x)$ for $x\in[x_0,b)$.

Note that $y(x)\geqslant \alpha_1(x)$ for $x\in[x_0,b)$. This inequality can be proved similarly to the proof of the inequality $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$ in part (1).

Thus, $\alpha_1(x)\leqslant y(x)<\alpha_2(x)$ for $x\in[x_0,b)$, that is, the function $y(x)$ is bounded and monotonically decreasing on $[x_0,b)$. Hence the same considerations as in part (1), using the inequality $(\alpha_2(x)-y(x))(y(x)-\alpha_1(x)) \geqslant (\alpha_2(x_0)-y_0)(y(x)-\alpha_1(x))$ for $x\in[x_0,b)$, yield $b=+\infty$ and

$$ \begin{equation*} \lim_{x\to+\infty}y(x)=\lim_{x\to+\infty}\alpha_1(x)<y_0. \end{equation*} \notag $$
The theorem is proved.

Remark 6. After the substitution

$$ \begin{equation*} u(x)=-y(-x) \end{equation*} \notag $$
equation (1.10) looks like
$$ \begin{equation} \frac{du}{dx}(-x)=(u(-x)-\beta_1(-x))(u(-x)-\beta_2(-x)), \end{equation} \tag{4.1} $$
where $\beta_1(-x)=-\alpha_1(x)$ and $\beta_2(-x)=-\alpha_2(x)$. Thus we can obtain analogues of Theorems 3.1.1–3.1.6 and their corollaries in the case when $x\leqslant x_0$. In particular, Theorem 3.1.3' becomes an analogue of Theorem 3.1.3.

Lemma 4.1. Suppose that condition (1.12) holds and $y(\,\cdot\,)$ is a solution to equation (1.10) defined on $(a,b)$. Then:

Proof. Part (1) follows from Corollary 3.1 in [38] and Theorem 3.1.2. Part (2) follows from part (1) and Remark 6. The lemma is proved.

Now we use some results of [47]. In [47] the author studied differential equations of the form

$$ \begin{equation} r'+G(x,r)+q(x)=0, \end{equation} \tag{4.2} $$
where $G(x,r)$ and $q(x)$ are continuous functions for $-\infty<r<+\infty$ and $0\leqslant x<\omega\ (\leqslant+\infty)$. In some results of [47] it was additionally assumed that the function $G(x,r)$ is convex in $r$. It is easy to see that in all the considerations and theorems of [47] the initial point $0$ can be replaced by an arbitrary $x_0$. We are interested in the case when $G(x,r)=r^2$. In this case we obtain Riccati’s equation of the special form
$$ \begin{equation} r'+r^2+q(x)=0. \end{equation} \tag{4.3} $$
Note that an equation of the form (1.10), when the function $\alpha$ is differentiable, can be reduced to the form (4.3) by the sequence of substitutions
$$ \begin{equation*} z=y-\frac{1}{2}(\alpha_1(x)+\alpha_2(x)),\qquad r=-z. \end{equation*} \notag $$
The first substitution produces (see [39]) the equation
$$ \begin{equation*} z'=z^2-Y_0(x). \end{equation*} \notag $$
The second substitution reduces this equation to the form
$$ \begin{equation} r'+r^2-Y_0(x)=0 \end{equation} \tag{4.4} $$
or, which is the same,
$$ \begin{equation} r'+r^2+\frac{1}{2}U_0(x)=0. \end{equation} \tag{4.5} $$
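The substitution chain can be verified symbolically. The sketch below (SymPy, not part of the survey) also checks the identity $U_0=-2Y_0$ from the justification of Remark 1:

```python
import sympy as sp

x = sp.symbols('x')
a1, a2 = sp.Function('alpha1')(x), sp.Function('alpha2')(x)
m = (a1 + a2) / 2                     # the mean of the roots

y = sp.Function('y')(x)
yp = (y - a1) * (y - a2)              # right-hand side of (1.10)

# z = y - m, r = -z; along solutions z' = y' - m' and r' = -z'
z = y - m
r = -z
rp = -(yp - sp.diff(m, x))

Y0 = (a1 - a2)**2 / 4 + sp.diff(m, x)
print(sp.simplify(rp + r**2 - Y0))    # 0: equation (4.4) holds

# the identity U_0 = Q' - D/2 = -2 Y_0 with Q = -(a1 + a2), D = (a1 - a2)^2
Q, D = -(a1 + a2), (a1 - a2)**2
U0 = sp.diff(Q, x) - D / 2
print(sp.simplify(U0 + 2 * Y0))       # 0
```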
Then Proposition 2.2 in [47] for equation (4.5) yields the following lemma.

Lemma 4.2. Suppose that $Q\in C^1 [x_0,+\infty)$ and $U_0(x)\geqslant 0$ for all $x\geqslant x_0$. Moreover, suppose the condition

$$ \begin{equation*} \int_{x_0}^{\infty}U_0(x)\,dx=\infty \end{equation*} \notag $$
holds. Then no solution $y(\,\cdot\,)$ to equation (1.10) defined at the point $x_0$ can be extended onto $[x_0,+\infty)$. If condition (1.12) holds, then each solution $y(\,\cdot\,)$ tends to $+\infty$ at some finite point $x^*=x^*(y)>x_0$.

Proof. Take the minorant $m(r)=G(x,r)=r^2$. In the case of equation (4.5) we have
$$ \begin{equation*} q(x)=\frac{1}{2}U_0(x)\geqslant 0. \end{equation*} \notag $$
If we suppose that equation (1.10) has a solution $y(x)$ defined on $[x_0,+\infty)$, then for equation (4.5) such a solution has the form
$$ \begin{equation*} r(x)=-\biggl(y-\frac{1}{2}(\alpha_1(x)+\alpha_2(x))\biggr). \end{equation*} \notag $$
According to Proposition 2.2 in [47], we obtain
$$ \begin{equation*} \int_{x_0}^{\infty}\frac{1}{2}U_0(x)\,dx<\infty. \end{equation*} \notag $$
So we obtain a contradiction, which, in view of Lemma 4.1, proves the lemma.

Suppose that condition (1.12) holds, $Q\in C^1[x_0,+\infty)$, and $U_0(x)<0$ for all $x\geqslant x_0$. Using Corollary 2 we study the behaviour of solutions to equation (1.10) in their dependence on the initial values.

Case I: $y_0=y(x_0)\leqslant \alpha(x_0)=(\alpha_1(x_0)+\alpha_2(x_0))/2$. According to Corollary 2 and Theorem 3.1.2, any such solution is bounded for $x\in[x_0,b)$, where $b$ is the right-hand endpoint of the maximal interval of existence of the solution $y(\,\cdot\,)$:

$$ \begin{equation*} \min(y_0,m)\leqslant y(x)<\alpha(x),\qquad x\in[x_0,b). \end{equation*} \notag $$
Hence it follows from Lemma 4.1 that $y(x)$ is extensible onto $[x_0,+\infty)$.

Case II: $y_0=y(x_0)>\alpha(x_0)$. The condition $U_0(x)<0$ for $x\geqslant x_0$ can be written as $\alpha'(x)>(\alpha(x)-\alpha_1(x))(\alpha(x)-\alpha_2(x))$. So we have two possibilities for $x\geqslant x_0$:

The last case can be split into two subcases:

Consider case II.(2b). We try to estimate the value of $x^*$.

Put $f(x,y):=(y-\alpha_1(x))(y-\alpha_2(x))$. We have $\alpha'(x)>f(x,\alpha(x))$. Now,

$$ \begin{equation*} \begin{aligned} \, y'(x)&=f(x,y(x))=f(x,\alpha(x))+[f(x,y(x))-f(x,\alpha(x))] \\ &=f(x,\alpha(x))+\biggl[f(x,y(x))-\frac{\alpha_2-\alpha_1}{2}\, \frac{\alpha_1-\alpha_2}{2}\biggr] \\ &=f(x,\alpha(x))+\biggl[y^2-(\alpha_1(x)+\alpha_2(x))y+ \alpha_1\alpha_2+\frac{(\alpha_1-\alpha_2)^2}{4}\biggr] \\ &=f(x,\alpha)+\biggl[y^2-(\alpha_1(x)+\alpha_2(x))y+\alpha_1\alpha_2+ \frac{\alpha_1^2}{4}+\frac{\alpha_2^2}{4}-\frac{\alpha_1\alpha_2}{2}\biggr] \\ &=f(x,\alpha)+\biggl[y^2-(\alpha_1(x)+\alpha_2(x))y+ \frac{(\alpha_1+\alpha_2)^2}{4}\biggr] \\ &=f(x,\alpha)+\biggl[y-\frac{\alpha_1+\alpha_2}{2}\biggr]^2= f(x,\alpha)+(y(x)-\alpha(x))^2. \end{aligned} \end{equation*} \notag $$
So,
$$ \begin{equation*} y'(x)=f(x,\alpha)+(y(x)-\alpha(x))^2<\alpha'(x)+(y(x)-\alpha(x))^2, \end{equation*} \notag $$
that is,
$$ \begin{equation*} (y(x)-\alpha(x))'<(y(x)-\alpha(x))^2. \end{equation*} \notag $$
We put $\beta(x):=y(x)-\alpha(x)$ and obtain
$$ \begin{equation*} \beta'(x)<\beta^2(x). \end{equation*} \notag $$
We know that $\beta(x_0)=y(x_0)-\alpha(x_0)>0$. Let $\gamma(x)$ be a solution to the initial value problem
$$ \begin{equation*} \begin{cases} \gamma'(x)=\gamma^2(x), \\ \gamma(x_0)=y(x_0)-\alpha(x_0). \end{cases} \end{equation*} \notag $$
We have $\beta(x)<\gamma(x)$ for $x>x_0$. Separating the variables, integrating, and substituting the initial values we obtain
$$ \begin{equation*} \gamma(x)=\biggl(\frac{1}{y(x_0)-\alpha(x_0)}-(x-x_0)\biggr)^{-1}. \end{equation*} \notag $$
So,
$$ \begin{equation*} \beta(x)<\gamma(x)=\biggl(\frac{1}{y(x_0)-\alpha(x_0)}- (x-x_0)\biggr)^{-1},\qquad x>x_0. \end{equation*} \notag $$
Thus, we obtain an estimate for $x^*$:
$$ \begin{equation*} x^*\geqslant x_0+\frac{1}{y(x_0)-\alpha(x_0)}. \end{equation*} \notag $$
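This bound can be tested on an explicitly solvable model (a numerical sketch, not part of the argument above): for the hypothetical constant roots $\alpha_1\equiv-1$ and $\alpha_2\equiv 1$ we have $\alpha\equiv 0$, the equation becomes $y'=y^2-1$, and a solution with $y_0>1$ blows up exactly at $x^*=x_0+\frac12\ln\frac{y_0+1}{y_0-1}$, which is never smaller than $x_0+1/y_0$.

```python
import math

def blowup_exact(y0, x0=0.0):
    # For y' = y**2 - 1 with y(x0) = y0 > 1 the solution is
    # y(x) = coth(x* - x); it blows up at x* = x0 + arccoth(y0).
    return x0 + 0.5 * math.log((y0 + 1.0) / (y0 - 1.0))

def blowup_lower_bound(y0, x0=0.0, alpha0=0.0):
    # The estimate derived above: x* >= x0 + 1/(y(x0) - alpha(x0)).
    return x0 + 1.0 / (y0 - alpha0)

for y0 in (1.5, 2.0, 5.0, 10.0, 100.0):
    assert blowup_exact(y0) >= blowup_lower_bound(y0)
```

Since $\frac12\ln\frac{y_0+1}{y_0-1}=1/y_0+O(y_0^{-3})$ as $y_0\to+\infty$, the estimate is sharp to first order for large initial values in this model.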
Generalizing these considerations, we obtain the following result.

Lemma 4.3. Suppose that condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $U_0(x)< 0$ for all $x\geqslant x_0$, let $y(\,\cdot\,)$ be a solution to the equation defined at the point $x_0$, and let $y(x_0)=y_0$. Then the following hold:

Proposition 2.2 in [47] and Theorem 3.1.2 yield the following lemma.

Lemma 4.4. Suppose that condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, $U_0(x)\geqslant 0$ for all $x\geqslant x_0$, and

$$ \begin{equation*} \int_{x_0}^{\infty}U_0(x)\,dx<\infty. \end{equation*} \notag $$
Then solutions to equation (1.10) defined on $[x_0,+\infty)$ (if they exist) satisfy the relations
$$ \begin{equation*} \min(y_0,m)\leqslant y(x)\leqslant\alpha(x),\qquad x\geqslant x_0. \end{equation*} \notag $$
In particular, if $y(x)$ satisfies the inequality $y_0=y(x_0)> \alpha(x_0)$, then $y(x)$ tends to $+\infty$ at a finite point $x^*>x_0$.

Remark 7. If $U_0(x)<0$ for all $x\geqslant x_0$, then the graph of any solution to equation (1.10) can have at most one point of intersection with the graph of $\alpha(x)$ at some $x\geqslant x_0$.

Proposition 2.3 in [47] yields the following lemma.

Lemma 4.5. Suppose that $Q\in C^1 [x_0,+\infty)$, the integral $\displaystyle\int_{x_0}^{\infty}\!\!U_0(x)\,dx$ converges, and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then relations (3.4) and (3.3) hold.

Proof. The function $r(x)=\alpha(x)-y(x)$ is a solution to equation (4.5) defined on $[x_0,+\infty)$. Then, according to Proposition 2.3 in [47], we have
$$ \begin{equation*} \int_{x_0}^{\infty}r^2(x)\,dx= \int_{x_0}^{\infty}(y(x)-\alpha(x))^2\,dx<\infty \end{equation*} \notag $$
and
$$ \begin{equation*} r(x)=\alpha(x)-y(x)\to 0,\qquad x\to+\infty. \end{equation*} \notag $$
The lemma is proved.

Lemmas 4.2–4.5 prove Theorem 3.1.7.

Justification of Remark 3. It suffices to note that

$$ \begin{equation*} U_0(x)=-2\alpha'(x) \end{equation*} \notag $$
and the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges if and only if the integral $\displaystyle\int_{x_0}^{\infty}\alpha'(x)\,dx$ converges, that is, $\alpha(x)$ has a finite limit as $x\to +\infty$.

For equation (1.10) Proposition 2.4 in [47] yields Theorem 3.1.8. Using Theorem 3.1.8 for $\alpha_1(x)=\alpha_2(x)=\alpha(x)$, $x\geqslant x_0$, we obtain Corollary 5.

Lemma 4.6. Suppose $Q\in C^1[x_0,+\infty)$. If there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$ such that relations (3.4) and (3.3) hold, then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges.

Proof. The function $r(x)=\alpha(x)-y(x)$ is a solution to equation (4.5) defined on $[x_0,+\infty)$ and such that
$$ \begin{equation*} \int_{x_0}^{\infty}r^2(x)\,dx<\infty \end{equation*} \notag $$
and
$$ \begin{equation*} r(x)\to 0,\qquad x\to+\infty. \end{equation*} \notag $$
In (4.5) we move all terms but the derivative to the right-hand side and integrate over $[x_0,x]$:
$$ \begin{equation*} r(x)-r(x_0)=-\int_{x_0}^{x}r^2(s)\,ds- \frac{1}{2}\int_{x_0}^{x}U_0(s)\,ds, \end{equation*} \notag $$
so that
$$ \begin{equation*} \int_{x_0}^{x}U_0(s)\,ds=-2\biggl[r(x)-r(x_0)+ \int_{x_0}^{x}r^2(s)\,ds\biggr]. \end{equation*} \notag $$
Passing to the limit as $x\to+\infty$, we obtain the convergence of the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$. The lemma is proved.
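The integration step above amounts to writing (4.5) as $r'=-r^2-\frac12U_0$. The following numerical sketch illustrates the resulting identity on a hypothetical choice of $r$, from which $U_0$ is manufactured; it is a consistency check, not part of the proof.

```python
def r(x): return (1.0 + x) ** -2
def rp(x): return -2.0 * (1.0 + x) ** -3          # r'(x)
def U0(x): return -2.0 * (rp(x) + r(x) ** 2)      # from r' = -r**2 - U0/2

def midpoint(f, a, b, n=5000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

x0, X = 0.0, 5.0
lhs = midpoint(U0, x0, X)
rhs = -2.0 * (r(X) - r(x0) + midpoint(lambda s: r(s) ** 2, x0, X))
assert abs(lhs - rhs) < 1e-4   # the integrated identity holds
```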

Lemmas 4.5 and 4.6 yield Theorem 3.1.9.

Proposition 2.5 in [47] and the proof of Lemma 7.1 in [38], Ch. XI, § 7, yield Theorem 3.1.10.

Consider the case when condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$. In this case statement (4) of Theorem 3.1.10 is equivalent to the assertion that

$$ \begin{equation*} \lim_{T\to+\infty}\,\frac{1}{T-x_0}\int_{x_0}^{T}|\alpha(t)-a|^2\,dt =0\quad\text{ for some }a\in\mathbb{R}. \end{equation*} \notag $$

In fact,

$$ \begin{equation*} \begin{aligned} \, \frac{1}{T-x_0}\int_{x_0}^{T}\biggl|c- \frac{1}{2}\int_{x_0}^{t}U_0(s)\,ds\biggr|^2\,dt&= \frac{1}{T-x_0}\int_{x_0}^{T}\biggl|c+ \int_{x_0}^{t}\alpha'(s)\,ds\biggr|^2\,dt \\ &=\frac{1}{T-x_0}\int_{x_0}^{T}|\alpha(t)-(\alpha(x_0)-c)|^2\,dt. \end{aligned} \end{equation*} \notag $$

Now,

$$ \begin{equation*} \begin{aligned} \, \frac{1}{T-x_0}\int_{x_0}^{T}\,dt\biggl(\int_{x_0}^{t}U_0(s)\,ds\biggr)&= \frac{1}{T-x_0}\int_{x_0}^{T}\,dt\biggl(-2\int_{x_0}^{t}\alpha'(s)\,ds\biggr) \\ &=-\frac{2}{T-x_0}\int_{x_0}^{T}(\alpha(t)-\alpha(x_0))\,dt \\ &=-\frac{2}{T-x_0}\biggl[\int_{x_0}^{T}\alpha(t)\,dt-\alpha(x_0)(T-x_0)\biggr] \\ &=-2\biggl[\frac{1}{T-x_0}\int_{x_0}^{T}\alpha(t)\,dt-\alpha(x_0)\biggr]. \end{aligned} \end{equation*} \notag $$

Thus, statement (3) of Theorem 3.1.10 is equivalent to the condition

$$ \begin{equation*} \varlimsup_{T\to+\infty}\biggl[\frac{1}{T-x_0} \int_{x_0}^{T}\alpha(t)\,dt\biggr]<+\infty. \end{equation*} \notag $$
On the other hand, $\alpha(x)$ is a bounded function, so the last statement on the limit superior holds automatically. It then follows from Theorem 3.1.10 that if there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$, then statements (1)–(4) of Theorem 3.1.10 hold. Thus conditions (1) and (4) are necessary for the existence of a solution $y(x)$ defined on $[x_0,+\infty)$.

Now suppose that $\displaystyle\lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T} |\alpha(t)-a|^2\,dt=0$ for some $a\in\mathbb{R}$ and there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$. Then statement (1) is true and the finite limit

$$ \begin{equation*} \lim_{x\to+\infty}y(x)=y(x_0)+\int_{x_0}^{\infty}(y(t)-\alpha(t))^2\,dt \end{equation*} \notag $$
exists. Taking the equality $a=\alpha(x_0)-c$ and the formula for $c$ into account we obtain
$$ \begin{equation*} a=\lim_{x\to+\infty}y(x). \end{equation*} \notag $$
This produces the statement of Corollary 7.

Proposition 3.2 in [47], as applied to equation (1.10), proves Theorem 3.1.11.

Remark 8. Suppose $Q\in C^1[x_0,+\infty)$. For $x_1\geqslant x_0$ note that

$$ \begin{equation*} \int_{x_1}^{\infty}|y(x)-\alpha(x)|\,dx<\infty\quad\Longrightarrow\quad \int_{x_1}^{\infty}|y(x)-\alpha(x)|^2\,dx<\infty. \end{equation*} \notag $$
In general the converse fails. Thus, many solutions $y(\,\cdot\,)$ defined on $[x_1,+\infty)$ for some $x_1\geqslant x_0$ can satisfy the condition $\displaystyle\int_{x_1}^{\infty}|y(x)-\alpha(x)|^2\,dx<\infty$. For instance, when (1.12) holds and $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, this condition is satisfied (see Lemma 4.5) by all solutions defined near $+\infty$. But at most one solution defined near $+\infty$ can ‘tend to $\alpha(x)$ sufficiently rapidly’ to ensure that $\displaystyle\int_{x_1}^{\infty}|y(x)-\alpha(x)|\,dx<\infty$.
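A hypothetical model function illustrates the remark: if $|y(x)-\alpha(x)|$ behaves like $1/x$ on $[1,+\infty)$, its square is integrable there while the function itself is not. The sketch below checks this numerically.

```python
def midpoint_integral(f, a, T, n=100000):
    # composite midpoint rule on [a, T]
    h = (T - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f1 = lambda x: 1.0 / x        # plays the role of |y - alpha|: not integrable
f2 = lambda x: 1.0 / x ** 2   # its square: integrable, the integral equals 1

# Partial integrals of the square stay bounded ...
assert midpoint_integral(f2, 1.0, 100.0) < 1.0
assert midpoint_integral(f2, 1.0, 1000.0) < 1.0
# ... while those of the function itself grow like log T
assert midpoint_integral(f1, 1.0, 1000.0) > 6.0
```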

Theorems 3.1.12 and 3.1.13 can be proved by a joint application of Theorems 3.1.6 and 3.1.7.

4.2. On the structure of the set of solutions defined in a neighbourhood of $+\infty$

Remark 9. Suppose $Q\in C^1[x_0,\omega)$. If there exists a solution $y(\,\cdot\,)$ defined on $[x_0,\omega)$, then the principal solution $y_*$ on $(x_0,\omega)$ is defined on the whole of this interval. If $y_*(x)\to -\infty$ as $x\to x_0+0$, then the inequality $y(x)\leqslant y_*(x)$, $x\in(x_0,\omega)$, is violated; hence this case is impossible. Therefore, by Lemma 4.1 the principal solution $y_*(x)$ is extensible onto $[x_0,\omega)$.

Thus, Lemma 2.1 produces the following result.

Corollary 8. Suppose $x_0<\omega\leqslant\infty$ and $Q\in C^1 [x_0,\omega)$. Consider solutions to equation (1.10) defined at the point $x_0$. Assume that among them there exists a solution defined on $[x_0,\omega)$. Then there exists a solution $y_*(x)$ (the principal one) defined on $[x_0,\omega)$ such that if $y(x)$ is a solution defined on $(x_0,\omega)$, then $y(x)\leqslant y_*(x)$ for $x\in(x_0,\omega)$.

Lemma 4.7. Suppose that $x_0<\omega\leqslant\infty$, $Q\in C^1 [x_0,\omega)$, and there exists at least one solution to equation (1.10) defined on $[x_0,\omega)$. Then there exists a solution $y_*(x)$ (the principal one) defined on $[x_0,\omega)$ such that any solution $y(x)$ defined at the point $x_0$: (1) is extensible onto the whole of $[x_0,\omega)$; (2) satisfies $y(x)\leqslant y_*(x)$ on $(x_0,\omega)$.

Proof. Statement (2) holds because of Corollary 8. Now we prove statement (1). We have
$$ \begin{equation*} y(x)\leqslant y_*(x) \end{equation*} \notag $$
on the intersection of the domain of $y(x)$ and the half-open interval $[x_0,\omega)$. Then for any $x_0<x^*<\omega$ such that $y(x)$ is defined on $[x_0, x^*)$ the solution $y(x)$ is bounded above on $[x_0,x^*)$. Using the same considerations as in the proof of Theorem 3.1.2, for $x\in[x_0,x^*)$ we can prove the inequality
$$ \begin{equation*} y(x)\geqslant \min\Bigl(y(x_0),\min_{x\in [x_0,x^*]}\alpha_1(x)\Bigr), \end{equation*} \notag $$
that is, $y(x)$ is bounded below on $[x_0,x^*)$. So the solution $y(x)$ cannot tend to infinity at a finite point $x^*<\omega$. Therefore, by Corollary 3.1 in [38] the solution $y(x)$ is extensible onto the whole $[x_0,\omega)$. The lemma is proved.

Lemma 4.2 in [47] and Theorem 4.2 in [47] produce the following lemma.

Lemma 4.8. Suppose that $Q\in C^1[x_0,\omega)$ and $y_3(x)<y_2(x)<y_1(x)$ are different solutions to equation (1.10) defined on a common interval $(x_0,\omega)$. Then the function $\dfrac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)}$ is greater than or equal to 1 and decreases on the above interval. In particular, the finite limit

$$ \begin{equation*} \lim_{x\to\omega-0}\,\frac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)} \end{equation*} \notag $$
exists. Moreover, if $y_1(x)=y_*(x)$, then this limit equals $1$.

Proof. The functions $r_1(x)<r_2(x)<r_3(x)$ are solutions to equation (4.5) defined on a common interval $(x_0,\omega)$. Here
$$ \begin{equation*} \begin{aligned} \, r_1(x)&=\alpha(x)-y_1(x), \\ r_2(x)&=\alpha(x)-y_2(x), \end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} r_3(x) =\alpha(x)-y_3(x). \end{equation*} \notag $$
By Lemma 4.3 in [47] the function $\dfrac{r_3(x)-r_1(x)}{r_2(x)-r_1(x)}$ is greater than or equal to 1 and decreases on the interval in question. In particular, the finite limit
$$ \begin{equation*} \lim_{x\to\omega-0}\frac{r_3(x)-r_1(x)}{r_2(x)-r_1(x)}\, \end{equation*} \notag $$
exists. Further,
$$ \begin{equation*} \frac{r_3-r_1}{r_2-r_1}= \frac{(\alpha-y_3)-(\alpha-y_1)}{(\alpha-y_2)-(\alpha-y_1)}= \frac{y_1-y_3}{y_1-y_2}\,. \end{equation*} \notag $$
In the case when $y_1(x)=y_*(x)$ we have $r_1(x)=r_*(x)$, and from Theorem 4.2 in [47] we obtain
$$ \begin{equation*} \lim_{x\to\omega-0}\,\frac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)}= \lim_{x\to\omega-0}\,\frac{r_3(x)-r_1(x)}{r_2(x)-r_1(x)}=1. \end{equation*} \notag $$
The lemma is proved.
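The behaviour described in Lemma 4.8 can be observed on the hypothetical autonomous equation $y'=y^2-1$ (constant roots $\alpha_1\equiv-1<\alpha_2\equiv1$), whose solutions on $[0,+\infty)$ are explicit: $y_*\equiv1$ is the principal solution and $y(x)=-\tanh(x-c)$ gives solutions between the roots. The sketch below tabulates the ratio from the lemma.

```python
import math

# Three solutions of y' = y**2 - 1 on [0, +infinity), y3 < y2 < y1:
y1 = lambda x: 1.0                        # the principal solution y_*
y2 = lambda x: -math.tanh(x)              # y2(0) = 0
c = math.atanh(0.5)
y3 = lambda x: -math.tanh(x + c)          # y3(0) = -0.5

ratio = lambda x: (y1(x) - y3(x)) / (y1(x) - y2(x))

values = [ratio(x) for x in (0.0, 1.0, 2.0, 5.0, 10.0)]
assert all(v >= 1.0 for v in values)                   # ratio >= 1
assert all(a > b for a, b in zip(values, values[1:]))  # strictly decreasing
assert abs(values[-1] - 1.0) < 1e-6                    # tends to 1, as y1 = y_*
```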

Lemmas 4.7 and 4.8 produce Theorem 3.2.1.

Lemma 4.9. Suppose $Q\in C^1[x_0,+\infty)$ and condition (1.12) holds. If two solutions to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to +\infty$, then any solution defined on $[x_0,+\infty)$ also has a finite limit as $x\to+\infty$.

Proof. Let $y_1>y_2$ be two solutions to equation (1.10) defined on $[x_0,+\infty)$ and having two different finite limits as $x\to+\infty$. Let $y(\,\cdot\,)$ be a solution defined on $[x_0,+\infty)$ and other than $y_1$ and $y_2$. Put
$$ \begin{equation*} \lim_{x\to+\infty}y_1(x)=:a\quad\text{and}\quad \lim_{x\to+\infty}y_2(x)=:b. \end{equation*} \notag $$

Case 1: $y(x_0)>y_1(x_0)$. By Theorem 3.2.1 the function $(y-y_1)/(y-y_2)$ increases monotonically and tends to some $d\in\mathbb{R}$, $d\leqslant 1,$ as $x\to+\infty$.

Thus,

$$ \begin{equation*} \frac{y-y_1}{y-y_2}=d+\beta(x), \end{equation*} \notag $$
where $\beta(x)$ increases monotonically for $x\geqslant x_0$ and $\beta(x)=\bar{o}(1)$ as $x\to+\infty$. This yields
$$ \begin{equation*} \begin{gathered} \, y-y_1=(y-y_2)d+(y-y_2)\beta(x), \\ y-y_1=y d-y_2 d+y\beta(x)-y_2\beta(x), \end{gathered} \end{equation*} \notag $$
and
$$ \begin{equation*} (1-d-\beta(x))y=y_1-y_2(d+\beta(x)). \end{equation*} \notag $$
Hence
$$ \begin{equation*} y=\frac{y_1-y_2(d+\beta(x))}{1-d-\beta(x)}\,. \end{equation*} \notag $$
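As a quick consistency check with hypothetical numerical values, the displayed formula for $y$ does reproduce the defining relation $(y-y_1)/(y-y_2)=d+\beta$:

```python
y1, y2 = 5.0, 3.0      # values of y1(x) > y2(x) at some fixed x
d, beta = 0.4, 0.05    # a limit d <= 1 and a small remainder beta(x)

# The formula obtained above for y in terms of the ratio d + beta:
y = (y1 - y2 * (d + beta)) / (1.0 - d - beta)

# It must satisfy the defining relation (y - y1)/(y - y2) = d + beta:
assert abs((y - y1) / (y - y2) - (d + beta)) < 1e-12
assert y > y1   # consistent with Case 1, where y(x) > y1(x)
```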

If $d<1$, then

$$ \begin{equation*} y(x)\to\frac{a-bd}{1-d}\in\mathbb{R},\qquad x\to+\infty. \end{equation*} \notag $$

If $d=1$, then

$$ \begin{equation*} y(x)=\frac{y_1(x)-y_2(x)(1+\beta(x))}{-\beta(x)}= \frac{y_1(x)-y_2(x)}{-\beta(x)}+y_2(x)\to+\infty,\qquad x\to+\infty, \end{equation*} \notag $$
which is impossible by Theorem 3.1.3.

Case 2: $y_2(x_0)\kern-0.5pt<\kern-0.5pty(x_0)\kern-0.5pt<\kern-0.5pty_1(x_0)$. By Theorem 3.2.1 the function $(y_1- y_2)/(y_1- y)$ decreases monotonically and tends to some $d\in\mathbb{R}$, $d\geqslant 1$, as $x\to+\infty$; hence the function $(y_1-y)/(y_1-y_2)$ increases monotonically and tends to $1/d\in\mathbb{R}$, $1/d\leqslant 1$, as $x\to+\infty$. We have

$$ \begin{equation*} y_1-y=(y_1-y_2)\biggl(\frac{1}{d}+\bar{o}(1)\biggr), \qquad x\to+\infty, \end{equation*} \notag $$
so
$$ \begin{equation*} y =y_1+(y_2-y_1)\biggl(\frac{1}{d}+\bar{o}(1)\biggr), \qquad x \to+\infty. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} y(x)\to a-\frac{a-b}{d}\in\mathbb{R},\qquad x\to+\infty. \end{equation*} \notag $$

Case 3: $y(x_0)<y_2(x_0)$. By Theorem 3.2.1 the function $(y_1-y)/(y_1-y_2)$ decreases monotonically and tends to some $d\in\mathbb{R}$, $d\geqslant 1$, as $x\to+\infty$. So,

$$ \begin{equation*} y(x)=y_1(x)-(y_1(x)-y_2(x))(d+\bar{o}(1))\to a-(a-b)d\in\mathbb{R},\qquad x\to+\infty. \end{equation*} \notag $$

Thus, in all possible cases the solution $y(x)$ has a finite limit as $x\to+\infty$. The lemma is proved.

Lemma 4.9 and the formulae for the limit of the solution $y(\,\cdot\,)$ in Cases 2 and 3 for $d=1$ yield the following result.

Lemma 4.10. Suppose $Q\in C^1[x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to+\infty$ and, moreover, $y_1$ is the principal solution on $(x_0,+\infty)$, then each solution $y$ defined on $[x_0,+\infty)$ and other than $y_1$ has the same limit at infinity as $y_2$ has.

Proof of Theorem 3.2.2. By Lemma 4.9 the principal solution $y_*(x)$ on $(x_0,+\infty)$ has a finite limit as $x\to+\infty$. If the solution $y_1$ is not principal on $(x_0,+\infty)$, then $y_2(x)<y_1(x)<y_*(x)$ whenever $x\geqslant x_0$. In this case, by Lemma 4.10 the solutions $y_1(x)$ and $y_2(x)$ have the same limits as $x\to+\infty$, which contradicts the assumptions of the theorem. Thus, the solution $y_1$ is principal on $(x_0,+\infty)$. Hence, by Lemma 4.10 each solution $y$ defined on $[x_0,+\infty)$ and other than $y_1$ has the same limit at infinity as $y_2$ has. The theorem is proved.

Proof of Theorem 3.2.3. If
$$ \begin{equation*} \lim_{x\to+\infty}y_1(x)\ne\lim_{x\to+\infty}y_2(x), \end{equation*} \notag $$
then the statement of the theorem follows from Theorem 3.2.2.

Suppose

$$ \begin{equation*} \lim_{x\to+\infty}y_1(x)=\lim_{x\to+\infty}y_2(x). \end{equation*} \notag $$
Note that while proving Theorem 3.2.2 we did not use the condition $a\ne b$ in Cases 2 and 3. So, using the formulae from that proof for the limit of the solution $y(x)$ as $x\to+\infty$ we obtain the statement of Theorem 3.2.3.

4.3. Asymptotic behaviour at $\pm\infty$ of solutions to an equation the roots of whose right-hand side tend monotonically to finite limits

Theorem 3.2.1 yields the following result.

Lemma 4.11. Suppose $\alpha_{1,+}\ne \alpha_{2,+}$ and $y_{\rm I}<y_{\rm II}$ are two bounded solutions to equation (1.10) defined at the point $x_0$ and such that

$$ \begin{equation*} \lim_{x\to+\infty}y_{\rm I}(x)=\alpha_{1,+}\quad\textit{and}\quad \lim_{x\to+\infty}y_{\rm II}(x)=\alpha_{2,+}. \end{equation*} \notag $$

Let $y(x)$ be a solution to equation (1.10). Then the following hold:

Proof. First we prove part (1). Since the solution $y(x)$ is bounded above by a solution defined on $[x_0,+\infty)$, it follows from Lemma 4.1 that $y(x)$ is extensible onto $[x_0,+\infty)$ as a bounded function. Hence it obeys condition (2.5) (see [39], Proposition 2.4). Therefore, the limit of $y(x)$ equals $\alpha_{1,+}$ or $\alpha_{2,+}$. Consider separately the cases $y(x_0)>y_{\rm I}(x_0)$ and $y(x_0)<y_{\rm I}(x_0)$.

If $y(x_0)>y_{\rm I}(x_0)$, then by Theorem 3.2.1,

$$ \begin{equation*} \lim_{x\to+\infty}\frac{y_{\rm II}(x)-y_{\rm I}(x)}{y_{\rm II}(x)-y(x)}= d\in\mathbb{R},\qquad d\geqslant 1. \end{equation*} \notag $$
So, $\lim_{x\to+\infty}y(x)\ne \alpha_{2,+}$, since otherwise the following relation would have to hold:
$$ \begin{equation*} \lim_{x\to+\infty}\dfrac{y_{\rm II}(x)-y_{\rm I}(x)}{y_{\rm II}(x)-y(x)}=+\infty. \end{equation*} \notag $$
Thus, $\lim_{x\to+\infty}y(x)=\alpha_{1,+}$.

If $y(x_0)<y_{\rm I}(x_0)$, then $y(x)<y_{\rm I}(x)$ for $x\geqslant x_0$. Therefore, $\lim_{x\to+\infty}y(x)\ne \alpha_{2,+}$. Thus, $\lim_{x\to+\infty}y(x)=\alpha_{1,+}$.

So, in any case, $\lim_{x\to+\infty}y(x)=\alpha_{1,+}$.

Now we prove part (2). Assume the contrary, that is, assume that the solution $y(x)$ is extensible onto $[x_0,+\infty)$. Then $y(x)>y_{\rm II}(x)$ for $x\geqslant x_0$, and therefore $\lim_{x\to+\infty}y(x)\ne \alpha_{1,+}$. Hence $\lim_{x\to+\infty}y(x)=\alpha_{2,+}$. We obtain

$$ \begin{equation*} \lim_{x\to+\infty}\frac{y(x)-y_{\rm I}(x)}{y(x)-y_{\rm II}(x)}=+\infty, \end{equation*} \notag $$
which contradicts Theorem 3.2.1. Thus, our assumption is wrong and $y(x)$ tends to $+\infty$ at a finite point $x^*\geqslant x_0$. The lemma is proved.

Further, the following result holds.

Lemma 4.12. If $\alpha_{1,+}\ne\alpha_{2,+}$, then equation (1.10) has a solution $y_1$ defined in a neighbourhood of $+\infty$ and such that

$$ \begin{equation*} \lim_{x\to+\infty}y_1(x)=\alpha_{1,+}. \end{equation*} \notag $$

Proof. We choose an arbitrary $\varepsilon$ so that
$$ \begin{equation*} 0<\varepsilon<\frac{\alpha_{2,+}-\alpha_{1,+}}{4}\,. \end{equation*} \notag $$
Then there exists $x_0=x_0(\varepsilon)\in\mathbb{R}$ such that for all $x\geqslant x_0$ the inequalities
$$ \begin{equation*} |\alpha_1(x)-\alpha_{1,+}|<\varepsilon\quad \text{and}\quad |\alpha_2(x)-\alpha_{2,+}|<\varepsilon \end{equation*} \notag $$
hold. Consider a solution $y_1$ defined at the point $x_0$ and such that
$$ \begin{equation*} y_1(x_0)=\frac{\alpha_{1,+}+\alpha_{2,+}}{2}\,. \end{equation*} \notag $$
Then $\alpha_{1,+}+\varepsilon<y_1(x_0)<\alpha_{2,+}-\varepsilon$. We have
$$ \begin{equation*} \begin{aligned} \, \alpha_1(x_0)<y_1(x_0)<\alpha_2(x_0)\quad &\Longrightarrow\quad y'_1(x_0)<0 \\ &\Longrightarrow\quad y'_1(x)<0,\quad x\in[x_0,x_0+\delta], \end{aligned} \end{equation*} \notag $$
for some $\delta>0$.
Assume that there exists $\hat{x}>x_0$ such that $y_1(\hat{x})\geqslant y_1(x_0)$. Then $\hat{x}>x_0+\delta$ since otherwise, owing to the monotonicity of $y_1$ on $[x_0,x_0+\delta]$, we have $y_1(\hat{x})< y_1(x_0)$. Suppose $y_1(\hat{x})> y_1(x_0)$. Then
$$ \begin{equation*} y_1(x_0+\delta)<y_1(x_0)<y_1(\hat{x}). \end{equation*} \notag $$
Hence there exists $\xi\in(x_0+\delta,\hat{x})$ such that $y_1(\xi)=y_1(x_0)$.

Thus, if there exists $\hat{x}>x_0$ such that $y_1(\hat{x})\geqslant y_1(x_0)$, then there exists $\xi>x_0$ such that $y_1(\xi)=y_1(x_0)$. Without loss of generality we assume that $\xi$ is the leftmost of such points. Then $y'_1(\xi)\geqslant 0$. On the other hand

$$ \begin{equation*} \alpha_1(\xi)<\alpha_{1,+}+\varepsilon<y_1(x_0)= y_1(\xi)<\alpha_{2,+}-\varepsilon<\alpha_2(\xi) \quad\Longrightarrow\quad y'_1(\xi)< 0. \end{equation*} \notag $$
This contradiction shows that our assumption is wrong, and for each $x>x_0$ at which the solution $y_1$ is defined, the inequality $y_1(x)<y_1(x_0)$ holds. Therefore, $y_1$ is a solution bounded for $x\geqslant x_0$. Hence (see [39]) $y_1$ satisfies (2.5) and either $\lim_{x\to+\infty}y_1(x)=\alpha_{1,+}$ or $\lim_{x\to+\infty}y_1(x)=\alpha_{2,+}$. The latter is impossible since
$$ \begin{equation*} y_1(x)<y_1(x_0)<\alpha_{2,+}\quad \text{for } x>x_0. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \lim_{x\to+\infty}y_1(x)=\alpha_{1,+}, \end{equation*} \notag $$
and $y_1$ is the required solution. The lemma is proved.
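The construction in the proof can be traced numerically on a hypothetical model with constant roots $\alpha_1\equiv-1$ and $\alpha_2\equiv1$ (so $\alpha_{1,+}=-1$ and $\alpha_{2,+}=1$): an Euler scheme started at the midpoint decreases and stabilizes at $\alpha_{1,+}$, as the lemma predicts.

```python
# Euler integration of y' = (y - alpha1)*(y - alpha2) with constant roots.
alpha1, alpha2 = -1.0, 1.0
y = (alpha1 + alpha2) / 2.0        # start at the midpoint, as in the proof
x, h = 0.0, 1e-3
while x < 10.0:
    y += h * (y - alpha1) * (y - alpha2)   # y' < 0 between the roots
    x += h
# The trajectory decreases monotonically and stabilizes at alpha_{1,+} = -1.
assert -1.0 < y < -0.99
```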

Lemma 4.12 and Remark 6 produce the following result.

Lemma 4.12'. Suppose $\alpha_{1,-}\ne\alpha_{2,-}$. Then equation (1.10) has a solution $y_1$ defined in a neighbourhood of $-\infty$ and such that

$$ \begin{equation*} \lim_{x\to-\infty}y_1(x)=\alpha_{2,-}. \end{equation*} \notag $$

Lemmas 4.12 and 4.11 produce Theorem 3.3.1. Theorem 3.3.1 and Remark 6 yield the statement of Theorem 3.3.1'. Further, Theorem 2.1 and Remark 2.3 in [39], in combination with Theorems 3.3.1 and 3.3.1', produce Theorem 3.3.2, which improves Theorem 2.1 from [39]. Theorem 3.3.2, in its turn, yields Theorem 3.3.3.

Proof of Theorem 3.3.4. Suppose $\alpha_{1,-}=\alpha_{2,-}$. Then any Type I solution is also a Type II solution.

Now suppose $\alpha_{1,-}\ne\alpha_{2,-}$. By Theorem 2.2 from [39] there exists a stabilizing solution which is not of Type I.

1. Suppose there exists a Type II solution. In this case the statement of the theorem can be proved immediately.

2. Suppose there exists a Type III solution. Then by Theorem 3.3.1 the Type III solution $y_{\rm III}$ is unique, whereas there is no Type IV solution. Similarly, we obtain from Theorem 3.3.1' that a Type I solution $y_{\rm I}$ is unique, whereas there is no Type IV solution. So any solution $y(x)$ whose initial value $y(0)$ satisfies $y_{\rm I}(0)<y(0)<y_{\rm III}(0)$ is a stabilizing Type II solution.

3. Suppose there exists a Type IV solution $y_{\rm IV}$. Let $y_{\rm I}$ be a Type I solution. Then

$$ \begin{equation*} y_{\rm I}(x)\to\alpha_{1,-}\quad\text{and}\quad y_{\rm IV}(x)\to\alpha_{1,-} \end{equation*} \notag $$
as $x\to-\infty$.

On the other hand, by Lemma 4.12' there exists a solution $y_1$ such that

$$ \begin{equation*} y_1(x)\to\alpha_{2,-} \end{equation*} \notag $$
as $x\to-\infty$. This is in contradiction with Theorem 3.3.1'. Therefore, our assumption is wrong and there is no Type IV solution.

The theorem is proved.

Similarly, by using Lemma 4.12 we can prove Theorem 3.3.4'.

Proof of Theorem 3.3.5. The case when $\alpha_{1,+}\ne \alpha_{2,+}$ and $\alpha_{1,-}\ne \alpha_{2,-}$ was considered in part 2 of the proof of Theorem 3.3.4.

If $\alpha_{1,-}=\alpha_{2,-}$, then each Type I solution is also a Type II solution (and vice versa).

If $\alpha_{1,+}=\alpha_{2,+}$, then each Type III solution is also a Type II solution (and vice versa). The theorem is proved.

Theorem 3.3.3 and part 2 of the proof of Theorem 3.3.4 immediately produce Theorem 3.3.6.

Proof of Theorem 3.3.7. If equation (1.10) has a Type I solution, then using Theorem 3.3.4 we obtain statement (a). Suppose the equation has no Type I solution. Then it has no Type II solution by Theorem 3.3.6. Suppose that the equation has a Type III solution. Then using Theorem 3.3.4', we obtain the existence of a Type II solution. This contradiction shows that there is no Type III solution. So, if there exists a stabilizing solution, then it is a Type IV solution. In this case we obtain from Theorem 3.3.1 and Lemma 4.12 that a stabilizing solution is unique and statement (b) holds. If there is no stabilizing solution, then we obtain statement (c). The theorem is proved.

Bibliography

1. I. G. Fikhtengol'ts, “Elements of the theory of gravitational waves”, Theoret. and Math. Phys., 79:1 (1989), 445–448  mathnet  crossref  mathscinet  adsnasa
2. A. V. Lysukhina, Equivalence of some quantum mechanical models, Bachelor Thesis, Faculty of Physics, Moscow State University, Moscow, 2017 (Russian)
3. E. A. Lukashev, V. V. Palin, E. V. Radkevich, and N. N. Yakovlev, “Nonclassical regularization of the multicomponent Euler system”, J. Math. Sci. (N. Y.), 196:3 (2014), 322–345  crossref  mathscinet  zmath
4. J. Da Fonseca, M. Grasselli, and C. Tebaldi, “A multifactor volatility Heston model”, Quant. Finance, 8:6 (2008), 591–604  crossref  mathscinet  zmath
5. D. A. Smorodinov, “Parametrization of the regulator of multicontour stabilization of the isolation diameter and the capacitance of one meter of twisted pair cabling”, Zh. Nauchn. Publikatsii Aspirantov i Doktorantov, 4 (2013) http://jurnal.org/articles/2013/inf3.html (Russian)
6. I. I. Artobolevskii and V. S. Loshchinin, Dynamics of machine assemblies in marginal motion regimes, Nauka, Moscow, 1977, 305 pp. (Russian)
7. N. A. Kil'chevskii, A course of theoretical mechanics, v. 1, Kinematics, statics, point mass dynamics, Nauka, Moscow, 1972, 75 pp. (Russian)  zmath
8. M. I. Zelikin, Control theory and optimization, v. I, Encyclopaedia Math. Sci., 86, Homogeneous spaces and the Riccati equation in the calculus of variations, Springer-Verlag, Berlin, 2000, xii+284 pp.  crossref  mathscinet  zmath  zmath
9. N. N. Luzin, “On the method of approximate integration of academician S. A. Chaplygin”, Uspekhi Mat. Nauk, 6:6(46) (1951), 3–27 (Russian)  mathnet  mathscinet  zmath
10. S. A. Chaplygin, A new method of approximate integration of differential equations, Gostekhizdat, Moscow–Leningrad, 1950, 102 pp. (Russian)
11. A. Glutsuk, “On germs of constriction curves in model of overdamped Josephson junction, dynamical isomonodromic foliation and Painlevé 3 equation”, Mosc. Math. J., 23:4 (2023), 479–513  crossref
12. Z. Došlá, P. Hasil, S. Matucci, and M. Veselý, “Euler type linear and half-linear differential equations and their non-oscillation in the critical oscillation case”, J. Inequal. Appl., 2019, 189, 30 pp.  crossref  mathscinet  zmath
13. J. Bernoulli, “Modus generalis construendi omnes æquationes differentiales primi gradus”, Acta Erud., 1694, 435–437
14. G. N. Watson, A treatise on the theory of Bessel functions, 2nd ed., Cambridge Univ. Press, Cambridge, England; The Macmillan Co., New York, 1944, vi+804 pp.  mathscinet  zmath
15. J. F. Riccati, “Animadversiones in æquationes differentiales secundi gradus”, Acta Erud. Suppl., 8 (1724), 66–73
16. D. Bernoulli, “Notata in J. Riccati ‘Animadversiones in æquationes differentiales secundi gradus’ ”, Acta Erud. Suppl., 8 (1724), 73–75
17. V. V. Stepanov, A course of differential equations, 8th ed., GIFML, Moscow, 1959, 468 pp. (Russian)  zmath; German transl of 6th ed. W. W. Stepanow, Lehrbuch der Differentialgleichungen, Hochschulbücher für Math., 20, VEB Deutscher Verlag der Wissenschaften, Berlin, 1956, ix+470 pp.  mathscinet  zmath
18. J. Liouville, “Remarques nouvelles sur l'équation de Riccati”, J. Math. Pures Appl., 1841, 1–13
19. L. Euler, “De integratione aequationum differentialium”, Nov. Comm. Acad. Sci. Petrop., VIII, (1760–1761) (1763), 3–63
20. L. Euler, “De resolutione aequationis $dy+ayy\,dx=bx^m\,dx$”, Nov. Comm. Acad. Sci. Petrop., IX, (1762–1763) (1764), 154–169
21. A. Cayley, “On Riccati's equation”, Philos. Mag. (4), XXXVI:244 (1868), 348–351  crossref  zmath; The collected mathematical papers, v. VII, Cambridge Univ. Press, Cambridge, 1894, 9–12  zmath
22. R. Murphy, “On the general properties of definite integrals”, Trans. Camb. Phil. Soc., III (1830), 429–443
23. E. Weyr, Zur Integration der Differentialgleichungen erster Ordnung, Abh. Königl. böhm. Ges. Wiss. (6), 6, Prag, Dr. Ed. Gregr, 1875, 44 pp.  zmath
24. É. Picard, “Application de la théorie des complexes linéaires à l'étude des surfaces et des courbes gauches”, Ann. Sci. École Norm. Sup. (2), 6 (1877), 329–366  mathscinet  zmath
25. R. Redheffer, “On solutions of Riccati's equation as functions of the initial values”, J. Rational Mech. Anal., 5:5 (1956), 835–848  crossref  mathscinet  zmath
26. G. McCarty, Jr., “Solutions to Riccati's problem as functions of initial values”, J. Math. Mech., 9:6 (1960), 919–925  crossref  mathscinet  zmath
27. V. A. Pliss, Nonlocal problems of the theory of oscillations, Academic Press, New York–London, 1966, xii+306 pp.  mathscinet  zmath
28. I. V. Astashova, “Remark on continuous dependence of solutions to the Riccati equation on its righthand side”, International workshop QUALITDE – 2021, Abstracts (Tbilisi 2021), A. Razmadze Math. Inst. of I. Javakhishvili Tbilisi State Univ., Tbilisi, 14–17 https://rmi.tsu.ge/eng/QUALITDE-2021/Abstracts_workshop_2021.pdf
29. A. F. Filippov, Introduction to the theory of differential equations, URSS, Moscow, 2004, 239 pp. (Russian)
30. W. T. Reid, Riccati differential equations, Math. Sci. Eng., 86, Academic Press, New York–London, 1972, x+216 pp.  mathscinet  zmath
31. M. Bertolino, “Non-stabilité des courbes de points stationnaires des solutions des équations différentielles”, (Serbo-Croatian), Mat. Vesnik, 2(15)(30):3 (1978), 243–253  mathscinet  zmath
32. M. Bertolino, “Équations différentielles aux coefficients infinis”, Mat. Vesnik, 4(17)(32):2 (1980), 150–155  mathscinet  zmath
33. M. Bertolino, “Asymptotes verticales des solutions des équations différentielles”, Mat. Vesnik, 5(18)(33):2 (1981), 139–144  mathscinet  zmath
34. A. I. Egorov, Riccati's equation, Fizmatlit, Moscow, 2001, 328 pp. (Russian)  zmath
35. E. Kamke, Differentialgleichungen. Lösungsmethoden und Lösungen, v. 1, Mathematik und ihre Anwendungen in Physik und Technik. Reihe A, 18, Gewöhnliche Differentialgleichungen, 6. Aufl., Akademische Verlagsgesellschaft, Geest & Portig K.-G., Leipzig, 1959, xxvi+666 pp.  mathscinet  zmath
36. N. M. Kovalevskaya, On some cases of integrability of a general Riccati equation, 2006, 4 pp., arXiv: math/0604243v1
37. N. M. Kovalevskaya, “Integrability of the general Riccati equation”, Zh. Nauchn. Publikatsii Aspirantov i Doktorantov, 5 (2011) http://jurnal.org/articles/2011/mat3.html (Russian)
38. Ph. Hartman, Ordinary differential equations, John Wiley & Sons, Inc., New York–London–Sydney, 1964, xiv+612 pp.  mathscinet  zmath
39. V. V. Palin and E. V. Radkevich, “Behavior of stabilizing solutions of the Riccati equation”, J. Math. Sci. (N.Y.), 234:4 (2018), 455–469  mathnet  crossref  mathscinet  zmath
40. M. Bertolino, “Sur une synthèse pratique de deux méthodes qualitatives d'étude des équations différentielles”, Mat. Vesnik, 13(28):1 (1976), 9–19  mathscinet  zmath
41. I. Merovci, “Sur quelques propriétés des solutions de l'équation $y'=(y-\alpha_1)(y-\alpha_2)$”, (Serbo-Croatian), Mat. Vesnik, 2(15)(30):3 (1978), 235–242  mathscinet
42. N. P. Erugin, Reader for a general course in differential equations, 3rd revised and augmented ed., Nauka i Tekhnika, Minsk, 1979, 743 pp. (Russian)  mathscinet  zmath
43. M. Bertolino, “Tuyaux étagés de l'approximation des équations différentielles”, Publ. Inst. Math. (Beograd) (N. S.), 12(26) (1971), 5–10  mathscinet  zmath
44. I. V. Astashova and V. A. Nikishov, “On extensibility and asymptotics of solutions to the Riccati equation with real roots of its right part”, International workshop QUALITDE – 2022, Reports of QUALITDE (Tbilisi 2022), v. 1, A. Razmadze Math. Inst. of I. Javakhishvili Tbilisi State Univ., Tbilisi, 27–30 https://rmi.tsu.ge/eng/QUALITDE-2022/Reports_workshop_2022.pdf
45. I. V. Astashova and V. A. Nikishov, “On qualitative properties of solutions of Riccati's equation”, Current methods in the theory of boundary value problems., Voronezh Spring Mathematical School (3–9 May 2023), Publishing House of Voronezh State University, Voronezh, 2023, 50–53 https://vvmsh.math-vsu.ru/files/vvmsh2023.pdf (Russian)
46. I. V. Astashova and V. A. Nikishov, “Extensibility and asymptotics of solutions of Riccati's equation with real roots of the right-hand side”, Differ. Uravn., 59:6 (2023), 856–858 (Russian)
47. P. Hartman, “On an ordinary differential equation involving a convex function”, Trans. Amer. Math. Soc., 146 (1969), 179–202  crossref  mathscinet  zmath

Citation: I. V. Astashova, V. A. Nikishov, “On extensibility and qualitative properties of solutions to Riccati's equation”, Russian Math. Surveys, 79:2 (2024), 189–227