Abstract:
We consider Riccati's equation on the real axis with continuous coefficients and a non-negative discriminant of the right-hand side. We study the extensibility of its solutions to unbounded intervals. We obtain asymptotic formulae for its solutions in their dependence on the initial values and on the properties of the functions representing the roots of the right-hand side of the equation. We obtain results on the asymptotic behaviour of solutions defined near $\pm\infty$. We study the structure of the set of bounded solutions in the case when the roots of the right-hand side of the equation are $C^1$-functions which are different on the whole of their domain and tend monotonically to some limits as $x\to\pm\infty$. We extend, improve, or refine some well-known results.
Bibliography: 47 titles.
where $R\not\equiv 0$. Equation (1.1) has applications in many fields, in particular, physics (the theory of gravitational waves [1], quantum mechanics [2], continuum mechanics [3]), financial mathematics [4], instrumentation [5], and mechanical engineering [6]. It also serves as a tool for solving problems in a wide variety of fields of mathematics, for example, differential geometry [7], [8]. The book [8] describes in detail geometric approaches to the study of the integrability of Riccati’s equation (including the matrix version) and applications of Riccati’s equation to problems in the calculus of variations. In [9] and [10] an equation of the form (1.1) was used to establish the limits of applicability of Chaplygin’s theorem on differential inequalities to a second-order linear equation. In [11] a four-parameter family of differential equations on a torus was considered. The function on the parameter space describing the curves preserving the rotation number of this equation satisfies the third Painlevé equation. The latter was shown to have a family of solutions that are also solutions of a certain Riccati equation obtained from Bessel’s equation by a change of the unknown function. A technique using Riccati’s equation was used in [12] to study the oscillation of solutions to some quasi-linear equations. Other applications of Riccati’s equation can be found in the references of the works cited.
We recall some well-known facts from the history of Riccati’s equation.
For the first time an equation of this type was mentioned by J. Bernoulli [13] in 1694. Namely, he considered a special case of equation (1.1), the equation
Later, in his letters to Leibniz (see [14]) Bernoulli presented a solution to (1.2) for $a=1$ in the form of a quotient of the sums of two series. However, he failed to express it by quadratures. In 1724 Riccati [15] considered the equation
In his note [16] on the article [15] D. Bernoulli presented (see also [17], Chap. I, § 8) an infinite sequence of values of $\alpha$ such that equation (1.3) is integrable by quadratures:
Later, Liouville [18] (1841) showed that there are no other values of $\alpha$ with this property. In the 18th and 19th centuries various representations of solutions to (1.3) by means of series and integrals were obtained (see, for example, [19]–[22]).
Riccati’s equation of the general form (1.1) was investigated by Euler. He showed [19] that if a particular solution $y_1$ is known, then the general solution can be obtained by two quadratures: first, by the substitution $y=y_1+\theta$ equation (1.1) is reduced to
and then equation (1.4) is reduced to a linear one by the substitution $\theta=1/v$.
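Euler's two-step reduction can be written out explicitly; here we assume, as is standard, that equation (1.1) is written in the form $y'=P(x)+Q(x)y+R(x)y^2$ (the text above only records $R\not\equiv 0$, so this normalization is an assumption):

```latex
% Step 1: substitute y = y_1 + \theta into y' = P + Qy + Ry^2 and use
% y_1' = P + Qy_1 + Ry_1^2; the terms without \theta cancel, leaving (1.4).
% Step 2: \theta = 1/v turns the Bernoulli equation into a linear one.
\begin{aligned}
y &= y_1 + \theta,\quad y_1' = P + Qy_1 + Ry_1^2
  &&\Longrightarrow\quad \theta' = (Q + 2Ry_1)\,\theta + R\,\theta^2,\\
\theta &= \frac{1}{v}
  &&\Longrightarrow\quad v' = -(Q + 2Ry_1)\,v - R .
\end{aligned}
```

The linear equation for $v$ is then solved by the two quadratures mentioned above: one to form the integrating factor $\exp\bigl(\int(Q+2Ry_1)\,dx\bigr)$ and one to integrate the resulting exact equation.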
If we know two particular solutions to (1.1), then, according to Euler, the general solution can be obtained with the help of a single quadrature.
Subsequently, Weyr [23] and Picard [24] showed that the general solution to (1.1) is a linear-fractional function of an arbitrary constant, and deduced from this that for any four different particular solutions $y_1$, $y_2$, $y_3$, and $y_4$ to (1.1), the anharmonic ratio (cross-ratio)
is independent of $x$. Thus, knowing three particular solutions to (1.1), we can obtain the general solution without quadratures.
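The constancy of the anharmonic ratio is easy to test numerically. The sketch below (an illustration, not from the source) uses the simple Riccati equation $y'=1+y^2$, whose solutions are $y(x)=\tan(x+c)$, together with a hand-rolled classical Runge–Kutta integrator:

```python
import math

def rk4(f, y0, x0, x1, n):
    """Integrate y' = f(x, y) from x0 to x1 with n classical RK4 steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2)
        k4 = f(x + h, y + h*k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

def cross_ratio(y1, y2, y3, y4):
    # anharmonic ratio (y1-y3)(y2-y4) / ((y2-y3)(y1-y4))
    return (y1 - y3)*(y2 - y4) / ((y2 - y3)*(y1 - y4))

f = lambda x, y: 1 + y*y           # Riccati equation y' = 1 + y^2
cs = [0.1, 0.2, 0.3, 0.4]          # four solutions y_i(x) = tan(x + c_i)
start = [math.tan(c) for c in cs]  # values at x = 0
end = [rk4(f, y, 0.0, 0.5, 500) for y in start]  # values at x = 0.5

print(cross_ratio(*start), cross_ratio(*end))  # the two values coincide
```

Since $\tan(x+c)$ is a linear-fractional (Möbius) function of $\tan c$, the invariance here is also visible directly, in line with the Weyr–Picard observation.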
Further results of interest contain relations satisfied by solutions to equation (1.1) as functions of their initial values [25], [26]. It is known [27] that Riccati’s equation with continuous periodic coefficients can have at most two periodic solutions.
Theorems proved in [28] refine, for Riccati’s equation of a special form, the classical theorem (see, for example, [29], Chap. 7, Theorem 6) on the continuous dependence of solutions on the right-hand side and initial conditions.
Some generalizations of the scalar differential equation (1.1) are also worth mentioning. Such generalizations include, for example, the matrix differential Riccati equation
for $R,A,B,P\colon X\subset\mathbb{R}\to M_n(\mathbb{R})$ and the unknown $n\times n$-matrix $Y(\,\cdot\,)$, where $M_n(\mathbb{R})$ is the space of real $n \times n$ matrices.
A broad overview of the available results related to the matrix differential Riccati equations was given in [30].
In [31] it was proved, in particular, that if $n\ne 1$, $P_0\equiv 1$, and the functions $P_i(\,\cdot\,)$, $i=1,\dots,n$, are continuous and bounded in a neighbourhood of $+\infty$, then equation (1.6) cannot have solutions $y(\,\cdot\,)$ with the property that $\lim_{x\to+\infty} y(x)=+\infty$.
As we will see below, for equation (1.1) with non-negative discriminant of the right-hand side, this result follows from Theorem 3.1.3. Many well-known results related to various analogues and generalizations of equation (1.1) were presented in [34].
Since even the special Riccati equation is not always integrable by quadratures (see also [35]–[37] on integrability cases of (1.1)), it is useful to study the qualitative properties of solutions to (1.1).
The qualitative and asymptotic properties of solutions to Riccati’s equation
The case when the function $\alpha_2(x)$ is unbounded for $x\geqslant x_0$ was considered in [40] and [41].
Theorem A ([40], p. 18). Suppose that the function $\alpha_1(x)$ is unbounded for $x\geqslant x_0$, increases monotonically, and $\alpha_1 (x)<\alpha_2(x)$. If there exists a function $\beta\colon[x_0,+\infty)\to\mathbb{R}$ satisfying, for $x\geqslant x_0$, the inequalities
then for any $\varepsilon>0$ such that $\alpha_2(x)-\varepsilon>\alpha_1(x)$, $x\geqslant x_0$, equation (1.10) has at least one solution $y_\varepsilon$ defined on $[x_0,+\infty)$ and such that
Theorem B ([41], p. 239). If the functions $\alpha_1$ and $\alpha_2$ are positive and non-decreasing on $[x_0,+\infty)$, the function $\alpha_1$ is bounded on $[x_0,+\infty)$, and $\lim_{x\to+\infty}\alpha_2(x)=+\infty$, then each solution to equation (1.10) defined at a point $x_0$ is extensible onto $[x_0,+\infty)$.
Moreover, if $y(\,\cdot\,)$ is a solution to (1.10) which is positive on the interval $[x_0,+\infty)$, then either
where $n\geqslant 3$. In [43] he studied the existence of solutions to (1.11) satisfying certain conditions in the case when $f_i\in C[x_0,+\infty)$, $f_1(x)<\cdots<f_n(x)$ for $x\geqslant x_0$, and $f_n(x)\to +\infty$ as $x\to +\infty$.
Theorems on the qualitative properties of solutions to (1.10) in the case when the functions $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$ are bounded on the whole number line were presented in [42] and [34]. The behaviour of solutions to equation (1.10) with $C^1$-functions $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$, $\alpha_1(x)<\alpha_2(x)$, which are bounded on $\mathbb{R}$ and tend monotonically to some limits $\alpha_1^{\pm}\in\mathbb{R}$ and $\alpha_2^{\pm}\in\mathbb{R}$ as $x\to\pm\infty$, respectively, were studied in [39]. It was proved there that under the above conditions all bounded solutions to (1.10) have limits as $x\to\pm\infty$ and, in accordance with their possible values, are divided into the following four types:
where $y_{\pm}:=\lim_{x\to\pm\infty}y(x)\in \mathbb{R}$. It was also found that
$\bullet$ if (1.10) has a Type II solution, then it also has a Type I solution;
$\bullet$ if (1.10) has a Type I solution, then it also has a solution of another type;
$\bullet$ if (1.10) has both Type I and Type III solutions, then it also has a Type II or a Type IV solution.
This article continues the study of the qualitative and asymptotic properties of solutions to equation (1.10). In many results here it is additionally assumed that
In the first part of the paper (see §§ 3.1 and 4.1) we study the dependence of the qualitative and asymptotic properties of solutions to (1.10) on the initial value $y(x_0)$. Some results from [42] and [34] are extended or refined, the result of the theorem on p. 17 in [40] is extended, and the result of the corollary on p. 240 in [41] is strengthened.
In the second part of the paper (see §§ 3.2 and 4.2) we consider the set of solutions to (1.10) defined near $+\infty$ and obtain results showing its structure. In particular, we prove that if equation (1.10) with bounded $\alpha_1$ and $\alpha_2$ such that $(\alpha_1+\alpha_2)/2\in C^1 [x_0,+\infty)$ has two solutions which are defined on $[x_0,+\infty)$ and have two different finite limits as $x\to +\infty$, then every other solution defined on $[x_0,+\infty)$ has a finite limit as $x\to +\infty$ equal to the limit of the smaller of the two solutions.
In the third part of the paper (see §§ 3.3 and 4.3) the results obtained in the first two parts are applied to study the structure of the set of bounded solutions to the equation in the case when $\alpha_1(\,\cdot\,)$ and $\alpha_2(\,\cdot\,)$ are $C^1$-functions different on the whole of their domain and tending monotonically to finite limits as $x\to\pm\infty$.
Note that if $Q(x)=-(\alpha_1(x)+\alpha_2(x))$ is a $C^1$-function for $x\in\Delta\subset\mathbb{R}$, then the functions $U_0(x)$ and $Y_0(x)$ are continuous for $x\in\Delta$.
When we say in this paper that a solution $y(\,\cdot\,)$ is defined on an interval $\Delta\subset\mathbb{R}$, we mean that $y(\,\cdot\,)$ is defined at each point in $\Delta$ (but $\Delta$ need not be the maximal possible domain of definition of the solution $y(\,\cdot\,)$).
As usual, we say that a function $f(\,\cdot\,)$ is monotonically increasing (decreasing) on an interval $\Delta\subset\mathbb{R}$ if for any $x_1, x_2\in\Delta$ such that $x_1< x_2$ the inequality $f(x_1)\leqslant\!(\geqslant)\,\, f(x_2)$ holds. A function $f(\,\cdot\,)$ is said to be strictly monotonically increasing (decreasing) on the interval $\Delta\subset\mathbb{R}$ if for any $x_1, x_2\in\Delta$ such that $x_1<x_2$ the inequality $f(x_1)<\!(>)\,\, f(x_2)$ holds.
Lemma 2.1 (corollary of Lemma 4.1 in [47]). If $x_0<\omega\leqslant+\infty$, $Q\in C^1 [x_0,\omega)$, and there exists a solution to (1.10) defined on $(\delta,\omega)$ for some $\delta<\omega$, then there exist $S_*\in [x_0,\omega)$ and a solution $y_*(\,\cdot\,)$ to this equation which is defined on $(S_*,\omega)$ and such that for any solution $y(\,\cdot\,)$ to (1.10) defined on $(S,\omega)$, where $S\geqslant x_0$, the following inequalities hold:
3.1. Extensibility and asymptotic behaviour of solutions as dependent on the mutual arrangement of their initial values and the roots of the right-hand side of the equation
Now we formulate a theorem extending the basic theorem of differential inequalities [10] for first-order equations and generalizing Theorem 7.3 in [42] (or, which is the same, the first statement of Theorem 5.7 in [34]).
where $f$ is continuous on its domain, which contains $[x_0,+\infty)$. If there exists a differentiable function $\beta\colon[x_0,+\infty)\to\mathbb{R}$ such that for any $x\geqslant x_0$ the inequality
holds, then any solution $y(\,\cdot\,)$ to equation (3.1) such that $y(x_0)\leqslant\beta(x_0)$ satisfies the condition $y(x)<\beta(x)$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$ is the right-hand end-point of the maximal domain of the solution $y(\,\cdot\,)$.
Corollary 1. Under the assumptions of Theorem 3.1.1, if $\beta(x)$ is bounded above for $x\geqslant x_0$, then any solution $y(x)$ such that $y(x_0)\leqslant\beta(x_0)$ is bounded above for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.
and set $\beta=(\alpha_1+\alpha_2)/2$. Then we obtain the following result.
Corollary 2. If the function $Q(\,\cdot\,)$ is differentiable on $[x_0,+\infty)$, $Q'(x)<Q^2(x)/2-2P(x)$ for $x\geqslant x_0$, and a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)\leqslant-Q(x_0)/2$, then $y(x)<-Q(x)/2$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.
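The computation behind Corollary 2 is short; we record it here under the assumption (consistent with the definition of $Q$ in § 2) that equation (1.10) has the form $y'=(y-\alpha_1(x))(y-\alpha_2(x))=y^2+Q(x)y+P(x)$ with $Q=-(\alpha_1+\alpha_2)$ and $P=\alpha_1\alpha_2$. With $\beta=-Q/2$ the hypothesis $\beta'(x)>f(x,\beta(x))$ of Theorem 3.1.1 reads:

```latex
% Theorem 3.1.1 applied to f(x,y) = y^2 + Q(x)y + P(x) with \beta = -Q/2:
-\frac{Q'}{2} \;>\; P + Q\beta + \beta^2
           \;=\; P - \frac{Q^2}{2} + \frac{Q^2}{4}
           \;=\; P - \frac{Q^2}{4}
\quad\Longleftrightarrow\quad
Q' \;<\; \frac{Q^2}{2} - 2P .
```

This is exactly the second condition of Corollary 2, so the corollary follows from Theorem 3.1.1 with $\beta=(\alpha_1+\alpha_2)/2$.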
Remark 1. The second condition in Corollary 2 can be written as
for $x\geqslant x_0$. We have $U_0(x)=-2Y_0(x)$, so that condition (3.2) is just condition B from [39], § 3.3, at the point $x$.
Corollary 3. Suppose that $\alpha_1(x)=\alpha_2(x)$ for $x\geqslant x_0$, the function $\alpha_1$ is differentiable on $[x_0,+\infty)$, and $\alpha'_1(x)>0$ for $x\geqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)\leqslant\alpha_1(x_0)$, then $y(x)<\alpha_1(x)$ for $x\in(x_0,b)$, where $b=\sup\operatorname{dom}y$.
Theorem 3.1.2. Suppose $x_0\in\mathbb{R}$ and condition (1.12) holds. Then for any solution $y(\,\cdot\,)$ to equation (1.10) defined at $x_0$ the inequality $y(x)\geqslant\min(y(x_0),m)$ holds for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$.
Note that it follows from this theorem and Corollary 3.2 in [38] that in Corollaries 2 and 3 above, the right-hand endpoint $b$ of the maximal domain of the solution equals $+\infty$.
Theorem 3.1.3. Let $M_1, M_2\in\mathbb{R}$ satisfy $\alpha_1(x)\leqslant M_1$ and $\alpha_2(x)\leqslant M_2$ for $x\geqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies the conditions $y(x_0)>M_1$ and $y(x_0)> M_2$, then there exists $x^*\in\mathbb{R}$, $x^*>x_0$, such that $y(\,\cdot\,)$ is strictly increasing on $(x_0,x^*)$ and
Theorem 3.1.3 generalizes the statement of Theorem 7.1 in [42] on the behaviour of solutions to equation (1.10) to the right of the point $x_0$ (or, which is the same, the statement of Theorem 5.5 in [34] on the behaviour of solutions to the right of the point $t_0$) to the case when $\alpha_1\not\equiv\alpha_2$. The condition $\alpha'(t)>0$, $t\leqslant t_0$, must be added to the assumptions of Theorem 5.5 [34] on the behaviour of solutions to the left of the point $t_0$. Namely, the following theorem holds.
Theorem 3.1.4. If condition (1.12) holds, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$, the function $\alpha$ is differentiable on $(-\infty, x_0)$, and $\alpha'(x)>0$ for $x\leqslant x_0$, then all solutions $y(\,\cdot\,)$ to equation (1.10) such that $y(x_0)>M$ satisfy
So, according to Theorem 3.1.3, there exists $x^*>\delta$ such that $\lim_{x\to x^*} y(x)=+\infty$. Similarly, using Theorem 3.1.3' (see below) we prove the existence of $x_*<-\delta$ such that $\lim_{x\to x_*} y(x)=-\infty$. So, $y(x_0)>M$ for some $x_0>0$ and the solution $y(x)$ is unbounded for $x\leqslant x_0$.
Theorem 3.1.3'. Suppose that $m_1,m_2\in\mathbb{R}$ satisfy $\alpha_1(x)\geqslant m_1$ and $\alpha_2(x)\geqslant m_2$ for $x\leqslant x_0$. If a solution $y(\,\cdot\,)$ to equation (1.10) satisfies the conditions $y(x_0)<m_1$ and $y(x_0)<m_2$, then there exists $x_*\in\mathbb{R}$ such that $x_*<x_0$, $y(\,\cdot\,)$ is strictly decreasing on $(x_*,x_0)$, and
Theorem 3.1.3' generalizes the statement of Theorem 7.2 in [42] (or, which is the same, Theorem 5.6 in [34]) on the behaviour of solutions to equation (1.10) to the left of the point $x_0$ to the case when $\alpha_1$ and $\alpha_2$ are different. To the assumptions of Theorem 7.2 [42] on the behaviour of solutions to the right of the point $x_0$, the condition that the finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ exists should be added. Namely, the following theorem holds.
Theorem 3.1.5. If condition (1.12) holds, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$, the finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ exists, and a solution $y(\,\cdot\,)$ to equation (1.10) satisfies $y(x_0)<m$, then either the graph of this solution intersects the graph of $\alpha(\,\cdot\,)$, or $\alpha(x)-y(x)\to +0$ as $x\to+\infty$.
The following example shows that the condition of the existence of a finite limit $\lim_{x\to +\infty}\alpha(x)=:\alpha_+\in\mathbb{R}$ in Theorem 3.1.5 is essential.
Note that $\alpha(x)>-\pi/2=m$, $x\in\mathbb{R}$, the function $\alpha(x)$ has no limit as $x\to+\infty$, and $y_1(x)=\displaystyle\int_{0}^{x}p(t)\,dt+\arctan x$ is a solution to (1.10). We also have
So the graph of the function $y_1(x)$ does not intersect the graph of $\alpha_1(x)$ and there is no limit of $y_1(x)-\alpha(x)$ as $x\to+\infty$. Let $y(\,\cdot\,)$ be a solution to (1.10) with $y(0)<m<y_1(0)=0$. We have $y(x)<y_1(x)<\alpha(x)$ for all $x\geqslant 0$, hence, according to Weierstrass’s theorem, a finite limit $\lim_{x\to+\infty}y(x)$ exists. Therefore, there is no finite limit of $y(x)-\alpha(x)$ as $x\to +\infty$.
The following example shows that the statement of Theorem 5.8 in [34] on the boundedness of solutions to the left of the point $t_0$ need not hold for all continuous and bounded functions $\alpha_1$ and $\alpha_2$.
The functions $\alpha_{1,2}(\,\cdot\,)$ are bounded and therefore satisfy condition (1.12) for some constants $m,M\in\mathbb{R}$. By [31], pp. 244–245, no solution to equation (1.10) is defined on $[1,+\infty)$. Similarly, using the substitution $u(x)=-y(-x)$ we can prove that no solution is defined on $(-\infty,-1]$. Thus, for any solution $y(\,\cdot\,)$ there exist $x^*$, $x_*\in\mathbb{R}$ such that $x^*>x_*$, $\lim_{x\to x^*} y(x)=+\infty$, and $\lim_{x\to x_*} y(x)=-\infty$. Hence $y(x_0)>M$ for some $x_0\in(x_*,x^*)$, and the solution $y(x)$ is unbounded for $x\leqslant x_0$.
Theorem 3.1.6. 1. For functions $\alpha_1$ and $\alpha_2$ and a solution $y(\,\cdot\,)$ to equation (1.10), suppose that condition (1.12) holds, the function $\alpha_1$ increases monotonically on $[x_0,+\infty)$, $\alpha_1(x)<\alpha_2(x)$ for $x\geqslant x_0$, and $y_0=y(x_0)<\alpha_1(x_0)$. Then
2. Suppose that $\alpha_2$ decreases monotonically on $[x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) such that $y_0=y(x_0)>\alpha_2(x_0)$. Then there exists $x^*\in\mathbb{R}$, $x^*>x_0$, such that
3. For the functions $\alpha_1$ and $\alpha_2$ and a solution $y(\,\cdot\,)$ to equation (1.10) suppose that condition (1.12) holds, the function $\alpha_1$ decreases monotonically on $[x_0,+\infty)$, the function $\alpha_2$ increases monotonically on $[x_0,+\infty)$, and $\alpha_1(x_0)<y_0=y(x_0)<\alpha_2(x_0)$. Then
$\varepsilon\in(0,1)$, and $\delta\in(0,+\infty)$.
For any $\varepsilon\in(0,1)$ and $\delta\in(0,+\infty)$ there exist $\xi_\varepsilon^\delta\in (\delta/2,\delta)$ and a solution $y_\varepsilon^\delta$ to the last equation defined on $[0,+\infty)$ such that
I. Suppose that $U_0(x)\geqslant 0$ on $[x_0,+\infty)$, $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx=\infty$, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0$. Then $y(\,\cdot\,)$ cannot be extended onto $[x_0,+\infty)$, and if condition (1.12) holds, then there exists $x^*>x_0$ such that $\lim_{x\to x^*-0}y(x)=+\infty$.
II. Suppose that $U_0(x)\geqslant 0$ on $[x_0,+\infty)$, $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$, condition (1.12) holds, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then
III. Suppose that $U_0(x)< 0$ on $[x_0,+\infty)$, condition (1.12) holds, and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0$. Then the following statements hold:
on $[0,+\infty)$. For this equation we have $U_0(x)=-1/2<0$ for $x\geqslant x_0$, and the integral $\displaystyle\int_{0}^{\infty}U_0(x)\,dx$ diverges. Further, if a solution $y(\,\cdot\,)$ satisfies the inequality $y(0)\leqslant 1$, then $y(\,\cdot\,)$ is extensible onto $[0,+\infty)$.
Remark 3. Suppose that condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$. Then for $x\geqslant x_0$ the condition $U_0(x)\,\mathop{\geqslant\!(<)}\,0$ is equivalent to $\alpha'(x)\,\mathop{\leqslant\!(>)}\,0$. If $U_0(x)\geqslant 0$ for all $x\geqslant x_0$, then the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx=\infty$ is equivalent to $\lim_{x\to+\infty}\alpha(x)=-\infty$, while the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$ is equivalent to the existence of a finite limit $\lim_{x\to+\infty}\alpha(x)$. So, as $\alpha$ is bounded, if $U_0(x)\geqslant 0$ for $x\geqslant x_0$, then the condition $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx<\infty$ holds and part I of Theorem 3.1.7 is impossible.
Theorem 3.1.8. If $Q\in C^1 [x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$, then the convergence in (3.3) holds if and only if
Corollary 4. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$. If condition (3.3) holds for $y(\,\cdot\,)$, then this condition also holds for all solutions defined in a neighbourhood of $+\infty$.
Corollary 5. If $Q\in C^1 [x_0,+\infty)$, $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined in a neighbourhood of $+\infty$, then the convergence in (3.3) holds if and only if
Theorem 3.1.9. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then the following statements are equivalent:
Then, according to Theorem 3.1.9, the integral $\displaystyle\int_{0}^{\infty}U_0(x)\,dx$ diverges.
The following theorem gives a criterion for relation (3.4) to hold for some solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$, provided that $Q\in C^1 [x_0,+\infty)$.
Theorem 3.1.10. Suppose that $Q\in C^1 [x_0,+\infty)$ and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then the following statements are equivalent:
Theorem 3.1.11. Suppose that $Q\in C^1 [x_0,+\infty)$ and equation (1.10) has solutions defined in a neighbourhood of $+\infty$. Then at most one solution $y(\,\cdot\,)$ defined on $[x_1,+\infty)$ satisfies the condition
for some $x_1\geqslant x_0$. In particular, at most one solution $y(\,\cdot\,)$ defined on $[x_1,+\infty)$, for some $x_1\geqslant x_0$, satisfies the condition
Now we give an example showing that the first phrase ‘at most one’ in Theorem 3.1.11 cannot be replaced by ‘exactly one’ or ‘no’. This example shows also that the condition of the divergence of the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ in part I of Theorem 3.1.7 and the condition $U_0(x)\geqslant 0$, $x\geqslant x_0$, in part II of the same theorem are essential.
We have $\alpha(x)=\dfrac{k}{2x}$, the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, the equation has solutions defined on $[x_0,+\infty)$, the function $y_*(x)=\dfrac{k-1}{x}$ is a principal solution on $(x_0,+\infty)$, and the following statements hold:
(1) if $1<k<2$, then $U_0(x)>0$ for $x\geqslant x_0$, $y_*(x)<k/(2x)=\alpha(x)$ for $x\geqslant x_0$, and, for any solution $y$ defined on $[x_1,+\infty)$ for $x_1\geqslant x_0$, the integral $\displaystyle\int_{x_1}^{\infty}(y(x)-\alpha(x))\,dx$ diverges;
(2) if $k=2$, then $U_0(x)=0$ for $x\geqslant x_0$, $y_*(x)=k/(2x)=\alpha(x)$ for $x\geqslant x_0$, and
(3) if $k>2$, then $U_0(x)<0$ for $x\geqslant x_0$, $y_*(x)>k/(2x)=\alpha(x)$ for $x\geqslant x_0$, while for any solution $y$ defined on $[x_1,+\infty)$ for $x_1\geqslant x_0$ the integral $\displaystyle\int_{x_1}^{\infty}(y(x)-\alpha(x))\,dx$ diverges.
Theorem 3.1.12. Suppose that (1.12) holds, $Q\in C^1 [x_0,+\infty)$, $\alpha_1(x)<\alpha_2(x)$ for $x\geqslant x_0$, the function $\alpha_1(x)$ increases monotonically on $[x_0,+\infty)$, and $U_0(x)\geqslant 0$ for $x\geqslant x_0$. Then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, and for any solution $y(\,\cdot\,)$ to equation (1.10) defined on $[x_0,+\infty)$ relation (3.4) holds true.
Theorem 3.1.13. Suppose that (1.12) holds, $Q \in C^1 [x_0,+\infty)$, $\alpha_1(x_0)<\alpha_2(x_0)$, the function $\alpha_1(x)$ decreases monotonically on $[x_0,+\infty)$, the function $\alpha_2(x)$ increases monotonically on $[x_0,+\infty)$, and $U_0(x)\geqslant 0$ for $x\geqslant x_0$. Then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, and any solution $y(\,\cdot\,)$ to equation (1.10) defined on $[x_0,+\infty)$ satisfies (3.4).
3.2. On the structure of the set of solutions defined in a neighbourhood of $+\infty$
Theorem 3.2.1. Suppose that $Q\in C^1 [x_0,+\infty)$. If solutions $y_3<y_2<y_1$ to equation (1.10) are defined at the point $x_0$, and the solution $y_1$ is defined on $[x_0,+\infty)$, then the solutions $y_3$ and $y_2$ are extensible onto the same interval; moreover, the function $\dfrac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)}\geqslant 1$ decreases and has a finite limit as $x\to+\infty$, which equals 1 if $y_1$ is a principal solution on $(x_0,+\infty)$.
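Theorem 3.2.1 can be illustrated on a solvable special case (an illustration, under the assumption that (1.10) reads $y'=(y-\alpha_1)(y-\alpha_2)$): for $\alpha_1\equiv -1$ and $\alpha_2\equiv 1$ the equation becomes $y'=y^2-1$, with the constant solution $y_1\equiv 1$ and the solutions $y(x)=-\tanh(x+c)$ below it:

```python
import math

# Explicit solutions of y' = y**2 - 1 (i.e. (1.10) with alpha_1 = -1,
# alpha_2 = 1, assuming the form y' = (y - alpha_1)(y - alpha_2)):
y1 = lambda x: 1.0                               # constant solution
y2 = lambda x: -math.tanh(x)                     # y2(0) = 0
y3 = lambda x: -math.tanh(x + math.atanh(0.5))   # y3(0) = -0.5, so y3 < y2 < y1

ratio = lambda x: (y1(x) - y3(x)) / (y1(x) - y2(x))

xs = [0.1*i for i in range(51)]   # grid on [0, 5]
vals = [ratio(x) for x in xs]
print(vals[0], vals[-1])          # decreases from 1.5 towards 1
```

In agreement with the theorem, the ratio is $\geqslant 1$, decreases, and tends to a finite limit (here the limit is $1$).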
Theorem 3.2.2. Suppose that $Q\in C^1 [x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to+\infty$, then $y_1$ is a principal solution on $(x_0,+\infty)$ and every solution $y$ defined on $[x_0,+\infty)$ other than $y_1$ has the same limit at infinity as $y_2$.
Now we give an example showing that the condition of different limits is essential in Theorem 3.2.2.
However, equation (3.5) has a non-constant periodic solution.
Theorem 3.2.3. Suppose that $Q\in C^1 [x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) defined on $[x_0,+\infty)$ have finite (maybe, equal) limits as $x\to+\infty$, then any solution $y(\,\cdot\,)$ satisfying $y(x_0)<y_1(x_0)$ is extensible onto $[x_0,+\infty)$ and has the same limit at infinity as $y_2$ has.
Remark 4. Note that in the case $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ the statements of Theorems 3.2.2 and 3.2.3 follow from Corollary 7.
3.3. Asymptotic behaviour at $\pm\infty$ of solutions to an equation the roots of whose right-hand side tend monotonically to finite limits
$$
\text{the limits } \lim_{x\to\pm\infty}\alpha_j(x)=:\alpha_j^{\pm}\in\mathbb{R},\qquad j=1,2,\ \text{exist and are finite},
\tag{3.6}
$$
and that
$$
\begin{gathered}
\text{there exists } A>0 \text{ such that for any } x\notin[-A,A]\\
\text{the relations } \alpha'_1(x)\ne 0\quad\text{and}\quad \alpha'_2(x)\ne 0 \text{ hold true.}
\end{gathered}
\tag{3.7}
$$
As shown in [39], in this case all bounded solutions are stabilizing, and all stabilizing solutions have a non-vanishing derivative near $\infty$ and can be divided into four types (see the discussion after Theorem B in § 1).
Theorem 3.3.1. If $\alpha_1^+\ne \alpha_2^+$, then equation (1.10) has a solution $y_{\rm I}$ such that
Theorem 3.3.2. Suppose equation (1.10) has a stabilizing solution of Type II. Then the following statements hold.
1. There exists a stabilizing solution $y_{\rm III}$ of Type III. If $\alpha_1^+\ne \alpha_2^+$, then such a solution is unique. If $\alpha_1^+\ne \alpha_2^+$ and $y(\,\cdot\,)$ is a solution to equation (1.10) defined at the point $x_0\in\mathbb{R}$, then
2. There exists a stabilizing solution $y_{\rm I}$ of Type I. If $\alpha_1^-\ne \alpha_2^-$, then such a solution is unique. If $\alpha_1^-\ne \alpha_2^-$ and $y(\,\cdot\,)$ is a solution to the equation defined at the point $x_0\in\mathbb{R}$, then
Theorem 3.3.3 (see Fig. 1). If $\alpha_1^+\ne \alpha_2^+$, $\alpha_1^-\ne \alpha_2^-$, and equation (1.10) has a stabilizing solution of Type II, then it has unique solutions $y_{\rm I}$ and $y_{\rm III}$ of Types I and III, respectively. For any solution $y(\,\cdot\,)$ we have:
(1) if $y_{\rm I}<y<y_{\rm III}$, then $y(\,\cdot\,)$ is a stabilizing solution of Type II;
(2) if $y>y_{\rm III}$, then there exists $x^*\in\mathbb{R}$ such that $y(\,\cdot\,)$ is extensible onto the interval $(-\infty,x^*)$,
Theorem 3.3.6. Under the conditions $\alpha_1^+\ne \alpha_2^+$ and $\alpha_1^-\ne \alpha_2^-$ equation (1.10) has a stabilizing solution of Type II if and only if it has stabilizing solutions of Types I and III.
Theorem 3.3.7. Under the conditions $\alpha_1^+\ne \alpha_2^+$ and $\alpha_1^-\ne \alpha_2^-$ equation (1.10) satisfies exactly one of the following assertions:
$$
\alpha_{1,2}=\frac{c_1+c_2}{2}+\varepsilon e^{-x^2}\mp \frac{\sqrt{(c_2-c_1)^2+8\varepsilon x e^{-x^2}}}{2}
$$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6)–(3.7). In this case equation (3.8) has a stabilizing solution of Type II.
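A numerical sanity check of these properties (an illustration with hypothetical values $c_1=0$, $c_2=1$, $\varepsilon=0.1$, not from the source): since $\max_{x}|x e^{-x^2}|=1/\sqrt{2e}\approx 0.429$, the radicand $(c_2-c_1)^2+8\varepsilon x e^{-x^2}$ stays positive for this $\varepsilon$, so $\alpha_1(x)<\alpha_2(x)$ everywhere, and both roots tend to $c_1$ and $c_2$ at $\pm\infty$:

```python
import math

c1, c2, eps = 0.0, 1.0, 0.1   # illustrative values (not from the source)

def alphas(x):
    rad = (c2 - c1)**2 + 8*eps*x*math.exp(-x*x)
    root = math.sqrt(rad)
    mid = (c1 + c2)/2 + eps*math.exp(-x*x)
    return mid - root/2, mid + root/2   # alpha_1(x), alpha_2(x)

xs = [i/100 for i in range(-1000, 1001)]          # grid on [-10, 10]
gap = min(alphas(x)[1] - alphas(x)[0] for x in xs)
print("min gap:", gap)                            # strictly positive
print("limits:", alphas(-10.0), alphas(10.0))     # approach (c1, c2)
```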
The following example presents classes of equations with properties (b) and (c).
Example 10. Suppose $k_0\in(0,16)$ and $n_0\in\mathbb{N}$. Put
(1) there exist $k_0\in(0,16)$ and $n_0\in\mathbb{N}$ such that if $0<\varepsilon<\dfrac{\sqrt{2e}}{8}\biggl(1-\dfrac{k_0}{16}\biggr)^2$, then the functions
$$
\alpha_{1,2}(x)=h_{k_0}^{n_0}(x)+\varepsilon e^{-x^2}\mp \frac{\sqrt{(g_{k_0}^{n_0}(x)-f_{k_0}^{n_0}(x))^2+8\varepsilon x e^{-x^2}}}{2}
$$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6)–(3.7), while equation (1.10) with suitable $\alpha_{1,2}$ realizes case (b);
(2) there exist $k_0\in(0,16)$ and $n_0\in\mathbb{N}$ such that if $0<\varepsilon<\dfrac{\sqrt{2e}}{8}\biggl(1-\dfrac{k_0}{16}\biggr)^2$, then the functions
$$
\alpha_{1,2}(x)=h_{k_0}^{n_0}(x)+\varepsilon e^{-x^2}\mp \frac{\sqrt{(g_{k_0}^{n_0}(x)-f_{k_0}^{n_0}(x))^2+8\varepsilon x e^{-x^2}}}{2}
$$
satisfy the condition $\alpha_1(x)<\alpha_2(x)$ for $x\in\mathbb{R}$ and conditions (3.6)–(3.7), while equation (1.10) with suitable $\alpha_{1,2}$ realizes case (c).
Remark 5. Consider the case of equation (1.10) for
Now, if $y(x)$ is a stabilizing solution to equation (1.10), then $y_+=\alpha_+$ and $y_-=\alpha_-$. Each bounded solution to this equation is also stabilizing (and vice versa), and each stabilizing solution has a non-vanishing derivative in a neighbourhood of $\infty$. So, in the case when $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\in\mathbb{R}$ all four types of bounded solutions to equation (1.10) coincide, that is, we have a trivial classification of bounded solutions.
4. Proofs of main results
4.1. Extensibility and asymptotic behaviour of solutions as dependent on the mutual arrangement of their initial values and the roots of the right-hand side of the equation
Proof of Theorem 3.1.1. Case 1: $y(x_0)<\beta(x_0)$. Suppose there exists $x_1>x_0$ such that $y(x_1)\geqslant\beta(x_1)$. Then there exists $c\in (x_0,x_1]$ such that $y(c)=\beta(c)$. We can assume $c$ to be the leftmost of such points. Thus, $y(x)<\beta(x)$ whenever $x\in[x_0,c)$. Therefore, $y'(c)\geqslant\beta'(c)$. On the other hand, the assumptions of the theorem yield
Case 2: $y(x_0)=\beta(x_0)$. We have $y'(x_0)<\beta'(x_0)$. Hence in a right half-neighbourhood of the point $x_0$ we have the inequality $y(x)<\beta(x)$. Now the proof reduces to the previous case.
Therefore, $y'(x)>0$ in a neighbourhood of the point $x_1$. In this neighbourhood $y(x)$ decreases strictly as $x$ decreases.
We choose $\widetilde{m}<m$ so that $y(x_1)< \widetilde{m}$. Then for all $x\in \mathbb{R}$ we have $\alpha_1(x)> \widetilde{m}$. Suppose there exists $x_2\in [x_0,x_1)$ such that $y(x_2)\geqslant\alpha_1(x_2)$. Then we obtain $y(x_2)>\widetilde{m}$ and $y(x_1)<\widetilde{m}$. Therefore, there exists $\xi\in(x_2, x_1)$ such that $y(\xi)=\widetilde{m}$. We select the rightmost of such points; it must satisfy $y'(\xi)\leqslant 0$.
On the other hand it follows from the form of the differential equation that
This contradiction shows that our assumption fails and $y(x)<\alpha_1(x)$ for all $x\in [x_0,x_1]$. Hence $y'(x)>0$ for all $x\in [x_0, x_1]$. Thus, $y(x)$ increases strictly on $[x_0, x_1]$. This yields $y_0<y(x_1)$, which contradicts the condition $y(x_1)<\min(y_0,m)$. The theorem is proved.
Proof of Theorem 3.1.3. Just as in the proof of Theorem 3.1.2, it is easy to prove that in our case $y'(x)>0$ whenever $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. So $y(x)$ is strictly increasing, and we have $y(x) > M_1$ and $y(x) > M_2$ for $x\in[x_0,b)$. For all $x\in[x_0,b)$ we have
Since $\psi(y)\leqslant \displaystyle\int_{y_0}^{\infty}\dfrac{ds}{(s-M_1)(s-M_2)}<\infty$, the solution cannot be extended to the right of the point $x_0+\displaystyle\int_{y_0}^{\infty}\dfrac{ds}{(s-M_1)(s-M_2)}$ . So taking [38], Corollary 3.1, into account we see that the solution tends to $+\infty$ at this point or even before it. The theorem is proved.
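The blow-up bound obtained here is easy to test numerically. The sketch below uses an illustrative choice of roots and data of our own, not taken from the paper: $\alpha_1(x)=\tanh x\leqslant M_1=1$, $\alpha_2(x)=2-e^{-x}\leqslant M_2=2$, $x_0=0$, and $y_0=3>\max(M_1,M_2)$; then $\displaystyle\int_3^{\infty}\frac{ds}{(s-1)(s-2)}=\ln 2$, and the computed solution indeed escapes to large values before $x_0+\ln 2$.

```python
import math

# Right-hand side of y' = (y - a1(x))(y - a2(x)) with roots bounded above
# by M1 = 1 and M2 = 2 (our illustrative choice, not from the paper).
def f(x, y):
    a1 = math.tanh(x)        # <= M1 = 1
    a2 = 2.0 - math.exp(-x)  # <= M2 = 2 for x >= 0
    return (y - a1) * (y - a2)

def blow_up_x(y0, x0=0.0, h=1e-4, cap=1e6):
    """Integrate by classical RK4 until y exceeds `cap`; return that x."""
    x, y = x0, y0
    while y < cap and x < 10.0:  # safety cut-off; blow-up occurs much earlier
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return x

# Theoretical bound: by partial fractions,
# int_{3}^{inf} ds/((s-1)(s-2)) = ln((3-1)/(3-2)) = ln 2.
bound = math.log((3.0 - 1.0) / (3.0 - 2.0))
x_star = blow_up_x(3.0)
print(x_star, bound)  # the escape point lies strictly before the bound
assert x_star < bound
```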
Proof of Theorem 3.1.4. In fact, the solution $y(\,\cdot\,)$ is a monotonically increasing function on its domain. Since $y(x_0)>M\geqslant\alpha(x_0)$, from Corollary 3 and Remark 6 below we obtain $y(x)>\alpha(x)\geqslant m$ for $x\leqslant x_0$. Hence, by Weierstrass’s theorem $y\to m_1\geqslant m$ as $x\to -\infty$.
Proof of Theorem 3.1.5. In fact, if $y(x_0)<\alpha(x_0)$ and the graph of the solution $y(x)$ does not intersect the curve $y=\alpha(x)$ at $x\geqslant x_0$, then $y(x)<\alpha(x)$ for all $x\geqslant x_0$. Because of the monotonicity of $y(\,\cdot\,)$, by Weierstrass’s theorem the limit $\lim_{x\to +\infty}y(x)=:y_+\in\mathbb{R}$ exists, so that
Proof of Theorem 3.1.6. Consider part (1) of the theorem. First we prove that $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. Assume the contrary. Then there exists $x_1>x_0$ such that $y(x_1)>\alpha_1(x_1)$. Hence there exists $x_2\in(x_0,x_1)$ such that $y(x_2)=\alpha_1(x_2)$. We can choose the rightmost point with this property as $x_2$ and, without loss of generality, suppose that $y(x)>\alpha_1(x)$ for $x\in(x_2,x_1]$ and $y(x)<\alpha_2(x)$ for $x\in[x_2,x_1]$. So, if $x\in (x_2,x_1]$, then
Now, taking the form of our differential equation into account we obtain $y'(x)<0$ for $x\in(x_2,x_1]$, and therefore $y(x)$ decreases strictly on $[x_2,x_1]$. So $y(x_1)<y(x_2)$. We have
Thus, we obtain the inequality $\alpha_1(x_1)<\alpha_1(x_2)$, which contradicts the monotonicity of the function $\alpha_1(x)$. Hence our assumption fails and $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$. The last statement yields $y'(x)\geqslant 0$ for $x\in[x_0,b)$, and therefore $y(x)$ increases monotonically on $[x_0,b)$.
Now we prove that $\lim_{x\to+\infty}y(x)=\lim_{x\to+\infty}\alpha_1(x)$. As proved above, the function $y(x)$ is bounded above on $[x_0,b)$: $y(x)\leqslant\alpha_1(x)\leqslant M$. It is increasing and therefore has a finite limit
Thus, $b=+\infty$. Because of the monotonicity and boundedness of $\alpha_1(x)$, it has a finite limit $\lim_{x\to+\infty}\alpha_1(x)=\alpha_1^+\geqslant a$.
and therefore $\displaystyle\int_{x_0}^{\infty}(\alpha_1(x)-y(x))^2\,dx<\infty$. Hence $a=\alpha_1^+$.
Now we prove part (2). Because of the monotonicity of $\alpha_2$, we have $\alpha_2(x)\leqslant \alpha_2(x_0)$ for all $x\geqslant x_0$. Hence, as $y(x_0)>\alpha_2(x_0)$, Theorem 3.1.3 yields that there exists $x^*>x_0$ such that $y(\,\cdot\,)$ increases strictly monotonically on $[x_0,x^*)$ and
The monotonicity of $y(\,\cdot\,)$ yields $y(x)\geqslant y(x_0)>\alpha_2(x_0)\geqslant\alpha_2(x)$ for $x>x_0$.
Now consider part (3). First we prove that $y(x)<\alpha_2(x)$ for $x\in[x_0,b)$, where $b=\sup\operatorname{dom}y$. We choose a number $c$ so that $c\in(y_0,\alpha_2(x_0))$. Then we have
Suppose there exists $x_1>x_0$ such that $y(x_1)\geqslant \alpha_2(x_1)>c$. We have $y(x_0)<c$ and $y(x_1)>c$. Hence there exists $\xi\in(x_0,x_1)$ such that $y(\xi)=c<\alpha_2(\xi)$. Without loss of generality, we choose the leftmost of such $\xi$. Then $y'(\xi)\geqslant 0$. On the other hand $y'(\xi)=(c-\alpha_1(\xi))(c-\alpha_2(\xi))<0$. We obtain a contradiction. Therefore, $y(x)<\alpha_2(x)$ for $x\in[x_0,b)$.
Note that $y(x)\geqslant \alpha_1(x)$ for $x\in[x_0,b)$. This inequality can be proved similarly to the proof of the inequality $y(x)\leqslant \alpha_1(x)$ for $x\in[x_0,b)$ in part (1).
Thus, $\alpha_1(x)\leqslant y(x)<\alpha_2(x)$ for $x\in[x_0,b)$, that is, the function $y(x)$ is bounded and monotonically decreasing on $[x_0,b)$. Hence the same considerations as in part (1), using the inequality $(\alpha_2(x)-y(x))(y(x)-\alpha_1(x)) \geqslant (\alpha_2(x_0)-y_0)(y(x)-\alpha_1(x))$ for $x\in[x_0,b)$, yield $b=+\infty$ and
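The behaviour in part (3) can be illustrated on the simplest model with constant roots (our example, not from the paper): for $y'=(y+1)(y-1)$ with $\alpha_1\equiv-1$, $\alpha_2\equiv 1$, and $y_0=1/2\in(\alpha_1,\alpha_2)$ the solution is $y(x)=-\tanh\bigl(x-\operatorname{artanh}(1/2)\bigr)$; a numerical check confirms that it stays strictly between the roots, decreases, and tends to $\alpha_1$.

```python
import math

# Model case of equation (1.10) with constant roots alpha_1 = -1 < alpha_2 = 1
# (our illustrative choice): y' = (y + 1)(y - 1), whose solution through
# y(0) = 1/2 is y(x) = -tanh(x - artanh(1/2)).
def rk4_values(y0, x_end, h=1e-3):
    """Classical RK4 for the autonomous equation y' = (y + 1)(y - 1)."""
    f = lambda y: (y + 1.0) * (y - 1.0)
    ys = [y0]
    y = y0
    for _ in range(int(round(x_end / h))):
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        ys.append(y)
    return ys

ys = rk4_values(0.5, 10.0)
assert all(-1.0 < y < 1.0 for y in ys)         # trapped between the roots
assert all(b < a for a, b in zip(ys, ys[1:]))  # strictly decreasing
assert abs(ys[-1] - (-1.0)) < 1e-6             # approaches alpha_1 = -1
# agrees with the closed form at x = 10
assert abs(ys[-1] + math.tanh(10.0 - math.atanh(0.5))) < 1e-8
```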
where $\beta_1(-x)=-\alpha_1(x)$ and $\beta_2(-x)=-\alpha_2(x)$. Thus we can obtain analogues of Theorems 3.1.1–3.1.6 and their corollaries in the case when $x\leqslant x_0$. In particular, Theorem 3.1.3' becomes an analogue of Theorem 3.1.3.
Lemma 4.1. Suppose that condition (1.12) holds and $y(\,\cdot\,)$ is a solution to equation (1.10) defined on $(a,b)$. Then:
where $G(x,r)$ and $q(x)$ are continuous functions for $-\infty<r<+\infty$ and $0\leqslant x<\omega\ (\leqslant +\infty)$. In some results of [47] it was additionally assumed that the function $G(x,r)$ is convex in $r$. It is easy to see that in all the considerations and theorems of [47] the initial point $0$ can be replaced by an arbitrary $x_0$. We are interested in the case when $G(x,r)=r^2$. In this case we obtain Riccati’s equation of the special form
Note that an equation of the form (1.10), when the function $\alpha$ is differentiable, can be reduced to the form (4.3) by the sequence of substitutions
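For instance (a sketch in our own normalization, which may differ from the substitutions actually used for (4.3) and (4.5) by the sign of the new unknown): completing the square with $\alpha(x)=(\alpha_1(x)+\alpha_2(x))/2$ gives
\begin{equation*}
(y-\alpha_1)(y-\alpha_2)=\Bigl(y-\frac{\alpha_1+\alpha_2}{2}\Bigr)^{2}-\Bigl(\frac{\alpha_2-\alpha_1}{2}\Bigr)^{2},
\end{equation*}
so the function $r=y-\alpha$ satisfies
\begin{equation*}
r'=r^{2}-q(x),\qquad q(x)=\alpha'(x)+\Bigl(\frac{\alpha_2(x)-\alpha_1(x)}{2}\Bigr)^{2},
\end{equation*}
which is an equation of the form (4.3) with $G(x,r)=r^{2}$.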
holds. Then no solution $y(\,\cdot\,)$ to equation (1.10) defined at the point $x_0$ can be extended onto $[x_0,+\infty)$. If condition (1.12) holds, then each solution $y(\,\cdot\,)$ tends to $+\infty$ at some finite point $x^*=x^*(y)>x_0$.
Proof. Take the minorant $m(r)=G(x,r)=r^2$. In the case of equation (4.5) we have
So we have a contradiction, which proves our lemma by taking Lemma 4.1 into account.
Suppose that condition (1.12) holds, $Q\in C^1[x_0,+\infty)$, and $U_0(x)<0$ for all $x\geqslant x_0$. Using Corollary 2 we study the behaviour of solutions to equation (1.10) in their dependence on the initial values.
Case I: $y_0=y(x_0)\leqslant \alpha(x_0)=(\alpha_1(x_0)+\alpha_2(x_0))/2$. According to Corollary 2 and Theorem 3.1.2, any solution is bounded on $[x_0,b)$, where $b$ is the right-hand endpoint of the maximal interval of existence of the solution $y(\,\cdot\,)$:
Hence it follows from Lemma 4.1 that $y(x)$ is extensible onto $[x_0,+\infty)$.
Case II: $y_0=y(x_0)>\alpha(x_0)$. The condition $U_0(x)<0$ for $x\geqslant x_0$ can be written as $\alpha'(x)>(\alpha(x)-\alpha_1(x))(\alpha(x)-\alpha_2(x))$. So we have two possibilities for $x\geqslant x_0$:
Generalizing these considerations, we obtain the following result.
Lemma 4.3. Suppose that condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $U_0(x)< 0$ for all $x\geqslant x_0$, let $y(\,\cdot\,)$ be a solution to the equation defined at the point $x_0$, and let $y(x_0)=y_0$. Then the following hold:
In particular, if $y(x)$ satisfies the inequality $y_0=y(x_0)> \alpha(x_0)$, then $y(x)$ tends to $+\infty$ at a finite point $x^*>x_0$.
Remark 7. If $U_0(x)<0$ for all $x\geqslant x_0$, then the graph of any solution to equation (1.10) can have at most one point of intersection with the graph of $\alpha(x)$ at some $x\geqslant x_0$.
Proposition 2.3 in [47] yields the following lemma.
Lemma 4.5. Suppose that $Q\in C^1 [x_0,+\infty)$, the integral $\displaystyle\int_{x_0}^{\infty}\!\!U_0(x)\,dx$ converges, and $y(x)$ is a solution to equation (1.10) defined on $[x_0,+\infty)$. Then relations (3.4) and (3.3) hold.
Proof. The function $r(x)=\alpha(x)-y(x)$ is a solution to equation (4.5) defined on $[x_0,+\infty)$. Then, according to Proposition 2.3 in [47], we have
and the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges if and only if the integral $\displaystyle\int_{x_0}^{\infty}\alpha'(x)\,dx$ converges, that is, $\alpha(x)$ has a finite limit as $x\to +\infty$.
For equation (1.10) Proposition 2.4 in [47] yields Theorem 3.1.8. Using Theorem 3.1.8 for $\alpha_1(x)=\alpha_2(x)=\alpha(x)$, $x\geqslant x_0$, we obtain Corollary 5.
Lemma 4.6. Suppose $Q\in C^1[x_0,+\infty)$. If there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$ such that relations (3.4) and (3.3) hold, then the integral $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges.
Proof. The function $r(x)=\alpha(x)-y(x)$ is a solution to equation (4.5) defined on $[x_0,+\infty)$ and such that
Proposition 2.5 [47] and the proof of Lemma 7.1 in [38], Ch. XI, § 7, yield Theorem 3.1.10.
Consider the case when condition (1.12) holds, $Q\in C^1 [x_0,+\infty)$, and $\alpha_1(x)=\alpha_2(x)=\alpha(x)$ for $x\geqslant x_0$. In this case statement (4) of Theorem 3.1.10 is equivalent to the assertion that
\begin{equation*}
\lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T}|\alpha(t)-a|^2\,dt=0 \quad\text{for some } a\in\mathbb{R}.
\end{equation*}
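This averaging condition can be checked numerically for concrete functions (our examples, not from the paper): $\alpha(t)=a+(1+t)^{-1}$ satisfies it, while $\alpha(t)=a+\sin t$ violates it, since the mean value of $\sin^2$ is $1/2$.

```python
import math

def mean_square_dev(alpha, a, T, x0=0.0, n=100_000):
    """Midpoint-rule approximation of (1/(T-x0)) * int_{x0}^{T} |alpha(t)-a|^2 dt."""
    h = (T - x0) / n
    total = sum((alpha(x0 + (k + 0.5) * h) - a) ** 2 for k in range(n))
    return total * h / (T - x0)

a = 3.0
converging = lambda t: a + 1.0 / (1.0 + t)   # satisfies the condition
oscillating = lambda t: a + math.sin(t)      # violates it: average tends to 1/2

print(mean_square_dev(converging, a, T=1e4))   # small, tends to 0 as T grows
print(mean_square_dev(oscillating, a, T=1e4))  # stays close to 1/2
```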
On the other hand $\alpha(x)$ is a bounded function, hence the last statement on the limit superior holds true. So it follows from Theorem 3.1.10 that if there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$, then statements (1)–(4) of Theorem 3.1.10 hold true. So conditions (1) and (4) are necessary for the existence of a solution $y(x)$ defined on $[x_0,+\infty)$.
Now suppose that $\displaystyle\lim_{T\to+\infty}\frac{1}{T-x_0}\int_{x_0}^{T} |\alpha(t)-a|^2\,dt=0$ for some $a\in\mathbb{R}$ and there exists a solution $y(x)$ to equation (1.10) defined on $[x_0,+\infty)$. Then statement (1) is true and the finite limit
In general, the converse fails: many solutions $y(\,\cdot\,)$ defined on $[x_1,+\infty)$ for some $x_1\geqslant x_0$ can satisfy the condition $\displaystyle\int_{x_1}^{\infty}|y(x)-\alpha(x)|^2\,dx<\infty$. For instance, when (1.12) holds and $\displaystyle\int_{x_0}^{\infty}U_0(x)\,dx$ converges, this condition is satisfied (see Lemma 4.5) by all solutions defined near $+\infty$. But at most one solution defined near $+\infty$ can ‘tend to $\alpha(x)$ sufficiently rapidly’ to ensure the relation $\displaystyle\int_{x_1}^{\infty}|y(x)-\alpha(x)|\,dx<\infty$.
Theorems 3.1.12 and 3.1.13 can be proved by a joint application of Theorems 3.1.6 and 3.1.7.
4.2. On the structure of the set of solutions defined in a neighbourhood of $+\infty$
Remark 9. Suppose $Q\in C^1[x_0,\omega)$. If there exists a solution $y(\,\cdot\,)$ defined on $[x_0,\omega)$, then the principal solution on $(x_0,\omega)$ is defined on the whole interval. In the case when $y_*(x)\to -\infty$ as $x\to x_0+0$ the inequality $y(x)\leqslant y_*(x)$ for $x\in(x_0,\omega)$ is violated. Hence the case when $y_*(x)\to -\infty$ as $x\to x_0+0$ is impossible. Therefore, by Lemma 4.1, the principal solution $y_*(x)$ is extensible onto $[x_0,\omega)$.
Corollary 8. Suppose $x_0<\omega\leqslant\infty$ and $Q\in C^1 [x_0,\omega)$. Consider solutions to equation (1.10) defined at the point $x_0$. Assume that among them there exists a solution defined on $[x_0,\omega)$. Then there exists a solution $y_*(x)$ (the principal one) defined on $[x_0,\omega)$ such that if $y(x)$ is a solution defined on $(x_0,\omega)$, then $y(x)\leqslant y_*(x)$ for $x\in(x_0,\omega)$.
Lemma 4.7. Suppose that $x_0<\omega\leqslant\infty$, $Q\in C^1 [x_0,\omega)$, and there exists at least one solution to equation (1.10) defined on $[x_0,\omega)$. Then there exists a solution $y_*(x)$ (the principal one) defined on $[x_0,\omega)$ such that any solution $y(x)$ defined at the point $x_0$
on the intersection of the domain of $y(x)$ and the half-open interval $[x_0,\omega)$. Then for any $x_0<x^*<\omega$ such that $y(x)$ is defined on $[x_0, x^*)$ the solution $y(x)$ is bounded above on $[x_0,x^*)$. Using the same considerations as in the proof of Theorem 3.1.2, for $x\in[x_0,x^*)$ we can prove the inequality
that is, $y(x)$ is bounded below on $[x_0,x^*)$. So the solution $y(x)$ cannot tend to infinity at a finite point $x^*<\omega$. Therefore, by Corollary 3.1 in [38] the solution $y(x)$ is extensible onto the whole $[x_0,\omega)$. The lemma is proved.
Lemma 4.2 and Theorem 4.2 in [47] yield the following lemma.
Lemma 4.8. Suppose that $Q\in C^1[x_0,\omega)$ and $y_3(x)<y_2(x)<y_1(x)$ are different solutions to equation (1.10) defined on a common interval $(x_0,\omega)$. Then the function $\dfrac{y_1(x)-y_3(x)}{y_1(x)-y_2(x)}$ is greater than or equal to 1 and decreases on the above interval. In particular, the finite limit
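The monotonicity stated in Lemma 4.8 can be observed on explicit solutions of the model equation $y'=(y+1)(y-1)$ (our example, not from the paper): $y_3(x)=-\coth x<y_2(x)=-\tanh x<y_1(x)\equiv 1$ on $(0,+\infty)$, and the ratio $(y_1-y_3)/(y_1-y_2)=(1+\coth x)/(1+\tanh x)$ is $\geqslant 1$ and decreases to the limit $1$.

```python
import math

# Explicit solutions to the model equation y' = (y - 1)(y + 1) on (0, inf)
# (our illustration): y1 = 1, y2(x) = -tanh x, y3(x) = -coth x, y3 < y2 < y1.
y1 = lambda x: 1.0
y2 = lambda x: -math.tanh(x)
y3 = lambda x: -1.0 / math.tanh(x)

ratio = lambda x: (y1(x) - y3(x)) / (y1(x) - y2(x))

xs = [0.1 * k for k in range(1, 100)]             # sample points in (0, 10)
vals = [ratio(x) for x in xs]
assert all(v >= 1.0 for v in vals)                # the ratio is >= 1
assert all(b < a for a, b in zip(vals, vals[1:])) # strictly decreasing
assert abs(vals[-1] - 1.0) < 1e-6                 # finite limit (here, 1)
```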
By Lemma 4.3 in [47] the function $\dfrac{r_3(x)-r_1(x)}{r_2(x)-r_1(x)}$ is greater than or equal to 1 and decreases on the interval in question. In particular, the finite limit
Lemma 4.9. Suppose $Q\in C^1[x_0,+\infty)$ and condition (1.12) holds. If two solutions to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to +\infty$, then any solution defined on $[x_0,+\infty)$ also has a finite limit as $x\to+\infty$.
Proof. Let $y_1>y_2$ be two solutions to equation (1.10) defined on $[x_0,+\infty)$ and having two different finite limits as $x\to+\infty$. Let $y(\,\cdot\,)$ be a solution defined on $[x_0,+\infty)$ and other than $y_1$ and $y_2$. Put
Case 1: $y(x_0)>y_1(x_0)$. By Theorem 3.2.1 the function $(y-y_1)/(y-y_2)$ increases monotonically and tends to some $d\in\mathbb{R}$, $d\leqslant 1,$ as $x\to+\infty$.
Case 2: $y_2(x_0)\kern-0.5pt<\kern-0.5pty(x_0)\kern-0.5pt<\kern-0.5pty_1(x_0)$. By Theorem 3.2.1 the function $(y_1- y_2)/(y_1- y)$ decreases monotonically and tends to some $d\in\mathbb{R}$, $d\geqslant 1$, as $x\to+\infty$; hence the function $(y_1-y)/(y_1-y_2)$ increases monotonically and tends to $1/d\in\mathbb{R}$, $1/d\leqslant 1$, as $x\to+\infty$. We have
Case 3: $y_1(x_0)<y_2(x_0)$. By Theorem 3.2.1 the function $(y_1-y)/(y_1-y_2)$ decreases monotonically and tends to some $d\in\mathbb{R}$, $d\geqslant 1$, as $x\to+\infty$. So,
Thus, in all possible cases the solution $y(x)$ has a finite limit as $x\to+\infty$. The lemma is proved.
Lemma 4.9 and the formulae for the limit of the solution $y(\,\cdot\,)$ in Cases 2 and 3 for $d=1$ yield the following result.
Lemma 4.10. Suppose $Q\in C^1[x_0,+\infty)$ and condition (1.12) holds. If solutions $y_2<y_1$ to equation (1.10) are defined on $[x_0,+\infty)$ and have different finite limits as $x\to+\infty$ and, moreover, $y_1$ is the principal solution on $(x_0,+\infty)$, then each solution $y$ defined on $[x_0,+\infty)$ and other than $y_1$ has the same limit at infinity as $y_2$ has.
Proof of Theorem 3.2.2. By Lemma 4.9 the principal solution $y_*(x)$ on $(x_0,+\infty)$ has a finite limit as $x\to+\infty$. If the solution $y_1$ is not principal on $(x_0,+\infty)$, then $y_2(x)<y_1(x)<y_*(x)$ whenever $x\geqslant x_0$. In this case, by Lemma 4.10 the solutions $y_1(x)$ and $y_2(x)$ have the same limits as $x\to+\infty$, which contradicts the assumptions of the theorem. Thus, the solution $y_1$ is principal on $(x_0,+\infty)$. Hence, by Lemma 4.10 each solution $y$ defined on $[x_0,+\infty)$ and other than $y_1$ has the same limit at infinity as $y_2$ has. The theorem is proved.
Note that while proving Theorem 3.2.2 we did not use the condition $a\ne b$ in Cases 2 and 3. So, using the formulae from that proof for the limit of the solution $y(x)$ as $x\to+\infty$ we obtain the statement of Theorem 3.2.3.
4.3. Asymptotic behaviour at $\pm\infty$ of solutions to an equation the roots of whose right-hand side tend monotonically to finite limits
Lemma 4.11. Suppose $\alpha_1^+\ne \alpha_2^+$ and $y_{\rm I}<y_{\rm II}$ are two bounded solutions to equation (1.10) defined at the point $x_0$ and such that
Proof. First we prove part (1). Since the solution $y(x)$ is bounded above by a solution defined on $[x_0,+\infty)$, it follows from Lemma 4.1 that $y(x)$ is extensible onto $[x_0,+\infty)$ as a bounded function. Hence it obeys condition (2.5) (see [39], Proposition 2.4). Therefore, the limit of $y(x)$ equals $\alpha_1^+$ or $\alpha_2^+$. Consider separately the cases $y(x_0)>y_{\rm I}(x_0)$ and $y(x_0)<y_{\rm I}(x_0)$.
If $y(x_0)>y_{\rm I}(x_0)$, then by Theorem 3.2.1,
If $y(x_0)<y_{\rm I}(x_0)$, then $y(x)<y_{\rm I}(x),\:x\geqslant x_0$. Therefore, $\lim_{x\to+\infty}y(x)\ne \alpha_2^+$. Thus, $\lim_{x\to+\infty}y(x)=\alpha_1^+$.
So, in any case, $\lim_{x\to+\infty}y(x)=\alpha_1^+$.
Now we prove part (2). Assume the contrary, that is, assume that the solution $y(x)$ is extensible onto $[x_0,+\infty)$. Then $y(x)>y_{\rm II}(x)$ for $x\geqslant x_0$, and therefore $\lim_{x\to+\infty}y(x)\ne \alpha_1^+$. Hence $\lim_{x\to+\infty}y(x)=\alpha_2^+$. We obtain
which contradicts Theorem 3.2.1. Thus, our assumption is wrong and $y(x)$ tends to $+\infty$ at a finite point $x^*\geqslant x_0$. The lemma is proved.
Further, the following result holds.
Lemma 4.12. If $\alpha_{1,+}\ne\alpha_{2,+}$, then equation (1.10) has a solution $y_1$ defined in a neighbourhood of $+\infty$ and such that
Assume that there exists $\hat{x}>x_0$ such that $y_1(\hat{x})\geqslant y_1(x_0)$. Then $\hat{x}>x_0+\delta$ since otherwise, owing to the monotonicity of $y_1$ on $[x_0,x_0+\delta]$, we have $y_1(\hat{x})< y_1(x_0)$. Suppose $y_1(\hat{x})> y_1(x_0)$. Then
Hence there exists $\xi\in(x_0+\delta,\hat{x})$ such that $y_1(\xi)=y_1(x_0)$.
Thus, if there exists $\hat{x}>x_0$ such that $y_1(\hat{x})\geqslant y_1(x_0)$, then there exists $\xi>x_0$ such that $y_1(\xi)=y_1(x_0)$. Without loss of generality we assume that $\xi$ is the leftmost of such points. Then $y'_1(\xi)\geqslant 0$. On the other hand
This contradiction shows that our assumption is wrong, and for each $x>x_0$ at which the solution $y_1$ is defined, the inequality $y_1(x)<y_1(x_0)$ holds. Therefore, $y_1$ is a solution bounded for $x\geqslant x_0$. Hence (see [39]) $y_1$ satisfies (2.5) and either $\lim_{x\to+\infty}y_1(x)=\alpha_{1,+}$ or $\lim_{x\to+\infty}y_1(x)=\alpha_{2,+}$. The latter is impossible since
Lemmas 4.12 and 4.11 produce Theorem 3.3.1. Theorem 3.3.1 and Remark 6 yield the statement of Theorem 3.3.1'. Further, Theorem 2.1 and Remark 2.3 in [39], in combination with Theorems 3.3.1 and 3.3.1', produce Theorem 3.3.2, which improves Theorem 2.1 from [39]. Theorem 3.3.2, in its turn, yields Theorem 3.3.3.
Proof of Theorem 3.3.4. Suppose $\alpha_{1,-}=\alpha_{2,-}$. Then any Type I solution is also a Type II solution.
Now suppose $\alpha_{1,-}\ne\alpha_{2,-}$. By Theorem 2.2 from [39] there exists a stabilizing solution which is not of Type I.
1. Suppose there exists a Type II solution. In this case the statement of the theorem can be proved immediately.
2. Suppose there exists a Type III solution. Then by Theorem 3.3.1 the Type III solution $y_{\rm III}$ is unique, whereas there is no Type IV solution. Similarly, we obtain from Theorem 3.3.1' that a Type I solution $y_{\rm I}$ is unique, whereas there is no Type IV solution. So any solution $y(x)$ whose initial value $y(0)$ satisfies $y_{\rm I}(0)<y(0)<y_{\rm III}(0)$ is a stabilizing Type II solution.
3. Suppose there exists a Type IV solution $y_{\rm IV}$. Let $y_{\rm I}$ be a Type I solution. Then
as $x\to-\infty$. This is in contradiction with Theorem 3.3.1'. Therefore, our assumption is wrong and there is no Type IV solution.
The theorem is proved.
Similarly, by using Lemma 4.12 we can prove Theorem 3.3.4'.
Proof of Theorem 3.3.5. The case when $\alpha_{1,+}\ne \alpha_{2,+}$ and $\alpha_{1,-}\ne \alpha_{2,-}$ was considered in part 2 of the proof of Theorem 3.3.4.
If $\alpha_{1,-}=\alpha_{2,-}$, then each Type I solution is also a Type II solution (and vice versa).
If $\alpha_{1,+}=\alpha_{2,+}$, then each Type III solution is also a Type II solution (and vice versa). The theorem is proved.
Theorem 3.3.3 and part 2 of the proof of Theorem 3.3.4 immediately produce Theorem 3.3.6.
Proof of Theorem 3.3.7. If equation (1.10) has a Type I solution, then using Theorem 3.3.4 we obtain statement (a). Suppose the equation has no Type I solution. Then it has no Type II solution by Theorem 3.3.6. Suppose that the equation has a Type III solution. Then using Theorem 3.3.4', we obtain the existence of a Type II solution. This contradiction shows that there is no Type III solution. So, if there exists a stabilizing solution, then it is a Type IV solution. In this case we obtain from Theorem 3.3.1 and Lemma 4.12 that a stabilizing solution is unique and statement (b) holds. If there is no stabilizing solution, then we obtain statement (c). The theorem is proved.
Bibliography
1.
I. G. Fikhtengol'ts, “Elements of the theory of gravitational waves”, Theoret. and Math. Phys., 79:1 (1989), 445–448
2.
A. V. Lysukhina, Equivalence of some quantum mechanical models, Bachelor Thesis, Faculty of Physics, Moscow State University, Moscow, 2017 (Russian)
3.
E. A. Lukashev, V. V. Palin, E. V. Radkevich, and N. N. Yakovlev, “Nonclassical regularization of the multicomponent Euler system”, J. Math. Sci. (N. Y.), 196:3 (2014), 322–345
4.
J. Da Fonseca, M. Grasselli, and C. Tebaldi, “A multifactor volatility Heston model”, Quant. Finance, 8:6 (2008), 591–604
5.
D. A. Smorodinov, “Parametrization of the regulator of multicontour stabilization of the isolation diameter and the capacitance of one meter of twisted pair cabling”, Zh. Nauchn. Publikatsii Aspirantov i Doktorantov, 4 (2013) http://jurnal.org/articles/2013/inf3.html (Russian)
6.
I. I. Artobolevskii and V. S. Loshchinin, Dynamics of machine assemblies in marginal motion regimes, Nauka, Moscow, 1977, 305 pp. (Russian)
7.
N. A. Kil'chevskii, A course of theoretical mechanics, v. 1, Kinematics, statics, point mass dynamics, Nauka, Moscow, 1972, 75 pp. (Russian)
8.
M. I. Zelikin, Control theory and optimization, v. I, Encyclopaedia Math. Sci., 86, Homogeneous spaces and the Riccati equation in the calculus of variations, Springer-Verlag, Berlin, 2000, xii+284 pp.
9.
N. N. Luzin, “On the method of approximate integration of academician S. A. Chaplygin”, Uspekhi Mat. Nauk, 6:6(46) (1951), 3–27 (Russian)
10.
S. A. Chaplygin, A new method of approximate integration of differential equations, Gostekhizdat, Moscow–Leningrad, 1950, 102 pp. (Russian)
11.
A. Glutsuk, “On germs of constriction curves in model of overdamped Josephson junction, dynamical isomonodromic foliation and Painlevé 3 equation”, Mosc. Math. J., 23:4 (2023), 479–513
12.
Z. Došlá, P. Hasil, S. Matucci, and M. Veselý, “Euler type linear and half-linear differential equations and their non-oscillation in the critical oscillation case”, J. Inequal. Appl., 2019, 189, 30 pp.
13.
J. Bernoulli, “Modus generalis construendi omnes æquationes differentiales primi gradus”, Acta Erud., 1694, 435–437
14.
G. N. Watson, A treatise on the theory of Bessel functions, 2nd ed., Cambridge Univ. Press, Cambridge, England; The Macmillan Co., New York, 1944, vi+804 pp.
15.
J. F. Riccati, “Animadversiones in æquationes differentiales secundi gradus”, Acta Erud. Suppl., 8 (1724), 66–73
16.
D. Bernoulli, “Notata in J. Riccati ‘Animadversiones in æquationes differentiales secundi gradus’ ”, Acta Erud. Suppl., 8 (1724), 73–75
17.
V. V. Stepanov, A course of differential equations, 8th ed., GIFML, Moscow, 1959, 468 pp. (Russian); German transl. of 6th ed.: W. W. Stepanow, Lehrbuch der Differentialgleichungen, Hochschulbücher für Math., 20, VEB Deutscher Verlag der Wissenschaften, Berlin, 1956, ix+470 pp.
18.
J. Liouville, “Remarques nouvelles sur l'équation de Riccati”, J. Math. Pures Appl., 1841, 1–13
A. Cayley, “On Riccati's equation”, Philos. Mag. (4), XXXVI:244 (1868), 348–351; The collected mathematical papers, v. VII, Cambridge Univ. Press, Cambridge, 1894, 9–12
22.
R. Murphy, “On the general properties of definite integrals”, Trans. Camb. Phil. Soc., III (1830), 429–443
23.
E. Weyr, Zur Integration der Differentialgleichungen erster Ordnung, Abh. Königl. böhm. Ges. Wiss. (6), 6, Prag, Dr. Ed. Gregr, 1875, 44 pp.
24.
É. Picard, “Application de la théorie des complexes linéaires à l'étude des surfaces et des courbes gauches”, Ann. Sci. École Norm. Sup. (2), 6 (1877), 329–366
25.
R. Redheffer, “On solutions of Riccati's equation as functions of the initial values”, J. Rational Mech. Anal., 5:5 (1956), 835–848
26.
G. McCarty, Jr., “Solutions to Riccati's problem as functions of initial values”, J. Math. Mech., 9:6 (1960), 919–925
27.
V. A. Pliss, Nonlocal problems of the theory of oscillations, Academic Press, New York–London, 1966, xii+306 pp.
28.
I. V. Astashova, “Remark on continuous dependence of solutions to the Riccati equation on its righthand side”, International workshop QUALITDE – 2021, Abstracts (Tbilisi 2021), A. Razmadze Math. Inst. of I. Javakhishvili Tbilisi State Univ., Tbilisi, 14–17 https://rmi.tsu.ge/eng/QUALITDE-2021/Abstracts_workshop_2021.pdf
29.
A. F. Filippov, Introduction to the theory of differential equations, URSS, Moscow, 2004, 239 pp. (Russian)
30.
W. T. Reid, Riccati differential equations, Math. Sci. Eng., 86, Academic Press, New York–London, 1972, x+216 pp.
31.
M. Bertolino, “Non-stabilité des courbes de points stationnaires des solutions des équations différentielles”, (Serbo-Croatian), Mat. Vesnik, 2(15)(30):3 (1978), 243–253
32.
M. Bertolino, “Équations différentielles aux coefficients infinis”, Mat. Vesnik, 4(17)(32):2 (1980), 150–155
33.
M. Bertolino, “Asymptotes verticales des solutions des équations différentielles”, Mat. Vesnik, 5(18)(33):2 (1981), 139–144
34.
A. I. Egorov, Riccati's equation, Fizmatlit, Moscow, 2001, 328 pp. (Russian)
35.
E. Kamke, Differentialgleichungen. Lösungsmethoden und Lösungen, v. 1, Mathematik und ihre Anwendungen in Physik und Technik. Reihe A, 18, Gewöhnliche Differentialgleichungen, 6. Aufl., Akademische Verlagsgesellschaft, Geest & Portig K.-G., Leipzig, 1959, xxvi+666 pp.
36.
N. M. Kovalevskaya, On some cases of integrability of a general Riccati equation, 2006, 4 pp., arXiv: math/0604243v1
37.
N. M. Kovalevskaya, “Integrability of the general Riccati equation”, Zh. Nauchn. Publikatsii Aspirantov i Doktorantov, 5 (2011) http://jurnal.org/articles/2011/mat3.html (Russian)
38.
Ph. Hartman, Ordinary differential equations, John Wiley & Sons, Inc., New York–London–Sydney, 1964, xiv+612 pp.
39.
V. V. Palin and E. V. Radkevich, “Behavior of stabilizing solutions of the Riccati equation”, J. Math. Sci. (N.Y.), 234:4 (2018), 455–469
40.
M. Bertolino, “Sur une synthèse pratique de deux méthodes qualitatives d'étude des équations différentielles”, Mat. Vesnik, 13(28):1 (1976), 9–19
41.
I. Merovci, “Sur quelques propriétés des solutions de l'équation $y'=(y-\alpha_1)(y-\alpha_2)$”, (Serbo-Croatian), Mat. Vesnik, 2(15)(30):3 (1978), 235–242
42.
N. P. Erugin, Reader for a general course in differential equations, 3rd revised and augmented ed., Nauka i technika, Minsk, 1979, 743 pp. (Russian)
43.
M. Bertolino, “Tuyaux étagés de l'approximation des équations différentielles”, Publ. Inst. Math. (Beograd) (N. S.), 12(26) (1971), 5–10
44.
I. V. Astashova and V. A. Nikishov, “On extensibility and asymptotics of solutions to the Riccati equation with real roots of its right part”, International workshop QUALITDE – 2022, Reports of QUALITDE (Tbilisi 2022), v. 1, A. Razmadze Math. Inst. of I. Javakhishvili Tbilisi State Univ., Tbilisi, 27–30 https://rmi.tsu.ge/eng/QUALITDE-2022/Reports_workshop_2022.pdf
45.
I. V. Astashova and V. A. Nikishov, “On qualitative properties of solutions of Riccati's equation”, Current methods in the theory of boundary value problems, Voronezh Spring Mathematical School (3–9 May 2023), Publishing House of Voronezh State University, Voronezh, 2023, 50–53 https://vvmsh.math-vsu.ru/files/vvmsh2023.pdf (Russian)
46.
I. V. Astashova and V. A. Nikishov, “Extensibility and asymptotics of solutions of Riccati's equation with real roots of the right-hand side”, Differ. Uravn., 59:6 (2023), 856–858 (Russian)
47.
P. Hartman, “On an ordinary differential equation involving a convex function”, Trans. Amer. Math. Soc., 146 (1969), 179–202
Citation:
I. V. Astashova, V. A. Nikishov, “On extensibility and qualitative properties of solutions to Riccati's equation”, Russian Math. Surveys, 79:2 (2024), 189–227