Controllability of difference approximation for a control system with continuous time
E. R. Avakov$^{a}$, G. G. Magaril-Il'yaev$^{b,c,d}$
$^{a}$ V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
$^{b}$ Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia
$^{c}$ Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia
$^{d}$ Moscow Institute of Physics and Technology, Dolgoprudny, Moscow region, Russia
Abstract:
For a control system with continuous time a discrete control system approximating it is constructed and shown to be locally controllable with respect to a trajectory admissible for the continuous system in question. Examples illustrating this result are given.
Bibliography: 10 titles.
Keywords:
control system, control system with discrete time, local controllability.
Received: 25.10.2021 and 27.07.2022
§ 1. Introduction

This paper is concerned with constructing a discrete control system that approximates the original control system with continuous time and with finding sufficient conditions for the local controllability of the resulting system. This problem arises naturally in the numerical solution of control problems. Let us give a brief survey of the controllability of discrete systems. On an interval $[t_0,T]$ consider the control system
$$
\begin{equation}
\dot x=f(t,x,u(t)), \qquad u(t)\in U \quad\text{for almost all } t\in[t_0, T],
\end{equation}
\tag{1}
$$
where $f\colon \mathbb R\times\mathbb R^n\times\mathbb R^r\to\mathbb R^n$ is a mapping of $t$, $x$, and $u$, and $U$ is a nonempty subset of $\mathbb R^r$. In a numerical analysis of such a system one usually assigns to it, for each $s\in \mathbb N$, a discrete system of the form
$$
\begin{equation}
x_{i+1}=f_i(x_i,u_i), \qquad u_i\in U, \quad i=0,1,\dots,s-1.
\end{equation}
\tag{2}
$$
An admissible trajectory of control system (2) is, by definition, any finite sequence $\overline x=(x_0,x_1,\dots,x_s)$, $x_i \in \mathbb R^n$, $i=0,1,\dots,s$, such that there exists a finite sequence of controls $\overline u=(u_0,u_1,\dots,u_{s-1})$, $u_i\in\mathbb R^r$, $i=0,1,\dots,s-1$, satisfying (2). The set of admissible trajectories of control system (2) with initial vector $x_0$ is denoted by $\mathcal D(x_0,s)$. In the literature such a control system is usually considered per se, without being associated with any control system with continuous time. To investigate the controllability of (2), consider the set of attainability
$$
\begin{equation*}
\mathcal R(x_0,s)=\bigl\{y\in\mathbb R^n \mid \exists\,\overline x\in \mathcal D(x_0,s)\colon x_s=y\bigr\}.
\end{equation*}
\notag
$$
A control system (2) is said to be locally controllable with respect to an admissible trajectory $\widehat{\overline x}=(\widehat x_0,\widehat x_1,\dots,\widehat x_s)$ if $\widehat x_s\in \operatorname{int}\mathcal R(\widehat x_0,s)$. If a sequence of controls $\widehat {\overline u}=(\widehat u_0,\widehat u_1,\dots,\widehat u_{s-1})$ corresponding to an admissible trajectory $\widehat{\overline x}$ is such that $\widehat u_i\in\operatorname{int}U$, $i=0,1,\dots,s-1$, then, as is well known (see, for example, [1]), system (2) is locally controllable if its linear approximation is completely controllable. Such additional conditions on the control are quite natural if $\widehat{\overline x}$ is an equilibrium point ($\widehat x_i=\widehat x_0$, $i=1,\dots,s$); otherwise they narrow the class of problems under consideration considerably.

In the more involved situation where the linear approximation of (2) is not completely controllable, second-order local controllability conditions for (2) were obtained in [2] in terms of the index of a certain quadratic form associated with this system (note that [2] considered a more general problem, with a smooth manifold as the phase space). In [3] sufficient conditions for the local controllability of system (2) were obtained without the assumption that $\widehat u_i\in\operatorname{int}U$, $i=0,\dots,s-1$: namely, an admissible pair $(\widehat {\overline x},\widehat {\overline u})$ should not satisfy the assumptions of the discrete Pontryagin maximum principle with discrete Pontryagin function $H_i(x_i,u_i,p_i)=\langle p_i,f_i(x_i,u_i)\rangle$. These conditions were obtained under the very restrictive assumption that the sets $f_i(x, U)=\bigl\{f_i(x,u)\in\mathbb R^n\colon u\in U\bigr\}$, $i=0,\dots,s-1$, are convex for all $x\in \mathbb R^{n(s+1)}$.
However, the analogous sufficient conditions for the local controllability of a control system with continuous time do not involve such assumptions (see, for example, [4] and [5]). Let us give a simple example showing that the above convexity requirement is essential for a discrete system. Consider the system
$$
\begin{equation*}
x_{i+1}=x_i+s^{-1}u_i, \qquad u_i\in U=\{-1,0,1\}, \quad i=0,1,\dots,s-1, \quad x_0=0.
\end{equation*}
\notag
$$
It is easily checked that the assumptions of the discrete maximum principle are not fulfilled by the admissible pair $(\widehat{\overline x},\widehat{\overline u})$, where $\widehat{\overline x}=(0,\dots,0)$ and $\widehat{\overline u}=(0,\dots,0)$. On the other hand, $|x_s|=s^{-1}\bigl|\sum_{i=0}^{s-1}u_i\bigr|$ is clearly either zero or not smaller than $1/s$, so the set of attainability contains no disc centred at the origin; that is, this system is not locally controllable with respect to the origin. At the same time it is easily verified that its continuous analogue ($\dot x=u$, $x(t_0)=0$) is locally controllable with respect to the origin.

A similar situation also occurs in the study of discrete optimal control problems. As is known (see, for example, [6]), the maximum principle for such problems holds under the above convexity requirement on the sets $f_i(x, U)$. In particular, this underlay the introduction in [7] of the so-called approximative maximum principle, which is free from these convexity conditions.

In this paper we take a different approach to the study of the controllability of discrete systems. Namely, we assume that the original control system is a system with continuous time of the form (1). The problem is then to construct an appropriate discrete system defined at times $t_0,t_1,\dots,t_s=T$ that is locally controllable with respect to an admissible trajectory $\widehat x(\,\cdot\,)$ for system (1), that is, $\widehat x(T)\in \operatorname{int}\mathcal R(\widehat x(t_0),s)$. The trajectory of the discrete system whose endpoint hits a neighbourhood of $\widehat x(T)$ should be chosen so that it lies arbitrarily close to the trajectory $\widehat x(\,\cdot\,)$ at the points $t_0,t_1,\dots,t_s$.

The paper is organized as follows. In § 2 the main result is formulated and proved. Illustrative examples are given in § 3.
In § 4 a generalized implicit function theorem and three lemmas (two approximation lemmas and an implicit function lemma) are proved. The main theorem in our paper is based substantially on these results, which, in our opinion, are of independent interest.
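The non-controllability in the counterexample above ($\dot x=u$ with $U=\{-1,0,1\}$) can also be observed numerically. The following sketch, which is purely illustrative and not part of the paper's argument, enumerates the attainable endpoints $x_s=s^{-1}\sum_{i=0}^{s-1}u_i$ and confirms that they are exactly the multiples of $1/s$, so no disc around the origin lies in the set of attainability:

```python
from itertools import product

def attainable_endpoints(s):
    """Endpoints x_s = (1/s) * sum(u_i) of the discrete system
    x_{i+1} = x_i + u_i / s,  u_i in U = {-1, 0, 1},  x_0 = 0."""
    U = (-1, 0, 1)
    return {sum(u) / s for u in product(U, repeat=s)}

s = 4
R = attainable_endpoints(s)
# The attainable endpoints are exactly the multiples k/s with |k| <= s:
assert R == {k / s for k in range(-s, s + 1)}
# Every nonzero endpoint has modulus at least 1/s, so no disc centred
# at the origin lies in R: the system is not locally controllable.
assert all(x == 0 or abs(x) >= 1 / s for x in R)
```

The gaps of width $1/s$ between attainable endpoints persist for every $s$, which is exactly the obstruction removed by the convexified scheme (3) constructed in § 2.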
§ 2. The main result and the proof

For system (1) we always assume that $f$ is continuous, together with its $x$-derivative, on $\mathbb R\times\mathbb R^n\times \mathbb R^r$. By $C([t_0,T],\mathbb R^n)$, $\operatorname{AC}([t_0,T],\mathbb R^n)$ and $L_\infty([t_0,T],\mathbb R^r)$ we denote, respectively, the space of continuous vector functions on $[t_0, T]$ with values in $\mathbb R^n$, the space of absolutely continuous vector functions with values in $\mathbb R^n$ and the space of essentially bounded vector functions with values in $\mathbb R^r$ (for $r=1$ we write $L_\infty([t_0,T])$). If a pair $(x(\,\cdot\,),u(\,\cdot\,))\in \operatorname{AC}([t_0,T],\mathbb R^n)\times L_\infty([t_0,T],\mathbb R^r)$ satisfies the conditions in (1), then we say that this pair is admissible for control system (1) (the adjective ‘control’ will be omitted in what follows). In this case we say that $x(\,\cdot\,)$ is an admissible trajectory for system (1).

Let $s\in \mathbb N$ and $h=(T-t_0)/s$, so that the points $t_{i+1}=t_i+h$, $i=0,1,\dots,s-1$, $t_s=T$, define a uniform partition of the interval $[t_0, T]$. Let $\widehat x(\,\cdot\,)$ be an admissible trajectory for system (1), and let $\alpha_{ik}\geqslant0$, $i=0,1,\dots,s-1$, $k=1,\dots, n+1$, satisfy $\sum_{k=1}^{n+1}\alpha_{ik}=1$. With this system we associate its difference approximation
$$
\begin{equation}
x_{i+1}=x_i+h\sum_{k=1}^{n+1}\alpha_{ik}f(t_{i},x_{i},u_{ik}), \qquad u_{ik}\in U, \quad i=0,1,\dots, s-1,
\end{equation}
\tag{3}
$$
where $x_0=\widehat x(t_0)$.

Definition 1. An admissible trajectory for control system (3) with respect to a trajectory $\widehat x(\,\cdot\,)$ is any finite sequence $\overline x=(\widehat x(t_0),x_1,\dots,x_s)$, $x_i\in\mathbb R^n$, $i=1,\dots,s$, satisfying (3) for some $\alpha_{ik}\geqslant0$ such that $\sum_{k=1}^{n+1}\alpha_{ik}=1$, and $u_{ik}\in \mathbb R^r$, $i=0,1,\dots,s-1$, $k=1,\dots,n+1$. We denote the set of all admissible trajectories for system (3) by $D(\widehat x(t_0),s)$.

Let $s\in\mathbb N$. Then the set of attainability for (3) is defined by
$$
\begin{equation*}
R(\widehat x(t_0),s) =\bigl\{y\in\mathbb R^n \mid \exists \,\overline x_y=(\widehat x(t_0),x_{1y},\dots,x_{sy})\in D(\widehat x(t_0),s)\colon x_{sy}=y\bigr\}.
\end{equation*}
\notag
$$
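Points of $R(\widehat x(t_0),s)$ are easy to generate by simulating scheme (3) directly. A minimal sketch (the right-hand side $f$, the weights and the controls below are placeholder choices for illustration, not data from the paper):

```python
def step_scheme3(f, t_i, x_i, h, alphas, controls):
    """One step of scheme (3):
    x_{i+1} = x_i + h * sum_k alpha_k * f(t_i, x_i, u_k),
    with alpha_k >= 0, sum_k alpha_k = 1 and each u_k in U."""
    assert all(a >= 0 for a in alphas) and abs(sum(alphas) - 1.0) < 1e-12
    return x_i + h * sum(a * f(t_i, x_i, u) for a, u in zip(alphas, controls))

# Illustration with f(t, x, u) = u (n = 1) and U = {-1, 0, 1}: a convex
# combination of the extreme controls produces an intermediate velocity
# that no single u in U provides; this is the convexification built into (3).
f = lambda t, x, u: u
x1 = step_scheme3(f, 0.0, 0.0, 0.1, alphas=(0.25, 0.75), controls=(-1.0, 1.0))
# x1 is approximately h * (0.25*(-1) + 0.75*1) = 0.05
```

Compared with the scheme (2) of the introduction, each step now averages up to $n+1$ evaluations of $f$, which is what restores local controllability in Theorem 1 below.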
Definition 2. We say that the discrete control system (3) is locally controllable with respect to an admissible trajectory $\widehat x(\,\cdot\,)$ for (1) if $\widehat x(T)\in\operatorname{int}R(\widehat x(t_0), s)$.

To formulate the main result we need some notation. We denote by $\langle \lambda,x\rangle=\sum_{i=1}^n\lambda_ix_i$ the value of the linear functional $\lambda=(\lambda_1,\dots,\lambda_n)\in(\mathbb R^n)^*$ at the element $x=(x_1,\dots,x_n)^{\top}\in\mathbb R^n$ (here $\top$ denotes transposition). We denote the set of all linear functionals on $\mathbb R^n$ that are nonnegative on vectors with nonnegative coordinates by $(\mathbb R^n)^*_+$. The adjoint of a linear operator $\Lambda\colon \mathbb R^n\to\mathbb R^m$ is denoted by $\Lambda^*$. If a pair $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ is fixed, then for brevity we write $\widehat f(t)=f(t,\widehat x(t),\widehat u(t))$ and, similarly, for the derivative, $\widehat f_x(t)=f_x(t,\widehat x(t),\widehat u(t))$. With each finite sequence $\overline x=(x_0,x_1,\dots,x_s)$, $x_i\in\mathbb R^n$, $i=0,1,\dots,s$, we associate the piecewise linear function $l(\overline x)\in C([t_0,T],\mathbb R^n)$ defined by
$$
\begin{equation*}
l(\overline x)(t)=x_i+\frac{t-t_i}{h}(x_{i+1}-x_i), \qquad t\in [t_i,t_{i+1}], \quad i=0,1,\dots,s-1.
\end{equation*}
\notag
$$
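In code, $l(\overline x)$ is ordinary linear interpolation through the uniform nodes $t_i$; a sketch for scalar components (for $x_i\in\mathbb R^n$ one interpolates coordinatewise), using numpy's `interp`:

```python
import numpy as np

def piecewise_linear(xbar, t0, T):
    """The interpolation l(xbar): the piecewise linear function taking the
    value x_i at the uniform node t_i = t0 + i*h, where h = (T - t0)/s."""
    nodes = np.linspace(t0, T, len(xbar))
    return lambda t: np.interp(t, nodes, xbar)

lx = piecewise_linear([0.0, 1.0, 0.0], t0=0.0, T=1.0)  # s = 2, nodes 0, 0.5, 1
# lx(0.25) lies halfway between x_0 = 0 and x_1 = 1
```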
Let the pair $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ be admissible for system (1). We denote by $\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ the set of nonzero functions $p(\,\cdot\,)\in \operatorname{AC}([t_0, T],(\mathbb R^n)^*)$ satisfying
$$
\begin{equation}
\begin{aligned} \, \dot p(t) &=-p(t)\widehat f_x(t), \\ \max_{u\in U}\bigl\langle p(t),f(t,\widehat x(t),u)\bigr\rangle&=\langle p(t), \widehat f(t)\rangle \quad\text{for a.e. } t\in[t_0, T]. \end{aligned}
\end{equation}
\tag{4}
$$
Note that these relations are the assumptions of the Pontryagin maximum principle with Pontryagin function $H(t)=\bigl\langle p(t), f(t,x(t),u(t))\bigr\rangle$. The main result of our paper is as follows.

Theorem 1. Let $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ be an admissible pair for control system (1), and let $\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))=\varnothing$. Then there exists $s_0\in\mathbb N$ such that for all $s\geqslant s_0$ the discrete control system (3) is locally controllable with respect to the trajectory $\widehat x(\,\cdot\,)$. Moreover, for any neighbourhood $V$ of the function $\widehat x(\,\cdot\,)$ in $C([t_0,T],\mathbb R^n)$ there exists a neighbourhood of the vector $\widehat x(T)$ such that for each $s\geqslant s_0$ any point of this neighbourhood is the endpoint of an admissible trajectory $\overline x=(\widehat x(t_0),x_{1},\dots,x_{s})$ for the discrete system (3) satisfying $l(\overline x)\in V$.

We prove only the second part of the theorem, because the first part clearly follows from the second. For the proof we require a proposition characterizing the emptiness of the set $\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ in different terms. First we give some definitions and formulate a lemma. In what follows a ‘neighbourhood $\mathcal O(x)$’ means a ‘neighbourhood $\mathcal O(x)$ of the point $x$’. The origin in $\mathbb R^m$ is denoted by $0_{\mathbb R^m}$. For brevity we also frequently write $\widehat x$, $\widehat u$ and so on in place of $\widehat x(\,\cdot\,)$, $\widehat u(\,\cdot\,)$ and so on. For $N>1$ let $\widehat{\overline \alpha}=\widehat{\overline \alpha}(N)=(1,0, \dots,0)$ be the element of $(L_\infty([t_0, T]))^{N}$ whose first component is the function identically equal to one, the remaining components being identically zero.

Lemma 1 (on a variational equation). Let $\widehat u\in L_\infty([t_0, T],\mathbb R^r)$, and let $\widehat x$ be a solution of the equation
$$
\begin{equation}
\dot x=f(t,x,\widehat u(t))
\end{equation}
\tag{5}
$$
which is defined on $[t_0, T]$, let $N>1$, and let $\overline v=(v_1,\dots,v_{N-1})\in (L_\infty([t_0, T], \mathbb R^r))^{N-1}$. Then there exists a neighbourhood $\mathcal O(\widehat{\overline \alpha})\subset(L_\infty([t_0, T]))^{N}$ such that for all $\overline\alpha=(\alpha_1,\dots,\alpha_N)\in \mathcal O(\widehat{\overline \alpha})$ the Cauchy problem
$$
\begin{equation}
\dot x=\alpha_1(t)f\bigl(t,x,\widehat u(t)\bigr)+\sum_{i=2}^{N}\alpha_i(t)f\bigl(t,x,v_{i-1}(t)\bigr), \qquad x(t_0)=\widehat x(t_0),
\end{equation}
\tag{6}
$$
has a unique solution $x(\,\cdot\,,\overline\alpha; \overline v)$ defined on $[t_0, T]$. As a mapping into $C([t_0, T], \mathbb R^n)$, the map $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha;\overline v)$ is continuously differentiable. If $\widehat{x}^{\,'}(\overline v)$ is the derivative of this mapping at the point $\widehat{\overline \alpha}$, then for each $\overline\alpha=(\alpha_1,\dots,\alpha_N)\in(L_\infty([t_0, T]))^N$ the function $z=\widehat{x}^{\,'}(\overline v)\overline\alpha$ is a solution of the variational equation
$$
\begin{equation}
\dot z=\widehat f_x(t)z+\alpha_1(t)\widehat f(t) +\sum_{i=2}^N\alpha_i(t)f\bigl(t,\widehat x(t),v_{i-1}(t)\bigr), \qquad z(t_0)=0,
\end{equation}
\tag{7}
$$
on the interval $[t_0, T]$. This lemma is a particular case of Lemma $2$ in [8] for $k=1$. We set
$$
\begin{equation*}
\mathcal U=\bigl\{u\in L_\infty([t_0, T],\mathbb R^r)\colon u(t)\in U\text{ a.e. on }[t_0, T]\bigr\}
\end{equation*}
\notag
$$
and for $k\in\mathbb N$ define
$$
\begin{equation*}
\mathcal A_k=\bigl\{\overline\alpha=(\alpha_1,\dots,\alpha_k)\in (L_\infty([t_0,T]))^k \colon \overline\alpha(t)\in\Sigma^k\text{ a.e. on } [t_0,T]\bigr\},
\end{equation*}
\notag
$$
where $\Sigma^k=\bigl\{\overline\alpha=(\alpha_1,\dots,\alpha_k)\in \mathbb R_+^k \colon \sum_{i=1}^k\alpha_i=1\bigr\}$. Let $U_Z(z,r)$ denote the open ball with centre $z$ and radius $r>0$ in a normed space $Z$.

Proposition 1. If $\Lambda(\widehat x,\widehat u) = \varnothing$, then there exist $\widehat N > 1$ and $\overline v = (v_1,\dots,v_{\widehat N-1}) \in \mathcal U^{\widehat N-1}$ such that
$$
\begin{equation}
0\in\operatorname{int} \bigl\{(\widehat{x}^{\,'}(\overline v)\overline\alpha)(T)\colon \overline\alpha\in\mathcal A_{\widehat N}-\widehat{\overline \alpha}(\widehat N)\bigr\}.
\end{equation}
\tag{8}
$$
Proof. Let $M(\overline v)$ be the set in curly brackets, and let $\mathcal V$ be the set of all tuples $\overline v=(v_1,\dots,v_{N-1})\in \mathcal U^{N-1}$, $N>1$. First we verify that if $\Lambda(\widehat x,\widehat u)=\varnothing$, then
$$
\begin{equation}
0\in\operatorname{int} \bigcup_{\overline v\in\mathcal V}M(\overline v).
\end{equation}
\tag{9}
$$
Assuming for contradiction that (9) fails, we show that $\Lambda(\widehat x,\widehat u)\ne\varnothing$. To this end we first show that the set on the right-hand side of (9) is convex.
Indeed, denoting this set by $M$, consider $y_i\in M$ and $\beta_i>0$, $i=1,2$, such that $\beta_1+\beta_2=1$. We claim that $\beta_1y_1+\beta_2y_2\in M$.
We have $y_i\in M$, and so $y_i=(\widehat{x}^{\,'}(\overline v_i)\overline\alpha_i)(T)$ for some $\overline v_i=(v_{i1},\dots,v_{i(N_i-1)})\in\mathcal U^{N_i-1}$. We also have $\overline\alpha_i=(\alpha_{i1}-1,\alpha_{i2},\dots,\alpha_{iN_i})\in \mathcal A_{N_i}-\widehat{\overline \alpha}(N_i)$, $i=1,2$. By Lemma 1 the functions $z_i=\widehat{x}^{\,'}(\overline v_i)\overline\alpha_i$, $i=1,2$, obey (7). Hence, as is easily checked, the function $z=\beta_1z_1+\beta_2z_2$ satisfies the equation
$$
\begin{equation*}
\begin{aligned} \, \dot z &=\widehat f_x(t)z+\bigl(\beta_1\alpha_{11}(t)+\beta_2\alpha_{21}(t)-1\bigr)\widehat f(t) \\ &\qquad +\sum_{i=2}^{N_1}\beta_1\alpha_{1i}(t)f\bigl(t,\widehat x(t),v_{1(i-1)}(t)\bigr) \\ &\qquad +\sum_{i=N_1+1}^{N_1+N_2-1}\beta_2\alpha_{2(i-N_1+1)}(t)f\bigl(t,\widehat x(t),v_{2(i-N_1)}(t)\bigr), \qquad z(t_0)=0, \end{aligned}
\end{equation*}
\notag
$$
on $[t_0, T]$.
This equation coincides with the variational equation satisfied by $\widehat{x}^{\,'}(\overline v)\overline\alpha$, where $\overline v=(\overline v_{1}, \overline v_2)$ and $\overline\alpha=(\beta_1\alpha_{11}+\beta_2\alpha_{21}-1, \beta_1\alpha_{12},\dots,\beta_1\alpha_{1N_1},\beta_2\alpha_{22},\dots, \beta_2\alpha_{2N_2})$. Hence $z=\widehat{x}^{\,'}(\overline v)\overline\alpha$, since any linear equation is uniquely solvable.
It is easily seen that $\overline\alpha\in\mathcal A_{N}-\widehat{\overline \alpha}(N)$, where $N=N_1+N_2-1$, and therefore $z(T)\in M(\overline v) \subset M$. Hence $\beta_1y_1+\beta_2y_2=\beta_1z_1(T)+\beta_2z_2(T)=z(T)\in M$, which means the convexity of $M$.
If inclusion (9) fails, then, by the separation theorem applied to the convex set $M$, there exists a nonzero vector $\lambda\in(\mathbb R^n)^*$ such that
$$
\begin{equation}
\bigl\langle\lambda, \,(\widehat{x}^{\,'}(\overline v)(\overline\alpha-\widehat{\overline \alpha}(N)))(T)\bigr\rangle\geqslant0
\end{equation}
\tag{10}
$$
for all $N>1$, $\overline v=(v_1, \dots,v_{N-1})\in\mathcal V$ and $\overline\alpha\in\mathcal A_N$.
Let $p$ be the solution of the Cauchy problem
$$
\begin{equation}
\dot p =-p\,\widehat f_x(t), \qquad p(T)=-\lambda.
\end{equation}
\tag{11}
$$
Since $\lambda\ne0$, $p$ is a nonzero function.
Consider only singleton tuples $\overline v=v\in\mathcal U$ in (10). Then $N=2$ and $\widehat{\overline \alpha}=\widehat{\overline \alpha}(2)=(1,0)$. By (7), for each $\overline\alpha=(\alpha_1,\alpha_{2})\in(L_\infty([t_0, T]))^{2}$ the derivative $\widehat{x}^{\,'}(v)\overline\alpha$ satisfies
$$
\begin{equation}
\begin{gathered} \, \notag \dot{\widehat{x}^{\,'}}(v)\overline\alpha =\widehat f_x(t)\widehat{x}^{\,'}(v)\overline\alpha+\alpha_1(t)\widehat f(t) +\alpha_{2}(t)f\bigl(t,\widehat x(t),v(t)\bigr), \\ \bigl(\widehat{x}^{\,'}(v)\overline\alpha\bigr)(t_0)=0. \end{gathered}
\end{equation}
\tag{12}
$$
Let $\overline\alpha=(1/2,1/2)\in \mathcal A_{2}$. From (10)–(12) we obtain
$$
\begin{equation*}
\begin{aligned} \, 0 &\leqslant-\bigl\langle p(T), (\widehat{x}^{\,'}(v)(\overline\alpha-\widehat{\overline \alpha}))(T)\bigr\rangle \\ &=-\int_{t_0}^{T}\bigl(\langle p(t),(\dot {\widehat{x}^{\,'}}(v)(\overline\alpha-\widehat{\overline \alpha}))(t)\rangle +\langle \dot p(t), (\widehat{x}^{\,'}(v)(\overline\alpha-\widehat{\overline \alpha}))(t)\rangle\bigr)\,dt \\ &=-\int_{t_0}^{T}\biggl\langle p(t),-\frac12\widehat f(t)+\frac12 f\bigl(t,\widehat x(t),v(t)\bigr)\biggr\rangle\,dt, \end{aligned}
\end{equation*}
\notag
$$
that is,
$$
\begin{equation*}
\int_{t_0}^{T}\bigl\langle p(t),f\bigl(t,\widehat x(t),v(t)\bigr)\bigr\rangle\,dt \leqslant \int_{t_0}^{T}\bigl\langle p(t),\widehat f(t)\bigr\rangle\,dt
\end{equation*}
\notag
$$
for all $v\in\mathcal U$. Now a standard argument shows that the maximum condition in (4) is satisfied, which, in combination with (11), proves that the set $\Lambda(\widehat x,\widehat u)$ is nonempty. So we have shown that if $\Lambda(\widehat x,\widehat u)=\varnothing$, then inclusion (9) holds. We claim that (8) follows from (9).
By (9) there exists an $n$-dimensional simplex $S\subset M$ such that $0\in\operatorname{int}S$. Hence $U_{\mathbb R^n}(0,\rho)\subset S$ for some $\rho>0$. Let $e_1,\dots,e_{n+1}$ be the vertices of $S$. Then $e_i=(\widehat{x}^{\,'}(\overline v_i)\overline\alpha_i)(T)$ for some $\overline\alpha_i\in \mathcal A_{N_i}-\widehat{\overline \alpha}(N_i)$ and $\overline v_i\in\mathcal U^{N_i-1}$, $i=1,\dots, n+1$. By Lemma 1 the functions $z_i=\widehat{x}^{\,'}(\overline v_i)\overline\alpha_i$, $i=1,\dots,n+1$, satisfy the corresponding variational equation (see (7)).
Let $y\in U_{\mathbb R^n}(0,\rho)$. Then $y=\sum_{i=1}^{n+1}\beta_ie_i$ for some $\beta_i>0$ such that $\sum_{i=1}^{n+1}\beta_i=1$. Next, arguing as above, we see that $y=(\widehat{x}^{\,'}(\overline v)\overline\alpha)(T)$, where $\overline v=(\overline v_1,\dots,\overline v_{n+1})$, and $\overline\alpha=(\beta_1\alpha_{11} +\dots+\beta_{n+1}\alpha_{(n+1)1}-1,\beta_1\alpha_{12},\dots, \beta_1\alpha_{1N_1},\dots,\beta_{n+1}\alpha_{(n+1)2},\dots, \beta_{n+1}\alpha_{(n+1)N_{n+1}})$.
This proves inclusion (8): indeed, $y=(\widehat{x}^{\,'}(\overline v)\overline\alpha)(T)$ for each $y\in U_{\mathbb R^n}(0,\rho)$, and the tuple $\overline v$ (of cardinality $\widehat N-1$, where $\widehat N=\sum_{i=1}^{n+1}N_i-n$) is independent of $y$. Proposition 1 is proved.

Proof of Theorem 1. Since $\Lambda(\widehat x,\widehat u)=\varnothing$, Proposition 1 can be applied. Let $\overline v=(v_1,\dots,v_{\widehat N-1})\in \mathcal U^{\widehat N-1}$ be the tuple from that proposition and let $x(\,\cdot\,,\overline\alpha)=x(\,\cdot\,,\overline\alpha;\overline v)$ be the function from Lemma 1 corresponding to $\overline v$ and defined for all $\overline\alpha\in\mathcal O(\widehat{\overline \alpha})$, where $\widehat{\overline \alpha}=\widehat{\overline \alpha}(\widehat N)$.
Let $\mathcal O_0(\widehat{\overline \alpha})\subset\mathcal O(\widehat{\overline \alpha})$ be the neighbourhood from Lemma 3. Reducing $\mathcal O_0(\widehat{\overline \alpha})$ if necessary we can assume that the mapping $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha)$ is bounded on $\mathcal O_0(\widehat{\overline \alpha})$.
We apply the implicit function lemma (Lemma 4) to $X=(L_\infty([t_0, T]))^{\widehat N}$, ${K=\mathcal A_{\widehat N}}$, $\widehat w=\widehat{\overline \alpha}$, $W=\mathcal O_0(\widehat{\overline \alpha})$ and $\widehat \Phi\colon W\to\mathbb R^{n}$ equal to the mapping $\overline\alpha\mapsto x(T,\overline\alpha)$. It is clear that (8) is equivalent to $0\in\operatorname{int}\widehat\Phi'(\widehat w)(K-\widehat w)$. Hence all the assumptions of Lemma 4 are met.
Let $s_0\in\mathbb N$ be such that for all $s\geqslant s_0$ the mappings $\overline\alpha\mapsto x_s(\,\cdot\,,\overline\alpha)=x_s(\,\cdot\,,\overline\alpha;\overline v)$ (where $\overline v$ is the above tuple) as defined in Lemma 3 belong to the space $C({W\cap K}, C([t_0, T],\mathbb R^n))$ and converge in this space to the mapping $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha)$ as $s\to\infty$. Hence for such $s$ the formula
$$
\begin{equation*}
\Phi_s(\overline\alpha)=x_s(T,\overline\alpha)
\end{equation*}
\notag
$$
defines continuous mappings $\Phi_s\colon W\cap K\to\mathbb R^{n}$, which lie in $C(W\cap K,\,\mathbb R^{n})$ and converge in this space to $\overline\alpha\mapsto \widehat\Phi(\overline\alpha)=x(T,\overline\alpha)$ as $s\to\infty$.
Let $V$ be a neighbourhood of the function $\widehat x(\,\cdot\,)$, and let $\varepsilon > 0$ be such that $U_{C([t_0, T],\mathbb R^n)}(\widehat x(\,\cdot\,),\varepsilon) \subset V$. Increasing $s_0$ if necessary, by the above we have $\|x_s(\,\cdot\,,\overline\alpha)-x(\,\cdot\,,\overline\alpha)\|_{C([t_0, T],\mathbb R^n)}<\varepsilon/2$ for all $s \geqslant s_0$ and $\overline\alpha\in W\cap K$. Next, the mapping $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha)$ is continuous at the point $\widehat{\overline \alpha}$, and so there exists $\delta_0>0$ such that $\|x(\,\cdot\,,\overline\alpha)-\widehat x(\,\cdot\,)\|_{C([t_0, T],\mathbb R^n)}<\varepsilon/2$ whenever $\|\overline\alpha-\widehat{\overline \alpha}\|_{(L_\infty([t_0, T]))^{\widehat N}}<\delta_0$.
Hence, if $s\geqslant s_0$ and $\overline\alpha\in W\cap K$ are such that $\|\overline\alpha\,{-}\,\widehat{\overline \alpha}\|_{(L_\infty([t_0, T]))^{\widehat N}}<\delta_0$, then
$$
\begin{equation}
\begin{aligned} \, \notag &\|x_s(\,\cdot\,,\overline\alpha)-\widehat x(\,\cdot\,)\|_{C([t_0,T],\mathbb R^n)}\leqslant\|x_s(\,\cdot\,,\overline\alpha)-x(\,\cdot\,,\overline\alpha)\|_{C([t_0, T],\mathbb R^n)} \\ &\qquad\qquad +\|x(\,\cdot\,,\overline\alpha)-\widehat x(\,\cdot\,)\|_{C([t_0, T],\mathbb R^n)}<\varepsilon. \end{aligned}
\end{equation}
\tag{13}
$$
Let $r_0$ and $\gamma$ be the constants from Lemma 4. Choosing $r\in(0,r_0]$ so as to have $\gamma r<\delta_0$ and increasing $s_0$ if necessary, we have $\Phi_s\in U_{C(W\cap K,\,\mathbb R^{n})}(\widehat \Phi,r)$ for all $s \geqslant s_0$. We fix an arbitrary $s\geqslant s_0$.
Let $y\in U_{\mathbb R^{n}}(\widehat x(T),r)$ ($\widehat x(T)=\widehat\Phi(\widehat w)$). If $g_{s}(y)=g_{\Phi_s}(y)\in W\cap K$ is the corresponding element from Lemma 4, then by this lemma, for this $y$, setting $g_{s}(y)=\overline\alpha_y$ we have
$$
\begin{equation}
x_s(T,\overline\alpha_y)=y, \qquad \|\overline\alpha_y-\widehat{\overline \alpha}\|_{(L_\infty([t_0, T]))^{\widehat N}}\leqslant\gamma r.
\end{equation}
\tag{14}
$$
The quantity on the right in this inequality is smaller than $\delta_0$. Hence, using (13) we obtain
$$
\begin{equation}
\|x_s(\,\cdot\,,\overline\alpha_y)-\widehat x(\,\cdot\,)\|_{C([t_0,T],\mathbb R^n)}<\varepsilon.
\end{equation}
\tag{15}
$$
Let $F_s\colon C([t_0, T],\mathbb R^n)\times\mathcal A_{\widehat N} \to C([t_0, T],\mathbb R^n)$, $s\in\mathbb N$, be the mappings defined in Lemma 2, where $x_0=\widehat x(t_0)$ and $\overline u=(\widehat u,\overline v)$. According to Lemma 3, for each ${s\geqslant s_0}$ the function $x_s(\,\cdot\,,\overline\alpha_y)$ is a solution of the equation $F_s(x,\overline\alpha_y)(t)=0$ for all $t\in[t_0, T]$, that is,
$$
\begin{equation}
F_s\bigl(x_s(\,\cdot\,,\overline\alpha_y),\overline\alpha_y\bigr)(t)=0, \qquad t\in[t_0, T].
\end{equation}
\tag{16}
$$
In particular, $F_s(x_s(\,\cdot\,,\overline\alpha_y),\overline\alpha_y)(t_i) =y_s(x_s(\,\cdot\,,\overline\alpha_y),\overline\alpha_y)(t_i)=0$, $i=0,1,\dots,s-1$ (see (42)). Hence, setting $x_{sy}(t)=x_s(t,\overline\alpha_y)$ we find that
$$
\begin{equation*}
x_{sy}(t_i) =\widehat x(t_0)+\sum_{m=0}^{i-1}\int_{t_m}^{t_{m+1}} f\bigl(t_m,x_{sy}(t_m),u_s(\overline\alpha_y)(t)\bigr)\,dt,
\end{equation*}
\notag
$$
$i=0,1,\dots,s-1$ (the sum is absent for $i=0$), where $u_s(\overline\alpha)=u_s(\overline\alpha;\overline u)$ (for each $s\in \mathbb N$) is the piecewise constant function on $[t_0,T]$ constructed in Lemma 2, which equals $u_k^s\in U$ (up to the values at the endpoints) on the subinterval $\Delta_{ik}(s)$ of length $\lambda^s_{ki}h$ of the interval $[t_i,t_{i+1}]$, $i=0,\dots,s-1$ (where $h=(T-t_0)/s$ and $\sum_{k=1}^{p_s}\lambda^s_{ki}=1$).
Now it follows from the last equality that $x_{sy}(t_{0})=\widehat x(t_0)$ and
$$
\begin{equation}
\begin{aligned} \, \notag x_{sy}(t_{i+1}) &=x_{sy}(t_i)+\int_{t_i}^{t_{i+1}}f\bigl(t_i,x_{sy}(t_i),u_s(\overline\alpha_y)(t)\bigr)\,dt \\ &=x_{sy}(t_i)+h\sum_{k=1}^{p_s}\lambda^s_{ki}f\bigl(t_i,x_{sy}(t_i), u_{k}^s\bigr), \qquad i=0,1,\dots,s-1. \end{aligned}
\end{equation}
\tag{17}
$$
For each $i$ the sum on the right-hand side of (17) lies in the convex hull of the set $f(t_i,x_{sy}(t_i),U)=\bigl\{f(t_i,x_{sy}(t_i),u)\in\mathbb R^n\colon u\in U\bigr\}$. Therefore, by Carathéodory’s theorem there exist numbers $\alpha_{ik}\geqslant0$, $k=1,\dots,n+1$, $\sum_{k=1}^{n+1}\alpha_{ik}=1$, and controls $u_{ik}\in U$, $k=1,\dots,n+1$ (depending on $y$ and $s$) such that
$$
\begin{equation*}
\sum_{k=1}^{p_s}\lambda^s_{ki}f(t_i,x_{sy}(t_i), u_{k}^s)=\sum_{k=1}^{n+1}\alpha_{ik}f(t_i,x_{sy}(t_i), u_{ik}), \qquad i=0,1,\dots,s-1.
\end{equation*}
\notag
$$
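The Carathéodory step just invoked is constructive: a convex combination of more than $n+1$ points in $\mathbb R^n$ can always be thinned, via an affine dependence, to one with at most $n+1$ points and the same value. As an aside, here is a standard numerical sketch of this reduction (the random data are illustrative only, not part of the proof):

```python
import numpy as np

def caratheodory(points, weights, tol=1e-12):
    """Thin a convex combination sum_k w_k p_k of points in R^n down to
    at most n + 1 points with the same weighted sum (Caratheodory)."""
    P = np.asarray(points, dtype=float)   # shape (m, n)
    w = np.asarray(weights, dtype=float)  # w_k >= 0, sum w_k = 1
    n = P.shape[1]
    while len(w) > n + 1:
        # A nontrivial c with sum_k c_k p_k = 0 and sum_k c_k = 0 exists,
        # because m > n + 1 points in R^n are affinely dependent:
        A = np.vstack([P.T, np.ones(len(w))])
        c = np.linalg.svd(A)[2][-1]       # unit vector in the null space of A
        if c.max() < -c.min():
            c = -c                        # make the largest entries positive
        pos = c > tol
        t = np.min(w[pos] / c[pos])       # largest step keeping w - t*c >= 0
        w = w - t * c                     # at least one weight vanishes
        keep = w > tol
        P, w = P[keep], w[keep]
    return P, w

rng = np.random.default_rng(0)
pts = rng.standard_normal((10, 2))        # ten points in the plane (n = 2)
wts = rng.random(10)
wts /= wts.sum()
bary = wts @ pts                          # the original convex combination
P, w = caratheodory(pts, wts)             # now at most n + 1 = 3 points
```

This is exactly how, in the proof, the $p_s$ terms of the sum in (17) are replaced by the $n+1$ weights $\alpha_{ik}$ and controls $u_{ik}$.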
Setting $x_{iy}=x_{sy}(t_i)$, $i=0,1,\dots,s$, and using (17) and the last equality, we find that $\overline x_y=(\widehat x(t_0), x_{1y},\dots,x_{sy})$ is an admissible trajectory for (3). From (15) it is clear that the piecewise linear function $l(\overline x_y)$ lies in $U_{C([t_0, T],\mathbb R^n)}(\widehat x(\,\cdot\,),\varepsilon)\subset V$.
Next, in view of (14) we have $x_{sy}=x_s(T,\overline\alpha_y)=y$, and this holds for every $y \in U_{\mathbb R^{n}}(\widehat x(T),r)$ and every $s\geqslant s_0$. Thus, for the given neighbourhood $V$ of the trajectory $\widehat x(\,\cdot\,)$ we have found $s_0$ such that for each $s\geqslant s_0$ any point $y$ of some neighbourhood of $\widehat x(T)$ is the endpoint of a trajectory $\overline x_y=(\widehat x(t_0), x_{1y},\dots,x_{sy})$ admissible for (3) whose interpolation $l(\overline x_y)$ lies in $V$. This proves the second part of Theorem 1, which, as already mentioned, implies its first part.
§ 3. Examples

Example 1. Consider the control system
$$
\begin{equation}
\begin{gathered} \, \dot x_1=u, \quad \dot x_2=x_1, \qquad u(t)\in U=\{-1,0,1\} \quad\text{for almost all } t\in[0,1], \\ x_1(0)=x_2(0)=0, \qquad x_1(1)=x_2(1)=0. \end{gathered}
\end{equation}
\tag{18}
$$
Let $s\in \mathbb N$ and $h=h(s)=1/s$, so that the points $t_{i+1}=t_i+h$, $i=0,1,\dots,s-1$, $t_0= 0$, $t_s=1$, define a uniform partition of the interval $[0, 1]$. The corresponding difference approximation of system (18) can be written as
$$
\begin{equation}
x_{1i}=h\sum_{j=0}^{i-1}\sum_{k=1}^3\alpha_{jk}u_{jk}, \qquad i=1,\dots,s,
\end{equation}
\tag{19}
$$
$$
\begin{equation}
x_{2i}=h\sum_{j=0}^{i-1}x_{1j}, \qquad i=1,\dots,s,
\end{equation}
\tag{20}
$$
$$
\begin{equation}
x_{10}=x_{20}=0 \quad\text{and} \quad x_{1s}=x_{2s}=0,
\end{equation}
\tag{21}
$$
where $\alpha_{jk}\geqslant0$, $\sum_{k=1}^3\alpha_{jk}=1$, and $u_{jk}\in U$, $j=0,1,\dots,s-1$, $k=1,2,3$. The pair $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$, where $(\widehat x_1(\,\cdot\,),\widehat x_2(\,\cdot\,))=(0,0)$ and $\widehat u(\,\cdot\,)=0$, is admissible for system (18). We investigate the controllability of this system with respect to the zero trajectory $\widehat x(\,\cdot\,)=0$. In this case $\Lambda(0,0)$ (see (4)) is the set of nonzero functions $p(\,\cdot\,)=(p_1(\,\cdot\,), p_2(\,\cdot\,))$ such that
$$
\begin{equation*}
\dot p_1(t)=-p_2(t), \quad \dot p_2(t)=0\quad\text{and} \quad \max_{u\in U}p_1(t)u=0 \quad\text{for almost all } t\in[0, 1].
\end{equation*}
\notag
$$
Substituting $u=-1$ and $u=1$ into the last equality we find that $p_1(\,\cdot\,)=0$. But then $p_2(\,\cdot\,)=0$, and so $\Lambda(0,0)=\varnothing$. Therefore, the conclusions of Theorem 1 hold for the system under consideration.

Example 2. Consider the control system
$$
\begin{equation}
\begin{gathered} \, \dot x_1=u, \quad \dot x_2=u^3, \qquad u(t)\in U=[c,+\infty) \quad\text{for almost all } t\in[0,1], \\ x_1(0)=x_2(0)=0\quad\text{and} \quad x_1(1)=x_2(1)=1, \end{gathered}
\end{equation}
\tag{22}
$$
where $c\in\mathbb R$. As in the preceding example, the corresponding difference approximation of system (22) can be written in the form
$$
\begin{equation}
x_{1i}=h\sum_{j=0}^{i-1}\sum_{k=1}^3\alpha_{jk}u_{jk}, \qquad i=1,\dots,s,
\end{equation}
\tag{23}
$$
$$
\begin{equation}
x_{2i}=h\sum_{j=0}^{i-1}\sum_{k=1}^3\alpha_{jk}u^3_{jk}, \qquad i=1,\dots,s,
\end{equation}
\tag{24}
$$
$$
\begin{equation}
x_{10}=x_{20}=0\quad\text{and} \quad x_{1s}=x_{2s}=1,
\end{equation}
\tag{25}
$$
where $\alpha_{jk}\geqslant0$, $\sum_{k=1}^3\alpha_{jk}=1$ and $u_{jk}\in U$, $j=0,1,\dots,s-1$, $k=1,2,3$. The pair $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$, where $\widehat x(t)=(\widehat x_1(t),\widehat x_2(t))=(t,t)$ and $\widehat u(t)=1$, $t\in[0,1]$, is admissible for system (22). We investigate the controllability of this system with respect to the trajectory $\widehat x(\,\cdot\,)$. The following result holds.

Proposition 2. System (23)–(25) is locally controllable with respect to the trajectory $\widehat x(\,\cdot\,)$ if and only if $c<-2$.

Proof. Necessity. Assume for contradiction that $c \geqslant -2$. We show that system (23)–(25) is not locally controllable with respect to $\widehat x(\,\cdot\,)$ in this case. To do this it suffices to verify that any neighbourhood of the vector $(0,0,1,1)$ contains points that are not reachable for any $s$.
Let the sequences $x_{1i}$ and $x_{2i}$, $i=1,\dots,s$, satisfy (23) and (24) for some $\alpha_{jk}\geqslant0$ such that $\sum_{k=1}^3\alpha_{jk}=1$, and $u_{jk}\in U$, $j=0,1,\dots,s-1$, $k=1,2,3$, and, in addition, $x_{10}=x_{20}=0$ and $x_{1s}=1$. Then
$$
\begin{equation*}
\begin{aligned} \, x_{2s} &=h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}u^3_{jk} =h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}(1+(u_{jk}-1))^3 \\ &=h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}+ 3h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}(u_{jk}-1) \\ &\qquad+h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}(u_{jk}-1)^2(2+u_{jk})\geqslant1, \end{aligned}
\end{equation*}
\notag
$$
because the first term on the right is clearly equal to 1, the second is 0 since $x_{1s}=h\sum_{j=0}^{s-1}\sum_{k=1}^3\alpha_{jk}u_{jk}=1$, and the third term is nonnegative, since $u_{jk}\geqslant-2$ for all $j=0,1,\dots,s-1$ and $k=1,2,3$ by assumption. Therefore, the point $(0,0,1,1-\varepsilon)$ is not reachable for any $\varepsilon>0$.
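The identity underlying this estimate, $u^3=1+3(u-1)+(u-1)^2(u+2)$, is easy to probe numerically. The following Python sketch (the grid size, control range and random weights are our own illustrative choices, not part of the paper) samples admissible data with $u_{jk}\geqslant-2$ and checks the resulting inequality $x_{2s}\geqslant 1+3(x_{1s}-1)$:

```python
import random

random.seed(0)

s = 40
h = 1.0 / s  # t_0 = 0, T = 1, as in Example 2

def endpoint_sums(weights, controls):
    """x_{1s} and x_{2s} from (23) and (24) at i = s."""
    s1 = h * sum(a * u for a, u in zip(weights, controls))
    s3 = h * sum(a * u ** 3 for a, u in zip(weights, controls))
    return s1, s3

for trial in range(200):
    weights, controls = [], []
    for j in range(s):
        raw = [random.random() for _ in range(3)]
        tot = sum(raw)
        weights += [r / tot for r in raw]  # alpha_{j1} + alpha_{j2} + alpha_{j3} = 1
        controls += [random.uniform(-2.0, 5.0) for _ in range(3)]  # u_{jk} >= -2
    s1, s3 = endpoint_sums(weights, controls)
    # u^3 = 1 + 3(u - 1) + (u - 1)^2 (u + 2) together with u >= -2 gives
    # x_{2s} >= 1 + 3 (x_{1s} - 1)
    assert s3 >= 1.0 + 3.0 * (s1 - 1.0) - 1e-9
```

In particular, whenever $x_{1s}=1$ the inequality reduces to $x_{2s}\geqslant1$, so points of the form $(0,0,1,1-\varepsilon)$ are indeed unreachable.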
Sufficiency. Let $c<-2$. The local controllability of system (23)–(25) with respect to the trajectory $\widehat x(\,\cdot\,)$ will be proved using Theorem 1. In this case the set $\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ (see (4)) is the family of nonzero functions $p(\,\cdot\,)=(p_1(\,\cdot\,),p_2(\,\cdot\,))$ such that
$$
\begin{equation*}
\begin{gathered} \, \dot p_1(t)=0, \qquad \dot p_2(t)=0, \\ \text{and}\quad \max_{u\in [c,+\infty)}\bigl(p_1(t)u+p_2(t)u^3\bigr)=p_1(t)+p_2(t) \quad\text{for a.e. } t\in[0, 1]. \end{gathered}
\end{equation*}
\notag
$$
Let $(p_1(\,\cdot\,),p_2(\,\cdot\,))\in\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$. It is clear that the functions $p_1(\,\cdot\,)$ and $p_2(\,\cdot\,)$ are constants. It follows from the maximum condition that $p_2\leqslant0$. If $p_2=0$, then $p_1\ne0$, and then the function $u\mapsto p_1u$ must attain its maximum on $[c,+\infty)$ at the point 1 (as required in the maximum condition), but this is clearly impossible. Therefore, $p_2<0$. If $p_1\leqslant0$, then the function $f(u)=p_1u+p_2u^3$ is monotonically decreasing; hence it cannot attain its maximum on $[c,+\infty)$ at 1 either.
Thus, $p_1>0$ and $p_2<0$. The function $f$ attains its maximum at 1, so the zero derivative at this point implies that $p_1=-3p_2$. Hence $f(1)=p_1+p_2=-2p_2$.
The function $f$ vanishes at the point ${-}\sqrt{3}$, is monotonically decreasing on the interval $(-\infty, -\sqrt{3}\,]$, is equal to $-2p_2$ at the point $-2<-\sqrt{3}$, and therefore it is greater than $-2p_2$ at the point $c<-2$. But this contradicts the fact that $f$ attains its maximum at 1. Therefore, for $c<-2$ no nonzero pair $(p_1,p_2)$ lies in $\Lambda(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$, and now the required result is immediate from Theorem 1.
This proves Proposition 2.
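The maximum analysis in the sufficiency part can be checked on a representative pair. A minimal Python sketch, taking the normalization $p_2=-1$, $p_1=-3p_2=3$ (our own illustrative choice):

```python
# f(u) = p1*u + p2*u^3 with p1 = 3, p2 = -1, so f(u) = 3u - u^3
def f(u):
    return 3.0 * u - u ** 3

# f(1) = f(-2) = 2, so the maximum of f over [c, +inf) sits at u = 1 when c >= -2
assert f(1.0) == f(-2.0) == 2.0

# for c < -2 the left endpoint beats f(1): the maximum condition fails at u = 1,
# so no such pair (p1, p2) lies in Lambda
for c in (-2.001, -3.0, -10.0):
    assert f(c) > f(1.0)
```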
§ 4. Application We need the following definitions. Let $\mathcal M$ be a topological space and $Z$ be a normed space. We let $C(\mathcal M, Z)$ denote the space of continuous mappings $G\colon {\mathcal M\to Z}$ with finite norm
$$
\begin{equation*}
\|G\|_{C(\mathcal M, Z)}=\sup_{x\in \mathcal M}\|G(x)\|_Z.
\end{equation*}
\notag
$$
Let $X$ and $Y$ be normed spaces, $\Sigma$ be a topological space and $M$ be a nonempty subset of $X$. We let $C^1_x(M\times\Sigma, Y)$ denote the restriction to $M\times\Sigma$ of the set of mappings $F\colon X\times\Sigma\to Y$ that are continuous, together with their $x$-derivatives, and have finite norms
$$
\begin{equation*}
\|F\|_{C^1_x(M\times\Sigma,\,Y)}=\|F\|_{C(M\times\Sigma,\, Y)}+\|F_x\|_{C(M\times\Sigma, \,\mathcal L(X,Y))},
\end{equation*}
\notag
$$
where $\mathcal L(X,Y)$ is the space of linear continuous operators from $X$ to $Y$. Theorem 2 (generalized implicit function theorem). Let $X$ and $Y$ be Banach spaces and $\Sigma$ be a metric space, let $\widehat \sigma\in \Sigma$, let $Q$ be a convex closed subset of $X$, and let $V$ be a neighbourhood of a point $\widehat x\in Q$. Next let $\widehat F\in C^1_x((V\cap Q)\times\Sigma,\, Y)$, $\widehat F(\widehat x,\widehat \sigma)=0$, and let the operator $\Lambda=\widehat F_x(\widehat x,\widehat \sigma)$ be invertible. Let $\{L_s\}_{s\in \mathbb N}$ be a sequence of complemented subspaces of $Y$, $\{P_s\}_{s\in \mathbb N}$ be the corresponding sequence of continuous projections on $L_s$ satisfying $\sup_{s\in\mathbb N}\|P_s\|< \infty$, and let $N_s=\Lambda^{-1}(L_s)$, $s\in\mathbb N$. Then the following results hold. 1) There exist $r>0$ and neighbourhoods $V_0\subset V$ and $U_0$ of the points $\widehat x$ and $\widehat \sigma$ such that, if for some $s \in \mathbb N$ the mappings $\widehat F$ and $F_s\in U_{C_x^1((V\cap Q)\times\Sigma,\,L_s)}(P_s\widehat F,r)$ satisfy $x-\Lambda^{-1}\widehat F(x,\sigma)\in Q$ for all $(x,\sigma)\in (V_0\cap Q)\times U_0$ and $x-\Lambda^{-1}F_s(x,\sigma) \in Q$ for all $(x,\sigma)\in (V_0\cap Q\cap (\widehat x+N_s))\times U_0$, then there exist continuous mappings $g_{\widehat F}\colon U_0\to V_0\cap Q$ and $g_s=g_{F_s}\colon U_0\to V_0\cap Q\cap(\widehat x+N_s)$ such that
$$
\begin{equation}
\widehat F(g_{\widehat F}(\sigma),\sigma)=0, \qquad F_s(g_{s}(\sigma),\sigma)=0
\end{equation}
\tag{26}
$$
for all $\sigma\in U_0$ and, moreover, $g_{\widehat F}(\widehat \sigma)=\widehat x$. 2) The equalities $\widehat F(x,\sigma) = 0$, $F_s(x,\sigma)=0$ on $(V_0\,{\cap}\, Q) \times U_0$ and $(V_0\,{\cap}\, Q\,{\cap}\, (\widehat x+ N_s))\times U_0$, respectively, are only possible if $x=g_{\widehat F}(\sigma)$ and $x=g_{s}(\sigma)$. 3) There exist a positive constant $c$ and a neighbourhood $U'_0\subset U_0$ of $\widehat \sigma$ such that
$$
\begin{equation}
\|g_{\widehat F}-g_s\|_{C(U'_0, X)}\leqslant c\|\widehat F-F_s\|_{C((V\cap Q)\times\Sigma,\,Y)}.
\end{equation}
\tag{27}
$$
Proof. For each $\delta>0$ such that $U_{X\times\Sigma}((\widehat x,\widehat \sigma),\delta)\subset V\times\Sigma$ we set
$$
\begin{equation*}
\beta_1(\delta) = \sup_{(x,\sigma)\in U_{X\times \Sigma}((\widehat x,\widehat \sigma), \delta)}\|\widehat{F}_x(x,\sigma)- \Lambda \|
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
\beta_2(\delta) = \sup_{(x,\sigma)\in U_{X\times \Sigma}((\widehat x,\widehat \sigma), \delta)}\|\widehat{F}(x,\sigma)\|_Y.
\end{equation*}
\notag
$$
Next, set $a=\|\Lambda^{-1}\|$, $b=\sup_{s\in\mathbb N}\|P_s\|$, and let $r>0$ and $\delta_1\geqslant\delta_2\geqslant\delta_3> 0$ be constants satisfying
$$
\begin{equation}
\begin{gathered} \, \notag \varepsilon (r,\delta_1) < \frac{1}{a}, \qquad \frac{a}{(1-\varepsilon (r,\delta_1)a)}\bigl(r +b\beta_2(\delta_2)\bigr)+ \delta_2 < \delta_1 \\ \text{and}\quad \frac{a}{(1-\varepsilon(r,\delta_1)a)}\bigl(r+b\beta_2(\delta_3)\bigr)<\delta_2, \end{gathered}
\end{equation}
\tag{28}
$$
where $\varepsilon (r,\delta_1)=3r + b\beta_1(\delta_1)$ (clearly, such constants always exist).
Set $V_0=U_X(\widehat x,\delta_1)$, $U_0=U_\Sigma(\widehat \sigma,\delta_2)$, and let $s\in\mathbb N$ be such that $F_s\in U_{C^1_x((V\cap Q)\times\Sigma,\, L_s)}(P_s\widehat F,r)$. We claim that for all $x,x'\in V_0\cap Q$ and $\sigma\in U_0$,
$$
\begin{equation}
\bigl\|F_s(x,\sigma)-F_s(x',\sigma)-P_s\Lambda(x-x')\bigr\|_{Y} \leqslant \varepsilon(r,\delta_1)\|x-x'\|_{X}.
\end{equation}
\tag{29}
$$
Indeed, first we have ($F_{sx}$ is the partial derivative of $F_s$ with respect to $x$)
$$
\begin{equation}
\begin{aligned} \, \notag &\|F_{sx}(x,\sigma)-F_{sx}(\widehat x,\widehat \sigma)\|\leqslant\|F_{sx}(x,\sigma)-P_s\widehat F_x(x,\sigma)\| \\ &\qquad\qquad +\|P_s\widehat F_x(x,\sigma)-P_s\Lambda\| +\|F_{sx}(\widehat x,\widehat \sigma)-P_s\Lambda\|<2r+b\beta_1(\delta_1). \end{aligned}
\end{equation}
\tag{30}
$$
Next, the sets $V_0$ and $Q$ are convex, and therefore, if $x,x' \in V_0\cap Q$, then $x_\theta=(1-\theta)x+\theta x'\in V_0\cap Q$ for $\theta\in[0,1]$. Applying the mean value theorem to the mapping $x\mapsto F_s(x,\sigma)-F_{sx}(\widehat x,\widehat \sigma)x$, where $\sigma\in U_0$, and using (30) and the choice of $F_s$, we obtain
$$
\begin{equation*}
\begin{aligned} \, &\bigl\|F_s(x,\sigma)-F_s(x',\sigma)- P_s\Lambda(x-x')\bigr\|_{Y} \\ &\qquad \leqslant\bigl\|F_s(x,\sigma)-F_s(x',\sigma) -F_{sx}(\widehat x,\widehat \sigma)(x-x')\bigr\|_{Y} \\ &\qquad \qquad+\bigl\|F_{sx}(\widehat x,\widehat \sigma)(x-x')-P_s\Lambda(x-x')\bigr\|_Y \\ &\qquad \leqslant\sup_{\theta\in[0,1]}\bigl\|F_{sx}(x_\theta,\sigma)-F_{sx}(\widehat x,\widehat \sigma)\bigr\|\, \|x-x'\|_{X}+\bigl\|F_{sx}(\widehat x,\widehat \sigma)-P_s\Lambda\bigr\|\,\|x-x'\|_X \\ &\qquad <\bigl(2r+b\beta_1(\delta_1)\bigr)\|x-x'\|_{X}+r\|x-x'\|_X=\varepsilon (r,\delta_1)\|x-x'\|_X, \end{aligned}
\end{equation*}
\notag
$$
proving inequality (29).
Let $(x,\sigma)\!\in\! (V'_0\cap Q\cap (\widehat x+N_s))\times U_0$, where $V'_0\!=\!U_X(\widehat x,\delta_2)$, and suppose that ${x\!-\!\Lambda^{-1}F_s(x,\sigma)\!\in\! Q}$ for all $(x,\sigma)\in (V_0\cap Q\cap (\widehat x+N_s))\times U_0$. Consider the sequence
$$
\begin{equation}
x_n=x_{n-1}-\Lambda^{-1}F_s(x_{n-1},\sigma), \quad n\in\mathbb N, \qquad x_0=x
\end{equation}
\tag{31}
$$
(a modified Newton method). We show that it lies in $V_0\cap Q\cap (\widehat x+N_s)$ and is a Cauchy sequence. The first claim is proved using induction. It is clear that $x_0\in V_0\cap Q\cap (\widehat x+N_s)$. Let $x_k\in V_0\cap Q\cap (\widehat x+N_s)$, $1\leqslant k\leqslant n$. We verify that $x_{n+1}\in V_0\cap Q\cap (\widehat x+N_s)$.
Applying the operator $\Lambda$ to both sides of (31) we find that
$$
\begin{equation}
\Lambda(x_n-x_{n-1})=-F_s(x_{n-1},\sigma).
\end{equation}
\tag{32}
$$
Since $x_n-x_{n-1}\in N_s$, we have $\Lambda(x_n-x_{n-1})\in P_sY$, which gives $\Lambda(x_n-x_{n-1})=P_s\Lambda(x_n-x_{n-1})$. Using this fact, applying (31), (32) and (29) in succession, and repeating this procedure we have (for brevity we set $\varepsilon=\varepsilon(r,\delta_1)$ and recall that $a=\|\Lambda^{-1}\|$)
$$
\begin{equation}
\begin{aligned} \, \notag \|x_{n+1}-x_n\|_X &\leqslant a\|F_s(x_n,\sigma)\|_Y=a\bigl\|F_s(x_n,\sigma)-F_s(x_{n-1},\sigma) -P_s\Lambda(x_n-x_{n-1})\bigr\|_Y \\ &\leqslant \varepsilon a\|x_n-x_{n-1}\|_X \leqslant\dotsb\leqslant(\varepsilon a)^n\|x_1-x\|_X. \end{aligned}
\end{equation}
\tag{33}
$$
Next, using the triangle inequality and the formula for the sum of a geometric progression, from (33) and (31), by the choice of $F_s$ we obtain
$$
\begin{equation}
\begin{aligned} \, \notag &\|x_{n+1}-\widehat x\|_X \leqslant\|x_{n+1}-x\|_X+\|x-\widehat x\|_X \\ \notag &\qquad\leqslant \|x_{n+1}-x_n\|_X+\dots+\|x_1-x\|_X +\|x-\widehat x\|_X \\ \notag &\qquad\leqslant \bigl((\varepsilon a)^n+(\varepsilon a)^{n-1}+\dots+1\bigr)\|x_1-x\|_X+\|x-\widehat x\|_X \\ \notag &\qquad< \frac{a}{1-\varepsilon a}\|F_s(x,\sigma)\|_Y+\|x-\widehat x\|_X \\ \notag &\qquad \leqslant \frac{a}{1-\varepsilon a}\bigl(\|F_s(x,\sigma)-P_s\widehat F(x,\sigma)\|_Y +\|P_s\widehat F(x,\sigma)\|_Y\bigr)+\|x-\widehat x\|_X \\ &\qquad <\frac{a}{1-\varepsilon a}(r+b\beta_2(\delta_2))+\delta_2<\delta_1, \end{aligned}
\end{equation}
\tag{34}
$$
that is, $x_{n+1}\in V_0$.
By the induction assumption $x_n\in V_0\cap Q\cap (\widehat x+N_s)$, hence $x_{n+1}=x_n-\Lambda^{-1}F_s(x_n,\sigma)\in Q$ and $x_{n+1}=x_n-\Lambda^{-1}F_s(x_n,\sigma)\in \widehat x+N_s+N_s=\widehat x+N_s$. Therefore, the whole sequence $\{x_n\}$ belongs to $V_0\cap Q\cap (\widehat x+N_s)$.
Using (33) and arguing as in the previous inequality, for all $n,l\in\mathbb N$ we have
$$
\begin{equation}
\begin{aligned} \, \notag \|x_{n+l}-x_n\|_X &\leqslant\|x_{n+l}-x_{n+l-1}\|_X+\dots+\|x_{n+1}-x_n\|_X \\ &\leqslant\bigl((\varepsilon a)^{n+l-1}+\dots+(\varepsilon a)^n\bigr)\|x_1-x\|_X < \frac{(\varepsilon a)^{n}}{1-\varepsilon a}\,\|x_1-x\|_X,
\end{aligned}
\end{equation}
\tag{35}
$$
that is, $\{x_n\}$ is a Cauchy sequence.
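Iteration (31) is the classical modified Newton method: the derivative is frozen at $\Lambda$ instead of being recomputed at each step. A scalar Python sketch (the function, the value of $\Lambda$ and the starting point are illustrative, not from the paper) exhibiting the geometric convergence expressed by (33) and (35):

```python
def modified_newton(F, Lam, x0, tol=1e-12, max_iter=500):
    """x_n = x_{n-1} - Lam^{-1} F(x_{n-1}); Lam stays fixed throughout."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / Lam  # scalar stand-in for Lambda^{-1} F(x, sigma)
        x -= step
        if abs(step) < tol:
            break
    return x

# solve x^2 - 2 = 0 near 1.5, freezing Lam = F'(1.5) = 3;
# the iteration contracts geometrically, as in (33)
root = modified_newton(lambda x: x * x - 2.0, Lam=3.0, x0=1.5)
assert abs(root - 2.0 ** 0.5) < 1e-10
```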
The functions $x_n$ are defined on $(V'_0\cap Q\cap (\widehat x+N_s))\times U_0$. Let $(x,\sigma)\in (V'_0\cap Q\cap (\widehat x+N_s))\times U_0$. We set $\widetilde g_s(x,\sigma)=\widetilde g_{F_s}(x,\sigma)=\lim_{n\to\infty}x_n$. It follows from (34) that $\widetilde g_s(x,\sigma)\in V_0$. The set $Q\cap(\widehat x+N_s)$ is closed (the subspace $L_s$ is complemented, and so closed by definition, and $N_s$ is the preimage of a closed set under the continuous mapping $\Lambda$), and therefore $\widetilde g_s(x,\sigma)\in Q\cap(\widehat x+N_s)$. Thus, the mapping $\widetilde g_s\colon (V'_0\cap Q\cap(\widehat x+N_s))\times U_0\to V_0\cap Q\cap (\widehat x+N_s)$ is well defined.
Letting $n \to \infty$ in (32) we obtain the equality $F_s(\widetilde g_s(x,\sigma),\sigma) = 0$ for each $(x,\sigma)\in (V'_0\cap Q\cap (\widehat x+N_s))\times U_0$. We claim that $\widetilde g_s(x,\sigma)=\widetilde g_s(\widehat x,\sigma)$ for such $(x,\sigma)$. Indeed, in view of (29) (and since $\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma)\in N_s$, which implies that $\Lambda(\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma))=P_s\Lambda(\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma))$) we have
$$
\begin{equation}
\begin{aligned} \, \notag &\bigl\|\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma)\bigr\|_X =\bigl\|\Lambda^{-1}\Lambda(\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma))\bigr\|_X \\ \notag &\qquad\leqslant a\bigl\|\Lambda(\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma))\bigr\|_Y \\ \notag &\qquad =a\bigl\|F_s(\widetilde g_s(x,\sigma),\sigma)-F_s(\widetilde g_s(\widehat x,\sigma),\sigma) -P_s\Lambda\bigl(\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma)\bigr)\bigr\|_Y \\ &\qquad\leqslant \varepsilon a\bigl\|\widetilde g_s(x,\sigma)-\widetilde g_s(\widehat x,\sigma)\bigr\|_X. \end{aligned}
\end{equation}
\tag{36}
$$
Now, since $\varepsilon a<1$, we conclude that $\widetilde g_s(x,\sigma)=\widetilde g_s(\widehat x,\sigma)$.
We set $g_s(\sigma)=\widetilde g_s(\widehat x,\sigma)$. This is a mapping from $U_0$ to $V_0\cap Q\cap(\widehat x+N_s)$, and by the above $F_s(g_s(\sigma),\sigma)=0$ for all $\sigma\in U_0$.
From (31) it follows by induction that the functions $x_n$ are continuous on $U_0$ as functions of $\sigma$. Letting $l\to\infty$ in (35), we see that the mapping $\sigma\mapsto g_s(\sigma)$ is the uniform limit of continuous functions, so that it is also continuous.
The fact that $F_s(x,\sigma)=0$ on $(V_0\cap Q\cap(\widehat x+N_s))\times U_0$ only for $x=g_{s}(\sigma)$ is proved similarly to (36).
Thus, assertions 1) and 2) of the theorem are verified for the mapping $F_s$. Setting $L_s=Y$ (so that $P_s$ is the identity operator and $N_s=X$) and repeating verbatim the above arguments, we prove assertions 1) and 2) of the theorem for the mapping $\widehat F$ (except for the equality $g_{\widehat F}(\widehat \sigma)=\widehat x$).
Let us prove the third assertion of the theorem and the equality $g_{\widehat F}(\widehat \sigma)=\widehat x$. From (34) it follows that $\|x_{n+1}-x\|_X\leqslant(a/(1-\varepsilon a))\|F_s(x,\sigma)\|_Y$. Letting $n\to\infty$, we arrive at the inequality
$$
\begin{equation}
\|g_s(\sigma)-x\|_X\leqslant \frac{a}{1-\varepsilon a}\|F_s(x,\sigma)\|_Y,
\end{equation}
\tag{37}
$$
which holds for all $(x,\sigma)\in (V'_0\cap Q\cap (\widehat x+N_s))\times U_0$.
Similarly, if $L_s=Y$, then we obtain the inequality
$$
\begin{equation}
\|g_{\widehat F}(\sigma)-x\|_X\leqslant \frac{a}{1-\varepsilon a}\|\widehat F(x,\sigma)\|_Y,
\end{equation}
\tag{38}
$$
which holds for all $(x,\sigma)\in (V'_0\cap Q)\times U_0$. Now for $x=\widehat x$ and $\sigma=\widehat \sigma$ we have $g_{\widehat F}(\widehat \sigma)=\widehat x$.
We set $U'_0=U_{\Sigma}(\widehat \sigma,\delta_3)$ and let $\sigma\in U'_0$. From (37) for $x\!=\!\widehat x$ we obtain (${\widehat F(\widehat x,\widehat \sigma)\!=\!0}$)
$$
\begin{equation*}
\begin{aligned} \, &\|g_s(\sigma)-\widehat x\|_X\leqslant \frac{a}{1-\varepsilon a}\|F_s(\widehat x,\sigma)\|_Y \\ &\qquad \leqslant\frac{a}{1-\varepsilon a}\bigl(\|F_s(\widehat x,\sigma) -P_s\widehat F(\widehat x,\sigma)\|_Y+\|P_s\|\,\|\widehat F(\widehat x,\sigma)-\widehat F(\widehat x,\widehat \sigma)\|_Y\bigr) \\ &\qquad<\frac{a}{1-\varepsilon a}(r+b\beta_2(\delta_3))<\delta_2. \end{aligned}
\end{equation*}
\notag
$$
Therefore, $g_s(\sigma)\in V'_0$, and thus $g_s(\sigma)\in V'_0\cap Q\cap(\widehat x+N_s)$. Now substituting $x=g_s(\sigma)$ into (38) and subtracting the zero element $F_s(g_s(\sigma),\sigma)$ from the normed expression on the right we arrive at the inequality
$$
\begin{equation*}
\|g_{\widehat F}(\sigma)-g_s(\sigma)\|_X\leqslant c\|\widehat F-F_s\|_{C((V\cap Q)\times\Sigma,\,Y)},
\end{equation*}
\notag
$$
where $c=a/(1-\varepsilon a)$, which holds for each $\sigma\in U'_0$; this is precisely inequality (27). This proves Theorem 2. The above theorem is a generalization of Theorem 4 proved in our paper [8], where $L_s=Y$ for all $s\in\mathbb N$. Before stating the next result we give the requisite definitions. Recall that for $s\in\mathbb N$ we have set $h=h(s)=(T-t_0)/s$ and $t_i=t_0+ih$, $i=0,\dots,s$, thereby obtaining a partition of the interval $[t_0, T]$ into $s$ subintervals $[t_i, t_{i+1}]$ of length $h$, $i=0,\dots,s-1$. We denote by $L_s$ the subspace of $C([t_0, T],\mathbb R^n)$ formed by the ‘polygonal lines’ with knots at the points $t_i$, $i=0,\dots,s$, that is, by the functions $x(\,\cdot\,)$ such that if $t\in [t_0, T]$ (and therefore $t\in [t_m, t_{m+1}]$ for some $0\leqslant m\leqslant s-1$), then
$$
\begin{equation*}
x(t)=x(t_m)+\frac{t-t_m}{h}\bigl(x(t_{m+1}) -x(t_m)\bigr).
\end{equation*}
\notag
$$
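Such a polygonal interpolant is straightforward to realize numerically. The following Python sketch (scalar-valued for simplicity; function names are ours, not the paper's) checks the projection property: interpolating a polygonal line at the same knots reproduces it, and the interpolant never exceeds the sup-norm of the data at the knots.

```python
import math

def polygonal(x, knots):
    """Return the polygonal line with the given knots that agrees with x at the knots."""
    vals = [x(t) for t in knots]
    def p(t):
        for m in range(len(knots) - 1):
            if knots[m] <= t <= knots[m + 1]:
                lam = (t - knots[m]) / (knots[m + 1] - knots[m])
                return (1.0 - lam) * vals[m] + lam * vals[m + 1]
        raise ValueError("t outside [t_0, T]")
    return p

s = 8
knots = [i / s for i in range(s + 1)]  # t_i = t_0 + i*h with t_0 = 0, T = 1
p = polygonal(math.sin, knots)
q = polygonal(p, knots)  # interpolating a polygonal line gives it back (idempotence)
grid = [k / 1000 for k in range(1001)]
assert max(abs(p(t) - q(t)) for t in grid) < 1e-12
# on each segment p is a convex combination of knot values, hence
assert max(abs(p(t)) for t in grid) <= max(abs(math.sin(t)) for t in knots) + 1e-12
```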
It is clear that $\operatorname{dim}L_s=n(s+1)$. Let $P_s\colon C([t_0, T],\mathbb R^n)\to L_s$ be the mapping associating with $x(\,\cdot\,)\in C([t_0, T],\mathbb R^n)$ the function in $L_s$ that interpolates $x(\,\cdot\,)$ at the points $t_i$, $i=0,1,\dots,s$. It is clear that $P_s$ is a continuous projection and $\|P_s\|=1$ for each $s\in\mathbb N$. Let $(x,u)\in C([t_0, T],\mathbb R^n)\times L_\infty([t_0, T],\mathbb R^r)$ and $x_0\in\mathbb R^n$. Set
$$
\begin{equation*}
y_s(x,u, x_0)(t_m)=x(t_m)-x_0-\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}} f\bigl(t_{i},x(t_{i}),u(t)\bigr)\,dt,
\end{equation*}
\notag
$$
$m=0,1,\dots,s$, where the sum is absent for $m=0$. Let $t\in [t_0, T]$. Then $t\in [t_m, t_{m+1}]$ for some $0\leqslant m\leqslant s-1$. Set
$$
\begin{equation}
\begin{aligned} \, \notag \mathcal F_{s}(x,u,x_0)(t) &=y_s(x,u,x_0)(t_m) \\ &\qquad +\frac{t-t_m}{h}\bigl(y_s(x,u,x_0)(t_{m+1})-y_s(x,u,x_0)(t_m)\bigr), \end{aligned}
\end{equation}
\tag{39}
$$
that is, $\mathcal F_{s}(x,u,x_0)(\,\cdot\,)\in L_s$. Thus, the mapping $\mathcal F_s\colon C([t_0, T],\mathbb R^n)\times L_\infty([t_0, T],\mathbb R^r)\to L_s$ is well defined for all $x_0\in\mathbb R^n$ and $s\in\mathbb N$. Let $L>0$. We denote by $Q_L=Q_L([t_0,T],\mathbb R^n)$ the set of Lipschitz vector functions on $[t_0, T]$ with values in $\mathbb R^n$, with Lipschitz constant $L$ (for short, $L$-Lipschitz functions). Recall that the space $C_x^1(M\times\Sigma,\,Y)$ was defined before Theorem 2, and the sets $\mathcal U$ and $\mathcal A_k$, for $k\in\mathbb N$, were defined before Proposition 1. We denote the closed ball in the normed space $Z$ with centre at a point $z$ and radius $r>0$ by $B_Z(z,r)$. Lemma 2 (first approximation lemma). Let $M\subset C([t_0, T],\mathbb R^n) $ be a bounded set, let $N>1$, $\overline u=(u_1,\dots, u_N)\in\mathcal U^N$, $x_0\in\mathbb R^n$, and let $L>0$. Then the mapping $F\colon C([t_0, T],\mathbb R^n)\times(L_\infty([t_0, T]))^N\to C([t_0, T],\mathbb R^n)$ defined, for $t\in[t_0, T]$, by
$$
\begin{equation}
\begin{aligned} \, &F(x,\overline\alpha)(t) =F(x,\overline\alpha;x_0,\overline u)(t) \nonumber \\ &\qquad =x(t)-x_0-\sum_{i=1}^N\int_{t_0}^t\alpha_i(\tau)f\bigl(\tau,x(\tau),u_i(\tau)\bigr) \,d\tau, \qquad \overline\alpha=(\alpha_1,\dots,\alpha_N), \end{aligned}
\end{equation}
\tag{40}
$$
lies in the space $C_x^1((M\cap Q_L)\times\mathcal A_N,\,C([t_0, T],\mathbb R^n))$, and for each $\overline\alpha\in\mathcal A_N$ there exists a sequence of piecewise constant controls $u_s(\overline\alpha;\overline u)\in \mathcal U$, $s\in\mathbb N$ such that the mappings $F_s\colon C([t_0, T],\mathbb R^n)\times\mathcal A_N \to C([t_0, T],\mathbb R^n)$ defined, for all $t\in[t_0, T]$, by
$$
\begin{equation*}
F_s(x,\overline\alpha)(t)=F_s(x,\overline\alpha;x_0,\overline u)(t)=\mathcal F_s\bigl(x,u_s(\overline\alpha;\overline u),x_0\bigr)(t),
\end{equation*}
\notag
$$
also lie in $C_x^1((M\cap Q_L)\times\mathcal A_N,\,C([t_0, T],\mathbb R^n))$ and the sequence of mappings ${F_s-P_s F}$, $s\in\mathbb N$, converges to zero in this space as $s\to\infty$. Moreover, the mappings $F_s$ converge to $F$ in the space $C((M\cap Q_L)\times\mathcal A_N, C([t_0, T],\mathbb R^n))$ as $s\to\infty$. Proof. First we show that the mappings $F$ and $F_s$, $s\in\mathbb N$, lie in the space $C((M\cap Q_L)\times\mathcal A_N,\,C([t_0, T],\mathbb R^n))$ and the $F_s$ converge there to $F$ as $s\to\infty$. Next we show that these mappings also lie in the space $C^1_x((M\cap Q_L)\times\mathcal A_N,\,C([t_0, T],\mathbb R^n))$, and that the mappings $F_s-P_s F$ converge to zero in this space as $s\to\infty$.
Below, we set $\mathcal M=(M\cap Q_L)\times\mathcal A_N$ for brevity.
1) The mapping $F$ is continuous on $C([t_0, T],\mathbb R^n)\times(L_\infty([t_0, T]))^N$ and bounded on the set $\mathcal M$ (for similar arguments, see the beginning of the proof of Lemma 3 in [4]).
Now consider the mappings $F_s$, $s\in\mathbb N$. We have $\overline u=(u_1,\dots,u_N)\in \mathcal U^N$, and so there exists a compact set $\mathcal K\subset\mathbb R^r$ such that $u_i(t)\in \mathcal K\cap U$, $i=1,\dots,N$, for almost all $t\in [t_0, T]$. First, for each $\overline\alpha=(\alpha_1,\dots,\alpha_N)\in\mathcal A_N$ we construct a sequence of piecewise constant controls $u_s(\overline \alpha;\overline u)\in \mathcal U$, $s\in \mathbb N$. Let $s\in \mathbb N$. Consider the covering of $\mathcal K\subset\mathbb R^r$ by open balls $\mathcal O^s_1,\dots,\mathcal O^s_{p_s}$ of radius $1/s$, and let $\psi_1^s(\,\cdot\,),\dots,\psi_{p_s}^s(\,\cdot\,)$ be a partition of unity subordinate to this covering, that is, the functions $\psi_k^s(\,\cdot\,)$, $k=1,\dots,p_s$, are continuous, the support of $\psi_k^s(\,\cdot\,)$ lies in the ball $\mathcal O^s_k$, $0\leqslant\psi_k^s(u)\leqslant1$, and $\sum_{k=1}^{p_s}\psi_k^s(u)=1$ for all $u\in \mathcal K$.
Next, fix $u_k^s\in \mathcal O_k^s\cap U$, $k=1,\dots,p_s$. If for some $k$ the intersection $\mathcal O_k^s\cap U$ is empty, then we put $u_k^s=\widetilde u$, where $\widetilde u$ is an element of $\mathcal K\cap U$ fixed throughout the proof.
Set $\lambda_k^s(t)=\lambda_k^s(t,\overline\alpha,\overline u)=\sum_{i=1}^N\alpha_i(t)\psi_k^s(u_i(t))$, $k=1,\dots,p_s$, $t\in[t_0, T]$. The functions $\lambda_k^s(\,\cdot\,)$ are measurable on $[t_0, T]$, and for almost all $t\in[t_0, T]$
$$
\begin{equation*}
\begin{gathered} \, 0\leqslant\lambda_k^s(t)=\sum_{i=1}^N\alpha_i(t)\psi_k^s(u_i(t))\leqslant\sum_{i=1}^N\alpha_i(t)=1, \qquad k=1,\dots,p_s, \\ \sum_{k=1}^{p_s}\lambda_k^s(t)=\sum_{i=1}^N\alpha_i(t)\sum_{k=1}^{p_s}\psi_k^s(u_i(t))=1. \end{gathered}
\end{equation*}
\notag
$$
Recall that for each $s\in\mathbb N$ the partition of the interval $[t_0, T]$ into $s$ subintervals $[t_i, t_{i+1}]$ of length $h=h(s)=(T-t_0)/s$, $i=0,\dots,s-1$, was defined before Lemma 2. We also set
$$
\begin{equation}
\lambda^s_{ki}=\frac1{h}\int_{t_i}^{t_{i+1}}\lambda^s_k(t)\,dt, \qquad k=1,\dots,p_s, \quad i=0,\dots,s-1.
\end{equation}
\tag{41}
$$
It is clear that $\lambda^s_{ki}\geqslant0$ and $\sum_{k=1}^{p_s}\lambda^s_{ki}=1$, $i=0,\dots,s-1$.
We subdivide each subinterval $[t_i,t_{i+1}]$ into $p_s$ consecutive subsubintervals $\Delta_{ik}(s)$ of length $\lambda^s_{ki}h$, $k=1,\dots, p_s$.
Let the function $u_s(\overline\alpha)=u_s(\overline\alpha;\overline u)$ be defined on $[t_0,T]$ by
$$
\begin{equation*}
u_s(\overline\alpha)(t)=u_k^s, \qquad t\in \operatorname{int}\Delta_{ik}(s), \quad k=1,\dots,p_s, \quad i=0,1,\dots,s-1,
\end{equation*}
\notag
$$
where we choose arbitrary values for $u_s(\overline\alpha)$ at the endpoints of the subsubintervals. It is clear that $u_s(\overline\alpha)$ is a piecewise constant function, and $u_s(\overline\alpha)(t)\in \mathcal K\cap U$ for almost all $t\in[t_0, T]$ and any $\overline\alpha\in\mathcal A_N$.
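The construction of $u_s(\overline\alpha)$ can be sketched in code. The following Python fragment (with $s=2$, $p_s=2$ and concrete weights and values chosen purely for illustration) builds the piecewise constant control from the numbers $\lambda^s_{ki}$ and checks that its average over a subinterval $[t_i,t_{i+1}]$ reproduces the convex combination $\sum_k\lambda^s_{ki}u^s_k$:

```python
def piecewise_control(lams, u_vals, t0=0.0, T=1.0):
    """On [t_i, t_{i+1}], take the value u_k on a sub-subinterval of length lams[i][k]*h."""
    s = len(lams)
    h = (T - t0) / s
    def u(t):
        i = min(int((t - t0) / h), s - 1)   # subinterval index
        offset = (t - t0) - i * h           # position inside [t_i, t_{i+1}]
        acc = 0.0
        for k, lam in enumerate(lams[i]):
            acc += lam * h
            if offset <= acc:
                return u_vals[k]
        return u_vals[-1]
    return u

lams = [[0.25, 0.75], [0.5, 0.5]]  # lambda^s_{ki}: each row sums to 1 (s = 2, p_s = 2)
u_vals = [-1.0, 2.0]               # stand-ins for u^s_k in K ∩ U
u = piecewise_control(lams, u_vals)

# average over [t_0, t_1] = [0, 0.5] should be 0.25*(-1) + 0.75*2 = 1.25
n = 100000
avg = sum(u(0.5 * (j + 0.5) / n) for j in range(n)) / n
assert abs(avg - 1.25) < 1e-3
```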
Note that the construction of the function $u_s(\overline\alpha)$ was used for the first time by Gamkrelidze [9] for so-called generalized controls (see also [10]).
We claim that the mappings $F_s$ are continuous on $C([t_0, T],\mathbb R^n)\times\mathcal A_N$. First we show that the mappings $\overline\alpha\mapsto u_s(\overline\alpha)$ from $\mathcal A_N$ to $L_1([t_0, T],\mathbb R^r)$ are uniformly continuous with respect to $s\in\mathbb N$.
Fix $\widetilde {\overline\alpha}\in\mathcal A_N$, and let ${\overline\alpha}$ be another vector in $\mathcal A_N$. Next, let $\widetilde{\lambda^s_k}(\,\cdot\,)$, $\widetilde\lambda^s_{ki}$ and ${\lambda^s_k}(\,\cdot\,)$, $\lambda^s_{ki}$, $k=1,\dots,p_s$, $i=0,1,\dots,s-1$, be the functions and numbers defined above for $(\widetilde {\overline\alpha},\overline u)$ and $({\overline\alpha},\overline u)$, respectively.
Let $\rho>0$ be such that $\mathcal K\subset B_{\mathbb R^r}(0,\rho)$. In order not to overburden the text, we assume that $p_s=2$, $t_0=0$ and $T=1$. On each subinterval $[t_i,t_{i+1}]$ we have
$$
\begin{equation*}
\begin{aligned} \, &\int_{t_i}^{t_{i+1}}|u_s(\overline\alpha)(t)-u_s(\widetilde{\overline\alpha})(t)|\,dt \\ &\qquad=\biggl|\int_{\widetilde\lambda^s_{1i}/s}^{\lambda^s_{1i}/s}|u^s_1-u^s_2|\,dt\biggr|\leqslant \frac{2\rho}{s}|\lambda^s_{1i}-\widetilde\lambda^s_{1i}| \\ &\qquad= \frac{2\rho}{s}\biggl|s\int_{t_i}^{t_{i+1}}\sum_{j=1}^N\alpha_j(t)\psi_1^s(u_j(t))\,dt -s\int_{t_i}^{t_{i+1}}\sum_{j=1}^N\widetilde\alpha_j(t)\psi_1^s(u_j(t))\,dt\biggr| \\ &\qquad \leqslant2\rho\int_{t_i}^{t_{i+1}}\biggl(\sum_{j=1}^N|\alpha_j(t)-\widetilde\alpha_j(t)| \psi_1^s(u_j(t))\biggr)\, dt \\ &\qquad \leqslant2\rho\|\overline\alpha-\widetilde{\overline\alpha}\|_{(L_\infty([t_0, T]))^N}h(s). \end{aligned}
\end{equation*}
\notag
$$
Summing these inequalities over all $i=0,\dots,s-1$ we find that
$$
\begin{equation*}
\int_{0}^{1}|u_s(\overline\alpha)(t)-u_s(\widetilde{\overline\alpha})(t)|\,dt\leqslant2\rho \|\overline\alpha-\widetilde{\overline\alpha}\|_{(L_\infty([t_0,T]))^N},
\end{equation*}
\notag
$$
that is, the mappings $\overline\alpha\mapsto u_s(\overline\alpha)$ are continuous at $\widetilde{\overline\alpha}$, and therefore everywhere on $\mathcal A_N$, uniformly in $s$.
Now we proceed directly to the proof that the mappings $F_s$ are continuous.
Let $(x^0,\overline\alpha^{\,0})\in C([t_0, T],\mathbb R^n)\times\mathcal A_N$. We set
$$
\begin{equation*}
K_1=\bigl\{(t,x)\in\mathbb R^{n+1}\colon|x-x^0(t)|\leqslant\delta_1,\, t\in[t_0, T]\bigr\}\times \mathcal K, \qquad \delta_1>0.
\end{equation*}
\notag
$$
Both $f$ and $f_x$ are continuous on the compact set $K_1$. We also set
$$
\begin{equation*}
C_1=\max\bigl\{|f(t,x,u)|\colon (t,x,u)\in K_1\bigr\}\quad\text{and}\quad C_2=\max\bigl\{\|f_x(t,x,u)\| \colon (t,x,u)\in K_1\bigr\}.
\end{equation*}
\notag
$$
Let $\varepsilon>0$. The mappings $f$ and $f_x$ being uniformly continuous on $K_1$, there exists $\delta_2$, $0<\delta_2\leqslant\min(\delta_1,\varepsilon)$, such that $|f(t,x_1, u_1)-f(t,x_2, u_2)|<\varepsilon$ and $\|f_x(t,x_1, u_1)-f_x(t,x_2, u_2)\|<\varepsilon$ for all points $(t,x_i, u_i)\in K_1$, $i=1,2$, satisfying $|x_1-x_2|<\delta_2$ and $|u_1-u_2|<\delta_2$.
By the above there exists a neighbourhood $\mathcal O(\overline\alpha^{\,0})$ such that if $\overline\alpha\in\mathcal O(\overline\alpha^{\,0})\cap\mathcal A_N$, then $u_s(\overline\alpha)\in U_{L_1([t_0, T],\mathbb R^r)}(u_s(\overline\alpha^{\,0}),\varepsilon\delta_2)$ for all $s\in\mathbb N$. For all such $\overline\alpha$ and $s$ we set $E_{\delta_2}=E_{\delta_2}(\overline\alpha,s)=\bigl\{t\in [t_0, T] \colon |u_s(\overline\alpha)(t)-u_s(\overline\alpha^{\,0})(t)|\geqslant\delta_2\bigr\}$. Then
$$
\begin{equation*}
\begin{aligned} \, \delta_2\operatorname{mes}E_{\delta_2} &\leqslant\int_{E_{\delta_2}}|u_s(\overline\alpha)(t)-u_s(\overline\alpha^{\,0})(t)|\,dt \\ &\leqslant\|u_s(\overline\alpha)-u_s(\overline\alpha^{\,0})\|_{L_1([t_0, T],\mathbb R^r)}<\varepsilon\delta_2 \end{aligned}
\end{equation*}
\notag
$$
which shows that $\operatorname{mes}E_{\delta_2}<\varepsilon$.
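The measure estimate above is an instance of Markov's inequality, $\delta\operatorname{mes}\{t\colon|g(t)|\geqslant\delta\}\leqslant\int|g|\,dt$. A quick Python sanity check on sampled data (the sampled function below is hypothetical, standing in for $|u_s(\overline\alpha)-u_s(\overline\alpha^{\,0})|$):

```python
import random

random.seed(1)
n = 10000
# samples of a nonnegative function on a uniform grid of [0, 1]
g = [abs(random.uniform(-1.0, 1.0)) for _ in range(n)]
delta = 0.3
mes = sum(1 for v in g if v >= delta) / n  # measure of the set {g >= delta}
integral = sum(g) / n                      # Riemann sum for the integral of g
# Markov's inequality: delta * mes{g >= delta} <= integral of g
assert delta * mes <= integral + 1e-12
```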
Now let $x\in U_{C([t_0, T],\mathbb R^n)}(x^0,\delta_2)$, $\overline\alpha\in\mathcal O(\overline\alpha^{\,0})$ and $t\in[t_0,T]$. Then $t\in [t_m,t_{m+1}]$ for some $0\leqslant m\leqslant s-1$. By definition (see the statement of the lemma and (39))
$$
\begin{equation*}
\begin{aligned} \, F_s(x,\overline\alpha)(t) &=\mathcal F_s\bigl(x,u_s(\overline\alpha;\overline u),x_0\bigr)(t) =y_s\bigl(x,u_s(\overline\alpha;\overline u),x_0\bigr)(t_m) \\ &\qquad +\frac{t-t_m}{h}\bigl(y_s(x,u_s(\overline\alpha;\overline u),x_0)(t_{m+1}) -y_s(x,u_s(\overline\alpha;\overline u),x_0)(t_m)\bigr), \end{aligned}
\end{equation*}
\notag
$$
where (we set $y_s(x,u_s(\overline\alpha;\overline u),x_0)(t_{m})=y_s(x,\overline\alpha)(t_m)$)
$$
\begin{equation}
y_s(x,\overline\alpha)(t_m)=x(t_m)-x_0-\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}} f\bigl(t_{i},x(t_{i}),u_s(\overline\alpha;\overline u)(t)\bigr)\,dt.
\end{equation}
\tag{42}
$$
Let us estimate the difference $F_s(x,\overline\alpha)(t)-F_s(x^0,\overline\alpha^{\,0})(t)$. First, let $t=t_m$, $0\leqslant m\leqslant s-1$. Simple algebra shows that (here $u_s(\overline\alpha)=u_s(\overline\alpha;\overline u)$)
$$
\begin{equation}
\begin{aligned} \, \notag &\bigl|F_s(x,\overline\alpha)(t_m)-F_s(x^0,\overline\alpha^{\,0})(t_m)\bigr| \leqslant|x(t_m)-x^0(t_m)| \\ \notag &\qquad\qquad +\sum_{i=0}^{m-1}\biggl(\int_{[t_i, t_{i+1}]\setminus E_{\delta_2}}\bigl|f(t_i,x(t_i),u_s(\overline\alpha)(t))-f(t_i, x^0(t_i),u_s(\overline\alpha^{\,0})(t))\bigr|\,dt \\ &\qquad\qquad +\int_{[t_i,t_{i+1}]\cap E_{\delta_2}} \bigl|f(t_i,x(t_i),u_s(\overline\alpha)(t))- f(t_i, x^0(t_i),u_s(\overline\alpha^{\,0})(t))\bigr|\,dt\biggr). \end{aligned}
\end{equation}
\tag{43}
$$
By the choice of $x$ ($\delta_2\leqslant\varepsilon$) the first term on the right is smaller than $\varepsilon$. Next, since $u_s(\overline\alpha)(t)\in \mathcal K$ for almost all $t\in[t_0, T]$, for each $i$, $0\leqslant i\leqslant m-1$, the expression under the first integral sign is smaller than $\varepsilon$. Hence the integral itself is at most $\varepsilon h$. The second integral is majorized by $2C_1\operatorname{mes}([t_i, t_{i+1}]\cap E_{\delta_2})$. Summing these estimates over all $0\leqslant i\leqslant m-1$ we find that the whole sum on the right in (43) is at most $\varepsilon (T-t_0+ 2C_1)$, and thus the expression on the left-hand side of (43) is majorized by $\varepsilon(1+T-t_0+2C_1)$.
Now let $t\in (t_m,t_{m+1})$ for some $0\leqslant m\leqslant s-1$. By definition $F_s(x,\overline\alpha)(t)$ is a convex combination of the quantities $F_s(x,\overline\alpha)(t_m)$ and $F_s(x,\overline\alpha)(t_{m+1})$, so $|F_s(x,\overline\alpha)(t)-F_s(x^0,\overline\alpha^{\,0})(t)|\leqslant \varepsilon(1+T-t_0+2C_1)$ by the above.
Thus, the mappings $F_s$ are continuous at the point $(x^0,\overline\alpha^{\,0})$ and therefore at each point in $C([t_0,T],\mathbb R^n)\times\mathcal A_N$ uniformly with respect to $s$.
Next it is easily checked that, for all $(x,\overline\alpha)\in\mathcal M$, $s\in\mathbb N$ and $m$, $0\leqslant m\leqslant s-1$, we have $|y_s(x,\overline\alpha)(t_m)|\leqslant \delta+|x_0|+C(T-t_0)$, so that $|F_s(x,\overline\alpha)(t)|\leqslant \delta+|x_0|+C(T-t_0)$ for $(x,\overline\alpha)$ and $s$ as above and any $t\in[t_0, T]$. Thus, we have shown that $F_s\in C(\mathcal M,C([t_0, T],\mathbb R^n))$ for all $s\in\mathbb N$.
2) We claim that the $F_s$ converge to $F$ in the space $C(\mathcal M,C([t_0,T],\mathbb R^n))$ as ${s\!\to\!\infty}$.
Let $(x,\overline\alpha)\in\mathcal M$ and $t\in[t_0,T]$. Then $t\in[t_m, t_{m+1}]$ for some $0\leqslant m\leqslant s-1$, and therefore
$$
\begin{equation}
\begin{aligned} \, \notag &F_s(x,\overline \alpha)(t)-F(x,\overline\alpha)(t) =\frac{t_{m+1}-t}{h}\bigl(x(t_m)-x(t)\bigr)+\frac{t-t_m}{h}\bigl(x(t_{m+1})-x(t)\bigr) \\ \notag &\ -\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}} \biggl(f\bigl(\tau_{i},x(\tau_{i}),u_s(\overline\alpha)(\tau)\bigr)- \sum_{j=1}^N\alpha_j(\tau)f\bigl(\tau,x(\tau), u_j(\tau)\bigr)\biggr)\,d\tau \\ &\ -\biggl(\frac{t-t_m}{h}\int_{t_{m}}^{t_{m+1}}f\bigl(\tau_{m},x(\tau_{m}), u_s(\overline\alpha)(\tau)\bigr)\,d\tau -\sum_{j=1}^N\int_{t_m}^t\alpha_j(\tau)f\bigl(\tau,x(\tau), u_j(\tau)\bigr)\,d\tau\biggr). \end{aligned}
\end{equation}
\tag{44}
$$
Let us estimate the terms on the right-hand side of this equality. Let $\varepsilon>0$, and let $K$ be the compact set defined at the beginning of the proof. The mapping $f$ is uniformly continuous on $K$, hence there exists $\delta_0$, $0<\delta_0<\varepsilon$, such that $|f(t',x',u')-f(t'',x'',u'')|<\varepsilon$ for all $(t',x',u')$ and $(t'',x'',u'')$ in $K$ such that $|t'-t''|<\delta_0$, $|x'-x''|<\delta_0$ and $|u'-u''|<\delta_0$.
Let $s_0=s_0(\varepsilon)$ be sufficiently large so that $h(s_0)<\min(\delta_0,\delta_0/L)$. We have $t\in [t_m,t_{m+1}]$, and therefore $|x(t_{m+1})-x(t)|\leqslant L|t_{m+1}-t|\leqslant L h(s_0)<\delta_0<\varepsilon$ and, similarly, $|x(t_{m})-x(t)|<\varepsilon$. Therefore, the absolute value of the sum of the first two terms on the right-hand side of (44) is smaller than $\varepsilon$ for $s\geqslant s_0$.
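Indeed, the coefficients $(t_{m+1}-t)/h$ and $(t-t_m)/h$ are nonnegative and sum to $1$ for $t\in[t_m,t_{m+1}]$, so the sum of the first two terms in (44) is a convex combination and
$$
\begin{equation*}
\biggl|\frac{t_{m+1}-t}{h}\bigl(x(t_m)-x(t)\bigr)+\frac{t-t_m}{h}\bigl(x(t_{m+1})-x(t)\bigr)\biggr|\leqslant\max\bigl(|x(t_m)-x(t)|,\,|x(t_{m+1})-x(t)|\bigr)<\varepsilon.
\end{equation*}
\notag
$$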
We estimate the third term (a sum of integrals) on the right. The absolute value of each integral under the summation sign is majorized by
$$
\begin{equation}
\begin{aligned} \, \notag & \biggl|\int_{t_{i}}^{t_{i+1}} f\bigl(\tau_{i},x(\tau_{i}),u_s(\overline\alpha)(\tau)\bigr)\,d\tau -\int_{t_{i}}^{t_{i+1}}\biggl(\sum_{j=1}^{p_s}\lambda_j^s(\tau)f\bigl(\tau,x(\tau),u^s_j\bigr) \biggr)\,d\tau\biggr| \\ &\qquad\qquad +\biggl|\int_{t_i}^{t_{i+1}}\biggl(\sum_{j=1}^N\alpha_j(\tau)f\bigl(\tau,x(\tau), u_j(\tau)\bigr)-\sum_{j=1}^{p_s}\lambda_j^s(\tau)f\bigl(\tau,x(\tau),u^s_j\bigr)\biggr)\,d\tau\biggr|. \end{aligned}
\end{equation}
\tag{45}
$$
Let us estimate these terms. By the definition of $u_s(\overline\alpha)$ the expression under the norm sign in the first term can be written as
$$
\begin{equation}
\sum_{j=1}^{p_s}\int_{\Delta_{ij}(s)}f\bigl(\tau_i,x(\tau_i),u^s_j\bigr)\,d\tau-\sum_{j=1}^{p_s} \int_{t_i}^{t_{i+1}}\lambda_j^s(t)f\bigl(t,x(t),u^s_j\bigr)\,dt.
\end{equation}
\tag{46}
$$
First we estimate each component of this difference. Let $f=(f_1,\dots,f_n)^{\top}$. We fix $l$, $1\leqslant l\leqslant n$. By the mean value theorem for integrals
$$
\begin{equation}
\begin{aligned} \, \notag & \biggl|\sum_{j=1}^{p_s}f_l\bigl(\tau_{i},x(\tau_{i}),u^s_j\bigr)\lambda^s_{ji}h -\sum_{j=1}^{p_s}f_l\bigl(\zeta_{i},x(\zeta_{i}),u^s_j\bigr)\int_{t_i}^{t_{i+1}}\lambda^s_j(t)\,dt \biggr| \\ &\qquad \leqslant h(s)\sum_{j=1}^{p_s}\lambda^s_{ji}\bigl|f_l(\tau_{i},x(\tau_{i}),u^s_j) -f_l(\zeta_{i},x(\zeta_{i}),u^s_j)\bigr|, \end{aligned}
\end{equation}
\tag{47}
$$
where $\zeta_{i}\in[t_i,t_{i+1}]$. If $s\geqslant s_0$, then $|\tau_i-\zeta_i|\leqslant h(s)\leqslant h(s_0)<\delta_0$ and $|x(\tau_i)-x(\zeta_i)|\leqslant L|\tau_i-\zeta_i|<\delta_0$. Therefore, the expression on the right in (47) is at most $h(s)\varepsilon$. Hence the norm of the difference of the integrals in (46) is majorized by $\sqrt{n}\, h(s)\varepsilon$.
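The passage from the componentwise bound to the vector bound is the standard estimate, valid for any $v=(v_1,\dots,v_n)\in\mathbb R^n$:
$$
\begin{equation*}
|v|=\biggl(\sum_{l=1}^n v_l^2\biggr)^{1/2}\leqslant\sqrt{n}\,\max_{1\leqslant l\leqslant n}|v_l|,
\end{equation*}
\notag
$$
applied here with $v$ equal to the difference of the integrals in (46), each component of which has just been bounded by $h(s)\varepsilon$.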
We estimate the second term in (45). In the first sum we replace the summation index $j$ by $i$ under the integral sign and multiply the expression under the summation sign by $\sum_{j=1}^{p_s}\psi_j^s(u_i(\tau))$, which is equal to 1 for almost all $\tau\in [t_0,T]$ and $i=1,\dots,N$, so that the integral of the new sum does not change. In the second sum we replace the function $\lambda_j^s(\,\cdot\,)$ by its expression. As a result, it is easily seen that the second term in (45) is majorized by
$$
\begin{equation*}
\int_{t_i}^{t_{i+1}}\biggl(\sum_{i=1}^N\alpha_i(\tau)\sum_{j=1}^{p_s}\psi_j^s(u_i(\tau)) \bigl|f(\tau,x(\tau),u_i(\tau))-f(\tau,x(\tau),u_j^s)\bigr|\biggr)\,d\tau.
\end{equation*}
\notag
$$
Increasing $s_0$ if necessary, we can also assume that $2/s_0<\delta_0$. If $j$, $1\leqslant j\leqslant p_s$, and $i$, $1\leqslant i\leqslant N$, are such that $u_i(\tau)\in \mathcal O_j^s\cap U$ on a set of positive measure, then $|u_i(\tau)-u_j^s|<2/s<\delta_0$ for $s\geqslant s_0$, so that $|f(\tau,x(\tau),u_i(\tau))-f(\tau,x(\tau),u_j^s)|<\varepsilon$; otherwise the corresponding term is zero. Hence the integral is at most $h(s)\varepsilon$.
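In detail, the last estimate uses that $\sum_{j=1}^{p_s}\psi_j^s(u_i(\tau))=1$ for almost all $\tau$ and that, as in the definition of $\mathcal A_N$ (given before Proposition 1 and not reproduced in this section), the components of $\overline\alpha$ are nonnegative with $\sum_{i=1}^N\alpha_i(\tau)=1$ for almost all $\tau$:
$$
\begin{equation*}
\int_{t_i}^{t_{i+1}}\biggl(\sum_{i=1}^N\alpha_i(\tau)\sum_{j=1}^{p_s}\psi_j^s(u_i(\tau))\,\varepsilon\biggr)\,d\tau=\varepsilon\int_{t_i}^{t_{i+1}}\sum_{i=1}^N\alpha_i(\tau)\,d\tau=\varepsilon h(s).
\end{equation*}
\notag
$$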
Thus, in the third term on the right in (44) the norm of each integral under the summation sign is at most $(\sqrt{n}+1) h(s)\varepsilon$. Adding these inequalities for all $0\leqslant i\leqslant m-1$ we find that the third term on the right in (44) is majorized by $(1+\sqrt{n})(T-t_0)\varepsilon$.
Now let us estimate the fourth term (in large round brackets) on the right in (44). Let $\delta>0$ be such that $M\subset B_{C([t_0, T],\mathbb R^n)}(0,\delta)$. The mappings $f$ and $f_x$ are continuous on the compact set $K=[t_0, T]\times B_{\mathbb R^n}(0,\delta)\times \mathcal K$. Now set $C=\max\bigl\{|f(t,x,u)| \colon (t,x,u)\in K\bigr\}$ and $C_0=\max\bigl\{\|f_x(t,x,u)\| \colon (t,x,u)\in K\bigr\}$. Each term in the fourth summand on the right-hand side of (44) is clearly at most $C(t-t_m)$. Hence the norm of this summand is at most $2C(t-t_m)\leqslant 2 C h(s)<2C\delta_0< 2C\varepsilon$.
Summarizing, we find that the norm of the quantity on the right in (44) is at most $(1+(1+\sqrt{n})(T-t_0)+2C)\varepsilon$. This proves that the sequence $F_s$ converges to $F$ in $C(\mathcal M,C([t_0, T],\mathbb R^n))$ as $s\to\infty$.
3) We claim that the mappings $F$ and $F_s$ also lie in the space $C^1_x(\mathcal M,C([t_0, T], \mathbb R^n))$.
It is easily checked that, at any point $(x, {\overline\alpha})\in C([t_0, T],\mathbb R^n)\times(L_\infty([t_0, T]))^N$, the mapping $F$ has a continuous partial derivative with respect to $x$, which acts by
$$
\begin{equation}
F_{x}(x, {\overline\alpha})[z](t)=z(t)-\int_{t_0}^t\biggl(\sum_{i=1}^N\alpha_i(\tau)f_x\bigl(\tau, x(\tau),u_i(\tau)\bigr)\biggr)z(\tau)\,d\tau
\end{equation}
\tag{48}
$$
for all $z\in C([t_0, T],\mathbb R^n)$ and $t\in[t_0, T]$.
Next, it is easily seen that $|F_x(x,\overline\alpha)[z](t)|\leqslant (1+C_0(T-t_0))\|z\|_{C([t_0, T],\mathbb R^n)}$ for all $(x,\overline\alpha)\in\mathcal M$, $z\in C([t_0, T],\mathbb R^n)$ and $t\in[t_0, T]$. Now by the above we have $F\in C^1_x(\mathcal M,C([t_0, T],\mathbb R^n))$.
Now consider the mappings $F_s$, $s\in\mathbb N$. Each $F_s$ has a partial derivative with respect to $x$ at each point $(x,\overline\alpha)\in C([t_0,T],\mathbb R^n)\times\mathcal A_N$, which acts by
$$
\begin{equation*}
\begin{aligned} \, F_{sx}(x,\overline\alpha)[z](t) &=y_{sx}(x,\overline\alpha)[z](t_m) \\ &\qquad+\frac{t-t_m}{h}\bigl(y_{sx}(x,\overline\alpha)[z](t_{m+1}) -y_{sx}(x,\overline\alpha)[z](t_m)\bigr), \end{aligned}
\end{equation*}
\notag
$$
for $z\in C([t_0,T],\mathbb R^n)$ ($t\in[t_m,t_{m+1}]$), where
$$
\begin{equation*}
y_{sx}(x,\overline\alpha)[z](t_m)=z(t_m)-\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}} f_x\bigl(t_{i},x(t_{i}),u_s(\overline\alpha)(t)\bigr)z(t)\,dt.
\end{equation*}
\notag
$$
The proof of this fact is similar to the proof that $F$ is differentiable with respect to $x$. It is clear that $F_{sx}(x,\overline\alpha)[z](\,\cdot\,)\in L_s$.
Let us show that the derivatives $F_{sx}$ are continuous on $C([t_0,T],\mathbb R^n)\times\mathcal A_N$. Let $\varepsilon>0$, and let $(x^0,\overline\alpha^{\,0})\in C([t_0, T],\mathbb R^n)\times\mathcal A_N$. Arguing precisely as in the proof of the continuity of $F_s$, we see that (see (43))
$$
\begin{equation}
\begin{aligned} \, \notag &\bigl|F_{sx}(x,\overline\alpha)[z](t_m)-F_{sx}(x^0,\overline\alpha^{\,0})[z](t_m)\bigr| \\ \notag &\ \leqslant \sum_{i=0}^{m-1}\biggl(\int_{[t_i, t_{i+1}]\setminus E_{\delta_2}}\bigl\|f_x(t_i,x(t_i),u_s(\overline\alpha)(t)) -f_x(t_i,x^0(t_i),u_s(\overline\alpha^{\,0})(t))\bigr\|\,|z(t)|\,dt \\ &\ \qquad +\int_{[t_i,t_{i+1}]\cap E_{\delta_2}} \bigl\|f_x(t_i,x(t_i),u_s(\overline\alpha)(t)) - f_x(t_i,x^0(t_i),u_s(\overline\alpha^{\,0})(t))\bigr\|\,|z(t)|\,dt\biggr) \end{aligned}
\end{equation}
\tag{49}
$$
for all $x\in U_{C([t_0, T],\mathbb R^n)}(x^0,\delta_2)$, $\overline\alpha\in\mathcal O(\overline\alpha^{\,0})$, $0\leqslant m\leqslant s-1$ and $z\in C([t_0,T],\mathbb R^n)$.
Next, repeating the arguments verifying the continuity of $F_s$, we find that
$$
\begin{equation*}
\bigl|F_{sx}(x,\overline\alpha)[z](t)-F_{sx}(x^0,\overline\alpha^{\,0})[z](t)\bigr|\leqslant \varepsilon ((T-t_0)+ 2C_2)\|z\|_{C([t_0, T],\mathbb R^n)}
\end{equation*}
\notag
$$
for all $t\in [t_0, T]$ and $z\in C([t_0, T],\mathbb R^n)$. This shows that the derivatives $F_{sx}$ are continuous on $C([t_0, T],\mathbb R^n)\times\mathcal A_N$ uniformly in $s$.
Now it is easily seen that for all $(x,\overline\alpha)\in\mathcal M$, $s\in\mathbb N$ and any $m$, ${0\leqslant m\leqslant s-1}$, we have the estimate $|y_{sx}(x,\overline\alpha)[z](t_m)|\leqslant (1+C_0(T-t_0))\|z\|_{C([t_0, T],\mathbb R^n)}$, and therefore, $|F_{sx}(x,\overline\alpha)[z](t)|\leqslant (1+C_0(T-t_0))\|z\|_{C([t_0, T],\mathbb R^n)}$ for $(x,\overline\alpha)$ and $s$ as above and any $t\in[t_0, T]$. This shows that $F_s\in C^1_x(\mathcal M,C([t_0, T],\mathbb R^n))$ for all $s\in\mathbb N$.
4) We claim that the sequence of mappings $F_s-P_s F$, $s\in\mathbb N$, converges to 0 in $C_x^1(\mathcal M,C([t_0, T],\mathbb R^n))$ as $s\to\infty$ (it is clear that $P_s F\in C_x^1(\mathcal M,C([t_0, T],\mathbb R^n))$ for each $s$).
At step 2) of the proof we showed that $F_s$ converges to $F$ in $C(\mathcal M,C([t_0, T],\mathbb R^n))$ as $s\to\infty$. For brevity we set $C=C(\mathcal M, C([t_0, T],\mathbb R^n))$. Since $\|F_s-P_s F\|_{C}=\|P_sF_s-P_sF\|_{C}\leqslant\|F_s-F\|_{C}$, it follows that $F_s-P_s F$ converges to zero in $C(\mathcal M,C([t_0, T],\mathbb R^n))$ as $s\to\infty$.
Let us now show that the difference $F_{sx}-P_s F_x$ tends to zero as $s\to\infty$ in the space $C(\mathcal M,\mathcal L(C([t_0, T],\mathbb R^n),C([t_0, T],\mathbb R^n)))$. Let $(x,\overline\alpha)\!\in\!\mathcal M$ and ${z\!\in\! C([t_0, T],\mathbb R^n)}$, and let $t\in[t_0, T]$. It is easily seen that, at the points $t_m$, $m=0,\dots,s-1$,
$$
\begin{equation*}
\begin{aligned} \, &F_{sx}(x,\overline\alpha)[z](t_m)-P_sF_{x}(x,\overline\alpha)[z](t_m) \\ &\qquad =-\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}}\biggl(f_x(t_{i},x(t_{i}),u_s(\overline\alpha)(t))- \sum_{j=1}^N\alpha_j(t)f_x(t,x(t), u_j(t))\biggr)z(t)\,dt. \end{aligned}
\end{equation*}
\notag
$$
By estimating this difference precisely as the third term (a sum of integrals) on the right-hand side of (44) we find that for each $\varepsilon>0$ there exists $s_0$ such that $|F_{sx}(x,\overline\alpha)[z](t_m)-P_sF_{x}(x,\overline\alpha)[z](t_m)|\leqslant \varepsilon(1+\sqrt{n}\,)(T-t_0)\|z\|_{C([t_0, T],\mathbb R^n)}$ for all $s\geqslant s_0$.
It is clear that the function $F_{sx}(x,\overline\alpha)[z](\,\cdot\,)-P_sF_{x}(x,\overline\alpha)[z](\,\cdot\,)$ belongs to $L_s$, and therefore the estimate at any point $t\in[t_0, T]$ is the same. Hence, by the above the sequence $F_s-P_s F$ converges to zero in $C_x^1(\mathcal M,C([t_0, T],\mathbb R^n))$ as $s\to\infty$. This proves Lemma 2.

Recall that the sets $\mathcal U$ and $\mathcal A_k$, $k\in \mathbb N$, were defined before Proposition 1, the mappings $F_s$, $s\in \mathbb N$, were defined in Lemma 1, and the families $\overline v=(v_1,\dots,v_{N-1})$ and $\overline u'=(\widehat u,v_1,\dots,v_{N-1})$ were defined before Lemma 1.

Lemma 3 (second approximation lemma). Under the hypotheses of Lemma 1, where $\overline v\in \mathcal U^{N-1}$, let the function $x(\,\cdot\,,\overline\alpha;\overline v)$ and the neighbourhood $\mathcal O(\widehat{\overline \alpha})$ be as in that lemma. Next, let $x_0=\widehat x(t_0)$, $\overline u=(\widehat u, \overline v)$ and the mappings $(x,\overline\alpha)\mapsto F_s(x,\overline\alpha;\overline v)=F_s(x,\overline\alpha;\widehat x(t_0),(\widehat u,\overline v))$, $s\in\mathbb N$, be as in Lemma 2. Then there exist a neighbourhood $\mathcal O_0(\widehat{\overline \alpha})\subset\mathcal O(\widehat{\overline \alpha})$ and $s_0\in\mathbb N$ such that, for all $\overline\alpha\in\mathcal O_0(\widehat{\overline \alpha})\cap\mathcal A_N$ and $s\geqslant s_0$, there exists a unique function $x_s(\,\cdot\,,\overline\alpha;\overline v)\in C([t_0, T],\mathbb R^n)$ satisfying the equation $F_s(x,\overline\alpha;\overline v)(t)=0$, $t\in[t_0, T]$, that is,
$$
\begin{equation}
F_s\bigl(x_s(\,\cdot\,,\overline\alpha;\overline v),\overline\alpha;\overline v\bigr)(t)=0, \qquad t\in[t_0, T].
\end{equation}
\tag{50}
$$
Moreover, for $s\geqslant s_0$ the mappings $\overline\alpha\mapsto x_s(\,\cdot\,,\overline\alpha;\overline v)$ lie in the space $C(\mathcal O_0(\widehat{\overline \alpha})\cap \mathcal A_N, C([t_0, T],\mathbb R^n))$ and converge there to the mapping $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha;\overline v)$ as $s\to\infty$.

Proof. We will use Theorem 2, but first we present some preliminary considerations.
Let $\mathcal K$ be the compact set defined at the beginning of the proof of Lemma 2, let $\delta> 0$, and let $K_0=\bigl\{(t,x)\in \mathbb R\times\mathbb R^n \colon |x-\widehat x(t)|\leqslant\delta,\,t\in[t_0, T]\bigr\}\times \mathcal K$, where $\widehat x$ is the solution of equation (5). Set $C_0=\max\bigl\{|f(t,x,u)| \colon (t,x,u)\in K_0\bigr\}$ and $C_1=\max\bigl\{\|f_x(t,x,u)\| \colon (t,x,u)\in K_0\bigr\}$.
Since $\widehat x$ is a solution of equation (5), for all $t',t''\in[t_0, T]$ we have
$$
\begin{equation*}
|\widehat x(t')-\widehat x(t'')|\leqslant \biggl|\int_{t'}^{t''}\bigl|f(t,\widehat x(t),\widehat u(t))\bigr|\,dt\biggr|\leqslant C_0|t'-t''|,
\end{equation*}
\notag
$$
that is, the function $\widehat x$ is Lipschitzian with Lipschitz constant $C_0$.
Let $F\colon C([t_0, T],\mathbb R^n)\times(L_\infty([t_0, T]))^N\to C([t_0, T],\mathbb R^n)$ be the mapping defined in Lemma 2 (see (40)), where $x_0=\widehat x(t_0)$ and $\overline u=\overline u'$ (in what follows we suppress the dependence of $F$ on these fixed parameters). This mapping is continuous, together with its $x$-derivative. Clearly, $\widehat x$ is a solution of equation (5) if and only if $F(\widehat x,\widehat{\overline \alpha})=0$. The operator $\Lambda=F_x(\widehat x,\widehat{\overline \alpha})$ defined by (see (48))
$$
\begin{equation*}
F_{x}(\widehat x, \widehat{\overline \alpha})[z](t)=z(t)-\int_{t_0}^t f_x\bigl(\tau, \widehat x(\tau),\widehat u(\tau)\bigr)z(\tau)\,d\tau
\end{equation*}
\notag
$$
for all $z\in C([t_0, T],\mathbb R^n)$ and $t\in[t_0, T]$ is invertible. This follows from the solvability of the Cauchy problem for the corresponding linear equation with any initial conditions.
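In more detail (a standard argument, included here for completeness): for any $w\in C([t_0, T],\mathbb R^n)$ the equation $\Lambda z=w$ takes the form
$$
\begin{equation*}
z(t)=w(t)+\int_{t_0}^t f_x\bigl(\tau,\widehat x(\tau),\widehat u(\tau)\bigr)z(\tau)\,d\tau,
\end{equation*}
\notag
$$
a linear Volterra integral equation of the second kind; setting $y=z-w$ reduces it to the linear Cauchy problem $\dot y(t)=f_x(t,\widehat x(t),\widehat u(t))(y(t)+w(t))$, $y(t_0)=0$, which has a unique solution. Hence $\Lambda$ is a bijection, and its inverse is bounded by the Banach inverse mapping theorem.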
We set
$$
\begin{equation*}
L=\max(L_0+2C_1\|\widehat x\|_{C([t_0, T],\mathbb R^n)},L_0+2C_0),
\end{equation*}
\notag
$$
where $L_0=C_1\bigl(2\delta+\|\Lambda^{-1}\|(\delta+\|\widehat x\|_{C([t_0,T],\mathbb R^n)}+|\widehat x(t_0)|+(T-t_0)C_0)\bigr)+C_0$.
Recall that $Q_L$ is the set of $L$-Lipschitz vector functions on $[t_0, T]$ with values in $\mathbb R^n$. It is easily checked that $Q_L$ is a closed set in $C([t_0, T],\mathbb R^n)$.
In Theorem 2 we take $X=Y=C([t_0, T],\mathbb R^n)$, $\Sigma=\mathcal A_N$, $\widehat \sigma=\widehat{\overline \alpha}$, $\widehat x=\widehat x(\,\cdot\,)$, $Q=Q_{L}$ (it is clear that $\widehat x\in Q$), $V=U_{C([t_0, T],\mathbb R^n)}(\widehat x,\delta)$ and $\widehat F=F$.
In Lemma 2, as a bounded set $M$ we take $U_{C([t_0, T],\mathbb R^n)}(\widehat x,\delta)$. It is clear that the mapping $\widehat F$ belongs to the space $C^1_x=C^1_x((U_{C([t_0, T],\mathbb R^n)}(\widehat x,\delta)\cap Q_L)\times \mathcal A_N, C([t_0, T],\mathbb R^n))=C^1_x((V\cap Q)\times\Sigma, Y)$.
Recall that the subspaces $L_s$ of $C([t_0, T],\mathbb R^n)$, $s\in\mathbb N$, formed by the polygonal lines with knots at the points $t_i=t_0+ih$, $i=0,1,\dots,s$, where $h=(T-t_0)/s$, and the continuous projections $P_s\colon C([t_0, T],\mathbb R^n)\to L_s$ with $\|P_s\|=1$ were defined before Lemma 2. Being finite-dimensional, the subspaces $L_s$ are complemented.
The mappings $F_s \in C^1_x$, $s\in\mathbb N$, were defined in Lemma 2 (where $x_0=\widehat x(t_0)$ and $\overline u=\overline u'$; the dependence on these fixed values of parameters is suppressed).
Let $r>0$ and let $V_0\subset V$ and $U_0\subset \mathcal A_N$ be the neighbourhoods of $\widehat x$ and $\widehat{\overline \alpha}$ from Theorem 2.
At step 4) of the proof of Lemma 2 we showed that the sequence of mappings ${F_s-P_s\widehat F}$ converges to zero in the space $C_x^1$ as $s\to\infty$. We clearly have ${F_s-P_s\widehat F\in L_s}$, and so there exists $s_0\in\mathbb N$ such that $F_s\in U_{C_x^1((V\cap Q)\times\Sigma,\,L_s)}(P_s\widehat F,r)$ for all $s\geqslant s_0$.
We claim that $x-\Lambda^{-1}\widehat F(x,\overline\alpha)\in Q_{L}$ and $x-\Lambda^{-1}F_s(x,\overline\alpha)\in Q_{L}$ for all $(x,\overline\alpha)\in(V_0\cap Q_{L})\times U_0$ and $(x,\overline\alpha)\in(V_0\cap Q_{L}\cap(\widehat x+N_s))\times U_0$, $s\in\mathbb N$, respectively. That $x-\Lambda^{-1}\widehat F(x,\overline\alpha)\in Q_{L}$ was proved in Lemma $4$ in [8], and we do not dwell on this. We claim that $x-\Lambda^{-1}F_s(x,\overline\alpha)\in Q_{L}$.
We set $z_1=x-\widehat x-\Lambda^{-1}F_s(x,\overline\alpha)$. Then $\Lambda z_1=\Lambda(x-\widehat x)-F_s(x,\overline\alpha)$, and therefore
$$
\begin{equation}
\begin{aligned} \, \notag &z_1(t)-\int_{t_0}^tf_x\bigl(\tau,\widehat x(\tau),\widehat u(\tau)\bigr)z_1(\tau)\,d\tau \\ &\qquad =x(t)-\widehat x(t) -\int_{t_0}^tf_x\bigl(\tau,\widehat x(\tau),\widehat u(\tau)\bigr)\bigl(x(\tau)-\widehat x(\tau)\bigr)\,d\tau -F_s(x,\overline\alpha)(t) \end{aligned}
\end{equation}
\tag{51}
$$
for all $t\in[t_0, T]$.
We denote the function on the right-hand side of (51) by $G=G_s(x,\overline \alpha)$. This function lies in $L_s$ because $x\in \widehat x+N_s$. Hence $z_1\in N_s$, which implies that $\Lambda z_1=G\in L_s$. We claim that $G$ is a Lipschitz function. First we estimate the norm of the difference of its values at adjacent knots. Simple algebra shows that
$$
\begin{equation*}
\begin{aligned} \, G(t_{m})-G(t_{m+1}) &=-\widehat x(t_{m})+\widehat x(t_{m+1}) +\int_{t_m}^{t_{m+1}} f_x\bigl(t,\widehat x(t),\widehat u(t)\bigr)\bigl(x(t)-\widehat x(t)\bigr)\,dt \\ &\qquad -\int_{t_m}^{t_{m+1}}f\bigl(t_m,x(t_m),u_s(\overline\alpha)(t)\bigr)\,dt. \end{aligned}
\end{equation*}
\notag
$$
We have shown above that $\widehat x$ is a $C_0$-Lipschitz function. Hence
$$
\begin{equation*}
|G(t_{m})-G(t_{m+1})|\leqslant C_0h+C_1\delta h+C_0h=(2C_0+C_1\delta)h=D_1h.
\end{equation*}
\notag
$$
We have $G\in L_s$, and therefore
$$
\begin{equation*}
G(t)=G(t_m)+\frac {t-t_m}{h}\bigl(G({t_{m+1}})-G(t_{m})\bigr)
\end{equation*}
\notag
$$
for $t\in[t_m,t_{m+1}]$. Let $t',t''\in [t_m,t_{m+1}]$ with $t'<t''$. Then this representation and the estimate above imply that
$$
\begin{equation*}
|G(t')-G(t'')|\leqslant\frac {t''-t'}{h}|G({t_{m}})-G(t_{m+1})|\leqslant D_1(t''-t').
\end{equation*}
\notag
$$
Now let $t'\in [t_m,t_{m+1}]$, $t''\in [t_l,t_{l+1}]$ and $l\geqslant m$. In this case
$$
\begin{equation*}
\begin{aligned} \, &|G(t')-G(t'')| \leqslant |G(t')-G(t_{m+1})|+|G(t_{m+1})-G(t_{m+2})|+\dotsb \\ &\qquad\qquad +|G(t_{l-1})-G(t_{l})|+|G(t_l)-G(t'')| \\ &\qquad\leqslant D_1(t_{m+1}-t'+t_{m+2}-t_{m+1}+\dotsb +t_l-t_{l-1}+t''-t_l)=D_1(t''-t'), \end{aligned}
\end{equation*}
\notag
$$
and so the function $G$ is Lipschitz, with Lipschitz constant $D_1$.
We return to (51). Let us estimate the norm of $z_1$. For all $m$ and $s$ (see (42)) we have
$$
\begin{equation*}
\begin{aligned} \, |y_s(x,\overline\alpha)(t_m)| &\leqslant\delta+\|\widehat x\|_{C([t_0, T],\mathbb R^n)}+|\widehat x(t_0)|+mhC_0 \\ &\leqslant \delta+\|\widehat x\|_{C([t_0, T],\mathbb R^n)} +|\widehat x(t_0)|+(T-t_0)C_0=D_2, \end{aligned}
\end{equation*}
\notag
$$
and so $|F_s(x,\overline\alpha)(t)|\leqslant D_2$ for each $t\in [t_0,T]$. Therefore,
$$
\begin{equation*}
\|z_1\|_{C([t_0, T],\mathbb R^n)}\leqslant\delta+\|\Lambda^{-1}\|\, \|F_s(x,\overline\alpha)\|_{C([t_0, T],\mathbb R^n)}\leqslant \delta+\|\Lambda^{-1}\|D_2.
\end{equation*}
\notag
$$
Now from (51) and since $G$ is Lipschitzian, we have
$$
\begin{equation*}
|z_1(t')-z_1(t'')|\leqslant C_1\bigl(\delta+\|\Lambda^{-1}\|D_2\bigr)|t'-t''|+D_1|t'-t''|.
\end{equation*}
\notag
$$
Because $x-\Lambda^{-1}F_s(x,\overline\alpha)=z_1+\widehat x$, the function $x-\Lambda^{-1}F_s(x,\overline\alpha)$ is Lipschitz, with Lipschitz constant $C_1(\delta+\|\Lambda^{-1}\|D_2)+D_1 +C_0=L_0+2C_0\leqslant L$. As a result, we have $x-\Lambda^{-1}F_s(x,\overline\alpha)\in Q_L$.
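For the reader's convenience, the identification of this Lipschitz constant is the following computation, which uses $D_1=2C_0+C_1\delta$ and $L_0=C_1(2\delta+\|\Lambda^{-1}\|D_2)+C_0$:
$$
\begin{equation*}
C_1\bigl(\delta+\|\Lambda^{-1}\|D_2\bigr)+D_1+C_0=C_1\bigl(2\delta+\|\Lambda^{-1}\|D_2\bigr)+3C_0=L_0+2C_0,
\end{equation*}
\notag
$$
and $L_0+2C_0\leqslant L$ by the definition of $L$.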
Thus, all the assumptions of Theorem 2 are met. Therefore, there exist a unique mapping $g_{\widehat F}\colon U_0\to V_0\cap Q_{L}$, and, for each $s\geqslant s_0$, a unique mapping $g_s\colon U_0\to V_0\cap Q_{L}\cap(\widehat x+N_s)$ such that $\widehat F(g_{\widehat F}(\overline\alpha),\overline\alpha)(t)=0$ and $F_s(g_s(\overline\alpha),\overline\alpha)(t)=0$ for all $\overline\alpha\in U_0$ and $t\in[t_0, T]$. Moreover, $g_{\widehat F}$ and $g_s$ are continuous.
Reducing the neighbourhood $U_0$, we can assume that $U_0=\mathcal O_1(\widehat{\overline \alpha})\cap\mathcal A_N$ and $\mathcal O_1(\widehat{\overline \alpha})\subset\mathcal O(\widehat{\overline \alpha})$. By uniqueness, $g_{\widehat F}(\overline\alpha)$ coincides with the restriction of $x(\,\cdot\,,\overline\alpha;\overline v)$ to $\mathcal O_1(\widehat{\overline \alpha})\cap\mathcal A_N$.
Moreover, by Theorem 2 there exist a neighbourhood $U'_0\subset U_0$ of the point $\widehat{\overline \alpha}$ (it can be assumed that $U'_0 = \mathcal O_0(\widehat{\overline \alpha})\cap\mathcal A_N$, where $\mathcal O_0(\widehat{\overline \alpha})\subset \mathcal O_1(\widehat{\overline \alpha})$) and a constant $c>0$ such that
$$
\begin{equation*}
\|x-x_s\|_{C(\mathcal O_0(\widehat{\overline \alpha})\cap\mathcal A_N,\, C([t_0, T],\mathbb R^n))}\leqslant c\|\widehat F-F_s\|_{C((V\cap Q)\times\Sigma,\, C([t_0, T],\mathbb R^n))},
\end{equation*}
\notag
$$
where $x$ and $x_s$ are, respectively, the mappings $\overline\alpha\mapsto x(\,\cdot\,,\overline\alpha;\overline v)$ and $\overline\alpha\mapsto x_s(\,\cdot\,,\overline\alpha;\overline v)= g_s(\overline\alpha)$.
By Lemma 2 the quantity on the right tends to zero as $s\to\infty$, and therefore $x_s\to x$ as $s\to\infty$ in the metric of $C(\mathcal O_0(\widehat{\overline \alpha})\cap\mathcal A_N,\, C([t_0, T],\mathbb R^n))$. This proves Lemma 3.

Lemma 4 (implicit function lemma). Let $X$ be a Banach space, let $K$ be a convex closed subset of $X$, let $W$ be a neighbourhood of a point $\widehat w\in K$, and let $\widehat\Phi\colon W\to\mathbb R^m$. Assume that the following conditions hold:

1) $\widehat \Phi\in C(W\cap K,\,\mathbb R^m)$;

2) $\widehat \Phi$ is continuously differentiable at $\widehat w$;

3) $0\in\operatorname{int}\widehat \Phi'(\widehat w)(K-\widehat w)$.

Then there exist positive constants $r_0$ and $\gamma$ such that for all $r\in(0,r_0]$, $\Phi\in U_{C(W\cap K,\,\mathbb R^m)}(\widehat \Phi,r)$ and $y\in U_{\mathbb R^m}(\widehat \Phi(\widehat w),r)$ there exists an element $g_\Phi(y)\in W\cap K$ satisfying
$$
\begin{equation}
\Phi(g_\Phi(y))=y, \qquad \|g_\Phi(y)-\widehat w\|_X\leqslant\gamma r.
\end{equation}
\tag{52}
$$
This lemma is a part of a more general result proved in [8], and we omit its proof.
Bibliography
1. R. G. Faradzhev, P. V. Ngok and A. V. Shapiro, “Controllability theory of discrete dynamic systems”, Avtomat. i Telemekh., 1 (1986), 5–24; English transl. in Autom. Remote Control, 47 (1986), 1–20
2. M. Barbero-Liñán and B. Jakubczyk, “Second order conditions for optimality and local controllability of discrete-time systems”, SIAM J. Control Optim., 53:1 (2015), 352–377
3. E. V. Duda, A. I. Korzun and O. Yu. Minchenko, “On the local controllability of discrete systems”, Differ. Uravn., 33:4 (1997), 462–469; English transl. in Differ. Equ., 33:4 (1997), 461–468
4. E. R. Avakov and G. G. Magaril-Il'yaev, “Relaxation and controllability in optimal control problems”, Mat. Sb., 208:5 (2017), 3–37; English transl. in Sb. Math., 208:5 (2017), 585–619
5. A. A. Agrachev and Yu. L. Sachkov, Control theory from the geometric viewpoint, Encyclopaedia Math. Sci., 87, Control theory and optimization, II, Springer-Verlag, Berlin, 2004, xiv+412 pp.
6. A. D. Ioffe and V. M. Tihomirov, Theory of extremal problems, Nauka, Moscow, 1974, 479 pp.; English transl., Stud. Math. Appl., 6, North-Holland Publishing Co., Amsterdam–New York, 1979, xii+460 pp.
7. B. Sh. Mordukhovich, “An approximate maximum principle for finite-difference control systems”, Zh. Vychisl. Mat. Mat. Fiz., 28:2 (1988), 163–177; English transl. in Comput. Math. Math. Phys., 28:1 (1988), 106–114
8. E. R. Avakov and G. G. Magaril-Il'yaev, “Local infimum and a family of maximum principles in optimal control”, Mat. Sb., 211:6 (2020), 3–39; English transl. in Sb. Math., 211:6 (2020), 750–785
9. R. V. Gamkrelidze, Principles of optimal control theory, 3rd revised ed., URSS, Moscow, 2019, 200 pp.; English transl. of 2nd ed., Math. Concepts Methods Sci. Eng., 7, Rev. ed., Plenum Press, New York–London, 1978, xii+175 pp.
10. E. R. Avakov and G. G. Magaril-Il'yaev, “Local controllability and optimality”, Mat. Sb., 212:7 (2021), 3–38; English transl. in Sb. Math., 212:7 (2021), 887–920
Citation:
E. R. Avakov, G. G. Magaril-Il'yaev, “Controllability of difference approximation for a control system with continuous time”, Sb. Math., 213:12 (2022), 1620–1644
Linking options:
https://www.mathnet.ru/eng/sm9681
https://doi.org/10.4213/sm9681e
https://www.mathnet.ru/eng/sm/v213/i12/p3