Sbornik: Mathematics, 2024, Volume 215, Issue 4, Pages 438–463
DOI: https://doi.org/10.4213/sm9987e
(Mi sm9987)
 

Controllability of an approximately defined control system

E. R. Avakov$^a$, G. G. Magaril-Il'yaev$^{b,c,d}$

a V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
b Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia
c Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia
d Southern Mathematical Institute of the Vladikavkaz Scientific Center of the Russian Academy of Sciences, Vladikavkaz, Russia
Abstract: We introduce the notion of controllability of a system of ordinary differential equations with respect to a prescribed function and present conditions that guarantee the controllability (with respect to this function) of both the original control system and all control systems close to it.
Bibliography: 10 titles.
Keywords: control system, perturbation, controllability, local controllability.
Received: 12.08.2023 and 12.01.2024
Document Type: Article
MSC: 93B05, 93C73
Language: English
Original paper language: Russian

Introduction

The notion of controllability of a control system is one of the most important concepts in optimal control theory. In this work we introduce the notion of controllability of a system of ordinary differential equations with boundary conditions of general form with respect to a function which, in general, is not an admissible trajectory of this system. Our main result consists in finding conditions which guarantee controllability with respect to such a function of both the originally given control system and all control systems close to it. Proximity is measured in spaces of continuous maps equipped with the uniform metric. In addition to being of purely academic interest, this result appears to be quite important for applications. The point is that the maps involved in the definition of the original control system must possess a certain smoothness, whereas for the controllability of close systems we only need the continuity of the corresponding maps. In practice close maps arise from inaccurately specified initial data and/or from the approximation of ‘complicated’ maps by simpler ones, which usually possess only continuity.

The paper consists of four sections. In § 1 we formulate the main result. In § 2 we prove a special lemma on the inverse function for close maps and present some auxiliary assertions. Section 3 is devoted to the proof of the main result with the use of the inverse function lemma. In § 4 we give an illustrative example.

§ 1. Formulation of the main result

Consider the control system

$$ \begin{equation} \dot x=\varphi(t,x,u(t)), \qquad u(t)\in U \quad\text{for a.a. } t\in[t_0,t_1], \end{equation} \tag{1.1} $$
$$ \begin{equation} f(x(t_0),x(t_1))\leqslant0, \qquad g(x(t_0),x(t_1))=0, \end{equation} \tag{1.2} $$
where $\varphi\colon \mathbb R\times\mathbb R^n\times\mathbb R^r\to\mathbb R^n$ is a map with arguments $t$, $x$ and $u$, the maps $f\colon\mathbb R^n\times\mathbb R^n\to \mathbb R^{m_1}$ and $g\colon\mathbb R^n\times\mathbb R^n\to \mathbb R^{m_2}$ have arguments $\zeta_i\in\mathbb R^n$, $i=0,1$, and $U$ is a nonempty subset of $\mathbb R^r$.

From now on we assume that $\varphi$ is continuous and continuously differentiable with respect to $x$ on $\mathbb R\times\mathbb R^n\times \mathbb R^r$, while $f$ and $g$ are differentiable on $\mathbb R^n\times\mathbb R^n$.

The spaces of continuous vector functions on $[t_0,t_1]$ taking values in $\mathbb R^n$, absolutely continuous vector functions taking values in $\mathbb R^n$, and essentially bounded vector functions taking values in $\mathbb R^r$ are denoted by $C([t_0,t_1],\mathbb R^n)$, $\operatorname{AC}([t_0,t_1],\mathbb R^n)$ and $L_\infty([t_0,t_1],\mathbb R^r)$ (for $r=1$ we write $L_\infty([t_0,t_1])$), respectively.

The space $W_\infty^1([t_0,t_1],\mathbb R^n)$ is the set of all vector functions $x(\,\cdot\,)\in \operatorname{AC}([t_0,t_1],\mathbb R^n)$ such that $\dot x(\,\cdot\,)\in L_\infty([t_0,t_1],\mathbb R^n)$, equipped with the norm

$$ \begin{equation*} \|x(\,\cdot\,)\|_{W_\infty^1([t_0,t_1],\mathbb R^n)}=\|x(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}+ \|\dot x(\,\cdot\,)\|_{L_\infty([t_0,t_1],\mathbb R^n)}. \end{equation*} \notag $$

A pair $(x(\,\cdot\,),u(\,\cdot\,))\in Z=C([t_0,t_1],\mathbb R^n)\times L_\infty([t_0,t_1],\mathbb R^r)$ is said to be admissible for the control system (1.1), (1.2) (the word ‘control’ is often omitted below) if conditions (1.1) and (1.2) are satisfied.

We define the attainability set of the system (1.1), (1.2) with respect to an open set $V\subset C([t_0,t_1],\mathbb R^n)$:

$$ \begin{equation*} \begin{aligned} \, R(V) &=\bigl\{y=(y_1,y_2)\in\mathbb R^{m_1}\times\mathbb R^{m_2} \mid \exists\,(x(\,\cdot\,),u(\,\cdot\,))\in Z\colon \text{condition }(1.1)\text{ holds}, \\ &\qquad f(x(t_0),x(t_1))\leqslant y_1,\ g(x(t_0),x(t_1))=y_2\text{ and } x(\,\cdot\,)\in V\bigr\}. \end{aligned} \end{equation*} \notag $$

Definition 1. The system (1.1), (1.2) is said to be controllable with respect to a function $\widehat x(\,\cdot\,)\in C([t_0,t_1],\mathbb R^n)$ and a neighbourhood $V$ of it if the inclusion

$$ \begin{equation*} 0\in \operatorname{int}R(V) \end{equation*} \notag $$
holds.

It should be noted that the function $\widehat x(\,\cdot\,)$ in Definition 1 is not necessarily an admissible trajectory for the system (1.1), (1.2). In the standard definition of controllability it is assumed that $\widehat x(\,\cdot\,)$ is an admissible trajectory (see, for example, [1] and [2]).

Definition 1 means that for all $y_1$ and $y_2$ of sufficiently small norm the neighbourhood $V$ contains a function $x(\,\cdot\,)$ for which there exists a control $u(\,\cdot\,)$ such that the pair $(x(\,\cdot\,),u(\,\cdot\,))\in Z$ satisfies condition (1.1), $f(x(t_0),x(t_1))\leqslant y_1$ and $g(x(t_0),x(t_1))=y_2$. In particular, taking $y_1=0$ and $y_2=0$ we see that $V$ also contains admissible trajectories of the system (1.1), (1.2).
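As a simple model illustration of this definition (the paper's own, more substantial, example is given in § 4), consider the case $n=r=m_1=m_2=1$, $[t_0,t_1]=[0,1]$, $\varphi(t,x,u)=u$, $U=[-1,1]$, $f\equiv-1$ and $g(\zeta_0,\zeta_1)=\zeta_1-\zeta_0-1/2$, and let $\widehat x(t)=t/2$ and $V=\{x(\,\cdot\,)\in C([0,1],\mathbb R)\colon \|x(\,\cdot\,)-\widehat x(\,\cdot\,)\|_{C([0,1],\mathbb R)}<\varepsilon\}$ for some $\varepsilon>0$. For any $y=(y_1,y_2)$ with $y_1>-1$ and $|y_2|<\min(\varepsilon,1/2)$ the pair

$$ \begin{equation*} x(t)=\Bigl(\frac12+y_2\Bigr)t, \qquad u(t)\equiv\frac12+y_2\in U, \end{equation*} \notag $$

satisfies condition (1.1), the function $x(\,\cdot\,)$ lies in $V$ (since $\|x(\,\cdot\,)-\widehat x(\,\cdot\,)\|_{C([0,1],\mathbb R)}=|y_2|$), and $f(x(0),x(1))=-1\leqslant y_1$, $g(x(0),x(1))=y_2$. Hence $0\in\operatorname{int}R(V)$, that is, this system is controllable with respect to $\widehat x(\,\cdot\,)$ and $V$ in the sense of Definition 1.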

In [3] these authors introduced the notion of local controllability of the system (1.1), (1.2) with respect to a function $\widehat x(\,\cdot\,)\in C([t_0,t_1],\mathbb R^n)$ which is also not necessarily an admissible trajectory. That definition (formulated in the terms used here) differs from Definition 1 in that the inclusion $0\in \operatorname{int}R(V)$ must hold for each neighbourhood $V$ of the function $\widehat x(\,\cdot\,)$. In the same work we derived sufficient conditions for local controllability.

Controllability research (see, for example, [1], [4] and [5]) is usually concerned with finding sufficient conditions for the controllability of the original system. Our aim is to derive sufficient conditions for controllability with respect to a function $\widehat x(\,\cdot\,)$ of both the original system (1.1), (1.2) and a system ‘close’ to it in the sense explained below. These conditions are formulated in Theorem 1. For close systems they guarantee controllability in the sense of Definition 1, and for the original system (as a consequence) they provide local controllability (which was established earlier by these authors).

To formulate the main result we need some additional notation and definitions. Let $k\in\mathbb N$. Set

$$ \begin{equation*} \mathcal A_k =\bigl\{\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_k(\,\cdot\,))\in (L_\infty([t_0,t_1]))^k \colon \overline\alpha(t)\in\Sigma^k \text{ for a.a. } t\in[t_0,t_1]\bigr\}, \end{equation*} \notag $$
where
$$ \begin{equation*} \Sigma^k=\biggl\{\overline\alpha=(\alpha_1,\dots,\alpha_k)\in \mathbb R_+^k\colon \sum_{i=1}^k\alpha_i=1\biggr\}, \end{equation*} \notag $$
and consider the set
$$ \begin{equation*} \mathcal U=\bigl\{u(\,\cdot\,)\in L_\infty([t_0,t_1],\mathbb R^r)\colon u(t)\in U\text{ for a.a. } t\in[t_0,t_1]\bigr\}. \end{equation*} \notag $$

With (1.1), (1.2) we associate the following system:

$$ \begin{equation} \dot x =\sum_{i=1}^k\alpha_i(t)\varphi(t,x,u_i(t)), \qquad \overline\alpha(\,\cdot\,)\in \mathcal A_k, \quad \overline u(\,\cdot\,)\in \mathcal U^k , \end{equation} \tag{1.3} $$
$$ \begin{equation} f(x(t_0),x(t_1))\leqslant0, \qquad g(x(t_0),x(t_1))=0, \end{equation} \tag{1.4} $$
where $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_k(\,\cdot\,))$ and $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots,u_k(\,\cdot\,))$ are the control variables. This system can be thought of as a convex extension of (1.1), (1.2) (it is obvious that for $k=1$ it coincides with (1.1), (1.2)); we call it just a convex system.
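For instance, for the scalar system $\dot x=u$, $u(t)\in U=\{-1,1\}$, and $k=2$ the convexified dynamics (1.3) take the form

$$ \begin{equation*} \dot x=\alpha_1(t)u_1(t)+\alpha_2(t)u_2(t), \qquad \alpha_1(t)+\alpha_2(t)=1, \quad \alpha_1(t),\alpha_2(t)\geqslant0, \quad u_1(t),u_2(t)\in\{-1,1\}, \end{equation*} \notag $$

so the admissible velocities fill the whole interval $[-1,1]$ rather than only the two points $\pm1$. In particular, the function $\widehat x(\,\cdot\,)\equiv0$, which is not a trajectory of the original system, is a trajectory of the convex system (take $u_1(\,\cdot\,)\equiv-1$, $u_2(\,\cdot\,)\equiv1$ and $\alpha_1(\,\cdot\,)\equiv\alpha_2(\,\cdot\,)\equiv1/2$). This elementary example serves only to illustrate the convexification and is not used below.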

As above, a triple $(x(\,\cdot\,),\overline\alpha(\,\cdot\,),\overline u(\,\cdot\,))\in C([t_0,t_1],\mathbb R^n)\times\mathcal A_k\times \mathcal U^k$ is said to be admissible for the convex system (1.3), (1.4) if conditions (1.3) and (1.4) are satisfied.

We denote the Euclidean norm in $\mathbb R^n$ by $|\cdot|$. The value of a linear functional $\lambda=(\lambda_1,\dots,\lambda_n)\in(\mathbb R^n)^*$ at an element $x=(x_1,\dots,x_n)^\top\in\mathbb R^n$ (the symbol $\top$ means taking the transpose) is denoted by $\langle \lambda,x\rangle=\sum_{i=1}^n\lambda_ix_i$. We let $(\mathbb R^n)^*_+$ denote the set of functionals on $\mathbb R^n$ taking nonnegative values at nonnegative vectors. If $\Lambda\colon \mathbb R^n\to\mathbb R^m$ is a linear operator, then $\Lambda^*$ denotes the adjoint of $\Lambda$.

Given a function $\widehat x(\,\cdot\,)$, for partial derivatives of the maps $f$ and $g$ with respect to $\zeta_0$ and $\zeta_1$ at the point $(\widehat x(t_0),\widehat x(t_1))$ we use the shorthand notation $\widehat f_{\zeta_i}$ and $\widehat g_{\zeta_i}$, $i=0,1$, respectively.

Let $k\in\mathbb N$ and suppose that a triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$, where $\widehat{\overline \alpha}(\,\cdot\,)=(\widehat\alpha_1(\,\cdot\,),\dots, \widehat\alpha_k(\,\cdot\,))$ and $\widehat{\overline u}(\mkern1.5mu\cdot\mkern1.5mu)=(\widehat u_1(\mkern1.5mu\cdot\mkern1.5mu),\dots,\widehat u_k(\mkern1.5mu\cdot\mkern1.5mu))$, is admissible for the convex system (1.3), (1.4). Let $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$ denote the set of triples

$$ \begin{equation*} (\lambda_f,\lambda_g,p(\,\cdot\,))\in (\mathbb R^{m_1})_+^*\times(\mathbb R^{m_2})^*\times \operatorname{AC}([t_0,t_1],(\mathbb R^n)^*) \end{equation*} \notag $$
satisfying the relations
$$ \begin{equation} \begin{gathered} \, \dot p(t) =-p(t)\sum_{i=1}^k\widehat\alpha_i(t)\varphi_x(t,\widehat x(t),\widehat u_i(t)), \\ p(t_0)={\widehat {f}_{\zeta_0}}^*\lambda_f+{\widehat {g}_{\zeta_0}}^*\lambda_g, \qquad p(t_1)=-{\widehat{f}_{\zeta_1}}^*\lambda_f-{\widehat {g}_{\zeta_1}}^*\lambda_g, \\ \langle \lambda_f,f(\widehat x(t_0),\widehat x(t_1))\rangle=0, \\ \max_{u\in U}\langle p(t),\varphi(t,\widehat x(t),u)\rangle=\langle p(t), \dot {\widehat x}(t)\rangle \quad \text{a.e. on } [t_0,t_1]. \end{gathered} \end{equation} \tag{1.5} $$
It is clear that the zero triple satisfies these conditions.

Note that if $k=1$ (in this case $\widehat\alpha_1(\,\cdot\,)=1$ and we use the notation $\widehat u_1(\,\cdot\,)=\widehat u(\,\cdot\,)$), then these relations coincide with the conditions of Pontryagin’s maximum principle with Pontryagin function $H(t,x,u,p)=\langle p,\varphi(t,x,u)\rangle$.
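Written out for $k=1$ (a direct specialization of (1.5), recorded here for convenience), these relations take the familiar form

$$ \begin{equation*} \begin{gathered} \, \dot p(t)=-p(t)\varphi_x(t,\widehat x(t),\widehat u(t)), \\ p(t_0)={\widehat {f}_{\zeta_0}}^*\lambda_f+{\widehat {g}_{\zeta_0}}^*\lambda_g, \qquad p(t_1)=-{\widehat{f}_{\zeta_1}}^*\lambda_f-{\widehat {g}_{\zeta_1}}^*\lambda_g, \qquad \langle \lambda_f,f(\widehat x(t_0),\widehat x(t_1))\rangle=0, \\ \max_{u\in U}H(t,\widehat x(t),u,p(t))=H(t,\widehat x(t),\widehat u(t),p(t)) \quad\text{a.e. on } [t_0,t_1], \end{gathered} \end{equation*} \notag $$

where the maximum condition takes this form because $\dot{\widehat x}(t)=\varphi(t,\widehat x(t),\widehat u(t))$ for almost all $t\in[t_0,t_1]$ when the pair $(\widehat x(\,\cdot\,),\widehat u(\,\cdot\,))$ is admissible for (1.1).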

Let $\mathcal M$ be a topological space. We denote by $C(\mathcal M,Z)$ the space of continuous bounded maps $F\colon \mathcal M\to Z$ equipped with the norm

$$ \begin{equation*} \|F\|_{C(\mathcal M,Z)}=\sup_{x\in \mathcal M}\|F(x)\|_Z. \end{equation*} \notag $$

Let $\rho>0$,

$$ \begin{equation*} \Delta(\rho)=\bigl\{(t,x,u)\in\mathbb R\times\mathbb R^n\times U \colon |x-\widehat x(t)|\leqslant\rho,\,t\in[t_0,t_1]\bigr\}, \end{equation*} \notag $$
and let $B(\rho)$ be the closed ball in $\mathbb R^{2n}$ with centre $(\widehat x(t_0),\widehat x(t_1))$ and radius $\rho$.

Set

$$ \begin{equation*} \mathcal L=C(\Delta(\rho),\mathbb R^n)\times C(B(\rho),\mathbb R^{m_1})\times C(B(\rho),\mathbb R^{m_2}). \end{equation*} \notag $$
This is a normed space; the norm in this space is defined as the sum of the norms of the factors.

With each triple $(\widetilde\varphi,\widetilde f, \widetilde g)$ of continuous maps taking values in $\mathbb R^n$, $\mathbb R^{m_1}$ and $\mathbb R^{m_2}$, respectively, we associate the system of the form (1.1), (1.2) in which $\varphi$, $f$ and $g$ are replaced by the maps $\widetilde\varphi$, $\widetilde f$ and $\widetilde g$, respectively. We refer to this triple as the system $(\widetilde\varphi,\widetilde f, \widetilde g)$.

The main result of this work is as follows.

Theorem 1. Let $k\in\mathbb N$, and let $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$ be such that $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,), \widehat{\overline u}(\,\cdot\,))=\{0\}$. Then for any neighbourhood $V_0$ of the function $\widehat x(\,\cdot\,)$ there exists a neighbourhood $W_0$ of zero in $\mathcal L$ such that each system $(\widetilde\varphi,\widetilde f,\widetilde g)$ with the property $(\widetilde\varphi-\varphi,\,\widetilde f-f, \widetilde g- g)\in W_0$ is controllable with respect to $\widehat x(\,\cdot\,)$ and $V_0$.

Note that for the controllability of a system $(\widetilde\varphi,\widetilde f,\widetilde g)$ which is close to $(\varphi, f, g)$ in the sense mentioned above, it is sufficient that the maps $\widetilde\varphi$, $\widetilde f$, and $\widetilde g$ be continuous.

Corollary 1. Under the hypotheses of Theorem 1 the system $(\varphi, f, g)$ is locally controllable with respect to the function $\widehat x(\,\cdot\,)$.

§ 2. Inverse function lemma and auxiliary assertions

In this section we prove Lemma 1 on the inverse function, present Lemma 2, which characterizes the condition $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$ in other terms, and formulate Lemma 3 on approximation, which is a particular case of Lemma 4.3 from these authors’ work [6].

Let $Z$ be a normed space, $z_0\in Z$ and $\rho>0$. Here and throughout, $U_Z(z_0,\rho)$ and $B_Z(z_0,\rho)$ denote the open and the closed ball in $Z$ centred at the point $z_0$ and of radius $\rho$, respectively.

Lemma 1 (on the inverse function). Let $X$, $Y$ and $Y_1$ be Banach spaces such that $Y_1$ is continuously embedded in $Y$, let $K$ and $Q$ be convex closed subsets of $X$, let $V$ be a neighbourhood of the point $\widehat x\in K\cap Q$, and let $F\in C(V,Y)$. Suppose that the following conditions hold:

(1) $F$ is differentiable at $\widehat x$ and $F(\widehat x)\in Y_1$,

(2) $0\in\operatorname{int}F'(\widehat x)(K-\widehat x)$,

(3) the set $B_X(0,1)\cap(F'(\widehat x))^{-1}B_{Y_1}(0,1)$ is relatively compact in $X$,

(4) $F-F'(\widehat x)\in C(V,Y_1)$.

Then there exist constants $0<\delta_1\leqslant1$ and $c>0$ and a neighbourhood $W\subset C(V\cap K\cap Q,\,Y)$ of the map $F$ such that for each map $\widetilde F\in W$ satisfying the conditions $\widetilde F-F\in C(V\cap K\cap Q,\,Y_1)$ and

$$ \begin{equation} B_X(0,1)\cap(F'(\widehat x))^{-1}\bigl(F'(\widehat x)(x-\widehat x)+y-\widetilde F(x)\bigr) \subset Q-\widehat x \end{equation} \tag{2.1} $$
for all $(x,y)\in(V\cap K\cap Q)\times U_{Y_1}(F(\widehat x),\delta_1)$, one can find a map $\psi_{\widetilde F}\colon U_{Y_1}(F(\widehat x),\delta_1)\to V\cap K\cap Q$ that satisfies the relations
$$ \begin{equation} \begin{gathered} \, \widetilde F(\psi_{\widetilde F}(y))=y, \\ \|\psi_{\widetilde F}(y)-\widehat x\|_X\leqslant c\bigl(\|\widetilde F-F\|_{C(V\cap K\cap Q,\,Y)}+\|y-F(\widehat x)\|_Y\bigr) \end{gathered} \end{equation} \tag{2.2} $$
for each $y\in U_{Y_1}(F(\widehat x),\delta_1)$.

Proof. It follows from the assumptions of the lemma that the hypotheses of Lemma 1 in these authors’ work [7] are satisfied; in turn, that lemma yields the existence of positive numbers $\gamma$ and $a$ and a continuous map $R\colon U_{Y}(0,\gamma)\to K-\widehat x$ (a right inverse) such that for each $z\in U_{Y}(0,\gamma)$ the following relations hold (where $A=F'(\widehat x)$):
$$ \begin{equation} A R(z)=z, \qquad \|R(z)\|_X\leqslant a\|z\|_Y. \end{equation} \tag{2.3} $$

By virtue of condition $(1)$ there exists $0\!<\!\delta\!=\!\delta(a)\!\leqslant\! 1$ such that ${\widehat x\!+\!U_X(0,\delta)\!\subset\! V}$, and if $x\in U_X(0,\delta)$, then

$$ \begin{equation} \|F(\widehat x+x)-F(\widehat x)-A x\|_Y\leqslant\frac1{2a}\|x\|_X. \end{equation} \tag{2.4} $$

Since $Y_1$ is continuously embedded in $Y$, there exists a positive constant $b$ such that $\|y\|_Y\leqslant b\|y\|_{Y_1}$ for each $y\in Y_1$. We choose $r>0$ and $0<\delta_1\leqslant1$ so that

$$ \begin{equation} 2(r+b\delta_1)\leqslant\min\biggl(\gamma,\frac \delta a\biggr). \end{equation} \tag{2.5} $$
We put $W=U_{C(V\cap K\cap Q,\,Y)}(F,r)$ and $V_1=U_{Y_1}(F(\widehat x),\delta_1)$.

Now, in accordance with the hypothesis of the lemma, fix $y\in V_1$ and $\widetilde F\in W$ satisfying the condition $\widetilde F-F\in C(V\cap K\cap Q,\,Y_1)$ and inclusion (2.1), which, in turn, yields the inclusion $B_X(0,1)\cap A^{-1}(z(x))\subset Q-\widehat x$ for each $x\in E=U_X(0,\delta)\cap(K-\widehat x)\cap(Q-\widehat x)$, where

$$ \begin{equation*} z(x)=z_{y,\widetilde F}(x)=A x+y-\widetilde F(\widehat x+x). \end{equation*} \notag $$

Since $\widetilde F\in W\subset C(V\cap K\cap Q,\,Y)$, the map $x\mapsto \widetilde F(\widehat x+x)$ from $E$ to $Y$ is continuous; hence the map $x\mapsto z(x)$ from $E$ to $Y$ is continuous as well.

We write $z(x)$ in the form

$$ \begin{equation*} z(x)=A(\widehat x+x)-F(\widehat x+x)+F(\widehat x+x)-\widetilde F(\widehat x+x)+F(\widehat x)-A\widehat x+y-F(\widehat x). \end{equation*} \notag $$
It follows from condition $(4)$ in the lemma and the choice of $\widetilde F$ and $y$ that $z(x)\in Y_1$ for each $x\in E$ and that the following estimate holds:
$$ \begin{equation*} \begin{aligned} \, \|z(x)\|_{Y_1} &\leqslant\|F-A\|_{C(V,Y_1)}+\|F-\widetilde F\|_{C(V\cap K\cap Q,\,Y_1)} \\ &\qquad+\|F(\widehat x)-A\widehat x\|_{Y_1}+ \|y-F(\widehat x)\|_{Y_1}, \end{aligned} \end{equation*} \notag $$
which means that $z(x)\in B_{Y_1}(0,\kappa)$ for each $x\in E$, where $\kappa=\kappa(y,\widetilde F)$ is the quantity on the right-hand side of this estimate.

We write $z(x)$ in the form

$$ \begin{equation*} z(x)=-(F(\widehat x+x)-F(\widehat x)-A x) +F(\widehat x+x)-\widetilde F(\widehat x+x)+y-F(\widehat x). \end{equation*} \notag $$
Then in view of (2.4) and the choice of $\widetilde F$ we have
$$ \begin{equation} \begin{aligned} \, \|z(x)\|_Y &\leqslant \frac1{2a}\|x\|_X +\|\widetilde F-F\|_{C(V\cap K\cap Q,\,Y)}+\|y-F(\widehat x)\|_Y \nonumber \\ &=\frac1{2a}\|x\|_X+d, \end{aligned} \end{equation} \tag{2.6} $$
where (as $Y_1$ is continuously embedded in $Y$)
$$ \begin{equation} \begin{aligned} \, \notag d=d(y, \widetilde F) &=\|\widetilde F-F\|_{C(V\cap K\cap Q,\,Y)}+\|y-F(\widehat x)\|_Y \\ &<r+b\|y-F(\widehat x)\|_{Y_1}< r+b\delta_1. \end{aligned} \end{equation} \tag{2.7} $$

We set

$$ \begin{equation*} M=B_X(0, 2ad)\cap (K-\widehat x)\cap (Q-\widehat x) \end{equation*} \notag $$
and introduce the map $\Phi=\Phi_{y,\widetilde F}\colon M\to X$ which acts by the formula
$$ \begin{equation*} \Phi(x)=R(z(x)). \end{equation*} \notag $$
It is well defined. In fact, first, $2ad<2a(r+b\delta_1)\leqslant\delta$ by (2.7) and (2.5), and thus $M\subset E$; second, for any $x\in M$, by (2.4) and (2.7) we have
$$ \begin{equation*} \|z(x)\|_Y\leqslant\frac1{2a}\|x\|_X+d\leqslant 2d<2(r+b\delta_1)\leqslant\gamma. \end{equation*} \notag $$
The map $\Phi$ is continuous, since it is a composition of continuous maps. Let us show that $\Phi$ maps the convex closed set $M$ into itself.

Indeed, let $x\in M$. By (2.3) and the above estimate we have

$$ \begin{equation*} \|\Phi(x)\|_X=\|R(z(x))\|_X\leqslant a\|z(x)\|_Y\leqslant2ad, \end{equation*} \notag $$
which means that $\Phi(x)\in B_X(0, 2ad)$.

Then $\Phi(x)\in K-\widehat x$ by the definition of $R$. It follows from the first relation in (2.3) that $A\Phi(x)=A R(z(x))=z(x)$. This yields the inclusion $\Phi(x)\in A^{-1}(z(x))$. Since $2ad<\delta\leqslant1$, we have $\Phi(x)\in B_X(0,1)$, and now from (2.1) we obtain the inclusion $\Phi(x)\in B_X(0,1)\cap A^{-1}(z(x))\subset Q-\widehat x$.

Thus, $\Phi(M)\subset M$. Let us show that the set $\Phi(M)$ is relatively compact in $X$.

It follows from the relation $A\Phi(x)=z(x)$, the inclusion $z(x)\in B_{Y_1}(0,\kappa)$ established above and the inclusion $\Phi(x)\in B_X(0,1)$, which holds for each $x\in M$, that $\Phi(M)\subset B_X(0,1)\cap A^{-1}B_{Y_1}(0,\kappa)$.

By condition (3) in the lemma the set $B_X(0,1)\cap A^{-1}B_{Y_1}(0,1)$ is relatively compact; it readily follows that the set $B_X(0,1)\cap A^{-1}B_{Y_1}(0,\kappa)$ is also relatively compact, and so its subset $\Phi(M)$ is relatively compact as well.

Thus, the continuous map $\Phi$ maps the convex closed set $M$ into itself and its image is relatively compact in $X$. Consequently, by Schauder’s fixed-point theorem (which states that if the image of a continuous map taking a convex closed subset of a Banach space to itself is relatively compact, then this map has a fixed point: see, for example, [8], § 3.6.2) there exists an element $\widetilde x=\widetilde x(y,\widetilde F)$ such that $\Phi(\widetilde x)=\widetilde x$. Then by the definition of $\Phi$ and $z(x)$ with regard to the first relation in (2.3) we have

$$ \begin{equation*} A\widetilde x=A\Phi(\widetilde x)=A R(z(\widetilde x))=z(\widetilde x)=A\widetilde x+y-\widetilde F(\widehat x+\widetilde x), \end{equation*} \notag $$
which means that $\widetilde F(\widehat x+\widetilde x)=y$. Put $\psi_{\widetilde F}(y)=\widehat x+\widetilde x$. By construction $\widetilde x\in U_X(0,\delta)\cap(K- \widehat x)\cap(Q-\widehat x)\subset (V-\widehat x)\cap(K-\widehat x)\cap(Q-\widehat x)$, and therefore $\psi_{\widetilde F}(y)\in V\cap K\cap Q$, which means that the first relation in (2.2) holds for any $y\in V_1$.

Since $\psi_{\widetilde F}(y)-\widehat x=\widetilde x\in B_X(0, 2ad)$, taking (2.7) into account we obtain the inequality in (2.2) for $c=2a$, which completes the proof of Lemma 1 on the inverse function.

Let $k\in\mathbb N$ and suppose that a triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$, where $\widehat{\overline \alpha}(\,\cdot\,)=(\widehat\alpha_1(\,\cdot\,),\dots, \widehat\alpha_k(\,\cdot\,))$ and $\widehat{\overline u}(\mkern1.5mu\cdot\mkern1.5mu)=(\widehat u_1(\mkern1.5mu\cdot\mkern1.5mu),\dots,\widehat u_k(\mkern1.5mu\cdot\mkern1.5mu))$, is admissible for the convex system (1.3), (1.4). Next, let $m=m_1+m_2$ and fix tuples $\overline N=(N_1,\dots, N_{m+1})$, where $N_i\in \mathbb N$ and $N_i>k$, $\overline\alpha=(\overline\alpha_1(\,\cdot\,),\dots,\overline\alpha_{m+1}(\,\cdot\,))$, where $\overline\alpha_i(\,\cdot\,)=(\alpha_{i1}(\,\cdot\,),\dots,\alpha_{iN_i}(\,\cdot\,))\in (L_\infty([t_0,t_1]))^{N_i}$, and $\overline u=(\overline u_1(\,\cdot\,),\dots,\overline u_{m+1}(\,\cdot\,))$, where $\overline u_i(\,\cdot\,)=(u_{i1}(\,\cdot\,),\dots, u_{i(N_i-k)}(\,\cdot\,))\in (L_\infty([t_0,t_1],\mathbb R^r))^{N_i-k}$, $i=1,\dots,m+1$.

Put

$$ \begin{equation*} X=C([t_0,t_1],\mathbb R^n)\times\mathbb R^n\times \mathbb R^{m+1}\times\mathbb R^{m_1}\quad\text{and} \quad Y=C([t_0,t_1],\mathbb R^n)\times\mathbb R^{m_1}\times \mathbb R^{m_2} \end{equation*} \notag $$
and consider the map $A_{\overline N}(\overline\alpha,\overline u)\colon X\to Y$ defined for $(h(\mkern1.5mu\cdot\mkern1.5mu),\xi,\beta,\nu)\in X$ and ${t\,{\in}\,[t_0,t_1]}$ by
$$ \begin{equation*} \begin{aligned} \, &A_{\overline N}(\overline\alpha,\overline u)[h(\,\cdot\,),\xi,\beta,\nu](t) =\biggl(h(t)-\xi- \int_{t_0}^t\biggl(\sum_{j=1}^k\widehat\alpha_j(\tau) \varphi_x(\tau,\widehat x(\tau),\widehat u_j(\tau))h(\tau) \\ &\ \ +\sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^k\alpha_{ij}(\tau)\varphi(\tau,\widehat x(\tau),\widehat u_j(\tau))+ \sum_{j=k+1}^{N_i}\alpha_{ij}(\tau)\varphi(\tau,\widehat x(\tau),u_{i(j-k)}(\tau))\biggr)\biggr)\,d\tau, \\ &\quad\qquad\qquad \widehat f'[\xi,h(t_1)]+\nu, \, \widehat g'[\xi,h(t_1)]\biggr), \end{aligned} \end{equation*} \notag $$
where $\beta=(\beta_1,\dots,\beta_{m+1})$.

This is a well-defined continuous linear operator. Indeed, set

$$ \begin{equation*} \gamma_u=\max\bigl\{\|\widehat{\overline u}(\,\cdot\,)\|_{(L_\infty([t_0,t_1],\mathbb R^r))^k}, \,\|\overline u_i(\,\cdot\,)\|_{(L_\infty([t_0,t_1],\mathbb R^r))^{N_i-k}},\,1\leqslant i\leqslant m+1\bigr\}. \end{equation*} \notag $$

The maps $(t,u)\mapsto \varphi(t,\widehat x(t),u)$ and $(t,u)\mapsto \varphi_x(t,\widehat x(t),u)$ are continuous on the compact set $\mathcal K=[t_0,t_1]\times B_{\mathbb R^r}(0,\gamma_u)$. Set

$$ \begin{equation} \begin{aligned} \, C_0 &=\max_{(t,u)\in \mathcal K}|\varphi(t,\widehat x(t),u)|, \\ C_1 &=\max_{(t,u)\in \mathcal K}\|\varphi_x(t,\widehat x(t),u)\|, \end{aligned} \end{equation} \tag{2.8} $$
where $\|\,\cdot\,\|$ is the operator norm of a linear operator from $\mathbb R^n$ to $\mathbb R^n$.

Since the functions $\widehat\alpha_j(\mkern1.5mu\cdot\mkern1.5mu)$, $j=1,\dots,k$, and $\alpha_{ij}(\mkern1.5mu\cdot\mkern1.5mu)$, $i=1,\dots,m+1$, ${j=1,\dots,N_i}$, are essentially bounded, the integrand in the definition of the map $A_{\overline N}(\overline\alpha,\overline u)$ belongs to $L_\infty([t_0,t_1],\mathbb R^n)$ and is thereby integrable. Consequently, the first component of the image of this map belongs to $C([t_0,t_1],\mathbb R^n)$. Then it is clear that this map acts from $X$ to $Y$. Its linearity is obvious, and the fact that it is bounded can be verified directly.
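For instance (a direct estimate; the constant $c_\alpha=\max_{i,j}\|\alpha_{ij}(\,\cdot\,)\|_{L_\infty([t_0,t_1])}$ is introduced only for this purpose), since $\widehat\alpha_j(\tau)\geqslant0$ and $\sum_{j=1}^k\widehat\alpha_j(\tau)=1$ for almost all $\tau$, by (2.8) the first component of $A_{\overline N}(\overline\alpha,\overline u)[h(\,\cdot\,),\xi,\beta,\nu]$ is bounded in the norm of $C([t_0,t_1],\mathbb R^n)$ by

$$ \begin{equation*} \|h(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}+|\xi| +(t_1-t_0)\biggl(C_1\|h(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)} +C_0\,c_\alpha\max_{1\leqslant i\leqslant m+1}N_i\sum_{i=1}^{m+1}|\beta_i|\biggr), \end{equation*} \notag $$

while the two finite-dimensional components are estimated in terms of the norms of the linear maps $\widehat f'$, $\widehat g'$ and of $|\nu|$; this yields the boundedness of $A_{\overline N}(\overline\alpha,\overline u)$.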

Set

$$ \begin{equation*} K_0=C([t_0,t_1],\mathbb R^n)\times\mathbb R^n\times\Sigma^{m+1}\times(\mathbb R^{m_1}_++f(\widehat x(t_0),\widehat x(t_1))), \end{equation*} \notag $$
where for each $k\in\mathbb N$ the simplex $\Sigma^k$ was defined above.

Recall that the sets $\mathcal A_k$, $k\in\mathbb N$, and $\mathcal U$ were introduced before the definition of the convex system (1.3), (1.4).

For $N>k$ set $\widehat{\overline \alpha}_N(\,\cdot\,)=(\widehat\alpha_1(\,\cdot\,),\dots,\widehat\alpha_k(\,\cdot\,), 0,\dots,0)\in \mathcal A_N$.

Lemma 2. Let $k\in\mathbb N$. Then the following conditions are equivalent.

(1) The relation $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$ is valid.

(2) There exist tuples $\overline N_*=(N_1,\dots, N_{m+1})$, where $N_i > k$, $\overline\alpha_*=(\overline\alpha_1(\,\cdot\,),\dots, \overline\alpha_{m+1}(\,\cdot\,))$, where $\overline\alpha_i(\,\cdot\,)=(\alpha_{i1}(\,\cdot\,), \dots,\alpha_{iN_i}(\,\cdot\,))\in \mathcal A_{N_i}-\widehat{\overline\alpha}_{N_i}(\,\cdot\,)$, and $\overline u_*=(\overline u_1(\,\cdot\,), \dots,\overline u_{m+1}(\,\cdot\,))$, where $\overline u_i(\,\cdot\,)=(u_{i1}(\,\cdot\,),\dots,u_{i(N_i-k)}(\,\cdot\,))\in \mathcal U^{N_i-k}$, $i=1,\dots,m+1$, such that

$$ \begin{equation} 0\in\operatorname{int}A_{\overline N_*}(\overline\alpha_*,\overline u_*)K_0. \end{equation} \tag{2.9} $$

The proof of this lemma is based on the following two propositions.

Proposition 1. Let $X$, $Y_1$ and $Y_2$ be Banach spaces, let $A_i\colon X\to Y_i$, $i=1,2$, be continuous linear operators, let the operator $A=(A_1,A_2)\colon X\to Y=Y_1\times Y_2$ be defined by $Ax=(A_1x,A_2x)$, $x\in X$, and let $C$ be a convex closed subset of $X$.

Then $0\in\operatorname{int}AC$ if and only if $0\in\operatorname{int}A_1C$ and $0\in\operatorname{int}A_2(C\cap \operatorname{Ker}A_1)$.

Proof. Suppose that $0\in\operatorname{int}AC$. Then there exists $r>0$ such that $U_{Y_1}(0,r)\times U_{Y_2}(0,r)\!\subset\! AC$. This yields the inclusion $U_{Y_1}(0,r)\!\subset\! A_1C$, and therefore ${0\!\in\!\operatorname{int}A_1C}$.

Next, since $\{0\}\times U_{Y_2}(0,r)\subset AC$, for each $y_2\in U_{Y_2}(0,r)$ there exists an element $x\in C\cap\operatorname{Ker}A_1$ such that $A_2x=y_2$ and so $0\in\operatorname{int}A_2(C\cap \operatorname{Ker}A_1)$.

Conversely, suppose that $0\in\operatorname{int}A_1C$ and $0\in\operatorname{int}A_2(C\cap \operatorname{Ker}A_1)$. It follows from the first inclusion that the hypotheses of Lemma $1$ from these authors’ work [7] are satisfied, and therefore, as already noted in the beginning of the proof of Lemma 1, there exist positive numbers $\gamma$ and $a$ and a map $R\colon U_{Y_1}(0,\gamma)\to C$ such that the following relations hold for any $y_1\in U_{Y_1}(0,\gamma)$:

$$ \begin{equation} A_1R(y_1)=y_1, \qquad \|R(y_1)\|_X\leqslant a\|y_1\|_{Y_1}. \end{equation} \tag{2.10} $$
It follows from the second inclusion that $U_{Y_2}(0,\rho)\subset A_2(C\cap \operatorname{Ker}A_1)$ for some $\rho>0$. Fix $0<\delta\leqslant \min\bigl(\rho/4,\rho/(4a\|A_2\|),\gamma/2\bigr)$, and let us show that $U_{Y}(0,\delta)\subset AC$.

If $y=(y_1,y_2)\in U_{Y}(0,\delta)$, then $2y_1\in U_{Y_1}(0,\gamma)$ by the choice of $\delta$; hence, by (2.10) there exists $x_1=R(2y_1)\in C$ such that $A_1x_1=2y_1$.

Now let us show that $2y_2-A_2x_1\in U_{Y_2}(0,\rho)$. Indeed, with regard to the choice of $\delta$ and the inequality in (2.10) we have

$$ \begin{equation*} \begin{aligned} \, \|2y_2-A_2x_1\|_{Y_2} &\leqslant2\|y_2\|_{Y_2}+\|A_2\|\|x_1\|_X<\frac{\rho}2+\|A_2\|a\|2y_1\|_{Y_1} \\ &<\frac{\rho}2 +\|A_2\|2a\delta\leqslant \frac{\rho}2+ \frac{\rho}2=\rho. \end{aligned} \end{equation*} \notag $$
Consequently, there exists $x_2\in C\cap \operatorname{Ker}A_1$ such that $A_2x_2=2y_2-A_2x_1$.

Put $x=(1/2)x_1+(1/2)x_2$. As $x_i\in C$, $i=1,2$, and $C$ is convex, we have $x\in C$. Next, since $x_2\in \operatorname{Ker}A_1$, we have $A_1x=(1/2)A_1x_1+(1/2)A_1x_2=(1/2)A_1x_1=y_1$, and it follows from the definition of $x_2$ that $A_2x=(1/2)A_2x_1+(1/2)A_2x_2=(1/2)A_2x_1+y_2-(1/2)A_2x_1=y_2$, which means that $Ax=(A_1x,A_2x)=(y_1,y_2)\,{=}\,y$.

Thus, $U_{Y}(0,\delta)\subset AC$ and the proof of Proposition 1 is complete.
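The role of the kernel in the second condition is seen in the following elementary example (an illustration only, not used below). Let $X=\mathbb R$, $Y_1=Y_2=\mathbb R$, $A_1=A_2=\operatorname{id}$ and $C=[-1,1]$. Then $0\in\operatorname{int}A_1C$ and $0\in\operatorname{int}A_2C$, but

$$ \begin{equation*} AC=\{(x,x)\colon |x|\leqslant1\} \end{equation*} \notag $$

has empty interior in $Y=\mathbb R\times\mathbb R$, so that $0\notin\operatorname{int}AC$; accordingly, $C\cap\operatorname{Ker}A_1=\{0\}$ and $0\notin\operatorname{int}A_2(C\cap \operatorname{Ker}A_1)$, in full agreement with Proposition 1.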

The operator $A_{\overline N}(\overline\alpha,\overline u)$ defined before Lemma 2 can be represented in the form $A_{\overline N}(\overline\alpha,\overline u)=(A_{1\overline N}(\overline\alpha,\overline u), A_{2\overline N}(\overline\alpha,\overline u))$, where the linear operator $A_{1\overline N}(\overline\alpha, \overline u)\colon X\to Y_1=C([t_0,t_1],\mathbb R^n)$ acts by the formula

$$ \begin{equation*} \begin{aligned} \, &A_{1\overline N}(\overline\alpha,\overline u)[h(\,\cdot\,),\xi,\beta,\nu](t)=h(t)-\xi- \int_{t_0}^t\biggl(\sum_{j=1}^k\widehat\alpha_j(\tau) \varphi_x(\tau,\widehat x(\tau),\widehat u_j(\tau))h(\tau) \\ &\qquad + \sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^k\alpha_{ij}(\tau) \varphi(\tau,\widehat x(\tau),\widehat u_j(\tau))+\kern-2pt \sum_{j=k+1}^{N_i}\alpha_{ij}(\tau)\varphi(\tau,\widehat x(\tau),u_{i(j-k)}(\tau))\biggr)\!\biggr)\,d\tau \end{aligned} \end{equation*} \notag $$
for each $(h(\,\cdot\,),\xi,\beta,\nu)\in X$ and $t\in[t_0,t_1]$, and the linear operator $A_{2\overline N}(\overline\alpha,\overline u)\colon X\to Y_2=\mathbb R^{m_1}\times \mathbb R^{m_2}$ acts by the formula
$$ \begin{equation*} A_{2\overline N}(\overline\alpha,\overline u)[h(\,\cdot\,),\xi,\beta,\nu] =(\widehat f'[\xi,h(t_1)]+\nu,\,\widehat g'[\xi,h(t_1)]) \end{equation*} \notag $$
for each $(h(\,\cdot\,),\xi,\beta,\nu)\in X$.

Proposition 2. Let $k\in\mathbb N$. Then the following conditions are equivalent.

(a) $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$.

(b) There exist tuples $\overline N_*=(N_1,\dots, N_{m+1})$, where $N_i > k$, $\overline\alpha_* = (\overline\alpha_1(\,\cdot\,),\dots, \overline\alpha_{m+1}(\,\cdot\,))$, where $\overline\alpha_i(\,\cdot\,)=(\alpha_{i1}(\,\cdot\,), \dots,\alpha_{iN_i}(\,\cdot\,))\in \mathcal A_{N_i}-\widehat{\overline \alpha}_{N_i}(\,\cdot\,)$, and $\overline u_*=(\overline u_1(\,\cdot\,), \dots,\overline u_{m+1}(\,\cdot\,))$, where $\overline u_i(\,\cdot\,)=(u_{i1}(\,\cdot\,),\dots,u_{i(N_i-k)}(\,\cdot\,))\in \mathcal U^{N_i-k}$, $i=1,\dots,m+1$, such that

$$ \begin{equation} 0\in\operatorname{int}A_{2\overline N_*}(\overline\alpha_*,\overline u_*)(K_0\cap\operatorname{Ker}A_{1\overline N_*}(\overline\alpha_*,\overline u_*)). \end{equation} \tag{2.11} $$

Proof. (a) $\Rightarrow$ (b). Let $N>k$, let
$$ \begin{equation*} X_1=C([t_0,t_1],\mathbb R^n)\times\mathbb R^n\times (L_\infty([t_0,t_1]))^N\times\mathbb R^{m_1}, \end{equation*} \notag $$
let the space $Y$ be as before Lemma 2, and let $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots,u_{N-k}(\,\cdot\,))\in\mathcal U^{N-k}$.

Consider the map $A_N(\overline u)\colon X_1\to Y$ acting on each $(h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu)\in X_1$ and $t\in[t_0,t_1]$ by the formula

$$ \begin{equation*} \begin{aligned} \, &A_N(\overline u)[h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu](t)=\biggl(h(t)-\xi- \int_{t_0}^t\biggl(\sum_{i=1}^k\widehat\alpha_i(\tau)\varphi_x(\tau,\widehat x(\tau),\widehat u_i(\tau))h(\tau) \\ &\qquad +\sum_{i=1}^k\alpha_i(\tau)\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))+ \sum_{i=k+1}^N\alpha_i(\tau)\varphi(\tau,\widehat x(\tau),u_{i-k}(\tau))\biggr)\,d\tau, \\ &\qquad\qquad\qquad \widehat f'[\xi,h(t_1)]+\nu,\,\widehat g'[\xi,h(t_1)]\biggr), \end{aligned} \end{equation*} \notag $$
where $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_N(\,\cdot\,))$. The fact that it is a well defined continuous linear operator can be established in the same way as for the operator $A_{\overline N}(\overline\alpha,\overline u)$ (see above).

The operator $A_N(\overline u)$ can be represented in the form $A_N(\overline u)=(A_{1N}(\overline u), A_{2N}(\overline u))$, where the linear operator $A_{1N}(\overline u)\colon X_1\to C([t_0,t_1],\mathbb R^n)$ acts by the formula

$$ \begin{equation*} \begin{aligned} \, &A_{1N}(\overline u)[h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu](t)=h(t)-\xi- \int_{t_0}^t\biggl(\sum_{i=1}^k\widehat\alpha_i(\tau)\varphi_x(\tau,\widehat x(\tau),\widehat u_i(\tau))h(\tau) \\ &\qquad+\sum_{i=1}^k\alpha_i(\tau)\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))+ \sum_{i=k+1}^N\alpha_i(\tau)\varphi(\tau,\widehat x(\tau),u_{i-k}(\tau))\biggr)\,d\tau \end{aligned} \end{equation*} \notag $$
for all $(h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu)\in X_1$ and $t\in[t_0,t_1]$, and the linear operator $A_{2N}(\overline u)\colon X_1\to \mathbb R^{m_1}\times \mathbb R^{m_2}$ acts by
$$ \begin{equation*} A_{2N}(\overline u)[h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu]=(\widehat f'[\xi,h(t_1)]+\nu, \,\widehat g'[\xi,h(t_1)]) \end{equation*} \notag $$
for each $(h(\,\cdot\,),\xi,\overline \alpha(\,\cdot\,),\nu)\in X_1$.

Put

$$ \begin{equation*} K_N=C([t_0,t_1],\mathbb R^n)\times\mathbb R^n\times(\mathcal A_N-\widehat{\overline \alpha}_{N}(\,\cdot\,))\times(\mathbb R^{m_1}_++f(\widehat x(t_0),\widehat x(t_1))). \end{equation*} \notag $$
It is obvious that $K_N$ is a convex closed subset of $X_1$.

First, let us show that condition (a) yields the inclusion

$$ \begin{equation} 0\in\operatorname{int} \bigcup_{\overline u(\,\cdot\,)\in\mathcal V}A_{2N}(\overline u)(K_{N}\cap\operatorname{Ker}A_{1N}(\overline u)), \end{equation} \tag{2.12} $$
where $\mathcal V$ is the set of all tuples $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots,u_{N-k}(\,\cdot\,))\in \mathcal U^{N-k}$, $N>k$.

We prove this by contradiction. Supposing that the inclusion (2.12) fails to hold, let us show that $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))\ne\{0\}$.

We start by proving that the set in (2.12) is convex. Indeed, let $M$ be this set, let $y_i\in M$, $\beta_i>0$, $i=1,2$, and $\beta_1+\beta_2=1$. We need to show that $\beta_1y_1+\beta_2y_2\in M$.

Since $y_i\in M$, there exist $N_i>k$, $\overline u_i(\,\cdot\,)=(u_{i1}(\,\cdot\,),\dots,u_{i(N_i-k)}(\,\cdot\,))\in\mathcal U^{N_i-k}$ and $z_i=(h_i(\,\cdot\,),\xi_i,\overline\alpha_i(\,\cdot\,),\nu_i)\in K_{N_i}\cap \operatorname{Ker}A_{1N_i}(\overline u_i)$, where $\overline\alpha_i(\,\cdot\,)=(\alpha_{i1}(\,\cdot\,),\dots,\alpha_{iN_i}(\,\cdot\,))$ and (since $z_i\in\operatorname{Ker}A_{1N_i}(\overline u_i)$)

$$ \begin{equation} \begin{aligned} \, \notag \dot h_i(t) &=\sum_{j=1}^k\widehat\alpha_j(t)\varphi_x(t,\widehat x(t),\widehat u_j(t))h_i(t)+ \sum_{j=1}^k\alpha_{ij}(t)\varphi(t,\widehat x(t),\widehat u_j(t)) \\ &\qquad +\sum_{j=k+1}^{N_i}\alpha_{ij}(t)\varphi(t,\widehat x(t),u_{i(j-k)}(t)) \quad\text{for a.a. } t\in[t_0,t_1], \qquad h_i(t_0)=\xi_i, \end{aligned} \end{equation} \tag{2.13} $$
such that $y_i=A_{2N_i}(\overline u_i)[z_i]$, $i=1,2$.

Let $N=N_1+N_2-k$ and $\overline u(\,\cdot\,)=(\overline u_{1}(\,\cdot\,), \overline u_2(\,\cdot\,))$. Put $z=(h(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,),\nu)$, where $h(\,\cdot\,)=\beta_1h_1(\,\cdot\,)+\beta_2h_2(\,\cdot\,)$, $\xi=\beta_1\xi_1+\beta_2\xi_2$,

$$ \begin{equation*} \begin{aligned} \, \overline\alpha(\,\cdot\,) &=\bigl(\beta_1\alpha_{11}(\,\cdot\,)+\beta_2\alpha_{21}(\,\cdot\,), \dots, \beta_1\alpha_{1k}(\,\cdot\,)+\beta_2\alpha_{2k}(\,\cdot\,), \\ &\qquad \beta_1\alpha_{1(k+1)}(\,\cdot\,),\dots,\beta_1\alpha_{1N_1}(\,\cdot\,), \beta_2\alpha_{2(k+1)}(\,\cdot\,),\dots,\beta_2\alpha_{2N_2}(\,\cdot\,)\bigr) \end{aligned} \end{equation*} \notag $$
and $\nu=\beta_1\nu_1+\beta_2\nu_2$.

It is easily verified that $\overline\alpha(\,\cdot\,)\in \mathcal A_N-\widehat{\overline \alpha}_N(\,\cdot\,)$ and therefore $z\in K_N$.
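Indeed, $\overline\alpha_i(\,\cdot\,)+\widehat{\overline \alpha}_{N_i}(\,\cdot\,)\in\mathcal A_{N_i}$, $i=1,2$, so for almost all $t$ we have $\alpha_{ij}(t)+\widehat\alpha_j(t)\geqslant0$ for $j\leqslant k$, $\alpha_{ij}(t)\geqslant0$ for $j>k$ and $\sum_{j=1}^{N_i}\alpha_{ij}(t)=0$. Hence, since $\beta_1+\beta_2=1$, the components of $\overline\alpha(t)+\widehat{\overline \alpha}_N(t)$, namely $\beta_1(\alpha_{1j}(t)+\widehat\alpha_j(t))+\beta_2(\alpha_{2j}(t)+\widehat\alpha_j(t))$ for $j\leqslant k$ and $\beta_1\alpha_{1j}(t)$, $\beta_2\alpha_{2j}(t)$ for the remaining positions, are nonnegative, and their sum equals $1+\beta_1\sum_{j=1}^{N_1}\alpha_{1j}(t)+\beta_2\sum_{j=1}^{N_2}\alpha_{2j}(t)=1$ for almost all $t$.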

It follows from (2.13) that the function $h(\,\cdot\,)$ satisfies the relation

$$ \begin{equation*} \begin{aligned} \, \dot h(t) &=\sum_{i=1}^k\widehat\alpha_i(t)\varphi_x(t,\widehat x(t),\widehat u_i(t))h(t) \\ &\qquad+\sum_{j=1}^k(\beta_1\alpha_{1j}(t)+\beta_2\alpha_{2j}(t))\varphi(t,\widehat x(t),\widehat u_{j}(t)) \\ &\qquad+ \sum_{j=k+1}^{N_1}\beta_1\alpha_{1j}(t)\varphi(t,\widehat x(t),u_{1(j-k)}(t)) \\ &\qquad +\sum_{j=N_1+1}^{N}\beta_2\alpha_{2(j-N_1+k)}(t)\varphi(t,\widehat x(t),u_{2(j-N_1)}(t)), \qquad h(t_0)=\xi, \end{aligned} \end{equation*} \notag $$
for each $t\in [t_0, t_1]$. Hence $z\in \operatorname{Ker}A_{1N}(\overline u)$, and therefore $z\in K_N\cap\operatorname{Ker}A_{1N}(\overline u)$. Then
$$ \begin{equation*} A_{2N}(\overline u)[z]=\beta_1A_{2N_1}(\overline u_1)[z_1]+\beta_2A_{2N_2}(\overline u_2)[z_2]=\beta_1y_1+\beta_2y_2, \end{equation*} \notag $$
which means that $\beta_1y_1+\beta_2y_2\in A_{2N}(\overline u)(K_{N}\cap\operatorname{Ker}A_{1N}(\overline u))$ and thus the set $M$ is convex.

We turn back to inclusion (2.12). If it fails to hold, then either the set $M$ has a nonempty interior not containing the origin, or the interior of $M$ is empty.

Since $M$ is convex, its interior is convex too, and if it is not empty and does not contain the origin, then by the finite-dimensional separation theorem there exists a nonzero vector $\lambda\in (\mathbb R^{m_1}\times\mathbb R^{m_2})^*$ such that

$$ \begin{equation} \langle\lambda,A_{2N}(\overline u)[h(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,),\nu]\rangle\geqslant0 \end{equation} \tag{2.14} $$
for each tuple $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots,u_{N-k}(\,\cdot\,))\in \mathcal U^{N-k}$ and all $(h(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,),\nu)\in K_{N} \cap \operatorname{Ker}A_{1N}(\overline u)$. Moreover, $h(\,\cdot\,) = h(\,\cdot\,,\xi,\overline\alpha;\overline u)$ satisfies the differential equation
$$ \begin{equation} \begin{aligned} \, \notag \dot h &=\sum_{i=1}^k\widehat\alpha_i(t)\varphi_x(t,\widehat x(t),\widehat u_i(t))h+ \sum_{i=1}^k\alpha_{i}(t)\varphi(t,\widehat x(t),\widehat u_i(t)) \\ &\qquad +\sum_{i=k+1}^{N}\alpha_{i}(t)\varphi(t,\widehat x(t),u_{i-k}(t)), \qquad h(t_0)=\xi, \end{aligned} \end{equation} \tag{2.15} $$
where $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_N(\,\cdot\,))$ (since $(h(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,),\nu) \in \operatorname{Ker}A_{1N}(\overline u)$).

It is well known (see, for example, [9]) that if the interior of $M$ is empty, then the set $M$ lies in a hyperplane through the origin (since the origin does not belong to the set), and in this case inequality (2.14) holds as equality.

Put $\lambda=(\lambda_f,\lambda_{g})\in (\mathbb R^{m_1})^*\times (\mathbb R^{m_2})^*$. By the definition of the operator $A_{2N}(\overline u)$ inequality (2.14) can be written as

$$ \begin{equation} \langle\lambda_f,\,\widehat f_{\zeta_0}\xi+\widehat f_{\zeta_1}h(t_1,\xi,\overline\alpha;\overline u)+\nu\rangle+\langle\lambda_g,\,\widehat g_{\zeta_0}\xi+\widehat g_{\zeta_1}h(t_1,\xi,\overline\alpha;\overline u)\rangle\geqslant0, \end{equation} \tag{2.16} $$
where $\nu=\nu'+f(\widehat x(t_0),\widehat x(t_1))$ and $\nu'\in\mathbb R^{m_1}_+$.

Below we use (2.16) to show that $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))\ne\{0\}$.

Put $\xi=0$, $\overline\alpha(\,\cdot\,)=0$ (then $h(\,\cdot\,,\xi,\overline\alpha;\overline u)=0$ by the uniqueness of the solution to the linear equation (2.15)) and $\nu'=\nu_0-f(\widehat x(t_0),\widehat x(t_1))$ for an arbitrary $\nu_0\in\mathbb R^{m_1}_+$. Then it follows from (2.16) that the inequality $\langle\lambda_f,\nu_0\rangle\geqslant0$ holds for any $\nu_0\in\mathbb R^{m_1}_+$, which means that $\lambda_f\in(\mathbb R^{m_1})^*_+$.

Let $\xi = 0$, $\overline\alpha(\,\cdot\,) = 0$ and $\nu' = 0$. Then (2.16) yields $\langle\lambda_f,f(\widehat x(t_0), \widehat x(t_1))\rangle \geqslant 0$. However, $\lambda_f\in(\mathbb R^{m_1})^*_+$ and $f(\widehat x(t_0),\widehat x(t_1))\leqslant0$, hence $\langle\lambda_f,f(\widehat x(t_0), \widehat x(t_1))\rangle\leqslant0$, which means that $\langle\lambda_f,f(\widehat x(t_0),\widehat x(t_1))\rangle=0$, and therefore the fourth relation in (1.5) holds.

Now let $\overline\alpha(\,\cdot\,)=0$ and $\nu'=-f(\widehat x(t_0),\widehat x(t_1))$ in (2.16). Then, since $\xi\in \mathbb R^n$ and $h(\,\cdot\,)=h(\,\cdot\,,\xi,\overline\alpha;\overline u)$ depends linearly on $\xi$, inequality (2.16) is satisfied as equality:

$$ \begin{equation} \langle \lambda_f,\widehat f_{\zeta_0} \xi+\widehat f_{\zeta_1} h(t_1,\xi,0;\overline u)\rangle +\langle\lambda_g,\widehat g_{\zeta_0} \xi+\widehat g_{\zeta_1} h(t_1,\xi,0;\overline u)\rangle=0. \end{equation} \tag{2.17} $$

Let $p(\,\cdot\,)$ be the solution to the equation

$$ \begin{equation} \dot p =-p\sum_{i=1}^k\widehat\alpha_i(t)\varphi_x(t,\widehat x(t),\widehat u_i(t)),\qquad p(t_1) =-{\widehat{f}_{\zeta_1}}^*\lambda_f-{\widehat {g}_{\zeta_1}}^*\lambda_g. \end{equation} \tag{2.18} $$
From (2.17) and (2.18) we obtain
$$ \begin{equation} \begin{aligned} \, \notag &\langle\widehat f_{\zeta_0}^*\lambda_f+\widehat g_{\zeta_0}^*\lambda_g,\xi\rangle =-\langle\widehat f_{\zeta_1}^*\lambda_f+\widehat g_{\zeta_1}^*\lambda_g, h(t_1,\xi,0;\overline u)\rangle =\langle p(t_1),h(t_1,\xi,0;\overline u)\rangle \\ \notag &\qquad=\int_{t_0}^{t_1}(\langle p(t),\dot h(t,\xi,0;\overline u)\rangle+\langle\dot p(t),h(t,\xi,0;\overline u)\rangle)\,dt +\langle p(t_0),h(t_0,\xi,0;\overline u)\rangle \\ &\qquad = \langle p(t_0),h(t_0,\xi,0;\overline u)\rangle=\langle p(t_0),\xi\rangle \end{aligned} \end{equation} \tag{2.19} $$
for any $\xi\in\mathbb R^n$ (the integral vanishes because for $\overline\alpha(\,\cdot\,)=0$ equation (2.15) gives $\dot h=\sum_{i=1}^k\widehat\alpha_i(t)\varphi_x(t,\widehat x(t),\widehat u_i(t))h$, so that, in view of (2.18), $\langle p(t),\dot h(t,\xi,0;\overline u)\rangle+\langle\dot p(t),h(t,\xi,0;\overline u)\rangle=0$ for almost all $t$), and therefore
$$ \begin{equation*} p(t_0)=\widehat f_{\zeta_0}^*\lambda_f+\widehat g_{\zeta_0}^*\lambda_g. \end{equation*} \notag $$
In view of (2.18) this means that the first three relations in (1.5) are satisfied. It remains to prove the last relation, namely, the maximum condition.

In (2.16) we put $\nu'=-f(\widehat x(t_0),\widehat x(t_1))$, and let $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_N(\,\cdot\,))\in \mathcal A_N-\widehat{\overline \alpha}_N(\,\cdot\,)$. With regard to the expressions for $p(t_0)$ and $p(t_1)$, from (2.16) we obtain the inequality

$$ \begin{equation} \langle p(t_0),\xi\rangle-\langle p(t_1),h(t_1,\xi,\overline\alpha; \overline u)\rangle\geqslant0, \end{equation} \tag{2.20} $$
which holds for each $\xi\in\mathbb R^n$.

It follows from (2.20), (2.18) and (2.15) (note that $h(t_0,\xi,\overline\alpha; \overline u)=\xi$) that

$$ \begin{equation} \begin{aligned} \, \notag 0 &\leqslant \langle p(t_0),h(t_0,\xi,\overline\alpha; \overline u)\rangle-\langle p(t_1),h(t_1,\xi,\overline\alpha; \overline u)\rangle \\ \notag &=-\int_{t_0}^{t_1}\bigl (\langle p(t),\dot h(t,\xi,\overline\alpha; \overline u)\rangle+\langle \dot p(t), h(t,\xi,\overline\alpha; \overline u)\rangle\bigr)\,dt \\ &=-\int_{t_0}^{t_1}\biggl(\sum_{i=1}^k\alpha_i(t)\langle p(t), \varphi(t,\widehat x(t),\widehat u_i(t))\rangle \nonumber \\ &\qquad +\sum_{i=k+1}^N\alpha_i(t)\langle p(t),\varphi(t,\widehat x(t),u_{i-k}(t))\rangle\biggr)\,dt. \end{aligned} \end{equation} \tag{2.21} $$
For each $1\leqslant i\leqslant k$ denote $\overline\alpha_i(\,\cdot\,)=(0,\dots,0,-(1/2)\widehat\alpha_i(\,\cdot\,),0,\dots,0,(1/2)\widehat \alpha_i(\,\cdot\,), 0,\dots,0)$, with the element $-(1/2)\widehat\alpha_i(\,\cdot\,)$ at the $i$th position, $(1/2)\widehat\alpha_i(\,\cdot\,)$ at the ${(k+1)}$th position, and the last $N-k-1$ entries equal to zero. It is obvious that $\overline\alpha_i(\,\cdot\,)\in\mathcal A_N-\widehat{\overline \alpha}_N(\,\cdot\,)$.
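Indeed, adding $\widehat{\overline \alpha}_N(\,\cdot\,)$ to $\overline\alpha_i(\,\cdot\,)$ we obtain a tuple whose $i$th and $(k+1)$th components are equal to $(1/2)\widehat\alpha_i(\,\cdot\,)\geqslant0$ almost everywhere, whose remaining components coincide with the corresponding components of $\widehat{\overline \alpha}_N(\,\cdot\,)$, and whose components still sum to $1$ almost everywhere; that is, $\overline\alpha_i(\,\cdot\,)+\widehat{\overline \alpha}_N(\,\cdot\,)\in\mathcal A_N$.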

We substitute these tuples into the right-hand side of (2.21) and set $v(\,\cdot\,)=u_1(\,\cdot\,)$. Then we obtain

$$ \begin{equation} \int_{t_0}^{t_1}\widehat\alpha_{i}(t)\langle p(t),\varphi(t,\widehat x(t),v(t))\rangle\,dt\leqslant \int_{t_0}^{t_1}\widehat\alpha_i(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_i(t))\rangle\,dt \end{equation} \tag{2.22} $$
for each $1\leqslant i\leqslant k$ and all $v(\,\cdot\,)\in \mathcal U$.

We denote by $T_0$ the intersection of the sets of Lebesgue points for the functions $\widehat\alpha_i(\,\cdot\,)$ and $\langle p(\,\cdot\,),\varphi(\,\cdot\,,\widehat x(\,\cdot\,),\widehat u_i(\,\cdot\,))\rangle$, $i=1,\dots,k$, on $(t_0,t_1)$. As these functions are essentially bounded, one can easily verify that $T_0$ is also the set of Lebesgue points of the functions $\widehat\alpha_i(\,\cdot\,)\langle p(\,\cdot\,),\varphi(\,\cdot\,,\widehat x(\,\cdot\,),\widehat u_i(\,\cdot\,))\rangle$, $i=1,\dots,k$.

We denote by $T_1$ the set of points $t\in[t_0,t_1]$ such that $\widehat u_i(t)\in U$, $i=1,\dots,k$. By definition, for each $i$ this is a set of full measure; therefore, $T_1$ is of full measure too.

Let $\tau\in T_0\cap T_1$ and $v\in U$. Fix $1\leqslant i\leqslant k$. For each $h>0$ such that $[\tau-h,\tau+h]\subset (t_0,t_1)$ set $v_h(t)=v$ for $t\in [\tau-h,\tau+h]$ and $v_h(t)=\widehat u_i(t)$ if $t\in [t_0,t_1]\setminus[\tau-h,\tau+h]$. It is obvious that $v_h(\,\cdot\,)\in\mathcal U$, and inequality (2.22) yields

$$ \begin{equation*} \frac1{2h}\int_{\tau-h}^{\tau+h}\widehat\alpha_{i}(t)\langle p(t),\varphi(t,\widehat x(t),v)\rangle\,dt\leqslant \frac1{2h}\int_{\tau-h}^{\tau+h}\widehat\alpha_i(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_i(t))\rangle\,dt. \end{equation*} \notag $$
Since the function $t\mapsto\varphi(t,\widehat x(t),v)$ is continuous, $\tau$ is a Lebesgue point of this function; at the same time, according to the remark made above, $\tau$ is a Lebesgue point of the function $\widehat\alpha_i(\,\cdot\,)\langle p(\,\cdot\,),\varphi(\,\cdot\,,\widehat x(\,\cdot\,),v)\rangle$. Passing to the limit as $h\to0$ in the last inequality we obtain
$$ \begin{equation} \widehat\alpha_i(\tau)\langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle \leqslant\widehat\alpha_i(\tau)\langle p(\tau),\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))\rangle \end{equation} \tag{2.23} $$
for each $i=1,\dots,k$.

By assumption $\widehat\alpha_i(t)\geqslant0$, $i=1,\dots,k$, and $\sum_{i=1}^k\widehat\alpha_i(t)=1$ for almost all $t\in[t_0, t_1]$. We denote the set of such values of $t$ by $T_2$, and let $\tau\in T_0\cap T_1\cap T_2$. There exists an index $1\,{\leqslant}\, i\,{\leqslant}\, k$ such that $\widehat\alpha_i(\tau)>0$. Then it follows from (2.23) that the function $v\mapsto \langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle$ attains its maximum on $U$ at the point $\widehat u_i(\tau)\in U$.

Combining inequalities (2.23) for $\tau\in T_0\cap T_1\cap T_2$ we obtain the relation

$$ \begin{equation*} \begin{aligned} \, \langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle &\leqslant\sum_{i=1}^k\widehat\alpha_i(\tau)\langle p(\tau),\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))\rangle \\ &\leqslant\max_{v\in U}\langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle\sum_{i=1}^k\widehat\alpha_i(\tau) \\ &=\max_{v\in U}\langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle. \end{aligned} \end{equation*} \notag $$
Taking the maximum of the left-hand side over $v\in U$, we see that all these inequalities turn into equalities. Since the triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$ is admissible for the convex system (1.3), (1.4), the equation in (1.3) holds at Lebesgue points of the function on its right-hand side. Consequently,
$$ \begin{equation*} \begin{aligned} \, \max_{v\in U}\langle p(\tau),\varphi(\tau,\widehat x(\tau),v)\rangle &=\sum_{i=1}^k\widehat\alpha_i(\tau)\langle p(\tau),\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))\rangle \\ &=\langle p(\tau),\sum_{i=1}^k\widehat\alpha_i(\tau)\varphi(\tau,\widehat x(\tau),\widehat u_i(\tau))\rangle =\langle p(\tau),\dot{\widehat x}(\tau)\rangle. \end{aligned} \end{equation*} \notag $$
Since $T_0\cap T_1\cap T_2$ is a set of full measure, the last condition in (1.5) is satisfied.

Thus, we have demonstrated that if $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$, then inclusion (2.12) holds.

Now let us show that this inclusion implies (2.11). For brevity set $Z=\mathbb R^{m_1}\times\mathbb R^{m_2}$ and $m=m_1+m_2$. Since inclusion (2.12) holds, there exists an $m$-dimensional simplex $S\subset M$ such that $0\in\operatorname{int}S$ (see [9]), and thus $U_{Z}(0,\rho_0)\subset S$ for an appropriate $\rho_0>0$.

Let $e_1,\dots,e_{m+1}$ be the vertices of $S$. There exist $N_i>k$, $\overline u_i(\,\cdot\,)=(u_{i1}(\,\cdot\,),\dots, u_{i(N_i-k)}(\,\cdot\,))\in\mathcal U^{N_i-k}$, and $z_i=(h_i(\,\cdot\,),\xi_i,\overline\alpha_i(\,\cdot\,),\nu_i)\in K_{N_i}\cap \operatorname{Ker}A_{1N_i}(\overline u_i)$, where $\overline\alpha_i(\,\cdot\,)=(\alpha_{i1}(\,\cdot\,),\dots,\alpha_{iN_i}(\,\cdot\,))\in\mathcal A_{N_i}-\widehat{\overline \alpha}_{N_i}(\mkern1.5mu\cdot\mkern1.5mu)$ (recall that $\widehat{\overline \alpha}_{N_i}(\mkern1.5mu\cdot\mkern1.5mu)=(\widehat\alpha_1(\mkern1.5mu\cdot\mkern1.5mu),\dots,\widehat\alpha_k(\mkern1.5mu\cdot\mkern1.5mu), 0,\dots,0)\in\mathcal A_{N_i}$), $i=1,\dots,m+1$ and (as $z_i\in\operatorname{Ker}A_{1N_i}(\overline u_i)$)

$$ \begin{equation*} \begin{aligned} \, &A_{1N_i}(\overline u_i)[h_i(\,\cdot\,),\xi_i,\overline \alpha_i(\,\cdot\,),\nu_i](t) \\ &\quad=h_i(t)-\xi_i- \int_{t_0}^t\biggl(\sum_{j=1}^k\widehat\alpha_j(\tau)\varphi_x(\tau,\widehat x(\tau),\widehat u_j(\tau))h_i(\tau) \\ &\quad\qquad+\sum_{j=1}^k(\alpha_{ij}(\tau))\varphi(\tau,\widehat x(\tau),\widehat u_j(\tau))+\sum_{j=k+1}^{N_i}\alpha_{ij}(\tau)\varphi(\tau,\widehat x(\tau),u_{i(j-k)}(\tau))\biggr)\, d\tau=0 \end{aligned} \end{equation*} \notag $$
for all $t\in[t_0,t_1]$, or, equivalently,
$$ \begin{equation*} \begin{aligned} \, \dot h_i(t) &=\sum_{j=1}^k\widehat\alpha_{j}(t)\varphi_x(t,\widehat x(t),\widehat u_j(t))h_i(t) +\sum_{j=1}^k\alpha_{ij}(t)\varphi(t,\widehat x(t),\widehat u_j(t)) \\ &\qquad + \sum_{j=k+1}^{N_i}\alpha_{ij}(t)\varphi(t,\widehat x(t),u_{i(j-k)}(t)), \qquad h_i(t_0)=\xi_i, \end{aligned} \end{equation*} \notag $$
for almost all $t\in[t_0,t_1]$, such that
$$ \begin{equation*} e_i=A_{2N_i}(\overline u_i)[z_i]=(\widehat f'[\xi_i,h_i(t_1)]+\nu_i,\, \widehat g'[\xi_i,h_i(t_1)]), \end{equation*} \notag $$
where $h_i(\,\cdot\,)=h_i(\,\cdot\,,\xi_i,\overline\alpha_i;\overline u_i)$, $i=1,\dots,m+1$.

Let $y\in U_{Z}(0,\rho_0)$. Then $y=\sum_{i=1}^{m+1}\beta_ie_i$ for some $\beta_i>0$ such that $\sum_{i=1}^{m+1}\beta_i=1$ (see [9]).

Put

$$ \begin{equation*} h(\,\cdot\,)=\sum_{i=1}^{m+1}\beta_ih_i(\,\cdot\,),\qquad \xi=\sum_{i=1}^{m+1}\beta_i\xi_i\quad \text{and}\quad \nu=\sum_{i=1}^{m+1}\beta_i\nu_i. \end{equation*} \notag $$
Then $h(\,\cdot\,)$ obviously obeys the relation
$$ \begin{equation} \begin{aligned} \, \notag \dot h &=\sum_{j=1}^k\widehat\alpha_{j}(t)\varphi_x(t,\widehat x(t),\widehat u_j(t))h +\sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^k\alpha_{ij}(t)\varphi(t,\widehat x(t),\widehat u_j(t)) \\ &\qquad +\sum_{j=k+1}^{N_i}\alpha_{ij}(t)\varphi(t,\widehat x(t),u_{i(j-k)}(t))\biggr), \qquad h(t_0)=\xi, \end{aligned} \end{equation} \tag{2.24} $$
for almost all $t\in[t_0,t_1]$, which, in turn, yields the inclusion $(h(\,\cdot\,),\xi,\beta,\nu)\in K_0\cap\operatorname{Ker}A_{1\overline N_*}(\overline\alpha_*,\overline u_*)$, where $\overline N_*{=}\,(N_1,\dots, N_{m+1})$, $\overline\alpha_*=(\overline\alpha_1(\,\cdot\,),\dots,\overline\alpha_{m+1}(\,\cdot\,))$ and $\overline u_*=(\overline u_1(\,\cdot\,),\dots,\overline u_{m+1}(\,\cdot\,))$.

Next, by definition,

$$ \begin{equation*} \begin{aligned} \, A_{2\overline N_*}(\overline\alpha_*,\overline u_*)[h(\,\cdot\,),\xi,\beta,\nu] &=\sum_{i=1}^{m+1}\beta_i(\widehat f'[\xi_i,h_i(t_1)]+\nu_i,\,\widehat g'[\xi_i,h_i(t_1)]) \\ &=\sum_{i=1}^{m+1}\beta_ie_i=y, \end{aligned} \end{equation*} \notag $$
which means that $U_{Z}(0,\rho_0)\subset A_{2\overline N_*}(\overline\alpha_*,\overline u_*)(K_0\cap\operatorname{Ker}A_{1\overline N_*}(\overline\alpha_*,\overline u_*))$ and so inclusion (2.11) is valid. The proof of the implication (a) $\Rightarrow$ (b) is complete.

Let us establish the implication (b) $\Rightarrow$ (a). We prove it by contradiction. Let $k\in\mathbb N$. Assume that $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))\ne\{0\}$ and let us show that inclusion (2.11) holds for no tuples $\overline N$, $\overline\alpha$ and $\overline u$.

First we show that there exist no $ N>k$ and no tuple $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots, u_{N-k}(\,\cdot\,))\in\mathcal U^{N-k}$ such that

$$ \begin{equation} 0\in\operatorname{int}A_{2N}(\overline u)(K_{N}\cap\operatorname{Ker}A_{1N}(\overline u)), \end{equation} \tag{2.25} $$
where $K_N$ was defined at the beginning of the proof of Proposition 2.

Suppose that a nonzero triple $(\lambda_f,\lambda_g,p(\mkern1.5mu\cdot\mkern1.5mu))\!\in\! (\mathbb R^{m_1}\mkern-1.5mu)^*_+\times(\mathbb R^{m_2}\mkern-1.5mu)^*\times \operatorname{AC}([t_0,t_1],(\mathbb R^n)^*)$ belongs to $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$.

Fix a tuple $\overline u(\,\cdot\,)=(u_1(\,\cdot\,),\dots,u_{N-k}(\,\cdot\,))\in\mathcal U^{N-k}$. Let $\xi\in\mathbb R^n$, $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_N(\,\cdot\,))\in \mathcal A_N-\widehat{\overline \alpha}_N(\,\cdot\,)$, and let $h(\,\cdot\,,\xi,\overline\alpha;\overline u)$ be the solution of equation (2.15) under these conditions.

Let $T$ be the set of points $t\in[t_0,t_1]$ such that $\widehat u_i(t)\in U$, $i=1,\dots,k$, $u_i(t)\in U$, $i=1,\dots,N-k$, and $\widehat\alpha_i(t)\geqslant0$, $i=1,\dots,k$, $\sum_{i=1}^k\widehat\alpha_i(t)=1$.

By the maximum condition in (1.5) we obviously have the inequalities

$$ \begin{equation*} \langle p(t),\varphi(t,\widehat x(t),\widehat u_j(t))\rangle \leqslant\sum_{i=1}^k\widehat\alpha_i(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_i(t))\rangle \end{equation*} \notag $$
for each $t\in T$ and $j=1,\dots,k$.

We denote the right-hand side of this relation by $A(t)$ for brevity. Multiplying these inequalities by $\alpha_1(\,\cdot\,),\dots,\alpha_k(\,\cdot\,)$, respectively, combining them and taking into account the relation $\sum_{j=1}^N\alpha_j(t)=0$ for almost all $t\in[t_0,t_1]$ (which follows from the inclusion $\overline\alpha(\,\cdot\,)\in \mathcal A_N-\widehat{\overline \alpha}_N(\,\cdot\,)$), we obtain

$$ \begin{equation} \sum_{j=1}^{k}\alpha_j(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_j(t))\rangle \leqslant\sum_{j=1}^{k}\alpha_j(t)A(t)=-\sum_{j=k+1}^{N}\alpha_j(t)A(t). \end{equation} \tag{2.26} $$

The maximum condition in (1.5) also yields the inequality

$$ \begin{equation*} \langle p(t),\varphi(t,\widehat x(t),u_{j-k}(t))\rangle \leqslant\sum_{i=1}^k\widehat\alpha_i(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_i(t))\rangle \end{equation*} \notag $$
for each $t\in T$ and $j=k+1,\dots,N$.

Combining these inequalities multiplied by $\alpha_{k+1}(\,\cdot\,),\dots,\alpha_{N}(\,\cdot\,)$, respectively, gives

$$ \begin{equation*} \sum_{j=k+1}^{N}\alpha_j(t)\langle p(t),\varphi(t,\widehat x(t),u_{j-k}(t))\rangle \leqslant\sum_{j=k+1}^{N}\alpha_j(t)A(t). \end{equation*} \notag $$
Now combining this inequality with (2.26) gives us the relation
$$ \begin{equation*} \sum_{j=1}^{k}\alpha_j(t)\langle p(t),\varphi(t,\widehat x(t),\widehat u_j(t))\rangle+\sum_{j=k+1}^{N}\alpha_j(t)\langle p(t),\varphi(t,\widehat x(t),u_{j-k}(t))\rangle\leqslant0, \end{equation*} \notag $$
which holds for almost all $t\in[t_0,t_1]$. This means that the expression on the right-hand side of (2.21) is nonnegative; thus, in view of the second and third relations in (1.5) we have
$$ \begin{equation} \begin{aligned} \, \notag &\langle p(t_0),h(t_0,\xi,\overline\alpha; \overline u)\rangle-\langle p(t_1),h(t_1,\xi,\overline\alpha; \overline u)\rangle \\ &\qquad =\langle\lambda_f,\widehat f_{\zeta_0}\xi+\widehat f_{\zeta_1}h(t_1,\xi,\overline\alpha;\overline u)\rangle+\langle\lambda_g,\widehat g_{\zeta_0}\xi+\widehat g_{\zeta_1}h(t_1,\xi,\overline\alpha;\overline u)\rangle\geqslant0 \end{aligned} \end{equation} \tag{2.27} $$
for each $\xi\in\mathbb R^n$ and $\overline\alpha(\,\cdot\,)\in \mathcal A_N-\widehat{\overline\alpha}_N(\,\cdot\,)$.

Let $z=(h(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,),\nu)\in K_{N}\cap\operatorname{Ker}A_{1N}(\overline u)$. The function $h(\,\cdot\,)$ must satisfy (2.15), and hence $h(\,\cdot\,)= h(\,\cdot\,,\xi,\overline\alpha;\overline u)$ by uniqueness. Next, since $\lambda_f\in (\mathbb R^{m_1})^*_+$ and the fourth relation in (1.5) is satisfied, (2.27) implies the validity of inequality (2.16) for each $z$ as above. Since $(\lambda_f,\lambda_g)$ is a nonzero pair (otherwise $p(\,\cdot\,)$ would be the zero function), this contradicts inclusion (2.25).

Thus, we have shown that if there exist an integer $\widehat N>k$ and a tuple $\overline v(\,\cdot\,)=(v_1(\,\cdot\,),\dots,v_{\widehat N-k}(\,\cdot\,))\in\mathcal U^{\widehat N-k}$ such that (2.25) is satisfied for these $\widehat N$ and $\overline v(\,\cdot\,)$, then $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$.

Now it should be noted that if we take $\overline N_*=(\widehat N,\dots,\widehat N)$, $\overline\alpha_*=(\overline\alpha(\,\cdot\,),\dots,\overline\alpha(\,\cdot\,))$, and $\overline u_*=(\overline v(\,\cdot\,),\dots,\overline v(\,\cdot\,))$ ($m+1$ occurrences), then inclusion (2.11) for these tuples is equivalent to inclusion (2.25) for $\widehat N$ and $\overline v(\,\cdot\,)$, which proves the implication $(\mathrm b)\Rightarrow (\mathrm a)$.

The proof of Proposition 2 is complete.

Proof of Lemma 2. (1) $\Rightarrow$ (2). Since $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$, by Proposition 2 we have inclusion (2.11) for the tuples $\overline N_*$, $\overline\alpha_*$ and $\overline u_*$. Let us show that the inclusion $0\in\operatorname{int}A_{1\overline N_*}(\overline\alpha_*,\overline u_*)K_0$ is valid too.

Indeed, the action of the operator $A_{1\overline N_*}(\overline\alpha_*,\overline u_*)$ on the element $(h(\,\cdot\,),0,\beta,0)$ can be written in the form

$$ \begin{equation*} \begin{aligned} \, &A_{1\overline N_*}(\overline\alpha_*,\overline u_*)[h(\,\cdot\,),0,\beta,0](t) \\ &\qquad =h(t)-\int_{t_0}^t\biggl(\sum_{j=1}^k\widehat\alpha_j(\tau) \varphi_x(\tau,\widehat x(\tau),\widehat u_j(\tau))\biggr)h(\tau)\,d\tau -b(\beta,t) \end{aligned} \end{equation*} \notag $$
for each $t\in[t_0,t_1]$, where the function $b(\beta,\cdot\,)$ is continuous for each $\beta\in\mathbb R^{m+1}$.

It follows from the existence of a solution to the Cauchy problem for a linear equation that this map is surjective for each $\beta$. In particular, if $\beta\in \Sigma^{m+1}$, then $(h(\,\cdot\,),0,\beta,0)\in K_0$ and we obtain $0\in\operatorname{int}A_{1\overline N_*}(\overline\alpha_*,\overline u_*)K_0$.

Now by Proposition 1 for $A_1=A_{1\overline N_*}(\overline\alpha_*,\overline u_*)$, $A_2=A_{2\overline N_*}(\overline\alpha_*,\overline u_*)$, $A=A_{\overline N_*}(\overline\alpha_*,\overline u_*)=(A_{1\overline N_*}(\overline\alpha_*,\overline u_*),A_{2\overline N_*}(\overline\alpha_*,\overline u_*))$ and $C=K_0$ we obtain inclusion (2.9).

(2) $\Rightarrow$ (1). If inclusion (2.9) holds, then by Proposition 1 (for the same $A_1$, $A_2$, $A$ and $C$) we have inclusion (2.11), which yields $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,), \widehat{\overline u}(\,\cdot\,))=\{0\}$ by Proposition 2.

The proof of Lemma 2 is complete.

Let $L>0$. We denote by $Q_L$ the set of Lipschitz-continuous functions in $C([t_0,t_1],\mathbb R^n)$ with Lipschitz constant $L$. It is easily verified that $Q_L$ is a convex closed subset of $C([t_0,t_1],\mathbb R^n)$.

Recall that the sets $\mathcal A_k$, $k\in\mathbb N$, and $\mathcal U$ were introduced before the definition of the convex system (1.3), (1.4).

Lemma 3 (on approximation). Let $M$ and $\Omega$ be bounded sets in $C([t_0,t_1],\mathbb R^n)$ and $\mathbb R^n$, respectively, let $N\in\mathbb N$, $\overline v(\,\cdot\,)=(v_1(\,\cdot\,),\dots,v_{N}(\,\cdot\,))\in \mathcal U^N$ and $L>0$.

Then for any $\overline\alpha(\,\cdot\,)=(\alpha_1(\,\cdot\,),\dots,\alpha_N(\,\cdot\,))\in\mathcal A_N$ there exists a sequence of controls $u_s(\overline\alpha;\overline v)(\,\cdot\,)\in \mathcal U$,

$$ \begin{equation*} \|u_s(\overline\alpha;\overline v)(\,\cdot\,)\|_{L_\infty([t_0,t_1],\mathbb R^r)}\leqslant \max\{\|v_i(\,\cdot\,)\|_{L_\infty([t_0,t_1],\mathbb R^r)},\,1\leqslant i\leqslant N\}, \qquad s\in\mathbb N, \end{equation*} \notag $$
such that the maps $\Phi_s\colon (M\cap Q_L)\times\Omega\times \mathcal A_N\to C([t_0,t_1],\mathbb R^n)$ defined for each $t\in[t_0,t_1]$ by
$$ \begin{equation*} \Phi_s(x(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,);\overline v)(t)=x(t)-\xi-\int_{t_0}^t\varphi(\tau,x(\tau),u_s(\overline\alpha;\overline v)(\tau)) \,d\tau, \end{equation*} \notag $$
belong to the space $C((M\cap Q_L)\times\Omega\times \mathcal A_N,\,C([t_0,t_1],\mathbb R^n))$ and converge in this space as $s\to\infty$ to the map $\Phi\colon (M\cap Q_L)\times\Omega\times \mathcal A_N\to C([t_0,t_1],\mathbb R^n)$ defined for each $t\in[t_0,t_1]$ by
$$ \begin{equation*} \Phi(x(\,\cdot\,),\xi,\overline\alpha(\,\cdot\,);\overline v)(t)=x(t)-\xi-\sum_{i=1}^N\int_{t_0}^t\alpha_i(\tau)\varphi(\tau,x(\tau),v_i(\tau)) \,d\tau. \end{equation*} \notag $$

As already mentioned, this lemma is a particular case of Lemma 4.3 in the authors' paper [6].
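
Although the construction of the controls $u_s(\overline\alpha;\overline v)(\,\cdot\,)$ given in [6] is not reproduced here, the relaxation idea behind Lemma 3 can be illustrated numerically. The following Python sketch is an illustration only (for the simplest data $n=r=1$, $N=2$, $v_1\equiv-1$, $v_2\equiv1$, $\alpha_1=\alpha_2=1/2$), not the construction of [6]: it takes for $u_s$ the ordinary control that switches between $v_1$ and $v_2$ on a grid of mesh $1/s$ and compares the resulting trajectory of $\dot x=u$ with the relaxed trajectory driven by $\alpha_1 v_1+\alpha_2 v_2\equiv0$.

import numpy as np

T = 1.0
M = 20000                        # Euler steps on [0, T]
dt = T / M
t_grid = np.linspace(0.0, T, M + 1)

def u_s(t, s):
    """Ordinary control switching between v1 = -1 and v2 = +1 on a grid of mesh 1/s."""
    return -1.0 if (t * s) % 1.0 < 0.5 else 1.0

def trajectory(rhs):
    """Explicit Euler scheme for dx/dt = rhs(t), x(0) = 0."""
    x = np.zeros(M + 1)
    for k in range(M):
        x[k + 1] = x[k] + dt * rhs(t_grid[k])
    return x

x_relaxed = trajectory(lambda t: 0.5 * (-1.0) + 0.5 * 1.0)   # relaxed right-hand side, identically zero
for s in (4, 16, 64):
    x_s = trajectory(lambda t, s=s: u_s(t, s))
    print(f"s = {s:3d}:  max |x_s - x_relaxed| = {np.max(np.abs(x_s - x_relaxed)):.4f}")

The printed deviations are of order $1/(2s)$: the trajectories generated by the switching controls converge uniformly to the relaxed trajectory, which is the phenomenon formalized by Lemma 3.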

§ 3. Proof of Theorem 1 and Corollary 1

Fix $k\in\mathbb N$, assume that a triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$, where $\widehat{\overline \alpha}(\,\cdot\,)=(\widehat \alpha_1(\,\cdot\,),\dots,\widehat \alpha_k(\,\cdot\,))$ and $\widehat{\overline u}(\,\cdot\,)=(\widehat u_1(\,\cdot\,),\dots,\widehat u_k(\,\cdot\,))$, is admissible for the convex system (1.3), (1.4), and let the tuples $\overline N_*$, $\overline\alpha_*$ and $\overline u_*$ be as in Lemma 2.

We assume that the quantities $\gamma_u$, $C_0$, and $C_1$ (see (2.8)) correspond to the tuple $\overline u_*$.

Put $L=(1+\rho)C_1+(2\sqrt{m+1}+7)C_0+2$, where $\rho$ was introduced before the formulation of Theorem 1. Also recall that the set $Q_L$ was defined before Lemma 3.

Now we apply Lemma 1 to the spaces $X$ and $Y$ defined before the formulation of Lemma 2, $K=K_0$ ($K_0$ was defined at the same place),

$$ \begin{equation*} Y_1=W_\infty^1([t_0,t_1],\mathbb R^n)\times\mathbb R^{m_1}\times \mathbb R^{m_2}\quad\text{and} \quad Q=Q_L\times\mathbb R^n\times \Sigma^{m+1}\times\mathbb R^{m_1}. \end{equation*} \notag $$

It is clear that $Y_1$ is continuously embedded into $Y$ and that $K$ and $Q$ are convex closed subsets of $X$.

It follows from inclusion (2.9) in Lemma 2 that there exists an element $x_*=(h_*(\,\cdot\,),\xi_*,\beta_*,\nu_*)\in K_0$ such that $A_{\overline N_*}(\overline\alpha_*,\overline u_*)x_*=0$.

Set $\widehat z=(\widehat x(\,\cdot\,), \xi_*, \beta_*,\nu_*)$ (we write $\widehat z$ rather than $\widehat x$ to avoid confusion with the function $\widehat x(\,\cdot\,)$) and $V=U_X(\widehat z,\rho)$.

Since the triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$ obeys the differential equation in (1.3), we have

$$ \begin{equation*} |\dot {\widehat x} (t)|\leqslant\sum_{i=1}^k\widehat\alpha_i(t)|\varphi(t,\widehat x(t),\widehat u_i(t))|\leqslant C_0 \end{equation*} \notag $$
for almost all $t\in[t_0,t_1]$, and therefore $\widehat x(\,\cdot\,)$ is a Lipschitz-continuous function with Lipschitz constant $C_0$. Consequently, $\widehat x(\,\cdot\,)\in Q_L$ and it is clear that $\widehat z\in K\cap Q$.

We define a map $F\colon V\to Y$ for each $t\in [t_0,t_1]$ by

$$ \begin{equation*} \begin{aligned} \, &F[x(\,\cdot\,),\xi,\beta,\nu](t)=\biggl(x(t)-\xi- \int_{t_0}^t\biggl(\sum_{j=1}^k\widehat\alpha_j(\tau) \varphi(\tau,x(\tau),\widehat u_j(\tau)) \\ &\ \ +\sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^k\alpha_{ij}(\tau)\varphi(\tau,x(\tau),\widehat u_j(\tau))+ \sum_{j=k+1}^{N_i}\alpha_{ij}(\tau)\varphi(\tau,x(\tau),u_{i(j-k)}(\tau))\biggr)\biggr)\,d\tau, \\ &\ \ \qquad\qquad\qquad f(\xi,x(t_1))+ \nu, \, g(\xi,x(t_1))\biggr), \end{aligned} \end{equation*} \notag $$
where the tuples $\overline N_*$, $\overline\alpha_*$ and $\overline u_*$ were introduced in Lemma 2.

The first component of the image of $F$ lies in $C([t_0,t_1],\mathbb R^n)$ (for exactly the same reasons as in the case of the operator $A_{\overline N}(\overline\alpha,\overline u)$ above), and since $V$ is a bounded set and the maps $\varphi$, $f$ and $g$ are continuous, one can easily see that $F\in C(V,Y)$.

Now let us verify that the spaces, sets and the map $F$ introduced above satisfy the conditions of Lemma 1.

The integrand in the first component of $F(\widehat z)$ belongs to $L_\infty([t_0,t_1],\mathbb R^n)$ (see the corresponding argument for $A_{\overline N}(\overline\alpha,\overline u)$ above). Moreover, the sum of the first two terms and of the integral of the first sum under the integral sign vanishes, because the triple $(\widehat{x}(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$ obeys the differential equation in (1.3). Therefore the whole of the first component belongs to the space $W_\infty^1([t_0,t_1],\mathbb R^n)$, so that $F(\widehat z\,)\in Y_1$.

Since $\varphi$ is continuously differentiable with respect to $x$, the integral map in the first component of $F$ is continuously differentiable with respect to $x(\,\cdot\,)$ (see, for example, [10], § 2.4.2). Now taking into account the fact that $f$ and $g$ are continuously differentiable and $F$ is linear in $\xi$, $\beta$ and $\nu$, we finally see that the map $F$ is differentiable (in particular, at the point $\widehat z$), and thus condition (1) in Lemma 1 is satisfied. Let us verify condition (2).

Taking the form of the derivative of the integral map with respect to $x(\,\cdot\,)$ (ibid.) into account, one can easily see that $F'(\widehat z)=A_{\overline N_*}(\overline\alpha_*,\overline u_*)$. Since $K_0=K$, $A_{\overline N_*}(\overline\alpha_*,\overline u_*)x_*=0$ and $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,)) =\{0\}$ by the hypotheses of the theorem, it follows from Lemma 2 that

$$ \begin{equation*} 0\in \operatorname{int}F'(\widehat z)(K-x_*)=\operatorname{int}F'(\widehat z)(K-\widehat z), \end{equation*} \notag $$
which means that condition (2) in Lemma 1 is satisfied.

Now we proceed to the verification of condition (3) in this lemma. If $(h(\,\cdot\,),\xi, \beta,\nu)\in B_X(0,1)\cap(F'(\widehat z))^{-1}(B_{Y_1}(0,1))$, then there exists a triple $(y(\,\cdot\,),w_1,w_2)\in B_{Y_1}(0,1)$ that satisfies $F'(\widehat z)[h(\,\cdot\,),\xi,\beta,\nu]=(y(\,\cdot\,),w_1,w_2)$.

In the definition of $A_{\overline N_*}(\overline\alpha_*,\overline u_*)$ set

$$ \begin{equation*} S(\,\cdot\,)=\sum_{j=1}^k\widehat\alpha_j(\,\cdot\,)\varphi_x(\,\cdot\,,\widehat x(\,\cdot\,),\widehat u_j(\,\cdot\,)), \end{equation*} \notag $$
and let $P_i(\,\cdot\,)$ denote the expression that multiplies $\beta_i$ in the second sum under the integral sign, $i=1,\dots,m+1$. Then the above relation takes the form
$$ \begin{equation*} \begin{aligned} \, &\biggl(h(t)-\xi- \int_{t_0}^t\biggl(S(\tau)h(\tau) +\sum_{i=1}^{m+1}\beta_iP_i(\tau)\biggr)\,d\tau, \ \widehat f'[\xi,h(t_1)]+\nu, \ \widehat g'[\xi,h(t_1)]\biggr) \\ &\qquad=(y(t),w_1,w_2) \quad \forall\,t\in[t_0,t_1]. \end{aligned} \end{equation*} \notag $$

Since $y(\,\cdot\,)\in \operatorname{AC}([t_0,t_1],\mathbb R^n)$, we have $h(\,\cdot\,)\in \operatorname{AC}([t_0,t_1],\mathbb R^n)$, and therefore for almost all $t\in[t_0,t_1]$ we obtain

$$ \begin{equation*} \dot h(t)=S(t)h(t) +\sum_{i=1}^{m+1}\beta_iP_i(t)+\dot y(t). \end{equation*} \notag $$
Since $\|h(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}\leqslant1$, $|\beta|\leqslant1$, and $\|y(\,\cdot\,)\|_{W_\infty^1([t_0,t_1],\mathbb R^n)}\leqslant1$, this yields (in view of (2.8) and the fact that $(\alpha_{i1}(\,\cdot\,),\dots,\alpha _{iN_i}(\,\cdot\,))\in\mathcal A_{N_i}-\widehat{\overline \alpha}_{N_i}(\,\cdot\,)$, $i=1,\dots,m+1$) the estimate $|\dot h(t)|\leqslant C_1+2C_0\sqrt{m+1}+1$ for almost all $t\in[t_0,t_1]$.

Thus, the set of such functions $h(\,\cdot\,)$ is bounded in $C([t_0,t_1],\mathbb R^n)$, and these functions are Lipschitz continuous with the common Lipschitz constant $C_1+2C_0\sqrt{m+1}+1$ and are therefore equicontinuous. Consequently, by the Arzelà–Ascoli theorem the set of these functions is relatively compact in $C([t_0,t_1],\mathbb R^n)$.

Further, since $|\xi|+|\beta|+|\nu|\leqslant1$, the set of such triples $(\xi,\beta,\nu)$ is bounded in $\mathbb R^n\times \mathbb R^{m+1}\times\mathbb R^{m_1}$; thus, the set $B_X(0,1)\cap(F'(\widehat z))^{-1}(B_{Y_1}(0,1))$ is relatively compact in $X$. Condition (3) in Lemma 1 is satisfied.

It remains to verify condition (4). For any $z=(x(\,\cdot\,), \xi, \beta,\nu)\in V$ and $t\in[t_0,t_1]$ we have

$$ \begin{equation*} \begin{aligned} \, &F[z](t)-F'(\widehat z)[z](t) \\ &\qquad=\biggl(-\int_{t_0}^{t}\biggl(\sum_{j=1}^{k}\widehat\alpha_j(\tau)\bigl(\varphi(\tau,x(\tau),\widehat u_j(\tau)) -\varphi_x(\tau,\widehat x(\tau),\widehat u_j(\tau))x(\tau)\bigr) \\ &\qquad +\sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^k\alpha_{ij}(\tau)(\varphi(\tau,x(\tau),\widehat u_j(\tau))- \varphi(\tau,\widehat x(\tau),\widehat u_j(\tau))) \\ &\qquad +\sum_{j=k+1}^{N_i}\alpha_{ij}(\tau)(\varphi(\tau,x(\tau),u_{i(j-k)}(\tau))- \varphi(\tau,\widehat x(\tau),u_{i(j-k)}(\tau)))\biggr)\biggr)\,d\tau, \\ &\qquad\qquad\qquad f(\xi,x(t_1))-\widehat f'[\xi,x(t_1)], \, g(\xi,x(t_1))-\widehat g'[\xi,x(t_1)]\biggr). \end{aligned} \end{equation*} \notag $$

As above, this means that the integrand in the first component of the image of the difference $F[z](\,\cdot\,)-F'(\widehat z)[z](\,\cdot\,)$ belongs to $L_\infty([t_0,t_1],\mathbb R^n)$, and therefore the whole of the first component belongs to $W_\infty^1([t_0,t_1],\mathbb R^n)$. Then the whole difference clearly lies in $Y_1$. Since $V$ is bounded and the maps $f$ and $g$ are continuous, this difference belongs to $C(V,Y_1)$, which means that condition (4) in Lemma 1 is satisfied.

Thus, all the hypotheses of Lemma 1 are satisfied. Take some constants ${0<\delta_1\leqslant1}$, $c>0$ and the neighbourhood $W$ mentioned in that lemma, and let $V_0$ be an arbitrary neighbourhood of the function $\widehat x(\,\cdot\,)$ in $C([t_0,t_1],\mathbb R^n)$. There exist $r_0>0$ and $0<r<\min(1,r_0/c)$ such that $U_{C([t_0,t_1],\mathbb R^n)}(\widehat x(\,\cdot\,),r_0)\subset V_0$ and $U_{C(V\cap K\cap Q,Y)}(F,r)\subset W$.

Now we note that the map $F\colon V\to Y$ defined above can be written in the form

$$ \begin{equation*} \begin{aligned} \, F[x(\,\cdot\,),\xi,\beta,\nu](t) &=\biggl(x(t)-\xi- \sum_{i=1}^N\int_{t_0}^t\alpha_{i\beta}(\tau)\varphi(\tau,x(\tau),v_i(\tau)) \,d\tau, \\ &\qquad f(\xi,x(t_1))+\nu,\,g(\xi,x(t_1))\biggr), \end{aligned} \end{equation*} \notag $$
where $N=\sum_{i=1}^{m+1}N_i-mk$,
$$ \begin{equation*} (v_1(\,\cdot\,),\dots,v_N(\,\cdot\,))=(\widehat u_1(\,\cdot\,),\dots,\widehat u_k(\,\cdot\,),\overline u_1(\,\cdot\,),\dots,\overline u_{m+1}(\,\cdot\,)) \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, &\overline\alpha_\beta(\,\cdot\,) =(\alpha_{1\beta}(\,\cdot\,),\dots,\alpha_{N\beta}(\,\cdot\,))=\biggl(\sum_{i=1}^{m+1}\beta_i\alpha_{i1}(\,\cdot\,) +\widehat\alpha_1(\,\cdot\,),\dots, \sum_{i=1}^{m+1}\beta_i\alpha_{ik}(\,\cdot\,)+\widehat\alpha_k(\,\cdot\,), \\ &\ \beta_1\alpha_{1(k+1)}(\,\cdot\,),\dots,\beta_1\alpha_{1N_1}(\,\cdot\,), \dots, \beta_{m+1}\alpha_{(m+1)(k+1)}(\,\cdot\,),\dots,\beta_{m+1}\alpha_{(m+1)N_{m+1}}(\,\cdot\,)\biggr). \end{aligned} \end{equation*} \notag $$
Since $(\alpha_{i1}(\,\cdot\,),\dots,\alpha_{iN_i}(\,\cdot\,))\in \mathcal A_{N_i}-\widehat{\overline\alpha}_{N_i}(\,\cdot\,)$, $i=1,\dots,m+1$, it is easily verified that $\overline\alpha_\beta(\,\cdot\,)\in \mathcal A_{N}$ for each $\beta\in \Sigma^{m+1}$.
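Indeed, under the natural reading of the notation fixed earlier in the paper (the tuples in $\mathcal A_{N}$ consist of nonnegative measurable functions summing to $1$, the tuple $\widehat{\overline\alpha}_{N_i}(\,\cdot\,)$ is $\widehat{\overline\alpha}(\,\cdot\,)$ extended by zero components, and every $\beta\in\Sigma^{m+1}$ satisfies $\beta_i\geqslant0$ and $\sum_{i=1}^{m+1}\beta_i\leqslant1$) this can be seen as follows. Write $(\alpha_{i1}(\,\cdot\,),\dots,\alpha_{iN_i}(\,\cdot\,))=\overline a_i(\,\cdot\,)-\widehat{\overline\alpha}_{N_i}(\,\cdot\,)$, where $\overline a_i(\,\cdot\,)=(a_{i1}(\,\cdot\,),\dots,a_{iN_i}(\,\cdot\,))\in\mathcal A_{N_i}$, $i=1,\dots,m+1$. Then for almost all $t\in[t_0,t_1]$

$$ \begin{equation*} \sum_{j=1}^{N}\alpha_{j\beta}(t) =\sum_{j=1}^{k}\widehat\alpha_j(t) +\sum_{i=1}^{m+1}\beta_i\sum_{j=1}^{N_i}\alpha_{ij}(t) =1+\sum_{i=1}^{m+1}\beta_i\biggl(\sum_{j=1}^{N_i}a_{ij}(t)-1\biggr)=1, \end{equation*} \notag $$

while for $j=1,\dots,k$ we have $\alpha_{j\beta}(t)=\sum_{i=1}^{m+1}\beta_i a_{ij}(t)+\bigl(1-\sum_{i=1}^{m+1}\beta_i\bigr)\widehat\alpha_j(t)\geqslant0$, and for $j>k$ the corresponding components are of the form $\beta_i a_{ij}(t)\geqslant0$; hence $\overline\alpha_\beta(\,\cdot\,)\in\mathcal A_{N}$.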

For each $s\in\mathbb N$ consider the map $F_s\colon V\cap K\cap Q\to Y$ defined by

$$ \begin{equation*} \begin{aligned} \, F_s(x(\,\cdot\,),\xi,\beta,\nu)(t) &=\biggl(x(t)-\xi-\int_{t_0}^t\varphi(\tau,x(\tau), u_s(\beta;\overline v)(\tau))\,d\tau, \\ &\qquad f(\xi,x(t_1))+\nu,\, g(\xi,x(t_1))\biggr) \end{aligned} \end{equation*} \notag $$
for $t\in[t_0,t_1]$, where the functions $u_s(\beta;\overline v)(\,\cdot\,)=u_s(\overline\alpha_\beta;\overline v)(\,\cdot\,)$ are given by Lemma 3 applied with $M$ the projection of $V$ onto $C([t_0,t_1],\mathbb R^n)$, $\Omega$ the projection of $V$ onto $\mathbb R^n$, $N=\sum_{i=1}^{m+1}N_i-mk$ and the tuple $\overline v(\,\cdot\,)=(v_1(\,\cdot\,),\dots,v_N(\,\cdot\,))$ defined above. The dependence of $F_s$ on the tuple $\overline v(\,\cdot\,)$ is suppressed in the notation.

By Lemma 3, for each $s\in\mathbb N$ the map $F_s$ belongs to $C(V\cap K\cap Q,Y)$ and there exists an integer $s_0$ such that

$$ \begin{equation} \begin{aligned} \, \notag &|F_{s_0}(x(\,\cdot\,),\xi,\beta,\nu)(t)-F(x(\,\cdot\,),\xi,\beta,\nu)(t)| \\ &\qquad =|(\Phi_{s_0}(x(\,\cdot\,),\xi,\overline\alpha_\beta(\,\cdot\,))(t)-\Phi(x(\,\cdot\,), \xi,\overline\alpha_\beta(\,\cdot\,))(t), 0, 0,0)|<\frac{r}2 \end{aligned} \end{equation} \tag{3.1} $$
for all $(x(\,\cdot\,),\xi,\beta,\nu)\in V\cap K\cap Q$ and $t\in[t_0,t_1]$.

Suppose that the neighbourhood $W_0$ in Theorem 1 is an open ball centred at the origin and of radius $r/(2\max(1,t_1-t_0))$, and let $(\widetilde\varphi-\varphi,\widetilde f-f, \widetilde g-g)\in W_0$. Consider the map $\widetilde F\colon V\cap K \cap Q\to Y$ defined by

$$ \begin{equation*} \begin{aligned} \, \widetilde F(x(\,\cdot\,),\xi,\beta,\nu)(t) &=\biggl(x(t)-\xi-\int_{t_0}^t\widetilde\varphi(\tau,x(\tau),u_{s_0}(\beta;\overline v)(\tau))\,d\tau, \\ &\qquad \widetilde f(\xi,x(t_1))+\nu,\, \widetilde g(\xi,x(t_1))\biggr) \end{aligned} \end{equation*} \notag $$
for $t\in[t_0,t_1]$. Then, taking into account that $V=U_X(\widehat z,\rho)$ and $u_{s_0}(\beta;\overline v)(t)\in U$ for almost every $t\in[t_0,t_1]$, for all $(x(\,\cdot\,),\xi,\beta,\nu)\in V\cap K \cap Q$ and $t\in[t_0,t_1]$ we obtain the estimate
$$ \begin{equation*} \begin{aligned} \, &|\widetilde F(x(\,\cdot\,),\xi,\beta,\nu)(t)-F_{s_0}(x(\,\cdot\,),\xi,\beta,\nu)(t)| \\ &\qquad \leqslant \int_{t_0}^{t}|\widetilde\varphi(\tau,x(\tau),u_{s_0}(\beta;\overline v)(\tau))-\varphi(\tau,x(\tau),u_{s_0}(\beta;\overline v)(\tau))|\,d\tau \\ &\qquad\qquad +|\widetilde f(\xi,x(t_1))-f(\xi,x(t_1))|+|\widetilde g(\xi,x(t_1))-g(\xi,x(t_1))| \\ &\qquad \leqslant (t_1-t_0)\|\widetilde\varphi-\varphi\|_{C(\Delta(\rho),\mathbb R^n)}+\|\widetilde f-f\|_{C(B(\rho),\mathbb R^{m_1})}+\|\widetilde g-g\|_{C(B(\rho),\mathbb R^{m_2})}<\frac{r}2. \end{aligned} \end{equation*} \notag $$
Combining it with (3.1) yields $\widetilde F\in U_{C(V\cap K\cap Q,\,Y)}(F,r)\subset W$.

For reasons mentioned many times already it is clear that the difference $\widetilde F-F$ belongs to $C(V\cap K\cap Q,\,Y_1)$.

We show that the inclusion (2.1) holds. Suppose that $(x(\,\cdot\,), \xi,\beta,\nu)$ belongs to the set on its left-hand side. Then $(x(\,\cdot\,), \xi,\beta,\nu)\in B_X(0,1)$ and there exist elements $(x'(\,\cdot\,), \xi',\beta',\nu')\in V\cap K\cap Q$ and $(y(\,\cdot\,),w_1,w_2)\in U_{Y_1}(F(\widehat z),\delta_1)$ such that (recall that $\widehat z=(\widehat x(\,\cdot\,),\xi_*, \beta_*,\nu_*)$)

$$ \begin{equation} \begin{aligned} \, \notag F'(\widehat z)[x(\,\cdot\,), \xi,\beta,\nu] &=F'(\widehat z)[x'(\,\cdot\,)-\widehat x(\,\cdot\,), \,\xi'-\xi_*,\,\beta'-\beta_*,\,\nu'-\nu_*] \\ &\qquad+(y(\,\cdot\,),w_1,w_2)-\widetilde F(x'(\,\cdot\,),\xi',\beta',\nu'). \end{aligned} \end{equation} \tag{3.2} $$

As already noted, $F'(\widehat z)=A_{\overline N_*}(\overline\alpha_*,\overline u_*)$. Let the functions $S(\,\cdot\,)$ and $P_i(\,\cdot\,)$, $i=1,\dots,m+1$, be the same as in the proof of condition (3) in Lemma 1. Then relation (3.2) can be written as

$$ \begin{equation*} \begin{aligned} \, &\biggl(x(t)-\xi- \int_{t_0}^t\biggl(S(\tau)x(\tau) +\sum_{i=1}^{m+1}\beta_iP_i(\tau)\biggr)\,d\tau,\ \widehat f'[\xi,x(t_1)]+\nu,\ \widehat g'[\xi,x(t_1)]\biggr) \\ &\quad =\biggl(-\widehat x(t)+\xi_*- \int_{t_0}^t\biggl(S(\tau)(x'(\tau)-\widehat x(\tau)) +\sum_{i=1}^{m+1}(\beta'_i-\beta_{i*})P_i(\tau) \biggr)\,d\tau+y(t) \\ &\quad\qquad +\int_{t_0}^t\widetilde\varphi(\tau,x'(\tau),u_{s_0}(\beta';\overline v)(\tau))\,d\tau,\ \widehat f'[\xi'-\xi_*,\,x'(t_1)-\widehat x(t_1)]-\nu_*+w_1-\widetilde f(\xi',x'(t_1)), \\ &\quad\qquad\qquad \widehat g'[\xi'-\xi_*,\,x'(t_1)-\widehat x(t_1)]+w_2-\widetilde g(\xi',x'(t_1))\biggr) \quad \forall\,t\in[t_0,t_1], \end{aligned} \end{equation*} \notag $$
where $\beta_*=(\beta_{1*},\dots,\beta_{(m+1)*})$.

Now we use the same argument as in the proof of condition $(3)$. Since $y(\,\cdot\,)\in \operatorname{AC}([t_0,t_1],\mathbb R^n)$, we have $x(\mkern1.5mu\cdot\mkern1.5mu)\!\in\! \operatorname{AC}([t_0,t_1],\mathbb R^n)$, and therefore for almost all ${t\!\in\![t_0,t_1]}$

$$ \begin{equation} \begin{aligned} \, \notag \dot x(t) &=S(t)x(t) +\sum_{i=1}^{m+1}\beta_iP_i(t)-\dot{\widehat x}(t)-S(t)(x'(t)-\widehat x(t)) \\ &\qquad-\sum_{i=1}^{m+1}(\beta'_i-\beta_{i*})P_i(t) +\dot y(t)+\widetilde\varphi(t,x'(t),u_{s_0}(\beta';\overline v)(t)). \end{aligned} \end{equation} \tag{3.3} $$
Since
$$ \begin{equation*} \begin{gathered} \, \|x(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}\leqslant1,\quad |\beta|\leqslant1,\quad \|y(\,\cdot\,)\|_{W_\infty^1([t_0,t_1],\mathbb R^n)}<\delta_1\leqslant1, \\ \|x'(\,\cdot\,)-\widehat x(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}<\rho \end{gathered} \end{equation*} \notag $$
and $\beta',\beta_*\in\Sigma^{m+1}$, for almost all $t\in[t_0,t_1]$ the sum of the terms on the right-hand side of (3.3), except for the last term, is estimated by
$$ \begin{equation*} C_1+2C_0\sqrt{m+1}+C_0+C_1\rho+4C_0+1=(1+\rho)C_1+(2\sqrt{m+1}+5)C_0+1. \end{equation*} \notag $$

Subtracting and adding $\varphi(t,x'(t),u_{s_0}(\beta';\overline v)(t))$ to the last term on the right in (3.3) we can see that it does not exceed $1+C_0$ for almost all $t\in[t_0,t_1]$. Thus, for almost all $t\in[t_0,t_1]$ we have

$$ \begin{equation*} |\dot x(t)|\leqslant(1+\rho)C_1+(2\sqrt{m+1}+6)C_0+2=L-C_0. \end{equation*} \notag $$

We verify that $(x(\,\cdot\,), \xi,\beta,\nu)\in Q-\widehat z$. It clearly suffices to check that the function $x(\,\cdot\,)$ belongs to $Q_L-\widehat x(\,\cdot\,)$. This function is Lipschitz continuous with constant $L-C_0$, while $\widehat x(\,\cdot\,)$ is Lipschitz continuous with constant $C_0$; therefore $x(\,\cdot\,)+\widehat x(\,\cdot\,)\in Q_L$, that is, $x(\,\cdot\,)\in Q_L-\widehat x(\,\cdot\,)$, and inclusion (2.1) is established.

Thus, all the requirements imposed on the map $\widetilde F$ in Lemma 1 are fulfilled, and therefore there exists a map $\psi_{\widetilde F}\colon U_{Y_1}(F(\widehat z),\delta_1)\to V\cap K\cap Q$ satisfying relations (2.2).

Since $F(\widehat z)=(0,f(\widehat x(t_0),\widehat x(t_1)),0)$, there obviously exists an open ball $\mathcal O$ in $\mathbb R^{m_1}\times\mathbb R^{m_2}$ centred at the origin and of radius $\varepsilon$, where $0<\varepsilon<r_0/c-r$, such that $(0,f(\widehat x(t_0),\widehat x(t_1))+y_1,y_2)\in U_{Y_1}(F(\widehat z),\delta_1)$ for any pair $(y_1,y_2)\in\mathcal O$.

Let $(y_1,y_2)\in\mathcal O$ and $y=(0,f(\widehat x(t_0),\widehat x(t_1))+y_1,y_2)$. Put $\psi_{\widetilde F}(y)=(\widetilde x(\,\cdot\,),\widetilde\xi,\widetilde\beta,\widetilde\nu)$, where $\widetilde\nu=\widetilde \nu_1+f(\widehat x(t_0),\widehat x(t_1))$ and $\widetilde\nu_1\geqslant0$. Then equality in (2.2) takes the form

$$ \begin{equation*} \begin{aligned} \, &\widetilde F(\psi_{\widetilde F}(y))(t) =\widetilde F(\widetilde x(\,\cdot\,),\widetilde\xi,\widetilde\beta,\widetilde\nu)(t) \\ &\qquad=\biggl(\widetilde x(t)-\widetilde \xi-\int_{t_0}^t\widetilde\varphi(\tau,\widetilde x(\tau), u_{s_0}(\widetilde\beta;\overline v)(\tau))\,d\tau, \widetilde f(\widetilde \xi,\widetilde x(t_1))+\widetilde\nu,\, \widetilde g(\widetilde \xi,\widetilde x(t_1))\biggr) \\ &\qquad=(0,f(\widehat x(t_0),\widehat x(t_1))+y_1, y_2) \end{aligned} \end{equation*} \notag $$
for each $t\in [t_0,t_1]$.

Since the first components coincide, the pair $(\widetilde x(\,\cdot\,), \widetilde u(\,\cdot\,))$, where $\widetilde u(\,\cdot\,)=u_{s_0}(\widetilde\beta;\overline v)(\,\cdot\,)$, satisfies condition (1.1) for the control system $(\widetilde\varphi,\widetilde f,\widetilde g)$, and at the same time $\widetilde \xi=\widetilde x(t_0)$. Since the second and third components also coincide, we obtain the inequality $\widetilde f(\widetilde x(t_0),\widetilde x(t_1))\leqslant y_1$ and the equality $\widetilde g(\widetilde x(t_0),\widetilde x(t_1))=y_2$.

Further, since

$$ \begin{equation*} \|y-F(\widehat z)\|_Y=\|y-F(\widehat z)\|_{Y_1}=|y_1|+|y_2|<\varepsilon<\frac{r_0}{c}-r, \end{equation*} \notag $$
it follows from the estimate $\|\widetilde F-F\|_{C(V\cap K\cap Q,\,Y)}<r$ in view of the inequality in (2.2) that
$$ \begin{equation*} \|\widetilde x(\,\cdot\,)-\widehat x(\,\cdot\,)\|_{C([t_0,t_1],\mathbb R^n)}< r_0, \end{equation*} \notag $$
which means that $\widetilde x(\,\cdot\,)\in V_0$.

Thus, each pair $(y_1,y_2)$ in a neighbourhood of the origin in $\mathbb R^{m_1}\times\mathbb R^{m_2}$ belongs to the set of attainability for the control system $(\widetilde\varphi,\widetilde f,\widetilde g)$ with respect to the neighbourhood $V_0$ of the point $\widehat x(\,\cdot\,)$. This means by definition that the system $(\widetilde\varphi,\widetilde f,\widetilde g)$ is controllable with respect to the function $\widehat x(\,\cdot\,)$ and its neighbourhood $V_0$. The proof of Theorem 1 is complete.

Let us prove Corollary 1. We only need to modify the end of the proof of Theorem 1 slightly. Let $s_1\in\mathbb N$ be chosen so as to satisfy inequality (3.1) with $r$ in place of $r/2$. Put $\widetilde F=F_{s_1}$. Then exactly the same argument shows that in this case the equality in (2.2) takes the form

$$ \begin{equation*} \begin{aligned} \, &\widetilde F(\psi_{\widetilde F}(y))(t)=\widetilde F(\widetilde x(\,\cdot\,),\widetilde\xi,\widetilde\beta,\widetilde\nu)(t) \\ &\qquad=\biggl(\widetilde x(t)-\widetilde \xi-\int_{t_0}^t\varphi(\tau,\widetilde x(\tau), u_{s_1}(\widetilde\beta;\overline v)(\tau))\,d\tau,\, f(\widetilde \xi,\widetilde x(t_1))+\widetilde\nu,\, g(\widetilde \xi,\widetilde x(t_1))\biggr) \\ &\qquad=(0,f(\widehat x(t_0),\widehat x(t_1))+y_1, y_2) \end{aligned} \end{equation*} \notag $$
for each $t\in [t_0,t_1]$.

As the first components coincide, the pair $(\widetilde x(\,\cdot\,), \widetilde u(\,\cdot\,))$, where $\widetilde u(\,\cdot\,)=u_{s_1}(\widetilde\beta;\overline v)(\,\cdot\,)$, satisfies condition (1.1) for the original control system $(\varphi, f, g)$, and at the same time $\widetilde \xi=\widetilde x(t_0)$. Since the second and third components also coincide, we obtain the inequality $f(\widetilde x(t_0),\widetilde x(t_1))\leqslant y_1$ and the equality $g(\widetilde x(t_0),\widetilde x(t_1))=y_2$.

Now, as before, we obtain $\widetilde x(\,\cdot\,)\in V_0$. Thus, each pair $(y_1,y_2)$ in some neighbourhood of zero in $\mathbb R^{m_1}\times\mathbb R^{m_2}$ belongs to the set of attainability for the control system $(\varphi, f, g)$ with respect to an arbitrary neighbourhood $V_0$ of the point $\widehat x(\,\cdot\,)$. This means by definition that the system $(\varphi,f, g)$ is locally controllable with respect to the function $\widehat x(\,\cdot\,)$. The proof of Corollary 1 is complete.

§ 4. Example

Consider the following control system:

$$ \begin{equation} \dot x_1=u, \qquad \dot x_2=x_1, \qquad u(t)\in\{-1,1\} \quad \text{for a.a. } t\in[0,1], \end{equation} \tag{4.1} $$
$$ \begin{equation} x_1(0)=x_2(0)=0, \qquad x_1(1)=x_2(1)=0. \end{equation} \tag{4.2} $$
We show that it is locally controllable with respect to the function $\widehat x(\,\cdot\,)=(\widehat x_1(\,\cdot\,), \widehat x_2(\,\cdot\,))=(0,0)$, which is obviously not admissible for this system.

We use Corollary 1. Let $k = 2$. The triple $(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))$, where $\widehat{\overline \alpha}(\,\cdot\,)=(1/2, 1/2)$ and $\widehat{\overline u}(\,\cdot\,)=(-1,1)$, is admissible for the corresponding convex system, since

$$ \begin{equation*} \begin{aligned} \, \dot{\widehat x}_1(t)&=0=\frac12\cdot(-1)+\frac12\cdot1, \\ \dot{\widehat x}_2(t)&=0=\frac12\cdot0+\frac12\cdot0 \end{aligned} \end{equation*} \notag $$
and $\widehat x(0)=\widehat x(1)=0$.

Let us show that $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline \alpha}(\,\cdot\,), \widehat{\overline u}(\,\cdot\,))=\{0\}$. In our case this is equivalent to the fact that the system of relations

$$ \begin{equation} \dot p_1(t)=-p_2(t), \qquad \dot p_2(t)=0, \end{equation} \tag{4.3} $$
$$ \begin{equation} \max_{u\in\{-1,1\}}p_1(t)u=\langle p(t),\dot{\widehat x}(t)\rangle=0 \quad\text{for a.a. } t\in[0,1] \end{equation} \tag{4.4} $$
with absolutely continuous vector functions $p(\,\cdot\,)=(p_1(\,\cdot\,),p_2(\,\cdot\,))$ has only the trivial solution.

Indeed, it follows from (4.4) that $p_1(\,\cdot\,)=0$; then (4.3) shows that $p_2=0$ too. Thus, $\Lambda(\widehat x(\,\cdot\,),\widehat{\overline\alpha}(\,\cdot\,),\widehat{\overline u}(\,\cdot\,))=\{0\}$, and so, by Corollary 1, the system (4.1), (4.2) is locally controllable with respect to the function $\widehat x(\,\cdot\,)=0$.
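
As a numerical illustration of this conclusion (an illustration only, not part of the proof), one can check that rapidly switching bang-bang controls generate trajectories of (4.1) which start at the origin, stay uniformly close to $\widehat x(\,\cdot\,)=0$ and return to the origin at $t=1$, so that the boundary conditions (4.2) are met. The Python sketch below uses an ad hoc switching pattern: on consecutive pairs of intervals of length $1/(2N)$ the control takes the values $(+1,-1)$ and $(-1,+1)$ alternately, so that $x_1$ vanishes at the end of every pair and the drift of $x_2$ cancels when $N$ is even; all names in the code are illustrative.

import numpy as np

def bang_bang_trajectory(N, steps_per_interval=1000):
    """Explicit Euler scheme for (4.1) on [0, 1] with 2*N switching intervals (N even)."""
    M = 2 * N * steps_per_interval
    dt = 1.0 / M
    x1 = np.zeros(M + 1)
    x2 = np.zeros(M + 1)
    for k in range(M):
        pair, pos = divmod(k // steps_per_interval, 2)   # interval pair and position in it
        sigma = 1.0 if pair % 2 == 0 else -1.0
        u = sigma if pos == 0 else -sigma                # pattern (+1, -1), (-1, +1), ...
        x1[k + 1] = x1[k] + dt * u                       # dx1/dt = u
        x2[k + 1] = x2[k] + dt * x1[k]                   # dx2/dt = x1
    return x1, x2

for N in (4, 16, 64):                                    # N must be even
    x1, x2 = bang_bang_trajectory(N)
    dev = max(np.max(np.abs(x1)), np.max(np.abs(x2)))
    print(f"N = {N:3d}:  sup-deviation from 0 = {dev:.5f},  "
          f"endpoint = ({x1[-1]:.1e}, {x2[-1]:.1e})")

The printed sup-deviation is of order $1/(2N)$, while the endpoint values vanish up to rounding errors; thus admissible trajectories of (4.1), (4.2) pass arbitrarily close to $\widehat x(\,\cdot\,)=0$, in agreement with the local controllability established above.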

Now consider the following perturbation of the system (4.1), (4.2):

$$ \begin{equation} \begin{gathered} \, \dot x_1=u+f_1(t,x,u), \quad \dot x_2=x_1+f_2(t,x,u), \quad u(t)\in\{-1,1\}\quad \text{for a.a. }t\in[0,1], \\ x_1(0)=x_2(0)=0, \qquad x_1(1)=x_2(1)=0, \end{gathered} \end{equation} \tag{4.5} $$
where the $f_i\colon \mathbb R\times \mathbb R^2\times \mathbb R\to\mathbb R$, $i=1,2$, are continuous functions.

Let $\rho>0$, and let $\Delta(\rho)$ be the set defined before Theorem 1, with $n=2$. Then by Theorem 1, for any neighbourhood $V_0$ of the point $\widehat x(\,\cdot\,)=(\widehat x_1(\,\cdot\,),\widehat x_2(\,\cdot\,))=(0,0)$ there exists $\varepsilon>0$ such that if $\|f_i\|_{C(\Delta(\rho),\mathbb R)}<\varepsilon$, $i=1,2$, then the system (4.5) is controllable with respect to the function $\widehat x(\,\cdot\,)$ and its neighbourhood $V_0$.

Note that the functions $f_i$, $i=1,2$, are only assumed to be continuous, so this result cannot be obtained by the methods of [4], Ch. 5, where such functions are assumed to be Lipschitz continuous.
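
By way of illustration of this remark (and of Theorem 1), one can take specific merely continuous perturbations that are not Lipschitz continuous at the origin and observe numerically that the same rapidly switching controls as above produce trajectories of (4.5) which remain near $\widehat x(\,\cdot\,)=0$ and whose values at $t=1$ are of the order of the perturbation. This exhibits particular attainable boundary values near the origin, whereas Theorem 1 guarantees that a whole neighbourhood of such values is attainable. In the Python sketch below the perturbations $f_1=\varepsilon\sqrt{|x_2|}$ and $f_2=\varepsilon\sqrt[3]{|x_1|}$ and the size $\varepsilon=0.01$ are chosen purely for illustration.

import math

eps = 0.01                                   # illustrative size of the perturbation

def f1(t, x1, x2, u):                        # continuous, not Lipschitz at x2 = 0
    return eps * math.sqrt(abs(x2))

def f2(t, x1, x2, u):                        # continuous, not Lipschitz at x1 = 0
    return eps * abs(x1) ** (1.0 / 3.0)

def perturbed_trajectory(N, steps_per_interval=1000):
    """Explicit Euler scheme for (4.5) with the same switching control as for (4.1)."""
    M = 2 * N * steps_per_interval
    dt = 1.0 / M
    x1 = x2 = 0.0
    sup_dev = 0.0
    for k in range(M):
        t = k * dt
        pair, pos = divmod(k // steps_per_interval, 2)
        sigma = 1.0 if pair % 2 == 0 else -1.0
        u = sigma if pos == 0 else -sigma
        x1, x2 = (x1 + dt * (u + f1(t, x1, x2, u)),      # dx1/dt = u + f1
                  x2 + dt * (x1 + f2(t, x1, x2, u)))     # dx2/dt = x1 + f2
        sup_dev = max(sup_dev, abs(x1), abs(x2))
    return sup_dev, x1, x2

for N in (16, 64):
    sup_dev, x1T, x2T = perturbed_trajectory(N)
    print(f"N = {N:3d}:  sup-deviation = {sup_dev:.4f},  "
          f"(x1(1), x2(1)) = ({x1T:.4f}, {x2T:.4f})")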


Bibliography

1. E. B. Lee and L. Markus, Foundations of optimal control theory, John Wiley & Sons, Inc., New York–London–Sydney, 1967, x+576 pp.
2. E. R. Avakov and G. G. Magaril-Il'yaev, “Controllability and second-order necessary conditions for optimality”, Sb. Math., 210:1 (2019), 1–23
3. E. R. Avakov and G. G. Magaril-Il'yaev, “Local controllability and optimality”, Sb. Math., 212:7 (2021), 887–920
4. F. H. Clarke, Optimization and nonsmooth analysis, Canad. Math. Soc. Ser. Monogr. Adv. Texts, Wiley-Intersci. Publ., John Wiley & Sons, Inc., New York, 1983, xiii+308 pp.
5. H. J. Sussmann, “A general theorem on local controllability”, SIAM J. Control Optim., 25:1 (1987), 158–194
6. E. R. Avakov and G. G. Magaril-Il'yaev, “Local infimum and a family of maximum principles in optimal control”, Sb. Math., 211:6 (2020), 750–785
7. E. R. Avakov and G. G. Magaril-Il'yaev, “General implicit function theorem for close mappings”, Proc. Steklov Inst. Math., 315 (2021), 1–12
8. R. E. Edwards, Functional analysis. Theory and applications, Holt, Rinehart and Winston, New York–Toronto–London, 1965, xiii+781 pp.
9. G. G. Magaril-Il'yaev and V. M. Tikhomirov, Convex analysis: theory and applications, 5th augmented ed., Lenand, Moscow, 2020, 176 pp.; English transl. of 2nd ed., Transl. Math. Monogr., 222, Amer. Math. Soc., Providence, RI, 2003, viii+183 pp.
10. V. M. Alekseev, V. M. Tikhomirov and S. V. Fomin, Optimal control, 2nd ed., Fizmatlit, Moscow, 2005, 384 pp.; English transl. of 1st ed., Contemp. Soviet Math., Consultants Bureau, New York, 1987, xiv+309 pp.
