Sbornik: Mathematics
Sbornik: Mathematics, 2022, Volume 213, Issue 5, Pages 604–623
DOI: https://doi.org/10.1070/SM9627
(Mi sm9627)
 

This article is cited in 5 scientific papers.

Strong convexity of reachable sets of linear systems

M. V. Balashov

V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
Abstract: The reachable set on a time interval of a linear control system $x'\in Ax+U$, $x(0)=0$, is considered. A number of cases are examined in which the reachable set is the intersection of closed balls of a fixed radius $R$ (that is, a strongly convex set of radius $R$); in some of these cases the radius $R$ is estimated from above. It turns out that, in a certain sense, strong convexity is fairly typical for this class of reachable sets.
Possible applications of this result include the construction of outer polyhedral approximations of reachable sets with better accuracy in the Hausdorff metric than in the general case, as well as applications to linear differential games and some optimization problems.
Bibliography: 23 titles.
Keywords: strongly convex set, reachable set, linear control system, Aumann integral, Hausdorff metric, nonsmooth analysis.
Funding: This research was carried out with the financial support of the Russian Science Foundation (grant no. 22-11-00042).
Received: 18.06.2021 and 01.11.2021
Russian version:
Matematicheskii Sbornik, 2022, Volume 213, Number 5, Pages 30–49
DOI: https://doi.org/10.4213/sm9627
Document Type: Article
MSC: Primary 49J53, 52A20; Secondary 93C05, 90C26
Language: English
Original paper language: Russian

§ 1. Introduction

1.1.

The properties of reachable sets for systems of differential equations in $\mathbb R^n$ have been studied extensively (see [1]–[4]). Of special importance here are linear systems. One of the principal problems for reachable sets is to find their structural properties, the most important of which are boundedness, closedness and convexity, as well as other geometric and topological characteristics (for example, smoothness of the boundary). For a linear system $x'\in Ax+U(t)$, $x(0)=0$, where the $U(t)$ are compact sets varying continuously with $t$, the convexity of the reachable set follows from the properties of the Aumann integral (see [5]) and can be found in classical monographs (see, for example, Theorem 1 in § 2.2.2 of [1]). In essence, such convexity results for the Aumann integral are direct consequences of the Lyapunov theorem on vector measures (see [6]).

Properties of reachable sets have proved useful in the analysis of dynamical systems and in some other cases involving the construction of a control. The alternating Pontryagin integral (see [7]), the Krasovskii stable bridge (see [8]) and other constructions, which can be looked upon as far-reaching generalizations of reachable sets, provide an important technical tool for the analysis and solution of linear zero-sum differential games. The errors of polyhedral or some other (for example, ellipsoidal) approximations of reachable sets (see [9]–[11]), measured usually in the Hausdorff metric, have also been widely studied. This leads to the problem of replacing the convexity of a set by some type of strict convexity condition, which is known to reduce the error of polyhedral approximation of the set on a grid of unit vectors (see [12]).

Strongly convex sets constitute an important subclass of the strictly convex sets. A set $M\subset\mathbb R^n$ is a strongly convex set of radius $R$ if it can be represented as the intersection of closed balls of radius $R$. Strongly convex sets were investigated, for example, in [13]–[16].

In Theorem 4.2 of [17] it was shown that, for a nonlinear dynamical system $x'=f(x)$, $x(0)\in\Omega\subset\mathbb R^n$, under certain conditions (in particular, if the set of initial states $\Omega$ is strongly convex) the reachable set is a strongly convex set (of some radius). The proof of this result is based on the application of the support principle for strongly convex sets (which is established locally in time for reachable sets), rather than on the linearization of the system.

The author is aware of only a few global results on the strong convexity of the reachable sets for dynamical systems or on the strong convexity of Aumann integrals for fairly general set-valued mappings (the latter setting being much more general).

In [18] it was proved that, if the values of a continuous set-valued mapping $F(t)$, $t\in [0,T]$, are strongly convex sets of radius $R(t)$, then the Aumann integral $\displaystyle\int_{0}^{T}F(t)\, dt$ is a strongly convex set of radius $\displaystyle R=\int_{0}^{T}R(t)\, dt$.

Strong convexity properties of control constraint sets and the terminal set were used in [11] for a proof of the strong convexity of the alternating Pontryagin integral, and, as a corollary, for establishing the quadratic convergence of alternating Pontryagin sums with respect to the space.

In [19] it was noted that if a continuous set-valued mapping $F(t)$, $t\in [0,T]$, has compact (but not necessarily convex) values and $\displaystyle\int_{0}^{T}F(t)\, dt$ is a strongly convex set of radius $R$, then for each $t\in [0,T]$ the Aumann integral $\displaystyle\int_{0}^{t}F(s)\, ds$ is also a strongly convex set of radius $R$. This result follows from the fact that the integral is linear in $t$.

Our aim in the present paper is to describe sufficient conditions for the time-global strong convexity of the reachable set of a linear control system. In § 4 we show that in the two-dimensional space, for a control system with an arbitrary matrix $A$ and a fairly general autonomous control constraint set $U(t)=U$, the reachable set of this system is strongly convex. The principal idea of the proof is given in Lemma 1, which guarantees the strong convexity of a convex compact set under a certain local condition; this result is also of independent interest. We discuss applications of these results to spaces of dimension $n\geqslant 3$ in § 5. Various applications of our results are discussed in § 6, the concluding section.

1.2. Some definitions and notation

The inner product of vectors $x$ and $y$ in Euclidean space $\mathbb R^n$ of dimension $n$ is denoted by $(x,y)$; as usual, $\| x\|=\sqrt{(x,x)}$. We also set $B_R(a)=\{ x\in\mathbb R^n\mid \| x-a\|\leqslant R\}$.

The standard orthonormal basis in $\mathbb R^n$ will be denoted by $e_1,\dots,e_n$. The Minkowski sum of sets $P,Q\subset\mathbb R^n$ is defined by

$$ \begin{equation*} P+Q=\{ x+y\mid x\in P,\ y\in Q\}; \end{equation*} \notag $$
the operation
$$ \begin{equation*} P\mathbin{\overset{*}{-}}Q=\{ x\mid x+Q\subset P\}=\bigcap_{x\in Q}(P-x) \end{equation*} \notag $$
is known as the geometric difference or the Minkowski-Pontryagin difference. We say that a set $P$ is a summand of a set $R$ if there exists a set $Q$ such that $P+Q=R$. The distance of a point $x\in\mathbb R^n$ to a set $P\subset\mathbb R^n$ is defined by
$$ \begin{equation*} \varrho_P(x)=\inf_{p\in P}\| x-p\|. \end{equation*} \notag $$

The Hausdorff distance between compact sets $P$ and $Q$ is defined by

$$ \begin{equation*} \begin{aligned} \, h(P,Q) &=\inf\bigl\{ \varepsilon>0\mid P\subset Q+B_{\varepsilon}(0),\, Q\subset P+B_{\varepsilon}(0)\bigr\} \\ &=\max\Bigl\{ \max_{x\in P}\varrho_Q(x),\, \max_{x\in Q}\varrho_P(x)\Bigr\}. \end{aligned} \end{equation*} \notag $$
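These max–min formulae translate directly into code. The following sketch is an illustration, not part of the article: finite point sets stand in for the compact sets $P$ and $Q$, and $h(P,Q)$ is evaluated exactly as in the second formula above.

```python
import numpy as np

def hausdorff(P, Q):
    """Hausdorff distance h(P, Q) between finite point sets (rows = points).

    Implements h(P,Q) = max( max_{x in P} dist(x, Q), max_{x in Q} dist(x, P) ).
    """
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Vertices of two unit squares, the second translated by (3, 0): h = 3.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Q = P + np.array([3.0, 0.0])
print(hausdorff(P, Q))  # 3.0
```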

The support function of a set $Q\subset\mathbb R^n$ is by definition

$$ \begin{equation*} s(p,Q)=\sup_{x\in Q}(p,x), \qquad p\in\mathbb R^n. \end{equation*} \notag $$

For a convex closed set $U\subset\mathbb R^n$, its normal cone at a point $x\in U$ is defined by $N(U,x)=\{ p\in\mathbb R^n\mid (p,x)=s(p,U)\}$. Given a convex compact set $U\subset\mathbb R^n$ and a vector $p\ne 0$, the set of supporting elements of $U$ in direction $p$ is denoted by $U(p)=\{ x\in U\mid (p,x)=s(p,U)\}$. In other words, $U(p)$ is the subdifferential of the function $s(\cdot,U)$ at the point $p$.

A set-valued mapping will be denoted by $F(t)$, $t\in [0,T]$. It will usually be assumed that for each $t$ the set $F(t)$ is a compact (and convex) set which depends continuously on $t$ in the Hausdorff metric. The set of supporting elements for a set $F(t)$ and a vector $p\ne 0$ will be denoted by $F(t)(p)$.

The subdifferential of a convex function $f\colon \mathbb R^n\supset D\to\mathbb R$ at a point $x_0\in\mathbb R^n$ is the (possibly empty) set

$$ \begin{equation*} \partial f(x_0)=\{ p\in\mathbb R^n\mid f(x)\geqslant f(x_0)+(p,x-x_0)\ \forall\, x\in D\}. \end{equation*} \notag $$

By $\operatorname{co} P$ and $\operatorname{cone}P$ we denote the convex and conical hulls of the given set $P\subset\mathbb R^n$, respectively. The closure and the interior of $P$ are denoted by $\operatorname{cl}P$ and $\operatorname{int}P$, respectively.

A set $M\ne\varnothing$ is a strongly convex set of radius $R$ if $M=\bigcap_{x\in X}B_R(x)$ for some set $X$. Given a convex compact set $M$, the following conditions are equivalent (see [15], Theorems 4.2.7 and 4.3.3): (1) $M$ is a strongly convex set of radius $R$; (2) $M$ satisfies the support principle of radius $R$, that is, $M\subset B_R(x-Rp)$ for every point $x\in\partial M$ and every unit vector $p\in N(M,x)$; (3) the supporting elements satisfy $\| M(p)-M(q)\|\leqslant R\| p-q\|$ for all unit vectors $p$ and $q$ (in particular, $M(p)$ is a singleton for every $p\ne 0$).

§ 2. Integral of a set-valued mapping and the reachable set of a linear system

We recall that, for a set-valued mapping $F(t)$, $t\in [0,T]$, with compact values in $\mathbb R^n$ which depend continuously on $t$ in the Hausdorff metric, the Aumann integral (see [5]) is defined by

$$ \begin{equation*} \int_{0}^{t}F(s)\, ds=\biggl\{ \int_{0}^{t} f(s)\, ds\Bigm| f \text{ is a Lebesgue measurable selector of $F$}\biggr\}. \end{equation*} \notag $$
The Riemann integral (see [20], Ch. 4, § 30) is defined by
$$ \begin{equation*} \int_{0}^{t}F(s)\, ds=\lim_{|\tau |\to 0}\sum_{i=0}^{I-1}F(\xi_i)(x_{i+1}-x_i), \end{equation*} \notag $$
where, as usual, $\tau=\{ x_i\}_{i=0}^I$ is a partition, $|\tau |=\max_{i}(x_{i+1}-x_i)$ is its fineness, and $\xi_i\in [x_i,x_{i+1}]$.

For any continuous set-valued mapping with compact values it is known that these two integrals coincide and are equal to the integral of $\operatorname{cl}\operatorname{co} F(t)$ (see [20], where the equality of the Riemann and Lebesgue integrals is secured by Theorem 30.6 and Definition 31.7 in [20], while Theorem 35.1 establishes the equivalence of the Aumann and Lebesgue integrals).

It follows from the definitions that the integral is linear with respect to the integration interval, that $s\biggl(p, \displaystyle\int_{0}^{t}F(s)\, ds\biggr)=\int_{0}^{t}s(p,F(s))\, ds$ for any $p\in\mathbb R^n$, and that the set of supporting elements of $\displaystyle\int_{0}^{t}F(s)\, ds$ in direction $p\ne 0$ is $\displaystyle\int_{0}^{t}F(s)(p)\, ds$. In other words,

$$ \begin{equation*} \biggl( \int_{0}^{t}F(s)\, ds\biggr)(p)=\int_{0}^{t}F(s)(p)\, ds. \end{equation*} \notag $$
For any matrix $J\in\mathbb R^{n\times n}$ we have
$$ \begin{equation*} J\int_{0}^{t}F(s)\, ds =\int_{0}^{t}JF(s)\, ds. \end{equation*} \notag $$

Example 1. We discuss an example borrowed from [20], Example 34.2. For a rotation matrix

$$ \begin{equation*} Q(s)=\begin{pmatrix} \cos s & -\sin s \\ \sin s & \cos s \end{pmatrix} \end{equation*} \notag $$
consider $F(s)=Q(s)P$, where $P=\operatorname{co}\{ \pm e_1\}$. Let $p=(\cos\theta, \sin\theta)$ be a unit vector. Then
$$ \begin{equation} s\biggl(p, \int_{0}^{2\pi}F(s)\, ds\biggr) =\int_{0}^{2\pi} |{\cos(s-\theta)}|\, ds=4=s(p, B_4(0)), \end{equation} \tag{2.1} $$
that is, $\displaystyle\int_{0}^{2\pi}F(s)\, ds=B_4(0)$.
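The value $4$ in (2.1) is easy to confirm numerically. The sketch below (an illustration, not from the article) approximates the support function of the integral by a Riemann sum, using $s(p,Q(s)P)=|\cos(s-\theta)|$ for $P=\operatorname{co}\{\pm e_1\}$:

```python
import numpy as np

def support_of_integral(theta, n=200_000):
    """Riemann sum for s(p, ∫_0^{2π} Q(s)P ds), P = co{±e1}, p = (cos θ, sin θ).

    Uses s(p, Q(s)P) = max over a = ±e1 of (Q(s)^T p, a) = |cos(s - θ)|.
    """
    s = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.sum(np.abs(np.cos(s - theta))) * (2 * np.pi / n)

for theta in (0.0, 0.7, np.pi / 2):
    print(support_of_integral(theta))  # each ≈ 4 = s(p, B_4(0))
```

The result is independent of $\theta$, as it must be for a ball centred at the origin.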

Consider the system $x'\in Ax+U(t)$. Here $x\in\mathbb R^n$, $A\in\mathbb R^{n\times n}$ and the $U(t)\subset\mathbb R^n$ are convex compact sets that are continuous with respect to $t$ in the Hausdorff metric. Consider the reachable set $M(t)$ of this system with the initial condition $x(0)=0$. Substituting $x=e^{At}z$ we obtain

$$ \begin{equation*} M_z(t)=\int_{0}^{t}e^{-A s}U(s)\, ds\quad\text{and} \quad M(t)=M_x(t)=\int_{0}^{t}e^{A(t-s)}U(s)\, ds. \end{equation*} \notag $$
That $M(t)$ is convex follows from the convexity of the Aumann integral.

In this paper we consider the case of constant $U(t)=U$:

$$ \begin{equation} x'\in Ax+U, \quad x(0)=0, \qquad U \text{ is a convex compact set}. \end{equation} \tag{2.2} $$
In this case $\displaystyle M(t)=\int_{0}^{t}e^{A(t-s)}U\, ds=\int_{0}^{t}e^{As}U\, ds$.
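Since the support function commutes with the integral, $s(p,M(t))=\int_0^t s(e^{A^{\top}s}p,\,U)\,ds$, which gives a direct numerical approximation scheme for the reachable set. A sketch under simplifying assumptions, not from the article: here $A=-I$ and $U=B_1(0)$, chosen so that the answer $M(t)=(1-e^{-t})B_1(0)$ is known in closed form.

```python
import numpy as np

def support_M(p, t, expAT, support_U, n=20_000):
    """Midpoint-rule approximation of s(p, M(t)) = ∫_0^t s(e^{A^T s} p, U) ds."""
    s = np.linspace(0.0, t, n, endpoint=False) + t / (2 * n)  # midpoints
    return sum(support_U(expAT(si) @ p) for si in s) * (t / n)

# Sample data: A = -I in R^2, so e^{A^T s} = e^{-s} I; U = B_1(0), so s(q, U) = ||q||.
# Then M(t) = (1 - e^{-t}) B_1(0) and s(p, M(t)) = 1 - e^{-t} for unit p.
expAT = lambda s: np.exp(-s) * np.eye(2)
support_U = np.linalg.norm
p, t = np.array([0.6, 0.8]), 1.0
print(support_M(p, t, expAT, support_U))  # ≈ 1 - e^{-1} ≈ 0.6321
```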

§ 3. Some properties of integrals and strongly convex sets

Lemma 1. Let the convex compact set $C\subset \mathbb R^n$ satisfy the condition

$$ \begin{equation} \exists\, l_0>0\colon\quad \forall\, p,q\in \partial B_1(0), \quad \| p-q\|\leqslant l_0 \;\Longrightarrow\; \| x_p-x_q\|\leqslant R\| p-q\|, \end{equation} \tag{3.1} $$
where $x_p=\arg\max_{x\in C}(p,x)$. Then $C$ is a strongly convex set of radius $R$.

Note that (3.1) means that the set $\arg\max_{x\in C}(p,x)$ is a singleton for any unit vector $p$.

Proof of Lemma 1. Given $\| p\|=1$, we set $H_p(C)=\{ x\in\mathbb R^n\mid (p,x)\leqslant s(p,C)\}$. Note that $\operatorname{int} C\ne\varnothing$. Let $\Gamma\subset \partial C$ be the set of smooth points of $\partial C$. It is known from convex analysis (see Theorem 25.6 in [21]) that for any convex function $f\colon B_{\delta}(x_0)\to\mathbb R$ we have
$$ \begin{equation*} \begin{aligned} \, \partial f(x_0)=\operatorname{cl}\operatorname{co} &\Bigl\{\lim_{i\to\infty} f'(x_i)\Bigm| x_i\to x_0, \\ &\qquad f'(x_i)\text{ is the Fr\'echet gradient and }\exists\,\lim_{i\to\infty} f'(x_i) \Bigr\}, \end{aligned} \end{equation*} \notag $$
so that $C=\bigcap_{x\in \Gamma,\, p\in N(C,x),\, \|p\|=1}H_p(C)$ (for a proof one should interpret $\partial C$ locally as the graph of a convex function).

Assume that there exist a point $x\in \Gamma$ and a unit vector $p(x)\in N(C,x)$ such that $C\not\subset B_R(x-Rp(x))$. If there is no such point $x$ in the set of smooth points $\Gamma$, then

$$ \begin{equation*} C\subset\bigcap_{x\in \Gamma,\ p(x)\in N(C,x),\, \|p(x)\|=1} B_R(x-Rp(x))\subset\bigcap_{x\in \Gamma,\, p\in N(C,x),\, \|p\|=1}H_p(C)=C \end{equation*} \notag $$
and so $C$ is a strongly convex set of radius $R$.

As usual, in place of $p(x)$ and $x\in\Gamma$ we write $p$ and $x_p$, respectively.

It is known that local strong convexity is equivalent to global strong convexity (see Theorem 1.1 in [16]); hence, for each $k\in\mathbb N$,

$$ \begin{equation*} C\cap B_{1/k}(x_p)\not\subset B_R(x_p-Rp). \end{equation*} \notag $$
Next, the subdifferential of the convex indicator function $\delta (x,C)$ of the set $C$ is upper semicontinuous, and therefore for $\varepsilon=l_0$ we have
$$ \begin{equation} \exists\,\delta>0\colon\quad \forall\, x\in \partial C \text{ with } \| x-x_p\|<\delta \quad \forall\, q\in N(C,x),\ \| q\|=1, \qquad q\in p+B_{l_0}(0), \end{equation} \tag{3.2} $$
where we take into account that the normal cone $N(C,x_p)$ is one-dimensional. Note that the inclusion $x_p\in\Gamma$ implies that the unit vector $p$ is unique in the cone $N(C,x_p)$.

Hence, for all sufficiently large $k\in\mathbb N$ and each $x\in B_{1/k}(x_p)\cap \partial C$, any unit vector $q\in N(C,x)$ satisfies the condition $\| q-p\|\leqslant l_0$.

Let $k\in\mathbb N$, $1/k<\delta$ ($\delta$ is the number from (3.2)), and let $C_0=B_{1/k}(x_p)\cap \partial C$. We set $z=x_p-Rp$. Let $x_0\in C_0$ be such that $\| x_0-z\|=\max_{x\in C_0}\| x-z\|$. It is clear that $\| x_0-z\|=R_1>R$. We set $q_0=(x_0-z)/\| x_0-z\|$. By the construction of $q_0$ we have $x_0=\arg\max_{x\in C_0}(q_0,x)$, and so for $x_0=z+R_1q_0$ and $x_p=z+Rp$ we have

$$ \begin{equation*} \begin{aligned} \, \| x_0-x_p\|^2 &=\| q_0(R_1-R)+R(q_0-p)\|^2 \\ &=(R_1-R)^2+R^2\| q_0-p\|^2+2R(R_1-R)(q_0,q_0-p) \\ &\geqslant (R_1-R)^2+R^2\| q_0-p\|^2. \end{aligned} \end{equation*} \notag $$

If $\| p-q_0\|\leqslant l_0$, then for $q_0\in N(C,x_0)$ this contradicts (3.1). If $q_0\notin N(C,x_0)$, then from the definition of $C_0$ we obtain $x_{q_0}\notin C_0$. Hence $\|x_p-x_{q_0}\|>\|x_p-x_0\|$, which is again in contradiction to (3.1).

Let ${\| p-q_0\|> l_0}$. For any unit vector $q\in N(C,x_0)$ we have $\| q-p\|\leqslant l_0$, and so

$$ \begin{equation*} \| x_p-x_0\|>R\| p-q_0\|\geqslant R\| p-q\|, \end{equation*} \notag $$
which again contradicts the assumptions of the lemma.

Lemma 1 is proved.
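Condition (3.1) is a Lipschitz bound on the supporting-element map $p\mapsto x_p$, and for a set with a known support-point formula it can be probed numerically. The sketch below (an illustration, not from the paper) samples the ratio $\| x_p-x_q\|/\| p-q\|$ for an ellipse with semi-axes $a\geqslant b$; the sampled maximum approaches $a^2/b$, the largest radius of curvature, which is the expected strong convexity radius.

```python
import numpy as np

a, b = 2.0, 1.0                       # semi-axes, a >= b; expected radius a**2 / b = 4
thetas = np.linspace(0.0, 2 * np.pi, 200_001)
ps = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # unit normals p

# Supporting point of the ellipse x^2/a^2 + y^2/b^2 <= 1 in direction p:
# x_p = (a^2 p1, b^2 p2) / ||(a p1, b p2)||.
qs = ps * np.array([a, b])
xs = qs * np.array([a, b]) / np.linalg.norm(qs, axis=1, keepdims=True)

# Sample the Lipschitz ratio of (3.1) over adjacent pairs of unit vectors.
ratios = (np.linalg.norm(np.diff(xs, axis=0), axis=1)
          / np.linalg.norm(np.diff(ps, axis=0), axis=1))
print(ratios.max())  # ≈ 4.0, attained near p = ±e2
```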

Lemma 2. Let $f\colon [0,t]\to\mathbb R$ be a continuous function with positive values, let $U(s)\subset\mathbb R^n$, $s\in [0,t]$, be a set-valued mapping with convex compact values, let ${C>0}$, and let $f(s)\leqslant C$ for all $s\in [0,t]$. Then $\displaystyle \int_{0}^{t}f(s)U(s)\, ds$ is a summand of the set $\displaystyle \int_{0}^{t}CU(s)\, ds$.

Proof. Since $U(s)$ is convex for all $s\in [0,t]$, that is,
$$ \begin{equation*} CU(s)=f(s)U(s)+(C-f(s))U(s), \end{equation*} \notag $$
we have
$$ \begin{equation*} \int_{0}^{t}CU(s)\, ds=\int_{0}^{t}f(s)U(s)\, ds+\int_{0}^{t}(C-f(s))U(s)\, ds, \end{equation*} \notag $$
which implies the conclusion of the lemma.
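The decomposition $CU(s)=f(s)U(s)+(C-f(s))U(s)$ is exactly where convexity enters: for a convex set and nonnegative scalars, $(a+b)U=aU+bU$, whereas for a nonconvex set only the inclusion $(a+b)U\subset aU+bU$ holds in general. A minimal one-dimensional illustration (not from the paper):

```python
def scale(c, U):
    """Scalar multiple cU of a finite set U ⊂ R."""
    return {c * u for u in U}

def msum(P, Q):
    """Minkowski sum P + Q of finite sets P, Q ⊂ R."""
    return {p + q for p in P for q in Q}

# Nonconvex two-point set: U + U strictly contains 2U.
U = {0.0, 1.0}
print(msum(U, U))   # {0.0, 1.0, 2.0}
print(scale(2, U))  # {0.0, 2.0}
```

For the convex segment $[0,1]$, by contrast, $aU+bU=[0,a+b]=(a+b)U$, so no extra points appear and the decomposition in the proof is exact.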

Lemma 3. Let $f\colon [0,t]\to\mathbb R$ be a continuous function with positive values, let $U(s)\subset\mathbb R^n$, $s\in [0,t]$, be a set-valued mapping with convex compact values, and let $\displaystyle M(t)=\int_{0}^{t}f(s)U(s)\, ds$. Then for any unit vectors $p$ and $q$

$$ \begin{equation*} M(t)(p)-M(t)(q)=\int_{0}^{t} f(s)\bigl( U(s)(p)-U(s)(q)\bigr)\, ds. \end{equation*} \notag $$

This result follows from the properties of integrals.

§ 4. Strong convexity of the reachable set in the planar case

Lemma 4. Let $A_1=J^{-1}AJ$ be the Jordan canonical form of the matrix $A$ in (2.2) and let $U_1=J^{-1}U$, where $J\in \mathbb R^{n\times n}$ is the transition matrix. If the set $\displaystyle {M_1(t)=\int_{0}^t e^{A_1s}U_1\, ds}$ is strongly convex with radius $r$, then $\displaystyle M(t)=\int_{0}^{t}e^{As}U\, ds$ is strongly convex with radius $R=r\alpha^2/\beta$, where $\alpha=\| J\|=\max_{\| h\|=1}\| Jh\|$ and $\beta=\min_{\| h\|=1}\| Jh\|$.

Proof. Since $e^{As}=Je^{A_1s}J^{-1}$, we have
$$ \begin{equation*} M(t)=\int_{0}^{t}Je^{A_1s}J^{-1}U\, ds=\int_{0}^{t}Je^{A_1s}U_1\, ds=JM_1(t). \end{equation*} \notag $$
Now the conclusion of the lemma follows from Theorem 4.3.3 in [15].

Lemma 4 shows that it suffices to analyze the situation for system (2.2) with matrix $A$ in the Jordan canonical form.

Theorem 1. Let $f\colon [0,2\pi]\to \mathbb R_+$ be a continuous function such that $f(s)\leqslant C$ for all $s\in [0,2\pi]$, for some $C>0$. Next, let $Q(s)$ be a rotation matrix (see Example 1), and let $U\subset\mathbb R^2$ be a convex compact set. Let $\displaystyle M(t)=\int_{0}^t f(s)Q(s)U\, ds$. Then the set $M(2\pi)$ is strongly convex with radius $R=CL_U$, where $L_U$ is the perimeter of $\partial U$.

For a body $U$, its perimeter $L_U$ is defined to be the length of the rectifiable curve $\partial U$. The perimeter $L_U$ of a line segment is double its length.

Proof of Theorem 1. Let $\{U_m\}_{m=1}^{\infty}$ be a sequence of convex polygons that approximate $U$ in the Hausdorff metric ($h(U_m,U)\to0$) and let
$$ \begin{equation*} M_m(t)=\int_{0}^{t}f(s)Q(s)U_m\, ds. \end{equation*} \notag $$

Let $U_m$ and $M_m(t)$, $m\in\mathbb N$, be fixed. By $L_m$ we denote the length of $\partial U_m$.

Further, let $\theta_m$ be the smallest angle of the cones $N(U_m,x)$ over all the extreme points $x$ of the set $U_m$. Let $p$ and $q$ be unit vectors such that $\| p-q\|\leqslant 2\sin(\theta_m/2)$. By reducing $\theta_m>0$ if necessary we can ensure that $\theta_m\leqslant 2( 1+1/m)\sin(\theta_m/2)$. Then

$$ \begin{equation} \theta\leqslant 2\biggl( 1+\frac1m\biggr)\sin\frac{\theta}{2} \quad \forall\, \theta\in (0,\theta_m). \end{equation} \tag{4.1} $$

For all $s$ except a finite number of values, the set of supporting elements of the set $Q(s)U_m$ in direction $p$ is the point $U_m(Q^\top(s)p)$. As a result,

$$ \begin{equation*} \max_{a\in U_m}(p, Q(s)a)=\max_{a\in U_m}(Q^\top(s)p, a)=\bigl(Q^\top(s)p, U_m(Q^\top(s)p)\bigr). \end{equation*} \notag $$
We have
$$ \begin{equation} \begin{aligned} \, \notag \bigl\| M_m(2\pi)(p)-M_m(2\pi)(q)\bigr\| &=\biggl\| \int_{0}^{2\pi} f(s)Q(s)\bigl( U_m(Q^\top(s)p)-U_m(Q^\top(s)q)\bigr)\, ds\biggr\| \\ &\leqslant C \int_{0}^{2\pi} \bigl\| U_m(Q^\top(s)p)-U_m(Q^\top(s)q)\bigr\|\, ds. \end{aligned} \end{equation} \tag{4.2} $$

Let $\mathbb K=\operatorname{cone}\{ p,q\}$, and let $\theta$ be the angle between the vectors $p$ and $q$. Next, let $\{a_k\}_{k=1}^{K+1}$, $a_{K+1}=a_1$, be the vertices of $U_m$ ordered counterclockwise. The integrand on the right-hand side of (4.2) does not vanish if and only if ${Q(s)\mathbb K\not\subset \bigcup_{k=1}^K N(U_m,a_k)}$. Hence, by (4.2) we have

$$ \begin{equation*} \int_{0}^{2\pi} \bigl\| U_m(Q^\top(s)p)-U_m(Q^\top(s)q)\bigr\|\, ds \leqslant\sum_{k=1}^{K}\theta\| a_k-a_{k+1}\|=\theta L_m. \end{equation*} \notag $$
Next, an appeal to (4.1) shows that
$$ \begin{equation*} \theta L_m\leqslant 2\biggl( 1+\frac1m\biggr)L_m\sin\frac{\theta}{2} =\biggl( 1+\frac1m\biggr)L_m\| p-q\|. \end{equation*} \notag $$
As a result,
$$ \begin{equation*} \| M_m(2\pi)(p)-M_m(2\pi)(q)\|\leqslant \biggl( 1+\frac1m\biggr)CL_m\| p-q\| \end{equation*} \notag $$
for all unit vectors $p$ and $q$ making an angle $<\theta_m$. By Lemma 1 and in view of Lemma 2, $M_m(2\pi)$ is strongly convex with radius $R_m=(1+1/m)CL_m$. Letting $m\to\infty$ we obtain
$$ \begin{equation*} M_m(2\pi)\to M(2\pi)\quad\text{and} \quad \biggl( 1+\frac1m\biggr)CL_m\to CL_U, \end{equation*} \notag $$
that is, $M(2\pi)$ is strongly convex with radius $R=CL_U$.

Theorem 1 is proved.

Note that the estimate in Theorem 1 is sharp in the setting of Example 1.

Corollary. In system (2.2) assume that $n=2$ and let $\lambda_{1,2}=\alpha\pm i\beta$, $\beta>0$, be the eigenvalues of the matrix $A$. Then for any convex compact set $U$ the set $M(2\pi/\beta)$ is strongly convex with radius $R=\frac{1}{\beta}CL_U$, where $L_U$ is the perimeter of the boundary of $U$ and $C=\max\{ 1, e^{2\pi{\alpha}/{\beta}}\}$. Moreover, if $\alpha<0$, then the set $M(+\infty)$ is strongly convex with radius $R=L_U/(\beta (1-e^{{2\pi\alpha}/{\beta}}))$.

Proof. By Theorem 1 the set $M(2\pi/\beta)$ is strongly convex with radius $R=({1}/{\beta})CL_U$. If $\alpha<0$, then
$$ \begin{equation*} \begin{aligned} \, M(+\infty) &=\int_{0}^{+\infty} e^{\alpha s}Q(\beta s)U\, ds =\sum_{k=0}^{\infty}\int_{2\pi{k}/{\beta}}^{2\pi(k+1)/\beta} e^{\alpha s}Q(\beta s)U\, ds \\ &=\sum_{k=0}^{\infty}\frac{1}{\beta}\int_{2\pi k}^{2\pi (k+1)} e^{s\alpha/\beta}Q(s)U\, ds. \end{aligned} \end{equation*} \notag $$
From Theorem 1 we see that the $k$th term in the last sum is a strongly convex set of radius $R_k\leqslant e^{2\pi k\alpha/\beta}(L_U/\beta)$. Hence the set $M(+\infty)$ is strongly convex with radius
$$ \begin{equation*} R\leqslant \sum_{k=0}^{\infty}R_k \leqslant\frac{1}{\beta} \frac{L_U}{1-e^{2\pi\alpha/\beta}}. \end{equation*} \notag $$
The proof is complete.
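The radius bound in the corollary is just the geometric series from the proof summed in closed form; a quick numerical check (with illustrative values of $\alpha$, $\beta$ and $L_U$, not taken from the article):

```python
import math

alpha, beta, L_U = -0.5, 2.0, 6.0         # sample data: alpha < 0, eigenvalues alpha ± i*beta
r = math.exp(2 * math.pi * alpha / beta)  # common ratio e^{2*pi*alpha/beta} < 1

# Partial sum of the radii R_k <= e^{2*pi*k*alpha/beta} * L_U / beta vs the closed form.
series = sum(r**k * L_U / beta for k in range(200))
closed = L_U / (beta * (1 - r))
print(series, closed)  # the two agree to machine precision
```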

Theorem 2. In system (2.2) assume that $n=2$ and the matrix $A$ has equal real eigenvalues $\lambda_{1,2}=\lambda\in\mathbb R$. Also assume that there exist $ R_0>0$, $ \gamma>0$ and $\delta>0$ such that

$$ \begin{equation} \forall\, p\in B_{\delta}(\pm e_2)\cap \partial B_1(0) \qquad U\cap B_{\gamma}(U(\pm e_2))\subset B_{R_0}(U(p)-R_0p). \end{equation} \tag{4.3} $$
Then for any $t>0$ the set $M(t)$ is strongly convex.

Condition (4.3) means that $U$ satisfies the support principle for strongly convex sets of radius $R_0$ locally for all unit vectors in a neighbourhood of $\pm e_2$.

Proof of Theorem 2. We fix $t>0$ and define
$$ \begin{equation*} Q(s)=\begin{pmatrix} 1 & s \\ 0 & 1\end{pmatrix}, \qquad U(s)=Q(s)U. \end{equation*} \notag $$
By translating the set $U$ if necessary we can assume without loss of generality that $b\geqslant 0$ for any point $(a,b)\in U$.

1. The set $U$ is a convex polygon. By (4.3) each of the sets $U(\pm e_2)$ is a singleton. Since $U$ is a convex polygon, there exists $\alpha_0>0$ such that the angle between the $Ox$-axis and any side of $\partial U(s)$ is at least $\alpha_0$ for all $s\in [0,t]$.

Consider an arbitrary side $l=[(a_1,b_1),(a_2,b_2)]$ of the convex polygon $U$ such that $b_1<b_2$. Let $p,q\in\partial B_1(0)$ and let the angle between $p$ and $ q$ be $\theta$. Assume that $\theta>0$ is smaller than the angles of all normal cones at the vertices of the sets $U(s)$ for all $s\in [0,t]$. Hence the sets of supporting elements $U(s)(p)$ and $U(s)(q)$ are attained at vertices of $Q(\tau )l$, provided that $\tau\in [s,s+\Delta s]$, where $Q(s)l\perp q$, $Q(s+\Delta s)l\perp p$. This means that the angle between the vectors $l_1=(a_2-a_1+s(b_2-b_1), b_2-b_1)$ and $l_2=(a_2-a_1+(s+\Delta s)(b_2-b_1), b_2-b_1)$ is equal to the angle between $p$ and $q$, which is $\theta$.

Let

$$ \begin{equation*} d=\| l_1\|, \qquad \angle (l_1l_2O)=\alpha\in (\alpha_0,\pi-\alpha_0), \qquad \sigma=b_2-b_1>0, \qquad \angle (l_1Ol_2)=\theta. \end{equation*} \notag $$
Note that $\| l_1-l_2\|=\sigma\Delta s$.

From the definition of $\alpha_0$ we have ${\sigma}/{d}\geqslant \sin \alpha_0$ or, which is the same, ${{d}/{\sigma}\leqslant {1}/{\sin \alpha_0}}$. Applying the law of sines to the triangle $Ol_1l_2$ we obtain

$$ \begin{equation} \frac{\sin\theta}{\sigma\Delta s}=\frac{\sin\alpha}{d}\geqslant \frac{\sin\alpha_0}{d}\quad\text{and} \quad \Delta s\leqslant \frac{d}{\sigma}\,\frac{\sin\theta}{\sin\alpha_0}\leqslant \frac{\sin\theta}{\sin^2\alpha_0}. \end{equation} \tag{4.4} $$

Setting $C=\max\{ 1, e^{\lambda t}\}$ and $M=\max_{s\in [0,t]}\| Q(s)\|$ we have

$$ \begin{equation} \| M(t)(p)-M(t)(q)\|\leqslant \int_{0}^{t}e^{\lambda s}\| U(s)(p)-U(s)(q)\|\, ds\leqslant \sum_i Cd_i\Delta s_i; \end{equation} \tag{4.5} $$
the last summation is over all sides $l_i$ of $U$, $d_i=\max_{s\in [s_i,s_i+\Delta s_i]}\| Q(s)l_i\|$, $\| l_i\|$ is the length of $l_i$ (for $s\in [s_i,s_i+\Delta s_i]$ the sets of supporting elements $U(s)(p)$ and $U(s)(q)$ are attained at vertices of $Q(s)l_i$). In view of (4.4), from (4.5) we have
$$ \begin{equation*} \sum_i Cd_i\Delta s_i \leqslant \frac{CM}{\sin^2\alpha_0}\sin\theta\sum_i \| l_i\|=\frac{CM}{\sin^2\alpha_0}L_U\sin\theta. \end{equation*} \notag $$
Hence
$$ \begin{equation*} \| M(t)(p)-M(t)(q)\|\leqslant \frac{CM}{\sin^2\alpha_0}L_U\cdot 2\sin\frac{\theta}{2}=\frac{CM}{\sin^2\alpha_0}L_U\cdot\| p-q\|. \end{equation*} \notag $$
By Lemma 1, the set $M(t)$ is strongly convex with $R=({CM}/{(\sin^2\alpha_0)})L_U$.

2. $U$ is an arbitrary convex compact set. Let $\delta>0$ be the number from (4.3). We set

$$ \begin{equation*} \mathcal K_1=\bigl\{ p\in B_{\delta}(-e_2)\mid \|p\|=1\bigr\}, \qquad \mathcal K_2=\bigl\{ p\in B_{\delta}(e_2)\mid \|p\|=1\bigr\} \end{equation*} \notag $$
and $\mathcal K=\mathcal K_1\cup \mathcal K_2$. The images $Q(s)B_{R_0}(U(p)-R_0p)$ are ellipses, which are strongly convex sets (see Theorem 4.3.3 in [15]), hence there exists an $R_1>0$ such that, for any $p\in\mathcal K$ and $s\in [0,t]$,
$$ \begin{equation*} Q(s)B_{R_0}\bigl(U(p)-R_0p\bigr)\subset B_{R_1}\bigl(Q(s)U(p)-R_1\xi\bigr), \end{equation*} \notag $$
where
$$ \begin{equation*} \xi\in N\bigl(Q(s)U, Q(s)U(p)\bigr), \qquad \|\xi\|=1. \end{equation*} \notag $$

We find the dependence $\xi=\xi (s,p)$. Let $e$ be the direction vector of the support line $\mathcal L$ with normal vector $p\in\mathcal K$ that passes through the point $U(p)$. Then the support line $Q(s)\mathcal L$ is the line with direction vector $Q(s)e$ that passes through the point $Q(s)U(p)$. It is easily seen that the unit normal vector to this line is

$$ \begin{equation} \xi=\xi (s,p)=\frac{(Q^\top(s))^{-1}p}{\| (Q^\top(s))^{-1}p\|}. \end{equation} \tag{4.6} $$
Since $Q(s)U(\pm e_2)=U(s)(\pm e_2)$ for all $s\in [0,t]$ and the transformation $Q(s)$ is nondegenerate and continuous for $s\in [0,t]$, it follows from (4.6) that for all $p\in \mathcal K$ and $s\in [0,t]$,
$$ \begin{equation} U(s)\cap \bigl( Q(s)B_{\gamma}(U(\pm e_2))\bigr)\subset B_{R_1}(0)+U(s)(\xi(s,p))-R_1\xi(s,p). \end{equation} \tag{4.7} $$

For $s\in [0,t]$ and $i=1,2$, we set

$$ \begin{equation*} \mathcal M_i(s)=\operatorname{cone}\{ \xi (s,p)\mid p\in \mathcal K_i\}, \qquad \mathcal M_i=\bigcap_{s\in [0,t]}\mathcal M_i(s)\quad\text{and} \quad \mathcal M=\mathcal M_1 \cup \mathcal M_2. \end{equation*} \notag $$
Since $\pm e_2$ lie in the interior of $\mathcal M_{1,2}(s)$ for all $s$, from the continuity of $\xi (s,p)$ and compactness with respect to $s$ we have $-e_2\in \operatorname{int} \mathcal M_1$ and $e_2\in \operatorname{int} \mathcal M_2$. Therefore, there exists a $\mu>0$ such that, for any unit vector $q\notin \mathcal M_1\cup \mathcal M_2$ and any unit vector $e$ with $(q,e)=0$, the sine of the angle between $e$ and the $Ox$-axis is $\geqslant \mu$. In other words, the angle between each line normal to such a $q$ and the $Ox$-axis is $\geqslant \alpha_0=\arcsin\mu$.

We fix $s\in [0,t]$ and unit vectors $\xi$ and $ \eta$ in $\mathcal M_i$ for $i=1$ or $i=2$. From (4.7) we have

$$ \begin{equation*} \| U(s)(\xi)-U(s)(\eta)-R_1\xi\|\leqslant R_1, \end{equation*} \notag $$
and therefore $\| U(s)(\xi)-U(s)(\eta)\|^2\leqslant 2R_1(\xi, U(s)(\xi)-U(s)(\eta))$. Interchanging $\xi$ and $\eta$ we find that
$$ \begin{equation*} \| U(s)(\xi)-U(s)(\eta)\|^2\leqslant 2R_1\bigl(\eta, U(s)(\eta)-U(s)(\xi)\bigr). \end{equation*} \notag $$
Summing the last two inequalities we obtain
$$ \begin{equation} \| U(s)(\xi)-U(s)(\eta)\|\leqslant R_1\| \xi-\eta\|. \end{equation} \tag{4.8} $$

Consider the following approximation $\widetilde U$ of the set $U$:

$$ \begin{equation*} \partial\widetilde U=\Gamma_1\cup \Gamma_2\cup\Gamma, \end{equation*} \notag $$
where
$$ \begin{equation*} \Gamma_1=\bigl\{ U(p)\mid p\in \mathcal M_1, \|p\|=1\bigr\}\quad\text{and} \quad \Gamma_2=\bigl\{ U(p)\mid p\in \mathcal M_2, \|p\|=1\bigr\} \end{equation*} \notag $$
and $\Gamma$ is a polygonal line (with two connected components in general) that approximates $\partial U\setminus (\Gamma_1\cup \Gamma_2)$.

Consider an arbitrary edge $l=[(a_1,b_1),(a_2,b_2)]\subset \Gamma\subset \partial \widetilde U$, $b_1<b_2$. We set

$$ \begin{equation*} \ell_s=Q(s)l=[(a_1+sb_1,b_1),(a_2+sb_2,b_2)] \quad\text{and}\quad \sigma=b_2-b_1>0. \end{equation*} \notag $$
The angle between $\ell_s$ and the $Ox$-axis is $\geqslant \alpha_0=\arcsin \mu>0$. Let $d=\| \ell_s\|$. Hence, arguing as in the first case, we have ${\sigma}/{d}\geqslant\sin\alpha_0$.

Let $\theta$ be smaller than the opening of the normal cone at any vertex of $Q(s)\Gamma$ for all $s\in [0,t]$. If the normal cone to $U(s)$ has an empty interior at the common points (juncture) of $Q(s)\Gamma$ and $Q(s)\Gamma_i$, then we do not take this cone into account in the definition of $\theta$.

We also fix unit vectors $p$ and $q$ with angle $\leqslant \theta$ between them, and define $C=\max\{ 1, e^{\lambda t}\}$ and $M=\max_{s\in [0,t]}\| Q(s)\|$.

We have

$$ \begin{equation*} \begin{aligned} \, &\| \widetilde M(t)(p)-\widetilde M(t)(q)\|\leqslant \int_{0}^{t}e^{\lambda s} \| \widetilde U(s)(p)-\widetilde U(s)(q)\|\, ds \\ &\quad\leqslant 2\biggl( CtR_1\| p-q\|+\sum_i \frac{Cd_i^2}{\sigma_i\sin\alpha_0}\sin\theta\biggr)\leqslant 2\biggl( CtR_1+\sum_i \frac{CM}{\sin^2\alpha_0}\| l_i\|\biggr)\| p-q\| \\ &\quad=2\biggl( CtR_1+\frac{CM}{\sin^2\alpha_0}L_{\widetilde U}\biggr)\| p-q\|. \end{aligned} \end{equation*} \notag $$
This can be explained as follows. The term $CtR_1\| p-q\|$ corresponds to the case when $p,q\in\mathcal M_i$, $i=1$ or $2$; the sum $\sum_i ({Cd_i^2}/({\sigma_i\sin\alpha_0}))\sin\theta$ in the notation of part 1 corresponds to the case when both $\widetilde U(s)(p)$ and $\widetilde U(s)(q)$ lie on the polygonal line $Q(s)\Gamma$. The coefficient 2 appears when estimating the integral for $p\in \mathcal M_i$, $i=1$ or $i=2$, and $\widetilde U(s)(q)\in Q(s)\Gamma$.

So we have

$$ \begin{equation*} R_{\widetilde M(t)}=2\biggl( CtR_1+\frac{CM}{\sin^2\alpha_0}L_{\widetilde U}\biggr). \end{equation*} \notag $$
Letting $\widetilde U\to U$ we obtain
$$ \begin{equation*} R_{M(t)}=2\biggl( CtR_1+ \frac{CM}{\sin^2\alpha_0}L_{U}\biggr)=2\biggl( CtR_1+\frac{CM}{\mu^2}L_{U}\biggr). \end{equation*} \notag $$

Theorem 2 is proved.

Arguing as in the proof of Theorem 2 we obtain the following.

Theorem 3. Assume that $n=2$ and the matrix $A$ in system (2.2) has real distinct eigenvalues. Also assume that there exist $ R_0>0$, $ \gamma>0$ and $\delta>0$ such that

$$ \begin{equation} \forall\, p\in B_{\delta}(v)\cap \partial B_1(0) \quad U\cap B_{\gamma}(U(v))\subset B_{R_0}(U(p)-R_0p), \end{equation} \tag{4.9} $$
where $v=\pm e_1$ or $\pm e_2$. Then the set $M(t)$ is strongly convex for any $t>0$.

§ 5. Reachable sets in dimension greater than 2. The direct sum hull

Justification of results on the strong convexity of the reachable set for system (2.2) in the case when $n\geqslant 3$ involves considerable technical difficulties. For example, for the system with matrix

$$ \begin{equation*} A=\begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1\end{pmatrix} \end{equation*} \notag $$
and $U=\operatorname{co}\{\pm e_3\}$, it is fairly difficult to estimate the strong convexity radius of the reachable set $M(t)$ (a numerical experiment for $t=1$ gives a radius $R> 100$). Let us look at variants of (2.2) that render the analysis of the reachable set simpler.
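Such numerical experiments rest on the formula $s(p,M(t))=\int_0^t s(e^{A^\top s}p,U)\,ds$ for the support function of the reachable set; for $U=\operatorname{co}\{\pm e_3\}$ one has $s(q,U)=|q_3|$, and $e^{As}=e^{-s}\bigl(I+sN+\frac{s^2}{2}N^2\bigr)$, where $N$ is the nilpotent part of $A$. The following sketch (not part of the original argument; the quadrature step is an arbitrary choice) evaluates this support function:

```python
import numpy as np

# Nilpotent part of A = -I + N from the example (single Jordan block, eigenvalue -1).
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

def expAT(s):
    # e^{As} = e^{-s} (I + sN + s^2 N^2 / 2) because N^3 = 0; we need its transpose.
    return (np.exp(-s) * (np.eye(3) + s * N + 0.5 * s * s * (N @ N))).T

def support_M(p, t, n=2000):
    # s(p, M(t)) = int_0^t s(e^{A^T s} p, U) ds with s(q, U) = |q_3|
    # for U = co{±e_3}; composite trapezoidal rule on a uniform grid.
    ss = np.linspace(0.0, t, n + 1)
    f = np.array([abs((expAT(s) @ p)[2]) for s in ss])
    h = t / n
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Since e^{A^T s} e_3 = e^{-s} e_3, we expect s(e_3, M(1)) = 1 - e^{-1}.
val = support_M(np.array([0.0, 0.0, 1.0]), 1.0)
```

Since $U$ is symmetric, $s(p,M(t))=s(-p,M(t))$, which serves as a simple consistency check of the computation.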

Consider the matrix

$$ \begin{equation} A=\begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}\in\mathbb R^{n\times n}, \end{equation} \tag{5.1} $$
where $A_1\in\mathbb R^{m\times m}$ and $A_2\in \mathbb R^{(n-m)\times (n-m)}$. We set $E_1=\mathbb R^m$ and $E_2=\mathbb R^{n-m}$, $\mathbb R^n=E_1\,{\oplus}\, E_2$. Note that $E_1$ and $E_2$ are orthogonal subspaces. Assume that the set $U$ in (2.2) has the form
$$ \begin{equation*} U=U_1\oplus U_2, \qquad U_i\subset E_i, \quad i=1,2, \end{equation*} \notag $$
that is, $U$ is a direct sum of subsets of the $E_i$. Then
$$ \begin{equation*} e^{As}=\begin{pmatrix} e^{A_1s} & 0 \\ 0 & e^{A_2s} \end{pmatrix} \end{equation*} \notag $$
and
$$ \begin{equation} \begin{aligned} \, M(t) &=\int_{0}^{t}e^{As}U\, ds=\int_{0}^{t} \begin{pmatrix} e^{A_1s} & 0 \\ 0 & e^{A_2s} \end{pmatrix} (U_1\oplus U_2)\, ds=\int_{0}^{t} (e^{A_1s}U_1)\oplus (e^{A_2s}U_2)\, ds \notag\\ &=\biggl(\int_{0}^{t} e^{A_1s}U_1\, ds\biggr) \oplus \biggl(\int_{0}^{t} e^{A_2s}U_2\, ds\biggr). \end{aligned} \end{equation} \tag{5.2} $$

Definition. Let $\mathbb R^n=E_1\oplus E_2$ ($E_1=\mathbb R^m$, $E_2=\mathbb R^{n-m}$) and $M\subset \mathbb R^n$. We denote the orthogonal projection onto $E_i$ by $P_{E_i}$. The direct sum hull of $M$ relative to $E_1$ and $E_2$ is defined by

$$ \begin{equation*} \Sigma(M)=P_{E_1}M\oplus P_{E_2}M. \end{equation*} \notag $$

For example, the direct sum hull of $M=B_1(0)\subset \mathbb R^4$ relative to the subspaces $E_1=Ox_1x_2x_3$ and $E_2=Ox_4$ is as follows:

$$ \begin{equation*} \Sigma (B_1(0))=\{ x\in\mathbb R^4\mid x_1^2+x_2^2+x_3^2\leqslant 1,\, x_4\in [-1,1]\}. \end{equation*} \notag $$
For the same set $M$ and the subspaces $E_1=Ox_1x_2$ and $E_2=Ox_3x_4$ we have
$$ \begin{equation*} \Sigma (B_1(0))=\{ x\in\mathbb R^4\mid x_1^2+x_2^2\leqslant 1,\, x_3^2+x_4^2\leqslant 1\}. \end{equation*} \notag $$
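Both hulls are defined by the explicit inequalities above, so membership is trivial to test; the sketch below (sample point chosen arbitrarily) illustrates that each hull strictly contains the original ball:

```python
import numpy as np

def in_ball(x):
    # x in B_1(0) ⊂ R^4
    return np.dot(x, x) <= 1.0

def in_sigma_31(x):
    # Σ(B_1(0)) for E1 = Ox1x2x3, E2 = Ox4
    return x[0]**2 + x[1]**2 + x[2]**2 <= 1.0 and abs(x[3]) <= 1.0

def in_sigma_22(x):
    # Σ(B_1(0)) for E1 = Ox1x2, E2 = Ox3x4
    return x[0]**2 + x[1]**2 <= 1.0 and x[2]**2 + x[3]**2 <= 1.0

# A point in both direct sum hulls but outside the ball itself.
x = np.array([0.9, 0.0, 0.0, 0.9])
```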

If $U$ in (2.2) is a direct sum of this form and the matrix $A$ is of the form (5.1), then $M(t)$ is a direct sum of sets of smaller dimension; it can be shown that $M(t)$ is strongly convex in the cases $m=1$ and $m=2$, and also, by induction, in some other cases.

Some set-theoretic operations with the reachable set (5.2) are invariant under replacing one of the sets involved by its direct sum hull. Below we consider one example.

Lemma 5. Let the reachable set for system (2.2) with matrix $A$ defined in (5.1) have the form (5.2), and let $U=U_1\oplus U_2$, where $U_i\subset E_i$, $i=1,2$. Next, let $M_0\subset\mathbb R^n$ be a compact set. Then

$$ \begin{equation*} M(t)\tfrac{\, *\,}{}\, M_0=M(t)\tfrac{\, *\,}{}\, \Sigma (M_0), \end{equation*} \notag $$
where $\Sigma (M_0)$ is the direct sum hull of the set $M_0$ relative to the subspaces $E_1$ and $E_2$.

Proof. We put
$$ \begin{equation*} M_i(t)=\int_{0}^{t} e^{A_is}U_i\, ds, \qquad i=1,2. \end{equation*} \notag $$

Since $M_0\,{\subset}\,\Sigma (M_0)$, we have $M(t)\frac{\, *\,}{}\, M_0 \supset M(t)\frac{\, *\,}{}\, \Sigma (M_0)$. As a result, if the set $M(t)\frac{\, *\,}{}\, M_0$ is empty, then so is $M(t)\frac{\, *\,}{}\, \Sigma (M_0)$.

Let $x\,{\in}\, M(t)\frac{\, *\,}{}\, M_0$, that is, $x\,{+}\,M_0\,{\subset}\, M(t)$. From the last inclusion we obtain

$$ \begin{equation*} P_{E_i}x+P_{E_i}M_0\subset P_{E_i}M(t)=M_i(t), \qquad i=1,2. \end{equation*} \notag $$
Summing these inclusions we find that
$$ \begin{equation*} P_{E_1}x\oplus P_{E_2}x+P_{E_1}M_0\oplus P_{E_2}M_0\subset M_1(t)\oplus M_2(t)=M(t) \end{equation*} \notag $$
and
$$ \begin{equation*} x+P_{E_1}M_0\oplus P_{E_2}M_0=x+\Sigma (M_0)\subset M(t), \quad \text{hence } x\in M(t)\tfrac{\, *\,}{}\, \Sigma (M_0). \end{equation*} \notag $$

The lemma is proved.

§ 6. Applications

An important practical advantage of the strong convexity of $M(t)$ is the possibility of polyhedral approximation of $M(t)$ with quadratic accuracy with respect to the step of the grid (this property is discussed in § 6.1 and employed in §§ 6.2 and 6.3).

In Example 2 we show how the strong convexity property of the reachable set proves to be instrumental in dealing with some problems without employing the machinery of polyhedral approximation. The strong convexity of the reachable set and some other properties allow one to establish a linear convergence rate of the gradient projection method for solving minimization problems for the support function of a certain subset of the unit sphere. This, in turn, makes it possible to find out whether or not the intersection $M(t)\cap M$, where $M$ is some convex compact set, is empty.

6.1. Second-order polyhedral approximation

It is known (see [12] and [15]) that the error of polyhedral approximations of a strongly convex set of radius $R$ is of the second order with respect to the step of a grid of unit vectors. Let $\mathbb G$ be a grid of unit vectors with step $\Delta\in (0,1/2)$ (see Definition 2.6.1 in [15] and Definition 1 in [12]). Consider an outer polyhedral approximation for a reachable set $M(t)$, that is, consider the set

$$ \begin{equation*} \widehat M(t)=\bigl\{ x\in\mathbb R^n\mid (p,x)\leqslant s(p,M(t))\ \forall\, p\in\mathbb G\bigr\}. \end{equation*} \notag $$
If $M(t)$ is a strongly convex set of radius $R$ in $\mathbb R^n$, then (see [12] and [15])
$$ \begin{equation*} h(M(t), \widehat M(t))\leqslant R\frac{\Delta^2}{1-\Delta^2/2}. \end{equation*} \notag $$
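This bound is easy to check numerically in the model case of a disk $M(t)=B_R(0)\subset\mathbb R^2$ with a uniform angular grid: the outer polytope is a regular polygon whose vertices lie at distance $R/\cos(\theta/2)$ from the origin ($\theta$ the angular step), so the Hausdorff distance is $R(1/\cos(\theta/2)-1)$. A sketch (the radius and grid size are arbitrary choices):

```python
import numpy as np

R = 2.0                  # strong convexity radius of the model set B_R(0)
n = 40                   # number of grid directions
theta = 2 * np.pi / n    # angular step between adjacent unit vectors
ps = np.stack([np.array([np.cos(k * theta), np.sin(k * theta)])
               for k in range(n)])

# Step of the grid of unit vectors: distance between adjacent directions.
Delta = np.linalg.norm(ps[1] - ps[0])        # equals 2 sin(theta / 2)

# Vertices of the outer polytope {x : (p, x) <= R for all p in the grid}:
# adjacent supporting lines (p_k, x) = R meet at distance R / cos(theta/2),
# so the Hausdorff distance to the disk is attained at a vertex.
hausdorff = R / np.cos(theta / 2) - R

# Quadratic bound from the text.
bound = R * Delta**2 / (1 - Delta**2 / 2)
```

Here `hausdorff` is of order $R\Delta^2/8$, comfortably within the stated bound.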

Consider the $\mathbb R^n$-setting, where $n\geqslant 3$. Let $M(t)$ be a direct sum $M_1(t)\oplus M_2(t)$, as in (5.2). If $M_i(t)$ is a strongly convex set of radius $R_i>0$, $i=1,2$, then one can easily find a polyhedral approximation of $M(t)$ of the second order with respect to the step $\Delta$. Note that the direct sum of strongly convex sets is not a strongly convex set.

To construct the required polyhedral approximation, consider grids $\mathbb G_i\subset E_i$ of unit vectors with step $\Delta$ in the spaces $E_i$, $i=1,2$, $E_1\oplus E_2=\mathbb R^n$. Let

$$ \begin{equation*} \widehat M_i(t)=\{ x\in E_i\mid (p,x)\leqslant s(p,M_i(t))\ \forall\, p\in\mathbb G_i\}, \end{equation*} \notag $$
and set $\widehat M(t)=\widehat M_1(t)\oplus \widehat M_2(t)$.

Proposition. Let $\varepsilon=\varepsilon (\Delta)=\frac{\Delta^2}{1-\Delta^2/2}$ and let $M_i(t)$ be strongly convex sets of radius $R_i>0$, $i=1,2$. Then, under the above assumptions,

$$ \begin{equation*} h(M(t),\widehat M(t))\leqslant \varepsilon (\Delta)\sqrt{R_1^2+R_2^2}. \end{equation*} \notag $$

Proof. Let $x\in \widehat M(t)$, that is, $x=x_1\oplus x_2$, $x_i\in \widehat M_i(t)$, $i=1,2$. By Remark 1 in [12],
$$ \begin{equation*} h(M_i(t),\widehat M_i(t))\leqslant R_i\varepsilon(\Delta), \qquad i=1,2. \end{equation*} \notag $$
This means that there exists a point $a_i\in M_i(t)$ such that $\| x_i-a_i\|\leqslant R_i\varepsilon$. We have $x_i-a_i\in E_i$, $i=1,2$, hence the vectors $x_1-a_1$ and $x_2-a_2$ are orthogonal. Now by Pythagoras’ theorem,
$$ \begin{equation*} \begin{aligned} \, \varrho^2_{M(t)}(x_1\oplus x_2) &\leqslant \|(x_1\oplus x_2) - (a_1\oplus a_2)\|^2 \\ &=\| x_1-a_1\|^2+\| x_2-a_2\|^2\leqslant\varepsilon^2(\Delta)(R_1^2+R_2^2), \end{aligned} \end{equation*} \notag $$
proving the proposition.

An undoubted advantage of the direct sum representation of $M(t)$ is that the total number of vectors in the grids $\mathbb G_1\subset E_1=\mathbb R^m$ and $\mathbb G_2\subset E_2=\mathbb R^{n-m}$ with step $\Delta$ is much smaller than the number of vectors in a grid with the same step $\Delta$ in $\mathbb R^n$.

6.2. Application to linear differential games

Consider the following linear differential game:

find an initial condition $x(0)$ from which a phase vector $x$ can be transferred in time $T$ to a terminal set $M$, that is, $x(T)\in M\subset\mathbb R^n$, provided that

$$ \begin{equation*} x'=Ax+u+v, \qquad t\in [0,T], \quad u\in U, \quad v\in V, \quad A\in \mathbb R^{n\times n}, \end{equation*} \notag $$
where $M$, $U$ and $V$ are convex compact sets.

In some algorithms one evaluates an alternating Pontryagin integral (see [7]) and its approximation by alternating sums. The change of variables $z(t)=e^{A(T-t)}x(t)$ reduces the problem to the form

$$ \begin{equation*} z'=e^{A(T-t)}u+e^{A(T-t)}v, \qquad u\in U, \quad v\in V, \quad z(T)\in M. \end{equation*} \notag $$
Recall (see [11], formula (12)) that one should find the sets
$$ \begin{equation*} M_0=M, \qquad M_{k+1}=(M_k\tfrac{\, *\,}{}\, \mathcal V_k)+(-\mathcal U_k), \quad k=0,1,\dots,K-1 \end{equation*} \notag $$
(a pursuit game), where
$$ \begin{equation*} \mathcal U_k=\int_{t_k}^{t_{k+1}} e^{A(T-t)}U\, dt\quad\text{and} \quad \mathcal V_k=\int_{t_k}^{t_{k+1}} e^{A(T-t)}V\, dt, \end{equation*} \notag $$
and $0=t_0<t_1<\dots<t_K=T$ is a partition of the interval $[0,T]$. (For details of the algorithms and definitions of strategies of the players (the functions $u$ and $v$), we refer the reader to [7], [11] and [8].) As a result, one constructs a set of initial states $M_K$ from which the first player (with control $u$) will surely reach the terminal set $M$ (for an arbitrary strategy of the second player, with control $v$).
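In one dimension (intervals, scalar $A=a$) the recursion for the alternating sums can be written out explicitly. The toy sketch below only illustrates the set operations involved (all numerical values are hypothetical); it is not an implementation of the algorithms of [7] or [11]:

```python
import numpy as np

def geom_diff(M, V):
    # Geometric (Minkowski-Pontryagin) difference of intervals:
    # [a, b] ⊖ [c, d] = [a - c, b - d] if nonempty, else None.
    a, b = M; c, d = V
    return (a - c, b - d) if a - c <= b - d else None

def msum(M, N):
    # Minkowski sum of intervals.
    return (M[0] + N[0], M[1] + N[1])

def neg(U):
    return (-U[1], -U[0])

def integ(S, a, t0, t1, T):
    # int_{t0}^{t1} e^{a(T-t)} S dt for an interval S: the positive
    # scalar weight w scales the interval endpoint-wise.
    w = (np.exp(a * (T - t0)) - np.exp(a * (T - t1))) / a
    return (S[0] * w, S[1] * w)

a, T, K = -1.0, 1.0, 10
U, V, M = (-2.0, 2.0), (-0.5, 0.5), (-1.0, 1.0)   # hypothetical data
ts = np.linspace(0.0, T, K + 1)

Mk = M
for k in range(K):
    Uk = integ(U, a, ts[k], ts[k + 1], T)
    Vk = integ(V, a, ts[k], ts[k + 1], T)
    Mk = geom_diff(Mk, Vk)
    assert Mk is not None, "pursuit fails on this step"
    Mk = msum(Mk, neg(Uk))
```

Each step shrinks the width of $M_k$ by the width of $\mathcal V_k$ and then enlarges it by the width of $\mathcal U_k$, so the recursion is well defined here because $U$ is wider than $V$.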

In [11], a quadratic estimate (with respect to the space, that is, with respect to the step of the grid on which sets are approximated) for approximation by alternating Pontryagin sums was obtained. It is essential in this result that the sets $M$ and $U$ be strongly convex: this guarantees the strong convexity of all sets $M_k$ and therefore a quadratic rate of approximation. In some problems one can relax the requirements of the strong convexity of $U$ and assume instead that the integrals defining the $\mathcal U_k$ are strongly convex. For example, according to the corollary in § 4, this strong convexity is guaranteed for any compact set $U$ in $\mathbb R^2$, provided that the roots of the characteristic polynomial of the matrix $A$ are complex conjugate and $\operatorname{Im}\sigma (A)\ne 0$, where $\sigma (A)$ is the spectrum of the matrix $A$.

6.3. Some optimization problems

Consider the following problem:

given a system (2.2) and a convex compact set $M_0\subset \mathbb R^n$, solve the problem

$$ \begin{equation} t\to\min_{t\geqslant 0}, \qquad\exists\, x\in\mathbb R^n \quad x+M(t)\supset M_0. \end{equation} \tag{6.1} $$
In this problem we seek an initial condition $x_0=e^{-At} x$ and a time $t$ (if they exist) such that, by starting system (2.2) with initial condition $x(0)=x_0$, we can reach any point of $M_0$ within time $t$, and this time is the minimum possible.

Problem (6.1) is related to the Chebyshev centre problem for a convex compact set $M_0\subset\mathbb R^n$, which is formalized as follows:

$$ \begin{equation*} R\to\min, \qquad \exists\, x\in\mathbb R^n \quad x+B_R(0)\supset M_0. \end{equation*} \notag $$
This problem has a unique solution $(x,R)\in\mathbb R^n\times \mathbb R$ in any Euclidean space. However, in general, the Chebyshev centre problem is NP-hard. The available algorithms for finding Chebyshev centres and generalizations thereof provide no meaningful estimates for the distance between the point components of the exact and approximate solutions for an arbitrary convex compact set $M_0$. In [22] the present author proposed an algorithm for finding a Chebyshev centre in a space of small dimension and estimated the error of this algorithm. It should be noted that in these results it is essential that both the ball $B_R(0)$ and the set considered in the problem be strongly convex. Similar methods also apply to problems of type (6.1), provided that the set $M(t)$ is strongly convex.

Now we give an example when the strong convexity of the reachable set guarantees the solvability of a certain optimization problem.

Example 2. Let the reachable sets $M(t) \subset \mathbb R^n$ of system (2.2) be strongly convex of radius $r>0$ for $t\in [0,T]$, let $M\subset\mathbb R^n$ be a strongly convex compact set of radius $r_M$, and let $0\notin M$. Assume that there exists $R_0>0$ such that $M=X+B_{2R_0}(0)$, where $X$ is a convex compact set for which we know $s(p,X)$ and $X(p)$ for all $\| p\|=1$. Also let $0\in U$. This ensures that the reachable sets form an increasing nested family: $M(t_1)\subset M(t_2)$ for $0\leqslant t_1<t_2$.

Assume that $M(T)\cap M\ne \varnothing$. The problem is: given $t\in (0,T)$, determine whether or not $M(t)\cap M\ne\varnothing$.

Setting

$$ \begin{equation*} R=r+r_M, \qquad M_0=M\tfrac{\, *\,}{}\, B_{R_0}(0)=X+B_{R_0}(0)\quad\text{and}\quad Z(t)=M(t)+(-M_0), \end{equation*} \notag $$
we can reformulate this problem as follows:

given a time $t\in (0,T)$, find the distance from $0$ to $Z(t)$ and compare this distance with $R_0$.

In the language of support functions, this is formalized as follows: find a solution of the problem

$$ \begin{equation*} \min_{\| p\|=1}f(p)=J, \end{equation*} \notag $$
where $f(p)=s(p,Z(t))=s(p,M(t))+s(p,-M_0)$. The function $f$ also depends on $t$, but this dependence is suppressed in our notation. Provided that $0\notin Z(t)$, the solution of this problem is clearly attained at the vector $p_0=-P_{Z(t)}0/\| P_{Z(t)}0\|$. By the definition of $f$, if $J<-R_0$, then the intersection $M(t)\cap M$ is empty. If ${J\geqslant -R_0}$, then $M(t)\cap M\ne\varnothing$.

This leads us to the problem of minimization of the support function $f(p)$ on the unit sphere. Let us show that this problem can be solved using the gradient method for $J< 0$ with an appropriate initial iterate.

Note that for all $t\in [0,T]$ the set $Z(t)$ is strongly convex of radius $R$. Moreover, $f'(p)=Z(t)(p)$ for any nonzero vector $p$.

From the definition of $Z(t)$ we have $Z(t)\frac{\, *\,}{}\, B_{R_0}(0)+B_{R_0}(0)=Z(t)$. Hence, for any unit vectors $p$ and $q$ we have the estimate $\| Z(t)(p)-Z(t)(q)\|\geqslant R_0\| p-q\|$ (see [23], Definition 3.2, Theorem 3.6). Since $Z(t)$ is a strongly convex set of radius $R$, we obtain another estimate:

$$ \begin{equation*} \| Z(t)(p)-Z(t)(q)\|\leqslant R\| p-q\|. \end{equation*} \notag $$

Lemma 6. Let $u,v\in\mathbb R^n\setminus\{ 0\}$. Then

$$ \begin{equation*} \biggl\| \frac{u}{\| u\|}-\frac{v}{\| v\|}\biggr\|\leqslant \frac{1}{\sqrt{\| u\|\,\| v\|}}\| u-v\|. \end{equation*} \notag $$

To see this we have to multiply both sides of this inequality by $\sqrt{\| u\|\,\| v\|}$ and then square the result.

We set $S=\{ p\in\mathbb R^n\mid \| p\|=1\}$. Let $p_0\in S$ be a solution of the problem, that is, $f(p_0)=J < 0$. Then we have $|J|=|f(p_0)|>0$. From the necessary condition for an extremum we obtain $|J|=|(p_0,f'(p_0))|=\| f'(p_0)\|$. We choose a unit vector $p_1$ so as to have $\|p_1-p_0\|\leqslant{R_0^2}/{R^2}$. Fix a positive number $\lambda$ satisfying

$$ \begin{equation*} \lambda\leqslant \min\biggl\{ \frac{R_0^2/R}{R^2+L|J|},\frac1L\biggr\}, \quad\text{where } L\geqslant h(\{0\},Z(T))+1. \end{equation*} \notag $$
We cannot precisely evaluate the minimum in the definition of $\lambda$ (about $J$ we only know that $J< 0$), but we can easily estimate $\lambda$ by bounding $|J|$ above. The roughest estimate is $|J|\leqslant L$.

Consider the iteration process $p_{k+1}=P_S(p_k-\lambda f'(p_k))$.

Assuming that the vector $p_k\,{\in}\, S$ has been constructed and $\| p_k\,{-}\,p_0\|\,{\leqslant}\, R_0^2/R^2$, we set $L_k=\| f'(p_k)\|=\| Z(t)(p_k)\|$. Note that

$$ \begin{equation*} L_k\leqslant h(\{0\},Z(T))\leqslant L-1\quad\text{and} \quad 1-\lambda L_k>1-\lambda L \geqslant 0. \end{equation*} \notag $$
From Lemma 6 and since
$$ \begin{equation*} \| p_0-\lambda f'(p_0)\|=\|p_0\|+\lambda \| f'(p_0)\|=1+\lambda |J|\quad\text{and} \quad \| p_k-\lambda f'(p_k)\|\geqslant 1-\lambda L_k \end{equation*} \notag $$
we have
$$ \begin{equation*} \begin{aligned} \, &\| p_{k+1}-p_0\|^2=\| P_S(p_k-\lambda f'(p_k))-P_S(p_0-\lambda f'(p_0))\|^2 \\ &\quad \stackrel{\text{Lemma 6}}{\leqslant} \frac{1}{(1-\lambda L_k)(1+\lambda |J|)} \| p_k-\lambda f'(p_k)-(p_0-\lambda f'(p_0))\|^2 \\ &\qquad=\frac{\| p_k-p_0\|^2-2\lambda (p_k-p_0,f'(p_k)-f'(p_0))+\lambda^2\| f'(p_k)-f'(p_0)\|^2}{(1-\lambda L_k)(1+\lambda |J|)}. \end{aligned} \end{equation*} \notag $$
Next, from the inequalities
$$ \begin{equation*} (p_k-p_0,f'(p_k)-f'(p_0))\geqslant \frac1R \| f'(p_k)-f'(p_0)\|^2 \end{equation*} \notag $$
(see [15], the proof of Theorem 4.3.2), and since
$$ \begin{equation*} \begin{gathered} \, \| f'(p_k)-f'(p_0)\|=\| Z(t)(p_k)-Z(t)(p_0)\|\geqslant R_0\| p_k-p_0\|, \\ \| f'(p_k)-f'(p_0)\|\leqslant R\| p_k-p_0\| \end{gathered} \end{equation*} \notag $$
we have
$$ \begin{equation*} \| p_{k+1}-p_0\|^2\leqslant \| p_k-p_0\|^2\frac{1-2\lambda R_0^2/R+\lambda^2R^2}{(1-\lambda L_k)(1+\lambda |J|)}. \end{equation*} \notag $$
We claim that, for the chosen value of $\lambda$,
$$ \begin{equation*} q(\lambda)=\frac{1-2\lambda R_0^2/R+\lambda^2R^2}{(1-\lambda L_k)(1+\lambda |J|)}\in (0,\gamma) \quad\text{for some } \gamma<1. \end{equation*} \notag $$
Using the estimate
$$ \begin{equation*} \begin{aligned} \, L_k-|J| &=\|f'(p_k)\|-\| f'(p_0)\|\leqslant \| f'(p_k) - f'(p_0)\|=\|Z(t)(p_k)-Z(t)(p_0) \| \\ &\leqslant R\|p_k-p_0\|\leqslant\frac{R_0^2}{R} \end{aligned} \end{equation*} \notag $$
we obtain
$$ \begin{equation*} \begin{aligned} \, &1-\frac{2\lambda R_0^2}{R}+\lambda^2R^2 - (1-\lambda L_k)(1+\lambda |J|) \\ &\qquad=-\frac{2\lambda R_0^2}{R}+\lambda^2R^2+\lambda (L_k-|J|)+\lambda^2 L_k |J| \leqslant \lambda \biggl(-\frac{R_0^2}{R}+\lambda R^2+\lambda L_k |J| \biggr) \\ &\qquad \leqslant \lambda \biggl(-\frac{R_0^2}{R}+\frac{R_0^2}{R}\,\frac{R^2+|J| (L-1)}{R^2+|J|L} \biggr)=-\frac{R_0^2}{R}\,\frac{\lambda |J|}{R^2+|J|L}<0; \end{aligned} \end{equation*} \notag $$
where we recall that $\sup_k L_k\leqslant L-1$. So the algorithm converges linearly:
$$ \begin{equation*} \| p_{k+1}-p_0\|\leqslant \sqrt{q(\lambda )}\, \| p_{k}-p_0\| \quad\text{for all } k\geqslant 1. \end{equation*} \notag $$
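The iteration $p_{k+1}=P_S(p_k-\lambda f'(p_k))$ is easy to run for a model strongly convex set. The sketch below takes $Z(t)=B_\rho(c)$, for which $f(p)=s(p,Z(t))=(p,c)+\rho$ and $f'(p)=Z(t)(p)=c+\rho p$; the centre, radius, step size and starting point are arbitrary choices:

```python
import numpy as np

c = np.array([3.0, 4.0])   # centre of the model set Z(t) = B_rho(c)
rho = 1.0                  # its radius; here J = rho - ||c|| = -4 < 0
lam = 0.05                 # step size of the gradient projection method

def grad_f(p):
    # f(p) = s(p, B_rho(c)) = (p, c) + rho, so f'(p) = c + rho * p.
    return c + rho * p

p = np.array([1.0, 0.0])   # initial unit vector
for _ in range(2000):
    x = p - lam * grad_f(p)
    p = x / np.linalg.norm(x)   # P_S: projection back onto the unit sphere

J = float(p @ c + rho)     # approximate minimum of f over the sphere
```

In this model the minimizer is $p_0=-c/\|c\|$ with $J=\rho-\|c\|$, and the iterates converge to it linearly, in agreement with the estimate above.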

If we seek the minimum $t>0$ for which $M(t)\cap M\ne\varnothing$, then we should detect the moment when $J=-R_0$.

To conclude, it should be noted that the evaluation of the gradient $f'(p_k)$ at time $t$ reduces to that of the integral

$$ \begin{equation*} \begin{gathered} \, f'(p_k)=Z(t)(p_k)=M(t)(p_k)+(-M_0)(p_k)=\int_{0}^{t}(e^{As}U)(p_k)\, ds+(-M_0)(p_k), \\ (-M_0)(p_k)=(-X)(p_k)-R_0p_k, \end{gathered} \end{equation*} \notag $$
which offers no serious difficulties.

Acknowledgement

The author is grateful to the referees for their useful comments.


Bibliography

1. E. B. Lee and L. Markus, Foundations of optimal control theory, John Wiley & Sons, Inc., New York–London–Sydney, 1967, x+576 pp.
2. M. Blanke, M. Kinnaert, J. Lunze and M. Staroswiecki, Diagnosis and fault-tolerant control, Springer, Berlin, 2003, xvii+571 pp.
3. A. Chutinan and B. H. Krogh, “Computational techniques for hybrid system verification”, IEEE Trans. Automat. Control, 48:1 (2003), 64–75
4. A. B. Singer and P. I. Barton, “Global optimization with nonlinear ordinary differential equations”, J. Global Optim., 34:2 (2006), 159–190
5. R. J. Aumann, “Integrals of set-valued functions”, J. Math. Anal. Appl., 12:1 (1965), 1–12
6. A. A. Lyapunov, “On completely additive vector functions”, Izv. Akad. Nauk SSSR Ser. Mat., 4:6 (1940), 465–478 (Russian)
7. L. S. Pontryagin, “Linear differential games of pursuit”, Mat. Sb., 112(154):3(7) (1980), 307–330; English transl. in Sb. Math., 40:3 (1981), 285–303
8. N. N. Krasovskii, Theory of control of motion: Linear systems, Nauka, Moscow, 1968, 475 pp. (Russian)
9. A. B. Kurzhanski and P. Varaiya, “On verification of controlled hybrid dynamics through ellipsoidal techniques”, Proceedings of the 44th IEEE conference on decision and control (Seville 2005), IEEE, 2005, 4682–4687
10. Inseok Hwang, D. M. Stipanović and C. J. Tomlin, “Polytopic approximations of reachable sets applied to linear dynamic games and a class of nonlinear systems”, Advances in control, communication networks, and transportation systems, Systems Control Found. Appl., Birkhäuser Boston, Boston, MA, 2005, 3–19
11. G. E. Ivanov and E. S. Polovinkin, “On strongly convex differential games”, Differ. Uravn., 31:10 (1995), 1641–1648; English transl. in Differ. Equ., 31:10 (1995), 1603–1612
12. M. V. Balashov, “On polyhedral approximations in an $n$-dimensional space”, Zh. Vychisl. Mat. Mat. Fiz., 56:10 (2016), 1695–1701; English transl. in Comput. Math. Math. Phys., 56:10 (2016), 1679–1685
13. J.-Ph. Vial, “Strong and weak convexity of sets and functions”, Math. Oper. Res., 8:2 (1983), 231–259
14. E. S. Polovinkin, “Strongly convex analysis”, Mat. Sb., 187:2 (1996), 103–130; English transl. in Sb. Math., 187:2 (1996), 259–286
15. E. S. Polovinkin and M. V. Balashov, Elements of convex and strongly convex analysis, 2nd ed., Fizmatlit, Moscow, 2007, 438 pp. (Russian)
16. A. Weber and G. Reißig, “Local characterization of strongly convex sets”, J. Math. Anal. Appl., 400:2 (2013), 743–750
17. A. Weber and G. Reissig, “Classical and strong convexity of sublevel sets and application to attainable sets of nonlinear systems”, SIAM J. Control Optim., 52:5 (2014), 2857–2876
18. H. Frankowska and C. Olech, “$R$-convexity of the integral of set-valued functions”, Contributions to analysis and geometry (Baltimore, MD 1980), Johns Hopkins Univ. Press, Baltimore, MD, 1981, 117–129
19. M. V. Balashov, “Lipschitz stability of extremal problems with a strongly convex set”, J. Convex Anal., 27:1 (2020), 103–116
20. E. S. Polovinkin, Multivalued analysis and differential inclusions, Fizmatlit, Moscow, 2015, 524 pp. (Russian)
21. R. T. Rockafellar, Convex analysis, Princeton Math. Ser., 28, Princeton Univ. Press, Princeton, NJ, 1970, xviii+451 pp.
22. M. V. Balashov, “Chebyshev center and inscribed balls: properties and calculations”, Optim. Lett., 2021, 1–14, publ. online
23. V. V. Goncharov and G. E. Ivanov, “Strong and weak convexity of closed sets in a Hilbert space”, Operations research, engineering, and cyber security, Springer Optim. Appl., 113, Springer, Cham, 2017, 259–297

Citation: M. V. Balashov, “Strong convexity of reachable sets of linear systems”, Mat. Sb., 213:5 (2022), 30–49; Sb. Math., 213:5 (2022), 604–623