Sbornik: Mathematics, 2023, Volume 214, Issue 9, Pages 1212–1240
DOI: https://doi.org/10.4213/sm9821e
(Mi sm9821)
 

Construction of invariant Lyapunov norms for planar switching systems

A. M. Musaeva

Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia
Abstract: We consider the problem of the stability of linear dynamical switching systems. It is known that an irreducible $d$-dimensional system always has an invariant Lyapunov norm (a Barabanov norm), which determines the stability of the system and the rate of growth of its trajectories. We prove that in the case of $d=2$ the invariant norm is a piecewise analytic function and can be constructed explicitly for every finite system of matrices. The method of construction, an algorithm for computing the Lyapunov exponent and a way of deciding the stability of the system are presented. A complete classification of invariant norms for planar systems is derived. A criterion for the uniqueness of an invariant norm of a given system is proved, and some special norms (norms generated by polygons and so on) are investigated.
Bibliography: 30 titles.
Keywords: linear switching system, dynamical system, stability, Lyapunov function, Lyapunov exponent.
Funding: Russian Science Foundation, grant no. 22-11-00129; Foundation for the Development of Theoretical Physics and Mathematics BASIS, grant no. 22-7-1-20-1.
The research in §§ 1–6, 8, 9 and 11 was carried out in Lomonosov Moscow State University and supported by the Russian Science Foundation under grant no. 22-11-00129, https://rscf.ru/en/project/22-11-00129/. The research in §§ 7 and 10 was carried out with the support of the Theoretical Physics and Mathematics Advancement Foundation “BASIS” (grant no. 22-7-1-20-1).
Received: 15.08.2022 and 12.06.2023
Russian version:
Matematicheskii Sbornik, 2023, Volume 214, Number 9, Pages 27–57
DOI: https://doi.org/10.4213/sm9821
Document Type: Article
MSC: 37N35, 39A22, 93D20
Language: English
Original paper language: Russian

§ 1. Introduction

1.1. The stability of switching systems. The problem statement

A linear dynamical switching system is a linear system of ordinary differential equations

$$ \begin{equation} \begin{cases} \dot {\boldsymbol{x}} (t)=A(t)\boldsymbol{x} (t),\quad t \geqslant 0, \\ \boldsymbol{x}(0)=\boldsymbol{x}_0, \end{cases} \end{equation} \tag{1} $$
where the matrix $A(\cdot)$ is a controlled parameter with values in a given compact set of matrices $\mathscr A$, called the control set.

A switching law is an arbitrary measurable function $ A (\cdot)\colon{\mathbb R}_+ \to \mathscr A$. Linear switching systems have many theoretical applications in stability theory, optimal control, and so on. They are also used in applied problems of robotics, electronics, chemistry, engineering, and so on; see, for example, [1]–[4]. Among the central issues are the maximal growth of trajectories and the stability of the system.

A system is asymptotically stable if all of its trajectories tend to zero as $t\to \infty$. We will not consider other types of stability, so we omit the word ‘asymptotically’. The stability of a system can be expressed in terms of the Lyapunov exponent.

The Lyapunov exponent $\sigma (\mathscr A)$ of a system is the infimum of the numbers $\alpha\in \mathbb R$ for which there is a constant $C$ such that every trajectory satisfies

$$ \begin{equation*} \|{\boldsymbol x}(t)\| \leqslant C e^{\alpha t} \|{\boldsymbol x_0}\|, \qquad t \in [0, +\infty). \end{equation*} \notag $$
A system is known to be asymptotically stable (see [5]) if and only if $\sigma (\mathscr A) < 0$.

In the case of a stationary system, when the control set $\mathscr A$ consists of a single matrix $A$, we have $\sigma(\mathscr A)=\max_k\{\operatorname{Re} \lambda_k\}$, the spectral abscissa of $A$, where the $\lambda_k$ are the eigenvalues of $A$. Such a system is stable precisely when $A$ is a Hurwitz matrix, that is, when all of its eigenvalues lie in the open left half-plane.
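In the stationary case the exponent can thus be computed directly from the spectrum; here is a minimal sketch in Python (the matrix below is a hypothetical example):

import numpy as np

def spectral_abscissa(A):
    # sigma(A) for a single matrix: the maximal real part of its eigenvalues
    return max(ev.real for ev in np.linalg.eigvals(A))

A = np.array([[-1.0, 3.0],
              [ 0.0, -2.0]])
print(spectral_abscissa(A))        # -1.0
print(spectral_abscissa(A) < 0)    # True: A is Hurwitz, the system is stable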

Many methods for analysing the stability of a system with an arbitrary control set $\mathscr A$ are presented in the literature. Among them are the common quadratic Lyapunov function (CQLF) method, which gives sufficient stability conditions (see [1] and [2]), piecewise linear and piecewise quadratic Lyapunov function methods (see [6] and [3]), and the extreme polytope method (see [7]). One of the most effective is the method of invariant Lyapunov norms.

An invariant Lyapunov norm has the following characteristic property:

$$ \begin{equation*} \|{\boldsymbol x}(t)\| \leqslant e^{ \sigma t} \|{\boldsymbol x_0}\| \end{equation*} \notag $$
for every trajectory ${\boldsymbol x}(t)$ with starting point $\boldsymbol x_0$, and for every point ${\boldsymbol x_0}$ there is a trajectory $\widetilde {\boldsymbol x}(t)$ such that
$$ \begin{equation*} \widetilde {\boldsymbol x}(0)= {\boldsymbol x_0} \quad\text{and}\quad \|\widetilde {\boldsymbol x}(t) \|= e^{\sigma t} \|{\boldsymbol x_0}\|, \quad\text{where } \sigma=\sigma(\mathscr A). \end{equation*} \notag $$

Barabanov [8] proved that every irreducible system with a convex compact control set $\mathscr A$ has an invariant Lyapunov norm. However, constructing it is an extremely difficult problem (see [7]). A solution of the stability problem for an arbitrary family of matrices automatically solves the problem of calculating the Lyapunov exponent. Indeed, by the shift property (Lemma 1 below) comparing the Lyapunov exponent with any fixed number amounts to deciding the stability of a shifted family, and therefore this exponent can be calculated using the bisection method.

Thus, the problem is to compute the Lyapunov exponent and construct an invariant Lyapunov norm.

1.2. Planar systems. An overview

There are many works devoted to planar systems ($d=2$). In [9] the existence of a common quadratic Lyapunov function was shown for a planar system with two matrices. In [10] an example of a system with a unique nonstrictly convex Lyapunov norm was presented; such systems were described in detail in [11]. In [12] and [13] planar dwell-time systems were investigated, and under certain conditions on the system a sharp stability criterion in terms of eigenvalues was obtained. A detailed analysis of the variational approach to the stability of switching systems and of its relationship with the Lie algebra method was presented in [14], where general planar systems were also investigated.

For systems with two matrices it is possible to find the Lyapunov exponent explicitly (see [15]–[18] and the survey [4]). In [17] it was claimed (without references) that the case of diagonal matrices is well known, and a method for calculating the Lyapunov exponent with arbitrary accuracy was presented for the case of two $2 \times 2$ matrices. This method involved the consideration of many cases. In [16] this method was simplified, but the presentation contained some mistakes, which were subsequently corrected, in particular, in [15]. The main tool in the calculation of the Lyapunov exponent was the search for the ‘worst’ trajectory; Lyapunov functions were not considered in those works. In [17] two-dimensional systems with random switching were studied. For systems with discrete time the Lyapunov exponent is called the joint spectral radius. Note that for discrete $2\times 2$ systems, surprisingly enough, the situation is much more complicated: there is no known way to construct an invariant norm explicitly and no effective method for calculating the joint spectral radius (see [15]–[18]). As a rule, discretization leads to simplification, but here we have the opposite situation: for discrete systems an explicit construction of a Lyapunov norm is not known, even in the two-dimensional case. Moreover, most likely, such an algorithm does not exist: there are negative results in the literature on the algorithmic complexity of this problem (see [22] and [29]).

1.3. The main results

This article presents a method for constructing an invariant Lyapunov function explicitly for an arbitrary two-dimensional switching system with a finite number of matrices. Since stability, as well as the Lyapunov norm, does not change after taking the convex hull of the control set $\mathscr A$, all our results carry over to systems given by an arbitrary polyhedron in the space of $ d \times d $ matrices. Note that the case of an arbitrary finite number of matrices is not a direct generalization of the case of two matrices: it requires new approaches and a special theoretical base. The approach proposed here is based on the use of the invariant Lyapunov norm and its properties (instead of searching for the ‘worst’ trajectory, as in [4], [15]–[17] and [28]).

Then we present an algorithm for constructing an invariant Lyapunov norm (a Barabanov norm) for an arbitrary finite family of $2\times 2$ matrices. The existence of an invariant norm for switching systems allows us to find the precise asymptotics for trajectories of fastest growth and also obtain accurate upper estimates for $\|\boldsymbol{x}(t)\|$, for all $t \in \mathbb{R}_+$.

One of the main results of this work is the classification of invariant norms for all planar systems with finite control set. In particular, all cases in which the norm is unique are found. The nonuniqueness of an invariant norm can occur only if there are at least two ‘dominant’ matrices in the control set (Definition 2). Nevertheless, in this case invariant norms permit complete classification.

Note that, unlike systems with discrete time, where the problem of uniqueness is solved in any dimension (see [30]), for systems with continuous time the problem has not been solved yet; it is solved only for a few narrow classes. As we will see, for some classes of two-dimensional systems the invariant norm is unique.

This work has the following structure: the formulations of problems are presented in § 2; in § 3 the basic definitions are given and auxiliary facts are formulated; in § 4 piecewise analyticity, which is necessary for the algorithm (described in what follows), is proved; § 5 contains auxiliary results; § 6 is devoted to systems of two matrices; systems with dominant matrices are studied separately in the case when invariant norms are defined by polygons (§ 7); in §§ 8–10 the main results are presented (namely, the algorithm constructing the invariant norm and calculating the Lyapunov exponent, theorems on the uniqueness and structure of invariant norms, as well as the classification of invariant norms); in § 11 the numerical implementation of the algorithm described in the previous sections is presented.

The main emphasis is on the explicit construction of invariant norms and their classification, since it is the invariant norm that provides accurate estimates for the asymptotic behaviour of the trajectory of fastest growth.

The paper uses the following notation: vectors are set in bold ($\boldsymbol{x}$, $\boldsymbol{y}$ and so on), $\langle\boldsymbol{x}\rangle=\{ \lambda \boldsymbol{x},\lambda \in \mathbb{R} \} $ is the span of a vector, and all norms $\| \cdot\|$ are Euclidean by default.

§ 2. Problem statement

The following properties of Lyapunov exponents are well known.

Lemma 1. For each $\alpha \in \mathbb{R}$ the shift property is fulfilled:

$$ \begin{equation*} \sigma(\mathscr A+\alpha I)=\sigma(\mathscr A)+\alpha , \end{equation*} \notag $$
where the set $\mathscr A + \alpha I$ is defined as $\{A + \alpha I \mid A \in \mathscr A \}$ and $I$ is the identity matrix.

Indeed, if $\boldsymbol{x}(t)$ is the trajectory generated by a switching law $A(t)$, then replacing the set of matrices $\mathscr A$ by the set $\mathscr A+\alpha I$ we obtain the trajectory $\boldsymbol{y}(t)=e^{\alpha t}\boldsymbol{x}(t)$.

Lemma 2. For each $i$ the majorization property is fulfilled:

$$ \begin{equation*} \sigma(\mathscr A)\geqslant \sigma(A_i), \end{equation*} \notag $$
where $\mathscr A=\{A_1, A_2,\dots,A_N \}$.

Indeed, $\sigma(A_i)$ is an indicator of the growth of the trajectory corresponding to the constant switching law $A(t) \equiv A_i$, and the growth of this trajectory does not exceed that of the trajectory of maximal growth for the set $\mathscr A$.

The maximum growth of the trajectories of the system is $C(t) e^{\sigma t}$, where ${\sigma=\sigma(\mathscr A)}$ is the Lyapunov exponent and $C(t)$ is a function growing slower than the exponential function. For a generic system $C(t)=\mathrm{const}$; however, there are cases when $C(t)$ has power-law growth. Even in the simple case when $C(t)$ is a constant, finding an upper bound for it is a difficult problem. An explicit solution to this problem can be given by means of a so-called invariant Lyapunov norm.

Definition 1. An invariant norm of a family of $ d \times d $ matrices is a norm $\| \cdot \|$ in $\mathbb{R}^d $ such that the inequality

$$ \begin{equation*} \| \boldsymbol{x}(t) \| \leqslant e^{\sigma t} \| \boldsymbol{x} (0) \| \end{equation*} \notag $$
is satisfied for all trajectories $ \boldsymbol{x}(t)$, and for each $\boldsymbol{x}_0 \in \mathbb{R}^d$ there is a trajectory $ \boldsymbol{x}(t)$ for which
$$ \begin{equation*} \boldsymbol{x}(0)= \boldsymbol{x}_0\qquad\text{and}\qquad \| \boldsymbol{x}(t) \|=e^{\sigma t} \| {\boldsymbol{x}_0}\|. \end{equation*} \notag $$

Thus, after the transition to an invariant norm the constant $C$ becomes equal to 1. Therefore, an explicit construction of an invariant Lyapunov norm gives one an accurate estimate for the maximum asymptotic growth of the trajectory. The unit ball $B$ of an invariant norm is called an invariant body of the system $\mathscr A$; its boundary, that is, the unit sphere of the invariant norm, is denoted by $S=\partial B$. Not every system has an invariant norm: the irreducibility of the system is necessary for its existence. A family of matrices $\mathscr A$ is called irreducible if the matrices in this family have no common proper invariant subspace. As Barabanov proved in 1989, every irreducible system has an invariant norm.

Theorem A (see [8]). An irreducible compact family of matrices has an invariant norm.

The invariant body $B$ has the following geometric description. For an arbitrary point $X\in\partial B$ we say that a vector $\boldsymbol{y}$ is directed inside the body if there exists $t_0 > 0$ such that

$$ \begin{equation*} X + t_0 \boldsymbol{y} \in \operatorname{int} B; \end{equation*} \notag $$
a vector $\boldsymbol{y}$ is tangent to the body $B$ at the point $X$ if it is not directed inside the body and
$$ \begin{equation*} \operatorname{dist}(X + t \boldsymbol{y}, B)=o(t) \quad\text{as } t \to +0. \end{equation*} \notag $$

Theorem B (see [8]). A convex body $B\subset\mathbb{R}^d$ symmetrical about the origin is invariant for a family $\mathscr A$ if and only if for any point $\boldsymbol{x}\in\partial B$ at least one of the vectors $A\boldsymbol{x}$, $A \in\mathscr A$, is tangent to $B$, and the rest of them are either tangent or directed inside the body.

Obtaining an explicit expression for an invariant norm is a rare occurrence; typically, it is only known to exist. Nevertheless, we will show that for two-dimensional systems an invariant norm can always be calculated explicitly; all such norms are classified.

Definition 2. The matrix $A \in \mathscr A $ is dominant in the family $\mathscr A$ if $\sigma(A)=\sigma(\mathscr {A})$ and $ \sigma(\mathscr {A})$ is an eigenvalue of $A$.

Given a family of $ 2\times 2 $ matrices $\mathscr A=\{A_1, A_2,\dots, A_N\}$, we consider three problems.

Problem 1. Determine whether the inequality $\sigma(\mathscr A)\leqslant 0$ is true.

Problem 2. Calculate $\sigma(\mathscr A)$ to within a prescribed accuracy.

Problem 3. Construct an invariant norm for the family $\mathscr A$.

Notice that we are dealing with an irreducible family of matrices $\mathscr A$. If $\mathscr A$ is reducible, then an invariant norm may not exist, while Problems 1 and 2 are solved in an elementary way. Indeed, if $\boldsymbol{v}$ is a common eigenvector of the matrices $A_i$, then we choose an arbitrary vector $\boldsymbol{u}$ that is not collinear with $\boldsymbol{v}$. In the basis $\{\boldsymbol{v}, \boldsymbol{u}\}$ the matrices in $\mathscr A$ have an upper triangular form:

$$ \begin{equation*} A_i= \begin{pmatrix} a_i & c_i \\ 0 & b_i \end{pmatrix}. \end{equation*} \notag $$
Then
$$ \begin{equation*} \sigma(\mathscr A)=\max \sigma(A_i)=\max\{a_i,b_i\} . \end{equation*} \notag $$
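This formula is immediate to evaluate; as a small illustration, here is a sketch in Python, under the assumption that the matrices have already been written in the common triangular basis $\{\boldsymbol{v}, \boldsymbol{u}\}$ (the matrices below are hypothetical):

import numpy as np

# Hypothetical reducible family, already in upper triangular form,
# so that sigma(A_i) = max{a_i, b_i} (the diagonal entries).
family = [np.array([[-2.0, 1.0], [0.0, -3.0]]),
          np.array([[-1.0, 5.0], [0.0, -4.0]])]

sigma = max(max(A[0, 0], A[1, 1]) for A in family)
print(sigma)   # -1.0 = max_i sigma(A_i)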

Solving Problem 1 repeatedly by the bisection method, we obtain a solution to Problem 2. Next we shift our family: $\mathscr A\longrightarrow\mathscr A- \sigma I$. The Lyapunov exponent of the new family is zero, and for this family we solve Problem 3.

The construction of an invariant norm can always be reduced to the case when $\sigma=0$. To any family of matrices $\mathscr A$ we assign the invariant body of the normalized system

$$ \begin{equation*} \mathscr A - \sigma I=\{ A_1 - \sigma I, A_2 - \sigma I,\dots, A_N - \sigma I \}, \quad\text{where } \sigma=\sigma(\mathscr A). \end{equation*} \notag $$

We present a method for confirming or refuting the inequality $\sigma(\mathscr A)\leqslant 0$. After that an approximate value of the Lyapunov exponent can be found using the bisection method.

We start by considering a family of two matrices and constructing an invariant norm in this case, then we move on to solving problems for an arbitrary finite family.

§ 3. Basic concepts and definitions

The construction of an invariant norm is carried out geometrically by finding its unit ball in $\mathbb{R}^2$. In § 4 we show that the boundary of this unit ball is a piecewise analytic curve. We need some notation.

1. A flower $(\boldsymbol{a},\{ \boldsymbol{b}_1, \boldsymbol{b}_2,\dots, \boldsymbol{b}_n \})$ consists of the directed line segment $\boldsymbol{a}$ (stem) and the set of vectors $\boldsymbol{b}_1, \boldsymbol{b}_2,\dots, \boldsymbol{b}_n$ (petals) placed at the tip of $\boldsymbol{a}$. We assume that no petal is co-directed with the vector $-\boldsymbol{a}$ (Figure 1).

2. The petals $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ lie on opposite sides of the stem $\boldsymbol{a}$ if the line segment that connects the endpoints of $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ intersects the line $ \langle\boldsymbol{a}\rangle$. Otherwise we say that $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ lie on the same side of the stem $\boldsymbol{a}$ (Figure 2).

We denote the stem $\boldsymbol{a}$ by $\overrightarrow{OX}$, where $O$ is the origin.

3. The central angle between two petals $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ lying on opposite sides of $\boldsymbol{a}$ is that angle between the vectors $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$, drawn from the point $X$, which contains the point $O$ (Figure 3).

4. Consider a flower $(\boldsymbol{a},\{ \boldsymbol{b}_1, \boldsymbol{b}_2 \})$ in which the petals lie on the same side of the stem $\boldsymbol{a}$. We say that the petal $\boldsymbol{b}_1$ lies above the petal $\boldsymbol{b}_2$ if the angle between the vectors $\boldsymbol{b}_1$ and $\boldsymbol{a}$ is less than the angle between $\boldsymbol{b}_2$ and $\boldsymbol{a}$ (Figure 4).

5. An umbrella is a flower $(\boldsymbol{a},\{ \boldsymbol{b}_1,\boldsymbol{b}_2\})$ with two petals $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ that lie on opposite sides of $\boldsymbol{a}$ and form a central angle less than $\pi$.

Consider an arbitrary finite system of $ 2 \times 2 $ matrices $\mathscr A=\{A_1,\dots,A_N\}$. Let $B \subset \mathbb{R}^2$ be an invariant body of this system.

6. An umbrella switching is a switching in which the tangents to the invariant body on the left and right are different.

7. A switching law $A(t)$ is a measurable function with the set of values lying in $\mathscr A=\{ A_1,A_2,\dots,A_N\}$. The trajectory corresponding to the switching law $A(t)$ is a solution of equation (1).

8. A matrix $A$ majorizes a matrix $B$ if for every vector $\boldsymbol{x}$ in the flower $(\boldsymbol{x},\{A\boldsymbol{x}, B\boldsymbol{x}\})$, the petals $A\boldsymbol{x}$ and $B\boldsymbol{x}$ lie on the same side of the stem $\boldsymbol{x}$ and the petal $A\boldsymbol{x}$ lies above the petal $B\boldsymbol{x}$.

9. We will consider vectors $\boldsymbol{x}$ for which there exists $\lambda\in\mathbb{R}$ such that $A_1 \boldsymbol{x}=\lambda A_2\boldsymbol{x}$, so that $\lambda$ is a root of the equation $\operatorname{det}(A_1-\lambda A_2)=0$. The roots $\lambda_1$ and $\lambda_2$ are called guiding values for the pair of matrices $A_1$, $A_2$, and the kernels of $A_1-\lambda_1A_2$ and $A_1-\lambda_2A_2$ are the corresponding directions.
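When $A_2$ is nonsingular, the guiding values are easy to compute: $\operatorname{det}(A_1-\lambda A_2)=\operatorname{det} A_2\cdot\operatorname{det}(A_2^{-1}A_1-\lambda I)$, so they are the eigenvalues of $A_2^{-1}A_1$, and the corresponding directions are spanned by its eigenvectors. A minimal computational sketch (the helper name guiding_values and the matrices are hypothetical):

import numpy as np

def guiding_values(A1, A2):
    # Roots of det(A1 - lambda*A2) = 0 for nonsingular A2:
    # det(A1 - lambda*A2) = det(A2) * det(A2^{-1} A1 - lambda*I),
    # so guiding values/directions are the eigenpairs of A2^{-1} A1.
    lam, vecs = np.linalg.eig(np.linalg.inv(A2) @ A1)
    return lam, vecs   # columns of vecs span the corresponding directions

A1 = np.array([[1.0, 2.0], [0.0, 3.0]])
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
lam, vecs = guiding_values(A1, A2)
for l, v in zip(lam, vecs.T):
    print(l, np.allclose(A1 @ v, l * (A2 @ v)))   # A1 x = lambda A2 x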

§ 4. Invariant body boundary analyticity

In this section we prove a theorem on the boundary of an invariant body. Then in §§ 8 and 9 we present an algorithm for constructing an invariant body, after which in § 10 we prove the uniqueness theorem.

Theorem 1. Each invariant body of system (1) has a piecewise analytic boundary.

Proof. At each point $\boldsymbol{x}\in\partial B $, at least one of the vectors $A_i \boldsymbol{x}$, $i=1,\dots, N$, is a tangent vector to the invariant body (and coincides with the tangent on the left or right at $\boldsymbol{x}$).

Points of the first type are points $\boldsymbol{x}$ for which exactly one of the vectors $A_i\boldsymbol{x}$, $i=1,\dots,N$, is tangent to the body at $\boldsymbol{x}$. The set of all such points is denoted by $M_1$. The remaining points are of the second type, and the corresponding set is denoted by $M_2$.

Note that $M_1$ is an open subset of $\partial B$. Now let us show that $M_2$ is finite.

Let $\boldsymbol{x}\in M_2$. Then two cases are possible.

$\bullet$ At least two vectors $A_i\boldsymbol{x}$ and $A_j\boldsymbol{x}$, $i\neq j$, coincide with the tangent on the left (right), which means that these vectors are collinear, that is, $\boldsymbol{x}$ is a solution of the equation $A_i \boldsymbol{x}=\lambda A_j \boldsymbol{x}$, $i \neq j$, for some $\lambda$. There is a finite number of such points. Each equation $A_i \boldsymbol{x}=\lambda A_j \boldsymbol{x}$, $i\neq j$, has at most two solutions. The number of such equations is $\binom{N}{2}$.

$\bullet$ There is a pair of vectors $A_i\boldsymbol{x}$, $A_j\boldsymbol{x}$, $i\neq j$, such that $A_i\boldsymbol{x}$ coincides with the tangent on the left, $A_j\boldsymbol{x}$, $i\neq j$, coincides with the tangent on the right, and the tangents on the left and right are different, so that an umbrella switching occurs. If for each umbrella switching it is possible to select a neighbourhood such that there are no other umbrella switchings in it, then it follows from the compactness of the boundary that there is a finite number of such points. It remains to show that any umbrella switching is isolated.

Otherwise, the set of umbrella switch points has a limit point $L$ (Figure 5). Let the central angle between the highest petals at the point $L$ be equal to $\alpha$, where $0 < \alpha < \pi$ (it cannot be equal to $\pi$, since in this case $\sigma(\mathscr A)>0$ and there is no invariant body). Since there are infinitely many umbrella switchings in any sufficiently small neighbourhood of $L$, and the external angle at each umbrella switching is bounded below by a positive constant (because the number of matrices is finite), the sum of the external angles of this invariant body is infinite. However, the sum of the external angles of a convex plane set is known not to exceed $2\pi$. We arrive at a contradiction, so $M_2$ is a finite set. The whole boundary of the body is partitioned into $M_1$ and $M_2$, where the first set is open and the second is finite. Hence the boundary of the body is divided into open arcs; each of them is given by a curve $\boldsymbol{x}(t)$ which is a solution of the equation $\dot{\boldsymbol{x}}=A\boldsymbol{x}$, where $A \in \mathscr A$ is a constant matrix. This proves the theorem.

§ 5. Auxiliary results

Having proved piecewise analyticity, one can proceed to the construction of an invariant body. The boundary of the body consists of a finite number of arcs, each of which is defined by an equation $\dot{\boldsymbol{x}}=A \boldsymbol{x}$, where $A$ is some matrix in $\mathscr A$. Thus the problem of construction reduces to finding the switch points, and all invariant bodies can be constructed in this way. Constructing the convex body $B$ is sufficient for calculating the invariant norm, which is recovered as the Minkowski functional of $B$.

In what follows we assume that the set $\mathscr A$ does not contain the null matrix and is irreducible.

A sufficient condition for the positivity of $\sigma(\mathscr A)$ is the existence of an unbounded trajectory $\boldsymbol{x}(t)$. The following lemma gives a simpler condition.

Lemma 3. Let the following be true for some vector $\overrightarrow{OX}=\boldsymbol{x}\neq 0$ and a pair of matrices $A_1$, $A_2$:

1) the petals $A_1 \boldsymbol{x}$ and $A_2\boldsymbol{x}$ lie on opposite sides of $\boldsymbol{x}$;

2) the central angle between $A_1\boldsymbol{x}$ and $A_2\boldsymbol{x}$ at the point $X$ is at least $\pi$.

Then $\sigma(\mathscr A) > 0$.

Proof. The assumptions of the lemma imply the existence of a matrix $A \in \operatorname{co}\{A_1, A_2\}$ for which $\boldsymbol{x}$ is an eigenvector corresponding to a positive eigenvalue. Then the trajectory $\boldsymbol{x}(t)=e^{tA}\boldsymbol{x}_0$ is unbounded, hence $\sigma(\mathscr A) > 0$.
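The matrix $A$ in this proof can be produced explicitly: writing $A_\mu=\mu A_1+(1-\mu)A_2$ and choosing $\mu$ so that $A_\mu\boldsymbol{x}$ is collinear with $\boldsymbol{x}$ gives the required eigenvector; under the hypotheses of the lemma the eigenvalue is positive. A sketch under these assumptions (the helper names are hypothetical):

import numpy as np

def cross2(u, v):
    # z-component of the planar cross product
    return u[0] * v[1] - u[1] * v[0]

def positive_eigen_combination(A1, A2, x):
    # Opposite-side petals mean cross2(x, A1 x) and cross2(x, A2 x)
    # have opposite signs, so mu below lies in (0, 1).
    c1, c2 = cross2(x, A1 @ x), cross2(x, A2 @ x)
    mu = c2 / (c2 - c1)
    A = mu * A1 + (1 - mu) * A2     # now A x is collinear with x
    lam = (A @ x) @ x / (x @ x)     # the eigenvalue at the eigenvector x
    return A, lam                   # lam > 0 under the lemma's hypotheses;
                                    # e^{tA} x then grows like e^{lam t}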

The following theorem reduces the number of cases. If at least one pair of matrices has a negative guiding value, then $\sigma(\mathscr A) > 0$.

Theorem 2. For any family of matrices $\mathscr A$ the presence of at least one negative guiding value ensures that $\sigma(\mathscr A) > 0$.

Proof. Assume that for two matrices $A_1, A_2 \in \mathscr A$ the guiding values $\lambda_1$ and $\lambda_2$ are such that $\lambda_1 > \lambda_2$ and $\lambda_2 < 0$, and the vectors $\boldsymbol{x}$ and $\boldsymbol{h}$ are such that
$$ \begin{equation*} A_1 \boldsymbol{x}=\lambda_2 A_2 \boldsymbol{x}\quad\text{and} \quad A_1 \boldsymbol{h}=\lambda_1 A_2 \boldsymbol{h}, \end{equation*} \notag $$
where $h=\| \boldsymbol{h}\|$ is small enough, and the petals $\boldsymbol{h}$ and $\boldsymbol{x}$ lie on the same side of the stem $A_2 \boldsymbol{x}$ in the flower $(A_2 \boldsymbol{x} , \{\boldsymbol{h}, \boldsymbol{x} \})$ (otherwise one can take the opposite vector $-\boldsymbol{h}$). Then, depending on the sign of $\lambda_1$, the following three cases are possible.

1) If $\lambda_1 > 0$ (Figure 6), then the central angle between the petals $A_1 (\boldsymbol{h}+\boldsymbol{x})$ and $A_2(\boldsymbol{h}+\boldsymbol{x})$ at the point $X$ is greater than $\pi$, and therefore the same is true at the points $X +\boldsymbol{h}$ for small $\boldsymbol{h}$.

2) When $\lambda_1=0$ (Figure 7), the petals $A_1 (\boldsymbol{h}+\boldsymbol{x})=A_1 \boldsymbol{x}$ and $A_2 (\boldsymbol{h}+\boldsymbol{x})$ form a central angle greater than $\pi$.

3) $\lambda_1 < 0 $ (Figure 8). Without loss of generality it can be assumed that

$$ \begin{equation*} \frac{\|A_1 \boldsymbol{h} \|}{\|A_1 \boldsymbol{x}\|}\geqslant\frac{\|A_2 \boldsymbol{h}\|}{\|A_2 \boldsymbol{x}\|}. \end{equation*} \notag $$

This yields $\|A_1 \boldsymbol{h} \|\geqslant{\|A_2 \boldsymbol{h}\| \|A_1 \boldsymbol{x}\|}/{\|A_2 \boldsymbol{x}\|}$. Then, from the similarity of triangles we obtain again that the central angle between the petals $A_1 (\boldsymbol{h}+\boldsymbol{x})$ and $A_2(\boldsymbol{h}+\boldsymbol{x})$ is greater than $\pi$.

This completes the proof.

In fact, in general position a guiding vector corresponding to $\lambda_1 > 0$ is a switch point.

Theorem 3. If $\sigma(\{A_1, A_2\})=0$ and for a pair of matrices $A_1$, $A_2$, a vector $\boldsymbol{x} \in \mathbb{R}^2\setminus \{ 0\}$ and a number $\lambda >0$ we have $A_1 \boldsymbol{x}=\lambda A_2 \boldsymbol{x}$ and the projections of the vectors $A_1 ^2 \boldsymbol{x}$ and $A_2 ^2 \boldsymbol{x}$ onto the line $l$ orthogonal to the vector $A_1 \boldsymbol{x}$ do not coincide, then switching occurs at the point $\boldsymbol{x}$.

Proof. Consider the Taylor expansions of the operators $e^{t A_1}$ and $e^{t A_2}$ at the point $t=0$,
$$ \begin{equation} e^{t A_1}\boldsymbol{x}=\boldsymbol{x} + t A_1\boldsymbol{x} + \frac{t^2 A_1^2}{2}\boldsymbol{x} + O(t^3) \end{equation} \tag{2} $$
and
$$ \begin{equation} e^{t A_2}\boldsymbol{x}=\boldsymbol{x} + t A_2\boldsymbol{x} + \frac{t^2 A_2^2}{2}\boldsymbol{x} + O(t^3). \end{equation} \tag{3} $$

We must compare the directions of the vectors of the main parts of the increments

$$ \begin{equation*} t A_1\boldsymbol{x} + \frac{t^2 A_1^2}{2}\boldsymbol{x}\quad\text{and} \quad t A_2\boldsymbol{x} + \frac{t^2 A_2^2}{2}\boldsymbol{x}. \end{equation*} \notag $$
Dividing by $t$ and using the equality $A_1 \boldsymbol{x}= \lambda A_2 \boldsymbol{x}$ we reduce the problem to the comparison of the directions of the vectors $A_1^2 \boldsymbol{x}$ and $\lambda A_2^2\boldsymbol{x}$.

For the sake of simplicity we introduce a coordinate system with origin $O$ and $O\boldsymbol{x}_1$- and $O\boldsymbol{x}_2$-axes co-directed with the vectors $A_2\boldsymbol{x}$ and $\boldsymbol{x}$, respectively (Figure 9); we may assume that the axes are perpendicular. Set $\|A_i\boldsymbol{x}\|=h_i$, $i=1,2$ (then the vectors $h_2 A_1 \boldsymbol{x}$ and $h_1 A_2 \boldsymbol{x}$ have equal length), and project the vectors $t h_2 A_1^2\boldsymbol{x}/{2}$ and $t h_1 A_2 ^2 \boldsymbol{x}/{2}$ onto the $O\boldsymbol{x}_2$-axis. For sufficiently small $t$ the vector giving the smaller projection forms a smaller angle with the $O\boldsymbol{x}_2$-axis, regardless of the sign of $t$. This corresponds to switching at the point $\boldsymbol{x}$.

This completes the proof.
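In coordinates the test of Theorem 3 reduces to comparing two scalar projections; following the proof, which compares $A_1^2\boldsymbol{x}$ with $\lambda A_2^2\boldsymbol{x}$ in the direction orthogonal to the common petal, one can sketch it as follows (the helper name and the tolerance are assumptions):

import numpy as np

def switching_occurs(A1, A2, x, lam, tol=1e-12):
    # Assumes A1 x = lam * A2 x with lam > 0 (x spans a guiding direction).
    # Compare the second-order terms of the Taylor expansions (2), (3)
    # in the direction orthogonal to the common petal A1 x.
    t = A1 @ x
    n = np.array([-t[1], t[0]])          # normal to A1 x
    p1 = (A1 @ (A1 @ x)) @ n
    p2 = lam * ((A2 @ (A2 @ x)) @ n)
    return abs(p1 - p2) > tol            # distinct projections: switch at x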

Lemma 4. For any irreducible family $\mathscr A$ of matrices

$$ \begin{equation} \max\{\|A\| \mid A \in \mathscr A \} \geqslant \sigma(\mathscr A) \geqslant \max\{\sigma(A) \mid A \in \mathscr A \}, \end{equation} \tag{4} $$
where $\|A\|=\sqrt{\lambda_{\max}(AA^\top)}$ is the spectral norm of $A$.

Proof. Consider the squared norm of a trajectory $\boldsymbol{x}(t)$:
$$ \begin{equation*} \|\boldsymbol{x}(t)\|^2=(\boldsymbol{x}(t),\boldsymbol{x}(t)). \end{equation*} \notag $$

We differentiate this identity and substitute in the equation from (1):

$$ \begin{equation*} \begin{aligned} \, \frac{d}{dt}\|\boldsymbol{x}(t)\|^2 &=2(\dot{\boldsymbol{x}}(t),\boldsymbol{x}(t))= 2(A(t)\boldsymbol{x}(t),\boldsymbol{x}(t)) \\ &\leqslant 2 \|A(t)\|(\boldsymbol{x}(t),\boldsymbol{x}(t)) \leqslant 2 \max \{\| A\| \mid A \in \mathscr A\} \cdot \|\boldsymbol{x}(t)\|^2. \end{aligned} \end{equation*} \notag $$
Integrating the resulting inequality
$$ \begin{equation*} \frac{d}{dt}\|\boldsymbol{x}(t)\|^2 \leqslant 2 \max \{\| A\| \mid A \in \mathscr A \} \cdot \|\boldsymbol{x}(t)\|^2 \end{equation*} \notag $$
we obtain
$$ \begin{equation*} \|\boldsymbol{x}(t)\|^2 \leqslant e^{2 t\cdot \max \{\| A\| \mid A \in \mathscr A\}}\|\boldsymbol{x}_0\|^2\quad\text{and} \quad \|\boldsymbol{x}(t)\| \leqslant e^{t\cdot \max \{\| A\| \mid A \in \mathscr A \}}\|\boldsymbol{x}_0\|. \end{equation*} \notag $$
From the last estimate and the definition of the Lyapunov exponent the left-hand inequality of the lemma follows. The right-hand inequality follows from Lemma 2.
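Inequality (4) yields the starting interval for the bisection in the algorithms below: since $|\sigma(A)|\leqslant\|A\|$ for every matrix, one may take $\delta_0=\max_{A\in\mathscr A}\|A\|$. A sketch (the helper name initial_interval is an assumption):

import numpy as np

def initial_interval(family):
    # Lemma 4: max_i sigma(A_i) <= sigma(A) <= max_i ||A_i||, where
    # ||A|| is the spectral norm (largest singular value of A).
    delta0 = max(np.linalg.norm(A, 2) for A in family)
    lower = max(max(ev.real for ev in np.linalg.eigvals(A)) for A in family)
    return max(lower, -delta0), delta0   # contains sigma(A); |sigma| <= delta0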

§ 6. Construction of an invariant norm for a system of two matrices

The algorithm for the construction of an invariant norm consists of two nested loops. The outer loop bisects an interval containing the Lyapunov exponent, and the inner loop compares $\sigma(\mathscr A)$ with zero. In the case when the inner loop outputs $\sigma(\mathscr A)=0$, it also outputs an invariant norm. The choice of $\delta_0$ is provided by Lemma 4.

Algorithm 1 (construction of an invariant norm for a family $\mathscr A=\{A_1,A_2\}$).

Input: an irreducible family of $2 \times 2$ matrices $\mathscr A=\{ A_1,A_2\}$.

Output: $\sigma(\mathscr A)$ and an invariant norm for $\mathscr A$.

Zero iteration: Let $\mathscr A^{(0)}=\mathscr A$ and $\Delta_0=[-\delta_0 , \delta_0 ]$.

END

This algorithm computes the Lyapunov exponent of the family $\mathscr A$ and constructs simultaneously an invariant body for this family.

According to Lemma 2, $\sigma(\mathscr A)\geqslant \max\{\sigma(A_1), \sigma(A_2) \}$. Therefore, we assume that

$$ \begin{equation} \sigma(A_1),\sigma(A_2) \leqslant 0 . \end{equation} \tag{5} $$
Otherwise $\sigma(\mathscr A) > 0$, and the sign of the exponent is determined without further work.

To find the invariant norm and calculate the Lyapunov exponent using Algorithm 1 it remains to present a description of the inner loop, which, given a family $\mathscr A$, determines the sign of the Lyapunov exponent $\sigma(\mathscr A)$ and in the case when $\sigma(\mathscr A)=0$ builds an invariant norm. The description of the inner loop is presented below and consists of 10 cases.
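Before turning to the case analysis, here is a minimal sketch of the outer loop of Algorithm 1, assuming a hypothetical inner-loop oracle sign_of_sigma that reports whether the Lyapunov exponent of the (shifted) family is positive, zero or negative; the oracle itself is exactly the case analysis described below:

import numpy as np

def lyapunov_exponent(family, sign_of_sigma, delta0, eps=1e-8):
    # Outer loop: bisect an interval containing sigma(A), using the
    # shift property sigma(A - s*I) = sigma(A) - s (Lemma 1).
    I = np.eye(2)
    lo, hi = -delta0, delta0
    while hi - lo > eps:
        s = (lo + hi) / 2
        sign = sign_of_sigma([A - s * I for A in family])  # +1, 0 or -1
        if sign == 0:
            return s         # the inner loop also yields an invariant norm
        lo, hi = (s, hi) if sign > 0 else (lo, s)
    return (lo + hi) / 2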

6.1. The matrices $A_1$ and $A_2$ are singular

Since one eigenvalue of the matrix $A_1$ is zero, according to Lemma 2 we obtain the estimate $\sigma(\mathscr A) \geqslant \sigma(A_1) \geqslant 0 $. It is required to distinguish between the cases $\sigma(\mathscr A) > 0 $ and $\sigma(\mathscr A)=0 $ and to find an invariant body in the latter case. Consider the kernels $\operatorname{Ker} A_1$ and $\operatorname{Ker}A_2$ of the matrices in question; since the matrices are nonzero, their kernels are one-dimensional. We denote the image of $A_i$ by $\operatorname{Im} A_i$. Let us draw two different straight lines $\ell_1$ and $\ell_1'$ which are symmetric with respect to the origin and parallel to $\operatorname{Im} A_1$. Through the point of intersection of $\operatorname{Ker} A_1$ and $\ell_1$ we draw a line $\ell_2$ parallel to $\operatorname{Im} A_2$.

If the point of intersection of $\operatorname{Ker} A_2$ and $\ell_2$ lies inside the strip between $\ell_1$ and $\ell_1'$, then by virtue of the assumption (5) the parallelogram constructed is an invariant body (Figure 10).

Proposition 1. If the point of intersection of $\operatorname{Ker} A_2$ and $\ell_2$ lies inside the strip and the point of intersection of $\operatorname{Ker} A_1$ and $\ell_2$ lies outside the strip (or on its boundary), then $\sigma (\mathscr A)=0$ and the parallelogram is an invariant body. Otherwise $\sigma(\mathscr A) >0$.

Proof. If these conditions fail, then there is an unbounded trajectory, which implies that $\sigma(\mathscr A) > 0$. The process of constructing an unbounded trajectory is shown in Figure 11, where the selected trajectory is shown by a dashed line: if we start at a point on $\operatorname{Ker} A_1$, move along the trajectory of the matrix $A_2$ and switch on the kernels, then we reach a point on $\operatorname{Ker} A_1$ again, but this point is farther away from the centre.

6.2. At least one matrix is nonsingular

Without loss of generality we assume that $A_2$ is nonsingular. To determine the sign of the Lyapunov exponent we construct the boundary of the body using the property of extremality. Note that switching from one matrix to another at a point $X$ is possible only when the vectors $A_1\boldsymbol{x}$ and $A_2 \boldsymbol{x}$ are co-directed, where $\boldsymbol{x}= \overrightarrow{OX}$. Indeed, if they lie on the same side of the vector $\overrightarrow{OX}$ (Figure 12), then the extremal trajectory is directed along the highest of the vectors $A_1\boldsymbol{x}$ and $ A_2\boldsymbol{x}$. If $A_1\boldsymbol{x}$ and $ A_2\boldsymbol{x}$ lie on opposite sides of $\overrightarrow{OX}$, then $X$ is the vertex of an umbrella. Let us show that this case is impossible.

Lemma 5. If $\sigma(\mathscr A)=0$ and the extremal trajectory passes through the vertex of the umbrella $(\boldsymbol{x},\{A_1\boldsymbol{x},A_2\boldsymbol{x}\})$, then there can be no switching at this point.

Proof. Consider the umbrella with vertex $X$. If the direction of motion along the boundary of the body changed at this point, then for a short period of time the trajectory would pass inside the invariant body, which contradicts convexity.

So there are no switch points among the vertices of umbrellas. Since the vectors $A_1\boldsymbol{x}$ and $ A_2\boldsymbol{x}$ pointing in different directions with respect to $\boldsymbol{x}$ must form an umbrella, for otherwise $\sigma(\mathscr A) > 0$ (Theorem 2), we obtain the following result.

Corollary 1. If $\sigma(\mathscr A)=0$, then the vectors $A_1\boldsymbol{x}$ and $ A_2\boldsymbol{x}$ are co-directed at all switch points of the extremal trajectory.

Consider $\boldsymbol{x}$ for which there exists $\lambda\in\mathbb{R}$ such that $A_1\boldsymbol{x}=\lambda A_2\boldsymbol{x}$, so that $\lambda$ is a root of the equation $\operatorname{det}(A_1-\lambda A_2)=0$. The roots $\lambda_1$ and $\lambda_2$ are called guiding values for a pair of matrices $A_1$ and $A_2$. Depending on the sign of the discriminant $D$ of the quadratic equation and the signs of its roots $\lambda_1$ and $\lambda_2$, we obtain several cases.

6.2.1. The case when $D < 0$

Proposition 2. In the case when $D<0$ an invariant body for the system $\mathscr A$ is an invariant body for one of the two matrices in this system.

When $D<0$, there are no guiding values and, according to Corollary 1, no switch points.

Corollary 2. For $D<0$ we have $\sigma(\mathscr A)=\max \{ \sigma(A_1), \sigma(A_2) \}$. Note that the converse is not true: this equality can also hold for $D>0$.

6.2.2. The case when $D=0$ and $\lambda_1 > 0 $

As in the previous case, there are no umbrella switchings. Indeed, only one pair of symmetric candidate switch points is possible. At the same time, by symmetry, the same matrix dominates on both half-planes separated by the straight line connecting these points, so no switching actually occurs. Thus, we have proved the following.

Proposition 3. In the case when the matrix $A_2$ is nonsingular, $D=0$ and $\lambda_1 > 0$ we have $\sigma(\mathscr A)=\max\{\sigma(A_1), \sigma(A_2) \}$.

6.2.3. The case when $D=0$ and $\lambda_1 < 0 $

By virtue of Theorem 2 it remains to consider the case of proportional matrices $A_1$ and $A_2$. But in this case $\sigma(\mathscr A) \geqslant \max\{\sigma(A_1),\sigma(A_2)\}\geqslant 0 $, and the last inequality turns into an equality only when $\sigma(A_1)=\sigma(A_2)= 0$. Indeed, an invariant body for any of these matrices is also an invariant body for $\mathscr A$.

Proposition 4. In the case when $D=0$ and $\lambda_1 < 0$, if both matrices have Lyapunov exponent 0, then the invariant body of each matrix is an invariant body for the system.

6.2.4. $A_2$ is nonsingular, $ D=0$ and $\lambda_1=0 $

Since $\lambda_1=0 $, the matrix $A_1$ is singular, so $\sigma(A_1)=0$ in view of assumption (5), and therefore $\sigma(\mathscr A) \geqslant 0$. Consider a vector $\boldsymbol{b}$ parallel to $ \ell_1$; then the vector $\boldsymbol{z}=A_2^{-1} \boldsymbol{b} $ lies in the kernel $\operatorname{Ker} A_1$, for otherwise we obtain a contradiction with the condition $D=0$. Consider the trajectory generated by the matrix $-A_2$ going out of the point $\boldsymbol{x}$ and extending till the point of intersection with the line corresponding to $\ell_1$. The central angle between $A_1\boldsymbol{x}$ and $ A_2\boldsymbol{x}$ at the switch point does not exceed $\pi$, and if the point of intersection does not lie on $\operatorname{Ker} A_1$, then this angle is strictly less than $\pi$. In either case an invariant body is obtained from the half constructed by means of a central symmetry (Figure 13).

Let us show that in this case the extremal trajectory corresponds to a stationary switching law, that is, it is generated by a single matrix.

Proposition 5. On extremal trajectories switching is possible only in the directions corresponding to positive guiding values.

6.2.5. $A_2$ is nonsingular, $D > 0$, $\lambda_1 > 0 $ and $\lambda_2<0 $

In this case $\sigma(\mathscr A)>0$ (Theorem 2).

6.2.6. $A_2$ is nonsingular, $D > 0$, $\lambda_1 < 0 $ and $\lambda_2<0 $

In this case $\sigma(\mathscr A) > 0$ by virtue of Theorem 2.

6.2.7. $A_2$ is nonsingular, $D > 0$, $\lambda_1 > 0 $ and $\lambda_2 >0 $

As in the first two cases, there can be no umbrella switching. In this case the vectors $A_1\boldsymbol{x}$ and $A_2\boldsymbol{x}$ always point to the same side with respect to $\boldsymbol{x}$, so they can always be compared, and the extremal control is found uniquely. Indeed, the highest vector is defined at all points except the guiding directions $\boldsymbol d_1$ and $\boldsymbol d_2$ (corresponding to the guiding values), and we show below that at the points where the vectors $A_1\boldsymbol{x}$ and $A_2\boldsymbol{x}$ are parallel switching always occurs (Figure 14).

6.2.8. $A_2$ is nonsingular, $D > 0$, $\lambda_1 > 0 $ and $\lambda_2=0 $

To begin with, assume that the matrix $A_2$ has no real eigenvalues. We show that in this case $\sigma(\mathscr A)=0$. Let us construct an invariant body $B$. Consider the lines $MM'$ and $AA'$ passing through the point $O$ and corresponding to the directions of the vectors $\boldsymbol d_2$ and $\boldsymbol d_1$. From the point $M$ we start a trajectory of the equation $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ for $A=-A_2$. Since $\sigma(-A_2)=-\sigma(A_2)> 0$, it intersects the line parallel to $\ell_1$ and passing through the point $A'$ before it crosses the line $AA'$ at some point $C$. Reflecting the arc $MC$ centrally symmetrically about $AA'$, we obtain an invariant body (Figure 15).

Now let the eigenvalues of the matrix $A_2$ be real. Since $\operatorname{det} A_2\neq 0$ and $\sigma(A_2)\leqslant 0$, the eigenvalues of $A_2$ are negative. The corresponding eigenvectors are denoted by $\boldsymbol{u}_1$ and $\boldsymbol{u}_2$; they are normalized by the conditions $\boldsymbol{u}_i\in \partial B$, $i=1,2$. It follows from Theorem 2 that the matrix $A_2$ dominates at the point $M'$. Also note that the matrix $A_1$ dominates at the vectors $\boldsymbol{u}_1$ and $\boldsymbol{u}_2$. Thus we repeat the proof of the case discussed in § 6.2.7.

6.2.9. $A_2$ is nonsingular, $D > 0$, $\lambda_1=0 $ and $\lambda_2<0 $

According to Theorem 2, $\sigma(\mathscr A) > 0$.

§ 7. Families of singular matrices: invariant polygons

If all matrices in the family $\mathscr A$ are singular and $\sigma (\mathscr A)=0$, then the invariant body is a polygon. We prove this immediately for an arbitrary finite set $\mathscr A$. We start with the following lemma.

Lemma 6. Fix a family of matrices $\mathscr A=\{ A_1,A_2,\dots,A_N \}$ in which the nonzero $2 \times 2$ matrices $A_{n+1},\dots,A_N$ are singular. If $\sigma(\mathscr A)=0$, then for every $i>n$, among the points of tangency of the invariant body $B$ with lines parallel to $\operatorname{Im} A_i$ there is a point on $\operatorname{Ker}{A_i}$.

Proof. Assume the contrary: there is $i>n$ such that the line $\ell_i$ parallel to $\operatorname{Im} A_i$ and intersecting the boundary of the invariant body $B$ at a point on the kernel $\operatorname{Ker}A_i$ is not a tangent line. Then there is a point $\boldsymbol{x}$ on the boundary of the invariant body such that the vector $A_i\boldsymbol{x}$ is not directed inside the body (Figure 16). This contradicts the properties of invariant bodies.

Let all matrices in $\mathscr A=\{A_1,A_2,\dots,A_N\}$ be singular. Each of these matrices is ‘switched on’ near its kernel (Lemma 6) and only there (due to the convexity of the invariant body). Thus, if all matrices in the family $\mathscr A$ are singular, then the invariant body is a polygon.

Let the line $\ell_i$ intersect $\operatorname{Ker} A_i$ at a point $x_i$. According to Lemma 6, to construct the polygon it is enough to know for each matrix $A_i$ the distance between the point $x_i$ lying on $\operatorname{Ker} A_i$ and the point $O$. Denote the lengths of the segments cut off on $\operatorname{Ker} A_{i-1}$ and $\operatorname{Ker} A_{i+1}$ by their intersections with the line $\ell_i$ by $a_i x_i$ and $b_i x_i$, respectively, where $a_i$ and $b_i$ are some coefficients (Figure 17). (For brevity we use the same symbol $x_i$ to denote the length of the vector $\boldsymbol{x}_i$.)

We consider the system

$$ \begin{equation} \prod_{j=1}^N b_j \geqslant 1, \qquad \prod_{j=1}^N a_j\geqslant 1, \qquad b_i a_{i+1} \geqslant 1, \quad i=1,\dots,N. \end{equation} \tag{6} $$

Theorem 4. If all matrices in the family $\mathscr A=\{A_1,A_2,\dots,A_N\}$ are singular, then $\sigma(\mathscr A)=0$ if and only if system (6) has a solution. In this case any invariant body of system (1) is a symmetric $2N$-gon with sides parallel to the $\ell_i$. If the system has no solution, then $\sigma(\mathscr A) > 0$.

Proof. The lengths $x_{i-1}$ and $x_{i+1}$ satisfy conditions $x_{i-1} \leqslant a_i x_i$ and $x_{i+1} \leqslant b_i x_i$, which are written in the form of a system as follows (we set $x_{N+1}=x_{1}$):
$$ \begin{equation*} \begin{cases} a_i x_i \geqslant x_{i-1}, \\ b_i x_i \geqslant x_{i+1}, \end{cases} \quad i=1,\dots,N. \end{equation*} \notag $$
Let us shift the numbering in the first line:
$$ \begin{equation} \begin{cases} a_{i+1} x_{i+1} \geqslant x_i, \\ b_i x_i \geqslant x_{i+1}, \end{cases} \quad i=1,\dots,N. \end{equation} \tag{7} $$
Then we obtain
$$ \begin{equation} \frac{1}{a_{i+1}} \leqslant \frac{x_{i+1}}{x_i} \leqslant b_i, \qquad i=1,\dots,N. \end{equation} \tag{8} $$

Let $S_i={x_{i+1}}/{x_i}$. Then $S_1 S_2\dotsb S_N=1$. It is clear that finding the quantities $S_1, S_2,\dots,S_N$ is equivalent to searching for the set $x_1,\dots,x_N$ up to multiplication by a scalar.

We introduce the notation $a_{N+1}=a_1$ and $b_{N+1}=b_1$.

Lemma 7. For the existence of a solution of system (8) it is necessary and sufficient that the following inequalities are satisfied:

1) $\prod_{j=1}^N b_j \geqslant 1$;

2) $\prod_{j=1}^N a_j \geqslant 1$;

3) $b_i a_{i+1} \geqslant 1$, $i=1,\dots,N$.

Proof. The necessity of condition 3) follows from inequalities (8):
$$ \begin{equation*} \frac{1}{a_{i+1}} \leqslant b_i, \qquad i=1,\dots,N. \end{equation*} \notag $$
The necessity of conditions 1) and 2) follows by multiplying inequalities (8) over all $i=1,\dots,N$:
$$ \begin{equation*} \prod_{j=1}^N \frac{1}{a_j} \leqslant 1 \leqslant \prod_{j=1}^N b_j. \end{equation*} \notag $$

It remains to show the sufficiency of 1)–3). Consider the function $f(\alpha,x,y)=x^{\alpha}y^{1-\alpha}$. For $\alpha \in [0,1]$ and $x, y >0$ we have

$$ \begin{equation*} \min(x,y)\leqslant f(\alpha,x,y) \leqslant \max(x,y). \end{equation*} \notag $$

Indeed, fix the values of $x$ and $y$. Without loss of generality we assume that $x \leqslant y$. Then

$$ \begin{equation*} f(\alpha,x,y)=x^{\alpha}y^{1-\alpha}=\biggl(\frac{x}{y}\biggr)^{\alpha} y . \end{equation*} \notag $$
Since ${x}/{y}\leqslant 1$, we have
$$ \begin{equation*} x=f(1,x,y) \leqslant f(\alpha,x,y) \leqslant f(0,x,y)=y \end{equation*} \notag $$
for all $\alpha \in [0,1]$.

In view of the continuous dependence on the variable $\alpha$, the function $f(\alpha,x,y)$ takes all values on the interval $[x,y]$. Let

$$ \begin{equation*} A=\prod_{j=1}^N a_j \geqslant 1\quad\text{and} \quad B=\prod_{j=1}^N b_j \geqslant 1, \end{equation*} \notag $$
and let $\alpha_0$ be such that $f(\alpha_0,{1}/{A},B)=1 $. Set $S_i=f(\alpha_0,{1}/{a_{i+1}},b_i)$. Then ${1}/{a_{i+1}} \leqslant S_i \leqslant b_i$ by condition 3), while $\prod_{i=1}^N S_i=f(\alpha_0, {1}/{A}, B)=1$, so the $S_i$ are a solution of system (8).

The solution in Lemma 7 does not cover the case when some of the $a_i$ (or $b_i$) are equal to $\infty$ (the lines are parallel or do not intersect in the corresponding sector). To find a constructive solution in this case, instead of the $a_i=\infty$ we take sufficiently large finite $a_i$ so that all inequalities 1)–3) are fulfilled. It is easy to see that the resulting solution is also a solution of the original problem, where $a_i=\infty$.
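The sufficiency part of the proof is fully constructive: with $A=\prod_j a_j$ and $B=\prod_j b_j$, solving $f(\alpha_0, 1/A, B)=1$ gives $\alpha_0=\ln B/(\ln A+\ln B)$ in closed form (when $A=B=1$ any $\alpha_0$ will do). A sketch with hypothetical coefficients (the helper name solve_ratios is an assumption):

import math

def solve_ratios(a, b):
    # Constructive solution of system (8); a[i] plays the role of
    # a_{i+1} and b[i] of b_i, and conditions 1)-3) of Lemma 7 are
    # assumed to hold (replace infinite a_i by large finite values).
    A, B = math.prod(a), math.prod(b)     # both >= 1 by conditions 1), 2)
    if A == 1 and B == 1:
        alpha0 = 0.5                      # any alpha in [0, 1] works here
    else:
        alpha0 = math.log(B) / (math.log(A) + math.log(B))
    # S_i = f(alpha0, 1/a_{i+1}, b_i) lies in [1/a_{i+1}, b_i] by
    # condition 3), and the product of all S_i is f(alpha0, 1/A, B) = 1.
    return [(1 / ai) ** alpha0 * bi ** (1 - alpha0) for ai, bi in zip(a, b)]

S = solve_ratios([2.0, 1.5, 1.0], [1.0, 1.0, 3.0])
print(math.prod(S))   # 1.0 up to rounding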

If the system has a solution, then any invariant polygon of system (1) is constructed in accordance with the above algorithm, and vice versa, each solution generates an invariant polygon.

§ 8. Construction of an invariant norm for an arbitrary number of matrices

The algorithm for constructing the invariant norm repeats the algorithm for $N=2$, except for a few details. The choice of $\delta_0$ is still provided by Lemma 4.

Algorithm 2 (construction of an invariant norm for a family $\mathscr A=\{A_1,A_2, \dots, A_N \}$).

Input: an irreducible family of $ 2 \times 2 $ matrices $\mathscr A=\{ A_1,A_2,\dots,A_N \}$.

Output: $\sigma(\mathscr A)$, an invariant norm for $\mathscr A$.

Zero iteration: Let $\mathscr A^{(0)}=\mathscr A$ and $\Delta_0=[-\delta_0 , \delta_0 ]$.

END

It remains to present the inner loop to find the invariant norm and calculate the Lyapunov exponent using Algorithm 2. The inner loop, as in Algorithm 1, finds the sign of the Lyapunov exponent $\sigma(\mathscr A)$ and, in the case when $\sigma(\mathscr A)=0$, constructs an invariant norm. There are two cases in the inner loop. In the first case the sign of $\sigma(\mathscr A)$ is easily determined from the conditions. In the second case, in order to determine the sign, it is necessary to carry out the process of constructing an invariant norm. Let us start with the first case. (The second case is outlined in § 9.)

The inner loop (Case 1)

The matrices $\mathscr A=\{A_1,A_2,\dots,A_N\}$ are fixed. If there are singular matrices among the $A_j$, then we put them at the end of the list, so that $A_1,A_2,\dots,A_n$ are nonsingular and the $A_{j}$ for $j\geqslant n+1$ are singular. Let $n+m=N$.

As in the case of $N=2$, we assume that $\sigma(A_j) \leqslant 0$, $j=1,\dots,N$, because otherwise $\sigma(\mathscr A) > 0$. For each pair of matrices in $ \mathscr A$ we find the guiding values and the possible switch points. Let us note some properties.

1. If among the $n+m$ matrices there is a pair with at least one negative guiding value, then $\sigma(\mathscr A)> 0$ by virtue of Theorem 2.

2. If the guiding values in all pairs $A_i$, $A_j$, $i\neq j$ are positive, then for each guiding vector it is possible to determine uniquely whether switching occurs and the result of switching (that is, to determine the matrix to switch on). Therefore, the extremal trajectory is constructed in a unique way. Thus, the case of $n=N$, when there are no singular matrices, repeats § 6.2.7.

3. The special case $n=0$, when all the matrices are singular, was treated in § 7.

Given properties 1–3, we can assume that all guiding values are nonnegative, and among them there is at least one positive value and at least one equal to zero. The last condition is equivalent to the existence of a singular matrix.

4. Assume that there are no proportional matrices in the family $\mathscr A=\{A_1, A_2,\dots,A_N\}$. A negative proportionality coefficient would give a negative guiding value, which means that $\sigma(\mathscr A) > 0$ (Theorem 2). In the case of a positive coefficient one of the two matrices can be removed from the family without affecting anything (this matrix will have to be returned after shifting).

The case of umbrella switching (see the definition at the end of § 3). For $N=2$ umbrella switchings are not allowed, owing to the fact that there is only one starting point and the motion never stops. In the case of $N\geqslant 3$ we consider the sets of petals $A_i\boldsymbol{x}$ on one side and on the other side of the stem $\boldsymbol{x}$, and the highest vector is distinguished in each set. Umbrella switching is possible if singular matrices are present.

§ 9. The inner loop of Algorithm 2: construction of an invariant norm for $N$ matrices

The inner loop (Case 2)

Let $B_i$ be the kernel of the singular matrix $A_i$. Without loss of generality we assume that these kernels are arranged in accordance with the numbering of the matrices. We draw two different straight lines which are symmetric about the centre and parallel to $\operatorname{Im} A_1$. According to Lemma 6, the boundary of the invariant body contains a point of each of $B_1,B_2,\dots,B_m$. This means that the maximal trajectory must contain a segment parallel to $\operatorname{Im} A_1$ (possibly degenerating into a point).

Let $A^{+}(t)$ (respectively, $A^{-}(t)$) denote the switching law corresponding to the maximal clockwise (anticlockwise) trajectory. At the switch point corresponding to $A_1$ and $A^{+}(t)$ the trajectory switches to the $A^{+}(t)$ mode and reaches $B_2$ if it has not intersected $B_2$ before. If the intersection with $B_2$ occurred before, then the whole segment of $\ell_1$ between $B_1$ and $B_2$ is contained in the maximal trajectory: this means that the matrix $A_1$ dominates in the whole region between the kernels, so that we need not switch to $A^{+}(t)$. At the point of the trajectory that lies on $B_2$ there are two possibilities of entry relative to $\ell_2$: from the side of the origin or from the other side. In the first case we switch to the matrix $A_2$. In the second case we obtain an unbounded trajectory (Figure 18), which immediately shows that $\sigma{(\mathscr A)} > 0$.

Analogously, let us follow the law $A^{-}(t)$ from $B_1$ to $B_m$. Next we repeat the steps of the algorithm for singular matrices described in § 7. To construct a body, it is sufficient to know for each matrix $A_i$ the distance $x_i$ along $B_i$ from $O$ to a boundary point of the invariant body. Denote the lengths of the segments of $B _{i-1}$ and $B_{i+1}$ cut off by the union of the positive and negative trajectories by $a_i x_i$ and $b_i x_i$, respectively, where $a_i$ and $b_i$ are coefficients (Figure 19). Then we obtain the following system:

$$ \begin{equation*} \begin{cases} a_i x_i \geqslant x_{i-1}, \\ b_i x_i \geqslant x_{i+1}, \end{cases} \quad i=1,\dots,n. \end{equation*} \notag $$

We shift the numbering in the first line:

$$ \begin{equation} \begin{cases} a_{i+1} x_{i+1} \geqslant x_i, \\ b_i x_i \geqslant x_{i+1}, \end{cases} \quad i=1,\dots,n. \end{equation} \tag{9} $$
Then we obtain
$$ \begin{equation} \frac{1}{a_{i+1}} \leqslant \frac{x_{i+1}}{x_i} \leqslant b_i, \qquad i=1,\dots,n. \end{equation} \tag{10} $$

Set $S_i={x_{i+1}}/{x_i}$. Then $S_1 S_2\dotsb S_n=1$. The numbers $S_1, S_2,\dots,S_n$ determine $x_1,\dots,x_n$ up to multiplication by a scalar.

Lemma 8. For the existence of a solution of system (10) it is necessary and sufficient that the following conditions are satisfied:

$$ \begin{equation} \prod_{j=1}^n b_j \geqslant 1, \qquad \prod_{j=1}^n a_j\geqslant 1, \qquad b_i a_{i+1} \geqslant 1, \quad i=1,\dots,n. \end{equation} \tag{11} $$

The proof is the same as the proof of Lemma 7.

Corollary 3. In the case when there is exactly one dominant matrix in the system, $x_1$ and $x_2$ can be considered as the distances from $O$ to the boundary of the invariant body along $B_1$ and $-B_1$, respectively. Then (11) is equivalent to $a_1, b_1\geqslant 1$. It is also possible to carry out the process of constructing an invariant body in the same way as in § 6.2.8.

Thus, the following theorem is proved.

Theorem 5. In the case under consideration $\sigma(\mathscr A)=0$ if and only if system (10) has a solution, that is, if and only if conditions (11) are satisfied. In this case every invariant body of system (1) is constructed in accordance with Algorithm 2. If system (10) does not have a solution, then $\sigma(\mathscr A) > 0$.

We have analysed all possible cases of the inner loop. In each case the sign of $\sigma(\mathscr A)$ is determined in finite time, and in the case when $\sigma(\mathscr A)=0$ an invariant body is also constructed in finite time.

Thus, the following theorem is proved.

Theorem 6. For any family of matrices Algorithm 2 determines in finite time which of the inequalities $\sigma(\mathscr A) > 0$ and $\sigma(\mathscr A)\leqslant 0$ is true. In the case when $\sigma(\mathscr A)=0$ the algorithm constructs an invariant norm.

Figure 20 illustrates the work of Algorithm 2.

§ 10. Classification of invariant Lyapunov norms

The algorithm for constructing invariant Lyapunov norms presented in §§ 8 and 9 actually provides a complete classification of such norms. If there is at least one singular matrix in the system under consideration and $\sigma(\mathscr A)=0$, then the invariant norm of $\mathscr A$ need not be unique. Uniqueness depends on the presence of dominant matrices. Recall that $A_i$ is dominant if $\sigma(A_i)= \sigma(\mathscr A)$ and $\sigma(\mathscr A)$ is an eigenvalue of the matrix $A_i$. We also recall that the family of matrices $\mathscr A$ is assumed to be irreducible.

Theorem 7. All invariant norms of an irreducible finite family are obtained using Algorithm 2.

Proof. Consider an arbitrary invariant norm and let $G$ be its unit sphere. According to Theorem 1, an invariant body has a piecewise analytic boundary; therefore, there is only a finite number of switch points. We fix an arbitrary switch point $X_0$ on the boundary of $G$ and start implementing Algorithm 2. If there is no singular matrix among $ \{A_1,A_2,\dots,A_N \}$, then, in accordance with the algorithm, the choice of the matrix $A(t)$ at any point $\boldsymbol{x}(t) \in \partial G$ is unambiguous. Therefore, Algorithm 2 produces precisely the body $G$. Thus, when there are no singular matrices in the family $\mathscr A$, the theorem is proved. If there are some, then to prove the theorem it is enough to show that there cannot be more than one umbrella switching between two kernels. Otherwise there are two umbrella switchings with no kernels between them. Let us draw the extremal trajectories to both sides of each of these umbrella switchings. These trajectories contain the part of the boundary of the body $G$ enclosed between these umbrella switchings, and they must meet at some point of the boundary, at which there is an umbrella switching again. Thus, there is at least one more umbrella switching between any pair of umbrella switchings that do not have kernels between them. Consequently, there must be infinitely many switchings, which contradicts Theorem 1. So there can be no more than one umbrella switching between two kernels. For each umbrella switching the trajectories in both directions are drawn uniquely till the intersections with the kernels. Let us denote by $x_i$ the distances from the origin to the points of intersection of the boundary of $G$ with the kernels. According to Lemma 8, all the $x_i$ satisfy the system of inequalities (10). On the other hand, for any numbers $x_i$ satisfying system (10) the algorithm constructs an appropriate invariant body, which coincides with the body $G$ in this case.

This completes the proof.

Now we turn to the uniqueness of an invariant body for a given system $\mathscr A$. Uniqueness does not always hold, as we saw in § 6.1, where the invariant parallelogram is not unique. In that example the system consists of two dominant matrices. It turns out that if a system with an arbitrary number of matrices has at most one dominant matrix, then uniqueness holds. The following theorem gives a complete answer to the uniqueness problem.

Theorem 8. If a system contains at most one dominant matrix, then the invariant body is unique.

If the system contains at least two dominant matrices, then the invariant body is unique if and only if at least one of the following three conditions holds:

1) the inequality $\prod_{j=1}^n b_j \geqslant 1$ in Lemma 8 holds with equality;

2) the inequality $\prod_{j=1}^n a_j \geqslant 1$ in Lemma 8 holds with equality;

3) all the inequalities $b_i a_{i+1} \geqslant 1$, $i=1,\dots,n$, in Lemma 8 (except possibly one) hold with equality.

Proof. We may assume that after a suitable shift all dominant matrices become singular. First let there be no dominant matrices in the system. In this case the invariant body is constructed in one direction, by choosing the highest vector $\boldsymbol x(t)$ as a tangent to the body. Note that there cannot be two invariant bodies of the same system that correspond to two different directions (Figure 21). Indeed, otherwise there is a point $X$ at which the central angle between some petals $A_i \boldsymbol{x}$ and $A_j \boldsymbol{x}$ is greater than $\pi$, which means that $\sigma(\mathscr A) > 0$.

Thus, if there are no dominant matrices, then the invariant body is unique.

If there are dominant matrices in the system, then according to Lemma 8 each solution $x_i$, $i=1,\dots,n$, of system (10) generates an invariant body of the system. The case when the system contains exactly one dominant matrix reduces to Corollary 3, and the procedure for constructing an invariant norm, similar to that in § 6.2.8, determines the trajectory uniquely.

If at least one of the inequalities in 1) or 2) holds with equality, then system (10) has a unique solution $x_i$, $i=1,\dots,n$, up to multiplication by a constant; therefore, the invariant body is unique. Indeed, let $\prod_{j=1}^n b_j=1$ without loss of generality. Multiplying all the inequalities

$$ \begin{equation} \frac{1}{a_{i+1}} \leqslant \frac{x_{i+1}}{x_i} \leqslant b_i, \qquad i=1,\dots,n, \end{equation} \tag{12} $$
we obtain
$$ \begin{equation*} \prod_{j=1}^n \frac{1}{a_j} \leqslant 1 \leqslant \prod_{j=1}^n b_j, \end{equation*} \notag $$
because the middle terms telescope: $\prod_{i=1}^n ({x_{i+1}}/{x_i}) = 1$ (the indices are cyclic, so that $x_{n+1}=x_1$). As $\prod_{j=1}^n b_j=1$ and each ratio $x_{i+1}/x_i$ does not exceed $b_i$, all the right-hand inequalities in (12) hold with equality, and we obtain the unique solution of the system (up to a common factor).

Now let all the inequalities in 3) (except possibly one) hold with equality. Then all the quantities $S_i={x_{i+1}}/{x_i}$ (except possibly one of them) are uniquely determined. The remaining $S_i$ is also uniquely determined, since the product of all the $S_i$ is equal to 1.

This completes the proof.
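For illustration only, the three conditions of Theorem 8 are easy to test numerically once the numbers $a_i$ and $b_i$ of Lemma 8 are known. The sketch below assumes these numbers are supplied by the user (computing them from the matrices is part of Algorithm 2 and is not reproduced here); the function name and the tolerance are hypothetical.

```python
import math

def invariant_body_is_unique(a, b, tol=1e-12):
    """Test conditions 1)-3) of Theorem 8 for a system with at least two
    dominant matrices, given the numbers a_1..a_n and b_1..b_n of Lemma 8
    (hypothetical inputs supplied by the user)."""
    n = len(a)
    cond1 = math.isclose(math.prod(b), 1.0, abs_tol=tol)  # prod b_j = 1
    cond2 = math.isclose(math.prod(a), 1.0, abs_tol=tol)  # prod a_j = 1
    # Condition 3): b_i * a_{i+1} = 1 for all i except possibly one;
    # the indices are cyclic, so a_{n+1} = a_1.
    strict = sum(
        not math.isclose(b[i] * a[(i + 1) % n], 1.0, abs_tol=tol)
        for i in range(n)
    )
    cond3 = strict <= 1
    return cond1 or cond2 or cond3
```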

Remark 1. Chitour, Gaye and Mason [10] gave an example of a system of two matrices which has a unique invariant body that is nevertheless not strictly convex. This and other examples were discussed in more detail by Morris in [11]. Algorithm 2 provides a series of similar examples.

Remark 2. All constructions of invariant bodies and calculations of Lyapunov exponents are carried out under the assumption that the matrices have no common invariant subspace. In the case of two singular matrices this means that these matrices have different kernels. However, if there are three matrices or more, then, generally speaking, some pairs of singular matrices can have common kernels (Figure 22). In this case we consider the highest petals approaching the kernel from the right and from the left, and their union is called the image of a new singular matrix. We consider these matrices instead of the original ones. Note that Algorithm 2 still works in the same way.

§ 11. Numerical results

Example 1. For a system $\mathscr A=\{A_1, A_2, A_3\}$ let

$$ \begin{equation*} A_1=\begin{pmatrix} 0& \ -1\\ 2& 0 \end{pmatrix}, \qquad A_2=\begin{pmatrix} 0& \ -2\\ 1& 0 \end{pmatrix}\quad\text{and} \quad A_3=\begin{pmatrix} 0.2& \ -1\\ 1& 0 \end{pmatrix}. \end{equation*} \notag $$
Algorithm 2 outputs $\sigma(\mathscr A)=0.325\dots$ . The invariant body (which is unique according to Theorem 8) has six switch points and two arcs generated by each matrix (Figure 23). The bisection method performs six iterations.
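As a rough independent check of this value (and not as a substitute for Algorithm 2), one can simulate a greedy switching law which at every moment chooses the matrix with the largest instantaneous radial growth rate $\langle A\boldsymbol x, \boldsymbol x\rangle/|\boldsymbol x|^2$. Every admissible switching law produces a growth rate not exceeding $\sigma(\mathscr A)$, so the simulation yields only a lower estimate; the step size, horizon and function name below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

A1 = np.array([[0.0, -1.0], [2.0, 0.0]])
A2 = np.array([[0.0, -2.0], [1.0, 0.0]])
A3 = np.array([[0.2, -1.0], [1.0, 0.0]])

def greedy_growth_rate(matrices, T=200.0, dt=1e-3, x0=None):
    """Rough lower estimate of sigma(A): at each step switch to the matrix
    maximizing the instantaneous radial growth <A x, x>/|x|^2, then advance
    the trajectory by the matrix exponential expm(A*dt).  Any switching law
    gives a growth rate <= sigma(A), so this checks the value from below."""
    steps = [expm(A * dt) for A in matrices]
    x = np.array([1.0, 0.0]) if x0 is None else np.asarray(x0, dtype=float)
    log_growth = 0.0
    for _ in range(int(round(T / dt))):
        k = max(range(len(matrices)), key=lambda i: x @ matrices[i] @ x)
        x = steps[k] @ x
        r = np.linalg.norm(x)
        log_growth += np.log(r)
        x = x / r
    return log_growth / T

# Expected to be close to (and not above) the value 0.325... reported above.
print(greedy_growth_rate([A1, A2, A3]))
```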

The following example is borrowed from [18].

Example 2. Only four of the five matrices take part in the construction of the invariant body, that is, generate arcs on its boundary (Figure 24). The body looks like an ellipsoid, but in fact it is not one, because its boundary contains 10 switch points. The system consists of the matrices

$$ \begin{equation*} \begin{gathered} \, A_1=\begin{pmatrix} 0& \ 5 \\ -30& -1 \end{pmatrix}, \qquad A_2=\begin{pmatrix} 0& \ 5\\ -26& -1 \end{pmatrix}, \qquad A_3=\begin{pmatrix} -6& \ 27\\ -150& -1 \end{pmatrix}, \\ A_4=\begin{pmatrix} -1& \ -1\\ 1& -1 \end{pmatrix}\quad\text{and} \quad A_5=\begin{pmatrix} -2& \ 1\\ 2& -3 \end{pmatrix}. \end{gathered} \end{equation*} \notag $$
A small ‘discrepancy’ on the right-hand side of the figure is explained by the numerical error in the computation of $\sigma(\mathscr A)$.
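Assuming the hypothetical greedy_growth_rate sketch from Example 1 is in scope, the same rough lower estimate of $\sigma(\mathscr A)$ can be computed for this family; a smaller step and a shorter horizon are used because some of the matrices have large entries.

```python
B1 = np.array([[0.0, 5.0], [-30.0, -1.0]])
B2 = np.array([[0.0, 5.0], [-26.0, -1.0]])
B3 = np.array([[-6.0, 27.0], [-150.0, -1.0]])
B4 = np.array([[-1.0, -1.0], [1.0, -1.0]])
B5 = np.array([[-2.0, 1.0], [2.0, -3.0]])

# Rough lower estimate only: a sanity check of the geometric construction,
# not a reproduction of Algorithm 2 or of the figure.
print(greedy_growth_rate([B1, B2, B3, B4, B5], T=50.0, dt=1e-4))
```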

Acknowledgements

The author expresses her gratitude to the two anonymous referees for many valuable comments and suggestions. The author also expresses her deep gratitude to Professor V. Yu. Protasov for setting the problem and for his constant attention to her work.


Bibliography

1. D. Liberzon, Switching in systems and control, Systems Control Found. Appl., Birkhäuser Boston, Inc., Boston, MA, 2003, xiv+233 pp.
2. D. Liberzon and A. S. Morse, “Basic problems in stability and design of switched systems”, IEEE Control Syst. Mag., 19:5 (1999), 59–70
3. F. Blanchini and S. Miani, “A new class of universal Lyapunov functions for the control of uncertain linear systems”, IEEE Trans. Automat. Control, 44:3 (1999), 641–647
4. U. Boscain, “A review on stability of switched systems for arbitrary switchings”, Geometric control and nonsmooth analysis, Ser. Adv. Math. Appl. Sci., 76, World Sci. Publ., Hackensack, NJ, 2008, 100–119
5. A. P. Molchanov and Ye. S. Pyatnitskiy, “Criteria of asymptotic stability of differential and difference inclusions encountered in control theory”, Systems Control Lett., 13:1 (1989), 59–64
6. F. Blanchini and S. Miani, “Piecewise-linear functions in robust control”, Robust control via variable structure and Lyapunov techniques (Benevento 1994), Lect. Notes Control Inf. Sci., 217, Springer-Verlag London, Ltd., London, 1996, 213–243
7. N. Guglielmi, L. Laglia and V. Protasov, “Polytope Lyapunov functions for stable and for stabilizable LSS”, Found. Comput. Math., 17:2 (2017), 567–623
8. N. E. Barabanov, “Absolute characteristic exponent of a class of linear nonstationary systems of differential equations”, Siberian Math. J., 29:4 (1988), 521–530
9. R. N. Shorten and K. S. Narendra, “Necessary and sufficient conditions for the existence of a common quadratic Lyapunov function for two stable second order linear time-invariant systems”, Proceedings of the 1999 American control conference (San Diego, CA), v. 2, IEEE, 1999, 1410–1414
10. Y. Chitour, M. Gaye and P. Mason, “Geometric and asymptotic properties associated with linear switched systems”, J. Differential Equations, 259:11 (2015), 5582–5616
11. I. D. Morris, An irreducible linear switching system whose unique Barabanov norm is not strictly convex, arXiv: 2301.09942v1
12. D. V. Shatov, “Stability of a certain class of second order switched systems”, 20th IFAC conference on technology, culture, and international stability TECIS 2021 (Moscow 2021), IFAC-PapersOnLine, 54:13 (2021), 425–430
13. D. V. Shatov, “Analysis of stability and dwell time of a certain class of switched systems with second order subsystems”, 16th international conference on stability and oscillations of nonlinear control systems (Pyatnitskiy's conference) (Moscow 2022), IEEE, 2022, 1–4
14. M. Margaliot, “Stability analysis of switched systems using variational principles: an introduction”, Automatica J. IFAC, 42:12 (2006), 2059–2077
15. M. Balde, U. Boscain and P. Mason, “A note on stability conditions for planar switched systems”, Internat. J. Control, 82:10 (2009), 1882–1888
16. M. Balde and U. Boscain, “Stability of planar switched systems: the nondiagonalizable case”, Commun. Pure Appl. Anal., 7:1 (2008), 1–21
17. M. Benaïm, S. Le Borgne, F. Malrieu and P. Zitt, “On the stability of planar randomly switched systems”, Ann. Appl. Probab., 24:1 (2014), 292–311
18. L. Greco, F. Tocchini and M. Innocenti, “A geometry-based algorithm for the stability of planar switching systems”, Internat. J. Systems Sci., 37:11 (2006), 747–761
19. Shen Cong, “Stability analysis for planar discrete-time linear switching systems via bounding joint spectral radius”, Systems Control Lett., 96 (2016), 7–10
20. V. Yu. Protasov, “Joint spectral radius and invariant sets of linear operators”, Fundam. Prikl. Mat., 2:1 (1996), 205–231 (Russian)
21. V. Yu. Protasov, “On the regularity of de Rham curves”, Izv. Math., 68:3 (2004), 567–606
22. V. D. Blondel, J. Theys and A. A. Vladimirov, “An elementary counterexample to the finiteness conjecture”, SIAM J. Matrix Anal. Appl., 24:4 (2003), 963–979
23. V. S. Kozyakin, “On the computational aspects of the theory of joint spectral radius”, Dokl. Math., 80:1 (2009), 487–491
24. K. G. Hare, I. D. Morris, N. Sidorov and J. Theys, “An explicit counterexample to the Lagarias–Wang finiteness conjecture”, Adv. Math., 226:6 (2011), 4667–4701
25. I. D. Morris and N. Sidorov, “On a devil's staircase associated to the joint spectral radii of a family of pairs of matrices”, J. Eur. Math. Soc. (JEMS), 15:5 (2013), 1747–1782
26. A. Cicone, N. Guglielmi, S. Serra-Capizzano and M. Zennaro, “Finiteness property of pairs of $2 \times 2$ sign-matrices via real extremal polytope norms”, Linear Algebra Appl., 432:2–3 (2010), 796–816
27. I. Ya. Novikov, V. Yu. Protasov and M. A. Skopina, Wavelet theory, Transl. Math. Monogr., 239, Amer. Math. Soc., Providence, RI, 2011, xiv+506 pp.
28. V. Yu. Protasov, “Fractal curves and wavelets”, Izv. Math., 70:5 (2006), 975–1013
29. J. N. Tsitsiklis and V. D. Blondel, “The Lyapunov exponent and joint spectral radius of pairs of matrices are hard — when not impossible — to compute and to approximate”, Math. Control Signals Systems, 10:1 (1997), 31–40
30. V. Yu. Protasov, “The Barabanov norm is generically unique, simple, and easily computed”, SIAM J. Control Optim., 60:4 (2022), 2246–2267
