Abstract:
A survey of the results concerning the development of the theory of Hamilton–Jacobi equations for hereditary dynamical systems is presented. One feature of these systems is that the rate of change of their state depends not only on the current position, as in the classical case, but also on the entire path travelled, that is, the history of the motion. Most of the paper is devoted to dynamical systems whose motion is described by functional differential equations of retarded type. In addition, more general systems described by functional differential equations of neutral type and closely related systems described by differential equations with fractional derivatives are considered. So-called path-dependent Hamilton–Jacobi equations are treated, which play for the above classes of systems a role similar to that of the classical Hamilton–Jacobi equations in dynamic optimization problems for ordinary differential systems. In the context of applications to control problems, the main attention is paid to the minimax approach to the concept of a generalized solution of the Hamilton–Jacobi equations under consideration and also to its relationship with the viscosity approach. Methods for designing optimal feedback control strategies with memory of motion history which are based on the constructions discussed are presented.
Bibliography: 183 titles.
The study by N. Yu. Lukoyanov was supported by the Ministry of Science and Higher Education of the Russian Federation under the project “Ural Mathematical Center” (agreement no. 075-02-2023-935).
Hamilton–Jacobi equations are now usually understood as first-order partial differential equations. In classical mechanics a Hamilton–Jacobi equation occurs, for example, as an equation for the generating function in the theory of canonical transformations of Hamiltonian systems (see, for example, [43], Chap. IV, and [4], § 47) and also as an equation holding for the action as a function of time and the position at the end of motion of a mechanical system (see, for example, [104], § 47). In a similar way a Hamilton–Jacobi equation occurs in variational calculus as an equation satisfied by the minimum in the simplest variational problem as a function of the upper limit of integration and the boundary value at the right-hand endpoint (see, for example, [45], § 19, and [177], § 26). The development of the modern theory of Hamilton–Jacobi equations is largely related to extremal problems in dynamics such as optimal control problems and differential games.
We recall briefly how a Hamilton–Jacobi equation occurs in one canonical optimal control problem (for greater detail, see, for example, [177], § 8, and [107], § 5.1).
We consider a dynamical system whose motion is described by the differential equation
$$
\begin{equation}
\dot{y}(\tau)= f(\tau, y(\tau), u(\tau)),\qquad \tau \in [0, T],\quad u(\tau) \in P,
\end{equation}
\tag{1.1}
$$
where $\tau$ is the current time, $T > 0$ is a fixed terminal instant, $y(\tau)$ is the state of the system at $\tau$, $\dot{y}(\tau)={\rm d}y(\tau)/{\rm d}\tau$ is the rate of change in this state, $u(\tau)$ is the current control action and $P$ is a compact set describing the geometric constraints on its value. We assume that an initial position $(t, x) \in [0, T] \times \mathbb{R}^n$ of the system (1.1) is prescribed, where $t$ is the initial instant and $x$ is the initial state at this instant. Thus, we obtain the initial condition $y(t) = x$ for differential equation (1.1). An admissible (program) control on the interval $[t, T]$ is an arbitrary (Lebesgue-)measurable function $u \colon [t, T] \to P$. We let $\mathcal{U}(t)$ denote the set of all admissible controls. By choosing a control $u(\,{\cdot}\,) \in \mathcal{U}(t)$ we need to minimize the quality index
where the function $y \colon [t, T] \to \mathbb{R}^n$ is the motion of system (1.1) generated by $u(\,{\cdot}\,) \in \mathcal{U}(t)$ from the initial position $(t, x)$. More precisely, $y(\,{\cdot}\,)$ is a Lipschitz-continuous function (that is, $y(\,{\cdot}\,) \in \operatorname{Lip}([t, T]; \mathbb{R}^n)$) satisfying the equality $y(t) = x$ and, for almost every $\tau \in [t, T]$, the differential equation (1.1) for the chosen control $u(\,{\cdot}\,)$. The functions
$$
\begin{equation*}
f \colon [0, T] \times \mathbb{R}^n \times P \to \mathbb{R}^n,\quad \sigma \colon \mathbb{R}^n \to \mathbb{R},\quad \chi \colon [0, T] \times \mathbb{R}^n \times P \to \mathbb{R}
\end{equation*}
\notag
$$
are given; here $f$ determines the right-hand side of equation (1.1), while $\sigma$ and $\chi$ determine the quality index (1.2).
When the initial position $(t, x)$ is not fixed, we obtain the optimal result function $\rho \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$, which is also called the value function. By virtue of its definition, this function possesses the following property, called the dynamic programming principle: for any initial position $(t, x) \in [0, T) \times \mathbb{R}^n$ and any instant $\vartheta \in (t, T]$,
If the value function $\rho$ turns out to be continuously differentiable, then, by the chain rule (differentiation of a composite function), for almost every $\tau \in [t, T]$ we have
where $\langle\,\cdot\,{,}\,\cdot\,\rangle$ is the scalar product in $\mathbb{R}^n$. Based on (1.4) and (1.5), we conclude that the function $\rho$ satisfies the following Hamilton–Jacobi equation (which is also called Bellman’s equation or the Hamilton–Jacobi–Bellman equation):
Hence the value function $\rho$ is a solution of the Cauchy problem for the Hamilton–Jacobi equation (1.6) with boundary condition (1.8).
A result which is converse in a certain sense, and which is sometimes called the verification theorem, is important for applications. To be precise, if a continuously differentiable function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is a solution of the Cauchy problem (1.6), (1.8) with Hamiltonian (1.7), then this function coincides with the value function $\rho$ of the optimal control problem (1.1), (1.2). In this case an optimal feedback control strategy, which makes it possible to generate $\varepsilon$-optimal controls for any a priori fixed $\varepsilon > 0$, can be constructed by extremal aiming in the direction of the gradient $\partial \varphi / \partial x$ of the function $\varphi$:
For more details about optimal feedback control strategies, see § 2.5.
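To make the extremal-aiming construction concrete, here is a small numerical sketch. All data are hypothetical and chosen so that the value function of the toy problem is known in closed form and is continuously differentiable, so the gradient in (1.9) exists everywhere:

```python
T = 1.0  # horizon; toy dynamics dy/dtau = u, |u| <= 1, cost sigma(y(T)) = y(T)^2

def rho(t, x):
    # Smooth (C^1) value function of this toy problem:
    # rho(t, x) = (max(|x| - (T - t), 0))^2.
    m = max(abs(x) - (T - t), 0.0)
    return m * m

def grad_x(t, x):
    # Spatial gradient d(rho)/dx; well defined everywhere, since rho is C^1.
    m = max(abs(x) - (T - t), 0.0)
    return 2.0 * m * (1.0 if x >= 0.0 else -1.0)

def aiming_control(t, x):
    # Extremal aiming: u in argmin over |u| <= 1 of <grad_x, f> = grad_x * u.
    g = grad_x(t, x)
    return -1.0 if g > 0.0 else (1.0 if g < 0.0 else 0.0)

def realized_cost(t0, x0, n=1000):
    # Euler integration of the closed-loop system under the aiming strategy.
    dt = (T - t0) / n
    y = x0
    for k in range(n):
        y += dt * aiming_control(t0 + k * dt, y)
    return y * y  # realized value of the quality index

print(realized_cost(0.0, 1.5), rho(0.0, 1.5))  # both close to 0.25
```

The closed-loop cost reproduces the value $\rho(0, 1.5) = 0.25$ up to the discretization error, illustrating the verification theorem in this smooth case.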
We emphasize that the above approach to the optimal control problem (1.1), (1.2), which is based on the dynamic programming principle and leads to Hamilton–Jacobi equations, is only one of the possible approaches. It is aimed directly at constructing optimal feedback control strategies. Another well-known approach is related to Pontryagin’s maximum principle [147]. In this case the emphasis is put on seeking an optimal program control for a fixed initial position $(t, x)$. For the relationship between the Hamilton–Jacobi equation and Pontryagin’s maximum principle in the optimal control problem (1.1), (1.2), see, for example, [40], Theorem 8.1, and also [168].
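On a discretized toy instance of problem (1.1), (1.2) the dynamic programming route can be carried out literally as a backward recursion; all concrete data below are hypothetical:

```python
# Hypothetical data: dy/dtau = u with u in {-1, 0, 1}, no running cost,
# terminal cost sigma(x) = x^2, horizon T = 1; the state-grid spacing
# equals the time step, so one control step moves exactly one grid cell.
T, n = 1.0, 100
dt = T / n
m = 401
xs = [-2.0 + i * dt for i in range(m)]   # grid over [-2, 2]
value = [x * x for x in xs]              # boundary condition: rho(T, x) = sigma(x)

for _ in range(n):                       # backward sweep: rho(t) from rho(t + dt)
    value = [min(value[max(0, min(m - 1, i + u))] for u in (-1, 0, 1))
             for i in range(m)]

# For these data the value function is known: rho(0, x) = max(|x| - T, 0)^2.
print(value[350])   # x = 1.5: close to 0.25
print(value[200])   # x = 0.0: close to 0.0
```

Each backward sweep is the discrete form of the dynamic programming principle: the value at $t$ is the minimum over controls of the value at $t + \Delta t$ along the corresponding step of the dynamics.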
Dynamic programming constructions become of fundamental importance in the theory of differential games, in which control problems are formalized in the presence of disturbances or counteraction. In problems of this type it is essential, when choosing the current control action, to use additional information on the progress of the process.
As above for the optimal control problem (1.1), (1.2), we illustrate the role of Hamilton–Jacobi equations using the following zero-sum differential game between two players.
Assume that the motion of a dynamical system is described by the differential equation
where $u(\tau)$ and $v(\tau)$ are the current control actions of the first and the second player, respectively, and an initial position $(t, x) \in [0, T] \times \mathbb{R}^n$ is fixed. We let $\mathcal{U}(t)$ and $\mathcal{V}(t)$ denote the sets of admissible (program) controls of the players. The first (second) player aims at minimizing (maximizing) the quality index
A non-anticipating strategy (a quasi-strategy in another terminology) of the first player is a mapping $\boldsymbol{\alpha} \colon \mathcal{V}(t) \to \mathcal{U}(t)$ with the following property (which is often called the property of non-anticipation or physical feasibility): for any functions $v_1(\,{\cdot}\,),v_2(\,{\cdot}\,) \in \mathcal{V}(t)$ and any instant $\vartheta \in [t, T]$, if $v_1(\tau) = v_2(\tau)$ for almost all $\tau \in [t, \vartheta]$, then $\boldsymbol{\alpha}(v_1(\,{\cdot}\,))(\tau) = \boldsymbol{\alpha}(v_2(\,{\cdot}\,))(\tau)$ for almost all $\tau \in [t, \vartheta]$. Let $\boldsymbol{\mathcal{A}}(t)$ be the set of all strategies of this type. We consider the lower value of the game (in another terminology, the optimal guaranteed result of the first player in the class of quasi-strategies $\boldsymbol{\mathcal{A}}(t)$)
The upper value of the game (the optimal guaranteed result of the second player in the class $\boldsymbol{\mathcal{B}}(t)$ of quasi-strategies $\boldsymbol{\beta} \colon \mathcal{U}(t) \to \mathcal{V}(t)$) is defined symmetrically by
For the differential game (1.10), (1.11), the dynamic programming principle is expressed by the following equalities, valid for all $(t, x) \in [0, T) \times \mathbb{R}^n$ and $\vartheta \in (t, T]$ (see, for example, [36], Theorem 3.1, and also [41], Chap. XI, Theorem 5.1, and [176], Theorem 3.3.5):
Assume that the function $\rho_- \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Then, on the basis of (1.5), from the first equality in (1.14) we derive (see, for example, [41], Chap. XI, Lemma 6.2)
$$
\begin{equation}
\begin{gathered} \, H_-(t, x, s)= \max_{v \in Q}\,\min_{u \in P} \bigl( \langle s, f(t, x, u, v) \rangle - \chi(t, x, u, v) \bigr), \\ t \in [0, T], \quad x, s \in \mathbb{R}^n. \end{gathered}
\end{equation}
\tag{1.15}
$$
Thus, the function $\rho_-$ satisfies the Hamilton–Jacobi equation (1.6) with Hamiltonian (1.15). In a similar way, if $\rho_+ \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable, then by the second equality in (1.14) it satisfies (1.6) with Hamiltonian
$$
\begin{equation}
\begin{gathered} \, H_+(t, x, s)= \min_{u \in P}\,\max_{v \in Q} \bigl(\langle s, f(t, x, u, v) \rangle - \chi(t, x, u, v) \bigr), \\ t \in [0, T], \quad x, s \in \mathbb{R}^n. \end{gathered}
\end{equation}
\tag{1.16}
$$
Equation (1.6) with the Hamiltonian (1.15) (or (1.16)) is also called Isaacs’s equation or the Hamilton–Jacobi–Bellman–Isaacs equation. According to (1.12) and (1.13), the functions $\rho_-$ and $\rho_+$ satisfy the boundary condition (1.8).
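For finite control sets the lower and upper Hamiltonians (1.15) and (1.16) can be compared directly. The following sketch, with hypothetical scalar dynamics and $\chi \equiv 0$, checks that $H_-$ never exceeds $H_+$ and exhibits a case where the two differ:

```python
# Lower and upper Hamiltonians (1.15), (1.16) for scalar dynamics and
# finite control sets, with chi = 0 (all data here are hypothetical).
def H_minus(s, f, P, Q):
    return max(min(s * f(u, v) for u in P) for v in Q)

def H_plus(s, f, P, Q):
    return min(max(s * f(u, v) for v in Q) for u in P)

P = Q = (-1.0, 1.0)
sep = lambda u, v: u + v   # separable dynamics: the two Hamiltonians coincide
mix = lambda u, v: u * v   # multiplicative coupling: they differ

for s in (-2.0, -0.5, 1.0, 3.0):
    assert H_minus(s, sep, P, Q) == H_plus(s, sep, P, Q)
    assert H_minus(s, mix, P, Q) <= H_plus(s, mix, P, Q)

print(H_minus(1.0, mix, P, Q), H_plus(1.0, mix, P, Q))  # prints -1.0 1.0
```

For the separable dynamics the maximin and minimax coincide for every $s$, while for the multiplicative coupling they bracket a genuine gap.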
If Isaacs’s condition ([72], § 2.4; it is also called the saddle-point condition in the small game [96], p. 79)
$$
\begin{equation}
H_-(t, x, s)= H_+(t, x, s),\qquad t \in [0, T],\quad x, s \in \mathbb{R}^n,
\end{equation}
\tag{1.17}
$$
holds, then the value function $\rho$ must satisfy (1.6) with the Hamiltonian
The converse is also true (in this connection see, for example, [96], § 15, and [162], § 11.5): if there is a continuously differentiable function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying this equation and the boundary condition (1.8), then the differential game has the value $\rho(t, x) = \varphi(t, x)$, $(t,x) \in [0,T] \times \mathbb{R}^n$. In this case, by extremal aiming in the direction of the gradient of $\varphi$ we can construct optimal feedback control strategies of the form
$$
\begin{equation}
\begin{gathered} \, \begin{aligned} \, U^\circ(\tau, y)& \in \operatorname*{arg\,min}_{u \in P}\, \max_{v \in Q} \biggl( \biggl\langle \frac{\partial \varphi}{\partial x} (\tau, y), f(\tau, y, u, v) \biggr\rangle - \chi(\tau, y, u, v) \biggr), \\ V^\circ(\tau, y)& \in \operatorname*{arg\,max}_{v \in Q}\, \min_{u \in P}\biggl( \biggl\langle \frac{\partial \varphi} {\partial x} (\tau, y), f(\tau, y, u, v) \biggr\rangle - \chi(\tau, y, u, v) \biggr), \end{aligned} \\ (\tau, y) \in [0, T] \times \mathbb{R}^n, \end{gathered}
\end{equation}
\tag{1.19}
$$
which allow each player, regardless of the opponent’s control, to guarantee that the corresponding value of the quality index (1.11) is not worse than the game value $\rho$, with any prescribed accuracy $\varepsilon > 0$ (for more details, see § 2.5).
For details concerning the derivation of Hamilton–Jacobi equations in variational calculus, optimal control, and differential games, see, for example, [153], §§ 3.1–3.3.
We emphasize that the key assumption in the above arguments, which makes it possible to use Hamilton–Jacobi equations to study problems in optimal control and differential games and to construct optimal control strategies of the form (1.9) and (1.19), is the existence of a continuously differentiable solution of the Cauchy problem (1.6), (1.8) or, equivalently, the continuous differentiability of the value function. However, as a rule, the Cauchy problem for nonlinear Hamilton–Jacobi equations has no classical (continuously differentiable) global solution. This can be the case even when the Hamiltonian $H$ and the boundary function $\sigma$ are infinitely differentiable (see an example in [162], § 1.4). At the same time there are meaningful problems in optimal control theory and differential games in which the value function is not smooth. In particular, in problems of this type formulae (1.9) and (1.19) cannot be applied directly to the construction of optimal control strategies, since the gradient $\partial \varphi(\tau, y)/\partial x$ may fail to exist at some points $(\tau, y)$.
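A simple one-dimensional illustration of such non-smoothness (the data are chosen here purely for transparency): consider the system $\dot{y}(\tau) = u(\tau)$ with $|u(\tau)| \leqslant 1$ and the quality index $\sigma(y(T)) = -y(T)^2$, to be minimized. Since it is optimal to move away from the origin at maximal speed, the value function is
$$
\begin{equation*}
\rho(t, x)= -\bigl(|x| + T - t\bigr)^2,\qquad (t, x) \in [0, T] \times \mathbb{R}.
\end{equation*}
\notag
$$
Although the boundary function $\sigma$ is infinitely differentiable, $\rho$ is not differentiable with respect to $x$ at $x = 0$ for any $t < T$: there the two controls $u \equiv 1$ and $u \equiv -1$ are equally good, which produces a corner.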
These facts have in many aspects prompted the development of the theory of generalized (non-smooth) solutions of Hamilton–Jacobi equations that, in particular, must satisfy the following natural conditions:
There are several approaches to the concept of a generalized solution of various boundary value problems for Hamilton–Jacobi equations (see, for example, [101], [81], [159], [109], [27], [89], [160], and [168]). In our opinion, two of them stand out and have indeed received the greatest development: the minimax approach and the viscosity approach.
The minimax approach is based on the theory of positional differential games (see, for instance, [99], [163], [96], [100], and [90]) and can be regarded as a development of the unification constructions in this theory [94], [95] (also see [100], § 10.5, and [171], [167], and [174]). Within this approach [159], [160], [162] a generalized (minimax) solution is defined as a function satisfying a pair of non-local conditions that express the properties of the weak invariance of its super- and subgraph under so-called characteristic differential inclusions. In the infinitesimal form these conditions are expressed by inequalities for lower and upper directional derivatives of the minimax solution. The resulting pair of differential inequalities can be regarded as a generalization of the Hamilton–Jacobi equation to the non-smooth case.
The viscosity approach [27], [25] (also see, for instance, [109], [8], [41], and [173]) is conceptually related to the vanishing viscosity method and to comparison theorems in mathematical physics. According to this approach, a generalized (viscosity) solution is defined as a function satisfying a pair of conditions involving smooth test functions. Expressing these conditions in infinitesimal form leads to a generalization of the Hamilton–Jacobi equation in the form of inequalities for sub- and supergradients of the viscosity solution.
Although the concepts of minimax and viscosity solutions have different origins, it is important that they turn out to be equivalent under rather general assumptions (see, for example, [110], [166], [160], [42], [20], and [162]). Furthermore, in the minimax approach special attention is paid to applications of the theory of generalized solutions of Hamilton–Jacobi equations to the development of non-smooth methods for the construction of optimal feedback control strategies.
We also note the ingenious, somewhat separate approach [89], [134] to the definition of generalized solutions of Hamilton–Jacobi equations which is based on constructions of idempotent analysis [133]. The relationship between this approach and the minimax one was considered, for example, in [151].
In this survey we discuss the development of the theory of Hamilton–Jacobi equations for so-called hereditary dynamical systems. One feature of systems of this type is that the rate of change of their state depends not only on the current position, as in the ordinary (classical) case, but also on the entire path travelled, that is, the history of the motion. The Hamilton–Jacobi equations under consideration play for hereditary systems a role similar to the role of (1.6) in motion control problems for ordinary differential systems of the form (1.10). Most attention is paid to the minimax approach to the concept of generalized solution of the Cauchy problems for equations of this type and its relationship with the viscosity approach. We present methods for designing optimal feedback control strategies with memory of the motion history that are based on the constructions discussed here.
A major part of the review is devoted to results obtained for hereditary dynamical systems whose motion is described by retarded functional differential equations of the following form (see, for example, [88] and [71]):
Here the mapping $f \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}^n$, where $h > 0$ and $\operatorname{C}([- h, T]; \mathbb{R}^n)$ is the space of continuous functions from $[-h, T]$ to $\mathbb{R}^n$, has the non-anticipation property: for any functions $y_1(\,{\cdot}\,),y_2(\,{\cdot}\,)\in \operatorname{C}([- h, T]; \mathbb{R}^n)$ and any instant $\tau \in [0, T]$, the equality $y_1(\xi) = y_2(\xi)$ for all $\xi \in [-h, \tau]$ implies that $f(\tau, y_1(\,{\cdot}\,)) = f(\tau, y_2(\,{\cdot}\,))$.
In other words, this property means that, instead of the full function $y(\,{\cdot}\,)$, the right-hand side of (1.20) depends only on $y(\xi)$ for $\xi \in [- h, \tau]$, that is, on the history of the motion $y(\,{\cdot}\,)$ to the current instant $\tau$. In what follows systems of the form (1.20) are called time-delay systems. Typical examples of systems of this type, which often arise in applications, are:
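The role of the history in (1.20) can be seen already in the simplest constant-delay case. The following Euler scheme, with hypothetical data, stores the whole trajectory precisely because the right-hand side at time $\tau$ needs the value $y(\tau - h)$:

```python
# Explicit Euler scheme for the scalar time-delay equation
# dy/dtau = -y(tau - h) with h = 1 and the constant initial history
# y(xi) = 1 for xi in [-h, 0] (a hypothetical toy instance of (1.20)).
h, n_hist = 1.0, 1000
dt = h / n_hist
n = 2 * n_hist                # integrate over [0, 2h]
y = [1.0] * (n_hist + 1)      # y[i] stores y((i - n_hist) * dt)

for k in range(n):
    # At tau = k * dt the delayed value y(tau - h) sits at index k:
    # the right-hand side uses only the history of the motion up to tau.
    y.append(y[-1] - dt * y[k])

# By the method of steps, the exact solution is y(tau) = 1 - tau on [0, h].
print(y[n_hist + 500])        # y(0.5): close to 0.5
print(y[-1])                  # y(2):   close to -0.5
```

On the first delay interval the scheme reproduces the exact linear solution, and the non-anticipation of the right-hand side is visible in the indexing: only past samples of `y` are ever read.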
We also present some results concerning the Hamilton–Jacobi equations for more general systems than (1.20), which are described by functional differential equations of neutral type in Hale’s form (see, for example, [70], [2], [88], and [71]):
where the mapping $g \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}^n$ has a non-anticipation property similar to the above one for $f$.
In addition, we will consider so-called fractional-order systems (see, for example, [152], [136], [146], [82], and [32]) described by differential equations of the form
which are closely related to neutral-type systems (1.24). Here $(^{\mathrm C\!} D^\alpha y)(\tau)$ is the Caputo fractional derivative of order $\alpha \in (0, 1)$ defined by (see, for example, [82], § 2.4, and [32], Chap. 3)
$$
\begin{equation*}
(^{\mathrm C\!} D^\alpha y)(\tau)= \frac{1}{\Gamma(1 - \alpha)} \int_{0}^{\tau} \frac{\dot{y}(\xi)}{(\tau - \xi)^{\alpha}}\,{\rm d}\xi,
\end{equation*}
\notag
$$
where $\Gamma$ is the gamma function.
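Numerically, the Caputo derivative $\frac{1}{\Gamma(1-\alpha)}\int_0^\tau \dot{y}(\xi)(\tau-\xi)^{-\alpha}\,{\rm d}\xi$ can be approximated by the standard L1 scheme (piecewise-linear interpolation of $y$ with exact integration of the weakly singular kernel). The test function $y(\tau) = \tau^2$, whose Caputo derivative is $2\tau^{2-\alpha}/\Gamma(3-\alpha)$, is chosen here only for illustration:

```python
import math

def caputo_l1(y, alpha, tau, n=4000):
    # L1 approximation of the Caputo derivative of order alpha in (0, 1):
    # y is interpolated linearly on a uniform grid with step dt = tau / n,
    # and the kernel (tau - xi)^(-alpha) is integrated exactly on each cell.
    dt = tau / n
    total = 0.0
    for k in range(n):
        slope = (y((k + 1) * dt) - y(k * dt)) / dt
        weight = ((n - k) * dt) ** (1 - alpha) - ((n - k - 1) * dt) ** (1 - alpha)
        total += slope * weight
    return total / math.gamma(2 - alpha)

alpha, tau = 0.5, 1.0
approx = caputo_l1(lambda t: t * t, alpha, tau)
exact = 2.0 * tau ** (2 - alpha) / math.gamma(3 - alpha)
print(approx, exact)   # the two values agree to several decimal digits
```

The error of the L1 scheme is of order $O(\Delta t^{2-\alpha})$, which for these parameters is far below the tolerance used in the comparison.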
Topics related to Hamilton–Jacobi equations and boundary value problems for equations of this type that arise, for example, in time-optimal problems, control problems with infinite horizon, problems with state constraints, or control problems for stochastic dynamical systems are beyond the scope of this review.
Concluding the introductory part, we note that this paper contains quite a lot of various conditions and properties. To make it easier to navigate through them we choose a special system of notation. More precisely, as a rule, the notation for a particular condition consists of three parts: the first part is the number of the section in which the condition appears for the first time; the second part is a letter abbreviation intended to stress the object this condition applies to; the third part is an order number within the relevant group of conditions. For example, condition $(2.\mathrm{CP}.3)$ is stated in § 2, refers to the Cauchy problem studied in that section, and has order number 3.
2. Minimax and viscosity solutions of Hamilton–Jacobi equations
In this section we consider the Cauchy problem for the Hamilton–Jacobi equation
The function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is unknown. The function $H \colon [0, T] \times \mathbb{R}^n \times \mathbb{R}^n \to\mathbb{R}$, which is called the Hamiltonian, and the boundary function $\sigma \colon \mathbb{R}^n \to \mathbb{R}$ are known and satisfy the following conditions.
for all $t \in [0, T]$, $x_1,x_2 \in W$, and $s \in \mathbb{R}^n$.
Throughout what follows $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$.
Conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$ are rather general. In the introduction we indicated the relationship between a Cauchy problem of the form (2.1), (2.2) (see (1.6), (1.8), and (1.18)) and the differential game (1.10), (1.11). These results are certainly true when the functions $f \colon [0, T] \times \mathbb{R}^n \times P \times Q \to \mathbb{R}^n$, $\sigma \colon \mathbb{R}^n \to \mathbb{R}$, and $\chi \colon [0, T] \times \mathbb{R}^n \times P \times Q \to \mathbb{R}$ satisfy the following conditions, which are typical in optimal control theory and differential games.
$$
\begin{equation*}
\|f(\tau, y, u, v)\|\leqslant c (1 + \|y\|),\qquad \tau \in [0, T], \quad y \in \mathbb{R}^n, \quad u \in P, \quad v \in Q.
\end{equation*}
\notag
$$
To date, the theory of minimax and viscosity solutions of the Cauchy problem (2.1), (2.2) has been developed quite thoroughly. Below we present only a few basic concepts and results in this theory. Their development for the case of hereditary dynamical systems is the subject of our survey.
2.1. Definition of a minimax solution
The concept of a minimax solution of the Cauchy problem (2.1), (2.2) is related to constructions in the theory of positional differential games. It was proved in this theory (in this connection, see [96], § 29, [90], § 9, and also [99], § 18, [163], Chap. III, § 2, and [100], § 3.1) that under conditions $(2.\mathrm{DG}.1)$–$(2.\mathrm{DG}.4)$ the differential game (1.10), (1.11) has a value, and the value function is the unique continuous function $\rho \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying the boundary condition (1.8) and the following two conditions.
Conditions $(2.\mathrm{US}.1)$ and $(2.\mathrm{VS}.1)$ are called the conditions of $u$- and $v$-stability of the function $\rho$, respectively. Note that these conditions hold by virtue of the dynamic programming principle (see (1.14)).
In addition, a unification of differential games has been carried out (see [94], [95], and also [100], § 10.5): it has been shown that the game (1.10), (1.11) can be replaced by an equivalent differential game, in the sense that the value functions coincide, which is specified by the Hamiltonian $H$ (see (1.18)) and the function $\sigma$ alone. As a consequence, if $H$ and $\sigma$ coincide for two games of the form (1.10), (1.11), then the value functions also coincide; the specific form of the functions $f$ and $\chi$ and of the sets $P$ and $Q$ plays no role here. In particular, following the unification constructions, we can express properties $(2.\mathrm{US}.1)$ and $(2.\mathrm{VS}.1)$ of $\rho$ as follows.
Hence the value function can be characterized as the unique continuous function $\rho \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying the boundary condition (1.8), which is specified by the function $\sigma$, and preserving the stability properties $(2.\mathrm{US}.2)$ and $(2.\mathrm{VS}.2)$, which are specified by the Hamiltonian $H$. The following definition of a minimax solution of the Cauchy problem (2.1), (2.2) is based on the above reasoning (in this connection, see [162], § 6.3).
Consider the multivalued mapping $E_0$ defined for all $\tau \in [0, T]$ and $y,s \in \mathbb{R}^n$ by
$$
\begin{equation}
E_0(\tau, y, s)= \bigl\{ (f, \chi) \in \mathbb{R}^n \times \mathbb{R} \colon \|f\| \leqslant c (1 + \|y\|),\, \chi = \langle s, f \rangle - H(\tau, y, s) \bigr\},
\end{equation}
\tag{2.4}
$$
where, as in (2.3), the number $c$ is the same as in $(2.\mathrm{CP}.2)$. Given $(t, x) \in [0, T] \times \mathbb{R}^n$ and $s \in \mathbb{R}^n$, we let $\operatorname{Sol}(t, x, E_0, s)$ denote the set of solutions of the differential inclusion
$$
\begin{equation}
(\dot{y}(\tau), \dot{z}(\tau)) \in E_0(\tau, y(\tau), s),\qquad \tau \in [t, T],
\end{equation}
\tag{2.5}
$$
with the initial conditions
$$
\begin{equation}
y(t)= x \quad\text{and}\quad z(t)= 0,
\end{equation}
\tag{2.6}
$$
that is, the set of pairs $(y(\,{\cdot}\,), z(\,{\cdot}\,)) \in \operatorname{Lip}([t, T]; \mathbb{R}^n) \times \operatorname{Lip}([t, T]; \mathbb{R})$ satisfying equalities (2.6) and, for almost all $\tau \in [t, T]$, inclusion (2.5). Sometimes, the differential inclusion (2.5) is said to be characteristic, while elements of the set $\operatorname{Sol}(t, x, E_0, s)$ are called generalized characteristics of the Hamilton–Jacobi equation (2.1).
Definition 1. An upper minimax solution of the Cauchy problem (2.1), (2.2) is a lower semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that satisfies the boundary condition
Definition 2. A lower minimax solution of the Cauchy problem (2.1), (2.2) is an upper semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that satisfies the boundary condition
Definition 3. A minimax solution of the Cauchy problem (2.1), (2.2) is a function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that is an upper and a lower minimax solution of this problem at the same time.
Note that conditions $(2.\mathrm{MS}.1_+)$ and $(2.\mathrm{MS}.1_-)$, respectively, reflect the property of weak invariance of the super- and subgraph of the function $\varphi$ with respect to the characteristic differential inclusion (2.5).
We can show that it is possible to combine conditions $(2.\mathrm{MS}.1_+)$ and $(2.\mathrm{MS}.1_-)$ in defining a minimax solution of the Cauchy problem (2.1), (2.2). More precisely ([162], Theorem 6.4), a continuous function $\varphi \colon [0,T] \times \mathbb{R}^n \to \mathbb{R}$ is a minimax solution if and only if it satisfies the boundary condition (2.2) and has the following property.
in this sense, property $(2.\mathrm{MS})$ expresses the weak invariance of the graph of the minimax solution with respect to the characteristic differential inclusion (2.5).
2.2. Well-posedness
A key point in the theory of minimax solutions is to prove the existence and uniqueness of a minimax solution of the Cauchy problem (2.1), (2.2). It is conceptually related to proving the existence of a game value (the alternative theorem) in the theory of positional differential games. The result is that the Cauchy problem (2.1), (2.2) has a unique minimax solution under conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$ ([162], Theorem 8.1).
The scheme of the proof can be divided into two parts. In the first ([162], Theorem 8.2) it is shown that the lower and upper closures of the lower envelope of the family of all functions $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying conditions $(2.\mathrm{MS}.1_+)$ and (2.7), respectively, specify an upper and a lower minimax solution, $\varphi_+^\circ$ and $\varphi_-^\circ$, such that
for any upper and lower minimax solutions $\varphi_+$ and $\varphi_-$. The method of Lyapunov functions is used here. An appropriate function is, for example, as follows:
where $\lambda > 0$ is a number from $(2.\mathrm{CP}.3)$ chosen for an appropriate bounded set $W \subset \mathbb{R}^n$, and $\varepsilon > 0$ is a small parameter.
In addition, it turns out (in this connection, see [160], Theorem 4.3) that the minimax solution of the Cauchy problem (2.1), (2.2) depends continuously on the Hamiltonian $H$ and the boundary function $\sigma$ in the following sense. Assume that for each $k \in \mathbb{N} \cup \{0\}$ we have a Hamiltonian $H_k$ and a boundary function $\sigma_k$ satisfying conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$, where the constant $c$ in $(2.\mathrm{CP}.2)$ is independent of $k$. Also assume that for any bounded set $W \subset \mathbb{R}^n$ and any $s \in \mathbb{R}^n$ we have the convergence
which is uniform with respect to $t \in [0, T]$ and $x \in W$. For $k \in \mathbb{N} \cup \{0\}$ we consider the minimax solution $\varphi_k \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ of the Cauchy problem (2.1), (2.2) with $H = H_k$ and $\sigma = \sigma_k$. Then $\varphi_k(t, x) \to \varphi_0(t, x)$ as $k \to \infty$ uniformly with respect to $t \in [0, T]$ and $x \in W$ for any bounded set $W \subset \mathbb{R}^n$.
2.3. Characteristic complexes
To determine whether a function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is a minimax solution of the Cauchy problem (2.1), (2.2), it is often more convenient to verify conditions $(2.\mathrm{MS}.1_+)$ and $(2.\mathrm{MS}.1_-)$ not for the specific characteristic differential inclusion (2.5) but for certain differential inclusions chosen in a special way. We present the corresponding constructions (see [162], § 6.2, and also [160], § 2.4).
Throughout what follows we let $\mathcal{K}(\mathbb{R}^m)$ denote the set of non-empty convex compact subsets of $\mathbb{R}^m$.
For a non-empty set $\Psi$ and a multivalued mapping
Let $(\Psi, E)$ be some characteristic complex, and let $(t, x) \in [0, T] \times \mathbb{R}^n$ and $\psi\in \Psi$. We denote by $\operatorname{Sol}(t, x, E, \psi)$ the set of solutions $(y(\,{\cdot}\,), z(\,{\cdot}\,))$ of the differential inclusion $(\dot{y}(\tau), \dot{z}(\tau)) \in E(\tau, y(\tau), \psi)$, $\tau \in [t, T]$, with the initial conditions (2.6).
It turns out ([162], Theorem 6.4) that if $\varphi$ is a lower semicontinuous function satisfying $(2.\mathrm{MS}.2_+)$ for some upper characteristic complex, then it also satisfies this condition for all other possible upper characteristic complexes. In a similar way, for an upper semicontinuous function $\varphi$, if condition $(2.\mathrm{MS}.2_-)$ holds for some lower characteristic complex, then this condition holds for all lower characteristic complexes.
Thus, in Definitions 1 and 2 of an upper and a lower minimax solution of the Cauchy problem (2.1), (2.2) there is some freedom in the choice of appropriate characteristic complexes, which sometimes makes it possible to take certain properties of a particular Hamiltonian $H$ into account.
For example, when we consider the Hamiltonian $H$ from (1.18), which occurs in the differential game (1.10), (1.11), then we can take the following upper $(Q, E_+)$ and lower $(P, E_-)$ characteristic complexes (see, for example, [99], § 11, and [160], § 14.1), which are closely related to the stability conditions $(2.\mathrm{US}.1)$ and $(2.\mathrm{VS}.1)$:
$$
\begin{equation*}
E_+ (\tau, y, v) = \operatorname{co} \bigl\{(f(\tau, y, u_\ast, v), \chi(\tau, y, u_\ast, v)) \in \mathbb{R}^n \times \mathbb{R} \colon u_\ast \in P \bigr\},
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
E_- (\tau, y, u) = \operatorname{co} \bigl\{ (f(\tau, y, u, v_\ast), \chi(\tau, y, u, v_\ast)) \in \mathbb{R}^n \times \mathbb{R} \colon v_\ast \in Q \bigr\},
\end{equation*}
\notag
$$
$$
\begin{equation*}
\tau \in [0, T], \quad y \in \mathbb{R}^n, \quad u \in P, \quad v \in Q,
\end{equation*}
\notag
$$
where $\operatorname{co}\! A$ denotes the convex hull of the set $A$.
A special place in the theory of minimax solutions belongs to the Hamilton–Jacobi equations (2.1) with Hamiltonian $H$ that is positively homogeneous in the third variable, when conditions $(2.\mathrm{CP}.1)$, $(2.\mathrm{CP}.2)$, and the following conditions, which are stronger than $(2.\mathrm{CP}.3)$, hold. In this case, as the upper and lower characteristic complexes one can take $(S_n, E_+)$ and $(S_n, E_-)$ with
$$
\begin{equation*}
E_+(\tau, y, s) = \bigl\{ f \in \mathbb{R}^n \colon \|f\| \leqslant \sqrt{2}\,c (1 + \|y\|),\, \langle s, f \rangle \geqslant H(\tau, y, s) \bigr\}\times \{0\}\subset \mathbb{R}^n \times \mathbb{R}
\end{equation*}
\notag
$$
and
$$
\begin{equation*}
E_-(\tau, y, s) = \bigl\{ f \in \mathbb{R}^n \colon \|f\| \leqslant \sqrt{2}\,c (1 + \|y\|),\, \langle s, f \rangle \leqslant H(\tau, y, s) \bigr\}\times \{0\}\subset \mathbb{R}^n \times \mathbb{R},
\end{equation*}
\notag
$$
where $\tau \in [0, T]$, $y \in \mathbb{R}^n$, $s \in S_n$, and $c$ is borrowed from $(2.\mathrm{CP}.2)$.
In this connection we also note the case when the Hamiltonian $H$ is originally not homogeneous but the Cauchy problem (2.1), (2.2) can be reduced to an auxiliary Cauchy problem with homogeneous Hamiltonian $\overline{H}$: to be precise, when the following conditions hold along with $(2.\mathrm{CP}.1)$.
$$
|\chi(\tau, y, u, v)|\leqslant c (1 + \|y\|),\quad \tau \in [0, T], \quad y \in \mathbb{R}^n, \quad u \in P, \quad v \in Q.
$$
The resulting auxiliary Hamiltonian $\overline{H}$ is homogeneous in the variable $\overline{s} = (s, r)$. The relationship between the solution $\varphi$ of the Cauchy problem (2.1), (2.2) and the solution $\overline{\varphi}$ of the Cauchy problem (2.13), (2.14) is as follows:
Thus, in the case under consideration, a minimax solution can be defined using $(2.\mathrm{MS}.2_-)$ and $(2.\mathrm{MS}.2_+)$, based on the upper and lower characteristic complexes $(S_{n + 1}, \overline{E}_+)$ and $(S_{n + 1}, \overline{E}_-)$ (see [160], § 3):
where $\tau \in [0, T]$, $y \in \mathbb{R}^n$, $\overline{s}=(s,r) \in S_{n+1}$, and $c$ is borrowed from $(2.\mathrm{CP}.6)$.
2.4. Infinitesimal criteria and consistency
We emphasize that the properties $(2.\mathrm{MS}.1_+)$ and $(2.\mathrm{MS}.1_-)$ in Definitions 1 and 2 of an upper and a lower minimax solution of the Cauchy problem (2.1), (2.2), as well as the equivalent properties $(2.\mathrm{MS}.2_+)$ and $(2.\mathrm{MS}.2_-)$, are of a non-local nature: in order to verify them directly at a point $(t, x)$, it is not sufficient to know the values of $\varphi$ in an arbitrarily small neighbourhood of this point. These properties are convenient, for example, for the proof of the well-posedness of the minimax solution. However, to understand whether they hold for a particular function $\varphi$, their infinitesimal forms turn out to be useful. Since in the general case the minimax solution may fail to be differentiable, we need to employ the apparatus of non-smooth analysis.
The following assertion is true (see, for example, [162], Theorem 6.4): given a lower semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$, condition $(2.\mathrm{MS}.2_+)$ for an upper characteristic complex $(\Psi, E)$ is equivalent to the differential inequality
In a similar way, given an upper semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$, condition $(2.\mathrm{MS}.2_-)$ for a lower characteristic complex $(\Psi, E)$ is equivalent to the differential inequality
Here $\partial_- \{ \varphi (t, x) \mid 1, f \}$ and $\partial_+ \{ \varphi (t, x) \mid 1, f \}$ denote the lower and upper derivatives of $\varphi$ at $(t, x)$ in the direction $(1, f)$:
In particular, considering the characteristic complex $(\mathbb{R}^n, E_0)$, where $E_0$ is defined by (2.4), we deduce that, under conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$, the minimax solution of the Cauchy problem (2.1), (2.2) can be characterized as the unique continuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying the boundary condition (2.2) and the following differential inequalities, which are equivalent to $(2.\mathrm{MS}.1_+)$ and $(2.\mathrm{MS}.1_-)$ respectively:
where $B( c(1 + \|x\|))$ is the ball in $\mathbb{R}^n$ with centre at the origin and radius $c(1+\|x\|)$, and the number $c$ is from $(2.\mathrm{CP}.2)$.
The above criterion makes it possible to establish easily the consistency of the minimax solution with the concept of solution in the classical sense. In fact, assume that a function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is differentiable at some point $(t, x) \in (0, T) \times \mathbb{R}^n$. Then
On the other hand, in view of condition $(2.\mathrm{CP}.2)$ inequality (2.21) ensures (2.20). In a similar way (2.19) turns out to be equivalent to the inequality
therefore, the pair of inequalities (2.18), (2.19) is equivalent to (2.1). Thus, first, the minimax solution satisfies the Hamilton–Jacobi equation (2.1) at all points where it is differentiable; second, if there is a continuous function $\varphi \colon [0,T] \times \mathbb{R}^n \to \mathbb{R}$ that is differentiable at all points $(t,x) \in (0,T) \times \mathbb{R}^n$ and satisfies equation (2.1) and boundary condition (2.2), then it is a minimax solution of the Cauchy problem (2.1), (2.2). In this sense the pair of differential inequalities (2.18), (2.19) can be regarded as a possible generalization of the Hamilton–Jacobi equation (2.1) to the non-smooth case.
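As a minimal illustration of this consistency, consider a toy example invented here; it assumes that the Cauchy problem (2.1), (2.2) has the terminal-value form $\partial \varphi/\partial t + H(t, x, \partial \varphi/\partial x) = 0$, $\varphi(T, x) = \sigma(x)$. For the Hamiltonian $H(t, x, s) = b s$ with a constant drift $b$ and a smooth $\sigma$, the function $\varphi(t, x) = \sigma(x + b(T - t))$ is a classical solution and hence, by the consistency result above, the minimax solution. The sketch below checks the equation and the boundary condition numerically.

```python
import math

T, b = 1.0, 0.7
sigma = lambda x: math.sin(x)               # smooth terminal cost
phi = lambda t, x: sigma(x + b * (T - t))   # candidate classical solution

def residual(t, x, eps=1e-6):
    """phi_t + H(t, x, phi_x) with H(t, x, s) = b*s, by central differences."""
    phi_t = (phi(t + eps, x) - phi(t - eps, x)) / (2 * eps)
    phi_x = (phi(t, x + eps) - phi(t, x - eps)) / (2 * eps)
    return phi_t + b * phi_x                # vanishes where the equation holds

print(abs(residual(0.3, -1.2)))             # ~ 0: the equation holds
print(abs(phi(T, 2.0) - sigma(2.0)))        # 0: the boundary condition holds
```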
2.5. Meaningfulness: value function and optimal strategies
In the context of optimal control problems and differential games, the meaningfulness of the concept of a minimax solution actually follows directly from its definition, since this definition is precisely based on the characteristic properties of the value function. Namely, the following assertion is true (in this connection, see [160], Theorem 14.1, and [162], Theorem 12.4): under conditions $(2.\mathrm{DG}.1)$–$(2.\mathrm{DG}.4)$ the value function $\rho$ of the differential game (1.10), (1.11) coincides with the unique minimax solution $\varphi$ of the Cauchy problem (2.1), (2.2) with Hamiltonian (1.18).
It follows that if, for a particular differential game of the form (1.10), (1.11), on the basis of some considerations we have managed to construct a continuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ satisfying boundary condition (2.2) and the differential inequalities (2.15) and (2.16) for some characteristic complexes corresponding to the Hamiltonian $H$ defined by (1.18), then $\varphi$ is the value function of the game. Assume that $\varphi$ satisfies the following Lipschitz condition with respect to the second variable.
In this case, once inequalities (2.15) and (2.16) have been verified, the lower and upper directional derivatives can be calculated using the following formulae, which are simpler than the definitions (2.17) (see, for example, [160], § 5):
Note (in this connection, see, for example, [96], § 29) that the value function of the differential game (1.10), (1.11) satisfies $(2.\mathrm{L})$ if the following additional condition holds, which is stronger than the assumption of the continuity of $\sigma$ (see $(2.\mathrm{DG}.1)$): for any bounded set $W \subset \mathbb{R}^n$ there is $\lambda > 0$ such that
The fact that the value function $\rho$ of the differential game (1.10), (1.11) coincides with the minimax solution $\varphi$ of the Cauchy problem (2.1), (2.2) with Hamiltonian (1.18) can be proved in various ways. The proof usually given in the theory of minimax solutions is based on the construction of optimal feedback control strategies of the players from the minimax solution. In this case the existence of a game value is established independently.
First, following [44] (also see [162], § 12.2), we present the corresponding constructions in the case when the following conditions hold for the differential game (1.10), (1.11), along with $(2.\mathrm{DG}.1)$ and $(2.\mathrm{DG}.4)$.
Conditions $(2.\mathrm{DG}.5)$ and $(2.\mathrm{DG}.6)$ ensure, in particular, the existence of $\kappa \geqslant 0$ such that the minimax solution $\varphi$ satisfies the estimate
In accordance with (2.12), we consider the Lyapunov function $\nu_\varepsilon$ used in the proof of the comparison principle for minimax solutions. For each point $(\tau,y) \in [0,T] \times \mathbb{R}^n$ we choose a vector $z_\varepsilon^{(u)}(\tau,y)$ so that
The vector $s_\varepsilon^{(u)}(\tau, y)$ is interpreted as the generalized gradient of the function $\varphi$ at the point $(\tau, y)$ and is sometimes called the quasi-gradient. This is partly owing to the fact that if a function $\mathbb{R}^n \ni x \mapsto \varphi(\tau, x) \in \mathbb{R}$ turns out to be continuously differentiable in a neighbourhood of $y$, then $s_\varepsilon^{(u)}(\tau, y) \to \partial \varphi(\tau, y) / \partial x$ as $\varepsilon \to 0^+$. Thus, replacing the gradient $\partial \varphi(\tau, y) / \partial x$, which possibly does not exist in the general case, by the quasi-gradient $s_\varepsilon^{(u)}(\tau, y)$ in the first formula in (1.19), we arrive at the following rule for constructing the control strategy of the first player:
To describe the optimality properties of the strategies $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$, we introduce some auxiliary concepts and notation (for more details, see, for example, [96], §§ 7 and 8, and [90], §§ 5 and 6). Assume that we have an initial position $(t, x) \in [0, T] \times \mathbb{R}^n$ and a partition $\Delta = (\tau_j)_{j \in \{1,\dots, k + 1\}}$ of the interval $[t, T]$: $\tau_1 = t$, $\tau_j < \tau_{j + 1}$ for all $j \in \{1,\dots, k\}$, and $\tau_{k+1}=T$. We let $J(t,x,U^\circ_\varepsilon,\Delta,v(\,{\cdot}\,))$ denote the value of the quality index (1.11) realized in the case when, during the game, the first player forms a piecewise constant feedback control $u(\,{\cdot}\,) \in \mathcal{U}(t)$ based on the partition $\Delta$ according to the rule
while the second player uses a control $v(\,{\cdot}\,) \in \mathcal{V}(t)$. We define the guaranteed result of the control strategy $U^\circ_\varepsilon$ of the first player by
where $\Pi_\delta(t)$ is the set of partitions $\Delta$ of the interval $[t, T]$ with diameter below $\delta$. In a similar way we introduce the guaranteed result of the control strategy $V^\circ_\varepsilon$ of the second player:
where $J(t, x, u(\,{\cdot}\,), V^\circ_\varepsilon, \Delta)$ is the value of the quality index (1.11) corresponding to the control $u(\,{\cdot}\,) \in \mathcal{U}(t)$ of the first player and the control $v(\,{\cdot}\,) \in \mathcal{V}(t)$ of the second player, which is formed by the equality
Based immediately on the definition of the minimax solution $\varphi$ and the properties of the Lyapunov function $\nu_\varepsilon$, we can show (see [44] and also [162], Theorem 12.3) that
where $\rho_- (t, x)$ is the lower game value (1.12). In fact, to verify the first inequality in (2.32) we must consider the case when the first player uses an arbitrary quasi-strategy $\boldsymbol{\alpha} \in \boldsymbol{\mathcal{A}}(t)$, while the second player forms a control $v(\,{\cdot}\,) \in \mathcal{V}(t)$ according to (2.30). The second inequality in (2.32) follows from the fact that the mapping associating each control $v(\,{\cdot}\,) \in \mathcal{V}(t)$ of the second player with the control $u(\,{\cdot}\,) \in \mathcal{U}(t)$ of the first player formed by (2.27) is a quasi-strategy of the first player. We can similarly establish that
which reflects the optimality properties of the strategies $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$.
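The control scheme over a partition $\Delta$ can be illustrated by a toy problem, invented here, with no second player and with the optimal feedback written down by hand rather than obtained from a quasi-gradient: dynamics $\dot{y} = u$, $|u| \leqslant 1$, quality index $|y(T)|$. The value function is $\max(|x| - (T - t), 0)$, and the feedback $u(\tau, y) = -\operatorname{sign} y$ is optimal. Applying it piecewise constantly, in the spirit of the rule for forming $u(\,{\cdot}\,)$ above, realizes a cost close to the value when the diameter of $\Delta$ is small.

```python
def simulate(t0, x0, T=1.0, k=1000, substeps=10):
    """Hold u = -sign(y), chosen at each partition point, constant on the step."""
    taus = [t0 + j * (T - t0) / k for j in range(k + 1)]
    y = x0
    for j in range(k):
        u = -1.0 if y > 0 else (1.0 if y < 0 else 0.0)   # feedback at tau_j
        dt = (taus[j + 1] - taus[j]) / substeps
        for _ in range(substeps):                         # Euler integration
            y += u * dt
    return abs(y)                                         # realized cost |y(T)|

phi = lambda t, x, T=1.0: max(abs(x) - (T - t), 0.0)      # value function
cost = simulate(0.0, 0.4)
print(cost, phi(0.0, 0.4))   # realized cost is close to the value
```

As the diameter of the partition shrinks, the chattering of the sampled feedback around $y = 0$ becomes negligible and the realized cost converges to the value.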
We can abandon conditions $(2.\mathrm{DG}.5)$ and $(2.\mathrm{DG}.6)$, additional to $(2.\mathrm{DG}.1)$–$(2.\mathrm{DG}.4)$, if the rules (2.25) and (2.26) for constructing the control strategies of the players are modified. The distinction is that now the constructions depend on the prescribed initial position $(t_0, x_0) \in [0, T] \times \mathbb{R}^n$ (or, in a more general case, on an arbitrary bounded set of points $(t_0, x_0)$ in $[0, T] \times \mathbb{R}^n$). For any $t \in [t_0, T]$ we consider a compact set $W(t)$ of points $x \in \mathbb{R}^n$ for each of which there exists a function $y(\,{\cdot}\,) \in \operatorname{Lip}([t_0,t];\mathbb{R}^n)$ such that
$$
y(t_0)= x_0\quad\text{and}\quad \|\dot{y}(\tau)\| \leqslant c(1+\|y(\tau)\|)\quad \text{for a.e. } \tau \in [t_0,t],\qquad y(t) = x,
$$
where $c$ is from condition $(2.\mathrm{DG}.2)$. Note that the family of sets $W(t)$, $t \in [t_0, T]$, has the following property of strong invariance with respect to motions of the dynamical system (1.10): for any motion $y(\,{\cdot}\,)$ of this system from the position $(t, x)$, where $t \in [t_0, T]$ and $x \in W(t)$, which is generated by controls $u(\,{\cdot}\,) \in \mathcal{U}(t)$ and $v(\,{\cdot}\,) \in \mathcal{V}(t)$ of the players, we have $y(\tau) \in W(\tau)$, $\tau \in [t, T]$. Taking account of the fact that $W(t) \subset W(T)$ for $t \in [t_0, T]$, we choose $\lambda$ for $W(T)$ in accordance with $(2.\mathrm{DG}.3)$ and consider the Lyapunov function $\nu_\varepsilon$ for $\varepsilon > 0$ (see (2.12)). We obtain strategies $\overline{U}^\circ_\varepsilon$ and $\overline{V}^\circ_\varepsilon$ by using (2.25) and (2.26) with the only exception that now, when the vectors $z_\varepsilon^{(u)}(\tau, y)$ and $z_\varepsilon^{(v)}(\tau, y)$ are specified, the corresponding minimum and maximum are considered not on the whole of $\mathbb{R}^n$ but on the compact set $W(\tau)$:
Just as above, this implies the existence of the game value $\rho(t_0, x_0)$, the equality $\rho(t_0, x_0) = \varphi(t_0, x_0)$, and also the optimality of the strategies $\overline{U}^\circ_\varepsilon$ and $\overline{V}^\circ_\varepsilon$ for the initial position $(t_0, x_0)$:
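In this connection let us record why each set $W(t)$ is bounded; the following is the standard Grönwall-type estimate, a routine step reproduced here for completeness. From the constraint on $\dot{y}(\,{\cdot}\,)$ one gets, for a.e. $\tau \in [t_0, t]$,

```latex
\frac{d}{d\tau}\bigl(1 + \|y(\tau)\|\bigr)
  \leqslant \|\dot{y}(\tau)\|
  \leqslant c \bigl(1 + \|y(\tau)\|\bigr),
```

whence, by Grönwall's lemma, $1 + \|y(\tau)\| \leqslant (1 + \|x_0\|)\, e^{c (\tau - t_0)}$. In particular, $\|x\| \leqslant (1 + \|x_0\|)\, e^{c (T - t_0)} - 1$ for all $x \in W(t)$ and $t \in [t_0, T]$, so the sets $W(t)$ are contained in a common ball.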
As already noted in the introduction, the minimax approach to defining a generalized solution of the Hamilton–Jacobi equations is closely related to another intensively developing approach, called the viscosity approach (see, for example, [27] and [25], and also [8], [41], and [173]). We give a definition of a viscosity solution of the Cauchy problem (2.1), (2.2).
Definition 4. An upper viscosity solution of the Cauchy problem (2.1), (2.2) is a continuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that satisfies the boundary condition (2.7) and has the following property.
Definition 5. A lower viscosity solution of the Cauchy problem (2.1), (2.2) is a continuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that satisfies the boundary condition (2.9) and has the following property.
Definition 6. A viscosity solution of the Cauchy problem (2.1), (2.2) is a function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ that is both an upper and a lower viscosity solution of this problem.
Under conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$ a viscosity solution of the Cauchy problem (2.1), (2.2) exists and is unique (see, for example, [30], Theorem VI.1). In the theory of viscosity solutions existence is usually proved by the vanishing viscosity method (see, for example, [109], [27], [25], [157], and also [101] and [173], § 1.3). However, if the theory of minimax solutions is used, then this follows from the fact that a minimax solution is a viscosity solution.
Indeed, consider a minimax solution $\varphi$ of the Cauchy problem (2.1), (2.2). We show, for example, that it has property $(2.\mathrm{V}_+)$. Assume that the difference $\varphi-\psi$ attains a local minimum at some point $(t, x) \in (0, T) \times \mathbb{R}^n$. Owing to property $(2.\mathrm{MS})$, for $s = \partial \psi(t, x) / \partial x$ there exists a function $y(\,{\cdot}\,) \in \operatorname{Lip}([t,T];\mathbb{R}^n)$ satisfying conditions (2.3) for $\vartheta = T$ and equality (2.11). Since
for all $\vartheta \in [t, t + \delta]$. It follows from the differentiability of $\psi$ at $(t, x)$ that inequality (2.35) is true.
As in the case of minimax solutions, the verification of the uniqueness of a viscosity solution consists in the proof of the comparison principle, that is, the fact that each upper viscosity solution $\varphi_+$ dominates each lower viscosity solution $\varphi_-$ (see, for example, [173], Theorem 2.5, and [176], Theorem 2.5.3, under conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$). Usually, one reasons by contradiction and uses the idea of doubling the number of variables: for $(t, x),(\tau, y) \in [0, T] \times \mathbb{R}^n$ one considers the difference $\varphi_- (t, x) - \varphi_+ (\tau, y)$. To this difference one adds some specially chosen correction terms, among which the penalty functions
where $\delta > 0$ and $\varepsilon > 0$ are small parameters, play an important role. Based on the search for maximum points of the resulting function in appropriate subsets of points $(t, x, \tau, y)$ in the space $[0, T] \times \mathbb{R}^n \times [0, T] \times \mathbb{R}^n$, a pair of test functions is constructed, using which we obtain a contradiction.
In the infinitesimal form conditions $(2.\mathrm{V}_+)$ and $(2.\mathrm{V}_-)$, defining viscosity solutions, can be expressed in terms of inequalities for sub- and supergradients. Namely (see, for example, [125], Theorem 1.1, and also [173], § 1.2.3), for a continuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ condition $(2.\mathrm{V}_+)$ is equivalent to the differential inequality
Thus, in the framework of the viscosity approach, the Hamilton–Jacobi equation (2.1) is generalized to the non-smooth case by replacing it by the pair of inequalities (2.37), (2.38).
Since the minimax solution of the Cauchy problem (2.1), (2.2) is a viscosity solution of this problem, from the uniqueness of a viscosity solution we conclude that the minimax and viscosity solutions coincide. In addition, there is a stronger result ([162], Theorem 6.4) which establishes the equivalence of the properties used in the definitions of minimax and viscosity solutions: under conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$ the differential inequalities (2.18) and (2.37) are equivalent for each lower semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$; the differential inequalities (2.19) and (2.38) are equivalent for each upper semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$.
We comment briefly on the proof of the first assertion. The fact that (2.37) implies (2.18) can easily be verified directly on the basis of the definitions of the lower directional derivative (2.17) and subdifferential (2.39). A key result, using which it is possible to prove the converse, is the following very subtle property of the subdifferential ([161], Theorem 1.1; also see [160], Lemma 6.4, and [162], § A6): if
for a lower semicontinuous function $\varphi \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$, a point $(t_0, x_0) \in (0, T) \times \mathbb{R}^n$, and a set $F \in \mathcal{K}(\mathbb{R}^n)$, then there exists a point $(t, x)$ in an arbitrarily small neighbourhood of $(t_0, x_0)$ at which $\varphi$ has a subgradient $(a, s) \in D_-\varphi (t, x)$ such that
$$
a + \langle s, f \rangle > 0,\qquad f \in F.
$$
Note that this property can be obtained as an infinitesimal version of the multidimensional non-smooth generalization of the finite increment formula [19], [20], [86] (also see [162], § A6, and [21], Chap. 3, Theorem 2.6).
3. Time-delay systems
An answer to the question of equations playing the role of Hamilton–Jacobi equations for time-delay systems (1.20) is closely related to dynamic programming constructions in control problems for systems of this type. The formalization of these constructions is based on the functional interpretation of time-delay systems [92] (also see, for example, [71]), according to which their evolution is considered in an appropriate function space of motion histories. Two main approaches can conventionally be distinguished here.
The first is based on switching directly to the description of the evolution of time-delay systems by ordinary differential equations in a function state space (see, for example, [92], § 29, [154], [155], and [71], § 7.11, as well as [12], Chap. 4). For simplicity we explain the main idea without going into detail, by taking an example of a system with concentrated delay (see (1.21)) of the form
We let $y_\tau(\,{\cdot}\,)$ denote the motion history $y(\,{\cdot}\,)$ of this system on the interval $[\tau-h,\tau]$ shifted to the unified interval $[-h,0]$:
It is natural to consider the evolution of $y_\tau(\,{\cdot}\,)$ with $\tau$ in the space $\operatorname{C}([- h, 0]; \mathbb{R}^n)$. Then we arrive at the differential equation
Note that for each $\tau \in [0, T]$ the domain of the unbounded operator $\boldsymbol{y} \mapsto \boldsymbol{f}_1(\tau, \boldsymbol{y})$ is a subset of $\operatorname{C}([- h, 0]; \mathbb{R}^n)$ consisting of continuously differentiable functions $\boldsymbol{y} \colon [- h, 0] \to \mathbb{R}^n$ whose derivative at the point $\theta = 0$ coincides with $f_1(\tau, \boldsymbol{y}(0), \boldsymbol{y}(- h))$. As a result, time-delay systems are included in a wider class of dynamical systems which, in particular, covers systems described by partial differential equations. When this approach is consistently applied to control problems for time-delay systems, we arrive at Hamilton–Jacobi equations with Fréchet derivatives in the space of continuous functions.
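A small numerical sketch of this functional interpretation (the right-hand side $f_1$ below is a toy example chosen here, not one from the survey): the history segment $y_\tau(\,{\cdot}\,)$ is stored on a uniform grid and extended by an Euler step at each time, which mimics the evolution of the state in $\operatorname{C}([-h, 0]; \mathbb{R}^n)$.

```python
def solve_delay(f1, x_hist, h, T, dt):
    """Euler scheme for dy/dtau = f1(tau, y(tau), y(tau - h)).
    x_hist: initial history y on [-h, 0] as grid values with step dt."""
    m = round(h / dt)                 # grid points per delay length
    hist = list(x_hist)               # hist[-1] = y(tau), hist[-1 - m] = y(tau - h)
    for k in range(round(T / dt)):
        tau = k * dt
        y_now, y_lag = hist[-1], hist[-1 - m]
        hist.append(y_now + dt * f1(tau, y_now, y_lag))
    return hist

h, dt = 1.0, 0.001
m = round(h / dt)
hist0 = [1.0] * (m + 1)               # constant history y = 1 on [-h, 0]
sol = solve_delay(lambda tau, y, ylag: -ylag, hist0, h, T=1.0, dt=dt)
print(sol[-1])
```

For $\dot{y}(\tau) = -y(\tau - h)$ with the constant initial history $y \equiv 1$ on $[-h, 0]$, the method of steps gives $y(\tau) = 1 - \tau$ on $[0, h]$, which the grid solution reproduces, so $y(1) \approx 0$.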
Quite a lot of research is devoted to Hamilton–Jacobi equations in abstract Banach spaces with Fréchet derivatives and develops the theory of generalized solutions (in the viscosity sense) of equations of this type. We limit ourselves here to referring to the series of seven papers that began with [28] and [29], to the papers [17], [74], [172], [31], [87], [148], and [169], and to the monographs [7], [105], and [37]. The greatest progress has been achieved in the case of Hilbert spaces. As basic applications of this theory, control problems for systems whose dynamics are described by partial differential equations are considered. There are also several works where control problems for various time-delay systems are studied using results obtained in this theory (see, for example, [156], [9], [18], [38], and [39]). Nevertheless, this approach does not make it possible to cover time-delay systems of the general form (1.20) in full. One reason is that a full-fledged theory of generalized solutions of Hamilton–Jacobi equations in the space of continuous functions has not been developed: the difficulties that arise are largely due to the poor differentiability properties of the norm in this space. On the other hand, the space of continuous functions is natural as the state space of time-delay systems (in this connection see, for example, [92], [71], and also [175]).
Our survey is devoted to another approach, in which we do not explicitly switch to the description of time-delay systems by ordinary differential equations in a function space. In this approach the properties of the optimal result (game value) under shifts along possible trajectories of the system are investigated directly. In this case it is possible to take the hereditary nature of time-delay systems into account in more detail and to cover the general class of systems (1.20). When this approach is used, the classical machinery of Fréchet derivatives turns out to be inconvenient; therefore, we have to consider special concepts of differentiability of functionals of the motion history and use derivatives adequate to these concepts. For example, so-called coinvariant derivatives are suitable [111] (also see [123], § 2). The ideas of invariant and coinvariant differentiation of functionals, and the terms invariant and coinvariant derivatives themselves, were originally proposed in [83] in connection with the second Lyapunov method in stability problems for time-delay systems. Subsequently, derivatives of this type found applications in other branches of the theory of functional differential equations (see, for example, [84] and [85]). Among close concepts of derivatives, we note Clio derivatives [5] and also horizontal and vertical derivatives [33] (see § 3.9 for details). Thus, this approach leads to a new class of Hamilton–Jacobi equations, which is considered in this section.
3.1. Hamilton–Jacobi equations with coinvariant derivatives
We consider the space $[0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ with metric
Given a pair $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$, we let $\operatorname{Lip}(t, x(\,{\cdot}\,))$ denote the set of functions $y(\,{\cdot}\,) \in \operatorname{C}([- h, T]; \mathbb{R}^n)$ that satisfy the equality
and are Lipschitz continuous on the interval $[t, T]$.
Definition 7. A functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ is coinvariantly differentiable (ci-differentiable) at a point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ if there exist $\partial_t \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}$ and $\nabla \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}^n$ such that
for each $y(\,{\cdot}\,) \in \operatorname{Lip}(t,x(\,{\cdot}\,))$, where the function $o(\,{\cdot}\,)$ can depend on $y(\,{\cdot}\,)$ and $o(\delta) / \delta \to 0$ as $\delta \to 0^+$. In this case the quantities $\partial_t \varphi(t, x(\,{\cdot}\,))$ and $\nabla \varphi(t, x(\,{\cdot}\,))$ are called the ci-derivative with respect to $t$ and the ci-gradient of the functional $\varphi$ at the point $(t,x(\,{\cdot}\,))$.
The term coinvariant is intended to emphasize that, first, the quantities $\partial_t \varphi(t, x(\,{\cdot}\,))$ and $\nabla \varphi(t, x(\,{\cdot}\,))$ are independent of $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ (‘invariance’) and, second, the variables $\tau$ and $y(\,{\cdot}\,)$ vary simultaneously and consistently (the prefix co).
We comment on some distinctions between ci-differentiability and the classical concepts of differentiability of functionals.
First we note that only functions $y(\,{\cdot}\,)$ in $\operatorname{Lip}(t,x(\,{\cdot}\,))$ are considered to be admissible increments of the argument $x(\,{\cdot}\,)$. Hence, by (3.5) the values of $x(\,{\cdot}\,)$ on $[- h, t]$ do not vary; since $y(\,{\cdot}\,)$ is a Lipschitz function on $[t, T]$, only the difference $\tau-t$ is taken as the argument of the infinitesimal $o(\,{\cdot}\,)$.
In addition, the linear part of the increment in (3.6) has a special form that takes account only of the value of $y(\,{\cdot}\,)$ at the point $\tau$. Therefore, in particular, the ci-gradient $\nabla \varphi(t, x(\,{\cdot}\,))$ is an element of the finite-dimensional space $\mathbb{R}^n$.
Finally, we note separately that ci-differentiability is closely related to the non-anticipation property of functionals. Recall that a functional $\varphi \colon [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n) \to \mathbb{R}$ is called non-anticipating if for $t \in [0, T)$ and $x(\,{\cdot}\,),y(\,{\cdot}\,) \in \operatorname{C}([-h,T];\mathbb{R}^n)$ equality (3.5) implies that $\varphi(t, x(\,{\cdot}\,)) = \varphi(t, y(\,{\cdot}\,))$. It turns out that if $\varphi$ is ci-differentiable at the points $(t,x(\,{\cdot}\,)),(t,y(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ satisfying (3.5), then
In fact, let $f \in \mathbb{R}^n$, and consider a function $y^{(f)}(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ such that $y^{(f)}(\tau) = x(t) + f (\tau - t)$, $\tau \in [t, T]$. Note that $y^{(f)}(\,{\cdot}\,) \in \operatorname{Lip}(t, y(\,{\cdot}\,))$. For all $\tau \in (t, T]$ we have
Subtracting the second equality from the first and then taking the limit as $\tau \to t^+$, we obtain $\varphi(t,x(\,{\cdot}\,))=\varphi(t,y(\,{\cdot}\,))$. Consequently,
for all $\tau \in (t, T]$. Passing to the limit as $\tau \to t^+$ in this equality and bearing in mind that $f$ is arbitrary, we conclude that $\partial_t \varphi(t, x(\,{\cdot}\,))=\partial_t \varphi(t, y(\,{\cdot}\,))$ and $\nabla \varphi(t, x(\,{\cdot}\,)) = \nabla \varphi(t, y(\,{\cdot}\,))$.
Thus, we deduce that if a functional $\varphi$ is ci-differentiable at all points $(t,x(\,{\cdot}\,)) \in [0,T) \times\operatorname{C}([-h,T];\mathbb{R}^n)$, then $\varphi$ and its ci-derivatives
A functional $\varphi \colon [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n) \to \mathbb{R}$ is called ci-smooth if it is continuous, ci-differentiable at all points $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$, and its ci-derivatives (3.7) are continuous.
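Definition 7 can be checked numerically for a concrete ci-smooth functional. The functional below is chosen here for illustration (scalar case, with the history taken on $[0, t]$ for simplicity): $\varphi(t, x(\,{\cdot}\,)) = x(t)^2 + \int_0^t x(\tau)^2 \, d\tau$, whose ci-derivatives are $\partial_t \varphi = x(t)^2$ and $\nabla \varphi = 2 x(t)$. The sketch verifies that the remainder in expansion (3.6) is $o(\tau - t)$.

```python
def phi(t, x, n_grid=20000):
    """phi(t, x(.)) = x(t)^2 + integral_0^t x(tau)^2 d(tau), trapezoidal rule."""
    dt = t / n_grid
    integral = sum(0.5 * (x(j * dt) ** 2 + x((j + 1) * dt) ** 2) * dt
                   for j in range(n_grid))
    return x(t) ** 2 + integral

x = lambda tau: tau ** 2 + 1.0                  # a fixed history
t, f = 0.5, 0.3
y = lambda tau: x(tau) if tau <= t else x(t) + f * (tau - t)  # Lipschitz extension

dt_phi, grad_phi = x(t) ** 2, 2 * x(t)          # claimed ci-derivatives
ratios = []
for delta in (1e-2, 1e-3):
    incr = phi(t + delta, y) - phi(t, x)        # actual increment along y(.)
    lin = dt_phi * delta + grad_phi * (y(t + delta) - x(t))  # linear part of (3.6)
    ratios.append(abs(incr - lin) / delta)      # should tend to 0 with delta
print(ratios)
```

The printed remainder-to-$\delta$ ratios decrease roughly linearly in $\delta$, as the quadratic remainder of this smooth example predicts.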
We emphasize that the class of ci-smooth functionals is rather wide and includes, for example, functionals of the form
where the function $a \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and the function $b \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ is continuous. The ci-derivatives of such a functional $\varphi$ at a point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ are calculated by the formulae
We can verify directly that this functional is ci-differentiable at points $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ such that $\|x(\cdot \wedge t)\|_\infty > \|x(t)\|$ or $\|x(\cdot \wedge t)\|_\infty = 0$; in this case $\partial_t \varphi(t, x(\,{\cdot}\,)) = 0$ and $\nabla \varphi(t, x(\,{\cdot}\,)) = 0$. However, the functional (3.8) is not ci-differentiable at other points. In fact, reasoning by contradiction we assume that for some point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ such that $\|x(\cdot \wedge t)\|_\infty = \|x(t)\| > 0$ there exist $\partial_t \varphi = \partial_t \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}$ and $\nabla \varphi = \nabla \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}^n$ such that
for each $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ and all $\tau \in (t, T]$. Choosing $y(\,{\cdot}\,) = x(\cdot \wedge t)$ in (3.10) we obtain $\partial_t\varphi=0$. Taking account of this equality and considering (3.10) for the function $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ defined by $y(\tau) = x(t) - x(t) (\tau - t)$, $\tau \in [t, T]$, we infer the relation $\langle \nabla \varphi, x(t) \rangle = 0$. Substituting the function $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ such that $y(\tau) = x(t) + x(t) (\tau - t)$, $\tau \in [t, T]$, into (3.10) we arrive at the equality $\|x(t)\| = 0$, which contradicts the above assumption.
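The mechanism behind this counterexample can be seen on an even simpler functional of the same character, chosen here independently of (3.8) (scalar case): the running maximum $\varphi(t, x(\,{\cdot}\,)) = \max_{\tau \leqslant t} x(\tau)$ at a point where the maximum is attained at $\tau = t$. The difference quotients along the extensions $y(\tau) = x(t) + f (\tau - t)$ fail to be affine in $f$, which rules out an expansion of the form (3.6).

```python
def running_max_increment(xt, f, delta):
    """phi(t + delta, y) - phi(t, x) when max_{tau <= t} x(tau) = x(t) = xt > 0
    and y(tau) = xt + f * (tau - t); equals max(f, 0) * delta."""
    return max(xt, xt + f * delta) - xt

xt, delta = 1.0, 1e-3
qs = {f: running_max_increment(xt, f, delta) / delta for f in (-1.0, 0.0, 1.0)}
print(qs)  # approximately {-1.0: 0.0, 0.0: 0.0, 1.0: 1.0}
# Expansion (3.6) would force the quotient q(f) to be affine in f:
# q(f) = d_t(phi) + grad(phi) * f.  But q(0) = 0 and q(1) = 1 give
# d_t(phi) = 0 and grad(phi) = 1, contradicting q(-1) = 0.
```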
Coinvariant derivatives are convenient due to the fact that the total derivative of a ci-smooth functional along the motion of a time-delay system (1.20) is written in the form familiar from the standard case (in this connection see, for example, [84], Theorem 7.1.1, and [123], Lemma 2.1). More precisely, the following assertion is an analogue of the standard differentiation rule for ci-smooth functionals (see (1.5)).
Proposition 1. Assume that a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ is ci-smooth and a point $(t, x(\,{\cdot}\,))\,{\in}\,[0,T)\times\operatorname{C}([-h,T];\mathbb{R}^n)$, a function $y(\,{\cdot}\,)\,{\in}\,\operatorname{Lip}(t,x(\,{\cdot}\,))$, and a number $\vartheta \in (t, T)$ are fixed. Then $\mu(\tau) = \varphi(\tau, y(\,{\cdot}\,))$, $\tau \in [t, \vartheta]$, is a Lipschitz function and
In fact, in view of the continuity of the mappings $\partial_t \varphi$ and $\nabla \varphi$, we can choose a number $M \geqslant 0$ such that $|\partial_t \varphi(\tau, y(\,{\cdot}\,))| \leqslant M$ and $\|\nabla \varphi(\tau, y(\,{\cdot}\,))\| \leqslant M$ for all $\tau \in [t, \vartheta]$. We consider the Lipschitz constant $\lambda > 0$ of the function $y(\,{\cdot}\,)$ on the interval $[t,T]$ and set $L=(1+\lambda) M$. For each $\tau \in [t, \vartheta]$ the functional $\varphi$ is ci-differentiable at the point $(\tau, y(\,{\cdot}\,))$ and $y(\,{\cdot}\,) \in \operatorname{Lip}(\tau, y(\,{\cdot}\,))$. Therefore, according to (3.6), we have
Since the function $\mu(\,{\cdot}\,)$ is continuous (because the functional $\varphi$ is continuous), this inequality implies, by Dini's theorem (see, for example, [16], Chap. 4, Theorem 1.2), that $\mu(\,{\cdot}\,)$ satisfies the Lipschitz condition with constant $L$. It remains to note that, for all $\tau \in (t, \vartheta)$ such that the derivatives $\dot{\mu}(\tau)$ and $\dot{y}(\tau)$ exist, equality (3.11) is obtained by taking the limit in (3.12) as $\delta \to 0^+$.
The subject of discussion in this subsection is the Hamilton–Jacobi equation with ci-derivatives
The non-anticipating functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ is unknown. The functional $H \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \times \mathbb{R}^n \to \mathbb{R}$, which is called the Hamiltonian, and the boundary functional $\sigma \colon \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ are prescribed. For each $s \in \mathbb{R}^n$ the functional
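With these data, the Cauchy problem under consideration presumably takes the following form (a reconstruction from the surrounding discussion: (3.13) is the equation and (3.14) is the boundary condition, with $\partial_t \varphi$ and $\nabla \varphi$ the ci-derivatives of the unknown functional):
$$
\partial_t \varphi(t, x(\,{\cdot}\,)) + H(t, x(\,{\cdot}\,), \nabla \varphi(t, x(\,{\cdot}\,))) = 0, \qquad (t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([-h, T]; \mathbb{R}^n),
$$
$$
\varphi(T, x(\,{\cdot}\,)) = \sigma(x(\,{\cdot}\,)), \qquad x(\,{\cdot}\,) \in \operatorname{C}([-h, T]; \mathbb{R}^n).
$$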
To emphasize that, unlike the classical Hamilton–Jacobi equations (2.1), the unknown quantity in (3.13) is not a function defined on the finite-dimensional space $[0, T] \times \mathbb{R}^n$ but a non-anticipating functional on $[0, T] \times \operatorname{C}([-h, T]; \mathbb{R}^n)$, equations of the form (3.13) are sometimes called functional or non-anticipating equations. In addition, equations of this type are often called hereditary since, by virtue of the non-anticipation conditions, for each $t \in [0, T]$ the Hamiltonian $H$ and the unknown functional $\varphi$ depend only on the restriction of $x(\,{\cdot}\,)$ to the interval $[-h,t]$: in applications to control problems (see § 3.2) this fact is interpreted as dependence on the path travelled, that is, on the history of the motion of the system up to the time $t$. The term path-dependent is commonly used in this case.
for all $t \in [0, T]$, $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, and $s \in \mathbb{R}^n$.
Note that condition $(3.\mathrm{CP}.3)$ automatically yields the non-anticipation property of the functional (3.15) for each $s \in \mathbb{R}^n$.
3.2. Differential games for time-delay systems
We describe how Cauchy problems of the form (3.13), (3.14) are related to differential games for time-delay systems (1.20).
We consider a differential game in which the motion of the dynamical system is described by retarded-type functional differential equations of the form
with the initial condition (3.5) specified by a pair $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n)$, which plays the role of the initial position of system (3.16). By choosing the controls $u(\tau)$ the first player seeks to minimize the quality index
For all $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$, $u(\,{\cdot}\,) \in \mathcal{U}(t)$, and $v(\,{\cdot}\,) \in \mathcal{V}(t)$ conditions $(3.\mathrm{DG}.1)$–$(3.\mathrm{DG}.3)$ ensure the existence and uniqueness of a motion of the system (3.16) satisfying the initial condition (3.5), that is, of a function $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ satisfying equation (3.16) with $u(\,{\cdot}\,)$ and $v(\,{\cdot}\,)$ at almost every $\tau \in [t, T]$.
We consider the lower value of the differential game (3.16), (3.17):
We emphasize that (3.19) defines a functional $\rho_- \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$, which is non-anticipating by construction. This functional satisfies the following equality, expressing the dynamic programming principle in the game (3.16), (3.17):
We can verify this using formula (3.11) for the derivative of a ci-smooth functional along the motion and equality (3.20). In this case the corresponding arguments actually repeat the proof of a similar assertion in the classical situation of differential games for systems described by ordinary differential equations (see § 1).
In a similar way, for the upper value functional of the game
It is important that, just as for ordinary differential systems, the following assertion holds, along with Theorem 1, for time-delay systems (see, for example, [115], Theorem 3.1, and [123], Theorem 3.1).
Theorem 2. Assume that conditions $(3.\mathrm{DG}.1)$–$(3.\mathrm{DG}.4)$ hold and there exists a ci-smooth functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ satisfying the Hamilton–Jacobi equation (3.13) with Hamiltonian (3.22) and the boundary condition (3.14). Then the differential game (3.16), (3.17) has the value
In this case non-anticipating control strategies of the first and second players constructed by extremal aiming in the direction of the ci-gradient of $\varphi$, namely,
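By analogy with the classical extremal shift construction, the strategies (3.23) presumably have the following form (here $f$, $P$, and $Q$ are the assumed right-hand side and control sets of system (3.16); the exact display is not reproduced above):
$$
U^\circ(t, x(\,{\cdot}\,)) \in \operatorname*{argmin}_{u \in P} \max_{v \in Q} \langle \nabla \varphi(t, x(\,{\cdot}\,)), f(t, x(\,{\cdot}\,), u, v) \rangle, \qquad V^\circ(t, x(\,{\cdot}\,)) \in \operatorname*{argmax}_{v \in Q} \min_{u \in P} \langle \nabla \varphi(t, x(\,{\cdot}\,)), f(t, x(\,{\cdot}\,), u, v) \rangle.
$$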
In view of Proposition 1 the proof of Theorem 2 again essentially repeats the proof of the analogous assertion in the classical case of differential games for ordinary differential systems.
We explain the optimality of the strategies (3.23) using the example of the control strategy $U^\circ$ of the first player (in this connection see (2.28) and (2.34)): for any $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ and $\zeta>0$ there is $\delta>0$ such that, for any partition $\Delta = (\tau_j)_{j \in \{1,\dots, k + 1\}} \in \Pi_\delta(t)$ of the interval $[t, T]$ and any control $v(\,{\cdot}\,) \in \mathcal{V}(t)$ of the second player, the first player, by forming their feedback control in accordance with the step-by-step rule
Note that we can use the rule (3.24) because of the non-anticipation property of the strategy $U^\circ$: $U^\circ(\tau_j, y(\,{\cdot}\,))$ depends only on the values $y(\tau)$, $\tau \in [- h, \tau_j]$, realized by the time $\tau_j$, that is, on the history of the motion $y(\,{\cdot}\,)$ of the system up to that instant. In this sense, control strategies of this form are sometimes called strategies with memory of motion history.
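As a numerical illustration of a strategy with memory evaluated step by step on a partition, the following sketch simulates a hypothetical scalar time-delay system $\dot y(\tau) = -y(\tau - h) + u(\tau)$, $|u| \leqslant 1$, with a feedback control recomputed only at partition points $\tau_j$, in the spirit of rule (3.24). The model system, the strategy `U`, and all numerical parameters are assumptions chosen for illustration; this is not the construction (3.16), (3.23) of the text.

```python
import numpy as np

# Hypothetical scalar time-delay system  dy/dt = -y(t - h) + u(t),  |u| <= 1.
# The control is frozen between partition points tau_j (step-by-step rule),
# and each U(tau_j, y(.)) may use the whole history of y realized by tau_j.

h, T, dt = 1.0, 5.0, 0.001                    # delay, horizon, Euler step
taus = np.arange(0.0, T, 0.5)                 # partition Delta of [0, T]
t_grid = np.linspace(-h, T, int((T + h) / dt) + 1)

def U(tau_j, t_hist, y_hist):
    """Hypothetical strategy with memory: steer against the current state,
    read off from the realized history (t_hist, y_hist)."""
    return -np.sign(np.interp(tau_j, t_hist, y_hist))

y = np.full_like(t_grid, 1.0)                 # constant initial history on [-h, 0]
u_cur, j = 0.0, 0
start = int(round(h / dt))                    # index of t = 0
for i in range(start, len(t_grid) - 1):
    t = t_grid[i]
    if j < len(taus) and t >= taus[j] - dt / 2:        # partition point reached
        u_cur = U(taus[j], t_grid[:i + 1], y[:i + 1])  # update using history only
        j += 1
    y_delayed = np.interp(t - h, t_grid[:i + 1], y[:i + 1])
    y[i + 1] = y[i] + dt * (-y_delayed + u_cur)        # Euler step
```

The step-by-step character of the rule shows up in the fact that `u_cur` stays constant between the points $\tau_j$, while the history available to `U` keeps growing as the motion unfolds.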
Theorem 2 gives a solution of the differential game (3.16), (3.17) in the case when the Cauchy problem (3.13), (3.14) with Hamiltonian (3.22) has a ci-smooth solution. However, as in the case of classical Hamilton–Jacobi equations, this problem may have no ci-smooth solution, and we thus need to consider appropriate generalized solutions. Below, according to the scheme in § 2, we discuss the results deduced in the framework of the minimax and viscosity approaches to generalized solutions of the Cauchy problem (3.13), (3.14).
is non-anticipating. For $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ and $s \in \mathbb{R}^n$ we consider the set $\operatorname{Sol}(t, x(\,{\cdot}\,), E_0, s)$ of solutions $(y(\,{\cdot}\,), z(\,{\cdot}\,))$ of the retarded-type functional differential inclusion
We let $\Phi_{\operatorname{C}}$, $\Phi_{\operatorname{LSC}}$, and $\Phi_{\operatorname{USC}}$ denote the sets of non-anticipating functionals $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)\to \mathbb{R}$ that are continuous, lower semicontinuous, and upper semicontinuous, respectively.
Definition 8. An upper minimax solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{LSC}}$ that satisfies the boundary condition
Definition 9. A lower minimax solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{USC}}$ that satisfies the boundary condition
Definition 10. A minimax solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{C}}$ that is both an upper and a lower minimax solution of this problem.
Note that properties $(3.\mathrm{MS}.1_+)$ and $(3.\mathrm{MS}.1_-)$ express the $u$- and $v$-stability of the value functional of the differential game (3.16), (3.17) in a unified way (see, for example, [121], Propositions 1 and 2). These properties appeared in the works [98], [137], [102], [138], and [131] (also see [100], § 11), devoted to the development of the theory of positional differential games for time-delay systems.
where $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n)$ and $c$ is from $(3.\mathrm{CP}.2)$. For a functional $\varphi \colon [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n) \to \mathbb{R}$ we consider the following condition.
Note that condition $(3.\mathrm{MS})$ can be reformulated as follows: for all $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([-h, T]; \mathbb{R}^n)$ and $s\,{\in}\,\mathbb{R}^n$ there exists a pair $(y(\,{\cdot}\,), z(\,{\cdot}\,)) \in \operatorname{Sol}(t, x(\,{\cdot}\,), E_0, s)$ such that
In particular, it follows from Proposition 2 that a minimax solution of the Cauchy problem (3.13), (3.14) can be defined as a non-anticipating continuous functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ satisfying conditions (3.14) and $(3.\mathrm{MS})$.
3.4. Well-posedness of the minimax solution
As in the case of the classical Hamilton–Jacobi equations, the verification of the existence and uniqueness of a minimax solution of the Cauchy problem (3.13), (3.14) consists of two parts. The first part is proving the existence of an upper minimax solution $\varphi_+^\circ$ and a lower minimax solution $\varphi_-^\circ$ of the Cauchy problem (3.13), (3.14) such that
This proof (see [114], Lemmas 7.1–7.5, and also [123], § 6.1) actually repeats the reasoning ([162], Theorem 8.2) in the classical case of the Cauchy problem (2.1), (2.2) except for some technical details related, among other things, to the properties of the sets of solutions of retarded-type functional differential inclusions. The second part consists in verifying the comparison principle: for any upper and lower minimax solutions $\varphi_+$ and $\varphi_-$ of the Cauchy problem (3.13), (3.14) we have
for all $t \in [0, T]$, $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, and $s \in \mathbb{R}^n$.
In the context of applications to control problems for time-delay systems condition $(3.\mathrm{CP}.4)$ makes it possible to cover only the case of distributed delays (see (1.23)). It has been shown that under conditions $(3.\mathrm{CP}.1)$, $(3.\mathrm{CP}.2)$, and $(3.\mathrm{CP}.4)$ an appropriate Lyapunov–Krasovskii functional is
where $(\tau, w(\,{\cdot}\,)) \in [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$, $\varepsilon > 0$ is a small parameter, $\lambda$ is chosen for the set $Y(t, x(\,{\cdot}\,))$ in accordance with $(3.\mathrm{CP}.4)$ (see (3.32)), and $(t, x(\,{\cdot}\,))$ is the point at which inequality (3.33) is verified.
for all $t \in [0, T]$, $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, and $s \in \mathbb{R}^n$.
In applications to control problems condition $(3.\mathrm{CP}.5)$ makes it possible to deal with distributed and constant concentrated delays alike (see (1.21) and (1.23)); however, it does not cover, for example, the case of variable delay (see (1.22)). Under conditions $(3.\mathrm{CP}.1)$, $(3.\mathrm{CP}.2)$, and $(3.\mathrm{CP}.5)$ we can prove the comparison principle by using the Lyapunov–Krasovskii functional
Here $\lambda$ is chosen for the set $Y(t, x(\,{\cdot}\,))$ in accordance with $(3.\mathrm{CP}.5)$.
As we can see, the constructions of the Lyapunov–Krasovskii functionals (3.34) and (3.35) are closely related to the corresponding conditions $(3.\mathrm{CP}.4)$ and $(3.\mathrm{CP}.5)$ for the Lipschitz property of the Hamiltonian $H$ in the variable $x(\,{\cdot}\,)$. Therefore, to prove the comparison principle under the most general condition $(3.\mathrm{CP}.3)$, which involves the uniform norm, it is natural to consider Lyapunov–Krasovskii functionals based on the functional (3.8). However, the implementation of this idea encounters significant difficulties caused primarily by the poor ci-differentiability properties of the latter (in particular, this functional is not ci-smooth).
Generally speaking, we can abstract away from the specific form of one or another condition for the Lipschitz property of the Hamiltonian $H$ in the variable $x(\,{\cdot}\,)$ and formulate requirements for a Lyapunov–Krasovskii functional that make it possible to prove the comparison principle using this functional in accordance with the scheme in [114], Lemma 7.7 (also see [123], Lemma 6.7). Note that these requirements are close to conditions $(H.4)^\prime$ in [26] and $(A.4)$ in [162], § 9.2, in the case of the classical Hamilton–Jacobi equations.
Lemma 1. Assume that conditions $(3.\mathrm{CP}.1)$ and $(3.\mathrm{CP}.2)$ hold and the functional (3.15) is non-anticipating for all $s \in \mathbb{R}^n$. Assume that for each point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ there is $\varepsilon_0 > 0$ and a Lyapunov–Krasovskii functional $\nu_\varepsilon \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ depending on the parameter $\varepsilon \in (0, \varepsilon_0]$ such that the following conditions are satisfied:
We give a proof of Lemma 1. Let $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n)$. If $t = T$, then inequality (3.33) holds true by the boundary conditions (3.28) for $\varphi_+$ and (3.30) for $\varphi_-$; therefore, we assume in what follows that $t < T$.
Let $\varepsilon \in (0, \varepsilon_0]$. For any $\tau \in [t, T]$ and $y_1(\,{\cdot}\,),y_2(\,{\cdot}\,) \in \operatorname{C}([- h, T]; \mathbb{R}^n)$ we consider the set $F_\varepsilon(\tau, y_1(\,{\cdot}\,), y_2(\,{\cdot}\,))$ of triples $(f_1, f_2, \chi) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$ such that
$$
\begin{equation}
\|f_1\|\leqslant c (1 + \|y_1(\cdot \wedge \tau)\|_\infty),\quad \|f_2\|\leqslant c (1 + \|y_2(\cdot \wedge \tau)\|_\infty),
\tag{3.36}
\end{equation}
$$
where $c$ is borrowed from condition $(3.\mathrm{CP}.2)$ and $w(\,{\cdot}\,) = y_2(\,{\cdot}\,) - y_1(\,{\cdot}\,)$. Note that the multivalued mapping $(\tau, y_1(\,{\cdot}\,), y_2(\,{\cdot}\,)) \mapsto F_\varepsilon(\tau, y_1(\,{\cdot}\,), y_2(\,{\cdot}\,))$ is non-anticipating. We let $\operatorname{Sol}_\varepsilon$ denote the set of solutions $(y_1(\,{\cdot}\,), y_2(\,{\cdot}\,), z(\,{\cdot}\,))$ of the differential inclusion
This set is non-empty and compact in $\operatorname{C}([- h, T]; \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R})$.
To prove the lemma it suffices to establish that for all $\varepsilon \in (0, \varepsilon_0]$ there exists a triple of functions $(y_1^\varepsilon(\,{\cdot}\,), y_2^\varepsilon(\,{\cdot}\,), z^\varepsilon(\,{\cdot}\,)) \in \operatorname{Sol}_\varepsilon$ such that
In fact, the inclusion $(y_1^\varepsilon(\,{\cdot}\,), y_2^\varepsilon(\,{\cdot}\,), z^\varepsilon(\,{\cdot}\,)) \in \operatorname{Sol}_\varepsilon$ yields the inequality
for almost every $\tau \in [t, T]$. Here $w^\varepsilon(\,{\cdot}\,) = y_2^\varepsilon(\,{\cdot}\,) - y_1^\varepsilon(\,{\cdot}\,)$. We consider the function $\mu_\varepsilon(\tau) = \nu_\varepsilon(\tau, w^\varepsilon(\,{\cdot}\,)) + z^\varepsilon(\tau) - \varepsilon (\tau - t)$, $\tau \in [t, T]$. The functional $\nu_\varepsilon$ is ci-smooth. Thus, the function $\mu_\varepsilon(\,{\cdot}\,)$ is continuous, and by Proposition 1 it satisfies the Lipschitz condition on each interval $[t, \vartheta]$, where $\vartheta \in (t, T)$. In addition, by (3.40) and $(3.\mathrm{d})$ we have
for almost every $\tau \in [t,T]$. Hence the function $\mu_\varepsilon(\,{\cdot}\,)$ is non-increasing, so that $\mu_\varepsilon(t) \geqslant \mu_\varepsilon(T)$; in view of the initial condition (3.38) and $(3.\mathrm{b})$ it follows that
The functionals $\varphi_+$ and $\varphi_-$ satisfy the boundary conditions (3.28) and (3.30), respectively; hence $z^\varepsilon(T) \geqslant \sigma(y_1^\varepsilon(\,{\cdot}\,)) - \sigma(y_2^\varepsilon(\,{\cdot}\,))$ according to (3.39). As a result, we obtain the estimate
for all $\varepsilon \in (0,\varepsilon_0]$. The set $Y(t, x(\,{\cdot}\,))$ is compact, and the boundary functional $\sigma$ is continuous. Therefore, there is $K > 0$ such that $|\sigma(y(\,{\cdot}\,))| \leqslant K$ for $y(\,{\cdot}\,) \in Y(t, x(\,{\cdot}\,))$. Thus, we derive from (3.41) that
Hence $\|y_2^\varepsilon(\,{\cdot}\,) - y_1^\varepsilon(\,{\cdot}\,)\|_\infty \to 0$ as $\varepsilon \to 0^+$ by condition $(3.\mathrm{c})$. Finally, since the functional $\nu_\varepsilon$ is non-negative, estimate (3.41) implies the inequality
Passing to the limit as $\varepsilon \to 0^+$ we arrive at the required inequality (3.33).
Thus, it remains to show that for each $\varepsilon \in (0,\varepsilon_0]$ there exists a triple of functions $(y_1^\varepsilon(\,{\cdot}\,), y_2^\varepsilon(\,{\cdot}\,), z^\varepsilon(\,{\cdot}\,)) \in \operatorname{Sol}_\varepsilon$ satisfying (3.39). For $\tau \in [t,T]$ we consider the set
The maximum in (3.42) is attained, since $\operatorname{Sol}_\varepsilon$ is compact and the functionals $\varphi_+$ and $\varphi_-$ are lower and upper semicontinuous, respectively. We have to prove the equality $\tau_\varepsilon = T$. Reasoning by contradiction we assume that $\tau_\varepsilon < T$. Fix
We introduce the notation $s=\nabla\nu_\varepsilon(\tau_\varepsilon,y_1(\,{\cdot}\,)- y_2(\,{\cdot}\,))$. The functional $\varphi_+$ is non-anticipating and lower semicontinuous. Therefore (see, for example, [123], Proposition 5.1), owing to $(3.\mathrm{MS}.1_+)$ there exists a pair $(y_+(\,{\cdot}\,), z_+(\,{\cdot}\,)) \in \operatorname{Sol}(\tau_\varepsilon, y_1(\,{\cdot}\,), E_0, s)$ such that
In a similar way, for the functional $\varphi_-$, owing to $(3.\mathrm{MS}.1_-)$, there exists a pair $(y_-(\,{\cdot}\,), z_-(\,{\cdot}\,)) \in \operatorname{Sol}(\tau_\varepsilon,y_2(\,{\cdot}\,),E_0,s)$ such that
Taking account of the definitions (3.25), (3.36), and (3.37) of the multivalued mappings $E_0$ and $F_\varepsilon$ and using the continuity of the Hamiltonian $H$ and the ci-gradient $\nabla \nu_\varepsilon$, we conclude that there exists $\delta \in (0, T - \tau_\varepsilon)$ such that
for almost every $\tau \in [\tau_\varepsilon,\tau_\varepsilon + \delta]$. We consider the functions $y^\ast_\pm(\,{\cdot}\,) = y_\pm(\cdot \wedge (\tau_\varepsilon + \delta))$ and
and $w^\ast(\,{\cdot}\,) = y^\ast_-(\,{\cdot}\,) - y^\ast_+(\,{\cdot}\,)$. According to (3.43) and (3.46), we have $(y^\ast_+(\,{\cdot}\,), y^\ast_-(\,{\cdot}\,), z^\ast(\,{\cdot}\,)) \in \operatorname{Sol}_\varepsilon$. Thus, in view of the non-anticipation property of the functionals $\varphi_+$ and $\varphi_-$ it follows from (3.43)–(3.45) that
hence we have the inclusion $(y^\ast_+(\,{\cdot}\,), y^\ast_-(\,{\cdot}\,), z^\ast(\,{\cdot}\,)) \in M_\varepsilon(\tau_\varepsilon + \delta)$, which contradicts the definition (3.42) of $\tau_\varepsilon$. The proof of Lemma 1 is complete.
where $\varkappa = (3 - \sqrt{5}\,) / 2$. In addition (also see [67], Appendix B), the functional $\gamma$ is ci-smooth; in this case $\partial_t \gamma(\tau, w(\,{\cdot}\,)) = 0$ and
The functional $\gamma$ is a regularization of the functional (3.8): these two functionals are equivalent in the sense of inequalities (3.48); however, unlike the functional (3.8), $\gamma$ is ci-smooth.
Let $(t,x(\,{\cdot}\,))\in[0,T)\times\operatorname{C}([-h,T];\mathbb{R}^n)$. For the set $Y(t,x(\,{\cdot}\,))$ (see (3.32)) we choose $\lambda$ in accordance with condition $(3.\mathrm{CP}.3)$, borrow $\varkappa$ from (3.48), set $\varepsilon_0 = e^{- \lambda T / \varkappa} / \sqrt{\varkappa}$, and introduce the Lyapunov–Krasovskii functional
for each $\varepsilon \in (0, \varepsilon_0]$. On the basis of the above properties of the functional $\gamma$ we can verify directly (see [67], § 4.1) that the functional $\nu_\varepsilon$ satisfies conditions $(3.\mathrm{a})$–$(3.\mathrm{d})$ in Lemma 1. We thus deduce the following result (see [67], Theorem 1).
We also note that, in connection with the existence and uniqueness problems for minimax solutions, other conditions specifying the Lipschitz properties of the Hamiltonian $H$ with respect to the variable $x(\,{\cdot}\,)$ can be considered. For example, a Lipschitz condition of a form slightly different from $(3.\mathrm{CP}.3)$–$(3.\mathrm{CP}.5)$ was treated in [10]. Regarding the Cauchy problem (3.13), (3.14), we can formulate this condition as follows.
for all $\varepsilon > 0$, $\tau \in [t, T]$, and $y_1(\,{\cdot}\,), y_2(\,{\cdot}\,) \in Y_L(t, x(\,{\cdot}\,))$. Here the set $Y_L(t,x(\,{\cdot}\,))$ is defined by equality (3.32) for $c$ replaced by $L$.
Theorem 3 also eliminates a certain ‘inconvenience’ in Definition 10 (of a minimax solution) related to the fact that $c$ (see the definition (3.25) of the multivalued mapping $E_0$) is not determined uniquely by condition $(3.\mathrm{CP}.2)$. Namely, under conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$ the minimax solution of the Cauchy problem (3.13), (3.14) is independent of this choice. In fact, fix numbers $c_1 > c_2 > 0$ such that condition $(3.\mathrm{CP}.2)$ holds for each of them. Clearly, a minimax solution corresponding to the smaller number $c_2$ is also a minimax solution corresponding to the larger number $c_1$. Therefore, by Theorem 3 the minimax solutions corresponding to $c_1$ and $c_2$ must coincide.
Starting from Theorem 3 and the comparison principle and reasoning, for example, in accordance with the scheme in Theorem 2 in [97] (also see [123], Theorem 9.1), we can establish the following property of the continuous dependence of the minimax solution $\varphi$ of the Cauchy problem (3.13), (3.14) on variations of the Hamiltonian $H$ and the boundary functional $\sigma$. Assume that for each $k \in \mathbb{N} \cup \{0\}$ we have
satisfying conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$, where the constant $c$ in $(3.\mathrm{CP}.2)$ is independent of $k$. Assume that for each compact set $W \subset \operatorname{C}([- h, T]; \mathbb{R}^n)$ and each $s \in \mathbb{R}^n$ the following convergence holds uniformly with respect to $t \in [0, T]$ and $x(\,{\cdot}\,) \in W$:
For each $k \in \mathbb{N} \cup \{0\}$ we let $\varphi_k \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ denote the minimax solution of the Cauchy problem (3.13), (3.14) with $H = H_k$ and $\sigma = \sigma_k$.
Theorem 4. Under the above conditions, for any compact set $W \subset \operatorname{C}([- h, T]; \mathbb{R}^n)$ the convergence $\varphi_k(t, x(\,{\cdot}\,)) \to \varphi_0(t, x(\,{\cdot}\,))$ takes place uniformly with respect to $t \in [0, T]$ and $x(\,{\cdot}\,) \in W$ as $k \to \infty$.
Theorem 4 can be used, among other things, in the development of methods for an approximate construction of the minimax solution of the Cauchy problem (3.13), (3.14), namely, as regards the substantiation of the transition from the infinite-dimensional argument $x(\,{\cdot}\,)$ of the sought-for solution $\varphi$ to a finite-dimensional one. This idea was implemented, for example, in [97] and [117] (also see [123], § 10) on the basis of a piecewise linear approximation of the argument $x(\,{\cdot}\,)$. As a consequence, a scheme of approximation of the minimax solution of the Cauchy problem (3.13), (3.14) by minimax solutions of appropriate Cauchy problems for the classical Hamilton–Jacobi equations in finite-dimensional spaces, albeit of growing dimension, was proposed. These constructions were further developed in [78].
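The reduction of the history argument to a finite-dimensional one can be sketched numerically as follows. The example below (an illustration only, not the construction of [97] and [117]) replaces a history $x(\,{\cdot}\,)$ on $[-h, 0]$ by the vector of its values at $k + 1$ uniform nodes and recovers a history from this vector by piecewise linear interpolation; for a Lipschitz history the uniform reconstruction error is $O(h/k)$. The test history `x` and the parameters are chosen arbitrarily.

```python
import numpy as np

# History on [-h, 0]  ->  vector of k + 1 nodal values  ->  piecewise-linear
# history again.  Illustrates the finite-dimensional reduction of x(.).

h, k = 1.0, 50
nodes = np.linspace(-h, 0.0, k + 1)

def project(x, nodes):
    """History -> finite-dimensional vector of its nodal values."""
    return x(nodes)

def reconstruct(values, nodes):
    """Nodal values -> piecewise-linear history (a function again)."""
    return lambda xi: np.interp(xi, nodes, values)

x = lambda xi: np.abs(np.sin(3.0 * xi))        # a Lipschitz test history
x_k = reconstruct(project(x, nodes), nodes)

grid = np.linspace(-h, 0.0, 2001)
sup_err = float(np.max(np.abs(x(grid) - x_k(grid))))   # uniform-norm error
```

A functional of $(t, x(\,{\cdot}\,))$ can then be approximated by a function of $t$ and the $(k+1)n$-dimensional vector `project(x, nodes)`, which is the transition Theorem 4 helps to substantiate.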
We also note that closely related approximation schemes for the minimax solution of the Cauchy problem (3.13), (3.14) were developed in [66]. There, the retarded-type functional differential inclusions involved in the definition of a minimax solution were approximated by ordinary differential inclusions of large dimension (in this connection see, for example, [93], [150], [103], and also [125]).
To conclude this subsection, we draw the reader’s attention to the fact that, although the functional $\gamma$ in (3.47), playing a key role in the proof of Theorem 3, is ci-smooth, for fixed $\tau \in [0, T]$ the functional $\operatorname{C}([- h, T]; \mathbb{R}^n) \ni w(\,{\cdot}\,) \mapsto \gamma(\tau, w(\,{\cdot}\,)) \in \mathbb{R}$ is not Gâteaux-differentiable. Points of non-differentiability are, for example, the points $w(\,{\cdot}\,) \in \operatorname{C}([- h, T]; \mathbb{R}^n)$ such that $\|w(\cdot \wedge \tau)\|_\infty > \|w(\tau)\|$ and the quantity $\|w(\cdot \wedge \tau)\|_\infty = \max_{\xi \in [- h, \tau]} \|w(\xi)\|$ is attained at two distinct values $\xi_1,\xi_2 \in [- h, \tau)$. We can prove this, for example, by following the scheme from [6] (Chap. XI, § 4, the lemma). In particular, the above observation emphasizes once again a certain specificity of the concept of ci-differentiability.
3.5. Characteristic complexes
By analogy with the classical Cauchy problem (2.1), (2.2), apart from the characteristic differential inclusion (3.26), other appropriate functional differential inclusions of retarded type can also be used to define a minimax solution of the Cauchy problem (3.13), (3.14).
More precisely, an upper characteristic complex for the Hamilton–Jacobi equation (3.13) is a pair $(\Psi, E)$ of a non-empty set $\Psi$ and a multivalued mapping
Like in the classical case, the pair $(\mathbb{R}^n,E_0)$, where $E_0$ is defined by (3.25), is both an upper and a lower characteristic complex of equation (3.13).
We let $\operatorname{Sol}(t, x(\,{\cdot}\,), E, \psi)$ denote the set of solutions $(y(\,{\cdot}\,), z(\,{\cdot}\,))$ of the retarded-type functional differential inclusion
with initial condition (3.27) and consider the following conditions for a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$.
Proposition 3. Assume that conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$ hold. If the functional $\varphi \in \Phi_{\operatorname{LSC}}$ satisfies $(3.\mathrm{MS}.2_+)$ for some upper characteristic complex $(\Psi_+^\ast, E_+^\ast)$, then it satisfies this condition for all upper characteristic complexes $(\Psi_+, E_+)$. In a similar way, if a functional $\varphi \in \Phi_{\operatorname{USC}}$ satisfies $(3.\mathrm{MS}.2_-)$ for some lower characteristic complex $(\Psi_-^\ast, E_-^\ast)$, then it satisfies this condition for all lower characteristic complexes $(\Psi_-,E_-)$.
We comment on the proof of the first part of Proposition 3. We fix $c > 0$ such that condition $(3.\mathrm{C}.3)$ for both complexes $(\Psi_+^\ast, E_+^\ast)$ and $(\Psi_+, E_+)$ and condition $(3.\mathrm{CP}.2)$ hold at the same time. Then, owing to [123], Theorem 5.1 (also see [67], Proposition 3), we infer that for $\varphi \in \Phi_{\operatorname{LSC}}$ condition $(3.\mathrm{MS}.2_+)$ for $(\Psi_+^\ast, E_+^\ast)$ implies this condition for the characteristic complex $(\mathbb{R}^n, E_0)$ (see (3.25)). On the other hand, following the scheme presented in [52], Proposition 1, and using properties $(3.\mathrm{a})$–$(3.\mathrm{d})$ of the functional $\nu_\varepsilon$ defined by (3.49) we can verify directly that condition $(3.\mathrm{MS}.2_+)$ for $(\mathbb{R}^n, E_0)$ yields the same condition for $(\Psi_+, E_+)$.
In particular, Proposition 3 makes it possible to conclude that Theorem 3 generalizes the results in [112] and [97] (also see [123], §§ 7 and 8) concerning the existence and uniqueness of a minimax solution in the case when the Hamiltonian is positively homogeneous in the third variable and also in the case when the Cauchy problem (3.13), (3.14) can be reduced to an auxiliary Cauchy problem with homogeneous Hamiltonian (in this connection see § 2.4 for a discussion of similar cases). Note that the proof of the comparison principle for minimax solutions is slightly simplified in the above cases: in particular, there is no need to construct a ci-smooth Lyapunov–Krasovskii functional.
3.6. Infinitesimal criteria and consistency
To express conditions $(3.\mathrm{MS}.1_+)$ and $(3.\mathrm{MS}.1_-)$, involved in the definitions of an upper and lower minimax solution of the Cauchy problem (3.13), (3.14), in the infinitesimal form it is convenient to use the following derivatives of functionals in multivalued directions, which are compatible with the ci-differentiability techniques in the natural way (see, for example, [113] and also [123], § 11).
For a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ the lower and upper right derivatives at a point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ in a multivalued direction $F \in \mathcal{K}(\mathbb{R}^n)$ are, respectively, the quantities
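The derivatives in (3.50) are presumably defined as follows (a reconstruction consistent with the set (3.51) and with the single-valued derivatives (3.56)):
$$
d_- \{\varphi(t, x(\,{\cdot}\,)) \mid F\} = \lim_{\varepsilon \to 0^+} \liminf_{\delta \to 0^+} \inf_{y(\,{\cdot}\,) \in \Omega(t, x(\,{\cdot}\,), F, \varepsilon)} \frac{\varphi(t + \delta, y(\,{\cdot}\,)) - \varphi(t, x(\,{\cdot}\,))}{\delta},
$$
$$
d_+ \{\varphi(t, x(\,{\cdot}\,)) \mid F\} = \lim_{\varepsilon \to 0^+} \limsup_{\delta \to 0^+} \sup_{y(\,{\cdot}\,) \in \Omega(t, x(\,{\cdot}\,), F, \varepsilon)} \frac{\varphi(t + \delta, y(\,{\cdot}\,)) - \varphi(t, x(\,{\cdot}\,))}{\delta},
$$
where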
$$
\begin{equation}
\Omega(t, x(\,{\cdot}\,), F, \varepsilon)= \bigl\{ y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,)) \colon\dot{y}(\tau) \in [F]^\varepsilon \text{ for a. a. } \tau \in [t, T] \bigr\}
\tag{3.51}
\end{equation}
$$
and $[F]^\varepsilon$ is the closed $\varepsilon$-neighbourhood of the set $F$ in $\mathbb{R}^n$.
The result reads as follows (see, for example, [114], Theorem 8.1, and also [123], Theorem 12.1).
Theorem 5. Assume that conditions $(3.\mathrm{CP}.1)$ and $(3.\mathrm{CP}.2)$ hold and the functional (3.15) is non-anticipating for each $s \in \mathbb{R}^n$. Then for any functional $\varphi \in \Phi_{\operatorname{LSC}}$ condition $(3.\mathrm{MS}.1_+)$ is equivalent to the differential inequality
In a similar way, for any functional $\varphi \in \Phi_{\operatorname{USC}}$ condition $(3.\mathrm{MS}.1_-)$ is equivalent to the differential inequality
In (3.52) and (3.53), $d_\mp \{\varphi(t, x(\,{\cdot}\,)) - \langle s, x(t) \rangle \mid B(c (1 + \|x(\cdot \wedge t)\|_\infty)) \}$ denote the corresponding derivatives at $(t,x(\,{\cdot}\,))$ of the auxiliary functional
According to (3.50) and (3.51), calculating the derivatives $d_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid F \}$ requires an analysis of the behaviour of $\varphi$ along all functions $y(\,{\cdot}\,)$ in the infinite-dimensional set $\Omega(t,x(\,{\cdot}\,),F,\varepsilon)$. Therefore, the verification of the differential inequalities (3.52) and (3.53) is, generally speaking, a rather difficult problem. Nevertheless, in some particular cases the derivatives $d_\mp\{\varphi(t,x(\,{\cdot}\,))\mid F\}$ can be calculated using convenient formulae. First of all, we note that if a functional $\varphi$ is ci-differentiable at $(t,x(\,{\cdot}\,))$, then (see, for example, [113], Proposition 1, and also [123], Proposition 11.1)
In addition, [113] and [118] (also see [123], § § 11.1, 11.3) contain formulae for the calculation of the derivatives $d_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid F\}$ under the assumption that $\varphi$ is piecewise ci-smooth or is an envelope of some family of ci-smooth functionals, which is typical in the context of applications to control problems for time-delay systems.
We also note that the calculation of the derivatives $d_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid F \}$ is simpler when $\varphi$ satisfies the following Lipschitz condition in the variable $x(\,{\cdot}\,)$.
for all $t \in [0, T]$ and $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, where $\mathcal{I}(t)=\max\{i\in\{1,\dots, \mathcal{I}+ 1\} \colon t_i \leqslant t \}$.
Here $\partial_- \{\varphi(t, x(\,{\cdot}\,)) \mid f \}$ and $\partial_+ \{\varphi(t, x(\,{\cdot}\,)) \mid f \}$ denote the lower and upper right derivatives of $\varphi$ at $(t, x(\,{\cdot}\,))$ in the single-valued direction $f$, respectively, that is,
where the function $y^{(f)}(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ is defined by $y^{(f)}(\tau) = x(t) + f(\tau - t)$, $\tau \in [t, T]$. Thus, to calculate $d_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid F \}$ under condition $(3.\mathrm{L})$ it actually suffices to consider only functions $y^{(f)}(\,{\cdot}\,)$ parametrized by the finite-dimensional parameter $f \in F$. In terms of the derivatives (3.56) inequalities (3.52) and (3.53), which characterize minimax solutions of the Cauchy problem (3.13), (3.14), assume the form of inequalities (2.18) and (2.19) for the classical Hamilton–Jacobi equation (2.1) (see below for inequalities (3.62)). We also note that if $\varphi$ is specified by a function $\widehat{\varphi} \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}$ so that $\varphi(t,x(\,{\cdot}\,))=\widehat{\varphi}(t,x(t))$, $t \in [0, T]$, $x(\,{\cdot}\,) \in \operatorname{C}([- h, T]; \mathbb{R}^n)$, then condition $(3.\mathrm{L})$ on $\varphi$ implies condition $(2.\mathrm{L})$ on $\widehat{\varphi}$; therefore, it follows from (2.22) and (3.56) that
Finally, we emphasize that equalities (3.55) need not hold without some additional assumptions on the properties of $\varphi$. For example, consider the functional
It is non-anticipating and continuous and, in addition, Lipschitz in $x(\,{\cdot}\,)$ with respect to the uniform norm in the following sense: for all $t \in [0, 1]$ and $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in \operatorname{C}([0, 1]; \mathbb{R})$ we have
Nevertheless, the second equality in (3.55) does not hold at the point $(t_0=0, x_0(\,{\cdot}\,) \equiv 0) \in [0, 1] \times \operatorname{C}([0,1];\mathbb{R})$. In fact, on the one hand we obtain $\partial_+ \{\varphi(t_0, x_0(\,{\cdot}\,)) \mid f \} = 0$ for any single-valued direction $f \in \mathbb{R}$, since $\varphi(t_0+\delta,y^{(f)}(\,{\cdot}\,))=\varphi(t_0,x_0(\,{\cdot}\,))=0$, $\delta \in (0, 1]$. On the other hand we have the estimate $d_+ \{\varphi(t_0, x_0(\,{\cdot}\,)) \mid [- 1, 1] \} > 0$ for the multivalued direction $F = [- 1, 1]$. To verify this, we introduce the notation $\delta_1 = 1$ and $\delta_k = 3/ 2^k$ for all $k \in \mathbb{N} \setminus \{1\}$ and consider the function
Note that $\varphi(t_0 + \delta_k, y(\,{\cdot}\,)) = 1 / 2^{k - 1}$ for each even $k \in \mathbb{N}$. Consequently, taking account of the inclusion $y(\,{\cdot}\,) \in \Omega(t_0, x_0(\,{\cdot}\,),[-1,1],0)$ we infer that
In view of condition $(3.\mathrm{CP}.2)$ and (3.54) we can conclude from Theorem 5 that the concept of a minimax solution of the Cauchy problem (3.13), (3.14) is consistent with the concept of a solution of this problem in the classical sense (in this connection also see [123], Proposition 12.1).
Proposition 4. Assume that conditions $(3.\mathrm{CP}.1)$ and $(3.\mathrm{CP}.2)$ hold and the functional (3.15) is non-anticipating for each $s \in \mathbb{R}^n$. Then the following hold:
(i) if a continuous functional $\varphi \colon [0, T] \times \operatorname{C}([-h, T]; \mathbb{R}^n) \to \mathbb{R}$ is ci-differentiable at each point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ and satisfies the Hamilton–Jacobi equation (3.13) and the boundary condition (3.14), then it is a minimax solution of the Cauchy problem (3.13), (3.14);
(ii) if a minimax solution of the Cauchy problem (3.13), (3.14) is ci-differentiable at some point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$, then it satisfies the Hamilton–Jacobi equation (3.13) at this point.
Thus, the pair of differential inequalities (3.52), (3.53) can be interpreted as a generalization of the Hamilton–Jacobi equation (3.13) to the non-smooth case.
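For intuition about the derivatives (3.56), they can be estimated numerically by finite differences along the constant-velocity extension $y^{(f)}(\tau) = x(t) + f(\tau - t)$. The sketch below is our own illustration under assumed data (the running-maximum functional and all identifiers are ours, not from the text):

```python
import numpy as np

def phi(t, path_t, path_x):
    """Path-dependent functional: running maximum of x over [0, t]."""
    return path_x[path_t <= t + 1e-12].max()

def extend(t, path_t, path_x, f, delta, n=200):
    """Constant-velocity extension y^(f)(tau) = x(t) + f*(tau - t) on [t, t + delta]."""
    xt = np.interp(t, path_t, path_x)
    tau = np.linspace(t, t + delta, n)
    return (np.concatenate([path_t[path_t < t], tau]),
            np.concatenate([path_x[path_t < t], xt + f * (tau - t)]))

def right_derivative(t, path_t, path_x, f, delta=1e-6):
    """Finite-difference estimate of the right derivative of phi in the direction f."""
    yt, yx = extend(t, path_t, path_x, f, delta)
    return (phi(t + delta, yt, yx) - phi(t, path_t, path_x)) / delta

# Path x(tau) = sin(pi*tau) on [0, 0.5]; at t = 0.5 the running maximum is x(t) = 1.
ts = np.linspace(0.0, 0.5, 501)
xs = np.sin(np.pi * ts)
print(right_derivative(0.5, ts, xs, f=2.0))   # ≈ 2: moving up immediately raises the maximum
print(right_derivative(0.5, ts, xs, f=-2.0))  # ≈ 0: moving down leaves the past maximum fixed
```

Since the quotient is evaluated only along the paths $y^{(f)}(\,{\cdot}\,)$, such a computation captures $\partial_\pm \{\varphi(t, x(\,{\cdot}\,)) \mid f\}$ but, as the counterexample above shows, can miss the derivatives $d_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid F\}$ in a multivalued direction $F$.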
To conclude this subsection we address the natural question of whether the concept of a minimax solution of the Cauchy problem (3.13), (3.14) extends the concept of a minimax solution of the Cauchy problem for the classical Hamilton–Jacobi equations (see § 2). Assume that in the Cauchy problem (3.13), (3.14) the Hamiltonian $H$ and the boundary functional $\sigma$ have the form
for some functions $\widehat{H} \colon [0, T] \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ and $\widehat{\sigma} \colon \mathbb{R}^n \to \mathbb{R}$. We consider the Cauchy problem for the Hamilton–Jacobi equation
From $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$ we obtain that the functions $\widehat{H}$ and $\widehat{\sigma}$ satisfy conditions $(2.\mathrm{CP}.1)$–$(2.\mathrm{CP}.3)$, which ensure the existence and uniqueness of the minimax solution $\widehat{\varphi} \colon [0, T]\times \mathbb{R}^n \to \mathbb{R}$ of the Cauchy problem (3.58), (3.59). In particular, the function $\widehat{\varphi}$ is continuous and satisfies the boundary condition (3.59) and the condition $(2.\mathrm{MS})$ for $\widehat{H}$. Then we can verify directly that the functional
is non-anticipating and continuous and satisfies the boundary condition (3.14) and condition $(3.\mathrm{MS})$. As a result, in view of Theorem 3 we arrive at the following assertion.
Proposition 5. Assume that conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$ hold for the Cauchy problem (3.13), (3.14) and that the Hamiltonian $H$ and boundary functional $\sigma$ have the form (3.57). Then the minimax solution $\varphi$ of this problem is defined by (3.60).
In particular, we infer that if the minimax solution $\widehat{\varphi}$ of the Cauchy problem (3.58), (3.59) is differentiable at some point $(t, \widehat{x}) \in (0, T) \times \mathbb{R}^n$, then, according to (3.60), the minimax solution $\varphi$ of the Cauchy problem (3.13), (3.14) is ci-differentiable at all points $(t, x(\,{\cdot}\,))$ such that $x(\,{\cdot}\,) \in \operatorname{C}([- h, T]; \mathbb{R}^n)$ and $x(t) = \widehat{x}$. In this case we have
Thus, we can say that the Hamilton–Jacobi equations (3.13) with ci-derivatives are a generalization of the classical Hamilton–Jacobi equations (2.1) with partial derivatives to the case when the Hamiltonian $H$ and required solution $\varphi$ are non-anticipating functionals.
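The reduction of ci-derivatives to partial derivatives in this situation can be checked numerically. In the sketch below (entirely our own illustration: the classical function $\widehat{\varphi}(t, x) = t x^2$ and all identifiers are assumptions) the increment of the lifted functional along any constant-velocity Lipschitz extension, with the gradient term subtracted, yields the same limit $\partial \widehat{\varphi} / \partial t$ regardless of the extension:

```python
# Classical function hat_phi and its lift phi(t, x(.)) = hat_phi(t, x(t)),
# which depends on the path only through the current value x(t).
hat_phi = lambda t, x: t * x**2
dhat_dt = lambda t, x: x**2         # partial derivative in t
dhat_dx = lambda t, x: 2 * t * x    # partial derivative (gradient) in x

t, xt = 0.4, 1.3                    # current time and current path value x(t)
delta = 1e-6
for f in (-2.0, 0.0, 5.0):          # several Lipschitz extensions y(tau) = x(t) + f*(tau - t)
    increment = hat_phi(t + delta, xt + f * delta) - hat_phi(t, xt)
    # Subtract the gradient term <grad, y(t+delta) - x(t)>; the remainder per unit
    # time approximates the ci time-derivative and does not depend on f.
    ci_dt = (increment - dhat_dx(t, xt) * f * delta) / delta
    print(f, ci_dt)                 # each value ≈ dhat_dt(t, xt) = 1.69
```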
3.7. Meaningfulness of the minimax solution
Regarding the differential game (3.16), (3.17), the minimax solution $\varphi$ of the Cauchy problem (3.13), (3.14) with Hamiltonian (3.22) makes it possible in the general (non-smooth) case to construct optimal feedback control strategies of the players with memory of motion history.
For an initial position $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ we take the Lyapunov–Krasovskii functional $\nu_\varepsilon$ from (3.49), $\varepsilon \in (0, \varepsilon_0]$, and consider the following non-anticipating control strategies of the first and second players:
and the set $Y(t,x(\,{\cdot}\,))$ is borrowed from (3.32). Non-anticipating selections $U^\circ_\varepsilon(\tau, y(\,{\cdot}\,))$ and $V^\circ_\varepsilon(\tau, y(\,{\cdot}\,))$ can be chosen owing to the non-anticipation property of the maps (3.18) and of the functionals $\varphi$, $\nu_\varepsilon$, and $\nabla \nu_\varepsilon$.
By analogy with (2.28) and (2.29) (also see the comment after Theorem 2) we define the guaranteed results of the control strategies $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$ of the first and second players as follows:
Taking account of the fact that the functional $\nu_\varepsilon$ satisfies conditions $(3.\mathrm{a})$–$(3.\mathrm{d})$ in Lemma 1 and following the scheme of reasoning from [123], Theorem 15.1 (also see [116], Theorem 1, and [121], Theorem 2), we can verify the following assertion.
Note that in the construction of $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$ we can use any other functional satisfying conditions $(3.\mathrm{a})$–$(3.\mathrm{d})$ instead of $\nu_\varepsilon$ from (3.49). In the solution of particular differential games of the form (3.16), (3.17) a successful choice of such a functional can significantly simplify the calculations (see, for example, [116]).
As a consequence of Theorem 6, we deduce that the value functional of the differential game (3.16), (3.17) has all properties of the minimax solution of the Cauchy problem (3.13), (3.14) with Hamiltonian (3.22). In particular, by Proposition 4 the value functional satisfies the Hamilton–Jacobi equation at all points where it is ci-differentiable, which refines Theorem 1. On the other hand, to find out whether one or another functional $\varphi$ is a value functional, it suffices to verify any of the conditions characterizing the minimax solution of this problem. For example, if $\varphi \in \Phi_{\operatorname{C}}$ satisfies the Lipschitz condition $(3.\mathrm{L})$ and the boundary condition (3.14), then, according to § 3.6 (also see [122], Theorem 3), it suffices to verify the differential inequalities
where $\partial_\mp \{\varphi(t, x(\,{\cdot}\,)) \mid f\}$ are the directional derivatives (3.56), $H$ is the Hamiltonian (3.22), and $c$ is from $(3.\mathrm{DG}.2)$.
In building the theory of minimax solutions of the Cauchy problem (3.13), (3.14) an important device in the investigation of the properties of functionals $\varphi$ is that one can always restrict attention to the values $\varphi(t,x(\,{\cdot}\,))$ for functions $x(\,{\cdot}\,)$ in appropriately chosen compact subsets of the space $\operatorname{C}([- h, T]; \mathbb{R}^n)$. In the development of the viscosity approach to the concept of a generalized solution of this problem additional difficulties arise in this regard, because instead of compact subsets of $\operatorname{C}([- h, T]; \mathbb{R}^n)$ one has to deal with bounded closed ones, on which continuous bounded functionals $\varphi$ need no longer attain their extremum values.
Similar obstacles also occur in the theory of viscosity solutions of Hamilton–Jacobi equations in Banach spaces with Fréchet derivatives. One method for dealing with them is to use an appropriate smooth variational principle, more precisely, an assertion that, given a lower semicontinuous functional $\varphi$ bounded below, makes it possible, roughly speaking, to find an arbitrarily small Fréchet-differentiable functional $\psi$ such that the sum $\varphi + \psi$ attains its minimum. In this connection we note, for example, the variational principles due to Stegall [158], Borwein and Preiss [14], and Deville, Godefroy, and Zizler [31] (also see [15], Theorems 2.5.3, 2.5.7, and 6.3.5). However, in assertions of this type a key assumption ensuring the smoothness of the small additional term $\psi$ is usually one or another smoothness property of the norm in the relevant Banach space. In particular, this does not make it possible to cover the case of the space $\operatorname{C}([-h,T];\mathbb{R}^n)$. Another method for dealing with the obstacles under discussion was indicated in [156], where it was proposed to modify the concept of viscosity solution by introducing an appropriate sequence of expanding compact subsets such that the closure of their union covers the whole space. In [120] and [119] this modification was taken as the basis for the following definition of a viscosity solution of the Cauchy problem (3.13), (3.14).
For each $k \in \mathbb{N}$ consider a compact set $W_k$ consisting of functions $x(\,{\cdot}\,) \in \operatorname{Lip}([-h,T];\mathbb{R}^n)$ such that
where $c$ is from $(3.\mathrm{CP}.2)$. Note that $W_k \subset W_{k + 1}$, $k \in \mathbb{N}$, the closure of the union of the $W_k$ over all $k \in \mathbb{N}$ coincides with the whole of $\operatorname{C}([- h, T]; \mathbb{R}^n)$, and for each $k \in \mathbb{N}$ the set $W_k$ is strongly invariant in the following sense: for any point $(t, x(\,{\cdot}\,)) \in [0, T] \times W_k$ we have $Y(t, x(\,{\cdot}\,)) \subset W_k$, where the set $Y(t, x(\,{\cdot}\,))$ is defined by (3.32).
Definition 11. An upper viscosity solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{C}}$ that satisfies the boundary condition (3.28) and has the following property.
Definition 12. A lower viscosity solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{C}}$ that satisfies the boundary condition (3.30) and has the following property.
Definition 13. A viscosity solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{C}}$ that is both an upper and a lower viscosity solution of this problem.
As in the case of the classical Hamilton–Jacobi equations (2.1), we can verify that, under conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$, the minimax solution of the Cauchy problem (3.13), (3.14), which exists and is unique by Theorem 3, is a viscosity solution of this problem. Here the strong invariance of the sets $W_k$, $k \in \mathbb{N}$, turns out to be important. Thus, the following result is true ([119], Theorem 1).
Again, the verification of the uniqueness of the viscosity solution of the Cauchy problem (3.13), (3.14) consists in proving the comparison principle: for any upper and lower viscosity solutions $\varphi_+$ and $\varphi_-$ of this problem
In addition, a generalization of this result to the case when conditions $(3.\mathrm{CP}.1)$, $(3.\mathrm{CP}.2)$, and $(3.\mathrm{CP}.5)$ hold was deduced in [121], Theorem 4. In this case the following functional was constructed on the basis of the ideas in [91] on the construction of Lyapunov–Krasovskii functionals in stability problems for time-delay systems:
We prove Theorem 8. The closure of the union of $W_k$, $k \in \mathbb{N}$, coincides with $\operatorname{C}([- h, T]; \mathbb{R}^n)$; the functionals $\varphi_+$ and $\varphi_-$ are continuous. Therefore, it suffices to fix $k \in \mathbb{N}$ and show that inequality (3.64) holds for all $(t, x(\,{\cdot}\,)) \in [0, T] \times W_k$. Reasoning by contradiction, we assume that
where $\delta > 0$ and $\varepsilon > 0$ are small parameters and $\nu$ is the functional (3.66). We choose points $(t_\varepsilon^\delta, x_\varepsilon^\delta(\,{\cdot}\,))$, $(\tau_\varepsilon^\delta, y_\varepsilon^\delta(\,{\cdot}\,)) \in [0, T] \times W_k$ from the condition
Note that $\Psi(t_\varepsilon^\delta, x_\varepsilon^\delta(\,{\cdot}\,), \tau_\varepsilon^\delta, y_\varepsilon^\delta(\,{\cdot}\,)) \geqslant \Psi(T, x(\,{\cdot}\,), T, x(\,{\cdot}\,)) = 0$ for any function $x(\,{\cdot}\,) \in W_k$. Hence
In what follows $C_i$, $i \in \{1, 2, \dots\}$, denote positive constants independent of $\varepsilon$ and $\delta$. In view of the definition (3.66) of $\nu$ and the compactness of the set $W_k$, from the first inequality in (3.69) we derive
Taking account of the choice of $\alpha$, the convergence (3.71), and the boundary conditions (3.28) and (3.30) for $\varphi_+$ and $\varphi_-$, respectively, we infer that $t_\varepsilon^\delta < T$ and $\tau_\varepsilon^\delta < T$ for all sufficiently small $\varepsilon > 0$.
for all $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ such that $t \leqslant \tau_\varepsilon^\delta + \vartheta_j$, $j \in \{1,\dots, \mathcal{J}\}$. Owing to (3.68), the difference $\varphi_- - \psi_-$ attains its maximum in $[0, T] \times W_k$ at the point $(t_\varepsilon^\delta, x_\varepsilon^\delta(\,{\cdot}\,))$; thus, by condition $(3.\mathrm{VS}.1_-)$ for $\varphi_-$ we obtain
for all $(\tau, y(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ such that $\tau \geqslant t_\varepsilon^\delta - \vartheta_j$, $j \in \{1,\dots, \mathcal{J}\}$; according to condition $(3.\mathrm{VS}.1_+)$ on $\varphi_+$, we arrive at the inequality
The functional $\nu$ is continuous, the set $W_k$ is compact, and, in addition, all functions in this set satisfy the Lipschitz condition with the same constant. Therefore,
where $\nu_\varepsilon^\delta = \nu (t_\varepsilon^\delta, x_\varepsilon^\delta(\,{\cdot}\,), \tau_\varepsilon^\delta, y_\varepsilon^\delta(\,{\cdot}\,))$. Then from condition $(3.\mathrm{CP}.2)$ we obtain the inequality
The Hamiltonian $H$ is continuous, the set $W_k$ is compact, and $\|p_\varepsilon^\delta\| \leqslant C_8 / \varepsilon$ by virtue of the second inequality in (3.76). Hence, owing to the second inequality in (3.69), the right-hand side of (3.79) tends to zero as $\delta \to 0^+$, which contradicts the choice of $\alpha > 0$. The proof of Theorem 8 is complete.
As a consequence, we see that, under conditions $(3.\mathrm{CP}.1)$, $(3.\mathrm{CP}.2)$, and $(3.\mathrm{CP}.5)$, the viscosity solution of the Cauchy problem (3.13), (3.14) exists, is unique, and coincides with the minimax solution of this problem. In particular, the viscosity solution has all properties established above for the minimax solution. On the other hand it is worth noting that the problem of the equivalence of conditions $(3.\mathrm{MS}.1_-)$ and $(3.\mathrm{VS}.1_-)$ (conditions $(3.\mathrm{MS}.1_+)$ and $(3.\mathrm{VS}.1_+)$, respectively), which are involved in the definitions of minimax and viscosity solutions, remains open (for example, for functionals $\varphi \in \Phi_{\operatorname{C}}$).
By analogy with the theory of minimax solutions, to prove the comparison principle for viscosity solutions of the Cauchy problem (3.13), (3.14) under the more general Lipschitz condition $(3.\mathrm{CP}.3)$ it is natural to attempt to construct an appropriate penalty functional $\nu$ starting from the ci-smooth functional $\gamma$ defined by (3.47). For this purpose, we need to ‘extend’ the definition of this functional to the case of doubled variables $(t, x(\,{\cdot}\,))$ and $(\tau, y(\,{\cdot}\,))$. The most direct way to this ‘extension’ leads to the functional
for $\|x(\cdot \wedge t) - y(\cdot \wedge \tau)\|_\infty > 0$ and $\nu(t, x(\,{\cdot}\,), \tau, y(\,{\cdot}\,)) = 0$ otherwise. However, it turns out that for a fixed point $(\tau,y(\,{\cdot}\,)) \in [0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n)$, the functional
may not be ci-differentiable at some points $(t, x(\,{\cdot}\,))$ for $t < \tau$ (see [69], Example 4.1). Thus we cannot use the functional $\nu$ from (3.80) directly in the scheme of the proof of Theorem 8.
The approach concerning modifications of the concept of a viscosity solution in the spirit of Definition 13 was developed, for example, in [77], [10], [79], and [78]. Some attempts to cover the general Lipschitz condition $(3.\mathrm{CP}.3)$ were made in [180]. There are also rather many studies using a similar technique and its further generalizations to construct a theory of viscosity solutions of hereditary Hamilton–Jacobi equations arising in control problems for stochastic time-delay systems. We limit ourselves to referring to [139], [170], [34], and [35]. We also note the paper [11], which is devoted to developing the vanishing viscosity method for path-dependent Hamilton–Jacobi equations.
In addition, several works proposing other definitions of viscosity solutions of path-dependent Hamilton–Jacobi equations, which are more consistent with the definition of a viscosity solution of classical Hamilton–Jacobi equations (see Definition 6), appeared recently. In the papers in the first group (see, for example, [178], [179], [182], [142], and [144]) a transition from the space of continuous functions to the wider space of piecewise continuous functions is made, and only functionals satisfying certain additional continuity requirements and/or growth estimates are under consideration. This makes it possible to prove the uniqueness of the corresponding viscosity solutions using methods of, in fact, finite-dimensional optimization. In the works in the second group (see, for example, [23], [181], [24], and [183]), to substantiate the uniqueness of a viscosity solution, special smooth variational principles are proved (see § 3.9), where the smoothness is understood precisely in the sense of ci-differentiation or its analogues. These principles are based on an abstract variant of the Borwein–Preiss variational principle in complete metric spaces ([106], Theorem 1; also see, for example, [15], Theorem 2.5.2) and a suitable choice of a smooth ‘gauge-type’ functional. In particular, the functional $\gamma$ from (3.47) was used for these purposes in [23], [181], and [183].
To conclude this subsection, we consider another approach to the construction of the theory of viscosity solutions of the Cauchy problem (3.13), (3.14), in which the definition of a viscosity solution is based on appropriate analogues of known criteria for viscosity solutions of the classical Hamilton–Jacobi equations in the form of inequalities (2.37) and (2.38) for sub- and supergradients. In this case, to establish existence and uniqueness, the equivalence of the corresponding definitions of minimax and viscosity solutions is verified, and then results from the theory of minimax solutions (see Theorem 3) are used.
Following [123], § 14, we define the ci-sub- and ci-superdifferentials of a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ at a point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ by
Definition 14. A viscosity solution of the Cauchy problem (3.13), (3.14) is a functional $\varphi \in \Phi_{\operatorname{C}}$ satisfying the boundary condition (3.14) and the differential inequalities
The following result is true (see [123], Proposition 14.1, and also [69], Lemma 4.4).
Proposition 6. Assume that conditions $(3.\mathrm{CP}.1)$ and $(3.\mathrm{CP}.2)$ hold and the functional (3.15) is non-anticipating for each $s \in \mathbb{R}^n$. Then for any functional $\varphi \in \Phi_{\operatorname{LSC}}$, inequality (3.52) implies (3.81). In a similar way, for any functional $\varphi \in \Phi_{\operatorname{USC}}$ inequality (3.53) implies (3.82).
As a consequence, in accordance with Theorem 5, we deduce that a minimax solution of the Cauchy problem (3.13), (3.14) is a viscosity solution of this problem in the sense of Definition 14. The question of whether the converse is true is more subtle and has not been exhaustively studied. An answer to this question was partially given in [130], [142], and [144]. There it was necessary, among other things, to switch from the space $\operatorname{C}([- h, T]; \mathbb{R}^n)$ to the space of piecewise continuous functions and to impose additional restrictions on the class of functionals $\varphi$ under consideration. Below we dwell more thoroughly on the results in this area from [69], in which it is not required to extend the space $\operatorname{C}([- h, T]; \mathbb{R}^n)$.
We consider the following properties of lower and upper semicontinuity of a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$, which are expressed in terms of the functional $\nu$ defined by (3.65).
The set of functionals $\varphi$ that have property $(3.\mathrm{LSC})$ (property $(3.\mathrm{USC})$) is denoted by $\Phi_{\operatorname{LSC}}^\ast$ ($\Phi_{\operatorname{USC}}^\ast$, respectively). Note that any $\varphi \in \Phi_{\operatorname{LSC}}^\ast$ is automatically non-anticipating (in this connection see [24], Remark 2.1) and lower semicontinuous (with respect to the metric $\operatorname{dist}$ defined by (3.3)), that is, $\Phi_{\operatorname{LSC}}^\ast \subset \Phi_{\operatorname{LSC}}$. In a similar way $\Phi_{\operatorname{USC}}^\ast \subset \Phi_{\operatorname{USC}}$.
The following assertion is valid ([69], Theorem 2.1).
Theorem 9. Assume that conditions $(3.\mathrm{CP}.1)$ and $(3.\mathrm{CP}.2)$ hold and the functional (3.15) is non-anticipating for each $s \in \mathbb{R}^n$. If a functional $\varphi \in \Phi_{\operatorname{LSC}}^\ast$ satisfies the differential inequality (3.81), then it satisfies condition $(3.\mathrm{MS}.1_-)$. In a similar way, for a functional $\varphi \in \Phi_{\operatorname{USC}}^\ast$ the differential inequality (3.82) implies condition $(3.\mathrm{MS}.1_+)$.
Thus, for a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ that has the additional continuity properties $(3.\mathrm{LSC})$ and $(3.\mathrm{USC})$, Definition 10 of a minimax solution and Definition 14 of a viscosity solution of the Cauchy problem (3.13), (3.14) are equivalent. As a consequence, we immediately derive from Theorem 3 that under conditions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$ the viscosity solution is unique in the class of functionals $\Phi_{\operatorname{LSC}}^\ast \cap \Phi_{\operatorname{USC}}^\ast$.
Furthermore, we can show (in this connection see [69], § 3.3, for optimal control problems) that $\rho \in \Phi_{\operatorname{LSC}}^\ast \cap \Phi_{\operatorname{USC}}^\ast$ for the value functional $\rho$ of the differential game (3.16), (3.17) if the following conditions hold in addition to $(3.\mathrm{DG}.1)$, $(3.\mathrm{DG}.2)$, and $(3.\mathrm{DG}.4)$.
for any functions $y_1(\,{\cdot}\,),y_2(\,{\cdot}\,) \in W$.
Condition $(3.\mathrm{DG}.7)$ is stronger than $(3.\mathrm{DG}.3)$; it makes it possible to cover only the cases of distributed and constant concentrated delays (see (1.21) and (1.23)), while condition $(3.\mathrm{DG}.8)$ ensures the continuity of the functional $\sigma$. As a result, from Theorem 6 we conclude that, under the above conditions, the Cauchy problem (3.13), (3.14) with Hamiltonian (3.22) has a viscosity solution in the sense of Definition 14 which belongs to the set $\Phi_{\operatorname{LSC}}^\ast \cap \Phi_{\operatorname{USC}}^\ast$, is unique, and coincides with the value functional $\rho$.
We comment on the proof of the first part of Theorem 9. Since, in general, the reasoning follows the scheme of the verification of a similar result for the classical Hamilton–Jacobi equations (see, for example, [162], Theorem 4.3), we dwell only on the most significant distinctions.
For $\varphi \colon [0,T] \times \operatorname{C}([-h,T]; \mathbb{R}^n) \to \mathbb{R}$, $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$, and $F \in \mathcal{K}(\mathbb{R}^n)$ we consider the quantity (in this connection also see [20], § 4)
where the set $\Omega(t, x(\,{\cdot}\,), F, \delta)$ is defined by (3.51). Along with $d_-\{\varphi(t,x(\,{\cdot}\,)) \mid F\}$ in (3.50), the quantity $d_-^\ast \{ \varphi(t, x(\,{\cdot}\,)) \mid F \}$ can also be interpreted as some lower right derivative of $\varphi$ at $(t, x(\,{\cdot}\,))$ in the multivalued direction $F$.
By analogy with Theorem 5 we can verify that for a functional $\varphi \in \Phi_{\operatorname{LSC}}$ to satisfy condition $(3.\mathrm{MS}.1_-)$ it suffices that it satisfies the differential inequality
A key role in substantiating the implication from (3.81) to (3.83) for $\varphi \in \Phi_{\operatorname{LSC}}^\ast$ is played by the following property of the ci-subdifferential ([69], Lemma 4.3; this is an analogue of a property of the subdifferential in [161], Theorem 1.1).
Lemma 2. Assume that $\varphi \in \Phi_{\operatorname{LSC}}^\ast$, $(t_0,x_0(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$, $F \in \mathcal{K}(\mathbb{R}^n)$, and
Then for each positive number $\varepsilon $ there are $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([-h,T];\mathbb{R}^n)$ and $(a, s) \in D_-\varphi (t, x(\,{\cdot}\,))$ such that $\operatorname{dist}((t, x(\,{\cdot}\,)), (t_0, x_0(\,{\cdot}\,))) \leqslant \varepsilon$ and
$$
\begin{equation*}
a + \langle s, f \rangle > 0, \qquad f \in F.
\end{equation*}
$$
Lemma 2 is proved using techniques related to the substantiation of multidimensional non-smooth generalizations of the finite increment formula [19], [20], [86] (also see [162], § A6, and [21], Chap. 3, Theorem 2.6), which were partially adapted to the context of applications to Hamilton–Jacobi equations with ci-derivatives in [142] and [144]. Among other things, constructions of penalty functionals based on the functional $\nu$ from (3.65) are used again. Note that this does not make it possible to dispense entirely with the additional lower semicontinuity assumption $(3.\mathrm{LSC})$. In addition, in contrast to the finite-dimensional case, here, as in [20], [21], and [86], smooth variational principles must be used. With this aim in view, the following fact was proved in [69], Lemma 4.1.
Lemma 3. Let $W \subset [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ be a non-empty bounded closed set such that $t < T$ for all $(t, x(\,{\cdot}\,)) \in W$, and let $\varphi \colon W \to \mathbb{R}$ be a non-anticipating lower semicontinuous functional bounded below. Then for each $\varepsilon > 0$ there exist a non-anticipating functional $\psi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ and a point $(t_\ast, x_\ast(\,{\cdot}\,)) \in W$ such that
The proof of Lemma 3 is in turn based on an abstract variant of the Borwein–Preiss variational principle ([106], Theorem 1). In this case the gauge-type functional
is used, where $\gamma$ is the functional (3.47) and the positive number $a$ is chosen appropriately in terms of $T$ and $W$. In particular, the sought-for functional $\psi$ has the form of a series:
for appropriately chosen $\delta > 0$ and points $(\tau_i, y_i(\,{\cdot}\,)) \in W$, $i \in \mathbb{N} \cup \{0\}$.
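For orientation we recall the shape of the perturbation produced by the abstract Borwein–Preiss principle (stated here in generic notation as a reminder of the known result; the precise weights and centres used in [69] may differ): for a gauge-type function $\rho$ on a complete metric space, the small perturbation is a convergent series of gauges centred at suitable points $x_i$,

```latex
\psi(x) \;=\; \sum_{i=0}^{\infty} \delta_i\, \rho(x, x_i),
\qquad \delta_i > 0, \qquad \sum_{i=0}^{\infty} \delta_i < \infty .
```

In the present setting $\rho$ is the gauge-type functional above built from $\gamma$, and the centres are the points $x_i = (\tau_i, y_i(\,{\cdot}\,)) \in W$.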
Finally, comparing Definitions 13 and 14 of viscosity solutions of the Cauchy problem (3.13), (3.14), we note that, for example, the differential inequality (3.81) can equivalently be expressed as follows in terms of smooth test functionals ([69], Proposition 3.3).
Proposition 7. Let $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ be a non-anticipating functional. Then $\varphi$ satisfies the differential inequality (3.81) if and only if the following condition holds: for any point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ and each ci-smooth functional $\psi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$, if for each function $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$ there is $\delta \in (0, T - t]$ such that
In this subsection we discuss briefly the relationship between the formalization used in the paper and some other possible approaches (concerning the choice of a function space and the derivatives used) developed in the theory of path-dependent Hamilton–Jacobi equations.
Recall that in this paper, following, for example, [120], [119], [121], [122], and [67], we consider the Cauchy problem (3.13), (3.14) in the set $[0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ and assume that the unknown functional
in the set $[0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$. Note that a functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ is continuous with respect to the pseudometric $\operatorname{dist}_0$ if and only if it is non-anticipating and continuous with respect to $\operatorname{dist}$. Thus, the above results do not change if the Cauchy problem (3.13), (3.14) is considered on the set $[0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ with pseudometric $\operatorname{dist}_0$, without indicating the non-anticipation property.
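The non-anticipation property itself is easy to visualize numerically: a non-anticipating functional evaluated at time $t$ cannot distinguish two paths coinciding on $[-h, t]$, whatever their future behaviour. A minimal sketch (the integral functional and all identifiers are our own assumptions):

```python
import numpy as np

# Non-anticipating functional: left Riemann sum approximating the integral of x^2 over [-h, t].
phi = lambda t, ts, xs: float(np.sum(xs[ts <= t][:-1] ** 2 * np.diff(ts[ts <= t])))

h, t = 1.0, 0.3
ts = np.linspace(-h, 1.0, 2001)
x1 = np.sin(ts)
x2 = np.where(ts <= t, np.sin(ts), np.sin(t) + 5.0 * (ts - t))  # same past, different future

print(phi(t, ts, x1) == phi(t, ts, x2))      # True: the future after t is irrelevant at time t
print(phi(1.0, ts, x1) == phi(1.0, ts, x2))  # False: at a later time the paths differ
```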
Furthermore, applying the standard procedure for switching from a pseudometric space to the induced metric space of equivalence classes (see, for example, [80], Chap. 4, Theorem 15), we can switch from the space $[0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ with pseudometric $\operatorname{dist}_0$ to the space
In this case, a non-anticipating functional $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ can be identified with the functional $\psi \colon \operatorname{G} \to \mathbb{R}$ defined by
Then we conclude that $\varphi$ is continuous with respect to the pseudometric $\operatorname{dist}_0$ (or, which is the same, is non-anticipating and continuous in the metric $\operatorname{dist}$) if and only if $\psi$ is continuous in the metric $\operatorname{dist}_\ast$. Hence the Cauchy problem (3.13), (3.14) can also be reformulated in the space $\operatorname{G}$ with metric $\operatorname{dist}_\ast$, when the Hamiltonian $H$ is defined on triples $(t, r(\,{\cdot}\,), s)$ and the required functional acts from $\operatorname{G}$ to $\mathbb{R}$. Such a formalization was used, for example, in [77], [79], [180], [78], [69], and [183].
We also note that, for example, in [97], [111]–[115], [118], and [123], the Cauchy problem (3.13), (3.14) was considered on the set $\operatorname{G}$ but with another metric, namely, the metric
At its core, this metric is the Hausdorff distance between the graphs of continuous functions $r_1 \colon [- h, t_1] \to \mathbb{R}^n$ and $r_2 \colon [- h, t_2] \to \mathbb{R}^n$ as compact subsets of $\mathbb{R} \times \mathbb{R}^{n}$. According, for example, to [153], Chap. 2, the metrics $\operatorname{dist}_{\textrm{H}}$ and $\operatorname{dist}_\ast$ are not equivalent in the strong sense but induce the same topology in $\operatorname{G}$. This means that if $(t_i, r_i(\,{\cdot}\,)) \in \operatorname{G}$, $i \in \mathbb{N} \cup \{0\}$, then $\operatorname{dist}_{\textrm{H}} ((t_i, r_i(\,{\cdot}\,)), (t_0, r_0(\,{\cdot}\,))) \to 0$ as $i \to \infty$ if and only if $\operatorname{dist}_\ast((t_i,r_i(\,{\cdot}\,)),(t_0,r_0(\,{\cdot}\,))) \to 0$ as $i \to \infty$. Correspondingly, any functional $\psi\colon\operatorname{G} \to \mathbb{R}$ is continuous with respect to $\operatorname{dist}_{\textrm{H}}$ if and only if it is continuous with respect to $\operatorname{dist}_\ast$. These arguments make it possible to link the results presented in our survey with the ones in the above studies.
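The graph interpretation of $\operatorname{dist}_{\textrm{H}}$ can be made concrete by sampling the two graphs as finite point sets in $\mathbb{R}^{1+n}$ and computing the Hausdorff distance by brute force. The following discretized sketch is our own illustration (the sample functions and identifiers are assumptions):

```python
import numpy as np

def graph(ts, rs):
    """Graph of r: [-h, t] -> R, sampled as a point set in R^2."""
    return np.column_stack([ts, rs])

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A and B (brute force)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two elements of G: r1 on [-1, 0.5] and r2 on [-1, 0.6] extending r1 with slope 1.
t1 = np.linspace(-1.0, 0.5, 301)
r1 = np.sin(t1)
t2 = np.linspace(-1.0, 0.6, 321)
r2 = np.sin(np.minimum(t2, 0.5)) + np.maximum(t2 - 0.5, 0.0)

print(hausdorff(graph(t1, r1), graph(t2, r2)))  # ≈ 0.1414 = sqrt(0.1^2 + 0.1^2)
```

Only the extra piece of the graph of $r_2$ beyond $t = 0.5$ contributes here, and its farthest point $(0.6, \sin 0.5 + 0.1)$ is closest to the endpoint $(0.5, \sin 0.5)$ of the graph of $r_1$.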
The Hamilton–Jacobi equation (3.13) under consideration employs coinvariant (ci-) derivatives, which are defined by (3.6). As already noted above, the very concept of invariant and coinvariant derivatives of functionals was introduced in [83]; the relevant technique of invariant differential calculus and its applications to problems in the theory of functional differential equations were discussed in detail, for example, in [84] and [85].
It is worth noting that functionals of the form
were studied in [84] and [85]. Here $\operatorname{PC}([- h, T]; \mathbb{R}^n)$ is the set of piecewise continuous (from the right) functions $x \colon [-h, T] \to \mathbb{R}^n$. For such a functional $\Phi$ the ci-derivative $\partial \Phi(t, z, x(\,{\cdot}\,)) \in \mathbb{R}$ at a point $(t, z, x(\,{\cdot}\,)) \in [0, T) \times \mathbb{R}^n \times \operatorname{PC}([- h, T]; \mathbb{R}^n)$ is defined by
provided that this limit exists and is independent of $y(\,{\cdot}\,)\,{\in}\operatorname{Lip}(t,z,x(\,{\cdot}\,))$, where $\operatorname{Lip}(t, z, x(\,{\cdot}\,))$ is the set of functions $y(\,{\cdot}\,) \in \operatorname{PC}([- h, T]; \mathbb{R}^n)$ such that $y(\tau)=x(\tau)$, $\tau \in [-h,t)$, $y(t)=z$, and $y(\,{\cdot}\,)$ is a Lipschitz function in $[t, T]$. With respect to the finite-dimensional variable $z$ the ordinary gradient $\partial \Phi(t,z,x(\,{\cdot}\,)) / \partial z \in \mathbb{R}^n$ is considered. Note that in this approach $\Phi$ must be defined at piecewise continuous functions $x(\,{\cdot}\,)$ and, unlike (3.6), it cannot be extended directly to functionals defined only at continuous functions $x(\,{\cdot}\,)$. Nevertheless, considering, for a functional $\Phi$, the functional
under rather natural assumptions. In view of the above relationship the quantities $\partial_t \varphi(t, x(\,{\cdot}\,))$ and $\nabla \varphi(t, x(\,{\cdot}\,))$ defined by (3.6) are called ci-derivatives as before.
We also note that a closely related approach to the differentiation of non-anticipating functionals was proposed in [5], where the corresponding derivatives were called Clio-derivatives.
It must be noted separately that path-dependent Hamilton–Jacobi equations are often (see, for example, [170], [180], [181], [23], [24], and [183]) considered with horizontal and vertical derivatives [33] (see also [22]). We give the corresponding definitions. Let $\operatorname{D}([- h, T]; \mathbb{R}^n)$ be the space of so-called càdlàg functions $x \colon [- h, T] \to \mathbb{R}^n$ (that is, right-continuous functions that have a limit from the left at each point $t \in (-h,T]$) with the norm
We consider the set $[0, T] \times \operatorname{D}([- h, T]; \mathbb{R}^n)$ endowed with the pseudometric $\operatorname{dist}_0$ defined by (3.84). The horizontal derivative $\partial^{\rm H}\Phi(t,x(\,{\cdot}\,)) \in \mathbb{R}$ of a functional $\Phi \colon [0,T] \times \operatorname{D}([-h,T];\mathbb{R}^n) \to \mathbb{R}$ at a point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{D}([-h,T]; \mathbb{R}^n)$ is defined by
The vertical derivative of $\Phi$ at a point $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{D}([-h,T]; \mathbb{R}^n)$ is defined as the vector $\partial^{\rm V} \Phi(t, x(\,{\cdot}\,)) = (\partial_1^{\rm V} \Phi(t, x(\,{\cdot}\,)), \dots, \partial_n^{\rm V} \Phi(t, x(\,{\cdot}\,))) \in \mathbb{R}^n$, where
Here $e_i \in \mathbb{R}^n$, $i \in \{1,\dots, n\}$, is the standard orthonormal basis in the space $\mathbb{R}^n$ and $1_{[t, T]}(\,{\cdot}\,) \in \operatorname{D}([- h, T]; \mathbb{R})$ is the characteristic function of the interval $[t, T]$. We note again that vertical derivatives cannot be defined directly for functionals $\varphi \colon [0, T] \times \operatorname{C}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ (in this connection see, for example, [24]).
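These definitions lend themselves to a direct finite-difference check. The sketch below is our own illustration, not taken from the survey (the functional and all names are assumptions): it takes $n = 1$, $h = 1$ and the functional $\Phi(t, x(\,{\cdot}\,)) = x(t)^2 + \int_{-h}^{t} x(s)\, ds$, for which $\partial^{\rm H} \Phi(t, x(\,{\cdot}\,)) = x(t)$ and $\partial^{\rm V} \Phi(t, x(\,{\cdot}\,)) = 2 x(t)$.

```python
H_DELAY, T = 1.0, 1.0

def integral(x, a, b, n=4000):
    # composite trapezoidal rule for a scalar function x on [a, b]
    h = (b - a) / n
    v = [x(a + i * h) for i in range(n + 1)]
    return h * (sum(v) - 0.5 * (v[0] + v[-1]))

def Phi(t, x):
    # test functional Phi(t, x) = x(t)^2 + integral of x over [-H_DELAY, t]
    return x(t) ** 2 + integral(x, -H_DELAY, t)

def horizontal(t, x, delta=1e-3):
    # freeze the path at time t (the map x(. ^ t)), then advance time by delta
    frozen = lambda s: x(min(s, t))
    return (Phi(t + delta, frozen) - Phi(t, frozen)) / delta

def vertical(t, x, delta=1e-3):
    # bump the path by delta * 1_{[t, T]} (a cadlag perturbation of the value at t)
    bumped = lambda s: x(s) + (delta if s >= t else 0.0)
    return (Phi(t, bumped) - Phi(t, x)) / delta

x = lambda s: s  # test path x(s) = s
print(horizontal(0.5, x))  # ~ x(0.5)   = 0.5
print(vertical(0.5, x))    # ~ 2 x(0.5) = 1.0
```

Note that the vertical perturbation leaves the path in the càdlàg class but not in the continuous one, which is exactly why these derivatives are introduced on $\operatorname{D}([- h, T]; \mathbb{R}^n)$.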
The following result is an analogue of Proposition 1 (see, for example, [24], Theorem 2.1, and also [180], Theorem 2.6).
Proposition 8. Assume that a continuous functional $\Phi \colon [0, T] \times \operatorname{D}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ has a horizontal derivative $\partial^{\rm H} \Phi(t, x(\,{\cdot}\,))$ at each point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{D}([- h, T]; \mathbb{R}^n)$ and a vertical derivative $\partial^{\rm V} \Phi(t, x(\,{\cdot}\,))$ at each point $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{D}([-h,T];\mathbb{R}^n)$ such that the mappings
are continuous. Then for all $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$ and $y(\,{\cdot}\,) \in \operatorname{Lip}(t, x(\,{\cdot}\,))$,
As a consequence, if a functional $\Phi \colon [0, T] \times \operatorname{D}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ satisfies the assumptions of Proposition 8, then the restriction $\varphi$ of this functional to the space $[0,T] \times \operatorname{C}([-h,T];\mathbb{R}^n)$ is ci-smooth and, in addition,
for all $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{C}([- h, T]; \mathbb{R}^n)$. This makes it possible to compare the results described in this survey with ones in the theory of Hamilton–Jacobi equations with horizontal and vertical derivatives.
We also note that there is a certain relationship between Fréchet derivatives and horizontal and vertical derivatives for non-anticipating functionals (see, for example, [75]). Nevertheless, in our opinion it remains unclear how the theory of Hamilton–Jacobi equations in infinite-dimensional spaces with Fréchet derivatives can be applied to the Cauchy problem (3.13), (3.14) under the assumptions $(3.\mathrm{CP}.1)$–$(3.\mathrm{CP}.3)$.
Finally, we note that when derivatives of non-anticipating functionals are introduced in one or another way, it is important to have a formula of the form (3.11) or (3.85). Therefore, definitions of relevant derivatives are sometimes (see, for example, [139], [34], [35], and [10]) based on formulae of this type directly.
4. Neutral-type systems
In this section we touch upon the development of the theory of Hamilton–Jacobi equations for neutral-type systems (1.24), which are more general than time-delay systems (1.20).
We consider a differential game such that the motion of the corresponding dynamical system is described by neutral-type functional differential equations in the Hale form (see, for example, [70], [2], [88], and [71])
with the initial condition (3.5) specified by the initial position $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)$. The aim of the first (second) player is to minimize (maximize, respectively) the quality index
A distinction from the differential game (3.16), (3.17) for a time-delay system is that the subtrahend $g(\tau, y(\,{\cdot}\,))$ occurs on the left-hand side of the equation of motion (4.1). Since this subtrahend is under the derivative sign, the function $x(\,{\cdot}\,)$ in the initial position $(t, x(\,{\cdot}\,))$ must be Lipschitz continuous. In this connection we assume that the mapping $g \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}^n$ satisfies the following local Lipschitz condition.
and $\|\cdot\|_\infty$ is the uniform norm in the space $\operatorname{Lip}([- h, T]; \mathbb{R}^n)$ (see (3.4)). It follows from $(4.\mathrm{DG}.1)$ that the map $g$ is non-anticipating. In addition, $g(\tau, y(\,{\cdot}\,))$ is independent of the values of $y(\,{\cdot}\,)$ on the interval $(\tau - h_0, T]$ and, in particular, of $y(\tau)$, which automatically ensures that equation (4.1) is solved with respect to the derivative $\dot{y}(\tau)$. Note that one or another condition of this type is rather often imposed when neutral-type functional differential equations are considered (see, for example, [70] and [71], § 2.7, for the condition of atomicity at zero, or [2], § 1.2, for the condition of strict throw-back in derivative).
We also assume that the following conditions hold.
We let $X_\ast$ denote the set of points $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)$ at which all coordinates $g_i \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$, $i \in \{1,\dots, n\}$, of the map $g = (g_1,\dots,g_n)$ in (4.1) are ci-differentiable and set
By condition $(4.\mathrm{DG}.1)$, for each function $x(\,{\cdot}\,) \in \operatorname{Lip}([- h, T]; \mathbb{R}^n)$, for almost every $t \in [0, T)$ we have
for some function $\widehat{g} \colon [0, T] \times \mathbb{R} \to \mathbb{R}$, which corresponds to the case of a constant concentrated delay on the left-hand side of system (4.1). If $\widehat{g}$ is differentiable and $\partial \widehat{g}(\tau, y) / \partial y \ne 0$ for all $(\tau, y) \in [0, T] \times \mathbb{R}$, then the set $X_\ast$ consists of pairs $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{Lip}([-h,T];\mathbb{R})$ such that $x(\,{\cdot}\,)$ has a right derivative $\dot{x}^+ (t - h)$ at the point $t - h$. In this case we have
The main distinction of (4.4) from the Hamilton–Jacobi equations (3.13), which correspond to control problems for time-delay systems, is the occurrence of the new term $\langle \partial_t g(t, x(\,{\cdot}\,)), \nabla\varphi(t,x(\,{\cdot}\,))\rangle$. As we can see from the example of (4.3), the map $X_\ast \ni (t, x(\,{\cdot}\,)) \mapsto \partial_t g(t, x(\,{\cdot}\,)) \in \mathbb{R}^n$ is not necessarily continuous, so this term is singled out. In addition, since the map $\partial_t g$ is defined only on the set $X_\ast$, equation (4.4) itself can only be considered on this set. However, despite this fact, the unknown in the Cauchy problem (4.4), (4.5) is a non-anticipating functional $\varphi$ defined on the whole space $[0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)$.
Like in (3.19) and (3.21), at each point $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{Lip}([-h,T];\mathbb{R}^n)$ we define the lower and upper values $\rho_-(t,x(\,{\cdot}\,))$ and $\rho_+(t,x(\,{\cdot}\,))$ of the differential game (4.1), (4.2). If these quantities are equal, then the game has value
The following assertion is true (in this connection see [140], Theorem 2).
Theorem 10. Assume that conditions $(4.\mathrm{DG}.1)$–$(4.\mathrm{DG}.3)$ hold. Then the value functional $\rho \colon [0,T] \times \operatorname{Lip}([-h,T]; \mathbb{R}^n) \to \mathbb{R}$ of the differential game (4.1), (4.2) satisfies the Hamilton–Jacobi equation (4.4) with Hamiltonian (4.6) at all points $(t,x(\,{\cdot}\,)) \in X_\ast$ where it is ci-differentiable.
The specificity of neutral-type systems also manifests itself through the fact that the value functional $\rho$ is not ci-smooth even in the simplest examples (see, for example, [140], § 6). Therefore, to deduce a result similar to Theorem 2 and adequate for systems of this type we replace the assumption of the ci-smoothness of the solution $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ of the Cauchy problem (4.4), (4.5) by the following weaker conditions.
and for any numbers $0 < \delta < \min_{i\in \{1,\dots, \mathcal{I}\}} (t_{i + 1} - t_i)$ and $\mu > 0$ the ci-gradient $\nabla \varphi$ is uniformly continuous in the set $X_\ast \cap \Bigl(\,\bigcup\limits_{i = 1}^{\mathcal{I}} [t_i, t_{i + 1} - \delta] \times Z_\mu\Bigr)$.
Note that conditions $(4.\mathrm{S}.1)$ and $(4.\mathrm{S}.2)$ ensure that the functional $\varphi$ and also (see § 3.1) the maps $\partial_t \varphi \colon X_\ast \to \mathbb{R}$ and $\nabla \varphi \colon X_\ast \to \mathbb{R}^n$ are non-anticipating. In addition, by virtue of $(4.\mathrm{S}.3)$ and the above properties of the set $X_\ast$ we can consider the extension
of the ci-gradient $\nabla \varphi$, defined at $(t, x(\,{\cdot}\,)) \in ([0, T) \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)) \setminus X_\ast$ by continuity, that is,
where the limit is calculated over the points $(\tau, x(\cdot \wedge t)) \in X_\ast$. The map $\Upsilon$ is also non-anticipating.
The following result is true (in this connection see [140], Theorem 1, and also [124], Theorem 1).
Theorem 11. Assume that conditions $(4.\mathrm{DG}.1)$–$(4.\mathrm{DG}.3)$ hold and there is a functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ that satisfies the smoothness conditions $(4.\mathrm{S}.1)$–$(4.\mathrm{S}.3)$, the Hamilton–Jacobi equation (4.4) with Hamiltonian (4.6), and the boundary condition (4.5). Then the differential game (4.1), (4.2) has value $\rho(t, x(\,{\cdot}\,)) = \varphi(t, x(\,{\cdot}\,))$, $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)$, and the non-anticipating control strategies of the first and second players constructed by extremal aiming in the direction of the extension $\Upsilon$ of the ci-gradient of $\varphi$, namely,
First we indicate the results obtained in this area under the following stronger conditions, which make it possible to take account of neutral-type systems with constant concentrated delays in applications.
for all $t \in [0, T]$, $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, and $s \in \mathbb{R}^n$.
Definition 15. A minimax solution of the Cauchy problem (4.4), (4.5) is a functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ that is non-anticipating and continuous, satisfies the boundary condition (4.5), and has the following properties.
The multivalued mapping $E_0$ in (4.7) is defined by (3.25).
We note that for $g \equiv 0$ this definition is naturally consistent with Definition 10. We also note that properties $(4.\mathrm{MS}.1_+)$ and $(4.\mathrm{MS}.1_-)$ express the $u$- and $v$-stability of the value functional of the differential game (4.1), (4.2) (see, for example, [132] and also [65] and [140]).
The basic difficulty in the proof of Theorem 12 consists in verifying the relevant comparison principle ([143], Lemma 4): if a non-anticipating lower semicontinuous functional $\varphi_+ \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ with property $(4.\mathrm{MS}.1_+)$ and a non-anticipating upper semicontinuous functional $\varphi_- \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ with property $(4.\mathrm{MS}.1_-)$ satisfy the estimates
where $\varepsilon > 0$ is a small parameter, the positive numbers $\lambda_g $ and $\lambda_H $ correspond to a specially chosen set $W \subset \operatorname{Lip}([- h, T]; \mathbb{R}^n)$ in accordance with conditions $(4.\mathrm{CP}.1)$ and $(4.\mathrm{CP}.3)$, respectively, and $\lambda_\ast = 4 \lambda_H + 2 \lambda_g / h$.
We also note ([143], Theorems 1 and 2) the consistency of the concept of minimax solution of the Cauchy problem (4.4), (4.5) with the concept of solution of this problem in the classical sense. More precisely, on the one hand, if a functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ satisfies the smoothness conditions $(4.\mathrm{S}.1)$–$(4.\mathrm{S}.3)$, the Hamilton–Jacobi equation (4.4), and the boundary condition (4.5), then $\varphi$ is a minimax solution of the Cauchy problem (4.4), (4.5). On the other hand, the minimax solution satisfies the Hamilton–Jacobi equation (4.4) at all points $(t, x(\,{\cdot}\,)) \in X_\ast$ at which it is ci-differentiable.
The greatest progress in the development of the theory of minimax solutions of the Cauchy problem (4.4), (4.5) has been achieved in the case when the Hamiltonian $H$ satisfies the condition of positive homogeneity in the third variable (in this connection see § 2.3).
Assume that the condition $(4.\mathrm{CP}.2)$ and the following conditions hold.
Definition 16. A minimax solution of the Cauchy problem (4.4), (4.5) is a functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ that is non-anticipating and continuous, satisfies the boundary condition (4.5), and has the following properties.
Note that a minimax solution in the sense of Definition 16 is also a minimax solution in the sense of Definition 15. The converse result for the Cauchy problem (4.4), (4.5) requires further investigation.
The following result holds (see [141], Theorem 1, and also [126], Theorem 1).
In addition, the minimax solution of the Cauchy problem (4.4), (4.5) depends continuously on variations of the map $g$, the Hamiltonian $H$, and the boundary functional $\sigma$ (see [141], Theorem 2, and also [126], Theorem 2) and, like in the case of Definition 15, is consistent with the concept of solution in the classical sense (see [141], Theorems 3 and 4, and also [126], Propositions 1 and 2).
In [127] conditions $(4.\mathrm{MS}.2_+)$ and $(4.\mathrm{MS}.2_-)$ in Definition 16 were expressed in infinitesimal form. To this end, for a functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ one considers the following lower and upper right derivatives at a point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{Lip}([-h,T];\mathbb{R}^n)$ in a direction $(z(\,{\cdot}\,),F) \in \operatorname{Lip}([t,T];\mathbb{R}^n) \times \mathcal{K}(\mathbb{R}^n)$:
Thus, a non-anticipating continuous functional $\varphi \colon [0, T] \times \operatorname{Lip}([-h,T];\mathbb{R}^n) \to \mathbb{R}$ is a minimax solution of the Cauchy problem (4.4), (4.5) in the sense of Definition 16 if and only if it satisfies the boundary condition (4.5) and the differential inequalities (4.8) and (4.9).
Under the above conditions the minimax solution $\varphi$ of the Cauchy problem (4.4), (4.5) satisfies ([129], Lemma 6.4) the following Lipschitz condition in the variable $x(\,{\cdot}\,)$.
for all $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{Lip}([- h, T]; \mathbb{R}^n)$ and $(z(\,{\cdot}\,), F) \in \operatorname{Lip}([t, T]; \mathbb{R}^n) \times \mathcal{K}(\mathbb{R}^n)$. Here $\partial_- \{\varphi(t, x(\,{\cdot}\,)) \mid z(\,{\cdot}\,), f \}$ and $\partial_+ \{\varphi(t, x(\,{\cdot}\,)) \mid z(\,{\cdot}\,), f \}$ are the lower and upper right derivatives of $\varphi$ at the point $(t, x(\,{\cdot}\,))$ in the direction $(z(\,{\cdot}\,), f)$, that is,
Hence we arrive at the following assertion ([129], Theorem 4.1).
Theorem 15. Under conditions $(4.\mathrm{CP}.1)$, $(4.\mathrm{CP}.2)$, $(4.\mathrm{CP}.6)$–$(4.\mathrm{CP}.8)$ a continuous functional $\varphi \colon [0, T] \times \operatorname{Lip}([- h, T]; \mathbb{R}^n) \to \mathbb{R}$ is a minimax solution of the Cauchy problem (4.4), (4.5) in the sense of Definition 16 if and only if it satisfies the boundary condition (4.5), the Lipschitz condition $(4.\mathrm{L})$, and the differential inequalities
Regarding the differential game (4.1), (4.2), the meaningfulness of Definition 16 of a minimax solution of the Cauchy problem (4.4), (4.5) under various assumptions was studied in [124], [68], and [128]. Among other things, appropriate methods, based on the minimax solution, for the construction of optimal control strategies of the players were proposed, and the coincidence of this solution with the value functional was established.
Some results concerning the viscosity approach to a generalized solution of the Cauchy problem (4.4), (4.5) and its relationship with the minimax approach were deduced in [145]. In this case the problem was considered over the space of piecewise Lipschitz functions.
To conclude this section, we note that the development of the theory of differential games and the theory of relevant Hamilton–Jacobi equations for neutral-type systems of general form, for example, of the form
5. Fractional-order systems

The machinery of coinvariant differentiation of non-anticipating functionals has turned out to be useful for the construction of the theory of Hamilton–Jacobi equations for fractional-order systems (1.25). This section is devoted to the results obtained in this area.
5.1. Hamilton–Jacobi equations with fractional coinvariant derivatives
Let $\alpha \in (0, 1)$. Following, for example, [152], Definition 2.3, we consider the space $\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ of functions $x \colon [0, T] \to \mathbb{R}^n$ each of which is representable in the form
with its own measurable essentially bounded function $f \colon [0, T] \to \mathbb{R}^n$, where $(I^\alpha f)(\tau)$ is the Riemann–Liouville fractional integral of order $\alpha$ of the function $f(\,{\cdot}\,)$ at a point $\tau$ (see [152], Definition 2.1):
In view of the fact that $\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \subset \operatorname{C}([0, T]; \mathbb{R}^n)$, the space $\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ is endowed with the uniform norm $\|\cdot\|_\infty$ (see (3.4)). By [152], Theorem 2.4, each function $x(\,{\cdot}\,) \in \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ has a Caputo fractional derivative $(^{\mathrm C\!} D^\alpha x)(\tau) = f(\tau)$ for almost every $\tau \in [0, T]$ (see Definition (1.26)), where $f(\,{\cdot}\,)$ is from (5.1). In particular,
Definition 17. A functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ is coinvariantly differentiable (ci-differentiable) of order $\alpha$ at a point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ if there exist $\partial_t^\alpha \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}$ and $\nabla^\alpha \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}^n$ such that
for each function $y(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$, where $o(\,{\cdot}\,)$ can depend on $y(\,{\cdot}\,)$ and $o(\delta) / \delta \to 0$ as $\delta \to 0^+$. In this case $\partial_t^\alpha \varphi(t, x(\,{\cdot}\,))$ and $\nabla^\alpha \varphi(t, x(\,{\cdot}\,))$ are called the ci-derivative of order $\alpha$ with respect to $t$ and the ci-gradient of order $\alpha$ of $\varphi$ at $(t,x(\,{\cdot}\,))$, respectively.
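The Riemann–Liouville integral (5.2) underlying these constructions is straightforward to approximate numerically. The sketch below is our own illustration (the name `frac_integral` is hypothetical, and the scheme is a standard product-rectangle rule, not taken from the survey); it is checked against the classical identities $(I^\alpha 1)(\tau) = \tau^\alpha / \Gamma(\alpha + 1)$ and $(I^\alpha \xi)(\tau) = \tau^{1 + \alpha} / \Gamma(\alpha + 2)$.

```python
import math

def frac_integral(f, tau, alpha, n=2000):
    """Product-rectangle approximation of the Riemann-Liouville integral
    (I^alpha f)(tau): the weakly singular kernel (tau - xi)^(alpha - 1) / Gamma(alpha)
    is integrated exactly over each subinterval, while f is frozen at midpoints."""
    h = tau / n
    g = math.gamma(alpha + 1)
    total = 0.0
    for j in range(n):
        a, b = j * h, (j + 1) * h
        w = ((tau - a) ** alpha - (tau - b) ** alpha) / g
        total += f((a + b) / 2.0) * w
    return total

alpha = 0.5
print(frac_integral(lambda s: 1.0, 1.0, alpha))  # -> 1 / Gamma(1.5) ~ 1.1284
print(frac_integral(lambda s: s, 1.0, alpha))    # ~ 1 / Gamma(2.5) ~ 0.7523
```

For the constant integrand the weights telescope, so the rule is exact; for smooth integrands the exact treatment of the kernel keeps the error small despite the singularity at $\xi = \tau$.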
Note that if we formally take $\alpha=1$ and agree that $(^{\mathrm C\!} D^\alpha y)(\,{\cdot}\,)=\dot{y}(\,{\cdot}\,)$ in this case, then Definition 17 of ci-differentiability of order $\alpha$ reduces to Definition 7 of ordinary ci-differentiability.
In addition, like in the case of ordinary ci-differentiability (see § 3.1), it turns out that if a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ is ci-differentiable of order $\alpha$ at all points $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$, then it and its ci-derivatives of order $\alpha$
Furthermore, a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ is called ci-smooth of order $\alpha$ if it is continuous and ci-differentiable of order $\alpha$ at all points $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ and the maps (5.4) are continuous.
A key property of functionals ci-smooth of order $\alpha$ is the following analogue of Proposition 1 (see [48], Lemma 9.2).
Proposition 9. Assume that a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ is ci-smooth of order $\alpha$, and fix a point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$, a function $y(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$, and a number $\vartheta \in (t, T)$. Then the function $\mu(\tau) = \varphi(\tau, y(\,{\cdot}\,))$, $\tau \in [t, \vartheta]$, satisfies $\mu(\,{\cdot}\,) \in \operatorname{Lip}([t,\vartheta];\mathbb{R})$ and
where $(I^{1 - \alpha} x)(\,{\cdot}\,)$ is the Riemann–Liouville fractional integral of order $1 - \alpha$ of $x(\,{\cdot}\,)$ (see (5.2)). Then $\varphi$ is ci-differentiable of order $\alpha$ at a point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$ if and only if $\psi$ is ci-differentiable in the sense of Definition 7 at the point $(t,(I^{1-\alpha} x)(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([0,T];\mathbb{R}^n)$. In this case
In particular, starting from any ci-smooth functional $\psi$, we can define a functional $\varphi$ ci-smooth of order $\alpha$ by (5.5).
The second example actually says that the dependence of the solution of a differential equation with Caputo fractional derivative of the form (1.25) on the initial data $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ is ci-smooth of order $\alpha$. More precisely, assume that the function $f \colon [0, T] \times \mathbb{R}^n \to \mathbb{R}^n$ in (1.25) is continuously differentiable and there is $c > 0$ such that $\|f(\tau, y)\| \leqslant c (1 + \|y\|)$ for all $\tau \in [0, T]$ and $y \in \mathbb{R}^n$. Consider the functional
where the function $\sigma \colon \mathbb{R}^n\to\mathbb{R}$ is continuously differentiable and $y(\,{\cdot}\,)\in\operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$ is a function satisfying the differential equation (1.25) for almost all $\tau \in [t, T]$. Then $\varphi$ is ci-smooth of order $\alpha$, and the corresponding derivatives can be obtained as solutions of some integral equations (in this connection see [56], Theorem 3.1, and also [59], Corollary 9.1).
On the other hand, very simple functionals are not necessarily ci-differentiable of order $\alpha$. For example, fix a vector $\ell \in \mathbb{R}^n \setminus \{0\}$ and consider the functional
Note that this functional is ci-differentiable in the sense of Definition 7 at all points $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{C}([0,T];\mathbb{R}^n)$, and we have $\partial_t \varphi (t, x(\,{\cdot}\,)) = 0$ and $\nabla \varphi(t, x(\,{\cdot}\,)) = \ell$. We show that $\varphi$ is not ci-differentiable of order $\alpha$ at any point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$. Reasoning by contradiction, we assume that $\varphi$ is ci-differentiable of order $\alpha$ at some point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$. For each vector $f \in \mathbb{R}^n$ we choose a function $y^{(f)}(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$ so that $(^{\mathrm C\!} D^\alpha y^{(f)})(\tau) = f$ for almost every $\tau \in [t,T]$. Note that, according to (5.3),
At the same time, by the assumption of ci-differentiability of order $\alpha$ there is a vector $\nabla^\alpha \varphi(t, x(\,{\cdot}\,)) \in \mathbb{R}^n$ such that
$$
\frac{(\tau - t)^\alpha \langle \ell, f - g \rangle}{\Gamma(\alpha + 1)} = (\tau - t) \langle \nabla^\alpha \varphi(t, x(\,{\cdot}\,)), f - g \rangle + o(\tau - t), \qquad \tau \in (t, T].
$$
Dividing both sides of this equality by $(\tau - t)^\alpha$ and passing to the limit as $\tau \to t^+$, we infer the relation $\langle\ell,f-g\rangle=0$, which contradicts the choice of $f$ and $g$.
We note here that the question of ci-differentiability of different orders of the same functional was treated in [51], Proposition 5.
The subject of discussion in this section is the Cauchy problem for the Hamilton–Jacobi equation with ci-derivatives of order $\alpha$,
We seek a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ with the non-anticipation property. The following conditions are assumed to hold for the Hamiltonian $H$ and the boundary functional $\sigma$.
5.2. Differential games for fractional-order systems
We consider a differential game in which the motion of the dynamical system is described by a differential equation with Caputo fractional derivative of the form (see, for example, [152], [136], [146], [82], and [32])
which is specified by an initial position $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$, and let the quality index have the form
For any initial position $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ and controls $u(\,{\cdot}\,) \in \mathcal{U}(t)$ and $v(\,{\cdot}\,) \in \mathcal{V}(t)$ of the first and second players, respectively, condition $(5.\mathrm{DG}.1)$ (see conditions $(2.\mathrm{DG}.1)$–$(2.\mathrm{DG}.3)$) ensures the existence and uniqueness of a motion of system (5.8) satisfying the initial condition (5.9), namely, of a function $y(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$ that, together with $u(\,{\cdot}\,)$ and $v(\,{\cdot}\,)$, satisfies (5.8) for almost every $\tau \in [t, T]$. Note that this function $y(\,{\cdot}\,)$ coincides with the unique solution in the space $\operatorname{C}([0, T]; \mathbb{R}^n)$ of Volterra’s integral equation
The function $a(\cdot \mid t, x(\,{\cdot}\,))$ can alternatively be defined as a function $a(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,))$ such that $(^{\mathrm C\!} D^\alpha a)(\tau) = 0$ for almost every $\tau \in [t, T]$; in this sense it is an analogue of the function $x(\cdot \wedge t)$ in (3.9).
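For illustration, the integral form (5.11) also suggests a simple numerical scheme. The sketch below is our own (the name `solve_caputo` and the test equation are assumptions, not from the survey); it treats the uncontrolled case $t = 0$ with a constant initial function $a \equiv y_0$ and freezes $f$ at left endpoints while integrating the weakly singular kernel exactly. The test relies on the fact that for $(^{\mathrm C\!} D^{1/2} y)(\tau) = 1 - y(\tau)$, $y(0) = 0$, the solution is $y(\tau) = 1 - E_{1/2}(-\tau^{1/2})$, and $E_{1/2}(-z) = e^{z^2} \operatorname{erfc}(z)$, so $y(1) = 1 - e \operatorname{erfc}(1) \approx 0.5724$.

```python
import math

def solve_caputo(f, y0, alpha, T=1.0, n=2000):
    """Explicit product-rectangle scheme for the Volterra equation
    y(tau) = y0 + (1/Gamma(alpha)) * int_0^tau (tau - xi)^(alpha-1) f(xi, y(xi)) dxi,
    i.e. for the Caputo equation (C D^alpha y)(tau) = f(tau, y(tau)), y(0) = y0."""
    h = T / n
    tau = [k * h for k in range(n + 1)]
    y = [y0] * (n + 1)
    g = math.gamma(alpha + 1)
    for k in range(1, n + 1):
        acc = 0.0
        for j in range(k):
            # exact integral of the kernel over [tau_j, tau_{j+1}], f frozen at tau_j
            w = ((tau[k] - tau[j]) ** alpha - (tau[k] - tau[j + 1]) ** alpha) / g
            acc += f(tau[j], y[j]) * w
        y[k] = y0 + acc
    return tau, y

# (C D^{1/2} y)(tau) = 1 - y(tau), y(0) = 0, with y(1) = 1 - e * erfc(1) ~ 0.5724
tau, y = solve_caputo(lambda s, v: 1.0 - v, y0=0.0, alpha=0.5)
print(y[-1])  # ~ 0.572
```

Note that each step $y_k$ involves the whole computed history $y_0, \dots, y_{k-1}$, in line with the heredity of fractional-order systems discussed below.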
In contrast to ordinary differential equations, the equivalent integral equation (5.11) involves a Volterra integral operator with the kernel $1 / (\Gamma(\alpha) (\tau - \xi)^{1 - \alpha})$, which is, moreover, weakly singular. In particular, this has the effect that fractional-order systems possess the property of heredity (in this connection see, for example, [32], Remark 4.6). Therefore (see [48], § 4), for an adequate formalization of dynamic programming principles, control problems for systems of this type must take as initial data at an intermediate instant $t \in (0, T]$ not only the state $y(t)$ of the system at this instant but also the complete history of the motion $y(\,{\cdot}\,)$ up to this instant. Nevertheless, it is worth noting that in some works (see, for example, [76] and [149]) attempts were made to apply the dynamic programming principle without taking this feature into account.
We briefly dwell on the relationship between fractional-order systems and neutral-type systems. First, note that if we apply the map
then, according to (1.26) and (5.2), the differential equation with Caputo fractional derivative (5.8) formally becomes a neutral-type functional differential equation in the Hale form (4.1). However, since the quantity $g(\tau, y(\,{\cdot}\,))$ actually depends on $y(\tau)$, the map $g$ does not satisfy the key condition $(4.\mathrm{DG}.1)$ (see also the discussion after the formulation of this condition). In particular, this indicates that equation (5.8) is not solved with respect to the first derivative $\dot{y}(\tau)$.
Second, we can change the variables in (5.8) and define a new unknown function $w(\,{\cdot}\,) \in \operatorname{Lip}([0, T]; \mathbb{R}^n)$ by
from (1.26) we conclude that it is possible to switch from the differential game for the fractional-order system (5.8), (5.10) to an equivalent differential game for the system
is not continuous, condition $(3.\mathrm{DG}.1)$, for example, is in general not satisfied for the differential game (5.13), (5.14); therefore, the results in § 3 do not apply directly to this game (hence to the original game (5.8), (5.10) either). On the other hand, since ([152], Theorem 2.2)
the right-hand side of the differential equation (5.13) depends explicitly on the values of the derivative $\dot{w}(\xi)$ of the sought-for function for $\xi \in [0,\tau]$; thus, this equation must rather be classified as a neutral-type functional differential equation of general form. As mentioned in § 4 above, the theory of differential games and the theory of Hamilton–Jacobi equations are currently not developed for systems described by equations of this type. We nevertheless note that the above reasoning was used in [49] (see also [47] and [61]) to develop approximation schemes for the differential game (5.8), (5.10) by means of differential games for time-delay systems.
For each point $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ we consider the lower and upper values $\rho_-(t, x(\,{\cdot}\,))$ and $\rho_+(t, x(\,{\cdot}\,))$ of the differential game (5.8), (5.10) that are defined by analogy with (3.19) and (3.21). Recall that in the case when these quantities are equal, the game has value $\rho(t,x(\,{\cdot}\,))=\rho_-(t,x(\,{\cdot}\,))=\rho_+(t,x(\,{\cdot}\,))$.
The following theorem ([153], Theorem 1), which is largely a consequence of Proposition 9, holds.
Theorem 16. Assume that conditions $(5.\mathrm{DG}.1)$ and $(5.\mathrm{DG}.2)$ hold for the differential game (5.8), (5.10). Assume also that there exists a functional $\varphi \colon [0, T] \times\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ that is ci-smooth of order $\alpha$ and satisfies the Hamilton–Jacobi equation (5.6) with Hamiltonian (1.18) and the boundary condition (5.7). Then this game has value
In this case non-anticipating control strategies of the first and second players constructed by extremal aiming in the direction of the ci-gradient of order $\alpha$ of $\varphi$, namely,
Note that on the basis of Theorem 16 we can, for example, construct solutions of linear-quadratic optimal control problems and differential games for fractional-order systems [60], [63], [64].
5.3. Minimax solutions
In this subsection we present results obtained in developing the minimax approach to the concept of a generalized solution of the Cauchy problem (5.6), (5.7).
Definition 18. A minimax solution of the Cauchy problem (5.6), (5.7) is a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ that is non-anticipating and continuous, satisfies the boundary condition (5.7), and has the following properties.
In general, Theorem 17 can be proved in accordance with the scheme in Theorem 8.1 in [162], which is based on the properties of sets of solutions of differential inclusions with Caputo fractional derivatives that were established in [50]. The most significant difficulty in this case is constructing an appropriate Lyapunov–Krasovskii functional that makes it possible to prove the comparison principle for minimax solutions of the Cauchy problem (5.6), (5.7). We present a construction of such a functional that was proposed in [55].
We fix a bounded set $W \subset \mathbb{R}^n$ and choose a number $\lambda$ for it in accordance with condition $(2.\mathrm{CP}.3)$ (see $(5.\mathrm{CP}.1)$). We consider the functional
and $\beta_i = (2^{i-1} - 1) \alpha$. We set $\varepsilon_0 = 2 e^{- (\lambda + \lambda_\ast / 2) T}$ and define a Lyapunov–Krasovskii functional for $\varepsilon \in (0,\varepsilon_0]$ in the form
(in this connection see (2.12)), and also define the following auxiliary mappings, which play in a certain sense the role of the ci-derivative of order $\alpha$ with respect to $t$ and the ci-gradient of order $\alpha$ of this functional, respectively:
where $(\tau,w(\,{\cdot}\,)) \in [0,T] \times \operatorname{C}([0,T];\mathbb{R}^n)$.
The mappings $\nu_\varepsilon$, $p_\varepsilon$, and $s_\varepsilon$ have ([55], Lemma 7.7) the following properties, which are similar to conditions $(3.\mathrm{a})$–$(3.\mathrm{d})$ in Lemma 1.
holds, where $w(\,{\cdot}\,) = y_2(\,{\cdot}\,) - y_1(\,{\cdot}\,)$.
These properties turn out to suffice for the substantiation of the comparison principle for minimax solutions of the Cauchy problem (5.6), (5.7) by following the scheme of the proof of Lemma 1 above.
We emphasize that the last assertion in $(5.\mathrm{a})$ is verified on the basis of an estimate ([46], Corollary 4.2) for the fractional derivative of order $\alpha$ of the squared norm of a function $w(\,{\cdot}\,) \in \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$. More precisely, for $r(\tau) = \|w(\tau) - w(0)\|^2$, $\tau \in [0, T]$, we have $r(\,{\cdot}\,) \in \operatorname{AC}^\alpha([0, T]; \mathbb{R})$ and
for almost every $\tau \in [0, T]$. Estimates of this form were originally derived in [3] and [1] for absolutely continuous functions $w(\,{\cdot}\,)$.
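In a form equivalent to the one established in [1] and [3], the estimate reads $(^{\mathrm C\!} D^\alpha r)(\tau) \leqslant 2 \langle w(\tau) - w(0), (^{\mathrm C\!} D^\alpha w)(\tau) \rangle$. As a hedged illustration (the scalar test function and the closed-form Caputo derivatives below are ours, not the paper's), it can be spot-checked on $w(t) = t$:

```python
import math

# Spot-check of the squared-norm estimate
#   (^C D^alpha r)(tau) <= 2 <w(tau) - w(0), (^C D^alpha w)(tau)>,
# with r(tau) = ||w(tau) - w(0)||^2, on the scalar function w(t) = t:
#   (^C D^alpha t^2)(tau) = 2 * tau^(2 - alpha) / Gamma(3 - alpha),
#   (^C D^alpha t)(tau)   =     tau^(1 - alpha) / Gamma(2 - alpha).
# Here the inequality reduces to Gamma(3 - alpha) >= Gamma(2 - alpha),
# which holds since Gamma(3 - alpha) = (2 - alpha) * Gamma(2 - alpha).
ok = all(
    2.0 * tau ** (2.0 - alpha) / math.gamma(3.0 - alpha)
    <= 2.0 * tau * tau ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    for alpha in (0.25, 0.5, 0.75)
    for tau in (0.5, 1.0, 2.0)
)
```

For $\alpha = 1$ the estimate turns into the familiar identity $\dot{r}(\tau) = 2 \langle w(\tau) - w(0), \dot{w}(\tau) \rangle$, so for fractional orders it is an inequality rather than an equality.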
As a supplement to Theorem 17, we can show (in this connection see [51], § 6.2) that the minimax solution of the Cauchy problem (5.6), (5.7) depends continuously on variations in the differentiation order $\alpha$, the Hamiltonian $H$, and the boundary functional $\sigma$. In addition, as in § 2.3, we can introduce the concept of a characteristic complex of the Hamilton–Jacobi equation (5.6) and prove ([52], Proposition 2), using the Lyapunov–Krasovskii functional $\nu_\varepsilon$ defined by (5.17), that in conditions $(5.\mathrm{MS}.1_+)$ and $(5.\mathrm{MS}.1_-)$ in Definition 18 arbitrary upper and lower characteristic complexes, respectively, can be taken instead of the standard characteristic complex $(\mathbb{R}^n, E_0)$.
In the infinitesimal form conditions $(5.\mathrm{MS}.1_+)$ and $(5.\mathrm{MS}.1_-)$ can be expressed as follows ([52], Proposition 4 and Corollary 2; also see [51], Theorem 3). Given a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$, we define its lower and upper right derivatives of order $\alpha$ at a point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$ in a multivalued direction $F \in \mathcal{K}(\mathbb{R}^n)$:
Theorem 18. Assume that condition $(5.\mathrm{CP}.1)$ holds. Then for a non-anticipating lower semicontinuous functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ condition $(5.\mathrm{MS}.1_+)$ is equivalent to the differential inequality
In a similar way, for a non-anticipating upper semicontinuous functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ condition $(5.\mathrm{MS}.1_-)$ is equivalent to the differential inequality
at $(t, x(\,{\cdot}\,))$ in the multivalued direction equal to the ball $B ( c(1 + \|x(t)\|))$, where $c$ is from $(2.\mathrm{CP}.2)$ (see condition $(5.\mathrm{CP}.1)$).
Assume that a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ satisfies the following Lipschitz condition in the variable $x(\,{\cdot}\,)$.
for all $t \in [0, T]$ and $x_1(\,{\cdot}\,),x_2(\,{\cdot}\,) \in W$, where the function $a(\cdot \mid t, x_1(\,{\cdot}\,) - x_2(\,{\cdot}\,))$ is defined by (5.12).
Then $\varphi$ is automatically non-anticipating and ([62], Lemma 2) we have
for all $(t,x(\,{\cdot}\,)) \in [0,T) \times{} \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$ and $F \in \mathcal{K}(\mathbb{R}^n)$. Here $\partial_-^\alpha \{\varphi(t, x(\,{\cdot}\,)) \mid f \}$ and $\partial_+^\alpha \{\varphi(t, x(\,{\cdot}\,)) \mid f \}$ are the lower and upper right derivatives of order $\alpha$ of $\varphi$ at the point $(t, x(\,{\cdot}\,))$ in the single-valued direction $f$:
where the function $y^{(f)}(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t,x(\,{\cdot}\,))$ is defined by the equality $(^{\mathrm C\!} D^\alpha y^{(f)})(\tau) = f$ for almost all $\tau \in [t, T]$. Thus, under condition $(5.\mathrm{L})$ the differential inequalities (5.19) and (5.20) take the simpler form
In particular, this and Theorem 18 imply the consistency of the minimax solution of the Cauchy problem (5.6), (5.7) with the concept of solution in the classical sense (in this connection also see [51], Theorem 4, and [55], Theorem 6.1 and Proposition 6.2).
(i) if a continuous functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ is ci-differentiable of order $\alpha$ at each point $(t, x(\,{\cdot}\,)) \in [0, T) \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ and satisfies the Hamilton–Jacobi equation (5.6) and the boundary condition (5.7), then it is a minimax solution of the Cauchy problem (5.6), (5.7);
(ii) if a minimax solution of the Cauchy problem (5.6), (5.7) is ci-differentiable of order $\alpha$ at some point $(t,x(\,{\cdot}\,)) \in [0,T) \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$, then it satisfies equality (5.6) at this point.
We turn back to the differential game (5.8), (5.10). We consider a minimax solution $\varphi$ of the corresponding Cauchy problem (5.6), (5.7) with Hamiltonian (1.18). For an initial position $(t, x(\,{\cdot}\,)) \in [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$ we define the set
$$
\begin{equation}
\begin{aligned} \, \nonumber Y^\alpha(t, x(\,{\cdot}\,))& = \bigl\{ y(\,{\cdot}\,) \in \operatorname{AC}^\alpha(t, x(\,{\cdot}\,)) \colon \\ & \qquad \|(^{\mathrm C\!} D^\alpha y)(\tau)\| \leqslant c (1 + \|y(\tau)\|) \text{ for a. a. } \tau \in [t, T] \bigr\}, \end{aligned}
\end{equation}
\tag{5.22}
$$
where $c$ is borrowed from condition $(2.\mathrm{DG}.2)$ (see $(5.\mathrm{DG}.1)$). As the set $Y^\alpha(t, x(\,{\cdot}\,))$ is compact in the space $\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$, we can choose a bounded set $W \subset \mathbb{R}^n$ such that for each $y(\,{\cdot}\,) \in Y^\alpha(t, x(\,{\cdot}\,))$ we have $y(\tau) \in W$, $\tau \in [t, T]$. Using (5.17) and (5.18) we define the corresponding mappings $\nu_\varepsilon$, $p_\varepsilon$, and $s_\varepsilon$ for $\varepsilon \in (0,\varepsilon_0]$. We consider non-anticipating control strategies of the following form of the first and second players:
The following assertion is true ([53], Theorem 2).
Theorem 20. Under conditions $(5.\mathrm{DG}.1)$ and $(5.\mathrm{DG}.2)$ the differential game (5.8), (5.10) has a value. The value functional $\rho$ of this game coincides with the minimax solution $\varphi$ of the Cauchy problem (5.6), (5.7) with Hamiltonian (1.18), that is, $\rho(t, x(\,{\cdot}\,)) = \varphi(t, x(\,{\cdot}\,))$ for $(t,x(\,{\cdot}\,)) \in [0,T] \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$. The strategies $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$ are optimal:
In (5.24) the guaranteed results $\rho^{(u)} (t, x(\,{\cdot}\,), U^\circ_\varepsilon)$ and $\rho^{(v)} (t, x(\,{\cdot}\,), V^\circ_\varepsilon)$ of the control strategies $U^\circ_\varepsilon$ and $V^\circ_\varepsilon$ of the first and second players, respectively, for the differential game (5.8), (5.10) are defined similarly to (3.61) (also see (2.28) and (2.29)).
Assume that the following additional condition, which is stronger than $(5.\mathrm{DG}.2)$, holds.
for all $y_1(\,{\cdot}\,),y_2(\,{\cdot}\,) \in W$.
In this case the value functional $\rho$ satisfies the Lipschitz condition $(5.\mathrm{L})$ (in this connection see [62], Lemma 1). Thus, we arrive at the following criterion.
Theorem 21. Assume that conditions $(5.\mathrm{DG}.1)$ and $(5.\mathrm{DG}.3)$ hold for the differential game (5.8), (5.10). Then a functional $\varphi \colon [0,T] \times \operatorname{AC}^\alpha([0,T];\mathbb{R}^n) \to \mathbb{R}$ is a value functional for this game if and only if $\varphi$ is continuous and satisfies the Lipschitz condition $(5.\mathrm{L})$, the boundary condition (5.7), and the differential inequalities (5.21) with Hamiltonian $H$ from (1.18).
Note that it was proved in [59] for optimal control problems for fractional-order systems that under some additional assumptions the value functional is differentiable of order $\alpha$ in finite-dimensional directions (for a similar result for optimal control problems for ordinary differential systems, see, for example, [164], [165], and [168]). On this basis it is possible, first, to describe ([59], Theorem 10.1) a non-smooth rule for the construction of optimal control strategies that is simpler than (5.23) and, second, to establish ([58], Theorem 3) the relationship between the Hamilton–Jacobi equation (5.6) and the relevant Pontryagin maximum principle (see, for example, [13] and [108]), which is similar to the one known in the classical case (see, for example, [40], Theorem 8.1, and also [168]). In particular, the last result is another confirmation that ci-derivatives of order $\alpha$ are an adequate tool for the study of control problems for fractional-order systems.
5.4. Viscosity solutions
In [54], following the constructions in [156], [120], [119], and [121], elements of the viscosity approach to a generalized solution of the Cauchy problem (5.6), (5.7) were developed.
For each $k \in \mathbb{N}$ consider the compact set $W_k$ consisting of functions $x(\,{\cdot}\,) \in \operatorname{AC}^\alpha([0,T];\mathbb{R}^n)$ such that
$$
\begin{equation*}
\|x(0)\| \leqslant k\quad\text{and}\quad \|(^{\mathrm C\!} D^\alpha x)(\tau)\|\leqslant k c (1 + \|x(\tau)\|)\quad \text{for a. e. } \tau \in [0, T],
\end{equation*}
\notag
$$
where $c$ is from condition $(2.\mathrm{CP}.2)$ (see $(5.\mathrm{CP}.1)$). Note that $W_k \subset W_{k + 1}$ for $k \in \mathbb{N}$, that the union of the sets $W_k$ over $k \in \mathbb{N}$ coincides with $\operatorname{AC}^\alpha([0, T]; \mathbb{R}^n)$, and that for any $(t, x(\,{\cdot}\,)) \in [0, T] \times W_k$ and $y(\,{\cdot}\,) \in Y^\alpha(t, x(\,{\cdot}\,))$ (see (5.22)) we have $y(\,{\cdot}\,) \in W_k$.
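The two inequalities defining $W_k$ can be tested numerically for a sampled function. A hedged sketch (the scalar setting and the function names are ours; the L1 discretization of the Caputo derivative is a standard scheme, not a construction from the paper):

```python
import math

def caputo_l1(y, alpha, h):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the grid points t_1, ..., t_N, for samples y = [y(0), ..., y(N*h)]."""
    g = math.gamma(2.0 - alpha)
    out = []
    for n in range(1, len(y)):
        s = sum(((k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha))
                * (y[n - k] - y[n - k - 1]) for k in range(n))
        out.append(s / (g * h ** alpha))
    return out

def in_W_k(y, alpha, h, k, c, tol=1e-8):
    """Numerically check the two defining inequalities of W_k for a sampled
    scalar function y (a discretized stand-in for x(.))."""
    if abs(y[0]) > k + tol:
        return False
    d = caputo_l1(y, alpha, h)
    return all(abs(d[n - 1]) <= k * c * (1.0 + abs(y[n])) + tol
               for n in range(1, len(y)))

alpha, h, c = 0.5, 0.01, 1.0
x = [n * h for n in range(101)]  # x(t) = t on [0, 1]
# (^C D^alpha x)(tau) = tau^(1-alpha)/Gamma(2-alpha) <= 1.13 < 2*(1+|x(tau)|)
print(in_W_k(x, alpha, h, k=2, c=c))
```

The L1 scheme is exact for linear functions (the weights telescope), so the check above involves no discretization error beyond rounding.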
Definition 19. A viscosity solution of the Cauchy problem (5.6), (5.7) is a functional $\varphi \colon [0, T] \times \operatorname{AC}^\alpha([0, T]; \mathbb{R}^n) \to \mathbb{R}$ that is non-anticipating and continuous, satisfies the boundary condition (5.7), and has the following properties.
The greatest difficulty in the verification of Theorem 22 is to prove the comparison principle for viscosity solutions satisfying $(5.\mathrm{L})$. Generally speaking, it is proved following the scheme of the above proof of Theorem 8. The main distinction is that the following functional, which is ci-smooth of order $\alpha$ with respect to both $(t, x(\,{\cdot}\,))$ and $(\tau, y(\,{\cdot}\,))$, is used to construct appropriate penalty functionals (an analogue of the function $\|x - y\|^2$ in (2.36) in the classical case and also of the functionals (3.65) and (3.66) in the case of time-delay systems):
where $\varepsilon > 0$ is a small parameter, $q = 2 / (2 - \alpha) \in (1, 2)$, the positive number $\beta $ satisfies $\beta < 1 - \alpha$ and $\beta < \alpha / 2$, the functions $a(\cdot \mid t, x(\,{\cdot}\,))$ and $a(\cdot \mid \tau, y(\,{\cdot}\,))$ are defined by (5.12), and $C = 1 + T^{1 - (1 - \alpha - \beta) q} / (1 - (1 - \alpha - \beta) q)$. In this case the additional Lipschitz condition $(5.\mathrm{L})$ is significant and makes it possible, by analogy with Lemma 7.6 in [144], to obtain certain boundedness properties for ci-gradients of order $\alpha$ of test functionals $\psi$ to be constructed.
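A small reasoning step, under our assumption that $C$ arises from bounding the integral $\int_0^T s^{-(1-\alpha-\beta)q}\,ds$: the exponent condition needed for $C$ to be finite holds automatically. Indeed, since $q = 2/(2-\alpha)$,
$$
(1 - \alpha - \beta)\,q < 1 \;\Longleftrightarrow\; 2(1 - \alpha - \beta) < 2 - \alpha \;\Longleftrightarrow\; \alpha + 2\beta > 0,
$$
which is true for all $\alpha \in (0, 1)$ and $\beta > 0$; accordingly, $\int_0^T s^{-(1-\alpha-\beta)q}\,ds = T^{1-(1-\alpha-\beta)q}/(1-(1-\alpha-\beta)q)$ is finite, and this is exactly the quantity entering $C$.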
Note that, under the additional Lipschitz condition $(5.\mathrm{DG}.3)$, the functional $\nu_\varepsilon$ in (5.25) can also be used to construct optimal non-anticipating control strategies of the players in the differential game (5.8), (5.10) (in this connection see [57]).
Bibliography
1.
N. Aguila-Camacho, M. A. Duarte-Mermoud, and J. A. Gallegos, “Lyapunov functions for fractional order systems”, Commun. Nonlinear Sci. Numer. Simul., 19:9 (2014), 2951–2957
2.
R. R. Akhmerov, M. I. Kamenskii, A. S. Potapov, A. E. Rodkina, and B. N. Sadovskii, “Theory of equations of neutral type”, J. Soviet Math., 24:6 (1984), 674–719
3.
A. A. Alikhanov, “A priori estimates for solutions of boundary value problems for fractional-order equations”, Differ. Equ., 46:5 (2010), 660–666
4.
V. I. Arnold, Mathematical methods of classical mechanics, 3rd revised and augmented ed., Nauka, Moscow, 1989, 472 pp.; English transl. of 1st ed., Grad. Texts in Math., 60, Springer-Verlag, New York–Heidelberg, 1978, x+462 pp.
5.
J.-P. Aubin and G. Haddad, “History path dependent optimal control and portfolio valuation and management”, Positivity, 6:3 (2002), 331–358
6.
S. Banach, Théorie des opérations linéaires, Monogr. Mat., 1, Inst. Mat. PAN, Warszawa, 1932, vii+254 pp.
7.
V. Barbu and G. Da Prato, Hamilton–Jacobi equations in Hilbert spaces, Res. Notes in Math., 86, Pitman, Boston, MA, 1983, v+172 pp.
8.
M. Bardi and I. Capuzzo-Dolcetta, Optimal control and viscosity solutions of Hamilton–Jacobi–Bellman equations, Systems Control Found. Appl., Birkhäuser Boston, Inc., Boston, MA, 1997, xviii+570 pp.
9.
E. N. Barron, “Application of viscosity solutions of infinite-dimensional Hamilton–Jacobi–Bellman equations to some problems in distributed optimal control”, J. Optim. Theory Appl., 64:2 (1990), 245–268
10.
E. Bayraktar and C. Keller, “Path-dependent Hamilton–Jacobi equations in infinite dimensions”, J. Funct. Anal., 275:8 (2018), 2096–2161
11.
E. Bayraktar and C. Keller, “Path-dependent Hamilton–Jacobi equations with super-quadratic growth in the gradient and the vanishing viscosity method”, SIAM J. Control Optim., 60:3 (2022), 1690–1711
12.
A. Bensoussan, G. Da Prato, M. C. Delfour, and S. K. Mitter, Representation and control of infinite dimensional systems, Systems Control Found. Appl., 2nd ed., Birkhäuser Boston, Inc., Boston, MA, 2007, xxviii+575 pp.
13.
M. Bergounioux and L. Bourdin, “Pontryagin maximum principle for general Caputo fractional optimal control problems with Bolza cost and terminal constraints”, ESAIM Control Optim. Calc. Var., 26 (2020), 35, 38 pp.
14.
J. M. Borwein and D. Preiss, “A smooth variational principle with applications to subdifferentiability and to differentiability of convex functions”, Trans. Amer. Math. Soc., 303:2 (1987), 517–527
15.
J. M. Borwein and Q. J. Zhu, Techniques of variational analysis, CMS Books Math./Ouvrages Math. SMC, 20, Springer-Verlag, New York, 2005, vi+362 pp.
16.
A. M. Bruckner, Differentiation of real functions, Lecture Notes in Math., 659, Springer, Berlin, 1978, x+247 pp.
17.
P. Cannarsa and G. Da Prato, “Some results on non-linear optimal control problems and Hamilton–Jacobi equations in infinite dimensions”, J. Funct. Anal., 90:1 (1990), 27–47
18.
G. Carlier and R. Tahraoui, “Hamilton–Jacobi–Bellman equations for the optimal control of a state equation with memory”, ESAIM Control Optim. Calc. Var., 16:3 (2010), 744–763
19.
F. Clarke and Yu. S. Ledyaev, “New finite-increment formulas”, Dokl. Math., 48:1 (1994), 75–79
20.
F. H. Clarke and Yu. S. Ledyaev, “Mean value inequalities in Hilbert space”, Trans. Amer. Math. Soc., 344:1 (1994), 307–324
21.
F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth analysis and control theory, Grad. Texts in Math., 178, Springer-Verlag, New York, 1998, xiv+276 pp.
22.
R. Cont and D.-A. Fournié, “Functional Itô calculus and stochastic integral representation of martingales”, Ann. Probab., 41:1 (2013), 109–133
23.
A. Cosso, F. Gozzi, M. Rosestolato, and F. Russo, Path-dependent Hamilton–Jacobi–Bellman equation: uniqueness of Crandall–Lions viscosity solutions, 2023 (v1 – 2021), 46 pp., arXiv: 2107.05959
24.
A. Cosso and F. Russo, “Crandall–Lions viscosity solutions for path-dependent PDEs: the case of heat equation”, Bernoulli, 28:1 (2022), 481–503
25.
M. G. Crandall, L. C. Evans, and P.-L. Lions, “Some properties of viscosity solutions of Hamilton–Jacobi equations”, Trans. Amer. Math. Soc., 282:2 (1984), 487–502
26.
M. G. Crandall, H. Ishii, and P.-L. Lions, “Uniqueness of viscosity solutions of Hamilton–Jacobi equations revisited”, J. Math. Soc. Japan, 39:4 (1987), 581–596
27.
M. G. Crandall and P.-L. Lions, “Viscosity solutions of Hamilton–Jacobi equations”, Trans. Amer. Math. Soc., 277:1 (1983), 1–42
28.
M. G. Crandall and P.-L. Lions, “Hamilton–Jacobi equations in infinite dimensions. I. Uniqueness of viscosity solutions”, J. Funct. Anal., 62:3 (1985), 379–396
29.
M. G. Crandall and P.-L. Lions, “Hamilton–Jacobi equations in infinite dimensions. II. Existence of viscosity solutions”, J. Funct. Anal., 65:3 (1986), 368–405
30.
M. G. Crandall and P.-L. Lions, “Remarks on the existence and uniqueness of unbounded viscosity solutions of Hamilton–Jacobi equations”, Illinois J. Math., 31:4 (1987), 665–688
31.
R. Deville, G. Godefroy, and V. Zizler, “A smooth variational principle with applications to Hamilton–Jacobi equations in infinite dimensions”, J. Funct. Anal., 111:1 (1993), 197–212
32.
K. Diethelm, The analysis of fractional differential equations. An application-oriented exposition using differential operators of Caputo type, Lecture Notes in Math., 2004, Springer-Verlag, Berlin, 2010, viii+247 pp.
33.
B. Dupire, Functional Itô calculus, Bloomberg Portfolio Research Paper No. 2009-04-FRONTIERS, 2009, 25 pp. https://ssrn.com/abstract=1435551
34.
I. Ekren, N. Touzi, and Jianfeng Zhang, “Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part I”, Ann. Probab., 44:2 (2016), 1212–1253
35.
I. Ekren, N. Touzi, and Jianfeng Zhang, “Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part II”, Ann. Probab., 44:4 (2016), 2507–2553
36.
L. C. Evans and P. E. Souganidis, “Differential games and representation formulas for solutions of Hamilton–Jacobi–Isaacs equations”, Indiana Univ. Math. J., 33:5 (1984), 773–797
37.
G. Fabbri, F. Gozzi, and A. Swiech, Stochastic optimal control in infinite dimension. Dynamic programming and HJB equations, Probab. Theory Stoch. Model., 82, Springer, Cham, 2017, xxiii+916 pp.
38.
S. Federico, B. Goldys, and F. Gozzi, “HJB equations for the optimal control of differential equations with delays and state constraints. I. Regularity of viscosity solutions”, SIAM J. Control Optim., 48:8 (2010), 4910–4937
39.
S. Federico, B. Goldys, and F. Gozzi, “HJB equations for the optimal control of differential equations with delays and state constraints. II. Verification and optimal feedbacks”, SIAM J. Control Optim., 49:6 (2011), 2378–2414
40.
W. H. Fleming and R. W. Rishel, Deterministic and stochastic optimal control, Appl. Math., 1, Springer-Verlag, Berlin–New York, 1975, vii+222 pp.
41.
W. H. Fleming and H. M. Soner, Controlled Markov processes and viscosity solutions, Stoch. Model. Appl. Probab., 25, 2nd ed., Springer, New York, 2006, xviii+429 pp.
42.
H. Frankowska, “Lower semicontinuous solutions of Hamilton–Jacobi–Bellman equations”, SIAM J. Control Optim., 31:1 (1993), 257–272
43.
F. Gantmakher, Lectures in analytical mechanics, Mir Publishers, Moscow, 1970, 264 pp.
44.
G. G. Garnysheva and A. I. Subbotin, “Strategies of minimax aiming in the direction of the quasigradient”, J. Appl. Math. Mech., 58:4 (1994), 575–581
45.
I. M. Gelfand and S. V. Fomin, Calculus of variations, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1963, vii+232 pp.
46.
M. I. Gomoyunov, “Fractional derivatives of convex Lyapunov functions and control problems in fractional order systems”, Fract. Calc. Appl. Anal., 21:5 (2018), 1238–1261
47.
M. I. Gomoyunov, “Approximation of fractional order conflict-controlled systems”, Progr. Fract. Differ. Appl., 5:2 (2019), 143–155
48.
M. I. Gomoyunov, “Dynamic programming principle and Hamilton–Jacobi–Bellman equations for fractional-order systems”, SIAM J. Control Optim., 58:6 (2020), 3185–3211
49.
M. Gomoyunov, “Solution to a zero-sum differential game with fractional dynamics via approximations”, Dyn. Games Appl., 10:2 (2020), 417–443
50.
M. I. Gomoyunov, “To the theory of differential inclusions with Caputo fractional derivatives”, Differ. Equ., 56:11 (2020), 1387–1401
51.
M. I. Gomoyunov, “Minimax solutions of homogeneous Hamilton–Jacobi equations with fractional-order coinvariant derivatives”, Proc. Steklov Inst. Math. (Suppl.), 315, suppl. 1 (2021), S97–S116
52.
M. I. Gomoyunov, “Criteria of minimax solutions for Hamilton–Jacobi equations with coinvariant fractional-order derivatives”, Tr. Inst. Mat. Mekh. UrO RAN, 27, no. 3, 2021, 25–42 (Russian)
53.
M. I. Gomoyunov, “Differential games for fractional-order systems: Hamilton–Jacobi–Bellman–Isaacs equation and optimal feedback strategies”, Mathematics, 9:14 (2021), 1667, 16 pp.
54.
M. I. Gomoyunov, On viscosity solutions of path-dependent Hamilton–Jacobi–Bellman–Isaacs equations for fractional-order systems, 2021, 24 pp., arXiv: 2109.02451
55.
M. I. Gomoyunov, “Minimax solutions of Hamilton–Jacobi equations with fractional coinvariant derivatives”, ESAIM Control Optim. Calc. Var., 28 (2022), 23, 36 pp.
56.
M. I. Gomoyunov, “On differentiability of solutions of fractional differential equations with respect to initial data”, Fract. Calc. Appl. Anal., 25:4 (2022), 1484–1506
57.
M. I. Gomoyunov, “On optimal positional strategies in fractional optimal control problems”, Mathematical optimization theory and operations research, Lecture Notes in Comput. Sci., 13930, Springer, Cham, 2023, 255–265
58.
M. I. Gomoyunov, “On the relationship between the Pontryagin maximum principle and the Hamilton–Jacobi–Bellman equation in optimal control problems for fractional-order systems”, Differ. Equ., 59:11 (2023), 1520–1526
59.
M. I. Gomoyunov, “Sensitivity analysis of value functional of fractional optimal control problem with application to feedback construction of near optimal controls”, Appl. Math. Optim., 88:2 (2023), 41, 49 pp.
60.
M. I. Gomoyunov, “Value functional and optimal feedback control in linear-quadratic optimal control problem for fractional-order system”, Math. Control Relat. Fields, 14:1 (2024), 215–254
61.
M. I. Gomoyunov and N. Yu. Lukoyanov, “Construction of solutions to control problems for fractional-order linear systems based on approximation models”, Proc. Steklov Inst. Math. (Suppl.), 313, suppl. 1 (2021), S73–S82
62.
M. I. Gomoyunov and N. Yu. Lukoyanov, “Differential games in fractional-order systems: inequalities for directional derivatives of the value functional”, Proc. Steklov Inst. Math., 315 (2021), 65–84
63.
M. I. Gomoyunov and N. Yu. Lukoyanov, “On linear-quadratic differential games for fractional-order systems”, Dokl. Math., 108:1 (2023), S122–S127
64.
M. I. Gomoyunov and N. Yu. Lukoyanov, “Optimal feedback in a linear-quadratic optimal control problem for a fractional-order system”, Differ. Equ., 59:8 (2023), 1117–1129
65.
M. I. Gomoyunov, N. Yu. Lukoyanov, and A. R. Plaksin, “Existence of a value and a saddle point in positional differential games for neutral-type systems”, Proc. Steklov Inst. Math. (Suppl.), 299, suppl. 1 (2017), 37–48
66.
M. I. Gomoyunov, N. Yu. Lukoyanov, and A. R. Plaksin, “Approximation of minimax solutions to Hamilton–Jacobi functional equations for time-delay systems”, Proc. Steklov Inst. Math. (Suppl.), 304, suppl. 1 (2019), S68–S75
67.
M. I. Gomoyunov, N. Yu. Lukoyanov, and A. R. Plaksin, “Path-dependent Hamilton–Jacobi equations: the minimax solutions revised”, Appl. Math. Optim., 84:1 (2021), S1087–S1117
68.
M. I. Gomoyunov and A. R. Plaksin, “On basic equation of differential games for neutral-type systems”, Mech. Solids, 54:2 (2019), 131–143
69.
M. I. Gomoyunov and A. R. Plaksin, “Equivalence of minimax and viscosity solutions of path-dependent Hamilton–Jacobi equations”, J. Funct. Anal., 285:11 (2023), 110155, 41 pp.
70.
J. K. Hale and M. A. Cruz, “Existence, uniqueness and continuous dependence for hereditary systems”, Ann. Mat. Pura Appl. (4), 85:1 (1970), 63–81
71.
J. K. Hale and S. M. Verduyn Lunel, Introduction to functional differential equations, Appl. Math. Sci., 99, Springer-Verlag, New York, 1993, x+447 pp.
72.
R. Isaacs, Differential games. A mathematical theory with applications to warfare and pursuit, control and optimization, John Wiley & Sons, Inc., New York–London–Sydney, 1965, xvii+384 pp.
73.
H. Ishii, “Uniqueness of unbounded viscosity solutions of Hamilton–Jacobi equations”, Indiana Univ. Math. J., 33:5 (1984), 721–748
74.
H. Ishii, “Viscosity solutions for a class of Hamilton–Jacobi equations in Hilbert spaces”, J. Funct. Anal., 105:2 (1992), 301–341
75.
Shaolin Ji and Shuzhen Yang, “A note on functional derivatives on continuous paths”, Statist. Probab. Lett., 106 (2015), 176–183
76.
G. Jumarie, “Fractional Hamilton–Jacobi equation for the optimal control of nonrandom fractional dynamics with fractional cost function”, J. Appl. Math. Comput., 23:1-2 (2007), 215–228
77.
H. Kaise, “Path-dependent differential games of inf-sup type and Isaacs partial differential equations”, 54th IEEE conference on decision and control (CDC) (Osaka 2015), IEEE, 2015, 1972–1977
78.
H. Kaise, “Convergence of discrete-time deterministic games to path-dependent Isaacs partial differential equations under quadratic growth conditions”, Appl. Math. Optim., 86:1 (2022), 13, 49 pp.
79.
H. Kaise, T. Kato, and Y. Takahashi, “Hamilton–Jacobi partial differential equations with path-dependent terminal costs under superlinear Lagrangians”, 23rd international symposium on mathematical theory of networks and systems (MTNS), Hong Kong Univ. of Science and Technology, Hong Kong, 2018, 692–699
80.
J. L. Kelley, General topology, Grad. Texts in Math., 27, Reprint of the 1955 ed., Springer-Verlag, New York–Berlin, 1975, xiv+298 pp.
81.
M. M. Khrustalev, “Necessary and sufficient optimality conditions in the form of Bellman's equation”, Soviet Math. Dokl., 19:5 (1978), 1262–1266
82.
A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and applications of fractional differential equations, North-Holland Math. Stud., 204, Elsevier Science B.V., Amsterdam, 2006, xvi+523 pp.
83.
A. V. Kim, “Lyapunov's second method for systems with aftereffect”, Differ. Equ., 21 (1985), 244–249
84.
A. V. Kim, Functional differential equations. Application of $i$-smooth calculus, Math. Appl., 479, Kluwer Acad. Publ., Dordrecht, 1999, xvi+165 pp.
85.
A. V. Kim and V. G. Pimenov, $i$-smooth analysis and numerical methods of solutions of functional differential equations, Regulyarnaya i Khaoticheskaya Dinamika, Moscow–Izhevsk, 2004, 256 pp. (Russian)
86.
R. Kipka and Yu. Ledyaev, “A generalized multidirectional mean value inequality and dynamic optimization”, Optimization, 68:7 (2019), 1365–1389
87.
M. Kocan, P. Soravia, and A. Swiech, “On differential games for infinite-dimensional systems with nonlinear, unbounded operators”, J. Math. Anal. Appl., 211:2 (1997), 395–423
88.
V. Kolmanovskii and A. Myshkis, Applied theory of functional differential equations, Math. Appl. (Soviet Ser.), 85, Kluwer Acad. Publ., Dordrecht, 1992, xvi+234 pp.
89.
V. N. Kolokoltsov and V. P. Maslov, “The Cauchy problem for the homogeneous Bellman equation”, Dokl. Math., 36:2 (1988), 326–330
90.
A. N. Krasovskii and N. N. Krasovskii, Control under lack of information, Systems Control Found. Appl., Birkhäuser Boston, Inc., Boston, MA, 1995, xii+322 pp.
91.
N. N. Krasovskii, “On the application of the second method of Lyapunov for equations with time delays”, Prikl. Mat. Mekh., 20:3 (1956), 315–327 (Russian)
92.
N. N. Krasovskii, Stability of motion. Applications of Lyapunov's second method to differential systems and equations with delay, Stanford Univ. Press, Stanford, CA, 1963, vi+188 pp.
93.
N. N. Krasovskii, “The approximation of a problem of analytic design of controls in a system with time-lag”, J. Appl. Math. Mech., 28:4 (1964), 876–885
94.
N. N. Krasovskiĭ, “On the problem of unifying differential games”, Soviet Math. Dokl., 17:1 (1976), 269–273
95.
N. N. Krasovskii, “Unifying differential games”, Tr. Inst. Mat. Mekh. UrO Akad. Nauk SSSR, 24, Institute of Mechanics and Mathematics, Ural Branch of the USSR Academy of Sciences, Sverdlovsk, 1977, 32–45 (Russian)
96.
N. N. Krasovskii, Control by dynamical system. The minimum problem of a guaranteed result, Nauka, Moscow, 1985, 519 pp. (Russian)
97.
N. N. Krasovskii and N. Yu. Lukoyanov, “Equations of Hamilton–Jacobi type in hereditary systems: minimax solutions”, Proc. Steklov Inst. Math. (Suppl.), 2000, suppl. 1, S136–S153
98.
N. N. Krasovskiĭ and Yu. S. Osipov, “Linear differential-difference games”, Soviet Math. Dokl., 12 (1971), 554–558
99.
N. N. Krasovskii and A. I. Subbotin, Positional differential games, Nauka, Moscow, 1974, 456 pp. (Russian)
100.
N. N. Krasovskii and A. I. Subbotin, Game-theoretical control problems, Springer Ser. Soviet Math., Springer, New York, 1988, xii+517 pp.
101.
S. N. Kruzhkov, “Generalized solutions of nonlinear equations of the first order with several variables. I”, Mat. Sb., 70(112):3 (1966), 394–415 (Russian)
102.
A. V. Kryazhimskii and Yu. S. Osipov, “Differential-difference game of encounter with a functional target set”, J. Appl. Math. Mech., 37:1 (1973), 1–10
103.
A. B. Kurzhanskii, “On the approximation of linear differential equations with lag”, Differ. Uravn., 3:12 (1967), 2094–2107 (Russian)
104.
L. D. Landau and E. M. Lifshitz, Course of theoretical physics, v. 1, Mechanics, 3rd ed., Pergamon Press, Oxford, 1976, xi+170 pp.
105.
Xunjing Li and Jiongmin Yong, Optimal control theory for infinite dimensional systems, Systems Control Found. Appl., Birkhäuser Boston, Inc., Boston, MA, 1995, xii+448 pp.
106.
Yongxin Li and Shuzhong Shi, “A generalization of Ekeland's $\epsilon$-variational principle and its Borwein–Preiss smooth variant”, J. Math. Anal. Appl., 246:1 (2000), 308–319
107.
D. Liberzon, Calculus of variations and optimal control theory. A concise introduction, Princeton Univ. Press, Princeton, NJ, 2012, xviii+235 pp.
108.
Ping Lin and Jiongmin Yong, “Controlled singular Volterra integral equations and Pontryagin maximum principle”, SIAM J. Control Optim., 58:1 (2020), 136–164
109.
P.-L. Lions, Generalized solutions of Hamilton–Jacobi equations, Res. Notes in Math., 69, Pitman, Boston, MA–London, 1982, iv+317 pp.
110.
P.-L. Lions and P. E. Souganidis, “Differential games, optimal control and directional derivatives of viscosity solutions of Bellman's and Isaacs' equations”, SIAM J. Control Optim., 23:4 (1985), 566–583
111.
N. Yu. Lukoyanov, “A Hamilton–Jacobi type equation in control problems with hereditary information”, J. Appl. Math. Mech., 64:2 (2000), 243–253
112.
N. Yu. Lukoyanov, “Minimax solutions of functional equations of the Hamilton–Jacobi type for hereditary systems”, Differ. Equ., 37:2 (2001), 246–256
113.
N. Yu. Lukoyanov, “The properties of the value functional of a differential game with hereditary information”, J. Appl. Math. Mech., 65:3 (2001), 361–370
114.
N. Lukoyanov, “Functional Hamilton–Jacobi type equations in ci-derivatives for systems with distributed delays”, Nonlinear Funct. Anal. Appl., 8:3 (2003), 365–397
115.
N. Lukoyanov, “Functional Hamilton–Jacobi type equations with ci-derivatives in control problems with hereditary information”, Nonlinear Funct. Anal. Appl., 8:4 (2003), 535–555
116.
N. Yu. Lukoyanov, “Strategies for aiming in the direction of invariant gradients”, J. Appl. Math. Mech., 68:4 (2004), 561–574
117.
N. Yu. Lukoyanov, “Approximation of the Hamilton–Jacobi functional equations in systems with hereditary information”, Proceedings of the International Seminar “Control theory and theory of generalized solutions of the Hamilton–Jacobi equations” (Ekaterinburg 2005), v. 1, Ural University Publishing House, Ekaterinburg, 2006, 108–115 (Russian)
118.
N. Yu. Lukoyanov, “Differential inequalities for a nonsmooth value functional in control systems with an aftereffect”, Proc. Steklov Inst. Math. (Suppl.), 255, suppl. 2 (2006), S103–S114
119.
N. Yu. Lukoyanov, “On viscosity solution of functional Hamilton–Jacobi type equations for hereditary systems”, Proc. Steklov Inst. Math. (Suppl.), 259, suppl. 2 (2007), S190–S200
120.
N. Yu. Lukoyanov, “Viscosity solution of nonanticipating Hamilton–Jacobi equations”, Differ. Equ., 43:12 (2007), 1715–1723
121.
N. Yu. Lukoyanov, “Minimax and viscosity solutions in optimization problems for hereditary systems”, Proc. Steklov Inst. Math. (Suppl.), 269, suppl. 1 (2010), S214–S225
122.
N. Yu. Lukoyanov, “On optimality conditions for the guaranteed result in control problems for time-delay systems”, Proc. Steklov Inst. Math. (Suppl.), 268, suppl. 1 (2010), S175–S187
123.
N. Yu. Lukoyanov, Hamilton–Jacobi functional equations of control problems with hereditary information, Ural Federal University, Ekaterinburg, 2011, 243 pp. (Russian)
124.
N. Yu. Lukoyanov, M. I. Gomoyunov, and A. R. Plaksin, “Hamilton–Jacobi functional equations and differential games for neutral-type systems”, Dokl. Math., 96:3 (2017), 654–657
125.
N. Yu. Lukoyanov and A. R. Plaksin, “Finite-dimensional modelling guides in time-delay systems”, Tr. Inst. Mat. Mekh., 19, no. 1, 2013, 182–195 (Russian)
126.
N. Yu. Lukoyanov and A. R. Plaksin, “Minimax solutions of Hamilton–Jacobi functional equations for neutral-type systems”, Dokl. Math., 96:2 (2017), 445–448
127.
N. Yu. Lukoyanov and A. R. Plaksin, “Stable functionals of neutral-type dynamical systems”, Proc. Steklov Inst. Math., 304 (2019), 205–218
128.
N. Yu. Lukoyanov and A. R. Plaksin, “On the theory of positional differential games for neutral-type systems”, Proc. Steklov Inst. Math. (Suppl.), 309, suppl. 1 (2020), S83–S92
129.
N. Yu. Lukoyanov and A. R. Plaksin, “Hamilton–Jacobi equations for neutral-type systems: inequalities for directional derivatives of minimax solutions”, Minimax Theory Appl., 5:2 (2020), 369–381
130.
N. Yu. Lukoyanov and A. R. Plaksin, “Inequalities for subgradients of a value functional in differential games for time-delay systems”, Dokl. Math., 101:1 (2020), 76–79
131.
V. I. Maksimov, “An alternative in the differential-difference game of approach–evasion with a functional target”, J. Appl. Math. Mech., 40:6 (1976), 936–943
132.
V. I. Maksimov, “A differential guidance game for a system of neutral type with deviating argument”, Problems of dynamical control, Ural Science Center of USSR Academy of Sciences, Sverdlovsk, 1981, 33–45 (Russian)
133.
V. N. Kolokoltsov and V. P. Maslov, Idempotent analysis and its applications, Math. Appl., 401, Kluwer Acad. Publ., Dordrecht, 1997, xii+305 pp.
134.
V. P. Maslov and S. N. Samborskii, “Existence and uniqueness of solutions of the steady-state Hamilton–Jacobi and Bellman equations. A new approach”, Dokl. Math., 45:3 (1992), 682–687
135.
A. A. Melikyan, Generalized characteristics of first order PDEs. Applications in optimal control and differential games, Birkhäuser Boston, Inc., Boston, MA, 1998, xiv+310 pp.
136.
K. S. Miller and B. Ross, An introduction to the fractional calculus and fractional differential equations, Wiley-Intersci. Publ., John Wiley & Sons, Inc., New York, 1993, xvi+366 pp.
137.
Yu. S. Osipov, “Differential games of systems with aftereffect”, Soviet Math. Dokl., 12 (1971), 262–266
138.
V. L. Pasikov, “An alternative in a minimax differential game for systems with aftereffect”, Soviet Math. (Iz. VUZ), 27:8 (1983), 54–61
139.
Triet Pham and Jianfeng Zhang, “Two person zero-sum game in weak formulation and path dependent Bellman–Isaacs equation”, SIAM J. Control Optim., 52:4 (2014), 2090–2121
140.
A. R. Plaksin, “On the Hamilton–Jacobi–Isaacs–Bellman equation for neutral-type systems”, Vestn. Udmurt. Univ. Mat. Mekh. Komp'yut. Nauki, 27:2 (2017), 222–237 (Russian)
141.
A. R. Plaksin, “Minimax solution of functional Hamilton–Jacobi equations for neutral type systems”, Differ. Equ., 55:11 (2019), 1475–1484
142.
A. Plaksin, “Minimax and viscosity solutions of Hamilton–Jacobi–Bellman equations for time-delay systems”, J. Optim. Theory Appl., 187:1 (2020), 22–42
143.
A. R. Plaksin, “On the minimax solution of the Hamilton–Jacobi equations for neutral-type systems: the case of an inhomogeneous Hamiltonian”, Differ. Equ., 57:11 (2021), 1516–1526
144.
A. R. Plaksin, “Viscosity solutions of Hamilton–Jacobi–Bellman–Isaacs equations for time-delay systems”, SIAM J. Control Optim., 59:3 (2021), 1951–1972
145.
A. Plaksin, “Viscosity solutions of Hamilton–Jacobi equations for neutral-type systems”, Appl. Math. Optim., 88:1 (2023), 6, 29 pp.
146.
I. Podlubny, Fractional differential equations. An introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications, Math. Sci. Engrg., 198, Academic Press, Inc., San Diego, CA, 1999, xxiv+340 pp.
147.
L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The mathematical theory of optimal processes, 4th ed., Nauka, Moscow, 1983, 392 pp.; English transl. of 1st ed., Intersci. Publ. John Wiley & Sons, Inc., New York–London, 1962, viii+360 pp.
148.
M. Ramaswamy and A. J. Shaiju, “Construction of approximate saddle-point strategies for differential games in a Hilbert space”, J. Optim. Theory Appl., 141:2 (2009), 349–370
149.
A. Razminia, M. Asadizadehshiraz, and D. F. M. Torres, “Fractional order version of the Hamilton–Jacobi–Bellman equation”, J. Comput. Nonlinear Dynam., 14:1 (2018), 011005, 6 pp.
150.
Yu. M. Repin, “On the approximate replacement of systems with lag by ordinary dynamical systems”, J. Appl. Math. Mech., 29:2 (1965), 254–264
151.
I. V. Rublev, “The relationship between two approaches to a generalized solution of the Hamilton–Jacobi equation”, Differ. Equ., 38:6 (2002), 865–873
152.
S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional integrals and derivatives. Theory and applications, Gordon and Breach Sci. Publ., Yverdon, 1993, xxxvi+976 pp.
153.
B. Sendov, Hausdorff approximations, Math. Appl. (East European Ser.), 50, Kluwer Acad. Publ., Dordrecht, 1990, xx+364 pp.
154.
S. N. Shimanov, “On the stability in the critical case of a zero root for systems with time lag”, J. Appl. Math. Mech., 24:3 (1961), 653–668
155.
S. N. Shimanov, “On the theory of linear differential equations with after-effect”, Differ. Equ., 1:1 (1965), 76–86
156.
H. M. Soner, “On the Hamilton–Jacobi–Bellman equations in Banach spaces”, J. Optim. Theory Appl., 57:3 (1988), 429–437
157.
P. E. Souganidis, “Existence of viscosity solutions of Hamilton–Jacobi equations”, J. Differential Equations, 56:3 (1985), 345–390
158.
C. Stegall, “Optimization of functions on certain subsets of Banach spaces”, Math. Ann., 236:2 (1978), 171–176
159.
A. I. Subbotin, “A generalization of the basic equation of the theory of differential games”, Soviet Math. Dokl., 22:2 (1980), 358–362
160.
A. I. Subbotin, Minimax inequalities and Hamilton–Jacobi equations, Nauka, Moscow, 1991, 216 pp. (Russian)
161.
A. I. Subbotin, “On a property of the subdifferential”, Math. USSR-Sb., 74:1 (1993), 63–78
162.
A. I. Subbotin, Generalized solutions of first order PDEs. The dynamical optimization perspective, Systems Control Found. Appl., Birkhäuser Boston, Inc., Boston, MA, 1995, xii+312 pp.
163.
A. I. Subbotin and A. G. Chentsov, Guarantee optimization in control problems, Nauka, Moscow, 1981, 288 pp. (Russian)
164.
A. I. Subbotin and N. N. Subbotina, “The optimum result function in a control problem”, Soviet Math. Dokl., 26:2 (1982), 336–340
165.
A. I. Subbotin and N. N. Subbotina, “The basis for the method of dynamic programming in optimal control problems”, Engrg. Cybernetics, 21:2 (1983), 16–23
166.
A. I. Subbotin and A. M. Taras'yev, “Stability properties of the value function of a differential game and viscosity solutions of Hamilton–Jacobi equations”, Problems Control Inform. Theory, 15:6 (1986), 451–463
167.
N. N. Subbotina, “Unified optimality conditions in control problems”, Tr. Inst. Mat. Mekh., 1, 1992, 147–159
168.
N. N. Subbotina, “The method of characteristics for Hamilton–Jacobi equations and applications to dynamical optimization”, J. Math. Sci. (N. Y.), 135:3 (2006), 2955–3091
169.
A. Świȩch, “Sub- and super-optimality principles and construction of almost optimal strategies for differential games in Hilbert spaces”, Advances in dynamic games, Ann. Internat. Soc. Dynam. Games, 11, Birkhäuser/Springer, New York, 2010, 149–163
170.
Shanjian Tang and Fu Zhang, “Path-dependent optimal stochastic control and viscosity solution of associated Bellman equations”, Discrete Contin. Dyn. Syst., 35:11 (2015), 5521–5553
171.
A. M. Taras'yev, V. N. Ushakov, and A. P. Khripunov, “On a computational algorithm for solving game control problems”, J. Appl. Math. Mech., 51:2 (1987)
172.
D. Tataru, “Viscosity solutions of Hamilton–Jacobi equations with unbounded nonlinear terms”, J. Math. Anal. Appl., 163:2 (1992), 345–392
173.
Hung Vinh Tran, Hamilton–Jacobi equations: theory and applications, Grad. Stud. Math., 213, Amer. Math. Soc., Providence, RI, 2021, xiv+322 pp.
174.
V. N. Ushakov and A. A. Uspenskii, “On a supplement to the stability property in differential games”, Proc. Steklov Inst. Math., 271 (2010), 286–305
175.
P. R. Wolenski, “Hamilton–Jacobi theory for hereditary control problems”, Nonlinear Anal., 22:7 (1994), 875–894
176.
Jiongmin Yong, Differential games. A concise introduction, World Sci. Publ., Hackensack, NJ, 2015, xiv+322 pp.
177.
M. I. Zelikin, Optimal control and variational calculus, 2nd revised and augmented ed., Editorial URSS, Moscow, 2004, 160 pp. (Russian)
178.
Jianjun Zhou, A class of delay optimal control problems and viscosity solutions to associated Hamilton–Jacobi–Bellman equations, 2015, 21 pp., arXiv: 1507.04112
179.
Jianjun Zhou, “A class of infinite-horizon stochastic delay optimal control problems and a viscosity solution to the associated HJB equation”, ESAIM Control Optim. Calc. Var., 24:2 (2018), 639–676
180.
Jianjun Zhou, Viscosity solutions to first order path-dependent HJB equations, 2020, 25 pp., arXiv: 2004.02095
181.
Jianjun Zhou, Viscosity solutions to second order elliptic Hamilton–Jacobi–Bellman equation with infinite delay, 2021, 39 pp., arXiv: 2112.13363
182.
Jianjun Zhou, “A notion of viscosity solutions to second-order Hamilton–Jacobi–Bellman equations with delays”, Internat. J. Control, 95:10 (2022), 2611–2631
183.
Jianjun Zhou, “Viscosity solutions to second order path-dependent Hamilton–Jacobi–Bellman equations and applications”, Ann. Appl. Probab., 33:6B (2023), 5564–5612
Citation:
M. I. Gomoyunov, N. Yu. Lukoyanov, “Minimax solutions of Hamilton–Jacobi equations in dynamic optimization problems for hereditary systems”, Russian Math. Surveys, 79:2 (2024), 229–324