Izvestiya: Mathematics, 2024, Volume 88, Issue 1, Pages 92–113
DOI: https://doi.org/10.4213/im9384e

On the construction of families of optimal recovery methods for linear operators

K. Yu. Osipenkoab

a Lomonosov Moscow State University, Faculty of Mechanics and Mathematics
b Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow
Abstract: The paper proposes an approach to the construction of families of optimal methods for the recovery of linear operators from inaccurately given information. The construction is then applied to the recovery of derivatives from inaccurately specified values of other derivatives in the multidimensional case, and to the recovery of solutions of the heat equation from inaccurately specified temperature distributions at certain time instants.
Keywords: optimal recovery, linear operators, heat equation, difference equations.
Received: 03.06.2022
Russian version:
Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya, 2024, Volume 88, Issue 1, Pages 98–120
DOI: https://doi.org/10.4213/im9384
Document Type: Article
UDC: 517.98
MSC: 41A65, 42B10, 49N30
Language: English
Original paper language: Russian

§ 1. Introduction

Let $X$ be a linear space, and let $Y$, $Z$ be normed linear spaces. The problem of optimal recovery of a linear operator $\Lambda\colon X\to Z$ from inaccurately given values of a linear operator $I\colon X\to Y$ on a set $W\subset X$ is posed as follows:

$$ \begin{equation*} E(\Lambda,W,I,\delta)=\inf_{\varphi\colon Y\to Z} \sup_{\substack{x\in W,\, y\in Y\\\|Ix-y\|_Y \leqslant\delta}}\|\Lambda x-\varphi(y)\|_Z; \end{equation*} \notag $$
the value $E(\Lambda,W,I,\delta)$ is called the error of optimal recovery, and a mapping $\varphi$ on which the lower bound is attained is called an optimal recovery method (here, $\delta\geqslant0$ is a parameter that characterizes the error with which the values of the operator $I$ are given). Initially, this problem was posed by Smolyak [1] in the case where $\Lambda$ is a linear functional, $Y$ is a finite-dimensional space, and the information is known exactly ($\delta=0$). In fact, this problem generalizes A. N. Kolmogorov's problem of a best quadrature formula on a class of functions [2], in which both the integral and the values of the functions are replaced by arbitrary linear functionals, and no linearity is required of the recovery method. Subsequently, much effort has been devoted to extensions of this problem (see [3]–[10] and the references given there).

Melkman and Micchelli [4] were among the first to consider the problem of constructing an optimal recovery method for a linear operator. This topic was further developed in [11]–[19]. It turned out that in some cases it is possible to construct a whole family of optimal recovery methods for a linear operator. The study of such families began in [20] and was continued in [21], [22], [14] and [19].

The aim of this paper is to propose an approach to the construction of families of optimal recovery methods for linear operators and to demonstrate its efficiency in a number of concrete problems.

§ 2. General setting and construction of families of optimal recovery methods

We will consider the case where, in the optimal recovery problem, the set $W$ (containing the a priori information about elements from the space $X$) is given in the form of constraints associated with a certain set of linear operators. Let $Y_0,\dots,Y_n$ be normed linear spaces and $I_j\colon X \to Y_j$, $j=0,\dots,n$, be linear operators. Let, in addition, $\delta_1,\dots,\delta_n\geqslant0$ be given numbers, and $J\subset\{1,\dots,n\}$ be a set of natural numbers. We put $\overline J=\{1,\dots,n\}\setminus J$.

The problem is to optimally recover the operator $I_0$ on the set

$$ \begin{equation*} W_J=\{x\in X\colon \|I_jx\|_{Y_j}\leqslant\delta_j,\, j\in J\} \end{equation*} \notag $$
from the values of the operators $I_j$ given with errors $\delta_j$, $j\in\overline J$ (for $J=\varnothing$, we have $W_\varnothing=X$). More precisely, we will assume that, for each $x\in W_J$, we know a vector
$$ \begin{equation*} y=\{y_j\}_{j\in\overline J}\in Y_{\overline J}=\prod_{j\in\overline J}Y_j \end{equation*} \notag $$
such that $\|I_jx-y_j\|_{Y_j}\leqslant\delta_j$, $j\in\overline J$. As recovery methods we will consider arbitrary mappings $\varphi\colon Y_{\overline J}\to Y_0$. The error of a method $\varphi(\,{\cdot}\,)$ is defined by
$$ \begin{equation*} e_J(I,\delta,\varphi)=\sup_{\substack{x\in W_J,\, y\in Y_{\overline J}\\ \|I_jx-y_j\|_{Y_j}\leqslant\delta_j,\, j\in\overline J}}\|I_0x-\varphi(y)\|_{Y_0}, \end{equation*} \notag $$
and the quantity
$$ \begin{equation} E_J(I,\delta)=\inf_{\varphi\colon Y_{\overline J}\,\to Y_0} e_J(I,\delta,\varphi) \end{equation} \tag{2.1} $$
is known as the optimal recovery error (here, $I=(I_0,\dots,I_n)$, $\delta=(\delta_1,\dots,\delta_n)$). A method on which the lower bound in (2.1) is attained (if it exists) is called an optimal method.

Theorem 1. Let $1\leqslant p<+\infty$. Assume that there exist $\widehat{\lambda}_j\geqslant0$, $j=1,\dots,n$, such that

$$ \begin{equation*} \sup_{\substack{x\in X\\ \|I_jx\|_{Y_j}\leqslant\delta_j,\, j=1,\dots,n}} \|I_0x\|_{Y_0}^p \geqslant\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p. \end{equation*} \notag $$
Moreover, let a set of linear operators $S_j\colon Y_j\to Y_0$, $j=1,\dots,n$, be such that
$$ \begin{equation} I_0=\sum_{j=1}^nS_jI_j \end{equation} \tag{2.2} $$
and
$$ \begin{equation} \biggl\|\sum_{j=1}^nS_jz_j\biggr\|_{Y_0}^p\leqslant\sum_{j=1}^n\widehat{\lambda}_j\|z_j\|_{Y_j}^p \end{equation} \tag{2.3} $$
for all $z_j\in Y_j$, $j=1,\dots,n$. Then, for any $J\subset\{1,\dots,n\}$, the method
$$ \begin{equation} \widehat{\varphi}(y)=\sum_{j\in\overline J}S_jy_j \end{equation} \tag{2.4} $$
is optimal for the corresponding optimal recovery problem, and the error of optimal recovery is given by
$$ \begin{equation} E_J(I,\delta)=\biggl(\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p\biggr)^{1/p}. \end{equation} \tag{2.5} $$

Proof. Let $\varphi\colon Y_{\overline J}\to Y_0$ be an arbitrary method of recovery and $x\in X$ be such that $\|I_jx\|_{Y_j}\leqslant\delta_j$, $j=1,\dots,n$. Then
$$ \begin{equation*} \begin{aligned} \, 2\|I_0x\|_{Y_0} &=\|I_0x-\varphi(0)-(I_0(-x)-\varphi(0))\|_{Y_0} \\ &\leqslant\|I_0x-\varphi(0)\|_{Y_0}+\|I_0(-x)-\varphi(0)\|_{Y_0}\leqslant2e_J(I,\delta,\varphi). \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation*} e_J^p(I,\delta,\varphi) \geqslant\sup_{\substack{x\in X\\\|I_jx\|_{Y_j}\leqslant\delta_j,\, j=1,\dots,n}}\|I_0x\|_{Y_0}^p \geqslant\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p. \end{equation*} \notag $$
Since the method $\varphi(\,{\cdot}\,)$ is arbitrary, we obtain
$$ \begin{equation} E_J^p(I,\delta)\geqslant\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p. \end{equation} \tag{2.6} $$

To estimate the $p$th power of the error of the method $\widehat{\varphi}(\,{\cdot}\,)$, we need to estimate the value of the following extremal problem:

$$ \begin{equation*} \begin{gathered} \, \biggl\|I_0x-\sum_{j\in\overline J}S_jy_j\biggr\|_{Y_0}^p\to\max,\qquad \|I_jx\|_{Y_j}\leqslant\delta_j,\quad j\in J, \\ \|I_jx-y_j\|_{Y_j}\leqslant\delta_j,\qquad j\in\overline J,\quad x\in X. \end{gathered} \end{equation*} \notag $$
Setting $z_j=I_jx-y_j$, $j\in\overline J$, the above problem assumes the form
$$ \begin{equation} \begin{gathered} \, \biggl\|\biggl(I_0-\sum_{j\in\overline J}S_jI_j\biggr)x +\sum_{j\in\overline J}S_jz_j\biggr\|_{Y_0}^p\to\max,\qquad \|I_jx\|_{Y_j}\leqslant\delta_j,\quad j\in J, \\ \|z_j\|_{Y_j}\leqslant\delta_j,\qquad j\in\overline J,\quad x\in X. \end{gathered} \end{equation} \tag{2.7} $$
In view of (2.2) and condition (2.3), we obtain
$$ \begin{equation*} \begin{aligned} \, \biggl\|\biggl(I_0-\sum_{j\in\overline J}S_jI_j\biggr)x +\sum_{j\in\overline J}S_jz_j\biggr\|_{Y_0}^p &= \biggl\|\sum_{j\in J}S_jI_jx+\sum_{j\in\overline J} S_jz_j\biggr\|_{Y_0}^p \\ &\leqslant\sum_{j\in J}\widehat{\lambda}_j\|I_jx\|_{Y_j}^p +\sum_{j\in\overline J} \widehat{\lambda}_j\|z_j\|_{Y_j}^p \leqslant\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p. \end{aligned} \end{equation*} \notag $$
Thus,
$$ \begin{equation*} E_J^p(I,\delta)\leqslant e_J^p(I,\delta,\widehat{\varphi}) \leqslant\sum_{j=1}^n\widehat{\lambda}_j\delta_j^p, \end{equation*} \notag $$
which together with (2.6) proves the theorem.

Note that the dual extremal problem

$$ \begin{equation*} \|I_0x\|_{Y_0}\to\max,\qquad \|I_jx\|_{Y_j}\leqslant\delta_j,\quad j=1,\dots,n, \end{equation*} \notag $$
“does not distinguish” which of the operators $I_j$ are informational, and which of them define the class on which the recovery problem is considered. In other words, the dual extremal problem does not distinguish between a priori and a posteriori information. In view of this, Theorem 1 shows that if operators $S_j\colon Y_j\to Y_0$, $j=1,\dots,n$, are found to satisfy conditions (2.2) and (2.3), then $2^n$ recovery problems are immediately solved. Moreover, to obtain an appropriate optimal method, it is sufficient to put $y_j=0$, $j\in J$, in the method
$$ \begin{equation*} \widehat{\varphi}(y)=S_1y_1+\dots+S_ny_n. \end{equation*} \notag $$
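The following Python sketch is added for illustration only: the scalar instance $I_1x=x$, $I_2x=4x$, $I_0x=2x$ with $p=2$, $\delta_1=1/2$, $\delta_2=2$ and the choices $S_1=1$, $S_2=1/4$, $\widehat{\lambda}_1=2$, $\widehat{\lambda}_2=1/8$ are toy values (they satisfy (2.2) and, via the Cauchy–Schwarz inequality, (2.3)), not taken from the text. It checks by Monte-Carlo sampling that the single family (2.4) stays within the bound (2.5) in all $2^2=4$ recovery problems at once.

```python
import numpy as np

# Toy scalar instance of Theorem 1 (illustration only):
# X = R, I1 x = x, I2 x = 4x, I0 x = 2x, p = 2, delta = (1/2, 2).
# (2.2): 2 = S1*1 + S2*4;  (2.3) via Cauchy-Schwarz: S1^2/lam1 + S2^2/lam2 <= 1.
S1, S2 = 1.0, 0.25
lam1, lam2 = 2.0, 0.125
d1, d2 = 0.5, 2.0
bound = np.sqrt(lam1 * d1**2 + lam2 * d2**2)        # right-hand side of (2.5)

rng = np.random.default_rng(0)
for J in [set(), {1}, {2}, {1, 2}]:                 # all 2^n = 4 problems
    worst = 0.0
    for _ in range(100_000):
        x = rng.uniform(-10.0, 10.0)
        if 1 in J and abs(x) > d1:                  # a priori constraint, j in J
            continue
        if 2 in J and abs(4 * x) > d2:
            continue
        z1 = 0.0 if 1 in J else rng.uniform(-d1, d1)   # measurement errors
        z2 = 0.0 if 2 in J else rng.uniform(-d2, d2)
        y1, y2 = x - z1, 4 * x - z2
        # the single family: zero out the inputs with j in J
        phi = (0.0 if 1 in J else S1 * y1) + (0.0 if 2 in J else S2 * y2)
        worst = max(worst, abs(2 * x - phi))
    print(sorted(J), "empirical worst error", round(worst, 4), "<=", bound)
```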

§ 3. Recovery in $L_p(\mathbb R^d)$

Denote by $L_p(\mathbb R^d)$, $1\leqslant p<\infty$, the set of all measurable functions $x(\,{\cdot}\,)$ for which

$$ \begin{equation*} \|x(\,{\cdot}\,)\|_{L_p(\mathbb R^d)} =\biggl(\int_{\mathbb R^d} |x(\xi)|^p\,d\xi\biggr)^{1/p}<\infty. \end{equation*} \notag $$

Let $\alpha=(\alpha_1,\dots,\alpha_d)\in\mathbb R_+^d$. For a given vector $\xi=(\xi_1,\dots,\xi_d)\in\mathbb R^d$, we set $(i\xi)^\alpha=(i\xi_1)^{\alpha_1}\cdots(i\xi_d)^{\alpha_d}$, $|\xi|^\alpha=|\xi_1|^{\alpha_1}\cdots|\xi_d|^{\alpha_d}$. For $\alpha^0,\dots,\alpha^n\in\mathbb R_+^d$, we put

$$ \begin{equation*} I_jx(\xi)=(i\xi)^{\alpha^j}x(\xi),\qquad j=0,\dots,n. \end{equation*} \notag $$
Let $X$ be the set of all measurable functions $x(\,{\cdot}\,)$ such that $\|I_jx(\,{\cdot}\,)\|_{L_p(\mathbb R^d)}<\infty$, $j=1,\dots,n$. Consider problem (2.1) with $Y_0=Y_1=\dots=Y_n=L_p(\mathbb R^d)$.

We also set

$$ \begin{equation*} Q=\operatorname{co}\biggl\{\biggl(\alpha^1,\ln\frac1{\delta_1}\biggr), \dots, \biggl(\alpha^n,\ln\frac1{\delta_n}\biggr)\biggr\}, \end{equation*} \notag $$
where $\operatorname{co} M$ denotes the convex hull of a set $M$. Consider the function $S(\,{\cdot}\,)$ on $\mathbb R^d$ defined by
$$ \begin{equation} S(\alpha)=\max\{z\in\mathbb R\colon (\alpha,z)\in Q\}, \end{equation} \tag{3.1} $$
where we assume that $S(\alpha)=-\infty$ if the set in the curly brackets is empty.

Let $\alpha^0\in\operatorname{co}\{\alpha^1,\dots,\alpha^n\}$. Then the point $(\alpha^0,S(\alpha^0))$ lies on the boundary of the convex polyhedron $Q$. A support hyperplane to $Q$ at the point $(\alpha^0,S(\alpha^0))$ can be written as $z=\langle\alpha,\widehat{\eta}\,\rangle+\widehat{a}$ for some $\widehat{\eta}=(\widehat{\eta}_1,\dots,\widehat{\eta}_d)\in\mathbb R^d$ and $\widehat{a}\in\mathbb R$ ($\langle\alpha,\widehat{\eta}\,\rangle$ denotes the inner product of the vectors $\alpha$ and $\widehat{\eta}$). By Carathéodory's theorem, there exist points $(\alpha^{j_k},\ln1/\delta_{j_k})$, $k=1,\dots,s$, $s\leqslant d+1$, on this hyperplane such that

$$ \begin{equation} \alpha^0=\sum_{k=1}^s\theta_{j_k}\alpha^{j_k},\qquad \theta_{j_k}>0,\quad k=1,\dots,s, \quad\sum_{k=1}^s\theta_{j_k}=1. \end{equation} \tag{3.2} $$
We put $J_0=\{j_1,\dots,j_s\}$ and define
$$ \begin{equation*} \widehat{\lambda}_j=\frac{\theta_j}{\delta_j^p}e^{-pS(\alpha^0)},\qquad j\in J_0. \end{equation*} \notag $$
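Since $Q$ is the convex hull of finitely many points, the quantities $S(\alpha^0)$, $\theta_j$, $J_0$ and $\widehat{\lambda}_j$ can be computed by linear programming: $S(\alpha^0)$ is the maximum of $\sum_j\theta_j\ln(1/\delta_j)$ over the simplex with $\sum_j\theta_j\alpha^j=\alpha^0$. The following Python sketch does this for a hypothetical instance ($d=2$, $n=3$; all numerical values are arbitrary test choices, not taken from the text):

```python
import numpy as np
from scipy.optimize import linprog

alphas = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # alpha^1..alpha^3
deltas = np.array([1.0, 0.1, 0.2])
alpha0 = np.array([1.0, 0.5])          # lies in co{alpha^1, alpha^2, alpha^3}

c = -np.log(1.0 / deltas)              # linprog minimizes, so negate
A_eq = np.vstack([alphas.T, np.ones(len(deltas))])
b_eq = np.concatenate([alpha0, [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, None)] * len(deltas))
theta, S0 = res.x, -res.fun
J0 = [j for j in range(len(deltas)) if theta[j] > 1e-12]
lam = theta / deltas**2 * np.exp(-2.0 * S0)     # hat-lambda_j for p = 2
print("S(alpha0) =", S0, "  optimal recovery error =", np.exp(-S0))
print("J0 =", J0, "  theta =", theta, "  hat-lambda =", lam)
```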

Theorem 2. Let $\alpha^0\in\operatorname{co}\{\alpha^1,\dots,\alpha^n\}$. Then, for any $J\subset\{1,\dots,n\}$,

$$ \begin{equation*} E_J(I,\delta)=e^{-S(\alpha^0)}. \end{equation*} \notag $$
Moreover, each method
$$ \begin{equation*} \widehat{\varphi}(y)=\sum_{j\in\overline J\cap J_0}a_j(\xi)y_j, \end{equation*} \notag $$
where the measurable functions $a_j(\,{\cdot}\,)$, $j\in J_0$, satisfy
$$ \begin{equation} \sum_{j\in J_0}(i\xi)^{\alpha^j}a_j(\xi)=(i\xi)^{\alpha^0}, \end{equation} \tag{3.3} $$
$$ \begin{equation} \sum_{j\in J_0} \frac{|a_j(\xi)|^{p'}}{\widehat{\lambda}_j^{p'/p}} \leqslant 1,\qquad \frac1{p}+\frac1{p'}=1, \quad\textit{if }\ 1<p<\infty, \end{equation} \tag{3.4} $$
$$ \begin{equation} \max_{j\in J_0}\frac{|a_j(\xi)|}{\widehat{\lambda}_j} \leqslant1,\quad\textit{if }\ p=1, \end{equation} \tag{3.5} $$
for almost all $\xi\in\mathbb R^d$, is optimal for the corresponding optimal recovery problem.

Proof. Let us estimate the value of the extremal problem
$$ \begin{equation} \int_{\mathbb R^d}|(i\xi)^{\alpha^0} x(\xi)|^p\,d\xi\to\max,\qquad \int_{\mathbb R^d}|(i\xi)^{\alpha^j} x(\xi)|^p\,d\xi\leqslant\delta_j^p,\quad j=1,\dots,n. \end{equation} \tag{3.6} $$
We set $\widehat{A}=e^{-p\widehat{a}}$, $\widehat{\xi}_j=e^{-\widehat{\eta}_j}$, $j=1,\dots,d$, $\widehat{\xi}=(\widehat{\xi}_1,\dots,\widehat{\xi}_d)$. For a sufficiently small $\varepsilon>0$, consider the cube
$$ \begin{equation*} B_\varepsilon=\{\xi=(\xi_1,\dots,\xi_d)\in\mathbb R^d\colon \widehat{\xi}_j -\varepsilon\leqslant\xi_j\leqslant\widehat{\xi}_j,\, j=1,\dots,d\} \end{equation*} \notag $$
and define the function
$$ \begin{equation*} x_\varepsilon(\xi)= \begin{cases} \biggl(\dfrac{\widehat{A}}{|B_\varepsilon|}\biggr)^{1/p}, &\xi\in B_\varepsilon, \\ 0, &\xi\notin B_\varepsilon \end{cases} \end{equation*} \notag $$
($|B_\varepsilon|$ denotes the volume of the cube $B_\varepsilon$). We have
$$ \begin{equation*} \int_{\mathbb R^d} |(i\xi)^{\alpha^j}x_\varepsilon(\xi)|^p\,d\xi \leqslant\widehat{A}|\widehat{\xi}|^{p\alpha^j}= e^{-p(\langle\alpha^j,\widehat{\eta}\,\rangle+\widehat{a})}. \end{equation*} \notag $$
Since $z=\langle\alpha,\widehat{\eta}\,\rangle+\widehat{a}$ is a support hyperplane to $Q$, we have
$$ \begin{equation*} \langle\alpha^j,\widehat{\eta}\,\rangle+\widehat{a}\geqslant\ln\frac1{\delta_j}. \end{equation*} \notag $$
Hence
$$ \begin{equation*} \int_{\mathbb R^d}|(i\xi)^{\alpha^j}x_\varepsilon(\xi)|^p\,d\xi\leqslant\delta_j^p, \qquad j=1,\dots,n. \end{equation*} \notag $$
Thus, $x_\varepsilon(\,{\cdot}\,)$ is an admissible function for problem (3.6). Consequently,
$$ \begin{equation*} \sup_{\substack{x\in X\\\|I_jx\|_{Y_j}\leqslant\delta_j,\, j=1,\dots,n}} \|I_0x\|_{Y_0}^p \geqslant \int_{\mathbb R^d}|(i\xi)^{\alpha^0}x_\varepsilon(\xi)|^p\, d\xi \geqslant\widehat{A}|\widehat{\xi}_\varepsilon|^{p\alpha^0}, \end{equation*} \notag $$
where
$$ \begin{equation*} \widehat{\xi}_\varepsilon=(\widehat{\xi}_1-\varepsilon,\dots,\widehat{\xi}_d-\varepsilon). \end{equation*} \notag $$
Letting $\varepsilon\to 0$, we have
$$ \begin{equation*} \sup_{\substack{x\in X\\\|I_jx\|_{Y_j}\leqslant\delta_j,\, j=1,\dots,n}}\|I_0x\|_{Y_0}^p \geqslant e^{-p\widehat{a}}|\widehat{\xi}|^{p\alpha^0} =e^{-p(\langle\alpha^0,\widehat{\eta}\,\rangle+\widehat{a})}=e^{-pS(\alpha^0)}. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \sup_{\substack{x\in X\\\|I_jx\|_{Y_j}\leqslant\delta_j,\, j=1,\dots,n}}\|I_0x\|_{Y_0}^p \geqslant\sum_{j\in J_0} \widehat{\lambda}_j\delta_j^p. \end{equation*} \notag $$

Consider the operators $S_j\colon L_p(\mathbb R^d)\to L_p(\mathbb R^d)$, $j=1,\dots,n$, defined by

$$ \begin{equation*} S_jz(\xi)=\begin{cases} a_j(\xi)z(\xi), &j\in J_0, \\ 0, &j\notin J_0, \end{cases} \end{equation*} \notag $$
where $a_j(\,{\cdot}\,)$, $j\in J_0$, satisfy conditions (3.3)–(3.5). We have
$$ \begin{equation} \biggl\|\sum_{j=1}^nS_jz_j(\,{\cdot}\,)\biggr\|_{L_p(\mathbb R^d)}^p =\int_{\mathbb R^d} \biggl|\sum_{j\in J_0}a_j(\xi)z_j(\xi)\biggr|^p\,d\xi. \end{equation} \tag{3.7} $$
By Hölder’s inequality, we have, for $1<p<\infty$,
$$ \begin{equation*} \biggl|\sum_{j\in J_0}a_j(\xi)z_j(\xi)\biggr|=\biggl|\sum_{j\in J_0} \frac{a_j(\xi)}{\widehat{\lambda}_j^{1/p}}\widehat{\lambda}_j^{1/p}z_j(\xi)\biggr| \leqslant\Omega(\xi)\biggl(\sum_{j\in J_0}\widehat{\lambda}_j|z_j(\xi)|^p\biggr)^{1/p}, \end{equation*} \notag $$
where
$$ \begin{equation*} \Omega(\xi)=\biggl(\sum_{j\in J_0} \frac{|a_j(\xi)|^{p'}}{\widehat{\lambda}_j^{p'/p}}\biggr)^{1/p'},\qquad \frac1{p}+\frac1{p'}=1. \end{equation*} \notag $$
For $p=1$, we have the inequality
$$ \begin{equation*} \biggl|\sum_{j\in J_0}a_j(\xi)z_j(\xi)\biggr| \leqslant\Omega(\xi)\biggl(\sum_{j\in J_0} \widehat{\lambda}_j|z_j(\xi)|\biggr), \end{equation*} \notag $$
in which
$$ \begin{equation*} \Omega(\xi)=\max_{j\in J_0}\frac{|a_j(\xi)|}{\widehat{\lambda}_j}. \end{equation*} \notag $$
Using the above inequalities we have by (3.7)
$$ \begin{equation*} \biggl\|\sum_{j=1}^nS_jz_j(\,{\cdot}\,)\biggr\|_{L_p(\mathbb R^d)}^p \leqslant\int_{\mathbb R^d} \Omega^p(\xi) \biggl(\sum_{j\in J_0}\widehat{\lambda}_j|z_j(\xi)|^p\biggr)\,d\xi. \end{equation*} \notag $$
From conditions (3.4), (3.5), we have
$$ \begin{equation*} \biggl\|\sum_{j=1}^nS_jz_j(\,{\cdot}\,)\biggr\|_{L_p(\mathbb R^d)}^p \leqslant\sum_{j\in J_0} \widehat{\lambda}_j\|z_j(\,{\cdot}\,)\|_{L_p(\mathbb R^d)}^p. \end{equation*} \notag $$

It remains to show that the set of functions $a_j(\,{\cdot}\,)$, $j\in J_0$, satisfying conditions (3.3)–(3.5) is non-empty. Consider the function

$$ \begin{equation*} f(\eta)=-1+\sum_{j\in J_0}\widehat{\lambda}_j e^{-p\langle\alpha^j-\alpha^0,\eta\rangle} \end{equation*} \notag $$
on $\mathbb R^d$. It is easy to see that $f(\,{\cdot}\,)$ is a convex function, $f(\widehat{\eta})=0$, and the derivative of this function at the point $\widehat{\eta}$ is also zero. Hence $f(\eta)\geqslant0$ for all $\eta\in\mathbb R^d$. Consequently,
$$ \begin{equation*} -e^{-p\langle\alpha^0,\eta\rangle} +\sum_{j\in J_0} \widehat{\lambda}_je^{-p\langle\alpha^j,\eta\rangle}\geqslant0. \end{equation*} \notag $$
Putting $e^{-\eta_j}=|\xi_j|$, $j=1,\dots,d$, we get that
$$ \begin{equation} -|\xi|^{p\alpha^0}+\sum_{j\in J_0}\widehat{\lambda}_j|\xi|^{p\alpha^j}\geqslant0 \end{equation} \tag{3.8} $$
for all $\xi\in\mathbb R^d$. We next set
$$ \begin{equation*} a_j(\xi)=(i\xi)^{\alpha^0} \frac{\widehat{\lambda}_j(-i\xi)^{\alpha^j}|\xi|^{(p-2)\alpha^j}} {\sum_{j\in J_0}\widehat{\lambda}_j|\xi|^{p\alpha^j}},\qquad j\in J_0. \end{equation*} \notag $$
It is easily seen that condition (3.3) is met. For $p=1$, using (3.8), we have
$$ \begin{equation*} \frac{|a_j(\xi)|}{\widehat{\lambda}_j} =\frac{|\xi|^{\alpha^0}}{\sum_{j\in J_0} \widehat{\lambda}_j|\xi|^{\alpha^j}}\leqslant1. \end{equation*} \notag $$
If $p>1$, then
$$ \begin{equation*} \begin{aligned} \, \sum_{j\in J_0}\frac{|a_j(\xi)|^{p'}}{\widehat{\lambda}_j^{p'/p}} &=\sum_{j\in J_0} \frac{|\xi|^{p'\alpha^0}\widehat{\lambda}_j^{p'}|\xi|^{(p-1)p'\alpha^j}} {\widehat{\lambda}_j^{p'/p} \bigl(\sum_{j\in J_0} \widehat{\lambda}_j|\xi|^{p\alpha^j}\bigr)^{p'}} =\frac{|\xi|^{p'\alpha^0}\sum_{j\in J_0}\widehat{\lambda}_j|\xi|^{p\alpha^j}} {\bigl(\sum_{j\in J_0} \widehat{\lambda}_j|\xi|^{p\alpha^j}\bigr)^{p'}} \\ &=\Biggl(\frac{|\xi|^{p\alpha^0}} {\sum_{j\in J_0} \widehat{\lambda}_j|\xi|^{p\alpha^j}}\Biggr)^{p'-1}. \end{aligned} \end{equation*} \notag $$
Now it follows from (3.8) that
$$ \begin{equation*} \sum_{j\in J_0}\frac{|a_j(\xi)|^{p'}}{\widehat{\lambda}_j^{p'/p}}\leqslant1. \end{equation*} \notag $$
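The construction in the last part of the proof can be checked numerically. The following Python sketch (continuing the hypothetical $d=2$, $p=p'=2$ instance of the previous sketch) evaluates the explicit functions $a_j(\,{\cdot}\,)$ and verifies conditions (3.3) and (3.4) at random points:

```python
import numpy as np

alphas = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
alpha0 = np.array([1.0, 0.5])
deltas = np.array([1.0, 0.1, 0.2])
theta = np.array([0.25, 0.5, 0.25])          # from the linear program above
S0 = float(theta @ np.log(1.0 / deltas))
lam = theta / deltas**2 * np.exp(-2.0 * S0)

def ipow(xi, a, s=1.0):
    """prod_k (s*i*xi_k)^{a_k}, principal branch."""
    return np.prod((s * 1j * xi) ** a)

rng = np.random.default_rng(1)
for _ in range(1000):
    xi = rng.uniform(-3.0, 3.0, size=2)
    den = sum(l * np.prod(np.abs(xi) ** (2 * a)) for l, a in zip(lam, alphas))
    a_vals = [l * ipow(xi, alpha0) * ipow(xi, a, s=-1.0) / den
              for l, a in zip(lam, alphas)]
    lhs = sum(ipow(xi, a) * v for a, v in zip(alphas, a_vals))
    assert abs(lhs - ipow(xi, alpha0)) < 1e-9                             # (3.3)
    assert sum(abs(v) ** 2 / l for v, l in zip(a_vals, lam)) <= 1 + 1e-9  # (3.4)
print("conditions (3.3), (3.4) verified at 1000 random points")
```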

Let $\alpha=(\alpha_1,\dots,\alpha_d)\in\mathbb R^d_+$. Given $x(\,{\cdot}\,)\in L_2(\mathbb R^d)$, let $D^\alpha x(\,{\cdot}\,)$ denote the Weyl derivative of order $\alpha$, which is defined by

$$ \begin{equation*} D^\alpha x(t)=\frac1{(2\pi)^d} \int_{\mathbb R^d}(i\xi)^\alpha Fx(\xi) e^{i\langle\xi,t\rangle}\,d\xi, \end{equation*} \notag $$
where $Fx(\,{\cdot}\,)$ is the Fourier transform of $x(\,{\cdot}\,)$.
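As a numerical sanity check of this multiplier definition, consider the following sketch (the grid and the window are arbitrary choices of this illustration; for $\alpha=1$ the Weyl derivative must reduce to the ordinary derivative):

```python
import numpy as np

N, L = 4096, 40.0
t = (np.arange(N) - N // 2) * (L / N)
x = np.exp(-t**2 / 2)                       # a Gaussian, well inside the window
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
Fx = np.fft.fft(np.fft.ifftshift(x))
Dx = np.fft.fftshift(np.fft.ifft((1j * xi) ** 1.0 * Fx)).real
print(np.max(np.abs(Dx - (-t * x))))        # ~1e-12: matches (e^{-t^2/2})'
```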

Let $\alpha^0,\dots,\alpha^n\in\mathbb R_+^d$. Put

$$ \begin{equation*} I_j=D^{\alpha^j},\qquad j=0,\dots,n. \end{equation*} \notag $$
Denote by $X$ the set of measurable functions $x(\,{\cdot}\,)$ such that $\|D^{\alpha^j}x(\,{\cdot}\,)\|_{L_2(\mathbb R^d)}<\infty$, $j=1,\dots,n$. Consider problem (2.1) with $Y_0=\dots=Y_n=L_2(\mathbb R^d)$. Using the above notation for $p=2$, we get the following result.

Theorem 3. Let $\alpha^0\in\operatorname{co}\{\alpha^1,\dots,\alpha^n\}$. Then, for any $J\subset\{1,\dots,n\}$,

$$ \begin{equation*} E_J(I,\delta)=e^{-S(\alpha^0)}. \end{equation*} \notag $$
Moreover, each method
$$ \begin{equation*} \widehat{\varphi}(y)=\sum_{j\in\overline J\cap J_0}\Lambda_jy_j \end{equation*} \notag $$
is optimal for the corresponding optimal recovery problem. Here, $\Lambda_j\colon L_2(\mathbb R^d)\to L_2(\mathbb R^d)$, $j\in J_0$, are continuous linear operators which act in Fourier images by the rule $F\Lambda_jy_j(\,{\cdot}\,)=a_j(\,{\cdot}\,) Fy_j(\,{\cdot}\,)$, and the measurable functions $a_j(\,{\cdot}\,)$, $j\in J_0$, satisfy the conditions
$$ \begin{equation*} \sum_{j\in J_0}(i\xi)^{\alpha^j}a_j(\xi) =(i\xi)^{\alpha^0},\qquad \sum_{j\in J_0} \frac{|a_j(\xi)|^2}{\widehat{\lambda}_j} \leqslant1, \end{equation*} \notag $$
for almost all $\xi\in\mathbb R^d$.

Proof. Passing to the Fourier images and using the Parseval identity, we can rewrite the conditions
$$ \begin{equation*} \bigl\|D^{\alpha^j}x(\,{\cdot}\,)\bigr\|_{L_2(\mathbb R^d)}^2 \leqslant\delta_j^2,\qquad \bigl\|D^{\alpha^j}x(\,{\cdot}\,)-y_j(\,{\cdot}\,)\bigr\|_{L_2(\mathbb R^d)}^2 \leqslant\delta_j^2 \end{equation*} \notag $$
as
$$ \begin{equation*} \int_{\mathbb R^d}|\xi|^{2\alpha^j}|f(\xi)|^2\,d\xi\leqslant\delta_j^2,\qquad \int_{\mathbb R^d} \bigl|(i\xi)^{\alpha^j}f(\xi)-Y_j(\xi)\bigr|^2\,d\xi \leqslant\delta_j^2, \end{equation*} \notag $$
where
$$ \begin{equation*} f(\,{\cdot}\,)=\frac1{(2\pi)^{d/2}}Fx(\,{\cdot}\,),\qquad Y_j(\,{\cdot}\,)=\frac1{(2\pi)^{d/2}}Fy_j(\,{\cdot}\,). \end{equation*} \notag $$
For any recovery method $\varphi\colon(L_2(\mathbb R^d))^m\to L_2(\mathbb R^d)$, $m=\operatorname{card}\overline{J}$,
$$ \begin{equation*} \bigl\|D^{\alpha^0}x(\,{\cdot}\,)-\varphi(y)(\,{\cdot}\,)\bigr\|^2_{L_2(\mathbb R^d)} =\int_{\mathbb R^d} \bigl|(i\xi)^{\alpha^0}f(\xi)-\Phi(y)(\xi)\bigr|^2\, d\xi, \end{equation*} \notag $$
where
$$ \begin{equation*} \Phi(y)(\,{\cdot}\,)=\frac1{(2\pi)^{d/2}}F\varphi(y)(\,{\cdot}\,). \end{equation*} \notag $$
Thus, the problem under consideration is equivalent to that whose solution is given by Theorem 2 (for $p=2$).

Note that from Theorems 1 and 2 we have

$$ \begin{equation} \sup_{\|D^{\alpha^j}x(\,{\cdot}\,)\|_{L_2(\mathbb R^d)} \leqslant \delta_j,\, j=1,\dots,n} \bigl\|D^{\alpha^0}x(\,{\cdot}\,)\bigr\|_{L_2(\mathbb R^d)} =e^{-S(\alpha^0)}=\prod_{j\in J_0} \delta_j^{\theta_j}. \end{equation} \tag{3.9} $$
The extremal problem on the left-hand side of (3.9) is closely related to the problem of the sharp constant in the generalized Hardy–Littlewood–Pólya inequality, which in our case has the form
$$ \begin{equation*} \bigl\|D^{\alpha^0}x(\,{\cdot}\,)\bigr\|_{L_2(\mathbb R^d)} \leqslant\prod_{j\in J_0} \bigl\|D^{\alpha^j}x(\,{\cdot}\,)\bigr\|^{\theta_j}_{L_2(\mathbb R^d)} \end{equation*} \notag $$
(for more information, see [23]).

§ 4. Generalized heat equation on a sphere

We set

$$ \begin{equation*} \mathbb S^{d-1}=\{x\in\mathbb R^d\colon |x|=1\},\qquad d\geqslant2, \end{equation*} \notag $$
where $|x|=\sqrt{x_1^2+\dots+x_d^2}$. Given a function $Y$ on the unit sphere $\mathbb S^{d-1}$, the Laplace–Beltrami operator $\Delta_S$ is defined by
$$ \begin{equation*} \Delta_SY(x')=\Delta Y\biggl(\frac x{|x|}\biggr)\bigg|_{x=x'}, \end{equation*} \notag $$
where $\Delta$ is the Laplace operator. Denote by $\mathcal{H}_k$ the set of spherical harmonics of order $k$. It is known (see [24]) that $L_2(\mathbb S^{d-1})=\sum_{k=0}^\infty\mathcal{H}_k$, $\dim\mathcal{H}_0=a_0=1$, and
$$ \begin{equation*} \dim\mathcal{H}_k=a_k=(d+2k-2)\frac{(d+k-3)!}{(d-2)!\,k!},\qquad k=1,2,\dots\,. \end{equation*} \notag $$
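A quick cross-check of this dimension formula against the classical values on the circle and on $\mathbb S^2$ (a small sketch added for illustration):

```python
from math import factorial

def a(k, d):                 # dim H_k on the sphere S^{d-1}, formula above
    if k == 0:
        return 1
    return (d + 2 * k - 2) * factorial(d + k - 3) // (factorial(d - 2) * factorial(k))

assert [a(k, 3) for k in range(6)] == [1, 3, 5, 7, 9, 11]   # S^2: 2k+1
assert [a(k, 2) for k in range(6)] == [1, 2, 2, 2, 2, 2]    # circle: cos, sin
print([a(k, 4) for k in range(6)])                          # S^3: (k+1)^2
```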

Let $Y_j^{(k)}(\,{\cdot}\,)$, $j=1,\dots,a_k$, be an orthonormal basis for $\mathcal{H}_k$. For $\alpha>0$, the operator $(-\Delta_S)^{\alpha/2}$ is defined by

$$ \begin{equation*} (-\Delta_S)^{\alpha/2}Y(\,{\cdot}\,) =\sum_{k=1}^\infty\Lambda_k^{\alpha/2}\sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,), \end{equation*} \notag $$
where
$$ \begin{equation*} Y(\,{\cdot}\,)=\sum_{k=0}^\infty\sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,), \end{equation*} \notag $$
and $\Lambda_k=k(k+d-2)$ are the eigenvalues of the operator $-\Delta_S$.

Consider the problem of finding a solution of the equation

$$ \begin{equation} u_t+(-\Delta_S)^{\alpha/2}u=0 \end{equation} \tag{4.1} $$
with the initial condition
$$ \begin{equation*} u(\,{\cdot}\,,0)=f(\,{\cdot}\,), \end{equation*} \notag $$
where $f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$. If
$$ \begin{equation} f(\,{\cdot}\,)=\sum_{k=0}^\infty\sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,), \end{equation} \tag{4.2} $$
then a solution of this problem can be easily found by the Fourier method. Namely,
$$ \begin{equation*} u(x',t)=\sum_{k=0}^\infty e^{-\Lambda_k^{\alpha/2}t}\sum_{j=1}^{a_k}c_{kj} Y_j^{(k)}(x'). \end{equation*} \notag $$

Assume that the solution of the problem under consideration is approximately known at the times $t=0$ and $t=T$. It is required to recover the solution at an intermediate time $\tau$, $0<\tau<T$. For any function $f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$ with expansion (4.2) we put $I_1f(\,{\cdot}\,)=f(\,{\cdot}\,)$,

$$ \begin{equation*} \begin{aligned} \, I_0f(\,{\cdot}\,)&=\sum_{k=0}^\infty e^{-\Lambda_k^{\alpha/2}\tau} \sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,), \\ I_2f(\,{\cdot}\,)&=\sum_{k=0}^\infty e^{-\Lambda_k^{\alpha/2}T} \sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,). \end{aligned} \end{equation*} \notag $$
This reduces the problem to problem (2.1) with $X=Y_0=Y_1=Y_2=L_2(\mathbb S^{d-1})$, $p=2$ and $J=\varnothing$.

Theorem 4. Let $\delta_1/\delta_2\in\bigl[e^{\Lambda_m^{\alpha/2}T}, e^{\Lambda_{m+1}^{\alpha/2}T}\bigr]$ for some $m\in\mathbb Z_+$, and let $\alpha_{kj}$, $k=0,1,\dots$, $j=1,\dots,a_k$, satisfy

$$ \begin{equation} \frac{\bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\alpha_{kj}\bigr)^2} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}}+ \frac{\alpha_{kj}^2}{\lambda_2}\leqslant1, \end{equation} \tag{4.3} $$
where
$$ \begin{equation*} \lambda_1=\frac{e^{2\Lambda_{m+1}^{\alpha/2}(T-\tau)}-e^{2\Lambda_m^{\alpha/2}(T-\tau)}} {e^{2\Lambda_{m+1}^{\alpha/2}T}-e^{2\Lambda_m^{\alpha/2}T}},\qquad \lambda_2=\frac{e^{-2\Lambda_m^{\alpha/2}\tau}-e^{-2\Lambda_{m+1}^{\alpha/2}\tau}} {e^{-2\Lambda_m^{\alpha/2}T}-e^{-2\Lambda_{m+1}^{\alpha/2}T}}. \end{equation*} \notag $$
Then any method
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)(\,{\cdot}\,) =\sum_{k=0}^\infty\sum_{j=1}^{a_k}\bigl(e^{-\Lambda_k^{\alpha/2}T} \bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\alpha_{kj}\bigr)y_{kj}^{(1)} + \alpha_{kj}y_{kj}^{(2)}\bigr)Y_j^{(k)}(\,{\cdot}\,), \end{equation*} \notag $$
where
$$ \begin{equation*} y_s(\,{\cdot}\,)=\sum_{k=0}^\infty \sum_{j=1}^{a_k}y_{kj}^{(s)}Y_j^{(k)}(\,{\cdot}\,),\qquad s=1,2, \end{equation*} \notag $$
is optimal, and
$$ \begin{equation*} E_\varnothing(I,\delta)=\sqrt{\lambda_1\delta_1^2+\lambda_2\delta_2^2}. \end{equation*} \notag $$
If $\delta_1/\delta_2\in(0,1]$, then the method
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)(\,{\cdot}\,)=\sum_{k=0}^\infty e^{-\Lambda_k^{\alpha/2}\tau} \sum_{j=1}^{a_k}y_{kj}^{(1)}Y_j^{(k)}(\,{\cdot}\,) \end{equation*} \notag $$
is optimal, and $E_\varnothing(I,\delta)=\delta_1$.

Proof. Consider the extremal problem
$$ \begin{equation*} \|I_0f(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2\to\max,\qquad \|I_jf(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2\leqslant\delta_j^2,\quad j=1,2. \end{equation*} \notag $$
This problem can be written as
$$ \begin{equation} \sum_{k=0}^\infty e^{-2\Lambda_k^{\alpha/2}\tau}f_k^2\to\max,\qquad \sum_{k=0}^\infty f_k^2\leqslant\delta_1^2,\qquad \sum_{k=0}^\infty e^{-2\Lambda_k^{\alpha/2}T}f_k^2\leqslant\delta_2^2, \end{equation} \tag{4.4} $$
where
$$ \begin{equation*} f_k^2=\sum_{j=1}^{a_k}c_{kj}^2,\qquad k=0,1,\dots\,. \end{equation*} \notag $$
Let $\delta_1/\delta_2\in\bigl[e^{\Lambda_m^{\alpha/2}T}, e^{\Lambda_{m+1}^{\alpha/2}T}\bigr]$. Let $f_m$ and $f_{m+1}$ be defined from the conditions
$$ \begin{equation*} f_m^2+f_{m+1}^2=\delta_1^2,\qquad e^{-2\Lambda_m^{\alpha/2}T}f_m^2 +e^{-2\Lambda_{m+1}^{\alpha/2}T}f_{m+1}^2=\delta_2^2. \end{equation*} \notag $$
We have
$$ \begin{equation*} f_m^2=\frac{\delta_2^2-\delta_1^2e^{-2\Lambda_{m+1}^{\alpha/2}T}} {e^{-2\Lambda_m^{\alpha/2}T}-e^{-2\Lambda_{m+1}^{\alpha/2}T}},\qquad f_{m+1}^2 =\frac{\delta_1^2e^{-2\Lambda_m^{\alpha/2}T}-\delta_2^2} {e^{-2\Lambda_m^{\alpha/2}T}-e^{-2\Lambda_{m+1}^{\alpha/2}T}}. \end{equation*} \notag $$
The sequence $\{f_k\}$ in which $f_k=0$ for $k\ne m,m+1$, is admissible in the extremal problem (4.4). Therefore,
$$ \begin{equation*} \begin{aligned} \, \sup_{\substack{f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})\\ \|I_jf(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})} \leqslant\delta_j,\, j=1,2}}\|I_0f(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 &\geqslant e^{-2\Lambda_m^{\alpha/2}\tau}f_m^2+e^{-2\Lambda_{m+1}^{\alpha/2}\tau}f_{m+1}^2 \\ &=\lambda_1\delta_1^2+\lambda_2\delta_2^2. \end{aligned} \end{equation*} \notag $$
If $\delta_1/\delta_2\in(0,1]$, then the sequence $\{f_k\}$, in which $f_0=\delta_1$ and $f_k=0$ for $k\geqslant1$, is admissible in the extremal problem (4.4). Therefore, in this case
$$ \begin{equation*} \sup_{\substack{f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})\\\|I_jf(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})} \leqslant\delta_j,\, j=1,2}} \|I_0f(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2\geqslant f_0^2=\delta_1^2. \end{equation*} \notag $$

Let again $\delta_1/\delta_2\in\bigl[e^{\Lambda_m^{\alpha/2}T},e^{\Lambda_{m+1}^{\alpha/2}T}\bigr]$. Given a function $f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$ with expansion (4.2), let the operators $S_j\colon L_2(\mathbb S^{d-1})\to L_2(\mathbb S^{d-1})$, $j=1,2$, be defined by

$$ \begin{equation*} \begin{aligned} \, S_1f(\,{\cdot}\,)&=\sum_{k=0}^\infty\sum_{j=1}^{a_k}e^{-\Lambda_k^{\alpha/2}T} \bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\alpha_{kj}\bigr)c_{kj}Y_j^{(k)}(\,{\cdot}\,), \\ S_2f(\,{\cdot}\,)&=\sum_{k=0}^\infty\sum_{j=1}^{a_k}\alpha_{kj}c_{kj}Y_j^{(k)}(\,{\cdot}\,), \end{aligned} \end{equation*} \notag $$
where $\alpha_{kj}$ satisfy condition (4.3). It is easy to see that $I_0=S_1I_1+S_2I_2$. For $f_1(\,{\cdot}\,),f_2(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$, we have
$$ \begin{equation*} \|S_1f_1(\,{\cdot}\,)+S_2f_2(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 =\sum_{k=0}^\infty\sum_{j=1}^{a_k}\bigl(e^{-\Lambda_k^{\alpha/2}T} \bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\alpha_{kj}\bigr)f_{kj}^{(1)}+ \alpha_{kj}f_{kj}^{(2)}\bigr)^2, \end{equation*} \notag $$
where $f_{kj}^{(1)},f_{kj}^{(2)}$ are the Fourier coefficients of $f_1(\,{\cdot}\,),f_2(\,{\cdot}\,)$. From the Cauchy–Schwarz–Bunyakovskii inequality and (4.3), we get
$$ \begin{equation*} \begin{aligned} \, &\bigl(e^{-\Lambda_k^{\alpha/2}T}\bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)} -\alpha_{kj}\bigr)f_{kj}^{(1)}+\alpha_{kj}f_{kj}^{(2)}\bigr)^2 \\ &\qquad\leqslant\biggl(\frac{\bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\alpha_{kj}\bigr)^2} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}} + \frac{\alpha_{kj}^2}{\lambda_2}\biggr) \bigl(\lambda_1\bigl(f_{kj}^{(1)}\bigr)^2+\lambda_2\bigl(f_{kj}^{(2)}\bigr)^2\bigr) \\ &\qquad\leqslant\lambda_1\bigl(f_{kj}^{(1)}\bigr)^2+\lambda_2\bigl(f_{kj}^{(2)}\bigr)^2. \end{aligned} \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \begin{aligned} \, \|S_1f_1(\,{\cdot}\,)+S_2f_2(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 &\leqslant\sum_{k=0}^\infty\sum_{j=1}^{a_k} \bigl(\lambda_1\bigl(f_{kj}^{(1)}\bigr)^2 +\lambda_2\bigl(f_{kj}^{(2)}\bigr)^2\bigr) \\ &=\lambda_1\|f_1(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 +\lambda_2\|f_2(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2. \end{aligned} \end{equation*} \notag $$

We claim that there exist $\alpha_{kj}$, $k=0,1,\dots$, $j=1,\dots,a_k$, which satisfy (4.3). On the plane $(x,y)$, consider the set of points with coordinates

$$ \begin{equation*} x_k=e^{-2\Lambda_k^{\alpha/2}T},\quad y_k=e^{-2\Lambda_k^{\alpha/2}\tau},\qquad k=0,1,\dots\,. \end{equation*} \notag $$
This set lies on the concave curve $y=x^{\tau/T}$. The straight line through the points $(x_{m+1},y_{m+1})$ and $(x_m,y_m)$ has the equation $y=\lambda_1+\lambda_2x$. By concavity of the curve which contains the points under consideration, we have
$$ \begin{equation*} y_k\leqslant\lambda_1+\lambda_2x_k,\qquad k=0,1,\dots\,. \end{equation*} \notag $$
Consequently, for all $k=0,1,\dots$
$$ \begin{equation*} \frac{e^{-2\Lambda_k^{\alpha/2}\tau}} {\lambda_1+\lambda_2e^{-2\Lambda_k^{\alpha/2}T}}\leqslant1. \end{equation*} \notag $$
Setting
$$ \begin{equation*} \widehat{\alpha}_{kj} =\frac{\lambda_2e^{\Lambda_k^{\alpha/2}(T-\tau)}} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}+\lambda_2}, \end{equation*} \notag $$
we have
$$ \begin{equation*} \frac{\bigl(e^{\Lambda_k^{\alpha/2}(T-\tau)}-\widehat{\alpha}_{kj}\bigr)^2} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}}+ \frac{\widehat{\alpha}_{kj}^2}{\lambda_2} =\frac{e^{2\Lambda_k^{\alpha/2}(T-\tau)}} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}+\lambda_2} =\frac{e^{-2\Lambda_k^{\alpha/2}\tau}} {\lambda_1+\lambda_2e^{-2\Lambda_k^{\alpha/2}T}}\leqslant1. \end{equation*} \notag $$

For $\delta_1/\delta_2\in(0,1]$, we define $S_1=I_0$ and $S_2=0$. Hence

$$ \begin{equation*} \begin{aligned} \, \|S_1f_1(\,{\cdot}\,)+S_2f_2(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 &=\|I_0f_1(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2 =\sum_{k=0}^\infty e^{-2\Lambda_k^{\alpha/2}\tau}\sum_{j=1}^{a_k}\bigl(f_{kj}^{(1)}\bigr)^2 \\ &\leqslant\sum_{k=0}^\infty\sum_{j=1}^{a_k}\bigl(f_{kj}^{(1)}\bigr)^2 =\|f_1(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}^2. \end{aligned} \end{equation*} \notag $$
Now the result of the theorem follows from Theorem 1.

Condition (4.3) can be written in the following equivalent form:

$$ \begin{equation*} (\alpha_{kj}-\widehat{\alpha}_{kj})^2 \leqslant\lambda_1\lambda_2e^{4\Lambda_k^{\alpha/2}T} \frac{-e^{-2\Lambda_k^{\alpha/2}\tau}+\lambda_1+\lambda_2e^{-2\Lambda_k^{\alpha/2}T}} {\bigl(\lambda_1e^{2\Lambda_k^{\alpha/2}T}+\lambda_2\bigr)^2}. \end{equation*} \notag $$
Thus, if $\alpha_{kj}$ satisfies condition (4.3), then
$$ \begin{equation*} \alpha_{kj}=\widehat{\alpha}_{kj} +\theta_{kj}e^{2\Lambda_k^{\alpha/2}T}\sqrt{\lambda_1\lambda_2}\, \frac{\sqrt{-e^{-2\Lambda_k^{\alpha/2}\tau} +\lambda_1+\lambda_2e^{-2\Lambda_k^{\alpha/2}T}}} {\lambda_1e^{2\Lambda_k^{\alpha/2}T}+\lambda_2}, \end{equation*} \notag $$
where $|\theta_{kj}|\leqslant1$.
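The following Python sketch verifies condition (4.3) numerically for coefficients $\alpha_{kj}$ sampled from this family; the values of $d$, $\alpha$, $T$, $\tau$ and $m$ are arbitrary test choices:

```python
import numpy as np

d, alpha, T, tau, m = 3, 1.0, 1.0, 0.4, 5
L = lambda k: (k * (k + d - 2)) ** (alpha / 2)      # Lambda_k^{alpha/2}
e = np.exp
Lm, Lm1 = L(m), L(m + 1)
lam1 = (e(2 * Lm1 * (T - tau)) - e(2 * Lm * (T - tau))) / (e(2 * Lm1 * T) - e(2 * Lm * T))
lam2 = (e(-2 * Lm * tau) - e(-2 * Lm1 * tau)) / (e(-2 * Lm * T) - e(-2 * Lm1 * T))

rng = np.random.default_rng(2)
for k in range(26):
    Lk = L(k)
    hat = lam2 * e(Lk * (T - tau)) / (lam1 * e(2 * Lk * T) + lam2)
    slack = max(-e(-2 * Lk * tau) + lam1 + lam2 * e(-2 * Lk * T), 0.0)
    r = e(2 * Lk * T) * np.sqrt(lam1 * lam2 * slack) / (lam1 * e(2 * Lk * T) + lam2)
    a_kj = hat + rng.uniform(-1.0, 1.0) * r         # theta_kj in [-1, 1]
    lhs = (e(Lk * (T - tau)) - a_kj) ** 2 / (lam1 * e(2 * Lk * T)) + a_kj**2 / lam2
    assert lhs <= 1 + 1e-9, (k, lhs)
print("condition (4.3) holds for all sampled coefficients")
```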

If we consider the problem of optimal recovery of the solution at time $\tau$ from an inaccurately given solution at time $T>\tau$ on the class

$$ \begin{equation*} W=\{f(\,{\cdot}\,)\in L_2(\mathbb S^{d-1}) \colon \|f(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}\leqslant\delta_1\}, \end{equation*} \notag $$
then Theorem 1 (for $J=\{1\}$) implies that the methods $\widehat{\varphi}(0,y_2)(\,{\cdot}\,)$ are optimal. It turns out that this family of optimal methods contains a subfamily of optimal methods which are superior to the other methods in this family.

In order to specify this subfamily, we first formulate an extended version of the problem under consideration. Let $\mathcal F\subset L_2(\mathbb S^{d-1})$ be a given function class. We set

$$ \begin{equation*} \begin{aligned} \, e(\mathcal F,\delta,\varphi) &=\sup_{\substack{f(\,{\cdot}\,)\in\mathcal F,\, y(\,{\cdot}\,) \in L_2(\mathbb S^{d-1})\\\|u(\,{\cdot}\,,T)-y(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}\leqslant\delta}} \|u(\,{\cdot}\,,\tau)-\varphi(y)(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}, \\ E(\mathcal F,\delta) &=\inf_{\varphi\colon L_2(\mathbb S^{d-1})\to L_2(\mathbb S^{d-1})} e(\mathcal F,\delta,\varphi). \end{aligned} \end{equation*} \notag $$
The problem of finding the error of optimal recovery $E(\mathcal F,\delta)$ and the corresponding optimal method differs from the problem considered above only in that the class $\mathcal F$ is now arbitrary.

A method $\varphi(y)(\,{\cdot}\,)$ is said to be exact on a set $L\subset L_2(\mathbb S^{d-1})$ if $\varphi(u(\,{\cdot}\,,T))(\,{\cdot}\,)=u(\,{\cdot}\,,\tau)$ for all $f(\,{\cdot}\,)\in L$.

Proposition 1. If $\widehat{\varphi}(y)(\,{\cdot}\,)$ is an optimal method for a class $\mathcal F$, and if $\widehat{\varphi}(y)(\,{\cdot}\,)$ is linear and exact on a set $L\subset L_2(\mathbb S^{d-1})$ containing the origin, then $\widehat{\varphi}(y)(\,{\cdot}\,)$ is optimal on the class $\mathcal F+L$. Moreover,

$$ \begin{equation} E(\mathcal F,\delta)=E(\mathcal F+L,\delta). \end{equation} \tag{4.5} $$

Proof. Let $f(\,{\cdot}\,)\in\mathcal F+L$, $f(\,{\cdot}\,)=f_1(\,{\cdot}\,)+f_2(\,{\cdot}\,)$, where $f_1(\,{\cdot}\,) \in \mathcal F$, $f_2(\,{\cdot}\,) \in L$. Let $u_j(\,{\cdot}\,,{\cdot}\,)$ be the solution of equation (4.1) with initial function $f_j(\,{\cdot}\,)$, $j=1,2$. Let $y(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$ be such that $\|u(\,{\cdot}\,,T)-y(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}\leqslant\delta$. We set $y_1(\,{\cdot}\,)=y(\,{\cdot}\,)-u_2(\,{\cdot}\,,T)$. It is clear that $y_1(\,{\cdot}\,)\in L_2(\mathbb S^{d-1})$. Since $u_1(\,{\cdot}\,,T)-y_1(\,{\cdot}\,)=u(\,{\cdot}\,,T)-y(\,{\cdot}\,)$ we have
$$ \begin{equation} \|u_1(\,{\cdot}\,,T)-y_1(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}\leqslant\delta. \end{equation} \tag{4.6} $$
Since $\widehat{\varphi}(y)(\,{\cdot}\,)$ is linear and exact on $L$, we have
$$ \begin{equation} \|u(\,{\cdot}\,,\tau)-\widehat{\varphi}(y)(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})} =\|u_1(\,{\cdot}\,,\tau)-\widehat{\varphi}(y_1)(\,{\cdot}\,)\|_{L_2(\mathbb S^{d-1})}. \end{equation} \tag{4.7} $$
By (4.6), the right-hand side of (4.7) is majorized by $e(\mathcal F,\delta,\widehat{\varphi})$. In addition, $e(\mathcal F,\delta,\widehat{\varphi})= E(\mathcal F,\delta)$ since the method $\widehat{\varphi}(y)(\,{\cdot}\,)$ is optimal. Hence, taking the supremum on the left-hand side of (4.7) with respect to $f(\,{\cdot}\,)\in\mathcal F+L$ and the corresponding $y(\,{\cdot}\,)$, we get that
$$ \begin{equation*} e(\mathcal F+L,\delta,\widehat{\varphi})\leqslant E(\mathcal F,\delta). \end{equation*} \notag $$
Next, since $\mathcal F\subset\mathcal F+L$, we have
$$ \begin{equation*} E(\mathcal F,\delta)\leqslant E(\mathcal F+L,\delta) \leqslant e(\mathcal F+L,\delta,\widehat{\varphi}) \leqslant E(\mathcal F,\delta). \end{equation*} \notag $$
Consequently, $\widehat{\varphi}(y)(\,{\cdot}\,)$ is an optimal method for the class $\mathcal F+L$, and (4.5) holds. This proves Proposition 1.

Assume that $\delta_1/\delta_2\in [e^{\Lambda_m^{\alpha/2}T}, e^{\Lambda_{m+1}^{\alpha/2}T}]$. It is easy to show that $\lambda_2\geqslant1$ for sufficiently large $m$. Thus, if $\delta_1$ is fixed, then $\lambda_2\geqslant 1$ for sufficiently small $\delta_2$. In this case, we put

$$ \begin{equation*} \widehat{k}=\max\biggl\{k\in\mathbb Z_+\colon \Lambda_k \leqslant\biggl(\frac{\ln\lambda_2}{2(T-\tau)}\biggr)^{2/\alpha}\biggr\}. \end{equation*} \notag $$
It is easy to check that
$$ \begin{equation*} \widehat{k}=\Biggl[\sqrt{\frac{(d-2)^2}4 +\biggl(\frac{\ln\lambda_2}{2(T-\tau)}\biggr)^{2/\alpha}}- \frac{d-2}2\Biggr] \end{equation*} \notag $$
(where $[a]$ is the integer part of $a$).
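A numerical cross-check of this closed form against the definition of $\widehat{k}$ (all parameter values below are test choices of this sketch):

```python
import numpy as np

d, alpha, T, tau, lam2 = 3, 1.0, 1.0, 0.4, 455.0    # any lam2 > 1
B = (np.log(lam2) / (2 * (T - tau))) ** (2 / alpha)
k_def = max(k for k in range(10_000) if k * (k + d - 2) <= B)
k_closed = int(np.sqrt((d - 2) ** 2 / 4 + B) - (d - 2) / 2)
assert k_def == k_closed
print("hat-k =", k_def)
```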

Consider the methods

$$ \begin{equation*} \widehat{\varphi}_0(y)(\,{\cdot}\,) =\sum_{k=0}^{\widehat{k}}\sum_{j=1}^{a_k} e^{\Lambda_k^{\alpha/2}(T-\tau)}y_{kj}Y_j^{(k)}(\,{\cdot}\,)+ \sum_{k=\widehat{k}+1}^\infty \sum_{j=1}^{a_k}\alpha_{kj}y_{kj}Y_j^{(k)}(\,{\cdot}\,), \end{equation*} \notag $$
where $\alpha_{kj}$, $k=\widehat{k}+1,\widehat{k}+2,\dots$, $j=1,\dots,a_k$, satisfy condition (4.3). Since
$$ \begin{equation*} \alpha_{kj}=e^{\Lambda_k^{\alpha/2}(T-\tau)},\qquad k=0,\dots,\widehat{k},\quad j=1,\dots,a_k, \end{equation*} \notag $$
obey condition (4.3) (for $k\leqslant\widehat{k}$, the first term in (4.3) vanishes and the second equals $e^{2\Lambda_k^{\alpha/2}(T-\tau)}/\lambda_2\leqslant1$ by the definition of $\widehat{k}$), the methods $\widehat{\varphi}_0(y)(\,{\cdot}\,)$ are optimal on the class $W$.

Moreover, the methods $\widehat{\varphi}_0(y)(\,{\cdot}\,)$ are exact on the subspace

$$ \begin{equation*} L_{\widehat{k}}=\sum_{k=0}^{\widehat{k}}\mathcal{H}_k. \end{equation*} \notag $$
Indeed, let $f(\,{\cdot}\,)\in L_{\widehat{k}}$. Then
$$ \begin{equation*} f(\,{\cdot}\,)=\sum_{k=0}^{\widehat{k}} \sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,). \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} u(x',T)=\sum_{k=0}^{\widehat{k}} e^{-\Lambda_k^{\alpha/2}T} \sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(x'). \end{equation*} \notag $$
Consequently,
$$ \begin{equation*} \widehat{\varphi}_0(u(\,{\cdot}\,,T))(\,{\cdot}\,) =\sum_{k=0}^{\widehat{k}} e^{-\Lambda_k^{\alpha/2}\tau} \sum_{j=1}^{a_k}c_{kj}Y_j^{(k)}(\,{\cdot}\,)=u(\,{\cdot}\,,\tau). \end{equation*} \notag $$
Thus, it follows from Proposition 1 that the methods $\widehat{\varphi}_0(y)(\,{\cdot}\,)$ are not only optimal on the class $W$, but they are also optimal on the wider class $W+L_{\widehat{k}}$.

§ 5. Optimal recovery of solutions of difference equations

Let us consider the process of heat propagation in an infinite rod described by a discrete model, namely, by the implicit difference scheme

$$ \begin{equation} \frac{u_{s+1,j}-u_{sj}}\tau=\frac{u_{s+1,j+1}-2u_{s+1,j}+u_{s+1,j-1}}{h^2}. \end{equation} \tag{5.1} $$
Here $\tau$ and $h$ are positive numbers, $(s,j)\in\mathbb Z_+\times\mathbb Z$, and $u_{s,j}$ is the temperature of the rod at time $s\tau$ at the point $jh$.

Let $l_{2,h}$ be the set of vectors $x=\{x_j\}_{j\in\mathbb Z}$ such that

$$ \begin{equation*} \|x\|_{l_{2,h}}=\biggl(h\sum_{j\in\mathbb Z}|x_j|^2\biggr)^{1/2}<\infty,\qquad h>0. \end{equation*} \notag $$
Suppose that the temperature of the rod is approximately measured at time zero and at time $n\tau$, that is, the vectors $u_0=\{u_{0,j}\}$ and $u_n=\{u_{n,j}\}$ are approximately known. More precisely, we know vectors $y_1,y_2\in l_{2,h}$ such that
$$ \begin{equation*} \|u_0-y_1\|_{l_{2,h}}\leqslant\delta_1,\qquad \|u_n-y_2\|_{l_{2,h}}\leqslant\delta_2, \end{equation*} \notag $$
where $\delta_j>0$, $j=1,2$. From this information it is required to recover the vector $u_m=\{u_{m,j}\}$, where $0<m<n$, that is, to recover the value of the rod temperature at time $m\tau$.

Thus, we again come to problem (2.1) with $X=Y_0=Y_1=Y_2= l_{2,h}$, $p=2$, $J=\varnothing$, and with the operators $I_j\colon l_{2,h}\to l_{2,h}$, $j=0,1,2$, defined by

$$ \begin{equation*} I_0u_0=u_m,\qquad I_1u_0=u_0,\qquad I_2u_0=u_n. \end{equation*} \notag $$

The Fourier transform of a sequence $x=\{x_j\}_{j\in\mathbb Z}\in l_{2,h}$ is defined by

$$ \begin{equation*} Fx(\xi)=h\sum_{j\in\mathbb Z}x_je^{-ijh\xi}. \end{equation*} \notag $$
It is easy to verify that $Fx(\,{\cdot}\,)\in L_2([-\pi/h,\pi/h])$ and
$$ \begin{equation} \|Fx(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}=2\pi\|x\|^2_{l_{2,h}}. \end{equation} \tag{5.2} $$
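A numerical check of (5.2) for a finitely supported sequence (the value of $h$, the support and the quadrature grid are arbitrary choices of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
h = 0.3
j = np.arange(-50, 51)
x = np.where(np.abs(j) <= 10, rng.standard_normal(j.size), 0.0)

xi = np.linspace(-np.pi / h, np.pi / h, 20001)
Fx = h * np.exp(-1j * np.outer(xi, j * h)) @ x       # Fx(xi) = h sum_j x_j e^{-i j h xi}
lhs = np.sum(np.abs(Fx[:-1]) ** 2) * (xi[1] - xi[0]) # ||Fx||^2 via the periodic rule
rhs = 2 * np.pi * h * np.sum(x**2)                   # 2 pi ||x||_{l_{2,h}}^2
print(lhs, rhs)                                      # agree to machine accuracy
```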

Applying the Fourier transform to both sides of equality (5.1), we get

$$ \begin{equation*} h\sum_{j\in\mathbb Z}\frac{u_{s+1,j}-u_{sj}}\tau\, e^{-ijh\xi} =h\sum_{j\in\mathbb Z} \frac{u_{s+1,j+1}-2u_{s+1,j}+u_{s+1,j-1}}{h^2} \, e^{-ijh\xi}. \end{equation*} \notag $$
Hence
$$ \begin{equation*} \frac{U_{s+1}(\xi)-U_s(\xi)}\tau=\frac{e^{ih\xi}-2+e^{-ih\xi}}{h^2}U_{s+1}(\xi), \end{equation*} \notag $$
where
$$ \begin{equation*} U_s(\xi)=h\sum_{j\in\mathbb Z}u_{s,j}e^{-ijh\xi}. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} U_{s+1}(\xi)=\biggl(1+\frac{4\tau}{h^2}\sin^2\frac{h\xi}2\biggr)^{-1}U_s(\xi). \end{equation*} \notag $$
Consequently,
$$ \begin{equation*} U_s(\xi)=\Lambda^s(\xi)U_0(\xi),\qquad\Lambda(\xi)=\biggl(1+\frac{4\tau}{h^2} \sin^2\frac{h\xi}2\biggr)^{-1}. \end{equation*} \notag $$
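The same relation can be observed numerically: on a periodic grid (a finite stand-in for the infinite rod, an assumption of this sketch) one implicit step of (5.1) is a circulant linear solve whose Fourier symbol coincides with $\Lambda(\xi)$:

```python
import numpy as np

N, h, tau = 256, 0.1, 0.05
r = tau / h**2
u0 = np.random.default_rng(4).standard_normal(N)

c = np.zeros(N)                           # first column of the circulant matrix
c[0], c[1], c[-1] = 1 + 2 * r, -r, -r     # identity minus r * second difference
u1 = np.fft.ifft(np.fft.fft(u0) / np.fft.fft(c)).real   # one implicit step

xi = 2 * np.pi * np.fft.fftfreq(N, d=h)
Lam = 1 / (1 + 4 * r * np.sin(h * xi / 2) ** 2)
u1_symbol = np.fft.ifft(Lam * np.fft.fft(u0)).real
print(np.max(np.abs(u1 - u1_symbol)))     # ~1e-16: the two computations agree
```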

We put $a=(1+4\tau/h^2)^{-1}$, and define

$$ \begin{equation*} \begin{aligned} \, \lambda_1 &=\begin{cases} 0, &\dfrac{\delta_2}{\delta_1}\in(0,a^n], \\ \biggl(1-\dfrac mn\biggr)\biggl(\dfrac{\delta_2}{\delta_1}\biggr)^{2m/n}, &\dfrac{\delta_2}{\delta_1}\in(a^n,1), \\ 1, &\dfrac{\delta_2}{\delta_1}\in[1,+\infty), \end{cases} \\ \lambda_2 &= \begin{cases} a^{2(m-n)}, &\dfrac{\delta_2}{\delta_1}\in(0,a^n], \\ \dfrac mn\biggl(\dfrac{\delta_2}{\delta_1}\biggr)^{2(m/n-1)}, &\dfrac{\delta_2}{\delta_1}\in(a^n,1), \\ 0, &\dfrac{\delta_2}{\delta_1}\in[1,+\infty). \end{cases} \end{aligned} \end{equation*} \notag $$

Theorem 5. The following equality holds:

$$ \begin{equation*} E_\varnothing(I,\delta)=\sqrt{\lambda_1\delta_1^2+\lambda_2\delta_2^2}. \end{equation*} \notag $$
If $\alpha(\,{\cdot}\,)$ satisfies the condition
$$ \begin{equation} \Lambda^{2m}(\xi) \biggl(\frac{|1-\alpha(\xi)|^2}{\lambda_1} +\Lambda^{-2n}(\xi) \frac{|\alpha(\xi)|^2}{\lambda_2}\biggr)\leqslant1 \end{equation} \tag{5.3} $$
for $\delta_2/\delta_1\in(a^n,1)$, and satisfies the equality
$$ \begin{equation*} \alpha(\xi)=\begin{cases} 1, &\dfrac{\delta_2}{\delta_1}\in(0,a^n], \\ 0, &\dfrac{\delta_2}{\delta_1}\in[1,+\infty), \end{cases} \end{equation*} \notag $$
in other cases, then the methods
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)=F^{-1} \bigl(\Lambda^m(\,{\cdot}\,)(1-\alpha(\,{\cdot}\,))Fy_1(\,{\cdot}\,) +\Lambda^{m-n}(\,{\cdot}\,)\alpha(\,{\cdot}\,) Fy_2(\,{\cdot}\,)\bigr) \end{equation*} \notag $$
are optimal.

Proof. Consider the extremal problem
$$ \begin{equation*} \|u_m\|^2_{l_{2,h}}\to\max,\qquad \|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2,\qquad \|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2. \end{equation*} \notag $$
Passing to the Fourier images, we have the problem
$$ \begin{equation} \begin{gathered} \, \frac1{2\pi}\|\Lambda^m(\,{\cdot}\,) U_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}\to\max, \qquad \frac1{2\pi}\|U_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}\leqslant\delta_1^2, \\ \frac1{2\pi}\|\Lambda^n(\,{\cdot}\,) U_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}\leqslant\delta_2^2. \end{gathered} \end{equation} \tag{5.4} $$

Assume that $\delta_2/\delta_1\in(a^n,1)$. For $\xi\in[0,\pi/h]$, the function $\Lambda(\xi)$ is monotone decreasing from $1$ to $a$. Therefore, there is $\widehat{\xi}\in(0,\pi/h)$ such that $\Lambda^n(\widehat{\xi})=\delta_2/\delta_1$. For sufficiently small $\varepsilon>0$, we put

$$ \begin{equation*} \widehat{U}_0(\xi)=\begin{cases} \sqrt{\dfrac{2\pi}\varepsilon}\,\delta_1, &\xi\in(\widehat{\xi},\widehat{\xi}+\varepsilon), \\ 0, &\xi\notin(\widehat{\xi},\widehat{\xi}+\varepsilon). \end{cases} \end{equation*} \notag $$
We have
$$ \begin{equation*} \frac1{2\pi}\|\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\delta_1^2, \end{equation*} \notag $$
and
$$ \begin{equation*} \frac1{2\pi}\|\Lambda^n(\,{\cdot}\,) \widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\frac{\delta_1^2}\varepsilon\int_{\widehat{\xi}}^{\widehat{\xi}+\varepsilon} \Lambda^{2n}(\xi)\,d\xi\leqslant\delta_1^2\Lambda^{2n}(\widehat{\xi})=\delta_2^2. \end{equation*} \notag $$
Thus, $\widehat{U}_0(\,{\cdot}\,)$ is admissible in problem (5.4). Consequently,
$$ \begin{equation*} \begin{aligned} \, \sup_{\substack{u_0\in l_{2,h}\\\|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2\\\|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}} &\geqslant\frac1{2\pi} \|\Lambda^m(\,{\cdot}\,)\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\frac{\delta_1^2} \varepsilon \int_{\widehat{\xi}}^{\widehat{\xi}+\varepsilon} \Lambda^{2m}(\xi)\,d\xi \\ &=\delta_1^2\Lambda^{2m}(c), \end{aligned} \end{equation*} \notag $$
where $c\in[\widehat{\xi},\widehat{\xi}+\varepsilon]$. Letting $\varepsilon\to0$, we obtain
$$ \begin{equation*} \sup_{\substack{u_0\in l_{2,h}\\\|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2\\\|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}}\geqslant\delta_1^2\Lambda^{2m}(\widehat{\xi}) =\delta_1^{2(1-m/n)}\delta_2^{2m/n} =\lambda_1\delta_1^2+\lambda_2\delta_2^2. \end{equation*} \notag $$

Assume that $\delta_2/\delta_1\in(0,a^n]$. For sufficiently small $\varepsilon>0$, we put

$$ \begin{equation*} \widehat{U}_0(\xi)=\begin{cases} \sqrt{\dfrac{2\pi}\varepsilon}\, \dfrac{\delta_2}{\Lambda^n(\xi)}, &\xi\in\biggl(\dfrac{\pi}{h}-\varepsilon,\dfrac{\pi}{h}\biggr], \\ 0, &\xi\notin\biggl(\dfrac{\pi}{h}-\varepsilon,\dfrac{\pi}{h}\biggr]. \end{cases} \end{equation*} \notag $$
Hence
$$ \begin{equation*} \frac1{2\pi}\|\Lambda^n(\,{\cdot}\,) \widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}=\delta_2^2 \end{equation*} \notag $$
and
$$ \begin{equation*} \frac1{2\pi}\|\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\frac{\delta_2^2}\varepsilon\int_{\pi/h-\varepsilon}^{\pi/h} \Lambda^{-2n}(\xi)\,d\xi\leqslant\delta_2^2a^{-2n}\leqslant\delta_1^2. \end{equation*} \notag $$
Thus, the function $\widehat{U}_0(\,{\cdot}\,)$ is admissible in problem (5.4). Consequently,
$$ \begin{equation*} \begin{aligned} \, \sup_{\substack{u_0\in l_{2,h}\\\|u_0\|^2_{l_{2,h}} \leqslant\delta_1^2\\\|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}} &\geqslant\frac1{2\pi}\|\Lambda^m(\,{\cdot}\,)\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} \\ &=\frac{\delta_2^2}\varepsilon\int_{\pi/h-\varepsilon}^{\pi/h}\Lambda^{2(m-n)}(\xi)\,d\xi =\delta_2^2\Lambda^{2(m-n)}(c), \end{aligned} \end{equation*} \notag $$
where $c\in[\pi/h-\varepsilon,\pi/h]$. Letting $\varepsilon\to0$, we obtain
$$ \begin{equation*} \sup_{\substack{u_0\in l_{2,h}\\\|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2\\\|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}}\geqslant\delta_2^2a^{2(m-n)}=\lambda_2\delta_2^2. \end{equation*} \notag $$

If, finally, $\delta_2/\delta_1\in[1,+\infty)$, then, for sufficiently small $\varepsilon>0$, we put

$$ \begin{equation*} \widehat{U}_0(\xi)=\begin{cases} \sqrt{\dfrac{2\pi}\varepsilon}\delta_1, &\xi\in(0,\varepsilon), \\ 0, &\xi\notin(0,\varepsilon). \end{cases} \end{equation*} \notag $$
Hence
$$ \begin{equation*} \frac1{2\pi}\|\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])}=\delta_1^2 \end{equation*} \notag $$
and
$$ \begin{equation*} \frac1{2\pi}\|\Lambda^n(\,{\cdot}\,) \widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\frac{\delta_1^2}\varepsilon\int_0^\varepsilon \Lambda^{2n}(\xi)\,d\xi\leqslant\delta_1^2\leqslant\delta_2^2. \end{equation*} \notag $$
Thus, the function $\widehat{U}_0(\,{\cdot}\,)$ is admissible in problem (5.4). Consequently,
$$ \begin{equation*} \sup_{\substack{u_0\in l_{2,h} \\ \|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2 \\ \|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}} \geqslant \frac1{2\pi} \|\Lambda^m(\,{\cdot}\,)\widehat{U}_0(\,{\cdot}\,)\|^2_{L_2([-\pi/h,\pi/h])} =\frac{\delta_1^2}\varepsilon\int_0^\varepsilon\Lambda^{2m}(\xi)\,d\xi =\delta_1^2\Lambda^{2m}(c), \end{equation*} \notag $$
where $c\in[0,\varepsilon]$. Letting $\varepsilon\to0$, we obtain
$$ \begin{equation*} \sup_{\substack{u_0\in l_{2,h}\\\|u_0\|^2_{l_{2,h}}\leqslant\delta_1^2\\\|u_n\|^2_{l_{2,h}}\leqslant\delta_2^2}} \|u_m\|^2_{l_{2,h}}\geqslant\delta_1^2. \end{equation*} \notag $$

Now let us consider estimate (2.3). Let $\delta_2/\delta_1\in(a^n,1)$. Let us define operators $S_j\colon l_{2,h}\to l_{2,h}$, $j=1,2$, so as to have

$$ \begin{equation*} F(S_1u)(\,{\cdot}\,)=\Lambda^m(\,{\cdot}\,)(1-\alpha(\,{\cdot}\,))Fu(\,{\cdot}\,),\qquad F(S_2u)(\,{\cdot}\,)=\Lambda^{m-n}(\,{\cdot}\,)\alpha(\,{\cdot}\,) Fu(\,{\cdot}\,). \end{equation*} \notag $$
It is easy to verify that, for all $u_0\in l_{2,h}$,
$$ \begin{equation*} F((I_0-S_1I_1-S_2I_2)u)(\,{\cdot}\,)\equiv0. \end{equation*} \notag $$
Therefore, $I_0=S_1I_1+S_2I_2$. In view of (5.2) we get
$$ \begin{equation*} \|S_1z_1+S_2z_2\|_{l_{2,h}}^2=\frac1{2\pi}\int_{-\pi/h}^{\pi/h}\Lambda^{2m}(\xi) |(1-\alpha(\xi))Fz_1(\xi)+\Lambda^{-n}(\xi)\alpha(\xi)Fz_2(\xi)|^2\,d\xi. \end{equation*} \notag $$
It follows from the Cauchy–Schwarz–Bunyakovskii inequality that
$$ \begin{equation*} \Lambda^{2m}(\xi)|(1-\alpha(\xi))Fz_1(\xi)+\Lambda^{-n}(\xi) \alpha(\xi)Fz_2(\xi)|^2 \leqslant\Omega(\xi)(\lambda_1|Fz_1(\xi)|^2+\lambda_2|Fz_2(\xi)|^2), \end{equation*} \notag $$
where
$$ \begin{equation*} \Omega(\xi)=\Lambda^{2m}(\xi)\biggl(\frac{|1-\alpha(\xi)|^2} {\lambda_1}+\Lambda^{-2n}(\xi)\frac{|\alpha(\xi)|^2}{\lambda_2}\biggr). \end{equation*} \notag $$
By (5.3), we have
$$ \begin{equation*} \begin{aligned} \, \|S_1z_1+S_2z_2\|_{l_{2,h}}^2 &\leqslant\frac1{2\pi}\int_{-\pi/h}^{\pi/h}\bigl(\lambda_1|Fz_1(\xi)|^2+ \lambda_2|Fz_2(\xi)|^2\bigr)\,d\xi \\ &=\lambda_1\|z_1\|^2_{l_{2,h}}+\lambda_2\|z_2\|^2_{l_{2,h}}. \end{aligned} \end{equation*} \notag $$
Theorem 1 shows that, in the case under consideration, the methods
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)=S_1y_1+S_2y_2 \end{equation*} \notag $$
are optimal, and
$$ \begin{equation*} E_\varnothing(I,\delta)=\sqrt{\lambda_1\delta_1^2+\lambda_2\delta_2^2}. \end{equation*} \notag $$

Now assume that $\delta_2/\delta_1\in(0,a^n]$. Let $S_2\colon l_{2,h}\to l_{2,h}$ be an operator such that

$$ \begin{equation*} F(S_2u)(\,{\cdot}\,)=\Lambda^{m-n}(\,{\cdot}\,) Fu(\,{\cdot}\,). \end{equation*} \notag $$
Since
$$ \begin{equation*} F((I_0-S_2I_2)u_0)(\xi)\equiv0, \end{equation*} \notag $$
we have $I_0=S_2I_2$. Moreover,
$$ \begin{equation*} \|S_2z_2\|_{l_{2,h}}^2 =\frac1{2\pi}\int_{-\pi/h}^{\pi/h}\Lambda^{2(m-n)}(\xi)|Fz_2(\xi)|^2\,d\xi \leqslant a^{2(m-n)}\|z_2\|^2_{l_{2,h}}. \end{equation*} \notag $$
Theorem 1 implies that, in the case under consideration, the method
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)=S_2y_2 \end{equation*} \notag $$
is optimal, and
$$ \begin{equation*} E_\varnothing(I,\delta)=a^{m-n}\delta_2. \end{equation*} \notag $$

Finally, for $\delta_2\geqslant\delta_1$, consider an operator $S_1\colon l_{2,h}\to l_{2,h}$ such that

$$ \begin{equation*} F(S_1u)(\,{\cdot}\,)=\Lambda^m(\,{\cdot}\,) Fu(\,{\cdot}\,). \end{equation*} \notag $$
We have $I_0=S_1I_1$, and
$$ \begin{equation*} \|S_1z_1\|_{l_{2,h}}^2 =\frac1{2\pi}\int_{-\pi/h}^{\pi/h}\Lambda^{2m}(\xi)|Fz_1(\xi)|^2\,d\xi \leqslant\|z_1\|^2_{l_{2,h}}. \end{equation*} \notag $$
It follows from Theorem 1 that the method
$$ \begin{equation*} \widehat{\varphi}(y_1,y_2)=S_1y_1 \end{equation*} \notag $$
is optimal, and
$$ \begin{equation*} E_\varnothing(I,\delta)=\delta_1. \end{equation*} \notag $$

We claim that, for $\delta_2/\delta_1\in(a^n,1)$, the set of functions $\alpha(\,{\cdot}\,)$ satisfying condition (5.3) is non-empty. Consider the concave function

$$ \begin{equation} y=x^{m/n},\qquad x\geqslant0. \end{equation} \tag{5.5} $$
The tangent to the graph of this function at $x_0>0$ has the equation $y=\widehat{\lambda}_1+\widehat{\lambda}_2x$, where
$$ \begin{equation*} \widehat{\lambda}_1 =\biggl(1-\frac mn\biggr)x_0^{m/n},\qquad \widehat{\lambda}_2=\frac mnx_0^{m/n-1}. \end{equation*} \notag $$
By concavity of curve (5.5), we have, for all $x\geqslant0$,
$$ \begin{equation*} x^{m/n}\leqslant\widehat{\lambda}_1+\widehat{\lambda}_2x. \end{equation*} \notag $$

Putting

$$ \begin{equation*} x=\Lambda^{2n}(\xi),\qquad x_0=\biggl(\frac{\delta_2}{\delta_1}\biggr)^2, \end{equation*} \notag $$
we have $\widehat{\lambda}_j=\lambda_j$, $j=1,2$, and, for all $\xi\in[-\pi/h,\pi/h]$,
$$ \begin{equation*} \Lambda^{2m}(\xi)\leqslant\lambda_1+\lambda_2\Lambda^{2n}(\xi). \end{equation*} \notag $$
Hence
$$ \begin{equation*} \frac{\Lambda^{2m}(\xi)}{\lambda_1+\lambda_2\Lambda^{2n}(\xi)}\leqslant1. \end{equation*} \notag $$
Putting
$$ \begin{equation*} \alpha(\xi)=\frac{\lambda_2\Lambda^{2n}(\xi)}{\lambda_1+\lambda_2\Lambda^{2n}(\xi)}, \end{equation*} \notag $$
we obtain
$$ \begin{equation*} \Lambda^{2m}(\xi)\biggl(\frac{|1-\alpha(\xi)|^2}{\lambda_1} +\Lambda^{-2n}(\xi)\frac{|\alpha(\xi)|^2}{\lambda_2}\biggr) =\frac{\Lambda^{2m}(\xi)}{\lambda_1+\lambda_2\Lambda^{2n}(\xi)}\leqslant1. \end{equation*} \notag $$
This proves Theorem 5.
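The following Python sketch assembles $\lambda_1$, $\lambda_2$ and the function $\alpha(\,{\cdot}\,)$ constructed above for test parameter values with $\delta_2/\delta_1\in(a^n,1)$, and confirms condition (5.3) and the value of the optimal recovery error on a grid:

```python
import numpy as np

h, tau, n, m = 0.2, 0.01, 10, 4
a = 1 / (1 + 4 * tau / h**2)
d1 = 1.0
d2 = d1 * a ** (n / 2)                    # a ratio strictly inside (a^n, 1)
lam1 = (1 - m / n) * (d2 / d1) ** (2 * m / n)
lam2 = (m / n) * (d2 / d1) ** (2 * (m / n - 1))

xi = np.linspace(-np.pi / h, np.pi / h, 4001)
Lam = 1 / (1 + 4 * tau / h**2 * np.sin(h * xi / 2) ** 2)
alpha = lam2 * Lam ** (2 * n) / (lam1 + lam2 * Lam ** (2 * n))
lhs = Lam ** (2 * m) * ((1 - alpha) ** 2 / lam1 + Lam ** (-2 * n) * alpha**2 / lam2)
print("max of the left-hand side of (5.3):", lhs.max())   # <= 1
print("E =", np.sqrt(lam1 * d1**2 + lam2 * d2**2),
      "=", d1 ** (1 - m / n) * d2 ** (m / n))
```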

Consider the problem of optimal recovery of the solution at time $m\tau$ from an inaccurately given solution at time $n\tau$ on the class

$$ \begin{equation*} W=\{u_0\in l_{2,h}\colon\|u_0\|_{l_{2,h}}\leqslant\delta_1\}. \end{equation*} \notag $$
Theorem 1 implies that the methods $\widehat{\varphi}(0,y_2)(\,{\cdot}\,)$ are optimal for this problem.

Note that, for the continuous model of heat propagation, the result obtained in [25] for $t_1=0$, $t_2=T$ ($n=2$) and an intermediate point $\tau_0$ at which the temperature distribution is to be recovered coincides, in the one-dimensional case, with the limit, as $h\to0$ and $\tau\to0$, of the recovery error and of one of the methods constructed in Theorem 5 (in this case, we should put $a=0$).

We also note that a similar problem for heat propagation on a circle was considered in [22].


Bibliography

1. S. A. Smolyak, Optimal recovery of functions and functionals of functions, Kandidat Thesis, Moscow State University, Moscow, 1965 (Russian)
2. S. M. Nikol'skii, “Concerning estimation for approximate quadrature formulas”, Uspekhi Mat. Nauk, 5:2(36) (1950), 165–177 (Russian)
3. C. A. Micchelli and T. J. Rivlin, “A survey of optimal recovery”, Optimal estimation in approximation theory (Freudenstadt 1976), Plenum, New York, 1977, 1–54
4. A. A. Melkman and C. A. Micchelli, “Optimal estimation of linear operators in Hilbert spaces from inaccurate data”, SIAM J. Numer. Anal., 16:1 (1979), 87–105
5. J. F. Traub and H. Woźniakowski, A general theory of optimal algorithms, ACM Monogr. Ser., Academic Press, Inc., New York–London, 1980
6. V. V. Arestov, “Optimal recovery of operators and related problems”, Proc. Steklov Inst. Math., 189:4 (1990), 1–20
7. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Optimal recovery of functionals based on inaccurate data”, Math. Notes, 50:6 (1991), 1274–1279
8. L. Plaskota, Noisy information and computational complexity, Cambridge Univ. Press, Cambridge, 1996
9. K. Yu. Osipenko, Optimal recovery of analytic functions, Nova Science Publ., Inc., Huntington, NY, 2000
10. G. G. Magaril-Il'yaev and V. M. Tikhomirov, Convex analysis: theory and applications, rev. by the authors, Transl. Math. Monogr., 222, Amer. Math. Soc., Providence, RI, 2003
11. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Optimal recovery of functions and their derivatives from Fourier coefficients prescribed with an error”, Sb. Math., 193:3 (2002), 387–407
12. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Optimal recovery of functions and their derivatives from inaccurate information about the spectrum and inequalities for derivatives”, Funct. Anal. Appl., 37:3 (2003), 203–214
13. K. Yu. Osipenko, “The Hardy–Littlewood–Pólya inequality for analytic functions in Hardy–Sobolev spaces”, Sb. Math., 197:3 (2006), 315–334
14. K. Yu. Osipenko, “Optimal recovery of linear operators in non-Euclidean metrics”, Sb. Math., 205:10 (2014), 1442–1472
15. K. Yu. Osipenko, “Optimal recovery of operators and multidimensional Carlson type inequalities”, J. Complexity, 32:1 (2016), 53–73
16. V. V. Arestov, “Best uniform approximation of the differentiation operator by operators bounded in the space $L_2$”, Proc. Steklov Inst. Math., 308, Suppl. 1 (2020), 9–30
17. V. V. Arestov, “Best approximation of a differentiation operator on the set of smooth functions with exactly or approximately given Fourier transform”, Mathematical optimization theory and operations research (MOTOR 2019), Lecture Notes in Comput. Sci., 11548, Springer, Cham, 2019, 434–448
18. V. Arestov, “Uniform approximation of differentiation operators by bounded linear operators in the space $L_r$”, Anal. Math., 46:3 (2020), 425–445
19. K. Yu. Osipenko, “Optimal recovery in weighted spaces with homogeneous weights”, Sb. Math., 213:3 (2022), 385–411
20. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “On optimal harmonic synthesis from inaccurate spectral data”, Funct. Anal. Appl., 44:3 (2010), 223–225
21. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Hardy–Littlewood–Paley inequality and recovery of derivatives from inaccurate data”, Dokl. Math., 83:3 (2011), 337–339
22. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “On optimal recovery of solutions to difference equations from inaccurate data”, J. Math. Sci. (N.Y.), 189:4 (2013), 596–603
23. G. G. Magaril-Il'yaev and V. M. Tikhomirov, “Kolmogorov-type inequalities for derivatives”, Sb. Math., 188:12 (1997), 1799–1832
24. E. M. Stein and G. Weiss, Introduction to Fourier analysis on Euclidean spaces, Princeton Math. Ser., 32, Princeton Univ. Press, Princeton, NJ, 1971
25. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Optimal recovery of the solution of the heat equation from inaccurate data”, Sb. Math., 200:5 (2009), 665–682

Citation: K. Yu. Osipenko, “On the construction of families of optimal recovery methods for linear operators”, Izv. RAN. Ser. Mat., 88:1 (2024), 98–120; Izv. Math., 88:1 (2024), 92–113