Sbornik: Mathematics, 2023, Volume 214, Issue 5, Pages 676–702
DOI: https://doi.org/10.4213/sm9802e
(Mi sm9802)
 

Levinson-type theorem and Dyn'kin problems

A. M. Gaisin, R. A. Gaisin

Institute of Mathematics with Computing Centre, Ufa Federal Research Centre of the Russian Academy of Sciences, Ufa, Russia
Abstract: Questions relating to theorems of Levinson-Sjöberg-Wolf type in complex and harmonic analysis are explored. We discuss the well-known Dyn'kin problem of an effective estimate for the growth majorant of an analytic function in a neighbourhood of its set of singularities, together with the problem, dual to it in a certain sense, concerning the rate of convergence to zero of the extremal function in a nonquasianalytic Carleman class in a neighbourhood of a point at which all the derivatives of functions in this class vanish.
The first problem was solved by Matsaev and Sodin. Here the second Dyn'kin problem, going back to Bang, is fully solved. As an application, a sharp asymptotic estimate is given for the distance between the imaginary exponentials and the algebraic polynomials in a weighted space of continuous functions on the real line.
Bibliography: 24 titles.
Keywords: nonquasianalytic Carleman class, theorems of Levinson-Sjöberg-Wolf type, extremal function, Fourier transform, weighted space on the real line.
Received: 11.06.2022 and 22.12.2022
Russian version:
Matematicheskii Sbornik, 2023, Volume 214, Number 5, Pages 69–96
DOI: https://doi.org/10.4213/sm9802
Document Type: Article
MSC: 30D60
Language: English
Original paper language: Russian

§ 1. Introduction

In 1938 Levinson proved the following result (see [1], Ch. VIII, Theorem XLIII), which is a “far-reaching generalization of the principle of the maximum of the modulus for analytic functions” (see [2]).

Theorem 1 (Levinson). Let $M(y)$ be a positive monotonically decreasing function on a half-open interval $(0,b]$ such that $M(y)\uparrow\infty$ as $y\downarrow0$ and $M(b)=e$. Also, let $F_M$ be the family of analytic functions in the rectangle

$$ \begin{equation*} Q=\{z=x+iy\colon |x|<a,\ |y|<b\} \end{equation*} \notag $$
that have the estimate $|F(z)|\leqslant M(|y|)$ in $Q$. If
$$ \begin{equation} \int_{0}^{b} \log\log M(y)\,dy<\infty, \end{equation} \tag{1.1} $$
then for each $\delta>0$ there exists a constant $C$ depending only on $\delta$ and $M(y)$ such that for all functions $f\in F_M$ the estimate $|f(z)|\leqslant C$ holds in the rectangle
$$ \begin{equation*} P_{\delta}=\{z=x+iy\colon |x|<a-\delta,\ |y|<b\}. \end{equation*} \notag $$
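For orientation, a simple family of admissible majorants (our illustration, not taken from [1]; the normalization $M(b)=e$ is ignored) is
$$ \begin{equation*} M(y)=\exp\exp\biggl(\frac{1}{y^{\alpha}}\biggr), \qquad 0<\alpha<1, \quad 0<y\leqslant b, \end{equation*} \notag $$
for which $\log\log M(y)=y^{-\alpha}$, so that the integral in (1.1) equals $b^{1-\alpha}/(1-\alpha)<\infty$; for $\alpha\geqslant1$ the same majorant violates (1.1).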

Note that, independently of Levinson and apparently at the same time, this result, in a slightly different form, was established by Sjöberg [3]. On the other hand, long before this, Carleman [4] proved the following.

Theorem 2 (Carleman). Let $M(\varphi)$ be a positive function on $(0,2 \pi)$ such that $\log M(\varphi)>1$ and the integral

$$ \begin{equation*} \int_{0}^{2 \pi} \log\log M(\varphi)\, d\varphi \end{equation*} \notag $$
is convergent. Then each entire function $f(z)$ satisfying
$$ \begin{equation*} |f(z)|\leqslant M(\varphi),\quad\textit{where } \varphi=\arg z, \ 0<\varphi<2 \pi, \end{equation*} \notag $$
is a constant: $f(z)\equiv \mathrm{const}$.

It was precisely this result of Carleman's that Levinson and Sjöberg subsequently developed, extending it to the most general case. Note, however, that Carleman's theorem holds without additional assumptions on the majorant $M(\varphi)$. Subsequently, Wolf [5] extended the Levinson-Sjöberg theorem to a wide class of functions. Another, simpler proof of Theorem 1 was proposed in [2].

We present a version of this theorem (see [6] and [7]).

Theorem 3 (Domar). Let $D=\{z=x+iy\colon -a<x<a,\ 0<y<b\}$ and let $M(y)$ be a Lebesgue-measurable function such that $M(y)\geqslant e$ for $0<y<b$. If the integral in (1.1) is convergent, then there exists a decreasing function $m(\delta)$, which is finite for $\delta>0$ and depends only on $M(y)$, such that if $f(z)$ is analytic in $D$ and

$$ \begin{equation} |f(z)|\leqslant M(\operatorname{Im} z), \end{equation} \tag{1.2} $$
then
$$ \begin{equation*} |f(z)|\leqslant m(\operatorname{dist}(z, \partial D)), \qquad z\in D. \end{equation*} \notag $$

Corollary. Let $J=\{f\}$ be the family of analytic functions in $D$ satisfying (1.2). If the integral in (1.1) is convergent, then the family of functions $J$ is normal (that is, relatively compact).

As Koosis showed, condition (1.1), under which Levinson’s theorem holds, is also necessary (see [6]): if the integral in (1.1) is divergent, then there exists a sequence of polynomials $P_n(z)$ such that

1) $|P_n(z)|\leqslant K M(|y|)$, $K=\mathrm{const}$, for $n\geqslant1$ and all $z$ in the rectangle

$$ \begin{equation*} Q=\{z=x+iy\colon |x|<a,\ |y|<b\}; \end{equation*} \notag $$

2) as $n\to\infty$,

$$ \begin{equation*} P_n(z)\to F(z)= \begin{cases} 1 &\text{for }z\in Q\cap\mathbb{C}_{+}, \\ -1 &\text{for }z\in Q\cap\mathbb{C}_{-}; \end{cases} \end{equation*} \notag $$
here $\mathbb{C}_{+}=\{z=x+iy\colon y>0\}$ and $\mathbb{C}_{-}=\{z=x+iy\colon y<0\}$.

Note that under some additional assumptions about the behaviour of $M(y)$ a similar result was proved by Levinson in [1]. On the other hand, it was shown in [7] that in Levinson’s theorem one can replace the monotonicity of $M(y)$ by its Lebesgue measurability. The following version of a Levinson-type result was presented in [8] (see also [9]).

Theorem 4 (Carleman, Levinson, Sjöberg, Wolf, Beurling and Domar). Let $M\colon (0,1]\to [e,+\infty)$ be a decreasing continuous function and let $f$ be an analytic function in the strip

$$ \begin{equation*} S_{(-1,1)}=\{z\in\mathbb{C}\colon -1<\operatorname{Im} z<1\} \end{equation*} \notag $$
that satisfies the estimate
$$ \begin{equation} |f(z)|\leqslant M(|{\operatorname{Im} z}|), \qquad z\in S_{(-1,1)}. \end{equation} \tag{1.3} $$
If, in addition,
$$ \begin{equation} \int_{0}^{1} \log\log M(t)\,dt<\infty, \end{equation} \tag{1.4} $$
then $f$ is bounded in $S_{(-1,1)}$. On the other hand, if the integral in (1.4) is divergent, then there exists an analytic function $f$ satisfying (1.3) that is unbounded in $S_{(-1,1)}$.

The sufficiency part of this result follows from Levinson’s theorem. In fact, setting $a=b=1$ in Theorem 1, it suffices to consider the family of functions $\{f_{n}(z)\}$ such that $f_{n}(z)=f(z+n)$, $z\in Q$, $n\in\mathbb{Z}$.

In [10] Levinson’s theorem was generalized to the case when, in place of the real interval $[-a,a]$, we have some rectifiable arc $\gamma$ or, more precisely, an arc with bounded slope.

Recall the definition (also see [10]): an arc $\gamma$ with equation $y=g(x)$, $|x|<a$, that satisfies the Lipschitz condition

$$ \begin{equation*} \sup_{x_1\neq x_2} \biggl|\frac{g(x_2)-g(x_1)}{x_2-x_1}\biggr|=K_{\gamma}<\infty, \end{equation*} \notag $$
is called an arc with bounded slope. It was shown in [10] that for all $z=x+iy$, $|x|\leqslant a$, away from an arc of bounded slope $\gamma$ we have
$$ \begin{equation} \frac{k}{2} |y-g(x)|\leqslant \rho(z)\leqslant |y-g(x)|, \end{equation} \tag{1.5} $$
where $\rho(z)=\min_{w\in\gamma}|z-w|$ and $k=\min(1,K_{\gamma}^{-1})$.
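As a toy illustration of (1.5) (ours, not from [10]), take the corner arc $\gamma$ with $g(x)=|x|$, so that $K_{\gamma}=1$ and $k=1$. Then for the point $z=it$ with small $t>0$
$$ \begin{equation*} \rho(it)=\frac{t}{\sqrt{2}}\quad\text{and} \quad |y-g(0)|=t, \end{equation*} \notag $$
in full agreement with the two-sided bound $\frac{1}{2} t\leqslant \rho(it)\leqslant t$.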

Now we state the result of [10].

Let $M=M(y)$ be the function from Theorem 1 and $F_M$ be the family of analytic functions $f$ in the curvilinear quadrilateral

$$ \begin{equation*} \Pi=\{z=x+iy\colon |x|<a,\ |y-g(x)|<b\} \end{equation*} \notag $$
that have the estimate
$$ \begin{equation*} |f(z)|\leqslant M(\rho(z)), \qquad z\in\Pi\setminus\gamma, \end{equation*} \notag $$
where $\gamma$ is the arc introduced above.

It was shown in [10] that if the integral (1.1) is convergent, then for each $\delta>0$ there exists a constant $C_M(\delta)$, which only depends on $\delta$ and $M$, such that for all $f\in F_M$, in the domain

$$ \begin{equation*} \Pi_{\delta}=\{z=x+iy\colon |x|<a-\delta,\ |y-g(x)|<b\} \end{equation*} \notag $$
we have the estimate
$$ \begin{equation*} |f(z)|\leqslant C_M(\delta). \end{equation*} \notag $$

The main step of the proof of this result is the construction of a so-called cutoff function, that is, an analytic function $F$ in a neighbourhood $G$ of $\gamma$ such that for each $f\in F_M$ the ratio $f/F$ is analytic in $G$ and continuous in $\overline{G}$ (here $G$ is a curvilinear rectangle with pointed corners). The construction of this function is based on Ahlfors’s theorem on distortion under conformal mappings. In fact, estimates (1.5) for the distance $\rho(z)$ are also used.

In this paper we discuss some questions closely connected with the Levinson-Sjöberg-Wolf theorems and their applications to approximation theory and, in particular, the Dyn’kin problems, which he stated in the 1970s. The above overview of results will perhaps allow one to take a broader view of these questions in the future and to discover other versions of these problems of Dyn’kin’s.

§ 2. Dyn’kin problem of an effective estimate for the growth majorant

Let $E$ be a compact set in $\mathbb{R}$ and $M$ be a majorant from Levinson’s theorem satisfying the bi-logarithmic condition (1.1). In [11] Dyn’kin introduced the system $F_{E}^{0}(M)$ of functions $f$ defined and analytic away from $E$ and such that

$$ \begin{equation*} |f(z)|\leqslant M(|{\operatorname{Im} z}|), \qquad z\in\mathbb{C}\setminus E. \end{equation*} \notag $$
Here $M$ is a decreasing function on $\mathbb{R}_{+}=(0,+\infty)$ that is equal to the majorant in Theorem 1 on $(0,b]$. In what follows we assume that $M(y)\downarrow0$ as $y\to +\infty$.

By Theorem 1 the set $F_{E}^{0}(M)$ is normal, that is, for each $\delta>0$

$$ \begin{equation*} M^*(\delta)=\sup\{|f(z)|\colon f\in F_{E}^{0}(M),\ \rho(z,E)\geqslant\delta\}<\infty. \end{equation*} \notag $$
Here $\rho(z,E)=\inf_{\xi\in E}|z-\xi|$, $z\in\mathbb{C}$.

Thus, $M^*$ is the least function such that

$$ \begin{equation*} |f(z)|\leqslant M^*(\rho(z,E)), \qquad z\in\mathbb{C}\setminus E, \end{equation*} \notag $$
for all $f\in F_{E}^{0}(M)$. The problem of an “effective estimate for the majorant $M^*$” was stated in [11].

Let $M$ be a function such that $\log M(e^{-\sigma})$ is a convex function of $\sigma$.

Set

$$ \begin{equation*} M_n=\sup_{\delta>0} \frac{n!}{M(\delta) \delta^{n+1}}, \qquad n\geqslant0. \end{equation*} \notag $$
Then it is known that the Carleman class on $I=[0,1]$,
$$ \begin{equation*} C_{I}(M_n)=\{f\colon f\in C^{\infty}(I),\ \|f^{(n)}\|\leqslant c K_{f}^{n} M_n,\ n\geqslant0\}, \end{equation*} \notag $$
where $\|f\|=\max_{I} |f(x)|$, is quasianalytic if and only if the integral (1.1) is divergent (see [10] and [12]). In what follows we let $C_{I}^{N}(M_n)$ denote the normalized class, that is, the class $C_{I}(M_n)$ with constants $c=1$ and $K_{f}=1$. Following [11] we also set
$$ \begin{equation*} P(\delta)=\sup\{|f(\delta)|\colon f\in C_{I}^{N}(M_n),\ f^{(n)}(0)=f^{(n)}(1)=0,\ n\geqslant0\}, \qquad 0<\delta\leqslant1. \end{equation*} \notag $$
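To see how the passage from $M$ to $\{M_n\}$ works in a concrete case, take the illustrative majorant $M(\delta)=e^{1/\delta}$ (our example, not from [11]). The substitution $u=1/\delta$ gives
$$ \begin{equation*} M_n=\sup_{\delta>0} \frac{n!}{e^{1/\delta} \delta^{n+1}}=n!\sup_{u>0} \bigl(u^{n+1} e^{-u}\bigr)=n!\,(n+1)^{n+1} e^{-(n+1)}, \end{equation*} \notag $$
and by Stirling's formula $M_n\asymp n!\,(n+1)!/\sqrt{n+1}$; thus, up to factors of at most geometric growth, this majorant generates the Gevrey-type sequence $M_n=(n!)^2$.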

As claimed in [12] (see p. 61, § 2.1, the remark), the problem of an effective estimate for the majorant “in the form $M^*\simeq P^{-1}$ with unknown $P$ was established in [11]”. Here and throughout, $M^*\simeq P^{-1}$ means that

$$ \begin{equation} A P^{-1}(a \delta)\leqslant M^*(\delta)\leqslant B P^{-1}(b \delta) \end{equation} \tag{2.1} $$
(where $0<a<b$ and $0<A<B$ are some constants). Note that estimates (2.1) were not written out explicitly in [11]; only the lower estimate was proved there, and no proof of the upper bound was given. In our paper we show that, in fact, estimates of type (2.1) hold for the so-called corrected associated weight $H_0$, rather than for $M^*$ (see Theorem 9). Under the assumptions of Theorem 2.3 in [13], where a sharp asymptotic estimate for $M^*$ was obtained, we show that if $M=H_0$, then $\log M(\delta)=o(\log M^*(\delta))$ as $\delta\to0$.

Now we look at the results in [11] more closely. In that paper the author considered only regular sequences $\{M_n\}$, that is, sequences such that the numbers $m_n={M_n}/{n!}$ have the following properties:

1) $m_n^{1/n}\to\infty$ as $n\to\infty$;

2) $\displaystyle \sup_{n\geqslant0}\biggl(\dfrac{m_{n+1}}{m_n}\biggr)^{1/n}<\infty$;

3) $m_n^2\leqslant m_{n-1} m_{n+1}$, $n\geqslant1$.

As is well known, for $\alpha>0$ the Carleman class $C_{I}((n!)^{1+\alpha})$ is called a Gevrey class. It is regular because the numbers $M_n=(n!)^{1+\alpha}$ satisfy conditions 1)–3).
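For the Gevrey class these objects can be written out explicitly, up to constants (a rough computation of ours, with Stirling's formula used and lower-order terms dropped): for $M_n=(n!)^{1+\alpha}$ we have $m_n=(n!)^{\alpha}$ and
$$ \begin{equation*} \log H^*(r)=\sup_{n\geqslant0} \bigl(-\alpha \log n!-n\log r\bigr)\approx \sup_{n\geqslant0} n\biggl(\log\frac{1}{r}-\alpha \log n+\alpha\biggr)\approx \alpha \biggl(\frac{1}{r}\biggr)^{1/\alpha}, \end{equation*} \notag $$
the supremum being attained near $n\approx(1/r)^{1/\alpha}$. In particular, $\log\log H^*(t)\approx \log\alpha+\alpha^{-1}\log(1/t)$ is integrable near zero, in accordance with the nonquasianalyticity criterion stated below.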

The associated weight is the function $H^*(r)=[h^*(r)]^{-1}$ (see [11]), where

$$ \begin{equation*} h^*(r)=\inf_{n\geqslant0} (m_n r^n). \end{equation*} \notag $$
It is clear that $h^*(r)\uparrow\infty$ as $r\to\infty$ and $h^*(0+)=0$. We can see from property 2) of regular sequences that $h^*(r)\leqslant r h^*(qr)$ for some $q > 1$. We have
$$ \begin{equation*} H^*(r)=\sup_{n\geqslant0} \frac{1}{m_n r^n}=\sup_{n\geqslant0} \frac{n!}{M_n r^n}. \end{equation*} \notag $$
Then it is known that (see [12])
$$ \begin{equation*} M_n=\sup_{r>0} \frac{n!}{H^*(r) r^n}, \qquad n\geqslant0. \end{equation*} \notag $$

The class $C_{I}(M_n)$ is quasianalytic if and only if any of the following equivalent conditions is satisfied (see [11]):

1) $\displaystyle\sum_{n=0}^{\infty} \frac{M_n}{M_{n+1}}=\infty$;

2) $\displaystyle\int_{0}^{1} \log^{+}\log H^*(t)\,dt=\infty$.
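Both conditions are easy to probe numerically for a concrete sequence. The following minimal Python sketch (ours, purely illustrative: the grid, the cut-offs and the parameter $\alpha=1/2$ are arbitrary) evaluates them for the Gevrey sequence $M_n=(n!)^{1+\alpha}$, for which $M_n/M_{n+1}=(n+1)^{-(1+\alpha)}$:

```python
import math

alpha = 0.5  # illustrative Gevrey parameter

def log_plus_log_Hstar(t):
    # log H*(t) = sup_n ( -alpha*log(n!) - n*log(t) ); the supremum sits
    # near n ~ (1/t)**(1/alpha), so we scan somewhat beyond that point
    best, log_fact = 0.0, 0.0
    for n in range(1, 3 * int((1.0 / t) ** (1.0 / alpha)) + 10):
        log_fact += math.log(n)                    # running value of log(n!)
        best = max(best, -alpha * log_fact - n * math.log(t))
    return math.log(best) if best > 1.0 else 0.0   # this is log+ log H*(t)

# condition 1): partial sums of M_n/M_{n+1} = (n+1)^{-(1+alpha)} stay bounded
cond1 = sum((n + 1) ** (-(1.0 + alpha)) for n in range(10 ** 5))

# condition 2): crude Riemann sum for the integral, truncated to [0.02, 1)
cond2 = 0.02 * sum(log_plus_log_Hstar(0.02 * k) for k in range(1, 50))

print(cond1, cond2)  # both stay finite: the Gevrey class is not quasianalytic
```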

Let us present the results from [13], where the first Dyn’kin problem on estimates for the majorant $M^*$ was solved. Consider the square

$$ \begin{equation*} S=\{x+iy\colon |x|<1,\ |y|<1\}. \end{equation*} \notag $$
The Carleman-Levinson-Sjöberg theorem claims that the family of analytic functions $F$ in $S$ satisfying
$$ \begin{equation} |F(z)|\leqslant M(|x|), \qquad z=x+iy, \end{equation} \tag{2.2} $$
is locally uniformly bounded in $S$ if $M(x)$ is nonincreasing on $(0,1)$ and
$$ \begin{equation} \int_{0}^{1} \log^{+}\log M(x)\,dx<\infty. \end{equation} \tag{2.3} $$
As mentioned already, a result in just this form was independently established by Levinson (in 1940) and Sjöberg (in 1938–1939). However, before that, in 1926 Carleman obtained an equivalent result (see [13]). It is clear that this result also holds for analytic functions $F$ satisfying (2.2) in the punctured square $S^*=S\setminus\{0\}$, provided that $M$ satisfies (2.3).

In [11] and [12] Dyn’kin asked about the precise behaviour of the majorant

$$ \begin{equation*} M^*(s)=\sup_{F} \max_{|z|=s} |F(z)| \end{equation*} \notag $$
as $s\to0$. Here the supremum is taken over all analytic functions in $S^*$ with majorant $M$ satisfying (2.3). Note that, originally, Dyn’kin stated the problem imposing no restrictions on the set of singularities of $F$ (see [11]). Subsequently, in [12], this problem was refined, stated in terms of the function $M=H^*$ and referred to as an “open problem stated in [11]”.

An upper bound for $M^*$ can be obtained using a method due to Domar [7], [9] (see [6]).

Using duality, Matsaev showed that the Levinson-Sjöberg theorem is equivalent to the Denjoy-Carleman theorem on the quasianalytic classes $C_{I}(M_n)$ (see [14]). Subsequently, this fact was re-discovered by Dyn’kin [15], while in [12] he claimed two-sided bounds for $M^*$ in terms of the quantity

$$ \begin{equation*} J_M(s)=\sup\Bigl\{|g(s)|\colon \sup_{I}|g^{(n)}(t)|\leqslant M_n,\ g^{(n)}(0)=0,\ n\geqslant0\Bigr\}. \end{equation*} \notag $$
However, these bounds were not merely non-sharp: they were simply not true (see a survey of results and a discussion in [13] and [16]). Sharp estimates for $M^*$ were obtained in [13], where another method was used. Let us state this result.

Let

$$ \begin{equation} P_{\varphi}(s)=\sup_{y>0}\biggl[\frac{2y}{\pi} \int_{0}^{\infty} \frac{\varphi(t)\,dt}{t^2+y^2}-ys\biggr], \end{equation} \tag{2.4} $$
where $\varphi$ is a (logarithmic) weight function satisfying conditions 1)–4) imposed in [13]; occasionally a further condition 5) is also imposed on $\varphi$ (see [13] for the precise statements of these conditions).

For the logarithm of the majorant $M$ in (2.2) let

$$ \begin{equation*} \varphi(r)=\inf_{s>0} (\log M(s)+rs) \end{equation*} \notag $$
be its lower Legendre transform. Assume that
$$ \begin{equation} \lim_{s\to0} s^{N} M(s)=\infty \end{equation} \tag{2.5} $$
for each $N>0$. Then the weight function $\varphi$ automatically satisfies conditions 1)–3), as well as condition 5) (see [13]). Now, if $\log M(e^{-s})$ and $\log M(t)$ are convex functions, then $\varphi(e^x)$ is also a convex function of $x\in\mathbb{R}_{+}$ (so that condition 4) is satisfied; see [13]).
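Here is a simple worked example of the lower Legendre transform (ours, for illustration only). For $M(s)=\exp(s^{-\alpha})$ with $\alpha>0$ the infimum of $s^{-\alpha}+rs$ is attained at $s=(\alpha/r)^{1/(1+\alpha)}$, whence
$$ \begin{equation*} \varphi(r)=\inf_{s>0} \bigl(s^{-\alpha}+rs\bigr)=c_{\alpha} r^{\alpha/(1+\alpha)}, \qquad c_{\alpha}=\alpha^{-\alpha/(1+\alpha)}+\alpha^{1/(1+\alpha)}; \end{equation*} \notag $$
condition (2.5) obviously holds, since $s^{N} e^{s^{-\alpha}}\to\infty$ as $s\to0$ for every $N>0$, and both $\log M(e^{-s})=e^{\alpha s}$ and $\log M(t)=t^{-\alpha}$ are convex.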

The following result was proved in [13].

Theorem 5. Assume that the majorant $M$ satisfies conditions (2.3) and (2.5), and let $\log M(e^{-s})$ and $\log M(t)$ be convex functions. Then, as $s\to0$,

$$ \begin{equation*} \log M^*(s)=(1+o(1)) \log P_{\varphi}(s), \end{equation*} \notag $$
where $P_{\varphi}$ is defined by (2.4) and $\varphi$ is the lower Legendre transform of $\log M(t)$.

§ 3. Second Dyn’kin problem of estimates for the function $J_M(s)$

The problem discussed in this section goes back historically to Bang [17].

Let $\{M_n\}_{n=0}^{\infty}$ be an arbitrary positive sequence such that $M_n^{1/n}\to \infty$ (but not necessarily regular). Then it has the greatest logarithmically convex minorant $\{M_n^{c}\}_{n=0}^{\infty}$, which is a sequence satisfying $M_n^{c}\leqslant M_n$ for $n\geqslant0$, and $(M_n^{c})^2\leqslant M_{n-1}^{c} M_{n+1}^{c}$ for $n\geqslant1$. The sequence $\{M_n^{c}\}$ is called the convex regularization of $\{M_n\}$ by logarithms (see [18]).

Let $P=\{n_i\}$ be the sequence of principal indices, so that $M_{n_i}=M_{n_i}^{c}$ for $i\geqslant1$. In [17], for each function $f\in C^{\infty}(I)$ Bang considered the quantity

$$ \begin{equation} B_{f}(x)=\inf_{p\in P} \biggl[\max\biggl(e^{-p}, \max_{0\leqslant n\leqslant p} \frac{|f^{(n)}(x)|}{e^n M_n^{c}}\biggr)\biggr]. \end{equation} \tag{3.1} $$

The central result in [17] is as follows.

Theorem 6 (Bang). If $f\in C^{\infty}(I)$ and $\|f^{(n)}\|\leqslant M_n$, $n\geqslant0$, then the estimate

$$ \begin{equation*} B_{f}(x)\geqslant e^{-q} \end{equation*} \notag $$
for some $q\in\mathbb{N}$ yields the inequality
$$ \begin{equation} B_{f}(x+h)\leqslant B_{f}(x) \exp\biggl(e|h| \frac{M_{q}^{c}}{M_{q-1}^{c}}\biggr). \end{equation} \tag{3.2} $$

Note that in this statement $q$ does not necessarily belong to the set $P$ of principal indices. The parameter $h$ is chosen so that the shift $x+h$ belongs to $I$.

Remark. Setting $L(x)=\log B_{f}(x)$, from Bang’s theorem we obtain the following:

1) $\displaystyle |L(x+h)-L(x)|\leqslant e \frac{M_{q}^{c}}{M_{q-1}^{c}} |h|$;

2) at points where the derivative $L'(x)$ is defined we have

$$ \begin{equation*} |L'(x)|\leqslant e \frac{M_{q}^{c}}{M_{q-1}^{c}}. \end{equation*} \notag $$

Bang used Theorem 6 to prove a criterion for the class $C_{I}(M_n)$ to be quasianalytic. We are only interested in the sufficiency part of this criterion, because its proof yields a simple estimate for each function $f$ in the class $C_{I}^{0}(M_n)=\{f\colon f\in C_{I}^{N}(M_n),\ f^{(n)}(0)=f^{(n)}(1)=0,\ n\geqslant0\}$ in a neighbourhood of $x=0$. Some authors extend this estimate, without justification, to the extremal function $J_M(x)$ (see [12] and [13]).

Making no claim to originality, we give a short proof of the following result due to Bang: if a class $C_{I}^{0}(M_n)$ is not quasianalytic, then

$$ \begin{equation*} \sum_{n=0}^{\infty} \frac{M_n^{c}}{M_{n+1}^{c}}<\infty. \end{equation*} \notag $$

By assumption there exists a function $f$ in $C_{I}^{0}(M_n)$ such that $f(x)\not\equiv0$. Hence $B_{f}(x)\not\equiv0$ too. Therefore, there exist $p_1\in P$ and $x_1\in I$ such that $B_{f}(x_1)=e^{-p_1}$. Next we construct recursively a sequence $\{x_n\}_{n=1}^{\infty}$ such that $x_n\downarrow0$ and $B_{f}(x_j)=e^{-p_j}$, where $p_j\in P$ and $p_1<p_2<\dots<p_n<\dotsb$. If $x=x_j$ and $x+h=x_{j-1}$, then $h>0$. By Theorem 6

$$ \begin{equation*} B_{f}(x_{j-1})\leqslant B_{f}(x_j) \exp\biggl[e |x_j-x_{j-1}| \frac{M_{p_j}^{c}}{M_{p_j-1}^{c}}\biggr]. \end{equation*} \notag $$
Hence
$$ \begin{equation*} p_j-p_{j-1}\leqslant e |x_j-x_{j-1}| \frac{M_{p_j}^{c}}{M_{p_j-1}^{c}}, \end{equation*} \notag $$
or
$$ \begin{equation} (p_j-p_{j-1}) \frac{M_{p_j-1}^{c}}{M_{p_j}^{c}}\leqslant e |x_j-x_{j-1}|. \end{equation} \tag{3.3} $$

However, the left-hand side here is

$$ \begin{equation*} \sum_{n=p_{j-1}}^{p_j-1} \frac{M_n^{c}}{M_{n+1}^{c}}, \end{equation*} \notag $$
where all terms are equal and their number is $p_j-p_{j-1}$: between consecutive principal indices the points $(n,\log M_n^{c})$ lie on a line segment, so the ratio $M_n^{c}/M_{n+1}^{c}$ is constant there; this is the geometric meaning of the regularization of the sequence $\{M_n\}$ by logarithms (see [18]). Since
$$ \begin{equation*} \sum_{j=2}^{\infty} |x_j-x_{j-1}|\leqslant x_1, \end{equation*} \notag $$
it follows from (3.3) that
$$ \begin{equation} \sum_{n=p_1}^{\infty} \frac{M_n^{c}}{M_{n+1}^{c}}\leqslant e x_1<\infty. \end{equation} \tag{3.4} $$

The proof is complete. However, we are interested in inequality (3.4) itself, because Bang obtained an important estimate for $f$ on its basis: if $x\in I$ and

$$ \begin{equation*} x<\frac{1}{e} \sum_{n=p_1}^{\infty} \frac{M_n^{c}}{M_{n+1}^{c}}, \end{equation*} \notag $$
then
$$ \begin{equation} |f(x)|<M_{0}^{c} e^{-p_1}. \end{equation} \tag{3.5} $$

It should be noted that here $p_1$ depends on the particular function $f$: the smaller $\|f\|$, the greater $p_1=p_1(f)$.

Using Taylor’s formula Bang also obtained another inequality, which yields the bound

$$ \begin{equation} J_M(x)\leqslant \inf_{n\geqslant0} \frac{M_n x^n}{n!}, \qquad x\in I. \end{equation} \tag{3.6} $$

To see the difference between (3.5) and (3.6) we look at an example.

Consider the sequence of numbers

$$ \begin{equation*} M_n=n!\, [\log (n+e)]^{(1+\beta)n},\qquad \beta>0,\quad n\geqslant0. \end{equation*} \notag $$

Let $f$ be the function from the above proof of the sufficiency part of Theorem 6; it satisfies (3.5). From (3.6) we also obtain

$$ \begin{equation} |f(x)|\leqslant \frac{1}{\sup_{n\geqslant0}(n!/(M_n x^n))}=\frac{1}{H_1(x)}, \end{equation} \tag{3.7} $$
where
$$ \begin{equation*} H_1(x)\asymp \exp\exp \biggl[c_1 \biggl(\frac{1}{x}\biggr)^{1/(1+\beta)}\biggr], \qquad 0<x\leqslant1, \end{equation*} \notag $$
and $c_1$ is a positive constant independent of $f$ (we write $H_1\asymp H_2$ if there exist positive $a_1$ and $a_2$ such that $a_1 H_1(x)\leqslant H_2(x)\leqslant a_2 H_1(x)$).
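The double-exponential growth of $H_1$ is also easy to observe numerically. The following sketch (ours; the value $\beta=1/2$ and the sample points are arbitrary) computes $\log\log H_1(x)$ by direct maximization over $n$ and compares it with $x^{-1/(1+\beta)}$:

```python
import math

beta = 0.5  # illustrative value; any beta > 0 behaves in the same way

def loglog_H1(x):
    # log(n!/(M_n x^n)) = n*(log(1/x) - (1+beta)*log(log(n+e))) for
    # M_n = n!*(log(n+e))**((1+beta)*n); the expression is unimodal in n,
    # so we stop scanning as soon as it starts to decrease
    L = math.log(1.0 / x)
    best, n = 0.0, 1
    while True:
        v = n * (L - (1.0 + beta) * math.log(math.log(n + math.e)))
        if v < best:
            break
        best, n = v, n + 1
    return math.log(best)

for x in (0.2, 0.1, 0.05, 0.02):
    # the ratio drifts slowly towards a limiting constant (logarithmic
    # corrections decay slowly), consistent with
    # log log H_1(x) ~ c_1 * x**(-1/(1+beta))
    print(x, loglog_H1(x) / x ** (-1.0 / (1.0 + beta)))
```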

In view of the rapid growth of $H_1(x)$ as $x\to0$, we can write (3.7) as follows:

$$ \begin{equation} \log\log \frac{1}{|f(x)|}\geqslant c_2 \biggl(\frac{1}{x}\biggr)^{1/(1+\beta)}, \end{equation} \tag{3.8} $$
where $0<c_2<c_1$ and $c_2$ is also independent of $f$ ($c_2$ depends only on the sequence $\{M_n\}$).

The fact that $C_{I}^{N}(M_n)$ is not quasianalytic follows from the condition

$$ \begin{equation*} \sum_{n=0}^{\infty} \frac{M_n}{M_{n+1}}<\infty. \end{equation*} \notag $$
However, the absence of quasianalyticity is also controlled by the associated weight $H_1$, because
$$ \begin{equation} \int_{0}^{1} \log^{+}\log H_1(x)\,dx<\infty, \end{equation} \tag{3.9} $$
and for $\beta=0$ the integral in (3.9) is divergent and the class $C_{I}^{N}(M_n)$ becomes quasianalytic, as was to be expected. This suggests that estimate (3.6) is fairly sharp.

However, using Bang’s estimate (3.5) we can deduce a sharper estimate, albeit for a fixed function $f$ (see [17]): there exists $x_0=x_0(f)$ such that for all $x$, ${0<x<x_0(f)}$, and some $c=c(f)>0$ we have

$$ \begin{equation} \log\log \frac{1}{|f(x)|}\geqslant c \biggl(\frac{1}{x}\biggr)^{1/\beta}. \end{equation} \tag{3.10} $$

A natural question is as follows: which of inequalities (3.8) and (3.10) reflects faithfully the behaviour of the extremal function $J_M(x)$?

An attempt at an answer was made in [12] (see also [19] in this connection).

Let $\{M_n\}$ be a regular sequence and $H_{0}$ be the corrected associated weight function, that is,

$$ \begin{equation*} H_0(y)=\sup_{n\geqslant0} \frac{n!}{M_n y^{n+1}}. \end{equation*} \notag $$
Then it is known that
$$ \begin{equation*} M_n=\sup_{y>0} \frac{n!}{H_{0}(y) y^{n+1}}. \end{equation*} \notag $$
We also consider the function
$$ \begin{equation} H(y)=\sum_{n=0}^{\infty} \frac{n!}{M_n y^{n+1}}. \end{equation} \tag{3.11} $$
Then a criterion for the class $C_{I}^{N}(M_n)$ to be nonquasianalytic has the form
$$ \begin{equation} \int_{0}^{d} \log h(t)\,dt<\infty, \end{equation} \tag{3.12} $$
where $h(t)=\log H(t)$ and $d>0$ is a number such that $h(d)=1$. This criterion is equivalent to the Lebesgue-Stieltjes integral
$$ \begin{equation} -\int_{0}^{d} t \psi'(t)\,dt, \quad\text{where } \psi(t)=\log h(t) \end{equation} \tag{3.13} $$
being convergent. As in [19], let $\theta=\theta(y)$ be the inverse function of
$$ \begin{equation*} y(\theta)=-\int_{0}^{\theta} t \psi'(t)\,dt. \end{equation*} \notag $$
Now, provided that (3.12) holds, there exists $f\in C_{I}^{0}(M_n)$ such that (see [10])
$$ \begin{equation} |f(y)|\geqslant C_0(f) \exp\biggl[-h\biggl(\frac{1}{4} \theta(y)\biggr)\biggr]. \end{equation} \tag{3.14} $$
Previously, Dyn’kin also proved an upper bound, under more restrictive assumptions (see [12]): if (3.12) holds and, moreover, $t |\psi'(t)|\to \infty$ as $t\to0$, then the following estimate holds for each $f\in C_{I}^{0}(M_n)$:
$$ \begin{equation} |f(y)|\leqslant C_1(f) \exp[-h(c \theta(y))], \end{equation} \tag{3.15} $$
where $c>0$ is a constant. Under the same assumptions Dyn’kin [12] also obtained a similar lower bound of type (3.14) for some function $f\in C_{I}^{0}(M_n)$ as $y\to0$, which, however, did not involve the constant $C_0(f)$. By combining these two bounds the following theorem was obtained in [12].

Theorem 7. Let $t |\psi'(t)|\to\infty$ as $t\to0$. Then the following assertions hold:

1) if the integral in (3.12) is divergent, then $J_M(x)\equiv0$;

2) if the integral in (3.12) is convergent, then

$$ \begin{equation} H_{0}(q_1 \theta(x))\leqslant J_M(x)\leqslant H_{0}(q_2 \theta(x)), \end{equation} \tag{3.16} $$
where $0<q_1<q_2<\infty$.

In view of the proof of Bang’s theorem and the comments on (3.5) above, we can conclude that, in place of $J_M(x)$, estimates (3.16) must involve the particular function $f$ constructed in [10], and the constants $q_1$ and $q_2$ must depend on this function $f$, that is, $q_1=q_1(f)$ and $q_2=q_2(f)$. This means that, contrary to the author’s claim (see [12]), the second Dyn’kin problem was in fact not solved in [12].

It is easy to see that, for the sequence $M_n=n!\, [\log (n+e)]^{(1+\beta)n}$, where $\beta>0$ and $n\geqslant0$, we have

$$ \begin{equation*} \psi(y)=\log h(y)\asymp y^{-1/(1+\beta)}\quad\text{and} \quad \theta(y)\asymp y^{(1+\beta)/\beta}. \end{equation*} \notag $$
Therefore, taking the above into account, there exists a function $f\in C_{I}^{0}(M_n)$ such that
$$ \begin{equation*} c_{f} x^{-1/\beta}\leqslant \log\log \frac{1}{|f(x)|}\leqslant C_{f} x^{-1/\beta}, \qquad 0<x\leqslant1. \end{equation*} \notag $$
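For the reader's convenience, here is the routine computation behind these asymptotics for this particular sequence, with all constants suppressed. Since $\psi(t)\asymp t^{-1/(1+\beta)}$, we have $t|\psi'(t)|\asymp t^{-1/(1+\beta)}$ and
$$ \begin{equation*} y(\theta)=-\int_{0}^{\theta} t \psi'(t)\,dt\asymp \int_{0}^{\theta} t^{-1/(1+\beta)}\,dt\asymp \theta^{\beta/(1+\beta)}, \qquad\text{so that } \theta(y)\asymp y^{(1+\beta)/\beta}; \end{equation*} \notag $$
hence, by (3.14) and (3.15), $\log(1/|f(y)|)\asymp h(c\,\theta(y))=e^{\psi(c \theta(y))}\asymp \exp(c' y^{-1/\beta})$, which is exactly the displayed two-sided bound.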

In [13] the corresponding inequality, in place of ${1}/{|f(x)|}$, incorrectly involved the quantity $\delta_{\{M_n\}}(s)=\sup\{|g(s)|,\ g\in C_{I}^{0}(M_n)\}$.

Thus, although Bang’s asymptotic estimate (3.10) is better than (3.8) for each fixed $f$ of the above type, it does not describe the behaviour of $J_M(x)$ adequately.

The following result was established in [19].

Theorem 8. Let $\{M_n\}$ be a regular sequence. If the function $H$ defined by (3.11) satisfies the bi-logarithmic condition (3.12), then the extremal function $J_M(x)$ satisfies

$$ \begin{equation} \frac{1}{q_1 H(x/2)}\leqslant J_M(x)\leqslant \frac{1}{H(2q_2 x)}, \qquad 0<x\leqslant1, \end{equation} \tag{3.17} $$
where $q_1$ is a positive constant depending only on $H$ (that is, on the sequence $M_n$), and
$$ \begin{equation*} q_2=\sup_{n\geqslant1} \sqrt[n]{\frac{m_n}{m_{n-1}}}<\infty, \quad\textit{where } m_n=\frac{M_n}{n!}. \end{equation*} \notag $$

Now we compare estimates (3.17) for $J_M(x)$ with Dyn’kin’s estimates (3.16) for the function $f\in C_{I}^{0}(M_n)$ constructed in [10] and [12] using Gurarii’s method of the cutoff function. In doing this it is natural to limit ourselves to the case when $t |\psi'(t)|\to\infty$ as $t\to0$. Then, using Dyn’kin’s estimate (3.15) we obtain

$$ \begin{equation} |f(y)|\leqslant C_1(f) e^{-h(c \theta(y))}, \end{equation} \tag{3.18} $$
where $c>0$ is a constant and $\theta=\theta(y)$ is a function introduced above.

Set

$$ \begin{equation*} a(y)=\log \frac{1}{|f(y)|}\quad\text{and} \quad b(y)=h\biggl(\frac{y}{2}\biggr). \end{equation*} \notag $$
Then taking (3.18) into account we obtain
$$ \begin{equation*} p(y)=\frac{a(y)}{b(y)}\geqslant \frac{1}{2} \,\frac{h(c \theta(y))}{h(y/2)}, \qquad 0<y\leqslant y_0<1. \end{equation*} \notag $$
Since it is easy to verify that $\theta(y)=o(y)$ as $y\to0$ (because $t |\psi'(t)|\to\infty$ as $t\to0$), taking the monotonicity of $h$ into account we obtain
$$ \begin{equation*} p(y)\geqslant \frac{1}{2} \frac{h(x)}{h(2x)}, \qquad x=\frac{y}{4}, \quad 0<x\leqslant x_0<1. \end{equation*} \notag $$
Since $\psi(t)=\log h(t)$ and $t |\psi'(t)|\to\infty$ as $t\to0$, for each $A>0$ we have
$$ \begin{equation*} \log p(y)\geqslant -\log 2+\int_{x}^{2x} t |\psi'(t)| \,\frac{dt}{t}\geqslant -\log 2+A \log 2 \end{equation*} \notag $$
for $0<x<x_1(A)$. Thus, as $y\to0$,
$$ \begin{equation*} \log H\biggl(\frac{y}{2}\biggr)=o\biggl(\log \frac{1}{|f(y)|}\biggr). \end{equation*} \notag $$
This means that the function $f(y)$ tends to zero as $y\to0$ much more rapidly than $H^{-1}(y/2)$. Hence the actual behaviour of $J_M(y)$ for $y\to0$ is comparable to the asymptotic behaviour of $H^{-1}(y)$, rather than of $|f(y)|$.

Estimates of the form (3.17) for $J_M(x)$ are important for applications, for example, in problems concerning the asymptotic behaviour of entire Dirichlet series on the real axis (see [20]).

We now turn to the results in [13] related to estimates for the extremal function $J_M(y)$. The authors of [13] claimed that in that paper they also solved the second Dyn’kin problem.

Let $\varphi(r)=\log T(r)$, where $T(r)=\sup_{n\geqslant0}(r^n/M_n)$ is the trace function of the sequence $\{M_n\}$, which satisfies the nonquasianalyticity condition

$$ \begin{equation} \int_{1}^{\infty} \frac{\log T(r)}{r^2}\,dr<\infty. \end{equation} \tag{3.19} $$
It is known that $\varphi$ satisfies conditions 1)–4) for a logarithmic weight (see § 2).

The following result was presented in [13], Theorem 2.1: if

$$ \begin{equation} \lim_{t\to\infty} \frac{t \varphi'(t+0)}{\displaystyle\biggl(t^3 \int_{t}^{\infty} \frac{\varphi(\tau)}{\tau^4}\,d\tau\biggr)^{2/3}}=\infty, \end{equation} \tag{3.20} $$
then, as $x\to0$,
$$ \begin{equation} \log \delta_{\{M_n\}}(x)=-(1+o(1)) P_{\varphi}(x), \end{equation} \tag{3.21} $$
where $P_{\varphi}(x)$ is the function defined by (2.4) and
$$ \begin{equation*} \delta_{\{M_n\}}(x)=\sup\{|g(x)|\colon g\in C_{I}^{0}(M_n)\}. \end{equation*} \notag $$

We see that $\delta_{\{M_n\}}(x)\equiv J_M(x)$; however, neither the regularity of the sequence $\{M_n\}$ nor the convergence of (3.12) was explicitly assumed in [13]. We use this notation for the extremal function in what follows, when we discuss the results of [13].

If $\varphi(r)=\log T(r)$ is a concave function on $\mathbb{R}_{+}$ such that $\varphi(r) \log^{-3}(r)\uparrow\infty$ as $r\to\infty$, then it was shown in [13] that condition (3.20) holds for it; but the authors knew of no weaker condition that would be easier to verify and could replace (3.20) (see [13]). However, the assumption that $\log T(r)$ itself is concave restricts the class of sequences $\{M_n\}$ excessively. Usually, in such problems a natural object of consideration is the least concave majorant $\omega_{T}(r)$ of the function $\log T(r)$, which one assumes to belong to the convergence class, that is, to satisfy (3.19). In fact,

$$ \begin{equation*} \omega_{T}(r)=\inf_{y>0} (m(y)+yr), \end{equation*} \notag $$
where
$$ \begin{equation*} m(y)=\sup_{r>0} (\varphi(r)-ry)\quad\text{and} \quad \varphi(r)=\log T(r), \end{equation*} \notag $$
and, moreover, the integral
$$ \begin{equation} \int_{0}^{a} \log m(y)\,dy, \qquad m(a)=1, \end{equation} \tag{3.22} $$
is also convergent in this case, as also is the integral (3.19) for the function $\omega_{T}(r)$ (see [21]).

We see from the proof of the result of [13] presented above that it is reasonable to consider separately two cases: when only the integral (3.19) converges and when the analogous integral for the function $\omega_{T}(r)$ is convergent. In fact, the verification of the asymptotic equality (3.21) relies essentially on Lemma 2.1 in [13]: let $W$ be the outer function in the upper half-plane $\mathbb{C}_{+}$ with logarithmic weight $\varphi(t)=\log T(|t|)$. Then

$$ \begin{equation} \sqrt{2\pi} \rho_{1,W}(s)\leqslant \delta_{\{M_n\}}(s)\leqslant \frac{e}{\sqrt{2\pi}} s \rho_{\infty,W}(s), \end{equation} \tag{3.23} $$
where
$$ \begin{equation} \rho_{p,W}(s)=\sup_{\|f\|_{H^{p}(W)}\leqslant1} |(F^{-1}f)(s)|, \end{equation} \tag{3.24} $$
and
$$ \begin{equation*} (F^{-1}f)(s)=\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(x) e^{-isx}\,dx \end{equation*} \notag $$
is the inverse Fourier transformation; here
$$ \begin{equation*} H^{p}(W)=\{f\colon f\in H(\mathbb{C}_{+}),\ Wf\in H^{p}\} \end{equation*} \notag $$
and
$$ \begin{equation*} \|f\|_{H^{p}(W)}=\|Wf\|_{H^{p}},\qquad 1\leqslant p\leqslant \infty. \end{equation*} \notag $$

Note that $W(z)\neq0$ in $\mathbb{C}_{+}$, and we have $W(z)=e^{u(z)+i v(z)}$, where

$$ \begin{equation*} u(z)=\log |W(z)|=\frac{\operatorname{Im} z}{\pi} \int_{\mathbb{R}} \frac{\varphi(t)\, dt}{|t-z|^2}, \qquad \operatorname{Im} z>0. \end{equation*} \notag $$
It is also known that if $f\in H^{p}(W)$ and $\|f\|_{H^{p}(W)}\leqslant1$, then (see [22])
$$ \begin{equation} |f(z)|\leqslant \frac{|W(z)|^{-1}}{(\pi \operatorname{Im} z)^{1/p}}, \qquad z\in\mathbb{C}_{+}. \end{equation} \tag{3.25} $$
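For finite $p$ this is the standard Poisson-majorant argument: $|W f|^p$ is dominated in $\mathbb{C}_{+}$ by the Poisson integral of its boundary values, so that, for $z=x+iy$ with $y>0$,
$$ \begin{equation*} |W(z) f(z)|^{p}\leqslant \frac{y}{\pi} \int_{\mathbb{R}} \frac{|W(t) f(t)|^{p}\,dt}{(t-x)^2+y^2}\leqslant \frac{1}{\pi y} \|Wf\|_{H^p}^{p}\leqslant \frac{1}{\pi y}, \end{equation*} \notag $$
which is (3.25); for $p=\infty$ the estimate reads $|f(z)|\leqslant|W(z)|^{-1}$ and is immediate.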

It was proved in Lemma 4.1 in [13] that if $\varphi(r)=\log T(r)$ belongs to the convergence class, then

$$ \begin{equation*} \rho_{p,W}(s)\leqslant C \frac{\sqrt{P''_{\varphi}(s)}\, e^{-P_{\varphi}(s)}}{|P_{\varphi}(s)|^{1/p}}. \end{equation*} \notag $$
Hence, as $s\to0$,
$$ \begin{equation} \log \rho_{p,W}(s)\leqslant -(1+o(1)) P_{\varphi}(s). \end{equation} \tag{3.26} $$

Thus, taking account of the upper estimate in (3.23) and (3.26) one arrives at the following assertion (see [13], Theorem 2.1): if the integral in (3.19) converges, then, as $s\to0$,

$$ \begin{equation} \log \delta_{\{M_n\}}(s)\leqslant -(1+o(1)) P_{\varphi}(s). \end{equation} \tag{3.27} $$

We obtain a lower bound for $\log \delta_{\{M_n\}}(s)$ from the one in (3.23) and an appropriate lower bound for $\rho_{1,W}(s)$, which however is obtained under the additional assumption (3.20). Since this case reduces in fact to the convergence of the bi-logarithmic integral (3.22) for $H(y)=\exp(m(y))$ or, equivalently, for the associated weight $H_0$, it is precisely this case that we consider in our paper. Here we obtain a relation, quite different from the one in (3.21), which provides a solution of the second Dyn’kin problem.

As regards inequality (3.27), it is false (see § 5): its proof in [13] is inaccurate and contains a significant gap. In fact, the asymptotic inequality (3.27) holds for each fixed function $g\in C_{I}^{0}(M_n)$, but in its own neighbourhood of zero, that is, for ${0<s\leqslant s_{0}(g)}$, rather than for the extremal function $\delta_{\{M_n\}}(s)$. Thus, the Dyn’kin problem in question was not in fact solved in [13].

§ 4. The main result: solving the second Dyn’kin problem

Let $\{M_n\}$ be a regular sequence and $H_0$ be the associated weight introduced above.

If the integral

$$ \begin{equation} \int_{0}^{d_0} \log\log H_{0}(t)\,dt<\infty, \qquad H_{0}(d_0)=e, \end{equation} \tag{4.1} $$
converges, then there exists a function $f\in C_{I}^{0}(M_n)$ such that $f(x)\not\equiv0$. Then using inequality (3.6) and the definition of $H_0$ we obtain
$$ \begin{equation} J_M(x)\leqslant \frac{1}{x H_{0}(x)}, \qquad x\in I. \end{equation} \tag{4.2} $$

We have obtained an upper bound for $J_M(x)$. For a lower one we consider the normed space $F_{I}(H_0)$ of analytic functions in $\mathbb{C}\setminus I$ satisfying the estimate

$$ \begin{equation*} |f(z)|\leqslant C_{f} H_{0}(\operatorname{dist}(z,I)), \qquad z\in\mathbb{C}\setminus I, \end{equation*} \notag $$
with the norm
$$ \begin{equation*} \|f\|_{0}=\sup_{\operatorname{Im} z\neq0} \frac{|f(z)|}{H_{0}(|{\operatorname{Im} z}|)}. \end{equation*} \notag $$
Let $F_{I}^{0}(H_{0})$ denote the unit ball in $F_{I}(H_{0})$.

In place of $I$ we could consider any closed set $E\subset\mathbb{R}$ (see [11]). So taking $E=\{0\}$ consider the linear functional $G$ on the space $F_{\{0\}}(H_{0})$ such that $\langle G,f\rangle=f(\delta)$ for some fixed $\delta\in (0,1]$. Then we obviously have $|\langle G,f\rangle|\leqslant C_{f} H_{0}(\delta)$. Because the integral (4.1) is convergent, by Levinson’s theorem the set of functions $F_{\{0\}}^{0}(H_{0})$ is normal. Hence setting $C_{f}^{0}=\inf C_{f}$ we obtain $\sup_{f\in F_{\{0\}}^{0}(H_{0})} C_{f}^{0}=C<\infty$. Therefore, $\|G\|\leqslant C H_{0}(\delta)$ (the positive constant $C$ is independent of $\delta$). Now, since $F_{\{0\}}(H_{0})\subset F_{I}(H_{0})$, by the Hahn-Banach theorem the functional $G$ can be extended to the whole of $F_{I}(H_{0})$ with the same norm. We keep the notation $G$ for this functional and consider the function

$$ \begin{equation*} \eta(t)=\biggl\langle G,\frac{1}{z-t}\biggr\rangle, \qquad t\in I. \end{equation*} \notag $$
Then $\eta\in C^{\infty}(I)$, and we have
$$ \begin{equation*} |\eta^{(n)}(t)|=\biggl|\biggl\langle G,\frac{n!}{(z-t)^{n+1}}\biggr\rangle\biggr| \leqslant C H_{0}(\delta) \|n!\, (z-t)^{-n-1}\|=C H_{0}(\delta) M_n, \qquad n\geqslant0, \end{equation*} \notag $$
where
$$ \begin{equation*} M_n=\sup_{y>0} \frac{n!}{H_{0}(y) y^{n+1}}. \end{equation*} \notag $$
Also note that
$$ \begin{equation*} \eta^{(n)}(0)=\biggl\langle G,\frac{n!}{z^{n+1}}\biggr\rangle=\frac{n!}{\delta^{n+1}}, \qquad n\geqslant0. \end{equation*} \notag $$

Now consider the function $g$ such that $g(t)=1+\eta(t)(t-\delta)$. Since

$$ \begin{equation*} g^{(n)}(t)=\eta^{(n)}(t)(t-\delta)+n \eta^{(n-1)}(t), \qquad n\geqslant1, \end{equation*} \notag $$
we obtain $g^{(n)}(0)=0$, $n\geqslant0$ (indeed, $g(0)=1-\delta\,\eta(0)=0$, while for $n\geqslant1$ we have $g^{(n)}(0)=-\delta\,\eta^{(n)}(0)+n\,\eta^{(n-1)}(0)=-n!/\delta^{n}+n!/\delta^{n}=0$), and $|g^{(n)}(t)|\leqslant C H_{0}(\delta) (M_n+n M_{n-1})$, $n\geqslant1$. However, the sequence $\{M_n\}$ is logarithmically convex, that is, $M_n^2\leqslant M_{n-1} M_{n+1}$, $n\geqslant1$. Hence the sequence $\{M_{n-1}/M_n\}$ is nonincreasing. Then, as the series
$$ \begin{equation*} \sum_{n=1}^{\infty} \frac{M_{n-1}}{M_n} \end{equation*} \notag $$
is convergent, it follows that $n M_{n-1}=o(M_n)$ as $n\to\infty$, so that
$$ \begin{equation*} \sup_{n\geqslant1} \frac{n M_{n-1}}{M_n}=L<\infty. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} \sup_{I} |g^{(n)}(t)|\leqslant C (1+L) M_n H_{0}(\delta), \qquad \delta\in (0,1], \quad n\geqslant0. \end{equation*} \notag $$
Thus, we finally obtain the following:

1) $g^{(n)}(0)=0$, $n\geqslant0$;

2) $\|g^{(n)}\|\leqslant K H_{0}(\delta) M_n$, $n\geqslant0$, where $K=(1+L)C$;

3) $g(\delta)=1$.

Hence the function

$$ \begin{equation*} \psi(t)=\frac{g(t)}{K H_{0}(\delta)} \end{equation*} \notag $$
belongs to the class $C_{I}^{0}(M_n)$. It remains to observe that
$$ \begin{equation*} J_M(\delta)\geqslant \frac{1}{K H_{0}(\delta)} \quad\text{for } \delta\in (0,1], \quad\text{where } K=(1+L)C. \end{equation*} \notag $$

We state the result obtained as the following theorem.

Theorem 9. Let $\{M_n\}$ be a regular sequence and $H_{0}$ be the associated weight in the following sense:

$$ \begin{equation*} H_{0}(t)=\sup_{n\geqslant0} \frac{n!}{M_n t^{n+1}}, \qquad t>0. \end{equation*} \notag $$
If the integral (4.1) converges, then the extremal function $J_M(x)$ has the estimates
$$ \begin{equation} \frac{1}{K H_{0}(x)}\leqslant J_M(x)\leqslant \frac{1}{x H_{0}(x)}, \end{equation} \tag{4.3} $$
where $K=(1+L)C$, $C$ is the constant introduced above and
$$ \begin{equation*} L=\sup_{n\geqslant1} \frac{n M_{n-1}}{M_n}. \end{equation*} \notag $$

Thus, since $\log(1/x)=o(\log H_{0}(x))$ as $x\to0$ (indeed, $H_{0}(x)\geqslant n!/(M_n x^{n+1})$ for each fixed $n$, so that $\log H_{0}(x)\geqslant (n+1)\log(1/x)-\log(M_n/n!)$), we see from (4.3) that, as $x\to0$,

$$ \begin{equation} \log J_M(x)=-\log H_{0}(x)+O\biggl(\log \frac{1}{x}\biggr)=-(1+o(1)) \log H_{0}(x). \end{equation} \tag{4.4} $$

Estimates (4.3), by contrast with (3.17), describe the asymptotic behaviour of the extremal function $J_M(x)$ as accurately as possible. The meaning of this theorem is that $J_M(x)$ tends to zero as $x\to0$ much more slowly than any individual function $f$ in the class $C_{I}^{0}(M_n)$ (see above). Note that in Theorem 9 we need not rely on Theorem 5 from [13], which solves the first Dyn’kin problem, or, more precisely, on the relation

$$ \begin{equation*} \log M^*(x)=(1+o(1)) P_{\varphi}(x), \qquad x\to0. \end{equation*} \notag $$

As we show in § 5, the function $\log H_{0}(x)$ exhibits a considerably slower growth as $x\to0$ than $P_{\varphi}(x)$: $\log H_{0}(x)=o(P_{\varphi}(x))$ as $x\to0$. This means that the upper bound in (2.1), on which the authors of some papers mentioned above relied, is incorrect. In the context of the proof of Theorem 9, $M^*(x)=C H_{0}(x)$.

§ 5. On an upper bound for $\delta_{\{M_n\}}(s)$

For a function $g\in C_{I}^{0}(M_n)$ its Fourier transform

$$ \begin{equation*} (Fg)(z)=\frac{1}{\sqrt{2\pi}} \int_{0}^{1} g(s) e^{isz}\,ds \end{equation*} \notag $$
defines an analytic function in the upper half-plane $\mathbb{C}_{+}$. Integrating by parts $n$ times and using the equalities $g^{(n)}(0)=g^{(n)}(1)=0$, $n\geqslant0$, we obtain
$$ \begin{equation*} (Fg)(z)=\frac{(-1)^n}{(iz)^n \sqrt{2\pi}} \int_{0}^{1} g^{(n)}(s) e^{isz}\,ds, \qquad z\in\mathbb{C}_{+}. \end{equation*} \notag $$
Hence, since $|e^{isz}|=e^{-s\operatorname{Im} z}$ and $\sup_{n\geqslant0}|z|^n/M_n=T(|z|)$, taking the infimum over $n\geqslant0$ we obtain
$$ \begin{equation} |(Fg)(z)|\leqslant \frac{1}{\sqrt{2\pi} \operatorname{Im} z T(|z|)}, \qquad z\in\mathbb{C}_{+}. \end{equation} \tag{5.1} $$
In [13] it was derived from (5.1) (see [13], Lemma 2.1) that for each $\tau>0$ we have
$$ \begin{equation} \|\sqrt{2\pi}\, \tau (Fg)(z+i\tau)\|_{H^{\infty}(W)}\leqslant 1. \end{equation} \tag{5.2} $$
Now consider the inverse Fourier transform
$$ \begin{equation*} g(s)=\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} (Fg)(x) e^{-isx}\,dx. \end{equation*} \notag $$
By Cauchy’s theorem, for each $\tau>0$ we can write it as
$$ \begin{equation*} g(s)=\frac{1}{\sqrt{2\pi}\, \tau}\biggl(\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \bigl[\sqrt{2\pi}\, \tau (Fg)(x+i\tau) e^{-is(x+i\tau)}\bigr]\,dx\biggr). \end{equation*} \notag $$
Thus, for each function $g\in C_{I}^{0}(M_n)$
$$ \begin{equation*} |g(s)|\leqslant \frac{e^{s\tau}}{\sqrt{2\pi}\, \tau}\biggl(\frac{1}{\sqrt{2\pi}}\biggl|\int_{\mathbb{R}} \bigl[\sqrt{2\pi}\, \tau (Fg)(x+i\tau) e^{-isx}\bigr]\,dx\biggr|\biggr). \end{equation*} \notag $$
Hence
$$ \begin{equation} |g(s)|\leqslant \frac{e^{s\tau}}{\sqrt{2\pi}\, \tau} \rho_{\tau}(s), \end{equation} \tag{5.3} $$
where
$$ \begin{equation*} \rho_{\tau}(s)=\sup_{g\in C_{I}^{0}(M_n)} \biggl(\frac{1}{\sqrt{2\pi}} \biggl|\int_{\mathbb{R}} \bigl[\sqrt{2\pi}\, \tau (Fg)(x+i\tau) e^{-isx}\bigr]\,dx\biggr|\biggr). \end{equation*} \notag $$
Then from (5.3) we obtain
$$ \begin{equation} J_M(s)\leqslant \inf_{\tau>0} \biggl(\frac{e^{s\tau}}{\sqrt{2\pi}\, \tau} \rho_{\tau}(s)\biggr). \end{equation} \tag{5.4} $$
However, in [13] another estimate was obtained in place of (5.4). It was based on the following argument. Since, as we see from (5.2), $\rho_{\tau}(s)\leqslant \rho_{\infty,W}(s)$ and
$$ \begin{equation*} \min_{\tau>0} \frac{e^{s\tau}}{\sqrt{2\pi}\, \tau}=\frac{e}{\sqrt{2\pi}} s \end{equation*} \notag $$
(the minimum is attained at $\tau={1}/{s}$), the bound
$$ \begin{equation} J_M(s)\leqslant \frac{e}{\sqrt{2\pi}} s \rho_{\infty,W}(s) \end{equation} \tag{5.5} $$
was deduced from (5.4) on this basis in [13], which is the upper bound in (3.23) (see [13], Lemma 2.1). This argument is erroneous, and estimate (5.5) fails. In fact, we can write the inequality preceding (5.3) as
$$ \begin{equation*} |g(s)|\leqslant \frac{1}{\sqrt{2\pi}}\biggl |\int_{\mathbb{R}} [e^{s\tau} (Fg)(x+i\tau) e^{-isx}\, dx]\biggr|. \end{equation*} \notag $$
To obtain an upper estimate for $|g(s)|$ in terms of $\rho_{\infty,W}(s)$ (this is just what one needs to establish inequality (3.27) from [13]), taking the valid inequality (5.2) into account one must find $\tau>0$ such that the function $e^{s\tau} (Fg)(z+i\tau)$ belongs to the unit ball of the space $H^{\infty}(W)$ with centre zero.

Using (5.1) and the maximum modulus principle, we have

$$ \begin{equation*} I_{\tau}=\|e^{s\tau} (Fg)(z+i\tau)\|_{H^{\infty}(W)} =e^{s\tau} \sup_{z\in \mathbb{C}_{+}} |(Fg)(z+i\tau) W(z)|\leqslant e^{s\tau} A_{\tau}, \end{equation*} \notag $$
where
$$ \begin{equation*} A_{\tau}=\frac{1}{\sqrt{2\pi}\, \tau} \sup_{x>0} \frac{T(x)}{T(|x+i\tau|)}. \end{equation*} \notag $$
We show that $\log(1/A_{\tau})=o(\tau)$ as $\tau\to\infty$. In fact,
$$ \begin{equation*} \frac{1}{A_{\tau}}=\sqrt{2\pi}\, \tau \inf_{x>0} \frac{T(|x+i\tau|)}{T(x)}\leqslant \sqrt{2\pi} M_0 \tau T(\tau). \end{equation*} \notag $$
Since $T(\tau)$ satisfies (3.19), it follows that $\log T(\tau)=o(\tau)$ as $\tau\to\infty$, which proves the claim.

Thus,

$$ \begin{equation*} I_{\tau}\leqslant \exp\biggl[\tau\biggl(s-\frac{1}{\tau} \log \frac{1}{A_{\tau}}\biggr)\biggr]\leqslant 1 \end{equation*} \notag $$
for $0<s\leqslant ({1}/{\tau}) \log ({1}/{A_{\tau}})$. In fact, setting $B_{\tau}=\|(Fg)(z+i\tau)\|_{H^{\infty}(W)}$, so that $I_{\tau}=e^{s\tau}B_{\tau}$, we have $I_{\tau}\leqslant 1$ also for $s$ in the larger half-open interval $J_{\tau}(g)=(0,(1/\tau)\log(1/B_{\tau})]$, whose length satisfies $|J_{\tau}(g)|=o(1)$ as $\tau\to\infty$, for example, if $g(s) \geqslant 0$. Thus, in place of (5.5) we obtain: for each function $g\in C_{I}^{0}(M_n)$ the inequality
$$ \begin{equation} |g(s)|\leqslant \rho_{\infty,W}(s) \end{equation} \tag{5.6} $$
holds asymptotically as $s\to 0$. In view of the above we cannot replace the left-hand side in (5.6) by the extremal function $J_M(s)$ (that is, by $\delta_{\{M_n\}}(s)$). However, we can apply Lemma 4.1 in [13] to the right-hand side; this yields
$$ \begin{equation*} \rho_{\infty,W}(s)\leqslant C \sqrt{P''_{\varphi}(s)}\, e^{-P_{\varphi}(s)} \end{equation*} \notag $$
($C>0$ is a constant); moreover, as $s\to0$,
$$ \begin{equation*} 0\leqslant \log P''_{\varphi}(s)=o(P_{\varphi}(s)). \end{equation*} \notag $$
Now we see from (5.6) that any function $g\in C_{I}^{0}(M_n)$ has the asymptotic estimate
$$ \begin{equation*} \log |g(s)|\leqslant -(1+o(1)) P_{\varphi}(s) \end{equation*} \notag $$
as $s\to0$. As shown in Theorem 9, estimates for $J_M(s)$ are quite different (see (4.3)). So let us see what we can derive from (5.4). For an answer we look at inequalities (5.1) and (5.3). Then for all $s>0$ and $\tau>0$ we have
$$ \begin{equation*} J_M(s)\leqslant \frac{e^{s \tau}}{\sqrt{2\pi}\, \tau} \int_{\mathbb{R}}\frac{dx}{T(|x+i\tau|)}\leqslant \frac{e^{s \tau}}{\sqrt{2\pi}\, \tau T_{0}(\tau)} \int_{\mathbb{R}} \frac{dx}{|x+i\tau|^2}. \end{equation*} \notag $$
Here we bear in mind that
$$ \begin{equation*} T(|x+i\tau|)\geqslant \max_{n\geqslant2} \frac{|x+i\tau|^n}{M_n}\geqslant |x+i\tau|^2 T_{0}(\tau), \end{equation*} \notag $$
where
$$ \begin{equation*} T_{0}(\tau)=\max_{n\geqslant2} \frac{\tau^{n-2}}{M_n}, \qquad \tau>0. \end{equation*} \notag $$
Therefore,
$$ \begin{equation*} J_M(s)\leqslant \frac{e^{s \tau}}{\sqrt{2\pi}\, \tau T_{0}(\tau)} \int_{\mathbb{R}} \frac{dx}{x^2+\tau^2}=\sqrt{\frac{\pi}{2}}\, \frac{e^{s \tau}}{\tau^2 T_{0}(\tau)}. \end{equation*} \notag $$
Since $\tau^2 T_{0}(\tau)=T(\tau)$ for $\tau\geqslant \tau_{0}$, from the last inequality we obtain
$$ \begin{equation*} J_M(s)\leqslant \sqrt{\frac{\pi}{2}} \exp[-(\log T(\tau)-s \tau)], \qquad \tau\geqslant \tau_0. \end{equation*} \notag $$

Let $\tau_s$ be such that

$$ \begin{equation*} \log T(\tau_s)-s \tau_s=\sup_{\tau\geqslant\tau_0}(\log T(\tau)-s \tau). \end{equation*} \notag $$
It is clear that for $0<s\leqslant s_0\leqslant1$ we have
$$ \begin{equation*} \sup_{\tau\geqslant \tau_0}(\log T(\tau)-s \tau)=\sup_{\tau>0}(\log T(\tau)-s \tau) \stackrel{\mathrm{def} }{=} m(s). \end{equation*} \notag $$
Thus,
$$ \begin{equation} J_M(s)\leqslant \sqrt{\frac{\pi}{2}}\, e^{-m(s)} \end{equation} \tag{5.7} $$
for $0<s\leqslant s_0$, where $m(s)=\sup_{\tau>0} (\log T(\tau)-s \tau)$.

Setting $\tau=1/s$ we obviously obtain $m(s)=\log T(\tau_s)-s \tau_s\geqslant \log T(1/s)-1$, so that for $0<s\leqslant s_0$

$$ \begin{equation} J_M(s)\leqslant \sqrt{\frac{\pi}{2}}\, e^{-m(s)}\leqslant e \sqrt{\frac{\pi}{2}} \, T^{-1}\biggl(\frac{1}{s}\biggr). \end{equation} \tag{5.8} $$
We see from (5.8) that the corresponding estimate of $J_M(s)$ for $\tau=1/s$ is not better than (5.7).

Now we claim that

$$ \begin{equation} d_{0} s H_{0}(2s)\leqslant e^{m(s)}\leqslant d_{1} s H_{0}(s). \end{equation} \tag{5.9} $$
In fact,
$$ \begin{equation*} e^{m(s)}=\exp\Bigl[\sup_{r>0}(\log T(r)-sr)\Bigr] =\exp\biggl[\sup_{r>0}\biggl(\sup_{n\geqslant0} \log \frac{r^n}{M_n}-sr\biggr)\biggr]. \end{equation*} \notag $$
Hence
$$ \begin{equation*} e^{m(s)}=\exp\biggl[\sup_{n\geqslant0} \sup_{r>0}\biggl(\log \frac{r^n}{M_n}-sr\biggr)\biggr]. \end{equation*} \notag $$
Setting $\alpha_n(r)=\log({r^n}/{M_n})-sr$ we see that $\alpha'_n(r)=0$ at the point $r_0=n/s$. At this point $\alpha_n(r)$ attains its maximum
$$ \begin{equation*} \alpha_n(r_0)=\log \biggl[M_n^{-1} \biggl(\frac{n}{s}\biggr)^n\biggr]-n. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} e^{m(s)}=\sup_{n\geqslant0} \frac{n^n}{e^n M_n s^n}\leqslant s H_{0}(s). \end{equation*} \notag $$
We have used Stirling’s formula, which shows that $n^n e^{-n}\leqslant n!$ for $n\geqslant0$ (see [23]).

On the other hand, since $\sqrt{n}<2^{n+1}$ for $n\geqslant0$, in a similar way we obtain

$$ \begin{equation*} e^{m(s)}=\sup_{n\geqslant0} \frac{n^n}{e^n M_n s^n}\geqslant s \frac{1}{\sqrt{2 \pi}\, e^{1/12}} H_{0}(2s). \end{equation*} \notag $$
Thus, estimates (5.9) hold indeed with the constants
$$ \begin{equation*} d_0=\frac{1}{e^{1/12} \sqrt{2 \pi}}\quad\text{and} \quad d_1=1. \end{equation*} \notag $$
Hence from (5.7) and (5.9) we obtain
$$ \begin{equation} J_M(s)\leqslant \frac{1}{d_0}\, \sqrt{\frac{\pi}{2}}\, \frac{1}{s H_{0}(2s)}, \qquad 0<s\leqslant s_0. \end{equation} \tag{5.10} $$

Thus, using estimates for Fourier transforms we have obtained an estimate for $J_M(s)$ analogous to the right-hand estimate in (3.17) (although in terms of the associated weight $H_{0}$, which is not very important). Note that in [13] the authors attempted to establish the ostensibly finer asymptotic estimate

$$ \begin{equation} \log J_M(s)\leqslant -(1+o(1)) P_{\varphi}(s), \quad\text{where } \varphi(r)=\log T(r), \end{equation} \tag{5.11} $$
although just an estimate similar to (5.10) suggested itself. Perhaps estimate (5.11) was derived in [13] under the influence of Bang [17]. In [13] a lower estimate of type (5.11) was also obtained, but under the additional assumption (3.20). However, we will see that such an estimate is considerably weaker than the corresponding inequality in Theorem 9.

We can also obtain an estimate of type (5.10) via Taylor’s formula (see above). On the other hand, the main difficulty in Theorem 9 consists in estimating $J_M(s)$ from below in terms of $H_{0}(s)$. We have obtained such an estimate under assumption (4.1); in fact, condition (3.20), on which we commented above, has the same meaning as the convergence of the bi-logarithmic integral (4.1).

It follows from (5.9) and estimates (4.3) in Theorem 9 that

$$ \begin{equation} \log J_M(s)\geqslant -\log K-\log H_{0}(s)\geqslant -\log K_0+\log \frac{2}{s}-m\biggl(\frac{s}{2}\biggr), \qquad K_0=\frac{K}{d_0}. \end{equation} \tag{5.12} $$
But inequalities (5.11) and (5.12) are incompatible, for example, if $m(s/2)=o(P_{\varphi}(s))$ as $s\to0$. In fact, otherwise it would follow from (5.11) and (5.12) that, as $s\to0$,
$$ \begin{equation} (1+o(1)) P_{\varphi}(s)\leqslant \log \frac{1}{J_M(s)}\leqslant (1+o(1)) m\biggl(\frac{s}{2}\biggr). \end{equation} \tag{5.13} $$
However, by definition
$$ \begin{equation*} \begin{gathered} \, P_{\varphi}(s)=\sup_{y>0}(q(y)-sy), \qquad q(y)=\frac{2y}{\pi} \int_{0}^{\infty} \frac{\varphi(t)\, dt}{t^2+y^2}, \\ m(s)=\sup_{y>0}(\varphi(y)-sy)\quad\text{and} \quad \varphi(y)=\log T(y). \end{gathered} \end{equation*} \notag $$

Lemma. Assume that the weight $\varphi(y)$ satisfies $\varphi(y) \asymp \psi(y)$, where $\psi$ is a concave function on $\mathbb{R}_{+}$ such that

$$ \begin{equation} \inf_{A>1}\varliminf_{y\to\infty}\frac{\psi(Ay)}{Ay}>0. \end{equation} \tag{5.14} $$
Then
$$ \begin{equation*} \lim_{y\to\infty} \frac{q(y)}{\varphi(y)}=\infty. \end{equation*} \notag $$

Note that condition (5.14) in this lemma holds, for instance, for the regular sequence $M_n=n!\,[\log (n+e)]^{(1+\beta)n}$, $\beta>0$, for $n\geqslant0$. In this case

$$ \begin{equation*} \varphi(r)=\log T(r) \asymp \frac{r}{(\log r)^{1+\beta}} \qquad\text{as } r\to\infty. \end{equation*} \notag $$
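This asymptotic formula follows from a rough computation (ours; Stirling's formula is used and lower-order terms are dropped). By definition,
$$ \begin{equation*} \log T(r)=\sup_{n\geqslant0} \bigl(n\log r-\log n!-(1+\beta) n \log\log (n+e)\bigr), \end{equation*} \notag $$
and the supremum is attained near the value $n=n(r)$ determined by $\log n\approx \log r-(1+\beta)\log\log r$, that is, $n(r)\asymp r/(\log r)^{1+\beta}$; substituting this value back in gives $\log T(r)\asymp n(r)\asymp r/(\log r)^{1+\beta}$.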

Proof of the lemma. Let
$$ \begin{equation} c_0 \psi(y)\leqslant \varphi(y)\leqslant c_1 \psi(y). \end{equation} \tag{5.15} $$
Without loss of generality we can assume that $M_0=1$. Then $\varphi(y)\equiv0$ in a neighbourhood of zero. Hence the function $q(y)$ is well defined, and for each $A>1$
$$ \begin{equation*} q(y)\geqslant \frac{2y}{\pi} \int_{0}^{Ay} \frac{\varphi(t)}{t^2+y^2}\,dt\geqslant c_0 \frac{2y}{\pi}\, \frac{\psi(Ay)}{Ay} \int_{0}^{Ay} \frac{t\, dt}{t^2+y^2}. \end{equation*} \notag $$
This means that
$$ \begin{equation} q(y)\geqslant c_0 \frac{2}{\pi} \frac{\psi(Ay)}{A} \log A, \qquad A>1. \end{equation} \tag{5.16} $$

Let $r=r(s)$ be the root of the equation

$$ \begin{equation*} \frac{\psi_{0}(y)}{y}=\frac{s}{2}, \quad\text{where } \psi_{0}(y)=c_1 \psi(y). \end{equation*} \notag $$
Then, clearly, $r(s)\uparrow\infty$ as $s\downarrow0$ and
$$ \begin{equation*} m\biggl(\frac{s}{2}\biggr)\leqslant\psi_{0}(r(s))-\frac{s}{2} r(s)\leqslant \psi_{0}(r(s)). \end{equation*} \notag $$
It is also obvious that
$$ \begin{equation*} P_{\varphi}(s)=\sup_{y>0}(q(y)-sy)\geqslant q(r(s))-2\psi_{0}(r(s)). \end{equation*} \notag $$
Hence
$$ \begin{equation} \frac{m(s/2)}{P_{\varphi}(s)}\leqslant \frac{\psi_{0}(r)}{q(r)-2\psi_{0}(r)}=\frac{1}{q(r)/\psi_{0}(r)-2}\quad\text{for } r=r(s). \end{equation} \tag{5.17} $$
However, by (5.15) and (5.16), for each $A>1$
$$ \begin{equation*} \frac{q(r)}{\psi_{0}(r)}\geqslant c_{0} c_{1}^{-1} \frac{2}{\pi}\, \frac{\psi(Ar)}{A \psi(r)} \quad\text{for }r=r(s). \end{equation*} \notag $$
Since $A$ is arbitrary, taking (5.14) into account we obtain
$$ \begin{equation*} \lim_{r\to\infty} \frac{q(r)}{\psi_{0}(r)}=\infty \end{equation*} \notag $$
and, as we see from (5.17), $m(s/2)=o(P_{\varphi}(s))$ as $s\to0$.

Thus we arrive at a contradiction with (5.13), and so the required result is established.

The proof is complete.

§ 6. Applying the main result: an estimate for the distance between the algebraic polynomials and the imaginary exponentials in a weighted space

Following [13], let $C_{T}^{0}$ denote the weighted space of continuous functions $f$ on $\mathbb{R}$ such that

$$ \begin{equation*} \lim_{|t|\to\infty} \frac{f(t)}{T(|t|)}=0, \end{equation*} \notag $$
with the norm
$$ \begin{equation*} \|f\|_{C_{T}^{0}}=\sup_{t\in\mathbb{R}} \frac{|f(t)|}{T(|t|)}, \end{equation*} \notag $$
where the function $T(r)$ satisfies (3.19). Let $X$ denote the closure of the span of the algebraic polynomials $\mathscr{P}$ in $C_{T}^{0}$: $X=\operatorname{Clos}_{C_{T}^{0}} \mathscr{P}$. In view of (3.19) the polynomials are not dense in $C_{T}^{0}$: $\operatorname{Clos}_{C_{T}^{0}} \mathscr{P}\neq C_{T}^{0}$.

Which functions in $C_{T}^{0}$ can actually be approximated by polynomials in this weighted space? It is known that the limit function must be the restriction of an entire function of minimal exponential type to $\mathbb{R}$ (see [24], Supplements and problems, §§ 12 and 13).

The following problem was discussed in [13]: what is the asymptotic behaviour as $s\to0$ of the quantity

$$ \begin{equation*} d_{T}(s)=\operatorname{dist}_{C_{T}^{0}}(X,e_{s})=\operatorname{dist}_{C_{T}^{0}}(\mathscr{P},e_{s}),\quad \text{where }e_{s}(t)=e^{ist}\,? \end{equation*} \notag $$
Theorem 2.2 in [13] claims that if $\varphi(r)=\log T(r)$ satisfies condition (3.19), then
$$ \begin{equation} \log d_{T}(s)=-(1+o(1)) P_{\varphi}(s). \end{equation} \tag{6.1} $$

However, this result was deduced from (3.21) with the help of Lemma 2.2 in [13]: let $W$ be an outer function in $\mathbb{C}_{+}$ with logarithmic weight $\varphi(t)=\log T(|t|)$ (in this case $|W(t)|=T(|t|)$, $t\in\mathbb{R}$, where the function $W$ was defined above). Then

$$ \begin{equation} \sqrt{2 \pi} \rho_{1,W}(s)\leqslant d_{T}(s)\leqslant \frac{e}{\sqrt{2 \pi}} s \rho_{\infty,W}(s). \end{equation} \tag{6.2} $$
But (6.2) is based on Lemma 2.1 in [13], which, as already mentioned, is incorrect. So we now derive valid estimates for $d_{T}(s)$.

The general form of a continuous linear functional on $C_{T}^{0}$ is as follows:

$$ \begin{equation*} \mu^*(f)=\int_{\mathbb{R}} \frac{f(t)}{T(|t|)}\, d\mu(t), \end{equation*} \notag $$
where $\mu(t)$ is a function of bounded variation on $\mathbb{R}$; in addition,
$$ \begin{equation*} \|\mu^*\|_{T}=\int_{\mathbb{R}} |d\mu(t)|<\infty. \end{equation*} \notag $$
The function $\mu(t)$ gives rise to a finite complex measure $\mu$ on the whole line. By the Hahn-Banach theorem
$$ \begin{equation*} d_{T}(s)=\sup_{\substack{\mu^*\in \mathscr{P}^{\perp} \\ \|\mu^*\|_{T}\leqslant1}} |\mu^*(e_{s})|, \end{equation*} \notag $$
where $\mathscr{P}^{\perp}$ is the annihilator of the subspace $\operatorname{Clos}_{C_{T}^{0}} \mathscr{P}$ (that is, the set of continuous linear functionals on $C_{T}^{0}$ that vanish on $\operatorname{Clos}_{C_{T}^{0}} \mathscr{P}$). Thus,
$$ \begin{equation} d_{T}(s)=\sqrt{2 \pi} \sup_{\substack{\mu^*\in \mathscr{P}^{\perp}\\ \|\mu^*\|_{T}\leqslant1}} |(F \nu)(s)|, \end{equation} \tag{6.3} $$
where
$$ \begin{equation*} d \nu(t)=\frac{1}{T(|t|)} \, d \mu(t). \end{equation*} \notag $$
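Indeed, with the normalization
$$ \begin{equation*} (F \nu)(s)=\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ist}\, d \nu(t), \end{equation*} \notag $$
which agrees with the formulae below, we have $\mu^*(e_{s})=\displaystyle\int_{\mathbb{R}} e^{ist}\, d \nu(t)=\sqrt{2 \pi}\, (F \nu)(s)$, which explains the factor $\sqrt{2 \pi}$ in (6.3).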
It is obvious that $F \nu$ (the Fourier transform of the measure $\nu$) is a function in $C^{\infty}(\mathbb{R})$ and, in addition,
$$ \begin{equation*} (F \nu)^{(n)}(0)=\frac{i^n}{\sqrt{2\pi}} \int_{\mathbb{R}} t^n\, d \mu(t)=0, \qquad n\geqslant0. \end{equation*} \notag $$
Moreover,
$$ \begin{equation} \begin{aligned} \, \notag |(F \nu)^{(n)}(s)| &=\frac{1}{\sqrt{2\pi}} \biggl|\int_{\mathbb{R}} \frac{(it)^n}{T(|t|)}\, d \mu(t)\biggr | \\ &\leqslant \frac{1}{\sqrt{2\pi}} M_n^{c} \|\mu^*\|_{T}\leqslant \frac{1}{\sqrt{2\pi}} M_n \|\mu^*\|_{T}, \qquad n\geqslant0 \end{aligned} \end{equation} \tag{6.4} $$
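The middle inequality here reduces to the pointwise bound $|t|^n/T(|t|)\leqslant M_n^{c}$, $t\in\mathbb{R}$. A sketch, under the usual assumption (as in § 3) that $T(r)=\sup_{k\geqslant0} r^k/M_k$:
$$ \begin{equation*} \biggl|\int_{\mathbb{R}} \frac{(it)^n}{T(|t|)}\, d \mu(t)\biggr|\leqslant \sup_{t\in\mathbb{R}} \frac{|t|^n}{T(|t|)} \int_{\mathbb{R}} |d \mu(t)|=\Bigl(\sup_{r>0} \frac{r^n}{T(r)}\Bigr) \|\mu^*\|_{T}, \end{equation*} \notag $$
and $\sup_{r>0} r^n/T(r)$ is precisely the regularization $M_n^{c}$ described in the parenthetical remark below (cf. [18]).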
($\{M_n^{c}\}$ is the regularization of the sequence $\{M_n\}$ by logarithms; $M_n^{c}\leqslant M_n$, $n\geqslant0$). Hence, taking (6.3), (6.4) and the upper estimate in (4.3) into account, we obtain
$$ \begin{equation} d_{T}(s)\leqslant J_M(s)\leqslant \frac{1}{s H_{0}(s)}, \qquad s\in I. \end{equation} \tag{6.5} $$
We emphasize that the upper estimate in (6.5) for $d_{T}(s)$ is obtained under the minimal assumption (3.19) on $\{M_n\}$ (this sequence is not necessarily regular).

Now we find a lower estimate for $d_{T}(s)$.

For each functional $\mu^*\in \mathscr{P}^{\perp}$ and any algebraic polynomial $P\in\mathscr{P}$ we have

$$ \begin{equation*} (F \nu)(s)=\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} [e^{ist}-P(t)] \,d \nu(t). \end{equation*} \notag $$

Let $g$ be an arbitrary function in $C_{I}^{0}(M_{n-2})$, let $M_{-2}=M_{-1}=M_{0}$, and let $f(x)=(F^{-1}g)(x)$. Then $f\in C^{\infty}(\mathbb{R})$ and $f(x)\equiv0$ for $x\leqslant0$. We also have

$$ \begin{equation*} |f(x)|\leqslant \frac{1}{\sqrt{2 \pi}} \begin{cases} M_{0}, &|x|\leqslant1, \\ \dfrac{1}{x^2 T(|x|)}, &|x|>1. \end{cases} \end{equation*} \notag $$

Set $d \nu(t)=c f(t) \, dt$, where $c>0$ is a normalizing coefficient (to be specified in what follows). Clearly, $\mu^*\in \mathscr{P}^{\perp}$, where

$$ \begin{equation*} \mu^*(\varphi)=\int_{\mathbb{R}} \varphi(t)\, d\nu(t), \qquad d \nu(t)=\frac{d \mu(t)}{T(|t|)}\quad\text{and} \quad \varphi\in C_{T}^{0}. \end{equation*} \notag $$
We have
$$ \begin{equation*} \|\mu^*\|_{T}=\int_{\mathbb{R}} |d \mu (t)|=c \int_{|t|\leqslant1} |f(t)| T(|t|)\, dt+c \int_{|t|>1} |f(t)| T(|t|) \,dt. \end{equation*} \notag $$
Hence we obtain
$$ \begin{equation*} \|\mu^*\|_{T}\leqslant \frac{2c}{\sqrt{2 \pi}}\, \frac{M_0}{M_0'}+\frac{2c}{\sqrt{2 \pi}}=\sqrt{\frac{2}{\pi}} \biggl(\frac{M_{0}}{M_{0}'}+1\biggr) c. \end{equation*} \notag $$
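A sketch of this step, under the same assumption $T(r)=\sup_{k\geqslant0} r^k/M_k$, so that $T(|t|)\leqslant 1/M_{0}'$ for $|t|\leqslant1$, where $M_{0}'=\min_{n\geqslant0} M_n$: by the above bound on $|f|$,
$$ \begin{equation*} c \int_{|t|\leqslant1} |f(t)| T(|t|)\, dt\leqslant \frac{2c}{\sqrt{2 \pi}}\, \frac{M_{0}}{M_{0}'}\quad\text{and} \quad c \int_{|t|>1} |f(t)| T(|t|)\, dt\leqslant \frac{c}{\sqrt{2 \pi}} \int_{|t|>1} \frac{dt}{t^2}=\frac{2c}{\sqrt{2 \pi}}. \end{equation*} \notag $$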
Set
$$ \begin{equation*} c=\sqrt{\frac{\pi}{2}}\, \frac{M_{0}'}{M_{0}'+M_{0}}. \end{equation*} \notag $$
Then
$$ \begin{equation*} \|\mu^*\|_{T}\leqslant1, \qquad F \nu=cg, \quad g\in C_{I}^{0}(M_{n-2}). \end{equation*} \notag $$

Therefore, for each function $g\in C_{I}^{0}(M_{n-2})$

$$ \begin{equation} c\sqrt{2 \pi}\, |g(s)|= \sqrt{2 \pi}\, |(F \nu)(s)|\leqslant \sup_{t\in\mathbb{R}} \biggl|\frac{e^{ist}-P(t)}{T(|t|)}\biggr|. \end{equation} \tag{6.6} $$
It follows directly from (6.6) that
$$ \begin{equation} c \sqrt{2 \pi}\, J_{M'}(s)\leqslant d_{T}(s), \qquad s\in I, \end{equation} \tag{6.7} $$
where $M'=\{M_{n-2}\}$ and $M_{-2}=M_{-1}=M_{0}$. Now we also assume that the sequence $\{M_n\}$ is regular in the sense of Dyn'kin and satisfies
$$ \begin{equation} \sum_{n=0}^{\infty} \frac{M_n}{M_{n+1}}<\infty. \end{equation} \tag{6.8} $$
Then by Theorem 9
$$ \begin{equation*} J_M(s)\geqslant \frac{1}{K H_{0}(s)}, \end{equation*} \notag $$
where $H_{0}$ is the associated weight and the positive constant $K$ depends only on the function $H_{0}$. Hence, taking (6.7) into account we obtain
$$ \begin{equation} \frac{c \sqrt{2 \pi}\, s^2}{K H_{0}(s)}\leqslant d_{T}(s), \qquad s\in I. \end{equation} \tag{6.9} $$

Thus, in view of (6.5), (6.7) and (6.9) we have the following result.

Theorem 10. Let $\{M_n\}$ be a regular sequence satisfying the condition of nonquasianalyticity (6.8). Then the following estimates hold:

1) $c \sqrt{2 \pi}\, J_{M'}(s)\leqslant d_{T}(s)\leqslant J_M(s)$;

2) $\dfrac{c \sqrt{2 \pi}\, s^2}{K H_{0}(s)}\leqslant d_{T}(s)\leqslant \dfrac{1}{s H_{0}(s)}$, $s\in I$.

Here $K$ is the constant from Theorem 9 and

$$ \begin{equation*} c=\sqrt{\frac{\pi}{2}}\, \frac{M_{0}'}{M_{0}'+M_{0}}, \end{equation*} \notag $$
where
$$ \begin{equation*} M'=\{M_{n-2}\}, \qquad M_{-2}=M_{-1}=M_{0}\quad\textit{and} \quad M_{0}'=\min_{n\geqslant0} M_n. \end{equation*} \notag $$
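A model example: the sequences $M_n=(n!)^{a}$ with $a>1$, which are commonly taken as model regular sequences, satisfy the nonquasianalyticity condition (6.8), since
$$ \begin{equation*} \sum_{n=0}^{\infty} \frac{M_n}{M_{n+1}}=\sum_{n=0}^{\infty} \frac{1}{(n+1)^{a}}<\infty \quad\text{for } a>1. \end{equation*} \notag $$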

Thus, under the assumptions of Theorem 10, as $s\to0$,

$$ \begin{equation*} \log d_{T}(s)=-m_{0}(s)+O\biggl(\log \frac{1}{s}\biggr), \quad\text{where } m_{0}(s)=\log H_{0}(s). \end{equation*} \notag $$
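Indeed, taking logarithms in assertion 2) of Theorem 10 we obtain
$$ \begin{equation*} \log(c \sqrt{2 \pi})-\log K-2 \log\frac{1}{s}-m_{0}(s)\leqslant \log d_{T}(s)\leqslant \log\frac{1}{s}-m_{0}(s), \end{equation*} \notag $$
and both correction terms are $O(\log(1/s))$ as $s\to0$.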
However, (6.1) involves the function $P_{\varphi}(s)$, which grows considerably more rapidly than $m_{0}(s)$: as we have seen, $m_{0}(s)=o(P_{\varphi}(s))$ as $s\to0$.

Acknowledgements

The authors are grateful to the participants of the seminar "Complex and harmonic analysis" at the Institute of Mathematics with Computing Centre of the Ufa Federal Research Centre of the Russian Academy of Sciences for discussing the main results of this paper. The authors are also obliged to the referees for useful comments.


Bibliography

1. N. Levinson, Gap and density theorems, Amer. Math. Soc. Colloq. Publ., 26, Amer. Math. Soc., New York, 1940, viii+246 pp.
2. V. P. Gurarii, "On Levinson's theorem concerning normal families of analytic functions", Investigations in linear operators and function theory, Pt. 1, Semin. Math., Springer, Boston, MA, 1972, 124–127
3. N. Sjöberg, "Sur les minorantes subharmoniques d'une fonction donnée", Comptes rendus du IXe congrès des mathématiciens scandinaves (Helsinki 1938), Helsingfors, 1939, 309–319
4. T. Carleman, "Extension d'un théorème de Liouville", Acta Math., 48:3–4 (1926), 363–366
5. F. Wolf, "On majorants of subharmonic and analytic functions", Bull. Amer. Math. Soc., 48:12 (1942), 925–932
6. P. Koosis, The logarithmic integral, v. I, Cambridge Stud. Adv. Math., 12, Cambridge Univ. Press, Cambridge, 1988, xvi+606 pp.; corr. reprint of the 1988 original, 1998, xviii+606 pp.
7. Y. Domar, "On the existence of a largest subharmonic minorant of a given function", Ark. Mat., 3:5 (1958), 429–440
8. A. Borichev and H. Hedenmalm, "Completeness of translates in weighted spaces on the half-plane", Acta Math., 174:1 (1995), 1–84
9. Y. Domar, "Uniform boundedness in families related to subharmonic functions", J. London Math. Soc. (2), 38:3 (1988), 485–491
10. A. M. Gaĭsin and I. G. Kinzyabulatov, "A Levinson-Sjöberg type theorem. Applications", Sb. Math., 199:7 (2008), 985–1007
11. E. M. Dyn'kin, "Growth of an analytic function near its set of singular points", J. Soviet Math., 4:4 (1975), 438–440
12. E. M. Dyn'kin, "The pseudoanalytic extension", J. Anal. Math., 60 (1993), 45–70
13. V. Matsaev and M. Sodin, "Asymptotics of Fourier and Laplace transforms in weighted spaces of analytic functions", St. Petersburg Math. J., 14:4 (2003), 615–640
14. V. Matsaev, Uniqueness, completeness and compactness theorems related to classical quasianalyticity, Kandidat dissertation, Physics and Technology Institute of Low Temperatures of the Academy of Sciences of the Ukr.SSR, Khar'kov, 1964 (Russian)
15. E. M. Dyn'kin, "Functions with given estimate for $\partial f/\partial\overline z$, and N. Levinson's theorem", Math. USSR-Sb., 18:2 (1972), 181–189
16. N. Nikolski, "Yngve Domar's forty years in harmonic analysis", Festschrift in honour of Lennart Carleson and Yngve Domar (Uppsala 1993), Acta Univ. Upsaliensis Skr. Uppsala Univ. C Organ. Hist., 58, Uppsala Univ., Uppsala, 1995, 45–78
17. T. Bang, "The theory of metric spaces applied to infinitely differentiable functions", Math. Scand., 1 (1953), 137–152
18. S. Mandelbrojt, Séries adhérentes. Régularisation des suites. Applications, Gauthier-Villars, Paris, 1952, xiv+277 pp.
19. A. M. Gaisin, "Extremal problems in nonquasianalytic Carleman classes. Applications", Sb. Math., 209:7 (2018), 958–984
20. A. M. Gaisin, "Dirichlet series with real coefficients that are unbounded on the positive half-axis", Sb. Math., 198:6 (2007), 793–815
21. A. M. Gaisin, "Levinson's condition in the theory of entire functions: equivalent statements", Math. Notes, 83:3 (2008), 317–326
22. P. Koosis, Introduction to $H^p$ spaces, with an appendix on Wolff's proof of the corona theorem, London Math. Soc. Lecture Note Ser., 40, Cambridge Univ. Press, Cambridge–New York, 1980, xv+376 pp.
23. G. M. Fichtenholz, A course of calculus, v. II, 8th ed., Fizmatlit, Moscow, 2006, 864 pp. (Russian); German transl., G. M. Fichtenholz, Differential- und Integralrechnung, v. II, Hochschulbücher für Math., 62, 10. Aufl., VEB Deutscher Verlag der Wissenschaften, Berlin, 1990, 732 pp.
24. N. I. Akhiezer, Lectures on approximation theory, 2nd ed., Nauka, Moscow, 1965, 407 pp. (Russian); German transl., N. I. Achieser, Vorlesungen über Approximationstheorie, Math. Lehrbücher und Monogr., II, 2. verbesserte Aufl., Akademie-Verlag, Berlin, 1967, xiii+412 pp.
