Izvestiya: Mathematics, 2023, Volume 87, Issue 2, Pages 284–325
DOI: https://doi.org/10.4213/im9296e

Multivariate tile $\mathrm{B}$-splines

T. I. Zaitseva$^{a,b}$

$^a$ Lomonosov Moscow State University, Faculty of Mechanics and Mathematics
$^b$ Moscow Center for Fundamental and Applied Mathematics
Abstract: Tile $\mathrm{B}$-splines in $\mathbb R^d$ are defined as autoconvolutions of indicators of tiles, which are special self-similar compact sets whose integer translates tile the space $\mathbb R^d$. These functions are not piecewise polynomial; however, being direct generalizations of the classical $\mathrm{B}$-splines, they enjoy many of their properties and have some advantages. In particular, the Hölder exponents of tile $\mathrm{B}$-splines are evaluated exactly and are shown, in some cases, to exceed those of the classical $\mathrm{B}$-splines. Orthonormal systems of wavelets based on tile $\mathrm{B}$-splines are constructed, and estimates of their exponential decay are obtained. The efficiency of tile $\mathrm{B}$-splines in applications is demonstrated by the example of subdivision schemes for surfaces. This efficiency is achieved due to high regularity, fast convergence, and the small number of coefficients in the corresponding refinement equation.
Keywords: $\mathrm{B}$-spline, self-affine tiling, tile, subdivision scheme, wavelet, Hölder regularity, joint spectral radius.
This research was supported by the Russian Science Foundation (project no. 21-11-00131) at Lomonosov Moscow State University.
Received: 29.11.2021
Revised: 11.03.2022
Document Type: Article
UDC: 517.965+517.518.36+514.174.5
Language: English
Original paper language: Russian

§ 1. Introduction

$\mathrm{B}$-splines, which form one of the best known and simplest piecewise-polynomial bases, have been extensively studied (see, for example, [1]). $\mathrm{B}$-splines were found useful in the construction of orthogonal Battle–Lemarie wavelets [2]–[4], in efficient algorithms of piecewise-polynomial approximation [1], [5]–[8], in approximate formulas for Gaussian distributions, in formulas for volumes of slices of the multivariate cube, etc. Various applications deal with $\mathrm{B}$-splines with irregular knots, penalized $\mathrm{B}$-splines, $\mathrm{B}$-splines defined on various domains, etc. In the multivariate case, $\mathrm{B}$-splines can be constructed in several ways (see [9]–[11]), of which the most popular one is the direct product of univariate $\mathrm{B}$-splines. The common property of this and other approaches is that the $\mathrm{B}$-splines are actually splines (that is, piecewise-polynomial functions).

However, there is another natural generalization of $\mathrm{B}$-splines that exploits the fact that a univariate $\mathrm{B}$-spline of order $k$ is the convolution of $(k+1)$ indicator functions of the segment $[0, 1]$. In the present paper, we consider tile $\mathrm{B}$-splines defined as autoconvolutions of the indicator of a special compact set (a tile). A similar construction was employed in [12], [13] (see Remark 2 for details). Each tile is a union of several of its contractions under affine operators with the same linear part. In wavelet theory, tiles serve as a key ingredient in an efficient approach to the construction of multivariate Haar systems, as developed by Lagarias, Wang, Gröchenig, Haas, and others ([14]–[16]). Of special importance is the case where a tile consists of two contractions; such tiles are called two-digit tiles or 2-tiles. The systems based on two-digit tiles are, in a sense, the closest to those in the univariate case. On the plane, there are three types of affinely non-equivalent 2-tiles, while in $\mathbb{R}^3$ there are seven types. In the present paper, we consider, in detail, $\mathrm{B}$-splines based on these three classes of planar 2-tiles (we call them Square, Dragon, and Bear tile splines).

Tile $\mathrm{B}$-splines inherit many advantages of the classical $\mathrm{B}$-splines, however, their use involves certain difficulties. We mention the following questions in this regard.

1) How to compute values of tile $\mathrm{B}$-splines and of their derivatives? Unlike the classical $\mathrm{B}$-splines, the tile $\mathrm{B}$-splines are not defined explicitly, and their straightforward evaluation involves calculation of convolutions, that is, numerical integration.

2) How to analyze their properties, in particular, regularity, which is a key parameter in many applications (for example, for the wavelet-Galerkin method)?

3) How effective are tile $\mathrm{B}$-splines in applications? Is it possible to construct wavelet systems and subdivision schemes (SubD algorithms) generated by such splines?

In the present paper, we make an attempt to answer all these questions and to apply tile $\mathrm{B}$-splines to the construction of wavelets and the design of subdivision schemes in geometric surface modelling.

Our results would be of purely theoretical interest if tile $\mathrm{B}$-splines had no advantages over the classical $\mathrm{B}$-splines and were not effective in applications. However, some of the constructed two-digit $\mathrm{B}$-splines (for example, the so-called Bear-3 and Bear-4) are, surprisingly, smoother than the corresponding classical $\mathrm{B}$-splines (Theorem 5). Thus, the standard (direct product) $\mathrm{B}$-splines do not have the highest regularity. The rate of convergence of some numerical algorithms based on $\mathrm{B}$-splines (such as the cascade algorithm for computation of the coefficients of wavelet decompositions and subdivision algorithms for curve and surface modelling [17], [18]) depends on the regularity of the $\mathrm{B}$-splines. In this regard, tile $\mathrm{B}$-splines provide a faster convergence rate and better quality of limit functions and surfaces.

Moreover, we show that one class of two-digit $\mathrm{B}$-splines (the so-called Square-$(n+1)$ $\mathrm{B}$-splines) coincides with the classical multivariate $\mathrm{B}$-spline of order $n$, but its refinement equation has far fewer non-zero coefficients. This observation reduces the complexity of one iteration in numerical algorithms (Theorem 7).

The present paper is organized as follows: in § 2, we recall the definition of tiles and their properties; § 3 is devoted to the univariate $\mathrm{B}$-splines. In § 4, we define tile $\mathrm{B}$-splines and prove their fundamental properties. In § 5, § 6, tile $\mathrm{B}$-splines are orthogonalized in the two-digit case following a standard approach. Further, in § 7, we find the corresponding wavelet function, that is, we construct a wavelet basis based on two-digit tile $\mathrm{B}$-splines, similarly to Battle–Lemarie wavelets. Since the wavelet function after orthogonalization is no longer compactly supported, it is important to analyze its rate of decay. This rate of decay is estimated in § 8 via multivariate complex analysis (Laurent series, Reinhardt domains). This gives a good approximation of the tile wavelet function by compactly supported functions. In § 9, Hölder exponents of tile $\mathrm{B}$-splines are evaluated. Finally, in § 10, tile $\mathrm{B}$-splines are applied in subdivision schemes for surface modelling. These theoretical results provide a basis for our software package for the construction of $\mathrm{B}$-splines, wavelets, and smoothness calculation [19].

§ 2. Tiles

Every integer matrix $M \in \mathbb{Z}^{d \times d}$ is known to generate a partition of the lattice $\mathbb{Z}^d$ into $m=|{\det M}|$ equivalence classes $y \sim x \Leftrightarrow y-x \in M\mathbb{Z}^d$. Choosing one representative $d_i \in \mathbb{Z}^d$ from each coset, we obtain a digit set $D(M)=\{d_0, \dots, d_{m-1}\}$. Assume that $d_0=0$. In the univariate case, when $M$ is a number, $D(M)$ is a digit set of the base-$m$ number system. Thus, an integer matrix and a proper digit set define a “number system” in $\mathbb{Z}^d$. Further, we assume that the matrix $M$ is expanding, that is, all its eigenvalues exceed 1 in absolute value. In this case, similarly to the unit segment, we can consider the following set in this base-$m$ number system:

$$ \begin{equation*} G=\biggl\{ \sum_{k=1}^{\infty}M^{-k}\Delta_k \biggm| \Delta_k \in D(M)\biggr\}. \end{equation*} \notag $$

It is known (see, for example, [15], [16]) that, for each expanding integer matrix $M$ and any digit set $D(M)$, the set $G$ is compact, has non-empty interior, and possesses the following properties:

1) the Lebesgue measure $\mu(G)$ is a positive integer;

2) (self-affinity) $G=\bigcup_{\Delta \in D(M)} {M^{-1}(G+\Delta)}$, all the sets $M^{-1}(G+ \Delta)$ have pairwise intersections of measure zero;

3) the indicator $\varphi=\chi_G(x)$ of the set $G$ satisfies the refinement equation almost everywhere on $\mathbb{R}^d$

$$ \begin{equation*} \varphi(x)={\sum_{\Delta \in D(M)}{\varphi(Mx-\Delta)}}, \qquad x \in \mathbb{R}^d; \end{equation*} \notag $$

4) $\sum_{k \in \mathbb{Z}^d}\varphi(x+k) \equiv \mu(G)$ almost everywhere, that is, the integer shifts of $\varphi$ cover $\mathbb{R}^d$ with $\mu(G)$ layers;

5) $\mu(G)=1$ if and only if the function system $\{\varphi(\,{\cdot}\,{+}\,k)\}_{k \in \mathbb{Z}^d}$ is orthonormal.

The last property allows us to introduce the following notion.

Definition 1. Let us fix an expanding matrix $M \in \mathbb{Z}^{d\times d}$ and a digit set $D(M)= \{d_0, \dots, d_{m-1}\}$. If the measure of the set

$$ \begin{equation*} G=\biggl\{ \sum_{k= 1}^{\infty}M^{-k}\Delta_k \biggm| \Delta_k \in D(M)\biggr\} \end{equation*} \notag $$
is one, that is, if all integer shifts of $G$ form a disjoint (up to measure zero) covering of $\mathbb{R}^d$, then $G$ is called a tile.

In some sense, a tile is a multivariate generalization of the segment $[0, 1]$ for the “number system” with matrix base $M$.

Example 1. For the univariate case $d=1$, if $M=2$, we can choose $D(M)=\{0, 1\}$. Then

$$ \begin{equation*} G=\biggl\{ \sum_{k=1}^{\infty}2^{-k}\Delta_k \biggm| \Delta_k \in \{0,1\}\biggr\}=[0, 1]. \end{equation*} \notag $$
The segment $[0, 1]$ satisfies all the above properties. In particular, its indicator function $\varphi(x)=\chi_{[0, 1]}$ satisfies the refinement equation $\varphi(x)=\varphi(2x)+ \varphi(2x-1)$. The segment $[0, 1]$ is indeed a tile, its integer shifts tile the whole line $\mathbb{R}$.


Figure 1. The tile $G$ from Example 2 and its properties: (a) the set $G$; (b) the self-affinity of $G$; (c) tiling of the plane

Example 2. Consider the matrix $M=\left(\begin{smallmatrix}1 & 2\\ 1 & -1\end{smallmatrix}\right)$, $m= |{\det M}|=3$. A possible choice of digits is $D(M)=\left\{\left(\begin{smallmatrix}0 \\ 0\end{smallmatrix}\right), \left(\begin{smallmatrix}1 \\ 0\end{smallmatrix}\right), \left(\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right)\right\}$. The corresponding set $G$ is depicted in Fig. 1, (a). Figure 1, (b) illustrates the self-affinity of the set $G$, that is, a partition of $G$ into $m=3$ affinely-similar copies. In this case, $G$ is a tile, and a tiling of the plane with its integer shifts is depicted in Fig. 1, (c). The indicator $\varphi=\chi_{G}$ satisfies the refinement equation

$$ \begin{equation*} \varphi(x)=\varphi(Mx)+\varphi\biggl(Mx-\begin{pmatrix}1 \\ 0\end{pmatrix}\biggr) + \varphi\biggl(Mx-\begin{pmatrix}0 \\ 1\end{pmatrix}\biggr). \end{equation*} \notag $$
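For illustration (this sketch is ours and is not part of the software package [19]), the set $G$ from Example 2 can be approximated numerically by truncating the digit expansion $\sum_{k=1}^{K}M^{-k}\Delta_k$ at a finite depth $K$; scatter-plotting the resulting points reproduces an approximation of Fig. 1, (a).

```python
import numpy as np

# Approximate the tile G of Example 2 by all finite digit expansions of depth K.
M = np.array([[1, 2],
              [1, -1]])                        # expanding matrix, |det M| = 3
D = [np.array([0, 0]), np.array([1, 0]), np.array([0, 1])]   # digit set D(M)
Minv = np.linalg.inv(M)

points = np.zeros((1, 2))                      # depth 0: only the origin
K = 9                                          # expansion depth; 3^9 = 19683 points
for _ in range(K):
    # one refinement step, reflecting the self-affinity G = U_Delta M^{-1}(G + Delta)
    points = np.vstack([(points + d) @ Minv.T for d in D])

print(points.shape)                            # (19683, 2), an approximation of G
```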


Each tile generates a multivariate Haar basis for $\mathbb{R}^d$ (the construction is described, for example, in [14]–[16], [20]). Unlike the univariate case, in which the Haar system is generated by shifts and dilations of a single function, the multivariate case requires $m-1= |{\det M}|-1$ generating functions. The case $|{\det M}|=2$ is especially interesting since in this case only one generating function is required. In what follows, we will mainly restrict ourselves to this case.

§ 3. The classical $\mathrm{B}$-splines

Recall that the univariate cardinal $\mathrm{B}$-spline of order $n$, denoted $B_n$, is the convolution of $n+1$ functions $\chi_{[0, 1]}$ (see Fig. 2). In particular, $B_0=\chi_{[0, 1]}$, $B_1=\chi_{[0, 1]} * \chi_{[0, 1]}$, etc.

The $\mathrm{B}$-spline $B_n$ of order $n$ belongs to $C^{n-1}(\mathbb{R})$; it is an algebraic polynomial of degree $n$ on each interval $[k, k+1)$, $k=0, \dots, n$, and $B_n$ vanishes outside $[0, n+1]$.

Recall that a univariate refinable function $\varphi(x)$ with dilation coefficient $2$ is a solution of the univariate refinement equation

$$ \begin{equation} \varphi(x)=\sum_{k=0}^{N} c_{k} \varphi(2x-k), \end{equation} \tag{3.1} $$
and the mask of this equation is the trigonometric polynomial
$$ \begin{equation*} a(\xi)=\frac{1}{2} \sum_{k= 0}^{N} c_{k} e^{-2 \pi i k \xi}. \end{equation*} \notag $$

We always assume that $\int_{\mathbb{R}} \varphi(x)\, dx \ne 0$, and normalize the solutions of the refinement equation so that $\int_{\mathbb{R}} \varphi(x)\, dx=1$. Applying the Fourier transform to both sides of equation (3.1), we obtain

$$ \begin{equation} \widehat{\varphi}(2 \xi)=a(\xi) \widehat{\varphi}(\xi). \end{equation} \tag{3.2} $$
Substituting $\xi=0$ and using $\widehat{\varphi}(0)=\int_{\mathbb{R}} \varphi(x) \, dx=1$, we get $\widehat{\varphi}(0)=a(0) \widehat{\varphi}(0)$. Hence $a(0)=1$ and $\sum_{k=0}^N c_{k}=2$. If two functions $\varphi_1$, $\varphi_2$ satisfy refinement equations with masks $a_1(\xi)$, $a_2(\xi)$, then by (3.2) their convolution also satisfies the refinement equation with mask $a_1(\xi)a_2(\xi)$. Since $\varphi(x)=\chi_{[0, 1]}$ satisfies the refinement equation $\varphi(x)= \varphi(2x)+\varphi(2x-1)$ with mask $a_0(\xi)=(1+e^{-2\pi i \xi})/2$ (Example 1), the $\mathrm{B}$-spline $B_n$, which is a convolution of $n+1$ functions $\varphi(x)=\chi_{[0, 1]}$, also satisfies the refinement equation with mask $a_n(\xi)=a_0^{n+1}(\xi)$. Thus, $B_n$ is a solution of the refinement equation with mask $((1+e^{-2\pi i \xi})/2)^{n+1}$. The coefficients of this equation are $c_0=2^{-n}\binom{n+1}{0}$, $c_1=2^{-n}\binom{n+1}{1}, \dots, c_{n+1}= 2^{-n}\binom{n+1}{n+1}$, where $\binom{n+1}{k}$ are the binomial coefficients.
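For illustration (our sketch, not part of the package [19]), these coefficients can be generated directly from the mask identity $a_n=a_0^{n+1}$: multiplying masks corresponds to convolving their coefficient lists.

```python
import numpy as np
from math import comb

def bspline_refinement_coeffs(n):
    """Coefficients c_k of B_n(x) = sum_k c_k B_n(2x - k), obtained from the mask a_0^{n+1}."""
    c = np.array([1.0, 1.0])                   # coefficients of chi_[0,1]: phi(x) = phi(2x) + phi(2x-1)
    for _ in range(n):
        # multiplying two masks (1/2) sum_k c_k e^{-2 pi i k xi} amounts to
        # convolving their coefficient lists and dividing by 2
        c = np.convolve(c, [1.0, 1.0]) / 2.0
    return c                                   # c_k = 2^{-n} binom(n+1, k), k = 0, ..., n+1

for n in range(4):
    c = bspline_refinement_coeffs(n)
    assert np.allclose(c, [comb(n + 1, k) / 2**n for k in range(n + 2)])
    print(n, c, c.sum())                       # the coefficients always sum to 2
```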

The classical generalization of $\mathrm{B}$-splines to multivariate functions is the direct product of several univariate $\mathrm{B}$-splines:

$$ \begin{equation*} B_n(x_1, \dots, x_d)=B_n(x_1) \cdots B_n(x_d). \end{equation*} \notag $$
This function also satisfies a refinement equation. Its mask is $a_n(\xi_1, \dots, \xi_d)=a_n(\xi_1) \cdots a_n(\xi_d)$. In particular, in the two-dimensional case, the $\mathrm{B}$-spline of zero order has the form
$$ \begin{equation*} B_0(x, y)=\chi_{[0, 1]}(x) \chi_{[0, 1]}(y)=\chi_{[0, 1]^2}(x, y). \end{equation*} \notag $$
Its refinement equation can be obtained by multiplication of the univariate refinement equations
$$ \begin{equation*} \begin{aligned} \, B_0(x, y) &=\bigl(\chi_{[0, 1]}(2x)+\chi_{[0, 1]}(2x-1)\bigr)\bigl(\chi_{[0, 1]}(2y)+\chi_{[0, 1]}(2y-1)\bigr) \\ &= B_0(2x, 2y)+B_0(2x-1, 2y)+B_0(2x, 2y-1)+B_0(2x-1, 2y-1), \end{aligned} \end{equation*} \notag $$
or by finding its coefficients from the mask $a_0(\xi_1, \xi_2)$. The equation is illustrated in Fig. 3. The $\mathrm{B}$-spline $B_n(x, y)$ of arbitrary order $n$ is the convolution of $(n+1)$ $\mathrm{B}$-splines $B_0(x, y)$:
$$ \begin{equation*} \begin{aligned} \, B_n(x, y) &=B_n(x)B_n(y)=\bigl(\chi_{[0, 1]}(x) * \dots * \chi_{[0, 1]}(x)) (\chi_{[0, 1]}(y) * \dots * \chi_{[0, 1]}(y)\bigr) \\ &= B_0(x, y) * \dots * B_0(x, y). \end{aligned} \end{equation*} \notag $$
The linear $\mathrm{B}$-spline $B_1(x, y)$ is depicted in Fig. 4. The same holds for the case of $d$ variables.

So, the $\mathrm{B}$-spline $B_n(x_1, \dots, x_d)$ is a solution of a refinement equation with $(n+2)^d$ positive coefficients. Since the number of coefficients grows exponentially in the dimension, the use of the classical $d$-variate $\mathrm{B}$-splines often leads to inefficient algorithms for large $d$. One of them, the subdivision algorithm, is considered in detail in § 10. The cascade algorithm (the fast discrete wavelet transform) is closely related to this method and is used to obtain the coefficients of wavelet expansions. Its complexity also depends on the number of non-zero coefficients in the refinement equation. Using a different construction of multivariate $\mathrm{B}$-splines, one can obtain fewer coefficients, and, in some cases, guarantee higher regularity (without impairing the structural and approximation properties of the classical $\mathrm{B}$-splines). In the next section, we define $\mathrm{B}$-splines based on tiles, which, with an appropriate choice of a tile, involve only $n+2$ coefficients in the refinement equation, independently of the dimension $d$. We will show that some of these tile $\mathrm{B}$-splines are smoother than the classical $\mathrm{B}$-splines of the same order.
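For illustration (an elementary check of the coefficient count, our sketch), in dimension $d=2$ the refinement coefficients of the tensor-product $\mathrm{B}$-spline are the outer product of the univariate ones, so there are $(n+2)^2$ of them.

```python
import numpy as np
from math import comb

def classical_2d_bspline_coeffs(n):
    """Refinement coefficients of the tensor-product B-spline B_n(x, y)."""
    c1 = np.array([comb(n + 1, k) / 2**n for k in range(n + 2)])   # univariate c_k
    return np.outer(c1, c1)                    # c_{k1,k2} = c_{k1} * c_{k2}, summing to m = 4

for n in range(4):
    C = classical_2d_bspline_coeffs(n)
    print(n, C.shape, np.count_nonzero(C))     # (n+2, n+2) array with (n+2)^2 nonzero entries
```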

§ 4. The construction of tile $\mathrm{B}$-splines

We start with the definition of tile $\mathrm{B}$-splines.

Definition 2. For a given tile $G \subset \mathbb{R}^d$ and an integer number $n \geqslant 0$, the tile $\mathrm{B}$-spline $B_n^G$ of order $n$ is defined as the convolution of $n+1$ functions $\chi_G$.

In particular, $B_0^G=\chi_G$, $B_1^G=\chi_G * \chi_{G}$.

Definition 3. The symmetrized tile $\mathrm{B}$-spline $\operatorname{Bs}_{n}^G$ of order $n$ is defined as the convolution of $n+1$ functions $\chi_G * \chi_{-G}$.

We fix a tile $G$, and write, for brevity, $B_n=B_n^G$, $\operatorname{Bs}_n= \operatorname{Bs}_n^G$.

In the multivariate case, we consider refinement equations with a dilation matrix $M$ of the form

$$ \begin{equation} \varphi(x)=\sum_{k \in \mathbb{Z}^d} c_{k} \varphi(Mx-k), \end{equation} \tag{4.1} $$
the mask of this equation is the trigonometric polynomial of variables $\xi_1, \dots, \xi_d$,
$$ \begin{equation*} a(\xi)=\frac{1}{m} \sum_{k \in \mathbb{Z}^d} c_{k} e^{-2 \pi i (k, \xi)}, \end{equation*} \notag $$
where $m=|{\det{M}}|$.

The tile $\mathrm{B}$-spline $B_0^G$, that is, the indicator function of the tile $\chi_G$, satisfies almost everywhere the refinement equation

$$ \begin{equation*} \chi_G(x)={\sum_{\Delta \in D(M)}{\chi_G(Mx-\Delta)}}, \qquad x \in \mathbb{R}^d \end{equation*} \notag $$
(see § 2). In this case, $c_k=1$ for all $k \in D(M)$ and $c_k = 0$ for $k \notin D(M)$, and the mask is given by the formula
$$ \begin{equation*} a_0(\xi)=\frac{1}{m} \sum_{\Delta \in D(M)} e^{-2\pi i (\Delta, \xi)}. \end{equation*} \notag $$
Similarly to the univariate case, applying the Fourier transform to both sides of the refinement equation (4.1) we obtain
$$ \begin{equation*} \widehat{\varphi}(\xi)=a(M^{-T} \xi)\, \widehat{\varphi}(M^{-T} \xi) \end{equation*} \notag $$
and so,
$$ \begin{equation} \widehat{\varphi}(M^T\xi)=a(\xi) \widehat{\varphi}(\xi). \end{equation} \tag{4.2} $$
Hence, if two functions $\varphi_1$, $\varphi_2$ satisfy refinement equations with masks $a_1(\xi)$, $a_2(\xi)$ and with dilation matrix $M$, then their convolution also satisfies a refinement equation with mask $a_1(\xi)a_2(\xi)$ and with dilation matrix $M$. Thus, similarly to the univariate case, the mask $a_n$ of the tile $\mathrm{B}$-spline $B_n$ satisfies $a_n=a_0^{n+1}$. From this, the coefficients of the refinement equation (4.1) of the tile $\mathrm{B}$-splines can be found explicitly.

Proposition 1. Let $\varphi=B_n^G$ be a tile $\mathrm{B}$-spline, where the tile $G$ is constructed from a matrix $M$ and a set of digits $D=\{d_0, \dots, d_{m-1}\}$. Next, for every vector $k \in \mathbb{Z}^d$, let $C_k$ be the number of its representations of the form $k=s_1+\dots+s_{n+1}$ for all ordered sets $(s_1, \dots, s_{n+1})$, $s_i \in D$, with possible repetitions. Then the refinement equation of the tile $\mathrm{B}$-spline $B_n^G$ has the form

$$ \begin{equation*} \varphi(x)=m^{-n} \sum_{k \in \mathbb{Z}^d} C_k \varphi(Mx-k). \end{equation*} \notag $$

The numbers $C_k$ can be explicitly expressed using multinomial coefficients.
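For illustration (our sketch; the helper name is ours), the coefficients of Proposition 1 can be computed directly by enumerating all ordered sums of $n+1$ digits.

```python
from collections import Counter
from itertools import product

def tile_bspline_coeffs(digits, n):
    """Refinement coefficients m^{-n} C_k of the tile B-spline B_n (Proposition 1).

    digits: list of integer tuples (the digit set D); C_k is the number of ordered
    representations k = s_1 + ... + s_{n+1} with s_i in D."""
    m = len(digits)
    counts = Counter()
    for combo in product(digits, repeat=n + 1):            # all ordered tuples (s_1, ..., s_{n+1})
        k = tuple(sum(coord) for coord in zip(*combo))
        counts[k] += 1
    return {k: C / m**n for k, C in counts.items()}

# Example: the three-digit tile of Example 2 and the B-spline of order n = 2.
D = [(0, 0), (1, 0), (0, 1)]
coeffs = tile_bspline_coeffs(D, 2)
print(coeffs)
print(sum(coeffs.values()))                                # the coefficients sum to m = 3
```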

The symmetrized tile $\mathrm{B}$-spline also satisfies a refinement equation. Indeed, if $\varphi(x)$ satisfies a refinement equation with coefficients $c_k$ and mask $a(\xi)$, then $\varphi(-x)$ satisfies the refinement equation with coefficients $c_{-k}$ and mask $\overline{a}(\xi)$. Therefore, $\chi_G * \chi_{-G}$ corresponds to the refinement equation with mask $|a_0|^2$, and $\operatorname{Bs}_n$ corresponds to the mask $|a_0|^{2(n+1)}$. Note that, for symmetrized tile B-splines, the coefficients of the mask and its values for all $\xi \in \mathbb{R}^d$ are real.

As the set of the coefficients of the polynomial $a_n$ depends only on the order $n$ and on the digit set $D$, the set of coefficients of the refinement equation of the tile $\mathrm{B}$-spline $B_n$ depends only on $n$, $D$, and is independent of the matrix $M$. Thus, we have established

Corollary 1. If a digit set $D$ is fixed, then all $\mathrm{B}$-splines of the same order are defined by the same refinement equation up to a dilation matrix $M$. The same holds for the symmetrized $\mathrm{B}$-splines.

Remark 1. The construction of the tile $\mathrm{B}$-spline $B_n$ involves evaluation of convolutions via numerical integration. However, $B_n$ can be found in a different way as a solution of the corresponding refinement equation. Each refinable function can be computed precisely on an arbitrarily dense lattice using products of special transition matrices (for more detail, see § 9). In particular, the values of $B_n(k)$ at integer points $k \in \mathbb{Z}^d$ coincide with the components of the eigenvector $v$ of the transition matrix corresponding to the eigenvalue 1. The values of the function $B_n(x)$ on the lattice $M^{-1}\mathbb{Z}^d$ can be obtained by multiplication of the transition matrices by the vector $v$. By subsequent multiplications by the transition matrices one can find $B_n(x)$ for $x \in M^{-2}\mathbb{Z}^d, M^{-3}\mathbb{Z}^d$, etc. In this way, after several iterations, we get the exact values of the function $B_n(x)$ on the refined lattice.
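For illustration, here is a univariate sketch of the procedure of Remark 1 for the classical $\mathrm{B}$-spline $B_2$ (our code; the multivariate case with a matrix $M$ is analogous but requires the multivariate transition matrices of § 9): the values at integer points come from the eigenvector of the transition matrix with eigenvalue 1, and the values on finer dyadic grids are obtained by applying the refinement equation.

```python
import numpy as np

# Exact values of a univariate refinable function on a dyadic grid, no integration.
# Example: B_2 with refinement coefficients c_k = 2^{-2} binom(3, k), support [0, 3].
c = np.array([1, 3, 3, 1]) / 4.0
N = len(c) - 1                                 # right endpoint of the support

# Values at the interior integer points solve T v = v with T_{ij} = c_{2i-j}.
T = np.array([[c[2 * i - j] if 0 <= 2 * i - j <= N else 0.0
               for j in range(1, N)] for i in range(1, N)])
w, V = np.linalg.eig(T)
v = np.real(V[:, np.argmin(np.abs(w - 1))])
v /= v.sum()                                   # normalization: sum_k phi(k) = 1

vals = np.zeros(N + 1)                         # phi on the integer grid 0, 1, ..., N
vals[1:N] = v
for level in range(1, 4):                      # refine to the grid 2^{-level} * Z
    step = 2 ** level
    new = np.zeros(N * step + 1)
    for j in range(len(new)):
        # phi(j / 2^level) = sum_k c_k * phi(j / 2^(level-1) - k)
        for k in range(N + 1):
            idx = j - k * (step // 2)
            if 0 <= idx < len(vals):
                new[j] += c[k] * vals[idx]
    vals = new

print(vals[::4])                               # values of B_2 at the points 0, 0.5, 1, ..., 3
```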

Proposition 2. The integer translates $\{B_n(x-k)\}_{k \in \mathbb{Z}^d}$ of the tile $\mathrm{B}$-spline $B_n$ form a Riesz basis of their linear span. The same holds for the translates of the symmetrized tile $\mathrm{B}$-splines.

The proof is postponed to § 10.

Proposition 2 implies that the integer shifts of the tile $\mathrm{B}$-spline $\varphi(x)=B_n(x)$ generate a multiresolution analysis (MRA), and, correspondingly, a wavelet system (see, for example, [21]). Besides, we can apply the orthogonalization procedure to our refinable function $\varphi(x)=B_n(x)$, thereby obtaining a new function $\varphi_1$, which generates the same MRA and possesses orthogonal integer shifts. This will be done in § 5. In this way, we will obtain orthogonalized tile $\mathrm{B}$-splines, and consequently, the corresponding orthonormal wavelet systems.

Proposition 3. The linear combinations of integer shifts $\{B_n(x-k)\}_{k \in \mathbb{Z}^d}$ of the tile $\mathrm{B}$-spline $B_n$ generate the algebraic polynomials of degree at most $n$. The same holds for the shifts of the symmetrized tile $\mathrm{B}$-splines.

Proof. The required result for the order $n=0$ holds, since $B_0(x-k)=\chi_{G}(x-k)$ and the tile satisfies $\sum_{k \in \mathbb{Z}^d}\chi_G(x-k) \equiv 1$ almost everywhere. Thus, the linear combinations of the shifts $B_0(x-k)$ indeed generate the constant functions.

It is known that the shifts of a compactly supported function $\varphi$ generate algebraic polynomials of degree at most $n$ if and only if its Fourier transform $\widehat{\varphi}(\xi)$ has zeros of order at least $n+1$ at all integer points except zero (due to the Strang–Fix theorem, see [22], [23], [21]). Applying this result first with $\varphi=B_0(x)$, we find that $\widehat{B}_0(\xi)$ has zeros of order one at all integer non-zero points. Since $\widehat{B}_n(\xi)=\widehat{B}_0^{n+1}(\xi)$, the order of the zeros of $\widehat{B}_n(\xi)$ is at least $n+1$. Now we apply the converse result for $\varphi=B_n$, completing the proof.

Corollary 2. The order of approximation by integer shifts of the tile $\mathrm{B}$-spline $B_n$ is $n$.

This means that the distance between an arbitrary smooth function $f$ and the space generated by the functions $\{B_n(a x-k)\}_{k \in \mathbb{Z}^d}$ is $O(a^{-(n+1)})$ as $a \to +\infty$ (see, for example, [24]).

Remark 2. The tile $\mathrm{B}$-splines were also considered in [12] under the name of elliptic refinable functions, which were defined via the Fourier transform. Analogues of Propositions 2, 3 and Corollary 2 are known to hold in the isotropic case (when $M$ is similar to an orthogonal matrix multiplied by a scalar).

Planar tile $\mathrm{B}$-splines were studied in [13] under the name of $\alpha$-splines. A certain complex number $\alpha \in \mathbb{Q}[i]$ played the role of a matrix $M$ in the refinement equation, $m=|\alpha|^2$. This allows us, in particular, to obtain some analogues of the tile $\mathrm{B}$-splines with two digits. In [13], subdivision schemes based on $\alpha$-splines were also constructed, and the regularity problem was formulated as an open problem.

4.1. The case of two digits ($m=2$)

In what follows, we will often consider the case where $m=|{\det M}|=2$ and, therefore, $D(M)=\{0, e\}$. We call such tiles 2-tiles or two-digit tiles. They possess many useful properties, some of which are considered below.

According to [25], Proposition 2.2, every 2-tile is centrally-symmetric. We give a proof for the convenience of the reader.

Proposition 4. Every 2-tile is centrally-symmetric.

Proof. Let $D(M)=\{0 , e\}$. We set $c = (1/2)\sum_{j=1}^{\infty} M^{-j} e$. The two-digit tile $G$ is $\bigl\{c+\sum_{j=1}^{\infty}\!\!{\pm}(1/2) M^{-j} e\bigr\}$. The set $\bigl\{ \sum_{j=1}^{\infty} {\pm}(1/2) M^{-j} e\bigr\}$ is symmetric about the origin, and hence $G$ is symmetric about $c$.

In the case of a centrally-symmetric tile, where $G=c+G_0=c-G_0$, the following holds (here and in what follows, $\int$ means $\int_{\mathbb{R}^d}$):

$$ \begin{equation*} \begin{aligned} \, \chi_G * \chi_G(y) &=\int \chi_{G}(x) \chi_{G}(y-x)\, dx=\int \chi_{G_0}(x-c) \chi_{G_0}(y-x-c)\, dx \\ &=\int \chi_{G_0}(x) \chi_{G_0}(y-x-2c)\, dx=\chi_{G_0}*\chi_{G_0} (y-2c), \\ \chi_G * \chi_{-G}(y) &=\int \chi_{G}(x) \chi_{-G}(y-x) \, dx=\int \chi_{G_0}(x-c) \chi_{G_0}(y-x+c) \, dx \\ &=\int \chi_{G_0}(x) \chi_{G_0}(y-x) \, dx=\chi_{G_0}*\chi_{G_0} (y). \end{aligned} \end{equation*} \notag $$
Therefore, for centrally-symmetric tiles, in particular, for all 2-tiles, the $\mathrm{B}$-splines $B_{2n+1}$ and $\operatorname{Bs}_n$ differ only by a shift. Further, we restrict ourselves only to the $\mathrm{B}$-splines based on 2-tiles.

We next define an isotropic tile.

Definition 4. A tile is called isotropic if it is generated by an isotropic matrix $M$, that is, by a diagonalizable matrix whose eigenvalues have the same absolute value.

An isotropic matrix is affinely-similar to an orthogonal matrix multiplied by a scalar. The most popular tiles in applications are isotropic.

The 2-tiles have been extensively studied (see, for example, [26]–[35]). It is known that, for every $d$, there is a finite number of different 2-tiles in $\mathbb{R}^d$ up to affine similarity. For instance, there exist exactly three 2-tiles in $\mathbb{R}^2$. We call them the Square, the Dragon, and the Bear (in the literature the terms “square”, “twindragon”, and “tame twindragon”, respectively, are also used). All of them are isotropic. For their construction, we can choose, for example, the matrices

$$ \begin{equation} M_S=\begin{pmatrix} 0 & -2 \\ 1 & 0\end{pmatrix},\qquad M_D=\begin{pmatrix} 1 & 1 \\ -1 & 1\end{pmatrix},\qquad M_B=\begin{pmatrix} 1 & -2 \\ 1 & 0\end{pmatrix} \end{equation} \tag{4.3} $$
respectively, and the set of digits $D=\left\{\left(\begin{smallmatrix} 0 \\ 0\end{smallmatrix}\right), \left(\begin{smallmatrix} 1 \\ 0\end{smallmatrix}\right)\right\}$. If we change the digits, the set transforms affinely. Further, we shall use these matrices and digits. A partition of 2-tiles into two affinely-similar parts is shown in Fig. 5, and a tiling of the plane by their integer shifts is depicted in Fig. 6. In $\mathbb{R}^3$, there are seven 2-tiles, of which only one (the cube) is isotropic.
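As a quick sanity check (our sketch), one can verify numerically that each of the matrices in (4.3) has $|\det M|=2$ and is expanding.

```python
import numpy as np

# The three planar 2-tile matrices from (4.3): |det M| = 2 and all eigenvalues
# exceed 1 in absolute value (here each eigenvalue has modulus sqrt(2)).
matrices = {
    "Square": np.array([[0, -2], [1,  0]]),
    "Dragon": np.array([[1,  1], [-1, 1]]),
    "Bear":   np.array([[1, -2], [1,  0]]),
}
for name, M in matrices.items():
    eig = np.linalg.eigvals(M)
    print(name, abs(np.linalg.det(M)), np.abs(eig))
    assert round(abs(np.linalg.det(M))) == 2 and np.all(np.abs(eig) > 1)
```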

In the isotropic case, the problem of classification of 2-tiles up to affine similarity can be solved completely [36]. This classification turns out to be quite simple. In odd dimensions $d=2k+1$, all isotropic 2-tiles are parallelepipeds. In even dimensions $d=2k$, there exist three types of isotropic 2-tiles up to affine similarity: the parallelepiped, the direct product of $k$ (two-dimensional) Dragons, and the direct product of $k$ (two-dimensional) Bears.

In the non-isotropic case, the problem of finding the number $N(d)$ of affinely non-equivalent 2-tiles for every $d$ reduces to counting the expanding monic integer polynomials with constant term $\pm 2$; see [36], where the following estimate was obtained:

$$ \begin{equation*} \frac{d^2}{16}-\frac{43d}{36}-\frac56 \leqslant N(d) \leqslant 2^{d(1+16 \ln \ln d/\ln d)}. \end{equation*} \notag $$

The tile $\mathrm{B}$-splines constructed from 2-tiles will be called by their names but with the index increased by one. Thus, the indicator $B_0$ of the Bear tile is Bear-1, and the convolution $B_{k-1}$ of $k$ such functions is Bear-$k$. This index shift is in line with the traditional notation for univariate $\mathrm{B}$-splines, where the function $B_{k-1}(x)$ is a spline of order $k-1$ (that is, the convolution of $k$ functions $\chi_{[0, 1]}$). In our case, $\mathrm{B}$-splines are defined by convolutions of indicators, rather than by polynomials, and so it is more natural to include the number of factors in the name of a tile $\mathrm{B}$-spline.

The coefficients of the refinement equations of tile $\mathrm{B}$-splines were found in the general case in Proposition 1. For planar tiles, they are quite simple.

Corollary 3. Let a tile $\mathrm{B}$-spline $\varphi=B_n^G$ be constructed from a planar 2-tile with matrix $M$ and digits $D=\left\{\left(\begin{smallmatrix} 0 \\ 0\end{smallmatrix}\right), \left(\begin{smallmatrix} 1 \\ 0\end{smallmatrix}\right)\right\}$. We set $C_k=2^{-n}\binom{n+1}{k}$ for $k \in \{0, \dots, n+1\}$. Then $B_n^G$ satisfies the refinement equation

$$ \begin{equation} \varphi(x)=\sum_{k \in \{0, 1, \dots, n+1\}}C_k \varphi \biggl(Mx-\begin{pmatrix} k \\ 0\end{pmatrix}\biggr). \end{equation} \tag{4.4} $$

Thus, all tile $\mathrm{B}$-splines based on the Bear, Dragon, and Square tiles can be computed using the refinement equation (4.4) (see Remark 1). Figure 7 shows Bear-1, …, Bear-4, Fig. 8 shows Dragon-1, …, Dragon-4, and Fig. 9 depicts Square-1, …, Square-4.

In § 9 we compute the Hölder regularity of these splines and establish that Bear-2 is not in $C^1$, Bear-3 belongs to $C^2$, and Bear-4 belongs to $C^3$. Thus, Bear-3 and Bear-4 have higher regularity than Square-3 and Square-4, respectively, which may seem paradoxical (see Theorem 5).

§ 5. Orthogonalization of $\mathrm{B}$-splines

In the previous section, we established that the $\mathrm{B}$-spline $\varphi(x)=B_n(x)$ is a compactly supported refinable function. Its integer shifts are not orthogonal to each other and, therefore, they do not generate an orthonormal wavelet system. Since these shifts form a Riesz basis of their linear span, they can be orthogonalized in a standard way. Namely, we can construct another refinable function $\varphi_1(x)$ whose integer shifts form an orthonormal basis of the space spanned by the integer shifts of $\varphi(x)$ (in terms of wavelet theory, it should generate the same multiresolution analysis). The function $\varphi_1$ is no longer compactly supported, but it decays fast at infinity (an example of such an estimate is given in § 8). The construction involves the following well-known fact (see, for example, [21]).

Proposition 5. A function $\eta(x) \in L_2$ has orthonormal integer shifts if and only if $\sum_{k \in \mathbb{Z}^d} |\widehat{\eta}(\xi+k)|^2 \equiv 1$.

In particular, the function $\varphi_1$, as given in terms of the Fourier transform by

$$ \begin{equation} \widehat \varphi_1(\xi)=\frac{\widehat \varphi(\xi)} {\sqrt{{ \sum_{k \in \mathbb{Z}^d} |\widehat{\varphi}(\xi+k)|^2}}}, \end{equation} \tag{5.1} $$
possesses this property. The transition from the function $\varphi(x)$ to the function $\varphi_1(x)$ by (5.1) is the standard Battle–Lemarie orthogonalization procedure.

From (5.1) it follows that the function $\varphi_1$ is expressed as a linear combination of integer shifts $\{\varphi(x-k)\}_{k \in \mathbb{Z}^d}$ of $\varphi$; below, we will find the coefficients of this decomposition.

Theorem 1. Let $G$ be an arbitrary tile in $\mathbb{R}^d$, $\varphi(x)=B_n^G(x)$ be its corresponding tile $\mathrm{B}$-spline, $\varphi_1(x) $ be its orthogonalization. Let $\Phi_k=(\varphi, \varphi(\,{\cdot}\,{+}\,k))$ for all $k \in \mathbb{Z}^d$ and $\Phi(\xi)=\sum_{k \in \mathbb{Z}^d} \Phi_k e^{-2 \pi i (k, \xi)}$. Then $\varphi_1(x)$ is a linear combination of integer shifts of $\varphi(x)$,

$$ \begin{equation*} \varphi_1(x)=\sum_{k \in \mathbb{Z}^d}{b_k \varphi(x-k)}, \end{equation*} \notag $$
where $b_k$ are the Fourier coefficients of the function $1/\sqrt{\Phi(\xi)}=\sum b_k e^{-2 \pi i (k, \xi)}$.

Proof. Decomposing the orthogonalized spline $\varphi_1(x)$ in terms of integer translates of $\varphi$,
$$ \begin{equation*} \varphi_1(x)=\sum_{k \in \mathbb{Z}^d}{b_k \varphi(x-k)}, \end{equation*} \notag $$
we have, for every $j \in \mathbb{Z}^d$,
$$ \begin{equation*} \varphi_1(x+j)=\sum_{k \in \mathbb{Z}^d} b_k \varphi(x-(k-j)) \stackrel{(l=k-j)}{=} \sum_{l \in \mathbb{Z}^d} b_{l+j} \varphi(x-l). \end{equation*} \notag $$
The shift orthogonality property can be written as
$$ \begin{equation*} \delta_j^0=(\varphi_1, \varphi_1(\,{\cdot}\,{+}\,j))=\sum_{k, l \in \mathbb{Z}^d} b_k \overline{b_{l+j}} (\varphi(\,{\cdot}\,{-}\,k), \varphi(\,{\cdot}\,{-}\,l))=\sum_{k, l \in \mathbb{Z}^d} b_k \overline{b_{l+j}} \Phi_{k- l} \end{equation*} \notag $$
(here $\delta_j^0$ denotes the Kronecker delta), or, equivalently,
$$ \begin{equation*} \sum_{m \in \mathbb{Z}^d} \biggl(\sum_{k \in \mathbb{Z}^d} b_k \overline{b_{k-m+j}}\biggr) \Phi_{m}=\delta_j^0. \end{equation*} \notag $$
Setting $A_p=\sum_{k \in \mathbb{Z}^d} b_k \overline{b_{k-p}}$, we have, for every $j \in \mathbb{Z}^d$,
$$ \begin{equation*} \sum_{m \in \mathbb{Z}^d} A_{j-m}\Phi_m=\delta_j^0. \end{equation*} \notag $$
In other words, the convolution of the sequences $A$ and $\Phi$ is a $\delta$-sequence. Hence the product of their Fourier transforms $A(\xi)=\sum_{k \in \mathbb{Z}^d} A_k e^{-2 \pi i (k, \xi)}$ and $\Phi(\xi)=\sum_{k \in \mathbb{Z}^d} \Phi_k e^{-2 \pi i (k, \xi)}$ is identically equal to $1$. Thus, $A(\xi)=1/\Phi(\xi)$. We also consider the Fourier transform $B(\xi)=\sum_{k \in \mathbb{Z}^d} b_k e^{-2 \pi i (k, \xi)}$. Now we have
$$ \begin{equation*} A_p=\int B(\xi) \overline{B(\xi)} e^{2 \pi i (p, \xi)}\, d\xi=\int |B(\xi)|^2 e^{2 \pi i (p, \xi)}\, d\xi. \end{equation*} \notag $$
Hence, $|B(\xi)|^2=\sum_{p \in \mathbb{Z}^d} A_p e^{-2 \pi i (p, \xi)}=A(\xi)$, and so,
$$ \begin{equation*} |B(\xi)|=\frac{1}{\sqrt{\Phi(\xi)}}, \end{equation*} \notag $$
Choosing the positive solution $B(\xi)=1/\sqrt{\Phi(\xi)}$, we express the coefficients $b_k$ of the expansion of the function $\varphi_1$ with respect to the shifts of the function $\varphi$ in terms of the numbers $\Phi_k$. This proves Theorem 1.
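For illustration, the coefficients $b_k$ can be approximated numerically once the numbers $\Phi_k$ are known (see § 6): sample $\Phi(\xi)$ on a grid of the torus and take the discrete Fourier coefficients of $1/\sqrt{\Phi(\xi)}$. The sketch below is ours; the values of $\Phi_k$ in the example are a toy input, not data from the paper.

```python
import numpy as np

def orthogonalization_coeffs(Phi, L=64):
    """Approximate the coefficients b_k of Theorem 1 (Fourier coefficients of
    1/sqrt(Phi(xi))) by sampling Phi on an L x L grid of the torus and applying the FFT.

    Phi: dict mapping integer pairs k = (k1, k2) to Phi_k = (phi, phi(. + k))."""
    xi1, xi2 = np.meshgrid(np.arange(L) / L, np.arange(L) / L, indexing="ij")
    P = np.zeros((L, L), dtype=complex)
    for (k1, k2), pk in Phi.items():
        P += pk * np.exp(-2j * np.pi * (k1 * xi1 + k2 * xi2))
    B = 1.0 / np.sqrt(P.real)                  # Phi is real and strictly positive (Riesz basis)
    return np.fft.ifft2(B)                     # entry [k1 % L, k2 % L] approximates b_{(k1, k2)}

# Toy example of Phi_k (hypothetical values, for illustration only).
Phi = {(0, 0): 1.0, (1, 0): 0.1, (-1, 0): 0.1, (0, 1): 0.1, (0, -1): 0.1}
b = orthogonalization_coeffs(Phi)
print(b[0, 0].real, b[1, 0].real, b[0, 1].real)
```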

By definition of the coefficients $\Phi_k=(\varphi, \varphi(\,{\cdot}\,{+}\,k))$, their calculation involves numerical integration. It turns out, however, that they can be found easily as components of an eigenvector of a special matrix. This will be done in the next section.

§ 6. Formulas for coefficients $\Phi_k$

In order to find a new refinable function $\varphi_1$ with orthonormal integer shifts, we need to find the auxiliary numbers $\Phi_k=(\varphi, \varphi(\,{\cdot}\,{+}\,k))$.

Theorem 2. 1) For every $k \in \mathbb{Z}^d$, the value of the function $\varphi(x) * \varphi(-x)$ at the point $-k$ is $\Phi_k$.

2) For the Fourier series constructed by the coefficients $\Phi_k$,

$$ \begin{equation*} \Phi(\xi) := \sum_{k \in \mathbb{Z}^d} \Phi_k e^{-2 \pi i (k, \xi)} =\sum_{k \in \mathbb{Z}^d} |\widehat{\varphi}(\xi+k)|^2. \end{equation*} \notag $$

Assertion 1) implies that only a finite number of the coefficients $\Phi_k$ are non-zero. Hence $\Phi(\xi)$ is a trigonometric polynomial.

Proof of Theorem 2. We set $f:=\varphi(x) * \varphi(-x)$. Since $\varphi$ is real-valued, we have
$$ \begin{equation*} f(y)=\int \varphi(x)\varphi(x-y)\, dx. \end{equation*} \notag $$
Therefore, at the point $y=-k$,
$$ \begin{equation*} f(-k)=\int \varphi(x)\varphi(x+k)\, dx=\Phi_k. \end{equation*} \notag $$

For a proof of the second part, we note that, by the Plancherel theorem,

$$ \begin{equation*} \Phi_k=\int \varphi(x)\varphi(x+k)\, dx=\int \widehat{\varphi}(\xi) \overline{\widehat{\varphi}(\xi)} e^{-2 \pi i (k, \xi)}\, d\xi=\int |\widehat{\varphi}(\xi)|^2 e^{-2 \pi i (k, \xi)} \, d\xi, \end{equation*} \notag $$
since $\widehat{\varphi(\,{\cdot}\,{+}\,k)}(\xi)=e^{2 \pi i (\xi, k)}\widehat{\varphi}(\xi)$.

Hence the Fourier coefficients of the function $\sum_{k \in \mathbb{Z}^d} |\widehat{\varphi}(\xi+k)|^2$ also coincide with the numbers $\Phi_k$. Thus,

$$ \begin{equation*} \Phi(\xi)=\sum_{k \in \mathbb{Z}^d} \Phi_k e^{-2 \pi i (k, \xi)} =\sum_{k \in \mathbb{Z}^d} |\widehat{\varphi}(\xi+k)|^2, \end{equation*} \notag $$
completing the proof of Theorem 2.

Remark 3. In Theorem 1, the values $\Phi_k=(\varphi, \varphi(\,{\cdot}\,{+}\,k))$ are defined as inner products, which are evaluated via numerical integration. In Theorem 2, we showed that $f := \varphi(x) * \varphi(-x)$ is equal to $\Phi_k$ at integer points. This function satisfies the refinement equation with mask $a(\xi) \overline{a}(\xi)=|a(\xi)|^2$. In particular, if $\varphi(x)$ is the tile $\mathrm{B}$-spline $B_n$, then $f=\varphi(x) * \varphi(-x)$ is the symmetrized tile $\mathrm{B}$-spline $\operatorname{Bs}_n$. If $\varphi(x)$ is the symmetrized tile $\mathrm{B}$-spline $\operatorname{Bs}_n$, then $f=\varphi(x) * \varphi(-x)$ is the symmetrized $\mathrm{B}$-spline $\operatorname{Bs}_{2n+1}$. Therefore, knowing the refinement equation of $\varphi(x)$, we can find the coefficients $\Phi_k$ as the components of an eigenvector of a special matrix (see Remark 1).
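As a numerical illustration of Theorem 2, 1) (our sketch, in the univariate case for brevity), the inner products $\Phi_k$ of the spline $B_1=\chi_{[0,1]}*\chi_{[0,1]}$ indeed coincide with the values of $\varphi*\varphi(-\,\cdot\,)$ at the points $-k$:

```python
import numpy as np

# Discretized check of Theorem 2, 1) for phi = B_1 = chi_[0,1] * chi_[0,1].
h = 1.0 / 512                                  # grid step
ind = np.ones(512)                             # samples of chi_[0,1)
phi = np.convolve(ind, ind) * h                # samples of B_1 on [0, 2]
x = np.arange(len(phi)) * h

f = np.convolve(phi, phi[::-1]) * h            # samples of phi * phi(-.)
y = np.arange(len(f)) * h - x[-1]              # this convolution lives on [-2, 2]

for k in (0, 1):                               # Phi_{-k} = Phi_k here, since phi is real
    shift = 512 * k
    inner = h * np.sum(phi[:len(phi) - shift] * phi[shift:])   # (phi, phi(. + k))
    conv_val = f[np.argmin(np.abs(y - (-k)))]                  # (phi * phi(-.))(-k)
    print(k, round(inner, 4), round(conv_val, 4))              # approximately 2/3 and 1/6
```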

Orthogonalized tile $\mathrm{B}$-splines for Bear-2, Bear-4, Dragon-2, Dragon-4, Square-2, Square-4 are depicted in Fig. 10.

§ 7. Construction of wavelet function

Recall that $\Phi(\xi)=\sum_{k \in \mathbb{Z}^d} |\widehat{\varphi}(\xi+k)|^2$. The following fact is clear.

Proposition 6. Let a tile $\mathrm{B}$-spline $\varphi(x)$ satisfy a refinement equation with mask $a(\xi)$. Then its orthogonalization $\varphi_1(x)$ is the solution of the refinement equation with mask

$$ \begin{equation} a_1(\xi)=a(\xi)\frac{\sqrt{\Phi(\xi)}}{\sqrt{\Phi(M^T \xi)}}. \end{equation} \tag{7.1} $$

Proof. Using the representation of refinement equations obtained after application of the Fourier transform (4.2), it is sufficient to check 1-periodicity of the function
$$ \begin{equation*} a_1(\xi)=\frac{\widehat \varphi_1(M^T \xi)}{\widehat \varphi_1(\xi)} =\frac{\widehat \varphi(M^T \xi) \sqrt{\Phi(\xi)}}{\widehat \varphi(\xi) \sqrt{\Phi(M^T \xi)}} = a(\xi)\frac{\sqrt{\Phi(\xi)}}{\sqrt{\Phi(M^T \xi)}}. \end{equation*} \notag $$
Since the functions $a(\xi)$, $\Phi(\xi)$ are 1-periodic and the matrix $M$ is integer, the function $a_1(\xi)$ is also 1-periodic, proving the claim.

Thus, we can find the coefficients $c_k$ of the refinement equation for the function $\varphi_1$ by expanding the mask $a_1(\xi)$ defined by (7.1) in a Fourier series,

$$ \begin{equation*} a_1(\xi)=\frac{1}{m} \sum_{k \in \mathbb{Z}^d} c_{k} e^{-2 \pi i (k, \xi)}. \end{equation*} \notag $$

There are infinitely many non-zero coefficients $c_k$; therefore, the new refinable function $\varphi_1$ is not compactly supported. However, as we will see later, $\varphi_1$ decays exponentially at infinity. This will allow us to effectively approximate it by compactly supported functions (see § 8).

Now we turn to the explicit construction of the wavelet function corresponding to the orthogonalized function $\varphi_1(x)$ in the two-digit case, that is, for $m=2$. In this case, as for the Haar system, the wavelet system is the simplest: it is generated by a single wavelet function. The next theorem is a version of the general result on the construction of orthonormal wavelets (see, for example, [4]). Nevertheless, we give its full proof for two-digit tile $\mathrm{B}$-splines.

Theorem 3. Let $G$ be a two-digit tile (2-tile), that is, $m\,{=}|{\det{M}}|\,{=}\,2$, let $\varphi=B_n^G$ be its corresponding tile $\mathrm{B}$-spline, let $\varphi_1$ be its orthogonalization, and let $c_k$ be the coefficients of the refinement equation for $\varphi_1$. Then:

1) The corresponding wavelet function $\psi(x)$ is a linear combination of $M$-dilations of the function $\varphi_1$ with coefficients $\pm c_k$.

2) For three types of affinely non-equivalent two-digit tiles whose matrices are defined by formula (4.3), the following formulas for wavelet function hold:

a) if $G$ is the tile “Bear”, that is, $M=M_B$, then

$$ \begin{equation} \psi_B(x)=\sum_{k \in K} c_k (-1)^{(k_2-k_1)} \varphi_1 \biggl(M_B x+k-\begin{pmatrix} 0 \\ 1 \end{pmatrix}\biggr); \end{equation} \tag{7.2} $$

b) if $G$ is the tile “Square”, that is, $M=M_S$, then

$$ \begin{equation*} \psi_S(x)=\sum_{k \in K} c_k (-1)^{k_1} \varphi_1 \biggl(M_S x+k-\begin{pmatrix} 1 \\ 0 \end{pmatrix}\biggr); \end{equation*} \notag $$

c) if $G$ is the tile “Dragon”, that is, $M=M_D$, then

$$ \begin{equation*} \psi_D(x)=\sum_{k \in K} c_k (-1)^{(k_2-k_1)} \varphi_1 \biggl(M_D x+k-\begin{pmatrix} 0 \\ 1 \end{pmatrix}\biggr). \end{equation*} \notag $$

Proof. The wavelet function has the form
$$ \begin{equation*} \psi(x)=\sum_{w \in W}p_w \varphi_1(Mx-w). \end{equation*} \notag $$
Consider the mask $p(\xi)=(1/m) \sum_{w \in W} p_w e^{-2\pi i (w, \xi)}$ for $\psi(x)$ similarly to the mask $a_1(\xi)=(1/m) \sum_{k \in K} c_k e^{-2\pi i (k, \xi)}$ for the refinement equation for $\varphi_1$.

Lemma 1. Let the vectors $0$, $u \in \mathbb{Z}^d$ be representatives of the two different cosets in $\mathbb{Z}^d / M^T \mathbb{Z}^d$ defined by the matrix $M^T$. Let $v=M^{-T} u$. Then, for the masks $a_1$ and $p$ of the scaling function and the wavelet function, respectively,

1) $|a_1(s)|^2+|a_1(s+v)|^2=1$ for every $s$ (the orthonormality of $\varphi_1$);

2) $p(s) \overline{a}_1(s)+p(s+v) \overline{a}_1(s+v)=0$ (the orthogonality of $\varphi_1$ and $\psi$).

Proof. From the refinement equation on $\varphi_1$ it follows that
$$ \begin{equation*} \widehat{\varphi}_1(\xi)=a_1(M^{-T} \xi)\, \widehat{\varphi}_1(M^{-T} \xi) \end{equation*} \notag $$
and so,
$$ \begin{equation*} \widehat{\varphi}_1(M^T\xi)=a_1(\xi) \widehat{\varphi}_1(\xi). \end{equation*} \notag $$
Since the integer shifts $\varphi_1$ are orthonormal, for every $s \in \mathbb{R}^d$, we have
$$ \begin{equation*} \sum_{q \in \mathbb{Z}^d}|\widehat{\varphi}_1(s+q)|^2=1. \end{equation*} \notag $$
We choose vectors $0$ and $u \,{\in}\, \mathbb{Z}^d$ from two cosets $\mathbb{Z}^d / M^T \mathbb{Z}^d$ defined by the matrix $M^T$. Setting $v=M^{-T} u$, we have
$$ \begin{equation*} \begin{aligned} \, 1 &=\!\sum_{q \in \mathbb{Z}^d}|\widehat{\varphi}_1(M^Ts\,{+}\,q)|^2 {=}\!\sum_{q \in \mathbb{Z}^d}|\widehat{\varphi}_1(M^Ts\,{+}\,M^Tq)|^2{+} \!\sum_{q \in \mathbb{Z}^d}|\widehat{\varphi}_1(M^Ts\,{+}\,M^Tq\,{+}\,M^Tv)|^2 \\ &= \!\sum_{q \in \mathbb{Z}^d} |a_1(s+q)|^2 |\widehat{\varphi}_1(s+q)|^2 +\sum_{q \in \mathbb{Z}^d} |a_1(s+q+v)|^2 |\widehat{\varphi}_1(s+q+v)|^2 \\ &=|a_1(s)|^2+|a_1(s+v)|^2. \end{aligned} \end{equation*} \notag $$
So, $|a_1(s)|^2+|a_1(s+v)|^2=1$ for every $s$, proving assertion 1).

Similarly, 2) follows from orthogonality of $\varphi_1$ and $\psi$. This proves Lemma 1.

Let us return to the proof of the theorem. We will look for $p$ satisfying the orthogonality condition $p(s) \overline{a}_1(s)+p(s+v) \overline{a}_1(s+v)=0$ from Lemma 1. Note that in the two-digit case the vector $2 v=2 M^{-T} u$ is integer, and therefore $a_1(s+2v)=a_1(s)$, $p(s+2v)=p(s)$.

Consider the Bear case with the matrix $M=M_B=\left(\begin{smallmatrix}1 & -2 \\ 1 & 0\end{smallmatrix}\right)$. Hence $M^{-T}=(1/2)\left(\begin{smallmatrix}0 & -1 \\ 2 & 1\end{smallmatrix}\right)$. The vector $u$ can be chosen as $\left(\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right)$. Hence $v=\left(\begin{smallmatrix}-0.5 \\ 0.5\end{smallmatrix}\right)$.

Therefore, we can take the function $p(s)=e^{-2 \pi i s_2} \overline{a}_1(s+ v)$ as a particular solution. Indeed, $p(s+v)=e^{-2 \pi i (s_2+0.5)} \overline{a}_1(s+2 \cdot v) = -e^{-2 \pi i s_2} \overline{a}_1(s)$ and it is easy to check that the equality holds.

Using the equality $\widehat{\psi}_B(\xi)=p(M^{-T} \xi)\, \widehat{\varphi}_1(M^{-T}\xi)$, we have

$$ \begin{equation*} \begin{aligned} \, \widehat{\psi}_B(\xi) &=e^{-2 \pi i (M^{-T} \xi)_2} \overline{a}_1( M^{-T} \xi+v) \widehat{\varphi}_1(M^{-T}\xi), \\ \widehat{\psi}_B(\xi) &=e^{-2 \pi i ((0,1), M^{-T} \xi)} \overline{a}_1(M^{-T} \xi+v) \widehat{\varphi}_1(M^{-T}\xi). \end{aligned} \end{equation*} \notag $$
We have
$$ \begin{equation*} \begin{gathered} \, a_1(\xi)=\frac{1}{m} \sum_{k \in K} c_k e^{-2\pi i (k, \xi)}, \\ \begin{aligned} \, \widehat{\psi}_B(\xi) &=\frac{1}{m} \sum_{k \in K} e^{-2 \pi i ((0,1), M^{-T} \xi)} c_k e^{2\pi i (k, M^{-T} \xi+v)} \widehat{\varphi}_1(M^{-T}\xi) \\ &=\frac{1}{m} \sum_{k \in K} c_k e^{2\pi i (k, v)} e^{-2 \pi i (-k+(0,1), M^{-T} \xi)} \widehat{\varphi}_1(M^{-T}\xi). \end{aligned} \end{gathered} \end{equation*} \notag $$
Thus, for the Bear matrix, we conclude that
$$ \begin{equation*} \psi_B(x)=\sum_{k \in K} c_k e^{2\pi i (k, v)} \varphi_1 \biggl(M_B x+k-\begin{pmatrix} 0 \\ 1 \end{pmatrix}\biggr), \end{equation*} \notag $$
where $v= \left(\begin{smallmatrix}-0.5 \\ 0.5\end{smallmatrix}\right)$. Note that since each of the vectors $k$ in the sum is integer, and since the vector $v$ is half-integer, we see that the exponent $e^{2\pi i (k, v)}$ takes only the values $\pm 1$ and hence,
$$ \begin{equation*} \psi_B(x)=\sum_{k \in K} c_k (-1)^{(k_2-k_1)} \varphi_1 \biggl(M_B x+k-\begin{pmatrix} 0 \\ 1 \end{pmatrix}\biggr), \end{equation*} \notag $$
which completes the proof. The argument for the Dragon and Square is similar. Theorem 3 is proved.
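For illustration (our sketch; the numerical values of $c_k$ below are hypothetical placeholders), formula (7.2) amounts to applying the sign pattern $(-1)^{k_2-k_1}$ and the shift $\left(\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right)$ to a given (truncated) set of coefficients $c_k$:

```python
def bear_wavelet_coeffs(c):
    """Coefficients of the Bear wavelet (7.2): psi_B(x) = sum_w d_w phi_1(M_B x - w).

    c: dict mapping integer pairs k = (k1, k2) to the refinement coefficients c_k
    of the orthogonalized B-spline phi_1 (e.g. truncated as in Section 8)."""
    d = {}
    for (k1, k2), ck in c.items():
        w = (0 - k1, 1 - k2)                   # phi_1(M_B x + k - (0,1)) = phi_1(M_B x - w)
        d[w] = (-1) ** ((k2 - k1) % 2) * ck    # the sign (-1)^(k2 - k1)
    return d

# Hypothetical truncated coefficients, for illustration only.
c = {(0, 0): 1.2, (1, 0): 0.6, (0, 1): 0.3, (-1, 0): -0.1}
print(bear_wavelet_coeffs(c))
```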

The wavelet functions generated by Bears, Dragons, and Squares of order two and of order four are depicted in Fig. 11.

§ 8. Approximation of wavelets by finite sums

Above, we have obtained formulas for orthogonal wavelet systems based on tile $\mathrm{B}$-splines. Their use is complicated by infinite summation. There are infinitely many summands in formula (7.2), since the mask $a_1(\xi)$, as obtained after orthogonalization, is not a trigonometric polynomial. To estimate the accuracy of its approximation by trigonometric polynomials, one has to know the rate of decay of the coefficients $c_k$. We will prove that $|c_k| \leqslant C_1 e^{-C_2 \|k\|}$, where $C_1, C_2$ are positive constants, and we will estimate the number $C_2$. Setting $z_1=e^{-2 \pi i (e_1, \xi)}$, $z_2=e^{-2 \pi i (e_2, \xi)}$, $z=(z_1, z_2)$, where $e_1=(1 \ 0 )$, $e_2=(0 \ 1)$, we have $e^{-2 \pi i (k, \xi)}=z_1^{k_1}z_2^{k_2}$ and $a_1(z)=(1/m) \sum_{k \in \mathbb{Z}^2} c_k z_1^{k_1}z_2^{k_2}$. Considering the Laurent series of a function $a_1(z)$ in $\mathbb{C}^2$ and estimating the rate of decay of its coefficients, we estimate $C_2$.

We use the equality $a_1(\xi)=a(\xi)\sqrt{\Phi(\xi)}/\sqrt{\Phi(M^T \xi)}$. The original mask $a(\xi)$ (before orthogonalization) has a finite number of non-zero Fourier coefficients (since the initial refinement equation has a finite number of coefficients $c_k$); therefore, the factor $a(\xi)$ has no effect on the rate of decay of the coefficients of $a_1(\xi)$. In addition, as we will see later, our estimate of the decay rate of the Laurent coefficients of a function depends only on its domain of holomorphy. Hence, we are interested only in the zeros of the denominator $\sqrt{\Phi(M^T \xi)}$, that is, the zeros of the function $\Phi(M^T \xi)$. Thus, the final estimate of the rate of decay of the coefficients $c_k$ will be an estimate of the Laurent coefficients of the function $1/\Phi(M^T \xi)$ after the change of variables to $z$. The zeros of the denominator $\Phi(M^T \xi)$ can be found from those of $\Phi(\xi)$, and, therefore, the rate of decay of the coefficients of $1/\Phi(M^T \xi)$ can be estimated via that of $1/\Phi(\xi)$. Since, according to § 6, the Fourier coefficients of the denominator are equal to $\Phi_k$, the Laurent coefficients of the function $\Phi(z)$ after the change of variables are also equal to $\Phi_k$. Since we know the numbers $\Phi_k$, we can find the zeros of the denominator of the function $1/\Phi(\xi)$, as well as of $1/\Phi(M^T \xi)$. Below, we estimate the decay rate of the coefficients of a function $f(z)=1/g(z)$ in the general case and then apply the result to $1/\Phi(M^T \xi)$.

8.1. The rate of decay of the Laurent coefficients of a bivariate holomorphic function

We need to estimate the rate of decay of the coefficients of the function $f(z)=1/g(z)$ in the power expansion in $z \in \mathbb{C}^2$. To this end, we invoke some facts from multivariate complex analysis. We study the structure of the zeros of the function $g(z)$ to find the domain of holomorphy of the function $f(z)$ and then estimate its coefficients. Note that, for a holomorphic function of two complex variables, its set of zeros is a union of continuous curves and, moreover, it does not have compact components.

Let $B_R\,{=}\,B^-_R\,{=}\,\{z\,{\in}\, \mathbb{C}\colon |z|\,{<}\,R\}$ be a ball. The complement of the ball of radius $r$ is $B^+_r\,{=}\,\{z \in \mathbb{C}\colon |z|>r\}$. The annulus is $A_{r, R}=\{z \in \mathbb{C}\colon r<|z|<R\}$. The polydisc of radius $R=(R_1, R_2)$ centred at $\overline 0 \in \mathbb{C}^2$ is the set $U(R)\,{=}\,\{z\,{\in}\, \mathbb{C}^{2}\colon |z_{v}|\,{<}\,R_{v},\ v=1, 2 \} =B_{R_1} \times B_{R_2}$.

We use the Reinhardt domains $\{(|z_1|, |z_2|) \mid (z_1, z_2) \in U\}$ for depicting subsets of $\mathbb{C}^2$. For example, a polydisc in $\mathbb{C}^2$ centred at the origin is presented in the Reinhardt domain as a rectangle with vertices $(0, 0)$, $(R_1, R_2)$.

For given radii $r_1<1<R_1$, $r_2<1<R_2$ (we will choose them later), consider the product of the annuli $A=A_{r_1, R_1} \times A_{r_2, R_2}$. On the diagram, this product is shown as a rectangle with vertices $(r_1, r_2)$, $(R_1, R_2)$. We also introduce the domains $P^{--}=B^-_{R_1}\times B^-_{R_2}$, $P^{+-}=B^+_{r_1}\times B^-_{R_2}$, $P^{-+}=B^-_{R_1}\times B^+_{r_2}$, $P^{++}=B^+_{r_1}\times B^+_{r_2}$. The domain $P^{--}$ is a polydisc, and the other domains are direct products of a ball and the complement of a ball. The union of the four domains is the whole space $\mathbb{C}^2$, and their intersection is the domain $A$.

Suppose that the function $f=1/g$ is from $\mathscr{O}(A) \cap C(\overline{A})$, where $\mathscr{O}(A)$ denotes the set of functions holomorphic in the domain $A$, and $C(\overline{A})$ is the set of functions continuous on the closure $\overline{A}$. We need the following classical theorem on Laurent series expansion (see [37]).

Theorem A. Any $f(x_1, x_2) \in \mathscr{O}(A) \cap C(\overline{A})$ can be represented as a sum of four functions $f^{++}$, $f^{+-}$, $f^{-+}$, $f^{--}$ that are holomorphic in the domains $P^{++}$, $P^{+-}$, $P^{-+}$, $P^{--}$, respectively.

So, we have an expansion of the function $f=1/g$ into four summands $f^{++}$, $f^{+-}$, $f^{-+}$, $f^{--}$. One of them, $f^{--}$, is holomorphic in the polydisc $P^{--}= B^-_{R_1}\times B^-_{R_2}$. The following theorem on power series in a polydisc holds.

Theorem B. Let $U$ be the polydisc in $\mathbb{C}^2$ of radius $R=(R_1, R_2)$ centred at $\overline 0 \in \mathbb{C}^2$. Then, at each $z=(z_1, z_2) \in U$, every function $h \in \mathscr{O}(U) \cap C(\overline{U})$ can be written as a multiple power series

$$ \begin{equation*} h(z)=\sum_{k_1, k_2 \geqslant 0} c_{k_1, k_2}z_1^{k_1}z_2^{k_2}. \end{equation*} \notag $$

This theorem gives that $f^{--}(x_1, x_2)=\sum_{k_1, k_2 \geqslant 0} a_{k_1,k_2} x_1^{k_1}x_2^{k_2}$. This series converges in the polydisc $P^{--}$.

The remaining functions can be reduced to functions holomorphic in a polydisc by a change of variables. For the function $f^{-+}$, the change of variables $z_1=x_1$, $z_2=1/x_2$ gives a function $f^{-+}(z_1, z_2)$ holomorphic in the polydisc $P^{-+}(z)= B^-_{R_1}(z_1)\times B^-_{1/r_2}(z_2)$, and therefore it can be expanded as $f^{-+} (z_1, z_2)= \sum_{k_1, k_2 \geqslant 0} b_{k_1,k_2} z_1^{k_1}z_2^{k_2}$. We set $a_{k_1, -k_2}:=b_{k_1, k_2}$. Similarly, we obtain representations for the remaining two functions.

The function ${f}(x_1, x_2)$ is expanded as the series

$$ \begin{equation*} f(x_1, x_2)=\sum_{k_1, k_2 \in \mathbb{Z}} a_{k_1,k_2} x_1^{k_1}x_2^{k_2}, \end{equation*} \notag $$
which converges in the domain $A$.

We now estimate the coefficients $a_{k_1, k_2}$ of each of the four functions by the Cauchy formula, and thus obtain estimates for these coefficients. Let us recall the Cauchy estimate for bivariate power series (see, for example, [37]).

Theorem C. If ${h \in \mathscr{O}(U) \cap C(\overline{U})}$, ${|h| \leqslant M}$ in the domain ${\{|z_{1}|=R_{1}\} }\times \{|z_{2}|= R_{2}\}$, then, for the coefficients of the power series,

$$ \begin{equation*} |c_{k_1, k_2}| \leqslant \frac{M}{R_1^{k_1}R_2^{k_2}}. \end{equation*} \notag $$

This theorem is applied separately to the expansion coefficients of $f^{++}$, $f^{+-}$, $f^{-+}$, $f^{--}$ in each of the four polydiscs with variables $z_1$, $z_2$. For $k_1, k_2 \geqslant 0$, we have the estimates $|a_{k_1, k_2}| \leqslant C/(R_1^{k_1} R_2^{k_2})$, $|a_{k_1, -k_2}| \leqslant C/(R_1^{k_1} r_2^{-k_2})$, $|a_{-k_1, k_2}| \leqslant C/(r_1^{-k_1} r_2^{k_2})$, $|a_{-k_1, -k_2}| \leqslant C/(r_1^{-k_1} r_2^{-k_2})$.

The numbers $r_1$, $R_1$, $r_2$, $R_2$ should be chosen so that $f=1/g \in \mathscr{O}(A) \cap C(\overline{A})$, where $A=A_{r_1, R_1} \times A_{r_2, R_2}$, that is, the function $g$ should have no zeros in the closure of the domain $A$. Each admissible choice of $r_1$, $R_1$, $r_2$, $R_2$ yields its own estimates of the coefficients. For simplicity, we choose $r_1=1/R_1=r_2= 1/R_2=q$. In this case, the picture simplifies: the rectangle on the Reinhardt domain in Fig. 12 becomes a square with vertices on the straight line $y = x$ (see Fig. 13), and all the estimates can be written in the general form

$$ \begin{equation*} |a_{k_1, k_2}| \leqslant C q^{|k_1|+|k_2|},\qquad k_1, k_2 \in \mathbb{Z}. \end{equation*} \notag $$

The value of $q$ is chosen numerically so that the function $g(x_1, x_2)$ does not vanish in the closure of the domain $A_{q, 1/q} \times A_{q, 1/q}$. This can always be achieved, since $(1, 1)$ is not a zero of the function $g$. Hence, in view of the continuity of $g$, there exists a neighbourhood of $(1, 1)$ that does not contain its zeros.
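For illustration, one possible way to carry out such a numerical choice of $q$ is a dense-sampling check (our heuristic sketch, not the procedure of the software package [19]; a sufficiently fine grid and a safety margin are needed in practice): given the Laurent coefficients of $g$, decrease $q$ while the sampled minimum of $|g|$ on the closed product of annuli stays away from zero.

```python
import numpy as np

def admissible_q(coeffs, q_grid=np.arange(0.95, 0.5, -0.01),
                 n_rad=9, n_ang=64, tol=1e-3):
    """Heuristic search for q: sample |g(z1, z2)| on a grid of the closed product of
    annuli {q <= |z_i| <= 1/q} and return the smallest tested q for which no
    (near-)zero is detected.  coeffs maps (k1, k2) to the Laurent coefficient of
    z1^k1 z2^k2.  This is a sampling check only, not a proof of zero-freeness."""
    theta = 2 * np.pi * np.arange(n_ang) / n_ang
    best = None
    for q in q_grid:
        radii = np.exp(np.linspace(np.log(q), -np.log(q), n_rad))   # q, ..., 1/q
        r1, r2, t1, t2 = np.meshgrid(radii, radii, theta, theta, indexing="ij")
        z1 = r1 * np.exp(1j * t1)
        z2 = r2 * np.exp(1j * t2)
        g = sum(c * z1**k1 * z2**k2 for (k1, k2), c in coeffs.items())
        if np.min(np.abs(g)) > tol:
            best = q                            # no zeros detected: try a smaller q
        else:
            break
    return best

# Toy Laurent polynomial (hypothetical coefficients, for illustration only):
# g(z) = 2.5 + z1 + 1/z1 + 0.2*(z2 + 1/z2) has no zeros near the unit torus.
g_coeffs = {(0, 0): 2.5, (1, 0): 1.0, (-1, 0): 1.0, (0, 1): 0.2, (0, -1): 0.2}
print(admissible_q(g_coeffs))                   # approximately 0.76 for this toy example
```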

Combining the above results, we have the following theorem.

Theorem 4. Let $f(z)=1/g(z)$, where $g$ is holomorphic in some domain $U \subset \mathbb{C}^2$ that contains the point $(1, 1)$, let $g((1, 1)) \ne 0$, and let $q$ be such that the domain $A=A_{q, 1/q} \times A_{q, 1/q}$ is a subset of $U$ and has no zeros of $g(z)$ in its closure. Then, in the domain $A$,

$$ \begin{equation*} f(z_1, z_2)=\sum_{k_1, k_2 \in \mathbb{Z}} a_{k_1,k_2} z_1^{k_1}z_2^{k_2}, \end{equation*} \notag $$
where the coefficients are estimated as
$$ \begin{equation} |a_{k_1, k_2}| \leqslant C q^{|k_1|+|k_2|},\qquad k_1, k_2 \in \mathbb{Z}, \end{equation} \tag{8.1} $$
for some positive constant $C$.

8.2. The rate of decay of the coefficients of wavelet functions

We now apply the general theorem of § 8.1 on the coefficients of a holomorphic function to estimate the decay of the coefficients of the wavelet functions constructed in Theorem 3 from two-digit tiles.

Corollary 4. Let $G$ be a two-digit tile, $\varphi=B_n^G$ be the corresponding tile $\mathrm{B}$-spline, let $\varphi_1$ be its orthogonalization, let $c_k$ be the coefficients of the refinement equation for $\varphi_1$, and let $\psi$ be the corresponding wavelet function, which is a linear combination of $M$-dilations of the function $\varphi_1$. Then the coefficients $a_{k_1, k_2}=\pm c_{k_1, k_2}$ of this linear combination satisfy

$$ \begin{equation} |a_{k_1, k_2}|=|c_{k_1, k_2}| \leqslant C q^{|k_1|+|k_2|},\qquad k_1, k_2 \in \mathbb{Z}, \end{equation} \tag{8.2} $$
where $q$ is such that, after the change of variables $z_1=e^{-2 \pi i (e_1, \xi)}$, $z_2=e^{-2 \pi i (e_2, \xi)}$, the function $\Phi(M^T \xi)$ has no zeros in the closure of the domain $A(q, q) \times A(1/q, 1/q)$; here $z=(z_1, z_2)$, $e_1=(1, 0)$, $e_2=(0, 1)$.

This implies an estimate of the decay of the wavelet function itself (with a possibly different constant $C$).

Corollary 5. For the orthogonalized tile $\mathrm{B}$-spline $\varphi_1$ and the corresponding wavelet function $\psi$,

$$ \begin{equation*} \begin{alignedat}{2} |\varphi_1(x_1, x_2)| &\leqslant C q^{|x_1|+|x_2|}, &\qquad x_1, x_2 &\in \mathbb{R}, \\ |\psi(x_1, x_2)| &\leqslant C q^{|x_1|+|x_2|}, &\qquad x_1, x_2 &\in \mathbb{R}, \end{alignedat} \end{equation*} \notag $$
where $q$ is chosen as in Corollary 4.

We now estimate the rate of decay of the Bear-2 and Bear-4 wavelet functions. That is, we consider the set $G$ which is the Bear tile with the matrix given by (4.3), and $\varphi= B_2^G$ or $\varphi=B_4^G$.

For Bear-4, we obtain $q=0.85$. An approximate location of the zeros (restricted to a certain range) is given in Figs. 14, 15. The orange line is $y=x$, the green point is $(1, 1)$. For Bear-2, we can choose $q=0.7$.

Hence, for $|k_1|+|k_2|>40$, the values $|c_{k_1, k_2}|$ for Bear-4 are at most of order $10^{-3}$, and for Bear-2, at most of order $10^{-6}$. In fact, the majority of the coefficients are also small in the range $|k_1|+|k_2| \leqslant 40$. Let us proceed with a more refined analysis of the coefficients to estimate how many of them should be kept so that the $\ell_2$- or the $\ell_1$-norm of the vector of coefficients does not change much.

Since we cannot store infinitely many coefficients $c_{k_1, k_2}$, we delete all those outside a large square, that is, those satisfying $|k_1|+|k_2|> m$. How to choose $m$ so that the norm of the tail (the vector of the deleted coefficients) is small enough?

Proposition 7. For the $\ell_2$-norm

$$ \begin{equation*} H_2 =\sqrt{\sum_{|k_1|+|k_2|>m} c_{k_1, k_2}^2} \end{equation*} \notag $$
of the coefficients of the “tail”,
$$ \begin{equation} H_2 \leqslant \frac{2 C q^{m+1}\sqrt{1+m-mq^2}}{(1-q^2)}, \end{equation} \tag{8.3} $$
and for the $\ell_1$-norm $H_1={\sum_{|k_1|+|k_2|>m} |c_{k_1, k_2}|}$ of the coefficients,
$$ \begin{equation} H_1 \leqslant \frac{4 C q^{m+1}(1+m-mq)}{(1-q)^2}, \end{equation} \tag{8.4} $$
where the parameters $q, C$ are from inequality (8.2).

Thus, both the $\ell_1$- and $\ell_2$-norms of the tail decay as $O(q^m)$ up to a factor polynomial in $m$, where $q$ is smaller than one. The constants for $q^m$ for Bear-2 and Bear-4 will be estimated after the proof of Proposition 7.

Proof of Proposition 7. Consider the case $k_1>0$, $k_2 \geqslant 0$; by symmetry, the full sum of squares is at most four times larger, so the final $\ell_2$-estimate acquires the factor $2$ (and the $\ell_1$-estimate below, the factor $4$). Using inequality (8.2), we have
$$ \begin{equation*} \sum_{\substack{k_1+k_2>m \\ k_1>0,\, k_2 \geqslant 0}} c_{k_1, k_2}^2 \leqslant \sum_{s=m+1}^\infty s C^2 q^{2s} \leqslant \frac{C^2 q^{2m+2}(1+m-mq^2)}{(1-q^2)^2}. \end{equation*} \notag $$
Hence
$$ \begin{equation*} H_2 \leqslant \frac{2 C q^{m+1}\sqrt{1+m-mq^2}}{(1-q^2)} \end{equation*} \notag $$
proving the first assertion.

Similarly, we estimate the $\ell_1$-norm of the coefficients of the tail. We have

$$ \begin{equation*} \sum_{\substack{k_1+k_2>m \\ k_1>0,\, k_2 \geqslant 0}} |c_{k_1, k_2}| \leqslant \sum_{s=m+1}^\infty s C q^{s} \leqslant \frac{C q^{m+1}(1+m-mq)}{(1-q)^2}, \end{equation*} \notag $$
and hence,
$$ \begin{equation*} H_1 \leqslant \frac{4 C q^{m+1}(1+m-mq)}{(1-q)^2}. \end{equation*} \notag $$

The constant $C$ can be estimated from Theorem C. In what follows, we suppose for simplicity that $C=1$. The choice of $q$ was discussed in § 8.1.
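
For concreteness, the right-hand sides of (8.3) and (8.4) are easy to tabulate numerically. The following short Python sketch evaluates them with $C=1$, as assumed above; for $q=0.7$ and $m=22$, for instance, it gives approximately $0.00375$ for the $\ell_2$-bound, in agreement with Table 1 below.

import math

# Evaluation of the tail bounds (8.3) and (8.4) from Proposition 7 (with C = 1).

def tail_l2_bound(q, m, C=1.0):
    # right-hand side of (8.3)
    return 2 * C * q**(m + 1) * math.sqrt(1 + m - m * q**2) / (1 - q**2)

def tail_l1_bound(q, m, C=1.0):
    # right-hand side of (8.4)
    return 4 * C * q**(m + 1) * (1 + m - m * q) / (1 - q)**2

for m in (1, 10, 15, 21, 22, 30, 60):
    print(m, tail_l2_bound(0.7, m))   # e.g. m = 22 gives about 0.00375

print(tail_l1_bound(0.7, 32))         # about 0.0036, cf. Table 3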

Example 3 (Bear-2). For Bear-2 we have $q= 0.7$. The values of the right-hand side of the estimate (8.3) for $q=0.7$ and some $m$ are given in Table 1. For $m=22$, we have $H_2 \leqslant 0.005$.

Table 1. Estimates of the $\ell_2$-norm of the tail of coefficients $H_2$ for $q=0.7$

$m$: $1$, $10$, $15$, $21$, $22$, $30$, $60$
$H_2 \leqslant$: $2.36$, $0.19$, $0.038$, $0.00525$, $0.00375$, $0.00025$, $0.00000025$

Among the remaining coefficients $c_{k_1, k_2}$, we choose as many as possible with $|k_1|+|k_2| \leqslant m$ so that the square root of the sum of their squares is at most $0.005$, and delete these coefficients as well. Now the $\ell_2$-norm of all the deleted coefficients is at most $0.01$, and we get a small error.

Numerical results show that only $65$ coefficients remain; their locations and values are shown in Fig. 16. The size of each point depends logarithmically on the corresponding coefficient. The values of the coefficients are also given in Table 2, but only half of them are shown, since in our case $c_{i, j}=c_{-i, -j}$.

Table 2. Coefficients of Bear-2 for approximation in $\ell_2$

$i$: $1$, $0$, $3$, $4$, $1$, $3$, $3$, $-3$
$j$: $0$, $0$, $0$, $0$, $1$, $1$, $-1$, $0$
$c_{i,j}$: $1.15586$, $0.5563$, $-0.09441$, $-0.06459$, $0.06225$, $-0.04478$, $-0.0398$, $0.0191$

$i$: $5$, $4$, $4$, $2$, $5$, $0$, $-4$, $4$
$j$: $1$, $-1$, $1$, $-1$, $-1$, $-2$, $0$, $2$
$c_{i,j}$: $0.01591$, $0.01557$, $0.01535$, $-0.01304$, $0.01256$, $-0.00979$, $0.00935$, $0.00911$

$i$: $2$, $-4$, $-4$, $6$, $-5$, $5$, $-5$, $2$
$j$: $1$, $-1$, $1$, $2$, $-1$, $2$, $1$, $-2$
$c_{i,j}$: $-0.00862$, $-0.00644$, $-0.00543$, $-0.00430$, $-0.00418$, $-0.0041$, $-0.00350$, $0.00340$

$i$: $3$, $-5$, $-5$, $-6$, $5$, $-6$, $8$, $6$, $4$
$j$: $2$, $0$, $-2$, $-1$, $3$, $-2$, $-1$, $-2$, $-2$
$c_{i,j}$: $0.0031$, $-0.0029$, $0.0021$, $0.0017$, $-0.0016$, $0.0015$, $0.0015$, $-0.0012$, $0.0012$

Similarly, Table 3 shows the values of the right-hand side in estimate (8.4) for Bear-2 with $C=1$, $q=0.7$. First, consider $m=32$, for which $H_1 \leqslant 0.005$.

Table 3. Estimates of the $\ell_1$-norm of the tails of the coefficients $H_1$ for Bear-2

$m$: $1$, $10$, $20$, $30$, $32$, $45$, $60$
$H_1 \leqslant$: $28.3$, $3.52$, $0.17$, $0.007$, $0.0036$, $0.000048$, $0.0000003$

We again select the maximal possible number of coefficients $c_{k_1, k_2}$ from the square $|k_1|+|k_2| \leqslant m$ such that the sum of their absolute values is at most $0.005$, and delete them. Numerical computations show that $149$ coefficients remain; they are shown in Fig. 17. The $\ell_1$-norm of the deleted coefficients is at most $0.01$. The values of the coefficients are shown in Table 7 in the appendix.

Example 4 (Bear-4). For Bear-4, we have $q=0.85$. Table 4 shows the values of the right-hand side in estimate (8.3) with $q=0.85$ and some $m$. For $m=53$, we have $H_2 \leqslant 0.005$ for Bear-4. We again choose the maximal number of coefficients $c_{k_1, k_2}$ satisfying $|k_1|+|k_2| \leqslant m$ such that the square root of the sum of their squares is at most $0.005$, and delete them as well. The $\ell_2$-norm of all the deleted coefficients is $<0.01$; the values of the remaining coefficients are shown in Table 8 in the appendix.

Table 4. Estimates of the $\ell_2$-norm of the tails of the coefficients $H_2$ for Bear-4

$m$: $1$, $10$, $20$, $30$, $40$, $53$, $60$
$H_2 \leqslant$: $5.89$, $2.34$, $0.61$, $0.14$, $0.03$, $0.0044$, $0.0015$

§ 9. The regularity of the tile $\mathrm{B}$-splines

Regularity is one of the most important parameters of refinable functions and of the corresponding wavelet systems. For wavelets, regularity implies good approximation properties and fast decay of the coefficients of wavelet decompositions (see [3], [4]). In some applications, regularity is crucial (for example, in the wavelet–Galerkin method). For subdivision schemes, the regularity of the limit function determines both the quality of the limit surface and the rate of convergence of the algorithm (see [17]).

For the classical piecewise-polynomial splines, regularity is controlled by their order, but, for tile B-splines, evaluation of smoothness exponents is a great challenge. By applying the method developed in the recent paper [38], we will find exact Hölder exponents for tile $\mathrm{B}$-splines of small orders.

Definition 5. The general Hölder regularity of a function $\varphi$ in the space $C$ is defined by

$$ \begin{equation*} \alpha_{\varphi}=k+\sup \bigl\{\alpha \geqslant 0\colon \|\varphi^{(k)}(\,{\cdot}\,{+}\,h) - \varphi^{(k)}\|_C \leqslant C \|h\|^{\alpha}\ \forall\, h \in \mathbb{R}^d\bigr\}, \end{equation*} \notag $$
where $k$ is the maximal integer such that $\varphi \in C^k(\mathbb{R}^d)$.

For $\varphi \in C^{\infty}$, we define $\alpha_{\varphi}=+\infty$.

Similarly, the Hölder regularity in $L_2$ is defined by replacing $C^k(\mathbb{R}^d)$ by $W_2^k(\mathbb{R}^d)$ and the norm in $C$ by the norm in $L_2$.

It is known that the value of the Hölder regularity of a refinable function is determined by the joint spectral radius (by the $L_2$-radius, in the case of the $L_2$-regularity).

Definition 6. Given linear operators $A_0$, $A_1$, their joint spectral radius is defined by

$$ \begin{equation*} \rho_{C}(A_0, A_1)=\lim_{s \to \infty} \max_{\sigma} \|A_{\sigma(1)}\cdots A_{\sigma(s)}\|^{1/s}, \qquad \sigma\colon \{1, \dots, s\} \to \{0, 1\}. \end{equation*} \notag $$

Definition 7. Given linear operators $A_0$, $A_1$, their $L_2$-radius is defined by

$$ \begin{equation*} \rho_{2}(A_0, A_1)=\lim_{s \to \infty} \biggl(\frac{1}{2^s} \sum_{\sigma} \|A_{\sigma(1)}\cdots A_{\sigma(s)}\|^2\biggr)^{1/(2s)}, \qquad \sigma\colon \{1, \dots, s\} \to \{0, 1\}. \end{equation*} \notag $$

Suppose we have a refinement equation with finitely many terms and coefficients $c_k$. Consider the set

$$ \begin{equation*} K=\biggl\{x \in \mathbb{R}^d \colon x=\sum_{j=1}^{\infty} M^{-j} \gamma_j,\ \gamma_j \in \operatorname{supp} c\biggr\}, \end{equation*} \notag $$
where $\operatorname{supp} c=\{k \in \mathbb{Z}^d \colon c_k \ne 0\}$ is the support of the mask.
Next, we choose an arbitrary set of digits $D(M)$ for the dilation matrix $M$ that generates a tile $G_0$. We call it a basis tile. In the case of tile $\mathrm{B}$-splines, one can take the corresponding generating tile as a basis tile.

Definition 8. The set $\Omega \subset \mathbb{Z}^d$ is the minimal set of integer vectors such that $K \subset \Omega+G_0=\bigcup_{k \in \Omega} {(k+G_0)}$.

This set can be found using the algorithm from [39]. In the univariate case for $M=2$, $D(M)=\{0, 1\}$, the basis tile $G_0=[0, 1]$ is the unit segment. If a refinement equation is given by the coefficients $c_0, \dots, c_N$, then $K=[0, N]$, $\Omega=\{0, 1, 2, \dots, N-1\}$.

If the basis tile is fixed, we can define the transition matrices $(T_\Delta)_{a,b}=c_{Ma-b+\Delta}$ for all $a, b \in \Omega$, $\Delta \in D(M)$.

Using these matrices, we can find the Hölder regularity of a refinable function $\varphi$ (see [38]). It is expressed in terms of the joint spectral characteristics of these matrices restricted to a certain common invariant subspace, and in most cases this space is $W=\bigl\{x \in \mathbb{R}^{N} \bigm| \sum_{k} x_{k}=0\bigr\}$. Namely, if the Hölder regularity is at most one and the integer shifts of the function $\varphi$ are linearly independent (see § 10 for details), we have

$$ \begin{equation*} \begin{aligned} \, \alpha_{\varphi} &=-\log_{\rho(M)}\bigl(\rho_{C}(T_0|_W, T_1|_W)\bigr), \\ \alpha_{\varphi, 2} &=-\log_{2}\bigl(\rho_2(T_0|_W, T_1|_W)\bigr), \end{aligned} \end{equation*} \notag $$
where $\rho(M)$ is the spectral radius of the matrix $M$. In the general case, without these constraints on the Hölder regularity, the Hölder exponent is computed by similar formulas:
$$ \begin{equation} \alpha_{\varphi} =-\log_{\rho(M)}\bigl(\rho_{C}(T_0|_{W_k}, T_1|_{W_k})\bigr), \end{equation} \tag{9.1} $$
$$ \begin{equation} \alpha_{\varphi, 2} =-\log_{2}\bigl(\rho_{2}(T_0|_{W_k}, T_1|_{W_k})\bigr), \end{equation} \tag{9.2} $$
where $W_k$ is the space of vectors from $\mathbb{R}^N$ orthogonal to the restrictions to $\Omega$ of all polynomials of $d$ variables of degree at most $k$; the number $k$ in (9.1), (9.2) is the maximal number such that the space $W_k$ is invariant under the matrices $T_0$, $T_1$.
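
As a sanity check of this construction, consider the simplest univariate case described above: $M=2$, $D(M)=\{0,1\}$, $G_0=[0,1]$, and the classical hat function $B_1$ with refinement coefficients $c_0=1/2$, $c_1=1$, $c_2=1/2$. The following Python sketch builds the transition matrices, restricts them to $W$, and recovers the well-known Hölder exponent $1$ of the hat function; here the restrictions are one-dimensional, so no joint-spectral-radius algorithm is needed.

import numpy as np

# Univariate sanity check, assuming the conventions above: M = 2, digits
# D = {0, 1}, basis tile G_0 = [0, 1].  For the hat function B_1 the
# refinement coefficients are c_0 = 1/2, c_1 = 1, c_2 = 1/2, so N = 2 and
# Omega = {0, 1}.

c = {0: 0.5, 1: 1.0, 2: 0.5}
Omega = [0, 1]

def transition(delta):
    # (T_Delta)_{a,b} = c_{2a - b + delta}
    return np.array([[c.get(2 * a - b + delta, 0.0) for b in Omega]
                     for a in Omega])

T0, T1 = transition(0), transition(1)

# restriction to W = {x : sum x_k = 0} = span{w}, w = (1, -1);
# one checks that T0 w and T1 w are parallel to w
w = np.array([1.0, -1.0])
lam0 = (T0 @ w)[0] / w[0]
lam1 = (T1 @ w)[0] / w[0]

# for 1x1 restrictions the joint spectral radius is just the larger modulus
rho = max(abs(lam0), abs(lam1))          # = 0.5
alpha = -np.log(rho) / np.log(2.0)       # the formula above with rho(M) = 2
print(alpha)                             # 1.0, the Holder exponent of the hat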

Remark 4. The $L_2$-radius of $n \times n$ matrices $A_0$, $A_1$ can be written in terms of the maximal eigenvalue of the linear operator $\mathscr{A}X=(1/2)({A_{0}^T X A_0+ A_{1}^T X A_1})$, which acts on the space of symmetric $n \times n$ matrices $X$. Namely,

$$ \begin{equation*} \rho_2= \sqrt{\lambda_{\max}(\mathscr{A})}. \end{equation*} \notag $$
Since the operator has an invariant cone (the cone of positive definite matrices), the largest eigenvalue $\lambda_{\mathrm{max}}(\mathscr{A})$ is non-negative by the Krein–Rutman theorem [40]. The matrix of the operator $\mathscr{A}$ is given by
$$ \begin{equation*} \frac{1}{2}(A_0 \otimes A_0+A_1 \otimes A_1), \end{equation*} \notag $$
where $\otimes$ is the Kronecker product of matrices [41], [42]. Thus, evaluation of the $L_2$-radius is reduced to that of the leading eigenvalue of the linear operator in dimension $(n^2+n)/2$.
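
The following Python sketch implements the formula of Remark 4 directly via the Kronecker product; the matrices used are arbitrary test data (not transition matrices of a tile), chosen so that the answer can be checked by hand: for $A_0=A_1=A$ the $L_2$-radius reduces to the usual spectral radius of $A$.

import numpy as np

# A small sketch of the Kronecker-product formula from Remark 4:
# rho_2(A_0, A_1) = sqrt of the leading eigenvalue of
# (1/2)(A_0 (x) A_0 + A_1 (x) A_1).

def l2_radius(A0, A1):
    K = 0.5 * (np.kron(A0, A0) + np.kron(A1, A1))
    # by the Krein-Rutman argument the leading eigenvalue is real and >= 0,
    # so taking the largest modulus is safe
    return np.sqrt(np.max(np.abs(np.linalg.eigvals(K))))

A = np.array([[0.5, 1.0], [0.0, 0.5]])   # arbitrary test matrix
print(l2_radius(A, A))                    # equals the spectral radius of A, i.e. 0.5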

Most likely, an explicit formula for the joint spectral radius does not exist. Moreover, it is known that the problem of its computation for general matrices with rational coefficients is undecidable, and, for Boolean matrices, it is NP-complete [43]. Nevertheless, in most cases, in relatively small dimensions (up to 25), it is possible to exactly evaluate the joint spectral radius by the so-called invariant polytope algorithm [44]. We apply the upgraded version of this algorithm from [45].

Table 5. The $L_2$-regularity of two-digit tile $\mathrm{B}$-splines

$\mathrm{B}$-spline: $B_0$, $B_1$, $B_2$, $B_3$, $B_4$
Square: $0.5$, $1.5$, $2.5$, $3.5$, $4.5$
Dragon: $0.2382$, $1.0962$, $1.8039$, $2.4395$, $3.0557$
Bear: $0.3946$, $1.5372$, $2.6323$, $3.7092$, $4.7668$

Table 6. The regularity in $C$ of two-digit tile $\mathrm{B}$-splines

$\mathrm{B}$-spline: $B_0$, $B_1$, $B_2$, $B_3$
Square: $0$, $1$, $2$, $3$
Dragon: $0$, $0.47637$, $1.5584$, $2.1924$
Bear: $0$, $0.7892$, $2.2349$, $3.0744$

The regularity values of tile $\mathrm{B}$-splines up to order 4 are given in Tables 5 and 6. We thus have

Theorem 5. Tile $\mathrm{B}$-splines Bear-3 and Bear-4 are $C^2(\mathbb{R}^2)$- and $C^3(\mathbb{R}^2)$-smooth, respectively.

It is well known that the classical bivariate $\mathrm{B}$-splines of corresponding orders are not $C^2$-smooth (respectively, $C^3$-smooth).

Remark 5. The result of the theorem may seem paradoxical, since the regularity of the fractal $\mathrm{B}$-spline turns out to be higher than that of the rectangular one. This is possibly related to the fact that the cube has planar faces, which are smoothed slowly by autoconvolutions, while the smoothing is faster for tiles with fractal structure. Figure 18 illustrates the difference between the autoconvolutions of the indicators of a square and of a disc: in the case of a disc, the area of intersection of $\varphi(x)$ and $\varphi(x+h)$ that appears in the formula for the Hölder regularity decays much faster.

A similar phenomenon was observed by P. Oswald in his investigations of subdivision schemes [46]–[48]. Note also that, in the one-dimensional case, among all refinement equations with a given number of coefficients, the $\mathrm{B}$-spline has the maximal regularity of a solution [17]. As we see, the Bear splines provide the maximal smoothness for functions of two variables with four and five coefficients. For $\mathrm{B}$-splines of higher order, we cannot calculate the regularity because of the computational cost of evaluating the joint spectral radius.

Figure 19, (a)–(i) depicts the graphs of the partial derivatives of order one, two, and three for Bear-4.

Remark 6. The values of regularity for large orders of tile $\mathrm{B}$-splines are not given due to the difficulties involved in evaluation of the joint spectral radius for matrices of large size. The transition matrices grow fast with the order of convolution. The question on asymptotic behaviour of the regularity with increasing order of $\mathrm{B}$-splines remains open.

§ 10. Subdivision schemes

In this section, we apply the tile $\mathrm{B}$-splines obtained above to the construction of a special class of subdivision schemes (SubD algorithms), also known as surface refinement algorithms.

Subdivision schemes are linear iterative algorithms for interpolating or extrapolating functions from given values on some rough mesh. The resulting surface is the limit of iterative approximations constructed on each iteration from the values computed on increasingly dense lattices (which explains the name “surface refinement algorithm”). For the planar lattice, the limit surface is the graph of the limit function. Manifolds can also be obtained in this way (see the end of § 10). Below, we will consider these algorithms and their properties.

Let again $M \in \mathbb{Z}^{d \times d}$ be an expanding matrix. For an arbitrary mask (the set of numbers) $\{c_k\}$, consider the subdivision (SubD) operator $S\colon \ell_{\infty}(\mathbb{Z}^d) \to \ell_{\infty}(\mathbb{Z}^d)$,

$$ \begin{equation*} [Su](k)=\sum_{j \in \mathbb{Z}^d}{c_{k-Mj} \cdot u(j)}, \quad u \in \ell_{\infty}(\mathbb{Z}^d). \end{equation*} \notag $$
Applying the subdivision operator several times, we obtain a sequence of values from which the function can be constructed.
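
For illustration, here is a minimal Python sketch of one application of the SubD operator on $\mathbb{Z}^2$; the mask and the dilation matrix below are placeholders chosen only to show the data flow, not the mask of a particular tile $\mathrm{B}$-spline. Iterating this step produces the sequences $S^q u$ from which the functions $f_q$ are built.

import numpy as np

# One step of the SubD operator [Su](k) = sum_j c_{k - Mj} u(j) on Z^2.
# The sequence u and the mask c are stored as dictionaries indexed by
# integer pairs; the mask below is a placeholder for illustration.

def subdivision_step(u, c, M):
    M = np.asarray(M, dtype=int)
    result = {}
    for j, uj in u.items():
        Mj = M @ np.array(j)
        for delta, cd in c.items():
            k = tuple(Mj + np.array(delta))   # k = Mj + delta, so c_{k - Mj} = c_delta
            result[k] = result.get(k, 0.0) + cd * uj
    return result

# example: the delta-sequence, a hypothetical mask and an expanding matrix
u = {(0, 0): 1.0}
c = {(0, 0): 0.5, (1, 0): 1.0, (2, 0): 0.5, (0, 1): 0.5, (1, 1): 0.5}
M = [[1, -1], [1, 1]]
u1 = subdivision_step(u, c, M)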

Example 5 (the univariate case). Let $M=2$, and let $u$ be the sequence of values at integer points. From this sequence, we construct the function

$$ \begin{equation*} f_0(\,{\cdot}\,)=\sum_{k \in \mathbb{Z}} u(k) \chi_{[0, 1]}(\,{\cdot}\,{-}\,k), \end{equation*} \notag $$
which is constant on $[k, k+1]$, $k \in \mathbb{Z}$. After $q$ iterations of the subdivision operator, we obtain the function
$$ \begin{equation*} f_q(\,{\cdot}\,)=\sum_{k \in \mathbb{Z}} [S^qu](k) \chi_{[0, 1]}(2^q\,{\cdot}\,{-}\,k), \end{equation*} \notag $$
which is constant on $[k/2^q, (k+1)/2^q]$, $k \in \mathbb{Z}$.

Instead of the function $\chi_{[0, 1]}$, one can consider any other function $h(x)$ satisfying the partition of unity property, that is, $\sum_{j \in \mathbb{Z}}h(x-j) \equiv 1$, which is equivalent to saying that $\widehat{h}(0)=1$, $\widehat{h}(s)=0$, $s \ne 0$. For example, $h(x)=B_1(x)$ gives piecewise-linear functions $f_q$ on each iteration. If, for an admissible function $h$, the functions $f_q$ converge in $L_{\infty}$ as $q \to \infty$ for every sequence $u$, then the subdivision scheme is said to converge. The corresponding limit is called the limit function for a given $u$.

We follow the same scheme in the general multivariate case. We set

$$ \begin{equation*} f_q=\sum_{k \in \mathbb{Z}^d} [S^q u](k) \chi_{[0, 1]^d}(M^q\,{\cdot}\,{-}\,k) \text{ (a piecewise-constant approximation)}. \end{equation*} \notag $$
Instead of $\chi_{[0, 1]^d}$, we can use any function $h(x)$ with the property
$$ \begin{equation*} \sum_{j \in \mathbb{Z}^d}h(x-j) \equiv 1. \end{equation*} \notag $$
In particular, we can use here the indicator of an arbitrary tile $h= \chi_G$, a multivariate $\mathrm{B}$-spline $B_1$, etc.

Next, let us study the convergence of the algorithm in $C^n$. For $n \geqslant 0$, consider the function space

$$ \begin{equation*} \begin{aligned} \, Q_n &=\bigl\{h \in C^n(\mathbb{R}^d) \bigm| \widehat{h}(0)=1,\, \widehat{h}\text{ has zeros of order at least } n+1 \\ &\qquad\qquad \text{at each point from}\ \mathbb{Z}^d \setminus \{0\}\bigr\}. \end{aligned} \end{equation*} \notag $$
Note that $h \in Q_n$ if and only if $h \in C^n{(\mathbb{R}^d)}$, $\sum_{j \in \mathbb{Z}^d}h(x- j) \equiv 1$, and each algebraic polynomial of the variables $x_1, \dots, x_d$ of degree at most $n$ lies in the span of $\{h(\,{\cdot}\,{-}\,j)\}_{j \in \mathbb{Z}^d}$ (see [23]).

For example, $\chi_{[0,1]^d} \notin Q_0$, and the classical $B_1$- and $B_{n+1}$-splines lie in $Q_0$ and $Q_n$, respectively.

Definition 9. The subdivision scheme SubD converges in $C^n$ if, for some $h\,{\in}\,Q_n$ and every $u \in \ell_{\infty}(\mathbb{Z}^d)$, there exists a function $f_u \in C^n(\mathbb{R}^d)$ such that

$$ \begin{equation*} \biggl\|\sum_{k \in \mathbb{Z}^d} [S^q u](k) h(M^q\,{\cdot}\,{-}\,k)-f_u(\,{\cdot}\,)\biggr\|_{C^n(\mathbb{R}^d)} \to 0, \qquad q \to \infty. \end{equation*} \notag $$

Remark 7. We can always choose $h=B_{n+1}$ (the classical $\mathrm{B}$-spline of order $n+1$). Indeed, the convergence does not depend on the choice of the initial function $h \in Q_n$, that is, we can replace the words “for some $h \in Q_n$” in Definition 9 by “for every $h \in Q_n$”.

The operator $S$ is linear and commutes with integer shifts in the following sense: applying $S$ to the shift of the sequence $u$ by $k$, we obtain the shift of the sequence $Su$ by $Mk$. Therefore, it suffices to know the limit function only for the $\delta$-sequence $\delta(k)= \delta^0_k$; we denote this limit by $f_{\delta}$ (in the case of convergence). Now, for an arbitrary sequence $u \in \ell_{\infty}(\mathbb{Z}^d)$, the limit function has the form

$$ \begin{equation*} f_{u}(x)=\sum_{k \in \mathbb{Z}^d}{f_{\delta}(x-k) \cdot u(k)}, \end{equation*} \notag $$
since $u=\sum_{k \in \mathbb{Z}^d} u(k)\,\delta(\,{\cdot}\,{-}\,k)$. It turns out [17] that the function $f_\delta$ satisfies the refinement equation with coefficients $c_k$ of the subdivision operator,
$$ \begin{equation*} f_\delta(x)=\sum_{k \in \mathbb{Z}^d}{c_k f_\delta(Mx-k)}. \end{equation*} \notag $$

We define the numbers $c_k$ to be the coefficients of the refinement equations that generate tile $\mathrm{B}$-splines. Then the function $f_\delta$ is the tile $\mathrm{B}$-spline $B_n^G$, and, for an arbitrary initial sequence, the limit function is a linear combination of the integer shifts of $B_n^G$. In particular, the regularity of the limit function coincides with that of the tile $\mathrm{B}$-spline.

Note that the number of arithmetic operations required for applying the SubD-operator at each step depends on the number of non-zero coefficients $c_k$ of the refinement equation — the smaller is this number, the faster are the iterations of the subdivision scheme.

According to § 3 and § 4, the classical $\mathrm{B}$-spline $B_n$ of $d$ variables of order $n$ has $(n+2)^d$ non-zero coefficients. However, in every dimension we can consider a tile $\mathrm{B}$-spline with $n+2$ coefficients generated by a two-digit parallelepiped tile (such tiles exist in every dimension; see [49]). Moreover, the $\mathrm{B}$-splines themselves, and hence the limit surfaces of the subdivision schemes, are the same as in the classical case, since they are convolutions of the same indicators of parallelepipeds. The only difference is the way the algorithm is organized. We thus have the following theorem.

Theorem 6. The subdivision scheme SubD in $\mathbb{R}^d$ based on a tile $\mathrm{B}$-spline $B_n^G$, where $G$ is a two-digit parallelepiped tile, has complexity $n+2$ for one iteration. The classical $d$-variate subdivision scheme SubD of order $n$ based on the product of $d$ univariate $\mathrm{B}$-splines $B_n$ has complexity $(n+2)^d$ for one iteration. These algorithms generate the same limit surfaces.

It is known (see, for example, [50], [51]) that the following two conditions are necessary for the convergence of the SubD algorithm in $C^n$.

1) The corresponding refinement equation has a solution $\varphi \in C^n$ (in the general case, it is only known that the refinement equation always has a unique solution in the space of tempered distributions $\mathcal{S}'$ up to multiplication by a constant [3]).

2) The mask $a$ of the equation satisfies the sum rules, that is, it has zeros of order at least $n+1$ at the points $M^{-T}\Delta_*$ for all $\Delta_* \in D_* \setminus \{0\}$, where $D_*$ is the digit set corresponding to the transposed matrix $M^T$, and $a(0)= 1$ (see, for example, [17]). This condition can be easily rewritten as linear relations on the coefficients $c_k$ of the refinement equation. In particular, the sum rules of order $n=0$ are equivalent to saying that $\sum_{k} c_{Mk+\Delta}=1$ for every vector $\Delta \in D$, where $D$ is the set of digits of the basis tile.
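
The order-zero sum rules in condition 2) are easy to verify numerically: one only has to group the coefficients by the cosets of $M\mathbb{Z}^d$ determined by the digits. A minimal Python sketch is given below; the mask used there is a placeholder, namely the univariate hat-function coefficients $c_0=1/2$, $c_1=1$, $c_2=1/2$ with $M=2$, $D=\{0, 1\}$.

import numpy as np

# Check of the order-zero sum rules: sum_k c_{Mk + Delta} = 1 for every
# digit Delta in D.  The coefficients c are stored as a dictionary indexed
# by integer tuples.

def check_sum_rules_order0(c, M, digits, tol=1e-12):
    M = np.asarray(M, dtype=int)
    Minv = np.linalg.inv(M)
    sums = {tuple(d): 0.0 for d in digits}
    for k, ck in c.items():
        for d in digits:
            # k lies in the coset of Delta = d iff M^{-1}(k - d) is integer
            x = Minv @ (np.array(k) - np.array(d))
            if np.allclose(x, np.round(x)):
                sums[tuple(d)] += ck
    return all(abs(s - 1.0) < tol for s in sums.values())

# placeholder univariate example: hat-function mask, M = 2, D = {0, 1}
c = {(0,): 0.5, (1,): 1.0, (2,): 0.5}
print(check_sum_rules_order0(c, [[2]], [(0,), (1,)]))   # True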

These conditions are not sufficient (the corresponding examples are well known [21]). Nevertheless, if conditions 1), 2) are satisfied, and if, in addition, the limit refinable function $\varphi$ is stable (that is, its integer shifts are linearly independent), then the algorithm converges in $C^n$ (see [17] for $n=0$, and [52] for $n \geqslant 1$). It is known that the function $\varphi$ is stable if and only if its Fourier transform has no periodic zeros, that is, there is no point $\xi \in \mathbb{R}^d$ such that $\widehat \varphi (\xi+k)=0$ for all $k \in \mathbb{Z}^d$ (see, for example, [17]).

Proposition 8. For each $n \geqslant 1$, the tile $\mathrm{B}$-spline $B_n$ of $d$ variables of order $n$ is continuous and stable, and its refinement equation satisfies the sum rules of order $n$.

Proof. The convolution of the indicators of several compact sets is continuous, and hence so is the function $B_n$. The stability follows from the fact that the Fourier transform $\widehat B_n(\xi)$ does not have periodic zeros. Indeed, since the Fourier transform of a convolution is the product of the Fourier transforms of its factors, we have $\widehat B_n(\xi)=(\widehat B_0(\xi))^{n+1}$. As the integer shifts of a tile are linearly independent, $\widehat B_0(\xi)$ does not have a periodic zero, and therefore neither does $\widehat B_n(\xi)$.

Now we check the sum rules. By the definition of a tile, $B_0$ satisfies the sum rule of order zero, since $c_{\Delta}=1$ for each $\Delta \in D$ and the digits lie in different cosets of $\mathbb{Z}^d / M\mathbb{Z}^d$. Let $D_*$ be an arbitrary digit set corresponding to the transposed matrix $M^T$. Then the mask $a_0$ satisfies the sum rule of order zero, which can also be written in the frequency domain as $a_0(M^{-T}\Delta_*)=0$ for all $\Delta_* \in D_* \setminus \{0\}$ and $a_0(0)=1$. Since the mask of the tile $\mathrm{B}$-spline $B_n$ is $a_n(\xi)= a_0(\xi)^{n+1}$, it has zeros of order at least $n+1$ at the points $M^{-T}\Delta_*$, $\Delta_* \in D_* \setminus \{0\}$, and hence the function $B_n$ satisfies the sum rules of order $n$. This proves Proposition 8.

Corollary 6. Let the Hölder regularity of the tile $\mathrm{B}$-spline $B_n$ be $\alpha$. Then the subdivision algorithm based on $B_n$ converges in $C^k$ for every $k \leqslant \alpha$.

One of the most important problems of the subdivision algorithm theory is that of the rate of convergence. In [53], the rate of convergence of subdivision algorithms in $C^n(\mathbb{R})$ (the generalized rate of convergence) was defined by means of the difference schemes. Later, this concept was generalized to the multivariate case (see [54]). We will use a similar definition in terms of [52]. For simplicity, we will assume that the matrix $M$ is isotropic, that is, it is diagonalizable and all its eigenvalues have the same absolute value (the general case is dealt with similarly, see [38]).

It turns out (see [17], [50]) that the algorithm converges in $C^n$ with exponential rate. Namely, for every $u\in \ell_{\infty}$, $\|u\|_{\ell_{\infty}}=1$, $r=0, \dots, n$,

$$ \begin{equation*} \|f_q^{(r)}-f^{(r)}\|_{C(\mathbb{R}^d)} \leqslant C \cdot \tau_r^{-q}, \end{equation*} \notag $$
where $f=f_u$ is the limit function for $u$, and $f_q$ is the result of the $q$th iteration. The exponents of convergence satisfy $\tau_0=\dotsb=\tau_{n-1}=m^{1/d}$, where $m=|{\det{M}}|$, while $\tau_n$ depends on the coefficients of the subdivision algorithm. In particular,
$$ \begin{equation*} \tau_n=\rho_{C}(T_0|_{W_k},T_1|_{W_k})\cdot {m}^{n/d}, \end{equation*} \notag $$
where
$$ \begin{equation*} \rho_C(A_0,A_1)=\lim_{s\to\infty}\max_\sigma\|A_{\sigma(1)} \dotsb A_{\sigma(s)}\|^{1/s},\qquad \sigma\colon\{1,\dots,s\}\to\{0,1\}, \end{equation*} \notag $$
is the joint spectral radius of two operators (see § 9 for details and the definition of $k$).

Definition 10. The generalized rate of convergence of the subdivision algorithm is the quantity $n-\log_{m^{1/d}}{\tau_n}=-\log_{m^{1/d}}\rho$, where $\rho=\rho_{C}(T_0|_{W_k},T_1|_{W_k})$.

The following fact is well known [52].

Proposition 9. The Hölder regularity $\alpha_\varphi$ of the limit function $\varphi$ of the subdivision algorithm is not smaller than the generalized rate of convergence. If the function $\varphi$ is stable, these parameters coincide.

Corollary 7. For a tile B-spline $B_n^G \in C^n(\mathbb{R}^d)$, its subdivision scheme converges with the generalized rate $n-\log_{m^{1/d}}{\tau_n}=-\log_{m^{1/d}}\rho= \alpha_\varphi$. In particular, $\tau_n=m^{(n-\alpha_\varphi)/d}$.

Since tile $\mathrm{B}$-splines are stable, we can estimate the generalized rate of convergence of their subdivision algorithms. Bear-4 is the smoothest spline among those considered in Table 6.

Theorem 7. The subdivision algorithms constructed from the tile $\mathrm{B}$-splines Bear-3 and Bear-4 converge in $C^2$ and $C^3$, respectively.

The classical subdivision algorithms constructed from bivariate $\mathrm{B}$-splines of corresponding orders do not converge in $C^2$ ($C^3$, respectively).

Note that the convergence of the algorithm in the space of functions of high regularity has a strong effect on the quality of the generated surface. For example, the convergence in $C^2$ means that, at each point, the curvature of the surface converges to that of the limit surface. In particular, if the limit surface is locally convex at some point, then the surfaces obtained after several iterations are locally convex as well.

Since Bear-4 is optimal in terms of its rate of convergence and has a small number of non-zero coefficients (only five), let us consider the subdivision algorithm based on Bear-4. This scheme can be applied both in the case when a function is initially given at the integer points of the plane (for example, in image processing) and in the case where the initial points form a rough approximation of a surface. Figure 20 shows the pattern of computation. The values at the points marked with circles are updated at each iteration of the subdivision scheme by a linear combination of the values at the neighbouring points. The direction of computation is multiplied by $M^{-1}$ with each iteration.

Let us consider an application of the Bear-4 subdivision algorithm to the surface of a slightly deformed torus given by a rough approximation (see Fig. 21, (a)). Figures 21, (b)–21, (f) show the results after several iterations of Bear-4.

Figure 22, (a)–(c) illustrate an application of the Bear-4 algorithm to the surface with border (a catenoid).

§ 11. Conclusions

The paper presents an approach to the construction and analysis of multivariate $\mathrm{B}$-splines based on convolutions of tiles. Some properties of the tile $\mathrm{B}$-splines are established, and a detailed account of planar symmetric 2-tiles (Square, Dragon and Bear) is given. Tile $\mathrm{B}$-splines are shown to be solutions of refinement equations with a small number of non-zero coefficients, which makes them advantageous over the classical multivariate $\mathrm{B}$-splines. Orthogonalization of tile $\mathrm{B}$-splines produces orthonormal wavelet systems generated by a single wavelet function, for which we obtain explicit formulas, compute regularity exponents, and estimate the rate of decay at infinity. Using the machinery of multivariate complex analysis, we estimate the rate of decay and the number of coefficients required for approximation of the wavelet function with prescribed accuracy. Some of the constructed tile $\mathrm{B}$-splines possess a higher regularity than the classical $\mathrm{B}$-splines of the same orders. In particular, Bear-4 is three times continuously differentiable, unlike the corresponding classical $\mathrm{B}$-spline. This property is important for applications, in particular, for subdivision algorithms in geometric modelling, which require certain regularity of the generating function for convergence in $C^n$. Examples and numerical results are provided, and a software package implementing the results of the present paper has been developed.

Acknowledgements

The author is grateful to her scientific advisor V. Yu. Protasov for his constant assistance and advice, and to the referee for many useful comments and suggestions. The author is also grateful to the developers of the software package [55], with which the tiles in the present paper were constructed.

§ 12. Appendix. The tables with coefficients of wavelet functions

Table 7. The coefficients of Bear-2 for approximation in $\ell_1$ with accuracy $0.01$

$i$: $1$, $0$, $3$, $4$, $1$, $3$, $3$, $-3$
$j$: $0$, $0$, $0$, $0$, $1$, $1$, $-1$, $0$
$c_{i,j}$: $1.15586$, $0.55632$, $-0.09441$, $-0.06459$, $0.06225$, $-0.04478$, $-0.03976$, $0.01911$

$i$: $5$, $4$, $4$, $2$, $5$, $0$, $-4$, $4$
$j$: $1$, $-1$, $1$, $-1$, $-1$, $-2$, $0$, $2$
$c_{i,j}$: $0.01591$, $0.01557$, $0.01535$, $-0.01304$, $0.01256$, $-0.00979$, $0.00935$, $0.00911$

$i$: $2$, $-4$, $-4$, $6$, $-5$, $5$, $-5$, $2$
$j$: $1$, $-1$, $1$, $2$, $-1$, $2$, $1$, $-2$
$c_{i,j}$: $-0.00862$, $-0.00644$, $-0.00543$, $-0.00430$, $-0.00418$, $-0.0041$, $-0.00350$, $0.00339$

$i$: $3$, $-5$, $-5$, $-6$, $5$, $-6$, $8$, $6$
$j$: $1$, $-1$, $1$, $2$, $-1$, $2$, $1$, $-2$
$c_{i,j}$: $0.00306$, $-0.00292$, $0.00207$, $0.00172$, $-0.00156$, $0.00153$, $0.00151$, $-0.00124$

$i$: $4$, $-6$, $5$, $7$, $-1$, $-7$, $6$, $-7$
$j$: $2$, $0$, $-2$, $-1$, $3$, $-2$, $-1$, $-2$
$c_{i,j}$: $0.00123$, $-0.00113$, $-0.00108$, $0.00102$, $-0.00101$, $0.00095$, $0.00091$, $0.00085$

$i$: $-1$, $1$, $-5$, $-7$, $-6$, $4$, $8$, $9$
$j$: $-2$, $0$, $-2$, $3$, $3$, $-1$, $3$, $1$
$c_{i,j}$: $0.00081$, $0.00079$, $0.00074$, $-0.00073$, $0.00059$, $-0.00057$, $-0.00057$, $-0.00047$

$i$: $-8$, $4$, $2$, $2$, $10$, $10$, $-7$, $5$
$j$: $-3$, $3$, $2$, $-2$, $2$, $3$, $3$, $3$
$c_{i,j}$: $-0.00046$, $0.00046$, $-0.00045$, $-0.00040$, $-0.00037$, $-0.00036$, $-0.00032$, $0.00032$

$i$: $-7$, $1$, $0$, $-8$, $10$, $11$, $8$, $6$
$j$: $-2$, $-3$, $4$, $-3$, $1$, $-1$, $2$, $-3$
$c_{i,j}$: $0.00028$, $0.00027$, $0.00026$, $0.00025$, $-0.00022$, $0.00021$, $-0.00021$, $0.00019$

$i$: $-1$, $-9$, $-9$, $11$, $-5$, $4$, $9$, $10$
$j$: $2$, $1$, $-3$, $1$, $-4$, $4$, $4$, $4$
$c_{i,j}$: $-0.00018$, $-0.00018$, $0.00018$, $-0.00018$, $-0.00016$, $0.00015$, $0.00014$, $0.00012$

$i$: $12$, $3$, $11$
$j$: $2$, $5$, $-2$
$c_{i,j}$: $0.00012$, $0.00011$, $0.00011$

Table 8. The coefficients of Bear-4 for approximation in $\ell_2$ with accuracy $0.01$

$i$: $2$, $1$, $5$, $2$, $0$, $0$, $4$, $6$
$j$: $0$, $0$, $0$, $-1$, $-1$, $1$, $0$, $-1$
$c_{i,j}$: $1.08200$, $0.60379$, $-0.13271$, $0.08179$, $-0.06971$, $-0.06948$, $-0.06578$, $0.04453$

$i$: $6$, $-3$, $-2$, $-1$, $8$, $5$, $-4$, $3$
$j$: $1$, $0$, $0$, $-2$, $-1$, $-1$, $-1$, $2$
$c_{i,j}$: $0.04344$, $0.03556$, $0.03408$, $0.02357$, $-0.02329$, $0.02213$, $-0.02085$, $-0.01941$

$i$: $-3$, $7$, $5$, $-3$, $1$, $-5$, $-4$, $9$
$j$: $-2$, $-1$, $1$, $-1$, $1$, $-2$, $0$, $-1$
$c_{i,j}$: $-0.01931$, $-0.01870$, $0.01740$, $-0.01580$, $-0.01272$, $0.0122$, $-0.01128$, $0.01096$

$i$: $10$, $-5$, $8$, $6$, $-6$, $1$, $-4$, $-5$
$j$: $-1$, $-1$, $2$, $2$, $-1$, $2$, $-3$, $0$
$c_{i,j}$: $0.01051$, $0.00876$, $0.00874$, $-0.00846$, $0.00804$, $0.00796$, $0.00734$, $-0.00710$

$i$: $-3$, $9$, $-7$, $-2$, $-6$, $-6$, $1$, $11$
$j$: $2$, $-2$, $-2$, $-3$, $-2$, $-3$, $-1$, $-1$
$c_{i,j}$: $-0.00701$, $0.00677$, $-0.00635$, $-0.00626$, $-0.00607$, $-0.00598$, $-0.00577$, $-0.00505$

$i$: $0$, $11$, $8$, $10$, $9$, $0$, $-8$, $-8$
$j$: $3$, $-2$, $-2$, $-2$, $3$, $-2$, $1$, $-3$
$c_{i,j}$: $-0.00486$, $-0.0049$, $0.00460$, $-0.00425$, $-0.00408$, $0.00402$, $-0.00399$, $0.0040$

$i$: $6$, $-7$, $-7$, $7$, $-8$, $1$, $12$, $2$
$j$: $-3$, $-3$, $-1$, $3$, $-2$, $-4$, $-2$, $-3$
$c_{i,j}$: $0.00359$, $0.00347$, $-0.00339$, $0.0032$, $0.00323$, $-0.00302$, $0.00297$, $0.00293$

$i$: $13$, $-2$, $-9$, $-1$, $-7$, $1$, $-9$, $-8$
$j$: $-2$, $2$, $-2$, $2$, $-4$, $4$, $-3$, $-1$
$c_{i,j}$: $0.00289$, $-0.00280$, $0.00277$, $0.00257$, $0.00237$, $0.00230$, $-0.00230$, $-0.00228$

$i$: $-10$, $-9$, $-5$, $4$, $5$, $13$, $-1$, $-6$
$j$: $-3$, $-4$, $-4$, $3$, $-3$, $-1$, $-4$, $0$
$c_{i,j}$: $-0.00221$, $-0.00200$, $-0.00199$, $0.00198$, $0.00195$, $0.00181$, $0.00177$, $0.00174$

$i$: $14$, $12$, $7$, $-11$, $10$, $8$, $-2$, $-11$
$j$: $-2$, $4$, $-3$, $2$, $4$, $-3$, $-5$, $-4$
$c_{i,j}$: $-0.00171$, $-0.00161$, $-0.00148$, $-0.00147$, $0.00145$, $-0.00144$, $-0.00142$, $0.00138$

$i$: $-10$, $14$, $-9$, $-1$, $4$, $-11$, $-10$, $1$
$j$: $-2$, $4$, $0$, $-3$, $5$, $-3$, $1$, $3$
$c_{i,j}$: $-0.0014$, $0.00131$, $0.00131$, $-0.00129$, $0.00126$, $0.0013$, $0.00113$, $-0.00111$

$i$: $-12$, $-11$, $14$, $-12$, $15$, $14$, $-12$, $12$
$j$: $-3$, $-2$, $0$, $-4$, $0$, $-3$, $2$, $0$
$c_{i,j}$: $0.00104$, $-0.00093$, $-0.00091$, $-0.00088$, $-0.00087$, $-0.00085$, $0.00083$, $0.00081$

$i$: $-13$, $-10$, $-9$, $-8$, $-4$, $-5$, $-12$, $8$
$j$: $-4$, $-5$, $-1$, $3$, $-5$, $4$, $-5$, $4$
$c_{i,j}$: $-0.00081$, $0.00080$, $0.0008$, $0.00076$, $0.00073$, $0.00072$, $-0.00070$, $-0.00070$

$i$: $13$, $5$
$j$: $-3$, $-4$
$c_{i,j}$: $-0.00068$, $-0.00067$


Bibliography

1. C. de Boor, A practical guide to splines, Appl. Math. Sci., 27, Springer-Verlag, New York–Berlin, 1978; Russian transl. Radio i Svyaz', Moscow, 1985
2. I. Daubechies, Ten lectures on wavelets, CBMS-NSF Regional Conf. Ser. in Appl. Math., 61, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992; Russian transl. Research Centre “Regular and Chaotic Dynamics”, Moscow, Izhevsk, 2001
3. I. Ya. Novikov, V. Yu. Protasov, and M. A. Skopina, Wavelet theory, Fizmatlit, Moscow, 2005; English transl. Transl. Math. Monogr., 239, Amer. Math. Soc., Providence, RI, 2011
4. P. Wojtaszczyk, A mathematical introduction to wavelets, London Math. Soc. Stud. Texts, 37, Cambridge Univ. Press, Cambridge, 1997
5. A. Yu. Shadrin, “The $L_{\infty}$-norm of the $L_2$-spline projector is bounded independently of the knot sequence: a proof of de Boor's conjecture”, Acta Math., 187:1 (2001), 59–137
6. C. de Boor, R. A. DeVore, and A. Ron, “On the construction of multivariate (pre)wavelets”, Constr. Approx., 9:2-3 (1993), 123–166
7. V. Yu. Protasov, “Fractal curves and wavelets”, Izv. Ross. Akad. Nauk Ser. Mat., 70:5 (2006), 123–162; English transl. Izv. Math., 70:5 (2006), 975–1013
8. P. A. Terekhin, “Best approximation of functions in $L_p$ by polynomials on affine system”, Mat. Sb., 202:2 (2011), 131–158; English transl. Sb. Math., 202:2 (2011), 279–306
9. C. De Boor, K. Höllig, and S. Riemenschneider, Box splines, Appl. Math. Sci., 98, Springer-Verlag, New York, 1993
10. M. Charina, C. Conti, K. Jetter, and G. Zimmermann, “Scalar multivariate subdivision schemes and box splines”, Comput. Aided Geom. Design, 28:5 (2011), 285–306
11. D. Van de Ville, T. Blu, and M. Unser, “Isotropic polyharmonic B-splines: scaling functions and wavelets”, IEEE Trans. Image Process., 14:11 (2005), 1798–1813
12. V. G. Zakharov, “Elliptic scaling functions as compactly supported multivariate analogs of the B-splines”, Int. J. Wavelets Multiresolut. Inf. Process., 12:02 (2014), 1450018
13. S. Zube, “Number systems, $\alpha$-splines and refinement”, J. Comput. Appl. Math., 172:2 (2004), 207–231
14. J. Lagarias and Yang Wang, “Integral self-affine tiles in $\mathbb{R}^n$. II. Lattice tilings”, J. Fourier Anal. Appl., 3:1 (1997), 83–102
15. K. Gröchenig and A. Haas, “Self-similar lattice tilings”, J. Fourier Anal. Appl., 1:2 (1994), 131–170
16. K. Gröchenig and W. R. Madych, “Multiresolution analysis, Haar bases, and self-similar tilings of $\mathbb{R}^n$”, IEEE Trans. Inform. Theory, 38:2, Part 2 (1992), 556–568
17. A. S. Cavaretta, W. Dahmen, and C. A. Micchelli, Stationary subdivision, Mem. Amer. Math. Soc., 93, no. 453, Amer. Math. Soc., Providence, RI, 1991
18. E. Catmull and J. Clark, “Recursively generated B-spline surfaces on arbitrary topological meshes”, Comput.-Aided Des., 10:6 (1978), 350–355
19. T. Zaitseva, Tile_Bsplines, https://github.com/TZZZZ/Tile_Bsplines
20. C. A. Cabrelli, C. Heil, and U. M. Molter, Self-similarity and multiwavelets in higher dimensions, Mem. Amer. Math. Soc., 170, no. 807, Amer. Math. Soc., Providence, RI, 2004
21. A. Krivoshein, V. Protasov, and M. Skopina, Multivariate wavelet frames, Ind. Appl. Math., Springer, Singapore, 2016
22. G. Strang and G. Fix, “A Fourier analysis of the finite element variational method”, Constructive aspects of functional analysis (Erice, 1971), C.I.M.E. Summer Schools, 57, Cremonese, 1973, 793–840
23. C. de Boor, R. A. DeVore, and A. Ron, “The structure of finitely generated shift-invariant spaces in $L_2(\mathbb{R}^d)$”, J. Funct. Anal., 119:1 (1994), 37–78
24. C. de Boor, R. Devore, and A. Ron, “Approximation from shift-invariant subspaces of $L_2(\mathbb{R}^d)$”, Trans. Amer. Math. Soc., 341:2 (1994), 787–806
25. I. Kirat and Ka-Sing Lau, “On the connectedness of self-affine tiles”, J. London Math. Soc. (2), 62:1 (2000), 291–304
26. C. Bandt and G. Gelbrich, “Classification of self-affine lattice tilings”, J. London Math. Soc. (2), 50:3 (1994), 581–593
27. G. Gelbrich, “Self-affine lattice reptiles with two pieces in $\mathbb{R}^n$”, Math. Nachr., 178 (1996), 129–134
28. W. J. Gilbert, “Radix representations of quadratic fields”, J. Math. Anal. Appl., 83:1 (1981), 264–274
29. R. F. Gundy and A. L. Jonsson, “Scaling functions on $\mathbb{R}^2$ for dilations of determinant $\pm 2$”, Appl. Comput. Harmon. Anal., 29:1 (2010), 49–62
30. C. Bandt, Combinatorial topology of three-dimensional self-affine tiles, arXiv: 1002.0710
31. C. Bandt, “Self-similar sets. 5. Integer matrices and fractal tilings of $\mathbb{R}^n$”, Proc. Amer. Math. Soc., 112:2 (1991), 549–562
32. Xiaoye Fu and J.-P. Gabardo, Self-affine scaling sets in $\mathbb R^2$, Mem. Amer. Math. Soc., 233, no. 1097, Amer. Math. Soc., Providence, RI, 2015
33. J. C. Lagarias and Yang Wang, “Haar type orthonormal wavelet bases in $\mathbf{R}^2$”, J. Fourier Anal. Appl., 2:1 (1995), 1–14
34. T. Zaitseva, “Haar wavelets and subdivision algorithms on the plane”, Adv. Syst. Sci. Appl., 17:3 (2017), 49–57
35. V. G. Zakharov, “Rotation properties of 2D isotropic dilation matrices”, Int. J. Wavelets Multiresolut. Inf. Process., 16:1 (2018), 1850001
36. T. I. Zaitseva and V. Yu. Protasov, “Self-affine 2-attractors and tiles”, Mat. Sb., 213:6 (2022), 71–110; English transl. Sb. Math., 213:6 (2022), 794–830, arXiv: 2007.11279
37. B. V. Shabat, Introduction to complex analysis, v. 1, 2, 3rd ed., Nauka, Moscow, 1985 (Russian); French transl. of 3rd ed. Introduction à l'analyse complexe, v. 1, 2, Traduit Russe Math., Mir, Moscow, 1990
38. M. Charina and V. Yu. Protasov, “Regularity of anisotropic refinable functions”, Appl. Comput. Harmon. Anal., 47:3 (2019), 795–821
39. M. Charina and Th. Mejstrik, “Multiple multivariate subdivision schemes: matrix and operator approaches”, J. Comput. Appl. Math., 349 (2019), 279–291
40. M. G. Kreĭn and M. A. Rutman, Linear operators leaving invariant a cone in a Banach space, Uspekhi Mat. Nauk, 3, no. 1(23), 1948; English transl. Amer. Math. Soc. Translation, 26, Amer. Math. Soc., New York, 1950
41. V. Yu. Protasov, “The generalized joint spectral radius. A geometric approach”, Izv. Ross. Akad. Nauk Ser. Mat., 61:5 (1997), 99–136; English transl. Izv. Math., 61:5 (1997), 995–1030
42. V. D. Blondel and Yu. Nesterov, “Computationally efficient approximations of the joint spectral radius”, SIAM J. Matrix Anal. Appl., 27:1 (2005), 256–272
43. V. D. Blondel, S. Gaubert, and J. N. Tsitsiklis, “Approximating the spectral radius of sets of matrices in the max-algebra is NP-hard”, IEEE Trans. Automat. Control, 45:9 (2000), 1762–1765
44. N. Guglielmi and V. Protasov, “Exact computation of joint spectral characteristics of linear operators”, Found. Comput. Math., 13:1 (2013), 37–97
45. T. Mejstrik, “Algorithm 1011: improved invariant polytope algorithm and applications”, ACM Trans. Math. Software, 46:3 (2020), 29
46. P. Oswald, “Designing composite triangular subdivision schemes”, Comput. Aided Geom. Design, 22:7 (2005), 659–679
47. P. Oswald and P. Schröder, “Composite primal/dual $\sqrt{3}$-subdivision schemes”, Comput. Aided Geom. Design, 20:3 (2003), 135–164
48. Qingtang Jiang and P. Oswald, “Triangular $\sqrt 3$-subdivision schemes: the regular case”, J. Comput. Appl. Math., 156:1 (2003), 47–75
49. T. I. Zaitseva, “Simple tiles and attractors”, Mat. Sb., 211:9 (2020), 24–59; English transl. Sb. Math., 211:9 (2020), 1233–1266
50. M. Charina, C. Conti, and T. Sauer, “Regularity of multivariate vector subdivision schemes”, Numer. Algorithms, 39:1-3 (2005), 97–113
51. N. Dyn and D. Levin, “Subdivision schemes in geometric modelling”, Acta Numer., 11 (2002)
52. V. Protasov, “The stability of subdivision operator at its fixed point”, SIAM J. Math. Anal., 33:2 (2001), 448–460
53. C. Conti and K. Jetter, “Concerning order of convergence for subdivision”, Numer. Algorithms, 36:4 (2004), 345–363
54. A. Cohen, K. Gröchenig, and L. F. Villemoes, “Regularity of multivariate refinable functions”, Constr. Approx., 15:2 (1999), 241–255
55. D. Mekhontsev, IFStile software, http://ifstile.com
