Abstract:
For divergence-form second-order elliptic operators with measurable $\varepsilon$-periodic coefficients in $\mathbb{R}^d$, resolvent approximations with an error term of order $\varepsilon^2$ as $\varepsilon\to 0$ in the operator norm $\|\cdot\|_{H^1{\to}H^1}$ are constructed. The method of two-scale expansions in powers of $\varepsilon$ up to order two inclusive is used. The lack of smoothness in the data of the problem is overcome by the use of Steklov smoothing or its iterates. First, scalar differential operators with a real matrix of coefficients, acting on functions $u\colon \mathbb{R}^d\to \mathbb{R}$, are considered, and then matrix differential operators with a complex-valued tensor of order four, acting on functions $u\colon \mathbb{R}^d\to \mathbb{C}^n$.
Bibliography: 20 titles.
with an $\varepsilon$-periodic matrix of coefficients $a^\varepsilon(x)=a(y)|_{y=\varepsilon^{-1}x}$, where $\varepsilon> 0$ is a small parameter. We assume that the 1-periodic real measurable matrix $a(y)=\{a_{ij}(y)\}_{i,j=1}^d$ is bounded and positive definite, that is, it satisfies the inequalities
for some positive constants $\lambda_0$ and $\lambda_1$. Equation (1.1) is solvable for each right-hand side $f\in H^{-1}(\mathbb{R}^d)=(H^1(\mathbb{R}^d))^*$ and, moreover, the resolvent $(A_{\varepsilon}+1)^{-1}\colon H^{-1}(\mathbb{R}^d)\to H^1(\mathbb{R}^d)$ is a bounded operator such that $\|(A_{\varepsilon}+1)^{-1}\|_{H^{-1}\to H^1}\leqslant C$ uniformly in $\varepsilon$. All function spaces related to (1.1) consist of real-valued functions.
The homogenized (limiting as $\varepsilon\to 0$) problem for (1.1) is similar:
where the constant matrix $\widehat{a}$ is in the same class (1.3); it can be found by solving problems on the periodicity cell $Y=[-1/2,1/2)^d$ (see (2.1) and (2.3)). It is well known [1]–[5] that solutions to (1.1) and (1.4) are linked as follows: $\lim_{\varepsilon\to 0}\|{u^\varepsilon-u}\|_{L^2(\mathbb{R}^d)}=0$, where the rate of convergence in $\varepsilon$ can be clarified. The most general result in this direction was obtained relatively recently: if $f\in L^2(\mathbb{R}^d)$, then
as was originally shown in [6] and slightly later and by use of another method in [7]. This estimate can be expressed in the operator form, in terms of resolvents:
We see that the resolvent $(\widehat A+1)^{-1}$ approximates $(A_\varepsilon+1)^{-1}$ with accuracy of order $\varepsilon$ in the $L^2(\mathbb{R}^d)$-operator norm. To approximate the resolvent of the original operator with the same accuracy of order $\varepsilon$ as in (1.6), but in the stronger $\|\cdot\|_{L^2 (\mathbb{R}^d)\to H^1 (\mathbb{R}^d)}$-operator norm, we must add a suitable corrector to the zeroth approximation $(\widehat A+1)^{-1}$, namely, we have the inequality
where $K_1(\varepsilon)f(x)=N(x/\varepsilon)\,{\cdot}\,\nabla u(x)$ for $u=(\widehat A+1)^{-1}f$ and the 1-periodic vector $N(y)=\{N_j(y)\}_{j=1}^d$ is formed by the solutions of the basic problems (mentioned above) on the periodicity cell $Y$. Estimate (1.7) was originally established in [8]; in its derivation Zhikov’s method was used, which was slightly refined in [8] in comparison with the original version in [7].
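As a quick illustration of the homogenized matrix $\widehat a$: in dimension $d=1$ the cell problem can be solved in closed form and $\widehat a$ is the harmonic mean $\langle a^{-1}\rangle^{-1}$ of the coefficient over the cell. The following numerical sketch uses a hypothetical coefficient $a(y)=2+\sin 2\pi y$, for which the harmonic mean equals $\sqrt 3$ in closed form; the implementation is illustrative only.

```python
import numpy as np

# 1D sanity check: for a 1-periodic coefficient a(y) the homogenized
# coefficient is the harmonic mean, hat_a = <a^{-1}>^{-1}. The coefficient
# a(y) = 2 + sin(2*pi*y) below is a hypothetical example with a closed form.
def homogenized_coefficient_1d(a, n=200_000):
    """Harmonic mean of a 1-periodic coefficient over the cell Y = [-1/2, 1/2)."""
    y = (np.arange(n) + 0.5) / n - 0.5        # midpoint grid on Y
    return 1.0 / np.mean(1.0 / a(y))          # <a^{-1}>^{-1}

hat_a = homogenized_coefficient_1d(lambda y: 2.0 + np.sin(2 * np.pi * y))
# the mean of 1/(2 + sin(2*pi*y)) over a period equals 1/sqrt(3),
# so hat_a = sqrt(3) ~ 1.7320508
print(hat_a)
```

Note that $\widehat a$ is the harmonic mean, not the arithmetic mean $\langle a\rangle$ (which would give 2 here); already this one-dimensional case shows that homogenization is not simple averaging of the coefficients.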
We narrow the action of the resolvent $(A_\varepsilon+1)^{-1}$ further down. Regarding it as an operator in $H^1 (\mathbb{R}^d)$, we approximate it with an error of order $O(\varepsilon^2)$; in other words, we establish the asymptotic formula
in the $H^1(\mathbb{R}^d)$-operator norm (the precise statement is in Theorem 4.1). To do this, first we find approximations for the solution of (1.1) with a suitable estimate for the error in the $H^1(\mathbb{R}^d)$-norm. Such approximations are defined by two-scale expansions, as in Bakhvalov’s approach presented in [1], with suitably many correctors supplementing the zeroth approximation. Here all correctors depend on the ‘slow’ and ‘fast’ variables $x$ and $x/\varepsilon$ alike and only differ from similar terms in two-scale approximations arising in Bakhvalov’s method by possible smoothing with respect to the slow variable. Smoothing is needed because the data in the problem (the coefficients and right-hand side) are not sufficiently regular, and we cannot consistently define a two-scale expansion without smoothing. The zeroth approximation in the expansion depends only on the ‘slow’ variable $x$, but is itself an expansion in powers of $\varepsilon$ (and therefore depends on $\varepsilon$ too). For an approximation to an accuracy of order $\varepsilon^2$ we can take this expansion to be the sum
of the solution $u(x)$ of the homogenized equation (1.4) and the solution $u^1(x)$ of another homogenized equation (with right-hand side depending on $u$),
where the $b_{jkl}$ are some specially selected coefficients defined by (2.9) in terms of the solutions of the periodic problems (2.1) and (2.7) and $D_j= \partial/\partial x_j$ is differentiation with respect to the $j$th variable. Here and throughout, we sum from 1 to $d$ with respect to repeated indices.
$$
\begin{equation}
(\widehat{A}+\varepsilon B )U_0^\varepsilon+U_0^\varepsilon=f+\varepsilon^2 B u^1.
\end{equation}
\tag{1.11}
$$
At the first step, as an approximation to the solution of the original problem we take a two-scale function of the type described above, with all terms smoothed with respect to the slow variables:
Here the 1-periodic functions $N_j(y)$ and $N_{jk}(y)$ are the solutions of (2.1) and (2.7), respectively; as $\Theta^\varepsilon$ we can take iterates of the Steklov smoothing operator $S^\varepsilon$, for example, $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$. Finally, $S^\varepsilon$ itself is defined for functions $\varphi\in L^1_{\mathrm{loc}}(\mathbb{R}^d)$ by
Theorem 1.1. In equation (1.1) let the right-hand side $f$ belong to $H^1(\mathbb{R}^d)$. Then the function defined in (1.12)–(1.14) approximates the solution of problem (1.1) with the estimate
where the constant $C$ depends only on the dimension $d$ and the constants $\lambda_0$ and $\lambda_1$ from (1.2) and (1.3).
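As a side illustration of the Steklov smoothing used above, the following one-dimensional sketch approximates the standard Steklov mean $S^\varepsilon\varphi(x)=\int_Y\varphi(x+\varepsilon\omega)\,d\omega$ by quadrature in $\omega$; the implementation, grid and test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Steklov smoothing in dimension one: (S^eps phi)(x) is the mean of
# phi(x + eps*w) over w in Y = [-1/2, 1/2), approximated here by a
# midpoint quadrature in w.
def steklov(phi, x, eps, n=4001):
    w = (np.arange(n) + 0.5) / n - 0.5            # midpoints of Y = [-1/2, 1/2)
    return np.mean(phi(x[:, None] + eps * w[None, :]), axis=1)

x = np.linspace(-1.0, 1.0, 11)
eps = 0.1
# For phi(t) = t^2 the Steklov mean is computable by hand:
# x^2 + eps^2 * <w^2> = x^2 + eps^2/12, an O(eps^2) perturbation of phi.
smoothed = steklov(lambda t: t**2, x, eps)
err = np.max(np.abs(smoothed - (x**2 + eps**2 / 12)))
print(err)  # only a tiny quadrature error remains
```

The $O(\varepsilon^2)$ size of the perturbation for smooth $\varphi$ is what makes the smoothing harmless at the accuracy level of Theorem 1.1.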
1.2.
Theorem 1.1 is proved in § 4 (namely, see §§ 4.1 and 4.2); we use Zhikov’s method, going back to [7] and [8]. Theorem 1.1 is our main technical result; from it we deduce further, more or less routine, corollaries. For example, from the approximation $w^\varepsilon$ we go over to the simpler approximation
by simplifying each of the three terms in (1.12); in particular, by using smoothing, namely, of the form $S^\varepsilon$, only in the second corrector. Owing to smoothing and the multiplier properties of the gradient of the solution of the problem in the cell (2.1) all terms participating in (1.12) and (1.17) belong to the space $H^1(\mathbb{R}^d)$ under our assumptions. This follows from Lemmas 3.1, 3.3 and 2.2 below and the elliptic estimates
$$
\begin{equation}
\|u\|_{H^{3}(\mathbb{R}^d)}\leqslant c \|f\|_{H^1(\mathbb{R}^d)}
\end{equation}
\tag{1.18}
$$
and
$$
\begin{equation}
\|u^1\|_{H^{2}(\mathbb{R}^d)}\leqslant c \|Bu\|_{L^2(\mathbb{R}^d)}\leqslant C\|f\|_{H^1(\mathbb{R}^d)}
\end{equation}
\tag{1.19}
$$
for solutions of homogenized equations.
In the derivation of (1.16) the terms in (1.12) must fit one another, so all of them involve double smoothing in the sense of Steklov. First of all, we need it in the corrector $U_2^\varepsilon(x)$: on the one hand for this corrector to belong to $H^1(\mathbb{R}^d)$ and on the other, in estimates for the discrepancy in equation (1.1) of the approximation constructed (see the proof of Theorem 1.1 in § 4.1). By contrast, in the expansion (1.17) we eliminate smoothing where possible to come closer to Bakhvalov’s ansatz in [1].
From (1.16), using the properties of smoothing (see § 3) and the multiplier properties of the gradient of the solution of (2.1) that we indicate in Lemma 2.2, we derive
with constant $C$ of the same type as in (1.16), as explained in detail in § 4.3.
Estimates (1.16) and (1.20) can be expressed in operator terms, which yields, for example, the asymptotic formula (1.8) with explicitly indicated correctors. We do this in § 4.4.
The method of constructing resolvent approximations in the operator energy norm, which we present first for the scalar problem (1.1) for greater transparency and simplicity, is extended in § 5 to the case of matrix operators in a space of vector-valued functions. The idea of the two-scale expansion of type (1.12) does not change in this case, but we have to use the more cumbersome machinery of fourth- and fifth-order tensors. Theorem 1.1 generalizes to this case as Theorem 5.1, and operator asymptotics of the type of (1.8) are described in Theorem 5.2. In § 5 we also present, as consequences of Theorem 5.2, asymptotic expressions for the resolvent in the weaker operator norms $\|\,{\cdot}\,\|_{H^1\to L^2}$ and $\|\,{\cdot}\,\|_{H^1\to H^{-1}}$.
1.3.
Constructing approximations to accuracy $O(\varepsilon^2)$ in the energy norm for the solution of (1.1) is a natural step after establishing (1.7). The approximation in (1.16) is interesting in its own right, but it is also useful as a tool in the construction of a resolvent approximation to accuracy $O(\varepsilon^3)$ for the solution of (1.1), as shown in [9] in the symmetric real scalar case. This yields a method for constructing such approximations which is an alternative to the spectral (operator-theoretic) method used in [10], where refined resolvent approximations in the operator norm $\|\cdot\|_{H^1\to L^2}$ that take account of the first and second correctors were investigated for the first time. Our alternative approach is conceptually simpler and agrees well with Bakhvalov’s ideas in [1], to which we add a number of improvements drawn from Zhikov’s method, for instance, smoothing of approximations and a special analysis of the discrepancies of approximations. By using smoothing we can relax the regularity assumptions on the data of the problem under which we establish the estimates in question, while the special analysis of discrepancies enables us to minimize the number of terms involved in our approximations.
Before [10], improved resolvent approximations with error $O(\varepsilon^2)$ in the $\|\,{\cdot}\,\|_{L^2\to L^2}$-operator norm were obtained (see [11] and [12]). To improve the accuracy order to $\varepsilon^3$, in [10] the authors had to narrow down the action of the resolvent and consider it as an operator from $H^1$ to $L^2$. Such approximations are impossible in the norm $\|\,{\cdot}\,\|_{L^2\to L^2}$. In a similar way, when we approximate the solution of (1.1) in the $H^1$-energy norm, we can achieve accuracy of order $\varepsilon$ when the right-hand side $f$ is in $L^2(\mathbb{R}^d)$ (see (1.7)). For accuracy of order $\varepsilon^2$ the minimal regularity of $f$ must be higher (see (1.16), where $f\in H^1(\mathbb{R}^d)$).
Estimates (1.16) and (1.20) were proved in [13] under the assumption that the scalar operator $A_\varepsilon$ in (1.1) is selfadjoint. In this case the approximation $v^\varepsilon$ from (1.20) is taken without $u^1$ (see (1.17)), and so the second homogenized equation is not considered. An important point here is the fact, mentioned and proved in [13], that a certain form $\sum_{i,j,k}b_{ijk}\xi_{i}\xi_{j}\xi_{k}$, $\xi\in \mathbb{R}^d$, related to the operator $B$ in (1.10), vanishes. This holds only in the ‘symmetric real scalar case’, when the coefficients $\{b_{ijk}\}$ have a rich symmetry.
Our aim is to extend, on the basis of Zhikov’s method, the results of [13] to a wider class of elliptic operators, namely, nonselfadjoint matrix operators with complex coefficients, for which the construction in [13] does not work and the method of [10] cannot be used either. Bakhvalov’s techniques come to our aid: the required approximation to the exact solution of the original problem is constructed as a two-scale expansion, and the zeroth approximation in this expansion, which depends only on the slow variable, is itself sought as an expansion in powers of $\varepsilon$, which results in several homogenized problems, stated recursively. As a result, the two methods, Bakhvalov’s and Zhikov’s, merge together, which is a notable feature of our paper.
§ 2. Problems on a cell
Consider $\mathcal{W}=\{\varphi\in H^1_{\mathrm{per}}(Y)\colon \langle \varphi\rangle=0\}$, the Sobolev space of periodic functions ($Y=[-1/2,1/2)^d$ is the periodicity cell) with zero mean value
where $e_1,\dots,e_d$ is the canonical basis in $\mathbb{R}^d$. The solutions of problems (2.1) can be understood in the sense of distributions on $\mathbb{R}^d$, or in the sense of an integral identity over $Y$ which holds for test functions in $C_{\mathrm{per}}^\infty(Y)$. This identity extends by closure to all functions in the energy space $\mathcal{W}$, so that
This yields the solvability of (2.1) and the estimate $\|N_j\|_{\mathcal{W}}\leqslant c$, $c=\mathrm{const}(\lambda)$. This twofold point of view on (2.1) extends to other similar differential relations for periodic functions (for example, in (2.5)–(2.7) and (2.10)).
the following result, established in [5], holds for the $g_j$.
Lemma 2.1. Let $g\in L_{\mathrm{per}}^2(Y)^d$, $\langle g\rangle=0$ and $\operatorname{div}g=0$. Then there exists a skew-symmetric matrix $G\!\in\! H^1_{\mathrm{per}}(Y)^{d\times d}$ such that $\langle G\rangle\!=\!0$, $\operatorname{Div}G\!=\!g$ and ${\|G\|_{H^1}\!\leqslant\! c \|g\|_{L^2}}$.
Here and in what follows we let $\operatorname{Div} G$ denote the row-wise divergence of ${G=\{G_{st}\}_{s,t=1}^d}$, so that $\operatorname{Div}G$ is the vector $\{D_tG_{st}\}_{s=1}^d$.
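Note that the row-wise divergence of a skew-symmetric matrix is itself divergence free, a fact used repeatedly below. Indeed, in index notation, relabelling the summation indices,

$$
\begin{equation*}
\operatorname{div}(\operatorname{Div}G)=D_sD_tG_{st}=\frac12\,D_sD_t(G_{st}+G_{ts})=0 \quad\text{whenever } G_{st}=-G_{ts},
\end{equation*}
\notag
$$

because the second derivatives $D_sD_t$ are symmetric in $s$ and $t$, while $G$ is skew-symmetric.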
By Lemma 2.1 there exist skew-symmetric matrices $G_j$ such that
where $N_j$ is the solution of (2.1) and the matrix $G_j$ is its derivative defined by (2.4) and (2.6). It is obvious that these problems are solvable and $\|N_{ij}\|_{\mathcal{W}}\leqslant c$, ${c=\mathrm{const}(\lambda)}$.
Note the properties of the solution of the basic problem in the cell (2.1) which are specific to the scalar case. First of all, $N_j\in L^\infty(Y)$ by the generalized maximum principle (for instance, see [14] or [15]). Moreover, the gradient $\nabla N_j$ is a multiplier from $H^1(\mathbb{R}^d)$ to $L^2(\mathbb{R}^d) ^d$ satisfying the following estimate.
Lemma 2.2. If $(\nabla N_j)^\varepsilon(x)=(\nabla N_j)(x/\varepsilon)$, then
The multiplier properties of the gradient of the solution of a basic problem on a cell were originally noted in [16]. The proof of (2.11) can be found in [8] or [17], for example.
§ 3. On smoothing
Estimates (1.16) and (1.20) were obtained using the same approach as in [7] and [8]: problems related to the low regularity of the data in the problem are eliminated by introducing an additional parameter of integration. This can be done as in [7], by making a direct shift in the approximation constructed, or by smoothing it (for instance, in the sense of Steklov) with respect to the slow variable, as in [8]. In our paper we choose a version of the shift method, where Steklov smoothing and its iterates are also used.
Recall some properties of the Steklov smoothing operator (see the definition in (1.15)). Below we let $\|\cdot\|$ and $(\,\cdot\,{,}\,\cdot\,)$ denote the norm and inner product in $L^2(\mathbb{R}^d)$, respectively.
Lemma 3.2. If $b\in L^2_{\mathrm{per}}(Y)$, $\displaystyle \int_Yb(y)\,dy=0$, $b^\varepsilon(x)=b(\varepsilon^{-1}x)$, $\varphi\in L^2(\mathbb{R}^d)$ and $\Phi\in H^1(\mathbb{R}^d)$, then
where inequality (3.5) can be applied to both the differences $S^\varepsilon v-v$ and $\nabla(S^\varepsilon v- v)=S^\varepsilon (\nabla v)-\nabla v$, because $v\in H^3(\mathbb{R}^d)$ by construction and $\|v\|_{H^3(\mathbb{R}^d)} \leqslant c_0\|f\|_{H^1(\mathbb{R}^d)}$. As a result,
Assume that the smoothing kernel $\theta\in L^\infty(\mathbb{R}^d)$ has a compact support, $\theta\geqslant 0$ and $\displaystyle\int_{\mathbb{R}^d} \theta(x)\,dx=1$.
Estimates (3.1)–(3.3), stated for the Steklov smoothing operator, are also valid for the general smoothing operator (3.7), with a single proviso: the constants on their right-hand sides depend not only on the dimension $d$, but also on the smoothing kernel $\theta$. If $\theta$ is even, then the smoothing $\Theta^\varepsilon$ also has properties similar to (3.5) and (3.6).
The following properties of the operator (3.7) or some analogues of these were pointed out in [18] and [19].
Lemma 3.3. Let the smoothing kernel $\theta$ be a Lipschitz function. If $b\in L^2_{\mathrm{per}}(Y)$, $b_\varepsilon(x)=b(x/\varepsilon)$ and $\varphi\in L^2(\mathbb{R}^d)$, then
It is obvious that the Steklov smoothing operator $S^\varepsilon$ is defined by (3.7) with smoothing kernel equal to the characteristic function $\theta_1(x)$ of the cube $Y= [-{1}/{2},{1}/{2})^d$. Steklov double smoothing $S^\varepsilon S^\varepsilon$ is an operator of the form (3.7) with smoothing kernel equal to the convolution $\theta_2=\theta_1*\theta_1$, and this convolution is easy to calculate. It is clear from the expression for $\theta_2(x)$ (see [18]) that this is a Lipschitz continuous function, which in addition is even, so we can apply Lemma 3.3 and inequalities of the type of (3.5) and (3.6) to the Steklov double smoothing operator.
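For reference, the kernel of the double smoothing is the product of one-dimensional ‘tent’ functions: a direct computation of the convolution gives

$$
\begin{equation*}
\theta_2(x)=(\theta_1*\theta_1)(x)=\prod_{k=1}^d\bigl(1-|x_k|\bigr)_{+},
\end{equation*}
\notag
$$

which is even, vanishes outside $[-1,1]^d$ and is Lipschitz continuous with constant 1 in each variable.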
denote the function smoothed by means of the operator $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$. Applying this operator to both sides of (1.11) we obtain
$$
\begin{equation}
(\widehat{A}+\varepsilon B )u^{,\varepsilon}+u^{,\varepsilon}=(f)_\varepsilon+\varepsilon^2 B (u^1)_\varepsilon=:f^{,\varepsilon},
\end{equation}
\tag{4.2}
$$
where we have used the representation $g_j^\varepsilon=\operatorname{Div}(\varepsilon G_j^\varepsilon )$ and have taken account of the formula for the divergence of a matrix $G$ times a scalar $\varphi$, namely, ${\operatorname{Div}(G\varphi) =\varphi\operatorname{Div}G+G\nabla \varphi}$. The vector $\operatorname{Div}(\varepsilon G_j^\varepsilon D_ju^{,\varepsilon})$ is divergence free because $G_j^\varepsilon$ is skew-symmetric. From (4.7) and (4.8) we deduce
We write out the components of the right-hand side $F^\varepsilon$ in detail, exposing the structure of the zeroth approximation $u^{,\varepsilon}$ and denoting smoothing by means of $\Theta^\varepsilon$ as in (4.1):
In fact, we can apply Lemma 3.1 or 3.3 to the terms in (4.11), (4.13) and the second term in (4.12) and can apply Lemma 3.2 to the first terms in (4.10) and (4.12). The second term in (4.10) can in advance be transformed into the following form because $\widetilde{g}_{ij}^{\,\varepsilon}$ is divergence free:
after which we can also apply Lemma 3.1 to the product $\widetilde{g}_{ij}^{\,\varepsilon} D_iD_j(u^1)_\varepsilon$. (For some details of the application of Lemmas 3.1–3.3, see § 4.2 below.) As a result of our analysis, we obtain the estimates
where the constants $C$ depend in the end on the dimension $d$ and norms $\|N_i\|_{\mathcal{W}} $ or $\|N_{ij}\|_{\mathcal{W}} $, and the $L^2$-norms of the gradients $\nabla^3 u$, $\nabla^2 u$, $\nabla u$ and $\nabla^2 u^1$, $\nabla u^1$ can be estimated in terms of $\|f\|_{H^1(\mathbb{R}^d)}$ because of the elliptic bounds (1.18) and (1.19).
Finally, the last term $(f^{,\varepsilon}-f)$ in (4.9) can in view of (4.2) be written as a sum:
We apply an inequality of the type of (3.6) to the difference $(f)_\varepsilon -f$. In addition, as $B$ (see the definition in (1.10)) has a divergence-form representation, it follows that
$$
\begin{equation*}
\|Bu^1\|_{H^{-1}(\mathbb{R}^d)}\leqslant C \|\nabla^2u^1\|_{L^2(\mathbb{R}^d)}\stackrel{(1.19)}\leqslant C\|f\|_{H^1(\mathbb{R}^d)}.
\end{equation*}
\notag
$$
where the right-hand side $F^\varepsilon$ satisfies (4.14). Hence, by the energy inequality for an elliptic equation we obtain the required estimate (1.16).
We show in several examples how Lemmas 3.1–3.3 were actually used above (in the proof of (4.14)). For instance, the vector $r_1^\varepsilon$ in (4.11) has components that are products of two types involving smoothing by means of the operator $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$: $b(x/\varepsilon)(\psi)_\varepsilon$ or $\varepsilon b(x/\varepsilon)(D_j\varphi)_\varepsilon$, where $b\in L^2(Y)$ is a periodic function, while $\psi(x)=D_jD_kD_m u(x)$ and $\varphi(x)=D_kD_m u^1(x)$ are derivatives of order three and two of the solutions of the homogenized equations (1.4) and (1.10), respectively; these derivatives satisfy estimates (1.18) and (1.19). Note that Lemma 3.3 can be applied to the smoothing kernel of the operator $\Theta^\varepsilon$, as explained at the end of § 3. By Lemma 3.1 or Lemma 3.3 we have
The sum $r_0^\varepsilon$ in (4.11) contains terms of the form $b(x/\varepsilon)(\psi)_\varepsilon$, which we have already considered, but satisfying additionally the condition $\langle b\rangle=0$. For them, by Lemma 3.2 we have
Here are some comments on the derivation of (1.20) from (1.16). Let us examine the difference between the approximations $w^\varepsilon$ and $v^\varepsilon$ defined in (1.12)–(1.14) and (1.17). We use the notation (4.1) for the double Steklov smoothing involved in $w^\varepsilon$. Clearly,
and the required $L^2$-estimate of the terms included in the remainder $r(\varepsilon)$ can be obtained from Lemma 3.1. To the differences $(u)_\varepsilon-u$, $(D_ju)_\varepsilon-D_ju$ and $(u^1)_\varepsilon- u^1$ we can apply an inequality of type (3.5) for the smoothing operator $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$. As $N_j\in L^\infty$, this is sufficient for the conclusion that $\|w^\varepsilon-v^\varepsilon\|_{L^2(\mathbb{R}^d)}\leqslant C\varepsilon^2\|f\|_{H^1(\mathbb{R}^d)}$.
Now we express the gradient $\nabla(w^\varepsilon-v^\varepsilon)$ as a sum:
The last term is estimated with the required majorant by Lemma 3.3, once we have set $\varphi=D_jD_ku^1$ and $b(y)=N_{jk}(y)$, because $D_jD_ku^1\in L^2(\mathbb{R}^d)$ and $\|D_jD_ku^1\|\leqslant C\|f\|_{H^1(\mathbb{R}^d)} $ by (1.19). As for the other terms, we apply to them Lemma 2.2 or Lemma 3.1 and also inequalities of the type (3.2), (3.5) or (3.1) for the smoothings $S^\varepsilon$ and $\Theta^\varepsilon$. For example,
Theorem 4.1. Let $A_\varepsilon$ be the operator of the original problem (1.1) and $\widehat A$ be the operator of the homogenized problem (1.4). Let $\mathcal{K}_1(\varepsilon)$ and $\mathcal{K}_2(\varepsilon)$ be the operators defined by formulae (4.18), involving the solutions $N_j$ and $N_{jk}$ of problems in the cell (see (2.1) and (2.7)), the operator $B$ defined in (1.10) and the Steklov smoothing operator $S^\varepsilon$ defined in (1.15). Then
where the constant $C$ depends only on the dimension $d$ and the constants $\lambda_0$ and $\lambda_1$ from (1.2) and (1.3).
Remark 4.2. If the operator $A_\varepsilon$ has a symmetric matrix with real coefficients, then the resolvent approximation in (4.19) becomes significantly simpler. In both correctors (see (4.18)) the first term vanishes, since the operator $B=- b_{jkl} D_jD_kD_l$ is actually equal to zero, as shown in [13], owing to the symmetry of its coefficients.
Remark 4.3. On the basis of (4.19), using duality arguments we can find a resolvent approximation $(A_\varepsilon+1)^{-1}$ in the operator norm $\|\cdot\|_{H^1(\mathbb{R}^d)\to L^2(\mathbb{R}^d)}$ with error of order $\varepsilon^3$, which was performed in [9] in the symmetric real scalar case and agrees with the results in [10]. The same method works in the asymmetric case, but then it produces many correctors because the initial approximation (4.17)–(4.18) contains more terms.
§ 5. Vector-valued problem
5.1.
We consider a vector analogue of problem (1.1), which in addition has complex coefficients. To do this we introduce a complex 1-periodic tensor $a(y)=\{a_{jk}^{\alpha\beta}(y)\}_{1\leqslant j,k\leqslant d}^{1\leqslant\alpha,\beta\leqslant n}$ of order four which acts as a linear operator in the space of $ n\times d $ matrices. With a function $u\colon \mathbb{R}^d\to \mathbb{C}^n$ we associate the $ n\times d $ gradient matrix $D u=\{D_k u^\beta\}_{\beta,k}$, where $D=-i\nabla$ ($i^2=-1$), and also the $ n\times d$ matrix of the flow $aD u=\{a_{jk}^{\alpha\beta}D_k u^\beta\}_{\alpha,j}$. Here and below we sum over the repeated indices, from 1 to $d$ for indices denoted by Latin characters, and from 1 to $n$ for Greek characters.
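The index conventions just introduced can be checked on arrays. The following sketch (array names and random test data are hypothetical, not from the paper) stores the fourth-order tensor as `a[alpha, j, beta, k]` and computes the flow matrix $\{a_{jk}^{\alpha\beta}D_k u^\beta\}_{\alpha,j}$ as a contraction over $\beta$ and $k$:

```python
import numpy as np

# Sketch of the index convention: a fourth-order tensor a = {a_{jk}^{ab}}
# acting on the n x d gradient matrix Du = {D_k u^b} gives the flow matrix
# (a Du)_{a j} = a_{jk}^{ab} (Du)_{b k}, summing over b (Greek) and k (Latin).
n, d = 3, 2
rng = np.random.default_rng(0)
Du = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))

# store the tensor as a[alpha, j, beta, k]
a = rng.standard_normal((n, d, n, d)) + 1j * rng.standard_normal((n, d, n, d))
flow = np.einsum('ajbk,bk->aj', a, Du)        # the n x d flow matrix

# sanity check: the identity tensor a_{jk}^{ab} = delta_{ab} delta_{jk}
# maps the gradient matrix to itself
identity = np.einsum('ab,jk->ajbk', np.eye(n), np.eye(d)).astype(complex)
print(np.allclose(np.einsum('ajbk,bk->aj', identity, Du), Du))  # True
```

The scalar case of § 1 corresponds to $n=1$, when the tensor reduces to the $d\times d$ matrix $a=\{a_{jk}\}$ acting on the gradient vector.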
In the space of functions $u\colon \mathbb{R}^d\to \mathbb{C}^n$ consider the action of the second-order differential operator
with $\varepsilon$-periodic complex coefficients. As regards the tensor $a(y)=\{a_{jk}^{\alpha\beta}(y)\}$, we assume that conditions of boundedness and coercivity are satisfied:
for some positive constants $\lambda_0$ and $\lambda_1$. Here and in what follows we use the simplified notation $(\,\cdot\,{,}\,\cdot\,)$ and $\|\cdot\|$ for the inner product and norm in the spaces $L^2(\mathbb{R}^d,\mathbb{C}^n)$ and $L^2(\mathbb{R}^d,\mathbb{C}^{n\times d})$, depending on the context.
Using dilations, from (5.3) we obtain a similar inequality for the $\varepsilon$-periodic tensor $a^\varepsilon(x)=a(x/\varepsilon)$:
with right-hand side $f\in \mathcal{H}^*$, where $\mathcal{H}=H^1(\mathbb{R}^d,\mathbb{C}^n)$ and $ \mathcal{H}^*$ is the dual space. This equation has a unique solution. Owing to (5.4), the estimate $\|u^\varepsilon\|_\mathcal{H}\leqslant C \|f\|_{\mathcal{H}^*}$ holds uniformly in $\varepsilon\in(0,1)$. The homogenized equation for (5.5) is as follows:
where the constant fourth-order tensor $\widehat{a}=\{\widehat{a}_{jk}^{\alpha\beta}\}$ belongs to the same class (5.3) and can be found using the solutions of the problem in the cell which is stated below (see (5.7) and (5.9)).
5.2.
We introduce the 1-periodic objects necessary for the construction of the two-scale expansion (1.12) approximating the solution of the vector problem (5.5).
In the Sobolev space $\mathcal{W}=\{\varphi\in H^1_{\mathrm{per}}(Y,\mathbb{C}^n)\colon \langle \varphi\rangle=0\}$ of mean zero periodic vector functions ($Y=[-1/2,1/2)^d$ is the periodicity cell) consider the problem
with matrices $e^\alpha_j=\{\delta^\alpha_\beta\delta^k_j\}_{\beta,k}$, $1\leqslant j\leqslant d$, $1\leqslant \alpha\leqslant n$, where $\delta^\alpha_\beta$ and $\delta^k_j$ are the Kronecker deltas.
The integral inequality (5.3) for functions with compact support yields the analogous inequality for periodic functions
Hence the rows of $g^\alpha_j=\{g^{\alpha\beta}_{jm}\}_{\beta,m}$ are $d$-dimensional divergence-free vectors. By Lemma 2.1 there exist mean zero $ d\times d $ matrices $G^{\alpha\beta}_j=\{G^{\alpha\beta}_{jmk}\}_{m,k}\in H^1_{\mathrm{per}}(Y,\mathbb{C}^{d\times d}) $ such that
it involves the tensor product of the vectors $ N^\alpha_j$ and $e_k=\{\delta_k^j\}_j$ and the $ n\times d$ matrix $G^\alpha_{jk}=\{G^{\alpha\beta}_{jkm}\}_{\beta,m}$ formed by the components of the fifth-order tensor from (5.12). It is obvious that problem (5.13) is uniquely solvable.
Next we introduce the constant $ n\times d $ matrices
Since $\widehat{a}$ is a constant tensor, the solutions of (5.6) and (5.16) satisfy elliptic estimates of type (1.18) and (1.19), provided that the function $f$ in (5.6) belongs to $H^1(\mathbb{R}^d,\mathbb{C}^n)$; namely,
which is smoothed by means of the operators $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$, and the first and second correctors are constructed from $u^{,\varepsilon}(x)$ by the formulae
Here the 1-periodic functions $N^\alpha_j(y)$ and $N^\alpha_{jk}(y)$ are solutions of the problems on the cell (5.7) and (5.13), respectively, while $u$ and $u^1$ are solutions of the homogenized equations (5.6) and (5.16).
We estimate the discrepancy of the function (5.22) in equation (5.5) by expressing it as
where $f^{,\varepsilon}$ is the right-hand side of the equation
$$
\begin{equation}
(\widehat{A}+\varepsilon B )u^{,\varepsilon}+u^{,\varepsilon} =(f)_\varepsilon+\varepsilon^2 B (u^1)_\varepsilon=:f^{,\varepsilon},
\end{equation}
\tag{5.26}
$$
which is obtained by applying the smoothing operator $\Theta^\varepsilon$ to both sides of (5.17) and taking (5.23) into account. Since $f^{,\varepsilon}=(\widehat{A}u^{,\varepsilon}+u^{,\varepsilon})+\varepsilon Bu^{,\varepsilon}$ by (5.26), it follows that
Each entry of $(g^\alpha_j)^\varepsilon D_ju^{\alpha,\varepsilon}=\{(g^{\alpha\beta}_{jm})^\varepsilon D_ju^{\alpha,\varepsilon}\}_{\beta,m}$ (no summation with respect to $\alpha$ and $j$ implied here) has the representation
For $F^\varepsilon$ in (5.31) we can establish an analogue of inequality (4.14) by using the same arguments as in the verification of (4.14) in § 4; we omit the details. As a result, we prove the following analogue of Theorem 1.1.
Theorem 5.1. Let the right-hand side $f$ of equation (5.5) belong to $\mathcal{H}$. Then the function defined in (5.22)–(5.24) approximates the solution of (5.5) with the estimate
where the constant $C$ only depends on the dimensions $d$ and $n$ and the constants $\lambda_0$ and $\lambda_1$ in conditions (5.2) and (5.3).
5.4.
We find an analogue of the approximation (4.17), (4.18) for the resolvent of the operator $A_\varepsilon$ of the vector-valued problem (5.5). To do this we write the function defined in (5.22)–(5.24) as the following sum by collecting terms by powers of $\varepsilon$ and using the notation (4.1) and (4.6):
To substantiate this transition we refer to the properties of smoothing of type (3.2) and (3.4) and to Lemmas 3.1 and 3.3. (We used similar arguments in §§ 4.2 and 4.3.) Expressing the solutions of homogenized equations in (5.33) in terms of resolvents as in (4.16) and decoding the notation (4.1) we obtain an approximation to the resolvent of the operator $A_\varepsilon$ of type (4.17), namely,
Theorem 5.2. Let $A_\varepsilon$ be the operator of problem (5.5) and $\widehat A$ be the operator of the homogenized problem (5.6). Let $\mathcal{U}_\varepsilon$, $\mathcal{K}_1(\varepsilon)$ and $\mathcal{K}_2(\varepsilon)$ be the operators defined in (5.34)–(5.36), where the solutions $N^\alpha_j$ and $N^\alpha_{jk}$ of the problems in the cell (see (5.7) and (5.13)) are involved, along with the operator $B$ defined in (5.15) and the operator of double Steklov smoothing $\Theta^\varepsilon=S^\varepsilon S^\varepsilon$ ($S^\varepsilon$ was defined in (1.15)). Then the following estimate holds:
where the constant $C$ only depends on the dimensions $d$ and $n$ and the constants $\lambda_0$ and $\lambda_1$ from conditions (5.2) and (5.3).
Taking a weaker norm in (5.37) we obtain an approximation of the resolvent $(A_\varepsilon+ I)^{-1}$ to accuracy $O(\varepsilon^2)$ in the norm of operators from $H^1(\mathbb{R}^d,\mathbb{C}^n)$ to $L^2(\mathbb{R}^d,\mathbb{C}^n)$ or $H^{-1}(\mathbb{R}^d,\mathbb{C}^n)$; in addition, this approximation has a simpler structure. In fact, by Lemma 3.1 or 3.2 some of the terms in (5.34) can be included in the remainder term. For example,
and, as a result, $\|\varepsilon K_1(\varepsilon) \|_{L^2\to H^{-1}}=O(\varepsilon^2)$ and, moreover, in the norm $\|\cdot\|_{H^1\to H^{-1}}$ the operator $\varepsilon K_1(\varepsilon)$ occurs in the remainder term in the expansion (5.34).
Theorem 5.3. The following estimates hold under the assumptions of Theorem 5.2:
that is, the solution $u^\varepsilon$ of the original problem is approximated in the sense indicated above just by the solutions of two homogenized equations, (5.6) and (5.16).
Estimates (5.38) and (5.39) become even simpler in the ‘symmetric real scalar case’ mentioned in Remark 4.2.
Theorem 5.4. Under the assumptions of Theorem 4.1 let the matrix of coefficients $a(y)$ be symmetric. Then
We see that the solution $u$ of the homogenized problem (1.4), that is, the ‘zeroth approximation’, and the classical ‘first approximation’ $u+\varepsilon N^\varepsilon_j(x) D_j u$ from [1]–[5] associated with it can approximate the solution $u^\varepsilon$ of the original problem (1.1) with an error of order $\varepsilon^2$ in weak norms. For this to hold the right-hand side $f$ of the equation in (1.1) must belong to $H^1(\mathbb{R}^d)$.
Remark 5.5. On the basis of (5.37), using duality arguments we can find an approximation of the resolvent $(A_\varepsilon+I)^{-1}$ with error $\varepsilon^3$ in the norm of operators from $H^1(\mathbb{R}^d,\mathbb{C}^n)$ to $L^2(\mathbb{R}^d,\mathbb{C}^n)$. One must follow the method of [9], where a similar approximation was constructed in the simplest ‘symmetric real scalar case’ mentioned in Remark 4.2. Quite another, operator-theoretic (spectral) method, going back to [6], was used in [10] to study similar approximations, only for selfadjoint operators though.
Bibliography
1.
N. S. Bakhvalov, “Averaging of partial differential equations with rapidly oscillating coefficients”, Soviet Math. Dokl., 16:2 (1975), 351–355
2.
A. Bensoussan, J.-L. Lions and G. Papanicolaou, Asymptotic analysis for periodic structures, Stud. Math. Appl., 5, North-Holland Publishing Co., Amsterdam–New York, 1978, xxiv+700 pp.
3.
V. V. Zhikov, S. M. Kozlov, O. A. Oleinik and Kha T'en Ngoan, “Averaging and $G$-convergence of differential operators”, Russian Math. Surveys, 34:5 (1979), 69–147
4.
N. Bakhvalov and G. Panasenko, Homogenisation: averaging processes in periodic media. Mathematical problems in the mechanics of composite materials, Math. Appl. (Soviet Ser.), 36, Kluwer Acad. Publ., Dordrecht, 1989, xxxvi+366 pp.
5.
V. V. Jikov (Zhikov), S. M. Kozlov and O. A. Oleinik, Homogenization of differential operators and integral functionals, Springer-Verlag, Berlin, 1994, xii+570 pp.
6.
M. Sh. Birman and T. A. Suslina, “Second order periodic differential operators. Threshold properties and homogenization”, St. Petersburg Math. J., 15:5 (2004), 639–714
7.
V. V. Zhikov, “On operator estimates in homogenization theory”, Dokl. Math., 72:1 (2005), 534–538
8.
V. V. Zhikov and S. E. Pastukhova, “On operator estimates for some problems in homogenization theory”, Russ. J. Math. Phys., 12:4 (2005), 515–524
9.
S. E. Pastukhova, “Approximations of resolvents of second order elliptic operators with periodic coefficients”, J. Math. Sci. (N.Y.), 267:3 (2022), 382–397
10.
E. S. Vasilevskaya and T. A. Suslina, “Homogenization of parabolic and elliptic periodic operators in $L_2(\mathbb{R}^d)$ with the first and second correctors taken into account”, St. Petersburg Math. J., 24:2 (2013), 185–261
11.
M. Sh. Birman and T. A. Suslina, “Homogenization with corrector term for periodic elliptic differential operators”, St. Petersburg Math. J., 17:6 (2006), 897–973
12.
V. V. Zhikov, “Spectral method in homogenization theory”, Proc. Steklov Inst. Math., 250 (2005), 85–94
13.
S. E. Pastukhova, “Improved resolvent approximations in homogenization of second order operators with periodic coefficients”, Funct. Anal. Appl., 56:4 (2022), 310–319
14.
O. A. Ladyzhenskaya and N. N. Ural'tseva, Linear and quasilinear elliptic equations, Academic Press, New York–London, 1968, xviii+495 pp.
15.
D. Kinderlehrer and G. Stampacchia, An introduction to variational inequalities and their applications, Pure Appl. Math., 88, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York–London, 1980, xiv+313 pp.
16.
T. A. Suslina, “Homogenization of a stationary periodic Maxwell system”, St. Petersburg Math. J., 16:5 (2005), 863–922
17.
V. V. Zhikov and S. E. Pastukhova, “Operator estimates in homogenization theory”, Russian Math. Surveys, 71:3 (2016), 417–511
18.
S. E. Pastukhova, “Homogenization estimates for singularly perturbed operators”, J. Math. Sci. (N.Y.), 251:5 (2020), 724–747
19.
Weisheng Niu and Yue Yuan, “Convergence rate in homogenization of elliptic systems with singular perturbations”, J. Math. Phys., 60:11 (2019), 111509, 7 pp.
20.
S. E. Pastukhova, “Operator estimates in homogenization of elliptic systems of equations”, J. Math. Sci. (N.Y.), 226:4 (2017), 445–461
Citation:
S. E. Pastukhova, “Error estimates taking account of correctors in homogenization of elliptic operators”, Sb. Math., 215:7 (2024), 932–952