
Gradient estimates for the conductivity problem with imperfect bonding interfaces

  • Hongjie Dong, Zhuolun Yang and Hanye Zhu
Published/Copyright: November 5, 2025

Abstract

We study the field concentration phenomenon between two closely spaced perfect conductors with imperfect bonding interfaces of low conductivity type. The boundary condition on these interfaces is of Robin type. We discover a new dichotomy for the field concentration depending on the bonding parameter $\gamma$. Specifically, we show that the gradient of the solution is bounded uniformly in $\varepsilon$ (the distance between the two inclusions) when $\gamma$ is sufficiently small, whereas the gradient may blow up when $\gamma$ is large. Moreover, we identify the threshold value of $\gamma$ and the optimal blow-up rates under certain symmetry assumptions. The proof relies on a crucial anisotropic gradient estimate in the thin neck between the two inclusions. We develop a general framework for establishing such an estimate, which is applicable to a wide range of elliptic equations and boundary conditions.

1 Introduction and main results

Our study is motivated by the phenomenon of field concentration in high-contrast composites, a central topic in composite material research. When two inclusions within a composite material are positioned very close to each other and their material properties differ significantly from those of the surrounding matrix, stress or field intensity may build up significantly in the narrow gap between the inclusions. Understanding the field concentration phenomenon quantitatively is important since it may cause material failure (see, for example, [5, 26]). In other cases, the inclusions are designed to create field concentration in order to achieve a desired enhancement of the field (see, for example, [33, 24]). Theoretical analysis related to the field concentration phenomenon originated from effective medium theory. In the pioneering work [25], the electrical or thermal resistance of the narrow gaps between the inclusions was analyzed in order to estimate the effective conductivity of a medium containing a dense (nearly touching) array of perfectly conducting or insulating cylinders (2D) or perfectly conducting spheres (3D), all with perfect interfaces. See also [18] in the setting of elasticity. There has been significant progress over the past three decades in quantitatively analyzing the field concentration phenomenon in scenarios where the inclusions and the background matrix are perfectly bonded: optimal estimates for the local fields in the narrow gaps between the inclusions and asymptotic characterizations capturing the field concentration phenomenon have been derived.

Motivated by the significant physics work [34] and recent pioneering paper [19], we investigate the case of non-ideal bonding. This model holds significant practical relevance, as it takes into account the contact resistance due to roughness and possible debonding at the interface, and it also approximates the membrane structure in biological systems. It is known in physics literature that interfacial effects are very important in a variety of systems. For example, they may dramatically alter the effective properties [34, 32, 12, 8, 17, 21]. In [34, 32], lower and upper bounds for the effective conductivity of a medium containing randomly packed equisized spheres with imperfect bonding interfaces were deduced by constructing trial fields that account for the complex interactions between the spheres and using minimum energy principles. It is noteworthy that the work [34, 32] could not provide estimates for the local fields or capture the field concentration phenomenon when the spheres are closely located, because, as stated in [34], “the complexity of the microstructure prohibits one from obtaining the local fields exactly”. However, in this paper, we will establish optimal estimates for the local fields under certain symmetry assumptions.

Let us describe the mathematical setup. Since the region of interest is the narrow local area between two closely located inclusions, the mathematical problem is formulated with two inclusions. Let $n \ge 2$, and let $\Omega \subset \mathbb{R}^n$ contain two relatively strictly convex open sets $D_1$ and $D_2$ with $\operatorname{dist}(D_1 \cup D_2, \partial\Omega) > 1$ (see (1.8) and (1.9) for the definition of relative strict convexity). Let $\varepsilon := \operatorname{dist}(D_1, D_2)$ and $\tilde\Omega := \Omega \setminus \overline{D_1 \cup D_2}$. When the inclusions $D_1$, $D_2$ and the background matrix are perfectly bonded, the conductivity problem can be modeled by the following elliptic equation:

(1.1) $\begin{cases} \operatorname{div}(a(x)\nabla u) = 0 & \text{in } \Omega, \\ u = \varphi(x) & \text{on } \partial\Omega, \end{cases}$

where $a$ represents the conductivity distribution, given by $a = k_1\chi_{D_1} + k_2\chi_{D_2} + \chi_{\tilde\Omega}$, $u$ represents the voltage potential, and $\nabla u$ stands for the electric field. A key feature of the perfectly bonded problem is the continuity of the potential and of the flux across the interfaces,

(1.2) $u|_+ = u|_- \quad \text{and} \quad \frac{\partial u}{\partial \nu}\Big|_+ = k_i \frac{\partial u}{\partial \nu}\Big|_- \quad \text{on } \partial D_i.$

Here and throughout the paper, the subscripts $\pm$ indicate the limits from outside and inside the inclusions, respectively, and $\nu$ denotes the inner normal vector on $\partial D_1 \cup \partial D_2$. When the $k_i$ are bounded away from 0 and infinity, it was proved in [10, 30, 28] that the gradient of solutions remains bounded independently of $\varepsilon$. However, if the $k_i$ degenerate to either $\infty$ or 0, $\nabla u$ may blow up as $\varepsilon \to 0$. Specifically, in the perfectly conducting case when $k_1 = k_2 = \infty$, equation (1.1) becomes

(1.3) $\begin{cases} \Delta u = 0 & \text{in } \tilde\Omega, \\ u = U_j & \text{on } \partial D_j,\ j = 1,2, \\ \int_{\partial D_j} \partial_\nu u \, d\sigma = 0, & j = 1,2, \\ u = \varphi & \text{on } \partial\Omega, \end{cases}$

where each $U_j$ is a constant determined by the third line. It is known that

$\|\nabla u\|_{L^\infty(\tilde\Omega)} \le \begin{cases} C\varepsilon^{-1/2}\|\varphi\|_{C^2(\partial\Omega)} & \text{when } n = 2, \\ C|\varepsilon \ln \varepsilon|^{-1}\|\varphi\|_{C^2(\partial\Omega)} & \text{when } n = 3, \\ C\varepsilon^{-1}\|\varphi\|_{C^2(\partial\Omega)} & \text{when } n \ge 4; \end{cases}$

see [2, 3, 35, 36, 6, 7]. These bounds were shown to be optimal. In the insulated case where $k_1 = k_2 = 0$, equation (1.1) becomes

(1.4) $\begin{cases} \Delta u = 0 & \text{in } \tilde\Omega, \\ \partial_\nu u = 0 & \text{on } \partial D_j,\ j = 1,2, \\ u = \varphi & \text{on } \partial\Omega. \end{cases}$

While the optimal blow-up rate in two dimensions was captured about two decades ago in [3, 2], the optimal rate in dimensions $n \ge 3$ was only recently identified in [14, 13, 27]. This optimal rate is linked to the first non-zero eigenvalue of an elliptic operator on $\mathbb{S}^{n-2}$, which is determined by the principal curvatures of the inclusions. When $k_1 = 0$ and $k_2 = \infty$, it was recently shown in [22, 15] that $\nabla u$ is bounded independently of $\varepsilon$. Thus the study of field concentration phenomena, particularly regarding the optimal blow-up rate, is relatively comprehensive when the bonding is perfect. For a detailed review of related work in this area, we refer the reader to the survey article [23].

When the bonding between the inclusions and the background matrix is non-ideal, at least one of the transmission conditions (1.2) fails. We consider inclusions with imperfect bonding interfaces of low conductivity type (LC-type), which can be understood as approximations of inclusions with a low conductivity shell as the shell thickness approaches zero. Specifically, consider an inclusion with a core-shell structure, where the shell has a thickness denoted by $t$. Let $k$ and $k_s$ be the conductivities of the core and shell, respectively. As $t \to 0$, if

$\gamma^{-1} := \lim_{t \to 0} \frac{k_s}{t}$

exists and is positive, then the transmission conditions become

$\frac{\partial u}{\partial \nu}\Big|_+ = k\frac{\partial u}{\partial \nu}\Big|_- = \gamma^{-1}\big(u|_+ - u|_-\big),$

and we say that the inclusion has an imperfect bonding interface of LC-type, where $\gamma$ is called the bonding parameter. In this scenario, the continuity of the potential no longer holds. It is important to note that there is another type of imperfect bonding interface, known as the high conductivity type (HC-type), where the continuity of the flux fails. In two dimensions, the HC-type problem is indeed the dual problem of the LC-type problem. For a detailed discussion and derivation of the transmission conditions, we refer the reader to [19, 1, 9].
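To see where this condition comes from, here is a one-dimensional sketch (our illustration, not taken from the paper): conduction across a flat shell of thickness $t$ and conductivity $k_s$ occupying $0 < x < t$, with the matrix (conductivity 1) in $x > t$ and the core (conductivity $k$) in $x < 0$.

```latex
\[
  J \;=\; \partial_x u\big|_{x>t}
    \;=\; k_s\,\partial_x u\big|_{0<x<t}
    \;=\; k\,\partial_x u\big|_{x<0}
  \qquad\text{(continuity of the flux } J\text{)},
\]
\[
  u(t) - u(0) \;=\; \frac{J\,t}{k_s}
  \;\xrightarrow[t \to 0]{}\; \gamma J,
  \qquad\text{since } \frac{k_s}{t} \to \gamma^{-1}.
\]
```

In the limit the potential jump across the interface satisfies $u|_+ - u|_- = \gamma\,\partial_\nu u|_+$, which is exactly the LC-type condition displayed above.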

Throughout this article, we consider $D_1$, $D_2$ to have imperfect bonding interfaces of LC-type, and let $k_1 = k_2 = \infty$. Then the conductivity problem becomes

(1.5) $\begin{cases} \Delta u = 0 & \text{in } \tilde\Omega, \\ u + \gamma\partial_\nu u = U_j & \text{on } \partial D_j,\ j = 1,2, \\ \int_{\partial D_j} \partial_\nu u \, d\sigma = 0, & j = 1,2, \\ u = \varphi & \text{on } \partial\Omega, \end{cases}$

where $\nu$ is the inward normal vector on $\partial D_j$, and $U_j$ is a constant determined by the third line of (1.5). This setting corresponds to the physical situation investigated in [34, 4] – suspensions of metallic particles in epoxy matrices with interfacial resistance of Kapitza type. The solution $u \in H^1(\tilde\Omega)$ to equation (1.5) is indeed the minimizer of a functional in an appropriate function space (see Lemma 2.2): $I[u] = \min_{v \in \mathcal{A}} I[v]$, where

(1.6) $\mathcal{A} := \big\{ v \in H^1(\tilde\Omega) : v = \varphi \text{ on } \partial\Omega \big\}, \qquad (v)_{\partial D_j} := \fint_{\partial D_j} v \, d\sigma, \ j = 1,2, \qquad I[v] := \int_{\tilde\Omega} |\nabla v|^2 + \gamma^{-1}\int_{\partial D_1} \big|v - (v)_{\partial D_1}\big|^2 + \gamma^{-1}\int_{\partial D_2} \big|v - (v)_{\partial D_2}\big|^2.$

In [19], it was proved that, in two dimensions, the gradient of the solution to

(1.7) $\begin{cases} \Delta u = 0 & \text{in } \mathbb{R}^2 \setminus \overline{D_1 \cup D_2}, \\ u + \gamma\partial_\nu u = U_j & \text{on } \partial D_j,\ j = 1,2, \\ \int_{\partial D_j} \partial_\nu u \, d\sigma = 0, & j = 1,2, \\ u - x_2 = O(|x|^{-1}) & \text{as } |x| \to \infty, \end{cases}$

is bounded independently of $\varepsilon$ when $\gamma > 0$ and $D_1$, $D_2$ are disks in $\mathbb{R}^2$ of radius $R$, centered at $(0, -R - \varepsilon/2)$ and $(0, R + \varepsilon/2)$, respectively. It is well known in the literature [3] that the gradient of the solution to (1.7) with $\gamma = 0$ (the perfect conductivity problem) blows up as $\varepsilon \to 0$. This surprising result shows that a thin coating of low conductivity can prevent such blow-up. The authors of [19] conjectured that the gradient of the solution to (1.5) is always bounded independently of $\varepsilon$ for arbitrary $\gamma > 0$ and boundary data $\varphi$, as a biological system cannot endure large stress.

In this article, we prove three main results.

  1. Upper bound for the gradient. We establish in Theorem 1.1 an upper bound of order $\varepsilon^{-1/2}$ for the gradient of the solution to (1.5). A direct consequence of this result, presented in Corollary 1.2, is that if $\tilde\Omega$ is symmetric with respect to $x_n$ and the boundary value $\varphi$ is odd in $x_n$, then the gradient is bounded independently of $\varepsilon$. This generalizes the boundedness result in [19] to any dimension $n \ge 2$, more general inclusions $D_1$, $D_2$, and more general boundary data $\varphi$.

  2. Absence of field concentration for small $\gamma$. We show that, when the bonding parameter $\gamma$ is sufficiently small, the gradient of the solution is bounded independently of $\varepsilon$. This result confirms the conjecture raised in [19] under the assumption that $\gamma$ is sufficiently small.

  3. Dichotomy for field concentration. We show that, when $\gamma$ is large, the conjecture does not hold and the boundedness result in Corollary 1.2 is highly unstable, in the sense that if the boundary data $\varphi$ is slightly tilted away from being odd in $x_n$ (for example, $\varphi = \delta x_1 + x_n$ for some small $\delta$), the gradient of the solution may blow up. More precisely, we show that, when $D_1$ and $D_2$ are balls of radius $R$, centered at $(0', -R - \varepsilon/2)$ and $(0', R + \varepsilon/2)$, respectively, $\Omega = B_{5R}$, and $\varphi = x_1$, there is a dichotomy for the field concentration phenomenon: the gradient of the solution to (1.5) is bounded when $0 < \gamma \le R$, but blows up when $\gamma > R$. Moreover, we establish an optimal gradient estimate when $\gamma > R$. Our analysis shows that the solution behaves similarly to that of the insulated conductivity problem in this case.

We use the notation $x = (x', x_n)$, where $x' \in \mathbb{R}^{n-1}$. We choose the coordinates so that, near the origin, the parts of $\partial D_1$ and $\partial D_2$, denoted by $\Gamma_+$ and $\Gamma_-$, are respectively the graphs of two $C^{1,1}$ functions in terms of $x'$. That is, for some $R_0 > 0$,

$\Gamma_+ = \Big\{ x_n = \frac{\varepsilon}{2} + f(x'),\ |x'| < R_0 \Big\}, \qquad \Gamma_- = \Big\{ x_n = -\frac{\varepsilon}{2} + g(x'),\ |x'| < R_0 \Big\},$

where $f(x')$ and $g(x')$ are $C^{1,1}$ functions satisfying

(1.8) $f(0') = g(0') = 0, \quad \nabla_{x'} f(0') = \nabla_{x'} g(0') = 0,$
(1.9) $c_1|x'|^2 \le f(x') - g(x') \quad \text{for } 0 \le |x'| < R_0,$
(1.10) $\|f\|_{C^{1,1}} \le c_2, \quad \|g\|_{C^{1,1}} \le c_2,$
with some positive constants $c_1$, $c_2$. For $x_0 \in \tilde\Omega$ with $|x_0'| < R_0$ and $0 < r \le R_0 - |x_0'|$, we denote

$\Omega_{x_0', r} := \Big\{ (x', x_n) \in \tilde\Omega : -\frac{\varepsilon}{2} + g(x') < x_n < \frac{\varepsilon}{2} + f(x'),\ |x' - x_0'| < r \Big\}$

and $\Omega_r := \Omega_{0', r}$. For any domain $\mathcal{D}$, we denote the averaged $L^p$ norm of $u$ over $\mathcal{D}$ by

$\|u\|_{L^p(\mathcal{D})} := \Big( \fint_{\mathcal{D}} |u|^p \Big)^{1/p}.$

Even though the constants $U_1$ and $U_2$ in (1.5) depend on $\varepsilon$, it can be shown (see Lemma 2.1) that

$\inf_{\partial\Omega} \varphi \le U_1, U_2 \le \sup_{\partial\Omega} \varphi.$

Therefore, by the maximum principle and classical gradient estimates (cf. [31, Chapters 1 and 4]), the solution $u \in H^1(\tilde\Omega)$ of (1.5) satisfies

(1.11) $\|u\|_{L^\infty(\tilde\Omega)} + \|\nabla u\|_{L^\infty(\tilde\Omega \setminus \Omega_{R_0/2})} \le C\|\varphi\|_{C^{1,1}(\partial\Omega)}.$

As such, we focus on the following problem near the origin:

(1.12) $\begin{cases} \Delta u = 0 & \text{in } \Omega_{R_0}, \\ u + \gamma\partial_\nu u = U_1 & \text{on } \Gamma_+, \\ u + \gamma\partial_\nu u = U_2 & \text{on } \Gamma_-, \end{cases}$

where $\nu$ is the unit normal vector pointing upward on $\Gamma_+$ and downward on $\Gamma_-$.

Our first main result is the following pointwise gradient estimate of order $\varepsilon^{-1/2}$.

Theorem 1.1

Let $f$, $g$ be $C^{1,1}$ functions satisfying (1.8)–(1.10). Let $n \ge 2$, $\gamma > 0$, $\varepsilon \in (0, R_0/4)$, and let $u \in H^1(\Omega_{R_0})$ be a solution of (1.12) for some constants $U_1$ and $U_2$. Then there exists a constant $C > 0$, depending only on $n$, $\gamma$, $R_0$, $c_1$, and $c_2$, such that, for any $x \in \Omega_{R_0/2}$ and $r = \frac14\big(\varepsilon + |x'|^2\big)^{1/2}$,

(1.13) $|\nabla u(x)| \le C\Big( r^{-1}\big\|u - (U_1 + U_2)/2\big\|_{L^2(\Omega_{x', r})} + |U_1 - U_2| \Big).$

A quick corollary of the theorem is the boundedness of the gradient under some additional symmetry assumptions. This generalizes one of the results in [19].

Corollary 1.2

For $n \ge 2$ and $\varepsilon \in (0, R_0/4)$, let $\Omega$, $D_1$, and $D_2$ be $C^{1,1}$ domains such that $\tilde\Omega$ is symmetric with respect to $x_n$ and (1.8)–(1.10) hold. Let $\varphi \in C^{1,1}(\partial\Omega)$ be an odd function in $x_n$, $\gamma > 0$, and let $u \in H^1(\tilde\Omega)$ be the solution to (1.5). Then there exists a constant $C > 0$, depending only on $n$, $\gamma$, $R_0$, $\Omega$, $c_1$, and $c_2$, such that, for any $x \in \tilde\Omega$,

(1.14) $|\nabla u(x)| \le C\|\varphi\|_{C^{1,1}(\partial\Omega)}.$

It is important to note that the factor on the right-hand side of estimate (1.13) is $r^{-1}$ instead of $r^{-2}$; note that $r^2$ is comparable to the height of the narrow region. A similar anisotropic estimate has served as a fundamental tool in studying both linear and nonlinear insulated conductivity problems [29, 14, 13, 16, 27]. A key step in establishing such an estimate involves extending the solution beyond the upper and lower boundaries of the narrow region to reach a larger scale of order $r$. For the insulated conductivity problem (1.4), where the boundary condition is the homogeneous Neumann condition, this extension can be naturally achieved using even and periodic extensions after flattening the boundaries. To prove Theorem 1.1, we use a special flattening map introduced by the same authors in [16] that preserves normal vectors on $\Gamma_\pm$, and we construct a delicate auxiliary function to manage the inhomogeneity of the boundary condition. The framework presented in this proof is very robust and can be applied to inhomogeneous Dirichlet or Neumann boundary conditions, as well as to a wide range of elliptic equations and systems.

Our second result concerns the local problem (1.12) with sufficiently small $\gamma$. We show that $\nabla u$ is uniformly bounded independently of $\varepsilon$. Moreover, $\nabla u$ exhibits a polynomial decay when $U_1 = U_2$. Together with (1.11) and (2.2) below, this result confirms the conjecture raised in [19] for small $\gamma$, without any assumptions on the symmetry of the domain $\tilde\Omega$ or the boundary data $\varphi$.

Theorem 1.3

Let $f$, $g$ be $C^{1,1}$ functions satisfying (1.8)–(1.10), $n \ge 2$, $\varepsilon \in (0, R_0/4)$, and let $u \in H^1(\Omega_{R_0})$ be a solution of (1.12) for some constants $U_1$ and $U_2$. Then, for any $\beta \ge 0$, there exists a positive constant $\gamma_0$, depending only on $n$, $R_0$, $c_1$, $c_2$, and $\beta$, such that if $\gamma \in (0, \gamma_0)$, then

(1.15) $|\nabla u(x)| \le C\Big( \big(\varepsilon + |x'|\big)^{\beta}\Big\|u - \tfrac{U_1 + U_2}{2}\Big\|_{L^\infty(\Omega_{R_0})} + |U_1 - U_2| \Big) \quad \text{in } \Omega_{R_0/2},$

where $C > 0$ depends only on $n$, $R_0$, $c_1$, $c_2$, $\beta$, and $\gamma$.

As $\gamma \to 0$, problem (1.5) formally converges to the perfect conductivity problem (1.3), where $\nabla u$ may blow up as $\varepsilon \to 0$ (see [3, 6]). This suggests that the constant $C$ above might go to $\infty$ as $\gamma$ approaches 0.

For the next result, we further assume that $\partial D_1$, $\partial D_2$ are $C^{2,\sigma}$ for some $\sigma \in (0,1)$. In addition to (1.8), the corresponding graph functions $f$ and $g$ will satisfy

(1.16) $f(x') - g(x') = \mu|x'|^2 + O\big(|x'|^{2+\sigma}\big) \quad \text{for } 0 < |x'| < R_0,\ \mu > 0.$

We also assume that, for some $1 \le j \le n-1$, the domain $\tilde\Omega$ is symmetric with respect to $x_j$, and the boundary data $\varphi \in C^{1,1}$ is odd in $x_j$. In this case, by the symmetry and the uniqueness of solutions (Lemma 2.2), we know that the solution $u$ to (1.5) is odd in $x_j$ and satisfies

(1.17) $\begin{cases} \Delta u = 0 & \text{in } \tilde\Omega, \\ u + \gamma\partial_\nu u = 0 & \text{on } \partial D_1 \cup \partial D_2, \\ \int_{\partial D_i} \partial_\nu u \, d\sigma = 0, & i = 1,2, \\ u = \varphi & \text{on } \partial\Omega. \end{cases}$

Again, we focus on the local problem of (1.17), that is,

(1.18) $\begin{cases} \Delta u = 0 & \text{in } \Omega_{R_0}, \\ u + \gamma\partial_\nu u = 0 & \text{on } \Gamma_\pm, \end{cases}$

where $\nu$ is the unit normal vector pointing upward on $\Gamma_+$ and downward on $\Gamma_-$. We define

(1.19) $\alpha = \alpha(n, \gamma, \mu) := \frac{-(n-1) + \sqrt{(n-1)^2 + 4\big(n - 2 + 2/(\mu\gamma)\big)}}{2}.$

Note that $\alpha$ is always positive, and $\alpha < 1$ if and only if $\gamma > 1/\mu$.
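As a quick sanity check (our own computation, not part of the paper), at the threshold $\gamma = 1/\mu$ the quantity under the square root in (1.19) becomes $(n-1)^2 + 4n = (n+1)^2$, so $\alpha = 1$ exactly; since $\alpha$ is strictly decreasing in $\gamma$, the stated equivalence follows. Numerically:

```python
import math

def alpha(n, gamma, mu):
    """The exponent alpha(n, gamma, mu) from (1.19)."""
    return (-(n - 1) + math.sqrt((n - 1) ** 2 + 4 * (n - 2 + 2 / (mu * gamma)))) / 2

for n in (2, 3, 4, 5):
    for mu in (0.5, 1.0, 2.0):
        # alpha = 1 exactly at the threshold gamma = 1/mu ...
        assert abs(alpha(n, 1 / mu, mu) - 1) < 1e-12
        # ... alpha > 1 for gamma < 1/mu, and alpha < 1 for gamma > 1/mu
        assert alpha(n, 0.5 / mu, mu) > 1 > alpha(n, 2 / mu, mu)
```
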

Our third result is as follows.

Theorem 1.4

For $n \ge 2$, $\varepsilon \in (0, R_0/4)$, $\gamma > 0$, $\mu > 0$, let $f$, $g$ satisfy (1.8) and (1.16), and let $u \in H^1(\Omega_{R_0})$ be a solution of (1.18) that is odd in $x_j$, where $\Omega_{R_0}$ is symmetric in $x_j$ for some $1 \le j \le n-1$. Then there exists a positive constant $C$, depending only on $n$, $\gamma$, $\mu$, $R_0$, $\|f\|_{C^{2,\sigma}}$, and $\|g\|_{C^{2,\sigma}}$, such that:

  • when $0 < \gamma \le 1/\mu$,

    (1.20) $|\nabla u(x)| \le C\|u\|_{L^\infty(\Omega_{R_0})} \quad \text{in } \Omega_{R_0/2};$

  • when $\gamma > 1/\mu$,

    (1.21) $|\nabla u(x)| \le C\|u\|_{L^\infty(\Omega_{R_0})}\big(\varepsilon + |x'|^2\big)^{\frac{\alpha - 1}{2}} \quad \text{in } \Omega_{R_0/2},$

where 𝛼 is given in (1.19).

Estimate (1.21) indicates that, when $\gamma > 1/\mu$, the gradient of the solution may blow up as $\varepsilon \to 0$. We provide an example showing that the blow-up can actually happen and that the blow-up rate is optimal, as in the following theorem.

Theorem 1.5

For $n \ge 2$, $\mu > 0$, $\gamma > 1/\mu$, $\varepsilon \in (0, 1/(4\mu))$, let $\Omega = B_{5/\mu}$, let $D_1$ and $D_2$ be balls of radius $1/\mu$ centered at $(0', 1/\mu + \varepsilon/2)$ and $(0', -1/\mu - \varepsilon/2)$, respectively, $\varphi(x) = x_1$, and let $u \in H^1(\tilde\Omega)$ be the solution of (1.17). Then there exist positive constants $c$ and $C$, depending only on $n$, $\gamma$, and $\mu$, such that

(1.22) $\|\nabla u\|_{L^\infty(\Omega_{c\varepsilon})} \ge \frac{1}{C}\,\varepsilon^{\frac{\alpha - 1}{2}},$

where $\alpha$ is given in (1.19).

Remark 1.6

The fact that $D_1$ and $D_2$ are balls of the same radius is used in Step 3 of the proof of Theorem 1.5 in Section 7, so that $x_1$ is a subsolution of (1.17). This conclusion remains valid even if $D_1$ and $D_2$ are balls of radii $r_1$ and $r_2$ centered at $(0', r_1 + \varepsilon/2)$ and $(0', -r_2 - \varepsilon/2)$, respectively, under the stronger assumption that $\gamma > \max\{r_1, r_2\}$. More generally, if $D_1$ and $D_2$ are only assumed to be strictly convex smooth sets that are axially symmetric with respect to $x_n$, and $\tilde\Omega$ is only assumed to be symmetric in $x_1$, estimate (1.22) still holds when $\gamma$ is sufficiently large.

Remark 1.7

It is noteworthy that Theorems 1.4 and 1.5 are the first blow-up results in the context of field concentration for imperfect bonding interfaces. Note that

$\alpha \to \alpha_0 := \frac{-(n-1) + \sqrt{(n-1)^2 + 4(n-2)}}{2} \quad \text{as } \gamma \to \infty.$

Therefore, the blow-up exponent in (1.21) and (1.22) converges to the optimal blow-up exponent $(\alpha_0 - 1)/2$ for the insulated conductivity problem with perfect bonding interfaces (see [14]). This suggests that perfectly conducting inclusions with imperfect interfaces behave similarly to insulators when the bonding parameter $\gamma$ is large.
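Indeed, since $2/(\mu\gamma) \to 0$ as $\gamma \to \infty$, one may pass to the limit inside the square root of (1.19). As a consistency check (ours, not from the paper), in two dimensions:

```latex
\[
  n = 2:\qquad
  \alpha_0 \;=\; \frac{-1 + \sqrt{1 + 4\cdot 0}}{2} \;=\; 0,
  \qquad\text{so}\qquad
  \frac{\alpha_0 - 1}{2} \;=\; -\frac12,
\]
```

which recovers the known $\varepsilon^{-1/2}$ blow-up rate of the two-dimensional insulated problem [3, 2].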

Remark 1.8

When $D_1$ and $D_2$ are balls of radius $R$, centered at $(0', -R - \varepsilon/2)$ and $(0', R + \varepsilon/2)$, respectively, it holds that $\mu = 1/R$, and therefore our critical value $\gamma = 1/\mu = R$ of the bonding parameter for the dichotomy in Theorems 1.4 and 1.5 is exactly the special bonding parameter ensuring that the inclusions are neutral to uniform fields, i.e., the insertion of the inclusions does not perturb uniform fields (see [21, 17, 34, 32, 9, 19]). Indeed, in the critical case $\gamma = R$, the linear potential $u = x_j$ ($j = 1, 2, \ldots, n$) automatically satisfies the boundary conditions (1.5)$_2$ and (1.5)$_3$ on $\partial D_1 \cup \partial D_2$ and is thus a solution to equation (1.5) with $\varphi = x_j$. Our result reveals, for the first time, a fascinating microscopic dichotomy of the electric field at this critical bonding parameter $\gamma$.
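For the ball case, this neutrality can be checked directly: on a sphere of radius $R$ centered at $c$, the inward normal is $\nu = (c - x)/R$, so $x_j + R\,\partial_\nu x_j = x_j + (c_j - x_j) = c_j$, a constant. A small numerical illustration (ours, not from the paper):

```python
import random

random.seed(0)
n, R, eps = 3, 1.0, 0.01
center = [0.0, 0.0, R + eps / 2]     # center of the upper ball on the x_n-axis

for _ in range(100):
    # sample a uniform random point x on the sphere of radius R around `center`
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(t * t for t in v) ** 0.5
    x = [c + R * t / s for c, t in zip(center, v)]
    nu = [(c - xi) / R for c, xi in zip(center, x)]      # inward unit normal
    for j in range(n):
        # Robin expression u + gamma * d_nu u for u = x_j with gamma = R
        robin = x[j] + R * nu[j]
        assert abs(robin - center[j]) < 1e-12            # constant = c_j
```

For $j \le n-1$ the constant $c_j$ equals 0, so the homogeneous Robin condition in (1.17) holds, and the flux condition holds since $\int_{\partial D_i} \nu_j \, d\sigma = 0$ by symmetry.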

The rest of the paper is organized as follows. In the next section, we prove the uniform boundedness of $U_1$ and $U_2$, and the existence and uniqueness of solutions to equation (1.5). In Section 3, we present the proofs of Theorem 1.1 and Corollary 1.2 by utilizing a delicate change of variables, which preserves the normal directions on the upper and lower boundaries of $\Omega_{R_0/2}$, and an extension argument. Theorem 1.3 is proved in Section 4. Sections 5–7 are devoted to the proofs of Theorems 1.4 and 1.5, for which we use a dimension reduction argument to reduce the original problem to a degenerate elliptic equation in a spherical domain of $\mathbb{R}^{n-1}$. The Robin boundary condition on the interfaces introduces extra difficulties in the reduced problem in $\mathbb{R}^{n-1}$ compared to the insulated case (Neumann boundary conditions). It also leads to a dichotomy for the field concentration depending on the bonding parameter $\gamma$.

2 Preliminary

In this section, we prove the uniform boundedness of U 1 and U 2 (Lemma 2.1), and the existence and uniqueness of solutions to equation (1.5) (Lemma 2.2).

Lemma 2.1

Let $n \ge 2$, let $\Omega$, $D_1$, and $D_2$ be Lipschitz domains in $\mathbb{R}^n$, let $u \in H^1(\tilde\Omega)$ be the solution to (1.5), and let $U_1$, $U_2$ be the same constants as in (1.5). Then it holds that

(2.1) $\inf_{\partial\Omega} \varphi \le u \le \sup_{\partial\Omega} \varphi \quad \text{in } \tilde\Omega,$
(2.2) $\inf_{\partial\Omega} \varphi \le U_j \le \sup_{\partial\Omega} \varphi, \quad j = 1, 2.$

Proof

By taking the average of both sides of equality (1.5)$_2$ on $\partial D_j$ and using (1.5)$_3$, we obtain

(2.3) $U_j = (u)_{\partial D_j}, \quad j = 1, 2.$

Thus it suffices to prove (2.1), since (2.2) follows directly from (2.1) and (2.3). Without loss of generality, we only show that $u \le \sup_{\partial\Omega} \varphi$ in $\tilde\Omega$. By considering $u - \sup_{\partial\Omega} \varphi$ in place of $u$, we may assume $u \le 0$ on $\partial\Omega$, and it suffices to show that

(2.4) $u \le 0 \quad \text{in } \tilde\Omega.$

Let $u^+ := \max\{u, 0\}$. Testing equation (1.5) with $u^+$ and using (2.3) yield

$0 = \int_{\tilde\Omega} |\nabla u^+|^2 - \sum_{j=1}^{2}\int_{\partial D_j} u^+\,\partial_\nu u = \int_{\tilde\Omega} |\nabla u^+|^2 + \gamma^{-1}\sum_{j=1}^{2}\int_{\partial D_j} u^+\big(u - (u)_{\partial D_j}\big) = \int_{\tilde\Omega} |\nabla u^+|^2 + \gamma^{-1}\sum_{j=1}^{2}\int_{\partial D_j} \Big( (u^+)^2 - (u)_{\partial D_j}(u^+)_{\partial D_j} \Big) \ge \int_{\tilde\Omega} |\nabla u^+|^2.$

Here we used the fact that $(u)_{\partial D_j} \le (u^+)_{\partial D_j}$ and the Cauchy–Schwarz inequality in the last line. Therefore, since $u^+ = 0$ on $\partial\Omega$, we get $u^+ \equiv 0$ in $\tilde\Omega$, and thus (2.4) holds. The proof is completed. ∎

Lemma 2.2

Let $n \ge 2$ and let $\Omega$, $D_1$, and $D_2$ be Lipschitz domains in $\mathbb{R}^n$. Then $u$ is the minimizer of (1.6) if and only if $u \in H^1(\tilde\Omega)$ satisfies (1.5). Therefore, there exists a unique solution $u \in H^1(\tilde\Omega)$ to equation (1.5).

Proof

First, we prove that (1.5) has at most one solution $u \in H^1(\tilde\Omega)$. Suppose that $u_1, u_2 \in H^1(\tilde\Omega)$ are two solutions of (1.5). Then, by linearity, we know that $u_0 := u_1 - u_2$ is a solution to equation (1.5) with $\varphi = 0$. By Lemma 2.1, we know that $u_0 \equiv 0$ in $\tilde\Omega$, and thus $u_1 \equiv u_2$. It is straightforward to see that there exists a unique minimizer of (1.6), due to the convexity of $I[\cdot]$ and $\mathcal{A}$. Therefore, it suffices to show that the minimizer $u$ of (1.6) satisfies (1.5). We show this by taking different test functions $v$ (vanishing on $\partial\Omega$, so that $u + tv \in \mathcal{A}$) in the equation

(2.5) $0 = \frac{d}{dt}\Big|_{t=0} I[u + tv].$

First we take $v \in C_c^\infty(\tilde\Omega)$. Then (2.5) reads as

$0 = \int_{\tilde\Omega} \nabla u \cdot \nabla v.$

This implies

(2.6) $\Delta u = 0 \quad \text{in } \tilde\Omega.$

Next, we take $v \in H_0^1(\Omega)$. By using integration by parts and (2.6), from (2.5), we deduce

(2.7) $0 = \int_{\tilde\Omega} \nabla u \cdot \nabla v + \gamma^{-1}\sum_{j=1}^{2}\int_{\partial D_j} \big(u - (u)_{\partial D_j}\big)\big(v - (v)_{\partial D_j}\big) = \int_{\partial D_1 \cup \partial D_2} v\,\partial_\nu u + \gamma^{-1}\sum_{j=1}^{2}\int_{\partial D_j} \big(u - (u)_{\partial D_j}\big)\big(v - (v)_{\partial D_j}\big) = \sum_{j=1}^{2}\int_{\partial D_j} v\Big(\partial_\nu u + \gamma^{-1}\big(u - (u)_{\partial D_j}\big)\Big).$

Since $D_1$, $D_2$ are disjoint and (2.7) holds for any $v \in H_0^1(\Omega)$, we know that

(2.8) $u + \gamma\partial_\nu u = (u)_{\partial D_j} \quad \text{on } \partial D_j,\ j = 1, 2.$

Finally, we choose $v \in H_0^1(\Omega)$ such that $v = 1$ in $\overline{D_1}$ and $v = 0$ in $\overline{D_2}$. Then (2.7) implies

(2.9) $\int_{\partial D_1} \partial_\nu u = 0.$

Similarly, by choosing $v \in H_0^1(\Omega)$ such that $v = 0$ in $\overline{D_1}$ and $v = 1$ in $\overline{D_2}$, one can also show that

(2.10) $\int_{\partial D_2} \partial_\nu u = 0.$

By (2.6) and (2.8)–(2.10), we can conclude that the minimizer $u$ of (1.6) is a solution to (1.5) with $U_j = (u)_{\partial D_j}$ ($j = 1, 2$). The lemma is proved. ∎

3 A flattening and extension argument

In this section, we use a flattening and extension argument to prove Theorem 1.1. Even though we are working on the Laplace equation with Robin boundary conditions, the argument we present here is very robust, and can be applied to more general settings.

Without loss of generality, we assume $R_0 = 1$. For general $R_0 > 0$, we can apply the scaling $z = x/R_0$ and consider (1.12) in the $z$-coordinates. We also assume $U_2 = -U_1$, since otherwise we can consider the equation for $u - (U_1 + U_2)/2$ instead of $u$. Throughout this section, unless specified otherwise, we use $C$ to denote positive constants that may change from line to line, but depend only on $n$, $\gamma$, $c_1$, and $c_2$, where $c_1$ and $c_2$ are defined in (1.9) and (1.10), respectively. We fix $x = x_0 \in \Omega_{1/2}$ and prove Theorem 1.1 at $x = x_0$. Recall that

$r = \frac14\big(\varepsilon + |x_0'|^2\big)^{1/2}.$

By the triangle inequality, for any $x \in \overline{\Omega_{x_0', 2r}}$, we have

(3.1) $\frac14|x_0'|^2 - \frac14\varepsilon \le \frac12|x_0'|^2 - |x' - x_0'|^2 \le |x'|^2 \le 2|x' - x_0'|^2 + 2|x_0'|^2 \le \frac52|x_0'|^2 + \frac12\varepsilon.$

By (1.8)–(1.10), we know that

(3.2) $\frac{1}{C}\big(\varepsilon + |x'|^2\big) \le \varepsilon + f(x') - g(x') \le C\big(\varepsilon + |x'|^2\big).$

Combining (3.1) and (3.2), we obtain

(3.3) $\frac{1}{C}\,r^2 \le \varepsilon + f(x') - g(x') \le Cr^2$

for any $x \in \overline{\Omega_{x_0', 2r}}$.

Let $r_0 = r_0(n, c_1, c_2) \in (0, 1/4)$ be an $\varepsilon$-independent constant to be determined later. Then, by the classical gradient estimates (cf. [31, Chapter 4]), (1.13) holds at $x = x_0$ whenever $r = \frac14(\varepsilon + |x_0'|^2)^{1/2} > r_0$. Therefore, it suffices to prove Theorem 1.1 in the case $r \in (0, r_0]$.

Step 1: The flattening map

To ensure that the solution after the change of variables still satisfies Robin boundary conditions, we adapt the flattening map introduced in [16, Section 3] to our setting, which preserves the normal directions on the upper and lower boundaries.

More precisely, we define

$\tilde f(x') := \begin{cases} f(x') & \text{when } |x'| \le 10r_0, \\ 0 & \text{when } |x'| > 10r_0, \end{cases} \qquad \tilde g(x') := \begin{cases} g(x') & \text{when } |x'| \le 10r_0, \\ 0 & \text{when } |x'| > 10r_0. \end{cases}$

We denote

$Q_{s,t} := \big\{ y = (y', y_n) \in \mathbb{R}^n : |y' - x_0'| < s,\ |y_n| < t \big\},$

and for $y \in \overline{Q_{2r, r^2}}$, we define the map $x = \Phi(y)$ by

(3.4) $\begin{cases} x' = y' - h(y), \\ x_n = \dfrac12\Big[ \dfrac{y_n}{r^2}\big(\varepsilon + \tilde f(y') - \tilde g(y')\big) + \tilde f(y') + \tilde g(y') \Big], \end{cases}$

where

$h(y) = (y_n - r^2)(y_n + r^2)\big(\Theta y_n + \Xi\big), \qquad \begin{cases} \Theta = \dfrac{1}{8r^6}\big[\varepsilon + \tilde f(y') - \tilde g(y')\big] D_{y'}\big[\hat f(y) + \hat g(y)\big], \\[4pt] \Xi = \dfrac{1}{8r^4}\big[\varepsilon + \tilde f(y') - \tilde g(y')\big] D_{y'}\big[\hat f(y) - \hat g(y)\big], \end{cases}$

$\hat f$ and $\hat g$ are mollifications of $\tilde f$ and $\tilde g$ given by

(3.5) $\hat f(y) := \int_{\mathbb{R}^{n-1}} \tilde f(y' - \kappa z)\varphi(z)\,dz, \qquad \hat g(y) := \int_{\mathbb{R}^{n-1}} \tilde g(y' - \kappa z)\varphi(z)\,dz,$

where $\varphi$ is a positive smooth function with unit integral supported in $B_1 \subset \mathbb{R}^{n-1}$, and

$\kappa = \frac{r^4 - y_n^2}{r_0}.$
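In particular, since $h$ and $\kappa$ vanish on $\{y_n = \pm r^2\}$, direct substitution in (3.4) shows that $\Phi$ maps the top and bottom faces of the slab onto the two boundary graphs:

```latex
\[
  y_n = r^2:\quad x' = y',\qquad
  x_n = \tfrac12\big[(\varepsilon + \tilde f - \tilde g) + \tilde f + \tilde g\big]
      = \tfrac{\varepsilon}{2} + \tilde f(y'),
\]
\[
  y_n = -r^2:\quad x' = y',\qquad
  x_n = \tfrac12\big[-(\varepsilon + \tilde f - \tilde g) + \tilde f + \tilde g\big]
      = -\tfrac{\varepsilon}{2} + \tilde g(y'),
\]
```

and $\hat f = \tilde f$, $\hat g = \tilde g$ on these faces because $\kappa = 0$ at $y_n = \pm r^2$.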

As in [16, Section 3], the mollification (3.5) was introduced to overcome the lack of regularity of $f$ and $g$. Note that, for any $y \in \overline{Q_{2r, r^2}}$, by the triangle inequality, we have

$|y'| \le |x_0'| + 2r \le 6r \le 6r_0,$

and thus

$\tilde f(y') = f(y') \quad \text{and} \quad \tilde g(y') = g(y').$

Thus, by (1.8), for $y \in \overline{Q_{2r, r^2}}$, we have

(3.6) $\big|D_{y'}^k \tilde f(y')\big| \le Cr^{2-k}, \quad \big|D_{y'}^k \tilde g(y')\big| \le Cr^{2-k}, \quad k = 0, 1, 2, \qquad \big|D_{y'}^k \hat f(y)\big| \le Cr^{2-k}, \quad \big|D_{y'}^k \hat g(y)\big| \le Cr^{2-k}, \quad k = 0, 1, 2.$

Similar to [16, Lemma 3.1], one can obtain several properties of the change of variables $\Phi$, as in the lemma below. The proof will be given in the appendix for completeness.

Lemma 3.1

There exists an $r_0 = r_0(n, c_1, c_2) \in (0, 1/4)$, independent of $\varepsilon$ and $\gamma$, such that, when $r \in (0, r_0]$ and $\Phi$ is given as in (3.4), we have the following.

  1. There exists a positive constant $C$, independent of $\varepsilon$ and $r$, such that

    $\frac{1}{C}\,I \le D\Phi(y) \le CI, \quad y \in \overline{Q_{2r, r^2}},$

    and hence $\Phi$ is invertible.

  2. $Q_{1.9r, r^2} \subset \Phi^{-1}(\Omega_{x_0', 2r})$ and $\Omega_{x_0', r} \subset \Phi(Q_{1.1r, r^2})$.

  3. Let $u \in H^1(\Omega_{x_0', 2r})$ be a solution of

    (3.7) $\begin{cases} \Delta u = 0 & \text{in } \Omega_{x_0', 2r}, \\ u + \gamma\partial_\nu u = U_1 & \text{on } \Gamma_+ \cap \overline{\Omega_{x_0', 2r}}, \\ u + \gamma\partial_\nu u = -U_1 & \text{on } \Gamma_- \cap \overline{\Omega_{x_0', 2r}}, \end{cases}$

    and let $\tilde u(y) = u(\Phi(y))$. Then $\tilde u$ is a solution of the following elliptic equation:

    (3.8) $\begin{cases} a_{ij} D_{y_i y_j}\tilde u + b_i D_{y_i}\tilde u = 0 & \text{in } Q_{1.9r, r^2}, \\ \tilde u + \gamma F_1^{-1} D_{y_n}\tilde u = U_1 & \text{on } \{y_n = r^2\} \cap \overline{Q_{1.9r, r^2}}, \\ \tilde u - \gamma F_2^{-1} D_{y_n}\tilde u = -U_1 & \text{on } \{y_n = -r^2\} \cap \overline{Q_{1.9r, r^2}}, \end{cases}$

    where

    $a := (a_{ij})_{n \times n} = (D\Phi)^{-1}\big((D\Phi)^{-1}\big)^{T} \in C^{0,1}\big(\overline{Q_{1.9r, r^2}}; \mathbb{R}^{n \times n}\big), \qquad b := (b_i)_n \in L^\infty\big(\overline{Q_{1.9r, r^2}}; \mathbb{R}^n\big)$

    satisfy

    (3.9) $\frac{1}{C}\,I \le a \le CI, \qquad |D_y a| \le \frac{C}{r}, \qquad |b| \le \frac{C}{r},$

    and

    (3.10) $a_{nj} = a_{jn} = 0 \quad \text{on } \{y_n = \pm r^2\} \cap \overline{Q_{1.9r, r^2}}$

    for any $j \in \{1, 2, \ldots, n-1\}$, and $F_i \in C^{0,1}\big(\overline{Q_{1.9r, r^2}}\big) \cap C^{1,1}\big(Q_{1.9r, r^2}\big)$ ($i = 1, 2$) are defined as

    $\begin{cases} F_1 := F_1(y) := \dfrac{1}{2r^2}\big(\varepsilon + f(y') - g(y')\big)\sqrt{1 + |D_{y'}\hat f(y)|^2}, \\[4pt] F_2 := F_2(y) := \dfrac{1}{2r^2}\big(\varepsilon + f(y') - g(y')\big)\sqrt{1 + |D_{y'}\hat g(y)|^2}, \end{cases}$

    satisfying

    (3.11) $\begin{cases} \dfrac{1}{C} \le F_i \le C, \qquad |D_{y'} F_i| \le \dfrac{C}{r}, \qquad |D_{y_n} F_i| \le Cr^2, \\[4pt] \big|D_{y'}^2 F_i(y)\big| \le \dfrac{Cr^2}{r^4 - y_n^2}, \qquad \big|D_{y_n} D_{y'} F_i(y)\big| \le \dfrac{Cr^3}{r^4 - y_n^2}, \qquad \big|D_{y_n}^2 F_i(y)\big| \le \dfrac{Cr^4}{r^4 - y_n^2}. \end{cases}$

Step 2: Reducing to the homogeneous Robin boundary condition

Next, we construct an auxiliary function $w = w(y)$ such that $v := \tilde u - w$ satisfies the homogeneous Robin boundary conditions and $w$ has good estimates. Ideally, we would like to take $w(y)$ of the form $a(y')y_n + b(y')$. However, due to the lack of regularity of $f$ and $g$, the second derivative of $a(y')$ would be very singular if we applied the same mollification (3.5) to $f$ and $g$. To overcome this, we split $w$ into $w_1 + w_2$, where $w_1$ is a linear function in $y_n$ whose coefficient is $C^{1,1}$ and approximates $a(y')$ well enough, and $w_2$ is “small”, as follows.

Let $F_0 := \frac{1}{2r^2}(\varepsilon + f - g)$. First, we choose $w_1(y) := A y_n$, where

(3.12) $A := A(y') := \frac{U_1}{r^2 + \gamma F_0^{-1}}.$

Then $w_1$ satisfies

(3.13) $\begin{cases} w_1 + \gamma F_0^{-1} D_{y_n} w_1 = U_1 & \text{on } \{y_n = r^2\} \cap \overline{Q_{1.9r, r^2}}, \\ w_1 - \gamma F_0^{-1} D_{y_n} w_1 = -U_1 & \text{on } \{y_n = -r^2\} \cap \overline{Q_{1.9r, r^2}}. \end{cases}$

Now we choose $w_2$ such that

(3.14) $\begin{cases} w_2 + \gamma F_1^{-1} D_{y_n} w_2 = \gamma\big(F_0^{-1} - F_1^{-1}\big) D_{y_n} w_1 & \text{on } \{y_n = r^2\} \cap \overline{Q_{1.9r, r^2}}, \\ w_2 - \gamma F_2^{-1} D_{y_n} w_2 = \gamma\big(F_2^{-1} - F_0^{-1}\big) D_{y_n} w_1 & \text{on } \{y_n = -r^2\} \cap \overline{Q_{1.9r, r^2}}. \end{cases}$

We use the ansatz

$w_2 = (y_n + r^2)(y_n - r^2)\big(\bar A y_n + \bar B\big),$

where

$\bar A := \bar A(y) := \frac{A}{4r^4}\Big( F_1\big(F_0^{-1} - F_1^{-1}\big) - F_2\big(F_2^{-1} - F_0^{-1}\big) \Big) = \frac{A}{4r^4 F_0}\Big( (F_1 - F_0) - (F_0 - F_2) \Big),$
$\bar B := \bar B(y) := \frac{A}{4r^2}\Big( F_1\big(F_0^{-1} - F_1^{-1}\big) + F_2\big(F_2^{-1} - F_0^{-1}\big) \Big) = \frac{A}{4r^2 F_0}\Big( (F_1 - F_0) + (F_0 - F_2) \Big),$
and $A = A(y')$ is the same function as in (3.12), so that $w_2$ satisfies (3.14). Note that $w_2$ is sufficiently small since $F_i$ is close to $F_0$ (see the proof of Lemma 3.2 below). Finally, we let $w := w_1 + w_2$. Then, by (3.13) and (3.14), we know that $w$ satisfies

(3.15) $\begin{cases} w + \gamma F_1^{-1} D_{y_n} w = U_1 & \text{on } \{y_n = r^2\} \cap \overline{Q_{1.9r, r^2}}, \\ w - \gamma F_2^{-1} D_{y_n} w = -U_1 & \text{on } \{y_n = -r^2\} \cap \overline{Q_{1.9r, r^2}}. \end{cases}$

Let v = u ̃ w . Then we have the following lemma.

Lemma 3.2

Let $\tilde u$, $a$, $b$, $F_1$, $F_2$ be defined as in Lemma 3.1, and let $w$, $v$ be as above. Then $v$ is a solution to the following equation:

(3.16) $\begin{cases} a_{ij} D_{y_i y_j} v + b_i D_{y_i} v = -a_{ij} D_{y_i y_j} w - b_i D_{y_i} w & \text{in } Q_{1.9r, r^2}, \\ v + \gamma F_1^{-1} D_{y_n} v = 0 & \text{on } \{y_n = r^2\} \cap \overline{Q_{1.9r, r^2}}, \\ v - \gamma F_2^{-1} D_{y_n} v = 0 & \text{on } \{y_n = -r^2\} \cap \overline{Q_{1.9r, r^2}}. \end{cases}$

Moreover, we have $w \in C^{1,1}\big(\overline{Q_{1.9r, r^2}}\big)$, satisfying

(3.17) $|w| \le Cr^2|U_1|, \qquad |D_y w| \le C|U_1|, \qquad |D_y^2 w| \le \frac{C|U_1|}{r}.$

Proof

By direct computations, using (1.8)–(1.10), we obtain

| F 0 | C , | D y F 0 | C r , | D y 2 F 0 | C r 2 ,
(3.18) | F 0 1 | C , | D y F 0 1 | C r , | D y 2 F 0 1 | C r 2 .
Thus A = A ( y ) defined in (3.12) satisfies

(3.19) | A | C | U 1 | , | D y A | C | U 1 | r , | D y 2 A | C | U 1 | r 2 .

Thus, by noting that D y n 2 w 1 = 0 , we have

(3.20) $|w_1| \le C r^2 |U_1|, \quad |D_y w_1| \le C |U_1|, \quad |D_y^2 w_1| \le \frac{C |U_1|}{r}.$

Note that, for any $t \in \mathbb{R}$, it holds that

$0 \le \sqrt{1 + t^2} - 1 \le \frac{t^2}{2}.$

Therefore, by (1.8)–(1.10) and (3.11), we have

(3.21) $|F_i - F_0| \le C r^2, \quad i = 1, 2.$

Moreover, by using (1.8)–(1.10) and (3.6), we also have

(3.22) $|D_y F_i - D_y F_0| \le C r, \quad i = 1, 2.$

By direct computations and using (3.18), (3.19), (3.21), and (3.22), we have the following estimates for A ̄ :

(3.23) | A ̄ | C r 2 | U 1 | , | D y A ̄ | C r 3 | U 1 | , | D y n A ̄ | C r 2 | U 1 | ,
(3.24) | D y 2 A ̄ | C | U 1 | r 2 ( r 4 y n 2 ) , | D y n D y A ̄ | C | U 1 | r ( r 4 y n 2 ) , | D y n 2 A ̄ | C | U 1 | r 4 y n 2 .
Similarly, we have
(3.25) | B ̄ | C | U 1 | , | D y B ̄ | C r 1 | U 1 | , | D y n B ̄ | C | U 1 | ,
(3.26) | D y 2 B ̄ | C | U 1 | r 4 y n 2 , | D y n D y B ̄ | C r | U 1 | r 4 y n 2 , | D y n 2 B ̄ | C r 2 | U 1 | r 4 y n 2 .
By (3.23)–(3.26) and direct computations, one can deduce that

(3.27) $|w_2| \le C r^4 |U_1|, \quad |D_y w_2| \le C r^2 |U_1|, \quad |D_y^2 w_2| \le C |U_1|.$

By (3.20) and (3.27), we have w = w 1 + w 2 C 1 , 1 ( Q 1.9 r , r 2 ̄ ) , satisfying (3.17). Thus, by (3.8) and (3.15), 𝑣 is a solution to (3.16). ∎
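The elementary inequality $0 \le \sqrt{1+t^2} - 1 \le t^2/2$ used above to derive (3.21) is easy to spot-check numerically. A minimal sketch (sample values only; a check, not a proof):

```python
import math

def gap(t: float) -> float:
    """The quantity sqrt(1 + t^2) - 1 appearing in the derivation of (3.21)."""
    return math.sqrt(1.0 + t * t) - 1.0

# Spot-check: 0 <= gap(t) <= t^2 / 2 on a few sample values.
for t in [0.0, 1e-3, 0.5, -0.7, 2.0, -10.0]:
    assert 0.0 <= gap(t) <= t * t / 2.0
```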

Step 3: Reducing to the homogeneous Neumann boundary condition

We make another transformation v ̃ = v e ψ , where ψ = ψ ( y ) is a function to be determined, such that v ̃ satisfies the homogeneous Neumann boundary conditions. Note that, by the chain rule,

D y n v ̃ = e ψ ( D y n v + v D y n ψ ) .

Thus if we choose $\psi = (y_n + r^2)(y_n - r^2)(\tilde{A} y_n + \tilde{B})$, where $\tilde{A}$ and $\tilde{B}$ are functions such that

$\begin{cases} D_{y_n} \psi(y', r^2) = 2r^2 (\tilde{A} r^2 + \tilde{B}) = \gamma^{-1} F_1, \\ D_{y_n} \psi(y', -r^2) = -2r^2 (-\tilde{A} r^2 + \tilde{B}) = -\gamma^{-1} F_2, \end{cases}$

namely,

$\tilde{A} := (4\gamma r^4)^{-1} (F_1 - F_2) \quad \text{and} \quad \tilde{B} := (4\gamma r^2)^{-1} (F_1 + F_2),$

then by (3.16), $\tilde{v} := v e^{\psi} = v e^{(y_n + r^2)(y_n - r^2)(\tilde{A} y_n + \tilde{B})}$ satisfies

(3.28) D y n v ̃ = 0 on { y n = ± r 2 } Q 1.9 r , r 2 ̄ .
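The algebra behind the choice of $\tilde{A}$ and $\tilde{B}$ can be sanity-checked numerically. The sketch below uses hypothetical sample values for $\gamma$, $r$, $F_1$, $F_2$ (which are functions of $y'$ in the text but constants here) and verifies that $D_{y_n}\psi(\pm r^2)$ takes the values $\gamma^{-1}F_1$ and $-\gamma^{-1}F_2$ needed to cancel the Robin terms of (3.16):

```python
# psi(y_n) = (y_n + r^2)(y_n - r^2)(A~ y_n + B~), so
# psi'(y_n) = 2 y_n (A~ y_n + B~) + (y_n^2 - r^4) A~.
gamma, r, F1, F2 = 0.7, 0.3, 1.2, 0.9   # hypothetical sample values

A_t = (F1 - F2) / (4.0 * gamma * r**4)
B_t = (F1 + F2) / (4.0 * gamma * r**2)

def dpsi(yn: float) -> float:
    """Derivative of psi in the normal variable y_n."""
    return 2.0 * yn * (A_t * yn + B_t) + (yn**2 - r**4) * A_t

assert abs(dpsi(r**2) - F1 / gamma) < 1e-12    # top interface
assert abs(dpsi(-r**2) + F2 / gamma) < 1e-12   # bottom interface
```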

We have the following lemma.

Lemma 3.3

Let 𝜓, v ̃ be defined as above and let 𝑎 be defined as in Lemma 3.1. Then v ̃ is a solution to

(3.29) { a i j D y i y j v ̃ + b ̃ i D y i v ̃ + c ̃ v ̃ = h ̃ in Q 1.9 r , r 2 , D y n v ̃ = 0 on { y n = ± r 2 } Q 1.9 r , r 2 ̄ ,

where b ̃ : = ( b ̃ i ) n × 1 L ( Q 1.9 r , r 2 ̄ : R n ) , and c ̃ , h ̃ L ( Q 1.9 r , r 2 ̄ ) satisfy

(3.30) $|\tilde{b}| \le \frac{C}{r}, \quad |\tilde{c}| \le \frac{C}{r^2}, \quad |\tilde{h}| \le \frac{C |U_1|}{r}.$

Proof

By (3.11) and direct computations, we have ψ C 1 , 1 ( Q 1.9 r , r 2 ̄ ) , satisfying

(3.31) $|\psi| \le C r^2, \quad |D_y \psi| \le C, \quad |D_y^2 \psi| \le \frac{C}{r^2}.$

For i , j { 1 , , n } , we define

$\tilde{b}_i = b_i - 2 \sum_{j=1}^n a_{ij} D_{y_j} \psi, \quad \tilde{c} = a_{ij} (D_{y_i} \psi D_{y_j} \psi - D_{y_i y_j} \psi) - b_i D_{y_i} \psi, \quad \tilde{h} = -(a_{ij} D_{y_i y_j} w + b_i D_{y_i} w) e^{\psi},$

where $b_i$ and $w$ are the same as in Lemmas 3.1 and 3.2, respectively. Then (3.30) follows from (3.9), (3.17), and (3.31). Thus, by (3.16), (3.28), and the chain rule, $\tilde{v}$ is a solution to (3.29). ∎

Step 4: Extension and estimates

Next we extend v ̃ , 𝑎, b ̃ , c ̃ , and h ̃ to the larger cylinder Q 1.9 r : = { ( y , y n ) R n : | y x 0 | < 1.9 r } . For i , j = 1 , 2 , , n 1 , we take the even extension of a i j , a n n , b ̃ i , c ̃ , h ̃ , and v ̃ with respect to { y n = r 2 } , and take the odd extension of a i n , a n i , and b n with respect to { y n = r 2 } . Then we take the periodic extension of these functions in the y n axis, so that the period is equal to 4 r 2 . We still denote these functions by v ̃ , 𝑎, b ̃ , c ̃ , and h ̃ after the extension. Then, because of the Neumann boundary condition, v ̃ is a solution of the following equation:

(3.32) a i j D y i y j v ̃ + b ̃ i D y i v ̃ + c ̃ v ̃ = h ̃ in Q 1.9 r .

Note that, because of (3.10), after the extension, we have that a C 0 , 1 ( Q 1.9 r ̄ : R n × n ) satisfies

(3.33) $C^{-1} I \le a \le C I, \quad |D_y a| \le \frac{C}{r}.$

Thus we have the following gradient estimate for v ̃ .

Lemma 3.4

Let v ̃ be a solution of (3.32). Then it holds that

(3.34) D y v ̃ L ( Q 1.1 r , 1.1 r ) + r 1 v ̃ L ( Q 1.1 r , 1.1 r ) C ( r 1 v ̃ L 2 ( Q 1.9 r , 1.9 r ) + | U 1 | ) .

Proof

Observe that, after the extensions defined as above, $\tilde{b}$, $\tilde{c}$, and $\tilde{h}$ are bounded in $Q_{1.9r}$, still satisfying (3.30). We consider the following rescaling of equation (3.32):

v ¯ ( y ) = v ̃ ( x 0 + r y , r y n ) , a ¯ ( y ) = a ( x 0 + r y , r y n ) , b ¯ ( y ) = r b ̃ ( x 0 + r y , r y n ) , c ¯ ( y ) = r 2 c ̃ ( x 0 + r y , r y n ) , h ¯ ( y ) = r 2 h ̃ ( x 0 + r y , r y n ) .

Then v ¯ is a solution of the following equation:

a ¯ i j D y i y j v ¯ + b ¯ i D y i v ¯ + c ¯ v ¯ = h ¯ in C 1.9 ,

where

C 1.9 : = { y = ( y , y n ) R n : | y | < 1.9 } .

By (3.30) and (3.33), we know that $\bar{a}$ is Lipschitz continuous in $C_{1.9}$, and $\bar{b}$, $\bar{c}$, $\bar{h}$ are bounded in $C_{1.9}$, satisfying

$C^{-1} I \le \bar{a} \le C I, \quad |D_y \bar{a}| + |\bar{b}| + |\bar{c}| \le C, \quad |\bar{h}| \le C r |U_1|.$

Then, by the classical W 2 , p estimate (cf. [20, Theorem 9.11]) and a standard iteration argument, for any p > 1 , it holds that

(3.35) v ¯ W 2 , p ( Q 1.1 ) C 1 ( p ) ( v ¯ L p ( Q 1.5 ) + h ¯ L p ( Q 1.5 ) ) C 2 ( p ) ( v ¯ L 2 ( Q 1.9 ) + h ¯ L p ( Q 1.9 ) ) .

Here C 1 ( p ) , C 2 ( p ) are two constants depending only on 𝑛, c 1 , c 2 , 𝛾, and 𝑝. We fix some p > n and take θ = 1 n / p . Then, by using the Sobolev embedding W 2 , p ( Q 1.9 ) C 1 , θ ( Q 1.9 ) , estimate (3.35) also implies

(3.36) v ¯ L ( Q 1.1 ) + D y v ¯ L ( Q 1.1 ) C ( v ¯ L 2 ( Q 1.9 ) + r | U 1 | ) .

Scaling back (3.36) gives (3.34). The lemma is proved. ∎

Now we are ready to prove Theorem 1.1.

Proof of Theorem 1.1

As is explained at the beginning of this section, we prove the theorem at $x = x_0$, and it suffices to prove (1.13) for the case when $R_0 = 1$, $U_2 = -U_1$, and $r := \frac{1}{4}(\varepsilon + |x_0'|^2)^{1/2} \le r_0$, where $r_0 = r_0(n, c_1, c_2)$ is given in Lemma 3.1. Since $u$ is a solution to (3.7), we define $\tilde{u}$, $w$, $v$, and $\tilde{v}$ as earlier in this section. Since $\tilde{v}$ is even with respect to $\{y_n = r^2\}$ and periodic in $y_n$ with a period of $4r^2$, we know that (3.34) directly implies

D y v ̃ L ( Q 1.1 r , r 2 ) + r 1 v ̃ L ( Q 1.1 r , r 2 ) C ( r 1 v ̃ L 2 ( Q 1.9 r , r 2 ) + | U 1 | ) .

Since v = v ̃ e ψ , the last inequality and (3.31) also imply

(3.37) D y v L ( Q 1.1 r , r 2 ) v ̃ D y e ψ L ( Q 1.1 r , r 2 ) + D y v ̃ e ψ L ( Q 1.1 r , r 2 ) C v ̃ L ( Q 1.1 r , r 2 ) + C ( r 1 v ̃ L 2 ( Q 1.9 r , r 2 ) + | U 1 | ) C ( r 1 v L 2 ( Q 1.9 r , r 2 ) + | U 1 | ) .

Next, since u ̃ = v + w , from (3.37), we deduce that

(3.38) D y u ̃ L ( Q 1.1 r , r 2 ) D y v L ( Q 1.1 r , r 2 ) + D y w L ( Q 1.1 r , r 2 ) C ( r 1 v L 2 ( Q 1.9 r , r 2 ) + | U 1 | ) + D y w L ( Q 1.1 r , r 2 ) C ( r 1 u ̃ L 2 ( Q 1.9 r , r 2 ) + r 1 w L ( Q 1.9 r , r 2 ) + | U 1 | ) + D y w L ( Q 1.1 r , r 2 ) C ( r 1 u ̃ L 2 ( Q 1.9 r , r 2 ) + | U 1 | ) .

Here we used (3.17) in the last line.

Finally, since u ̃ ( y ) = u ( Φ ( y ) ) in Q 1.9 r , r 2 ̄ , by using Lemma 3.1 (a) and (b), from (3.38), we obtain (1.13). The proof is completed. ∎

Next, we give the proof of Corollary 1.2.

Proof of Corollary 1.2

By classical gradient estimates (cf. [31, Chapter 4]), it suffices to prove (1.14) for the case when $\varepsilon \in (0, R_0/4)$ and $x \in \Omega_{R_0/4}$. By symmetry, if $u$ is a solution of (1.5), then $\hat{u}(x) := -u(x', -x_n)$ satisfies

$\begin{cases} \Delta \hat{u} = 0 & \text{in } \tilde{\Omega}, \\ \hat{u} + \gamma \partial_\nu \hat{u} = -U_2 & \text{on } \partial D_1, \\ \hat{u} + \gamma \partial_\nu \hat{u} = -U_1 & \text{on } \partial D_2, \\ \int_{\partial D_j} \partial_\nu \hat{u} \, d\sigma = 0, & j = 1, 2, \\ \hat{u} = \varphi(x) & \text{on } \partial \Omega. \end{cases}$

Then, by the uniqueness of solutions (Lemma 2.2), we know that u u ̂ . In particular, we have

(3.39) $U_1 = -U_2$

and

(3.40) u ( x , 0 ) = 0 when ( x , 0 ) Ω ̃ ̄ .

By Theorem 1.1, (3.39), and (2.2), for any $x \in \Omega_{R_0/2}$ and $r = \frac{1}{4}(\varepsilon + |x'|^2)^{1/2}$, it holds that

(3.41) | D u ( x ) | C ( r 1 u L 2 ( Ω x , r ) + | U 1 | ) C ( r 1 u L ( Ω x , r ) + φ L ( Ω ) ) .

Using (1.11), (3.41) directly implies

(3.42) | D u ( x ) | C r 1 φ L ( Ω ) .

By the mean value theorem and (3.40), for any $x = (x', x_n) \in \Omega_{R_0/2}$, there exists $\xi \in \mathbb{R}$ between $0$ and $x_n$ such that

$u(x) = u(x) - u(x', 0) = D_n u(x', \xi) \, x_n.$

The last equality together with (3.42), (1.8), and (1.10) yields

| u ( x ) | C r φ L ( Ω )

for any x Ω R 0 / 2 . Combining (3.41), (3.3), and the last inequality yields (1.14). The proof is completed. ∎

4 Proof of Theorem 1.3

In this section, we prove Theorem 1.3.

Proof of Theorem 1.3

As in the proof of Theorem 1.1, without loss of generality, we may assume that $U_1 = -U_2$. It suffices to prove the theorem with $\beta > 0$. Let $s_0 \in (0, R_0]$ be an $\varepsilon$-independent constant to be chosen in the course of the proof. We only need to prove (1.15) for $x \in \Omega_{s_0}$. For $s \in [\varepsilon, s_0)$, we denote $\Gamma_s := (\Gamma_+ \cup \Gamma_-) \cap \overline{\Omega}_s$. Define an auxiliary function

$v(x) := \frac{U_1}{\varepsilon/2 + \gamma} x_n, \quad x \in \Omega_{2s_0},$

and u ̃ : = u v . Then Δ u ̃ = 0 in Ω 2 s 0 . On Γ + , we have

$\tilde{u} + \gamma \partial_\nu \tilde{u} = U_1 - \frac{U_1}{\varepsilon/2 + \gamma} \Big( \frac{\varepsilon}{2} + f(x') + \frac{\gamma}{\sqrt{1 + |\nabla f|^2}} \Big) = -\frac{U_1}{\varepsilon/2 + \gamma} \Big( f(x') + \gamma \Big( \frac{1}{\sqrt{1 + |\nabla f|^2}} - 1 \Big) \Big).$

Similarly, on Γ , we have

$\tilde{u} + \gamma \partial_\nu \tilde{u} = U_2 + \frac{U_2}{\varepsilon/2 + \gamma} \Big( -\frac{\varepsilon}{2} + g(x') - \frac{\gamma}{\sqrt{1 + |\nabla g|^2}} \Big) = \frac{U_2}{\varepsilon/2 + \gamma} \Big( g(x') - \gamma \Big( \frac{1}{\sqrt{1 + |\nabla g|^2}} - 1 \Big) \Big).$

We denote h = u ̃ + γ ν u ̃ on Γ ± . By (1.8), (1.10), and U 1 = U 2 , we have

(4.1) $|h(x)| \le \frac{C}{\gamma} |U_1| |x'|^2, \quad x \in \Gamma_\pm.$

Let $\eta$ be a smooth cut-off function in the $x'$ variable such that $\eta = 1$ in $\Omega_s$, $\eta = 0$ outside $\Omega_{2s}$, and $|\nabla \eta| \le C/s$. Multiplying both sides of $\Delta \tilde{u} = 0$ in $\Omega_{R_0}$ by $\tilde{u} \eta^2$ and integrating by parts, we have

Ω s | u ̃ | 2 + 1 γ Γ s | u ̃ | 2 C Ω 2 s | u ̃ | 2 | η | 2 + 1 γ Γ 2 s | u ̃ h | ,

which implies

(4.2) $\int_{\Omega_s} |\nabla \tilde{u}|^2 + \frac{1}{\gamma} \int_{\Gamma_s} |\tilde{u}|^2 \le \frac{C}{s^2} \int_{\Omega_{2s}} |\tilde{u}|^2 + \frac{C}{\gamma} \int_{\Gamma_{2s}} |h|^2.$

By the fundamental theorem of calculus, we have

$\tilde{u}(x) = \int_{-\varepsilon/2 + g(x')}^{x_n} \partial_n \tilde{u}(x', s) \, ds + \tilde{u}(x', -\varepsilon/2 + g(x')) \quad \text{for } x \in \Omega_{2s},$

which implies

(4.3) $\int_{\Omega_{2s}} |\tilde{u}|^2 \le C s^4 \int_{\Omega_{2s}} |\partial_n \tilde{u}|^2 + C s^2 \int_{\Gamma_{2s}} |\tilde{u}|^2.$

Combining (4.1), (4.2), and (4.3), we have

$\int_{\Omega_s} |\nabla \tilde{u}|^2 + \frac{1}{\gamma} \int_{\Gamma_s} |\tilde{u}|^2 \le C s^2 \int_{\Omega_{2s}} |\partial_n \tilde{u}|^2 + C \int_{\Gamma_{2s}} |\tilde{u}|^2 + \frac{C}{\gamma^3} |U_1|^2 s^{n+3},$

where $C$ is some positive constant independent of $\gamma$. Let $N$ be a large constant to be determined later. We can choose $\gamma_0$ and $s_0$ to be sufficiently small such that $C s_0^2 \le 1/N$ and $C \gamma_0 \le 1/N$; then, for any $\gamma \in (0, \gamma_0)$,

(4.4) $\int_{\Omega_s} |\nabla \tilde{u}|^2 + \frac{1}{\gamma} \int_{\Gamma_s} |\tilde{u}|^2 \le \frac{1}{N} \Big( \int_{\Omega_{2s}} |\nabla \tilde{u}|^2 + \frac{1}{\gamma} \int_{\Gamma_{2s}} |\tilde{u}|^2 \Big) + C |U_1|^2 s^{n+3},$

where $C$ is some positive constant that may depend on $\gamma$. For $k \in \mathbb{N}$ satisfying $2^{-k} s_0 \ge \varepsilon$, we denote

$A_k := \int_{\Omega_{2^{-k} s_0}} |\nabla \tilde{u}|^2 + \frac{1}{\gamma} \int_{\Gamma_{2^{-k} s_0}} |\tilde{u}|^2.$

Then (4.4) implies that

(4.5) $A_k \le \Big( \frac{1}{N} \Big)^k A_0 + C |U_1|^2 \sum_{j=1}^k \Big( \frac{1}{N} \Big)^{k-j} 2^{-j(n+3)} s_0^{n+3} \le \Big( \frac{1}{N} \Big)^k A_0 + C |U_1|^2 \, 2^{-k(n+1)} \sum_{j=1}^k \Big( \frac{2^{n+1}}{N} \Big)^{k-j} s_0^{n+3}.$

By (4.2) with s = s 0 , we have

(4.6) $A_0 \le C \big( \|\nabla u\|_{L^\infty(\Omega_{2s_0})}^2 + |U_1|^2 \big).$

Now we choose $N \ge 2^{n+1+2\beta}$. Then, by (4.5) and (4.6),

(4.7) $A_k \le C \, 2^{-k(n+1+2\beta)} \|\nabla u\|_{L^\infty(\Omega_{2s_0})}^2 + C \, 2^{-k(n+1)} |U_1|^2.$

By (4.3) and (4.7), we have

Ω 2 k + 1 s 0 Ω 2 k s 0 | u ̃ | 2 C 2 k ( n + 1 ) Ω 2 k + 1 s 0 | u ̃ | 2 C 2 k ( n 1 ) A k 1 C 2 ( 2 + 2 β ) k u L ( Ω 2 s 0 ) 2 + C 2 2 k | U 1 | 2 ,

which implies, by choosing $k$ such that $s$ and $2^{-k} s_0$ are comparable,

u L 2 ( Ω 4 s Ω s ) u ̃ L 2 ( Ω 4 s Ω s ) + v L 2 ( Ω 4 s Ω s ) C s 1 + β u L ( Ω 2 s 0 ) + C s | U 1 | for s [ ε , s 0 ) .

Then, by estimate (1.13) with $U_1 = -U_2$, we have

$|\nabla u(x)| \le C |x'|^\beta \|\nabla u\|_{L^\infty(\Omega_{2s_0})} + C |U_1| \quad \text{for } x \in \Omega_{s_0} \setminus \Omega_{2\varepsilon}.$

To obtain the estimate for x Ω 2 ε , we choose 𝑘 so that

$2^{-k-1} s_0 < 4\varepsilon \le 2^{-k} s_0.$

Then (4.3) with s = 2 ε and (4.7) imply

u L 2 ( Ω 4 ε ) u ̃ L 2 ( Ω 4 ε ) + v L 2 ( Ω 4 ε ) C ε n 1 4 A k 1 1 / 2 + C ε | U 1 | C ε ( β + 1 ) / 2 u L ( Ω 2 s 0 ) + C ε 1 / 2 | U 1 | ,

which gives, by estimate (1.13) with $U_1 = -U_2$ again,

$|\nabla u(x)| \le C \varepsilon^{\beta/2} \|\nabla u\|_{L^\infty(\Omega_{2s_0})} + C |U_1| \quad \text{for } x \in \Omega_{2\varepsilon}.$

This concludes the proof. ∎
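The dyadic iteration (4.4)–(4.7) can be illustrated numerically. The sketch below (hypothetical constants throughout; the $|U_1|^2$ contribution is folded into the source term) iterates $A_k \le A_{k-1}/N + C_0 (2^{-k} s_0)^{n+3}$ with $N = 2^{n+1+2\beta}$ and checks the resulting decay rate $2^{-k(n+1+2\beta)}$:

```python
# Hypothetical sample constants; N is chosen as in the proof, N >= 2^{n+1+2*beta}.
n, beta = 3, 0.25
N = 2.0 ** (n + 1 + 2 * beta)
C0, s0, A0 = 1.0, 0.5, 1.0

A = A0
for k in range(1, 31):
    # One step of the iteration (4.4) at scale s = 2^{-k} s0.
    A = A / N + C0 * (2.0 ** (-k) * s0) ** (n + 3)
    # The geometric sum in (4.5) converges, so a fixed constant works for all k:
    assert A <= 2.0 * 2.0 ** (-k * (n + 1 + 2 * beta))
```

The source term decays like $2^{-k(n+3)}$, faster than $2^{-k(n+1+2\beta)}$ since $\beta < 1$, which is why the slower rate dominates, as in (4.7).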

5 A dimensional reduction argument

In this section, we adapt a dimensional reduction argument from [14] to reduce equation (1.18) to $n-1$ dimensions, under the assumption that $f, g$ satisfy (1.8) and (1.16). This argument plays an important role in the proofs of Theorems 1.4 and 1.5.

The weak formulation of (1.18) is given by

$\int_{\Omega_{R_0}} \nabla u \cdot \nabla \varphi \, dx + \frac{1}{\gamma} \int_{\Gamma_\pm} u \varphi \, d\sigma = 0$

for all smooth 𝜑 that vanishes on { | x | = R 0 } . Flatten the boundaries by

(5.1) $\begin{cases} y' = x', \\ y_n = 2\varepsilon \Big( \dfrac{x_n - g(x') + \varepsilon/2}{\varepsilon + f(x') - g(x')} - \dfrac{1}{2} \Big) \end{cases} \quad \text{for all } (x', x_n) \in \Omega_{R_0},$

and let v ( y ) = u ( x ) . Note that

Γ + u ( x , x n ) φ ( x , x n ) d σ = { | x | R 0 } u ( x , ε / 2 + f ( x ) ) φ ( x , ε / 2 + f ( x ) ) 1 + | x f ( x ) | 2 d x = { | y | R 0 } v ( y , ε ) φ ( y , ε ) 1 + | y f ( y ) | 2 d y .

Similarly,

Γ u ( x , x n ) φ ( x , x n ) d σ = { | y | R 0 } v ( y , ε ) φ ( y , ε ) 1 + | y g ( y ) | 2 d y .

Therefore, 𝑣 satisfies

(5.2) $\begin{cases} \partial_i (a_{ij}(y) \partial_j v(y)) = 0 & \text{in } Q_{R_0, \varepsilon}, \\ 2\varepsilon \gamma^{-1} \sqrt{1 + |\nabla_{y'} f|^2} \, v(y) + a_{nj}(y) \partial_j v(y) = 0 & \text{on } \{y_n = \varepsilon\}, \\ 2\varepsilon \gamma^{-1} \sqrt{1 + |\nabla_{y'} g|^2} \, v(y) - a_{nj}(y) \partial_j v(y) = 0 & \text{on } \{y_n = -\varepsilon\}, \end{cases}$

where the coefficient matrix $(a_{ij}(y))$ is given by

(5.3) $(a_{ij}(y)) = 2\varepsilon \, \dfrac{(\partial_x y)(\partial_x y)^t}{\det(\partial_x y)},$ with entries, for $i = 1, 2, \dots, n-1$,

$a_{ii} = (\varepsilon + \mu |y'|^2) + e_i, \qquad a_{ij} = 0 \quad (j \ne i, \ j \le n-1),$
$a_{ni} = a_{in} = -2\varepsilon \, \partial_i g(y') - (y_n + \varepsilon) \, \partial_i (f(y') - g(y')),$
$a_{nn} = \dfrac{4\varepsilon^2 + \sum_{i=1}^{n-1} |a_{in}|^2}{\varepsilon + f(y') - g(y')},$
$e_i = f(y') - g(y') - \mu |y'|^2,$
and

(5.4) $Q_{s,t} := \{ y = (y', y_n) \in \mathbb{R}^n : |y'| < s, \ |y_n| < t \}.$
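As a quick numerical sanity check on the change of variables (5.1) (with hypothetical profiles $f$, $g$ and sample parameter values), one can verify that it maps the curved gap $\{-\varepsilon/2 + g(x') < x_n < \varepsilon/2 + f(x')\}$ onto the flat slab $\{-\varepsilon < y_n < \varepsilon\}$:

```python
eps, mu = 0.01, 1.0   # sample values

def f(xp: float) -> float:       # hypothetical top profile
    return 0.5 * mu * xp**2

def g(xp: float) -> float:       # hypothetical bottom profile
    return -0.5 * mu * xp**2

def y_n(xp: float, xn: float) -> float:
    """The vertical component of the map (5.1)."""
    return 2.0 * eps * ((xn - g(xp) + eps / 2.0) / (eps + f(xp) - g(xp)) - 0.5)

for xp in [0.0, 0.1, 0.3]:
    top, bottom = eps / 2.0 + f(xp), -eps / 2.0 + g(xp)
    assert abs(y_n(xp, top) - eps) < 1e-14     # top interface -> y_n = eps
    assert abs(y_n(xp, bottom) + eps) < 1e-14  # bottom interface -> y_n = -eps
    assert abs(y_n(xp, 0.5 * (top + bottom))) < 1e-14   # midline -> y_n = 0
```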

Note that there is an extra 2 ε factor appearing in the boundary condition of (5.2). This is due to the extra 2 ε factor of ( a i j ( y ) ) defined in (5.3). We define

(5.5) $\bar{v}(y') := \frac{1}{2\varepsilon} \int_{-\varepsilon}^{\varepsilon} v(y', y_n) \, dy_n.$

By (5.2), v ̄ satisfies

$\begin{aligned} \operatorname{div}\big( (\varepsilon + \mu |y'|^2) \nabla \bar{v} \big) &= -\frac{1}{2\varepsilon} \sum_{j=1}^n (a_{nj} \partial_j v)(y', \varepsilon) + \frac{1}{2\varepsilon} \sum_{j=1}^n (a_{nj} \partial_j v)(y', -\varepsilon) - \sum_{i=1}^{n-1} \partial_i \big( \overline{a_{in} \partial_n v} \big) - \sum_{i=1}^{n-1} \partial_i \big( \overline{e_i \partial_i v} \big) \\ &= \gamma^{-1} \sqrt{1 + |\nabla_{y'} f|^2} \, v(y', \varepsilon) + \gamma^{-1} \sqrt{1 + |\nabla_{y'} g|^2} \, v(y', -\varepsilon) - \sum_{i=1}^{n-1} \partial_i \big( \overline{a_{in} \partial_n v} + \overline{e_i \partial_i v} \big) \\ &= 2\gamma^{-1} \bar{v}(y') \underbrace{- \sum_{i=1}^{n-1} \partial_i \big( \overline{a_{in} \partial_n v} + \overline{e_i \partial_i v} \big)}_{=: \operatorname{div} F} + \underbrace{\gamma^{-1} \sqrt{1 + |\nabla_{y'} f|^2} \, v(y', \varepsilon) + \gamma^{-1} \sqrt{1 + |\nabla_{y'} g|^2} \, v(y', -\varepsilon) - 2\gamma^{-1} \bar{v}(y')}_{=: G}, \end{aligned}$

where $\overline{a_{in} \partial_n v}$ denotes the vertical average of $a_{in} \partial_n v$ with respect to $y_n \in (-\varepsilon, \varepsilon)$. We will adopt this notation throughout this article.

Summarizing this section, we have shown the following lemma.

Lemma 5.1

Let f , g satisfy (1.8) and (1.16) and let u H 1 ( Ω R 0 ) be a solution of (1.18). Perform the change of variables (5.1), and let v ( y ) = u ( x ) , v ̄ be defined as in (5.5). Then v ̄ satisfies v ̄ ( 0 ) = 0 and the equation

(5.6) $\operatorname{div}\big( (\varepsilon + \mu |y'|^2) \nabla \bar{v} \big) - 2\gamma^{-1} \bar{v} = \operatorname{div} F + G \quad \text{in } B_{R_0} \subset \mathbb{R}^{n-1},$

where

(5.7) $|F(y')| \le C \big( \varepsilon |y'| \, |\overline{\partial_n v}| + |e_1| \, |\overline{\nabla_{y'} v}| \big), \quad |G(y')| \le C \Big( \varepsilon \max_{y_n \in (-\varepsilon, \varepsilon)} |\partial_n v| + |y'|^2 |\bar{v}| \Big),$

and 𝐶 is some positive constant depending only on 𝑛, 𝛾, f C 2 , σ , and g C 2 , σ .

Proof

The derivation of equation (5.6) follows from the argument above. It remains to prove estimates (5.7). Since f , g satisfy (1.8), we have | a i n | C ε | y | for 1 i n 1 . Therefore, it is straightforward to see that

| F ( y ) | C ( | a i n n v ̄ | + | e i i v ̄ | ) C ( ε | y | | n v ̄ | + | e 1 | | y v ̄ | ) .

For the 𝐺 term, we have

| G ( y ) | γ 1 1 + | y f | 2 | v ( y , ε ) v ̄ ( y ) | + γ 1 1 + | y g | 2 | v ( y , ε ) v ̄ ( y ) | γ 1 ( 1 + | y f | 2 + 1 + | y g | 2 2 ) | v ̄ ( y ) | C ( ε max y n ( ε , ε ) | n v | + | y | 2 | v ̄ | ) .

This completes the proof. ∎

6 Proof of Theorem 1.4

In this section, we prove Theorem 1.4. Without loss of generality, we may assume μ = 1 . Namely, we consider

f ( x ) g ( x ) = | x | 2 + O ( | x | 2 + σ ) for 0 < | x | < R 0 .

In this case,

(6.1) $\alpha = \frac{-(n-1) + \sqrt{(n-1)^2 + 4(n - 2 + 2/\gamma)}}{2}.$

For general μ > 0 , we only need to replace 𝜀 and 𝛾 in the proof by ε / μ and μ γ , respectively, in view of equation (5.6). The strategy of the proof is similar to the proof of [14, Theorem 1.1]. However, some modifications are needed to address the case n = 2 and the extra zero-order term 2 γ 1 v ̄ in equation (5.6).
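The exponent in (6.1) is the positive root of a quadratic, which is easy to verify numerically. A minimal sketch (sample values of $n$ and $\gamma$; a check, not part of the proof):

```python
import math

def alpha(n: int, gamma: float) -> float:
    """The exponent (6.1): the positive root of
    t^2 + (n-1) t - (n - 2 + 2/gamma) = 0."""
    q = n - 2.0 + 2.0 / gamma
    return (-(n - 1.0) + math.sqrt((n - 1.0) ** 2 + 4.0 * q)) / 2.0

# Check that alpha solves the quadratic, and that 0 < alpha < 1 when gamma > 1
# (a fact used later in the proof of Lemma 7.1).
for n in (2, 3, 5):
    for gamma in (1.5, 2.0, 10.0):
        a = alpha(n, gamma)
        assert abs(a * a + (n - 1) * a - (n - 2 + 2.0 / gamma)) < 1e-12
        assert 0.0 < a < 1.0
```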

For ε > 0 , σ , s R , an open set D R n 1 , we introduce the following norm:

(6.2) $\|F\|_{\varepsilon, \sigma, s, D} := \sup_{y' \in D} \frac{|F(y')|}{|y'|^\sigma (\varepsilon + |y'|^2)^{1-s}}.$

Throughout this section, $B_R$ always lies in $\mathbb{R}^{n-1}$. For any $0 < R < R_0$, we denote by

$(u)_{B_R} := \fint_{\partial B_R} u \, dS$

the average of $u$ over the sphere $\partial B_R$.
the average of 𝑢 over B R .

First, we establish estimates for v ̄ .

Proposition 6.1

For $n \ge 2$, $0 \le s \le 1/2$, $\gamma > 0$, $\sigma > 0$, $1 + \sigma - 2s \le \min\{\alpha, 1\}$, and $\varepsilon > 0$, let $\bar{v} \in H^1(B_{R_0})$ be a solution of

div ( ( ε + | y | 2 ) v ̄ ) 2 γ 1 v ̄ = div F + G in B R 0 R n 1 ,

where F , G satisfy

(6.3) F ε , σ , s , B R 0 < , G ε , σ 1 , s , B R 0 < .

Then, for any R ( 0 , R 0 / 2 ) , we have

(6.4) v ̄ ( v ̄ B R ) L 2 ( B R ) C ( F ε , σ , s , B R 0 + G ε , σ 1 , s , B R 0 + v ̄ ( v ̄ ) B R 0 L 2 ( B R 0 ) ) R α ̃ ,

where α ̃ : = min { α , 1 , 1 + σ 2 s } , 𝛼 is given in (6.1), and 𝐶 is some positive constant depending only on 𝑛, 𝜎, 𝑠, and is independent of 𝜀.

For the proof, we use an iteration argument based on the following lemmas.

Lemma 6.2

For n 2 , γ > 0 , and ε > 0 , let v 1 H 1 ( B R 0 ) satisfy

(6.5) div ( ( ε + | y | 2 ) v 1 ) 2 γ 1 v 1 = 0 in B R 0 R n 1 .

Then, for any 0 < ρ < R R 0 , we have

( B ρ | v 1 ( y ) ( v 1 ) B ρ | 2 d S ) 1 2 ( ρ R ) α ̂ ( B R | v 1 ( y ) ( v 1 ) B R | 2 d S ) 1 2 ,

where α ̂ : = min { α , 1 } , and 𝛼 is given in (6.1).

Proof

By elliptic theory, $v_1 \in C^\infty(B_1)$. By scaling, it suffices to prove the lemma for $R = 1$. We first provide a proof for the case when $n \ge 3$. Denote $y' = (r, \xi) \in (0, 1) \times \mathbb{S}^{n-2}$. We can rewrite (6.5) as

$\partial_{rr} v_1 + \Big( \frac{n-2}{r} + \frac{2r}{\varepsilon + r^2} \Big) \partial_r v_1 + \frac{1}{r^2} \Delta_{\mathbb{S}^{n-2}} v_1 - \frac{2}{\gamma(\varepsilon + r^2)} v_1 = 0 \quad \text{in } B_1 \setminus \{0\}.$

Take the spherical harmonic decomposition

(6.6) v 1 ( y ) = V 0 ( r ) Y 0 + k = 1 i = 1 N ( k ) V k , i ( r ) Y k , i ( ξ ) , y B 1 { 0 } ,

where $Y_0 = |\mathbb{S}^{n-2}|^{-1/2}$ and $Y_{k,i}$ is a $k$-th degree spherical harmonic, that is,

$\Delta_{\mathbb{S}^{n-2}} Y_{k,i} = -k(k + n - 3) Y_{k,i},$

and { Y k , i } k , i Y 0 forms an orthonormal basis of L 2 ( S n 2 ) . Then

V 0 ( r ) Y 0 = S n 2 v 1 ( r , ξ ) d ξ = ( v 1 ) B r ,

V k , i ( r ) C 2 ( 0 , 1 ) is given by

V k , i ( r ) = S n 2 v 1 ( y ) Y k , i ( ξ ) d ξ

and satisfies

$L_k V_{k,i} := V_{k,i}''(r) + \Big( \frac{n-2}{r} + \frac{2r}{\varepsilon + r^2} \Big) V_{k,i}'(r) - \Big( \frac{k(k + n - 3)}{r^2} + \frac{2}{\gamma(\varepsilon + r^2)} \Big) V_{k,i}(r) = 0 \quad \text{in } (0, 1)$

for each k N , i = 1 , 2 , , N ( k ) . Since v 1 C ( B 1 ) and { Y k , i } k , i Y 0 is an orthonormal basis, we have

k = 1 i = 1 N ( k ) | V k , i ( r ) | 2 = B ρ | v 1 ( y ) ( v 1 ) B r | 2 d S 0 as r 0 .

This implies | V k , i ( r ) | 0 as r 0 for each k N , i = 1 , 2 , , N ( k ) .

Next, we consider two separate cases 0 < γ 1 and γ > 1 .

Case 1.  When 0 < γ 1 , for any k N , by a direct computation,

$L_k r = \frac{n - 2 - k(k + n - 3)}{r} + \Big( 1 - \frac{1}{\gamma} \Big) \frac{2r}{\varepsilon + r^2} \le 0.$

Since $\pm V_{k,i}(r) - |V_{k,i}(1)| \, r \to 0$ as $r \to 0$, and $\pm V_{k,i}(r) - |V_{k,i}(1)| \, r \le 0$ when $r = 1$, the maximum principle gives

(6.7) $|V_{k,i}(r)| \le r \, |V_{k,i}(1)| \quad \text{for } 0 < r \le 1.$

Case 2.  When γ > 1 , for any k N , let

$\alpha_k := \frac{-(n-1) + \sqrt{(n-1)^2 + 4 [k(k + n - 3) + 2/\gamma]}}{2} > 0.$

Note that α 1 = α , where 𝛼 is defined in (6.1). By a direct computation,

$L_k r^{\alpha_k} = \alpha_k (\alpha_k - 1) r^{\alpha_k - 2} + \Big( n - 2 + \frac{2 r^2}{\varepsilon + r^2} \Big) \alpha_k r^{\alpha_k - 2} - \Big( k(k + n - 3) + \frac{2}{\gamma} \frac{r^2}{\varepsilon + r^2} \Big) r^{\alpha_k - 2} = -\frac{2\varepsilon \big( \alpha_k - \frac{1}{\gamma} \big) r^{\alpha_k - 2}}{\varepsilon + r^2} \quad \text{for } r \in (0, 1),$

where in the second line, we used the equality

(6.8) $\alpha_k^2 + (n-1) \alpha_k - \Big( k(k + n - 3) + \frac{2}{\gamma} \Big) = 0$

following from the definition of α k . By (6.8) again, we have

$\alpha_k - \frac{1}{\gamma} = \alpha_k - \frac{1}{2} \big( \alpha_k^2 + (n-1) \alpha_k - k(k + n - 3) \big) = \frac{1}{2} (\alpha_k + k + n - 3)(k - \alpha_k) > 0,$

since

$\alpha_k < \frac{-(n-1) + \sqrt{(n-1)^2 + 4k^2 + 4(n-1)k + 8 - 8k}}{2} \le \frac{-(n-1) + \sqrt{(n-1+2k)^2}}{2} = k.$

Therefore, $L_k r^{\alpha_k} < 0$ for $r \in (0, 1)$. Since $\pm V_{k,i}(r) - |V_{k,i}(1)| \, r^{\alpha_k} \to 0$ as $r \to 0$, and $\pm V_{k,i}(r) - |V_{k,i}(1)| \, r^{\alpha_k} \le 0$ when $r = 1$, the maximum principle gives

(6.9) $|V_{k,i}(r)| \le r^{\alpha_k} |V_{k,i}(1)| \quad \text{for } 0 < r < 1.$

Combining these two cases, it follows from (6.6), (6.7), and (6.9) that

$\fint_{\partial B_\rho} |v_1(y') - (v_1)_{B_\rho}|^2 \, dS = \sum_{k=1}^\infty \sum_{i=1}^{N(k)} |V_{k,i}(\rho)|^2 \le \rho^{2\hat{\alpha}} \sum_{k=1}^\infty \sum_{i=1}^{N(k)} |V_{k,i}(1)|^2 = \rho^{2\hat{\alpha}} \fint_{\partial B_1} |v_1(y') - (v_1)_{B_1}|^2 \, dS.$

This completes the proof for n 3 .

When $n = 2$, we use $(\bar{v}(r) - \bar{v}(-r))/2$ in place of $V_{1,1}(r)$, and repeat the arguments for estimates (6.7) and (6.9) to obtain

$\Big| \frac{\bar{v}(\rho) - \bar{v}(-\rho)}{2} \Big| \le \rho^{\hat{\alpha}} \Big| \frac{\bar{v}(1) - \bar{v}(-1)}{2} \Big|.$

The lemma is proved. ∎
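The two algebraic facts used in Case 2 of the proof, namely $\alpha_k < k$ and the resulting identity $\alpha_k - \gamma^{-1} = \frac{1}{2}(\alpha_k + k + n - 3)(k - \alpha_k) > 0$, can be spot-checked numerically. A minimal sketch (sample values of $n$, $\gamma > 1$, $k$):

```python
import math

def alpha_k(n: int, gamma: float, k: int) -> float:
    """Positive root of t^2 + (n-1) t - (k(k+n-3) + 2/gamma) = 0."""
    q = k * (k + n - 3) + 2.0 / gamma
    return (-(n - 1.0) + math.sqrt((n - 1.0) ** 2 + 4.0 * q)) / 2.0

for n in (3, 4, 6):
    for gamma in (1.1, 2.0, 50.0):
        for k in (1, 2, 5):
            a = alpha_k(n, gamma, k)
            assert a < k                                    # alpha_k < k
            lhs = a - 1.0 / gamma
            rhs = 0.5 * (a + k + n - 3.0) * (k - a)
            assert abs(lhs - rhs) < 1e-10                   # the identity
            assert lhs > 0.0                                # hence positive
```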

Lemma 6.3

For n 2 , s 1 / 2 , σ > 0 , and ε > 0 , suppose that F , G satisfy (6.3), and v 2 H 0 1 ( B 1 ) satisfies

(6.10) div ( ( ε + | y | 2 ) v 2 ) 2 γ 1 v 2 = div F + G in B 1 R n 1 .

Then we have

(6.11) v 2 L ( B 1 ) C ( F ε , σ , s , B 1 + G ε , σ 1 , s , B 1 ) ,

where C > 0 depends only on 𝑛, 𝜎, and 𝑠, and is in particular independent of 𝜀.

Proof

Without loss of generality, we assume $\|F\|_{\varepsilon, \sigma, s, B_1} + \|G\|_{\varepsilon, \sigma-1, s, B_1} = 1$. When $\varepsilon \ge 1$, we divide equation (6.10) by $\varepsilon$ and obtain

$\operatorname{div}\big( (1 + \varepsilon^{-1} |y'|^2) \nabla v_2 \big) - 2 (\varepsilon \gamma)^{-1} v_2 = \varepsilon^{-1} \operatorname{div} F + \varepsilon^{-1} G \quad \text{in } B_1 \subset \mathbb{R}^{n-1}.$

By assumptions, $|\varepsilon^{-1} F| \le C |y'|^\sigma$ and $|\varepsilon^{-1} G| \le C |y'|^{\sigma - 1}$, so that (6.11) follows from the standard Moser iteration. Therefore, we will focus on the case $0 < \varepsilon < 1$, where we will use the Moser iteration in weighted $L^p$ spaces.

For p 2 , we multiply equation (6.10) with | v 2 | p 2 v 2 and integrate by parts to obtain

(6.12) ( p 1 ) B 1 ( ε + | y | 2 ) | v 2 | 2 | v 2 | p 2 d y + 2 γ 1 B 1 | v 2 | p d y = ( p 1 ) B 1 F v 2 | v 2 | p 2 d y B 1 G v 2 | v 2 | p 2 d y .

By the definition in (6.2),

$|F(y')| \le |y'|^\sigma (\varepsilon + |y'|^2)^{1-s} \|F\|_{\varepsilon, \sigma, s, B_1}, \quad |G(y')| \le |y'|^{\sigma - 1} (\varepsilon + |y'|^2)^{1-s} \|G\|_{\varepsilon, \sigma-1, s, B_1} \quad \text{for } y' \in B_1.$

To estimate the first term on the right-hand side of (6.12), we use Young’s inequality, s 1 / 2 , and ε < 1 ,

( p 1 ) | B 1 F v 2 | v 2 | p 2 d y | ( p 1 ) 4 B 1 ( ε + | y | 2 ) | v 2 | 2 | v 2 | p 2 d y + C ( p 1 ) B 1 | y | 2 σ ( ε + | y | 2 ) 1 2 s | v 2 | p 2 d y ( p 1 ) 4 B 1 ( ε + | y | 2 ) | v 2 | 2 | v 2 | p 2 d y + C ( p 1 ) B 1 | y | 2 σ | v 2 | p 2 d y .

Then, by Hölder’s inequality and σ > 0 ,

(6.13) B 1 | y | 2 σ | v 2 | p 2 d y ( B 1 | y | σ ( n + 1 + 2 ϱ ) ( n 1 + 2 ϱ ) d y ) 2 n + 1 + 2 ϱ × ( B 1 | y | 2 | v 2 | ( p 2 ) n + 1 + 2 ϱ n 1 + 2 ϱ d y ) n 1 + 2 ϱ n + 1 + 2 ϱ ,

where ϱ > 0 is chosen sufficiently small so that

B 1 | y | σ ( n + 1 + 2 ϱ ) / 2 ( n 1 + 2 ϱ ) d y < .

To estimate the last term of (6.12), we have

| B 1 G v 2 | v 2 | p 2 d y | B 1 | y | σ 1 ( ε + | y | 2 ) 1 s | v 2 | p 1 d y = 1 n 1 B 1 | y | σ 1 ( ε + | y | 2 ) 1 s | v 2 | p 1 ( y ) d y = σ 1 n 1 B 1 | y | σ 1 ( ε + | y | 2 ) 1 s | v 2 | p 1 d y 2 ( 1 s ) n 1 B 1 | y | σ + 1 ( ε + | y | 2 ) s | v 2 | p 1 d y p 1 n 1 B 1 | y | σ 1 ( ε + | y | 2 ) 1 s ( y v 2 ) | v 2 | p 3 v 2 d y .

Therefore,

B 1 | y | σ 1 ( ε + | y | 2 ) 1 s | v 2 | p 1 d y
C ( p 1 ) B 1 | y | σ ( ε + | y | 2 ) 1 s | v 2 | | v 2 | p 2 d y
+ C B 1 | y | σ + 1 ( ε + | y | 2 ) s | v 2 | p 1 d y
( p 1 ) 4 B 1 ( ε + | y | 2 ) | v 2 | 2 | v 2 | p 2 d y
+ C ( p 1 ) B 1 | y | 2 σ | v 2 | p 2 d y + C B 1 | y | σ | v 2 | p 1 d y ,
where we used ε < 1 in the second inequality. The first term in the last line can be controlled as in (6.13), and the second term can be estimated similarly as follows:

| B 1 | y | σ | v 2 | p 1 d y | ( B 1 | y | σ ( n + 1 + 2 ϱ ) / 2 ( n 1 + 2 ϱ ) d y ) 2 n + 1 + 2 ϱ × ( B 1 | y | 2 | v 2 | ( p 1 ) n + 1 + 2 ϱ n 1 + 2 ϱ d y ) n 1 + 2 ϱ n + 1 + 2 ϱ .

Hence, from (6.12), we have

(6.14) 4 ( p 1 ) p 2 B 1 | y | 2 | | v 2 | p 2 | 2 d y = ( p 1 ) B 1 | y | 2 | v 2 | 2 | v 2 | p 2 d y C ( p 1 ) v 2 p 2 L n + 1 + 2 ϱ n 1 + 2 ϱ ( B 1 , | y | 2 d y ) + C v 2 p 1 L n + 1 + 2 ϱ n 1 + 2 ϱ ( B 1 , | y | 2 d y ) .

We use the following version of the Caffarelli–Kohn–Nirenberg inequality (see [11]):

(6.15) u L 2 ( n + 1 ) n 1 ( B 1 , | y | 2 d y ) C u L 2 ( B 1 , | y | 2 d y ) for all u H 0 1 ( B 1 , | y | 2 d y ) .

Taking p = 2 in (6.14), we have, by (6.15) with u = | v 2 | and Hölder’s inequality,

(6.16) v 2 L 2 ( n + 1 + 2 ϱ ) n 1 + 2 ϱ ( B 1 , | y | 2 d y ) C .

For p 2 , from (6.14), by (6.15) with u = | v 2 | p 2 and Hölder’s inequality,

v 2 L ( n + 1 ) p n 1 ( B 1 , | y | 2 d y ) p C | v 2 | p 2 L 2 ( B 1 , | y | 2 d y ) 2 C p 2 v 2 L n + 1 + 2 ϱ n 1 + 2 ϱ ( p 2 ) ( B 1 , | y | 2 d y ) p 2 + C p v 2 L n + 1 + 2 ϱ n 1 + 2 ϱ ( p 1 ) ( B 1 , | y | 2 d y ) p 1 max i = 1 , 2 C p i v 2 L n + 1 + 2 ϱ n 1 + 2 ϱ p ( B 1 , | y | 2 d y ) p i .

By Young’s inequality,

v 2 L ( n + 1 ) p n 1 ( B 1 , | y | 2 d y ) max i = 1 , 2 ( C p i ) 1 / p ( p i p v 2 L n + 1 + 2 ϱ n 1 + 2 ϱ p ( B 1 , | y | 2 d y ) + i p ) ( C p 2 ) 1 / p ( v 2 L n + 1 + 2 ϱ n 1 + 2 ϱ p ( B 1 , | y | 2 d y ) + 2 p ) .

For k 0 , let

$p_k = 2 \Big( \frac{n+1}{n-1} \cdot \frac{n-1+2\varrho}{n+1+2\varrho} \Big)^k \frac{n+1+2\varrho}{n-1+2\varrho} \to +\infty \quad \text{as } k \to +\infty.$

Iterating the relations above, we have, by (6.16),

(6.17) v 2 L p k ( B 1 , | y | 2 d y ) i = 0 k 1 ( C p i 2 ) 3 / p i v 2 L p 0 ( B 1 , | y | 2 d y ) + i = 0 k 1 j = i k 1 ( C p j 2 ) 3 / p j 6 p i C v 2 L 2 ( n 1 + 2 ϱ ) n 3 + 2 ϱ ( B 1 , | y | 2 d y ) + C i = 0 k 1 1 p i C ,

where 𝐶 is a positive constant depending on 𝑛 and 𝜎, and is in particular independent of 𝑘. The lemma is concluded by taking k in (6.17). ∎

Now we are in a position to prove Proposition 6.1.

Proof of Proposition 6.1

Without loss of generality, we assume that

F ε , σ , s , B R 0 + G ε , σ 1 , s , B R 0 + v ̄ ( v ̄ ) B R 0 L 2 ( B R 0 ) = 1 .

We denote ω ( ρ ) : = v ̄ ( v ̄ ) B ρ L 2 ( B ρ ) . For 0 < ρ R / 2 R 0 / 2 , we write v ̄ = v 1 + v 2 in B R , where v 2 satisfies

div ( ( ε + | y | 2 ) v 2 ) 2 γ 1 v 2 = div F + G in B R

and v 2 = 0 on B R . Thus v 1 satisfies

div ( ( ε + | y | 2 ) v 1 ) 2 γ 1 v 1 = 0 in B R ,

and v 1 = v ̄ on B R . Since v ̃ 2 ( y ) : = v 2 ( R y ) satisfies

div ( ( R 2 ε + | y | 2 ) v ̃ 2 ) 2 γ 1 v ̃ 2 = div F ̃ + G ̃ in B 1 ,

where F ̃ ( y ) : = R 1 F ( R y ) and G ̃ ( y ) : = G ( R y ) satisfy

F ̃ R 2 ε , σ , s , B 1 = R 1 + σ 2 s F ε , σ , s , B R , G ̃ R 2 ε , σ 1 , s , B 1 = R 1 + σ 2 s G ε , σ 1 , s , B R .

We apply Lemma 6.3 to v ̃ 2 with 𝜀 replaced with R 2 ε to obtain

(6.18) v 2 L ( B R ) C R 1 + σ 2 s .

By Lemma 6.2,

(6.19) ( B ρ | v 1 ( y ) ( v 1 ) B ρ | 2 d S ) 1 2 ( ρ R ) α ̂ ( B R | v 1 ( y ) ( v 1 ) B R | 2 d S ) 1 2 .

Combining (6.19) and (6.18) yields, using v ̄ = v 1 on B R and v ̄ = v 1 + v 2 ,

(6.20) ω ( ρ ) ( B ρ | v 1 ( y ) ( v 1 ) B ρ | 2 d S ) 1 2 + ( B ρ | v 2 ( y ) ( v 2 ) B ρ | 2 d S ) 1 2 ( ρ R ) α ̂ ( B R | v 1 ( y ) ( v 1 ) B R | 2 d S ) 1 2 + 2 v 2 L ( B R ) ( ρ R ) α ̂ ω ( R ) + C R 1 + σ 2 s .

For a positive integer $k$, we take $\rho = 2^{-i-1}$ and $R = 2^{-i}$ in (6.20), and iterate from $i = 0$ to $k - 1$. Using $1 + \sigma - 2s \le \hat{\alpha}$, we have

$\omega(2^{-k}) \le 2^{-k\hat{\alpha}} \omega(1) + C \sum_{i=1}^{k} 2^{-(k-i)\hat{\alpha}} (2^{1-i})^{1+\sigma-2s} \le 2^{-k\hat{\alpha}} \omega(1) + C \, 2^{-k\hat{\alpha}} \, \frac{1 - 2^{k(\hat{\alpha} - 1 - \sigma + 2s)}}{1 - 2^{\hat{\alpha} - 1 - \sigma + 2s}}.$

It follows that

$\omega(2^{-k}) \le 2^{-k\tilde{\alpha}} (\omega(1) + C), \quad \text{where } \tilde{\alpha} = \min\{\hat{\alpha}, 1 + \sigma - 2s\}.$

For any $\rho \in (0, 1/2)$, let $k$ be the integer such that $2^{-k-1} < \rho \le 2^{-k}$. Then $\omega(\rho) \le C \rho^{\tilde{\alpha}}$ for all $\rho \in (0, 1/2)$. Therefore, (6.4) is proved. ∎

Proof of Theorem 1.4

Without loss of generality, we assume that u L ( Ω R 0 ) = 1 . Recall that μ = 1 . We will prove (1.20) and (1.21) with 𝛼 given as in (6.1). We make the change of variables (5.1), and let v ( y ) = u ( x ) . Then 𝑣 satisfies (5.2). Let v ̄ be the vertical average defined as in (5.5). By (1.13) with U 1 = U 2 = 0 ,

(6.21) $\|\nabla \bar{v}\|_{\varepsilon, 0, s_0 + 1, B_{R_0}} \le C,$

where $s_0 = \frac{1}{2}$. On the other hand, since $\partial_\nu u = -\gamma^{-1} u$ on $\Gamma_+$ and $\Gamma_-$, we have, by (1.8), (1.11), and (1.13) with $U_1 = U_2 = 0$, that

$|\partial_n u(x)| \le C \Big| \sum_{i=1}^{n-1} x_i \partial_i u \Big| + C |u| \le C \quad \text{for all } x \in \Gamma_+ \cup \Gamma_-.$

By the harmonicity of n u , estimate (1.11), and the maximum principle, | n u | C in Ω R 0 , and consequently,

(6.22) $|\partial_n v| \le C (\varepsilon + |y'|^2) / \varepsilon \quad \text{in } Q_{R_0, \varepsilon},$

where Q R 0 , ε is defined as in (5.4). On the other hand, e i in (5.3) can be bounded by C | y | 2 + σ . Therefore, by Lemma 5.1, (6.21), and (6.22), v ̄ satisfies equation (5.6) with μ = 1 , F , G satisfying F ε , σ , s 0 , B R 0 + G ε , σ 1 , s 0 , B R 0 C . By (6.22),

(6.23) $|v(y', y_n) - \bar{v}(y')| \le 2\varepsilon \max_{y_n \in (-\varepsilon, \varepsilon)} |\partial_n v(y', y_n)| \le C (\varepsilon + |y'|^2) \quad \text{in } Q_{R_0, \varepsilon}.$

By decreasing $\sigma$ if necessary, we may assume that $1 + \sigma - 2s_0 = \sigma < \hat{\alpha}$, where $\hat{\alpha} = \min\{\alpha, 1\}$. Since $u$ is odd in $x_j$ for some $1 \le j \le n-1$, $\bar{v}$ is also odd in $y_j$. In particular, this implies

( v ̄ ) B R = 0 for all R ( 0 , R 0 ) .

By Proposition 6.1 and (6.23), we have

(6.24) $\|u\|_{L^2(\Omega_{2\varepsilon^{1/2}})} \le C \|v\|_{L^2(Q_{2\varepsilon^{1/2}, \varepsilon})} \le C \big( \|v - \bar{v}\|_{L^2(Q_{2\varepsilon^{1/2}, \varepsilon})} + \|\bar{v}\|_{L^2(Q_{2\varepsilon^{1/2}, \varepsilon})} \big) \le C \varepsilon^{\tilde{\alpha}/2},$

where $\tilde{\alpha} = \min\{\hat{\alpha}, 1 + \sigma - 2s_0\} = \sigma$. By (6.24) and (1.13) with $U_1 = U_2 = 0$, we have

$\|\nabla u\|_{L^\infty(\Omega_{\varepsilon^{1/2}})} \le C \varepsilon^{\frac{\tilde{\alpha} - 1}{2}}.$

Similarly, for any R ( ε 1 / 2 , R 0 / 4 ) , Proposition 6.1 and (6.23) imply

$\|u\|_{L^2(\Omega_{4R} \setminus \Omega_{R/2})} \le C R^{\tilde{\alpha}},$

which implies, by (1.13) with U 1 = U 2 = 0 ,

$\|\nabla u\|_{L^\infty(\Omega_{4R} \setminus \Omega_{R/2})} \le C R^{\tilde{\alpha} - 1} \quad \text{for any } R \in (\varepsilon^{1/2}, R_0/4).$

Therefore, we have improved the upper bound $|\nabla u(x)| \le C (\varepsilon + |x'|^2)^{-s_0}$ to

$|\nabla u(x)| \le C (\varepsilon + |x'|^2)^{\frac{\tilde{\alpha} - 1}{2}},$

where $\frac{\tilde{\alpha} - 1}{2} = \min\big\{ 0, \frac{\alpha - 1}{2}, -s_0 + \frac{\sigma}{2} \big\}$. If $-s_0 + \frac{\sigma}{2} < \min\big\{ 0, \frac{\alpha - 1}{2} \big\}$, we take $s_1 = s_0 - \frac{\sigma}{2}$ and repeat the argument above. We may decrease $\sigma$ if necessary so that $\min\big\{ 0, \frac{\alpha - 1}{2} \big\} \ne -s_0 + \frac{k\sigma}{2}$ for any $k = 1, 2, \dots$. After repeating the argument a finite number of times, we obtain estimates (1.20) or (1.21). ∎
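The bootstrap at the end of the proof terminates after finitely many steps, since each application of Proposition 6.1 lowers the exponent $s_k$ by the fixed amount $\sigma/2$. A small illustration (hypothetical values of $\alpha$, $\sigma$, $s_0$, chosen to be exact binary fractions):

```python
# Hypothetical sample exponents.
alpha, sigma, s0 = 0.75, 0.25, 0.5
target = min(0.0, (alpha - 1.0) / 2.0)

s, steps = s0, 0
while -s + sigma / 2.0 < target:
    s -= sigma / 2.0        # one application of Proposition 6.1
    steps += 1
    assert steps < 100      # the bootstrap terminates
assert -s + sigma / 2.0 >= target
```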

7 Proof of Theorem 1.5

In this section, we prove Theorem 1.5. Without loss of generality, we assume μ = 1 again throughout this section. The ordinary differential operator

(7.1) $L h := h''(r) + \Big( \frac{n-2}{r} + \frac{2r}{\varepsilon + r^2} \Big) h'(r) - \Big( \frac{n-2}{r^2} + \frac{2\gamma^{-1}}{\varepsilon + r^2} \Big) h(r)$

will play an important role in our proof. First, we prove the following ODE lemma.

Lemma 7.1

For $\varepsilon > 0$, $\gamma > 1$, $n \ge 2$, let $L$ be the operator defined as in (7.1). There exists a unique solution $h \in C([0,1]) \cap C^\infty((0,1])$ of

(7.2) L h = 0 , 0 < r < 1 ,

satisfying h ( 0 ) = 0 and h ( 1 ) = 1 . Moreover, there exists a positive constant C ( ε ) , depending only on 𝑛, 𝛾, and 𝜀, such that

(7.3) r < h ( r ) < min { C ( ε ) r , r α } for 0 < r < 1 ,

where 𝛼 is given by (6.1) and ℎ is strictly increasing in [ 0 , 1 ] .

Proof

Because of the singularity of 𝐿 at r = 0 , we will construct the solution ℎ through the following approximating process. For 0 < a < 1 , let h a C 2 ( [ a , 1 ] ) be the solution of L h a = 0 in ( a , 1 ) satisfying h a ( a ) = a and h a ( 1 ) = 1 . By computations similar to those in the proof of Lemma 6.2, we have

$L r = \frac{2r}{\varepsilon + r^2} \Big( 1 - \frac{1}{\gamma} \Big) > 0 \quad \text{for } r \in (0, 1),$
(7.4) $L r^\alpha = -\frac{2\varepsilon \big( \alpha - \frac{1}{\gamma} \big) r^{\alpha - 2}}{\varepsilon + r^2} < 0 \quad \text{for } r \in (0, 1).$
Note that $h_a(a) = a < a^\alpha$ since $\alpha \in (0, 1)$. By the maximum principle and the strong maximum principle, $r < h_a(r) < r^\alpha$ for $a < r < 1$. Sending $a \to 0$ along a subsequence, $h_a \to h$ in $C^2_{\mathrm{loc}}((0, 1])$ for some $h \in C([0,1]) \cap C^\infty((0,1])$ satisfying $r \le h(r) \le r^\alpha$, $L h = 0$ in $(0, 1)$, and $h(0) = 0$. By the strong maximum principle, $r < h(r) < r^\alpha$ for $0 < r < 1$. Next, we show the upper bound $h(r) < C(\varepsilon) r$. Let $v = r - r^{3/2}/2$. By a direct computation,

$\Big| L v + \frac{1}{4} \Big( n - \frac{1}{2} \Big) r^{-\frac{1}{2}} \Big| \le \frac{2}{\varepsilon} \Big( 1 - \frac{1}{\gamma} \Big) r + \frac{1}{\varepsilon} \Big( \frac{3}{2} - \frac{1}{\gamma} \Big) r^{\frac{3}{2}}.$

Hence L v < 0 in ( 0 , r 0 ( ε ) ) for some small r 0 ( ε ) . Recall that L h = 0 and h < r α in ( 0 , r 0 ( ε ) ) , h ( 0 ) = v ( 0 ) = 0 . By the maximum principle, we have

$h \le \frac{h(r_0(\varepsilon))}{v(r_0(\varepsilon))} \, v \le C(\varepsilon) \, r \quad \text{in } (0, r_0(\varepsilon)),$

where C ( ε ) = r 0 α ( ε ) / v ( r 0 ( ε ) ) . This concludes the proof of (7.3).

Now, we show the uniqueness of ℎ. Let g C ( [ 0 , 1 ] ) C ( ( 0 , 1 ] ) be another solution of (7.2) in ( 0 , 1 ) satisfying g ( 0 ) = 0 and g ( 1 ) = 1 . Then w ( r ) : = g ( r ) / h ( r ) satisfies

$(G w')' = 0, \quad 0 < r < 1,$

where G = h 2 r n 2 ( ε + r 2 ) . Therefore, for some constants C 0 and C 1 , we have

$g(r) = h(r) w(r) = h(r) \int_r^1 \frac{C_0}{h^2(s) \, s^{n-2} (\varepsilon + s^2)} \, ds + C_1 h(r), \quad 0 < r < 1.$

By (7.3), we have

$$h(r) \int_r^1 \frac{1}{h^{2}(s)\, s^{n-2}(\varepsilon + s^{2})}\, ds \ge \frac{r}{C(\varepsilon)^{2}} \int_r^1 \frac{1}{s^{n}(\varepsilon + s^{2})}\, ds.$$

When $n = 2$, by L'Hôpital's rule,

$$\frac{r}{C(\varepsilon)^{2}} \int_r^1 \frac{ds}{s^{2}(\varepsilon + s^{2})} = \frac{r}{C(\varepsilon)^{2}\varepsilon} \int_r^1 \Bigl(\frac{1}{s^{2}} - \frac{1}{\varepsilon + s^{2}}\Bigr) ds \to \frac{1}{C(\varepsilon)^{2}\varepsilon} > 0$$

as $r \to 0$. When $n \ge 3$,

$$\frac{r}{C(\varepsilon)^{2}} \int_r^1 \frac{ds}{s^{n}(\varepsilon + s^{2})} \to +\infty$$

as r 0 . Since h ( 0 ) = g ( 0 ) = 0 and h ( 1 ) = g ( 1 ) = 1 , we have C 0 = 0 , C 1 = 1 . Therefore, g h .

Finally, we show that $h$ is strictly increasing in $(0,1)$. If not, there exists an $r \in (0,1)$ such that $h'(r) = 0$ and $h''(r) \le 0$. Since $h(r) > 0$, we have $Lh(r) < 0$, which leads to a contradiction. ∎
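The qualitative picture of Lemma 7.1 can also be checked numerically. The sketch below discretizes the approximating problem $L h_a = 0$, $h_a(a) = a$, $h_a(1) = 1$, by central finite differences and verifies the comparison bounds $r < h_a(r) < r^{\alpha}$ at interior points. The explicit form of $L$ (defined in (7.1), which is not reproduced in this excerpt) and the closed form of $\alpha$ are our reconstructions, so this is an illustration under those assumptions, not the paper's computation:

```python
import math

def solve_h(n=3, gamma=2.0, eps=0.01, a=0.05, N=2000):
    """Finite-difference solve of the approximating problem in Lemma 7.1:
    L h = 0 on (a, 1), h(a) = a, h(1) = 1, where (reconstructed form)
    L h = h'' + ((n-2)/r + 2r/(eps+r^2)) h' - ((n-2)/r^2 + 2/(gamma*(eps+r^2))) h.
    """
    dr = (1.0 - a) / (N + 1)
    r = [a + (i + 1) * dr for i in range(N)]
    lo, di, up, rhs = [], [], [], []
    for ri in r:
        p = (n - 2) / ri + 2.0 * ri / (eps + ri * ri)
        q = (n - 2) / ri ** 2 + 2.0 / (gamma * (eps + ri * ri))
        lo.append(1.0 / dr ** 2 - p / (2.0 * dr))   # coefficient of h_{i-1}
        di.append(-2.0 / dr ** 2 - q)               # coefficient of h_i
        up.append(1.0 / dr ** 2 + p / (2.0 * dr))   # coefficient of h_{i+1}
        rhs.append(0.0)
    rhs[0] -= lo[0] * a       # left boundary value h(a) = a
    rhs[-1] -= up[-1] * 1.0   # right boundary value h(1) = 1
    # Thomas algorithm for the tridiagonal system.
    for i in range(1, N):
        m = lo[i] / di[i - 1]
        di[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    h = [0.0] * N
    h[-1] = rhs[-1] / di[-1]
    for i in range(N - 2, -1, -1):
        h[i] = (rhs[i] - up[i] * h[i + 1]) / di[i]
    return r, h

r, h = solve_h()
alpha = math.sqrt(3.0) - 1.0  # positive root of a^2 + 2a - 2 = 0 (n = 3, gamma = 2)
for ri, hi in zip(r, h):
    if 0.2 < ri < 0.8:
        assert ri < hi < ri ** alpha  # the comparison bounds of Lemma 7.1
```

The comparison functions $r$ and $r^{\alpha}$ are exact discrete sub/supersolutions up to an $O(\Delta r^{2})$ truncation error, so the discrete maximum principle reproduces the sandwich bound away from the endpoints.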

Next, we prove a lower bound of ℎ by constructing a subsolution.

Lemma 7.2

For $\varepsilon > 0$, $\gamma > 1$, and $n \ge 2$, let $h \in C([0,1]) \cap C^{2}((0,1])$ be the solution to (7.2). There exists a positive constant $c$, depending only on $n$ and $\gamma$, such that

(7.5) $$h(r) > \bigl(r^{\alpha} - (c\sqrt{\varepsilon})^{\alpha}\bigr)_{+}, \quad 0 < r < 1,$$

where 𝛼 is given by (6.1).

Proof

Denote $v(r) = \bigl(r^{\alpha} - (c\sqrt{\varepsilon})^{\alpha}\bigr)_{+}$ for some positive constant $c$ to be determined. When $r > c\sqrt{\varepsilon}$, by (7.4), we know

$$Lv = -\frac{2\varepsilon\bigl(\alpha - \gamma^{-1}\bigr)\, r^{\alpha-2}}{\varepsilon + r^{2}} + \Bigl(\frac{n-2}{r^{2}} + \frac{2\gamma^{-1}}{\varepsilon + r^{2}}\Bigr)(c\sqrt{\varepsilon})^{\alpha} \ge \frac{c^{\alpha-2}\varepsilon^{\alpha/2}}{\varepsilon + r^{2}}\Bigl(-2\alpha + \frac{2}{\gamma} + c^{2}\Bigl(n - 2 + \frac{2}{\gamma}\Bigr)\Bigr).$$

Therefore, for any $\gamma > 1$ and $n \ge 2$, we can find a positive constant $c$, sufficiently large so that the last factor is positive, for which $Lv > 0$ when $r > c\sqrt{\varepsilon}$. Then (7.5) follows from the maximum principle. ∎
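For completeness, the constant $c$ can be made explicit. Assuming the lower bound for $Lv$ displayed above, the last factor is positive as soon as $c^{2}$ dominates $2(\alpha - \gamma^{-1})$; the following is a sufficient choice, not claimed to be the one used in the paper:

```latex
% A sufficient (not optimal) choice of c in Lemma 7.2:
-2\alpha + \frac{2}{\gamma} + c^{2}\Bigl(n-2+\frac{2}{\gamma}\Bigr) > 0
\quad\Longleftarrow\quad
c^{2} > \frac{2\bigl(\alpha-\gamma^{-1}\bigr)}{n-2+2\gamma^{-1}},
```

and since $0 < \alpha < 1$, the $\varepsilon$-independent choice $c = \sqrt{2\gamma/\bigl((n-2)\gamma+2\bigr)}$ already suffices, so $c$ depends only on $n$ and $\gamma$.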

Now we are ready to prove Theorem 1.5.

Proof of Theorem 1.5

Without loss of generality, we assume that $\mu = 1$. For general $\mu > 0$, we only need to replace $\varepsilon$ and $\gamma$ in the proof with $\varepsilon/\mu$ and $\mu\gamma$, respectively. We will prove (1.22) for $n \ge 3$ in the following four steps. The proof for $n = 2$ is similar and simpler; a brief remark on the case $n = 2$ is provided at the end.

Step 1. By the elliptic theory, the fact that $\widetilde{\Omega}$ is symmetric in $x_1$, and the fact that $\varphi$ is odd in $x_1$, we know that $u$ is smooth, $|u| \le 5$, and $u$ is odd in $x_1$. Take the change of variables (5.1) in $\Omega_1$, let $v(y) = u(x)$, and let $\bar v$ be the vertical average as in (5.5). By Lemma 5.1, we know that $\bar v$ satisfies equation (5.6),

$$\operatorname{div}\bigl((\varepsilon + |y'|^{2})\nabla \bar v\bigr) - 2\gamma^{-1}\bar v(y') = \operatorname{div} F + G \quad \text{in } B_1' \subset \mathbb{R}^{n-1}.$$

By (1.13) with U 1 = U 2 = 0 and (6.22), we know that

$$|\nabla_{y'} v(y)| \le C(\varepsilon + |y'|^{2})^{1/2}, \qquad |\partial_n v(y)| \le C(\varepsilon + |y'|^{2})\,\varepsilon^{-1} \quad \text{in } Q_{1,\varepsilon},$$

where $Q_{1,\varepsilon}$ is defined in (5.4). Since both $D_1$ and $D_2$ are balls, the $e_i$ from (5.3) can be bounded by $C|y'|^{4}$. Therefore, by (5.7), for $|y'| < 1$, we have

(7.6) $$|F(y')| \le C\bigl(|y'|(\varepsilon + |y'|^{2}) + |y'|^{4}(\varepsilon + |y'|^{2})^{1/2}\bigr), \qquad |G(y')| \le C(\varepsilon + |y'|^{2}).$$

Similar to the proof of Lemma 6.2, we denote by $Y_{k,i}$ a normalized spherical harmonic of degree $k$, so that $\{Y_{k,i}\}_{k,i}$ forms an orthonormal basis of $L^{2}(S^{n-2})$, with $Y_{1,1}$ the one obtained by normalizing $y_1|_{S^{n-2}}$, and we write $y' = (r, \xi)$ in polar coordinates. Since $\bar v$ is odd with respect to $y_1$, and in particular $\bar v(0) = 0$, we have the following decomposition:

(7.7) $$\bar v(y') = V_{1,1}(r)\, Y_{1,1}(\xi) + \sum_{k=2}^{\infty} \sum_{i=1}^{N(k)} V_{k,i}(r)\, Y_{k,i}(\xi), \quad y' \in B_1' \setminus \{0\},$$

where $V_{k,i}(r) = \int_{S^{n-2}} \bar v(r,\xi)\, Y_{k,i}(\xi)\, d\xi$ and $V_{k,i} \in C([0,1)) \cap C^{2}((0,1))$. Since $\bar v(0) = 0$ and $\varepsilon + |y'|^{2}$ is independent of $\xi$, we know that $V_{1,1}$ satisfies $V_{1,1}(0) = 0$ and

$$L V_{1,1} = H(r), \quad 0 < r < 1,$$

where 𝐿 is the differential operator defined in (7.1),

$$H(r) = \int_{S^{n-2}} \frac{(\operatorname{div} F + G)\, Y_{1,1}(\xi)}{\varepsilon + r^{2}}\, d\xi = \int_{S^{n-2}} \frac{\partial_r F_r + \frac{1}{r}\nabla_\xi \cdot F_\xi + G}{\varepsilon + r^{2}}\, Y_{1,1}(\xi)\, d\xi$$
$$= \partial_r\Bigl(\int_{S^{n-2}} \frac{F_r}{\varepsilon + r^{2}}\, Y_{1,1}(\xi)\, d\xi\Bigr) + \int_{S^{n-2}} \Bigl(\frac{2r F_r Y_{1,1}}{(\varepsilon + r^{2})^{2}} - \frac{F_\xi \cdot \nabla_\xi Y_{1,1}}{r(\varepsilon + r^{2})} + \frac{G\, Y_{1,1}}{\varepsilon + r^{2}}\Bigr)\, d\xi$$
$$=: A'(r) + B(r), \quad 0 < r < 1.$$
Hence $A(r), B(r) \in C^{1}([0,1))$ satisfy, in view of (7.6),

(7.8) $$|A(r)| \le C(n,\gamma)\, r, \qquad |B(r)| \le C(n,\gamma), \quad 0 < r < 1.$$

Step 2.  We will prove, for some 𝜀-dependent constant C 1 ( ε ) and positive 𝜀-independent constant C 2 , that

(7.9) $$|V_{1,1}(r) - C_1(\varepsilon)\, h(r)| \le C_2\, r^{1+\alpha}, \quad 0 < r < 1,$$

where $h(r)$ is the solution of (7.2) satisfying $h(0) = 0$ and $h(1) = 1$. We use the method of reduction of order to write down a bounded function $g$ satisfying $Lg = H$ in $(0,1)$, and then establish the estimate $|g(r)| \le C_2 r^{1+\alpha}$.

Let $g = hw$ with

$$w(r) := \int_0^r \frac{1}{h^{2}(s)\, s^{n-2}(\varepsilon + s^{2})} \int_0^s h(\tau)\, \tau^{n-2}(\varepsilon + \tau^{2})\, H(\tau)\, d\tau\, ds, \quad 0 < r < 1.$$

By a direct computation,

$$Lg = L(hw) = h w'' + \Bigl[2h' + \Bigl(\frac{n-2}{r} + \frac{2r}{\varepsilon + r^{2}}\Bigr) h\Bigr] w' = \frac{h}{G}\,(G w')' = H,$$

where $G = h^{2} r^{n-2}(\varepsilon + r^{2})$. By integration by parts, (7.8), and the fact that $h > 0$ and $h' \ge 0$, we can estimate

$$\Bigl|\int_0^s h(\tau)\, \tau^{n-2}(\varepsilon + \tau^{2})\, A'(\tau)\, d\tau\Bigr| + \Bigl|\int_0^s h(\tau)\, \tau^{n-2}(\varepsilon + \tau^{2})\, B(\tau)\, d\tau\Bigr|$$
$$\le \Bigl|\int_0^s \bigl(h(\tau)\, \tau^{n-2}(\varepsilon + \tau^{2})\bigr)'\, A(\tau)\, d\tau\Bigr| + C h(s)\, s^{n-1}(\varepsilon + s^{2})$$
$$\le C s^{n-1}(\varepsilon + s^{2}) \int_0^s h'(\tau)\, d\tau + C h(s)\, s^{n-1}(\varepsilon + s^{2}) \le C h(s)\, s^{n-1}(\varepsilon + s^{2}), \quad 0 < s < 1.$$

Therefore, using (7.3),

$$|g(r)| \le C h(r) \int_0^r \frac{s}{h(s)}\, ds \le C_2\, r^{1+\alpha}, \quad 0 < r < 1.$$

Since $V_{1,1} - g$ is bounded, satisfies $L(V_{1,1} - g) = 0$ in $(0,1)$, and $V_{1,1}(0) - g(0) = 0$, by Lemma 7.1 there exists a constant $C_1(\varepsilon)$ such that $V_{1,1} - g = C_1(\varepsilon)\, h$. Hence (7.9) follows.

Step 3.  We will show that

(7.10) $$C_1(\varepsilon) > \frac{1}{C}$$

for some positive 𝜀-independent constant 𝐶.

By a direct computation, $x_1 + \gamma\, \partial_\nu x_1 = (1 - \gamma)\, x_1$ on $\partial D_1 \cup \partial D_2$. Therefore, $x_1$ is a subsolution of (1.17) in $\{x_1 > 0\}$ and a supersolution of (1.17) in $\{x_1 < 0\}$. Hence $u \ge x_1$ in $\{x_1 \ge 0\}$ and $u \le x_1$ in $\{x_1 \le 0\}$. Then

$$\bar v(y') \ge y_1 \ \text{ when } y_1 \ge 0, \qquad \bar v(y') \le y_1 \ \text{ when } y_1 \le 0.$$

Recall that $Y_{1,1}(\xi)$ is the normalization of $y_1|_{S^{n-2}}$; we have

$$V_{1,1}(r) = \int_{S^{n-2}} \bar v(r,\xi)\, Y_{1,1}(\xi)\, d\xi \ge r, \quad 0 < r < 1.$$

Combining with (7.3) and (7.9), we have

$$r \le V_{1,1}(r) \le C_1(\varepsilon)\, h(r) + \frac{1}{2} r \le C_1(\varepsilon)\, r^{\alpha} + \frac{1}{2} r \quad \text{for all } 0 < r \le r_0,$$

where $r_0 = (1/(2C_2))^{1/\alpha}$. This implies

$$C_1(\varepsilon) \ge \frac{1}{2}\, r_0^{\,1-\alpha}.$$

Step 4.  Completion of the proof of Theorem 1.5. In view of (7.3), (7.9), and (7.10), it follows that

(7.11) $$V_{1,1}(r) \ge \frac{1}{C} h(r) - C_2\, r^{1+\alpha} \ge \frac{1}{2C} h(r), \quad 0 < r < r_1,$$

for some small 𝜀-independent constant r 1 . By (7.5),

(7.12) $$h(2c\sqrt{\varepsilon}) > (2^{\alpha} - 1)\, c^{\alpha}\, \varepsilon^{\alpha/2},$$

where 𝑐 is the constant in Lemma 7.2. By (7.7), (7.11), and (7.12), we have

$$\Bigl(\int_{S^{n-2}} |\bar v(2c\sqrt{\varepsilon}, \xi)|^{2}\, d\xi\Bigr)^{1/2} \ge |V_{1,1}(2c\sqrt{\varepsilon})| \ge \frac{1}{C} h(2c\sqrt{\varepsilon}) \ge \frac{1}{C}\, \varepsilon^{\alpha/2} \quad \text{for all } 0 < \varepsilon < \varepsilon_0$$

for some small positive $\varepsilon_0$ depending only on $n$ and $\gamma$. Then there exists a $\xi_0 \in S^{n-2}$ such that $|\bar v(2c\sqrt{\varepsilon}, \xi_0)| \ge \frac{1}{C}\varepsilon^{\alpha/2}$. Since $\bar v$ is the vertical average of $v$, and recalling that $v(y) = u(x)$, there exists an $x_n \in (-\varepsilon/2 + g(x'), \varepsilon/2 + f(x'))$, with $x' = 2c\sqrt{\varepsilon}\, \xi_0$, such that

(7.13) $$|u(2c\sqrt{\varepsilon}\, \xi_0, x_n)| \ge \frac{1}{C}\, \varepsilon^{\alpha/2}.$$

Estimate (1.22) follows from (7.13) and $u(0) = 0$. This concludes the proof when $n \ge 3$.

When n = 2 , by Lemma 5.1, v ̄ satisfies the equation

$$\bigl((\varepsilon + |y_1|^{2})\, \bar v'\bigr)' - 2\gamma^{-1} \bar v = F' + G \quad \text{in } (-1, 1),$$

where $F$ and $G$ satisfy the same bounds as in (7.6). By using $\bar v$ in place of $V_{1,1}$ and repeating Steps 2–4 with minor modifications, we can establish estimate (1.22) for $n = 2$. ∎


Funding statement: Hongjie Dong was partially supported by the NSF under agreements DMS-2055244 and DMS-2350129. Zhuolun Yang was partially supported by the AMS-Simons Travel Grant and the NSF under agreement DMS-2510251. Hanye Zhu was partially supported by the NSF under agreements DMS-2006372 and DMS-2306726.

A Appendix

Here we prove Lemma 3.1. The proof essentially follows that of [16, Lemma 3.1].

Proof of Lemma 3.1

Following the proof of [16, Lemma 3.1], with (3.6) in place of [16, (3.9)], for $y \in \overline{Q_{2r, r^{2}}}$, we have

$$|D_{y'} x' - I_{(n-1)\times(n-1)}| \le C r^{2}, \qquad |D_{y_n} x'| \le C r, \qquad |D_{y'} x_n| \le C r.$$

Since

$$D_{y_n} x_n = \frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f(y') - \tilde g(y')\bigr),$$

by (3.3), for $y \in \overline{Q_{2r, r^{2}}}$, we also have

$$\frac{1}{C} \le D_{y_n} x_n \le C.$$

Then (a) follows by shrinking r 0 to be sufficiently small.

Since $h(y) = 0$ when $y_n = \pm r^{2}$, $\Phi$ maps the upper and lower boundaries of $Q_{2r, r^{2}}$ onto the upper and lower boundaries of $\Omega_{x_0, 2r}$, respectively. Then (b) follows from the fact that $|h(y)| \le C r^{3}$, after shrinking $r_0$ so that $|h(y)| \le r/10$.

To verify (c), note that 𝑢 is smooth by the classical elliptic theory. By the chain rule,

$$D_{x_k} u(x) = D_{y_i} \tilde u(y)\, D_{x_k} y_i, \qquad D^{2}_{x_k} u(x) = D_{y_i} D_{y_j} \tilde u(y)\, D_{x_k} y_i\, D_{x_k} y_j + D_{y_i} \tilde u(y)\, D^{2}_{x_k} y_i.$$

For $i, j \in \{1, \dots, n\}$, we define

$$a^{ij} := D_x y_i \cdot D_x y_j, \qquad b^{i} := \sum_{k=1}^{n} D^{2}_{x_k} y_i.$$

Then we have

(A.1) $$a := (a^{ij}) = (D\Phi)^{-1}\bigl((D\Phi)^{-1}\bigr)^{T},$$

and u ̃ = u ̃ ( y ) satisfies the equation

(A.2) $$a^{ij} D_{y_i y_j} \tilde u + b^{i} D_{y_i} \tilde u = 0 \quad \text{in } Q_{1.9r, r^{2}}.$$

We then deduce the boundary condition for u ̃ on { y n = ± r 2 } . By the chain rule,

$$D_{y_n} \tilde u(y) = D_y \tilde u(y) \cdot e_n = D_x u(x)\, D_y \Phi\, e_n,$$

where $e_n := (0, \dots, 0, 1)^{T}$. Similar to the proof of [16, Lemma 3.1], using

(A.3) $$f = \tilde f = \tilde f_\kappa, \qquad g = \tilde g = \tilde g_\kappa \quad \text{on } \{y_n = \pm r^{2}\} \cap \overline{Q_{2r, r^{2}}},$$

we know that

(A.4) $$D_y \Phi\, e_n = \Bigl(D_{y_n} h,\ \frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f(y') - \tilde g(y')\bigr)\Bigr)^{T} = \frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f(y') - \tilde g(y')\bigr)\bigl(-D_{x'} \tilde f(y'),\ 1\bigr)^{T}$$

when y n = r 2 . Therefore, u ̃ satisfies

(A.5) $$\tilde u + \gamma\Bigl(\frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f - \tilde g\bigr)\sqrt{1 + |D_{x'} \tilde f|^{2}}\Bigr)^{-1} D_{y_n} \tilde u = U_1 \quad \text{on } \{y_n = r^{2}\} \cap \overline{Q_{1.9r, r^{2}}}.$$

Similarly, u ̃ also satisfies

(A.6) $$\tilde u - \gamma\Bigl(\frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f - \tilde g\bigr)\sqrt{1 + |D_{x'} \tilde g|^{2}}\Bigr)^{-1} D_{y_n} \tilde u = U_2 \quad \text{on } \{y_n = -r^{2}\} \cap \overline{Q_{1.9r, r^{2}}}.$$

By (A.2), (A.5), (A.6), and (A.3), we know that u ̃ is a solution to (3.8).

Next, we prove (3.9). From part (a), we have

$$\frac{1}{C} I \le D_x y = (D\Phi)^{-1} \le C I,$$

which directly implies that $\frac{1}{C} I \le a \le C I$, where $a := (a^{ij})_{n \times n}$. By a proof similar to that of [16, Lemma 3.1], we know that

$$\Bigl|\frac{\partial^{2} y_i}{\partial x_k \partial x_l}\Bigr| \le \frac{C}{r} \quad \text{for } i, k, l \in \{1, 2, \dots, n\}.$$

The last three inequalities directly imply (3.9).

Now, we prove (3.10). Let $a^{-1} =: (c_{ij})_{n \times n}$. By (A.1),

$$a^{-1} = (D\Phi)^{T} D\Phi.$$

When y n = r 2 , we know that

$$D_{y'} x' = I_{n-1}, \qquad D_{y_n} x' = D_{y_n} h, \qquad D_{y'} x_n = D_{y'} \tilde f, \qquad D_{y_n} x_n = \frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f(y') - \tilde g(y')\bigr).$$

Therefore, by (A.4), when $y_n = r^{2}$, for $j \in \{1, 2, \dots, n-1\}$, we have

$$c_{nj} = c_{jn} = \bigl(\det(D\Phi)\bigr)^{-1}\Bigl(D_{y_n} h_j + \frac{1}{2r^{2}}\bigl(\varepsilon + \tilde f(y') - \tilde g(y')\bigr) D_{y_j} \tilde f\Bigr) = 0.$$

Similarly, we also have

$$c_{nj} = c_{jn} = 0 \quad \text{when } y_n = -r^{2}, \text{ for } j \in \{1, 2, \dots, n-1\}.$$

Since a 1 = ( c i j ) , the equalities above imply (3.10).

Finally, we prove (3.11). Using (1.8)–(1.10), and (3.6), one can directly obtain

$$\frac{1}{C} \le F_i \le C, \qquad |D_y F_i| \le \frac{C}{r}.$$

Similar to the proof of [16, Lemma 3.1], we have

(A.7) $$|D_{y_n} D_y \tilde f_\kappa(y)| \le C r, \qquad |D_y^{3} \tilde f_\kappa(y)| \le \frac{C r}{\sqrt{r^{4} - y_n^{2}}}, \qquad |D_{y_n} D_y^{2} \tilde f_\kappa(y)| \le \frac{C r^{2}}{\sqrt{r^{4} - y_n^{2}}}, \qquad |D_{y_n}^{2} D_y \tilde f_\kappa(y)| \le \frac{C r^{3}}{\sqrt{r^{4} - y_n^{2}}}.$$

The inequalities above also hold with g ̃ κ in place of f ̃ κ . Direct computations using (3.6) and (A.7) yield

$$|D_{y_n} F_i| \le C r^{2}, \qquad |D_y^{2} F_i(y)| \le \frac{C r^{2}}{\sqrt{r^{4} - y_n^{2}}}, \qquad |D_{y_n} D_y F_i(y)| \le \frac{C r^{3}}{\sqrt{r^{4} - y_n^{2}}}, \qquad |D_{y_n}^{2} F_i(y)| \le \frac{C r^{4}}{\sqrt{r^{4} - y_n^{2}}}.$$

Thus (3.11) holds. The lemma is proved. ∎

References

[1] H. Ammari, T. Boulier and J. Garnier, Modeling active electrolocation in weakly electric fish, SIAM J. Imaging Sci. 6 (2013), no. 1, 285–321. doi:10.1137/12086858X.

[2] H. Ammari, H. Kang, H. Lee, J. Lee and M. Lim, Optimal estimates for the electric field in two dimensions, J. Math. Pures Appl. (9) 88 (2007), no. 4, 307–324. doi:10.1016/j.matpur.2007.07.005.

[3] H. Ammari, H. Kang and M. Lim, Gradient estimates for solutions to the conductivity problem, Math. Ann. 332 (2005), no. 2, 277–286. doi:10.1007/s00208-004-0626-y.

[4] F. F. T. Araujo and H. M. Rosenberg, The thermal conductivity of epoxy-resin/metal-powder composite materials from 1.7 to 300 K, J. Phys. D 9 (1976), no. 4, Paper No. 665. doi:10.1088/0022-3727/9/4/017.

[5] I. Babuška, B. Andersson, P. J. Smith and K. Levin, Damage analysis of fiber composites. I. Statistical analysis on fiber scale, Comput. Methods Appl. Mech. Engrg. 172 (1999), no. 1–4, 27–77. doi:10.1016/S0045-7825(98)00225-4.

[6] E. S. Bao, Y. Y. Li and B. Yin, Gradient estimates for the perfect conductivity problem, Arch. Ration. Mech. Anal. 193 (2009), no. 1, 195–226. doi:10.1007/s00205-008-0159-8.

[7] E. S. Bao, Y. Y. Li and B. Yin, Gradient estimates for the perfect and insulated conductivity problems with multiple inclusions, Comm. Partial Differential Equations 35 (2010), no. 11, 1982–2006. doi:10.1080/03605300903564000.

[8] Y. Benveniste, Effective thermal conductivity of composites with a thermal contact resistance between the constituents: Nondilute case, J. Appl. Phys. 61 (1987), no. 8, 2840–2843. doi:10.1063/1.337877.

[9] Y. Benveniste and T. Miloh, Neutral inhomogeneities in conduction phenomena, J. Mech. Phys. Solids 47 (1999), no. 9, 1873–1892. doi:10.1016/S0022-5096(98)00127-6.

[10] E. Bonnetier and M. Vogelius, An elliptic regularity result for a composite medium with "touching" fibers of circular cross-section, SIAM J. Math. Anal. 31 (2000), no. 3, 651–677. doi:10.1137/S0036141098333980.

[11] L. Caffarelli, R. Kohn and L. Nirenberg, First order interpolation inequalities with weights, Compos. Math. 53 (1984), no. 3, 259–275.

[12] Y. C. Chiew and E. D. Glandt, The effect of structure on the conductivity of a dispersion, J. Coll. Interface Sci. 94 (1983), no. 1, 90–104. doi:10.1016/0021-9797(83)90238-2.

[13] H. Dong, Y. Li and Z. Yang, Gradient estimates for the insulated conductivity problem: The non-umbilical case, J. Math. Pures Appl. (9) 189 (2024), Article ID 103587. doi:10.1016/j.matpur.2024.06.002.

[14] H. Dong, Y. Li and Z. Yang, Optimal gradient estimates of solutions to the insulated conductivity problem in dimension greater than two, J. Eur. Math. Soc. (JEMS) 27 (2025), no. 8, 3275–3296. doi:10.4171/jems/1432.

[15] H. Dong and Z. Yang, Optimal estimates for transmission problems including relative conductivities with different signs, Adv. Math. 428 (2023), Article ID 109160. doi:10.1016/j.aim.2023.109160.

[16] H. Dong, Z. Yang and H. Zhu, The insulated conductivity problem with 𝑝-Laplacian, Arch. Ration. Mech. Anal. 247 (2023), no. 5, Paper No. 95. doi:10.1007/s00205-023-01926-0.

[17] A. G. Every, Y. Tzou, D. P. H. Hasselman and R. Raj, The effect of particle size on the thermal conductivity of ZnS/diamond composites, Acta Metall. Mater. 40 (1992), no. 1, 123–129. doi:10.1016/0956-7151(92)90205-S.

[18] J. E. Flaherty and J. B. Keller, Elastic behavior of composite media, Comm. Pure Appl. Math. 26 (1973), 565–580. doi:10.1002/cpa.3160260409.

[19] S. Fukushima, Y.-G. Ji, H. Kang and X. Li, Finiteness of the stress in presence of closely located inclusions with imperfect bonding, Math. Ann. 391 (2025), no. 2, 1753–1778. doi:10.1007/s00208-024-02968-9.

[20] D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations of second order, Class. Math., Springer, Berlin 2001. doi:10.1007/978-3-642-61798-0.

[21] Z. Hashin, Extremum principles for elastic heterogeneous media with imperfect interfaces and their application to bounding of effective moduli, J. Mech. Phys. Solids 40 (1992), no. 4, 767–781. doi:10.1016/0022-5096(92)90003-K.

[22] Y.-G. Ji and H. Kang, Spectrum of the Neumann–Poincaré operator and optimal estimates for transmission problems in the presence of two circular inclusions, Int. Math. Res. Not. IMRN 2023 (2023), no. 9, 7638–7685. doi:10.1093/imrn/rnac057.

[23] H. Kang, Quantitative analysis of field concentration in presence of closely located inclusions of high contrast, ICM—International Congress of Mathematicians. Vol. 7. Sections 15–20, European Mathematical Society, Zürich (2023), 5680–5699. doi:10.4171/icm2022/83.

[24] H. Kang and K. Yun, Precise estimates of the field excited by an emitter in presence of closely located inclusions of a bow-tie shape, J. Math. Anal. Appl. 479 (2019), no. 2, 1670–1707. doi:10.1016/j.jmaa.2019.07.018.

[25] J. B. Keller, Conductivity of a medium containing a dense array of perfectly conducting spheres or cylinders or nonconducting cylinders, J. Appl. Phys. 34 (1963), no. 4, 991–993. doi:10.1063/1.1729580.

[26] J. B. Keller, Stresses in narrow regions, J. Appl. Mech. 60 (1993), no. 4, 1054–1056. doi:10.1115/1.2900977.

[27] H. Li and Y. Zhao, Optimal gradient estimates for the insulated conductivity problem with general convex inclusions case, preprint (2024), https://arxiv.org/abs/2404.17201.

[28] Y. Li and L. Nirenberg, Estimates for elliptic systems from composite material, Comm. Pure Appl. Math. 56 (2003), no. 7, 892–925. doi:10.1002/cpa.10079.

[29] Y. Li and Z. Yang, Gradient estimates of solutions to the insulated conductivity problem in dimension greater than two, Math. Ann. 385 (2023), no. 3–4, 1775–1796. doi:10.1007/s00208-022-02368-x.

[30] Y. Y. Li and M. Vogelius, Gradient estimates for solutions to divergence form elliptic equations with discontinuous coefficients, Arch. Ration. Mech. Anal. 153 (2000), no. 2, 91–151. doi:10.1007/s002050000082.

[31] G. M. Lieberman, Oblique derivative problems for elliptic equations, World Scientific, Hackensack 2013. doi:10.1142/8679.

[32] R. Lipton and B. Vernescu, Composites with imperfect interface, Proc. Roy. Soc. London Ser. A 452 (1996), no. 1945, 329–358. doi:10.1098/rspa.1996.0018.

[33] V. Pacheco-Peña, M. Beruete, A. I. Fernández-Domínguez, Y. Luo and M. Navarro-Cía, Description of bow-tie nanoantennas excited by localized emitters using conformal transformation, ACS Photonics 3 (2016), no. 7, 1223–1232. doi:10.1021/acsphotonics.6b00232.

[34] S. Torquato and M. D. Rintoul, Effect of the interface on the properties of composite media, Phys. Rev. Lett. 75 (1995), 4067–4070. doi:10.1103/PhysRevLett.75.4067.

[35] K. Yun, Estimates for electric fields blown up between closely adjacent conductors with arbitrary shape, SIAM J. Appl. Math. 67 (2007), no. 3, 714–730. doi:10.1137/060648817.

[36] K. Yun, Optimal bound on high stresses occurring between stiff fibers with arbitrary shaped cross-sections, J. Math. Anal. Appl. 350 (2009), no. 1, 306–312. doi:10.1016/j.jmaa.2008.09.057.

Received: 2024-12-22
Revised: 2025-08-28
Published Online: 2025-11-05
Published in Print: 2026-01-01

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
