Article Open Access

A system of equations involving the fractional p-Laplacian and doubly critical nonlinearities

  • Mousomi Bhakta, Kanishka Perera, and Firoj Sk
Published/Copyright: September 13, 2023

Abstract

This article deals with the existence of solutions to the following fractional $p$-Laplacian system of equations:

$$(-\Delta_p)^s u = |u|^{p_s^*-2}u + \frac{\gamma\alpha}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } \Omega, \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\gamma\beta}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } \Omega,$$

where $s\in(0,1)$, $p\in(1,\infty)$ with $N>sp$, $\alpha,\beta>1$ are such that $\alpha+\beta=p_s^*:=\frac{Np}{N-sp}$, and $\Omega$ is either $\mathbb{R}^N$ or a smooth bounded domain in $\mathbb{R}^N$. When $\Omega=\mathbb{R}^N$ and $\gamma=1$, we show that any ground state solution of the aforementioned system has the form $(\lambda U,\tau\lambda V)$ for a certain $\tau>0$, where $U$ and $V$ are two positive ground state solutions of $(-\Delta_p)^s u = |u|^{p_s^*-2}u$ in $\mathbb{R}^N$. For all $\gamma>0$, we establish the existence of a positive radial solution to the aforementioned system in balls. When $\Omega=\mathbb{R}^N$, we also establish the existence of positive radial solutions to the aforementioned system in various ranges of $\gamma$.

MSC 2010: 35B09; 35B33; 35E20; 35D30; 35J50; 45K05

1 Introduction

We consider the following fractional $p$-Laplacian system of equations in $\mathbb{R}^N$:

$$(-\Delta_p)^s u = |u|^{p_s^*-2}u + \frac{\alpha}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } \mathbb{R}^N, \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\beta}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } \mathbb{R}^N, \qquad u,v\in\dot{W}^{s,p}(\mathbb{R}^N), \quad (\mathrm{S})$$

where $0<s<1$, $p\in(1,\infty)$, $N>sp$, and $\alpha,\beta>1$ are such that $\alpha+\beta=p_s^*:=\frac{Np}{N-sp}$. Here, $(-\Delta_p)^s$ denotes the fractional $p$-Laplace operator, which can be defined for functions in the Schwartz class $\mathcal{S}(\mathbb{R}^N)$ as follows:

$$(-\Delta_p)^s u(x) := \mathrm{P.V.}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}}\,dy, \qquad x\in\mathbb{R}^N,$$

where P.V. denotes the Cauchy principal value. Consider the homogeneous fractional Sobolev space

$$\dot{W}^{s,p}(\mathbb{R}^N) := \left\{u\in L^{p_s^*}(\mathbb{R}^N) : \int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\,dx\,dy < \infty\right\}.$$

The space $\dot{W}^{s,p}(\mathbb{R}^N)$ is a Banach space with the corresponding Gagliardo norm

$$\|u\|_{\dot{W}^{s,p}(\mathbb{R}^N)} := \left(\int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\,dx\,dy\right)^{\frac{1}{p}}.$$

For simplicity of notation, we write $\|u\|_{\dot{W}^{s,p}}$ instead of $\|u\|_{\dot{W}^{s,p}(\mathbb{R}^N)}$. In the vectorial case, as described in [4], the natural solution space for (S) is the product space $X = \dot{W}^{s,p}(\mathbb{R}^N)\times\dot{W}^{s,p}(\mathbb{R}^N)$ with the norm

$$\|(u,v)\|_X := \left(\|u\|_{\dot{W}^{s,p}(\mathbb{R}^N)}^p + \|v\|_{\dot{W}^{s,p}(\mathbb{R}^N)}^p\right)^{\frac{1}{p}}.$$

Definition 1.1

We say a pair $(u,v)\in X$ is a positive weak solution of the system (S) if $u,v>0$ and, for every $(\phi,\psi)\in X$, it holds that

$$\int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+sp}}\,dx\,dy + \int_{\mathbb{R}^{2N}}\frac{|v(x)-v(y)|^{p-2}(v(x)-v(y))(\psi(x)-\psi(y))}{|x-y|^{N+sp}}\,dx\,dy = \int_{\mathbb{R}^N}|u|^{p_s^*-2}u\,\phi\,dx + \int_{\mathbb{R}^N}|v|^{p_s^*-2}v\,\psi\,dx + \frac{\alpha}{p_s^*}\int_{\mathbb{R}^N}|u|^{\alpha-2}u|v|^{\beta}\phi\,dx + \frac{\beta}{p_s^*}\int_{\mathbb{R}^N}|v|^{\beta-2}v|u|^{\alpha}\psi\,dx.$$

Define

(1.1) $$S = S_{\alpha+\beta} := \inf_{u\in\dot{W}^{s,p}(\mathbb{R}^N),\,u\neq 0}\frac{\|u\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}|u|^{p_s^*}\,dx\right)^{\frac{p}{p_s^*}}}.$$

In the limiting case $p=1$, the sharp constant $S$ was determined in [18, Theorem 4.1] (see also [8, Theorem 4.10]). The relevant extremals are the characteristic functions of balls, exactly as in the local case. For $p>1$, (1.1) is related to the study of the following nonlocal integro-differential equation:

(1.2) $$(-\Delta_p)^s u = S\,u^{p_s^*-1} \ \text{ in } \mathbb{R}^N, \qquad u>0, \qquad u\in\dot{W}^{s,p}(\mathbb{R}^N).$$

In the Hilbertian case $p=2$, it is known from [14, Theorem 1.1] that the best Sobolev constant $S$ is attained by the family of functions

$$U_t(x) = t^{\frac{2s-N}{2}}\left(1+\left|\frac{x-x_0}{t}\right|^{2}\right)^{\frac{2s-N}{2}}, \qquad x_0\in\mathbb{R}^N,\ t>0.$$

Moreover, the family $U_t$ is the only set of minimizers for the best Sobolev constant [11]. However, for $p\neq 2$, the minimizers of $S$ are not yet known, and it is not known whether (1.1) has a unique minimizer. In [9], Brasco et al. conjectured that the optimizers of $S$ in (1.1) are given by

$$U_t(x) = C\,t^{\frac{sp-N}{p}}\left(1+\left|\frac{x-x_0}{t}\right|^{\frac{p}{p-1}}\right)^{\frac{sp-N}{p}}, \qquad x_0\in\mathbb{R}^N,\ t>0,$$

but this remains an open question to date. However, in [9, Theorem 1.1], it was proved that if $U$ is any minimizer of $S$, then $U$ is a constant-sign, radially symmetric, and monotone function with

$$\lim_{|x|\to\infty}|x|^{\frac{N-sp}{p-1}}U(x) = U_{\infty}$$

for some constant $U_{\infty}\in\mathbb{R}\setminus\{0\}$.

For $p=2$, $\alpha=\beta$, and $u=v$, the system (S) reduces to the fractional Laplacian equation with purely critical nonlinearity. In [21], the authors studied existence and convergence properties of least-energy symmetric solutions $u_s$ ($s$ being a varying parameter) in symmetric bounded domains. For the scalar equation, we also refer to [7,23], where existence and multiplicity of solutions for a class of nonlinear elliptic equations with mixed fractional Laplacians were studied.

Peng et al. [25] studied system (S) for $p=2$ and $s=1$ and, among other results, proved uniqueness of the least energy solution. In the local case $s=1$, a variant of system (S) (with $p=2$) appears in various contexts of mathematical physics, e.g., in the theory of Bose-Einstein condensates, nonlinear wave-wave interaction in plasma physics, and nonlinear optics; for more details, see [1,3,26] and the references therein. For systems of elliptic $p$-Laplacian-type equations with weakly coupled nonlinearities, we also cite [19] and the references therein. In the nonlocal case, there are not many articles in which weakly coupled systems of equations have been studied. We refer to [12,13,16,20,22], where Dirichlet systems of equations in bounded domains have been treated. In [20], existence and multiplicity of solutions to systems of equations with critical and concave nonlinearities were studied (see [6,10,24] for similar problems in the case of scalar equations). For nonlocal systems of equations in the entire space $\mathbb{R}^N$, we cite [5,17,27] and the references therein.

For $p=2$ and $s\in(0,1)$, Bhakta et al. [4] studied the following system:

(1.3) $$(-\Delta)^s u = \frac{\alpha}{2_s^*}|u|^{\alpha-2}u|v|^{\beta} + f(x) \ \text{ in } \mathbb{R}^N, \qquad (-\Delta)^s v = \frac{\beta}{2_s^*}|v|^{\beta-2}v|u|^{\alpha} + g(x) \ \text{ in } \mathbb{R}^N, \qquad u,v>0 \ \text{ in } \mathbb{R}^N,$$

where $f,g$ belong to the dual space of $\dot{W}^{s,2}(\mathbb{R}^N)$. Among other results, the authors proved that when $f=0=g$, any ground state solution of (1.3) has the form $(Bw,Cw)$, where $\frac{C}{B}=\sqrt{\frac{\beta}{\alpha}}$ and $w$ is the unique solution of (1.2) (corresponding to $p=2$).

Inspired by the aforementioned works, in this article we generalize some of these results to the fractional $p$-Laplacian setting.

Definition 1.2

  1. We say a weak solution $(u,v)$ of (S) is of synchronized form if $u=\lambda w$, $v=\mu w$ for some constants $\lambda,\mu$ and a common function $w\in\dot{W}^{s,p}(\mathbb{R}^N)$.

  2. We say a weak solution $(u,v)$ of (S) is a ground state solution if $(u,v)$ is a minimizer of $S_{\alpha,\beta}$ (see (1.4)).

Define

(1.4) $$S_{\alpha,\beta} := \inf_{(u,v)\in X,\,(u,v)\neq(0,0)}\frac{\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+|u|^{\alpha}|v|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}}.$$

Suppose that (S) has a positive solution of synchronized form $(\lambda U,\mu U)$ for some $\lambda>0$, $\mu>0$, where $U\in\dot{W}^{s,p}(\mathbb{R}^N)$ is a ground state solution of (1.2). Then it holds that

$$\lambda^{p_s^*-p} + \frac{\alpha}{p_s^*}\mu^{\beta}\lambda^{\alpha-p} = 1 = \mu^{p_s^*-p} + \frac{\beta}{p_s^*}\mu^{\beta-p}\lambda^{\alpha}.$$

Setting $\mu=\tau\lambda$, we obtain $\lambda^{p_s^*-p} = \frac{p_s^*}{p_s^*+\alpha\tau^{\beta}}$, and $\tau$ satisfies

(1.5) $$p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p} = 0.$$

Conversely, if $\tau$ satisfies (1.5), then $(\lambda U,\tau\lambda U)$ solves (S).

Therefore, a natural question arises: are all ground state solutions of (S) of the synchronized form $(\lambda U,\tau\lambda U)$?

If the answer to this question is affirmative, then

$$S_{\alpha,\beta} = \frac{1+\tau^p}{(1+\tau^{\beta}+\tau^{p_s^*})^{\frac{p}{p_s^*}}}\,S.$$

This inspires us to define the following function:

(1.6) $$h(\tau) := \frac{1+\tau^p}{(1+\tau^{\beta}+\tau^{p_s^*})^{\frac{p}{p_s^*}}}.$$

Note that $h(\tau_{\min}) = \min_{\tau\geq 0}h(\tau) \leq 1$.
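To make the roles of $h$ and of equation (1.5) concrete, the following short numerical sketch (with illustrative values $N=3$, $s=1/2$, $p=2$, $\alpha=1.2$, $\beta=1.8$, chosen only for this example and not taken from the article) locates the positive root of (1.5) by bisection and evaluates $h$ there:

```python
# Illustrative parameters (not from the paper): N = 3, s = 0.5, p = 2,
# so p_s^* = Np/(N - sp) = 3; alpha = 1.2, beta = 1.8 gives 1 < beta < p.
N, s, p = 3, 0.5, 2.0
ps = N * p / (N - s * p)               # critical exponent p_s^* = 3
alpha, beta = 1.2, 1.8                 # alpha + beta = p_s^*

def h(t):
    # h(tau) = (1 + tau^p) / (1 + tau^beta + tau^{p_s^*})^{p/p_s^*}
    return (1 + t**p) / (1 + t**beta + t**ps) ** (p / ps)

def g(t):
    # g(tau) from (1.5); its positive roots are the critical points of h.
    return ps + alpha * t**beta - beta * t**(beta - p) - ps * t**(ps - p)

# g < 0 near 0 and g(5) > 0, so bisect for the sign change on [1, 5].
lo, hi = 1.0, 5.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
tau_min = 0.5 * (lo + hi)

print(tau_min)       # the minimizer of h, roughly 2.36 for these values
print(h(tau_min))    # strictly below 1 here
```

For these sample exponents ($1<\beta<p$), the computed $h(\tau_{\min})$ is indeed strictly less than $1$, in line with the case analysis carried out in Section 2.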

We now state the main results of this article.

Theorem 1.3

Let $(u_0,v_0)$ be any positive ground state solution of (S). If one of the following conditions holds:

  1. $1<\beta<p$,

  2. $\beta=p$ and $\alpha<p$,

  3. $\beta>p$ and $\alpha<p$,

then there exists a unique $\tau_{\min}>0$ satisfying

$$h(\tau_{\min}) = \min_{\tau\geq 0}h(\tau) < 1,$$

where $h$ is defined by (1.6). Moreover,

$$(u_0,v_0) = (\lambda U,\tau_{\min}\lambda V),$$

where $U$ and $V$ are two positive ground state solutions of (1.2). Further, $\lambda^{p_s^*-p} = \frac{p_s^*}{p_s^*+\alpha\tau_{\min}^{\beta}}$.

Remark 1.4

Since for $p\neq 2$ the uniqueness of ground state solutions of (1.2) is not yet known, we are not able to conclude whether any ground state solution of (S) is of synchronized form, i.e., of the form $(\lambda U,\tau_{\min}\lambda U)$, or not.

Next, we consider (S) with a perturbation parameter $\gamma>0$; namely, we consider the system

$$(-\Delta_p)^s u = |u|^{p_s^*-2}u + \frac{\alpha\gamma}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } \mathbb{R}^N, \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\beta\gamma}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } \mathbb{R}^N, \qquad u,v\in\dot{W}^{s,p}(\mathbb{R}^N), \quad (\tilde{S}_{\gamma})$$

and prove existence of positive solutions to $(\tilde{S}_{\gamma})$ in various ranges of $\gamma$. The energy functional associated with $(\tilde{S}_{\gamma})$ is given, for $(u,v)\in X$, by

(1.7) $$J(u,v) = \frac{1}{p}\left(\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p\right) - \frac{1}{p_s^*}\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx.$$

We define

(1.8) $$\mathcal{N} = \left\{(u,v)\in X : u\neq 0,\ v\neq 0,\ \|u\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|u|^{p_s^*}+\frac{\alpha\gamma}{p_s^*}|u|^{\alpha}|v|^{\beta}\right)dx,\ \|v\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|v|^{p_s^*}+\frac{\beta\gamma}{p_s^*}|u|^{\alpha}|v|^{\beta}\right)dx\right\}.$$

It is easy to see that $\mathcal{N}\neq\emptyset$ and that any nontrivial solution of $(\tilde{S}_{\gamma})$ belongs to $\mathcal{N}$. Set

$$A := \inf_{(u,v)\in\mathcal{N}}J(u,v).$$

Consider the nonlinear system of algebraic equations

(1.9) $$k^{\frac{p_s^*-p}{p}} + \frac{\alpha\gamma}{p_s^*}k^{\frac{\alpha-p}{p}}\ell^{\frac{\beta}{p}} = 1, \qquad \ell^{\frac{p_s^*-p}{p}} + \frac{\beta\gamma}{p_s^*}\ell^{\frac{\beta-p}{p}}k^{\frac{\alpha}{p}} = 1, \qquad k,\ell>0.$$

Theorem 1.5

Assume that one of the following conditions holds:

  1. $\frac{N}{2s}<p<\frac{N}{s}$, $\alpha,\beta>p$, and

    (1.10) $$0 < \gamma \leq \frac{p_s^*(p_s^*-p)}{p}\min\left\{\frac{1}{\alpha}\left(\frac{\alpha-p}{\beta-p}\right)^{\frac{\beta-p}{p}},\ \frac{1}{\beta}\left(\frac{\beta-p}{\alpha-p}\right)^{\frac{\alpha-p}{p}}\right\}.$$

  2. $\frac{2N}{N+2s}<p<\frac{N}{2s}$, $\alpha,\beta<p$, and

    (1.11) $$\gamma \geq \frac{p_s^*(p_s^*-p)}{p}\max\left\{\frac{1}{\alpha}\left(\frac{p-\beta}{p-\alpha}\right)^{\frac{p-\beta}{p}},\ \frac{1}{\beta}\left(\frac{p-\alpha}{p-\beta}\right)^{\frac{p-\alpha}{p}}\right\}.$$

Then the least energy is $A = \frac{s}{N}(k_0+\ell_0)S^{\frac{N}{sp}}$, and $A$ is attained by $(k_0^{1/p}U,\ell_0^{1/p}U)$, where $U$ is a minimizer of (1.1), $(k_0,\ell_0)$ satisfies (1.9), and

(1.12) $$k_0 = \min\{k : (k,\ell) \text{ satisfies (1.9)}\}.$$
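As a small sanity check on (1.9) (with illustrative values $p=2$, $p_s^*=3$, $\gamma=1$, and the symmetric choice $\alpha=\beta=3/2$, none of which come from the article), note that when $\alpha=\beta$ the two equations in (1.9) coincide for $k=\ell$, which yields the explicit solution $k=\ell=(1+\alpha\gamma/p_s^*)^{-p/(p_s^*-p)}$; the snippet below verifies this numerically:

```python
# Illustrative check of (1.9): in the symmetric case alpha = beta, taking
# k = l makes both equations identical, since k^{(alpha-p)/p} l^{beta/p}
# = k^{(p_s^*-p)/p}; then k^{(ps-p)/p} (1 + alpha*gamma/ps) = 1.
N, s, p, gamma = 3, 0.5, 2.0, 1.0
ps = N * p / (N - s * p)              # p_s^* = 3
alpha = beta = ps / 2                 # alpha + beta = p_s^*

k = l = (1 + alpha * gamma / ps) ** (-p / (ps - p))   # = (3/2)^(-2) = 4/9

F1 = k**((ps - p) / p) + (alpha * gamma / ps) * k**((alpha - p) / p) * l**(beta / p) - 1
F2 = l**((ps - p) / p) + (beta * gamma / ps) * l**((beta - p) / p) * k**(alpha / p) - 1
print(k, F1, F2)    # F1 and F2 vanish up to rounding
```

For asymmetric $\alpha\neq\beta$ the system no longer has this closed form and must be solved numerically, which is where the ordering (1.12) of solutions becomes relevant.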

Theorem 1.6

Assume that $\frac{2N}{N+2s}<p<\frac{N}{2s}$ and $\alpha,\beta<p$. There exists $\gamma_1>0$ such that for any $\gamma\in(0,\gamma_1)$, there exists a solution $(k(\gamma),\ell(\gamma))$ of (1.9) such that $(k(\gamma)^{1/p}U,\ell(\gamma)^{1/p}U)$ is a positive solution of system $(\tilde{S}_{\gamma})$ with $J(k(\gamma)^{1/p}U,\ell(\gamma)^{1/p}U) > \tilde{A}$, where $U$ is a minimizer of (1.1),

$$\tilde{A} = \inf_{(u,v)\in\tilde{\mathcal{N}}}J(u,v),$$

and

$$\tilde{\mathcal{N}} = \left\{(u,v)\in X\setminus\{(0,0)\} : \|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx\right\}.$$

Theorem 1.7

Assume that $\frac{2N}{N+2s}<p<\frac{N}{2s}$ and $\alpha,\beta<p$. Then the following system of equations

(1.13) $$(-\Delta_p)^s u = |u|^{p_s^*-2}u + \frac{\alpha\gamma}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } B_R(0), \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\beta\gamma}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } B_R(0), \qquad u,v\in W_0^{s,p}(B_R(0)),$$

admits a positive radial solution $(u_0,v_0)$.

The rest of the article is organized as follows: in Section 2, we prove Theorem 1.3, and Section 3 deals with the proofs of Theorems 1.5, 1.6, and 1.7.

2 Proof of Theorem 1.3

Lemma 2.1

Suppose $\alpha,\beta>1$ are such that $\alpha+\beta=p_s^*$. Then

  1. $S_{\alpha,\beta} = h(\tau_{\min})\,S$.

  2. $S_{\alpha,\beta}$ has the minimizer $(U,\tau_{\min}U)$, where $U$ is a ground state solution of (1.2) and $\tau_{\min}$ satisfies

    $$\tau^{p-1}\left(p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p}\right) = 0.$$

Proof

Let $\{(u_n,v_n)\}$ be a minimizing sequence in $X$ for $S_{\alpha,\beta}$. Choose $\tau_n>0$ such that $\|v_n\|_{L^{p_s^*}(\mathbb{R}^N)} = \tau_n\|u_n\|_{L^{p_s^*}(\mathbb{R}^N)}$, and set $w_n = v_n/\tau_n$. Then $\|u_n\|_{L^{p_s^*}(\mathbb{R}^N)} = \|w_n\|_{L^{p_s^*}(\mathbb{R}^N)}$, and applying Young's inequality,

$$\int_{\mathbb{R}^N}|u_n|^{\alpha}|w_n|^{\beta}\,dx \leq \frac{\alpha}{p_s^*}\int_{\mathbb{R}^N}|u_n|^{p_s^*}\,dx + \frac{\beta}{p_s^*}\int_{\mathbb{R}^N}|w_n|^{p_s^*}\,dx = \int_{\mathbb{R}^N}|u_n|^{p_s^*}\,dx = \int_{\mathbb{R}^N}|w_n|^{p_s^*}\,dx.$$

Therefore,

$$S_{\alpha,\beta}+o(1) = \frac{\|u_n\|_{\dot{W}^{s,p}}^p+\|v_n\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u_n|^{p_s^*}+|v_n|^{p_s^*}+|u_n|^{\alpha}|v_n|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}} = \frac{\|u_n\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u_n|^{p_s^*}+\tau_n^{p_s^*}|u_n|^{p_s^*}+\tau_n^{\beta}|u_n|^{\alpha}|w_n|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}} + \frac{\tau_n^p\|w_n\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u_n|^{p_s^*}+\tau_n^{p_s^*}|u_n|^{p_s^*}+\tau_n^{\beta}|u_n|^{\alpha}|w_n|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}} \geq \frac{1}{(1+\tau_n^{\beta}+\tau_n^{p_s^*})^{\frac{p}{p_s^*}}}\left[\frac{\|u_n\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}|u_n|^{p_s^*}\,dx\right)^{\frac{p}{p_s^*}}} + \frac{\tau_n^p\|w_n\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}|w_n|^{p_s^*}\,dx\right)^{\frac{p}{p_s^*}}}\right] \geq \frac{1+\tau_n^p}{(1+\tau_n^{\beta}+\tau_n^{p_s^*})^{\frac{p}{p_s^*}}}\,S \geq \min_{\tau>0}h(\tau)\,S.$$

Thus, letting $n\to\infty$, we obtain $S_{\alpha,\beta} \geq h(\tau_{\min})\,S$. For the reverse inequality, we choose $u=U$, $v=\tau_{\min}U$ to obtain $S_{\alpha,\beta} \leq h(\tau_{\min})\,S$. In Lemma 2.2, we will show that the point $\tau_{\min}$ exists. This proves (i).

(ii) Taking $(u,v) = (U,\tau_{\min}U)$, a simple computation yields

$$\frac{\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+|u|^{\alpha}|v|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}} = h(\tau_{\min})\,S.$$

By (i), we infer that $(U,\tau_{\min}U)$ is a minimizer of $S_{\alpha,\beta}$. Further, since $\tau_{\min}$ is a critical point of $h$, computing $h'(\tau_{\min})=0$ yields that $\tau_{\min}$ satisfies

$$\tau^{p-1}\left(p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p}\right) = 0. \qquad\square$$

We observe from (1.6) that $h(0)=1$ and $\lim_{\tau\to\infty}h(\tau)=1$. Therefore, to ensure that $\tau_{\min}$ exists (i.e., that the minimum of $h$ does not escape to infinity), is uniquely defined, and satisfies $\tau_{\min}>0$, we need to investigate the solvability of the following equation:

(2.1) $$g(\tau) := p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p} = 0.$$

Lemma 2.2

Let $\alpha,\beta>1$ with $\alpha+\beta=p_s^*$. Then (2.1) always has at least one root $\tau>0$, and for any root $\tau>0$, problem (S) has the positive solution $(\lambda U,\mu U)$, where

$$\mu = \tau\lambda, \qquad \lambda^{p_s^*-p} = \frac{p_s^*}{p_s^*+\alpha\tau^{\beta}}.$$

Moreover, if one of the following conditions holds:

  1. $1<\beta<p$,

  2. $\beta=p$ and $\alpha<p$,

  3. $\beta>p$ and $\alpha<p$,

then $\tau_{\min}>0$ and $h(\tau_{\min})<1$. In all other cases, $\tau_{\min}=0$.

Proof

Clearly, if $\tau>0$ solves

$$(p_s^*+\alpha\tau^{\beta})\lambda^{p_s^*-p} = p_s^*, \qquad (p_s^*\tau^{p_s^*-p}+\beta\tau^{\beta-p})\lambda^{p_s^*-p} = p_s^*,$$

then $(\lambda U,\mu U)$ with $\mu=\tau\lambda$ solves (S). Thus, to prove the required result, it is enough to show that (2.1) has positive roots $\tau$, which we discuss in the following cases.

Case 1: $1<\beta<p$.

In this case, $\lim_{\tau\to 0^+}g(\tau) = -\infty$.

Now, if $\alpha\geq p$, then $g(1) = \alpha-\beta > 0$. Thus, there exists $\tau\in(0,1)$ such that $g(\tau)=0$.

If $1<\alpha<p$, then we have $p_s^*-p < p_s^*-\alpha = \beta$, and consequently, $\lim_{\tau\to\infty}g(\tau) = \infty$. Thus, there exists $\tau>0$ such that $g(\tau)=0$.

Also, by direct computation, we obtain

$$h'(\tau) = f(\tau)\,g(\tau), \qquad \text{where } f(\tau) = \frac{p\,\tau^{p-1}}{p_s^*\left(1+\tau^{\beta}+\tau^{p_s^*}\right)^{\frac{p}{p_s^*}+1}}.$$

Thus, $f(\tau)>0$ for all $\tau>0$ and $f(0)=0$. Together with the fact that $\lim_{\tau\to 0^+}g(\tau)=-\infty$, this implies $h'(\tau)<0$ on $(0,\varepsilon)$ for some $\varepsilon>0$; that is, $h$ is decreasing near $0$. Combining this with the facts that $h(0)=1$ and $\lim_{\tau\to\infty}h(\tau)=1$, we conclude that there exists a point $\tau_{\min}\in(0,\infty)$ such that $\min_{\tau\geq 0}h(\tau) = h(\tau_{\min}) < 1$, and this holds for all $\alpha>1$.
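The factorization $h'(\tau)=f(\tau)g(\tau)$ can also be checked numerically. The following sketch (with illustrative parameters $p=2$, $p_s^*=3$, $\alpha=1.2$, $\beta=1.8$, which are not taken from the article) compares the closed form against a central finite difference of $h$:

```python
# Finite-difference check of h'(tau) = f(tau) g(tau); all parameter
# values below are illustrative choices, not fixed by the paper.
p, ps = 2.0, 3.0
alpha, beta = 1.2, 1.8                # alpha + beta = p_s^*

def h(t):
    return (1 + t**p) / (1 + t**beta + t**ps) ** (p / ps)

def g(t):
    return ps + alpha * t**beta - beta * t**(beta - p) - ps * t**(ps - p)

def f(t):
    # f(tau) = p tau^{p-1} / (p_s^* (1 + tau^beta + tau^{p_s^*})^{p/p_s^* + 1})
    return p * t**(p - 1) / (ps * (1 + t**beta + t**ps) ** (p / ps + 1))

eps = 1e-6
errors = [abs((h(t + eps) - h(t - eps)) / (2 * eps) - f(t) * g(t))
          for t in (0.3, 0.8, 1.7)]
print(max(errors))   # tiny residual: the closed form matches the derivative
```

The residual is dominated by the finite-difference truncation error, so it should be many orders of magnitude below the sampled values of $h'$.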

Case 2: $\beta=p$.

In this case, $g$ becomes $g(\tau) = \alpha(1+\tau^p) - p_s^*\tau^{\alpha}$. Hence, $g(0)=\alpha>0$ and $g(1)=\alpha-p$.

(i) If $\alpha=p$, then $N=2sp$ and $g(\tau) = p - p\tau^p$. Thus, $g$ has the unique root $\tau_1=1$. Also note that $h$ is increasing near $0$. Hence, $\tau_1$ is the maximum point of $h$ with $h(\tau_1)>1$. In this case, $\min_{\tau\geq 0}h(\tau) = h(\tau_{\min}) = h(0)$.

(ii) If $1<\alpha<p$, then $2sp<N<sp(p+1)$. Observe that $g(0)=\alpha>0$, $g(1)=\alpha-p<0$, and $\lim_{\tau\to\infty}g(\tau)=+\infty$. Also note that $g$ is decreasing on $\left(0,\left(\frac{p_s^*}{p}\right)^{\frac{1}{p-\alpha}}\right)$ and increasing on $\left(\left(\frac{p_s^*}{p}\right)^{\frac{1}{p-\alpha}},\infty\right)$. Therefore, $g$ has exactly one critical point $\left(\frac{p_s^*}{p}\right)^{\frac{1}{p-\alpha}}$ and two roots $\tau_i$ ($i=1,2$) with $\tau_1\in\left(0,\left(\frac{p_s^*}{p}\right)^{\frac{1}{p-\alpha}}\right)$ and $\tau_2\in\left(\left(\frac{p_s^*}{p}\right)^{\frac{1}{p-\alpha}},\infty\right)$.

Further, note that in this case $h$ is increasing near $0$, so the first positive critical point of $h$, i.e., $\tau_1$, is a local maximum of $h$ with $h(\tau_1)>1$. Moreover, since $\lim_{\tau\to\infty}h(\tau)=1$, the second root of $g$, i.e., $\tau_2$, is the second and last critical point of $h$, and $h(\tau_2)<1$. Therefore, in this case, $\tau_{\min}=\tau_2>0$ is the minimum point of $h$ with $h(\tau_{\min})<1$.
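Case 2(ii) can be illustrated numerically as well. In the sketch below, we take the illustrative values $N=2$, $s=3/7$, $p=2$ (so $p_s^*=7/2$), $\beta=p$, and $\alpha=3/2<p$; these are example values only, not ones used in the article. Bisection then produces the two roots $\tau_1<\tau_2$ of $g$:

```python
# Illustrative parameters (not from the paper): N = 2, s = 3/7, p = 2,
# so p_s^* = Np/(N - sp) = 7/2; beta = p = 2 and alpha = 3/2 < p,
# i.e., the setting of Case 2(ii).
N, s, p = 2, 3.0 / 7.0, 2.0
ps = N * p / (N - s * p)           # p_s^* = 3.5
alpha, beta = ps - p, p            # alpha = 1.5, beta = 2

def g(t):
    # For beta = p, g reduces to alpha*(1 + t^p) - p_s^* * t^alpha.
    return alpha * (1 + t**p) - ps * t**alpha

def h(t):
    return (1 + t**p) / (1 + t**beta + t**ps) ** (p / ps)

def bisect(f, lo, hi, n=200):
    # plain bisection; assumes a single sign change on [lo, hi]
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

tau1 = bisect(g, 0.0, 1.0)     # g(0) = alpha > 0, g(1) = alpha - p < 0
tau2 = bisect(g, 1.0, 10.0)    # g(1) < 0 and g(10) > 0
print(tau1, tau2)              # two roots, as claimed in Case 2(ii)
print(h(tau1) > 1, h(tau2) < 1)
```

For these values, $h(\tau_1)>1$ (the local maximum) and $h(\tau_2)<1$ (the minimizer), matching the qualitative picture in the proof.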

(iii) If $\alpha>p$, then $sp<N<2sp$ and $g(0)>0$. We see that $g$ is increasing on $\left(0,\left(\frac{p}{p_s^*}\right)^{\frac{1}{\alpha-p}}\right)$ and decreasing on $\left(\left(\frac{p}{p_s^*}\right)^{\frac{1}{\alpha-p}},\infty\right)$. Together with the fact that $\lim_{\tau\to\infty}g(\tau)=-\infty$, this implies that there exists a unique $\tau^*>0$ such that $g(\tau^*)=0$.

Since in this case $h$ is increasing near $0$, $h$ attains its maximum at $\tau^*$ with $h(\tau^*)>1$. Hence, $h$ has no other critical point, and therefore $h(\tau_{\min}) = h(0)$.

Case 3: $\beta>p$.

If $1<\alpha\leq p$, then $g(1) = \alpha-\beta < 0$; since $g(0)>0$, there is a $\tau\in(0,1)$ such that $g(\tau)=0$. If $\alpha>p$ and $\alpha>\beta$, then $g(1)=\alpha-\beta>0$ and $\lim_{\tau\to\infty}g(\tau)=-\infty$; thus, there exists $\tau\in(1,\infty)$ such that $g(\tau)=0$. If $\alpha>p$ and $\alpha\leq\beta$, then $g(1)=\alpha-\beta\leq 0$; as $g(0)>0$, there exists $\tau\in(0,1]$ such that $g(\tau)=0$. Next, we analyze $\tau_{\min}$ in Case 3 in the following three subcases.

(i) $\beta>p$ and $\alpha>p$.

Observe that in this case we have

(2.2) $$\beta < p_s^*-p \quad \text{and} \quad \alpha < p_s^*-p.$$

Hence, without loss of generality, we can assume $\alpha\geq\beta$.

Claim 1: $g(\tau)>0$ for $\tau\in[0,1)$. Indeed, by (2.2), $\tau\in[0,1)$ implies $\tau^{\alpha},\tau^{\beta}>\tau^{p_s^*-p}$. Therefore,

$$g(\tau) > p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{\beta} = p_s^* + (\alpha-p_s^*)\tau^{\beta} - \beta\tau^{\beta-p} > \alpha - \beta\tau^{\beta-p} > 0,$$

where the second inequality uses $\alpha<p_s^*$ and $\tau^{\beta}<1$, and the last inequality uses the fact that $\tau^{\beta-p}<1\leq\frac{\alpha}{\beta}$. This proves Claim 1.

Claim 2: $g$ is monotonically decreasing for $\tau\geq 1$. Indeed, $\tau\geq 1$ implies $\tau^{\alpha}\geq\tau^{p}$. Therefore, using (2.2), we have

$$g'(\tau) = \tau^{\beta-p-1}\left[\alpha\beta\tau^{p} - p_s^*(p_s^*-p)\tau^{\alpha} - \beta(\beta-p)\right] \leq \tau^{\beta-p-1}\left[\left(\alpha\beta - p_s^*(p_s^*-p)\right)\tau^{\alpha} - \beta(\beta-p)\right] \leq \tau^{\beta-p-1}\left[(p_s^*-p)(\beta-p_s^*)\tau^{\alpha} - \beta(\beta-p)\right] < 0.$$

This proves Claim 2. Also observe that $g(1)\geq 0$ and $g(\tau)\to-\infty$ as $\tau\to\infty$. Combining these facts with Claims 1 and 2 proves that $g$ has only one root, say $\tau^*$, in $(0,\infty)$, which in turn implies that $h$ has only one critical point $\tau^*$ in $(0,\infty)$. Since $\beta>p$ implies that $h$ is increasing near $0$, $h$ attains its maximum at $\tau^*$ with $h(\tau^*)>1$. Combining this with $\lim_{\tau\to\infty}h(\tau)=1$ proves that $h(\tau_{\min}) = h(0) = 1$, i.e., $\tau_{\min}=0$.

(ii) $\beta>p$ and $\alpha<p$.

In this case, $g(0)>0$, $g(1)<0$, and we claim that $g$ is strictly decreasing on $(0,1)$. Indeed, $\alpha<p$ implies $\tau^p<\tau^{\alpha}$ for $\tau\in(0,1)$, and $\beta>p$ implies $\alpha<p_s^*-p$. Therefore,

$$g'(\tau) = \tau^{\beta-p-1}\left[\alpha\beta\tau^{p} - p_s^*(p_s^*-p)\tau^{\alpha} - \beta(\beta-p)\right] < \tau^{\beta-p-1}\left[(p_s^*-p)(\beta-p_s^*)\tau^{\alpha} - \beta(\beta-p)\right] < 0.$$

Claim: $g$ has only one critical point in $(1,\infty)$. Indeed,

$$g'(\tau) = \tau^{\beta-p-1}g_1(\tau), \qquad \text{where } g_1(\tau) := \alpha\beta\tau^{p} - p_s^*(p_s^*-p)\tau^{\alpha} - \beta(\beta-p).$$

Thus, to prove that $g$ has only one critical point in $(1,\infty)$, it is enough to show that $g_1$ has only one root in $(1,\infty)$. Observe that $g_1(0)<0$, $\lim_{\tau\to\infty}g_1(\tau)=\infty$, and a straightforward computation yields that $g_1$ is decreasing on $\left(0,\left(\frac{p_s^*(p_s^*-p)}{p\beta}\right)^{\frac{1}{p-\alpha}}\right)$ and increasing on $\left(\left(\frac{p_s^*(p_s^*-p)}{p\beta}\right)^{\frac{1}{p-\alpha}},\infty\right)$. Thus, $g_1$ has only one root, and the claim follows. Next, we observe that $\alpha<p$ implies $\beta>p_s^*-p$, and therefore $\lim_{\tau\to\infty}g(\tau)=\infty$.

Combining all the aforementioned observations and the claim, it follows that $g$ has only one critical point in $(0,\infty)$ and two roots $\tau_1,\tau_2$ with $\tau_1\in(0,1)$ and $\tau_2\in(1,\infty)$. Hence, $h$ has exactly two critical points $\tau_1,\tau_2$. Since $h$ is increasing near $0$, the first positive critical point of $h$, i.e., $\tau_1$, is a local maximum of $h$ with $h(\tau_1)>1$; since $\lim_{\tau\to\infty}h(\tau)=1$, at the second critical point $\tau_2$ we have $h(\tau_2)<1$. Therefore, in this case, $\tau_{\min}=\tau_2>0$ is the minimum point of $h$ with $h(\tau_{\min})<1$.

(iii) $\beta>p$, $\alpha=p$.

In this case, $g(0)>0$, and $\alpha=p$ implies $\beta=p_s^*-p$. Therefore,

$$g'(\tau) = \tau^{\beta-p-1}\left[\alpha\beta\tau^{p} - p_s^*(p_s^*-p)\tau^{\alpha} - \beta(\beta-p)\right] = \tau^{\beta-p-1}\left[(\alpha-p_s^*)\beta\tau^{\alpha} - \beta(\beta-p)\right] < 0,$$

i.e., $g$ is a strictly decreasing function. Also observe that $\lim_{\tau\to\infty}g(\tau)=-\infty$. Hence, $g$ has only one root $\tau^*$ in $(0,\infty)$, i.e., $h$ has only one critical point $\tau^*$ in $(0,\infty)$. Since $\beta>p$ implies that $h$ is increasing near $0$, $h$ attains its maximum at $\tau^*$ with $h(\tau^*)>1$. Combining this with $\lim_{\tau\to\infty}h(\tau)=1$ proves that $h(\tau_{\min}) = h(0) = 1$, i.e., $\tau_{\min}=0$. $\square$

To prove Theorem 1.3, we next introduce an auxiliary system of equations with a positive parameter $\eta$:

$$(-\Delta_p)^s u = \eta|u|^{p_s^*-2}u + \frac{\alpha}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } \mathbb{R}^N, \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\beta}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } \mathbb{R}^N, \qquad u,v\in\dot{W}^{s,p}(\mathbb{R}^N). \quad (S_{\eta})$$

We define the following minimization problem associated with $(S_{\eta})$:

$$S_{\eta,\alpha,\beta} := \inf_{(u,v)\in X,\,(u,v)\neq 0}\frac{\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(\eta|u|^{p_s^*}+|v|^{p_s^*}+|u|^{\alpha}|v|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}}.$$

Similarly, for $\tau>0$, we define

$$f_{\eta}(\tau) := \frac{1+\tau^p}{(\eta+\tau^{\beta}+\tau^{p_s^*})^{\frac{p}{p_s^*}}}, \qquad f_{\eta}(\tau_{\min}^*) = \min_{\tau\geq 0}f_{\eta}(\tau).$$

Proceeding as in the proof of Lemma 2.2, we find $\varepsilon\in(0,1)$ small such that $\tau_{\min}^*(\eta)$, $\lambda^*(\eta)$, and $\mu^*(\eta)$ are unique for $\eta\in(1-\varepsilon,1+\varepsilon)$, and $\tau_{\min}^*(\eta)$ satisfies

$$\tau^{p-1}\left(\eta p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p}\right) = 0.$$

Moreover, $\tau_{\min}^*(\eta)$, $\lambda^*(\eta)$, and $\mu^*(\eta)$ are $C^1$ in $\eta\in(1-\varepsilon,1+\varepsilon)$ for $\varepsilon>0$ small. Indeed, denote

$$F(\eta,\tau) = \eta p_s^* + \alpha\tau^{\beta} - \beta\tau^{\beta-p} - p_s^*\tau^{p_s^*-p}.$$

Then,

$$\frac{\partial F}{\partial\tau} = \tau^{\beta-p-1}\left[\alpha\beta\tau^{p} - p_s^*(p_s^*-p)\tau^{\alpha} - \beta(\beta-p)\right].$$

Since $\tau_{\min}$ is the minimum point of $h$, a direct computation yields $g(\tau_{\min})=0$ and $g'(\tau_{\min})>0$. Therefore, $F(1,\tau_{\min})=0$ and $\frac{\partial F}{\partial\tau}(1,\tau_{\min})>0$. Consequently, by the implicit function theorem, we obtain that $\tau_{\min}^*(\eta)$, $\lambda^*(\eta)$, and $\mu^*(\eta)$ are $C^1$ for $\eta\in(1-\varepsilon,1+\varepsilon)$.

Proof of Theorem 1.3

Let $(u_0,v_0)$ be a ground state solution of (S). First, we claim that

(2.3) $$\int_{\mathbb{R}^N}u_0^{p_s^*}\,dx = \lambda^{p_s^*}\int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

To prove this, we define the following min-max problem associated with $(S_{\eta})$:

$$B(\eta) := \inf_{(u,v)\in X\setminus\{0\}}\max_{t>0}E_{\eta}(tu,tv),$$

where

$$E_{\eta}(u,v) := \frac{1}{p}\left(\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p\right) - \frac{1}{p_s^*}\int_{\mathbb{R}^N}\left(\eta(u^+)^{p_s^*}+(v^+)^{p_s^*}+(u^+)^{\alpha}(v^+)^{\beta}\right)dx.$$

Observe that there exists $t(\eta)>0$ such that $\max_{t>0}E_{\eta}(tu_0,tv_0) = E_{\eta}(t(\eta)u_0,t(\eta)v_0)$; moreover, $t(\eta)$ satisfies $H(\eta,t(\eta))=0$, where $H(\eta,t) = t^{p_s^*-p}(\eta G+D) - C$ with

$$C := \|u_0\|_{\dot{W}^{s,p}}^p+\|v_0\|_{\dot{W}^{s,p}}^p, \qquad D := \int_{\mathbb{R}^N}(v_0^{p_s^*}+u_0^{\alpha}v_0^{\beta})\,dx, \qquad G := \int_{\mathbb{R}^N}u_0^{p_s^*}\,dx.$$

As $(u_0,v_0)$ is a least energy solution of (S), we have

$$H(1,1)=0, \qquad \frac{\partial H}{\partial t}(1,1)>0, \qquad \text{and} \qquad H(\eta,t(\eta))=0.$$

Thus, by the implicit function theorem, there exists $\varepsilon>0$ such that $t(\eta):(1-\varepsilon,1+\varepsilon)\to\mathbb{R}$ is $C^1$ and

$$t'(\eta) = -\left.\frac{H_{\eta}}{H_t}\right|_{\eta=1,\,t=1} = -\frac{G}{(p_s^*-p)(G+D)}.$$

By Taylor expansion, we also have $t(\eta) = 1 + t'(1)(\eta-1) + O(|\eta-1|^2)$, and thus

$$t^p(\eta) = 1 + p\,t'(1)(\eta-1) + O(|\eta-1|^2).$$

Now, $H(1,1)=0$ implies $C = G+D$, and $H(\eta,t(\eta))=0$ implies $C = t(\eta)^{p_s^*-p}(\eta G+D)$. Therefore, by the definition of $B(\eta)$ and the aforementioned observations, we obtain

(2.4) $$B(\eta) \leq E_{\eta}(t(\eta)u_0,t(\eta)v_0) = \frac{t(\eta)^p}{p}C - \frac{t(\eta)^{p_s^*}}{p_s^*}(\eta G+D) = t(\eta)^p\,\frac{s}{N}\,C = t(\eta)^p B(1) = B(1) - \frac{p\,G\,B(1)}{(p_s^*-p)(G+D)}(\eta-1) + O(|\eta-1|^2).$$

Now, let us compute $B(1)$ from the definition:

$$B(1) = \inf_{(u,v)\in X}E_1(t_{\max}u,t_{\max}v), \qquad \text{where } t_{\max}^{p_s^*-p} = \frac{\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p}{\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+|u|^{\alpha}|v|^{\beta})\,dx},$$

so that

$$B(1) = \frac{s}{N}\inf_{(u,v)\in X}\left[\frac{\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+|u|^{\alpha}|v|^{\beta})\,dx\right)^{\frac{p}{p_s^*}}}\right]^{\frac{p_s^*}{p_s^*-p}} = \frac{s}{N}\,S_{\alpha,\beta}^{\frac{p_s^*}{p_s^*-p}} = \frac{s}{N}\left[\frac{\|u_0\|_{\dot{W}^{s,p}}^p+\|v_0\|_{\dot{W}^{s,p}}^p}{\left(\int_{\mathbb{R}^N}(u_0^{p_s^*}+v_0^{p_s^*}+u_0^{\alpha}v_0^{\beta})\,dx\right)^{\frac{p}{p_s^*}}}\right]^{\frac{p_s^*}{p_s^*-p}} = \frac{s}{N}(G+D).$$

By using this in (2.4), we obtain

$$B(\eta) \leq B(1) - \frac{G}{p_s^*}(\eta-1) + O(|\eta-1|^2).$$

Therefore, we have

$$\frac{B(\eta)-B(1)}{\eta-1} \leq -\frac{G}{p_s^*} + O(|\eta-1|) \ \text{ if } \eta>1, \qquad \frac{B(\eta)-B(1)}{\eta-1} \geq -\frac{G}{p_s^*} + O(|\eta-1|) \ \text{ if } \eta<1.$$

This implies that

(2.5) $$B'(1) = -\frac{G}{p_s^*} = -\frac{1}{p_s^*}\int_{\mathbb{R}^N}u_0^{p_s^*}\,dx.$$

Arguing similarly as in the proof of Lemma 2.1, it follows that $S_{\eta,\alpha,\beta}$ is attained by $(tU,\tau(\eta)tU)$. Therefore,

$$B(\eta) = \frac{s}{N}\left[\frac{1+\tau(\eta)^p}{(\eta+\tau(\eta)^{\beta}+\tau(\eta)^{p_s^*})^{\frac{p}{p_s^*}}}\right]^{\frac{p_s^*}{p_s^*-p}}\int_{\mathbb{R}^N}U^{p_s^*}\,dx = \frac{s}{N}\,\frac{(1+\tau(\eta)^p)^{\frac{N}{sp}}}{(\eta+\tau(\eta)^{\beta}+\tau(\eta)^{p_s^*})^{\frac{N}{sp}-1}}\int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

Then, from a simple computation, it follows that

$$B'(\eta) = \frac{(1+\tau(\eta)^p)^{\frac{N}{sp}-1}}{p_s^*\,(\eta+\tau(\eta)^{\beta}+\tau(\eta)^{p_s^*})^{\frac{N}{sp}}}\left[\tau'(\eta)\,\tau(\eta)^{p-1}\left(\eta p_s^* + \alpha\tau(\eta)^{\beta} - \beta\tau(\eta)^{\beta-p} - p_s^*\tau(\eta)^{p_s^*-p}\right) - \left(1+\tau(\eta)^p\right)\right]\int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

Note that for $\eta=1$, $\tau(1)$ satisfies the equation $g(\tau)=0$, where $g$ is given by (2.1); thus, we obtain $\tau(1)=\tau_{\min}$. Consequently,

(2.6) $$B'(1) = -\frac{1}{p_s^*}\left[\frac{1+\tau_{\min}^p}{1+\tau_{\min}^{\beta}+\tau_{\min}^{p_s^*}}\right]^{\frac{p_s^*}{p_s^*-p}}\int_{\mathbb{R}^N}U^{p_s^*}\,dx = -\frac{\lambda^{p_s^*}}{p_s^*}\int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

By combining (2.5) and (2.6), we conclude (2.3). By a similar argument as in the proof of (2.3), we show that

(2.7) $$\int_{\mathbb{R}^N}v_0^{p_s^*}\,dx = \tau_{\min}^{p_s^*}\lambda^{p_s^*}\int_{\mathbb{R}^N}U^{p_s^*}\,dx, \qquad \int_{\mathbb{R}^N}u_0^{\alpha}v_0^{\beta}\,dx = \tau_{\min}^{\beta}\lambda^{p_s^*}\int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

Therefore, by (2.3) and (2.7), we obtain

$$\int_{\mathbb{R}^N}u_0^{\alpha}v_0^{\beta}\,dx = \tau_{\min}^{\beta}\int_{\mathbb{R}^N}u_0^{p_s^*}\,dx, \qquad \int_{\mathbb{R}^N}u_0^{\alpha}v_0^{\beta}\,dx = \tau_{\min}^{\beta-p_s^*}\int_{\mathbb{R}^N}v_0^{p_s^*}\,dx.$$

Again, since $(\lambda U,\mu U)$ solves the problem (S), we obtain

(2.8) $$\lambda^{p_s^*-p} + \frac{\alpha}{p_s^*}\mu^{\beta}\lambda^{\alpha-p} = 1 = \mu^{p_s^*-p} + \frac{\beta}{p_s^*}\mu^{\beta-p}\lambda^{\alpha}.$$

Now define $(u_1,v_1) := \left(\frac{u_0}{\lambda},\frac{v_0}{\mu}\right)$. By using (2.3), (2.7), and (2.8), we have

$$\|u_1\|_{\dot{W}^{s,p}}^p = \lambda^{-p}\|u_0\|_{\dot{W}^{s,p}}^p = \lambda^{-p}\int_{\mathbb{R}^N}\left(u_0^{p_s^*}+\frac{\alpha}{p_s^*}u_0^{\alpha}v_0^{\beta}\right)dx = \lambda^{-p}\left(\lambda^{p_s^*}+\frac{\alpha}{p_s^*}\mu^{\beta}\lambda^{\alpha}\right)\int_{\mathbb{R}^N}U^{p_s^*}\,dx = \|U\|_{\dot{W}^{s,p}}^p.$$

Similarly, we obtain $\|v_1\|_{\dot{W}^{s,p}}^p = \|U\|_{\dot{W}^{s,p}}^p$. Therefore, we have

(2.9) $$\|u_1\|_{\dot{W}^{s,p}}^p = \|U\|_{\dot{W}^{s,p}}^p = \|v_1\|_{\dot{W}^{s,p}}^p.$$

Also, by (2.3),

(2.10) $$\int_{\mathbb{R}^N}u_1^{p_s^*}\,dx = \int_{\mathbb{R}^N}U^{p_s^*}\,dx,$$

and by (2.7),

(2.11) $$\int_{\mathbb{R}^N}v_1^{p_s^*}\,dx = \int_{\mathbb{R}^N}U^{p_s^*}\,dx.$$

Thus, from (2.9) and (2.10), we conclude that $u_1$ achieves $S$ in (1.1). Further, (2.9) and (2.11) imply that $v_1$ also achieves $S$ in (1.1). This completes the proof. $\square$

3 Proof of Theorems 1.5, 1.6, and 1.7

In this section, we study the system $(\tilde{S}_{\gamma})$ introduced in the introduction. For the reader's convenience, we recall $(\tilde{S}_{\gamma})$:

$$(-\Delta_p)^s u = |u|^{p_s^*-2}u + \frac{\alpha\gamma}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} \ \text{ in } \mathbb{R}^N, \qquad (-\Delta_p)^s v = |v|^{p_s^*-2}v + \frac{\beta\gamma}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} \ \text{ in } \mathbb{R}^N, \qquad u,v\in\dot{W}^{s,p}(\mathbb{R}^N). \quad (\tilde{S}_{\gamma})$$

We also recall that (see (1.7)) the energy functional associated with the aforementioned system is

$$J(u,v) = \frac{1}{p}\left(\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p\right) - \frac{1}{p_s^*}\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx, \qquad (u,v)\in X,$$

and that the Nehari manifold (1.8) is

$$\mathcal{N} = \left\{(u,v)\in X : u\neq 0,\ v\neq 0,\ \|u\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|u|^{p_s^*}+\frac{\alpha\gamma}{p_s^*}|u|^{\alpha}|v|^{\beta}\right)dx,\ \|v\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|v|^{p_s^*}+\frac{\beta\gamma}{p_s^*}|u|^{\alpha}|v|^{\beta}\right)dx\right\}.$$

Therefore, it follows that

$$A = \inf_{(u,v)\in\mathcal{N}}J(u,v) = \inf_{(u,v)\in\mathcal{N}}\frac{s}{N}\left(\|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p\right) = \inf_{(u,v)\in\mathcal{N}}\frac{s}{N}\int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx.$$

Proposition 3.1

Assume that $c,d\in\mathbb{R}$ satisfy

(3.1) $$c^{\frac{p_s^*-p}{p}} + \frac{\alpha\gamma}{p_s^*}c^{\frac{\alpha-p}{p}}d^{\frac{\beta}{p}} \geq 1, \qquad d^{\frac{p_s^*-p}{p}} + \frac{\beta\gamma}{p_s^*}d^{\frac{\beta-p}{p}}c^{\frac{\alpha}{p}} \geq 1, \qquad c,d>0.$$

If $\frac{N}{2s}<p<\frac{N}{s}$, $\alpha,\beta>p$, and (1.10) hold, then $c+d \geq k+\ell$, where $(k,\ell)$ satisfies (1.9).

Proof

We apply the change of variables $y = c+d$, $x = \frac{c}{d}$, $y_0 = k+\ell$, and $x_0 = \frac{k}{\ell}$ to (3.1) and (1.9), and obtain

$$y^{\frac{p_s^*-p}{p}} \geq \frac{(x+1)^{\frac{p_s^*-p}{p}}}{x^{\frac{p_s^*-p}{p}} + \frac{\alpha\gamma}{p_s^*}x^{\frac{\alpha-p}{p}}} =: f_1(x), \qquad y_0^{\frac{p_s^*-p}{p}} = f_1(x_0),$$

$$y^{\frac{p_s^*-p}{p}} \geq \frac{(x+1)^{\frac{p_s^*-p}{p}}}{1 + \frac{\beta\gamma}{p_s^*}x^{\frac{\alpha}{p}}} =: f_2(x), \qquad y_0^{\frac{p_s^*-p}{p}} = f_2(x_0).$$

Then, one has

$$f_1'(x) = \frac{\alpha\gamma\,(x+1)^{\frac{p_s^*-2p}{p}}\,x^{\frac{\alpha-2p}{p}}}{p\,p_s^*\left[x^{\frac{p_s^*-p}{p}}+\frac{\alpha\gamma}{p_s^*}x^{\frac{\alpha-p}{p}}\right]^2}\left(-\frac{p_s^*(p_s^*-p)}{\alpha\gamma}x^{\frac{\beta}{p}}+\beta x-\alpha+p\right) =: \frac{\alpha\gamma\,(x+1)^{\frac{p_s^*-2p}{p}}\,x^{\frac{\alpha-2p}{p}}}{p\,p_s^*\left[x^{\frac{p_s^*-p}{p}}+\frac{\alpha\gamma}{p_s^*}x^{\frac{\alpha-p}{p}}\right]^2}\,g_1(x),$$

$$f_2'(x) = \frac{\beta\gamma\,(x+1)^{\frac{p_s^*-2p}{p}}}{p\,p_s^*\left[1+\frac{\beta\gamma}{p_s^*}x^{\frac{\alpha}{p}}\right]^2}\left(\frac{p_s^*(p_s^*-p)}{\beta\gamma}+(\beta-p)x^{\frac{\alpha}{p}}-\alpha x^{\frac{\alpha-p}{p}}\right) =: \frac{\beta\gamma\,(x+1)^{\frac{p_s^*-2p}{p}}}{p\,p_s^*\left[1+\frac{\beta\gamma}{p_s^*}x^{\frac{\alpha}{p}}\right]^2}\,g_2(x).$$

Hence, from $g_1'(x)=0$, we obtain $x_1 = \left(\frac{p\alpha\gamma}{p_s^*(p_s^*-p)}\right)^{\frac{p}{\beta-p}}$, and similarly, for $g_2$, we have $x_2 = \frac{\alpha-p}{\beta-p}$. Now, by using (1.10), we conclude that

$$\max_{x>0}g_1(x) = g_1(x_1) = \left(\frac{p\alpha\gamma}{p_s^*(p_s^*-p)}\right)^{\frac{p}{\beta-p}}(\beta-p) - (\alpha-p) \leq 0,$$

$$\min_{x>0}g_2(x) = g_2(x_2) = \frac{p_s^*(p_s^*-p)}{\beta\gamma} - p\left(\frac{\alpha-p}{\beta-p}\right)^{\frac{\alpha-p}{p}} \geq 0.$$

Therefore, we conclude that the function $f_1$ is decreasing on $(0,\infty)$, while the function $f_2$ is increasing on $(0,\infty)$. Thus, we have

$$y^{\frac{p_s^*-p}{p}} \geq \max\{f_1(x),f_2(x)\} \geq \min_{x>0}\left(\max\{f_1(x),f_2(x)\}\right) = \left.\max\{f_1,f_2\}\right|_{\{f_1=f_2\}} = y_0^{\frac{p_s^*-p}{p}}.$$

Hence, the result follows. $\square$

We define the functions

(3.2) $$F_1(k,\ell) := k^{\frac{p_s^*-p}{p}} + \frac{\alpha\gamma}{p_s^*}k^{\frac{\alpha-p}{p}}\ell^{\frac{\beta}{p}} - 1, \quad k>0,\ \ell\geq 0, \qquad F_2(k,\ell) := \ell^{\frac{p_s^*-p}{p}} + \frac{\beta\gamma}{p_s^*}\ell^{\frac{\beta-p}{p}}k^{\frac{\alpha}{p}} - 1, \quad k\geq 0,\ \ell>0,$$

$$\ell(k) := \left(\frac{p_s^*}{\alpha\gamma}\right)^{\frac{p}{\beta}}k^{\frac{p-\alpha}{\beta}}\left(1-k^{\frac{p_s^*-p}{p}}\right)^{\frac{p}{\beta}}, \quad 0<k\leq 1, \qquad k(\ell) := \left(\frac{p_s^*}{\beta\gamma}\right)^{\frac{p}{\alpha}}\ell^{\frac{p-\beta}{\alpha}}\left(1-\ell^{\frac{p_s^*-p}{p}}\right)^{\frac{p}{\alpha}}, \quad 0<\ell\leq 1.$$

Then $F_1(k,\ell(k)) = 0$ and $F_2(k(\ell),\ell) = 0$.

Lemma 3.2

Assume that $\frac{2N}{N+2s}<p<\frac{N}{2s}$ and $\alpha,\beta<p$. Then

(3.3) $$F_1(k,\ell) = 0, \qquad F_2(k,\ell) = 0, \qquad k,\ell>0$$

has a solution $(k_0,\ell_0)$ such that $F_2(k,\ell(k))<0$ for all $k\in(0,k_0)$, that is, $(k_0,\ell_0)$ satisfies (1.12). Similarly, (3.3) has a solution $(k_1,\ell_1)$ such that $F_1(k(\ell),\ell)<0$ for all $\ell\in(0,\ell_1)$, that is, $(k_1,\ell_1)$ satisfies (1.9) and $\ell_1 = \min\{\ell : (k,\ell) \text{ satisfies (1.9)}\}$.

Proof

The proof is analogous to that of [19, Lemma 3.2]. $\square$

Lemma 3.3

Assume that $\frac{2N}{N+2s}<p<\frac{N}{2s}$, $\alpha,\beta<p$, and (1.11) holds. Then $k_0+\ell_0<1$, where $(k_0,\ell_0)$ is the same as in Lemma 3.2, and

$$F_1(k(\ell),\ell)<0 \quad \forall\,\ell\in(0,\ell_0), \qquad F_2(k,\ell(k))<0 \quad \forall\,k\in(0,k_0).$$

Proof

By using (3.2), we obtain

$$\ell'(k) = \frac{1}{\beta}\left(\frac{p_s^*}{\alpha\gamma}\right)^{\frac{p}{\beta}}k^{\frac{p-p_s^*}{\beta}}\left(1-k^{\frac{p_s^*-p}{p}}\right)^{\frac{p-\beta}{\beta}}\left[(p-\alpha) - \beta k^{\frac{p_s^*-p}{p}}\right],$$

and then we have

$$\ell''(k) = -\frac{p_s^*-p}{p\beta^2}\left(\frac{p_s^*}{\alpha\gamma}\right)^{\frac{p}{\beta}}k^{\frac{p-p_s^*-\beta}{\beta}}\left(1-k^{\frac{p_s^*-p}{p}}\right)^{\frac{p-2\beta}{\beta}}\left[p(p-\alpha) - \beta(2p-p_s^*)k^{\frac{p_s^*-p}{p}}\right].$$

Note that $\ell(1)=0$, $\ell'(k)=0$ at $k = \left(\frac{p-\alpha}{\beta}\right)^{\frac{p}{p_s^*-p}}$, and $\ell'(k)>0$ for $0<k<\left(\frac{p-\alpha}{\beta}\right)^{\frac{p}{p_s^*-p}}$, whereas $\ell'(k)<0$ for $\left(\frac{p-\alpha}{\beta}\right)^{\frac{p}{p_s^*-p}}<k<1$. From $\ell''(k)=0$, we obtain $\tilde{k} = \left(\frac{p(p-\alpha)}{\beta(2p-p_s^*)}\right)^{\frac{p}{p_s^*-p}}$. Then, by (1.11), we obtain

$$\min_{k\in(0,1]}\ell'(k) = \min_{k\in\left[\left(\frac{p-\alpha}{\beta}\right)^{\frac{p}{p_s^*-p}},\,1\right]}\ell'(k) = \ell'(\tilde{k}) = -\frac{(p-\alpha)(p_s^*-p)}{\beta(2p-p_s^*)}\left[\frac{p_s^*\beta(2p-p_s^*)}{p\alpha\gamma(p-\alpha)}\right]^{\frac{p}{\beta}}\left[\frac{(p-\beta)(p_s^*-p)}{\beta(2p-p_s^*)}\right]^{\frac{p-\beta}{\beta}} \geq -1.$$

The remaining proof follows from [19, Lemma 3.3] by taking $\mu_1=1=\mu_2$ in their proof. $\square$

Lemma 3.4

Assume that $\frac{2N}{N+2s}<p<\frac{N}{2s}$, $\alpha,\beta<p$, and (1.11) holds. Then the system

$$k+\ell \leq k_0+\ell_0, \qquad F_1(k,\ell)\geq 0, \qquad F_2(k,\ell)\geq 0, \qquad k,\ell\geq 0,\ (k,\ell)\neq(0,0)$$

has the unique solution $(k,\ell) = (k_0,\ell_0)$, where $F_1$ and $F_2$ are given by (3.2).

Proof

The proof follows from [19, Proposition 3.4].□

Proof of Theorem 1.5

By using (1.9), we have that $(k_0^{1/p}U,\ell_0^{1/p}U)\in\mathcal{N}$ is a nontrivial solution of $(\tilde{S}_{\gamma})$ and

(3.4) $$A \leq J(k_0^{1/p}U,\ell_0^{1/p}U) = \frac{s}{N}(k_0+\ell_0)S^{\frac{N}{sp}}.$$

Now, let $\{(u_n,v_n)\}\subset\mathcal{N}$ be a minimizing sequence for $A$, so that $J(u_n,v_n)\to A$ as $n\to\infty$. Let $c_n = \|u_n\|_{L^{p_s^*}(\mathbb{R}^N)}^p$ and $d_n = \|v_n\|_{L^{p_s^*}(\mathbb{R}^N)}^p$. Then, by Hölder's inequality, we have

(3.5) $$S\,c_n \leq \|u_n\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|u_n|^{p_s^*}+\frac{\alpha\gamma}{p_s^*}|u_n|^{\alpha}|v_n|^{\beta}\right)dx \leq c_n^{\frac{p_s^*}{p}} + \frac{\alpha\gamma}{p_s^*}c_n^{\frac{\alpha}{p}}d_n^{\frac{\beta}{p}}.$$

This implies that

$$\tilde{c}_n^{\frac{p_s^*-p}{p}} + \frac{\alpha\gamma}{p_s^*}\tilde{c}_n^{\frac{\alpha-p}{p}}\tilde{d}_n^{\frac{\beta}{p}} \geq 1, \qquad \text{i.e., } F_1(\tilde{c}_n,\tilde{d}_n) \geq 0,$$

where $\tilde{c}_n = c_n S^{-\frac{p}{p_s^*-p}}$ and $\tilde{d}_n = d_n S^{-\frac{p}{p_s^*-p}}$. Similarly, we obtain

(3.6) $$S\,d_n \leq \|v_n\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}\left(|v_n|^{p_s^*}+\frac{\beta\gamma}{p_s^*}|u_n|^{\alpha}|v_n|^{\beta}\right)dx \leq d_n^{\frac{p_s^*}{p}} + \frac{\beta\gamma}{p_s^*}c_n^{\frac{\alpha}{p}}d_n^{\frac{\beta}{p}},$$

and thus $F_2(\tilde{c}_n,\tilde{d}_n)\geq 0$. Then, for $\alpha,\beta>p$, Proposition 3.1 gives $\tilde{c}_n+\tilde{d}_n \geq k+\ell = k_0+\ell_0$, while for $\alpha,\beta<p$, Lemma 3.4 gives $\tilde{c}_n+\tilde{d}_n \geq k_0+\ell_0$. Hence,

(3.7) $$c_n+d_n \geq (k_0+\ell_0)S^{\frac{N-sp}{sp}}.$$

Since $J(u_n,v_n) = \frac{s}{N}\left(\|u_n\|_{\dot{W}^{s,p}}^p+\|v_n\|_{\dot{W}^{s,p}}^p\right)$ on $\mathcal{N}$, by using (3.4)-(3.6), we have

$$S(c_n+d_n) \leq \frac{N}{s}J(u_n,v_n) = \frac{N}{s}A + o(1) \leq (k_0+\ell_0)S^{\frac{N}{sp}} + o(1).$$

This implies that

(3.8) $$c_n+d_n \leq (k_0+\ell_0)S^{\frac{N-sp}{sp}} + o(1).$$

By combining (3.7) and (3.8), we obtain $c_n+d_n \to (k_0+\ell_0)S^{\frac{N-sp}{sp}}$ as $n\to\infty$. Therefore,

$$A = \lim_{n\to\infty}J(u_n,v_n) \geq \frac{s}{N}\,S\lim_{n\to\infty}(c_n+d_n) = \frac{s}{N}(k_0+\ell_0)S^{\frac{N}{sp}}.$$

Together with (3.4), this gives

$$A = \frac{s}{N}(k_0+\ell_0)S^{\frac{N}{sp}} = J(k_0^{1/p}U,\ell_0^{1/p}U).$$

This completes the proof of Theorem 1.5.□

Next, we prove the existence of solutions of (1.13), namely, Theorem 1.7. To this end, define

$$W^{s,p}(\mathbb{R}^N) := \left\{u\in L^{p}(\mathbb{R}^N) : \int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\,dx\,dy < \infty\right\},$$

$$X(B_R(0)) = W_0^{s,p}(B_R(0))\times W_0^{s,p}(B_R(0)),$$

where $W_0^{s,p}(B_R(0)) = \{u\in W^{s,p}(\mathbb{R}^N) : u=0 \text{ in } \mathbb{R}^N\setminus B_R(0)\}$, endowed with the norm $\|\cdot\|_{\dot{W}^{s,p}}$, and

$$\tilde{\mathcal{N}}(R) = \left\{(u,v)\in X(B_R(0))\setminus\{(0,0)\} : \|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p = \int_{B_R(0)}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx\right\},$$

and set $\tilde{A}(R) := \inf_{(u,v)\in\tilde{\mathcal{N}}(R)}J(u,v)$. We also define

$$\tilde{\mathcal{N}} = \left\{(u,v)\in X\setminus\{(0,0)\} : \|u\|_{\dot{W}^{s,p}}^p+\|v\|_{\dot{W}^{s,p}}^p = \int_{\mathbb{R}^N}(|u|^{p_s^*}+|v|^{p_s^*}+\gamma|u|^{\alpha}|v|^{\beta})\,dx\right\}$$

and set $\tilde{A} := \inf_{(u,v)\in\tilde{\mathcal{N}}}J(u,v)$. Since $\mathcal{N}\subset\tilde{\mathcal{N}}$, it follows that $\tilde{A}\leq A$, and by the fractional Sobolev embedding, $\tilde{A}>0$.

For $\varepsilon \in (0, \min\{\alpha,\beta\}-1)$, consider

$$(3.9)\qquad \begin{cases} (-\Delta_p)^s u = |u|^{p_s^*-2-2\varepsilon}u + \dfrac{(\alpha-\varepsilon)\gamma}{p_s^*-2\varepsilon}|u|^{\alpha-2-\varepsilon}u|v|^{\beta-\varepsilon} & \text{in } B_R(0),\\[1ex] (-\Delta_p)^s v = |v|^{p_s^*-2-2\varepsilon}v + \dfrac{(\beta-\varepsilon)\gamma}{p_s^*-2\varepsilon}|v|^{\beta-2-\varepsilon}v|u|^{\alpha-\varepsilon} & \text{in } B_R(0),\\[1ex] u,v \in W_0^{s,p}(B_R(0)). \end{cases}$$

The corresponding energy functional of the system (3.9) is given by

$$J_\varepsilon(u,v) \coloneqq \frac{1}{p}\big(\|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p\big) - \frac{1}{p_s^*-2\varepsilon}\int_{B_R(0)}\big(|u|^{p_s^*-2\varepsilon} + |v|^{p_s^*-2\varepsilon} + \gamma|u|^{\alpha-\varepsilon}|v|^{\beta-\varepsilon}\big)\,dx.$$

Define

$$\tilde{\mathcal N}_\varepsilon(R) \coloneqq \left\{(u,v)\in X(B_R(0))\setminus\{(0,0)\} : G_\varepsilon(u,v) \coloneqq \|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p - \int_{B_R(0)}\big(|u|^{p_s^*-2\varepsilon}+|v|^{p_s^*-2\varepsilon}+\gamma|u|^{\alpha-\varepsilon}|v|^{\beta-\varepsilon}\big)\,dx = 0\right\},$$

and set $\tilde A_\varepsilon(R) \coloneqq \inf_{(u,v)\in\tilde{\mathcal N}_\varepsilon(R)} J_\varepsilon(u,v)$.

Lemma 3.5

For any $\varepsilon_0\in\big(0, \min\{\alpha-1,\beta-1,(p_s^*-p)/2\}\big)$, there exists a constant $C_{\varepsilon_0} > 0$ such that

$$\tilde A_\varepsilon(R) \ge C_{\varepsilon_0} \quad \forall\,\varepsilon\in(0,\varepsilon_0].$$

Proof

Let $(u,v) \in \tilde{\mathcal N}_\varepsilon(R)$. Then

$$J_\varepsilon(u,v) = \left(\frac{1}{p} - \frac{1}{p_s^*-2\varepsilon}\right)\big(\|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p\big),$$

so it suffices to show that $\|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p$ is bounded away from zero. We have

$$(3.10)\qquad \begin{aligned} \|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p &= \int_{B_R(0)}\big(|u|^{p_s^*-2\varepsilon}+|v|^{p_s^*-2\varepsilon}+\gamma|u|^{\alpha-\varepsilon}|v|^{\beta-\varepsilon}\big)\,dx\\ &\le |B_R(0)|^{\frac{2\varepsilon}{p_s^*}}\left[\left(\int_{B_R(0)}|u|^{p_s^*}dx\right)^{\frac{p_s^*-2\varepsilon}{p_s^*}} + \left(\int_{B_R(0)}|v|^{p_s^*}dx\right)^{\frac{p_s^*-2\varepsilon}{p_s^*}} + \gamma\left(\int_{B_R(0)}|u|^{p_s^*}dx\right)^{\frac{\alpha-\varepsilon}{p_s^*}}\left(\int_{B_R(0)}|v|^{p_s^*}dx\right)^{\frac{\beta-\varepsilon}{p_s^*}}\right]\\ &\le |B_R(0)|^{\frac{2\varepsilon}{p_s^*}}\, S^{-\frac{p_s^*-2\varepsilon}{p}}\big(\|u\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}+\|v\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}+\gamma\|u\|_{\dot W^{s,p}}^{\alpha-\varepsilon}\|v\|_{\dot W^{s,p}}^{\beta-\varepsilon}\big) \end{aligned}$$

by the Hölder and Sobolev inequalities. By Young’s inequality,

$$\|u\|_{\dot W^{s,p}}^{\alpha-\varepsilon}\|v\|_{\dot W^{s,p}}^{\beta-\varepsilon} \le \frac{\alpha-\varepsilon}{p_s^*-2\varepsilon}\|u\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon} + \frac{\beta-\varepsilon}{p_s^*-2\varepsilon}\|v\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon} \le \|u\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon} + \|v\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}.$$
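The application of Young's inequality above uses the conjugate pair of exponents $\frac{p_s^*-2\varepsilon}{\alpha-\varepsilon}$ and $\frac{p_s^*-2\varepsilon}{\beta-\varepsilon}$; the bookkeeping, sketched below, uses only the relation $\alpha+\beta = p_s^*$:

```latex
% The two exponents are conjugate:
\frac{\alpha-\varepsilon}{p_s^*-2\varepsilon} + \frac{\beta-\varepsilon}{p_s^*-2\varepsilon}
= \frac{(\alpha+\beta)-2\varepsilon}{p_s^*-2\varepsilon} = 1,
% so Young's inequality ab \le a^r/r + b^{r'}/r', with
%   r  = (p_s^*-2\varepsilon)/(\alpha-\varepsilon),
%   r' = (p_s^*-2\varepsilon)/(\beta-\varepsilon),
% applied to a = \|u\|^{\alpha-\varepsilon}, b = \|v\|^{\beta-\varepsilon},
% gives the first bound, and each coefficient
% (\alpha-\varepsilon)/(p_s^*-2\varepsilon) and (\beta-\varepsilon)/(p_s^*-2\varepsilon)
% is at most 1, which gives the second.
```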

Therefore, (3.10) gives

$$(3.11)\qquad \|u\|_{\dot W^{s,p}}^p + \|v\|_{\dot W^{s,p}}^p \le (1+\gamma)\,|B_R(0)|^{\frac{2\varepsilon}{p_s^*}}\, S^{-\frac{p_s^*-2\varepsilon}{p}}\big(\|u\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}+\|v\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}\big).$$

Since $(p_s^*-2\varepsilon)/p > 1$,

$$\|u\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon}+\|v\|_{\dot W^{s,p}}^{p_s^*-2\varepsilon} \le \big(\|u\|_{\dot W^{s,p}}^p+\|v\|_{\dot W^{s,p}}^p\big)^{\frac{p_s^*-2\varepsilon}{p}},$$

thus, (3.11) gives

$$\|u\|_{\dot W^{s,p}}^p+\|v\|_{\dot W^{s,p}}^p \ge \left[\frac{S^{\frac{p_s^*-2\varepsilon}{p}}}{(1+\gamma)\,|B_R(0)|^{\frac{2\varepsilon}{p_s^*}}}\right]^{\frac{p}{p_s^*-p-2\varepsilon}}.$$

The desired conclusion follows, since $p_s^*-p-2\varepsilon \ge p_s^*-p-2\varepsilon_0 > 0$ and the function $h(t) = \left[\frac{S^{(p_s^*-2t)/p}}{(1+\gamma)|B_R(0)|^{2t/p_s^*}}\right]^{\frac{p}{p_s^*-p-2t}}$ is continuous and positive on $[0,\varepsilon_0]$.□
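For instance, at the endpoint $t = 0$ the value of $h$ can be computed explicitly; a sketch (one admissible choice is then $C_{\varepsilon_0}$ proportional to $\min_{[0,\varepsilon_0]} h$):

```latex
% At t = 0 the factor |B_R(0)|^{2t/p_s^*} equals 1, so
h(0) = \left[\frac{S^{p_s^*/p}}{1+\gamma}\right]^{\frac{p}{p_s^*-p}} > 0,
% a value independent of \varepsilon and of the radius R; the dependence on R
% enters h only through the power 2t/p_s^*, which vanishes at t = 0.
```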

Lemma 3.6

Assume that $\frac{2N}{N+2s} < p < \frac{N}{2s}$ and $\alpha,\beta < p$. For $\varepsilon \in (0,\min\{\alpha,\beta\}-1)$, it holds that

$$\tilde A_\varepsilon(R) < \min\left\{\inf_{(u,0)\in\tilde{\mathcal N}_\varepsilon(R)} J_\varepsilon(u,0),\ \inf_{(0,v)\in\tilde{\mathcal N}_\varepsilon(R)} J_\varepsilon(0,v)\right\}.$$

Proof

Clearly, $2 < p_s^*-2\varepsilon < p_s^*$, since $\min\{\alpha,\beta\} \le p_s^*/2$. Let $u_1$ be a least energy solution of

$$(-\Delta_p)^s u = |u|^{p_s^*-2-2\varepsilon}u \ \text{ in } B_R(0), \qquad u \in W_0^{s,p}(B_R(0)).$$

Set

$$J_\varepsilon(u_1,0) = a_{10} \coloneqq \inf_{(u,0)\in\tilde{\mathcal N}_\varepsilon(R)} J_\varepsilon(u,0), \qquad J_\varepsilon(0,u_1) = a_{01} \coloneqq \inf_{(0,v)\in\tilde{\mathcal N}_\varepsilon(R)} J_\varepsilon(0,v).$$

We claim that for any $\sigma\in\mathbb{R}$, there exists a unique $t(\sigma) > 0$ such that $(t(\sigma)^{1/p}u_1, t(\sigma)^{1/p}\sigma u_1) \in \tilde{\mathcal N}_\varepsilon(R)$. Indeed,

$$t(\sigma)^{\frac{p_s^*-p-2\varepsilon}{p}} = \frac{\|u_1\|_{\dot W^{s,p}}^p + |\sigma|^p\|u_1\|_{\dot W^{s,p}}^p}{\displaystyle\int_{B_R(0)}\big(|u_1|^{p_s^*-2\varepsilon} + |\sigma u_1|^{p_s^*-2\varepsilon} + \gamma|u_1|^{\alpha-\varepsilon}|\sigma u_1|^{\beta-\varepsilon}\big)\,dx} = \frac{q a_{10} + q a_{01}|\sigma|^p}{q a_{10} + q a_{01}|\sigma|^{p_s^*-2\varepsilon} + |\sigma|^{\beta-\varepsilon}\gamma\displaystyle\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx},$$

where $q \coloneqq \frac{p(p_s^*-2\varepsilon)}{p_s^*-p-2\varepsilon}$, i.e., $\frac{1}{q} = \frac{1}{p} - \frac{1}{p_s^*-2\varepsilon}$. Since $t(0) = 1$, we have

$$\lim_{\sigma\to 0}\frac{t'(\sigma)}{|\sigma|^{\beta-2-\varepsilon}\sigma} = -\frac{(\beta-\varepsilon)\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx}{a_{10}(p_s^*-2\varepsilon)}.$$

This implies that, as $\sigma\to 0$,

$$t'(\sigma) = -\frac{(\beta-\varepsilon)\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx}{a_{10}(p_s^*-2\varepsilon)}\,|\sigma|^{\beta-2-\varepsilon}\sigma\,(1+o(1)).$$

Then

$$t(\sigma) = 1 - \frac{\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx}{a_{10}(p_s^*-2\varepsilon)}\,|\sigma|^{\beta-\varepsilon}(1+o(1)) \quad \text{as } \sigma\to 0,$$

and therefore,

$$t(\sigma)^{\frac{p_s^*-2\varepsilon}{p}} = 1 - \frac{\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx}{p\,a_{10}}\,|\sigma|^{\beta-\varepsilon}(1+o(1)) \quad \text{as } \sigma\to 0.$$

We obtain, for $\sigma$ small enough,

$$\tilde A_\varepsilon(R) \le J_\varepsilon\big(t(\sigma)^{1/p}u_1, t(\sigma)^{1/p}\sigma u_1\big) = \frac{1}{q}\left(q a_{10} + q a_{01}|\sigma|^{p_s^*-2\varepsilon} + |\sigma|^{\beta-\varepsilon}\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx\right)t(\sigma)^{\frac{p_s^*-2\varepsilon}{p}} = a_{10} - \frac{1}{p_s^*-2\varepsilon}|\sigma|^{\beta-\varepsilon}\gamma\int_{B_R(0)}|u_1|^{p_s^*-2\varepsilon}\,dx + o(|\sigma|^{\beta-\varepsilon}) < a_{10}.$$

Similarly, we see that $\tilde A_\varepsilon(R) < a_{01}$. This completes the proof.□

Note that, arguing as in Lemma 3.6, we obtain

$$(3.12)\qquad \tilde A < \min\left\{\inf_{(u,0)\in\tilde{\mathcal N}} J(u,0),\ \inf_{(0,v)\in\tilde{\mathcal N}} J(0,v)\right\} = \min\{J(U,0), J(0,U)\} = \frac{s}{N}S^{\frac{N}{sp}}.$$
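The last equality in (3.12) follows from the standard computation for the ground state $U$ of the single critical equation; a sketch, using only $p_s^* = Np/(N-sp)$ and the fact that $U$ is an extremal of the Sobolev inequality:

```latex
% On the Nehari manifold, \|U\|^p = \int |U|^{p_s^*}, and equality in the
% Sobolev inequality S(\int |U|^{p_s^*})^{p/p_s^*} = \|U\|^p then forces
\|U\|_{\dot W^{s,p}}^p = S^{\frac{p_s^*}{p_s^*-p}} = S^{\frac{N}{sp}},
\qquad
J(U,0) = \left(\frac{1}{p} - \frac{1}{p_s^*}\right)\|U\|_{\dot W^{s,p}}^p
       = \frac{s}{N}\,S^{\frac{N}{sp}},
% using 1/p - 1/p_s^* = 1/p - (N-sp)/(Np) = s/N.
```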

Proposition 3.7

For $0 < \varepsilon < \min\big\{\min\{\alpha,\beta\}-1,\ (p_s^*-p)/2\big\}$, system (3.9) has a positive least energy solution $(u_\varepsilon,v_\varepsilon)$, where both $u_\varepsilon$ and $v_\varepsilon$ are radially symmetric nonincreasing functions.

Proof

By Lemma 3.5, $\tilde A_\varepsilon(R) > 0$. Let $(u,v)\in\tilde{\mathcal N}_\varepsilon(R)$ with $u,v\ge 0$, and let $u^*, v^*$ be the Schwarz symmetrizations of $u$ and $v$, respectively. Then, by the nonlocal Pólya-Szegő inequality [2] and the properties of the Schwarz symmetrization, we obtain

$$\|u^*\|_{\dot W^{s,p}}^p + \|v^*\|_{\dot W^{s,p}}^p \le \int_{B_R(0)}\big(|u^*|^{p_s^*-2\varepsilon}+|v^*|^{p_s^*-2\varepsilon}+\gamma|u^*|^{\alpha-\varepsilon}|v^*|^{\beta-\varepsilon}\big)\,dx.$$

Also note that $J_\varepsilon(t_*^{1/p}u^*, t_*^{1/p}v^*) \le J_\varepsilon(u,v)$ for some $t_*\in(0,1]$ such that $(t_*^{1/p}u^*, t_*^{1/p}v^*)\in\tilde{\mathcal N}_\varepsilon(R)$. Hence, we may choose a minimizing sequence $\{(u_n,v_n)\}\subset\tilde{\mathcal N}_\varepsilon(R)$ for $\tilde A_\varepsilon(R)$ such that $(u_n,v_n) = (u_n^*,v_n^*)$ for every $n$ and $J_\varepsilon(u_n,v_n)\to\tilde A_\varepsilon(R)$ as $n\to\infty$. Since $J_\varepsilon$ equals a positive multiple of $\|u\|_{\dot W^{s,p}}^p+\|v\|_{\dot W^{s,p}}^p$ on $\tilde{\mathcal N}_\varepsilon(R)$, both sequences $\{u_n\}$ and $\{v_n\}$ are bounded in $W_0^{s,p}(B_R(0))$. As $W_0^{s,p}(B_R(0))$ is a reflexive Banach space, up to a subsequence, $u_n\rightharpoonup u_\varepsilon$ and $v_n\rightharpoonup v_\varepsilon$ weakly in $W_0^{s,p}(B_R(0))$. Moreover, since the embedding $W_0^{s,p}(B_R(0))\hookrightarrow L^{p_s^*-2\varepsilon}(B_R(0))$ is compact, it follows that $u_n\to u_\varepsilon$ and $v_n\to v_\varepsilon$ strongly in $L^{p_s^*-2\varepsilon}(B_R(0))$. Therefore,

$$\int_{B_R(0)}\big(|u_\varepsilon|^{p_s^*-2\varepsilon}+|v_\varepsilon|^{p_s^*-2\varepsilon}+\gamma|u_\varepsilon|^{\alpha-\varepsilon}|v_\varepsilon|^{\beta-\varepsilon}\big)\,dx = \lim_{n\to\infty}\int_{B_R(0)}\big(|u_n|^{p_s^*-2\varepsilon}+|v_n|^{p_s^*-2\varepsilon}+\gamma|u_n|^{\alpha-\varepsilon}|v_n|^{\beta-\varepsilon}\big)\,dx = \frac{p(p_s^*-2\varepsilon)}{p_s^*-2\varepsilon-p}\lim_{n\to\infty}J_\varepsilon(u_n,v_n) = \frac{p(p_s^*-2\varepsilon)}{p_s^*-2\varepsilon-p}\,\tilde A_\varepsilon(R) > 0,$$

and this yields that $(u_\varepsilon,v_\varepsilon)\ne(0,0)$; moreover, $u_\varepsilon$ and $v_\varepsilon$ are nonnegative, radially symmetric, and nonincreasing. By the weak lower semicontinuity of the norm, we also have

$$\|u_\varepsilon\|_{\dot W^{s,p}}^p + \|v_\varepsilon\|_{\dot W^{s,p}}^p \le \lim_{n\to\infty}\big(\|u_n\|_{\dot W^{s,p}}^p + \|v_n\|_{\dot W^{s,p}}^p\big),$$

and therefore,

$$\|u_\varepsilon\|_{\dot W^{s,p}}^p + \|v_\varepsilon\|_{\dot W^{s,p}}^p \le \int_{B_R(0)}\big(|u_\varepsilon|^{p_s^*-2\varepsilon}+|v_\varepsilon|^{p_s^*-2\varepsilon}+\gamma|u_\varepsilon|^{\alpha-\varepsilon}|v_\varepsilon|^{\beta-\varepsilon}\big)\,dx.$$

Therefore, there exists $t_\varepsilon\in(0,1]$ such that $(t_\varepsilon^{1/p}u_\varepsilon, t_\varepsilon^{1/p}v_\varepsilon)\in\tilde{\mathcal N}_\varepsilon(R)$, and hence,

$$\tilde A_\varepsilon(R) \le J_\varepsilon\big(t_\varepsilon^{1/p}u_\varepsilon, t_\varepsilon^{1/p}v_\varepsilon\big) = t_\varepsilon\,\frac{p_s^*-2\varepsilon-p}{p(p_s^*-2\varepsilon)}\big(\|u_\varepsilon\|_{\dot W^{s,p}}^p+\|v_\varepsilon\|_{\dot W^{s,p}}^p\big) \le \frac{p_s^*-2\varepsilon-p}{p(p_s^*-2\varepsilon)}\lim_{n\to\infty}\big(\|u_n\|_{\dot W^{s,p}}^p+\|v_n\|_{\dot W^{s,p}}^p\big) = \lim_{n\to\infty}J_\varepsilon(u_n,v_n) = \tilde A_\varepsilon(R),$$

which yields that $t_\varepsilon = 1$, $(u_\varepsilon,v_\varepsilon)\in\tilde{\mathcal N}_\varepsilon(R)$, $\tilde A_\varepsilon(R) = J_\varepsilon(u_\varepsilon,v_\varepsilon)$, and

$$\|u_\varepsilon\|_{\dot W^{s,p}}^p + \|v_\varepsilon\|_{\dot W^{s,p}}^p = \lim_{n\to\infty}\big(\|u_n\|_{\dot W^{s,p}}^p + \|v_n\|_{\dot W^{s,p}}^p\big).$$

This proves that $u_n\to u_\varepsilon$ and $v_n\to v_\varepsilon$ strongly in $W_0^{s,p}(B_R(0))$. Now, by the Lagrange multiplier theorem, there exists $\lambda\in\mathbb{R}$ such that

$$J_\varepsilon'(u_\varepsilon,v_\varepsilon) + \lambda G_\varepsilon'(u_\varepsilon,v_\varepsilon) = 0.$$

Since $J_\varepsilon'(u_\varepsilon,v_\varepsilon)(u_\varepsilon,v_\varepsilon) = G_\varepsilon(u_\varepsilon,v_\varepsilon) = 0$ and

$$G_\varepsilon'(u_\varepsilon,v_\varepsilon)(u_\varepsilon,v_\varepsilon) = -(p_s^*-2\varepsilon-p)\int_{B_R(0)}\big(|u_\varepsilon|^{p_s^*-2\varepsilon}+|v_\varepsilon|^{p_s^*-2\varepsilon}+\gamma|u_\varepsilon|^{\alpha-\varepsilon}|v_\varepsilon|^{\beta-\varepsilon}\big)\,dx < 0,$$

we obtain $\lambda = 0$ and hence $J_\varepsilon'(u_\varepsilon,v_\varepsilon) = 0$. Since $\tilde A_\varepsilon(R) = J_\varepsilon(u_\varepsilon,v_\varepsilon)$, Lemma 3.6 yields $u_\varepsilon\not\equiv 0$ and $v_\varepsilon\not\equiv 0$. By the strong maximum principle [15, Lemma 3.3], we conclude the desired result.□

Lemma 3.8

For any $(u,v)\in\tilde{\mathcal N}$, there is a sequence $(u_n,v_n)\in\tilde{\mathcal N}\cap\big(C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)\big)$ such that $(u_n,v_n)\to(u,v)$ in $X$ as $n\to\infty$.

Proof

By density, there is a sequence $(\tilde u_n,\tilde v_n)\in C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)$ such that $(\tilde u_n,\tilde v_n)\to(u,v)$ in $X$ as $n\to\infty$. Let

$$t_n = \left[\frac{\|\tilde u_n\|_{\dot W^{s,p}}^p + \|\tilde v_n\|_{\dot W^{s,p}}^p}{\int_{\mathbb{R}^N}\big(|\tilde u_n|^{p_s^*}+|\tilde v_n|^{p_s^*}+\gamma|\tilde u_n|^\alpha|\tilde v_n|^\beta\big)\,dx}\right]^{\frac{1}{p_s^*-p}}$$

and note that $t_n\to 1$ since $(u,v)\in\tilde{\mathcal N}$. Then $(u_n,v_n) = (t_n\tilde u_n, t_n\tilde v_n)\in\tilde{\mathcal N}\cap\big(C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)\big)$ and $(u_n,v_n)\to(u,v)$ in $X$.□
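The normalization $t_n$ works because the two sides of the Nehari constraint have different homogeneities; explicitly, writing $I(w,z)$ for the right-hand side of the constraint (a notation introduced only for this remark):

```latex
% For t > 0 and I(w,z) := \int_{\mathbb{R}^N}(|w|^{p_s^*}+|z|^{p_s^*}
%                          +\gamma|w|^{\alpha}|z|^{\beta})\,dx > 0,
\|tw\|_{\dot W^{s,p}}^p + \|tz\|_{\dot W^{s,p}}^p = t^p\big(\|w\|_{\dot W^{s,p}}^p+\|z\|_{\dot W^{s,p}}^p\big),
\qquad
I(tw,tz) = t^{p_s^*} I(w,z) \quad (\text{since } \alpha+\beta = p_s^*),
% so (tw,tz) \in \tilde{\mathcal N} if and only if
t^{p_s^*-p} = \frac{\|w\|_{\dot W^{s,p}}^p+\|z\|_{\dot W^{s,p}}^p}{I(w,z)},
% which is the formula defining t_n, and t_n \to 1 because the limit pair
% (u,v) already lies on \tilde{\mathcal N}.
```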

Lemma 3.9

There is a minimizing sequence $(u_n,v_n)\in\tilde{\mathcal N}\cap\big(C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)\big)$ for $\tilde A$.

Proof

Let $(\tilde u_n,\tilde v_n)\in\tilde{\mathcal N}$ be a minimizing sequence for $\tilde A$, i.e., $J(\tilde u_n,\tilde v_n)\to\tilde A$. By the continuity of $J$ and Lemma 3.8, there is $(u_n,v_n)\in\tilde{\mathcal N}\cap\big(C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)\big)$ such that

$$\big|J(u_n,v_n) - J(\tilde u_n,\tilde v_n)\big| < \frac{1}{n}.$$

Then $J(u_n,v_n)\to\tilde A$, so $(u_n,v_n)\in\tilde{\mathcal N}\cap\big(C_0^\infty(\mathbb{R}^N)\times C_0^\infty(\mathbb{R}^N)\big)$ is a minimizing sequence for $\tilde A$.□

Proof of Theorem 1.7

First, we prove that

$$(3.13)\qquad \tilde A(R) = \tilde A \quad \text{for every } R > 0.$$

Let $0 < R_1 < R_2$; then $\tilde{\mathcal N}(R_1)\subset\tilde{\mathcal N}(R_2)$, and hence, by definition, $\tilde A(R_2)\le\tilde A(R_1)$. To prove the reverse inequality, let $(u,v)\in\tilde{\mathcal N}(R_2)$ and define

$$(u_1(x), v_1(x)) \coloneqq \left(\frac{R_2}{R_1}\right)^{\frac{N-sp}{p}}\left(u\!\left(\frac{R_2}{R_1}x\right),\ v\!\left(\frac{R_2}{R_1}x\right)\right).$$

Clearly, $(u_1,v_1)\in\tilde{\mathcal N}(R_1)$. Therefore, we obtain

$$\tilde A(R_1)\le J(u_1,v_1) = J(u,v) \quad \text{for any } (u,v)\in\tilde{\mathcal N}(R_2),$$

and this implies that $\tilde A(R_1)\le\tilde A(R_2)$. So, $\tilde A(R_1)=\tilde A(R_2)$. Let $(u_n,v_n)\in\tilde{\mathcal N}$ be a minimizing sequence for $\tilde A$. In view of Lemma 3.9, we may assume that $u_n, v_n\in W_0^{s,p}(B_{R_n}(0))$ for some $R_n > 0$. Then $(u_n,v_n)\in\tilde{\mathcal N}(R_n)$ and

$$\tilde A = \lim_{n\to\infty}J(u_n,v_n) \ge \lim_{n\to\infty}\tilde A(R_n) = \tilde A(R),$$

and hence, (3.13) holds.
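The scale invariance used in the proof of (3.13) can be checked directly; a sketch, with $\lambda = R_2/R_1$ and $u_1(x) = \lambda^{(N-sp)/p}u(\lambda x)$, via the substitution $(x,y)\mapsto(\lambda^{-1}x,\lambda^{-1}y)$:

```latex
[u_1]_{\dot W^{s,p}}^p
= \lambda^{N-sp}\int_{\mathbb{R}^{2N}}\frac{|u(\lambda x)-u(\lambda y)|^p}{|x-y|^{N+sp}}\,dx\,dy
= \lambda^{(N-sp)+(N+sp)-2N}\,[u]_{\dot W^{s,p}}^p = [u]_{\dot W^{s,p}}^p,
% and, since (N-sp)p_s^*/p = N and \alpha+\beta = p_s^*,
\int_{\mathbb{R}^N}|u_1|^{p_s^*}\,dx = \int_{\mathbb{R}^N}|u|^{p_s^*}\,dx,
\qquad
\int_{\mathbb{R}^N}|u_1|^{\alpha}|v_1|^{\beta}\,dx = \int_{\mathbb{R}^N}|u|^{\alpha}|v|^{\beta}\,dx.
% Moreover, u supported in B_{R_2}(0) forces u_1 to be supported in B_{R_1}(0),
% so the Nehari constraint and the energy are both preserved.
```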

Let $(u,v)\in\tilde{\mathcal N}(R)$ be arbitrary; then there exists $t_\varepsilon > 0$ with $t_\varepsilon\to 1$ as $\varepsilon\to 0$ such that $(t_\varepsilon^{1/p}u, t_\varepsilon^{1/p}v)\in\tilde{\mathcal N}_\varepsilon(R)$. Therefore, we have

$$\limsup_{\varepsilon\to 0}\tilde A_\varepsilon(R) \le \limsup_{\varepsilon\to 0}J_\varepsilon\big(t_\varepsilon^{1/p}u, t_\varepsilon^{1/p}v\big) = J(u,v).$$

Thus, by using (3.13), we obtain

$$(3.14)\qquad \limsup_{\varepsilon\to 0}\tilde A_\varepsilon(R) \le \tilde A(R) = \tilde A.$$

By Proposition 3.7, let $(u_\varepsilon,v_\varepsilon)$ be a positive least energy solution of (3.9), radially symmetric and nonincreasing. Then, by Lemma 3.5, for any $\varepsilon_0\in\big(0,\min\{\alpha-1,\beta-1,(p_s^*-p)/2\}\big)$, there exists a constant $C_{\varepsilon_0} > 0$ such that

$$(3.15)\qquad \tilde A_\varepsilon(R) = \frac{p_s^*-p-2\varepsilon}{p(p_s^*-2\varepsilon)}\big(\|u_\varepsilon\|_{\dot W^{s,p}}^p + \|v_\varepsilon\|_{\dot W^{s,p}}^p\big) \ge C_{\varepsilon_0} \quad \forall\,\varepsilon\in(0,\varepsilon_0].$$

Therefore, from (3.14) and (3.15), the norms $\|u_\varepsilon\|_{\dot W^{s,p}}$ and $\|v_\varepsilon\|_{\dot W^{s,p}}$ are uniformly bounded in $W_0^{s,p}(B_R(0))$. Thus, by reflexivity, up to a subsequence, $u_\varepsilon\rightharpoonup u_0$ and $v_\varepsilon\rightharpoonup v_0$ weakly in $W_0^{s,p}(B_R(0))$ as $\varepsilon\to 0$. Since (3.9) is a subcritical system in a bounded domain, passing to the limit $\varepsilon\to 0$ shows that $(u_0,v_0)$ is a solution of the following system:

$$\begin{cases} (-\Delta_p)^s u = |u|^{p_s^*-2}u + \dfrac{\alpha\gamma}{p_s^*}|u|^{\alpha-2}u|v|^{\beta} & \text{in } B_R(0),\\[1ex] (-\Delta_p)^s v = |v|^{p_s^*-2}v + \dfrac{\beta\gamma}{p_s^*}|v|^{\beta-2}v|u|^{\alpha} & \text{in } B_R(0),\\[1ex] u,v\in W_0^{s,p}(B_R(0)). \end{cases}$$

Also note that $u_0$ and $v_0$ are nonnegative, and from (3.15) we see that $(u_0,v_0)\ne(0,0)$. We may assume that $u_0\not\equiv 0$. Therefore, by the strong maximum principle [15], we obtain $u_0 > 0$ in $B_R(0)$. Further, we claim that $v_0\not\equiv 0$. Indeed, if $v_0\equiv 0$, then substituting $(u_0,v_0)$ into the aforementioned system shows that $u_0$ is a positive solution of $(-\Delta_p)^s u = |u|^{p_s^*-2}u$ in $B_R(0)$. Since $u_0\in W_0^{s,p}(B_R(0))$, it follows that

$$(3.16)\qquad J(u_0,0) = \frac{1}{p}\|u_0\|_{\dot W^{s,p}}^p - \frac{1}{p_s^*}\int_{\mathbb{R}^N}|u_0|^{p_s^*}\,dx = \frac{1}{p}\|u_0\|_{\dot W^{s,p}}^p - \frac{1}{p_s^*}\int_{B_R(0)}|u_0|^{p_s^*}\,dx = \frac{s}{N}\|u_0\|_{\dot W^{s,p}}^p.$$

We also observe that $(u_0,0), (0,u_0)\in\tilde{\mathcal N}$. Therefore, by using (3.12), we have

$$(3.17)\qquad \tilde A < \min\left\{\inf_{(u,0)\in\tilde{\mathcal N}}J(u,0),\ \inf_{(0,v)\in\tilde{\mathcal N}}J(0,v)\right\} \le \min\{J(u_0,0), J(0,u_0)\} = J(u_0,0).$$

Combining (3.16) and (3.17) yields

$$(3.18)\qquad \tilde A < \frac{s}{N}\|u_0\|_{\dot W^{s,p}}^p.$$

Further, by (3.14) and the fact that $(u_\varepsilon,v_\varepsilon)$ is a positive least energy solution of (3.9), it follows that

$$\begin{aligned} \tilde A &\ge \limsup_{\varepsilon\to 0}\tilde A_\varepsilon(R) = \limsup_{\varepsilon\to 0}J_\varepsilon(u_\varepsilon,v_\varepsilon)\\ &= \limsup_{\varepsilon\to 0}\left[\frac{1}{p}\big(\|u_\varepsilon\|_{\dot W^{s,p}}^p+\|v_\varepsilon\|_{\dot W^{s,p}}^p\big) - \frac{1}{p_s^*-2\varepsilon}\int_{B_R(0)}\big(|u_\varepsilon|^{p_s^*-2\varepsilon}+|v_\varepsilon|^{p_s^*-2\varepsilon}+\gamma|u_\varepsilon|^{\alpha-\varepsilon}|v_\varepsilon|^{\beta-\varepsilon}\big)\,dx\right]\\ &= \limsup_{\varepsilon\to 0}\left(\frac{1}{p} - \frac{1}{p_s^*-2\varepsilon}\right)\big(\|u_\varepsilon\|_{\dot W^{s,p}}^p+\|v_\varepsilon\|_{\dot W^{s,p}}^p\big)\\ &\ge \frac{s}{N}\big(\|u_0\|_{\dot W^{s,p}}^p+\|v_0\|_{\dot W^{s,p}}^p\big) = \frac{s}{N}\|u_0\|_{\dot W^{s,p}}^p > \tilde A \quad \text{(by (3.18))}, \end{aligned}$$

which is a contradiction. Hence, $v_0\not\equiv 0$, and again by the strong maximum principle, $v_0 > 0$ in $B_R(0)$. Moreover, since $(u_\varepsilon,v_\varepsilon)$ is radial and $u_\varepsilon\to u_0$, $v_\varepsilon\to v_0$ a.e. (up to a subsequence), $u_0$ and $v_0$ are radial functions. Hence, $(u_0,v_0)$ is a positive radial solution of (1.13).□

Proof of Theorem 1.6

To prove the existence of $(k(\gamma), \ell(\gamma))$ for small $\gamma > 0$, recalling (3.2), we write $F_i(k,\ell,\gamma)$ instead of $F_i(k,\ell)$, $i = 1,2$, in this case. Let $k(0) = 1 = \ell(0)$; then $F_i(k(0),\ell(0),0) = 0$, $i = 1,2$. Clearly, we have

$$\frac{\partial F_1}{\partial k}(k(0),\ell(0),0) = \frac{\partial F_2}{\partial \ell}(k(0),\ell(0),0) = \frac{p_s^*-p}{p} > 0$$

and

$$\frac{\partial F_1}{\partial \ell}(k(0),\ell(0),0) = \frac{\partial F_2}{\partial k}(k(0),\ell(0),0) = 0.$$

Therefore, the Jacobian determinant is $J_F(k(0),\ell(0)) = \left(\frac{p_s^*-p}{p}\right)^2 > 0$, where $F \coloneqq (F_1,F_2)$. Hence, by the implicit function theorem, $k(\gamma)$ and $\ell(\gamma)$ are well-defined functions of class $C^1$ in $(-\gamma_2,\gamma_2)$ for some $\gamma_2 > 0$, and $F_i(k(\gamma),\ell(\gamma),\gamma) = 0$ for $\gamma\in(-\gamma_2,\gamma_2)$. Then $(k(\gamma)^{1/p}U, \ell(\gamma)^{1/p}U)$ is a positive solution of $(\tilde S_\gamma)$. Since $\lim_{\gamma\to 0}(k(\gamma)+\ell(\gamma)) = 2$, there exists $\gamma_1\in(0,\gamma_2]$ such that $k(\gamma)+\ell(\gamma) > 1$ for all $\gamma\in(0,\gamma_1)$. Therefore, by (3.12), we obtain

$$J\big(k(\gamma)^{1/p}U, \ell(\gamma)^{1/p}U\big) = \frac{s}{N}\big(k(\gamma)+\ell(\gamma)\big)S^{\frac{N}{sp}} > \frac{s}{N}S^{\frac{N}{sp}} > \tilde A.$$

This completes the proof.□

  1. Funding information: The research of M. Bhakta is partially supported by the SERB WEA grant (WEA/2020/000005) and DST Swarnajaynti fellowship (SB/SJF/2021-22/09). K. Perera was partially supported by the Simons Foundation grant 962241. F. Sk was partially supported by the SERB grant WEA/2020/000005.

  2. Conflict of interest: The authors declare that there is no conflict of interest in this article.

References

[1] N. Akhmediev and A. Ankiewicz, Partially coherent solitons on a finite background, Phys. Rev. Lett. 82 (1999), no. 13, 2661. DOI: 10.1103/PhysRevLett.82.2661.

[2] F. J. Almgren and E. H. Lieb, Symmetric decreasing rearrangement is sometimes continuous, J. Amer. Math. Soc. 2 (1989), no. 4, 683–773. DOI: 10.1090/S0894-0347-1989-1002633-4.

[3] J. C. Bhakta, Approximate interacting solitary wave solutions for a pair of coupled nonlinear Schrödinger equations, Phys. Rev. E 49 (1994), no. 6, 5731–5741. DOI: 10.1103/PhysRevE.49.5731.

[4] M. Bhakta, S. Chakraborty, O. H. Miyagaki, and P. Pucci, Fractional elliptic systems with critical nonlinearities, Nonlinearity 34 (2021), no. 11, 7540–7573. DOI: 10.1088/1361-6544/ac24e5.

[5] M. Bhakta, S. Chakraborty, and P. Pucci, Nonhomogeneous systems involving critical or subcritical nonlinearities, Differ. Integr. Equ. 33 (2020), no. 7–8, 323–336. DOI: 10.57262/die/1594692052.

[6] M. Bhakta and D. Mukherjee, Sign changing solutions of p-fractional equations with concave-convex nonlinearities, Topol. Methods Nonlinear Anal. 51 (2018), no. 2, 511–544. DOI: 10.12775/TMNA.2017.052.

[7] M. Bhakta and D. Mukherjee, Multiplicity results for (p,q) fractional elliptic equations involving critical nonlinearities, Adv. Differ. Equ. 24 (2019), no. 3–4, 185–228. DOI: 10.57262/ade/1548212469.

[8] L. Brasco, E. Lindgren, and E. Parini, The fractional Cheeger problem, Interfaces Free Bound. 16 (2014), no. 3, 419–458. DOI: 10.4171/IFB/325.

[9] L. Brasco, S. Mosconi, and M. Squassina, Optimal decay of extremal functions for the fractional Sobolev inequality, Calc. Var. Partial Differ. Equ. 55 (2016), no. 2, Art. 23, 32 pp. DOI: 10.1007/s00526-016-0958-y.

[10] H. P. Bueno, E. HuertoCaqui, O. H. Miyagaki, and F. R. Pereira, Critical concave convex Ambrosetti-Prodi type problems for fractional p-Laplacian, Adv. Nonlinear Stud. 20 (2020), no. 4, 847–865. DOI: 10.1515/ans-2020-2106.

[11] W. Chen, C. Li, and B. Ou, Classification of solutions for an integral equation, Commun. Pure Appl. Math. 59 (2006), no. 3, 330–343. DOI: 10.1002/cpa.20116.

[12] W. Chen and M. Squassina, Critical nonlocal systems with concave-convex powers, Adv. Nonlinear Stud. 16 (2016), no. 4, 821–842. DOI: 10.1515/ans-2015-5055.

[13] D. G. Costa, O. H. Miyagaki, M. Squassina, and J. Yang, Asymptotics of ground states for fractional Hénon systems, in: Contributions to Nonlinear Elliptic Equations and Systems: A Tribute to Djairo Guedes de Figueiredo on the Occasion of his 80th Birthday, Progr. Nonlinear Differential Equations Appl. 86 (2015), 133–161. DOI: 10.1007/978-3-319-19902-3_10.

[14] A. Cotsiolis and N. Tavoularis, Best constants for Sobolev inequalities for higher order fractional derivatives, J. Math. Anal. Appl. 295 (2004), no. 1, 225–236. DOI: 10.1016/j.jmaa.2004.03.034.

[15] L. M. Del Pezzo and A. Quaas, A Hopf's lemma and a strong minimum principle for the fractional p-Laplacian, J. Differ. Equ. 263 (2017), no. 1, 765–778. DOI: 10.1016/j.jde.2017.02.051.

[16] L. F. O. Faria, O. H. Miyagaki, F. R. Pereira, M. Squassina, and C. Zhang, The Brézis-Nirenberg problem for nonlocal systems, Adv. Nonlinear Anal. 5 (2016), no. 1, 85–103. DOI: 10.1515/anona-2015-0114.

[17] A. Fiscella, P. Pucci, and S. Saldi, Existence of entire solutions for Schrödinger-Hardy systems involving two fractional operators, Nonlinear Anal. 158 (2017), 109–131. DOI: 10.1016/j.na.2017.04.005.

[18] R. L. Frank and R. Seiringer, Non-linear ground state representations and sharp Hardy inequalities, J. Funct. Anal. 255 (2008), no. 12, 3407–3430. DOI: 10.1016/j.jfa.2008.05.015.

[19] Z. Guo, K. Perera, and W. Zou, On critical p-Laplacian systems, Adv. Nonlinear Stud. 17 (2017), no. 4, 641–659. DOI: 10.1515/ans-2017-6029.

[20] X. He, M. Squassina, and W. Zou, The Nehari manifold for fractional systems involving critical nonlinearities, Commun. Pure Appl. Anal. 15 (2016), no. 4, 1285–1308. DOI: 10.3934/cpaa.2016.15.1285.

[21] S. V. Hernández and A. Saldana, Existence and convergence of solutions to fractional pure critical exponent problems, Adv. Nonlinear Stud. 21 (2021), no. 4, 827–854. DOI: 10.1515/ans-2021-2041.

[22] G. Lu and Y. Shen, Existence of solutions to fractional p-Laplacian systems with homogeneous nonlinearities of critical Sobolev growth, Adv. Nonlinear Stud. 20 (2020), no. 3, 579–597. DOI: 10.1515/ans-2020-2098.

[23] T. Luo and H. Hajaiej, Normalized solutions for a class of scalar field equations involving mixed fractional Laplacians, Adv. Nonlinear Stud. 22 (2022), no. 1, 228–247. DOI: 10.1515/ans-2022-0013.

[24] S. Mosconi, K. Perera, M. Squassina, and Y. Yang, The Brezis-Nirenberg problem for the fractional p-Laplacian, Calc. Var. Partial Differ. Equ. 55 (2016), no. 4, Art. 105, 25 pp. DOI: 10.1007/s00526-016-1035-2.

[25] S. Peng, Y. F. Peng, and Z. Q. Wang, On elliptic systems with Sobolev critical growth, Calc. Var. Partial Differ. Equ. 55 (2016), no. 6, Art. 142, 30 pp. DOI: 10.1007/s00526-016-1091-7.

[26] S. Peng and Z. Wang, Segregated and synchronized vector solutions for nonlinear Schrödinger systems, Arch. Ration. Mech. Anal. 208 (2013), no. 1, 305–339. DOI: 10.1007/s00205-012-0598-0.

[27] Y. Shen, Existence of solutions to elliptic problems with fractional p-Laplacian and multiple critical nonlinearities in the entire space RN, Nonlinear Anal. 202 (2021), Paper No. 112102, 17 pp. DOI: 10.1016/j.na.2020.112102.

Received: 2022-11-24
Revised: 2023-08-10
Accepted: 2023-08-11
Published Online: 2023-09-13

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
