Article, Open Access

Local well-posedness for the two-component Benjamin-Ono equation

  • Min Zhao
Published/Copyright: October 29, 2025

Abstract

The Cauchy problem for the two-component Benjamin-Ono equation is considered. It is shown that this problem is locally well-posed in $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ for any $s > \frac{9}{8}$. A local smoothing effect in $H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{3}{2}$, is also established.

MSC 2020: 35Q51; 37K55

1 Introduction

Our objective is to investigate the local well-posedness problem for the two-component Benjamin-Ono (BO) equation. We consider the initial value problem in the following form:

$$\text{(1.1)}\qquad\begin{cases} u_t = Hu_{xx} + aHv_{xx} + auu_x + a^2(uv)_x + a^2vv_x, & t > 0,\ x \in \mathbb{R},\\ v_t = Hv_{xx} + aHu_{xx} + avv_x + a^2(uv)_x + a^2uu_x, & t > 0,\ x \in \mathbb{R},\\ u(0,x) = u_0(x), & x \in \mathbb{R},\\ v(0,x) = v_0(x), & x \in \mathbb{R},\end{cases}$$

where $a \neq 0$ is a real constant and $u = u(x,t)$, $v = v(x,t)$ are real-valued functions of the two real variables $x$ and $t$. Here $H$ is the Hilbert transform, defined by

$$Hu(x,t) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{\mathbb{R}} \frac{u(y,t)}{x-y}\,dy.$$
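For readers who wish to experiment numerically, the Hilbert transform is conveniently realized through its Fourier multiplier $-i\,\mathrm{sgn}(\xi)$. The sketch below (our own illustration on a periodic grid, not part of the original text) checks the multiplier identity $H(Hf) = -f$ on mean-zero data.

```python
import numpy as np

# Discrete Hilbert transform via its Fourier multiplier -i*sgn(xi);
# a periodic-grid stand-in for the principal-value integral above.
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)   # wavenumbers

def hilbert(f):
    return np.real(np.fft.ifft(-1j*np.sign(k)*np.fft.fft(f)))

f = x*np.exp(-x**2)                           # smooth, odd, mean-zero profile
err = np.max(np.abs(hilbert(hilbert(f)) + f))  # H(Hf) = -f on mean-zero data
```

On the zero Fourier mode the multiplier vanishes, which is why the identity is tested on a mean-zero profile.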

This system was derived by Grimshaw and Zhu [3] as a model to describe the oblique interaction of weakly nonlinear, long internal gravity waves in both shallow and deep fluids. When $v = 0$ or $u = v$, the system can be simplified to the BO equation. Thus, system (1.1) can be viewed as a two-component generalization of the BO equation

$$\text{(1.2)}\qquad u_t + Hu_{xx} + 2uu_x = 0.$$

The BO equation (1.2), first derived by Benjamin [1] using the Fourier integral theorem alongside a heuristic argument, has been rigorously analyzed in subsequent studies. Ono [13] provided a more formal derivation, demonstrating that the BO equation (1.2) arises in the context of modeling internal gravity waves in stratified fluids and governs the propagation of nonlinear Rossby waves in rotating fluids. It is well known that the BO equation (1.2) is a completely integrable system and is well-posed for initial data in the Sobolev space $H^s(\mathbb{R})$ [18]. The local and global well-posedness of the BO equation has been established by several researchers, including Saut [15], Iório [6], Ponce [14], Koch and Tzvetkov [12], Kenig and Koenig [8], and Tao [17]. Notably, Ionescu and Kenig [5] confirmed global well-posedness in $H^s(\mathbb{R})$ for $s \geq 0$. Recently, Killip et al. [11] showed that the BO equation is well-posed, both on the line and on the circle, in the Sobolev spaces $H^s$ for $s > -\frac{1}{2}$; their proof is based on a novel gauge transformation and is strengthened by the introduction of a modified Lax pair representation for the entire hierarchy.

Similar to the BO equation, the linear terms in the two-component BO system (1.1) fail to adequately balance the derivatives of the nonlinear terms. This makes it impossible to obtain well-posedness results for the system in Bourgain spaces. To address this challenge, we adopt the methods of Kenig and Koenig [8] to develop a low-regularity theory for the system. It is noteworthy that system (1.1) possesses the following conserved quantities:

$$I_1 = \int_{\mathbb{R}} u\,dx, \qquad I_2 = \int_{\mathbb{R}} v\,dx, \qquad I_3 = \int_{\mathbb{R}} (u^2 + v^2)\,dx.$$

Moreover, there exists a conservation law

$$I_4 = \int_{\mathbb{R}} \Big( uHu_x + vHv_x + 4uHv_x + \frac{2}{3}u^3 + 4u^2v + 4v^2u + \frac{2}{3}v^3 \Big)dx$$

for $a = 2$. These conservation laws provide a priori bounds on the solution. Specifically, from the invariants $I_3$ and $I_4$, we can conclude that the $H^{\frac12}$ norm of the solution remains bounded for finite time, provided that the initial data belongs to $H^{\frac12}$.
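As a sanity check on the invariants $I_1$, $I_2$, $I_3$, one can integrate (1.1) numerically for a short time and monitor them. The sketch below is our own illustration, not from the paper: it uses a pseudospectral discretization with an explicit RK4 time step on a periodic box as a proxy for the line, taking $a = 2$ and the sign convention of (1.1) as printed.

```python
import numpy as np

# Pseudospectral RK4 integration of system (1.1) with a = 2 on a 2*pi-periodic
# box; monitors I1 = int u, I2 = int v, I3 = int (u^2 + v^2).
N = 256
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = 2*np.pi/N
k = np.fft.fftfreq(N, d=1.0/N)
Hmul, ik = -1j*np.sign(k), 1j*k

H  = lambda f: np.real(np.fft.ifft(Hmul*np.fft.fft(f)))  # Hilbert transform
Dx = lambda f: np.real(np.fft.ifft(ik*np.fft.fft(f)))    # d/dx

a = 2.0
def rhs(u, v):
    ut = H(Dx(Dx(u))) + a*H(Dx(Dx(v))) + a*u*Dx(u) + a**2*Dx(u*v) + a**2*v*Dx(v)
    vt = H(Dx(Dx(v))) + a*H(Dx(Dx(u))) + a*v*Dx(v) + a**2*Dx(u*v) + a**2*u*Dx(u)
    return ut, vt

u, v = 0.1*np.sin(x), 0.05*np.cos(2*x)
I1, I2, I3 = u.sum()*dx, v.sum()*dx, (u**2 + v**2).sum()*dx

dt = 5e-5                      # small step: the dispersive term is treated explicitly
for _ in range(1000):          # integrate to t = 0.05
    k1 = rhs(u, v)
    k2 = rhs(u + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
    k3 = rhs(u + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
    k4 = rhs(u + dt*k3[0], v + dt*k3[1])
    u = u + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v = v + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

d1 = abs(u.sum()*dx - I1)
d2 = abs(v.sum()*dx - I2)
d3 = abs((u**2 + v**2).sum()*dx - I3)/I3
```

The drifts `d1`, `d2`, `d3` stay at the level of time-discretization and round-off error, consistent with the exact conservation of $I_1$, $I_2$, $I_3$.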

The main results of this work are as follows.

Theorem 1.1

Let $(u_0, v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$ for $s > \frac{9}{8}$. There exist $T = T(\|(u_0,v_0)\|_{H^s}) > 0$ and a unique solution $(u,v) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$ of (1.1) corresponding to the initial data $(u_0, v_0)$. Moreover, the map $(u_0, v_0) \mapsto (u,v)$ is continuous from $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ to $C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$; thus, $(u,v)$ depends continuously on $(u_0, v_0)$.

We divide the proof of Theorem 1.1 into two main parts. In the first part, we use the parabolic regularization method to prove local existence for the initial value problem in the Sobolev space $H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{3}{2}$. In the second part, we employ the methods of Kenig and Koenig to prove local existence for the initial value problem in the Sobolev space $H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $\frac{3}{2} \geq s > \frac{9}{8}$.

We shall use the following notation. The operator $D_x$ is defined by $D_x = \frac{1}{i}\partial_x$. Constants are denoted by $C$ and may change from line to line. Denote $|(u,v)| = (u^2+v^2)^{\frac12}$ and $\|(u,v)\|_p = \big(\int_{\mathbb{R}} |(u,v)|^p\,dx\big)^{\frac1p}$; thus, for any $s$, the norm on $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ is given by $\|(u,v)\|_{H^s} = (\|u\|_{H^s}^2 + \|v\|_{H^s}^2)^{\frac12}$. Let $\chi$ be a nondecreasing $C^\infty$-function with $\chi \leq 1$ and $\chi'$ supported in $(1,2)$. For any $j \in \mathbb{Z}$, $\chi_j = \chi(\cdot - j)$.
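To make the Sobolev norms concrete, the following sketch (ours, on a $2\pi$-periodic grid rather than $\mathbb{R}$) evaluates $\|u\|_{H^s}^2 = \sum_k (1+k^2)^s|\hat u(k)|^2$ via the FFT, together with the product norm $\|(u,v)\|_{H^s}$ just defined.

```python
import numpy as np

# H^s norm on a 2*pi-periodic grid via Parseval's identity.
N = 512
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = 2*np.pi/N
k = np.fft.fftfreq(N, d=1.0/N)

def hs_norm(u, s):
    uh = np.fft.fft(u)
    return np.sqrt(dx/N*np.sum((1.0 + k**2)**s*np.abs(uh)**2))

def pair_norm(u, v, s):   # ||(u, v)||_{H^s} as defined in the text
    return np.sqrt(hs_norm(u, s)**2 + hs_norm(v, s)**2)

# For u = sin(x): ||u||_{L^2}^2 = pi and ||u||_{H^1}^2 = (1 + 1)*pi = 2*pi.
u = np.sin(x)
```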

The article is organized as follows. In Section 2, we introduce the necessary estimates. In Section 3, using the parabolic regularization method, we prove the local well-posedness of (1.1) in the Sobolev spaces $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ with regularity $s > \frac{3}{2}$. In Section 4, we first state the linear and nonlinear estimates; the rest of the section contains the proof of Theorem 4.1.

2 Preliminary estimates

Let us recall the following lemmas.

Lemma 2.1

[16] Let $s \geq \gamma + 1 > \frac{3}{2}$, and let $f, g \in H^s(\mathbb{R})$. Then, there exists a constant $c = c(\gamma, s)$ such that

$$\|[D^s, f]g\|_{L^2} \leq c(\gamma, s)\big(\|f\|_{H^s}\|g\|_{H^\gamma} + \|f\|_{H^{\gamma+1}}\|g\|_{H^{s-1}}\big);$$

moreover, $c(\gamma, s) = c(s)\big(\gamma - \frac{1}{2}\big)^{-1}$.

Lemma 2.1 can be reformulated as follows.

Lemma 2.2

For any $s, \gamma_1, \gamma_2 \in \mathbb{R}$ such that $s > \frac{3}{2}$ and $s - 1 \geq \gamma_i > \frac{1}{2}$, $i = 1, 2$, there exists a constant $c = c(s, \gamma_1, \gamma_2) > 0$ such that

$$\|D^s(fg)\|_{L^2} \leq c\big(\|f\|_{H^s}\|g\|_{H^{\gamma_1}} + \|f\|_{H^{\gamma_2+1}}\|g\|_{H^{s-1}}\big) + \|fD^sg\|_{L^2}$$

for all $f, g \in H^s(\mathbb{R})$.

Lemma 2.3

[10] If $s > 0$ and $p, p_2, p_3 \in (1, \infty)$, $p_1, p_4 \in (1, \infty]$ with $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p_3} + \frac{1}{p_4}$, then

$$\|[J^s, f]g\|_{L^p} \lesssim \|\partial_xf\|_{L^{p_1}}\|J^{s-1}g\|_{L^{p_2}} + \|J^sf\|_{L^{p_3}}\|g\|_{L^{p_4}}$$

and

$$\|J^s(fg)\|_{L^p} \lesssim \|f\|_{L^{p_1}}\|J^sg\|_{L^{p_2}} + \|J^sf\|_{L^{p_3}}\|g\|_{L^{p_4}}.$$

We require the following Leibniz rules [9] for fractional derivatives.

Lemma 2.4

(a) Let $\alpha = \alpha_1 + \alpha_2 \in (0,1)$ with $\alpha_i \in (0, \alpha)$, $p \in [1, \infty)$, and $p_1, p_2 \in (1, \infty)$ such that $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2}$. Then

$$\text{(2.1)}\qquad \|D^\alpha(fg) - fD^\alpha g - gD^\alpha f\|_{L^p} \lesssim \|D^{\alpha_1}f\|_{L^{p_1}}\|D^{\alpha_2}g\|_{L^{p_2}}.$$

Moreover, if $p > 1$, then the case $\alpha_2 = 0$, $p_2 \in (1, \infty]$ is also allowed.

(b) Let $\alpha = \alpha_1 + \alpha_2 \in (0,1)$, $\alpha_i \in [0, \alpha]$, and let $p, p_1, p_2, q, q_1, q_2 \in (1, \infty)$ be such that $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2}$ and $\frac{1}{q} = \frac{1}{q_1} + \frac{1}{q_2}$. Then

$$\|D^\alpha(fg) - fD^\alpha g - gD^\alpha f\|_{L^p_xL^q_t} \lesssim \|D^{\alpha_1}f\|_{L^{p_1}_xL^{q_1}_t}\|D^{\alpha_2}g\|_{L^{p_2}_xL^{q_2}_t}.$$

Moreover, the following additional cases are allowed: $(\alpha_1, q_1) = (0, \infty)$; $(p, q) = (1, 2)$; and $q = 1$, provided that $\alpha_i \in (0, \alpha)$. We remark that all of these results remain valid with $\widetilde{D} = DH$ instead of $D$.

Lemma 2.5

[14] Let $s > 0$ and $p \in (1, \infty)$. Then, for any $\theta \in [0, s]$ and $p_1 \in (1, \infty]$, with $p_2, p_3, p_4 \in (1, \infty)$ satisfying $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p_3} + \frac{1}{p_4}$,

$$\|J^s(fg) - f(J^sg) - g(J^sf)\|_{L^p} \leq c\big(\|\partial_xf\|_{L^{p_1}}\|J^{s-1}g\|_{L^{p_2}} + \|J^{s-1}f\|_{L^{p_2}}\|\partial_xg\|_{L^{p_1}} + \|J^{s-\theta}f\|_{L^{p_3}}\|J^\theta g\|_{L^{p_4}}\big).$$

Lemma 2.6

[4,7] Let $U_\gamma f = \mathcal{F}^{-1}\big[e^{i\xi|\xi|t - \gamma|\xi|^{\frac32}t}\mathcal{F}f\big]$, and let $0 < r \leq p$. Then

$$\|\partial_x^sU_\gamma f\|_{L^p_x} \lesssim (\gamma t)^{-\frac{2s}{3}}\|f\|_{L^r_x}$$

for any $s > 0$ and $f \in L^r(\mathbb{R})$.

Definition 2.1

Let $\varphi \in C^\infty(\mathbb{R})$ be such that

  1. $0 \leq \varphi(\xi) \leq 1$ for $\xi \in \mathbb{R}$,

  2. $\varphi(0) = 1$,

  3. $\frac{d^k}{d\xi^k}\psi(0) = 0$ for $k \in \mathbb{N}$, where $\psi(\xi) = 1 - \varphi(\xi)$,

  4. $\varphi(\xi)$ tends exponentially to $0$ as $|\xi| \to \infty$.

Then, for $\xi \in \mathbb{R}$, define

$$\hat{u}_0^\varepsilon(\xi) = \varphi(\varepsilon^{\frac16}\xi)\,\hat{u}_0(\xi).$$

The following proposition was proved in [2].

Proposition 2.1

[2] Let $s > 0$ and $u_0 \in H^s(\mathbb{R})$. Then $u_0^\varepsilon \in H^\infty(\mathbb{R})$, and $\|u_0 - u_0^\varepsilon\|_{H^s} \to 0$ as $\varepsilon \to 0$. For any $r \geq 0$, there exists $C > 0$ such that

$$\text{(2.2)}\qquad \|u_0^\varepsilon\|_{H^{s+r}} \leq C\varepsilon^{-\frac{r}{6}}\|u_0\|_{H^s}, \qquad \|u_0 - u_0^\varepsilon\|_{H^{s-r}} \leq C\varepsilon^{\frac{r}{6}}\|u_0\|_{H^s}, \qquad \|u_0 - u_0^\varepsilon\|_{H^s} \leq C\|u_0\|_{H^s}.$$

Moreover, if $u_0^n \to u_0$ in $H^s(\mathbb{R})$ as $n \to \infty$, then $\|u_0^{n,\varepsilon} - u_0^n\|_{H^s} \to 0$ uniformly in $n$ as $\varepsilon \to 0$. The constants $C$ occurring in (2.2) depend only on $r$ and $\varphi$ and are independent of $\varepsilon$, provided $\varepsilon$ is restricted to a bounded subset of $\mathbb{R}^+$.
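The regularization of Definition 2.1 is easy to visualize numerically. The sketch below (our illustration) uses a Gaussian cutoff $\varphi(\xi) = e^{-\xi^2}$, which satisfies conditions (1), (2), and (4) but only approximates the flatness condition (3), and exhibits the qualitative content of Proposition 2.1: $\|u_0 - u_0^\varepsilon\|_{H^s}$ decreases to $0$ while $\|u_0^\varepsilon\|_{H^{s+r}}$ grows as $\varepsilon \to 0$.

```python
import numpy as np

# Fourier mollification u0^eps(xi) = phi(eps^(1/6)*xi)*u0_hat(xi), periodic grid.
N = 4096
k = np.fft.fftfreq(N, d=1.0/N)
dx = 2*np.pi/N
s = 1.0

u0_hat = (1.0 + k**2)**(-0.9)            # a profile lying in H^1 (and little more)

def hs2(uh, sigma):                       # squared H^sigma norm from Fourier data
    return dx/N*np.sum((1.0 + k**2)**sigma*np.abs(uh)**2)

def mollify(uh, eps):
    return np.exp(-(eps**(1.0/6.0)*k)**2)*uh   # illustrative Gaussian cutoff

eps_list = [1e-1, 1e-2, 1e-3]
grow = [np.sqrt(hs2(mollify(u0_hat, e), s + 1.0)) for e in eps_list]     # ||u0^eps||_{H^{s+1}}
errs = [np.sqrt(hs2(u0_hat - mollify(u0_hat, e), s)) for e in eps_list]  # ||u0 - u0^eps||_{H^s}
```

As $\varepsilon$ decreases, the approximation error shrinks while the higher norm grows, mirroring the two-sided estimates of (2.2).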

Here, we first provide the linear estimates needed in the proof of Theorem 4.1. Consider the following linear problems:

$$\text{(2.3)}\qquad U_t = 3HU_{xx}, \quad V_t = -HV_{xx}, \quad U(x,0) = U_0(x), \quad V(x,0) = V_0(x)$$

and

$$\text{(2.4)}\qquad \widetilde{U}_t = 3H\widetilde{U}_{xx} + f_1, \quad \widetilde{V}_t = -H\widetilde{V}_{xx} + f_2, \quad \widetilde{U}(x,0) = 0, \quad \widetilde{V}(x,0) = 0.$$

Let us state the standard Strichartz estimates for $W_1(t)f = \mathcal{F}^{-1}\big(e^{3i\xi|\xi|t}\mathcal{F}f\big)$ and $W_2(t)f = \mathcal{F}^{-1}\big(e^{-i\xi|\xi|t}\mathcal{F}f\big)$, where $t \in \mathbb{R}$ and $\mathcal{F}f$ is the spatial Fourier transform of $f$.

Lemma 2.7

Let $(U, V)$ and $(\widetilde{U}, \widetilde{V})$ be solutions to the initial value problems (2.3) and (2.4). For the IVP (2.3), if $(U_0, V_0)$ denotes the initial data, then there exist constants $C_i = C_i(\|(U_0, V_0)\|) > 0$ such that

$$\|W_1(t)U_0\|_{L^q_tL^p_x} \lesssim \|U_0\|_{L^2}, \qquad \|W_2(t)V_0\|_{L^q_tL^p_x} \lesssim \|V_0\|_{L^2}$$

and

$$\|D_x^{\frac12}W_1(t)U_0\|_{L^\infty_xL^2_t} = C_1\|U_0\|_{L^2_x}, \qquad \|D_x^{\frac12}W_2(t)V_0\|_{L^\infty_xL^2_t} = C_2\|V_0\|_{L^2_x}.$$

For the IVP (2.4),

$$\Big\|\int_0^tW_j(t-t')f_j(t')\,dt'\Big\|_{L^q_tL^p_x} \lesssim \|f_j\|_{L^{q'}_tL^{p'}_x},$$

with $j = 1, 2$, where $(p, q) = \big(\frac{4}{\theta}, \frac{2}{1-\theta}\big)$ is admissible for any $\theta \in [0,1]$, and $\frac1p + \frac1{p'} = \frac1q + \frac1{q'} = 1$.

Next, we recall the maximal function estimate.

Lemma 2.8

[8] For any $\varepsilon > 0$, let $(U_0, V_0) \in H^{\frac12+\varepsilon} \times H^{\frac12+\varepsilon}$. Then

$$\Big(\sum_{j=-\infty}^{\infty}\|W_1(t)U_0\|^2_{L^\infty([j,j+1)\times[0,T])}\Big)^{\frac12} \lesssim \|U_0\|_{H^{\frac12+\varepsilon}}, \qquad \Big(\sum_{j=-\infty}^{\infty}\|W_2(t)V_0\|^2_{L^\infty([j,j+1)\times[0,T])}\Big)^{\frac12} \lesssim \|V_0\|_{H^{\frac12+\varepsilon}}.$$

Proposition 2.2

[12] Let $T > 0$, $\delta \geq 0$. Assume $(\widetilde{U}, \widetilde{V})$ is a smooth solution to the linear problem (2.4) defined on the time interval $[0, T]$. Then, there exist $\kappa_1, \kappa_2 \in (0, \frac12)$ such that

$$\|\partial_x\widetilde{U}\|_{L^2_TL^\infty_x} \lesssim T^{\kappa_1}\|D_x^{\frac98+\varepsilon}\widetilde{U}\|_{L^\infty_TL^2_x} + T^{\kappa_1}\|\widetilde{U}\|_{L^\infty_TL^2_x} + T^{\kappa_2}\|D_x^{\frac58+\varepsilon}f_1\|_{L^2_TL^2_x} + T^{\kappa_2}\|f_1\|_{L^2_TL^2_x},$$

$$\|\partial_x\widetilde{V}\|_{L^2_TL^\infty_x} \lesssim T^{\kappa_1}\|D_x^{\frac98+\varepsilon}\widetilde{V}\|_{L^\infty_TL^2_x} + T^{\kappa_1}\|\widetilde{V}\|_{L^\infty_TL^2_x} + T^{\kappa_2}\|D_x^{\frac58+\varepsilon}f_2\|_{L^2_TL^2_x} + T^{\kappa_2}\|f_2\|_{L^2_TL^2_x},$$

for any $\varepsilon > 0$.

3 Local well-posedness in $H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{3}{2}$

We prove the main theorem of this article for the case $s > \frac{3}{2}$.

Theorem 3.1

Let $(u_0, v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{3}{2}$. There exist $T = T(\|(u_0,v_0)\|_{H^s}) > 0$ and a unique solution $(u,v) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$ of (1.1) corresponding to the initial data $(u_0, v_0)$. Moreover, the map $(u_0, v_0) \mapsto (u,v)$ is continuous from $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ to $C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$; thus, $(u,v)$ depends continuously on $(u_0, v_0)$.

Before proving Theorem 3.1, we first provide a priori estimates of the solutions.

Lemma 3.1

Let $(u, v) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{3}{2}$, be a solution of the initial value problem (1.1) with initial data $(u_0, v_0)$. Then

$$\text{(3.1)}\qquad \sup_{[0,T]}\|(u(\cdot,t), v(\cdot,t))\|_{H^s} \lesssim \|(u_0, v_0)\|_{H^s}\exp\big(\|u_x\|_{L^1_TL^\infty_x} + \|v_x\|_{L^1_TL^\infty_x}\big).$$

Proof

Taking the $L^2(\mathbb{R})$ scalar product of the first equation with $u$ and of the second equation with $v$, and integrating by parts, we obtain

$$\frac12\frac{d}{dt}\int_{\mathbb{R}}u^2\,dx = \int_{\mathbb{R}}\big(auHv_{xx} + a^2u(uv)_x + a^2uvv_x\big)dx, \qquad \frac12\frac{d}{dt}\int_{\mathbb{R}}v^2\,dx = \int_{\mathbb{R}}\big(avHu_{xx} + a^2v(uv)_x + a^2uvu_x\big)dx.$$

Combining these two equations, we obtain the identity

$$\frac12\frac{d}{dt}\int_{\mathbb{R}}(u^2+v^2)\,dx = 0;$$

thus, we have

$$\|(u,v)\|_{L^2} \leq C\|(u_0,v_0)\|_{L^2}.$$

Next, we prove the higher-order case $s \geq 0$. Apply $J^s$ to both sides of (1.1) and take the $L^2(\mathbb{R})$ scalar product of the result with $J^su$ and $J^sv$, respectively. By Hölder's inequality, we have $\|u_x(J^su)^2\|_{L^1} \leq \|u_x\|_{L^\infty}\|u\|^2_{H^s}$. Using Lemma 2.3 with $f = u$, $g = u_x$, $p_1 = p_4 = \infty$, and $p_2 = p_3 = 2$, we find that $\|[J^s,u]u_x\|_{L^2} \lesssim \|u_x\|_{L^\infty}\|u\|_{H^s}$. We obtain

$$\text{(3.2)}\qquad\begin{aligned}
\frac12\frac{d}{dt}\int_{\mathbb{R}}\big((J^su)^2+(J^sv)^2\big)dx
= -a\int_{\mathbb{R}}\Big[ & J^su\big([J^s,u]u_x + a[J^s,v]u_x + a[J^s,v]v_x + a[J^s,u]v_x\big)\\
&+ J^sv\big([J^s,v]v_x + a[J^s,u]v_x + a[J^s,u]u_x + a[J^s,v]u_x\big)\\
&- \tfrac12u_x(J^su)^2 - \tfrac12v_x(J^sv)^2 - \tfrac a2v_x(J^su)^2 - \tfrac a2u_x(J^sv)^2\\
&- a(u+v)_xJ^su\,J^sv\Big]dx\\
\leq C\big(\|u_x\|_{L^\infty_x} &+ \|v_x\|_{L^\infty_x}\big)\|(u,v)\|^2_{H^s}.
\end{aligned}$$

Then, applying the Gronwall inequality to (3.2), we obtain

$$\sup_{[0,T]}\|(u(\cdot,t), v(\cdot,t))\|_{H^s} \lesssim \|(u_0, v_0)\|_{H^s}\exp\big(\|u_x\|_{L^1_TL^\infty_x} + \|v_x\|_{L^1_TL^\infty_x}\big).$$

Thus, we have completed the proof.□

Next, we prove Theorem 3.1 in three parts: existence, uniqueness of solutions, and continuous dependence on the initial data.

Part I. Existence. We shall now prove the existence of a solution. First, we deal with the initial value problem (1.1) by parabolic regularization:

$$\text{(3.3)}\qquad\begin{cases} u_t = Hu_{xx} + 2Hv_{xx} + 2uu_x + 4(uv)_x + 4vv_x - \tilde{\gamma}D_x^{\frac32}u, & t > 0,\ x \in \mathbb{R},\\ v_t = Hv_{xx} + 2Hu_{xx} + 2vv_x + 4(uv)_x + 4uu_x - \tilde{\gamma}D_x^{\frac32}v, & t > 0,\ x \in \mathbb{R},\\ u(0,x) = u_0(x), \quad v(0,x) = v_0(x), & x \in \mathbb{R}.\end{cases}$$

Lemma 3.2

Let $s > \frac{3}{2}$ and $\tilde{\gamma} \in (0,1)$. For any $(u_0, v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$, there exist a time $T_{\tilde{\gamma}} > 0$ and a unique solution $(u,v)$ of (3.3) such that $(u,v) \in C([0, T_{\tilde{\gamma}}]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$.

Proof

We shall solve the integral equation corresponding to (3.3):

$$\text{(3.4)}\qquad\begin{aligned} u(t) &= \frac12\big[S_1(t)(u_0+v_0) + S_2(t)(u_0-v_0)\big] + \frac12\int_0^t\big[S_1(t-t')(F_1+F_2) + S_2(t-t')(F_1-F_2)\big](t')\,dt',\\ v(t) &= \frac12\big[S_1(t)(u_0+v_0) - S_2(t)(u_0-v_0)\big] + \frac12\int_0^t\big[S_1(t-t')(F_1+F_2) - S_2(t-t')(F_1-F_2)\big](t')\,dt',\end{aligned}$$

where $S_1(t)f = \mathcal{F}^{-1}\big(e^{t(3i\xi|\xi| - \tilde{\gamma}|\xi|^{3/2})}\mathcal{F}f\big)$, $S_2(t)f = \mathcal{F}^{-1}\big(e^{t(-i\xi|\xi| - \tilde{\gamma}|\xi|^{3/2})}\mathcal{F}f\big)$, and

$$F_1 = 2uu_x + 4(uv)_x + 4vv_x, \qquad F_2 = 2vv_x + 4(uv)_x + 4uu_x.$$

We only need to find a function $(u,v)$ so that the integral equation (3.4) holds in the space $H^s \times H^s$. Let us define the maps

$$\begin{aligned}\Phi(u,v) &= \frac12\big[S_1(t)(u_0+v_0) + S_2(t)(u_0-v_0)\big] + \frac12\int_0^t\big[S_1(t-t')(F_1+F_2) + S_2(t-t')(F_1-F_2)\big](t')\,dt',\\ \Psi(u,v) &= \frac12\big[S_1(t)(u_0+v_0) - S_2(t)(u_0-v_0)\big] + \frac12\int_0^t\big[S_1(t-t')(F_1+F_2) - S_2(t-t')(F_1-F_2)\big](t')\,dt'\end{aligned}$$

and

$$B_r = \big\{(u,v) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))\,;\ \|(u,v)\|_X = \sup_{t\in[0,T]}\|(u,v)\|_{H^s} \leq r\big\},$$

such that $(\Phi(u,v), \Psi(u,v))$ maps $B_r$ into $B_r$ and is a contraction. Then, (3.4) holds exactly when $(u,v)$ is a fixed point of the map $(\Phi(u,v), \Psi(u,v))$. A standard way to produce a fixed point is to iterate a contraction. Set $r = 4\|(u_0,v_0)\|_{H^s}$. If $(u,v) \in B_r$, we have
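The contraction scheme can be illustrated on a toy problem. The sketch below is our own illustration and is unrelated to the actual operators $S_1, S_2$: it iterates the Picard map $(\Phi u)(t) = 1 - \int_0^t u(\tau)\,d\tau$ for the ODE $u' = -u$, $u(0) = 1$, which is a contraction on $C([0,T])$ for $T < 1$ and converges to the fixed point $e^{-t}$.

```python
import numpy as np

# Banach fixed-point iteration for a toy integral equation (Picard iteration).
T, N = 0.5, 1001
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def Phi(u):
    # (Phi u)(t) = 1 - int_0^t u, via a cumulative trapezoid rule
    integral = np.concatenate([[0.0], np.cumsum(0.5*dt*(u[1:] + u[:-1]))])
    return 1.0 - integral

u = np.ones(N)
for _ in range(60):        # each iterate contracts the sup-norm error by <= T
    u = Phi(u)
err = np.max(np.abs(u - np.exp(-t)))
```

The remaining error is set by the trapezoid discretization, not by the iteration, exactly as in the abstract fixed-point argument.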

$$\begin{aligned}\|\Phi(u,v)\|_{H^s} &\leq \|(u_0,v_0)\|_{H^s} + \frac12\int_0^t\|S_1(t-t')F_1\|_{H^s}\,dt' + \frac12\int_0^t\|S_2(t-t')F_2\|_{H^s}\,dt',\\ \|\Psi(u,v)\|_{H^s} &\leq \|(u_0,v_0)\|_{H^s} + \frac12\int_0^t\|S_1(t-t')F_1\|_{H^s}\,dt' + \frac12\int_0^t\|S_2(t-t')F_2\|_{H^s}\,dt'.\end{aligned}$$

Thus, we need to estimate $\|S_1(t-t')F_1\|_{H^s}$ and $\|S_2(t-t')F_2\|_{H^s}$; that is,

$$\text{(3.5)}\qquad\begin{aligned}\|S_1(t-t')F_1\|_{H^s} &\lesssim \|S_1(t-t')\partial_x(u^2)\|_{H^s} + 4\|S_1(t-t')\partial_x(uv)\|_{H^s} + 2\|S_1(t-t')\partial_x(v^2)\|_{H^s},\\ \|S_2(t-t')F_2\|_{H^s} &\lesssim \|S_2(t-t')\partial_x(v^2)\|_{H^s} + 4\|S_2(t-t')\partial_x(uv)\|_{H^s} + 2\|S_2(t-t')\partial_x(u^2)\|_{H^s}.\end{aligned}$$

By using Lemmas 2.6 and 3.1, it follows that

$$\|S_1(t-t')\partial_x(u^2)\|_{H^s} = \big\|\langle\xi\rangle^s\,\xi\,e^{(t-t')(3i\xi|\xi| - \tilde{\gamma}|\xi|^{3/2})}\mathcal{F}(u^2)\big\|_{L^2} \lesssim \tilde{\gamma}^{-\frac23}(t-t')^{-\frac23}\|u^2\|_{H^s} \lesssim \tilde{\gamma}^{-\frac23}(t-t')^{-\frac23}\|(u_0,v_0)\|^2_{H^s}.$$

The same proof shows that each term on the right-hand side of (3.5) can be controlled by $\tilde{\gamma}^{-\frac23}(t-t')^{-\frac23}\|(u_0,v_0)\|^2_{H^s}$. Therefore, integrating in $t'$ yields

$$\text{(3.6)}\qquad \sup_{t\in[0,T]}\|\Phi(u,v)\|_{H^s} \leq \|(u_0,v_0)\|_{H^s} + 4\tilde{\gamma}^{-\frac23}T^{\frac13}\|(u_0,v_0)\|^2_{H^s}, \qquad \sup_{t\in[0,T]}\|\Psi(u,v)\|_{H^s} \leq \|(u_0,v_0)\|_{H^s} + 4\tilde{\gamma}^{-\frac23}T^{\frac13}\|(u_0,v_0)\|^2_{H^s}.$$

Now, we show that $\Phi \times \Psi : (u,v) \mapsto (\Phi(u,v), \Psi(u,v))$ is a contraction. Let $(u,v), (\tilde{u},\tilde{v}) \in B_r$; then we have

$$\text{(3.7)}\qquad\begin{aligned}\sup_{t\in[0,T]}\|\Phi(u,v) - \Phi(\tilde{u},\tilde{v})\|_{H^s} &\leq 4\tilde{\gamma}^{-\frac23}T^{\frac13}\|(u_0,v_0)\|_{H^s}\sup_{t\in[0,T]}\|(u-\tilde{u}, v-\tilde{v})\|_{H^s},\\ \sup_{t\in[0,T]}\|\Psi(u,v) - \Psi(\tilde{u},\tilde{v})\|_{H^s} &\leq 4\tilde{\gamma}^{-\frac23}T^{\frac13}\|(u_0,v_0)\|_{H^s}\sup_{t\in[0,T]}\|(u-\tilde{u}, v-\tilde{v})\|_{H^s}.\end{aligned}$$

In view of (3.6) and (3.7), choose $T > 0$ sufficiently small that

$$4\tilde{\gamma}^{-\frac23}T^{\frac13}\|(u_0,v_0)\|_{H^s} \leq \frac12.$$

Then the maps $\Phi$ and $\Psi$ are contractions, and we obtain a unique fixed point $(u_{\tilde{\gamma}}, v_{\tilde{\gamma}}) \in L^\infty([0,T_{\tilde{\gamma}}]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$. Additionally, the right-hand side of (3.3) then lies in $L^\infty([0,T_{\tilde{\gamma}}]; H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R}))$. Thus, $(u_{\tilde{\gamma}}, v_{\tilde{\gamma}}) \in C([0,T_{\tilde{\gamma}}]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$.□

Lemma 3.3

Let $s \geq s_0 > \frac32$, $\tilde{\gamma} \in [0,1)$, and $T > 0$. Suppose the solution $(u,v) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R})) \cap C^1([0,T); H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R}))$ satisfies (3.3) and $\sup_{t\in[0,T]}\|(u,v)\|_{H^{s_0}} \leq K$ for some $K > 0$. Then, there exists a constant $C = C(s, s_0, K) > 0$ such that

$$\frac{d}{dt}\|(u,v)\|^2_{H^s} \leq C\|(u,v)\|^2_{H^s}$$

for any $t \in [0,T]$.

Proof

First, apply the operator $D^s$ to the regularized equations (3.3) and pair with $D^su$ and $D^sv$, respectively. By Lemma 2.1 and $\|u_x\|_{L^\infty} \lesssim \|u\|_{H^{s_0}}$, we obtain

$$\begin{aligned}
\frac12\frac{d}{dt}\int_{\mathbb{R}}\big((D^su)^2+(D^sv)^2\big)dx
= -2\int_{\mathbb{R}}\Big[ & D^su\big([D^s,u]u_x + 2[D^s,v]u_x + 2[D^s,v]v_x + 2[D^s,u]v_x\big)\\
&+ D^sv\big([D^s,v]v_x + 2[D^s,u]v_x + 2[D^s,u]u_x + 2[D^s,v]u_x\big)\\
&- \tfrac12u_x(D^su)^2 - \tfrac12v_x(D^sv)^2 - v_x(D^su)^2 - u_x(D^sv)^2\\
&- 2(u+v)_xD^su\,D^sv\Big]dx - \tilde{\gamma}\int_{\mathbb{R}}\big((D^{s+\frac34}u)^2 + (D^{s+\frac34}v)^2\big)dx\\
\leq C\big(\|u\|_{H^{s_0}} &+ \|v\|_{H^{s_0}}\big)\|(u,v)\|^2_{H^s} \leq C\|(u,v)\|^2_{H^s},
\end{aligned}$$

where we used the Hölder inequality $\|u_x(D^su)^2\|_{L^1} \leq \|u_x\|_{L^\infty}\|u\|^2_{H^s}$ and the commutator rules (Lemma 2.1 with $f = u$, $g = u_x$, $\gamma = \frac12 + \eta$, and $\gamma + 1 \leq s_0$), whence $\|[D^s,u]u_x\|_{L^2} \lesssim \|u\|_{H^{s_0}}\|u\|_{H^s}$.□

Lemma 3.4

Let $s \geq s_0 > \frac32$, $\tilde{\gamma} \in (0,1)$, and $(u_0,v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$. Let $T_{\tilde{\gamma}} > 0$, and let $(u,v) \in C([0,T_{\tilde{\gamma}}]; H^s(\mathbb{R}) \times H^s(\mathbb{R})) \cap C^1([0,T_{\tilde{\gamma}}); H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R}))$ be a solution of problem (3.3). Then, there exist $T = T(s_0, \|(u_0,v_0)\|_{H^{s_0}}) > 0$ and $C = C(s, s_0, \|(u_0,v_0)\|_{H^{s_0}}) > 0$ such that

$$T_{\tilde{\gamma}} \geq T, \qquad \sup_{t\in[0,T]}\|(u,v)\|_{H^s} \leq C\|(u_0,v_0)\|_{H^s}, \qquad \frac{d}{dt}\|(u,v)\|^2_{H^s} \leq C\|(u_0,v_0)\|^2_{H^s},$$

where $T$ (or $C$) is monotone decreasing with respect to $\|(u_0,v_0)\|_{H^{s_0}}$.

Proof

Set $E_{s_0}(u,v) = \|(u,v)\|^2_{H^{s_0}}$ and $E_s(u,v) = \|(u,v)\|^2_{H^s}$. Assume that the set $F = \{t \geq 0\,;\ E_{s_0}(u(t),v(t)) > 2E_{s_0}(u_0,v_0)\}$ is not empty, and set $T^*_{\tilde{\gamma}} = \inf F$. Then, for any $t \in [0,T^*_{\tilde{\gamma}}]$, we have $E_{s_0}(u(t),v(t)) \leq 2E_{s_0}(u_0,v_0)$: if $t \in [0,T^*_{\tilde{\gamma}}]$ were such that $E_{s_0}(u(t),v(t)) > 2E_{s_0}(u_0,v_0)$, then by the definition of $T^*_{\tilde{\gamma}}$ we would have $t = T^*_{\tilde{\gamma}}$. Thus, $\sup_{t\in[0,T^*_{\tilde{\gamma}}]}E_{s_0}(u(t),v(t)) \leq C(\|(u_0,v_0)\|_{H^{s_0}})$. By Lemma 3.3, there exists $C_s = C(s, s_0, \|(u_0,v_0)\|_{H^{s_0}})$ such that

$$\frac{d}{dt}\|(u(t),v(t))\|^2_{H^s} \leq C_s\|(u(t),v(t))\|^2_{H^s}$$

for $t \in [0,T^*_{\tilde{\gamma}}]$. An application of the Gronwall inequality gives that

$$\text{(3.8)}\qquad \|(u(t),v(t))\|^2_{H^s} \leq \|(u_0,v_0)\|^2_{H^s}e^{C_st}$$

on $[0,T^*_{\tilde{\gamma}}]$. Set $T = \min\{(2C_{s_0})^{-1}, T^*_{\tilde{\gamma}}\}$; then (3.8) with $s = s_0$ shows that

$$E_{s_0}(u(t),v(t)) \leq E_{s_0}(u_0,v_0)\,e^{\frac12} < 2E_{s_0}(u_0,v_0).$$

From the definition of $T^*_{\tilde{\gamma}}$ and the continuity of $\|(u(t),v(t))\|_{H^{s_0}}$, we have $0 < T = (2C_{s_0})^{-1} < T^*_{\tilde{\gamma}} \leq T_{\tilde{\gamma}}$. If $F$ is empty, then $T^*_{\tilde{\gamma}} = T_{\tilde{\gamma}} = \infty$; in particular, we can take $T = (2C_{s_0})^{-1} < \infty$.□

Lemma 3.5

Let $s \geq s_0 > \frac32$, $\gamma_j \in (0,1)$, $T > 0$, and let $(u_j, v_j) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$ be solutions of (3.3) (with $\tilde{\gamma} = \gamma_j$) satisfying $\sup_{t\in[0,T]}\|(u_j(t),v_j(t))\|_{H^{s_0}} \leq K$ for some $K > 0$, $j = 1, 2$. Then, there exists $C = C(K, s)$ such that

$$\frac{d}{dt}\|(u_1-u_2, v_1-v_2)\|^2_{L^2} \lesssim \|(u_1-u_2, v_1-v_2)\|^2_{L^2} + \max\{\gamma_1^2, \gamma_2^2\}$$

for $t \in [0,T]$.

Proof

Let $u = u_1$, $\tilde{u} = u_2$, $v = v_1$, $\tilde{v} = v_2$, and set $p = u - \tilde{u}$, $q = v - \tilde{v}$, so that $p$ and $q$ satisfy the following equations:

$$\begin{aligned} p_t &= Hp_{xx} + 2Hq_{xx} + 2up_x + 2p\tilde{u}_x + 4(uq)_x + 4(p\tilde{v})_x + 4vq_x + 4q\tilde{v}_x - \gamma_1D^{\frac32}p - (\gamma_1-\gamma_2)D^{\frac32}\tilde{u},\\ q_t &= Hq_{xx} + 2Hp_{xx} + 2vq_x + 2q\tilde{v}_x + 4(vp)_x + 4(q\tilde{u})_x + 4up_x + 4p\tilde{u}_x - \gamma_1D^{\frac32}q - (\gamma_1-\gamma_2)D^{\frac32}\tilde{v}.\end{aligned}$$

Taking the $L^2(\mathbb{R})$ scalar product with $p$ in the first equation and with $q$ in the second, then applying integration by parts and the inequality $\|u_x\|_{L^\infty} \lesssim \|u\|_{H^{s_0}}$, we obtain

$$\text{(3.9)}\qquad\begin{aligned}\frac12\frac{d}{dt}\|(p,q)\|^2_{L^2} &= \int_{\mathbb{R}}\Big(p^2(2\tilde{u}_x - u_x) + 4pq(\tilde{u}_x + \tilde{v}_x) + q^2(2\tilde{v}_x - v_x) - \gamma_1(D^{\frac34}p)^2 - \gamma_1(D^{\frac34}q)^2 - (\gamma_1-\gamma_2)\big(pD^{\frac32}\tilde{u} + qD^{\frac32}\tilde{v}\big)\Big)dx\\ &\lesssim \big(\|u_x\|_{L^\infty} + \|v_x\|_{L^\infty} + \|\tilde{u}_x\|_{L^\infty} + \|\tilde{v}_x\|_{L^\infty}\big)\|(p,q)\|^2_{L^2} + 2\max\{\gamma_1^2, \gamma_2^2\}\\ &\lesssim K\|(p,q)\|^2_{L^2} + 2\max\{\gamma_1^2, \gamma_2^2\}.\end{aligned}$$

Here we used the Cauchy-Schwarz inequality and the results of Lemmas 3.2, 3.3, and 3.4 to bound $|\gamma_1-\gamma_2|\int_{\mathbb{R}}\big(pD^{\frac32}\tilde{u} + qD^{\frac32}\tilde{v}\big)dx \leq (\gamma_1+\gamma_2)\|(\tilde{u},\tilde{v})\|_{H^{s_0}}\|(p,q)\|_{L^2} \leq C(\gamma_1+\gamma_2)^2 + \|(p,q)\|^2_{L^2}$.□

Integrating (3.9) in time and applying the Gronwall inequality with initial data $(p_0, q_0) = (0,0)$ yields

$$\sup_{t\in[0,T]}\|(p,q)\|^2_{L^2} \lesssim \max\{\gamma_1^2, \gamma_2^2\}.$$

Therefore, $(u_j, v_j)$ is a Cauchy sequence in $C([0,T]; L^2(\mathbb{R}) \times L^2(\mathbb{R}))$ as $\gamma_j \to 0$. Thus, there exists a unique solution $(u,v) \in C([0,T]; L^2(\mathbb{R}) \times L^2(\mathbb{R}))$ of (1.1) such that

$$\lim_{\tilde{\gamma}\to0}\|(u_{\tilde{\gamma}} - u, v_{\tilde{\gamma}} - v)\|_{L^\infty([0,T]; L^2(\mathbb{R}) \times L^2(\mathbb{R}))} = 0.$$

Part II. Uniqueness. Next, we prove the uniqueness of solutions of system (1.1). We shall follow the technique used by Bona and Smith. For $0 < \varepsilon' < \varepsilon$, denote $(w_1, w_2) = (u_\varepsilon - u_{\varepsilon'}, v_\varepsilon - v_{\varepsilon'})$. Then

$$\text{(3.10)}\qquad\begin{aligned} w_{1,t} &= Hw_{1,xx} + 2Hw_{2,xx} + 2\big((u_\varepsilon + v_\varepsilon)(w_1+w_2)\big)_x - 4(w_1w_2)_x - 4w_1w_{1,x} - 4w_2w_{2,x},\\ w_{2,t} &= Hw_{2,xx} + 2Hw_{1,xx} + 2\big((u_\varepsilon + v_\varepsilon)(w_1+w_2)\big)_x - 4(w_1w_2)_x - 4w_1w_{1,x} - 4w_2w_{2,x},\end{aligned}$$

with initial data $(w_{1,0}, w_{2,0}) = (u_{0,\varepsilon} - u_{0,\varepsilon'}, v_{0,\varepsilon} - v_{0,\varepsilon'})$.

Lemma 3.6

Let $T > 0$ and $(u_0, v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$, where $s > \frac32$ is given. For $0 < \varepsilon' \leq \varepsilon$, let $(u_\varepsilon, v_\varepsilon)$ and $(u_{\varepsilon'}, v_{\varepsilon'})$ denote the solutions of (1.1) corresponding to the initial data $(u_{0,\varepsilon}, v_{0,\varepsilon})$ and $(u_{0,\varepsilon'}, v_{0,\varepsilon'})$, respectively. There exists a constant $C = C(T, \|(u_0,v_0)\|_{H^s})$ such that, for $\varepsilon$ sufficiently small,

$$\text{(3.11)}\qquad \|(w_1, w_2)\|^2_{H^s} \leq C(T, \|(u_0,v_0)\|_{H^s})\big(\varepsilon^{\lambda_0} + \|(w_1(\cdot,0), w_2(\cdot,0))\|^2_{H^s}\big),$$

where $\lambda_0 = 2\gamma(s)$, $\gamma(s) = \frac{\nu s}{6(\nu+1)}$, and $\nu$ is any nonnegative number such that $\nu < s - \frac32$.

Proof

To prove (3.11), we first claim that for $0 < \varepsilon' \leq \varepsilon$,

$$\text{(3.12)}\qquad \sup_{t\in[0,T]}\|(w_1(\cdot,t), w_2(\cdot,t))\|_{L^2} \leq C(T, \|(u_0,v_0)\|_{H^s})\,\varepsilon^{\frac{s}{6}}.$$

To prove (3.12), take the $L^2(\mathbb{R})$ scalar product of (3.10) with $(w_1, w_2)$ and integrate by parts to obtain

$$\text{(3.13)}\qquad \frac12\frac{d}{dt}\int_{\mathbb{R}}(w_1^2 + w_2^2)\,dx = -\frac{a^2}{2}\int_{\mathbb{R}}(u_\varepsilon + v_\varepsilon)_x(w_1+w_2)^2\,dx \leq C\|(u_\varepsilon + v_\varepsilon)_x\|_{L^\infty}\|(w_1,w_2)\|^2_{L^2} \leq C\big(\|\partial_xu_\varepsilon\|_{L^\infty} + \|\partial_xv_\varepsilon\|_{L^\infty}\big)\|(w_1,w_2)\|^2_{L^2}.$$

Integrating (3.13) in time over the interval $[0,t]$, we obtain

$$\|(w_1,w_2)\|^2_{L^2} \leq \|(w_1(\cdot,0), w_2(\cdot,0))\|^2_{L^2} + C(T, \|(u_0,v_0)\|_{H^s})\int_0^t\|(w_1(\cdot,\sigma), w_2(\cdot,\sigma))\|^2_{L^2}\,d\sigma.$$

Applying the Gronwall inequality immediately implies that

$$\text{(3.14)}\qquad \|(w_1(\cdot,t), w_2(\cdot,t))\|^2_{L^2} \leq \|(w_1(\cdot,0), w_2(\cdot,0))\|^2_{L^2}\,e^{ct}.$$

On the other hand, by the triangle inequality and (2.2),

$$\|(w_1(\cdot,0), w_2(\cdot,0))\|_{L^2} = \|(u_{0,\varepsilon} - u_{0,\varepsilon'}, v_{0,\varepsilon} - v_{0,\varepsilon'})\|_{L^2} \leq \|(u_0 - u_{0,\varepsilon}, v_0 - v_{0,\varepsilon})\|_{L^2} + \|(u_{0,\varepsilon'} - u_0, v_{0,\varepsilon'} - v_0)\|_{L^2},$$

so that by (3.14)

$$\|(w_1(\cdot,t), w_2(\cdot,t))\|_{L^2} \leq C(\|(u_0,v_0)\|_{H^s})\big(\varepsilon^{\frac{s}{6}} + (\varepsilon')^{\frac{s}{6}}\big).$$

Since $\varepsilon' \leq \varepsilon$, we reach the inequality

$$\text{(3.15)}\qquad \|(w_1(\cdot,t), w_2(\cdot,t))\|^2_{L^2} \leq C\varepsilon^{\frac{s}{3}}e^{cT}$$

for $t \in [0,T]$, which completes the proof of (3.12).

The proof of Lemma 3.6 continues by establishing estimates like (3.11) for higher Sobolev norms. To this end, apply $D^s$ to (3.10) and take the $L^2(\mathbb{R})$ scalar product with $(D^sw_1, D^sw_2)$ to obtain

$$\text{(3.16)}\qquad\begin{aligned}
\frac12\frac{d}{dt}\int_{\mathbb{R}}\big((D^sw_1)^2 &+ (D^sw_2)^2\big)dx\\
= -\int_{\mathbb{R}}\Big( & D^sw_1D^s(u_\varepsilon w_{1,x}) + D^sw_2D^s(v_\varepsilon w_{2,x}) + D^sw_1D^s(w_1w_{1,x})\\
&+ aD^sw_1D^s(v_\varepsilon w_{1,x}) + aD^sw_2D^s(u_\varepsilon w_{2,x})\\
&+ aD^sw_1D^s\big(\partial_xv_\varepsilon(w_1+w_2)\big) + aD^sw_1D^s(\partial_xu_\varepsilon w_2)\\
&+ D^sw_1D^s(\partial_xu_\varepsilon w_1) + D^sw_2D^s(\partial_xv_\varepsilon w_2)\\
&+ aD^sw_2D^s\big(\partial_xu_\varepsilon(w_1+w_2)\big) + aD^sw_2D^s(\partial_xv_\varepsilon w_1)\\
&+ aD^sw_1[D^s, u_\varepsilon+v_\varepsilon]w_{2,x} + aD^sw_2[D^s, u_\varepsilon+v_\varepsilon]w_{1,x}\\
&+ a(u_\varepsilon+v_\varepsilon)_xD^sw_1D^sw_2\Big)dx\\
\lesssim \big(\|w_1\|_{H^s} + \|w_2\|_{H^s}\big)\Big( & \big(\|u_\varepsilon\|_{H^{s+1}} + \|v_\varepsilon\|_{H^{s+1}}\big)\big(\|w_1\|_{H^{\gamma_1}} + \|w_2\|_{H^{\gamma_1}}\big)\\
&+ \big(\|u_\varepsilon\|_{H^{\gamma_2+2}} + \|v_\varepsilon\|_{H^{\gamma_2+2}}\big)\big(\|w_1\|_{H^{s-1}} + \|w_2\|_{H^{s-1}}\big)\\
&+ \big(\|\partial_xu_\varepsilon\|_{L^\infty} + \|\partial_xv_\varepsilon\|_{L^\infty}\big)\big(\|w_1\|_{H^s} + \|w_2\|_{H^s}\big)\Big).
\end{aligned}$$

Now, we turn to the right-hand side of (3.16); we need to estimate $\|u_\varepsilon\|_{H^{s+1}}\|w_j\|_{H^{\gamma_1}}$, $\|v_\varepsilon\|_{H^{s+1}}\|w_j\|_{H^{\gamma_1}}$, $\|u_\varepsilon\|_{H^{\gamma_2+2}}\|w_j\|_{H^{s-1}}$, and $\|v_\varepsilon\|_{H^{\gamma_2+2}}\|w_j\|_{H^{s-1}}$, $j = 1, 2$.

First of all, we consider the combination $\|u_\varepsilon\|_{H^{s+1}}\|w_1\|_{H^\gamma}$, where $\gamma \in (\frac12, s-1]$. Because of (3.1) and (2.2), we have

$$\text{(3.17)}\qquad \|u_\varepsilon(\cdot,t)\|_{H^{s+1}} \leq \|u_\varepsilon\|_{L^\infty([0,T];H^{s+1}(\mathbb{R}))} \lesssim \|u_{0,\varepsilon}\|_{H^{s+1}} \leq C(T, \|u_0\|_{H^s})\,\varepsilon^{-\frac16}.$$

We can interpolate between $H^s$ and $L^2$ to obtain the further estimate

$$\text{(3.18)}\qquad \|w_1\|_{H^\gamma} \leq \|w_1\|^{\frac{\gamma}{s}}_{H^s}\|w_1\|^{1-\frac{\gamma}{s}}_{L^2}.$$

From (3.17), (3.18), and Young's inequality, it follows that

$$\text{(3.19)}\qquad \|u_\varepsilon\|_{H^{s+1}}\|w_1\|_{H^\gamma}\|w_1\|_{H^s} \leq C(T, \|(u_0,v_0)\|_{H^s})\,\varepsilon^{\frac{2s\beta_0}{s-\gamma}} + \|(w_1,w_2)\|^2_{H^s}.$$

Write $\gamma = s - 1 - \nu$, so that $0 \leq \nu < s - \frac32$, and set $\beta_0 = \frac{\nu}{6}$. Then

$$\frac{2s\beta_0}{s-\gamma} = \frac{\nu s}{3(\nu+1)} = \lambda_0,$$

so (3.19) leads to the inequality

$$\text{(3.20)}\qquad \|u_\varepsilon\|_{H^{s+1}}\|w_1\|_{H^\gamma}\|w_1\|_{H^s} \leq C\varepsilon^{\lambda_0} + \|(w_1,w_2)\|^2_{H^s}.$$

Now, consider the other combination appearing on the right-hand side of (3.16), namely $\|u_\varepsilon\|_{H^{\gamma+2}}\|w_1\|_{H^{s-1}}$, where $\frac12 \leq \gamma \leq s-1$. First, we have

$$\text{(3.21)}\qquad \|u_\varepsilon(\cdot,t)\|_{H^{\gamma+2}} \leq \|u_\varepsilon\|_{L^\infty([0,T];H^{\gamma+2}(\mathbb{R}))} \lesssim \|u_{0,\varepsilon}\|_{H^{\gamma+2}} \leq C(T, \|u_0\|_{H^s})\,\varepsilon^{\frac{s-2-\gamma}{6}}.$$

Using the interpolation inequality and (3.15) gives

$$\|w_1\|_{H^{s-1}} \leq \|w_1\|^{1-\frac1s}_{H^s}\|w_1\|^{\frac1s}_{L^2} \leq C(T, \|(u_0,v_0)\|_{H^s})\,\varepsilon^{\frac16}\|(w_1,w_2)\|^{1-\frac1s}_{H^s},$$

and combining this with (3.21) and Young's inequality yields

$$\text{(3.22)}\qquad \|u_\varepsilon\|_{H^{\gamma+2}}\|w_1\|_{H^{s-1}}\|w_1\|_{H^s} \leq C(T, \|(u_0,v_0)\|_{H^s})\,\varepsilon^{\frac{s(s-(1+\gamma))}{3}} + \|(w_1,w_2)\|^2_{H^s} \leq C(T, \|(u_0,v_0)\|_{H^s})\,\varepsilon^{\frac{\nu s}{3}} + \|(w_1,w_2)\|^2_{H^s}.$$

Finally, combining the estimates (3.16), (3.17), (3.18), (3.19), and (3.22), we conclude that

$$\text{(3.23)}\qquad \frac12\frac{d}{dt}\|(w_1,w_2)\|^2_{H^s} \lesssim \|(w_1,w_2)\|^2_{H^s} + \big(\varepsilon^{\lambda_0} + \varepsilon^{\frac{\nu s}{3}}\big),$$

where $\lambda_0 = \frac{\nu s}{3(1+\nu)} = 2\gamma(s)$ and $0 \leq \nu < s - \frac32$. Applying the Gronwall inequality leads to

$$\|(w_1,w_2)\|^2_{H^s} \leq C(T, \|(u_0,v_0)\|_{H^s})\big(\varepsilon^{\lambda_0} + \|(w_1(\cdot,0), w_2(\cdot,0))\|^2_{H^s}\big).$$

Thus, we have completed the proof of Lemma 3.6.□

It is now easy to deduce that $\{(u_\varepsilon, v_\varepsilon)\}_{\varepsilon>0}$ is Cauchy in $C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$. This is equivalent to showing that for any $\eta > 0$, there exists $\varepsilon_0 > 0$ such that for any $\varepsilon, \varepsilon'$ with $0 < \varepsilon' \leq \varepsilon \leq \varepsilon_0$, we have

$$\text{(3.24)}\qquad \|(u_\varepsilon - u_{\varepsilon'}, v_\varepsilon - v_{\varepsilon'})\|_{C([0,T];H^s(\mathbb{R})\times H^s(\mathbb{R}))} \leq \eta.$$

Estimate (3.24) follows directly from (2.2) and the conclusion of Lemma 3.6.

Next, we show that $\{(\partial_tu_\varepsilon, \partial_tv_\varepsilon)\}_{\varepsilon>0}$ is a Cauchy sequence in $C([0,T]; H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R}))$. To this purpose, note that $(w_1, w_2)$ satisfies system (3.10), and hence

$$\text{(3.25)}\qquad \|(w_{1,t}, w_{2,t})\|_{H^{s-2}} \lesssim \|(w_1,w_2)\|_{H^s} + \|(u,v)\|_{H^{s-1}}\|(w_1,w_2)\|_{H^{s-1}} + \|(w_1,w_2)\|^2_{H^{s-1}}.$$

It follows directly from (3.25) and the conclusion of Lemma 3.6 that $\{(\partial_tu_\varepsilon, \partial_tv_\varepsilon)\}_{\varepsilon>0}$ is Cauchy in $C([0,T]; H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R}))$.

Part III. Continuous dependence with respect to the initial data. We begin by proving the assertion that the solution map $U_t$ is continuous from $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ into $C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$ for $s > \frac32$. This requires us to show that if $\{(u_0^n, v_0^n)\}_{n\in\mathbb{N}}$ is a sequence in $H^s(\mathbb{R}) \times H^s(\mathbb{R})$ such that

$$(u_0^n, v_0^n) \to (u_0, v_0) \quad \text{in } H^s(\mathbb{R}) \times H^s(\mathbb{R}),$$

then

$$\text{(3.26)}\qquad \sup_{t\in[0,T]}\|U_t(u_0^n, v_0^n) - U_t(u_0, v_0)\|_{H^s} \to 0 \quad \text{as } n \to \infty.$$

With the standard notation, the triangle inequality guarantees that

$$\text{(3.27)}\qquad \|(u^n - u, v^n - v)\|_{H^s} \leq \|(u^n - u^n_\varepsilon, v^n - v^n_\varepsilon)\|_{H^s} + \|(u^n_\varepsilon - u_\varepsilon, v^n_\varepsilon - v_\varepsilon)\|_{H^s} + \|(u_\varepsilon - u, v_\varepsilon - v)\|_{H^s}.$$

Letting $\varepsilon \to 0$, the estimate derived in Lemma 3.6 yields the following result for $s > \frac32$:

$$\sup_{t\in[0,T]}\|(u - u_\varepsilon, v - v_\varepsilon)\|_{H^s} \leq C\varepsilon^{\gamma(s)} + C\|(u_0 - u_{0,\varepsilon}, v_0 - v_{0,\varepsilon})\|_{H^s},$$

where $\gamma(s)$ is defined in Lemma 3.6, and the constants are independent of $\varepsilon$ for sufficiently small values. Therefore, as $\varepsilon \to 0$,

$$\sup_{t\in[0,T]}\|(u - u_\varepsilon, v - v_\varepsilon)\|_{H^s} \to 0, \qquad \sup_{t\in[0,T]}\|(u^n - u^n_\varepsilon, v^n - v^n_\varepsilon)\|_{H^s} \to 0.$$

The last convergence is uniform in $n$ due to Proposition 2.1. Hence, for any $\mu > 0$, there exists an $\varepsilon_0 > 0$ such that for all $\varepsilon \in (0, \varepsilon_0]$ and for all $n$, one has

$$\text{(3.28)}\qquad \sup_{t\in[0,T]}\|(u - u_\varepsilon, v - v_\varepsilon)\|_{H^s} \leq \frac{\mu}{3}, \qquad \sup_{t\in[0,T]}\|(u^n - u^n_\varepsilon, v^n - v^n_\varepsilon)\|_{H^s} \leq \frac{\mu}{3}.$$

To prove (3.26), it is therefore sufficient to show that, for any fixed $\varepsilon$ with $0 < \varepsilon \leq \varepsilon_0$,

$$\text{(3.29)}\qquad \sup_{t\in[0,T]}\|(u_\varepsilon - u^n_\varepsilon, v_\varepsilon - v^n_\varepsilon)\|_{H^s} \to 0$$

as $n \to \infty$. Set $\widetilde{w}_1 = u^n_\varepsilon - u_\varepsilon$ and $\widetilde{w}_2 = v^n_\varepsilon - v_\varepsilon$, so that $(\widetilde{w}_1, \widetilde{w}_2)$ satisfies the initial value problem

$$\text{(3.30)}\qquad\begin{aligned} \widetilde{w}_{1,t} &= H\widetilde{w}_{1,xx} + 2H\widetilde{w}_{2,xx} + 2\big((u^n_\varepsilon + v^n_\varepsilon)(\widetilde{w}_1+\widetilde{w}_2)\big)_x - 4(\widetilde{w}_1\widetilde{w}_2)_x - 4\widetilde{w}_1\widetilde{w}_{1,x} - 4\widetilde{w}_2\widetilde{w}_{2,x},\\ \widetilde{w}_{2,t} &= H\widetilde{w}_{2,xx} + 2H\widetilde{w}_{1,xx} + 2\big((u^n_\varepsilon + v^n_\varepsilon)(\widetilde{w}_1+\widetilde{w}_2)\big)_x - 4(\widetilde{w}_1\widetilde{w}_2)_x - 4\widetilde{w}_1\widetilde{w}_{1,x} - 4\widetilde{w}_2\widetilde{w}_{2,x},\\ \widetilde{w}_1(\cdot,0) &= \widetilde{w}_{1,0}, \qquad \widetilde{w}_2(\cdot,0) = \widetilde{w}_{2,0}.\end{aligned}$$

As before, we have

$$\sup_{t\in[0,T]}\|(\widetilde{w}_1(\cdot,t), \widetilde{w}_2(\cdot,t))\|^2_{H^s} \leq C\|(\widetilde{w}_1(\cdot,0), \widetilde{w}_2(\cdot,0))\|^2_{H^s}\,e^{Ct},$$

where the constant $C$ depends on the fixed value of $\varepsilon$. Since

$$\|(\widetilde{w}_1(\cdot,0), \widetilde{w}_2(\cdot,0))\|_{H^s} \lesssim \|(u_0^n - u_0, v_0^n - v_0)\|_{H^s} \to 0$$

as $n \to \infty$, we obtain (3.29). Thus, there exists $N_0 = N_0(\varepsilon) \geq 0$ such that for any $n \geq N_0$,

$$\text{(3.31)}\qquad \|(u_0^n - u_0, v_0^n - v_0)\|_{H^s} \leq \frac{\mu}{3}.$$

The assertion (3.26) follows from (3.27), (3.28), and (3.31). Now, from (3.30), it can be deduced that, for $s \geq 2$,

$$\|(\widetilde{w}_{1,t}, \widetilde{w}_{2,t})\|_{H^{s-2}} \lesssim \|u^n_\varepsilon + v^n_\varepsilon\|_{H^{s-1}}\|(\widetilde{w}_1, \widetilde{w}_2)\|_{H^s} + \|(\widetilde{w}_1, \widetilde{w}_2)\|^2_{H^s},$$

which shows that $\|(\partial_tu^n_\varepsilon - \partial_tu_\varepsilon, \partial_tv^n_\varepsilon - \partial_tv_\varepsilon)\|_{H^{s-2}} \to 0$ uniformly on $[0,T]$ as $n \to \infty$. The proof of Theorem 3.1 is now complete.

4 Local well-posedness in $H^s(\mathbb{R}) \times H^s(\mathbb{R})$, $s > \frac{9}{8}$

We have proved the local well-posedness of (1.1) in $H^s(\mathbb{R})$, $s > \frac{3}{2}$, using the energy method. To further study the Cauchy problem, we will take advantage of the dispersive estimates for (1.1). Our main result is as follows.

Theorem 4.1

Let $\frac{3}{2} \geq s > \frac{9}{8}$. For any $(u_0, v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R})$, there exist $T \gtrsim \|(u_0,v_0)\|^{-4}_{H^s}$ and a unique solution $(u,v)$ of (1.1) satisfying

$$(u,v) \in C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R})), \qquad (\partial_xu, \partial_xv) \in L^1([0,T]; L^\infty(\mathbb{R}) \times L^\infty(\mathbb{R})).$$

Moreover, for any $R > 0$, the map $(u_0, v_0) \mapsto (u,v)$ is continuous from the ball $\{(u_0,v_0) \in H^s(\mathbb{R}) \times H^s(\mathbb{R}) : \|(u_0,v_0)\|_{H^s} < R\}$ to $C([0,T]; H^s(\mathbb{R}) \times H^s(\mathbb{R}))$.

Set $U = u + v$ and $V = u - v$. It follows that, for $a = 2$, system (1.1) can be written as

$$\text{(4.1)}\qquad\begin{cases} U_t = 3HU_{xx} + 7UU_x - VV_x,\\ V_t = -HV_{xx} - UV_x - VU_x,\\ U(x,0) = u_0 + v_0, \qquad V(x,0) = u_0 - v_0.\end{cases}$$
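The algebra behind (4.1) can be spot-checked numerically: with $a = 2$, the nonlinear parts of (1.1) satisfy $N_u + N_v = 7UU_x - VV_x$ and $N_u - N_v = -(UV)_x$ identically. The sketch below is our own check of these polynomial identities at random points (the reconstructed signs in (4.1) are our reading of the printed system).

```python
import random

# Spot-check of the change of variables U = u + v, V = u - v for a = 2.
random.seed(1)
d1 = d2 = 0.0
for _ in range(100):
    u, v, ux, vx = (random.uniform(-2, 2) for _ in range(4))
    Nu = 2*u*ux + 4*(u*vx + ux*v) + 4*v*vx   # nonlinearity of the u-equation
    Nv = 2*v*vx + 4*(u*vx + ux*v) + 4*u*ux   # nonlinearity of the v-equation
    U, V, Ux, Vx = u + v, u - v, ux + vx, ux - vx
    d1 = max(d1, abs((Nu + Nv) - (7*U*Ux - V*Vx)))   # U-equation nonlinearity
    d2 = max(d2, abs((Nu - Nv) + (U*Vx + Ux*V)))     # V-equation nonlinearity
```

Since a polynomial identity holding at sufficiently many random points holds identically, the vanishing of `d1`, `d2` (up to round-off) confirms the transformation.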

We provide some linear estimates associated with the initial value problem (4.1). These estimates can be found in the literature [8].

We prove the energy estimates and the local smoothing effect satisfied by the solution of (1.1).

Lemma 4.1

Let $s \geq 0$, and let $(u,v) \in C([0,T]; H^{s+2}(\mathbb{R}) \times H^{s+2}(\mathbb{R}))$ be a solution to (1.1). Then

$$\sup_{t\in[0,T]}\|(u,v)\|_{H^s} \leq \|(u_0,v_0)\|_{H^s}\exp\Big(c\int_0^T\big(\|\partial_xu\|_{L^\infty_x} + \|\partial_xv\|_{L^\infty_x}\big)dt\Big) \leq \|(u_0,v_0)\|_{H^s}\exp\big(cT^{\frac12}\|(\partial_xu, \partial_xv)\|_{L^2_TL^\infty_x}\big).$$

Lemma 4.2

Fix $\varepsilon > 0$ and let $I$ be a bounded interval. Let $(u,v) \in C([0,T]; H^{s+1}(\mathbb{R}) \times H^{s+1}(\mathbb{R}))$ be a solution of (1.1). Then, for any $s \geq \frac12$, it satisfies

$$\text{(4.2)}\qquad \Big(\int_0^T\!\!\int_I\big(|D_x^{s-\frac12}u_x|^2 + |D_x^{s-\frac12}v_x|^2\big)dx\,dt\Big)^{\frac12} \lesssim \|(u_0,v_0)\|_{H^s}\big(1 + T + T\|(u_0,v_0)\|_{H^{\frac12+\varepsilon}} + \|(\partial_xu, \partial_xv)\|_{L^1_TL^\infty_x}\big)\exp\big(c\|(\partial_xu, \partial_xv)\|_{L^1_TL^\infty_x}\big).$$

Proof

Apply D s to (1.1), multiply by D s u χ j , D s v χ j , respectively, and integrate in space to obtain

R χ j ( D s + 1 2 u 2 + D s + 1 2 v 2 ) d x = 1 2 d d t R χ j ( D s u 2 + D s v 2 ) d x a R ( 2 χ j D s + 1 2 u D s + 1 2 v χ j D s v D s H u ) d x a 2 R ( u ( D s u ) 2 + v ( D s v ) 2 + a ( v ( D s u ) 2 + u ( D s v ) 2 ) + 2 a ( u + v ) D s u D s v ) d x + a R χ j D s u ( [ D s , u ] u x + a [ D s , v ] v x + a [ D s , u ] v x + a [ D s , v ] u x ) d x + a R χ j D s v ( [ D s , v ] v x + a [ D s , u ] u x + a [ D s , v ] u x + a [ D s , u ] v x ) d s a 2 R χ j ( ( u x + a v x ) ( D s u ) 2 + ( v x + a u x ) ( D s v ) 2 + 2 a ( u x + v x ) D s u D s v ) d x R D s u ( [ D , χ j ] D s u [ D 1 2 , χ j ] D 1 2 + s u a [ H , χ j ] D s v x x 2 a [ H , χ j ] D s v x ) d x R D s v ( [ D , χ j ] D s v [ D 1 2 , χ j ] D 1 2 + s v 2 a [ D , χ j ] D s u + 2 a [ D 1 2 , χ j ] D s + 1 2 u x ) d x d d t R χ j ( ( D s u ) 2 + ( D s v ) 2 ) d x + ( u L + v L + u x L + v x L ) ( u , v ) H s 2 .

Then, by integrating in time, we can derive the necessary bounds on the solutions, which implies the estimate given in equation (4.2).□

A priori estimates for smooth solutions are crucial in the analysis of partial differential equations, as they provide bounds on the solutions that depend only on the initial data and the properties of the equations, allowing us to establish regularity and stability of the solutions over time.

Proposition 4.1

Let s ( 9 8 , 3 2 ] . For any M > 0 , there exists a constant T ˜ = T ˜ ( M ) > 0 such that, for any initial data ( u 0 , v 0 ) H ( R ) × H ( R ) satisfying ( u 0 , v 0 ) H s M , the solution ( u , v ) given by Theorem 3.1 is defined on [ 0 , T ˜ ] and satisfies

Λ T s ( u , v ) = max ( u , v ) L T H x s , ( x u , x v ) L T 2 L x , ( 1 + T ) β j ( u , v ) L ( [ j , j + 1 ) × [ 0 , T ] ) 2 1 2 C s ( T ) ( u 0 , v 0 ) H s ,

for T ( 0 , T ˜ ] , where β > 1 2 and C s ( T ˜ ) is a positive constant depending only on s and T ˜ .

Proof

For simplicity, let λ T s ( u , v ) = ( u , v ) L T H x s , γ T s ( u , v ) = ( x u , x v ) L T 2 L x , μ T s ( u , v ) = ( 1 + T ) β j ( u , v ) L ( [ j , j + 1 ) × [ 0 , T ] ) 2 1 2 . From Lemma 4.1, we have

(4.3) λ T s ( u , v ) ( u 0 , v 0 ) H s e C T 1 2 γ T s ( u , v ) .

Set U ˜ = u + v and V ˜ = u v . Using Proposition 2.2, the estimate of γ T s ( u , v ) is

(4.4) γ T s ( u , v ) x U ˜ L T 2 L x + x V ˜ L T 2 L x T κ 1 ( D x 9 8 + ε u L T L x 2 + D x 9 8 + ε v L T L x 2 + u L T L x 2 + v L T L x 2 ) + T κ 2 ( D x 5 8 + ε u L T 2 L x 2 + D x 5 8 + ε v L T 2 L x 2 + u L T 2 L x 2 + v L T 2 L x 2 ) T κ 1 ( u , v ) L T H x s + T κ 2 ( D x 5 8 + ε ( u u x ) L x , T 2 + D x 5 8 + ε ( u v x ) L x , T 2 + D x 5 8 + ε ( v u x ) L x , T 2 + D x 5 8 + ε ( v v x ) L x , T 2 + u u x L x , T 2 + u v x L x , T 2 + v u x L x , T 2 + v v x L x , T 2 ) T κ 1 λ T s ( u , v ) + T κ 2 ( f ( u , v ) + g ( u , v ) ) ,

where f ( u , v ) = D x 5 8 + ε ( u u x ) L x , T 2 + D x 5 8 + ε ( u v x ) L x , T 2 + D x 5 8 + ε ( v u x ) L x , T 2 + D x 5 8 + ε ( v v x ) L x , T 2 , and g ( u , v ) = ( u , v ) L T L x 2 γ T s ( u , v ) . To estimate f ( u , v ) , we use the Leibniz rule, that is, Lemma 2.4. Indeed, the first term is

D x 5 8 + ε ( u u x ) L x , T 2 u D x 5 8 + ε x u L x , T 2 + x u L T 2 L x u L T H x s .

Using the smoothing effect established in Lemma 4.2, we can obtain significant improvements in the regularity of the solutions as follows:

(4.5) u D x 5 8 + ε x u L T 2 L x 2 = R 0 T u D x 5 8 + ε x u 2 d t d x 1 2 = j = j j + 1 0 T u D x 5 8 + ε x u 2 d t d x 1 2 j = j j + 1 ( sup t [ 0 , T ] u ) 2 0 T D x 5 8 + ε x u 2 d t d x 1 2 j = u L ( [ j , j + 1 ) × [ 0 , T ] ) 2 1 2 sup j Z D x 5 8 + ε x u L 2 ( [ j , j + 1 ) × [ 0 , T ] ) ( 1 + T ) β μ T s ( u , v ) ( u 0 , v 0 ) H s ( 1 + T + T ( u 0 , v 0 ) H s + T 1 2 γ T s ( u , v ) ) exp ( c T 1 2 γ T s ( u , v ) ) .
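The summation step in (4.5) is the elementary ℓ²–ℓ∞ Hölder bound: writing a j for the sup in time of the L ( [ j , j + 1 ) ) norm of u and b j for the L 2 ( [ j , j + 1 ) × [ 0 , T ] ) norm of D x 5 8 + ε x u , it reads

```latex
\Big(\sum_{j\in\mathbb{Z}} a_j^{2}\, b_j^{2}\Big)^{1/2}
  \le \Big(\sum_{j\in\mathbb{Z}} a_j^{2}\Big)^{1/2}\,\sup_{j\in\mathbb{Z}} b_j .
```

The first factor is controlled by the maximal-function quantity μ T s ( u , v ) , and the second by the local smoothing effect of Lemma 4.2; the same device reappears in the proof of Proposition 4.2.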

Consequently, based on equations (4.3), (4.4), and (4.5),

γ T s ( u , v ) ( u 0 , v 0 ) H s ( T κ 1 + T κ 2 γ T s ( u , v ) + T κ 2 μ T s ( u , v ) ( 1 + T ) β ( 1 + T ( 1 + λ T s ( u , v ) ) + T 1 2 γ T s ( u , v ) ) ) e C T 1 2 γ T s ( u , v ) .

Utilizing the integral equation for u and v given in (3.4), along with the maximal function estimate from Lemma 2.8 and the L 2 conservation law, we can conclude that

(4.6) j = u L ( [ j , j + 1 ) × [ 0 , T ] ) 2 1 2 W 1 ( t ) u 0 L ( [ j , j + 1 ) × [ 0 , T ] ) + W 1 ( t ) v 0 L ( [ j , j + 1 ) × [ 0 , T ] ) + W 2 ( t ) u 0 L ( [ j , j + 1 ) × [ 0 , T ] ) + W 2 ( t ) v 0 L ( [ j , j + 1 ) × [ 0 , T ] ) + 0 t W 1 ( t t ) F 1 d t L ( [ j , j + 1 ) × [ 0 , T ] ) + 0 t W 2 ( t t ) F 1 d t L ( [ j , j + 1 ) × [ 0 , T ] ) + 0 t W 1 ( t t ) F 2 d t L ( [ j , j + 1 ) × [ 0 , T ] ) + 0 t W 2 ( t t ) F 2 d t L ( [ j , j + 1 ) × [ 0 , T ] ) ( 1 + T ) ( u 0 , v 0 ) H 1 2 + + D x 1 2 + ε F 1 L x 2 L T 2 + D x 1 2 + ε F 2 L x 2 L T 2 + F 1 L x 2 L T 2 + F 2 L x 2 L T 2 .

For fixed ε > 0 , note that 1 2 + ε < 5 8 + ε . Thus, we have

D x 1 2 + ε F 1 L T 2 L x 2 ( 1 + T ) β μ T s ( u , v ) ( u 0 , v 0 ) H s × ( 1 + T + T ( u 0 , v 0 ) H s + T 1 2 γ T s ( u , v ) ) exp ( c T 1 2 γ T s ( u , v ) ) .

Therefore, we obtain

(4.7) μ T s ( u , v ) ( u 0 , v 0 ) H s ( 1 + T 1 2 + T 1 2 γ T s ( u , v ) + T 1 2 μ T s ( u , v ) ( 1 + T ) β ( 1 + T ( 1 + λ T s ( u , v ) ) + T 1 2 γ T s ( u , v ) ) ) e C T 1 2 γ T s ( u , v ) .

Note that T κ 2 γ T s ( u , v ) , T 1 2 γ T s ( u , v ) , T κ 2 μ T s ( u , v ) , T 1 2 μ T s ( u , v ) , and T λ T s ( u , v ) tend to 0, as T 0 . So, there exists a time T ˜ ( 0 , T ] , such that

max { T ˜ κ 2 γ T ˜ s ( u , v ) , T ˜ 1 2 γ T ˜ s ( u , v ) , T ˜ κ 2 μ T ˜ s ( u , v ) , T ˜ 1 2 μ T ˜ s ( u , v ) , T ˜ λ T ˜ s ( u , v ) } = 1 2 .

Moreover, there exists C s ( T ˜ ) such that Λ T s ( u , v ) C s ( T ˜ ) ( u 0 , v 0 ) H s . Next, we verify that T ˜ C ( M ) whenever ( u 0 , v 0 ) H s M . Without loss of generality, we assume that T ˜ κ 2 γ T ˜ s ( u , v ) = 1 2 . This, combined with Λ T s ( u , v ) C s ( T ˜ ) ( u 0 , v 0 ) H s , implies that

1 2 T ˜ κ 2 C s ( T ˜ ) ( u 0 , v 0 ) H s T ˜ κ 2 C s ( T ˜ ) M ,

thus we have T ˜ C ( M ) .□

Now, we begin the proof of Theorem 4.1.

Part I. Uniqueness. Let ( u , v ) and ( u ˜ , v ˜ ) be two solutions of (1.1) with initial data ( u 0 , v 0 ) and ( u ˜ 0 , v ˜ 0 ) , respectively. Define the positive number K as follows:

K = max { ( x u , x v ) L T 1 L x , ( x u ˜ , x v ˜ ) L T 1 L x } .

Set p = u u ˜ and q = v v ˜ . Then, the pair ( p , q ) satisfies the following equation

(4.8) p t = H p x x + 2 H q x x + 2 u p x + 2 p u ˜ x + 4 ( u q + p v ˜ ) x + 4 v q x + 4 q v ˜ x , q t = H q x x + 2 H p x x + 2 v q x + 2 q v ˜ x + 4 ( v p + q u ˜ ) x + 4 u p x + 4 p u ˜ x , p ( x , 0 ) = u 0 u ˜ 0 , q ( x , 0 ) = v 0 v ˜ 0 .

Multiply the two equations in (4.8) by p and q , respectively, then integrate over space, and apply integration by parts to deduce that

1 2 d d t R ( p 2 + q 2 ) d x = R ( p 2 ( 2 u ˜ x u x + 2 v ˜ x ) + 4 p q ( u ˜ x + v ˜ x ) + q 2 ( 2 v ˜ x v x + 2 u ˜ x ) ) d x ( ( x u , x v ) L T 1 L x + ( x u ˜ , x v ˜ ) L T 1 L x ) ( p , q ) L 2 2 K ( p , q ) L 2 2 .

Apply the Gronwall inequality to obtain the following result:

(4.9) ( p , q ) L 2 ( u 0 u ˜ 0 , v 0 v ˜ 0 ) L 2 e c K .

The estimate (4.9) provides the uniqueness result by taking ( u 0 , v 0 ) = ( u ˜ 0 , v ˜ 0 ) .
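The Gronwall step behind (4.9) is the standard one: writing X ( t ) for the squared norm of ( p , q ) ( t ) and k ( t ) for the sum of the L x norms of x u , x v , x u ˜ , x v ˜ at time t , the energy inequality above takes the form

```latex
X'(t) \le c\,k(t)\,X(t), \qquad k \in L^{1}([0,T])
\qquad\Longrightarrow\qquad
X(t) \le X(0)\exp\Big(c\int_0^{t} k(\tau)\,\mathrm{d}\tau\Big) \le X(0)\,e^{2cK},
```

since the time integral of k over [ 0 , T ] is at most 2 K by the definition of K .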

Part II. Existence. We use the Bona-Smith argument. Fix initial data ( u 0 , v 0 ) H s ( R ) × H s ( R ) . We regularize the initial data by setting u 0 , ε = φ ε u 0 and v 0 , ε = φ ε v 0 . Since ( u 0 , ε , v 0 , ε ) H ( R ) × H ( R ) , we deduce from Theorem 3.1 that for any ε > 0 , there exists a positive time T ε > 0 and a unique solution ( u ε , v ε ) C ( [ 0 , T ε ] ; H ( R ) × H ( R ) ) with u ε ( x , 0 ) = u 0 , ε , v ε ( x , 0 ) = v 0 , ε . Observe that ( u 0 , ε , v 0 , ε ) H s ( u 0 , v 0 ) H s . Thus, it follows from the proof of Proposition 4.1 that there exists a positive time T = T ( ( u 0 , v 0 ) H s ) such that the sequence of solutions { ( u ε , v ε ) } can be extended to the time interval [ 0 , T ] and satisfies

( u ε , v ε ) L T H x s Λ T s ( u ε , v ε ) ( u 0 , v 0 ) H s ,

for any ε > 0 . Arguing as in the proof of Proposition 4.1 and using Definition (2.1) and estimates (2.2), we obtain that

(4.10) ( D x s 1 x 2 u ε , D x s 1 x 2 v ε ) L T 2 L x ( u 0 , ε , v 0 , ε ) H 2 s ε s ( u 0 , v 0 ) H s , ( x 2 u ε , x 2 v ε ) L T 2 L x ( u 0 , ε , v 0 , ε ) H s + 1 ε 1 ( u 0 , v 0 ) H s , ( D x s 1 2 x u ε , D x s 1 2 x v ε ) L T 2 L x ( u 0 , ε , v 0 , ε ) H 2 s 1 2 ε ( s 1 2 ) ( u 0 , v 0 ) H s .
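The ε-dependence in (4.10) comes from the standard Bona-Smith mollifier bounds. In our notation (the paper's Definition (2.1)), take a bump function φ with φ ^ equal to 1 near the origin and set φ ε ( x ) = ε 1 φ ( x ε ) ; then for σ 0 ,

```latex
\|\varphi_\varepsilon * u_0\|_{H^{s+\sigma}} \lesssim \varepsilon^{-\sigma}\,\|u_0\|_{H^{s}},
\qquad
\|\varphi_\varepsilon * u_0 - u_0\|_{H^{s-\sigma}} = o(\varepsilon^{\sigma})
\quad \text{as } \varepsilon \to 0 .
```

Applying the first bound with σ = s , σ = 1 , and σ = s 1 2 yields the three lines of (4.10), and the second bound gives the rates quoted for ( p , q ) just below.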

Now, we prove that { ( u ε , v ε ) } is a Cauchy sequence in C ( [ 0 , T ] ; H s ( R ) × H s ( R ) ) . We set p = u ε u ε ′ and q = v ε v ε ′ for 0 < ε ′ < ε . By gathering Definition (2.1), estimates (2.2) and (4.9), we deduce that

( p , q ) L T L x 2 = o ( ε s ) , ( p , q ) L T H x σ = o ( ε s σ ) ,

as ε 0 , where σ [ 0 , s ) . It remains to prove the convergence in C ( [ 0 , T ] ; H s ( R ) × H s ( R ) ) . The pair ( p , q ) satisfies

(4.11) p t = H p x x + 2 H q x x + 2 u ε p x + 2 p x u ε + 4 ( u ε q + p v ε ) x + 4 v ε q x + 4 q x v ε , q t = H q x x + 2 H p x x + 2 v ε q x + 2 q x v ε + 4 ( v ε p + q u ε ) x + 4 u ε p x + 4 p x u ε , p ( x , 0 ) = u 0 , ε u 0 , ε , q ( x , 0 ) = v 0 , ε v 0 , ε .

Proposition 4.2

Assume s ( 9 8 , 3 2 ] . Let ( p , q ) be the solution of (4.11). Then, there exists a time T 1 = T 1 ( ( u 0 , v 0 ) H s ) , where T 1 ( 0 , T ) , such that

( p , q ) L T 1 H x s Γ T 1 s ( p , q ) 0 ,

as ε , ε ′ 0 , where Γ T 1 s ( p , q ) = max { λ T s ( p , q ) , γ T s ( p , q ) } , with

λ T s ( p , q ) = ( p , q ) L T H x s , γ T s ( p , q ) = ( x p , x q ) L T 2 L x .

Proof

Apply J s to (4.11), multiply by J s p and J s q , respectively, and integrate in space to deduce

(4.12) 1 2 d d t ( p , q ) H s 2 = 2 R ( J s p ( J s ( u ε p x ) + J s ( p x u ε ) + 2 J s ( q x u ε ) + 2 J s ( u ε p x ) + 2 J s ( v ε p x ) + 2 J s ( p x v ε ) + 2 J s ( v ε q x ) + 2 J s ( q x v ε ) ) + J s q ( J s ( v ε q x ) + J s ( q x v ε ) + 2 J s ( p x v ε ) + 2 J s ( v ε q x ) + 2 J s ( u ε q x ) + 2 J s ( q x u ε ) + 2 J s ( u ε p x ) + 2 J s ( p x u ε ) ) ) d x .

We separate the right-hand side of (4.12) into two parts, I + I I . The first part involves the derivative acting on ( p , q ) , for example, J s ( u ε p x ) . By the commutator rule, we write this as [ J s , u ε ] p x + u ε J s p x and then apply Lemma 2.3. The second part is where the derivative acts on ( u ε , v ε ) , such as J s ( p x u ε ) . Here, we factor out one derivative, yielding J s 1 ( x p x u ε + p x 2 u ε ) , so that the fractional Leibniz rule applies since s 1 ( 1 8 , 1 2 ] . The parts I and I I are

(4.13) I = 2 R ( J s p ( [ J s , u ε ] x p + 2 [ J s , u ε ] x q + 2 [ J s , v ε ] x p + 2 [ J s , v ε ] x q 1 2 u ε J s p x v ε J s p ) + J s q ( [ J s , v ε ] x q + 2 [ J s , v ε ] x p + 2 [ J s , u ε ] x q + 2 [ J s , u ε ] x p 1 2 v ε J s q x u ε J s q ) 2 x u ε J s p J s q 2 x v ε J s p J s q ) d x , I I = 2 R ( J s p ( J s ( p x u ε ) + 2 J s ( q x u ε ) + 2 J s ( p x v ε ) + 2 J s ( q x v ε ) ) + J s q ( J s ( q x v ε ) + 2 J s ( p x v ε ) + 2 J s ( q x u ε ) + 2 J s ( p x u ε ) ) ) d x .

We provide the estimates for J s p [ J s , u ε ] x p L x 1 and J s p J s ( p x u ε ) L x 1 . For the first term, we apply Lemma 2.5 with p 1 = p 4 = , p 2 = p 3 = 2 to obtain

(4.14) J s p [ J s , u ε ] x p L x 1 p H s [ J s , u ε ] x p L x 2 p H s ( x u ε L x J s 1 p x L x 2 + u ε H s p x L x ) .

For the second term, we have

(4.15) J s p J s ( p x u ε ) L x 1 p H s ( p x u ε L x 2 + D s ( p x u ε ) L x 2 ) ,

where D s ( p x u ε ) L x 2 = D s 1 ( x p x u ε + p x 2 u ε ) L x 2 . Using Lemma 2.4 with α = s 1 ( 1 8 , 1 2 ] , we obtain

D s 1 ( x p x u ε ) L x 2 x p L x D x s 1 x u ε L 2 + D s 1 x p L x 2 x u ε L x , D s 1 ( p x 2 u ε ) L x 2 p L x 2 D s 1 x 2 u ε L + D x s 1 p L x 2 x 2 u ε L x .

Using (4.14) and (4.15), we have the estimates for I and I I . Hence, (4.12) can be expressed as follows:

d d t ( p , q ) H s ( p , q ) H s ( ( x u ε , x v ε ) L + ( x u ε , x v ε ) L ) + ( x p , x q ) L ( ( u ε , v ε ) H s + ( u ε , v ε ) H s + ( D s 1 x u ε , D s 1 x v ε ) L 2 + ( D s 1 x u ε , D s 1 x v ε ) L 2 ) + ( p , q ) L 2 ( ( u ε , v ε ) L + ( u ε , v ε ) L + ( D s 1 x 2 u ε , D s 1 x 2 v ε ) L + ( D s 1 x 2 u ε , D s 1 x 2 v ε ) L ) + ( D s 1 p , D s 1 q ) L 2 ( ( x 2 u ε , x 2 v ε ) L + ( x 2 u ε , x 2 v ε ) L ) .

Applying the Gronwall inequality and Hölder's inequality, we obtain

λ T s ( p , q ) ( T 1 2 ( u 0 , v 0 ) H s γ T s ( p , q ) + g ε , ε ′ ) e C T 1 2 ( u 0 , v 0 ) H s ,

where

g ε , ε ′ = ( u 0 , ε u 0 , ε ′ , v 0 , ε v 0 , ε ′ ) H s + T 1 2 ( p , q ) L T L x 2 ( ( u ε , v ε ) L T 2 L x + ( u ε ′ , v ε ′ ) L T 2 L x + ( D s 1 x 2 u ε , D s 1 x 2 v ε ) L T 2 L x + ( D s 1 x 2 u ε ′ , D s 1 x 2 v ε ′ ) L T 2 L x ) + T 1 2 ( D s 1 p , D s 1 q ) L T L x 2 ( ( x 2 u ε , x 2 v ε ) L T 2 L x + ( x 2 u ε ′ , x 2 v ε ′ ) L T 2 L x ) .

We obtain from (4.10) that g ε , ε ′ 0 , as ε , ε ′ 0 . Next, we apply the estimate as in Proposition 4.1 and deduce that

γ T s ( p , q ) T κ 1 ( p , q ) L T H x s + T κ 2 ( J s 1 2 ( u ε p x ) L x , T 2 + J s 1 2 ( u ε q x ) L x , T 2 + J s 1 2 ( v ε q x ) L x , T 2 + J s 1 2 ( v ε p x ) L x , T 2 + J s 1 2 ( v ε p x ) L x , T 2 + J s 1 2 ( u ε q x ) L x , T 2 + J s 1 2 ( q x u ε ) L x , T 2 + J s 1 2 ( p x v ε ) L x , T 2 + J s 1 2 ( q x v ε ) L x , T 2 + J s 1 2 ( p x v ε ) L x , T 2 + J s 1 2 ( q x u ε ) L x , T 2 + J s 1 2 ( p x u ε ) L x , T 2 ) .

Without loss of generality, we estimate J s 1 2 ( u ε p x ) L x , T 2 and J s 1 2 ( p x u ε ) L x , T 2 as follows:

J s 1 2 ( p x u ε ) L x , T 2 x u ε L T 2 L x p L T H x s + D x s 1 2 x u ε L T 2 L x p L T L x 2 ,

(4.16) J s 1 2 ( u ε p x ) L x , T 2 x p L T 2 L x u ε L T H x s + u ε D x s 1 2 x p L x , T 2 .

We estimate the second term of (4.16) as follows:

u ε D x s 1 2 x p L x , T 2 = j j j + 1 0 T u ε D x s 1 2 x p 2 d t d x 1 2 j j j + 1 sup t [ 0 , T ] u ε 2 0 T D x s 1 2 x p 2 d t d x 1 2 j u ε L ( [ j , j + 1 ) × [ 0 , T ] ) 2 1 2 sup j Z D x s 1 2 x p L 2 ( [ j , j + 1 ) × [ 0 , T ] ) ( 1 + T ) β ( u 0 , v 0 ) H s sup j Z D x s 1 2 x p L 2 ( [ j , j + 1 ) × [ 0 , T ] ) .

Thus, by gathering the estimates of λ T s ( p , q ) and γ T s ( p , q ) , we conclude that Γ T 1 s ( p , q ) 0 , as ε , ε ′ 0 .□

With Proposition 4.2 at hand, there exists a function ( u , v ) C ( [ 0 , T 1 ] ; H s ( R ) × H s ( R ) ) such that ( u ε u , v ε v ) L T 1 H x s 0 , as ε 0 .

At this stage, the continuous dependence on the initial data follows from the Bona-Smith argument above, which completes the proof of Theorem 4.1.

Thus, combining Theorems 3.1 and 4.1, we have proved Theorem 1.1.

Acknowledgments

The author is greatly indebted to the anonymous referees for the careful reading of the manuscript and the invaluable suggestions and comments which significantly improved the quality of the presentation.

  1. Funding information: The work of Zhao is partially supported by grant 2090011540028 of Ningbo University of Technology and by the Scientific Research Fund of the Zhejiang Provincial Education Department.

  2. Author contributions: The author confirms the sole responsibility for the conception of the study, presented results and manuscript preparation.

  3. Conflict of interest: The author states no conflict of interest.

  4. Data availability statement: Data sharing is not applicable to this article.

References

[1] T. B. Benjamin, Internal waves of permanent form in fluids of great depth, J. Fluid Mech. 29 (1967), 559–592. https://doi.org/10.1017/S002211206700103X.

[2] J. L. Bona and R. Smith, The initial-value problem for the Korteweg-de Vries equation, Philos. Trans. Roy. Soc. London Ser. A 278 (1975), 555–601. https://doi.org/10.1098/rsta.1975.0035.

[3] R. Grimshaw and Y. Zhu, Oblique interactions between internal solitary waves, Stud. Appl. Math. 92 (1994), 249–270. https://doi.org/10.1002/sapm1994923249.

[4] P. Grisvard, Quelques propriétés des espaces de Sobolev utiles dans l'étude des équations de Navier-Stokes (I), Problèmes d'évolution non linéaires, Séminaire de Nice 40 (1989), 360–392. https://doi.org/10.1016/0167-2789(89)90050-X.

[5] A. D. Ionescu and C. E. Kenig, Global well-posedness of the Benjamin-Ono equation in low-regularity spaces, J. Amer. Math. Soc. 20 (2007), 753–798. https://doi.org/10.1090/S0894-0347-06-00551-0.

[6] R. J. Iório Jr., On the Cauchy problem for the Benjamin-Ono equation, Commun. Partial Differ. Equ. 11 (1986), 1031–1081. https://doi.org/10.1080/03605308608820456.

[7] T. Kato, Quasi-linear equations of evolution, with applications to partial differential equations, Spectral Theory and Differential Equations, Springer, Berlin, Heidelberg, 1975, pp. 25–70. https://doi.org/10.1007/BFB0067080.

[8] C. E. Kenig and K. D. Koenig, On the local well-posedness of the Benjamin-Ono and modified Benjamin-Ono equations, Math. Res. Lett. 10 (2003), 879–895. https://doi.org/10.4310/MRL.2003.V10.N6.A13.

[9] C. E. Kenig, G. Ponce, and L. Vega, Oscillatory integrals and regularity of dispersive equations, Indiana Univ. Math. J. 40 (1991), 33–69. https://doi.org/10.1512/iumj.1991.40.40003.

[10] T. Kato and G. Ponce, Commutator estimates and the Euler and Navier-Stokes equations, Comm. Pure Appl. Math. 41 (1988), 891–907. https://doi.org/10.1002/cpa.3160410704.

[11] R. Killip, T. Laurens, and M. Visan, Sharp well-posedness for the Benjamin-Ono equation, Invent. Math. 236 (2024), 999–1054. https://doi.org/10.1007/s00222-024-01250-8.

[12] H. Koch and N. Tzvetkov, On the local well-posedness of the Benjamin-Ono equation in Hs(R), Int. Math. Res. Not. 26 (2003), 1449–1464. https://doi.org/10.1155/S1073792803211260.

[13] H. Ono, Algebraic solitary waves in stratified fluids, J. Phys. Soc. Jpn. 39 (1975), 1082–1091. https://doi.org/10.1143/JPSJ.39.1082.

[14] G. Ponce, On the global well-posedness of the Benjamin-Ono equation, Differ. Integr. Equ. 4 (1991), 527–542. https://doi.org/10.57262/die/1372700427.

[15] J. C. Saut, Sur quelques généralisations de l'équation de Korteweg-de Vries (French), J. Math. Pures Appl. 58 (1979), 21–61. https://doi.org/10.1016/0022-0396(79)90068-8.

[16] J. C. Saut and R. Temam, Remarks on the Korteweg-de Vries equation, Israel J. Math. 24 (1976), 78–87. https://doi.org/10.1007/BF02761431.

[17] T. Tao, Global well-posedness of the Benjamin-Ono equation in H1(R), J. Hyperbolic Differ. Equ. 1 (2004), 27–49. https://doi.org/10.1142/S0219891604000032.

[18] M. Tom, Existence of global solutions for nonlinear dispersive equations, Nonlinear Anal. 2 (1993), 175–189. https://doi.org/10.1016/0362-546X(93)90016.

Received: 2025-01-01
Revised: 2025-06-29
Accepted: 2025-09-12
Published Online: 2025-10-29

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
