Article Open Access

One-side Liouville theorems under an exponential growth condition for Kolmogorov operators

  • Enrico Priola
Published/Copyright: November 19, 2024

Abstract

It is known that for a possibly degenerate hypoelliptic Ornstein-Uhlenbeck (OU) operator

$L = \frac{1}{2}\,\mathrm{tr}(Q D^2) + \langle Ax, D\rangle = \frac{1}{2}\,\mathrm{div}(Q D) + \langle Ax, D\rangle, \qquad x \in \mathbb{R}^N,$

all (globally) bounded solutions of $Lu = 0$ on $\mathbb{R}^N$ are constant if and only if all the eigenvalues of $A$ have non-positive real parts (i.e., $s(A) \le 0$). We show that if $Q$ is positive definite and $s(A) \le 0$, then any non-negative solution $v$ of $Lv = 0$ on $\mathbb{R}^N$, which has at most exponential growth, is indeed constant. Thus, under a non-degeneracy condition, we relax the boundedness assumption on the harmonic functions and maintain the sharp condition on the eigenvalues of $A$. We also prove a related one-side Liouville theorem in the case of hypoelliptic OU operators.

MSC 2010: 31B05; 47D07; 60H30

Dedicated to Professor Ermanno Lanconelli on the occasion of his 80th birthday.

1 Introduction

Let Q be a symmetric non-negative definite N × N matrix, and let A be a real N × N matrix. The possibly degenerate Ornstein-Uhlenbeck operator (OU operator) associated with ( Q , A ) is defined as

(1.1) $L = \frac{1}{2}\,\mathrm{tr}(Q D^2) + \langle Ax, D\rangle = \frac{1}{2}\,\mathrm{div}(Q D) + \langle Ax, D\rangle, \qquad x \in \mathbb{R}^N.$

We will assume the so-called Kalman controllability condition (see, for instance, Chapter 1 in [23]):

(1.2) $\mathrm{rank}\,[Q,\ AQ,\ \dots,\ A^{N-1}Q] = N.$

This is equivalent to the hypoellipticity of $L - \partial_t$ (see [13] and [14] for more details). Clearly, condition (1.2) holds in the non-degenerate case, when $Q$ is positive definite.
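As a quick numerical sanity check (not part of the article; the matrices below are illustrative choices), the Kalman condition (1.2) can be verified by computing the rank of the controllability matrix. The classical Kolmogorov-type example shows how a degenerate $Q$ can still satisfy (1.2) when the drift matrix propagates the noise:

```python
import numpy as np

def kalman_rank(Q, A):
    """Rank of the controllability matrix [Q, AQ, ..., A^{N-1} Q]."""
    N = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ Q for k in range(N)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# Kolmogorov-type example: the diffusion is degenerate (noise acts only
# on the last coordinate), yet the drift propagates it to all coordinates.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
Q = np.diag([0., 0., 1.])
print(kalman_rank(Q, A))                      # 3: condition (1.2) holds
print(kalman_rank(np.diag([1., 0., 0.]), A))  # 1: noise on x1 does not propagate
```

Note that placing the noise on the first coordinate instead breaks (1.2), since $Ae_1 = 0$ for this nilpotent $A$.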

We study one-side Liouville-type theorems for $L$, i.e., we want to know when non-negative smooth solutions $u : \mathbb{R}^N \to \mathbb{R}$ of

$Lu(x) = 0, \qquad x \in \mathbb{R}^N,$

are constant. Such functions are called positive (non-negative) harmonic functions for L [15]. Positive harmonic functions are important in the study of the Martin boundary for L (see [4] for the non-degenerate two-dimensional case and Chapter 7 in [15] for more information on the Martin boundary for non-degenerate diffusions).

Before stating our main result, we discuss a known Liouville theorem concerning bounded harmonic functions (BHFs) for L . To this purpose, we introduce the spectral bound of A :

(1.3) $s(A) = \max\{\mathrm{Re}(\lambda) : \lambda \in \sigma(A)\},$

where $\sigma(A)$ is the spectrum of $A$ (i.e., the set of eigenvalues of $A$).

By Theorem 3.8 of [17], it follows that all BHFs for the hypoelliptic OU operator $L$ are constant if and only if $s(A) \le 0$ (cf. Theorem 2.1 and see the comments in Section 2 for more details). Liouville theorems involving BHFs for diffusions are also considered in [1], [2] and [19]. Liouville theorems involving BHFs for purely non-local OU operators are proved in [21] and [20] (see also Section 6). Recall that Liouville theorems for BHFs have probabilistic interpretations, in terms of absorption functions (see Chapter 9 in [15] and Section 6 in [17]) and in terms of successful couplings (see [21,20], and references therein).

Theorem 3.8 in [17] suggests the natural question of whether, under the assumption $s(A) \le 0$, a one-side Liouville-type theorem holds for $L$ more generally. In general, this is an open problem (Section 6). However, one-side Liouville theorems for $L$ have been proved in some cases under specific assumptions on $Q$ and $A$ (such assumptions imply that $s(A) \le 0$). We refer to [9–12] and references therein (note that [9–11] also contain Liouville theorems for other classes of Kolmogorov operators). In particular, we mention the main result in [12], where a one-side Liouville theorem is proved assuming that $Q$ is positive definite and that the norm of the exponential matrix $e^{tA}$ is uniformly bounded for $t \in \mathbb{R}$ (cf. Theorem 2.4 and the related discussion in Section 2).

The main result of this work states that if $Q$ is positive definite and $s(A) \le 0$, then any positive harmonic function $v$ for $L$, which has at most exponential growth, is indeed constant. Thus, under a non-degeneracy condition, we relax the boundedness assumption on the harmonic functions of [17] and maintain the sharp condition on the eigenvalues of $A$ (see Theorem 4.1 for the precise statement). We also prove a related one-side Liouville theorem under a sublinear growth condition, which is valid for hypoelliptic OU operators (Theorem 3.4).

This article is organized as follows. We recall and discuss known results in Section 2. In Section 3, we prove the convexity of positive harmonic functions for L under an exponential growth condition. This is a consequence of a result given in [18]. Such convexity property will be the starting point for the proof of our main result. We state and discuss Theorem 4.1 in Section 4, where we also present Example 1 to illustrate an idea of the proof in a significant case. The complete proof of the main result is given in Section 5. We finish this article by presenting some open problems.

1.1 Notations

We denote by $|\cdot|$ the usual Euclidean norm on any $\mathbb{R}^k$, $k \ge 1$. Moreover, $x \cdot y$ or $\langle x, y\rangle$ indicates the usual inner product of $x, y \in \mathbb{R}^k$. The canonical basis of $\mathbb{R}^k$ is denoted by $(e_i)_{i=1,\dots,k}$.

Let $k \ge 1$. Given a regular function $u : \mathbb{R}^k \to \mathbb{R}$, we denote by $D^2 u(x)$ the $k \times k$ Hessian matrix of $u$ at $x \in \mathbb{R}^k$, i.e., $D^2 u(x) = (\partial^2_{x_i x_j} u(x))_{i,j=1,\dots,k}$, where $\partial^2_{x_i x_j} u$ are the usual second-order partial derivatives of $u$. Similarly, we define the gradient $Du(x) \in \mathbb{R}^k$.

Given a real $k \times k$ matrix $A$, $\|A\|$ denotes its operator norm and $\mathrm{tr}(A)$ its trace.

Given a symmetric non-negative definite $k \times k$ matrix $Q$, we denote by $N(0, Q)$ the symmetric Gaussian measure with mean 0 and covariance matrix $Q$ (see, for instance, Section 1.7 in [3] or Section 2.2 in [6]). If, in addition, $Q$ is positive definite, then $N(0, Q)$ has the density $\frac{1}{\sqrt{(2\pi)^k \det(Q)}}\, e^{-\frac{1}{2}\langle Q^{-1}x, x\rangle}$, $x \in \mathbb{R}^k$, with respect to the $k$-dimensional Lebesgue measure.

2 Some known results

First, we introduce the Banach space B b ( R N ) of all Borel and bounded functions from R N into R endowed with the supremum norm. We define the following semigroup of operators ( P t ) acting on B b ( R N ) :

(2.1) $(P_t f)(x) = P_t f(x) = \int_{\mathbb{R}^N} f(e^{tA}x + y)\, N(0, Q_t)(dy), \qquad x \in \mathbb{R}^N, \ t > 0,$

$P_0 f = f$, $f \in B_b(\mathbb{R}^N)$, where $N(0, Q_t)$ is the Gaussian measure with mean 0 and covariance matrix

$Q_t = \int_0^t e^{sA}\, Q\, e^{sA^*}\, ds$

(see, for instance, Chapter 6 in [5] for more details). Here $e^{sA}$ and $e^{sA^*}$ are exponential matrices, $A^*$ denotes the transpose of the matrix $A$, and $(P_t)$ is called the Ornstein-Uhlenbeck semigroup (briefly, the OU semigroup).

Recall that the Kalman condition is equivalent to the fact that $Q_t$ is positive definite for every $t > 0$ (cf. Section 1.3 in [23]). It is also equivalent to the strong Feller property of $(P_t)$ and, moreover, to the fact that $P_t(B_b(\mathbb{R}^N)) \subset C_b(\mathbb{R}^N)$, $t > 0$ (see, for instance, Chapter 6 in [5]).
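The equivalence between the Kalman condition and the positive definiteness of $Q_t$ can be illustrated numerically (a hedged sketch, not part of the article; the quadrature helper and the example matrices are our own choices):

```python
import numpy as np
from scipy.linalg import expm

def Qt(Q, A, t, n=2000):
    # midpoint-rule quadrature for Q_t = ∫_0^t e^{sA} Q e^{sA*} ds
    ds = t / n
    s = (np.arange(n) + 0.5) * ds
    return sum(expm(si * A) @ Q @ expm(si * A).T for si in s) * ds

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
Q = np.diag([0., 0., 1.])   # degenerate Q, but the Kalman condition holds
eigs = np.linalg.eigvalsh(Qt(Q, A, t=1.0))
print(eigs.min() > 0)        # True: Q_t is positive definite for t > 0
```

Even though $Q$ has rank one here, all eigenvalues of the computed $Q_t$ are strictly positive, in agreement with the hypoellipticity of the associated operator.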

Now, we mention a special case of Theorem 3.8 in [17] (this covers also some classes of non-local OU operators). It shows that any BHF for $L$ is constant if and only if $s(A) \le 0$.

Theorem 2.1

[17] Let us consider the hypoelliptic OU operator $L$ (i.e., we assume (1.2)). Let $w \in C^2(\mathbb{R}^N)$ be any bounded solution to $Lw(x) = 0$ on $\mathbb{R}^N$ ([1]). Then, $w$ is constant if and only if $s(A) \le 0$.

Remark 2.2

Note that a smooth bounded real function w is a solution to L w ( x ) = 0 on R N if and only if it is a BHF for the OU semigroup, i.e.,

$P_t w = w \ \text{ on } \mathbb{R}^N, \qquad t \ge 0.$

This fact can be easily proved using the Itô formula (we point out that the Itô formula will also be used in the proof of Proposition 3.2).

To prove the "if" part of the previous theorem, one uses the next result, the proof of which uses control-theoretic techniques.

Theorem 2.3

[16] Let us consider the matrix $Q_t^{-1/2} e^{tA}$, $t > 0$ (this is well defined by (1.2)). We have

(2.2) $\|Q_t^{-1/2} e^{tA}\| \to 0 \ \text{ as } \ t \to \infty \iff s(A) \le 0.$
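The decay in (2.2) can be observed numerically (an illustrative sketch with our own choice of $A$ and $Q$, not taken from the article):

```python
import numpy as np
from scipy.linalg import expm

def Qt(Q, A, t, n=4000):
    # midpoint-rule quadrature for Q_t = ∫_0^t e^{sA} Q e^{sA*} ds
    ds = t / n
    s = (np.arange(n) + 0.5) * ds
    return sum(expm(si * A) @ Q @ expm(si * A).T for si in s) * ds

A = np.array([[0., 1.],
              [0., 0.]])       # nilpotent Jordan block: s(A) = 0
Q = np.eye(2)
norms = []
for t in (1.0, 4.0, 16.0):
    w, V = np.linalg.eigh(Qt(Q, A, t))
    Qt_inv_half = V @ np.diag(w ** -0.5) @ V.T    # Q_t^{-1/2} via eigendecomposition
    norms.append(np.linalg.norm(Qt_inv_half @ expm(t * A), 2))
print(norms)  # decreasing toward 0, even though ||e^{tA}|| itself grows like t
```

The point of the example is that the smoothing covariance $Q_t$ grows fast enough to compensate the polynomial growth of $e^{tA}$ when $s(A) \le 0$.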

Now, we state a one-side Liouville-type theorem proved in Theorem 1.1 of [12], which will be used in the sequel.

Theorem 2.4

[12] Let $Q$ be an $N \times N$ positive definite matrix. Suppose that $\sup_{t \in \mathbb{R}} \|e^{tA}\| < \infty$ ([2]). Let $v$ be a smooth non-negative solution to $Lv = 0$. Then, $v$ is constant.

Clearly, we can replace in the previous theorem the condition that v is non-negative by requiring that v is bounded from above or from below.

Remark 2.5

(i) Theorem 1.1 in [12] is proved when $Q = I$. On the other hand, if $Q$ is positive definite, then using the change of variable $u(x) = l(Q^{-1/2}x)$, $l(y) = u(Q^{1/2}y)$, $y \in \mathbb{R}^N$, we can pass from an OU operator associated with $(Q, A)$ to an OU operator associated with $(I, Q^{-1/2} A Q^{1/2})$. In particular, we have $Lu(x) = 0$, $x \in \mathbb{R}^N$, if and only if

$\frac{1}{2}\,\Delta l(y) + \langle Q^{-1/2} A Q^{1/2} y, D l(y)\rangle = 0, \qquad y \in \mathbb{R}^N.$

Therefore, Theorem 1.1 in [12] holds more generally when $Q$ is positive definite.

(ii) We do not know if the previous theorem holds replacing the assumption that Q is positive definite with the more general Kalman condition (1.2).

3 Positive harmonic function for the OU semigroup

Note that formula (2.1) is meaningful even if the Borel function $f$ is only non-negative. Following [18], we say that a Borel function $u : \mathbb{R}^N \to [0, +\infty)$ is a positive harmonic function for the OU semigroup $(P_t)$ if it satisfies

(3.1) $P_t u(x) = u(x), \qquad x \in \mathbb{R}^N, \ t \ge 0.$

The next theorem is a special case of Theorem 5.1 in [18], which holds in infinite dimensions as well. Its proof uses Theorem 2.3 together with an idea of S. Kwapien (personal communication). We sketch its proof in the Appendix.

Theorem 3.1

Assume the Kalman condition (1.2) and $s(A) \le 0$. Consider a positive harmonic function $u$ for the OU semigroup $(P_t)$. Then, $u$ is convex on $\mathbb{R}^N$.

In the next result, we provide a sufficient condition under which positive harmonic functions for L are positive harmonic functions for the OU semigroup as well. For such result, we do not need the Kalman condition (1.2).

Proposition 3.2

Let $u \in C^2(\mathbb{R}^N)$ be a non-negative solution to $Lu(x) = 0$ on $\mathbb{R}^N$. Assume that $u$ verifies the following exponential growth condition: there exists $c_0 > 0$ such that

(3.2) $u(x) \le c_0\, e^{c_0 |x|}, \qquad x \in \mathbb{R}^N.$

Then, we have $P_t u(x) = u(x)$, $x \in \mathbb{R}^N$, $t \ge 0$ ([3]).

Proof

The proof uses stochastic calculus. Let us introduce the OU stochastic process starting at x R N (see, for instance, page 232 in [8]). It is the solution to the following SDE:

(3.3) $X_t^x = x + \int_0^t A X_s^x\, ds + \int_0^t \sqrt{Q}\, dW_s, \qquad t \ge 0, \ x \in \mathbb{R}^N,$

where $W = (W_t)$ is a standard $N$-dimensional Wiener process defined and adapted on a stochastic basis $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$. The solution is given by

$X_t^x = e^{tA} x + \int_0^t e^{(t-s)A} \sqrt{Q}\, dW_s.$

It is well known that $E[u(X_t^x)] = P_t u(x)$, $t \ge 0$, $x \in \mathbb{R}^N$ (see, for instance, Section 5.1.2 in [6]).

By Itô’s formula (see, for instance, Chapter 8 in [3] or Chapter 2 in [8]), we know that, P -a.s.,

$u(X_t^x) = u(x) + \int_0^t Lu(X_s^x)\, ds + M_t = u(x) + M_t, \qquad t \ge 0, \ x \in \mathbb{R}^N,$

using the local martingale $M = (M_t)$, $M_t = \int_0^t \langle Du(X_s^x), \sqrt{Q}\, dW_s\rangle$. Let us fix $x \in \mathbb{R}^N$. Using the stopping times $\tau_n^x = \inf\{t \ge 0 : X_t^x \notin B_n\}$ (here $B_n$ is the open ball of radius $n$ and center 0), we find

$E[u(X^x_{t \wedge \tau_n^x})] = u(x) + E[M_{t \wedge \tau_n^x}].$

By considering a $C_b^2$-function $u_n$ with bounded first and second derivatives on $\mathbb{R}^N$, which coincides with $u$ on $B_{n+1}$, we obtain

$M_{t \wedge \tau_n^x} = \int_0^{t \wedge \tau_n^x} \langle Du_n(X_s^x), \sqrt{Q}\, dW_s\rangle, \qquad t \ge 0, \ n \ge 1.$

Since $\int_0^t \langle Du_n(X_s^x), \sqrt{Q}\, dW_s\rangle$ is a martingale, by the Doob optional stopping theorem, we know that

$0 = E\Big[\int_0^{t \wedge \tau_n^x} \langle Du_n(X_s^x), \sqrt{Q}\, dW_s\rangle\Big] = E[M_{t \wedge \tau_n^x}], \qquad t \ge 0, \ n \ge 1.$

We arrive at

(3.4) $E[u(X^x_{t \wedge \tau_n^x})] = u(x), \qquad t \ge 0, \ n \ge 1.$

We fix t > 0 . In order to pass to the limit in (3.4), we use that, for any n 1 , P -a.s.,

$u(X^x_{t \wedge \tau_n^x}) \le c_0\, e^{c_0 |X^x_{t \wedge \tau_n^x}|} \le c_0\, e^{c_0 \sup_{s \in [0,t]} |X_s^x|}.$

It is known that there exists δ > 0 , possibly depending also on t , such that

(3.5) $E\Big[\exp\Big(\delta \sup_{s \in [0,t]} \Big|\int_0^s e^{(s-r)A} \sqrt{Q}\, dW_r\Big|^2\Big)\Big] < \infty$

(for instance, one can use Proposition 8.7 in [3] together with the estimate $\sup_{s \in [0,t]} |\int_0^s e^{sA} e^{-rA} \sqrt{Q}\, dW_r| \le c_t \sup_{s \in [0,t]} |\int_0^s e^{-rA} \sqrt{Q}\, dW_r|$). It follows that $E[e^{c_0 \sup_{s \in [0,t]} |X_s^x|}] < \infty$. Since $\tau_n^x \to \infty$ $\mathbb{P}$-a.s., we can pass to the limit in (3.4) as $n \to \infty$ by the dominated convergence theorem and obtain

$E[u(X_t^x)] = u(x), \qquad t \ge 0.$

The proof is complete since E [ u ( X t x ) ] = P t u ( x ) .□
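The representation $E[u(X_t^x)] = P_t u(x)$ used above can be checked numerically by Monte Carlo (a hedged sketch; the matrices, test function, and discretization parameters below are our own illustrative choices): an Euler-Maruyama simulation of the SDE (3.3) should reproduce the Gaussian formula (2.1), i.e., the law $N(e^{tA}x, Q_t)$ shifted by $e^{tA}x$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = np.array([[0., 1.],
              [-1., 0.]])            # skew-symmetric, so s(A) = 0
Q = np.eye(2)
x0 = np.array([1.0, 0.0])
t, n_steps, n_paths = 1.0, 400, 20000
f = lambda X: np.cos(X[:, 0])        # an illustrative bounded test function

# Euler-Maruyama for dX = AX dt + sqrt(Q) dW, X_0 = x0
dt = t / n_steps
X = np.tile(x0, (n_paths, 1))
sqQ = np.linalg.cholesky(Q)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, 2))
    X = X + X @ A.T * dt + dW @ sqQ.T

# Exact law: X_t ~ N(e^{tA} x0, Q_t), with Q_t computed by quadrature
n_quad = 2000
ds = t / n_quad
s = (np.arange(n_quad) + 0.5) * ds
Qt = sum(expm(si * A) @ Q @ expm(si * A).T for si in s) * ds
Y = rng.multivariate_normal(expm(t * A) @ x0, Qt, size=n_paths)

print(abs(f(X).mean() - f(Y).mean()) < 0.05)  # both approximate P_t f(x0)
```

The agreement of the two Monte Carlo estimates is exactly the identity $E[f(X_t^x)] = P_t f(x)$ underlying Proposition 3.2.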

According to Theorem 3.1 and Proposition 3.2, we have

Corollary 3.3

Assume the Kalman condition (1.2) and $s(A) \le 0$. Let $u$ be a non-negative smooth solution to $Lu(x) = 0$ on $\mathbb{R}^N$, which verifies the exponential growth condition (3.2). Then, $u$ is a convex function on $\mathbb{R}^N$.

We obtain here a first one-side Liouville-type theorem for possibly degenerate hypoelliptic OU operators. It holds under a sublinear growth condition and generalizes Theorem 2.1.

Theorem 3.4

Let us consider the OU operator $L$ under the Kalman condition (1.2). Let $u \in C^2(\mathbb{R}^N)$ be a non-negative solution to $Lu = 0$ on $\mathbb{R}^N$. Assume the following growth condition: there exist $\delta \in [0,1)$ and $C_\delta > 0$ such that

(3.6) $u(x) \le C_\delta (1 + |x|^\delta), \qquad x \in \mathbb{R}^N.$

If $s(A) \le 0$, then we have that $u$ is constant.

Proof

Since condition (3.6) implies the exponential growth condition (3.2), we can apply Corollary 3.3, so that $u$ is convex on $\mathbb{R}^N$. The assertion follows since a convex function satisfying the sublinear bound (3.6) must be constant: by the subgradient inequality, a non-constant convex function grows at least linearly along some direction, contradicting (3.6).□

4 One-side Liouville theorem under an exponential growth condition

The main result of this paper concerns non-degenerate OU operators L :

(4.1) $L = \frac{1}{2}\,\mathrm{tr}(Q D^2) + \langle Ax, D\rangle,$

where Q is a positive definite N × N -matrix.

Theorem 4.1

Let us consider the OU operator $L$ with $Q$ positive definite. Let $u \in C^2(\mathbb{R}^N)$ be a non-negative solution to $Lu(x) = 0$ on $\mathbb{R}^N$. Suppose that $s(A) \le 0$ holds. Suppose that $u$ satisfies the exponential growth condition (3.2). Then, $u$ is constant.

In order to prove the result, we need a first lemma that holds more generally for hypoelliptic OU operators.

Lemma 4.2

Let us consider a hypoelliptic OU operator $L$. Let $u \in C^2(\mathbb{R}^N)$ be a non-negative solution to $Lu(x) = 0$ on $\mathbb{R}^N$. Suppose that $u$ verifies the exponential growth condition (3.2). Then, if $s(A) \le 0$, we have the following inequality for any $x_0, x \in \mathbb{R}^N$:

(4.2) $u(x) \ge u(x_0) - \langle Du(x_0), x_0\rangle + \langle Du(x_0), e^{tA} x\rangle, \qquad t \ge 0.$

Proof

We know by Corollary 3.3 that for any x 0 R N , we have

(4.3) $u(y) \ge u(x_0) + \langle Du(x_0), y - x_0\rangle, \qquad y \in \mathbb{R}^N.$

We apply the OU semigroup $(P_t)$ to both sides of (4.3):

$P_t u(x) \ge u(x_0) + \Big\langle Du(x_0), \int_{\mathbb{R}^N} [e^{tA} x - x_0 + z]\, N(0, Q_t)(dz)\Big\rangle = u(x_0) - \langle Du(x_0), x_0\rangle + \langle Du(x_0), e^{tA} x\rangle.$

Taking into account Proposition 3.2, we have that $P_t u = u$, $t \ge 0$, and the assertion follows.□

To illustrate the proof of Theorem 4.1, we first examine an example when N = 3 .

Example 1

We introduce

$A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \qquad e^{tA} x = \begin{pmatrix} x_1 + t x_2 + \frac{t^2}{2} x_3 \\ x_2 + t x_3 \\ x_3 \end{pmatrix},$

$x = (x_1, x_2, x_3) \in \mathbb{R}^3$. Let $Q$ be any positive definite $3 \times 3$ matrix. We consider the OU operator $L$ associated with $(Q, A)$. Let $u$ be a positive harmonic function for $L$, which satisfies the growth condition (3.2). We first prove that, for any $x_0 \in \mathbb{R}^3$, $\partial_{x_1} u(x_0) = \partial_{x_2} u(x_0) = 0$. Suppose by contradiction that

$k = \partial_{x_1} u(x_0) \ne 0,$

for some $x_0 \in \mathbb{R}^3$. Then, we consider $x = (0, 0, k)$, and by (4.2), we obtain

$u(0,0,k) \ge u(x_0) - \langle Du(x_0), x_0\rangle + \langle Du(x_0), e^{tA}(0,0,k)\rangle = u(x_0) - \langle Du(x_0), x_0\rangle + \Big\langle Du(x_0), \Big(\frac{t^2}{2} k,\, t k,\, k\Big)\Big\rangle = u(x_0) - \langle Du(x_0), x_0\rangle + \frac{t^2}{2} k^2 + t\, \partial_{x_2} u(x_0)\, k + \partial_{x_3} u(x_0)\, k, \qquad t \ge 0.$

Letting $t \to \infty$, we obtain a contradiction, since $\frac{t^2}{2} k^2 + t\, \partial_{x_2} u(x_0)\, k$ tends to $\infty$ while the left-hand side is finite. It follows that $\partial_{x_1} u(x_0) = 0$. We have $u(x_1, x_2, x_3) = u(0, x_2, x_3)$ on $\mathbb{R}^3$.

Suppose by contradiction that $l = \partial_{x_2} u(x_0) \ne 0$ for some $x_0 \in \mathbb{R}^3$. Then, we consider $x = (0, 0, l)$, and by (4.2), for any $t \ge 0$, we obtain

$u(0,0,l) \ge u(x_0) - \langle Du(x_0), x_0\rangle + \Big\langle Du(x_0), \Big(\frac{t^2}{2} l,\, t l,\, l\Big)\Big\rangle = u(x_0) - \langle Du(x_0), x_0\rangle + t\, l^2 + \partial_{x_3} u(x_0)\, l.$

Letting $t \to \infty$, we obtain a contradiction. It follows that $\partial_{x_2} u(x_0) = 0$ for any $x_0 \in \mathbb{R}^3$. We have obtained that $u(x_1, x_2, x_3) = u(0, 0, x_3) = v(x_3)$ on $\mathbb{R}^3$.

Since $0 = Lu(x) = \frac{q_{33}}{2}\, \partial^2_{x_3 x_3} u(0,0,x_3) = \frac{q_{33}}{2}\, v''(x_3)$, $x_3 \in \mathbb{R}$, where $q_{33} = \langle Q e_3, e_3\rangle > 0$, we obtain $v'' = 0$. Thus $v$ is affine on $\mathbb{R}$, and being non-negative it must be constant; hence $u$ is constant.
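The explicit form of $e^{tA}$ driving Example 1 can be verified numerically (a small sanity check, not part of the article):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
t = 2.7
# e^{tA} should act as x -> (x1 + t x2 + (t^2/2) x3, x2 + t x3, x3)
expected = np.array([[1., t, t**2 / 2],
                     [0., 1., t],
                     [0., 0., 1.]])
print(np.allclose(expm(t * A), expected))  # True
```

Since $A$ is nilpotent ($A^3 = 0$), the exponential series terminates after the quadratic term, which is why the entries are polynomials in $t$ of degree at most two; the quadratic entry $t^2/2$ is exactly the term exploited in the contradiction argument above.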

In the proof of Theorem 4.1, we will also use the following remarks.

Remark 4.3

Arguing as in (i) of Remark 2.5, we note that it is enough to prove Theorem 4.1 when $Q$ is replaced by $\delta Q$ for some $\delta > 0$. Indeed, using the change of variable $u(x) = v(\delta^{1/2} x)$, $v(y) = u(\delta^{-1/2} y)$, we can pass from an OU operator associated with $(Q, A)$ to an OU operator associated with $(\delta Q, A)$. We have

$Lu(x) = 0, \ x \in \mathbb{R}^N \iff \frac{\delta}{2}\,\mathrm{tr}(Q D^2 v(y)) + \langle Ay, Dv(y)\rangle = 0, \ y \in \mathbb{R}^N.$

Remark 4.4

In the sequel, we will always assume that in (4.1)

(4.4) the matrix $A$ is in the real Jordan form,

possibly replacing $Q$ by $P Q P^*$, where $P$ is an $N \times N$ real invertible matrix.

Note that P Q P * is still positive definite. Let us clarify the previous assertion. Let P be an invertible real matrix such that P A P 1 = J .

Using the change of variable u ( x ) = v ( P x ) , v ( y ) = u ( P 1 y ) , we can pass from an OU operator associated with ( Q , A ) to an OU operator associated with ( P Q P * , P A P 1 ) . We have

$Lu(x) = 0, \ x \in \mathbb{R}^N \iff \frac{1}{2}\,\mathrm{tr}(P Q P^* D^2 v(y)) + \langle J y, Dv(y)\rangle = 0, \ y \in \mathbb{R}^N.$

We also remark that s ( J ) = s ( A ) .

5 On the proof of Theorem 4.1

According to Remarks 4.3 and 4.4, we concentrate on proving the Liouville theorem for L in (4.1) assuming that A is in the real Jordan form. Moreover, when needed, we replace Q by δ Q with δ > 0 small enough.

5.1 Technical lemma

Recall that $(e_j)$ denotes the canonical basis in $\mathbb{R}^N$. Given a real $N \times N$ matrix $C$, we write $C = B_0 \oplus B_1 \oplus \dots \oplus B_n$ if

$C = \begin{pmatrix} B_0 & 0 & \cdots & 0 \\ 0 & B_1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & B_n \end{pmatrix},$

where $B_i$ is a real $k_i \times k_i$ matrix, $i = 0, \dots, n$, $k_i \ge 1$ and $k_0 + \dots + k_n = N$. Let $x \in \mathbb{R}^N$. We write

(5.1) $x = (x_{B_0}, \dots, x_{B_n}),$

where $x_{B_0} = (x_1^0, \dots, x_{k_0}^0) = (x_1, \dots, x_{k_0})$ and

(5.2) $x_{B_i} = (x_1^i, \dots, x_{k_i}^i) = (x_{k_0 + \dots + k_{i-1} + 1}, \dots, x_{k_0 + \dots + k_{i-1} + k_i}), \qquad i = 1, \dots, n.$

We say that x 1 i , , x k i i are the variables corresponding to B i .

We also consider the related orthogonal projections $\pi_{B_i} : \mathbb{R}^N \to \mathbb{R}^{k_i}$:

(5.3) $\pi_{B_i} x = x_{B_i}, \qquad x \in \mathbb{R}^N, \ i = 0, \dots, n.$

Clearly, we have

(5.4) $Cx = (B_0 x_{B_0}, \dots, B_n x_{B_n}) = (B_0 \pi_{B_0} x, \dots, B_n \pi_{B_n} x), \qquad x \in \mathbb{R}^N.$

Now, we write the $N \times N$ matrix $A$ appearing in (4.1) with $s(A) \le 0$ in the following real Jordan form:

(5.5) $A = S \oplus E_0 \oplus J(0, k_1) \oplus \dots \oplus J(0, k_p) \oplus J(0, d_1, g_1) \oplus \dots \oplus J(0, d_q, g_q) \oplus E_1,$

$p, q \ge 1$. Note that some of the previous blocks may be absent (for instance, it may happen that $A = S \oplus E_1$). In the sequel, we will examine the various possible blocks in (5.5).

The block $S$ is an $s \times s$ matrix, $s \ge 1$, and corresponds to the stable part of $A$ (i.e., it corresponds to the eigenvalues of $A$ with negative real part).

The block $E_0$ is the null $k_0 \times k_0$ matrix, $k_0 \ge 1$. The $2t \times 2t$ block $E_1$, $t \ge 1$, corresponds to all possible simple purely imaginary eigenvalues $\pm i h_1, \dots, \pm i h_t$ of $A$ (with $h_1, \dots, h_t \in \mathbb{R}$):

$E_1 = \begin{pmatrix} 0 & h_1 & & & \\ -h_1 & 0 & & & \\ & & \ddots & & \\ & & & 0 & h_t \\ & & & -h_t & 0 \end{pmatrix}.$

Moreover, $J(0, k_i)$ is the $k_i \times k_i$ Jordan block, $k_i \ge 2$, $i = 1, \dots, p$,

$J(0, k_i) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & & 0 & 1 \\ 0 & \cdots & & 0 & 0 \end{pmatrix}$

such that $J(0, k_i)^{k_i}$ is the null $k_i \times k_i$ matrix. Finally, the $2 g_j \times 2 g_j$ Jordan block $J(0, d_j, g_j)$ is

(5.6) $J(0, d_j, g_j) = \begin{pmatrix} D_j & I_2 & & \\ & D_j & \ddots & \\ & & \ddots & I_2 \\ & & & D_j \end{pmatrix}, \qquad D_j = \begin{pmatrix} 0 & d_j \\ -d_j & 0 \end{pmatrix}, \quad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$

where $d_j \in \mathbb{R}$, $g_j \ge 2$, $j = 1, \dots, q$. If all blocks are present, according to (5.5), we have

$s + k_0 + k_1 + \dots + k_p + 2(g_1 + \dots + g_q) + 2t = N.$

Let us consider (5.5) and fix a real function u ( x 1 , , x N ) .

In the following definition, we suppose that at least one Jordan block of the form $J(0, k_i)$ or $J(0, d_j, g_j)$ is present in (5.5).

We say that $u$ is quasi-constant with respect to the Jordan blocks $J(0, k_i)$ and $J(0, d_j, g_j)$ if the following conditions hold:

  1. If the block $J(0, k_i)$ is present in formula (5.5), then $u$ is constant in the variables

    $x_1^i, \dots, x_{k_i - 1}^i,$

    where $x_{J(0,k_i)} = (x_1^i, \dots, x_{k_i}^i)$ (cf. (5.2)); i.e., considering only the variables $x_1^i, \dots, x_{k_i}^i$ corresponding to the block $J(0, k_i)$, $i = 1, \dots, p$, $u$ may depend only on $x_{k_i}^i$.

  2. If the block $J(0, d_j, g_j)$ is present in formula (5.5), then $u$ is constant in the variables

    $x_1^j, \dots, x_{2 g_j - 2}^j,$

    where $x_{J(0,d_j,g_j)} = (x_1^j, \dots, x_{2g_j}^j)$; i.e., considering only the variables $x_1^j, \dots, x_{2g_j}^j$ corresponding to $J(0, d_j, g_j)$, $u$ may depend only on $x_{2g_j - 1}^j$ and $x_{2g_j}^j$.

Lemma 5.1

We assume the hypotheses of Theorem 4.1 about $L$ and $u$. Then, $u$ is constant in each of the following cases:

  1. The stable part $S$ is not present in (5.5), and no blocks of the form $J(0, k_i)$ or $J(0, d_j, g_j)$ are present.

  2. The stable part $S$ is not present in (5.5), and $u$ is quasi-constant with respect to the Jordan blocks $J(0, k_i)$ and $J(0, d_j, g_j)$.

  3. The stable part $S$ is present in (5.5), and $u$ is quasi-constant with respect to the Jordan blocks $J(0, k_i)$ and $J(0, d_j, g_j)$.

Proof

(i) In this case, L verifies the assumptions of Theorem 2.4 and the assertion follows (note that in such case, we do not need to impose any growth condition on the non-negative function u ).

To treat (ii) and (iii), we concentrate on the most difficult case when both blocks like J ( 0 , k i ) and J ( 0 , d j , g j ) are present in (5.5) (otherwise, we can argue similarly).

(ii) We start by studying the term $\langle Ax, Du\rangle$ (cf. (4.1)).

Let $1 \le i \le p$. If $u$ is constant in the variables $x_1^i, \dots, x_{k_i - 1}^i$ corresponding to the block $J(0, k_i)$, then, for any $x \in \mathbb{R}^N$ of the form

$(0, \dots, 0, x_{J(0,k_i)}, 0, \dots, 0) = (0, \dots, 0, x_1^i, \dots, x_{k_i}^i, 0, \dots, 0)$

(cf. (5.1)), i.e., $x$ has all the coordinates 0 possibly apart from the coordinates $x_1^i, \dots, x_{k_i}^i$, we have:

(5.7) $\langle A(0, \dots, 0, x_1^i, \dots, x_{k_i}^i, 0, \dots, 0), Du(x)\rangle = \langle J(0,k_i)(x_1^i, \dots, x_{k_i}^i), D_{(x_1^i, \dots, x_{k_i}^i)} u(x)\rangle = 0 \cdot \partial_{x_{k_i}^i} u(x) = 0,$

where $D_{(x_1^i, \dots, x_{k_i}^i)} u(x) \in \mathbb{R}^{k_i}$ denotes the gradient with respect to the $x_1^i, \dots, x_{k_i}^i$ variables of $u$ at $x \in \mathbb{R}^N$ (only the last component of this gradient may be non-zero, and the last coordinate of $J(0,k_i)(x_1^i, \dots, x_{k_i}^i)$ vanishes).

Let $1 \le j \le q$. If $u$ is constant in the variables $x_1^j, \dots, x_{2g_j - 2}^j$ corresponding to $J(0, d_j, g_j)$, then

(5.8) $\langle A(0, \dots, 0, x_1^j, \dots, x_{2g_j}^j, 0, \dots, 0), Du(x)\rangle = \langle J(0, d_j, g_j)(x_1^j, \dots, x_{2g_j}^j), D_g u(x)\rangle = \langle (d_j x_{2g_j}^j,\, -d_j x_{2g_j - 1}^j), (\partial_{x_{2g_j-1}^j} u(x), \partial_{x_{2g_j}^j} u(x))\rangle = d_j x_{2g_j}^j\, \partial_{x_{2g_j-1}^j} u(x) - d_j x_{2g_j-1}^j\, \partial_{x_{2g_j}^j} u(x),$

where $D_g u(x) = D_{(x_1^j, \dots, x_{2g_j}^j)} u(x) \in \mathbb{R}^{2g_j}$ denotes the gradient with respect to the $x_1^j, \dots, x_{2g_j}^j$ variables of $u$ at $x \in \mathbb{R}^N$.

Note that according to (5.5) and (5.4), we have, for any $x \in \mathbb{R}^N$,

$Ax = (E_0 \pi_{E_0} x,\; J(0,k_1) \pi_{J(0,k_1)} x, \dots, J(0,k_p) \pi_{J(0,k_p)} x,\; J(0,d_1,g_1) \pi_{J(0,d_1,g_1)} x, \dots, J(0,d_q,g_q) \pi_{J(0,d_q,g_q)} x,\; E_1 \pi_{E_1} x).$

By assumption, $u$ depends only on $m$ variables, $m \le N$. Taking into account (5.7) and (5.8), we obtain, for any $x \in \mathbb{R}^N$,

(5.9) $\langle Ax, Du(x)\rangle = \langle E_0 \pi_{E_0} x, D_{E_0} u(x)\rangle + [d_1 x_{2g_1}^1\, \partial_{x_{2g_1-1}^1} u(x) - d_1 x_{2g_1-1}^1\, \partial_{x_{2g_1}^1} u(x)] + \dots + [d_q x_{2g_q}^q\, \partial_{x_{2g_q-1}^q} u(x) - d_q x_{2g_q-1}^q\, \partial_{x_{2g_q}^q} u(x)] + \langle E_1 \pi_{E_1} x, D_{E_1} u(x)\rangle,$

where D E 0 denotes the gradient with respect to the k 0 variables corresponding to E 0 and D E 1 denotes the gradient with respect to the 2 t variables corresponding to E 1 .

We can set

(5.10) $u(x_1, \dots, x_N) = v(x_1, \dots, x_m) = v(x_C),$

with $v : \mathbb{R}^m \to [0, +\infty)$,

$x_C = (x_1, \dots, x_m) = (x_{E_0},\; x_{2g_1-1}^1, x_{2g_1}^1, \dots, x_{2g_q-1}^q, x_{2g_q}^q,\; x_{E_1}).$

By (5.9), we see that

$\langle Ax, Du(x)\rangle = \langle A_C x_C, Dv(x_C)\rangle,$

for a suitable $m \times m$ matrix $A_C$, which is diagonalizable over the complex field with all the eigenvalues on the imaginary axis. We obtain

$0 = \tilde{L} v(x_C) = \frac{1}{2}\,\mathrm{tr}(\tilde{Q} D^2 v(x_C)) + \langle A_C x_C, Dv(x_C)\rangle, \qquad x_C \in \mathbb{R}^m,$

for a suitable positive definite m × m matrix Q ˜ . By applying Theorem 2.4, we obtain that v is constant and so u is constant as well.

(iii) We start as in (ii). We denote the variables corresponding to the stable part S of A by

x 1 , , x s .

By assumption, $u$ depends only on $n$ variables, $n \le N$. Moreover, $n = s + m$, where $m$ is as in (5.10). So we write

(5.11) $u(x_1, \dots, x_N) = w(x_1, \dots, x_s, x_{s+1}, \dots, x_n) = w(x_S, x_C),$

where $w : \mathbb{R}^n \to [0, +\infty)$, $x_S = (x_1, \dots, x_s)$ and $x_C = (x_{s+1}, \dots, x_n)$.

Arguing as earlier, we obtain that there exists an $(n-s) \times (n-s)$ matrix $A_C$ such that, for any $x = (x_S, x_C) \in \mathbb{R}^n$,

$0 = \tilde{L} w(x) = \frac{1}{2}\,\mathrm{tr}(\tilde{Q} D^2 w(x)) + \langle S x_S, D_S w(x)\rangle + \langle A_C x_C, D_C w(x)\rangle,$

for a suitable positive definite $n \times n$ matrix $\tilde{Q}$. Moreover, $D_S$ denotes the gradient with respect to the first $s$ variables and $D_C$ the gradient with respect to the $x_{s+1}, \dots, x_n$ variables.

To complete the proof, it is convenient to replace Q ˜ by

δ Q ˜ , for some δ > 0 small enough to be chosen later

(cf. Remark 4.3). We also recall that the matrix A C is diagonalizable over the complex field with all the eigenvalues on the imaginary axis.

To complete the proof, we prove that w does not depend on the x S -variable. Indeed, once this is proved, we can apply Theorem 2.4 and obtain that w is constant.

To obtain such assertion, we will use here the exponential growth condition (3.2).

By the previous notation, $x = (x_S, x_C) \in \mathbb{R}^n$; since, in particular, $w$ verifies the exponential growth condition (3.2), we have $\tilde{P}_t w = w$, $t \ge 0$, i.e.,

(5.12) $\int_{\mathbb{R}^n} w(e^{tS} x_S + y_S,\; e^{t A_C} x_C + y_C)\, N(0, \tilde{Q}_t)(dy_S\, dy_C) = w(x_S, x_C),$

t 0 , where N ( 0 , Q ˜ t ) is the Gaussian measure with mean 0 and covariance matrix

$\tilde{Q}_t = \delta \int_0^t e^{s\tilde{A}}\, \tilde{Q}\, e^{s\tilde{A}^*}\, ds.$

Here, we are considering the $n \times n$ matrix $\tilde{A} = S \oplus A_C$, so that

$e^{t\tilde{A}} = e^{tS} \oplus e^{t A_C}.$

By Corollary 3.3, $w$ is a convex function on $\mathbb{R}^n$. Applying a well-known result on convex functions (cf. Section 6.3 in [7]), we obtain, in particular, that, for any $x \in \mathbb{R}^n$ with $|x| > 1$,

$\sup_{|y| \le |x|} |Dw(y)| \le \frac{c(n)}{2|x|}\, \fint_{B(0, 2|x|)} w(y)\, dy$

(here $\fint$ denotes the average integral).

It follows that possibly replacing c 0 in (3.2) by another constant c > 0 , we have

(5.13) $|Dw(x)| \le c\, e^{c|x|}, \qquad x \in \mathbb{R}^n.$

Let us fix $h_S \in \mathbb{R}^s$ and $x = (x_S, x_C) \in \mathbb{R}^n$. Differentiating both sides of (5.12) along the direction $h_S$, we find

(5.14) $\int_{\mathbb{R}^n} \langle D_S w(e^{tS} x_S + y_S,\; e^{tA_C} x_C + y_C), e^{tS} h_S\rangle\, N(0, \tilde{Q}_t)(dy_S\, dy_C) = \langle D_S w(x_S, x_C), h_S\rangle,$

where, as before, D S denotes the gradient with respect to the first s variables. Recall that since the matrix S is stable, there exist C > 0 and ω > 0 such that

(5.15) $|e^{tS} h_S| \le C\, e^{-\omega t}\, |h_S|, \qquad t \ge 0.$

By (5.13), we infer

$|D_S w(e^{tS} x_S + y_S,\; e^{tA_C} x_C + y_C)| \le c\, e^{c|e^{tS} x_S|}\, e^{c|e^{tA_C} x_C|}\, e^{c|y_S|}\, e^{c|y_C|}.$

Note that $|e^{tA_C} x_C| = |x_C|$, $t \ge 0$. It follows that there exists a positive function $\lambda(x)$ (independent of $t \ge 0$) such that

(5.16) $|D_S w(e^{tS} x_S + y_S,\; e^{tA_C} x_C + y_C)| \le \lambda(x)\, e^{2c|(y_S, y_C)|}, \qquad t \ge 0.$

Setting y = ( y S , y C ) , it is not difficult to prove that there exists c 1 > 0 (independent of t ) such that

(5.17) $\int_{\mathbb{R}^n} e^{2c|y|}\, N(0, \tilde{Q}_t)(dy) \le c_1\, e^{c_1 c^2 \delta t}.$

To this purpose, we first remark that

(5.18) $\|\tilde{Q}_t\| \le \delta \Big\| \int_0^t e^{s\tilde{A}}\, \tilde{Q}\, e^{s\tilde{A}^*}\, ds \Big\| \le C_0\, \delta\, t, \qquad t \ge 0,$

for some constant $C_0 > 0$ independent of $t$. Then, recall that if $R$ is an $n \times n$ symmetric and non-negative definite matrix, using also the Fubini theorem, we obtain

$\int_{\mathbb{R}^n} e^{r|y|}\, N(0, R)(dy) = \int_{\mathbb{R}^n} e^{r|R^{1/2} y|}\, N(0, I)(dy) \le \int_{\mathbb{R}^n} e^{r \|R^{1/2}\| (|y_1| + \dots + |y_n|)}\, N(0, I)(dy) = \Big( \frac{2}{\sqrt{2\pi}} \int_0^\infty e^{r \|R^{1/2}\| y}\, e^{-\frac{y^2}{2}}\, dy \Big)^n \le 2^n\, e^{\frac{n}{2} r^2 \|R\|}, \qquad r \ge 0.$

Combining the last computation and (5.18), we obtain (5.17). Using (5.14), (5.16), and (5.17), we infer

$|\langle D_S w(x_S, x_C), h_S\rangle| = \Big| \int_{\mathbb{R}^n} \langle D_S w(e^{tS} x_S + y_S,\; e^{tA_C} x_C + y_C), e^{tS} h_S\rangle\, N(0, \tilde{Q}_t)(dy_S\, dy_C) \Big| \le \lambda(x)\, |e^{tS} h_S| \int_{\mathbb{R}^n} e^{2c|y|}\, N(0, \tilde{Q}_t)(dy) \le c_1\, C\, \lambda(x)\, e^{c_1 c^2 \delta t}\, e^{-\omega t}\, |h_S|, \qquad t \ge 0.$

Since $c_1$ and $c$ are independent of $t$ and $\delta$, choosing $\delta > 0$ small enough (so that $c_1 c^2 \delta < \omega$) and passing to the limit as $t \to \infty$, we obtain

$\langle D_S w(x_S, x_C), h_S\rangle = 0.$

It follows that $w$ does not depend on the $x_S$-variable. We have $w(x_S, x_C) = g(x_C)$ for a regular function $g : \mathbb{R}^{n-s} \to [0, +\infty)$. Moreover,

$\tilde{L} w(x_S, x_C) = \frac{\delta}{2}\,\mathrm{tr}(\tilde{Q}_0 D_C^2 g(x_C)) + \langle A_C x_C, D_C g(x_C)\rangle = 0, \qquad x_C \in \mathbb{R}^{n-s},$

where $\tilde{Q}_0$ is a positive definite $(n-s) \times (n-s)$ matrix.

Applying Theorem 2.4 to the OU operator $\frac{\delta}{2}\,\mathrm{tr}(\tilde{Q}_0 D_C^2) + \langle A_C x_C, D_C\rangle$, we obtain that $g$ is constant, and this completes the proof.□

5.2 Proof of Theorem 4.1

We concentrate on the most difficult case when both blocks like J ( 0 , k i ) and J ( 0 , d j , g j ) are present in (5.5). By Lemma 5.1, it is enough to show that u is quasi-constant with respect to the Jordan blocks

J ( 0 , k 1 ) , , J ( 0 , k p ) , J ( 0 , d 1 , g 1 ) , , J ( 0 , d q , g q ) .

Step I. We fix $i = 1, \dots, p$, and consider the block $J(0, k_i)$ (see (5.5)). Let $x_1^i, \dots, x_{k_i}^i$ be the variables corresponding to $J(0, k_i)$ according to (5.2). Let $x_0 \in \mathbb{R}^N$. We prove that $\partial_{x_k^i} u(x_0) = 0$ for $k = 1, \dots, k_i - 1$.

To this purpose, we first consider k = 1 . We argue by contradiction and suppose that

(5.19) $\partial_{x_1^i} u(x_0) \ne 0,$

for some $x_0 \in \mathbb{R}^N$. In order to apply Lemma 4.2, we first choose $x$ having 0 in all the coordinates apart from the coordinates $x_1^i, \dots, x_{k_i}^i$, i.e., we have $x = (0, \dots, 0, x_1^i, \dots, x_{k_i}^i, 0, \dots, 0)$. Setting $M_{x_0} = u(x_0) - \langle Du(x_0), x_0\rangle$, we find

$u(0, \dots, 0, x_1^i, \dots, x_{k_i}^i, 0, \dots, 0) \ge M_{x_0} + \langle Du(x_0), e^{tA}(0, \dots, 0, x_1^i, \dots, x_{k_i}^i, 0, \dots, 0)\rangle = M_{x_0} + \langle D_{(x_1^i, \dots, x_{k_i}^i)} u(x_0), e^{t J(0,k_i)}(x_1^i, \dots, x_{k_i}^i)\rangle, \qquad t \ge 0,$

where D ( x 1 i , , x k i i ) u ( x 0 ) denotes the gradient with respect to the variables x 1 i , , x k i i . Recall that

$e^{t J(0,k_i)} = \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k_i-1}}{(k_i-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{k_i-2}}{(k_i-2)!} \\ 0 & 0 & 1 & \cdots & \frac{t^{k_i-3}}{(k_i-3)!} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & t \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix}.$
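This upper-triangular Toeplitz form of $e^{tJ(0,k_i)}$, with $t^j/j!$ on the $j$-th superdiagonal, can be checked numerically (an illustrative sketch, not part of the proof):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expJ(t, k):
    # upper-triangular Toeplitz matrix with t^j / j! on the j-th superdiagonal
    M = np.zeros((k, k))
    for j in range(k):
        M += np.diag([t**j / factorial(j)] * (k - j), j)
    return M

k, t = 5, 1.3
J = np.diag(np.ones(k - 1), 1)   # the nilpotent Jordan block J(0,k)
print(np.allclose(expm(t * J), expJ(t, k)))  # True
```

The leading polynomial growth $t^{k_i-1}/(k_i-1)!$ of the top-right entry is precisely what produces the contradiction below.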

Choosing further $x_1^i = 0, \dots, x_{k_i-1}^i = 0$, $x_{k_i}^i = \partial_{x_1^i} u(x_0)$, we find

$u(0, \dots, 0, \partial_{x_1^i} u(x_0), 0, \dots, 0) \ge M_{x_0} + \partial_{x_1^i} u(x_0)\, \frac{t^{k_i-1}}{(k_i-1)!}\, x_{k_i}^i + p(t, x_0) = M_{x_0} + (\partial_{x_1^i} u(x_0))^2\, \frac{t^{k_i-1}}{(k_i-1)!} + p(t, x_0), \qquad t \ge 0,$

where $p(t, x_0)$ is a polynomial in the $t$-variable, which has degree less than $k_i - 1$. Letting $t \to \infty$, we find a contradiction since $(\partial_{x_1^i} u(x_0))^2\, \frac{t^{k_i-1}}{(k_i-1)!} + p(t, x_0)$ tends to $\infty$. Hence, (5.19) cannot hold, and we have proved that $u$ does not depend on the $x_1^i$-variable.

Similarly, we prove that u does not depend on the x 2 i -variable as well.

Proceeding in finitely many steps, once we have proved that $u$ does not depend on the variables $x_1^i, \dots, x_{k-1}^i$, $k = 1, \dots, k_i - 1$, we can show that for any $x_0 \in \mathbb{R}^N$, we have $\partial_{x_k^i} u(x_0) = 0$. To this purpose, we argue by contradiction and suppose that

(5.20) $\partial_{x_k^i} u(x_0) \ne 0,$

for some $x_0 \in \mathbb{R}^N$. In order to apply Lemma 4.2, we choose $x$ having 0 in all the coordinates apart from the coordinates $x_1^i, \dots, x_{k_i}^i$. Moreover, choosing further $x_1^i = 0, \dots, x_{k_i-1}^i = 0$, $x_{k_i}^i = \partial_{x_k^i} u(x_0)$, we find (using that $u$ does not depend on the variables $x_1^i, \dots, x_{k-1}^i$)

$u(0, \dots, 0, \partial_{x_k^i} u(x_0), 0, \dots, 0) \ge M_{x_0} + \partial_{x_k^i} u(x_0) \Big( x_k^i + t\, x_{k+1}^i + \dots + \frac{t^{k_i-k}}{(k_i-k)!}\, x_{k_i}^i \Big) + q_k(t, x_0) = M_{x_0} + (\partial_{x_k^i} u(x_0))^2\, \frac{t^{k_i-k}}{(k_i-k)!} + q(t, x_0), \qquad t \ge 0,$

where $q(t, x_0)$ is a polynomial in the $t$-variable, which has degree less than $k_i - k$.

Letting $t \to \infty$, we find a contradiction since $(\partial_{x_k^i} u(x_0))^2\, \frac{t^{k_i-k}}{(k_i-k)!} + q(t, x_0)$ tends to $\infty$. Hence, (5.20) cannot hold, and we have proved the assertion.

Step II. We fix $j = 1, \dots, q$, and consider the block $J(0, d_j, g_j)$ (see (5.5)). Let $x_1^j, \dots, x_{2g_j}^j$ be the variables corresponding to $J(0, d_j, g_j)$. Let $x_0 \in \mathbb{R}^N$. We prove that $\partial_{x_k^j} u(x_0) = 0$ for $k = 1, \dots, 2g_j - 2$.

We first consider k = 1 . We argue by contradiction and suppose that

(5.21) $\partial_{x_1^j} u(x_0) \neq 0$,

for some $x_0 \in \mathbb{R}^N$. As in Step I, in order to apply Lemma 4.2, we choose $x$ having 0 in all the coordinates apart from the coordinates $x_1^j, \dots, x_{2g_j}^j$. Moreover, we choose further

$$x_1^j = 0, \dots, x_{2g_j-1}^j = 0, \quad x_{2g_j}^j = \partial_{x_1^j} u(x_0)$$

and set $M_{x_0} = u(x_0) - \langle Du(x_0), x_0\rangle$. By considering $t = T_n$, with

(5.22) $d_j T_n = \frac{\pi}{2} + 2n\pi$, $n \geq 0$,

we find

$$u(0,\dots,0,\partial_{x_1^j}u(x_0),0,\dots,0) \geq M_{x_0} + \partial_{x_1^j}u(x_0)\Big[x_1^j\cos(d_jT_n) + x_2^j\sin(d_jT_n) + x_3^j\, T_n\cos(d_jT_n) + x_4^j\, T_n\sin(d_jT_n) + \cdots + x_{2g_j-1}^j\,\frac{T_n^{g_j-1}}{(g_j-1)!}\cos(d_jT_n) + x_{2g_j}^j\,\frac{T_n^{g_j-1}}{(g_j-1)!}\sin(d_jT_n)\Big] + p(T_n,x_0) = M_{x_0} + \big(\partial_{x_1^j}u(x_0)\big)^2\,\frac{T_n^{g_j-1}}{(g_j-1)!} + p(T_n,x_0), \quad n \geq 0,$$

where $p(t,x_0)$ is a polynomial in the $t$-variable of degree less than $g_j - 1$. Letting $n \to \infty$, we find a contradiction since $\big(\partial_{x_1^j}u(x_0)\big)^2\,\frac{T_n^{g_j-1}}{(g_j-1)!} + p(T_n,x_0)$ tends to $\infty$. Hence, (5.21) cannot hold, and we have proved that $u$ does not depend on the $x_1^j$-variable.

Similarly, one can prove that $u$ does not depend on the $x_2^j$-variable as well. We only note that in this case, we choose $x$ having 0 in all the coordinates apart from the coordinates $x_1^j, \dots, x_{2g_j}^j$. Moreover, $x_1^j = 0, \dots, x_{2g_j-1}^j = 0$, $x_{2g_j}^j = \partial_{x_2^j} u(x_0)$, and we define $T_n$ such that $d_j T_n = 2n\pi$, $n \geq 0$. We have

$$u(0,\dots,0,\partial_{x_2^j}u(x_0),0,\dots,0) \geq M_{x_0} + \partial_{x_2^j}u(x_0)\Big[-x_1^j\sin(d_jT_n) + x_2^j\cos(d_jT_n) - x_3^j\, T_n\sin(d_jT_n) + x_4^j\, T_n\cos(d_jT_n) + \cdots - x_{2g_j-1}^j\,\frac{T_n^{g_j-1}}{(g_j-1)!}\sin(d_jT_n) + x_{2g_j}^j\,\frac{T_n^{g_j-1}}{(g_j-1)!}\cos(d_jT_n)\Big] + q(T_n,x_0), \quad n \geq 0,$$

where $q(t,x_0)$ is a polynomial in the $t$-variable of degree less than $g_j - 1$.

Proceeding in finitely many steps, once we have proved that $u$ does not depend on the variables $x_1^j, \dots, x_{k-1}^j$, for $k = 1, \dots, 2g_j - 2$, we show that for any $x_0 \in \mathbb{R}^N$, we have $\partial_{x_k^j} u(x_0) = 0$. To this purpose, we suppose that $k$ is even (we can proceed similarly if $k$ is odd). We argue by contradiction and suppose that

(5.23) $\partial_{x_k^j} u(x_0) \neq 0$,

for some $x_0 \in \mathbb{R}^N$. We choose $x$ having 0 in all the coordinates apart from the coordinates $x_1^j, \dots, x_{2g_j}^j$. Moreover, we set $x_1^j = 0, \dots, x_{2g_j-1}^j = 0$, $x_{2g_j}^j = \partial_{x_k^j} u(x_0)$. We find (using that $u$ does not depend on the variables $x_1^j, \dots, x_{k-1}^j$) with $T_n$ as in (5.22):

$$u(0,\dots,0,\partial_{x_k^j}u(x_0),0,\dots,0) \geq M_{x_0} + \partial_{x_k^j}u(x_0)\Big[x_k^j\cos(d_jT_n) + x_{k+1}^j\sin(d_jT_n) + x_{k+2}^j\, T_n\cos(d_jT_n) + x_{k+3}^j\, T_n\sin(d_jT_n) + \cdots + x_{2g_j-1}^j\,\frac{T_n^{\,g_j-\frac{k}{2}}}{(g_j-\frac{k}{2})!}\cos(d_jT_n) + x_{2g_j}^j\,\frac{T_n^{\,g_j-\frac{k}{2}}}{(g_j-\frac{k}{2})!}\sin(d_jT_n)\Big] + h(T_n,x_0) = M_{x_0} + \big(\partial_{x_k^j}u(x_0)\big)^2\,\frac{T_n^{\,g_j-\frac{k}{2}}}{(g_j-\frac{k}{2})!} + h(T_n,x_0), \quad n \geq 0,$$

where $h(t,x_0)$ is a polynomial in the $t$-variable of degree less than $g_j - \frac{k}{2}$. Letting $n \to \infty$, we find a contradiction since $\big(\partial_{x_k^j}u(x_0)\big)^2\,\frac{T_n^{\,g_j-\frac{k}{2}}}{(g_j-\frac{k}{2})!} + h(T_n,x_0)$ tends to $\infty$. Hence, (5.23) cannot hold, and we have proved the assertion. The proof is complete.
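The mechanism used in Step II — along the times $T_n$ of (5.22) the growing factor $T_n^{g_j-1}/(g_j-1)!$ survives in front of the last variable while the cosine partner vanishes — can be checked numerically on a small block. The sketch below is illustrative only; it fixes $d_j = 1$, $g_j = 2$ and assumes the convention that each diagonal $2\times 2$ block of $J(0,d_j,g_j)$ generates the rotation with first row $(\cos, \sin)$, which is an assumption about the normal form rather than a statement from the text:

```python
import math

def expm_series(M, terms=120):
    # Plain truncated power series for e^M; adequate for moderate matrix norms
    n = len(M)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for m in range(1, terms):
        # P <- P * M / m, so that after this step P = M^m / m!
        P = [[sum(P[i][l] * M[l][j] for l in range(n)) / m for j in range(n)]
             for i in range(n)]
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j]
    return E

d, g = 1.0, 2
n = 2 * g
# Real Jordan block J(0, d, g): rotation generator [[0, d], [-d, 0]] on the
# diagonal 2x2 blocks, 2x2 identity blocks on the superdiagonal (nilpotent part)
J = [[0.0] * n for _ in range(n)]
for p in range(g):
    J[2 * p][2 * p + 1] = d
    J[2 * p + 1][2 * p] = -d
for p in range(g - 1):
    J[2 * p][2 * (p + 1)] = 1.0
    J[2 * p + 1][2 * (p + 1) + 1] = 1.0

T = math.pi / 2 + 2 * math.pi   # d*T = pi/2 + 2*pi as in (5.22): sin(dT)=1, cos(dT)=0
E = expm_series([[T * x for x in row] for row in J])
# First row, last column: T^{g-1}/(g-1)! * sin(dT) = T; the cosine partner vanishes
print(abs(E[0][n - 1] - T) < 1e-8, abs(E[0][n - 2]) < 1e-8)
```

The coefficient of the last variable in the first row grows like $T_n^{g_j-1}$, which is exactly what produces the contradiction in the proof.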

6 Some open problems

We list some open problems related to Liouville-type theorems for OU operators.

  1. In general, it is not known whether, under the Kalman condition (1.2) and the condition $s(A) \leq 0$, all non-negative smooth solutions $v$ to $Lv = 0$ on $\mathbb{R}^N$ are constant. This problem is also open in the non-degenerate case, when we assume, in addition, that $Q$ is positive definite.

  2. The papers [21] and [20] treat non-degenerate purely non-local OU operators $L$,

    $$Lf(x) = \int_{\mathbb{R}^N} \big(f(x+y) - f(x) - 1_{\{|y| \leq 1\}} \langle y, Df(x)\rangle\big)\, \nu(dy) + \langle Ax, Df(x)\rangle,$$

    $x \in \mathbb{R}^N$, $f : \mathbb{R}^N \to \mathbb{R}$ bounded and smooth, where $\nu$ is a Lévy measure. The hypotheses of [20] on $\nu$ improve the ones in [21]. Theorem 1.1 in [20] shows that, under suitable hypotheses on $\nu$ and assuming $\sup_{t \geq 0} \| e^{tA} \| < \infty$, all bounded smooth harmonic functions for $L$ are constant. It is not known whether this result holds more generally under the assumption that $s(A) \leq 0$ (for instance, a matrix like $A$ in Example 1 is covered neither in [21] nor in [20]).

  3. In [12], the following result has been proved using probabilistic methods based on the known characterization of recurrence for OU stochastic processes. It seems that a purely analytic proof of this result is not known.

Theorem 6.1

(Theorem 6.1 in [12]) Let us consider a hypoelliptic OU operator $L$. Let $v : \mathbb{R}^N \to \mathbb{R}$ be a non-negative $C^2$-function such that $Lv \leq 0$ on $\mathbb{R}^N$. Then, $v$ is constant if the following condition holds:

The real Jordan representation of $B$ is $\begin{pmatrix} B_0 & 0 \\ 0 & B_1 \end{pmatrix}$, where $B_0$ is stable and $B_1$ is at most of dimension 2 and of the form $B_1 = [0]$ or $B_1 = \begin{pmatrix} 0 & \alpha \\ -\alpha & 0 \end{pmatrix}$ for some $\alpha \in \mathbb{R}$ (in this case, we need $N \geq 2$).
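The Kalman condition (1.2), which enters Problem 1 above, is a purely algebraic condition and can be checked mechanically. A minimal sketch in exact rational arithmetic follows; the example pair $(Q, A)$ is an illustrative Kolmogorov-type choice (noise only in the first variable, drift coupling it to the second), not taken from the text:

```python
from fractions import Fraction

def rank(M):
    # Row-reduction rank over the rationals (exact arithmetic, no rounding)
    M = [[Fraction(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def kalman_ok(Q, A):
    # Condition (1.2): rank [Q, AQ, ..., A^{N-1} Q] == N
    N = len(A)
    blocks, P = [], Q
    for _ in range(N):
        blocks.append(P)
        P = mat_mul(A, P)
    K = [[x for blk in blocks for x in blk[i]] for i in range(N)]  # horizontal stack
    return rank(K) == N

# Degenerate Q, yet the drift restores hypoellipticity
Q = [[1, 0], [0, 0]]
A = [[0, 0], [1, 0]]
print(kalman_ok(Q, A))  # True
```

Here $Q$ is degenerate, but $[Q, AQ]$ has full rank, so $L$ is hypoelliptic even though it is not elliptic.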

Acknowledgement

The author wishes to thank J. Zabczyk for useful discussions. The author is a member of GNAMPA of the Istituto Nazionale di Alta Matematica (INdAM).

  1. Funding information: The author states no funding involved.

  2. Author contribution: The author confirms the sole responsibility for the conception of the study, presented results and manuscript preparation.

  3. Conflict of interest: The author states no conflict of interest.

Appendix A Proof of Theorem 3.1

The proof of Theorem 3.1 is based on the following lemma, which is a special case of an infinite-dimensional result proved in Section 5 of Priola and Zabczyk [18]. We include the proof for the sake of completeness.

Lemma A.1

Let $(P_t)$ be the OU semigroup. Assume (1.2) and $s(A) \leq 0$. Then, for any non-negative Borel function $f : \mathbb{R}^N \to \mathbb{R}$, the following holds:

(A1) $P_t f(x+a) + P_t f(x-a) \geq 2\, C_t(a)\, P_t f(x)$, $x, a \in \mathbb{R}^N$,

where $C_t(a) = \exp\big[-\frac{1}{2} |Q_t^{-1/2} e^{tA} a|^2\big]$, $t > 0$ (note that both sides in (A1) can be $+\infty$).

Proof

We fix $x, a \in \mathbb{R}^N$ and set $N(0, Q_t) = N_{0,Q_t}$. By a direct computation, we have

$$P_t f(x+a) = \int_{\mathbb{R}^N} f(e^{tA}x + y)\,\frac{dN_{e^{tA}a,\,Q_t}}{dN_{0,Q_t}}(y)\, N_{0,Q_t}(dy) = \int_{\mathbb{R}^N} f(e^{tA}x + y)\,\exp\Big[-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2 + \langle Q_t^{-1/2}e^{tA}a,\, Q_t^{-1/2}y\rangle\Big]\, N_{0,Q_t}(dy).$$

Note that the previous identity also holds in infinite dimensions by the Cameron-Martin formula (see, for instance, Chapter 1 in [5]). It follows that

$$\frac{1}{2}\big(P_t f(x+a) + P_t f(x-a)\big) = e^{-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2} \int_{\mathbb{R}^N} f(e^{tA}x+y)\,\frac{1}{2}\Big(e^{\langle Q_t^{-1/2}e^{tA}a,\,Q_t^{-1/2}y\rangle} + e^{-\langle Q_t^{-1/2}e^{tA}a,\,Q_t^{-1/2}y\rangle}\Big)\, N_{0,Q_t}(dy) \geq \exp\Big[-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2\Big] \int_{\mathbb{R}^N} f(e^{tA}x+y)\, N_{0,Q_t}(dy) = C_t(a)\, P_t f(x),$$

where $C_t(a) = \exp\big[-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2\big]$; the inequality holds since $\frac{1}{2}(e^r + e^{-r}) \geq 1$ for all $r \in \mathbb{R}$.□
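In dimension one, inequality (A1) can be checked by direct numerical integration against the Gaussian density. The rough sketch below is illustrative only: the drift $a$, diffusion coefficient $q$, test function and evaluation points are arbitrary choices, not from the text, and the integral is a simple Riemann sum:

```python
import math

def P_t(f, x, t, a=-0.5, q=1.0, n=4001, span=10.0):
    # One-dimensional OU semigroup: P_t f(x) = E[f(e^{ta} x + Y)], Y ~ N(0, Q_t),
    # with Q_t = q (e^{2ta} - 1) / (2a); Riemann sum over [-span*sigma, span*sigma]
    Qt = q * (math.exp(2 * a * t) - 1) / (2 * a)
    s = math.sqrt(Qt)
    h = 2 * span * s / (n - 1)
    total = 0.0
    for i in range(n):
        y = -span * s + i * h
        w = math.exp(-y * y / (2 * Qt)) / math.sqrt(2 * math.pi * Qt)
        total += f(math.exp(t * a) * x + y) * w * h
    return total, Qt

f = lambda y: math.exp(abs(y))          # a non-negative Borel function
x, shift, t, a = 0.3, 1.2, 1.0, -0.5    # 'shift' plays the role of the vector a in (A1)

lhs = P_t(f, x + shift, t, a)[0] + P_t(f, x - shift, t, a)[0]
Px, Qt = P_t(f, x, t, a)
Ct = math.exp(-0.5 * (math.exp(t * a) * shift) ** 2 / Qt)  # C_t(a) of Lemma A.1
print(lhs >= 2 * Ct * Px)  # inequality (A1) in dimension one
```

Here $s(A) = a \leq 0$ and $Q = q > 0$, so both hypotheses of the lemma hold trivially; the inequality is satisfied with a wide margin for this choice of $f$.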

Proof of Theorem 3.1

Let u be a positive harmonic function for ( P t ) . By the previous lemma, we have, for any x , a R N ,

$$\frac{1}{2}\big(u(x+a) + u(x-a)\big) = \frac{1}{2}\big(P_t u(x+a) + P_t u(x-a)\big) \geq \exp\Big[-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2\Big] P_t u(x) = \exp\Big[-\frac{1}{2}|Q_t^{-1/2}e^{tA}a|^2\Big] u(x).$$

Passing to the limit as $t \to \infty$, by Theorem 2.3 we infer

$$\frac{1}{2}\big(u(x+a) + u(x-a)\big) \geq u(x), \quad x, a \in \mathbb{R}^N.$$

By a well-known result due to W. Sierpiński, this condition together with the measurability of $u$ implies the convexity of $u$ (see [22] for the result in the case of convex functions of one variable; a standard argument allows one to obtain the result we need starting from [22]).□

References

[1] M. Bertoldi and S. Fornaro, Gradient estimates in parabolic problems with unbounded coefficients, Studia Math. 165 (2004), 221–254. 10.4064/sm165-3-3Search in Google Scholar

[2] M. Bertoldi and L. Lorenzi, Estimates of the derivatives for parabolic operators with unbounded coefficients, Trans. Amer. Math. Soc. 357 (2005), 2627–2664. 10.1090/S0002-9947-05-03781-5Search in Google Scholar

[3] P. Baldi, Stochastic calculus, An Introduction Through Theory and Exercises, Universitext. Springer, Cham, 2017. 10.1007/978-3-319-62226-2Search in Google Scholar

[4] M. Cranston, S. Orey, and U. Rösler, The Martin boundary of two dimensional Ornstein-Uhlenbeck processes, Probability, statistics and analysis, London Math. Soc. Lect. Note Ser. 79, Cambridge University Press, Cambridge, 1983, pp. 63–78. 10.1017/CBO9780511662430.004Search in Google Scholar

[5] G. Da Prato and J. Zabczyk, Second order partial differential equations in Hilbert spaces, London Mathematical Society Note Series, vol. 293, Cambridge University Press, Cambridge, 2002. 10.1017/CBO9780511543210Search in Google Scholar

[6] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Second edition, Encyclopedia of Mathematics and its Applications, vol. 152, Cambridge University Press, Cambridge, 2014. 10.1017/CBO9781107295513Search in Google Scholar

[7] L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, Boca Raton, 1992.Search in Google Scholar

[8] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, Second edition, North-Holland/Kodansha, 1989. Search in Google Scholar

[9] A. Kogoj and E. Lanconelli, An invariant Harnack inequality for a class of hypoelliptic ultraparabolic equations, Mediterr. J. Math. 1 (2004), no. 1, 51–80. 10.1007/s00009-004-0004-8Search in Google Scholar

[10] A. E. Kogoj and E. Lanconelli, One-side Liouville theorems for a class of hypoelliptic ultraparabolic equations, in Geometric analysis of PDE and several complex variables, vol. 368 of Contemporary Mathematics, American Mathematical Society, Providence, RI, 2005, pp. 305–312. 10.1090/conm/368/06786Search in Google Scholar

[11] A. E. Kogoj and E. Lanconelli, Liouville theorems for a class of linear second-order operators with non-negative characteristic form, Bound. Value Probl. (2007), Art. ID. 48232, 16 pages.10.1155/2007/48232Search in Google Scholar

[12] A. E. Kogoj, E. Lanconelli, and E. Priola, Harnack inequality and Liouville-type theorems for Ornstein-Uhlenbeck and Kolmogorov operators, Math. Eng. 2 (2020), no. 4, 680–697. Search in Google Scholar

[13] L. P. Kupcov, The fundamental solutions of a certain class of elliptic-parabolic second order equations, Differ. Uravn. 8 (1972), 1649–1660, 1716. Search in Google Scholar

[14] E. Lanconelli and S. Polidoro, On a class of hypoelliptic evolution operators, Rend. Sem. Mat. Univ. Politec. Torino 52 (1994), no. 1, 29–63, Partial differential equations, II (Turin, 1993). Search in Google Scholar

[15] R. Pinsky, Positive Harmonic Functions and Diffusion, Cambridge University Press, Cambridge, 1995. 10.1017/CBO9780511526244Search in Google Scholar

[16] E. Priola and J. Zabczyk, Null controllability with vanishing energy, SIAM J. Control Optim. 42 (2003), 1013–1032. 10.1137/S0363012902409970Search in Google Scholar

[17] E. Priola and J. Zabczyk, Liouville theorems for nonlocal operators, J. Funct. Anal. 216 (2004), 455–490. 10.1016/j.jfa.2004.04.001Search in Google Scholar

[18] E. Priola and J. Zabczyk, Harmonic functions for generalized Mehler semigroups, SPDEs and Applications-VII, Lecture Notes on Pure Applied Mathematics, vol. 245, Chapman Hall/CRC, 2006, pp. 243–256. https://iris.unito.it/retrieve/handle/2318/62663/755313/PriolaZabczyk_Mehlerquad.pdf. 10.1201/9781420028720.ch20Search in Google Scholar

[19] E. Priola and F. Y. Wang, Gradient estimates for diffusion semigroups with singular coefficients, J. Funct. Anal. 236 (2006), 244–264. 10.1016/j.jfa.2005.12.010Search in Google Scholar

[20] R. L. Schilling, P. Sztonyk, and J. Wang, On the coupling property and the Liouville theorem for Ornstein-Uhlenbeck processes, J. Evol. Equ. 12 (2012), no. 1, 119–140. 10.1007/s00028-011-0126-ySearch in Google Scholar

[21] F. Y. Wang, Coupling for Ornstein-Uhlenbeck processes with jumps, Bernoulli 17 (2011), no. 4, 1136–1158. 10.3150/10-BEJ308Search in Google Scholar

[22] W. Sierpiński, Sur les fonctions convexes mesurables, Fund. Math. 1 (1920), 125–128. 10.4064/fm-1-1-125-128Search in Google Scholar

[23] J. Zabczyk, Mathematical Control Theory – An Introduction, Second edition, Birkhäuser/Springer, Cham, 2020. 10.1007/978-3-030-44778-6Search in Google Scholar

Received: 2024-05-03
Revised: 2024-07-29
Accepted: 2024-09-17
Published Online: 2024-11-19

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
