
Using sums-of-squares to prove Gaussian product inequalities

Oliver Russell and Wei Sun
Published/Copyright: June 18, 2024

Abstract

The long-standing Gaussian product inequality (GPI) conjecture states that $E\big[\prod_{j=1}^{n} |X_j|^{y_j}\big] \ge \prod_{j=1}^{n} E\big[|X_j|^{y_j}\big]$ for any centered Gaussian random vector $(X_1, \dots, X_n)$ and any non-negative real numbers $y_j$, $j = 1, \dots, n$. In this study, we describe a computational algorithm involving sums-of-squares representations of multivariate polynomials that can be used to resolve the GPI conjecture. To exhibit the power of the novel method, we apply it to prove new four- and five-dimensional GPIs: $E[X_1^{2m} X_2^2 X_3^2 X_4^2] \ge E[X_1^{2m}]\,E[X_2^2]\,E[X_3^2]\,E[X_4^2]$ for any $m \in \mathbb{N}$, and $E[|X_1|^{y} X_2^2 X_3^2 X_4^2 X_5^2] \ge E[|X_1|^{y}]\,E[X_2^2]\,E[X_3^2]\,E[X_4^2]\,E[X_5^2]$ for any $y \ge \frac{1}{10}$.

1 Introduction

Gaussian distributions play a fundamental role in both probability and statistics (see [30] and the references therein). Over the past decade, there has been a surge of interest in the study of inequalities related to Gaussian distributions, sparked by Royen's famous proof of the long-conjectured Gaussian correlation inequality (GCI) [16,23]. GCIs are valuable tools for the study of small ball probabilities (e.g., Li [17] and Shao [28]) and the zeros of random polynomials (e.g., Li and Shao [18]), among other topics. In this work, we investigate the Gaussian product inequality (GPI), another long-standing conjecture and one of the late Wenbo Li's open problems (Shao [29]).

In its most general form, the GPI conjecture (Li and Wei [19]) states that for any non-negative real numbers $y_j$, $j = 1, \dots, n$, and any centered $\mathbb{R}^n$-valued Gaussian random vector $(X_1, \dots, X_n)$,

(1.1) $E\big[\prod_{j=1}^{n} |X_j|^{y_j}\big] \ge \prod_{j=1}^{n} E\big[|X_j|^{y_j}\big].$

Verification of this inequality would have immediate and far-reaching consequences. For example (Malicet et al. [21]), if (1.1) holds whenever the $y_j$'s are equal positive even integers, then the "real linear polarization constant" conjecture raised by Benítez et al. [1] is true; moreover, (1.1) is deeply linked to the celebrated $U$-conjecture of Kagan et al. [12].

The GPI conjecture has proven very challenging to solve, but not for lack of trying, and some progress has been made in the form of partial results. By Karlin and Rinott [11, Corollary 1.1, Theorem 3.1 and Remark 1.4], (1.1) holds for $n = 2$ by virtue of $(|X_1|, |X_2|)$ possessing the MTP$_2$ property. Thus, we know the following two-dimensional (2D) GPI holds:

Theorem 1.1

Let $(X_1, X_2)$ be a centered $\mathbb{R}^2$-valued Gaussian random vector and $y_1, y_2 > 0$ or $-1 < y_1, y_2 < 0$. Then,

(1.2) $E[|X_1|^{y_1} |X_2|^{y_2}] \ge E[|X_1|^{y_1}]\,E[|X_2|^{y_2}],$

and the equality sign holds if and only if $X_1$ and $X_2$ are independent.

Liu et al. [20] also used integral representations to provide an alternative proof of (1.2) for $y_1, y_2 \in (0, 2)$. In [24], we completed the picture of bivariate Gaussian product relations by proving a novel "opposite GPI" for $-1 < y_1 < 0$ and $y_2 > 0$:

$E[|X_1|^{y_1} |X_2|^{y_2}] \le E[|X_1|^{y_1}]\,E[|X_2|^{y_2}].$

Frenkel [4] used algebraic methods to prove (1.1) for the case $y_j = 2$ for each $j = 1, \dots, n$. Lan et al. [15, Theorem 3.2] used Gaussian hypergeometric functions to prove the following GPI: for any $m_1, m_2 \in \mathbb{N}$ and any centered Gaussian random vector $(X_1, X_2, X_3)$,

$E[X_1^{2m_1} X_2^{2m_2} X_3^{2m_2}] \ge E[X_1^{2m_1}]\,E[X_2^{2m_2}]\,E[X_3^{2m_2}].$

Lan et al.’s proof of the three-dimensional (3D) GPI brought renewed hope to solving this difficult problem.

In [26], we investigated the 3D inequality

$E[X_1^{2} X_2^{2m_2} X_3^{2m_3}] \ge E[X_1^{2}]\,E[X_2^{2m_2}]\,E[X_3^{2m_3}]$

for any $m_2, m_3 \in \mathbb{N}$. We showed that this inequality is implied by a combinatorial inequality, which can be verified directly for small values of $m_2$ and arbitrary $m_3$; hence, the corresponding cases of the 3D inequality were proved. In [25], we completed the proof through the discovery of a novel inequality for the moment ratio $E[|X_2|^{2m_2+1} |X_3|^{2m_3+1}]\,/\,E[X_2^{2m_2} X_3^{2m_3}]$, which implies this 3D GPI. We then extended these 3D results to the case where the exponents in the GPI can be real numbers rather than simply even integers.

Several months after the preprint of our publication [25] was initially posted online, Herry et al. [9] proved the 3D GPI (1.1) for any even-integer exponents. Motivated by this work, we continued to investigate the GPI in dimensions greater than three and in the case where the exponents are real-valued.

Throughout this work, any Gaussian random variable is assumed to be real-valued and non-degenerate, i.e., has positive variance.

Conjecture 1.2

Let $n \ge 3$ and $m_1, \dots, m_n \in \mathbb{N}$. For any centered Gaussian random vector $(X_1, \dots, X_n)$,

(1.3) $E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] \ge \prod_{j=1}^{n} E[X_j^{2m_j}].$

The equality holds if and only if $X_1, \dots, X_n$ are independent.

In [26], we proved Conjecture 1.2 (for any $n \in \mathbb{N}$) when all correlations are non-negative.

Proposition 1.3

[26, Lemma 2.3] Let $(X_1, \dots, X_n)$ be a centered Gaussian random vector such that $E[X_i X_j] \ge 0$ for any $i \ne j$. Then, (1.3) holds and

$E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] \ge E\Big[\prod_{j=1}^{k} X_j^{2m_j}\Big]\, E\Big[\prod_{j=k+1}^{n} X_j^{2m_j}\Big], \quad 1 \le k \le n-1.$

Remark 1.4

(i) Genest and Ouimet [5] proved (1.3) when the covariance matrix is completely positive. Edelmann et al. [3] extended Proposition 1.3 to the multivariate gamma distribution in the sense of Krishnamoorthy and Parthasarathy [14]. It is interesting to point out that Edelmann et al. [3] also demonstrated that the GCI implies the strong form of the GPI with negative exponents.

(ii) Note that the product inequality (1.3) may not hold for non-Gaussian random vectors. Let $V_1$ and $V_2$ be independent Rademacher random variables, i.e., $P(V_i = 1) = P(V_i = -1) = \frac{1}{2}$, $i = 1, 2$. Define $Y_1 = V_1 + 0.9 V_2$ and $Y_2 = V_1 - 0.9 V_2$. Then, $(Y_1, Y_2)$ is a centered random vector with

$\mathrm{Cov}(Y_1, Y_2) = E[V_1^2 - 0.81 V_2^2] = 0.19.$

We have

$E[Y_1^2 Y_2^2] = E[(V_1^2 - 0.81 V_2^2)^2] = (0.19)^2$

and

$E[Y_1^2]\,E[Y_2^2] = (1.81)^2.$

Hence,

$E[Y_1^2 Y_2^2] < E[Y_1^2]\,E[Y_2^2].$

For some more recent GPI-related results, we refer the reader to Genest and Ouimet [6] and Genest et al. [7].

In this work, we develop an efficient computational algorithm that produces exact sums-of-squares (SOS) polynomials to tackle the GPI. We describe this method in Section 2 and use it to rigorously prove some special cases of the GPI conjecture with fixed exponents in Section 3. Then, in Section 4, we reveal the true power of the SOS method by extending these special cases to the stronger results in which one exponent is unbounded. Finally, in Section 5 and Additional Material Section 2, we prove a five-dimensional (5D) GPI as a template for a new and improved SOS method that handles the case where the unbounded exponent can be real rather than simply an even integer. Fully solving the GPI conjecture is most difficult when correlations can be negative and when exponents are not integers. In theory, our algorithm is applicable to any GPI of the form (1.1) with fixed $n$ in which all but one of the exponents are fixed even integers, and it is therefore the first universal method for attacking the GPI.

2 The SOS method for solving the GPI

An SOS representation of a polynomial is an expression of the form $\sum_{i=1}^{p} f_i^2$, where the $f_i$'s are real-coefficient polynomials; for example, $x^2 - 2xy + 2y^2 = (x - y)^2 + y^2$. It is clear that any polynomial with an SOS representation is necessarily non-negative.

Lemma 2.1

Let $(X_1, \dots, X_n)$ be a centered Gaussian random vector. Denote by $\Lambda$ the covariance matrix of $(X_1, \dots, X_n)$ and by $c_{m_1,\dots,m_n}$ the coefficient of the term $t_1^{2m_1} \cdots t_n^{2m_n}$ in the polynomial

$G(t_1, \dots, t_n) = \Big(\sum_{k,l=1}^{n} \Lambda_{kl} t_k t_l\Big)^{\sum_{j=1}^{n} m_j}, \quad t_1, \dots, t_n \in \mathbb{R}.$

Then,

(2.1) $E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] = \frac{\prod_{j=1}^{n} (2m_j)!}{2^{\sum_{j=1}^{n} m_j}\,\big(\sum_{j=1}^{n} m_j\big)!}\, c_{m_1,\dots,m_n}.$

Proof

In view of the classical Isserlis-Wick formula (cf. [13]), (2.1) is an equality between polynomials in the entries of $\Lambda$. Hence, it suffices to establish (2.1) for invertible $\Lambda$. We use $^\top$ to denote the transpose of a matrix or a vector. Define

$f_\Lambda(x) = \frac{1}{\sqrt{(2\pi)^n \det(\Lambda)}}\, e^{-\frac{1}{2} x^\top \Lambda^{-1} x}, \quad x = (x_1, \dots, x_n)^\top \in \mathbb{R}^n,$

and

$\Psi_\Lambda(t) = \int_{\mathbb{R}^n} e^{i t^\top x} f_\Lambda(x)\, dx, \quad t = (t_1, \dots, t_n)^\top \in \mathbb{R}^n.$

Then,

$\Psi_\Lambda(t) = \exp\Big({-\frac{1}{2}}\, t^\top \Lambda t\Big).$

For $\beta = (\beta_1, \dots, \beta_n)$ with $\beta_j \in \mathbb{N} \cup \{0\}$, we have

$\partial_t^{\beta} \Psi_\Lambda(0) = i^{\sum_{j=1}^{n} \beta_j} \int_{\mathbb{R}^n} \prod_{j=1}^{n} x_j^{\beta_j}\, f_\Lambda(x)\, dx.$

Hence, expanding the exponential and noting that only the term of order $\sum_{j=1}^{n} m_j$ in $t^\top \Lambda t$ contributes to the derivative at $0$,

$E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] = (-1)^{\sum_{j=1}^{n} m_j}\, \partial_t^{(2m_1,\dots,2m_n)} \exp\Big({-\frac{1}{2}}\, t^\top \Lambda t\Big)(0) = \frac{1}{2^{\sum_{j=1}^{n} m_j}\,\big(\sum_{j=1}^{n} m_j\big)!}\, \partial_t^{(2m_1,\dots,2m_n)} \Big(\sum_{k,l=1}^{n} \Lambda_{kl} t_k t_l\Big)^{\sum_{j=1}^{n} m_j}(0).$

Therefore, the proof is complete.□
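As a quick sanity check of (2.1) (our addition, not part of the original argument), take $n = 2$ and $m_1 = m_2 = 1$: the formula should recover the Isserlis identity $E[X_1^2 X_2^2] = \Lambda_{11}\Lambda_{22} + 2\Lambda_{12}^2$. A minimal Mathematica sketch:

```mathematica
(* Check (2.1) for n = 2, m1 = m2 = 1 against the Isserlis identity
   E[X1^2 X2^2] = L11 L22 + 2 L12^2; L11, L12, L22 denote the entries
   of the covariance matrix Lambda. *)
m = {1, 1};
g = (L11 t1^2 + 2 L12 t1 t2 + L22 t2^2)^Total[m];
c = Coefficient[Expand[g], t1^2 t2^2];           (* c_{1,1} in Lemma 2.1 *)
Expand[(Times @@ ((2 m)!))/(2^Total[m] Total[m]!) c]
(* ==> L11 L22 + 2 L12^2 *)
```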

Let $U_j$, $1 \le j \le n$, be independent standard Gaussian random variables. Define

$X_k = \sum_{j=1}^{n} x_{kj} U_j, \quad 1 \le k \le n,$

where each $x_{kj} \in \mathbb{R}$, $1 \le k, j \le n$. Then, we have

(2.2) $\Lambda_{kk} = \sum_{j=1}^{n} x_{kj}^2, \ \ 1 \le k \le n; \qquad \Lambda_{kl} = \sum_{j=1}^{n} x_{kj} x_{lj}, \ \ 1 \le k < l \le n.$

Define

$F_{m_1,\dots,m_n}(\Lambda) = E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] - \prod_{j=1}^{n} E[X_j^{2m_j}] = E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] - \prod_{k=1}^{n} \big[(2m_k - 1)!!\, \Lambda_{kk}^{m_k}\big].$

By (2.1) and (2.2), it is easy to see that $F_{m_1,\dots,m_n}(\Lambda)$ can be expressed as a polynomial in the $x_{ij}$'s, say $F_{m_1,\dots,m_n}(x_{11}, \dots, x_{1n}, \dots, x_{n1}, \dots, x_{nn})$.

By (2.1) and using the Mathematica functions Expand[] and Coefficient[], we obtain

(2.3) Poly $=$ Expand$\Big[\Big(\sum_{k,l=1}^{n} \Lambda_{kl} t_k t_l\Big)^{\sum_{j=1}^{n} m_j}\Big]$,
$F_{m_1,\dots,m_n}(\Lambda) =$ Expand$\Big[\frac{\prod_{j=1}^{n} (2m_j)!}{2^{\sum_{j=1}^{n} m_j}\,(\sum_{j=1}^{n} m_j)!}\,$Coefficient$\Big[$Poly$,\ \prod_{j=1}^{n} t_j^{2m_j}\Big] - \prod_{k=1}^{n} \big[(2m_k - 1)!!\, \Lambda_{kk}^{m_k}\big]\Big]$.

Combining (2.2) and (2.3), we have an algorithm to obtain the expression of $F_{m_1,\dots,m_n}(x_{11}, \dots, x_{1n}, \dots, x_{n1}, \dots, x_{nn})$. Suppose we consider (2.3) for fixed $n, m_1, \dots, m_n$. Applying the Macaulay2 package "SumsOfSquares" [2] to this polynomial, we may attempt to obtain an SOS decomposition $\sum_{i=1}^{p} c_i f_i^2$ of $F_{m_1,\dots,m_n}$, where the $c_i$'s are positive rational numbers and the $f_i$'s are rational-coefficient polynomials. Although not every rational-coefficient SOS polynomial necessarily admits such a rational decomposition (Scheiderer [27, Theorem 2.1]), this software aims to produce one regardless (Peyrl and Parrilo [22]). If one is obtained, then (1.3) is verified for the case at hand.

This package is very user-friendly and, along with the semi-definite programming package it uses, comes pre-installed in the newest versions of Macaulay2. By using a "mixed symbolic-numerical approach" [22], "SumsOfSquares" takes advantage of the speed of approximate numerical calculations, yet still produces a final SOS decomposition that is exact (not an approximation). This SOS polynomial can then be expanded and checked to match the original $F_{m_1,\dots,m_n}$ using the value() function.

To further increase the efficiency of our method, we use a trick to reduce the number of variables of the polynomial $F_{m_1,\dots,m_n}$. In this way, the calculation of the SOS decomposition becomes significantly less computationally intensive and much faster. This technique rests on a key result proved in one of our previous papers, which we restate here for convenience.

Lemma 2.2

[26, Lemma 2.1] Let $n \ge 3$ and $m_1, \dots, m_n \in \mathbb{N}$. If for any centered Gaussian random vector $(Y_1, \dots, Y_n)$ with $Y_n = y_1 Y_1 + \cdots + y_{n-1} Y_{n-1}$ for some constants $y_1, \dots, y_{n-1}$,

(2.4) $E\Big[\prod_{j=1}^{n-1} Y_j^{2m_j} \cdot Y_n^{2k}\Big] \ge \prod_{j=1}^{n-1} E[Y_j^{2m_j}] \cdot E[Y_n^{2k}], \quad 0 \le k \le m_n,$

then for any centered Gaussian random vector $(X_1, \dots, X_n)$,

(2.5) $E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] \ge \prod_{j=1}^{n} E[X_j^{2m_j}].$

Additionally, if inequality (2.4) is strict when $k = m_n$, then the equality sign of (2.5) holds only if $X_n$ is independent of $X_1, \dots, X_{n-1}$.

The main advantage of Lemma 2.2 is that, to prove the GPI (2.5), we may assume without loss of generality that $X_n$ is a linear combination of $X_1, \dots, X_{n-1}$.
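To make the algorithm concrete before applying it, here is a minimal Mathematica sketch (our illustration; the helper name FPoly is hypothetical) that assembles $F_{m_1,\dots,m_n}$ from (2.1) and (2.2). Row $k$ of the argument x lists the coefficients of $X_k$ with respect to independent standard Gaussians $U_j$; by Lemma 2.2, x may have fewer columns than rows. We reuse this sketch below to reproduce the polynomials of Section 3.

```mathematica
(* Sketch: build the polynomial F_{m1,...,mn} of (2.2)-(2.3).
   Row k of x holds the coefficients of X_k in the U basis. *)
FPoly[m_List, x_List] := Module[{lam, t, poly, moment},
  lam = x . Transpose[x];                       (* covariance matrix, (2.2) *)
  t = Array[Subscript[\[FormalT], #] &, Length[m]];
  poly = Expand[(t . lam . t)^Total[m]];        (* the polynomial G *)
  moment = (Times @@ ((2 m)!))/(2^Total[m] Total[m]!)*
    Coefficient[poly, Times @@ (t^(2 m))];      (* E[prod_j X_j^(2 m_j)] by (2.1) *)
  Expand[moment - Times @@ (Factorial2[2 m - 1] Diagonal[lam]^m)]]
```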

3 Applications of the SOS method

First, we will verify (1.3) for the case $n = 3$, $m_1 = 4$, $m_2 = 3$, $m_3 = 1$.

Theorem 3.1

For any centered Gaussian random vector $(X_1, X_2, X_3)$,

$E[X_1^{8} X_2^{6} X_3^{2}] \ge E[X_1^{8}]\,E[X_2^{6}]\,E[X_3^{2}].$

The equality holds if and only if $X_1, X_2, X_3$ are independent.

Proof

Let $X_3$ be a linear combination of $X_1$ and $X_2$. Note that if $X_2 = c_1 X_1$ or $X_3 = c_2 X_1$, then the inequality reduces to the 2D GPI, which has been solved. Let $U_1$ and $U_2$ be independent standard Gaussian random variables. Then, without loss of generality, we may write

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \qquad a, b \in \mathbb{R}.$

By Lemma 2.2, we need only show that $F_{4,3,1}$ is strictly positive. We will complete the proof by giving an SOS decomposition of $F_{4,3,1}$. By (2.3), we obtain

$F_{4,3,1}(a,b) = 9,450 + 207,900 a^2 + 463,050 a^4 + 133,560 a^6 + 170,100 ab + 1,247,400 a^3 b + 1,621,620 a^5 b + 12,600 b^2 + 463,050 a^2 b^2 + 2,022,300 a^4 b^2 + 2,025,450 a^6 b^2$

$= \frac{8,939,085}{4}\Big(\frac{517,492}{8,939,085} a^3 + a^2 b + \frac{5,076,483}{23,837,560} a + \frac{11,941}{17,878,170} b\Big)^2 + 2,025,450 \Big(a^3 b + \frac{681,437}{2,025,450} a^2 - \frac{56,659}{1,080,240} ab - \frac{262,277}{21,604,800}\Big)^2 + \frac{261,845,942,007}{576,128}\Big(\frac{1,920,322,441,444}{3,927,689,130,105} a^2 + ab + \frac{195,121,880,459}{1,745,639,613,380}\Big)^2 + \frac{78,825,114,179,908,206,647}{471,322,695,612,600}\Big(a^2 - \frac{92,099,248,237,922,489,613}{1,261,201,826,878,531,306,352}\Big)^2 + \frac{1,126,954,700,084}{8,939,085}\Big(a^3 - \frac{3,475,478,394,117}{9,015,637,600,672} a - \frac{4,058,130,969,823}{36,062,550,402,688} b\Big)^2 + \frac{14,264,057,991,471,695,069}{180,312,752,013,440}\Big(a + \frac{39,195,431,778,109,777,165}{114,112,463,931,773,560,552} b\Big)^2 + \frac{195,090,047,841,794,618,494,397,997}{116,851,163,066,136,126,005,248}\,(b)^2 + \frac{2,083,472,753,121,284,705,951,976,293}{807,169,169,202,260,036,065,280}\,(1)^2 > 0,$

where the second expression (i.e., the SOS decomposition) is obtained by an application of “SumsOfSquares” to the first expression.□
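For reproducibility (our addition), the first expression above is exactly the output of the hypothetical FPoly sketch from Section 2:

```mathematica
FPoly[{4, 3, 1}, {{1, 0}, {a, 1}, {b, 1}}]
(* ==> 9450 + 207900 a^2 + 463050 a^4 + 133560 a^6 + 170100 a b
       + 1247400 a^3 b + 1621620 a^5 b + 12600 b^2 + 463050 a^2 b^2
       + 2022300 a^4 b^2 + 2025450 a^6 b^2 *)
```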

Building on this result, we will verify (1.3) for the case $n = 3$, $m_1 = 4$, $m_2 = 3$, $m_3 = 2$.

Theorem 3.2

For any centered Gaussian random vector $(X_1, X_2, X_3)$,

$E[X_1^{8} X_2^{6} X_3^{4}] \ge E[X_1^{8}]\,E[X_2^{6}]\,E[X_3^{4}].$

The equality holds if and only if $X_1, X_2, X_3$ are independent.

Proof

Let $X_3$ be a linear combination of $X_1$ and $X_2$. Note that if $X_2 = c_1 X_1$ or $X_3 = c_2 X_1$, then the inequality reduces to the 2D GPI, which has been solved. Let $U_1$ and $U_2$ be independent standard Gaussian random variables. Then, without loss of generality, we may write

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \qquad a, b \in \mathbb{R}.$

By Lemma 2.2, we need only show that $F_{4,3,1}$ and $F_{4,3,2}$ are strictly positive. By Theorem 3.1, $F_{4,3,1} > 0$. We will complete the proof by giving an SOS decomposition of $F_{4,3,2}$ (see Additional Material Section 1 for the full SOS decomposition). By (2.3), we obtain

$F_{4,3,2}(a,b) = 94,500 + 1,474,200 a^2 + 2,324,700 a^4 + 400,680 a^6 + 2,381,400 ab + 12,474,000 a^3 b + 9,729,720 a^5 b + 585,900 b^2 + 14,004,900 a^2 b^2 + 36,458,100 a^4 b^2 + 12,152,700 a^6 b^2 + 3,742,200 a b^3 + 32,432,400 a^3 b^3 + 48,648,600 a^5 b^3 + 151,200 b^4 + 6,066,900 a^2 b^4 + 30,391,200 a^4 b^4 + 34,454,700 a^6 b^4$

$= \frac{145,216,503}{4}\Big(\frac{60,146,041}{290,433,006} a^3 b + a^2 b^2 + \frac{2,953,715}{145,216,503} a^2 + \frac{88,269,419}{290,433,006} ab - \frac{244,387}{12,627,522} b^2 + \frac{54,414,145}{2,468,680,551}\Big)^2 + \cdots + \frac{1,012,078,525,061,381,023,480,077,400,990,087,498,959,489,065}{140,146,636,339,259,089,636,648,852,302,082,527,294,576}\,(1)^2 > 0,$

where the second expression (i.e., the SOS decomposition) is obtained by an application of “SumsOfSquares” to the first expression.□

Next, we will verify (1.3) for the case $n = 4$, $m_1 = 2$, $m_2 = 1$, $m_3 = 1$, $m_4 = 1$.

Theorem 3.3

For any centered Gaussian random vector $(X_1, X_2, X_3, X_4)$,

(3.1) $E[X_1^{4} X_2^{2} X_3^{2} X_4^{2}] \ge E[X_1^{4}]\,E[X_2^{2}]\,E[X_3^{2}]\,E[X_4^{2}].$

The equality holds if and only if $X_1, X_2, X_3, X_4$ are independent.

Proof

Let $X_4$ be a linear combination of $X_1$, $X_2$ and $X_3$. Note that if $X_2 = c_1 X_1$ or $X_3 = c_2 X_1$ or $X_4 = c_3 X_1$, then the inequality reduces to a 3D GPI, which has been solved. Let $U_1$, $U_2$ and $U_3$ be independent standard Gaussian random variables. Then, we consider the following three constructions of $X_1, \dots, X_4$ in terms of $U_1, U_2, U_3$:

Case 1:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + c U_2 + U_3, \quad X_4 = d U_1 + e U_2 + U_3, \qquad a, b, c, d, e \in \mathbb{R}.$

Case 2:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \quad X_4 = c U_1 + d U_2 + U_3, \qquad a, b, c, d \in \mathbb{R}.$

Case 3:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \quad X_4 = c U_1 + U_2, \qquad a, b, c \in \mathbb{R}.$

Note that Case 1 corresponds to the case that the rank of the covariance matrix equals 3 and $X_3$ is linearly independent of $X_1$ and $X_2$. Case 2 corresponds to the case that the rank of the covariance matrix equals 3 and $X_3$ is linearly dependent on $X_1$ and $X_2$. Case 3 corresponds to the case that the rank of the covariance matrix equals 2. By Lemma 2.2, we need only show that, in each of these three cases, $F_{2,1,1,1} > 0$.

Case 1 (see Additional Material Section 1 for the full SOS decomposition):

$F_{2,1,1,1}(a,b,c,d,e) = 6 + 42 a^2 + 12 b^2 + 102 a^2 b^2 + 60 abc + 6 c^2 + 12 a^2 c^2 + 60 bd + 420 a^2 bd + 120 acd + 12 d^2 + 102 a^2 d^2 + 102 b^2 d^2 + 942 a^2 b^2 d^2 + 420 abc d^2 + 42 c^2 d^2 + 102 a^2 c^2 d^2 + 120 abe + 36 ce + 60 a^2 ce + 60 ade + 420 a b^2 de + 180 bcde + 420 a^2 bcde + 180 a c^2 de + 6 e^2 + 12 a^2 e^2 + 42 b^2 e^2 + 102 a^2 b^2 e^2 + 180 abc e^2 + 42 c^2 e^2 + 42 a^2 c^2 e^2$

$= 942 \Big(abd + \frac{1,169}{7,536} ace + \frac{1,159}{7,536} cd + \frac{1,159}{7,536} be + \frac{1,217}{7,536} a\Big)^2 + \cdots + \frac{403,986}{855,377}\,(1)^2 > 0,$

Case 2 (see Additional Material Section 1 for the full SOS decomposition):

$F_{2,1,1,1}(a,b,c,d) = 6 + 12 a^2 + 60 ab + 12 b^2 + 102 a^2 b^2 + 42 c^2 + 102 a^2 c^2 + 420 ab c^2 + 102 b^2 c^2 + 942 a^2 b^2 c^2 + 180 acd + 180 bcd + 420 a^2 bcd + 420 a b^2 cd + 42 d^2 + 42 a^2 d^2 + 180 ab d^2 + 42 b^2 d^2 + 102 a^2 b^2 d^2$

$= 942 \Big(abc + \frac{583}{3,768} ad + \frac{583}{3,768} bd + \frac{583}{3,768} c\Big)^2 + \cdots + \frac{599}{408}\,(1)^2 > 0,$

Case 3 (see Additional Material Section 1 for the full SOS decomposition):

$F_{2,1,1,1}(a,b,c) = 42 + 42 a^2 + 180 ab + 42 b^2 + 102 a^2 b^2 + 180 ac + 180 bc + 420 a^2 bc + 420 a b^2 c + 42 c^2 + 102 a^2 c^2 + 420 ab c^2 + 102 b^2 c^2 + 942 a^2 b^2 c^2$

$= 942 \Big(abc + \frac{593}{3,768} a + \frac{593}{3,768} b + \frac{593}{3,768} c\Big)^2 + \cdots + \frac{2,241}{902}\,(1)^2 > 0,$

where in each case, the first expression is obtained by (2.3) and the second expression (i.e., the SOS decomposition) is obtained by an application of “SumsOfSquares” to the first expression.□
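For reproducibility (our addition), the three first expressions above are exactly the outputs of the hypothetical FPoly sketch from Section 2:

```mathematica
FPoly[{2, 1, 1, 1}, {{1, 0, 0}, {a, 1, 0}, {b, c, 1}, {d, e, 1}}]  (* Case 1 *)
FPoly[{2, 1, 1, 1}, {{1, 0, 0}, {a, 1, 0}, {b, 1, 0}, {c, d, 1}}]  (* Case 2 *)
FPoly[{2, 1, 1, 1}, {{1, 0}, {a, 1}, {b, 1}, {c, 1}}]              (* Case 3 *)
```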

Remark 3.4

To establish Inequality (3.1), we need only consider Case 1. However, to show that the equality sign holds if and only if $X_1, X_2, X_3, X_4$ are independent, we must check all three cases.

4 SOS method with one exponent unbounded

In this section, we demonstrate an even more powerful application of the SOS method, to be used when one exponent is unknown. In particular, we extend Theorems 3.2 and 3.3 to the case where $m_1$ is unbounded by exploiting the fact that verifying these GPIs can be reduced to proving the non-negativity of a multivariate polynomial in the variables $a, b, \dots$ and $m_1$.

Theorem 4.1

Let $m \in \mathbb{N}$. For any centered Gaussian random vector $(X_1, X_2, X_3)$,

$E[X_1^{2m} X_2^{6} X_3^{4}] \ge E[X_1^{2m}]\,E[X_2^{6}]\,E[X_3^{4}].$

The equality holds if and only if $X_1, X_2, X_3$ are independent.

Proof

Let $X_3$ be a linear combination of $X_1$ and $X_2$, and let $U_1$ and $U_2$ be independent standard Gaussian random variables. Then, similar to the proof of Theorem 3.2, without loss of generality, we may write

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \qquad a, b \in \mathbb{R}.$

By Lemma 2.2, we need only show that $F_{m,3,1}$ and $F_{m,3,2}$ are strictly positive. By [26, Theorem 3.5], $F_{m,3,1} > 0$. We will complete the proof by giving an SOS decomposition of $F_{m,3,2}$ (see Additional Material Section 1 for the full SOS decomposition). By direct calculation (expanding and taking expectations), and then letting $m = p^2 + 1$, we have that for any $p \in \mathbb{R}$,

$\frac{F_{m,3,2}(a,b,p)}{2(2m-1)!!} = 450 + 2,295 a^2 + 1,620 a^4 + 135 a^6 + 3,780 ab + 9,000 a^3 b + 3,780 a^5 b + 900 b^2 + 9,990 a^2 b^2 + 14,040 a^4 b^2 + 2,790 a^6 b^2 + 2,700 a b^3 + 12,600 a^3 b^3 + 11,340 a^5 b^3 + 90 b^4 + 2,295 a^2 b^4 + 7,020 a^4 b^4 + 5,175 a^6 b^4 + 1,575 a^2 p^2 + 1,800 a^4 p^2 + 213 a^6 p^2 + 2,520 ab p^2 + 9,600 a^3 b p^2 + 5,112 a^5 b p^2 + 630 b^2 p^2 + 10,800 a^2 b^2 p^2 + 19,170 a^4 b^2 p^2 + 4,464 a^6 b^2 p^2 + 2,880 a b^3 p^2 + 17,040 a^3 b^3 p^2 + 17,856 a^5 b^3 p^2 + 120 b^4 p^2 + 3,195 a^2 b^4 p^2 + 11,160 a^4 b^4 p^2 + 9,129 a^6 b^4 p^2 + 450 a^4 p^4 + 90 a^6 p^4 + 2,400 a^3 b p^4 + 2,160 a^5 b p^4 + 2,700 a^2 b^2 p^4 + 8,100 a^4 b^2 p^4 + 2,472 a^6 b^2 p^4 + 720 a b^3 p^4 + 7,200 a^3 b^3 p^4 + 9,888 a^5 b^3 p^4 + 30 b^4 p^4 + 1,350 a^2 b^4 p^4 + 6,180 a^4 b^4 p^4 + 6,020 a^6 b^4 p^4 + 12 a^6 p^6 + 288 a^5 b p^6 + 1,080 a^4 b^2 p^6 + 576 a^6 b^2 p^6 + 960 a^3 b^3 p^6 + 2,304 a^5 b^3 p^6 + 180 a^2 b^4 p^6 + 1,440 a^4 b^4 p^6 + 1,880 a^6 b^4 p^6 + 48 a^6 b^2 p^8 + 192 a^5 b^3 p^8 + 120 a^4 b^4 p^8 + 280 a^6 b^4 p^8 + 16 a^6 b^4 p^{10}$

$= \frac{2,461,933}{196}\Big(\frac{46,403}{29,543,196} a^3 b^2 p^5 + \frac{381,808}{2,461,933} a^3 b^2 p^3 + \frac{12,180,469}{29,543,196} a^3 b^2 p - \frac{8,183}{4,923,866} a^3 p^3 + \frac{188,501}{2,461,933} a^2 b p^3 + \frac{57,820}{2,461,933} a b^2 p^3 + \frac{1,233,281}{29,543,196} a^3 p + a^2 b p + \frac{837,655}{2,461,933} a b^2 p + \frac{8,357,783}{39,390,928} a p + \frac{8,216,565}{91,091,521} b p\Big)^2 + \cdots + \frac{2,629,929,480,562,269,930,314,071,400,782,243,373,877,386,514,044,892,496,969,174,184,208,223,685,867,430,351,966,437}{1,167,665,414,293,974,784,763,735,440,802,934,310,791,398,655,203,134,471,061,602,327,930,858,255,273,025,136,604,440}\,(a^3 b^2 p^5)^2 > 0,$

where the second expression (i.e., the SOS decomposition) is obtained by an application of “SumsOfSquares” to the first expression.□
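To make the reduction to a polynomial in $(a, b, p)$ transparent, here is a minimal Mathematica sketch (our illustration; the helper names u1mom, u2mom, and ratio are ours). It uses $E[U_2^{2k}] = (2k-1)!!$ and the fact that $E[U_1^{2m+j}]/E[U_1^{2m}] = \prod_{i=1}^{j/2}(2m + 2i - 1)$ is a polynomial in $m$ for even $j$:

```mathematica
(* Sketch of the reduction in Theorem 4.1: compute F_{m,3,2}(a,b)/(2 (2m-1)!!)
   as a polynomial in a, b, m, then substitute m = p^2 + 1. *)
u1mom[j_] := If[EvenQ[j], Product[2 m + 2 i - 1, {i, 1, j/2}], 0];
u2mom[k_] := If[EvenQ[k], (k - 1)!!, 0];
cl = CoefficientList[Expand[(a u1 + u2)^6 (b u1 + u2)^4], {u1, u2}];
ratio = Sum[cl[[j + 1, k + 1]] u1mom[j] u2mom[k], {j, 0, 10}, {k, 0, 10}];
Expand[(ratio - 45 (a^2 + 1)^3 (b^2 + 1)^2)/2 /. m -> p^2 + 1]
(* ==> 450 + 2295 a^2 + ... + 16 a^6 b^4 p^10, the first expression above;
   here 45 (a^2+1)^3 (b^2+1)^2 = E[X2^6] E[X3^4] with E[X2^6] = 15 L22^3
   and E[X3^4] = 3 L33^2. *)
```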

Theorem 4.2

Let $m \in \mathbb{N}$. For any centered Gaussian random vector $(X_1, X_2, X_3, X_4)$,

(4.1) $E[X_1^{2m} X_2^{2} X_3^{2} X_4^{2}] \ge E[X_1^{2m}]\,E[X_2^{2}]\,E[X_3^{2}]\,E[X_4^{2}].$

The equality holds if and only if $X_1, X_2, X_3, X_4$ are independent.

Proof

Let $X_4$ be a linear combination of $X_1$, $X_2$ and $X_3$, and let $U_1$, $U_2$ and $U_3$ be independent standard Gaussian random variables. Then, similar to the proof of Theorem 3.3, without loss of generality, we may consider three constructions of $X_1, \dots, X_4$ in terms of $U_1, U_2, U_3$:

Case 1:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + c U_2 + U_3, \quad X_4 = d U_1 + e U_2 + U_3, \qquad a, b, c, d, e \in \mathbb{R}.$

Case 2:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \quad X_4 = c U_1 + d U_2 + U_3, \qquad a, b, c, d \in \mathbb{R}.$

Case 3:

$X_1 = U_1, \quad X_2 = a U_1 + U_2, \quad X_3 = b U_1 + U_2, \quad X_4 = c U_1 + U_2, \qquad a, b, c \in \mathbb{R}.$

By Lemma 2.2, we need only show that, in each of these three cases, $F_{m,1,1,1} > 0$. By direct calculation (expanding and taking expectations), and then letting $m = p^2 + 1$, we have that for any $p \in \mathbb{R}$,

Case 1 (see Additional Material Section 1 for the full SOS decomposition):

$\frac{F_{m,1,1,1}(a,b,c,d,e,p)}{2(2m-1)!!} = 1 + 4 a^2 + b^2 + 7 a^2 b^2 + 6 abc + c^2 + a^2 c^2 + 6 bd + 30 a^2 bd + 12 acd + d^2 + 7 a^2 d^2 + 7 b^2 d^2 + 52 a^2 b^2 d^2 + 30 abc d^2 + 4 c^2 d^2 + 7 a^2 c^2 d^2 + 12 abe + 6 ce + 6 a^2 ce + 6 ade + 30 a b^2 de + 18 bcde + 30 a^2 bcde + 18 a c^2 de + e^2 + a^2 e^2 + 4 b^2 e^2 + 7 a^2 b^2 e^2 + 18 abc e^2 + 7 c^2 e^2 + 4 a^2 c^2 e^2 + 3 a^2 p^2 + b^2 p^2 + 8 a^2 b^2 p^2 + 4 abc p^2 + a^2 c^2 p^2 + 4 bd p^2 + 32 a^2 bd p^2 + 8 acd p^2 + d^2 p^2 + 8 a^2 d^2 p^2 + 8 b^2 d^2 p^2 + 71 a^2 b^2 d^2 p^2 + 32 abc d^2 p^2 + 3 c^2 d^2 p^2 + 8 a^2 c^2 d^2 p^2 + 8 abe p^2 + 4 a^2 ce p^2 + 4 ade p^2 + 32 a b^2 de p^2 + 12 bcde p^2 + 32 a^2 bcde p^2 + 12 a c^2 de p^2 + a^2 e^2 p^2 + 3 b^2 e^2 p^2 + 8 a^2 b^2 e^2 p^2 + 12 abc e^2 p^2 + 3 a^2 c^2 e^2 p^2 + 2 a^2 b^2 p^4 + 8 a^2 bd p^4 + 2 a^2 d^2 p^4 + 2 b^2 d^2 p^4 + 30 a^2 b^2 d^2 p^4 + 8 abc d^2 p^4 + 2 a^2 c^2 d^2 p^4 + 8 a b^2 de p^4 + 8 a^2 bcde p^4 + 2 a^2 b^2 e^2 p^4 + 4 a^2 b^2 d^2 p^6$

$= \frac{123}{2}\Big(\frac{27}{410} abd p^3 + abd p + \frac{137}{1,230} ace p + \frac{71}{615} cd p + \frac{71}{615} be p + \frac{9}{82} a p\Big)^2 + \cdots + \frac{229,159,727}{5,822,517,568}\,(1)^2 > 0,$

Case 2 (see Additional Material Section 1 for the full SOS decomposition):

$\frac{F_{m,1,1,1}(a,b,c,d,p)}{2(2m-1)!!} = 1 + a^2 + 6 ab + b^2 + 7 a^2 b^2 + 4 c^2 + 7 a^2 c^2 + 30 ab c^2 + 7 b^2 c^2 + 52 a^2 b^2 c^2 + 18 acd + 18 bcd + 30 a^2 bcd + 30 a b^2 cd + 7 d^2 + 4 a^2 d^2 + 18 ab d^2 + 4 b^2 d^2 + 7 a^2 b^2 d^2 + a^2 p^2 + 4 ab p^2 + b^2 p^2 + 8 a^2 b^2 p^2 + 3 c^2 p^2 + 8 a^2 c^2 p^2 + 32 ab c^2 p^2 + 8 b^2 c^2 p^2 + 71 a^2 b^2 c^2 p^2 + 12 acd p^2 + 12 bcd p^2 + 32 a^2 bcd p^2 + 32 a b^2 cd p^2 + 3 a^2 d^2 p^2 + 12 ab d^2 p^2 + 3 b^2 d^2 p^2 + 8 a^2 b^2 d^2 p^2 + 2 a^2 b^2 p^4 + 2 a^2 c^2 p^4 + 8 ab c^2 p^4 + 2 b^2 c^2 p^4 + 30 a^2 b^2 c^2 p^4 + 8 a^2 bcd p^4 + 8 a b^2 cd p^4 + 2 a^2 b^2 d^2 p^4 + 4 a^2 b^2 c^2 p^6$

$= \frac{368}{5}\Big(\frac{25}{2,944} abc p^3 + abc p + \frac{281}{2,944} ad p + \frac{281}{2,944} bd p + \frac{281}{2,944} c p\Big)^2 + \cdots + \frac{2,957}{34,142}\,(1)^2 > 0,$

Case 3 (see Additional Material Section 1 for the full SOS decomposition):

$\frac{F_{m,1,1,1}(a,b,c,p)}{2(2m-1)!!} = 7 + 4 a^2 + 18 ab + 4 b^2 + 7 a^2 b^2 + 18 ac + 18 bc + 30 a^2 bc + 30 a b^2 c + 4 c^2 + 7 a^2 c^2 + 30 ab c^2 + 7 b^2 c^2 + 52 a^2 b^2 c^2 + 3 a^2 p^2 + 12 ab p^2 + 3 b^2 p^2 + 8 a^2 b^2 p^2 + 12 ac p^2 + 12 bc p^2 + 32 a^2 bc p^2 + 32 a b^2 c p^2 + 3 c^2 p^2 + 8 a^2 c^2 p^2 + 32 ab c^2 p^2 + 8 b^2 c^2 p^2 + 71 a^2 b^2 c^2 p^2 + 2 a^2 b^2 p^4 + 8 a^2 bc p^4 + 8 a b^2 c p^4 + 2 a^2 c^2 p^4 + 8 ab c^2 p^4 + 2 b^2 c^2 p^4 + 30 a^2 b^2 c^2 p^4 + 4 a^2 b^2 c^2 p^6$

$= \frac{372}{5}\Big(\frac{5}{2,976} abc p^3 + abc p + \frac{139}{1,488} a p + \frac{139}{1,488} b p + \frac{139}{1,488} c p\Big)^2 + \cdots + \frac{1,125,167}{3,159,157}\,(cp)^2 > 0,$

where in each case, the second expression (i.e., the SOS decomposition) is obtained by an application of “SumsOfSquares” to the first expression.□

Remark 4.3

To establish Inequality (4.1), we need only consider Case 1. However, to show that the equality sign holds if and only if $X_1, X_2, X_3, X_4$ are independent, we must check all three cases.

5 Alternative SOS method in Mathematica

Since Mathematica is much more user-friendly than Macaulay2, we were pleased to see that a new function, PolynomialSumOfSquaresList[], has been added in the latest versions of Mathematica. Although this function does not give SOS decompositions with strictly rational components, it still produces an exact and verifiable SOS decomposition when it succeeds.
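Before the 5D application, a toy illustration (our example; we assume the call PolynomialSumOfSquaresList[poly] returns a list of polynomials whose squares sum to poly; consult your Mathematica version's documentation for the exact signature):

```mathematica
(* Toy SOS decomposition and exactness check. The polynomial below is
   non-negative; one SOS certificate is
   (1/2)(2 x^2 + x y - 3 y^2)^2 + (1/2)(3 x y + y^2)^2. *)
poly = 2 x^4 + 2 x^3 y - x^2 y^2 + 5 y^4;
sos = PolynomialSumOfSquaresList[poly];   (* assumed one-argument form *)
Simplify[Total[sos^2] - poly]             (* ==> 0 when a decomposition is found *)
```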

This motivated us to develop a more efficient SOS method that requires only Mathematica. See Additional Material Section 2 for a self-contained Mathematica notebook in which we prove the following new 5D GPI using this SOS method:

Theorem 5.1

Let $y \in [\frac{1}{10}, \infty)$. For any centered Gaussian random vector $(X_1, X_2, X_3, X_4, X_5)$,

$E[|X_1|^{y} X_2^{2} X_3^{2} X_4^{2} X_5^{2}] \ge E[|X_1|^{y}]\,E[X_2^{2}]\,E[X_3^{2}]\,E[X_4^{2}]\,E[X_5^{2}].$

6 Discussion

The GPI conjecture is an extremely difficult problem to solve, particularly when some of the correlations are negative. The SOS method described in this study can be used to rigorously verify any specific case of the GPI (1.3), constrained only by computing power. Furthermore, as demonstrated in Section 4, this method is even powerful enough to prove GPIs with one exponent unbounded, a feat that is extremely difficult by purely theoretical methods. Moreover, in Section 5 and Additional Material Section 2, we proved a new 5D GPI, showing that we can even use the SOS method to verify more general GPIs with at least one unbounded real exponent. On the other hand, should the GPI conjecture not hold in its full generality, our method may prove quite useful in the search for a counterexample. Our algorithm is efficient, straightforward, and produces exact results. Furthermore, while calculations of multivariate Gaussian moments are often burdened by the constraints imposed by the covariance matrix, our method has the advantage of using free variables (with domain over the reals).

Although the GPI is widely believed to be true, until now there has been little concrete evidence to support this presumption. At the time the original preprint was posted online, all theorems in this work constituted never-before obtained results. To date, to the best of our knowledge, Theorems 4.2 and 5.1 have not been proved by any other methods. In fact, using the method outlined above, we were able to verify many more GPIs with one exponent unbounded and higher fixed exponents; we omit these proofs since the SOS decompositions, although exact, can be quite long. Thus, with the help of software, our work provides some of the first substantial support for the correctness of the GPI in dimensions higher than 3. Accordingly, we propose a stronger version of the GPI:

Conjecture 6.1

Let $n \ge 3$, $m_1, \dots, m_n \in \mathbb{N}$, and let $U_1, \dots, U_n$ be independent standard Gaussian random variables. Define the polynomial $H$ on $\mathbb{R}^{n(n-1)/2}$ by

$H(x_{21}, x_{31}, x_{32}, \dots, x_{n1}, \dots, x_{n,n-1}) = E\Big[U_1^{2m_1} \prod_{i=2}^{n} \Big(\sum_{j=1}^{i-1} x_{ij} U_j + U_i\Big)^{2m_i}\Big] - (2m_1 - 1)!!\, \prod_{i=2}^{n} E\Big[\Big(\sum_{j=1}^{i-1} x_{ij} U_j + U_i\Big)^{2m_i}\Big].$

Then, $H$ has an SOS representation.

By approximation, we find that if Conjecture 6.1 is true, then the GPI (1.3) holds. By virtue of this conjecture, we have connected this probability inequality to analysis, algebra, geometry, combinatorics, mathematical programming, and computer science. In particular, research on SOS is abundant and ongoing (e.g., [8]), but so far, no theoretical results in that field have been used to prove a GPI. Although software is used, our rigorous proofs establish the first meaningful link between the GPI and SOS, and should stimulate those working on SOS.

It is well known that a non-negative multivariate polynomial may not have an SOS representation. Denote by $P_{n,2d}$ the set of all non-negative polynomials in $n$ variables of degree at most $2d$ and by $\Sigma_{n,2d}$ the set of all polynomials in $P_{n,2d}$ that are SOS. In 1888, Hilbert proved that $\Sigma_{n,2d} = P_{n,2d}$ if and only if $n = 1$ or $d = 1$ or $(n, d) = (2, 2)$ [10]. However, we would like to point out that $E[\prod_{j=1}^{n} X_j^{2m_j}]$ itself is an SOS. To see this, let $U_1, \dots, U_n$ be independent standard Gaussian random variables. For $x_{11}, \dots, x_{1n}, \dots, x_{n1}, \dots, x_{nn} \in \mathbb{R}$, we can write:

$X_i = \sum_{j=1}^{n} x_{ij} U_j, \quad 1 \le i \le n.$

Then, we have

$\prod_{j=1}^{n} X_j^{m_j} = a^\top b,$

where $a = (1, x_{11}, \dots, x_{nn}, x_{11}^2, x_{11} x_{12}, \dots, x_{1n}^{m_1} \cdots x_{nn}^{m_n})^\top$ is a vector of monomials of length $l \le n^{\sum_{j=1}^{n} m_j}$ and $b = (P_1(U_1, \dots, U_n), P_2(U_1, \dots, U_n), \dots, P_l(U_1, \dots, U_n))^\top$ is a vector of polynomials, also of length $l$. Thus,

$E\Big[\prod_{j=1}^{n} X_j^{2m_j}\Big] = E\Big[\Big(\prod_{j=1}^{n} X_j^{m_j}\Big)^2\Big] = E[a^\top b\, b^\top a] = a^\top Q a,$

where $Q = E[b b^\top]$ is a non-negative definite matrix. Writing $Q = R^\top R$ for some real matrix $R$ gives $a^\top Q a = \sum_{i=1}^{l} (R a)_i^2$, where each $(Ra)_i$ is a polynomial in the $x_{ij}$'s. Therefore, $E\big[\prod_{j=1}^{n} X_j^{2m_j}\big]$ has an SOS representation.
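For instance (an illustrative special case we add), for $n = 2$ and $m_1 = m_2 = 1$, the Isserlis formula and (2.2) give

$E[X_1^2 X_2^2] = \Lambda_{11}\Lambda_{22} + 2\Lambda_{12}^2 = \sum_{j=1}^{2}\sum_{k=1}^{2} (x_{1j} x_{2k})^2 + 2\,(x_{11} x_{21} + x_{12} x_{22})^2,$

an explicit SOS in the $x_{ij}$'s.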

Acknowledgements

We thank Dr. Thomas Royen for fruitful discussion and encouragement with regards to the SOS method. We thank Dr. Victor Magron for pointing out Scheiderer’s result [27, Theorem 2.1] among other helpful suggestions.

  1. Funding information: This work was supported by the Natural Sciences and Engineering Research Council of Canada (Nos. 559668-2021 and 4394-2018).

  2. Author contributions: Both authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, reviewed all the results and approved the final version of the manuscript. OR and WS conceived of the presented idea and developed the theory. OR performed the computations.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Supplementary Material: This article contains supplementary material available at https://www.degruyter.com/document/doi/10.1515/demo-2024-0003.

References

[1] Benítez, C., Sarantopoulos, Y., & Tonge, A. M. (1998). Lower bounds for norms of products of polynomials. Mathematical Proceedings of the Cambridge Philosophical Society, 124(3), 395–408. doi:10.1017/S030500419800259X

[2] Cifuentes, D., Kahle, T., & Parrilo, P. (2020). Sums of squares in Macaulay2. Journal of Software for Algebra and Geometry, 10(1), 17–24. doi:10.2140/jsag.2020.10.17

[3] Edelmann, D., Richards, D., & Royen, T. (2023). Product inequalities for multivariate Gaussian, gamma, and positively upper orthant dependent distributions. Statistics and Probability Letters, 197, 109820. doi:10.1016/j.spl.2023.109820

[4] Frenkel, P. E. (2008). Pfaffians, Hafnians and products of real linear functionals. Mathematical Research Letters, 15(2), 351–358. doi:10.4310/MRL.2008.v15.n2.a12

[5] Genest, C., & Ouimet, F. (2022). A combinatorial proof of the Gaussian product inequality beyond the MTP2 case. Dependence Modeling, 10(1), 236–244. doi:10.1515/demo-2022-0116

[6] Genest, C., & Ouimet, F. (2023). Miscellaneous results related to the Gaussian product inequality conjecture for the joint distribution of traces of Wishart matrices. Journal of Mathematical Analysis and Applications, 523(1), 126951. doi:10.1016/j.jmaa.2022.126951

[7] Genest, C., Ouimet, F., & Richards, D. (2024). On the Gaussian product inequality conjecture for disjoint principal minors of Wishart random matrices. arXiv:2311.00202v2.

[8] Henrion, D., Korda, M., & Lasserre, J. B. (2020). The Moment-SOS Hierarchy. London, UK: World Scientific. doi:10.1142/q0252

[9] Herry, R., Malicet, D., & Poly, G. (2024). A short proof of a strong form of the three dimensional Gaussian product inequality. Proceedings of the American Mathematical Society, 152, 403–409. doi:10.1090/proc/16448

[10] Hilbert, D. (1888). Ueber die Darstellung definiter Formen als Summe von Formenquadraten. Mathematische Annalen, 32(3), 342–350. doi:10.1007/BF01443605

[11] Karlin, S., & Rinott, Y. (1981). Total positivity properties of absolute value multinormal variables with applications to confidence interval estimates and related probabilistic inequalities. The Annals of Statistics, 9(5), 1035–1049. doi:10.1214/aos/1176345583

[12] Kagan, A. M., Linnik, Y. V., & Rao, C. R. (1973). Characterization Problems in Mathematical Statistics. New York and London: Wiley.

[13] Kan, R. (2008). From moments of sum to moments of product. Journal of Multivariate Analysis, 99(3), 542–554. doi:10.1016/j.jmva.2007.01.013

[14] Krishnamoorthy, A. S., & Parthasarathy, M. (1951). A multivariate gamma-type distribution. The Annals of Mathematical Statistics, 22(4), 549–557. doi:10.1214/aoms/1177729544

[15] Lan, G. L., Hu, Z. C., & Sun, W. (2020). The three-dimensional Gaussian product inequality. Journal of Mathematical Analysis and Applications, 485(2), 123858. doi:10.1016/j.jmaa.2020.123858

[16] Latała, R., & Matlak, D. (2017). Royen's proof of the Gaussian correlation inequality. In: Geometric Aspects of Functional Analysis (Vol. 2169, pp. 265–275). Cham, Switzerland: Springer. doi:10.1007/978-3-319-45282-1_17

[17] Li, W. V. (1999). A Gaussian correlation inequality and its applications to small ball probabilities. Electronic Communications in Probability, 4, 111–118. doi:10.1214/ECP.v4-1012

[18] Li, W. V., & Shao, Q. M. (2002). A normal comparison inequality and its applications. Probability Theory and Related Fields, 122(4), 494–508. doi:10.1007/s004400100176

[19] Li, W. V., & Wei, A. (2012). A Gaussian inequality for expected absolute products. Journal of Theoretical Probability, 25(1), 92–99. doi:10.1007/s10959-010-0329-0

[20] Liu, Z., Wang, Z., & Yang, X. (2017). A Gaussian expectation product inequality. Statistics and Probability Letters, 124, 1–4. doi:10.1016/j.spl.2016.12.018

[21] Malicet, D., Nourdin, I., Peccati, G., & Poly, G. (2016). Squared chaotic random variables: new moment inequalities with applications. Journal of Functional Analysis, 270(2), 649–670. doi:10.1016/j.jfa.2015.10.013

[22] Peyrl, H., & Parrilo, P. A. (2008). Computing sum of squares decompositions with rational coefficients. Theoretical Computer Science, 409(2), 269–281. doi:10.1016/j.tcs.2008.09.025

[23] Royen, T. (2014). A simple proof of the Gaussian correlation conjecture extended to multivariate gamma distributions. Far East Journal of Theoretical Statistics, 48(2), 139–145.

[24] Russell, O., & Sun, W. (2022). An opposite Gaussian product inequality. Statistics and Probability Letters, 191, 109656. doi:10.1016/j.spl.2022.109656

[25] Russell, O., & Sun, W. (2023). Moment ratio inequality of bivariate Gaussian distribution and three-dimensional Gaussian product inequality. Journal of Mathematical Analysis and Applications, 527(1), 127410. doi:10.1016/j.jmaa.2023.127410

[26] Russell, O., & Sun, W. (2022). Some new Gaussian product inequalities. Journal of Mathematical Analysis and Applications, 515(2), 126439. doi:10.1016/j.jmaa.2022.126439

[27] Scheiderer, C. (2016). Sums of squares of polynomials with rational coefficients. Journal of the European Mathematical Society, 18(7), 1495–1513. doi:10.4171/jems/620

[28] Shao, Q. M. (2003). A Gaussian correlation inequality and its applications to the existence of small ball constant. Stochastic Processes and their Applications, 107(2), 269–287. doi:10.1016/S0304-4149(03)00084-X

[29] Shao, Q. M. (2016). In memory of Wenbo V. Li's contributions. In: High Dimensional Probability VII (Vol. 71, pp. 281–291). Switzerland: Birkhäuser. doi:10.1007/978-3-319-40519-3_12

[30] Tong, Y. L. (1990). The Multivariate Normal Distribution. New York, NY: Springer. doi:10.1007/978-1-4613-9655-0

Received: 2024-02-13
Revised: 2024-04-25
Accepted: 2024-04-29
Published Online: 2024-06-18

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
