
Matrix weights and a maximal function with exponent 3/2

  • Sergei Treil and Alexander Volberg
Published/Copyright: February 26, 2025

Abstract

We construct an example of a simple sparse operator whose norm with a scalar $A_2$ weight admits a linear estimate in $[w]_{A_2}$, but whose norm in the matrix setting grows at least as $[W]_{A_2}^{3/2}$ for specially constructed weights in $A_2$. We also exhibit a matrix weighted maximal function with the same estimate from below.

2010 Mathematics Subject Classification: 42B20; 42B35; 47A30

1 Definitions and results

1.1 Matrix weights and A2 condition

A $d\times d$ matrix weight is a measurable function with values in the set of positive semidefinite matrices. A matrix weight $W$ on $\mathbb{R}$ satisfies the $A_2$ condition if

(1.1) $[W]_{A_2} := \sup_I \| \langle W\rangle_I^{1/2} \langle W^{-1}\rangle_I^{1/2} \|^2 < \infty,$

where the supremum is taken over all intervals $I\subset\mathbb{R}$; for weights on $\mathbb{R}^n$ the intervals should be replaced by cubes. If the supremum is taken only over the dyadic intervals $I\in\mathcal{D}$, we say that the weight satisfies the dyadic $A_2$ condition $A_2^{\rm dy}$; the corresponding supremum is denoted by $[W]_{A_2^{\rm dy}}$.

The $A_2$ condition has a very simple interpretation: if one defines the averaging operator $E_I$,

$E_I f := \langle f\rangle_I \mathbf{1}_I,$

then

$\| E_I \|_{L^2(W)\to L^2(W)} = \| \langle W\rangle_I^{1/2} \langle W^{-1}\rangle_I^{1/2} \|.$

So, the $A_2$ condition (1.1) is just the uniform boundedness of the averaging operators $E_I$ in $L^2(W)$; we squared this norm in the definition of the $A_2$ characteristic $[W]_{A_2}$ to be consistent with the definition in the scalar case.
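As a quick sanity check of this interpretation, the following sketch (our own illustration, not from the paper; it assumes Python with numpy and uses a randomly generated piecewise constant weight on an interval $I$ of length 1) verifies numerically that no test function beats the norm $\| \langle W\rangle_I^{1/2} \langle W^{-1}\rangle_I^{1/2} \|$ for the averaging operator $E_I$, and that this quantity is always at least 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spd(d=2):
    # random symmetric positive definite d x d matrix
    a = rng.standard_normal((d, d))
    return a @ a.T + 0.1 * np.eye(d)

def sqrtm(a):
    # symmetric square root via eigendecomposition
    lam, u = np.linalg.eigh(a)
    return u @ np.diag(np.sqrt(lam)) @ u.T

n = 8                                  # number of equal sub-pieces of I
Ws = [rand_spd() for _ in range(n)]    # piecewise constant weight on I
W_avg = sum(Ws) / n                    # <W>_I
Winv_avg = sum(np.linalg.inv(w) for w in Ws) / n   # <W^{-1}>_I

# the claimed operator norm of E_I on L^2(W)
op_norm = np.linalg.norm(sqrtm(W_avg) @ sqrtm(Winv_avg), 2)

worst = 0.0
for _ in range(2000):
    f = rng.standard_normal((n, 2))    # random step function on I
    favg = f.mean(axis=0)
    num = np.sqrt(favg @ W_avg @ favg)                             # ||E_I f||_{L^2(W)}
    den = np.sqrt(np.mean([fj @ w @ fj for fj, w in zip(f, Ws)]))  # ||f||_{L^2(W)}
    worst = max(worst, num / den)

assert worst <= op_norm + 1e-9         # no sample beats the claimed norm
assert op_norm >= 1.0 - 1e-12          # the characteristic is always >= 1
```

The second assertion reflects the matrix AM–HM inequality $\langle W\rangle_I \ge \langle W^{-1}\rangle_I^{-1}$, which forces $[W]_{A_2}\ge 1$.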

The matrix $A_2$ weights were introduced in Ref. [1], where it was proved that the Hilbert transform is bounded in $L^2(\mathbb{C}^d, W\,dx)$ if and only if the weight satisfies the $A_2$ condition.

Later an estimate of the norm of the Hilbert transform with matrix weight was obtained in Ref. [2], and it was pushed to the estimate $[W]_{A_2}^{3/2}$ in Ref. [3]. The power 3/2 remained a bit surprising and bewildering for a while, but in Ref. [4] it was proved that this estimate is sharp, at least for the Hilbert transform.

We would like to mention that Ref. [4] contains a very detailed motivation and lists several applications of the sharp estimates of the Hilbert transform with matrix weight.

We remind the reader that the famous $A_2$ conjecture for scalar weights asked for a linear estimate in $[w]_{A_2}$ for Calderón–Zygmund operators. For all Calderón–Zygmund operators this was achieved in Ref. [5], but first the linear estimate was proved for the maximal function in Ref. [6], for Haar multipliers in Ref. [7], for the Ahlfors–Beurling transform in Ref. [8], and for the Hilbert transform in Ref. [9]. Later a sparse domination proof (for general Calderón–Zygmund operators) appeared in Ref. [10].

Paper [4] and the present paper are devoted to matrix weighted $L^2$ estimates. But matrix weighted $L^p$ estimates represent a challenging problem as well. The classical theory with scalar weights has a very powerful tool: the Rubio de Francia extrapolation theorem, [11], [12]. Recently Marcin Bownik and David Cruz-Uribe obtained the matrix generalization of the Rubio de Francia theory, see Ref. [13].

In that paper they proved the Jones factorization theorem and the Rubio de Francia extrapolation theorem for matrix $A_p$ weights. The proof requires the development of the theory of convex-set valued functions and measurable semi-norm functions. In particular, they defined a convex-set valued version of the Hardy–Littlewood maximal operator and constructed an appropriate generalization of the Rubio de Francia iteration algorithm, which is central to the proof of both results in the scalar case.

The paper of Cruz-Uribe, Isralowitz and Moen [14] contains a certain estimate of the norm of the Hilbert transform with matrix weight in $A_p$. For p = 2 it gives the estimate of the matrix Muckenhoupt norm to the power 3/2, matching Ref. [3].

Interestingly, paper [13] of Bownik and Cruz-Uribe gives a better exponent for 1 < p < 2.

1.2 Maximal functions

Let us recall some classical (scalar) definitions of dyadic maximal functions. For a locally integrable function f (say, on the real line) the simple (dyadic) maximal function Mf is defined as

$Mf(x) = \sup_{I\in\mathcal{D}:\, I\ni x} \Bigl| |I|^{-1}\int_I f(y)\,dy \Bigr| = \sup_{I\in\mathcal{D}:\, I\ni x} |\langle f\rangle_I|,$

and the Hardy–Littlewood maximal function $M^{\rm HL}f$ as

$M^{\rm HL}f(x) = \sup_{I\in\mathcal{D}:\, I\ni x} |I|^{-1}\int_I |f(y)|\,dy = \sup_{I\in\mathcal{D}:\, I\ni x} \langle |f|\rangle_I,$

i.e. in the Hardy–Littlewood maximal function one takes the absolute value before averaging.
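The two scalar maximal functions can be sketched on a finite dyadic grid as follows (a toy discretization of our own, with illustrative sample values; the pointwise inequality $Mf \le M^{\rm HL}f$ is immediate since $|\langle f\rangle_I| \le \langle |f|\rangle_I$):

```python
# f given by its values on 2^n equal dyadic pieces of [0, 1)
def dyadic_maximal(vals, hardy_littlewood=False):
    """Simple dyadic maximal function (sup of |<f>_I|) or its
    Hardy-Littlewood version (sup of <|f|>_I), piecewise on the grid."""
    n = len(vals)
    if hardy_littlewood:
        vals = [abs(v) for v in vals]
    out = [0.0] * n
    size = 1
    while size <= n:                 # dyadic intervals spanning `size` cells
        for start in range(0, n, size):
            m = abs(sum(vals[start:start + size]) / size)
            for j in range(start, start + size):
                out[j] = max(out[j], m)
        size *= 2
    return out

f = [1.0, -1.0, 2.0, 0.5, -3.0, 1.0, 0.0, 2.5]
M = dyadic_maximal(f)
MHL = dyadic_maximal(f, hardy_littlewood=True)
assert all(a <= b + 1e-12 for a, b in zip(M, MHL))   # M f <= M_HL f pointwise
assert M[4] == 3.0                                   # attained on the unit cell
```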

The importance of the simple maximal function stems from the fact that f belongs to the dyadic $H^1$ space if and only if $Mf \in L^1$.

One can naturally consider weighted analogues of the above functions. Given a weight (i.e. a locally integrable non-negative function) w, one can define the weighted simple maximal function $M_w f$ as

$M_w f(x) := \sup_{I\in\mathcal{D}:\, I\ni x} \langle w\rangle_I^{-1} |\langle wf\rangle_I|,$

and the weighted Hardy–Littlewood maximal function $M_w^{\rm HL} f$ as

$M_w^{\rm HL} f(x) := \sup_{I\in\mathcal{D}:\, I\ni x} \langle w\rangle_I^{-1} \langle |wf|\rangle_I;$

here, recall, we denote

$\langle f\rangle_I := |I|^{-1}\int_I f(y)\,dy.$

Note that both $M_w$ and $M_w^{\rm HL}$ are bounded in $L^2(w)$ with constant independent of w. Of course, one can consider arbitrary measures μ here, not just the absolutely continuous ones $w\,dx$, but since the main result here is a counterexample, we restrict ourselves to the case of weights.

So, what would be a natural generalization of these maximal functions to the vector-valued case in the context of weighted estimates with matrix-valued weights? Note that simply taking the maximal function of the norm would not work, since it does not take the directionality into account; it fails even for diagonal matrix weights.

However, the idea of convex body domination tells us what the correct generalization should be.

For a function f with values in $\mathbb{R}^d$ we define its convex body average

$\langle\!\langle f\rangle\!\rangle_I := \{ \langle \varphi f\rangle_I : \varphi\in L^\infty(I),\ \|\varphi\|_\infty \le 1 \},$

which, as was discussed in Ref. [3], is a closed convex subset of $\mathbb{R}^d$.

The correct analogue of the Hardy–Littlewood maximal function $M^{\rm HL}$ then would be the convex body maximal function $M^c$, which is the closed[1] absolute convex hull of the sets $\langle\!\langle f\rangle\!\rangle_I$,

(1.2) $M^c f(x) := {\rm Clos}\Bigl\{ \textstyle\sum_{I\in\mathcal{D}:\, I\ni x} \alpha_I \langle\!\langle f\rangle\!\rangle_I : \sum_{I\in\mathcal{D}:\, I\ni x} |\alpha_I| \le 1 \Bigr\};$

the sum here is understood as the Minkowski sum.

As for the simple maximal function M, the correct analogue then will be the absolute convex hull of the averages $\langle f\rangle_I$,

(1.3) $Mf(x) := {\rm Clos}\Bigl\{ \textstyle\sum_{I\in\mathcal{D}:\, I\ni x} \alpha_I \langle f\rangle_I : \sum_{I\in\mathcal{D}:\, I\ni x} |\alpha_I| \le 1 \Bigr\}.$

For a matrix weight W we define the simple weighted maximal function $M_W$,

(1.4) $M_W f(x) := {\rm Clos}\Bigl\{ \textstyle\sum_{I\in\mathcal{D}:\, I\ni x} \alpha_I \langle W\rangle_I^{-1} \langle Wf\rangle_I : \sum_{I\in\mathcal{D}:\, I\ni x} |\alpha_I| \le 1 \Bigr\},$

and its convex body analogue

(1.5) $M_W^c f(x) := {\rm Clos}\Bigl\{ \textstyle\sum_{I\in\mathcal{D}:\, I\ni x} \alpha_I \langle W\rangle_I^{-1} \langle\!\langle Wf\rangle\!\rangle_I : \sum_{I\in\mathcal{D}:\, I\ni x} |\alpha_I| \le 1 \Bigr\}.$

For simplicity let us assume that the weight W is invertible a.e. on $\mathbb{R}$, so the inverses are well defined; however, by considering the Moore–Penrose inverses, it is possible to interpret the above maximal functions in the general case.

One can also consider the following modification of the simple weighted maximal function (1.4), which will be one of the main objects of the current paper,

(1.6) $\widetilde{M}_W f(x) := {\rm Clos}\Bigl\{ \textstyle\sum_{I\in\mathcal{D}:\, I\ni x} \alpha_I \langle W^{-1}\rangle_I \langle Wf\rangle_I : \sum_{I\in\mathcal{D}:\, I\ni x} |\alpha_I| \le 1 \Bigr\}.$

This maximal function is morally “bigger” than $M_W$. It appears naturally when one tries to estimate Lerner’s operator in $L^2$ with matrix weight. Let us explain the part that $\widetilde{M}_W$ plays in such an estimate.

Consider a simple, usual Lerner sparse operator, but acting on vector functions:

$L_{\mathcal{S}} f(x) := \sum_{Q\in\mathcal{S}} \langle f\rangle_Q \mathbf{1}_Q(x),$

where $\mathcal{S}$ is a sparse family of dyadic cubes.

Let us see how the above maximal functions appear naturally in proving the estimate

$\| L_{\mathcal{S}} \|_{L^2(W)\to L^2(W)} \le c(d)\, [W]_{A_2}^{3/2}.$

Let f, g be in the unweighted vector $L^2$. To prove the theorem we need to estimate

$\bigl| \langle W^{1/2} g, L_{\mathcal{S}}(W^{-1/2} f)\rangle_{L^2(\mathbb{C}^d,\, dx)} \bigr|$

by $c(d)\, [W]_{A_2}^{3/2} \|f\|_2 \|g\|_2$. Just by writing $\langle W^{-1}\rangle_Q^{-1} \langle W^{-1}\rangle_Q = I_d$, we get

(1.7) $\bigl| \langle W^{1/2} g, L_{\mathcal{S}}(W^{-1/2} f)\rangle_{L^2(\mathbb{C}^d,\, dx)} \bigr| \lesssim \sum_Q \bigl| \bigl( \langle W^{-1}\rangle_Q^{-1} \langle W^{-1/2} f\rangle_Q,\ \langle W^{-1}\rangle_Q \langle W^{1/2} g\rangle_Q \bigr) \bigr|\, |E_Q|,$

where $E_Q$ denotes Q minus the next generation of cubes in $\mathcal{S}$. Notice that we have just naturally generated a disjoint family $E = \{E_Q\}$.

Now just change the variables: $WG := W^{1/2}g$, $W^{-1}F := W^{-1/2}f$, with $G\in L^2(W)$, $F\in L^2(W^{-1})$. Now we see that we need to estimate

$\sum_Q \bigl| \bigl( \langle W^{-1}\rangle_Q^{-1} \langle W^{-1}F\rangle_Q,\ \langle W^{-1}\rangle_Q \langle WG\rangle_Q \bigr) \bigr|\, |E_Q| \le C\, [W]_{A_2}^{3/2}\, \|F\|_{L^2(W^{-1})} \|G\|_{L^2(W)}.$

Now we just use the Hölder inequality together with the facts that the sets $E_Q$ are disjoint and that the sequence $\{|E_Q|\}$ forms a Carleson sequence. Then the desired estimate follows from the estimates of two maximal functions: $M_{W^{-1}}$ applied to F and $\widetilde{M}_W$ applied to G. In (1.9) one can see (exchanging $W^{-1}$ and W) the estimate of $M_{W^{-1}}$ applied to F; it does not depend at all on $[W]_{A_2}$. Our note is devoted to the proof of the lower bound for $\widetilde{M}_W$. The upper bound is an open question, but we suspect that it is $[W]_{A_2}^{3/2}$.

1.3 Weighted estimates of the maximal functions

We are interested in the weighted estimates of the vector maximal functions. Let us recall that the norm in the weighted space $L^2(W)$ is given by

$\|f\|_{L^2(W)}^2 := \int_{\mathbb{R}} \bigl( W(x) f(x), f(x) \bigr)_{\mathbb{R}^d}\, dx = \int_{\mathbb{R}} \|f(x)\|_{W(x)}^2\, dx,$

where for $e\in\mathbb{R}^d$

$\|e\|_{W(x)}^2 := \bigl( W(x) e, e \bigr)_{\mathbb{R}^d}.$

1.3.1 Norms of set-valued functions

So, how does one interpret the weighted norm of a function whose values are convex bodies? As was discussed in Ref. [15], there are a few possible interpretations. The first one is to define the norm of a function F whose values are subsets of $\mathbb{R}^d$ by

$\|F\|_{L^2(W)}^2 := \int_{\mathbb{R}} \|F(x)\|_{W(x)}^2\, dx,$

where the norm of a subset is the supremum of the norms of its elements; of course, there should be appropriate measurability of the function F. The other way is to consider all measurable sections of F, i.e. all measurable functions f with f(x) ∈ F(x) a.e., and take the supremum of $\|f\|_{L^2(W)}$.

For the convex body maximal function this was all discussed in Ref. [15], where it was explained that the two above definitions are equivalent.

1.3.2 Scalarizations of vector-valued maximal functions

Very often in the literature, instead of vector-valued functions one considers their scalarizations, i.e. functions with scalar values.

For example, to study estimates in $L^2(W)$ for the simple maximal function (1.3) one can define its scalarization $M_W : L^2(W) \to L^2$,

$M_W f(x) := \sup_{I\in\mathcal{D}:\, I\ni x} \| W(x)^{1/2} \langle f\rangle_I \|_{\mathbb{R}^d} = \sup_{I\in\mathcal{D}:\, I\ni x} \| \langle f\rangle_I \|_{W(x)}.$

It is easy to see that

$\| Mf(x) \|_{W(x)} = M_W f(x),$

so the estimates of M in $L^2(W)$ are equivalent to the estimates of $M_W : L^2(W) \to L^2$.

Similarly, the scalarization of the convex body maximal function (1.2) gives us a version of the Christ–Goldberg maximal function $M_W^{\rm CG} : L^2(W) \to L^2$, see Ref. [16],[2]

(1.8) $M_W^{\rm CG} f(x) := \sup_{I\in\mathcal{D}:\, I\ni x} |I|^{-1} \int_I \| W(x)^{1/2} f(y) \|\, dy.$

In this case we do not have equality, but the two-sided estimate

$d^{-1} M_W^{\rm CG} f(x) \le \| M^c f(x) \|_{W(x)} \le M_W^{\rm CG} f(x),$

see (2.1) in Ref. [15], so the weighted estimates of the convex body maximal function $M^c$ in $L^2(W)$ are equivalent (up to a dimensional constant) to the estimates of $M_W^{\rm CG} : L^2(W) \to L^2$.

The advantage of using scalarizations is the comfortable familiarity of working with scalar objects (instead of sets). The disadvantage is that we need to incorporate the space we are working in into the definition of the operator, while in the convex body version the operator is defined separately from the space, and we can study its boundedness in different weighted spaces.

1.4 Known results about weighted estimates

It is well known and not hard to show that the simple maximal function and the convex body maximal function are bounded in $L^2(W)$ if and only if the weight W satisfies the (dyadic) matrix $A_2$ condition (1.1). Moreover, one can show, see Ref. [17], that the norms of these maximal functions admit a bound linear in the (dyadic) $[W]_{A_2}$, i.e. that

$\| Mf \|_{L^2(W)} \le \| M^c f \|_{L^2(W)} \le C(d)\, [W]_{A_2^{\rm dy}} \| f \|_{L^2(W)}.$

Moreover, the above estimate is sharp, even for scalar weights. Namely, for an appropriate power weight and an appropriate test function the opposite inequality holds.

As for the case of weighted maximal functions, the scalar and vector results here are dramatically different.

First, in the scalar case

$M_w^{\rm HL} f(x) = M_w |f|(x),$

so there is no difference in the estimates for $M_w$ and $M_w^{\rm HL}$. And it is well known that the operators $M_w$, $M_w^{\rm HL}$ are bounded in $L^p(w)$ for 1 < p ≤ ∞.

But there is a dramatic difference in the vector case. Namely, it was shown in Ref. [18] that the simple weighted maximal function $M_W$ from (1.4) is a bounded operator in $L^2(W)$,

(1.9) $\| M_W f \|_{L^2(W)} \le C(d) \| f \|_{L^2(W)}$

for all weights W.

However, as was surprisingly shown in Ref. [15], the convex body maximal function $M_W^c$ defined by (1.5) is not bounded in $L^2(W)$!

It is an interesting question whether $M_W^c$ is bounded in $L^2(W)$ for the dyadic $A_2$ weights and what the sharp dependence on the $A_2$ characteristic $[W]_{A_2^{\rm dy}}$ is, but this will probably be a subject of a different paper.

In this paper we are interested in the weighted estimates of the modified simple weighted maximal function (1.6). Note that since for scalar dyadic $A_2$ weights

$\langle w^{-1}\rangle_I \le [w]_{A_2^{\rm dy}} \langle w\rangle_I^{-1} \qquad \forall I\in\mathcal{D},$

the estimate $\| M_w f \|_{L^2(w)} \le 2 \| f \|_{L^2(w)}$ trivially implies

$\| \widetilde{M}_w f \|_{L^2(w)} \le 2\, [w]_{A_2^{\rm dy}} \| f \|_{L^2(w)}.$

However, that is not the case for matrix weights!

2 Main results

To state the main results we need to give some definitions.

2.1 Linearization of matrix weights and sparse collections

Let $E := \{E_I\}_{I\in\mathcal{D}}$ be a collection of mutually disjoint subsets $E_I\subset I$, $I\in\mathcal{D}$. Define the linear operator $\widetilde{L}_{E,W}$ by

$\widetilde{L}_{E,W} f := \sum_{I\in\mathcal{D}} \langle W^{-1}\rangle_I \langle Wf\rangle_I\, \mathbf{1}_{E_I}.$

The following simple lemma is well known to specialists, for various maximal functions. For $\widetilde{M}_W$ it states:

Lemma 2.1.

For $f\in L^2(W)$

$\| \widetilde{M}_W f \|_{L^2(W)} = \sup_E \| \widetilde{L}_{E,W} f \|_{L^2(W)},$

where the supremum is taken over all collections $E := \{E_I\}_{I\in\mathcal{D}}$ of mutually disjoint subsets $E_I\subset I$.

This lemma “linearizes” the maximal function. Informally speaking, one can reason as follows: for almost every point x one finds the interval I where the supremum (which we informally treat as a maximum) is attained. Now we change the perspective, and for each I we consider those x that we have just paired with that I. Call these sets $E_I$. If, again informally, we assume that for each x only one I gives the maximum, then the sets $E_I$ will be disjoint. In fact, this informal reasoning can be made rigorous. The reader should be warned that the construction of the $E_I$ depends on the test function f. However, this is not a problem: the estimate of the maximal operator is still controlled by the estimates of all such linearizations.
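The argmax reasoning above can be sketched in a toy scalar model (our own finite discretization with illustrative data, not from the paper): for each cell one records the dyadic interval where the supremum defining $\widetilde{M}_w$ is attained; the resulting sets $E_I$ are automatically disjoint, and the linearized operator reproduces the maximal function pointwise:

```python
from collections import defaultdict

# values of w > 0 and f on 2^n equal dyadic cells of [0, 1); scalar toy model
w = [0.5, 4.0, 1.0, 8.0, 2.0, 0.25, 1.0, 16.0]
f = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0, 0.0, 1.0]
n = len(w)

def avg(vals, start, size):
    return sum(vals[start:start + size]) / size

# for each cell, pick the dyadic interval maximizing <w^{-1}>_I |<wf>_I|
best_val = [0.0] * n
best_int = [None] * n
size = 1
while size <= n:
    for start in range(0, n, size):
        v = avg([1 / x for x in w], start, size) \
            * abs(avg([wi * fi for wi, fi in zip(w, f)], start, size))
        for j in range(start, start + size):
            if v > best_val[j]:
                best_val[j], best_int[j] = v, (start, size)
    size *= 2

# E_I := cells whose maximum is attained on I -- disjoint by construction
E = defaultdict(list)
for j, I in enumerate(best_int):
    E[I].append(j)

# the linearized operator reproduces the maximal function on each cell
for I, cells in E.items():
    v = avg([1 / x for x in w], *I) \
        * abs(avg([wi * fi for wi, fi in zip(w, f)], *I))
    for j in cells:
        assert abs(v - best_val[j]) < 1e-12
```

In the continuous setting one also needs a measurable selection, but the mechanism of trading the supremum for a disjoint family is exactly this one.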

Definition 2.2.

Let 0 < η < 1. A collection $\mathcal{S}\subset\mathcal{D}$ of dyadic intervals is called η-sparse if there exist disjoint sets $E_I\subset I$, $I\in\mathcal{S}$, such that $|E_I| \ge \eta |I|$ for all $I\in\mathcal{S}$.

To get the upper bound for $\widetilde{M}_W$ it is sufficient to estimate the linear operators $\widetilde{L}_{E,W}$, which, unfortunately, we are not able to do at the moment. We are only able to estimate a sparse version of this operator. Namely, let us assume that for 0 < η < 1 the collection E is such that

(2.1) $|E_I| \ge \eta |I|$ whenever $E_I \ne \emptyset$.

This means that the collection $\mathcal{S} := \{ I\in\mathcal{D} : E_I \ne \emptyset \}$ is an η-sparse collection.

The first result is the following upper bound.

Theorem 2.3.

Let W be a d × d matrix weight satisfying the (dyadic) $A_2$ condition (1.1), and let the collection $E = \{E_I\}_{I\in\mathcal{D}}$ satisfy assumption (2.1) with 0 < η < 1. Then for the sparse operator $\widetilde{L}_{E,W}$

$\| \widetilde{L}_{E,W} f \|_{L^2(W)} \le \eta^{-1/2}\, C(d)\, [W]_{A_2^{\rm dy}}^{3/2} \| f \|_{L^2(W)},$

where C(d) = 4ed.

It turns out that unlike the scalar case, the above upper bound is sharp, and the exponent 3/2 cannot be improved.

Theorem 2.4.

There exists an absolute constant c > 0 such that for any sufficiently large Q > 1 there exist a 2 × 2 matrix weight W satisfying the (dyadic) $A_2$ condition with $[W]_{A_2^{\rm dy}} \le Q$, and a collection $E := \{E_I\}_{I\in\mathcal{D}}$ of mutually disjoint subsets $E_I\subset I$ satisfying (2.1) with η = 1/2, such that

(2.2) $\| \widetilde{L}_{E,W} f \|_{L^2(W)} \ge c\, Q^{3/2} \| f \|_{L^2(W)}$

for some non-zero function $f\in L^2(W)$.

By Lemma 2.1 this theorem implies that the same lower bound (2.2) holds for the maximal function M ̃ W . Finding an upper bound for M ̃ W is an interesting open problem.

3 Proof of the upper bound

To prove Theorem 2.3 we need the following result from Ref. [19].

Theorem 3.1.

(Culiuc–Treil, Ref. [19], Theorem 1.2). Let W be a d × d matrix-valued weight and let $A_I$, $I\in\mathcal{D}$, be a sequence of positive semidefinite d × d matrices. Then the following are equivalent:

  1. $\sum_{I\in\mathcal{D}} \| A_I^{1/2} \langle W^{1/2} f\rangle_I \|^2\, |I| \le A \| f \|_{L^2}^2$;

  2. $\sum_{I\in\mathcal{D}} \| A_I^{1/2} \langle W f\rangle_I \|^2\, |I| \le A \| f \|_{L^2(W)}^2$;

  3. $\sum_{I\in\mathcal{D}:\, I\subset I_0} \langle W\rangle_I A_I \langle W\rangle_I\, |I| \le B \langle W\rangle_{I_0} |I_0|$ for all $I_0\in\mathcal{D}$.

Moreover, the best constants A and B satisfy $B \le A \le CB$, where $C = C(d) = 4e^2 d^2$.

Denote $\mathcal{S} := \{ I\in\mathcal{D} : E_I \ne \emptyset \}$. For $f\in L^2(W)$

$\| \widetilde{L}_{E,W} f \|_{L^2(W)}^2 = \sum_{I\in\mathcal{S}} \| A_I^{1/2} \langle Wf\rangle_I \|^2\, |I|,$

where

$A_I = \langle W^{-1}\rangle_I \langle \mathbf{1}_{E_I} W\rangle_I \langle W^{-1}\rangle_I.$

So, to prove the theorem, we need to verify condition (iii) of Theorem 3.1. Since trivially $\langle \mathbf{1}_{E_I} W\rangle_I \le \langle W\rangle_I$, and $|I| \le \eta^{-1} |E_I|$ for $I\in\mathcal{S}$, we can write

$\sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I A_I \langle W\rangle_I\, |I| \le \eta^{-1} \sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I \langle W^{-1}\rangle_I \langle W\rangle_I \langle W^{-1}\rangle_I \langle W\rangle_I\, |E_I| = \eta^{-1} \sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I^{1/2} B_I^2 \langle W\rangle_I^{1/2}\, |E_I|,$

where $B_I := \langle W\rangle_I^{1/2} \langle W^{-1}\rangle_I \langle W\rangle_I^{1/2}$. By the $A_2$ condition (1.1) we have $\| B_I \| \le [W]_{A_2}$, so

$\langle W\rangle_I^{1/2} B_I^2 \langle W\rangle_I^{1/2} \le [W]_{A_2}^2 \langle W\rangle_I.$

Thus

$\sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I A_I \langle W\rangle_I\, |I| \le \eta^{-1} [W]_{A_2}^2 \sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I\, |E_I|.$

We need the following simple and probably well-known lemma that will be proved shortly.

Lemma 3.2.

Let $E = \{E_I\}_{I\in\mathcal{D}}$ be any collection of disjoint sets $E_I\subset I$, and let a weight W satisfy the (dyadic) matrix $A_2$ condition (1.1). Then for any $I_0\in\mathcal{D}$

(3.1) $\sum_{I\in\mathcal{D}:\, I\subset I_0} \langle W\rangle_I\, |E_I| \le 4\, [W]_{A_2^{\rm dy}} \langle W\rangle_{I_0} |I_0|.$

Applying this lemma we get that

$\sum_{I\in\mathcal{S}:\, I\subset I_0} \langle W\rangle_I A_I \langle W\rangle_I\, |I| \le 4\eta^{-1}\, [W]_{A_2^{\rm dy}}^3 \langle W\rangle_{I_0} |I_0|.$

Theorem 3.1 immediately implies Theorem 2.3. So, Theorem 2.3 is proved modulo Lemma 3.2. □

3.1 Proof of Lemma 3.2

Let us recall some definitions.

Definition 3.3.

A sequence $\{\alpha_I\}_{I\in\mathcal{D}}$, $\alpha_I \ge 0$, is called B-Carleson if for any $I_0\in\mathcal{D}$

$\sum_{I\in\mathcal{D}:\, I\subset I_0} \alpha_I \le B |I_0|.$

Clearly, the sequence $\alpha_I := |E_I|$ is 1-Carleson due to the disjointness of the sets $E_I$.

We need the following simple fact.

Lemma 3.4.

(Dyadic Carleson Embedding). Let $\{\alpha_I\}_{I\in\mathcal{D}}$ be a B-Carleson sequence. Then for any $f\in L^2$,

$\sum_{I\in\mathcal{D}} \alpha_I |\langle f\rangle_I|^2 \le 4B \| f \|_{L^2}^2.$

This result is well known, at least with some constant instead of 4. The result with constant 4 can be found in numerous places, see for example [20]. The constant 4 is in fact the best possible; the proof also can be found in numerous places, for example in Ref. [21].
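A numerical illustration of the embedding with the Carleson sequence $\alpha_I = |E_I|$ coming from disjoint sets (our own randomized sketch, assuming Python; the sets, the function, and the grid are illustrative):

```python
import random
random.seed(1)

n = 64                        # cells of the finest dyadic level on [0, 1)
cell = 1.0 / n

# disjoint sets E_I: give each dyadic interval I a random subset of its
# cells, carving them out of a global pool so the E_I stay disjoint
free = set(range(n))
alpha = {}                    # alpha_I = |E_I|, a 1-Carleson sequence
size = n
while size >= 1:
    for start in range(0, n, size):
        cells = [j for j in range(start, start + size) if j in free]
        take = random.sample(cells, k=min(len(cells), size // 4 + 1))
        free -= set(take)
        alpha[(start, size)] = len(take) * cell
    size //= 2

f = [random.uniform(-1, 1) for _ in range(n)]
lhs = sum(a * (sum(f[s:s + sz]) / sz) ** 2 for (s, sz), a in alpha.items())
l2sq = sum(v * v for v in f) * cell
assert lhs <= 4 * l2sq + 1e-12        # dyadic Carleson embedding with B = 1
```

The pooled carving guarantees the Carleson property: the $\alpha_I$ with $I\subset I_0$ sum to at most $|I_0|$ because the corresponding $E_I$ are disjoint subsets of $I_0$.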

To prove Lemma 3.2 we first notice that we only need to prove it for scalar (dyadic) $A_2$ weights. Indeed, if one takes $e\in\mathbb{R}^d$ and considers the scalar weights $w = w_e$, $w_e(x) := (W(x)e, e)_{\mathbb{R}^d}$, it is not hard to see that $[w]_{A_2^{\rm dy}} \le [W]_{A_2^{\rm dy}}$.

This fact immediately follows from the interpretation of the A 2 condition via the norms of averaging operators; a “direct” proof by playing with the Cauchy–Schwarz inequality is also not hard.

To prove Lemma 3.2 for scalar (dyadic) $A_2$ weights, we first notice that for any $I\in\mathcal{D}$

$[w]_{A_2^{\rm dy}} \ge \langle w\rangle_I \langle w^{-1}\rangle_I = \bigl( \langle w\rangle_I e^{-\langle \ln w\rangle_I} \bigr) \bigl( \langle w^{-1}\rangle_I e^{\langle \ln w\rangle_I} \bigr).$

By the Jensen inequality both factors on the right-hand side are at least 1, therefore

$\langle w\rangle_I e^{-\langle \ln w\rangle_I} \le [w]_{A_2^{\rm dy}}.$

Thus

$\langle w\rangle_I \le [w]_{A_2^{\rm dy}} e^{\langle \ln w\rangle_I} \le [w]_{A_2^{\rm dy}} \langle w^{1/2}\rangle_I^2;$

in the last inequality we have used the Jensen inequality again. Then, applying Lemma 3.4 with $f = w^{1/2} \mathbf{1}_{I_0}$ and the 1-Carleson sequence $\{|E_I|\}_{I\in\mathcal{D}}$, we get

$\sum_{I\in\mathcal{D}:\, I\subset I_0} \langle w\rangle_I\, |E_I| \le [w]_{A_2^{\rm dy}} \sum_{I\in\mathcal{D}:\, I\subset I_0} \langle w^{1/2}\rangle_I^2\, |E_I| \le 4\, [w]_{A_2^{\rm dy}} \| w^{1/2} \mathbf{1}_{I_0} \|_{L^2}^2 = 4\, [w]_{A_2^{\rm dy}} \langle w\rangle_{I_0} |I_0|,$

which is exactly what we need. □

The above proof is pretty standard, and it is essentially the proof of Lemma 4.1 from Ref. [3], where the localized maximal function of the weight w was estimated. The connection between Carleson embeddings and estimates of maximal functions is well known, so specialists can easily see that the two proofs are the same.

4 Proof of the lower bound (Theorem 2.4)

Let us describe the construction of the weight W. For $I\in\mathcal{D}$ we denote by $I_+$ and $I_-$ the right and left halves of I respectively.

We start with the interval $I^0 := [0, 1)$, and define inductively $I^{k+1} := I^k_+$. The collection of the intervals $I^k$, $k\in\mathbb{Z}_+$, will be our sparse collection $\mathcal{S}$, and for $I = I^k$ we set $E_I := I^k_-$. Trivially, for all $I\in\mathcal{S}$

$|E_I| = |I|/2.$

4.1 The construction

We fix a large number Q (in what follows $[W]_{A_2^{\rm dy}}$ will be comparable to Q) and define

$c_Q := \Bigl( \dfrac{2Q}{Q+1} \Bigr)^{1/2}, \qquad T_Q := \begin{pmatrix} c_Q & 0 \\ 0 & 1 \end{pmatrix}, \qquad M_Q := \begin{pmatrix} 1 & \sqrt{Q} \\ \sqrt{Q} & Q \end{pmatrix}.$
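These matrices can be checked numerically (a sketch of ours, assuming Python with numpy; the square roots in $c_Q$ and $M_Q$ are our reading of the display, chosen to be consistent with (4.4) and the computations of Section 4.3):

```python
import numpy as np

def construction_matrices(Q):
    # the matrices of Section 4.1
    cQ = np.sqrt(2 * Q / (Q + 1))
    TQ = np.diag([cQ, 1.0])
    MQ = np.array([[1.0, np.sqrt(Q)], [np.sqrt(Q), Q]])   # rank one, PSD
    return cQ, TQ, MQ

Q = 100.0
cQ, TQ, MQ = construction_matrices(Q)

# the geometric series sum_k 2^{-k-1} T_Q^{2k} converges: c_Q^2 < 2
assert cQ ** 2 < 2
# M_Q is positive semidefinite of rank one
assert abs(np.linalg.det(MQ)) < 1e-9 and np.trace(MQ) > 0
# I + M_Q <= 3 diag(1, Q), the estimate used in (4.4)
gap = 3 * np.diag([1.0, Q]) - (np.eye(2) + MQ)
assert np.linalg.eigvalsh(gap).min() >= -1e-9
```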

We will inductively define the weight W on the intervals $I^k_-$, which gives us a weight on the whole interval $I^0$; we can then extend it periodically to the whole line $\mathbb{R}$.

Now we consider on $I^0_-$ a matrix weight W such that

  1. $\langle W\rangle_{I^0_-} = I$,

  2. $\langle W^{-1}\rangle_{I^0_-} = I + M_Q$,

  3. W is a constant matrix on each of the children of $I^0_-$.

This is easily achievable: the system

$\dfrac{X+Y}{2} = I, \qquad \dfrac{X^{-1}+Y^{-1}}{2} = I + M_Q$

is solvable in positive matrices because $I + M_Q \ge I$.
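One explicit way to solve this system (our own ansatz, not taken from the paper) is to look for X = I + S, Y = I − S with S symmetric and 0 ≤ S < I; then $(X^{-1}+Y^{-1})/2 = (I-S^2)^{-1}$, so it suffices to take $S = \bigl( I - (I+M_Q)^{-1} \bigr)^{1/2}$:

```python
import numpy as np

def sqrtm_psd(a):
    # symmetric PSD square root via eigendecomposition
    lam, u = np.linalg.eigh(a)
    return u @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ u.T

def solve_children(MQ):
    """Solve (X+Y)/2 = I, (X^{-1}+Y^{-1})/2 = I + M_Q with the ansatz
    X = I + S, Y = I - S; then (X^{-1}+Y^{-1})/2 = (I - S^2)^{-1},
    so S^2 = I - (I + M_Q)^{-1}."""
    S2 = np.eye(2) - np.linalg.inv(np.eye(2) + MQ)
    S = sqrtm_psd(S2)                 # 0 <= S < I since 0 <= S2 < I
    return np.eye(2) + S, np.eye(2) - S

Q = 50.0
MQ = np.array([[1.0, np.sqrt(Q)], [np.sqrt(Q), Q]])
X, Y = solve_children(MQ)

assert np.allclose((X + Y) / 2, np.eye(2))
assert np.allclose((np.linalg.inv(X) + np.linalg.inv(Y)) / 2, np.eye(2) + MQ)
assert np.linalg.eigvalsh(X).min() > 0 and np.linalg.eigvalsh(Y).min() > 0
```

The identity behind the ansatz is $(I+S)^{-1} + (I-S)^{-1} = 2\bigl( (I-S)(I+S) \bigr)^{-1} = 2(I-S^2)^{-1}$, which holds because S commutes with itself.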

Remark 4.1.

This choice of $\langle W^{-1}\rangle_{I^0_-}$ and $\langle W\rangle_{I^0_-}$ was suggested by Fedor Nazarov.

To extend W to $I^1_-$, consider the affine orientation-preserving bijection $L : I^0_+ = I^1 \to I^0$, L(x) = 2x − 1. Note that L(1) = 1, and that L maps $I^1_-$ bijectively onto $I^0_-$.

Thus for $x\in I^1_-$ we have $L(x)\in I^0_-$, so we can define

$W(x) = T_Q W(L(x)) T_Q, \qquad x\in I^1_-.$

Let us also notice that L maps $I^{k+1}_-$ bijectively onto $I^k_-$. Therefore, if we know W on $I^k_-$, we can define it on $I^{k+1}_-$ using the same formula

(4.1) $W(x) = T_Q W(L(x)) T_Q, \qquad x\in I^{k+1}_-.$

So, starting from the weight W on $I^0_-$, we define it inductively using formula (4.1) on $I^1_-$, $I^2_-$, etc.; since the collection $\{I^k_-\}_{k\ge 0}$ is a disjoint cover of $I^0$, everything is well defined and in the limit we get a weight W on $I^0$.

Note that by the construction we have for this weight

(4.2) $W(x) = T_Q W(L(x)) T_Q, \qquad x\in I^0_+ = I^1.$

4.2 Why is the weight in $A_2^{\rm dy}$?

We want to show that $[W]_{A_2^{\rm dy}} \le AQ$ with some absolute constant A. Note that the inequality

$\| \langle W\rangle_I^{1/2} \langle W^{-1}\rangle_I^{1/2} \|^2 \le A$

is equivalent to

(4.3) $\langle W^{-1}\rangle_I \le A \langle W\rangle_I^{-1} \quad\Longleftrightarrow\quad \langle W\rangle_I \le A \langle W^{-1}\rangle_I^{-1}.$

Due to the recursive relation (4.2) we only need to check (4.3) on two intervals, $I = I^0$ and $I = I^0_-$.

Indeed, one can see that the estimates (4.3) do not change if we replace the weight W by its rescaling $S^* W S$ with an invertible matrix S. The recursive relation (4.2) means that for any interval $I\in\mathcal{D}$, $I\subset I^0_+$, the weights on I and L(I) are connected via such a matrix rescaling (and a rescaling of the argument). So, the estimates (4.3) hold with the same constant on both I and L(I), and therefore on both I and its kth iteration

$L^k(I) = \underbrace{L\circ L\circ\cdots\circ L}_{k\ \text{times}}(I)$

whenever $L^k(I)\subset I^0$.

Let $I\in\mathcal{D}$, $I\subset I^0$, $I\ne I^0$, $I\ne I^0_-$; we want to show that verifying (4.3) on I reduces to verifying it either on $I^0$ or on $I^0_-$.

Let $I\subsetneq I^0_-$. Since the weight W is constant on the children of the interval $I^0_-$, the estimate (4.3) with A = 1 trivially holds for such I.

Let now $I\in\mathcal{D}$, $I\subset I^0_+$, and let k = k(I) be the largest integer such that $L^k(I)\subset I^0$. There are 3 possibilities:

  1. $L^k(I) = I^0$: in this case we need to verify (4.3) for $I^0$;

  2. $L^k(I) = I^0_-$: in this case we need to verify (4.3) for $I^0_-$;

  3. $L^k(I)\subsetneq I^0_-$: in this case, by the previous observation, (4.3) trivially holds with A = 1.

So, indeed we only need to check (4.3) for $I = I^0$ and $I = I^0_-$.

The estimate for $I^0_-$ is easier:

(4.4) $\langle W^{-1}\rangle_{I^0_-} = I + M_Q \le I + 2\begin{pmatrix} 1 & 0 \\ 0 & Q \end{pmatrix} \le 3\begin{pmatrix} 1 & 0 \\ 0 & Q \end{pmatrix},$

so, since $\langle W\rangle_{I^0_-} = I$, we conclude that

$\langle W^{-1}\rangle_{I^0_-} \le 3Q\, I = 3Q\, \langle W\rangle_{I^0_-}^{-1},$

i.e. (4.3) holds for $I^0_-$ with the constant 3Q.

The case of $I^0$ is just a bit more complicated. First, on $I^k_-$ we have, by (4.1), that

$\langle W\rangle_{I^k_-} = T_Q^k\, I\, T_Q^k = T_Q^{2k},$

so, summing the geometric series, we get

(4.5) $\langle W\rangle_{I^0} = \sum_{k\ge 0} 2^{-k-1} \langle W\rangle_{I^k_-} = \sum_{k\ge 0} 2^{-k-1} T_Q^{2k} = \begin{pmatrix} (Q+1)/2 & 0 \\ 0 & 1 \end{pmatrix} \le \begin{pmatrix} Q & 0 \\ 0 & 1 \end{pmatrix}.$

On the other hand, by the same relation (4.1) we have

$\langle W^{-1}\rangle_{I^k_-} = T_Q^{-k} \langle W^{-1}\rangle_{I^0_-} T_Q^{-k}.$

Therefore

(4.6) $\langle W^{-1}\rangle_{I^0} = \sum_{k\ge 0} 2^{-k-1} \langle W^{-1}\rangle_{I^k_-} = \sum_{k\ge 0} 2^{-k-1} T_Q^{-k} \langle W^{-1}\rangle_{I^0_-} T_Q^{-k} \le 3 \sum_{k\ge 0} 2^{-k-1} T_Q^{-k} \begin{pmatrix} 1 & 0 \\ 0 & Q \end{pmatrix} T_Q^{-k};$

in the inequality here we used the estimate (4.4). The matrices $T_Q^{-k}$ are contractive diagonal matrices, so we can continue the estimate, using (4.5) for the last inequality:

$\langle W^{-1}\rangle_{I^0} \le 3 \sum_{k\ge 0} 2^{-k-1} \begin{pmatrix} 1 & 0 \\ 0 & Q \end{pmatrix} = 3 \begin{pmatrix} 1 & 0 \\ 0 & Q \end{pmatrix} = 3Q \begin{pmatrix} Q & 0 \\ 0 & 1 \end{pmatrix}^{-1} \le 3Q\, \langle W\rangle_{I^0}^{-1}.$

So, we have proved that (4.3) holds for $I = I^0$ and $I = I^0_-$. □
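The two estimates just proved can be confirmed numerically by truncating the series (4.5) and (4.6) (our own sketch, assuming Python with numpy; the terms are regrouped as powers of bounded matrices so that nothing overflows):

```python
import numpy as np

Q = 50.0
cQ = np.sqrt(2 * Q / (Q + 1))
TQ = np.diag([cQ, 1.0])
MQ = np.array([[1.0, np.sqrt(Q)], [np.sqrt(Q), Q]])
Winv_half = np.eye(2) + MQ            # <W^{-1}> on I^0_-, as in (4.4)

# truncated series for <W>_{I^0} and <W^{-1}>_{I^0}
A = TQ @ TQ / 2.0                     # spectral radius Q/(Q+1) < 1
B = np.linalg.inv(TQ) / np.sqrt(2.0)  # so 2^{-k/2} T_Q^{-k} stays bounded
W_avg = np.zeros((2, 2)); Winv_avg = np.zeros((2, 2))
Ak = np.eye(2); Bk = np.eye(2)
for _ in range(3000):
    W_avg += 0.5 * Ak                       # 2^{-k-1} T_Q^{2k}
    Winv_avg += 0.5 * Bk @ Winv_half @ Bk   # 2^{-k-1} T_Q^{-k}(I+M_Q)T_Q^{-k}
    Ak = Ak @ A; Bk = Bk @ B

# formula (4.5): <W>_{I^0} = diag((Q+1)/2, 1)
assert np.allclose(W_avg, np.diag([(Q + 1) / 2, 1.0]))
# the A_2 test (4.3) at I^0 with constant 3Q: <W^{-1}>_{I^0} <= 3Q <W>_{I^0}^{-1}
gap = 3 * Q * np.linalg.inv(W_avg) - Winv_avg
assert np.linalg.eigvalsh(gap).min() >= -1e-8
```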

4.3 Lower bounds

Take $e = (1, 0)^T\in\mathbb{R}^2$. We will show that the lower bound (2.2) holds for $f := \mathbf{1}_{I^0}\, e$.

The main part of the estimate is the following lemma.

Lemma 4.2.

For all $I\in\mathcal{S}$

(4.7) $\dfrac{1}{16} Q^3 \bigl( \langle W\rangle_{I_-} e, e \bigr)_{\mathbb{R}^2} \le \bigl( \langle W\rangle_I \langle W^{-1}\rangle_I \langle W\rangle_{I_-} \langle W^{-1}\rangle_I \langle W\rangle_I\, e, e \bigr)_{\mathbb{R}^2}.$

Proof.

First of all notice that we only need to prove (4.7) for $I = I^0$. Indeed, since

$\langle W\rangle_{I^k} = T_Q^k \langle W\rangle_{I^0} T_Q^k, \qquad \langle W\rangle_{I^k_-} = T_Q^k \langle W\rangle_{I^0_-} T_Q^k, \qquad \langle W^{-1}\rangle_{I^k} = T_Q^{-k} \langle W^{-1}\rangle_{I^0} T_Q^{-k},$

we can see that the estimate (4.7) for $I^k$ is the same as the estimate for $I^0$ with $e = (1, 0)^T$ replaced by $T_Q^k e = (c_Q^k, 0)^T = c_Q^k e$. But by homogeneity, these inequalities are equivalent.

So, let us prove the estimate for $I = I^0$. Since $\langle W\rangle_{I^0_-} = I$, we conclude that the left-hand side of (4.7) is $Q^3/16$, and that the right-hand side of (4.7) is

$\| \langle W^{-1}\rangle_{I^0} \langle W\rangle_{I^0} e \|^2.$

Using (4.5) we get that $\langle W\rangle_{I^0} e = \frac{Q+1}{2}\, e$, so the right-hand side of (4.7) is

$\Bigl( \dfrac{Q+1}{2} \Bigr)^2 \| \langle W^{-1}\rangle_{I^0} e \|^2.$

Using formula (4.6) for $\langle W^{-1}\rangle_{I^0}$ and noticing that all the matrices in (4.6), as well as the vector e, have non-negative entries, we can estimate (keeping only the term k = 0)

$\| \langle W^{-1}\rangle_{I^0} e \| \ge \tfrac{1}{2} \| \langle W^{-1}\rangle_{I^0_-} e \| = \tfrac{1}{2} \| (I + M_Q) e \| \ge \tfrac{1}{2}\sqrt{Q}.$

Gathering everything together we get that the right-hand side of (4.7) is bounded below by

$\Bigl( \dfrac{Q+1}{2} \Bigr)^2 \dfrac{Q}{4} \ge \dfrac{Q^3}{16},$

which is exactly the conclusion of the lemma. □

The lower bound for $\widetilde{L}_{E,W}$ is now immediate: for $f = \mathbf{1}_{I^0} e$, using the estimate (4.7), we get

$\| \widetilde{L}_{E,W} f \|_{L^2(W)}^2 = \sum_{k\ge 0} \| \langle W\rangle_{I^k_-}^{1/2} \langle W^{-1}\rangle_{I^k} \langle W\rangle_{I^k} e \|^2\, |I^k_-| \ge \sum_{k\ge 0} \dfrac{Q^3}{16} \bigl( \langle W\rangle_{I^k_-} e, e \bigr)_{\mathbb{R}^2} |I^k_-| = \dfrac{Q^3}{16} \int_{I^0} \bigl( W(x) e, e \bigr)_{\mathbb{R}^2}\, dx = \dfrac{Q^3}{16} \| f \|_{L^2(W)}^2;$

here in the second equality we used the fact that the collection $\{I^k_-\}_{k\ge 0}$ is a disjoint cover of the interval $I^0$.
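As a final numerical check of the lower bound (our own sketch, assuming Python with numpy; the reduction of the sum over k to a single vector norm is our simplification, obtained by conjugating each term by $T_Q^k$), the ratio $\|\widetilde{L}_{E,W} f\|/\|f\|$ indeed grows at least as $Q^{3/2}/4$:

```python
import numpy as np

def lower_bound_ratio(Q, K=200):
    # ||L~_{E,W} f|| / ||f|| for f = 1_{I^0} e, via the averages of W
    cQ = np.sqrt(2 * Q / (Q + 1))
    Winv_half = np.eye(2) + np.array([[1.0, np.sqrt(Q)], [np.sqrt(Q), Q]])
    B = np.diag([1 / cQ, 1.0]) / np.sqrt(2.0)  # 2^{-k/2} T_Q^{-k} factors, bounded
    Winv_avg = np.zeros((2, 2)); Bk = np.eye(2)
    for _ in range(K):                         # truncated series (4.6)
        Winv_avg += 0.5 * Bk @ Winv_half @ Bk
        Bk = Bk @ B
    e = np.array([1.0, 0.0])
    W_avg = np.diag([(Q + 1) / 2, 1.0])        # formula (4.5)
    # the sum over k collapses to a single vector norm after conjugation
    return np.linalg.norm(Winv_avg @ W_avg @ e)

for Q in (10.0, 100.0, 1000.0):
    assert lower_bound_ratio(Q) >= Q ** 1.5 / 4   # growth at least Q^{3/2}/4
```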


Corresponding author: Alexander Volberg, Department of Mathematics, Michigan State University, East Lansing, MI, 48823, USA, E-mail: 

ST is partially supported by the NSF grant DMS-2154321.

AV is partially supported by the NSF grants DMS-1900286, DMS-2154402 and by the Hausdorff Center for Mathematics, Bonn, Germany.


Acknowledgments

The authors sincerely thank the anonymous referees for their critical and detailed review, which significantly improved the manuscript.

  1. Research ethics: Not applicable.

  2. Informed consent: All authors consent to participate in this work.

  3. Author contributions: All authors contributed to the study conception and design. All authors performed material preparation, and analysis. The authors read and approved the final manuscript.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors declare no conflict of interest.

  6. Research funding: S. Treil is partially supported by the NSF grant DMS-2154321. A. Volberg is partially supported by the NSF grant DMS-2154402 and by the Hausdorff Center for Mathematics, Bonn, Germany.

  7. Data availability: Not applicable.

References

[1] S. Treil and A. Volberg, “Wavelets and the angle between past and future,” J. Funct. Anal., vol. 143, no. 2, pp. 269–308, 1997. https://doi.org/10.1006/jfan.1996.2986.

[2] K. Bickel, S. Petermichl, and B. Wick, “Bounds for the Hilbert transform with matrix A2 weights,” J. Funct. Anal., vol. 270, no. 5, pp. 1719–1743, 2016. https://doi.org/10.1016/j.jfa.2015.12.006.

[3] F. Nazarov, S. Petermichl, S. Treil, and A. Volberg, “Convex body domination and weighted estimates with matrix weights,” Adv. Math., vol. 318, no. 1, pp. 279–306, 2017. https://doi.org/10.1016/j.aim.2017.08.001.

[4] K. Domelevo, S. Petermichl, S. Treil, and A. Volberg, “The matrix A2 conjecture fails, i.e. 3/2 > 1,” arXiv:2402.06961, pp. 1–44, 2024.

[5] T. Hytönen, “The sharp weighted bound for general Calderón–Zygmund operators,” Ann. Math., vol. 175, no. 3, pp. 1473–1506, 2012. https://doi.org/10.4007/annals.2012.175.3.9.

[6] S. M. Buckley, “Estimates for operator norms on weighted spaces and reverse Jensen inequalities,” Trans. Am. Math. Soc., vol. 340, no. 1, pp. 253–272, 1993. https://doi.org/10.1090/s0002-9947-1993-1124164-0.

[7] J. Wittwer, “A sharp estimate on the norm of the martingale transform,” Math. Res. Lett., vol. 7, no. 1, pp. 1–12, 2000. https://doi.org/10.4310/mrl.2000.v7.n1.a1.

[8] S. Petermichl and A. Volberg, “Heating of the Ahlfors–Beurling operator: weakly quasiregular maps on the plane are quasiregular,” Duke Math. J., vol. 112, no. 2, pp. 281–305, 2002. https://doi.org/10.1215/s0012-9074-02-11223-x.

[9] S. Petermichl, “The sharp bound for the Hilbert transform on weighted Lebesgue spaces in terms of the classical Ap characteristic,” Am. J. Math., vol. 129, no. 5, pp. 1355–1375, 2007. https://doi.org/10.1353/ajm.2007.0036.

[10] A. Lerner, “A simple proof of the A2 conjecture,” Int. Math. Res. Not., vol. 2013, no. 14, pp. 3159–3170, 2013. https://doi.org/10.1093/imrn/rns145.

[11] D. Cruz-Uribe, J. M. Martell, and C. Pérez, “Weights, extrapolation and the theory of Rubio de Francia,” in Operator Theory: Advances and Applications, vol. 215, Basel, Birkhäuser/Springer, 2011. https://doi.org/10.1007/978-3-0348-0072-3.

[12] J. L. Rubio de Francia, “Factorization theory and Ap weights,” Am. J. Math., vol. 106, no. 3, pp. 533–547, 1984. https://doi.org/10.2307/2374284.

[13] M. Bownik and D. Cruz-Uribe, “Extrapolation and factorization of matrix weights,” Preprint, arXiv:2210.09443, 2022.

[14] D. Cruz-Uribe, J. Isralowitz, and K. Moen, “Two weight bump conditions for matrix weights,” Integr. Equ. Oper. Theory, vol. 90, no. 3, 2018, Art. no. 36. https://doi.org/10.1007/s00020-018-2455-5.

[15] F. Nazarov, S. Petermichl, K. A. Škreb, and S. Treil, “The matrix-weighted dyadic convex body maximal operator is not bounded,” Adv. Math., vol. 410, no. 2, p. 108711, 2022. https://doi.org/10.1016/j.aim.2022.108711.

[16] M. Christ and M. Goldberg, “Vector A2 weights and a Hardy–Littlewood maximal function,” Trans. Am. Math. Soc., vol. 353, no. 5, pp. 1995–2002, 2001. https://doi.org/10.1090/s0002-9947-01-02759-3.

[17] J. Isralowitz, H.-K. Kwon, and S. Pott, “Matrix weighted norm inequalities for commutators and paraproducts with matrix symbols,” J. Lond. Math. Soc., vol. 96, no. 2, pp. 243–270, 2017. https://doi.org/10.1112/jlms.12053.

[18] S. Petermichl, S. Pott, and M. C. Reguera-Rodriguez, “A matrix weighted bilinear Carleson lemma and maximal function,” Anal. Math. Phys., vol. 9, no. 3, pp. 1163–1180, 2019. https://doi.org/10.1007/s13324-019-00331-9.

[19] A. Culiuc and S. Treil, “The Carleson embedding theorem with matrix weights,” Int. Math. Res. Not., vol. 2019, no. 11, pp. 3301–3312, 2019. https://doi.org/10.1093/imrn/rnx222.

[20] F. Nazarov and S. Treil, “The hunt for the Bellman function: applications to estimates of singular integral operators and to other classical problems in harmonic analysis,” St. Petersburg Math. J., vol. 8, no. 5, pp. 32–162, 1996.

[21] F. Nazarov, S. Treil, and A. Volberg, “Bellman function in stochastic control and harmonic analysis,” in Systems, Approximation, Singular Integral Operators, and Related Topics (Bordeaux, 2000), vol. 129 of Oper. Theory Adv. Appl., Basel, Birkhäuser, 2001, pp. 393–423. https://doi.org/10.1007/978-3-0348-8362-7_16.

Received: 2024-08-19
Accepted: 2025-01-30
Published Online: 2025-02-26

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
