Article Open Access

A non-geodesic analogue of Reshetnyak’s majorization theorem

  • Tetsu Toyoda
Published/Copyright: March 28, 2023

Abstract

For any real number κ and any integer n ≥ 4, the Cycl_n(κ) condition introduced by Gromov (CAT(κ)-spaces: construction and concentration, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 280 (2001), (Geom. i Topol. 7), 100–140, 299–300) is a necessary condition for a metric space to admit an isometric embedding into a CAT(κ) space. For geodesic metric spaces, satisfying the Cycl_4(κ) condition is equivalent to being CAT(κ). In this article, we prove an analogue of Reshetnyak’s majorization theorem for (possibly non-geodesic) metric spaces that satisfy the Cycl_4(κ) condition. It follows from our result that for general metric spaces, the Cycl_4(κ) condition implies the Cycl_n(κ) conditions for all integers n ≥ 5.

1 Introduction

Finding a characterization of those metric spaces that admit an isometric embedding into a CAT(κ) space is a longstanding open problem posed by Gromov (see [3, Section 1.4], [9, Section 1.19+] and [10, §15]). On the other hand, various conditions on a general metric space are known to become equivalent to being CAT(κ) under the assumption that the metric space is geodesic. The Cycl_4(κ) condition defined by Gromov [10] is one such condition (see [10, §15]). In this article, we prove an analogue of Reshetnyak’s majorization theorem [14] for (possibly non-geodesic) metric spaces that satisfy the Cycl_4(κ) condition. Our result shows that metric spaces with the Cycl_4(κ) condition have more properties in common with CAT(κ) spaces than expected. In particular, it follows from our result that every metric space with the Cycl_4(κ) condition satisfies the Cycl_n(κ) conditions for all integers n ≥ 5, although Gromov stated that this is apparently not true (see Subsection 1.1).

For a real number κ, we denote by M²_κ the complete, simply connected, two-dimensional Riemannian manifold of constant Gaussian curvature κ, and by d_κ the distance function on M²_κ. Let D_κ be the diameter of M²_κ. Thus, we have

D_κ = π/√κ if κ > 0,   and   D_κ = ∞ if κ ≤ 0.

For a positive integer n and an integer m, we denote by [m]_n the element of ℤ/nℤ represented by m. We recall the definition of the Cycl_n(κ) conditions introduced in [10].

Definition 1.1

Fix κ ∈ ℝ and an integer n ≥ 4. Let (X, d_X) be a metric space. We say that X is a Cycl_n(κ) space, or that X satisfies the Cycl_n(κ) condition, if for any map f : ℤ/nℤ → X with

(1.1)   Σ_{i ∈ ℤ/nℤ} d_X(f(i), f(i + [1]_n)) < 2D_κ,

there exists a map g : ℤ/nℤ → M²_κ such that

d_κ(g(i), g(i + [1]_n)) ≤ d_X(f(i), f(i + [1]_n)),   d_κ(g(i), g(j)) ≥ d_X(f(i), f(j))

for any i, j ∈ ℤ/nℤ with j ≠ i + [1]_n and i ≠ j + [1]_n.

Remark 1.2

The original definition of Cycl_n(κ) spaces in [10, §7] requires the existence of a map g : ℤ/nℤ → M²_κ′ for some κ′ ≤ κ such that

d_κ′(g(i), g(i + [1]_n)) ≤ d_X(f(i), f(i + [1]_n)),   d_κ′(g(i), g(j)) ≥ d_X(f(i), f(j))

for any i, j ∈ ℤ/nℤ with j ≠ i + [1]_n and i ≠ j + [1]_n, instead of the existence of a map g as in Definition 1.1. As mentioned in [10], this definition is equivalent to Definition 1.1. In fact, the existence of such a map g implies the existence of a map g as in Definition 1.1 by Reshetnyak’s majorization theorem.

Remark 1.3

In [10, §7], assumption (1.1) is not stated explicitly. It is just remarked that we have to consider only maps f : Z / n Z X with “small” images f ( Z / n Z ) when κ > 0 .

For the definition of CAT(κ) spaces, see Definition 2.8 in this article. Concerning Cycl_n(κ) spaces and CAT(κ) spaces, Gromov [10] established the following fact.

Theorem 1.4

(Gromov [10]) Fix κ R . The following two assertions hold true.

  1. A metric space ( X , d X ) is CAT ( κ ) if and only if X is Cycl 4 ( κ ) and D κ -geodesic. Here, we say X is D κ -geodesic if any x , y X with d X ( x , y ) < D κ can be joined by a geodesic segment in X.

  2. Every CAT ( κ ) space is Cycl n ( κ ) for all integers n 4 .

On the other hand, the Cycl_4(κ) condition generally does not imply isometric embeddability into a CAT(κ) space without the assumption that the metric space is D_κ-geodesic. In fact, Nina Lebedeva constructed a 6-point Cycl_4(0) space that does not admit an isometric embedding into any CAT(0) space (see [1, §7.2]). Moreover, it follows from the result of Eskenazis, Mendel, and Naor [8] that there exists a Cycl_4(0) space that does not admit a coarse embedding into any CAT(0) space (see [17, p. 116]).

The following theorem is our main result, which can be viewed as an analogue of Reshetnyak’s majorization theorem for Cycl_4(κ) spaces.

Theorem 1.5

Let κ ∈ ℝ. If X is a Cycl_4(κ) space, then for any integer n ≥ 3, and for any map f : ℤ/nℤ → X that satisfies

Σ_{i ∈ ℤ/nℤ} d_X(f(i), f(i + [1]_n)) < 2D_κ,   f(j) ≠ f(j + [1]_n)

for every j ∈ ℤ/nℤ, there exists a map g : ℤ/nℤ → M²_κ that satisfies the following two conditions:

  1. For any i, j ∈ ℤ/nℤ, we have

     d_κ(g(i), g(i + [1]_n)) = d_X(f(i), f(i + [1]_n)),   d_κ(g(i), g(j)) ≥ d_X(f(i), f(j)).

  2. For any i, j ∈ ℤ/nℤ with i ≠ j, we have [g(i), g(j)] ∩ [g(i − [1]_n), g(i + [1]_n)] ≠ ∅, where we denote by [a, b] the line segment in M²_κ with endpoints a and b.

Note that when the polygon with vertices g(i), i ∈ ℤ/nℤ, is non-degenerate, condition (2) in Theorem 1.5 means that this polygon is convex.

1.1 Gromov’s remark about the Cycl_n(κ) conditions

Theorem 1.4 tells us that the Cycl_4(κ) condition implies the Cycl_n(κ) conditions for all integers n ≥ 5 under the assumption that the metric space is D_κ-geodesic. In the study of upper curvature bounds for general metric spaces, it is natural to ask whether this implication holds without assuming that the metric space is D_κ-geodesic. Concerning this question, Gromov [10, §15, Remarks (b)] stated “We shall see later on that Cycl_4 ⇒ Cycl_k for all k ≥ 5 in the geodesic case but this is apparently not so in general.” However, we prove, as a direct consequence of Theorem 1.5, that this implication actually holds true without assuming that the metric space is D_κ-geodesic:

Theorem 1.6

Let κ ∈ ℝ. Every Cycl_4(κ) space is Cycl_n(κ) for all integers n ≥ 5.

1.2 Gromov’s question about the Wirtinger inequalities

In [10, §6], Gromov also introduced the following conditions on a metric space.

Definition 1.7

Fix an integer n ≥ 4. We say that a metric space (X, d_X) is a Wir_n space if any map f : ℤ/nℤ → X satisfies

(1.2)   0 ≤ sin²(jπ/n) Σ_{i ∈ ℤ/nℤ} d_X(f(i), f(i + [1]_n))² − sin²(π/n) Σ_{i ∈ ℤ/nℤ} d_X(f(i), f(i + [j]_n))²

for every j ∈ ℤ ∩ [2, n − 2].

The family of inequalities (1.2) can be thought of as a discrete and nonlinear analogue of the classical Wirtinger inequality for functions on S¹. Every Euclidean space is Wir_n for all integers n ≥ 4, which was first proved by Pech [13] before Gromov introduced the notion of Wir_n spaces. Therefore, it follows from the definition of Cycl_n(0) spaces that every Cycl_n(0) space is Wir_n for each integer n ≥ 4. Thus, for general metric spaces, the following implications hold for each integer n ≥ 4:

CAT(0) ⇒ Cycl_n(0) ⇒ Wir_n.
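Since Euclidean spaces are Wir_n, inequality (1.2) lends itself to a quick numerical sanity check in ℝ². The following sketch evaluates the right-hand side of (1.2) for random planar polygons and confirms it is nonnegative (the helper name wirtinger_gap is ours, chosen for illustration):

```python
import math
import random

def wirtinger_gap(points, j):
    """Right-hand side of (1.2): sin^2(j*pi/n) * (sum of squared edge
    lengths) minus sin^2(pi/n) * (sum of squared j-step chord lengths)."""
    n = len(points)
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    edges = sum(d2(points[i], points[(i + 1) % n]) for i in range(n))
    chords = sum(d2(points[i], points[(i + j) % n]) for i in range(n))
    return math.sin(j * math.pi / n) ** 2 * edges - math.sin(math.pi / n) ** 2 * chords

random.seed(0)
for n in range(4, 9):
    pts = [(random.random(), random.random()) for _ in range(n)]
    # Euclidean space is Wir_n, so the gap is nonnegative for every j.
    assert all(wirtinger_gap(pts, j) >= -1e-12 for j in range(2, n - 1))
```

For the unit square and j = 2, both terms equal 4, so the gap vanishes, reflecting that (1.2) is sharp on regular configurations.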

In [10, p. 133, §25, Question], Gromov posed the question of whether the implication Cycl 4 ( 0 ) Wir n holds true for every integer n 5 without assuming that the metric space is geodesic. Kondo, Toyoda, and Uehara [11] answered this question affirmatively:

Theorem 1.8

[11] Every Cycl_4(0) space is Wir_n for all integers n ≥ 4.

Since Theorem 1.6 implies the stronger implication Cycl_4(0) ⇒ Cycl_n(0) for every integer n ≥ 4, Theorem 1.6 strengthens Theorem 1.8 and gives another proof of it.

1.3 The ⊠-inequalities

It was remarked in [10, §7] that the Cycl_4(0) condition is equivalent to the validity of a certain family of inequalities, defined as follows.

Definition 1.9

We say that a metric space (X, d_X) satisfies the ⊠-inequalities if for any t, s ∈ [0, 1] and any x, y, z, w ∈ X, we have

0 ≤ (1 − t)(1 − s) d_X(x, y)² + t(1 − s) d_X(y, z)² + ts d_X(z, w)² + (1 − t)s d_X(w, x)² − t(1 − t) d_X(x, z)² − s(1 − s) d_X(y, w)².

Gromov [10] and Sturm [16] proved independently that every CAT ( 0 ) space satisfies the -inequalities. The name “ -inequalities” is based on a notation used by Gromov [10] and was used in [11] and [17]. Sturm [16] called these inequalities the weighted quadruple inequalities. Gromov stated the following fact in [10, §7].

Theorem 1.10

[10] A metric space is Cycl_4(0) if and only if it satisfies the ⊠-inequalities.

For a proof of Theorem 1.10, see Section 6. The following two corollaries follow from Theorems 1.5 and 1.6 immediately.

Corollary 1.11

If a metric space X satisfies the ⊠-inequalities, then for any integer n ≥ 3, and for any map f : ℤ/nℤ → X that satisfies f(j) ≠ f(j + [1]_n) for every j ∈ ℤ/nℤ, there exists a map g : ℤ/nℤ → ℝ² that satisfies the following two conditions:

  1. For any i, j ∈ ℤ/nℤ, we have

     ‖g(i) − g(i + [1]_n)‖ = d_X(f(i), f(i + [1]_n)),   ‖g(i) − g(j)‖ ≥ d_X(f(i), f(j)).

  2. For any i, j ∈ ℤ/nℤ with i ≠ j, we have [g(i), g(j)] ∩ [g(i − [1]_n), g(i + [1]_n)] ≠ ∅, where we denote by [a, b] the line segment in ℝ² with endpoints a and b.

Corollary 1.12

If a metric space satisfies the ⊠-inequalities, then it is Cycl_n(0) for every integer n ≥ 4.

1.4 Graph comparison

We can say that the Cycl_n(κ) condition is defined by comparing embeddings of the cycle graph with n vertices into a given metric space with embeddings of the same graph into M²_κ (see Definition 1.4 in [17]). By replacing the cycle graph with another graph, and M²_κ with another space, we can define a large number of new conditions. In recent years, such graph comparison conditions have been used by Lebedeva, Petrunin, and Zolotov [12] and the present author [17], among others, and they play an interesting role in the study of the geometry of metric spaces.

1.5 Outline of the proof of Theorem 1.5

Our proof of Theorem 1.5 is based on the idea used by Ballmann in his lecture notes [4] for proving Reshetnyak’s majorization theorem. We also recommend [2, Chapter 9, L] for a proof of Reshetnyak’s majorization theorem using Ballmann’s idea. To use Ballmann’s idea in our setting, we need the following lemma, which we prove in Section 5.

Lemma 1.13

Let κ ∈ ℝ, and let (X, d_X) be a Cycl_4(κ) space. Suppose x, y, z, w ∈ X and x̄, ȳ, z̄, w̄ ∈ M²_κ are points such that

d_κ(x̄, ȳ) + d_κ(ȳ, z̄) + d_κ(z̄, w̄) + d_κ(w̄, x̄) < 2D_κ,
d_X(x, y) ≤ d_κ(x̄, ȳ),   d_X(y, z) ≤ d_κ(ȳ, z̄),   d_X(z, w) ≤ d_κ(z̄, w̄),   d_X(w, x) ≤ d_κ(w̄, x̄),
d_κ(x̄, z̄) ≤ d_X(x, z).

Then we have d_X(y, w) ≤ d_κ(ȳ, p) + d_κ(p, w̄) for every p ∈ [x̄, z̄].

Once we have proved Lemma 1.13, we can prove Theorem 1.5 by just following the idea of Ballmann mentioned above. We show this in Section 3.

Remark 1.14

To prove Theorem 1.5 by following Ballmann’s idea, it would also suffice to use Lemma 1.13 with the assumption d_κ(x̄, z̄) ≤ d_X(x, z) replaced by the equality d_κ(x̄, z̄) = d_X(x, z). However, proving the d_κ(x̄, z̄) = d_X(x, z) version of Lemma 1.13 still requires an argument similar to the one we give in this article for proving Lemma 1.13.

For κ ≤ 0, Lemma 1.13 can be proved by a straightforward computation (see Remark 2.7). To prove Lemma 1.13 for a general real number κ, we prove two lemmas in Section 4, which generalize Alexandrov’s lemma [6, p. 25]. By using those lemmas, we prove Lemma 1.13 in Section 5.

Remark 1.15

It was pointed out by an anonymous referee that the d_κ(x̄, z̄) = d_X(x, z) version of Lemma 1.13 can also be proved by using Zalgaller’s arm lemma [18] instead of the generalized Alexandrov’s lemma.

1.6 Organization of the paper

This article is organized as follows. In Section 2, we recall some definitions and results from metric geometry and the geometry of M κ 2 . In Section 3, we prove Theorems 1.5 and 1.6 by using Lemma 1.13. In Section 4, we prove two lemmas, which generalize Alexandrov’s lemma [6, p. 25]. In Section 5, we prove Lemma 1.13 by using those lemmas proved in Section 4. In Section 6, we present a proof of Theorem 1.10 for completeness.

2 Preliminaries

In this section, we recall some definitions and results from metric geometry and the geometry of M²_κ.

2.1 Geodesics

Let (X, d_X) be a metric space. A geodesic in X is an isometric embedding of an interval of the real line into X. For x, y ∈ X, a geodesic segment with endpoints x and y is the image of a geodesic γ : [0, d_X(x, y)] → X with γ(0) = x, γ(d_X(x, y)) = y. If there exists a unique geodesic segment with endpoints x and y, we denote it by [x, y]. We also denote the sets [x, y] ∖ {x, y}, [x, y] ∖ {x} and [x, y] ∖ {y} by (x, y), (x, y] and [x, y), respectively. A metric space X is called geodesic if for any x, y ∈ X, there exists a geodesic segment with endpoints x and y. Let D ∈ (0, ∞]. We say that X is D-geodesic if for any x, y ∈ X with d_X(x, y) < D, there exists a geodesic segment with endpoints x and y. A subset S of X is called convex if for any x, y ∈ S, every geodesic segment in X with endpoints x and y is contained in S. If this condition holds for any x, y ∈ S with d_X(x, y) < D, then S is called D-convex.

2.2 The geometry of M²_κ

Let κ ∈ ℝ. For any x, y ∈ M²_κ with d_κ(x, y) < D_κ, there exists a unique geodesic segment [x, y] with endpoints x and y. By a line in M²_κ we mean the image of an isometric embedding of ℝ into M²_κ if κ ≤ 0, and a great circle in M²_κ if κ > 0. Then for any two distinct points x, y ∈ M²_κ with d_κ(x, y) < D_κ, there exists a unique line through x and y, which we denote by ℓ(x, y). For any line ℓ in M²_κ, M²_κ ∖ ℓ consists of exactly two connected components. We call each connected component of M²_κ ∖ ℓ a side of ℓ. One side of ℓ is called the opposite side of the other. If x, y ∈ M²_κ lie on the same side of a line, then d_κ(x, y) < D_κ. Each side of a line is a convex subset of M²_κ. For a subset S of a side of some line in M²_κ, the convex hull of S is the intersection of all convex subsets of M²_κ containing S, or equivalently, the minimal convex subset of M²_κ containing S. We denote the convex hull of S by conv(S). We recall the following well-known fact, which is trivial when κ ≤ 0.

Proposition 2.1

Let κ ∈ ℝ, and let n ≥ 3 be an integer. Suppose f : ℤ/nℤ → M²_κ is a map such that Σ_{i ∈ ℤ/nℤ} d_κ(f(i), f(i + [1]_n)) < 2D_κ. Then there exists a line L in M²_κ such that all f(i), i ∈ ℤ/nℤ, lie on one side of L, and thus, conv(f(ℤ/nℤ)) is contained in one side of L.

For x, y, z ∈ M²_κ with 0 < d_κ(x, y) < D_κ and 0 < d_κ(y, z) < D_κ, we denote by ∠xyz ∈ [0, π] the interior angle measure at y of the (possibly degenerate) triangle with vertices x, y, and z. By the law of cosines for M²_κ (see [6, p. 24]), ∠xyz ∈ [0, π] satisfies the following formula:

cos ∠xyz = (d_κ(x, y)² + d_κ(y, z)² − d_κ(z, x)²) / (2 d_κ(x, y) d_κ(y, z))   if κ = 0,
cos ∠xyz = (cosh(√−κ d_κ(x, y)) cosh(√−κ d_κ(y, z)) − cosh(√−κ d_κ(z, x))) / (sinh(√−κ d_κ(x, y)) sinh(√−κ d_κ(y, z)))   if κ < 0,
cos ∠xyz = (cos(√κ d_κ(z, x)) − cos(√κ d_κ(x, y)) cos(√κ d_κ(y, z))) / (sin(√κ d_κ(x, y)) sin(√κ d_κ(y, z)))   if κ > 0.

The following proposition follows immediately from the law of cosines.

Proposition 2.2

Let κ ∈ ℝ. Suppose x, y, z, x′, y′, z′ ∈ M²_κ are points such that

0 < d_κ(x, y) = d_κ(x′, y′) < D_κ,   0 < d_κ(y, z) = d_κ(y′, z′) < D_κ.

Then d_κ(x, z) ≤ d_κ(x′, z′) if and only if ∠xyz ≤ ∠x′y′z′. Moreover, d_κ(x, z) = d_κ(x′, z′) if and only if ∠xyz = ∠x′y′z′.

The following proposition also follows from the law of cosines.

Proposition 2.3

Let κ ∈ ℝ. Suppose x, y, z ∈ M²_κ are three distinct points such that

d_κ(x, y) + d_κ(y, z) < D_κ,   d_κ(x, y) ≤ d_κ(y, z).

Suppose x′, y′, z′ ∈ M²_κ are points such that

d_κ(x′, y′) = d_κ(x, y),   d_κ(y′, z′) = d_κ(y, z),   d_κ(z′, x′) ≤ d_κ(z, x).

Then ∠yxz ≤ ∠y′x′z′.

Proof

We consider three cases.

CASE 1: κ = 0. In this case, we have ∠yzx ≤ ∠yxz and ∠y′z′x′ ≤ ∠y′x′z′ by hypothesis, and therefore, ∠yzx ≤ π/2 and ∠y′z′x′ ≤ π/2. It follows that

0 ≤ d_κ(x, z)² + d_κ(y, z)² − d_κ(x, y)²,
0 ≤ d_κ(x′, z′)² + d_κ(y′, z′)² − d_κ(x′, y′)² = d_κ(x′, z′)² + d_κ(y, z)² − d_κ(x, y)²,

and therefore, we have

(2.1)   0 ≤ α² + d_κ(y, z)² − d_κ(x, y)²

for any α ∈ [d_κ(x′, z′), d_κ(x, z)]. Define a function f₁ : (0, ∞) → ℝ by

f₁(α) = (d_κ(x, y)² + α² − d_κ(y, z)²) / (2 d_κ(x, y) α).

Then we have

(d/dα) f₁(α) = (1 / (2 d_κ(x, y) α²)) (α² + d_κ(y, z)² − d_κ(x, y)²) ≥ 0

for any α ∈ [d_κ(x′, z′), d_κ(x, z)] by (2.1), which implies that ∠yxz ≤ ∠y′x′z′ because

f₁(d_κ(x, z)) = cos ∠yxz,   f₁(d_κ(x′, z′)) = cos ∠y′x′z′

by the law of cosines.

CASE 2: κ < 0. In this case, we define a function f₂ : (0, ∞) → ℝ by

f₂(α) = (cosh(√−κ d_κ(x, y)) cosh(√−κ α) − cosh(√−κ d_κ(y, z))) / (sinh(√−κ d_κ(x, y)) sinh(√−κ α)).

Then we have

(d/dα) f₂(α) = √−κ (cosh(√−κ d_κ(y, z)) cosh(√−κ α) − cosh(√−κ d_κ(x, y))) / (sinh(√−κ d_κ(x, y)) sinh²(√−κ α))
           ≥ √−κ (cosh(√−κ d_κ(y, z)) − cosh(√−κ d_κ(x, y))) / (sinh(√−κ d_κ(x, y)) sinh²(√−κ α)) ≥ 0

for any α ∈ (0, ∞) by hypothesis. This implies that ∠yxz ≤ ∠y′x′z′ because

f₂(d_κ(x, z)) = cos ∠yxz,   f₂(d_κ(x′, z′)) = cos ∠y′x′z′

by the law of cosines.

CASE 3: κ > 0. In this case, we define a function f₃ : (0, D_κ) → ℝ by

f₃(α) = (cos(√κ d_κ(y, z)) − cos(√κ d_κ(x, y)) cos(√κ α)) / (sin(√κ d_κ(x, y)) sin(√κ α)).

Then

(d/dα) f₃(α) = √κ (cos(√κ d_κ(x, y)) − cos(√κ d_κ(y, z)) cos(√κ α)) / (sin(√κ d_κ(x, y)) sin²(√κ α)).

If d_κ(y, z) ≤ D_κ/2, then

0 < √κ d_κ(x, y) ≤ √κ d_κ(y, z) ≤ π/2

by hypothesis, and therefore,

cos(√κ d_κ(x, y)) − cos(√κ d_κ(y, z)) cos(√κ α) ≥ cos(√κ d_κ(x, y)) − cos(√κ d_κ(y, z)) ≥ 0

for any α ∈ (0, D_κ). If d_κ(y, z) > D_κ/2, then

0 < √κ d_κ(x, y) < π/2 < √κ d_κ(y, z) < π,   √κ d_κ(x, y) < π − √κ d_κ(y, z)

by hypothesis, and therefore,

cos(√κ d_κ(x, y)) − cos(√κ d_κ(y, z)) cos(√κ α) ≥ cos(√κ d_κ(x, y)) + cos(√κ d_κ(y, z)) = cos(√κ d_κ(x, y)) − cos(π − √κ d_κ(y, z)) > cos(√κ d_κ(x, y)) − cos(√κ d_κ(x, y)) = 0

for any α ∈ (0, D_κ). Thus, we always have

0 ≤ (d/dα) f₃(α)

for any α ∈ (0, D_κ). This implies that ∠yxz ≤ ∠y′x′z′ because

f₃(d_κ(x, z)) = cos ∠yxz,   f₃(d_κ(x′, z′)) = cos ∠y′x′z′

by the law of cosines.

The aforementioned three cases exhaust all possibilities.□

The following formulas follow from straightforward computation.

Proposition 2.4

Let κ ∈ ℝ. Suppose x, y, z ∈ M²_κ are points such that x ≠ z and d_κ(a, b) < D_κ for any a, b ∈ {x, y, z}. Suppose γ : [0, d_κ(x, z)] → M²_κ is the geodesic such that γ(0) = x and γ(d_κ(x, z)) = z. Let t ∈ [0, 1], and let p = γ(t d_κ(x, z)). If κ = 0, then

d_κ(y, p)² = (1 − t) d_κ(x, y)² + t d_κ(y, z)² − t(1 − t) d_κ(x, z)².

If κ < 0, then

cosh(√−κ d_κ(y, p)) = (sinh(√−κ (1 − t) d_κ(x, z)) cosh(√−κ d_κ(x, y)) + sinh(√−κ t d_κ(x, z)) cosh(√−κ d_κ(y, z))) / sinh(√−κ d_κ(x, z)).

If κ > 0, then

cos(√κ d_κ(y, p)) = (sin(√κ (1 − t) d_κ(x, z)) cos(√κ d_κ(x, y)) + sin(√κ t d_κ(x, z)) cos(√κ d_κ(y, z))) / sin(√κ d_κ(x, z)).
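For κ = 0, the formula is Stewart’s theorem for a cevian and can be verified against a direct coordinate computation. A minimal sketch, treating the Euclidean case only (the helper name dist_via_formula is ours):

```python
import math

def dist_via_formula(x, y, z, t):
    """d(y, p) for p = (1 - t) x + t z, via the kappa = 0 formula of
    Proposition 2.4."""
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    val = (1 - t) * d2(x, y) + t * d2(y, z) - t * (1 - t) * d2(x, z)
    return math.sqrt(val)

x, y, z, t = (0.0, 0.0), (1.0, 2.0), (3.0, 0.0), 0.25
p = tuple((1 - t) * a + t * c for a, c in zip(x, z))  # point on [x, z]
assert abs(dist_via_formula(x, y, z, t) - math.dist(y, p)) < 1e-12
```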

The next corollary follows immediately from Proposition 2.4.

Corollary 2.5

Let κ ∈ ℝ. Suppose x, y, z, ỹ ∈ M²_κ are points such that

d_κ(x, y) ≤ d_κ(x, ỹ) < D_κ,   d_κ(y, z) ≤ d_κ(ỹ, z) < D_κ,   0 < d_κ(x, z) < D_κ.

Then we have d_κ(y, p) ≤ d_κ(ỹ, p) for any p ∈ [x, z].

Although it is not necessary for our purpose, it is worth noting that for κ ∈ (−∞, 0], Proposition 2.4 also implies the following corollary.

Corollary 2.6

Let κ ∈ (−∞, 0]. Suppose x, y, z, x̃, ỹ, z̃ ∈ M²_κ are points such that

d_κ(x, y) ≤ d_κ(x̃, ỹ) < D_κ,   d_κ(y, z) ≤ d_κ(ỹ, z̃) < D_κ,   0 < d_κ(x̃, z̃) ≤ d_κ(x, z) < D_κ.

Let γ : [0, d_κ(x, z)] → M²_κ and γ̃ : [0, d_κ(x̃, z̃)] → M²_κ be the geodesics such that

γ(0) = x,   γ(d_κ(x, z)) = z,   γ̃(0) = x̃,   γ̃(d_κ(x̃, z̃)) = z̃.

Fix t ∈ [0, 1], and set p = γ(t d_κ(x, z)), p̃ = γ̃(t d_κ(x̃, z̃)). Then d_κ(y, p) ≤ d_κ(ỹ, p̃).

Remark 2.7

For the case in which κ ∈ (−∞, 0], we can prove Lemma 1.13 easily by using Corollary 2.6. To prove Lemma 1.13 for general κ ∈ ℝ, we will prove a generalization of Alexandrov’s lemma [6, p. 25] in Section 4.

2.3 CAT(κ) spaces

A geodesic triangle in a metric space X is a triple △ = (γ₁, γ₂, γ₃) of geodesics γᵢ : [aᵢ, bᵢ] → X such that γ₁(b₁) = γ₂(a₂), γ₂(b₂) = γ₃(a₃), and γ₃(b₃) = γ₁(a₁). Let κ ∈ ℝ. If the perimeter Σ³ᵢ₌₁ (bᵢ − aᵢ) of the geodesic triangle △ is less than 2D_κ, then there exists a geodesic triangle △^κ = (γ₁^κ, γ₂^κ, γ₃^κ), γᵢ^κ : [aᵢ, bᵢ] → M²_κ, in M²_κ. Such a geodesic triangle △^κ is unique up to isometry of M²_κ. The geodesic triangle △ is said to be κ-thin if d_X(γᵢ(s), γⱼ(t)) ≤ d_κ(γᵢ^κ(s), γⱼ^κ(t)) for any i, j ∈ {1, 2, 3}, any s ∈ [aᵢ, bᵢ], and any t ∈ [aⱼ, bⱼ].

Definition 2.8

Let κ ∈ ℝ. A metric space X is called a CAT(κ) space if X is D_κ-geodesic, and any geodesic triangle in X with perimeter < 2D_κ is κ-thin.

By definition, M²_κ is a CAT(κ) space. It is easily observed that if (X, d_X) is a CAT(κ) space, then for any x, y ∈ X with d_X(x, y) < D_κ, there exists a unique geodesic segment with endpoints x and y. Every D_κ-convex subset of a CAT(κ) space equipped with the induced metric is a CAT(κ) space. For a detailed exposition of CAT(κ) spaces, see [2,6,7]. We note that in [2, p. 96, Definition 9.3], the term CAT(κ) space is used with a somewhat different meaning.

3 Ballmann’s argument for our setting

In this section, we prove Theorems 1.5 and 1.6 by using Lemma 1.13. As we mentioned in Section 1, our proof is based on the idea used by Ballmann in his lecture notes [4] for proving Reshetnyak’s majorization theorem. We will prove Lemma 1.13 later, in Section 5. First, we define the following conditions by slightly modifying the definition of the Cycl_n(κ) conditions.

Definition 3.1

Fix κ ∈ ℝ and a positive integer n. We say that a metric space (X, d_X) is a Cycl⁺_n(κ) space if for any map f : ℤ/nℤ → X that satisfies

Σ_{i ∈ ℤ/nℤ} d_X(f(i), f(i + [1]_n)) < 2D_κ,   f(j) ≠ f(j + [1]_n)

for every j ∈ ℤ/nℤ, there exists a map g : ℤ/nℤ → M²_κ that satisfies the following two conditions:

  1. For any i, j ∈ ℤ/nℤ,

     d_κ(g(i), g(i + [1]_n)) = d_X(f(i), f(i + [1]_n)),   d_κ(g(i), g(j)) ≥ d_X(f(i), f(j)).

  2. For any i, j ∈ ℤ/nℤ with i ≠ j, [g(i), g(j)] ∩ [g(i − [1]_n), g(i + [1]_n)] ≠ ∅.

We call a map g : ℤ/nℤ → M²_κ that satisfies the aforementioned two conditions a comparison map for f.

Note that when the polygon with vertices g(i), i ∈ ℤ/nℤ, is non-degenerate, condition (2) in Definition 3.1 means that this polygon is convex. By definition, every metric space is Cycl⁺_1(κ), Cycl⁺_2(κ), and Cycl⁺_3(κ) for all κ ∈ ℝ. It is also easily seen that for any κ ∈ ℝ and any integer n ≥ 4, if a metric space is Cycl⁺_m(κ) for all m ∈ ℤ ∩ [1, n], then it is Cycl_n(κ). To prove Theorem 1.5, it clearly suffices to prove that every Cycl_4(κ) space is Cycl⁺_n(κ) for all positive integers n.

Proof of Theorem 1.5 by using Lemma 1.13

Fix κ ∈ ℝ. We will prove that every Cycl_4(κ) space is Cycl⁺_n(κ) for all positive integers n by induction on n. As we mentioned above, every metric space is Cycl⁺_1(κ), Cycl⁺_2(κ), and Cycl⁺_3(κ) trivially. Fix an integer n ≥ 3, and assume that every Cycl_4(κ) space is Cycl⁺_l(κ) for all l ∈ ℤ ∩ [1, n]. Let (X, d_X) be an arbitrary Cycl_4(κ) space, and let f : ℤ/(n+1)ℤ → X be an arbitrary map that satisfies

(3.1)   Σ_{i ∈ ℤ/(n+1)ℤ} d_X(f(i), f(i + [1]_{n+1})) < 2D_κ,   f(j) ≠ f(j + [1]_{n+1})

for every j ∈ ℤ/(n+1)ℤ. Then what we need to prove is the existence of a comparison map for f. Define a map f₀ : ℤ/nℤ → X by

f₀([m]_n) = f([m]_{n+1}),   m ∈ ℤ ∩ [0, n − 1].

We consider two cases.

CASE 1: The map f satisfies f([n−1]_{n+1}) ≠ f([0]_{n+1}). In this case, it follows from (3.1) that

Σ_{i ∈ ℤ/nℤ} d_X(f₀(i), f₀(i + [1]_n)) < 2D_κ,   f₀(j) ≠ f₀(j + [1]_n)

for every j ∈ ℤ/nℤ, and so there exists a comparison map g₀ : ℤ/nℤ → M²_κ for f₀ by the inductive hypothesis. Since

d_κ(g₀([n−1]_n), g₀([0]_n)) = d_X(f([n−1]_{n+1}), f([0]_{n+1})),

there exists p ∈ M²_κ such that

d_κ(g₀([n−1]_n), p) = d_X(f([n−1]_{n+1}), f([n]_{n+1})),   d_κ(p, g₀([0]_n)) = d_X(f([n]_{n+1}), f([0]_{n+1})).

Because g₀ is a comparison map for f₀, condition (2) in Definition 3.1 guarantees that for any i, j ∈ ℤ/nℤ, g₀(i) and g₀(j) do not lie on opposite sides of the line ℓ(g₀([n−1]_n), g₀([0]_n)). So we may assume that p does not lie on the same side of ℓ(g₀([n−1]_n), g₀([0]_n)) as the points g₀(i), i ∈ ℤ/nℤ. Define a map g : ℤ/(n+1)ℤ → M²_κ by

g([m]_{n+1}) = g₀([m]_n) if m ∈ ℤ ∩ [0, n − 1],   and   g([n]_{n+1}) = p.

Then it is straightforward to see that

(3.2)   d_κ(g([l]_{n+1}), g([m]_{n+1})) ≥ d_X(f([l]_{n+1}), f([m]_{n+1}))

for any l, m ∈ ℤ ∩ [0, n − 1],

(3.3)   d_κ(g(i), g(i + [1]_{n+1})) = d_X(f(i), f(i + [1]_{n+1}))

for any i ∈ ℤ/(n+1)ℤ, and

(3.4)   d_κ(g([n−1]_{n+1}), g([0]_{n+1})) = d_X(f([n−1]_{n+1}), f([0]_{n+1})).

We divide CASE 1 into the following two subcases.

SUBCASE 1A: The map g satisfies

∠g([n−2]_{n+1})g([n−1]_{n+1})g([0]_{n+1}) + ∠g([0]_{n+1})g([n−1]_{n+1})g([n]_{n+1}) ≤ π,
∠g([1]_{n+1})g([0]_{n+1})g([n−1]_{n+1}) + ∠g([n−1]_{n+1})g([0]_{n+1})g([n]_{n+1}) ≤ π.

In this subcase, since g₀ is a comparison map for f₀, it follows from elementary geometry that we have

[g(i), g(j)] ∩ [g(i − [1]_{n+1}), g(i + [1]_{n+1})] ≠ ∅

for any i, j ∈ ℤ/(n+1)ℤ with i ≠ j. In particular, we have

[g([n−1]_{n+1}), g([0]_{n+1})] ∩ [g([m]_{n+1}), g([n]_{n+1})] ≠ ∅

for every m ∈ ℤ ∩ [0, n − 1]. Clearly, we also have

d_κ(g([0]_{n+1}), g([m]_{n+1})) + d_κ(g([m]_{n+1}), g([n−1]_{n+1})) + d_κ(g([n−1]_{n+1}), g([n]_{n+1})) + d_κ(g([n]_{n+1}), g([0]_{n+1})) < 2D_κ.

Therefore, Lemma 1.13 implies

d_κ(g([m]_{n+1}), g([n]_{n+1})) ≥ d_X(f([m]_{n+1}), f([n]_{n+1}))

for every m ∈ ℤ ∩ [0, n − 1] because X is Cycl_4(κ), and we have

d_X(f([0]_{n+1}), f([m]_{n+1})) ≤ d_κ(g([0]_{n+1}), g([m]_{n+1})),
d_X(f([m]_{n+1}), f([n−1]_{n+1})) ≤ d_κ(g([m]_{n+1}), g([n−1]_{n+1})),
d_X(f([n−1]_{n+1}), f([n]_{n+1})) = d_κ(g([n−1]_{n+1}), g([n]_{n+1})),
d_X(f([n]_{n+1}), f([0]_{n+1})) = d_κ(g([n]_{n+1}), g([0]_{n+1})),
d_X(f([0]_{n+1}), f([n−1]_{n+1})) = d_κ(g([0]_{n+1}), g([n−1]_{n+1}))

by (3.2), (3.3), and (3.4). Thus, g is a comparison map for f.

SUBCASE 1B: The map g satisfies

(3.5)   π < ∠g([n−2]_{n+1})g([n−1]_{n+1})g([0]_{n+1}) + ∠g([0]_{n+1})g([n−1]_{n+1})g([n]_{n+1})

or

π < ∠g([1]_{n+1})g([0]_{n+1})g([n−1]_{n+1}) + ∠g([n−1]_{n+1})g([0]_{n+1})g([n]_{n+1}).

In this subcase, we may assume without loss of generality that we have (3.5). Let

S = conv(g₀(ℤ/nℤ)),   T = conv({g₀([n−1]_n), p, g₀([0]_n)}).

Equip the subsets S and T of M²_κ with the induced metrics, and regard them as disjoint metric spaces. Define (R, d_R) to be the metric space obtained by gluing S and T by identifying [g₀([n−1]_n), g₀([0]_n)] ⊆ S with [g₀([n−1]_n), g₀([0]_n)] ⊆ T. Then R is a CAT(κ) space by Reshetnyak’s gluing theorem (see [14] or [6, Chapter II.11, Theorem 11.1]). We denote by r_m the point in R represented by g₀([m]_n) ∈ S for each m ∈ ℤ ∩ [0, n − 1], and by r_n the point in R represented by p ∈ T (Figure 1). Define a map f₁ : ℤ/nℤ → R by

f₁([m]_n) = r_m if m ∈ ℤ ∩ [0, n − 2],   and   f₁([n−1]_n) = r_n.

Then it follows from (3.1) and the definition of f₁ that

Σ_{i ∈ ℤ/nℤ} d_R(f₁(i), f₁(i + [1]_n)) < 2D_κ,   f₁(j) ≠ f₁(j + [1]_n)

for every j ∈ ℤ/nℤ. Because R is Cycl_4(κ) by Theorem 1.4, there exists a comparison map g₁ : ℤ/nℤ → M²_κ for f₁ by the inductive hypothesis. It follows from (3.5) and Alexandrov’s lemma [6, p. 25] that

d_κ(g₁([n−2]_n), g₁([n−1]_n)) = d_R(r_{n−2}, r_n) = d_R(r_{n−2}, r_{n−1}) + d_R(r_{n−1}, r_n).

Therefore, there exists a point q ∈ [g₁([n−2]_n), g₁([n−1]_n)] such that

d_κ(g₁([n−2]_n), q) = d_R(r_{n−2}, r_{n−1}),   d_κ(q, g₁([n−1]_n)) = d_R(r_{n−1}, r_n).

Define a map g₂ : ℤ/(n+1)ℤ → M²_κ by

g₂([m]_{n+1}) = g₁([m]_n) if m ∈ ℤ ∩ [0, n − 2],   g₂([n−1]_{n+1}) = q,   and   g₂([n]_{n+1}) = g₁([n−1]_n).

Then, we clearly have

[g₂(i), g₂(j)] ∩ [g₂(i − [1]_{n+1}), g₂(i + [1]_{n+1})] ≠ ∅

for any i, j ∈ ℤ/(n+1)ℤ with i ≠ j. It is straightforward to see that

d_κ(g₂([l]_{n+1}), g₂([m]_{n+1})) ≥ d_X(f([l]_{n+1}), f([m]_{n+1}))

for any l, m ∈ ℤ ∩ [0, n − 2], and

d_κ(g₂(i), g₂(i + [1]_{n+1})) = d_X(f(i), f(i + [1]_{n+1}))

for any i ∈ ℤ/(n+1)ℤ.

for any i Z / ( n + 1 ) Z . Lemma 1.13 implies

(3.6) d κ ( g 1 ( [ m ] n ) , q ) d R ( r m , r n 1 )

for every m Z [ 0 , n 2 ] because R is Cycl 4 ( κ ) , and we have

d κ ( g 1 ( [ n 1 ] n ) , g 1 ( [ m ] n ) ) d R ( r n , r m ) , d κ ( g 1 ( [ m ] n ) , g 1 ( [ n 2 ] n ) ) d R ( r m , r n 2 ) , d κ ( g 1 ( [ n 2 ] n ) , q ) = d R ( r n 2 , r n 1 ) , d κ ( q , g 1 ( [ n 1 ] n ) ) = d R ( r n 1 , r n ) , d κ ( g 1 ( [ n 2 ] n ) , g 1 ( [ n 1 ] n ) ) = d R ( r n 2 , r n ) , q [ g 1 ( [ n 2 ] n ) , g 1 ( [ n 1 ] n ) ]

by the definition of g₁. It follows from (3.6) and the definition of g₂ that

d_κ(g₂([m]_{n+1}), g₂([n−1]_{n+1})) ≥ d_X(f([m]_{n+1}), f([n−1]_{n+1}))

for every m ∈ ℤ ∩ [0, n − 2]. Together with the definition of R, Lemma 1.13 also implies

(3.7)   d_R(r_m, r_n) ≥ d_X(f([m]_{n+1}), f([n]_{n+1}))

for every m ∈ ℤ ∩ [0, n − 1] because X is Cycl_4(κ), and we have

d_R(r_0, r_m) ≥ d_X(f([0]_{n+1}), f([m]_{n+1})),
d_R(r_m, r_{n−1}) ≥ d_X(f([m]_{n+1}), f([n−1]_{n+1})),
d_R(r_{n−1}, r_n) = d_X(f([n−1]_{n+1}), f([n]_{n+1})),
d_R(r_n, r_0) = d_X(f([n]_{n+1}), f([0]_{n+1})),
d_R(r_0, r_{n−1}) = d_X(f([0]_{n+1}), f([n−1]_{n+1})).

It follows from (3.7) and the definition of g₂ that

d_κ(g₂([m]_{n+1}), g₂([n]_{n+1})) ≥ d_X(f([m]_{n+1}), f([n]_{n+1}))

for every m ∈ ℤ ∩ [0, n − 1]. Thus, g₂ is a comparison map for f.

CASE 2: The map f satisfies f ( [ n 1 ] n + 1 ) = f ( [ 0 ] n + 1 ) . In this case, we set

d = d X ( f ( [ n 1 ] n + 1 ) , f ( [ n ] n + 1 ) ) = d X ( f ( [ 0 ] n + 1 ) , f ( [ n ] n + 1 ) ) .

Define a map f̃_0 : Z/(n-1)Z → X by f̃_0([m]_{n-1}) = f([m]_{n+1}) for m ∈ Z ∩ [0, n-2]. Then there exists a comparison map g̃_0 : Z/(n-1)Z → M_κ² for f̃_0 by the inductive hypothesis. Let

S̃ = conv(g̃_0(Z/(n-1)Z)), T̃ = [0, d].

Equip S̃ ⊂ M_κ² and T̃ ⊂ ℝ with the induced metrics, and regard them as metric spaces in their own right. Define (R̃, d_R̃) to be the metric space obtained by gluing S̃ and T̃, identifying {g̃_0([0]_{n-1})} ⊂ S̃ with {0} ⊂ T̃. Then R̃ is a CAT(κ) space by Reshetnyak's gluing theorem. We denote by r̃_m the point in R̃ represented by g̃_0([m]_{n-1}) ∈ S̃ for each m ∈ Z ∩ [0, n-2], by r̃_{n-1} the point in R̃ represented by g̃_0([0]_{n-1}) ∈ S̃, and by r̃_n the point in R̃ represented by d ∈ T̃. In particular, we have r̃_0 = r̃_{n-1}. Define a map f̃_1 : Z/nZ → R̃ by

f̃_1([m]_n) = r̃_m if m ∈ Z ∩ [0, n-2], and f̃_1([m]_n) = r̃_n if m = n-1.

Then, since R̃ is Cycl_4(κ) by Theorem 1.4, there exists a comparison map g̃_1 : Z/nZ → M_κ² for f̃_1 by the inductive hypothesis. By the definition of the gluing of metric spaces, we have

d_κ(g̃_1([n-2]_n), g̃_1([n-1]_n)) = d_R̃(r̃_{n-2}, r̃_n) = d_R̃(r̃_{n-2}, r̃_{n-1}) + d_R̃(r̃_{n-1}, r̃_n),

and therefore, there exists a point q̃ ∈ [g̃_1([n-2]_n), g̃_1([n-1]_n)] such that

d_κ(g̃_1([n-2]_n), q̃) = d_R̃(r̃_{n-2}, r̃_{n-1}), d_κ(q̃, g̃_1([n-1]_n)) = d_R̃(r̃_{n-1}, r̃_n).

Define a map g̃_2 : Z/(n+1)Z → M_κ² by

g̃_2([m]_{n+1}) = g̃_1([m]_n) if m ∈ Z ∩ [0, n-2]; g̃_2([m]_{n+1}) = q̃ if m = n-1; g̃_2([m]_{n+1}) = g̃_1([n-1]_n) if m = n.

Then the same argument as in SUBCASE 1B shows that g̃_2 is a comparison map for f, which completes the proof.□

Theorem 1.6 follows immediately from Theorem 1.5.

Figure 1: The CAT(κ) space R.

Proof of Theorem 1.6

Let X be a Cycl_4(κ) space. Then X is Cycl_m(κ) for every positive integer m as shown in the proof of Theorem 1.5, which implies that X is Cycl_n(κ) for every integer n ≥ 4.□

4 A generalization of Alexandrov’s lemma

In this section, we prove two lemmas, Lemmas 4.6 and 4.10, which generalize Alexandrov’s lemma [6, p. 25]. We use these lemmas to prove Lemma 1.13 in Section 5.

Before proving the first lemma, we recall some elementary facts about angle measure in M_κ². We first recall the following three basic propositions about the sum of two adjacent angle measures in M_κ².

Proposition 4.1

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z}. Then ∠xoz ≤ ∠xoy + ∠yoz.
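Proposition 4.1 is the triangle inequality for the angular metric on directions at o. For κ = 0 (so M_κ² = ℝ²) it can be illustrated by a quick numerical sketch; the helper `ang` and the random sampling are ours, not the paper's:

```python
import math
import random

def ang(o, p, q):
    """Angle at o between the directions o->p and o->q, in [0, pi]."""
    vx, vy = p[0] - o[0], p[1] - o[1]
    wx, wy = q[0] - o[0], q[1] - o[1]
    c = (vx * wx + vy * wy) / (math.hypot(vx, vy) * math.hypot(wx, wy))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

random.seed(2)
for _ in range(1000):
    o, x, y, z = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]
    if min(math.dist(o, a) for a in (x, y, z)) < 1e-6:
        continue  # skip degenerate samples (the proposition assumes 0 < d(o, a))
    # angular triangle inequality: angle(x,o,z) <= angle(x,o,y) + angle(y,o,z)
    assert ang(o, x, z) <= ang(o, x, y) + ang(o, y, z) + 1e-9
```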

Proposition 4.2

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z}. Assume that ∠xoz = ∠xoy + ∠yoz. Then all of the following conditions are true:

  • y and z do not lie on opposite sides of (o, x);

  • x and y do not lie on opposite sides of (o, z);

  • x and z do not lie on the same side of (o, y).

Proposition 4.3

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z}. Then the identity ∠xoz = ∠xoy + ∠yoz holds if and only if y and z do not lie on opposite sides of (o, x), and ∠xoy ≤ ∠xoz.

We recall two more elementary propositions.

Proposition 4.4

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z}. Then the following conditions are equivalent:

  1. ∠xoy + ∠yoz + ∠zox = 2π;

  2. π ≤ ∠yox + ∠xoz, and y and z do not lie on the same side of (o, x);

  3. π ≤ ∠zoy + ∠yox, and z and x do not lie on the same side of (o, y);

  4. π ≤ ∠xoz + ∠zoy, and x and y do not lie on the same side of (o, z).

When {o, x, y, z} is contained in one side of some line, each condition in the statement of Proposition 4.4 means that o lies in the solid triangle with vertices x, y, and z. Therefore, it is intuitively clear that the following proposition holds true.

Proposition 4.5

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are four distinct points such that {o, x, y, z} is contained in one side of some line. Assume that one of (hence all of) the conditions (1), (2), (3), and (4) in the statement of Proposition 4.4 holds true. Then all of the following conditions are true:

  • o and x do not lie on opposite sides of (y, z);

  • ∠zxo + ∠oxy = ∠zxy;

  • If o ∈ (x, y) ∪ (y, z) ∪ (z, x), then o ∈ [x, y] ∪ [y, z] ∪ [z, x].

We now start to prove the first lemma of this section, which was proved for κ = 0 in [17, Lemma 8.4].

Lemma 4.6

Let κ ∈ ℝ. Suppose x, y, z, w, x′, y′, z′, w′ ∈ M_κ² are points such that

d_κ(x, y) + d_κ(y, z) + d_κ(z, w) + d_κ(w, x) < 2D_κ, d_κ(x′, y′) = d_κ(x, y), d_κ(y′, z′) = d_κ(y, z), d_κ(z′, w′) = d_κ(z, w), d_κ(w′, x′) = d_κ(w, x), d_κ(y, w) ≤ d_κ(y′, w′).

Whenever x, y, z, and w are distinct except the possibility that y = w, we assume in addition that π ≤ ∠yzx + ∠xzw, and that y and w do not lie on the same side of (x, z). Then

d_κ(x, z) ≤ d_κ(x′, z′).

Proof

If one of the identities x = y, x = z, x = w, y = z, or z = w holds, then it is straightforward to check that the desired inequality holds true. So we assume that none of these identities holds. Suppose that y = w. Then y ∈ (x, z) because y and w do not lie on the same side of (x, z) by hypothesis, and therefore, we have ∠xzy = ∠xzw = 0 or ∠xzy = ∠xzw = π. Because π ≤ ∠yzx + ∠xzw by hypothesis, it follows that

(4.1) ∠xzy = ∠xzw = π.

We also have

(4.2) d_κ(x, z) + d_κ(z, y) < D_κ

because otherwise κ would be greater than 0, and d_κ(x, z) + d_κ(z, y) + d_κ(y, x) would be equal to 2D_κ, which would imply that

d_κ(x, y) + d_κ(y, z) + d_κ(z, w) + d_κ(w, x) ≥ d_κ(x, y) + d_κ(y, z) + d_κ(z, x) = 2D_κ,

contradicting the hypothesis. It follows from (4.1) and (4.2) that z ∈ [x, y], and thus,

d_κ(x, z) = d_κ(x, y) - d_κ(y, z) = d_κ(x′, y′) - d_κ(y′, z′) ≤ d_κ(x′, z′).

So henceforth we assume that x , y , z , and w are distinct. We consider two cases.

CASE 1: z ∉ (x, y) ∪ (y, w) ∪ (w, x). Let w̃ ∈ M_κ² be the point such that

d_κ(z, w̃) = d_κ(z, w), ∠yzw̃ = ∠y′z′w′,

and w̃ is not on the opposite side of (y, z) from w, as shown in Figure 2. Then Proposition 2.2 implies

d_κ(y, w̃) = d_κ(y′, w′),

and therefore, by using Proposition 2.2 again, we obtain

(4.3) ∠w̃yz = ∠w′y′z′.

Since d_κ(y, w) ≤ d_κ(y′, w′) by hypothesis, Proposition 2.2 implies that

∠yzw ≤ ∠y′z′w′ = ∠yzw̃,

and therefore, Proposition 4.3 implies that

(4.4) ∠yzw + ∠wzw̃ = ∠yzw̃ ≤ π.

Because π ≤ ∠yzx + ∠xzw, and y and w are not on the same side of (x, z), Proposition 4.4 implies that π ≤ ∠yzw + ∠wzx. Combining this with (4.4) yields ∠wzw̃ ≤ ∠wzx. Furthermore, w̃ and x are not on opposite sides of (z, w) because (4.4) implies that w̃ is not on the same side of (z, w) as y by Proposition 4.2, the hypothesis implies that x is not on the same side of (z, w) as y by Proposition 4.4, and y ∉ (z, w) by the assumption of CASE 1. Therefore, Proposition 4.3 implies that

(4.5) ∠wzx = ∠wzw̃ + ∠w̃zx.

We have ∠w̃zx ≤ ∠wzx by (4.5), and therefore, Proposition 2.2 implies that

d_κ(w̃, x) ≤ d_κ(w, x) = d_κ(w′, x′).

Using Proposition 2.2 again, this implies that

(4.6) ∠w̃yx ≤ ∠w′y′x′.

Because the hypothesis implies ∠yzw + ∠wzx + ∠xzy = 2π by Proposition 4.4, it follows from (4.4) and (4.5) that

∠w̃zx + ∠xzy = 2π - ∠yzw̃ ≥ π.

Furthermore, y and w̃ are not on the same side of (x, z) because y is not on the same side of (x, z) as w by hypothesis, (4.5) implies that w̃ is not on the opposite side of (x, z) from w by Proposition 4.2, and w ∉ (x, z) by the assumption of CASE 1. Therefore, Proposition 4.5 implies that

(4.7) ∠w̃yx = ∠w̃yz + ∠zyx.

We also have

(4.8) ∠w′y′x′ ≤ ∠w′y′z′ + ∠z′y′x′

by Proposition 4.1. By combining (4.3), (4.6), (4.7), and (4.8), we obtain

∠zyx = ∠w̃yx - ∠w̃yz ≤ ∠w′y′x′ - ∠w̃yz = ∠w′y′x′ - ∠w′y′z′ ≤ ∠z′y′x′,

and therefore, Proposition 2.2 implies that d_κ(x, z) ≤ d_κ(x′, z′).

CASE 2: z ∈ (x, y) ∪ (y, w) ∪ (w, x). In this case, z ∈ [x, y] ∪ [y, w] ∪ [w, x] by Proposition 4.5. If z ∈ [x, y], then

d_κ(x, z) = d_κ(x, y) - d_κ(y, z) = d_κ(x′, y′) - d_κ(y′, z′) ≤ d_κ(x′, z′).

If z ∈ [w, x], then we obtain d_κ(x, z) ≤ d_κ(x′, z′) similarly. So assume that z ∈ [y, w]. Then we have

d_κ(y, w) ≤ d_κ(y′, w′) ≤ d_κ(y′, z′) + d_κ(z′, w′) = d_κ(y, z) + d_κ(z, w) = d_κ(y, w),

and thus, d_κ(y′, w′) = d_κ(y, w) and z′ ∈ [y′, w′]. Hence, Proposition 2.2 implies

∠y′x′w′ = ∠z′x′w′ = ∠zxw = ∠yxw,

and therefore, by using Proposition 2.2 again, we obtain d_κ(x, z) = d_κ(x′, z′).□

Figure 2: Proof of Lemma 4.6.

Remark 4.7

Suppose x, y, z, w, x′, y′, z′, w′ ∈ M_κ² are points that satisfy the hypothesis of Lemma 4.6. Alexandrov's lemma [6, p. 25] states that if the identity

d_κ(y′, w′) = d_κ(y′, z′) + d_κ(z′, w′)

holds in addition, then we have d_κ(x, z) ≤ d_κ(x′, z′).

Before proving the second lemma of this section, we recall two additional elementary facts about the sum of two adjacent angle measures in M_κ².

Proposition 4.8

Let κ ∈ ℝ. Suppose o, x, y, z ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z}, and d_κ(x, z) < D_κ. If [x, z] ∩ [o, y] ≠ ∅, then

∠xoz = ∠xoy + ∠yoz.

Proposition 4.9

Let κ ∈ ℝ. Suppose o, x, y, z, w ∈ M_κ² are points such that 0 < d_κ(o, a) < D_κ for every a ∈ {x, y, z, w}. If we have

∠xoz = ∠xoy + ∠yoz, ∠xow = ∠xoz + ∠zow,

then we have

∠yow = ∠yoz + ∠zow, ∠xow = ∠xoy + ∠yow.

We now prove the second lemma of this section.

Lemma 4.10

Let κ ∈ ℝ. Suppose x, y, z, w, x′, y′, z′, w′ ∈ M_κ² are points such that

d_κ(x, y) + d_κ(y, z) + d_κ(z, w) + d_κ(w, x) < 2D_κ, d_κ(x′, y′) = d_κ(x, y), d_κ(y′, z′) = d_κ(y, z), d_κ(z′, w′) = d_κ(z, w), d_κ(w′, x′) = d_κ(w, x), d_κ(x, z) ≤ d_κ(x′, z′),

and [x, z] ∩ [y, w] ≠ ∅. Then

d_κ(y′, w′) ≤ d_κ(y, w).

Proof

If x = z, then it follows from the hypothesis that x ∈ [y, w], which implies the desired inequality by the triangle inequality. So we assume that x ≠ z. We next consider the case in which y ∈ (x, z). If y ∈ [x, z], then we have

d_κ(x′, z′) ≤ d_κ(x′, y′) + d_κ(y′, z′) = d_κ(x, y) + d_κ(y, z) = d_κ(x, z) ≤ d_κ(x′, z′),

and thus, d_κ(x′, z′) = d_κ(x, z) and y′ ∈ [x′, z′]. Hence, Proposition 2.2 implies

∠y′x′w′ = ∠z′x′w′ = ∠zxw = ∠yxw,

and therefore, by using Proposition 2.2 again, we obtain

d_κ(y′, w′) = d_κ(y, w).

If y ∈ (x, z) \ [x, z], then it follows from the hypothesis that x or z is on [y, w], which implies the desired inequality by the triangle inequality. Similarly, the desired inequality holds true in the case in which w ∈ (x, z) as well. So henceforth we assume in addition that y ∉ (x, z) and w ∉ (x, z). In particular, this requires that x, y, z, and w are distinct.

By hypothesis, one of the inequalities d_κ(x, y) + d_κ(y, z) < D_κ or d_κ(z, w) + d_κ(w, x) < D_κ holds true. We may assume without loss of generality that

d_κ(x, y) + d_κ(y, z) < D_κ, d_κ(x, y) ≤ d_κ(y, z).

Then we have

(4.9) ∠y′x′z′ ≤ ∠yxz

by Proposition 2.3. Let z̃ ∈ M_κ² be the point such that

d_κ(y, z̃) = d_κ(y, z), ∠xyz̃ = ∠x′y′z′,

and z̃ is not on the opposite side of (x, y) from z, as shown in Figure 3. Then Proposition 2.2 implies

d_κ(x, z̃) = d_κ(x′, z′),

and therefore, by using Proposition 2.2 again, we obtain

(4.10) ∠yxz̃ = ∠y′x′z′.

Since d_κ(x, z) ≤ d_κ(x′, z′), Proposition 2.2 implies

(4.11) ∠xyz ≤ ∠x′y′z′ = ∠xyz̃.

Because z̃ is not on the opposite side of (x, y) from z, (4.11) implies

(4.12) ∠xyz̃ = ∠xyz + ∠zyz̃

by Proposition 4.3. The hypothesis that [x, z] ∩ [y, w] ≠ ∅ implies

(4.13) ∠xyz = ∠xyw + ∠wyz

by Proposition 4.8. By Proposition 4.9, (4.12) and (4.13) imply

(4.14) ∠xyz̃ = ∠xyw + ∠wyz̃.

By combining (4.11), (4.13), and (4.14), we obtain ∠wyz ≤ ∠wyz̃, and therefore, Proposition 2.2 implies

d_κ(w′, z′) = d_κ(w, z) ≤ d_κ(w, z̃).

Using Proposition 2.2 again, this implies

(4.15) ∠z′x′w′ ≤ ∠z̃xw.

The hypothesis that [x, z] ∩ [y, w] ≠ ∅ implies

(4.16) ∠yxw = ∠yxz + ∠zxw

by Proposition 4.8. It follows from (4.9) and (4.10) that ∠yxz̃ ≤ ∠yxz, which implies

(4.17) ∠yxz = ∠yxz̃ + ∠z̃xz

by Proposition 4.3 because z̃ is not on the opposite side of (x, y) from z. By Proposition 4.9, (4.16) and (4.17) imply

(4.18) ∠yxw = ∠yxz̃ + ∠z̃xw.

We also have

(4.19) ∠y′x′w′ ≤ ∠y′x′z′ + ∠z′x′w′

by Proposition 4.1. By combining (4.10), (4.15), (4.18), and (4.19), we obtain

∠y′x′w′ ≤ ∠y′x′z′ + ∠z′x′w′ ≤ ∠yxz̃ + ∠z̃xw = ∠yxw,

and therefore, Proposition 2.2 implies d_κ(y′, w′) ≤ d_κ(y, w).□

Figure 3: Proof of Lemma 4.10.

Recall that for any x, y, z, w ∈ M_κ² such that d_κ(x, y) + d_κ(y, z) + d_κ(z, w) + d_κ(w, x) < 2D_κ, and x, y, z, and w are distinct except the possibility that y = w, we have

[x, z] ∩ [y, w] ≠ ∅

if and only if ∠yxz + ∠zxw ≤ π, ∠yzx + ∠xzw ≤ π, and y and w are not on the same side of (x, z).
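In the Euclidean case κ = 0 this criterion can be checked directly against a coordinate computation. The sketch below (the helpers `angle`, `cross`, and `criterion` are ours, not the paper's) evaluates the angle-and-side conditions on two concrete quadruples, one where the diagonals meet and one where they do not:

```python
import math

def angle(a, b, c):
    """Angle at vertex b of the triangle a-b-c, in [0, pi]."""
    v = (a[0] - b[0], a[1] - b[1])
    w = (c[0] - b[0], c[1] - b[1])
    cos = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.acos(max(-1.0, min(1.0, cos)))

def cross(o, a, b):
    """Signed area test: which side of the line o-a the point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def criterion(x, y, z, w):
    """Angle sums <= pi at x and z, and y, w not strictly on the same side of (x, z)."""
    same_side = cross(x, z, y) * cross(x, z, w) > 0
    return (angle(y, x, z) + angle(z, x, w) <= math.pi + 1e-12
            and angle(y, z, x) + angle(x, z, w) <= math.pi + 1e-12
            and not same_side)

# [y, w] meets [x, z] at (1, 0): criterion holds.
assert criterion((0, 0), (1, 1), (4, 0), (1, -1))
# [y, w] crosses the line (x, z) only beyond z: an angle sum exceeds pi.
assert not criterion((0, 0), (5, 1), (1, 0), (5, -1))
```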

Lemmas 4.6 and 4.10 imply the following corollary.

Corollary 4.11

Suppose x, y, z, w, x′, y′, z′, w′ ∈ M_κ² are points that satisfy the hypothesis of Lemma 4.10. Assume in addition that x′ ≠ z′ and that y′ and w′ do not lie on the same side of (x′, z′). Then [x′, z′] ∩ [y′, w′] ≠ ∅.

Proof

We have

(4.20) d_κ(y′, w′) ≤ d_κ(y, w)

by Lemma 4.10. If one of the identities x′ = y′, x′ = w′, y′ = z′, or z′ = w′ holds, then we have [x′, z′] ∩ [y′, w′] ≠ ∅ clearly. So we assume that x′, y′, z′, and w′ are distinct except the possibility that y′ = w′. Suppose for the sake of contradiction that [x′, z′] ∩ [y′, w′] = ∅. Since y′ and w′ do not lie on the same side of (x′, z′), this requires that we have π < ∠y′x′z′ + ∠z′x′w′ or π < ∠y′z′x′ + ∠x′z′w′. Therefore, (4.20) implies d_κ(x′, z′) ≤ d_κ(x, z) by Lemma 4.6. Combining this with the hypothesis that d_κ(x, z) ≤ d_κ(x′, z′) yields d_κ(x, z) = d_κ(x′, z′). Therefore, it follows from Proposition 2.2 that we have

π < ∠y′x′z′ + ∠z′x′w′ = ∠yxz + ∠zxw

or

π < ∠y′z′x′ + ∠x′z′w′ = ∠yzx + ∠xzw,

which implies [x, z] ∩ [y, w] = ∅, contradicting the hypothesis that [x, z] ∩ [y, w] ≠ ∅. Thus, we have [x′, z′] ∩ [y′, w′] ≠ ∅.□

5 Proof of Lemma 1.13

In this section, we prove Lemma 1.13.

Proof of Lemma 1.13

Let κ, X, x, y, z, w, x′, y′, z′, and w′ be as in the hypothesis, and fix p ∈ [x′, z′]. If x′ = z′, then p = x′, and therefore,

d_X(y, w) ≤ d_X(y, x) + d_X(x, w) ≤ d_κ(y′, x′) + d_κ(x′, w′) = d_κ(y′, p) + d_κ(p, w′).

So henceforth we assume that x′ ≠ z′. Then we also have x ≠ z since d_κ(x′, z′) ≤ d_X(x, z). Let w̄ ∈ M_κ² be the point such that

d_κ(x′, w̄) = d_κ(x′, w′), d_κ(w̄, z′) = d_κ(w′, z′),

and w̄ is not on the same side of (x′, z′) as y′. (If w′ is not on the same side of (x′, z′) as y′, then w̄ is w′ itself.) Then it is easily seen that d_κ(p, w̄) = d_κ(p, w′), and therefore,

(5.1) d_κ(y′, w̄) ≤ d_κ(y′, p) + d_κ(p, w̄) = d_κ(y′, p) + d_κ(p, w′).

Because X is Cycl_4(κ), there exist x_0, y_0, z_0, w_0 ∈ M_κ² such that

d_κ(x_0, y_0) ≤ d_X(x, y), d_κ(y_0, z_0) ≤ d_X(y, z), d_κ(z_0, w_0) ≤ d_X(z, w), d_κ(w_0, x_0) ≤ d_X(w, x), d_X(x, z) ≤ d_κ(x_0, z_0), d_X(y, w) ≤ d_κ(y_0, w_0).

These points satisfy

d_κ(x′, y′) - d_κ(y′, z′) ≤ d_κ(x′, z′) ≤ d_X(x, z) ≤ d_κ(x_0, z_0) ≤ d_κ(x_0, y_0) + d_κ(y_0, z_0) ≤ d_X(x, y) + d_X(y, z) ≤ d_κ(x′, y′) + d_κ(y′, z′),

and thus,

d_κ(x′, y′) - d_κ(y′, z′) ≤ d_κ(x_0, z_0) ≤ d_κ(x′, y′) + d_κ(y′, z′).

This guarantees that there exists a point ỹ ∈ M_κ² such that

d_κ(x_0, ỹ) = d_κ(x′, y′), d_κ(ỹ, z_0) = d_κ(y′, z′).

Similarly, there also exists a point w̃ ∈ M_κ² such that

d_κ(x_0, w̃) = d_κ(x′, w′), d_κ(w̃, z_0) = d_κ(w′, z′).

Clearly, we may assume that w̃ does not lie on the same side of (x_0, z_0) as ỹ. We consider two cases.

CASE 1: [x′, z′] ∩ [y′, w̄] ≠ ∅. In this case, it follows from Lemma 4.10 that

(5.2) d_κ(ỹ, w̃) ≤ d_κ(y′, w̄)

because

d_κ(x_0, ỹ) = d_κ(x′, y′), d_κ(ỹ, z_0) = d_κ(y′, z′), d_κ(z_0, w̃) = d_κ(z′, w̄), d_κ(w̃, x_0) = d_κ(w̄, x′), d_κ(x′, z′) ≤ d_X(x, z) ≤ d_κ(x_0, z_0).

We also have [x_0, z_0] ∩ [ỹ, w̃] ≠ ∅ by Corollary 4.11. Choose p_0 ∈ [x_0, z_0] ∩ [ỹ, w̃]. Then

d_κ(y_0, p_0) ≤ d_κ(ỹ, p_0), d_κ(p_0, w_0) ≤ d_κ(p_0, w̃)

by Corollary 2.5, and therefore,

(5.3) d_κ(y_0, w_0) ≤ d_κ(y_0, p_0) + d_κ(p_0, w_0) ≤ d_κ(ỹ, p_0) + d_κ(p_0, w̃) = d_κ(ỹ, w̃).

It follows from (5.1), (5.2), and (5.3) that

d_X(y, w) ≤ d_κ(y_0, w_0) ≤ d_κ(ỹ, w̃) ≤ d_κ(y′, w̄) ≤ d_κ(y′, p) + d_κ(p, w′).

CASE 2: [x′, z′] ∩ [y′, w̄] = ∅. In this case, we have x′ ≠ y′, x′ ≠ w̄, y′ ≠ z′ and z′ ≠ w̄, and one of the inequalities π < ∠y′x′z′ + ∠z′x′w̄ or π < ∠y′z′x′ + ∠x′z′w̄ holds since y′ and w̄ are not on the same side of (x′, z′). We may assume without loss of generality that

π < ∠y′x′z′ + ∠z′x′w̄.

Then we have

d_κ(y′, x′) + d_κ(x′, w̄) ≤ d_κ(y′, p) + d_κ(p, w̄)

by Alexandrov's lemma [6, p. 25], and therefore,

d_X(y, w) ≤ d_X(y, x) + d_X(x, w) ≤ d_κ(y′, x′) + d_κ(x′, w′) = d_κ(y′, x′) + d_κ(x′, w̄) ≤ d_κ(y′, p) + d_κ(p, w̄) = d_κ(y′, p) + d_κ(p, w′),

which completes the proof.□

6 Proof of Theorem 1.10

In this section, we present a proof of Theorem 1.10 for completeness (see Remark 6.2). First, we recall the following fact, which was established by Sturm when he proved in [16, Theorem 4.9] that if a geodesic metric space satisfies the ⊠-inequalities, then it is CAT(0).

Proposition 6.1

Let (X, d_X) be a metric space that satisfies the ⊠-inequalities. Suppose x, y, z ∈ X are points such that x ≠ z, and

(6.1) d_X(x, z) = d_X(x, y) + d_X(y, z).

Set t = d_X(x, y)/d_X(x, z). Then we have

d_X(y, w)² ≤ (1 - t) d_X(x, w)² + t d_X(z, w)² - t(1 - t) d_X(x, z)²

for any w ∈ X.

Proof

By the hypothesis (6.1), we compute

(1 - t) d_X(x, y)² + t d_X(y, z)² = (d_X(y, z)/d_X(x, z)) d_X(x, y)² + (d_X(x, y)/d_X(x, z)) d_X(y, z)² = (d_X(x, y) d_X(y, z)/d_X(x, z)) (d_X(x, y) + d_X(y, z)) = d_X(x, y) d_X(y, z) = t(1 - t) d_X(x, z)².

Combining this with the ⊠-inequality in X yields

0 ≤ (1 - t)(1 - s) d_X(x, y)² + t(1 - s) d_X(y, z)² + ts d_X(z, w)² + s(1 - t) d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)² = (1 - s)((1 - t) d_X(x, y)² + t d_X(y, z)²) + ts d_X(z, w)² + s(1 - t) d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)² = (1 - s) t(1 - t) d_X(x, z)² + ts d_X(z, w)² + s(1 - t) d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)² = ts d_X(z, w)² + s(1 - t) d_X(w, x)² - st(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)²

for every s ∈ [0, 1]. For any s ∈ (0, 1], dividing this by s, we obtain

(1 - s) d_X(y, w)² ≤ t d_X(z, w)² + (1 - t) d_X(w, x)² - t(1 - t) d_X(x, z)².

Letting s → 0 in this inequality yields the desired inequality.□
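The algebraic identity at the heart of this proof can be sanity-checked numerically; the following throwaway sketch (ours, not part of the proof) verifies that when d(x, z) = d(x, y) + d(y, z) and t = d(x, y)/d(x, z), the two sides of the displayed identity agree:

```python
import random

# Check: with c = a + b and t = a/c, we have (1-t)*a^2 + t*b^2 = t*(1-t)*c^2,
# where a, b, c stand for d(x,y), d(y,z), d(x,z) respectively.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.1, 10.0)  # d(x, y)
    b = random.uniform(0.1, 10.0)  # d(y, z)
    c = a + b                      # d(x, z), since y lies between x and z
    t = a / c
    lhs = (1 - t) * a * a + t * b * b
    rhs = t * (1 - t) * c * c
    assert abs(lhs - rhs) < 1e-9 * max(1.0, rhs)
```

Both sides reduce to d(x, y) d(y, z), which is the step the proof uses before applying the ⊠-inequality.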

We now prove Theorem 1.10.

Proof of Theorem 1.10

First, assume that a metric space (X, d_X) is Cycl_4(0). Then for any x, y, z, w ∈ X, there exist x_0, y_0, z_0, w_0 ∈ ℝ² such that

‖x_0 - y_0‖ ≤ d_X(x, y), ‖y_0 - z_0‖ ≤ d_X(y, z), ‖z_0 - w_0‖ ≤ d_X(z, w), ‖w_0 - x_0‖ ≤ d_X(w, x), ‖x_0 - z_0‖ ≥ d_X(x, z), ‖y_0 - w_0‖ ≥ d_X(y, w).

Therefore, for any s, t ∈ [0, 1], we have

(1 - t)(1 - s) d_X(x, y)² + t(1 - s) d_X(y, z)² + ts d_X(z, w)² + (1 - t)s d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)² ≥ (1 - t)(1 - s)‖x_0 - y_0‖² + t(1 - s)‖y_0 - z_0‖² + ts‖z_0 - w_0‖² + (1 - t)s‖w_0 - x_0‖² - t(1 - t)‖x_0 - z_0‖² - s(1 - s)‖y_0 - w_0‖² = ‖((1 - t)x_0 + tz_0) - ((1 - s)y_0 + sw_0)‖² ≥ 0,

which means that X satisfies the ⊠-inequalities.

For the converse, assume that (X, d_X) satisfies the ⊠-inequalities. Fix x, y, z, w ∈ X. If x, y, z, and w are not distinct, then we can embed {x, y, z, w} isometrically into ℝ². So we assume that x, y, z, and w are distinct. Then there exist x′, y′, z′, w′ ∈ ℝ² such that

‖x′ - y′‖ = d_X(x, y), ‖y′ - z′‖ = d_X(y, z), ‖z′ - x′‖ = d_X(z, x), ‖x′ - w′‖ = d_X(x, w), ‖w′ - z′‖ = d_X(w, z),

and y′ and w′ do not lie on the same side of (x′, z′). We consider three cases.
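The Euclidean identity invoked in the last step above, namely that the weighted quadruple expression equals the squared distance between the two convex combinations, can be verified numerically. The sketch below (the helper `sq` is ours) tests the identity on random planar quadruples:

```python
import random

def sq(p, q):
    """Squared Euclidean distance between two points of R^2."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

random.seed(1)
for _ in range(1000):
    x0, y0, z0, w0 = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(4)]
    t, s = random.random(), random.random()
    quad = ((1 - t) * (1 - s) * sq(x0, y0) + t * (1 - s) * sq(y0, z0)
            + t * s * sq(z0, w0) + (1 - t) * s * sq(w0, x0)
            - t * (1 - t) * sq(x0, z0) - s * (1 - s) * sq(y0, w0))
    p = ((1 - t) * x0[0] + t * z0[0], (1 - t) * x0[1] + t * z0[1])
    q = ((1 - s) * y0[0] + s * w0[0], (1 - s) * y0[1] + s * w0[1])
    assert abs(quad - sq(p, q)) < 1e-9  # the quadruple expression is |p - q|^2
    assert quad >= -1e-9                # hence it is nonnegative
```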

CASE 1: [x′, z′] ∩ (y′, w′) ≠ ∅. In this case, there exist s ∈ (0, 1) and t ∈ [0, 1] such that

(1 - t)x′ + tz′ = (1 - s)y′ + sw′.

It follows that

0 = ‖((1 - t)x′ + tz′) - ((1 - s)y′ + sw′)‖² = (1 - t)(1 - s)‖x′ - y′‖² + t(1 - s)‖y′ - z′‖² + ts‖z′ - w′‖² + (1 - t)s‖w′ - x′‖² - t(1 - t)‖x′ - z′‖² - s(1 - s)‖y′ - w′‖² = (1 - t)(1 - s) d_X(x, y)² + t(1 - s) d_X(y, z)² + ts d_X(z, w)² + (1 - t)s d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s)‖y′ - w′‖².

On the other hand, we have

0 ≤ (1 - t)(1 - s) d_X(x, y)² + t(1 - s) d_X(y, z)² + ts d_X(z, w)² + (1 - t)s d_X(w, x)² - t(1 - t) d_X(x, z)² - s(1 - s) d_X(y, w)²

because X satisfies the ⊠-inequalities. Comparing these yields d_X(y, w) ≤ ‖y′ - w′‖.

CASE 2: [x′, z′] ∩ {y′, w′} ≠ ∅. In this case, we may assume without loss of generality that y′ ∈ [x′, z′]. Then we can write

y′ = (1 - c)x′ + cz′,

where

(6.2) c = ‖x′ - y′‖/‖x′ - z′‖ = d_X(x, y)/d_X(x, z) ∈ (0, 1).

It follows that

‖y′ - w′‖² = ‖(1 - c)x′ + cz′ - w′‖² = (1 - c)‖x′ - w′‖² + c‖z′ - w′‖² - c(1 - c)‖x′ - z′‖² = (1 - c) d_X(x, w)² + c d_X(z, w)² - c(1 - c) d_X(x, z)².

On the other hand, it follows from (6.2) and Proposition 6.1 that

d_X(y, w)² ≤ (1 - c) d_X(x, w)² + c d_X(z, w)² - c(1 - c) d_X(x, z)²

because we have

d_X(x, z) = ‖x′ - z′‖ = ‖x′ - y′‖ + ‖y′ - z′‖ = d_X(x, y) + d_X(y, z).

Combining these yields d_X(y, w) ≤ ‖y′ - w′‖.

CASE 3: [x′, z′] ∩ [y′, w′] = ∅. Since y′ and w′ do not lie on the same side of (x′, z′), this requires that we have ∠y′z′x′ + ∠x′z′w′ > π or ∠y′x′z′ + ∠z′x′w′ > π. We may assume without loss of generality that ∠y′z′x′ + ∠x′z′w′ > π. Therefore, it follows from Alexandrov's lemma [6, p. 25] that there exist x̃, ỹ, w̃ ∈ ℝ² and z̃ ∈ [ỹ, w̃] such that

‖x̃ - ỹ‖ = d_X(x, y), ‖x̃ - w̃‖ = d_X(x, w), ‖ỹ - w̃‖ = d_X(y, z) + d_X(z, w), ‖ỹ - z̃‖ = d_X(y, z), ‖z̃ - w̃‖ = d_X(z, w),
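The three-point identity used in CASE 2 (the c-weighted expansion of ‖y′ - w′‖²) is the s = 0 degeneration of the quadruple identity and can likewise be checked numerically; the helper `sq` below is ours:

```python
import random

def sq(p, q):
    """Squared Euclidean distance between two points of R^2."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

# Check: for y' = (1-c) x' + c z' on the segment [x', z'],
# |y' - w'|^2 = (1-c)|x' - w'|^2 + c|z' - w'|^2 - c(1-c)|x' - z'|^2.
random.seed(3)
for _ in range(1000):
    xp = (random.uniform(-5, 5), random.uniform(-5, 5))
    zp = (random.uniform(-5, 5), random.uniform(-5, 5))
    wp = (random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.random()
    yp = ((1 - c) * xp[0] + c * zp[0], (1 - c) * xp[1] + c * zp[1])
    lhs = sq(yp, wp)
    rhs = (1 - c) * sq(xp, wp) + c * sq(zp, wp) - c * (1 - c) * sq(xp, zp)
    assert abs(lhs - rhs) < 1e-9
```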

and

d_X(x, z) = ‖x′ - z′‖ ≤ ‖x̃ - z̃‖.

The points x̃, ỹ, z̃, w̃ ∈ ℝ² have the desired property.

The aforementioned three cases exhaust all possibilities, and thus, X is Cycl_4(0).□

Remark 6.2

A proof of Theorem 1.10 can also be found in [11, Lemma 2.6]. However, the case corresponding to CASE 2 in the aforementioned proof is omitted in the proof in [11].

Acknowledgments

The author would like to thank Takefumi Kondo, Yu Kitabeppu, and Toshimasa Kobayashi for helpful discussions and a number of valuable comments on the first version of this article. The author would like to thank Masato Mimura for helpful discussions, especially for noting that it follows from the result of [8] that the Cycl_4(0) condition does not imply coarse embeddability into a CAT(0) space. The author would like to thank the anonymous referee for valuable comments, especially those mentioned in Remark 1.15.

  1. Funding information: This work was supported by JSPS KAKENHI Grant Number JP21K03254.

  2. Conflict of interest: The author states no conflict of interest.

References

[1] S. Alexander, V. Kapovitch, and A. Petrunin, Alexandrov meets Kirszbraun, in: Proceedings of the Gökova Geometry-Topology Conference 2010, Int. Press, Somerville, MA, 2011, pp. 88–109.

[2] S. Alexander, V. Kapovitch, and A. Petrunin, Alexandrov Geometry: Foundations, 2022, https://arxiv.org/abs/1903.08539.

[3] A. Andoni, A. Naor, and O. Neiman, Snowflake universality of Wasserstein spaces, Ann. Sci. Éc. Norm. Supér. (4) 51 (2018), no. 3, 657–700. doi:10.24033/asens.2363.

[4] W. Ballmann, Lectures on Spaces of Nonpositive Curvature, DMV Seminar, vol. 25, Birkhäuser, Basel, 1995. With an appendix by Misha Brin. doi:10.1007/978-3-0348-9240-7.

[5] I. D. Berg and I. G. Nikolaev, Quasilinearization and curvature of Aleksandrov spaces, Geom. Dedicata 133 (2008), 195–218. doi:10.1007/s10711-008-9243-3.

[6] M. R. Bridson and A. Haefliger, Metric Spaces of Non-positive Curvature, Grundlehren der Mathematischen Wissenschaften, vol. 319, Springer-Verlag, Berlin, 1999. doi:10.1007/978-3-662-12494-9.

[7] D. Burago, Y. Burago, and S. Ivanov, A Course in Metric Geometry, Graduate Studies in Mathematics, vol. 33, American Mathematical Society, Providence, RI, 2001. doi:10.1090/gsm/033.

[8] A. Eskenazis, M. Mendel, and A. Naor, Nonpositive curvature is not coarsely universal, Invent. Math. 217 (2019), no. 3, 833–886. doi:10.1007/s00222-019-00878-1.

[9] M. Gromov, Metric Structures for Riemannian and Non-Riemannian Spaces, Progress in Mathematics, vol. 152, Birkhäuser Boston, Inc., Boston, MA, 1999. Based on the 1981 French original, with appendices by M. Katz, P. Pansu and S. Semmes, translated from the French by Sean Michael Bates.

[10] M. Gromov, CAT(κ)-spaces: construction and concentration, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 280 (2001), (Geom. i Topol. 7), 100–140, 299–300.

[11] T. Kondo, T. Toyoda, and T. Uehara, On a question of Gromov about the Wirtinger inequalities, Geom. Dedicata 195 (2018), no. 1, 203–214. doi:10.1007/s10711-017-0284-3.

[12] N. Lebedeva, A. Petrunin, and V. Zolotov, Bipolar comparison, Geom. Funct. Anal. 29 (2019), no. 1, 258–282. doi:10.1007/s00039-019-00481-9.

[13] P. Pech, Inequality between sides and diagonals of a space n-gon and its integral analog, Časopis Pěst. Mat. 115 (1990), no. 4, 343–350. doi:10.21136/CPM.1990.118412.

[14] Y. G. Reshetnyak, Inextensible mappings in a space of curvature no greater than K, Siberian Math. J. 9 (1968), no. 4, 683–689. doi:10.1007/BF02199105.

[15] T. Sato, An alternative proof of Berg and Nikolaev's characterization of CAT(0)-spaces via quadrilateral inequality, Arch. Math. (Basel) 93 (2009), no. 5, 487–490. doi:10.1007/s00013-009-0057-9.

[16] K.-T. Sturm, Probability measures on metric spaces of nonpositive curvature, in: Heat Kernels and Analysis on Manifolds, Graphs, and Metric Spaces (Paris, 2002), Contemp. Math., vol. 338, Amer. Math. Soc., Providence, RI, 2003, pp. 357–390. doi:10.1090/conm/338/06080.

[17] T. Toyoda, An intrinsic characterization of five points in a CAT(0) space, Anal. Geom. Metr. Spaces 8 (2020), no. 1, 114–165. doi:10.1515/agms-2020-0111.

[18] V. A. Zalgaller, On deformations of a polygon on a sphere, Uspekhi Mat. Nauk 11 (1956), no. 5(71), 177–178.

Received: 2022-12-01
Revised: 2023-01-22
Accepted: 2023-02-08
Published Online: 2023-03-28

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
