
Orbit-blocking words and the average-case complexity of Whitehead’s problem in the free group of rank 2

  • Lucy Hyde, Siobhan O’Connor and Vladimir Shpilrain
Published/Copyright: October 2, 2024

Abstract

Let $F_2$ denote the free group of rank 2. Our main technical result, of independent interest, is the following: for any element $u$ of $F_2$, there is some $g \in F_2$ such that no cyclically reduced image of $u$ under an automorphism of $F_2$ contains $g$ as a subword. We then address the computational complexity of the following version of the Whitehead automorphism problem: given a fixed $u \in F_2$, decide, on an input $v \in F_2$ of length $n$, whether or not $v$ is an automorphic image of $u$. We show that there is an algorithm that solves this problem and has constant (i.e., independent of $n$) average-case complexity.

1 Introduction

The Whitehead problem (see [17, 12]) for a free group is the following: given two elements $u$ and $v$ of a free group $F$, find out whether there is an automorphism of $F$ that takes $u$ to $v$.

In this paper, we address the computational complexity of the following version of the Whitehead problem: given a fixed $u \in F$, decide, on an input $v \in F$ of length $n$, whether or not $v$ is an automorphic image of $u$. We show that, in the case where the free group has rank 2, there is an algorithm that solves this problem and has constant average-case complexity.

Our main technical result is of independent interest; it settles [1, Problem (F40)] in the free group $F_2$ of rank 2 (see also [16, Problems 1 and 2]).

Theorem 1

For any element $w$ of $F_2$, there is some $g \in F_2$ such that no cyclically reduced image of $w$ under an automorphism of $F_2$ contains $g$ as a subword. Such a word $g$ can be produced explicitly for any given $w$.

We call such an element $g$ orbit-blocking for $w$. This generalizes the idea of primitivity-blocking words (see e.g. [16]), i.e., words that cannot occur as subwords of any cyclically reduced primitive element of a free group. (A primitive element is part of a free generating set of $F$.) Examples of primitivity-blocking words can easily be found based on an observation by Whitehead himself (see [17, 12]) that the Whitehead graph of any cyclically reduced primitive element of length greater than 2 has either an isolated edge or a cut vertex, i.e., a vertex whose removal from the graph, together with all incident edges, increases the number of connected components of the graph. A short and elementary proof of this result was recently given in [4].

Our technique in the present paper is quite different and is specific to the free group of rank 2. It is based on a description of primitive elements and primitive pairs in $F_2$ from [3]. We give more details and a proof of Theorem 1 in Section 2.

In Section 3, based on Theorem 1, we establish that the average-case complexity of the version of the Whitehead problem mentioned at the beginning of the introduction is constant, i.e., independent of the length of the input $v$. This generalizes a result of [16] that applies to the special case where the fixed element $u$ is primitive. The result of [16], however, is valid in any free group of finite rank, whereas our result is limited to $F_2$. Extending it to an arbitrary $F_r$ would require extending Theorem 1 to $F_r$ with $r > 2$. While there is little doubt that Theorem 1 holds for any $F_r$, proving it for $r > 2$ would require an altogether different approach. More details are given in Section 2.

2 Orbit-blocking words

Let $F_2$ be a free group of rank 2, with generators $a$ and $b$. A primitive pair in $F_2$ is a pair of words $(u, v)$ such that, for some $\varphi \in \operatorname{Aut}(F_2)$, $\varphi(a) = u$ and $\varphi(b) = v$. Our proof of Theorem 1 relies on the following result from [3] characterizing primitive pairs.

Theorem 2 ([3])

Suppose that some conjugate of

$$u = a^{n_1} b^{m_1} \cdots a^{n_p} b^{m_p}$$

and some conjugate of

$$v = a^{r_1} b^{s_1} \cdots a^{r_q} b^{s_q}$$

form a basis of $F(a, b)$, where $p \ge 1$, $q \ge 1$, and all of the exponents are non-zero. Then, modulo the possible replacement of $a$ by $a^{-1}$ or $b$ by $b^{-1}$ throughout, there are integers $t > 0$ and $\varepsilon = \pm 1$ such that either

$$m_1 = m_2 = \cdots = m_p = \varepsilon s_1 = \cdots = \varepsilon s_q = 1, \qquad \{n_1, \ldots, n_p, \varepsilon r_1, \ldots, \varepsilon r_q\} = \{t, t+1\}$$

(the latter being an equality of sets) or, symmetrically,

$$n_1 = n_2 = \cdots = n_p = \varepsilon r_1 = \cdots = \varepsilon r_q = 1, \qquad \{m_1, \ldots, m_p, \varepsilon s_1, \ldots, \varepsilon s_q\} = \{t, t+1\}.$$

The following lemma makes the above description even more specific.

Lemma 1

Every primitive pair in $F_2$ is conjugate to a primitive pair in which both entries are cyclically reduced.

Proof

The following proof of Lemma 1 was suggested by the referee. By way of contradiction, suppose there is a primitive pair $(g^{-1} u g, h^{-1} v h)$, with $u$ and $v$ cyclically reduced, that violates the conclusion of the lemma and has minimum total length among such pairs. Then $(u, g h^{-1} v h g^{-1})$ is a primitive pair, too, and its total length is not greater than that of the original pair. If $g \ne h$ and there is a cancellation in $g h^{-1} v h g^{-1}$, then we get a primitive pair of smaller total length, a contradiction. If $g \ne h$ and there is no cancellation in $g h^{-1} v h g^{-1}$, then $(u, g h^{-1} v h g^{-1})$ is a Nielsen reduced pair, and therefore any non-trivial product of $u$ and $g h^{-1} v h g^{-1}$ either equals $u^{\pm 1}$ or has length at least 2. Therefore, the subgroup of $F_2$ generated by $u$ and $g h^{-1} v h g^{-1}$ cannot contain both $a$ and $b$, a contradiction. Finally, if $g = h$, then the original pair is conjugate to the cyclically reduced pair $(u, v)$, again a contradiction. ∎

For a given word $v$, we will denote by $m_a(v)$ (respectively, $m_b(v)$) the greatest absolute value of an exponent that appears on $a$ (respectively, $b$) in $v$, omitting $v$ if it is clear from the context. We will denote by $\bar{m}_a(v)$ (respectively, $\bar{m}_b(v)$) the greatest absolute value of an exponent that appears on $a$ (respectively, $b$) in $v$ considered as a cyclic word.
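To make these definitions concrete, here is a small Python sketch (our illustration, not from the paper), using the convention that a word in $F_2$ is a string over a, b, A, B, where the capital letters denote the inverses of the generators:

```python
import re

def max_exponent(v: str, letter: str = "a", cyclic: bool = False) -> int:
    """m_a(v): the largest absolute value of an exponent on `letter` in v.
    With cyclic=True, computes the cyclic-word version bar-m_a(v): doubling
    the word makes any run that wraps around the end appear contiguously."""
    word = v + v if cyclic else v
    runs = re.findall(f"{letter}+|{letter.upper()}+", word)
    m = max((len(r) for r in runs), default=0)
    return min(m, len(v))  # a run in a cyclic word cannot exceed the word length

# For example, max_exponent("Abaab", "a") == 2, while
# max_exponent("abba", "a", cyclic=True) == 2 because the two boundary
# occurrences of a merge in the cyclic word.
```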

Lemma 2

Let $u_1, \ldots, u_n$ be words in $F_2$ such that $m_a(u_i) \le 1$ for all $i \in \{1, 2, \ldots, n\}$. Then $m_a(u_1 \cdots u_n) \le n$ and $\bar{m}_a(u_1 \cdots u_n) \le n + 1$.

Proof

By way of contradiction, suppose there is a tuple $(u_1, \ldots, u_n)$ of minimum total length that does not satisfy the conclusion of the lemma. Then we can assume that there are no cancellations in the products $u_i u_{i+1}$, since otherwise we could choose the $u_i$ differently so as to decrease $\sum_i |u_i|$ while preserving their product and the conditions of the lemma. (For example, if $u_i$ ends with $a$ and $u_{i+1}$ starts with $a^{-1}$, we can replace $u_i$ by $u_i a^{-1}$ and $u_{i+1}$ by $a u_{i+1}$; this decreases $\sum_i |u_i|$ by 2.)

Note that, since $m_a(u_1) \le 1$, we have $\bar{m}_a(u_1) \le 2$. By induction, for $k \ge 2$, if $m_a(u_1 \cdots u_k) \le k$ and $\bar{m}_a(u_1 \cdots u_k) \le k + 1$, then $m_a(u_1 \cdots u_k u_{k+1}) \le k + 1$ and $\bar{m}_a(u_1 \cdots u_k u_{k+1}) \le k + 2$, since $u_{k+1}$ cannot start or end with $a^{\pm 2}$ by the conditions of the lemma. Thus, our tuple $(u_1, \ldots, u_n)$ satisfies the conclusion of the lemma after all, and this contradiction completes the proof. ∎

Proof of Theorem 1

Let $w$ be our given word and $l$ its length. Let $\varphi \in \operatorname{Aut}(F_2)$ send $(a, b)$ to $(u, v)$. By Lemma 1, Theorem 2, and the fact that swapping $a$ and $b$ is an automorphism, we may assume $m_a(u), m_a(v) \le 1$. Then, by Lemma 2, $\bar{m}_a(\varphi(w)) \le l + 1$. Thus, $a^{l+2} b^{l+2}$ cannot appear as a subword of the cyclic reduction of $\varphi(w)$. ∎

3 Average-case complexity of the Whitehead problem in $F_2$

The idea of average-case complexity appeared in [8], was formalized in [11], and was first addressed in the context of group theory in [5]. Specifically, the authors of [5] addressed the average-case complexity of the word and subgroup membership problems in some non-amenable groups and showed that this complexity was linear.

The strategy (used in [5]) is, for a given input, to run two algorithms in parallel. One algorithm, called honest, always terminates in finite time and gives a correct result. The other, a Las Vegas algorithm, is a fast randomized algorithm that never gives an incorrect result; that is, it either produces the correct result or reports that it failed to obtain any result. (In contrast, a Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with some, typically small, probability.)

A Las Vegas algorithm can improve the time complexity of an honest, “hard-working” algorithm that always gives a correct answer but is slow. Specifically, by running a fast Las Vegas algorithm and a slow honest algorithm in parallel, one often obtains another honest algorithm whose average-case complexity falls somewhere in between, because the proportion of inputs on which the fast Las Vegas algorithm terminates with the correct answer is large enough to dominate the average-case complexity. This idea was used in [5], where it was shown, in particular, that if a group $G$ has the word problem solvable in subexponential time, and if $G$ has a non-amenable factor group where the word problem is solvable in a complexity class $\mathcal{C}$, then there is an honest algorithm that solves the word problem in $G$ with average-case complexity in $\mathcal{C}$. Similar results were obtained for the subgroup membership problem.

We refer to [5, 14] for formal definitions of the average-case complexity of algorithms working with group words; we choose not to reproduce them here and instead appeal to the intuitive understanding of the average-case complexity of an algorithm as its expected runtime.

The word and subgroup membership problems are not the only group-theoretic problems whose average-case complexity can be significantly lower than their worst-case complexity. In [16], it was shown that primitive elements of a free group can be detected in constant time on average (with respect to the length of the input) if the input is a cyclically reduced word. The same idea was later used in [15] to design an algorithm, with constant average-case complexity, for detecting relatively primitive elements, i.e., elements that are primitive in a given subgroup of a free group.

Here we address the computational complexity of the following version of the Whitehead problem: given a fixed $u \in F$, decide, on an input $v \in F$ of length $n$, whether or not $v$ is an automorphic image of $u$. We show that the average-case complexity of this version of the Whitehead problem is constant if the input $v$ is a cyclically reduced word.

This version is a special case of the general Whitehead algorithm that decides, given two elements $u, v \in F_r$, whether or not $u$ can be taken to $v$ by an automorphism of $F_r$. The worst-case complexity of the Whitehead algorithm is unknown in general (cf. [1, Problem (F25)] and [10]) but is at most quadratic in $\max(|u|, |v|)$ if $r = 2$; see [13, 7].

We note in passing that the generic-case complexity of the Whitehead algorithm was shown to be linear in any $F_r$ (see [6]). Here we address the average-case complexity of the standard Whitehead algorithm run in parallel with a fast algorithm that detects orbit-blocking subwords in the input word.

Denote by $B(u)$ a word that cannot occur as a subword of any cyclically reduced $\varphi(u)$, $\varphi \in \operatorname{Aut}(F_2)$. Given any particular $u \in F_2$, one can easily produce an orbit-blocking word $B(u)$ based on the argument in our proof of Theorem 1 in Section 2. Specifically, one can use $B(u)$ of the form $a^s b^t$, where $s$ and $t$ are positive integers, each at least the length of $u$ plus 2 (cf. the proof of Theorem 1).

We emphasize that, in the version of the Whitehead problem we consider here, $u$ is not part of the input. Therefore, constructing $B(u)$ does not contribute to the complexity of a solution; it is considered to be pre-computed.
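For illustration, with the same string convention as in the sketch of Section 2, the pre-computation could look as follows (a minimal sketch; the exponent $|u| + 2$ comes from the proof of Theorem 1):

```python
def orbit_blocking_word(u: str) -> str:
    """Return B(u) = a^s b^t with s = t = |u| + 2.  By the proof of Theorem 1,
    no cyclically reduced automorphic image of u in F_2 contains this subword."""
    s = t = len(u) + 2
    return "a" * s + "b" * t
```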

A fast algorithm $\mathcal{T}$ detecting whether $B(u)$ is a subword of a (cyclically reduced) input word $v$ is as follows. Let $n$ be the length of $v$. The algorithm $\mathcal{T}$ reads the initial segments of $v$ of length $k$, $k = 1, 2, \ldots$, adding one letter at a time, and checks whether this initial segment has $B(u)$ as a subword. This takes time bounded by $Ck$ for some constant $C$; see [9].
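Concretely, $\mathcal{T}$ can be realized with the Knuth–Morris–Pratt automaton [9]; the sketch below (ours, under the same string convention as above) consumes $v$ one letter at a time, so the work done after reading a prefix of length $k$ is $O(k)$:

```python
def kmp_failure(pattern: str) -> list[int]:
    """Standard KMP failure function: fail[i] is the length of the longest
    proper border of pattern[:i+1]."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def contains_blocker(v: str, blocker: str) -> bool:
    """Algorithm T: scan v letter by letter, maintaining the KMP automaton
    state, and stop as soon as the blocker occurs as a subword."""
    fail = kmp_failure(blocker)
    k = 0  # number of letters of the blocker currently matched
    for letter in v:
        while k > 0 and letter != blocker[k]:
            k = fail[k - 1]
        if letter == blocker[k]:
            k += 1
        if k == len(blocker):
            return True
    return False
```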

Denote the “usual” Whitehead algorithm (which establishes whether or not $v$ is an automorphic image of $u$) by $\mathcal{W}$. We now run the algorithms $\mathcal{T}$ and $\mathcal{W}$ in parallel; denote the composite algorithm by $\mathcal{A}$. Then we have the following theorem.
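Schematically, the composition looks as follows (our sketch; `whitehead_algorithm_full` is a hypothetical stand-in for $\mathcal{W}$). Since $\mathcal{T}$ can only give a definitive negative answer and always terminates within $O(n)$ steps, the two algorithms may equivalently be run one after the other; the average-case analysis below is unaffected:

```python
def is_automorphic_image_of_u(v: str, blocker: str) -> bool:
    """Composite algorithm A: the fast test rejects v as soon as the
    pre-computed orbit-blocking word B(u) occurs in it; only the
    exponentially rare inputs avoiding B(u) reach the full algorithm W."""
    if contains_blocker(v, blocker):
        return False  # v contains B(u), so it is not an automorphic image of u
    return whitehead_algorithm_full(v)  # hypothetical stub for the algorithm W
```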

Theorem 3

Suppose that possible inputs of the above algorithm $\mathcal{A}$ are cyclically reduced words selected uniformly at random from the set of cyclically reduced words of length $n$. Then the average-case time complexity (a.k.a. expected runtime) of the algorithm $\mathcal{A}$, working on a classical Turing machine, is $O(1)$, a constant that does not depend on $n$. If one uses the “Deque” (double-ended queue) model of computing [18] instead of a classical Turing machine, then the “cyclically reduced” condition on the input can be dropped.

Proof

Suppose first that the input word $v$ is cyclically reduced.

(1) First we address the complexity of the algorithm $\mathcal{T}$. Here we use a result of [2] saying that the number of (freely reduced) words of length $L$ with (any number of) forbidden subwords grows exponentially slower than the number of all freely reduced words of length $L$.

In our situation, we have at least one forbidden subword, namely $B(u)$. Therefore, the probability that the initial segment of length $k$ of the word $v$ does not have $B(u)$ as a subword is $O(s^k)$ for some $s$, $0 < s < 1$. Thus, the average time complexity of the algorithm $\mathcal{T}$ is bounded by

$$\sum_{k=1}^{n} C k s^k,$$

which is bounded by a constant.
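Indeed, with $C$ and $s$ as above, the standard power-series identity $\sum_{k=1}^{\infty} k s^k = \frac{s}{(1-s)^2}$ for $0 < s < 1$ gives

$$\sum_{k=1}^{n} C k s^k \;<\; C \sum_{k=1}^{\infty} k s^k \;=\; \frac{C s}{(1-s)^2},$$

a bound independent of $n$.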

(2) Now suppose that the input word $v$ of length $n$ does not have $B(u)$ as a subword, so that we have to rely on the standard Whitehead algorithm $\mathcal{W}$ for an answer. The probability of this happening is $O(s^n)$ for some $s$, $0 < s < 1$, as mentioned before.

The worst-case time complexity of the Whitehead algorithm is known to be $O(n^2)$ in the group $F_2$ (see [7]).

Thus, the average-case complexity of the composite algorithm $\mathcal{A}$ is

$$\sum_{k=1}^{n} C k s^k + O(n^2) \cdot O(s^n),$$

which is bounded by a constant.

(3) Now suppose that the input word $v$ is not cyclically reduced. Then we first cyclically reduce it. This cannot be done in constant (or even sublinear) time on a classical Turing machine, so here we use the “Deque” model of computing [18], which allows one to move between the first and last letters of a word in constant time. We are going to show that, with this facility, one can cyclically reduce any element $v$ of length $n$, in any $F_r$, in constant time on average with respect to $n = |v|$. In fact, this was previously shown in [16], but we reproduce the proof here to make the exposition complete.

First, recall that the number of freely reduced words of length $n$ in $F_r$ is

$$2r(2r-1)^{n-1}.$$

The following algorithm, which we denote by $\mathcal{B}$, cyclically reduces $v$ in constant time on average with respect to $n = |v|$.

This algorithm compares the first letter of $v$, call it $a$, to the last letter, call it $z$. If $z \ne a^{-1}$, the algorithm stops right away. If $z = a^{-1}$, the first and last letters are deleted, and the algorithm now works with this new word.
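In code, a deque provides exactly the constant-time access to both ends that the Deque model [18] postulates; here is a minimal Python sketch (ours), again writing inverses as capital letters:

```python
from collections import deque

def inverse(x: str) -> str:
    """Inverse of a single letter under the convention a <-> A, b <-> B."""
    return x.lower() if x.isupper() else x.upper()

def cyclically_reduce(v: str) -> str:
    """Algorithm B: while the first and last letters of the freely reduced
    word are mutually inverse, cancel them; each step costs O(1)."""
    d = deque(v)
    while len(d) > 1 and d[0] == inverse(d[-1]):
        d.popleft()
        d.pop()
    return "".join(d)
```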

The probability that $z = a^{-1}$ is $\frac{1}{2r}$ for any freely reduced word whose letters were selected uniformly at random from the set $\{x_1, \ldots, x_r, x_1^{-1}, \ldots, x_r^{-1}\}$. At the next step of the algorithm, however, the letter immediately following $a$ cannot be equal to $a^{-1}$ if we assume that the input is a freely reduced word, so at the subsequent steps (if any) of the algorithm $\mathcal{B}$, the probability that the last letter equals the inverse of the first letter is $\frac{1}{2r-1}$. Then the expected runtime of the algorithm $\mathcal{B}$ on an input word of length $n$ is

$$\sum_{k=1}^{n/2} \frac{1}{2r} \left(\frac{1}{2r-1}\right)^{k-1} k \;<\; \sum_{k=1}^{\infty} \left(\frac{1}{2r-1}\right)^{k} k.$$

The infinite sum on the right is known to equal $\frac{2r-1}{(2r-2)^2}$ (it is the standard series $\sum_{k=1}^{\infty} k x^k = \frac{x}{(1-x)^2}$ evaluated at $x = \frac{1}{2r-1}$); in particular, it is constant with respect to $n$. ∎

Acknowledgements

We are grateful to the referee for suggesting several simplifications of our original proof of Theorem 1.

Communicated by: Anton Klyachko

References

[1] G. Baumslag, A. G. Myasnikov and V. Shpilrain, Open problems in combinatorial group theory, https://shpilrain.ccny.cuny.edu/gworld/problems/oproblems.html.

[2] T. Ceccherini-Silberstein and W. Woess, Growth and ergodicity of context-free languages, Trans. Amer. Math. Soc. 354 (2002), no. 11, 4597–4625, doi: 10.1090/S0002-9947-02-03048-9.

[3] M. Cohen, W. Metzler and A. Zimmermann, What does a basis of $F(a,b)$ look like?, Math. Ann. 257 (1981), no. 4, 435–445, doi: 10.1007/BF01465865.

[4] M. Heusener and R. Weidmann, A remark on Whitehead’s cut-vertex lemma, J. Group Theory 22 (2019), no. 1, 15–21, doi: 10.1515/jgth-2018-0118.

[5] I. Kapovich, A. Miasnikov, P. Schupp and V. Shpilrain, Average-case complexity and decision problems in group theory, Adv. Math. 190 (2005), no. 2, 343–359, doi: 10.1016/j.aim.2003.02.001.

[6] I. Kapovich, P. Schupp and V. Shpilrain, Generic properties of Whitehead’s algorithm and isomorphism rigidity of random one-relator groups, Pacific J. Math. 223 (2006), no. 1, 113–140, doi: 10.2140/pjm.2006.223.113.

[7] B. Khan, The structure of automorphic conjugacy in the free group of rank two, Computational and Experimental Group Theory, Contemp. Math. 349, American Mathematical Society, Providence (2004), 115–196, doi: 10.1090/conm/349/06360.

[8] D. E. Knuth, The analysis of algorithms, Actes du Congrès International des Mathématiciens. Tome 3, Gauthier-Villars, Paris (1971), 269–274.

[9] D. E. Knuth, J. H. Morris, Jr. and V. R. Pratt, Fast pattern matching in strings, SIAM J. Comput. 6 (1977), no. 2, 323–350, doi: 10.1137/0206024.

[10] D. Lee, Counting words of minimum length in an automorphic orbit, J. Algebra 301 (2006), no. 1, 35–58, doi: 10.1016/j.jalgebra.2006.04.012.

[11] L. A. Levin, Average case complete problems, SIAM J. Comput. 15 (1986), no. 1, 285–286, doi: 10.1137/0215020.

[12] R. C. Lyndon and P. E. Schupp, Combinatorial Group Theory, Class. Math., Springer, Berlin, 2001, doi: 10.1007/978-3-642-61896-3.

[13] A. G. Myasnikov and V. Shpilrain, Automorphic orbits in free groups, J. Algebra 269 (2003), no. 1, 18–27, doi: 10.1016/S0021-8693(03)00339-9.

[14] A. Olshanskii and V. Shpilrain, Linear average-case complexity of algorithmic problems in groups, preprint (2022), https://arxiv.org/abs/2205.05232.

[15] M. Roy, E. Ventura and P. Weil, The central tree property and algorithmic problems on subgroups of free groups, J. Group Theory 27 (2024), no. 5, 1059–1089, doi: 10.1515/jgth-2023-0050.

[16] V. Shpilrain, Average-case complexity of the Whitehead problem for free groups, Comm. Algebra 51 (2023), no. 2, 799–806, doi: 10.1080/00927872.2022.2113791.

[17] J. H. C. Whitehead, On equivalent sets of elements in a free group, Ann. of Math. (2) 37 (1936), no. 4, 782–800, doi: 10.2307/1968618.

[18] Double-ended queue, https://en.wikipedia.org/wiki/Double-ended_queue.

Received: 2024-06-23
Revised: 2024-08-30
Published Online: 2024-10-02
Published in Print: 2025-03-01

© 2024 Walter de Gruyter GmbH, Berlin/Boston
