Arithmetic progressions in space!

Last time I introduced a neat little question: Is it true that every infinite path in \mathbb{Z}^n contains arbitrarily long arithmetic progressions? For example, in the following path, the squares colored in red have coordinates (11,12), (15,17), (19,22), (23,27) , which form an arithmetic progression of length 3, where every two consecutive points differ by (4,5) .

As it turns out, this problem has already been solved, although not exactly in this form! But I did not know this at the time, and so before I tell you the answer, I will share with you a bit about how I started to investigate it, what the directions of attack were, and what eventually worked (read that as “how I eventually found out the right piece of literature”).

Now, how you approach this problem depends, of course, on what you think the solution is. If you think that it’s possible for paths to avoid long arithmetic progressions, a good starting place would be to try to explicitly construct such a path, perhaps by some recursive formula. On the other hand, if you think that every path must contain infinitely many progressions of increasing length, it would be advisable to pay a visit to the vast literature about one-dimensional arithmetic progressions in \mathbb{Z} or \mathbb{N} (e.g. Szemerédi’s theorem and the Green-Tao theorem).

As for me, I thought that the answer depended on the dimension n : My conjecture was that for n = 2 , arithmetic progressions could not be avoided, that for n \geq 4 they could, and for n = 3 I was undecided. My intuition was that both paths and the arithmetic progressions within them are inherently linear, one-dimensional objects; in fact, an arithmetic progression in a path in \mathbb{Z}^n is just the intersection of the path with some rational-sloped line. It is generally very difficult to avoid intersecting too many lines in two dimensions, while in higher dimensions there is a lot more space for evasive maneuvers.

Based on this gut feeling, I decided to work on \mathbb{Z}^2 first, with the aim of showing that arithmetic sequences cannot be avoided in any path. Now, once you decide that arithmetic sequences cannot be avoided in any path, you can make your life easier and try to prove a “practice theorem” which only applies to specific subsets of all paths. For example, it is much easier to work with positive paths, i.e. paths in which the coordinates can only increase at every step. In \mathbb{Z}^2 , this corresponds to paths which only go either right or up, and never left or down (English is a funny language; those who step right up are never left down!).

A very Good Thing about \mathbb{Z}^2 is that it sits very comfortably on the computer screen, and we can generate pretty pictures like the one at the top of this post. Indeed, one of the first things I did was to write a short computer program which, for every integer k , tries to find the length of the longest path that doesn’t contain arithmetic progressions of length k . It then draws the path, together with one extra step, so that you can see why an arithmetic progression would be created. Hopefully, if indeed arithmetic progressions are unavoidable, they also appear quickly enough so that the program can hint at their existence (for the number of possible paths grows very large, very quickly).
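If you would like to tinker with this yourself, here is a minimal sketch of such a search (my own reconstruction, not the original program): a depth-first search over monotone right/up paths that backtracks whenever adding a step would complete a progression of length k . The function names, and the convention of counting steps rather than points (which may differ by one from the f(k) values quoted below), are my own choices.

    # A rough sketch of the search described above (not the original program).
    # A "length-k progression" here means k+1 equally spaced collinear points.

    def creates_progression(points, point_set, p, k):
        """Would appending p to the monotone path complete a length-k progression?

        Since the path only moves right or up, the newest point dominates all
        earlier ones, so any new progression must end at p; we guess its
        second-to-last point q and check the remaining points.
        """
        for q in points:
            d = (p[0] - q[0], p[1] - q[1])
            if d == (0, 0):
                continue
            if all((p[0] - j * d[0], p[1] - j * d[1]) in point_set
                   for j in range(2, k + 1)):
                return True
        return False

    def longest_progression_free_path(k, max_steps=25):
        """Depth-first search for the longest right/up path (in steps, capped
        at max_steps) containing no arithmetic progression of length k."""
        best = [(0, 0)]

        def search(points, point_set):
            nonlocal best
            if len(points) > len(best):
                best = list(points)
            if len(points) - 1 >= max_steps:
                return
            x, y = points[-1]
            for p in ((x + 1, y), (x, y + 1)):      # step right or step up
                if not creates_progression(points, point_set, p, k):
                    points.append(p)
                    point_set.add(p)
                    search(points, point_set)
                    point_set.remove(p)
                    points.pop()

        search([(0, 0)], {(0, 0)})
        return best

    if __name__ == "__main__":
        for k in (2, 3):
            path = longest_progression_free_path(k)
            print(k, len(path) - 1, path)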

Any two points constitute an arithmetic progression of length 1. So for k = 1 , here is the longest progression-free path, together with the extra step that forces a progression of length 1 :

For k = 2 , this is the unique longest path (up to reflection about the line y=x ):

There doesn’t have to be a unique path, though, as the case k = 3 suggests:

So far, if we denote by f(k) the length of the longest path without an arithmetic progression of length k , the program shows that f(1) = 1 , f(2) = 4 , and f(3) = 10 . What about f(4) ? Unfortunately, the program did not terminate when I plugged in k=4 ; it found a path of length over 20,000 without an arithmetic progression of length 4, and eventually crashed due to a memory error (indeed, all art imitates life). Here are the first 1250 steps of such a path:

Now for the dire dilemma: Should we be concerned with this result? One immediate response, certainly valid, is to conclude (in the weakest sense of the word) that arithmetic progressions of length 4 can be avoided, and to try and search for an explicit construction which avoids them. Another, equally valid conclusion, is to politely excuse ourselves and say that f(4) just happens to be very, very large, say in the order of millions, and that the program crashing due to memory error implies nothing because there is nothing to be implied.

Now, any Registered Practicalist will tell you that the second conclusion is farcical, and that we do not often encounter sequences that feature explosive growth of the form 1, 4, 10, 20000000. But in response, any Certified Theorist will then bring up Szemerédi’s theorem.

Szemerédi’s theorem states that for every integer k>0 and every constant c > 0 , no matter how tiny, if N is large enough and A \subseteq \{1, \ldots, N\} has size at least cN , then A contains a k -term arithmetic progression. The proof of the theorem is based on the famous (among the right circles) Szemerédi Regularity Lemma. Roughly speaking, this lemma says that the vertices of any graph can be partitioned into a bounded number of equal-sized parts, where the edges between any two parts behave in many ways almost as though they were random. The “almost” in the last sentence is controlled by a small real parameter \varepsilon . The issue with Szemerédi’s regularity lemma is that the number of parts depends very, very badly on \varepsilon . Say, an exponential tower of the sort 2 ^{2^{\ldots ^2}} , where the number of exponentials in the tower is of order (1/\varepsilon)^{1/16} .

When my program crashed, I did not see any immediate way of applying Szemerédi’s theorem or lemma to the case of arithmetic progressions, but I could definitely imagine how a rapidly exploding sequence such as 1, 4, 10, 20000000 could emerge from a horrendous tower of \varepsilon s.

The idea was to show that there exists a line with rational slope that intersects the path with positive density (merely showing that it intersects the path infinitely many times is not enough for Szemerédi’s theorem). The hope behind this was that the path could not simultaneously evade arithmetic progressions of length 4 and vary its slope enough to avoid meeting any single line too often. Unfortunately (or perhaps fortunately), I (together with my friend Ori, who was helping me) was unable to prove anything of this sort.

Defeated in one campaign, I decided to treacherously switch sides and try to prove that arithmetic progressions of length 4 can actually be avoided. The strongest lead I had to follow was the Thue-Morse sequence.

The Thue-Morse sequence is an infinite binary sequence which can be iteratively constructed as follows:

  • Start with the one-letter string T^0 = 0 .
  • Construct T^{i+1} by simultaneously replacing each 0 in T^i with 01 , and each 1 with 10 . In other words, if we define a substitution function \theta : \{0,1\} \to \{0,1\}^* by \theta 0 = 01 and \theta 1 = 10 , this is the same as saying that T^{i+1} = \theta T^i , where \theta is applied bitwise to the elements of T^i .
  • The Thue-Morse sequence T is the limit as i \to \infty of the finite words T^i .

You may have to convince yourself a bit that the limit does exist, in the sense that each T^i is actually a prefix of T^{i+1} . This is true because \theta maps the bit 0 into the string 01 , which itself starts with 0 . The following first iterations may help to visualize this:

  • T^0 = 0
  • T^1 = 01
  • T^2 = 0110
  • T^3 = 01101001
  • T^4 = 0110100110010110
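In case you want to generate these prefixes yourself, here is a minimal sketch (mine, with the substitution written as a small dictionary):

    # Build the prefixes T^i of the Thue-Morse word by iterating the
    # substitution theta(0) = 01, theta(1) = 10, starting from T^0 = "0".

    THETA = {"0": "01", "1": "10"}

    def thue_morse_prefix(i):
        """Return T^i as a string of '0's and '1's."""
        word = "0"
        for _ in range(i):
            word = "".join(THETA[c] for c in word)
        return word

    for i in range(5):
        print(i, thue_morse_prefix(i))   # i = 4 prints 0110100110010110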

The Thue-Morse sequence has the following peculiar property (which is not that trivial to prove, actually): It is cube-free. That is, there is no string of binary digits Y = Y_1 Y_2 \ldots Y_m such that the string YYY appears anywhere in T .

This type of result is a good starting place for our analysis of arithmetic progressions, because every monotone path in \mathbb{Z}^n can be represented by an infinite word on an alphabet of size n : You can always assume that the path starts at the origin, and you can encode a step in each of the n directions by a digit from 0 to n-1 . When thus encoded, the Thue-Morse sequence gives the following path:

Since the path represented by the Thue-Morse sequence is cube-free, there is no series of steps which repeats itself three times in a row. This is good for us, since a series of steps which repeats itself three times in a row gives rise to an arithmetic progression of length 3. However, this does not mean that the path has no arithmetic progressions of length 3, since it might achieve the same overall difference vector d with the steps taken in a different order. Indeed, as the computer program images above show, it is impossible to avoid having a progression of length 3. Still, it’s worthwhile to check whether or not the Thue-Morse path has any longer arithmetic progressions; if the computer doesn’t find any, then it would be a prime candidate for some actual theorem proving.
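For what it’s worth, here is a sketch of that check (my own code, assuming the convention that 0 is a step to the right and 1 is a step up; the pictures in this post may use a different assignment). It walks a Thue-Morse prefix and reports the first arithmetic progression of length 4 that it finds, if any:

    # Walk a prefix of the Thue-Morse word as a right/up path and look for an
    # arithmetic progression of length k, i.e. k+1 equally spaced points of the path.

    def word_to_points(word):
        """Lattice points visited by the path encoded by a binary word."""
        x = y = 0
        points = [(0, 0)]
        for c in word:
            if c == "0":
                x += 1                              # '0' = step right
            else:
                y += 1                              # '1' = step up
            points.append((x, y))
        return points

    def find_progression(points, k):
        """Return the first length-k progression on the (monotone) path, or None."""
        seen = set()
        for p in points:
            for q in seen:
                d = (p[0] - q[0], p[1] - q[1])
                if d != (0, 0) and all(
                    (p[0] - j * d[0], p[1] - j * d[1]) in seen
                    for j in range(2, k + 1)
                ):
                    return [(p[0] - j * d[0], p[1] - j * d[1])
                            for j in range(k, -1, -1)]
            seen.add(p)
        return None

    word = "0"
    for _ in range(8):                              # a 256-letter Thue-Morse prefix
        word = "".join("01" if c == "0" else "10" for c in word)

    print(find_progression(word_to_points(word), 4))   # a length-4 progression, or None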

But alas, fate is not so generous:

Convince yourself that the Thue-Morse sequence has lots of long arithmetic progressions

So much for Thue-Morse.

The problem is that while the Thue-Morse sequence promises that no block of m letters can exactly repeat itself three times (or more), it says absolutely nothing about consecutive blocks which are permutations of each other. Since \mathbb{Z}^2 is an Abelian group, we do not care about the order of the steps within a block, but only about the number of steps taken in each direction.

The next logical step in our endeavour is therefore to find out whether a stronger form of the Thue-Morse word exists, one which prohibits not only the same block from appearing 4 times in a row, but also any 4 consecutive blocks which are permutations of each other. If such a word exists, then the path it represents does not have any arithmetic progression of length 4 or more.
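Before going to the literature, it is at least easy to test a candidate word for this property by brute force. Here is a small utility of mine (not from any paper) that checks a finite word for k consecutive blocks which are permutations of each other:

    from collections import Counter

    def has_abelian_power(word, k):
        """Does `word` contain k consecutive blocks of equal length whose
        letters are permutations of each other?"""
        n = len(word)
        for m in range(1, n // k + 1):              # the common block length
            for start in range(n - k * m + 1):      # where the first block begins
                blocks = [word[start + j * m : start + (j + 1) * m] for j in range(k)]
                counts = [Counter(b) for b in blocks]
                if all(c == counts[0] for c in counts):
                    return True
        return False

    # The Thue-Morse prefix "01101001" already contains four such blocks:
    # "01", "10", "10", "01", each with exactly one '1'.
    print(has_abelian_power("01101001", 4))         # True

This brute-force test is far from efficient, but it is more than enough for the short prefixes we will care about below.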

A Clever Mathematician might be able to find such a generalized Thue-Morse word, and truly this is the place where I turned to the existing literature to see if a Clever Mathematician already did so. As it turns out, he did!

Theorem (F. M. Dekking, 1978): There exists a sequence on two symbols in which no four blocks occur consecutively that are permutations of each other.

Wow, that pretty much nails it, doesn’t it?

Since this post didn’t have any proofs until now, only pretty pictures, I’d like to present Dekking’s proof (but still with pretty pictures). It is short, but for me it sort of comes out of nowhere (maybe because there were already similar, slightly weaker ideas floating around at the time of its writing, but I only saw the finalized form). I can convince myself that it’s true, but I’m not exactly sure why it’s true. Like the Thue-Morse sequence, Dekking’s word is generated by iteratively applying a substitution function \theta . The function \theta itself is simple, and you can easily construct the final word yourself, if you want to. The proof that the limit word has no consecutive permutations of blocks, however, relies on a short, clever analysis, with several finely tailored requirements that are seemingly almost magically satisfied by his construction.

Dekking’s paper is only 5 pages long, so you can definitely read it yourself, but I will repeat the main claims anyway, and add some pictures which I think make the proof a tad clearer. As stated above, the theorem follows from a technical lemma concerning sequences generated by substitution functions \theta . To state it, we need several quick definitions:

  • Symbol definitions: Let I be a set of symbols. A block B of size m is a finite word B = b_1\ldots b_m , where each b_i \in I . The frequency vector of a block is a row vector of size |I| where the i -th component is the number of times that i \in I appears in the block. The frequency matrix M of a substitution function \theta is the |I| \times |I| matrix whose j -th row is the frequency vector of \theta j . For i \in I , we can decompose \theta i into two smaller blocks, \theta i = VV' . The block V is called a left-subblock of \theta .
  • Group definitions: Let G be a finite additive Abelian group. A subset A \subseteq G is called progression-free of order n if a\in A , a + g \in A , …, a+(n-1)g \in A altogether implies that g = 0 . A function f: I \to G can be extended to blocks by defining f(b_1 b_2 \ldots b_m) = f(b_1) + f(b_2) + \ldots + f(b_m). Such a function is called \theta -injective if for all n and any collection V_1,\ldots,V_n of left-subblocks of \theta , the equality f(V_1) = \ldots = f(V_n) implies that either V_1 = \ldots = V_n or V_1' = \ldots = V_n' .

We can now state the main lemma:

Lemma: Let n > 1 , \theta a substitution function on a set I , G a finite Abelian group, and f:I \to G a map, such that the following four conditions are met:

  1. The frequency matrix M of \theta is non-singular,
  2. For each i \in I , we have f(\theta i) = 0 ,
  3. The set A = \{g \in G \mid g = f(V),\ V \text{ is a left-subblock of } \theta \} is progression-free of order n+1 ,
  4. f is \theta -injective.

Then any sequence generated by \theta (like how Thue-Morse was generated from T^0 = 0 ) does not have any n blocks which occur consecutively that are permutations of each other. For us, this means in particular that the path represented by the sequence does not have an arithmetic progression of length n .

Those conditions are quite specific, aren’t they?

Proof: Roughly speaking, the conditions of the lemma all gang up to say that any n consecutive blocks which are permutations of each other have found their way into the sequence by applying the substitution function to some other n consecutive blocks which are shorter. But this cannot really be, because eventually we would need \theta to create such blocks from single letters.

More formally: Let X be the infinite string generated by \theta :

For simplicity, we will omit the actual symbols in our graphic representation, and display X as just a rectangle:

Assume by contradiction X has n blocks occurring consecutively which are all permutations of each other. In particular, there is such a set of blocks B_1, \ldots, B_n with the length of each block minimal, and with B_1 starting leftmost of all such blocks.

Since X is generated by \theta , it can be split up into a concatenation of \theta -blocks (since X = \theta X ). These blocks do not necessarily have the same length. Each B_k starts in some \theta -block, which we call \theta i_k , where i_k \in I . The final block B_n ends in some \theta -block which we call \theta i_{n+1} . The situation looks somewhat like this:

(Actually, it may also happen that several blocks B_i fit into the same \theta -block, or that there is a \theta -block so small that it fits entirely within a block B_k . The treatment for these cases is similar to the case displayed above, and since it only overburdens the proof, I chose to omit it. For now, let’s just assume that the picture above is the correct one).

We can decompose each \theta -block into a left-subblock and a right-subblock, \theta i_k = V_k V_k' , so that B_k starts at the same place as V_k' for k \leq n , and so that B_n ends at the same place as V_{n+1} :

Let us calculate the values of f(V_k) . In the picture above, B_1 consists of V_1' followed by V_2 , so by additivity of f we see that

f(V_2) = f(B_1) - f(V_1'),

and in general

f(V_k) = f(B_{k-1}) - f(V_{k-1}').

By condition (2) in the statement of the lemma, f(\theta i_k) = 0 for all i_k \in I , and since \theta i_k is the concatenation of V_k and V_k' , this gives

f(V_k) = -f(V_k').

Plugging this in the equation above, we have

f(V_k) = f(B_{k-1}) + f(V_{k-1}).

Now, since the blocks B_1, \ldots, B_n are all permutations of each other, and since G is Abelian, we have that

f(B_1) = f(B_2) = \ldots = f(B_n) := D,

as each f(B_k) is just the sum of f over all individual symbols in the block. We then get

f(V_k) = f(V_{k-1}) + D,

implying that the f(V_k) with 1\leq k \leq n+1 form an arithmetic sequence in G ! But by condition (3) in the statement of the lemma, any arithmetic sequence with n+1 elements whose values are f(V) for some left-subblocks V must have difference D equal to 0. Thus

f(V_1) = f(V_2) = \ldots = f(V_{n+1}).

By condition (4), this means that either all the V_k s are equal to each other, or all the V_k' s are equal to each other. Suppose that it’s the V_k s that are equal (a similar argument works for the V_k' s). Now we know that our initial image was a bit off: the blocks \theta i_k in fact all have the same length:


Looking at the above picture, since V_2 = V_3 , and since B_1 is a permutation of B_2 , we immediately get that V_1' is a permutation of V_2' . This means that \theta i_1 is a permutation of \theta i_2 ! The same argument actually holds for all k , and so the blocks \theta i_k are all permutations of each other. But if the blocks \theta i_k are permutations of each other, then their frequency vectors are the same, which means that either all the i_k are the same, or the frequency matrix M has two identical rows. The latter cannot be true, since by assumption (1) of the lemma the frequency matrix is non-singular. The former also cannot be true, since then the symbols i_k themselves would form a sequence of n blocks which are permutations of each other. If the length of B_1 is larger than 1, this contradicts the minimality of B_1 , while if B_1 has length 1, then \theta i_k is just a single symbol, and since i_1 \ldots i_n appear in X , this contradicts the assumption that B_1 is the leftmost minimal permutation-repeating block (since \theta must in general blow up strings in order to generate an infinite word). Contradiction!

(Note: In the original proof, where the blocks B_i could be directly contained in blocks \theta i_k , it is not the \theta i_k ‘s that are necessarily permutations of each other, but rather a collection of blocks which are the images \theta D_k of some blocks D_k . By similar reasoning as above, it turns out that the D_k ‘s are permutations of each other, and the proof proceeds.)

Whew!

Having proved the lemma, all we need to do is find the right substitution \theta , the right Abelian group G and the right function f : I \to G which all satisfy the conditions (1) through (4). A simple task, easily solved by either divine inspiration or computer enumeration (I do not know which of these Dekking chose). In any case, here is the all-star league:

  • The substitution function \theta is given by \theta 0 = 011 and \theta 1 = 0001 .
  • The group is G = \mathbb{Z}_5 = \mathbb{Z}/ 5\mathbb{Z} .
  • The function f is given by f(0) = 1 and f(1) = 2 .

All that is left to do is to check that all four conditions of the lemma apply.

  1. The frequency vector of \theta 0 is (1,2) , and the frequency vector of \theta 1 is (3,1) . The frequency matrix is therefore M = \bigl( \begin{smallmatrix} 1 & 2 \\ 3 & 1 \end{smallmatrix} \bigr) , which is non-singular.
  2. The function f is 0 on \theta -blocks: f(\theta 0) = f(011) = f(0) + f(1) + f(1) = 1 + 2 + 2 = 5 = 0, and f(\theta 1) = f(0001) = 3f(0) + f(1) = 3 + 2 = 5 = 0.
  3. The left-subblocks of \theta are 0 , 01 , 011 for \theta 0 , and 0 , 00 , 000 , and 0001 for \theta 1 , on which the function f attains the values 1 , 3 , 0 , 1 , 2 , 3 , and 0 , respectively. Crucially, the set A as defined in the lemma does not contain the element 4 , and is equal to \{0,1,2,3\} . This set is progression-free of order 5 : since 5 is prime, any 5 -element arithmetic progression with non-zero difference in \mathbb{Z}_5 must contain the element 4 .
  4. You can similarly check that f is \theta -injective. (For example, f(01) = f(000) = 3 , and indeed the corresponding right-subblocks are both equal to 1 .)

All four conditions hold, so by the lemma the sequence generated by \theta does not have 4 consecutive blocks which are permutations of each other. We are done!
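To see the construction in action, here is a short sketch (mine, not Dekking’s code) that iterates his substitution and converts the resulting word into path coordinates, using the same assumed 0 = right, 1 = up convention as before. Feeding a prefix to the has_abelian_power checker from the earlier sketch returns False for k = 4 , exactly as the theorem promises (and True for k = 3 , as it must be, since the word begins 01100010001… and already contains the cube 000 ).

    # Generate a prefix of Dekking's word from theta(0) = 011, theta(1) = 0001
    # and turn it into a right/up path in Z^2.

    THETA = {"0": "011", "1": "0001"}

    def dekking_prefix(i):
        """The i-th iterate of the substitution, starting from '0'.  Each
        iterate is a prefix of the next, since theta(0) begins with 0."""
        word = "0"
        for _ in range(i):
            word = "".join(THETA[c] for c in word)
        return word

    def word_to_points(word):
        """Lattice points visited by the path ('0' = right, '1' = up)."""
        x = y = 0
        points = [(0, 0)]
        for c in word:
            if c == "0":
                x += 1
            else:
                y += 1
            points.append((x, y))
        return points

    word = dekking_prefix(5)                        # 443 letters
    print(word[:40])
    print(word_to_points(word)[:10])
    # has_abelian_power(word, 4) from the earlier sketch returns False here,
    # while has_abelian_power(word, 3) is True: the factor "000" alone is an
    # abelian cube.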

The first one hundred steps of Dekking’s path

That was quite a ride. Let me conclude it with a last remark about higher dimensions. The above theorem and computer search show that there exist monotone paths in \mathbb{Z}^2 which manage to avoid arithmetic progressions of length 4 (and therefore any larger length). What about paths in higher dimensions? Well, in the same paper and using the same lemma, Dekking proved that there are monotone paths in \mathbb{Z}^3 which avoid progressions of length 3 . The unavoidable question, “what about monotone paths in \mathbb{Z}^4 which avoid progressions of length 2 ?”, had to wait about 15 years, but this too was solved in the affirmative by Keränen, using a substitution function \theta which mapped the symbol 0 into an 85-letter string!

Until next time, here are two quick puzzles.

  1. Regarding the original question about arithmetic progressions in paths in \mathbb{Z}^n , is it easier to avoid arithmetic progressions if we consider all paths, and not just monotone paths?
  2. You can convince yourself that generating simple random walks is a lousy way of constructing paths which avoid arithmetic progressions. Still, it would be nice to get a decent bound on how many paths we need to generate until we get one that is free of the rascals. What is the probability P(m,n,k) that a simple random walk of m steps in \mathbb{Z}^n has an arithmetic progression of length k ? (A crude Monte Carlo sketch for estimating this appears below.)
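For the second puzzle, a crude Monte Carlo estimate is easy to set up. The sketch below is my own, and its conventions are assumptions: it only asks whether the set of visited points contains k+1 equally spaced collinear points, ignoring the order in which they were visited. It samples simple random walks and reports the fraction that contain a length- k progression:

    import random

    def random_walk_points(m, n):
        """The set of points visited by an m-step simple random walk in Z^n."""
        point = (0,) * n
        visited = {point}
        for _ in range(m):
            i = random.randrange(n)                 # pick a coordinate direction
            sign = random.choice((-1, 1))           # and a sign
            point = tuple(c + sign if j == i else c for j, c in enumerate(point))
            visited.add(point)
        return visited

    def contains_progression(point_set, k):
        """Does the set contain k+1 equally spaced collinear points?"""
        for p in point_set:
            for q in point_set:
                d = tuple(qc - pc for pc, qc in zip(p, q))
                if all(c == 0 for c in d):
                    continue
                if all(tuple(pc + j * dc for pc, dc in zip(p, d)) in point_set
                       for j in range(2, k + 1)):
                    return True
        return False

    def estimate_P(m, n, k, trials=200):
        """Monte Carlo estimate of P(m, n, k)."""
        hits = sum(contains_progression(random_walk_points(m, n), k)
                   for _ in range(trials))
        return hits / trials

    print(estimate_P(50, 2, 4))   # per the remark above, expect this to be close to 1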

* I found the reference to Dekking’s paper in the Notes section of the first chapter of the book Automatic Sequences: Theory, Applications, Generalizations by Jean-Paul Allouche and Jeffrey Shallit. It seems like a nice book, with plenty of exercises, and I hope to read it one day. The generalized Thue-Morse words are called “Abelian power-free”, or “strongly non-repetitive”.
