There and back again, part 2

This is part two in a three-part series about recurrence and transience on the integer lattices. Here is part one. Here is part three.


In the previous post, we saw that the integer lattice \mathbb{Z} is recurrent. This means that a simple random walk, when starting at the origin 0 , will return to the origin with probability 1 . In fact, it will return to the origin infinitely many times, and will actually (eventually) visit every vertex an infinite number of times. Quite the workout.

Today we will break free of our one-dimensional chains, and ask ourselves: What happens for \mathbb{Z}^2 ?

At first, \mathbb{Z}^2 seems much more complicated than \mathbb{Z} : After all, the former comprises infinitely many copies of the latter! But if you stare at the square lattice hard enough (provided that it does not stare back into you), you might start feeling a slight tingling of an intuition: since there are so many more edges, it should somehow be easier to run off to infinity – there are simply so many more ways for the random walk to run off and never return! Of course, this intuition is flawed: more edges also means more ways to return to the origin. But it’s not too bad a starting point. For example, we might imagine that if in some sense there are always “more paths” leading off to infinity than paths leading back to the origin, then there is a non-zero probability of never coming back. This is all a bit vague, but we will later see that there is in fact a way to make precise the idea that more edges make it easier to escape. That’s only a rough guideline, though; inevitably, it is the particular details of where the edges are situated and how many there are that dictate the final recurrence/transience verdict.

But enough philosophizing. Without further ado, here is our main theorem:

Theorem: The simple random walk on \mathbb{Z}^2 is recurrent.

Huh! Despite having a gazillion more ways to get lost, a drunkard randomly scouring the streets of an infinite Manhattan will always find his way back home. But he will not do so easily.

In the same spirit as the previous post, we will give two proofs of this theorem: One which directly counts the expected number of returns to the origin, and one which uses electrical networks to bound the probability of escaping from large boxes around the origin. In addition to proving the theorem, both methods will demonstrate that despite the fact that both \mathbb{Z} and \mathbb{Z}^2 are recurrent, the random walk on the latter has a harder time coming back home than the random walk on former.

Method #1: The straightforward calculation
As described in part #1, the gist of this method is to calculate the expected number of times that the random walk returns to the origin. Denote by p_{\mathrm{ret}} the probability that the random walk returns to the origin (so p_{\mathrm{ret}} = 1 means that the walk is recurrent, while p_{\mathrm{ret}} < 1 means that the walk is transient). As we saw, if p_{\mathrm{ret}} = 1 then the expected number of returns is infinite, while if p_{\mathrm{ret}} < 1 then the expected number of returns is finite. All that we need to do, then, is find a way to explicitly calculate the expected number of times the random walk on \mathbb{Z}^2 hits the origin.

The most straightforward way to do this is as follows. Let X_n be the random variable that is 1 if the walker is at the origin at time n , and 0 otherwise. Then the total number of times that the walk visits the origin is

T = \sum_{n=0}^{\infty} X_n.

The expectation of this variable is given by

\mathbb{E}[T] = \sum_{n=0}^{\infty} \mathbb{E}[X_n]

= \sum_{n=0}^{\infty} \mathbf{Pr}[X_n = 1].
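Before computing this sum exactly, we can watch it grow in a simulation. Here is a minimal sketch (the function name and the trial/step counts are illustrative choices of mine, not anything from the post) that estimates the expected number of visits to the origin within the first few thousand steps:

```python
import random

def count_origin_visits(steps, trials=500, seed=0):
    """Monte Carlo estimate of E[visits to the origin in the first
    `steps` steps] for a simple random walk on Z^2 (time 0 included)."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(trials):
        x = y = 0
        visits = 1  # the walk starts at the origin
        for _ in range(steps):
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if x == y == 0:
                visits += 1
        total += visits
    return total / trials

# The truncated expectation keeps creeping upward as we allow more steps,
# consistent with E[T] being infinite -- but the growth is very slow
# (roughly logarithmic), as the rest of the post explains.
for steps in (100, 1000, 4000):
    print(steps, count_origin_visits(steps))
```

Since the sum turns out to diverge only logarithmically, don’t expect dramatic numbers from the simulation; the averages inch upward.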

When the walk was one-dimensional, on \mathbb{Z} , we reckoned that out of n steps, the walker has to take exactly n/2 steps to the left and exactly n/2 steps to the right in order to return to the origin (in particular, n must be even; for odd n the probability is 0 ). Since the order doesn’t matter, and since there are 2^{n} paths of length n , in the integer lattice the probability was equal to

\mathbf{Pr}[X_{n} = 1] = {n \choose n/2} / 2^{n}.

We then saw that for large n , this probability was roughly proportional to 1 / \sqrt{n} , so the sum over all n diverged.
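This is easy to check numerically. The following sketch (the function name is mine) computes {n \choose n/2}/2^n exactly and compares it with the asymptotic value \sqrt{2/(\pi n)} that Stirling’s formula gives:

```python
from math import comb, pi, sqrt

def p_return_1d(n):
    """Pr[the 1D walk is at the origin at time n], for even n."""
    assert n % 2 == 0
    return comb(n, n // 2) / 2**n

# Stirling's formula gives C(n, n/2) / 2^n ~ sqrt(2 / (pi * n)):
for n in (10, 100, 1000):
    print(n, p_return_1d(n), sqrt(2 / (pi * n)))
```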

If we want, we can repeat this argument for the two-dimensional case \mathbb{Z}^2 as well: In order to return to the origin after exactly n steps, the walk has to take an even number of steps in the x direction and an even number of steps in the y direction; in particular, n must be even, so we will denote it by n=2k . Further, the number of left steps must be equal to the number of right steps, and the number of up steps must be equal to the number of down steps. So we can calculate \mathbf{Pr}[X_{2k} = 1] as follows: First, pick an \ell between 0 and k which denotes how many steps the walker takes to the left. It must therefore take the same number of steps to the right. This means that the walker took k-\ell steps up and k-\ell steps down as well. Since there are 4^{2k} paths of length 2k overall, summing over all possible \ell values gives

\mathbf{Pr}[X_{2k} = 1] = \frac{1}{4^{2k}} \sum_{\ell=0}^{k}{2k \choose \ell,\ell,k-\ell,k-\ell},

where {2k \choose \ell,\ell,k-\ell,k-\ell} is the multinomial coefficient.
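If you do want to evaluate this sum directly, it is only a few lines of code. A sketch (the function name is mine):

```python
from math import factorial

def p_return_2d(k):
    """Pr[the 2D walk is at the origin at time 2k], via the multinomial sum."""
    total = 0
    for l in range(k + 1):
        # multinomial coefficient (2k)! / (l! * l! * (k-l)! * (k-l)!)
        total += factorial(2 * k) // (factorial(l) ** 2 * factorial(k - l) ** 2)
    return total / 4 ** (2 * k)

print(p_return_2d(1))  # 0.25: of the 16 two-step paths, 4 return to the origin
```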

You can work out a closed form for this sum if you want, but it’s rather tedious. Luckily, there is a neat trick which allows us to calculate \mathbf{Pr}[X_{2k} = 1] while only using the (already known) result for the one-dimensional case.
Recall that in the simple random walk on \mathbb{Z}^2 , at every step the walker flips a balanced four-sided coin, and goes either North, East, South or West depending on the result. The trick is to consider the following process instead. Suppose that at every step, the walker flips two independent two-sided coins. First, if the first coin turns up heads, the walker takes one step to the East, and if it’s tails, the walker walks West. Then, if the second coin turns up heads, the walker heads North, and otherwise South. The following image shows what happens when we combine both left-right and up-down motions:

Lo and behold! The net result of this movement is to go either NE, SE, SW or NW, each with probability 1/4 . This is exactly the random walk in \mathbb{Z}^2 , where the lattice has been rotated by 45^{\circ} relative to the usual orientation, and scaled by a factor of \sqrt{2} . Thus the simple random walk on \mathbb{Z}^2 is actually the product of two simple random walks on \mathbb{Z} !

The conclusion from all of this is that \mathbf{Pr}[X_{2k} = 1] for the two-dimensional walk is equal to the square of the corresponding value for the one-dimensional walk. In the previous post we saw that the latter is proportional to 1/\sqrt{k} , and so in this case each summand is proportional to 1/k . We finally get

\mathbb{E}[T] = \sum_{n=0}^{\infty} \mathbb{E}[X_n]

= \sum_{n=0}^{\infty} \mathbf{Pr}[X_n = 1]

\approx \sum_{k=1}^{\infty} \frac{1}{k} = \infty.

The expected number of times that the random walk hits the origin is infinite, and hence the walk is recurrent.
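The product-of-walks identity behind this calculation can also be verified numerically: the multinomial sum for the two-dimensional return probability agrees exactly with the square of the one-dimensional one. A small sanity check (function names are mine):

```python
from math import comb, factorial

def p1(k):
    """1D return probability at time 2k: C(2k, k) / 4^k."""
    return comb(2 * k, k) / 4**k

def p2(k):
    """2D return probability at time 2k, via the multinomial sum."""
    s = sum(
        factorial(2 * k) // (factorial(l) ** 2 * factorial(k - l) ** 2)
        for l in range(k + 1)
    )
    return s / 4 ** (2 * k)

# Pr[X_2k = 1] in two dimensions equals the square of the 1D value:
for k in (1, 2, 5, 10):
    assert abs(p2(k) - p1(k) ** 2) < 1e-12
```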

This is a nice result, because it shows us that returning to the origin in \mathbb{Z}^2 is quite a lot harder than returning to the origin in \mathbb{Z} . You can think of this in two ways. First, qualitatively, the two-dimensional walk is a product of two one-dimensional walks: You need to magically have two walks return to zero at the same time, which is of course a lot harder than having just one walk return to zero. Second, and more quantitatively, in the one-dimensional case the expected number of returns was \sum 1/\sqrt{k} , while in the two-dimensional case the sum was \sum 1/k . This last sum does indeed diverge, but only barely so (for example, if we had snuck in under the cover of night and casually tweaked the denominator of each summand to be k \log^2 k instead of just k , the sum would converge).

Method #2: Electrical networks
This method is still freakin’ cool because it uses resistors, but it also has the added bonus of formalizing the notion that adding more edges to a graph makes it easier to escape to infinity.

Let’s recapitulate what we found in the previous post. By replacing each edge in the graph with a unit resistor, we showed that we can calculate whether or not the escape probability is 0 by checking the effective resistance between the origin and the boundary of larger and larger boxes. If the effective resistance grows to infinity, then the escape probability is 0 , and the walk is recurrent. And if the effective resistance is finite, then there is a non-zero probability of running away and never returning to the origin.

Here’s the cool thing: The relation between effective resistance and the escape probability immediately implies that adding more edges to a graph makes it easier to escape! This is because in our model, two vertices which have no edge between them can be thought of as connected by an infinite resistor: After all, if you can’t directly pass from vertex x to vertex y , the conductance between them C_{xy} can be treated as essentially 0 . Adding an edge between x and y is then equivalent to replacing a resistor with resistance \infty by a resistor with resistance 1 , which is by all accounts substantially smaller! Thus the effective resistance from 0 to the boundary of a box can only decrease when we add edges to the graph.
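This monotonicity (a special case of what is often called Rayleigh’s monotonicity law) can be demonstrated concretely by computing effective resistances from the graph Laplacian: ground one terminal, inject a unit current at the other, and solve for the potentials. Here is a self-contained sketch; everything in it, from the function name to the 4-cycle example, is my own illustration, not from the post:

```python
def effective_resistance(n, edges, a, b):
    """Effective resistance between nodes a and b in a network of unit
    resistors: ground b, inject 1 A at a, solve the reduced Laplacian
    system L'v = e_a by Gaussian elimination; then R_eff = v[a]."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    keep = [i for i in range(n) if i != b]  # drop the grounded node
    A = [[L[r][c] for c in keep] for r in keep]
    rhs = [1.0 if r == a else 0.0 for r in keep]
    m = len(keep)
    for col in range(m):  # Gaussian elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            rhs[r] -= f * rhs[col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
    v = [0.0] * m
    for r in range(m - 1, -1, -1):  # back-substitution
        s = sum(A[r][c] * v[c] for c in range(r + 1, m))
        v[r] = (rhs[r] - s) / A[r][r]
    return v[keep.index(a)]

# A 4-cycle: across one edge, R_eff = 1 ohm in parallel with 3 ohms = 3/4.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
r_before = effective_resistance(4, cycle, 0, 1)
# Adding an edge (a diagonal) can only decrease the effective resistance:
r_after = effective_resistance(4, cycle + [(0, 2)], 0, 1)
print(r_before, r_after)  # 0.75 and 0.625 (up to float error)
```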

For showing that \mathbb{Z}^2 is recurrent, we are going to look at the effective resistance between the origin and the boundary of square boxes surrounding the origin:

If you are industrious and laborious enough, you might work out an explicit formula for the effective resistance, and show that it diverges. But there is actually a short, clean, backed-by-physical-intuition way of showing that it grows to infinity, using the so-called shorting law. If we have two different vertices x and y , we can short them by connecting them with a zero-resistance resistor (i.e. a copper wire in “real life”). This forces them to be at the same potential, and effectively “merges” the vertices together. Of course, this reduces the effective resistance of the network, sometimes drastically so.

The basic strategy is to start with \mathbb{Z}^2 , and short together some of the vertices. The trick is to connect enough vertices and in the right places, so that computing the effective resistance of the new network becomes easy, but not to short too much, so that the new network’s effective resistance won’t change by too substantial an amount. In our particular quest to prove that \mathbb{Z}^2 is recurrent, all we require is a series of shorts which will allow us to easily compute the effective resistance, while still keeping it infinite.

So which vertices to short? Well, just fuse together the boundary of every box, as seen in the following image, where shorted vertices are connected by the same thick black line:

Effectively, after shorting, the origin 0 is connected by four resistors in parallel to a single vertex v_1 , which is the merging-together of the eight vertices on the boundary of the first box (only four of them, the origin’s immediate neighbors, contribute edges to the origin). This vertex is in turn connected to a vertex v_2 by 12 parallel resistors, and in general, vertex v_n is connected to vertex v_{n+1} via 8n+4 parallel resistors. Since the effective resistance of k unit resistors in parallel is 1/k , we end up with a single chain of resistors connected in series:

The total resistance of the chain is \sum_n \frac{1}{8n+4} , which diverges just like \sum 1/n (each term is at least \frac{1}{12n} ). This means that the effective resistance of the new, shorted network diverges as we take larger and larger boxes. But this resistance was smaller than that of the original \mathbb{Z}^2 , and so the effective resistance of \mathbb{Z}^2 diverges as well. Hence the simple random walk on the two-dimensional lattice is recurrent!
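The divergence of the shorted chain is a one-liner to check numerically. A quick sketch (the function name is mine):

```python
def shorted_resistance(N):
    """Resistance of the shorted chain out to the N-th box boundary:
    the link from v_n to v_{n+1} is 8n+4 unit resistors in parallel
    (n = 0 gives the 4 resistors leaving the origin)."""
    return sum(1 / (8 * n + 4) for n in range(N))

# Grows without bound, roughly like (1/8) * log(N):
for N in (10, 100, 1000, 10000):
    print(N, shorted_resistance(N))
```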


We have seen that adding edges to a graph makes it easier to escape to infinity, and that \mathbb{Z}^2 is recurrent, although only marginally so. You might therefore guess that \mathbb{Z}^3 is transient, and that getting lost in space is a lot easier than getting lost in the plane. Is this true? Well, that’s a matter for post #3…

Until next time, here is a quick puzzle. We now know that the square lattice, where every vertex has four neighbors, is recurrent. What about the triangular lattice, where every vertex has six neighbors?
