This is part two in a three-part series about recurrence and transience on the integer lattices. Here is part one. Here is part three.

In the previous post, we saw that the integer lattice $\mathbb{Z}$ is **recurrent**. This means that a simple random walk, when starting at the origin, will return to the origin with probability $1$. In fact, it will return to the origin infinitely many times, and will actually (eventually) visit every vertex an infinite number of times. Quite the workout.

Today we will break free of our one-dimensional chains, and ask ourselves: What happens for $\mathbb{Z}^2$?

At first, $\mathbb{Z}^2$ seems much more complicated than $\mathbb{Z}$: After all, the former is composed of infinitely many copies of the latter! But if you stare at the square lattice hard enough (provided that it does not stare back into you), you might start feeling a slight tingling of an intuition here: since there are so many more edges, it should somehow be easier to run off to infinity – there are simply so many more ways that the random walk can run off and never return! Of course, this intuition is flawed: More edges also means more ways to return back to the origin, but it’s not too bad a starting point. For example, we might imagine that if in some sense there are always “more paths” leading off to infinity than paths leading back to the origin, then there is a non-zero probability of never coming back. This is all a bit vague, but we will later see that there is in fact a way to make precise the idea that more edges make it easier to escape. That’s only a basic notion, though, a rough guideline; inevitably, it is the particular details of where the edges are situated and how many there are that dictate the final recurrence/transience verdict.

But enough philosophizing. Without further ado, here is our main theorem:

**Theorem**: The simple random walk on $\mathbb{Z}^2$ is recurrent.

Huh! Despite having a gazillion more ways to get lost, a drunkard randomly scouring the streets of an infinite Manhattan will always find his way back home. But he will not do so easily.

In the same spirit as the previous post, we will give two proofs of this theorem: One which directly counts the expected number of returns to the origin, and one which uses electrical networks to bound the probability of escaping from large boxes around the origin. In addition to proving the theorem, both methods will demonstrate that despite the fact that both $\mathbb{Z}$ and $\mathbb{Z}^2$ are recurrent, the random walk on the latter has a harder time coming back home than the random walk on the former.

**Method #1: The straightforward calculation**

As described in part #1, the gist of this method is to calculate the expected number of times that the random walk returns to the origin. Denote by $p$ the probability that the random walk ever returns to the origin (so $p = 1$ means that the walk is recurrent, while $p < 1$ means that the walk is transient). As we saw, if $p = 1$ then the expected number of returns is infinite, while if $p < 1$ then the expected number of returns is finite. All that we need to do, then, is find a way to explicitly calculate the expected number of times the random walk on $\mathbb{Z}^2$ hits the origin.

The most straightforward way to do this is as follows. Let $X_n$ be the random variable that is $1$ if the walker is at the origin at time $n$, and $0$ otherwise. Then the total number of times that the walk visits the origin is

$$V = \sum_{n=0}^{\infty} X_n.$$

The expectation of this variable is given by

$$\mathbb{E}[V] = \sum_{n=0}^{\infty} \mathbb{E}[X_n] = \sum_{n=0}^{\infty} p_n,$$

where $p_n$ is the probability that the walker is at the origin at time $n$.

When the walk was one-dimensional, on $\mathbb{Z}$, we reckoned that out of $2n$ steps, the walker has to take exactly $n$ steps to the left and exactly $n$ steps to the right in order to return to the origin. Since the order doesn’t matter, and since there are $2^{2n}$ paths of length $2n$, in the integer lattice $\mathbb{Z}$ the probability $p_{2n}$ was equal to

$$p_{2n} = \binom{2n}{n} \frac{1}{2^{2n}}.$$

We then saw that for large $n$, this probability was roughly proportional to $1/\sqrt{n}$, so the sum over all $n$ diverged.
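This asymptotic is easy to verify numerically. Here is a small sketch (the helper name `p_origin_1d` is mine, for illustration) comparing the exact probability with the approximation $1/\sqrt{\pi n}$:

```python
# Probability that the 1-D simple random walk is back at the origin
# after 2n steps: p_{2n} = C(2n, n) / 4^n.
# For large n this is approximately 1 / sqrt(pi * n).
from math import comb, pi, sqrt

def p_origin_1d(n):
    return comb(2 * n, n) / 4**n

for n in (10, 100, 1000):
    print(n, p_origin_1d(n), 1 / sqrt(pi * n))
```

The two columns agree to better than a tenth of a percent already at $n = 1000$.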

If we want, we can repeat this argument for the two-dimensional case as well: In order to return to the origin after exactly $n$ steps, the walk has to take an even number of steps in the $x$ direction and an even number of steps in the $y$ direction; in particular, $n$ must be even, so we will denote it by $n = 2m$. Further, the number of left steps must be equal to the number of right steps, and the number of up steps must be equal to the number of down steps. So we can calculate $p_{2m}$ as follows: First, pick an $i$ between $0$ and $m$ which denotes how many steps the walker takes to the left. It must therefore take $i$ steps to the right as well. This means that the walker took $m - i$ steps up and $m - i$ steps down. Since there are $4^{2m}$ paths of length $2m$ overall, summing over all possible values of $i$ gives

$$p_{2m} = \frac{1}{4^{2m}} \sum_{i=0}^{m} \binom{2m}{i,\, i,\, m-i,\, m-i},$$

where $\binom{2m}{i,\, i,\, m-i,\, m-i} = \frac{(2m)!}{i!\, i!\, (m-i)!\, (m-i)!}$ is the multinomial coefficient.
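If you want concrete numbers, the multinomial sum can be evaluated directly; here is a sketch (function name mine):

```python
# p_{2m} for the walk on Z^2, computed from the multinomial sum
# p_{2m} = 4^{-2m} * sum_{i=0}^{m} (2m)! / (i! * i! * (m-i)! * (m-i)!).
from math import factorial

def p_origin_2d(m):
    s = sum(
        factorial(2 * m) // (factorial(i) ** 2 * factorial(m - i) ** 2)
        for i in range(m + 1)
    )
    return s / 4 ** (2 * m)

for m in (1, 2, 3):
    print(m, p_origin_2d(m))
```

For instance, after two steps the walker is back at the origin with probability $4/16 = 1/4$: each of the four first steps must simply be undone.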

You can evaluate this sum if you want, but it’s rather tedious. Luckily, there is a neat trick which allows us to calculate $p_{2m}$ while only using the (already known) result for the one-dimensional case.

Recall that in the simple random walk on $\mathbb{Z}^2$, at every step the walker flips a balanced four-sided coin, and goes either North, East, South or West depending on the result. The trick is to consider the following process instead. Suppose that at every step, the walker flips two independent two-sided coins. First, if the first coin turns up heads, the walker takes one step to the East, and if it’s tails, the walker walks West. Then, if the second coin turns up heads, the walker heads North, and otherwise South. The following image shows what happens when we combine both left-right and up-down motions:

Lo and behold! The net result of this movement is to go either NE, SE, SW or NW, each with probability $1/4$. This is exactly the random walk on $\mathbb{Z}^2$, where the lattice has been rotated by $45^\circ$ relative to the usual orientation, and scaled by a factor of $\sqrt{2}$. Thus the simple random walk on $\mathbb{Z}^2$ is actually the product of two simple random walks on $\mathbb{Z}$!

The conclusion from all of this is that $p_{2m}$ for the two-dimensional walk is equal to the square of the corresponding value for the one-dimensional walk. In the previous post we saw that the latter is proportional to $1/\sqrt{m}$, and so in this case each summand is proportional to $1/m$. We finally get

$$\mathbb{E}[V] = \sum_{m=0}^{\infty} p_{2m} \approx \sum_{m=1}^{\infty} \frac{1}{\pi m} = \infty.$$
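The product structure can also be confirmed numerically: the multinomial sum agrees exactly with the square of the one-dimensional probability. A quick sketch (helper names mine):

```python
# Check that p_{2m} on Z^2 equals the square of p_{2m} on Z, i.e.
# 4^{-2m} * sum_i (2m)!/(i! i! (m-i)! (m-i)!)  ==  (C(2m, m) / 4^m)^2.
from math import comb, factorial, isclose

def p_2d(m):
    s = sum(factorial(2 * m) // (factorial(i) ** 2 * factorial(m - i) ** 2)
            for i in range(m + 1))
    return s / 4 ** (2 * m)

def p_1d(m):
    return comb(2 * m, m) / 4**m

for m in (1, 2, 5, 10, 20):
    assert isclose(p_2d(m), p_1d(m) ** 2)
print("two-dimensional p_{2m} equals (one-dimensional p_{2m})^2")
```

(Combinatorially, this is the identity $\sum_i \binom{2m}{i,\,i,\,m-i,\,m-i} = \binom{2m}{m}^2$.)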

The expected number of times that the random walk hits the origin is infinite, and hence the walk is recurrent.

This is a nice result, because it shows us that returning to the origin in $\mathbb{Z}^2$ is quite a lot harder than returning to the origin in $\mathbb{Z}$. You can think of this in two ways. First, qualitatively, the two-dimensional walk is a product of two one-dimensional walks: You need to magically have two walks return to zero *at the same time*, which is of course a lot harder than having just one walk return to zero. Second, and more quantitatively, in the one-dimensional case the expected number of returns was $\sum_n 1/\sqrt{n}$, while in the two-dimensional case the sum was $\sum_n 1/n$. This last sum does indeed diverge, but only barely so (for example, if we had snuck in under the cover of night and casually tweaked the denominator of each summand to be $n^{1.01}$ instead of just $n$, the sum would converge).
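To get a feel for just how barely the sum diverges, here is a sketch of the partial sums of $\sum 1/m$, which grow only like $\log N$:

```python
# Partial sums of 1/m grow like log(N) + 0.577... -- the series
# diverges, but excruciatingly slowly.
from math import log

def harmonic(N):
    return sum(1 / m for m in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    print(N, round(harmonic(N), 3), round(log(N), 3))
```

Ten thousand times more terms buy you only about $\log(10^4) \approx 9.2$ extra expected returns.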

**Method #2: Electrical networks**

This method is still freakin’ cool because it uses resistors, but it also has the added bonus of formalizing the notion that adding more edges to a graph makes it easier to escape to infinity.

Let’s recapitulate what we found in the previous post. By replacing each edge in the graph with a unit resistor, we showed that we can check whether or not the escape probability is $0$ by computing the effective resistance between the origin and the boundary of larger and larger boxes. If the effective resistance grows to infinity, then the escape probability is $0$, and the walk is recurrent. And if the effective resistance stays bounded, then there is a non-zero probability of running away and never returning to the origin.

Here’s the cool thing: The relation between effective resistance and the escape probability immediately implies that adding more edges to a graph makes it easier to escape! This is because in our model, two vertices which have no edge between them can be thought of as connected by an **infinite** resistor: After all, if you can’t directly pass from vertex $x$ to vertex $y$, the conductance between them can be treated as essentially $0$. Adding an edge between $x$ and $y$ is then equivalent to replacing a resistor with resistance $\infty$ by a resistor with resistance $1$, which is by all accounts substantially smaller! Thus the effective resistance from the origin to the boundary of a box can only decrease when we add edges to the graph.
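This monotonicity can be checked on a toy network. The sketch below (helper name mine) computes effective resistance from the pseudoinverse of the graph Laplacian, with each edge a unit resistor, and shows that adding an edge lowers it:

```python
# Effective resistance between two nodes of a unit-resistor network,
# computed via the Moore-Penrose pseudoinverse of the graph Laplacian:
# R_eff(a, b) = (e_a - e_b)^T L^+ (e_a - e_b).
import numpy as np

def effective_resistance(n, edges, a, b):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    e = np.zeros(n)
    e[a], e[b] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

path = [(0, 1), (1, 2), (2, 3)]   # three unit resistors in series: R = 3
print(effective_resistance(4, path, 0, 3))
# extra edge from 0 to 3: 3 ohms in parallel with 1 ohm gives 3/4 ohm
print(effective_resistance(4, path + [(0, 3)], 0, 3))
```

The second value is smaller, exactly as Rayleigh’s monotonicity principle promises.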

To show that $\mathbb{Z}^2$ is recurrent, we are going to look at the effective resistance between the origin and the boundary of square boxes surrounding the origin:

If you are industrious and laborious enough, you might find an explicit expression for the effective resistance, and show that it diverges. But there is actually a short, clean, backed-by-physical-intuition way of showing that it grows to infinity, using the so-called **shorting law**. If we have two different vertices $x$ and $y$, we can short them by connecting them with a resistance-$0$ resistor (i.e. a copper wire in “real life”). This forces them to be at the same potential, and effectively “merges” the vertices together. Of course, this reduces the effective resistance of the network, sometimes drastically so.

The basic strategy is to start with $\mathbb{Z}^2$, and short together some of the vertices. The trick is to connect enough vertices, and in the right places, so that computing the effective resistance of the new network becomes easy, but not to short too much, so that the new network’s effective resistance won’t drop by too substantial an amount. In our particular quest to prove that $\mathbb{Z}^2$ is recurrent, all we require is a series of shorts which will allow us to easily compute the effective resistance, while still keeping it divergent.

So which vertices to short? Well, just fuse together the boundary of every box, as seen in the following image, where shorted vertices are connected by the same thick black line:

Effectively, after shorting, the origin is connected by four resistors in parallel to a single vertex $v_1$, which is the merging-together of the four vertices surrounding the origin in the original $\mathbb{Z}^2$. This vertex is in turn connected to a vertex $v_2$ by $12$ parallel resistors, and in general, vertex $v_n$ is connected to vertex $v_{n+1}$ via $4(2n+1)$ parallel resistors. Since the effective resistance of $k$ unit resistors in parallel is $1/k$, we end up with a single chain of resistors connected in series:

$$R_N = \frac{1}{4} + \frac{1}{12} + \frac{1}{20} + \cdots = \sum_{n=0}^{N-1} \frac{1}{4(2n+1)}.$$

The sum $\sum_n \frac{1}{4(2n+1)}$ diverges (it is comparable to the harmonic series), which means that the effective resistance of the new, shorted network diverges as we take larger and larger boxes. But this resistance is smaller than that of the original $\mathbb{Z}^2$, and so the effective resistance of $\mathbb{Z}^2$ diverges as well. Hence the simple random walk on the two-dimensional lattice is recurrent!
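Numerically, the divergence here is just as sluggish as in the first method. Assuming $4(2n+1)$ parallel unit resistors between consecutive shorted boundaries (so edge counts $4, 12, 20, \dots$), the shorted resistance out to the $N$-th box is:

```python
# Resistance of the shorted chain: R_N = sum_{n=0}^{N-1} 1/(4*(2n+1)),
# assuming 4(2n+1) parallel unit resistors between boundaries n and n+1.
def shorted_resistance(N):
    return sum(1 / (4 * (2 * n + 1)) for n in range(N))

for N in (10, 10**3, 10**5):
    print(N, shorted_resistance(N))   # grows roughly like log(N) / 8
```

The walker must cross an unbounded, but only logarithmically growing, resistance to reach infinity.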

We have seen that adding edges to a graph makes it easier to escape to infinity, and that $\mathbb{Z}^2$ is recurrent, although only marginally so. You might therefore guess that $\mathbb{Z}^3$ is transient, and that getting lost in space is a lot easier than getting lost in the plane. Is this true? Well, that’s a matter for post #3…

Until next time, here is a quick puzzle. We now know that the square lattice, where every vertex has four neighbors, is recurrent. What about the triangular lattice, where every vertex has six neighbors?