This is part three in a three-part series about recurrence and transience on the integer lattices. Here is part one. Here is part two.

Weary and tired, we are approaching the end of our coin-flipping, lattice-exploring adventures, but with one aching question still burning deep within our hearts: Is the random walk on the three-dimensional lattice recurrent? At this point, your intuition is starting to whisper softly and seductively in your ear, “my dear beloved, the random walk on $\mathbb{Z}^3$ is transient, can’t you see? While the walk on $\mathbb{Z}$ had a relatively easy time finding its way back to the origin, we saw that just by adding an extra dimension, $\mathbb{Z}^2$ was already so much closer to being transient. It’s now almost obvious that the horrendous maze of edges that is the three-dimensional lattice can only be transient. Why, just look how hard it is to draw the thing on the screen! No doubt, a jet-pack-wielding drunkard (both a marvelous sight and an imminent danger) who stumbles randomly in space could well find himself lost forever, drifting away towards infinity”.

That’s some intuition you got there! At least it’s got the right answer down: Not even the most entrenched anti-vaxxer can deny it, the simple random walk on $\mathbb{Z}^3$ is transient!

We will again give two proofs. The first will use our “old-reliable” electrical networks, which so far have served us well and without fail, and will continue to serve us well and without fail until the end of time. The second will try to use multiple independent random walks in a way similar to the two-dimensional case. It will initially fail, but after a slight tweak and a cute theorem, will ultimately prevail; for all’s well that ends well.

**Method #1: Electrical networks**

By this time, you are already seasoned electrical-resistance warriors. “What’s the big deal”, you say, “I mean, couldn’t we just approximate the complicated structure of $\mathbb{Z}^3$ by some easier-to-control graph which we know is transient? This shouldn’t be too hard, since all we have to do is calculate its effective resistance. And you know what, I’m willing to bet my jetpack that doing this will involve some sort of operation stemming from a real, physical-network law, just like the shorting law was inspired by real-life short-circuiting.”

Indeed, the principle we are going to use this time is the **cutting law**. If we have two vertices $a$ and $b$ which are connected by a resistor, we can remove the resistor, thereby making the direct resistance between $a$ and $b$ infinite, and generally increasing the effective resistance of the entire network. This amounts to deleting an edge from the original graph from which the electrical network was obtained.

Our strategy will then be to cut away lots of edges from $\mathbb{Z}^3$, so that in whatever’s left, we can easily calculate the effective resistance between the origin and the boundary of a large box. The trick is to cut out the right edges and in the right places, so that the effective resistance of the resultant graph doesn’t grow too much, and in fact remains *finite* as the size of the boxes grows to infinity. By the cutting law, this will imply that the effective resistance of boxes in the original $\mathbb{Z}^3$ is also finite, which in turn means that there is a non-zero probability to escape to infinity. In fact, we calculated this probability in the first post in the series:

$$p_{\text{escape}} = \frac{1}{c_o \, R_{\text{eff}}},$$

where $c_o$ is the total conductance of the origin. Thus, getting an upper bound on the effective resistance also gives a lower bound on the probability to escape.

That’s all very well and nice, but the real problem here is to find out which edges to cut. And unfortunately, there is no shortcut, and no general method which always works; you just have to go to the graph, give it a good shake, and see what falls out.

There is one guideline though, which is quite well and thoroughly explained in the book Random walks and electric networks, and which at least offers some insight to the problem. While calculating resistances for general graphs usually involves clever manipulations and a good unhealthy dose of Kirchhoff’s laws, there is one particularly easy case where even preschoolers have a good time calculating: Trees. In particular, symmetric trees, where the number of offspring a vertex has only depends on its distance to the root of the tree. These can always be quickly reduced to a simple line of resistors by applying the parallel law (think why!), and for a line of resistors the effective resistance is immediate. So all we have to do is find a nice symmetric tree which sits comfortably and snugly inside $\mathbb{Z}^3$. We’ll then prune away all edges which are not in this tree.
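To see how painless this reduction really is, here is a small sketch in Python (the function name and input format are my own invention): by symmetry, all branches of a given generation are in parallel, so each generation collapses into a single resistor, and the whole tree into a line of resistors in series.

```python
def symmetric_tree_resistance(offspring, lengths):
    """Effective resistance from the root of a symmetric tree to its deepest level.

    offspring[n]: number of children of each generation-n vertex.
    lengths[n]:   length (in unit resistors) of the branches grown at step n.

    By symmetry, all vertices of a given generation sit at the same potential,
    so the parallel law collapses generation n into a single resistor of value
    lengths[n] / (number of branches at that depth); the series law then sums."""
    resistance = 0.0
    branches = 1
    for kids, length in zip(offspring, lengths):
        branches *= kids                 # parallel branches in this generation
        resistance += length / branches  # one collapsed resistor, in series
    return resistance

# A full binary tree with unit-length branches: 1/2 + 1/4 + 1/8 + ... -> 1.
print(symmetric_tree_resistance([2] * 30, [1] * 30))
```

The rule of thumb it exposes: the resistance stays finite exactly when the branch counts outrun the branch lengths.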

In fact, if you follow our construction in the previous post for $\mathbb{Z}^2$, you will notice that there too we constructed a tree. So a good place to start the investigation for the three-dimensional lattice is to see what sorts of trees it contains (and what sorts of trees it doesn’t).

I think Doyle and Snell do a pretty good job in their treatise on treating trees, so if you want the background, you should definitely go and read them (see page 78 for trees in general, and pages 82-92 for the specific tree we will now show). In fact, I’ll even blatantly copy some images from the book.

Anyway, here is an example of a nice, easily-computable graph which sits inside $\mathbb{Z}^3$. It is not exactly a tree, but is heavily inspired by one. We build it in steps. Take the origin $(0,0,0)$, and connect it to the three vertices which are the non-negative integer solutions to the equation

$$x + y + z = 1.$$

Thus the first three vertices are $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. We may think of obtaining these vertices via the following method: send out rays from the origin along the positive $x$, $y$, and $z$ directions, and check where they intersect the plane whose equation is $x + y + z = 1$. This is given by the following image:

Now we continue in this fashion again and again, slowly building up the graph one level at a time: Suppose that we have just obtained all the vertices of generation $n$, which lie on the plane $x + y + z = 2^n - 1$. From each such vertex extend a ray in the positive $x$, $y$, and $z$ directions, and add the intersection points of these rays with the plane whose equation is $x + y + z = 2^{n+1} - 1$. The following images demonstrate the next steps in the process:

Have we already mentioned that 3D graphs do not like being squeezed onto 2D screens?

If you follow this construction infinitely many times, you eventually (eventually) end up with an infinite tree. In order to calculate its effective resistance, it’s convenient to split apart vertices in the same generation (something we can do, since same-generation vertices are bound to have the same electric potential by the symmetry of the construction, and so we do not change the effective resistance when doing so). If you do this in the right places, you will find that the graph just described above is actually equivalent in resistance to the following nice tree fellow:

This tree, nicknamed $NT_{2.5849}$ (for logarithmic reasons: $2.5849 \approx \log_2 6$), can be constructed iteratively as follows. Start out with only the origin as both the root and the sole leaf in the tree, and at step number $n$ grow three branches of length $2^{n-1}$ out of every leaf. Repeat ad infinitum, season to taste. That’s it!

Since this is a symmetric tree, we can rather straightforwardly calculate the effective resistance from the origin to infinity, by looking at what happens in between two different branchpoints. The branch segments which start at the branchpoints of generation $n$ and end at those of generation $n+1$ have length $2^n$, and so each has effective resistance $2^n$. But since there are $3^{n+1}$ such segments in parallel, the effective resistance between level $n$ and level $n+1$ is $2^n / 3^{n+1}$. The total resistance of the entire tree is then

$$\sum_{n=0}^{\infty} \frac{2^n}{3^{n+1}} = \frac{1}{3} \sum_{n=0}^{\infty} \left(\frac{2}{3}\right)^n = \frac{1}{3} \cdot 3 = 1.$$

The effective resistance of the tree is finite! And since the tree was obtained only by removing edges from $\mathbb{Z}^3$, by the cutting law the effective resistance of $\mathbb{Z}^3$ is finite as well, which means that it is transient.
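For the skeptical, the geometric series can also be summed numerically; a quick sketch, nothing more:

```python
# Partial sums of sum_{n >= 0} 2^n / 3^(n+1), the total resistance of the tree.
total = 0.0
for n in range(60):
    total += 2**n / 3**(n + 1)
print(total)  # approaches 1
```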

**Method #2: The straightforward calculation, with a twist**

So far we’ve had a lot of luck with trying to explicitly figure out the expected number of times that the random walk hits the origin. We did this by calculating the probability that the random walk returns to the origin after an even number of steps. In the one-dimensional case, the calculation went smooth as butter, and we found that

$$p_{2n} = \binom{2n}{n} \frac{1}{2^{2n}} \approx \frac{1}{\sqrt{\pi n}}.$$

The expected number of returns is then given by

$$\sum_{n=1}^{\infty} p_{2n} \approx \sum_{n=1}^{\infty} \frac{1}{\sqrt{\pi n}} = \infty,$$

implying that the walk on $\mathbb{Z}$ is recurrent.
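Numerically, the Stirling approximation is already quite good for moderate $n$; here is a quick check using Python’s exact binomials (a sketch, with hand-picked sample values of $n$):

```python
from math import comb, pi, sqrt

# Probability that the 1D simple random walk is back at the origin after
# 2n steps, versus the Stirling approximation 1/sqrt(pi*n).
for n in (10, 100, 1000):
    exact = comb(2 * n, n) / 4**n
    approx = 1 / sqrt(pi * n)
    print(n, exact, approx)
```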

In the two-dimensional case, the corresponding term was a bit more difficult to compute directly, but luckily for our lazy-and-trick-loving selves, it turned out that the simple random walk on $\mathbb{Z}^2$ can actually be seen as the product of two simple random walks on $\mathbb{Z}$, so that the expected number of returns is

$$\sum_{n=1}^{\infty} p_{2n}^2 \approx \sum_{n=1}^{\infty} \frac{1}{\pi n} = \infty,$$

implying that the walk on $\mathbb{Z}^2$ is recurrent as well. If we could show that the same were true for three dimensions, i.e. that the walk on $\mathbb{Z}^3$ is a product of three one-dimensional walks, we’d be immediately done, for then the expected number of returns would be roughly equal to

$$\sum_{n=1}^{\infty} p_{2n}^3 \approx \sum_{n=1}^{\infty} \frac{1}{(\pi n)^{3/2}},$$

which converges, implying that the walk is transient.
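The contrast between the three dimensions is easy to feel numerically; here is a sketch comparing partial sums of $1/(\pi n)^{d/2}$ for $d = 1, 2, 3$ (the cutoffs are arbitrary choices of mine):

```python
from math import pi

# Partial sums of sum_n 1/(pi*n)^(d/2).  For d = 1 and d = 2 the sums keep
# growing without bound (recurrence); for d = 3 they level off (transience).
for d in (1, 2, 3):
    for N in (10**2, 10**4, 10**6):
        s = sum(1 / (pi * n) ** (d / 2) for n in range(1, N + 1))
        print(d, N, round(s, 3))
```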

All that we need to do, then, is show that if we flip a coin which tells us where to go in the $x$ direction, flip a coin which tells us where to go in the $y$ direction, and flip a coin which tells us where to go in the $z$ direction, we end up following the same motions as if we flipped a six-sided coin and chose to go either forwards or backwards in one of the three directions at random.

Unfortunately, such dreams are cute but false. The simple random walk on $\mathbb{Z}^3$ does not decompose into three independent random walks. For example, as a quick sanity check, the simple random walk on $\mathbb{Z}^3$ has six options to choose from at each step, while the triple-$\mathbb{Z}$ random walk has eight. In fact, one step in the product of walks looks like this:

This is the unit cell of the so-called BCC lattice.
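A quick enumeration makes the mismatch explicit (a sketch; the move lists are written out by hand):

```python
from itertools import product

# One step of three independent coin flips, one per coordinate: 8 moves,
# the corners of a cube (the BCC unit cell).
bcc_moves = sorted(product((-1, 1), repeat=3))
print(len(bcc_moves), bcc_moves)

# ...versus the 6 moves of the simple random walk on the cubic lattice.
cubic_moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(len(cubic_moves))
```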

However, not all is lost. If you solved the riddle from the previous post, your spider-sense might start tingling again and suggest that lattices which are not-quite-the-same yet not-too-different (for example, different lattices of the same dimension) might have the same recurrence/transience properties. This is basically true, as the following definition and theorem show.

**Definition**: Let $G$ be an infinite graph with bounded degree, and let $k$ be a positive integer. The $k$–*fuzz* of $G$, denoted $G_k$, is the graph obtained by adding to $G$ an edge between two vertices $u$ and $v$ if it is possible to go from $u$ to $v$ in at most $k$ steps.

For example, here is $\mathbb{Z}$ and its $2$-fuzz:
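For small finite graphs, the fuzz is easy to compute directly; here is a sketch (the function name and adjacency-dict format are my own), which finds all pairs at distance at most $k$ by breadth-first search:

```python
from collections import deque

def k_fuzz(adjacency, k):
    """Build the k-fuzz of a finite graph.

    adjacency: dict mapping each vertex to the set of its neighbours.
    Returns a new adjacency dict with an edge between every pair of
    distinct vertices at graph distance at most k."""
    fuzz = {v: set() for v in adjacency}
    for start in adjacency:
        # breadth-first search up to depth k from each vertex
        dist = {start: 0}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            if dist[v] == k:
                continue
            for w in adjacency[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        fuzz[start] = set(dist) - {start}
    return fuzz

# A path 0-1-2-3: its 2-fuzz also joins the vertices two apart.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(k_fuzz(path, 2))
```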

**Theorem**: For any $k$, the simple random walk on $G$ is transient if and only if the simple random walk on $G_k$ is transient.

In other words, there is a specific way of adding edges to a graph – “fuzzing” it, if you will – so that the recurrence/transience of the graph does not change when we do so. As an intuition as to why this is true, let’s consider the $2$-fuzz of a graph, which is the graph obtained by connecting together all pairs of vertices which were originally at distance at most $2$ from each other. A simple random walk on this graph is *almost* the same as randomly choosing between taking either *one or two steps* in the original graph. Sure, there are some mild differences in the transition probabilities, and it might be that after taking two steps we find ourselves in the same place where we started, but essentially, the two walks feel very much the same.

Although this feeling can be formalized, and the comparison between the $2$-fuzz and the two-step random walk made precise, I’d rather give an electric-network proof of the above theorem.

**Proof**: The $k$-fuzz $G_k$ is obtained from $G$ by adding edges, i.e. adding more resistors to its network. Adding a resistor is the opposite of cutting (i.e. we turn an infinite “no-edge” resistor into a finite resistor), and so lowers the effective resistance. Thus the effective resistance of $G_k$ is smaller than that of $G$, and so if $G$ has finite resistance and is therefore transient, then $G_k$ has finite resistance and is also transient.

On the other hand, let’s consider two vertices $u$ and $v$ and compare the effective resistance between them in $G$ and in $G_k$. For what follows, we can suppose without loss of generality that they are connected in $G_k$. In $G$, the effective resistance between the vertices is no more than $k$, since there is a path of length no more than $k$ from $u$ to $v$. What about $G_k$? Well, denote the maximal degree of $G_k$ by $d$. The smallest resistance possible between $u$ and $v$ is $1/d$, since there can be no more than $d$ disjoint paths from $u$ to $v$ in $G_k$. Thus the effective resistances between any pair of adjacent vertices in the two graphs differ by no more than a factor of $kd$. This may be large, but it is always finite, so the effective resistance of $G$ is always smaller than some constant times that of $G_k$. So if $G_k$ has finite resistance and is therefore transient, then $G$ has finite resistance and is also transient.

This theorem is exactly the tool we have been waiting for! Look at the BCC lattice obtained by three independent coordinate motions. With a bit of mental gymnastics, you might notice that it’s possible to embed it into the $k$-fuzz of $\mathbb{Z}^3$, for a $k$ that is actually not that large. By cutting away the extra unneeded edges, we conclude that the $k$-fuzz of $\mathbb{Z}^3$ is transient. But then, by the above theorem, so is $\mathbb{Z}^3$!
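For the record, here is the back-of-the-envelope check that $k = 3$ already works (this particular bound is my own arithmetic, not something the text insists on): every step of the product walk changes all three coordinates by $\pm 1$, so it can be traced by three lattice edges, and is therefore a single edge of the $3$-fuzz.

```python
from itertools import product

# Each BCC step changes every coordinate by +-1, so its l1-length is exactly 3:
# it is a path of three lattice steps, hence a single edge of the 3-fuzz of Z^3.
bcc_steps = list(product((-1, 1), repeat=3))
assert all(sum(map(abs, step)) == 3 for step in bcc_steps)
print("all", len(bcc_steps), "BCC steps fit inside the 3-fuzz")
```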

So ends our three-post extravaganza about simple random walks on the integer lattices. Long as they might seem, these posts barely scratch even a millimeter-deep dent in the surface of this vast subject. Even if you weren’t completely bored by my dense writing (and especially if you were), I still highly recommend giving Random walks and electric networks a chance.

Until next time, here is a quick puzzle. Suppose that we wish to break free of our discrete binding chains, and talk about random walks in a continuous space. As a concrete example, suppose we have a walker starting at the origin, who at every time step randomly chooses a coordinate, then randomly chooses a step size uniform in $[-1, 1]$ and steps the chosen amount in that direction. Is the walker guaranteed to return arbitrarily close to the origin in one, two, or three dimensions?