It is a truth universally acknowledged, that a single man in possession of a coin, who repeatedly flips it, taking one step forwards if it comes up heads, and one step backwards if it comes up tails, will eventually return to the place where he started.
Mathematicians like to call the process of using coins to decide what to do in life a “random walk”, and it is often performed on graphs. A walker (whom we’ll call Alice) sits at one of the vertices, and at each time step, she chooses one of the vertex’s neighbors with some probability and moves there. The simplest case of a random walk, the aptly named “simple random walk”, is where Alice chooses each neighbor with equal probability. In the particular case of the single man in possession of a coin described above, that graph was the integer number line $\mathbb{Z}$, also known as the one-dimensional integer lattice.
At each step, Alice takes a step to the right with probability $1/2$, and a step to the left with probability $1/2$. We could have asked a similar question about $\mathbb{Z}^2$, also known as the two-dimensional integer lattice:
Here, at each step, Alice moves up with probability $1/4$, down with probability $1/4$, left with probability $1/4$, and right with probability $1/4$. The adventurous (and jetpack wielding) among us may even consider performing a random walk on the three-dimensional lattice $\mathbb{Z}^3$, where the probability of going in each direction is $1/6$.
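Before we go on, here is a minimal sketch of a single step of the simple random walk on the $d$-dimensional lattice (the function name is mine, just for illustration): pick one of the $2d$ neighbors uniformly at random.

```python
import random

def lattice_step(position, rng):
    """One step of the simple random walk on the d-dimensional integer
    lattice: each of the 2d neighbors is chosen with probability 1/(2d)."""
    axis = rng.randrange(len(position))  # which coordinate to change
    direction = rng.choice((-1, 1))      # forwards or backwards
    new_position = list(position)
    new_position[axis] += direction
    return tuple(new_position)

# A short walk on the two-dimensional lattice, starting at the origin.
rng = random.Random(0)
position = (0, 0)
for _ in range(10):
    position = lattice_step(position, rng)
print(position)
```

Choosing an axis uniformly and then a direction uniformly gives each neighbor probability $\frac{1}{d}\cdot\frac{1}{2} = \frac{1}{2d}$, as required.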
These three lattices will be the main players in this series of posts. Specifically, we will investigate their recurrence and transience: If Alice starts at the origin, that is, the vertex labeled $0$, will she always, with 100% certainty, return there after some finite number of steps? If so, we call the graph that Alice walks on “recurrent”. Otherwise, if there is some probability (no matter how tiny) that Alice will be forever lost in wanderland, we call the graph “transient”.
If this is the first time that you encounter this notion, it is actually not at all obvious which of these graphs are transient and which recurrent. I remember the first time I ran into it, I was quite dazzled, and certainly hadn’t the faintest clue concerning the transience of even a single one of them. The only initial guideline is that it seems intuitive that graphs in higher dimension are somehow more complex than those in lower dimensions: The walk on $\mathbb{Z}$ smells much simpler than the walk on $\mathbb{Z}^2$, since on the former Alice basically walks up and down a single path, while on the latter she has many different paths she can take. Similarly, the walk on $\mathbb{Z}^3$ can be expected to be more complicated than that of $\mathbb{Z}^2$. But this really doesn’t say anything concrete about any of them.
So what’s in store for us today? As alluded to in the opening paragraph, the walk on $\mathbb{Z}$ is… recurrent! With probability $1$, if you flip a coin and go forwards or backwards based on the result, you will eventually find yourself right where you started, 100% guarantee, or your money back! (not your time, though, and it might take you quite a long while to return). This post, which is mostly based on the wonderful book Random Walks and Electric Networks by Doyle and Snell, will prove this fact. The upcoming Post #2 and Post #3 will talk about $\mathbb{Z}^2$ and $\mathbb{Z}^3$ respectively; I won’t spoil here which (if any) is recurrent.
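To get a feel for the claim before proving it, here is a quick simulation sketch (the helper name is mine) that flips fair coins and records whether the walk on the integers revisits its starting point within a fixed step budget:

```python
import random

def first_return_time(max_steps, rng):
    """Run a simple random walk on the integers starting from 0, and report
    the first time it revisits 0, or None if it hasn't within max_steps."""
    position = 0
    for t in range(1, max_steps + 1):
        position += rng.choice((-1, 1))
        if position == 0:
            return t
    return None

rng = random.Random(0)
trials = 10_000
returned = sum(first_return_time(1_000, rng) is not None for _ in range(trials))
print(returned / trials)  # the vast majority of walks return within 1000 steps
```

Recurrence says this fraction tends to $1$ as the step budget grows, even though an individual return can take a very long while.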
We will show that $\mathbb{Z}$ is recurrent in three different ways, in increasing order of elegance: straightforward counting, limits of finite boxes, and resistor networks.
Method #1: The straightforward calculation
This proof relies on the following nice observation: If the random walk returns to the origin with probability $1$, then with probability $1$ it will actually return to the origin infinitely many times. You can think of it this way: Once we hit $0$, the entire game essentially “resets”, and we find ourselves in exactly the same position as we started in. On the other hand, if there is a positive probability of escaping to infinity, then we expect to visit the origin only a finite number of times: Every time we return to the origin, we have a constant non-zero probability to never come back. If we return to $0$ too many times, then eventually we’ll run out of luck and escape.
In fact, in the latter case we can find an expression for the expected number of times we return to the origin. Let $p$ be the probability that the walker, starting at $0$, returns to $0$. The probability of escaping to infinity is then just $1-p$; in this case, throughout the entire walk, the walker was at $0$ only once, at time $0$. The probability of returning exactly once to the origin is $p(1-p)$: The first factor $p$ is to return once, and the $1-p$ is to never return after that; in this case the walker visited $0$ exactly twice. In general, the probability to return $k$ times and thus visit the origin $k+1$ times is $p^k(1-p)$. Since we never return an infinite number of times to the origin, the expected number of visits to $0$ is then just a sum over every finite number of returns:

\[ \mathbb{E}[\text{visits}] = \sum_{k=0}^{\infty} (k+1)\, p^k (1-p). \]
Since $\sum_{k=0}^{\infty} (k+1)p^k = \frac{1}{(1-p)^2}$ for $p < 1$, we can write this sum as

\[ \mathbb{E}[\text{visits}] = \frac{1-p}{(1-p)^2} = \frac{1}{1-p}. \]
Thus, if $p < 1$, that is, if we have a non-zero probability of escaping to infinity every time we hit $0$, then the expected number of times we return to the origin is finite. (Incidentally, if we treat $p$ as a variable and let $p \to 1$, we get an infinite number of returns to the origin, although of course $p$ is a constant that only depends on the lattice).
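That expectation formula is easy to check numerically; the value $p = 0.7$ below is an arbitrary stand-in for a return probability less than $1$:

```python
# Numeric sanity check: sum (k+1) * p^k * (1-p) over many k and compare
# with the closed form 1 / (1 - p). The choice p = 0.7 is arbitrary.
p = 0.7
expected_visits = sum((k + 1) * p**k * (1 - p) for k in range(1_000))
print(expected_visits, 1 / (1 - p))  # both are about 3.3333
```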
Now, here is another way to calculate the expected number of times we land at the origin, one which does not involve assuming beforehand whether or not we return an infinite number of times. Let $X_t$ be the random variable that is $1$ if the walker is at $0$ at time $t$, and $0$ otherwise. Then the total number of times we are at $0$ is

\[ V = \sum_{t=0}^{\infty} X_t. \]
The variables $X_t$ are very heavily dependent on each other: For example, if we know that we visited the origin at time $t$, that is, $X_t = 1$, it really increases the likelihood that we visited the origin at times close to $t$. But luckily, thankfully, and blissfully, the expectation of a sum is always the sum of expectations, and so

\[ \mathbb{E}[V] = \sum_{t=0}^{\infty} \mathbb{E}[X_t] = \sum_{t=0}^{\infty} \Pr[X_t = 1]. \]
By the argument we gave in the beginning, this sum will be infinite if the random walk is recurrent, and finite if the random walk is transient. So to check whether the random walk (on any graph, actually) is transient or recurrent, we only have to sum up the individual probabilities $p_t = \Pr[X_t = 1]$ of being at the origin at time $t$.
How do you calculate these probabilities? Well, first note that $p_t = 0$ for all odd $t$: Every step we take to the right has to be counterbalanced at some point by a step to the left. In fact, this is the exact condition we need: For even $t$, say $t = 2n$, we need to take exactly $n$ steps to the right and exactly $n$ steps to the left. The probability is then just the total number of paths which take $n$ steps to the left and $n$ to the right (in any order), divided by the total number of paths of length $2n$. Thus

\[ p_{2n} = \binom{2n}{n} \frac{1}{2^{2n}}. \]
For large enough $n$, the numerator can be approximated by (cf. good ol’ Stirling)

\[ \binom{2n}{n} \approx \frac{2^{2n}}{\sqrt{\pi n}}. \]
We can allow ourselves to use this approximation since eventually we only care about whether the expectation is finite or not. So even if the approximation is bad for small $n$, or even if the calculations are off by some constant factors, this will not change the convergence or divergence of the above sum. Anyway, plugging this into our expression for the expected number of visits to $0$, we get

\[ \mathbb{E}[V] = \sum_{n=0}^{\infty} p_{2n} \approx \sum_{n=1}^{\infty} \frac{1}{\sqrt{\pi n}} = \infty. \]
We expect to return infinitely many times to the origin, and thus the walk on $\mathbb{Z}$ is recurrent.
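As a sanity check on both the Stirling approximation and the divergence of the sum, here is a small computation (the variable names are my own):

```python
import math

# Compare the exact return probability p_{2n} = C(2n, n) / 4^n with the
# Stirling-based approximation 1 / sqrt(pi * n).
for n in (10, 100, 1000):
    exact = math.comb(2 * n, n) / 4**n
    approx = 1 / math.sqrt(math.pi * n)
    print(f"n = {n}: exact = {exact:.6f}, approximation = {approx:.6f}")

# The partial sums of p_{2n} keep growing (roughly like 2*sqrt(n/pi));
# this divergence is exactly what makes the walk recurrent.
p2n, partial = 1.0, 0.0
for n in range(1, 100_001):
    p2n *= (2 * n - 1) / (2 * n)  # p_{2n} = p_{2(n-1)} * (2n-1) / (2n)
    partial += p2n
print(partial)
```

The iterative update for $p_{2n}$ avoids recomputing huge binomial coefficients at every step.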
Method #2: Escaping from finite boxes
This is a very neat method, because it explicitly shows us how hard it is for the random walk to travel long distances while avoiding returning to the origin.
Instead of looking at the infinite, rather harder-to-deal-with random walk, we can ask how a random walk behaves on finite intervals. That is, our walker will now be positioned at some point inside a corridor of length $N$, and will randomly step either left or right until she exits the corridor through one of its sides:
We can think of this corridor as lying on the interval $[0, N]$, so when the walker exits from the left, she returns to the origin $0$, and when she exits from the right, she escapes to “infinity” (albeit for a very small value of infinity).
In this case we can do some exact calculations. For $0 \le k \le N$, let’s denote by $p_k$ the probability that the random walker, starting at $k$, will exit through the right (i.e. reaches “infinity”) before she exits through the left (i.e. reaches $0$). If we already happen to start at $0$, that probability is $0$, so $p_0 = 0$ always. Similarly, if we already start at $N$, that probability is $1$, so $p_N = 1$. And for intermediate points, at each step we go either left or right with equal probability, so

\[ p_k = \frac{1}{2} p_{k-1} + \frac{1}{2} p_{k+1}. \]
This is a second order linear recurrence relation, with $p_0 = 0$ and $p_N = 1$ as boundary conditions. The general way to solve these recurrences is to first try a $p_k$ that is proportional to $x^k$ for some unknown $x$. This reduces the recurrence relation to a polynomial equation:

\[ x^k = \frac{1}{2} x^{k-1} + \frac{1}{2} x^{k+1}. \]
Dividing by $x^{k-1}$, this gives

\[ x^2 - 2x + 1 = 0, \]

whose only solution is $x = 1$ (a double root).
Since there is degeneracy in the solution, our next guess is to try $p_k$ proportional to $k x^k$; but for us $x = 1$, and indeed, plugging in $p_k = k$ solves the recurrence relation exactly! Taking care of the boundary conditions, the escape probability becomes

\[ p_k = \frac{k}{N}. \]
I suppose we could have guessed that from the onset, couldn’t we? Silly us.
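The formula $p_k = k/N$ is easy to test empirically; here is a quick Monte Carlo sketch (the helper name is mine):

```python
import random

def exit_right_probability(start, length, trials, rng):
    """Estimate the probability that a walker starting at `start` inside
    the corridor [0, length] exits through the right end first."""
    wins = 0
    for _ in range(trials):
        k = start
        while 0 < k < length:
            k += rng.choice((-1, 1))
        wins += (k == length)
    return wins / trials

rng = random.Random(0)
N = 10
for k in (2, 5, 8):
    estimate = exit_right_probability(k, N, 20_000, rng)
    print(f"start = {k}: simulated {estimate:.3f}, predicted {k / N}")
```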
Anyway, let’s see how this finite case helps us prove the infinite case. Towards contradiction, suppose that the random walk on $\mathbb{Z}$ is transient, and let $p < 1$ be the probability that the walker, starting at $0$, returns to $0$ (i.e. the probability of escaping to infinity is $1 - p > 0$). In particular, for every $N$, the probability of hitting $N$ before returning to $0$ is greater than $1 - p$: At the very worst, when the walker escapes to infinity, she will reach every such $N$; but there can also be cases where she does return to the origin, but just happens to first pay a visit to $N$ along the way. But the probability of hitting $N$ before hitting $0$ is exactly the corridor escape probability we computed above, and we saw that it goes like $1/N$. This decays to $0$ as $N \to \infty$, which contradicts the assumption that it is greater than the constant $1 - p > 0$.
Method #3: Electrical networks
This method is cool, because it freakin’ uses resistors.
This one will take a little bit more explaining than the other methods, because (a) we have to build an entire framework which converts “escape probabilities in random walks on graphs” to “voltages on networks of resistors”, and (b) I’ll show a bit more than we formally require for the proof. But it will be worth it, since afterwards we’ll be able to use this framework to prove all sorts of other things. Also, after we build this framework, the proof that $\mathbb{Z}$ is recurrent will be like one line.
A “network of resistors” is basically exactly what you think it is: First, take your favorite graph and replace each edge between vertices $u$ and $v$ with a resistor of resistance $R_{uv}$. Second, attach electrodes to various vertices, so that you can control their potential. This can be used, for example, to set a fixed voltage between two given nodes (or between a node and infinity). In the simplest case, the resistance of each resistor is $1$. Here is an example of a resistor network:
That’s it! With a resistor network in your possession, you can now ask all your favorite questions from electronics 101, like “what is the voltage between node $u$ and node $v$”, “how much current flows through an edge $uv$”, and “what is the effective resistance between node $u$ and node $v$”. Of course, depending on the graph, these might not always be easy questions.
There is a neat and simple way to connect resistor networks and random walks on graphs. The “conductance” of a resistor is defined as the inverse of its resistance: $C_{uv} = 1/R_{uv}$. Given a resistor network, we can define a random walk with the following transition rule: Denote by $C_u$ the sum of conductances of all edges incident to vertex $u$,

\[ C_u = \sum_{v \sim u} C_{uv}. \]
The probability to move from one vertex to another is $0$ if they are disconnected, and

\[ P_{uv} = \frac{C_{uv}}{C_u} \]
if they are connected. In a way, this rule makes the random walker mimic electricity: the walker has a higher probability of moving along edges with higher conductances, that is, paths of low resistance, just like how there is a stronger electrical current along paths of low resistance. In the simple random walk, the probability of going along each edge is equal, so simple random walks correspond to electric networks where all the resistors have the same resistance. That’s why we like unit resistors so much!
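The transition rule is straightforward to code up; here is a sketch on a toy three-vertex network (the conductance values and vertex names are arbitrary choices of mine):

```python
# Conductances of a toy triangle network; C[u][v] is the conductance of
# the resistor between u and v (values chosen arbitrarily).
C = {
    "a": {"b": 1.0, "c": 2.0},
    "b": {"a": 1.0, "c": 1.0},
    "c": {"a": 2.0, "b": 1.0},
}

def transition_probabilities(C, u):
    """P(u -> v) = C_uv / C_u, where C_u is the total conductance at u."""
    c_u = sum(C[u].values())
    return {v: c_uv / c_u for v, c_uv in C[u].items()}

print(transition_probabilities(C, "a"))  # b with probability 1/3, c with 2/3
```

Note that from "a" the walker prefers the edge to "c", which has the higher conductance (lower resistance), just as the text describes.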
The correspondence between resistor networks and random walks might seem superficially simple, but it is actually surprisingly deep; in fact, the whole point of the theory is to prove interesting properties of random walks by proving analogous results on electric networks instead. A major player in this game is the concept of a “harmonic function”.
Definition: Let $G = (V, E)$ be a finite*, connected, positively weighted graph, i.e. every edge $uv$ has some positive number $C_{uv}$ attached to it. Let $U$ be a nonempty subset of its vertices. A function $f : V \to \mathbb{R}$ is called “harmonic on $U$” if the value that it gives to a vertex in $U$ is equal to the weighted average of the values of its neighbors. Formally, for every $u \in U$,

\[ f(u) = \frac{1}{C_u} \sum_{v \sim u} C_{uv} f(v). \]
Often $U$ is called the “interior”, and the remaining vertices $V \setminus U$ are called the “boundary”.
* Harmonic functions can just as well be defined for infinite graphs, but matters become more complicated in proving and stating some of the theorems. In any case, we will easily be able to take limits of finite graphs to show interesting results on infinite ones.
The averaging property of harmonic functions might appear docile at first, but actually it imposes some strong conditions on the function, from which many properties can be derived. But before we do that, let’s see two nice examples of harmonic functions.
Example 1: Let $G$ be a finite resistor network, and consider the random walk on $G$ defined by the network, where the probability of moving from one vertex to another along some edge is proportional to the conductance of the resistor along that edge. Let $A, B$ be two disjoint subsets of vertices. Think of $A$ as a “good” set which we want to land in during the random walk, and $B$ as a “bad” set which we want to avoid during the random walk. Now let $h$ be the following function: for a vertex $v$, $h(v)$ is the probability that if a random walker starts at $v$, she will hit the good set $A$ before she hits the bad set $B$. This is a purely probabilistic / combinatorial function, which only depends on the probabilities of moving from one vertex to the other.
Here are three of the nicer qualities of $h$:
- For every $a \in A$, we have $h(a) = 1$. This is because if we already started inside $A$, then of course the walk hits $A$ before it hits $B$; it already has!
- For every $b \in B$, we have $h(b) = 0$. This is because if we already started inside $B$, then there is no chance to hit $A$ before hitting $B$.
- $h$ is harmonic in $V \setminus (A \cup B)$. This is because if the walker is not in $A$ or $B$, then her next step will take her to one of her neighbors, and we can ask the same question “what is the probability to get to $A$ before getting to $B$” from that neighbor instead.
In this example, $A \cup B$ serves as the boundary of the harmonic function $h$.
Example 2: Let $G$ be a finite resistor network. For each edge $uv$, let $C_{uv}$ be its conductance. Let $A, B$ be two disjoint subsets of vertices. Connect all the vertices of $A$ and $B$ to a voltage supply, so that the potential of all of $A$ is $1$ volt, and the potential of all of $B$ is $0$ volts. Define a potential function $\varphi$ by setting $\varphi(v)$ to be the potential of the vertex $v$ in volts. We can think of this as a purely physical function, which only depends on the resistances of the resistors in the network. If we wanted to, we could build the network out of real-life resistors and actually measure the potential differences! Anyway, here are three of the nicer qualities of $\varphi$:
- For every $a \in A$, we have $\varphi(a) = 1$. This is by definition.
- For every $b \in B$, we have $\varphi(b) = 0$. This is also by definition.
- $\varphi$ is harmonic in $V \setminus (A \cup B)$ (where the edge weights are given by the conductances $C_{uv}$).
The harmonicity is a consequence of two well-known physical laws from electronics: Kirchhoff’s circuit law for currents, and Ohm’s law. Kirchhoff’s law states that the total amount of current entering and leaving a vertex must be equal. In other words, if $i_{uv}$ is the current flowing from vertex $u$ to vertex $v$, then

\[ \sum_{v \sim u} i_{uv} = 0. \]
Meanwhile, Ohm’s law states that the current flowing through a resistor is the ratio between its potential difference and its resistance (also known as “Uri’s law”, since it can be symbolically written as $U = RI$). Plugging this into the equation above, we get

\[ \sum_{v \sim u} \frac{\varphi(u) - \varphi(v)}{R_{uv}} = 0. \]
Extracting all instances of $\varphi(u)$ to one side and using the notation $C_{uv} = 1/R_{uv}$, we finally get

\[ \varphi(u) = \frac{1}{C_u} \sum_{v \sim u} C_{uv}\, \varphi(v). \]
The two functions we built above, $h$ and $\varphi$, are quite similar. “Too similar, in fact…”, you might be thinking. And you’d be right: this can be shown in a straightforward way if you inspect the calculations we did closely enough. But rather than do that, I’d like to show the power of the averaging property of harmonic functions instead. One of the more famous / notorious / useful properties of harmonic functions is called the maximum principle, and reads as follows:
Theorem (the maximum principle): A function harmonic on $U$ attains its maximum and minimum values on the boundary $V \setminus U$.
I always picture the proof of this as a “wave of maximality” which erupts in the interior of the graph and spreads outwards. It goes like this. Suppose that $f$ attains the maximum somewhere in the interior, on some vertex $u$. By the averaging property, $f$ must attain the maximum value on all of $u$’s neighbors as well. Now carry on with this process, showing that all the neighbors of neighbors of $u$, then the neighbors of neighbors of neighbors of $u$, and so on, all attain the maximum, until you hit a vertex in the boundary (this must happen, since the graph is connected). The maximum must then occur on the boundary!
With the maximum principle, a wonderful uniqueness theorem is almost immediate.
Theorem (the uniqueness theorem): Let $f$ and $g$ be harmonic on $U$. If they are equal to each other on the (nonempty) boundary $V \setminus U$, then they are equal everywhere.
To see this, observe that the function $f - g$ is also harmonic on $U$. Since $f$ and $g$ agree with each other on $V \setminus U$, the function $f - g$ takes only the value $0$ on the boundary. By the maximum principle, the value $0$ is then both the maximum and the minimum of $f - g$, so it must be $0$ everywhere, and hence $f = g$.
This is a fantastic theorem. It allows us to show that two seemingly different functions are actually the same, provided we can show that they are both harmonic and agree on a possibly-very-small boundary set.
Hey, now wait a minute! This is exactly what we did in the two examples of $h$ and $\varphi$. Both functions are harmonic in $V \setminus (A \cup B)$, and both agree completely on the sets $A$ and $B$. Thus the two are exactly equal! For random walks on finite graphs, the probability to reach a set $A$ before reaching a set $B$ when starting the walk from a given vertex $v$ is exactly equal to the electric potential of that vertex in a network of resistors, when $A$ is held at constant potential $1$ and $B$ is held at constant potential $0$.
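We can watch this equivalence happen numerically on a small example. The sketch below (the graph and names are my own choices) computes the potential on a unit-resistor graph by relaxation — repeatedly replacing each interior value by the average of its neighbors — and compares it with a Monte Carlo estimate of the hit-$A$-before-$B$ probability:

```python
import random

# A small unit-resistor graph given by adjacency lists.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
A, B = {3}, {0}  # hold A at 1 volt and B at 0 volts

# Relaxation: sweep over interior vertices, averaging neighbor values.
phi = {v: (1.0 if v in A else 0.0) for v in graph}
for _ in range(1_000):
    for v in graph:
        if v not in A and v not in B:
            phi[v] = sum(phi[w] for w in graph[v]) / len(graph[v])

# Monte Carlo: probability of hitting A before B, starting from vertex 1.
rng = random.Random(0)
trials = 20_000
hits = 0
for _ in range(trials):
    v = 1
    while v not in A and v not in B:
        v = rng.choice(graph[v])
    hits += (v in A)
print(phi[1], hits / trials)  # the two estimates nearly agree
```

Here relaxation converges because averaging is a contraction on the interior values; solving the linear system directly would work just as well.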
As cool as this result is on its own, it also gives us a way to check whether a graph is recurrent or not. The basic technique is actually the same as the previous method, that of escaping from finite boxes, but the language of resistors grants us easier calculation and easier modification, as we will see in the next post in the series.
But for now we’ll stick to basics. Let $G$ be a finite graph, and let $a$ be one of its vertices. Let $B$ be a set of vertices that does not contain $a$. We consider $a$ to be the “origin” of the graph, and $B$ to be a set of bad vertices which we want to avoid (our “finite version” of infinity, as before). Given this, we want to find the following: What is the probability that a simple random walk which starts at $a$ returns back to $a$ before reaching $B$? To calculate this, turn $G$ into a resistor network by replacing all its edges with unit resistors. As noted above, this choice of resistor values corresponds to the simple random walk on $G$. Now fix the vertex $a$ to have a potential of $1$ volt, and all of $B$ to have a potential of $0$ volts. What is the current flowing between $a$ and $B$? Well, by Ohm’s law,

\[ i = \sum_{v \sim a} \frac{\varphi(a) - \varphi(v)}{R_{av}} = \sum_{v \sim a} C_{av} \left(1 - \varphi(v)\right). \]
Now, recall that the walk defined by a resistor network has the transition probabilities $P_{av} = C_{av}/C_a$. Thus that last equation can be written as

\[ i = C_a \sum_{v \sim a} P_{av} \left(1 - \varphi(v)\right). \]
But look at the summands in this sum: By the equivalence between voltages and return probabilities that we just established, $\varphi(v)$ is equal to the probability that the walk, starting at $v$, will return to $a$ before hitting $B$. Since we sum over all neighbors $v$ of $a$ and multiply by the probability of transitioning from $a$ to $v$, the entire term

\[ \sum_{v \sim a} P_{av} \left(1 - \varphi(v)\right) \]
is just equal to the escape probability, that is, the probability that if we start at $a$, the random walk will hit $B$ before it gets back to $a$! We may then write this as

\[ i = C_a \cdot p_{\mathrm{escape}}. \]
All that is left is to take care of the current $i$. What is it? Well, by definition, it is the current flowing from $a$ to $B$, and since there is a unit voltage difference between them, by definition / Ohm’s law, it is just equal to the inverse of the effective resistance between them! We get that for finite graphs

\[ p_{\mathrm{escape}} = \frac{1}{C_a R_{\mathrm{eff}}}. \]
We can now finally (and easily) prove that $\mathbb{Z}$ is recurrent. Let’s look at the boxes $[-N, N]$. Let $a = 0$ and $B = \{-N, N\}$; a random walker has to pass through $B$ at some point if she wishes to run off to infinity. The probability that a simple random walk which starts at $0$ escapes to the boundary of the box before returning to $0$ is then just inversely proportional to the effective resistance between $0$ and the boundary. The electric circuit looks like this (after “folding” the circuit in two):
The circuit consists of two parallel chains of $N$ unit resistors in series. By simple resistance laws, we can replace each chain with one large resistor of resistance $N$, and then replace the two resultant parallel resistors by a single resistor of resistance $N/2$. Thus, as $N \to \infty$, the effective resistance goes to infinity, and the escape probability $p_{\mathrm{escape}} = 1/(C_0 R_{\mathrm{eff}}) = 1/N$ goes to $0$. $\mathbb{Z}$ is recurrent!
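As a closing check, here is a sketch (names of my own) comparing the escape-probability prediction $1/(C_0 R_{\mathrm{eff}}) = 1/N$ with a direct simulation on the box:

```python
import random

# On the box [-N, N]: two chains of N unit resistors in parallel give
# R_eff = N/2, and the origin has total conductance C_0 = 2, so the
# predicted escape probability is 1 / (C_0 * R_eff) = 1/N.
N = 20
r_eff = N / 2
predicted = 1 / (2 * r_eff)

# Monte Carlo: fraction of walks from 0 that reach -N or N before
# revisiting 0.
rng = random.Random(0)
trials = 40_000
escapes = 0
for _ in range(trials):
    pos = rng.choice((-1, 1))  # the first step away from 0
    while pos != 0 and abs(pos) != N:
        pos += rng.choice((-1, 1))
    escapes += (abs(pos) == N)
print(predicted, escapes / trials)
```

This also agrees with Method #2: stepping to $\pm 1$ and then applying the corridor formula gives the same $1/N$.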
We have seen three methods which show that the simple random walk on $\mathbb{Z}$ is recurrent. The first was a direct counting argument, the second considered the escape probabilities from finite boxes, and the third generalized the second, showing how the escape probabilities relate to the effective resistance of electric networks. True, given that the second method already gave us the desired result, the electric network method is a bit of an overkill in this case. But in the upcoming posts, we will see how it can be used in $\mathbb{Z}^2$ and $\mathbb{Z}^3$, where the second method is a lot harder to apply directly.
Until next time, here is a quick puzzle. Below are two different infinite trees. Can you tell whether they are recurrent or transient?