Here is a silly little question that my friend Ori and I ran into.

Let $(a_n)_{n=1}^\infty$ be a decreasing sequence of positive real numbers which tends to 0. Perhaps in a more decent universe, the sum $\sum_{n=1}^\infty a_n$ would always equal some finite number, but already in kindergarten we learn that that’s not true: For example, the partial sums $\sum_{n=1}^N \frac{1}{n}$ diverge to infinity roughly like $\log N$. Luckily, there is still some justice in the world, as the next theorem attests:

**Theorem (Leibniz test)**: Let $(a_n)_{n=1}^\infty$ be as above. Then the series $\sum_{n=1}^\infty (-1)^n a_n$ converges.

In words, if we alternate the signs of the elements of a decreasing sequence, we are guaranteed that the series converges, no matter how slowly $a_n$ decays to zero.

Our question is this: What happens if we don’t force the series to strictly alternate signs, but rather only demand that on average half the elements are positive and half are negative? To make things fair and precise, we formulate this as a probabilistic statement:

**Question**: Let $(\epsilon_n)_{n=1}^\infty$ be a sequence of independent symmetric Bernoulli random variables, i.e. $\epsilon_n = 1$ with probability $\frac{1}{2}$ and $\epsilon_n = -1$ with probability $\frac{1}{2}$. Then in the sum $\sum_{n=1}^\infty \epsilon_n a_n$, most of the time indeed “about half” the elements are positive, and “about half” are negative. Under what conditions can we expect the expectation of the absolute value to be finite? That is, when do we have

$$\mathbb{E}\left|\sum_{n=1}^\infty \epsilon_n a_n\right| < \infty\,?$$

Certainly, multiplying by the sign-changers $\epsilon_n$ gives us some more legroom, since naturally many elements in the sum will have opposite signs and we’ll get some cancellations. On the other hand, there is always some probability that the results will be skewed towards one particular sign, and that there will not be enough cancellations to really change the outcome in case the original series was divergent. So we cannot really hope for something as powerful as the Leibniz test. But can we make $\sum \epsilon_n \frac{1}{n}$ converge? Can we make $\sum \epsilon_n \frac{1}{\sqrt{n}}$ converge?

**Answer**: As it turns out, the Bernoulli random variables do indeed make $\sum \epsilon_n a_n$ converge better than $\sum a_n$, but only squarely so.

**Proposition**: The expectation $\mathbb{E}\left|\sum_{n=1}^\infty \epsilon_n a_n\right|$ is finite if and only if $\sum_{n=1}^\infty a_n^2$ is finite.
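Before the proof, here is a quick Monte Carlo sanity check of the proposition (a sketch of my own, not part of the argument; the cutoff $N = 10{,}000$ and the number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_sum(a, trials=1000):
    """Monte Carlo estimate of E|sum_n eps_n * a_n| for random signs eps_n = ±1."""
    total = 0.0
    for _ in range(trials):
        signs = rng.choice([-1.0, 1.0], size=len(a))
        total += abs(np.dot(signs, a))
    return total / trials

n = np.arange(1, 10_001)
# a_n = 1/n: sum a_n^2 converges, so the estimate stays bounded (around 1)
print(mean_abs_sum(1.0 / n))
# a_n = 1/sqrt(n): sum a_n^2 is the harmonic sum, which diverges,
# so the estimate keeps growing as N grows
print(mean_abs_sum(1.0 / np.sqrt(n)))
```

Increasing $N$ leaves the first estimate essentially unchanged, while the second one keeps creeping upward, in line with the proposition.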

**Proof**: Denote $X_n = \epsilon_n a_n$. Then the $X_n$ have zero expectation and their variance is equal to $a_n^2$. Thus, if we denote $S_N = \sum_{n=1}^N X_n$, since the $X_n$s are independent we get that

$$\mathrm{Var}(S_N) = \sum_{n=1}^N \mathrm{Var}(X_n) = \sum_{n=1}^N a_n^2.$$

So suppose first that $\sum_{n=1}^\infty a_n^2$ is finite, and is bounded by, say, some number $M$. Then $\mathbb{E}\left[S_N^2\right] = \mathrm{Var}(S_N) \leq M$ for every $N$. But by Jensen’s inequality, since the square-root function is concave,

$$\mathbb{E}|S_N| = \mathbb{E}\sqrt{S_N^2} \leq \sqrt{\mathbb{E}\left[S_N^2\right]} \leq \sqrt{M}.$$
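The Jensen step can be seen numerically as well: for any sample, the empirical mean of $|X|$ never exceeds the empirical root mean square (this in fact holds sample-by-sample, by Cauchy–Schwarz). A small illustration, with a standard Gaussian sample picked arbitrarily by me:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)   # any random sample would do here

mean_abs = np.mean(np.abs(x))      # estimates E|X|  (about sqrt(2/pi) ≈ 0.798)
rms = np.sqrt(np.mean(x ** 2))     # estimates sqrt(E[X^2])  (about 1)
print(mean_abs, rms)               # mean_abs <= rms, as Jensen promises
```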

Let’s now suppose that $\sum_{n=1}^\infty a_n^2$ is infinite. Ordinarily, the fact that the variance of a random variable is infinite does not imply that the expectation of its absolute value is infinite. But luckily, we can call the following central limit theorem to our aid, which I found at this mathoverflow post:

**Theorem**: Let $(X_n)_{n=1}^\infty$ be a sequence of independent, uniformly bounded random variables with zero mean, such that $\sigma_N^2 := \mathrm{Var}\left(\sum_{n=1}^N X_n\right)$ goes to infinity with $N$. Then $\frac{1}{\sigma_N}\sum_{n=1}^N X_n$ converges in law to a standard Gaussian distribution.

This is different from the standard, run-of-the-mill central limit theorem, since the variables don’t have to be identically distributed. Note that the fact that $\sigma_N^2$ goes to infinity is important here; for example, if all but one of the variables were degenerate and exactly equal to 0, the result certainly wouldn’t be a Gaussian.

I won’t prove this theorem here, because it is a central limit theorem, and the proofs of these tend to be messy. And also because it’s late, and way past my bedtime. The gist of this theorem is that for large enough $N$, the sum $S_N = \sum_{n=1}^N \epsilon_n a_n$ behaves roughly like a Gaussian with variance $\sigma_N^2 = \sum_{n=1}^N a_n^2$. In particular, the expected value $\mathbb{E}|S_N|$ grows roughly like $\sigma_N$, which, by assumption, tends to infinity.
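This Gaussian heuristic can be checked numerically: for a centered Gaussian with variance $\sigma^2$ the mean absolute value is $\sqrt{2/\pi}\,\sigma$, and for $a_n = 1/\sqrt{n}$ the empirical value of $\mathbb{E}|S_N|$ sits close to that prediction (a rough Monte Carlo sketch, with $N$ and the trial count picked arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 10_000
a = 1.0 / np.sqrt(np.arange(1, N + 1))
sigma = np.sqrt(np.sum(a ** 2))   # sigma_N^2 = sum_{n<=N} 1/n, roughly log N

trials = 1000
total = 0.0
for _ in range(trials):
    signs = rng.choice([-1.0, 1.0], size=N)
    total += abs(np.dot(signs, a))
empirical = total / trials

gaussian_prediction = np.sqrt(2 / np.pi) * sigma  # E|G| for G ~ N(0, sigma^2)
print(empirical, gaussian_prediction)             # the two should be close
```

Since $\sigma_N^2$ here is the harmonic sum, it tends to infinity, and so does the Gaussian prediction — which is exactly how the divergent half of the proposition plays out.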

There you have it! Using symmetric Bernoulli random variables, we can force the series $\sum \epsilon_n \frac{1}{n}$ to converge (in expectation), but the series $\sum \epsilon_n \frac{1}{\sqrt{n}}$ is still too mighty-slow for us to conquer.
