Converging Bernoulli Series

Here is a silly little question that my friend Ori and I ran into.

Let \{a_n\}_{n=1}^{\infty} be a decreasing sequence of positive real numbers which tends to 0. Perhaps in a more decent universe, the sum \sum_{n=1}^{\infty}a_n would always equal some finite number, but already in kindergarten we learn that that’s not true: For example, the partial sums \sum_{n=1}^{N}\frac{1}{n} diverge to infinity roughly like \log N . Luckily, there is still some justice in the world, as the next theorem attests:

Theorem (Leibniz test): Let \{a_n\} be as above. Then

\sum_{n=1}^{\infty}(-1)^{n+1}a_n < \infty.

In words, if we alternate the signs of the elements of a decreasing sequence, we are guaranteed that the series converges, no matter how slowly a_n decays to zero.
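To get a feel for this, here is a little numerical sketch of mine (not part of the original argument): it compares the harmonic series, which drifts off like \log N , the alternating harmonic series, which settles down to \log 2 , and an alternating series with the much slower terms a_n = 1/\log(n+1) , which the Leibniz test still tames.

```python
import math

# Partial sums of three series built from decreasing sequences:
#  - the harmonic series, which diverges roughly like log N,
#  - the alternating harmonic series, which converges to log 2 by the Leibniz test,
#  - an alternating series with the very slowly decaying terms a_n = 1/log(n+1).
def partial_sums(N):
    harmonic = sum(1.0 / n for n in range(1, N + 1))
    alt_harmonic = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
    alt_slow = sum((-1) ** (n + 1) / math.log(n + 1) for n in range(1, N + 1))
    return harmonic, alt_harmonic, alt_slow

for N in (10**2, 10**4, 10**6):
    h, ah, asl = partial_sums(N)
    print(f"N={N:>8}  harmonic={h:8.3f}  alternating={ah:.6f}  alternating 1/log={asl:.4f}")
```

The alternating 1/\log column converges painfully slowly (the error is of order 1/\log N ), but converge it does.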

Our question is this: What happens if we don’t force the series to strictly alternate signs, but rather only demand that on average half the elements are positive and half are negative? To make things fair and precise, we formulate this as a probabilistic statement:

Question: Let \{\varepsilon_n\}_{n=1}^{\infty} be a sequence of independent symmetric Bernoulli random variables, i.e. \varepsilon_n = 1 with probability 1/2 and \varepsilon_n = -1 with probability 1/2 . In the sum \sum_{n=1}^{\infty} \varepsilon_n a_n , about half the elements are then typically positive and about half are negative. Under what conditions is the expected absolute value of the sum finite? That is, when do we have

\lim_{N \to \infty} \mathbb{E} \left| \sum_{n=1}^{N} \varepsilon_n a_n \right| < \infty ?

Certainly, multiplying by the sign-changers \varepsilon_n gives us some more leeway, since many elements in the sum will naturally have opposite signs and we’ll get some cancellations. On the other hand, there is always some probability that the signs will be skewed towards one particular side, and that there will not be enough cancellations to really change the outcome if the original series \sum a_n was divergent. So we cannot really hope for something as powerful as the Leibniz test. But can we make a_n = 1/n converge? Can we make a_n = 1/\log n converge?

[Figure: a random instance of the partial sums \sum_{n \le N} \varepsilon_n / \log n ]
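In case you’d like to draw such a picture yourself, here is a minimal sketch (mine, with an arbitrary seed, and with the index shifted to 1/\log(n+1) to avoid dividing by \log 1 = 0 ) that produces one random instance of the partial sums:

```python
import math
import random

random.seed(0)  # arbitrary seed, just so this "random instance" is reproducible

# One random instance of the partial sums S_N = sum_{n<=N} eps_n * a_n
# with a_n = 1/log(n+1).
N = 10**5
S, trajectory = 0.0, []
for n in range(1, N + 1):
    eps = random.choice((-1, 1))      # symmetric Bernoulli sign
    S += eps / math.log(n + 1)
    trajectory.append(S)

for k in (10, 100, 1000, 10**4, 10**5):
    print(f"N={k:>7}  S_N = {trajectory[k - 1]:+.3f}")
```

Plotting `trajectory` against N gives a picture like the one above.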

Answer: As it turns out, the Bernoulli random variables \varepsilon_n do indeed make \sum \varepsilon_n a_n converge better than \sum a_n , but only squarely so.

Proposition: The expectation \mathbb{E} \left| \sum_{n=1}^{\infty} \varepsilon_n a_n \right| is finite if and only if \sum_{n=1}^{\infty} a_n^2 is finite.

Proof: Denote X_n = \varepsilon_n a_n . Then the X_n have zero expectation, and their variance is \mathrm{Var}X_n = \mathbb{E}X_n^2 = a_n^2 . Thus, if we denote S_N = \sum_{n=1}^{N} X_n , then since the X_n ’s are independent we get that

\mathbb{E}S_N^2 =  \mathrm{Var}S_N =\sum_{n=1}^{N} a_n^2.
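As a quick sanity check of this identity, here is a sketch I’m adding, with a_n = 1/n and a crude Monte Carlo estimate of \mathbb{E}S_N^2 (the number of trials is arbitrary):

```python
import random

# Monte Carlo check that E[S_N^2] = Var(S_N) = sum a_n^2 for a_n = 1/n.
N = 500
a = [1.0 / n for n in range(1, N + 1)]
exact = sum(x * x for x in a)                  # sum of a_n^2, here close to pi^2/6

trials = 5000                                  # arbitrary; more trials = less noise
acc = 0.0
for _ in range(trials):
    S = sum(random.choice((-1, 1)) * an for an in a)
    acc += S * S

print(f"Monte Carlo E[S_N^2] ~ {acc / trials:.3f}   exact sum a_n^2 = {exact:.3f}")
```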

So suppose first that \sum_{n=1}^{\infty} a_n^2 is finite, and is bounded by, say, some number M . Then \mathbb{E}S_N^2 < M for every N . But by Jensen’s inequality, since the square-root function is concave,

\mathbb{E}\left|S_N\right| = \mathbb{E}\sqrt{S_N^2} \leq \sqrt{\mathbb{E} S_N^2} \leq \sqrt{M}.
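For a_n = 1/n , for example, this gives the uniform bound \mathbb{E}\left|S_N\right| \leq \sqrt{\pi^2/6} \approx 1.28 ; here is a small Monte Carlo sketch of mine that checks it:

```python
import math
import random

def mc_abs_mean(N, trials=5000):
    """Crude Monte Carlo estimate of E|S_N| for a_n = 1/n."""
    a = [1.0 / n for n in range(1, N + 1)]
    return sum(abs(sum(random.choice((-1, 1)) * an for an in a))
               for _ in range(trials)) / trials

bound = math.sqrt(math.pi ** 2 / 6)   # sqrt(sum_{n>=1} 1/n^2): the Jensen bound sqrt(M)
for N in (10, 100, 1000):
    print(f"N={N:>5}  E|S_N| ~ {mc_abs_mean(N):.3f}   (bound {bound:.3f})")
```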

Let’s now suppose that \sum_{n=1}^{\infty} a_n^2 is infinite, so that \mathrm{Var}S_N tends to infinity with N . Ordinarily, the fact that the variance of a random variable is large does not imply that the expectation of its absolute value is large. But luckily, we can call the following central limit theorem to our aid, which I found in this MathOverflow post:

Theorem: Let X_n be a sequence of independent, uniformly bounded random variables with zero mean, such that \sigma(S_N) = \sqrt{\mathrm{Var}S_N} goes to infinity with N . Then S_N/\sigma(S_N) converges in law to a standard Gaussian distribution.

This is different from the standard, run-of-the-mill central limit theorem, since the variables X_n don’t have to be identically distributed. Note that the fact that \sigma(S_N) goes to infinity is important here; for example, if all but one of the variables X_n were degenerate and exactly equal to 0, the result certainly wouldn’t be a Gaussian.
I won’t prove this theorem here, because it is a central limit theorem, and the proofs of these tend to be messy. And also because it’s late, and way past my bedtime. The gist of this theorem is that for large enough N , the sum S_N behaves roughly like a Gaussian with variance \mathrm{Var}S_N . A bit more precisely, since \mathbb{E}(S_N/\sigma(S_N))^2 = 1 for every N , the normalized sums are uniformly integrable, so convergence in law also gives \mathbb{E}\left|S_N\right|/\sigma(S_N) \to \mathbb{E}|Z| = \sqrt{2/\pi} for a standard Gaussian Z . In particular, the expected value \mathbb{E} \left|S_N\right| grows roughly like \sigma(S_N) , which, by assumption, tends to infinity.
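To watch this growth numerically, here is one more sketch of mine, with a_n = 1/\log(n+1) standing in for 1/\log n (again shifting the index to dodge \log 1 = 0 ), and with \sqrt{2/\pi} being the mean absolute value of a standard Gaussian:

```python
import math
import random

def mc_abs_mean(a, trials=1000):
    """Crude Monte Carlo estimate of E|S_N| for given coefficients a_1, ..., a_N."""
    return sum(abs(sum(random.choice((-1, 1)) * an for an in a))
               for _ in range(trials)) / trials

for N in (100, 1000, 10000):
    a = [1.0 / math.log(n + 1) for n in range(1, N + 1)]
    sigma = math.sqrt(sum(an * an for an in a))    # sigma(S_N) = sqrt(sum a_n^2)
    gaussian_prediction = math.sqrt(2 / math.pi) * sigma
    print(f"N={N:>6}  E|S_N| ~ {mc_abs_mean(a):6.2f}   "
          f"sqrt(2/pi)*sigma(S_N) = {gaussian_prediction:6.2f}")
```

Both columns keep growing together, which is exactly the point: no finite expectation in sight.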


There you have it! Using symmetric Bernoulli random variables, we can force the series \sum 1/n  to converge (in expectation), but the series \sum 1/ \log n is still too mighty-slow for us to conquer.


3 thoughts on “Converging Bernoulli Series”

1. I have used that as an example in my Probability class. BTW, there’s an easier proof of \sum_n a_n^2=+\infty => Pr(convergence)=0. What you need is to note that Kolmogorov’s One Series Theorem is an “if and only if” fact when the random variables are bounded. To prove the converse part of that Thm, you split the random series into infinitely many parts U_k with variance \geq 1, observe that Ex(U_k^4) is not too far from Ex(U_k^2)^2, and then use Paley-Zygmund and independence to deduce that infinitely many U_k’s will be far from 0 with probability 1.
