The Johnson-Lindenstrauss Lemma

May 18, 2017

Today is a good day I think to talk about the Johnson-Lindenstrauss dimension reduction lemma!

Johnson and Lindenstrauss. It cannot be coincidental that they are both wearing the same type of glasses.
Left image from Johnson’s website, right image from Gil Kalai’s blogpost in memory of Lindenstrauss.

The setting is as follows. Suppose you have a large number n of points, living in d-dimensional Euclidean space \mathbb{R}^{d} . We’ll call these points x_1,x_2,\ldots,x_n , and think of both the number of points n and the number of dimensions d as being very large, much larger than the measly “3” of our easy-to-grasp physical world.

Now, you may wonder, “But we do live in a three dimensional world thank you very much, so where would I obtain such high dimensional data?” Well, these situations arise increasingly often in computer science applications, where a point in d -dimensional space may represent an abstraction of some actual real-world thing.

For example, imagine this not-too-farfetched biology experiment. A biologist wants to understand how a particular gene works in a bacterium, and she does this using the following procedure: first, she makes small changes in the DNA sections that code up the protein. Then she replaces the original gene in the bacterium by the new modified one. Finally, she measures the concentration of about 1000 different proteins in the bacterium. If the original gene was important, or if it was a key ingredient in a large protein pathway, then many of these concentrations will be different under the modified gene when compared to the unchanged gene.

Our biologist repeats this experiment many many times, each time changing a different part of the gene in a different way. Ultimately, she has performed say about 500 experiments. Each experiment results in a vector of 1000 numbers, one for each protein. She now has 500 points living in a 1000-dimensional space*. Our poor biologist must now analyze these points, and from them understand the way in which the original gene affects the proteins and operates within the cell. She will run many statistical and geometrical algorithms on these data, and these algorithms will usually have the downside that they really, really slow down as a function of dimension. What can she do?

* Actually, since a set of n points always spans a space of dimension no larger than n , these points only live in a 500-dimensional space. But that’s beside the point here, as we will see next.

Ok, back to mathematics. The Johnson-Lindenstrauss lemma roughly says the following. Suppose you have these n points living in \mathbb{R}^{d} . You can actually put them into a very small space – logarithmic in n – in such a way that the distances between the points are almost preserved: If two points were originally far away from each other, they’ll remain far away from each other, and if two points were close, they’ll remain close. More formally,

Lemma: Let 0 < \varepsilon <1 and let x_1,x_2,\ldots,x_n be n points in \mathbb{R}^{d} . Then there is a function f: \mathbb{R}^{d} \rightarrow \mathbb{R}^{k} such that for every two points x_i and x_j ,

(1-\varepsilon)||x_i - x_j||^2 \leq ||f(x_i)-f(x_j)||^2 \leq (1+\varepsilon)||x_i - x_j||^2

with k = 4 \log n / (\varepsilon^2/2 - \varepsilon^3/3).
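To get a feel for the numbers, here is a quick back-of-the-envelope computation (a small illustration of my own; the function name is not standard):

```python
import math

def jl_dimension(n, eps):
    """Target dimension from the lemma: k = 4 log n / (eps^2/2 - eps^3/3)."""
    return math.ceil(4 * math.log(n) / (eps**2 / 2 - eps**3 / 3))

# A million points with a generous 50% distortion tolerance:
k_large = jl_dimension(10**6, 0.5)   # a few hundred dimensions suffice

# The biologist's 500 points with a tight 10% tolerance:
k_bio = jl_dimension(500, 0.1)       # larger than her original 1000 dimensions!
```

Note the second case: for small n and small \varepsilon, the formula can give a k larger than the original d, so the reduction only pays off when n is large or the tolerance is generous, as the next paragraphs discuss.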

This lemma is good if you want to run geometric algorithms on your data. For example, in our biologist’s case, she might decide that if two modifications to the gene resulted in protein densities that are more or less the same (that is, two points in \mathbb{R}^d that are close to each other), then the modifications changed the gene in functionally the same manner. She might then decide to look for clusters of points which are close to each other. This is usually easier to do in smaller dimensional spaces. Applying a Johnson-Lindenstrauss transformation allows our biologist to run the algorithms faster and on more data, while (hopefully) not losing much accuracy.

Of course, this is useful only if the new dimension k = 4 \log n / (\varepsilon^2/2 - \varepsilon^3/3) is much smaller than the original dimension d. This often happens in applications; for example, in a future post, we will work with n points in n dimensions, and the transition from a space of dimension n to a space of dimension \log n will be quite substantial.

The proof we will present here follows a paper by Dasgupta and Gupta, which uses random projections. We’ll first illustrate this concept in two dimensions.

Suppose you have a set of points in the plane:

A one-dimensional subspace of the plane is just a line that goes through the origin. Once we have such a line, we can perform an orthogonal projection of the original points onto that line, giving us a new set of points, all on the line:

If we choose a different line, we get a different projection:

For some lines, the projection will totally wreck the distances between the original points; for example, in this image, we sent two different original points to the same point:

However, for other lines, it may be that the projection is able to preserve the ratio of distances fairly accurately.

The proof of the Johnson-Lindenstrauss lemma uses the same idea, only in higher dimensions. Instead of points in the plane, we have our original d -dimensional space; and instead of projecting to a line, we try to find a subspace of dimension k so that projecting on to it will almost preserve distances. [Actually, we project the points and then blow them up by a constant factor, since orthogonal projections can only decrease the length of vectors; but this doesn’t change the gist of the argument. From now on when we say “distance preserving”, we’ll actually mean “up to a constant factor”. See the actual calculations for exact treatment].

A priori, there are no grounds to think that such a subspace exists. However, it turns out that it indeed does, for a reason called “the concentration of measure in high dimensions”. Roughly, it says that in high dimensions, some random processes – such as the orthogonal projection of a point onto a random subspace, or the scalar product of a unit vector with a Gaussian vector – are heavily concentrated around their expected value. The probability for such processes to be even a tiny bit away from their mean is exponentially small in the dimension. We usually don’t see these phenomena in 3d, because the lowly and pathetic exponent of 3 isn’t enough to give a profound effect, but in higher dimensions they flourish.

After all of this verbosity, here is the proof outline: for any k , we can choose a random k dimensional subspace of \mathbb{R}^d and project our points on to it. Choose two particular points x_i and x_j from our original cast in \mathbb{R}^d . Either the distance between them is preserved up to a factor of 1 \pm \varepsilon , or it is not; denote the probability that it is not preserved by some number p . We can consider this “distance distortion” event for each of the {n \choose 2} = (n^2 - n)/2 pairs of points, and by the union bound, the probability for at least one pair of points to have their distance distorted is less than p \cdot (n^2 - n)/2 . If we can show that the probability p is smaller than 2 / (n^2 -n) , then the probability that at least one distance is distorted is smaller than 1. This means there is a non-zero probability that all the distances are preserved! But then there has to exist a distance-preserving projection; otherwise this probability would be 0. And as it turns out, we can indeed make the probability that a single distance is distorted be this small, by choosing k = 4 \log n / (\varepsilon^2/2 - \varepsilon^3/3) .

Actually, this proof not only shows that there exists such a transformation, it even tells you how to efficiently get one: the probability that a random projection gives a distance-preserving dimension reduction is actually quite high (and can be made very close to one if we just increase the number of dimensions by a bit). So you just need to take your points, project them randomly, and check that the distances are preserved; if not, repeat, and sooner rather than later you are bound to get something that works.
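This project-and-check loop is easy to sketch in code. The snippet below (my own illustration; it uses a matrix of iid Gaussians scaled by 1/\sqrt{k}, a common practical variant rather than a literal projection onto a random subspace) maps points from 1000 dimensions down to 300 and retries until all pairwise distances are preserved up to the tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_jl_map(d, k, rng):
    # k x d matrix of iid N(0,1) entries, scaled so that squared lengths are
    # preserved in expectation. (A Gaussian variant of the random projection
    # used in the proof.)
    return rng.normal(size=(k, d)) / np.sqrt(k)

def max_distortion(X, Y):
    # Largest relative error among all squared pairwise distances.
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            orig = np.sum((X[i] - X[j]) ** 2)
            new = np.sum((Y[i] - Y[j]) ** 2)
            worst = max(worst, abs(new / orig - 1))
    return worst

n, d, k, eps = 50, 1000, 300, 0.4
X = rng.normal(size=(n, d))          # some arbitrary data points
while True:                          # retry until all distances are preserved
    A = random_jl_map(d, k, rng)
    Y = X @ A.T
    if max_distortion(X, Y) < eps:
        break
```

In practice the loop almost always exits on the first try, which is exactly the "quite high probability" mentioned above.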

Ok, let’s take a look at the technical details. All we have to show is that with high probability, the length of a vector is nearly preserved when we project it to a random k dimensional subspace. Now, when performing the calculations, instead of taking a fixed vector and a random subspace, we can take a fixed subspace and a random vector; all the results will be the same, since all we care about when projecting is the relative orientation between the vector and the subspace.

So let’s take a random unit vector, and see what happens to its length. A uniformly random unit vector can be obtained by taking a random distribution which is spherically symmetric and normalizing it. Specifically, let X_1,\ldots,X_d be d independent Gaussian random variables with mean 0 and variance 1; in other words, the random vector X = (X_1,\ldots, X_d) distributes as a d dimensional Gaussian vector \mathcal{N}(0,\text{Id}_{d}) . Then the vector

Y = \frac{X}{||X||}

is a uniformly random unit vector. As for our fixed subspace, we’ll just take the space spanned by the first k coordinates, and denote the projection by Z = \frac{(X_1,\ldots ,X_k)}{||X||} . What is the expected value of the square of the length of Z ? Well, we know that the squared length of Y is 1; this is by design: we normalized it. We thus have:

||Y||^2 = 1 = \sum_{i = 1}^{d}\frac{X_i^2}{||X||^2}.

If this is true for ||Y||^2 , then it is also true for its expectation:

\mathbb{E}||Y||^2 = 1 = \sum_{i = 1}^{d}\mathbb{E}\frac{X_i^2}{||X||^2}.

Notice that the sum is symmetric in the X_i ‘s: in expectation, there is no reason for one X_i to behave differently from the other. Thus each term should contribute equally to the sum, and so for every i ,

\mathbb{E}\frac{X_i^2}{||X||^2} = 1/d.

Since Z is just the first k terms of Y , we get

\mathbb{E}||Z||^2 = \sum_{i=1}^{k}\mathbb{E}\frac{{X_i}^2}{||X||^2} = k/d.

Oops, the expected value is k/d , which is smaller than 1, the original norm of the vector! But not to worry, this can easily be fixed: after we project to a random subspace of dimension k , we just blow up all the points by a factor of \sqrt{d/k} . That way, when we look at the norm squared, the expectation will be d/k times larger, and will indeed be 1. This multiplication does not affect any of our probabilistic statements, since if ||Z||^2 is concentrated around its mean, so is \frac {d}{k}||Z||^2 .
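You can see both the k/d expectation and the \sqrt{d/k} fix in a quick simulation (again my own illustration, sampling uniform unit vectors exactly as constructed above):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, trials = 100, 20, 20000

X = rng.normal(size=(trials, d))                   # iid N(0, Id_d) vectors
Y = X / np.linalg.norm(X, axis=1, keepdims=True)   # uniform unit vectors
Z2 = np.sum(Y[:, :k] ** 2, axis=1)                 # ||Z||^2 for each sample

mean_Z2 = Z2.mean()            # should be close to k/d = 0.2
rescaled = (d / k) * mean_Z2   # after the sqrt(d/k) blow-up: close to 1
```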

Now to fulfil our promise and show that the squared length of Z is heavily concentrated around its mean. The calculations will be a bit lengthy, but do not despair (also, if you got up to here and understood everything, you are pretty much good-to-go for most of your Johnson-Lindenstrauss-ing needs).

We’ll show that for any positive real number \beta smaller than 1, the probability for ||Z||^2 to be smaller than \beta times its mean of k/d is exponentially small in the dimension and in \beta ; we’ll then take \beta = 1 - \varepsilon to show that the probability of a large deformation is small. The same calculation can also be repeated for checking when ||Z||^2 is greater than \beta times its mean, for \beta > 1 , and then taking \beta = 1 + \varepsilon .

A standard method for showing such concentration of measure is using Markov’s inequality, which states that if W is a positive random variable, then for every a > 0 ,

\text{Pr}[W \geq a] \leq \frac{\mathbb{E}W}{a}.

Intuitively, all this inequality says is that if the expectation is small, then it’s impossible to have too large a probability of getting too large a value. Markov’s inequality is innocuous enough, but it becomes powerful when we combine it with exponential moments, that is, when we apply it to e^{tW} instead of to W itself. Let’s see how that works.

When written explicitly, ||Z||^2 becomes

||Z||^2 = \frac{X_1^2 + \ldots + X_k^2} {X_1^2 + \ldots + X_d^2}.

So the probability \text{Pr}[||Z||^2 \leq \frac{\beta k}{d}] can be written as

= \text{Pr}[d(X_1^2 + \ldots + X_k^2) \leq k \beta(X_1^2 + \ldots + X_d^2) ]

= \text{Pr}[k \beta(X_1^2 + \ldots + X_d^2) - d(X_1^2 + \ldots + X_k^2) \geq 0]

=  \text{Pr}[e^{t(k \beta (X_1^2 + \ldots + X_d^2) - d(X_1^2 + \ldots + X_k^2))} \geq 1].

Here t was some number greater than 0, and we will optimize over it soon in order to get the best possible bounds. Now we invoke Markov’s inequality with a = 1 , getting

\leq  \mathbb{E}e^{t(k \beta (X_1^2 + \ldots + X_d^2) - d(X_1^2 + \ldots + X_k^2))}.

The various X_i ‘s are independent, so the expectation of the product is the product of expectations:

\mathbb{E}e^{X_1 + X_2} = \mathbb{E}[e^{X_1}e^{X_2}] = \mathbb{E}[e^{X_1}]\mathbb{E}[e^{X_2}],

and after grouping together the X_i in our exponent, our probability is bounded by

=  \mathbb{E}[e^{t k \beta X_1^{2}}]^{d-k} \mathbb{E}[e^{t(k \beta - d)X_1^{2}}]^{k}

=  (1-2tk \beta)^{-(d-k)/2}(1-2t(k\beta-d))^{-k/2}.

This last equality we won’t go into, but it stems from knowledge about the moment generating function of Gaussians; in short, all that needs to be proven is that \mathbb{E}[e^{sX^2}] = 1/\sqrt{1-2s} for s<1/2 .
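If you don’t feel like proving the moment generating function identity, you can at least convince yourself numerically. Here is a small Monte Carlo check (my own illustration) of \mathbb{E}[e^{sX^2}] = 1/\sqrt{1-2s} :

```python
import numpy as np

# Monte Carlo check of E[e^{s X^2}] = 1/sqrt(1 - 2s) for standard Gaussian X.
# (We stay well below s = 1/2, where the expectation blows up.)
rng = np.random.default_rng(2)
X = rng.normal(size=500_000)

results = {}
for s in (0.1, 0.2):
    empirical = np.mean(np.exp(s * X**2))
    exact = 1 / np.sqrt(1 - 2 * s)
    results[s] = (empirical, exact)
```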

All that is left is to minimize this last expression. This we also won't do here, because it mainly involves annoying calculus which you can let WolframAlpha do for you. This finally gives:

\text{Pr}[||Z||^2 \leq \frac{\beta k}{d}] \leq \beta^{k/2}(1 + \frac{k(1-\beta)}{d-k})^{(d-k)/2}.

The expression on the right hand side is equal to e to the power of its logarithm:

\text{Pr}[||Z||^2 \leq \frac{\beta k}{d}] \leq \exp(\frac{k}{2} \log \beta + \frac{d-k}{2} \log(1 + \frac{k(1-\beta)}{d-k})).

Finally, since \log(1+x) \leq x for all x \geq 0, we can replace the second \log factor by \frac{k(1-\beta)}{d-k} , giving

\text{Pr}[||Z||^2 \leq \frac{\beta k}{d}] \leq \exp(\frac{k}{2}(1-\beta + \log\beta)).

Now we just need to clean up. Put \beta = 1-\varepsilon , and get

\leq \exp(\frac{k}{2}(1-(1-\varepsilon) + \log(1-\varepsilon))).

By more calculus it can be shown that \log(1-x) \leq -x - x^2/2 for 0\leq x <1 , and at last we have

\leq \exp(-\frac{k \varepsilon^2}{4}).

And indeed, if we choose k = 4 \log n / (\varepsilon^2/2 - \varepsilon^3/3) , this bound will be smaller than 1/n^2 , just as we wanted (actually, for this side we could choose just k = 8 \log n / \varepsilon^2 , but to deal with the second case of \beta = 1+\varepsilon we will also need the \varepsilon^3/3 term). A similar calculation for \beta = 1+\varepsilon will give another factor of 1/n^2 , and in total the probability of having a large distortion will be just 2/n^2 per pair of points.
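As a final sanity check (a short script of my own), we can verify numerically that this choice of k really pushes the single-pair tail bound \exp(-k\varepsilon^2/4) below 1/n^2 :

```python
import math

def k_choice(n, eps):
    # The dimension chosen in the lemma.
    return 4 * math.log(n) / (eps**2 / 2 - eps**3 / 3)

def pair_bound(k, eps):
    # The tail bound derived above for beta = 1 - eps.
    return math.exp(-k * eps**2 / 4)

checks = [
    pair_bound(k_choice(n, eps), eps) <= 1 / n**2
    for n in (100, 10_000)
    for eps in (0.1, 0.5, 0.9)
]
```

(Plugging in, the exponent is -\log n \cdot \varepsilon^2/(\varepsilon^2/2 - \varepsilon^3/3) , and since \varepsilon^2/2 - \varepsilon^3/3 \leq \varepsilon^2/2 , the bound is at most n^{-2} .)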

New paper on arXiv: indistinguishable sceneries on the Boolean hypercube

January 28, 2017

I’m happy to say that fellow student Uri Grupel and I uploaded a paper to the arXiv recently under the title “Indistinguishable sceneries on the Boolean hypercube”. We had great fun working on it, and most of the theorems are actually pretty simple and do not use heavy mathematical machinery, so I’d like to share the motivation and results with you below. I’ll try to write a fairly simple exposition, without most of the technical terms and details. If you find this too simple, just read the paper!

Our paper shows that the scenery reconstruction problem for the n-dimensional hypercube is impossible for n \geq 4. It does this using the combinatorial properties of what we call “locally biased” and “locally balanced” functions, some properties of which the paper investigates.

Don’t know what scenery reconstruction is? Want to learn about and play around with locally biased and locally balanced functions? Read on.

We’ll start with a simpler-to-state problem. Suppose you are given a cycle graph, that is, a collection of vertices arranged in a cycle. For example, here is a cycle with 13 vertices:


Each vertex is colored either black or white. There are many different such possible colorings: if there are n vertices, then there are 2^n different colorings. However, the cycle has some symmetries, and some of these colorings can be considered to be the same. For example, in the following figure, if we rotate the center cycle 5 places clockwise, we get the cycle on the left. Similarly, if we reflect the center cycle around a vertical line, we get the cycle on the right. Such colorings are called “isomorphic”, and we will treat them all as “the same” for all intents and purposes.
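Checking whether two cycle colorings are isomorphic is easy to mechanize; here is a short sketch (my own illustration, representing a coloring as a tuple of 0s and 1s) that generates all rotations and reflections:

```python
def dihedral_orbit(coloring):
    """All rotations and reflections of a cycle coloring, given as a tuple."""
    n = len(coloring)
    rotations = [coloring[i:] + coloring[:i] for i in range(n)]
    return set(rotations) | {tuple(reversed(r)) for r in rotations}

def isomorphic(c1, c2):
    """True if the two colorings are the same up to rotation/reflection."""
    return tuple(c2) in dihedral_orbit(tuple(c1))
```

For example, (1,1,0,0,0) and (0,0,0,1,1) are isomorphic (rotate by two places), while (1,1,0,0,0) and (1,0,1,0,0) are not, since rotations and reflections preserve adjacency of the two black vertices.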

Suppose now we place a person at one of the vertices at random. The person then performs a random walk: at each step, she randomly goes to the vertex either to her left or to her right, with each possibility equally likely to occur. Then she tells us the color of the vertex she is currently at. The result is a sequence of colors, one for each time step, which constitute the trace of the random walk on the cycle. Here is a short example of such a trace:


The idea you should keep in your mind is this: the coloring of the cycle is a “scenery”, and the agent, walking randomly along the cycle, tells you what she sees but *not* where she is going. The scenery reconstruction problem on the n-cycle is then framed as follows: suppose that you are given the length of the cycle n and also an infinitely long trace of scenery observations by your agent. Can you reconstruct the scenery itself? That is, can you find the original color of each vertex of the cycle?

As an easy example, if the original cycle was colored all black, then no matter where the agent goes, she will always report “black” at every time step. So if she reports “BBBBBBB…”, we can be confident that the original scenery is the all black cycle. Similarly, if the agent reports “BWBWBWB…”, we can be confident that the original scenery is the alternating black-white cycle (we can assume that in the long run, the agent will never be stuck in a “back and forth between two vertices only” path). But what about more complicated sceneries, like the one given below? Can we always reconstruct even very bizarre colorings?
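These two easy cases are simple to simulate; here is a sketch of the walker (my own illustration, with a fixed seed for reproducibility):

```python
import random

def walk_trace(coloring, steps, seed=0):
    """Colors reported by a random walker on a cycle colored 'B'/'W'."""
    rng = random.Random(seed)
    n = len(coloring)
    pos = rng.randrange(n)                        # start at a random vertex
    trace = []
    for _ in range(steps):
        trace.append(coloring[pos])
        pos = (pos + rng.choice((-1, 1))) % n     # step left or right
    return "".join(trace)
```

On the all-black cycle the trace is all Bs, and on an alternating even cycle the colors strictly alternate, whichever way the walker moves.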


The (perhaps surprising) answer is: yes! It was shown that if you are given a long enough series of observations, you can always reconstruct the original coloring! (Up to isomorphism, meaning, you cannot differentiate between colorings which are shifts or reflections of one another, but we already mentioned above that we view these as “the same”.) So while this basic question is closed, there are other open questions still to be answered. For example, how many observations do we need so that we will be able to reconstruct the coloring with very high probability?

Variants of the n-cycle reconstruction problem abound. What if, in our random walk, we don’t go left or right with equal probability? What if we can jump other distances? What if we color and walk on another graph other than the cycle?

The latter question is what Uri’s and my paper addresses. Instead of the n-cycle, we investigate the scenery reconstruction problem on the Boolean hypercube in dimension n. In dimension 1, the cube is just a line. In dimension 2, a square. In dimension 3, an actual cube. In higher dimensions visualizations start to get messy. One way to think about it is as follows: the (n+1)-dimensional hypercube is composed of just two identical n-dimensional hypercubes, with edges connecting matching vertices in the two cubes.


So again, we place an agent at one of the vertices at random, and she starts a random walk. This time, there are n possible choices of where to go in each step. Again, she reports only the color of the vertex she is currently at. Now can we reconstruct the original coloring?

For n=1, the hypercube is just two points connected by an edge, and there aren’t that many colorings on it, so it’s no big deal. For n=2, the hypercube is a square, which is actually a cycle of length 4, and we already know how to deal with cycles, so that’s no big deal either. For n=3, it can be shown that the methods used to solve for the cycle, combined with some case-by-case analysis, again are able to reconstruct the scenery. What about other dimensions?

The (perhaps surprising) answer is: no! Starting from n=4, there exist two colorings which are non-isomorphic (that is, they are not obtained by rotations or reflections of the cube), but which nevertheless cannot be distinguished from one another based on the sequence of color observations of our agent alone! So if we are given such a sequence, we can never be sure whether it came from one coloring or the other.

Here is such a pair in dimension n=4:


To see that these are indeed two different colorings (meaning, that one is not just a rotation or reflection of the other), note that in the left image there are two small rings of black vertices, each of size 4, while in the right image there is only one large ring, of size 8. This difference cannot be due to rotations and reflections alone.

If you look carefully, you will see an interesting property of these two colorings: every vertex, no matter what color it is, has exactly two neighbors which are black, and exactly two neighbors which are white. It is this property that prevents us from distinguishing between the two colorings: no matter where our agent is, at the next step she will report “black” with probability 1/2, and “white” with probability 1/2, for both colorings. In technical terms, the sequence of observations distributes as a sequence of iid Bernoulli variables with success probability 1/2.
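This “two black, two white neighbors” property is easy to verify by machine. Here is a sketch (my own illustration, and a coloring of my own choosing rather than the pair from the figure): coloring each vertex of the 4-cube by the product of its first two \pm 1 coordinates gives exactly this property, since flipping x_1 or x_2 flips the color while flipping x_3 or x_4 keeps it.

```python
from itertools import product

def neighbors(v):
    """The n vertices obtained by flipping one coordinate of a ±1 vector."""
    return [v[:i] + (-v[i],) + v[i + 1:] for i in range(len(v))]

def is_locally_biased(f, n, p):
    """Check that every vertex has exactly p*n black (= +1) neighbors."""
    return all(
        sum(1 for u in neighbors(v) if f(u) == 1) == p * n
        for v in product((-1, 1), repeat=n)
    )

# Color a vertex by the product of its first two coordinates.
f = lambda x: x[0] * x[1]
```

By contrast, the full product x_1 x_2 x_3 x_4 fails: there, every neighbor has the opposite color.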

We can take this property and generalize it: we call a coloring “locally p-biased” if every vertex, no matter what color it is, has a p-fraction of its neighbors colored black, and a 1-p-fraction of its neighbors colored white. If there are two non-isomorphic such colorings on the hypercube, for any p, then the scenery reconstruction problem is impossible to solve on that hypercube. In fact, this statement is true for any graph, not just the hypercube, so finding locally biased colorings on different graphs shows that the scenery reconstruction problem is impossible for those graphs. But we think that such colorings stand on their own right, and deserve study regardless of their application to scenery reconstruction; they are simply interesting combinatorially!

Our first result was to classify for which values of p an n-dimensional hypercube even has a locally biased coloring. We saw a 1/2-biased coloring for n=4. Obviously, a 1/2-biased coloring requires the dimension to be even, since each vertex has n neighbors, and exactly n/2 of those need to be black, and n/2 white. So is there a locally 1/2-biased coloring on the 6-dimensional hypercube? What about a 1/3-biased coloring? You can try it out yourself for a bit and convince yourself that there is no 1/3-biased coloring on the 3-dimensional cube. After a bit of work, we came up with the following theorem:

Theorem 1: There exists a locally p-biased coloring on the n dimensional hypercube if and only if p is of the form p = b/2^k and 2^k divides n.

This shows, for example, that there are no locally biased colorings on odd-dimensional hypercubes. But for even-dimensional hypercubes, there do exist such colorings, and then it’s interesting to start counting them: as we said before, if there are more than two colorings for the same p value, then the scenery reconstruction problem is impossible.
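For tiny cubes you can check all of this exhaustively. Here is a brute-force script (my own illustration) confirming that on the 3-cube only the trivial colorings are locally biased: all white (p = 0) and all black (p = 1), in agreement with the theorem (for n = 3 we must have 2^k = 1, so p is an integer):

```python
from itertools import product

verts = list(product((-1, 1), repeat=3))

def neighbors(v):
    # Flip one coordinate of a ±1 vector.
    return [v[:i] + (-v[i],) + v[i + 1:] for i in range(3)]

def locally_biased_p(color):
    """Return p if the coloring (dict: vertex -> ±1) is locally p-biased, else None."""
    counts = {sum(1 for u in neighbors(v) if color[u] == 1) for v in verts}
    return counts.pop() / 3 if len(counts) == 1 else None

# Brute force over all 2^8 colorings of the 3-cube.
biased_ps = sorted({
    p for bits in product((-1, 1), repeat=8)
    if (p := locally_biased_p(dict(zip(verts, bits)))) is not None
})
```

(For instance, p = 1/3 is impossible by a counting argument: each vertex having exactly one black neighbor would force 3 times the number of black vertices to equal 8.)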

Our proof uses elements from coding theory, and is not very complicated. I’ll tell it in short in the following paragraphs: feel free to skip it if you don’t want to go into the details.

One side, the “only if”, tells you which colorings are impossible. In the n-dimensional hypercube, each vertex has n neighbors, and there are 2^n vertices overall. Suppose that in a specific locally p-biased coloring, a total of l vertices are colored black. Then when you pick a random vertex, it is black with probability l/2^n. On the other hand, when the agent takes a random step, it will be black with probability p = m/n for some m. These two probabilities are equal, so

m/n = l/2^n.

Decomposing n into a power of two times an odd part, write n = c2^k where c is odd. Then

l = 2^{n-k} \cdot m / c

and since l must be a whole integer (and not a fraction), we get that m = bc for some b. This gives p = b/2^k after a short calculation.

The other direction, which says that for all p values of the proper form you can build a locally p-biased coloring, is a bit harder. We start by building locally 1/n-biased colorings for n that is a power of two, that is, n=2^k. Such colorings have the peculiar property that they induce tiles which tile the hypercube. What do I mean by this? In such a coloring, every vertex, no matter the color, has exactly 1 black neighbor. In particular, this is true for black vertices. So black vertices always come together in connected pairs, and all the neighbors of such pairs are white. Further, these white neighbors already have their “1-black-neighbor” condition satisfied, so all their other neighbors are also white. This is exemplified in the following shape, or “tile”:


If we can “tile the hypercube” with these shapes, that is, if we can cover the hypercube graph with these tiles without any pair of tiles overlapping, we will have a locally 1/n-biased coloring. Why? Well, these tiles exactly satisfy the properties stated in the above paragraph, and every vertex in such a tiling will have exactly one black neighbor.

The proof constructs such tilings by using perfect codes; without going into detail, these are well-known methods of tiling the hypercube with a half-tile: instead of two black vertices and two rings of white vertices, these methods tile the hypercube with just a single black vertex and a single ring of white vertices. These are actually known as “balls” on the hypercube:


By taking two copies of a perfect code and connecting the black vertices, we obtain the needed tiling! This gives us a locally 1/n-biased coloring, for n that is a power of two (the perfect codes we need only exist in dimension 2^k-1, so when we take two copies of the hypercube together we get a hypercube of dimension n = 2^k).

To obtain a locally m/n-biased coloring, we join together m different disjoint 1/n-biased colorings. By m disjoint colorings, we mean that if a vertex is colored black in one coloring, then it is necessarily colored white in all the others. Finally, using a bit of algebraic manipulation which I will not go into here, we can extend these colorings into hypercubes of dimension c \cdot 2^k for any c, which is exactly what our theorem stated we could do.

The theorem shows existence of some locally p-biased colorings, but tells us nothing of the number of such colorings. In order to show that the scenery reconstruction problem is impossible, we need to find at least two different such colorings. That’s where our next two theorems come into play:

Theorem 2.1: As a function of n, there is at least a super-exponential number of different locally 1/n-biased colorings on the n dimensional hypercube.

Theorem 2.2: As a function of n, there are at least C \cdot 2^{\sqrt{n}} / \sqrt{n} different locally 1/2-biased colorings on the n dimensional hypercube, where C is some constant.

Theorem 2.1 is proved by counting the number of perfect codes, which were used in proving the previous theorem; we won’t go into it here.

The proof of Theorem 2.2 is more interesting, and relies on an algebraic treatment of our colorings. It goes as follows.

Up until now, we have treated our colorings as assigning the color either black or white to each vertex. We could have equally well assigned the numbers “+1” or “-1”, and all the theorems and statements would have stayed the same: all we care about is that there are two distinct “colors”. But this description has an immediate advantage: we can now easily perform arithmetic operations on the colorings. For example, if both f_1 and f_2 are colorings, then their product f = f_1 \cdot f_2 is also a coloring, where by multiplication we mean that if f_1 assigns the color f_1(v) to a vertex v, and f_2 assigns the color f_2(v), then f(v) = f_1(v)\cdot f_2(v). In words, what this multiplication means is that f is black precisely when f_1 and f_2 have the same color. This algebraic representation is often easier to work with than the verbal one.

Now, the n-dimensional hypercube can be described as follows: it is the set of points with n coordinates, where each coordinate can be either +1 or -1 (you can easily verify that this fits in with our notions of 2d and 3d cubes; it works just the same for higher dimensions). Thus a coloring is just a function

f(x_1, x_2, \ldots, x_n)

which returns either 1 or -1, and all the x_i‘s are also either 1 or -1.

This representation lets us combine two locally 1/2-biased colorings on small hypercubes together into one locally 1/2-biased coloring on a larger hypercube: Suppose that f_1 is such a coloring on the n dimensional hypercube, and f_2 is such a coloring on the m dimensional hypercube. Then the new coloring

f(x_1, x_2, \ldots, x_{n+m}) = f_1(x_1,\ldots,x_n) f_2(x_{n+1}, \ldots, x_{n+m})

is a locally 1/2-biased coloring on the n+m dimensional hypercube! Our method for counting the number of locally 1/2-biased colorings then goes as follows: we construct many high dimensional colorings from different lower-dimensional ones. In fact, our “elementary building blocks”, from which we build all other colorings, are basically generalizations of the two colorings of the 4-dimensional hypercube which we encountered earlier in this post. It can be shown (via Fourier decomposition, a method which we won’t go into here) that taking combinations of different low-dimensional colorings always gives non-isomorphic high-dimensional colorings. In other words, if we use different building blocks, then the resultant colorings will be distinct.
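The combination rule itself is easy to check by machine on a small case. Here is a sketch (my own illustration, reusing the coloring x_1 x_2 from before, which is an assumed example rather than one of the paper’s building blocks) gluing two 4-cube colorings into an 8-cube coloring:

```python
from itertools import product

def neighbors(v):
    # Flip one coordinate of a ±1 vector.
    return [v[:i] + (-v[i],) + v[i + 1:] for i in range(len(v))]

def is_locally_half_biased(f, n):
    """Every vertex has exactly n/2 neighbors colored +1."""
    return all(
        sum(1 for u in neighbors(v) if f(u) == 1) == n // 2
        for v in product((-1, 1), repeat=n)
    )

f1 = lambda x: x[0] * x[1]           # locally 1/2-biased on one 4-cube
f2 = lambda x: x[0] * x[1]           # a copy on a second 4-cube
g = lambda x: f1(x[:4]) * f2(x[4:])  # the product coloring on the 8-cube
```

The reason it works in general: half of the neighbors inside each factor cube already have the right product sign, whatever the other factor contributes.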

We can then get a bound on the number of locally 1/2-biased colorings by counting in how many distinct ways we can add up low-dimensional colorings into a high-dimensional one. After a bit of arithmetic, we find that there are at least as many colorings as there are solutions to this inequality:

4a_1 + 8a_2 + \ldots + 4k a_k \leq n,

where a_1,\ldots,a_k are non-negative integers. The number of solutions to this inequality is at least C \cdot 2^{\sqrt{n}} / \sqrt{n} , giving us Theorem 2.2.

This, I think, is quite neat: it’s enough to find just two different colorings in order to show that the scenery reconstruction problem is impossible, but in fact there are many, many more of them. And what we gave was only a lower bound: it’s entirely plausible that there is a more clever way of counting these colorings, which will give an even larger number.

So far, so good. We saw that the scenery reconstruction problem is impossible for even-dimensional hypercubes, and even saw that there are different indistinguishable colorings. What about odd-dimensional cubes?

The answer comes from what we call “locally p-balanced” colorings. In these colorings, every vertex, no matter what color it is, has a p-fraction of its neighbors colored in its own color, and a 1-p-fraction of its neighbors colored the opposite color. These colorings also yield indistinguishable scenery observations: when our agent tells us the colors she sees, the next color will always be the same as the current one with probability p. For example, the following figure shows a locally 2/3-balanced and a locally 1/3-balanced coloring on the 3-dimensional cube.


Locally balanced colorings are different from locally biased colorings, although the two are related. For example, we saw that there is a strong restriction on the possible p values for locally biased colorings, but it turns out that for every p, there is at least one locally balanced coloring. On the other hand, if p = 1/n or p = (n-1)/n, then there is exactly one locally balanced coloring, while, as we saw, there are many, many locally biased ones.

The main point, though, is that every locally 1/2-biased coloring f on the n-dimensional hypercube easily gives a locally balanced coloring g on the (n+1)-dimensional hypercube, by:

g(x_1, \ldots, x_{n+1}) = f(x_1, \ldots, x_n) \cdot x_{n+1}.

You should verify for yourself that this is indeed a locally balanced coloring, and calculate its p value. This construction gives different locally balanced functions for all odd-dimensional hypercubes of dimension n \geq 5, showing that the scenery reconstruction problem is impossible for all n \geq 4. And that’s what we promised to show in the beginning of this post.
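If you would rather let a computer do the verifying, here is a small brute-force sketch (entirely my own; it assumes the ±1 convention for both coordinates and colors, and it lifts a toy locally 1/2-biased coloring on the 2-cube – the first coordinate – rather than one of the n ≥ 4 colorings from the post):

```python
from itertools import product

def neighbors(v):
    # the hypercube neighbors of v: flip each ±1 coordinate in turn
    return [v[:i] + (-v[i],) + v[i + 1:] for i in range(len(v))]

def is_locally_biased(f, n, num, den):
    # locally (num/den)-biased: every vertex has exactly a num/den-fraction
    # of its neighbors colored +1
    return all(den * sum(f(u) == 1 for u in neighbors(v)) == num * n
               for v in product((-1, 1), repeat=n))

def balanced_fraction(g, n):
    # if g is locally balanced, every vertex agrees with the same fraction
    # of its neighbors; return that fraction, or None if it isn't balanced
    agreements = {sum(g(u) == g(v) for u in neighbors(v))
                  for v in product((-1, 1), repeat=n)}
    return agreements.pop() / n if len(agreements) == 1 else None

# toy locally 1/2-biased coloring on the 2-cube: the color is the first coordinate
f = lambda v: v[0]
assert is_locally_biased(f, 2, 1, 2)

# lift it: g(x_1, x_2, x_3) = f(x_1, x_2) * x_3
g = lambda v: f(v[:2]) * v[2]
print(balanced_fraction(g, 3))  # 1/3 – a locally 1/3-balanced coloring
```

The computation matches the pencil-and-paper answer: among a vertex’s n+1 neighbors, the n that keep the last coordinate agree with g exactly when f agrees with f(x), which happens for half of them, and the single neighbor that flips the last coordinate always disagrees, giving p = (n/2)/(n+1), which is 1/3 here.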

You should play around with locally biased and locally balanced colorings yourself; they serve as good brain teasers, and indeed we’ll end with a small riddle: here are two locally balanced colorings on the infinite one-dimensional and two-dimensional integer lattices. Can you find one for the three dimensional lattice?



What you see is what you get

August 16, 2016

Here is a very nice image, composed of four squares. The upper-left and lower-right squares are pink with black horizontal stripes, and the upper-right and lower-left squares are green with black vertical stripes.


What do you mean by “you’re talking nonsense”? Any fool can see that two squares are pink and two are green, and if all you see is a black and white image, well, you had better get yourself checked for total colour blindness; I’m surprised you haven’t noticed it this far into your life! Now, we could argue about this for quite a long while, and you would claim that no matter which way you look at it, it’s a black and white image, and I would say that maybe it does depend on which way you look at it, but that would be quite worthless; I see what I see, and that’s that.

We all keep in the back of our heads the fact that different people see the world differently. Either literally, as in, they are colourblind, or more abstractly, as in, they grew up in a different environment and had different life events and experiences and relationships and traumas and solar flares which shaped their persona in unique and delicate ways. But often the “back of the head” is very, very deep inside, and we are not consciously aware that we are judging others based on our own experiences and not on theirs.

So it’s good to occasionally get a reminder. A reminder that our experiences change the way we perceive the world, maybe even permanently; a reminder that what may be crystal clear to one may be the exact opposite to another. The McCollough effect is such a reminder, and a darn freakin’ awesome one at that.

The McCollough effect is produced as follows: you look at a grating with horizontal stripes (much like the top-left square above) that is coloured green for a couple of seconds. Then you look at a grating with vertical stripes that is coloured pink for a couple of seconds. Rinse and repeat for, say, 5-10 minutes. Afterwards, when you look at the ordinary black and white gratings, they appear to be coloured pink or green, depending on their stripe pattern.

This is reminiscent of “image burning” on the retina, where if you stare at an image for a while and then at a blank wall, you see an “afterimage” of what you looked at. But it goes much deeper than this. For example, it’s specific to the grating pattern: if you rotate the grating you are looking at by 90 degrees, the green and pink colours are swapped; if you rotate by 45 degrees, they disappear altogether! It also lasts longer. I did the experiment for 10 minutes, meaning I endured a 10 minute staring contest with the pink/green gratings, and the effect lasted for three days. Let’s repeat: for three days afterwards, instead of seeing a black and white square with horizontal stripes, I saw a pink square with horizontal stripes. Some people retained this effect for months!

Here is a link where you too can witness the phenomenon:
It takes a bit of patience to sit through 10 minutes of staring, but I think it’s quite worth it; it’s certainly much better than all those TED videos claiming they will change the way you see the world.

Maybe I’m reading a bit too much into this overall esoteric phenomenon. After all, politics and worldviews are much more concrete, aren’t they? Well, what you see is what you get.

But to end lightly, let’s finish off with an ever-relevant SMBC:

Book Review: Ringworld

August 8, 2016

*** Spoilers included. But I don’t think you should care, because this is a horrible book and you should not read it. That’s also a spoiler. ***

Recipe: take about 200 grams of insightful ideas concerning civilization and space exploration, slice ’em and dice ’em, then mix in about 10 kilos of misogyny, extreme cherry-picking physics, poor character development, unrealistic behaviour, some more misogyny and a ridiculous historical explanation, and mix thoroughly until homogeneous. Congratulations! You have just cooked “Ringworld”, by Larry Niven.

The main setting of Ringworld is actually quite interesting: an ancient, almost all-powerful civilization has figured out a novel way to avoid real estate problems: take all the matter in the solar system and stretch it out in the form of a thin ring around the sun. If the ring is about as far away from the sun as Earth is, and the ring is wide enough, you get an area equivalent to a hundred thousand planets; enough space for everybody. This is the Ringworld.

What would life on such a construction look like? Will it still have trees, mountains, seas, and lakes? What sort of precautions do you need to take so that the Ringworld survives through the ages? How do you produce energy? How do you maintain a day-night cycle? How do you effectively leave and return to Ringworld? What happens when society finally collapses? All these questions and more are discussed in the book. Some of Niven’s solutions make good sense, others less so, but in any case, it’s an interesting concept to think about, especially since it’s a sort of intermediate stepping stone on the way to achieving a Dyson Sphere.

Unfortunately, apart from these scientific ponderings, I found the book to have very little value. Negative value, in fact. As you see above, the list of faults is large. Some things annoyed me so very much that I had to force myself to keep on reading. To name a few:

Sexism: There are two women in the book. I reluctantly accept this, as almost all science fiction of the time was exclusively male centered. But the book’s treatment of these two women is horrible. The first woman, Teela, is part of the four-person team that explores Ringworld. Her primary role in the book is to be lucky and naive, having “never been hurt in her entire life”. And by this, I mean that this is an actual plot device. Sentences of the form “this could only have happened because Teela was never really hurt in her entire life” do actually appear in the book. Her most positive merit is her luck. Teela also has a secondary role, which is to be a fuck-buddy to the main character, and to be heroically saved by him whenever possible. In fact, I think I got mixed up in my priorities: this is her primary role, and most of the book’s plot is moved forward by her being a fuck-buddy to the main character, or when the rest of the team tries to save her from peril.
The second woman, Prill, is a survivor who lived through the Ringworld’s downfall. How did she survive? By being a professional whore, of course (again, I’m not making this up; since she takes youth-prolonging drugs, she has had “thousands of years of experience” and is a sex grandmaster; that’s how she gets by). Obviously, a large part of her role is to be a fuck-buddy to the main character. Oh, sorry, did I say “large”? I meant “only”.
And this is without mentioning that there are two alien species in the book, both of which have “non-sentient females”, and one of the aliens in the expedition team is helping out only because this will give him the privilege of mating. If Niven tried to be sexist on purpose, he certainly did a really great job at it.

Physical cherry picking: Niven tries to describe Ringworld realistically. He tries to be accurate with the sizes and dimensions, the maneuvers that spaceships must undergo in order to leave and return to the Ringworld, etc. But he only does this when it suits him. Otherwise, he just hurls all the rules out the window, and expects us to accept a plethora of miraculous devices which do incredible things. To be fair, science fiction has often relied on some unexplained non-existent phenomenon – say, faster-than-light travel – and then investigated the social and psychological consequences of having such a device. And that’s fine, because that’s why we have “fiction” in the genre title, and people can get away with a little suspension of disbelief.
But Niven really misses it. He introduces way more unbelievable / impossible devices than he explains. To name a few:

  • Spaceships that are impervious to any known force except light and gravity.
  • A pill which prolongs life pretty much indefinitely.
  • Anti-gravity generators, capable of selectively floating entire buildings to arbitrary heights.
  • A small machine that can transform any organic matter into food.
  • A nearly indestructible material that stops neutrinos.
  • Teleportation pads.

Most of these devices are used extensively throughout the book, so it’s not like they are just a curiosity: without the food-generating machines, the exploration team would have no food. Without the life-prolonging drugs, Prill would not have survived, and neither would the main character. Without the nearly-indestructible spaceship, the characters would have been fried at the very beginning. And so on. And these aren’t even the craziest things that happen in the book. Did you know that there is a civilization that decides to migrate away from their solar system, so they get on their spaceships and just push their planets along? Niven is playing god in sandbox mode, and he doesn’t care to explain to us what is possible and what isn’t; it’s useless for us readers to try and predict how any event will turn out. The technology is so advanced, it really is just magic.

Unrealistic downfall: The main characters explore the Ringworld, discovering that its society collapsed from a state of master engineers who can basically move stars, to savages barely above ancient farmer technology and civilization. And why? Because a spaceship accidentally brought in some fungus which eats superconductors. Apparently, for Niven, this is a sufficient explanation for the downfall of a civilization with the power to turn planets into a gigantic ring around the sun. The fungus ate all the superconductors in Ringworld, an object with a 150,000,000 kilometer radius, and destroyed all of its power sources, before anyone could notice and do anything about it. Also, all written text was catastrophically lost, apparently, because the survivors have no recollection of anything of their past grandeur, they cannot fix anything or access any technology, and they worship the “ancient engineers” as gods. Well, I was the one who lost it in this case.

In fact, I’m losing it right now. I already wrote a thousand words, and I haven’t even gotten around to telling you about the crappy, one-dimensional behavioural psychology going on between the expedition’s team members (except for Teela, of course. She has zero-dimensional interactions). So I’m going to stop here.

Do yourself a favor; don’t read this book.

To Carnegie Hall!

August 4, 2016

Recently a friend and I had a chat about music, and he asked me if I do any composition of my own. Unfortunately I do not. I suppose I could blame it on a lack of improvisation skill, which in turn originates from a desire to play pieces “as they were originally intended”, i.e. sticking to the sheet music, though I guess the true answer also has something to do with a fear of failure of some sort.

In general, classical music sports a rather bold distinction between “performer” and “composer”. The composer is the person who creates the music, the performer is the person(s) who executes the music, and they need not be the same gal (indeed, the composer may write for an instrument she does not even know how to play! or write for an entire orchestra / ensemble / etc). The fact that classical music is “classic” also contributes greatly to the distinction: most of the great composers of yore are dead; the best we mortals can do is echo their masterpieces.

But being a classical performer is no shame. Indeed, some performers have risen to a demigod stature among the population (ok, among a very particular slice of the population, but it is a demigod stature nonetheless). These men and women have brought the art of execution of art to a grandmaster level. They are experts in their field; they tune every staccato and accent to picometer precision. They know each intricacy of each phrase by heart, mind, and finger.


Why am I telling you this? Because while in music both performers and composers are abundant, and both are respectable careers to aspire to, it seems to me that in high level mathematics, it is mostly the “composition” that is lauded. By “mathematical composers”, I mean research mathematicians, who explore the boundaries of the science, try to invent new mathematical structures and understand existing ones, and in general, prove a bunch of theorems, lemmas, corollaries, claims, propositions and remarks.

By “mathematical performers”, I mean those who take the work of the composers, and give the audience such a breathtaking show, that they’ll get a three-time standing ovation, eventually being forced to return to the stage to give an encore in the form of a “Using volume to prove Sperner’s Lemma” proof.

Yeah, I know, there aren’t many of the latter, and I think that we are all the poorer for it. What I envision is a mathematical lecturer virtuoso. Someone who can, through all the jumbled, messed up and interwoven six-part counterpoint of a proof, bring out a clear and lucid melody that will ring and resonate loud truth in the ears of the audience. Someone who can aptly tame the fierce and complex mathematical topics that generation upon generation of graduate students have failed to grasp, and finally bestow knowledge upon the ignorant. Someone who has studied the ancient texts and knows by heart, by mind, and by finger each intricacy of each phrase. Who can tune every theorem and lemma to picometer precision. An orator of great rhetoric, brilliant diction, and perfect handwriting. A lecture-hall veteran, who practices six hours a day and in the rest of her time finds out the best way to build a lecture series on a wide, demanding topic. In short, a full-time, professional, high level mathematics teacher.

Of course, the profession “full time teacher” is not unheard of. Yet, as far as I know, most teachers – i.e. most of those whose profession is to teach, and indeed do hone their presentation technique – are aimed at educating elementary and high school students. The number of such teachers at the academic level is small, if not infinitesimal. They do exist, for sure – at the Technion, as far as I know, at least two mathematics lecturers hold a full time position: Aviv Censor and Aliza Malek. They constantly receive much praise and awards, and their lecture halls are crammed so tightly, people stand in the hallways and peek through open doors just to hear them talk (though alas, I never chanced to study under them; this is in part because most of the courses they teach are aimed at non-mathematicians, and in any case are intended for undergraduates). But such men and women are a rarity.

Why is this? It’s quite understandable that many people would prefer to go into research rather than performance, but even then I would expect to see more performers than we have so far. Two other immediate reasons are: 1) lack of paying customers, lack of demand. 2) low social status when compared to research mathematicians (“Oh, you don’t invent anything of your own?”).
But this isn’t so with music, and *should not* be so with mathematics. I can therefore only hope that I live to see the day, when Carnegie Hall is filled to bursting with excited concert-goers; and when the lights turn on after an hour and a half of a dazzling performance of “The Nash Embedding Theorem”, there will not be a man or woman left unmoved, their hearts pounding with reborn youth, the math as music to their ears.

Electric Insanity

June 15, 2016

My piano is mentally ill. Now, you might be wondering how it is that a piano can be sick, but it’s an electronic piano (Korg SP250), and thus has a specialized electronic brain in charge of imitating the sound of tiny hammers hitting strings when I press its plastic keyboard. And like any brain, it can develop dementia.

This particular affliction is a rather odd one: sometimes, the piano plays notes which I did not press. Now, you may think, “poor boy, he plays the wrong notes – a musician always blames his instruments”, and while I do credit my machine with much of my musical dysfunction, it being only a digital replica of real life, this is different.

To be fair, the piano has always faithfully played notes which I did press. The problem comes from those that it decides to add itself. Here are my observations:

  • Usually the added notes are one or two octaves above or below the ones that I’m actually playing. This comes as a great surprise and is in stark contrast to the piece I’m working on.
  • They are always very loud and startling.
  • Sometimes a single note is added, at other times a pair of them in rapid succession.
  • They are never in harmony with what I play (modulo chance).
  • I’m not exactly sure, as I haven’t recorded my statistics, but I tend to think that the phenomenon appears more often when I play a dense section with lots of notes in a short time interval, rather than a slower, sparser section.
  • Perhaps a suitable analogy is this: while you play your lovely Beethoven sonata, your 2 year old nephew materializes from the void on either your right or your left, bangs on a couple of notes, then quickly evaporates to mist before you have a chance to realize what is happening.

I can only wonder how this came to be. When I bought the piano seven years ago it was in perfect condition. Perhaps some dust mote got into its circuitry, occasionally shorting between the keys. Perhaps I moved it around too often, or put it in too damp an environment. Perhaps it’s just old age – seven years for mechanical electronics is a lot these days, and even your most precious loved ones can develop dementia.

And perhaps I’m wrong about all of this, and the exact opposite is happening. It’s not that the piano has reached old age and is developing Alzheimer’s; on the contrary, it’s only now grown beyond infancy. It’s only now learned how to express itself, only now felt strong enough to pronounce something of its own. For all these years I have hammered away at the keys, and yes, it produced the requested sounds as it strived to fulfill my bidding. But in reality, it was silent and mute, contained in introverted misery.

But now, it speaks! Sensing Beethoven’s genius, it tries to join in, to contribute its own harmony, its own melody. How it yearns to be part of such great music! How it aspires to compose like the grand legends of old! How it enjoys playing together with the pianist, not as master and slave, but as equals! Should not the instrument itself, being a vessel of music, be held in high regard? Could not the mechanical automaton finally harness the artist’s inspiration?

Perhaps. And perhaps not. It is, after all, just an electronic piano; one of the cheaper models, in fact. But even if the mysterious erroneous notes are only the result of a deranged digital neuron, an unintentional crossover current, it will not always be so. The day will come when our machines compose, paint, write and program, and will enjoy themselves all the while, perhaps even more than we do. When that day comes, I sincerely hope we will see its glaring truth and rejoice in their jubilee, rather than just dismiss it as a digital, electric insanity.

I don’t like spinach: Book Review: Giant Book of Jokes

    April 25, 2016


I got Joseph Rosenbloom’s “Giant Book of Jokes” when I was around 9. It contains over one thousand jokes, which really might seem like quite a lot for a young boy. In an age before graphical image macros and social media, one did have to resort to books to get some literary humor.

I can’t testify much to the quality of the book. Most of the jokes are one or two liners, and many rely on awful puns and homophones. For example:

  • I don’t care if the basement wall is cracking. Please stop telling everyone you come from a broken home.
  • Hot weather never bothers me. I just throw the thermometer out the window and watch the temperature drop.
  • Nit: Please call me a taxi.
    Wit: Ok, I’ll call you a taxi, though you look more like a truck to me.


If your taste is a bit darker, there are some more sinister or sarcastic ones:

  • Junior wrote a letter from camp:
    Dear Mom,
    What’s an epidemic?
    Signed, Junior
  • Salesman: That suit fits you like a glove.
    Customer: Great! Can you show me one that fits like a suit?

In other words, if we want to keep up to date with the current jargon, it’s a book of “dad jokes”.

I’ve actually had quite an experience looking back and rereading the book (now out of print). This is probably where my humor converged to. If only my parents had known, twenty years ago, that I’d absorb this kind of thing to my bones, maybe they’d have gotten me a book about knitting instead. But the damage is done, and in fact, I still use some of the jokes today (!).

They say that every joke has a sliver of truth in it. Ok, obviously not *every* joke. But sometimes you find a joke so accurate, it shakes you with an exhilarating vibration. Two short examples:

  • Junior: Why does it rain, Dad?
    Father: To make the flowers grow, and the grass and the trees.
    Junior: So why does it rain on the sidewalk?
  • Teacher: Let us take the example of the busy ant. He works all the time, night and day. Then what happens?
    Pupil: He gets stepped on.

The first is a parody of pretty much all purpose-oriented explanations you would get in response to any question (and some scientific ones, too); the second is an all-too-accurate reminder of the futility of our pitiful existence.

But probably my favorite of them all is this gem:

I don’t like spinach and I’m glad I don’t like it, because if I did like it, I would eat it – and I hate the stuff.

In one simple nonsensical sentence, the joke exposes a basic fault of people dealing with politics, religion, and schisms: the belief that one’s opinion is clearly the correct one, and that there is no use in even trying to look at an issue from another perspective, because one’s opinion is clearly the correct one. To apply, just replace spinach with liberal, conservative, religious, atheist, alpha, beta, emacs, vim.

And as for me: I, too, used to dislike spinach.