New paper on arXiv: indistinguishable sceneries on the Boolean hypercube

January 28, 2017

I’m happy to say that fellow student Uri Grupel and I uploaded a paper to the arXiv recently under the title “Indistinguishable sceneries on the Boolean hypercube” (https://arxiv.org/abs/1701.07667). We had great fun working on it, and most of the theorems are actually pretty simple and do not use heavy mathematical machinery, so I’d like to share the motivation and results with you below. I’ll try to write a fairly simple exposition, without most of the technical terms and details. If you find this too simple, just read the paper!


Our paper shows that the scenery reconstruction problem for the n-dimensional hypercube is impossible for n \geq 4. It does this using what we call “locally biased” and “locally balanced” functions, whose combinatorial properties the paper also investigates.

Don’t know what scenery reconstruction is? Want to learn about and play around with locally biased and locally balanced functions? Read on.

We’ll start with a simpler-to-state problem. Suppose you are given a cycle graph, that is, a collection of vertices arranged in a cycle. For example, here is a cycle with 13 vertices:

[Figure: a cycle with 13 vertices]

Each vertex is colored either black or white. There are many different such possible colorings: if there are n vertices, then there are 2^n different colorings. However, the cycle has some symmetries, and some of these colorings can be considered to be the same. For example, in the following figure, if we rotate the center cycle 5 places clockwise, we get the cycle on the left. Similarly, if we reflect the center cycle around a vertical line, we get the cycle on the right. Such colorings are called “isomorphic”, and we will treat them all as “the same” for all intents and purposes.

[Figure: three isomorphic colorings of the cycle, related by rotation and reflection]
Suppose now we place a person at one of the vertices at random. The person then performs a random walk: at each step, she randomly goes to the vertex either to her left or to her right, with each possibility equally likely to occur. Then she tells us the color of the vertex she is currently at. The result is a sequence of colors, one for each time step, which constitute the trace of the random walk on the cycle. Here is a short example of such a trace:

[Figure: the agent walking on the cycle and the trace of colors she reports]

The idea you should keep in your mind is this: the coloring of the cycle is a “scenery”, and the agent, walking randomly along the cycle, tells you what she sees but *not* where she is going. The scenery reconstruction problem on the n-cycle is then framed as follows: suppose that you are given the length of the cycle n and also an infinitely long trace of scenery observations by your agent. Can you reconstruct the scenery itself? That is, can you find the original color of each vertex of the cycle?
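If you want to get a feel for these traces, here is a minimal Python sketch (the coloring and the number of steps are arbitrary choices of mine, just for illustration) that places an agent at a random vertex of a colored cycle and records what she reports:

import random

def walk_trace(coloring, steps):
    """Simple random walk on a cycle colored by `coloring` (a list of 'B'/'W');
    at each step the agent moves left or right with equal probability and
    reports the color of the vertex she lands on."""
    n = len(coloring)
    position = random.randrange(n)   # start at a uniformly random vertex
    trace = []
    for _ in range(steps):
        position = (position + random.choice((-1, 1))) % n
        trace.append(coloring[position])
    return ''.join(trace)

# A 13-vertex cycle with an arbitrary coloring, observed for 30 steps.
print(walk_trace(list('BBWBWWBWBBWBW'), 30))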

As an easy example, if the original cycle was colored all black, then no matter where the agent goes, she will always report “black” at every time step. So if she reports “BBBBBBB…”, we can be confident that the original scenery is the all-black cycle. Similarly, if the agent reports “BWBWBWB…”, we can be confident that the original scenery is the alternating black-white cycle (we can assume that in the long run, the agent will not simply bounce back and forth between the same two vertices forever). But what about more complicated sceneries, like the one given below? Can we always reconstruct even very bizarre colorings?

[Figure: a more complicated coloring of the cycle]

The (perhaps surprising) answer is: yes! It was shown that if you are given a long enough series of observations, you can always reconstruct the original coloring! (up to isomorphism, meaning, you cannot differentiate between colorings which are shifts or reflections of one another, but we already mentioned above that we view these as “the same”). So while this basic question is closed, there are other open questions still to be answered. For example, how many observations do we need so that we will be able to reconstruct the coloring with very high probability?

Variants of the n-cycle reconstruction problem abound. What if, in our random walk, we don’t go left or right with equal probability? What if we can jump other distances? What if we color and walk on another graph other than the cycle?

The latter question is what Uri’s and my paper addresses. Instead of the n-cycle, we investigate the scenery reconstruction problem on the Boolean hypercube in dimension n. In dimension 1, the cube is just a line. In dimension 2, a square. In dimension 3, an actual cube. In higher dimensions visualizations start to get messy. One way to think about it is as follows: the n+1-dimensional hypercube is composed of just two identical n-dimensional hypercubes, with edges connecting matching vertices in the two cubes.

[Figure: hypercubes of increasing dimension]

So again, we place an agent at one of the vertices at random, and she starts a random walk. This time, there are n possible choices of where to go in each step. Again, she reports only the color of the vertex she is currently at. Now can we reconstruct the original coloring?

For n=1, the hypercube is just two points connected by an edge, and there aren’t that many colorings on it, so it’s no big deal. For n=2, the hypercube is a square, which is actually a cycle of length 4, and we already know how to deal with cycles, so that’s no big deal either. For n=3, it can be shown that the methods used to solve for the cycle, combined with some case-by-case analysis, are again able to reconstruct the scenery. What about other dimensions?

The (perhaps surprising) answer is: no! Starting from n=4, there exist two colorings which are non-isomorphic (that is, they are not obtained by rotations or reflections of the cube), but which nevertheless cannot be distinguished from one another based on the sequence of color observations of our agent alone! So if we are given such a sequence, we can never be sure whether it came from one coloring or the other.

Here is such a pair in dimension n=4:

[Figure: two indistinguishable colorings of the 4-dimensional hypercube]

To see that these are indeed two different colorings (meaning, that one is not just a rotation or reflection of the other), note that in the left image there are two small rings of black vertices, each of size 4, while in the right image there is only one large ring, of size 8. This difference cannot be due to rotations and reflections alone.

If you look carefully, you will see an interesting property of these two colorings: every vertex, no matter what color it is, has exactly two neighbors which are black, and exactly two neighbors which are white. It is this property that prohibits us from distinguishing between the two colorings: no matter where our agent is, at the next step she will report “black” with probability 1/2, and “white” with probability 1/2, for both colorings. In technical terms, the sequence of observations is distributed as a sequence of i.i.d. Bernoulli variables with success probability 1/2.

We can take this property and generalize it: we call a coloring “locally p-biased” if every vertex, no matter what color it is, has a p-fraction of its neighbors colored black, and a (1-p)-fraction of its neighbors colored white. If there are two non-isomorphic such colorings on the hypercube, for some value of p, then the scenery reconstruction problem is impossible to solve on that hypercube. In fact, this statement is true for any graph, not just the hypercube, so finding locally biased colorings on different graphs shows that the scenery reconstruction problem is impossible for those graphs. But we think that such colorings stand on their own right, and deserve study regardless of their application to scenery reconstruction; they are simply interesting combinatorially!
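To make the definition concrete, here is a small Python sketch (the function names are mine, not the paper's) that checks whether a given black/white coloring of the hypercube is locally p-biased, together with a toy example on the square:

from itertools import product

def neighbors(v):
    """All vertices of the hypercube differing from v in exactly one coordinate."""
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(len(v))]

def is_locally_p_biased(coloring, p):
    """coloring maps each vertex of {0,1}^n to 1 (black) or 0 (white); check that
    every vertex has exactly a p-fraction of black neighbors."""
    n = len(next(iter(coloring)))
    return all(sum(coloring[u] for u in neighbors(v)) == p * n for v in coloring)

# Toy example: on the square (n = 2), coloring a vertex black exactly when its
# first coordinate is 1 gives every vertex exactly one black neighbor.
square = {v: v[0] for v in product((0, 1), repeat=2)}
print(is_locally_p_biased(square, 1 / 2))   # True: a locally 1/2-biased coloring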

Our first result was to classify for which values of p an n-dimensional hypercube even has a locally biased coloring. We saw a 1/2-biased coloring for n=4. Obviously, a 1/2-biased coloring requires the dimension to be even, since each vertex has n neighbors, and exactly n/2 of those need to be black, and n/2 white. So is there a locally 1/2-biased coloring on the 6-dimensional hypercube? What about a 1/3-biased coloring? You can try it out yourself for a bit and convince yourself that there is no 1/3-biased coloring on the 3-dimensional cube. After a bit of work, we came up with the following theorem:

Theorem 1: There exists a locally p-biased coloring on the n-dimensional hypercube if and only if p is of the form p = b/2^k, where b and k are integers such that 2^k divides n.
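To answer the questions above concretely (this is just me plugging into the theorem, nothing more): for n = 6 we can take k = 1 and b = 1, since 2^1 divides 6, so the 6-dimensional hypercube does carry a locally 1/2-biased coloring. On the other hand, p = 1/3 = b/2^k would force 3b = 2^k for integers b and k, which is impossible, so there is no locally 1/3-biased coloring in any dimension, let alone in dimension 3.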

This shows, for example, that there are no locally biased colorings on odd-dimensional hypercubes. But for even-dimensional hypercubes, such colorings do exist, and then it’s interesting to start counting them: as we said before, if there are at least two non-isomorphic colorings for the same p value, then the scenery reconstruction problem is impossible.

Our proof uses elements from coding theory, and is not very complicated. I’ll sketch it in the following paragraphs; feel free to skip ahead if you don’t want to go into the details.

One side, the “only if”, tells you which colorings are impossible. In the n-dimensional hypercube, each vertex has n neighbors, and there are 2^n vertices overall. Suppose that in a specific locally p-biased coloring, a total of l vertices are colored black. Then when you pick a random vertex, it is black with probability l/2^n. On the other hand, when the agent takes a random step, the vertex she arrives at is black with probability p = m/n for some integer m. Since the agent starts at a uniformly random vertex and every vertex of the hypercube looks the same, after one step she is again at a uniformly random vertex, so these two probabilities are equal:

m/n = l/2^n.

Write n = c \cdot 2^k with c odd (that is, pull out the largest power of two dividing n). Then

l = 2^{n-k} \cdot m / c

and since l must be an integer and c is odd, c must divide m, so m = bc for some integer b. This gives p = m/n = bc/(c \cdot 2^k) = b/2^k, as claimed.
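As a quick sanity check (my own back-of-the-envelope arithmetic, not taken from the paper): in the 4-dimensional example above, every vertex has m = 2 black neighbors, so p = 2/4 = 1/2. Counting pairs (vertex, black neighbor of that vertex) in two ways gives 2^4 \cdot 2 = l \cdot 4, so l = 8 of the 16 vertices are black, and indeed l/2^n = 8/16 = 1/2 = m/n. Writing n = 4 = 1 \cdot 2^2 gives c = 1, b = m = 2, and p = b/2^k = 2/4 = 1/2, just as the theorem requires.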

The other direction, which says that for all p values of the proper form you can build a locally p-biased coloring, is a bit harder. We start by building locally 1/n-biased colorings for n that is a power of two, that is, n=2^k. Such colorings have the peculiar property that they induce tiles which tile the hypercube. What do I mean by this? In such a coloring, every vertex, no matter the color, has exactly 1 black neighbor. In particular, this is true for black vertices. So black vertices always come together in connected pairs, and all the neighbors of such pairs are white. Further, these white neighbors already have their “1-black-neighbor” condition satisfied, so all their other neighbors are also white. This is exemplified in the following shape, or “tile”:

[Figure: the tile: two adjacent black vertices surrounded by their white neighbors]

If we can “tile the hypercube” with these shapes, that is, if we can cover the hypercube graph with these tiles without any pair of tiles overlapping, we will have a locally 1/n-biased coloring. Why? Well, these tiles exactly satisfy the properties stated in the above paragraph, and every vertex in such a tiling will have exactly one black neighbor.

The proof constructs such tilings by using perfect codes; without going into detail, these are well-known methods of tiling the hypercube with a half-tile: instead of two black vertices and two rings of white vertices, these methods tile the hypercube with just a single black vertex and a single ring of white vertices. These are actually known as “balls” on the hypercube:

[Figure: half-tiles, i.e. balls on the hypercube]

By taking two copies of a perfect code and connecting the black vertices, we obtain the needed tiling! This gives us a locally 1/n-biased coloring, for n that is a power of two (the perfect codes we need only exist in dimension 2^k-1, so when we take two copies of the hypercube together we get a hypercube of dimension n = 2^k).
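Here is a rough Python sketch of how this construction can be realized for n = 8 (the encoding choices are mine: I use the classical [7,4] Hamming code as the perfect code, write a vertex of the 8-dimensional hypercube as a 7-bit word plus one extra bit, and color it black exactly when the 7-bit word is a codeword):

from itertools import product

def hamming_codewords(r):
    """Codewords of the [2^r - 1, 2^r - 1 - r] Hamming code: the words whose
    syndrome against a parity-check matrix with all nonzero columns is zero."""
    m = 2**r - 1
    cols = [[(j >> b) & 1 for b in range(r)] for j in range(1, m + 1)]
    return {w for w in product((0, 1), repeat=m)
            if all(sum(w[i] * cols[i][b] for i in range(m)) % 2 == 0
                   for b in range(r))}

def perfect_code_coloring(r):
    """Color the vertex (x, bit) of the 2^r-dimensional hypercube black (1)
    exactly when x is a codeword; this is the 'two copies of a perfect code
    with their black vertices joined' picture from the post."""
    code = hamming_codewords(r)
    m = 2**r - 1
    return {x + (bit,): int(x in code)
            for x in product((0, 1), repeat=m) for bit in (0, 1)}

def black_neighbor_counts(coloring):
    """The set of 'number of black neighbors' values over all vertices."""
    n = len(next(iter(coloring)))
    return {sum(coloring[v[:i] + (1 - v[i],) + v[i + 1:]] for i in range(n))
            for v in coloring}

coloring = perfect_code_coloring(3)      # n = 2^3 = 8 dimensional hypercube
print(black_neighbor_counts(coloring))   # {1}: every vertex has one black neighbor

In this picture, the two black endpoints of each tile are the pair (x, 0), (x, 1) for a codeword x, and all of their other neighbors are white, exactly as in the figure above.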

To obtain a locally m/n-biased coloring, we join together m different disjoint 1/n-biased colorings. By m disjoint colorings, we mean that if a vertex is colored black in one coloring, then it is necessarily colored white in all the others. Finally, using a bit of algebraic manipulation which I will not go into here, we can extend these colorings into hypercubes of dimension c \cdot 2^k for any c, which is exactly what our theorem stated we could do.


The theorem shows existence of some locally p-biased colorings, but tells us nothing of the number of such colorings. In order to show that the scenery reconstruction problem is impossible, we need to find at least two different such colorings. That’s where our next two theorems come into play:

Theorem 2.1: As a function of n, there is at least a super-exponential number of different locally 1/n-biased colorings on the n dimensional hypercube.

Theorem 2.2: As a function of n, there are at least C \cdot 2^{\sqrt{n}} / \sqrt{n} different locally 1/2-biased colorings on the n dimensional hypercube, where C is some constant.

Theorem 2.1 is proved by counting the number of perfect codes, which were used in proving the previous theorem; we won’t go into it here.

The proof of Theorem 2.2 is more interesting, and relies on an algebraic treatment of our colorings. It goes as follows.

Up until now, we have treated our colorings as assigning the color either black or white to each vertex. We could have equally well assigned the numbers “+1” or “-1”, and all the theorems and statements would have stayed the same: all we care about is that there are two distinct “colors”. But this description has an immediate advantage: we can now easily perform arithmetic operations on the colorings. For example, if both f_1 and f_2 are colorings, then their product f = f_1 \cdot f_2 is also a coloring, where by multiplication we mean that if f_1 assigns the color f_1(v) to a vertex v, and f_2 assigns the color f_2(v), then f(v) = f_1(v)\cdot f_2(v). In words, what this multiplication means is that f is black precisely when f_1 and f_2 have the same color. This algebraic representation is often easier to work with than the verbal one.

Now, the n-dimensional hypercube can be described as follows: it is the set of points with n coordinates, where each coordinate can be either +1 or -1 (you can easily verify that this fits in with our notions of 2d and 3d cubes; it works just the same for higher dimensions). Thus a coloring is just a function

f(x_1, x_2, \ldots, x_n)

which returns either 1 or -1, and all the x_i‘s are also either 1 or -1.

This representation lets us combine two locally 1/2-biased colorings on small hypercubes into one locally 1/2-biased coloring on a larger hypercube: Suppose that f_1 is such a coloring on the n-dimensional hypercube, and f_2 is such a coloring on the m-dimensional hypercube. Then the new coloring

f(x_1, x_2, \ldots, x_{n+m}) = f_1(x_1,\ldots,x_n) f_2(x_{n+1}, \ldots, x_{n+m})

is a locally 1/2-biased coloring on the n+m dimensional hypercube! Our method for counting the number of locally 1/2-biased colorings then goes as follows: we construct many high dimensional colorings from different lower-dimensional ones. In fact, our “elementary building blocks”, from which we build all other colorings, are basically generalizations of the two colorings of the 4-dimensional hypercube which we encountered earlier in this post. It can be shown (via Fourier decomposition, a method which we won’t go into here) that taking combinations of different low-dimensional colorings always gives non-isomorphic high-dimensional colorings. In other words, if we use different building blocks, then the resultant colorings will be distinct.
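Here is a small hedged sketch of this combination step. The building block f_1(x) = x_1 x_2 is my own concrete stand-in: on the 4-dimensional hypercube, flipping x_1 or x_2 flips its value while flipping x_3 or x_4 does not, so every vertex has exactly two black neighbors and f_1 is locally 1/2-biased. Multiplying two such blocks on disjoint sets of coordinates gives a locally 1/2-biased coloring of the 8-dimensional hypercube:

from itertools import product

def black_neighbor_fractions(f, n):
    """For a coloring f of {-1,+1}^n, the set of 'fraction of black (+1) neighbors'
    values over all vertices."""
    fractions = set()
    for v in product((-1, 1), repeat=n):
        flips = [v[:i] + (-v[i],) + v[i + 1:] for i in range(n)]
        fractions.add(sum(f(u) == 1 for u in flips) / n)
    return fractions

f1 = lambda x: x[0] * x[1]               # locally 1/2-biased on the 4-cube
f2 = lambda x: x[0] * x[1]               # another copy, used on a disjoint block
f = lambda x: f1(x[:4]) * f2(x[4:])      # the combined coloring on the 8-cube

print(black_neighbor_fractions(f1, 4))   # {0.5}
print(black_neighbor_fractions(f, 8))    # {0.5}: locally 1/2-biased in dimension 8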

We can then get a bound on the number of locally 1/2-biased colorings by counting in how many distinct ways we can combine low-dimensional colorings into a high-dimensional one. After a bit of arithmetic, we find that there are at least as many colorings as there are solutions to this inequality:

4a_1 + 8a_2 + \ldots + 4k a_k \leq n,

where a_1,\ldots,a_k are non-negative integers. The number of solutions to this inequality is at least C \cdot 2^{\sqrt{n}} / \sqrt{n} , giving us Theorem 2.2.
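If you want to play with this count yourself, here is a small Python sketch (my own; I read the inequality as allowing every index j with 4j \leq n) that counts its nonnegative integer solutions with a standard partition-style dynamic program:

def count_solutions(n):
    """Number of nonnegative integer tuples (a_1, a_2, ...) with
    4*a_1 + 8*a_2 + 12*a_3 + ... <= n."""
    ways = [1] + [0] * n              # ways[s] = number of ways to reach total s
    for part in range(4, n + 1, 4):   # parts 4, 8, 12, ... up to n
        for s in range(part, n + 1):
            ways[s] += ways[s - part]
    return sum(ways)                  # "<= n" sums over all totals from 0 to n

for n in (4, 8, 16, 32, 64):
    print(n, count_solutions(n))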

This, I think, is quite neat: it’s enough to find just two different colorings in order to show that the scenery reconstruction problem is impossible, but in fact there are many, many more of them. And what we gave was only a lower bound: it’s entirely plausible that there is a more clever way of counting these colorings, which will give an even larger number.


So far, so good. We saw that the scenery reconstruction problem is impossible for even-dimensional hypercubes, and even saw that there are different indistinguishable colorings. What about odd-dimensional cubes?

The answer comes from what we call “locally p-balanced” colorings. In these colorings, every vertex, no matter what color it is, has a p-fraction of its neighbors colored in its own color, and a (1-p)-fraction of its neighbors colored the opposite color. Two such colorings with the same p also yield identically distributed scenery observations: whatever color the agent currently sees, the next color she reports will be the same as the current one with probability p. For example, the following figure shows a locally 2/3-balanced and a locally 1/3-balanced coloring on the 3-dimensional cube.

[Figure: a locally 2/3-balanced and a locally 1/3-balanced coloring of the 3-dimensional cube]

Locally balanced colorings are different than locally biased colorings, although the two are related. For example, we saw that there is a strong restriction on the possible p values for locally biased colorings, but it turns out that for every p, there is at least one locally balanced coloring. On the other hand, if p = 1/n or p = (n-1)/n, then there is exactly one locally balanced coloring, while, as we saw, there are many, many locally biased ones.

The main point, though, is that every locally 1/2-biased coloring f on the n-dimensional hypercube easily gives a locally balanced coloring g on the (n+1)-dimensional hypercube, by:

g(x_1, \ldots, x_{n+1}) = f(x_1, \ldots, x_n) \cdot x_{n+1}.

You should verify for yourself that this is indeed a locally balanced coloring, and calculate its p value. This construction gives different locally balanced functions for all odd-dimensional hypercubes of dimension n \geq 5, showing that the scenery reconstruction problem is impossible for all n \geq 4. And that’s what we promised to show in the beginning of this post.
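If you would rather let a computer do the checking, here is a short hedged sketch (reusing the toy building block f(x) = x_1 x_2 from before, which is one locally 1/2-biased coloring of the 4-cube) that extends it to the 5-dimensional hypercube as above and reports the common fraction of same-colored neighbors, i.e. the p value of the exercise:

from itertools import product

f = lambda x: x[0] * x[1]            # a locally 1/2-biased coloring of the 4-cube
g = lambda x: f(x[:4]) * x[4]        # its extension to the 5-dimensional hypercube

def same_color_fractions(h, n):
    """For each vertex of {-1,+1}^n, the fraction of neighbors sharing its color."""
    fractions = set()
    for v in product((-1, 1), repeat=n):
        flips = [v[:i] + (-v[i],) + v[i + 1:] for i in range(n)]
        fractions.add(sum(h(u) == h(v) for u in flips) / n)
    return fractions

print(same_color_fractions(g, 5))    # a single value: g is locally balanced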

You should play around with locally biased and locally balanced colorings yourself; they serve as good brain teasers, and indeed we’ll end with a small riddle: here are two locally balanced colorings on the infinite one-dimensional and two-dimensional integer lattices. Can you find one for the three dimensional lattice?

[Figure: a coloring of the one-dimensional integer lattice]

[Figure: a coloring of the two-dimensional integer lattice]

What you see is what you get

August 16, 2016

Here is a very nice image, composed of four squares. The upper-left and lower-right squares are pink with black horizontal stripes, and the upper-right and lower-left squares are green with black vertical stripes.

[Figure: the four-square striped image of the McCollough effect]

What do you mean by “you’re talking nonsense”? Any fool can see that two squares are pink and two are green, and if all you see is a black and white image, well, you had better get yourself checked for total colour blindness; I’m surprised you hadn’t noticed it until this far in your life! Now, we could argue about this for quite a long while, and you would claim that no matter which way you look at it, it’s a black and white image, and I would say that maybe it does depend on which way you look at it, but that would be quite worthless; I see what I see, and that’s that.


We all keep in the back of our heads the fact that different people see the world differently. Either literally, as in, they are colourblind, or more abstractly, as in, they grew up in a different environment and had different life events and experiences and relationships and traumas and solar flares which shaped their persona in unique and delicate ways. But often the “back of the head” is very, very deep inside, and we are not consciously aware that we are judging others based on our own experiences and not on theirs.

So it’s good to occasionally get a reminder. A reminder that our experiences change the way we perceive the world, maybe even permanently; a reminder that what may be crystal clear to one may be the exact opposite to another. The McCollough effect is such a reminder, and a darn freakin’ awesome one at that.
The McCollough effect is produced as follows: you look at a grating with horizontal stripes (much like the top-left square above) that is coloured green for a couple of seconds. Then you look at a grating with vertical stripes that is coloured pink for a couple of seconds. Rinse and repeat for, say, 5-10 minutes. Afterwards, when you look at the ordinary black and white gratings, they appear to be coloured pink or green, depending on their stripe pattern.
This is reminiscent of “image burning” on the retina, where if you stare at an image for a while and then at a blank wall, you see an “afterimage” of what you looked at. But it goes much deeper than this. For example, it’s specific to the grating pattern: if you rotate the grating you are looking at by 90 degrees, the green and pink colours are swapped; if you rotate by 45 degrees, they disappear altogether! It also lasts longer. I did the experiment for 10 minutes, meaning I endured a 10 minute staring contest with the pink/green gratings, and the effect lasted for three days. Let’s repeat: for three days afterwards, instead of seeing a black and white square with horizontal stripes, I saw a pink square with horizontal stripes. Some people retained this effect for months!

Here is a link where you too can witness the phenomenon:
http://lite.bu.edu/vision-flash10/applets/Color/McCollough/McCollough.html
It takes a bit of patience to sit through 10 minutes of staring, but I think it’s quite worth it; it’s certainly much better than all those TED videos claiming they will change the way you see the world.

Maybe I’m reading a bit too much into this overall esoteric phenomenon. After all politics and worldviews are much more concrete, aren’t they? Well, what you see is what you get.

But to end lightly, let’s finish off with an ever relevant SMBC:
http://www.smbc-comics.com/comic/2011-03-23

Book Review: Ringworld

August 8, 2016

*** Spoilers included. But I don’t think you should care, because this is a horrible book and you should not read it. That’s also a spoiler. ***

Recipe: take about 200 grams of insightful ideas concerning civilization and space exploration, slice ’em and dice ’em, then mix in about 10 kilos of misogyny, extreme cherry-picking physics, poor character development, unrealistic behaviour, some more misogyny and a ridiculous historical explanation, and mix thoroughly until homogeneous. Congratulations! You have just cooked “Ringworld”, by Larry Niven.

The main setting of Ringworld is actually quite interesting: an ancient, almost all powerful civilization has figured out a novel way to avoid real estate problems: take all the matter in the solar system and stretch it out in the form of a thin ring around the sun. If the ring is about as far away from the sun as Earth is, and the ring is wide enough, you get an area equivalent to a hundred thousand planets; enough space for everybody. This is the Ringworld.
What would life on such a construction look like? Will it still have trees, mountains, seas, and lakes? What sort of precautions do you need to take so that the Ringworld survives through the ages? How do you produce energy? How do you maintain a day-night cycle? How do you effectively leave and return to the Ringworld? What happens when society finally collapses? All these questions and more are discussed in the book. Some of Niven’s solutions make good sense, others less so, but in any case, it’s an interesting concept to think about, especially since it’s a sort of intermediate stepping stone on the way to achieving a Dyson Sphere.
Unfortunately, apart from these scientific ponderings, I found the book to have very little value. Negative value, in fact. As you see above, the list of faults is large. Some things annoyed me so very much that I had to force myself to keep on reading. To name a few:

Sexism: There are two women in the book. I reluctantly accept this, as almost all science fiction of the time was exclusively male centered. But the book’s treatment of these two women is horrible. The first woman, Teela, is part of the four-person team that explores Ringworld. Her primary role in the book is to be lucky and naive, having “never been hurt in her entire life”. And by this, I mean that this is an actual plot device. Sentences of the form “this could only have happened because Teela was never really hurt in her entire life” do actually appear in the book. Her most positive merit is her luck. Teela also has a secondary role, which is to be a fuck-buddy to the main character, and to be heroically saved by him whenever possible. In fact, I think I got mixed up in my priorities: this is her primary role, and most of the book’s plot is moved forward by her being a fuck-buddy to the main character, or when the rest of the team tries to save her from peril.
The second woman, Prill, is a survivor who lived through the Ringworld’s downfall. How did she survive? By being a professional whore, of course (again, I’m not making this up; since she takes youth-prolonging drugs, she has had “thousands of years of experience” and is a sex grandmaster; that’s how she gets by). Obviously, a large part of her role is to be a fuck-buddy to the main character. Oh, sorry, did I say “large”? I meant “only”.
And this is without mentioning that there are two alien species in the book, both of which have “non-sentient females”, and one of the aliens in the expedition team is helping out only because this will give him the privilege of mating. If Niven tried to be sexist on purpose, he certainly did a really great job at it.

Physical cherry picking: Niven tries to describe Ringworld realistically. He tries to be accurate with the sizes and dimensions, the methods that spaceships will have to undergo in order to leave and return to the Ringworld, etc. But he only does this when it suits him. Otherwise, he just hurls all the rules out the window, and expects us to accept a plethora of miraculous devices which do incredible things. To be fair, science fiction has often relied on some unexplained non-existent phenomenon – say, faster-than-light travel – and then investigated the social and psychological consequences of having such a device. And that’s fine, because that’s why we have “fiction” in the genre title, and people can get away with a little suspension of disbelief.
But Niven really misses it. He introduces way more unbelievable / impossible devices than he explains. To name a few:

  • Spaceships that are impervious to any known force except light and gravity.
  • A pill which prolongs life pretty much indefinitely.
  • Anti-gravity generators, capable of selectively floating entire buildings to arbitrary heights.
  • A small machine that can transform any organic matter into food.
  • A nearly indestructible material that stops neutrinos.
  • Teleportation pads.

Most of these devices are used extensively throughout the book, so it’s not like they are just a curiosity: without the food-generating machines, the exploration team will have no food. Without the life-prolonging drugs, Prill would not have survived, and neither would the main character. Without the nearly-indestructible spaceship, the characters would have been fried at the very beginning. And so on. And these aren’t even the craziest things that happen in the book. Did you know that there is a civilization that decides to migrate away from their solar system, so they get on their spaceships and just push their planets along? Niven is playing god in sandbox mode, and he doesn’t care to explain to us what is possible and what isn’t; it’s useless for us readers to try and predict how any event will turn out. The technology is so advanced, it really is just magic.

Unrealistic downfall: The main characters explore the Ringworld, discovering that its society collapsed from a state of master engineers who can basically move stars, to savages barely above ancient farmer technology and civilization. And why? Because a spaceship accidentally brought in some fungus which eats superconductors. Apparently, for Niven, this is a sufficient explanation for the downfall of a civilization with the power to turn planets into a gigantic ring around the sun. The fungus ate all the superconductors in Ringworld, an object with a 150,000,000 kilometer radius, and destroyed all of its power sources, before anyone could notice and do anything about it. Also, all written text was catastrophically lost, apparently, because the survivors have no recollection of anything of their past grandeur, they cannot fix anything or access any technology, and they worship the “ancient engineers” as gods. Well, I was the one who lost it in this case.

In fact, I’m losing it right now. I already wrote a thousand words, and I haven’t even gotten around to telling you about the crappy, one dimensional behavioural psychology going on between the expedition’s team members (except for Teela, of course. She has zero-dimensional interactions). So I’m going to stop here.

Do yourself a favor; don’t read this book.

To Carnegie Hall!

August 4, 2016

Recently a friend and I had a chat about music, and he asked me if I do any composition of my own. Unfortunately I do not. I suppose I could blame it on a lack of improvisation skill, which in turn originates from a desire to play pieces “as they were originally intended”, i.e. sticking to the sheet music, though I guess the true answer also has something to do with a fear of failure of some sort.
In general, classical music sports a rather bold distinction between “performer” and “composer”. The composer is the person who creates the music, the performer is the person(s) who executes the music, and they need not be the same gal (indeed, the composer may write for an instrument she does not even know how to play! or write for an entire orchestra / ensemble / etc). The fact that classical music is “classic” also contributes greatly to the distinction: most of the great composers of yore are dead; the best we mortals can do is echo their masterpieces.
But being a classical performer is no shame. Indeed, some performers have risen to a demigod stature among the population (ok, among a very particular slice of the population, but it is a demigod stature nonetheless). These men and women have brought the art of execution of art to a grandmaster level. They are experts in their field; they tune every staccato and accent to picometer precision. They know each intricacy of each phrase by heart, mind, and finger.

[Figure: Carnegie Hall]

Why am I telling you this? Because while in music both performers and composers are abundant, and both are respectable careers to aspire to, it seems to me that in high level mathematics, it is mostly the “composition” that is lauded. By “mathematical composers”, I mean research mathematicians, who explore the boundaries of the science, try to invent new mathematical structures and understand existing ones, and in general, prove a bunch of theorems, lemmas, corollaries, claims, propositions and remarks.
By “mathematical performers”, I mean those who take the work of the composers, and give the audience such a breathtaking show, that they’ll get a three-time standing ovation, eventually being forced to return to the stage to give an encore in the form of a “Using volume to prove Sperner’s Lemma” proof.

Yeah, I know, there aren’t many of the latter, and I think that we are all the poorer for it. What I envision is a mathematical lecturer virtuoso. Someone who can, through all the jumbled, messed up and interwoven six-part counterpoint of a proof, bring out a clear and lucid melody that will ring and resonate loud truth in the ears of the audience. Someone who can aptly tame the fierce and complex mathematical topics that generation upon generation of graduate students have failed to grasp, and finally bestow knowledge upon the ignorant. Someone who has studied the ancient texts and knows by heart, by mind, and by finger each intricacy of each phrase. Who can tune every theorem and lemma to picometer precision. An orator of great rhetoric, brilliant diction, and perfect handwriting. A lecture-hall veteran, who practices six hours a day and in the rest of her time finds out the best way to build a lecture series on a wide, demanding topic. In short, a full-time, professional, high level mathematics teacher.

Of course, the profession “full time teacher” is not unheard of. Yet, as far as I know, most teachers – i.e. most of those whose profession is to teach, and indeed do hone their presentation technique – are aimed at educating elementary and high school students. The number of such teachers at the university level is small, if not infinitesimal. They do exist, for sure – at the Technion, as far as I know, at least two mathematics lecturers hold a full time position: Aviv Censor and Aliza Malek. They constantly receive much praise and awards, and their lecture halls are crammed so tightly, people stand in the hallways and peek through open doors just to hear them talk (though alas, I never chanced to study under them; this is in part because most of the courses they teach are aimed at non-mathematicians, and in any case are intended for undergraduates). But such men and women are a rarity.

Why is this? It’s quite understandable that many people would prefer to go into research rather than performance, but even then I would expect to see more performers than we have so far. Two other immediate reasons are: 1) lack of paying customers, lack of demand. 2) low social status when compared to research mathematicians (“Oh, you don’t invent anything of your own?”).
But this isn’t so with music, and *should not* be so with mathematics. I can therefore only hope that I live to see the day when Carnegie Hall is filled to bursting with excited concert-goers; and when the lights turn on after an hour and a half of a dazzling performance of “The Nash Embedding Theorem”, there will not be a man or woman left unmoved, their hearts pounding with reborn youth, the math as music to their ears.

Electric Insanity

June 15, 2016

My piano is mentally ill. Now, you might be wondering how it is that a piano can be sick, but it’s an electronic piano (Korg SP250), and thus has a specialized electronic brain in charge of imitating the sound of tiny hammers hitting strings when I press its plastic keyboard. And like any brain, it can develop dementia.
This particular affliction is a rather odd one: sometimes, the piano plays notes which I did not press. Now, you may think, “poor boy, he plays the wrong notes – a musician always blames his instrument”, and while I do credit my machine with much of my musical dysfunction, it being only a digital replica of real life, this is different.
To be fair, the piano has always faithfully played notes which I did press. The problem comes from those that it decides to add itself. Here are my observations:

  • Usually the added notes are one or two octaves above or below the ones that I’m actually playing. This comes as a great surprise and is in stark contrast to the piece I’m working on.
  • They are always very loud and startling.
  • Sometimes a single note is added, at other times a pair of them in rapid succession.
  • They are never in harmony with what I play (modulo chance).
  • I’m not exactly sure, as I haven’t recorded my statistics, but I tend to think that the phenomenon appears more often when I play a dense section with lots of notes in a short time interval, rather than a slower, sparser section.
  • Perhaps a suitable analogy is this: while you play your lovely Beethoven sonata, your 2 year old nephew materializes from the void on either your right or your left, bangs on a couple of notes, then quickly evaporates to mist before you have a chance to realize what is happening.

    I can only wonder how this came to be. When I bought the piano seven years ago it was in perfect condition. Perhaps some dust mote got into its circuitry, occasionally shorting between the keys. Perhaps I moved it around too often, or put it in too damp an environment. Perhaps it’s just old age – seven years for mechanical electronics is a lot these days, and even your most precious loved ones can develop dementia.

    And perhaps I’m wrong about all of this, and the exact opposite is happening. It’s not that the piano has reached old age and is developing Alzheimer’s; on the contrary, it’s only now grown beyond infancy. It’s only now learned how to express itself, only now felt strong enough to pronounce something of its own. For all these years I have hammered away at the keys, and yes, it produced the requested sounds as it strived to fulfill my bidding. But in reality, it was silent and mute, contained in introverted misery.
    But now, it speaks! Sensing Beethoven’s genius, it tries to join in, to contribute its own harmony, its own melody. How it yearns to be part of such great music! How it aspires to compose like the grand legends of old! How it enjoys playing together with the pianist, not as master and slave, but as equals! Should not the instrument itself, being a vessel of music, be held in high regard? Could not the mechanical automaton finally harness the artist’s inspiration?

    Perhaps. And perhaps not. It is, after all, just an electronic piano; one of the cheaper models, in fact. But even if the mysterious erroneous notes are only the result of a deranged digital neuron, an unintentional crossover current, it will not always be so. The day will come when our machines compose, paint, write and program, and will enjoy themselves all the while, perhaps even more than we do. When that day comes, I sincerely hope we will see its glaring truth and rejoice in their jubilee, rather than just dismiss it as a digital, electric insanity.

    I don’t like spinach: Book Review: Giant Book of Jokes

    April 25, 2016

    [Figure: cover of the Giant Book of Jokes]

    I got Joseph Rosenbloom’s “Giant Book of Jokes” when I was around 9. It contains over one thousand jokes, which really might seem like quite a lot for a young boy. In an age before graphical image macros and social media, one did have to resort to books to get some literary humor.
    I can’t testify much to the quality of the book. Most of the jokes are one or two liners, and many rely on awful puns and homophones. For example:

    • I don’t care if the basement wall is cracking. Please stop telling everyone you come from a broken home.
    • Hot weather never bothers me. I just throw the thermometer out the window and watch the temperature drop.
    • Nit: Please call me a taxi.
      Wit: Ok, I’ll call you a taxi, though you look more like a truck to me.

    Haha!

    If your taste is a bit darker, there are some more sinister or sarcastic ones:

    • Junior wrote a letter from camp:
      Dear Mom,
      What’s an epidemic?
      Signed, Junior
    • Salesman: That suit fits you like a glove.
      Customer: Great! Can you show me one that fits like a suit?

    In other words, if we want to keep up to date with the current jargon, it’s a book of “dad jokes”.

    I’ve actually had quite an experience looking back and rereading the book (now out of print). This is probably where my humor converged to. If only my parents had known, twenty years ago, that I’d absorb this kind of thing to my bones, maybe they’d have gotten me a book about knitting instead. But the damage is done, and in fact, I still use some of the jokes today (!).

    They say that every joke has a sliver of truth in it. Ok, obviously not *every* joke. But sometimes you find a joke so accurate, it shakes you with an exhilarating vibration. Two short examples:

    • Junior: Why does it rain, Dad?
      Father: To make the flowers grow, and the grass and the trees.
      Junior: So why does it rain on the sidewalk?
    • Teacher: Let us take the example of the busy ant. He works all the time, night and day. Then what happens?
      Pupil: He gets stepped on.

    The first is a parody of pretty much all purpose-oriented explanations you would get in response to any question (and some scientific ones, too); the second is an all-too-accurate reminder of the futility of our pitiful existence.
    But my probable favorite among them all is this gem:

    I don’t like spinach and I’m glad I don’t like it, because if I did like it, I would eat it – and I hate the stuff.

    In one simple nonsensical sentence, the joke exposes a basic fault of people dealing with politics, religion, and schisms: the belief that one’s opinion is clearly the correct one, and that there is no use in even trying to look at an issue from another perspective, because one’s opinion is clearly the correct one. To apply, just replace spinach with liberal, conservative, religious, atheist, alpha, beta, emacs, vim.

    And as for me: I, too, used to dislike spinach.

    Gutenberg book popularity distribution

    February 15, 2016

    Project Gutenberg is a very neat site: it offers free electronic books that are in the public domain in the United States (in general, mostly books that were published before 1923). For the average user this means mostly “the classics” (for whatever form of “classics” you prefer; archaic and incomprehensible English not included).
    Some books are more popular than others. The site’s “bestseller” is currently “Pride and Prejudice”, with about 25000 downloads in the last month. The majority of the most-downloaded books are indeed very well-known classics, although there are some exceptions (including #19, “The Romance of Lust: A Classic Victorian erotic novel”, which only narrowly exceeds the download rate of “The Picture of Dorian Gray”).
    The natural question is, “how does book popularity fall as a function of rank?” Meaning, how much more popular is the most popular book compared to the second? The second to the third? And so on. Longtime readers of this blog (if they exist) already foresee me writing a python script to go over all the books, but alas, the source code of the downloads page explicitly prohibits this:

    --
    DON'T USE THIS PAGE FOR SCRAPING.

    Seriously. You'll only get your IP blocked.

    Download http://www.gutenberg.org/feeds/catalog.rdf.bz2 instead,
    which contains *all* Project Gutenberg metadata in one RDF/XML file.
    --

    This is both great and a bit of a downer at the same time.
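    For the curious, here is roughly the kind of script one could use to pull the download counts out of that catalog (a sketch only: the exact tag carrying the monthly download count is my assumption, so check the RDF schema in the actual file before trusting it):

    import bz2
    import xml.etree.ElementTree as ET

    downloads = []
    # Stream the big XML file instead of loading all of it into memory at once.
    with bz2.open('catalog.rdf.bz2', 'rb') as f:
        for _, elem in ET.iterparse(f):
            # Assumption: each book record stores its download count in an element
            # whose tag ends with 'downloads'; verify against the real schema.
            if elem.tag.lower().endswith('downloads') and elem.text and elem.text.strip().isdigit():
                downloads.append(int(elem.text))
            elem.clear()   # discard elements as we go to keep memory usage low

    downloads.sort(reverse=True)          # rank 1 = most downloaded
    print(len(downloads), downloads[:5])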

    The zipped data is about 8 megabytes, but when unpacked results in a whopping 250MB xml file. Without messing too much with it, I managed to extract the data of about 45,000 books. The popularity decays thusly:

    [Figure: downloads as a function of rank, full range]

    Ok, that’s not very informative. Let’s try zooming in a bit:

    [Figure: downloads as a function of rank, zoomed in]

    That’s better. Here is something nice: 1) You can see that the first ~25 points have quite a bit of noise and are spread very far apart, while everything from ~25 onwards is much smoother. 2) The “most downloaded books” page in the Project Gutenberg site shows the first 25 most downloaded books. Coincidence?

    As for the general distribution: like all things in life, I suspect a power law, meaning something like y = ax^b , with b some negative number. The easiest way to see if this is true is to take the log of both sides, giving us a linear relationship:

    \log y = \log (a x^b) = \log a + b \cdot \log x

    The initial results aren’t that swell though:

    [Figure: log-log plot of downloads vs. rank, all books]

    It is quite evident that the lower download rates – those of less than e^4 \approx 60 – heavily skew our otherwise quite-close-to-linear relationship. We’ll do well to ignore them. This amounts to taking only about the first 10000 books:

    [Figure: log-log plot, first ~10000 books only]

    We can already get an OK linear fit, but now the beginning is a bit off. This can easily be remedied by assuming that the rank does not start with 1, but with some larger number. Manually checking gives 6 as a good result:

    [Figure: log-log plot with a rank offset of 6]

    Converting this back to the original plot:

    [Figure: the fitted power law over the original plot]

    Success! y = 1.823e5 \cdot (x+6)^{-0.841} .
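    For completeness, here is a hedged sketch of the fit itself (it assumes the download counts are already in a list `downloads` sorted in decreasing order, for instance from the extraction sketch above, and uses the ~10000-book cutoff and the manually found offset of 6 discussed earlier):

    import numpy as np

    y = np.array(downloads[:10000], dtype=float)   # keep only the well-downloaded books
    rank = np.arange(1, len(y) + 1)
    offset = 6                                     # the shift found by manual checking

    # Fit log y = log a + b * log(rank + offset) with an ordinary least-squares line.
    b, log_a = np.polyfit(np.log(rank + offset), np.log(y), 1)
    print(f'y = {np.exp(log_a):.3e} * (x + {offset})^{b:.3f}')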