**Part 1: Archery**

Lately I’ve been interested in archery. The target of target archery is to hit a circle square in the middle. The target is usually placed quite far away, some 70 meters in the case of the Olympic games, so it’s not always very easy. Here is one of my volleys from 18 meters:

If you look closely at the picture, you will see that each ring has a number on it, indicating its score; the closer your arrow hits to the center, the more points you get. This particular target has numbers ranging from 5 to 10, with each ring having the same width (i.e. for each ring, the radius of the outer circle minus the radius of the inner circle is the same). The image below shows a 2D schematic of where my arrows hit:

In this batch, I got 10+8+8+7+7+7 = 47 points. One of the things you might notice is that while the “average shot” is actually not too far off from the center, only one of my arrows hit the 10 point mark, with the rest being absolute jerks and skirting about the average. This is because in order to hit the exact same spot, it’s generally advisable to fire your arrows from the exact same position, i.e. your body posture has to be exactly the same before each shot. But this is usually a hard thing to do: There are **a lot** of joints and muscles in the human body, and each one offers several degrees of freedom. Can you really duplicate the posture of your elbows, wrists, neck (for aiming), back, hip, chest, shoulders, knees and toes (et al.)? There’s bound to be a bit of variation between one arrow and another, even without considering adverse external conditions such as wind and rain. Indeed, in this sense Olympic target archery is quite boring: All you’re trying to do is just stand the same way twenty times in a row.

**Part 2: Math and arrows**

Now, I’m quite certain that people came up with various sophisticated statistical models which try to describe the distribution of the archer’s arrows on the target, but for the purpose of this post I am definitely going to blatantly ignore all of them and just assume that each arrow hits a position which distributes as a Gaussian (Normal) random variable centered around the 10-point mark. In plain words, the arrows scatter around the 10-point mark in a bell curve, and the width of the curve is determined by the standard deviation $\sigma$. Rookie archers (like me) have a very high $\sigma$, while experienced professionals have $\sigma$ very close to 0.

For simplicity, I’ll assume that the target has radius 1. Since each ring in the target has the same width, the number of points won by a particular shot is then given by a simple function of the magnitude of $X$:

$$\mathrm{score}(X) = \begin{cases} 11 - \lceil 6\|X\| \rceil & 0 < \|X\| \leq 1 \\ 0 & \|X\| > 1 \end{cases}$$

If $\sigma$ is known, a diligent computer can use this function to easily calculate the probabilities of getting the different scores.
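For instance, here is a minimal Monte Carlo sketch in Python of what such a diligent computer might do. This is my own illustration, not from the original post: the rings of width $1/6$ follow the target described above, and the value $\sigma = 0.6$ and all function names are chosen for demonstration.

```python
import math
import random

def score(r):
    """Points for an arrow at distance r from the center: six rings of
    equal width 1/6 on a target of radius 1, worth 10 down to 5."""
    if r > 1:
        return 0  # missed the target entirely
    return 11 - max(1, math.ceil(6 * r))

def score_probabilities(sigma, n=200_000, seed=0):
    """Estimate P(score = k) by Monte Carlo for a 2D target. Each
    coordinate has variance sigma**2 / 2, so the arrow's distance
    from the center is about sigma."""
    rng = random.Random(seed)
    counts = {k: 0 for k in range(11)}
    s = sigma / math.sqrt(2)
    for _ in range(n):
        r = math.hypot(rng.gauss(0, s), rng.gauss(0, s))
        counts[score(r)] += 1
    return {k: c / n for k, c in counts.items()}

probs = score_probabilities(sigma=0.6)
```

With a spread this large, the estimated probabilities put far more mass on the middle rings than on the coveted 10.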

Now, I’m very much a newbie, and most of the time (if I even hit the target at all) I get fives, sixes and sevens. But still, occasionally fortune smiles upon me, and I manage to land an eight, nine, or even the ever-desired ten. And despite the fact that life in general is devoid of any meaning and we are all lost souls drifting hopelessly in space, this still brings some warmth to my heart. So it is at this point that I must say: **Thank you, dear universe, for being three dimensional!**

(and not higher). Because otherwise, beginner archers like me would practically never score a 10, and in fact everyone would just score the same number almost all the time.

I’ll explain what I mean by this. We can imagine a world that is two-dimensional, where cute two-dimensional creatures launch one-dimensional arrows into a one-dimensional target. In this case, the position of the arrow is just a univariate Gaussian random variable $X$, and the score function only depends on $|X|$.

We can also imagine a world that is three-dimensional, where cute three-dimensional creatures launch one-dimensional arrows into a two-dimensional target. Incidentally, this is the world we currently live in.

The next worlds we can’t actually really imagine, but we can imagine that we can imagine a general world that is $(d+1)$-dimensional, where cute $(d+1)$-dimensional creatures launch one-dimensional arrows into a $d$-dimensional target. In this case, the position of an arrow is a multivariate, zero mean normal random variable $X \sim N\!\left(0, \frac{\sigma^2}{d} I_d\right)$. Naturally, I do not have an image readily available.

(Technical note: The variance of each coordinate of the normal random variable is $\sigma^2/d$. This is because a standard normal random variable tends to have norm of about $\sqrt{d}$ (this can be shown by a direct-but-unpleasant calculation which will not be presented here). The normalization by $1/\sqrt{d}$ is there to make sure that our young archer still hits a target of radius 1.)

Now, here’s the thing about the norms of $d$-dimensional Gaussian distributions: They are really, really, really well concentrated around their means.

**Theorem** (Gaussian concentration): Let $X \sim N\!\left(0, \frac{\sigma^2}{d} I_d\right)$ be a $d$-dimensional Gaussian random variable. Then for any $t > 0$,

$$\Pr\left[\,\big|\|X\| - \sigma\big| \geq t\,\right] \leq e^{-d t^2 / 2\sigma^2}.$$

(To be precise, the $\sigma$ inside the brackets should be the median of $\|X\|$, but the two agree up to insignificant corrections.)

Basically, what this theorem tells us is that in high dimensions, there is almost no hope for an archer to hit different rings in the target. She will almost always hit the ring at distance $\sigma$ from the center, and that’s that. As a concrete example, suppose that $\sigma = 0.6$, so that her average arrow gives her just 7 points. To get an 8, she needs her arrows to hit at least $0.1$ closer to the bullseye. In light of the above theorem, this requires choosing $t = 0.1$. The probability of hitting this far away is smaller than $e^{-d/72}$. That’s exponential in $d$! If the dimension $d$ is large enough, this probability is minuscule; our beloved high dimensional rookie archer will need to fire billions and billions of arrows until she succeeds in scoring a value other than 7! Sadly, this would not increase target archery’s reputation as a spectator sport.
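You can watch this collapse happen in a short simulation. The sketch below is my own illustration: I take $\sigma = 0.6$, a spread whose typical arrow lands in the 7-point ring, and fire arrows in $d = 1000$ dimensions; essentially every single arrow scores exactly 7.

```python
import math
import random

def score(r):
    """Six rings of equal width on a radius-1 target, worth 10 down to 5."""
    if r > 1:
        return 0
    return 11 - max(1, math.ceil(6 * r))

def shoot(d, sigma=0.6, shots=1000, seed=0):
    """Fire `shots` arrows in d dimensions, each ~ N(0, (sigma^2/d) I_d),
    and return the list of scores."""
    rng = random.Random(seed)
    s = sigma / math.sqrt(d)
    return [
        score(math.sqrt(sum(rng.gauss(0, s) ** 2 for _ in range(d))))
        for _ in range(shots)
    ]

scores = shoot(d=1000)
# Virtually every arrow lands in the 7-point ring.
```

Compare this with the two-dimensional simulation earlier, where the same $\sigma$ spread the scores over several rings.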

**Part 3: More proofs, less arrows**

One way you can prove the Gaussian concentration theorem is by using moment generating functions, like we did in the post about the Johnson-Lindenstrauss lemma. But math, unlike archery, is not really about doing the exact same thing in perfect succession. Rather, in this post we’ll prove it using “concentration of measure as derived from isoperimetric inequalities”.

Our first task is to make ourselves acquainted with the isoperimetric concentration function, who turns out to be quite an agreeable chap once you get to know him.

Basically, the standard $d$-dimensional Gaussian distribution on $\mathbb{R}^d$, which we denote by $\gamma_d$, makes $\mathbb{R}^d$ into a mathematician’s favorite kind of space: A *metric measure space*! (well, favorite for some mathematicians, anyway). It is a *metric space*, because there is a well defined distance between every two points $x$ and $y$, namely, the norm $\|x - y\|$. It is a *measure space* because we can assign to every nice enough set $A \subseteq \mathbb{R}^d$ a volume, namely $\gamma_d(A)$. Since $\gamma_d$ is a probability distribution, the volume of the entire space is 1: $\gamma_d(\mathbb{R}^d) = 1$. Note that this notion of volume, called “Gaussian measure”, differs tremendously from our regular, Euclidean notion of volume: The volume of the entire space is not infinite, multiplying the side-lengths of a box by $2$ doesn’t increase the volume by a factor of $2^d$, and in general, Euclidean geometry goes out the window.

Even so, we can still mix up distances and volumes, and ask how they relate to each other. One of the more (in)famous questions in this vein is “Given a volume $v$, what shape of volume $v$ minimizes the surface area?”. For example, in regular good-ol’ Euclidean space, the shape that minimizes the surface area is a ball. This is intuitive for anyone who has played with soap bubbles, but not very easy to prove even after taking numerous baths. These types of problems are called isoperimetric problems, and you can find plenty of literature about them, for all sorts of odd spaces.

We haven’t actually said what surface area means, and it isn’t always all that obvious when dealing with the wildly weird sets that mathematicians sometimes summon from the deep depths of hell. Here I’ll use the following definitions, which work well for pretty much all spaces with a metric function $\mathrm{dist}$ and a probability measure $\mu$. For a set $A$ and a number $\varepsilon > 0$, the $\varepsilon$-inflation of $A$ is defined as

$$A_\varepsilon = \{x \mid \mathrm{dist}(x, A) < \varepsilon\},$$

that is, the set of all points which are at distance smaller than $\varepsilon$ from $A$. You can think that if $\varepsilon$ is very very tiny, then $A_\varepsilon$ is just $A$ itself plus a very thin shell around $A$.

The surface area of $A$ can then be defined as

$$\mathrm{area}(A) = \liminf_{\varepsilon \to 0^+} \frac{\mu(A_\varepsilon) - \mu(A)}{\varepsilon}.$$

In other words, you measure the volume of the tiny shell you just added, divide by its thickness, and then send that thickness to $0$.
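To see this definition in action, here is a small numeric check (my own example) in the simplest possible setting: the real line with the 1-dimensional Gaussian measure. For the half-line $A = (-\infty, a]$, the $\varepsilon$-inflation is $(-\infty, a + \varepsilon)$, and the shell-volume-over-thickness ratio should converge to the Gaussian density at the boundary point $a$.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# A = (-inf, a] under the 1-d Gaussian measure; its eps-inflation is
# (-inf, a + eps), so the ratio should tend to the density at a.
a = 0.3
for eps in (1e-2, 1e-4, 1e-6):
    shell = (Phi(a + eps) - Phi(a)) / eps
    print(eps, shell)

print(phi(a))  # the limiting value: the Gaussian density at a
```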

For our purpose, though, we won’t use the surface area or talk explicitly about the isoperimetric inequality, but rather ask: Suppose we have a set $A$ of large size, say, of at least half the total volume of our space. What happens when we inflate it by a small number $\varepsilon$? Will it only increase marginally in volume, or will it suddenly explode, filling up almost all of the entire space? To this end, we define the *concentration function* for the space $(\Omega, \mathrm{dist}, \mu)$:

$$\alpha(\varepsilon) = \sup\left\{1 - \mu(A_\varepsilon) \;\middle|\; \mu(A) \geq \frac{1}{2}\right\}.$$

Suppose $A$ has volume exactly $\frac{1}{2}$. If inflating $A$ by $\varepsilon$ doesn’t change its volume too much, then $\mu(A_\varepsilon)$ will be almost $\frac{1}{2}$, and $1 - \mu(A_\varepsilon)$ will also be very close to $\frac{1}{2}$ as well. So if there exists even a single set which doesn’t blow up when inflated, we’ll get a large value in our concentration function. But if, on the other hand, *for every $A$* it turns out that $A_\varepsilon$ is *enormous* and takes up almost all the volume, then $\mu(A_\varepsilon)$ will be very close to $1$, and the concentration function will be very small.

You can already smell that the concentration function is related to the isoperimetric problems I described earlier: If a set has a large surface area, then inflating it by $\varepsilon$ will cause it to fill up a lot of the space. On the other hand, sets with small surface area might not fill up a lot of space. So finding a set with small surface area is like finding a set for which $1 - \mu(A_\varepsilon)$ is large.

The function $\alpha$ is known for some metric measure spaces, and amongst them our beloved Euclidean-distance Gaussian-measure space $(\mathbb{R}^d, \|\cdot\|, \gamma_d)$. The following theorem, while appearing as mere harmless pixels on your screen, actually involves considerable effort, mixed in with some tears and toil:

**Theorem** (Gaussian isoperimetry): For every $\varepsilon > 0$,

$$\alpha_{(\mathbb{R}^d, \|\cdot\|, \gamma_d)}(\varepsilon) \leq \frac{1}{2} e^{-\varepsilon^2/2}.$$

One way to prove this is to roll up your sleeves and find a (unique) set for which $1 - \gamma_d(A_\varepsilon)$ is maximal. It turns out that this is a half-space, but showing it is a matter for another post. The thing to take home is this: For even moderately large values of $\varepsilon$, the expression $\frac{1}{2}e^{-\varepsilon^2/2}$ is tiny. In this case we say that “$\gamma_d$ has *normal concentration*”, which, given that $\gamma_d$ is a normal distribution, isn’t that surprising.
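While the full proof is for another post, we can at least sanity-check the extremal set numerically. For the half-space $A = \{x : x_1 \leq 0\}$ we have $\gamma_d(A) = \frac{1}{2}$, the $\varepsilon$-inflation is $\{x : x_1 < \varepsilon\}$, and so $1 - \gamma_d(A_\varepsilon) = 1 - \Phi(\varepsilon)$, where $\Phi$ is the standard normal CDF. A short sketch of mine confirms this value indeed sits below the bound $\frac{1}{2}e^{-\varepsilon^2/2}$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# For the half-space A = {x : x_1 <= 0}, only the first coordinate
# matters, so 1 - gamma_d(A_eps) = 1 - Phi(eps) in every dimension d.
for eps in (0.5, 1.0, 2.0, 3.0):
    halfspace = 1.0 - Phi(eps)
    bound = 0.5 * math.exp(-eps * eps / 2)
    assert halfspace <= bound
    print(eps, halfspace, bound)
```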

Now that we made friends with the Gaussian isoperimetric concentration function, it’s time to ruthlessly exploit it.

**Theorem** (Isoperimetric concentration of measure): Let $(\Omega, \mathrm{dist}, \mu)$ be a metric measure space with concentration function $\alpha$, and let $f : \Omega \to \mathbb{R}$ be a real valued continuous function with Lipschitz constant $L$. Let $M$ be a median of $f$, i.e. $\mu(f \geq M) \geq \frac{1}{2}$ and $\mu(f \leq M) \geq \frac{1}{2}$. Then for every $t > 0$,

$$\mu\left(|f - M| \geq t\right) \leq 2\,\alpha\!\left(\frac{t}{L}\right).$$

To put it plainly: The isoperimetric concentration function puts strong bounds on the *concentration of measure* of nicely-behaving functions! If $\alpha$ is small, then Lipschitz functions have a really hard time being far away from their median value; they just can’t help being concentrated around it.

Maybe a few reminders are in order. The Lipschitz constant of a function $f$ is the smallest constant $L$ so that $|f(x) - f(y)| \leq L \cdot \mathrm{dist}(x, y)$ for all $x, y$. In other words, it controls by how much $f$ can change as the input moves over the domain. If $X$ is a random variable, a median of $X$ is a number $M$ such that $\Pr[X \geq M] \geq \frac{1}{2}$ and $\Pr[X \leq M] \geq \frac{1}{2}$. In other words, (at least) half the time $X$ is above this value, and (at least) half the time it is below.

The keen-eyed among you will have already noticed that in fact, this theorem gives us exactly what we wanted to prove concerning the norms of Gaussians in $\mathbb{R}^d$. We just need to apply it to our specific case: We choose our space to be $\mathbb{R}^d$, the distance to be the Euclidean norm $\mathrm{dist}(x, y) = \|x - y\|$, and the measure to be $\gamma_d$. As our Lipschitz function, we choose the norm: $f(x) = \|x\|$. This function has a Lipschitz constant of 1, since by the reverse triangle inequality,

$$\big|\,\|x\| - \|y\|\,\big| \leq \|x - y\|.$$

We also need to calculate the median of $f$, i.e. the median value of $\|X\|$ when $X$ is a $d$-dimensional standard normal random variable. Now, in this case, by “calculate”, I mean “look up”, since these sorts of calculations often tend to be tedious. Luckily, we live in the golden age of the Internet, and this information is readily accessible if you’re willing to trust the writings of people you’ve never met. Since the real function $x \mapsto \sqrt{x}$ is strictly increasing on the positive axis, the median value of $\|X\|$ will just be the square root of the median value of $\|X\|^2$, as squaring does not change the relative order of numbers. The distribution of $\|X\|^2$ is the well known $\chi^2$ distribution with $d$ degrees of freedom, which according to Wikipedia has median $d\left(1 - \frac{2}{9d}\right)^3$ (up to some insignificant correction). The median of $\|X\|$ is therefore $\sqrt{d\left(1 - \frac{2}{9d}\right)^3} \approx \sqrt{d}$.
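If you’d rather not trust strangers on the Internet, a short simulation can corroborate the looked-up formula. This sampling sketch is my own (with $d = 100$ as an arbitrary test case):

```python
import math
import random

def empirical_median_norm(d, n=20_000, seed=0):
    """Empirical median of ||X|| for X a standard normal in d dimensions."""
    rng = random.Random(seed)
    norms = sorted(
        math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(d)))
        for _ in range(n)
    )
    return norms[n // 2]

d = 100
approx = math.sqrt(d * (1 - 2 / (9 * d)) ** 3)  # sqrt of the chi-squared median
med = empirical_median_norm(d)
print(approx, med, math.sqrt(d))  # all three values are close
```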

Plugging this into the concentration of measure theorem and using the Gaussian isoperimetric bound, we get that if $X$ is a standard normal in $d$ dimensions,

$$\Pr\left[\,\big|\|X\| - \sqrt{d}\big| \geq t\,\right] \leq 2 \cdot \frac{1}{2} e^{-t^2/2} = e^{-t^2/2}.$$

We now only have to fix the standard deviation: Our original discussion used not a standard Gaussian, but rather one with a variance of $\sigma^2/d$ in each coordinate. This amounts to multiplying both sides of the inequality inside the probability brackets by $\sigma/\sqrt{d}$, giving the desired result.
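To close the loop, a quick simulation (again my own sketch, with $d = 50$ and $t = 2$ as arbitrary test values) compares the actual tail probability for a standard normal with the exponential bound coming from the concentration theorem:

```python
import math
import random

def tail_fraction(d, t, n=20_000, seed=1):
    """Fraction of samples X ~ N(0, I_d) with | ||X|| - sqrt(d) | >= t."""
    rng = random.Random(seed)
    root_d = math.sqrt(d)
    hits = 0
    for _ in range(n):
        r = math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(d)))
        if abs(r - root_d) >= t:
            hits += 1
    return hits / n

d, t = 50, 2.0
frac = tail_fraction(d, t)
print(frac, math.exp(-t * t / 2))  # empirical tail vs. the exponential bound
```

The empirical tail is comfortably below the bound, which, like most concentration bounds, is far from tight but wonderfully simple.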

As a famous professor in my department once said: “Good”. Now we only have to prove the isoperimetric concentration of measure theorem, and we’re done! Luckily, this is not too difficult. Consider the set $A = \{x \mid f(x) \leq M\}$, i.e. the set of all points in space on which $f$ is smaller than its median. This is a set of measure at least $\frac{1}{2}$, by the very definition of the median. Now let $y$ be a point which is at most a distance $\frac{t}{L}$ from $A$, i.e. there exists a point $x \in A$ so that $\mathrm{dist}(x, y) \leq \frac{t}{L}$. Since $f$ is Lipschitz, the value $f(y)$ cannot be too large:

$$f(y) \leq f(x) + L \cdot \mathrm{dist}(x, y) \leq M + t.$$

This means that for every $y \in A_{t/L}$, we have $f(y) \leq M + t$. Hence the set of all points for which $f$ obtains a value larger than $M + t$ must have a measure smaller than $1 - \mu(A_{t/L})$. But by definition, the concentration function is the supremum of $1 - \mu(A_{t/L})$ over *all* sets of measure at least $\frac{1}{2}$. We thus have

$$\mu\left(f > M + t\right) \leq 1 - \mu(A_{t/L}) \leq \alpha\!\left(\frac{t}{L}\right),$$

or, equivalently,

$$\mu\left(f - M > t\right) \leq \alpha\!\left(\frac{t}{L}\right).$$

A similar calculation can be made for $\mu(f - M < -t)$ by looking at the function $-f$, and union bounding the two events gives us the isoperimetric concentration theorem.