## The many faces of Mona Lisa

April 4, 2015

Or: fooling around with image processing kernels.

Original greyscale:

Edge fit + bw cutoff:

Partitioning into multiple bw images and taking Laplacians:

Plain Laplacian + bw cutoff:

High frequency removal by partitioning into multiple bw images:

Sexy dark high frequency removal:

Subtracting low frequencies from original image:

Overshooting Laplace central element (does this filter make me look old and cracking?):

And yet, throughout the whole process, she just keeps smiling.

## A singlet’s lament

February 14, 2015

Valentine’s day is upon us. A poem.

When I see your wavefunction collapse into the arms of another man, I feel all my eigenvalues degenerate to zero.
You seem so happy together; happier with every measurement, like he is a ladder operator and you, so much more than a good number, are unbounded from above.
Now I am unitary, and can only toss coins into the infinite potential well that glares before me and wish for a steady state. I wanted to take action sooner, I did, I really did but did not dare, for fear I scatter you away.
If only we shared a bound state, I would span your entire space;
If only you’d let me tunnel into your life, the rest of our lifeline would be square integrable;
If only you’d let me, we would form a pair of equal-energy Fermions,
You up and me down,
Or you down and me up,
Or any superposition of the two, our worlds fractionally spinning,
We’d commute left and right, definitely positive and coherent only towards each other;
I could have been your Hermitian conjugate, adjoining to complete you,
Resonating for small angles even when you were offbeat;
We could have been entangled forever, never splitting up,
Connected together no matter the distance,
Our future no longer shrouded by uncertainty;
We could have built a warm and cosy house in the reciprocal lattice, a place to call our own;
We could have lived in spherical harmony, rotational symmetry,
We could have,
We could…
But no more.
There is no vacant orbital.
When I see your wavefunction collapse into the arms of another man, I feel all my eigenvalues degenerate to zero,
And the perturbation in my heart just keeps growing and growing,
And I don’t know how much longer I can keep going
Before they have to clean up the gory remains, the tragedy making front headlines in the newspapers,
“Lethal result for the famous experiment
As scientist places himself as ‘Schrödinger’s cat’”.

## There are no zeros in physics

February 5, 2015

I recently read an article by Joseph Ford: “How random is a coin toss?” (1983). In it, Ford talks about the relation between completely deterministic systems and the seemingly random behaviour they sometimes produce. “Roulette wheel spins, dice throws […] are universally presumed to be completely random despite their obvious underlying determinism. Weather, human behavior and the stock market are, on the other hand, commonly regarded as strictly deterministic, notwithstanding their seemingly frivolous unpredictability.”

Through arguments on chaotic orbits and algorithmic complexity, he claims that eventually we must throw away our continuum descriptions of physics (this is regardless of quantum mechanics; see footnote), as those require calculations and measurements with infinite precision; however, as our computers are finite and our measurements imprecise, any actual calculation we do will have errors. In chaotic systems, which vastly outnumber the non-chaotic ones, these errors increase exponentially, and our predictions break down.

Instead of a continuum, we will use a number set that does not rely on infinite precision: only those numbers which can be algorithmically computed in finite time. This pretty much means only the rationals, it would seem. Now, in a way, we already roughly do this, and every time we run numerical simulations we restrict ourselves to the rationals, plus maybe a few other selected numbers. However, this is not included in our main physical theory, but is a byproduct of our finite machines.

An interesting point which especially caught my attention is:
“We thus eliminate the last infinities, the infinitely large

$\infty = (1+1+1+\dots)$

and the infinitely small

$0 = (1+1+1+\dots)^{-1}$.”

And indeed, while checking for equality is a difficult topic in floating point computations, our computers still allow us to assign the perfectly rational “0” to a variable. Yet in reality, it is very unlikely to encounter a physical property with a perfect 0 value. As a statistical, averaged property I might expect it to appear quite a lot, but as a deterministic one – less so.
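The point is easy to demonstrate in any language with IEEE floats: computed values rarely hit their mathematical targets exactly, yet an exact 0 is trivially assignable and comparable. A quick Python check:

```python
# Most real values are not exactly representable in binary floating point,
# so equality checks on *computed* quantities are fragile...
a = 0.1 + 0.2
print(a == 0.3)   # False
print(a - 0.3)    # 5.551115123125783e-17

# ...yet assigning and testing a perfectly exact zero poses no problem at all:
x = 0.0
print(x == 0)     # True
```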

Here are two nice points of wisdom I encountered during my studies which emphasize this:

1) I remember clearly one of our classes in Waves in the third semester. We were talking about damped harmonic motion, and the professor, analysing the damping coefficient, said something along the lines of: “we see here that there are two types of solutions: overdamping, if ξ > 1, and underdamping, if ξ < 1”. At this point, a student raised his hand and asked, “what about the critical damping, for ξ = 1?”. To this the professor replied: “nonsense, that is just a mathematical joke; a true physical system will never have a ξ value equal to exactly one!”

2) In solid state, and in practically any field in physics, whenever there is a local minimum in a potential, the physicists like to say that at that point, the system behaves as a harmonic oscillator. Why? Because then the derivative is 0, and the second derivative is positive. Looking at the Taylor expansion:

$V(x)=V(x_0)+V'(x_0)(x-x_0)+\frac{1}{2}V''(x_0)(x-x_0)^2+O((x-x_0)^3)$

We know that $V'(x_0)=0$, and if we neglect the higher-order terms, we get the equations for a harmonic oscillator. A question naturally arises: why must the second derivative be positive? Maybe it too is 0, and the smallest power in the Taylor series is at least 3? My Solid State TA gives the following answer: when dealing with physical systems, it’s so enormously hard to actually get a 0, that in essence the quadratic coefficient is almost always nonzero; the curvature must be very special for both the first and second derivatives to vanish.
A good follow-up question would then be: if it’s so hard to get a 0, why can we assume that the first derivative is 0? My answer to this: it stems from the fact that the potential has an actual extremum; having or not having an extremum is a binary physical quality, and so can appear in physical variables. If we throw a ball up and it comes back down, we know that at some point it reached some maximal height and had velocity 0 (even if we cannot pinpoint the exact moment in time with infinite precision, we know it exists and can compute around it, at least using continuum physics). An exactly vanishing curvature, however, is a single point in the continuum, with no such binary signature.
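Spelling out point 2: near the minimum, for a particle of mass $m$, dropping the higher-order terms (the only assumption here) turns Newton’s law into

$m\ddot{x} = -V'(x) \approx -V''(x_0)(x-x_0), \qquad \omega = \sqrt{V''(x_0)/m},$

a harmonic oscillator precisely because $V''(x_0) > 0$; if the second derivative vanished too, the leading restoring force would no longer be linear.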

*Note: Ford mentions, but does not expand much on, the use of infinite precision in quantum mechanics, which is usually deterministic in its evolution but statistical in its measurements. I tend to see strength in his arguments only when we are purely deterministic.

## Programming complex transformations

December 2, 2014

Facebook should change their “It’s complicated” status to “It’s complex”. That way, real people with imaginary girlfriends could spread the news to all their friends!

– Ancient proverb

We were learning about conformal maps in complex function theory. While we did plenty of “circles are sent to circles” and “connected components are sent to connected components”, almost inevitably, we barely got to see any actual map in action.
I remembered seeing some very nice geometrical transforms in the wonderful YouTube series “Dimensions” by Leys, Ghys and Alvarez (link here), and decided to write a small program of my own that, given a picture, can apply any (reasonable) complex function to it, treating it as a transformation on pictures.

What does this mean? We can think of a picture as an array of pixels, each with its own $x$ and $y$ values. Treating the center of the picture as $(0,0)$, each pixel has a different coordinate. The pair $(x,y)$ in $\mathbb{R}^2$ can be treated as $z = x + iy$ in the complex plane. We can then take this complex number and put it in any complex function $f(z)$; the result is some other complex number, $w = a + ib$. We then interpret this new number as the coordinates of a new pixel; so in the new picture, the color at position $(a,b) = f(x+iy)$ will be the same as the color at position $(x,y)$ in the original picture.
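A minimal sketch of this forward mapping in Python with NumPy. The function name, the `scale` convention for how many pixels make one unit of the complex plane, and the choice to simply drop out-of-range or non-finite results are all my assumptions, not the actual code behind the post:

```python
import numpy as np

def forward_map(img, f, scale=100.0):
    """Naive forward mapping: the source pixel at (x, y) is plotted at
    f(x + iy), with the picture centered at the origin.
    img: (H, W, 3) uint8 array; unmapped target pixels stay black."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    # center the picture at (0, 0), flip y so "up" is positive, scale down
    z = ((xs - w / 2) + 1j * (h / 2 - ys)) / scale
    fz = f(z) * scale
    finite = np.isfinite(fz)
    fz = np.where(finite, fz, 0)            # placeholder; masked out below
    a = np.round(fz.real + w / 2).astype(int)
    b = np.round(h / 2 - fz.imag).astype(int)
    # keep only pixels that land inside the target picture
    ok = finite & (a >= 0) & (a < w) & (b >= 0) & (b < h)
    out[b[ok], a[ok]] = img[ys[ok], xs[ok]]  # later writes simply override
    return out
```

Usage is just `forward_map(img, lambda z: z**2)`; when two source pixels land on the same target, the later one overrides, which is exactly the ambiguity discussed further down.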

If $f$ is a funky enough function, the results should be awesome, and this lets you understand a little better what all sorts of analytic functions do (analytic functions preserve angles between the two pictures, so however twisted things get, we’ll always have some sanity).

Here, look at what happened to our poor tiger (original image courtesy of Wikipedia commons):

This…

Turns to this…

(confession: ok, this mapping isn’t a standard conformal map; read more to see what’s actually happening here).

I’ll now describe some of the problems and solutions I ran into; if you just want to see more pretty pictures, feel free to do just that.

The naive way to get a mapping is to do just what was described in the above explanation: take the original picture, and for each pixel $(x,y)$, plot it at $f(x,y)$. Unfortunately this has several problems. These stem mainly from discretization: the coordinates of the pixels come in integer units, and two nearby pixels will always differ by at least 1 in one of their coordinates. Proper scaling can make this discretization as small as we wish, so effectively we can have any two nearby pixels differ by $\frac{1}{n}$ for any $n$ of our choice, but problems can still occur.

The first problem is that even if your function is onto, meaning that it’s possible to designate the color of every pixel in the new picture, the result may still have gaping holes or “isolated pixels”. The severity of this problem depends on the function itself (I guess it generally depends on whether the absolute value of its derivative is close to 1 or not), and for some choices of $f$, your end product might only be partially filled.
In this example, generated naively by $f = \sqrt{z}$, the top of the picture is falling to pieces (also, it’s evident that the edges of the picture are getting left out, though this is because our source picture is finite, not due to discretization).

A partial solution is to “fill in the blanks” by averaging over neighboring pixels: each blank pixel takes the average color of its nearest neighbors. Assuming that the picture is not “too broken up”, this can work just fine – if holes are completely surrounded, you won’t really notice the difference (and it almost makes sense theoretically to do this, in terms of the mean value property). Indeed, running this fix helps the picture quite a lot, although there are still untreated areas which cannot be helped:

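A sketch of this fill-in pass, under the assumptions that “blank” means exactly black and that averaging over the four nearest neighbors suffices (the wrap-around at image borders caused by `np.roll` is a known sloppiness of this sketch, not part of the idea):

```python
import numpy as np

def fill_holes(img):
    """Give each blank (all-zero) pixel the average color of its
    non-blank 4-neighbors; pixels with no filled neighbors stay blank."""
    out = img.astype(float)
    blank = (img == 0).all(axis=2)
    filled = ~blank
    acc = np.zeros_like(out)          # sum of neighboring colors
    cnt = np.zeros(blank.shape)       # number of filled neighbors
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        mask = np.roll(filled, (dy, dx), axis=(0, 1))
        acc += np.roll(out, (dy, dx), axis=(0, 1)) * mask[..., None]
        cnt += mask
    ok = blank & (cnt > 0)
    out[ok] = acc[ok] / cnt[ok][:, None]
    return out.astype(img.dtype)
```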
The second problem is that some transformations can span over enormous scales. For example, with the transformation $z \rightarrow \frac{-1}{z}$, the interior of the unit disc exchanges places with the exterior. This means that when going over the pixels, ones very close to the origin are going to get sent way off to the edge of the new image, while ones far away are all going to be sent near the origin.

The result is that while the new image is very, very large, most of the “interesting” things (aka – most of the actual original picture) are contained in a very, very small region. Look at this example of $z \rightarrow \frac{-1}{z}$:

Not very interesting, is it?

You can adjust for this phenomenon if you know where to cut off your picture, ignoring the endpoints that are way off. But how can you know in advance the size of your image, or which points are good and which are not? This generally requires analyzing your function beforehand, which we do not want to do.

As a (very) partial fix, I noticed that most of these image points are isolated – the discretization means that two remote points will probably not be directly near each other. I wrote some small cleanup code that finds these points and eliminates them, rescaling the picture appropriately. Of course, there will always be isolated points; in fact, due to the discrete nature, every point in the new picture is isolated, so in effect we have to specify how close two points have to be to each other to be considered neighbors or not; this is done according to the resolution of the target image.
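A sketch of one such cleanup pass; the `radius` parameter plays the role of “how close two points must be to count as neighbors”, and everything here, names included, is an illustration rather than the actual code from the post:

```python
import numpy as np

def remove_isolated(img, radius=1):
    """Blank out non-black pixels that have no other non-black pixel
    within `radius` (the 8-neighborhood, for radius=1)."""
    filled = ~(img == 0).all(axis=2)
    neighbors = np.zeros(filled.shape, dtype=int)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy, dx) != (0, 0):
                # np.roll wraps at the borders; acceptable for a sketch
                neighbors += np.roll(filled, (dy, dx), axis=(0, 1))
    out = img.copy()
    out[filled & (neighbors == 0)] = 0
    return out
```

Run iteratively, and after each pass crop to the bounding box of the surviving pixels to get the rescaling effect described above.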

In any case, the code I wrote works iteratively. Here are the first three iterations:

You can see that as we iterate, the pictures get better (and focus more on what’s important, the actual head of the tiger, instead of empty space). However, I would still consider this to be rather inadequate (although we do get a “particle-erosion” effect for free, which is cool).

A third problem with this method is that $f$ doesn’t have to be a one-to-one function, meaning that there may be two possible colors for a given pixel in the generated picture. How do we decide which one to take? Do we combine? Do we override? This is a general problem, not one due to discretization, and I just ignored it here (the code overrides new pixels). Here is $f(z)=z^2$:

For our last image with this method, here is $\log(\text{tiger})$.

The naive method is riddled with problems and artifacts. However, there is a way that generally treats discretization better, and while it also has some black empty space, and some pixelated areas, it doesn’t tend to have small annoying holes (for nice functions, anyway).

The solution is this: instead of going over all the pixels $(x,y)$ in the original picture and computing the coordinates of the pixel in the new picture, $w = f(x+iy)$, we go over all the pixels $(a,b)$ in the new picture, and calculate the inverse $x+iy=f^{-1}(a+ib)$. If we do not exceed the size of the original picture, we basically ensure that we will not have holes in the result, because we actually go over each pixel and calculate its color, instead of having it “get picked by accident”, as in the original method.
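The same machinery as the forward pass, run backwards; again a sketch, with the names and the scaling convention being my own choices:

```python
import numpy as np

def inverse_map(img, f_inv, scale=100.0):
    """Inverse mapping: every target pixel (a, b) looks up its color at
    the source pixel f_inv(a + ib), so no target pixel is ever skipped."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    # coordinates of the *target* pixels in the complex plane
    wz = ((xs - w / 2) + 1j * (h / 2 - ys)) / scale
    z = f_inv(wz) * scale
    finite = np.isfinite(z)
    z = np.where(finite, z, 0)               # placeholder; masked out below
    x = np.round(z.real + w / 2).astype(int)
    y = np.round(h / 2 - z.imag).astype(int)
    # pixels whose preimage falls outside the source stay black,
    # like the dark disc at the center of the -1/z example
    ok = finite & (x >= 0) & (x < w) & (y >= 0) & (y < h)
    out[ys[ok], xs[ok]] = img[y[ok], x[ok]]
    return out
```

For $f(z) = \frac{-1}{z}$ the function is its own inverse, so `inverse_map(img, lambda z: -1 / z)` produces the picture directly.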

This method eliminates all the sporadic and isolated holes and points we had using the naive way. Here is $f(z) = \frac{-1}{z}$ using the inverse method:

Much better!
Of course, there are still a few problems:

1. The edges are pixelated – this is because they all draw from basically the same region – the immediate center of the tiger’s face – and a small region that is smeared over a large area is pixelated.
2. There is still a gaping hole in the center. In order to fill it up, we would need an even larger original tiger image – the inverse of these points is out of bounds. In fact, for this specific function, $\frac{1}{z}$, we always have some finite dark area at the center, since we do not work with infinite pictures. There are ways to overcome this, but not with simple finite image transformations.

One drawback of this method is that we need to specify the inverse of $f$ directly. This may be simple in the case of simple functions like $\sqrt{z}$ or $\frac{1}{z}$, but in general it may not be easy.
Further, there may be artifacts arising from this method. For example, suppose we want to see the map $f(z) = \sqrt{z}$. In order to use the inverse method, we would have to calculate $z^2$ for all the points in the new image.
But the equation $w^2 = z$  has two solutions for $w$: both $\sqrt{z}$ and $- \sqrt{z}$! When using the inverse method while giving $z^2$ as an inverse function, we get a different image from the direct method:

And indeed, notice – the tiger has been replicated! This can never happen with the direct method, which by definition maps each point only once.

Now you see how I cheated you a bit with the first picture in this post – it cannot be the result of a conformal map, since there are clearly multiple instances of the tiger! In fact, it was created by calculating the inverse of $\frac{1}{z-100(1+i)} + \frac{2.5}{z+100(1+i)}$.

That’s it for now; happy mapping!

## I am The Harmonic

November 17, 2014

List of courses for which I’ve had to solve (for class, assignments, or tests) simple, damped, and driven harmonic oscillators:

Physics 1: The student encounters this recurrent and relentless creature for the first time.
Physics 2: Now with electricity!
Physics lab 1: First chance you actually get to see it in real spacetime. Still have to solve the equations again.
Physics lab 2: Now with electricity (2)!
Waves and oscillations: Well, it won’t be a course about oscillations without one or twelve.
Analytical Mechanics: First with Lagrangian, then with Hamiltonian, then with Hamiltonians and matrices.
Ordinary differential equations: “We will assume that this is the first time you have ever seen a linear second order differential equation.”
Partial differential equations: Special case for Sturm Liouville, all over the place.
Quantum Mechanics 1: But be discrete about it.
Quantum Mechanics 2: Once with Schrödinger picture; once with Heisenberg picture; interaction picture pending next homework set.

And the year has just started; I’m sure at least two more courses will add themselves to that list before this degree is all said and done.

Not that I mind it or anything. On the contrary. The harmonic motion is a never-dying beast, rising from the dead and possessing each course in its own spectral way. Each time, we, the students, are touched and are offered a unique glimpse at true Nirvana.

I can only hope that one day I too will truly embrace The Harmonic and become One with the Oscillating Truth.

Until then, I guess my CV will be organized hierarchically:
Vaguely familiar with: theories of space and time, theories of matter and light, topology and groups.
Strong with: oscillating point masses; oscillating mass points; oscillating charges; oscillating oscillations; oscillating scintillation; oscillation of moods and modes of oscillations.

Well, I guess that when all you have is an exponent, everything in life seems linear.

## The Oregon Trail

November 6, 2014

You may have gotten the impression from my previous post that I went to the United States in order to do a project on randomness and quantum mechanics, or something of that sort. This, of course, could not be further from the truth. My real goal in life would never be something as silly as messing around with the foundations of nature, mathematics, and computer science. It’s to see as many trees as possible. And what better place to do that than Oregon?
So up we went, my friend and I, into pine country. It was the middle of Fire Season – where, unlike other hunting seasons, it is the fire that hunts the humans. But no signpost of “extreme fire danger”, “Beware! Cougars ahead” or “Danger of bears” would keep us out.
As in any good trip, first comes careful planning. A meticulously detailed and elaborate road plan was laid out, consisting basically of drawing a closed circuit in Google Maps and hoping everything would converge in time and that we’d find places to sleep.

Rather surprisingly, this plan worked out quite well. There were a few deviations (The California Redwoods were too much of a temptation to ignore), but overall, this was the trip. Notice the repulsion from the center – too many cities, not enough trees.
So, did we get to see some trees? Most certainly! Many, many trees. Here is Boyscout, a hefty redwood in northern California (it did not return my embrace):

Better than trees, however, was moss. And better still were trees and landscape covered in moss.

There were also picturesque mountains, Sahara-imitation desert, and volcanic lakes:

But worry not, we had plenty of energy sources to make it through!

Final verdict: Oregon is a beautiful place with friendly people. You should go there! (you should also try to mess around with the foundations of nature, mathematics, and computer science, though; try to do some of both!)

## They should have sent a complexity theorist

October 30, 2014

My O(1) readers are probably restlessly wondering where I’ve been, how I survived Israel’s freakishly sweaty summer, and what’s up in general.
Well, the truth is, I did manage to sweat under the Mediterranean sun, but most of the summer I spent in the United States. The official reason, given to the consulate and on my visa documents, was to do a project on “jump-starting a recent protocol on infinite randomness, using quantum-mechanical experiments”. The visa was approved, but I doubt that the people who stamped my passport understood what it was all about; hell, even I didn’t really know what I was going to do. I therefore dedicate this post to the men and women of the not-so-lubricated bureaucratic machinery that made my trip possible, in hopes that it will teach them all they ever wanted to know about unbounded randomness expansion using untrusted quantum devices, but were too afraid to ask. (Further dedication goes to Scott, who kindly took me under his wing and oversaw the project).
(The following post isn’t really technical, but feel free to skip over parts you don’t understand; there’s zounds more text that doesn’t require any fancy quantum mechanical stuff).

Randomness is important in life. For example, the optimal strategy in rock-paper-scissors is to pick each option with probability 1/3. This might seem easy, but it isn’t: humans are quite bad at imitating random sequences, or won’t do so even when they know it’s best for them (best for them in theory; but then, what else is there?). It’d be much better if we had an infinite sequence of random bits that we could use whenever we wanted to. How do we go about getting such a sequence?
Ask any physicist, and she’ll tell you, “why, it’s easy! Use quantum mechanics!” And indeed, quantum mechanics seems to be the place in nature where randomness comes not from a lack of prior information on our part (e.g., a coin flip is random because we don’t know its precise position, or the precise flipping force, or the precise air pressure and currents), but is inherent in the physical reality itself. For the most part, reasonable “hidden variable” theories – theories in which the randomness observed in experiments stems from quantities that *are* deterministic, but we just don’t know them – have been ruled out.
So, the easiest way to get random bits using quantum mechanics is to take a quantum state that is in superposition – say a photon with both horizontal polarization (represented as the qubit $|0 \rangle$) and vertical polarization (represented as the qubit $|1 \rangle$) – and just measure which one it is. Thus, the overall state of the photon is the superposition $\frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle)$, and measuring its polarization yields $|0 \rangle$ with probability 0.5, and $|1 \rangle$ with probability 0.5.
So far so good. In an ideal world, we would be done here. We’d build a nice small black box with a large red button on top. Every time we press it, the box would create a superposed photon, measure it, and output the result. Infinite bits at the press of a button.
But alas, we do not live in an ideal world, and most of us, while avid rock-paper-scissors players, do not have the necessary equipment or training to build quantum mechanical boxes. Of course, in this capitalistic global entrepreneurship enterprise world we live in, this isn’t much of a problem – we can always count on the market adjusting to the needs of the people, and companies selling quantum mechanical random number generators will sprout up like mushrooms after the rain. Hey, they already have.
The problem with these companies is that you can never quite be sure that they are honest. How do you know that they aren’t selling you only a pseudorandom number generator, which uses a deterministic algorithm and a small random seed? There are statistical tests you can run on the output, but we don’t yet know whether it’s possible to discern between a pseudorandom output and a truly random output in reasonable time. If they are cheating you in this way, then your entire “random” sequence is vulnerable.
Further, even if the company you bought your box from did give you truly random bits, how do you know that they were created on the spot? Perhaps the company generated a gigantic random string back in their HQ, and just put it in your box. Every time you press the big red button, you get a new bit out of that string. The output is indeed random, but it wouldn’t be secure – the company could sell information about your random bits to the highest bidder, and you would face a gratuitous defeat in the rock-paper-scissors nationals.
These two problems apply to any true random number generator, but if you are using quantum ones there is yet another issue: even if the company did generate the bits on the fly, they could still get information on your bits by quantum entanglement. In a jiffy, instead of creating a single photon in the state $\frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle)$, they’d create two photons in the entangled state $\frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle)$: a superposition of “both photons have horizontal polarization” and “both photons have vertical polarization”. One photon they’d put in your box, the other they’d keep for themselves back in their HQ. The rules of quantum mechanics then say that when you press the big red button and the box measures the state – say you got a $|1 \rangle$ – then when the company measures their photon, they also get a $|1 \rangle$. They always get what you got, and again, your information is not secure. This does not involve any communication between the box and the company – it would work even if you put it in an underground vault in Andromeda – it’s just a peculiar property of quantum states.
So right now things are looking pretty grim: there’s this wonderful startup idea – to produce random bits using quantum mechanical phenomena – but buyers don’t have a guarantee that the companies aren’t cheating them. And we all know where that leads to.
But not all is lost! For if we are allowed to tweak the way our quantum mechanical boxes operate, we can build a statistical test that has nothing to do with ordinary randomness / pseudorandomness tests, and that test guarantees honesty. Boxes which pass the test must produce random bits; they produce these bits on the fly; and they can only have a tiny amount of entanglement with the company HQ, giving the company almost no information about your random sequence. A magical cure for all our maladies!
To see how it works, we’ll look at the famous CHSH game. In this game, Alice and Bob are two players who play cooperatively in the following very realistic scenario: they are both put into separate rooms, and each one is presented with a bit: Alice is shown X, and Bob is shown Y. Based on that bit, they have to output a bit themselves: Alice outputs A, and Bob outputs B. They win the game if

$A \oplus B = X \wedge Y.$

They are allowed to decide on a strategy beforehand, but once they are put into the rooms, they cannot communicate.
Suppose that X and Y are chosen uniformly at random, that is, they each have a 0.5 probability of being 0, and 0.5 probability of being 1. What is the optimal strategy for Alice and Bob – the one which gives them the highest chances of winning?
Here is one strategy: Alice and Bob ignore the inputs X and Y completely, and always both output 0. So $A \oplus B = 0$, and this is equal to $X \wedge Y$ for 75% of the cases: whenever either X or Y is 0. The success rate for this strategy is 0.75.
It can be shown that there is no better deterministic strategy (you can do the truth table, if you want). But then there is also no better probabilistic strategy, since it would just be a convex combination of deterministic ones. So the best Alice and Bob can do, if they are not allowed to communicate, is to win 0.75 of the time.
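The truth table is small enough to just run. A sketch enumerating all 16 deterministic strategies, where each player’s output is some function of their input bit:

```python
from itertools import product

# A deterministic strategy is a pair of functions {0,1} -> {0,1}:
# (a0, a1) is Alice's output on input 0 or 1, likewise (b0, b1) for Bob.
best = 0.0
for a0, a1, b0, b1 in product((0, 1), repeat=4):
    # count wins over the four equally likely input pairs (X, Y)
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x, y in product((0, 1), repeat=2))
    best = max(best, wins / 4)
print(best)  # 0.75
```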
Well, classically, that is true, but it is not true if they are allowed to share quantum resources. Specifically, if they each have one photon of the entangled state $\frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle)$, then once they are shown the bits, they can measure their photons using a set of agreed-upon, non-trivial measurements, and output whatever their measurements give. Their outputs will be completely random (individually), but correlated with each other. If they choose the right measurements, they can boost their win rate up to $\cos^2(\pi/8) \approx 0.85$ (!). This is the best known strategy that does not involve communication between the players. (For a billion details, see here)
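The quantum win rate can be checked directly: for the state $\frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle)$ measured at real polarizer angles $\alpha$ (Alice) and $\beta$ (Bob), the outcomes agree with probability $\cos^2(\alpha-\beta)$. Plugging in one standard optimal choice of angles (the specific angles below are one such choice, not the only one):

```python
import numpy as np

alpha = [0.0, np.pi / 4]         # Alice's measurement angle for input x
beta = [np.pi / 8, -np.pi / 8]   # Bob's measurement angle for input y

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        d = alpha[x] - beta[y]
        # need equal outputs unless x = y = 1, where unequal outputs win
        p = np.sin(d) ** 2 if x & y else np.cos(d) ** 2
        win += p / 4                       # inputs are uniform
print(round(win, 4))  # 0.8536
```

Every one of the four cases comes out to $\cos^2(\pi/8) \approx 0.854$, which is where the quantum advantage over 0.75 comes from.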
But wait, there’s more! The CHSH game is robust, in the sense that if Alice and Bob have a success rate very close to 0.85, then with high probability they are using a strategy that is not very different from the known optimal one. This means that the bits they output are very close to random (what does it mean, a “very different strategy”? There is a formal definition which we won’t go into here, but as an example, the strategy “0.01 of the time output 0 and 0.99 of the time use the optimal strategy” is very close to optimal; so is “measure in a slightly different way than the optimal, so the correlations are changed just a bit”).
We now have a test for our random number generator box. Instead of having one big red button which measures a $\frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle)$ photon, we’ll ask the manufacturer to give us two boxes. These boxes will act as Alice and Bob: they will accept as input two random bits, and output bits of their own. We can play the CHSH game many, many times, and measure their success rate: if it’s very close to 0.85, then they cannot have coordinated their outputs in advance; they cannot have used a deterministic algorithm; and the company cannot have much information about the output (ok, we haven’t shown this here, but in a jiffy: in order to have information about it, the company needs to be entangled with the two boxes; but almost all the entanglement is “used up” in order to boost the win rate from 0.75 to 0.85, so “nothing is left” for the company to know).
This is what is commonly called (well, common among the relevant community) “certified randomness” – the CHSH game can be used to test the randomness of our devices (in fact, there is a broad class of “XOR games” that can be used – games which are similar to the CHSH, but may involve different requirements or more players).
We would really like to say that we are done here, but the keen eyed among you must have already noticed the bug in the plan. We have a pair of boxes that, when given two random bits, output two correlated bits. We need random numbers just to test if a random number generator works. What’s worse, we put in two, and get back less than two. We are actually consuming bits in the process! Alas, the market for quantum random number generators is much more blooming than the one for quantum random number extinguishers.
But not all is lost! If we are allowed to tweak the inputs to the pair of boxes, we can create a test that uses fewer random bits than it puts out. The main concept is as follows: we still let the boxes play a lot of CHSH games, only now, instead of having totally random X and Y (requiring 2 random bits per game), we alter the input a bit: most of the time, we’ll have X = 0 and Y = 0. This is like a “dud” game, and if the boxes anticipate this, they can potentially output 0,0, as described before. However, for a very small percentage of randomly selected inputs, X and Y are selected at random, as usual; these are called “real” games. On these games, if the boxes are to perform with a high win rate, they have to be honest – they have to actually play the winning CHSH strategy. The point is that the real games are chosen at random, and the boxes have no idea which ones they are. If they play assuming the X = 0, Y = 0 dud games, they run the risk of falling on real games and winning with only 0.75 probability. The trick of these tests, then, is to find out how to distribute the real games among the duds, how strict to be when deciding if the boxes pass the tests, etc. This type of test is called a randomness expansion protocol, in that it requires a certain number of bits (for choosing which games are duds and which are real, and also for the inputs of the real games), but outputs more than was used. Both polynomial and exponential expansions have been developed, and more recently, even infinite expansion! The latter is generally done by back-feeding the output as input for the boxes, but the details are a bit complicated, especially the whole “proving that it works against entangled opponents” thing. It means that you can start with a finite string of randomness (say, one you obtained from a trusted source), and expand it into one as long as you wish! There will be errors, but they grow exponentially smaller the more initial bits you use.
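A toy sketch of the input side of such a protocol. The function name and the simple coin-flip round selection are my own simplifications; actual protocols are far more careful about how the seed randomness is spent (and about how strict the pass/fail threshold should be):

```python
import random

def referee_inputs(n_rounds, p_real, rng=None):
    """Generate CHSH inputs for an expansion-style test: most rounds are
    'dud' games with fixed inputs (0, 0); a randomly chosen small fraction
    are 'real' games with uniform inputs.  Picking which rounds are real,
    and the real inputs themselves, is what consumes the initial seed."""
    rng = rng or random.Random(0)
    rounds = []
    for _ in range(n_rounds):
        if rng.random() < p_real:   # this choice costs randomness
            rounds.append(("real", rng.randint(0, 1), rng.randint(0, 1)))
        else:
            rounds.append(("dud", 0, 0))
    return rounds
```

The verifier would then compare the boxes’ empirical win rate on the real rounds against the quantum optimum; boxes that gamble on every round being a dud get caught there.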
Personally, I think this whole thing is really cool. If you trust your quantum box, then generating an infinite random string is as easy as $|1 \rangle$ $|2 \rangle$ $|3 \rangle$. But even if you don’t, you can still obtain an infinite random string. It requires a bit more pyrotechnics, and it requires you to somehow obtain a small number of random bits elsewhere, but it’s possible. And actually, despite the fact that we called our boxes quantum, they don’t necessarily have to be. All they have to do is win the CHSH game with probability close to 0.85. Quantum mechanics isn’t the final say in physics; maybe we’ll find better theories which supersede it. Any mechanism which wins better than the classical 0.75 can be used in this type of protocol.
And that’s pretty much the gist of “a recent protocol on infinite randomness, using quantum-mechanical experiments”: a method to use untrusted quantum boxes in order to take a small number of random bits, and turn it into an unbounded number of (nearly) random bits. That’s it.
Where does your humble servant come into play in this whole ordeal? A very tiny part, and that’s the “jump starting” in the clause “jump starting a recent protocol on infinite randomness”. An initial random string is needed in order to run the protocol, and the question is: how large is that string, as a function of the errors you are willing to tolerate? (there are plenty of places where errors accumulate, but I skipped the discussion of errors and robustness because it really only complicates matters, and it’s complicated enough as it is).
So that’s what I set out to find. I basically gave bounds for various expressions described in the protocols which relate to the number of random bits output. The answer? Well, it depends on the error, of course. But, let’s say, the bound I got is on the order of O(1,000,000) for reasonable error. For the rest, you’ll have to read my paper, I guess.