Programming complex transformations

December 2, 2014

Facebook should change their “It’s complicated” status to “It’s complex”. That way, real people with imaginary girlfriends could spread the news to all their friends!

        – Ancient proverb

We were learning about conformal maps in complex function theory. While we did plenty of “circles are sent to circles” and “connected components are sent to connected components”, it almost goes without saying that we barely got to see any actual map in action.
I remembered seeing some very nice geometric transformations in the wonderful YouTube series “Dimensions” by Leys, Ghys and Alvarez (link here), and decided to write a small program of my own that, given a picture, applies any (reasonable) complex function to it, treating it as a transformation on pictures.

What does this mean? We can think of a picture as an array of pixels, each with its own x and y values. Treating the center of the picture as (0,0), each pixel has a different coordinate. The pair (x,y) in \mathbb{R}^2 can be treated as z = x + iy in the complex plane. We can then take this complex number and put it in any complex function f(z); the result is some other complex number, w = a + ib. We then interpret this new number as the coordinates of a new pixel; so in the new picture, the color at position (a,b) = f(x+iy) will be the same as the color at position (x,y) in the original picture.
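
For example, with f(z) = z^2, the pixel at (3,4) corresponds to z = 3 + 4i, and f(z) = (3+4i)^2 = -7 + 24i, so its color would be painted at position (-7, 24) in the new picture.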

If f is a funky enough function, the results should be awesome, and this lets you understand a little better what all sorts of analytic functions do (analytic functions preserve angles wherever their derivative is nonzero, so however twisted things get, we’ll always have some sanity).

Here, look at what happened to our poor tiger (original image courtesy of Wikimedia Commons):

This…
tiger_small

Turns to this…

two_recips_unequal_tiger

(confession: ok, this mapping isn’t a standard conformal map; read more to see what’s actually happening here).

I’ll now describe some of the problems and solutions I ran into; if you just want to see more pretty pictures, feel free to do just that.

The naive way to get a mapping is to do just what was described in the above explanation: take the original picture, and for each pixel (x,y), plot its color at f(x+iy). Unfortunately this has several problems. These stem mainly from discretization: the coordinates of the pixels come in integer units, and two nearby pixels will always differ by at least 1 in one of their coordinates. Proper scaling can make this discretization as small as we wish, so effectively we can have any two nearby pixels differ by \frac{1}{n} for any n of our choice, but problems can still occur.
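
For concreteness, here is a minimal sketch of this naive forward mapping. It is not the actual code behind these images; it assumes the picture is a numpy array of pixels, f is any Python function on complex numbers, and the scale and output size are arbitrary:

    import numpy as np

    def naive_transform(img, f, scale=100.0, out_size=(800, 800)):
        """Naive forward mapping: push each source pixel's color to f(z)."""
        h, w = img.shape[:2]
        out = np.zeros(out_size + img.shape[2:], dtype=img.dtype)
        cy, cx = h / 2.0, w / 2.0                     # picture center is the origin
        ocy, ocx = out_size[0] / 2.0, out_size[1] / 2.0
        for y in range(h):
            for x in range(w):
                z = complex((x - cx) / scale, (y - cy) / scale)
                try:
                    fz = complex(f(z))
                except ZeroDivisionError:             # e.g. 1/z at the origin
                    continue
                if not np.isfinite(fz):
                    continue
                a = int(fz.real * scale + ocx)        # back to pixel coordinates
                b = int(fz.imag * scale + ocy)
                if 0 <= a < out_size[1] and 0 <= b < out_size[0]:
                    out[b, a] = img[y, x]             # holes remain where nothing lands
        return out

    # e.g. transformed = naive_transform(tiger, lambda z: z ** 0.5)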

The first problem is that even if your function is onto, meaning that it’s possible to assign a color to every pixel in the new picture, the result may still have gaping holes or “isolated pixels”. The severity of this problem depends on the function itself (I guess it generally depends on whether the absolute value of its derivative is close to 1 or not), and for some choices of f, your end product might be only partially filled.
In this example, generated naively by f = \sqrt{z}, the top of the picture is falling to pieces (also, it’s evident that the edges of the picture are getting left out, though this is because our source picture is finite, not due to discretization).

naive_sqrt_tiger

A partial solution is to “fill in the blanks” by averaging over neighboring pixels: each blank pixel takes the average color of its nearest neighbors. Assuming that the picture is not “too broken up”, this can work just fine – if holes are completely surrounded, you won’t really notice the difference (and it almost makes sense theoretically to do this, in terms of the mean value property). Indeed, running this fix helps the picture quite a lot, although there are still untreated areas which cannot be helped, as you can see below.
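
A sketch of such a fill-in pass, assuming we also kept a boolean mask marking which target pixels received a color during the forward mapping (again, an illustration of the idea rather than the actual code):

    import numpy as np

    def fill_holes(out, filled):
        """Give every empty pixel the average color of its filled 4-neighbors."""
        h, w = filled.shape
        result = out.copy()
        for y in range(h):
            for x in range(w):
                if filled[y, x]:
                    continue
                neighbors = [out[y + dy, x + dx]
                             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= y + dy < h and 0 <= x + dx < w
                             and filled[y + dy, x + dx]]
                if neighbors:                         # leave large gaps untouched
                    result[y, x] = np.mean(neighbors, axis=0)
        return result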

naive_filled_sqrt_tiger

The second problem is that some transformations can span over enormous scales. For example, with the transformation z \rightarrow \frac{-1}{z}, the interior of the unit disc exchanges places with the exterior. This means that when going over the pixels, ones very close to the origin are going to get sent way off to the edge of the new image, while ones far away are all going to be sent near the origin.

The result is that while the new image is very, very large, most of the “interesting” things (that is, most of the actual original picture) are contained in a very, very small region. Look at this example of z \rightarrow \frac{-1}{z}:

bad_inverse_tiger
Not very interesting, is it?

You can adjust for this phenomenon if you know where to cut off your picture, ignoring the points that land way off. But how can you know in advance the size of your image, or which points are good and which are not? This generally requires analyzing your function beforehand, which we do not want to do.

As a (very) partial fix, I noticed that most of these image points are isolated – the discretization means that two remote points will probably not end up directly next to each other. I wrote some small cleanup code that finds these points and eliminates them, rescaling the picture appropriately. Of course, there will always be isolated points; in fact, due to the discrete nature, every point in the new picture is isolated, so in effect we have to specify how close two points must be in order to count as neighbors; this is done according to the resolution of the target image.
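
Here is a rough sketch of such a cleanup pass (a simplification of the idea, not the actual code): a filled pixel survives only if some other filled pixel lies within a small radius set by the target resolution, and the image is then cropped to what remains.

    import numpy as np

    def drop_isolated(filled, radius=2):
        """Unmark filled pixels that have no other filled pixel within `radius`."""
        h, w = filled.shape
        keep = filled.copy()
        for y, x in zip(*np.nonzero(filled)):
            window = filled[max(0, y - radius):y + radius + 1,
                            max(0, x - radius):x + radius + 1]
            if window.sum() <= 1:                     # only the pixel itself
                keep[y, x] = False
        return keep

    def crop_to_content(img, filled):
        """Crop to the bounding box of the surviving pixels (then rescale as needed)."""
        ys, xs = np.nonzero(filled)
        return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]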

In any case, the code I wrote works iteratively. Here are the first three iterations:

inverse_1_iter_tiger

inverse_2_iter_tiger

inverse_3_iter_tiger

You can see that as we iterate, the pictures get better (and focus more on what’s important, the actual head of the tiger, instead of empty space). However, I would still consider this to be rather inadequate (although we do get a “particle-erosion” effect for free, which is cool).

A third problem with this method is that f doesn’t have to be one-to-one, meaning that there may be two possible colors for a given pixel in the generated picture. How do we decide which one to take? Do we combine? Do we override? This is a general problem, not one due to discretization, and I just ignored it here (the code simply overwrites pixels that were already colored). Here is f(z)=z^2:

quadratic_tiger

For our last image with this method, here is \log(\text{tiger}).

log_tiger

The naive method is riddled with problems and artifacts. However, there is a way that generally treats discretization better, and while it also has some black empty space and some pixelated areas, it doesn’t tend to have small annoying holes (for nice functions, anyway).

The solution is this: instead of going over all the pixels (x,y) in the original picture and computing the coordinates of the pixel in the new picture, w = f(x+iy), we go over all the pixels (a,b) in the new picture and calculate the inverse x+iy=f^{-1}(a+ib). As long as the inverse lands within the bounds of the original picture, we basically ensure that there will be no holes in the result, because we actually go over each target pixel and calculate its color, instead of having it “get picked by accident”, as in the original method.
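
A minimal sketch of this inverse (pull-back) approach, in the same hypothetical setup as the earlier snippets; f_inv stands for the inverse of the map we want to visualize:

    import numpy as np

    def inverse_transform(src, f_inv, out_size=(800, 800), scale=100.0):
        """For every target pixel (a, b), pull its color from f_inv(a + ib)."""
        sh, sw = src.shape[:2]
        out = np.zeros(out_size + src.shape[2:], dtype=src.dtype)
        scy, scx = sh / 2.0, sw / 2.0
        ocy, ocx = out_size[0] / 2.0, out_size[1] / 2.0
        for b in range(out_size[0]):
            for a in range(out_size[1]):
                try:
                    z = complex(f_inv(complex((a - ocx) / scale, (b - ocy) / scale)))
                except ZeroDivisionError:
                    continue
                x = int(z.real * scale + scx)
                y = int(z.imag * scale + scy)
                if 0 <= x < sw and 0 <= y < sh:       # out-of-bounds sources stay black
                    out[b, a] = src[y, x]
        return out

    # f(z) = -1/z is its own inverse:
    # result = inverse_transform(tiger, lambda w: -1 / w)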

This method eliminates all the sporadic and isolated holes and points we had using the naive way. Here is f(z) = \frac{-1}{z} using the inverse method:

inverse_tiger3

Much better!
Of course, there are still a few problems:

  1. The edges are pixelated – this is because they all draw from basically the same region – the immediate center of the tiger’s face – and a small region that is smeared over a large area is pixelated.
  2. There is still a gaping hole in the center. In order to fill it up, we would need an even larger original tiger image – the inverse of these points is out of bounds. In fact, for this specific function, \frac{1}{z}, we always have some finite dark area at the center, since we do not work with infinite pictures. There are ways to overcome this, but not with simple finite image transformations.

One drawback of this method is that we need to specify the inverse of f directly. This may be simple in the case of functions like \sqrt{z} or \frac{1}{z}, but in general it isn’t always easy.
Further, there may be artifacts arising from this method. For example, suppose we want to see the map f(z) = \sqrt{z}. In order to use the inverse method, we have to calculate w^2 for every point w in the new image.
But the equation w^2 = z has two solutions for w: both \sqrt{z} and - \sqrt{z}! When using the inverse method with z^2 as the inverse function, we get a different image from the direct method:

sqrt_tiger

And indeed, notice – the tiger has been replicated! This can never happen with the direct method, which by definition maps each point only once.

Now you see how I cheated you a bit with the first picture in this post – it cannot be the result of a conformal map, since there are clearly multiple instances of the tiger! In fact, it was created by calculating the inverse of \frac{1}{z-100(1+i)} + \frac{2.5}{z+100(1+i)}.

That’s it for now; happy mapping!

staring_tiger

I am The Harmonic

November 17, 2014

List of courses for which I’ve had to solve (for class, assignments, or tests) simple, damped, and driven harmonic oscillators:

Physics 1: The student encounters this recurrent and relentless creature for the first time.
Physics 2: Now with electricity!
Physics lab 1: First chance you actually get to see it in real spacetime. Still have to solve the equations again.
Physics lab 2: Now with electricity (2)!
Waves and oscillations: Well, it won’t be a course about oscillations without one or twelve.
Analytical Mechanics: First with Lagrangian, then with Hamiltonian, then with Hamiltonians and matrices.
Ordinary differential equations: “We will assume that this is the first time you have ever seen a linear second order differential equation.”
Partial differential equations: Special case of Sturm–Liouville, all over the place.
Quantum Mechanics 1: But be discrete about it.
Quantum Mechanics 2: Once with Schrödinger picture; once with Heisenberg picture; interaction picture pending next homework set.

And the year has just started; I’m sure at least two more courses will add themselves to that list before this degree is all said and done.

Not that I mind it or anything. On the contrary. The harmonic motion is a neverdying beast, rising from the dead and possessing each course in its own spectral way. Each time, we, the students, are touched and are offered a unique glimpse at true Nirvana.

I can only hope that one day I too will truly embrace The Harmonic and become One with the Oscillating Truth.

Until then, I guess my CV will be organized hierarchically:
Vaguely familiar with: theories of space and time, theories of matter and light, topology and groups.
Strong with: oscillating point masses; oscillating mass points; oscillating charges; oscillating oscillations; oscillating scintillation; oscillation of moods and modes of oscillations.

Well, I guess that when all you have is an exponent, everything in life seems linear.

The Oregon Trail

November 6, 2014

You may have gotten the impression from my previous post that I went to the United States in order to do a project on randomness and quantum mechanics, or something of that sort. This, of course, could not be further from the truth. My real goal in life would never be something as silly as messing around with the foundations of nature, mathematics, and computer science. It’s to see as many trees as possible. And what better place to do that than Oregon?
So up we went, my friend and I, into pine country. It was the middle of Fire Season – where, unlike other hunting seasons, it is the fire that hunts the humans. But no signpost of “extreme fire danger”, “Beware! Cougars ahead” or “Danger of bears” would keep us out.
As in any good trip, first comes careful planning. A meticulously detailed and elaborate road plan was laid out, consisting basically of drawing a closed circuit in Google Maps and hoping everything would converge in time and that we’d find places to sleep.

oregon_nolegend

Rather surprisingly, this plan worked out quite well. There were a few deviations (The California Redwoods were too much of a temptation to ignore), but overall, this was the trip. Notice the repulsion from the center – too many cities, not enough trees.
So, did we get to see some trees? Most certainly! Many, many trees. Here is Boyscout, a hefty redwood in northern California (it did not return my embrace):

boyscout

Better than trees, however, was moss. And better than that were trees and landscape covered in moss.

ramona_falls

There were also picturesque mountains, Sahara-imitation desert, and volcanic lakes:

mount_hood

dune

crater_lake

But worry not, we had plenty of energy sources to make it through!

voodoo_doughnuts

Final verdict: Oregon is a beautiful place with friendly people. You should go there! (you should also try to mess around with the foundations of nature, mathematics, and computer science, though; try to do some of both!)

They should have sent a complexity theorist

October 30, 2014

My O(1) readers are probably restlessly wondering where I’ve been, how I survived Israel’s freakishly sweaty summer, and what’s up in general.
Well, the truth is, I did manage to sweat under the Mediterranean sun, but most of the summer I spent in the United States. The official reason, given to the consulate and on my visa documents, was to do a project on “jump-starting a recent protocol on infinite randomness, using quantum-mechanical experiments”. The visa was approved, but I doubt that the people who stamped my passport understood what it was all about; hell, even I didn’t really know what I was going to do. I therefore dedicate this post to the men and women of the not-so-lubricated bureaucratic machinery that made my trip possible, in hopes that it will teach them all they ever wanted to know about unbounded randomness expansion using untrusted quantum devices, but were too afraid to ask. (Further dedication goes to Scott, who kindly took me under his wing and oversaw the project).
(The following post isn’t really technical, but feel free to skip over parts you don’t understand; there’s zounds more text that doesn’t require any fancy quantum mechanical stuff).

Randomness is important in life. For example, the optimal strategy in rock-paper-scissors is to pick each option with probability 1/3. This might seem easy, but it isn’t: humans are quite bad at imitating random sequences, or simply won’t do so even if they know it’s best for them (best for them in theory; but then, what else is there?). It’d be much better if we had an infinite sequence of random bits that we could use whenever we wanted to. How do we go about getting such a sequence?
Ask any physicist, and she’ll tell you, “Why, it’s easy! Use quantum mechanics!” And indeed, quantum mechanics seems to be the place in nature where randomness comes not from a lack of prior information on our part (e.g., a coin flip is random because we don’t know its precise position or the precise flipping force or the precise air pressure and currents), but is inherent in physical reality itself. For the most part, most reasonable “hidden variable” theories – theories in which the randomness observed in experiments stems from quantities that *are* deterministic, but that we just don’t know – have been ruled out.
So, the easiest way to get random bits using quantum mechanics is to take a quantum state that is in superposition – say a photon with both horizontal polarization (represented as the qubit |0 \rangle ) and vertical polarization (represented as the qubit |1 \rangle ) – and just measure which one it is. Thus, the overall state of the photon is the superposition \frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle), and measuring its polarization yields |0 \rangle with probability 0.5, and |1 \rangle with probability 0.5.
So far so good. In an ideal world, we would be done here. We’d build a nice small black box with a large red button on top. Every time we press it, the box would create a superposed photon, measure it, and output the result. Infinite bits at the press of a button.
But alas, we do not live in an ideal world, and most of us, while avid rock-paper-scissors players, do not have the necessary equipment or training to build quantum mechanical boxes. Of course, in this capitalistic global entrepreneurship enterprise world we live in, this isn’t much of a problem – we can always count on the market adjusting to the needs of the people, and companies selling quantum mechanical random number generators will sprout up like mushrooms after the rain. Hey, they already have.
The problem with these companies is that you can never quite be sure that they are honest. How do you know that they aren’t selling you just a pseudorandom number generator, which uses a deterministic algorithm and a small random seed? There are statistical tests you can run on the output, but we don’t yet know whether it’s possible to distinguish between a pseudorandom output and a truly random one in reasonable time. If they are cheating you in this way, then your entire “random” sequence is vulnerable.
Further, even if the company you bought your box from did give you truly random bits, how do you know that they were created on the spot? Perhaps the company generated a gigantic random string back in their HQ, and just put it in your box. Every time you press the big red button, you get a new bit out of that string. The output is indeed random, but it wouldn’t be secure – the company could sell information about your random bits to the highest bidder, and you would face a gratuitous defeat in the rock-paper-scissors nationals.
These two problems apply to any true random number generators, but if you are using quantum ones there is yet another issue: even if the company did generate the bits on the fly, they could still get information on your bits by quantum entanglement. In a jiffy, instead of creating a single photon in the state \frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle), they’d create two photons in the entangled state \frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle): a superposition of “both photons have horizontal polarization” and “both photons have vertical polarization”. One photon they’d put in your box, the other they’d keep for themselves back in their HQ. The rules of quantum mechanics then say that when you press the big red button and the box measures the state – say you got a |1 \rangle – then when the company measures their photon, they also get a |1 \rangle . They always get what you got, and again, your information is not secure. This does not involve any communication between the box and the company – it would work even if you put it in an underground vault in Andromeda – it’s just a peculiar property of quantum states.
So right now things are looking pretty grim: there’s this wonderful startup idea – to produce random bits using quantum mechanical phenomena – but buyers don’t have a guarantee that the companies aren’t cheating them. And we all know where that leads.
But not all is lost! For if we are allowed to tweak the way our quantum mechanical boxes operate, we can build a statistical test that has nothing to do with ordinary randomness / pseudorandomness tests, and that test guarantees honesty. Boxes which pass the test must produce random bits; they produce these bits on the fly; and they can only have a tiny amount of entanglement with the company HQ, giving the company almost no information about your random sequence. A magical cure for all our maladies!
To see how it works, we’ll look at the famous CHSH game. In this game, Alice and Bob are two players who play cooperatively in the following very realistic scenario: they are both put into separate rooms, and each one is presented with a bit: Alice is shown X, and Bob is shown Y. Based on that bit, they have to output a bit themselves: Alice outputs A, and Bob outputs B. They win the game if

A \oplus B = X \wedge Y.

They are allowed to decide on a strategy beforehand, but once they are put into the rooms, they cannot communicate.
Suppose that X and Y are chosen uniformly at random, that is, they each have a 0.5 probability of being 0, and 0.5 probability of being 1. What is the optimal strategy for Alice and Bob – the one which gives them the highest chances of winning?
Here is one strategy: Alice and Bob ignore the inputs X and Y completely, and always both output 0. So A \oplus B = 0, and this is equal to X \wedge Y for 75% of the cases: whenever either X or Y is 0. The success rate for this strategy is 0.75.
It can be shown that there is no better deterministic strategy (you can do the truth table, if you want). But then there is also no better probabilistic strategy, since it would just be a convex combination of deterministic ones. So the best Alice and Bob can do, if they are not allowed to communicate, is to win 0.75 of the time.
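
If you’d rather not do the truth table by hand, here is a quick brute-force check (a standalone Python snippet; a deterministic strategy is just a pair of output bits for Alice and a pair for Bob):

    from itertools import product

    best = 0.0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):      # A(X) = aX, B(Y) = bY
        wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                   for x, y in product((0, 1), repeat=2))
        best = max(best, wins / 4)
    print(best)                                            # prints 0.75
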
Well, classically, that is true, but it is not true if they are allowed to share quantum resources. Specifically, if they each hold one photon of the entangled state \frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle), then once they are shown the bits, they can each measure their photon in one of a set of agreed-upon non-trivial bases, and output whatever their measurements give. Their outputs will be completely random (individually), but correlated with each other. If they choose the right measurements, they can boost their win rate to \cos^2(\pi/8) \approx 0.85 (!). This is the best known strategy that does not involve communication between the players. (For a billion details, see here)
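
A quick back-of-the-envelope check of that number: for the state \frac{1}{\sqrt{2}}(|00 \rangle +|11 \rangle), if Alice measures in a basis rotated by angle a and Bob by angle b, their outcomes agree with probability \cos^2(a-b); plugging in the standard measurement angles gives the quantum win rate.

    import numpy as np

    alice = {0: 0.0, 1: np.pi / 4}           # measurement angle given input X
    bob   = {0: np.pi / 8, 1: -np.pi / 8}    # measurement angle given input Y

    def win_prob(x, y):
        p_equal = np.cos(alice[x] - bob[y]) ** 2
        return p_equal if (x & y) == 0 else 1 - p_equal   # on (1,1) they want A != B

    print(np.mean([win_prob(x, y) for x in (0, 1) for y in (0, 1)]))   # ~0.8536
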
But wait, there’s more! The CHSH game is robust, in the sense that if Alice and Bob have a success rate very close to 0.85, then with high probability they are using a strategy that is not very different from the known optimal one. This means that the bits they output are very close to random (what does “very different strategy” mean? There is a formal definition which we won’t go into here, but as an example, the strategy “0.01 of the time output 0 and 0.99 of the time use the optimal strategy” is very close to optimal; so is “measure in a slightly different way than the optimal one, so the correlations are changed just a bit”).
We now have a test for our random number generator box. Instead of having one big red button which measures a \frac{1}{\sqrt{2}}(|0 \rangle +|1 \rangle) photon, we’ll ask the manufacturer to give us two boxes. These boxes will act as Alice and Bob: they will accept as input two random bits, and output bits of their own. We can play the CHSH game many, many times, and measure their success rate: if it’s very close to 0.85, then they cannot have coordinated their outputs in advance; they cannot have used a deterministic algorithm; and the company cannot have a lot of information about the output (ok, we haven’t shown this here, but in a jiffy: in order to have information about it, the company needs to be entangled with the two boxes; but almost all the entanglement is “used up” in order to boost the win rate from 0.75 to 0.85, so “nothing is left” for the company to know).
This is what is commonly called (well, common among the relevant community) “certified randomness” – the CHSH game can be used to test the randomness of our devices (in fact, there is a broad class of “XOR games” that can be used – games which are similar to CHSH, but may involve different requirements or more players).
We would really like to say that we are done here, but the keen eyed among you must have already noticed the bug in the plan. We have a pair of boxes that, when given two random bits, output two correlated bits. We need random numbers just to test if a random number generator works. What’s worse, we put in two, and get back less than two. We are actually consuming bits in the process! Alas, the market for quantum random number generators is much more blooming than the one for quantum random number extinguishers.
But not all is lost! If we are allowed to tweak the inputs to the pair of boxes, we can create a test that uses fewer random bits than it puts out. The main concept is as follows: we still let the boxes play a lot of CHSH games, only now, instead of having totally random X and Y (requiring 2 random bits per game), we alter the input a bit: most of the time, we’ll have X = 0 and Y = 0. This is like a “dud” game, and if the boxes anticipate this, they can potentially output 0,0, as described before. However, for a very small percentage of randomly selected inputs, X and Y are selected at random, as usual; these are called “real” games. On these games, if the boxes are to perform with a high win rate, they have to be honest – they have to actually play the winning CHSH strategy. The point is that the real games are chosen at random, and the boxes have no idea which ones they are. If they play assuming the X = 0, Y = 0 dud games, they run the risk of falling on real games and winning with only 0.75 probability. The trick of these tests, then, is to find out how to distribute the real games among the duds, how strict to be when deciding if the boxes pass the tests, etc. This type of test is called a randomness expansion protocol, in that it requires a certain number of bits (for choosing which games are duds and which are real, and also for the inputs of the real games), but outputs more than was used. Both polynomial and exponential expansions have been developed, and more recently, even infinite expansion! The latter is generally done by back-feeding the output as input for the boxes, but the details are a bit complicated, especially the whole “proving that it works against entangled opponents” thing. It means that you can start with a finite string of randomness (say, one you obtained from a trusted source), and expand it into one as long as you wish! There will be errors, but they grow exponentially smaller the more initial bits you use.
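
To make the “dud vs. real games” idea concrete, here is a toy sketch of how the inputs and the test might be scheduled. The fraction of real games and the acceptance threshold are made-up illustrative numbers, not the ones from any actual protocol:

    import random

    def input_schedule(n_games, p_real=0.01, rng=random):
        """Mostly dud inputs (0, 0); a sparse random subset of games gets
        uniformly random inputs and is used for testing."""
        schedule = []
        for _ in range(n_games):
            if rng.random() < p_real:
                schedule.append((rng.randint(0, 1), rng.randint(0, 1), True))   # real game
            else:
                schedule.append((0, 0, False))                                  # dud game
        return schedule

    def passes_test(wins, schedule, threshold=0.82):
        """Accept the boxes only if they win the real games often enough."""
        real_wins = [w for w, (_, _, is_real) in zip(wins, schedule) if is_real]
        return sum(real_wins) / max(len(real_wins), 1) >= threshold
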
Personally, I think this whole thing is really cool. If you trust your quantum box, then generating an infinite random string is as easy as |1 \rangle |2 \rangle |3 \rangle . But even if you don’t, you can still obtain an infinite random string. It requires a bit more pyrotechnics, and it requires you to somehow obtain a small number of random bits elsewhere, but it’s possible. And actually, despite the fact that we called our boxes quantum, they don’t necessarily have to be. All they have to do is win the CHSH game with probability close to 0.85. Quantum mechanics isn’t the final say in physics; maybe we’ll find better theories which supersede it. Any mechanism which wins better than the classical 0.75 can be used in this type of protocol.
And that’s pretty much the gist of “a recent protocol on infinite randomness, using quantum-mechanical experiments”: a method to use untrusted quantum boxes in order to take a small number of random bits, and turn it into an unbounded number of (nearly) random bits. That’s it.
Where does your humble servant come into play in this whole ordeal? A very tiny part, and that’s the “jump starting” in the clause “jump starting a recent protocol on infinite randomness”. An initial random string is needed in order to run the protocol, and the question is: how large is that string, as a function of the errors you are willing to tolerate? (there are plenty of places where errors accumulate, but I skipped the discussion of errors and robustness because it really only complicates matters, and it’s complicated enough as it is).
So that’s what I set out to find. I basically gave bounds for various expressions described in the protocols which relate to the number of random bits output. The answer? Well, it depends on the error, of course. But, let’s say, the bound I got is on the order of O(1,000,000) for reasonable error. For the rest, you’ll have to read my paper, I guess.

Microwaves and birthdays

September 10, 2014

Without doubt, one of the largest differences between the USA and Israel is the microwave ovens. Whereas almost all the microwaves I encountered in Israel had either analog rotary timers or preset “30 sec or 1 min” buttons, here in the States there is an overwhelming prevalence (100% of an astounding three cases) of numpad microwaves.

This is not an advertisement


Seemingly, all you have to do is put in the number of minutes / seconds you want to heat, and fin, you are done.
But wait; is it minutes, or seconds? What happens if I put in a three or four digit number? Do I have to be an arithmetic expert to operate my microwave?
In what can only be described as the “non-continuity of microwave space-time”, the input parsing is simple: if you put in xx:yy, it will run for xx minutes and yy seconds. Simple and intuitive. The thing is, nothing constrains yy to be smaller than 60. 1:99 is as valid an input as any, and will indeed run for 1 minute and 99 seconds (= 159 seconds total). 2:00 is also a valid input, running for 2 minutes, 0 seconds (= 120 seconds total).
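
In code, the rule amounts to something like the following sketch (my reading of the behavior, not any microwave’s actual firmware):

    def cook_seconds(keys):
        """The last two punched digits are 'seconds'; everything before is 'minutes'."""
        minutes, seconds = divmod(int(keys), 100)
        return 60 * minutes + seconds

    print(cook_seconds("199"))   # 1:99 -> 159 seconds
    print(cook_seconds("200"))   # 2:00 -> 120 seconds
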
This is the natural way to handle user input, and I totally approve of it, if only because it spares the programmers and designers from handling annoying details. There is a nice time discontinuity when you plot the actual cooking time against the numbers you punch in, if you arrange them in lexicographical order:

microwave_60

Starting from 60 seconds of cook time, the user has two choices of how she wants the input to be shaped, rather than just the single feeble option available with a rotary timer. This is in agreement with the USA’s enhanced economic and political freedom; it is no wonder that these microwaves are more prevalent here (as for me, you can find me standing dumbstruck in front of the machines, trying to decide which number I should punch in).

As the title of the above plot suggests, it is interesting to see how different minute lengths affect our options. The shorter the minute, the more overlap there will be, and the more options you will have, until finally, for SECONDS_PER_MINUTE = 1, we have 100(!) different options of input. Here is the example for a 30 second minute:

microwave_30

On the other hand, given that we work in base 10 and that our two digit numbers only go up to 99, if we had a longer minute (and kept the input method the same, allowing only two “seconds” digits), we would have gaps:

microwave_200

Not every desired time can be reached; we will likely not be seeing any 200 second minutes in the Imperial system any time soon.

This whole ordeal reminded me of a wonderful fact I stumbled upon that has to do with discretizing age. Consider the standard high school algebra question: Albert is X years old, and his son Jorgenhausfer is Y years old. When will Albert be twice as old as his son?
The question is easy, but one can also ask for how long Albert is twice as old as his son. It turns out that Albert will be twice as old as Jorgenhausfer for exactly one year, but that time period may be split into two sections, depending on their birthdays! I can do no better justice to the issue than the discussion given here:
http://www.math.brown.edu/~banchoff/twiceasold/
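
If you’d rather just see it happen, here is a small simulation with made-up birthdays (the dates are arbitrary and purely illustrative; only their relative position within the year matters):

    from datetime import date, timedelta

    def age(birth, today):
        """Age in whole years, i.e. number of birthdays passed."""
        before_birthday = (today.month, today.day) < (birth.month, birth.day)
        return today.year - birth.year - int(before_birthday)

    albert = date(1970, 3, 1)     # hypothetical birthdays
    son    = date(2000, 9, 1)

    days = [date(2028, 1, 1) + timedelta(n) for n in range(8 * 365)]
    twice = [d for d in days if age(albert, d) == 2 * age(son, d)]
    print(len(twice))             # 365: one year's worth of days in total...
    print(twice[0], twice[-1])    # ...split here into two separate stretches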

Do not track

August 23, 2014

“Home is where the wifi connects automatically”.
The question remains: how does it know to connect automatically? Well, of course, the computer saves a list of network names (or other identifiers) and their passwords, and tries to connect when it sees one it recognizes. I bet the passwords are saved in plaintext, too, but you should know better than to use the same one for your wifi and for your bank.

Anyway, Ubuntu is no exception. This is a laptop, and I don’t have a GPS recording my coordinates at all times, so it’s still interesting to see that if you know enough about me and obtain access to my computer, you can track my position with fair accuracy. Here is a list of networks I’ve connected to, by default sorted by date.

wifi_networks

Take a look at the connections. Can you reconstruct a map of where I’ve been as a function of time? Try to do so now.

.
.

All done?

.
.

I think the average user can, after a bit of thinking and googling, make at least some sense out of the somewhat-cryptic names. Certainly he will arrive at some route of the form: Technion -> Boston/MIT, with a stop in Europe. Here, let’s do it together. Start from the past:

ISRAEL-RAILWAYS – well, that’s easy. I sometimes take the train to the Technion, so no biggie there. Just two months ago I still had classes!
TechPublic, eewifi – TechPublic is, of course, the Technion’s campus-wide public wifi. eewifi is the same, for the department of electrical engineering. So a month ago I was certainly on campus.
caXXXyy – these are a bit tricky, and require prior knowledge. “Canada” is the name of the dorms I live in. Specifically, ca94305 is my own apartment wifi. So after having connected to the public Technion wifi for the last time, I was at my dorms. But I haven’t been there for the past 29 days. It must be summer vacation.
Bezeq-n_6202 – my house wifi. Ok, you had no chance of guessing that :). But wait! Last time used, 27 days ago? Oh my, where have you been all this time?
Swisscom – that’s a Swiss communication company. How did that happen? Indeed, on my way to Boston I had a connection in Munich. Not quite Swiss, but close enough.
Loganwifi – Logan is Boston’s airport. I’ve been here for 26 days!
percher – stayed at a friend’s for a while. That might take a bit of digging. Seeing as it was used 22 days ago, and I arrived 26 days ago, one can conclude that I’ve stayed there for 4 days.
HMS public – unfortunately, not Her Majesty’s Ship, but Harvard Medical School. Went there for a visit.
MIT – ‘nough said. I spend most of my time here.
StataCenter – if it weren’t enough that you know at which institution I spend most of my time, you now also know the exact building. Oh well; as seen in a previous post, my wikipedia editing IP gives about that much information anyway.

How do you know that I spend most of my time connected to the last two networks? Since it’s unlikely that I’ve been without internet access all this time (what with the blog posts and wikipedia edits and all), the fact that there are no networks in between MIT and StataCenter and, say, the Percher / HMS ones, indicates that I’ve repeatedly connected to them, thus refreshing their status and putting them at the top of the recently-used list.

By the way, the Stata Center is a really cool place.

For me this isn’t confidential information – here, I’m posting it online. But I suppose that for those of us who wish to remain anonymous, for whatever reasons, wifi connection history is just one more problem to worry about. And seeing that my own connection history had entries from over a year ago, it stays with you for quite a while.

(footnote: I have intentionally omitted a network from the list, of the place I’m currently staying; I do not know if the owner would appreciate having the network name put here. This does not change the above analysis)

Credo

August 14, 2014

I believe there is life outside of Earth. There are many planets in the observable universe, and thirteen billion years is a long time. I doubt, however, that DNA has much to do with it.
I believe P ≠ NP. How else could it be so hard to prove? Relatedly, even non-standard computing architectures, based on relativity, condensed matter, and quantum physics, have not yet breached it.
I believe in a Singularity. Eventually we will find out enough about individual consciousness to be able to bend it to our will; our bodies, too, will be under our control. What field in life hasn’t succumbed to engineering?
I believe it is possible to understand the world using physics. It seems unlikely (and a cause for despair) to me that the world does not operate according to a set of rules that can be formally described; and while claiming that humans will eventually find out these rules is a strong statement, I look at what we have done so far in so little time, and cannot but be optimistic.

I believe all of these things, and many more. So Gauss-dammit Internet, if you tell me “This policeman pulled over a speeding couple – you won’t believe what happens next!” one more time, I swear I will personally go over there, and pull the plug on you myself.

