*Facebook should change their “It’s complicated” status to “It’s complex”. That way, real people with imaginary girlfriends could spread the news to all their friends!*

– Ancient proverb

We were learning about conformal maps in complex function theory. While we did plenty of “circles are sent to circles” and “connected components are sent to connected components”, it almost goes without saying that we barely got to see any **actual** map in action.

I remembered seeing some very nice geometric transforms in the wonderful YouTube series “Dimensions” by Leys, Ghys and Alvarez (link here), and decided to write a small program of my own that, given a picture, applies any (reasonable) complex function to it, treating the function as a transformation on pictures.

What does this mean? We can think of a picture as an array of pixels, each with its own x and y values. Treating the center of the picture as (0, 0), each pixel has a different (x, y) coordinate. The pair (x, y) in ℝ² can be treated as the number z = x + iy in the complex plane. We can then take this complex number and put it into any complex function f; the result is some other complex number, w = f(z). We then interpret this new number as the coordinates of a new pixel; so in the new picture, the color at position f(z) will be the same as the color at position z in the original picture.
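
This pixel-to-complex-plane convention can be sketched in a few lines of Python (the function names and the `scale` factor here are my own illustration, not the original program's code):

```python
def pixel_to_complex(row, col, width, height, scale=2.0):
    """Map a pixel index to a point z = x + iy, with (0, 0) at the image center.

    `scale` is a hypothetical zoom factor: the image spans roughly
    [-scale, scale] along its shorter axis.
    """
    half = min(width, height) / 2
    x = (col - width / 2) / half * scale
    y = (height / 2 - row) / half * scale  # flip: pixel rows grow downward
    return complex(x, y)

def complex_to_pixel(z, width, height, scale=2.0):
    """Inverse of pixel_to_complex, rounding to the nearest pixel index."""
    half = min(width, height) / 2
    col = round(z.real / scale * half + width / 2)
    row = round(height / 2 - z.imag / scale * half)
    return row, col
```

The vertical flip is there because image rows grow downward while the imaginary axis grows upward.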

If f is a funky enough function, the results should be awesome, and this lets you understand a little better what all sorts of analytic functions do (analytic functions preserve angles between the two pictures, so however twisted things get, we’ll always have some sanity).

Here, look at what happened to our poor tiger (original image courtesy of Wikipedia commons):

Turns to this…

(confession: ok, this mapping isn’t a standard conformal map; read more to see what’s actually happening here).

I’ll now describe some of the problems and solutions I ran into; if you just want to see more pretty pictures, feel free to do just that.

The naive way to get a mapping is to do just what was described in the above explanation: take the original picture, and for each pixel z, plot it at f(z). Unfortunately this has several problems. These stem mainly from discretization: the coordinates of the pixels come in integer units, and two nearby pixels will always differ by at least 1 in one of their coordinates. Proper scaling can make this discretization as small as we wish, so effectively we can have any two nearby pixels differ by ε for any ε of our choice, but problems can still occur.
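
A minimal sketch of this naive forward mapping, under the same hypothetical coordinate conventions as above (plain Python with NumPy; the `scale` factor and the leave-it-black behavior are my assumptions, not the author's actual code):

```python
import numpy as np

def naive_map(src, f, scale=2.0):
    """Forward-map an image through a complex function f, the naive way.

    Every source pixel computes w = f(z) and plots its color there;
    target pixels that no source pixel happens to hit simply stay black.
    """
    height, width = src.shape[:2]
    half = min(height, width) / 2
    dst = np.zeros_like(src)
    for row in range(height):
        for col in range(width):
            z = complex((col - width / 2) / half * scale,
                        (height / 2 - row) / half * scale)
            try:
                w = f(z)
            except ZeroDivisionError:
                continue  # e.g. f(z) = 1/z at the origin
            c = round(w.real / scale * half + width / 2)
            r = round(height / 2 - w.imag / scale * half)
            if 0 <= r < height and 0 <= c < width:
                dst[r, c] = src[row, col]
    return dst
```

Calling this with, say, `lambda z: z**2` produces exactly the kind of hole-riddled pictures described below.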

The first problem is that even if your function is onto, meaning that it’s possible to designate the color of *every* pixel in the new picture, the result may still have gaping holes or “isolated pixels”. The severity of this problem depends on the function itself (I guess it generally depends on whether the absolute value of its derivative is close to 1 or not), and for some choices of f, your end product might only be partially filled.

In this example, generated naively, the top of the picture is falling to pieces (also, it’s evident that the edges of the picture are getting left out, though this is because our source picture is finite, not due to discretization).

A partial solution is to “fill in the blanks” by averaging over neighboring pixels: each blank pixel takes the average color of its nearest neighbors. Assuming that the picture is not “too broken up”, this can work just fine – if holes are completely surrounded, you won’t really notice the difference (and it almost makes sense theoretically to do this, in terms of the intermediate value theorem). Indeed, running this fix helps the picture quite a lot, although there are still untreated areas which cannot be helped:
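
The averaging fix might look like this, assuming the mapping step also recorded a boolean `hit` mask of which target pixels received a color (again a sketch of my own, not the original code):

```python
import numpy as np

def fill_holes(img, hit):
    """Fill unassigned pixels with the average of their assigned 4-neighbors.

    `hit` marks pixels that received a color during the mapping; a hole is
    filled only if at least one of its neighbors was hit.
    """
    out = img.copy()
    h, w = img.shape[:2]
    for r in range(h):
        for c in range(w):
            if hit[r, c]:
                continue
            neigh = [img[rr, cc]
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < h and 0 <= cc < w and hit[rr, cc]]
            if neigh:
                out[r, c] = sum(neigh) / len(neigh)
    return out
```

As noted, this only helps with holes that are surrounded by assigned pixels; larger blank regions stay blank (one could iterate the fill, but the averaged colors get less meaningful each pass).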

The second problem is that some transformations can span enormous scales. For example, with the transformation f(z) = 1/z, the interior of the unit disc exchanges places with the exterior. This means that when going over the pixels, ones very close to the origin are going to get sent *way off* to the edge of the new image, while ones far away are all going to be sent near the origin.

The result is that while the new image is very, very large, most of the “interesting” things (aka – most of the actual original picture) are contained in a very, very small region. Look at this example of 1/z:

You can adjust for this phenomenon if you know where to cut off your picture, ignoring the endpoints that are *way off*. But how can you know in advance the size of your image, or which points are good and which are not? This generally requires analyzing your function beforehand, which we do not want to do.

As a (very) partial fix, I noticed that most of these far-flung image points are isolated – the discretization means that two remote points will probably not land directly next to each other. I wrote some small cleanup code that finds these points and eliminates them, rescaling the picture appropriately. Of course, there will always be isolated points; in fact, due to the discrete nature, *every* point in the new picture is isolated, so in effect we have to specify how close two points have to be in order to count as neighbors; this is done according to the resolution of the target image.
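
The dropping step could be sketched like this (my own illustration; the neighbor threshold is the `radius` parameter, measured in target-image pixels so that it scales with resolution, and the subsequent rescaling/cropping to the surviving points is left out):

```python
import numpy as np

def drop_isolated(hit, radius=1):
    """Clear hit pixels that have no other hit pixel within `radius`
    (Chebyshev distance), i.e. the lone far-flung points."""
    h, w = hit.shape
    keep = hit.copy()
    for r in range(h):
        for c in range(w):
            if not hit[r, c]:
                continue
            window = hit[max(0, r - radius):r + radius + 1,
                         max(0, c - radius):c + radius + 1]
            if window.sum() <= 1:  # only the pixel itself is in the window
                keep[r, c] = False
    return keep
```

After dropping, one would recompute the bounding box of the surviving points and rescale – and then iterate, as described next.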

In any case, the code I wrote works iteratively. Here are the first three iterations:

You can see that as we iterate, the pictures get better (and focus more on what’s important, the actual head of the tiger, instead of empty space). However, I would still consider this to be rather inadequate (although we do get a “particle-erosion” effect for free, which is cool).

A third problem with this method is that f doesn’t have to be a one-to-one function, meaning that there may be two possible colors for a given pixel in the generated picture. How do we decide which one to take? Do we combine? Do we override? This is a general problem, not due to discretization, and I just ignored it here (the code overrides new pixels). Here is an example:

For our last image with this method, here is one more:

The naive method is riddled with problems and artifacts. However, there is a way that generally treats discretization better, and while it also has some black empty space and some pixelated areas, it doesn’t tend to have small annoying holes (for nice functions, anyway).

The solution is this: instead of going over all the pixels z in the original picture and computing the coordinates f(z) of the pixel in the new picture, we go over all the pixels w in the new picture, and calculate the inverse f⁻¹(w). If we do not exceed the size of the original picture, we basically ensure that we will not have holes in the result, because we actually go over each pixel and calculate its color, instead of having it “get picked by accident”, as in the original method.
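
The inverse method is the same loop turned around (again a hedged sketch under the coordinate conventions above, not the author's actual code):

```python
import numpy as np

def inverse_map(src, f_inv, scale=2.0):
    """Map an image by iterating over *target* pixels.

    Each target pixel w asks f_inv(w) where its color lives in the source;
    targets whose preimage falls outside the source stay black, but no
    holes can appear inside the covered region.
    """
    height, width = src.shape[:2]
    half = min(height, width) / 2
    dst = np.zeros_like(src)
    for row in range(height):
        for col in range(width):
            w = complex((col - width / 2) / half * scale,
                        (height / 2 - row) / half * scale)
            try:
                z = f_inv(w)
            except ZeroDivisionError:
                continue  # e.g. the inverse of 1/z, at the origin
            c = round(z.real / scale * half + width / 2)
            r = round(height / 2 - z.imag / scale * half)
            if 0 <= r < height and 0 <= c < width:
                dst[row, col] = src[r, c]
    return dst
```

Note the asymmetry with the naive version: here every `dst[row, col]` is written exactly once, which is precisely why no holes appear.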

This method eliminates all the sporadic and isolated holes and points we had using the naive way. Here is 1/z using the inverse method:

Much better!

Of course, there are still a few problems:

- The edges are pixelated – this is because they all draw from basically the same region – the immediate center of the tiger’s face – and a small region that is smeared over a large area is pixelated.
- There is still a gaping hole in the center. In order to fill it up, we would need an even larger original tiger image – the inverse of these points is out of bounds. In fact, for this specific function, 1/z, we **always** have some finite dark area at the center, since we do not work with infinite pictures. There are ways to overcome this, but not with simple finite image transformations.

One drawback of this method is that we need to directly specify the inverse of f. This may be simple in the case of simple functions like z² or 1/z, but in general it may not always be easy.

Further, there may be artifacts arising from this method. For example, suppose we want to see the map f(z) = √z. In order to use the inverse method, we would have to calculate the inverse, w ↦ w², for all the points of the new image.

But the equation w² = z has *two* solutions for w: both √z and −√z! When using the inverse method while giving w ↦ w² as the inverse function, we get a different image from the direct method:

And indeed, notice – the tiger has been replicated! This can never happen with the direct method, which by definition maps each point only once.
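
A tiny sanity check of why the replication happens, with a squaring “inverse” as the example (plain Python; the sample point is arbitrary):

```python
# Two distinct target pixels, w and -w, whose computed preimages coincide,
# so both draw their color from the same source pixel:
w = 0.6 + 0.8j
assert w != -w
assert w * w == (-w) * (-w)  # identical preimage under the squaring inverse
```

Every source pixel therefore gets painted at two target locations, mirror images through the origin – hence two tigers.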

Now you see how I cheated you a bit with the first picture in this post – it cannot be the result of a conformal map, since there are clearly multiple instances of the tiger! In fact, it was created by the same trick: running the inverse method with a many-to-one “inverse” function.

That’s it for now; happy mapping!

Pretty.

Maybe treat the picture as infinitely tiled in both directions, perhaps multiplying pixel color saturation by min(1, 1/(distance from (0,0)/length of picture diagonal))?

I’d expect finding the reverse pixel for some pixel would be possible with some caching + Newton Raphson, but I may be talking out of my behind here.

Treating the picture as infinitely tiled helps remove holes, but since that tiling is arbitrary, there’s not much interpretation of it. What is your saturation effect supposed to do?

Maybe another solution would be to apply an analog to the arctan function on reals, to treat the image as if it were the entire plane (this isn’t actually possible in a precise sense, but we can approximate it well enough, I think).

When talking about inverting, I meant analytically, of course 🙂 Numerics is a whole other deal… https://en.wikipedia.org/wiki/Newton%27s_method#Complex_functions