One of the strangest things in fractals, at least to me, is the idea of space-filling curves. A space-filling curve is a curve constructed using a Koch-like replacement method, but instead of being self-avoiding, it eventually contacts itself at every point.

What's so strange about these things is that they start out as a non-self-contacting curve. Through further steps in the construction process, they get closer and closer to self-contacting, without touching. But in the limit, when the construction process is complete, you have a filled square.

Why is that so odd? Because you've basically taken a one-dimensional thing - a line, with no width at all - and by bending it enough times, you've wound up with a two-dimensional figure. This isn't just odd to me - this was considered a crisis by many mathematicians. It seems to break some of the fundamental assumptions of geometry: how did we get width from something with no width? It's nonsensical!

The original Peano curve is slightly less pathological than that; it's at least trivially self-overlapping. The basic Peano curve starts with a line segment as an initiator, then uses the curve to the right as a generator. What you get by applying it in successive iterations is what's shown below - the gaps inside the squares getting smaller with each iteration, until in the limit, they disappear entirely. Using appropriate formalisms, you can easily show that every point in the square is part of the Peano curve.

This is bad - in the sense of the seeming impossibility of a line - something with no width, no area - becoming something with an area simply by bending it enough times. But it's also not as bad as it could be. If you look at a Peano curve topologically, it's clearly something peculiar. Subsequent steps in the construction are not homeomorphic: each step in the construction adds holes. So while it has the bizarreness of a line filling space, at least it preserves the fact that a line segment and a filled square are two different things.

But like most things, it's possible to go from bad to worse. Following the idea of the Peano curve, you can create something with the same space-filling property, and construct it so that it doesn't even have the redeeming feature of breaking homeomorphisms between iterations. Hilbert, among others, took the original idea of the Peano curve, and found better formulations that are non-self-contacting in any finite approximation. What that means is that no matter how many iterations you do, you'll never see the curve contact itself; but in the limit, it contacts itself everywhere. That's the really pathological curve that gave people nightmares at first; and you can still find geometers who complain that it's insane, ill-defined, broken, or non-existent in some odd way, because they can't accept it.

The Hilbert curve is a more complicated construction. The easiest definition I've seen is grammatical, using pairs of elements. The members of each pair are geometrically identical, but in the replacement rules, they're replaced differently.

- Right_1 and Right_2: a clockwise right angle with legs of length L.
- Left_1 and Left_2: a counterclockwise right angle with legs of length L.
- Straight_1 and Straight_2: a straight line segment of length 2L.

The replacement rules are:

- Left_1 → Straight_2, Left_1, Left_2, Right_1
- Left_2 → Right_2, Left_1, Left_2, Straight_2
- Right_1 → Left_1, Right_2, Right_1, Straight_1
- Right_2 → Straight_1, Right_2, Right_1, Left_2
- Straight_1 → Right_2, Left_1, Left_2, Right_1
- Straight_2 → Left_1, Right_2, Right_1, Left_2

Below is a series of stages of the Hilbert curve, using a square (Left_1, Left_2, Left_1, Left_2) as an initiator, and the above rules as the generator. (Note that the distortions in the far right stage are the result of a minor glitch in my turtle graphics package, not a real property of the curve.)
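For concreteness, the grammar above can be sketched as a symbolic rewriting system in Python (the short symbol names L1, R1, S1, etc. are my own shorthand for Left_1, Right_1, Straight_1; actually rendering the angles with turtle graphics is omitted):

```python
# The six symbols of the Hilbert grammar and their replacement rules.
# L1/L2: left angles, R1/R2: right angles, S1/S2: straight segments.
RULES = {
    "L1": ["S2", "L1", "L2", "R1"],
    "L2": ["R2", "L1", "L2", "S2"],
    "R1": ["L1", "R2", "R1", "S1"],
    "R2": ["S1", "R2", "R1", "L2"],
    "S1": ["R2", "L1", "L2", "R1"],
    "S2": ["L1", "R2", "R1", "L2"],
}

def expand(symbols, steps):
    """Apply every replacement rule in parallel, `steps` times."""
    for _ in range(steps):
        symbols = [r for sym in symbols for r in RULES[sym]]
    return symbols

initiator = ["L1", "L2", "L1", "L2"]   # the square initiator
print(len(expand(initiator, 3)))       # each step quadruples the count: 256
```

Each symbol expands into four, so the n-th stage has 4^(n+1) symbols; the drawing shrinks the leg length L at each stage so the curve stays inside the same square.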

As I said - the idea of these curves caused quite an uproar, with famous mathematicians talking about them as tearing down the entire structure of mathematics. There were some efforts, along the lines of the infamous Principia, to somehow ban these pathological curves from the realm of geometry. They're frightening in their way - they demonstrate that some of the things we believe about how math *should* work are just wrong.

For example, look at topology. As a very brief reminder, topology studies the structure of sets by looking at relations that define what points are *close to* or *in the neighborhood of* other points. Any two sets that have the same closeness relationships between their points are called *homeomorphic*, which means, roughly, topologically equivalent. Any transformation that doesn't change those relationships is topologically a no-op.

A line segment and a filled square are *very* different things topologically. In a line segment, the neighborhood of a point is the set of points on either side of it - the points within what's called a 1-D open sphere around the point. In a filled square, the neighborhood of a point is the collection of points in a 2-D open sphere around the point - points to its left and right, but also points above, below, and at all sorts of angles from the point. The neighborhood relationships of the line segment and the square are very different.

The Hilbert curve, if you're not very careful about your definitions, can be taken as an argument for the idea that a line segment is homeomorphic (that is, topologically equivalent) to a filled square. After all, any two steps of a finite construction of the Hilbert curve are homeomorphic to each other - they're *all* homeomorphic to a line segment. But if you take it to the limit - the curve after an infinite number of steps - it becomes a filled square. So somehow, the repetition of steps that preserve homeomorphism *fails* to preserve homeomorphism.

The BayeSys software package uses Hilbert curves for a cool purpose: since the software needs to sample a multi-dimensional space in a nice way, it uses a Hilbert curve to reduce the higher-dimensional space to a 1-D line.

http://www.inference.phy.cam.ac.uk/bayesys/

The issues you have raised gave rise to many advances in mathematics.

You might enjoy the first chapter of Morgan's book "Introduction to Geometric Measure Theory". He gives an example of a sequence of smooth disks, each with the same boundary, that "in the limit" contains every point of R^3 and yet has a bounded area!

In short, many "nice" properties do NOT "pass to the limit" even when the objects in the limit are, say, compact. THAT is why we need all of those seemingly needless "limit theorems".

And here is some more food for thought: there are honest-to-goodness topological circles that are so wildly embedded that, if they intersect any smooth disk at all, they intersect it in a whole Cantor set's worth of points!

THAT is why many of us in topology work mostly in the "smooth" or "piecewise linear" categories.

Ollie:

You've nailed it on the head; I wish I'd thought to mention the connection to things like limit theorems.

I see it as being sort of related to Gödel's theorem in some strange way. The way that I see the 20th century in math is as a sort of collapse of the idea of math as the clean, sensible realm of perfect, beautiful formalisms. Before the 20th century, the real world might be complex and messy, but underlying it, there was an idea that it was all an expression of a clean and perfect mathematics; that the messiness was a result not of any intrinsic mess in mathematics, but of the imperfection of the physical world. But in the 20th century - through things like these pathological curves in geometry/topology, incompleteness in logic, uncomputability in computation, and so many other places - we learned that the messiness was in fact an intrinsic part not just of the real world, but of the entire edifice of mathematics.

It's really amazing. And what seems even stranger to me is how the *imperfection* of mathematics has made it even *more* fun to many of us.

These space-filling curves are also useful when you're introducing the Jordan curve theorem to students. That's the one that says that a simple closed curve divides the plane into two components, an inside and an outside, with the curve itself being the boundary between them (yes, this can be stated more precisely). The content of the theorem seems so trivial that it hardly seems worth the trouble to prove it. The space-filling curves usually manage to convince students that the theorem is not trivial. If it were possible to have a space-filling curve that starts and ends at the same point, the Jordan curve theorem would be false.

The Hilbert curve starts and ends at the same point 🙂

ah, Peano curves: you can use them to DoS Windows servers, you know =] (actually, the same thing can be done with any giant complex mathematical calculation. Mandelbrot's pretty good for it.)

i'm surprised you got Turtle to handle it decently, Mark. nice one.

Lepht

mau-

Good point, but the Hilbert curve crosses itself, and therefore is not simple. What I should have said was that a space-filling curve cannot be both simple and closed, as otherwise it would contradict the Jordan Curve Theorem.

One thing you didn't mention is that the length L must be divided (by at least 2?) for each iteration to get the space filling effect. There is something strange (unfair?, not right?, unintuitive?) going on when taking the limit as L approaches 0 and saying that an infinitesimal still has a right angle bend in it.

The Hahn-Mazurkiewicz theorem: Every compact, connected set in R^n is a curve (that is, a continuous image of the unit interval).

So a chair, a ball, a bicycle and a 17-dimensional torus are all curves. "One-dimensional", so to speak.

That's why dimension theory is not quite trivial 🙂

You said 'choose carefully', Mark, but you never gave the easy solution! The Hilbert curve isn't one-to-one in the limit, and so the limit map isn't a homeomorphism.

Likewise, Cantor's mapping between the unit line and the unit square isn't continuous and so also fails (I preferred this example when I researched it at uni, as it was so simple and unexpected at the same time).

Take unit line I = 0.aubvcxdyez... then set S = (0.abcde..., 0.uvxyz...). Line --> square, voila!
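On finite decimal strings, the digit-splitting step of that mapping can be sketched as follows (a toy model; the real construction works on infinite expansions and has to handle the usual 0.0999... = 0.1 ambiguity):

```python
def line_to_square(s):
    """Send a point of the unit line to a point of the unit square by
    de-interleaving its decimal digits: "0.aubvcx" -> ("0.abc", "0.uvx")."""
    digits = s.split(".")[1]
    return "0." + digits[0::2], "0." + digits[1::2]

print(line_to_square("0.142536"))  # -> ('0.123', '0.456')
```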

What? No, it goes from the point (0,0) to the point (0,1) in covering the square [0,1] x [0,1].

Right. In fact, it crosses itself at every dyadic rational, for a countably infinite number of crossings. These repeated points are actually dense in the square.

Nice article, but can I make a suggestion about the pictures? Usually a JPEG-compressed image is the way to go, for things like photographs, but for line drawings they struggle to cope with the edges, eventually introducing ugly artifacts and bloating to a greater file size than they need to. A GIF or PNG encoded image looks nicer, and often actually takes up less space.

No, no, no. Such fractals were not a big deal just because they can make 1D artifacts cover 2D sets. By that time such happenings were well-established in multiple fields of mathematics. The simplest example is the geometric notion of a point having no extent, and yet points can build up a line, a plane, a cube, etc.

> Through further steps in the construction process, they get closer and closer to self-contacting, without touching. But in the limit, when the construction process is complete, you have a filled square.

Kronecker wept.

As p said in the first comment, the Hilbert curve can be *really* useful for various tasks. You can use it to get a unique linear traversal of a space (assuming the space can be cut into appropriate discrete chunks), as p said.

Another cool use is essentially the opposite - using the properties of the curve to map a line to a space in a particular way that has certain nice properties. It turns out that if you map a line to an n-cube with the Hilbert curve, contiguous segments of the line form contiguous n-spaces in the n-cube. Xkcd used this property to construct the Map of the Internet:

http://xkcd.com/195/

He just listed out the ipspace lexicographically, and then mapped it to a square using the Hilbert curve.
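That index-to-cell mapping can be sketched with the well-known iterative conversion from a distance d along the curve to (x, y) cell coordinates on a 2^order by 2^order grid (the function and variable names here are my own):

```python
def d2xy(order, d):
    """Convert index d along a Hilbert curve into (x, y) cell coordinates
    on a 2**order by 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate/flip the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

pts = [d2xy(2, d) for d in range(16)]
print(pts[:4])  # -> [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Consecutive indices always land in grid-adjacent cells, which is exactly the locality property the Map of the Internet exploits.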

Mark: Where does this thing you called "the original Peano curve" come from? It doesn't quite look like the one on Wikipedia for example, which incidentally is the same curve you find in [1]. (Though Peano did not give a geometric construction; rather, he gave a direct definition in terms of the base-three representation of the parameter.) Did Peano come up with more than one of these curves?

[1] G. Peano: Sur une courbe, qui remplit toute une aire plane. Math. Annalen 36 (1890), 157-160.

Harald:

I've been looking at a lot of different references, so I'm not sure exactly where I got the idea that that was the original Peano curve. Most likely, it's mathworld, which doesn't actually say that it's the original Peano curve, but which uses that as its canonical example of a Peano curve; I likely added the idea that that was the original through a mistake of my own as I was flipping between sources.

Ah, but the mistake seems to be pervasive. Mathworld does indeed mislabel this as the Peano curve, even though they reference the original paper by Peano. And on Mathforum there is mislabeling as well. Although what they call the Hilbert curve is in fact the one from Hilbert's 1891 paper, there is also a "Hilbert curve II" which is in fact Peano's 1890 original. And again, they have the same "Peano curve" whose origin I questioned.

I don't really see where there could have been any fuss. If "taking the limit" means "adding all the limit points", then surely it can be seen that there are some points, e.g. (π/4, π/4), to which the Peano or Hilbert curves can get arbitrarily close, but for which there is no finite number of steps after which they will have been reached (or am I wrong and they do reach irrational points in finite time? that would be quite strange...). Actually isn't that two steps: adding all points that can be reached after a finite number of steps (perhaps analogous to the rationals in a 1-D sense), and then making the result closed (perhaps analogous to the reals in a 1-D sense)?

So if "taking the limit" means "adding lots of stuff", then how could it be thought that taking the limit would produce something homeomorphic?

Incidentally, I don't see much difference between the Peano and Hilbert curves in that respect. Surely they both "limit out" at the unit square? Neither of which seems to be a problem, since we shouldn't confuse "has a limit of X" and "eventually reaches X". But then again I don't know much topology 🙂

The Hahn-Mazurkiewicz theorem: Every compact, connected set in R^n is a curve (that is, a continuous image of the unit interval).

Not quite. It should be: every compact, connected, locally connected set in $R^n$ is a curve. A curve will always be locally connected.

A compact, connected, locally connected metric space is called a Peano continuum, and all Peano continua are curves (continuous images of the unit interval).

This sounds a lot like an example I remember from high school math -- a line that has length 2 and length 1.

Consider an equilateral triangle: going between the bottom vertices is length 1 across the base, and length 2 through the top vertex. Now, fold the triangle so that the top vertex just touches the base. Still length 1 across the bottom, length 2 across the top. Continue folding, and eventually, in the limit, the top line is right on top of the base, but still twice as long. A nice "proof" that 2 = 1.
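The folding argument is easy to check numerically: each fold doubles the number of straight pieces and halves their length, so the total stays 2 while the height of the zigzag above the base collapses (a sketch, assuming an equilateral triangle with side 1):

```python
import math

def folded_path(n):
    """After n folds: (segment count, segment length, total length, height)."""
    segments = 2 ** (n + 1)                 # starts at 2: the two upper sides
    seg_len = 1.0 / 2 ** n                  # each upper side has length 1
    height = (math.sqrt(3) / 2) / 2 ** n    # apex height halves per fold
    return segments, seg_len, segments * seg_len, height

for n in range(5):
    print(folded_path(n))   # total length is always 2.0; height -> 0
```

In the limit the zigzag lies on the base (length 1) yet its length "is" 2; the catch, as with the space-filling curves, is that arc length doesn't pass to the limit.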

Or, a nice explanation that infinity is tricky and you can't just go throwing it around without thinking about countable and uncountable etc.

I'm surprised nobody has mentioned that this kind of thing is particularly useful for making things like maps of the internet. Look, there's Google about the middle of the left edge!

Canadian Mathematical Society Winter Meeting (Special Session on Set-theoretic Topology)

December 13-15, 1998

Queen's University and Royal Military College

Kingston, ON, Canada

On Generalizations of the Hahn-Mazurkiewicz Theorem

by

Murat Tuncali

Nipissing University, North Bay, Ontario

The Hahn-Mazurkiewicz theorem characterizes the Hausdorff continuous images of [0, 1] as the class of locally connected metric continua (Peano continua). A theorem of Alexandroff gives a characterization of the Hausdorff continuous images of the Cantor ternary set as the class of compact metric spaces. Following these theorems, it was natural to ask whether one could obtain generalizations of these results in the category of Hausdorff spaces.

Nikiel (1988) obtained a characterization of locally connected continuous images of compact ordered spaces. Bula and Turzanski (1986) gave a characterization of conitnuous [sic] images of compact ordered spaces. Since these characterizations were obtained, there has been a considerable amount of development in the study of continuous images of ordered continua (arcs) and compact ordered spaces. In this talk, we will give a survey of the results concerning the classes of spaces that are continuous images of ordered continua and compact ordered spaces, and related classes.

Date received: September 30, 1998

Johnny (#22): Actually, #15 already mentioned the exact same link as you do 🙂

> If "taking the limit" means "adding all the limit points",

> then surely it can be seen that there are some points, e.g.

> (π/4, π/4), to which the Peano or Hilbert curves can get

> arbitrarily close, but for which there is no finite number

> of steps after which they will have been reached.

As usual, "taking the limit" means "taking all the limit points and only the limit points". But (π/4, π/4) is in fact a limit point because (as you said) the mentioned curves can get arbitrarily close to it. It does not matter if they ever reach it; it is enough that they can get closer to it than any given distance. For a trivial example, the limit of the sequence 1/n as n->infinity is 0, although 1/n never reaches 0 within a finite number of steps.
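To make that concrete for the Hilbert curve: if we take the order-n stage to pass through the center of every cell of a 2^n by 2^n grid (my assumption about how the stages are drawn), then the distance from (π/4, π/4) to the nearest point of the stage is never zero, but is bounded by half a cell diagonal, which shrinks geometrically:

```python
import math

def dist_to_stage(order, px, py):
    """Distance from (px, py) to the nearest cell center of the 2**order grid;
    every cell center lies on the order-`order` stage of the curve."""
    n = 2 ** order
    i = min(n - 1, int(px * n))
    j = min(n - 1, int(py * n))
    cx, cy = (i + 0.5) / n, (j + 0.5) / n
    return math.hypot(px - cx, py - cy)

p = math.pi / 4
for order in range(1, 6):
    d = dist_to_stage(order, p, p)
    # never exactly zero (p is irrational), but within half a cell diagonal
    print(order, d, math.sqrt(2) / 2 ** (order + 1))
```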

Amo (#24):

D'oh! Oh well, true geeks already read xkcd anyway...

Crawling back under my stone now.

Ah yes. So it is easy to see intuitively that the curve both approaches all points and continues to have holes at every finite stage. Seems similar to the funny sets of the Hausdorff-Banach-Tarski paradox, which similarly mess with volume, albeit without appealing to the axiom of choice. (Though it may come into it when proving this rigorously.)

To the endless series of nitpicks I add this: since we are going to "look at topology", we should probably not, in what amounts to a definition, use "open sphere". Topologists and geometers have balls, albeit with different dimensions. 😛

Mathworld:

Dumb caveat. On second thought, I bet the difference in whether AC is "needed" or not lies in that HBT has finite number of pieces, while here the limit process 'messes up' things.

Torbjörn: This is constructive, so I'd gamble that AC isn't implicated. On the other hand, the HBT decomposition is into non-measurable sets, and you need AC to show those exist. (In R^n, at least.)

Douglas: Ah yes, good point on the constructive property. Thanks!

As mentioned already, the Hilbert curve (and variations of it) are great for linear traversal of a space. I use it to fill large 2D matrices where the cell values are calculated from the corresponding row and column representatives (axis objects). Moreover, the Hilbert traversal stays in the same general vicinity for quite a few turns. This clustering allows one to cache the axis objects. And the caching is reasonably good no matter the machine cache size.

I'm just wondering why the idea of a space-filling curve isn't self-contradictory (I've probably overlooked the crucial definition somewhere, and I'm so bad at maths that I've moved sideways into philosophy recently, but hopefully someone can point out the flaw in my reasoning). To my mind, a curve is a set of points where each point connects to other points in only two directions, whereas something that fills space has points connecting to it on all sides. I'm wondering if the space-filling curve has something like a rule telling you which way (which two ways) the curve goes, but I can't see how that could work with such a fractaline curve (?)

In a way, it does have that kind of rule. Let's look at a simpler example, a figure 8. Imagine moving your finger along it in the motion you would use to draw it. When you're at the outside, there's no problem seeing that there are only 2 ways to go. When you get to the center, there are 4 lines coming together, but the curve only goes in two ways: the way your finger came from and the way it's going. You keep going for a bit, and you get back to the center. Again, one way is forward and one way is back, and this time it's the other two lines.

With the figure 8, you get to the center twice and everywhere else only once. With a space filling curve, you get to every point infinitely many times. Each time, there's one way that's forward and one way that's back.

Hmm, I think I would have explained it like this: a wire is (essentially) 1-dimensional, but I can wrap it up to get a solid block, filling space. Now, I can do this in a finite amount of time because wires do, in fact, have thickness -- they really are 3-dimensional. A curve doesn't have thickness, so I can't fill space with it in a finite amount of time. What these fractal results say, however, is that I can fill space in the limit -- as the amount of time gets larger, I can have a piece of my curve near every point; however near you want it to be, if you give me long enough, I can do it.

The problem with this explanation is that it tries to make fractals sound like common-sense objects, and I don't really think they are 🙂

Your concerns are valid for the traditional definition of a curve. In the limit (when it actually fills space), the curve *does* intersect itself. Infinitely many times, actually, at every single point in the square.

However, we don't limit ourselves only to non-intersecting curves in general. As Antendren said, the figure-8 is a valid curve, even though at the crossing there's no way to determine which way you were coming or going without more information. The solution is that we parametrize the curve by a different value than just the (x,y) location of a point. If you, say, define the location of a point according to the time t, then it's just fine for the curve to cross itself multiple times. Each time, you can tell where the curve is going and coming from simply by looking at t.

(In essence, we're performing a trick to transform the curve into a non-intersecting one. Rather than looking like a figure 8 on a plane, the curve becomes a funky spiral ascending into space, where the point is defined by the triplet (x,y,t) rather than just (x,y). Most of the time we can ignore the t coordinate and just treat it like a figure-8, but when it's important we have that extra information to help out.)
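A concrete parametrization makes the (x, y, t) picture tangible; this figure-8 (one convenient choice of formula, not the only one) passes through the origin at two different values of t, and t is what tells the two visits apart:

```python
import math

def figure8(t):
    """Trace a figure-8 for t in [0, 2*pi); it crosses itself at the origin."""
    return math.sin(2 * t), math.sin(t)

a = figure8(0.0)         # first visit to the crossing point
b = figure8(math.pi)     # second visit: same (x, y) up to rounding, new t
print(a, b)
```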