Topology usually starts with the idea of a *metric space*. A metric space is a set of values with some concept of *distance*. We need to define that first, before we can get into anything really interesting.
Metric Spaces and Distance

What does *distance* mean?
Let's look at a set, S, consisting of elements s_{1}, s_{2}, s_{3},...,s_{n}. What does it mean to measure a *distance* from s_{i} to s_{j}?
We'll start by looking at a simple numberline, with the set of real numbers. What's the distance between two numbers x and y? It's a measure of *how far* over the numberline you have to go to get from x to y. But that's really circular; we've defined distance as "how far you have to go", which is defined by distance. Let's try again. Take a blank ruler, and put it next to the numberline, so that the left edge of the ruler is on x, and then draw a mark on the ruler where it touches y. The length of the ruler up to that mark is the distance from x to y. The reason that this one isn't circular is that now you can take that ruler, and use it to answer the question: is the number v farther from the number w than x is from y? The ruler gives you a *metric* that you can use *that is separate from* the numberline itself.
A *metric* over S is a function that takes two elements s_{i} and s_{j}, and returns a *real number* which measures the distance between those two elements. To be a proper metric, it needs to have a set of required properties. To be formal, a function d : S × S → ℜ is a *metric function over S* if/f:
1. ∀ s_{i}, s_{j} ∈ S: d(s_{i},s_{j}) = 0 if/f i=j. *(the identity property)*
2. ∀ s_{i}, s_{j} ∈ S: d(s_{i},s_{j}) = d(s_{j},s_{i}) *(the symmetry property)*
3. ∀ s_{i}, s_{j}, s_{k} ∈ S: d(s_{i},s_{k}) ≤ d(s_{i},s_{j}) + d(s_{j},s_{k}) *(the triangle inequality property)*
Some people also add a fourth property, called the *nonnegativity* property; I prefer to leave it out, because it can be inferred from the others: 0 = d(s_{i},s_{i}) ≤ d(s_{i},s_{j}) + d(s_{j},s_{i}) = 2d(s_{i},s_{j}). But for completeness, here it is: ∀ s_{i}, s_{j} ∈ S: d(s_{i},s_{j}) ≥ 0.
A *metric space* is just the pair (S,d) of a set S, and a metric function d over the set.
For example:
1. The real numbers are a *metric space* with the ruler-metric function. You can easily verify that the properties of a metric function all work with the ruler-metric; in fact, they are all things that you can easily check with a ruler and a numberline. The function that you're creating with the ruler is: d(x,y) = |x-y| (the absolute value of x - y). So the ruler-metric distance from 1 to 3 is 2.
2. A cartesian plane is a metric space whose distance function is the euclidean distance:
d((a_{x},a_{y}), (b_{x},b_{y})) = ((a_{x}-b_{x})^{2} + (a_{y}-b_{y})^{2})^{1/2}.
3. In fact, for every n, euclidean n-space is a metric space using the euclidean distance.
4. A checkerboard is a metric space if you use the *number of kings moves* as the distance function.
5. The Manhattan street grid is a metric space where the distance function between two intersections is the sum of the number of horizontal blocks and the number of vertical blocks between them.
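Since the metric properties are all mechanical to check, you can spot-check them numerically. Here's a minimal Python sketch of my own (the function names are just illustrative), testing the ruler metric, the euclidean distance, and the Manhattan distance against the three properties on a random sample of points:

```python
import itertools
import math
import random

def ruler(x, y):
    # the ruler-metric on the real numberline: d(x,y) = |x-y|
    return abs(x - y)

def euclidean(a, b):
    # euclidean distance on the cartesian plane
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def manhattan(a, b):
    # street-grid distance: horizontal blocks plus vertical blocks
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

random.seed(1)
points = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]

for d in (euclidean, manhattan):
    for a, b, c in itertools.product(points, repeat=3):
        assert d(a, a) == 0                          # identity, one direction
        assert a == b or d(a, b) > 0                 # identity, other direction
        assert d(a, b) == d(b, a)                    # symmetry
        assert d(a, c) <= d(a, b) + d(b, c) + 1e-9   # triangle inequality

assert ruler(1, 3) == 2  # the ruler-metric example from the text
print("all three properties hold on the sample")
```

Of course a finite sample isn't a proof; it's just a sanity check that these really do behave like metrics.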
Open and Closed Sets in Metric Spaces

You can start moving from metric spaces to topological spaces by looking at *open sets*. Take a metric space, (S,d), and a point p∈S. An *open ball* B(p,r) (a ball of radius *r* around point p) in S is the set of points x such that d(p,x) < r. A point p is *adherent* to a subset T of S if/f for every r > 0, B(p,r)∩T≠∅. The *closure* of T (usually written as T with a horizontal line over it; sometimes written as T^{*} by computer scientists, because that's the closure notation in many CS subjects) is the set of all points adherent to T. *(note: a typo was corrected in this paragraph. Thanks to the commenter who caught it!)*
A subset T of S is called a *closed* subset if/f T=T^{*}. Intuitively, T is closed if it *contains* the surface that forms its boundary. So in 3space, a solid sphere is a closed space. The *contents* of the sphere (think of the shape formed by the air in a spherical balloon) is *not* a closed space; it's bounded by a surface, but that surface is not part of the space.
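To make adherence concrete, here's a small Python sketch of my own, using the ruler metric on the real line: the point 0 is adherent to the open interval (0,1), because every open ball around 0, however small its radius, still intersects the interval. So 0 belongs to the closure of (0,1), even though it isn't in (0,1) itself.

```python
def ball_hits_interval(p, r, lo, hi):
    # Does the open ball B(p,r) = {x : |x-p| < r} intersect the open
    # interval (lo, hi)?  Two open intervals overlap if/f each one
    # starts before the other one ends.
    return p - r < hi and p + r > lo

# 0 is adherent to T = (0,1): every ball around 0 hits T...
for r in [1.0, 0.1, 0.001, 1e-9]:
    assert ball_hits_interval(0.0, r, 0.0, 1.0)

# ...but -1 is not adherent: a small enough ball around -1 misses T.
assert not ball_hits_interval(-1.0, 0.5, 0.0, 1.0)
print("0 is in the closure of (0,1); -1 is not")
```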
Archive for: August, 2006
Metric Spaces
Introducing Topology
Back when GM/BM first moved to ScienceBlogs, we were in the middle of a poll about the next good math topic for me to write about. At the time, the vote was narrowly in favor of topology, with graph theory as a very close second.
We're pretty much done with category theory, so it's topology time!
So what's topology about? In some sense, it's about the fundamental abstraction of *continuity*: if I have a bunch of points that form a continuous line or surface, what does that really mean? In particular, what does it mean *from within* the continuous surface?
Another way of looking at it is as the study of what kinds of *structures* are formed from continuous sets of points. This viewpoint makes much of topology look a lot like category theory: a study of mathematical structures, what they mean, and how we can build them and create mappings between them.
Let's take a quick look at an example. There's a famous joke about topologists; you can always recognize a topologist at breakfast, because they're the people who can't tell the difference between their coffee mug and their donut.
It's not just a joke; there's a real example hidden in there. From the viewpoint of topology, the coffee mug and the donut *are the same shape*. They're both toruses. In topology, the exact shape doesn't matter: what matters is the basic continuities of the surface: what is *connected* to what, and *how* they are connected. In the following diagram, all three shapes are *topologically* identical:
If you turn the coffee mug into clay, you can remold it from mug-shape to donut-shape *without tearing or breaking*. Just squishing and stretching. So in topology, they *are* the same shape. On the other hand, a sphere is different: you can't turn a donut into a sphere without tearing a hole in it. If you've got a sphere and you want to turn it into a torus, you can either flatten it and punch a hole in the middle; or you can roll it into a cylinder, punch holes in the ends to create a tube, and then close the tube into a circle. And you can't turn a torus into a sphere without tearing it: you need to break the circle of the torus and then close the ends to create a sphere. In either case, you're tearing at least one hole in what was formerly a continuous surface.
Topology was one of the hottest mathematical topics of the 20th century, and as a result, it naturally has a lot of subfields. A few examples include:
1. **Metric topology**: the study of *distance* in different spaces. The measure of distance and related concepts like angles in different topologies.
2. **Algebraic topology**: the study of topologies using the tools of abstract algebra. In particular, studies of things like how to construct a complex space from simpler ones. Category theory is largely based on concepts that originated in algebraic topology.
3. **Geometric topology**: the study of manifolds and their embeddings. In general, geometric topology looks at lower-dimensional structures, mostly either two or three dimensional. (A manifold is an abstract space where every point is in a region that appears to be euclidean if you only look at the local neighborhood. But on a larger scale, the euclidean properties may disappear.)
4. **Network topology**: topology in the realm of discrete math. Network topologies are graphs (in the graph theory sense) consisting of nodes and edges.
5. **Differential Topology**: the study of differential equations in topological spaces that have the properties necessary to make calculus work.
Personally, I find metric topology rather dull, and differential topology incomprehensible. Network topology more properly belongs in a discussion of graph theory, which is something I want to write about sometime. So I'll give you a passing glance at metric topology to see what it's all about, and algebraic topology is where I'll spend most of my time.
One of the GM/BM readers, Ofer Ron (aka ParanoidMarvin) is starting a new blog, called [Antopology][antopology] where he'll be discussing topology, and we're going to be tag-teaming our way through the introductions. Ofer specializes in geometric topology (knot theory in particular, if I'm not mistaken), so you can get your dose of geometric topology from him.
[antopology]: http://antopology.blogspot.com/
A Stunning Demonstration of Why Good Science Needs Good Math
Everyone in scientific circles is abuzz with the big news: there's proof that dark matter exists! The paper from the scientists who made the discovery is [here][darkmatterpaper]; and Sean Carroll (no relation) has [a very good explanation on his blog, Cosmic Variance][cv]. This discovery happens to work as a great example of just why good science needs good math.
As I always say, one of the ways to recognize a crackpot theory in physics is by the lack of math. For example, you can look at the [electric universe][electric] folks. They have a theory, and they make predictions: but because there's *no* math, the predictions are vague, and there's no good way of *really* testing them, because there's no quantifiable way of making a precise prediction. So they can make predictions like "the Stardust experiment will get bigger particles than they expect"; but they can't tell you *how* big.
The dark matter result is a beautiful example of how to use good math in science.
Here's the basic idea. The theory says that there are two kinds of matter: "dark" matter, and "light" matter. Dark matter only interacts with light matter via gravity; it does not interact with light matter via other forces. But dark matter and light matter generally clump in the same places, because gravity pulls them together. So it's very difficult to really prove that dark matter exists: you can't see it directly, and it normally only appears with light matter, so you can't really prove that the dark matter is there: any observed effect *might* be caused by the light matter behaving in a way different than our current theories claim it should.
But what if you could somehow sweep the light matter away?
What the scientists who did this work found is a collision of two galactic clusters. When these clusters collided, the light matter, mostly in the form of gas, interacted very strongly with one another, creating a shock wave pattern. But the *dark* matter passed through without interacting: the "collision" between the gas clouds didn't affect the dark matter. So the gas clouds were swept back, while the dark matter continued moving. There's a great [animation][animate] illustrating this; in the animation, the blue is the dark matter; the red is the light matter. As the two clusters pass through each other, the light matter is swept away by the electromagnetic interactions between the gas clouds; the dark matter passes right through:
Here's where the math comes in.
They used a combination of optical and X-ray telescopes to produce maps of the gravitational fields of the clusters. This was done by computing the gravitational lensing effect distorting the images of other, more distant galaxies visible *behind* the collided clusters. By carefully computing the distortion caused by gravitational lensing, they were able to determine the distribution of *mass* in the collided clusters. And what they found was that the bulk of the mass was *not* in the light matter. It was in the places where the centers of gravity of the clusters would have been *without* the shockwave effects of the collision. So the bulk of the mass of these two clusters does not appear on our telescope images; but it behaves exactly as the math predicts it would if it were dark matter.
The prediction and the result are both based on very careful computations based on the *mathematical* predictions of gravity and relativity. They were able to predict precisely what they would expect from the interaction, using a mathematical model of how the gas clouds would interact and be swept away, and of how the dark matter would interact gravitationally, to predict where the dark matter masses should be. Then they were able, via a *separate* computation, to determine how much mass was in what location based on gravitational lensing. And finally, they were able to compare the two *separately computed* results to see if the reality matched the prediction.
Now *that* is both good math and good science! And the science could *not* have been done without the math.
[darkmatterpaper]: http://chandra.harvard.edu/photo/2006/1e0657/media/paper.pdf
[cv]: http://cosmicvariance.com/2006/08/21/darkmatterexists/
[electric]: http://www.holoscience.com/
[animate]: http://chandra.harvard.edu/photo/2006/1e0657/media/bullet.mpg
A Glance at Hyperreal Numbers
Since we talked about the surreals, I thought it would be interesting to take a *very* brief look at an alternative system that also provides a way of looking at infinities and infinitesimals: the *hyperreal* numbers.
The hyperreal numbers are not a construction like the surreals; instead they're defined by axiom. The basic idea is that for the normal real numbers, there is a set of basic statements that we can make: statements of first-order logic. And there is a basic structure to the set: it's an *ordered field*.
Hyperreals add the "number" ω, the size of the set of natural numbers, so that you can construct numbers using ω, like ω+1, 1/ω, etc; but it constrains it by axiom so that the set of hyperreals is *still* an ordered field; and all statements that are true in firstorder predicate logic over the reals are true in firstorder predicate logic over the hyperreals.
For notation, we write the real field ℜ, and the hyperreal field ℜ*.
The Structure of Reals and Hyperreals: What is an ordered field?

If you've been a longtime reader of GM/BM, you'll remember the discussion of group theory. If not, you might want to take a look at it; there's a link in the sidebar.
A field is a commutative ring, where the multiplicative identity and the additive identity are not equal; where all numbers have an additive inverse, and all numbers except 0 have a multiplicative inverse.
Of course, for most people, that statement is completely worthless.
In abstract algebra, we study things about the basic structure of the sets where algebra works. The most basic structure is a *group*. A group is basically a set of values with a single operation, "×", called the *group operator*. The "×" operation is *closed* over the set, meaning that for any values x and y in the set, x × y produces a value that is also in the set. The group operator also must be associative - that is, for any values x, y, and z, x×(y×z) = (x×y)×z. The set contains an *identity element* for the group, generally written "1", which has the property that for every value x in the group, "x×1=x". And finally, for any value x in the set, there must be a value x^{-1} such that x×x^{-1}=1. We often write a group as (G,×) where G is the set of values, and × is the group operator.
So, for example, the integers with the "+" operation form a group, (Z,+). The real numbers *almost* form a group with multiplication, except that "0" has no inverse. If you take the real numbers without 0, then you get a group.
If the group operator is also commutative (x×y = y×x for all x and y), then it's called an *abelian group*. The integers with "+" are an abelian group.
A *ring* (R,+,×) is a set with two operations, where (R,+) is an abelian group, and × is associative and distributes over + (that is, x×(y+z) = (x×y)+(x×z)). If × is commutative, then the ring is called a *commutative* ring.
A *field* (F,+,×) is a commutative ring with two operators "+" and "×"; where the identity value for "+" is written 0, and the identity for "×" is written 1; all values have additive inverses, all values except 0 have multiplicative inverses; and 0 ≠ 1. A *subfield* (S,+,×) of a field (F,+,×) is a field with the same operations as F, and where its set of values is a subset of the values of F.
*Finally*, an *ordered* field is a field with a total order "≤": for any two values x and y, either "x ≤ y" or "y ≤ x", and if x ≤ y ∧ y ≤ x then x=y. The total order must also respect the two operations: if a ≤ b, then a + x ≤ b + x; and if 0 ≤ a and 0 ≤ b then 0 ≤ a×b.
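None of this is exotic; the rationals satisfy all of it, and the axioms are easy to spot-check mechanically. Here's a quick sketch of my own in Python using exact fractions (a finite spot-check, of course, not a proof):

```python
from fractions import Fraction
from itertools import product

# a small sample of rationals
vals = [Fraction(n, d) for n in range(-3, 4) for d in range(1, 4)]

for a, b, x in product(vals, repeat=3):
    # the total order respects addition...
    if a <= b:
        assert a + x <= b + x
    # ...and multiplication of nonnegative values
    if 0 <= a and 0 <= b:
        assert 0 <= a * b

# field structure: additive inverses for everything, multiplicative
# inverses for everything except 0
a = Fraction(2, 3)
assert a + (-a) == 0
assert a * (1 / a) == 1
print("order axioms hold on the sample")
```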
The real numbers are the canonical example of an ordered field.
*(The definitions above were corrected to remove several errors pointed out in the comments by readers "Dave Glasser" and "billb". As usual, thanks for the corrections!)*
One of the things we need to ensure for the hyperreal numbers to work is that they form an ordered field; and that the real numbers are an ordered subfield of the hyperreals.
The Transfer Principle

To do the axiomatic definition of the hyperreal numbers, we need something called *the transfer principle*. I'm not going to go into the full details of the transfer principle, because it's not a simple thing to fully define it, and prove that it works. But the intuitive idea of it isn't hard.
What the transfer principle says is: *For any **first order** statement L that's true for the ordered field of real numbers, L is also true for the ordered field of hyperreal numbers*.
So for example: ∀ x ∈ ℜ, ∃ y ∈ ℜ : x ≤ y. Therefore, for any hyperreal number x ∈ ℜ*, ∃ y ∈ ℜ* : x ≤ y.
Defining the Hyperreal Numbers

To define the hyperreal numbers so that they do form an ordered field with the right properties, we need to do two things:
1. Define at least one hyperreal number that is not a real number.
2. Show that the *transfer principle* applies.
So we define ω as a hyperreal number such that ∀ r ∈ ℜ, r < ω.
What we *should* do next is prove that the transfer principle applies. But that's well beyond the scope of this post.
What we end up with is very similar to what we have with the surreal numbers. We have infinitely large numbers. And because of the transfer principle, since there's a successor for any real number, that means that there's a successor for ω, so there is an ω+1. Since multiplication works (by transfer), there is a number 2×ω. Since the hyperreals are a field, ω has a multiplicative inverse, the infinitesimal 1/ω, and an additive inverse, -ω.
There is, of course, a catch. Not quite everything can transfer from ℜ to ℜ*. We are constrained to *first order* statements. What that means is that we are limited to simple direct statements; we can't make statements that are quantified over other statements.
So for example, we can say that for any real number N, the series 1, 1+1, 1+1+1, 1+1+1+1, ... will eventually reach a point where every element after that point will be larger than N.
But that's not a first order statement. The *series* 1, 1+1, 1+1+1, ... is a *second order* statement: it isn't talking about a simple single statement. It's talking about a *series* of statements. So the transfer principle fails.
That does end up being a fairly serious limit. There are a lot of things that you can't say using firstorder statements. But in exchange for that limitation, you get the ability to talk about infinitely large and infinitely small values, which can make some problems *much* easier to understand.
Arithmetic with Surreal Numbers
Last Thursday, I introduced the construction of John Conway's beautiful surreal numbers. Today I'm going to show you how to do arithmetic using surreals. It's really quite amazing how it all works! If you haven't read the original post introducing surreals, you'll definitely want to [go back and read that][surreals] before looking at this post!
Random Quotes
I figured it was time I did the latest random thing to be wandering its way around ScienceBlogs. [Janet has introduced the "random quotes" meme][janet], in which we're supposed to go wandering through the [quotes here][quotes], and pick the first five that reflect "who you are or what you believe".
1. "Human beings are perhaps never more frightening than when they are convinced beyond doubt that they are right.", Laurens Van der Post, The Lost World of the Kalahari (1958). Could any quote possibly be more true?
2. "He that would make his own liberty secure, must guard even his enemy from oppression; for if he violates this duty, he establishes a precedent that will reach to himself.", Thomas Paine (1737-1809). Given recent events in this country, this is particularly apropos.
3. "We are here to change the world with small acts of thoughtfulness done daily rather than with one great breakthrough.", Rabbi Harold Kushner. I've actually taken a class with Rabbi Kushner, and it was wonderful; this quote to me sums up a big part of how I try to live my life.
4. "Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science.", Henri Poincare (1854-1912). I couldn't possibly pick quotes for this blog without quoting a mathematician. Considering what I do on this blog, this one is quite appropriate for me. So many of the creationist screeds that I criticize are based on collections of actual facts put together in stupid ways that turn them into garbage instead of science.
5. "The art of dining well is no slight art, the pleasure not a slight pleasure.", Michel de Montaigne (1533-1592). Yes, I'm a foodie. Maybe someday I'll post a recipe or two. I used to have a website with the recipes from my Y2K New Years Eve party, but it got lost when I switched ISPs and forgot to copy it.
[janet]: http://scienceblogs.com/ethicsandscience/2006/08/random_quotations_meme.php
[quotes]: http://www.quotationspage.com/random.php3
A Crank Responds: GeorgieBoy and his "Scientific Proof of God"
Remember my post several weeks ago about ["The First Scientific Proof of God"?][georgie] The author, Georgieboy Shollenberger popped up [in the comments yesterday][georgiecomments], and posted [a response][georgieresponds] on his blog.
This is how he describes this blog:
>This website is an example of how some math teachers are thinking and teaching
>your children. In general, this website is a Good Math, Bad Math web. On this
>web, debunking creationism is listed under the bad math category. So, your
>children are most likely taught by atheists. Is this what parents want?
If this blog is indeed an example of how math teachers are thinking and teaching, then I'm very flattered. I'm not a teacher, but I would very much *like* to be one; if my writing works well for teaching math, then that means I'm doing a good job.
But the second part: does "debunking creationism" imply atheism? For a guy who purports to have made the greatest discovery in the history of the world; and who claims to show why both math and logic need to be reexamined from their very roots, this is a pathetic claim.
First: Creationism is *not* the only possible theistic belief system.
Second: Creationism is *bad* theism. It consists of throwing away math, logic, science, and reason all in an effort to support a bizarre and unreasonable interpretation of one poorly translated religious text.
Third: I'm not an atheist.
He follows that intro with an interesting non sequitur:
>Mathematicians review my suggestion to restudy mathematics. First, they do not
>believe that humans might be living on other planets. You might agree with them
>but my scientific proof requires other planets to maintain human life
>eternally. Apparently, the reviewers believe that the evening stars are merely
>lights as the ancients thought. How mindless. When seeking the effects of a
>proven God, planet earth is not the first planet that has humans and will not
>be the last planet that has humans.
Fascinating thought process there, huh? I criticize him for sloppy mathematical arguments, and therefore "I do not believe that humans might be living on other planets", and I "believe that the evening stars are merely lights".
As a matter of fact, I *don't* believe that there are humans living on other planets. But how can one conclude from my criticism of his math that I think "evening stars are merely lights"? (Or that I believe that humans don't live on other planets, for that matter? Just because I *do* believe that humans don't live on other planets doesn't mean you can conclude that from my criticism of his sloppy math!)
>... But, the author gripes because my book must
>be purchased to determine what I say. Yet, mathematicians make and sell books
>regularly.
Yes, mathematicians make and sell books. But I've yet to see a major mathematical discovery that you could *only* see if you were willing to pay
the author.
For example, the other day, I wrote about Grigory Perelman's proof of the Poincare conjecture. It's available online for anyone who wants it:
* The entropy formula for the Ricci flow and its geometric applications
* Ricci flow with surgery on three-manifolds
* Finite extinction time for the solutions to the Ricci flow on certain three-manifolds
Or Conway's surreal numbers? Yes, he wrote an [excellent book][onag] on them. He also made information on them widely available to all sorts of people. He showed them to Don Knuth, who wrote [the first book][knuthbook] on them. There've been articles on them all over, from Martin Gardner in Scientific American to random people on personal websites. He didn't demand that everyone give him money to see his work.
How about Einstein? He published relativity in a physics journal called "[Annalen der Physik][annalen]". At the time, there was nothing like the internet, and scientists pretty much always published in journals (as they continue to do today). Annalen does *not* pay the authors of papers; it's available in *every* major university library; and you are permitted to make personal copies of articles from it for academic use.
Mathematicians and scientists publish articles and books  but we don't expect (or even particularly want) to be able to restrict access to our ideas *only* to people willing to give us money to see them.
Georgieboy doesn't do anything like that. If you want to see his wonderful, worldchanging proof, you have to pay him for it.
Finally, he gets around to addressing my criticism of his *math*.
>The author focuses on the concept of infinite, but does not seem to understand
>the great mathematician, Georg Cantor, who discovered the transfinite numbers.
>Instead, the author (1) plays with Aristotle's potential infinity, which Cantor
>calls the mathematical or bad infinity, (2) plays with 'infinity by division,'
>which is a verb that defined the atom for the ancients atomists, (3) plays with
>'infinity by addition,' which applies to Cantor's transfinite numbers, and (4)
>plays with surreal numbers in which infinity becomes a real number. I would
>throw John Conway's surreal numbers into the circle file. Then, the author
>charges me with saying that God is a number infinity. At no time have I ever
>gave God a number because. God is not a number. God's oneness opposes the
>universes' manyness and thus precedes all finite and infinite numbers that will
>ever be found in the universe.
Why did I talk about Aristotle's potential infinity? Because Georgieboy *specifically claims that mathematicians use Aristotle's infinity*. Infinity by addition and infinity by division are the two forms of infinity discussed by Aristotle. The horror! The horror! I actually *criticized* Georgieboy by *addressing his arguments*! Oh, will my unfairness and unreasonability never cease?!
Oh, and why would he throw Conway's surreals in the trash? Who knows? It's particularly interesting the way that he juxtaposes Cantor and the transfinite numbers in defense of his ideas, while tossing Conway and the surreals into the trash. Because, you see, the surreals are based *on the same concept of ordinals* as the transfinite numbers. *(Note: the previous paragraph originally had a typo; where it currently says "transfinite numbers", I originally repeated "surreal numbers". Thanks to commenter "Noodle" for the catch.)*
>My suggestion to restudy mathematics is a serious matter because I discovered
>the first scientific proof of God. I conclude that this discovery has vast
>potentials in mathematics and all sciences. With this proof, big changes can be
>expected.
Yes, his theory has *vast potential*. It's going to change the world! It's going to revolutionize all of math and science! And all you need to do to learn about it is: **buy his book**! Because he won't tell you about it otherwise.
>... For instance, Cantor's transfinite numbers must be developed by our
>mathematicians so we can understand the universe's atoms, the cosmology of God,
>and the cells of all the living bodies. Unfortunately, the atheistic
>mathematicianc believe that we live only in world of numbers. The theory of God
>will not go away during the life of any person. Today's mathematicians have a
>choice to work with 85% of the people in the USA who believe in God. On the
>other hand, they can live privately 'in their box of finites.' I hope to
>convince 'the majority' in the USA that the field of mathematics is falling
>apart and must thus be reformed but also expanded considerably.
Yeah, we need to start studying transfinite numbers, because *nobody* has been studying anything like that. (Except, of course, for thousands of number theorists.)
And we need to stop being atheists (even when we aren't), because the existence of god means, ummm, well, ummm.... Not very much in terms of math?
And mathematics is falling apart! Just because we've managed to accomplish trivial little things like proving the Poincare conjecture and Fermat's last theorem; characterizing the fundamental limits of mathematics, and silly things like that means *nothing*. Mathematics is falling apart! Who can save us?!
Why, nothing can save us except Georgieboy!
As long as we send him some cash.
[georgie]: http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php
[georgiecomments]:http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php#comment194071
[georgieresponds]: http://georgeshollenberger.blogspot.com/2006/08/whatmathematiciansareteachingyour.html
[onag]: http://rcm.amazon.com/e/cm?t=goodmathbadma20&o=1&p=8&l=as1&asins=1568811276&fc1=000000&IS2=1&lt1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[knuthbook]: http://rcm.amazon.com/e/cm?t=goodmathbadma20&o=1&p=8&l=as1&asins=0201038129&fc1=000000&IS2=1&lt1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[annalen]: http://www.wileyvch.de/publish/en/journals/alphabeticIndex/2257/
Beautiful Insanity: Pathological Programming in SNUSP
Today's programming language insanity is a real winner. It's a language called SNUSP. You can find the language specification [here][snuspspec], a [compiler][snuspcomp], and [an interpreter embedded in a web page][snuspinterp]. It's sort of like a cross between [Befunge][befunge] and [Brainfuck][brainfuck], except that it also allows subroutines. (And in a variant, *threads*!) The real beauty of SNUSP is its beauty: that is, programs in SNUSP are actually really quite pretty, and watching them run can be positively entrancing.
SNUSP keeps its data on a tape, like Brainfuck. The basic instructions are very Brainfuck like:
1. "<" move the data tape pointer one cell to the left.
2. ">" move the data tape pointer one cell to the right.
3. "+" add one to the value in the current data tape cell.
4. "-" subtract one from the value of the current data tape cell.
5. "," read a byte from stdin to the current tape cell.
6. "." write a byte from the current tape cell to stdout.
7. "!" skip the next instruction.
8. "?" skip the next instruction if the current tape cell contains zero.
Then there's the two-dimensional control flow. There aren't many instructions here.
1. "/" bounce the current control flow direction as if the "/" were a mirror: if the program is flowing up, switch to right; if it's flowing down, switch to left; if it's flowing left, switch to down; and if it's flowing right, switch to up.
2. "\" bounce the other way; also just like a mirror.
3. "=" noop for drawing a path flowing left/right.
4. "" noop for drawing a path flowing up/down.
So far, we've pretty much got a straightforward mix between Brainfuck and Befunge. Here's where it becomes particularly twisted. It also has two more
instructions for handling subroutines. There's a call stack which records pairs of location and direction, and the two instructions work with that:
1. "@" means push the current program location and the current direction onto the stack.
2. "#" means pop the top of the stack, set the location and direction, and *skip* one cell. If there is nothing on the stack, then exit (end program).
Finally, the program execution starts out wherever there is a "$", moving to the right.
So, for example, here's a program that reads a number and then prints it out twice:
          /==!/===.===#
          |   |
$====,===@/==@/====#
So, it starts at the "$", flowing right. It gets to the ",", and reads a value
into the current tape cell. It hits the first "@", which records the location and direction on the stack. Then it hits the "/" mirror, and goes up until it hits the "/" mirror on the top line, which turns it right. It gets to the "!" and skips over the "/" mirror, then the "." prints, and the "#" pops the stack. So it returns to the
first "@", skips over the "/" mirror, and gets to the second "@", which pushes the stack, etc.
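The instruction set is small enough that it's easy to interpret directly. Here's a minimal Python sketch of an interpreter for the non-threaded language described above; the tape size, treating unknown characters as no-ops, and reading EOF as zero are my assumptions, not part of the spec. It runs the read-and-print-twice program (reproduced inline so the snippet is self-contained):

```python
def run(src, inp=""):
    grid = src.split("\n")
    tape = [0] * 30000          # data tape (the size is an arbitrary assumption)
    dp = 0                      # data pointer
    out = []
    inp = list(inp)
    stack = []                  # call stack of (row, col, drow, dcol)
    # execution starts at "$" (default: top-left corner), moving right
    r = c = 0
    for i, line in enumerate(grid):
        if "$" in line:
            r, c = i, line.index("$")
            break
    dr, dc = 0, 1

    def cell(r, c):
        if 0 <= r < len(grid) and 0 <= c < len(grid[r]):
            return grid[r][c]
        return None             # outside the playfield

    while True:
        ch = cell(r, c)
        if ch is None:                        # fell off the playfield: halt
            break
        if ch == ">":   dp += 1
        elif ch == "<": dp -= 1
        elif ch == "+": tape[dp] += 1
        elif ch == "-": tape[dp] -= 1
        elif ch == ",": tape[dp] = ord(inp.pop(0)) if inp else 0
        elif ch == ".": out.append(chr(tape[dp] % 256))
        elif ch == "/":  dr, dc = -dc, -dr    # mirror: right<->up, left<->down
        elif ch == "\\": dr, dc = dc, dr      # mirror: right<->down, left<->up
        elif ch == "!": r, c = r + dr, c + dc # skip the next cell
        elif ch == "?" and tape[dp] == 0:
            r, c = r + dr, c + dc             # conditional skip
        elif ch == "@": stack.append((r, c, dr, dc))
        elif ch == "#":
            if not stack:                     # "#" on an empty stack: exit
                break
            r, c, dr, dc = stack.pop()
            r, c = r + dr, c + dc             # with the advance below, this
                                              # skips one cell past the "@"
        # anything else ("=", "|", "$", spaces) is a no-op
        r, c = r + dr, c + dc
    return "".join(out)

# The read-and-print-twice program:
echo_twice = """\
          /==!/===.===#
          |   |
$====,===@/==@/====#"""
print(run(echo_twice, "x"))   # prints "xx"
```

Tracing it by hand, mirror by mirror, matches the walkthrough above.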
Here's a simple subroutine for adding 48 to a cell:
=++++++++++++++++++++++++++++++++++++++++++++++++#
Or a slight variant:
/=+++++++++++++++++++++++++
 #+++++++++++++++++++++++/

$
Or (copying from the language manual), how about this one? This one starts to give you an idea of what I like about this bugger; the programs just *look* cool. Writing a program in SNUSP can be as much art as it is programming.
#//
$===!++++
/++++/
/=++++
!///
One last +48 subroutine:
  123 4
/=@@@+@+++++#

$
This last one is very clever, so I'll walk through it. The "1 2 3 4" on the top are comments; any character that isn't an instruction is ignored. They're there to label the "@"s so that I can refer to them in the explanation.
The program goes to @1. It pushes the loc/dir on the stack. Then it gets to @2: push again (so now the stack is "@1right,@2right"). Then @3: push (stack="@1right,@2right,@3right"). Then add one to the cell (+1). Then @4: push again (stack="@1right,@2right,@3right,@4right"). Then 5 "+"s, so add 5 (+6). Then we hit "#", so pop, return to @4, and skip one cell; that leaves 4 "+"s to execute (+10). Pop again (stack="@1right,@2right"), return to @3, and skip one cell, which lands us on @4: push, add five (+15), pop back to @4, skip one cell, add four (+19). Pop (stack="@1right"), return to @2, skip one cell, which jumps over @3 and lands on the "+": add one (+20), then the @4 sequence runs again, adding nine (+29). Pop (the stack is now empty), return to @1, skip one cell, which jumps over @2 and lands on @3: push, add one (+30), and run the @4 sequence again (+39). The "#" pops @3 and skips one cell, landing on @4: push, add five (+44), pop back, skip, add four (+48). Finally the "#" finds the stack empty, so the program exits, having added exactly 48.
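To double-check that arithmetic, the call/return dance of "@" and "#" on a single line is easy to simulate. Here's a tiny Python sketch that handles just the three instructions this program actually uses:

```python
def run_line(prog):
    pos, stack, total = 0, [], 0
    while True:
        ch = prog[pos]
        if ch == "+":
            total += 1              # add one to the current cell
        elif ch == "@":
            stack.append(pos)       # push this location (direction is always right)
        elif ch == "#":
            if not stack:
                return total        # "#" on an empty stack: program exits
            pos = stack.pop() + 1   # return to the "@", then skip one cell
        pos += 1

print(run_line("@@@+@+++++#"))      # -> 48
```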
Here's a real beauty: Multiplication, with documentation. If you look at it carefully, it's actually reasonably clear how it works! Given this instruction set, that's *truly* an amazing feat.
read two characters ,>,== * /=================== ATOI 
convert to integers /=/@</@=/ * // /===== ITOA ++++++++++ //
multiply @ =!=========/ // /++++++++++/ 
convert back !/@!============/ ++++++++++ //
and print the result / .# * /++++++++++/ #
/====================/ * ++++++++#

 / #/?=<<<>>> />>+<+>>>+/ // /======== BSPL2 !======?/#
 />+< /============== FMOV4 =/ // /<+>
 #?===/! FMOV1 ================== /====/ /====== FSPL2 !======?/#
 /============================/
 * * **  *  * * * * * * **  * * * /+@/>@/>>=== /====>>@<@=?/@==<#<<<== !<<<>>>?/ * // /
* \ /@==============!/?>=!/?>!/=======?<<<#
 />>+<<+?+>?/
\!=</</
And one more: Ackermann's function, defined by:
A(x, y) = y + 1                     if x = 0
          A(x - 1, 1)               if x > 0 and y = 0
          A(x - 1, A(x, y - 1))     if x > 0 and y > 0
The value of Ackermann's function grows like nothing else. A(4, 2) is about 2×10^{19728}.
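That growth claim is easy to poke at in Python. The naive recursion below only survives tiny inputs, but the known closed form A(4, 2) = 2^65536 - 3 (a standard identity, not from the original post) lets us check the size claim directly, since Python has big integers:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)   # the recursion gets deep even for small inputs

@lru_cache(maxsize=None)
def A(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

print(A(2, 2))   # -> 7
print(A(3, 3))   # -> 61
# A(4, 2) is hopeless by direct recursion, but it equals 2^65536 - 3:
print(len(str(2**65536 - 3)))   # -> 19729 digits, i.e. about 2 x 10^19728
```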
Here's Ackerman's in SNUSP:
/==!/==atoi=@@@@#
 
  /=========!==!==== ** recursion **
$,@/>,@/==ack=!? j+1
j i @/#   A(i,0) > A(i1,1)
@>@>@/@ A(i1,A(i,j1))
# #   
/<>!=/ =======@>>>@< 0) ? ?   
>>+<>+>+<<>>+<<</
[snuspspec]: http://www.deepwood.net/~drlion/snusp/snusp1.0specwd1.pdf
[snuspcomp]: http://www.baumanfamily.com/john/esoteric.html
[snuspinterp]: http://www.quirkster.com/snusp/snuspjs.html
[brainfuck]: http://scienceblogs.com/goodmath/2006/07/gmbm_friday_pathological_progr.php
[befunge]: http://scienceblogs.com/goodmath/2006/07/friday_pathological_programmin.php
Introducing the Surreal Numbers
Surreal numbers are a beautiful, simple, set-based construction that allows you to create and represent all real numbers, so that they behave properly; and in addition, it allows you to create infinitely large and infinitely small values, and have them behave and interact in a consistent way with the real numbers in their surreal representation.
The surreals were invented by John Horton Conway (yes, that John Conway, the same guy who invented the "Life" cellular automaton, and did all that amazing work in game theory). The name for surreal numbers was created by Don Knuth (yes, that Don Knuth!).
Surreal Numbers are something that has come up in discussions in the comments on a number of posts around here. Until they were first mentioned in a comment back when this blog was on Blogger, I had never heard of them. After someone said a little about them, I thought they sounded so cool that I rushed out and got the two main books that have been written about them: Conway's "On Numbers and Games", and Knuth's "Surreal Numbers".
The Basic Rules
The basic idea of surreal numbers is that you can represent any real number \(N\) using two sets: a set \(L\) which contains numbers less than \(N\), and a set \(R\) which contains numbers greater than \(N\). To make it easy to write, surreals are generally written N = { L | R }, where L is the definition of the set of values less than N, and R is the definition of the set of values greater than N.
There are two fundamental rules for surreal numbers:
- The Construction Rule: If L and R are two sets of surreal numbers, and ∀ l ∈ L, r ∈ R : l < r, then { L | R } is a valid surreal number.
- The Comparison Rule: If x = {X_{L} | X_{R}} and y = {Y_{L} | Y_{R}} then x ≤ y if/f:
¬ ∃ x_{i} ∈ X_{L} : y ≤ x_{i} (no member of the left set of x is greater than or equal to y), and ¬ ∃ y_{i} ∈ Y_{R} : y_{i} ≤ x (no member of the right set of y is less than or equal to x).
In addition, we need two basic axioms: finite induction, and equality. Finite induction is basically the induction rule from the Peano axioms, so I won't repeat it here. The equality rule for surreals is also very simple: x = y if/f x ≤ y ∧ y ≤ x.
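The comparison and equality rules translate almost directly into code. Here's a Python sketch (the pair-of-tuples representation is my own choice, not standard) that builds a few early surreals and checks them against the rules:

```python
# A surreal number is represented as a pair (left, right):
# two tuples of previously-constructed surreal numbers.

def leq(x, y):
    """x <= y iff no member of x's left set is >= y,
    and no member of y's right set is <= x."""
    xl, _ = x
    _, yr = y
    return (not any(leq(y, xi) for xi in xl) and
            not any(leq(yi, x) for yi in yr))

def eq(x, y):
    """The equality rule: x = y iff x <= y and y <= x."""
    return leq(x, y) and leq(y, x)

ZERO = ((), ())           # 0 = { | }
ONE  = ((ZERO,), ())      # 1 = { 0 | }
NEG1 = ((), (ZERO,))      # -1 = { | 0 }
HALF = ((ZERO,), (ONE,))  # 1/2 = { 0 | 1 }

print(leq(NEG1, HALF))              # -> True
print(leq(ONE, HALF))               # -> False
print(eq(((NEG1, ZERO), ()), ONE))  # -> True: { -1, 0 | } is the same number as { 0 | }
```

The last line is a preview of something that comes up below: dropping "redundant" elements from the left set doesn't change which number you've built.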
Constructing Integers
With the rules set, we can start creating numbers. The first one we create is 0: we say that by definition, 0 is the number with the empty set as its L and R values: 0 = { ∅ | ∅ }, also written { | }.
Next we want to define the basic unit values, 1 and -1. So how do we define 1? Well, the only number we have so far is 0. So we define 1 as the value with 0 in its left set, and an empty right set: 1 = { 0 | }. -1 is pretty much the same deal, except that it uses 0 as its right set: -1 = { | 0 }.
We follow the same basic pattern for all of the integers: the positive integer i is a surreal number with all of the integers from 0 to i-1 in its left set, and an empty right set: i = { 0, ..., i-2, i-1 | }. Similarly for the negatives: a negative number -i = { | -i+1, -i+2, ..., 0 }.
Now, if we look at the possibilities for surreals so far, we're always using a null set for either the left, the right, or both sets. What happens if, for a positive number i, we leave out, say, i-2 in its left set?
What happens is nothing. It's effectively exactly the same number. Just think of the definition above, and look at an example: 4 = { 0, 1, 2, 3 | }, and { 2, 3 | }. Is { 0, 1, 2, 3 | } ≤ { 2, 3 | }? Yes. Is { 2, 3 | } ≤ { 0, 1, 2, 3 | }? Yes. So they're the same. So each integer is actually an equivalence class. Each positive integer i is the equivalence class of the surreal numbers formed by: { { l | } where l ⊆ { 0, ..., i-1 } && i-1 ∈ l }.
At this point, we've got the ability to represent integers in the surreal form. But we haven't even gotten to fractions, much less to infinities and infinitesimals.
Constructing Fractions
The way we construct fractions is by dividing the range between two other numbers in half. So, for example, we can create the fraction 1/2, by saying it's the simplest number between 0 and 1: { 0 | 1 }. We can write 3/4 by saying it's halfway between 1/2 and 1: { 1/2 | 1 }.
This is, obviously, sort of limited. Given a finite left and right set for a number, we can only represent fractions whose denominators are powers of 2. So 1/3 is out of the question. We can create a value that is as close to 1/3 as we want, but we can't get it exactly with finite left and right sets. (Fortunately, we aren't restricted to finite left and right sets. So we'll let ourselves be lazy and write 1/3, with the understanding that it's really something like a continued fraction in surreal numbers.)
Happy Birthday to 2
One thing that's important in understanding the surreals is the idea of a birthday.
We started creating surreals with the number 0 = { | }. We say that 0 has the integer 0 as its birthday.
Then we created 1 and -1. So it took one step each to get from the set of surreals we knew about to the new set consisting of { 0 | }, { | }, and { | 0 }: that is, 1, 0, and -1. So 1 and -1 have a birthday of 1.
Once we had 1, 0, and -1, we could create 2, 1/2, -1/2, and -2 in one step. So they had a birthday of 2.
What we're doing here is creating an idea of ordinals: the ordinal of a surreal number is its birthday, the number of steps it took until we could create that number.
Given any surreal number { L | R }, the value that it represents is the value between the largest value in L and the smallest value in R with the earliest birthday (or, equivalently, the lowest ordinal).
So if we had, for example, { 1, 2, 3 | } (a surreal for 4), and { 23 | } (a surreal for 24), then the surreal number { {1, 2, 3} | {23} } would be 5, because it's the surreal between 4 and 24 with the earliest birthday.
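That "earliest birthday wins" rule can be sketched for finite surreals using plain fractions. Assuming both sets are nonempty and finite, the value of { L | R } is the earliest-born dyadic rational strictly between max(L) and min(R); the function name and the search strategy here are mine, not standard:

```python
from fractions import Fraction
from math import ceil, floor

def simplest(lo, hi):
    """The earliest-birthday dyadic rational strictly between lo and hi.
    Integers are born before halves, halves before quarters, and so on;
    among integers, the one closest to zero is born first."""
    d = 1
    while True:
        n_min = floor(lo * d) + 1        # smallest numerator with n/d > lo
        n_max = ceil(hi * d) - 1         # largest numerator with n/d < hi
        if n_min <= n_max:
            n = min(range(n_min, n_max + 1), key=abs)
            return Fraction(n, d)
        d *= 2                           # nothing fits: allow the next denominator

print(simplest(Fraction(0), Fraction(1)))     # -> 1/2
print(simplest(Fraction(1, 2), Fraction(1)))  # -> 3/4
print(simplest(Fraction(4), Fraction(24)))    # -> 5, matching the example above
```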
What about infinity?
I promised that you could talk about infinity in interesting ways.
Let's call the size of the set of natural numbers ω. What can we create with birthday ω + 1? Things with birthday ω+1 are things that require infinite L and R sets. So, for example, in step ω+1, we can create a precise version of 1/3.
How?
- Let's give a name to the set of all surreal numbers with birthday N. We'll call it S_{N}.
- The left set of 1/3 is: { x/2^{y} ∈ S_{ω} : 3x < 2^{y} }
- The right set of 1/3 is: { x/2^{y} ∈ S_{ω} : 3x > 2^{y} }
- So 1/3 = { { x/2^{y} ∈ S_{ω} : 3x < 2^{y} } | { x/2^{y} ∈ S_{ω} : 3x > 2^{y} } }, and its birthday is ω+1.
- Using pretty much the same trick, we can create a number that represents the size of the set of natural numbers, which we'll call infinity. ∞ = { { x ∈ S_{ω} } | }, with birthday ω+1. And then, obviously, ∞+1 = { ∞ | }, with birthday ω+2. And ∞/2 = { 0 | ∞ }, also with birthday ω+2.
Infinitesimals are more of the same. To write an infinitely small number, smaller than any real fraction, what we do is say: { 0 | { 1/2, 1/4, 1/8, 1/16, ... } }, with birthday ω+1.
What about irrationals? No problem! They're also in S_{ω+1}. We use a powers-of-2 version of continued fractions.
π = { 3, 25/8, 201/64, ... | ..., 101/32, 51/16, 13/4, 7/2, 4 }, birthday = ω+1.
Coming Soon
Either tomorrow or next Monday (depending on how much time I have), I'll show you how to do arithmetic with surreal numbers.
Roman Numerals and Arithmetic
I've always been perplexed by roman numerals.
First of all, they're just *weird*. Why would anyone come up with something so strange as a way of writing numbers?
And second, given that they're so damned weird, hard to read, hard to work with, why do we still use them for so many things today?