Back during the DonorsChoose fundraiser, I promised a donor that I'd write an article about the math of zero. I haven't done it yet, because zero is actually a surprisingly deep subject, and I haven't really had time to do the research to do it justice. But in light of the comment thread that got started around [this post][fspog] yesterday, I think it's a good time to do it with whatever I've accumulated now.

History
---------

We'll start with a bit of history. Yes, there's an actual history to zero!

In general, most early number systems didn't have any concept of "zero". Numbers, in early mathematical systems, were measurements of quantity. They were used to ask questions like "How much grain do we have stored away? If we eat this much now, will we have enough to plant crops next season?" A measurement of zero doesn't really mean much; even when math is applied to measurements today, leading zeros in a number - even if they're *measured* - don't count as significant digits in the measurement. (So if I'm measuring some rocks, and one weighs 99 grams, then that measurement has only two significant digits. If I use the same scale to weigh a very slightly larger rock, and it weighs 101 grams, then my measurement of the second rock has *three* significant digits. The leading zeros don't count!) *(In the original version of this post, I managed to stupidly blow my explanation of significant digits, which several alert commenters pointed out. As usual, my thanks for the correction.)*

Aristotle is pretty typical of the reasoning behind why zero wasn't part of most early number systems: he believed that zero, like infinity, was an *idea* related to numbers, but not an actual number itself. After all, you can't *have* 0 of anything; zero of something isn't *any* something: you *don't have* anything. And you can't really *get* to zero as he understood it. Take any whole number, and divide it into parts: you'll eventually get a part of size "1". You can get to any number by dividing something bigger. But not zero: you can never get to zero by dividing things. You can spend eternity cutting numbers in half, and you'll still never get to zero.

The first number system that we know of to have any notion of zero is the Babylonians'; but they still didn't really quite treat it as a genuine number. They had a base-60 number system, and for digit-places that didn't have a number, they left a space: the space was the zero. (They later adopted a placeholder that looked something like "//".) It was never used *by itself*; it just kept the space open to show that there was nothing there. And if the last digit was zero, there was no indication at all. So, for example, 2 and 120 (that is, 2×60) looked exactly the same - you needed to look at the context to see which it was.

The first real zero came from an Indian mathematician named Brahmagupta in the 7th century. He was quite a fascinating guy: he didn't just invent zero, but arguably he also invented the idea of negative numbers and algebra! He was the first to use zero as a real number, and work out a set of algebraic rules about how zero, positive, and negative numbers worked. The formulation he worked out is very interesting; he allowed zero as a numerator or a denominator in a fraction.

From Brahmagupta, zero spread west (to the Arabs) and east (to the Chinese and Vietnamese). Europeans were just about the last to get it; they were so attached to their wonderful Roman numerals that it took quite a while to penetrate: zero didn't make the grade in Europe until about the 13th century, when Fibonacci (he of the series) translated the works of a Persian mathematician named al-Khwarizmi (from whose name sprang the word "algorithm" for a mathematical procedure). As a result, Europeans called the new number system "Arabic", and credited it to the Arabs; but as I said above, the Arabs didn't create it; it originally came from India. (But the Arabic scholars, including the famous poet Omar Khayyam, are the ones who adopted Brahmagupta's notions *and extended them* to include complex numbers.)

Why is zero strange?
----------------------

Even now, when we recognize zero as a number, it's an annoyingly difficult one. It's neither positive nor negative; it's neither prime nor composite. If you include it in the set of real numbers, then they're not a group under multiplication - zero is the one number with no multiplicative inverse. It's not a unit; and it breaks the closure of the reals under division. It's a real obnoxious bugger in a lot of ways. One thing Aristotle was right about: zero is a kind of counterpart to infinity: a concept, not a quantity. But infinity, we can generally ignore in our daily lives. Zero, we're stuck with.

Still, it's there, and it's a real, inescapable part of our entire concept of numbers. It's just an oddball - the dividing line that breaks a lot of rules. But without it, a lot of rules fall apart. Addition isn't a group without 0. Addition and subtraction aren't closed without zero.

Our notation for numbers is also totally dependent on zero; it's hugely important to making a positional number system work. Try looking at the [algorithm for multiplying roman numerals][roman-mult] sometime!
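One classic way to multiply without place-value arithmetic - whether or not it's exactly the scheme the linked page uses is an assumption on my part - is halving and doubling, which needs nothing but doubling, halving, and addition. A quick sketch in Python:

```python
# A sketch of multiplication without positional notation: the
# halving-and-doubling scheme (often called "Russian peasant"
# multiplication) - the kind of workaround you're forced into when
# your numerals have no place value.

def peasant_multiply(a, b):
    total = 0
    while a > 0:
        if a % 2 == 1:      # when the halved column is odd,
            total += b      # add the doubled column into the result
        a //= 2             # halve one column (discarding remainders)
        b *= 2              # double the other
    return total

print(peasant_multiply(18, 23))  # 414
```

Each step only requires knowing whether a number is odd and how to double it - no digit columns needed.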

Because of the strangeness of zero, people make a lot of mistakes involving it.

For example, based on that idea of zero and infinities as relatives, a lot of people believe that 1/0=infinity. It doesn't. 1/0 doesn't equal *anything*; it's meaningless. You *can't* divide by 0. The intuition behind this fact comes from the Aristotelean idea about zero: concept, not quantity. Division is a concept based on quantity: Asking "What is x divided by y" is asking "What quantity of stuff is the right size so that if I take Y of it, I'll get X?"

So: what quantity of apples can I take 0 of to get 1 apple? The question makes no sense; and that's exactly right: it *shouldn't* make sense, because dividing by zero *is meaningless*.

There's a cute little algebraic pun that can show that 1 = 2, which is based on hiding a division by zero.

1. Start with "x = y"

2. Multiply both sides by x: "x^{2} = xy"

3. Subtract "y^{2}" from both sides: "x^{2} - y^{2} = xy - y^{2}"

4. Factor: "(x+y)(x-y) = y(x-y)"

5. Divide both sides by the common factor "x-y": "x + y = y"

6. Since x=y, we can substitute y for x: "y + y = y"

7. Simplify: "2y=y"

8. Divide both sides by y: "2 = 1"

The problem, of course, is step 5: x-y = 0, so step five is dividing by zero. Since that's a meaningless thing to do, everything based on getting a meaningful result from that step is wrong - and so we get to "prove" false facts.
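You can watch the trick fail by plugging concrete numbers into it; a quick Python check:

```python
# The "proof" above, checked with concrete numbers (x = y = 3). Every
# step through step 4 is a true statement - both sides are just zero -
# and the whole trick collapses at step 5, where cancelling (x - y)
# is a division by zero.
x = y = 3
assert x * x == x * y                        # step 2
assert x * x - y * y == x * y - y * y        # step 3 (both sides are 0)
assert (x + y) * (x - y) == y * (x - y)      # step 4 (both sides are 0)
try:
    (x + y) * (x - y) / (x - y)              # step 5: divide by (x - y) = 0
except ZeroDivisionError:
    print("step 5 divides by zero")
```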

Anyway, if you're interested in reading more, the best source of information that I've found is an online article called ["The Zero Saga"][saga]. It covers not just a bit of history and random chit-chat like this article, but a detailed presentation of everything you could ever want to know, from the linguistics of words meaning zero or nothing to cultural impacts of the concept, to detailed mathematical explanation of how zero fits into algebras and topologies.

[fspog]: http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php

[saga]: http://home.ubalt.edu/ntsbarsh/zero/ZERO.HTM

[roman-mult]: http://www.phy6.org/outreach/edu/roman.htm

## Archive for: July, 2006

## Zero

## Friday Pathological Programming: Befunge, the 2-dimensional language

Today, we're going to take a look at a brilliant language called **Befunge**. Befunge is the work of an evil genius named Chris Pressey.

Normal programming languages are based on a basically one-dimensional syntax; the program is a string, a sequence of characters, and it's processed by reading that string in a straight-ahead fashion. But that's not Befunge! Befunge is something like a two-dimensional Turing machine: the program and its data are written on a two-dimensional *torus*. Each instruction in Befunge is a single character, and where it's located on the torus is crucial. (In case you're not familiar with a torus, it's what you get if you take a very flexible sheet of paper, and roll it so that you connect the top edge to the bottom, and then roll that tube so that you connect the left edge to the right. You get a donut shape where moving up from what used to be the top of the page puts you on the bottom of the page; moving left from the left edge of the page puts you on the right.) This torus is called the *playfield*.

The basics of computation in Befunge are pretty straightforward. It's a stack based language. Operations take their parameters from the stack, and leave their results on the stack. Nothing too complicated there. There are arithmetic operators, IO operators, control flow operators, all operating on the values on the stack.

The arithmetic operators are the usual kinds of things: There are operators for addition (+), subtraction (-), division (/), multiplication (*), modulo (%), and logical negation (!). Digit characters are treated as operators that push the numeric value of the digit onto the stack. (So "99" will create a stack with two nines.) Comparisons are done using "`", which pops the top two values from the stack and compares them: if the top of the stack was "x", and the value beneath it was "y", then "`" leaves a "1" on the stack if y > x, and a "0" otherwise.

For IO, there are four operators. "&" reads an integer from standard input and pushes it onto the stack. "~" reads a single character from standard input, and leaves it on the stack. "." pops a value off the stack and writes it to standard out as an integer. "," pops a value off the stack and writes it to standard out as a character.

Of course, we can't have a stack-based language without some stack operators: ":" makes a duplicate of the top value on the stack; "$" discards the top value on the stack; "\\" swaps the top two values of the stack.

So far, nothing has looked particularly pathological - in fact, nothing even looks particularly strange, other than the fact that it's pretty illegible because of the single-character operators. But now, we get to control flow, and *that* is where the insanity/brilliance of Befunge reveals itself.

In Befunge, there's a read-head that moves over the program. Each step, it executes the instruction under the head. But instead of just moving left or right, it can move left, right, up, or down. ">" is an instruction that tells the head to start moving to the right; "<" tells the head to start moving left; "^" means start moving up, and "v" means to start moving down. So, for example:

```
>v
^<
```

This is a program that runs an infinite loop: the head will just cycle over those four characters. An even more interesting infinite loop (taken from the Befunge documentation) is:

```
>v>v
 >^
^  <
```

Conditionals work by picking the direction that the head will move: "_" pops the stack, and if the value is zero, then it makes the head move right (">"); if it's non-zero, it makes the head move left ("<"). Similarly, "|" pops a value, and makes the head move up if the value was non-zero, or down if it was zero. To make things confusing, "#" means "skip the next instruction". (It's mainly important for when a vertical and a horizontal control flow path cross.) And finally, "@" is the exit command; it makes the head stop, and the program halt.

There's also a little hack for strings. A double-quote character (") starts a string; when one is encountered, the head keeps moving in the same direction, but instead of executing the characters as instructions, it just pushes the character values onto the stack. When the second quote is found, it goes back to executing instructions.

Finally, just in case the basic two dimensional flow control isn't pathological enough, there are two instructions for modifying cells on the playfield! "g" pops an X and Y value off the stack, and pushes the character at (X,Y) onto the stack; "p" pops an X, Y, and a character off the stack, and writes the character onto location (X,Y) of the playfield. (The coordinates are relative to the cell where the program started.)

So, let's look at a couple of Befunge programs. As usual, we start with the good old "hello world".

```
                 v
>v"Hello world!"0<
,:
^_25*,@
```

We start at the top left, head moving right. It moves until it hits the "v" character, which makes it go down; then "<" makes it go left. 0 pushes a zero onto the stack. Then we've got a quote - so it starts pushing characters onto the stack until the next quote. When we get to the next quote, if we wrote the stack so that the top comes first, it would look like : ('H' 'e' 'l' 'l' 'o' ' ' 'w' 'o' 'r' 'l' 'd' '!' 0 ).

Then we hit a "v" which makes the head go down. This is the beginning of a loop; the leftmost two characters of rows 2, 3, and 4 are a while loop! The head goes down to ":" which duplicates the top of the stack; then it hits "_", which turns left if the value on top of the stack is not zero; then the head turns up, outputs a character, turns right, and we're back at the start of the loop. So the loop will output each character until we get to the "0" we pushed before the string; then at the "_", we turn right. 2 and 5 are pushed on the stack and multiplied, leaving a 10, which is the linefeed character (basically "\\n" for you C weenies). It outputs the linefeed, and then exits.

How about a truly pathological example? Here's a self-reproducing program in 14 bytes.

```
:0g,:93+`#@_1+
```

Stepping through that:

- Dup the value on top of the stack. That'll produce a "0" if the stack is empty. (So first pass, stack=[0])
- "0" Push a zero on the stack. (First pass, stack=[0,0])
- "g": Fetch the value at (x,y) on the stack; that's (0,0) initially. (First pass, stack = [':'])
- ",": Output it. So we printed the character at (0,0) (First pass, stack = [])
- ":" dup the stack top again. (First pass, stack = [0])
- "93+". Get the number 12 onto the stack. (First pass, stack = [12,0])
- Compare: leave a "1" on the stack if the copied value was greater than twelve, or a "0" otherwise. (First pass, stack = [0]).
- "#" skip over the next character.
- "_" go right if the top of stack is zero; left if it's one. So if the value copied by the second ":" was greater than 12, then go left (in which case you hit "@" and halt); otherwise, keep going right.
- "1+": add one to the top of the stack (First pass, stack = ([1])). Then keep going right until you hit the right edge, and the you jump back to the left edge, so you're at the first ":" again.
- Now the whole thing repeats, except that there's a one on the stack. So the "g" will fetch (1,0); the 12 will be compared to 1, and the "1+" at the end will leave "2" on the stack. So now it fetches and outputs (2,0). And so on, until after it outputs (13,0) - the final "+" - the comparison finally leaves a 1, and the "_" sends the head left into the "@", which halts it.
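To check traces like this, it helps to run them. Here's a minimal sketch of a Befunge-93-style interpreter in Python - it covers only the instructions described in this post, and assumes the common convention that popping an empty stack yields 0 (so it's not a complete or authoritative implementation):

```python
# A minimal Befunge-93-style interpreter sketch: just the instructions
# described in this post, with pop-on-empty-stack yielding 0.

def run_befunge(src, max_steps=100_000):
    grid = [list(line) for line in src.splitlines()]
    width = max(len(row) for row in grid)
    for row in grid:                       # pad rows so the torus is rectangular
        row.extend(' ' * (width - len(row)))
    height = len(grid)
    x = y = 0
    dx, dy = 1, 0                          # head starts at top-left, moving right
    stack, out, stringmode = [], [], False
    pop = lambda: stack.pop() if stack else 0
    for _ in range(max_steps):
        c = grid[y][x]
        if stringmode:
            if c == '"':
                stringmode = False
            else:
                stack.append(ord(c))
        elif c.isdigit():
            stack.append(int(c))
        elif c in '+-*/%':
            a, b = pop(), pop()
            stack.append({'+': b + a, '-': b - a, '*': b * a,
                          '/': b // a if a else 0,
                          '%': b % a if a else 0}[c])
        elif c == '!':
            stack.append(0 if pop() else 1)
        elif c == '`':                     # 1 if second value > top value
            a, b = pop(), pop()
            stack.append(1 if b > a else 0)
        elif c == '"':
            stringmode = True
        elif c == ':':
            a = pop(); stack += [a, a]     # dup
        elif c == '$':
            pop()                          # discard
        elif c == '\\':
            a, b = pop(), pop(); stack += [a, b]   # swap
        elif c in '><^v':
            dx, dy = {'>': (1, 0), '<': (-1, 0), '^': (0, -1), 'v': (0, 1)}[c]
        elif c == '_':
            dx, dy = (1, 0) if pop() == 0 else (-1, 0)
        elif c == '|':
            dx, dy = (0, 1) if pop() == 0 else (0, -1)
        elif c == '#':                     # skip the next cell
            x, y = (x + dx) % width, (y + dy) % height
        elif c == '.':
            out.append(str(pop()))
        elif c == ',':
            out.append(chr(pop()))
        elif c == 'g':
            yy, xx = pop(), pop()
            stack.append(ord(grid[yy][xx]))
        elif c == 'p':
            yy, xx, v = pop(), pop(), pop()
            grid[yy][xx] = chr(v)
        elif c == '@':
            break
        x, y = (x + dx) % width, (y + dy) % height
    return ''.join(out)

quine = ":0g,:93+`#@_1+"
print(run_befunge(quine) == quine)  # True: the program prints itself
```

Running it on the self-reproducing program confirms that its output is exactly its own source.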

One more, which I'll let you figure out for yourself. Here's a program that prompts you for a number, and computes its factorial:

```
                                    v
>v"Please enter a number (1-16) : "0<
,:             >$*99g1-:99p#v_.25*,@
^_&:1-99p>:1-:!|10          <
         ^     <
```

## Restudying Math in light of The First Scientific Proof of God?

A reader sent me a link to [this amusing blog][blog]. It's by a guy named George Shollenberger, who claims to have devised The First scientific Proof of God (and yes, he always capitalizes it like that).

George suffers from some rather serious delusions of grandeur. Here's a quote from his "About Me" bio on his blog:

>I retired in 1994 and applyied my hard and soft research experience to today's
>world social problems. After retirement, my dual research career led to my
>discovery of the first scientific proof of God. This proof unifies the fields
>of science and theology. As a result of my book, major changes can be expected
>throughout the world.
>
>...
>
>I expect these blogs and the related blogs of other people to be detected by
>Jesus Christ and those higher intelligent humans who already live on other
>planets.

So far, he has articles on his blog about how his wonderful proof should cause us to start over again in the fields of science, mathematics, theology, education, medical care, economics, and religion.

Alas, the actual First Scientific Proof of God is [only available in his book][buymybook]. But we can at least look at why he thinks we need to [restudy the field of mathematics][restudy].

>The field of mathematics is divided into pure and applied mathematics. Pure
>mathematicians use mathematics to express their own thoughts and thus express
>the maximum degree of freedom found in the field of mathematics. On the other
>hand, applied mathematicians lose a degree of their freedom because they use
>mathematics to express the thoughts of people in the fields they serve. Most
>mathematicians are applied mathematicians and serve either counters (e.g.,
>accountants, pollsters, etc.) or sciences (e.g., physicists, sociologists,
>etc.).

That's a pretty insulting characterization of mathematicians, but since George is an engineer by training, it's not too surprising - that's a fairly common attitude about mathematicians among engineers.

>The field of physics is served by applied mathematicians who are called
>mathematical physicists. These physicists are the cause of the separation of
>theologians and scientists in the 17th century, after Aristotle's science was
>being challenged and the scientific method was beginning to be applied to all
>sciences. But, these mathematical physicists did not challenge Aristotle's
>meaning of infinity. Instead, they accepted Aristotle's infinity, which is
>indeterminate and expressed by infinite series such as the series of integers (
>1, 2, 3, ....etc.). Thus, to the mathematical physicist, a determinate infinity
>does not exist. This is why many of today's physicists reject the idea of an
>infinite God who creates the universe. I argue that this is a major error in
>the field of mathematics and explain this error in the first chapter of The
>First Scientific Proof of God.

So, a quick aside: what was Aristotle's infinity? The best article I could find quickly is [here][aristotle-infinity]. The short version? Aristotle believed that infinity doesn't really *exist*. After all, there's no number you can point to and say "That's infinity". You can never assemble a quantity of apples where you can say "There's infinity apples in there". Aristotle's idea about infinity was that it's a term that describes a *potential*, but not an *actual* number. He also went on to describe two different kinds of infinity - infinity by division (which describes zero, which he wasn't sure should really be considered a *number*); and infinity by addition (which corresponds to what we normally think of as infinity).

So. George's argument comes down to: mathematics, and in particular, mathematical physics, needs to be rebooted, because it uses the idea of infinity as potential - that is, there is no specific *number* that we can call infinity. So since our math says that there isn't, well, that means we should throw it all away. Because, you see, according to George, there *is* a number infinity. It's spelled G O D.

Except, of course, George is wrong. George needs to be introduced to John Conway, who devised the surreal numbers, which *do* contain infinity as a number. Oh, well.

Even if you were to accept his proposition, what difference would it make?

Well - there's two ways it could go.

We could go the [surreal][onag] [numbers][surreal] route. In the surreal numbers (or several similar alternatives), infinity *does* exist as a number; but despite that, it has the properties that we expect of infinity; e.g., dividing it by two doesn't change it. If we did that, it would have no real effect on science: surreal numbers are the same as normal reals in most ways; they differ when you hit infinitesimals and infinities.

If we didn't go the surreal-ish route, then we're screwed. If infinity is a *real* real number, then the entire number system collapses. What's 1/0? If infinity is *real*, then 1/0 = infinity. What about 2/0? Is that 2*infinity? If it is, it makes no sense; if it isn't, it makes no sense.

>I believe that the field of mathematics must restudy their work by giving ample
>consideration to the nature of man's symbolic languages, the nature of the
>human mind, Plato's negative, and the nature of dialectical thinking.

Plato's negative is, pretty much, the negative of intuitionistic logic. Plato claimed that there's a difference between X, not-X, and the opposite of X. His notion of the opposite of X is the intuitionistic logic notion of not-X; his notion of not-X is the intuitionistic notion of "I don't have a proof of X".

In other words, George is hopelessly ignorant of real mathematics; and his reasoning about what needs to be changed about math makes no sense at all.

[aristotle-infinity]: http://plato.stanford.edu/entries/aristotle-mathematics/supplement3.html

[blog]: http://georgeshollenberger.blogspot.com

[restudy]: http://georgeshollenberger.blogspot.com/2006/07/restudying-field-of-mathematics.html

[buymybook]: http://rockstarramblings.blogspot.com/2006/06/doggerel-19-read-my-book.html

[surreal]: http://www.tondering.dk/claus/surreal.html

[onag]: http://www.akpeters.com/product.asp?ProdCode=1276

## Towards a Model for Linear Logic: Monoidal Categories

Time to come back to category theory from our side-trip. Category theory provides a good framework for defining linear logic - and for building a Curry-Howard style type system for describing computations with *state* that evolves over time. In particular, it gives us a valid *model* for the part of linear logic (the multiplicative operators) that isn't tractable using other modeling techniques.

I'm going to show you the relationship between models for linear logic and category theory. It's going to take a couple of days to go through the whole shebang, but even the parts that we need to go through to get there are fun.

The first step is to define a *monoidal category*, also known as a *tensor* category. We've already done most of that when we built [monads][monads] earlier this week; a monad is a kind of monoidal category.

A monoidal category is a category C with an *identity object* I, and a *binary functor* ⊗ : C × C → C. This binary functor is called *tensor*. Tensor has three required properties defined using *natural isomorphisms*, called α, λ, and ρ.

α says that tensor must be associative: α_{(A,B,C)} : (A ⊗ B) ⊗ C → A ⊗ (B ⊗ C).

λ says that tensor has a *left identity*: λ_{A} : (I ⊗ A) → A.

ρ says that tensor has a *right identity*, built from the same identity object: ρ_{A} : (A ⊗ I) → A.

And finally, the natural transformations need to make the standard coherence diagrams - the pentagon diagram for α, and the triangle diagram relating λ and ρ - commute for all values of A, B, and C. These are known as the *coherence conditions* for the monoidal natural transformations.
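In equational form, the two coherence conditions are the standard pentagon and triangle identities:

```latex
% Pentagon: both routes from ((A \otimes B) \otimes C) \otimes D
% to A \otimes (B \otimes (C \otimes D)) agree:
\alpha_{A,B,C\otimes D} \circ \alpha_{A\otimes B,C,D}
  = (1_A \otimes \alpha_{B,C,D}) \circ \alpha_{A,B\otimes C,D} \circ (\alpha_{A,B,C} \otimes 1_D)

% Triangle: the two ways of cancelling the unit in (A \otimes I) \otimes B agree:
(1_A \otimes \lambda_B) \circ \alpha_{A,I,B} = \rho_A \otimes 1_B
```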

A monoidal category is said to be *strict* if α, λ, and ρ are all identities. It turns out that every monoidal category is *equivalent* (in the sense of natural isomorphism) to a strict monoidal category.

And now, here comes an example of just why category theory is useful. In some of the [detailed models of quantum physics][quantum-condense], physicists try to describe the structure of different kinds of matter using what they call *topological orders*. The standard theory for describing the different states of matter is called *Landau* theory; it turns out that Landau theory doesn't describe the topological order of high-temperature superconductors or very-low-temperature condensate states. Category theory - in particular, the theory surrounding strict monoidal categories - does a better job of describing the topological order of those states of matter than any other mathematical approach that's been tried so far.

[monads]: http://scienceblogs.com/goodmath/2006/07/monads_and_programming_languag_1.php

[quantum-condense]: http://dao.mit.edu/~wen/pub/qorder.html

## Tangled Bank #58

Issue number 58 of [the Tangled Bank][tb] is now live at Salto Sobrius. Head on over, take a look, and plan to spend some time reading some of the net's best science blogging from the last two weeks.

[tb]: http://saltosobrius.blogspot.com/2006/07/tangled-bank-58.html

## Linear Logic

[Monday][yesterday], I said that I needed to introduce the sequent calculus, because it would be useful for describing things like linear logic. Today we're going to take a quick look at linear logic - in particular, at *propositional* linear logic; you can expand to predicate linear logic in a way very similar to the way we went from propositional logic to first order predicate logic.

So first: what the heck *is* linear logic?

The answer is, basically, logic where statements are treated as *resources*. So using a statement in an inference step in linear logic *consumes* the resource. This is a strange notion if you're coming from a regular predicate logic. For example, in regular predicate logic, if we have the statements: "A", "A ⇒ B", and "A ⇒ C", we know that we can conclude "B ∧ C". In linear logic, that's not true: using either implication statement would *consume* the "A". So we could infer "B", or we could infer "C", but we could *not* infer both.

When people talk about linear logic, and why it makes sense, they almost always use a vending machine analogy. Suppose I walk up to a vending machine, and I want to buy a soda and a candy bar. I've got 8 quarters in my pocket; the soda costs $1.50; the candy bar costs $.75.

In linear logic, I'd say something like the following (the syntax is wrong, but we'll get to syntax later): (Q,Q,Q,Q,Q,Q,Q,Q), (Q,Q,Q,Q,Q,Q) ⇒ Soda, (Q,Q,Q) ⇒ Candy.

Using the rules, I can buy a soda by "spending" 6 of my Qs. I wind up with "(Q,Q) ∧ Soda", and "(Q,Q,Q) ⇒ Candy". I've consumed 6 Qs, and I've consumed the "(Q,Q,Q,Q,Q,Q) ⇒ Soda" implication. I can't do anything else; I don't have enough Qs left to buy the candy bar.
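Here's a sketch of that vending-machine reading of linear implication in Python, modeling a pool of resources as a multiset. (The helper name `apply_rule` is my own invention, not standard linear-logic machinery: it consumes the premises of an implication and produces its conclusion, exactly once.)

```python
# Toy model of linear implication as resource consumption: a rule
# fires by removing its consumed resources from a multiset and adding
# its produced resources. Once spent, resources are gone.
from collections import Counter

def apply_rule(resources, consumed, produced):
    need, have = Counter(consumed), Counter(resources)
    if any(have[r] < n for r, n in need.items()):
        return None                    # not enough resources: rule can't fire
    return have - need + Counter(produced)

pocket = Counter({'Q': 8})                            # eight quarters
pocket = apply_rule(pocket, ['Q'] * 6, ['Soda'])      # spend 6 Qs on a soda
print(pocket)                                         # two Qs and a soda left
print(apply_rule(pocket, ['Q'] * 3, ['Candy']))       # None: can't afford candy
```

The second call fails exactly the way the logic says it should: with only two Qs left, the candy implication can't be used.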

The basic statements in linear logic, with their intuitive meanings, are:

1. A ⊗ B. This is called *multiplicative conjunction*, also known as *simultaneous occurrence*. This means that I definitely have both A and B. This has an *identity unit* called "1", such that A ⊗ 1 ≡ 1 ⊗ A ≡ A. 1 represents the idea of the absence of any resource.

2. A & B : *additive conjunction*, aka *internal choice*. I can have either A *or* B, and I get to pick which one. The unit is ⊤, pronounced "top", and represents a sort of "I don't care" value.

3. A ⊕ B. This is called *additive disjunction*, also known as *external choice*. It means that I get either A or B, but I don't get to pick which one. The unit here is 0, and represents the lack of an outcome.

4. A ⅋ B : *multiplicative disjunction*, aka *parallel occurrence*; I *need to have* both A and B at the same time. The unit for this is ⊥, pronounced "bottom", and represents the absence of a goal. In the vending machine metaphor, think of it as the "cancel" button: I decided I don't want any, so I'm not going to spend my resources.

5. A -o B : Linear implication. Consume resource A to *produce* resource B. The normal symbol for this looks like an arrow with a circle instead of an arrowhead; this operator is often called "lolly" because of what the normal symbol looks like. I'm stuck writing it as "-o", because there's no HTML entity for the symbol.

6. !A : Positive exponentiation, pronounced "Of course A". This *produces* an arbitrary number of As. Equivalent to A ⊗ !A.

7. ?A : Negative exponentiation, pronounced "Why not A?". This *consumes* As.

Ok. So, suppose I want to talk about buying lunch. I've got 10 dollars to buy lunch. I'll be full if I have a salad, a coke, and a tuna sandwich. Let's write "I've got a dollar" as "D", "I have a salad" as "S", "I have a coke" as "C", "I have a tuna sandwich" as "T", and finally, "I'm full" as "F". Then:

* I can write "I have 10 dollars" in LL as: "(D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D)".

* I can write "tuna sandwich and salad and coke" - the group of things I want all of - as: "T ⅋ S ⅋ C".

* I can say that I'll be full if I have lunch as: "(T ⅋ S ⅋ C) -o F".

If I want to talk about buying lunch, I can describe the prices of the things I want using implication:

* A coke costs one dollar: "D -o C"; I can spend one dollar, and in return I get one coke.

* A salad costs 3 dollars: "(D ⊗ D ⊗ D) -o S"

* A tuna sandwich also costs three dollars: "(D ⊗ D ⊗ D) -o T"

Now, I can do some reasoning with these.

* By taking 1 of the dollars, I can get one C. That leaves me with "D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C".

* By taking 3 D, I can get one S: "D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C ⊗ S".

* By taking 3 D, I can get one T: "D ⊗ D ⊗ D ⊗ C ⊗ S ⊗ T".

* Now I've got my lunch. I can eat it and be full, with three dollars left: "D ⊗ D ⊗ D ⊗ F".

Just from this trivial example, you should be able to see why linear logic is cool: the idea of being able to talk about *how* resources are used in an inference process or a computation is really valuable, and linear logic gives you the ability to really work with the concept of resource in a solid, formal way. If you think of it in terms of the Curry-Howard isomorphism - the [types-as-proofs concept from the simply typed lambda calculus][types-as-proofs] - you can imagine using linear logic for types of values that are *consumed* by a computation - i.e., they're no longer available once they've been used.
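To make the consumed-values idea concrete, here's a toy Python sketch of a "use-once" value. Python can't enforce linearity statically, so this only captures the runtime flavor of the idea; the `Linear` class is purely illustrative.

```python
# A toy "use-once" value: a runtime approximation of a linear type.
# A real linear type system would reject the second use at compile time.
class Linear:
    def __init__(self, value):
        self._value = value
        self._used = False

    def consume(self):
        if self._used:
            raise RuntimeError("linear value used twice")
        self._used = True
        return self._value

token = Linear("A")
print(token.consume())       # first use is fine
try:
    token.consume()          # second use is an error
except RuntimeError as err:
    print(err)
```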

I'm going to adopt a slightly different format for the sequents for working in linear logic. The way that I produced the center bars in yesterday's post was really painful to write, and didn't even end up looking particularly good. So, the way that I'm going to write the sequents in this post is to wrap the "top" and "bottom" of the sequent in curly braces, and separate them by a "⇒", as in:

{GivenContext :- GivenEntailment } ⇒ { InferredContext :- InferredEntailment}

Now, let's take a look at the sequent rules for propositional linear logic. I'm using the version of these rules from [Patrick Lincoln's SIGACT '92 paper][sigact92]. Yeah, I know that's a bit of a big load there. Don't worry about it too much; the important part is the concept described up above; the sequents are useful to look at when you have a hard time figuring out what some operator means in inference. For example, you can see the difference between & and ⊕ (which I found confusing at first) by looking at their sequents, to see what they do.

1. **Identity**: { } ⇒ { A :- A }

2. **Cut**: { Γ_{1} :- A, Σ_{1} ; Γ_{2}, A :- Σ_{2} } ⇒ { Γ_{1}, Γ_{2} :- Σ_{1}, Σ_{2} }

3. **Exchange Left**: { Γ_{1}, A, B, Γ_{2} :- Σ } ⇒ { Γ_{1}, B, A, Γ_{2} :- Σ }

4. **Exchange Right**: { Γ :- Σ_{1}, A, B, Σ_{2} } ⇒ { Γ :- Σ_{1}, B, A, Σ_{2}}

5. **⊗ Left**: {Γ, A, B :- Σ} ⇒ { Γ, A ⊗ B :- Σ }

6. **⊗ Right**: { Γ_{1} :- A, Σ_{1} ; Γ_{2} :- B, Σ_{2} } ⇒ { Γ_{1}, Γ_{2} :- (A ⊗ B), Σ_{1}, Σ_{2} }

7. **-o Left**: { Γ_{1} :- A, Σ_{1} ; Γ_{2}, B :- Σ_{2} } ⇒ { Γ_{1}, Γ_{2}, (A -o B) :- Σ_{1}, Σ_{2} }

8. **-o Right**: { Γ, A :- B, Σ} ⇒ { Γ :- A -o B, Σ}

9. **⅋ Left**: { Γ_{1}, A :- Σ_{1} ; Γ_{2}, B :- Σ_{2} } ⇒ { Γ_{1}, Γ_{2}, (A ⅋ B) :- Σ_{1}, Σ_{2} }

10. **⅋ Right**: { Γ :- A, B, Σ} ⇒ { Γ :- A ⅋ B, Σ}

11. **& Left**: { Γ, A :- Σ } ⇒ { Γ,A & B :- Σ}/{ Γ, B :- Σ } ⇒ { Γ,A & B :- Σ}

12. **& Right**: { Γ :- A, Σ ; Γ :- B, Σ } ⇒ { Γ :- (A & B), Σ }

13. **⊕ Left**: { Γ, A :- Σ ; Γ, B :- Σ } ⇒ { Γ, A ⊕ B :- Σ }

14. **⊕ Right**: {Γ :- A,Σ} ⇒ {Γ :- A ⊕ B, Σ}/{Γ :- B,Σ} ⇒ {Γ :- A ⊕ B, Σ}

15. **!W**: {Γ :- Σ} ⇒ {Γ,!A :- Σ}

16. **!C**: {Γ,!A,!A :- Σ} ⇒ { Γ,!A :- Σ }

17. **!D**: { Γ, A :- Σ} ⇒ { Γ,!A :- Σ}

18. **!S**: { !Γ :- A, ?Σ} ⇒ { !Γ :- !A, ?Σ}

19. **?W**: {Γ :- Σ} ⇒ {Γ:- ?A, Σ}

20. **?C**: {Γ :- ?A,?A, Σ} ⇒ { Γ :- ?A, Σ }

21. **?D**: { Γ :- A, Σ} ⇒ { Γ :- ?A, Σ}

22. **?S**: { !Γ, A :- ?Σ} ⇒ { !Γ,?A :- ?Σ}

23. **⊥ Exp Left**: { Γ :- A, Σ} ⇒ { Γ, A^{⊥} :- Σ}

24. **⊥ Exp Right**: { Γ, A :- Σ} ⇒ { Γ :- A^{⊥}, Σ}

25. **0 Left**: { Γ, 0 :- Σ } *(an axiom - nothing needs to be inferred)*

26. **⊤ Right**: { Γ :- ⊤,Σ}

27. **⊥ Left**: { ⊥ :- }

28. **⊥ Right**: {Γ :- Σ} ⇒ {Γ :- ⊥,Σ}

29. **1 Left**: { Γ :- Σ} ⇒ { Γ,1 :- Σ}

30. **1 Right**: { :- 1}

This is long enough that I'm not going to get into how this hooks into category theory today, except to point out that if you look carefully at the multiplicatives and exponentials, they might seem a tad familiar.

[types-as-proofs]: http://goodmath.blogspot.com/2006/06/finally-modeling-lambda-calculus.html

[sigact92]: http://www.csl.sri.com/~lincoln/papers/sigact92.ps

[yesterday]: http://scienceblogs.com/goodmath/2006/07/a_brief_diversion_sequent_calc.php

## Rehashing Conservative Liars: Did Edwards tell the truth about poverty?

You might remember my post last week about [conservatives who can't subtract][subtract]: in particular, about how a conservative blogger who goes by "Captain Ed" attacked John Edwards for saying there are 37 million people in poverty in the US. It turned out that good ol' Ed wasn't capable of doing simple subtraction.

You might also remember a post about [lying with statistics][liar], discussing an article by Tim Worstall, who quoted a newspaper piece about abortion rates, and tried to misuse the statistics to argue something about sexual education in the UK.

Well, Tim (the target of the second piece) was pretty ticked off at my criticism. So now he's back - not with a defense of his own piece (he tried that already), but with [a response to the criticism of the first piece][liar-subtracts]. Of course, he tries to defend our good captain not by defending his math - that is, by claiming that *the point he made* was correct - but by moving the goalposts: claiming that the *real* point of the piece wasn't to call Edwards a liar, but to make an economic argument about whether the people below the poverty line are correctly described as poor - because, lucky duckies that they are, they get some money from the earned income tax credit! So obviously they're not *really* poor!

>"Thirty-seven million of our people, worried about feeding and clothing their children," he said to his audience. "Aren't we better than that?"
>
>and the link is to this table at the US Census Bureau which indeed states that there are some 37 million or so below the poverty line.
>
>Right, so that must mean that there really are 37 million poor people in the USA, right? So what's Tim bitchin' about? Well, how about the fact that those figures which show 37 million below the poverty line do not in fact show that there are 37 million poor people? Weird thought I know but nope, it ain't true.
>
>For this reason:
>
>The official poverty definition uses money income before taxes and does not include capital gains or noncash benefits (such as public housing, Medicaid, and food stamps).
>
>What is being measured in the first definition of poverty is how many people there are below the poverty line before we try to do anything about it.

This is what those of us who know anything about logic refer to as a "non sequitur". That is, it's a conclusion that has nothing to do with what came before it. It's one of the oldest and sloppiest rhetorical tactics in the book. (Literally - I've got a book on classical rhetoric on my bookshelf, and it's cited in there.)

Edwards was talking about the division of wealth in the US: we have people like the CEOs of big companies taking home unbelievable amounts of money, while at the same time, the income of the people in the middle class is declining slightly in real terms, and the income of the people at the bottom isn't even approaching what they need to get by. There are 37 million people below the poverty line in this country in terms of their income. Some portion (not specified in the only source Tim and the captain cite) of those people are working, and despite working, are still not making enough money to get by. This is indisputable: there are many people in this country who are working, but who still require government assistance just to pay their bills. That's what Edwards said.

What does that have to do with whether or not the government gives them some token assistance? The point is that our economic policies quite deliberately *refuse* to do anything to help the people on the bottom of the economic ladder become self-sufficient. Witness the recent refusal to even allow an open debate in Congress on increasing the minimum wage, even while the members of Congress gave themselves a raise. A person with a family, working full time for the minimum wage, is left *below* the poverty line. But it's not considered an important issue by the people currently running our government.

[subtract]: http://scienceblogs.com/goodmath/2006/07/subtraction_math_too_hard_for.php

[liar]: http://scienceblogs.com/goodmath/2006/07/lying_with_statistics_abortion.php

[liar-subtracts]: http://timworstall.typepad.com/timworstall/2006/07/good_maths_and_.html

## A Brief Diversion: Sequent Calculus

*(This post has been modified to correct some errors and add some clarifications in response to comments from alert readers. Thanks for the corrections!)*

Today, we're going to take a brief diversion from category theory to play with some logic. There are some really neat connections between variant logics and category theory. I'm planning on showing a bit about the connections between category theory and one of those, called *linear logic*. But the easiest way to present things like linear logic is using a mechanism based on **sequent calculus**.

Sequent calculus is a deduction system for performing reasoning in first-order predicate logic. But its notation and general principles are useful for all sorts of reasoning systems, including many different logics, all sorts of type theories, etc. The specific sequent calculus that I'm going to talk about is sometimes called system-LK; the general category of things that use this basic kind of rules is called Gentzen systems.

The sequent calculus consists of a set of inference rules, each of which is normally written like a fraction: the top of the fraction is what you know before applying the rule; the bottom is what you can conclude. The statements that the rules operate on are called *sequents*, and they are always of the form:

CONTEXTS, Predicates :- CONTEXTS, Predicates

The "CONTEXTS" are sets of predicates that you already know are true. The ":-" is read "entails"; it means that the *conjunction* of the statements and contexts to the left of it can prove the *disjunction* of the statements to the right of it. In predicate logic, the conjunction is logical and, and disjunction is logical or, so you can read the statements as if "," is "∧" on the left of the ":-", and "∨" on the right. *(Note: this paragraph was modified to correct a dumb error that I made that was pointed out by commenter Canuckistani.)*

Contexts are generally written using capital Greek letters; predicates are generally written using uppercase English letters. We often put the name of an inference rule to the right of the separator line for the sequent.

For example, look at the following sequent:

Γ :- Δ

————— Weakening-Left

Γ,A :- Δ

This sequent is named Weakening-Left; the top says "Given Γ, everything in Δ can be proved"; and the bottom says "Using Γ plus the fact that A is true, everything in Δ can be proved". The full sequent basically says: if Δ is provable given Γ, then it will still be provable when A is added to Γ; in other words, adding a true fact won't invalidate any proofs that were valid before the addition of A. *(Note: this paragraph was modified to correct an error pointed out by a commenter.)*

The sequent calculus is nothing but a complete set of rules that you can use to perform any inference in predicate calculus. A few quick syntactic notes, and I'll show you the full set of rules.

1. Uppercase Greek letters are contexts.

2. Uppercase English letters are *statements*.

3. Lowercase English letters are *terms*; that is, the objects that predicates can reason about, or variables representing objects.

4. A[b] is a statement A that contains the term b in some way.

5. A[b/c] means A with the term "b" replaced by the term "c".

-------

First, two very basic rules:

1.

———— (Identity)

A :- A

2. Γ :- A, Δ Σ, A :- Π

—————————————— (Cut)

Γ,Σ :- Δ, Π

Now, there's a bunch of rules that have right and left counterparts. They're duals of each other - move terms across the ":-" and switch from ∧ to ∨ or vice-versa.

3. Γ, A :- Δ

————————— (Left And 1)

Γ, A ∧ B :- Δ

4. Γ :- A, Δ

——————— (Right Or 1)

Γ :- A ∨ B, Δ

5. Γ, B :- Δ

——————— (Left And 2)

Γ,A ∧ B :- Δ

6. Γ :- B, Δ

——————— (Right Or 2)

Γ :- A ∨ B, Δ

7. Γ, A :- Δ Σ,B :- Π

————————————— (Left Or)

Γ,Σ, A ∨ B :- Δ,Π

8. Γ :- A,Δ Σ :- B,Π

—————————— (Right And)

Γ,Σ :- A ∧ B, Δ,Π

9. Γ :- A,Δ

————— (Left Not)

Γ, ¬A :- Δ

10. Γ,A :- Δ

——————— (Right Not)

Γ :- ¬A, Δ

11. Γ :- A,Δ Σ,B :- Π

————————————— (Left Implies)

Γ, Σ, A → B :- Δ,Π

12. Γ,A[y] :- Δ *(y bound)*

————————————— (Left Forall)

Γ,∀x A[x/y] :- Δ

13. Γ :- A[y],Δ *(y free)*

————————————— (Right Forall)

Γ :- ∀x A[x/y],Δ

14. Γ, A[y] :- Δ *(y bound)*

———————————— (Left Exists)

Γ,∃x A[x/y] :- Δ

15. Γ :- A[y], Δ *(y free)*

————————————(Right Exists)

Γ :- ∃x A[x/y], Δ

16. Γ :- Δ

—————— (Left Weakening)

Γ, A :- Δ

17. Γ :- Δ

—————— (Right Weakening)

Γ :- A, Δ

18. Γ, A, A :- Δ

——————— (Left Contraction)

Γ,A :- Δ

19. Γ :- A, A, Δ

——————— (Right Contraction)

Γ :- A, Δ

20. Γ, A, B, Δ :- Σ

————————— (Left Permutation)

Γ,B, A, Δ :- Σ

21. Γ :- Δ, A, B, Σ

————————— (Right Permutation)

Γ :- Δ, B, A, Σ

Here's an example of how we can use sequents to derive A ∨ ¬ A:

1. Context empty. Apply Identity.

2. A :- A. Apply Right Not.

3. empty :- ¬A, A. Apply Right Or 2.

4. empty :- A ∨ ¬A, A. Apply Right Permutation.

5. empty :- A, A ∨ ¬A. Apply Right Or 1.

6. empty :- A ∨ ¬A, A ∨ ¬A. Apply Right Contraction.

7. empty :- A ∨ ¬A
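That derivation can be mechanized with a small sketch - my own hypothetical encoding of sequents as (left, right) tuples of formula strings, with one function per rule used in the proof:

```python
# Hypothetical encoding (not from the post): a sequent is a pair of tuples
# of formula strings.  Each function applies one of the rules used above.
def identity(a):                      # Identity axiom:  A :- A
    return ((a,), (a,))

def right_not(s):                     # Γ, A :- Δ  ⇒  Γ :- ¬A, Δ
    left, right = s
    return (left[:-1], ("¬" + left[-1],) + right)

def right_or(s, other, first=True):   # Γ :- A, Δ  ⇒  Γ :- A ∨ B, Δ
    left, right = s                   # first=False builds B ∨ A instead
    a = right[0]
    new = f"{a} ∨ {other}" if first else f"{other} ∨ {a}"
    return (left, (new,) + right[1:])

def permute_right(s):                 # swap the first two right-side formulas
    left, right = s
    return (left, (right[1], right[0]) + right[2:])

def contract_right(s):                # Γ :- A, A, Δ  ⇒  Γ :- A, Δ
    left, right = s
    assert right[0] == right[1]
    return (left, right[1:])

s = identity("A")                     # A :- A
s = right_not(s)                      # :- ¬A, A
s = right_or(s, "A", first=False)     # :- A ∨ ¬A, A
s = permute_right(s)                  # :- A, A ∨ ¬A
s = right_or(s, "¬A", first=True)     # :- A ∨ ¬A, A ∨ ¬A
s = contract_right(s)                 # :- A ∨ ¬A
print(s)                              # ((), ('A ∨ ¬A',))
```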

If you look *carefully* at the rules, they actually make a lot of sense. The only ones that look a bit strange are the "forall" rules; and for those, you need to remember that the variable is *free* on the top of the sequent.

A lot of logics can be described using Gentzen systems; from type theory, to temporal logics, to all manner of other systems. They're a very powerful tool for describing all manner of inference systems.

## Mathematicians and Evolution: My Two Cents

There's been a bunch of discussion here at ScienceBlogs about whether or not mathematicians are qualified to talk about evolution, triggered by [an article by ID-guy Casey Luskin][luskin]. So far, [Razib at Gene Expression][gnxp], Jason at EvolutionBlog ([here][evblog1] and [here][evblog2]), and [John at Stranger Fruit][sf] have all commented on the subject. So I thought it was about time for me to toss in my two cents as well, given that I'm a math geek who's done rather a lot of writing about evolution here at this blog.

I don't want to spend a lot of time rehashing what's already been said by others. So I'll start off by just saying that I absolutely agree that just being a mathematician gives you absolutely *no* qualifications to talk about evolution, and that an argument about evolution should *not* be considered any more credible because it comes from a PhD in mathematics rather than a plumber. That's not to say that there is no role for mathematics in the discussion of evolution - just that being a mathematician doesn't give you any automatic expertise or credibility about the subject. A mathematician who wants to study the mathematics of evolution needs to *study evolution* - and it's the knowledge of evolution that they gain from studying it that gives them credibility about the topic, not their background in mathematics. Luskin's argument is nothing but an attempt to cover up for the fact that the ID "scientists petition" has a glaring lack of signatories who actually have any qualifications to really discuss evolution.

What I would like to add to the discussion is something about what I do here on this blog with respect to writing about evolution. As I've said plenty of times, I'm a computer scientist. I certainly have no qualifications to talk about evolution: I've never done any formal academic study of evolution; I've certainly never done any professional work involving evolution; I can barely follow [work done by qualified mathematicians who *do* study evolution][gm-good-ev].

But if you look at my writing on this blog, what I've mainly done is critiques of the IDists and creationists who attempt to argue against evolution. And here's the important thing: the math that they do - the kind of arguments coming from the people that Luskin claims are uniquely well suited to argue about evolution - are so utterly, appallingly horrible that it doesn't take a background in evolution to be able to tear them to ribbons.

To give an extreme example, remember the [infamous Woodmorappe paper][woodie] about Noah's ark? You don't need to be a statistician to know that using the *median* is wrong. It's such a shallow and obvious error that anyone who knows any math at all should be able to knock it right down. *Every* mathematical argument that I've seen from IDists and/or creationists has exactly that kind of problems: errors so fundamental and so obvious that even without having to get into the detailed study of evolution, anyone who takes the time to actually *look at the math* can see why it's wrong. It's not always as bad as Woodie, but just look at things like [Dembski's specified complexity][dembski-sc]: anyone who knows information theory can see that it's a self-contradicting definition; you don't need to be an expert in mathematical biology to see the problem - the problem is obvious in the math itself.

That fact in itself should be enough to utterly discredit Luskin's argument: the so-called mathematicians that he's so proud to have on his side aren't even capable of putting together remotely competent mathematical arguments about evolution.

[luskin]: http://www.evolutionnews.org/2006/07/mathematicians_and_evolution.html

[gnxp]: http://scienceblogs.com/gnxp/2006/07/math_and_creation.php

[evblog1]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t.php

[evblog2]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t_1.php

[sf]: http://scienceblogs.com/strangerfruit/2006/07/more_on_mathematicians_1.php

[gm-good-ev]: http://scienceblogs.com/goodmath/2006/07/using_good_math_to_study_evolu.php

[woodie]: http://goodmath.blogspot.com/2006/06/more-aig-lying-with-statistics-john.html

[dembski-sc]: http://scienceblogs.com/goodmath/2006/06/dembskis_profound_lack_of_comp.php

## Unofficial "Ask a ScienceBlogger": Childrens Books (UPDATED)

Over at fellow SBer [Worlds Fair][worldsfair], they've put up an unofficial "Ask a ScienceBlogger" question, about children's books:

> Are there any children's books that are dear to you, either as a child or a parent, and especially ones that perhaps strike a chord with those from a science sensibility? Just curious really. And it doesn't have to be a picture book, doesn't even have to be a children's book - just a book that, for whatever reason, worked for you.

I've got two kids, a girl who's almost six, and a boy who's three. And they're both showing serious signs of being pre-geeks. Whenever we go to a new place, the first thing they do is head for the bookshelves to see if there are any books they haven't seen yet. My daughter's school had a book fair last year, and we ended up spending a little over $100 on books for a kindergartener, and another $30 or so for the (then) 2yo. So obviously, I end up spending a lot of time reading children's books!

There are a few books that really stand out in my mind as being *special*:

1. "Giraffes Can't Dance", by Giles Andreae, illustrated by Guy Parker-Rees. This isn't a science book at all, but it's simply one of the most wonderful children's books I've seen. The story is wonderful, the rhythm and the rhyme structure are fantastic, and the art is bright and beautiful in a cartoonish sort of way.

2. "Our Family Tree: An Evolution Story", by Lisa Westberg Peters, illustrated by Lauren Stringer. We bought this one for my daughter last December, after PZ recommended it on his blog. It's a beautiful book - great art, and it's actually really *compelling* for a child. Most kids' science books have a kind of dull style to the writing; my daughter will generally want to read them once or twice, but once she understands what's in them, she doesn't want to read them again. But this one, she's either read it or had it read to her at least fifty different times.

3. "Rumble in the Jungle", "Commotion in the Ocean", et al., by Giles Andreae, illustrated by David Wojtowycz. We started getting this series when my daughter was threeish, because it's by the same author as "Giraffes"; she liked it so much that we've continued to get them for my son. Each book is about some environment and the animals that live in it. Each animal gets a little rhyme and a picture. The art is bright and colorful, and the rhymes are clever and very amusing to the kids.

UPDATE: I realized that I forgot one of *my* favorite books from my childhood: "The Lorax" by Dr. Seuss. In general, I'm not actually a huge Dr. Seuss fan: so many of his books are just rhyming nonsense. But "The Lorax" was one of my favorite books as a child; it turned me into a mini-environmentalist at the age of four. My son doesn't quite get the book yet; my daughter definitely does. No list of science-ish kids books would be complete without it.

[worldsfair]: http://scienceblogs.com/worldsfair/2006/07/childrens_book_roundup_and_a_q.php