## New Dimensions of Crackpottery

I have, in the past, ranted about how people abuse the word "dimension", but it's been a long time. One of my followers on twitter sent me a link to a remarkable piece of crackpottery which is a great example of how people simply do not understand what dimensions are.

There are several ways of defining "dimension" mathematically, but they all come back to one basic concept. A dimension is an abstract concept of a direction. We can use the number of dimensions in a space as a way of measuring properties of that space, but those properties all come back to the concept of direction. A dimension is neither a place nor a state of being: it is a direction.

Imagine that you're sitting in an abstract space. You're at one point. There's another point that I want you to go to. In order to uniquely identify your destination, how many directions do I need to mention?

If the space is a line, you only need one: I need to tell you the distance. There's only one possible direction that you can go, so all I need to tell you is how far. Since you only need one direction, the line is one-dimensional.

If the space is a plane, then I need to tell you two things. I could do that by saying "go right three steps, then up 4 steps", or I could say "turn 53 degrees clockwise, and then walk forward 5 steps." But there's no way I can tell you how to get to your destination with fewer than two directions. You need two directions, so the plane is two dimensional.
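To make the equivalence of those two sets of directions concrete, here's a small Python sketch (the exact angle is about 53.13 degrees, which is why "53 degrees and 5 steps" lands on essentially the same point as "right 3, up 4"):

```python
import math

# Cartesian directions: go right 3 steps, then up 4 steps.
cartesian = (3.0, 4.0)

# Polar directions: measuring the turn from the x-axis, rotate
# ~53.13 degrees and walk forward 5 steps.
angle = math.atan2(4, 3)                 # ~0.927 radians, ~53.13 degrees
polar = (5 * math.cos(angle), 5 * math.sin(angle))

print(math.degrees(angle))               # ~53.13
print(polar)                             # ~(3.0, 4.0): the same destination

# Either way, exactly two numbers are needed: the plane is two-dimensional.
```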

If the space is the interior of a cube, then you'll need three directions, which means that the cube is three dimensional.

On to the crackpottery!

> E=mc2 represents a translation across dimensions, from energy to matter.

No, it does not. Energy and matter are not dimensions. E=mc2 is a statement about the fundamental relation between energy and matter, not a statement about dimensions. Our universe could be 2 dimensional, 3 dimensional, 4 dimensional, or 22 dimensional: relativity would still mean the same thing, and it's not a statement about a "translation across dimensions".

> Energy can travel at the speed of light, and as Special Relativity tells us, from the perspective of light speed it takes no time to travel any distance. In this way, energy is not bound by time and space the way matter is. Therefore, it is in a way five-dimensional, or beyond time.

Bzzt, no.

Energy does not travel. Light travels, and light can transmit energy, but light isn't energy. Or, from another perspective, light is energy: but so is everything else. Matter and energy are the same thing.

From the perspective of light speed time most certainly does pass, and it does take plenty of time to travel a distance. Light takes roughly 8 minutes to get from the sun to the earth. What our intrepid author is trying to talk about here is the idea of time dilation. Time dilation describes the behavior of particles with mass when they move at high speeds. As a massive particle moves faster and approaches the speed of light, the mass of the particle increases, and the particle's experience of time slows. If you could accelerate a massive particle to the speed of light, its mass would become infinite, and time would stop for the particle. "If" is the key word there: it can't. It would require an infinite amount of energy to accelerate it to the speed of light.
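You can see both of those divergences in one formula: the standard Lorentz factor. A minimal sketch, with speed expressed as a fraction of c:

```python
import math

def lorentz_factor(v):
    """Time-dilation factor for a massive particle at speed v (fraction of c)."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

for v in (0.5, 0.9, 0.99, 0.9999):
    print(v, lorentz_factor(v))

# The factor grows without bound as v approaches 1: time dilation (and the
# energy needed to keep accelerating) diverges, so a massive particle can
# never actually reach the speed of light.
```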

But light has no mass. Relativity describes a strange property of the universe, which is hard to wrap your head around. Light always moves at the same speed, no matter your perspective. Take two spacecraft in outer space, which are completely stationary relative to each other. Shine a laser from one, and measure how long it takes for the light to get to the other. How fast is it going? Roughly 186,000 miles/second. Now, start one ship moving away from the other at half the speed of light. Repeat the experiment. One ship is moving away from the other at a speed of 93,000 miles/second. From the perspective of the moving ship, how fast is the light moving away from it towards the other ship? 186,000 miles/second. From the perspective of the stationary ship, how fast is the laser light approaching it? 186,000 miles/second.
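The invariance in that thought experiment falls out of relativistic velocity addition. Here's a quick sketch, with speeds as fractions of c:

```python
def add_velocities(u, v):
    """Relativistic velocity addition, with speeds as fractions of c."""
    return (u + v) / (1.0 + u * v)

# Light (speed 1) emitted from a ship receding at 0.5c still moves at c:
print(add_velocities(1.0, 0.5))   # 1.0
print(add_velocities(1.0, -0.5))  # 1.0
# And two sub-light speeds never combine past c:
print(add_velocities(0.5, 0.5))   # 0.8
```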

It's not that there's some magic thing about light that makes it move while time stops for it. Light is massless, so it can move at the speed of light. Time dilation doesn't apply because it has no mass.

But even if that weren't the case, it's got nothing to do with dimensionality. A dimension is a direction: what does this rubbish have to do with the different directions that light can move in? Absolutely nothing: the way he's using the word "dimension" has nothing to do with what dimensions mean.

> All “objects” or instances of matter are time-bound; they change, or die, or dissolve, or evaporate. Because they are subject to time, objects can be called four-dimensional.

Nope.

Everything in our universe is subject to time, because time is one of the dimensions of our universe. Time is a direction in which we move. We don't have direct control over it - but it's still a direction. When and where did I write this blog post, compared to where I am when you're reading it? The only way you can specify that is by saying how far my position has changed in four directions: 3 spatial directions, and time. Time is a dimension, and everything in our universe is subject to it, because you can't specify anything in our universe without all four dimensions.

> The enormous energy that can be released from a tiny object (as in an atomic bomb) demonstrates the role dimensions play in constructing reality.

No: the enormous energy that can be released from a tiny object demonstrates the fact that a small quantity of matter is equivalent to a large quantity of energy, just as you'd expect from that original equation, E=mc2. A gram of mass - something the size of a paperclip - is equivalent to about 25 million kilowatt-hours of energy - more than the total yearly energy use of 1,200 average Americans. That's damned impressive and profound, without needing to draw in any mangled notions of dimensions or magical dimensional powers.
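The paperclip number is easy to check with a couple of lines of arithmetic:

```python
mass_kg = 0.001                      # one gram - about a paperclip
c = 299_792_458.0                    # speed of light in m/s

energy_joules = mass_kg * c ** 2     # E = mc^2
energy_kwh = energy_joules / 3.6e6   # 1 kilowatt-hour = 3.6 million joules

print(energy_kwh)                    # ~25 million kilowatt-hours
```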

> Higher dimensions are mind-blowingly powerful; even infinitely so. Such power is so expansive that it can’t have form, definition, or identity, like a ball of uranium or a human being, without finding expression in lower dimensions. The limitations of time and space allow infinite power to do something other than constantly annihilate itself.

Do I even need to respond to this?

> Einstein’s equation E=mc2 bridges the fourth and the fifth dimensions, expressed as matter and energy. Imagine a discovery that bridges expressions of the fifth and sixth dimensions, such as energy and consciousness. Consciousness has the five-dimensional qualities of energy, but it can’t be “spent” in the way energy can because it doesn’t change form the way energy does. Therefore, it’s limitless.

And now we move from crackpottery to mysticism. Einstein's mass-energy equation doesn't bridge dimensions, and dimensionality has nothing to do with mass-energy equivalence. And now our crackpot friend suddenly throws in another claim, that consciousness is the sixth dimension? Or consciousness is the bridge between the fifth and sixth dimensions? It's hard to figure out just what he's saying here, except for the fact that it's got nothing to do with actual dimensions.

Is there a sixth dimension? Who knows? According to some modern theories, our universe actually has many more than the 4 dimensions that we directly experience. There could be 6 or 10 or 20 dimensions. But if there are, those dimensions are just other directions that things can move. They're not abstract concepts like "consciousness".

And of course, this is also remarkably sloppy logic:

1. Consciousness has the 5-dimensional qualities of energy.
2. Consciousness can't be spent.
3. Consciousness can't change form.
4. Therefore, consciousness is unlimited.

The first three statements are just blind assertions, given without evidence or argument. The fourth is presented as a conclusion drawn from the first three - but it's a non-sequitur. There's no real way to conclude the last statement given the first three. Even if you give him all the rope in the world, and accept those three statements as axioms - it's still garbage.

## The Intellectual Gravity of Brilliant Baseball Players

Some of my friends at work are baseball fans. I totally don't get baseball - to me, it's about as interesting as watching paint dry. But thankfully, some of my friends disagree, which is how I found this lovely little bit of crackpottery.

You see, there's a (former?) baseball player named Jose Canseco, who's been plastering twitter with his deep thoughts about science.


At first glance, this is funny, but not particularly interesting. I mean, it's a classic example of my mantra: the worst math is no math.

The core of this argument is pseudo-mathematical. The dumbass wants to argue that under current gravity, it wouldn't be possible for things the size of the dinosaurs to move around. The problem with this argument is that there's no problem! Things the size of dinosaurs could move about in current gravity with absolutely no difficulty. If you actually do the math, it works out fine.

If dinosaurs had the anatomy of human beings, then it's true that if you scaled them up, they wouldn't be able to walk. But they didn't. They had anatomical structures that were quite different from ours in order to support their massive size. For example, here's a bone from Quetzalcoatlus:

See the massive knob sticking out to the left? That's a muscle attachment point. It gave the muscles much greater torque than ours have, which they needed. (Yes, I know that Quetzalcoatlus wasn't really a dinosaur, but it is one of the kinds of animals that Canseco was talking about, and it was easy to find a really clear image.)

Most animal joints are, essentially, lever systems. Muscles attach to two different bones, which are connected by a hinge. The muscle attachment points stick out relative to the joint. When the muscles contract, that creates a torque which rotates the bones around the joint.

The lever is one of the most fundamental machines in the universe. It operates by the principle of torque. Our regular daily experience shows that levers act in a way that magnifies our efforts. I can't walk up to a car and lift it. But with a lever, I can. Muscle attachment points are levers. Take another look at that bone picture: what you're seeing is a massive lever that magnifies the efforts of the muscles. That's all that a large animal needed to be able to move around in earth's gravity.
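The arithmetic of a lever is just torque = force × lever arm, which is why a bigger attachment-point knob matters so much. A sketch with purely made-up numbers:

```python
# Torque = force x lever arm. The numbers here are purely illustrative.
muscle_force = 1000.0   # newtons of muscle contraction
short_arm = 0.02        # 2 cm attachment point (small knob)
long_arm = 0.10         # 10 cm attachment point (big knob)

print(muscle_force * short_arm)  # 20 N*m of torque at the joint
print(muscle_force * long_arm)   # 100 N*m: five times the torque,
                                 # from exactly the same muscle
```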

This isn't just speculation - this is stuff that's been modeled in great detail. And it's stuff that can be observed in modern day animals. Look at the skeleton of an elephant, and compare it to the skeleton of a dog. The gross structure is very similar - they are both quadrupedal mammals. But if you look at the bones, the muscle attachment points in the elephant's skeleton have much larger projections, to give the muscles greater torque. Likewise, compare the skeleton of an American robin with the skeleton of a mute swan: the swan (which has a maximum recorded wingspan of 8 feet!) has much larger projections on the attachment points for its muscles. If you just scaled a robin from its 12 inch wingspan to the 8 foot wingspan of a swan, it wouldn't be able to walk, much less fly! But the larger bird's anatomy is different in order to support its size - and it can and does fly with those 8 foot wings!
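The reason naive scaling fails is the square-cube law: weight grows with the cube of linear size, but muscle strength only with the square, since it's proportional to cross-sectional area. A sketch of the robin-to-swan scaling:

```python
# Square-cube law: scale linear size by k, and weight grows by k**3
# while muscle strength (cross-sectional area) grows only by k**2.
def relative_strength_to_weight(k):
    return k ** 2 / k ** 3

# Scaling a 12-inch robin wingspan to a swan's 8 feet (96 inches): k = 8.
k = 96 / 12
print(relative_strength_to_weight(k))  # 0.125: one eighth the relative
# strength - which is why the swan needs different anatomy (bigger lever
# arms at its muscle attachment points), not weaker gravity.
```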

That means that on the basic argument for needing different gravity, Canseco fails miserably.

Canseco's argument for how gravity allegedly changed is even worse.

What he claims is that at the time when the continental land masses were joined together as the Pangea supercontinent, the earth's core moved to counterbalance the weight of the continents. Since the earth's core was, after this shift, farther from the surface, the gravity at the surface would be smaller.

This is an amusingly ridiculous idea. It's even worse than Ted Holden and his reduced felt gravity because of the electromagnetic green Saturn-star.

First, the earth's core isn't some lump of stuff that can putter around. The earth is a solid ball of material. It's not like a ball of powdered chalk with a solid lump of uranium at the center. The core can't move.

Even if it could, Canseco is wrong. Canseco is playing with two different schemes of how gravity works. We can approximate the behavior of gravity on earth by assuming that the earth is a point: for most purposes, gravity behaves almost as if the entire mass of the earth were concentrated at the earth's center of mass. Canseco is using this idea when he moves the "core" further from the surface. He's using the idea that the core (which surrounds the center of mass in the real world) is the center of mass. So if the core moves, and the center of mass moves with it, then the point-approximation of gravity will change because the distance from the center of mass has increased.
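For the point approximation itself, the math is one line: surface gravity is g = GM/r^2, where r is the distance to the center of mass. A quick check with standard values for the earth:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the earth, kg
r = 6.371e6     # mean radius of the earth, m

g = G * M / r ** 2   # gravity at the surface, treating earth as a point mass
print(g)             # ~9.8 m/s^2

# Note what g depends on: the total mass M and the distance r to the center
# of mass - not on where any particular lump of material inside the earth sits.
```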

But: the reason that he claims the core moved is because it was responding to the combined landmasses on the surface clumping together as pangea. That argument is based on the idea that the core had to move to balance the continents. In that case, the center of gravity wouldn't be any different - if the core could move to counterbalance the continents, it would move just enough to keep the center of gravity where it was - so if you were using the point approximation of gravity, it would be unaffected by the shift.

He's combining incompatible assumptions. To justify moving the earth's core, he's *not* using a point-model of gravity. He's assuming that the mass of the earth's core and the mass of the continents are different. When he wants to talk about the effect of gravity on an animal at the surface, he wants to treat the full mass of the earth as a point source - and he wants that point source to be located at the core.

It doesn't work that way.

People are fascinated by the giant creatures that used to live on the earth. Intuitively, because we don't see giant animals in the world around us, there's a natural tendency to ask "Why?". And being the pattern-seekers that we are, we intuitively believe that there must be a reason why the animals back then were huge, but the animals today aren't. It can't just be random chance. So people keep coming up with reasons. Like:

1. Neal Adams: who argues that the earth is constantly growing larger, and that gravity is an illusion caused by that growth. One of the reasons, according to his "theory", for why we know that gravity is just an illusion, is because the dinosaurs supposedly couldn't walk in current gravity.
2. Ted Holden and the Neo-Velikovskians: who argue that the solar system is drastically different today than it used to be. According to Holden, Saturn used to be a "hyperintelligent green electromagnetic star", and the earth used to be tide-locked in orbit around it. As a result, the felt effect of gravity was weaker.
3. Stephen Hurrell, who argues similarly to Neal Adams that the earth is growing. Hurrell doesn't dispute the existence of gravity the way that Adams does, but similarly argues that dinosaurs couldn't walk in present day gravity, and resorts to an expanding earth to explain how gravity could have been weaker.
4. Ramin Amir Mardfar: who claims that the earth's mass has been continually increasing because meteors add mass to the earth.
5. Gunther Bildmeyer, who argues that gravity is really an electromagnetic effect, and so the known fluctuations in the earth's magnetic field change gravity. According to him, the dinosaurs could only exist because of the state of the magnetic field at the time, which reduced gravity.

There are many others. All of them grasping at straws, trying to explain something that doesn't need explaining, if only they'd bother to do the damned math, and see that all it takes is a relatively small anatomical change.

## Euler's Equation Crackpottery

One of my twitter followers sent me an interesting piece of crackpottery. I debated whether to do anything with it. The thing about crackpottery is that it really needs to have some content. Total incoherence isn't amusing. This bit is, frankly, right on the line.

> Euler's Equation and the Reality of Nature.
>
> a) Euler's Equation as a mathematical reality.
>
> Euler's identity is "the gold standard for mathematical beauty".
> Euler's identity is "the most famous formula in all mathematics".
> ‘. . . this equation is the mathematical analogue of Leonardo
> da Vinci’s Mona Lisa painting or Michelangelo’s statue of David’
> ‘It is God’s equation’, ‘our jewel’, ‘It is a mathematical icon’.
> . . . . etc.
>
> b) Euler's Equation as a physical reality.
>
> "it is absolutely paradoxical; we cannot understand it,
> and we don't know what it means, . . . . .’
> ‘Euler's Equation reaches down into the very depths of existence’
> ‘Is Euler's Equation about fundamental matters?’
> ‘It would be nice to understand Euler's Identity as a physical process
> using physics.‘
> ‘Is it possible to unite Euler's Identity with physics, quantum physics?’
>
> My aim is to understand the reality of nature.
>
> Can Euler's equation explain me something about reality?
>
> To give the answer to this question I need to bind Euler's equation with an object – particle. Can it be math-point or string-particle or triangle-particle? No, Euler's formula has quantity (pi) which says me that the particle must be only a circle.
>
> Now I want to understand the behavior of circle-particle and therefore I need to use spatial relativity and quantum theories. These two theories say me that the reason of circle-particle’s movement is its own inner impulse (h) or (h*=h/2pi).
>
> a) Using its own inner impulse (h) circle-particle moves (as a wheel) in a straight line with constant speed c = 1. We call such particle - ‘photon’. From Earth-gravity point of view this speed is maximally. From Vacuum point of view this speed is minimally. In this movement quantum of light behave as a corpuscular (no charge).
>
> b) Using its own inner impulse / intrinsic angular momentum (h* = h / 2pi) circle-particle rotates around its axis. In such movement particle has charge, produce electric waves (waves property of particle) and its speed (frequency) is: c.
>
> 1. We call such particle - ‘electron’ and its energy is: E=h*f.
>
> In this way I can understand the reality of nature.
>
> ==.
>
> Best wishes.

Euler's equation says that e^(i*pi) + 1 = 0. It's an amazingly profound equation. The way that it draws together fundamental concepts is beautiful and surprising.

But it's not nearly as mysterious as our loonie-toon makes it out to be. The natural logarithm base e is deeply embedded in the structure of numbers, and we've known what it is, and how it works, for a long time. What Euler did was show the relationship between e and the fundamental rotation group of the complex numbers. There are a couple of ways of restating the definition of e that make the meaning of that relationship clearer.

For example:

e = lim(n→∞) (1 + 1/n)^n

That's an alternative definition of what e is. If we use that, and we plug i*pi into it, we get:

e^(i*pi) = lim(n→∞) (1 + i*pi/n)^n

If you work out that limit, it's -1. Also, if you take a value of N, and plot (1 + i*pi/N)^1, (1 + i*pi/N)^2, (1 + i*pi/N)^3, ..., (1 + i*pi/N)^N on the complex plane, then as N gets larger, the resulting curve gets closer and closer to a semicircle.
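You can watch that limit converge numerically with a few lines of Python, using the built-in complex type:

```python
import cmath

# (1 + i*pi/N)**N approaches -1 as N grows.
for N in (10, 100, 10_000, 1_000_000):
    z = (1 + 1j * cmath.pi / N) ** N
    print(N, z)
```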

An equivalent way of seeing it is that imaginary exponents of e are rotations in the complex number plane. The reason that e^(i*pi) = -1 is because if you take the complex number (1 + 0i), and rotate it by pi radians around the origin, you get -1: e^(i*pi) * (1 + 0i) = -1 + 0i.
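Python's cmath module makes the rotation view easy to see: multiplying by e^(i*theta) rotates a complex number by theta radians.

```python
import cmath

z = 1 + 0j                                   # the complex number (1 + 0i)

quarter_turn = cmath.exp(1j * cmath.pi / 2)  # rotate by pi/2 radians
half_turn = cmath.exp(1j * cmath.pi)         # rotate by pi radians

print(z * quarter_turn)   # ~1j: rotated 90 degrees
print(z * half_turn)      # ~-1: rotated 180 degrees - Euler's equation
```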

That's what Euler's equation means. It's amazing and beautiful, but it's not all that difficult to understand. It's not mysterious in the sense that our crackpot friend thinks it is.

But what really sets me off is the idea that it must have some meaning in physics. That's silly. It doesn't matter what the physical laws of the universe are: the values of e and π will not change. It's like trying to say that there must be something special about our universe that makes 1 + 1 = 2 - or, conversely, that the fact that 1+1=2 means something special about the universe we live in. These things are facts of numbers, which are independent of physical reality. Create a universe with different values for all of the fundamental constants - e and π will be exactly the same. Create a universe with less matter - e and π will still be the same. Create a universe with no matter, a universe with different kinds of matter, a universe with 300 forces instead of the four that we see - and e and π won't change.

What things like e and π, and their relationship via Euler's equation, tell us is that there's a fundamental relationship between numbers and shapes on a two-dimensional plane - an ideal plane which does not and cannot really exist in the world we live in.

Beyond that, what he's saying is utter rubbish. For example:

> These two theories say me that the reason of circle – particle’s movement is its own inner impulse (h) or (h*=h/2pi). Using its own inner impulse (h) circle - particle moves ( as a wheel) in a straight line with constant speed c = 1. We call such particle - ‘photon’. From Earth – gravity point of view this speed is maximally. From Vacuum point of view this speed is minimally. In this movement quantum of light behave as a corpuscular (no charge).

This is utterly meaningless. It's a jumble of words that pretends to be meaningful and mathematical, when in fact it's just a string of syllables strung together in nonsensical ways.

There's a lot that we know about how photons behave. There's also a lot that we don't know about photons. This word salad tells us exactly nothing about photons. In the classic phrase, it's not even wrong: what it says doesn't have enough meaning to be wrong. What is the "inner impulse" of a photon according to this crackpot? We can't know: the term isn't defined. We are pretty certain that a photon is not a wheel rolling along. Is that what the crank is saying? We can't be sure. And that's the problem with this kind of crankery.

As I always say: the very worst math is no math. This is a perfect example. He starts with a beautiful mathematical fact. He uses it to jump to a completely non-mathematical conclusion. But he writes a couple of mathematical symbols, to pretend that he's using math.

## Back to an old topic: Bad Vaccine Math

The very first Good Math/Bad Math post ever was about an idiotic bit of antivaccine rubbish. I haven't dealt with antivaccine stuff much since then, because the bulk of the antivaccine idiocy has nothing to do with math. But the other day, a reader sent me a really interesting link from what my friend Orac calls a "wretched hive of scum and quackery", naturalnews.com, in which they try to argue that the whooping cough vaccine is an epic failure:

> (NaturalNews) The utter failure of the whooping cough (pertussis) vaccine to provide any real protection against disease is once again on display for the world to see, as yet another major outbreak of the condition has spread primarily throughout the vaccinated community. As it turns out, 90 percent of those affected by an ongoing whooping cough epidemic that was officially declared in the state of Vermont on December 13, 2012, were vaccinated against the condition -- and some of these were vaccinated two or more times in accordance with official government recommendations.
>
> As reported by the Burlington Free Press, at least 522 cases of whooping cough were confirmed by Vermont authorities last month, which was about 10 times the normal amount from previous years. Since that time, nearly 100 more cases have been confirmed, bringing the official total as of January 15, 2013, to 612 cases. The majority of those affected, according to Vermont state epidemiologist Patsy Kelso, are in the 10-14-year-old age group, and 90 percent of those confirmed have already been vaccinated one or more times for pertussis.
>
> Even so, Kelso and others are still urging both adults and children to get a free pertussis shot at one of the free clinics set up throughout the state, insisting that both the vaccine and the Tdap booster for adults "are 80 to 90 percent effective." Clearly this is not the case, as evidenced by the fact that those most affected in the outbreak have already been vaccinated, but officials are apparently hoping that the public is too naive or disengaged to notice this glaring disparity between what is being said and what is actually occurring.

It continues in that vein. The gist of the argument is:

1. We say everyone needs to be vaccinated, which will protect them from getting the whooping cough.
2. The whooping cough vaccine is, allegedly, 80 to 90% effective.
3. 90% of the people who caught whooping cough were properly vaccinated.
4. Therefore the vaccine can't possibly work.

What they want you to do is look at that 80 to 90 percent effectiveness rate, see that only 10-20% of vaccinated people should be susceptible to the whooping cough, and compare that 10-20% to the 90% of actual infected people who were vaccinated. 20% (the upper bound of the susceptible portion of vaccinated people according to the quoted statistic) is clearly much smaller than 90% - therefore it's obvious that the vaccine doesn't work.

Of course, this is rubbish. It's a classic apples-to-orange-grove comparison. You're comparing percentages, when those percentages are measuring different groups - groups with wildly different sizes.

Take a pool of 1000 people, and suppose that 95% are properly vaccinated (the current DTaP vaccination rate in the US is around 95%). That gives you 950 vaccinated people and 50 who are unvaccinated.

In the vaccinated pool, let's assume that the vaccine was fully effective on 90% of them (that's the highest estimate of effectiveness, which will result in the lowest number of susceptible vaccinated people - aka the best possible scenario for the anti-vaxers). That gives us 95 vaccinated people who are susceptible to the whooping cough.

There's the root of the problem. Using numbers that are ridiculously friendly to the anti-vaxers, we've still got nearly twice as many susceptible vaccinated people as unvaccinated people. So we'd expect, right out of the box, that roughly two-thirds of the cases of whooping cough would be among the vaccinated.

In reality, the numbers are much worse for the antivax case. The percentage of people who were ever vaccinated is around 95%, because you need the vaccination to go to school. But that's just the childhood dose. DTaP is a vaccination that needs to be periodically boosted, or the immunity wanes. And the percentage of people who've had boosters is extremely low. Among adolescents, according to the CDC, only a bit more than half have had DTaP boosters; among adults, less than 10% have had a booster within the last 5 years.

What's your susceptibility if you've gone more than 5 years without a booster? Somewhere around 40% of people who didn't have boosters in the last five years are susceptible.

So let's just play with those numbers a bit. Assume, for simplicity, that 50% of the people are adults and 50% children, and assume that all of the children are fully up-to-date on the vaccine. Then the susceptible population is 10% of the children (10% of 475), plus 10% of the adults who are up-to-date (10% of 10% of 475), plus 40% of the adults who aren't up-to-date (40% of 90% of 475). That works out to about 223 susceptible people among the vaccinated, out of roughly 273 susceptible people in total - about 82%. So you'd expect about 82% of the actual cases of whooping cough to be among people who'd been vaccinated. Suddenly, the antivaxers' case doesn't look so good, does it?
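Here's that whole back-of-the-envelope calculation as a Python sketch, using the same assumed rates (10% vaccine failure, ~40% susceptibility without a recent booster, 10% of adults boosted):

```python
population = 1000
vaccinated = int(population * 0.95)      # 950 ever vaccinated
unvaccinated = population - vaccinated   # 50

# Half the vaccinated pool are children (all up to date), half are adults,
# of whom only 10% have had a recent booster.
children = vaccinated // 2               # 475
adults = vaccinated - children           # 475

susceptible_vaccinated = (
    0.10 * children                # vaccine fails for 10% of the children
    + 0.10 * (0.10 * adults)       # ...and for 10% of the boosted adults
    + 0.40 * (0.90 * adults)       # ~40% of the unboosted adults
)
susceptible_total = susceptible_vaccinated + unvaccinated

print(susceptible_vaccinated)                       # ~223
print(susceptible_vaccinated / susceptible_total)   # ~0.82
```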

Consider, for a moment, what you'd expect among a non-vaccinated population. Pertussis is highly contagious. If someone in your household has pertussis, and you're susceptible, you've got a better than 90% chance of catching it. It's that contagious. Routine exposure - not sharing a household, but going to work, to the store, etc., with people who are infected - still gives you about a 50% chance of infection if you're susceptible.

In the state of Vermont, where NaturalNews is claiming that the evidence shows that the vaccine doesn't work, how many cases of pertussis have they seen? Around 600, out of a state population of 600,000 - an infection rate of one tenth of one percent, from a virulently contagious disease.

That's the highest level of pertussis that we've seen in the US in a long time. But at the same time, it's really a very low number for something so contagious. To compare for a moment: there's been a huge outbreak of norovirus in the UK this year. Overall, more than one million people have caught it so far this winter, out of a total population of 62 million, for a rate of about 1.6% - sixteen times the rate of infection of pertussis.

Why is the rate of infection with this virulently contagious disease so different from the rate of infection with that other virulently contagious disease? Vaccines are a big part of it.

## Vortex Garbage

A reader who saw my earlier post on the Vortex math talk at a TEDx conference sent me a link to an absolutely dreadful video that features some more crackpottery about the magic of vortices.

It says:

> The old heliocentric model of our solar system,
> planets rotating around the sun, is not only boring,
> but also incorrect.
>
> Our solar system moves through space at 70,000km/hr.
>
> (Image of the sun with a rocket/comet trail propelling
> it through space, with the planets swirling around it.)
>
> The sun is like a comet, dragging the planets in its wake.
> Can you say "vortex"?

The science of this is terrible. The sun is not a rocket. It does not propel itself through space. It does not have a tail. It does not leave a significant "wake". (There is interstellar material, and the sun moving through it does perturb it, but it's not a wake: the interstellar material is orbiting the galactic center just like the sun. Gravitational effects do cause perturbations, but it's not like a boat moving through still water, producing a wake.) Even if you stretch the definition of "wake", the sun certainly does not leave a wake large enough to "drag" the planets. In fact, if you actually look at the solar system, the plane of the ecliptic - the plane where the planets orbit the sun - is at roughly a 60 degree angle to the galactic plane. If planetary orbits were a drag effect, then you would expect the orbits to be perpendicular to the galactic plane. But they aren't.

If you look at it mathematically, it's even worse. The video claims to be making a distinction between the "old heliocentric" model of the solar system and their new "vortex" model. But in fact, mathematically, they're exactly the same thing. Look at it from a heliocentric point of view, and you've got the heliocentric model. Look at the exact same system from a point that's not moving relative to the galactic center, and you get the vortex. They're the same thing. The only difference is how you look at it.
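You can see the sameness directly with a toy model: take a circular orbit around the sun, then add the sun's straight-line motion through the galaxy. The "vortex" is just the same circle viewed from a frame in which the sun moves (all units here are arbitrary):

```python
import math

def planet_position(t, orbit_radius=1.0, orbit_period=1.0, sun_speed=5.0):
    """Toy model: the same orbit, seen from two reference frames."""
    angle = 2 * math.pi * t / orbit_period
    # Heliocentric frame: a plain circle around a stationary sun.
    helio = (orbit_radius * math.cos(angle), orbit_radius * math.sin(angle), 0.0)
    # Galactic frame: the identical circle, plus the sun's drift - a helix.
    galactic = (helio[0], helio[1], sun_speed * t)
    return helio, galactic

helio, galactic = planet_position(0.25)
print(helio)     # a point on the circle around the sun
print(galactic)  # the same point, offset by the sun's motion
```

Nothing physical changed between the two tuples; only the frame of reference did.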

And that's just the start of the rubbish. Once they get past their description of their "vortex" model, they go right into the woo. Vortex is life! Vortex is spirituality! Oy.

If you follow their link to their website, it gets even sillier, and you can start to see just how utterly clueless the author of this actually is:

(In reference to a NASA image showing the interstellar "wind" and the heliopause)

> Think about this for a minute. In this diagram it seems the Solar System travel to the left. When the Earth is also traveling to the left (for half a year) it must go faster than the Sun. Then in the second half of the year, it travels in a “relative opposite direction” so it must go slower than the Sun. Then, after completing one orbit, it must increase speed to overtake the Sun in half a year. And this would go for all the planets. Just like any point you draw on a frisbee will not have a constant speed, neither will any planet.

See, it's a problem that the planets aren't moving at a constant speed. They speed up and slow down! Oh, the horror! The explanation is that they're caught by the sun's wake! So they speed up when they get dragged, until they pass the sun (how does being dragged by the sun ever make them faster than the sun? Who knows!), and then they're not being dragged anymore, so they slow down.

This is an absolutely epic level of ignorance: of physics, of the entire concept of a frame of reference, and of symmetry.

There's quite a bit more nonsense, but that's all I can stomach this evening. Feel free to point out more in the comments!

## Types Gone Wild! SKI at Compile-Time

Over the weekend, a couple of my Foursquare coworkers and I were chatting on twitter, and one of my smartest coworkers, a great guy named Jorge Ortiz, pointed out that type inference in Scala (the language we use at Foursquare, and also pretty much my favorite language) is Turing complete.

Somehow, I hadn't seen this before, and it absolutely blew my mind. So I asked Jorge for a link to the proof. The link he sent me is a really beautiful blog post. It doesn't just prove that Scala type inference is Turing complete, but it does it in a remarkably beautiful way.

Before I get to the proof, what does this mean?

A system is Turing complete when it can perform any possible computation that could be performed on any other computing device. The Turing machine is, obviously, Turing complete. So is lambda calculus, the Minsky machine, the Brainfuck computing model, and the Scala programming language itself.

If type inference is Turing complete, then that means that you can write a Scala program where, in order to type-check the program, the compiler has to run an arbitrary program to completion. It means that there are, at least theoretically, Scala programs where the compiler will take forever - literally forever - to determine whether or not a given program contains a type error. Needless to say, I consider this to be a bad thing. Personally, I'd really prefer to see the type system be less flexible. In fact, I'd go so far as to say that this is a fundamental error in the design of Scala, and a big strike against it as a language. Having a type-checking system which isn't guaranteed to terminate is bad.

But let's put that aside: Scala is pretty entrenched in the community that uses it, and they've accepted this as a tradeoff. How did the blog author, Michael Dürig, prove that Scala type checking is Turing complete? By showing how to implement a variant of lambda calculus called SKI combinator calculus entirely with types.

SKI calculus is seriously cool. We know that lambda calculus is Turing complete. It turns out that for any lambda calculus expression, there's a way of rewriting it without any variables, and without any lambdas at all, using three canonical master functions. If you've got those three, then you can write anything, anything at all. The three are called S, K, and I.

• The S combinator is: S x y z = (x z)(y z).
• The K combinator is: K x y = x.
• The I combinator is: I x = x.

They come from intuitionistic logic, where they're fundamental axioms that describe how intuitionistic implication works. K is the rule A → (B → A); S is the rule (A → (B → C)) → ((A → B) → (A → C)); and I is A → A.
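Before getting to the type-level encoding, the reduction rules can be sketched at the value level. This little interpreter is my own illustration, not part of the proof - terms form a tree, and eval applies the three rules:

```scala
// A value-level sketch of SKI reduction.
object SKIEval {
  sealed trait Tm
  case object S extends Tm
  case object K extends Tm
  case object I extends Tm
  final case class Ap(f: Tm, x: Tm) extends Tm

  def eval(t: Tm): Tm = t match {
    case Ap(f, x) =>
      (eval(f), x) match {
        case (I, a)               => eval(a)                      // I a = a
        case (Ap(K, a), _)        => eval(a)                      // K a b = a
        case (Ap(Ap(S, a), b), c) => eval(Ap(Ap(a, c), Ap(b, c))) // S a b c = (a c)(b c)
        case (g, a)               => Ap(g, eval(a))
      }
    case atom => atom
  }

  def main(args: Array[String]): Unit = {
    // S K K behaves like the identity: (S K K) I reduces to I.
    println(eval(Ap(Ap(Ap(S, K), K), I))) // prints I
  }
}
```

The example at the end is the classic identity S K K = I: applying S K K to any term hands that term back.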

Given any lambda calculus expression, you can rewrite it as a chain of SKIs. (If you're interested in seeing how, please just ask in the comments; if enough people are interested, I'll write it up.) What the author of the post did is show how to implement the S, K, and I combinators in Scala types.

trait Term {
  type ap[x <: Term] <: Term
  type eval <: Term
}


He's created a type Term, which is the supertype of any computable fragment written in this type-SKI. Since everything is a function, all terms have to have two methods: one of them is a one-parameter "function" which applies the term to a parameter, and the second is a "function" which simplifies the term into canonical form.

He implements the S, K, and I combinators as traits that extend Term. We'll start with the simplest one, the I combinator.

trait I extends Term {
  type ap[x <: Term] = I1[x]
  type eval = I
}

trait I1[x <: Term] extends Term {
  type ap[y <: Term] = eval#ap[y]
  type eval = x#eval
}


I needs to take a parameter, so its apply type-function takes a parameter x, and returns a new type I1[x] which has the parameter encoded into it. Evaluating I1[x] does exactly what you'd want from the I combinator with its parameter - it returns it.

The apply "method" of I1 looks strange. What you have to remember is that in lambda calculus (and in the SKI combinator calculus), everything is a function - so even after evaluating I.ap[x] to some other type, it's still a type function. So it still needs to be applicable. Applying it is exactly the same thing as applying its parameter.

So if you have any type A, and you write something like var a : I.ap[A].eval, the type of a will evaluate to A. If you apply I.ap[A].ap[Z], it's equivalent to taking the result of evaluating I.ap[A], giving you A, and then applying that to Z.
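We can even ask the compiler to verify that behavior. In this self-contained sketch, the Term and I encodings are repeated from above, while the dummy term A and the =:= check are my own additions - if I didn't act as the identity, the last line would be a compile-time type error:

```scala
trait Term { type ap[x <: Term] <: Term; type eval <: Term }

trait I extends Term { type ap[x <: Term] = I1[x]; type eval = I }
trait I1[x <: Term] extends Term {
  type ap[y <: Term] = eval#ap[y]
  type eval = x#eval
}

// A dummy term to test with (my addition): it evaluates to itself.
trait A extends Term { type ap[x <: Term] = A; type eval = A }

object ICheck {
  // Type-checks only if applying I to A and evaluating yields A.
  implicitly[I#ap[A]#eval =:= A]
}
```

(In type-level position, the article's I.ap[A].eval is written with type projections, I#ap[A]#eval.)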

The K combinator is much more interesting:

// The K combinator
trait K extends Term {
  type ap[x <: Term] = K1[x]
  type eval = K
}

trait K1[x <: Term] extends Term {
  type ap[y <: Term] = K2[x, y]
  type eval = K1[x]
}

trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}


It's written in curried form: applying the trait K yields a trait K1, which in turn takes a parameter and yields a trait K2.

The implementation is a whole lot trickier, but it's the same basic mechanics. Applying K.ap[X] gives you K1[X]. Applying that to Y with K1[X].ap[Y] gives you K2[X, Y]. Evaluating that gives you X.
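Those mechanics can also be checked at compile time. In this self-contained sketch the Term and K encodings are as in the post, while the dummy terms X and Y are my own additions:

```scala
trait Term { type ap[x <: Term] <: Term; type eval <: Term }

trait K extends Term { type ap[x <: Term] = K1[x]; type eval = K }
trait K1[x <: Term] extends Term { type ap[y <: Term] = K2[x, y]; type eval = K1[x] }
trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}

// Dummy terms for the test (my additions): each evaluates to itself.
trait X extends Term { type ap[a <: Term] = X; type eval = X }
trait Y extends Term { type ap[a <: Term] = Y; type eval = Y }

object KCheck {
  // K keeps its first argument and discards its second:
  implicitly[K#ap[X]#ap[Y]#eval =:= X]
}
```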

The S combinator is more of the same.

// The S combinator
trait S extends Term {
  type ap[x <: Term] = S1[x]
  type eval = S
}

trait S1[x <: Term] extends Term {
  type ap[y <: Term] = S2[x, y]
  type eval = S1[x]
}

trait S2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = S3[x, y, z]
  type eval = S2[x, y]
}

trait S3[x <: Term, y <: Term, z <: Term] extends Term {
  type ap[v <: Term] = eval#ap[v]
  type eval = x#ap[z]#ap[y#ap[z]]#eval
}
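And the same compile-time check works for S, using the classic identity S K K = I. This self-contained sketch repeats the post's encodings; the dummy term X is my own addition:

```scala
trait Term { type ap[x <: Term] <: Term; type eval <: Term }

trait S extends Term { type ap[x <: Term] = S1[x]; type eval = S }
trait S1[x <: Term] extends Term { type ap[y <: Term] = S2[x, y]; type eval = S1[x] }
trait S2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = S3[x, y, z]
  type eval = S2[x, y]
}
trait S3[x <: Term, y <: Term, z <: Term] extends Term {
  type ap[v <: Term] = eval#ap[v]
  type eval = x#ap[z]#ap[y#ap[z]]#eval
}

trait K extends Term { type ap[x <: Term] = K1[x]; type eval = K }
trait K1[x <: Term] extends Term { type ap[y <: Term] = K2[x, y]; type eval = K1[x] }
trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}

// Dummy term for the test (my addition).
trait X extends Term { type ap[a <: Term] = X; type eval = X }

object SCheck {
  // S K K is the identity: (S K K) X evaluates to X.
  implicitly[S#ap[K]#ap[K]#ap[X]#eval =:= X]
}
```

Unfolding by hand: S3[K, K, X]#eval = K#ap[X]#ap[K#ap[X]]#eval = K2[X, K1[X]]#eval = X - exactly what the compiler is being asked to confirm.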



Michid then goes on to show examples of how to use these beasts. He implements equality testing, and then shows how to test if different type-expressions evaluate to the same thing. And all of this happens at compile time. If the equality test fails, then it's a type error at compile time!

It's a brilliant little proof. Even if you can't read Scala syntax, and you don't really understand Scala type inference, as long as you know SKI, you can look at the equality comparisons, and see how it works in SKI. It's really beautiful.

## Static Typing: Give me a break!

I'm a software engineer. I write code for a living. I'm also a programming language junkie. I love programming languages. I'm obsessed with programming languages. I've taught myself more programming languages than any sane person has any reason to know.

Learning that many languages, I've developed some pretty strong opinions about what makes a good language, and what kind of things I really want to see in the languages that I use.

My number one preference: strong static typing. That's part of a more general preference, for preserving information. When I'm programming, I know what kind of thing I expect in a parameter, and I know what I expect to return. When I'm programming in a weakly typed language, I find that I'm constantly throwing information away, because I can't actually say what I know about the program. I can't put my knowledge about the expected behavior into the code. I don't think that that's a good thing.

But... (you knew there was a but coming, didn't you?)

This is my preference. I believe that it's right, but I also believe that reasonable people can disagree. Just because you don't think the same way that I do doesn't mean that you're an idiot. It's entirely possible for someone to know as much as I do about programming languages and have a different opinion. We're talking about preferences.

Sadly, that kind of attitude is something that is entirely too uncommon. I seriously wonder sometimes if I'm crazy, because it seems like everywhere I look, on every topic, no matter how trivial, most people absolutely reject the idea that it's possible for an intelligent, knowledgeable person to disagree with them. It doesn't matter what the subject is: politics, religion, music, or programming languages.

What brought this little rant on is that someone sent me a link to a comic, called "Cartesian Closed Comic". It's a programming language geek comic. But what bugs me is this comic. Which seems to be utterly typical of the kind of attitude that I'm griping about.

See, this is a pseudo-humorous way of saying "Everyone who disagrees with me is an idiot". It's not that reasonable people can disagree. It's that people who disagree with me only disagree because they're ignorant. If you like static typing, you probably know type theory. If you don't like static typing, there's almost no chance that you know anything about type theory. So the reason that those stupid dynamic typing people don't agree with people like me is because they just don't know as much as I do. And the reason that arguments with them don't go anywhere isn't because we have different subjective preferences: it's because they're just too ignorant to understand why I'm right and they're wrong.

Most programmers - whether they prefer static typing or not - don't know type theory. Most of the arguments about whether to use static or dynamic typing aren't based on type theory. It's just the same old snobbery, the "you can't disagree with me unless you're an idiot".

Among intelligent skilled engineers, the static versus dynamic typing thing really comes down to a simple, subjective argument:

Static typing proponents believe that expressing intentions in a static checkable form is worth the additional effort of making all of the code type-correct.

Dynamic typing proponents believe that it's not: that strong typing creates an additional hoop that the programmer needs to jump through in order to get a working system.

Who's right? In fact, I don't think that either side is universally right. Building a real working system is a complex thing. There's a complex interplay of design, implementation, and testing. What static typing really does is take some amount of stuff that could be checked with testing, and allows the compiler to check it in an abstract way, instead of with specific tests.

Is it easier to write code with type declarations, or with additional tests? Depends on the engineers and the system that they're building.
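As a toy illustration of that tradeoff (the names and code here are mine, purely illustrative): the declared type turns a whole class of mistakes into compile errors that a dynamically typed language would need a test to catch.

```scala
object Greeter {
  final case class User(name: String, age: Int)

  // The parameter type is the statically checked "intention":
  def greeting(u: User): String = s"Hello, ${u.name}"

  // greeting("not a user")  // rejected by the compiler: found String, required User.
  // Without the declared type, that mistake would only surface when a test
  // (or production traffic) actually exercised the call.
}
```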

## Sloppy Dualism Denies Free Will?

When I was an undergrad in college, I was a philosophy minor. I spent countless hours debating ideas about things like free will. My final paper was a 60 page rebuttal to what I thought was a sloppy argument against free will. It's been more years since I wrote that than I care to admit - and I still keep seeing the same kind of sloppy arguments: arguments that are ultimately circular, because they hide their conclusion in their premises.

There's an argument against free will that I find pretty compelling. I don't agree with it, but I do think that it's a solid argument:

Everything in our experience of the universe ultimately comes down to physics. Every phenomenon that we can observe is, ultimately, the result of particles interacting according to basic physical laws. Thermodynamics is the ultimate, fundamental ruler of the universe: everything that we observe is a result of a thermodynamic process. There are no exceptions to that.

Our brain is just another physical device. It's another complex system made of an astonishing number of tiny particles, interacting in amazingly complicated ways. But ultimately, it's particles interacting the way that particles interact. Our behavior is an emergent phenomenon, but ultimately, we don't have any ability to make choices, because there's no mechanism that allows us free choice. Our choices are determined by the physical interactions, and our consciousness of those results is just a side-effect of that.

If you want to argue that free will doesn't exist, that argument is rock solid.

But for some reason, people constantly come up with other arguments - in fact, much weaker arguments that come from what I call sloppy dualism. Dualism is the philosophical position that says that a conscious being has two different parts: a physical part, and a non-physical part. In classical terms, you've got a body which is physical, and a mind/soul which is non-physical.

In this kind of argument, you rely on that implicit assumption of dualism, essentially asserting that whatever physical process we can observe isn't really you, and that therefore by observing any physical process of decision-making, you infer that you didn't really make the decision.

For example...

And indeed, this is starting to happen. As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we're actually aware of them — and at times by a significant degree. It's a disturbing observation that has led some neuroscientists to conclude that we're less in control of our choices than we think — at least as far as some basic movements and tasks are concerned.

This is something that I've seen a lot lately: when you do things like functional MRI, you find that our brains settle on a decision before we consciously become aware of making the choice.

Why do I call it sloppy dualism? Because it's based on the idea that somehow the piece of our brain that makes the decision is different from the part of our brain that is our consciousness.

If our brain is our mind, then everything that's going on in our brain is part of our mind. Taking a piece of our brain and saying "Whoops, that piece of your brain isn't you, so when it made the decision, it was deciding for you instead of it being you deciding" just doesn't make sense.

By starting with the assumption that the physical process of decision-making we can observe is something different from your conscious choice of the decision, this kind of argument is building the conclusion into the premises.

If you don't start with the assumption of sloppy dualism, then this whole argument says nothing. If we don't separate our brain from our mind, then this whole experiment says nothing about the question of free will. It says a lot of very interesting things about how our brain works: it shows that there are multiple levels to our minds, and that we can observe those different levels in how our brains function. That's a fascinating thing to know! But does it say anything about whether we can really make choices? No.

## The Investors vs. the Tabby

There's an amusing article making its rounds of the internet today, about the successful investment strategy of a cat named Orlando.

A group of people at the Observer put together a fun experiment.
They asked three groups to pretend that they had 5000 pounds, and asked each of them to invest it, however they wanted, in stocks listed on the FTSE. They could only change their investments at the end of a calendar quarter. At the end of the year, they compared the result of the three groups.

Who were the three groups?

1. The first was a group of professional investors - people who are, at least in theory, experts at analyzing the stock market and using that analysis to make profitable investments.
2. The second was a classroom of students, who are bright, but who have no experience at investment.
3. The third was an orange tabby cat named Orlando. Orlando chose stocks by throwing his toy mouse at a target board randomly marked with investment choices.

As you can probably guess by the fact that we're talking about this, Orlando the tabby won, by a very respectable margin. (Let's be honest: if the professional investors came in first, and the students came in second, no one would care.) At the end of the year, the students had lost 160 pounds on their investments. The professional investors ended with a profit of 176 pounds. And the cat ended with a profit of 542 pounds - more than triple the profit of the professionals.

Most people, when they saw this, had an immediate reaction: "see, those investors are a bunch of idiots. They don't know anything! They were beaten by a cat!"

And on one level, they're absolutely right. Investors and bankers like to present themselves as the best of the best. They deserve their multi-million dollar earnings, because, so they tell us, they're more intelligent, more hard-working, more insightful than the people who earn less. And yet, despite their self-alleged brilliance, professional investors can't beat a cat throwing a toy mouse!

It gets worse, because this isn't a one-time phenomenon: there've been similar experiments that selected stocks by throwing darts at a news-sheet, or by rolling dice, or by picking slips of paper from a hat. Many times, when people have done these kinds of experiments, the experts don't win. There's a strong implication that "expert investors" are not actually experts.

Does that really hold up? Partly yes, partly no. But mostly no.

Before getting to that, there's one thing in the article that bugged the heck out of me: the author went out of their way to defend the humans, presenting positive outcomes as the product of human intelligence, and negative ones as bad luck. In fact, I think that in this experiment, it was all luck.

For example, the author discusses how the professionals were making more money than the cat up to the last quarter of the year, and presents it as human intelligence out-performing the random cat. But there's no reason to believe that. There's no evidence that there's anything qualitatively different about the last quarter that made it less predictable than the first three.

The headmaster at the students' school actually said "The mistakes we made earlier in the year were based on selecting companies in risky areas. But while our final position was disappointing, we are happy with our progress in terms of the ground we gained at the end and how our stock-picking skills have improved." Again, there's absolutely no reason to believe that the students' stock-picking skills miraculously improved in the final quarter; it's much more likely that they just got lucky.

The real question that underlies this is: is the performance of individual stocks in a stock market actually predictable, or is it dominantly random? Most of the evidence that I've seen suggests that it's a combination: on a short timescale, it's predominantly random, but on longer timescales it becomes much more predictable.

But people absolutely do not want to believe that. We humans are natural pattern-seekers. It doesn't matter whether we're talking about financial markets, pixel-patterns in a bitmap, or answers on a multiple choice test: our brains look for patterns. If you randomly generate data, and you look at it long enough, with enough possible strategies, you'll find a pattern that fits. But it's an imposed pattern, and it has no predictive value. It's like the images of Jesus on toast: we see patterns in noise. So people see patterns in the market, and they want to believe that it's predictable.

Second, people want to take responsibility for good outcomes, and excuse bad ones. If you make a million dollars betting on a horse, you're going to want to say that it was your superior judgement of the horses that led to your victory. When an investor makes a million dollars on a stock, of course he wants to say that he made that money because he made a smart choice, not because he made a lucky choice. But when that same investor loses a million dollars, he doesn't want to say that he lost a million dollars because he's stupid; he wants to say that he lost money because of bad luck, of random factors beyond his control that he couldn't predict.

The professional investors were doing well during part of the year: therefore, during that part of the year, they claim that their good performance was because they did a good job judging which stocks to buy. But when they lost money during the last quarter? Bad luck. But overall, their knowledge and skills paid off! What evidence do we have to support that? Nothing: but we want to assert that we have control, that experts understand what's going on, and are able to make intelligent predictions.

The students' performance was lousy, and if they had invested real money, they would have lost a tidy chunk of it. But their headmaster believes that their performance in the last quarter wasn't luck - it was that their skills had improved. Nonsense! They were lucky.

On the general question: Are "experts" useless for managing investments?

It's hard to say for sure. In general, experts do perform better than random, but not by a huge margin, certainly not by as much as they'd like us to believe. The Wall Street Journal used to do an experiment where they compared dartboard stock selection against human experts, and against passive investment in the Dow Jones Index stocks over a one-year period. The pros won 60% of the time. That's better than chance: the experts' knowledge and skills were clearly benefiting them. But: blindly throwing darts at a wall could beat experts 2 out of 5 times!

When you actually do the math and look at the data, it appears that human judgement does have value. Taken over time, human experts do outperform random choices, by a small but significant margin.

What's most interesting is a time-window phenomenon. In most studies, the human performance relative to random choice is directly related to the amount of time that the investment strategy is followed: the longer the timeframe, the better the humans perform. In daily investments, like day-trading, most people don't do any better than random. The performance of day-traders is pretty much in line with what you'd expect from random choice. Monthly, it's still mostly a wash. But if you look at yearly performance, you start to see a significant difference: humans do typically outperform random choice by a small but definite margin. If you look at longer time-frames, like five or ten years, then you start to see really sizeable differences. The data makes it look like daily fluctuations of the market are chaotic and unpredictable, but that there are long-term trends that we can identify and exploit.

## A Bad Mathematical Refutation of Atheism

At some point a few months ago, someone (sadly I lost their name and email) sent me a link to yet another Cantor crank. At the time, I didn't feel like writing another Cantor crankery post, so I put it aside. Now, having lost it, I was using Google to try to find the crank in question. I didn't, but I found something really quite remarkably idiotic.

(As a quick side-comment, my queue of bad-math-crankery is, sadly, empty. If you've got any links to something yummy, please shoot it to me at markcc@gmail.com.)

The item in question is this beauty. It's short, so I'll quote the whole beast.

MYTH: Cantor's Set Theorem disproves divine omniscience

God is omniscient in the sense that He knows all that is not impossible to know. God knows Himself, He knows and does, knows every creature ideally, knows evil, knows changing things, and knows all possibilites. His knowledge allows free will.

Cantor's set theorem is often used to argue against the possibility of divine omniscience and therefore against the existence of God. It can be stated:

1. If God exists, then God is omniscient.
2. If God is omniscient, then, by definition, God knows the set of all truths.
3. If Cantor's theorem is true, then there is no set of all truths.
4. But Cantor’s theorem is true.
5. Therefore, God does not exist.

However, this argument is false. The non-existence of a set of all truths does not entail that it is impossible for God to know all truths. The consistency of a plausible theistic position can be established relative to a widely accepted understanding of the standard model of Cantorian set theorem. The metaphysical Cantorian premises imply that Cantor’s theorem is inapplicable to the things that God knows. A set of all truths, if it exists, must be non-Cantorian.

The attempted disproof of God’s omniscience is, from a meta-mathematical standpoint, is inadequate to the extent that it doesn't explain well-known mathematical contexts in which Cantor’s theorem is invalid. The "disproof" doesn't acknowledge standard meta-mathematical conceptions that can analogically be used to establish the relative consistency of certain theistic positions. The metaphysical assertions concerning a set of all truths in the atheistic argument above imply that Cantor’s theorem is inapplicable to a set of all truths.

This is an absolute masterwork of crankery! It's a remarkably silly argument on so many levels.

1. The first problem is just figuring out what the heck he's talking about! When you say "Cantor's theorem", what I think of is one of Cantor's actual theorems: "For any set S, the powerset of S is larger than S." But that is clearly not what he's referring to. I did a bit of searching to make sure that this wasn't my error, but I can't find anything else called Cantor's theorem.
2. So what the heck does he mean by "Cantor's set theorem"? From his text, it appears to be a statement something like: "there is no set of all truths". The closest actual mathematical statement that I can come up with to match that is Gödel's incompleteness theorem. If that's what he means, then he's messed it up pretty badly. The closest I can come to stating incompleteness informally is: "In any formal mathematical system that's powerful enough to express Peano arithmetic, there will be statements that are true, but which cannot be proven". It's long, complex, not particularly intuitive, and it's still not a particularly good statement of incompleteness.

Incompleteness is a difficult concept, and as I've written about before, it's almost impossible to state incompleteness in an informal way. When you try to do that, it's inevitable that you're going to miss some of its subtleties. When you try to take an informal statement of incompleteness and reason from it, the results are pretty much guaranteed to be garbage - as he's done. He's taken a mis-statement of incompleteness, and tried to reason from it. It doesn't matter what he says: he's trying to show that "Cantor's set theorem" doesn't disprove his notion of theism. But whether it does or not is beside the point: you can't show that "Cantor's set theorem", or Gödel's incompleteness theorem, or anything else fails to disprove a statement X if the thing you're arguing against isn't actually X.

3. Ignoring his mis-identification of the supposed theorem, the way that he stated it is actually meaningless. When we talk about sets, we're using the word set in the sense of either ZFC or NBG set theory. Mathematical set theory defines what a set is, using first order predicate logic. His version of "Cantor's set theorem" talks about a set which cannot be a set!

He wants to create a set of truths. In set theory terms, that's something you'd define with the axiom of specification: you'd use a predicate ranging over your objects to select the ones in the set. What's your predicate? Truth. At best, that's going to be a second-order predicate. You can't form sets using second-order predicates! The entire idea of "the set of truths" isn't something that can be expressed in set theory.

4. Let's ignore the problems with his "Cantor's theorem" for the moment. Let's pretend that the "set of all truths" was well-defined and meaningful. How does his argument stand up? It doesn't: it's a terrible argument. It's ultimately nothing more than "Because I say so!" hidden behind a collection of impressive-sounding words. The argument, ultimately, is that the set of all truths as understood in set theory isn't the same thing as the set of all truths in theology (because he says that they're different), therefore you can't use a statement about the set of all truths from set theory to talk about the set of all truths in theology.
5. I've saved what I think is the worst for last. The entire thing is a strawman. As a religious science blogger, I get almost as much mail from atheists trying to convince me that my religion is wrong as I do from Christians trying to convert me. After doing this blogging thing for six years, I'm pretty sure that I've been pestered with every argument, both pro- and anti-theistic that you'll find anywhere. But I've never actually seen this argument used anywhere except in articles like this one, which purport to show why it's wrong. The entire argument being refuted is a total fake: no one actually argues that you should be an atheist using this piece of crap. It only exists in the minds of crusading religious folk who prop it up and then knock it down to show how smart they supposedly are, and how stupid the dirty rotten atheists are.
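For reference, the actual theorem of Cantor's mentioned in point 1 is easy to state, and its diagonal proof fits in a few lines:

```latex
\textbf{Cantor's theorem.} For any set $S$, $|S| < |\mathcal{P}(S)|$.

\emph{Proof sketch.} Suppose $f : S \to \mathcal{P}(S)$ were onto, and let
$D = \{\, x \in S \mid x \notin f(x) \,\}$. If $D = f(d)$ for some $d \in S$,
then $d \in D \iff d \notin f(d) = D$, a contradiction. So no $f$ maps $S$
onto its powerset, and $\mathcal{P}(S)$ is strictly larger than $S$. \qed
```

Nothing in that statement says anything about "truths", omniscience, or sets that aren't sets - which is exactly the problem with the argument being "refuted".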