There's one kind of crank that I haven't really paid much attention to on this blog, and that's the real number cranks. I've touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999...=1 post, but I've never really given them the attention that they deserve.

There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined.

This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the same basic confusion underlies a lot of other bad mathematical arguments. The root of this particular problem comes from a confusion between the *representation* of a number, and that number itself. "\(\frac{1}{2}\)" isn't a number: it's a notation that we understand refers to the number that you get by dividing one by two.

There's a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of integers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 - so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.) You'll see arguments from people who dislike this saying things like "This isn't a construction of the natural numbers, because you can take the *intersection* of 8 and 3, and set intersection is meaningless on numbers." The problem with that is the same as the problem with the notational crankery: the set-theoretic construction doesn't say "the empty set *is* the value 0"; it says "in a set theoretic construction, the empty set *can be used as a representation* of the number 0."
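To see why the intersection objection misses the point, here's a quick sketch of the von Neumann construction in Python, using frozensets as stand-ins for sets. (The names `succ` and `von_neumann` are my own, invented just for this illustration.)

```python
# 0 is the empty set; the successor of n is n ∪ {n}.

def succ(n: frozenset) -> frozenset:
    """The successor of n: n ∪ {n}."""
    return n | frozenset([n])

def von_neumann(k: int) -> frozenset:
    """The set used to represent the natural number k."""
    n = frozenset()              # the representation of 0
    for _ in range(k):
        n = succ(n)
    return n

# In this representation, each number's set contains the sets
# representing all smaller numbers - so intersecting 8 and 3 isn't
# meaningless at all: it just computes the minimum.
three, eight = von_neumann(3), von_neumann(8)
assert three & eight == three    # "8 ∩ 3 = 3", i.e. min(8, 3)
assert len(von_neumann(5)) == 5  # the set representing 5 has 5 elements
```

Intersection is perfectly meaningful on these *representations* - it computes the minimum - but that's a fact about the representation, not a property anyone is claiming for the numbers themselves.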

The particular version of this crankery that I'm going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank.

The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.

A lot of what we've learned about math has turned out to be non-intuitive. There's Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that can be neither proved nor disproved. And there are all sorts of related things - for example, the whole idea of undescribable numbers. Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there's absolutely no way that you can write it down, ever. Not that you can't write it in, say, base-10 decimals, but that you can't ever write down *anything*, in *any* form, that uniquely describes it. And, it turns out, the *vast* majority of numbers are undescribable: there are only countably many finite descriptions, but uncountably many real numbers, so almost every real number has no description at all.

This leads to the representational issue. Many people insist that if you can't *represent* a number, that number doesn't really exist. It's nothing but an artifact of a flawed definition. Therefore, by this argument, those numbers don't exist; the only reason that we think that they do is because the real numbers are ill-defined.

This kind of crackpottery isn't limited to stupid people. Professor Escultura isn't a moron - but he is a crackpot. What he's done is take the representational argument, and run with it. According to him, the *only* real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers - but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena - resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.

His system is called the New Real Numbers.

In the New Real Numbers, which he notates as \(R^*\), the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with *finite* representations in decimal form. This leads to some astonishingly bizarre things. From his paper:

3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).

So 2/7 is not a new real number: it's ill-defined. 1/3 isn't a new real number: it's ill-defined.
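Stripped of the garbled wording, the fact Escultura is leaning on is the standard one: a fraction p/q, reduced to lowest terms, has a terminating decimal expansion if and only if q has no prime factors other than 2 and 5. A quick illustrative check in Python (`terminates` is my own throwaway function, not anything from his paper):

```python
from math import gcd

def terminates(p: int, q: int) -> bool:
    """True iff the fraction p/q has a finite decimal expansion."""
    q //= gcd(p, q)          # reduce the fraction to lowest terms
    for f in (2, 5):         # strip out all factors of 2 and 5
        while q % f == 0:
            q //= f
    return q == 1            # any factor left over forces a repeating tail

assert terminates(1, 8)       # 1/8 = 0.125: a "new real number"
assert not terminates(2, 7)   # 2/7 = 0.285714...: "ill-defined"
assert not terminates(1, 3)   # 1/3 = 0.333...: "ill-defined"
```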

4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.

After that last one, this isn't too surprising. But it's still absolutely amazing. The square root of two? Ill-defined: it doesn't really exist. e? Ill-defined: it doesn't exist. \(\pi\)? Ill-defined: it doesn't really exist. All of those triangles, circles, everything that depends on \(\pi\) or e? They're all bullshit according to Escultura. Because if he can't write them down on a piece of paper in decimal notation in a finite amount of time, *they don't exist*.

Of course, this is entirely *too* ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar. I can't say that I really follow it. I think this may be a language issue - Escultura isn't a native English speaker. I'm not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.

5) Consider the sequence of decimals,

(d)^na_1a_2...a_k, n = 1, 2, ..., (1)

where d is any of the decimals, 0.1, 0.2, 0.3, ..., 0.9, a_1, ..., a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j's, j = 1, ..., k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.

I *think* that what he's trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there's still no real infinite representations - instead, you've got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can't exist in this theory, because you can't generate this sequence.
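On that charitable reading, a d-sequence is just the sequence of finite truncations of a decimal expansion, each generated from the last by one more step of long division. A minimal sketch of that interpretation in Python (the function `truncations` is my own illustration, not his notation):

```python
def truncations(p: int, q: int, n: int):
    """Yield the first n finite decimal truncations of p/q (0 < p < q).
    Each term is generated from the previous one by a single step of
    long division - no infinite representation ever appears."""
    digits, rem = "0.", p
    for _ in range(n):
        rem *= 10
        digits += str(rem // q)
        rem %= q
        yield digits

# The finite terms march toward 1/3, but every one of them is a
# terminating decimal:
assert list(truncations(1, 3, 4)) == ["0.3", "0.33", "0.333", "0.3333"]
```

For an undescribable number, of course, there's no rule at all for producing the next term, which is why such numbers can't exist in anything resembling this theory.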

Where this really goes totally off the rails is that throughout this, he's working on the assumption that there's a one-to-one relationship between *representations* and *numbers*. That's what that "dark number" stuff is about. You see, in Escultura's system, 0.999999... is *not* equal to one. It's *not* a representational artifact. In Escultura's system, there are no representational artifacts: the representations *are* the numbers. The "dark number", which he notates as \(d^*\), is \(1 - 0.999999...\), and \(d^* > 0\). In fact, \(d^*\) is the *smallest* number greater than 0. And you can generate a complete ordered enumeration of all of the new real numbers: \(\{0, d^*, 2d^*, 3d^*, \ldots, n - 2d^*, n - d^*, n, n + d^*, \ldots\}\).

Reading Escultura, every once in a while, you might think he's joking. For example, he claims to have disproven Fermat's last theorem. Fermat's last theorem says that for \(n > 2\), there are no *positive integer* solutions to the equation \(x^n + y^n = z^n\). Escultura says he's disproven this:

The exact solutions of Fermat's equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99...)10^T,d*,10^T), T = 1, 2, ..., that clearly satisfies Fermat's equation,

x^n + y^n = z^n, (4)

for n = NT > 2. Moreover, for k = 1, 2, ..., the triple (kx,ky,kz) also satisfies Fermat's equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.

Even if you accept the reality of the notational artifact \(d^*\), this makes no sense: the point of Fermat's last theorem is that there are no *integer* solutions; \(d^*\) is not an integer, and \((1 - d^*)10^T\) is not an integer. Surely he's not *that* stupid. Surely he can't possibly believe that he's disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use \((2, 3, 35^{1/3})\) as a counterexample for n=3?
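Just to drive home how empty that kind of "counterexample" is: once you allow non-integer values, you can manufacture one for any exponent you like by picking x and y and solving for z. A two-line Python illustration:

```python
# Pick any x, y, and exponent n, then solve for z.
x, y, n = 2, 3, 3
z = (x**n + y**n) ** (1 / n)            # 35**(1/3) - not an integer
assert abs(x**n + y**n - z**n) < 1e-9   # "satisfies" Fermat's equation...
assert not z.is_integer()               # ...but says nothing about FLT
```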

But... he's serious. He's serious enough that he's published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).

Anyway, jumping back for a moment... You *can* create a theory of numbers around this \(d^*\) rubbish. The problem is, it's not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers define a structure called a *field*, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has *unique* identity elements for addition and multiplication. If you don't have unique identities, then everything collapses.

So... Take \(\frac{1}{9}\). That's the multiplicative inverse of 9. So, by definition, \(\frac{1}{9} \times 9 = 1\) - the multiplicative identity.

In Escultura's theory, \(\frac{1}{9}\) is a shorthand for the number that has a representation of 0.1111.... So, \(\frac{1}{9} \times 9 = 0.1111... \times 9 = 0.9999... = (1 - d^*)\). So \((1 - d^*)\) is *also* a multiplicative identity. By a similar process, you can show that \(d^*\) itself *must be* the additive identity. So either \(d^* = 0\), or else you've lost the field structure, and with it, pretty much all of real number theory.
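The collapse is immediate, because identity elements in a field are provably unique. The one-line argument, spelled out:

```latex
% Uniqueness of the multiplicative identity: if e and e' are both
% multiplicative identities, then
e = e \cdot e' = e' .
% Applied here: if 1 and (1 - d^*) are both identities, then
% 1 = 1 \cdot (1 - d^*) = 1 - d^*, which forces d^* = 0.
```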