## Hello World in ARM Assembly Language

Since K&R's book on C, it's become traditional to start any tutorial on a new language with a program that prints "Hello world". For ARM assembly running on Raspbian Linux, that traditional program looks like this:

```
.global _start
_start:
    MOV R7, #4        @ syscall number: 4 = write
    MOV R0, #1        @ file descriptor: 1 = stdout
    MOV R2, #12       @ length of the string, in bytes
    LDR R1, =string   @ address of the string
    SWI 0             @ software interrupt: call the OS

    MOV R7, #1        @ syscall number: 1 = exit
    SWI 0

.data
string:
    .ascii "Hello World\n"
```


It's definitely a bit more cryptic than most languages, but it doesn't look all that bad, now does it? Before I can explain how that works, we'll need to talk a bit about what we're programming, and how we can program it. We're going to go through a bunch of introductory material here; everything that we touch on, I'll come back to in more detail in a later post.

In the diagram to the right, you can see my attempt to illustrate the important parts of an ARM CPU from our initial perspective. As we learn more about it, we'll gradually refine this picture, adding more details - but for now, this is what things look like.

For now, we'll say that the CPU has 5 main parts:

1. A collection of 16 registers. A register is a memory cell that's built into the CPU. On an ARM processor, any time that you want to do any kind of operation - arithmetic, logic, comparison, you name it! - you'll need to have the values in a register. The first thirteen registers are available to you, to use for whatever you want. The last three are special; R13 is called the stack pointer (SP), R14 is called the link register (LR), and R15 is called the program counter (PC). We'll talk about what those three mean as we learn to program.
2. An arithmetic/logic unit (ALU). This is where the CPU does integer arithmetic and logic. Most of our programs will work exclusively with the ALU. (Floating point is important, but it's possible to do an awful lot of programming without it.)
3. A floating point unit (FPU). This is where the CPU does floating point arithmetic.
4. A status register. This is, like the other registers, a chunk of internal storage. But you can't manipulate it or access it directly. It's automatically updated by the ALU/FPU. Individual bits of the status register get updated to reflect various conditions about the current status of the CPU, and the results of the previous instruction. For example, the way that you can compare two values in the ARM is to subtract one from the other. If the two values were equal, then the ZERO flag in the status register will be set to 1; otherwise it will be set to 0. There's a branch instruction that only actually branches if the ZERO flag is set.
5. A data channel, called the bus. The bus connects the CPU to the rest of the computer. Memory, other storage devices, and input and output devices are all connected to the CPU via the bus. Doing anything that involves communicating through the bus is slow compared to doing anything that doesn't. For now, we'll say that memory is the only thing on the bus.
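To make the status-register behavior from item 4 concrete, here's a toy Python model of a compare. The function name and dict layout are mine, not anything ARM defines, and a real ARM status register also has carry (C) and overflow (V) bits:

```python
# A toy model of compare-by-subtraction: subtract, then record
# facts about the result as flags. Names here are illustrative,
# not actual ARM hardware interfaces.

def compare(a, b):
    """Rough model of the ARM CMP instruction: compute a - b and set flags."""
    result = a - b
    return {
        "Z": result == 0,   # ZERO flag: the operands were equal
        "N": result < 0,    # NEGATIVE flag: the result was negative
    }

print(compare(5, 5))  # {'Z': True, 'N': False}: a branch-if-equal would be taken
print(compare(5, 7))  # {'Z': False, 'N': True}
```

A branch-if-equal instruction is then just "branch if the Z flag is set" - the branch never re-examines the original operands.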

Now that we have a bit of a clue about the basic pieces of this thing we're going to program, we can start looking at our hello world program. We still need to talk about one other bit of background before we can get started.

For a computer, on the lowest level, a "program" is just a chunk of numbers. It's not even a chunk of instructions - it's just numbers. The numbers can be instructions, data, or both at the same time! That last bit might sound strange, but you'll see instructions like MOV R0, #4. That's saying load the literal value 4 into register R0. The 4 is a value encoded as a part of an instruction. So that 4 is both literal data sitting in the middle of a collection of instructions, and it's also a part of an instruction. The actual instruction doesn't really say "load the value 4"; it says "load the data value that's at this position in the instruction sequence".
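To see how literally an instruction is "just a number", here's a sketch that pulls fields out of the word 0xE3A00004, which (assuming the classic 32-bit ARM data-processing immediate encoding) is MOV R0, #4. The field names are mine:

```python
# MOV R0, #4 assembles (in the classic 32-bit ARM encoding) to the
# single number 0xE3A00004. The literal 4 lives inside the word:
# it's data and instruction at the same time.

instruction = 0xE3A00004

cond = (instruction >> 28) & 0xF   # condition field (0xE = "always")
rd   = (instruction >> 12) & 0xF   # destination register number
imm8 = instruction & 0xFF          # the embedded 8-bit literal

print(hex(cond), rd, imm8)  # 0xe 0 4
```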

We're not going to program the ARM using the numeric instructions directly. We're going to program the ARM using assembly language. Assembly language is a way of writing that chunk of numbers that is your program, but doing it with a syntax that's easy for a human being to read. Then a program called an assembler will translate from that readable format into the raw numeric format used by the computer. Conceptually, the assembler sounds a lot like the compiler that you'd use with a higher level language. But it's quite different: compilers take your code, and change it. Frequently, if you look at code that your compiler generates, you'd have a hard time recognizing code that was generated for a program that you wrote! But an assembler doesn't change anything. There's no restructuring, no optimization, no changes at all. In an assembly language program, you're describing how to lay out a bunch of instructions and data in memory, and the assembler does nothing but generate that exact memory layout.

Ok. That said, finally, we can get to the program!

Programming in assembly is quite different from programming in any reasonable programming language. There are no abstractions to make your life easier. You need to be painfully explicit about everything. It really brings home just how many abstractions you generally use in your code.

For example, in assembly language, you don't really have variables. You can store values anywhere you want in the computer's memory, but you have to decide where to put them, and how to lay them out, by yourself. But as I said before - all of the arithmetic and logic that makes up a program has to be done on values in registers. So a value in memory is only good if you can move it from memory into a register. It's almost like programming in a language with a total of 16 variables - except you're only really allowed to use 13 of them!

Not only do you not have variables, but you don't really have parameters. In a high level programming language, you can just pass things to subroutines. You don't need to worry about how. Maybe they're going onto a stack; maybe they're doing some kind of fancy lambda calculus renaming thing; maybe there are some magic variables. You don't need to know or care. But in assembly, there is no built-in notion of parameter-passing. You need to use the computer's registers and memory to build a parameter passing system. In the simplest form of that, which is what we're using here, you designate certain registers as carrying certain parameters. There's nothing in assembly to enforce that: if your program puts something into register R3, and a function was expecting it to be in R4, you won't get any kind of error.

In our "Hello world" program above, the first three instructions are loading specific values into registers expected by the operating system "print" function. For example, MOV R7, #4 means move the specific number 4 into register R7.

Loading literal values into registers is done using the MOV instruction. It's got two operands: the register to move the data into, and the source of the data. The source of the data can be either a literal value, or another register. If you want to load data from memory, you need to use a different instruction - LDR.

With the LDR instruction, we can see one of the conveniences of using assembly language. We want to print the string "Hello world". So we need to have that string in memory somewhere. The assembler lets us do that using a .ascii directive. The directive isn't an ARM instruction; it's an instruction to the assembler telling it "I want to put this string data into a block in memory". The .ascii directive is prefaced with a label, which allows us to refer to the beginning of the memory block populated by the directive. Now we can use "string" to refer to the memory block. So the instruction LDR R1, =string is exactly the same as saying LDR R1, address, where address is the memory location where the first byte of the string is stored.

These four instructions are preparation for calling a function provided by the operating system. R0 and R7 are used by the operating system to figure out what function we want to call. R1 and R2 are being used to pass parameters to the function. The print function expects R1 to contain the memory location of the first byte in the string we want to print, and R2 to contain the number of characters in the string.
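For comparison, the same operating-system call is reachable from a high-level language. In Python, for instance, os.write takes the same information we loaded into R0, R1, and R2, with the length implicit in the bytes object:

```python
import os

# The write syscall takes a file descriptor, a buffer, and a length.
# os.write exposes the same call; the length comes from the bytes
# object rather than being passed in a separate register like R2.
message = b"Hello World\n"
count = os.write(1, message)  # 1 = stdout, the value we put in R0
print(count)  # 12, the same length we loaded into R2
```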

We call the function using SWI 0. SWI is the software interrupt instruction. We can't call the operating system directly! One of the purposes of the operating system is to provide a safe environment, where different programs can't accidentally interfere with one another. If you could just branch into an OS function directly, any program would be able to do anything it wanted! But we don't allow that, so the program can't directly call anything in the OS. Instead, what it does is send a special kind of signal called an interrupt. Before it runs our program, the operating system has already told the CPU that any time it gets an interrupt, control should be handed to the OS. So the operating system gets called by the interrupt. It sees the values in R0 and R7, and recognizes that the interrupt is a request to run the "print" function, so it does that. Then it returns from the interrupt - and execution continues at the first instruction after the SWI call.

Now it's returned from the print, and we don't want to do anything else. If we didn't put something here to tell the operating system that we're done, the CPU would just proceed to the next memory address after our SWI, and interpret that as an instruction! We need to specifically say "We're done", so that the operating system takes control away from our program. The way we do that is with another SWI call. This SWI is the operating system "exit" call. To exit the program and kill the process, you call SWI 0 with R7=1; the value left in R0 becomes the program's exit status.

And that's it. That's hello-world in assembly.

## Oy Veh! Power Series, Analytic Continuations, and Riemann Zeta

After the whole Plait fiasco with the sum of the infinite series of natural numbers, I decided it would be interesting to dig into the real math behind that mess. That means digging into the Riemann zeta function, and the concept of analytic continuation.

A couple of caveats before I start:

1. This is the area of math where I'm at my worst. I am not good at analysis. I'm struggling to understand this stuff well enough to explain it. If I screw up, please let me know in the comments, and I'll do my best to update the main post promptly.
2. This is way more complicated than most of the stuff I write on this blog. Please be patient, and try not to get bogged down. I'm doing my best to take something that requires a whole lot of specialized knowledge, and explain it as simply as I can.

What I'm trying to do here is to get rid of some of the mystery surrounding this kind of thing. When people think about math, they frequently get scared. They say things like "Math is hard, I can't hope to understand it.", or "Math produces weird results that make no sense, and there's no point in my trying to figure out what it means, because if I do, my brain will explode. Only a super-genius geek can hope to understand it!"

That's all rubbish.

Math is complicated, because it covers a whole lot of subjects. To understand the details of a particular branch of math takes a lot of work, because it takes a lot of special domain knowledge. But it's not fundamentally different from many other things.

I'm a professional software engineer. I did my PhD in computer science, specializing in programming languages and compiler design. Designing and building a compiler is hard. To be able to do it well and understand everything that it does takes years of study and work. But anyone should be able to understand the basic concepts of what it does, and what the problems are.

I've got friends who are obsessed with baseball. They talk about ERAs, DIERAs, DRSs, EQAs, PECOTAs, Pythagorean expectations, secondary averages, UZRs... To me, it's a huge pile of gobbledygook. It's complicated, and to understand what any of it means takes some kind of specialized knowledge. For example, I looked up one of the terms I saw in an article by a baseball fan: "Peripheral ERA is the expected earned run average taking into account park-adjusted hits, walks, strikeouts, and home runs allowed. Unlike Voros McCracken's DIPS, hits allowed are included." I have no idea what that means. But it seems like everyone who loves baseball - including people who think that they can't do their own income tax return because they don't understand how to compute percentages - understand that stuff. They care about it, and since it means something in a field that they care about, they learn it. It's not beyond their ability to understand - it just takes some background to be able to make sense of it. Without that background, someone like me feels lost and clueless.

That's the way that math is. When you go to look at a result from complex analysis without knowing what complex analysis is, it looks like terrifyingly complicated nonsensical garbage, like "A meromorphic function is a function on an open subset of the complex number plane which is holomorphic on its domain except at a set of isolated points where it must have a Laurent series".

And it's definitely not easy. But understanding, in a very rough sense, what's going on and what it means is not impossible, even if you're not a mathematician.

Anyway, what the heck is the Riemann zeta function?

It's not easy to give even the simplest answer to that in a meaningful way.

Basically, Riemann Zeta is a function which describes fundamental properties of the prime numbers, and therefore of our entire number system. You can use the Riemann Zeta to prove that there's no largest prime number; you can use it to talk about the expected frequency of prime numbers. It occurs in various forms all over the place, because it's fundamentally tied to the structure of the realm of numbers.

The starting point for defining it is a power series defined over the complex numbers:

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$

(Note that the parameter we use is $s$ instead of a more conventional $x$ or $z$: this is a way of highlighting the fact that this is a function over the complex numbers, not over the reals.)

This function is not the Riemann zeta function!

The Riemann zeta function is something called the analytic continuation of that series. We'll get to that in a moment. Before doing that: why the heck should we care? I said it talks about the structure of numbers and primes, but how?

The zeta function actually has a lot of meaning. It tells us something fundamental about properties of the system of real numbers - in particular, about the properties of prime numbers. Euler proved that zeta is deeply connected to the prime numbers, using something called Euler's identity. Euler's identity says that, for all values of $s$ where the sum converges:

$$\sum_{n=1}^{\infty} \frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}$$

Which is a way of saying that the Riemann zeta function can describe the probability distribution of the prime numbers.

To really understand the Riemann Zeta, you need to know how to do analytic continuation. And to understand that, you need to learn a lot of number theory and a lot of math from the specialized field called complex analysis. But we can describe the basic concept without getting that far into the specialized stuff.

What is an analytic continuation? This is where things get really sticky. Basically, there are places where there's one way of solving a problem which produces a diverging infinite series. When that happens, you say there's no solution, that the point where you're trying to solve it isn't in the domain of the problem. But if you solve it in a different way, you can find a way of getting a solution that works. You're using an analytic process to extend the domain of the problem, and get a solution at a point where the traditional way of solving it wouldn't work.

A nice way to explain what I mean by that requires taking a
diversion, and looking at a metaphor. What we're talking about here isn't analytical continuation; it's a different way of extending the domain of a function, this time in the realm of the real numbers. But as an example, it illustrates the concept of finding a way to get the value of a function in a place where it doesn't seem to be defined.

In math, we like to play with limits. One example of that is in differential calculus. What we do in differential
calculus is look at continuous curves, and ask: at one specific location on the curve, what's the slope?

If you've got a line, the slope is easy to determine. Take any two points on the line, $(x_1, y_1)$ and $(x_2, y_2)$, where $x_1 \neq x_2$. Then the slope is $\frac{y_2 - y_1}{x_2 - x_1}$. It's easy, because for a line, the slope never changes.

If you're looking at a curve more complex than a line, then slopes get harder, because they're constantly changing. If you're looking at $y = x^2$, and you zoom in and look at it very close to $x = 0$, it looks like the slope is very close to 0. If you look at it close to $x = 1$, it looks like it's around 2. If you look at it at $x = 10$, it looks like it's a bit more than 20. But there are no two points where it's exactly the same!

So how can you talk about the slope at a particular point $x$? By using a limit. You pick a point really close to $x$, and call it $x + \epsilon$. Then an approximate value of the slope at $x$ is:

$$\text{slope} \approx \frac{(x + \epsilon)^2 - x^2}{\epsilon}$$

The smaller $\epsilon$ gets, the closer your approximation gets. But you can't actually get to $\epsilon = 0$, because if you did, that slope equation would have 0 in the denominator, and it wouldn't be defined! But it is defined for all non-zero values of $\epsilon$. No matter how small, no matter how close to zero, the slope is defined. But at zero, it's no good: it's undefined.

So we take a limit. As $\epsilon$ gets smaller and smaller, the slope gets closer and closer to some value. So we say that the slope at the point $x$ - at the exact place where the denominator of that fraction becomes zero - is defined as:

$$\text{slope}(x) = \lim_{\epsilon \to 0} \frac{(x + \epsilon)^2 - x^2}{\epsilon}$$

(Note: the original version of the previous line had a missing "-". Thanks to commenter Thinkeye for catching it.)

Since $\epsilon$ is getting closer and closer to zero, $\epsilon^2$ is getting smaller much faster; so we can treat it as zero:

$$\lim_{\epsilon \to 0} \frac{(x + \epsilon)^2 - x^2}{\epsilon} = \lim_{\epsilon \to 0} \frac{2x\epsilon + \epsilon^2}{\epsilon} = \lim_{\epsilon \to 0} \frac{2x\epsilon}{\epsilon} = 2x$$

So at any point $x$, the slope of $y = x^2$ is $2x$. Even though computing that involves dividing by zero, we've used an analytical method to come up with a meaningful and useful value at $\epsilon = 0$. This doesn't mean that you can divide by zero. You cannot conclude that $\frac{0}{0} = 2x$. But for this particular analytical setting, you can come up with a meaningful solution to a problem that involves, in some sense, dividing by zero.
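You can watch this limit happen numerically. Here's a sketch (the function name is mine) of the secant slope of $y = x^2$ at $x = 3$ as $\epsilon$ shrinks:

```python
# The secant slope of y = x^2 between x and x + eps is
# ((x + eps)^2 - x^2) / eps. As eps shrinks it closes in on 2x,
# even though eps = 0 itself is forbidden.

def approx_slope(x, eps):
    return ((x + eps)**2 - x**2) / eps

for eps in (0.1, 0.01, 0.001, 0.0001):
    print(eps, approx_slope(3.0, eps))  # approaches 2 * 3 = 6
```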

The limit trick in differential calculus is not analytic continuation. But it's got a tiny bit of the flavor.

Moving on: the idea of analytic continuation comes from the field of complex analysis. Complex analysis studies a particular class of functions in the complex number plane. It's not one of the easier branches of mathematics, but it's extremely useful. Complex analytic functions show up all over the place in physics and engineering.

In complex analysis, people focus on a particular group of functions that are called analytic, holomorphic, and meromorphic. (Those three are closely related, but not synonymous.)

A holomorphic function is a function over complex variables, which has one important property. The property is almost like a kind of abstract smoothness. In the simplest case, suppose that we have a complex equation in a single variable $z$, and the domain of this function is $D$. Then it's holomorphic if, and only if, for every point $z_0 \in D$, the function is complex differentiable in some neighborhood of points around $z_0$.

(Differentiable means, roughly, that using a trick like the one we did above, we can take the slope (the derivative) around $z_0$. In the complex number system, "differentiable" is a much stronger condition than it would be in the reals. In the complex realm, if something is differentiable, then it is infinitely differentiable. In other words, given a complex equation, if it's differentiable, that means that I can create a curve describing its slope. That curve, in turn, will also be differentiable, meaning that you can derive an equation for its slope. And that curve will be differentiable. Over and over, forever: the derivative of a differentiable curve in the complex number plane will always be differentiable.)

If you have a differentiable curve in the complex number plane, it's got one really interesting property: it's representable as a power series. (This property is what it means for a function to be called analytic; all holomorphic functions are analytic.) That is, a function $f$ is holomorphic for a set $D$ if, for all points $z_0 \in D$, you can represent the value of the function as a power series for a disk of values around $z_0$:

$$f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n$$

In the simplest case, the constant $z_0$ is 0, and it's just:

$$f(z) = \sum_{n=0}^{\infty} a_n z^n$$

(Note: In the original version of this post, I miswrote the basic pattern of a power series, and put both $a_n$ and $z$ in the base. Thanks to John Armstrong for catching it.)
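A concrete instance of such a power series: the exponential function's series $\sum_{n \geq 0} z^n/n!$ converges for every complex $z$ (its "disk" is the whole plane), so a truncated sum should agree with a library implementation. A sketch (the helper name is mine):

```python
# exp(z) = sum_{n>=0} z^n / n! converges for every complex z, so a
# truncated power series should agree with the library implementation.
import cmath

def exp_series(z, terms=40):
    total = 0 + 0j
    term = 1 + 0j                # z^0 / 0!
    for n in range(terms):
        total += term
        term *= z / (n + 1)      # build z^n / n! incrementally
    return total

z = 1 + 2j
print(exp_series(z))
print(cmath.exp(z))
```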

The function that we wrote, above, for the base of the zeta function is exactly this kind of power series. Zeta is an analytic function for a particular set of values. Not all values in the complex number plane; just for a specific subset.
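You can see that "specific subset" numerically: for $s$ with real part greater than 1 the partial sums settle down, and at $s = 2$ they approach $\pi^2/6$ (the famous Basel problem). A quick sketch (the helper name is mine):

```python
# Partial sums of 1/1^s + 1/2^s + ... settle down when Re(s) > 1.
# At s = 2 the limit is pi^2/6 (the Basel problem).
import math

def zeta_series(s, terms):
    """Partial sum of the first `terms` terms of the zeta series."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(zeta_series(2, 100000))   # ~1.64492
print(math.pi**2 / 6)           # ~1.64493
```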

If a function $f$ is holomorphic, then its strong differentiability leads to another property. There's a unique extension of it that expands its domain. The expansion always produces the same value for all points that are within the domain of $f$. It also produces exactly the same differentiability properties. But it's also defined on a larger domain than $f$ was. It's essentially what $f$ would be if its domain weren't so limited. If $D$ is the domain of $f$, then for any given connected domain $D'$, where $D \subset D'$, there's at most one function $f'$ with domain $D'$ that's an analytic continuation of $f$.

Computing analytic continuations is not easy. This is heavy enough already, without getting into the details. But the important thing to understand is that if we've got a function $f$ with an interesting set of properties, we've got a method that might be able to give us a new function $f'$ that:

1. Everywhere that $f$ was defined, $f'(s) = f(s)$.
2. Everywhere that $f$ was differentiable, $f'$ is also differentiable.
3. Everywhere that $f$ could be computed as a sum of an infinite power series, $f'$ can also be computed as a sum of an infinite power series.
4. $f'$ is defined in places where $f$ and the power series for $f$ are not.
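A classic small-scale example of this (simpler than zeta, but the same shape of situation): the geometric series $1 + z + z^2 + \cdots$ only converges for $|z| < 1$, where it equals $\frac{1}{1-z}$ - and $\frac{1}{1-z}$ is defined everywhere except $z = 1$. It's the analytic continuation of the series. A sketch (helper names are mine):

```python
# Inside the unit disk, partial sums of 1 + z + z^2 + ... approach
# 1/(1-z); outside it they blow up, but 1/(1-z) still has a
# perfectly good value there.

def geometric_partial_sum(z, terms):
    return sum(z**n for n in range(terms))

def continuation(z):
    return 1 / (1 - z)

print(geometric_partial_sum(0.5, 50), continuation(0.5))  # both ~2.0
print(geometric_partial_sum(2.0, 50))                     # huge: the series diverges
print(continuation(2.0))                                  # -1.0: the continuation still works
```

Notice the parallel with zeta: a series that's only valid on part of the plane, and a better-behaved function that agrees with it there.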

So, getting back to the Riemann zeta function: we don't have a proper closed form equation for zeta. What we have is the power series of the function that zeta is the analytic continuation of:

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$

If $s = -1$, then the series for that function expands to:

$$\sum_{n=1}^{\infty} \frac{1}{n^{-1}} = 1 + 2 + 3 + 4 + \cdots$$

The power series is undefined at this point; the base function that we're using, that zeta is the analytic continuation of, is undefined at $s = -1$. The power series is an approximation of the zeta function, which works over some specific range of values. But it's a flawed approximation. It's wrong about what happens at $s = -1$. The approximation says that the value at $s = -1$ should be a non-converging infinite sum. It's wrong about that. The Riemann zeta function is defined at that point, even though the power series is not. If we use a different method for computing the value of the zeta function at $s = -1$ - a method that doesn't produce an incorrect result! - the zeta function has the value $-\frac{1}{12}$ at $s = -1$.

Note that this is a very different statement from saying that the sum of that power series is $-\frac{1}{12}$ at $s = -1$. We're talking about fundamentally different functions! The Riemann zeta function at $s = -1$ does not expand to the power series that we used to approximate it.
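One way to see a "different method" in action without heavy machinery: the alternating cousin of the series, $\eta(s) = 1 - 2^{-s} + 3^{-s} - \cdots$, can be resummed by Euler's transformation, and $\zeta(s) = \eta(s)/(1 - 2^{1-s})$. Applying that at $s = -1$ gives $-\frac{1}{12}$ without ever summing $1 + 2 + 3 + \cdots$. A sketch (helper names are mine, not standard library functions):

```python
# Evaluating zeta(-1) by a route that avoids the divergent sum:
# Euler-transform the alternating series eta(s) = 1 - 2^-s + 3^-s - ...
# then use the identity zeta(s) = eta(s) / (1 - 2^(1-s)).
from fractions import Fraction

def euler_sum_alternating(terms, depth):
    """Euler transformation of sum_{n>=0} (-1)^n * terms[n]."""
    total = Fraction(0)
    diffs = list(terms)
    for k in range(depth):
        total += (-1)**k * diffs[0] / 2**(k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

s = -1
a = [Fraction(n + 1)**(-s) for n in range(20)]   # (n+1)^(-s): just 1, 2, 3, ... at s = -1
eta = euler_sum_alternating(a, 10)
zeta_at_minus_1 = eta / (1 - Fraction(2)**(1 - s))
print(eta, zeta_at_minus_1)  # 1/4 -1/12
```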

In physics, if you're working with some kind of system that's described by a power series, you can come across the power series that produces the sequence that looks like the sum of the natural numbers. If you do, and if you're working in the complex number plane, and you're working in a domain where that power series occurs, what you're actually using isn't really the power series - you're playing with the analytic zeta function, and that power series is a flawed approximation. It works most of the time, but if you use it in the wrong place, where that approximation doesn't work, you'll see the sum of the natural numbers. In that case, you get rid of that sum, and replace it with the correct value of the actual analytic function, not with the incorrect value of applying the power series where it won't work.

Ok, so that warning at the top of the post? Entirely justified. I screwed up a fair bit at the end. The series that defines the value of the zeta function for some values, the series for which the Riemann zeta is the analytical continuation? It's not a power series. It's a series alright, but not a power series, and not the particular kind of series that defines a holomorphic or analytical function.

The underlying point, though, is still the same. That series (not a power series, but a series) is a partial definition of the Riemann zeta function. It's got a limited domain, where the Riemann zeta's domain doesn't have the same limits. The series definition still doesn't work at $s = -1$: the series is undefined there. At $s = -1$, the series expands to $1 + 2 + 3 + 4 + \cdots$, which doesn't converge, and which doesn't add up to any finite value, $-\frac{1}{12}$ or otherwise. That series does not have a value at $s = -1$. No matter what you do, that equation - the definition of that series - does not work at $s = -1$. But the Riemann zeta function is defined in places where that equation isn't. Riemann zeta at $s = -1$ is defined, and its value is $-\frac{1}{12}$.

Despite my mistake, the important point is still that last sentence. The value of the Riemann zeta function at $s = -1$ is not the sum of the set of natural numbers. The equation that produces the sequence doesn't work at $s = -1$. The definition of the Riemann zeta function doesn't say that it should, or that the sum of the natural numbers is $-\frac{1}{12}$. It just says that the first approximation of the Riemann zeta function for some, but not all, values is given by a particular infinite sum. In the places where that sum works, it gives the value of zeta; in places where that sum doesn't work, it doesn't.

## Leading in to Machine Code: Why?

I'm going to write a few posts about programming in machine language. It seems that many more people are interested in learning about the ARM processor, so that's what I'll be writing about. In particular, I'm going to be working with the Raspberry Pi running Raspbian linux. For those who aren't familiar with it, the Pi is a super-inexpensive computer that's very easy to program, and very easy to interface with the outside world. It's a delightful little machine, and you can get one for around \$50!

Anyway, before getting started, I wanted to talk about a few things. First of all, why learn machine language? And then, just what the heck is the ARM thing anyway?

## Why learn machine code?

My answer might surprise you. Or, if you've been reading this blog for a while, it might not.

Let's start with the wrong reason. Most of the time, people say that you should learn machine language for speed: programming at the machine code level gets you right down to the hardware, eliminating any layers of junk that would slow you down. For example, one of the books that I bought to learn ARM assembly (Raspberry Pi Assembly Language RASPBIAN Beginners: Hands On Guide) said:

> even the most efficient languages can be over 30 times slower than their machine code equivalent, and that’s on a good day!

This is pure, utter rubbish. I have no idea where he came up with that 30x figure, but it's got no relationship to reality. (It's a decent book, if a bit elementary in approach; this silly statement isn't representative of the book as a whole!)

In modern CPUs - and the ARM definitely does count as modern! - the fact is, for real world programs, writing code by hand in machine language will probably result in slower code!

If you're talking about writing a single small routine, humans can be very good at that, and they often do beat compilers. But once you get beyond that, and start looking at whole programs, any human advantage in machine language goes out the window. The constraints that actually affect performance have become incredibly complex - too complex for us to juggle effectively. We'll look at some of these in more detail, but I'll explain one example.

The CPU needs to fetch instructions from memory. But memory is dead slow compared to the CPU! In the best case, your CPU can execute a couple of instructions in the time it takes to fetch a single value from memory. This leads to an obvious problem: it can execute (or at least start executing) one instruction for each clock tick, but it takes several ticks to fetch an instruction!

To get around this, CPUs play a couple of tricks. Basically, they don't fetch single instructions, but instead grab entire blocks of instructions; and they start retrieving instructions before they're needed, so that by the time the CPU is ready to execute an instruction, it's already been fetched.

So the instruction-fetching hardware is constantly looking ahead, and fetching instructions so that they'll be ready when the CPU needs them. What happens when your code contains a conditional branch instruction?

The fetch hardware doesn't know whether the branch will be taken or not. It can make an educated guess by a process called branch prediction. But if it guesses wrong, then the CPU is stalled until the correct instructions can be fetched! So you want to make sure that your code is written so that the CPU's branch prediction hardware is more likely to guess correctly. Many of the tricks that humans use to hand-optimize code actually have the effect of confusing branch prediction! They shave off a couple of instructions, but by doing so, they also force the CPU to sit idle while it waits for instructions to be fetched. That branch prediction failure penalty frequently outweighs the cycles that they saved!

That's one simple example. There are many more, and they're much more complicated. And to write efficient code, you need to keep all of those in mind, and fully understand every tradeoff. That's incredibly hard, and no matter how smart you are, you'll probably blow it for large programs.

If not for efficiency, then why learn machine code? Because it's how your computer really works! You might never actually use it, but it's interesting and valuable to know what's happening under the covers. Think of it like your car: most of us will never actually modify the engine, but it's still good to understand how the engine and transmission work.

Your computer is an amazingly complex machine. It's literally got billions of tiny little parts, all working together in an intricate dance to do what you tell it to. Learning machine code gives you an idea of just how it does that. When you're programming in another language, understanding machine code lets you understand what your program is really doing under the covers. That's a useful and fascinating thing to know!

## What is this ARM thing?

As I said, we're going to look at machine language coding on the
ARM processor. What is this ARM beast anyway?

It's probably not the CPU in your laptop. Most desktop and laptop computers today are based on a direct descendant of the first microprocessor: the Intel 4004.

Yes, seriously: the Intel CPUs that drive most PCs are, really, direct descendants of the first CPU designed for desktop calculators! That's not an insult to the Intel CPUs, but rather a testament to the value of a good design: they've just kept on growing and enhancing it. It's hard to see the resemblance unless you follow the design path, where each step follows directly on its predecessors.

The Intel 4004, released in 1971, was a 4-bit processor designed for use in calculators. Nifty chip, state of the art in 1971, but not exactly what we'd call flexible by modern standards. Even by the standards of the day, they recognized its limits. So following on its success, they created an 8-bit version, which they called the 8008. And then they extended the instruction set, and called the result the 8080. The 8080, in turn, yielded successors in the 8088 and 8086 (and the Z80, from a rival chipmaker).

The 8086 was the processor chosen by IBM for its newfangled personal computers. Chip designers kept making it better, producing the 80286, 386, Pentium, and so on - up to today's CPUs, like the Core i7 that drives my MacBook.

The ARM comes from a different design path. At the time that Intel was producing the 8008 and 8080, other companies were getting into the same game. From the PC perspective, the most important was the 6502, which
was used by the original Apple, Commodore, and BBC microcomputers. The
6502 was, incidentally, the first CPU that I learned to program!

The ARM isn't a descendant of the 6502, but it is a product of the 6502-based family of computers. In the early 1980s, the BBC decided to create an educational computer to promote computer literacy. They hired a company called Acorn to develop a computer for their program. Acorn developed a beautiful little system that they called the BBC Micro.

The BBC Micro was a huge success. Acorn wanted to capitalize on that success, and move the machine from the educational market into the business market. But the 6502 was underpowered for what they wanted to do. So they decided to add a companion processor: they'd build a computer that could still run all of the BBC Micro programs, but that could do fancy graphics and fast computation with this other processor.

In a typical tech-industry NIH (Not Invented Here) moment, they decided that none of the other commercially available CPUs were good enough, so they set out to design their own. They were impressed by the work done by the Berkeley RISC (Reduced Instruction Set Computer) project, and so they adopted the RISC principles, and designed their own CPU, which they called the Acorn RISC Microprocessor, or ARM.

The ARM design was absolutely gorgeous. It was simple but flexible and powerful, able to operate on very low power while generating very little heat. It had lots of registers and an extremely simple instruction set, which made it a pleasure to program. Acorn built a lovely computer with a great operating system called RiscOS around the ARM, but it never really caught on. (If you'd like to try RiscOS, you can run it on your Raspberry Pi!)

But the ARM didn't disappear. It didn't catch on in the desktop computing world, but it rapidly took over the world of embedded devices. Everything from your cellphone to your dishwasher to your iPad runs on an ARM CPU.

Just like the Intel family, the ARM has continued to evolve: the ARM family has gone through 8 major design changes, and dozens of smaller variations. They're no longer just produced by Acorn - the ARM design is maintained by a consortium, and ARM chips are now produced by dozens of different manufacturers - Motorola, Apple, Samsung, and many others.

Recently, ARM has even started to expand beyond embedded platforms: some Chromebook laptops are ARM-based, and several companies are starting to market ARM-based server boxes for datacenters! I'm looking forward to the day when I can buy a nice high-powered ARM laptop.

## More Basics: Compilers, Programs, and Languages

After my "what is an OS?" post, a couple of readers asked me to write a similar post about compilers.

Before I can answer what a compiler is, it's helpful to first answer a different question: what is a program?

And here we get to one of my pet peeves. The most common answer to that question is "a detailed step-by-step sequence of instructions". For example, here's what wikipedia says:

> A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.

This is wrong.

Back when people first started to study the idea of computing devices, they talked about computing machines as devices that performed a single, specific task. If you think about a basic Turing machine, you normally define Turing machines that perform a single computation. They've got a built-in sequence of states, and a built-in transition table - the machine can only perform one computation. It takes one kind of input, performs its computation on that input, and produces its output.

Building up from these specific machines, they came up with the idea of a universal computing device. A universal computer was a computing machine whose input was a description of a different computing machine. By giving the universal machine different inputs, it could perform different computations.

The point of this diversion is that looking at this history tells us what a program really is: it's a description of a computing machine. Our computers are universal computing machines; they take programs as input to describe the computing machines we want them to emulate. What we're doing when we program is describing a computing machine that we'd like to create. Then we feed it into our universal computing machine, and it behaves as if we'd built a custom piece of hardware to do our computation!

The problem is, our computers are simultaneously very primitive and overwhelmingly complex. They can only work with data expressed in fixed-length sequences of on/off values; to do anything else, we need to find a way of expressing it in terms of extremely simple operations on those on/off values. And to make them operate efficiently, they've got a complex structure: many different kinds of storage (registers, L1 and L2 caches, addressable memory), complicated instruction sets, and a whole lot of tricky performance tweaks. It's really hard to program a computer in terms of its native instructions!

In fact, it's so hard to program in terms of native instructions that we just don't do it. What we do is write programs in terms of different machines. That's the point of a programming language.

Looked at this way, a programming language is a way of describing computing machines. The difference between programming languages is in how they describe those machines. A language like C describes von Neumann machines. Haskell describes machines that work via lambda calculus computations, using something like a spineless G-machine. Prolog describes machines that perform computations in terms of intuitionistic logical inference, like a Warren Abstract Machine.

So finally, we can get to the point: what is a compiler? A compiler is a program that takes a description of a computing device defined in one way, and translates it into the kind of machine description that can be used by our hardware. A programming language lets us ignore all of the complexities of how our actual hardware is built, and describe our computations in terms of a simple abstraction. A compiler takes that description, and turns it into a form that the computer hardware can actually use.

For anyone who's read this far: I've gotten a few requests to talk about assembly language. I haven't programmed in assembly since the days of the Motorola 68000. This means that to do it, I'll need to learn something more up-to-date. Would you be more interested in seeing Intel, or ARM?

## Boot all the computers!

Moving on from last week's operating system post, today we'll look at how a computer boots up and loads an operating system.

Let's start with why booting is a question at all. When a computer turns on, what happens? What we're used to seeing is that the disk drive turns on and starts spinning, and the computer loads something from the disk.

The question is: how does the computer know how to turn on the disk? As I said in the OS post, the CPU only really knows how to work with memory. To talk to a disk drive, it needs to do some very specific things - write to certain memory locations, wait for things to happen. Basically, in order to turn on that disk drive and load the operating system, it needs to run a program. But how does it know what program to run?

I'm going to focus on how modern PCs work. Other computers use (or have used) similar processes; the details vary, but the basic idea is the same.

A quick overview of the process:

1. CPU startup: the CPU starts executing from a fixed location.
2. Run BIOS initialization.
3. The BIOS loads the boot loader from the master boot record of a disk.
4. The boot loader loads and starts the operating system.

As that list suggests, it's not a particularly simple process. We think of it as one step: turn on the computer, and it runs the OS. In fact, it's a complicated dance of many steps.

On the lowest level, it's all hardware. When you turn on a computer, some current gets sent to a clock. The clock is basically a quartz crystal; when you apply current to the crystal, it vibrates and produces a regular electrical pulse. That pulse is what drives the CPU. (When you talk about your computer's speed, you generally describe it in terms of the frequency of the clock pulse. For example, in the laptop that I'm using to write this post, I've got a 2.4 GHz processor: that means that the clock chip pulses 2.4 billion times per second.)

When the CPU gets a clock pulse, it executes an instruction from memory. It knows which instruction to execute because it's got a register (a special piece of memory built into the CPU) that tells it where the next instruction is. When the computer is turned on, that register is set to point at a specific location. Depending on the CPU, that might be 0, or it might be some other magic location; it doesn't matter. What matters is that the CPU is built so that when it's first turned on and receives the clock pulse that starts it running, that register will always point at the same place.

The software part of the boot process starts there: the computer's designers put a chunk of read-only memory at that location - so when the computer turns on, there's a program sitting there, ready to run. On PCs, that program is called the BIOS (Basic Input/Output System).

The BIOS knows how to tell the hardware that operates your display to show text on the screen, and it knows how to read stuff on your disk drives. It doesn't know much beyond that. What it knows is extremely primitive. It doesn't understand things like filesystems - the filesystem is set up and controlled by the operating system, and different operating systems will set up filesystems in different ways. The BIOS can't do anything with a filesystem: it doesn't include any programming to tell it how to read a filesystem, and it can't ask the operating system to do it, because the OS hasn't loaded yet!

What the BIOS does is something similar to what the CPU did when it started up. The CPU knew to look in a special location in memory to find a program to run. The BIOS knows to look at a special section on a disk drive to find a program to run. Every disk has a special chunk of data on it called the master boot record (MBR). The MBR contains another program, called a boot loader. So the BIOS loads the boot loader, and then uses it to actually load the operating system.

This probably seems a bit weird. The computer starts up by looking in a specific location for a program to run (the BIOS), which loads something (the bootloader). The thing it loads (the bootloader) also just looks in a specific location for a program to run (the OS). Why the two layers?

Different operating systems are built differently, and the specific steps to actually load and run the OS are different. For example, on my laptop, I can run two operating systems: MacOS, and Linux. On MacOS (aka Darwin), there's something called a microkernel that gets loaded. The microkernel is stored in a file named "mach_kernel" in the root directory of a type of filesystem called HFS. But in my installation of Linux, the OS is stored in a file named "vmlinuz" in the root directory of a type of filesystem called EXT4. The BIOS doesn't know what operating system it's loading, and it doesn't know what filesystem the OS uses - which means that it knows neither the name of the file to load, nor how to find that file.

The bootloader was set up by the operating system. It's specific to the operating system - you can think of it as part of the OS. So it knows what kind of filesystem it's going to look at, and how to find the OS in that filesystem.

So once the bootloader gets started, it knows how to load and run the operating system, and once it does that, your computer is up and running, and ready for you to use!

Of course, all of this is a simplified version of how it works. But for understanding the process, it's a reasonable approximation.

(To reply to commenters: I'll try to do a post like this about compilers when I have some time to write it up.)

## Basics: What is an OS?

A reader of this blog apparently likes the way I explain things, and wrote to me to ask a question: what is an operating system? And how does a computer know how to load it?

I'm going to answer that, but I'm going to do it in a roundabout way. The usual answer is something like: "An operating system or OS is a software program that enables the computer hardware to communicate and operate with the computer software." In my opinion, that's a cop-out: it doesn't really answer anything. My roundabout approach will take longer, but hopefully it will give you an answer that actually explains things, and helps you understand them better.

When someone like me sets out to write a program, how can we do it? That sounds like an odd question, until you actually think about it. The core of the computer, the CPU, is a device which really can't do very much. It's a self-contained unit which can do lots of interesting mathematical and logical operations, but they all happen completely inside the CPU (how they happen inside the CPU is way beyond this post!). To get stuff in and out of the CPU, the only thing that the computer can do is read and write values from the computer's memory. That's really it.

So how do I get a program in to the computer? The computer can only read the program if it's in the computer's memory. And every way that I can get it into the memory involves the CPU!

Computers are built so that there are certain memory locations and operations that are used to interact with the outside world. They also have signal wires called interrupt pins where other devices (like disk drives) can apply a current to say "Hey, I've got something for you". The exact mechanics are, of course, complicated, and vary from CPU to CPU. But to give you an idea of what it's like, to read some data from disk, you'd do something like the following.

1. Set aside a chunk of memory where the data should be stored after it's read. This is called a buffer.
2. Figure out where the data you want to read is stored on the disk. You can identify a disk location as a number. (It's usually a bit more complicated than that, but we're trying to keep this simple.)
3. Write that number into a special memory location that's monitored by the disk drive controller.
4. Wait until the disk controller signals you via an interrupt that the data is ready. The data will be stored in a special memory location, that can be altered by the disk. (Simplifying again, but this is sometimes called a DMA buffer.)
5. Copy the data from the controller's DMA buffer into the application's memory buffer that you allocated.
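The five-step dance above can be sketched as a toy simulation in Python. Everything here - the class, the "registers", the names - is made up purely to show the shape of the protocol; real drivers poke memory-mapped hardware registers and field real interrupts, not Python objects.

```python
# A toy simulation of the low-level disk-read dance described above.
# All of the names here are hypothetical, purely for illustration.

class ToyDiskController:
    """Pretends to be a disk controller with a command register and a DMA buffer."""
    def __init__(self, disk_blocks):
        self.disk_blocks = disk_blocks   # the "platters": block number -> bytes
        self.command_register = None     # step 3 writes the block number here
        self.dma_buffer = None           # step 4: the controller fills this
        self.interrupt_pending = False   # step 4: the controller raises this

    def tick(self):
        # The controller notices a pending command and services it.
        if self.command_register is not None:
            self.dma_buffer = self.disk_blocks[self.command_register]
            self.command_register = None
            self.interrupt_pending = True

def read_block(controller, block_number):
    buffer = bytearray()                       # 1. set aside a buffer
    location = block_number                    # 2. figure out the disk location
    controller.command_register = location     # 3. write it to the control location
    while not controller.interrupt_pending:    # 4. wait for the "interrupt"
        controller.tick()
    controller.interrupt_pending = False
    buffer.extend(controller.dma_buffer)       # 5. copy from the DMA buffer
    return bytes(buffer)

disk = ToyDiskController({7: b"hello from block 7"})
print(read_block(disk, 7))   # b'hello from block 7'
```

Even in this toy form, you can see why nobody wants to write this dance for every read in every program.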

When you get down to that level, programming is an intricate dance! No one
wants to do that - it's too complicated, too error prone, and just generally
too painful. But there's a deeper problem: at this level, it's every program
for itself. How do you decide where on the disk to put your data? How can you
make sure that no one else is going to use that part of the disk? How can you
tell another program where to find the data that you stored?

You want something that creates the illusion of a much simpler computational world. Of course, under the covers, it's all going to be that incredibly messy stuff, but you want to cover it up. That's the job of an operating system: it's a layer between the hardware and the programs that you run that creates a new computational world that's much easier to work in.

Instead of having to do the dance of mucking with the hard disk drive controller yourself, the operating system gives you a way of saying "Open a file named 'foo'", and then it takes that request, figures out where 'foo' is on the disk, talks to the disk drive, gets the data, and then hands you a buffer containing it. You don't need to know what kind of disk drive the data is coming from, or how the name 'foo' maps to sectors of the disk. You don't need to know where the control memory locations for the drive are. You just let the operating system do that for you.

So, ultimately, this is the answer: The operating system is a program that runs on the computer, and creates the environment in which other programs can run. It does a lot of things to create a pleasant environment in which to write and run other programs. Among the multitude of services provided by most modern operating systems are:

1. Device input and output. This is what we talked about above: direct interaction with input and output devices is complicated and error prone; the operating system implements the input and output processes once, (hopefully) without errors, and then makes it easy for every other program to just use its correct implementation.
2. Multitasking: your computer has enough power to do many things at once. Most modern computers have more than one CPU. (My current laptop has 4!) And most programs end up spending a lot of their time doing nothing: waiting for you to press a key, or waiting for the disk drive to send it data. The operating system creates sandboxes, called processes, and allows one program to run in each sandbox. It takes care of ensuring that each process gets to run on a CPU for a fair share of the time.
3. Memory management. With more than one program running at the same time on your computer, you need to make sure that you're using memory that isn't also being used by some other program, and to make sure that no other program can alter the memory that you're using without your permission. The operating system decides what parts of memory can be used by which program.
4. Filesystems. Your disk drive is really just a huge collection of small sections, each of which can store a fixed number of bits, encoded in some strange format dictated by the mechanics of the drive. The OS provides an abstraction that's a lot easier to deal with.

I think that's enough for one day. Tomorrow: how the computer knows how to run the OS when it gets switched on!

To me, the thing that makes probability fun is that the results are frequently surprising. We've got very strong instincts about how we expect numbers to work. But when you do anything that involves a lot of computations with big numbers, our intuition goes out the window - nothing works the way we expect it to. A great example of that is something called the birthday paradox.

Suppose you've got a classroom full of people. What's the probability that there are two people with the same birthday? Intuitively, most people expect that it's pretty unlikely. It seems like it shouldn't be likely - 365 possible birthdays, and 20 or 30 people in a classroom? Very little chance, right?

Let's look at it, and figure out how to compute the probability.

Interesting probability problems are all about finding out how to put things together. You're looking at things where there are huge numbers of possible outcomes, and you want to determine the odds of a specific class of outcomes. Finding the solutions is all about figuring out how to structure the problem.

The birthday paradox is a great example of this. It's a problem with a somewhat surprising outcome, and one where finding the right way to structure the problem has a dramatic effect.

Here's the problem: you've got a group of 30 people. What's the probability that two people out of that group of thirty have the same birthday?

We'll look at it with some simplifying assumptions. We'll ignore leap year - so we've got 365 possible birthdays. We'll assume that all birthdays are equally likely - no variation for weekdays/weekends, no variation for seasons, and no holidays, etc. Just 365 equally probable days.

How big is the space? That is, how many different ways are there to assign birthdays to 30 people? It's $365^{30}$, which is in the vicinity of $7.4 \times 10^{76}$.

To start off, we'll reverse the problem. It's easier to structure the problem if we try to ask "What's the probability that no two people share a birthday". If P(B) is the probability that no two people share a birthday, then 1-P(B) is the probability that at least two people share a birthday.

So let's look at a couple of easy cases. Suppose we've got two people: what are the odds that they've got the same birthday? 1 in 365: there are $365^2$ possible pairs of birthdays, and 365 of those pairs have both birthdays the same. So there's a probability of $365/365^2 = 1/365$ that the two people have the same birthday. For just two people, it's pretty easy. In the reverse form, there's a $364/365$ chance that the two people have different birthdays.

What about 3 people? It's the probability of the first two having different birthdays, times the probability of the third person having a different birthday than either of the first two. There are 365 possible birthdays for the third person, and 363 of them don't overlap with the first two. So the probability of three distinct birthdays is $\frac{364}{365} \cdot \frac{363}{365}$, and in general, for $N$ people, the probability of having distinct birthdays is $\frac{364}{365} \cdot \frac{363}{365} \cdots \frac{365-(N-1)}{365}$.

At this point, we've got a nice recursive definition. Let's say that $D(N)$ is the probability of $N$ people having distinct birthdays. Then:

1. For 2 people, the probability of distinct birthdays is 364/365. ($D(2) = \frac{364}{365}$)
2. For $N > 2$ people, the probability of distinct birthdays is $D(N) = D(N-1) \cdot \frac{365-(N-1)}{365}$.

Convert that to a closed form, and you get: $D(N) = \frac{365!}{365^N \, (365-N)!}$. For 30 people, that's $\frac{365!}{365^{30} \cdot 335!}$. Work it out, and that's about 0.29 - so the probability of everyone having distinct birthdays is 29% - which means that the probability of at least two people in a group of 30 having the same birthday is 71%!
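The recursive definition above is easy to turn into runnable code. Here's a quick Python sketch (the function name is mine, just for illustration):

```python
# D(N): the probability that N people all have distinct birthdays,
# computed from the recursive/product definition in the text.

def distinct_prob(n, days=365):
    p = 1.0
    for i in range(1, n):        # person i+1 must avoid the i earlier birthdays
        p *= (days - i) / days
    return p

p30 = distinct_prob(30)
print(round(p30, 2))             # 0.29: everyone distinct
print(round(1 - p30, 2))         # 0.71: at least two share a birthday
```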

You can see why our intuitions are so bad? We're talking about something where one factor in the computation is the factorial of 365!

Let's look a bit further: how many people do you need before there's a 50% chance of 2 people sharing a birthday? Use the formula we wrote above, and it turns out to be 23. Here are the numbers - remember that this is the reverse probability, the probability of all birthdays being distinct.

| N | P(distinct) | N | P(distinct) | N | P(distinct) | N | P(distinct) | N | P(distinct) |
|---|-------------|---|-------------|---|-------------|---|-------------|---|-------------|
| 1 | 1 | 11 | 0.858859 | 21 | 0.556312 | 31 | 0.269545 | 41 | 0.0968484 |
| 2 | 0.99726 | 12 | 0.832975 | 22 | 0.524305 | 32 | 0.246652 | 42 | 0.0859695 |
| 3 | 0.991796 | 13 | 0.80559 | 23 | 0.492703 | 33 | 0.225028 | 43 | 0.0760771 |
| 4 | 0.983644 | 14 | 0.776897 | 24 | 0.461656 | 34 | 0.204683 | 44 | 0.0671146 |
| 5 | 0.972864 | 15 | 0.747099 | 25 | 0.4313 | 35 | 0.185617 | 45 | 0.0590241 |
| 6 | 0.959538 | 16 | 0.716396 | 26 | 0.401759 | 36 | 0.167818 | 46 | 0.0517472 |
| 7 | 0.943764 | 17 | 0.684992 | 27 | 0.373141 | 37 | 0.151266 | 47 | 0.0452256 |
| 8 | 0.925665 | 18 | 0.653089 | 28 | 0.345539 | 38 | 0.135932 | 48 | 0.039402 |
| 9 | 0.905376 | 19 | 0.620881 | 29 | 0.319031 | 39 | 0.12178 | 49 | 0.0342204 |
| 10 | 0.883052 | 20 | 0.588562 | 30 | 0.293684 | 40 | 0.108768 | 50 | 0.0296264 |

With just 23 people, there's a greater than 50% chance that two people will have the same birthday. By the time you get to just 50 people, there's a greater than 97% chance that two people have the same birthday!
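Both of those claims are easy to verify with a short scan. This sketch recomputes the all-distinct probability with the product formula and looks for the crossover point:

```python
# Scan group sizes for the point where a shared birthday becomes
# more likely than not (probability of all-distinct drops below 0.5).

def distinct_prob(n, days=365):
    p = 1.0
    for i in range(1, n):
        p *= (days - i) / days
    return p

n = 2
while distinct_prob(n) > 0.5:
    n += 1
print(n)                              # 23

print(1 - distinct_prob(50) > 0.97)   # True: >97% chance at 50 people
```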

As an amusing aside, the first time I saw this problem worked through was in an undergraduate discrete probability theory class, with 37 people in the class, and no duplicate birthdays!

Now - remember at the beginning, I said that the trick to working probability problems is all about how you formulate the problem. There's a much, much better way to formulate this.

Think of the assignment of birthdays as a function from people to birthdays: $f: P \rightarrow B$. The number of ways of assigning birthdays to people is the size of the set of functions from people to birthdays. How many possible functions are there? $|B|^{|P|}$, where $|B|$ is the number of days in the year - 365 - and $|P|$ is the number of people in the group.

The set of assignments with all-unique birthdays is the set of injective functions. (An injective function is a function where $f(x) = f(y)$ implies $x = y$.) How many injective functions are there? $\frac{|B|!}{(|B|-|P|)!}$.

The probability of all birthdays being unique is the size of the set of injective functions divided by the size of the set of all assignments: $\frac{|B|!}{(|B|-|P|)! \cdot |B|^{|P|}}$.

So we've got the exact same result - but it's a whole lot easier in terms of the discrete functions!
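And the two formulations really do agree. Here's a quick check in Python, using exact integer arithmetic for the counting version (the function names are mine, just for illustration):

```python
from math import factorial

def prob_by_counting(people, days=365):
    # injective functions over all functions: |B|! / ((|B|-|P|)! * |B|^|P|)
    injective = factorial(days) // factorial(days - people)
    return injective / days ** people

def prob_by_product(people, days=365):
    # the step-by-step product formula from the earlier derivation
    p = 1.0
    for i in range(1, people):
        p *= (days - i) / days
    return p

print(round(prob_by_counting(30), 4))                                    # 0.2937
print(abs(prob_by_counting(30) - prob_by_product(30)) < 1e-9)            # True
```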

## The Elegance of Uncertainty

I was recently reading yet another botched explanation of Heisenberg's uncertainty principle, and it ticked me off. It wasn't a particularly interesting one, so I'm not going to disassemble it in detail. What it did was the usual crackpot quantum dance: Heisenberg said that quantum means observers affect the universe, therefore our thoughts can control the universe. Blah blah blah.

It's not worth getting into the cranky details. But it inspired me to actually take some time and try to explain what uncertainty really means. Heisenberg's uncertainty principle is fascinating. It's an extremely simple concept, and yet when you realize what it means, it's the most mind-blowingly strange thing that you've ever heard.

One of the beautiful things about it is that you can take the math of uncertainty and reduce it to one simple equation. It says that given any object or particle, the following equation is always true:

$$\Delta x \cdot \Delta p \ge \frac{\hbar}{2}$$

Where:

• $\Delta x$ is a measurement of the amount of uncertainty
about the position of the particle;
• $\Delta p$ is the uncertainty about the momentum of the particle; and
• $\hbar$ is a fundamental constant, called the reduced Planck's constant, which is roughly $1.05 \times 10^{-34}$ joule-seconds.

That last constant deserves a bit of extra explanation. Planck's constant describes the fundamental granularity of the universe. We perceive the world as being smooth. When we look at the distance between two objects, we can divide it in half, and in half again, and in half again. It seems like we should be able to do that forever. Mathematically we can, but physically we can't! Eventually, we get to a point where there is no way to subdivide distance anymore. We hit the grain-size of the universe. The same goes for time: we can look at what happens in a second, or a millisecond, or a nanosecond. But eventually, it gets down to a point where you can't divide time anymore! Planck's constant essentially defines that smallest unit of time or space.

Back to that beautiful equation: what uncertainty says is that the product of the uncertainty about the position of a particle and the uncertainty about the momentum of a particle must be at least a certain minimum.

Here's where people go wrong. They take that to mean that our ability to measure the position and momentum of a particle is uncertain - that the problem is in the process of measurement. But no: it's talking about a fundamental uncertainty. This is what makes it an incredibly crazy idea. It's not just talking about our inability to measure something: it's talking about the fundamental true uncertainty of the particle in the universe because of the quantum structure of the universe.

Let's talk about an example. Look out the window. See the sunlight? It's produced by fusion in the sun. But fusion should be impossible. Without uncertainty, the sun could not exist. We could not exist.

Why should it be impossible for fusion to happen in the sun? Because it's nowhere near dense or hot enough.

There are two forces that you need to consider in the process of nuclear fusion. There's the electromagnetic force, and there's the strong nuclear force.

The electromagnetic force, we're all familiar with. Like charges repel, different charges attract. The nucleus of an atom has a positive charge - so nuclei repel each other.

The nuclear force we're less familiar with. The protons in a nucleus repel each other - they've still got like charges! But there's another force - the strong nuclear force - that holds the nucleus together. The strong nuclear force is incredibly strong at extremely short distances, but it diminishes much, much faster than electromagnetism. So if you can get a proton close enough to the nucleus of an atom for the strong force to outweigh the electromagnetic, then that proton will stick to the nucleus, and you've got fusion!

The problem with fusion is that it takes a lot of energy to get two hydrogen nuclei close enough to each other for that strong force to kick in. In fact, it turns out that hydrogen nuclei in the sun are nowhere close to energetic enough to overcome the electromagnetic repulsion - not by multiple orders of magnitude!

But this is where uncertainty comes into play. The core of the sun is a dense soup of hydrogen atoms. They can't move around very much without the other atoms around them moving. That means that their momentum is very constrained - $\Delta p$ is very small, because there's just not much possible variation in how fast a nucleus is moving. But the product of $\Delta p$ and $\Delta x$ has to be at least $\hbar/2$, which means that $\Delta x$ needs to be pretty large to compensate for the certainty about the momentum.

If $\Delta x$ is large, that means that the particle's position is not very constrained at all. It's not just that we can't tell exactly where it is: its position is fundamentally fuzzy. It doesn't have a precise position!

That uncertainty about the position allows a strange thing to happen. The fuzziness of the position of a hydrogen nucleus is large enough that it overlaps with the nucleus of another atom - and bang, they fuse.

This is an insane idea. A hydrogen nucleus doesn't get pushed into a collision with another hydrogen nucleus. It randomly appears in a collided state, because its position wasn't really fixed. The two nuclei that fused didn't move: they simply didn't have a precise position!

So where does this uncertainty come from? It's part of the hard-to-comprehend world of quantum physics. Particles aren't really particles. They're waves. But they're not really waves. They're particles. They're both, and they're neither. They're something in between, or they're both at the same time. But they're not the precise things that we think of. They're inherently fuzzy, probabilistic things. That's the source of the uncertainty: at macroscopic scales, they behave as if they're particles. But they aren't really. So the properties that we associate with particles just don't work. An electron doesn't have an exact position and velocity. It has a haze of probability space where it could be. The uncertainty equation describes that haze - the inherent uncertainty that's caused by the real particle/wave duality of the things we call particles.

## Basic Data Structures: Hash Tables

I'm in the mood for a couple of basics posts. As long-time readers might know, I love writing about data structures.

One of the most important and fundamental structures is the hashtable. In fact, a lot of modern programming languages have left hashtables behind, for reasons I'll discuss later. But if you want to understand data structures and algorithmic complexity, hashtables are one of the essentials.

A hashtable is a structure for keeping a list of (key, value) pairs, where you can look up a value using the key that's associated with it. This kind of structure is frequently called a map, an associative array, or a dictionary.

For an example, think of a phonebook. You've got a collection of pairs (name, phone-number) that make up the phonebook. When you use the phonebook, what you do is look for a person's name, and then use it to get their phone number.

A hashtable is one specific kind of structure that does this. I like to describe data structures in terms of some sort of schema: what are the basic operations that the structure supports, and what performance characteristics does it have for those operations.

In those schematic terms, a hashtable is very simple. It's a structure that maintains a mapping from keys to values. A hashtable really only needs two operations: put and get:

1. put(key, value): add a mapping from key to value to the table. If there's already a mapping for the key, then replace it.
2. get(key): get the value associated with the key.

In a hashtable, both of those operations are extremely fast.

Let's think for a moment about the basic idea of a key-value map, and what kind of performance we could get out of a couple of simple, naive ways of implementing it.

We've got a list of names and phone numbers. We want to know how long it'll take to find a particular name. How quickly can we do it?

How long does that take, naively? It depends on how many keys and values there are, and what properties the keys have that we can take advantage of.

In the worst case, there's nothing to help us: the only thing we can do is take the key we're looking for, and compare it to every single key. If we have 10 keys, then on average, we'll need about 5 steps before we find the key we're looking for. If there are 100 keys, it'll take, on average, about 50 steps. If there are one million keys, it'll take an average of half a million steps before we can find the value.
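That worst-case linear scan is easy to sketch in code. This is just an illustration, with a made-up phonebook; the step counter shows how the work grows with the list:

```python
def linear_lookup(pairs, key):
    # Compare the key against every entry, in order, until we find a match.
    steps = 0
    for (k, v) in pairs:
        steps += 1
        if k == key:
            return (v, steps)
    return (None, steps)

phonebook = [("alice", "555-0001"), ("bob", "555-0002"),
             ("carol", "555-0003"), ("dave", "555-0004")]

print(linear_lookup(phonebook, "carol"))  # ('555-0003', 3)
print(linear_lookup(phonebook, "zed"))    # (None, 4) - scanned everything
```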

If the keys are ordered - that is, if we can compare one key to another not just for equality, but can also ask which comes first using a "less than or equal to" operator - then we can use binary search. With binary search, we can find an entry in a list of 10 keys in 4 steps, an entry in a list of 1000 keys in 10 steps, or one in a list of one million keys in 20 steps.
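When the keys support ordering, Python's standard `bisect` module does the binary search for us. A sketch, again with an illustrative phonebook (the list must be kept sorted by key for this to work):

```python
import bisect

def binary_lookup(sorted_pairs, key):
    # sorted_pairs must be sorted by key; bisect does a binary search,
    # taking roughly log2(n) comparisons instead of n.
    keys = [k for (k, v) in sorted_pairs]
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return sorted_pairs[i][1]
    return None

phonebook = sorted([("dave", "555-0004"), ("alice", "555-0001"),
                    ("carol", "555-0003"), ("bob", "555-0002")])

print(binary_lookup(phonebook, "bob"))  # 555-0002
```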

With a hashtable, if things work right, in a table of 10 keys, it takes one step to find a key. 100 keys? 1 step. 1000 keys? 1 step. 1,000,000,000 keys? Still one step. That's the point of a hashtable. It might be really hard to do something like generate a list of all of the keys - but if all you want to do is look things up, it's lightning fast.

How can it do that? It's a fairly simple trick: the hashtable trades space for time. A hashtable, under normal circumstances, uses a lot more space than most other ways of building a dictionary. It makes itself fast by using extra space in a clever way.

A hashtable creates a bunch of containers for (key, value) pairs called buckets. It creates many more buckets than the number of (key, value) pairs than it expects to store. When you want to insert a value into the table, it uses a special kind of function called a hash function on the key to decide which bucket to put the (key, value) into. When you want to look for the value associated with a key, it again uses the hash function on the key to find out which bucket to look in.

It's easiest to understand by looking at some actual code. Here's a simple, not at all realistic implementation of a hashtable in Python:

    class Hashtable(object):
        def __init__(self, hashfun, size):
            self._size = size
            self._hashfun = hashfun
            self._table = [[] for i in range(size)]

        def hash(self, key):
            return self._hashfun(key) % self._size

        def put(self, key, value):
            self._table[self.hash(key)].append((key, value))

        def get(self, key):
            for (k, v) in self._table[self.hash(key)]:
                if k == key:
                    return v
            return None


If you've got a good hash function, and your hashtable is big enough, then each bucket will end up with no more than one value in it. So if you need to insert a value, you find the right bucket using the key's hashcode, and dump the pair in: one step. If you need to find the value associated with a key, you find the bucket using the key's hashcode, and return the value: again, one step.
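To make the bucket mechanics concrete, here's a tiny self-contained sketch of the same idea, inlined without a class. The hash function and table size here are arbitrary choices for illustration; summing character codes is far too collision-prone for real use:

```python
def tiny_hash(key):
    # Toy hash function: sum of the character codes in the key.
    return sum(ord(c) for c in key)

size = 16
table = [[] for _ in range(size)]

# Insert: hash the key to pick a bucket, then append the pair there.
for key, value in [("alice", 1), ("bob", 2), ("carol", 3)]:
    table[tiny_hash(key) % size].append((key, value))

# Lookup: hash the key again, then scan only that one bucket.
def get(key):
    for (k, v) in table[tiny_hash(key) % size]:
        if k == key:
            return v
    return None

print(get("bob"))  # 2
print(get("zed"))  # None
```

However many pairs are in the table, a lookup only ever scans the single bucket the key hashes to.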

There are two big problems with hashtables.

First, everything is dependent on the quality of your hash function. If your hash function maps a lot of values to the same bucket, then your performance is going to suck. In fact, in the worst case, it's no better than just searching a randomly ordered list. Most of the time, you can come up with a hash function that does a pretty good job - but it's a surprisingly tricky thing to get right.

Second, the table really needs to be big relative to the number of elements that you expect to have in the list. If you set up a hashtable with 40 buckets, and you end up with 80 values stored in it, your performance isn't going to be very good. (In fact, it'll be slightly worse than just using a binary search tree.)

So what makes a good hash function? There are a bunch of things to consider:

1. The hash function must be deterministic: calling the hash function on the same key must always produce the same result. If you're writing a Python program like the one I used as an example above, and you use the values of the key object's fields to compute the hash, then changing the key object's fields will change the hashcode!
2. The hash function needs to focus on the parts of the key that distinguish between different keys, not on their similarities. To give a simple example, in some versions of Java, the default hash function for objects is based on the address of the object in memory. All objects are stored in locations whose address is divisible by 4 - so the last two bits are always zero. If you did something simple like just take the address modulo the table size, then all of the buckets whose numbers weren't divisible by four would always be empty. That would be bad.
3. The hash function needs to be uniform. That means that it needs to map roughly the same number of input values to each possible output value. To give you a sense of how important this is: I ran a test using 3125 randomly generated strings, using one really stupid hash function (xoring together the characters), and one really good one (djb2). I set up a small table, with 31 buckets, and inserted all of the value into it. With the xor hash function, there were several empty buckets, and the worst bucket had 625 values in it. With djb2, there were no empty buckets; the smallest bucket had 98 values, and the biggest one had 106.
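You can reproduce a small version of that uniformity experiment yourself. This sketch compares the bucket distribution of a xor-of-characters hash against djb2 on a batch of random strings; the string count, string length, and table size are arbitrary choices for the demo:

```python
import random
import string

def xor_hash(key):
    # A weak hash: xor the character codes together. For ASCII text,
    # the high bits cancel out, so the outputs cluster badly.
    h = 0
    for c in key:
        h ^= ord(c)
    return h

def djb2(key):
    # Bernstein's djb2: multiply-and-add over the character codes.
    h = 5381
    for c in key:
        h = (h * 33) + ord(c)
    return h

random.seed(42)
keys = ["".join(random.choices(string.ascii_lowercase, k=8))
        for _ in range(3125)]

# Count how many keys land in each of 31 buckets, per hash function.
for hashfun in (xor_hash, djb2):
    buckets = [0] * 31
    for k in keys:
        buckets[hashfun(k) % 31] += 1
    print(hashfun.__name__, "min:", min(buckets), "max:", max(buckets))
```

With djb2, the bucket counts stay close together; the xor hash piles keys into a narrow range of buckets.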

So what does a good hash function look like? djb2, which I used in my test above, is based on integer arithmetic over the string's characters. It's an interesting case, because no one is really entirely sure exactly why it works better than similar functions, but be that as it may, we know that in practice, it works really well. It was invented by a guy named Dan Bernstein, who used to be a genius poster on comp.lang.c, back when that was a big deal. Here's djb2 in Python:

    def djb2(key):
        hash = 5381
        for c in key:
            hash = (hash * 33) + ord(c)
        return hash


What the heck is it doing? Why 5381? Because it's prime, and it works pretty well. Why 33? No clue.

Towards the beginning of this post, I alluded to the fact that hashtables have, at least to some degree, fallen out of vogue. (For example, in the C++ standard library, the `map` type is implemented using a red-black tree.) Why?

In practice, it's rarely much faster to use a hashtable than to use a balanced binary tree like a red-black tree. Balanced trees have better worst-case bounds, they're not as sensitive to the properties of the hash function, and they make it really easy to iterate over all of the keys in a collection in a predictable order, which makes them great for debugging purposes.

Of course, hash tables still get used, constantly. The most commonly used data structures in Java code are, without a doubt, HashMap and HashSet, which are both built on hashtables. You usually don't have to implement them yourself, and system libraries usually provide a good default hash function for strings, so you're usually safe.

There's also a lot of really fascinating research into designing ideal hash functions for various applications. If you know what your data will look like in advance, you can even build something called a perfect hash function, which guarantees no collisions. But that's a subject for another time.

## Combining Non-Disjoint Probabilities

In my previous post on probability, I talked about how you need to be careful about covering cases. To understand what I mean by that, it's good to see some examples.

And we can do that while also introducing an important concept which I haven't discussed yet. I've frequently talked about independence, but equally important is the idea of disjointness.

Two events are independent when they have no ability to influence one another. So two coin flips are independent. Two events are disjoint when they can't possibly occur together. Flipping a coin, the events "flipped a head" and "flipped a tail" are disjoint: if you flipped a head, you can't have flipped a tail, and vice versa.

So let's think about something abstract for a moment. Let's suppose that we've got two events, A and B. We know that the probability of A is 1/3 and the probability of B is also 1/3. What's the probability of A or B?

Naively, we could say that it's P(A) + P(B). But that's not necessarily true. It depends on whether or not the two events are disjoint.

Suppose that it turns out that the probability space we're working in is rolling a six sided die. There are three basic scenarios that we could have:

1. Scenario 1: A is the event "rolled 1 or 2", and B is "rolled 3 or 4". That is, A and B are disjoint.
2. Scenario 2: A is the event "rolled 1 or 2", and B is "rolled 2 or 3". A and B are different, but they overlap.
3. Scenario 3: A is the event "rolled 1 or 2", and B is the event "rolled 1 or 2". A and B are really just different names for the same event.

In scenario 1, we've got disjoint events, so P(A or B) is P(A) + P(B). One way of checking that this makes sense is to look at how the probabilities of the events work out. P(A) is 1/3. P(B) is 1/3. The probability of neither A nor B - that is, the probability of rolling either 5 or 6 - is 1/3. The sum is 1, as it should be.

But suppose that we looked at scenario 2. If we made a mistake and added them as if they were disjoint, how would things add up? P(A) is 1/3. P(B) is 1/3. P(neither A nor B) = P(4 or 5 or 6) = 1/2. The total of these three probabilities is 1/3 + 1/3 + 1/2 = 7/6. So just from that addition, we can see that there's a problem, and we did something wrong.

If we know that A and B overlap, then we need to do something a bit more complicated to combine probabilities. The general equation is:

P(A or B) = P(A) + P(B) - P(A and B)
Using that equation, we'd get the right result. P(A) = 1/3; P(B) = 1/3; P(A and B) = 1/6. So the probability of A or B is 1/3 + 1/3 - 1/6 = 1/2. And P(neither A nor B) = P(4 or 5 or 6) = 1/2. The total is 1, as it should be.
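You can check these numbers by brute-force enumeration over the six die outcomes. A sketch for scenario 2, using exact fractions so no rounding sneaks in:

```python
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}
A = {1, 2}  # the event "rolled 1 or 2"
B = {2, 3}  # the event "rolled 2 or 3"

def prob(event):
    # Each outcome of a fair die is equally likely.
    return Fraction(len(event & outcomes), len(outcomes))

# Direct enumeration of "A or B" vs. the inclusion-exclusion formula.
p_a_or_b = prob(A | B)
p_incl_excl = prob(A) + prob(B) - prob(A & B)

print(p_a_or_b, p_incl_excl)  # 1/2 1/2
```

Both computations agree, and adding P(neither A nor B) = 1/2 brings the total to 1, exactly as the axioms require.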

From here, we'll finally start moving in to some more interesting stuff. Next post, I'll look at how to use our probability axioms to analyze the probability of winning a game of craps. That will take us through a bunch of applications of the basic rules, as well as an interesting example of working through a limit case.

And then it's on to combinatorics, which is the main tool that we'll use for figuring out how many cases there are, and what they are, which as we've seen is an essential skill for probability.
