Archive for the 'Programming' category

Hello World in ARM Assembly Language

Feb 11 2014 Published by under Machine Language, Programming

Since K&R's book on C, it's become traditional to start any tutorial on a new language by presenting a program that prints "Hello world". For ARM assembly running on Raspbian Linux, that traditional program looks like:

.global _start
_start:
  MOV R7, #4
  MOV R0, #1
  MOV R2, #12
  LDR R1, =string
  SWI 0
  MOV R7, #1
  SWI 0
  .data
string:
  .ascii "Hello Worldn"

It's definitely a bit more cryptic than most languages, but it doesn't look all that bad, now does it? Before I can explain how that works, we'll need to talk a bit about what we're programming, and how we can program it. We're going to go through a bunch of introductory material here; everything that we touch on, I'll come back to in more detail in a later post.


[Diagram: the major parts of an ARM CPU]
In the diagram to the right, you can see my attempt to illustrate the important parts of an ARM CPU from our initial perspective. As we learn more about it, we'll gradually refine this picture, adding more details - but for now, this is what things look like.

For now, we'll say that the CPU has 5 main parts:

  1. A collection of 16 registers. A register is a memory cell that's built into the CPU. On an ARM processor, any time that you want to do any kind of operation - arithmetic, logic, comparison, you name it! - you'll need to have the values in a register. The first thirteen registers are available to you, to use for whatever you want. The last three are special; R13 is called the stack pointer (SP), R14 is called the link register (LR), and R15 is called the program counter (PC). We'll talk about what those three mean as we learn to program.
  2. An arithmetic/logic unit (ALU). This is where the CPU does integer arithmetic and logic. Most of our programs will work exclusively with the ALU. (Floating point is important, but it's possible to do an awful lot of programming without it.)
  3. A floating point unit (FPU). This is where the CPU does floating point arithmetic.
  4. A status register. This is, like the other registers, a chunk of internal storage. But you can't manipulate it or access it directly. It's automatically updated by the ALU/FPU. Individual bits of the status register get updated to reflect various conditions about the current status of the CPU, and the results of the previous instruction. For example, the way that you can compare two values in the ARM is to subtract one from the other. If the two values were equal, then the ZERO flag in the status register will be set to 1; otherwise it will be set to 0. There's a branch instruction that only actually branches if the ZERO flag is set.
  5. A data channel, called the bus. The bus connects the CPU to the rest of the computer. Memory, other storage devices, and input and output devices are all connected to the CPU via the bus. Doing anything that involves communicating through the bus is slow compared to doing anything that doesn't. For now, we'll say that memory is the only thing on the bus.

Now that we have a bit of a clue about the basic pieces of this thing we're going to program, we can start looking at our hello world program. We still need to talk about one other bit of background before we can get started.

For a computer, on the lowest level, a "program" is just a chunk of numbers. It's not even a chunk of instructions - it's just numbers. The numbers can be instructions, data, or both at the same time! That last bit might sound strange, but you'll see instructions like MOV R0, #4. That's saying load the literal value 4 into register R0. The 4 is a value encoded as a part of an instruction. So that 4 is both literal data sitting in the middle of a collection of instructions, and it's also a part of an instruction. The actual instruction doesn't really say "load the value 4"; it says "load the data value that's at this position in the instruction sequence".
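To make that concrete, here's a tiny Python sketch that pulls the literal value back out of an encoded instruction word. It assumes the classic 32-bit ARM data-processing immediate encoding, in which MOV R0, #4 becomes the word 0xE3A00004; the exact bit layout is illustrative background here, not something the assembler makes you memorize.

# Pulling the literal back out of an encoded ARM instruction word.
# 0xE3A00004 is MOV R0, #4 in the classic 32-bit ARM immediate encoding
# (an assumption stated for illustration).
instr = 0xE3A00004
rd = (instr >> 12) & 0xF    # destination register field -> 0, i.e. R0
imm8 = instr & 0xFF         # the literal value field     -> 4
print(rd, imm8)             # prints: 0 4

The point is simply that the 4 isn't stored anywhere else: it lives inside the same 32-bit number that tells the CPU what operation to perform.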

We're not going to program the ARM using the numeric instructions directly. We're going to program the ARM using assembly language. Assembly language is a way of writing that chunk of numbers that is your program, but doing it with a syntax that's easy for a human being to read. Then a program called an assembler will translate from that readable format into the raw numeric format used by the computer. Conceptually, the assembler sounds a lot like the compiler that you'd use with a higher level language. But it's quite different: compilers take your code, and change it. Frequently, if you look at code that your compiler generates, you'd have a hard time recognizing code that was generated for a program that you wrote! But an assembler doesn't change anything. There's no restructuring, no optimization, no changes at all. In an assembly language program, you're describing how to lay out a bunch of instructions and data in memory, and the assembler does nothing but generate that exact memory layout.


Ok. That said, finally, we can get to the program!

Programming in assembly is quite different from programming in any reasonable programming language. There are no abstractions to make your life easier. You need to be painfully explicit about everything. It really brings home just how many abstractions you generally use in your code.

For example, in assembly language, you don't really have variables. You can store values anywhere you want in the computer's memory, but you have to decide where to put them, and how to lay them out, by yourself. But as I said before - all of the arithmetic and logic that makes up a program has to be done on values in registers. So a value in memory is only good if you can move it from memory into a register. It's almost like programming in a language with a total of 16 variables - and you're only really allowed to use 13 of them!

Not only do you not have variables, but you don't really have parameters. In a high level programming language, you can just pass things to subroutines. You don't need to worry about how. Maybe they're going onto a stack; maybe they're doing some kind of fancy lambda calculus renaming thing; maybe there are some magic variables. You don't need to know or care. But in assembly, there is no built-in notion of parameter-passing. You need to use the computer's registers and memory to build a parameter passing system. In the simplest form of that, which is what we're using here, you designate certain registers as carrying certain parameters. There's nothing in assembly to enforce that: if your program puts something into register R3, and a function was expecting it to be in R4, you won't get any kind of error.

In our "Hello world" program above, the first three instructions are loading specific values into registers expected by the operating system "print" function. For example, MOV R0, #4 means move the specific number 4 into register R0.

Loading literal values into registers is done using the MOV instruction. It's got two operands: the register to move the data into, and the source of the data. The source of the data can be either a literal value, or another register. If you want to load data from memory, you need to use a different instruction - LDR.

With the LDR instruction, we can see one of the conveniences of using assembly language. We want to print the string "Hello world". So we need to have that string in memory somewhere. The assembler lets us do that using a .ascii directive. The directive isn't an ARM instruction; it's an instruction to the assembler telling it "I want to put this string data into a block in memory". The .ascii directive is prefaced with a label, which allows us to refer to the beginning of the memory block populated by the directive. Now we can use "string" to refer to the memory block. So the instruction LDR R1, =string loads the address of that block - the memory location of the first byte of the string - into register R1.

These four instructions are preparation for calling a function provided by the operating system. R0 and R7 are used by the operating system to figure out what function we want to call. R1 and R2 are being used to pass parameters to the function. The print function expects R1 to contain the memory location of the first byte in the string we want to print, and R2 to contain the number of characters in the string.

We call the function using SWI 0. SWI is the software interrupt instruction. We can't call the operating system directly! One of the purposes of the operating system is to provide a safe environment, where different programs can't accidentally interfere with one another. If you could just branch into an OS function directly, any program would be able to do anything it wanted! But we don't allow that, so the program can't directly call anything in the OS. Instead, what it does is send a special kind of signal called an interrupt. Before it runs our program, the operating system has already told the CPU that any time it gets an interrupt, control should be handed to the OS. So the operating system gets called by the interrupt. It sees the values in R0 and R7, and recognizes that the interrupt is a request to run the "print" function, so it does that. Then it returns from the interrupt - and execution continues at the first instruction after the SWI call.

Now it's returned from the print, and we don't want to do anything else. If we didn't put something here to tell the operating system that we're done, the CPU would just proceed to the next memory address after our SWI, and interpret that as an instruction! We need to specifically say "We're done", so that the operating system takes control away from our program. The way we do that is with another SWI call. This SWI is the operating system "exit" call. To exit a program and kill the process, you set R7 to 1 (the exit status goes in R0) and call SWI again.

And that's it. That's hello-world in assembly.

3 responses so far

Leading in to Machine Code: Why?

Jan 02 2014 Published by under Machine Language

I'm going to write a few posts about programming in machine language. It seems that many more people are interested in learning about the ARM processor, so that's what I'll be writing about. In particular, I'm going to be working with the Raspberry Pi running Raspbian linux. For those who aren't familiar with it, the Pi is a super-inexpensive computer that's very easy to program, and very easy to interface with the outside world. It's a delightful little machine, and you can get one for around $50!

Anyway, before getting started, I wanted to talk about a few things. First of all, why learn machine language? And then, just what the heck is the ARM thing anyway?

Why learn machine code?

My answer might surprise you. Or, if you've been reading this blog for a while, it might not.

Let's start with the wrong reason. Most of the time, people say that you should learn machine language for speed: programming at the machine code level gets you right down to the hardware, eliminating any layers of junk that would slow you down. For example, one of the books that I bought to learn ARM assembly (Raspberry Pi Assembly Language RASPBIAN Beginners: Hands On Guide) said:

even the most efficient languages can be over 30 times slower than their machine code equivalent, and that’s on a good day!

This is pure, utter rubbish. I have no idea where he came up with that 30x figure, but it's got no relationship to reality. (It's a decent book, if a bit elementary in approach; this silly statement isn't representative of the book as a whole!)

In modern CPUs - and the ARM definitely does count as modern! - the fact is, for real world programs, writing code by hand in machine language will probably result in slower code!

If you're talking about writing a single small routine, humans can be very good at that, and they often do beat compilers. But once you get beyond that, and start looking at whole programs, any human advantage in machine language goes out the window. The constraints that actually affect performance have become incredibly complex - too complex for us to juggle effectively. We'll look at some of these in more detail, but I'll explain one example.

The CPU needs to fetch instructions from memory. But memory is dead slow compared to the CPU! In the best case, your CPU can execute a couple of instructions in the time it takes to fetch a single value from memory. This leads to an obvious problem: it can execute (or at least start executing) one instruction for each clock tick, but it takes several ticks to fetch an instruction!

To get around this, CPUs play a couple of tricks. Basically, they don't fetch single instructions, but instead grab entire blocks of instructions; and they start retrieving instructions before they're needed, so that by the time the CPU is ready to execute an instruction, it's already been fetched.

So the instruction-fetching hardware is constantly looking ahead, and fetching instructions so that they'll be ready when the CPU needs them. What happens when your code contains a conditional branch instruction?

The fetch hardware doesn't know whether the branch will be taken or not. It can make an educated guess by a process called branch prediction. But if it guesses wrong, then the CPU is stalled until the correct instructions can be fetched! So you want to make sure that your code is written so that the CPU's branch prediction hardware is more likely to guess correctly. Many of the tricks that humans use to hand-optimize code actually have the effect of confusing branch prediction! They shave off a couple of instructions, but by doing so, they also force the CPU to sit idle while it waits for instructions to be fetched. That branch prediction failure penalty frequently outweighs the cycles that they saved!

That's one simple example. There are many more, and they're much more complicated. And to write efficient code, you need to keep all of those in mind, and fully understand every tradeoff. That's incredibly hard, and no matter how smart you are, you'll probably blow it for large programs.

If not for efficiency, then why learn machine code? Because it's how your computer really works! You might never actually use it, but it's interesting and valuable to know what's happening under the covers. Think of it like your car: most of us will never actually modify the engine, but it's still good to understand how the engine and transmission work.

Your computer is an amazingly complex machine. It's literally got billions of tiny little parts, all working together in an intricate dance to do what you tell it to. Learning machine code gives you an idea of just how it does that. When you're programming in another language, understanding machine code lets you understand what your program is really doing under the covers. That's a useful and fascinating thing to know!

What is this ARM thing?

As I said, we're going to look at machine language coding on the ARM processor. What is this ARM beast anyway?

It's probably not the CPU in your laptop. Most desktop and laptop computers today are based on a direct descendant of the first microprocessor: the Intel 4004.

Yes, seriously: the Intel CPUs that drive most PCs are, really, direct descendants of the first CPU designed for desktop calculators! That's not an insult to the Intel CPUs, but rather a testament to the value of a good design: they've just kept on growing and enhancing. It's hard to see the resemblance unless you follow the design path, where each step follows directly on its predecessors.

The Intel 4004, released in 1971, was a 4-bit processor designed for use in calculators. Nifty chip, state of the art in 1971, but not exactly what we'd call flexible by modern standards. Even by the standards of the day, they recognized its limits. So following on its success, they created an 8-bit version, which they called the 8008. And then they extended the instruction set, and called the result the 8080. The 8080, in turn, yielded successors in the 8088 and 8086 (and the Z80, from a rival chipmaker).

The 8086 was the processor chosen by IBM for its newfangled personal computers. Chip designers kept making it better, producing the 80286, 386, Pentium, and so on - up to today's CPUs, like the Core i7 that drives my MacBook.

The ARM comes from a different design path. At the time that Intel was producing the 8008 and 8080, other companies were getting into the same game. From the PC perspective, the most important was the 6502, which was used by the original Apple, Commodore, and BBC microcomputers. The 6502 was, incidentally, the first CPU that I learned to program!

The ARM isn't a descendant of the 6502, but it is a product of the 6502-based family of computers. In the early 1980s, the BBC decided to create an educational computer to promote computer literacy. They hired a company called Acorn to develop a computer for their program. Acorn developed a beautiful little system that they called the BBC Micro.

The BBC micro was a huge success. Acorn wanted to capitalize on its success, and try to move it from the educational market to the business market. But the 6502 was underpowered for what they wanted to do. So they decided to add a companion processor: they'd have a computer which could still run all of the BBC Micro programs, but which could do fancy graphics and fast computation with this other processor.

In a typical tech-industry NIH (Not Invented Here) moment, they decided that none of the other commercially available CPUs were good enough, so they set out to design their own. They were impressed by the work done by the Berkeley RISC (Reduced Instruction Set Computer) project, and so they adopted the RISC principles, and designed their own CPU, which they called the Acorn RISC Microprocessor, or ARM.

The ARM design was absolutely gorgeous. It was simple but flexible and powerful, operating on very little power and generating very little heat. It had lots of registers and an extremely simple instruction set, which made it a pleasure to program. Acorn built a lovely computer with a great operating system called RiscOS around the ARM, but it never really caught on. (If you'd like to try RiscOS, you can run it on your Raspberry Pi!)

But the ARM didn't disappear. It didn't catch on in the desktop computing world, but it rapidly took over the world of embedded devices. Everything from your cellphone to your dishwasher to your iPad runs on an ARM CPU.

Just like the Intel family, the ARM has continued to evolve: the ARM family has gone through 8 major design changes, and dozens of smaller variations. They're no longer just produced by Acorn - the ARM design is maintained by a consortium, and ARM chips are now produced by dozens of different manufacturers - Motorola, Apple, Samsung, and many others.

Recently, they've even started to expand beyond embedded platforms: some Chromebook laptops are ARM-based, and several companies are starting to market ARM-based server boxes for datacenters! I'm looking forward to the day when I can buy a nice high-powered ARM laptop.

20 responses so far

More Basics: Compilers, Programs, and Languages

Dec 20 2013 Published by under Basics, Programming

After my "what is an OS?" post, a couple of readers asked me to write a similar post about compilers.

Before I can answer what a compiler is, it's helpful to first answer a different question: what is a program?

And here we get to one of my pet peeves. The most common answer to that question is "a detailed step-by-step sequence of instructions". For example, here's what wikipedia says:

A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.

This is wrong.

Back when people first started to study the idea of computing devices, they talked about computing machines as devices that performed a single, specific task. If you think about a basic Turing machine, you normally define Turing machines that perform a single computation. They've got a built-in sequence of states, and a built-in transition table - the machine can only perform one computation. It takes one kind of input, performs its computation on that input, and produces its output.

Building up from these specific machines, they came up with the idea of a universal computing device. A universal computer was a computing machine whose input was a description of a different computing machine. By giving the universal machine different inputs, it could perform different computations.

The point of this diversion is that looking at this history tells us what a program really is: it's a description of a computing machine. Our computers are universal computing machines; they take programs as input to describe the computing machines we want them to emulate. What we're doing when we program is describing a computing machine that we'd like to create. Then we feed it into our universal computing machine, and it behaves as if we'd built a custom piece of hardware to do our computation!

The problem is, our computers are simultaneously very primitive and overwhelmingly complex. They can only work with data expressed in fixed-length sequences of on/off values; to do anything else, we need to find a way of expressing it in terms of extremely simple operations on those on/off values. To make them operate efficiently, they've got a complex structure: many different kinds of storage (registers, L1 and L2 caches, addressable memory), complicated instruction sets, and a whole lot of tricky performance tweaks. It's really hard to program a computer in terms of its native instructions!

In fact, it's so hard to program in terms of native instructions that we just don't do it. What we do is write programs in terms of different machines. That's the point of a programming language.

Looked at this way, a programming language is a way of describing computing machines. The difference between different programming languages is how they describe computing machines. A language like C describes von Neumann machines. Haskell describes machines that work via lambda calculus computations, using something like a spineless G-machine. Prolog describes machines that perform computations in terms of intuitionistic logical inference, like a Warren Abstract Machine.

So finally, we can get to the point: what is a compiler? A compiler is a program that takes a description of a computing device defined in one way, and translates it into the kind of machine description that can be used by our hardware. A programming language lets us ignore all of the complexities of how our actual hardware is built, and describe our computations in terms of a simple abstraction. A compiler takes that description, and turns it into the form that computer hardware can actually use.

For anyone who's read this far: I've gotten a few requests to talk about assembly language. I haven't programmed in assembly since the days of the Motorola 68000. This means that to do it, I'll need to learn something more up-to-date. Would you be more interested in seeing Intel, or ARM?

14 responses so far

Boot all the computers!

Dec 12 2013 Published by under Basics, Programming

Moving on from last week's operating system post, today we'll look at how a computer boots up and loads an operating system.

Let's start with why booting is a question at all. When a computer turns on, what happens? What we're used to seeing is that the disk drive turns on and starts spinning, and the computer loads something from the disk.

The question is: how does the computer know how to turn on the disk? As I said in the OS post, the CPU only really knows how to work with memory. To talk to a disk drive, it needs to do some very specific things - write to certain memory locations, wait for things to happen. Basically, in order to turn on that disk drive and load the operating system, it needs to run a program. But how does it know what program to run?

I'm going to focus on how modern PCs work. Other computers have used, and still use, similar processes. The details vary, but the basic idea is the same.

A quick overview of the process:

  1. CPU startup.
  2. Run BIOS initialization
  3. Load bootloader
  4. Run bootloader
  5. Load and run OS.

As that list suggests, it's not a particularly simple process. We think of it as one step: turn on the computer, and it runs the OS. In fact, it's a complicated dance of many steps.

On the lowest level, it's all hardware. When you turn on a computer, some current gets sent to a clock. The clock is basically a quartz crystal; when you apply current to the crystal, it vibrates and produces a regular electrical pulse. That pulse is what drives the CPU. (When you talk about your computer's speed, you generally describe it in terms of the frequency of the clock pulse. For example, in the laptop that I'm using to write this post, I've got a 2.4 GHz processor: that means that the clock chip pulses 2.4 billion times per second.)

When the CPU gets a clock pulse, it executes an instruction from memory. It knows what instruction to execute because it's got a register (a special piece of memory built into the CPU) that tells it what instruction to execute. When the computer is turned on, that register is set to point at a specific location. Depending on the CPU, that might be 0, or it might be some other magic location; it doesn't matter: what matters is that the CPU is built so that when it's first turned on and it receives a clock pulse that starts it running, that register will always point at the same place.

The software part of the boot process starts there: the computer puts a chunk of read-only memory there - so when the computer turns on, there's a program sitting at that location, which the computer can run. On PCs, that program is called the BIOS (Basic Input/Output System).

The BIOS knows how to tell the hardware that operates your display to show text on the screen, and it knows how to read stuff on your disk drives. It doesn't know much beyond that. What it knows is extremely primitive. It doesn't understand things like filesystems - the filesystem is set up and controlled by the operating system, and different operating systems will set up filesystems in different ways. The BIOS can't do anything with a filesystem: it doesn't include any programming to tell it how to read a filesystem, and it can't ask the operating system to do it, because the OS hasn't loaded yet!

What the BIOS does is something similar to what the CPU did when it started up. The CPU knew to look in a special location in memory to find a program to run. The BIOS knows to look at a special section on a disk drive to find a program to run. Every disk has a special chunk of data on it called the master boot record (MBR). The MBR contains another program, called a boot loader. So the BIOS loads the boot loader, and then uses it to actually load the operating system.

This probably seems a bit weird. The computer starts up by looking in a specific location for a program to run (the BIOS), which loads something (the bootloader). The thing it loads (the bootloader) also just looks in a specific location for a program to run (the OS). Why the two layers?

Different operating systems are built differently, and the specific steps to actually load and run the OS are different. For example, on my laptop, I can run two operating systems: MacOS, and Linux. On MacOS (aka Darwin), there's something called a microkernel that gets loaded. The microkernel is stored in a file named "mach_kernel" in the root directory of a type of filesystem called HFS. But in my installation of Linux, the OS is stored in a file named "vmlinuz" in the root directory of a type of filesystem called EXT4. The BIOS doesn't know what operating system it's loading, and it doesn't know what filesystem the OS uses - and that means that it knows neither the name of the file to load, nor how to find that file.

The bootloader was set up by the operating system. It's specific to the operating system - you can think of it as part of the OS. So it knows what kind of filesystem it's going to look at, and how to find the OS in that filesystem.

So once the bootloader gets started, it knows how to load and run the operating system, and once it does that, your computer is up and running, and ready for you to use!

Of course, all of this is a simplified version of how it works. But for understanding the process, it's a reasonable approximation.

(To reply to commenters: I'll try to do a post like this about compilers when I have some time to write it up.)

4 responses so far

Basics: What is an OS?

Dec 05 2013 Published by under Programming

A reader of this blog apparently likes the way I explain things, and wrote to me to ask a question: what is an operating system? And how does a computer know how to load it?

I'm going to answer that, but I'm going to do it in a roundabout way. The usual answer is something like: "An operating system or OS is a software program that enables the computer hardware to communicate and operate with the computer software." In my opinion, that's a cop-out: it doesn't really answer anything. So instead, I'll take a somewhat longer path, and hopefully give you an answer that actually explains things in more detail, which should help you understand it better.

When someone like me sets out to write a program, how can we do it? That sounds like an odd question, until you actually think about it. The core of the computer, the CPU, is a device which really can't do very much. It's a self-contained unit which can do lots of interesting mathematical and logical operations, but they all happen completely inside the CPU (how they happen inside the CPU is way beyond this post!). To get stuff in and out of the CPU, the only thing that the computer can do is read and write values from the computer's memory. That's really it.

So how do I get a program in to the computer? The computer can only read the program if it's in the computer's memory. And every way that I can get it into the memory involves the CPU!

Computers are built so that there are certain memory locations and operations that are used to interact with the outside world. They also have signal wires called interrupt pins where other devices (like disk drives) can apply a current to say "Hey, I've got something for you". The exact mechanics are, of course, complicated, and vary from CPU to CPU. But to give you an idea of what it's like, to read some data from disk, you'd do something like the following.

  1. Set aside a chunk of memory where the data should be stored after it's read. This is called a buffer.
  2. Figure out where the data you want to read is stored on the disk. You can identify disk locations as a number. (It's usually a bit more complicated than that, but we're trying to keep this simple.)
  3. Write that number into a special memory location that's monitored by the disk drive controller.
  4. Wait until the disk controller signals you via an interrupt that the data is ready. The data will be stored in a special memory location that the disk controller can write to. (Simplifying again, but this is sometimes called a DMA buffer.)
  5. Copy the data from the controller's DMA buffer into the application's memory buffer that you allocated.

When you get down to that level, programming is an intricate dance! No one wants to do that - it's too complicated, too error prone, and just generally too painful. But there's a deeper problem: at this level, it's every program for itself. How do you decide where on the disk to put your data? How can you make sure that no one else is going to use that part of the disk? How can you tell another program where to find the data that you stored?

You want to have something that creates the illusion of a much simpler computational world. Of course, under the covers, it's all going to be that incredibly messy stuff, but you want to cover it up. That's the job of an operating system: it's a layer between the hardware and the programs that you run that creates a new computational world that's much easier to work in.

Instead of having to do the dance of mucking with the hard disk drive controller yourself, the operating system gives you a way of saying "Open a file named 'foo'", and then it takes that request, figures out where 'foo' is on the disk, talks to the disk drive, gets the data, and then hands you a buffer containing it. You don't need to know what kind of disk drive the data is coming from, or how the name 'foo' maps to sectors of the disk. You don't need to know where the control memory locations for the drive are. You just let the operating system do that for you.
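To make the contrast concrete, here's a minimal Python sketch of what that same job looks like once the operating system is providing the abstraction. The filename 'foo' is just the example name from above, and the sketch assumes such a file actually exists in the current directory:

# The OS hides the whole disk-controller dance: we just ask for the file by name.
with open("foo", "rb") as f:    # the OS finds 'foo' on disk and talks to the drive...
  data = f.read()               # ...and hands us back a buffer with the contents
print(len(data), "bytes read")

Three lines, and every one of the messy steps in the list above is happening somewhere underneath them.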

So, ultimately, this is the answer: The operating system is a program that runs on the computer, and creates the environment in which other programs can run. It does a lot of things to create a pleasant environment in which to write and run other programs. Among the multitude of services provided by most modern operating systems are:

  1. Device input and output. This is what we talked about above: direct interaction with input and output devices is complicated and error prone; the operating system implements the input and output processes once, (hopefully) without errors, and then makes it easy for every other program to just use its correct implementation.
  2. Multitasking: your computer has enough power to do many things at once. Most modern computers have more than one CPU. (My current laptop has 4!) And most programs end up spending a lot of their time doing nothing: waiting for you to press a key, or waiting for the disk drive to send it data. The operating system creates sandboxes, called processes, and allows one program to run in each sandbox. It takes care of ensuring that each process gets to run on a CPU for a fair share of the time.
  3. Memory management. With more than one program running at the same time on your computer, you need to make sure that you're using memory that isn't also being used by some other program, and to make sure that no other program can alter the memory that you're using without your permission. The operating system decides what parts of memory can be used by which program.
  4. Filesystems. Your disk drive is really just a huge collection of small sections, each of which can store a fixed number of bits, encoded in some strange format dictated by the mechanics of the drive. The OS provides an abstraction that's a lot easier to deal with.

I think that's enough for one day. Tomorrow: how the computer knows how to run the OS when it gets switched on!

6 responses so far

Basic Data Structures: Hash Tables

Oct 20 2013 Published by under Data Structures

I'm in the mood for a couple of basics posts. As long-time readers might know, I love writing about data structures.

One of the most important and fundamental structures is a hashtable. In fact, a lot of modern programming languages have left hashtables behind, for reasons I'll discuss later. But if you want to understand data structures and algorithmic complexity, hashtables are one of the essentials.

A hashtable is a structure for keeping a list of (key, value) pairs, where you can look up a value using the key that's associated with it. This kind of structure is frequently called either a map, an associative array, or a dictionary.

For an example, think of a phonebook. You've got a collection of pairs (name, phone-number) that make up the phonebook. When you use the phonebook, what you do is look for a person's name, and then use it to get their phone number.

A hashtable is one specific kind of structure that does this. I like to describe data structures in terms of some sort of schema: what are the basic operations that the structure supports, and what performance characteristics does it have for those operations.

In those schematic terms, a hashtable is very simple. It's a structure that maintains a mapping from keys to values. A hashtable really only needs two operations: put and get:

  1. put(key, value): add a mapping from key to value to the table. If there's already a mapping for the key, then replace it.
  2. get(key): get the value associated with the key.

In a hashtable, both of those operations are extremely fast.
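If you just want to use one of these, most modern languages hand you an implementation. Python's built-in dict, for example, is a hashtable under the hood in the usual implementations, so the two operations in the schema look like this (a trivial sketch with made-up phonebook data):

phonebook = {}                      # an empty hashtable
phonebook["alice"] = "555-1234"     # put("alice", "555-1234")
phonebook["alice"] = "555-0000"     # putting the same key again replaces the value
print(phonebook["alice"])           # get("alice") -> "555-0000"
print(phonebook.get("bob"))         # missing key -> None

But using one and understanding why it's fast are two different things, which is where the rest of this post comes in.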

Let's think for a moment about the basic idea of a key-value map, and what kind of performance we could get out of a couple of simple naive ways of implementing it.

We've got a list of names and phone numbers. We want to know how long it'll take to find a particular name. How quickly can we do it?

How long does that take, naively? It depends on how many keys and values there are, and what properties the keys have that we can take advantage of.

In the worst case, there's nothing to help us: the only thing we can do is take the key we're looking for, and compare it to every single key. If we have 10 keys, then on average we'll need about 5 steps before we find the key we're looking for. If there are 100 keys, then it'll take, on average, about 50 steps. If there are one million keys, then it'll take an average of half a million steps before we can find the value.

If the keys are ordered - that is, if we can compare one key to another not just for equality, but we can ask which came first using a "less than or equal to" operator, then we can use binary search. With binary search, we can find an entry in a list of 10 elements in 4 steps. We can find an entry in a list of 1000 keys in 10 steps, or one in a list of one million keys in 20 steps.
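Here's a quick Python sketch of those two naive approaches - a straight linear scan, and a binary search over a sorted list. The names and numbers are made up for illustration:

from bisect import bisect_left

def linear_lookup(pairs, key):
  # Worst case: we compare against every single key in the list.
  for k, v in pairs:
    if k == key:
      return v
  return None

def binary_lookup(sorted_pairs, key):
  # Needs the pairs sorted by key; takes roughly log2(n) comparisons.
  # (Rebuilding the key list on every call is wasteful; a real
  # implementation would keep it around.)
  keys = [k for k, _ in sorted_pairs]
  i = bisect_left(keys, key)
  if i < len(keys) and keys[i] == key:
    return sorted_pairs[i][1]
  return None

phonebook = sorted([("alice", "555-1234"), ("bob", "555-9876"), ("carol", "555-4321")])
print(linear_lookup(phonebook, "carol"))   # may scan the whole list
print(binary_lookup(phonebook, "carol"))   # homes in by halving the search range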

With a hashtable, if things work right, in a table of 10 keys, it takes one step to find the key. 100 keys? 1 step. 1000 keys? 1 step. 1,000,000,000 keys? Still one step. That's the point of a hashtable. It might be really hard to do something like generate a list of all of the keys - but if all you want to do is look things up, it's lightning fast.

How can it do that? It's a fairly simple trick: the hashtable trades space for time. A hashtable, under normal circumstances, uses a lot more space than most other ways of building a dictionary. It makes itself fast by using extra space in a clever way.

A hashtable creates a bunch of containers for (key, value) pairs called buckets. It creates many more buckets than the number of (key, value) pairs that it expects to store. When you want to insert a value into the table, it uses a special kind of function called a hash function on the key to decide which bucket to put the (key, value) into. When you want to look for the value associated with a key, it again uses the hash function on the key to find out which bucket to look in.

It's easiest to understand by looking at some actual code. Here's a simple, not at all realistic implementation of a hashtable in Python:

  class Hashtable(object):
    def __init__(self, hashfun, size):
      # hashfun maps a key to an integer; size is the number of buckets.
      self._size = size
      self._hashfun = hashfun
      self._table = [[] for i in range(size)]

    def hash(self, key):
      # Reduce the raw hash value to a bucket index.
      return self._hashfun(key) % self._size

    def put(self, key, value):
      # Drop the (key, value) pair into the bucket the key hashes to.
      self._table[self.hash(key)].append((key, value))

    def get(self, key):
      # Only search the one bucket that the key hashes to.
      for k, v in self._table[self.hash(key)]:
        if k == key:
          return v
      return None

If you've got a good hash function, and your hashtable is big enough, then each bucket will end up with no more than one value in it. So if you need to insert a value, you find an (empty) bucket using its hashcode, and dump it in: one step. If you need to find a value given its key, find the bucket using its hashcode, and return the value.

There are two big problems with hashtables.

First, everything is dependent on the quality of your hash function. If your hash function maps a lot of values to the same bucket, then your performance is going to suck. In fact, in the worst case, it's no better than just searching a randomly ordered list. Most of the time, you can come up with a hash function that does a pretty good job - but it's a surprisingly tricky thing to get right.

Second, the table really needs to be big relative to the number of elements that you expect to have in the list. If you set up a hashtable with 40 buckets, and you end up with 80 values stored in it, your performance isn't going to be very good. (In fact, it'll be slightly worse than just using a binary search tree.)

So what makes a good hash function? There are a bunch of things to consider:

  1. The hash function must be deterministic: calling the hash on the same key value must always produce the same result. If you're writing a python program like the one I used as an example above, and you use the value of the key object's fields to compute the hash, then changing the key object's fields will change the hashcode!
  2. The hash function needs to focus on the parts of the key that distinguish between different keys, not on their similarities. To give a simple example, in some versions of Java, the default hash function for objects is based on the address of the object in memory. All objects are stored in locations whose address is divisible by 4 - so the last two bits are always zero. If you did something simple like just take the address modulo the table size, then all of the buckets whose numbers weren't divisible by four would always be empty. That would be bad.
  3. The hash function needs to be uniform. That means that it needs to map roughly the same number of input values to each possible output value. To give you a sense of how important this is: I ran a test using 3125 randomly generated strings, using one really stupid hash function (xoring together the characters), and one really good one (djb2). I set up a small table, with 31 buckets, and inserted all of the value into it. With the xor hash function, there were several empty buckets, and the worst bucket had 625 values in it. With djb2, there were no empty buckets; the smallest bucket had 98 values, and the biggest one had 106.

So what's a good hash function look like? Djb2, which I used in my test above, is based on integer arithmetic using the string values. It's an interesting case, because no one is really entirely sure of exactly why it works better than similar functions, but be that as it may, we know that in practice, it works really well. It was invented by a guy named Dan Bernstein, who used to be a genius poster in comp.lang.c, when that was a big deal. Here's djb2 in Python:

def djb2(key):
  hash = 5381
  for c in key:
    hash = (hash * 33) + ord(c)
  return hash

What the heck is it doing? Why 5381? Because it's prime, and it works pretty well. Why 33? No clue.
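To tie that back to the toy Hashtable class from earlier, here's roughly how you'd plug djb2 in as the hash function. The phonebook entries are the same made-up examples as before:

ht = Hashtable(djb2, 31)      # 31 buckets, djb2 as the hash function
ht.put("alice", "555-1234")
ht.put("bob", "555-9876")
print(ht.get("alice"))        # -> 555-1234
print(ht.get("eve"))          # -> None: no such key

Nothing about the table itself changes; the quality of the lookup comes entirely from how evenly djb2 spreads the keys across those 31 buckets.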

Towards the beginning of this post, I alluded to the fact that hashtables have, at least to some degree, fallen out of vogue. (For example, some languages' standard libraries implement their standard map type using a balanced tree, like a red-black tree, instead of a hashtable.) Why?

In practice, it's rarely any faster to use a hashtable than to use a balanced binary tree like a red-black tree. Balanced trees have better worst-case bounds, and they're not as sensitive to the properties of the hash function. And they make it really easy to iterate over all of the keys in a collection in a predictable order, which makes them great for debugging purposes.

Of course, hash tables still get used, constantly. The most commonly used data structures in Java code include, without a doubt, the HashMap and HashSet, which are both built on hashtables. You rarely have to implement them yourself, and system libraries generally provide a good default hash function for strings, so you're usually safe.

There's also a lot of really fascinating research into designing ideal hash functions for various applications. If you know what your data will look like in advance, you can even build something called a perfect hash function, which guarantees no collisions. But that's a subject for another time.

19 responses so far

Everyone should program, or Programming is Hard? Both!

Oct 05 2012 Published by under Bad Math Education, Programming

I saw something on twitter a couple of days ago, and I promised to write this blog post about it. As usual, I'm behind on all the stuff I want to do, so it took longer to write than I'd originally planned.

My professional specialty is understanding how people write programs. Programming languages, development environments, code management tools, code collaboration tools, etc., that's my bread and butter.

So, naturally, this ticked me off.

The article starts off by, essentially, arguing that most of the programming tutorials on the web stink. I don't entirely agree with that, but to me, it's not important enough to argue about. But here's where things go off the rails:

But that's only half the problem. Victor thinks that programming itself is broken. It's often said that in order to code well, you have to be able to "think like a computer." To Victor, this is absurdly backwards-- and it's the real reason why programming is seen as fundamentally "hard." Computers are human tools: why can't we control them on our terms, using techniques that come naturally to all of us?

And... boom! My head explodes.

For some reason, so many people have this bizarre idea that programming is this really easy thing that programmers just make difficult out of spite or elitism or cluelessness or something, I'm not sure what. And as long as I've been in the field, there's been a constant drumbeat from people saying that it's all easy, that programmers just want to make it difficult by forcing you to think like a machine. That what we really need to do is just humanize programming, and it will all be easy and everyone will do it and the world will turn into a perfect computing utopia.

First, the whole "think like a machine" think is a verbal shorthand that attempts to make programming as we do it sound awful. It's not just hard to program, but those damned engineers are claiming that you need to dehumanize yourself to do it!

To be a programmer, you don't need to think like a machine. But you need to understand how machines work. To program successfully, you do need to understand how machines work - because what you're really doing is building a machine!

When you're writing a program, on a theoretical level, what you're doing is designing a machine that performs some mechanical task. That's really what a program is: it's a description of a machine. And what a programming language is, at heart, is a specialized notation for describing a particular kind of machine.

No one will go to an automotive engineer, and tell him that there's something wrong with the way transmissions are designed, because they make you understand how gears work. But that's pretty much exactly the argument that Victor is making.

How hard is it to program? That all depends on what you're trying to do. Here's the thing: The complexity of the machine that you need to build is what determines the complexity of the program. If you're trying to build a really complex machine, then a program describing it is going to be really complex.

Period. There is no way around that. That is the fundamental nature of programming.

In the usual argument, one thing that I constantly see is something along the lines of "programming isn't plumbing: everyone should be able to do it". And my response to that is: of course they should. Just like everyone should be able to do their own plumbing.

That sounds like an amazingly stupid thing to say. Especially coming from me: the one time I tried to fix my broken kitchen sink, I did over a thousand dollars worth of damage.

But: plumbing isn't just one thing. It's lots of related but different things:

  • There are people who design plumbing systems for managing water distribution and waste disposal for an entire city. That's one aspect of plumbing. And that's an incredibly complicated thing to do, and I don't care how smart you are: you're not going to be able to do it well without learning a whole lot about how plumbing works.
  • Then there are people who design the plumbing for a single house. That's plumbing, too. That's still hard, and requires a lot of specialized knowledge, most of which is pretty different from what the city designer needs.
  • There are people who don't design plumbing, but are able to build the full plumbing system for a house from scratch using plans drawn by a designer. Once again, that's still plumbing. But it's yet another set of knowledge and skills.
  • There are people who can come into a house when something isn't working and, without ever seeing the design, figure out what's wrong and fix it. (There's a guy in my basement right now, fixing a drainage problem that left my house without hot water, again! He needed to do a lot of work to learn how to do that, and there's no way that I could do it myself.) That's yet another set of skills and knowledge - and it's still plumbing.
  • There are non-professional people who can fix leaky pipes, and replace damaged bits. With a bit of work, almost anyone can learn to do it. Still plumbing. But definitely: everyone really should be able to do at least some of this.

  • And there are people like me who can use a plumbing snake and a plunger when the toilet clogs. That's still plumbing, but it requires no experience and no training, and absolutely everyone should be able to do it, without question.

All of those things involve plumbing, but they require vastly different amounts and kinds of training and experience.

Programming is exactly the same. There are different kinds of programming, which require different kinds of skills and knowledge. The tools and training methods that we use are vastly different for those different kinds of programming - so different that for many of them, people don't even realize that they are programming. Almost everyone who uses computers does do some amount of programming:

  • When someone puts together a presentation in powerpoint, with things that move around, appear, and disappear on your command: that is programming.
  • When someone puts formula into a spreadsheet: that is programming.
  • When someone builds a website - even a simple one - and uses either a set of tools, or CSS and HTML to put the site together: that is programming.
  • When someone writes a macro in Word or Excel: that is programming.
  • When someone sets up an autoresponder to answer their email while they're on vacation: that is programming.

People like Victor completely disregard those things as programming, and then gripe about how all programming is supercomplexmagicalsymbolic gobbledygook. Most people do write programs without knowing about it, precisely because they're doing it with tools that present the programming task as something that's so natural to them that they don't even recognize that they are programming.

But on the other hand, the idea that you should be able to program without understanding the machine you're using or the machine that you're building: that's also pretty silly.

When you get beyond the surface, and start to get to doing more complex tasks, programming - like any skill - gets a lot harder. You can't be a plumber without understanding how pipe connections work, what the properties of the different pipe materials are, and how things flow through them. You can't be a programmer without understanding something about the machine. The more complicated the kind of programming task you want to do, the more you need to understand.

Someone who does Powerpoint presentations doesn't need to know very much about the computer. Someone who wants to write spreadsheet macros needs to understand something about how the computer processes numbers, what happens to errors in calculations that use floating point, etc. Someone who wants to build an application like Word needs to know a whole lot about how a single computer works, including details like how the computer displays things to people. Someone who wants to build Google doesn't need to know how computers render text clearly on the screen, but they do need to know how computers work, and also how networks and communications work.

To be clear, I don't think that Victor is being dishonest. But the way that he presents things often does come off as dishonest, which makes it all the worse. To give one demonstration, he presents a comparison of how we teach programming to cooking. In it, he talks about how we'd teach people to make a soufflé. He shows a picture of raw ingredients on one side, and a fully baked soufflé on the other, and says, essentially: "This is how we teach people to program. We give them the raw ingredients, and say fool around with them until you get the soufflé."

The thing is: that's exactly how we really teach people to cook - taken far out of context. If we want them to be able to prepare exactly one recipe, then we give them complete, detailed, step-by-step instructions. But once they know the basics, we don't do that anymore. We encourage them to start fooling around. "Yeah, that soufflé is great. But what would happen if I steeped some cardamom in the cream? What if I left out the vanilla? Would it turn out as good? Would that be better?" In fact, if you never do that experimentation, you'll probably never learn to make a truly great soufflé! Because the ingredients are never exactly the same, and the way that it turns out is going to depend on the vagaries of your oven, the weather, the particular batch of eggs that you're using, the amount of gluten in the flour, etc.

To write complicated programs is complicated. To write programs that manipulate symbolic data, you need to understand how the data symbolizes things. To write a program that manipulates numbers, you need to understand how the numbers work, and how the computer represents them. To build a machine, you need to understand the machine that you're building. It's that simple.

No responses yet

DNS and Yesterday's Outage

Sep 11 2012 Published by under Distributed Systems, Programming

As I'm sure at least some of you noticed, Scientopia - along with a huge number of other sites - was unreachable for most of yesterday. I say "unreachable" instead of "down" because the site was up. But if you wanted to get to it, and you didn't know its numeric address, you couldn't reach it. That's because your computer uses a service called DNS to find things.

So what's this DNS thing? And what brought down so much of the net yesterday?

DNS stands for "domain name service". What it does is provide a bridge between the way that you think of the name of a website, and the way that your computer actually talks to the website. You think of a website by a URL, like http://scientopia.org/blogs/goodmath. That URL basically says "the thing named ''blogs/goodmath'' on a server named ''scientopia.org'', which you can access using a system called ''http''.".

The catch is that your computer can't just send a message to something named "scientopia.org". Scientopia is a server in a datacenter somewhere, and to send a message to it, it needs to know a numeric address. Numeric addresses are usually written as a set of four numbers, each ranging from 0 to 255. For example, scientopia.org is currently 184.106.221.182.

You don't want to have to remember those four meaningless numbers in order to connect to scientopia. But there's more to it than just remembering: those numbers can change. The physical computer that scientopia works on has been moved around several times. It lives in a datacenter somewhere, and as that datacenter has grown, and hardware has needed to be either expanded or replaced, the numeric address has changed. Even though I'm the administrator of the site, there've been a couple of times where it's changed, and I can't pinpoint exactly when!

The reason that all of that works is DNS. When you tell your computer to connect to another computer by name, it uses DNS to ask "What's the numeric address for this name?", and once it gets that numeric address, it uses that to connect to the other computer.

When I first got access to the internet in college, it was shortly before DNS first got deployed. At the time, there was one canonical computer somewhere in California. Every other computer on the (then) arpanet would contact that computer every night, and download a file named "hosts.txt". hosts.txt contained a list of every computer on the internet, and its numeric address. That was clearly not working any more, and DNS was designed as the replacement. When they decided to replace hosts.txt, they wanted to replace it with something much more resilient, and something that didn't need to have all of the data in one place, and which didn't rely on any single machine in order to work.

The result was DNS. It's a fascinating system. It's the first real distributed system that I ever learned about, and I still think it's really interesting. I'm not going to go into all of the details: distributed systems always wind up with tons of kinks, to deal with the complexity of decentralized information in the real world. But I can explain the basics pretty easily.

The basic way that DNS works is really simple. You take a domain name. For example, my personal machine is named "sunwukong", and my family network is "phouka.net" - so the hostname that I'm typing this on right now is "sunwukong.phouka.net". To look it up, what happens is you take the host name, and split it at the dots. So for the example, that's ["sunwukong", "phouka", "net"].

You start with the last element of the list, which is called the top-level domain (TLD), and you contact a special server called the root nameserver and ask "Who do I ask for addresses ending with this TLD?" ("Who do I ask for '.net'?") It sends a response to you, giving you the address of another nameserver. Then you contact that, and ask it "Who do I ask for the next element?" ("Who do I ask for phouka?") Finally, when you get to the last one, the address that it gives you is the address of the machine you're looking for, instead of the address of another DNS server. At each level of the hierarchy, there's a server or set of servers that's registered with the level above to be the canonical server for some set of names. The ".com" servers are registered with the root-level servers. The next level of domains is registered with the TLD servers. And so on.

So, to look up sunwukong.phouka.net, what happens is:

  1. You send a request to the root nameserver, and ask for the server for ".net"?
  2. The root server responds, giving you the ".net" server.
  3. You send a request to the .net server address that you just got, asking "Who can tell
    me about phouka?"
  4. The .net server responds, and gives you an address for a DNS server that can tell
    you where things are in phouka.net.
  5. You ask the "phouka.net" server for the address of sunwukong
  6. It finally gives you the address of your server.
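Just to make the shape of that walk concrete, here's a minimal sketch in Scala. It isn't a real resolver: queryNameserver is a hypothetical helper standing in for "send one DNS query to one server and get back the next address". But it shows how the name gets peeled apart, and how each answer tells you who to ask next:

// A sketch of iterative resolution. queryNameserver is a hypothetical helper:
// given a nameserver's address and a label, it returns the address that server
// hands back (either the next nameserver, or, at the last step, the host itself).
def resolve(hostname: String, rootServer: String,
            queryNameserver: (String, String) => String): String = {
  // "sunwukong.phouka.net" becomes List("net", "phouka", "sunwukong")
  val labels = hostname.split('.').toList.reverse
  // Walk down the hierarchy, asking each server about the next label.
  labels.foldLeft(rootServer)((server, label) => queryNameserver(server, label))
}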

This distributes the information nicely. You can have lots of root name servers - all they need to be able to do is find DNS servers for the list of top-level domains. And there can be - and are - thousands and thousands of nameservers for addresses in those top-level domains. And then each administrator of a private network can run their own nameserver for the computers that they directly manage. In this mess, any nameserver at any level can disappear, and all that will be affected are the names that it manages.

The problem is, there's no one place to get information; and more importantly, every time you need to talk to another computer, you need to go through this elaborate sequence of steps.

To get around that last issue, you've got something called caches, on multiple levels. A cache is a copy of the canonical data, which gets kept around for some period of time. In the DNS system, you don't need to talk to the canonical DNS server for a domain if the data you need is already in a cache. You can (and generally do) talk to caching DNS servers. With a caching DNS server, you tell it the whole domain name that you want to look up, and it does the rest of the lookup process, and gives you the address you're looking for. Every time it looks something up, it remembers what it did, so that it doesn't need to ask again. So when you look up a ".com" address, your caching service will remember who it can ask for ".com" addresses. So most of the time, you only talk to one server, which does the hard work, and does it in a smart way so that it doesn't need to unnecessarily repeat requests.
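The core idea of a caching resolver is small enough to sketch. Assuming a hypothetical fullLookup function that does the whole root-to-leaf walk described above, the cache just remembers answers so that the slow walk only happens once per name (a real cache also expires entries after a time-to-live, which I'm ignoring here):

import scala.collection.mutable

// A minimal sketch of a caching resolver. fullLookup is a hypothetical function
// that performs the complete resolution walk; answers are remembered in a map.
class CachingResolver(fullLookup: String => String) {
  private val cache = mutable.Map.empty[String, String]

  def resolve(hostname: String): String =
    cache.getOrElseUpdate(hostname, fullLookup(hostname))
}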

So now, we can get to what happened yesterday. GoDaddy, one of the major American domain registrars and DNS hosting providers, went down. Exactly why it went down is not clear - a group of jerks claimed to have done a denial-of-service attack against them; but the company claims to have just had a configuration glitch. Honestly, I'm inclined to believe that it was the hackers, but I don't know for sure.

But - the canonical server for scientopia.org was gone. So if you needed to look up scientopia.org and it wasn't in your cache, then your caching nameserver would go to the .org server, and ask for scientopia; the .org nameserver would return the address of GoDaddy's nameserver, and that wouldn't respond. So you'd never reach us.

That's also why some people were able to reach the site, and others weren't. If your caching server had cached the address for scientopia, then you'd get the server address, and since the server was up the whole time, you'd connect right up, and everything would work.

No responses yet

Nulls

Aug 20 2012 Published by under Programming

So, my post on monads apparently set off a bit of a firestorm over my comments about avoiding null pointer exceptions. To show you what I mean, here's a link to one of the more polite and reasonable posts.

Ok. So. Let me expand a bit.

The reason that I think it's so great that I don't get NPEs when I use an option type like Scala's Option isn't because it makes me a super-programmer who's better than the lowly slime who deal with NPEs. It's because it changes how I write code, in a way that helps me avoid making mistakes - mistakes that I make because I'm just a fallible idiot of a human.

There are two fundamental things about an option type, like what we have in Scala, that make a huge difference.

First, it narrows the field of errors. When I'm programming in Java, any call that returns a pointer could return a null. The language makes no distinction between a function/expression that could return a null, and one that can't. That means that when I get an NPE, the source of that null pointer could be anything in the dynamic slice leading to the error. With an option type, I've got two kinds of functions: functions that always return a real value, and functions that sometimes return Some(value), and sometimes return None. That's incredibly valuable.

Second, it forces me to explicitly deal with the None case. In Java, programmers constantly build code without null checks, because they know that a function won't return null. And then it does, and ker-splat. With an option type, I have no choice: I have to explicitly deal with the potential error case. Sure, I can forcibly code around it - in Scala, I can use Option.get, which will turn into an error analogous to an NPE. But it forces me to make that choice, and make it explicitly.

Even if I'm going the stupid, brute-force route and assuming that I know, without fail, that a function is going to return a non-null value... Consider an example:

  Java:
    T g = f.doSomething();
    g.doSomethingElse();

  Scala:
    val g: Option[T] = f.doSomething()
    g.get.doSomethingElse()

The Scala case has to explicitly deal with the fact that it's working with a potentially empty value, by using an expression (.get) that asserts the non-emptiness.

But in reality, if you're a decent programmer, you never use .get to directly access an option. (The only exception is in cases where you call the .get in a context dominated by a non-empty test; but even then, it's best not to, to avoid errors when the surrounding code is modified.) In real code, you pretty much always explicitly de-option a value using a function like getOrElse:

val f: User = getUser("markcc").getOrElse(new User("markcc"))
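getOrElse is just the most common of several explicit ways to handle the None case. Pattern matching and map work just as well. Here's a small sketch, where the User class and getUser are hypothetical stand-ins defined only to make the example self-contained:

// Hypothetical stand-ins, just so the sketch compiles on its own.
case class User(name: String)
def getUser(name: String): Option[User] =
  if (name == "markcc") Some(User(name)) else None

val maybeUser: Option[User] = getUser("markcc")

// Pattern matching makes both cases visible in the code:
val greeting = maybeUser match {
  case Some(u) => "hello, " + u.name
  case None    => "hello, whoever you are"
}

// Or transform the value only if one is actually there:
val nameLength: Option[Int] = maybeUser.map(_.name.length)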

As I hope has become plain, the point of avoiding NPEs through option-like type structures isn't that it somehow makes the entire category of unexpected-value errors disappear. It's that it changes the way that you code, to distinguish where those errors can occur, and to force you to deal with them.

I think that ultimately, things like this are really just manifestations of the good old static vs dynamic type wars. Type errors in a dynamically typed language are really just unexpected value errors. Strong typing doesn't stop you from making those errors. It just takes a bunch of common cases of those errors, and converts them from a run-time error to a compile-time error. Whether you want them to be run-time or compile-time depends on the project you're working on, on your development team, and on your personal preferences.

I find in practice that I get many fewer errors by being forced to explicitly declare when a value might be null/None, and by being required to explicitly deal with the null/None case when it might occur. I've spent much less time debugging that kind of error in my year at foursquare than in the 15 years of professional development that I did before. That's not because I magically became a better programmer a year ago when I joined foursquare. It's because I'm using a better tool that helps me avoid mistakes.

No responses yet

Monads and Programming

Aug 19 2012 Published by under Category Theory, Programming

Sorry things have been so slow around here. I know I keep promising that I'm going to post more frequently, but it's hard. Life as an engineer at a startup is exhausting. There's so much work to do! And the work is so fun - it's easy to let it eat up all of your time.

Anyway... last good-math post 'round these parts was about monoids and programming. Today, I'm going to talk about monads and programming!

If you recall, monoids are an algebraic/categorical construct that, when implemented in a programming language, captures the abstract notion of foldability. For example, if you've got a list of numbers, you can fold that list down to a single number using addition. Folding collections of values is something that comes up in a lot of programming problems - capturing that concept with a programming construct allows you to write code that exploits the general concept of foldability in many different contexts.
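In Scala terms, that foldability is just a fold with the monoid's operation and its identity value - a minimal illustration:

// Folding a list of numbers down to one number with the addition monoid:
// 0 is the identity element, + is the associative operation.
val total = List(1, 2, 3, 4).fold(0)(_ + _)   // 10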

Monads are a construct that have become incredibly important in functional programming, and they're very, very poorly understood by most people. That's a shame, because the real concept is actually simple: a monad is all about the concept of sequencing. A monad is, basically, a container that you can wrap something in. Once it's wrapped, you can form a sequence of transformations on it. The result of each step is the input to the next. That's really what it's all about. And when you express it that way, you can begin to see why it's such an important concept.

I think that people are confused by monads for two reasons:

  1. Monads are almost always described in very, very abstract terms. I'll also get into the abstract details, but I'll start by elaborating on the simple description I gave above.
  2. Monads in Haskell, which is where most people are introduced to them, are very confusing. The basic monad operations are swamped with tons of funny symbols, and they're named in incredibly misleading ways. ("return" does almost the exact opposite of what you expect return to do!)

In programming terms, what's a monad?

Basically, a monadic computation consists of three pieces:

  1. A monadic type, M, which is a parametric type wrapper that can wrap a value of any type.
  2. An operation which can wrap a value in M.
  3. An operation which takes a function that transforms a value wrapped in M into another value (possibly with a different type) wrapped in M.
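As a sketch, those three pieces look something like this in Scala. The trait and its method names are my own, not a standard library interface:

import scala.language.higherKinds

// A sketch of the three pieces. M[_] is the monadic type wrapper (piece 1);
// wrap puts a plain value into the wrapper (piece 2); flatMap applies a
// transformation that itself returns a wrapped value (piece 3).
trait Monad[M[_]] {
  def wrap[A](a: A): M[A]
  def flatMap[A, B](m: M[A])(f: A => M[B]): M[B]
}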

Whenever you describe something very abstractly like this, it can seem rather esoteric. But this is just a slightly more formal way of saying what I said up above: it's a wrapper for a series of transformations on the wrapped value.

Let me give you an example. At foursquare, we do all of our server programming in Scala. In a year at foursquare, I've seen exactly one null pointer exception. That's amazing - NPEs are ridiculously common in Java programming. But in Scala at foursquare, we don't allow nulls to be used at all. If we have a value which could either be an instance of A, or no value, we use an option type. An Option[T] can be either Some(t: T) or None.

So far, this is nice, but not really that different from a null. The main difference is that it allows you to say, in the type system, whether or not a given value might be null. But: Option is a monad. In Scala, that means that I can use map on it. (map is one of the transformation functions!)

	val t: Option[Int] = ...
	val u: Option[String] = t.map( _ + 2 ).map(_.toString)

What this does is: if t is Some(x), then it adds two to it, and returns Some(x+2); then it takes the result of the first map, and converts it to a string, returning an Option[String]. If t is None, then running map on it always returns None. So I can write code which takes care of the null case, without having to write out any conditional tests of nullness - because optionality is a monad.

In a good implementation of a monad, I can do a bit more than that. If I've got a Monad[T], I can use a map-like operation with a function that takes a T and returns a Monad[U] - possibly wrapping a different type.

For an example, we can look at lists - because List is a monad:

val l: List[Int] = List(1, 2, 3)
l.flatMap({ e => List( (e, e), (e+1, e+2) )  })
res0: List[(Int, Int)] = List((1,1), (2,3), (2,2), (3,4), (3,3), (4,5))

The monad map operation does a flatten on the map steps. That means a lot of things. You can see one in the rather silly example above.

You can take values, and wrap them as a list. Then you can perform a series of operations on those elements of a list - sequencing over the elements of the list. Each operation, in turn, returns a list; the result of the monadic computation is a single list, concatenating, in order, the lists returned for each element. In Scala, the flatMap operation captures the monadic concept: basically, if you can flatMap something, it's a monad.
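The same shape shows up with Option: each step might come up empty, and flatMap chains the steps without a single null check. Here parseInt is a little helper of my own, not a library function:

// A helper that might fail: parsing a string into a number.
def parseInt(s: String): Option[Int] =
  try { Some(s.toInt) } catch { case _: NumberFormatException => None }

val ok: Option[Int]  = Some("12").flatMap(parseInt).map(_ * 2)      // Some(24)
val bad: Option[Int] = Some("twelve").flatMap(parseInt).map(_ * 2)  // None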

Let's look at it a bit more specifically.

  1. The monadic type: List[T].
  2. A function to wrap a value into the monad: the constructor function from List def apply[T](value: T): List[T]
  3. The map operation, defined on List[T]: def flatMap[U](op: T => List[U]): List[U].

(In the original version of this post, I put the wrong type in flatMap in the list above. In the explanation demonstrating flatMap, the type is correct. Thanks to John Armstrong for catching that!)

You can build monads around just about any kind of type wrapper where it makes sense to map over the values that it wraps: collections, like lists, maps, and options. Various kinds of state - variable environments (where the wrapped values are, essentially, functions from identifiers to values), or IO state. And plenty of other things. Anything where you perform a sequence of operations over a wrapped value is a monad.

Now that we have some understanding of this thing we're talking about, what is it in mathematical terms? For that, we turn to category theory.

Fundamentally, in category theory a monad is a category with a particular kind of structure. It's a category with one object. That category has a collection of arrows which (obviously) are from the single object to itself. That one-object category has a functor from the category to itself. (As a reminder, a functor is an arrow between categories in the category of (small) categories.)

The first trick to the monad, in terms of theory, is that it's fundamentally about the functor: since the functor maps from a category to the same category, you can almost ignore the category; it's implicit in the definition of the functor. So we can almost treat the monad as if it were just the functor - that is, a kind of transition function.

The other big trick is closely related to that. For the programming language application of monads, we can think of the single object in the category as the set of all possible states. So we have a category object, which is essentially the collection of all possible states; and there are arrows between the states representing possible state transitions. So the monad's functor is really just a mapping from arrows to different arrows - which basically represents the way that changing the state causes a change in the possible transitions to other states.

So what a monad gives us, in terms of category theory, is a conceptual framework that captures the concept of a state transition system, in terms of transition functions that invisibly carry a state. When that's translated into programming languages, that becomes a value that implicitly takes an input state, possibly updates it, and returns an output state. Sound familiar?
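That "invisible state threading" is easy to sketch in Scala. This is a stripped-down toy version of the State monads you'll find in real libraries, just to show the shape: each value wraps a function from an input state to a result plus an output state, and flatMap is what passes the state from one step to the next.

// A tiny state-transition monad sketch. run takes the current state and
// produces a result along with the next state.
case class State[S, A](run: S => (A, S)) {
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s =>
      val (a, s2) = run(s)   // run this step...
      f(a).run(s2)           // ...then feed the updated state into the next one
    }
  def map[B](f: A => B): State[S, B] =
    flatMap(a => State(s => (f(a), s)))
}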

Let's take a moment and get formal. As usual for category theory, first there are some preliminary definitions.

  1. Given a category C, 1_C is the identity functor from C to C.
  2. Given a category C with a functor T : C → C, T² = T ∘ T.
  3. Given a functor T, 1_T : T → T is the identity natural transformation from T to T.

Now, with that out of the way, we can give the complete formal definition of a monad. Given a category C, a monad on C is a triple (T : C → C, η : 1_C → T, μ : T² → T), where T is a functor, and η and μ are natural transformations. The members of the triple must make the following two diagrams commute.

[Figure monad-prop1.jpg: Commutativity of composition with μ]

[Figure monad-prop2.jpg: Commutativity of composition with η]

What these two diagrams mean is that successive applications of the state-transition functor over C behave associatively: any sequence of composed monadic functors results in a functor with full monadic structure, and that structure is always preserved. Together, they mean that any sequence of operations (that is, applications of the monad functor) is itself a monadic functor. So building a sequence of monadic state transformers is guaranteed to behave as a single proper monadic state transition: the state is consistently passed from application to application with the correct chaining behavior, and to the outside world, the entire monadic chain is indistinguishable from a single atomic monadic operation.

Now, what does this mean in terms of programming? Each element of a monadic sequence in Haskell is an instantiation of the monadic functor - that is, it's an arrow between states - a function, not a simple value - which is the basic trick to monads. They look like a sequence of statements; in fact, each statement in a monad is actually a function from state to state. And it looks like we're writing sequence code - when what we're actually doing is writing function compositions - so that when we're done writing a monadic sequence, what we've actually done is written a function definition in terms of a sequence of function compositions.

Understanding that, we can now clearly understand why we need the return function to use a non-monad expression inside of a monadic sequence - because each step in the sequence needs to be an instance of the monadic functor; an expression that isn't an instance of the monadic functor couldn't be composed with the functions in the sequence. The return function is really nothing but a function that combines a non-monadic expression with the id functor.
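In Scala terms, the analogue of Haskell's return is nothing more exotic than the wrapping constructor. A minimal illustration:

val wrappedInt: Option[Int] = Some(3)   // "return 3" in the Option monad
val wrappedList: List[Int]  = List(3)   // "return 3" in the List monad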

In light of this, let's go back and look at the definition of Monad in the Haskell standard prelude.

class  Functor f  where
  fmap              :: (a -> b) -> f a -> f b

class  Monad m  where
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b
  return :: a -> m a
  fail   :: String -> m a

  -- Minimal complete definition:
  --      (>>=), return
  m >> k  =  m >>= \_ -> k
  fail s  = error s

The declaration of Monad is connected with the definition of Functor - if you look, you can see the connection. The fundamental operation of Monad is ">>=", the chaining operation, which is basically the Haskell version of flatMap. Its type, m a -> (a -> m b) -> m b, is deeply connected to Functor's fmap: the m in m a is generally going to be a type which can be a Functor. (Remember what I said about Haskell and monads? I really prefer map and flatMap to >> and >>=.)

So the value type wrapped in the monad is a functor - in fact, the functor from the category definition! And the ">>=" operation is just the functor composition operation from the monad definition.

A proper implementation of a monad needs to follow some fundamental rules - the rules are, basically, just Haskell translations of the structure-preserving rules about functors and natural transformations in the category-theoretic monad. There are two groups of laws - laws about the Functor class, which should hold for the transition function wrapped in the monad class; and laws about the monadic operations in the Monad class. One important thing to realize about the functor and monad laws is that they are not enforced - in fact, cannot be enforced! - but monad-based code using monad implementations that do not follow them may not work correctly. (A compile-time method for correctly verifying the enforcement of these rules can be shown to be equivalent to the halting problem.)

There are two simple laws for Functor, and it's pretty obvious why they're fundamentally just structure-preservation requirements. The Functor class only has one operation, called fmap, and the two functor laws are about how it must behave.

  1. fmap id = id
    (Mapping ID over any structured sequence results in an unmodified sequence)
  2. fmap (f . g) = (fmap f) . (fmap g)
    ("." is the function composition operation; this just says that fmap preserves the structure to ensure that that mapping is associative with composition.)

The monad laws are a bit harder, but not much. They mainly govern how monadic operations interact with non-monadic operations in terms of the "return" and ">>=" operations of the Monad class.

  1. return x >>= f = f x
    (injecting a value into the monad is basically the same as passing it as a parameter down the chain - return is really just the identity functor passing its result on to the next step. I hate the use of "return". In a state functor, in exactly the right context, it does sort-of look like a return statement in an imperative language. But in pretty much all real code, return is the function that wraps a value into the monad.)
  2. f >>= return = f
    (If you don't specify a value for a return, it's the same as just returning the result of the previous step in the sequence - again, return is just identity, so passing something into return shouldn't affect it.)
  3. seq >>= return . f = fmap f seq
    (composing return with a function is equivalent to invoking that function on the result of the monad sequence to that point, and wrapping the result in the monad - in other words, it's just composition with identity.)
  4. seq >>= (\x -> f x >>= g) = (seq >>= f) >>= g
    (This is the associativity law: your implementation of ">>=" needs to behave like a proper composition, so that the grouping of chained steps doesn't change the result.)
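Translated into Scala's Option - with flatMap standing in for ">>=" and Some standing in for return - the laws read like this. Again, just a spot check on one value, not a proof:

def f(n: Int): Option[Int] = Some(n + 1)
def g(n: Int): Option[Int] = Some(n * 2)
val m: Option[Int] = Some(3)

assert(Some(3).flatMap(f) == f(3))                                  // law 1: left identity
assert(m.flatMap(Some(_)) == m)                                     // law 2: right identity
assert(m.flatMap(x => Some(x + 1)) == m.map(_ + 1))                 // law 3: flatMap of a wrapped plain function is just map
assert(m.flatMap(x => f(x).flatMap(g)) == m.flatMap(f).flatMap(g))  // law 4: associativity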

No responses yet
