## Basic Data Structures: Hash Tables

I'm in the mood for a couple of basics posts. As long-time readers might know, I love writing about data structures.

One of the most important and fundamental structures is a hashtable. In fact, a lot of modern programming languages have left hashtables behind, for reasons I'll discuss later. But if you want to understand data structures and algorithmic complexity, hashtables are one of the essentials.

A hashtable is a structure for keeping a list of (key, value) pairs, where you can look up a value using the key that's associated with it. This kind of structure is frequently called a map, an associative array, or a dictionary.

For an example, think of a phonebook. You've got a collection of pairs (name, phone-number) that make up the phonebook. When you use the phonebook, what you do is look for a person's name, and then use it to get their phone number.

A hashtable is one specific kind of structure that does this. I like to describe data structures in terms of some sort of schema: what are the basic operations that the structure supports, and what performance characteristics does it have for those operations.

In those schematic terms, a hashtable is very simple. It's a structure that maintains a mapping from keys to values. A hashtable really only needs two operations: put and get:

1. put(key, value): add a mapping from key to value to the table. If there's already a mapping for the key, then replace it.
2. get(key): get the value associated with the key.

In a hashtable, both of those operations are extremely fast.

Let's think for a moment about the basic idea of a key-value map, and what kind of performance we could get out of a couple of simple naive ways of implementing it.

We've got a list of names and phone numbers. We want to know how long it'll take to find a particular name. How quickly can we do it?

How long does that take, naively? It depends on how many keys and values there are, and what properties the keys have that we can take advantage of.

In the worst case, there's nothing to help us: the only thing we can do is take the key we're looking for, and compare it to every single key. If we have 10 keys, then on average, we'll need about 5 steps before we find the key we're looking for. If there are 100 keys, then it'll take, on average, about 50 steps. If there are one million keys, then it'll take an average of half a million steps before we can find the value.

If the keys are ordered - that is, if we can compare one key to another not just for equality, but we can ask which came first using a "less than or equal to" operator, then we can use binary search. With binary search, we can find an entry in a list of 10 elements in 4 steps. We can find an entry in a list of 1000 keys in 10 steps, or one in a list of one million keys in 20 steps.
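To make those step counts concrete, here's a minimal sketch of binary search over a sorted phonebook, using Python's standard bisect module. (The names and numbers are made up for illustration.)

```python
import bisect

# A sorted list of (name, phone-number) pairs.
phonebook = [
    ("alice", "555-0101"),
    ("bob", "555-0102"),
    ("carol", "555-0103"),
    ("dave", "555-0104"),
]

def lookup(book, name):
    # bisect_left does a binary search: roughly log2(len(book)) comparisons.
    # Searching for the 1-tuple (name,) finds the first entry whose name
    # matches, because tuples compare element by element.
    i = bisect.bisect_left(book, (name,))
    if i < len(book) and book[i][0] == name:
        return book[i][1]
    return None

lookup(phonebook, "carol")   # -> "555-0103"
```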

With a hashtable, if things work right, in a table of 10 keys, it takes one step to find the key. 100 keys? 1 step. 1000 keys? 1 step. 1,000,000,000 keys? Still one step. That's the point of a hashtable. It might be really hard to do something like generate a list of all of the keys - but if all you want to do is look things up, it's lightning fast.

How can it do that? It's a fairly simple trick: the hashtable trades space for time. A hashtable, under normal circumstances, uses a lot more space than most other ways of building a dictionary. It makes itself fast by using extra space in a clever way.

A hashtable creates a bunch of containers for (key, value) pairs called buckets. It creates many more buckets than the number of (key, value) pairs that it expects to store. When you want to insert a value into the table, it uses a special kind of function called a hash function on the key to decide which bucket to put the (key, value) into. When you want to look for the value associated with a key, it again uses the hash function on the key to find out which bucket to look in.

It's easiest to understand by looking at some actual code. Here's a simple, not at all realistic implementation of a hashtable in Python:

    class Hashtable(object):
        def __init__(self, hashfun, size):
            self._size = size
            self._hashfun = hashfun
            self._table = [[] for i in range(size)]

        def hash(self, key):
            # Map the raw hashcode onto a bucket index.
            return self._hashfun(key) % self._size

        def put(self, key, value):
            # Hash the key to pick a bucket; replace any existing
            # mapping for the key, otherwise add a new pair.
            bucket = self._table[self.hash(key)]
            for i, (k, v) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))

        def get(self, key):
            # Hash the key to find its bucket, then scan that
            # (hopefully tiny) bucket for the key.
            for k, v in self._table[self.hash(key)]:
                if k == key:
                    return v
            return None


If you've got a good hash function, and your hashtable is big enough, then each bucket will end up with no more than one value in it. So if you need to insert a value, you find an (empty) bucket using its hashcode, and dump it in: one step. If you need to find a value given its key, find the bucket using its hashcode, and return the value.
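To see that behavior concretely, here's a tiny self-contained sketch, separate from the class above. It leans on Python's built-in hash function as the hash, and uses far more buckets than entries:

```python
def make_table(size):
    # A table is just a list of empty buckets.
    return [[] for _ in range(size)]

def put(table, key, value):
    bucket = table[hash(key) % len(table)]
    for i, (k, _) in enumerate(bucket):
        if k == key:
            bucket[i] = (key, value)  # replace an existing mapping
            return
    bucket.append((key, value))

def get(table, key):
    # One hash computation finds the right bucket; with a big enough
    # table, the bucket holds at most an entry or two.
    for k, v in table[hash(key) % len(table)]:
        if k == key:
            return v
    return None

table = make_table(64)            # 64 buckets for a handful of entries
put(table, "alice", "555-0101")
put(table, "bob", "555-0102")
get(table, "alice")               # -> "555-0101"
```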

There are two big problems with hashtables.

First, everything is dependent on the quality of your hash function. If your hash function maps a lot of values to the same bucket, then your performance is going to suck. In fact, in the worst case, it's no better than just searching a randomly ordered list. Most of the time, you can come up with a hash function that does pretty well - but it's a surprisingly tricky thing to get right.

Second, the table really needs to be big relative to the number of elements that you expect to store in it. If you set up a hashtable with 40 buckets, and you end up with 80 values stored in it, your performance isn't going to be very good. (In fact, it'll be slightly worse than just using a binary search tree.)
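The usual fix for an over-full table is to grow it and rehash: when the number of entries passes some fraction of the bucket count (the load factor), allocate a bigger bucket array and reinsert everything. Here's a hedged sketch of that rehashing step, assuming the same bucket-list representation as the Python example above:

```python
def rehash(buckets, new_size, hashfun):
    # Allocate a larger bucket array and re-place every (key, value)
    # pair. Each key's bucket index depends on the table size, so every
    # pair must be reinserted under the new size, not just copied over.
    new_buckets = [[] for _ in range(new_size)]
    for bucket in buckets:
        for key, value in bucket:
            new_buckets[hashfun(key) % new_size].append((key, value))
    return new_buckets
```

A real implementation would trigger this automatically from put when the load factor crosses a threshold; growing by a constant factor keeps the amortized cost of insertion low.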

So what makes a good hash function? There are a bunch of things to consider:

1. The hash function must be deterministic: calling the hash on the same key value must always produce the same result. If you're writing a Python program like the one I used as an example above, and you use the values of the key object's fields to compute the hash, then changing the key object's fields will change the hashcode!
2. The hash function needs to focus on the parts of the key that distinguish between different keys, not on their similarities. To give a simple example, in some versions of Java, the default hash function for objects is based on the address of the object in memory. All objects are stored in locations whose address is divisible by 4 - so the last two bits are always zero. If you did something simple like just take the address modulo the table size, with a table size that's a multiple of four, then all of the buckets whose numbers weren't divisible by four would always be empty. That would be bad.
3. The hash function needs to be uniform. That means that it needs to map roughly the same number of input values to each possible output value. To give you a sense of how important this is: I ran a test using 3125 randomly generated strings, using one really stupid hash function (xoring together the characters), and one really good one (djb2). I set up a small table, with 31 buckets, and inserted all of the values into it. With the xor hash function, there were several empty buckets, and the worst bucket had 625 values in it. With djb2, there were no empty buckets; the smallest bucket had 98 values, and the biggest one had 106.
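You can reproduce that kind of experiment in a few lines. This sketch uses different keys than the test described above: it deliberately uses anagram-structured keys, which makes the xor hash's weakness deterministic, since xor is order-independent and so any two anagrams collide.

```python
import itertools

def xor_hash(key):
    # A poor hash: xor of the character codes. It ignores the order of
    # the characters, so all anagrams of a string hash identically.
    h = 0
    for c in key:
        h ^= ord(c)
    return h

def djb2(key):
    h = 5381
    for c in key:
        h = (h * 33) + ord(c)
    return h

def bucket_counts(hashfun, keys, size):
    counts = [0] * size
    for k in keys:
        counts[hashfun(k) % size] += 1
    return counts

# 360 distinct keys: every 4-letter ordering of distinct letters from "abcdef".
keys = ["".join(p) for p in itertools.permutations("abcdef", 4)]

# xor_hash sees only *which* letters are used - 15 possible sets - so the
# 360 keys crowd into at most 15 hashcodes, and the worst bucket holds at
# least 24 of them. djb2 gives all 360 keys distinct hashcodes.
worst_xor = max(bucket_counts(xor_hash, keys, 31))
worst_djb = max(bucket_counts(djb2, keys, 31))
```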

So what's a good hash function look like? Djb2, which I used in my test above, is based on integer arithmetic using the string values. It's an interesting case, because no one is really entirely sure of exactly why it works better than similar functions, but be that as it may, we know that in practice, it works really well. It was invented by a guy named Dan Bernstein, who used to be a genius poster in comp.lang.c, when that was a big deal. Here's djb2 in Python:

    def djb2(key):
        hash = 5381
        for c in key:
            hash = (hash * 33) + ord(c)
        return hash


What the heck is it doing? Why 5381? Because it's prime, and it works pretty well. Why 33? No clue.

Towards the beginning of this post, I alluded to the fact that hashtables have, at least to some degree, fallen out of vogue. (For example, in the C++ standard library, the std::map type is implemented using a balanced binary tree - typically a red-black tree - rather than a hashtable.) Why?

In practice, it's rarely much faster to use a hashtable than to use a balanced binary tree like a red-black tree. Balanced trees have better worst-case bounds, and they're not as sensitive to the properties of the hash function. And they make it really easy to iterate over all of the keys in a collection in a predictable order, which makes them great for debugging purposes.

Of course, hash tables still get used, constantly. The most commonly used data structures in Java code include, without a doubt, HashMap and HashSet, which are both built on hashtables. You rarely have to implement them yourself, and system libraries generally provide a good default hash function for strings, so you're usually safe.

There's also a lot of really fascinating research into designing ideal hash functions for various applications. If you know what your data will look like in advance, you can even build something called a perfect hash function, which guarantees no collisions. But that's a subject for another time.

## Introducing Algebraic Data Structures via Category Theory: Monoids

Since joining foursquare, I've been spending almost all of my time writing functional programs. At foursquare, we do all of our server programming in Scala, and we have a very strong bias towards writing our Scala code very functionally.

This has increased my interest in category theory in an interesting way. As a programming language geek, I'm obviously fascinated by data structures. Category theory provides a really interesting way of looking at generic data structures.

Historically (as much as that word can be used for anything in computer science), we've thought about data structures primarily in algorithmic and structural ways.

For example, binary trees. A binary tree consists of a collection of linked nodes. We can define the structure recursively really easily: a binary tree is a node, which contains pointers to at most two other binary trees.

In the functional programming world, people have started to think about things in algebraic ways. So instead of just defining data structures in terms of structure, we also think about them in very algebraic ways. That is, we think about structures in terms of how they behave, instead of how they're built.

For example, there's a structure called a monoid. A monoid is a very simple idea: it's an algebraic structure with a set of values S, one binary operation *, and one value i in S which is an identity value for *. To be a monoid, these objects must satisfy some rules called the monoid laws:

1. Associativity: for any a, b, and c in S, (a * b) * c = a * (b * c).
2. Identity: for any a in S, i * a = a * i = a.

There are some really obvious examples of monoids - like the set of integers with addition and 0 or integers with multiplication and 1. But there are many, many others.

Lists with concatenation and the empty list are a monoid: for any list l,
l ++ [] == l, [] ++ l == l, and concatenation is associative.
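It's easy to check those laws mechanically. Here's a small Python sketch that verifies the identity and associativity laws for list concatenation over a handful of sample lists:

```python
import itertools

# Lists form a monoid under concatenation, with [] as the identity.
def mappend(a, b):
    return a + b

identity = []

samples = [[], [1], [1, 2], [3, 4, 5]]

# Identity laws: l ++ [] == l and [] ++ l == l.
assert all(mappend(l, identity) == l for l in samples)
assert all(mappend(identity, l) == l for l in samples)

# Associativity: (a ++ b) ++ c == a ++ (b ++ c).
assert all(
    mappend(mappend(a, b), c) == mappend(a, mappend(b, c))
    for a, b, c in itertools.product(samples, repeat=3)
)
```

Checking over samples isn't a proof, of course, but it's a quick sanity check that a candidate (operation, identity) pair really behaves like a monoid.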

Why should we care whether data structures like lists are monoids? Because we can write very general code in terms of the algebraic construction, and then use it over all of the different operations. Monoids provide the tools you need to build fold operations. Every kind of fold - that is, every operation that collapses a sequence of values into a single value - can be defined in terms of monoids. So you can write a fold operation that works on lists, strings, numbers, optional values, maps, and god-only-knows what else. Any data structure which is a monoid is a data structure with a meaningful fold operation: monoids encapsulate the requirements of foldability.
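Here's a hedged sketch of that idea in Python: one generic fold, written purely against the monoid interface (an operation and its identity), that serves every monoid without modification.

```python
from functools import reduce

def mconcat(op, identity, xs):
    # A generic fold: collapse any sequence using a monoid's operation
    # and identity value. One function covers every monoid.
    return reduce(op, xs, identity)

mconcat(lambda a, b: a + b, 0, [1, 2, 3, 4])      # integers under +: 10
mconcat(lambda a, b: a * b, 1, [1, 2, 3, 4])      # integers under *: 24
mconcat(lambda a, b: a + b, "", ["fold", "s"])    # string concat: "folds"
mconcat(lambda a, b: a + b, [], [[1], [2, 3]])    # list concat: [1, 2, 3]
```

The identity value is what makes the empty-sequence case work: folding no elements yields i, exactly as the monoid laws require.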

And that's where category theory comes in. Category theory provides a generic method for talking about algebraic structures like monoids. After all, what category theory does is provide a way of describing structures in terms of how their operations can be composed: that's exactly what you want for talking about algebraic data structures.

The categorical construction of a monoid is, alas, pretty complicated. It's a simple idea - but defining it solely in terms of the composition behavior of function-like objects does take a bit of effort. But it's really worth it: when you see a monoidal category, it's obvious what the elements are in terms of programming. And when we get to even more complicated structures, like monads, pullbacks, etc., the advantage will be even clearer.

A monoidal category is a category with a functor, where the functor has the basic properties of an algebraic monoid. So it's a category C, paired with a bi-functor - that is, a two-argument functor ⊗:C×C→C. This is the categorical form of the tensor operation from the algebraic monoid. To make it a monoidal category, we need to take the tensor operation and define the properties that it needs to have. They're called its coherence conditions, and basically, they're the properties that are needed to make the diagrams that we're going to use commute.

So - the tensor functor is a bifunctor from C×C to C. There is also an object I∈C, which is called the unit object, which is basically the identity element of the monoid. As we would expect from the algebraic definition, the tensor functor has two basic properties: associativity, and identity.

Associativity is expressed categorically using a natural isomorphism, which we'll name α. For any three objects X, Y, and Z, α includes a component αX,Y,Z (which I'll label α(X,Y,Z) in diagrams, because subscripts in diagrams are a pain!), which is a mapping from (X⊗Y)⊗Z to X⊗(Y⊗Z). The natural isomorphism says, in categorical terms, that the two objects on either side of its mappings are equivalent.

The identity property is again expressed via natural isomorphism. The category must include an object I (called the unit), and two natural isomorphisms, called λ and ρ. For any object X in C, λ and ρ contain components λX and ρX such that λX maps from I⊗X to X, and ρX maps from X⊗I to X.

Now, all of the pieces that we need are on the table. All we need to do is explain how they all fit together - what kinds of properties these pieces need to have for these definitions to give us a structure that behaves like the algebraic notion of a monoid, but built in category theory. The properties are, more or less, exact correspondences with the associativity and identity requirements of the algebraic monoid. But with category theory, we can say it visually. The two diagrams below each describe one of the two properties.

The upper (pentagonal) diagram must commute for all A, B, C, and D. It describes the associativity property. Each arrow in the diagram is a component of the natural isomorphism over the category, and the diagram describes what it means for the natural isomorphism to define associativity.

Similarly, the bottom diagram defines identity. The arrows are all components of natural isomorphisms, and they describe the properties that the natural isomorphisms must have in order for them, together with the unit I to define identity.
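For reference, the two diagrams can also be written as equations. This is the standard form of the coherence conditions, using id for identity arrows; both sides of the first equation are maps from ((A⊗B)⊗C)⊗D to A⊗(B⊗(C⊗D)), and both sides of the second are maps from (X⊗I)⊗Y to X⊗Y:

```latex
% Pentagon (associativity coherence), for all objects A, B, C, D:
\alpha_{A,B,C\otimes D} \circ \alpha_{A\otimes B,C,D}
  = (\mathrm{id}_A \otimes \alpha_{B,C,D}) \circ \alpha_{A,B\otimes C,D}
    \circ (\alpha_{A,B,C} \otimes \mathrm{id}_D)

% Triangle (identity coherence), for all objects X, Y:
(\mathrm{id}_X \otimes \lambda_Y) \circ \alpha_{X,I,Y} = \rho_X \otimes \mathrm{id}_Y
```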

Like I said, the categorical definition is a lot more complicated than the algebraic one. But look at the diagrams: you can see folding in them, in the chains of arrows in the commutative diagrams.

## Finger Trees Done Right (I hope)

A while ago, I wrote a couple of posts that claimed to talk about finger trees. Unfortunately, I really botched it. I'd read a bunch of data structure papers, and managed to get myself thoroughly scrambled. What I wrote about was distantly related to finger trees, and it was useful for understanding how finger trees work - but it was not, in any way, shape, or form, actually a description of finger trees. Since then, I've been meaning to write a proper post explaining finger trees - but with the work on my book, and with chaos at work, I just haven't had the time. This time, in order to do my damnedest to make sure that I don't screw it up again, I'm basically going to describe finger trees over a couple of posts by walking through the best finger-tree paper that I could find: "Finger Trees: a simple general-purpose data structure", by Ralf Hinze and Ross Paterson. This might be the paper that introduced the structure, but I'm not sure.

The point of finger trees is pretty simple. It's very similar to the point of zippers. Programming in functional languages is terrific. As I've described before, there are a lot of advantages to writing functional code. But there are also a lot of places where a naive implementation of an algorithm using a functional data structure is dreadfully inefficient. Functional code may be prettier, more maintainable, and more reusable - but imperative code is frequently much more efficient. When you're doing an operation that, conceptually, modifies a piece of a complex data structure, then functional code can really suck. Finger trees give you a way around that - for many common updatable data structures, you can build finger-tree versions that are very close to - or fully as good as - their imperative, in-place-updating counterparts.

## Zippers: Making Functional "Updates" Efficient

In the Haskell stuff, I was planning on moving on to a monad-related
post on data structures, focusing on a structure called a
zipper.

A zipper is a remarkably clever idea. It's not really a single data
structure, but rather a way of building data structures in functional
languages. The first mention of the structure seems to be a paper
by Gerard Huet in 1997, but as he says in the paper, it's likely that
the idea was used in functional code before his paper --- no one had thought to formalize it
and write it up. (In the original version of this post, I said the name of the guy who first wrote about zippers was "Carl Huet". I have absolutely no idea where that came from - I literally had his paper on my lap as I wrote this post, and I still managed to screw up his name. My apologies!)

It also happens that zippers are one of the rare cases of data structures
where I think it's not necessarily clearer to show code. The concept of
a zipper is very simple and elegant - but when you see a zippered tree
written out as a sequence of type constructors, it's confusing, rather
than clarifying.

## Finger Tree Update: I forgot something

As an alert commenter pointed out, I left out one really important thing in
my earlier post about finger trees. That's what I get for trying to write when
I'm sick :-). Seriously, this is a point that's implied by the post as it stands, but never explicitly stated - and since it's really important, it
should be stated explicitly.

The monoid-annotated tree structure doesn't replace the original
data structure: it's superimposed on it.

So, as I said: cons-cell style lists are ubiquitous and beloved by
functional and non-functional programmers. In finger trees, you're
not getting rid of them. The point of finger trees is to let
you keep the convenient data structure, with its basic operations
and properties intact, and to augment it with a tree that lets you search it efficiently.

To illustrate: here's the number list example from yesterday's post. The
original list is at the bottom, with green pointers representing the
original cons-list next-pointers. The monoid-annotated tree is on top,
with red pointers. The combination of the original list with the
monoid annotated tree is a finger tree.

The point of this is that you've still got your cons list. All of the
beautiful recursive/iterative algorithms that walk the list continue to
work exactly the way that they would in the traditional cons-list: for code that walks the list, the fact that there are finger-tree pointers
sitting on top of the list is irrelevant - and, in fact, completely invisible. For algorithms that want to search the list, the tree structure is there,
and allows the searches to be performed much more quickly than they could be on
the traditional list. The superposition of those two structures is the
genius of the finger tree.

## Finally: Finger Trees!

For ages, I've been promising to write about finger trees. Finger trees are
an incredibly elegant and simple structure for implementing sequence-based
data structures. They're primarily used in functional languages, but there's nothing
stopping an imperative-language programmer from using them as well.

In functional programming languages, lists are incredibly common. They show up everywhere; they're just so easy to use for recursive iteration that they're ubiquitous. I can't think of
any non-trivial program that I've written in Haskell, Lisp, or OCaml that doesn't use lists. (Heck, I've wound up using cons-cell libraries a couple of times in C++.)

Of course, lists aren't perfect. In fact, for some things, they absolutely suck. Suppose I've got a really long list - like a list of names and phone numbers from a telephone book. My local phone book has 600 pages of names; and the pages are incredibly dense - there've got to be at least a couple of hundred names per page. So in a phone book for a suburban region of New York City, the list of phone numbers will have 120,000 names. Now imagine that I want to look
up one name in that list - on average, I'm going to have to chase through 60,000 pointers. 60,000 isn't a huge number for a computer, but still, chasing 60,000 pointers is a pretty big deal. If I stored them in an array, it would be a lot less convenient for some tasks, but I could use
binary search to find names, and I could find any name in no more than 17 comparisons - and the whole shebang would be contiguous, so I wouldn't even be chasing pointers.

What finger trees do is give me a way of representing a list that has both the convenience
of the traditional cons list, and the search efficiency of the array based method.

## Two-Three Trees: a different approach to balance

This post is very delayed, but things have been busy.

I'm working my way up to finger trees, which are a wonderful
functional data structure. They're based on trees - and like many
tree-based structures, their performance relies heavily on the balance
of the tree. The more balanced the tree is, the better they perform.

In general, my preference when I need a balanced tree is a red-black
tree, which I wrote about before. But there are other kinds of balanced trees,
and for finger trees, many people prefer a kind of tree called a 2/3 tree.

A two-three tree is basically a B-tree with a maximum of two values per node.
If you don't know what a B-tree is, don't worry. I'm going to explain all the details.

A two-three tree is a tree where each node contains either one or two values. If
the node isn't a leaf, then its number of children is one more than its number
of values. The basic value constraints are the same as in a binary search tree:
smaller values to the left, larger values to the right.

The big difference in the 2/3 tree is balance: it's got a much stronger
balance requirement. In a 2/3 tree, every path from the tree root to the leaves
is exactly the same length.
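Here's a minimal sketch of what a 2/3 tree node looks like, with a membership test. This is illustrative only: insertion, and the node-splitting that keeps every root-to-leaf path the same length, is where the real work lives, and it's left out here.

```python
class TwoThreeNode:
    def __init__(self, values, children=None):
        # values: a sorted list of one or two keys.
        # children: None for a leaf; otherwise len(values) + 1 subtrees,
        # with the usual search-tree ordering between them.
        self.values = values
        self.children = children

def contains(node, key):
    if node is None:
        return False
    # Find the first value >= key; that also picks the child to descend into.
    for i, v in enumerate(node.values):
        if key == v:
            return True
        if key < v:
            return contains(node.children[i], key) if node.children else False
    return contains(node.children[-1], key) if node.children else False

# A hand-built example: every path from the root to a leaf has length 2.
leaves = [TwoThreeNode([1, 3]), TwoThreeNode([7]), TwoThreeNode([12, 14])]
root = TwoThreeNode([5, 10], leaves)
```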

## Gap Buffers, or, Don't Get Tied Up With Ropes?

Last week, I promised to continue my discussion of ropes. I'm going to
break that promise. But it's in a good cause.

If you're a decent engineer, one of the basic principles you should
follow is keeping things as simple as possible. One of the essential skills
for keeping things simple is realizing when you're making something
overly complicated - even if it's cool, and fun, and interesting.

Working out the rope code for more complex operations, I found myself
thinking that it really seemed complex for what I was doing. What I'm
doing is writing myself a text editor. I'm not writing a string
processing framework for managing gigabytes of data; I'm just writing a
text editor.

So I decided to try, for the purposes of testing, to look at one of
the simpler data structures. A classic structure for implementing editor
buffers is called a gap buffer. I'll explain the details below.
The drawback of a gap buffer is that large cursor motions are relatively
expensive - the cost of moving the cursor is proportional to the distance
that you move it.
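Here's a hedged sketch of the idea in Python (the real editor buffer would be in whatever language the editor is written in). The text lives in one array with a gap at the cursor: insertion just fills the gap, and cursor motion shifts characters across it one at a time, which is exactly why long cursor motions are the expensive operation.

```python
class GapBuffer:
    def __init__(self, capacity=16):
        self.buf = [None] * capacity
        self.gap_start = 0             # the cursor position
        self.gap_end = capacity        # gap occupies buf[gap_start:gap_end]

    def insert(self, ch):
        if self.gap_start == self.gap_end:
            self._grow()
        self.buf[self.gap_start] = ch  # constant time: just fill the gap
        self.gap_start += 1

    def move_left(self):
        # Moving the cursor shifts one character across the gap, so the
        # cost of a motion is proportional to the distance moved.
        if self.gap_start > 0:
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]

    def move_right(self):
        if self.gap_end < len(self.buf):
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def _grow(self):
        # Deliberately simple resizing: double the buffer, keeping the
        # text on each side of the gap in place.
        tail = self.buf[self.gap_end:]
        self.buf = self.buf[:self.gap_start] + [None] * len(self.buf) + tail
        self.gap_end = len(self.buf) - len(tail)

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```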

So I wrote a test. I implemented a basic gap buffer. Then I built a
test that did 300,000 inserts of five characters; then slid the cursor
from the end, to the beginning, back to the end, back to the beginning,
and back to the end again. Total execution time? 150 milliseconds. And
that includes resizing the buffer 12 times, because of a deliberately
stupid buffer sizing and copying strategy.

And with that, out the window went the ropes. Because in Java, a
slapped-together, sloppy piece of test code, doing things in a dumb
brute-force way, can handle a workload much heavier than anything an editor
will encounter in routine use - and the gap buffer's performance is still more than
adequately fast. (Based on some experiments I did back at IBM, delays of 1/10th
of a second are roughly when people start to notice that an editor is slow. If you can respond in less than 1/10th of a second, people don't perceive a troublesome delay.)
If the gap buffer can achieve the needed performance that easily, then there's
absolutely no reason to do anything more complicated.

## Ropes: Twining Together Strings for Editors

It's been a while since I've written about any data structures. But it just so happens that I'm actually working on implementing a really interesting and broadly useful data structure right now, called a Rope.

A bit of background, to lead in. I've got this love-hate relationship with some of the development tools that Rob Pike has built. (Rob is one of the Unix guys from Bell Labs, and was one of the principal people involved in both the Plan9 and Inferno operating systems.) Rob has implemented some amazing development tools. The two that fascinate me were called Sam and Acme. The best and worst features of both are a sort of extreme elegant minimalism. There's no bloat in Rob's tools, no eye-candy, no redundancy. They're built to do a job, and do it well - but not to do any more than their intended job. (This can be contrasted against Emacs, which is a text editor that's grown into a virtual operating system.) The positive side of this is that they're incredibly effective, and they demonstrate just how simple a programmer's text editor should be. I've never used another tool that is more effective than Acme or Sam. In all seriousness, I can do more of my work more easily in Sam than I can in Emacs (which is my everyday editor). But on the other hand, that extreme minimalist aesthetic has the effect of strictly eliminating any overlaps: there's one way to do things, and if you don't like it, tough. In the case of Acme and Sam, that meant that you used the mouse for damn-near everything. You couldn't even use the up and down arrows to move the cursor!

This is a non-starter for me. As I've mentioned once or twice, I've got terrible RSI in my wrists. I can't use the mouse that much. I like to keep my hands on my keyboard. I don't mind using the mouse when it's appropriate, but moving my hand from the keyboard to the mouse every time I want to move the cursor? No. No damned way. Just writing this much of this post, I would have had to go back and forth between the keyboard and mouse over 50 times. (I was counting, but gave up when I hit 50.) A full day of that, and I'd be in serious pain.

I recently got reminded of Acme, because my new project at work involves using a programming language developed by Rob Pike. And Acme would really be incredibly useful for my new project. But I can't use it. So I decided to bite the bullet, and use my free time to put together an Acme-like tool. (Most of the pieces that you need for a prototype of a tool like that are available as open-source components, so it's just a matter of assembling them. Still a very non-trivial task, but a possible one.)

Which, finally, leads us back to today's data structure. The fundamental piece of a text editor is the data structure that you use to represent the text that you're editing. For simplicity, I'm going to use Emacs terminology, and refer to the editable contents of a file as a Buffer.

How do you represent a buffer?

As usual with data structures, you start by asking: What do I need it to do? What performance characteristics are important?

In the case of a text buffer, you can get by with a fairly small set of basic operations:

• Fast concatenation: concatenating blocks of text needs to be really fast.
• Fast insert: given a point in a block of text, you need to be able to quickly insert text at that point.
• Fast delete: given two points in a block of text, you need to be able to quickly remove the text between those points.
• Reasonably fast traversal: Lots of algorithms, ranging from printing out the text to searching it are based on linear traversals of the contents. This doesn't have to be incredibly fast - it is an intrinsically linear process, and it's usually done in the context of something with a non-trivial cost (I/O, regular-expression scanning). But you can't afford to make it expensive.
• Size: you need to be able to store effectively unlimited amounts of text, without significant performance degradation in the operations described above.
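To preview where ropes fit against that list of requirements, here's a hedged sketch of the core rope idea: the text is a binary tree of string fragments, so concatenation just builds a new parent node with no copying, while cached lengths make indexing proportional to the tree's depth rather than the text's size.

```python
class Leaf:
    def __init__(self, text):
        self.text = text
        self.length = len(text)

class Concat:
    def __init__(self, left, right):
        self.left = left
        self.right = right
        self.length = left.length + right.length  # cached total length

def concat(a, b):
    # O(1): no text is copied, we just build a new parent node.
    return Concat(a, b)

def index(rope, i):
    # Descend using the cached lengths: O(depth) instead of O(n).
    while isinstance(rope, Concat):
        if i < rope.left.length:
            rope = rope.left
        else:
            i -= rope.left.length
            rope = rope.right
    return rope.text[i]

def to_string(rope):
    # Linear traversal, for printing or searching.
    if isinstance(rope, Leaf):
        return rope.text
    return to_string(rope.left) + to_string(rope.right)
```

A real rope also rebalances the tree and splits long leaves; this sketch leaves all of that out.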

## B-Trees - Balanced Search Trees for Slow Storage

Another cool, but frequently overlooked, data structure in the tree family is called the B-tree. A B-tree is a search tree, very similar to a BST in concept, but optimized differently.

BSTs provide logarithmic time operations, where the performance
is fundamentally bounded by the number of comparisons. B-trees also
provide logarithmic performance with a logarithmic number of
comparisons - but the performance is worse by a constant factor. The
difference is that B-trees are designed around a different tradeoff. The B-tree is designed to minimize the number of tree nodes that need to be examined, even if that comes at the cost of doing significantly more
comparisons.

Why would you design it that way? It's a different performance tradeoff.
The B-tree is a data structure designed not for use in memory, but instead for
use as a structure in hard storage, like a disk drive. B-trees are the
use as a structure in hard storage, like a disk drive. B-trees are the
basic structure underlying most filesystems and databases: they provide
an efficient way of providing rapidly searchable stored structures. But
retrieving nodes from disk is very expensive. In comparison to
retrieving from disk, doing comparisons is very nearly free. So the design of a B-tree tries to minimize disk access; and when disk access is necessary, it tries to localize it as much as possible - to minimize the number of retrievals, and even more importantly, to minimize the number of nodes on disk that need to be updated when something is inserted.
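That tradeoff is easy to see in a sketch of the search pattern. In this hedged Python illustration, each node holds many sorted keys, so one (expensive) node fetch buys many (cheap) comparisons; the fetch count stands in for the disk reads a real B-tree would perform.

```python
import bisect

class BTreeNode:
    def __init__(self, keys, children=None):
        self.keys = keys          # sorted; many keys per node
        self.children = children  # None for a leaf, else len(keys) + 1 nodes

def search(node, key):
    fetches = 0
    while node is not None:
        fetches += 1  # in a real B-tree, this is a disk read
        # Many cheap comparisons happen inside the fetched node.
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return True, fetches
        node = node.children[i] if node.children else None
    return False, fetches

# A toy tree. Real B-trees hold hundreds of keys per node, so a few
# node fetches are enough to cover millions of keys.
leaf1 = BTreeNode([2, 5, 7, 8])
leaf2 = BTreeNode([12, 14, 16])
leaf3 = BTreeNode([25, 30, 40, 50])
root = BTreeNode([10, 20], [leaf1, leaf2, leaf3])
```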
