Remember the post I made a couple of weeks ago, flaming the wall-street idiots for
a bad graph? They were comparing the value of financial firms before and after the current
mess. But the way they drew it used circles whose diameter was proportional to the
value, while the way it was drawn strongly suggested that
the area was the metric of comparison.
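A minimal sketch of why that distortion matters (the dollar figures here are made up for illustration, not taken from the original chart):

```python
import math

def circle_area(diameter):
    """Area of a circle with the given diameter."""
    return math.pi * (diameter / 2) ** 2

# Hypothetical firm values: 100 (in $B) before the crisis, 25 after.
before, after = 100.0, 25.0

# If the *diameter* is drawn proportional to the value...
area_ratio = circle_area(after) / circle_area(before)

# ...the value fell to 1/4 of its old level, but the circle's
# area -- which is what the eye actually compares -- fell to 1/16,
# because area scales with the *square* of the diameter.
print(after / before)   # 0.25
print(area_ratio)       # 0.0625
```

So a 4x drop in value reads visually as a 16x drop, which is exactly the error in the graph.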
Well, an astute reader sent me another example of the same error - but it's even
worse. This one is misleading in two ways. Take a look and see if you can figure out
what the two errors are. I'll explain beneath the fold.
This post is very delayed, but things have been busy.
I'm working my way up to finger trees, which are a wonderful
functional data structure. They're based on trees - and like many
tree-based structures, their performance relies heavily on the balance
of the tree. The more balanced the tree is, the better they perform.
In general, my preference when I need a balanced tree is a red-black
tree, which I wrote about before. But there are other kinds of balanced trees,
and for finger trees, many people prefer a kind of tree called a 2/3 tree.
A two-three tree is basically a B-tree with a maximum of two values per node.
If you don't know what a B-tree is, don't worry. I'm going to explain all the details.
A two-three tree is a tree where each node contains either one or two values. If
the node isn't a leaf, then its number of children is one more than its number
of values. The basic value constraints are the same as in a binary search tree:
smaller values to the left, larger values to the right.
The big difference in the 2/3 tree is balance: it's got a much stronger
balance requirement. In a 2/3 tree, every path from the tree root to the leaves
is exactly the same length.
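To make the node structure concrete, here's a minimal sketch of 2/3-tree nodes and search. The names are my own, not from any library, and insertion (which is where the rebalancing happens) is deliberately omitted:

```python
class Node:
    def __init__(self, values, children=None):
        self.values = values            # one or two sorted values
        self.children = children or []  # empty, or len(values) + 1 subtrees

def search(node, key):
    """Return True if key is in the 2/3 tree rooted at node."""
    if node is None:
        return False
    if key in node.values:
        return True
    if not node.children:               # leaf: nowhere else to look
        return False
    # Pick the child whose range contains key: smaller values go
    # left, larger go right, just as in a binary search tree.
    for i, v in enumerate(node.values):
        if key < v:
            return search(node.children[i], key)
    return search(node.children[-1], key)

# A hand-built, balanced 2/3 tree: every root-to-leaf path has length 2.
root = Node([10, 20], [Node([5]), Node([15]), Node([25, 30])])
print(search(root, 15))  # True
print(search(root, 7))   # False
```

Note how the root has two values and three children, while the leaves hold one or two values each; the perfect-balance property is what insertion has to preserve.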
It's economics time again.
I hate economics. I find it hopelessly dull. But apparently my style of explaining
it is really helpful to people, so they keep sending me questions; and as usual, I do my best to answer them, even if I don't particularly enjoy it.
So people have been asking me to explain what the proposed bank bailout plan is,
how it's supposed to work, and why so many people are upset about it.
It's been a while since I posted a recipe, and last week, I came up
with a real winner, so I thought I'd share it.
I absolutely love beef short ribs. They're one of the nicest cuts
of beef - they've got lots of meat, but they're well marbled with fat, and they're up against the bone, which gives them extra flavor. When cooked well, they've got an amazing flavor and a wonderful texture.
This recipe produces the best short ribs I've ever had. It's based,
loosely, on a Chinese recipe, but it's cooked more in a Western style.
There's one unusual ingredient, which is a Chinese sauce that I've mentioned before on the blog, called sha cha sauce. It's made from brill shrimp,
garlic, and chili peppers. You can get it in a Chinese grocery store. The English label is, unfortunately, "barbeque sauce", but you can identify it
by the ingredients, and by the picture of the jar over to the side.
- 4 lbs shortribs, bone in, cut flanken style. (That means
cut perpendicular to the bone, in chunks about 2 inches long.)
- One large onion.
- 4 cloves garlic. (more if you really like garlic)
- 1 cup soy sauce.
- 1 cup beef stock.
- 1 cup dry gin.
- 4 tablespoons sugar.
- One teaspoon sha cha sauce.
- Put the garlic and onion into a food processor, and
run it until they're nicely chopped. Then add the liquids to
the processor, and run it until the garlic and onions are a puree
mixed into the liquids.
- Put the short ribs into an oven-safe deep dish, and cover them with
the liquid. Put this into the fridge for a few hours to marinate.
- Heat the oven to 350°F, and put the marinated shortribs into the
oven - marinade and all. Cook for 3 hours, taking it out and basting it every 30 minutes.
- By now, you've got some very well-cooked shortribs, sitting in the marinade, along with a huge amount of fat that cooked out of them. Take them out of the liquid, and set them aside.
- Strain the liquid, and skim the fat. What's left is a very strong, but very flavorful sauce.
- Put the shortribs back into the now-empty pan. Give them a light baste
with the sauce. Heat the oven up to broil, and when it's hot, put the
short ribs back in, just long enough to brown and crisp the outside.
And they're ready to eat. Serve it with the sauce on the side, along
with rice and some stir-fried vegetables.
As I mentioned, I'll be posting drafts of various sections of my book here on the blog. This is a rough draft of the introduction to a chapter on logic. I would be extremely grateful for comments, critiques, and corrections.
I'm a big science fiction fan. In fact, my whole family is pretty
much a gaggle of sci-fi geeks. When I was growing up, every
Saturday at 6pm was Star Trek time, when a local channel showed
re-runs of the original series. When Saturday came around, we
always made sure we were home by 6, and we'd all gather in front of
the TV to watch Trek. But there's one thing about Star Trek for
which I'll never forgive Gene Roddenberry or Star Trek:
"Logic". As in, Mr. Spock saying "But that would
not be logical."
The reason that this bugs me so much is because it's taught a
huge number of people that "logical" means the same
thing as "reasonable". Almost every time I hear anyone
say that something is logical, they don't mean that it's logical -
in fact, they mean something almost exactly opposite - that it
seems correct based on intuition and common sense.
If you're being strict about the definition, then saying that
something is logical by itself is an almost meaningless
statement. Because what it means for a statement to be
logical is really that the statement is inferable
from a set of axioms in some formal reasoning system. If you don't
know what formal system, and you don't know what axioms, then the
statement that something is logical is absolutely meaningless. And
even if you do know what system and what axioms you're talking
about, the things that people often call "logical" are
not things that are actually inferable from the axioms.
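To make "inferable from a set of axioms" concrete, here's a toy sketch of my own (not from any textbook system): a tiny forward-chaining engine that applies modus ponens - from P and P→Q, conclude Q - over named propositions:

```python
def infer(axioms, rules):
    """Forward-chain modus ponens: from P and P -> Q, derive Q.

    axioms: set of proposition names taken as given.
    rules: list of (premise, conclusion) pairs, i.e. implications.
    Returns everything inferable from the axioms under the rules.
    """
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Axiom: Socrates is a man. Rule: man -> mortal.
theorems = infer({"socrates_is_a_man"},
                 [("socrates_is_a_man", "socrates_is_mortal")])
print("socrates_is_mortal" in theorems)  # True

# "Logical" depends on the axioms: with no axioms, nothing follows.
print("socrates_is_mortal" in
      infer(set(), [("socrates_is_a_man", "socrates_is_mortal")]))  # False
```

The second call is the whole point: the same statement is "logical" under one set of axioms and not under another, which is why calling something logical without naming the system and axioms says nothing.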
I'd like to apologize for the slowness of the blog. Fortunately, there's a very good reason: I've got a book contract! "Good Math" will be published by "The Pragmatic Programmers" press. The exact publication date isn't set yet, but my schedule plans for a complete draft of the book by summer. (And I used the scheduling rules proposed by one of my favorite managers. He said that when a programmer gives you an estimate of how long something should take, multiply it by two and increase the unit. So if they say it'll take a day, assume two weeks. If they say a week, assume two months. In my experience, it's actually a really good predictor.)
Anyway... For the last couple of weeks, I've been setting up a new computer to use for writing the book (gotta keep my Google work and my private work separate!), finishing the first three chapters, and trying to get comfortable with the PP markup system.
While I'm working on the book, I'm going to be posting drafts of some sections as posts on the blog. As a result, you'll see some re-runs of older posts in a slightly different format. There will also be some brand new material in the book format. The book draft posts will be clearly marked, and for those, even more than usual, I'd appreciate feedback and corrections.
Of course, I'll also be posting non-book related stuff. For example, I hope to have a new data structures post ready this evening. As a result of my work on the book, I'm back on a Haskell binge, and I'm working up a post about a fascinating functional data structure called a finger-tree.
A lot of people, reading the reporting on the current financial
disaster, have been writing me to ask what people mean when they talk
about incentives. The traders, the bankers, the fund managers, and all
of the other folks involved in this giant cluster-fuck aren't
stupid. So naturally, the question keeps coming up, why would they go
along with it? And the answer that we keep hearing is something along
the lines of "perverse incentives".
The basic idea is that the way that the people in the industry got paid,
it was actually in their interest to do things that they knew would
eventually cause a disaster. How could that work?