Archive for the 'Basics' category

More Basics: Compilers, Programs, and Languages

Dec 20 2013 Published by under Basics, Programming

After my "what is an OS?" post, a couple of readers asked me to write a similar post about compilers.

Before I can answer what a compiler is, it's helpful to first answer a different question: what is a program?

And here we get to one of my pet peeves. The most common answer to that question is "a detailed step-by-step sequence of instructions". For example, here's what wikipedia says:

A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.

This is wrong.

Back when people first started to study the idea of computing devices, they talked about computing machines as devices that performed a single, specific task. If you think about a basic Turing machine, you normally define a machine that performs a single computation: it's got a built-in sequence of states and a built-in transition table, so it can only perform one computation. It takes one kind of input, performs its computation on that input, and produces its output.
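
To make that concrete, here's a minimal sketch in Python (my own illustration; the formal definition doesn't involve code, and the bit-flipping task is an arbitrary choice). The transition behavior is hardwired into the function, so this "machine" can only ever do one thing:

    # A hardwired machine that performs exactly one computation: flipping
    # every bit on its tape. To compute anything else, you'd have to
    # build a different machine.
    def flip_bits(tape):
        cells = list(tape)
        head = 0
        while head < len(cells):              # halt when we run off the tape
            cells[head] = "1" if cells[head] == "0" else "0"
            head += 1                         # the only move this machine makes
        return "".join(cells)

    print(flip_bits("01101"))                 # -> "10010"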

Building up from these specific machines, they came up with the idea of a universal computing device. A universal computer was a computing machine whose input was a description of a different computing machine. By giving the universal machine different inputs, it could perform different computations.

The point of this diversion is that looking at this history tells us what a program really is: it's a description of a computing machine. Our computers are universal computing machines; they take programs as input to describe the computing machines we want them to emulate. What we're doing when we program is describing a computing machine that we'd like to create. Then we feed it into our universal computing machine, and it behaves as if we'd built a custom piece of hardware to do our computation!
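
Continuing the toy Python illustration from above (again, my own sketch, not a full formal Turing machine): a universal machine is just an interpreter whose input includes a description of the machine to emulate. The bit-flipper is now a transition table - data, not hardware - and feeding in a different table makes the same interpreter compute something entirely different:

    # A toy universal machine: it takes a *description* of a machine
    # (a transition table) as input, and emulates that machine.
    def run(table, tape, state="start", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            symbol = cells.get(head, blank)
            if (state, symbol) not in table:
                break                          # no matching rule: halt
            write, move, state = table[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # The bit-flipper again - but now it's input, not hardware.
    flipper = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
    }
    print(run(flipper, "01101"))               # -> "10010"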

The problem is, our computers are simultaneously very primitive and overwhelmingly complex. They can only work with data expressed in fixed-length sequences of on/off values; to do anything else, we need to find a way of expressing it in terms of extremely simple operations on those on/off values. And to make them operate efficiently, they've got a complex structure: many different kinds of storage (registers, L1 and L2 caches, addressable memory), complicated instruction sets, and a whole lot of tricky performance tweaks. It's really hard to program a computer in terms of its native instructions!

In fact, it's so hard to program in terms of native instructions that we just don't do it. What we do is write programs in terms of different machines. That's the point of a programming language.

Looked at this way, a programming language is a way of describing computing machines. The difference between programming languages is in how they describe computing machines. A language like C describes von Neumann machines. Haskell describes machines that work via lambda calculus computations, using something like a spineless tagless G-machine. Prolog describes machines that perform computations in terms of intuitionistic logical inference, like a Warren Abstract Machine.

So finally, we can get to the point: what is a compiler? A compiler is a program that takes a description of a computing machine written in one form, and translates it into the kind of machine description that our hardware can use. A programming language lets us ignore all of the complexities of how our actual hardware is built, and describe our computations in terms of a simple abstraction. A compiler takes that description, and turns it into a form that the computer hardware can actually run.
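
Here's a minimal sketch of that translation step, sticking with Python for illustration (the toy instruction set is my invention; real compilers target real instruction sets and do vastly more). It takes a description written for one kind of machine - arithmetic expressions, which assume a machine that understands nested math - and translates it into instructions for a cruder machine that only knows how to push values and apply operators to a stack:

    import ast

    # Translate an arithmetic expression into instructions for a toy
    # stack machine: the same computation, re-described for a much
    # more primitive target.
    def compile_expr(source):
        ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}
        def emit(node):
            if isinstance(node, ast.BinOp):
                yield from emit(node.left)
                yield from emit(node.right)
                yield ops[type(node.op)]
            elif isinstance(node, ast.Constant):
                yield f"PUSH {node.value}"
            else:
                raise ValueError("unsupported construct")
        return list(emit(ast.parse(source, mode="eval").body))

    print(compile_expr("(2 + 3) * 4"))
    # -> ['PUSH 2', 'PUSH 3', 'ADD', 'PUSH 4', 'MUL']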

For anyone who's read this far: I've gotten a few requests to talk about assembly language. I haven't programmed in assembly since the days of the Motorola 68000. This means that to do it, I'll need to learn something more up-to-date. Would you be more interested in seeing Intel, or ARM?

14 responses so far

Boot all the computers!

Dec 12 2013 Published by under Basics, Programming

Moving on from last week's operating system post, today we'll look at how a computer boots up and loads an operating system.

Let's start with why booting is a question at all. When a computer turns on, what happens? What we're used to seeing is that the disk drive turns on and starts spinning, and the computer loads something from the disk.

The question is: how does the computer know how to turn on the disk? As I said in the OS post, the CPU only really knows how to work with memory. To talk to a disk drive, it needs to do some very specific things - write to certain memory locations, wait for things to happen. Basically, in order to turn on that disk drive and load the operating system, it needs to run a program. But how does it know what program to run?

I'm going to focus on how modern PCs work. Other computers have used, or still use, a similar process: the details vary, but the basic idea is the same.

A quick overview of the process:

  1. CPU startup.
  2. Run BIOS initialization.
  3. Load bootloader.
  4. Run bootloader.
  5. Load and run OS.

As that list suggests, it's not a particularly simple process. We think of it as one step: turn on the computer, and it runs the OS. In fact, it's a complicated dance of many steps.

On the lowest level, it's all hardware. When you turn on a computer, some current gets sent to a clock. The clock is basically a quartz crystal; when you apply current to the crystal, it vibrates and produces a regular electrical pulse. That pulse is what drives the CPU. (When you talk about your computer's speed, you generally describe it in terms of the frequency of the clock pulse. For example, in the laptop that I'm using to write this post, I've got a 2.4 GHz processor: that means that the clock chip pulses 2.4 billion times per second.)

When the CPU gets a clock pulse, it executes an instruction from memory. It knows which instruction to execute because it's got a register (a special piece of memory built into the CPU) that tells it where the next instruction is. When the computer is turned on, that register is set to point at a specific location. Depending on the CPU, that might be address 0, or it might be some other magic location; it doesn't matter. What matters is that the CPU is built so that when it first turns on and receives the clock pulse that starts it running, that register will always point at the same place.
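
A sketch of that idea in Python (grossly simplified, and the reset address here is made up - the real location depends on the CPU): the program-counter register is forced to a fixed "reset vector" at power-on, so the very first instruction fetch always comes from the same place.

    RESET_VECTOR = 0xFFF0   # invented for illustration; varies by CPU

    class ToyCPU:
        def __init__(self, memory):
            self.memory = memory
            self.pc = RESET_VECTOR    # hardwired: power-on always starts here

        def step(self):               # roughly, what one clock pulse does
            instruction = self.memory[self.pc]
            self.pc += 1
            print(f"executing {instruction!r}")

    # Whatever sits at the reset vector runs first - on a PC, the BIOS.
    ToyCPU({RESET_VECTOR: "JMP bios_entry"}).step()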

The software part of the boot process starts there: the hardware maps a chunk of read-only memory to that location, so when the computer turns on, there's a program sitting at that address, ready to run. On PCs, that program is called the BIOS (Basic Input/Output System).

The BIOS knows how to tell the hardware that operates your display to show text on the screen, and it knows how to read stuff on your disk drives. It doesn't know much beyond that. What it knows is extremely primitive. It doesn't understand things like filesystems - the filesystem is set up and controlled by the operating system, and different operating systems will set up filesystems in different ways. The BIOS can't do anything with a filesystem: it doesn't include any programming to tell it how to read a filesystem, and it can't ask the operating system to do it, because the OS hasn't loaded yet!

What the BIOS does is something similar to what the CPU did when it started up. The CPU knew to look in a special location in memory to find a program to run. The BIOS knows to look at a special section on a disk drive to find a program to run. Every disk has a special chunk of data on it called the master boot record (MBR). The MBR contains another program, called a bootloader. So the BIOS loads the bootloader, and then uses it to actually load the operating system.
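
You can poke at this structure yourself. An MBR is the first 512-byte sector of a disk: 446 bytes of bootloader code, a 64-byte partition table, and a two-byte signature (0x55 0xAA) that tells the BIOS the sector is bootable. Here's a sketch in Python that inspects a raw disk image (the filename is hypothetical; reading a real device like /dev/sda works the same way, but needs root):

    import struct

    def inspect_mbr(path):
        with open(path, "rb") as f:
            sector = f.read(512)            # the MBR is the first sector
        table = sector[446:510]             # four 16-byte partition entries
        signature = struct.unpack("<H", sector[510:512])[0]
        print(f"boot signature {signature:#06x}:",
              "bootable" if signature == 0xAA55 else "not a valid MBR")
        for i in range(4):
            entry = table[i * 16:(i + 1) * 16]
            if any(entry):                  # all-zero entries are unused
                print(f"partition {i}: type {entry[4]:#04x}")

    inspect_mbr("disk.img")                 # hypothetical image file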

This probably seems a bit weird. The computer starts up by looking in a specific location for a program to run (the BIOS), which loads something (the bootloader). The thing it loads (the bootloader) also just looks in a specific location for a program to run (the OS). Why the two layers?

Different operating systems are built differently, and the specific steps to actually load and run the OS differ. For example, on my laptop, I can run two operating systems: MacOS and Linux. On MacOS (aka Darwin), there's something called a microkernel that gets loaded. The microkernel is stored in a file named "mach_kernel" in the root directory of a type of filesystem called HFS. But in my installation of Linux, the OS is stored in a file named "vmlinuz" in the root directory of a type of filesystem called EXT4. The BIOS doesn't know what operating system it's loading, and it doesn't know what filesystem the OS uses - which means that it knows neither the name of the file to load, nor how to find that file.

The bootloader was set up by the operating system. It's specific to the operating system - you can think of it as part of the OS. So it knows what kind of filesystem it's going to look at, and how to find the OS in that filesystem.

So once the bootloader gets started, it knows how to load and run the operating system, and once it does that, your computer is up and running, and ready for you to use!

Of course, all of this is a simplified version of how it works. But for understanding the process, it's a reasonable approximation.

(To reply to commenters: I'll try to do a post like this about compilers when I have some time to write it up.)

4 responses so far

What the heck is a DNS amplification DoS attack?

Apr 08 2013 Published by under Basics, woo

A couple of weeks ago, there was a bunch of news about a major DoS attack on Spamhaus. Spamhaus is an online service that maintains a blacklist of mail servers known for propagating spam. I've been getting questions about what a DoS attack is, and more specifically about what a "DNS amplification attack" (the specific attack at the heart of last week's news) is. This all became a bit more relevant to me last week, because some asshole who was offended by my post about the Adria Richards affair launched a smallish DoS attack against scientopia. (This is why we were intermittently very slow last week, between Tuesday and Thursday. Also, to be clear, the DNS amplification attack was used on Spamhaus; Scientopia was hit by a good old-fashioned DDoS attack.)

So what is a DoS attack? And what specifically is a DNS amplification attack?

Suppose that you're a nasty person who wants to take down a website like scientopia. How could you do it? You could hack into the server, and delete everything. That would kill it pretty effectively, right?

It certainly would. But from the viewpoint of an attacker, that's not a particularly easy thing to do. You'd need to get access to a privileged account on our server. Even if we're completely up to date on patches and security fixes, that's probably possible, but it's going to be a lot of work. Even for a dinky site like scientopia, getting that kind of access isn't trivial. For a big security-focused site like Spamhaus, it's close to impossible: there are layers of security to get through, and there are people constantly watching for attacks. Even if you got through, if the site has reliable backups, it won't be down for long - and once they're back up, they'll patch whatever hole you used to get in, putting you back at square one. It's a lot of work, and there are much easier ways to take down a site.

What you, as an attacker, want is a way to take the site down without having any kind of access to the system. You want a way that keeps the site down for as long as you want it down. And you want a way that doesn't leave easily traced connections to you.

That's where the DoS attack comes in. DoS stands for "denial of service". The idea of a DoS attack is to take a site down without really taking it down. You don't actually kill the server; you just make it impossible for legitimate users to access it. If the site's users can't access it, then even though the server is technically still up and running, you've effectively killed it.

How do you do that? You overwhelm the server. You target some finite resource on the server, and force it to use up that resource just dealing with the requests or traffic that you send it, leaving it with nothing for its legitimate users.

In terms of the internet, the two resources that people typically target are CPU and network bandwidth.

Every time you send a request to a webserver, the server has to do some computation to process that request. The server has a finite amount of computational capacity. If you can hit it with enough requests that it spends all of its time processing yours, the site becomes unusable, and it effectively goes down. This is the simplest kind of DoS attack. It's generally done in a form called a DDoS - a distributed denial of service attack - where the attacker uses thousands or millions of virus-infected computers to send the requests. The server gets hit by a vast storm of requests, and it can't distinguish the legitimate ones from the ones generated by the attacker. This is the kind of attack that hit Scientopia last week: we were getting a couple of thousand malformed requests per second.

This kind of attack can be very effective. It's hard - not impossible, but hard - to fight. You need to identify the common traits of the attacking requests, and set up some kind of filter to discard them. From the attacker's point of view, it has one big problem: price. Most people don't have a personal collection of virus-infected machines to mount an attack with, so they rent them! Virus authors run services where, for a fee, they'll use the machines they've infected to run an attack for you. They typically charge per machine-hour, so keeping a good attack going for a long time is expensive. Another problem with this kind of attack is that the traffic is symmetric: the client needs to establish a connection to the server, which consumes CPU, network connections, and bandwidth on the client as well.

The other main DoS vector is network bandwidth. Every server running a website is connected to the network by a connection with a fixed capacity, called its bandwidth. A network connection can only carry a certain quantity of information. People love to make fun of the congressman who said that the internet is like a series of tubes, but it's not really a bad analogy: any given connection is a lot like a pipe, and you can only cram so much information through that pipe in a given period of time. If you can send enough traffic to completely fill that pipe, then the computer on the other end is, effectively, off the network. It can't receive any requests.

For a big site like Spamhaus, it's very hard to get enough attacking machines to effectively kill the site. The amount of bandwidth, and the number of different network paths connecting Spamhaus to the internet, are huge! The number of infected machines available for an attack is limited, and the cost of using all of them is prohibitive.

What an attacker wants for killing something like Spamhaus is an attack where the amount of work/CPU/traffic used to generate the attack is much smaller than the amount of work/CPU/traffic the target has to expend to absorb it. That's where amplification comes in: you want to find some way of using a small amount of work/traffic on your attacking machines to make your target lose a large amount of work/traffic.

In this recent attack on Spamhaus, the attackers used an amplification attack based on a basic piece of internet infrastructure called the Domain Name Service (DNS). DNS is the service used to convert between the name of a server (like scientopia.org) and its numeric internet address (184.106.221.182). DNS has some technical properties that make it ideal for this kind of attack:

  1. It's not a connection-based service. In most internet services, you establish a connection to a server and send a request over that connection; the server responds on the same connection. That means two things. First, you need to use just as much bandwidth as the target, because if you drop the connection, the server sees the disconnect and stops processing your request. Second, the server knows who it's connected to, and always sends the results of a request back to the client that requested it. But DNS doesn't work that way. In DNS, you send a request without a connection, and in the request you provide an address that the response should be sent to. So you can fake a DNS request by putting someone else's address as the "respond-to" address in the request.
  2. It's possible to set up DNS to create very large responses to very small requests. There are lots of ways to do this; the important thing is that it's really easy to use DNS in a way that amplifies the amount of data being sent to a server by a factor of 100. In one common form of DNS amplification, you send 60-byte requests which generate responses larger than 6,000 bytes.

Put these two properties together, and you get a great attack vector: you can send tiny, cheap requests which don't cause any incoming traffic on your attacking machines, and which direct large quantities of data at your target. Doing this is called a DNS amplification attack: an amplification attack which uses the properties of DNS to deliver large quantities of data to the target, using small quantities of data sent by the attackers.
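
You can measure the amplification factor for yourself without doing anything abusive: send a single ordinary (non-spoofed) DNS query, and compare the size of what you sent to the size of what comes back. Here's a sketch using only Python's standard library; the domain and resolver address are arbitrary choices, and many resolvers now deliberately limit "ANY" responses precisely because of these attacks, so your numbers will vary:

    import socket
    import struct

    def dns_query_wire(name, qtype=255):        # type 255 = "ANY"
        header = struct.pack(">HHHHHH",
                             0x1234,             # query ID
                             0x0100,             # flags: recursion desired
                             1, 0, 0, 0)         # one question, nothing else
        qname = b"".join(bytes([len(label)]) + label.encode()
                         for label in name.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", qtype, 1)  # class IN

    query = dns_query_wire("example.com")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(query, ("8.8.8.8", 53))          # any recursive resolver
    response, _ = sock.recvfrom(4096)

    print(f"sent {len(query)} bytes, got {len(response)} back: "
          f"{len(response) / len(query):.1f}x amplification")
    # Plain DNS caps UDP responses at 512 bytes; the "very common DNS
    # extension" described below (EDNS) is what lifts that cap.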

That's exactly what happened to Spamhaus last week. The attackers used a very common DNS extension, which allowed them to amplify 60-byte requests into 4,000-byte responses, and to direct those responses at the Spamhaus servers.

There are, of course, more details. (For example, when direct attacks didn't work, the attackers tried an indirect approach: instead of targeting the Spamhaus servers themselves, they attacked other servers that Spamhaus relied on.) But this is the gist.

7 responses so far

What is math?

Dec 07 2009 Published by under Basics, goodmath


I've got a bunch of stuff queued up to be posted over the next couple of days. It's
been the sort of week where I've gotten lots of interesting links from
readers, but I haven't had time to finish anything!

I thought I'd start off with something short but positive. A reader sent
me a link to a post on Reddit, with the following question:

Throughout elementary and high school, I got awful marks in math. I always
assumed I was just stupid in that way, which is perfectly possible. I also
hated my teacher, so that didn't help. A friend of mine got his PhD in math
from Harvard before he was 25 (he is in his 40's now) I was surprised the
other week when I learned he isn't particularly good at basic arithmetic etc.
He said that's not really what math is about. So my question is really for
math fans/pros. What is math, really? I hear people throwing around phrases
like "elegant" and "artistic" regarding math. I don't understand how this can
be. To me, math is add, subtract, etc. It is purely functional. Is there
something you can compare it to so that I can understand?

This hits on one of my personal pet peeves. Math really is a beautiful
thing, but the way that math is taught turns it into something
mechanistic, difficult, and boring. The person who posted this question
is a typical example of a victim of lousy math education.

So what is math? It's really a great question, and not a particularly
easy one to answer.

Continue Reading »

89 responses so far

Basics: Significant Figures

After my post the other day about rounding errors, I got a ton of
requests to explain the idea of significant figures. That's
actually a very interesting topic.

The idea of significant figures is that when you're doing
experimental work, you're taking measurements - and measurements
always have a limited precision. The fact that your measurements - the
inputs to any calculation or analysis that you do - have limited
precision, means that the results of your calculations likewise have
limited precision. Significant figures (or significant digits, or just "sigfigs" for short) are a method of tracking measurement
precision, in a way that allows you to propagate your precision limits
throughout your calculation.

Before getting to the rules for sigfigs, it's helpful to show why
they matter. Suppose that you're measuring the radius of a circle, in
order to compute its area. You take a ruler, eyeball it, and end
up measuring the circle's radius as about 6.2 centimeters. Now you go to
compute the area, A = πr², using π = 3.141592653589793... So what's the
area of the circle? If you do it the straightforward way, you'll end up
with a result of 120.76282160399165 cm².

The problem is, your original measurement of the radius was
far too crude to produce a result of that precision. The real
area of the circle could easily be as high as 128, or as low as
113, assuming typical measurement errors. So claiming that your
measurements produced an area calculated to 17 digits of precision is
just ridiculous.
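
Here's that arithmetic as a Python sketch (the round_sig helper is my own; there's no standard-library function for significant figures):

    import math

    def round_sig(x, sig):
        """Round x to the given number of significant figures."""
        if x == 0:
            return 0.0
        return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

    radius = 6.2                    # measured: only 2 significant figures
    area = math.pi * radius ** 2
    print(area)                     # 120.76282160399165 - absurd precision
    print(round_sig(area, 2))       # 120.0, better written as 1.2 x 10^2

    # With a realistic ruler error of +/- 0.2 cm, the true area could be
    # anywhere from roughly 113 to roughly 129 square centimeters:
    print(round_sig(math.pi * 6.0**2, 3), round_sig(math.pi * 6.4**2, 3))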

Continue Reading »

No responses yet

Rounding and Bias

Mar 01 2009 Published by under Basics, Numbers

Another alert reader sent me a link to a YouTube video, which is moderately interesting.
The video itself is really a deliberate joke, but it does demonstrate a worthwhile point. It's about rounding.
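
As a taste of the point (this demonstration is mine, not the video's): if you always round halves upward, you introduce a systematic bias, while round-half-to-even - which is what Python's built-in round() does - lets the upward and downward roundings cancel.

    import math

    halves = [n + 0.5 for n in range(1000)]     # 0.5, 1.5, 2.5, ...
    exact = sum(halves)

    always_up = sum(math.floor(v + 0.5) for v in halves)  # schoolbook rule
    half_even = sum(round(v) for v in halves)             # banker's rounding

    print(always_up - exact)    # +500.0: every value nudged upward
    print(half_even - exact)    # 0.0: the errors cancel out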

Continue Reading »

No responses yet

Mortgage Basics (part 1)

May 30 2008 Published by under Basics

One thing that I've been getting a lot of requests about is the ongoing
mortgage mess in the US. I wrote a bit about it a while ago, explaining
what was going on. But since then, I've gotten a lot of people asking
me to explain various things about how mortgages work, and what kinds
of trouble people have gotten into.

Continue Reading »

39 responses so far

Basics: Proof by Contradiction

Nov 14 2007 Published by under Basics

I haven't written a basics post in a while, because for the most part, that well has run dry, but once
in a while, one still pops up. I got an email recently asking about proofs by contradiction and
counterexamples, and I thought that would be a great subject for a post. The email was really
someone trying to get me to do their homework for them, which I'm not going to do - but I can
explain the ideas, and the relationships and differences between them.

Proof by contradiction, also known as "reductio ad absurdum", is one of the most beautiful proof
techniques in math. In my experience, among proofs of difficult theorems, proofs by contradiction are the
easiest to understand. The basic idea is very simple. Want to prove that something is true? Look
at what would happen if it were false. If you get a nonsensical, contradictory result from assuming it's
false, then it must be true.
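
The canonical illustration (my addition - the post's own examples are behind the link) is the proof that √2 is irrational:

    Claim: $\sqrt{2}$ is irrational.
    Assume the opposite: $\sqrt{2} = p/q$ for integers $p, q$ in lowest terms.
    Squaring gives $p^2 = 2q^2$, so $p^2$ is even, and therefore $p = 2k$.
    Substituting back: $4k^2 = 2q^2$, so $q^2 = 2k^2$, and $q$ is even too.
    But then $p$ and $q$ share a factor of 2, contradicting "lowest terms" -
    so the assumption was nonsense, and $\sqrt{2}$ must be irrational.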

Continue Reading »

78 responses so far

Basics: Sets and Classes

May 07 2007 Published by under Basics

This is something that came up in some of the comments on the recent "nimbers" post, and I thought it was worth promoting to the front, and getting up under an easy-to-find title in the "basics" series.

In a lot of discussions in all different areas of math, you encounter talk about sets and classes, and you'll find people worried about whether they're talking about sets or classes. What's the difference? I mentioned this once before, but it's buried in a discussion of the concept of "meta", which is why I thought it was worth moving it to its own top-level post: if you don't know the difference, you're not going to look in the body of a discussion about the concept of going meta to find the explanation!

I'll start with just the definitions, and then I'll dive into the discussion of why we make the distinction.

  • A class is any collection of things which have some common property that defines them: the class of logical statements, the class of numbers.
  • A set is a class which is a member of a class.
  • A proper class is a class which is not a set.
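
A standard example of why the distinction matters (my addition; the post's full discussion is behind the link): consider Russell's collection $R = \{x \mid x \notin x\}$, the class of all sets that are not members of themselves. If $R$ were a set, then $R \in R$ if and only if $R \notin R$ - a flat contradiction. So $R$ is a proper class: a perfectly legitimate collection, but one that can never be a member of anything.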

Continue Reading »

No responses yet

Basics: Innumeracy

Apr 16 2007 Published by under Basics

I've used the term innumeracy fairly often on this blog, and I've had a few people write to ask me what it means. It's also, I think, a very important idea.

Innumeracy is to math what illiteracy is to reading: the fundamental lack of ability to understand or use numbers or math. And like illiteracy, true innumeracy is relatively rare, but there are huge numbers of people who, while having some minimal understanding of numbers and arithmetic, are functionally innumerate: they're not capable of anything but the most trivial arithmetic, and how anything more complicated than simple basic arithmetic actually works is a total mystery to them.

Continue Reading »

No responses yet

Older posts »