So, I messed up a bit in the previous post. Let me get that out of the way before we move forward!

In measure theory, you aren't just working with sets. You're working with something called *σ-algebras*. It's a very important distinction.

The problem is, our intuition about sets doesn't always work. Sets, as defined formally, are really pretty subtle. We expect certain things to be true, because they *make sense*. But in fact, they are *not* implied by the definition of sets. A σ-algebra is, essentially, a *well-behaved* collection of sets - a collection whose behavior matches our usual expectations.

To be formal, a σ-algebra over a set S is a collection Σ of subsets of S such that:

- Σ is closed under set complement.
- Σ is closed under countable union.
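For a *finite* set, "countable union" reduces to finite union, so the two closure conditions above can be checked mechanically. Here's a small sketch in Python - the helper name `is_sigma_algebra` is mine, just for illustration:

```python
def is_sigma_algebra(S, sigma):
    """Check the two closure conditions for a finite set S and a
    collection `sigma` of subsets of S."""
    S = frozenset(S)
    sigma = {frozenset(a) for a in sigma}
    # Closed under complement: S - a must be in sigma for every a.
    closed_under_complement = all(S - a in sigma for a in sigma)
    # For a finite collection, closure under pairwise union implies
    # closure under any countable union.
    closed_under_union = all(a | b in sigma for a in sigma for b in sigma)
    return closed_under_complement and closed_under_union

S = {1, 2, 3, 4}
print(is_sigma_algebra(S, [set(), {1, 2}, {3, 4}, S]))  # True
print(is_sigma_algebra(S, [set(), {1, 2}, S]))          # False: missing {3, 4}
```

The second collection fails because the complement of {1, 2} isn't in it.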

The reason why you need to make this restriction is, ultimately, because of the axiom of choice. Using the axiom of choice, you can create sets which are non-measurable. They're clearly subsets of a measurable set, and supersets of other measurable sets - and yet they are, themselves, not measurable. This leads to things like the Banach-Tarski paradox: you can take a measurable set, divide it into non-measurable subsets, and then combine those non-measurable subsets back into measurable sets whose sizes seem to make no sense. You can take a sphere the size of a baseball, slice it into pieces, and then re-assemble those pieces into a sphere the size of the earth, without stretching them!

These non-measurable sets blow away our expectations about how things should behave. The restriction to σ-algebras is just a way of saying that we need to be working in a space where all sets are measurable. When we're looking at measure theory (or probability theory, where we're building on measures), we need to exclude non-measurable sets. If we don't, we're seriously up a creek without a paddle. If we allowed non-measurable sets, then the probability theory we're building would be inconsistent, and that's the kiss of death in mathematics.

Ok. So, with that out of the way, how do we actually use Kolmogorov's axioms? It all comes down to the idea of a *sample space*. You need to start with an experiment that you're going to observe. For that experiment, there are a set of possible outcomes. The set of all possible outcomes is the sample space.

Here's where, sadly, even axiomatized probability theory gets a bit handwavy. Given the sample space, you can define the structure of the sample space with a function, called the probability mass function, *f*, which maps each possible event in the sample space to a probability. To be a valid mass function for a sample space S, it's got to have the following properties:

- For each event \(e \in S\), \(0 \le f(e) \le 1\).
- The sum of the probabilities over the sample space must be 1: \(\sum_{e \in S} f(e) = 1\).
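Those two properties are easy to check mechanically for a finite sample space. A quick sketch in Python, using a fair six-sided die as the sample space (the helper name `is_valid_pmf` is mine, not a standard function):

```python
from fractions import Fraction

def is_valid_pmf(f):
    """f maps each event in the sample space to its probability;
    check that each value is in [0, 1] and that they sum to 1."""
    return (all(0 <= p <= 1 for p in f.values())
            and sum(f.values()) == 1)

fair_die = {face: Fraction(1, 6) for face in range(1, 7)}
print(is_valid_pmf(fair_die))              # True
print(is_valid_pmf({"H": 0.6, "T": 0.6}))  # False: sums to 1.2
```

Using `Fraction` avoids the floating-point rounding that would otherwise make the "sums to exactly 1" check flaky.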

So we wind up with a sort of circularity: in order to describe the probability of events, we need to start by knowing the probability of events. In fact, this isn't really a problem: we're talking about taking something that we observe in the real world, and mapping it into the abstract space of math. Whenever we do that, we need to take our observations of the real world and create an approximation as a mathematical model.

The point of probability theory isn't to do that primitive mapping. In general, we already understand how rolling a single die works. We know how it should behave, and we know how and why its actual behavior can vary from our expectation. What we really want to know is how the probabilities of multiple events combine.

We don't need any special theory to figure out what the probability of rolling a 3 on a six-sided die is: that's easy, and it's obvious: it's 1 in 6. But what's the probability of winning a game of craps?
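The craps question really is answerable with nothing more than the mass function for two dice plus the combination rules. A sketch in Python, following the standard pass-line rules (win on 7 or 11 on the come-out roll, lose on 2, 3, or 12, otherwise re-roll the "point" before a 7); the variable names are mine:

```python
from fractions import Fraction

# Mass function for the total of two fair dice: count the ways
# to make each total out of the 36 equally likely rolls.
ways = {t: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == t)
        for t in range(2, 13)}
p = {t: Fraction(w, 36) for t, w in ways.items()}

# Come-out roll: 7 or 11 wins immediately.
win = p[7] + p[11]

# Otherwise the total becomes the "point", and the shooter wins by
# re-rolling the point before rolling a 7.  Conditioning on the rolls
# that end the round, that happens with probability p[point] / (p[point] + p[7]).
for point in (4, 5, 6, 8, 9, 10):
    win += p[point] * p[point] / (p[point] + p[7])

print(win)         # 244/495
print(float(win))  # ≈ 0.4929 - slightly less than even odds
```

The exact answer, 244/495, is just under one half - which is, of course, why the casino is happy to offer the game.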

If all days of the year 2001 are equally likely, then we don't need anything fancy to ask for the probability that a person born in 2001 has July 21st as their birthday. It's easy: 1 in 365. But if I've got a group of 35 people, what's the probability of two of them sharing the same birthday?
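The birthday question yields to the same kind of combination: compute the probability that all 35 birthdays are *distinct*, and subtract from one. A quick sketch in Python, assuming 365 equally likely birthdays (the function name is mine):

```python
from fractions import Fraction
from math import prod

def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming all `days` birthdays are equally likely."""
    # The i-th person's birthday must avoid the i birthdays already taken.
    p_all_distinct = prod(Fraction(days - i, days) for i in range(n))
    return 1 - p_all_distinct

print(float(p_shared_birthday(35)))  # ≈ 0.814
```

So with 35 people, a shared birthday is better than a 4-in-5 bet - far higher than most people's intuition suggests.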

Both of those questions start with the assignment of a probability mass function, which is trivial. But answering them involves combining the probabilities given by those mass functions, using Kolmogorov's axioms, to figure out the probabilities of the more complicated events.

Maybe this is a little bit nitpicking, but I think you should demand that Sigma contain at least S and the empty set. Otherwise one could simply choose the "empty collection" for Sigma.

Anyway, have a nice day,

Semidel.

@Semidel

Allowing Σ = ∅ lets the category of sample spaces over a set have an initial object, which seems like a decent property to have.