Archive for: October, 2011

Fun with Functors

Oct 25 2011 Published under Category Theory

So far, we've looked at the minimal basics of categories: what they are, and how to categorize the kinds of arrows that exist in categories in terms of how they compose with other arrows. Just that much is already enlightening about the nature of category theory: the focus is always on composition.

But to get to really interesting stuff, we need to build up a bit more, so that we can look at more interesting constructs. So now, we're going to look at functors. Functors are one of the most fundamental constructions in category theory: they give us the ability to create multi-level constructions.

What's a functor? Well, it's basically a structure-preserving mapping between categories. So what does that actually mean? Let's be a bit formal:

A functor \(F\) from a category \(C\) to a category \(D\) is a mapping from \(C\) to \(D\) that:

  • Maps each object \(m \in Obj(C)\) to an object \(F(m) \in Obj(D)\).
  • Maps each arrow \(a : x \rightarrow y \in Mor(C)\) to an arrow \(F(a) : F(x) \rightarrow F(y)\), where:
    • \(\forall o \in Obj(C): F(1_o) = 1_{F(o)}\). (Identity is preserved by the functor mapping of morphisms.)
    • \(\forall n, o \in Mor(C): F(n \circ o) = F(n) \circ F(o)\). (Composition is preserved by the functor mapping of morphisms.)

Note: The original version of this post contained a major typo. In the second condition on functors, the "n" and the "o" were reversed. With them in that direction, the definition is actually the definition of something called a contravariant functor. Alas, I can't even pretend that I mixed up covariant and contravariant functors; the error wasn't nearly so intelligent. I just accidentally reversed the symbols, and the result happened to make sense in the wrong way.

That's the standard textbook gunk for defining a functor. But if you look back at the original definition of a category, you should notice that this looks familiar. In fact, it's almost identical to the definition of the necessary properties of arrows!
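The two laws in that definition are concrete enough to spot-check in code. Here's a minimal Python sketch (the list construction and the name `fmap` are my own choice of illustration, not anything from the definition above): lists form a functor on the category of sets, sending each set X to the lists over X, and each function to its elementwise map.

```python
# The "list" functor on Set: F(X) = lists over X; F(f) maps f elementwise.
def fmap(f):
    """The functor's action on arrows: lift f : X -> Y to F(f) : F(X) -> F(Y)."""
    return lambda xs: [f(x) for x in xs]

identity = lambda x: x
compose = lambda g, f: (lambda x: g(f(x)))

f = lambda n: n + 1          # an arrow in Set
g = lambda n: n * 2          # another arrow in Set
sample = [1, 2, 3]

# First law: F(1_X) = 1_{F(X)} -- identities are preserved.
assert fmap(identity)(sample) == identity(sample)

# Second law: F(g . f) = F(g) . F(f) -- composition is preserved.
assert fmap(compose(g, f))(sample) == compose(fmap(g), fmap(f))(sample)
```

This only spot-checks the laws on sample data, of course; it doesn't prove them.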

We can make functors much easier to understand by talking about them in the language of categories themselves. Functors are really nothing but morphisms - they're morphisms in a category of categories.

There's a kind of category, called a small category. (I happen to dislike the term "small" category, but I don't get a say!) A small category is a category whose collections of objects and arrows are sets, not proper classes.

(As a quick reminder: in set theory, a class is a collection of sets that can be defined by a non-paradoxical property that all of its members share. Some classes are sets of sets; some classes are not sets; they lack some of the required properties of sets - but still, the class is a collection with a well-defined, non-paradoxical, unambiguous property. If a class isn't a set of sets, but just a collection that isn't a set, then it's called a proper class.)

Small categories are, basically, categories that are well-behaved - meaning that their collections of objects and arrows don't have any of the obnoxious properties that would prevent them from being sets.

The small categories are, quite beautifully, the objects of a category called Cat. (For some reason, category theorists like three-letter labels.) The arrows of Cat are the functors - functors really are just morphisms between categories. Once you wrap your head around that, the meaning of a functor, and the meaning of a structure-preserving transformation, become extremely easy to understand.

Functors come up over and over again, all over mathematics. They're an amazingly useful notion. I was looking for a list of examples of things that you can describe using functors, and found a really wonderful list on Wikipedia. I highly recommend following that link and taking a look at the list. I'll just mention one particularly interesting example: groups and group actions.

If you've been reading GM/BM for a very long time, you'll remember my posts on group theory. In a very important sense, the entire point of group theory is to study symmetry. But working from a set theoretic base, it takes a lot of work to get to the point where you can actually define symmetry. It took many posts to build up the structure - not to present set theory, but just to present the set theoretic constructs that you need to define what symmetry means, and how a symmetric transformation was nothing but a group action. Category theory makes that so much easier that it's downright dazzling. Ready?

Every group can be represented as a category with a single object. A functor from that one-object category to the category Set is a group action on the set that is the target of the functor. Poof! Symmetry.
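Here's that construction made concrete, as a Python sketch (I'm using the cyclic group Z3 and a three-element set purely as an illustration; the names are my own): the group's elements are the arrows of a one-object category, and the action map plays the role of the functor's mapping of arrows into Set.

```python
# A group as a one-object category: the arrows are the group elements,
# and arrow composition is the group operation. A functor into Set picks
# a set and assigns each group element a function on it -- and that
# assignment is exactly a group action. Here: Z3 acting on {0, 1, 2}.
group = [0, 1, 2]                 # elements of the cyclic group Z3
op = lambda a, b: (a + b) % 3     # the group operation (arrow composition)

def act(g):
    """The functor's arrow map: each group element becomes a permutation."""
    return lambda x: (x + g) % 3

X = [0, 1, 2]                     # the set the single object is sent to

# Identity arrow -> identity function.
assert all(act(0)(x) == x for x in X)

# Composition of arrows -> composition of functions.
for a in group:
    for b in group:
        assert all(act(op(a, b))(x) == act(a)(act(b)(x)) for x in X)
```

The two asserts are precisely the functor laws; satisfying them is what makes this map of group elements to permutations a group action.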

Symmetry means structure-preserving transformation, and a functor is a structure-preserving transformation - so they're almost the same thing. The functor is an even more general abstraction of that concept: group symmetry is just one particular case of a functor transformation. Once you get functors, understanding symmetry is easy. And so are lots of other things.

And of course, you can always carry these things further. There is a category of functors; and there are notions which can be most easily understood in terms of functors operating on the category of functors!

This last bit should make it clear why category theory is affectionately known as abstract nonsense. Category theory operates at a level of abstraction where almost anything can be wrapped up in it; and once you've wrapped something up in a category, almost anything you can do with it can itself be wrapped up as a category - levels upon levels, categories of categories, categories of functors on categories of functors on categories, ad infinitum. And yet, it makes sense. It captures a useful, comprehensible notion. All that abstraction, to the point where it seems like nothing could possibly come out of it. And then out pops a piece of beautiful crystal. It's really remarkable.


Category Diagrams

Oct 23 2011 Published under Category Theory, Uncategorized

One of the most unusual things about category theory - and one that I really love - is category diagrams. Many things that would be expressed as equations or complex formulae in most mathematical formalisms can be presented as diagrams in category theory. If you are, like me, a very visual thinker, then category diagrams can present information in a remarkably clear form - and the categorical statements of many proofs can be much clearer than the alternatives, because they can be presented in diagram form.

A category diagram is a directed graph, where the nodes are objects from a category, and the edges are morphisms. Category theorists say that a diagram commutes if, for any two paths through the arrows of the diagram from node A to node B, the composition of all edges along the first path is equal to the composition of all edges along the second path. (But it's important to note that you do need to be careful here: merely being able to draw a diagram doesn't mean that it necessarily commutes, just like being able to write an equation doesn't mean that the equation is true! You do need to show that your diagram commutes.)

As usual, an example will make that clearer.


This diagram is a way of expressing the associativity property of morphisms: \(f \circ (g \circ h) = (f \circ g) \circ h\). The way that the diagram illustrates this is: \(g \circ h\) is the morphism from A to C. When we compose that with \(f\), we wind up at D. Alternatively, \(f \circ g\) is the arrow from B to D; if we compose that with \(h\), we also wind up at D. The two paths, \(f \circ (A \rightarrow C)\) and \((B \rightarrow D) \circ h\), are both paths from A to D; therefore, if the diagram commutes, they must be equal. And the arrows on the diagram are all valid arrows, arranged in connections that fit the rules of composition correctly, so the diagram does commute.
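Here's the same commuting diagram spot-checked with concrete functions in Python (the particular functions are arbitrary examples of my own, with the objects A, B, C, and D all taken to be the integers):

```python
# Arrows h : A -> B, g : B -> C, f : C -> D, with all four objects = int.
compose = lambda p, q: (lambda x: p(q(x)))

h = lambda x: x + 1
g = lambda x: x * 3
f = lambda x: x - 7

# Both paths from A to D agree: f . (g . h) == (f . g) . h.
for x in range(-5, 6):
    assert compose(f, compose(g, h))(x) == compose(compose(f, g), h)(x)
```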

Let's look at one more diagram, which we'll use to define an interesting concept, the principal morphism between two objects. The principal morphism is a single arrow from A to B such that any composition of morphisms that goes from A to B ends up being equivalent to it.

In diagram form, a morphism \(f\) is principal if \(\forall x : A \rightarrow A\), \(\forall y : A \rightarrow B\), the following diagram commutes:


In words, this says that \(f\) is a principal morphism if for every endomorphic arrow \(x\), and for every arrow \(y\) from A to B, \(f\) is the result of composing \(x\) and \(y\). There's also something interesting about this diagram that you should notice: A appears twice in the diagram! It's the same object; we just draw it in two places to make the commutation pattern easier to see. A single object can appear in a diagram as many times as you want, to make the pattern of commutation easy to see. When you're looking at a diagram, you need to be a bit careful to read the labels to make sure you know what each node means.

One more definition by diagram: \((x, y)\) is a retraction pair, and A is a retract of B (written \(A < B\)) if the following diagram commutes:


That is, \(x : A \rightarrow B\) and \(y : B \rightarrow A\) are a retraction pair if \(y \circ x = 1_A\).
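A concrete instance in Set, sketched in Python (the particular sets and maps are invented for illustration): A embeds into a bigger set B, and a clamping map sends B back onto A.

```python
# A retraction pair: x : A -> B is the inclusion of A = {0,1,2} into
# B = {0,...,9}; y : B -> A clamps everything back down into A.
A = range(3)
B = range(10)
x = lambda a: a              # x : A -> B, the inclusion
y = lambda b: min(b, 2)      # y : B -> A, the clamp

# y . x = 1_A, so the diagram commutes and A is a retract of B...
assert all(y(x(a)) == a for a in A)

# ...even though x . y is not the identity on B.
assert any(x(y(b)) != b for b in B)
```

Note the asymmetry: only the round trip that starts and ends at A is required to be the identity.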


I get mail: Brown's Gas and Perpetual Motion

Oct 17 2011 Published under Bad Physics

In the past, I've written about free-energy cranks like Tom Bearden, and I've made several allusions to the "Brown's gas" crackpots. But I've never actually written in any detail about the latter.

Brown's gas is a term used primarily by cranks for oxyhydrogen gas. Oxyhydrogen is a mixture of hydrogen and oxygen in a two-to-one molar ratio; in other words, it's exactly the product of electrolysis to break water molecules into hydrogen and oxygen. It's used as the fuel for several kinds of torches and welders. It's become a lot less common, because for most applications, it's just not as practical as things like acetylene torches, TIG welders, etc.

But for free-energy cranks, it's a panacea.

You see, the beautiful thing about Brown's gas is that it burns very nicely, it can be compressed well enough to produce a very respectable energy density, and when you use it, its only exhaust gas is water. If you look at it naively, that makes it absolutely wonderful as a fuel.

The problem, of course, is that it costs energy to produce it. You need to pump energy into water to divide it into hydrogen and oxygen; and then you need to use more energy to compress it in order to make it useful. Still, there are serious people who are working hard on things like hydrogen fuel cell power sources for cars - because it is an attractive fuel. It's just not a panacea.

But the cranks... Ah, the cranks. The cranks believe that if you just find the right way to burn it, then you can create a perfect source of free energy. You see, if you can just burn it so that it produces a teeny, tiny bit more energy than it cost to produce, then you've got free energy. You just run an engine - it keeps dividing the water into hydrogen and oxygen, and then burning it, producing more energy than you spent to divide it; and the only by-product is water vapor!

Of course, this doesn't work. Thermodynamics fights back: you can't get more energy out of recombining atoms of hydrogen and oxygen than you spent splitting molecules of water to get that hydrogen and oxygen. It's very simple: there's a certain amount of latent energy in that chemical bond. You need to pump in a certain amount of energy to break it - if I remember correctly, it's around 142 kilojoules per gram of hydrogen. When you burn hydrogen and oxygen to produce water, you get exactly that amount of energy back. It's a state transition - it's the same distance up as it is back down. It's like lifting a weight up a step on a staircase: it takes a certain amount of energy to move the weight up one step. When you drop it back down, it won't produce more energy falling than you put in to lift it.
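The bookkeeping is easy to sketch in Python (using the standard enthalpy of formation of liquid water, about 285.8 kJ per mole; the variable names are mine): splitting costs exactly what burning returns, and real equipment only adds losses on top.

```python
# Energy bookkeeping for the water <-> hydrogen + oxygen round trip.
MOLAR_MASS_WATER = 18.02      # grams per mole
ENTHALPY_KJ_PER_MOL = 285.8   # kJ to split one mole of liquid water

grams = 1000.0                # one kilogram of water
moles = grams / MOLAR_MASS_WATER

energy_to_split = moles * ENTHALPY_KJ_PER_MOL      # kJ you must pump in
energy_from_burning = moles * ENTHALPY_KJ_PER_MOL  # kJ you get back, at best

# The round trip nets exactly zero before losses -- no free energy.
assert energy_from_burning - energy_to_split == 0.0
```

In practice, electrolyzers, compressors, and engines are all less than 100% efficient, so the real round trip is a net loss.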

But the Brown's gas people won't let that stop them!

Here's an email I received yesterday from a Brown's gas fan, who noticed one of my old criticisms of it:

Hi Mark,

My name is Stefan, and I recently came across your analysis regarding split water technology to power vehicle. You are trying to proof that it makes no sense because it is against the physic low of energy conservation?

There is something I would like to ask you, if you could explain to me. What do you think about the sail boat zigzagging against the wind? Is it the classical example of perpetual motion?

If so, I believe that the energy conversion law is not always applicable, and even maybe wrong? Using for example resonance you can destroy each constructions with little force, the same I believe is with membrane HHO technology at molecular level?

Is it possible that we invented the law of impossibility known as the Energy Conservation Law and this way created such limitation? If you have some time please answer me what do you think about it? This World as you know is mostly unexplainable, and maybe we should learn more to better understand how exactly the Universe work?

The ignorance in this is absolutely astonishing. And it's pretty typical of my experience with the Brown's gas fans. They're so woefully ignorant of simple math and physics.

Let's start with his first question, about sailboat tacking. That's got some interesting connections to my biggest botch on this blog, my fouled up debunking of the downwind-faster-than-the-wind vehicle.

The tacking sailboat is a really interesting problem. When you think about it naively, it seems like it shouldn't be possible. If you let a leaf blow in the wind, it can't possibly move faster than the wind. So how can a sailboat do it?

The answer to that is that the sailboat isn't a free body floating in the wind. It's got a body and keel in the water, and a sail in the air. What it's doing is exploiting the difference in motion between the water and the air, and extracting energy from it. Mathematically, the water behaves as a source of tension, resisting the pressure of the wind against the sail, and converting it into motion in a different direction. Lift the body of the sailboat out of the water, and it can't do that anymore. Similarly, a boat can't accelerate by "tacking" against the water current unless it has a sail. It needs the two parts in different domains; then it can, effectively, extract energy from the difference between the two. But the most important point about a tacking sailboat - more important than the details of the mechanism that it uses - is that there's no energy being created. The sailboat is extracting kinetic energy from the wind, and converting it into kinetic energy in the boat. There's no energy being created or destroyed - just moved around. Every bit of energy that the boat acquires (plus some extra lost along the way) was removed from the wind.

So no, a sailboat isn't an example of perpetual motion. It's just a very typical example of moving energy around from one place to another. The sun heats the air/water/land; that creates wind; wind pushes the boat.

Similarly, he botches the resonance example.

Resonance is, similarly, a fascinating phenomenon, but it's one that my correspondent totally fails to comprehend.

Resonance isn't about a small amount of energy producing a large effect. It's about how a small amount of energy applied over time can add up to a large amount of energy.

There is, again, no energy being created. The resonant system is not producing energy. A small amount of energy is not doing anything more than a small amount of energy can always do.

The difference is that in the right conditions, energy can add in interesting ways. Think of a spring with a weight hanging on the end. If you apply a small steady upward force on the weight, the spring will move upward a small distance. When you release the force, the weight will fall to slightly below its apparent start point, and then start to come back up. It will bounce up and down until friction stops it.

But now... if, at the moment when it hits its highest position, you give it another tiny push, then it will move a bit higher. Now its bounce distance will be longer. If every time, exactly as it hits its highest point, you give it another tiny push, then each cycle, it will move a little bit higher. And by repeatedly applying tiny forces at the right time, the forces add up, and you get a lot of motion in the spring.

The key is, how much? And the answer is: take all of the pushes that you gave it, and add them up. The motion that you got from the resonant pattern is exactly the same as the motion you'd get if you applied the summed force all at once. (Or, actually, you'd get slightly more from the summed force; you lose some to friction in the resonant scenario.)
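To see that in action, here's a rough Python simulation (a crude semi-implicit Euler integrator; the spring constant, damping, and push size are arbitrary choices of mine, not measurements of anything): a damped spring given a tiny push whenever it's moving upward ends up swinging far higher than the same tiny force held steadily - but every bit of that motion comes from the accumulated pushes, not from nowhere.

```python
# A crude simulation of a damped spring-mass system. With resonant=True,
# a tiny push is applied only while the mass is moving upward (in phase
# with the motion); with resonant=False, the same tiny force is applied
# constantly. The instantaneous force is identical in both cases.
def peak_displacement(resonant, total_time=40.0, dt=0.001):
    k, m, c = 40.0, 1.0, 0.05          # spring constant, mass, light damping
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(total_time / dt)):
        push = 0.5 if (not resonant or v >= 0) else 0.0
        a = (push - k * x - c * v) / m  # F = push - kx - cv
        v += a * dt                     # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# The timed pushes add up: the resonant peak dwarfs the steady-force peak.
assert peak_displacement(True) > 10 * peak_displacement(False)
```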

Resonance can create absolutely amazing phenomena, where you can get results that seem astonishing; where forces that really seem like they're far too small to produce any result do something remarkable. The famous example of this is the Tacoma Narrows bridge collapse, where the wind happened to blow just right to create a resonant vibration which tore the bridge apart.

But there's no free energy there; no energy being created or destroyed.

So, Stefan... It's always possible that we're wrong about how physics works. It's possible that conservation of energy isn't a real law. It's possible that the world might work in a way where conservation of energy just appears to be a law, and in fact, there are ways around it, and that we can use those ways to produce free energy. But people have been trying to do that for a very, very long time. We've been able to use our understanding of physics to do amazing things. We can accelerate particles up to nearly the speed of light and slam them together. We can shoot rockets into space. We can put machines and even people on other planets. We can produce energy by breaking atoms into pieces. We can build devices that flip switches billions of times per second, and use them to talk to each other! And we can predict, to within a tiny fraction of a fraction of the breadth of a hair, how much energy it will take to do these things, and how much heat will be produced by doing them.

All of these things rely on a very precise description of how things work. If our understanding were off by the tiniest bit, none of these things could possibly work. So we have really good reasons to believe that our theories are, to a pretty great degree of certainty, accurate descriptions of how reality works. That doesn't mean that we're right - but it does mean that we've got a whole lot of evidence to support the idea that energy is always conserved.

On the side of the free energy folks: not one person has ever been able to demonstrate a mechanism that produces more energy than was put in to it. No one has ever been able to demonstrate any kind of free energy under controlled experimental conditions. No one has been able to produce a theory that describes how such a system could work that is consistent with observations of the real world.

People have been pushing Brown's gas for decades. But they've never, ever, not one single time, been able to actually demonstrate a working generator. No one has ever done it. No one has been able to build a car that actually works using Brown's gas without a separate power source. No one has built a self-sustaining generator. No one has been able to produce any mathematical description of how Brown's gas produces energy that is consistent with real-world observations.

So you've got two sides to the argument about Brown's gas. On one side, you've got modern physics, which has reams and reams of evidence, precise theories that are confirmed by observation, and unbelievable numbers of inventions that rely on the precision of those theories. On the other side, you've got people who've never been able to do a demonstration, who can't describe how things work, who can't explain why things appear to work the way that they appear to, who have never been able to produce a single working invention...

Which side should we believe? Given the current evidence, the answer is obvious.


Category Intuition by Example

Oct 09 2011 Published under Category Theory

The good thing about category theory is that it abstracts everything down to a couple of very simple, bare-bones concepts, and lets us play with those concepts in really fascinating ways. But the bad thing about category theory is that it abstracts everything down so much that it can be difficult to have any intuitive sense about what things actually mean.

We said that a category is, basically, a collection of objects connected by arrows, and a composition operation over the arrows. But what the heck is an object, and what is an arrow?

In some sense, the point of category theory is that it doesn't matter. There's a set of fundamental concepts that underlie all composable mapping-like things, and category theory focuses in on those fundamental concepts, and what they mean.

But still, to understand category theory, you need to be able to get some kind of intuitive handle on what's going on. And so, we'll take a look at a few examples.

The Category Set

One of the easiest examples is the category Set. The objects of the category are, obviously, sets; the morphisms are functions between sets; and composition is (obviously) function composition.

That much is easy. Now, what about the special arrows? What do the mono, epi, iso, endo, and auto arrows look like in the category of sets?

A monomorphism in the category of sets is an injective function - that is, a function which maps each value in its domain to a distinct value in its range. So you can always reverse the function: given a value in the range of the function, you can say specifically, exactly which value in the domain maps to it. In a monomorphism in the category Set, there may be values in the target of the morphism that aren't mapped to by any element of the domain; but if a value in the range is mapped to, you know which value in the domain mapped to it.

The key property of a monomorphism is that it's left-cancellative - that is, if \(f\) is a monomorphism, then \(f \circ g = f \circ h\) is only true if \(g = h\). We can cancel the left side of the composition. Why? Because we know that \(f\) maps each value in its domain to a unique value in its range. So \(f \circ g\) and \(f \circ h\) can only be the same if \(g\) and \(h\) map all of the same values in their domain to the same values in their range - that is, if they're the same function.
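Here's that cancellation property spot-checked in Python (the functions are arbitrary examples of my own): an injective f distinguishes different g and h, while a non-injective f can collapse them.

```python
domain = [0, 1, 2, 3]

f_inj = lambda x: 2 * x      # injective: distinct inputs -> distinct outputs
f_non = lambda x: x % 2      # not injective: collapses values

g = lambda x: x
h = lambda x: x + 2

# The non-injective f can't tell g and h apart: f_non . g == f_non . h on
# this domain even though g != h, so f_non is not left-cancellative.
assert all(f_non(g(x)) == f_non(h(x)) for x in domain)

# The injective f separates them: f_inj . g != f_inj . h, witnessing g != h.
assert any(f_inj(g(x)) != f_inj(h(x)) for x in domain)
```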

An epimorphism is an onto function - that is, a mapping \(f\) from a set \(X\) to a set \(Y\) in which, for every value \(y \in Y\), there's some value \(x \in X\) such that \(f(x) = y\). It's the dual of the notion of a monomorphism: for each value in the range, there is at least one value in the domain that maps to it. But it's not necessary that the domain values be unique - there can be multiple values in the domain that map onto a particular value in the range; but for every element of the range, there's always at least one value in the domain that maps to it. It's right-cancellative in the same way that the monomorphism is left-cancellative.

What about isomorphisms? If you combine the ideas of monomorphism and epimorphism, what you end up with is a function where every member of the domain is mapped onto a distinct value in the range, and every value in the range is mapped onto by some value in the domain: a one-to-one correspondence - that is, a bijection.
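In Python terms (with a small made-up example), an isomorphism in Set is a function with a two-sided inverse:

```python
X = [0, 1, 2, 3]
Y = [1, 3, 5, 7]

f = lambda x: 2 * x + 1          # a bijection from X onto Y
f_inv = lambda y: (y - 1) // 2   # its inverse

assert sorted(f(x) for x in X) == Y       # onto Y (epi), no collisions (mono)
assert all(f_inv(f(x)) == x for x in X)   # f_inv . f = 1_X
assert all(f(f_inv(y)) == y for y in Y)   # f . f_inv = 1_Y
```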

The Category Poset

Poset is the category of partially ordered sets. A partially ordered set is a set with a binary relation, \(\le\), which satisfies the following properties:

  1. Reflexivity: \(\forall a: a \le a\)
  2. Antisymmetry: if \(a \le b \land b \le a\), then \(a = b\).
  3. Transitivity: if \(a \le b \land b \le c\), then \(a \le c\).

In the category of partially ordered sets, the arrows are monotonic functions - that is, functions that preserve the partial ordering. So if \(x \le y\) and \(f\) is monotonic, then \(f(x) \le f(y)\).

The arrow composition operation, \(\circ\), is still just function composition. In terms of meaning, it's pretty much the same as it was for simple sets: a monomorphism is still an injective function; etc. But in Poset, it needs to be an injective function that preserves the ordering relation.

We know that monotonic functions fit the associative and identity properties of category arrows - they're still just functions, and monotonicity has no effect on those. But for posets, the composition is a bit tricky - we need to ensure that composition maintains the partial order. Fortunately, that's easy.

  • Suppose we have arrows \(f : A \rightarrow B\) and \(g : B \rightarrow C\). We know that \(f\) and \(g\) are monotonic functions.
  • So for any pair \(x\) and \(y\) in the domain of \(f\), we know that if \(x \le y\), then \(f(x) \le f(y)\).
  • Likewise, for any pair \(s, t\) in the domain of \(g\), we know that if \(s \le t\), then \(g(s) \le g(t)\).
  • So we just put those together: if \(x \le y\), then \(f(x) \le f(y)\); \(f(x)\) and \(f(y)\) are in the domain of \(g\), so if \(f(x) \le f(y)\), then \(g(f(x)) \le g(f(y))\). So the composite \(g \circ f\) is monotonic.
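The steps above can be spot-checked in Python (with two arbitrary monotonic functions of my own choosing):

```python
# If f and g each preserve <=, then so does their composite g . f.
f = lambda x: 2 * x + 1      # monotonic
g = lambda x: x ** 3         # monotonic (on the integers)

pairs = [(a, b) for a in range(-5, 6) for b in range(-5, 6) if a <= b]

assert all(f(a) <= f(b) for a, b in pairs)        # f preserves the order
assert all(g(a) <= g(b) for a, b in pairs)        # g preserves the order
assert all(g(f(a)) <= g(f(b)) for a, b in pairs)  # and so does g . f
```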

The Category Grp

Our last example is the category of groups, which is called Grp. In this category, the objects are (obviously) groups. What are the arrows? Group homomorphisms - that is, essentially, functions between the underlying sets that preserve the group structure.

What's an isomorphism in this category? That's a bit confusing, because we get stuck with some overloaded terminology. But it's basically just a group isomorphism - that is, a reversible, symmetry-preserving mapping from a group onto a second group, where the group symmetry of the first group is mapped onto the group symmetry of the second by the arrow. You should be able to work out the meanings of the other kinds of morphisms by using similar reasoning.

These are all, of course, very simple examples. They're all concrete categories, where the objects are (speaking very loosely) sets. There are categories - many very interesting ones - where the objects aren't sets, or even particularly set-like - but we'll get to those later. For now, the idea of objects as sets gives you an intuitive handle to grab onto.
