In the last post, I talked about what symmetry means. A symmetry is an immunity to some kind of transformation. But I left the idea of transformation informal and intuitive. In this post, I'm going to move towards formalizing it.

The group theoretic notion of immunity to transformation is defined in terms of group isomorphisms. A group isomorphism is a structure-preserving mapping between two different groups. If you know category theory, it's defined very easily: a group isomorphism is an iso arrow in the category of groups. Of course, that's a bit of a hand-wave, because I haven't explained just how to define the category of groups. To do that, I would need to explain what the objects of that category are, and what all of the arrows are.

The category of groups, commonly called **Grp**, contains all groups as objects. There are arrows between groups if and only if there is a homomorphism between the corresponding groups. Given two groups (A,+) and (B,×), a function f : A→B is a homomorphism if and only if ∀x,y∈A : f(x+y) = f(x)×f(y).
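To make the defining equation concrete, here's a quick sketch (my own toy example, not from the post): take A = (Z6, addition mod 6), B = (Z3, addition mod 3), and f(x) = x mod 3, and check the equation exhaustively.

```python
# Check the homomorphism law f(x + y) = f(x) × f(y) on a small example.
# Here A = (Z6, + mod 6), B = (Z3, + mod 3), and f(x) = x mod 3.
A = range(6)

def op_A(x, y):      # group operation of A: addition mod 6
    return (x + y) % 6

def op_B(x, y):      # group operation of B: addition mod 3
    return (x + y) % 3

def f(x):            # the candidate homomorphism
    return x % 3

# ∀ x, y ∈ A : f(x + y) = f(x) × f(y)
assert all(f(op_A(x, y)) == op_B(f(x), f(y)) for x in A for y in A)
```

Both "operations" happen to be addition here, but the definition works just as well when A and B have completely different operations.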

What that means, informally, is that the mapping from the group A to the group B preserves the group-theoretic properties of A: I can apply the group operation of A to any two elements before mapping to B, or I can apply the mapping to B first and then use B's group operation, and I'll get the same result. So the group properties of A are *embedded* in B. The structure-preserving property of the homomorphism means that f preserves the identity, that is, f(1_{A}) = 1_{B}; and it also means that the mapping preserves inverses: ∀x∈A, f(x^{-1}) = f(x)^{-1}.
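Continuing the same toy example (Z6 to Z3, f(x) = x mod 3, both identities being 0), identity and inverse preservation can be checked directly:

```python
# Identity and inverse preservation for f : (Z6, + mod 6) -> (Z3, + mod 3),
# f(x) = x mod 3. The identity element of each group is 0.
def f(x):
    return x % 3

def inv_A(x):        # inverse in Z6: the y with (x + y) % 6 == 0
    return (-x) % 6

def inv_B(x):        # inverse in Z3: the y with (x + y) % 3 == 0
    return (-x) % 3

assert f(0) == 0                                            # f(1_A) = 1_B
assert all(f(inv_A(x)) == inv_B(f(x)) for x in range(6))    # f(x^-1) = f(x)^-1
```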

On the other hand, there are some properties that aren't preserved by a homomorphism. A homomorphism isn't necessarily onto: while every member of A is mapped to a member of B, not every member of B is necessarily mapped onto by a member of A. So there can be members of B that have no corresponding value in A, and B's group operation doesn't have to preserve A's structure when you perform an operation using one of those values. The mapping also doesn't have to be one-to-one: multiple members of A can be mapped onto a single element of B. Any distinctions between those values are (obviously) lost in the mapping.
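Both failures can show up in a single map. A small sketch (again my own example): f(x) = 2x on Z6 satisfies the homomorphism law but is neither onto nor one-to-one.

```python
# A homomorphism that is neither onto nor one-to-one: f(x) = 2x on (Z6, + mod 6).
Z6 = range(6)

def f(x):
    return (2 * x) % 6

# It satisfies the homomorphism law...
assert all(f((x + y) % 6) == (f(x) + f(y)) % 6 for x in Z6 for y in Z6)
# ...but it is not onto: only {0, 2, 4} are ever hit...
assert set(map(f, Z6)) == {0, 2, 4}
# ...and not one-to-one: 0 and 3 collapse to the same element.
assert f(0) == f(3) == 0
```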

If we fix those two weaknesses, by requiring that the mapping be one-to-one and onto, then f is an *isomorphism* from A to B. If there's an isomorphism from A to B, that means that there is a fully structure-preserving mapping between A and B: A and B are equivalent. You cannot tell the difference between the two of them using their group operations.

To drop back to the category theory for a moment: saying that to be an isomorphism, a homomorphism must be one-to-one and onto is really just another way of saying that in the category of groups, **Grp**, where homomorphisms are arrows, an isomorphism is an iso-arrow. An arrow is iso in category theory when it's got a particular kind of relationship with identity; if there's an isomorphism between A and B, then A and B are *cancelable* by an arrow in the category: if there's an iso arrow from A to B, and there are homomorphisms from G to A and from B to H, then there's a homomorphism from G to B and from A to H; the step from A to B can be canceled by a composition that reduces it to identity.

An easy way to understand this is to look at one of the most canonical examples of group theory: permutation groups. Permutation groups go right to the heart of group theory; they're the first groups that were studied; in fact, you could reasonably make an argument that group theory was originally developed specifically to study permutation groups.

A permutation group is a group where you can re-arrange the elements without creating a visible difference. Suppose you've got a pentagram: that is, a graph made of five points, where every pair of points is connected by an edge, like the diagram to the right. This can form a group P, where the edges of the pentagram are the elements of a group. The group operation of P is edge-sums: if there's an edge ab and an edge bc, then ab+bc=ac.

An isomorphism from a pentagram group to itself is a permutation: that is, a mapping that re-arranges the edges. You can map any edge of the original group to any other edge. As long as the mapping is total, 1:1, and onto, after you've finished the mapping, you've still got a pentagram group. You can't tell the difference. You can, for example, switch A and C in the pentagram, which will reshuffle a bunch of edges - but when you're done, there's no way to detect that you've changed anything. It's still a pentagram; there are still edges ab, ac, ad, and ae; and those edges still get the same answers in edge-sums. It's indistinguishable from the original.
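Here's a small sketch of the swap-A-and-C example in Python. One caveat (which commenters below also raise): the edge-sum as described is only a partial operation, defined when two edges share a vertex; the sketch checks that the permutation preserves it wherever it is defined.

```python
from itertools import combinations

# Model the complete graph on {A, B, C, D, E}; edges are unordered pairs.
vertices = "ABCDE"
edges = {frozenset(p) for p in combinations(vertices, 2)}

def edge_sum(e1, e2):
    """The post's edge-sum: ab + bc = ac (only defined when edges share one vertex)."""
    shared = e1 & e2
    if len(shared) != 1:
        return None
    return (e1 | e2) - shared

# The permutation that swaps A and C, leaving B, D, E fixed.
swap = {"A": "C", "C": "A", "B": "B", "D": "D", "E": "E"}

def apply(perm, edge):
    return frozenset(perm[v] for v in edge)

# The permuted edge set is the same set of edges...
assert {apply(swap, e) for e in edges} == edges
# ...and edge-sums are preserved: perm(e1 + e2) = perm(e1) + perm(e2).
for e1, e2 in combinations(edges, 2):
    s = edge_sum(e1, e2)
    if s is not None:
        assert apply(swap, s) == edge_sum(apply(swap, e1), apply(swap, e2))
```

After the swap, every statement you can make with edge-sums still holds, which is exactly the "can't tell the difference" claim.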

Getting back to the original point, we can now say exactly what we mean by symmetry in group theory. A transformation of a group is a group isomorphism: an operation that changes the group, mapping it onto either another group, or a permutation of itself; and after that mapping, the result is indistinguishable from the original group.

There is, of course, a bit more to it. (Isn't there always?) I've said that the simple addition group can be considered a definition of mirror symmetry; but with what I've explained so far, there's no way of using the addition group to describe mirror transformations of anything but real numbers. Clearly a proper definition of mirror symmetry needs to be able to work on more than real numbers; and we'd certainly like to be able to have a single definition of it that works not just on numbers, but on *anything* where we can observe a kind of mirror symmetry. That's the topic of the next post: group actions.

I'm sorry, but this is very confused. You call the pentagram graph a group, saying that the elements thereof are the edges. But from your definition, you can't add AC and ED. Naughty! And what's the identity, anyhow?

"A permutation group is a group where you can re-arrange the elements without creating a visible difference."

I'm not sure what this is supposed to mean, but it sounds wrong. A permutation group is simply a group whose elements are permutations of some fixed set and where the group operation is composition (and a permutation of a set is just a bijection of the set with itself).
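The commenter's definition can be made concrete with a small sketch (my own example): the permutations of the set {0, 1, 2} under composition form the symmetric group S3, and the group axioms can be checked by brute force.

```python
from itertools import permutations

# A permutation group per the comment's definition: all bijections of the
# fixed set {0, 1, 2}, with composition as the group operation. This is S3.
S3 = list(permutations(range(3)))      # each tuple p represents the map i -> p[i]

def compose(p, q):                     # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)
# Closure, identity, and inverses:
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(compose(p, identity) == p for p in S3)
assert all(any(compose(p, q) == identity for q in S3) for p in S3)
```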

"Suppose you've got a pentagram: that is, a graph made of five points, where every pair of points is connected by an edge, like the diagram to the right. This can form a group P, where the edges of the pentagram are the elements of a group. The group operation of P is edge-sums: if there's an edge ab and an edge bc, then ab+bc=ac."

Again, this doesn't seem to make much sense. From your example I guess you've defined adding edges that share a vertex, but what about ab+cd? What is the identity element of the group?

@Mark: Some things here are wrong.

"There are arrows between groups if and only if there is a homomorphism between the corresponding groups."

Well, it's true, but I think you meant "The arrows from G to H are exactly the homomorphisms from G to H". What you wrote is basically vacuous, because there is ALWAYS a homomorphism and always an arrow.

Also your definition of "permutation group" is borked. It really just means a subgroup of the symmetric group on a set; up to isomorphism all groups are permutation groups. The article at wikipedia is quite good: http://en.wikipedia.org/wiki/Permutation_group

Finally your pentagram example doesn't seem quite right. You haven't defined what (for example) ab+ab is, and there doesn't seem to be an identity. (I guess you mean there to be an identity, which is "no edge", and ab+ab to be the identity; but note that then you can't "re-arrange the elements without creating a visible difference", because you have to map the identity to itself.)

@Omar @Matt:

I agree that Mark's definition of the "pentagram group" is a bit lacking. But one plausible interpretation is: take the free Abelian group generated by the edges, and mod out by the subgroup generated by the relations xy + yz = xz where x, y, and z are distinct vertices.

With less jargon: consider all formal sums of edges. Adding an edge ab to itself yields 2ab, adding an edge ab to an edge dc yields the formal sum "ab + dc". Now stipulate that some of these formal sums are equal, e.g. "ab + bc = ac" and so on. Otherwise, all the usual rules of addition apply (e.g. you can formally subtract edges as well, to get things like "dc - ab", and ab-ab=0).
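One way to play with this concretely (my own choice of representation, not necessarily Josh's exact construction): represent the edge {x, y} as the bit-vector e_x + e_y over GF(2), so that formal sums become XORs of 5-bit masks. The relation ab + bc = ac then holds automatically, as does ab + ab = 0.

```python
# Sketch: model the edge {x, y} of K5 as the GF(2) vector e_x + e_y,
# encoded as a 5-bit mask. Formal sums of edges become XORs of masks.
vertices = "ABCDE"

def edge(x, y):
    return (1 << vertices.index(x)) ^ (1 << vertices.index(y))

# The stipulated relation ab + bc = ac holds in this model:
assert edge("A", "B") ^ edge("B", "C") == edge("A", "C")
# ...and every element is its own inverse: ab + ab = 0.
assert edge("A", "B") ^ edge("A", "B") == 0
```

In this model "subtracting" an edge is the same as adding it, which matches the observation below that every element has order two.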

This interpretation is, I believe, consistent with his later statements about automorphisms of the "pentagram group," as the group I've constructed here retains the symmetries inherited from the complete graph K_5.

PS - I put "pentagram group" in quotes because I've never seen this group nor heard its name before, and a few quick Google searches don't give any promising leads either.

PPS - If you know some group theory, you can quickly come to the conclusion that the above group I defined is in fact the Abelian group consisting of 4-dimensional vectors over the integers modulo 2 (the 10 edge generators satisfy 6 independent relations, and every element has order two). It then becomes clear that this group has many more symmetries than those inherited from the complete graph K_5. If this is indeed the group Mark meant to define, this might explain why it doesn't have a name other than (Z/2Z)^4. (Although (Z/2Z)^2 is the "Klein four group," so you never know.)

Have you done a post on category theory? I felt a bit lost in the definitions.

Many thanks again Mark for another instructive article.

However, regarding the explanation of an iso arrow, isn't it easier to say that an arrow f : A -> B in a category is an isomorphism if and only if there is a reverse arrow g : B -> A that composes with it to produce identity (g o f = id_A and f o g = id_B); i.e., an iso arrow is invertible? The example of A, B, G, and H in the article does not demonstrate an isomorphism - the composition property holds for all arrows, and the "reducing to identity" just verbally restates that the arrow is an isomorphism (rather than demonstrating what the arrow structure of an isomorphism looks like).

[There are arrows between groups if and only if there is a homomorphism between the corresponding groups. Given two groups, (A,+) and (B,×), a function f : A→B is a homomorphism if and only if ∀ x,y ∈A : f(x+y) = f(x)×f(y).]

You mean this as different than the iso arrow above, right? You mean the homo arrow?

[The group operation of P is edge-sums: if there's an edge ab and an edge bc, then ab+bc=ac.]

I can fairly easily see that associativity will hold, since (gh+hi)+ij=gi+ij=gj, and gh+(hi+ij)=gh+hj=gj for edge summation. But what do we have for our identity? We have ab+bx=ax, and in such a case we would have bb as our identity, since ab+bb=ab. But bc+cc=bc, thus cc equals our identity. But according to group theory, the identity is unique. Did I misunderstand edge-summation here or something? Also, supposing we have bb as our identity, then ab+by=bb... which works out as impossible for the edge summation, since we necessarily have a for the first part. I think of edge summation like matrix composition. Did I misunderstand your term?

[It's indistinguishable from the original.]

You appear to keep trying to say something like, if we replace each corresponding letter with its correlate.

Josh,

[With less jargon: consider all formal sums of edges. Adding an edge ab to itself yields 2ab, adding an edge ab to an edge dc yields the formal sum "ab + dc". Now stipulate that some of these formal sums are equal, e.g. "ab + bc = ac" and so on. Otherwise, all the usual rules of addition apply (e.g. you can formally subtract edges as well, to get things like "dc - ab", and ab-ab=0).]

In which case, we have ({A, B, C, D, E, AB, AC..., AB+AB, AB+AC, ... AB+AB+AB, AB+AB+AC... 0, -A, -B, -C, ...}, +) as our structure? We would then have 0 as part of our "pentagram graph", or we have a number joined to our graph for the group structure, as well as all the negatives of the edges joined to the graph for the group structure, right? If so, it looks like you don't really deal with a group in the graph, but you can impose numbers and 'negative edges' on the graph to get a group structure.

Doug,

"In which case, we have ({A, B, C, D, E, AB, AC..., AB+AB, AB+AC, ... AB+AB+AB, AB+AB+AC... 0, -A, -B, -C, ...}, +) as our structure?"

Almost, but not quite. I was thinking of just using the edges of the graph, and not the vertices. So: what you said, but without A,B,C,D,E, or their negatives.

"We would then have 0 as part of our "pentagram graph", or we have a number joined to our graph for the group structure, as well as all the negatives of the edges joined to the graph for the group structure, right?"

I don't know what you mean by "have 0 as part of our 'pentagram graph'" or "a number joined to our graph". The group can be defined abstractly, without reference to the graph. The graph is simply a useful mnemonic device for remembering the rules for which group elements are equal, such as AB + BC = AC. But this can just as easily be remembered by a good choice of notation.

"If so, it looks like you don't really deal with a group in the graph, but you can impose numbers and 'negative edges' on the graph to get a group structure."

I don't know what you mean by "a group *in* the graph", but this sentence seems right on the money. The group I've defined has been almost entirely divorced from the graph it "came from."

[Almost, but not quite. I was thinking of just using the edges of the graph, and not the vertices. So: what you said, but without A,B,C,D,E, or their negatives.]

But, we have -AB, -AC, ..., right?

[I don't know what you mean "have 0 as part of our 'pentagram graph'" or "a number joined to our graph".]

We have 0 as part of the algebraic structure. I meant to imply we don't analyze the graph or get a group structure from it, which you've agreed with.