If you're looking at groups, you're looking at an abstraction of the idea of numbers, one that tries to reduce it to a minimal set of properties. As I've already explained, a group is a set of values with one operation, satisfying several simple properties. From that simple structure comes the basic mathematical concept of symmetry.

Once you understand some of the basics of groups and symmetry, you can move in two directions. You can ask "What happens if I *add* something?"; or you can ask "What happens if I *remove* something?".

You can either add operations - which leads you to a two-operation structure called a ring; or you can add properties - where the simplest step leads you to something called an *abelian group*. When it comes to removing, you can remove properties, which leads you to a simpler structure called a *groupoid*. Eventually, I'm going to follow both the upward and the downward paths. For now, we'll start with the upward path, since it's easier.

Building up from groups, we can progress to *rings*. A group captures one simple property of a set of number-like objects. A ring brings us closer to capturing the structure of the system of numbers. The way that it does this is by adding a second operation. A group has one operation with symmetric properties; a ring adds a second symmetric operation, with a well-defined relationship between the two operations.

A ring is a set of values R, along with two operations, "+" and "×". We'll generally call the two operations addition and multiplication, although it's important to remember that they are *not* the standard numeric add and multiply that we're familiar with: these are abstract operations, and we can construct rings in which the "add" operation has very little similarity with what we think of as addition.

The first operation in the ring is addition. In a ring, (R,+) is an abelian group, whose identity value is written "0", and where the inverse of a value a∈R with respect to "+" is written "-a".

The second operation in a ring, multiplication, doesn't have to have all of the group operation properties. (Remember, what we're trying to do is look at very simple structures, so we don't want to add more requirements than we absolutely have to; the more abstract we can leave the definition, the more structures the definition can encompass. We want to add just enough to be able to explore the concepts.) The only requirements on the second operation are that it is associative and has an identity element. There's no requirement that it be commutative, and there's no requirement that there be multiplicative inverses.

Finally, the relationship between the two operations has to satisfy one condition: the multiplication operation must be distributive over the addition operation. To be formal: ∀a,b,c∈R, a×(b+c) = a×b + a×c ∧ (a+b)×c = a×c + b×c.
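The definition above can be checked by brute force on a small finite ring. Here's a sketch that verifies every ring axiom for the integers mod 6 (the helper names `add` and `mul` are mine, not from the post):

```python
from itertools import product

# Brute-force check of the ring axioms on Z mod 6.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# (R, +) is an abelian group with identity 0 and inverses:
assert all(add(a, b) == add(b, a) for a, b in product(R, R))
assert all(add(a, 0) == a for a in R)
assert all(any(add(a, b) == 0 for b in R) for a in R)

# Multiplication is associative and has identity 1:
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a, b, c in product(R, R, R))
assert all(mul(a, 1) == a == mul(1, a) for a in R)

# Multiplication distributes over addition, on both sides:
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a, b, c in product(R, R, R))
assert all(mul(add(a, b), c) == add(mul(a, c), mul(b, c)) for a, b, c in product(R, R, R))
print("Z mod 6 satisfies the ring axioms")
```

Note that Z mod 6 has no multiplicative inverse for 2 or 3, which is exactly why the ring definition doesn't demand one.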

So with all of this stuff, what have we really described? We've created an abstract definition of something that is very similar to the integers. The integers with addition form a group. But the integers can't form a group with multiplication, because there are no multiplicative inverses in the integers. So we've created a structure that lets us describe things that behave like the integers.

So, obviously, the integers with integer addition and multiplication are a ring. So are the real numbers, with real addition and multiplication, and the complex numbers with complex addition and multiplication. Even single-variable polynomials form a ring using simple polynomial addition and multiplication.
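The polynomial example is easy to make concrete: represent a single-variable polynomial as a list of coefficients [c0, c1, c2, ...] meaning c0 + c1·x + c2·x² + ..., and the ring operations fall out directly. A sketch (the helper names `poly_add` and `poly_mul` are my own):

```python
def poly_add(p, q):
    # Pad the shorter coefficient list with zeros, then add coefficientwise.
    length = max(len(p), len(q))
    p = p + [0] * (length - len(p))
    q = q + [0] * (length - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # The x^k coefficient of the product is the sum of p_i * q_j over i + j = k.
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (1 + x) + (2 + 3x) = 3 + 4x
print(poly_add([1, 1], [2, 3]))  # [3, 4]
# (1 + x) * (1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```

The additive identity is the zero polynomial and the multiplicative identity is the constant 1, just as the ring definition requires.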

More interestingly, you can form rings from some very different things, which turn out to share this basic structure. For example, if you take an arbitrary set S, then the powerset of S (that is, the set of all subsets of S) forms a ring with symmetric set difference as the addition operation, and intersection as the multiplication operation. This one has amazed me ever since the first time I saw it. When you study groups and rings, you tend to see a lot of examples that all, in some sense, satisfy your intuition for either addition or multiplication. Then you encounter this - and it really hits home that this abstraction process has given you something that's really quite different: these basic properties of addition and multiplication on the integers can describe very different things that don't fit your intuition well at all.
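The powerset ring can be checked directly in a few lines, since Python's set operators already provide symmetric difference (`^`) and intersection (`&`). A sketch for a small S (the variable names are my own):

```python
from itertools import combinations, product

S = frozenset({1, 2, 3})
# Enumerate the powerset of S.
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# The empty set is the additive identity, and every set is its own
# additive inverse (A ^ A = {}); S itself is the multiplicative identity.
assert all(a ^ frozenset() == a for a in subsets)
assert all(a ^ a == frozenset() for a in subsets)
assert all(a & S == a for a in subsets)

# Intersection distributes over symmetric difference:
assert all(a & (b ^ c) == (a & b) ^ (a & c)
           for a, b, c in product(subsets, repeat=3))
print("the powerset of S forms a ring")
```

Note the oddity relative to numeric intuition: addition here is its own inverse, so "subtraction" and "addition" coincide.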

There are also a ton of example rings that are easier to understand in a category-theoretic model. For example, if you have an abelian group (G,+), then the set of endomorphisms of (G,+) forms a ring, with pointwise addition of endomorphisms as the addition operation and composition as the multiplication operation. I won't go into that in any more detail here, because I'm eventually going to get into looking at a lot of this abstract algebra material in terms of category theory, so I'll save it for then.

The powerset example was indeed amazing.

There are some other typos (such as "there are [no] multiplicative inverses in the integers") that don't bother me too much, but this abstract category example I can't make heads or tails of. Which is the addition operation (composition?) and which is the ring multiplication (morphism addition?)?

Oops, which is heads and which is tails would come from the distributive property, of course. Forget my stupid question.

The powerset example gets slightly less amazing once you realize that the powerset of S can be thought of as the set of functions from S to {0,1}, mapping a subset A of S to the function that is 1 on A and 0 outside A. But {0,1} can be thought of as the integers modulo 2, which is a ring - a field even - and so the set of functions S→{0,1} inherits a ring structure by doing pointwise addition and multiplication. Now it is an easy exercise to check that the resulting operations correspond to symmetric difference and intersection.
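The commenter's "easy exercise" can be done mechanically: identify each subset with its indicator function, do pointwise arithmetic mod 2, and compare the result to the set operations. A sketch (helper names `indicator` and `from_indicator` are mine):

```python
from itertools import combinations

S = [1, 2, 3]

def indicator(A):
    # The indicator function of A as a tuple over the elements of S.
    return tuple(1 if x in A else 0 for x in S)

def from_indicator(f):
    # Recover the subset from an indicator tuple.
    return frozenset(x for x, bit in zip(S, f) if bit)

subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]
for A in subsets:
    for B in subsets:
        fa, fb = indicator(A), indicator(B)
        pointwise_sum = tuple((a + b) % 2 for a, b in zip(fa, fb))
        pointwise_prod = tuple(a * b for a, b in zip(fa, fb))
        assert from_indicator(pointwise_sum) == A ^ B   # symmetric difference
        assert from_indicator(pointwise_prod) == A & B  # intersection
print("pointwise mod-2 arithmetic matches the set operations")
```

In other words, addition mod 2 is 1 exactly when the element is in one set but not both (symmetric difference), and multiplication is 1 exactly when it's in both (intersection).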

@#3: well, obviously this is indeed slightly less amazing, isn't it?

So would a vector space be considered a ring? I assume it has a more specific classification, since it is composed of 2 sets (scalar and vector) and 2 operations.

Jon L:

A vector space would not be a ring. In order to be a ring, you would need to multiply two vectors and get a third vector. That usually does not happen in a vector space. (Yes, I know about cross products, but they only occur in 3-dimensional Euclidean space, and they would not have an identity associated with them.)

If you consider R, the ring of real numbers (it's actually a field, a much more restrictive and rich type of ring), then a vector space over R would be considered something called a module over a ring.

The best example of a non-commutative ring is the collection of n×n matrices over R. (Or maybe Q, the rationals. Take your pick.) The matrices have all of the properties Mark alluded to, and it is easy to see they are non-commutative. They are a terrific example to keep in mind when thinking about rings.
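The non-commutativity the commenter mentions shows up already for 2×2 matrices. A minimal sketch, using plain nested lists (the helper name `mat_mul` is mine):

```python
def mat_mul(A, B):
    # Standard row-by-column multiplication for 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]
assert mat_mul(A, B) != mat_mul(B, A)
```

The identity matrix serves as the multiplicative identity, and matrix addition makes the set an abelian group, so all the ring axioms hold even though multiplication doesn't commute.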

#6: "I know about cross products but they only occur in 3 dimensional euclidean space"

-- and in 7 dimensional Euclidean space!

They can show up in any dimension if you don't insist that it be a binary operation.

Jon L: Pat F gave a good example of a vector space that is a ring - the ring of n×n matrices. I don't know why he answered your question in the negative then gave a perfect example that answered it in the positive.

Anyway, a vector space that is also a ring is called an algebra.

[You can either add operations - which can lead you to a two-operation structure called a ring; or you can add properties - in which the simplest step leads you to something called an abelian group.]

I'd consider adding idempotency simpler, or at least as simple. In other words a group where a+a=a.

That might be simple, but it's very restrictive. Only the trivial group satisfies idempotency (I think). Any operation which limits you to the trivial group isn't very useful. ^_^

[For example, if you take an arbitrary set S, then the powerset of S (that is, the set of all subsets of S) form a ring with symmetric set difference as the addition operation, and intersection as the multiplication operation.]

Also, S serves as both the multiplicative and additive identity, so 1=0 in that ring.

Please ignore the previous comment. {} is the additive identity, not S.

Actually, it's easy to prove that in any ring with more than one element, 1 does not equal 0.

Proof: Let a be any element.

0a = 0a + 0                (def'n of 0)
   = 0a + (a + (-a))       (def'n of (-a))
   = (0a + a) + (-a)       (associativity)
   = (0a + 1a) + (-a)      (def'n of 1)
   = (0 + 1)a + (-a)       (distributivity)
   = (1)a + (-a)           (def'n of 0)
   = a + (-a)              (def'n of 1)
   = 0                     (def'n of (-a))

Now let a be any nonzero element. Then 1a = a and 0a = 0, so 1a != 0a. Thus 1 != 0.
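The proof's conclusion - that 0a = 0 always, and that 1 ≠ 0 whenever the ring has a nonzero element - can be sanity-checked by brute force in the rings Z mod n:

```python
# Verify that 0 * a = 0 for every element, and that 1 != 0,
# in Z mod n for several small n > 1.
for n in range(2, 10):
    assert all((0 * a) % n == 0 for a in range(n))
    assert 1 % n != 0 % n  # 1 and 0 are distinct elements of the ring
print("0a = 0 and 1 != 0 hold in Z mod n for n = 2..9")
```

The excluded case n = 1 is the trivial ring {0}, where 1 = 0 is forced - exactly the one-element exception the proof carves out.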