While doing some reading on rings, I came across some interesting stuff about monoids and syntax. That's right up my alley, so I decided to write a post about it.

We start by defining a new property for monoids - a kind of equivalence relation called a *monoid congruence*. A monoid congruence defines equivalence classes within the set of values in a monoid. The monoid congruence relation is generally written as "~", and it's a relation between two values in a monoid: ~ ⊆ M×M. A monoid congruence has all of the usual properties of an equivalence relation: it's symmetric, reflexive, and transitive. It also respects the monoid operator: if a, b, c, and d are all members of a monoid M, a~c, and b~d, then aºb ~ cºd.
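To make that concrete, here's a small brute-force check in Python (my own sketch, not from the post; the names `op` and `cong` are just illustrative): take the monoid of the integers 0..11 under addition mod 12, and relate a~b whenever a ≡ b (mod 4).

```python
# Sketch: verify that "a ~ b iff a ≡ b (mod 4)" is a monoid congruence
# on the monoid ({0..11}, + mod 12).
M = range(12)
op = lambda a, b: (a + b) % 12          # the monoid operator
cong = lambda a, b: (a - b) % 4 == 0    # the candidate relation "~"

# An equivalence relation: reflexive, symmetric, transitive.
assert all(cong(a, a) for a in M)
assert all(cong(b, a) for a in M for b in M if cong(a, b))
assert all(cong(a, c) for a in M for b in M for c in M
           if cong(a, b) and cong(b, c))

# Respects the operator: a~c and b~d imply aºb ~ cºd.
assert all(cong(op(a, b), op(c, d))
           for a in M for b in M for c in M for d in M
           if cong(a, c) and cong(b, d))
print("~ is a monoid congruence on (Z12, +)")
```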

If we have a monoid congruence relation "~", then we can obviously use it to create a collection of equivalence classes, called *congruence classes*. If a is a member of a monoid (M,º), then we can define the congruence class [a] = { x∈M | x~a } - that is, the set of values in M that are related to a by "~".

We can then define a new operation on the congruence classes: if a and b are members of a monoid (M,º), then we can define an operation "*" by [a]*[b] = [aºb]. If you look at the set of congruence classes of M given by "~", together with the operation "*", what do you get? *Another monoid*, which we call a *quotient monoid*. The quotient monoid created by a particular congruence relation "~" is written M/~.
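Here's a brute-force construction of a quotient monoid in Python (my own sketch, not from the post): take the monoid of the integers 0..11 under addition mod 12 with the congruence a ~ b iff a ≡ b (mod 4). Its congruence classes form the quotient monoid, and [a]*[b] = [aºb] is well-defined precisely because "~" respects the operator.

```python
# Build the quotient monoid M/~ for M = ({0..11}, + mod 12) and
# the congruence a ~ b iff a ≡ b (mod 4).
M = range(12)
op = lambda a, b: (a + b) % 12

cls = lambda a: frozenset(m for m in M if (m - a) % 4 == 0)  # [a]
classes = {cls(a) for a in M}
print(len(classes))  # 4 congruence classes: [0], [1], [2], [3]

def star(ca, cb):
    # [a]*[b] = [aºb], computed from arbitrary representatives.
    a, b = next(iter(ca)), next(iter(cb))
    return cls(op(a, b))

# Well-definedness: the product doesn't depend on which
# representatives we pick from each class.
for ca in classes:
    for cb in classes:
        assert len({cls(op(a, b)) for a in ca for b in cb}) == 1
```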

Now, we take a new step, and this is where it starts to get really interesting. Remember how I described the relationship between monoids and computing devices? We're going to start taking advantage of that. If we take a monoid, and use concatenation as the notation for the monoid operator, then we can describe products as strings: "abc" is (aºb)ºc.

In automata theory, we talk about formal languages, which are specific sets of strings of symbols. A formal language is usually described by some system of rules that describe how the strings in the language are generated. For a very simple example, we can use a regular expression like "a⁺b*" to represent the language consisting of one or more "a"s, possibly followed by any number of "b"s. We can talk about more complicated languages by using grammars. For example, the classic first language that can't be described by simple regular expressions is the following:

X → (X)
X → XX
X → ε

This language consists of sequences of open and close parens where the number of opens and closes is balanced: strings like "()", "(())", "((()(()(()))))", and so on.
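A membership test for this language is easy to write, even though no regular expression can express it. Here's a quick Python sketch (my own, with the illustrative name `balanced`) that just tracks nesting depth:

```python
def balanced(s: str) -> bool:
    # A string of parens is in the language iff the nesting depth
    # never goes negative and ends at exactly zero.
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:           # a ")" with no matching "("
            return False
    return depth == 0

print(balanced("(())"))              # True
print(balanced("((()(()(()))))"))    # True
print(balanced(")("))                # False
```

The unbounded depth counter is exactly the memory a finite automaton lacks, which is why regular expressions can't capture this language.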

If we think of a monoid as a meta-computing machine, then we can talk about the kinds of languages that can be accepted by a particular monoid - that is, if we're working with a machine built from the monoid, what kinds of languages can that machine accept? It turns out that we can define that set using monoid quotients!

We can start by defining a *syntactic quotient*, using concatenation to denote the monoid operator. If M is a monoid, and S is a subset of M, then we can define left and right *syntactic quotients* of S:

- If m∈M, then the *right syntactic quotient* of S by m, written S/m, is { a∈M | am∈S }.
- If n∈M, then the *left syntactic quotient* of S by n, written n\S, is { b∈M | nb∈S }.
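Here's a sketch of both quotients in Python (my own illustration; since the free monoid {a,b}* is infinite, I truncate to short words, so these sets are only finite approximations of the true quotients). The language S is the a⁺b* example from earlier.

```python
import re
from itertools import product

def words(maxlen, alphabet="ab"):
    # All strings over the alphabet with length <= maxlen.
    for n in range(maxlen + 1):
        for w in product(alphabet, repeat=n):
            yield "".join(w)

# S = a⁺b*: one or more a's followed by any number of b's.
in_S = lambda w: re.fullmatch(r"a+b*", w) is not None

# Right quotient S/m = { a | am ∈ S }; left quotient n\S = { b | nb ∈ S }.
right_quot = lambda m, maxlen=3: {a for a in words(maxlen) if in_S(a + m)}
left_quot = lambda n, maxlen=3: {b for b in words(maxlen) if in_S(n + b)}

print("a" in right_quot("b"))   # True:  "a" + "b" = "ab" ∈ S
print("" in right_quot("b"))    # False: "b" alone ∉ S
print("b" in left_quot("a"))    # True:  "a" + "b" = "ab" ∈ S
```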

We can then use the syntactic quotients to define congruence relations, exactly as we did above. We get two congruence relations: the *right syntactic equivalence* ~_s, which is the congruence relation induced by the right syntactic quotient, and the *left syntactic equivalence* s~, which is the congruence relation induced by the left syntactic quotient. And finally, we can define a total syntactic congruence (also called a double-sided congruence), written ~_{s}, using both:

a ~_{s} b ⇔ (∀x,y∈M: xay∈S ⇔ xby∈S)
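The double-sided congruence can also be approximated by brute force (my own sketch; it checks contexts only up to a length bound, so in general a disagreement could show up at longer contexts). For S = a⁺b* in the free monoid {a,b}*:

```python
import re
from itertools import product

def words(maxlen, alphabet="ab"):
    for n in range(maxlen + 1):
        for w in product(alphabet, repeat=n):
            yield "".join(w)

in_S = lambda w: re.fullmatch(r"a+b*", w) is not None

def syn_equiv(a, b, maxlen=3):
    # a ~_s b iff xay ∈ S exactly when xby ∈ S, for all contexts (x, y).
    return all(in_S(x + a + y) == in_S(x + b + y)
               for x in words(maxlen) for y in words(maxlen))

print(syn_equiv("aa", "a"))   # True: extra a's never change membership
print(syn_equiv("ab", "a"))   # False: x="", y="a" gives "aba" ∉ S, "aa" ∈ S
```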

So, here's where the connection to computable languages comes in. The syntactic quotient, following the procedure above, defines a new monoid - the syntactic monoid of S, which we call M(S). The syntactic monoid of S is a monoid which *accepts* the set S. We call S a *language* - it consists of the strings of symbols that, concatenated together by M's monoid operator, result in values in S. So we've finally gotten to the point where we can really see how a monoid represents a computing device. In computation theory, we can describe every computing device in terms of languages. We can likewise describe computable languages in terms of monoids.

In particular, we can define what it means for a language to be recognizable by a monoid M. Take a language L⊆M. If the set of quotients {L/m | m∈M} is finite, then L is recognizable by M.
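We can watch this finiteness appear (and fail) in code. This is my own truncated sketch: it only approximates the quotients using short words, but the contrast between the regular language a⁺b* and the balanced-paren language still shows up.

```python
import re
from itertools import product

def words(maxlen, alphabet):
    for n in range(maxlen + 1):
        for w in product(alphabet, repeat=n):
            yield "".join(w)

def balanced(s):
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def quotients(in_L, alphabet, wordlen=4, quotlen=4):
    # The set of (truncated) right quotients { L/m } for short m.
    return {frozenset(a for a in words(quotlen, alphabet) if in_L(a + m))
            for m in words(wordlen, alphabet)}

in_regular = lambda w: re.fullmatch(r"a+b*", w) is not None

# a⁺b* has only three distinct right quotients (the language itself,
# a*, and the empty set), so it is recognizable.
print(len(quotients(in_regular, "ab")))          # 3
# For balanced parens, each extra run of unmatched ")"s produces a
# new quotient, so the count keeps growing as the bounds grow.
print(len(quotients(balanced, "()", quotlen=6)))
```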

There's more to it than this - transition monoids are particularly interesting. But that's enough for this post. We'll save it for later.

You have a typo in your definition of left quotient: you say m when you mean n.

It's very neat: the algebraic structure ends up giving you a Myhill-Nerode style theorem. As a relative uninitiate, I'd be interested to see how quotients of different equivalence relations give you languages in different levels of the Chomsky hierarchy, e.g., how to define the bicyclic monoid for the Dyck language. Do you end up with similarly tidy relations between the quotient submonoid and the language class?

Nice post!

I'm disappointed that this thread is not getting more comments. You've taken the abstract stuff and used it to clarify some other abstract stuff -- which happens to lie at the beating heart of Computation. Come on, guys and gals, you're reading this RIGHT NOW on a computer. Here's your chance to understand more deeply how that is possible. ** end rant **

Re: sequences of open and close parens where the number of opens and closes is balanced: strings like "()", "(())", "((()(()(()))))", and so on.

I'd like to point out that when you enumerate these sequences, you get the beautiful Catalan Numbers. A pretty page on that is:

Stanley, Richard and Weisstein, Eric W. "Catalan Number." From MathWorld--A Wolfram Web Resource.

The Catalan numbers show up all over the place in math, of course. I'm sure you knew that. They even showed up in my research in combinatorial game theory describing numbers of binary trees given a certain number of nodes, I believe. It was quite good fun finding that out! (And at first it seemed useful for determining complexity of computation for the chess variant referred to as the Pawn Game, though I later decided it was not.)
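As a quick check of the correspondence (my own sketch): enumerate paren strings of length 2n, count the balanced ones, and compare with the closed form C_n = C(2n, n)/(n+1).

```python
from math import comb
from itertools import product

def balanced(s):
    # Balanced iff nesting depth never goes negative and ends at zero.
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

catalan = lambda n: comb(2 * n, n) // (n + 1)

for n in range(6):
    # Count balanced strings of length 2n by brute force.
    count = sum(balanced("".join(w)) for w in product("()", repeat=2 * n))
    assert count == catalan(n)

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```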

As for the stuff in the post, most of it I don't understand, but probably because I haven't been reading up on the monoid posts as much. I did have to blink in amazement a couple times. I like seeing theory and practice collide like that. I'm probably going to go back and read all the monoid posts now, just to see what this all means more fully.

Is it then the case that the set of equivalence classes defined by the minimal automaton A for a language L, coupled with the concatenation operation (.), forms the syntactic monoid for L, i.e., M = (eqclasses(A), .)?