Over the weekend, a couple of my Foursquare coworkers and I were chatting on Twitter, and one of my smartest coworkers, a great guy named Jorge Ortiz, pointed out that type inference in Scala (the language we use at Foursquare, and also pretty much my favorite language) is Turing complete.

Somehow, I hadn't seen this before, and it absolutely blew my mind. So I asked Jorge for a link to the proof. The link he sent me is a really beautiful blog post. It doesn't just prove that Scala type inference is Turing complete, but it does it in a remarkably beautiful way.

Before I get to the proof, what does this mean?

A system is Turing complete when it can perform *any* possible computation that could be performed on any other computing device. The Turing machine is, obviously, Turing complete. So is lambda calculus, the Minsky machine, the Brainfuck computing model, and the Scala programming language itself.

If type inference is Turing complete, then that means that you can write a Scala program where, in order to type-check the program, the compiler has to run an arbitrary program to completion. It means that there are, at least theoretically, Scala programs where the compiler will take forever - *literally forever* - to determine whether or not a given program contains a type error. Needless to say, I consider this to be a bad thing. Personally, I'd really prefer to see the type system be less flexible. In fact, I'd go so far as to say that this is a fundamental error in the design of Scala, and a big strike against it as a language. Having a type-checking system which isn't guaranteed to terminate is bad.

But let's put that aside: Scala is pretty entrenched in the community that uses it, and they've accepted this as a tradeoff. How did the blog author, Michael Dürig, prove that Scala type checking is Turing complete? By showing how to implement a variant of lambda calculus called SKI combinator calculus entirely with types.

SKI calculus is seriously cool. We know that lambda calculus is Turing complete. It turns out that for any lambda calculus expression, there's a way of rewriting it without any variables, and without any lambdas at all, using three canonical master functions. If you've got those three, then you can write anything, *anything at all*. The three are called S, K, and I.

- The S combinator is: \(S x y z = x z (y z)\).
- The K combinator is: \(K x y = x\).
- The I combinator is: \(I x = x\).
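To get a feel for the reduction rules before we go type-level, here's a minimal value-level sketch of the three combinators as ordinary Scala functions (the names and types here are my own illustration, not part of the proof):

```scala
object SKI {
  // I x = x
  def I[A](x: A): A = x

  // K x y = x
  def K[A, B](x: A)(y: B): A = x

  // S x y z = x z (y z)
  def S[A, B, C](x: A => B => C)(y: A => B)(z: A): C = x(z)(y(z))

  // S K K z = K z (K z) = z, so S K K behaves exactly like I.
  def skk(z: Int): Int = S[Int, Int => Int, Int](K[Int, Int => Int])(K[Int, Int])(z)

  def main(args: Array[String]): Unit = {
    println(I(42))   // 42
    println(K(1)(2)) // 1
    println(skk(7))  // 7
  }
}
```

The `skk` definition is the classic sanity check: applying S to two Ks gives you back the identity function.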

They come from intuitionistic logic, where they're fundamental axioms that describe how intuitionistic implication works. K is the rule \(A \Rightarrow (B \Rightarrow A)\); S is the rule \((A \Rightarrow (B \Rightarrow C)) \Rightarrow ((A \Rightarrow B) \Rightarrow (A \Rightarrow C))\); and I is \(A \Rightarrow A\).

Given any lambda calculus expression, you can rewrite it as a chain of SKIs. (If you're interested in seeing how, please just ask in the comments; if enough people are interested, I'll write it up.) What the author of the post did is show how to implement the S, K, and I combinators in Scala types.

```scala
trait Term {
  type ap[x <: Term] <: Term
  type eval <: Term
}
```

He's created a type `Term`, which is the supertype of any computable fragment written in this type-SKI. Since everything is a function, all terms have to have two methods: one of them is a one-parameter "function" which applies the term to a parameter, and the second is a "function" which simplifies the term into canonical form.

He implements the S, K, and I combinators as traits that extend `Term`. We'll start with the simplest one, the `I` combinator.

```scala
trait I extends Term {
  type ap[x <: Term] = I1[x]
  type eval = I
}

trait I1[x <: Term] extends Term {
  type ap[y <: Term] = eval#ap[y]
  type eval = x#eval
}
```

`I` needs to take a parameter, so its apply type-function takes a parameter `x`, and returns a new type `I1[x]` which has the parameter encoded into it. Evaluating `I1[x]` does exactly what you'd want from the `I` combinator with its parameter - it returns it.

The apply "method" of `I1`

looks strange. What you have to remember is that in lambda calculus (and in the SKI combinator calculus), everything is a function - so even after evaluating I.ap[x] to some other type, it's *still* a type function. So it still needs to be applicable. Applying it is exactly the same thing as applying its parameter.

So if you have any type A, and you write something like `var a : I.ap[A].eval`, the type of `a` will evaluate to `A`. If you apply `I.ap[A].ap[Z]`, it's equivalent to taking the result of evaluating `I.ap[A]`, giving you `A`, and then applying that to `Z`.
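You can watch this reduction happen at compile time. Here's a self-contained sketch (in Scala 2, where `#` type projections are unrestricted; the dummy term `A` is my own addition for testing) that asks the compiler for `=:=` evidence - it only compiles if the type-level application of `I` really reduces to its argument:

```scala
// The Term trait and I combinator from the proof.
trait Term { type ap[x <: Term] <: Term; type eval <: Term }

trait I extends Term {
  type ap[x <: Term] = I1[x]
  type eval = I
}
trait I1[x <: Term] extends Term {
  type ap[y <: Term] = eval#ap[y]
  type eval = x#eval
}

// An inert dummy term to feed to I, just for this test.
trait A extends Term {
  type ap[x <: Term] = A
  type eval = A
}

object ITest {
  // If this line compiles, the compiler reduced I#ap[A]#eval to A.
  implicitly[I#ap[A]#eval =:= A]
}
```

Note that in type-level position the application is written with `#` projections (`I#ap[A]#eval`) rather than the dotted form used in the prose.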

The `K` combinator is much more interesting:

```scala
// The K combinator
trait K extends Term {
  type ap[x <: Term] = K1[x]
  type eval = K
}

trait K1[x <: Term] extends Term {
  type ap[y <: Term] = K2[x, y]
  type eval = K1[x]
}

trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}
```

It's written in curried form, so it's a type trait `K`, which returns a type trait `K1`, which takes a parameter and returns a type trait `K2`.

The implementation is a whole lot trickier, but it's the same basic mechanics. Applying `K.ap[X]` gives you `K1[X]`. Applying that to `Y` with `K1[X].ap[Y]` gives you `K2[X, Y]`. Evaluating that gives you `X`.

The `S` combinator is more of the same.

```scala
// The S combinator
trait S extends Term {
  type ap[x <: Term] = S1[x]
  type eval = S
}

trait S1[x <: Term] extends Term {
  type ap[y <: Term] = S2[x, y]
  type eval = S1[x]
}

trait S2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = S3[x, y, z]
  type eval = S2[x, y]
}

trait S3[x <: Term, y <: Term, z <: Term] extends Term {
  type ap[v <: Term] = eval#ap[v]
  type eval = x#ap[z]#ap[y#ap[z]]#eval
}
```
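Putting all three combinators together, we can check the classic identity `S K K = I` entirely at compile time. This sketch (Scala 2) combines the definitions from the proof; if the `implicitly` line compiles, the compiler has carried out the whole SKI reduction:

```scala
trait Term { type ap[x <: Term] <: Term; type eval <: Term }

// I
trait I extends Term { type ap[x <: Term] = I1[x]; type eval = I }
trait I1[x <: Term] extends Term { type ap[y <: Term] = eval#ap[y]; type eval = x#eval }

// K
trait K extends Term { type ap[x <: Term] = K1[x]; type eval = K }
trait K1[x <: Term] extends Term { type ap[y <: Term] = K2[x, y]; type eval = K1[x] }
trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}

// S
trait S extends Term { type ap[x <: Term] = S1[x]; type eval = S }
trait S1[x <: Term] extends Term { type ap[y <: Term] = S2[x, y]; type eval = S1[x] }
trait S2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = S3[x, y, z]
  type eval = S2[x, y]
}
trait S3[x <: Term, y <: Term, z <: Term] extends Term {
  type ap[v <: Term] = eval#ap[v]
  type eval = x#ap[z]#ap[y#ap[z]]#eval
}

object STest {
  // S K K z reduces to z; with z = I, the whole expression evaluates to I.
  implicitly[S#ap[K]#ap[K]#ap[I]#eval =:= I]
}
```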

Michid then goes on to show examples of how to use these beasts. He implements equality testing, and then shows how to test if different type-expressions evaluate to the same thing. And all of this happens *at compile time*. If the equality test fails, then it's a type error at compile time!

It's a brilliant little proof. Even if you can't read Scala syntax, and you don't really understand Scala type inference, as long as you know SKI, you can look at the equality comparisons, and see how it works in SKI. It's really beautiful.

Same is true for Haskell: http://www.haskell.org/haskellwiki/Type_SK

I'm not sure I understand the Scala proof, but it seems that it only uses fairly basic programming constructs, while the Haskell proof relies on more exotic constructs ("multi-parameter typeclasses, functional dependencies, and undecidable instances").

Is there anyone who has looked into these issues and would like to comment?

They're actually pretty similar.

In Scala, traits are a weird hybrid beast. They're effectively a combination of interfaces in Java, and type classes in Haskell. The things that you do with typeclasses in Haskell, you can do in Scala using traits (with some help from implicits.)
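For readers who haven't seen the correspondence, here's a minimal sketch of the trait-plus-implicit pattern standing in for a Haskell type class (the `Show` name and `display` function are my own illustration, modeled on Haskell's `Show` class):

```scala
// A Haskell-style type class, encoded as a Scala trait.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // The "instance" for Int, supplied implicitly via the companion object.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = a.toString
  }
}

object Demo {
  // Like Haskell's `display :: Show a => a -> String`:
  // the implicit parameter plays the role of the class constraint.
  def display[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

  def main(args: Array[String]): Unit = {
    println(display(42)) // prints 42
  }
}
```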

The stuff that's going on in the Scala is actually pretty seriously hairy. It's deceptively simple looking, but it's not really any crazier than the Haskell.

Could you construct a Haskell proof that only used type classes (and basic function definitions) and not the other features mentioned? I don't really understand the Haskell proof either, but I noted the flag 'allow-undecidable-instances'. I don't know what it does but I assume it was necessary to set it to make the proof work. Otoh, the Scala proof appears to use the kind of constructs you would use in every day programming.

If the Scala proof is written using only basic features while the Haskell proof requires advanced (or even experimental) features, then I'd say that the situation for Scala is worse.

You're right that UndecidableInstances is what makes it work, in the sense that without it, the type checker can't infinitely loop.

The UndecidableInstances extension does two things: It allows the FlexibleInstances extension (which allows a much more liberal instance syntax over Haskell 98), and removes two restrictions which prevent the type checker from ever infinitely looping. The full details are in sections 7.6.3.2 and 7.6.3.3 of the GHC manual.

However, both Haskell 98 and FlexibleInstances disallow some things that programmers really want, like typeclass synonyms.

In fact, the reason why UndecidableInstances is a language extension is precisely because the problem has been known for quite some time (tl;dr Turing machines implemented in the Hugs typesystem). Indeed, the problem that Hindley-Milner plus the wrong kind of overloading can result in an undecidable type system was more-or-less common knowledge before Haskell (tl;dr Post correspondence problem implemented in a small extension to ML's type system).

I do not see a problem with a type system that is Turing complete. The point is: It should be hard/uncomfortable enough to construct types that require computations which might not terminate. (BTW: Shen is another language which has a Turing complete type system.)

I can imagine many situations where a Turing complete type system could cause problems. Consider any large application where type checking loops or takes a very long time.

If one restricts the set of programs so that type checking always succeeds, the type system is of course no longer Turing complete.

Oh... I feel honoured to see my article featured on this high-profile blog.

My real name is Michael Dürig. MichiD is one of the online identities I use for blogging and hacking.

Hey, you did really beautiful work, glad to meet you! I'll update the post.

Thanks for the links, decourse! So in 1989 Wadler and Blott proposed a mechanism for overloading in languages that use Hindley-Milner type inference, and two years later Volpano and Smith showed that such a system would be undecidable. If I understand things correctly, the Haskell proof relies on this feature.

But it seems to me that the Scala proof is very different. As far as I can tell (and I don't really know Scala) the proof only uses basic features of the language. The big question is of course, how likely is it that these issues will have an impact on the use of Scala in large-scale development? Have there been any reports of Scala programs that take an unreasonably long time to compile?

Hi, cool post. Two remarks:

1. I don't think that having a Turing-complete type system is a bad thing, since one could claim analogously that having a language permitting non-terminating programs is bad...

2. The axiom of intuitionistic logic corresponding to the S combinator is (A => (B => C)) => ((A => B) => (A => C)). It can be easily derived by "typing" the lambda term corresponding to S.