Very big, very exciting news in the theoretical comp sci world today!
A group at HP Research has published a proof that, if correct, settles the classic open problem of computational complexity once and for all. It's still far from certain that the proof is correct. But it's the most credible attempt that I've ever seen, and it's getting some preliminary favorable feedback from big names in complexity theory. In fact, one of the biggest names in the field, Stephen Cook, has apparently said that it should be taken seriously. If Cook thinks it's credible, then it's damn well credible. It might not be correct, but it's not just crackpottery either.
For the non-CS folks out there, here's my attempt to explain what the problem is, and what the solution means.
One of the most important areas of theoretical computer science is complexity theory. What complexity theory does is try to characterize, in an abstract way, how long it takes for an algorithm to solve a given problem. In some cases, we can do better than showing how long a single algorithm takes; we can actually analyze the problem itself, and show intrinsic properties of the problem that dictate a minimum amount of time needed to compute a solution.
We tend to describe complexity in terms of functions of the size of the input. So, for example, suppose you're given a list of \(N\) names, and you'd like to get them into sorted order. You can't do that with fewer than \(N \lg_2(N)\) comparisons. So, speaking very roughly, we can say that sorting a list of names has complexity \(O(N \lg_2 N)\) in the number of comparisons.
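You can watch that play out in practice. Here's a little Python sketch (the list of names is made up, and I'm just instrumenting Python's built-in sort) that counts the comparisons an actual sort performs, alongside the \(\lceil \lg_2(N!) \rceil\) floor:

```python
import math
from functools import cmp_to_key

def sort_counting_comparisons(names):
    """Sort a list while counting every comparison the sort performs."""
    count = 0
    def compare(a, b):
        nonlocal count
        count += 1
        return (a > b) - (a < b)
    return sorted(names, key=cmp_to_key(compare)), count

# An arbitrary example list of 8 names.
names = ["heidi", "bob", "frank", "alice", "grace", "carol", "eve", "dave"]
result, comparisons = sort_counting_comparisons(names)

# The information-theoretic floor: lg2(N!) bits are needed to pin down
# which of the N! orderings we were handed, and each comparison yields
# at most one bit. For N=8, that's ceil(lg2(40320)) = 16.
bound = math.ceil(math.log2(math.factorial(len(names))))
print(result, comparisons, bound)
```

Note that the bound is a statement about the worst case over all inputs; any single lucky run (say, an already-sorted list) can get away with fewer comparisons.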
One way of proving that is based on Kolmogorov-Chaitin complexity. Given a list of \(N\) names, there are \(N!\) possible orderings of that list. To sort the list, essentially, we need to identify which ordering we were given, so that we can select the right permutation to re-arrange it into sorted order. The particular permutation is one of the \(N!\) possible ones, and to select it, we need the number that identifies it in a list of all of them. So we need \(\lg_2(N!)\) bits, which is \(O(N \lg_2 N)\) bits. A comparison generates exactly one bit of information. So we need to do \(O(N \lg_2 N)\) comparisons.
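If you don't believe that \(\lg_2(N!)\) really grows like \(N \lg_2 N\), you can just check numerically (this is Stirling's approximation at work; the sizes below are arbitrary):

```python
import math

# lg2(N!) is the number of bits needed to name one of N! permutations;
# N*lg2(N) is the familiar sorting bound. Their ratio tends toward 1
# as N grows, which is why we say lg2(N!) is O(N lg2 N).
for n in (10, 100, 1000, 10000):
    exact = math.log2(math.factorial(n))
    approx = n * math.log2(n)
    print(f"N={n}: lg2(N!)={exact:.0f}, N*lg2(N)={approx:.0f}, "
          f"ratio={exact / approx:.3f}")
```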
Moving on, we can broadly characterize the complexity of algorithms in terms of the functions that describe the complexity bounds. We can talk about constant-time algorithms; sub-linear time algorithms; linear time; quadratic time; etc.
The most important distinction in terms of complexity is polynomial time versus exponential time. A problem or algorithm is polynomial time if its complexity function is no larger than some fixed polynomial in the input size, like \(N^2\) or \(N^3\). We call the set of problems that can be solved in polynomial time P. It's exponential time if its complexity is at least a function with the input size in the exponent, like \(2^N\); we call the set of problems that require exponential time EXP.
In very general terms, we tend to say that something which is polynomial time is solvable in practical terms; whereas for something which is exponential time, we can't solve non-trivial instances before the universe comes to an end.
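To see just how stark that difference is, compare the step counts of a hypothetical quadratic-time algorithm against an exponential-time one (the input sizes here are arbitrary; the point is the growth rate):

```python
# Steps taken by a quadratic-time algorithm (N**2) versus an
# exponential-time one (2**N), at a few input sizes.
for n in (10, 30, 60, 100):
    print(f"N={n:3d}: N^2 = {n ** 2:>6d}   2^N = {2 ** n}")
```

At \(N = 100\), \(2^N\) is about \(1.3 \times 10^{30}\) steps; at a billion steps per second, that's roughly \(4 \times 10^{13}\) years, thousands of times the current age of the universe, while \(N^2\) is a mere 10,000 steps.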
Within the exponential-time domain, there's a really important subset, which we care about, called NP. NP stands for nondeterministic polynomial time. Once again, speaking very loosely, problems in NP are problems where, to the best of our knowledge, computing a solution is in EXP, but checking a proposed solution for correctness is in P.
NP contains a lot of very classical and very important problems. For example, there's the traveling salesman problem: given a set of \(N\) cities and the distances between them, is there a route that visits all of them and is shorter than some given bound? (Strictly speaking, that yes/no version is the NP problem; actually finding the shortest route is at least as hard.) Getting the answer is amazingly difficult! It's an NP problem.
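Here's a tiny sketch of both sides of that asymmetry (the five-city distance matrix is made up purely for illustration): the brute-force search tries all \((N-1)!\) tours, which blows up exponentially, while checking the length of any single proposed tour is just a linear scan.

```python
from itertools import permutations

def route_length(route, dist):
    """Total length of a closed tour visiting every city once.
    Verification: a single linear pass, i.e. polynomial time."""
    n = len(route)
    return sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))

def brute_force_tsp(dist):
    """Search: try every one of the (N-1)! tours starting from city 0."""
    n = len(dist)
    best_tour, best_length = None, None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = route_length(tour, dist)
        if best_length is None or length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

# A made-up symmetric distance matrix for 5 cities.
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
tour, length = brute_force_tsp(dist)
print(tour, length)
```

Five cities means only 24 tours to try; bump it to 30 cities and the search space is \(29! \approx 10^{30}\) tours, hopeless for brute force, yet verifying any single proposed tour is still just 30 additions.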
One of the questions that has plagued computer scientists for decades is: does P = NP? In other words, does the fact that a solution is testable in polynomial time mean that the solution is computable in polynomial time? There are some good arguments either way - some people think that the fact that there's a polynomial time test is simply irrelevant; others think that there's a deep relationship between the testability of a solution, and the process of generating that solution. Without getting into some hairy math, it's hard to explain the two arguments any more deeply - but many people a lot smarter than I am have spent years thinking about it.
If this paper is correct, then it shows, once and for all, that P != NP: that there are problems in NP, like the traveling salesman problem, that have no polynomial time solutions.
I'm reading the paper, but it's rough going. If there's anything wrong with it, I doubt that I'll be able to find it. But so far, it's difficult but absolutely fascinating. They've taken a really interesting approach to analyzing the problem. If it holds up, they're probably in line for the Clay Millennium Prize; P versus NP is one of the seven problems it was created for.