Giving IDists too much credit: the Panda's Thumb and CSI

Being a Nice Jewish Boy™, Christmas is one of the most boring days of the
entire year. So yesterday, I was sitting with my laptop, looking for something interesting to read. I try to read the [Panda's Thumb][pt] regularly, but sometimes when I don't have time, I just drop a bookmark in my "to read" folder; so on a boring Christmas afternoon, my PT backlog seemed like exactly what I needed.

[One of the articles in my backlog caught my interest.][pt-sc] (It turned out to be short enough that I should have just read it instead of dropping it into the backlog, but hey, that's how things go sometimes!) The article criticizes that genius of intelligent design, Sal Cordova, and [his article about the genetics of regeneration in some zebrafish species.][sc] I had actually already addressed Sal's argument [here][bm-sc].

What I wanted to comment on was the PT critique of Sal's foolish statement. From Sal's article:
>In information science, it is empirically and theoretically shown that
>noise destroys specified complexity, but cannot create it. Natural
>selection acting on noise cannot create specified complexity. Thus,
>information science refutes Darwinian evolution. The following is a
>great article that illustrates the insufficiency of natural selection
>to create design.

This is an entirely bogus statement. What concerned me, though, was the rebuttal
from Pim van Meurs at PT:
>In fact, quite to the contrary, simple experiments have shown that the processes of natural
>selection and variation can indeed create specified complexity. In other words, contrary to
>the scientifically vacuous claims of Sal, science has shown that information science, rather
>than refuting Darwinian evolution, has ended up strongly supporting it.

Plenty of simple experiments have shown that evolution can create complexity,
irreducible complexity, etc. But complex specified information is a meaningless quantity - it
*cannot* be measured. It can only be described in informal, unquantifiable ways. By admitting
to the validity of this thoroughly nonsensical concept, we give creationists like
Sal an undeserved gift that aids their arguments.

As I've [said before][specnonsense], specified complexity *is a meaningless term in
information theory*. Complexity here means pretty much what it usually means in
information theory - that is, information content, or entropy.

*Specification* is the problem. It's used in two different ways in IDist discussions. One of them is precisely the *opposite* of complexity - it means that there is a precise, complete description of the system which is *short* - that is, the system has *low* information content. If you use this definition of specification, then "specified complexity" means "a system which contains a lot of information, but which doesn't contain a lot of information." The definition is self-contradictory - and therefore *nothing* can have CSI.

The other sense in which specification is used is for a system which can be *informally*
described in a concise way. Reduced to information theoretic terms, it means that you can take
a system with high information complexity, and *partially* describe the system with a very
small amount of information in a way that allows an intelligent observer to recognize that the
complex system matches the simple partial description. Well, again from information theory,
you can *always* extract a short *partial* description where there is a simple predicate for
recognizing whether a full complex system matches the partial description. (For example,
that's exactly what [digital signatures][digsig] do.) In this case, *everything* complex
contains CSI.

The trick that IDists use is to present the "complexity" part in a formal way, but
the "specification" part informally - that is, the specification is an English sentence,
recognizable by a human as a concise description of the system. But "comprehensible by a human" is not a meaningful term in the mathematics of information theory. In information-theoretic terms, the short description is no different from a digital signature - a small piece of summary information
with a simple predicate for verifying whether the full information matches the summary.
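To make the digital-signature analogy concrete, here's a minimal Python sketch (using a plain hash digest as a stand-in for a real signature; the function names are mine): a short summary extracted from an arbitrarily complex pile of data, plus a simple predicate that recognizes whether the full data matches it.

```python
import hashlib
import os

def summarize(data: bytes) -> bytes:
    # The short "partial description": a 32-byte digest of arbitrarily complex data.
    return hashlib.sha256(data).digest()

def matches(data: bytes, summary: bytes) -> bool:
    # The simple predicate: does the full complex system match the short description?
    return hashlib.sha256(data).digest() == summary

blob = os.urandom(1_000_000)   # a megabyte of incompressible noise
spec = summarize(blob)         # just 32 bytes of "specification"

print(matches(blob, spec))                   # True
print(matches(os.urandom(1_000_000), spec))  # False (other blobs don't match)
```

And since you can always do this, by this sense of "concise partial description plus recognition predicate," *every* complex system is specified.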

So CSI is either a meaningless term that includes all complex systems, or a meaningless
term that cannot include any systems at all. We shouldn't try to debate
the IDists by arguing about whether or not CSI can be produced in any particular way,
when the entire argument is predicated on nonsense. It's like trying to build a skyscraper on shifting sand: no matter how well you design and build it, it's still going to fall down.


  • Bronze Dog says:

    If you're bored, you might want to join in a game of ours.

    I celebrated Gravmas with the traditional fig-filled cookies.

  • Matt McIrvin says:

    A lot of this "specified complexity" stuff becomes a lot more understandable when you realize they're just taking the ancient apologetic concept of "causal adequacy" and dressing it up in pseudoscientific language. William Dembski has even used the actual phrase "causal adequacy" in this context and some scientists unfamiliar with it thought he coined it. But it's actually an old theological notion that David Hume spent some ink on attacking hundreds of years ago.

  • Chris Hyland says:

    I think that in CSI, complexity means extremely low probability, so I'm guessing that you could end up with something that is complex by Dembski's definition but still specified in that it has a low information content. My main problem with the concept is that even if you accept the sloppy definition of specification, it is still completely impossible to calculate for biological systems, because you can't calculate the probabilities.

  • Scarlet Seraph says:

    Right. Dembski (and most certainly Sal as well, since Sal does no actual thinking on his own) defines the 'complexity' part of CSI as the probability of the occurrence of the event, and the 'specified information' as how easy it is to describe.
    So what you're left with is something that's easy to describe, but really improbable. Which does not, of course, mean a darn thing with regard to 'designed' objects - unless you think that description applies to both a Ferrari and a billiard ball.

  • Mark C. Chu-Carroll says:

    >Right. Dembski (and most certainly Sal as well, since Sal does no actual thinking on his own) defines the 'complexity' part of CSI as the probability of the occurrence of the event, and the 'specified information' as how easy it is to describe.
    >So what you're left with is something that's easy to describe, but really improbable. Which does not, of course, mean a darn thing with regard to 'designed' objects - unless you think that description applies to both a Ferrari and a billiard ball.

    Except that it doesn't even leave you that, because the whole probability line is a sham.
    There is an alternative formulation of information theory as
    probability. Dembski and friends like to throw that formulation around in order to introduce the connection between probability and information. But they cheat: they
    redefine terms and play fast and loose with the definitions so that they can say, for example, that a sequence of coin tosses "HHTHHHTHHHHHTHHHHHHHT" is high information content because of probability, but specified because the length of the head runs are the sequence of prime numbers. The problem is, the probabilistic formulation they use for that is not the information theoretic one. From the standpoint of information theory, that's not a complex string - it doesn't
    have a lot of information content. Pounding the H and T keys on my keyboard randomly will create a string with higher
    information content. The "specification" property there translates to "has low information content"; the probability formulation tries to make it *look* like the string is highly complex when in fact it is not - and so it's just a low information content string.
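    A quick way to see this for yourself (compressed size as a rough practical stand-in for information content; the repetition factor is just to give the compressor enough input to work with):

    ```python
    import random
    import zlib

    # The coin-toss string from above: head runs of length 2, 3, 5, 7.
    patterned = "HHTHHHTHHHHHTHHHHHHHT" * 50

    # An equally long string of genuinely random tosses.
    rng = random.Random(0)
    noisy = "".join(rng.choice("HT") for _ in range(len(patterned)))

    # The patterned string compresses down to almost nothing;
    # the random one stays close to its entropy limit.
    print(len(zlib.compress(patterned.encode())))  # tiny: little information content
    print(len(zlib.compress(noisy.encode())))      # far larger: nearly incompressible
    ```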

  • Chris Hyland says:

    The thing to remember here is that if you use any accepted meaning of information, then it's pretty easy to show that evolution can create it. The object of Dembski's exercise is to come up with something that evolution supposedly can't create, even if the concept is meaningless to both evolution and information theory. As it happens, there is no in-principle reason why evolution can't create Dembski's kind of information either.

  • Mark C. Chu-Carroll says:

    That's almost exactly my point. What Dembski wants to do is create a formulation of something that evolution can't create. But he can't do that legitimately. So what he does instead is create a nonsense definition of "specified complexity" that he can *claim* evolution can't create. Then, because the definition is nonsense, he can *always* play games with it so that any example of anything produced by evolution isn't *really* specified complexity.
    By admitting to the validity of the nonsensical concept of specified complexity, we give Dembski and friends a huge rhetorical gift. Instead of requiring them to defend an invalid concept, we allow them to shift the grounds of the debate to a place where *they* call the shots.

  • Torbjörn Larsson says:

    >Instead of requiring them to defend an invalid concept, we allow them to shift the grounds of the debate to a place where *they* call the shots.

    PvM has this style, which is infuriating at times. IIRC he recently had a series of polemical PT posts around design where it went so far that some commenters suggested that perhaps he was a stealth creationist when he couldn't recognize adaptation. ( )
    It is hard to say why he does this; it could be a philosophical bent, or it could be a theistic (evolution) view. The latter is as interventionist and as much creationist when it discusses intervention to create special laws or species (humans or equivalent). At any rate, it isn't a good tactic, since the discussions become confused and unnecessarily inflamed.
    But besides the phony debate, the argument that evolution has no problem picking up information from the environment is correct, as Tom Schneider showed. And John Baez, posting on "The n-Category Café" blog, noted recently:

    >I spent yesterday talking to my friend Chris Lee. He's working on very general concepts of data analysis, pattern recognition and information theory as applied to genomics.
    >Right now Chris is trying to understand natural selection from an information-theoretic standpoint. At what rate is information passed from the environment to the genome by the process of natural selection? How do we define the concepts here precisely enough so we can actually measure this information flow?
    >Chris pointed out an interesting analogy between natural selection and Bayesian inference.
    >The analogy is mathematically precise, and fascinating. In rough terms, it says that the process of natural selection resembles the process of Bayesian inference. A population of organisms can be thought of as having various 'hypotheses' about how to survive -- each hypothesis corresponding to a different allele. (Roughly, an allele is one of several alternative versions of a gene.) In each successive generation, the process of natural selection modifies the proportion of organisms having each hypothesis, according to Bayes' law!

    And he went on to make the analogy more precise by pointing out the same equation in a model of asexual selection in population genetics. ( )
    Perhaps Lee will produce something tangible regarding evolution and information flow. In any case, it is "extremely low probability" that Dembski will.
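    The analogy Baez describes can be sketched in a few lines of Python (toy numbers and my own naming throughout - this is nothing from Lee's actual work): one generation of selection is, term for term, the same arithmetic as one Bayes update, with relative fitness playing the role of likelihood.

    ```python
    def bayes_update(prior, likelihood):
        # Posterior is prior times likelihood, renormalized.
        unnorm = [p * l for p, l in zip(prior, likelihood)]
        total = sum(unnorm)
        return [u / total for u in unnorm]

    def selection_step(freqs, fitness):
        # One generation of selection: p_i' = p_i * w_i / (mean fitness).
        mean_w = sum(p * w for p, w in zip(freqs, fitness))
        return [p * w / mean_w for p, w in zip(freqs, fitness)]

    alleles = [0.5, 0.3, 0.2]   # frequencies of three competing "hypotheses"
    fitness = [1.0, 1.5, 0.5]   # relative fitness, playing the role of likelihood

    # The two updates are the same equation.
    print(bayes_update(alleles, fitness) == selection_step(alleles, fitness))  # True
    ```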

  • Torbjörn Larsson says:

    I should note that since I was too lazy to check the debate with PvM, the "IIRC" is to be taken seriously - I could easily misremember and misstate. And the "Bayesian inference" here is based on observable frequencies and finite data, and so is an adequate procedure under all probability interpretations, in case that discussion is provoked.

  • Torbjörn Larsson says:

    And I also forgot the punch line: that this is a relationship between selection and genetic programming, Bayesian methods, and machine learning.

  • Troublesome Frog says:

    I get the impression that PvM just has a habit of allowing certain creationist assumptions for the sake of argument and then refuting the argument even if those assumptions are true. I agree, though, that it sometimes lends too much legitimacy to some of the weirder axioms of creationist debate--CSI probably being the worst of them. Debaters should just blow a collective raspberry whenever specified complexity is mentioned until Dembski can come up with a meaningful definition for it and demonstrate that it can be measured.

  • Coin says:

    Honestly, I consistently get the feeling PVM simply doesn't put a whole lot of thought into what he's saying. He barely even punctuates correctly most of the time*.
    He seems to understand things well when he bothers to learn about them, and he often produces extremely interesting insights, but in general he seems to have a habit of just saying what is on his mind and moving on to something else, rather than making sure he understands the subject before commenting.
    * (Although checking on Google just now, I find people mentioning he's of Dutch origin -- is English not his first language? PVM's punctuation tends to drive me batty, but if he's not a native English speaker then I can't really blame him for it.)
