Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt
At the group blog Panda's Thumb, University of Washington geneticist Joe Felsenstein recently posted a piece titled "Stephen Meyer Needs Your Help" (go here). He attempts to disparage Meyer's forthcoming book Darwin's Doubt before its publication this June by suggesting that right-thinking readers at PT kindly contact Meyer now in the hope of redressing the book's likely flaws (to my knowledge, neither Felsenstein nor anyone else at PT has an advance copy of Meyer's manuscript). Says Felsenstein: "I suggest we help Meyer with his book. These days a book can be revised up until perhaps a month before publication, so there is still time for Meyer to take our advice."
Felsenstein tries to get the ball rolling by offering Meyer the following advice:
Let me start with my suggestion (but you [i.e., PT readers] will have others to add). Dr. Meyer should explain the notion of Complex Specified Information (CSI) and deal carefully with the criticisms of it. Many critics of Intelligent Design argued that it is meaningless. But even those who did not consider it meaningless (and I was one) found fatal flaws in the way Meyer's friend William Dembski used it to argue for ID. Dembski's Law of Conservation of Complex Specified Information was invoked to argue that when we observe adaptation that is much better than could be achieved by pure mutation (monkeys-with-genomic-typewriters), that this must imply that Design is present. But alas, Elsberry and Shallit in 2003 found that when Dembski proved his theorem, he violated a condition that he himself had laid down, and I (2007) found another fatal flaw -- the scale on which the adaptation is measured (the Specification) is not kept the same throughout Dembski's argument. Keeping it the same destroys this supposed Law. Meyer should explain all this to the reader, and clarify to ID advocates that the LCCSI does not rule out natural selection as the reason why there is nonrandomly good adaptation in nature.

Felsenstein's request for clarification could just as well have been addressed to me, so let me respond, making clear why criticisms by Felsenstein, Shallit, et al. don't hold water.
There are two ways to see this. One would be for me to:

- review my work on complex specified information (CSI) and show why the concept is in fact coherent despite the criticisms by Felsenstein and others;
- indicate how this concept has since been strengthened by being formulated as a precise information measure;
- argue yet again why it is a reliable indicator of intelligence;
- show why natural selection faces certain probabilistic hurdles that impose serious limits on its creative potential for actual biological systems (e.g., protein folds, as in the research of Douglas Axe);
- justify the probability bounds and the Fisherian model of statistical rationality that I use for design inferences;
- show how CSI as a criterion for detecting design is conceptually equivalent to information in the dual senses of Shannon and Kolmogorov; and
- finally, characterize conservation of information within a standard information-theoretic framework.

Much of this I have done in a paper titled "Specification: The Pattern That Signifies Intelligence" (2005) and in the final chapters of The Design of Life (2008).
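To give a concrete sense of the "precise information measure" mentioned above, here is a minimal sketch following the form of the specified-complexity measure in the 2005 "Specification" paper: chi = -log2 of the product of replicational resources (bounded at 10^120), specificational resources, and the event's chance probability, with chi > 1 suggesting design. The function name and the illustrative numbers are mine, not the paper's.

```python
import math

def specified_complexity(p_event, spec_resources, repl_resources=10**120):
    """Context-dependent specified complexity in the style of the 2005
    'Specification' paper: chi = -log2(R * phi * P), where P is the
    event's probability under the chance hypothesis, phi counts patterns
    at least as simple as the observed one (specificational resources),
    and R bounds the number of opportunities for chance events in the
    universe's history (replicational resources)."""
    return -math.log2(repl_resources * spec_resources * p_event)

# A 500-bit pattern with chance probability 2**-500 and, say, 10**20
# patterns of comparable descriptive simplicity (illustrative only):
chi = specified_complexity(2.0**-500, 10**20)
print(round(chi, 2))  # 34.93 -- well above the chi > 1 design threshold
```

The measure folds the probabilistic resources directly into the complexity score, so a single number carries the comparison that the earlier framework expressed as a probability cutoff.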
But let's leave aside this direct response to Felsenstein (to which neither he nor Shallit ever replied). The fact is that conservation of information has since been reconceptualized and significantly expanded in its scope and power through my subsequent joint work with Baylor engineer Robert Marks. Conservation of information, in the form that Felsenstein is still dealing with, is taken from my 2002 book No Free Lunch. In 2005, Marks and I began a research program for developing the concept of conservation of information, and we have since published a number of peer-reviewed papers in the technical literature on this topic (note that Felsenstein published his critique of my work with the National Center for Science Education, essentially in a newsletter format, and that Shallit's 2003 article finally appeared in 2011 with the philosophy of science journal Synthese, essentially unchanged in all those intervening years). Here are the two seminal papers on conservation of information that I've written with Robert Marks:
- "The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486
- "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 39(5) (September 2009): 1051-1061
It follows that another way to see that Felsenstein is blowing smoke is to note that he simply is not up to date on the literature dealing with conservation of information. Moreover, if the example of Jeffrey Shallit is any indicator, that ignorance of the recent conservation of information literature gives the appearance of being willful. In an attempt to engage Shallit on this newer approach to conservation of information, I sent him an email some time back asking for his considered response to it. Here's the email he sent me in reply:
"I already told you -- since you have never publicly acknowledged even one of the many errors I have pointed out in your work -- I do not intend to waste my time finding more errors in more work of yours. I find your failure to acknowledge the errors I have pointed out completely indefensible, both ethically and scientifically."

Jeffrey Shallit

Actually, I did acknowledge an arithmetic error that Shallit found in my book No Free Lunch, though the error itself did not affect my conclusion. But most of what he calls errors have seemed to me confusions in his own thinking. The fact is, Shallit and I were together at a conference in the early 2000s and butted heads there on the question of complex specified information. In this encounter, I was frankly surprised that he could not grasp a crucial yet very basic distinction involving Kolmogorov complexity, namely, that even though it assigns high complexity to incompressible sequences taken individually, it can also assign high complexity to compressible sequences when taken as a subclass within a broader class of sequences.
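The first half of that Kolmogorov distinction, that incompressible sequences taken individually sit near maximal complexity while repetitive ones sit far below it, can at least be illustrated with a computable stand-in. Kolmogorov complexity itself is uncomputable, so the sketch below uses zlib-compressed length as a crude proxy; the sequences and names are mine, chosen only for illustration.

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    # zlib output length as a rough, computable proxy for
    # Kolmogorov complexity (which is itself uncomputable)
    return len(zlib.compress(s, 9))

random.seed(0)
repetitive = b"HEADS" * 200  # 1000 bytes, highly compressible
incompressible = bytes(random.getrandbits(8) for _ in range(1000))

# The repetitive sequence compresses to a tiny description; the
# random one barely compresses at all:
print(compressed_size(repetitive) < compressed_size(incompressible))  # True
```

The second half of the distinction, assigning complexity to a sequence as a member of a subclass rather than individually, has no such simple computational proxy, which is part of why it trips people up.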
The animating impulse behind Shallit's email, and one that Felsenstein seems to have taken to heart, is that having seen my earlier work on conservation of information, he and Felsenstein need only deal with that work (meanwhile misrepresenting it) and can ignore anything I subsequently say or write on the topic. Moreover, if others build on my newer work in this area, Shallit et al. can pretend that those others are relying on my earlier work and critique them as though that's what they had used. Shallit's 2003 paper that Felsenstein cites never engaged my newer work on conservation of information with Robert Marks, nor did Felsenstein's 2007 paper for which he desires a response. Both papers key off my 2002 book No Free Lunch along with popular spinoffs from that book a year or two later. Nothing else.
So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).
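The earlier, event-level framework just described can be sketched in a few lines: an observed event that matches an independently given pattern (a specification) and whose chance probability falls below a cutoff has chance ruled out. This is a minimal sketch of the logic, not code from The Design Inference; the function and parameter names are mine.

```python
# Dembski's universal probability bound, below which no chance event
# is expected anywhere in the history of the observable universe
UNIVERSAL_BOUND = 1e-150

def chance_rejected(p_event: float, is_specified: bool,
                    cutoff: float = UNIVERSAL_BOUND) -> bool:
    # Both conditions are required: a specification (independently
    # given pattern) AND a probability below the cutoff
    return is_specified and p_event < cutoff

# 500 fair coin flips matching a prespecified target sequence:
p = 0.5 ** 500  # about 3e-151, just below the bound
print(chance_rejected(p, is_specified=True))        # True
print(chance_rejected(0.5 ** 100, True))            # False: 100 flips clear the bar
```

Note that an improbable event without a specification never triggers the filter; any particular sequence of 500 flips is equally improbable, but only the prespecified ones count.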
In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information, whereas in the earlier form it was essential: a certain probability threshold had to be crossed before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here).
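The central bookkeeping quantity of this newer framework, as defined in the Dembski-Marks search papers cited above, is active information: endogenous information I_omega = -log2(p) measures the difficulty of blind search, exogenous information I_s = -log2(q) the difficulty remaining for an assisted search, and their difference is the information the assistance itself contributed. The toy numbers below are mine.

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    # I_plus = I_omega - I_s = -log2(p) - (-log2(q)) = log2(q / p):
    # the information an assisted search contributes beyond blind search
    return math.log2(q_assisted / p_blind)

# Toy example: a target occupying 1 cell of a 1024-cell space, so blind
# search succeeds with p = 1/1024 (10 bits of endogenous information),
# versus an assisted search that succeeds half the time (1 bit remaining):
print(active_information(1 / 1024, 0.5))  # 9.0 bits of the 10 needed
```

Conservation of information, in this framework, is the claim that the 9 bits do not come free: locating a search that good within the space of possible searches costs at least that much information, which is the regress described above.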
So what's the take-home lesson? It is this: Stephen Meyer's grasp of conservation of information is up to date. His 2009 book Signature in the Cell devoted several chapters to the research by Marks and me on conservation of information, which in 2009 had been accepted for publication in the technical journals but had yet to appear in print. Consequently, we can expect Meyer's 2013 book Darwin's Doubt to show full cognizance of conservation of information as it exists currently. By contrast, Felsenstein betrays a thoroughgoing ignorance of this literature. If he is representative of the help that PT has to offer the ID community, then Meyer can afford to do without it.