A Glance at the Work of Dembski and Marks

Category: goodmath > information theory; bad math > Debunking Creationism > intelligent design
Posted on: September 17, 2007 9:00 AM, by Mark C. Chu-Carroll

Both in comments, and via email, I've received numerous requests to take a look at the work of Dembski and Marks, published through Professor Marks's website. The site is called the "Evolutionary Informatics Laboratory". Before getting to the paper, it's worth taking just a moment to understand its provenance - there's something deeply fishy about the "laboratory" that published this work. It's not a lab - it's a website; it was funded under very peculiar circumstances, and hired Dembski as a "post-doc", despite his being a full-time professor at a different university. Marks claims that his work for the EIL is all done on his own time, and has nothing to do with his faculty position at the university. It's all quite bizarre. For details, see here.

On to the work. Marks and Dembski have submitted three papers. They're all in a very similar vein (as one would expect for three papers written in a short period of time by collaborators - there's nothing at all peculiar about the similarity). The basic idea behind all of them is to look at search in the context of evolutionary algorithms, and to analyze it using an information theoretic approach. I've picked out the first one listed on their site: Conservation of Information in Search: Measuring the Cost of Success

There are two ways of looking at this work: on a purely technical level, and in terms of its presentation.

On a technical level, it's not bad. Not great by any stretch, but it's entirely reasonable. The idea behind it is actually pretty clever. They start with NFL - the No Free Lunch theorems. NFL says, roughly, that if you don't know anything about the search space, you can't select a search that will perform better than a random walk. If we have a search for a given search space that does perform better than a random walk, then in information theoretic terms we can say that the search encodes information about the search space. How can we quantify the information encoded in a search algorithm that allows it to perform as well as it does?

So, for example, think about a search algorithm like Newton's method. It generally homes in extremely rapidly on the roots of a polynomial equation - dramatically better than one would expect from a random walk. For example, if we look at something like y = x^2 - 2, starting with an approximation of a zero at x=1, we can get to a very good approximation in just two iterations. What information is encoded in Newton's method? Among other things, the fact that it's working in a Euclidean space on a continuous, differentiable curve. That's rather a lot of information. We can actually quantify that in information theoretic terms by computing the average time to find a root in a random walk, compared to the average time to find a root with Newton's method.
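
To make that concrete, here's a minimal Python sketch (an illustration only, not anything from the paper) of Newton's method on f(x) = x^2 - 2, starting from x = 1:

```python
def newton_sqrt2(x0=1.0, iterations=2):
    # Newton's method on f(x) = x^2 - 2: repeatedly apply x <- x - f(x)/f'(x).
    x = x0
    for _ in range(iterations):
        x = x - (x * x - 2.0) / (2.0 * x)
    return x

# Two iterations from x = 1 give 1.4166..., versus sqrt(2) = 1.41421...;
# blind guessing has no comparable way of homing in on the root that fast.
print(newton_sqrt2())
```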

Further, when a search performs worse than what is predicted by a random walk, we can say that, with respect to the particular search task, the search encodes negative information - that it actually contains some assumptions about the locations of the target that actively push it away, and prevent it from finding the target as quickly as a random walk would.
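
To put rough numbers on the idea, here's a toy Python sketch (a rough illustration of the bookkeeping, not the paper's actual formalism): estimate the success probability of a search and of a blind baseline given the same query budget, and take the log of the ratio. A search whose built-in assumptions are right scores positive bits; a search whose assumptions push it away from the target does worse than the baseline, and the same quantity comes out negative.

```python
import math
import random

def success_rate(search, trials=5000):
    # Fraction of independent runs in which the search hits the target.
    return sum(search() for _ in range(trials)) / trials

def blind_search(space=1024, target=0, queries=32):
    # Baseline: uniform random guessing, `queries` guesses per run.
    return any(random.randrange(space) == target for _ in range(queries))

def informed_search(space=1024, target=0, queries=32):
    # A search that assumes the target lies in the lowest 1/16th of the space.
    return any(random.randrange(space // 16) == target for _ in range(queries))

p_blind = success_rate(blind_search)
p_informed = success_rate(informed_search)
# Positive: the informed search encodes correct information about the target.
print(math.log2(p_informed / p_blind), "bits")
```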

That's the technical meat of the paper. And I've got to say, it's not bad. I was expecting something really awful - but it's not. As I said earlier, it's far from being a great paper. But technically, it's reasonable.

Then there's the presentation side of it. And from that perspective, it's awful. Virtually every statement in the paper is spun in a thoroughly dishonest way. Throughout the paper, they constantly make statements about how information must be deliberately encoded into the search by the programmer. It's clear the direction that they intend to go - they want to say that biological evolution can only work if information was coded into the process by God. Here's an example from the first paragraph of the paper:

Search algorithms, including evolutionary searches, do not generate free information. Instead, they consume information, incurring it as a cost. Over 50 years ago, Leon Brillouin, a pioneer in information theory, made this very point: "The [computing] machine does not create any new information, but it performs a very valuable transformation of known information." When Brillouin's insight is applied to search algorithms that do not employ specific information about the problem being addressed, one finds that no search performs consistently better than any other. Accordingly, there is no "magic-bullet" search algorithm that successfully resolves all problems.

That's the first one, and the least objectionable. But just half a page later, we find:

The significance of COI [MarkCC: Conservation of Information - not Dembski's version, but from someone named English] has been debated since its popularization through the NFLT [30]. On the one hand, COI has a leveling effect, rendering the average performance of all algorithms equivalent. On the other hand, certain search techniques perform remarkably well, distinguishing themselves from others. There is a tension here, but no contradiction. For instance, particle swarm optimization [10] and genetic algorithms [13], [26] perform well on a wide spectrum of problems. Yet, there is no discrepancy between the successful experience of practitioners with such versatile search algorithms and the COI imposed inability of the search algorithms themselves to create novel information [5], [9], [11]. Such information does not magically materialize but instead results from the action of the programmer who prescribes how knowledge about the problem gets folded into the search algorithm.

That's where you can really see where they're going. "Information does not magically materialize, but instead results from the action of the programmer". The paper harps on that idea to an inappropriate degree. The paper is supposedly about quantifying the information that makes a search algorithm perform in a particular way - but they just hammer on the idea that the information was deliberately put there, and that it can't come from nowhere.

It's true that information in a search algorithm can't come from nowhere. But it's not a particularly deep point. To go back to Newton's method: Newton's method of root finding certainly codes all kinds of information into the search - because it was created for a particular domain, and encodes that domain. You can model orbital dynamics as a search for an equilibrium point without anyone having to encode the law of gravitation into it; gravitation is already a part of the system. Similarly, in biological evolution, you can certainly quantify the amount of information encoded in the process - which includes all sorts of information about chemistry, reproductive dynamics, and so on. But since those things are encoded into the universe itself, you don't need to find an intelligent agent to have coded them into evolution: they're an intrinsic part of the system in which evolution occurs.

You can think of it as being like a computer program: a programmer doesn't need to add code to specify the fact that the machine the program will run on has 16 registers; every program for that machine has that wired into it, because it's a fact of the "universe" the program lives in. For anything in our universe, the basic facts of that universe - the fundamental forces, chemistry - are encoded in its very existence. For anything on earth, facts about the earth, the sun, and the moon are encoded into its existence as well.

Dembski and Marks try to make a big deal out of the fact that all of this information is quantifiable. Of course it's quantifiable. The amount of information encoded into the structure of the universe is quantifiable too. And it's extremely interesting to see just how you can compute how much information is encoded into things. I like that aspect of the paper. But it doesn't imply anything about the origin of the information: in this simple initial quantification, information theory cannot distinguish between environmental information which is inevitably encoded, and information which was added by the deliberate actions of an intelligent agent. Information theory can quantify information - but it can't characterize its source.

If I were a reviewer, would I accept the paper? It's hard to say. I'm not an information theorist, so I could easily be missing some major flaw. The style of the paper is very different from any other information theory paper that I've ever read - it's got a very strong rhetorical bent to it, which is very unusual. I also don't know where they submitted it, so I don't know what the reviewing standards are - the reviewing standards of different journals are quite different. If this were submitted to a theoretical computer science journal like the ones I typically read, where the normal ranking system is (reject / accept with changes and second review / weak accept with changes / strong accept with changes / strong accept), I would probably rank it either "accept with changes and second review" or "weak accept with changes".

So as much as I'd love to trash them, a quick read of the paper shows a mediocre paper with an interesting idea. The writing sucks: the paper was written to push a point that its actual technical content can't support, and it pushes that point with all the subtlety of a sledgehammer.

Comments

#1

Even if the information theoretic part is okay, I would be appalled to see this paper in a journal. What if someone wrote an (otherwise) excellent paper about a new quantum programming technique, but constantly throughout the paper kept saying how it proved coat hangers are edible?

Posted by: Andrew | September 17, 2007 10:40 AM

#2

Read "Intelligent Design and the NFL Theorems: Debunking Dembski" by Olle Häggström (pretty smart probability theorist) http://www.math.chalmers.se/~olleh/papers.html

Posted by: mo | September 17, 2007 11:56 AM

#3

one can say: this paper tells a lot of interesting and true things. the only problem is that true things in it are not interesting, and interesting things are not true.

Posted by: krisztian pinter | September 17, 2007 1:00 PM

#4

Mark,

Your analysis is very well stated. As you point out, the technical side of their work and the presentation thereof are two separate matters. I agree that the presentation has some serious problems. It serves to obscure rather than clarify concepts that I find rather trivial, and it seems intended to create an impression of ID-friendliness.

As for the technical side, they simply explore the various ways that searches can be adjusted to increase their efficiency. I'm sure that this has been done many times before, even in homework assignments.

The parts that I have problems with are:
a) Casting the concepts in terms of information
b) The quantification of the increase in efficiency
c) The arbitrary nature of selecting a baseline search

In classical information theory, one information measure of a message is the surprisal, which is the negative log of the probability of the message. This is the measure that D&M use, but they use it in a strange way: They take the probability that a search will succeed -- that is, the "message" is binary, either "success" or "failure" -- but instead of associating the information with the success/failure outcome of the search, they associate it with the search itself, which is confusing to say the least.

What's even more confusing is their definition of active information. AI is not the negative log of a probability; rather, it's the negative log of a ratio of probabilities. So how do we pinpoint the "message" that contains this information?

Here's an attempt. The event A associated with the active information is defined such that P(A)=P(B)/P(E) where B is the success of a baseline search and E is the success of an efficient search. Let's pretend that the parameters of the efficient search were chosen from all possible parameters; we'll call this selection event X. NFL tells us that P(X)*P(E|X) = P(X). After the negative log transformation, I(A)

But how is that useful? I'm having a hard time seeing the significance of this measure. Can we conclude anything useful from it that we didn't already have to know in order to calculate it? As far as I can tell, no.

As far as the baseline search, D&M tell us only that it's a blind search. But what is the search space? And what is the search structure? For instance, in this paper, they explore two blind searches, one of which is much more effective than the other (or so they say) because of its search structure. Which of those should be used as a baseline? It seems rather arbitrary.

(If you've read this far, I'll throw in another tidbit. I'm pretty sure that the numerical results and conclusions in the paper cited above are completely wrong, so D&M will have to rewrite it. You heard it here first.)

Posted by: secondclass | September 17, 2007 1:14 PM

#5

Dang less-than's and greater-than's. Here's another try:

"Here's an attempt. The event A associated with the active information is defined such that P(A) = P(B)/P(E) where B is the success of a baseline search and E is the success of an efficient search. Let's pretend that the parameters of the efficient search were chosen from all possible parameters; we'll call this selection event X. NFL tells us that P(X)*P(E|X) <= P(B), so P(X) <= P(B)/P(E|X), so P(A) >= P(A). After the negative log transformation, I(A) <= I(X). That is, the active information measure gives us a lower bound on the information associated with the selection of the particular search."

Posted by: secondclass | September 17, 2007 1:24 PM

#6

i gotta say, i'm with Andrew. the technical ideas in the paper are acceptable, yeah, but they're being abused (blatantly and inelegantly abused at that) in a vain attempt at making a theological point. it ruins the paper's conclusions, it stops any less critical reader from getting the most out of the information they provide and it gives them free publicity; it doesn't belong in a computing journal, imo.

Lepht

Posted by: Lepht | September 17, 2007 3:08 PM

#7

So, to generalize: what if one were to write a very simple but very general algorithm (let's call it POKE-AROUND) that would quickly but randomly create other algorithms, a tiny fraction of which might turn out to be useful for search. Would it be conceivable that, in time, our POKE-AROUND algorithm would stumble across a more efficient search algorithm that actually encodes, by chance, some knowledge about the search space? And because POKE-AROUND itself is a very simple program, could it be that it itself is a result of chance, especially given a long time?

Posted by: n3w6 | September 17, 2007 3:11 PM

#8

n3w6:

Nope. Using a meta-search to find a search doesn't work. A meta-search that chooses from some set of options or random inputs to find a search is really just itself another search - and falls victim to NFL in the same way as any other search.

The thing is, that's not a problem. Dembski likes to try to claim NFL as a much stronger result than it actually is. NFL only talks about the properties of searches averaged over all possible search spaces. It doesn't say anything about how searchable particular sets of spaces are.

For some search spaces, it's very easy to find search algorithms that converge on solutions very quickly. NFL relies on the fact that you're talking about performance averaged over all possible search spaces - and like most mathematical structures, most theoretically possible search spaces are highly irregular, and have no properties that are easily exploitable.

Think of it like this: most functions - the overwhelming majority of functions - are neither differentiable nor continuous. Given an arbitrary function which you know nothing about, there's no way to find its zeros faster than just randomly guessing until you get one. But if you're working with polynomials, then you can easily find their zeros using a simple search process.

Search is exactly the same way. Given a search space about which you know nothing, there's no way to pick an algorithm that will do better than random. But if you know that your landscape is a smooth, continuous, differentiable surface in R^3, there are a ton of search algorithms that will perform better than random.
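
Here's a toy illustration of that last point (a sketch of my own, not anything from the NFL literature): on a smooth, differentiable, one-dimensional landscape, even crude gradient descent beats uniform random sampling given the same evaluation budget.

```python
import random

def f(x):
    # A smooth, differentiable landscape with its minimum at x = 3.
    return (x - 3.0) ** 2

def random_sampling(budget=50, lo=-100.0, hi=100.0):
    # Blind search: uniform random guesses, keep the best value found.
    return min(f(random.uniform(lo, hi)) for _ in range(budget))

def gradient_descent(budget=50, lo=-100.0, hi=100.0, lr=0.2):
    # Exploits the structure of the landscape: follow f'(x) = 2(x - 3) downhill.
    x = random.uniform(lo, hi)
    for _ in range(budget - 1):
        x -= lr * 2.0 * (x - 3.0)
    return f(x)

# Gradient descent lands essentially on the minimum; random sampling rarely does.
print(random_sampling(), gradient_descent())
```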

Posted by: Mark C. Chu-Carroll | September 17, 2007 4:21 PM

#9

"those things are encoded into the universe"
Yep, and who encoded the universe ...
I think it's just another way to say "if there's evidence of evolution, it's only because God wanted it to be that way". Same reason he went through the trouble of burying those fern leaves in the coal; artistic license, so to speak.

Posted by: Mu2 | September 17, 2007 4:53 PM

#10

Correction to my correction: Replace "so P(A) >= P(A)" with "so P(A) >= P(X)" in my second post above.

Posted by: secondclass | September 17, 2007 6:43 PM

#11

Mark-

Interesting post, but you should prepare yourself psychologically for the inevitable distortion of what you wrote over at Dembski's blog. A while back I wrote a similar post about one of Dembski's papers, describing things in much the same way as you did. I concluded that technically the paper was acceptable, but that it was abysmally written and that its broader conclusions were not correct. Along the way I remarked that the proofs seemed to be correct and that Dembski knew how to manipulate his symbols. It wasn't long before Salvador Cordova was gushing over at Uncommon Descent that I had said Dembski's paper was correct. Ugh.

Also, I think you mean “random search” as opposed to “random walk.” A random walk usually refers to a situation where you are moving through some discrete space in which from every point you have known probabilities of moving to certain subsequent points. A random search is when you choose the points to sample at random from the search space. Thus, there is no connection between the point you sample next and the previous points you have already sampled. I think it was the latter meaning that is intended in the NFL theorems.

Posted by: Jason Rosenhouse | September 17, 2007 10:38 PM

#12

Let me put your critique in a nutshell to check my understanding. If a given search method works better than a random walk in some search space, that fact tells us about (some) properties of the search space. However, it tells us nothing about where those properties came from, and nothing about how that search method came to be used on that search problem. Is that it?

Posted by: RBH | September 17, 2007 11:09 PM

#13

See Mathworld for the hotlinks and citations to the mathematical literature for this excerpted definition:

Random Walk. From MathWorld--A Wolfram Web Resource.

"A random process consisting of a sequence of discrete steps of fixed length. The random thermal perturbations in a liquid are responsible for a random walk phenomenon known as Brownian motion, and the collisions of molecules in a gas are a random walk responsible for diffusion. Random walks have interesting mathematical properties that vary greatly depending on the dimension in which the walk occurs and whether it is confined to a lattice."

Posted by: Jonathan Vos Post | September 18, 2007 1:13 AM

#14

Also see the recent PT thread
http://www.pandasthumb.org/archives/2007/09/how_does_evolut.html

where at least one Tom English weighs in, not so much on the math as on the Baylor controversy.

I agree that the technical points about "active information" are valid, but only mildly interesting. re his Chengdu keynote, I find it hard to believe that the "ev" algorithm contributed negative information per these definitions. Is Marks being misleading or is he cherry picking an example, the way UD bloggers like to harp on Dawkins' WEASEL?

ev is itself an interesting case. Schneider makes some effort to make it a "biologically realistic" model, and makes some argument that this realism is important. But then so much about the model isn't realistic, it kind of undercuts his argument.

Posted by: David vun Kannon | September 18, 2007 4:47 AM

#15

Mark,

I appreciate your balanced response. I'm "somebody named English" who wrote in 1996 that NFL is a consequence of conservation of Shannon information in search. I'm affiliated now with Bob Marks and the EvoInfo (virtual) lab, but I'm an adversary of ID. You can find more about that at The Panda's Thumb.

NFL only talks about the properties of searches averaged over all possible search spaces. It doesn't say anything about how searchable particular sets of spaces are.

Not to nitpick, but in the NFL framework the search space (aka solution space) is the domain of the cost functions, which is fixed. I wish Wolpert and Macready hadn't spoken so much of averages in their plain-language remarks. Their theorems actually show that all algorithms have identically distributed results. A search result is, loosely, the sequence of costs obtained by the algorithm, and search performance is a function of the search result. If all algorithms have identically distributed results, then all algorithms have identically distributed performance and identical average performance.

For some search spaces, it's very easy to find search algorithms that converge on solutions very quickly.

Easy for an intelligent agent like you, but easy for some algorithm? Dembski wants precisely to show that you, and not an algorithm, can easily design a search algorithm with higher performance.

... and like most mathematical structures, most theoretically possible search spaces are highly irregular, and have no properties that are easily exploitable.

Yes, almost all cost functions are algorithmically random, or nearly so. A consequence is that for the typical cost function, almost all algorithms obtain good solutions rapidly. (Intuitively, there are just as many good solutions as bad ones, and they're scattered all about. It's hard not to bump into one.) To put it another way, all search algorithms are almost universally efficacious. See Optimization Is Easy and Learning Is Hard in the Typical Function.

My next paper will demonstrate clearly that this is a theoretical result, not a practical one.

Given a search space about which you know nothing, there's no way to pick an algorithm that will do better than random.

Some care is required in speaking of an algorithm being better or worse than random search. There is the performance of a particular realization of random search of a cost function, and there is also the expected performance of random search of that function.

When there is no information to suggest that any (deterministic) algorithm will perform better than any other on a cost function to be drawn randomly (not necessarily uniformly), selecting an algorithm uniformly is the best you can do. But a single run of random search is precisely equivalent to uniformly selecting a deterministic algorithm and running it (see No More Lunch). So picking an algorithm randomly vs. applying random search is a distinction without a difference.

Hope I didn't talk your ear off. I love this stuff.

Posted by: Tom English | September 18, 2007 8:07 AM

#16

The Newton's method comparison impressed me with how much information the problem space contributes. In evolution the problem space is constrained by both physical and chemical laws.

And as evolution is also a natural process, there is a lot of inherent information contributed here too. A lot of mechanisms aren't accounted for in these papers. If we measure information as randomness, there is randomness inherent in evolution - both in evolutionary mechanisms, such as crossovers in sexual reproduction, and in evolutionary processes, such as fixation during selection, or drift.

As for the papers, the discussion we have here and elsewhere will help creationists both generally (the interest and rational analysis stroking their egos) and specifically (by making the papers better). But as the papers can't support what they pretend to support (teleology in evolution), it will not matter much.

Perhaps they can find that biologically inspired software models have amounts of "artificial" information inserted. But they don't need to, as targets can be randomly selected and the constraints are natural. After all, evolutionary theory is explicitly non-teleological in its description of biological systems' behavior, so we can make natural models the same.

Ironically, observations in sciences are most often artificially produced from experiments, and they can still be used to test predictions. But ID wants to tear the whole of science down, so they ignore that.

Posted by: Torbjörn Larsson, OM | September 18, 2007 8:23 AM

#17
Is Marks being misleading or is he cherry picking an example ... Schneider makes some effort to make it a "biologically realistic" model, and makes some argument that this realism is important. But then so much about the model isn't realistic, it kind of undercuts his argument.

He is cherry-picking for all I can see. He runs a comparison with ev over a wide range of parameters outside the area Schneider expressly says will model the biological situation and hence evolution in action. There isn't really a discussion in the paper, but in the conclusion Marks and Dembski weasel-word the following:

The success of ev was not due to its evolutionary search procedure but to a fortunate matching between the search structure and the problem being solved.

Which is exactly true, of course. The fortunate matching is when the parameters match the biological model.

Also, Marks totally disregards that the ev perceptron itself models the genetic machinery that has earlier resulted from evolution. So as I understand it there are at least two technical problems in that paper.

Yes, Schneider's model isn't fully realistic; he discusses a lot of approximations and omissions. Also, he assumes (which is the fortunate matching) that the program should mimic independent variation. This is the usual situation in nature, but note that the biological theory itself neither demands or requires that, only "variation".

But as a demonstration that the genome accepts Shannon information from the environment, it is an interesting experiment. (I wouldn't say test, since evolutionary theory doesn't concern information.) But I wish Schneider had picked a simpler and more clearcut model to demonstrate it with.

Posted by: Torbjörn Larsson, OM | September 18, 2007 8:57 AM

#18

Oops. Too hasty there:

He is cherry-picking for all I can see.

As it was put here, misleading or cherry-picking, he is also misleading IMO. The cherry-picking is in using an evolutionary algorithm modeling a natural system instead of examples of designed algorithms. The misleading is in the discussion of Schneider's choice of parameters and the weasel words in the conclusion.

biological theory itself neither demands or requires that,

biological theory itself neither demands nor predicts that,

Posted by: Torbjörn Larsson, OM | September 18, 2007 9:06 AM

#19

RBH:

Yes, exactly. Well said!

Posted by: Mark C. Chu-Carroll | September 18, 2007 9:33 AM

#20

When I beta tested John Holland's book on the Genetic Algorithm (1975-1976) and was the first to use it to find solutions to an unsolved problem in the scientific literature, I was guided by Oliver Selfridge (Father of Machine Perception).

He had a bunch of grad students compete in a learning problem, the repeated "coin guessing" problem (2x2 payoff bimatrix, but similar in learning complexity to rock-paper-scissors).

My program came in second of over a dozen competing. It was a GA with its own parameters coded into the "gene" string, which varied its chromosome length and other parameters based on the history of the competition.

Mine lost to a program that bundled many smaller programs within it, and passed the token between them to the one whose score would have been best so far if it had been the one with the token all along.

The (British) author of the winner was hired by IBM, and moved to Florida to work on the not-yet-released PC.

Few people today seem to understand the implicit parallelism in GA (in that it is exponentially evolving sampled "schema" of chromosomes in a larger search space concurrently with evolving chromosomes in the base search space).

I still have unanswered questions on the meta-GA which evolves its own parameters concurrently with evolving populations. My questions and partial results of 1/3 century ago have been cited in refereed papers by Prof. Philip V. Fellman.

In the No Free Lunch Theorem, and its abuses, I am still unsure about the definition of "algorithm" and "search space" and "cost function" and the like. I suspect that there are hidden assumptions about distributions and the spaces of possible spaces.

Posted by: Jonathan Vos Post | September 18, 2007 1:05 PM

#21

Easy for an intelligent agent like you, but easy for some algorithm? Dembski wants precisely to show that you, and not an algorithm, can easily design a search algorithm with higher performance.

What is an "intelligent agent"?

Posted by: Coin | September 18, 2007 3:11 PM

#22
What is an "intelligent agent"?

Hmm, coin. Know anything about collective intelligences (COINs)?

I was trying to echo Dembski. I should have written "an intelligence like yours" to stay clear of embodiment and agency. In intelligent design, an intelligence is a supernatural (ID proponents used to say "non-natural," and now say "non-material") source of information. If the active information in a search seems too much to have arisen naturally, Dembski will say it must have come from an intelligence.

Few who seriously investigate intelligence in animals believe there is any one thing that constitutes intelligence, or that intelligence is anything but a hypothetical construct. The norm is to define intelligence operationally, and definitions differ hugely from study to study. Unfortunately, some very bright scientists and engineers slip into treating intelligence as a vital essence that inheres in some systems and not in others. They haplessly play into the hands of ID advocates who are better philosophers than they.

No doubt many ID proponents secretly equate "unembodied" intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities.

Posted by: Tom English | September 19, 2007 1:55 AM

#23

Hi, David.

I find it hard to believe that the "ev" algorithm contributed negative information per these definitions.

The measure is relative, and is negative simply because the ev doesn't perform as well as random search does on average.

There have been several times over the years that I have suggested in reviews of conference papers that the authors compare their fancy new algorithms to random search. Given that random search is, loosely speaking, the average search, using it to establish a baseline makes a lot of sense.

Posted by: Tom English | September 19, 2007 2:32 AM

#24

Tom English:
"So picking an algorithm randomly vs. applying random search is a distinction without a difference."

Isn't this true when the algorithms being selected contain no information about the target, but not true if they do?

Posted by: Anonymous | September 19, 2007 10:07 AM

#25

Tom English:
"No doubt many ID proponents secretly equate "unembodied" intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities."

Well said. So why don't you agree?

Posted by: Anonymous | September 19, 2007 10:21 AM

#26

Anonymous:

Tom English: "No doubt many ID proponents secretly equate "unembodied" intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities."

Well said. So why don't you agree?

That would be because it's nonsense. First - when we're talking about information theory, the idea of "unembodied", "spiritual", "not just physical" are all undefinable concepts. They just don't mean anything in terms of the theory. If you want to adopt information theory for an argument, you're stuck working in terms of the concepts that are defined in the framework of information theory.

Second, according to the definition of information in information theory, information is *constantly* produced by what are presumed to be un-intelligent, purely physical entities, by natural processes that are effectively random.

The ID folks want to create some special distinguished kind of "information" which can only be produced by intelligent agents. That's really the idea behind specified complexity, irreducible complexity, and several other similar arguments. The problem is, they can't define what an intelligent agent *is* by anything other than a silly circular argument. What's an intelligent agent according to Dembski? An agent that can produce specified complexity. What's specified complexity? Complexity which has a property that could only be created by an intelligent agent.

They drown those ideas in dreadful prose and massive amounts of hedging, to try to distract people from noticing the ultimate circularity of it. But look at anything by Dembski: does he *ever* offer a precise definition of specification, which doesn't contradict his definition of complexity?

Posted by: Mark C. Chu-Carroll | September 19, 2007 11:12 AM

#27

Tom English:

You're entirely welcome to talk my ear off all you want.

Two of my favorite things on my blog are having people who know more than me drop by and teach me something; and having someone involved in something I'm writing about come by to join the conversation.

Posted by: Mark C. Chu-Carroll | September 19, 2007 11:15 AM

#28

It is unlikely that this conversation will resolve the question: "is intelligence the result of entirely physical processes?"

That is a metaphysical question.

Most neurophysiologists assume that intelligence can be reduced to an emergent property of neurons (possibly with DNA, RNA, Protein interaction of some sort as well as electrochemical) of specific structure in a network of specific structure which learns by specific structural changes.

Most practitioners or theorists of Artificial Intelligence assume that intelligence is an emergent property of software (perhaps in AI languages) running on commercial hardware.

The argument about existence or nonexistence of "spirits" of various kinds, elves, angels, demons, gods, hinges on the metaphysical stance.

The argument about animal rights, on the basis that animals are intelligent in the same way (albeit a different quantity) as humans hinges on the metaphysical stance.

After spending several years operating within the cult of Strong AI, back in the early and mid 1970s in grad school, I have retreated to being a strong AI agnostic.

The late Alex the Parrot slightly shifted my belief in animal intelligence in the direction that John Lilly tried to persuade me decades earlier about dolphins.

I think that ID is a metaphysical stance trying to pretend that it is a Scientific Theory. It is rather difficult to apply Math to Metaphysics. I have joked here before about Theomathematics and Theophysics. But the ID advocates are not joking.

Posted by: Jonathan Vos Post | September 19, 2007 12:12 PM

#29

Anonymous says:

First - when we're talking about information theory, the idea of "unembodied", "spiritual", "not just physical" are all undefinable concepts. They just don't mean anything in terms of the theory.

Really? Have you read what Dembski says about "unembodied designers" in NFL? It's quite brilliant, you know.

If you want to adopt information theory for an argument, you're stuck working in terms of the concepts that are defined in the framework of information theory.

Dembski has no problem incorporating "unembodied designers" into information theory via quantum mechanical probabilities. You should read it.

Second, according to the definition of information in information theory, information is *constantly* produced by what are presumed to be un-intelligent, purely physical entities, by natural processes that are effectively random.

Keith Devlin, I believe it is, wrote a review of NFL. In it he points out the rather severe limitations of both Shannon information and Kolmogorov complexity. CSI is a much more realistic concept of what we generally mean by "information". Let's remember that both Shannon and Kolmogorov were dealing with digital codes; hardly the stuff of everyday life (except for code writers).

The ID folks want to create some special distinguished kind of "information" which can only be produced by intelligent agents. That's really the idea behind specified complexity, irreducible complexity, and several other similar arguments. The problem is, they can't define what an intelligent agent *is* by anything other than a silly circular argument. What's an intelligent agent according to Dembski? An agent that can produce specified complexity. What's specified complexity? Complexity which has a property that could only be created by an intelligent agent.

Really? Is it a circular argument? Let's see: we find in nature something that is both complex and specified by some independent pattern; and if the complexity is of sufficient magnitude, then design is inferred. What's circular about that? It's the conjunction of a specified pattern and a high level of complexity that allows us to draw such an inference. There is no special property of complexity. Complexity ends up being simply the inverse of Shannon information. You seem entirely comfortable with that notion, right?


They drown those ideas in dreadful prose and massive amounts of hedging, to try to distract people from noticing the ultimate circularity of it. But look at anything by Dembski: does he *ever* offer a precise definition of specification, which doesn't contradict his definition of complexity?

The problem with defining 'specification' is that it involves a simultaneous intellectual act, and to define its mathematical constituents is not easy, nor does it lend itself to simple exposition. It's generally the recognition of a pattern which induces a rejection region in the extremal ends of a uniform probability distribution of such magnitude as to exceed the universal probability bound of 1 in 10^150 (a probability of 10^-150).

Now, if you want circularity, how's this: Who survives? The fittest. Who are the fittest? Those who survive.

Posted by: Lino D'Ischia | September 19, 2007 12:55 PM

#30

Lino:

I've read Dembski's NFL stuff, and I've commented on it on this blog multiple times. It doesn't do anything to define just what an "unembodied" intelligence is.

Specified complexity is, as I've argued numerous times, a nonsensical term. Dembski is remarkably careful in presentations and writings to never precisely define just what specification means.

There's a good reason for that. Because specification, as he defines it informally, translated into formal terms, means one of two things.

One possibility, the more charitable one, is that specification is a kind of subset property of information. That is, a specification of a system is a partial description of it - a description which includes some set of properties that the full information must have. A system that matches the specification contains the properties described by the specification - in information theoretic terms, the embodiment of the specification contains a superset of the information in the specification. The problem with this one is that under this definition of specification, every complex system is specifiable. You can always extract a subset of the information in a system, and use it to create a specification of that system; and every specification can be realized by an infinite number of complex systems. If everything complex has specified complexity; and every specification can be realized by a variety of complex systems, then SC is useless and meaningless.

The other possible sense of specification is the opposite of complexity. Under this definition, a specifiable system is a system that can be completely described by a simple specification. But if the specification is simple and completely describes the system, then according to information theory, the system cannot be complex. Using this definition (which Dembski implies is the correct one in several papers, while leaving enough weasel-space to wiggle out), a system with "specified complexity" is a system which has both high information content (complex) and low information content (specification) at the same time.

And I'll point out that you engage in exactly the same kind of weaseling as Dembski. You can't define specification. You want to claim that Dembski's math defines some new kind of information theory, and that that theory gives you a handle on how to capture ideas which cannot be represented in conventional information theory. But you can't give a mathematical definition. You can't define what specification means, or how to compute it. Why is that?

Finally, your question about the circularity of survival of the fittest: any time you reduce a complex scientific theory down to a trivial one-sentence description, you're throwing out important parts of it. If all evolution said was embodied in "survival of the fittest", then you'd be right that it would be an empty, meaningless thing that explained nothing: the individuals that live to reproduce are the individuals that live to reproduce.

But in fact, that description of evolution is an example of the first possible definition of "specification" as given above. The fact that some individuals survive and some do not, and only the ones that survive reproduce - is a crucial ingredient in the process of evolution. You could call it a specification of one necessary aspect. But just like that definition of specification doesn't do what Dembski wants it to do, it doesn't work well in this case. Because it's incomplete, and can be matched by both the real observed phenomenon of evolution, and numerous other phenomena as well.

"Survival of the fittest" leaves out crucial parts of the real definition of evolution. Evolution isn't just the fact that some survive and reproduce, and some don't. It also includes change: the population of individuals is undergoing a constant process of change. Every individual has mutations in their genes. When those mutations help, the individual might manage to survive when others wouldn't. When those mutations hurt, the individual might not survive where others would. The effect of change combined with differential success means that the genetic makeup of the population is changing over time.

Even that is a simplification, but a far more informative and complete one than "survival of the fittest". And it demonstrates why there's no real circularity.
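
Here's a bare-bones sketch of that combination of variation and differential success (a toy model only, with no claim of biological realism):

```python
import random

GENOME_LEN = 32

def fitness(genome):
    # Count of 1-bits: a stand-in for how well an individual does.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Variation: each bit flips with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
for generation in range(100):
    # Differential success: the fitter half survives and reproduces, with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(g) for g in survivors]

# Average fitness climbs well above the ~16 a random starting population has.
print(sum(fitness(g) for g in population) / len(population))
```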

On the other hand, Dembski, by refusing to provide real definitions of specification, intelligence, etc., turns his voluminous writings into a meaningless pile of rubbish, because at its foundation, it has no meaning. Because it lacks any actual meaningful foundations, the whole thing collapses under its own weight. It's just a smoke-screen, trying to hide the fact that there's nothing really there.

Posted by: Mark C. Chu-Carroll | September 19, 2007 1:55 PM

#31

Lino:

Dembski's CSI is utter crap. See my paper with Elsberry, http://www.talkreason.org/articles/eandsdembski.pdf , which explains in detail why CSI is incoherent and doesn't have the properties Dembski claims.

Posted by: Jeffrey Shallit | September 19, 2007 2:36 PM

#32

Lino D'Ischia :

Dembski has no problem incorporating "unembodied designers" into information theory via quantum mechanical probabilities.

And they differ from classical probabilities how?

It's generally the recognition of a pattern which induces a rejection region in the extremal ends of a uniform probability distribution of such magnitude as to exceed the universal probability bound of 1 in 10^150 (a probability of 10^-150).

This is crap à la Dembski.

First, there is no "universal probability bound". Sometimes it is useful to exclude improbable events, but that is always made in a specific model which tells you what limits to use.

Second, you assume that the process you observe has a uniform probability. That is uncommon in natural processes. Every energy driven process that dissipates energy will see the system visit improbable states where it is driven. Dissipation requires such states or the energy would be conserved. And the biosphere is energy driven by the sun and dissipating into space.

Third, we know that selection enhances evolution rates so that new traits appear and fixate on much shorter time scales than the above bound implies. For example, human populations have evolved lactose tolerance several times in recent history, when effective population sizes have been a few thousand in the herders' areas. So we are discussing evolution rates for new traits of at least 10^-6 traits/generation or so, in sexual populations. Those traits aren't planned but are the process's response to the environment.

Posted by: Torbjörn Larsson, OM | September 20, 2007 2:37 PM

#33

And they differ from classical probabilities how?

Well, they're complex. And as we all know, God is an imaginary number.

Posted by: Coin | September 20, 2007 2:49 PM

#34

Mark C. Chu-Carroll wrote, "It also includes change: the population of individuals is undergoing a constant process of change."

Which was precisely the argument that disconcerted a couple of evangelicals who came to my door yesterday.

I pointed out that things designed by humans are largely identical. There is little variation in the shape of a door or a window, extruded vinyl siding is incredibly homogenous; things that are designed by an intelligence are often made as identical copies to the best of our abilities. (Which is one of the points of the six-sigma initiatives.)

Things growing by natural processes have far greater differences than those designed by man. (With obvious exceptions of course.)

I pulled a few leaves off the ivy and showed them the vast differences found even on the same plant. Size, shape, and color were all explainable by natural processes, but not even close to what we see in items which are designed.

I didn't convince them, but I think they might have seen my point. To them, I suspect, it made their creator even more impressive.

Posted by: Flex | September 20, 2007 3:25 PM

#35
"So picking an algorithm randomly vs. applying random search is a distinction without a difference."

Isn't this true when the algorithms being selected contain no information about the target, but not true if they do?

I'm giving this a very casual response. For any algorithm with information there's a corresponding algorithm with misinformation (negative information). If you fix the cost function and randomly draw algorithms a large number of times, the positive and negative information cancel one another out.

Posted by: Tom English | September 20, 2007 9:35 PM

#36

Mark,

I've read and enjoyed your comments many times. When I was in grad school, a friend of mine used to say, as he headed off to teach, "Well, guess I'll go stomp me out some ignorance." You keep on stomping, guy.

Tom

Posted by: Tom English | September 20, 2007 10:15 PM

#37

Tom,

Thanks for responding to my comment. I understand the claim that ev was relatively worse than random search and therefore contributed "negative information". My question, looking at the slides in chengdu.ppt on the EvoInfo resources page, was how that claim is supported. Even allowing for all the skipped steps that I would expect in a keynote, not a rigorous presentation, I find Marks' claims difficult to believe. The numbers thrown around on those slides just don't make a coherent argument to me.

I'm happy to discuss the weaknesses of ev if it comes to that, just like I'm happy to discuss the weaknesses of WEASEL. That's what I call cherry-picking a weak example. However, if Marks' numbers are wrong, that is what I would (charitably) call misleading.

Posted by: David vun Kannon | September 20, 2007 11:58 PM

#38

http://www.thenation.com/doc/20071008/hacking

Root and Branch
by IAN HACKING
The Nation
[from the October 8, 2007 issue]

First the bright side. The anti-Darwin movement has racked up one astounding achievement. It has made a significant proportion of American parents care about what their children are taught in school. And this is not a question of sex or salacious novels; the parents want their children to be taught the truth. None of your fancy literary high jinks here, with truth being "relative." No, this is about the real McCoy.

According to a USA Today/Gallup poll conducted this year, more than half of Americans believe God created the first human beings less than 10,000 years ago. Why should they pay for schools that teach the opposite? These people have a definite and distinct idea in mind. Most of the other half of the population would be hard-pressed to say anything clear or coherent about the idea of evolution that they support, but they do want children to learn what biologists have found out about life on earth. Both sides want children to learn the truth, as best as it is known today.

The debate about who decides what gets taught is fascinating, albeit excruciating for those who have to defend the schools against bunkum. Democracy, as Plato keenly observed, is a pain for those who know better. The public debate about evolution itself, as opposed to whether to teach it, is something else. It is boring, demeaning and insufferably dull.

[truncated]

The Discovery Institute, a conservative think tank, states that "neo-Darwinism" posits "the existence of a single Tree of Life with its roots in a Last Universal Common Ancestor." That tree of life is enemy number one, for it puts human beings in the same tree of descent as every other kind of organism, "making a monkey out of man," as the rhetoric goes. Enemy number two is "the sufficiency of small-scale random variation and natural selection to explain major changes in organismal form and function." This is the doctrine that all forms of life, including ours, arise by chance. Never underestimate the extraordinary implausibility of both these theses. They are, quite literally, awesome.

[truncated]

Posted by: Jonathan Vos Post | September 22, 2007 2:53 PM

#39

Jeff Shallit:

"Dembski's CSI is utter crap. See my paper with Elsberry, http://www.talkreason.org/articles/eandsdembski.pdf , which explains in detail why CSI is incoherent and doesn't have the properties Dembski claims.

It's taken me a little while to work through your paper; so sorry for the delay. As to the paper, I don't see any substantive criticism by you and Elsberry that makes any serious dents in Dembski's explanation of CSI. What I detect in your criticism, in most instances, is a confusion between the notion of "information" and "Complex Specified Information" (CSI), and between "specifying" and the more formal "specification". Now, having said that, pinning down exactly what a "specification" is is no easy task. (That's what I alluded to in the previous post.) So it's very understandable that there is a struggle to fully grasp the concept (it proves to be a rather slippery one), but most, if not all, of your objections, I believe, can be countered.

Not being of the mind to write a 90 page paper to rebut every argument you make, I would be happy to discuss any of these arguments with you. Just select one.

If I may, to get things started, I'll just give one (almost glaring) example of where you fail to distinguish between "information" and CSI, with the result that your argument ends up dissolving away.

In Section 9, "The Law of Conservation of Information", your argument runs along these lines: Ω0 ⊆ Σ*, where Σ and Δ are finite alphabets . . . Dembski justifies his assertion by transforming the probability space Ω1 by f^-1. This is reasonable under the causal-history-based interpretation. But under the uniform probability interpretation, we may not even know that j is formed by f(i). In fact, it may not even be mathematically meaningful to perform this transform, since j is being viewed as part of a larger uniform probability space, and f^-1 may not even be defined there.
This error in reasoning can be illustrated as follows. Given a binary string x we may encode it in "pseudo-unary" as follows: append a 1 on the front of x, treat the result as a number n represented in base 2, and then write down n 1's followed by a 0. . . . If we let f: Σ* → Σ* be the mapping on binary strings giving a unary encoding, then it is easy to see that f can generate CSI. For example, suppose we consider a 10-bit binary string chosen randomly and uniformly from the space of all such strings, of cardinality 1024. The CSI in such a string is clearly at most 10 bits. Now, however, we transform this space using f. The result is a space of strings of varying length l, with 1025 ≤ l ≤ 2048. If we viewed this event f(i) for some i we would, under the uniform probability interpretation of CSI, interpret it as being chosen from the space of all strings of length l. But now we cannot even apply f^-1 to any of these strings, other than f(i)! Furthermore, because of the simple structure of f(i) (all 1's followed by a 0), it would presumably be easily specified by a target with tiny probability. The result is that f(i) would be CSI, but i would not be."

The first error I see is that you have equated CSI with a 10-bit string. But Dembski very clearly assigns an upper probability bound of 10^150, or 2^500, or 500 bits. You acknowledge the upper probability bound in Section 11 (CSI and Biology). Since 10 bits falls well short of the 500 bits necessary, it is meaningless to speak of CSI. IOW, both i and f(i) do not exhibit CSI. Now, if you were to use string lengths i of sufficient length (i.e., ≥500), using this "pseudo-unary" program, we would find that the output f(i) would then be between 10^140 and 10^150 1's. Now there are only 10^80 particles in the entire universe, so even if you lined up all the atoms that exist in the world, you would be way short of what you needed.

The second error occurs in the next paragraph on p. 26 where you invoke the Caputo case as an instance of "specification", much like you did in the penultimate sentence I quoted above. The "reference class of all possibilities" in the Caputo case was about half a trillion. The 40 D's and 1 R was simply one "event" that belonged to that reference class. In order for CSI to be present, the reference class would have to be comprised of at least 10^150 elements/events. So, indeed, the 40 D's and 1 R of the Caputo case is certainly "specified", but it doesn't constitute a "specification" because the "rejection region" it defines is not of sufficient complexity.

The third error I see again involves "specification". As I just mentioned, in Dembski's technical definition of CSI, a "specification" is a true "specification" when the pattern that is identified by the intelligent agent induces a rejection region such that, including replicational resources and specificational resources, the improbability of the conceptual event that coincides with the physical event is less probable than 1 in 10^150.

I've already gone farther than I intended. But, before I leave, I want to ask you something about your SAI, formulated in Appendix A. Below are two bit strings, A and B. Using any compression programs you have available to you (I have none; or if I do have them available I sure don't know how to get to them), which of the two ends up with the smallest input string; i.e., which has the greater SAI? And, then, if you can tell me, which of the two is "designed"?

Here they are:
A:
1001110111010101111101001
1011000110110011101111011
0110111111001101010000110
1100111110100010100001101
1001111100110101000011010
0010101000011110111110101
0111010001111100111101010
11101110001011110
B:
1001001101101000101011111
1111110101000101111101001
0110010100101100101110101
0110010111100000001010101
0111110101001000110110011
0110100111110100110101011
0010001111110111111011010
00001110100100111


A:


1001110111010101111101001
1011000110110011101111011
0110111111001101010000110
1100111110100010100001101
1001111100110101000011010
0010101000011110111110101
0111010001111100111101010
11101110001011110

B:

1001001101101000101011111
1111110101000101111101001
0110010100101100101110101
0110010111100000001010101
0111110101001000110110011
0110100111110100110101011
0010001111110111111011010
00001110100100111

Posted by: Lino D'Ischia | September 22, 2007 4:35 PM

#40

Sorry. I don't know how the two bit-strings got duplicated. But that is what it is: a simple duplication. So please ignore the repeat.

Posted by: Lino D'Ischia | September 22, 2007 4:38 PM

#41

I'd heard that Salvador was going offline. Is that true?

In any case, with regard to disembodied designers, I do wonder what the bandwidth of information transfer is "at the limit" as the energy approaches zero.

Posted by: Unsympathetic reader | September 22, 2007 5:34 PM

#42

Unsympathetic Reader:

"I'd heard that Salvador was going offline. Is that true?

In any case, with regard to disembodied designers, I do wonder what the bandwidth of information transfer is "at the limit" as the energy approaches zero.

"

If you're interested in just what "unembodied designers" can do, Dembski talks about that very thing in NFL. He has a very interesting QM take on it. It's really quite brilliant.

As to Sal, what kind of commentary on the biology community is it when someone like Sal has to disappear from blogs so as to not threaten his newly-started up university education?

Is this modern-day Lysenkoism?

Posted by: Lino D'Ischia | September 22, 2007 7:23 PM

#43
If you're interested in just what "unembodied designers" can do, Dembski talks about that very thing in NFL. He has a very interesting QM take on it. It's really quite brilliant.

Sure, it's quite brilliant, provided what you mean by "quite brilliant" is utter nonsense cleverly written to make it appear as if it says something deep while actually saying absolutely nothing.

Dembski is a master at weaseling around, making compelling looking arguments while leaving enough gaping holes in the argument to allow him to weasel out of any possibly critique.

One of the sad things about quantum theory is how it's become a magnet for liars. Because pretty much no one really understands it, it's easy for people like Dembski to jump in, wave his hands around shouting "quantum, quantum", and pretending that it somehow supports what he's saying.

As for what Sal's disappearance says about the biology community, I'd argue that what is really says is: "If you want to have any chance of being taken seriously as a researcher, you probably don't want to be known as a slimy,
quote-mining, lying sycophant to a bunch of loonie-tune assholes".

Posted by: Mark C. Chu-Carroll | September 22, 2007 9:02 PM

#44

Mark C. Chu-Carroll:

"One of the sad things about quantum theory is how it's become a magnet for liars. Because pretty much no one really understands it, it's easy for people like Dembski to jump in, wave his hands around shouting "quantum, quantum", and pretending that it somehow supports what he's saying."

Mark, I would agree with you on this point. You quite frequently find people extending and extrapolating QM to places and in ways that should never be. But what Dembski does is quite legitimate. He simply points out that the statistical nature of QM permits events taking place that don't involve the imparting of energy but simply a rearranging of the elements of the probability distribution. I don't think I would have ever thought of it.


Posted by: Anonymous | September 22, 2007 10:16 PM

#45

He simply points out that the statistical nature of QM permits events taking place that don't involve the imparting of energy but simply a rearranging of the elements of the probability distribution.
The last part of your sentence doesn't make any sense. Are you talking about measurement of entangled states? This is a cop-out: which distribution?

Here's the quote from Dembski (via talk.origins):
"Thermodynamic limitations do apply if we are dealing with embodied designers who need to output energy to transmit information. But unembodied designers who co-opt random processes and induce them to exhibit specified complexity are not required to expend any energy. For them the problem of "moving the particles" simply does not arise. Indeed, they are utterly free from the charge of counterfactual substitution, in which natural laws dictate that particles would have to move one way but ended up moving another because an unembodied designer intervened. Indeterminism means that an unembodied designer can substantively affect the structure of the physical world by imparting information without imparting energy." [p. 341]
"For now, however, quantum theory is probably the best place to locate indeterminism." [p. 336]

The problem: the processes are not random, they're stochastic. The results will follow QM distributions.

Where is this information being imparted? In atoms? Fermions? Bosons? Spin states? Momentum states? Will I always roll a spin-up? 1st excited state? Left circular polarization? You still need energy to create the perturbation that would favor a quantum state with certain information.

Consider teleportation: If the "unembodied designer" wanted to simply copy his quantum information into the quantum information of another atom, he would still require two extra atoms (or photons, electrons, Josephson junctions) along with some perturbations to both couple and change the atoms' state.

Posted by: creeky belly | September 23, 2007 3:52 AM

#46

...Josephson junctions) along with some perturbations to both couple and change the atoms' state.
I should mention that the perturbations in this case are creating the two extra atoms, since they can't be co-opted from others (what state would they be in?).
More information on teleportation here.

Posted by: creeky belly | September 23, 2007 4:06 AM

#47

Lino:

Well, I'll give you credit for one thing: at least you've actually read the paper and responded to it, which is more than Dembski has done.

To respond to your critiques: first, you claim that one must have 500 bits to constitute CSI. I say, take that up with Dembski, then, because on page 159 of his book "Intelligent Design", Dembski says, "The sixteen-digit number on your VISA card is an example of CSI".

Second, you object to our simple example of how CSI can be generated if one doesn't specify the probability space correctly. But you have failed to understand the objection. The point is that f(i), when viewed as an element of the space of binary strings, does exhibit CSI, since it has 1024 bits. i itself does not because it is too short, but that is precisely our point! Here we have constructed CSI out of applying a function to something that isn't -- something that Dembski claims is impossible.

As for specification, I think you also fail to understand that Dembskian concept. Specification only deals with the assignment of an event to a subset of a reference class of events; there is nothing inherent in a specification that says it must refer to a subset with low probability. Go read section 1.4 of No Free Lunch again. Or go to page 111, where Dembski writes, "The 'complexity' in 'specified complexity' is a measure of improbability". So if the word complexity refers to improbability, it follows that the specified part must not, in itself, be related to probability.

As for your last question, I think you are confused. I am not claiming that Dembski or SAI can "detect design". It is the whole point of our paper that "detecting design" is not something one can determine by mathematical arguments alone.

Posted by: Jeffrey Shallit | September 23, 2007 7:52 AM

#48

You still need energy to create the perturbation that would favor a quantum state with certain information.

No you don't. Unembodied designers can do whatever the hell they want. All Dembski said was that, "for now, however, blah blah indeterminism, blah blah." (I'm paraphrasing.)

Note the tentative "for now, blah blah blah blah."

Lol, "unembodied designers". What hooey!

Posted by: 386sx | September 23, 2007 4:27 PM

#49

"Indeterminism means that an unembodied designer can substantively affect the structure of the physical world by imparting information without imparting energy."

So essentially, the "unembodied designer" is a perpetual motion machine?

Posted by: Tyler DiPietro | September 23, 2007 5:31 PM

#50

Anonymous:

That's exactly what I mean by chanting "quantum, quantum" while waving hands around.

Quantum physics says that there's some level where we don't understand what's going on, and which we can only describe in terms of a probability distribution.

Dembski's argument is, basically, saying that because we don't understand what's happening on that level, that he can stick the actions of his "disembodied designer" into that unexplained level.

It's a clever way of arguing, because it's playing with something that is, genuinely, deeply mysterious. And since we don't have a particularly good understanding of what's going on on that level - even the best experts find it largely incomprehensible - it's very hard for a layman to make any argument against it. So the laymen can't really respond. But Dembski *also* doesn't actually show where/how his "unembodied designer" fits into the intricate and subtle math of quantum physics - so it's too vague for an expert to form a good argument against.

In other words, it's classic Dembski. It sounds very impressive, it's full of obfuscatory math to make it look and sound complicated, but it's so vague and ultimately meaningless that you can't pin it down enough to conclusively debunk it as the nonsense that it is: any attempt at debunking it will simply be met with "But that's not what I meant".


Posted by: Mark C. Chu-Carroll | September 23, 2007 7:35 PM

#51

Jeff:

To respond to your response: First, you dispute my claim that 500 bits of information are necessary to have CSI. But then in responding to my second objection, you say: "The point is that f(i), when viewed as an element of the space of binary strings, does exhibit CSI, since it has 1024 bits." And in disputing my claim you quote Dembski from his book "Intelligent Design", which is why, I guess, in the preamble of your paper you indicate that unless Dembski refutes something from his prior writings, you consider everything he wrote in play (since, as is clear to anyone who compares, the section on Visa cards and phone numbers in "Intelligent Design" has been deleted from NFL).

Secondly, in responding to my objection to your example of the "pseudo-unary" function, you say that the output represents 1024 bits, far beyond the 500 bits necessary. But, of course, these are 1024 "pseudo-bits", since the output is a unary output in binary form. Prescinding from this for the moment, for the sake of argument, let's say this really did represent 1024 bits of information. The question is this: Does this, or does it not, represent CSI? I guess you think that this output bit string represents CSI because, like Caputo's string of 40 D's and 1 R, this bit string is "specified". Well here, as I mentioned the first time, I would say you've missed the technical meaning of "specification". CSI is an ordered pair of events (T,E) with T inducing a rejection function that in turn forms a rejection region within the reference class of events. IOW, CSI represents the conjunction of a physical event and a conceptual event. [This is all abundantly clear in "No Free Lunch".] In the case of this 1024-bit string, which represents the "physical event", what is the "conceptual event" that describes it and, in describing it, induces a rejection region? You don't provide any such description or rejection region. We're left with one-half of CSI, and so we can't call an ordinary bit-string CSI.

Further, as is clear in Dembski's discussion on pp. 152-154, T induces a rejection region onto Ω0. In the example you use, no such rejection region is mentioned or specified in any way. So, for the sake of argument, let's say that your definition of a 10-bit string, the input parameter, represents T0, the rejection region in the reference class Ω0. The cardinality of this rejection region, as you point out, is 1024. Now let us suppose that the "conceptual event" C0 falls in this rejection region, and is identical with the physical event E0. Then the probability of C0 = E0 = 1 in 1024. Now let's look at the output reference class Ω1. The function f transforms T0 to T1, the rejection region in Ω1, C0 to C1, and E0 to E1. Now if the size of the rejection region hasn't changed, then the probability measure of the CSI involved doesn't change, and so the CSI remains unchanged in terms of bits. T0, defined on Ω0 as a 10-bit binary string, has 1024 elements. T1, defined on Ω1 as the "pseudo-unary" outputs, also involves just 1024 elements; that is, the unary strings of all ones ending with a zero, with length l between 1025 and 2048. You allude to this indirectly when, in your paper, you and Elsberry write: "If we viewed the event f(i) for some i we would, under the uniform probability interpretation of CSI, interpret it as being chosen from the space of all strings of length l. But now we cannot even apply f⁻¹ to any of these strings, other than f(i)!" If it were any different from this, then the probability of T0 would be different from that of T1. In both cases, however, it is 1 in 1024, as Dembski claims it should be.

Third, we come back to "specification" and what that entails. Here's what you say about specification: "Specification only deals with the assignment of an event to a subset of a reference class of events; there is nothing inherent in a specification that says it must refer to a subset with low probability." This is not how I read Dembski. Specification indeed involves the identification of a rejection region (subset) within some reference class of events. But this is called "T", as in the example above. Now, for there to be a specification, there now has to be a physical event, E, which, as you state, falls into the rejection region. And it is imperative that the rejection region be of fantastically low probability. We agree, you and I, on an event, E, falling into a prescribed region of a reference class of events. But to say that there is "nothing inherent in a specification that says it must refer to a subset with low probability" is too vague, and hence misleading. By phrasing it this way, you almost imply a refutation of what Dembski means by a rejection region, yet, if properly understood, there isn't a problem. Here's what I mean: yes, the rejection region can consist of events of infinitely high individual probability (is that what you mean by "[there is] nothing inherent in a specification that says it must refer to a subset with low probability"?); yet, nevertheless, it's possible for these events of infinitely high individual probability to form a subset/rejection region, relative to the overall reference class, that is nevertheless extremal, that is, of extremely low probability, and which, thusly, constitutes a legitimate rejection region. (There are two tails in a Gaussian distribution.) Dembski makes this point quite clear, I think.

In advising me to go to page 111, where I would find Dembski writing, "The 'complexity' in 'specified complexity' is a measure of improbability", were you aware that Dembski, in the paragraph you quote from, was stating Elliott Sober's criticism of his mathematical work? From what I read, the words in question are meant to be taken as Sober's words, not Dembski's. But maybe "talk.origins" wasn't careful enough and accidentally "quote-mined" Dembski. Obviously, any conclusions you might want to draw based on this quote lose any force they might otherwise have had.

Finally, Dembski makes the claim that the identification of CSI allows us to infer design. You're now saying that SAI can't do that. So, obviously, SAI and CSI differ in this regard. But you might object saying: "Well Dembski might think he can do that, but there's no way he can do that." Well, if that's the case, then I'm not sure why you would bother to compare them. My only conclusion is that you see some value in the concept of CSI, but, that you see problems with it, and, for the sake of the much more limited objective of identifying 'information', SAI is better equipped.

So, then, I won't ask you to tell me which one is designed. However, which one, according to SAI, has more information? A, or B?

A:

1001110111010101111101001
1011000110110011101111011
0110111111001101010000110
1100111110100010100001101
1001111100110101000011010
0010101000011110111110101
0111010001111100111101010
11101110001011110

B:

1001001101101000101011111
1111110101000101111101001
0110010100101100101110101
0110010111100000001010101
0111110101001000110110011
0110100111110100110101011
0010001111110111111011010
00001110100100111

Posted by: Lino D'Ischia | September 23, 2007 8:31 PM

#52

creeky belly:

"The problem: the processes are not random, they're stochastic. The results will follow QM distributions.

Where is this information being imparted? In atoms? Fermions? Bosons? Spin states? Momentum states? Will I always roll a spin-up? 1st excited state? Left circular polarization? You still need energy to create the perturbation that would favor a quantum state with certain information."


Here's what Dembski writes on pp. 340-341:

"Consider, for instance, a device that outputs 0s and 1s and for which our best science tells us that the bits are independent and identically distributed so that 0s and 1s each have probability 1/2. (The device is therefore an idealized coin tossing machine; note that quantum mechanics offers us such a device in the form of photons shot at a polaroid filter whose angle of polarization is 45 degrees in relation to the polarization of the photons--half the photons will go through the filter, counting as a "1"; the others will not, counting as a "0".) Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields and English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)? We have therefore precluded that a designer imparted a positive amount of energy (however miniscule) to influence the ouput of the device. . . . Any bit when viewed in isolation is the result of an irreducibly chance-driven process. And yet the arrangement of the bits in sequence cannot reasonably be attributed to chance and in fact points unmistakably to an intelligent designer." (For those interested, you should read the fuller account in Section 6.5 of NFL)


Mark C. Chu-Carroll:
"So the laymen can't really respond. But Dembski *also* doesn't actually show where/how his "unembodied designer" fits into the intricate and subtle math of quantum physics - so it's too vague for an expert to form a good argument against."

I think what Dembski describes simply suggests that it is possible for an "unembodied designer" to "act", that is, impart information, without energy being imparted. Remember, his jumping-off point for this is Paul Davies' remark that "At some point God has to move the particles."


Posted by: Lino D'Ischia | September 23, 2007 9:18 PM

#53

It's a clever way of arguing, because it's playing with something that is, genuinely, deeply mysterious.

Sorry but I don't see what's so clever about it. His "unembodied desiger" (lol) is immune from everything and...

"For now, however, quantum theory is probably the best place to locate indeterminism."

..."for now", he will stick it in quantum theory. But if that doesn't work out very well then hey too bad because the unembodied designer didn't need no quantum theory anyway because it is immune from everything no matter what.

What is so clever about that? It sounds frakking stupid to me. Shrug!

Posted by: 386sx | September 23, 2007 10:01 PM

#54

"But, of course, these are 1024 "pseudo-bits", since the output is a unary output in binary form. Prescinding from this for the moment, for the sake of argument, let's say this really did represent 1024 bits of information."

Well, no. From what I read in the paper, Shallit is describing a reversible encoding function, not much different than bijective mappings of bits onto the natural numbers. As a description method it does indeed represent 1024 bits of information. But that, of course, presumes a Kolmogorov definition of "information". I'm not entirely sure what definition you are implying here.

"In the case of this 1024 bit-string, which represents the "physical event", what is the "conceptual event" that describes it and, in describing it, induces a rejection region?"

The problem here is that you are assuming that Dembski's CSI can meaningfully measure physical events. That hasn't been demonstrated by any means, much less from Dembski himself.

"If it were any different from this, then the probability of T0 would be different from that of T1. In both cases, however, it is 1 in 1024, as Dembski claims it should be."

Well, no. A factorial calculation of all possible strings of length between 1025 and 2048 forms a set with cardinality that dwarfs all possible strings of length 10. This is a pretty elementary error, and may be the cause of confusion.

Posted by: Tyler DiPietro | September 23, 2007 10:09 PM

#55

"As a description method it does indeed represent 1024 bits of information."

This should read, "as the result of a reversible encoding function, it does indeed represent 1024 bits of information."

Posted by: Tyler DiPietro | September 23, 2007 10:25 PM

#56

"Consider, for instance, a device that outputs 0s and 1s and for which our best science tells us that the bits are independent and identically distributed so that 0s and 1s each have probability 1/2. (The device is therefore an idealized coin tossing machine... Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields and English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?

Okay, reasonable enough then: Obtain the cure for cancer by scrying, and then you can make an argument about the scientific value of Intelligent Design.

Until that happens, though, you don't appear to have actually answered creeky belly's question. Creeky belly's question as I understood it was: how can anything, unembodied or otherwise, inspire a stochastic process to output something specific instead of random data according to its probability distribution? Instead of answering, you simply gave a hypothetical example in which a stochastic process does output a specific message, rather than random data according to its probability distribution. Okay. How did that happen? How does it work? If the example occurred according to the actual laws of quantum physics, you'd have to do it the way creeky belly described-- you'd have to expend energy to perform some operation that changes the probability distribution. Right?

I think what Dembski describes simply suggests that it is possible for an "unembodied designer" to "act", that is, impart information, without energy being imparted.

Okay, so God can perform miracles. But why does God need quantum physics to perform miracles? Why does it make any more sense to suggest that supernatural beasties can influence the outcome of a stochastic system, than it would to suggest they can influence the outcome of a classical one? As long as we're making up new laws of physics, why not think big?

Posted by: Coin | September 23, 2007 11:04 PM

#57

Tyler DiPietro:

"Well, no. A factorial calculation of all possible strings of length between 1025 and 2048 forms a set with cardinality that dwarfes all possible strings of length 10. This is a pretty elementary error, and may be the cause of confusion."

It's essentially a unary output. What factorial calculation does this imply?

Coin:
"Okay. How did that happen? How does it work? If the example occurred according to the actual laws of quantum physics, you'd have to do it the way cheeky belly described-- you'd have to expend energy to perform some operation that changes the probability distribution. Right? "

No, you wouldn't have to expend any energy. That's really the point. And, to find out "how" it happened---well, good luck, because QM, being limited by its probabilistic interpretation, and because of the Uncertainty Principle, just doesn't allow much peering around to see just what happened. That's why I said it's brilliant. Wish I had thought of it first!

Coin:

"Okay, so God can perform miracles. But why does God need quantum physics to perform miracles? Why does it make any more sense to suggest that supernatural beasties can influence the outcome of a stochastic system, than it would to suggest they can influence the outcome of a classical one? As long as we're making up new laws of physics, why not think big?"

Well, we know that the universe was built from the smallest particles upwards, not from the largest downwards. So, you would have to think that if an "unembodied designer" is going to tinker, it would be done at the particle (QM) level, and not at the classical level. When you're at the classical level, that kind of tinkering, indeed, implies what is normally meant by "miracle."

Posted by: Lino D'Ischia | September 24, 2007 12:05 AM

#58

"It's essentially a unary output. What factorial calculation does this imply?"

1. The output of the encoding is unary, but neither this nor the encoding is known to the observer. All that is known is that it is one of all possible strings of binary digits of length l, where 1025 l

2. The factorial calculation is used for calculating all possible combinations of n objects. Shallit was talking about two completely different sets when talking about the number of possible permutations of binary digits in strings of length 10 (which has cardinality 1024) and possible permutations of binary digits in strings of all possible lengths between 1025 and 2048.

Posted by: Tyler DiPietro | September 24, 2007 12:29 AM

#59

All that is known is that it is one of all possible strings of binary digits of length l, where l is between 1025 and 2048.

Fixed.

Posted by: Tyler DiPietro | September 24, 2007 12:30 AM

#60

The end of my first point was also eaten.

Continuing: Furthermore, the reversal (decoding) function f⁻¹ is defined for f(i) but for no other strings in the set. Thus f(i) has CSI while its preimage i does not, which contradicts Dembski's claim that functions cannot generate CSI.

Posted by: Tyler DiPietro | September 24, 2007 12:37 AM

#61

As a biologist, I usually skip over ID arguments since they would clutter my brain with useless information -- it's hard enough keeping relevant information at hand. But, this discussion piqued my interest since it relates to some things I have been thinking about recently.

Reading through the Dembski and Marks paper at least taught me where those "bit scores" in sequence searches come from. One ID argument is the big-number, low-probability rationalization that chance cannot explain even the evolution of a single enzyme. In the paper, D&M use the cytochrome c protein with ~400 bits of information, with the search space reduced to ~110 bits because of the different frequencies of amino acids arising out of the genetic code. They then go on to say that the ~10^-35 probability is still small. However, in a very quick search with a bacterial cytochrome c, I found an example of two functionally identical proteins from different bacteria. The "bit score" was 51 bits out of a possible 122 bits for an exact match, with 44/169 identical amino acids. This is not a raw bit score, which seems to be used in D&M, but is dependent upon the size of the database searched. The probability of finding a sequence in the protein database with the same bit score is actually 4x10^-5, much larger than the ~10^-35 quoted in the paper. For a real description of the probabilities see the BLAST short course at NCBI.
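
For those curious where such numbers come from, BLAST converts a bit score into an expectation value and a P-value roughly as follows -- a Python sketch; the query and database lengths below are placeholders for illustration, not the actual values behind my 4x10^-5 figure:

    import math

    def blast_p_value(bit_score, query_len, db_len):
        # Karlin-Altschul statistics: E = m * n * 2**(-S') for bit score S',
        # and P = 1 - exp(-E).  See the NCBI BLAST short course for details.
        e_value = query_len * db_len * 2.0 ** (-bit_score)
        return 1.0 - math.exp(-e_value)

    # Illustrative sizes only (169-residue query, effective database of 5e8 residues):
    print(blast_p_value(51, query_len=169, db_len=500_000_000))   # on the order of 4e-5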

My point here is that the set of sequences that can perform the same catalytic reaction is larger than is usually assumed in ID arguments. From a practical point of view, most biologists that use these searches for similar proteins really don't need to know the specifics of the probabilities. They can use it as one uses a scale to weigh something - it gives a useful measure but you don't need to know Newtonian physics to use them.

Setting up a search algorithm that mimics evolution is difficult: How does one score the virtual protein in terms of functionality so that the program selects the winners in a population? For a typical enzyme, there are only a handful of amino acids that actually contact its substrate. Typically, changing these amino acids results in inactivation, but because of the chemical redundancy in the 20 amino acids there are "conservative" changes to a similar amino acid that may not significantly affect function.

In essence, the basic requirement for function is that a few amino acids are positioned correctly in an environment that is conducive to catalysis. Thus, the other amino acids are there to provide the structure and environment, and the rules measuring these parameters are very flexible. Hidden Markov methods have been used to deal with this flexibility, and several programs have been built so that you can take any protein sequence and ask "What sort of function may this protein have?" At best, these analyses only assign a protein to a family of enzymes catalyzing the same molecular rearrangements but with different substrates. One extreme example is the active site of RNA polymerase. The yeast and bacterial enzymes share little sequence identity, but their active site structures can be aligned with only a few Angstroms difference.

If it is difficult to estimate the probability of a specific sequence having function X, are there independent ways to arrive at the probability? Enter evolution in a test tube. A while ago, Keefe and Szostak reported a method where they were able to translate proteins in vitro and keep the protein attached to the message from which it was translated. They used this to select for proteins that bound to ATP from a pot of ~10^12 different proteins 80 amino acids in length. They obtained 4 different proteins whose sequences did not match any known ATP binding site. They and others have followed up on this, but it has not caught on widely because it is technically difficult, with many potentially good binding sites lost due to insolubility of the protein. With regard to this discussion, the main point is that for any specific function there may be many different sequences with different structures that are functionally equivalent. In other words, the set of sequences that can perform a specific function is likely much larger than what can be estimated from known examples, leading to a much higher probability of obtaining something that works. High enough that ID need not be invoked.

Posted by: Anonymous | September 24, 2007 1:08 AM

#62

No, you wouldn't have to expend any energy.

Why not?

And, to find out "how" it happened---well, good luck, because QM, being limited by its probabilistic interpretation, and because of the Uncertainty Principle, just doesn't allow much peering around to see just what happened.

People ask for an explanation of how something, which you claim to be possible, works; you repeatedly respond only with "well, if it happened, you wouldn't be able to explain why". Do you not see why people might not take this entirely seriously?

You've given us no reason to expect this "rearranging of the elements of the probability distribution" thing is possible or would ever happen-- no reason to expect there is any need to "peer around to see what happened". Rather than saying anything to convince us the effect you're describing exists, you're just giving us claimed reasons why the effect you're describing would be undetectable and inexplicable. But this makes the proposition of its existence less credible, not more; you're basically reinforcing what MarkCC said in comment #50.

That's why I said it's brilliant. Wish I had thought of it first!

Um, well it's not like it's that original-- I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you've been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can "randomly" collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference...

Posted by: Coin | September 24, 2007 3:02 AM

#63

Um, well it's not like it's that original-- I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you've been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can "randomly" collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference...

Right on man. I don't think it's "original". I don't think it's "brilliant". I just think it's stupid. :-)

Posted by: 386sx | September 24, 2007 4:30 AM

#64

Tyler DiPietro:
"The factorial calculation is used for calculating all possible combinations of n objects. Shallit was talking about two completely different sets when talking about the number of possible permutations of binary digits in strings of length 10 (which has cardinality 1024) and possible permutations of binary digits in strings of all possible lengths between 1025 and 2048.

Furthermore, the reversal (decoding) function f⁻¹ is defined for f(i) but for no other strings in the set. Thus f(i) has CSI while its preimage i does not, which contradicts Dembski's claim that functions cannot generate CSI."

The confusion here is that reference classes and rejection regions are being equated. Once you separate these two out, it is all conformable with what Dembski writes.

In Shallit's example, he makes no mention of a rejection region. You cannot have a "specification" a la Dembski if you don't describe, mathematically, some rejection region. So, if you don't describe a rejection region, then it is impossible to talk about CSI, even if you're talking about a million bits of "information" (which is not equatable with CSI). If you then set as the rejection region in Ω0, the space of all binary strings (which is then the same as Ω1), the subset of the first ten binary positions, a ten-bit string, this encompasses 1024 elements. After the function is performed, the rejection region in the "pseudo-unary" output has 1024 elements as well. This "rejection region" resides in the space of all binary strings of length l between 1025 and 2048. But this is simply a subset of the reference class of all binary numbers. When you reverse (invert) the function, the function only operates on those 1024 elements of the new subspace that constitute the new rejection region. Since the event E is one of those 1024 elements in the one rejection region and then the other, the probabilities remain the same, and hence if there is sufficient complexity (improbability) to constitute CSI in the one rejection region, then it will still represent CSI in the other rejection region.

Posted by: Lino D'Ischia | September 24, 2007 9:37 AM

#65

Anonymous:

" The probability of finding a sequence in the protein database with the same bit score is actually 4x10^-5, much smaller than the ~10^-35 quoted in the paper.

Your point seems to be that evolution can simply build new function upon old function and that these two are not separated by much improbability.

What you're leaving out here is the importance of cytochrome c. Cytochrome c is essential to cell duplication, and hence, is one of the most highly conserved proteins encountered. This is why, I'm sure, Dembski and Marks worked with cytochrome c. Whatever your thoughts regarding the ease of switching from enzyme function A to enzyme function B, it is all completely irrelevant if cytochrome c doesn't exist. So the improbability of its coming into existence sets a minimum barrier for all future function. But thanks for pointing out their paper. I didn't know they were working with proteins. I'll have to now go and read it.

Posted by: Lino D'Ischia | September 24, 2007 9:52 AM

#66

Coin:

"No, you wouldn't have to expend any energy."

Why not?

Dembski makes it clear that no energy was needed in the case of photons. Are photons particles? Do they have momentum? Well, their binary output should turn out to be 1/2 went up and 1/2 went down even when 'coding for a cancer cure'. Lo and behold, that's exactly what binary strings end up doing: averaging out to 1/2 0s and 1/2 1s. The probability distribution is undisturbed, hence no addition of energy.

Coin:

"Rather than saying anything to convince us the effect you're describing exists, you're just giving us claimed reasons why the effect you're describing would be undetectable and inexplicable.

Haven't you ever heard of "spooky" quantum effects? In your last comment about collapsing wave-functions and such, you've touched on the problem of QM: no one knows how this collapse takes place. It's hidden from us. That's just the nature of the beast. If I point out what is undetectable and inexplicable, it is because nature is such. I can't help that.

Posted by: Lino D'Ischia | September 24, 2007 10:11 AM

#67

Lino:

The problem is that you're playing the same old quantum mystery game that I criticized Dembski for originally.

That is - you don't have *any* mechanism for what you're claiming. You don't have any math to support its possibility. All you have is that wonderful word - "quantum", and the mystery that surrounds it. Anyone can project anything that they want into that mystery, and for the moment, no one can disprove it, because we just don't understand it.


That's why I say that far from being a "brilliant" discussion of the capabilities of a disembodied designer that it's nothing but a clever-sounding smoke-screen. Dembski isn't saying anything remotely deep or interesting - he's just taking advantage of something we don't understand to shove his pre-conceived idea of a designer behind the curtain.

If Dembski actually bothered to do the math - to show what he's claiming that his intelligent designer can do without expending any energy, how much of an effect it could produce, how it works mathematically - then it might be an interesting argument. But that's not what he does. He doesn't demonstrate *any* understanding of the actual math of quantum phenomena. It's just a smokescreen: invoke the magic "quantum", wave your hands around, and you're a brilliant scientist!

Posted by: Mark C. Chu-Carroll | September 24, 2007 11:20 AM

#68

Anonymous writes: "He [Dembski] simply points out that the statistical nature of QM permits events taking place that don't involve the imparting of energy but simply a rearranging of the elements of the probability distribution."

As creeky belly notes, this would make teleportation possible. And FTL communication. Whee!

And how is the probability distribution altered non-energetically? Morphic resonance? Lino D'Ischia, I'm afraid the Dembski quote provides nothing in the way of a mechanism for altering the probability distributions. You can rephrase his words but there's still no "there" there. Is Dembski a believer in psychic phenomena too? After all, if someone can alter any probability distribution at will, they can easily control the firing of neurons in the brain. Maybe ID has application in explaining "visions" and occult phenomena as well?

In any case, perhaps Bill and the DI should consider brewing a really warm cup of tea and recording the swirling patterns for signs of a Bible Code or something. At least then he might have a leg to stand upon.

I agree that it's a pity Salvador left. He was a useful foil. But if you make a habit of saying many ridiculous things that reveal horrendously bad scientific judgement, it's going to catch up with you eventually.

Posted by: Unsympathetic reader | September 24, 2007 11:21 AM

#69

Mark CC:

"That is - you don't have *any* mechanism for what you're claiming. You don't have any math to support its possibility. All you have is that wonderful word - "quantum", and the mystery that surrounds it. Anyone can project anything that they want into that mystery, and for the moment, no one can disprove it, because we just don't understand it."

Isn't this a silly contention on your part?

"Yes, you say that an 'embodied designer' can act without being detected. Well that's fine and dandy. But what mechanism do you propose? Without a mechanism, this is just hand-waving?"

So what you're asking for is that I come up with a mechanism for an action that is undetectable. This is ludicrous. If an action is undetectable, how would you know that your mechanism accounts for it? What if you were wrong: how would you know that? This is just silliness.

There's another way of looking at this, for those hung up on 'energy'. We live in a world that can only be tested down to certain limits. There is, after all, a Planck time and Planck length. For example, the human eye has a flicker rate of 57 flickers/sec. That's why electricity is 60 cycles/sec, so that even though the light is going on and off 60 times a second, as far as we're concerned, it's on all the time. Well, let us suppose that the universe has some kind of flicker rate of its own, likely greater than the inverse of the Planck time. (BTW, I think this kind of 'flickering' is what lies at the heart of quantum tunneling.) Now, the Heisenberg Uncertainty Principle is generally described using position and momentum, but it can also be applied to energy and time. For energy, it's: ΔE · Δt ≥ h/2π. So, if you have an 'unembodied designer' who can act 'infinitely fast', then an 'infinite' amount of energy could be imported into the universe without detection. Just like we can't detect the on/off behavior of your average light bulb, so we wouldn't be able to observe such an input of energy if it is done infinitely fast, or even nearly so.

Absurd, you say. Well, let me ask you this: we know that it is possible for particles to travel faster than light. Since the universe is expanding at a fast rate, light traveling along the fabric of space is then traveling super-luminally because space itself has a velocity. So, please explain to me where this added space comes from allowing for the expansion of the universe. Do you have any kind of answer at all? And, if you do, please indicate to me the 'mechanism' that's at work.

Posted by: Anonymous | September 24, 2007 12:52 PM

#70

Post #69 is mine.

Posted by: Lino D'Ischia | September 24, 2007 12:54 PM

#71

Unsympathetic Reader:
"As creeky belly notes, this would make teleportation possible. And FTL communication. Whee!"

You accuse me of "horrendously bad scientific judgment". Well, UR, do you read scientific literature? It has already been proposed that the theoretic capability exists to teleport an atom from over here, to over there. It seems that it will just be a matter of time before scientists are able to do this. Of course, this is a long way from 'teleporting' Mr. Spock up to the Enterprise', but it is 'teleportation' nonetheless.

And in an article that appeared in the last two weeks, scientists working with quantum tunneling say that they have measured FTL travel. So, whose scientific judgment is really in question here?

Posted by: Anonymous | September 24, 2007 1:09 PM

#72

I didn't realize I was 'signed out'. I just fixed it. Sorry, post #71 is mine also.

Posted by: Anonymous | September 24, 2007 1:11 PM

#73

I got the message: "Thanks for signing in Lino D'Ischia", and there was no slot for a name, and yet it still came up anonymous. I'm lost. Anyway, post# 71 is mine.

Posted by: Lino D'Ischia | September 24, 2007 1:13 PM

#74

Lino:

I agree with you entirely that Dembski's claims have changed over time; that's what makes it so hard to write a definitive refutation. To a Dembski believer, no claim is subject to refutation because one can always find another passage in Dembski's works that implies the exact opposite. If credit cards are not an example of CSI, why did Dembski say they were? And if he no longer believes that to be the case, maybe you can point me to a passage in his voluminous writings where he explicitly disavows this? Thanks.

You say, "You don't provide any such description or rejection region. We're left with one-half of CSI, and so we can't call an ordinary bit-string CSI." I'm sorry, I thought you would be bright enough to provide the details yourself. The specification is "all bit strings of a given length with at most one 0". This is exactly the same as the specification given by Dembski for the Caputo case, so you can't turn around and say this is not a specification. The rejection region is exactly the same as in the Caputo case; namely, I define a function f that counts the number of 0's in the string, and I set up a rejection region corresponding to f(x) exactly the same as in the Caputo case, that is, f(x) ≤ 1.


You have provided no evidence for your claim that a specification necessarily corresponds to identifying a subset S of low probability. There is nothing in Dembski's works that says this, and you have provided no quote to justify it. To identify something as CSI, you need two things: a specification, and a sufficiently low probability induced by that specification. These things are independent. As I read Dembski, you can have a specification that gives a region with relatively high probability, in which case you don't get enough CSI to infer design. But this doesn't mean it's not a specification; it's just not a good enough specification to infer design (pace Dembski). I contend you continue to misunderstand this distinction.


As for SAI, if you had read the section carefully, you would see that the purpose of SAI is simply to show how, if one wanted, one could put Dembski's ideas on a theoretically sound footing. There is nothing really new there; all the basic ideas are contained in the paper of Li and Vitanyi that we refer to. It has nothing to do with "detecting design"; rather, it is about distinguishing strings that would likely arise from the flipping of a fair coin, versus those that probably don't. Also, if you had read the section carefully, you would have seen that SAI, as based on Kolmogorov complexity, is not a computable quantity; it can only be approximated.
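
If you want to experiment, here is the crude kind of approximation we have in mind -- a Python sketch using zlib; any off-the-shelf compressor gives only a rough, machine-dependent upper bound on the Kolmogorov complexity, and a poor one for strings as short as yours:

    import os, zlib

    def compressed_bits(bitstring):
        # Length, in bits, of the zlib-compressed form of a string of 0's and 1's.
        return 8 * len(zlib.compress(bitstring.encode('ascii'), 9))

    patterned = '10' * 500                                           # an obviously compressible 1000-bit string
    coin_flips = ''.join(format(b, '08b') for b in os.urandom(125))  # 1000 "fair coin" bits
    print(compressed_bits(patterned))     # small
    print(compressed_bits(coin_flips))    # roughly 1000 or more
    # For your strings A and B, compare compressed_bits(A) with compressed_bits(B);
    # the one with the larger compressed form has the higher approximate Kolmogorov complexity.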

Posted by: Jeffrey Shallit | September 24, 2007 1:35 PM

#75

Lino writes: "You accuse me of "horrendously bad scientific judgment". Well, UR, do you read scientific literature?"

I was speaking of Salvador. Case in point: his continual droning about a decay in the speed of light that points to a young universe. There are many others (e.g., defense of Walt Brown on kcfs.org, etc.)

Yes I read the literature. About "atom teleportation": It's about the transfer of *quantum states* between particles. This is different from altering the probability distributions such that all the atoms of an object localize a few feet (or miles) to the left or right. Funny thing about that work on quantum state transfer: It involves a testable mechanism and embodied designers.

Lino elsewhere: "So what you're asking for is that I come up with a mechanism for an action that is undectectable. This is ludicrous. If an action is undectectable, how would you know that your mechanism accounts for it. What if you were wrong: how would you know that? This is just silliness."

The silliness is in proposing an undetectable mechanism for an event and thinking one can leave it at that, not in asking for a mechanism or details about the actual feasibility. One can posit a million undetectable mechanisms much like an emperor can wear any number of "invisible" clothes.

Posted by: Unsympathetic reader | September 24, 2007 2:30 PM

#76

Lino:

You're making my argument for me.

The point is: you've said that Dembski has made a brilliant argument for the capabilities of an "unembodied designer". In fact, he hasn't - he hasn't described the capabilities of anything. What he's done is wave his hands and declare that
his unembodied designer exists in a realm where his actions and capabilities are completely beyond our ability to describe or understand. That's not a description of the capabilities of the designer - that's just a typical quantum handwave.

In terms of the understood math of quantum physics: what can the designer do without expending any energy? The answer to that isn't the usual quantum babble about "Oh, he can tweak the stuff we can't observe" - in *the math* of quantum physics, what does Dembski say about the capabilities of the unembodied designer?

If he doesn't say anything in terms of the actual math, then he isn't saying anything: he's just blowing smoke. You see, when it comes to quantum phenomena, we don't know what's going on. We don't understand it. All we have is math: we have some very good mathematical descriptions of how things behave. Any actual arguments about what can and can't happen at a quantum level can *only* be done in math.

Posted by: Mark C. Chu-Carroll | September 24, 2007 2:35 PM

#77

Hmm...
One thing is for sure, Mark: You needn't propose a mechanism for events that you don't know happened*.

*e.g. Unembodied designers loading CSI into objects such as plant chromosomes in order to create pretty fragrances.

Posted by: Unsympathetic reader | September 24, 2007 3:03 PM

#78

Unsympathetic Reader:
"The silliness is proposing an undetectable mechanism for an event and thinking one can leave it at that, not in asking for a mechanism or details about the actual feasiblity."

Let's say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that? Could you come up with it in a million years? No. So why is a mechanism being demanded? That is silliness. What Dembski suggests is that there might be a way for "unembodied intelligence" to express itself bodily without any residue. Why not take it for what it's worth, a plausible interpretation, and not get silly and ask for a mechanism that is impossible to provide?

Posted by: Lino D'Ischia | September 24, 2007 5:19 PM

#79

Mark CC:

"Any actual arguments about what can and can't happen at a quantum level can *only* be done in math."

You make my point: did the cure for cancer violate the math involved in quantum mechanics?

Posted by: Anonymous | September 24, 2007 5:42 PM

#80

Lino: "Let's say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that?i>"

Invisible Pink Unicorns. Why bother with quantum handwaving when IPUs explain everything just as well? Sheesh.

"So why is a mechanism being demanded."
It's not that a mechanism for a miracle is demanded, it's that Dembski *himself* outlined a highly questionable mechanism. Neither he nor you have seen fit to flesh out this 'brilliant' idea with actual, um, facts. If he'd like to retract that idea as being merely a highly speculative trial balloon, that is fine with me, because so far he's not even demonstrated that any phenomenon requires such an explanation.


Here's another idea: *You* run the photon polarization experiment and get back to us if anything interesting pops up. It sounds like a great way to demonstrate the existence of an unembodied designer. I guess the only question that would remain then is who would get the patent for the cancer cure?

Posted by: Unsympathetic reader | September 24, 2007 5:52 PM

#81

"But this is simply a subset of the reference class of all binary numbers. When you reverse (invert) the function, the function only operates on those 1024 elements of the new subspace that constitute the new rejection region."

That's the point. The reason the pseudo-unary output is significant is that the decoding function is only defined for those outputs. Identifying a binary string that decodes into the preimage is an event (physical, if you'd prefer) that exhibits CSI as Dembski defines it, yet this observation contradicts his claim that functions cannot generate CSI.

"Since the event E is one of those 1024 elements in the one rejection region and then the other, the probabilities remain the same, and hence if there is sufficient complexity (improbability) to constitute CSI in the one rejection region, then it will still represent CSI in the other rejection region."

The rejection region corresponds, as far as I can tell, to all the outputs f(x) of 10-bit strings x. It's trivially true that both the images and targets of the function would have the same cardinality. When the latter set is embedded within the broader class of all binary strings of length between 1025 and 2048, you have a larger class where the elements of said set are "improbable", a la Dembski's formulation, and thus constitute CSI.
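
To make the counting explicit, here is a quick Python sketch (elementary arithmetic only; nothing here depends on Dembski's machinery):

    small = 2**10                                   # the 1024 ten-bit strings
    large = sum(2**l for l in range(1025, 2049))    # all binary strings of length 1025..2048
    print(small)                                    # 1024
    print(len(str(large)))                          # 617 -- a 617-digit number
    # A uniform draw from the large class lands on one of the 1024 pseudo-unary
    # outputs with probability small/large, roughly 2**-2039.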

Posted by: Tyler DiPietro | September 24, 2007 6:16 PM

#82

Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields and English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?

Let's say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that? Could you come up with it in a million years? No. So why is a mechanism being demanded? That is silliness. What Dembski suggests is that there might be a way for "unembodied intelligence" to express itself bodily without any residue. Why not take it for what it's worth, a plausible interpretation, and not get silly and ask for a mechanism that is impossible to provide?

This is monkeys on a typewriter. If the output of the detector is 50/50, there is no way to fix the output beforehand, except by changing the polarization. That's the definition of stochastic, and it's why they've been able to make true stochastic random-number generators. Quantum cryptography relies on being able to detect eavesdroppers in a quantum circuit, and the no-cloning theorem prevents you from both measuring and transmitting a quantum state.

But here's what I would do: I would set up the machine with a BB84 protocol and perform key validity checks. If information is being introduced on the channel, it will be detected through error rates. Just because it's quantum doesn't mean we can't figure out what's inside the box!

BTW, polarization is a byproduct of going through a material with asymmetric oscillation susceptibility, so how is a new polarization induced without a change in the transverse EM field?

Posted by: creeky belly | September 24, 2007 7:26 PM

#83

Dembski, as quoted by Lino:

Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields and English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?
...
And yet the arrangement of the bits in sequence cannot reasonably be attributed to chance and in fact points unmistakably to an intelligent designer.

I see a few problems with this. One problem is that ASCII English text does not have the statistical properties predicted by QM, so the proposed result would indicate that QM is wrong. But this problem seems reparable -- we'll just suppose that the output does have the right statistical properties, and that it yields English text when it's run through a standard decompression algorithm.
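
To make the first point concrete, here is the sort of elementary check that would flag the mismatch -- a Python sketch; the sample text is just a phrase lifted from the quote above, and any ASCII English would do:

    import os

    def msb_ones_fraction(data):
        # Fraction of bytes whose top bit is set: about 0.5 for fair coin flips,
        # exactly 0.0 for 7-bit ASCII text.
        return sum(b >> 7 for b in data) / len(data)

    english = b"the bit string that this device outputs yields an English text-file in ASCII code"
    fair_bits = os.urandom(len(english))
    print(msb_ones_fraction(english))     # 0.0
    print(msb_ones_fraction(fair_bits))   # roughly 0.5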

But does that really fix the problem? According my understanding of mainstream QM, the output of the polarized filter is genuinely undetermined. If there is a causal agent, embodied or not, that can determine it, then QM needs to be revised to account for it. Why is Dembski okay with tweaking QM but not the conservation of energy?

And then we have a problem of semantics in the description of the experiment: "...what happens if we control for all possible physical interference..." What's the distinction between physical and non-physical interference? Can non-physical entities cause physical effects? We've crossed from science into metaphysics here.

And finally, we have the ever-present problem in Dembski's conclusion of design, namely that Dembski has never given a definition of design that makes sense. He can't logically conclude design if he doesn't have a logically coherent definition of the word.

Posted by: secondclass | September 24, 2007 7:32 PM

#84

Lino:

You make my point: did the cure for cancer violate the math involved in quantum mechanics?

Probably, yes.

But since you're not doing the math to show how that text is allegedly being generated in terms of quantum phenomena, I can't answer the question.

If I assert that x² + y = z has no real roots, can you prove that I'm wrong? It should be easy - after all, quadratic equations are well understood, right?

Obviously, you can't prove that, because I've deliberately left the problem underdefined - so any attempt to prove that there are real roots to that equation is easy for me to refute.
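To make the analogy explicit: whether that equation has "real roots" depends entirely on details I never specified - which variable you're solving for, and what the other variables are allowed to be. A two-line check, if you have sympy handy:

    from sympy import symbols, solve

    x, y, z = symbols("x y z")
    print(solve(x**2 + y - z, x))   # two symbolic roots, +/- sqrt(z - y): real whenever z >= y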

That's the same game that you and Dembski are playing. You're creating a fake phenomenon, and claiming that it could be the result of some unspecified process without breaking the rules of quantum behavior. But you're not doing any math. The actual rules of quantum behavior are very intricate and very subtle. As far as I understand quantum physics, I don't think that what you're describing would be at all consistent with the mathematical properties of actual quantum behavior. But since you're not specifying what the mathematical properties of your scenario actually are, anything I say to show the inconsistency can be refuted with "but that's not what I meant".

If you want to make the argument that what you're describing is anything more than an attempt to hide bullshit behind the curtain of the word "quantum", you need to actually do enough math to show us that what you're describing really is what you claim it is. Can you describe any way that your scenario is actually consistent with quantum phenomena, in terms of the actual math of quantum phenomena?


Pretty much anything that you can come up with that specifically describes some way that a quantum process could output a result like that can then be checked against that math - and I don't think it would survive the check.

Posted by: Mark C. Chu-Carroll | September 24, 2007 8:13 PM

#85

Coin (#62):

Um, well it's not like it's that original-- I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you've been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can "randomly" collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference...

I'm guessing that one of them is Greg Egan's Quarantine, but I'm drawing a blank on the other.

Posted by: Blake Stacey | September 25, 2007 10:16 AM

#86

Oh, I should also say that I thought of the same idea for a story I started but never finished, in the sixth grade. It's really unoriginal.

Posted by: Blake Stacey | September 25, 2007 10:19 AM

#87

Re: #62, #85,

There are many fine novels which touch on this, most of which I've read. I am personally fond of Moving Mars and Hard Questions (where the viewpoint character is locked into a box and experiences being a Schrödinger cat), for the egotistical reason that I was an informant to the authors of those two as they were writing them, and Ian Watson credited me as such in his acknowledgments, just as Greg Bear did in an earlier novel (The Forge of God), where my wife and I appear under our own names as characters.

To use the list from here:
http://nextquant.wordpress.com/quantum-computer-sci-fi/

* Brasyl (2007) by Ian McDonald
Features illegal quantum computing and parallel universes.

* Simple Genius (2007) by David Baldacci
This recent thriller describes quantum computers as being worth countries going to war for.

* Shanghai Dream (2005) by Sahr Johnny
Quantum neural networks achieve a breakthrough in Artificial Intelligence.

* The Traveler (2005) by John Twelve Hawks
Quantum computer communicates with other realms and tracks interdimensional travel.

* The Labyrinth Key (2004) by Howard V. Hendrix
Quantum computer and the Cold War between China and the U.S.

* Blind Lake (2003) by Robert Charles Wilson
Self-improving neural quantum supercomputers allow visual observation of distant planets.

* Dante's Equation (2003) by Jane Jensen
A quantum computer named Quey is used to solve a previously intractable physics problem. The book also involves parallel universes.

* Hominids (Neanderthal Parallax) (2003) by Robert J. Sawyer
A failed quantum computer experiment transfers a Neanderthal scientist from a parallel universe into our world.

* The Footprints of God (2003) by Greg Iles
The secret Trinity Project involves some of the best minds in the world in order to create the first practical quantum computer. Quantum OS Trinity by D-Wave Systems was named after The Trinity Project.

* Light (2002) by M. John Harrison
A serial killer invents a quantum computer that enables interplanetary travel.

* Schild's Ladder (2002) by Greg Egan
Future humans abandon physical bodies and transfer their minds to a quantum computer named Qusps.

* Finity (1999) by John Barnes
Using quantum computers one can jump into an alternate parallel universe.

* Timeline (1999) by Michael Crichton
Quantum computer "faxes" objects and persons into parallel universes.

* Digital Fortress (1998) by Dan Brown
NSA operates a code-breaking quantum computer named TRANSLTR.

* Factoring Humanity (1998) by Robert J. Sawyer
Quantum computers are used for integer factorization and code breaking. Their working principle is based on parallel universes.

* Hard Questions (1996) by Ian Watson
A powerful quantum computer operates in parallel universes, becomes self-aware and creates its own realities.

* Moving Mars (1993) by Greg Bear
The book features self-aware quantum computers.

* Quarantine (1992) by Greg Egan
One of the first sci-fi books using the concept of quantum computation.

Posted by: Jonathan Vos Post | September 25, 2007 1:29 PM

#88

Desperately trying to drag the thread back on topic...

The point of the analysis in the Marks and Dembski paper is that ev works, but slowly. So slowly that random search is faster (on average). This is something that ev's author acknowledges.

The implicit admission of "it works slowly" is that "it works". Marks and Dembski are very careful to avoid saying that ev is not relevant to biology, even if it is a crude model.

Schneider's slowpoke ev is at least trying for biological relevance. The random search comparison is not at all a relevant model of how biology works.

(And why is ev so slow? Ridiculously small population size and no crossover (asexual reproduction) are to blame.)

There might be some shifting of goalposts here, but at least it is in the right direction. Previously, UD stalwarts such as DaveScot and GilDodgen maintained that evolutionary algorithms all worked by front loading and hiding the goal in the code. If the discussion has moved on to "yeah, but they are so slow they are not realistic", that is an improvement.

BTW, Marks and Dembski are wrong about how many trials ev took in the runs in Schneider's paper. They quote a number around 45,000, arrived at by multiplying the population size by the number of generations. Actually the number of trials was half that, since only half the population got replaced in each generation. It doesn't inspire confidence in the rest of their math.

An interesting way to test Marks and Dembski's assertion on how fast random search can solve the binding problem of ev is to set the population size to some number greater than 439. Since the first generation of ev's population is generated randomly, a population greater than 439 should on average contain at least one perfect scoring individual, according to M&D. A population of 4390 should contain 10 on average. Since ev's source code is available on the internet, it would have been a good check for M&D to include in their paper.
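To spell out the arithmetic behind that check: taking the 1-in-439 figure implied above at face value, each randomly generated genome is a perfect scorer with probability about 1/439, so the count of perfect scorers in the initial population is binomial. A quick sketch (the population sizes are illustrative; 64 is just a stand-in for a small ev-sized population):

    # Expected perfect scorers in a random initial population, assuming M&D's
    # implied per-genome success probability p = 1/439 is right.
    p = 1.0 / 439
    for n in (64, 439, 4390):
        expected = n * p
        p_none = (1 - p) ** n          # chance the initial population has no perfect scorer
        print(f"population {n:5d}: expect {expected:5.2f} perfect scorers, P(none) = {p_none:.2e}")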

Posted by: David vun Kannon | September 25, 2007 4:29 PM

#89

David vun Kannon:

Yes, I had wandered away from the centroid of the thread.

True, the evolutionary algorithm runs much faster with cross-over, i.e. sexual reproduction, as I verified in my 1973-1977 doctoral work, and was frustrated that the MIT AI lab did not consider it when I got them to do their own EA work. Don't know about Dembski et al., but my parents reproduced sexually.

How MUCH faster is the question at the core of the evolution of sexual reproduction, and of those species which have both sexual and asexual reproduction (including some vertebrates).
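If anyone wants to poke at the "how much faster" question on a toy problem, here is a minimal sketch - not ev, not anybody's published code, just a bare-bones GA on 100-bit OneMax that you can run with and without one-point crossover and compare generation counts (all parameters are arbitrary choices):

    import random

    L, POP, MUT_RATE = 100, 50, 1.0 / 100   # genome length, population size, per-bit mutation rate

    def fitness(ind):
        return sum(ind)                       # OneMax: count of 1 bits

    def evolve(use_crossover, seed=0, max_gen=2000):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(POP)]
        for gen in range(max_gen):
            if max(fitness(ind) for ind in pop) == L:
                return gen                    # generations until a perfect genome appears

            def pick():                       # binary tournament selection
                a, b = rng.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b

            nxt = []
            while len(nxt) < POP:
                p1, p2 = pick(), pick()
                if use_crossover:
                    cut = rng.randrange(1, L)     # one-point crossover
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]                 # asexual: copy one parent
                child = [b ^ 1 if rng.random() < MUT_RATE else b for b in child]
                nxt.append(child)
            pop = nxt
        return max_gen

    print("generations with crossover:", evolve(use_crossover=True))
    print("generations, mutation only:", evolve(use_crossover=False))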

Classical Population Genetics has plenty of equations about evolutionary rate as a function of population size. Infinite populations and continuous evolution are another matter, with a different set of known results and open problems.

Dembski et al. are either genuinely ignorant of all of the above, or pretending to be, or mildly aware but totally confused. Hard to tell. The Institutional No Free Lunch Theorem says that it is arbitrarily hard to distinguish between malice and incompetence.

Posted by: Jonathan Vos Post | September 25, 2007 4:50 PM

#90

With regards to the speed of evolution, people might find the following paper to be of interest:

Kashtan, N. et al. (2007) Varying environments can speed up evolution. PNAS, 104, 13711-13716.

Simulations of biological evolution, in which computers are used to evolve systems toward a goal, often require many generations to achieve even simple goals. It is therefore of interest to look for generic ways, compatible with natural conditions, in which evolution in simulations can be speeded. Here, we study the impact of temporally varying goals on the speed of evolution, defined as the number of generations needed for an initially random population to achieve a given goal. Using computer simulations, we find that evolution toward goals that change over time can, in certain cases, dramatically speed up evolution compared with evolution toward a fixed goal. The highest speedup is found under modularly varying goals, in which goals change over time such that each new goal shares some of the subproblems with the previous goal. The speedup increases with the complexity of the goal: the harder the problem, the larger the speedup. Modularly varying goals seem to push populations away from local fitness maxima, and guide them toward evolvable and modular solutions. This study suggests that varying environments might significantly contribute to the speed of natural evolution. In addition, it suggests a way to accelerate optimization algorithms and improve evolutionary approaches in engineering.

Posted by: SteveF | September 25, 2007 5:52 PM

#91

David vun Kannon:

BTW, Marks and Dembski are wrong about how many trials ev took in the runs in Schneider's paper. They quote a number around 45,000, arrived at by multiplying the population size by the number of generations. Actually the number of trials was half that, since only half the population got replaced in each generation. It doesn't inspire confidence in the rest of their math.

My understanding is that the losing half of the population is replaced by copies of the winning half, and then about 3/4 of the population experiences a point mutation. After that, the whole population is evaluated again. If we don't count repeated evaluations for unmutated organisms, then the correct number of queries should be about 34000. I suspect, though, that Marks and Dembski didn't exclude repeated queries for unmutated organisms.
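The three accountings being compared work out roughly like this. Population 64 and ~700 generations are just stand-ins chosen so that population times generations lands near the ~45,000 figure quoted above; the exact values are in Schneider's paper and ev's source:

    pop, gens, mutated_fraction = 64, 703, 0.75     # illustrative stand-ins, not exact ev settings

    md_count        = pop * gens                    # M&D: every organism evaluated every generation
    replaced_count  = (pop // 2) * gens             # count only the replaced half
    changed_queries = pop * gens * mutated_fraction # count only genomes that actually changed

    print(f"M&D's accounting:          {md_count:,}")                  # ~45,000
    print(f"replaced-half accounting:  {replaced_count:,}")            # ~22,500
    print(f"changed-genome accounting: {round(changed_queries):,}")    # ~34,000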

An interesting way to test Marks and Dembski's assertion on how fast random search can solve the binding problem of ev is to set the population size to some number greater than 439. Since the first generation of ev's population is generated randomly, a population greater than 439 should on average contain at least one perfect scoring individual, according to M&D. A population of 4390 should contain 10 on average. Since ev's source code is available on the internet, it would have been a good check for M&D to include in their paper.

Yes, that's a clever way to quickly check M&D's results. Out of curiosity, what do you think would happen if we tried this? Do you intuitively think that a population of 4390 would contain 10 perfect organisms? (I'm not looking for a right or wrong answer; I'm just curious to know what your intuition tells you.)

Posted by: secondclass | September 25, 2007 7:25 PM
