Bill Dembski Gets A Paper Published

Huh. Looks like Bill Dembski got a paper published in IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Volume 39, Issue 5, Sept. 2009.

I haven’t had a chance to read this yet, but here’s the abstract:

Abstract—Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them. We propose three measures to characterize the information required for successful search: 1) endogenous information, which measures the difficulty of finding a target using random search; 2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and 3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.
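
If the definitions line up with Dembski and Marks's earlier "active information" writeups, each of these measures is just a log-probability: endogenous information is -log2 of the chance that blind search hits the target, exogenous information is -log2 of the chance once the search is assisted, and active information is the difference between the two. A rough Python sketch, with toy numbers that are mine and not the paper's:

    import math

    def endogenous_information(p_blind):
        """Difficulty of finding the target by blind (uniform random) search."""
        return -math.log2(p_blind)

    def exogenous_information(p_assisted):
        """Difficulty left over once the search exploits problem-specific information."""
        return -math.log2(p_assisted)

    def active_information(p_blind, p_assisted):
        """Contribution of the problem-specific information: the difference."""
        return endogenous_information(p_blind) - exogenous_information(p_assisted)

    # Toy example: blind search hits the target with probability 2**-20,
    # while an assisted search hits it with probability 2**-5.
    p_blind, p_assisted = 2.0 ** -20, 2.0 ** -5
    print(endogenous_information(p_blind))          # 20.0 bits
    print(exogenous_information(p_assisted))        # 5.0 bits
    print(active_information(p_blind, p_assisted))  # 15.0 bits

So a search that "knows" enough to boost its hit probability from 2**-20 to 2**-5 carries 15 bits of active information, on this accounting.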

From a quick glance, it looks like he’s still on his No Free Lunch
Theorem kick. In the conclusion, the authors write:

To have integrity, search algorithms, particularly computer simulations of evolutionary search, should explicitly state as follows: 1) a numerical measure of the difficulty of the problem to be solved, i.e., the endogenous information, and 2) a numerical measure of the amount of problem-specific information resident in the search algorithm, i.e., the active information.

which to me sounds like they think that people who use evolutionary
algorithms are cheating by using a search method that performs well
given the problem at hand.
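
The No Free Lunch result itself is real enough: averaged over every possible assignment of fitness values, any search that never revisits a point does exactly as well as any other, clever or not. It's easy to check by brute force on a toy space. The tiny space, the "greedy" rule, and the scoring below are my own construction for illustration; none of it comes from the paper:

    import itertools

    X = range(4)      # tiny search space: 4 candidate points
    Y = range(3)      # possible fitness values: 0, 1, 2
    N_QUERIES = 3     # each algorithm gets 3 distinct queries

    def blind_search(f):
        """Query points 0, 1, 2 in a fixed order, ignoring what it sees."""
        return max(f[x] for x in (0, 1, 2))

    def greedy_search(f):
        """Adaptive rule: jump to the far end of the space after a 'good' value,
        otherwise probe the nearest unvisited point."""
        visited, values = [0], [f[0]]
        while len(visited) < N_QUERIES:
            unvisited = [x for x in X if x not in visited]
            nxt = max(unvisited) if values[-1] >= 1 else min(unvisited)
            visited.append(nxt)
            values.append(f[nxt])
        return max(values)

    # Average the best value found over every possible fitness assignment f: X -> Y.
    functions = list(itertools.product(Y, repeat=len(X)))   # 3**4 = 81 functions
    print(sum(blind_search(f) for f in functions) / len(functions))
    print(sum(greedy_search(f) for f in functions) / len(functions))

Both averages come out identical, as the theorem says; whether real fitness landscapes look anything like "all possible fitness assignments" is a separate question.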

In any case, I’m sure this paper will be bandied about as a sterling example of the research cdesign proponentsists are doing.

Update, Aug. 21: Mark Chu-Carroll has weighed in on this paper, and pretty much confirms my suspicion: at the core of the paper is a moderately interesting idea — that it’s possible to quantify the amount of information in a search algorithm, i.e., how much it knows about the search space in order to produce quick results — along with some fluff that allows him to brag that he got “a peer-reviewed pro-ID article in mainstream […] literature”.

3 thoughts on “Bill Dembski Gets A Paper Published”

  1. Unfortunately, my IEEE membership doesn’t give me access to that particular journal, so I haven’t had a chance to read the paper. It looks potentially interesting, depending on how concrete his results are. The real trick will be in the “applies the methodology” part, which has historically not been Dembski’s strong suit.

    I’m not sure why so much is being read into the NFL theorem. I had never heard of it until it was brought up in the context of this debate, so I grabbed the paper. It’s a proof of a (not surprising) result. Unless I’m misreading or misremembering something, the basic idea is that no search algorithm can give you “good” results averaged across all possible arrangements of data.

    The paper is cool in that it proves a theorem, and proofs are cool. The theorem isn’t all that shocking because… well… why would you expect to find a magic algorithm that can efficiently search data whose elements are not necessarily related to one another? Maybe I’m just really smart or a really lucky guesser, but doesn’t this seem like one of those things that you’re almost certainly sure is true when you start on the proof?


  2. Troublesome Frog:

    I’m not sure why so much is being read into the NFL theorem.

    Mark Chu-Carroll explains the reasoning:

    Dembski has been trying to apply the NFL theorems to evolution: his basic argument is that evolution (as a search) can’t possibly produce anything without being guided by a supernatural designer – because if there wasn’t some sort of cheating going on in the evolutionary search, according to NFL, evolution shouldn’t work any better than random walk – meaning that it’s as likely for humans to evolve as it is for them to spring fully formed out of the ether.

    I think that means “Isn’t it remarkable that natural selection exists, as the mechanism that led to us? Who could have designed natural selection this way, if not God, a highly intelligent but unspecified designer?”

