## Saturday, January 12, 2013

### Coming clean about an error that is but isn't

Infinity is a tricky subject.

As is well known, Christian apologists like William Lane Craig like to beat infinity to a pulp, completely misusing it, because (a) it applies to their thoughts on God, and (b) it is incredibly non-intuitive. Mathematically savvy atheists, therefore, have a responsibility to help clear all that muddy water for these apologists, who should be doing their jobs as honestly as possible (particularly given the ethical demands they believe they are under from their imaginary God).

Well, infinity is a tricky subject, and sometimes even mathematicians make elementary-grade mistakes about it. I'm calling myself guilty, sort of, because the mistake I made is, but isn't, a mistake.

Where is this mistake?

In the fifth chapter of my book, God Doesn't; We Do, I argue that the probability that God exists is zero, almost surely. "Almost surely" is a precise term from probability theory meaning "true except possibly on a set of measure zero." A set of "measure zero" is one whose total weight is zero under measure theory (the modern foundation of grown-up calculus, which revolutionized mathematics at roughly the same time that special relativity revolutionized physics). This is all quite technical, but some examples make it rather intuitive. The example I give in the fifth chapter of God Doesn't; We Do, it turns out, is a little too intuitive, but it is probably the most natural and accessible example. I'll give a more formal one here to show that it can easily be done, but technically, there's a snag in what I presented, kind of.

In the interest of full disclosure, I was dimly aware of this snag when I wrote that example into the chapter, and technically I should have been more formal about checking it and then chosen a better example. I was strongly drawn to using this example, though, because of its overwhelming intuitiveness--which is rather the siren-song of playing with infinity in a sloppy way (right, Dr. Craig?). Had I been more thorough at the time of writing, though, there is a very high probability that I still would have motivated the proper example with this very intuitive construction, needing only to add a short paragraph to the chapter (see below) to close off that issue.

So, in the interest of full disclosure, I can say this: because I self-published my book, I technically could go emend that problem in the book right now, with only a relatively small number of people ever being the wiser. I could also do it in a way that discloses the fact that I did it (tucked away in an endnote, of course :-) ). I am not doing that, though, because it is not entirely academically honest--even with the endnote admission. I published that work, and I will stand by what I published--and so here I make a full admission, a neat correction, and an argument that it is, but really isn't, an error. On the other hand, if some significant publishing house picks up the work and wants to publish a new edition--which would be needed anyway because of the small number of typographical errors that survived (and were caused by) my editing process, and which I leave in intentionally for the same reason of honesty about my work--I will correct the text to do the construction better justice.

What is the mistake?

The most intuitive way to construct the idea of an infinitely unlikely chance is to examine successive cases that diminish toward that "zero, almost surely" status that I want to describe. The simplest and most intuitive construction for that purpose is to consider a "uniform distribution" on subsets of the natural numbers (the natural numbers are 1, 2, 3, and so on through every positive integer). A uniform distribution simply means that every number in the set has the same probability of being chosen. When someone says, "choose a (natural) number at random (inclusively) between 1 and 5," for instance, they mean choosing under the assumption of a uniform distribution on the numbers 1, 2, 3, 4, 5, i.e. the assumption that each of those numbers is equally likely to be picked, each with probability 1/5.

This can be extended to get diminishing probabilities. What is the probability of choosing, say, 5 at random from {1,2,3,4,5}? It's 1/5 (0.20). What is the probability of choosing 5 at random from {1,2,...,10}? It's 1/10 (0.10). What is the probability of choosing 5 at random from {1,2,...,1000}? It's 1/1000 (0.001). We can carry on with this to whatever ending number we want, and the probability of choosing some particular value gets really small as the last number gets really big.
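The shrinking probabilities above are easy to tabulate. Here's a quick sketch in Python (the function name is my own, just for illustration):

```python
# Probability of drawing one fixed number (say, 5) uniformly from {1, ..., n}:
# it shrinks toward zero as the top of the range grows.

def prob_of_pick(n):
    """Probability of any one particular outcome under a uniform draw from {1, ..., n}."""
    return 1 / n

for n in [5, 10, 1000, 10**6, 10**9]:
    print(n, prob_of_pick(n))  # 0.2, 0.1, 0.001, and on toward zero
```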

The mistake I made was in going ahead and extending this all the way to the entire set of natural numbers, i.e. "to infinity," which isn't a number. The intuitive idea is then that the probability of choosing 5, at random, from amongst all of the natural numbers (uniformly distributed) would be "1/infinity," i.e. 0. Since it is conceivable, though, that 5 could be chosen, we can't say 0 in the "exactly zero" sense that philosophers take to mean "logically impossible," because drawing 5 out seems like it is not logically impossible (actually, it kind of is in reality, although the intuitive construction indicates that it is not--see below). Saying that the probability is zero, almost surely, is the kind of understanding that modern mathematics, thanks to measure theory, affords... except that in this case, it technically doesn't. That's the mistake.

Why is it a mistake?

The problem is that there is no such thing as a uniform probability distribution over all of the natural numbers.

What? As intuitive as it is to keep extending our sampling interval ad infinitum, it actually cannot be done if we want to end up with a meaningful probability distribution. Why? Measure theory creates a problem there--one that has the distinct feeling of being an unfortunate loophole in the axioms (see ** at the bottom) but that is, nonetheless, a genuine problem of the reductio ad absurdum kind that Dr. Craig always likes to (incorrectly) invoke with regard to infinity-based reasoning.

Warning, tech-speak: The problem is that measures have to be "countably additive," which means that if you take the measure of a union (putting together) of countably many (see below*) non-overlapping measurable sets, then the measure of that union had better equal the value of the infinite series that adds up the measures of the individual sets in it. Attempting to take the limit of all of these expanding uniform distributions turns out to violate countable additivity and thus does not result in a probability distribution at all: either we add up countably many zeros, one for each natural number's individual probability, and get zero, or we add up infinitely many equal nonzero numbers and get infinity. A probability distribution, though, must have total probability one, not zero or infinity. NB: The tech-speak warning here is precisely why I (would have) opted for the intuitive example anyway in the book. I know... it's like reading gobbledegook.
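That zero-or-infinity dilemma can be sketched numerically. This is my own illustration, assuming each natural number were assigned the same constant probability c:

```python
# If every natural number had the same probability c, the partial sums of
# the total probability either stay at zero (c = 0) or grow past 1 (c > 0);
# neither route can ever give a total probability of exactly one.

def partial_sum(c, terms):
    """Sum of `terms` copies of the constant probability c."""
    return c * terms

# Case c = 0: no matter how many terms we include, the total stays zero.
print(partial_sum(0.0, 10**12))  # 0.0

# Case c > 0: the partial sums exceed 1 after finitely many terms,
# so the full infinite series diverges to infinity.
c = 1e-9
terms_needed = 1
while partial_sum(c, terms_needed) <= 1:
    terms_needed *= 2
print(terms_needed)  # a finite number of terms already pushes the sum past 1
```

The same doubling search works for any positive c, however tiny, which is the whole point: no constant value can serve.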

*There are many sizes of infinity. The size that enumerates the natural numbers is called "countable infinity," while the other sizes are bigger and are collectively known as "uncountable infinities." This might not sit well with you, but it's true, and here isn't the place to elaborate on why. Wikipedia knows.

In more lay speak, the problem is that every number would have probability zero, almost surely, of being chosen, so the probability that any number at all would be chosen is 0+0+0+... (one zero for every number), which is still zero. But in reality, the probability that you would pick some number has to be one, and it's impossible for zero to equal one. The net result is that a uniform distribution on the natural numbers is not possible, i.e. it is actually "meaningless" (more on the quotation marks shortly) to talk about drawing a number from the natural numbers where every natural number has the same probability of being chosen.

So, in short, it's a mistake in the book because according to the usual framework of measure theory, which I use to justify doing what I did, I can't do what I did.

Why isn't it a mistake?

This might get a little sticky and gobbledegookish.

As it turns out, there are a number of reasons why it doesn't really matter, particularly given the usefulness of the intuitive nature of the discussion.

First: There are many, many examples of measure-zero sets anyway, and the concept is all I really need to do what I do with them.

Consider this, for example. Suppose we look at the interval [0,1] on the real number line. Inside that interval, we can find the reciprocal of every one of the natural numbers: 1/1, 1/2, 1/3, 1/4, and so forth. As it turns out, we can put a uniform distribution on the segment [0,1], and in that case, the probability of picking the reciprocal of, say, 5--that is, picking 1/5--is zero, almost surely. It is immediately straightforward to capture the concept of "choosing 5 from infinitely many choices" by choosing a random value in the real interval [0,1] and then identifying "choosing 5" with having picked 1/5. This isn't quite as intuitive a construction, but this paragraph alone (without this sentence but with one explaining that the above intuitive construction isn't totally on the up-and-up) would suffice to correct the "mistake" in Chapter 5 of my book.
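The [0,1] construction can even be poked at numerically. Below is a small Monte Carlo sketch (my own illustration, not from the book): under a uniform draw from [0,1], hitting exactly 1/5 essentially never happens, while landing within a small interval around 1/5 happens about as often as that interval is wide.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
N = 100_000

# How often does a uniform draw from [0,1] hit exactly 0.2? (In practice: never.)
exact_hits = sum(1 for _ in range(N) if random.random() == 0.2)

# How often does it land within eps of 0.2? Roughly 2 * eps of the time.
eps = 0.01
near_hits = sum(1 for _ in range(N) if abs(random.random() - 0.2) < eps)

print(exact_hits)     # 0
print(near_hits / N)  # close to 2 * eps = 0.02
```

Shrinking eps toward zero squeezes that frequency toward zero as well, which is the intuition behind "probability zero, almost surely" for any single point.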

Since all I need is the concept of a measure-zero set, and there are copious examples of these (including sets like the Cantor set that are composed of uncountably (see below) many points in [0,1] and yet still have zero (Lebesgue) measure), my actual argument in God Doesn't; We Do is unaffected by the construction I used to convey the idea of what they are.

Second: The example just above really makes the "mistake" look annoyingly trivial and perhaps even unnatural (see ** below).

Believe it or not, the interval [0,1] of real numbers is uncountable in cardinality. That means, in particular, that there are more numbers in it than there are natural numbers... by a very wide margin. But we can put a probability measure on it. Why? That "countable additivity" requirement doesn't constrain uncountable collections like the points of [0,1]. Seemingly weirdly, in measure theory, if you put together enough measure-zero points--more than countably many of them--you can get sets of nonzero measure; indeed, all of continuous probability theory and all of calculus itself depend upon this fact. That this works for bigger, less tractable sets but not for more conceivable ones is a conundrum that has raised a lot of consternation among mathematicians for over a century.

The problem is that the only work-around is to drop the countable additivity requirement in defining the measures, but the next thing down is "finite additivity," meaning the property only applies when we have finitely many things to add up, and this creates a whole slew of problems for defining how these finitely additive measures work. Since there's nothing between finite and countably infinite, and that gap is literally an unbridgeable jump (so far as I'm aware), we get this loophole-feeling problem (see ** again) at countably infinite sets (and thus lose a lot of really intuitive examples).

That last sentence is as good a place as any to remind ourselves of something: Infinity isn't intuitive, right Dr. Craig?

Third: There are lots of interesting, but mostly unsatisfactory, ways of working around this problem.

On the one hand, as seems to be espoused by Richard Carrier (a historian and philosopher), given some of my interaction with him and what he's written on his blog, some mathematicians have developed a strange mathematics of infinitesimals, meaning nonzero number-like things (technically, of the kind "1/infinity") that are greater than zero and yet smaller than every positive real number. In some regard, calculus is the art of playing with these objects, so it's not too out there to have done so, but unfortunately, working with these extended number systems is kind of like the AD&D of the mathematics world--it's pretty far from the mainstream, even if a lot of people get interested enough in it to play around, have some fun, and then don't say much about it out in the light of day. In these constructions, my example works as given, taking "probability zero, almost surely" to mean that the actual probability is an infinitesimal of a particular kind. The important take-home for this in terms of the God conversation is that this still means "not a positive real number."

NB: Saying this is the AD&D of the analysis world doesn't imply that it's not important; indeed, some of the lingering questions in analysis may yet be resolved by creating something thoroughly cogent in this department.

As it turns out, when I say that there are "lots" of ways of working around this, mathematics built upon infinitesimals accounts for several of them, since there is not just one monolithic mathematics of infinitesimals but rather several, which do not necessarily agree with one another and which vary in usefulness. Applying finitely additive measures of different kinds accounts for many others.

On the other hand, one example is a concept defined on the natural numbers called "natural density," developed in the 1930s to handle problems of this kind. For my purposes, it is essentially satisfactory. I haven't checked (pardon this lack of due diligence, and savor the slight irony given why I'm writing this in the first place), but I believe the natural density approach yields a finitely additive uniform probability measure on the natural numbers, with the limitation that not all subsets of the natural numbers can be measured in this way. The "natural density" interpretation is precisely what I was getting at with my intuitive construction, and since it is a reasonable enough way to manage this particular issue, I will leave the matter at that, with the caveat that there is still a key issue with even this construction.**
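Natural density is easy to approximate numerically: the density of a set A is the limit, as N grows, of the fraction of {1, ..., N} that lies in A. A quick sketch (the function name is mine, for illustration):

```python
# Approximate the natural density of a set of naturals at a finite cutoff N:
# the fraction of {1, ..., N} belonging to the set.

def density_upto(pred, N):
    """|{k in 1..N : pred(k)}| / N -- a finite approximation to natural density."""
    return sum(1 for k in range(1, N + 1) if pred(k)) / N

N = 100_000
print(density_upto(lambda k: k % 2 == 0, N))  # even numbers: density 1/2
print(density_upto(lambda k: k == 5, N))      # the singleton {5}: density 0 in the limit
```

The singleton's fraction is 1/N, which tends to zero as N grows--exactly the "P(5) = 0" the intuitive construction was reaching for.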

So, on these grounds, I'm stating that I made a mistake that kind of is a mistake but really isn't a substantive one, and that I probably would have done things the same way anyway because of the intuitiveness of the construction, adding only a brief mention of something more formally correct, had I done it more carefully in the first place. In particular, I'm aware of the problem, am choosing not to correct it in this edition due to (what may be an odd form of) academic integrity, and will correct it in any subsequent editions of the book. I'll note that if encouraged by the right people, I might fix it with the endnote admission anyway.

Though I'm sure not everyone will treat all of this as kindly as I'm about to assume, thanks for recognizing the matter and not, metaphorically, taking my little ones and dashing them upon the rocks (Ps. 137:9).

-------------------------------------------------------------------------------------------------------------------------------------

If you enjoy my writing (and honesty!), you can read more of it in my first book, God Doesn't; We Do: Only Humans Can Solve Human Challenges. If you choose to pick it up, I thank you for your support of myself, my family, and indie authors in general.

-------------------------------------------------------------------------------------------------------------------------------------

**Even though this construction provides the requisite P(n)=0, almost surely, for every n in the natural numbers, it isn't technically able to handle the problem that the probability of choosing any number at all from amongst the natural numbers is still going to be zero. By introducing an infinite cardinal (more gobbledegook, I know), or even by evaluating simple limit-based arguments, it is easy to show that the probability is 1, almost surely, that the "number" selected would be the infinite cardinal--not a number at all. In other words, if asked to pick a number at random from all of the natural numbers, the pick would be "infinite," which is to say not a number. To see that this is true, note that the "natural density" of the set {n, n+1, ...} is still one for every natural number n, so the probability is one, almost surely, that the selected value is larger than n for every natural number n, which is to say it isn't a natural number at all.
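The limit argument in that footnote can be seen numerically too: for any fixed n, the fraction of {1, ..., N} that is at least n tends to 1 as N grows. A sketch (my own, for illustration):

```python
# Fraction of {1, ..., N} that is at least n: it approaches 1 for every fixed n,
# which is the sense in which the "selected" value beats every natural number.

def tail_density(n, N):
    """|{k in 1..N : k >= n}| / N."""
    return max(0, N - n + 1) / N

N = 10**6
for n in [10, 1000, 100_000]:
    print(n, tail_density(n, N))  # each value tends to 1 as N grows
```

Since this holds for every n at once, the draw would have to exceed every natural number--the absurdity the footnote describes.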

All said, this is weird stuff, and it lends credence to the idea that a uniform distribution on a countable set really is an inconsistent idea, however intuitive it is, and that it's not just a loophole of countable additivity applied to a countable set.