Thursday, November 28, 2013

Infinitely many possibilities? Pushing epistemology to the edges

Christian apologists bring up interesting questions from time to time, even if they do so for what are likely to be disingenuous reasons, like trying to cut the legs out from under science. I was recently asked the following question by apologist and philosopher Vincent Torley:
As I pointed out, "Our observations provide support for the hypothesis that the sun always rises at the same time every day – but they’re equally consistent with the hypothesis that the sun rises at the same time every day until the year 2050, after which it sails off into space, or the hypothesis that it rises at the same time until 1 January 2437, after which it turns into a green dragon. In short, there are infinitely many alternative hypotheses about the future path of the sum [sic] which are also fully consistent with the observations we’ve made to date. The question we need to ask ourselves is: why is it rational for us to single out just one hypothesis – the hypothesis that the sun always rises at the same time every day and always will – and ignore all the other hypotheses about the future course of the sun which are fully consistent with the evidence?"
I don't want to get into a long-winded discussion of something as commonsensical as why we wouldn't seriously bet anything we didn't wish to lose on a hypothesis like the sun turning into a green dragon on 1 January 2437, but I do want to comment on why I've worded it that way--in terms of betting. Hopefully in the process, I'll address the question in brief and move on to something more interesting that I'm not sure Torley realizes is there.

Betting

Why would I call this a bet? Because that's how we should be thinking about things. The certainty that philosophers classically chased with regard to epistemology--how we know things--is dead. Only when the topics are abstract, as in mathematics, philosophy, and theology, might it make any sense to talk about "certain" knowledge, and in those cases, certainty comes with a caveat: it's held as certainty given the axioms underlying the abstract framework in which the question makes sense. Humorously, theology is the only one of these three fields in which, even granting its abstractions, it seems unlikely that statements of certainty can be made--how can anyone be certain of anything to do with God?

In the real world, things are a little more difficult--or perhaps they're actually not, but we can't know that. Knowledge, it seems, is pretty clearly mental stuff, perhaps phenomena emergent upon the activity of neurons, or something similar. We don't know reality, then; we know mental models of reality that are hopefully pretty good. The gap between reality and the mental model that maps it out for us is what can be called an "epistemic gap," and the width of that epistemic gap tells us something about how good our knowledge is.

The best tool I'm aware of for measuring the width of an epistemic gap, in many, many cases, is a subjective probability assessment, which we can identify with a degree of confidence in the model we're using. That probability assessment can be read as a measurement of how likely it is that reality is giving us something that seems to be explained by our model, and since it's a probability assessment, we can use it to make bets.
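
To make the betting talk concrete, here's a minimal sketch--the function and the numbers are my own toy illustration, not anything from Torley--of how a subjective probability assessment translates into fair betting odds:

```python
# A toy sketch: reading a subjective probability assessment as a bet.
# If we assess a hypothesis at probability p, the fair odds against it
# are (1 - p) / p to 1.
def fair_odds_against(p):
    """Return the odds-against ratio implied by a probability p (0 < p < 1)."""
    return (1 - p) / p

# A hypothesis we're nearly sure of, like the sun "rising" tomorrow:
print(fair_odds_against(0.999999999))  # ~1e-09 to 1 against: no bookie bothers

# A green-dragon-grade hypothesis (an invented, illustrative number):
print(fair_odds_against(1e-15))        # ~1e+15 to 1 against: a sucker's bet
```

The wider the epistemic gap, the closer the fair odds fall to even; the narrower the gap, the more lopsided the bet.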

So let's talk about the green dragon and the sun for a minute in these terms, and hopefully readers can work the matter out for themselves. Would anyone put a serious bet on the possibility that on 1 January 2437 the sun will turn into a green dragon? Would anyone not put a serious bet on the sun "rising" (that is, the earth rotating so that the reader's locality turns to provide an unobstructed straight-line path to the sun)? What kind of odds would a bookie put against the sun failing to "rise" tomorrow, or any other day in the future?

If we ponder those questions seriously even for a few minutes, we get an immediate sense of why the green dragon thing is beneath consideration. Perhaps we could assess the probability that such a thing would happen and then create odds, but without even trying I'm quite confident in saying that such a probability assessment would be very, very small, resulting in very, very long odds, made all the longer by the very specific date given. I'm also quite confident in saying that you are as confident as I am.

Our experiences, but more importantly a clear understanding of the physical processes involved in the rotation of the earth and the shining of the sun, among a few other things (like having no good reason to believe in dragons, green or otherwise), provide us with a very, very narrow epistemic gap when dismissing the green dragon example as nonsense. We don't need to be certain, which would indicate having no gap at all. The gap that's there is so narrow that we don't even have to blink to jump over it and get on with our lives. It's fundamentally impractical to give it any of our concern, not least because literally nothing could give us certainty anyway.

Finally, the interesting part!

Having put aside Torley's nonsense and hopefully stoked our intuitions about how we can claim to know things, I'll grant that he does bring up one important and interesting point:
In short, there are *infinitely many* alternative hypotheses about the future path of the sum [sic] which are also fully consistent with the observations we’ve made to date. (emphasis mine)
Indeed, there are, especially if we just keep changing the date (and time, to arbitrary specification) for the sun to change into a green dragon--maybe with one toe on each foot, or two toes, or three, or four, or..., and in the context of Torley's example, we're not even limited to dragons and could make up anything. Infinitely many possibilities.

This presents a problem for assigning a very low probability to each of them, as I discuss in Chapters 12 and 13 of Dot, Dot, Dot: Infinity Plus God Equals Folly. One of three situations must obtain, since the total probability over all possibilities must sum to one, and infinitely many "very low probabilities" can only add up to one under very specific circumstances. Either (1) Torley is wrong (me too) about there being a potentially infinite number of possible hypotheses; (2) most of the potential hypotheses get probability zero, "almost surely," which is normally a taboo in Bayesian reasoning but can be handled using calculus, so far as I can tell; or (3) the probabilities must converge anyway, and so the probabilities within the space have to diminish (rapidly) according to some kind of schema.
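
To illustrate option (3) with a standard textbook example--mine, not from the book: if we could enumerate the hypotheses and give the n-th one probability (1/2)^n, then infinitely many positive probabilities still sum to exactly one, but only because they diminish rapidly under that geometric schema:

```python
# A sketch of option (3): infinitely many hypotheses can each carry a
# positive probability, provided the probabilities shrink fast enough.
# Here hypothesis n gets probability (1/2)**n, and the geometric series
# 1/2 + 1/4 + 1/8 + ... converges to exactly 1.
total = sum(0.5 ** n for n in range(1, 60))  # first 59 terms of the series
print(total)  # ~1.0; the tail beyond these terms is vanishingly small
```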

This problem is actually interesting, whereas the question of how we get on living our lives calling narrow epistemic gaps "knowledge" is by comparison embarrassingly dull. My guess on this matter, for what it's worth, is option 2, but I don't actually know the answer to this question. Option 1 might be right too, or option 3, though I don't know how I'd begin to justify that last one.

Even more interesting!

Back to Torley, though, to motivate this a bit more, even if we'll need to slog through some dull, obvious stuff again, if only to address it plainly.
The question we need to ask ourselves is: why is it rational for us to single out just one hypothesis...and ignore all the other hypotheses...which are fully consistent with the evidence?
A point (that I'd classify as dull, but important) is that we're not doing that. We're not singling out a hypothesis and treating it differently, and we're not ignoring all of the others. What we're doing is using some method--Bayesian reasoning--to assign plausibilities to every possible hypothesis and letting the chips fall where they may. In this case, the hypothesis that we're right about the earth spinning so that we get a sunrise tomorrow--over and over again--gets a very high probability. Bullshit like green dragons gets a very low probability.
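
A minimal sketch of that bookkeeping, with priors I've invented purely for illustration, is below. Notice that because both hypotheses predict the same sunrises between now and 2437, each day's observation has the same likelihood under both; the data never separate them, and the priors--which encode our physical knowledge and our lack of reasons to believe in dragons--carry all the weight:

```python
# A toy Bayesian update over two rival hypotheses (priors invented for
# illustration). Both predict a sunrise every day until 2437, so each
# observed sunrise has likelihood 1 under both hypotheses.
priors = {"sun rises daily, forever": 1 - 1e-12,
          "green dragon on 1 Jan 2437": 1e-12}
likelihood = {"sun rises daily, forever": 1.0,
              "green dragon on 1 Jan 2437": 1.0}

posterior = dict(priors)
for _ in range(10_000):  # ten thousand observed sunrises
    unnormalized = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # unchanged from the priors: both hypotheses fit the data equally
```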

He might argue that we can't assess all of the other possibilities, but we can do so sufficiently without much work once we get one decent hypothesis. The total probability must be one, so if we have one possibility that we can assess as 90% likely, all of the other hypotheses combined have a total plausibility of only 10%. With a hypothesis like the sunrise being normal, the plausibility is so high for that hypothesis that only a tiny, tiny fraction of a percent is left for all of the other potential hypotheses combined, those having to share that tiny, tiny fraction in some way. In other words, we can immediately conclude that the vast majority of the rival hypotheses must have a negligibly low plausibility, and thus we do not have to examine them directly to know they can be dismissed.
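
A back-of-the-envelope version of that point, with an illustrative number of my own choosing:

```python
# The complement rule does the heavy lifting: if one hypothesis gets
# probability p, then all rival hypotheses combined share at most 1 - p.
p_normal_sunrise = 0.999999999   # an illustrative assessment, not a measurement
everything_else = 1 - p_normal_sunrise
print(everything_else)  # ~1e-09, shared among ALL rivals at once--
                        # green dragons of every toe-count included
```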

Of course, we "ignore" possible hypotheses in proportion to how unlikely we assess them to be. If we have a hypothesis with a very high probability assigned to it for good reasons, then we often call that hypothesis "the explanation" or "knowledge" or "what will happen," etc., and skip the song-and-dance about there being other very unlikely possibilities. Life's too short for all that.

We can get back to something interesting now that we see Torley's deep question isn't even hard--the answer is actually obvious to anyone who understands Bayesian reasoning, and to most people with common sense, even if they can't phrase it. The interesting bit relates to options 2 and 3 above. Maybe we do decide to ignore almost all (not all) other hypotheses; that is, we give such low plausibilities to them, including zero, almost surely, that we effectively do ignore them.

It's worth noting that this isn't so insidious. The above discussion about total plausibility makes the point, but even without it, we never even think of almost all potential hypotheses. We'll never even invent names, nor could we, for all of the potential imaginary creatures that the sun could turn into on 1 January 2437. What plausibility do we give those hypotheses? Why do we ignore them?

In the second and third possibilities noted above, particularly in the second, we have to have some method by which we decide which hypotheses are admissible, i.e. given a non-negligible or non-zero (almost surely) plausibility, and which aren't. Finding a philosophical justification for such an epistemic paradigm is an actually interesting question, to be contrasted with Torley's abysmally boring one waiting outside these fabulous gates and looking in the wrong direction.

I can't answer that question, but I'm thrilled to be putting it out there and even to be exploring it. As I note in Dot, Dot, Dot, I don't think that these epistemic paradigms are even static entities. Indeed, I'd argue that some of the God hypotheses were defensibly within epistemic paradigms before a few hundred years ago and that they are not now.

2 comments:

  1. Interesting elaboration of what looks like an apologist trying to undermine evidentialism.

    Basically, what happens here is that we start looking into various approaches to epistemology. The theists have their favorites, the ones that make theism "rational," like Plantinga's Reformed Epistemology. Others, like evidentialism or reliabilism, aren't nearly as favorable to theists, so those are the ones they'll attack.

  2. Interesting. It looks like he is asking a question that is similar to Goodman's 'New Riddle of Induction'.
