Monday, December 3, 2012

Loftus's Outsider Test for Faith viewed in HD with Bayes's Theorem

In a recent post, I mentioned that anyone interested in the discussion regarding "atheism versus faith" should be reading John Loftus. In particular, I noted an argument from his excellent Why I Became an Atheist, the "Outsider Test for Faith" (OTF), which he is elaborating upon in a new book of that title to be published by Prometheus Books next spring. Subsequently, I made a comment that piqued his interest by saying that "[the OTF could be] greatly strengthened by appropriately applying Bayes's theorem." Since my background is in mathematics, this claim caught John's attention.

To quickly clarify what I mean by that: I do not mean to imply that I can improve upon the OTF itself, but rather that a clear understanding of Bayes's theorem--a mathematical result that underlies essentially all hypothesis-testing reasoning--will greatly enhance the rigor with which someone can view what the OTF is telling them, and perhaps get them to take it much more seriously than they might have otherwise. That's one thing about mathematics: it's nearly impossible to argue with it, try as some ideologues might (maybe they should call Nate Silver about that?).

There are a few things I do not want to do here: make this post heavy on the mathematics, derive or prove Bayes's theorem, or go into depth explaining it. I don't even intend to plug numbers in and work things out in detail--I'll simply describe how I did it and give the results. All of the explanation is done reasonably well on Wikipedia and elsewhere on the Web for anyone curious. Please feel encouraged to read as much as you think you need to be clear on it.

An embarrassingly brief and vague introduction to Bayes's theorem:

To summarize very briefly, in the way we will be looking at it, Bayes's theorem is used to adjust a prior probability that a hypothesis is correct (we might call this a guess, in a loose sense, at the likelihood that the hypothesis is valid before evidence is considered) into an informed posterior probability. The formula is quite simple, as it turns out, and essentially asks us to consider a few different quantities in order to assess the impact of evidence on our hypothesis. Essentially all reasoning that attempts to assess how evidence bears on the likelihood of a hypothesis boils down to applying Bayes's theorem, even if the reasoner doesn't realize it.

Rather amazingly, reasoning via Bayes's theorem--and thus all of a particular class of reasoning--boils down to a debate about three numbers. These are often called the "prior" and two "consequents." The prior is a reasonable guess at the probability that the hypothesis is true before the evidence is considered, and the two consequents estimate how likely the evidence we have is under the assumption that the hypothesis is true or that it is false. In other words:
  • The prior attempts to make a reasonable guess at the likelihood before having the knowledge imparted by evidence related to it; I will call this number a (for a priori, or the prefix a-, "without");
  • One consequent assesses how likely the evidence we have is under the assumption that the hypothesis is the true explanation; I will call this number t (for true); and
  • The other consequent assesses how likely the evidence we have is under the assumption that the hypothesis is not the true explanation, i.e. that some other explanation accounts for the evidence; I will call this number n (for not true).
What is fascinating about Bayes's theorem is that literally all arguments over the likelihood of a hypothesis boil down to arguments about these three numbers, all of which are probabilities (numbers between 0 and 1, inclusive). Thus, almost any argument about the "truth" of claims can be rephrased in the language of Bayes's theorem, at which point we can haggle plainly and clearly about why each of those three numbers should be what it is.

As a noteworthy and interesting aside, the way the formula for Bayes's theorem is structured reveals immediately that if we have equal probabilities assigned to each consequent, then the formula does not adjust the prior probability at all, and our uninformed guess is exactly all we end up with. This makes sense because assigning equal probabilities to "the evidence is explained by the hypothesis" and "the evidence is explained by other hypotheses" essentially says that we aren't valuing the evidence at all and all we have is our guess. This is a fancy mathematical way of saying "if you don't value evidence, all you're doing is guessing."

How Bayes's theorem applies in the OTF

My essential claim here is that the role of the Outsider Test for Faith, which is simply to step outside of one's belief and consider the faith as an outsider would, is to adjust all three of the numbers that appear in Bayes's theorem. That is to say, an "insider" will estimate those three probabilities differently from an "outsider" and thus will reason about the question in a fundamentally different way. My rendering of John Loftus's position is that the insider's perspective provides bad estimates for these three numbers while the outsider's provides better ones. This is a weighty claim, one that essentially every book making the case against religious faith presses very plainly, and I will pay some attention to it as we go along, though I will not set aside space specifically for it here.

The prior probability, a

The prior probability is the "guessed" probability that the hypothesis is true in the absence of examining the evidence. If we consider the question "Is religion X true?" there is a lot that goes into that evaluation. I claim that "insiders" are likely to overestimate this number, some arguing that they are 100% certain that their religion is true (a=1), though more honest but devout folks may leave a gap of some uncertainty. It is my expectation that very few serious religious people would offer a prior probability of less than 50% (a=0.5)--tantamount to an "I have no idea" coin-toss--and most would offer something substantially higher, like a=0.9. Who would bother with a religion that they feel has only a one-in-ten chance of being true (a=0.1)? Would anyone? Why? (Particularly since it is unlikely that we can actually choose to believe things that we do not think are true; rather, we believe that which we think is most likely to be accurate.) We can be very comfortable, then, in assuming that every religious believer would put forth a prior significantly larger than a=0.1, to say the very least.

This is almost certainly an overestimate based on one fact alone (a fact upon which we could do a Bayesian analysis itself--and are, implicitly!): there are a lot of disparate religions out there (including many that haven't even been thought of yet and some that have fallen out of use). Here we might haggle about how many different religions there are. There are some 41,000 denominations of Christianity. There is Islam, with sects. There is Judaism, with sects. There are Hinduism, Buddhism, religious Taoism, Jainism, Sikhism, Shintoism, Secular Humanism, and a list that could wind on and on even within major religions alone. Indeed, there are 20 (or 21, depending on the survey) major religions listed today, plus however many minor religions (literally hundreds). Even if we limit ourselves to the 20 biggest, we still face a prior probability (remember, it applies no evidence) of one-in-twenty (a=0.05, a 5% chance) that any particular one is true. This too is likely to be a gross overestimate.

(Note that the "I don't know" position lazily labelled "agnosticism," with its prior of 50%, is itself an overestimate. Simply viewing Christianity, Islam, and no religion as three competing possibilities already forces the prior down to at most 33%, and since there are far more than three possibilities, it must be lower still.)

Here, then, we can see how the OTF acts on the prior probability. A believer, an insider, is likely to overestimate the prior probability, and the OTF asks us to reconsider the prior from a broader, less biased perspective. This will always lower the estimate for any serious believer, often substantially (from greater than 90% to less than 1% in many truly honest cases, if the insider can step outside his beliefs far enough).

Remember, this number is merely the prior, the probability assessment outside of examining evidence, so even the most sophisticated apologetics that support Christianity and bash Islam should hold no sway on it. The OTF helps hold us to that.

For what it's worth, my most controversial claim in God Doesn't; We Do is that given the potential creativity with which we could define a God or a "true religion" to worship that God, without hard evidence, the prior is actually zero, almost surely. See Chapter 5.

The "positive consequent," t

This consequent measures how likely the evidence we have is given the assumption that the hypothesis is true--the evidence being what we see (and conspicuously do not see but should--see my argument in God Doesn't; We Do regarding modus tollens and probability in Chapter 5). This evidence spans all of the evidence that we have in the world, and we are asking ourselves how likely this evidence is in light of the hypothesis that "religion X is true," or rather, how well religion X explains the evidence. My claim here is that being a believer causes someone to overestimate this number as well. This number is the most problematic of the three, by far.

There is a lot of room for argument here, since believers are unlikely to be able to see the gaping holes their religion leaves in its explanations of the evidence, many of which have been papered over by doctrine, apologetics, and ad hoc, a posteriori attempts to usurp the evidence. These holes are also likely to have been thoroughly (though rather pitifully) rationalized, as have been the enormous failures that we might expect of the Godheads of these religions, like the Problem of Evil (generalized in God Doesn't; We Do to "The Problem of a Silent God"). Thus, we're likely to see very high, unwarranted estimates here from believers. Fundamentalists would heartily give a 100% (t=1) chance that the evidence is what we would see if the religion were true, and again, even more cautious believers are likely to give a value above 90% (t=0.9) and certainly above 50% (t=0.5).

This, for what it is worth, is the main reason we see religions actively trying to usurp the findings of science, along with all attempts to put science and faith in entirely separate magisteria of thought. Science has simply shown religion to be wrong on too many hypotheses for this number to stay protected, but if science can be claimed to be part of God's design in some religion, then its findings can be claimed in favor of that religion, i.e. as points toward a higher t. It is also why we see so many disingenuous attempts to paint God as above moral law: his many, many moral failings, by everything we define as moral, absolutely obliterate this number. Furthermore, it is the main motivation for religions to bash one another, as the very existence of other religions lowers the estimate of this number, since they would not be predicted to exist if any one religion were actually true.

The OTF invites us outside of the belief structure to consider the evidence more carefully. Again, the sheer number of religions that exist in the world--which is itself evidence--absolutely shatters the idea that even something like t=0.5 is a reasonable guess at this number. The exceptionally unlikely claims religions make--like that beings can rise from the dead or that we can magically live forever by choosing to accept a simple proposition and live by it--make matters worse.

The Problem of Evil absolutely destroys this number. Indeed, the claim that God is benevolent and loving, combined with the fact that God allowed slavery and the Holocaust, among many other horrors, to proceed--doing nothing to stop them while leaving in scriptures attributed to him sanctions of, and even celebrations of, the practice--pushes the outsider's view of this number very low. Christopher Hitchens's famous observation that God ostensibly ignored humanity in our frustration and squalor for more than 99% of our history on the planet deals another huge blow here. Indeed, the main argument I make in Chapter 5 of God Doesn't; We Do, concerning the probability that God exists being zero, almost surely, rests upon the fact that this number is zero, almost surely--not the zero-almost-surely prior mentioned in the previous section.

Just considering these abuses--and this says nothing of so many other easily preventable evils, including the utter lack of effective medicine in the scriptures--places t at a tiny, tiny fraction of a percent, far less than one in billions: billions being the number of decent folks we know for sure have lived who would have done something to prevent atrocities like slavery or the Holocaust had they had the capacity to do so quite literally at a word (or who would have put a stop to terrible diseases had they known how). Personally, I find it impossible to see this number as being higher than zero, almost surely.

So the OTF, in any case, invites us out of the bubble of belief and allows us to examine carefully the question of "what would the evidence, meaning the world, look like if the religious hypothesis X were true?" This has the effect of lowering the insider's estimate for t, probably substantially.

The negative consequent, n

The negative consequent asks us to estimate how likely it would be that we would see the evidence we do under other explanations, rather than under the hypothesis being tested (here the validity of religion X). This number should admit almost no debate, but it still gets it. The naturalist point of view is specifically that "we see exactly the world we would see if all we saw were nature." Thus, n is almost surely 1.

Still, strong religious belief, particularly where it actively denies science (and naturalism more broadly), will underestimate this number. This is the heart of the anti-evolution argument, in fact: an attempt to lower n, because it is in the best interests of the religious argument to push this number as low as possible. It is also at the heart of the attacks on science as a discipline (including the bogus "scientism" assault): the number n feels like it must be protected if religious faith is to be protected, because otherwise the Bayesian reasoning process doesn't favor maintaining the faith.

Thus, we can expect that religious belief will underestimate this number, perhaps even claiming that we just don't know (n=0.5), although I find any number lower than this so incompatible with modernity as to be very easily dismantled--even though hard fundies may offer numbers near zero. My guess is that a typical believer might estimate this number at around three quarters (n=0.75).

Since the OTF invites us to look at the naturalist point of view honestly, it is very likely to raise this number from the religious insider's underestimate. An outsider who isn't using their faith to inform their thinking should assign a value very near n=1, for no other reason than that the answer to the question "what would the world look like if there were no God?", if given without religious bias, is "just like this one."

Putting it together

Now we can examine the numbers themselves by considering some hypothetical cases based upon the above discussion. My goal is to illustrate the rather dramatic effect the OTF can have in light of a Bayesian analysis. For the purposes of demonstrating that effect in different circumstances, we will look at a few examples. We will assume values for the three numbers a, t, and n based upon various circumstances, examine what a Bayesian analysis will lead someone in those circumstances to conclude, and then explore various ways that the OTF can change the situation.

The formula, in these variables, gives the posterior probability as

(a*t) / [(a*t) + (1-a)*n]
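For anyone who would rather poke at the formula than just stare at it, here is a minimal Python sketch--my own illustration, not anything from Loftus's book--that computes the posterior from the three numbers and checks the earlier observation that equal consequents leave the prior untouched:

```python
def posterior(a, t, n):
    """Posterior probability of a hypothesis via Bayes's theorem.

    a -- prior probability that the hypothesis is true
    t -- "positive consequent": P(evidence | hypothesis true)
    n -- "negative consequent": P(evidence | hypothesis false)
    """
    return (a * t) / (a * t + (1 - a) * n)

# Sanity check: when t == n, the evidence carries no weight and the
# posterior collapses back to the prior, exactly as noted earlier.
assert abs(posterior(0.3, 0.7, 0.7) - 0.3) < 1e-12
```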

Case 1: A Fundie

Consider first a near-complete fundamentalist who is 99% sure his religion is true (a=0.99), 99% sure that the evidence is what he'd see if his religion were true (t=0.99), and sure that we just don't know what the world would look like if his religion were false (n=0.5). If we run these numbers through Bayes's theorem, his approach to reasoning gives him even more certainty that his religion is true: 99.5% sure.

Now, suppose he signs up to take the OTF but does a relatively poor job of letting go of his religious ideas on his first attempt. Still, he sees some validity in the argument about the prior and drops his sureness that his religion is correct to 50% (a=0.5), while admitting some uncertainty on the evidence (t=0.90 and n=0.75). He'd still call this "pretty sure," and yet Bayes's theorem tells us that his line of reasoning now only lets him be 54.5% sure his religion is true. This is a huge drop from 99.5% under his non-OTF assumptions about those numbers, although, to be fair, it is larger than his prior by a small amount, so repeated efforts on the same assumptions, each starting from the higher prior, will eventually push his feeling of certainty up. This is to be blamed on the reasoning process and inputs, though, not on a failure of the OTF, as we will see.

Now he takes the OTF a little more seriously and admits to the idea that there are at least four major religions, one of which is his (a=0.25), admits that the evidence only matches his religion's claims 75% of the time (t=0.75), and thinks the naturalist explanation accounts for about 90% of what we see (n=0.9). In this case, Bayes's theorem tells us that he can only be 21.7% sure that his religion is correct. Notice here that Bayes's theorem actually lowered how sure he could be from the prior because of the more honest evaluation of evidence--hence repeated looks at the question starting from this new position will actually push the number down over repeated investigations. Also, note the change from his original insider perspective.

Now suppose a disaster happens in his personal life--say his dad dies in an accident--and he revisits the OTF. This time, he backtracks to a 90% prior because he's desperate for it to be true (a=0.9), but the evidence just isn't something he can accept matches his religious belief, so he evaluates t=0.01 (there's only a 1-in-100 chance that his God would allow what he saw) and n=1. Now Bayes's theorem tells him that despite his high prior probability, the evidence indicates that there's only an 8.3% chance his religion is true. The evidence just doesn't hold up for him.
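Reusing the posterior helper sketched above, the four stages of this hypothetical fundamentalist's journey reduce to a few lines of Python:

```python
# Case 1: four passes at the same question with different inputs.
stages = [
    ("insider",          0.99, 0.99, 0.50),
    ("half-hearted OTF", 0.50, 0.90, 0.75),
    ("serious OTF",      0.25, 0.75, 0.90),
    ("after tragedy",    0.90, 0.01, 1.00),
]
for label, a, t, n in stages:
    print(f"{label:>16}: {posterior(a, t, n):.1%}")
# insider:          99.5%
# half-hearted OTF: 54.5%
# serious OTF:      21.7%
# after tragedy:     8.3%
```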

Case 2: A Moderate

Now we'll consider a few cases of moderate faith to see what the OTF can do. Here we have a fellow who is 80% sure his religion is true (a=0.8), is about 60% sure the world looks as it would if that religion were true, because some of what's in the scripture just can't be reconciled with science (t=0.6), and can see most of the merit in the claims of naturalism (n=0.9). Bayes's theorem will tell him that he can be 73% sure that his religion is true, which is enough to be getting on with but not enough to be shoving it down anyone's throat.

If this person takes the OTF seriously, though, and spends some time outside of his religious bubble, he might see that there are 20 major world religions that are essentially incompatible with one another and thus drop his prior to 5% (a=0.05). He would also admit that the Problem of Evil is a pretty big problem and that there are some gaping holes in his religion's attempt to explain science (t=0.1), and see that naturalism really could handle the vast majority of the explanation (n=0.99). Bayes's theorem here would tell him that there is only a 0.5% chance that his religion is true. The Outsider Test for Faith here--on very generous assumptions (see below)--has a huge impact on his ability to accept the tenets of his faith. On the inside, this reasonable moderate has grounds to be 73% sure he's right, but from the outside, he has a certainty of just 0.5%.
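Again with the posterior helper from above, the moderate's two perspectives take one line each:

```python
# Case 2: the same moderate, inside and outside the bubble.
print(f"{posterior(0.80, 0.60, 0.90):.1%}")  # inside:  72.7%, ~73%
print(f"{posterior(0.05, 0.10, 0.99):.1%}")  # outside:  0.5%
```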

Case 3: Whole-hog--more realistic (though still generous) assumptions

If one of these believers goes almost whole-hog with the OTF after reading God Doesn't; We Do, he may realize that there are literally hundreds of competing religions, with tens of thousands of denominations that damn each other, and that as a true outsider trying to pick among them, it seems a bit unlikely, if he's fair and honest about it, that his prior can be anything greater than 1 in 10,000 (a=0.0001), which he still sees as pretty generous. The Problem of a Silent God is overwhelming, rendering billion-to-one odds hugely generous (t=0.000000001). There's no question that the naturalist approach would give the same world we see now (n=1). Bayes's theorem, then, leaves this fellow with only a 0.000 000 000 01% chance that his religion is true, roughly the same odds as throwing seventeen six-sided dice and having them all come up 6 on the first throw--NOT something anyone would bet their life on!
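One more pass with the helper, including the dice comparison for scale (the dice line is just my own back-of-the-envelope check):

```python
# Case 3: whole-hog OTF assumptions.
p = posterior(1e-4, 1e-9, 1.0)
print(p)          # ~1.0e-13, i.e. about 0.00000000001%
print(1 / 6**17)  # ~5.9e-14: seventeen dice all coming up 6, for scale
```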

Summary

To briefly summarize, Bayes's theorem allows us to consider numerically what effect different pieces of information have on the likelihood (and thus acceptability) of a hypothesis we wish to put to the test. In the case of Loftus's Outsider Test for Faith, we will approach the numbers that we put into Bayes's theorem far differently than we would without the OTF. In particular,
  • The OTF has the goal of having someone consider their assumption of the prior probability that their religion is true more honestly against a broader perspective, greatly reducing the tendency of a biased believer inside a faith to overestimate the prior probability. The OTF will cause someone to lower their assumed prior probability to a more realistic value. (This is what I have called a here, usually labelled something like P(h|b) in math-speak.)
  • The OTF has the goal of having someone consider how well the evidence (i.e. the universe) matches their religion's claims about it more honestly, greatly reducing the tendency of a biased believer inside a faith to overestimate the degree to which the assumption of truth of their religion predicts the evidence presented by the world. The OTF will cause someone to evaluate the failure of their religion to explain reality more seriously, lowering this consequent to a more realistic value. (This is what I have called t here, usually labelled something like P(e|h,b) in math-speak.)
  • The OTF has the goal of having someone consider how well other explanations of the evidence would explain what is seen on the presumption that their religion is false. This number is hard to see honestly from within a religious framework and is likely to be underestimated by believers inside it, and the OTF will have the effect of reducing the tendency to underestimate this number. (This is what I have called n here, usually labelled something like P(e|~h,b) in math-speak; the full formula in this notation appears just below.)
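For those who want the math-speak spelled out, the relationship among those three labels and the formula used throughout is just Bayes's theorem itself, which in LaTeX form reads:

```latex
P(h \mid e, b) = \frac{P(h \mid b)\,P(e \mid h, b)}
                      {P(h \mid b)\,P(e \mid h, b) + P(\lnot h \mid b)\,P(e \mid \lnot h, b)}
               = \frac{a \cdot t}{a \cdot t + (1 - a) \cdot n}
```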
A few further notes on each:
  • The overestimation of the prior probability from the insider's perspective is likely to be a wild exaggeration of reality that completely neglects the true diversity of incompatible religions and the viability of non-religious positions. 
  • The overestimation of how well the evidence matches what we should see under the assumption that the religion is true is likely to be the most incredibly wildly exaggerated number in the construction, often being relatively close to 1 for strong believers, whereas an outsider is likely to see this number as so close to zero (or actually zero, if you believe me) as to literally destroy the entire basis for faith. Here is where the biggest struggle will take place for most believers who take the OTF--seeing that the world really would be a vastly different place if their religion were true. Only by truly stepping outside of it is there any hope of that, which is exactly what the OTF asks people to do.
  • The underestimation of how well the evidence is explained by other explanations is likely to be severe as well. Even on the assumption that some religion must be the thing to explain it, with a complete rejection of naturalism, if someone admits that there are 20 religions attempting to do so, then this number has to be at least 19/20 (95%) from an outsider's perspective (instead of the 50-60% likely from believers--though some will go lower, toward 0!). Even a cursory glance at science from the perspective of the outsider will point to hundreds if not thousands of ways that science has outperformed religion in explaining the world to even modestly educated folks, rendering this number very near 1 for anyone who takes the OTF honestly.


-------------------------------------------------------------------------------------------------------------------------------------

If you enjoy my writing, you can read more of it in my first book, God Doesn't; We Do: Only Humans Can Solve Human Challenges. If you choose to pick it up, I thank you for your support of me, my family, and indie authors in general.

9 comments:

  1. Thanks for this excellent summary. I've tried to avoid explicitly invoking Bayes in the past, partly because it generally requires a lot of explanation and partly because it's so easily abused by people who are either ignorant or dishonest. Having read this, I may have to become far more of a Bayes nut.

    1. Thanks! I feel like I can still do a lot better with this, because like you, I've avoided doing these kinds of analysis (for the same reasons you note) up until this particular point was raised. Stay tuned: I intend to add more as I think of it, and I've thought of more to use to craft another post this evening.

  2. James, thanks so much for helping me understand how to really use Bayes' theorem in this way (especially with OTF as a framework).

    I'm unfortunately quite dysmathic, but I'm slowly learning how the theorem works, and how it applies here.

    One thing I've learned to do when trying to come up with both the prior probability and the "t" probability is to ask a series of serious questions, particularly regarding theodicy (the problem of evil). Specifically, if a believer accepts that, say, a mudslide in Java (that's not a programming euphemism! :) has no possible good aspect, then I can help extrapolate that one instance to {n} instances using Google--which can help the believer understand how many problems there are with such thinking, and that the "proof" for a god's existence and inherent goodness is much weaker than they thought.

    Rinse and repeat for earthquakes, school shootings, tsunamis, plagues, industrial accidents, and so on.

    Of course, in the believer's mind, this is all accepted only theoretically, but showing how just a few instances of tragedy can affect the likelihood of their god's existence (or involvement) can place an annoying seed of doubt, which, if carefully nurtured, can result in the freeing of someone's mind from the dead weight of religion. I try to water regularly!

    Naturally, we're never going to be able to reason someone out of a belief system they weren't reasoned into; but it's well worth the effort!

    Thanks again for the article, it's been really helpful!

  3. How does ad hoc reasoning get calculated in a Bayesian analysis?

    For example, Christians typically respond to the Problems of Evil and Absence with bogus ad hoc explanations; with Bayes's theorem, we can see that their motive in this is to raise the positive consequent probability of their "God" hypothesis, which the atheistic arguments mentioned greatly reduce.

    These explanations surely now complicate their hypothesis and, thus, reduce its prior. But do they succeed in curing the poor positive consequent as well??? It doesn't seem that they should, given that such ad hoc explanations have no evidence supporting themselves. But I can't quite make out how to formally express this via such a Bayesian analysis.

    1. This is a good question, or set of questions. On the one hand, I think it gets calculated in their estimates of the three relevant probabilities (this three-number approach not being the best way to use Bayes's theorem, I've since found out--another topic for another day).

      First, though, I think they will just use their ad hoc reasoning to come up with poor estimates for the two consequents. That this reduces their prior is part of the blindness that makes up this particular cognitive bias, so they're unlikely to adjust their prior legitimately because of it.

      If we truly wanted to evaluate this carefully via Bayes's theorem, I think we'd have to consider the positive consequent itself (being multiple hypotheses now) with a separate application of the theorem, but I'm not completely sure about that. Technically speaking, they have not succeeded in raising the consequent as they want to, but since Bayesian reasoning is inherently subjective to some degree (these numbers cannot usually be estimated accurately), they won't realize it.

      A simpler approach would probably be to take the ad hoc reasoning and add it to the hypothesis being tested from the get-go, but that gets complicated.

  4. "this three number approach not being the best way to use Bayes theorem..."

    Really? I would be very interested to hear your thoughts on this(!), for that simple feature is what makes the logic appealing.

    1. Yes. The problem is that the "negative consequent," as I've called it, is almost impossible to actually estimate. It feels easy to estimate, but it's actually not. Indeed, the "negative consequent" sweeps a lot of details under the rug.

      A good example is to test the validity of general relativity as a hypothesis. The prior can be estimated by a number of means (and really doesn't matter much), and the "positive consequent" is also very easy to estimate (and very near one). What about the negative consequent, though? Well, first, we have to consider a number of alternative hypotheses--it's not terribly informative to simply lump them all under "relativity is not true." Second, we have the enormously difficult prospect of estimating the evidence based upon the untruth of GR, which rather necessarily implies the accuracy of some other hypothesis. We know Newtonian mechanics is worse, but other hypotheses might explain the evidence pretty well.

      A better way to do it is a four-number approach that directly compares two hypotheses, weighing them against one another. Instead of coming up with a probability that some hypothesis H1 is true, you end up with a comparison of the validity of two hypotheses, H1 and H2. The four numbers in question are the two priors and the two positive consequents on those hypotheses. Even here, though, the numbers are still *hard* to estimate with validity.
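      To make the four-number idea concrete, here is a minimal Python sketch (my own illustration, with made-up numbers) of that direct comparison via posterior odds:

```python
def posterior_odds(prior1, like1, prior2, like2):
    """Posterior odds of H1 over H2, from the two priors and the two
    "positive consequents" P(evidence|H1) and P(evidence|H2)."""
    return (prior1 * like1) / (prior2 * like2)

# Example: H2 starts with the better prior (0.6 vs 0.4), but H1 explains
# the evidence so much better (0.9 vs 0.1) that it ends up 6x more credible.
print(posterior_odds(0.4, 0.9, 0.6, 0.1))  # 6.0
```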

      Carrier's three-number approach is probably useful in the case of theism versus atheism because of the nature of the question--there being absolutely no evidence to support theism that cannot be explained at least as well by materialistic naturalism while gaping holes exist in the theism construction.

    2. I should add to my GR example: it is also enormously difficult to clarify what is meant by "GR is not true," which is required to assess the negative consequent. Like absolutely true? Or how true is it, given that it explains a whole lot of phenomena to very high accuracy but doesn't marry quantum mechanics?

      If we want to say "not absolutely true," we'd have to assign a probability very near 1 for the negative consequent as well because even a single observation (including QM behavior) that doesn't match what GR says takes GR and throws it under the bus. This would also be the case if GR is absolutely accurate to, say, 15 decimal places (in some set of units) but not always 16 (or 159 but not 160). Obviously, that can't be what is meant by ~GR, but Bayes's theorem isn't giving us the kind of clarification needed to parse that out.

      On the other hand, if we want to have our estimate account for how well it does fit the data somehow while admitting that other hypotheses might do it better, how on earth is someone to deal with that?

      In terms of the religion example, we really should be using more numbers: the prior probability that religion A is true times A's positive consequent plus the prior probability that religion B is true times B's positive consequent plus the prior probability that religion C is true times C's positive consequent plus so on and so forth until all of the significant-enough religions are accounted for. This gets pretty ugly, especially if we admit the possibility that there is a true religion that is either dead or hasn't been invented yet (or might never be invented!), and it shows how much is being swept under the rug with this three-number method. That negative consequent is, unfortunately, really just a guess with some plausible argument resting on it.
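      As a sketch of how ugly that gets in code (with purely hypothetical numbers), the posterior for any one candidate comes from normalizing prior-times-consequent across all of them, a naturalist hypothesis included:

```python
# Purely hypothetical priors and positive consequents for the candidates.
candidates = {
    "religion A": (0.02, 1e-6),
    "religion B": (0.02, 1e-7),
    "religion C": (0.01, 1e-8),
    "naturalism": (0.95, 1.0),
}
# The denominator: prior times consequent, summed over every candidate.
total = sum(prior * like for prior, like in candidates.values())
for name, (prior, like) in candidates.items():
    print(f"{name}: {prior * like / total:.2e}")  # naturalism swamps the rest
```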
