Monday, June 30, 2014

Installed Disqus Commenting Platform

Just a quick note to any regular readers and anyone else who happens to come across this (low as that likelihood may be): I have installed the Disqus commenting platform on the blog, hopefully to facilitate more in-depth, constructive, and readable conversations in the comments beneath my posts.

This may result in some technical difficulties, which I ask you to bear with as they crop up, and it appears that Disqus's import of the existing comments on this blog came out a little weird. Any confusion that creates is unintentional, and my apologies are offered here to the degree that they're meaningful.

At any rate, my hope is that having installed Disqus will encourage more lively conversations here, and so I look forward to seeing your commentary on this new platform.

Monday, June 23, 2014

The illusion of moral choice

Following Sam Harris's "Moral Landscape Challenge" has provided me with no real surprises. I've talked enough shop with moral philosophers not only to have expected the response given by the winning entrant, Ryan Born (whose winning essay can be read here), but probably to have been able to write it. In fact, and though I'm quite a fan of Russell Blackford and his work generally, I've also come to know enough about the topic, and about Blackford's general soft spot for arguments like Born's (or his ability to recognize their genuine rationality, if it applies, to be fair), to have guessed that something like it would win. I almost did write something along those lines in a vaguely Sokalish mood, in fact, but thought better of it because of my agreement with Harris's arguments both in The Moral Landscape and in Lying.

I must say that Harris's response to Born proved mostly predictable as well, as it's similar to what I would have said in his place and, more importantly, the obvious extension of the thoughts he's published on the topic before. It is an excellent read, though, and it pushes into the philosophical limelight a few concepts that really need to be there, particularly the one about inhabiting a "single epistemic sphere."

Born, on his blog, responded to Harris's response in a way that was, again, utterly predictable given familiarity with his case. Harris will probably have no trouble dealing with it, but the circumstances remind me of Tim Minchin's metaphor about playing tennis on the opposite ends of two different courts and executing well-aimed serves with great technical skill and minimal point.

As I see it

Not to oversimplify the matter--for there are twists and turns relevant to their discussion that I will not bother with here (or probably anywhere, ever)--Ryan Born's case boils down to the fact-value distinction: we have to import at least one value into our conceptual space to get started with the ethical enterprise, and moral philosophy isn't just best suited to do this, it is uniquely suited to do it. Harris's case rejects that claim, sort of. I agree with Harris, perhaps more deeply than he's written so far on this matter, and this essay of mine seeks to outline why.

As a quick disclaimer, one that will prove more germane as this essay develops, I want to make sure it is clear that I am in no way trying to, or even interested in, stealing Harris's thunder. I trust he can, and maybe will, make a similar or the same argument, but as I often do best to clarify my own thinking on a matter by trying to spell it out for others, I've elected to do so here. Rather as proof of that fact, I also have no interest whatsoever in engaging with Born's arguments with Harris in any level of detail. That's for Harris, if he chooses to do it, obviously, and not some upstart with a blog.

I see a fundamental error in Ryan Born's case, though absolutely no moral philosopher is likely to agree with me about it. I don't think we can choose our moral values, though it's far more accurate for me to say that I don't know that we can choose our moral values, mostly because I don't actually know what "choose" means at this level of analysis. Under certain definitions that would follow from compatibilist views on free will (that the fact that we're a phenomenon of nature and its mechanistic, though not always deterministic, laws is compatible with free will, a position held by philosophers including Daniel Dennett, and contrasted against incompatibilist views), perhaps we can and do choose our values, though even in that case I doubt it; but since I found Harris's Free Will to be as persuasive as his other printed efforts, I don't camp in the compatibilist tent.

The essence of a case like Born's for the utter necessity of moral philosophy boils down to the idea that moral values are arrived at by thinking about them. The Bard, earning his title, put it as well as it has been put when he had it escape the mouth of Hamlet in the Second Act, addressing the hopeless intelligencer Rosencrantz, "There is nothing either good or bad, but thinking makes it so." Goodness and badness, the only meaningful moral currency, are defined by our values, and it is up to us to think about and then choose what we will value. I don't think this is coherent if we really get to the bottom of things, though it certainly makes sense in how we do, and maybe ought to, approach it in the day-to-day sense. "Ought," here, of course, being as Harris describes: an assessment of what we think will return the best result as measured on some metric of well-being and suffering, and "think" being the progressing set of mental states entering our conscious awareness to which we are treated with a uniquely personal experience.

Harris's case in The Moral Landscape is that there's something fundamentally broken with this idea, pushing the idea that "good" and "bad" can all be boiled down to a single overarching value that compares well-being and suffering. It is critical to understand Harris's point to fully appreciate the genius--which is the right word even if he is wrong about this--required to attempt to foist such a revolution on one of the oldest branches of philosophical thought. That point is that the terms "good" and "bad" are only truly intelligible against the standard of well-being and suffering. As those states are conceivably measurable by meaningful, though undefined and surely complex, metrics, moral values fall within the purview of scientific inquiry, broadly construed. While Harris admits we must import moral axioms to start the effort of a moral science, his case is not just that this is a trivial point that doesn't justify the expansive edifice of moral philosophy as the only vanguard to ethical reasoning, it's also that the axiom he proposes is the only one that makes sense at all, which means it would satisfy the usual "either self-evident or incorrigible" definition of "properly basic" that philosophers are often so concerned with.

He imagines a "moral landscape," something like a function on a (perhaps complicated, many-dimensional) independent-variable space that defines some reasonable, but hypothetical, metric measured in well-being and suffering. This landscape is likely to have many peaks and valleys, none of which is guaranteed to be the unique best or worst, even if they are maximal. All nadirs in this space, which if they are multiple must return the same value (reasonably a negative value) in the metric, occur at any point--meaning from any moral framework--that generates "the worst possible suffering for everyone." Different sets of human (or moral, if we must) values will correspond to different values in the space, and almost anything that moves us along a positive gradient can be construed as a form of moral progress. (This point is in reality somewhat complicated because something that moves us upward but to a relatively low peak that we might mistake for a zenith maybe shouldn't be branded "moral progress," but this is a digression from anything constructively meaningful at the moment.)
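To make the geometry of the metaphor concrete, here is a minimal toy sketch in Python--entirely my own invention, since Harris offers no formula--of a one-dimensional "landscape" with two peaks and a naive hill-climbing procedure. It illustrates the parenthetical point above: every step can be locally upward and still strand us on a lesser peak.

```python
import math

# A toy, one-dimensional "moral landscape": a made-up well-being metric over a
# single invented axis of value-configurations. The function and all numbers
# are illustrative assumptions only; Harris specifies no such formula.
def wellbeing(x):
    # a modest peak near x = 2 and a higher peak near x = 7
    return 1.5 * math.exp(-(x - 2.0) ** 2) + 3.0 * math.exp(-((x - 7.0) ** 2) / 2.0)

def hill_climb(x, step=0.05, iters=20_000):
    """Naive gradient ascent: repeatedly take a small step in the locally upward direction."""
    for _ in range(iters):
        slope = (wellbeing(x + 1e-6) - wellbeing(x - 1e-6)) / 2e-6  # numerical derivative
        x += step * slope
    return x

# Starting near x = 1, the climb stalls on the lesser peak near x = 2, even
# though a higher peak exists near x = 7: every step was "upward," yet the
# summit reached is not the best one available.
for start in (1.0, 6.0):
    peak = hill_climb(start)
    print(f"start {start}: peak near x = {peak:.2f}, well-being = {wellbeing(peak):.2f}")
```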

Born's case accuses Harris of choosing this axiom, taking well-being and suffering as the bedrock moral value, in a way that is decidedly philosophical, not scientific, and of not being sufficiently appreciative of this point. Harris's point is that, really, that doesn't matter. Philosophers, be they moral or otherwise, fuss about with axioms, and those underlie every endeavor. Furthermore, some kind of consequentialism is crucial to every values framework in a way that philosophers seem not to fully appreciate. Still, the meat of our ethical work, Harris's case makes out, is not in choosing this axiom with whatever philosophical machinery is needed to do so; it is that once this is done, the moral philosophy is done too, because moral science can take over.

I actually don't think Harris goes far enough here.

Can we choose?

The moral philosopher's case, as I'm understanding it, depends upon the notion that we can choose to value whatever we want, including valuing certain things simply for their own sakes. I think this is an illusion (and really, I think Harris does too, given his take on free will).

Until recently, I haven't been able to articulate the necessary points that support my suspicion that we're dealing with yet another of our mind's many fantastic illusions--illusions that probably make living life worthwhile, if not possible in the first place. (The details of this point are largely irrelevant, but the fact is that the brain takes our raw sensory material and filters it, makes it coherent, and feeds what we call our conscious mind a fiction that we call our everyday experience, and it is more than just plausible that a successful livelihood as an animal in a competitive world depends upon experiencing only a remastered construction of what our senses are actually telling us.) In this instance, the illusion is a matter of being, in a manner of speaking, a bit too smart for our own good. It arises from our ability to imagine counterfactual possibilities with some degree of clarity and to make predictions using inferences from both our experiences and imagined experiences, including those counter to the real state of the world at any point in our pasts.

In short, I don't think we choose what we value at all; we simply imagine that we could, in principle, choose to value different things. Looking backward, we judge our behavior against the consequences we experienced and against how we imagine the consequences would have come out had we behaved differently. This thought is used to shape our inferences about the consequences of future behaviors, and thus the likelihood that those will align with our values and thereby produce a "good" result.

From my incompatibilist perspective, I don't think we choose any of this at all--depending upon what "choose" means in this context--and I don't think that we could have chosen to do something else in the past. In the future, our choices will manifest from whatever combination of factors, mental states, inputs, and whatever else causes it to seem to us that they have the best chance of achieving our goals, which are inextricably knotted up with our values. But if we cannot choose any of this, it seems fairly absurd to believe that we're choosing our values, or that we could choose different values. The illusion of choice lies in imagining counterfactuals as legitimate possibilities.

But values change

Of course, our values aren't static. It strikes me as increasingly likely that our values are best construed as certain states of mind, these having been created by the various inputs--experiences, genetics, and what-have-you--that have defined who we are at the present moment. Various inputs anywhere along the line can change the state of our brains, and this is happening literally all the time. This ongoing process leads us to value things other than what we valued previously. Imagining that things might have been different satisfies part of the illusion that we somehow chose those values, the rest of it being supplied by the usual mistake of believing that we are the conscious authors of our own thoughts. None of this, though, is the same as choosing our values, or choosing to value something other than we did, unless, as I've suggested is possibly reasonable, this is how we're defining "choose."

Since we, construed as our conscious minds, only become aware of our choice after it happens, via a change in the state of our brains, it seems very odd to suggest that we are choosing our choices in the usual sense, that of a mind driving itself. This, of course, is not how it feels to think, to live, to experience, or to choose, but how it feels isn't necessarily how it is, even if it does happen to be the best way to act. If we know anything at all about ourselves, we should certainly include that fact amongst the known.

What are values?

This leads to a surprising difficulty with the idea that we could choose to value something other than what we value, and with that thought, I hope to address what I've left out until now: where our values come from, or, rather, what they are. Indulge yourself for a few minutes in the following exercise. Attempt to explain why you value anything that you value without referencing perceived consequences of acting in accordance with those values. My bet is that the only examples you can come up with are ones that somehow seem biologically hard-wired but that drive us in the opposite direction of what we feel like we truly value. Note that those too are consequences-based, but they may be at odds with other more refined notions that we hold for surprisingly similar reasons.

This brings me to a discussion that will seem admittedly a little weird, but I do hope that my reasoning becomes clear as I develop it. As it turns out, some, perhaps most, plants emit certain chemicals from their leaves in response to various circumstances. For instance, the tomato vine is known to emit a compound from its leaves when attacked by certain kinds of insects, and that compound has the effect of causing any tomato plant it happens to land upon--whether other parts of itself or its neighbors--to exude foul-tasting chemicals into the leaves and stems that make them unpalatable to the bugs. This is action as a result of biological hard-wiring.

Now pretend for an instant that plants can think. In this thought experiment, it isn't simply particular enzymes being released in the damaged leaves that trigger a response of emitting other compounds that engender the same effects in other nearby plants; it is a decision on the part of the plant to defend itself and its neighbors (which are likely to be kin) in this way. This decision, based upon the plant's hypothetical ability to imagine that things would go worse (consequences) if it didn't perform this action (based upon evaluating a counterfactual), is almost impossible to construe as anything but a value not to be eaten by bugs, itself based upon values related to what all living things do, which is produce more entities of roughly the same kind. (And "roughly" here is important because it is a consequence of evolution, and thus of the fact that it is a successful strategy for biological things to employ not to simply copy themselves but rather to make genetically distinct facsimiles.)

This is going to sound extraordinarily controversial, but the only real difference between ourselves and a tomato plant in these kinds of circumstances is that we would never say that tomato vines value not being eaten by bugs, because we don't imagine them as being able to think about it. Tomato vines are our cousins. Somewhere on the phylogenetic tree, admittedly way back near the bottom, there was a common ancestor that gave rise both to tomatoes and to us. If we imagine our parallel evolutionary paths, the notion that we can value not being eaten by bugs but that a tomato plant can't starts to lose a little of its force. This is a feeling that we humans are critically, fundamentally different from most or all other animals, although the only real critical difference might be thinking so. Thinking this way is a form of stealth dualism that creeps up on us frequently, even when trying to avoid anthropocentric solipsism, and I think it's endemic to most of moral philosophical thought.

In our evolutionary past, we had "values" in the same way that tomato vines have them today, as a pure consequence of our biological system reacting to its environment. My thought is that nothing has changed in this regard. We are still biological systems reacting to our environments in ways that we perceive, broadly construed, to be to our benefit. It's simply that our ability to imagine counterfactuals and to make predictions, along with a few other factors intimately related to our general measure of intelligence, like sociality, make these systems intractably complex (even in a way we don't attribute to tomato vines despite the fact that we didn't even know about these messenger chemicals until quite recently). Our predictive capacity is still a reaction to our environment, as is the ability to imagine various possibilities that can be weighed, projected both into the future and into the past.

Murky complexity from abundantly clear simplicity

There are reasons we value what we value, and they have everything to do with our evolution, however complex our set of inputs or our capability for attempting to process them. The reasons we originally valued precious stones, and thus largely still do, for instance, are murky. It is probable that almost all of us have picked up a shiny or peculiar rock at some point in our lives because we thought it looked cool (aesthetics, which has psychological impact), that it would be neat to show to our friends (social, including possibly status), or that it might well be worth holding onto in case it could in some way be used or traded. All of these are consequences related to meeting the challenges we are evolutionarily adapted to handle.

Likewise, it is probable that almost everyone who has picked up rocks in that manner, geological rock-hound or not, has at one point or another reflected on the "objective" worthlessness of the stone, especially if it is not precious or even semi-precious. And yet we picked it up and held onto it, maybe for a considerable amount of time, because we valued it, even if for inscrutable reasons. Of note, this behavior is not unique to humans, and our reasons may ultimately be quite similar to why magpies and ravens collect shiny things as well, though it feels a bit unlike how moral philosophers use the term to suggest that the birds value their treasures.

Our variety of values, all of which can be construed in terms of "the good life," which is to say in terms of well-being and suffering, arises from our evolutionary past, including its social aspects, and is probably too complicated to describe in satisfactory detail. For simplicity, then, I will focus only on a generic term for the mental results of our evolution and all that came with it, dubbing them "psychological pathways." There are reasons for our values that may be quite simple, as in the valuation of adequate nutrition to sustain our lives, or quite complicated, as with valuing abstract art, but in these cases, a common, underlying theme is that we are affecting our psychological pathways in ways we subjectively perceive as being positive or negative, producing or precluding well-being, causing or ameliorating suffering.

But we don't choose our psychological pathways, at least not really, as we discussed before. It is more accurate to say that we consciously acquiesce to particular thoughts that crop up in our minds, those being particular patterns of neurological activity, chemistry, and related processes. We don't even choose to acquiesce, though; our acquiescence is yet another expression of the same kind of physical phenomena. Tautological as it sounds, we value what we value because we value it, and those reasons have everything to do with the wide set of inputs that have shaped our mental and psychological pathways and nothing to do with choices consciously made. Instead of rendering this position a pointless tautology, though, what this perspective provides is the ability to see values as certain kinds of facts about the world.

It comes down to need

This, of course, only partially answers the question of where our values come from. The more direct answer is that they arise from needs--many of them complicated psychosocial phenomena that are nearly intractable and often inscrutable. But the chain doesn't need to end there, because our needs aren't a stealth-dualistic concept either. If we compare ourselves, or a tomato plant, or a virus for that matter, to a rock, it seems clear where our "needs" come from: goal-directed behavior. And in this case, the goal isn't chosen either. The goal is self-replication.

In order to self-replicate, as all genetic material does by definition, there are two obvious requirements. First there is the need for the raw materials and paraphernalia (e.g. enzymes) required to create a copy, and second there is the need to avoid destruction long enough to replicate. These are not intentional goals. They are fundamental physical requirements that apply equally to intentional biological organisms and utterly mindless single-molecule chemical self-replicators. At this level, though we don't usually call it "good" or "bad," it is "good" to succeed at self-replication (for a self-replicating molecule is then doing what it does) and "bad" to fail at it (because it is not doing what it does). There is no evaluative process involved here, just simple statements of whether or not a kind of thing meets a definition (self-replicating).

The biological process, even in its most rudimentary form operating only on simple RNA molecules, inherently--this being critical--has the goal of self-replication, which requires access to the appropriate materials and an allotment of time to succeed. Those materials, including the space of time, constitute needs. Needs lay the basis for all simple values, whether we conceive of them broadly, as with tomato plants and viruses, or rather narrowly, as with intelligent animals like human beings. Complex values are servants to the simple ones, which have nothing directly to do with our conscious experiences, all their intricacy being due to the long and ruthlessly competitive operation of natural selection, which pits self-replicating chemical structures against each other in a battle for restricted resources. That means not only do the most successfully prolific self-replicating molecules possess the best mechanisms for preservation and replication, they create for themselves mechanisms that are best at serving self-replication. (That variation in the expression of the replicating thing is one such mechanism is as incidental here as the fact that the process produced sentient animals that can marvel at such things.)

Why do we value what we value? Because we have needs. Why do we have needs? Because we are biological, which is ultimately to say because we are self-replicating chemistry in action--and given our position a few billion years into the evolutionary process, it's a reasonable guess that our needs, and thus our values, are attendant to extraordinarily successful self-replicators. All of our values, simple or complex, are reflections of this fact, and we certainly do not choose them. They all exist because successful self-replicating molecules in long-term, competitive, restricted-resource environments must be extraordinarily successful self-replicators, one consequence of which seems to be the ability to modify the environment around oneself to meet those utterly basic, physical needs of chemical processes. The complicated evaluative systems that are our brains drive thoughts and actions in the directions of guesses they estimate are most likely to satisfy our labyrinthine collection of needs, all of which are slaves to self-replicating molecules that have only two simple values: more raw materials and enough time to use them.

But, but, but...

What about the fact that people value very odd things that are hard to make sense of? What, though, about the fact that some people don't want to replicate themselves? What about the fact that some people don't value replicating themselves, or anyone doing so at all? What about the notion that it's possible to "value" that all life be eliminated (supposing that's even possible)?

A digression into pica

Why do we value odd things that are hard to make sense of? Psychological pressures that are the evolutionary product of the two simple needs of self-replication. The psychological disorder--meaning deviation from the norm--called pica provides a good example. A person who suffers from pica has an appetite for non-nutritive substances like ice and dirt and acts upon it, eating those things, often to deleterious effect (particularly on one's dentition).

The fact that it is a psychological disorder (somehow related to the obsessive-compulsive spectrum) indicates that people may not actively value behaviors like eating dirt, but if we understand acting upon a psychological pressure as a means of reducing psychological distress and achieving some modicum of relief from suffering, it can be seen as a kind of value indeed. The specific behavior doesn't define the value; the reduction of psychological distress does. But what is psychological distress for? It is for helping the organism pursue what it has deemed necessary, perhaps not intentionally, to maintain itself.

Pica makes an interesting example as well because despite its association with OCD (and in some cases cultural factors--a psychosocial phenomenon integral to the evolutionary product we call human beings), it is also recognized to be tied, in many cases, to particular mineral deficiencies--a case where the self-replicating bits register a shortage of a raw material necessary to their overall self-replication machine (here: a human organism).

Values that are hard to make sense of are easier to understand in light of the concept of psychological pressures, which are evolutionary products of self-replicating molecules, no matter how complicated. That these, as with pica or other psychological disorders, can go awry or be hijacked is neither surprising nor any reason to believe that our realm of thoughts is somehow a privileged domain for the existence of values, accessible in principle only by philosophers and empirically tractable in principle only by normative means, however useful those approaches are in practice.

The other objections

The objections are only superficially meaningful. Part of the consequence of our evolutionary heritage is having developed the capacity to imagine potential futures and to evaluate the consequences of those. That capacity brought with it psychology, which brings upon each individual an experience that may not press individual replication in the form of reproduction to the forefront of importance but still has everything to do with self-replication at the level of the thoughtless clockwork of chemicals that make up a huge proportion of the cells in their bodies. It's simply misleading to think of this problem in terms of individual human beings seeking to produce human offspring, however many of their social activities and attendant psychological pressures are slaves to that drive anyway.

When it comes to valuing destruction of sentience, that is, some kind of destructive nihilistic ambition, there are still reasons for those values that reduce to psychological pathways. Self-destruction can follow from the need to ease a pathological psychological pathway, or as we commonly put it, suffering. To imagine that all sentience, all life, or all self-replicating molecules are bad and should be destroyed is simply to commit a gross error in believing that suffering is a universal feature of sentience, life, or mindless self-replicating molecules in a way so complete as to warrant its destruction as the only real positive gradient on the Moral Landscape.

If one conceives of the nadirs on the Moral Landscape as being below sea level--negative values in which the balance of experience is tipped toward suffering--a need to reduce suffering can make self-destruction appear to be a moral positive. To value something extreme like the utter destruction of all life is to have come to believe that the only achievable nonnegative point in this space, under any metric, rests at exactly zero, attained by flattening the Landscape by rendering it moot.

(Note: This matter of the ethical responsibility for handling well-being states that land irrevocably in the negative, at the level of individuals, is a more challenging question than it appears on the surface. The entire ethical debate about euthanasia--the right to die--is centered upon this question, and it can be construed as a question of whether or not permanently negative states of well-being can actually exist for someone. This important discussion is unnecessarily and damagingly obfuscated by inhumane, dogmatic religious taboos.)

Anyone who seems to be valuing these things must be understood to be, like everyone else who values anything, in thrall to their psychological circumstances, which themselves are an evolutionary byproduct of the simple fact that there exist self-replicating molecules, some better at the game than others. A self-replicating chemical system that can bring itself more raw materials and prevent its own destruction for long enough to replicate itself one or many times has a natural advantage in the self-replication game over others and will proliferate. Apparently, having a brain that exhibits psychology is a sufficiently successful strategy to have led us to our present state of affairs, in which those amazingly clever brains can convince themselves of fantastic things. One such fantasy is that our values arise purely from thoughts that can be chosen and are distinct from the satisfaction of needs that all, ultimately, exist to play their role in the self-replication of certain molecules that do what they do simply because it's what they do.

We have values, period.

It isn't, then, that we choose even our first value. We simply have values as a fact of nature, and as a fact of nature, those values are in principle discoverable by the means and methods that are collectively known as science. Construing them as well-being and suffering puts primacy on sentience, but why shouldn't we? There seems to be something central about sentience to moral values (and, obviously, to the "human values" Harris wrote his book about), and well-being and suffering aren't anything like an arbitrary choice. The very meanings of those terms--which arise from and cannot be freed from our empirical experiences, a point that often seems lost on philosophers--are expressions of our psychological pathways. Those, though, are just another expression of our evolutionary heritage as chemical systems consisting of and in service to self-replicating molecules. We value what we value because that's what value means. Harris is right, then: it's simply unintelligible to talk about human values in any other way.

But philosophy is important

Moral choice is an illusion, but this need not and should not be the death of moral philosophy or its primacy in the ethical arena. Morality is simply too complicated to be addressed empirically in a clear way in practice at present, and that may always be the case. We don't yet have anything like the proper metrics or tools with which we could measure much about various attempts at realizing human values, and even if we did, the data we would find might simply be too unwieldy to wrangle into anything fully sensible. Further, the philosophical matter upon which moral philosophy ultimately hinges, the compatibilist versus incompatibilist interpretation of free will, must continue to be debated and seems primarily a job for philosophy. Whether that discussion is settled or not, philosophers should aim to make clear and useful sense of the notion of "choice" under incompatibilism, which also looms over the salience of moral philosophy as a uniquely privileged endeavor in making sense of human values. For all of these debates, one thing is clear: philosophers interested in the task aren't going to get anywhere meaningful without taking on a lot of neuroscience.

This does, therefore, make a definitive case that to the degree that moral philosophy remains relevant--which should be very significant--it must be informed by, and in the relevant parts ceded to, the nascent moral sciences. "Lead, follow, or get out of the way," the saying goes, and moral philosophy is now justified in doing all three as appropriate, but it no longer has permission to use the illusion of moral choice to justify confusion about which it should do in which circumstance.

Friday, June 13, 2014

A shorter discussion of evidence

About a week and a half ago, I finished and published a ridiculously long essay about what constitutes evidence, totalling more than 10,000 words, because I felt like there was a whole lot of ground to cover to shore up the many things I was trying to address and make sense of given the topic. The original essay about evidence can be read here. My purpose here is to give a shorter, summarized version of that very long discussion.

How I think we should use the word evidence

In an effort to blend the lay uses of the word "evidence" with the more careful scientific use of the word, I'm offering the following as a prototype for a philosophical characterization of the term, hoping to replace what I feel are far worse interpretations of the word (e.g. "any observation that increases the probability that a hypothesis is true," which is broken to pieces and very misleading).

My suggested meaning for the term "evidence" is
A body of observations O is evidence for a hypothesis H if, and only if, it is a consistent part of a larger body of observations, called the evidential closure of O, comprising all observations bearing significantly upon H, such that the probability that H is true given O (plus its evidential closure) is sufficiently great to warrant justified belief that H is true. In this case, we could call an observation A in O an evidential observation.
The key points to note about this characterization of the term "evidence" are:
  • "Evidence" refers not to individual observations but rather to a body of observations;
  • A body of observations only constitutes "evidence" for a hypothesis if that hypothesis is actually (provisionally) true (I will elaborate upon this below);
  • A body of observations cannot constitute "evidence" if it is cherry picked from a broader range of observations that collectively fail to render (provisional, qualified) acceptance of a hypothesis;
  • What we call "evidence" can be qualified by the statistical confidence with which we are claiming provisional acceptance of the hypothesis (i.e. O is evidence for accepting H at the 95% confidence level);
  • There is no evidence for anything we have really good reasons to think is false.
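To make the characterization above concrete, here is a minimal sketch in Python. The function name, the default threshold, and the example numbers are hypothetical illustrations of the idea only, not a formal apparatus I'm proposing.

```python
# A toy rendering of the definition of "evidence" given above. The names,
# the 95% threshold, and the numbers are illustrative assumptions only.

def is_evidence(prob_h_given_closure: float, confidence_level: float = 0.95) -> bool:
    """Return True only if H, conditioned on the body of observations O *plus*
    its evidential closure (everything bearing significantly on H), is probable
    enough to warrant provisional acceptance at the stated confidence level."""
    return prob_h_given_closure >= confidence_level

# Cherry-picking fails the test: a hand-picked subset of observations might
# favor H, but if the full closure drags the probability down, that subset
# simply is not "evidence" on this usage.
print(is_evidence(0.99))  # True:  O, taken with its closure, warrants provisional acceptance of H
print(is_evidence(0.60))  # False: merely raising the probability of H is not enough
```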
Understanding the idealization

To really get a keen idea of what I'm going for with this definition, it is helpful to examine its idealized expression, the kind of expression we would use if we had some way to apprehend clearly the truth instead of the provisional truths we're often obliged to rely upon.

Idealizing this definition would read something like this:
An observation A is evidential for a hypothesis H if it raises the probability that we're right to think H is true and H is actually true.
The idea here is to mirror the Platonic definition of knowledge: justified true belief. As I'm putting it, evidence supports knowledge, and knowledge requires that the belief in question is true. Thus, there is no evidence for anything false. That is, knowledge has to accurately reflect reality.

Evidence for a hypothesis?

The phrase "evidence for a hypothesis" is a common abuse of terminology. In reality, we have observations that are consistent with or not consistent with various hypotheses. Hypotheses represent a very specific kind of abstract construction by which we attempt to understand the world, and they should not be confused with the reality that we are attempting to describe. When I use the phrase "a body of observations is evidence for a hypothesis...," what I mean is that the given body of observations is consistent enough with a hypothesis that is sufficient for us to determine at some reasonable level of confidence that the hypothesis is provisionally true.

Range of applicability

I don't want to go overboard with this section, but it is important to realize that "evidence," like the hypotheses it supports, necessarily has a range of applicability. Many of our everyday observations constitute splendid evidence for Newtonian mechanics--which is to say that Newtonian mechanics will give spectacularly accurate predictions for the phenomena we're observing--even though we know that Newtonian mechanics isn't the whole story. On its range of applicability, which can be delimited with regard to special and general relativity and quantum mechanics, though, Newtonian mechanics is provisionally true with very high confidence, and thus we can call the body of observations in that range "evidence" for Newtonian mechanics (on that range).
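As a toy illustration of what a range of applicability looks like in practice (the formulas are standard physics; the particular numbers are just examples I've chosen), the sketch below compares Newtonian and relativistic kinetic energy. At everyday speeds the two agree to better than one part in a hundred trillion, which is exactly why everyday observations constitute evidence for Newtonian mechanics on that range; at 90% of light speed the agreement collapses.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def newtonian_ke(m, v):
    """Kinetic energy according to Newtonian mechanics."""
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    """Kinetic energy according to special relativity, (gamma - 1) * m * c^2,
    with gamma - 1 rewritten to avoid floating-point cancellation at small v."""
    b2 = (v / C) ** 2
    gamma_minus_1 = b2 / (math.sqrt(1.0 - b2) * (1.0 + math.sqrt(1.0 - b2)))
    return gamma_minus_1 * m * C ** 2

# Inside the range of applicability: 1 kg at highway speed (30 m/s). The two
# answers agree to better than one part in a hundred trillion.
print(newtonian_ke(1.0, 30.0), relativistic_ke(1.0, 30.0))

# Outside the range: the same mass at 90% of light speed. Newtonian mechanics
# now underestimates the kinetic energy by more than a factor of three.
print(newtonian_ke(1.0, 0.9 * C), relativistic_ke(1.0, 0.9 * C))
```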

The original

Once again, I encourage you, O Reader, to give the longer essay on evidence a look if you want to understand my thoughts and motivations better--and I do think they're important, which is why I put the time into a 10,000-word essay on the topic in the first place. This is the TL;DR summary of the original.

Thursday, June 12, 2014

The apologist two-step--McGrew and Marshall on Boghossian

Something has been bothering me, thinking back upon the "debate" between Christian apologist and philosopher Tim McGrew and Peter Boghossian, author of A Manual for Creating Atheists. The "debate" was on a Christian radio program called Unbelievable?, hosted by Justin Brierley, and among a number of other troubles with what McGrew had to say, I'm bothered by McGrew's attempt to dismiss what seems to be an extremely relevant part of typical Christian faith--that it is, indeed, as Boghossian said, belief without evidence (yes, without, see what I mean here). (NB: Boghossian's "belief without evidence" isn't as strict in meaning as mine, as we will see.)

What troubles me here is that McGrew is deliberately engaging in what we might call the apologist two-step. The way this works is that apologists say radically different things about the same topic, allowing believers who come to them for support of their beliefs (if we're honest) to cherry-pick whichever version best suits them. To illustrate what I mean, I want to compare what McGrew has to say about the definition of faith espoused by many Christians and apologized for directly and in absolutely plain English, without the least bit of philosophical mumbo-jumbo, by the surprisingly popular apologists Norman Geisler and Frank Turek, of I Don't Have Enough Faith to Be an Atheist ignominy.

Geisler and Turek

I'm particularly taken by the characterization of faith given by Geisler and Turek in that book, which can be found on Page 26 in the introduction: "We mean that the less evidence you have for your position, the more faith you need to believe it (and vice versa). Faith covers a gap in knowledge." This is their working definition of faith, and it comports exactly with the characterization that Boghossian presented in his Manual for Creating Atheists, the object of much of his discussion with McGrew. For all their abuses of the term evidence, and the observations so characterized, throughout their famous book, Geisler and Turek are at least quite clear and forthright upon the meaning of the word faith.

For comparison, note, on p. 23 of Boghossian's Manual, that he writes, "'Faith' is the word one uses when one does not have enough evidence to justify holding a belief, but when one just goes ahead and believes anyway." This is exactly the meaning given by Geisler and Turek on p. 26 of their own book. "Faith covers a gap in knowledge" fits perfectly on the end of Boghossian's characterization--and his whole theme about epistemology--without the need for a single change in anything he said. This is also more than enough to dispel the rampant pedantry that Boghossian changed his terms from "belief without evidence" to "belief without sufficient evidence," despite his acquiescence (on the spot) in the "debate" that there's an important distinction (there's not). Simply, Boghossian clarified exactly what he meant immediately after the "without evidence" as I just quoted.


Timothy McGrew says...

At one point in the interview (at timestamp 21:28), Boghossian asks directly, "So, what do you think they [Norman Geisler and Frank Turek] meant when they said, 'I don't have enough faith to be an atheist.'? When they wrote a book about that, what do you think they meant by that?"

McGrew replies, "Right, I think what they mean is that--there's a debased sense of the word faith going around, and it's been picked up--mostly by critics--to mean a belief in something in the face of certain difficulties--and they say, 'well, if it's a matter of comparing the difficulties on the one side and comparing the difficulties on the other, there are greater difficulties lying on the one side than on the other. But I think that that's also partly a bit of a concession to a debased sense of the word that has got mostly prominence in atheist and freethinking circles, and so they're picking up on that aspect of the semantic range of it and saying, 'well, if that's how you're gonna use it, if you have this pejorative sense of it, then let us spin it around on you and say that if that's what you want to mean by the term then, on your own terms, we're going to say, "well, no, actually the shoe is on the other foot."'"

At this point, Brierley interrupts McGrew to say, "My suspicion is that if you actually went to ask Frank Turek and Geisler they'd probably agree with Tim in terms of how faith should be defined."

Back to the not-so-fun kind of G&T, then

While I cannot say what Geisler and Turek would agree to if asked, particularly on the spot, particularly if they were aware of the context of the conversation, I can say that McGrew's take is enormously curious, at least to anyone who has read the beginning of Geisler's and Turek's book. The entirety of the introduction to I Don't Have Enough Faith to Be an Atheist lays out exactly what they're talking about, and in it they make a sustained case that the term "faith," whether it applies to religious believers, skeptics, atheists, or anyone else, means concluding something is correct without "exhaustive information to support it" (p. 25). What they mean is what Boghossian said.

Consider a few more quotes from Geisler's and Turek's introduction:
  • "While some faith is required for our conclusions, it's often forgotten that faith is also required to believe any worldview, including atheism and pantheism." (p. 25)
  • "Nevertheless, some faith is required to overcome the possibility that we are wrong." (p. 25)
  • "Since Barry, like Steve, is dealing in the realm of probability rather than absolute certainty, he has to have a certain amount of faith to believe God does not exist." (p. 26, emphasis original)
  • "Although he claimed to be an agnostic, Carl Sagan made the ultimate statement of faith in atheistic materialism when he claimed that, 'the Cosmos is all that is, ever was, or ever will be.' How did he know that for sure? [JAL's Note: by definition.] He didn't. How could he? He was a limited human being with limited knowledge. Sagan was operating in the realm of probability just like when Christians are when they say God exists. The question is, who has more evidence for their conclusion?" (p. 26, emphasis original)
  • "Even skeptics have faith. They have faith that skepticism is true. Likewise, agnostics have faith that agnosticism is true." (p. 27)
  • "[W]hat we are saying is that many non-Christians do the same thing: they take a "blind leap of faith" that their non-Christian beliefs are true simply because they want them to be true. In the ensuing chapters, we'll take a hard look at the evidence to see who has to take the bigger leap." (p. 30, emphasis original)
  • "Since all conclusions about [religious truth claims] are based on probability rather than absolute certainty, they all--including atheistic claims--require some amount of faith." (p. 32)
I'll note that nowhere in the introduction of their book do Turek and Geisler mention that they're using a "debased" version of the word "faith," nor do they so much as mention that atheists and skeptics have introduced this perverted version of the word and that their desire is to beat them at their own game. Geisler's and Turek's meaning is clear: they take faith to be that which extends belief beyond what the evidence justifiably warrants.

So what conclusion can we draw about McGrew's on-the-spot characterization of Geisler's and Turek's understanding of faith? I think it's fairly hard to escape the conclusion that McGrew was actively warping it to his purposes. Certainly, Geisler and Turek wanted to show that there's "more evidence" for Christianity than any other religious position or, particularly, none, so Christianity requires "less faith" than, say, atheism, but it's abundantly clear what they mean by "faith" in the process, and McGrew was screwed by it.

It gets worse

Boghossian recognizes the importance of this particular point and presses it, despite Brierley's interruption and attempted deflection from the topic. Boghossian asks McGrew directly, "What percentage of Christians, Tim, do you think use the word faith in the way that I've defined it? Not the pretending but the first definition [belief without (sufficient) evidence]." Bear in mind that Boghossian's characterization, the "first definition," comports perfectly with that of Christian apologists Geisler and Turek.

McGrew responds boldly: "Something, something well below 1%. And I'm talking about people across all, I'm talking about people across all levels of academic achievement and study, from people who never got out of high school, to people who've got doctorates, people in the churches, people in the pews." 

Pardon me while I try to pull my straining left eyebrow back down and put my bugging-out left eyeball back in.

Who reads who?

It's impossible to overlook the fact that Geisler and Turek's book is wildly popular amongst Christians. In fact, it is in the top ten best-selling Christian apologetics books on Amazon (significantly outselling Boghossian's Manual as well). I can't count the times I've been told--both online and in person--a regurgitation of Geisler's and Turek's title and subsequent characterization of faith by Christians, as compared to the whopping zero times I've heard McGrew's strange, complicated, stretched (read: ad hoc) definition. Of course, maybe I've somehow unfortunately only run into that "well below 1%" out there, and perhaps most of those people buying and repeating Geisler's and Turek's line do so because they disagree with them (or are executing a carefully calculated rebuttal to a debased definition, so surreptitiously deployed that, just like Geisler and Turek, they never mention the fact that they're doing it).

In short, we have every reason to believe that far more Christian people read Geisler and Turek and accept their definition of faith than read McGrew and his technical, weird definition. It's unfortunate for McGrew's case against Boghossian that Geisler's and Turek's characterization matches Boghossian's exactly.

But there's a double-standard, of course

Let's turn our attention where we shouldn't, to Christian apologist David Marshall, for instance. Quoting him from my own blog, in the comment thread on my post following the Boghossian-McGrew discussion: "Dr. McGrew and I co-wrote the chapter for True Reason in which we set forth the definition of faith that he used in this debate: 'trusting, holding to, and acting on, what one has good reason to believe is true, in the face of difficulties.'" One might surmise that Marshall, co-author of their odd (read: ad hoc) definition, would have been appalled by Geisler's and Turek's take on faith. Nope. At least, apparently he wasn't.

David Marshall's review of Geisler's and Turek's book awards it four stars (compare two for Boghossian, as well as for Dawkins, Dennett, and Harris, and just one for Hitchens, for their most famous "New Atheism" titles), under the banner "A Wealth of Evidence, Mostly Good," and takes no issue with their should-be-egregious interpretation of the term "faith." It is thus very curious that he vehemently opposes Peter Boghossian's application of exactly the same meaning--that which fills in the gap between justification and the extended degree of confidence. Apparently, on the accusation that he's "pretending to know" things he's presumably 100% sure of, on faith, across a gap in knowledge that even conservatives like Geisler and Turek are willing to admit exists, David Marshall is offended deeply enough to apply a double standard.

He creates the opportunity to call out Geisler and Turek, but then he doesn't do it. Marshall can't marshal the nerve to criticize his own for exactly what he yammers incessantly about when it comes to his opponents. Marshall writes, "Several critics assume that Christian faith means 'a firm belief in something for which there is no proof,' or that religion 'tells us to ignore reason and accept faith.' Having just completed a historical study of Christian thought on faith and reason from the 2nd Century to modern times, I would argue that this is not at all what Christians usually mean by faith. In fact, as physicist and theologian John Polkinghorne points out, faith in the Christian sense is arrived at by means rather similar to scientific hypothesizing." On the matter of it covering the gap in knowledge, Marshall makes no comment.

He wasn't quite so generous to Boghossian's take in his two-star review of his Manual, ranting instead that Boghossian is "pretending to know what he doesn't know" about faith. He asserts directly that what (Christians, apologists even!) Norm Geisler and Frank Turek explicitly mean by faith, in a book about faith written for Christians, isn't what Christians mean by faith.

On his (decidedly feral) blog, he had more to say about Boghossian, though, for exactly the same characterization of faith as given by Geisler and Turek: "[I]f Peter Boghossian really believes that crack-pot definition of faith on which he bases his entire book, and apparently his career as a government-paid proselytizer for atheism, then he is deeply and probably willfully ignorant. This is why he does not seem to like to interact with informed Christians, but just pick off the lame caribou foals at the back of the herd, like his young and ignorant students." I'd love to see him say the same about Geisler and Turek, but he already wasted that opportunity on a vain attempt at self-glorification.

And so we see the two-step

Most Christians are treated to the apologist two-step here. Each can read the very popular (amongst them) Geisler and Turek, who clearly agree on this point with Peter Boghossian in their own book about faith, give it four or five stars, repeat it as want or need dictates, and yet fall back on the "more sophisticated" (read: more sophistic and ad hoc) characterization given by McGrew and Marshall that "refutes" Boghossian if anyone presses them on it. And so it is always with apologists.

The two-step is their game. The way it's played is simple: give multiple characterizations for everything, including God, faith, Christian, etc., and then whenever someone calls you out for the problems in any one of them (and there are always problems), switch to another. Dance, dance, dance. Pretend, pretend, pretend. Whatever it takes to avoid having the cherished beliefs treated with intellectual honesty, which would destroy them.


Afterword: Please, though it is very long, take at least a few minutes to look through or read what I wrote about evidence a while ago, explaining why I think "belief without evidence" is the correct understanding of faith, in that I do not think that there is any evidence for God or Christianity at all--none, not merely not enough. Update: I've added a briefer version of my long essay about evidence. The shorter version can be found here.

Tuesday, June 3, 2014

One fact that all Christians, including William Lane Craig, miss

Back in January, I wrote a short post on this blog titled "The one fact almost all Christians miss." (Incidentally, I decided a few days later that there's another fact almost all Christians miss, so my title was a bit hasty.) Somehow, this blog post made its way in front of the very famous Christian apologist William Lane Craig, who responded in an interview about it with Kevin Harris. A transcript is available on Craig's Reasonable Faith website (Link).

It appears that Craig missed my point spectacularly, probably because he wears Jesus-colored glasses. The point I was making is simple--every reading of the Bible is just an interpretation. As Craig amply demonstrates, he (and probably more than a few Christians more widely) gets that interpretation is necessary, but what they fail to acknowledge is that all interpretations of the Bible are ultimately unjustifiable and untestable--and hence not worth considering, particularly as candidates for The Absolute Truth™. This is one fact that almost all Christians miss, and it appears that William Lane Craig, the most noted apologist amongst them, has missed it as well.

Before I get to examining some of Craig's specific statements about what I said, I need to point out why he and almost all Christians miss this fact, other than the Jesus-colored glasses. Craig reveals it by spending a great deal of the time that he doesn't use to belittle me and my capacity to think clearly talking about the "science" (cough!) of hermeneutics. That's how he missed my point. It seems to me that Craig may think I'm arguing on his terms rather than saying that his terms are idiotic. Because a Christian cannot simultaneously maintain belief and believe that the terms of that belief are idiotic, the fact, and thus my point, is missed--rather like a kid who swings a baseball bat so hard that, when he whiffs, he spins around and falls over.

Though I'm not sure it deserves it, I'll do a little to respond to Craig's statements with the rest of this post, but I think I'll hold myself to answering his rhetorical questions about me in the main. That will, at least, be fun, and doing much else runs the risk of taking his terms too seriously, which no one should do.

Craig: "I wonder if he is at all aware of the whole disciple known as hermeneutics which is the science of interpretation."

Yes, I am. "Science." :snigger:
This one deserves a little attention, though, because this is another fact that many Christians, including Craig, miss. People live their lives by the interpretation of Christianity they pretend is The Absolute Truth™. They use their interpretation to guide their decisions, to inform their ethics, to feel super important about their ethics, and to browbeat people personally and politically on matters related to their ethics. 

We do not, on the other hand, do that with most other texts. We might engage in careful hermeneutics of Jefferson's letters to gather information about how he thought the United States should be ordered and how it should play out, but outside of a band of people on the fringe, we do not order our lives or even the main workings of our nation on the assumptions (a) that even if we determine what Jefferson truly meant and intended by his words, those words are The Absolute Truth™, and (b) that anything Jefferson said is The Absolute Truth™ in the first place.

This differs from what Craig and other Christians do with the Bible in an extremely important way. It also misses the other point I made--how on earth could differences in interpretation be resolved? There are an awful lot of interpretations of the Bible, and the only real "hope" of resolution comes down to interpreting a cobbled-together collection of mythological texts as if it is real and then using the "Inner Witness of the Holy Spirit" to declare victory. Of note, in 2007, Donald McKim published an authoritative Dictionary of Major Biblical Interpreters, a book of more than 1100 pages chronicling this history; as John W. Loftus puts it, "Biblical interpretation is like looking into a mirror since believers think that God believes whatever they do!" To read histories of Christian interpretation is to watch the vigorous branching of belief structures, based on little more than "that guy's a heretic;" "no I'm not!", into some 40,000 distinct denominations, all of which believe in their hearts that the others are all wrong to some degree.

Craig: "I think any reflective Christian is aware [that whenever you read the Bible you are interpreting this piece of literature]."

"God said it; I believe it; that settles it." Dr. Craig may not be aware that they make bumper stickers that say that and that people actually buy them and put them on their cars. I even saw one personally a few months ago.

I am so glad, though, that he said "piece of literature." Indeed. Thanks, Bill. That's rather my point.

Craig: "As we'll see as the blog proceeds, he gets into this post-modernist nonsense about texts having no objective meaning, and this whole thing is so self-refuting."

Huh? Craig seems to misunderstand that I'm talking about the Bible, not "texts" in general. This is probably because Christians fail to understand that the Bible is largely fiction. "Objective meaning" is such a weird phrase to use for a work of fiction.

Craig: "Any text, when you read it, has an interpretation, including Lindsay's blog. So as you say, this is about the breading [sic] and raising of hamsters. That's what this really is, and Lindsay has really helped us to see."

Um, Dr. Craig, I appreciate your attempt at satire, but, um, your cheese, uh, slipped off your, uh, cracker, sir.

Craig: "But actually, Kevin, I interpret it differently. I think here he is using irony and satire to emphasize how objective and true the Bible is. Really he wants to strengthen people's confidence and faith in the Bible. That's my interpretation of the blog."

Um, Dr. Craig, your, uh, cheese, sir.

But to give this a little attention, in no way do I suggest that Christians accept just any interpretation of the Bible. That would be ridiculous. No Christian (that I'm aware of) uses the Bible to argue that the moon is made out of mashed potatoes. Christians use the Bible like a mirror, as Loftus suggested. They use biblical interpretation to reflect and pseudo-justify what they already believe.

Christians believe that death is a bad, scary lie, so they use the Bible to pretend they'll get to live forever. Christians believe that they are hugely important and that their mistakes and wrongdoings have cosmic importance, so they use the Bible to make sense of that. Christians believe that it sucks that life isn't fair, so they use the Bible to pretend that a loving God is in control. Christians believe that certain behaviors are bad, icky, or inappropriate, and while some (e.g. murder and rape--both occasionally condoned and commanded by the Bible) certainly are, others aren't, but Christians use biblical interpretation to browbeat people for arbitrary things (like how they dress, who they have sex with, what they eat, if they use bad fucking words, and whatnot).

The point I made, though, is that they don't have a way to resolve these kinds of things. Catholics have to confess and eat crackers to go to Heaven; Protestants do all kinds of different things. Some Christians are mortally opposed to birth control, abortion, homosexuality, racial integration, women not being a man's property, and so on and so forth, and other Christians hold opinions that radically oppose those--and all use biblical interpretation as their arbiter of, you guessed it, The Absolute Truth™, which they cannot know but excel in pretending to know, via biblical interpretation.

Craig: "Doesn't he think that there might be a correct interpretation [of the Bible]? That there is an interpretation that is true that reflects the meaning that the author [sic] actually had in mind?"

Thanks for asking. Yes. I do. It's ancient mythological ramblings by a variety of authors who didn't agree with each other, didn't have the faintest idea of how the world works, and were pretty barbarous. It belongs on a shelf wedged between the Bhagavad Gita and The Epic of Gilgamesh, near other titles like the Yijing, Odýsseia, and the Iliad. In a slightly different circumstance, we would find The Silmarillion on that shelf as well, but because it isn't ancient, we don't call it mythology and just recognize it as fiction. That's the correct interpretation of the Bible.

Now, that's not quite the question Craig asked. Do I think that there's an interpretation that is true that reflects the meaning that the authors had in mind? No. No, I do not. It's mythology, which hits like a Zeus-hurled thunderbolt on that "true" thing.

Here, I have to do a block-quote of their interview to make a short response,
Kevin Harris: He says, “Indeed, that interpretation of the Bible defines what passes as being 'true Christianity' and it is the role of faith to glaze over that fact.”
Dr. Craig: And that is obviously incorrect. You do not glaze over the fact that there are multiple interpretations of certain texts. Sometimes it is difficult to determine the meaning of the text, and in other cases the text is very, very clear and there is widespread agreement on the interpretation of the text. It will vary from passage to passage.
I'm not sure Dr. Craig is in touch with how the vast majority of the Christians in the world use faith. As such, he didn't respond to the point of that comment, which was the role of faith, along with the idea that some often-unique biblical interpretation, for most Christians, passes for them as The Absolute Truth™. Reading this part of their interview, then, is a bit like watching someone strike out at tee-ball.

Kevin Harris quotes me: "He says, 'This, then, brings us to the central question posed of all religious believers--a question that they cannot answer: How do you know your interpretation is correct? And it generalizes: How do you know any interpretation is correct?'"

Craig: "And the science of hermeneutics attempts to address that question. ... The science of hermeneutics attempts to answer it by laying down principles of literary interpretation about the meaning of words, the historical context in which the passage was written, the literary genre of the type of text that we are interpreting, and so on and so forth." (emphasis mine, both bold and italics)

You'd think after a couple of thousand years, instead of rampantly diverging into 40,000 denominations, each with a different interpretation, each filled with churches with variations on that interpretation, each filled with believers with variations on that interpretation, they'd have narrowed in on an answer instead of consistently diverging. And all the while they call it The Absolute Truth™. You'd think.

This "science" of hermeneutics is the problem, or rather its application to a work of fiction to obtain an interpretation that is not fictional and, indeed, isn't even real. It is one thing to work to interpret the collected works of Shakespeare to gain insight into the social and political realities of seventeenth century England because we actually know that seventeenth century England was a real place and historical period. Likewise, it is reasonable to interpret the Bible to gain insight into the times and cultures depicted in its pages, since we know those exist. It is not, however, reasonable to use it to draw conclusions about theology for the same reason it is not reasonable to use the Harry Potter novels to draw interpretive conclusions about magic. It isn't even reasonable to use the Bible to draw conclusions about many or most of its chief characters, again for the same reason that a careful hermeneutical anaylsis of Harry Potter and the Chamber of Secrets tells us absolutely nothing about Salazar Slytherin.

Craig: "Fine. So he wants to say that scientific models are more or less, I think, accurate descriptions of reality which are subject to revision. That is just fine. Now, why can't our interpretation of his blog be similar – we have a pretty good idea of what was meant by this blog and that is subject to revision."

I don't think they do, clearly.

Craig: "Maybe he will correct us; write you a letter, Kevin, and tell you, “Wait a minute, this isn't really about hamsters. Craig was right. This was an endorsement of reasonable faith.” [laughter]"

Yes, haha! Hahahaha! (Dr. Craig, your, um, cheese, sir....)

Craig: "He can correct our misinterpretation if we do so, but nevertheless I think we can say we have a pretty accurate handle upon what he wants to say in this blog."

Your cheese, sir...

Craig: "And see here is, again, this sort of naïveté where he fails to realize that the evidence is conveyed by language."

Sigh. Look, in some tee-ball circles, they give you five strikes, not three, before you're out. Pick yourself up out of the dust, hop back in the batter's box, and try again. If you want to know what I think about evidence, though, you can read that here.

It's not about conveyance of things by language. It's about not having any evidence by which you can settle theological disputes. The sciences, by comparison, luckily have lots of it, and they eventually reach consensus, and that consensus is affirmed whether people believe it or not. Christianity cannot--cannot--offer such a boast, though it could if some strain of it were actually true.

Craig: "Notice, Kevin, here there has been a shift from talking about the interpretation of literature to saying faith lacks this reality check in the evidence. Where in the world did that come from? I thought we were talking about interpretive principles?"

It came from reality mattering, not just talking in circles about interpreting literature.

Craig: "How do you interpret a piece of literature like the Bible, and how can you be confident or reasonably sure that your interpretation is the correct one? That has nothing to do with whether or not faith has supportive evidence for what it affirms."

Yes, literature that has nothing to do with whether or not there is evidence for what it's talking about. Exactly.

Craig: "I think his real agenda emerges in that last sentence where he has this kind of caricature of faith as belief in the absence of evidence."
 
A Christian apologist accused me of having an agenda. Fascinating. Faith as belief in the absence of evidence, though? Yes, absolutely. Caricature? Not so much.

Craig: "Well, I think it is all right to identify an agenda if something is agenda-driven."
 
Can you believe he said that? Has he ever read anything he's ever written?
 
Craig: "...he wants to enunciate a view of the interpretation of texts that would apply to the Bible and thereby undermine its objectivity and truth..."

Objectivity and truth? And the question has been begged.

Craig: "What would correspond to the material evidence supporting scientific models and theories would be the material evidence supporting the truth claims of the Bible. For example, archaeology and history that goes to confirm the accuracy of, for example, the New Testament accounts of the life of Jesus and the early church. The book of Acts is so abundantly attested by extra-biblical literature concerning what it says that its historicity even in matters of detail, I think, cannot be doubted. So we have that kind of material evidence in support of the truth of the Bible."
 
I'll just say this part again: It is not, however, reasonable to use it to draw conclusions about theology for the same reason it is not reasonable to use the Harry Potter novels to draw interpretive conclusions about magic. It isn't even reasonable to use the Bible to draw conclusions about many or most of its chief characters, again for the same reason that a careful hermeneutical analysis of Harry Potter and the Chamber of Secrets tells us absolutely nothing about Salazar Slytherin.
 
Block quotes again:
Kevin Harris: I think so. I think the other agenda is: if the Bible is inspired then why are there so many interpretations and why are there so many denominations of Christianity?
Dr. Craig: Well, maybe so, but he doesn't raise that point.
That's pretty spectacular. First, we have Harris failing to see that an obvious fact about Christianity is not an "agenda," and then we have Craig missing the fact that it was implied--indeed he responded to it at the beginning of the interview when they mentioned that I had written, "Whether we're looking at hyper-liberal Anglicanism, evangelical Protestantism, mega-fundamentalist literalism, Christian-Left Catholicism, C.S. Lewis's creedal "mere" Christianity, or anything between or beyond, every one of them requires a reading of the Bible that is an interpretation of the Bible." (emphasis original)


It's not even just that there are so many denominations, though. The more relevant point, and the one I actually made, is that they disagree as fundamentally and radically as it may be possible to disagree on something, and that they grow further apart over time. Hyper-liberal Anglican John Shelby Spong would have been burned as a heretic only a few centuries ago; mega-fundamentalist literalists only sprang up in the wake of Darwin, Ingersoll, and Russell, et al., in the last 150 years; and within each we see seeds of more divergence. Consider blowhard super-conservative Catholics like the shills on FOX News (Bill Donahue comes immediately to mind) and compare them to the hyper-liberal Christian Left, which is composed of many Catholics who see eye-to-eye with the likes of Donahue on essentially nothing of substance.

Craig: 'That would then be something to discuss as well because certainly there are doctrines or passages in the Bible that we do not know exactly what interpretation is correct. There is a diversity of views on these. For example, one of the most notorious is in 1 Corinthians 15 where Paul says, “If the dead are not raised then why are people baptized on their behalf?” Nobody knows what Paul is talking about because that, though known to the ancient Corinthians, is not something that has endured in church history so we don't know. There are all kinds of speculations about what the meaning of Paul's question was when he talked about being baptized on behalf of the dead. In a case like this, we just have to say, I think, that we don't have the resources to be confident how to properly interpret that question.' (emphasis mine)

My entire commentary, though I realize I'm taking it out of context, is the emphasis I added above. That and this: speculations.

This, though, is my point, and it's the one Craig missed with them. All Christian belief structures are just interpretations of a fictional text, not The Absolute Truth™.

Craig: 'But other things that Paul says clearly, like “If Christ has not been raised from the dead, we are of all men most miserable; you are still in your sins.” There it is very clear what Paul is asserting.'

Of all of the possible examples he could have chosen, he chose this one. I couldn't be more pleased--though he seems also to miss the point that being "still in your sins" isn't clear in meaning whatsoever.

---
NB: Since Craig was responding in an interview, I'll be courteous and charitable and will simply publish this as I wrote it the first time through, extemporaneously and without revision. Pardon any errors or lack of clarity, but I feel like it's fair to respond on a somewhat level playing field.


Monday, June 2, 2014

The canard about the lottery, liars for Jesus

In many discussions with apologists for Christianity, I point out the overwhelmingly poignant fact that a bunch of people believing something like the Resurrection story or other miracle claims is far more believable than the notion that any miracles actually happened. Granting that resurrections or miracles are even possible, which we have no right to believe, they're certainly very rare.

This frequently leads to the canard about the lottery. It goes something like this, which is something like how it showed up in a comment on this very blog the other day.
Winning the lottery is a very rare event, but you have no trouble believing that it is more likely that someone won the lottery than that it's a story (implied: that isn't true).
Sigh. If I had a dollar for every time I ran into this kind of thing, I wouldn't need to win the lottery to fund my dream of driving a Tesla that I power entirely by solar panels. We don't even have to get into the "rare occurrences occur all the time" stuff for this one.

The objection is superficially convincing, particularly to people who want to be convinced, and without a little experience with these kinds of things, the error in thinking is a bit difficult to spot. The fact is, though, that it's equivocation. Either the person who is making this argument is confused or prevaricating.

Here's how it works:

That my friend Cal wins the lottery, meaning the jackpot, is, indeed, a very rare event. If he buys a ticket, Cal's odds are just one in 175,223,510 for winning the jackpot of the American Powerball lottery. If Cal comes to me and says, "hey man, I won the Powerball jackpot!" my initial reaction is skepticism.
"Really?!"

For me to believe Cal, he's going to have to produce evidence, even though he's my friend. It's simply easier to believe that Cal is putting me on than that he won a contest that he is 99.9999994% likely to lose. Even producing a winning ticket might not be good enough because producing an elaborate forgery for a joke is still more likely than winning. The lottery commission saying he won, though, would be different.

On the other hand, and here we see the sleight of mind, someone winning the lottery jackpot is not a rare event. It happens all the time, several times per year.

How can this be? Volume and equivocation.
Volume: Lots of lottery tickets are sold every week.
Equivocation: Someone winning the lottery means at least one of those lots of tickets came up a winner. Cal winning the lottery means his comparatively very few tickets, or maybe just one ticket, came up a winner. The first of these is not rare. The second of these is very rare. By changing the focus from "someone" to "Cal," it's easy to dupe someone.

Let's do some numbers to show what I mean, as if someone winning the lottery several times a year isn't good enough to convince us. To calculate the odds that at least one ticket wins, we calculate the probabilistic complement that all of the tickets lose. If the probability that one ticket wins is w, the probability that that ticket loses is 1-w. The probability that n tickets all lose is (1-w)^n. The probabilistic complement that tells us the probability that at least one ticket wins is thus 1-((1-w)^n) when considering n tickets.
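If it helps to see the arithmetic, here is a minimal sketch in Python that reproduces the figures below from the formula 1-((1-w)^n). The 1-in-175,223,510 odds are the Powerball figure quoted above; the "every American" rows assume a population of roughly 314 million, which is my round number for illustration, not something asserted in the original.

    # Probability that at least one of n tickets wins the jackpot: 1 - (1 - w)**n
    JACKPOT_ODDS = 1 / 175_223_510  # per-ticket chance of the Powerball jackpot (figure used above)

    def chance_at_least_one_win(n_tickets):
        """Return the probability that at least one of n_tickets hits the jackpot."""
        return 1 - (1 - JACKPOT_ODDS) ** n_tickets

    # A specific person (Cal) buying a handful of tickets:
    for n in (1, 2, 5, 10, 100):
        print(f"Cal buys {n} ticket(s): {chance_at_least_one_win(n):.8%}")

    # "Someone, somewhere" when huge numbers of tickets are in play
    # (314,000,000 is my rough stand-in for "every American buys one"):
    for n in (500_000, 1_000_000, 10_000_000, 100_000_000, 314_000_000, 628_000_000):
        print(f"{n:,} tickets sold: {chance_at_least_one_win(n):.3%}")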

Here are some various chances of winning:
Cal buys one lottery ticket: 0.00000057%
Cal buys two lottery tickets: 0.0000011%
Cal buys five lottery tickets: 0.0000029%
Cal buys ten lottery tickets: 0.0000057%
Cal buys one hundred lottery tickets: 0.000057%
This isn't looking good for Cal. Winning the lottery is a rare event for any individual.

But,
America buys 500,000 lottery tickets: 0.285% chance that someone wins
America buys 1,000,000 lottery tickets: 0.569% chance that someone wins
America buys 10,000,000 lottery tickets: 5.55% chance that someone wins
America buys 100,000,000 lottery tickets: 43.5% chance that someone wins
Every American buys one lottery ticket: 83.3% chance that someone wins
Every American buys two lottery tickets: 97.2% chance that someone wins

The chance that someone wins the lottery isn't small at all. It's not a rare event for someone to win the lottery. It would only be rare if it were a specific person predicted in advance.

This line of argument that is supposed to increase confidence that the Jesus Resurrection story, or miracle claims, might be legitimate is a canard that should embarrass the person making it, and now you know why.

A Problem with Evidence

How do we know when to call something we observe "evidence" for some idea we have about the world?

As it turns out, this is a very hard question. The answer matters too. On some of the going understandings of the concept, it is correct to say that there is evidence for God, Santa, and flying saucers, which I think causes most people to raise a skeptical eyebrow. Should we use such an understanding of evidence? Personally, I don't think so. I think there's something profoundly broken about the idea that, properly understood, there can be evidence for something that is not the case.


I. Backstory and introduction

I have become involved in a discussion, largely with philosophers of religion, about evidence (non-professional philosophers, to be clear--just as I'm neither a philosopher nor a scientist, which will prove a relevant admission as we go from here). They are operating by one definition of the term, and frankly, I don't like it. This is an attempt to put down some of my thoughts about the topic of evidence. It, unfortunately, is lengthy.

I don't want to bog this down with a lengthy background, so instead I'll offer a short summary of the key points.
  • There are a number of usages for the word "evidence" that all fall within the same general sphere of meaning but which differ from each other significantly;
  • In many cases, philosophers and scientists tend to use different definitions of evidence, and many scientists seem to hold a working understanding of evidence that is at odds with the more common philosophical definitions (this being complicated because many scientists don't bother to consider the philosophical definitions, the philosophical definitions seem superficially good, some notable scientists seem to agree with the definition, and the usual so on that makes any controversial topic complex);
  • I feel the more common philosophical definitions, though sophisticated, have some problems, and I wish to present those here. To be more specific, I feel like they are irresponsibly misleading and miss a great deal of what people seem to mean by the term "evidence";
  • I will offer something of a definition for "evidence" that attempts to account for some or all of the problems I feel are present in the main philosophical definition I encounter, one that I think falls nearer to what is meant by the scientific use of the term. Particularly, I will make the case that when talking about evidence, we are referring to a body of observations, except in certain special cases. I do not think it is likely to be in our best interests to discuss whether or not individual observations constitute evidence except in those special cases;
  • Because my purposes here are directly rooted in arguments about God and religious belief, I will entertain a number of asides to discuss how the material under discussion applies to the question of God's existence; and
  • Mirroring the definition of knowledge as justified true belief, which requires that the belief be true to qualify, I will make a case that we should only use the word "evidence" for information that points to something true. That is, I will argue that there is no evidence for anything false.
A number of terms crop up here that may also not have meanings that are universally agreed upon. Importantly, among these is the word "observation," which I have borrowed from the philosophical definition I'm most interested in critiquing. (I think I would have used the word "data" had I not taken "observation" from their language.) Ideally, I think observations should be considered objective information that we have somehow gathered about reality.

There is no way that I'm aware of that I could be comprehensive with this endeavor, particularly in the given format and without a very long period of serious research that I simply don't have time for (nobody pays me to do this stuff, folks). I don't pretend that I've given anything like a final word on the immensely complex and unsettled topic of "evidence," although I do feel like I am trying to add to the conversation about it in a productive manner, by which I mostly mean for philosophers as scientists seem to be getting along quite well with whatever their working definitions happen to be.

Regretfully, this is far too long as it is. More regretfully, it's not nearly long enough to do the thing right.


II. The complexity of the problem

Evidence is a tricky term. It has one (or more) meaning(s) in everyday usage, which academics refer to as the "folk concepts," the statements of which can be found in any English-language dictionary. It has another, more precise meaning in science (three, actually, that are similar and situational). It has a variety of meanings to philosophers.

Unfortunately, none of these definitions for "evidence" agrees except in general spirit, which makes it a ripe area for arguments and publishing lots of papers in sophisticated philosophy journals, notably in the philosophy of science and epistemology. The academic debate about what constitutes evidence is rather hotter than most people realize or care about (or, I'll note, will ever care about, at least directly), and curiously, the people who rely on it perhaps the most and who are most readily identifiable with it, scientists, appear to care the least. I would guess that this is because they are largely satisfied with their working definition, since it does what all things scientific must do: it works.

Worse, when people use the word "evidence," sometimes they clearly refer to a single observation ("These flowers are evidence she loves me.") and at other times to a collection of observations ("The evidence for the Higgs boson allows us to conclude it exists."). These two usages have to be--but can't really be--teased apart to get some much-needed clarity on the notion of what we mean by the term evidence and thus how we should use that term. I am sympathetic to the notion that evidence should be viewed in most cases as a body of observations, although there are circumstances in which a single observation suffices.

Complicating matters still further is the usage surrounding the critical role that evidence plays in legal proceedings. In legal matters, we talk about different kinds of evidence, notably circumstantial and direct evidence, where circumstantial evidence relies upon inference to reach the conclusion and direct does not. The usual legal understanding is that when lacking direct evidence, circumstantial evidence has to accrue (via corroborating circumstantial evidence) into a body of observations that, together, are sufficient to make and decide the case (to a particular standard of burden of proof, higher in criminal cases than in civil). In law, circumstantial evidence becomes worth more as possible alternative explanations for the observations are ruled out.

Law is significant in this discussion because "the evidence" is used to make a case that is judged to favor either the plaintiff or the defendant. This causes a difficulty in the form of giving the illusion that "the evidence," meaning the available body of information pertinent to both the situation and the case, can point to something that is not true, as when an innocent man is convicted or a guilty one acquitted. Critical to note is that legal cases are adjudicated by people approximating the net worth of various observations, labelled "evidence," in deciding a case, and people are lamentably a rather unreliable indicator of the truth of the matter at trial. Something else to realize here is that there are stated, though slightly fuzzily applied, standards in different kinds of trials ("preponderance of evidence," i.e. greater than 50% favors one party or the other, and "beyond reasonable doubt," which is far stronger). These situations can be likened to statistical confidence tests at different levels of confidence.

Yet another example of a common, and complicating, statement in legal proceedings would be, "to make a decision, the court needs more evidence." This kind of a statement could be taken to mean a variety of things, but it carries the implication that observations constitute evidence on their own because they support one case or another. Some better ways to phrase this, I think, would be to say that "the court needs more information," that "the court needs more potential evidence," or that "the information provided constitutes evidence only at a confidence level of p when a decision of this type requires a confidence level of q" (with q larger than p).

Clearly, there's a lot going on in the idea of evidence, much of it not-too-clear, and different people use it in many different ways. The group I haven't talked about yet is philosophers.


III. Getting to my specific problem, a philosophical definition

The definition of evidence that many philosophers are using currently--one very popular in the philosophy of religion--is one that I think has some pretty serious problems. (Note: philosophers recognize at least four distinct definitions of evidence, based upon Jeffrey Jay Lowder's recent brief Google-hangout summary of The Book of Evidence by noted philosopher Peter Achinstein of Johns Hopkins University.) I'll state it in a rough form here:
An observation A is evidence for a hypothesis H if the probability that H is true given A (along with background information) is larger than the probability that H is true given background information alone.
If I were J.R.R. Tolkien writing The Hobbit right now, I'd say, "Now, this definition has a few pretty serious problems with it that you, no doubt, saw at once, but you would not have done nearly as well if you had dedicated your entire life to parsing out abstract ideas in their most pure form without necessarily caring how they attach to reality." Of course, I'm not him, so I'll just point out that under this definition, J.R.R. Tolkien's brilliant and popular children's novel, The Hobbit, constitutes evidence for hobbits, dwarves, dragons, elves, talking giant spiders that hate being called attercop, and all manner of other imaginary things. The probability that they exist is higher because someone talked about them than it would be if no one ever thought to. That does not bode well for such a definition.
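To make the permissiveness concrete, here is a minimal sketch using one of the examples listed just below, with entirely made-up toy numbers (the rates are hypothetical, chosen only to illustrate the probability-raising criterion, not real statistics about Baltimore or anything else):

    # Probability-raising definition: A is "evidence" for H whenever
    # P(H | A, background) > P(H | background).
    # Toy, invented numbers for "being from Baltimore is evidence that you are a murderer":
    P_H = 0.0001             # hypothetical base rate of the hypothesis H ("is a murderer")
    P_A_given_H = 0.005      # hypothetical chance of the observation A ("from Baltimore") if H is true
    P_A_given_not_H = 0.002  # hypothetical chance of A if H is false

    # Bayes' theorem: P(H | A) = P(A | H) * P(H) / P(A)
    P_A = P_A_given_H * P_H + P_A_given_not_H * (1 - P_H)
    P_H_given_A = P_A_given_H * P_H / P_A

    print(f"P(H)     = {P_H:.7f}")
    print(f"P(H | A) = {P_H_given_A:.7f}")
    print("Counts as 'evidence' under this definition:", P_H_given_A > P_H)

Any observation even faintly more common among murderers than non-murderers clears this bar, which is exactly the complaint.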

Indeed, to really upset ourselves about this definition, let's consider just a few more things that it technically tells us, because in each case the conclusion is slightly more likely given the observation than without it:
  • Being from Baltimore is evidence that you are a murderer.
  • Fire is evidence for dragons.
  • Water molecules on Mars are evidence that Mars is made out of cheese.
  • Owning a gun is evidence that you will commit suicide.
  • Being from the wrong neighborhood is evidence that you're a criminal.
  • Having Hussein as a middle name is evidence that you're a Muslim.
  • The existence of jet contrails is evidence for chemtrails.
  • Their surprising popularity is evidence that homeopathic remedies work.
  • The existence of arguments for a hypothesis is evidence for that hypothesis, as is the existence of a believer in the argument. 
  • The (statistical) wave nature of light is evidence for the discredited luminiferous aether.
  • The (erroneous) observation of neutrinos that (actually did not) travel faster than the speed of light is evidence that there are particles that travel faster than the speed of light (at least when this observation is taken on its own, which is a possibility under this bizarre definition, which considers individual observations and grades them as evidence or not). 
  • That they are occasionally "right" is evidence for astrological horoscopes and thus astrology.
  • Easter baskets are evidence for the Easter Bunny.
  • Everything is evidence for a God with a Plan.
  • Correlation is evidence for causation.
In case you believe that I'm being flippant in bringing up some of these examples, to "better" make the point about how this definition of "evidence" works, some of the philosophically sophisticated are happy to pass around a "proof" that there is legitimately evidence for Santa Claus, something to do with the sound or appearance of footprints.

What's the problem? Well, to be Tolkienish, I'm sure you can see it now, unless, that is, you're a philosopher who has spent an awful lot of time getting into this line of thinking: it flies in the face of what we mean (folk concept) when we use the word evidence. My wife, who has little interest in such squabbles, upon finding out about the "evidence for Santa" thing said, and I quote her verbatim, "If they say they have evidence for Santa, that means that they at least partially believe Santa exists. If they're adults, don't talk to those people." She refused to accept the "more sophisticated" definition.

Worse, this definition, because it allows such statements, is profoundly misleading, not just potentially but actively and actually, as my wife's comment demonstrates. People think evidence for something implies truth. Of course, the real power of science literally lies in rejecting this understanding of evidence (see last bullet point above).

A curious point is also raised about the usefulness of this definition of evidence. Note, for instance, that the observation that you have a lottery ticket is simultaneously evidence for winning the lottery during the next drawing and losing it.

Being fair

Please do not let me mislead you into thinking that philosophers who concern themselves with this matter are stupid, because they're not. They're just too technical and abstract at the same time. They have a robust and sophisticated understanding of the matter that makes this definition still work in practice, and I am led to understand that some prominent scientists and many lay people agree heartily with it once they understand it.

Now, it must be granted that all such examples would be qualified with "these are examples of very weak evidence...," but the point remains that under this definition, each of those statements, and many more besides, is technically true. Thus this definition is misleading. The problem, like I say, is that it is not how most people are willing to use the term; it is misleading; and it isn't terribly useful for science on its own (or, as one scientist I spoke with worded it, "it is terrible and almost useless!"). Take note that true-believers in anything will latch onto the phrase "there is evidence for [insert whacky belief]" and run with it--thus, it's not just misleading, it's irresponsibly misleading and potentially dangerous.

To elaborate on that critically important parenthetical point, beliefs like religious beliefs--which despite all else routinely lead to horrible abuses--are maintained on biases like cherry picked "evidence." For a fundamentalist, "there is evidence for God," particularly from an atheist, is more than all he needs to get on with, and so too for most typical believers. A common colloquial understanding of the term "evidence" is that it constitutes sufficient justification to warrant belief. Worse, for an apologist this definition is pure gold, and it is no wonder I ran into this definition first by dealing with people interested chiefly in the philosophy of religion. Their main line, with which they bamboozle themselves and other believers, is "we have evidence for our faith." Supporting religious beliefs, or reinforcing tools to allow others to do so, on a technicality is simply dangerous and irresponsible. If a better definition is available, it is unquestionable that we should use it instead.

Philosophers deal with this problem by pointing out that we use evidence to make a case for a hypothesis or against it. It isn't really the evidence but the case built from it that we use to determine the truth-value of a hypothesis. Cases can include arguments or not, but typically they involve many evidential observations collected together and pointing in a single direction.

In other words, what really matters for philosophers using this definition, when it comes to making decisions, isn't evidence exactly, but rather bodies of evidence and the case made by them. "Sure, footprints in the fireplace are evidence for Santa," they might argue, "but the whole body of evidence about Santa collectively weighs against belief." Thus, for them, there is no problem. Never mind the fact that they're putting on their most serious attitude and making a case that there's evidence for an obvious fiction.

Their position is that evidence comes in various degrees, and strong evidence outweighs weak evidence, perhaps extraordinarily heavily. Put all together, a case is formed that allows us to decide upon hypotheses. One could gather evidence bearing on the Santa question, see that there are thousands of bits of very weak evidence in favor (mostly circumstantial or correlative), realize that the evidence against is very strong, and conclude that there is no Santa. The assumption is that even in complex matters that aren't obvious works of fiction, a rational agent or team of them using corrective measures like scientific protocol and peer review will do just that. Their definition is fine and gives them a relationship between observations and the notion of evidence for a hypothesis or other idea.

This is all well and good, but the problem of its being misleading and not directly useful in the sciences lingers.

An aside into a curious possible loophole


This definition admits a curious loophole that I think is actually pretty important, though this is something of a fringe opinion--not that it matters much because I don't want to use this philosophers' definition for "evidence" anyway.

If we allow that a hypothesis or belief can have a probability of zero, almost surely, either a priori or on background knowledge, then there is only one kind of evidence that could have a chance of making that probability higher: almost certain evidence, the kind that grants a conclusion with literally all but 100% certainty.

Here's an example: Santa Claus, this time done right. I suggest that the reason we recoil against the idea that there is any evidence for Santa Claus is because we know Santa is a fiction as part of our background knowledge. As I'm framing it, one way to view it would be that knowing Santa is a fictional character implies that the probability of the Santa-hypothesis is zero, almost surely. (NB: "Almost surely" is a technical mathematical term that needs to be included to avoid question-begging categorical denial.) Put in mathematical shorthand,
P(Santa | background)=0, almost surely, because part of our background knowledge is knowing that Santa is a fictional character and thus almost surely not real.
If I'm right, we are almost absolutely sure that Santa doesn't exist, but we still leave open the tiniest possibility of being corrected. What I mean is that it would take almost certain evidence for Santa--like actually meeting an unequivocal Santa on Christmas night--to have any chance of raising the probability that Santa exists. In this situation, no circumstantial observation changes the probability of the Santa claim because it's totally overwhelmed by the fact that we know Santa is a fiction.

Critically, the only kind of observation or collection of observations that could constitute evidence by the philosophical definition under examination is one that bears almost sure confirmation; literally no other observation can qualify as "evidence." Since we do not possess any almost sure evidence for Santa, we can safely conclude that we do not possess any evidence for Santa. (Mathematically, this could possibly be handled by using l'Hôpital's rule, or something like it, to deal with the resulting indeterminate forms arising in Bayes's theorem.) To construe it like the philosophers do,
If P(Santa | background)=0, a.s., then P(Santa | obs. + background) can only get bigger if the observation in question for Santa is almost surely evidential for Santa Claus.
This raises the question of whether or not almost sure evidence can exist for a known fiction, which is an interesting enough question for people to work on, I suppose. I would contend that it can, at least in principle, but perhaps not. (Would a real-life Santa that matches enough of the properties in the stories really be the Santa in the stories?) I do think it's beyond question, though, that we could have almost certain evidence for a sufficiently Santa-like something to get on with calling it Santa Claus without too much fuss. (This philosophical beef jerky is more entertaining if we imagine finding paleolithic teddy-bear-like creatures on a forest moon in the far reaches of the galaxy--are they, or are they not, Ewoks?)
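For what it's worth, here is a minimal sketch of the arithmetic behind this loophole. It uses a tiny prior as a stand-in for "zero, almost surely," since floating-point numbers can only approximate that measure-theoretic idea, and the particular numbers are mine and purely illustrative:

    # Bayes' theorem for a hypothesis H and an observation A:
    # P(H | A) = P(A | H) * P(H) / (P(A | H) * P(H) + P(A | not H) * P(not H))
    def posterior(prior, p_obs_given_h, p_obs_given_not_h):
        """Posterior probability of H after seeing the observation."""
        numerator = p_obs_given_h * prior
        denominator = numerator + p_obs_given_not_h * (1 - prior)
        return numerator / denominator if denominator > 0 else 0.0

    # A prior of exactly zero cannot be raised by any finite observation:
    print(posterior(0.0, 0.99, 0.01))        # 0.0

    # A merely tiny prior barely moves under ordinary circumstantial observations...
    print(posterior(1e-12, 0.99, 0.01))      # roughly 1e-10

    # ...and only an overwhelmingly discriminating observation, i.e. "almost sure"
    # evidence, drags the posterior up to anything appreciable:
    print(posterior(1e-12, 1.0, 1e-15))      # roughly 0.999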

What about God?

Where this question gets particularly curious, and poignant, is regarding "theism," a blanket term for about a bajillion different positions that don't agree with each other. I would contend that this question can be analyzed by (a) not choosing a prior probability for "the God hypothesis" at all, (b) considering all of the background knowledge we have that bears on the matter, which I'd contend renders the background probability for "the God hypothesis" arbitrarily small, which is to say effectively zero, almost surely, and then (c) concluding that no observation but almost certain confirmation would raise the probability that God exists at all. This would make it the case that unless something absolutely and unquestionably had God's fingerprints on it, it couldn't be construed as evidence for God--and this is on their definition of evidence, the one I think is way too permissive.

Importantly, I don't categorically deny that this is possible, in principle, and I certainly don't just assume it isn't possible to have any kind of evidence for God, but wishy-washy crap like "life" and "consciousness" simply shouldn't cut it (especially since one need not even assume strict materialism to come up with other, probably bad, pseudo-explanations for life and consciousness). I simply think that there are no observations that point to the existence of God sufficiently to qualify as evidence, although there could have been, and books like the Bible use mythology to show us how ancient people pretended there was.

A more accurate representation of my position is that I think there are certain things that would count (like continued miracles, unequivocal and obvious benefits to believing the right religion, the obvious fulfilment of the promises of Jesus, or God directly and unequivocally communicating directly with every person), but our background knowledge of the world contains none of these, or any sort of thing like them. Instead, every set of observations for which we have explanations can be satisfactorily accounted for without a God, and that set is getting pretty small in every avenue that it might matter. Instead of a single observation that truly points to God, we get mysteries (e.g. consciousness), arguments about abstractions (like ontological ones, among others), and heaps of confirmation bias (and listing only these is actually being quite kind).

At any rate, if it makes sense to say that it is possible to have almost sure evidence, and thus almost sure probabilities for hypotheses (based upon our backgrounds, e.g. knowing that a story is a fiction, ancient mythology, etc.), which I think it does (because to say I'm almost sure my desk exists seems pretty reasonable), then in the cases where that occurs, an observation or body of them would only qualify as evidence when it is almost certain itself.


IV. Another philosophical definition

Philosophers knock on an important door with another definition, according to Lowder's summary of Achinstein: that which raises the probability that a hypothesis is true to greater than half, i.e. something that makes the truth of a hypothesis more likely than not. This is the "preponderance of evidence" definition in the legal sense. There are some curious issues here, some of which may have resulted from misunderstanding the very brief introduction I saw for it.

First, this definition is a weird one in the context of the discussion about "an observation A is evidence when [something]" because, other than direct, concrete observations, individual observations would only rarely meet this criterion. (An example of a direct, concrete observation: "I have goats on my farm. Here, see this goat?" "By Jove, you do have goats on this farm!" These, I think, are almost sure and confer 100% confidence, almost surely.) Indeed, this definition seems to apply best to collections of observations that collectively raise our confidence above some stated limit, here half. This may be the point I misunderstood from their brief introduction, but to my credit, it appears that they also got snagged on this point in their conversation.

Also, notice that this definition isn't very useful except in civil law--the "preponderance" requirement is, in an important sense, arbitrary and weak. Scientists, engineers, and courts routinely demand much stricter standards before we consider a matter settled and the observations to be evidence, like 95%, 99%, 99.9%, five-sigma, six-sigma, and "beyond reasonable doubt," none of which are accounted for by this restricted definition. That the "evidence" bar in this other philosophical attempt to define the term is set so low renders it rather misleading as well as largely useless.


V. The scientific definition(s)

Apparently, this whole argument arose because I differ from my philosophically inclined friends by firmly espousing something like the scientific definition (perhaps my training as a physicist, though incomplete, served some purpose).

The scientific definition can best be summarized in a colloquial fashion by the usual humility and honesty of science. One need only listen to how readily scientists will say that some observation X is not evidence for some hypothesis H the moment they know that H is discredited, wrong, or false. Fire is not evidence for phlogiston; the wave nature of light is not evidence for the luminiferous aether; and life is not evidence for vitalism. Scientists do not tend to say that there is evidence for discredited or obsolete scientific models. (Note that one important difference relevant to another heated philosophy/science discussion that rages currently is that part of how science makes so much progress so rapidly is that it discards discredited ideas, which philosophy has some intrinsic problems with doing.)

In other words, generally speaking, scientists tend not to accept that the term "evidence" applies to anything we know to be false. I have personally been using this line of argument for a while now--there is no evidence for something false, only evidence that can be misattributed to a false idea. In that, I think of evidence as a body of facts and observations that reflect reality, and only reality. Note that we have another word for such observations: data (or, in some cases, just "observations"). That raises the question of when data qualify as evidence, which is really just where we started.

Before we get to that, there's another problem that scientists have with the "raises the probability of the hypothesis" philosophical approach to evidence. The idea that a single observation can be considered evidence for some hypothesis, outside of immediately sufficient circumstances, which I will discuss later, is dangerously misleading. In talking with a working scientist about this last week, his immediate, recoiling response was that "a single datum could be construed to be evidence for almost anything, and the error bars on a single datum's support for any model would be so enormous to render it meaningless!" Thinking of evidence as single observations is probably an error--except in the cases where we have direct, concrete observation, like seeing a goat on a farm or recognizing that something hit a particle detector.

That last point is one I think is important in this discussion also; the role the hypothesis plays isn't trivial in determining if something is evidence or not. Particularly, we want to avoid confirmation bias. Notice that we don't need a theory that encompasses protons to notice when one hits a particle detector, and thus we don't have to get caught up with the confirmation bias-laden activity of starting with a hypothesis and then seeking evidence for it.

In science, the real heavy-lifting kind of observations are predictive ones that would disconfirm the model if they weren't satisfied, so instead of seeking to confirm a model, we seek to break it and call it good only if it resists our best efforts. Only when that happens are careful scientists usually eager to consider their observations to be evidence. In science, models and hypotheses and theories are all throw-away entities; the data themselves are ultimately the core of what matters. The data become "evidence" when the model starts getting sufficient confirmation.

On the other hand, a single observation of something hitting a particle detector doesn't necessarily count as evidence--there's too much room for error. A few decades ago, a sensitive bit of equipment was seeking to detect magnetic monopoles, which are known to be very rare if they exist naturally at all, and it had the right kind of signal come up one day. But that signal occurred when no one was present (labs being far less sophisticated then than now), and it registered in a way that left far too much doubt, since there might have been some kind of coincidental interference, for it to be taken as a legitimate observation--perhaps the foundation of the building settled ever so slightly and jarred the detector. No one knows. That observation is evidence under the philosophical definition, but no scientist I am aware of seriously considers it to be evidence for natural magnetic monopoles.

Overall, the scientific usage of the term evidence seems to run along the idea that evidence is a body of knowledge that supports a model that is provisionally true. Both of these conditions need to be satisfied to qualify a set of data as evidence.

Regarding "provisional truth," this is determined by a number of complex factors including the support of all relevant data, the explanatory salience of the model, the predictive power of the model, the ranges the model is considered valid over, the confidence with which we can say the data supports the model, and the consistency of the model with other successful models in related fields of science. It is the goal of many scientific endeavors to offer models that qualify as being taken as provisionally true.

NB: There are a few understandings of evidence from the scientific perspective, at least three, but they generally run along the same theme and apply in different arenas. I'm sidestepping this detail in the interest of brevity, which I've already lost with a great deal left to go.


VI. A modest proposal

My preliminary proposal is pretty straightforward, and it sort of blends two of the understandings that philosophers use and tries to keep to the scientific understanding of evidence, which is actually useful and not misleading. Further, I think it reflects the everyday "folk" use of the word in many applications.
A body of observations O is evidence for a hypothesis H if, and only if, it is a consistent part of a larger body of observations, called the evidential closure of O, comprised of all observations bearing significantly upon H, such that the probability that H is true given O (plus its evidential closure) is sufficiently great to warrant justified belief that H is true. In this case, we could call an observation A in O an evidential observation.
To summarize this definition in plainer language, I'm saying that an observation should only be considered "evidence" (more carefully, an evidential observation) for a hypothesis if it is a consistent part of a large number of observations that taken together, along with all other observations that have relevance, constitute support that justifies belief in the hypothesis. In short, we only have evidence if all of the relevant information we have, taken together, justifies accepting the hypothesis at a given level of confidence, and then the specific body of observations that provide inferential or direct support for the hypothesis is the evidence.

The body of observations that collectively justify acceptance of the hypothesis, not any observation individually, is what we should consider to be evidence, and we could call an observation in that body an "evidential observation" if we wanted to. The key here is that something should only constitute evidence for a hypothesis if that hypothesis has, on the whole, strong enough reasons to be believed to be taken as provisionally true.
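As a minimal sketch of the decision rule I have in mind (the function and variable names are mine and purely illustrative; how the confidence figure gets computed from the observations is the separate, field-specific statistical question discussed further below):

    def is_evidence(observations, evidential_closure, confidence_given_closure, required_confidence=0.95):
        """Treat a body of observations as evidence for H only if, taken together
        with everything else bearing on H (its evidential closure), it justifies
        belief in H at the required confidence level."""
        # Guard against cherry-picking: the observations must be part of the
        # larger body of all observations that bear significantly on H.
        if not set(observations) <= set(evidential_closure):
            return False
        # Only call it evidence if the whole relevant body of information
        # supports H strongly enough to take H as provisionally true.
        return confidence_given_closure >= required_confidence

    # E.g., footprints-in-the-fireplace observations never clear the bar for Santa,
    # because the closure (everything else we know about Santa) keeps confidence
    # in the hypothesis effectively at zero:
    print(is_evidence(["footprints"], ["footprints", "parents buying gifts", "no flying reindeer"], 0.0))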

Thinking of evidence as a body of observations, instead of thinking of individual observations themselves as being evidence, comports fairly well, but imperfectly, with the way lay people, scientists, and lawyers use the word, so it is not a radical overhaul to suggest that it be treated specifically as such.

So, about God...?

As a consequence of this definition, being from Baltimore is not evidence for being a murderer. Owning a gun is not evidence that you will commit suicide. Water molecules on Mars are not evidence that Mars is made of cheese. There's no evidence for Santa Claus; there is no evidence for the Easter Bunny. In all of these cases, the body of observations relevant to these hypotheses are not sufficient to justify provisional truth, and so these observations are neither evidential observations nor are they evidence.

Also, and for the same reason, there is no evidence for God. The total body of observations relevant to the question of God's existence simply isn't sufficient to justify knowledge that God exists, and thus these observations do not constitute evidence for God's existence. Those who believe and apologize for belief may have observations, even suggestive ones, to support their belief, but they do not have evidence.

Again, this need not necessarily be the case, though. I think it is eminently reasonable to suggest that if we found the world ordered in the way that the ancient scriptures, like the Bible, imagined it was ordered, we would have sufficient reasons to believe that God exists. Thus the relevant observations supporting some specific brand of theism would constitute evidence. It's the fact that we cannot conclude that the existence of God is a truth of the world that prevents the observations we have from constituting evidence for the existence of God. (Of course, an epistemically hidden God presents a superficially tough nut because belief in such a thing can only proceed on faith, by which I mean belief without evidence, but it gets weirder than that because a properly epistemically hidden God technically cannot even have observations that lead to belief in it. Thus, this case may be moot.)

NB: Concerning the Bible, on the suggested definition of evidence, the Bible does not constitute evidence that the world was ordered in the way the Bible imagines, although on the philosophical definition I have a problem with, it does.

The evidential closure

The purpose of the introduction of the evidential closure of a set of observations into the definition is that otherwise a set of observations O is subject to having been cherry picked, and that's not acceptable. My thought here is that the evidential closure of O includes all observations that bear significantly upon O, and if O doesn't confer justification in light of those other observations, O shouldn't be called evidence because it isn't truly sufficient to justify belief.

For example, if we limit ourselves to "usual" sizes, speeds, and gravitational environments, Newtonian mechanics has an enormous body of observations that would (and should--as we'll see) constitute evidence for it, but the observation of the precession of the perihelion of Mercury, for example, bears upon O without being a part of it, indicating that something more robust than Newtonian mechanics is needed (here, general relativity seems to do the trick). The precession of the perihelion of Mercury is in the evidential closure of the body of evidence for Newtonian mechanics. In this case, the evidential closure of O contains observations that are relevant to the hypothesis, the "truth" of Newtonian mechanics, that one could accidentally or dishonestly avoid by choosing O to suit a belief that Newtonian mechanics is "true." This kind of cherry-picking shouldn't be acceptable, but as we will see shortly, the matter with Newtonian mechanics is a special kind of case that complicates matters.

Best-available evidence

One other quick note to make is that there is a subset of what I called the evidential closure of O that we have to call the "best-available evidence." This is almost exactly what it says it is, the best evidence that we have at the given moment. There's a little bit of an issue here.

The "best available evidence" regarding life on Mars currently leads us to conclude that there is no life on Mars. There are tantalizing observations that suggest it is possible, maybe even probable, but we cannot make the conclusion that there is life on Mars by the "best available evidence" we have. The issue is that if we use this phrase to say "the (best available) evidence indicates that there is no life on Mars," and later we find life on Mars, we will not continue to say that the evidence indicates no life on Mars. Instead, people will say "by the best available evidence at the time, we could not conclude that there was life on Mars." This may actually be an abuse of the term "evidence," as is revealed by the fact that exchanging "data" or "potential evidence" for "evidence" in "best available evidence" completely eliminates the problem.

Confidence and ranges

Range

The example of Newtonian mechanics, brought up above, is a pretty good one for talking about ranges (of relevance) and provisional truth--and thus evidence. Scientists, I think, would all agree that there is copious evidence for Newtonian mechanics even though they universally know better than to say Newtonian mechanics is "true." General relativity supersedes it. Of course, we must stay constantly aware that "true" isn't a real property of scientific models. Provisional truth is the relevant idea.

Newtonian mechanics is, like all models, an approximation that is useful and provisionally true, provided that we are limiting our range of relevance to large, "slow" objects in particular gravitational circumstances. In the cases where the error is small enough, Newtonian mechanics is provisionally true even though general relativity is more accurate. (GPS is an example where very small gravitational influences matter profoundly.) On the ranges where it has low error, Newtonian mechanics is provisionally true, and thus we have evidence for it on those ranges.

This last bit needs highlighting. Scientific models seek to be useful and to provide some decent degree of explanatory salience, their utility being to describe and make predictions about phenomena. Use is limited to the range over which the model is sufficiently accurate. For another example, the small angle approximation, which says that the sine of an angle is approximately equal to the measure of that angle (in radians) for small angles, provides a model that's useful over a certain range of small angles, and that's all that matters.
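
To put a number on that last example, here is a minimal sketch in Python (the 1% tolerance is an arbitrary choice of mine, not any kind of standard) showing the range over which the small angle approximation stays provisionally true:

import math

TOLERANCE = 0.01  # 1% relative error; an arbitrary cutoff for illustration

for degrees in (1, 5, 10, 15, 20, 30, 45):
    theta = math.radians(degrees)
    relative_error = abs(theta - math.sin(theta)) / math.sin(theta)
    verdict = "within range" if relative_error < TOLERANCE else "outside range"
    print(f"{degrees:>2} degrees: error {relative_error:.3%} -> {verdict}")

Run it and the cutoff lands somewhere around 13 or 14 degrees at that tolerance; loosen or tighten the tolerance and the range moves with it, which is exactly the point.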

When we say that a hypothesis is provisionally true, part of what that entails is an acknowledgement that we're only referencing the range over which it is useful, limits that can be described quite accurately when we are aware of them.

Confidence

In science, confidence is measured by statistics performed on bodies of data. To be able to draw good conclusions, a fair amount of data may be needed, ranging from dozens of elements in the sample for some kinds of conclusions (e.g. some medical and psychological studies) to billions for others (like in particle accelerators). Confidence is a measure of how sure we can be that our model is "true" in the sense that it accurately describes and predicts the data over its relevant ranges. It's worth noting that this is all very well understood and can be applied quite effectively by scientists working with the statistical tools relevant to their fields (and other scientists are eager to point out when someone has used the wrong statistical methods, since doing so is a virtually guaranteed publication).

Here's where the "justified as provisionally true" part of the definition that I offered for "evidence" comes in. One of the philosophical definitions says that something is evidence if it makes the probability that the hypothesis is true greater than half (so the hypothesis is more likely to be true than false--the "preponderance of evidence" standard from law). Science is already doing this, but it's not doing it with such a low bar. Statistical confidence levels do a better job, and those are usually 95% or something much stricter, not 50%.

And here we can see that by using statistics we can grade a body of observations in terms of what we mean by calling them "evidence." We can state our confidence, as a probability, that some hypothesis H is true given a set of observations O (plus background), and ask whether that confidence is sufficiently great to warrant justified belief that H is true. If we are using 95% confidence, we can say that the observations constitute evidence at the 95% confidence level (and, by implication, not necessarily at stricter levels).

For many kinds of research, 95% is sufficiently great, and for other kinds, we need to be sure to better than one part in millions or billions. Importantly, statistics on a body of observations allow us not only to decide when they constitute evidence but also to state exactly the confidence we have in that determination. If we conclude with 95% confidence that a body of observations constitutes evidence for a hypothesis, we're automatically stating that there's a 5% chance that we're wrong and that those observations are not actually evidence for that hypothesis at all.
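
To make that concrete, here's a rough sketch in Python with invented numbers (a toy treatment study, not real data): 40 of 50 patients improve where the historical rate is 60%, and the exact binomial tail tells us at what confidence level that body of observations could be called evidence that the treatment beats the historical rate.

from math import comb

n, successes, baseline = 50, 40, 0.60  # invented numbers for illustration

# Probability of seeing a result at least this extreme if the treatment
# were no better than the historical 60% rate (one-sided exact binomial).
p_value = sum(
    comb(n, k) * baseline**k * (1 - baseline)**(n - k)
    for k in range(successes, n + 1)
)

confidence = 1 - p_value  # "confidence" in the loose sense this essay uses
print(f"One-sided p-value: {p_value:.4f}")
print(f"Counts as evidence at the 95% level: {confidence >= 0.95}")

A stricter field would simply demand a stricter cutoff than 95% before reaching for the word at all.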

What could we call observations that aren't good enough to qualify as evidence? Observations. Data. Potential evidence. We already have words for this, and it may be a grave error to haggle over whether or not they are evidence. We lose nothing by making statements like, "the data suggest that [insert hypothesis here] is true." When the data is sufficiently suggestive to conclude provisional truth, we can consider it evidence. When we start to suspect that the data is pointing in a particular direction, we could call it "potential evidence" or merely "data." This still bucks the lay usage, of course, by making it more precise, but it does so in a way that is far less misleading and far more useful than the profoundly misleading philosophical definition.

Idealizing

Idealized, this definition would read something like this, putting it roughly for convenience:
An observation A is evidential for a hypothesis H if it raises the probability that we're right to think H is true and H is actually true.
This is an idealization because in many important cases we cannot know that a hypothesis is true. (For an example where we can, I feel that I'm fully justified in saying that it is true that I'm typing this on a desktop computer with a black keyboard. For an example where we cannot, we cannot technically know whether a coin is perfectly fair, though we could become confident to arbitrarily good precision given enough time.) The idealization here mirrors another one that is famous to philosophy, the issue of knowledge.
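
As an aside on the coin parenthetical, a quick sketch (normal-approximation interval, toy numbers) shows what "confident to arbitrarily good precision given enough time" looks like: the 95% interval around the estimated bias keeps shrinking but never closes to a point, so "perfectly fair" stays permanently out of reach.

import math

def interval_width(n_flips, p_hat=0.5, z=1.96):
    """Approximate width of a 95% confidence interval for a coin's heads rate."""
    return 2 * z * math.sqrt(p_hat * (1 - p_hat) / n_flips)

for n_flips in (100, 10_000, 1_000_000, 100_000_000):
    print(f"{n_flips:>11,} flips: interval width ~ {interval_width(n_flips):.6f}")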

Going back to Plato, knowledge has been understood to be justified true belief. The issue is that we use data, which is construed as evidence, to justify a belief, and, not knowing for sure what is and isn't true in most cases, we use data, construed as evidence, to determine whether or not a belief is true. Plato idealized, and "truth" hung out there as an ideal, never mind if it could be reached or not. In this conundrum, the entire field of epistemology has its roots.

"True," though, in this definition of knowledge, is an idealization of what I called provisionally true earlier. Nothing is true but reality, and we can only know what's true for (almost) certain in pretty special circumstances, which we'll discuss in more detail momentarily. For all the rest of the cases, the most interesting ones, we have to rely upon the provisional truth (based upon the many things mentioned earlier) of our hypotheses instead of their (certain) truth, which is out of our epistemic reach. (NB: Mathematicians might say that certain statements are "true" and that they can know it, but those truths are abstract logical consequences of axioms to which they are slaves and thus "true" in a meaningfully different sense than what is meant by something being "true" about reality.)

So, my conception of evidence is designed to mirror the definition of knowledge. No matter how justified a belief may be, it does not constitute knowledge unless it is also the case that that belief is true. That is, knowledge has to accurately reflect reality. Similarly, it seems to me that within the core of the general idea of evidence is that it represents a set of observations supporting knowledge, not mere beliefs.

Thus, for an observation to be considered evidence for a hypothesis, it is my contention that we should require also that the hypothesis it supports is actually true. In that sense, evidence is a stronger form of data; evidence is data that is sufficient to justify belief in something that is true. (Again, I've already accounted for the fact that we often can't know what is true, above, by discussing confidence values and provisional truth.)

God?

Now consider the question of the existence of God, looking at the idealized form of the suggested definition. If it is the case that God does not indeed exist, since the claim that God does exist would be false, no body of observations would constitute evidence under the idealized definition. I've worded this more eloquently in the past: If God does not exist, there is no evidence for God, only evidence misattributed to God.


VII. On the special cases, the sufficient ones

It is reasonable to conclude that some observations carry enough potency to confer immediate knowledge and thus to, on their own, constitute evidence. If, for instance, you are dealt the queen of hearts from a deck of cards, that observation alone is sufficient to conclude that you are holding the queen of hearts, a red card, a face card, a card worth ten points in various games, and the like. The thing here is that a sufficient condition to justify the almost sure truth of certain hypotheses has been met.

A single observation can be sufficient to raise the probability that the given hypothesis is true beyond reasonable doubt; it thus constitutes direct evidence and should be treated as such. The law, in fact, calls this "direct evidence," and it carries immense weight in a case. ("We have direct evidence that the gun that fired the shots was on the person of the defendant on the night of the crime" means that there is no doubt that the murder weapon was in the hands of the accused at the right time, and though it may be circumstantial with respect to having committed the murder, it weighs enormously in support of that case.)

Of note, we see this kind of thing come up in the sciences, particularly in terms of the discovery of new kinds of objects--species, states of matter, planets, physical processes, and so on. A single observation is sufficient to establish the truth of a hypothesis and thus constitutes, on its own, evidence for that hypothesis. This is not a challenge to the suggested definition because any body of relevant observations that includes a sufficient observation will automatically pass the bar of whatever reasonable confidence level we wish to state. (Indeed, I suspect that evidence of this kind appears often in the form of almost sure evidence, if that line of thinking carries validity.)  

Strongly suggestive but insufficient observations

A single observation can be sufficient to raise the probability to a high degree of confidence as well. For example, there is a diagnostic test for a cartilage tear in the shoulder (the passive distraction test for a SLAP tear) with 94% specificity. This means that a positive passive distraction test result strongly suggests a SLAP tear in the shoulder. Incidentally, the actual confidence in the hypothesis of a SLAP tear given a positive test result depends upon both the sensitivity (53%) and the specificity, and it turns out to be 72% certainty in this case (citation). Here, a single observation confers 72% confidence in the relevant hypothesis.
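
For the curious, here's roughly where a figure like that can come from, sketched in Python: Bayes' theorem applied to a positive test result, given the test's sensitivity and specificity and a pre-test probability. The pre-test probability below (about 22.5%) is my assumption, picked only because it reproduces the cited 72%; the cited source may have arrived at its number differently.

def post_test_probability(pre_test, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    true_positives = sensitivity * pre_test
    false_positives = (1 - specificity) * (1 - pre_test)
    return true_positives / (true_positives + false_positives)

# Passive distraction test figures from the text; pre-test probability assumed.
p = post_test_probability(pre_test=0.225, sensitivity=0.53, specificity=0.94)
print(f"Chance of a SLAP tear given a positive test: {p:.0%}")  # ~72%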

On both philosophical definitions I have discussed, this constitutes evidence for a SLAP tear, but on the more robust definition I am suggesting, it only constitutes evidence up to the 72% level of confidence. If a decision, like surgery, requires a higher degree of confidence than 72%, one should not call this observation evidence on its own, though it is a highly suggestive observation. Imagine for a moment an orthopedic surgeon coming into the exam room and telling you that your positive passive distraction test is evidence for a SLAP tear and that you need surgery. If you have your wits about you, you will probably immediately ask how good the evidence is from just one manipulative test, since for most people the words "need surgery" reasonably require a pretty strong justification. Now imagine finding out that it is only 72% certain. How do you feel about the word "evidence" in this case? Personally, I feel it is too strong for the circumstances and thus misleading. I think you would agree if, having agreed to surgery, the surgeon returned to tell you the good news that your labrum was not actually torn in the first place.

This case is important (and one example of many) because it isn't just very weak evidence under the philosophical definition that is misleading. A positive PDT result is strong evidence under the philosophical definition, and yet it is still potentially misleading even to use the word "evidence" without qualifying it with its confidence level. On its own, a positive PDT result is evidence that there is a 72% chance of having a SLAP tear, not evidence that there is a SLAP tear. The difference is that the former statement is true.

The same is true in the lottery ticket example I mentioned earlier. Holding a lottery ticket is extraordinarily strong "evidence" that you will lose the lottery jackpot--a 99.9999994% chance of losing, if by "losing" we mean "not winning," using the current one-in-175,223,510 Powerball jackpot odds. Doesn't it feel more than a bit presumptuous, though, to say that having a lottery ticket is evidence that you will not win the lottery?

If such an attitude were common, I don't think many tickets would sell (and even if that would be a moral victory, my point is that I don't think we necessarily think of evidence that way). The fact that "you will not win the lottery" technically may not be a true statement stands in the way of calling an observation like that evidence, even if it does constitute evidence for a very high confidence in that belief. (So the moral victory is available here not just by trying to get people to accept a weird definition of "evidence" but also by teaching them enough introductory statistics to understand evidence in light of the confidence value justifying it--as many universities now do by requiring introductory statistics as a service course for non-technical majors.)


VIII. Additional issues

There are five additional issues that still stand out, at least on this preliminary attempt to lay out my thoughts about this complicated topic. They are
  1. The idea that arguments can be construed as evidence;
  2. The issue of "background";
  3. Talking about the probability that a hypothesis is true at all;
  4. The idea of certainty in general; and
  5. Not all concepts are hypotheses; some are beliefs and stories instead.

Arguments as evidence?

It's a pretty odd thing that I'm seeing people attempt to make the case that arguments can constitute evidence, since arguments are linguistic constructs (premises connected by logic to a conclusion) dealing ultimately with abstractions, while evidence is data, meaning direct observations of reality. Calling an argument evidence is a category error, and any definition of "evidence" that allows such a thing has to face this problem.

There is an important relationship between data and evidence that involves arguments: for data to be considered evidence, we need an argument to connect it to the hypothesis. This doesn't make the argument itself evidence, though. It just says that we need linguistic constructs dealing with abstractions to connect reality to the abstractions that we use to describe reality.

In the sciences, because we don't do science just out of nowhere, the connection is frequently pretty clear, though. The model under investigation is itself already an attempted description of reality and thus is already connected to the observations that led to formulating the model. (Remember elementary-school science: a hypothesis is an educated guess.) This connection is reinforced or broken by investigating how successfully the model is able to make accurate predictions within its useful range.

Generally, I'd say the same was true of theism until real science came along, at which point superstitions (including the superstitions that support belief in revelation) were revealed to be unreliable methods of gaining accurate knowledge about anything (this being a huge difference between accurate and merely useful ideas). God was an attempted description of reality, but the lack of predictive power--and, indeed, the lack of descriptive and explanatory salience, together with its failure to mesh with other fields of inquiry--of theistic attribution cut our ties to that model, showing us that observations do not support it. All that supports it now are the arguments that try to shoehorn cherry-picked data back into the unfit model, and those arguments do not constitute evidence, nor do they make evidence out of observations that aren't evidential.

Background

Determining which observations are in the background and which aren't is critical to the success of any definition of evidence that compares against background knowledge. This determination, in fact, might be very hard to make. For instance, if the observation in question is part of the background knowledge, the union (combination) of {observation} and background is just background. Conditioning the probability of the truth of the hypothesis on the background together with the observation will therefore yield the same probability as conditioning on the background alone--which makes the observation not evidence (read: evidential) on the philosophical definition I'm taking issue with (unless we can consider it evidential for another reason).

Picking this apart seems to require calculating probabilities from counterfactual states we pretend are background, ones in which "background" really means "background knowledge except this particular observation." This casts a shadow on the whole approach. Many things are interconnected causally, and removing an observation from our body of background knowledge may require substantial modifications. For instance, if we take the observation of life out of our background knowledge to determine if life constitutes evidence for God, we may have to take out a number of other properties that are causally linked to life, and then we're comparing apples and oranges. Simply put, not everything can be treated like an experiment, and those things that can't be shouldn't be.
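
Here's a toy illustration of the problem, with invented priors and likelihoods (nothing here is meant to model a real case): once an observation has been folded into the background, conditioning on it again changes nothing, so the raise-the-probability definition can only evaluate it by counterfactually pretending we never had it.

def posterior(prior, p_obs_if_h, p_obs_if_not_h):
    """P(H | observation) by Bayes' theorem for a single binary hypothesis."""
    numerator = p_obs_if_h * prior
    return numerator / (numerator + p_obs_if_not_h * (1 - prior))

prior_h = 0.30                          # P(H) before any observations (assumed)
p_obs_if_h, p_obs_if_not_h = 0.8, 0.4   # likelihoods of observation O (assumed)

# Background that already includes O:
p_h_given_background = posterior(prior_h, p_obs_if_h, p_obs_if_not_h)

# Conditioning on O again adds nothing, since P(O | background) = 1:
p_h_given_background_and_o = p_h_given_background

print(p_h_given_background == p_h_given_background_and_o)  # True: O is "not evidence"
print(p_h_given_background > prior_h)  # True: counterfactually removed, O raised P(H)

The trick, of course, is that "counterfactually removed" is doing a lot of work in that last line, which is exactly the complaint above.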

This whole pile of rot can be avoided, though, simply by ceasing to think of evidence as individual observations and starting to think of it as much of our everyday language already indicates: as a body of observations bearing upon a matter of fact (consider: "the evidence for biological evolution" rather than "the evidences for biological evolution"). (Note: facts are, by definition, true.)

Probability that a hypothesis is true and certainty

When we talk about the probability that a hypothesis is true, it is technically a discussion of what we know, not of reality itself. A hypothesis is either true or false, and in a manner of speaking, except in the special sufficient cases, we can't really know which it is. Reality, though, is always true. This means that we're not really talking about the probability that the hypothesis is true; we're talking about how likely we are to be right if we say it is true. So when we're saying something like "p is the probability that hypothesis H is true," what we're really saying is "p is how confident I can be that hypothesis H reflects reality accurately." This is accounted for by statistical confidence testing, as discussed above.

Importantly, thinking about the probability that a hypothesis is true is kind of wrongheaded on its own and is therefore likely to be misleading. It causes us to put a bit too much stock in our hypotheses instead of putting it where it belongs, in the observations themselves. Our hypotheses follow from our observations in the first place, and the observations are the relevant bits that we can be quite sure have something to do with reality, which is not necessarily true for hypotheses, particularly when broadly construed as they often are in this discussion. Calling a hypothesis "true," though, seems really to be a reflection of how useful, salient, and consistent it is with better-established knowledge. Confusing the map for the terrain is enormously common, and learning to reject it is a properly big deal.

That might not be a hypothesis

Philosophers and scientists squabble also about what constitutes a hypothesis. Particularly, scientists have a real and legitimate issue with the idea that just any idea constitutes a hypothesis, which seems to be the implication of the philosophical definitions of evidence.

A hypothesis is, so far as I can tell and without getting technical, an idea about the world formed by examining the information that we have, including other models that have proven to be quite successful at what they do (the background or a preliminary set of interesting data). It has to make some kind of testable prediction, and there has to be a way to falsify it. Another quality that's a bit more ambiguous is that hypotheses really shouldn't be too ad hoc.

So that brings us back to God's existence. Is it even a hypothesis? I don't think so. First of all, I don't think there's an object of attribution more ad hoc than a deity, particularly God, as it is frequently conceived. God is usually left undefined except that "whatever we see, God is the explanation for it somehow." This is why I am an "ignatheist," someone who thinks that the notion of God is too vague to deserve any consideration, although when specifics happen to be given, say as in classical theism, I reject them (for good reasons).

Secondly, "God did this or that" is not falsifiable, and "God exists" is not falsifiable. In fact, "God provides an explanation for this or that" is also not falsifiable because whatever real explanation is given, "God did it that way" is a natural reply that keeps people believing, and that reply is not falsifiable. Take biological evolution, for example--a huge swath of the American public tacitly denies biological evolution by saying that it happened because God guided it the way it went, which is a weak form of Intelligent Design. In the same sense, the idea of God doesn't make any testable predictions either. It seems to, but in every instance, it's possible simply to say that things went according to God's Plan, which apparently includes not being put to any tests.

This raises a question that demands a pretty good answer: What on earth does it mean to say that an (individual) observation is evidence for a "hypothesis" that doesn't even qualify as a hypothesis?

Additionally, now that we have developed better tools for making sense of the world than were available thousands or even hundreds of years ago--particularly the mature sciences, proven by the fact that we can do things like put functional robots on Mars and treat a huge swath of deadly diseases--our background knowledge no longer leaves any room for the addition of God. This is why I suggest that the working probability of God, treated like a "hypothesis," is zero, almost surely. That "hypothesis" is off the table. It's not a hypothesis. Almost certain evidence would be needed to overturn that fact.

This, again, to briefly divert from the topic of evidence, is the position I've called ignatheism, which can be summarized thusly: Theism is not even wrong except when it bothers to be, and then it's still wrong.


IX. Summary

I hope here I've made a case that:
  • We shouldn't call observations evidence unless they support a hypothesis that is (provisionally) true, mirroring the formal definition of knowledge, which requires that a belief be true to constitute knowledge. This would elevate evidence to a special kind of set of observations, the kind that points us to provisionally true hypotheses and thus away from error and nonsense;
  • There is a way to do this by taking evidence to be a body of observations that all together, including all additional observations that bear on the matter, lead us to justified belief that the hypothesis is true;
  • Lacking certainty, this can be accomplished by understanding provisional truth, including stating the confidence values that describe our degree of justification and thus the level of certainty with which we feel the observations constitute evidence;
  • The pure and abstract philosophical definition under examination, which has been gaining traction lately amongst some philosophers, isn't just misleading, it's irresponsibly and dangerously misleading, and it should not be promoted without serious amendment, for which I've offered a tentative first attempt; and
  • Theism shouldn't be on the table at all until something like almost certain evidence is available. I have recently made a case that continued miracles would be sufficient for the task, so I'm not setting as impossible a bar here as people may think I am.

TL;DR: If nothing else gets taken away here, the big idea I've argued for is that we shouldn't call observations evidence except when they support ideas that are true. Even better, we should consider evidence to be bodies of observations that are collectively sufficient to determine the provisional truth of the matter in question.