Friday, February 28, 2014

The Pope, Pretending to Know

I'd like to take a moment to illustrate exactly how much pretense can be packed into surprisingly little theological utterance. I'll use a single sentence tweeted by Pope Francis earlier today.
"The Eucharist is essential for us: it is Christ who wishes to enter our lives and fill us with his grace."
Francis's statement is unquestionably a claim to knowledge--knowledge that must be based upon faith. As Peter Boghossian has pointed out, when "faith" is used in a religious context, it is typically apt to replace it with "pretending to know what you don't know." Here's a list of some of what Pope Francis pretends to know (but doesn't)--and leads other people to pretend to know--in this one sentence alone.
  1. The Eucharist is essential for us. It is in no way clear that a ceremony to commemorate the alleged Last Supper is essential for anyone. Francis pretends to know otherwise.
  2. The Eucharist is Christ. It is in no way clear that the "transubstantiation" in the Eucharist is Christ. Francis pretends to know otherwise.
  3. Consecration does something. It is in no way clear that consecrating bread or wine--or anything else--does anything at all. It is even more dubious that the result of ceremonial consecration "transubstantiates" food and wine into human flesh and blood, and the matter is only made worse by saying it is the flesh and blood of God. Francis pretends to know otherwise about all of these matters.
  4. Christ wishes to enter our lives. This is not known. Indeed, it is not even clear what this means (and may not mean anything). Francis pretends to know otherwise.
  5. Christ can enter our lives. In order to act upon the stated wish, this action has to be possible. It is in no way clear that this is possible or, again, meaningful beyond the metaphorical. Francis pretends to know otherwise.
  6. Christ wishes to (and can) fill us with his grace. This is not known, and it is not even clear what this means. Note that this is really two claims to pretended knowledge. Francis pretends to know otherwise to both.
  7. Christ's grace is important. If the Eucharist is Christ wishing to enter our lives to fill us with his grace, and the Eucharist is essential for us, then clearly we can assume that Christ's grace, instead of being meaningless or metaphorical, is essential for us. It is in no way clear that this is the case. Francis pretends to know otherwise.
  8. And, of course, Christianity is true (or at least Catholicism), with a wide variety of other claims to knowledge in tow, e.g. about the existence of God and the reliability of the Bible. It isn't just unclear that this is the case; it's rather clear that it is not the case. Francis pretends to know otherwise.
As we know, though, Francis, along with many others, pretends to be informed by the light of pretending to know what he doesn't know. See this video for another example.

Tuesday, February 25, 2014

On religion, partial inoculation, and treatment-resistant disease

It was the mid-1960s, and the NASA programs that would put a handful of human beings on the moon for the first time within the decade were in full swing. Following Kennedy's famous pronouncement, this fact was a major element of the era's decidedly scientific zeitgeist. The Second Great War was behind us, and partially because of it, the economy was booming. Money, though, didn't all go to the top, even if lots of it did. Economic inequality in the United States in the 1960s was low and still steadily cruising toward its eventual nadir. In other huge news, widespread application of antibiotics had changed the face of disease seemingly overnight just a couple of decades earlier. And vaccines had just entered the scene, dropping the incidence and death rates of many serious diseases, again seemingly overnight. Life was good, and on April 8, 1966, TIME magazine's cover asked, "Is God dead?"

Not quite. God was not dead, though he was knocking at death's door. Though the 1960s were far from idyllic, all of the necessary conditions were in place to inoculate modern citizens against what, in later years, some would call "the faith virus." Science was big and in the public eye, satisfying our innate needs to understand the world and to feel like our unpredictable circumstances can be controlled. A fair degree of economic equality gave enough opportunity to enough people to feel hopeful and secure without God. At the same time, we were crushing disease, which has traditionally been believed to be the wrath of God enacted upon us lowly sinners. In the 1960s, it wasn't only scientists who agreed with Laplace's famous observation that there was no need for "that hypothesis"--God--in the model. It was popular sentiment.

But we didn't know an awful lot about these things in the 1960s. I'd dare to suggest that we know better now, largely because of that failure, and hopefully it's not too late. We have a burden upon us, though, not to repeat the mistakes of our past.


Antibiotics and vaccines did what Jesus could not. They formed the nearest thing we can probably imagine to a miracle. Even if we accept them on their ridiculous face, Jesus' miracles cured only tens. Antibiotics and vaccines took common serious illnesses and dropped their mortality rates almost to zero, and they did it blindingly fast. Where Jesus is said to have cured tens with his miraculous powers, antibiotics and vaccines cured tens of millions, in far less time and without a shred of doubt. Scourges of mankind like smallpox, many a theologian's delight for the fear it commanded, were effectively eradicated from the planet in a time comparable to the whole of Jesus' purported ministry.

But we didn't realize our danger. Evolution works quickly in extreme circumstances, allowing lifeforms to cling to existence beyond any hope. In rapidly reproducing bacteria and viruses, in which whole generations can be measured in minutes or hours instead of decades, the opportunity to evolve resistance to antibiotics and vaccines is stunning--and one of our greatest contemporary perils. Many disease-causing bacteria are evolving in response to our crusade against them, and in some cases they have evolved antibiotic-resistant strains (that's the R in the flesh-eating MRSA). Likewise, there is a serious threat that some of our vaccines may be obsolete within a few decades as new strains emerge beyond our protection. It is hard to imagine a worse situation than the resurgence of horrendous illnesses that are both unpreventable and untreatable.

There's a certain trick about evolution, though. Extinct species do not evolve. Unless it is somehow released from one of the handful of laboratories in which it still resides, smallpox isn't likely to be making a comeback. Indeed, even within local populations, say a particular host of a particular disease, at least some individuals must survive the onslaught that besets them to have a chance to adapt to the stress. The ones that survive, of course, are the ones hardiest to the adversity, which is one reason that antibacterial soaps that promise to kill 99.9% of germs are a little frightening. The hardiest 0.1% are the survivors that go on to reproduce, and the proportion of their offspring resistant to would-be toxic chemistry goes up.

With antibiotics, it is less their application than their bad application that has led to horror stories like treatment-resistant tuberculosis arising in India. One must apply the right drugs and then see them completely through so that, between the drug and the sick person's own immune system, there aren't enough survivors to be concerned with. This does not always happen, for a variety of reasons, most of them bad or downright heinous (as with the state of medical treatment for Indian "Untouchables" for the worst of all possible reasons--religious ones). What happens reliably instead is that antibiotic-resistant strains of diseases evolve, and dimly remembered horrors threaten to reawaken in our future.

With vaccines, the matter is similar. The reason smallpox was all but eradicated, rather as polio was as well, is that the vaccines for these illnesses were spread globally via concerted efforts to stamp out the diseases. Nearly everyone was vaccinated, so the disease could hardly get a toehold, and where it did, it could not spread. The application was complete. This, though, is not always possible. Diseases that we've vaccinated nearly out of existence often cannot have hosts other than humans, but others can be carried by other animals in addition to us. Against these, we must continually immunize ourselves and our newborn babies.

The solution to that issue was simple: it became standard practice to provide a series of vaccinations to children starting almost from the hour of their birth. It worked wonderfully, but because of unscientific yammering, this situation has been reversed. Many parents refuse to vaccinate their children for very bad reasons. Predictably, these diseases--mumps, measles, rubella, pertussis, and so on--are making a roaring comeback. And the prediction is dire. Within a few decades, it seems, our vaccines may be mostly useless.

Incomplete application of an inoculation leaves open a dangerous door. The pathogens evolve, and the forms that survive are often more dangerous and considerably less treatable. Our God-is-dead hope of the 1960s teeters on this balance, because when the specter of deadly infectious disease comes back onto the scene, so will a desperate belief in God (and a manipulative one about his wrath).


Since the 1960s, we have learned that pushing for socioeconomic equality causes certain vectors of the inequality virus to work overtime, seeking out ways to exercise their sociopathic privilege to the ruin of many. The New Deal and ensuing Great Society spawned a cult of individual sovereignty that served as the perfect cover for the societal sickness known as plutocracy to creep back upon us, and the start of the full-force effort can be traced to roughly 1971.

Somehow, American culture, enjoying the fruits of the Great Society, was never put in a position to understand clearly that it is the society that makes great societies work, nor did it properly understand the role that wealth and income inequality play in it. Particularly, as those forms of inequality increase, the society becomes sick in profound ways. For the individual, the trend is not at all clear--more money means more opportunity means good things are on track. For the society, though, now that we've looked, the fact stands out like a sore thumb. Wealth and income inequality are societal diseases, and many of the social ills we now bear witness to are symptoms. One of those is a return to distrustful individualism--a rejection of society--and thus the symptom exacerbates the cause.

As with disease, the inoculation against this sickness was not complete in the US, and the reforms from the Roosevelt through Eisenhower eras served as the basis for plutocratic greed to evolve. Evolve it did, and by embracing the anti-New-Deal reactionary "value" of individual sovereignty, it brought itself back from the edge of death with more popular appeal than ever. In 1980, with Ronald Reagan as its figurehead, government--which is to say society--became "the problem," and the cult of the rugged individual rocketed into the mainstream. American society at large was reinfected with the plutocracy virus that now threatens to tear it apart, and the virus is spreading abroad. European and Australian plutocrats, among others, are picking up the thought processes that have divided America since the 1970s, and now their own Great Societies are being torn apart. The inoculation of progressive liberalism was not fully administered, largely because it poisoned itself with ridiculous postmodern relativism.

We know better now, of course, but in the 1960s we did not realize fully enough that wealth and income inequalities are such pathogenic socioeconomic forces. Again, the God-is-dead hope of the 1960s is threatened by this because desperate economic situations, which leave people feeling less autonomous however intensely they pretend to be ruggedly individually sovereign, lead to a resurgence of desperate beliefs in God (and again, a ripe opportunity for religious leaders to capitalize on the problem).


God wasn't dead in 1966, but a great deal of what went by that name was. In isolated corners of American culture, a new and decidedly fundamentalist, evangelical variant on Protestant Christianity was initiating a Great Revival. Others, notably the nearly immutable Roman Catholic Church, plodded along as ever. For typical Americans, belief in God might have been largely irrelevant, perhaps even quaint, but the inoculation against the faith virus was not made complete, and the surviving religious cells were positioning themselves to go big-time.

And they did. And they still are. A variety of forces contributed to this effect--social, political, economic, and even theological--but over the last few decades of the last millennium, America underwent a huge religious revival, turning back from the attitudes that characterized the middle of the twentieth century. The problem was that the faith virus was not treated fully. God became irrelevant before faith was exposed as a contagious cognitive flaw, and so susceptible minds were taken in as the revival came to full force. Some of this revival is accounted for by what appears to be a natural boomerang effect with regard to religious attitudes, but for that to have happened required an incomplete inoculation against the faith virus in the first place.

New Atheism

"New Atheism," as it is called, is an enough-is-enough response to this revival, which took place not only within American Christianity but also in other faiths throughout the world, most notably Islam. The landmark event, of course, took place on September 11, 2001, which could be taken to be the first exclamation point in the story of the world's reinfection with an intense, recalcitrant strain of the faith virus after a brief remission toward enlightenment. Since the World Trade Center came down in flame, smoke, ash, and death, the religious story has been told more and more fervently, often in capslock, and we are left wondering how grim the situation is. A very hard to treat, profoundly virulent strain of the faith virus has taken root, and the medicine of "New Atheism" is bitter. "New Atheism" is unwavering rejection of religious authority, and a certain amount of steeled nerve to the cries of butt-hurt offense that flow from it.

Here, then, we see a difference between the God-is-dead from a half-century past and the "New Atheism" of today. "New Atheists" are profoundly less likely to ever be taken by the faith virus again because they understand both that God is irrelevant and that faith is a cognitive flaw loaded with pretense. "New Atheists" have been properly inoculated. The mental infrastructure that keeps the demon out is robust, solid, and clear, and it is held for clearly articulated reasons.

We need to take our medicine,...

...but not everyone wants to.

A growing movement of accommodationist atheists--faitheists, in the phrasing of Chris D. Stedman, author of a book called Faitheist--seems to prefer a kinder, gentler, less rebellious attitude from atheists toward faith. They, like NYU professor and social psychologist Jonathan Haidt, are "not anti-religion," and they beg "New Atheists" to see--and honor--what we have to learn from the faithful instead of making a hard-nosed stand against the faith virus. To see what I mean, consider this recent piece from Stedman (dubbed a "must read for ALL atheists" by one of Stedman's fans on Twitter)--or read his book--and this one from Haidt.

Stedman and Haidt, and their growing group of followers (particularly among the Progressive Left), prefer the shortsighted obvious. Taking antibiotics often makes one feel ill and is a hassle, and the feeling of being sick often subsides days or weeks before the full course of the prescription is run. Vaccines require getting an injection that is sometimes painful, can cause mild symptoms, and can also be a hassle--even without the ignorant unscientific fear that they cause autism--and who on earth really gets whooping cough? (N.B.: A friend of mine just did, thanks to some unvaccinated kids at the playground where his (vaccinated) kids play.) Standing up in a hard-line fashion to religious authority and privilege hurts people's feelings and is mentally and emotionally draining. Stedman's position asks, can't we all just get along instead?

Sure, we can. And, if we prefer to, we can partially inoculate against the faith problem--one that possesses a potent opportunity to lead to calamity (take evangelical Christianity's near-universal, religiously "justified" denial of climate change, of all nonreligious things, for example). What we cannot do, though, is delude ourselves into believing that it will serve to solve the unique problem that religious faith presents in our world. And we should recognize that if faith survives its brush with "New Atheism," it will be stronger than ever for the encounter and at least as much of a problem.

"New Atheism" has shaken the world of faith like a cultural antibiotic, and in its wake it provides a potent vaccine. The disease isn't going easily, though, as is sometimes the case (the reader is encouraged to investigate the full treatment experience for hepatitis or tuberculosis, not to mention chemotherapy for cancer). There is a desire for harmony and peace that calls some to a mission to abandon the "New Atheist" treatment protocol in favor of the kind of deference that feeds religion forever. I understand that and wish for it as well, and I am quite sure it is wrong.

Ameliorative measures

There are ways to polish the rough edges of "New Atheism" to make the medicine a little easier to swallow. Up until now, it has admittedly at times been quite a blunt instrument, but as it matures, it is being refined. One of the best and most obvious suggestions has been brought to light by hard-liner Peter Boghossian, author of A Manual for Creating Atheists. Boghossian calls above all for authenticity and honesty. Be real. Be honest. Be willing to change your mind when the reasons are good. But do not mistake authenticity and honesty for deference to delusion. These achieve all of the goals of the anti-"New Atheist" crowd without their main failures. Authenticity of this kind does not condescend to the faithful by assuming that they need their faith to get through the day, and it does not give unwarranted deference to religious privilege. Authenticity and honesty are not merely palliative measures but are clear refinements of the "New Atheist" medicine, a course of treatment that we need to see through. Failure will preserve the disease in a state more resistant to future treatment.

Here's how to do it. Be ruthlessly honest with yourself, even if you do not have the nerve to do it with other people. By applying relentless honesty to one's own positions, neither faith nor deference to it is possible, and the anti-"New Atheism" confusion is revealed as a way to walk away from the cure before it is effected while nourishing the disease. People deserve dignity, and this obviously includes religious people, but ideas do not. Honesty and authenticity, even if unappreciated, are the best ways to dignify a person regardless of the quality of their ideas. It does nothing like offering dignity to a person to coddle their bad ideas, and so we shouldn't. And there is "New Atheism" in a sentence. So ask yourself, what are people who are against "New Atheism" really against?

Sunday, February 23, 2014

A hard truth from Arizona SB1062

Arizona's governor, Jan Brewer, has to decide whether she should sign SB1062, a controversial bill that would allow Arizona business owners to discriminate on the basis of any religiously held conviction. Many of the law's opponents note that one particularly transparent reason for the bill is the widespread revulsion among conservative Christians toward homosexuality and the looming reality of marriage for same-sex couples. Proponents argue that the bill is about protecting religious freedom--in the usual sense, of course, where religious freedom and religious privilege get conflated.

Paresh Dave did a write-up about SB1062 for the LA Times, and in it he covered some of the arguments of those who support the bill. One of these, forwarded by attorney Jon LaRue, who helped write the legislation, almost appears to be a good argument for the law. LaRue said,
There is a law that bans discrimination at public accommodations based on religion in Arizona. Let’s pretend that I’m a bakery and that in my town here in Arizona, Westboro Baptist Church comes to picket a funeral of a soldier, and they tell me to bake a cake. They want it to say, "God hates ..." and that terrible word they use.

It would offend my dignity. I don’t want to give voice to that horrible message. Right now, they could sue me for discriminating based on their religious beliefs. If the Arizona courts went the way of the New Mexico courts, I would lose and if they targeted me, I could lose my business because of the damages I’d have to pay out. I would never be able to assert my Religious Freedom Restoration Act defense because it’s available only if the government is prosecuting me.
I'll admit that when I first read this statement, I was taken by it. And then I remembered the First Amendment, a two-edged blade that many conservative Christians absolutely love when it cuts in their favor, though they seem to hate it the rest of the time. LaRue's argument appeals to our dignity and our better moral sense, but that doesn't matter. Our dignity and our better moral sense aren't the law (and if they were, conservative Christianity would likely rue the result). That's freedom, and to quote LaRue again, "Freedom is too important to leave to chance."

I do not like the idea of being required by law to write "God hates fags!" or any other obscene and hateful thing on a cake ordered from my bakery (if I owned one). And that's too bad for me, and the worse my luck that they chose my establishment. As much as I am repulsed and outraged by the message of the Westboro Baptist Church, if I found myself in that situation, I would face a choice. LaRue pointed it out for us: discriminate or do not discriminate. If the price of discrimination is losing one's business, however apparently noble the reason, then that's the price of religious freedom. It does not come for free.

Here's the thing, though. Let's take a moment to explore the singular reason that such a disagreeable thing as this is the price of religious freedom. If we refuse to lie to ourselves, it is because religion is uniquely poised to be utterly vile for indefensible reasons, a trait we have decided it is paramount to defend above any insult to our dignity.

Jon LaRue is appalled by the notion that he should have to promote the hateful message of the Westboro Baptist Church, which it is their religious freedom to hold and promote. The only reason he doesn't see it this way is that LaRue gives special priority to his own beliefs, which he holds on the same justifications given by the Westboro Baptists: a particular reading of the Bible and faith. His bill, obviously, isn't about religious freedom. It's about using religion to protect the privilege of a group sufficiently mainline and sufficiently conservative to act like their beliefs are really the only ones that matter.

LaRue must envision an America where religion means only those beliefs he agrees with or is willing to tolerate--whatever intolerance those rain upon the people that his religion vilifies as sinners, degenerates, and infidels. The only reason he has that opportunity is because every American citizen is precluded from acting upon the same idea. I find Christianity disturbing and repugnant, for example, and LaRue's only protection from my opinion is the same one forcing him to pipe "God hates fags!" out of a confectioner's bag at the risk of his livelihood if confronted with the request from a sufficiently sincere believer in the Westboro mold.

And let's be honest. Jon LaRue, and those like him, dislike the message of the Westboro Baptist Church because they believe that the Westboro Baptists pretend to know something they don't know about the Creator of the Universe, namely that He hates fags. But Jon LaRue, and those like him, along with all other religious believers, pretend to know but don't know everything they claim to know about God. For many of the people who desire this bill to pass into law, of course, this pretense includes the proposition that "God hates fags"--but just enough to protect their "right" to act upon it by denying wedding arrangements to same-sex couples, even if not quite so much as to put it in all caps on a hideous placard protesting a military funeral.

On the Pleasure of Changing My Mind, perhaps too soon

I am a big fan of Sam Harris. The primary reason is that he routinely impresses the hell out of me with his keen insights and carefully parsed out reasoning. Even in the instances when I disagree with his opinions, which turn out to be surprisingly rare, I'm consistently impressed with his ability to lay out arguments in a beautifully reasoned way that reflects a sharp understanding of the underlying facts. And this is why I was surprised by his most recent blog post, "The Pleasure of Changing My Mind." I'm left feeling distinctly like Fangorn, bemused about a folk more hasty than his own.

Mostly, I have to be clear about what this essay is not. Thus, the first thing I want to say about my surprise is that I think Harris has drawn the right conclusion about his willingness to change his mind and for very good reasons. He also presented it extremely well, in one of his better rebuttals to a detractor (which is saying something). I just think he has chosen a strange example to do it--though also for a very good reason, if I might venture a guess at it.

I should add that there is back story involving primarily Harris and NYU professor Jonathan Haidt, whose allegations form the subject of Harris's blog post, and it is my assumption that readers are familiar with it. If not, it may be worth correcting that for context and, perhaps, for some appreciation of the ironic. There is also back story with characters related to Harris's example, though he covers those sufficiently in his piece.

In his blog post, Harris indicates that he can have his mind changed even by people who are his "enemies," making a mockery of Jonathan Haidt's recent charges against him. "Enter Jeremy Scahill," Harris writes, the emphasis his, going on to elaborate,
This is just to say that, while I don’t usually think of myself as having enemies, if I were going to pick someone to prove me wrong on an important topic, it probably wouldn’t be Jeremy Scahill. I am, in Haidt’s terms, highly motivated to reason in a “lawyerly” way so as not to give him the pleasure of changing my mind. But change it he has.
The manner in which Scahill has changed Harris's mind is via Scahill's recent documentary, Dirty Wars. To quote Harris about it directly,
However, last night I watched Scahill’s Oscar-nominated documentary Dirty Wars—twice. The film isn’t perfect. Despite the gravity of its subject matter, there is something slight about it, and its narrow focus on Scahill seems strangely self-regarding. At moments, I was left wondering whether important facts were being left out. But my primary experience in watching this film was of having my settled views about U.S. foreign policy suddenly and uncomfortably shifted. As a result, I no longer think about the prospects of our fighting an ongoing war on terror in quite the same way.
So now a second thing I want to say: I don't know if Scahill is right, and for the present discussion I don't care. (More generally, I do care rather seriously.) My surprise with Harris came about because of the choice of this example, though now I can state why I think he chose it: it's (almost) perfect. It's an example of a recent mind-changing incident in Harris's life that was brought about by someone he has reasons to be motivated to distrust. A better example to refute Haidt's point against Harris would have had to be made up, and Harris applied this one admirably for that purpose.

I've got one more aside before I can get to my point, but it transitions there. After watching a documentary about nearly anything, I typically feel like Harris reported feeling after watching Dirty Wars. "The film isn't perfect." "There's something slight about it." It is "strangely self-regarding." "I was left wondering whether important facts were being left out." I feel this way so strongly after watching almost any documentary that I almost never watch them anymore.

I've referred to this feeling privately as the "documentary effect," and it is comparable to the result of watching a polished, less-annoying infomercial. Documentaries on grave subjects, like infomercials for junk one does not need, are expensive efforts made specifically to be strongly persuasive. I don't trust documentaries about highly motivated topics, not least because I know they have tremendous power to change my mind in a way I will later have to reverse. That it was a documentary on a political topic fails to spark my surprise, and it's not that I disagree with Scahill (I do not know if I do or not) or Harris (I do not disagree with him on the topic he was writing about, Jonathan Haidt's analysis, and am in a similar quandary regarding US foreign policy).

So now the point. My surprise with Harris stems from the fact that he reported having his mind changed--perhaps rightly--by an event that is so fresh. Leaving aside the point that the film was all but guaranteed to change his mind, as he knows, this is still something he is very likely to understand better than most in terms of the operation of the human mind.

When my mind changes, particularly when I had pretty strong reasons for believing what I believed, I almost always experience a subsequent oscillation in my attitudes on the topic. My thoughts churn between the previous position and the new one as I work to ease the cognitive tension between them. Sometimes my mind changes, and I accept the new view. Sometimes, I realize some, but not all, details of my prior beliefs are in error, and I end up somewhere between the two positions. At other times, I'm able to work out what lies of omission and spin played a role in creating the "documentary effect," and my mind is little changed in the end. This process usually takes days at best, sometimes weeks or months while the new ideas roll around with the old and with those I find in follow-up research, so I find it a matter of curiosity that he chose such a fresh example. (Note: Perhaps I'm alone in this, or it doesn't apply to Harris, but out of necessity, I am assuming it's a relatively general phenomenon.)

I will not speculate upon whether Harris's choice of this example was "motivated" by his desire to refute Haidt's ridiculous analysis. Besides being conjectural, it would be immaterial to do so. For one thing, Haidt's analysis was handily revealed to be ridiculous even without the Scahill example. For another, more important one, there is very little doubt Harris has had his mind changed many times. It is tremendously unlikely that anyone who gets so much right could do so by any other means than getting a great deal wrong over his life and being both honest and fastidious enough to work to correct those mistakes. It is equally clear that he is wide open to modifying his views again in the future when the reasons are good. Indeed, I expect it will be interesting to watch his thoughts on US foreign policy develop now.

Rather like the experience of a tannic red wine, there is pleasure in changing one's mind once the route to its appreciation is understood. In this case, a necessary component is holding high esteem for being less wrong. Harris clearly has acquired this taste, whatever Haidt and others want to say about him. To speak of wine, though, it feels like Harris has uncorked a bottle of Dirty Wars, vintage 2013, and served it hastily and straight from the bottle. It will be interesting to see how his mind continues to change now that this young cab has a chance to breathe.

Thursday, February 20, 2014


Is atheism a lack of belief or a belief of its own sort? Do we all have beliefs about the existence of some being, entity, or force called “God”? Must a denial of belief in God's existence entail a belief that God does not exist?

These questions seem to be rising to the surface of many debates, and one answer that is gaining some traction is the affirmative. It is being identified as the (rationally justified) positive atheism position, and among its other attitudes, it holds that so-called “agnostic atheism” is both a misnomer and an untenable position. Belief, in some degree, must tip to one side or the other.

While a clarification like what is being proposed by the positive atheists is needed and interesting in its own right, it is not clear that they have closed the door on the matter. There is at least one position that stands apart from the three usual ones, theism, atheism, and agnosticism, and it is called ignosticism. Where they say, “yes,” “no,” and “not sure,” respectively, ignosticism says, “can't say.”


Ignosticism says that the statement “God exists” is too poorly defined to contain meaning. The reason ignostics take this stance is that the term “God” is itself too poorly defined to make sense of, unless it is defined so abstractly that it is instead the term “exists” that is no longer clear enough.

Ignosticism is distinct from theism, atheism, and agnosticism. The central point of difference is that these three positions all interpret the phrase “God exists” as being an inherently meaningful one, and ignosticism does not. Theism asserts that “God exists” is meaningful and true; atheism asserts that it is meaningful and false; and agnosticism asserts that it is meaningful with an unknown truth value. Of course, agnosticism comes in a wide variety of manifestations that tip one way or the other, and Richard Dawkins's Spectrum of Theistic Possibility runs the gamut. Note that Dawkins's Spectrum assumes that “God exists” is a meaningful statement. (Dawkins, and many who hold his arguments in esteem, hold that they are a 6 or a 6.9 on his 7-point scale, indicating strong but not absolute atheism.)

By contrast, ignosticism takes the tack that the statement “God exists” is inherently meaningless or otherwise insufficiently clear to be assigned a truth value at all, even a fuzzy agnostic one. In contrast to the total agnostic point on the Spectrum of Theistic Possibility, given a score of 4, the ignostic position does not appear on the Spectrum at all. For the ignostic, every value on the scale is wrong because the question for which those values give some answer, “how sure are you that God exists?” is invalid. Particularly, then, ignostics do not necessarily hold a belief about the truth value of the statement “God exists.”

Ignosticism and Ewoks

Ignosticism may be the position that we already hold regarding many fictional creations. Take the Ewoks of George Lucas's Star Wars, for example. The films and books in that fictional series offer a great deal of depiction of Ewoks, enough that they are a believable construct in the context of the Star Wars universe. In fact, they are a believable construct in the real universe in that it seems possible that we could one day venture throughout the galaxy, and upon finding a forest-covered moon populated with tribal teddiursoid creatures, we could conclude, with much surprise, that we had, indeed, discovered Ewoks. But would we have really?

The answer seems to be no. However much the extraterrestrial species we met resembled Lucas's Ewoks, it would not really be the creature that Lucas was describing. Whether the Ewoks were invented whole-cloth for the Star Wars series (notably, Episode VI: Return of the Jedi) or imported from some other set of ideas, it is certain that any species, however Ewok-like, that we might find anywhere in the wider galaxy was not the inspiration for those in Star Wars. We may call such persons Ewoks, should we find them, but even if identical in every discernible detail to the Lucasian descriptions, they are distinct from Ewoks.

It seems that the preceding makes a case for holding a position analogous to atheism regarding the existence of Ewoks because we know them to be a fictional creation. I don't think this is valid, though, because if we did find such beings, particularly in the correct circumstances, we would almost certainly agree that we had found Ewoks. In fact, the argument that we would not have actually found them, that being impossible even in principle, would probably ring quite hollow.

Remote as the possibility probably is, it is possible that an Ewok-like species of extraterrestrials exists, with as much minutiae as desired. Finding them would be more than enough to conclude that we had found “Ewoks.” In that regard, the proper position about the question of the existence of Ewoks is the one that parallels ignosticism: what we mean by “Ewok” is not sufficiently clear to make a meaningful statement about their existence. Admittedly, Ewoks feel disanalogous to “God,” but the reason is that the situation is worse when it comes to “God.”

At least Ewoks make sense

The ignostic problem is actually multiplied for the notion that “God exists.” While Ewoks are conceivable as a real type of extraterrestrial life, the analogous statement about “God” appears not to be true.

If we found immensely capable “god-like” extraterrestrials, or even something like the gods described in ancient polytheistic mythology, say Zeus or his relations, it is not clear that these would be categorized as being “God.” In fact, it seems downright unlikely, given the proclivity among theistic believers to defend their religious beliefs as they are.

And they would have a case because the notion of a solitary Supreme Being that is unrivaled by any other is not parallel to notions like Zeus and Thor. The concept called “God” belongs in a different category altogether, and that category possesses no salient description. Indeed, it seems to be a haphazard combination of several abstract notions, some of which have more meaning than others (a Creator, for instance, seems more meaningful as an entity than do the objective sources of all purpose, moral values, and the capacity to think).

Lacking any salient description for “God,” when faced with the question of its existence, we are left in a position in which we are unable to make any salient assessment. This position is called ignosticism, and I believe it is more commonly held amongst self-identifying atheists than is atheism (the belief that no gods exist, including “God”).

Sam Harris, et al., and ignosticism

Though I will not attempt to speak for him, I suspect, in fact, that Sam Harris is ignostic more than he is atheistic (though I leave open the caveat that he, like many of us, is probably best described by the term proposed in this piece, “ignatheist,” if we must have a term given to us). Consider the opening portion of his December 2005 “Atheist Manifesto,” published originally on TruthDig.
Somewhere in the world a man has abducted a little girl. Soon he will rape, torture and kill her. If an atrocity of this kind is not occurring at precisely this moment, it will happen in a few hours, or days at most. Such is the confidence we can draw from the statistical laws that govern the lives of 6 billion human beings. The same statistics also suggest that this girl's parents believe at this very moment that an all-powerful and all-loving God is watching over them and their family. Are they right to believe this? Is it good that they believe this?


The entirety of atheism is contained in this response. Atheism is not a philosophy; it is not even a view of the world; it is simply a refusal to deny the obvious.  Unfortunately, we live in a world in which the obvious is overlooked as a matter of principle. The obvious must be observed and re-observed and argued for. This is a thankless job. It carries with it an aura of petulance and insensitivity. It is, moreover, a job that the atheist does not want.

It is worth noting that no one ever needs to identify himself as a non-astrologer or a non-alchemist. Consequently, we do not have words for people who deny the validity of these pseudo-disciplines. Likewise, atheism is a term that should not even exist. Atheism is nothing more than the noises reasonable people make when in the presence of religious dogma.

Though the term ignostic predates Harris's piece by 45 years, his writing here and in other places seems to fit the ignostic perspective better than it does the atheist one (since we're rather pedantically splitting hairs). Indeed, his piece would perhaps be better titled “The Ignostic's Manifesto,” if it weren't for two facts. First, no one really knows or cares what ignostic means, and second, it's still not quite right. Harris's piece reads most accurately as a case for what I am calling ignatheism, a term suggested by a generous and witty friend. Further, I would contend that most of the other self-identifying atheists who are both intellectually honest and who haven't adopted positive atheism are also ignatheists. This includes Richard Dawkins, Daniel Dennett, Peter Boghossian, and any other atheist who says, and accepts, things like, “there is (very) probably no God” and "there isn't enough evidence to believe in God, so I don't."


The fatal issue with ignosticism is that it requires an untenable stretch of intellectual honesty to accept that the phrase “God exists” is so meaningless that we must render it beyond assessment. Most people, though they could be wrong about this matter, strongly feel that this phrase means something quite specific, even if the specifics still want for clarity. I would argue that when any person uses the word “God,” except to say that it is meaningless, she has a sense of what she means by that term. I do not mean to imply that what anyone thinks she means by it matches what it really might mean.

To make matters worse, when all but the most strict ignatheists so much as hear the word “God,” they also have at least one sense of what is meant by that word. Often, the hearer has two or more, what he thinks the word means and what he thinks the speaker might have meant by it, plus potentially many other possibilities drawn from broader cultural interpretations. In other words—and this is one of the most significant problems humanity has ever faced and still faces—the term “God” is unclear to the point of meaninglessness in general but is laden with meaning in specific.

Ignatheism in both contexts

Enter ignatheism, the intellectual turf I would argue that most self-identifying atheists actually occupy. The ignatheist is simultaneously part ignostic and part atheistic. She identifies as an atheist and feels like she shouldn't have to. She denies that it is reasonable or good for theistic believers to hold the beliefs they do—and may be quite anti-religious and anti-theistic as a result—and she views her lack of belief as a lack of belief. She lacks belief in God, but she may not believe that there is no God.

That second, more specific position feels like the logical consequence of the first, but it is not. Believing there is no God requires a sense that there is meaning in the term “God” in general, and that the ignatheist denies. An ignatheist is what Harris described: the "atheist" for whom his manifesto stands. Ignatheists are simply refusing to deny the obvious.

When taken in generality, the term “God” is insufficiently meaningful to judge, and so the ignatheist doesn't concern himself with it. When examined in specific, though, the term “God” is always given a meaning for which either disbelief (atheism) or skeptical agnosticism is the proper assessment. The ignatheist is skeptically agnostic about the deistic absent creator, atheistic toward any of the Gods of the world's religions, and believes that the general matter of God's existence is owed no consideration because it doesn't mean anything.

Ignatheism, then, is the position that holds that the matter of God's existence, in general, cannot be evaluated, and, in specific, nothing that has been proposed to go by the name “God” exists as a part of reality. A stronger variant on ignatheism would contend that while “God exists” is too meaningless to work with, it is very unlikely (though not impossible) that anything exists that would merit applying the term “God.” Ignatheists, then, live their lives as though no God exists without committing to a general belief that says so. Belief that God does or doesn't exist is immaterial because it is nonsense. To quote Harris again, ignatheism really is “nothing more than the noises reasonable people make when in the presence of religious dogma.”


I don't know that the world needs more labels for people or even for intellectual turf, especially when those labels are silly-sounding hybrids of technical words whose meanings aren't completely agreed upon. I do think, though, that for the purposes of making matters a bit more clear, at least having considered this term may be helpful.

Ignatheism is effectively atheism under the proviso that we realize that even it concedes more than it has to on claims about God's existence. Atheism takes the matter seriously, and it is not at all clear that the subject in question deserves it.

Tuesday, February 18, 2014

On miraculous healing

I've heard a lot recently about miraculous healings serving as evidence for God and Christianity. Obviously, I have major reservations about these claims, and I could go a lot of ways with what I'm about to say. I'll give some of the points a quick superficial treatment, and then I'll get to a little mathematics to reveal what looks like a paradoxical result that works against these weird occurrences as proof of miracles, Christianity, or God.

Odds and ends
  1. Every religion makes these claims, as do some supernaturalists who do not believe in God, so while they could conceivably be taken as evidence for the supernatural, it's less likely that they could be considered evidence for God and even less likely that they constitute evidence for any particular religion. The Christian has to be able to explain Islamic, Mormon, Hindu, Shamanic, and all sorts of other miraculous healing claims--seeing as those would not be in accordance with the Christian faith--to be able to claim that such claims constitute evidence for Christianity or even for God generally.
  2. If the (epistemic) probability of the supernatural is effectively zero or otherwise utterly negligible, then even if such events can be taken as evidence for the supernatural, God, or some religion, they don't get us very far at all. In fact, they get us effectively nowhere.
  3. As was pointed out to a believer in such miracles on Twitter, the problem with all "evidence" of this kind is that it is anecdotal, which means that we are completely ignorant of which details are left out of the anecdote and thus cannot use them to draw many, if any, conclusions.
  4. As pointed out on this blog by a commenter the other day, the Texas sharpshooter fallacy accounts for much of the observation of such "evidence." In short, we have lots of data of people who heal via whatever means, some of them had religious circumstances around them, and then the believer circles all of those examples together and "paints a bulls-eye" around them (it's a form of cherry picking).
That's all the addressing I'm doing of those points, at least for now. On to math.

Nuts and bolts

Really, this is a mathematical analysis of the fourth point above, the Texas sharpshooter fallacy. I will be using a very simple Bayesian-style analysis to show numerically just how fallacious the Texas sharpshooter fallacy can be.

Consider the following circumstances.
  1. Let's suppose, generously, that if a religiously influenced medical miracle occurs, we have a 100% chance of healing, which is to say that when religious circumstances effect a miracle, it has a 100% cure rate of whatever disease or problem is presented.
  2. Let's also suppose, in many cases generously to the believer again, that there's a baseline 5% natural recovery rate, be that spontaneous remission or any other modicum of healing that operates on purely natural causes. (Even many aggressive cancers have better survival rates than this.)
  3. Now, since miracles are rare occurrences, we will also assume, yet again quite generously to the believer, that religious interventions effect a miracle in only one out of every one thousand (0.1%) of cases in which they are called for. This may feel ungenerous, but if miracle healings were happening more frequently than this, this discussion wouldn't even be happening. Miracles are rare. (Note: at 1/1000 chances, if one opportunity arose per day on most days, the average person should expect to run into one "miracle" just by chance roughly every two years--in the context of a church of hundreds of people, each of whom knows hundreds more, such events should occur routinely just by chance, but I digress.)
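That parenthetical note checks out: at a 1-in-1,000 chance with one opportunity per day, the median wait until the first chance "miracle" is just under two years. A quick sketch (my own arithmetic, not part of the original argument):

```python
import math

p = 0.001  # 1-in-1,000 chance per opportunity, one opportunity most days

# median wait until the first "miracle": the n with (1 - p)^n = 0.5
median_days = math.log(0.5) / math.log(1 - p)
print(round(median_days))  # 693 -- just under two years of daily chances
```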
Now, suppose that given these generous assumptions, we witness what appears to be a genuine medical miracle. There was some medical problem; there was a sincere religious intervention on the patient; there was an apparently spontaneous healing; and the doctors were left flummoxed as to how it happened. To all appearances, a miracle occurred, but did it?

We can apply Bayes's theorem to get an estimate of how confident we can be in concluding a miracle occurred in these circumstances. The theorem looks like this (mostly just to show you that I'm not making things up).

                                                 P(H|M)P(M)
P(Miracle|Healing) =  -------------------------------------------------------------------
                                    P(H|M)P(M) + P(H|no M)P(no M)

I hope it is clear that H is shorthand for "Healing" and M is for "Miracle," and "no M" means "no miracle occurred."

To further disambiguate, P means "probability," specifically the probability of the event symbolically described inside the parentheses immediately following the P. The vertical bar is read "given" and means that the event represented by the stuff before the bar is conditioned upon the occurrence of what comes after the bar. Specifically, P(H|M) is read "the probability of a healing given a miracle," and it means the probability that healing occurred if we are assuming that a miracle did, in fact, occur.

In words, what we're calculating is the probability--the confidence with which we can assert--that a miracle occurred given that we saw a healing. That's the P(Miracle|Healing) on the left side of the equals sign. The right-hand side of the equals sign evaluates this possibility via Bayes's theorem.

In the light of our assumptions, we know that P(Healing|Miracle), the probability that we have healing given that a miracle occurred, is 100%, or 1.00. The probability that a miracle occurred at all, P(M), is 0.1%, or 0.001. Also, P(Healing|no miracle), the probability that healing occurred without a miracle and just by matters of usual circumstance (to which the religious intervention is irrelevant), is 5%, or 0.05. Finally, by symmetry, the probability that no miracle occurred, P(no M), is 99.9%, or 0.999.

I will not drag us through the calculation but will simply report the result.

P(Miracle|Healing) = 0.01963

In plain language, what this number means is that given the assumption that miracles are always effective at healing, but that they are 1-in-1000 rare events, with spontaneous natural recovery happening at a rate of 5%, if we witness a healing that appears miraculous, there's only a 1.96% chance that we're right to believe it is actually a miracle. In other words, on these assumptions, if a churchgoer believes he has witnessed a miracle, even under these generous assumptions, we can conclude that there's more than a 98% chance that he is simply wrong about it.
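The arithmetic above can be checked in a few lines of Python. This is a sketch under the stated assumptions; the variable names are mine:

```python
# Bayes's theorem under the stated (generous) assumptions
p_h_given_m = 1.00     # P(Healing | Miracle): miracles always heal
p_m = 0.001            # P(Miracle): 1 in 1,000 interventions work
p_h_given_no_m = 0.05  # P(Healing | no Miracle): 5% natural recovery
p_no_m = 1 - p_m       # P(no Miracle) = 0.999

posterior = (p_h_given_m * p_m) / (
    p_h_given_m * p_m + p_h_given_no_m * p_no_m
)
print(round(posterior, 5))  # 0.01963
```

The posterior agrees with the 1.96% figure in the text: even granting every assumption, a witnessed healing is a genuine miracle less than 2% of the time.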

To put this number in context, if we wanted to have a 50% confidence in concluding that miracles of this kind do occur, we would have to have at least 43 independent reports come in. For 95% confidence, we'd need 186 independent reports, and for 99.99% confidence, 572 of them. These seem like low bars to clear, but I have never heard of a church clearing them. Indeed, they only seem like low bars to clear because so many people are so susceptible to this 98% likely error (on these very generous assumptions).

Tightening the bolts

We've done some studies on medical miracles, and we've detected no statistically significant difference between religious interventions and a lack thereof. Therefore, it seems too generous to offer a 1/1000 chance that religiously induced medical miracles occur, if we haven't ruled them out completely.

I don't know how likely medical miracles might be, which is to say that I don't know what a fair estimate for them is. I think it's a good bet that one-in-a-thousand is too likely, given that the studies we've done haven't detected them, so let's tighten the bolts in light of this fact. For now, let's assume that medical miracles are one-in-a-million shots.

On the assumption that miracles only occur in one out of every one million cases, with the rest of our assumptions remaining as generous as before, the confidence that we can put in an apparent medical miracle--an unexplained healing that occurred in or following a religious intervention--is a staggeringly low 0.002%, one chance in fifty thousand.

Pause for context here. If miracles are completely effective, but one-in-a-million chance rare events, and we have a 5% natural recovery rate, and we witness an apparent miracle, we'd be wrong to conclude that a miracle occurred 49,999 times out of 50,000, 99.998% of the time.
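The same calculation with the tightened miracle rate can be sketched as follows (again my own check, not the author's code):

```python
# Bayes's theorem with the one-in-a-million miracle rate
p_m = 1e-6              # miracles now assumed one-in-a-million events
p_h_given_m = 1.0       # still assuming miracles always heal
p_h_given_no_m = 0.05   # 5% natural recovery rate, as before

posterior = (p_h_given_m * p_m) / (
    p_h_given_m * p_m + p_h_given_no_m * (1 - p_m)
)
print(f"{posterior:.6f}")  # 0.000020 -- about 1 chance in 50,000
```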

In this case, we would need 34,657 independent cases, all of which have been admitted by a thorough medical analysis to be unexplainable by any known natural means, just to have a coin-toss's chance, 50%, of being right in saying that medical miracles happen. To be 95% sure, we'd need 149,785 confirmed cases. This kind of thing would stand out.

Tightening the context

Just reflect for a minute on our assumptions now that we've seen these numbers. One of them is that we were paying attention only to medical circumstances with a 5% survival/remission/healing rate by natural means. To be sure, there are many illnesses with such grim circumstances, but heart attacks have a 4.7% death rate, or a 95.3% survival rate (according to the University of Michigan). 

Obviously, the higher the chances of survival, remission, or healing, the higher the likelihood that believers would attribute false positives (for miraculous intervention) to their beliefs. Indeed, if miracles are only one-in-a-million events, surviving a heart attack would only be miraculous in one out of 953,000 cases. In reference to this actual data, we'd need more than 660,000 independent and confirmed (otherwise unexplained) cases for a 50% chance of miraculous intervention and more than 2.5 million for 95% certainty.
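The heart-attack figures follow from the same machinery, substituting the 95.3% survival rate for the 5% recovery rate. A rough check (my own sketch; the report counts come from asking for the smallest n with at least the target chance of one genuine miracle among n cases):

```python
import math

p_m = 1e-6           # one-in-a-million miracle rate (assumed, as above)
p_survive = 0.953    # heart-attack survival rate cited in the text

# posterior probability that a given survival was miraculous
posterior = p_m / (p_m + p_survive * (1 - p_m))
print(round(1 / posterior))  # 953000 -- miraculous in ~1 of 953,000 survivals

# smallest n with P(at least one genuine miracle among n cases) >= target
n50 = math.ceil(math.log(0.5) / math.log(1 - posterior))
n95 = math.ceil(math.log(0.05) / math.log(1 - posterior))
print(n50, n95)  # both land near the counts cited in the text
```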

In this light, we have no good reasons whatsoever to accept from anyone any anecdotal claims that miraculous religious healings occurred. Even being ridiculously generous to the possibility of a miraculous healing claim results in more than a 98% chance that the claimant is wrong, and by being a bit more honest, almost a 99.9999% chance.

Saturday, February 15, 2014


Even if I come up with some that I think are clever from time to time (supertruth and talisman meme, e.g.), I'm not so big a fan of coining new terms. Still, I feel I should suggest another one, philosophism.

This term is not entirely new, but the meaning I'm offering is new-ish (I doubt I am the first to use it this way, but it isn't common). I do so largely in response to the hot-button term "scientism." Scientism has been getting a lot of play, most notably in the popular press following the 2010 publication of Sam Harris's bestselling The Moral Landscape: How Science Can Determine Human Values.

Much of the blowback to Harris's book reveals varying degrees of command of the term "scientism." Most of it, of course, has arisen from religious apologists. Still, a surprising amount (mostly of more careful use) has arisen within two communities of self-identifying atheists: the academic left and philosophers, particularly moral philosophers. Since leftism already has a name, I'm focusing here on the character of some of the objections made by some philosophers.

Philosophism is kind of already a word

In doing a cursory search for the term "philosophism," I've found that most sources define philosophism as "spurious philosophy; the love or practice of sophistry," tracing the word to Webster's 1913 dictionary.

There's only a little bit wrong with this definition from my perspective. One thing is that sophistry already covers the topic, and another is that it doesn't possess resonance with the intended meanings, both denotative and connotative, of scientism, which does not mean practicing pseudoscience (although the humorous term "scientifical" has been suggested for that meaning). Indeed, pseudoscientists are among the most vociferous in denouncing scientism, which they consider a major problem for the obvious reason.

I feel a parallel usage is also apropos in some contexts and regretfully may be needed, and so I'm suggesting one.

Philosophism, noun

Specifically, I suggest that the term philosophism can be taken to mean something that parallels the meaning of the term scientism, as science certainly is not the only field of thought that can overstep its bounds.
Philosophism: 2. A tendency to overassert the relevance of philosophy in other fields (that it necessarily underwrites), often including a tendency to overassert the importance for professionals in those other fields, or the public, to engage more seriously with philosophy or show it greater respect (than they do and/or than it deserves).
(3. Projection of intellectual superiority by philosophers or regarding philosophy, especially when intentional.)
There are three topics contained in these definitions that need to be addressed, and after something that shouldn't be necessary but is, I will spend the rest of this essay developing them. First, philosophy can overassert its relevance. Second, not everyone working in various fields needs to be an expert in philosophy, nor should they be. And third, intellectual superiority happens among philosophers too, and it isn't helping anyone.

The disclaimer that won't prevent the knee-jerk but should

This piece is not an attack on philosophy, nor does it encourage attacking philosophy. It is also not a promotion of scientism or of science. Particularly, it is not a tu quoque fallacious defense of scientism. Further, it is not an accusation of all philosophers. Importantly, it does recognize the importance of philosophy in practice and in underwriting other fields (including science). It does not claim that the field of philosophy does not deserve respect by professionals in other fields or the public. Lastly, it doesn't intend to fuel a turf war between philosophers and scientists, who should be working together and not bemoaning when members of either field wade into what may be the other's ponds in ways that prove substantive.

The goal of this piece is to point out that philosophers can overassert just as easily and at least as obnoxiously as can anyone else. Doing so is every bit as anti-intellectual, since this word gets dragged in, as scientism is considered to be.

(Please, of course, feel free to leave comments denouncing me as a scientismist or anti-philosophy-ist, or whatever else you want to be wrong about, but you will have missed the point--not that this has stopped you before.)

1. Philosophy can overassert

I cannot speak in absolutes that cover all people, nor do I care to. Likewise, I do not care too much about this example or that, however notable, of people who overassert for science (or some other field). See the disclaimer. What I can speak to is the fact that a great many people, probably the very wide majority of them, who are relevant to this discussion already accept that philosophy underwrites nearly every intellectual field, science in particular.

Thanks for the reminder, but...

One of the primary ways that philosophy does overassert, in fact, is by unnecessarily pointing out that philosophy underwrites this field or that. The statement that science cannot verify itself--showing strict straw scientism to be self-refuting--is a great example of this, herring-flavored canard though it is.

Of course, pointing out this fact is not really philosophistic on its own, but it is easy to stumble into the problem by going on to assert that there is an onus upon the non-philosophical expert to be extraordinarily philosophically cautious or even philosophically savvy because of it. We will return to this point shortly, as it is the second main one to develop.

It smells like a turf war

Despite my desire to avoid getting specific, a good deal of philosophistic overassertion arises in fields related to ethics and mind. These fields, of course, are the traditional and lingering province of philosophy--as was naturalism only a few hundred years ago. Of course, as we get to know more and more about the brain and its function, the dominion of science to bear upon these topics will become increasingly relevant. This doesn't eliminate the role of philosophy to serve as intellectual infrastructure for these budding sciences, but it will bind philosophers and their views in these fields to that science.

Philosophism in this regard appears to contend that a future science of morality and a future science of mind are impossible, or at least that they are utterly neutered without heavy intervention by the relevant philosophy. Further, instead of being seen in the role of underwriter to those potential scientific disciplines, philosophism maintains that philosophy is the primary (at times, only) legitimate way to probe these intriguing domains. (Science can merely act as an informer for them.) These philosophistic assertions come with a variety of justifications, many of which smack surprisingly of something very like dualism.

On dualism

Speaking of dualism, another more insidious form of philosophism arises, one that is tied tightly to the extant definition of the word. Some philosophically inclined people feel that it is very important to entertain theological (or other dualistic or supernatural) arguments with tremendous seriousness, provided that they are sufficiently philosophically slick. A certain enchantment with slick arguments seems to characterize this aspect of philosophistic overassertion.

(Of course, it is important to offer sound rebuttals to as much of this specious nonsense as those interested have time and passion for, but these kinds of arguments, in my opinion, deserve far less serious consideration than many individuals give them. Also, it's worth noting that theologians appear loath, if not completely resistant, to abandoning a refuted theological/philosophical religious argument, so rebutting them should be done for a broader audience.)

I'll take a moment to illustrate why we do not need to take sophistry of this sort seriously, however slick. The attitude that theology, dualism, and supernaturalism deserve is no more than acknowledging a very remote possibility. The attendant posterior plausibility for theology, dualism, and supernaturalism, which is very low, is enough to justify not taking these kinds of arguments with much seriousness, at least not on the popular level. (What professional philosophers want to wrestle with is their professional business until a large enough contingent of them agree upon something to garner popular attention.)

This section is long enough as it is, though, and I will leave it here by saying that philosophers can and do overassert the relevance and importance of philosophy. When they do so, they are committing an error that deserves to be branded philosophism, with all the negative connotation they would assign to the term scientism.

2. Other professionals don't need to engage in much philosophy

Perhaps the real issue here is simply one of acknowledgement, but I don't think so. It strikes me that philosophism often crops up in arguments that non-philosophical professionals, especially scientists, should take greater care with the philosophy related to their fields, even to the point of becoming savvy with it. I disagree.

Who has time?

First, an economic argument: the law of comparative advantage disagrees. It is possible that being more philosophically savvy would lead scientists, etc., to be better scientists, but the time demands of doing science (even at a modest level) are already substantial--even before acknowledging that many research scientists spend at least a fifth of their working year attempting to convince someone to give them enough money to still have a job next year. Philosophers have time to do good philosophy because they are philosophers; working scientists usually do not have that time, or the inclination that often comes with such leisure.

While it may be possible that some scientists, historians, etc., would do better to become philosophically savvy in the philosophy that underwrites their fields, it's easy to make the case that they would do better still to get better at the techniques directly related to doing their science (with the philosophy underwriting it left implicit). Philosophy is hard and takes a lot of time, particularly to do well, and scientists get the same twenty-four hours as everyone else to carve away at their increasingly specialized, technical, and challenging niches of human knowledge.

We have philosophers for a reason

Second, I don't think they should care. Philosophy is important to philosophers. Science is important to scientists. History is important to historians. Mathematics is important to mathematicians. Sometimes these fields cross each other's paths, perhaps more and more frequently as the research pushes forward and what to do with the frontiers becomes less clear. That does not imply, however, that a person in field X has to feel that field Y is all that important for them (possibly excepting certain fields, e.g. theology). It sometimes is, in particular situations, but not generally. (Some, but not all, mathematical probability theorists have, for example, a real need to understand the various (philosophical) interpretations of probability to do their work in certain circumstances.)

There is something of a double-standard here, and it is justified. Scientists have philosophy underwriting their field, to be sure, but most of them can do their day-to-day work without paying it much heed. (Compare this situation with their relationship with the field of mathematics.) Most scientists do the majority of their work very well without much philosophy--a vague nod to empiricism and utilitarianism is typically enough. Scientists need philosophy largely to clarify interpretations and some of the boundaries of their methodologies at the weird frontiers (quantum mechanical interpretations and string theory are a clear example of this). Beyond that, and for most day-to-day work in those fields, philosophy is kind of irrelevant--in the, "yeah, it's there implicitly" way. Philosophy, on the other hand, is bound by science now.

Philosophy bound

When I say philosophy is bound by science, what I mean is that while philosophers are more than welcome to do philosophy in ignorance of science, it will usually be bad philosophy--or will become so very quickly. Science is what tethers philosophy to reality more than anything else. Without science, philosophy can very quickly get a bit meta and end up effectively on acid somewhere in the stratosphere--to its major discredit. Good philosophy is bound by science now that we have it, and what's left over is the older definition of philosophism, something of a turf war within philosophy that science and other fields don't need to be dragged into.

(Mathematicians, incidentally, aren't really bound by reality and don't really care--if their axioms end up being nonphysical, I'd think most would be likely to shrug it off with a, "So what? Who ever said they were physical?" That honesty about living abstractly protects them from this criticism, not that they'd care anyway.)

Asserting that scientists need to be savvy about philosophy is only relevant when those scientists are working at the raw edges of the most forward-pushing, abstract fields in which we aren't yet sure of what is going on. Our average solar astronomer, for example, does not need to be the least bit philosophically savvy to have an effective working sense of what constitutes a good model in her field or how to interpret the solar wind data that are being collected to test those models. It would be philosophism, and a waste of her precious, underpaid time, to argue otherwise, even if she said that the only way to gather reliable knowledge about the sun is through science.

3. Intellectual superiority

Philosophy is hard. It is far easier to go wrong than it is to go right with philosophy (also true of science, but science is grounded by evidence and so the wrongness may often be more readily apparent). Therefore, it isn't surprising to find that those capable of doing good philosophy (and good science) often have a bit of an intellectual chip on their shoulders regarding other fields. Often, in fact, there are interdepartmental jokes in universities, especially the colleges of arts and sciences, poking fun about the others' clear "inferiority."

(My favorite such joke, for what it's worth, reads that the second-cheapest department in any university is the mathematics department, as all its faculty need to do their work is paper, pencils, and waste baskets. The cheapest is the philosophy department because they don't need the waste baskets.)

I would argue that the "harder" a field is, the more its members tend to feel they can dig on the others. Overcoming high intellectual barriers to entry can have this effect on our psychology, particularly in people who have couched much of their self-esteem in their intellects. This is bad enough when it discredits fields outside the imaginary circle, but it's often downright nasty when happening between them.

The hard sciences, those requiring heavy-duty mathematics, often pride themselves on being more concerned with reality than, say, the philosophy and mathematics departments, and for good reasons. In contrast, the mathematics and philosophy departments often pride themselves on being more savvy with the abstract--and here's the problem: hence more intelligent--than those in the hard sciences. Sometimes it's vocalized; sometimes it's not, but the attitudes are common nonetheless. Everyone cherishes his own field, and none more than those whose fields are hard. (By "hard" here I mean "mathematical" or "formally logical," not difficult--an unfair term: I don't think I could write a novel, and so writing one would be terribly hard, but few, if any, "hard" science/math folks see literature as a "hard" field.)

Thus, I don't think it's surprising that philosophers can come off as smug and intellectually self-aggrandizing. Worse, because of the way philosophy operates, it comes off that way naturally to almost everyone else, creating a challenging problem for those who would popularize philosophy.

"There's a category for that, and let me tell you how that makes you wrong."

For instance, philosophers categorize everything, often with very precise, very abstruse terms that are very difficult to get a firm grasp on. Philosophers become adept at navigating these categories, understanding their ins, their outs, their strengths, their weaknesses, their genera, and their specifics. And then, when someone talks to a philosopher about philosophical-type ideas, the philosopher immediately categorizes those ideas and can do things with the classification, often sowing doubt and leaving the person they're talking with feeling quite stupid, typically very unfairly.

Here's a tip for philosophers that would popularize philosophy: people hate that. Hate it. Work as hard as you can not to do it, ever. (NB: Many of those I would suggest play at being philosophismists have great disdain for Peter Boghossian, whose main message is the kind of authenticity that gets broken by philosophical categorizing and the ensuing assumptions.)

Philosophers argue as much as lawyers

Philosophers are also trained to try to get everything right, which they seek to achieve by learning to look for holes in arguments that they can then tear apart. By implication, this leaves the person holding the torn-apart argument feeling wrong, which is often the philosopher's justification for tearing it apart in the first place. Philosophers are likewise trained to defend their own ideas and tear down opposing ones with pitiless mental tooth and claw. Finding holes in arguments and ripping them open becomes second nature.

Here's a tip for philosophers that would popularize philosophy: people hate that. Hate it. Work as hard as you can to do this with extreme care. This, again, falls on the crap side of authenticity. [Indeed, a humorous one-sentence summary of a thesis in philosophy, presented on a humor site, captures something of the problem: "Philosophers will almost always lie to you to prove their point" (Link).]

Incidentally, in the sciences, despite some scientists who are jerks, evidence is always the final arbiter whenever it is available. This is not the case in philosophy or mathematics. Additionally, in mathematics, at least in the mainstream, the axioms are usually far less contentious than in philosophy, and so the proof is just the proof (most Ph.D. dissertation defenses in mathematics are just a talk about the main results of the thesis since with proofs, there's no need to argue to defend one's position). This leaves philosophy heavily dependent upon arguments, which easily become motivated ones, to defend itself, and the trait shows. Notice, philosophers who would popularize philosophy, how nearly all jokes about lawyers go.

The dark side

Sometimes the dark side comes out, though. Philosophers, particularly those enamored with their own ideas or the importance of their own fields, will at times summon an air of intellectual superiority as a subtle ad hominem against an opponent's position. Doing so seeks to undermine confidence in the opposing argument by showing everyone how much dumber (or less savvy) its presenters are than the philosopher. This has been running rampant lately with all the references--even from Daniel Dennett!--to what undergraduates in philosophy should and shouldn't understand (but so-and-so apparently doesn't). This is philosophism.

So here is a clear issue with philosophism and a critical part of its connotation: it is condescending. This is hurtful to any productive dialogue and to the reputation of philosophy (which is already far weaker than the reputation of science and thus may suffer the insult far more grievously). Like other forms of condescension, it is something we should endeavor to keep to a minimum.

Bonus: Sophisticated Theologians

I will not make this section long. Sophisticated Theologians(TM) (meaning theologians or apologists who categorize themselves as philosophers or philosophically savvy) have nothing* but philosophical or philosophistic arguments (sophistry) for their God, so they are usually quite quick to play the role of the philosophist. The reason is plain: they wish to keep the relevance of philosophy in all fields overasserted so they can keep misusing it to steal opportunities to talk about theism like it is a position that deserves our intellectual respect.

*They also have their "inner witness" and "divine sense" as claims not to need arguments at all, and they can keep those along with the Great Pumpkin. Atheist philosophers insisting that these beliefs deserve some philosophical respectability is a clear instantiation of philosophism.


If scientism is a legitimate word that voices a legitimate criticism, philosophism is as well and for the same reasons. It is as anti-intellectual as anything that deserves to be called scientism, and it is as bad for philosophy--and everyone--as scientism is said to be for science. Indeed, it may be worse, because whereas scientism is science overstepping, philosophism is philosophy strangling itself. Science isn't going away, however scientistic it gets, but philosophy gone philosophistic will be beaten into the margins at great cost to us all.

Tuesday, February 11, 2014

Is Alvin Plantinga being rational about atheism?

Is Atheism Irrational? To plumb the question, Gary Gutting interviewed Christian philosopher and noted theologian Alvin Plantinga for The Stone, in the New York Times Opinionator. Unsurprisingly, Plantinga made the case that atheism is not a rational position, but equally unsurprisingly, there are some problems with his assessment.

To keep this as brief as may be (it's long), I will try to quote Gutting and Plantinga as little as possible, seeking only to make their discussion clear enough for context. I will also seek to follow the flow of the interview.

What is Plantinga after?

Very early on, Plantinga makes it clear what he's after by writing, "I take atheism to be the belief that there is no such person as the God of the theistic religions." Thus, he shows that he's most interested in presenting, to borrow a phrase, a rearguard defense of theism that tries to discredit atheism in favor of agnosticism. Agnosticism, of course, is taken to be a position of "we don't know enough to say." My assessment is that Plantinga wants to leave open that door to theism so that believers can continue believing. I think he also seeks to make the uncertainty on the matter appear far larger than it is.

This is another form of the God of the Gaps argument. There's a knowledge gap for us about God--we can't prove God doesn't exist. Plantinga wants to hide God in that gap and is making a presentation that aims to make agnosticism look more credible than atheism, using a definition that most atheists do not accept for themselves. By doing so, he apparently hopes to allow theologians to hide belief in God in the uncertainty, which he will esteem to be far greater than many self-described agnostics would.

Lack of evidence--is it enough?

Plantinga starts off by confusing the matter. To quote Plantinga directly,
But lack of evidence, if indeed evidence is lacking, is no grounds for atheism. No one thinks there is good evidence for the proposition that there are an even number of stars; but also, no one thinks the right conclusion to draw is that there are an uneven number of stars. The right conclusion would instead be agnosticism.
In the same way, the failure of the theistic arguments, if indeed they do fail, might conceivably be good grounds for agnosticism, but not for atheism. Atheism, like even-star-ism, would presumably be the sort of belief you can hold rationally only if you have strong arguments or evidence.
This is really quite astounding, isn't it? Let's deal with the number of stars thing, a ridiculously false comparison, first and then get back to the more general matter.

Regarding the number of stars, there are three possibilities (yes, three) that we can consider: (1) The number of stars is even, (2) the number of stars is odd, and (3) the number of stars is infinite (thus neither even nor odd). We apparently have to profess "agnosticism" about (3), but if we exclude it, we still face a challenging situation.

It seems straightforward to say that, supposing only finitely many stars in the universe, there's a 50% chance that the number of stars is even and a 50% chance that the number is odd. This isn't quite right, though, because it only holds if we accept a "propensitist" interpretation of probability, and not everyone does. On the "frequentist" position, this statement doesn't have any meaning because, so far as we know, there is only one universe. The number of stars is what it is, and it doesn't make any sense to talk about the probability that the number is this kind of number or that.

In this situation, then, if we take the word "agnostic" to very broadly mean that we don't, or can't, know, it applies here. The parity of the number of stars is unlikely to be knowable--perhaps even in principle unless we are able to show that the universe is necessarily finite in scope. (Note: the term agnostic technically only refers to religious or sometimes metaphysical positions and not usually to this broader use.)

Plantinga asserts that no one would conclude on this lack of evidence, perhaps even in principle, that we should reject "even-star-ism" and claim an "uneven" number of stars as a result of the lack of evidence. Of course not, but that doesn't make his analogy sound!

The sheer ridiculousness of this is revealed if we default to the propensitist interpretation of probability, which may be the only one that saliently applies here. Assuming the question is meaningful (a finite number of stars in the universe), we know there's a 50/50 shot that the number is one or the other. To adopt either position is to know there are equal odds that you're wrong. This is not the same as the situation with theism, which has to get more and more abstruse to hide from the fact that there is no good evidence that supports it.

Let's expand this a bit. We can't know if the number of stars is divisible by one billion either, but on the propensitist interpretation, we can conclude with billion-to-one odds that it isn't. When we change the number from two to something larger, all of a sudden, we see that "agnosticism" doesn't imply knowing so little that we have to conclude even odds. This "even-star-ism" example is egregiously misleading and outstandingly silly.
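The arithmetic behind this point can be sketched with a toy simulation (my own illustration, not anything Plantinga offers): under a uniform prior over a large range, the chance that an unknown count is divisible by n is about 1/n, so "we can't know the number" is perfectly compatible with heavily lopsided odds about its divisibility.

```python
import random

def prob_divisible(n, trials=100_000, upper=10**12, seed=0):
    """Estimate P(k % n == 0) for k drawn uniformly from 1..upper.

    A toy model of the unknown star count: under a uniform prior,
    divisibility by n has probability about 1/n -- "unknowable"
    does not mean "even odds."
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randint(1, upper) % n == 0)
    return hits / trials

print(prob_divisible(2))     # close to 0.5: even vs. odd really is a coin flip
print(prob_divisible(1000))  # close to 0.001: betting "not divisible" is safe
# For n = one billion the analytic answer is 1/1e9; no simulation needed.
```

The uniform prior is, of course, itself an interpretive assumption, which is exactly the propensitist point at issue.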

Sadly, this isn't even the most ridiculous part of this comparison. Who on earth cares if the number of stars is even or odd? More importantly, who makes any decisions based upon such a belief if one is foolish enough to hold one that points one way or another? People live and die--and kill--by theism. Getting it right matters in a way that a guess at the number of stars in the universe simply cannot.

More broadly

We lack any credible evidence for the reality of theism--the existence of any deities--and every theistic religious tradition puts forth a cornucopia of examples of expected evidence, none of which gets verified by observation. That means we have no good reasons to believe theism is true and lots of good reasons not to believe theism is true. A lack of evidence is not evidence of absence except when that evidence is predicted and expected (see modus tollens). Plantinga clearly wants to distract from this simple fact.


The interview turns from this topic to the famous analogy by Bertrand Russell of a teapot in orbit around the Sun between Earth and Mars. Here, Plantinga strains the careful reader's mind by explaining exactly what I mentioned above regarding an absence of (expected) evidence:
Clearly we have a great deal of evidence against teapotism. For example, as far as we know, the only way a teapot could have gotten into orbit around the sun would be if some country with sufficiently developed space-shot capabilities had shot this pot into orbit. No country with such capabilities is sufficiently frivolous to waste its resources by trying to send a teapot into orbit. Furthermore, if some country had done so, it would have been all over the news; we would certainly have heard about it. But we haven’t. And so on. There is plenty of evidence against teapotism. So if, à la Russell, theism is like teapotism, the atheist, to be justified, would (like the a-teapotist) have to have powerful evidence against theism.
I feel like I need to point out the irony (or is it special pleading?) that prevents Plantinga from considering the possibility that the celestial teapot could have formed by chance with the solar system, or that it is somehow a metaphysically necessary part of our universe (does it matter that it seems absurd in a field like metaphysics?). Also, I'd dare say that if someone succeeded in proving God really exists, that too would have been on the news, and we would have heard about it.

But recall that Plantinga is arguing only against the firm denial of theism when he says "atheism," which I take to be an equivocation on terms despite his open definition. He wants to make a case for agnosticism without making a case for the background likelihoods (be those probabilities, plausibilities, or simple credibility) associated with the various hypotheses. Instead of a careful discussion, he apparently wants to leave open a gap of doubt, without caring how narrow it is, that allows him to keep talking about his beliefs as if they are rational ones.

The Problem of Evil

Gary Gutting immediately raises the Problem of Evil at this point, saying that atheists use it as evidence against the God posited, at least, by Christianity. Plantinga recognizes the potency of this rock of atheism.
The so-called “problem of evil” would presumably be the strongest (and maybe the only) evidence against theism. It does indeed have some strength; it makes sense to think that the probability of theism, given the existence of all the suffering and evil our world contains, is fairly low.
The way he deals with it is bizarre. First he says, "But of course there are also arguments for theism. Indeed, there are at least a couple of dozen good theistic arguments."

There are? I'm well aware that there are some arguments that theologians and some philosophers consider to be good arguments for theism (note: arguments, not evidence), but I haven't seen one yet that doesn't have a sound refutation unless it devolves into metaphysical speculation. I wonder which ones he counts? I'm guessing his own, which are, in my estimation, not at all good ones, as we'll discuss.

Plantinga then says,
So the atheist would have to try to synthesize and balance the probabilities. This isn’t at all easy to do, but it’s pretty obvious that the result wouldn’t anywhere nearly support straight-out atheism as opposed to agnosticism.
This is more of the same--dodging the fact that there is a very small plausibility of theism by denying "straight-out atheism" in favor of agnosticism. If he defined atheism less to his advantage but still very strongly, say that one is 99.99999% sure that there is no God, for example, I wonder if he could make the same conclusion.

The issue here is that once he establishes "agnosticism" as being the only rational position, he can then play with the probabilities (which conceivably would span the entire range of possible subjective plausibilities), or attempt to hide them, without having to admit that he's very, very much on the losing end of them. A Bayesian analysis of the evidence uniformly points away from theism, so given any prior plausibility but certainty (or no prior at all), a rational conclusion is that the probability of theism is so small as to be negligible.
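The shape of that Bayesian claim can be made concrete with a minimal sketch (the numbers below are invented toy values, not measurements): start from any non-certain prior and fold in a run of likelihood ratios that each favor the alternative, and the posterior is driven toward zero.

```python
def posterior(prior, likelihood_ratios):
    """Bayes by odds: multiply the prior odds by each likelihood ratio
    P(evidence | hypothesis) / P(evidence | alternative), then
    convert the final odds back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Even granting a generous 50% prior, ten independent observations
# each merely twice as likely on the alternative leave very little.
print(round(posterior(0.5, [0.5] * 10), 5))  # 0.00098
```

Only a prior of exactly 1 (dogmatic certainty) is immune to this kind of erosion, which is precisely the point about "any prior plausibility but certainty."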

By the way, see Dot, Dot, Dot for my take on his challenge. I assess that the balance of the probabilities is nowhere near anything he'd like. (I think zero, almost surely, or otherwise negligibly low is the only position that can be defended.)

Sensus divinitatis

Oh boy. When challenged by Gutting on the point that theistic arguments are not decisive, Plantinga used a characteristic dodge into the unfalsifiable (taking the epistemic question one step further up the chain instead of dealing with it directly), in this case the sensus divinitatis,
I should make clear first that I don’t think arguments are needed for rational belief in God. In this regard belief in God is like belief in other minds, or belief in the past. Belief in God is grounded in experience, or in the sensus divinitatis, John Calvin’s term for an inborn inclination to form beliefs about God in a wide variety of circumstances.
Muslim apologists, Hindus, Mormons, and the ghosts of the Aztecs all agree, but notably not on what that "sense" reveals. Atheists chalk it up to psychology and the experience of experiencing.

There's a bit to unpack here, though. First, the belief in the existence of other minds doesn't seem to pair up with the existence of God. Clearly, we see other beings, people and animals, who behave as though they have minds like we ourselves seem to. We cannot know this to be the case (every person we ask about the experience could be misleading us, after all), but it is perhaps the simplest explanation for the phenomena we observe. Similarly, belief in the past may be illusory, but it is the most credible way to understand what we experience. Theism, whatever Plantinga imagines, isn't on a level here.

So again, we see more of the same, an outright attempt to allow theistic belief to ride in a tiny epistemic gap that Plantinga has motivated reasons not to discuss honestly. Worse, here we see how he tries to skip the gap and say that it doesn't exist at all for those who favor theism. He's claiming that atheism (in its certainty) cannot be rationally held but, because of an imagined and unfalsifiable non-physical sense, theism rationally can be held with certainty without arguments for it.

If this is the best they've got, they're in big trouble.

As an aside, Plantinga seems to try to couch his assessment of the arguments for theism in the respectability of philosophy (which is a dig on philosophy if I've ever read one). Hopefully at least one noteworthy philosopher will stick this statement as full of pins as it deserves.
I think there are a large number — maybe a couple of dozen — of pretty good theistic arguments. None is conclusive, but each, or at any rate the whole bunch taken together, is about as strong as philosophical arguments ordinarily get.

The oh-boy factor keeps rising when Plantinga offers the fine-tuning argument as an example of a good one for theism. Let's not linger on this, as others have done tremendous jobs with it, but we will play with it a bit. Plantinga wrote,
If it had been even slightly different, our kind of life would not have been possible. In fact the universe seems to be fine-tuned, not just for life, but for intelligent life. This fine-tuning is vastly more likely given theism than given atheism.
There are a few fun comments to make here. First, observe that Plantinga seems to be assuming that intelligent life is a goal of the universe. That's interesting considering that, however we measure it, non-intelligent bacteria have outdone us in every regard we can imagine--they evolved first, we depend upon them for our own life, they outnumber us immeasurably, they can endure environments far beyond what we can, and they will be one of the major consumers of us when we die. Indeed, when the last lifeform on this planet dies, it's a reasonable bet that it will be bacteria that survived while eating all of the others that died first. Why shouldn't we say that this universe is fine-tuned for bacteria?

A very easy argument exists that we exist only as a means for certain bacteria to have a sheltered existence in which their biological needs are provided for them by the expenditure of others--we could be their servants. And if we're going to invoke metaphysical arguments to the purpose of the universe, why not that one instead of the one that aggrandizes ourselves?

Plantinga's last sentence in this passage, though, really has some problems universal to theologians. "This fine-tuning is vastly more likely given theism than given atheism." Let me correct it for him: "This [apparent] fine-tuning [that might only be a coincidence or an illusion and that ignores how not fine-tuned most of the universe is, and may not be for us at all,] is vastly more likely given [the unfalsifiable assumptions I call] theism [that are designed specifically to account for this matter] than given atheism."

Note also that Plantinga is quite clear about what he means by "atheism," so clear in fact that he's using it in a way that is obscurantist all by itself (by differing intentionally from the most commonly accepted meaning of the word). He is not, however, clear at all in what he means by the word "theism."

The reason is that he is not at all clear--no one ever is--on what the word "God" means. Part of the reason that "fine-tuning" seems so much more likely on theism than on atheism is because he isn't being honest about what theism means (to wit, does he really mean deism? or does he mean theism? or does he mean Christian theism? or does he mean the type of Christian theism that I, Alvin Plantinga, don't think count as heresies?). The other part has already been noted: the definition of God is usually just an explanation to give a superficially satisfactory response to all riddles of this sort.

Suffering, sin, and best-possible worlds

Gutting turns the topic back to the problem of evil, given Plantinga's attempt to use fine-tuning as a good theistic argument. In doing so, Gutting mentions that the universe isn't perfect, which gives Plantinga an opportunity to say some truly astounding things.
I suppose your thinking is that it is suffering and sin that make this world less than perfect. But then your question makes sense only if the best possible worlds contain no sin or suffering.
Well, since sin is deviation from divine law, and we have no good reasons to believe there is divine law, I'm not sure why Plantinga would want to argue that atheists think that "sin" somehow makes the world less than perfect.

I'm not at all clear on--and I don't think Plantinga is either--what is meant by the Panglossian expression "best possible world." For all we know, there is one world. It is best-possible by default (and is likewise worst-possible, rendering this analysis meaningless). But then he gets truly weird, in a way that only religious people can.
Maybe the best worlds contain free creatures some of whom sometimes do what is wrong. Indeed, maybe the best worlds contain a scenario very like the Christian story.
Think about it: The first being of the universe, perfect in goodness, power and knowledge, creates free creatures. These free creatures turn their backs on him, rebel against him and get involved in sin and evil. Rather than treat them as some ancient potentate might — e.g., having them boiled in oil — God responds by sending his son into the world to suffer and die so that human beings might once more be in a right relationship to God. God himself undergoes the enormous suffering involved in seeing his son mocked, ridiculed, beaten and crucified. And all this for the sake of these sinful creatures.
I’d say a world in which this story is true would be a truly magnificent possible world. It would be so good that no world could be appreciably better. But then the best worlds contain sin and suffering.
I wish I was making that up. Maybe the best possible worlds contain one of the most ridiculous and disturbing stories possible, and therefore it would be so good that no world could be appreciably better. That's what I read here.

Let's pause to recall, though, that God--if Abrahamic theism were true--didn't boil us in oil, verily. He drowned the entire planet (Gen. 6-9) and promises to destroy it in fire later (Revelation 16).

There's so much nonsense in this story, though, that books could be written unpacking how ridiculous it is. I'll just illustrate some points:
  1. Why would free creatures in sound mind rebel against a perfectly good being that provides everything for them if they know it exists? The Abrahamic account of this is ridiculous and makes God out to be pretty bad--losing it completely over being deceived instead of blindly obedient.
  2. "God responds by sending his son into the world to suffer and die so that human beings might once more be in a right relationship to God." This is horrible and no part of it makes the first bit of sense. How would this establish a "right relationship"? By guilt? (The Catholics, among others, seem to think so.)
  3. "God himself undergoes the enormous suffering involved in seeing his son mocked, ridiculed, beaten and crucified." What? How does Plantinga know that? What does it even mean for God to suffer?
  4. "And all this for the sake of these sinful creatures." And so appears the hallmark Abrahamic reminder of what worthless failures all people are by birthright--this being an essential characteristic of Plantinga's best possible, "truly magnificent" world that is "so good that no world could be appreciably better."
Alvin Plantinga has stars in his eyes for his cherished beliefs, and he views them through Jesus-colored glasses. No other reasonable conclusion can be drawn from this passage in the interview.

Back to evidence

Gutting presses Plantinga on the idea that we need God to explain the phenomena of the universe, and Plantinga responds predictably by pointing out again that absence of evidence is not sufficient to claim what he's calling atheism.
Some atheists seem to think that a sufficient reason for atheism is the fact (as they say) that we no longer need God to explain natural phenomena — lightning and thunder for example. We now have science. As a justification of atheism, this is pretty lame.
Not if he's ever read his Bible or any other scripture, or perhaps talked to believers in the real world. The explanation of at least some worldly phenomena seems to be a chief role of the biblical God and the God almost every practicing Christian believes in. Then he gets truly weird again with a very bizarre analogy,
We no longer need the moon to explain or account for lunacy; it hardly follows that belief in the nonexistence of the moon (a-moonism?) is justified. A-moonism on this ground would be sensible only if the sole ground for belief in the existence of the moon was its explanatory power with respect to lunacy. (And even so, the justified attitude would be agnosticism with respect to the moon, not a-moonism.) The same thing goes with belief in God: Atheism on this sort of basis would be justified only if the explanatory power of theism were the only reason for belief in God. And even then, agnosticism would be the justified attitude, not atheism.
We know the moon exists for other reasons (like having been there). This is not true of God. Plantinga understands this, but he seems to fail to grasp that the primary mechanism behind belief in the existence of God is its apparent explanatory power with respect to phenomena of the world (including moral and other values held by sentient beings that evolved in it, along with psychology related both to the mundane and numinous experience). All religious belief has is bad philosophy and misattributed evidence to lend it support, which sadly is more than enough.

To linger on a phrase, "the explanatory power of theism," I'd like to point out that this is one of the most pernicious aspects of theism--the mythological aspect. Theism, since it simply posits a deity as the answer to the unexplained or unexplainable, offers no explanatory power at all. "God did it" doesn't tell us anything at all that passes as an explanation. It's stuffing in lieu of an explanation that, for the simple or the desperate, will fill the cognitive hole. It's no more of an explanation than is "it is how it is," which clearly no one sees as a very good explanation of anything.

Philosophy, evidence, and religious experience

Gutting does a great job of pressing Plantinga, not that it seems to matter much. He asks what further grounds there are for believing in God. Plantinga responds,
The most important ground of belief is probably not philosophical argument but religious experience.
Again, Muslim apologists, Mormons, Hindus, and the ghosts of the Aztecs agree, just not with Plantinga's take on the matter.
Many people of very many different cultures have thought themselves in experiential touch with a being worthy of worship. They believe that there is such a person, but not because of the explanatory prowess of such belief. Or maybe there is something like Calvin’s sensus divinitatis.
There's a simple explanation for this: psychology (and being wrong about states of mind). Human beings are very different from one another in many ways, culture being important among those, but not that different. The psychology behind these matters has been explored in depth, has high explanatory salience, offers predictive potency, and yet does not touch Plantinga, who is apparently attached to his sensus divinitatis because it allows him to pretend to have epistemic grounds for his cherished beliefs.

We see high religious diversity all claiming to have similar experiences. It's not a stretch--indeed, it's the very assumption Plantinga seems to be making--to suppose that there's a single underlying explanation for it. Human psychology is a vastly more reasonable guess than a supernatural one, particularly a supernatural speculation that appears to be grounded by yet another: a magic sense that allows us to detect the supernatural directly.

Let's unpack Plantinga's next sentence,
Indeed, if theism is true, then very likely there is something like the sensus divinitatis.
The first thing to notice is the second word: "IF." What's the plausibility behind that if? I say zero, or as near to that as can be defended philosophically. But look at how this pressing question gets lost, apparently even to Plantinga, in his wistful sentence. Notice also how the rest of the sentence is awash with still more likelihood terms: "very likely" and "something like." And why should we believe either of those is a good assessment? How can he know?

Plantinga wants us to take the sensus divinitatis seriously as a grounding for theism, but even when he presents it, it is premised upon "if theism is true."

Good reasons to be creeped out by God

In the next question in the interview, about why so many very smart people don't believe in God, Plantinga provides us with some great reasons to be creeped out by God. I'll offer them without commentary.
Thomas Nagel, a terrific philosopher and an unusually perceptive atheist, says he simply doesn’t want there to be any such person as God. And it isn’t hard to see why. For one thing, there would be what some would think was an intolerable invasion of privacy: God would know my every thought long before I thought it. For another, my actions and even my thoughts would be a constant subject of judgment and evaluation.
Basically, these come down to the serious limitation of human autonomy posed by theism. This desire for autonomy ... can perhaps also motivate atheism.

Beliefs and materialism

The existence of beliefs apparently makes believing in materialism (that the material universe is all that there is) very difficult for Alvin Plantinga. He sets the stage,
First, if materialism is true, human beings, naturally enough, are material objects. Now what, from this point of view, would a belief be? My belief that Marcel Proust is more subtle than Louis L’Amour, for example? Presumably this belief would have to be a material structure in my brain, say a collection of neurons that sends electrical impulses to other such structures as well as to nerves and muscles, and receives electrical impulses from other structures.
But in addition to such neurophysiological properties, this structure, if it is a belief, would also have to have a content: It would have, say, to be the belief that Proust is more subtle than L’Amour.
This requires more buildup, 
I’m interested in the fact that beliefs cause (or at least partly cause) actions. For example, my belief that there is a beer in the fridge (together with my desire to have a beer) can cause me to heave myself out of my comfortable armchair and lumber over to the fridge.
But here’s the important point: It’s by virtue of its material, neurophysiological properties that a belief causes the action. It’s in virtue of those electrical signals sent via efferent nerves to the relevant muscles, that the belief about the beer in the fridge causes me to go to the fridge. It is not by virtue of the content (there is a beer in the fridge) the belief has.
And then he lays it out,
Because if this belief — this structure — had a totally different content (even, say, if it was a belief that there is no beer in the fridge) but had the same neurophysiological properties, it would still have caused that same action of going to the fridge. This means that the content of the belief isn’t a cause of the behavior. As far as causing the behavior goes, the content of the belief doesn’t matter. 
So much has to be unpacked again. My first temptation is simply to ask how Plantinga can claim to know that the neurophysiological properties of a belief can be divorced from the content of the belief. He tries to make a case for it, but it is unconvincing. It is not at all clear that two opposing beliefs can have the same neurophysiological properties. (What would distinguish them?) Plantinga seems to see beliefs in two ways at once, one of which is ultimately dualistic. By conflating the two, he seems to be confusing himself on this matter.

I think perhaps something like this can occur, though, and we may see it in various psychological disorders. In those cases, the belief (being how the person cognitively interprets the neurophysiological properties that form it) and the actions that manifest from the belief are clearly out of accord with one another (and/or reality). Those who suffer these problems often do not want to suffer them and sometimes even know that their behaviors and beliefs do not agree. Far from making Plantinga's case, then, I think these examples shatter it.

His case is that the cognitive interpretation of the neurophysiological properties that form a belief does not matter in producing actions as much as the neurophysiological properties themselves do. These conditions, along with many others, demonstrate that fact, but we consider them cognitive pathologies when the beliefs do not line up with reality or behavior.

Plantinga is really using this line of discussion to introduce the close of the interview, which focuses on one of his most famous arguments.

The evolutionary argument against naturalism

The rest of the interview is dedicated to Plantinga's Evolutionary Argument Against Naturalism (EAAN), which I'd rather ignore but must comment upon because it's really a trainwreck of bad assumptions.

He introduces it this way,
Evolution will have resulted in our having beliefs that are adaptive; that is, beliefs that cause adaptive actions. But as we’ve seen, if materialism is true, the belief does not cause the adaptive action by way of its content: It causes that action by way of its neurophysiological properties. Hence it doesn’t matter what the content of the belief is, and it doesn’t matter whether that content is true or false. All that’s required is that the belief have the right neurophysiological properties. If it’s also true, that’s fine; but if false, that’s equally fine. Evolution will select for belief-producing processes that produce beliefs with adaptive neurophysiological properties, but not for belief-producing processes that produce true beliefs. Given materialism and evolution, any particular belief is as likely to be false as true.
As I mentioned above, it seems to matter what the content of the belief is, because when it doesn't comport with the actions or other beliefs, we see it as a cognitive pathology. We identify it as a pathology particularly because it seems detrimental to have, if not to oneself then to others.

He is right, though, to note that evolution will select for adaptive, but not necessarily true, beliefs. Of course, in many cases, true beliefs are adaptive--so part of his argument is bogus on that ground. In other cases, false beliefs can be adaptive, like being irrationally afraid of the dark or of snakes, or having overactive senses of patternicity and agenticity--all of which we exhibit.

One point to raise before continuing is that his last sentence in this passage hides an important error: "Given materialism and evolution, any particular belief is as likely to be false as true." No, that's not true. If I roll a pair of fair six-sided dice with the belief that I'm due to roll numbers that sum to 11 (as often happens in the gambler's fallacy at the craps table), that belief only has a 1/18 chance of being true and 17/18 chance of being false.
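The dice arithmetic is easy to verify by brute-force enumeration. A minimal sketch, using only the standard library:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair six-sided dice.
rolls = list(product(range(1, 7), repeat=2))

# Only (5, 6) and (6, 5) sum to 11, so the belief is true 2/36 = 1/18 of the time.
p_true = sum(1 for a, b in rolls if a + b == 11) / len(rolls)
p_false = 1 - p_true

print(p_true)   # 1/18 ≈ 0.0556
print(p_false)  # 17/18 ≈ 0.9444
```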

This example may seem too narrow, but it extends to just about everything we can conceive of. Maybe there's a hunting panther in that rustling bush only one out of every one hundred times a nearby bush rustles like that, so to believe that there is one is to hold a belief that is 99% likely to be false and only 1% likely to be true. It may be adaptive to get away from that rustle, but it may be maladaptive as well (it expends energy, takes us away from food sources, etc., needlessly if there is no real threat).

Evolution is a process that balances those stresses--in the most cruel and least fine-tuned way imaginable--with tremendous efficiency. Perhaps the adaptive balance is to (incorrectly) assume that there's a 65% chance that the bush contains a panther that will kill you. That, then, dictates how our belief-forming mechanisms will evolve, and this kind of assessment accounts exactly for what we observe. The belief-forming mechanism is adapted to survival and reproduction (including child-rearing), not to getting things right all the time, but it's undeniable that getting things right all the time would be highly adaptive. Thus, we should expect to see our belief-forming mechanisms tipped toward forming true beliefs about many of the challenges our paleolithic ancestors (and earlier) would have faced.

Imagine, though, if using a small amount of energy, our paleolithic friend here were able to determine that the rustle is only 30% likely to indicate a life-ending panther. He would still be wrong, but he would certainly be doing better by this assumption than by one that is more wrong, because he would commit fewer maladaptive mistakes. Thus, we see evolutionary pressure to produce a belief-forming mechanism (not beliefs themselves) that invests in getting things more right than they might otherwise be.
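The trade-off can be caricatured numerically. The costs below are entirely made-up assumptions, and the model supposes the creature flees with a frequency matching its believed threat probability (probability matching, a behavior animals do exhibit); under those assumptions, a less-wrong belief yields a lower expected cost per rustle:

```python
# All numbers here are illustrative assumptions, not measurements.
TRUE_P = 0.01       # actual chance a given rustle is a panther
FLEE_COST = 1.0     # energy wasted whenever the creature flees
DEATH_COST = 50.0   # cost paid if it stays while a panther is present

def expected_cost(believed_p: float) -> float:
    """Expected cost per rustle if the creature flees with frequency believed_p."""
    # Flee: pay FLEE_COST regardless. Stay: pay DEATH_COST only if a panther is there.
    return believed_p * FLEE_COST + (1 - believed_p) * TRUE_P * DEATH_COST

for b in (0.65, 0.30, 0.01):
    print(b, expected_cost(b))
```

In this toy, the 30% believer outperforms the 65% believer, and a near-true estimate does better still; real selection pressures are of course far messier than a one-line cost function.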

As it turns out, apparently beyond Plantinga's ability to understand, overactive agenticity and patternicity, along with phobia formation and superstition--also rampant in humans--indicate that we probably didn't evolve to generate true beliefs. And so what? Even pigeons have been shown to be superstitious. That doesn't imply that there's no adaptive advantage to seeking to overcome our psychological biases away from true beliefs. Science is the most powerful tool we've developed to date for doing so, and just look at what it has done in a couple of centuries in terms of increasing our potential to survive, mate, and raise our young! How much evidence does he need to see that true beliefs, at least about mundane matters related to survival, are clearly strongly adaptive?

We don't have really good reasons to believe that we have reliable belief-producing faculties, nor do we need them to conclude that evolution could occur without the assumption of supernatural forces. Indeed, even in this scientific age, ardent belief in pseudoscience and other forms of outright bullshit run rampant among people--without even having to mention religion. Not only that, but marketers and charlatans are easily able to make careers out of exploiting our unreliable belief-producing faculties. Tide detergent really will make my clothes cleaner and my life better! If I drink Coors Light, I'll be as cool and funny as Jean Claude Van Damme! The right chiropractic treatment will cure my daughter of autism and allergies! If I rip out this child's beating heart and eat it in this ritual, the volcano will not explode! If I eat this cracker at church while pretending to be serious, I'll live forever!

So, rather like a pigeon on a chessboard, Plantinga concludes,
Right. In fact, given materialism and evolution, it follows that our belief-producing faculties are not reliable.
No kidding. Look at your beliefs, Plantinga. Turn your argument back on yourself and see what it reveals about theism!

I want to let this lie now, but because my background is in mathematics, I should address his explanation.
Here’s why. If a belief is as likely to be false as to be true, we’d have to say the probability that any particular belief is true is about 50 percent. Now suppose we had a total of 100 independent beliefs (of course, we have many more). Remember that the probability that all of a group of beliefs are true is the multiplication of all their individual probabilities. Even if we set a fairly low bar for reliability — say, that at least two-thirds (67 percent) of our beliefs are true — our overall reliability, given materialism and evolution, is exceedingly low: something like .0004. So if you accept both materialism and evolution, you have good reason to believe that your belief-producing faculties are not reliable.
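For what it's worth, the arithmetic itself checks out on Plantinga's own premises: his ".0004" is roughly the upper tail of a Binomial(100, 0.5) distribution at 67. A quick standard-library check:

```python
from math import comb

def tail_prob(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    independent beliefs, each true with probability p, come out true."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that at least 67 of 100 independent 50/50 beliefs are true.
print(tail_prob(100, 67))  # roughly 0.0004
```

The multiplication is fine; it's the premises--a 50% truth probability per belief and full independence--that are the problem.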
There are issues here.

First, we already discussed that "any particular belief" having equal likelihood of being true or false is probably not accurate except in the case of vague beliefs. I don't think, though, that he can even make a statement like this--for the same reason he tried to use when talking about the number of stars being even. The probability of truth of an unspecified belief is unlikely to mean anything, and for many specific beliefs, the probability of truth is simply not assessable.

Here's an intriguing example. Suppose I am given a six-sided die. I know nothing about this die except that when I roll it, it will show a face and then immediately explode, destroying itself. I cannot assess the probability that any given face will show (I do not know whether the die is fair, and I cannot use repeated rolls to determine it). Nor can I conclude anything about the probability that the belief "this die is fair" is true or false.

Second, the word "independent" here is dubious. I don't know that it's clear that many, if any, of our beliefs are truly independent. All of our beliefs exist, if you will, as part of the fabric of beliefs we call a "worldview," and they all exist, again, if you will, in relationship to that. That suggests that for many, if not all, of our beliefs, there will be some correlation between them (we call this "coherence"). On beliefs that matter, the correlation is likely to be strong in many cases. Again, his construction is too abstractly sterile to be real.

Third, as noted previously, we don't actually have good reasons to believe that our belief-producing faculties are reliable. Humanity has demonstrated an outstanding track record for coming up with utterly horrible--often barbaric and brutal--nonsense beliefs. Many of these are central to the lives of the people who hold them. Most often, the worst and most sincerely held among these take the form of religious beliefs. Let's not forget that Christianity itself is a religion based centrally upon a blood sacrifice of a human/God that allegedly confers eternal life upon those who accept it and blame themselves for its need.

That's why we need something as enormously disciplined and self-correcting as science to glean (provisional) truths about the world. Our intelligence may allow us to do science, but it is in spite of our belief-forming faculties that we do it, not because of them. The clear rational and adaptive advantages of getting things right are sufficient motivation for the considerable energy expenditure required to overcome our biases, so his argument falls flat even on his own oversimplifying assumptions.


Alvin Plantinga's main theme seems to reveal a fear that atheism can be held rationally and that, as always, God can hide in the possible-therefore-likely-therefore-warranted gap. He is in love with his ideas (particularly about theism), not with getting things right. Sic semper theologus.