Monday, March 24, 2014

Beatrice, Nebraska, Church Explosion in 1950--Miracle or Coincidence?

I came across an interesting fact last night. Apparently, in Beatrice, Nebraska, in 1950, the West Side Baptist Church exploded just five minutes into the scheduled choir practice on the Wednesday evening of March 1, and yet not a single person was hurt or killed. As it happened, the furnace was the source of the explosion, but there were no injuries or deaths because not a single member of the choir showed up to practice on time that night. Every last one of them was late, and they all missed the catastrophe. Of course, immediately and ever since, this has been taken by those who believe in miracles as a clear example of divine intervention, a miracle of God. But is it, or was it just an extremely fortunate coincidence?

Of course, there's an obvious discussion to be had about the fact that the church blew up in the first place. God could have simply prevented the explosion instead of mysteriously delaying each member of the choir by a few minutes for a variety of mundane reasons while still destroying their church. That's not what I'm interested in, though. Instead, I want to investigate what the rough chances of such an event are. To do so, I've had to make a number of approximating assumptions that I need to explain, and in the interest of being generous, I feel I've low-balled the majority of them. As I often do, I'll run the numbers a second time for different assumptions that I will also explain.

The assumed numbers

Churches: As it turns out, there are roughly 450,000 churches in the United States (and many more around the world). Of course, many of these are recent constructions, but since there are churches all over the world, and such a miracle at any of them would count by the standards of religious apologists, I'll take this number as it is. I realize that it overestimates the count going into the past, but given the point about the wider world, and since I'm underestimating so profoundly throughout the rest of this discussion, I'll stick with it. (Note also that the same miraculous status would be accorded to disasters averted at religiously motivated meetings in other places, like Bible studies in someone's home, at a library, or at any other public facility.)

Meetings: Different churches have choir practices different numbers of times per week, so to be generous, I'm assuming that a typical church practices once per week, forty weeks of the year, i.e. 40 choir practices per year. Note that this is a severely low-ball estimate because it's irrelevant that it was a choir practice. All anyone wanting to claim a miracle would need for this to count is any meeting at a church (or almost anywhere else), and those happen several times per week, at least, almost every week of the year at most churches. Even to modify this number to 300 meetings per year is quite generous (many Protestant churches have at least this many outright church services a year with others not far behind).

Time frame: If this had happened in 1565 instead of 1950, we'd still be hearing about it from apologists, probably with similar frequency, so I feel it is generous to limit my focus to one century, 100 years. To stay generous, and because the number of churches is likely to drop off drastically before the 1850s, my more realistic assumptions will extend this by only an additional fifty years. Note that under the more generous assumptions, this yields 1,800,000,000 (choir) meetings within the relevant time frame, and under the (still generous) more realistic assumptions, 20,250,000,000 church meetings within the time frame.
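For anyone who wants to check that arithmetic, here is a quick sketch in Python; the figures are nothing more than the assumptions stated above:

# Total meetings over each time frame, using the assumed figures above
churches = 450_000                          # approximate number of U.S. churches (assumed)

generous_meetings = churches * 40 * 100     # 40 practices/year for 100 years
realistic_meetings = churches * 300 * 150   # 300 meetings/year for 150 years

print(generous_meetings)    # 1800000000, i.e. 1.8 billion
print(realistic_meetings)   # 20250000000, i.e. 20.25 billion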

Catastrophes: According to FEMA, in the mid-1990s, American churches experienced roughly 1,300 fires per year on average. To be generous, I'll assume that there have been approximately 1,000 church catastrophes (of any kind) per year anywhere in the United States, and further that only 1% of those (10) are serious enough to lead to considerable injury or death if any potential victims happen to be present. Again, this assumption is overwhelmingly favorable to the possibility that a miracle happened because there are many other kinds of catastrophes besides fires (explosions, obviously, building collapses, accidents, mass shootings, weather events, etc.) that would all work for the miracle claim. A more realistic, but still generous, estimate would probably triple this number, to 3,000 church catastrophes (of any kind) per year anywhere in the U.S., and would double the likelihood of an incident being serious, i.e. with 2% (60) being serious enough to lead to severe injury or death for any victims present.

If we assume that these catastrophes occur randomly, this yields a probability of 0.000095 that such a serious catastrophe (one entailing serious injury or death to victims) occurs in any given five-minute span, anytime on the clock, anywhere in the United States, under the more generous assumptions, and a probability of 0.00057 for any given five-minute span under the more realistic ones. I chose a five-minute span as the relevant window because the explosion at the West Side Baptist Church occurred five minutes after choir practice was slated to begin.
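Those per-span probabilities come from dividing the assumed number of would-be-serious catastrophes per year by the number of five-minute spans in a year; in Python:

spans_per_year = 365 * 24 * 12              # 105,120 five-minute spans in a year

p_span_generous = 10 / spans_per_year       # 10 would-be-serious catastrophes/year (generous)
p_span_realistic = 60 / spans_per_year      # 60/year under the more realistic assumptions

print(p_span_generous)     # ~0.000095
print(p_span_realistic)    # ~0.00057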

Note that it's even very generous to assume that catastrophes like this occur evenly distributed around the clock and throughout the year. This is not the case generally, and it is not the case with the Beatrice explosion. In Beatrice, the exploding furnace had been lit specifically for the choir practice, so catastrophes like fires and certain kinds of explosions are more likely when people are scheduled to be there, say for choir practice. This is also true of intentional attacks like mass shootings. Further, other than arson (the leading cause of church fires, incidentally), we should expect most human-caused church catastrophes to happen during hours when people would also schedule events like choir practices or church services. All of these factors indicate that to be fully realistic to this problem, the odds approximated in this section should be higher than they are.

Being late: A quick Google search about how often people are tardy led me to a survey on how often people are late to work (note: punctuality matters more at work than at church choir practice), and from it I settled on the following estimates, nudged in the direction I consider generous:
  • 10% of people are chronically late, to which I give a 90% chance of lateness;
  • 20% of people are sometimes late, with 75% chance of lateness;
  • 20% of people are occasionally late, with 50% chance of lateness;
  • 20% of people are rarely late, with 25% chance of lateness;
  • 30% of people are very rarely late, with 10% chance of lateness.
Putting all of these numbers together by their weights gives us a 42% chance that an "average" person will be late to any given event.
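That 42% is just the weighted average of the assumed categories, as this quick sketch confirms:

# (fraction of people, chance of being late) for each assumed category above
lateness_profile = [(0.10, 0.90), (0.20, 0.75), (0.20, 0.50), (0.20, 0.25), (0.30, 0.10)]

p_late = sum(fraction * chance for fraction, chance in lateness_profile)
print(p_late)    # ~0.42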

I do not feel this number is out of the ballpark in how realistic it is, particularly for something of low importance like church choir practice, but I will not separate it into very generous and more realistic values. I will note, however, that the lateness likelihood given in the Snopes entry is stated in two incompatible ways. The article insists on a 25% chance of lateness for any given member and on a one-in-a-million chance of lateness for the whole group of fifteen, but these numbers cannot both be right. For a group of fifteen independent people to have a one-in-a-million chance of all being late simultaneously, the individual tardiness rate would have to be roughly 40%, not 25%.
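A quick check of why those two figures can't both be right, using nothing beyond the numbers quoted above:

# Fifteen independent members, each late 25% of the time:
print(0.25 ** 15)            # ~9.3e-10, about 1 in a billion, not 1 in a million

# Individual rate actually needed for a one-in-a-million group miss:
print((1e-6) ** (1 / 15))    # ~0.398, i.e. roughly 40%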

With a 42% chance of an average person being late, the odds that all fifteen would miss at once, just by chance, are 1 in 447,982, or 0.00000223. Since this means that out of every 447,982 times that fifteen average people get together at a scheduled time, one of those times will result in all fifteen being simultaneously a few minutes late, this really isn't that ridiculous an outcome. Note also that this assumes the choir members have independent reasons for being late, which is also generous. Bad traffic on the way to the church holding up several members would destroy independence, as would carpooling members or two members coming from the same household and carrying the same excuse. Any dependence in lateness between members would raise the odds that all would be tardy simultaneously.
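And the 1-in-447,982 figure is just that 42% raised to the fifteenth power; a quick check:

p_late = 0.42
p_all_late = p_late ** 15

print(p_all_late)        # ~2.23e-6
print(1 / p_all_late)    # ~447,982, the figure quoted above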

Crunching the numbers

Here's what we're looking at. I have calculated the odds that all fifteen members of the church choir would independently and simultaneously be late to an event at their church, thereby missing the five-minute-long span of time in which a catastrophe occurs that is serious enough that, were they present, it would result in serious injury or death for some or all of them.

Then, I use that number to find the odds that such an event would occur merely by chance anytime in the last century. To do so, I determined the chance of it occurring at any given time (as just above), then used the complement of that to evaluate the chance that it would not occur at any point whatsoever over the stated time frames. The complement of that number is the odds that we'd see at least one Beatrice, Nebraska, event purely by chance.

Math: If we call the probability of such an event occurring by chance at any single meeting p, and we say that the number of meetings over the given time frame is m, the number I have calculated is 1 - ((1 - p)^m). The chance p is calculated by multiplying the individual probabilities from above, using the (generous) assumption that all of these factors are independent.
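For anyone following along at a keyboard, here is a sketch in Python of the whole calculation, using only the assumed figures above; it reproduces the two headline results reported below (any small differences reflect rounding):

spans_per_year = 365 * 24 * 12      # five-minute spans in a year
p_all_late = 0.42 ** 15             # all fifteen members late at once (~2.23e-6)

def chance_of_at_least_one(catastrophes_per_year, meetings_per_year, years):
    # p: a given meeting falls in a would-be-serious catastrophe's five-minute span
    #    AND every member happens to be late for it
    p = (catastrophes_per_year / spans_per_year) * p_all_late
    m = 450_000 * meetings_per_year * years    # total meetings over the time frame
    return 1 - (1 - p) ** m

print(chance_of_at_least_one(10, 40, 100))     # super-generous: ~0.32, roughly one in three
print(chance_of_at_least_one(60, 300, 150))    # more realistic: ~1, a virtual certainty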

Super-generous assumptions: If we use the far more generous assumptions above (40 meetings per year per church and 10 would-be-serious catastrophes per year in churches anywhere in the U.S., evaluated over a century), the chance that we would see at least one event like the one that occurred in Beatrice, Nebraska, sometime within the last century is 31.7%.

What this means is that even under ridiculously generous assumptions, there's nearly a one-in-three chance that, just by utter chance, we would have on record something as seemingly freakish and rare as the event of March 1, 1950, in Beatrice, Nebraska, where a serious accident at a church occurred but was missed by all fifteen choir members because they were all simultaneously late. This doesn't look like divine intervention by any stretch, particularly when we keep in mind how generous the assumptions are. It's more likely than having two children, both of whom are girls.

More realistic assumptions that are still very generous: If we use the generous, but more realistic, assumptions mentioned (300 meetings per year per church with 60 would-be-serious incidents per year, evaluated over a century and a half), the chance that we would see at least one event like the one that occurred in Beatrice, Nebraska, sometime roughly since the Civil War is a virtual certainty--the probability isn't worth reporting directly because it's of the form 99.99...9%. Put the other way around, the chance that no such event would have occurred is roughly one in 151 billion. That's about the same as the chance that, choosing a single grain at random from a typical circular above-ground swimming pool filled with white granulated sugar into which a single grain of salt has been dropped, you'd pick the salt. No sign of divine intervention here at all. (You're more likely to flip a quarter nine times, get heads every time, buy a lottery ticket only because you did, and then win the lottery with that ticket.)

Nota bene: Because of the values I chose in my super-generous assumptions, the number calculated in that case is right on the cusp where the probabilities change very rapidly with even small changes to the assumptions. The less-generous assumptions are included here primarily to illustrate that this phenomenon is mostly an artifact of trying to be extraordinarily generous to the case of the miracle-claimant. I plainly admit, however, that being even a little more generous in my assumptions would yield a low (but not necessarily miraculously low) chance of such an event happening by coincidence. For example, if we use all of the other super-generous assumptions and assume a ~30% chance that a typical person would be late to church practice (splitting the middle of the two values given in Snopes slightly to the generous side), we only have about a 0.3% chance of seeing an event like the Beatrice, NE, explosion. Even then, that works out to expecting such an event roughly once every three or four hundred centuries--rare, but, as just noted, not necessarily miraculously rare--so divine intervention still seems to be a hasty conclusion.
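Here is that sensitivity check spelled out, a sketch with the lateness figure set to exactly 30% (the "about 0.3%" above is this same calculation, up to rounding and the exact value chosen for "roughly 30%"):

p_all_late = 0.30 ** 15                        # ~1.4e-8
p = (10 / (365 * 24 * 12)) * p_all_late        # per-meeting chance of the full coincidence
m = 450_000 * 40 * 100                         # super-generous meeting count

print(1 - (1 - p) ** m)    # ~0.0025, i.e. about a quarter of a percent per century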

It simply cannot be overstated that the generous assumptions here are ridiculously generous, and even the more realistic assumptions are still quite generous, given the circumstances. We have every reason to believe that realistic numbers are nothing like my very generous assumptions and are far nearer the more realistic ones. Simply put, we have every good reason to believe that some event like this would have happened at least once within American history. I've only included the above note for full integrity of my presentation.

The overall takeaway from this assessment is that under truly realistic assumptions, we shouldn't really be surprised about the story of the Beatrice, Nebraska, West Side Baptist Church explosion on March 1, 1950, and we certainly have no good reasons to accept it as some kind of miracle claim.

Sunday, March 23, 2014

The Big Bang, Gravity Waves, and Shoehorning Jesus into Physics

A couple of days ago, I noticed a piece on CNN's Belief Blog starting to make the rounds, the title asking the question "Does the Big Bang Breakthrough Offer Proof of God?". Being busy--I leave for China in a few days--I assumed that the author of the piece was a scientist (in which I was correct) who was dispelling this truly absurd claim (about which I was apparently incorrect).

The scientist behind this stunning piece of confusion is Leslie Wickman, the director of the Center of Research in Science at Azusa Pacific University. Wickman's bio indicates that she was formerly an engineer for Lockheed Martin Missiles & Space, where she worked on the Hubble Space Telescope and the International Space Station programs. Incidentally, the last sentence in her bio disclaims her views as her own and not necessarily reflective of CNN or the Belief blog it maintains (which raises a fascinating, if ultimately dour and cynical, question about why editors are so eager to publish these kinds of things).

Impressive though her credentials may be, Wickman's case strains whatever confidence they might inspire.
Touted as evidence for inflation (a faster-than-the-speed-of-light expansion of our universe), the new discovery of traces of gravity waves affirms scientific concepts in the fields of cosmology, general relativity, and particle physics.
The new discovery also has significant implications for the Judeo-Christian worldview, offering strong support for biblical beliefs.
I would have raised an eyebrow even at a more cautious version of that last clause, but to call this "strong support" for biblical beliefs nearly incapacitates my ability to accept that Wickman is, indeed, a scientist. Surely, as a scientist, she knows what strong support for a hypothesis, claim, or notion looks like? (Hint: Not like science shoehorned into a mythological narrative in a very unconvincing way that never would have arisen from the myth as a prediction.)

Of course, Wickman isn't so glib as to just assert her conclusion. She asserts it and then uses other assertions to give it a false air of support, all under the banner of a matter-of-fact two-word sentence that stands as its own paragraph: "Here's how."
The prevalent theory of cosmic origins prior to the Big Bang theory was the “Steady State,” which argued that the universe has always existed, without a beginning that necessitated a cause.
However, this new evidence strongly suggests that there was a beginning to our universe.
Not to sound wishy-washy postmodern, but that depends on what we mean by "a beginning to our universe." If we mean a time when the cosmos inflated into what we call the observable universe of which we are a part, then, yes, probably so. If we mean the kind of ex nihilo creation that tends to inspire a "God did it!" cry as a matter of "metaphysical necessity" from theologians, et al., probably not. In short, there is a meaningful case to be made that inflation marks something that can be called "the beginning of the universe," but not at all in a way that should be conflated with an ultimate beginning or, worse, an outright act of creation.

Here's where it's worth taking stock of what Alan Guth says about his own idea, the one that appears to have been given a significant amount of empirical corroboration (peer review pending, confirmation by other observations also pending) by astronomer John M. Kovac's BICEP2 team. Dennis Overbye, summarizing for the New York Times, included this statement.
Confirming inflation would mean that the universe we see, extending 14 billion light-years in space with its hundreds of billions of galaxies, is only an infinitesimal patch in a larger cosmos whose extent, architecture and fate are unknowable.
Notice that there is no implication here of a "beginning to our universe" that can be taken to be the entirety of the cosmos in the "Judeo-Christian" (read: grandly solipsistic) sense. Indeed, compare the above with something Wickman offers to set the context of this discovery.
The creation message in Genesis tells us that God created a special place for humans to live and thrive and be in communion with him; that God wants a relationship with us, and makes provisions for us to have fellowship with him, even after we turn away from him.
Later, she doubles down on her Judeo-Christian solipsism by pretending to know that
These physical laws established by God to govern interactions between matter and energy result in a finely tuned universe that provides the ideal conditions for life on our planet.

And now look at something Guth himself likes to say, "It’s often said that there is no such thing as a free lunch, but the universe might be the ultimate free lunch." What he means by this is exactly the opposite of what Wickman derives from the discovery regarding his theory of Inflation. Though his explanation is long, even in simplified form, I'll provide all of it. Guth, in an interview with MIT, writes,
Modern particle theories strongly suggest that at very high energies, there should exist forms of matter that create repulsive gravity. Inflation, in turn, proposes that at least a very small patch of the early universe was filled with this repulsive-gravity material. The initial patch could have been incredibly small, perhaps as small as 10⁻²⁴ centimeter, about 100 billion times smaller than a single proton. The small patch would then start to exponentially expand under the influence of the repulsive gravity, doubling in size approximately every 10⁻³⁷ second. To successfully describe our visible universe, the region would need to undergo at least 80 doublings, increasing its size to about 1 centimeter. It could have undergone significantly more doublings, but at least this number is needed.

During the period of exponential expansion, any ordinary material would thin out, with the density diminishing to almost nothing. The behavior in this case, however, is very different: The repulsive-gravity material actually maintains a constant density as it expands, no matter how much it expands! While this appears to be a blatant violation of the principle of the conservation of energy, it is actually perfectly consistent.

This loophole hinges on a peculiar feature of gravity: The energy of a gravitational field is negative. As the patch expands at constant density, more and more energy, in the form of matter, is created. But at the same time, more and more negative energy appears in the form of the gravitational field that is filling the region. The total energy remains constant, as it must, and therefore remains very small.

It is possible that the total energy of the entire universe is exactly zero, with the positive energy of matter completely canceled by the negative energy of gravity. I often say that the universe is the ultimate free lunch, since it actually requires no energy to produce a universe.

At some point the inflation ends because the repulsive-gravity material becomes metastable. The repulsive-gravity material decays into ordinary particles, producing a very hot soup of particles that form the starting point of the conventional Big Bang. At this point the repulsive gravity turns off, but the region continues to expand in a coasting pattern for billions of years to come. Thus, inflation is a prequel to the era that cosmologists call the Big Bang, although it of course occurred after the origin of the universe, which is often also called the Big Bang. (Bold mine)
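As a quick arithmetic aside (mine, not Guth's), the doubling figures are easy to check for yourself:

print(2 ** 80 * 1e-24)    # ~1.2: eighty doublings of a 10^-24 cm patch gives about a centimeter
print(80 * 1e-37)         # ~8e-36: total seconds elapsed at one doubling per 10^-37 second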
Overbye summarizes how this might have worked by analogy,
Under some circumstances, a glass of water can stay liquid as the temperature falls below 32 degrees, until it is disturbed, at which point it will rapidly freeze, releasing latent heat.
Similarly, the universe could “supercool” and stay in a unified state too long. In that case, space itself would become imbued with a mysterious latent energy.
Inserted into Einstein’s equations, the latent energy would act as a kind of antigravity, and the universe would blow itself up. Since it was space itself supplying the repulsive force, the more space was created, the harder it pushed apart.

What might have disturbed the pre-Inflationary universe? Quantum fluctuations are one candidate, as amply described by cosmologist Lawrence Krauss in his book A Universe from Nothing. Strictly speaking, here, we need not be talking about nothingness but rather the pre-Inflationary state, about which we may not be able to know anything. Overbye summarizes again,
We might never know what happened before inflation, at the very beginning, because inflation erases everything that came before it. All the chaos and randomness of the primordial moment are swept away, forever out of our view.

Note, by the bye, that even Overbye's use of the term "beginning," which gets used quite often in this context (and is hard to avoid while speaking colloquially), overreaches. If everything that came before Inflation is erased by Inflation, we don't actually know that there is a "beginning" before it. This short video by Henry Reich of minutephysics helps make it clear (hang with it until the end). Even if the pre-Inflationary state were not so jumbled up as to have destroyed our very notion of time, our necessary ignorance about that era would provide no information about how long it lasted or what came before it.

As should be clear now, Wickman's claim that "this new evidence strongly suggests that our universe had a beginning" is only meaningful if we interpret "beginning" to mean "Inflationary Period," which is exactly not what she needs to make her case and only compatible with Genesis via the most generous potential reading of that text. And from that mistake, Wickman's piece is almost entirely the standard apologetic gobbledegook, the chief difference being that her pretentious confusion is poured out from a mind that has apparently had considerable formal scientific training. To borrow from Peter Boghossian, to reach the conclusion she does, Wickman is simply pretending to know things she doesn't know.

To give a sense of what I'm talking about with this claim, let's analyze a bit more of Wickman's piece. First, of course, consider that she's pretending to know that the evidence for an Inflationary Period implies "the universe had a beginning." She doesn't--can't--know that, whatever colloquial speech does to confuse us on this matter. She makes this particular point worse, though, with a cubic zirconia that could have been lifted from William Lane Craig's hoard,
If the universe did indeed have a beginning, by the simple logic of cause and effect, there had to be an agent – separate and apart from the effect – that caused it.
The simple logic of cause and effect might suggest that the universe had to have a cause here--supposing "the universe" is itself a thing like all of the things we see in the universe, a contention we have no good reason to accept--but it definitely does not automatically suggest an agent cause. Wickman pretends to know that there must have been a cause of the universe and that the cause is an agent. This is definitely not an auspicious start to her case, and she makes it worse by continuing: "That sounds a lot like Genesis 1:1 to me: 'In the beginning God created the heavens and the Earth.'" (I'm not kidding. She really wrote that! And CNN actually published it!)

The next portion of her piece turns to straight apologetics of the kind that only stands a chance of convincing those who are seeking to be convinced.
We also need to remember that God reveals himself both through scripture and creation. The challenge is in seeing how they fit together. A better understanding of each can inform our understanding of the other.
It’s not just about cracking open the Bible and reading whatever we find there from a 21st-century American perspective. We have to study the context, the culture, the genre, the authorship and the original audience to understand the intent.
This is the kind of thing that sounds good on paper so long as you can keep your attention diverted from considering essentially everything else in the Bible, starting somewhere a bit further down on the same page. But, Wickman enlightens us, "So, we know that Genesis was never intended to be a detailed scientific handbook, describing how God created the universe. It imparts a theological, not a scientific, message. (Imagine how confusing messages about gravity waves and dark matter might be to ancient Hebrew readers.)" I'll note that, technically, if her Judeo-Christian beliefs are true, she's pretending to know this as well--not knowing the mind of her God or his intent in writing the most important book ever--while also pretending to know that ancient Hebrew readers are so silly and uninformed that their all-powerful God couldn't make them understand the universe in a 21st century context.

In terms of pretending to know things she doesn't, let's linger briefly on "it imparts a theological, not a scientific, message." What exactly is a theological message? More importantly, how does one distinguish a "theological message" from an idle, but grand and usually solipsistic, speculation reinforced by pretending to know things about it that you definitely do not? In short, since all of theology is a semi-polished turd of pretending to know what people don't know, to admit that Genesis imparts a "theological message" is to illustrate that she's pretending to know things she doesn't.

I don't wish to quote the brief remainder of Wickman's theological rambling, punctuated with brief reminders that she is, indeed, a scientist who loves the intricacy of the natural world and how it all works so "synergistically." It is, of course, rife with the standard Christ-confused pretense to knowledge known more widely by the term (Judeo)-Christian "faith." Instead, I'll just comment upon her conclusion,
If God is truly the creator, then he will reveal himself through what he’s created, and science is a tool we can use to uncover those wonders.
Here we have yet another desperate effort on the part of the hopeless conciliatory brigade that likes to pretend that science and theology are in some way compatible endeavors. That entire effort balances upon the idea that certain speculations (theological ones) can be effectively taken as supertruths, and then those can be shoehorned into whatever science allows us to determine is actually the case about the universe. This is to say that given a set of facts, mythological narratives can be written and re-written around them. Of course, this entire effort must willfully ignore the fact that the role of science in enlightening theology has in total been one that slashes and burns the entire theological lot where it has even the least to do with the functioning of the universe.

I cannot urge Leslie Wickman strongly enough to apply to herself the intellectual honesty and rigor required by her scientific training and to stop pretending to know things she does not know, repudiating Christianity as nonsense and retracting this ridiculous Belief Blog post from CNN's website.

Monday, March 17, 2014

Spirituality, erm.. something like that, for atheists

In God Doesn't; We Do, I devoted the tenth chapter to discussing "spirituality" without God (after using the ninth to explore my thoughts on how religion tends to make for "Spirituality Lite"). Sam Harris, who is certainly expert in this field, will be releasing a book about the topic this fall, entitled Waking Up. The idea is important to most people, including at least some of us who don't believe in God (and those who believe there is no God), and yet there is tremendous resistance to it from many who reject the supernatural.

The heart of the resistance is that the term "spirituality" is loaded with baggage, either being connotative of religion or outright religious. Perhaps this is an accurate assessment--I don't know. I do know that after a lot of trying, I'm still at a bit of a loss for anything like a better term for the baby that I think gets thrown out with this woo-soiled bathwater.

To reclaim a term?

Part of the discussion hinges on the term. If "spirituality" really is the best term--even if the reasons it is best amount merely to the force of linguistic tradition--then we should make an effort to reclaim it from the religious and put it squarely in the light of the naturalistic. In particular, it belongs in the brain, that organ that apparently defines all of conscious experience. Whatever is going on--and it is something--is a fact of neurobiology and, to some extent, sociology, and clarifying this in a way that removes the antisupernaturalist taboo on the whole endeavor would be nothing but a boon for humanity.

If, instead, we can think of a better term that captures all of the legitimate, non-dualistic meanings that are understood by that word, then perhaps we should use the alternative. Doing so would help effect a cleaner divorce from the fantastic realm of spirits, which would possibly be helpful, at least in the short term. Note, though, that people will continue to imbibe "spirits" and express team "spirit" while rooting for the home team without batting an eye about the equivalent dualistic baggage. Also, the chocolate I had the other night was absolutely divine (godlike), and now I'm considering getting more of it to share with my angelic (like angels) children unless their behavior becomes diabolical (like devils). It's really quite glorious (full of the splendor of God).

To replace the term?

Following the thoughts of a friend, I tentatively suggest using the term "the numinous" for what normally gets called "spirituality," though there are issues with this word as well. Mainly, "numinous" is directly tied up in meaning with overtly religious ideas, so it isn't actually baggage-free. Indeed, "numinous" comes from the Latin word numen, which means divine will. People who wish to maintain special statuses for their cherished religious beliefs will be quick to point this fact out and exploit it, and those who want to separate themselves as much as possible from worldviews of that kind will complain likewise.


On the other hand--and hence my offering "the numinous" as a suggestion--the term is relatively unknown, and its connotative baggage in popular use is almost absent. That gives us an opportunity, one that will almost certainly be poisoned if we try to take it, to define the connotation of "the numinous" to mean "spirituality without the dualistic baggage about spirits and souls."

A tempting alternative would seem to be to retreat to the Greek for mind, nous, and use the word "noetic." That's not possible, though, as some of the most outlandish woo-meisters around have already appropriated that word, even talking of "noetic sciences." Spare us all. "Transcendent" is similarly poisoned.

Since much of what I would bundle under the "spiritual" umbrella is tied tightly to Abraham Maslow's idea of self-actualization, the term "actualization" could be used, but there are at least two problems with doing this. The first big problem with it is that it doesn't actually capture everything under the "spiritual" umbrella, which includes so-called "transcendent" experiences, the positive-psychological states sometimes called "elevation" and "flow," and social elements. A second is that used in this way, "actualization" doesn't come off as much less woo-ish than does "spirituality," despite being centered upon "actual" instead of "spirits."

Ultimately, given these challenges, unless a properly new word that fits can be suggested, "spirituality" might be the best term, and it should probably be reclaimed. I'm open to good suggestions.

The difficulties--meaning

There's a particular issue with the term "spirituality" if we want to replace it with some completely naturalistic alternative word; it's notoriously hard to define. Consider the textbook Psychology of Religion: An Empirical Approach, 4th ed., by Hood, Hill, and Spilka, which punts on the effort, saying it, like religion, is too nebulous to admit good definitions. In the vein of this challenge, they make a significant case in the first chapter that "nothing but" kinds of definitions for religion and spirituality have a very good chance of being wrong. Suffice it to say that if some of the leading experts in the field writing a popular textbook for it feel "spirituality" is a word too difficult to define, finding a superior alternative is going to be one hell of a challenge.

"Spirituality" simply means a lot of things, and much of what it means is tied up in whatever is meant by "religion." Hood, Hill, and Spilka, and the legions of people who declare themselves "spiritual but not religious"--a rapidly growing segment of the population that I think most atheists should warmly welcome despite its inherent problems with falling prey to nonsense--agree that "religion" and "spirituality" are, in fact, distinct, however much their Venn Diagrams overlap. For my part, I tend to think of "spirituality" as being a fundamental part of psychological and sociological functioning for human beings, and religion is a parasite on spirituality. (In this light, discarding the word "spiritual" because of its ties to religion is a bit like discarding dogs and cats as pets because without fastidious treatment they are all but certain to carry fleas.)

And then, making matters worse, there's another "spirit" word at play here. It is the one the antisupernaturalists hate; it is the one most of the "spiritual but not religious" people mean when they say that spirituality is important to them. That word is spiritualism, and it does not mean the same thing as spirituality, despite the similarities.

The lay definition of spiritualism is what we associate with the frauds called mediums who claim to talk to the dead, and the philosophical term means "the doctrine that the spirit exists as distinct from matter, or that spirit is the only reality." Though it frequently is, it should not be conflated with spirituality, and being against spiritualism is not a good reason to decry human spirituality.

In fact, looking up the entry for "spirituality" in Wikipedia returns something very interesting. Note, particularly, the first line. (Links and emphasis in original.)

The term "spirituality" lacks a definitive definition,[1][2] although social scientists have defined spirituality as the search for "the sacred," where "the sacred" is broadly defined as that which is set apart from the ordinary and worthy of veneration.[3]
The use of the term "spirituality" has changed throughout the ages.[4] In modern times, spirituality is often separated from Abrahamic religions,[5] and connotes a blend of humanistic psychology with mystical and esoteric traditions and eastern religions aimed at personal well-being and personal development.[6] The notion of "spiritual experience" plays an important role in modern spirituality, but has a relatively recent origin.[7]
Whatever we mean by the term "spirituality," we formally agree it shouldn't be confused with the idea that spirits exist distinct from matter.

Personally, I think that the only parts that need to be retained are that these traditions involve practices (the traditions not mattering) that reliably produce certain brain states, and that those neurobiological events are tied to personal well-being and personal development. I agree with many psychologists in that some dimension of human experience that usually gets called "spiritual" is vitally important--a need--though I don't believe it extends in any significant way beyond neurobiology, psychology, and sociology.


The difficulties--cultural role of "spirituality"

Another enormous difficulty exists when we talk about the role that spirituality plays for people. Though I often disagree with his interpretations, I find social psychologist Jonathan Haidt (of NYU's business school now) to be profoundly insightful with many of his observations. One of those is that human morality can be understood as psychosocial valuation, means by which we evaluate ourselves and others, and that human psychosocial valuation is at least a three-dimensional universe. The three dimensions, in brief, are kinship/closeness, reputation, and "divinity," which is at least linguistically tied to being "spiritually good."

This third, abstruse, probably-poorly-named dimension is often invisible unless one knows how to look for it, but it defines most, if not all, of the psychosocial evaluative rules (culturally peculiar moral frameworks) that can't be explained via closeness or reputation. In the contemporary grumpy-atheist subculture, carrying such low esteem for woo that one completely rejects whatever constitutes spirituality is a state of "divinity" in that culture, which is to say "spiritual goodness." "Good atheists" don't buy into nonsense, after all. More widely, that guy at the gym with that hat and that strut that you recognize to be a "douche" is a "douche" because he scores low on your "divinity" spectrum.

And, other than all of the "cool" people you've ever known, guess who scores highly on the divinity spectrum of any given culture or subculture? (NB: I think it is the moral values framework that defines the culture, not the other way around.) "Spiritual" people, at least usually. Why do so many of us turn to priests, pastors, gurus, shamans, swamis, and "the enlightened," for certain insights of wisdom? Further, why do we define these people as being "good" in the first place? They are so because they're "divine," which is to say that they're doing well in this abstruse dimension of psychosocial valuation Haidt called "divinity."

Being devout in a religion is a fast shortcut to being "good" in any culture or subculture that has positive estimation of that religion. For those who think poorly of religion, we should note that secular atheists have their own sets of moral frameworks (no, we are not one unified cultural group), and even in the cases when ours are more grounded in the real human values (moral facts), we have our own "divine" role models. This is true even when we reject authority and even reputation, as often happens, and the ways this comes up are myriad and nearly ubiquitous. Since this is an integral component of how we evaluate ourselves and others, we cannot simply ignore it out of our systems. We do it without realizing we're doing it even when we pretend we're not because it's a huge part of how we make sense of ourselves and each other. (Indeed, I'd argue it's the biggest part since we're only close with a relatively small number of people and, likewise, only a relative few are famous, even in the niche circles we sometimes run in.)

The enlightened

To explore further, let's turn to the term "the enlightened" because it crosses a lot of borders. One need not get caught up in Buddhism or Taoism to use the term "enlightened" to describe someone who, in rougher language, really has their shit together. We nearly all venerate the wise and the generally "good." And note that this is stronger when moral values are tied into it. Consider just one example: How many moral vegans consider themselves "enlightened" to the reality of animal suffering? Is it all of them? And does it matter that many of these people are atheists who don't believe in the first bit of Buddhist doctrine? Not in the least. "The enlightened" are those who see some truth, and we hold high (moral) esteem for these people, sometimes even when they are outside of our own cultural values frameworks. The "holy," the "divine," the locally morally "good" all have this in common.

This means that spirituality--here, doing well by one's cultural moral framework--is tied up with moral evaluation, which is utterly integral to human cultures, thus humanity. Perhaps it is more accurate to say things the other way around, that by attempting to refine oneself in the moral framework of one's particular culture or subculture, one is engaging in some element of spirituality. To intentionally attempt to "better" oneself in any way tied to the third dimension of psychosocial valuation is a significant part of what would be called "spiritual practices" in any context that would let itself use that word.

It's ridiculous, but it's not

Of course, many "spiritual practices" are ridiculous because they're based upon superstitions or upon rituals and traditions based upon superstitions. Many others are not, even if the practices themselves got their start in the ridiculously superstitious. Sometimes it's a complicated blend. Take the Catholic practice of confession, for instance. It seems that this action, which has nothing to do in reality with beseeching a curious holy and impartial intermediary to help the all-powerful, all-knowing, all-loving, perfectly just Creator of the Universe to forgive you for whatever transgressions, has a potent psychosocial role. It resets people's "to hell with it, I'm bad, so I might as well be bad" self-assessment. Note that it can't even be taken as asking forgiveness from society since confessions are done in private (supposing a priest that doesn't gossip). This magic little ritual, superstitions and all, works profoundly well and may actually have become part of the Catholic doctrinal structure because it works, even though no one knew why until recently. (Note that understanding the mechanism can allow a secular alternative that is likely to be as effective or more so with the superstitious elements removed.)

It is not an unfair assessment to see much of the function of religions as very early attempts at psychology and sociology, and despite their dramatic limitations due to lacking falsifiability and statistical analysis, they managed to tease out many keen observations relevant to these fields. (And note how often these fields discover something that validates something about religion--a fact no religious apologist will let you ignore for long!) Inexact, dogmatic, desperately prone to getting things profoundly wrong, and hung up on superstition as religions are, a good deal of what made its way into religious doctrine still counts as fairly solid psychological and sociological observation.

This abstruse third dimension of psychosocial valuation, "divinity," is the result of some of what goes by the name "spirituality," and most religions have tapped into its importance and psychosocial utility (going rather too deep, in fact). Going to church regularly will make you "good." Keeping the Sabbath will make you "good." Taking off your shoes before entering the ashram will make you "good." Turning to face along the great circle connecting you to Mecca and praying five times a day will make you "good." Attending to the rites, rituals, and traditions will make you "good." Honoring what your culture finds "divine" will make you "good." Denying this fact isn't going to make it go away, but it will cripple us from working with it in an effective manner, while religions continue to exploit it.

Ridiculous, but not ridiculous, practices

Other spiritual practices, like meditation, are similar. Meditation, on the face of it, is ridiculous. It seems a patent waste of time. The instructions are to sit there--or stand there, or lie there--and either do some obscenely mundane and boring attentional task like focusing on the breath or the wavering flame of a candle or, crazier still, do nothing at all. And yet not only will you hear from the personal experiences of those who meditate that it is emphatically not a waste of time, but hard research shows the same. In The Happiness Hypothesis, Jonathan Haidt indicates, in fact, that ten minutes of quiet sitting, even by people who don't think of it as meditation and don't know "how" to meditate, can be as effective as Prozac (or similar drugs) or cognitive-behavioral therapy for dealing with problems like depression and anxiety. Other research corroborates these suggestions, among other unequivocal benefits of this traditionally "spiritual" practice. To paraphrase a popular sentiment among atheists: meditation, it works, bitches.

And with meditation, if done seriously, comes stuff that's hard to describe in words that don't start sounding very "spiritual." This happens to people with certain kinds of brain injuries and to people on certain drugs as well. Whatever the neurobiology of these states of consciousness happens to be, the language that deals with it up to and including now is nearly all "spiritual" language. If that's the best way we have to think about brain states of these kinds, then we're kind of stuck with thinking about them that way. Perhaps Harris's new book will clear this up, or perhaps it will argue instead for reclaiming the word from the woo-meisters and the Spirituality-Lite parasites.

Nothing in the nature of these experiences and practices or in the measurable changes in the brain and cognition demonstrably caused by them suggests anything but that they are in some way of rather substantial value. And nothing in them suggests the first bit of dualistic woo. To imagine otherwise and thus reject the lot because one doesn't like the connotations of the word "spirituality" is a grievous error and profound tragedy, and it is based on little more than a knee-jerk overreaction. Certainly those who explored these matters before we knew enough to describe them well could only do the best they could, which resulted in steeping them in overt spiritualism, and certainly the frauds who exploit people with it take advantage of the desperation often tied up in spiritualistic fantasizing. These failures do not generalize to the field in total.

The rituals, the traditions, the values about "purity" in one form or another, and even the pursuit of wisdom via means like meditation all seem ridiculous, and they are--but they're not. Recognizing that they form an important part of what it means to be human beings, we being ultrasocial animals with complex psychologies and societies, is something that is largely missing from secular society, and it may be one of secularism's major weaknesses. Still, I empathize with people who dislike the term, and I warmly invite them to do better if they can.

Thursday, March 13, 2014

Moral philosophy and values-mysticism

Last September, I wrote an essay titled "Are moral philosophers scientific pessimists?" wondering if part of the reason that moral philosophers are so upset by Sam Harris's The Moral Landscape is that they believe that science simply won't ever be capable of answering questions about human values. (To be fair, rather than a moral philosopher, the target of my commentary in that piece was conservative Op-Ed columnist Ross Douthat, writing for the New York Times.)

I still do think many moral philosophers are scientific pessimists in exactly the way I outlined in my previous essay, but I don't think this accounts entirely for their reluctance to accept the notion of a science of moral facts. In addition to their pessimism about science--which may be justified in a system as complex as neurobiology and its interactions in societies--I suspect that many moral philosophers are also values-mystics. They elevate the notion of "value" beyond fact into a mystical realm which only they can effectively plumb.

An unlikely connection

I have known for a long time that I get much of my best work done while listening to music from video games, mostly those I played as a child and teenager (in fact, I'm doing so now). And I'm not alone in this. I know several other people who have experienced exactly the same circumstances, and we've even noted together at times that the soundtracks for video games outstrip even classical music for getting many of our tasks done. I had, until today, attributed this largely to a combination of the fact that the music is largely neo-classical instrumental and, more importantly, nostalgic.

I ended up wondering today why video game music seems so conducive to getting work of various kinds done, and I realized that video game music is composed specifically for this purpose. The music playing in the background of a video game, as with movies, is meant to enhance mood and add to the overall scene, but in contrast to scores timed to the silver screen, video game music is also designed specifically to keep the player engaged in the game without distracting her focus. (It's also often upbeat in a task-oriented kind of way.)

Then the operational question struck me: how did the composers of video game background music make music so conducive to this particular goal? The problem seems inordinately hard to solve, but rooting out how they did it wasn't. Obviously, experience and knowledge of musical composition for setting moods played a role, and getting it just right required some kind of editorial (read: evolutionary) process by which musical formulae that worked were kept and those that didn't were rejected. And the measuring stick by which various songs and types of songs were accepted or rejected was user experience. This isn't hard to accept, and it isn't really either here or there.

The ah-ha!

The next deeper question provided the eureka moment: how did the game design team know which kinds of music facilitated gameplay without distracting the player from her task? "Experience" is too vague a response. The answer is obvious, though, and superficially banal: with their brains. That's the insight I've been looking for regarding the fact-value distinction for months. With their brains.

A truly sophisticated instrument

The thought I've been trying to think for months about moral philosophy is that the human brain is a remarkably sophisticated tool. Particularly, it is a magnificent instrument for receiving certain kinds of complex inputs and interpreting them into what we call "the human experience." Thus, even without having any of the relevant hard data about the interaction of musical and psychological theory, the brain is able to crunch out the facts about video game background music that create an experience that will work--enhancing engagement without distracting focus--for most human psychologies.

The same is true for another complex phenomenon integral to the human experience, psychosocial valuation, the various means by which we evaluate ourselves and others. One of the key aspects of psychosocial valuation is morality. In that sense, then, the brain is a remarkably sophisticated instrument to work out the details about moral facts.

What are moral facts, then? To quote physicist Sabine Hossenfelder (this link is well worth the diversion), "Morals and values are just thought patterns that humans use to make decisions." Moral facts are the facts relevant to those thought patterns, including an assessment of the consequences of applying them. Sam Harris's Moral Landscape saliently grounds moral facts in the quality of the experiences of conscious beings ("well-being and suffering," in Harris's formulation). The facts are goal-directed, and we call them "values," and generally speaking, the goal is "the good life" (more on this shortly).

When we have a goal, there are immediately questions raised that bear upon its satisfaction, and the human brain is a remarkably sophisticated tool for trying on and testing the answers to those questions--just like with music and many other complex phenomena. Indeed, at least for now, the human brain is the best tool that we have for evaluating moral facts. We simply do not possess the technology to have developed better ones. In fact, we don't even possess the technology to know if we can develop better ones. In this ignorance hides the scientific pessimism of many moral philosophers, which may also be their hope.

The matter of whether we can develop a better tool lies apart from this discussion, though it's entirely fair to point out that the effort will probably be unimaginably hard. Whatever the device is, it will very likely have to be designed to have a decent understanding of what it is like to be human, since ultimately we are talking about human values--facts that arise from within and bear upon the human experience. That we may never succeed in the endeavor to outstrip the human brain in this regard, though, is immaterial to the point that what the brain is dealing in is facts.

To elevate the status of values beyond the level of being a particular kind of fact is to flirt with a kind of mysticism about values, and that mysticism is part and parcel of the thinking of too many moral philosophers and of essentially all religious people. It's almost a sort of dualism, one that says there is something fundamentally different between a human and an oak such that we can be said to be acting according to our values when we choose nutritious foods while the tree cannot be when it grows its foliage in the pattern that optimizes sunlit surface area for a given amount of wood. (If an oak is too distant a cousin of ours, pick the most complicated animal you're willing to say can't value meeting its basic needs--surely there is one.)

A note about the good life

The invariable question here, one that muddies up the water, is what makes the good life good, as if this implies we must insert a mystical value, one independent of the facts about the world, to make sense of a "good life."

I don't think "good" is arbitrary, though, and Harris did a great job with articulating the reason. If we imagine a hypothetical state of the worst possible suffering for every conscious being as the pinnacle of whatever is meant by the word "bad," then "good" is that which moves us away from it. (Harris is right to ponder rhetorically what else the word "bad" could intelligibly mean.) That we wouldn't necessarily call the circumstances "good" until we've reached a certain elevation in Harris's metaphorical moral landscape need not distract us from this understanding of "good." The "good life" is a life that succeeds at the goals of living--meeting our various biological, psychological, and sociological needs--well enough for it to outstrip being a misery by some arbitrary margin. It need not be more complicated than this.

What about the fact that different people value different things? So what? That's the chief effort in the moral dimension of the human drama, trying to integrate, streamline, and mitigate various individual differences in what helps the most conscious beings achieve their own best-possible "good life."

Another question stirs the muck. Do people have to value the "good life"? This question--which would only be asked by a philosopher--is a weird one. I'd urge anyone tangled up in it to decide whether the word "value" still applies if we answer in the negative. (Note that what some people consider a "good life" might be quite twisted by other people's standards, and we would recognize this as a pathology: something aberrant enough, and injurious enough to others' pursuit of the good life or to one's own well-being, to warrant special treatment.)

Moral values versus human values

Indulge one more brief aside for me to explain a difference between "moral values" and "human values." A science of morality would deal with the latter. The former appear to be cultural artifacts, though I think it's actually the other way around: a culture can largely be defined in terms of the moral values that the people in it espouse. Of course, much of the time, and for a great many values, moral values align with human values (read: moral facts). On the other hand, they can go staggeringly wrong, and moral values can--and have, and still do--fall very much afoul of human values.

The difference between the two, really, comes down to a matter of guesswork. Moral values are the guesses about human values that various peoples have made and codified into moral frameworks that define their cultures and subcultures. (People like, and may need, relatively simple rules of thumb and thus moral codes.) Though often insightful about human values, moral values are almost universally tainted by superstition and other forms of bad thinking. Moral values are normative. The human values they attempt to approximate aren't.


Facts are facts

No appeal to the complexity of the situations in which moral facts hold salience can elevate human values out of the realm of facts or put them outside the province of scientific inquiry. Such arguments are simply statements that moral facts are complex enough that the only tool we possess honed finely enough to sort them out is the human brain. And so what? Moral facts are facts that, for now, require a human brain to make sense of, subject to all of the systematic and instrumental error that comes as a byproduct. Complicated and error-prone measuring instruments do not a fact unmake.

When it comes to attempting to sort out ethics, the only elements involved are certain kinds of facts and epistemically limited estimates (probabilities, broadly) about what will become facts in the future. We say that we "should" do this or that, implying that we hope to achieve some goal or another, and "should" merely reflects that we perceive our chances of success to be higher with that course of action than with any others we have considered. Moral values are particular patterns of thought for handling certain kinds of psychosocial data as they come into and are processed by the human brain. Thus, the effort by moral philosophers to divorce values from facts is misleading, relying on something like mysticism about the nature of the human mind and its capacity to choose its values.

To wit, the composition of musical pieces that achieve a particular effect on the psychological state of the listeners is a clear-cut case where we have information that is extraordinarily complicated, admits exceptions across the spectrum of human experiences, and yet is considered to be well within the realm of facts about the world. The music impacts our nervous systems via our senses, the changes in our brains have psychological and thus physiological implications, and we don't lose sight for a moment that we're talking about the interactions of various kinds of sound waves and the immensely complex activity of the human brain--that we're talking about facts.

Suddenly, though, when talking about the aspect of human psychosocial valuation that we call morality, we have to treat values as though they are something different, indeed special. When various incidents in our lives, including our own behavior and that of others, impact our nervous systems in ways we consider evaluative, lead to changes in our brains that have psychological implications in terms of valuation, and happen to be about the interactions of various states of the world with the immensely complex activity of the human brain, we are mystically no longer talking about facts; we're talking about something completely different: values. This seems to be little more than an overassessment of how much free choice we have about our values, the cognitive error of believing that we could value different things than we do if we wanted to. This overassessment, the separation of facts and values, is values-mysticism.

Free will had to show up eventually

Of course, we "could" value different things (which is a prerequisite to feeling that we "should" value something else), but not really. I do not intend to devote much time or space in this essay to addressing the problem of free will, but it simply seems like we do not have it to work with. For the moment, it suffices to say that if we cannot account for what we think and have no real and fundamental freedom of choice, the values we could "choose" are seriously limited, as in uniquely determined.

Certainly, lacking free will, we cannot choose to value anything, at least not in the way we usually mean by that term. We still make decisions, of course, but the "we" doing it isn't what we normally think of as "we," and the process isn't the same as what we imagine it to be. We tend to think of ourselves as the conscious authors of our thoughts, to paraphrase Sam Harris, but in reality, "we" are "whatever brain process works with whatever input [we] receive," to quote physicist Sabine Hossenfelder again, from the same piece as earlier. (Note that Hossenfelder doesn't argue that free will is impossible, just that we don't have any good reasons to believe we have it.)

Critically, we, as the illusory conscious authors of our thoughts, cannot choose to value something other than what we value. Our values are simply goal-directed facts about ourselves, facts that change over time for various reasons--facts related to our biological, psychological, and sociological welfare--and we cannot account for them as being the result of free choices. I have no more influence over what I value than I do over what I believe. Both are mental states--facts--based upon other mental states, that which I esteem to be true. (Whether "true" means true or provisionally true is immaterial here.)

Values-mysticism

Values-mysticism takes Hume's fact-value distinction at face value without accepting that values ultimately either are a kind of fact or are fancies of the mind (which seems to have been Hume's point). It asserts that the fact-value distinction is a real one and that values are their own kind of mental object, and thus it concludes that science cannot even in principle address matters of human values. All it can do is hope to provide facts upon which the high priests, moral philosophers, can work their magic to provide us with moral insight.

Pause for a moment to appreciate how preposterous this is, unless we just take it to mean that the human brain is still the best instrument going for working with moral facts. The basic assumption is that what is impossible to handle by any amount of careful observation and testing can instead be dealt with by mulling it over in an armchair. Either this admits that the brain is a sophisticated instrument honed by natural selection to work with moral facts, or it is so wrongheaded that it should be offensive to our better sense. If human values are so out of the reach of science that we cannot even in principle deliver a science of moral facts, then they are even further out of reach from the armchair.

Values-mysticism is a bane of moral progress, and it is the province of moral philosophers who refuse to remake themselves as moral scientists. Not only does values-mysticism overcomplicate an already desperately intractable field, it leaves open the door to the real priests--those values-mystics who have so mystified themselves about moral values as to believe they can only be made sense of by deifying an abstract concept they call "God." The un-hijacking of morality by religion remains impossible so long as human values are treated as mystical.

Saturday, March 8, 2014

A Chinese story and Three Mile Island for Tom Gilson

In a blog post today that has nothing really to do with me, Tom Gilson mentioned me so that he could talk about his current suffering (it's his blog, so that makes sense) and how it has helped him to realize something important about context. Thanks to the kind reader who pointed this out to me.

To make a point that stabs vaguely in the direction of the Problem of Evil, Gilson talks about his foot injury and "context," motivating the discussion by noting that I "played the context card" on him in a comment a while back (a point he raised on the "Deeper Waters" podcast recently as well, linked here for those willing to suffer through the listen).*

Tom culminates his post with the following deep thought, presented long-form to maintain the context.
We don’t know what’s going on behind the scenes, out of sight, in the zone of the unknown. A stress fracture is good news when put up against a failed tendon transplant.

What if I’d had a stress fracture, though, without knowing that the alternative was a torn tendon? That’s pretty far-fetched for me in my current medical context, but what about the first time I had a stress fracture, back in the late 1990s? It happened while I was recovering from a lesser case of the same accessory ossicle issue. The treatment for that stress fracture then might have kept me from tearing the tendon then.

Or it might have kept me from a car accident. Or from accidentally hurting someone some other way. Or … who knows? Maybe it brought me closer to the Lord, and stronger in character.

Pain and suffering is a problem, no doubt, and atheists and skeptics often tell us that gratuitous suffering means there is no all-powerful, all-good God. Who knows what’s really gratuitous, though? Who knows what’s going on behind the scenes? My doctor was relieved when he found out my situation wasn’t what it might have been, but which one of us can compare our pain to what might have been? How could we know what might have been? [Gilson's emphasis]
This "might-have-been perspective" isn't new to Tom Gilson (or most teenagers when they stumble upon it for the first time), and he's right that it isn't terribly persuasive--except to the gullible and the desperate. It is, of course, a deepity, and it seems to get this quality from a sort of application of the Texas sharpshooter fallacy in that the bullseye is painted around the better outcomes instead of others, once they're known.

An old Chinese story

Once upon a time, a farmer and his son came out to find that their best horse, upon which they depended to do much of their work on the farm, had jumped the fence and fled. "This is a tragedy!" the farmer cried.

A neighbor, overhearing the farmer's lament, spoke up, "Good or bad, who can say?"

A few days later, the farmer heard an uproar in his fields and ran outside. What he saw amazed him. Not only had his best horse returned of its own volition, it had brought with it six fine and sturdy wild horses, each worth a great deal. "The neighbor was right!" the farmer exclaimed. "Who can know what is good and what is bad? This is great!"

The neighbor again spoke up, "Good or bad, who can say?"

As chance would have it, a few days later while the farmer's son was working with the new horses, trying to tame them, one of the wild beasts kicked him and broke his leg--and right before the critical harvest season when all hands would be needed. The farmer didn't miss a beat. "What terrible fortune! The old neighbor is right again! This is very bad!"

The neighbor again spoke up, "Good or bad, who can say?"

After the farmer had cared for his injured son for a few days, a knock came at his door. It was a recruiter from the local garrison. The nation was going to war, and the farmer's son was to enlist. But there he was lying with a broken leg, and the recruiter saw him and gave the farmer's son a medical deferment. The farmer would not lose his son off in a bloody war. "What great luck!" shouted the farmer.

The neighbor again spoke up, "Good or bad, who can say?"

Three Mile Island

The story of the 1979 accident at the Three Mile Island nuclear facility is familiar enough to require no telling, as are a couple of other nuclear accidents at this point. The "might-have-been perspective" about the Three Mile Island incident (and the catastrophes at Chernobyl and Fukushima) might suggest that the event wasn't all that bad, say if it led to the kind of nuclear regulatory activity that would prevent a far more serious future disaster. The nuclear accident didn't kill millions, and the farmer's son didn't have to go to war. (Note that we don't know if the farmer's house burns down later, killing the son who would have been at war otherwise....)

This "might-have-been perspective" kind of thinking sort of has a name given to it by Daniel Dennett in 1996 in Darwin's Dangerous Idea. Dennett uses the phrase "Three Mile Island Effect" to refer to the notion that it is possible to consider the Three Mile Island accident an overall good instead of the obvious bad that it seems to be. And it's fair enough to bring up the challenge to certain lines of moral reasoning (as Dennett is doing).

The issue is--as Dennett points out--that we do not possess the necessary knowledge to use a Three Mile Island Effect argument as a context card that lets us do much of anything with it, and it certainly doesn't mitigate the Problem of Evil.

The Problem of Evil

Superficially, it seems that Gilson is just making an observation about perspective--how could we possibly determine whether our pain and suffering are good or bad? Our pain might cause us to avoid greater pain, or death. Our suffering might bring us "closer to the Lord."

But Gilson isn't making a case against particular lines of moral reasoning, and the statement he's making isn't an empty one. He says, "This might-have-been perspective is just one way of looking at the problem of pain and suffering." Gilson, then, is appealing to the Three Mile Island Effect to poke at the Problem of Evil. Of course, it doesn't work. It's an argument from ignorance: "Good or bad, who can say?" The Rock of Atheism doesn't just go away because of an argument that effectively says that we can't prove that the bad is not a "might-have-been" good.

It's likely, in fact, that Tom Gilson agrees with the view presented by (somehow) noted theologian and philosopher Alvin Plantinga in this insane interview with Gary Gutting in the New York Times, The Stone, Opinionator. Plantinga waxes positively giddy about the Problem of Evil, painting one of the best portraits of Doctor Pangloss seeing the world through Jesus-Colored Glasses that has ever been expressed in print. Tom Gilson is very likely to agree with this madness:
I suppose your thinking is that it is suffering and sin that make this world less than perfect. But then your question makes sense only if the best possible worlds contain no sin or suffering. And is that true? Maybe the best worlds contain free creatures some of whom sometimes do what is wrong. Indeed, maybe the best worlds contain a scenario very like the Christian story.

Think about it: The first being of the universe, perfect in goodness, power and knowledge, creates free creatures. These free creatures turn their backs on him, rebel against him and get involved in sin and evil. Rather than treat them as some ancient potentate might — e.g., having them boiled in oil — God responds by sending his son into the world to suffer and die so that human beings might once more be in a right relationship to God. God himself undergoes the enormous suffering involved in seeing his son mocked, ridiculed, beaten and crucified. And all this for the sake of these sinful creatures.

I’d say a world in which this story is true would be a truly magnificent possible world. It would be so good that no world could be appreciably better. But then the best worlds contain sin and suffering.
Here's news for Alvin Plantinga and Tom Gilson with him: Three Mile Island isn't getting you from here to there, and neither is a stress fracture in your foot. The Problem of Evil doesn't go away because of a raving call that maybe, indeed, this world is the best possible world, or that, hey, it could have been worse. To possess the imagination to understand that it could have been worse is to be fully equipped to see plainly that it could have been better too, and that's an analysis that an all-good God cannot survive.


--- --- ---
*The "context card" I played on Gilson referred to his understanding of the Doubting Thomas story from the Gospel of John (20:24-29). I had asserted that a plain reading of the story indicates that Jesus advocates believing without adequate evidence. Gilson's reply is that in context, the story has Jesus mildly "rebuking" Thomas for doubting given the testimony of the other disciples, which Gilson argues should have been sufficiently good evidence for Thomas to believe.

I'll give Gilson this: he is interpreting the context of the story correctly. Then again, so was I. Thomas was presented with one of the most extraordinary, unlikely claims in the history of the world--and Christians not only realize this, they cherish it, taking the Resurrection story of Jesus to be uniquely true while the dozens of others strewn throughout ancient literature are just ridiculous myths or legends. The only evidence Thomas had for this literally unbelievable claim was the testimony of other people he knew to be zealots in the Jesus movement, and Thomas rightly concluded that, for such a claim, it wasn't good enough.

Jesus--a.k.a. God--disagreed. Jesus reportedly** said to Thomas, after allegedly providing him with solid evidence, "Because you have seen me, you have believed; blessed are those who have not seen and yet have believed." 

So--in context--the Doubting Thomas story does not admonish people to believe in Christianity on no evidence, it admonishes people to believe on bad evidence. 

In Thomas's case, the evidence is testimony, which we know is horribly flawed due to all sorts of peculiar biases and other psychological issues human beings have, not to mention lying. (Commenters at Gilson's place seem to like to bring up the role of testimony in courts of law, but they're advised to note both that our criminal justice system is far from perfect and that testimony is always trumped by good evidence like DNA.) In the case of the Evangelist "John," we're very likely to be relying heavily on hearsay already, probably several times removed and, shall we suggest, embellished (see **). In the case of Tom Gilson, though, we're talking at least another step of removal, perhaps several, depending upon how we count them. In all of these cases, the evidence is bad. Bad, bad, bad, and it is stacked against a claim so extraordinary that it is literally ridiculous. Thus, we have Jesus allegedly admonishing Thomas (and by extension all Christians--something he surely knew?) to believe on testimony and hearsay of testimony (that we know possibly has been tampered with), which is to say very bad evidence.

Gilson, then, didn't play the context card; he employed the term "context" as a talisman meme--an easily reproduced idea that is waved about defensively to shut down incisive discussion. Of course, my awareness of this fact can be seen rather plainly in the original comment I left that seems to have bothered him so.

--- --- ---
**It should go without saying, but even most Bible scholars consider the Gospel of John to be a legendary enhancement of whatever core narrative captures whatever constitutes the real Jesus story, if there was one. Indeed, the non-synoptic-but-canonical Gospel has been referred to (by John W. Loftus, I believe) as an outright "Christian manifesto," an assessment I fully agree with. Given that, "reportedly" and "allegedly" are the best that can be said of the words and actions attributed to Jesus by that story.

Friday, February 28, 2014

The Pope, Pretending to Know

I'd like to take a moment to illustrate exactly how much pretense can be packed into surprisingly little theological utterance. I'll use a single sentence tweeted by Pope Francis earlier today.
"The Eucharist is essential for us: it is Christ who wishes to enter our lives and fill us with his grace."
Francis's statement is unquestionably a claim to knowledge--knowledge that must be based upon faith. As Peter Boghossian has pointed out, it is typically apt to replace "faith" when used in the religious context with "pretending to know what you don't know." Here's a list of some of what Pope Francis pretends to know (but doesn't) and leads other people to pretend to know just in this sentence alone.
  1. The Eucharist is essential for us. It is in no way clear that a ceremony to commemorate the alleged Last Supper is essential for anyone. Francis pretends to know otherwise.
  2. The Eucharist is Christ. It is in no way clear that the "transubstantiation" in the Eucharist is Christ. Francis pretends to know otherwise.
  3. Consecration does something. It is in no way clear that consecrating bread or wine--or anything else--does anything at all. It is even more dubious that the result of ceremonial consecration "transubstantiates" food and wine into human flesh and blood, and the matter is only made worse by saying it is the flesh and blood of God. Francis pretends to know otherwise about all of these matters.
  4. Christ wishes to enter our lives. This is not known. Indeed, it is not even clear what this means (and may not mean anything). Francis pretends to know otherwise.
  5. Christ can enter our lives. In order to act upon the stated wish, this action has to be possible. It is in no way clear that this is possible or, again, meaningful beyond the metaphorical. Francis pretends to know otherwise.
  6. Christ wishes to (and can) fill us with his grace. This is not known, and it is not even clear what this means. Note that this is really two claims to pretended knowledge. Francis pretends to know otherwise to both.
  7. Christ's grace is important. If the Eucharist is Christ wishing to enter our lives to fill us with his grace, and the Eucharist is essential for us, then clearly we can assume that Christ's grace, instead of being meaningless or metaphorical, is essential for us. It is in no way clear that this is the case. Francis pretends to know otherwise.
  8. And, of course, Christianity is true (or at least Catholicism), with a wide variety of other claims to knowledge in tow, e.g. about the existence of God and the reliability of the Bible. It isn't just unclear that this is the case; it's rather clear that it is not the case. Francis pretends to know otherwise.
As we know, though, Francis, along with many others, pretends to be informed by the light of pretending to know what he doesn't know. See this video for another example.

Tuesday, February 25, 2014

On religion, partial inoculation, and treatment-resistant disease

It was the mid-1960s, and the NASA programs that would put a handful of human beings on the moon for the first time within the decade were in full swing. Following Kennedy's famous pronouncement, this fact was a major element of the era's decidedly scientific zeitgeist. The Second World War was behind us, and partially because of it, the economy was booming. Money, though, didn't all go to the top, even if lots of it did. Economic inequality in the United States in the 1960s was low and still steadily cruising toward its eventual nadir. In other huge news, widespread application of antibiotics had changed the face of disease seemingly overnight just a couple of decades earlier. And a new wave of vaccines had just entered the scene, dropping the incidence and death rates of many serious diseases, again seemingly overnight. Life was good, and on April 8, 1966, TIME magazine's cover asked, "Is God dead?"

Not quite. God was not dead, though he was knocking at death's door. Though the 1960s were far from idyllic, all of the necessary conditions were in place to inoculate modern citizens against what, in later years, some would call "the faith virus." Science was big and in the public eye, satisfying our innate needs to understand the world and to feel like our unpredictable circumstances can be controlled. A fair degree of economic equality gave enough opportunity to enough people to feel hopeful and secure without God. At the same time, we were crushing disease, which had traditionally been believed to be the wrath of God enacted upon us lowly sinners. In the 1960s, it wasn't only scientists who agreed with Laplace's famous observation that there was no need for "that hypothesis"--God--in the model. It was popular sentiment.

But we didn't know an awful lot about these things in the 1960s. I'd dare to suggest that we know better now, largely because of the failures that followed, and I hope it's not too late. We have a burden upon us, though, not to repeat the mistakes of our past.

Medicine

Antibiotics and vaccines did what Jesus could not. They formed the nearest thing we can probably imagine to a miracle. Even if we accept them on their ridiculous face, Jesus' miracles cured only tens. Antibiotics and vaccines took common serious illnesses and dropped their mortality rates almost to zero, and they did it blindingly fast. Where Jesus is said to have cured tens with his miraculous powers, antibiotics and vaccines cured tens of millions, in far less time and without a shred of doubt. Scourges of mankind like smallpox, many a theologian's delight for the fear it commanded, were effectively eradicated from the planet in a time comparable to the whole of Jesus' purported ministry.

But we didn't realize our danger. Evolution works quickly in extreme circumstances, allowing lifeforms to cling to existence against seemingly hopeless odds. In rapidly reproducing bacteria and viruses, in which whole generations can be measured in minutes or hours instead of decades, the opportunity to evolve resistance to antibiotics and vaccines is stunning--and one of our greatest contemporary perils. Many disease-causing bacteria are evolving in response to our crusade against them, and in some cases they have evolved antibiotic-resistant strains (that's the R in the flesh-eating MRSA). Likewise, there is a serious threat that some of our vaccines may be obsolete within a few decades as new strains emerge beyond our protection. It is hard to imagine a worse situation than the resurgence of horrendous illnesses that are both unpreventable and untreatable.

There's a certain trick about evolution, though. Extinct species do not evolve. Unless it is somehow released from one of the handful of laboratories in which it still resides, smallpox isn't likely to be making a comeback. Indeed, even within local populations, say a particular host of a particular disease, at least some individuals must survive the onslaught that besets them to have a chance to adapt to the stress. The ones that survive, of course, are the ones hardiest to the adversity, which is one reason that antibacterial soaps that promise to kill 99.9% of germs are a little frightening. The hardiest 0.1% are the survivors that go on to reproduce, and the proportion of their offspring resistant to would-be toxic chemistry goes up.
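To put some numbers on that selection pressure, here is a minimal sketch in Python--entirely my own illustration with made-up figures, not anything measured from real products or microbes. It assumes one resistant microbe per hundred thousand at the outset, a treatment that kills 99.9% of the susceptible microbes and none of the resistant ones, and tenfold regrowth between applications.

def treat_and_regrow(susceptible, resistant, kill_rate=0.999, regrowth=10):
    # The treatment kills kill_rate of the susceptible microbes and spares
    # the resistant ones; both groups then multiply by the same factor.
    susceptible = susceptible * (1 - kill_rate) * regrowth
    resistant = resistant * regrowth
    return susceptible, resistant

susceptible, resistant = 1_000_000.0, 10.0  # hypothetical starting population
for round_number in range(1, 5):
    susceptible, resistant = treat_and_regrow(susceptible, resistant)
    share = resistant / (susceptible + resistant)
    print(f"after round {round_number}, resistant share = {share:.4f}")

Under those made-up assumptions, resistance climbs from roughly one microbe in a hundred thousand to more than nine in ten after only two rounds of incomplete treatment, which is the whole worry about partial application.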

With antibiotics it is less their application than their bad application that has led to horror stories like treatment-resistant tuberculosis arising in India. One must apply the right drugs and then see them completely through so that between the drug and the sick person's own immune system, there aren't enough survivors to be concerned with. This does not always happen for a variety of reasons, most of them bad or downright heinous (as with the state of medical treatment for Indian "Untouchables" for the worst of all possible reasons--religious ones). What happens reliably instead is that antibiotic-resistant strains of disease evolve, and dimly remembered horrors threaten to reawaken in our future.

With vaccines, the matter is similar. The reason smallpox was all but eradicated, rather as polio has been, is that the vaccines for these illnesses were spread globally via concerted efforts to stamp out the diseases. Nearly everyone was vaccinated, so the disease could get almost no toehold, and where it did, it could not spread. The application was complete. This, though, is not always possible. Diseases that we've vaccinated nearly out of existence often cannot have hosts other than humans, but others can be carried by other animals in addition to us. Against these, we must continually immunize ourselves and our newborn babies.

The solution to that was simple: it became standard practice to provide a series of vaccinations to children starting almost from the hour of their birth. It worked wonderfully, but because of unscientific yammering, this situation has been reversed. Many parents refuse to vaccinate their children for very bad reasons. Predictably, these diseases--mumps, measles, rubella, pertussis, and so on--are making a roaring comeback. And the prediction is dire: within a few decades, it seems, our vaccines may be mostly useless.

Incomplete application of an inoculation leaves open a dangerous door. The pathogens evolve, and the forms that survive are often more dangerous and considerably less treatable. Our God-is-dead hope of the 1960s teeters on this balance, because when the specter of deadly infectious disease comes back onto the scene, so will a desperate belief in God (and a manipulative one about his wrath).

Economics

Since the 1960s, we have learned that pushing for socioeconomic equality causes certain vectors of the inequality virus to work overtime, seeking out ways in which they can exercise their sociopathic privilege to the ruin of many. The New Deal and the ensuing Great Society spawned a cult of individual sovereignty that served as the perfect cover for the societal sickness known as plutocracy to creep back upon us, and the start of the full-force effort can be traced to roughly 1971.

Somehow, American culture, enjoying the fruits of the Great Society, was never put in a position to understand clearly that it is the society that makes great societies work, nor did Americans properly understand the role that wealth and income inequality play in it. Particularly, as those forms of inequality increase, the society becomes sick in profound ways. For the individual, the trend is not at all clear--more money means more opportunity, which seems to mean good things are on track. For the society, though, now that we've looked, the fact stands out like a sore thumb. Wealth and income inequality are societal diseases, and many of the social ills we now bear witness to are symptoms. One of those is a return to distrustful individualism--a rejection of society--and thus the symptom exacerbates the cause.

As with disease, the inoculation against this sickness was not complete in the US, and the reforms of the Roosevelt through Eisenhower eras served as the basis for plutocratic greed to evolve. Evolve it did, and by embracing the anti-New-Deal reactionary "value" of individual sovereignty, it brought itself back from the edge of death with more popular appeal than ever. In 1980, with Ronald Reagan as its figurehead, government--which is to say society--became "the problem," and the cult of the rugged individual rocketed into the mainstream. American society at large was reinfected with the plutocracy virus that now threatens to tear it apart, and the virus is spreading abroad. European and Australian plutocrats, among others, are picking up the thought processes that have divided America since the 1970s, and now their own great societies are being torn apart. The inoculation of progressive liberalism was not fully administered, largely because it poisoned itself with ridiculous postmodern relativism.

We know better now, of course, but in the 1960s we did not realize fully enough that wealth and income inequalities are such pathogenic socioeconomic forces. Again, the God-is-dead hope of the 1960s is threatened by this because desperate economic situations, which leave people feeling less autonomous however intensely they pretend to rugged individual sovereignty, lead to a resurgence of desperate beliefs in God (and, again, a ripe opportunity for religious leaders to capitalize on the problem).

God

God wasn't dead in 1966, but a great deal of what went by that name was. In isolated corners of American culture, a new and decidedly fundamentalist, evangelical variant on Protestant Christianity was initiating a Great Revival. Others, notably the nearly immutable Roman Catholic Church, plodded along as ever. For typical Americans, belief in God might have been largely irrelevant, perhaps even quaint, but the inoculation against the faith virus was not made complete, and the surviving religious cells were positioning themselves to go big-time.

And they did. And they still are. A variety of forces contributed to this effect--social, political, economic, and even theological--but over the last few decades of the last millennium, America underwent a huge religious revival, turning back from the attitudes that characterized the middle of the twentieth century. The problem was that the faith virus was not treated fully. God became irrelevant before faith was exposed as a contagious cognitive flaw, and so susceptible minds were taken in as the revival came to full force. Some of this revival is accounted for by what appears to be a natural boomerang effect with regard to religious attitudes, but for that to have happened required an incomplete inoculation against the faith virus in the first place.

New Atheism

"New Atheism," as it is called, is an enough-is-enough response to this revival, which took place not only within American Christianity but also in other faiths throughout the world, most notably Islam. The landmark event, of course, took place on September 11, 2001, which could be taken to be the first exclamation point in the story of the world's reinfection with an intense, recalcitrant strain of the faith virus after a brief remission toward enlightenment. Since the World Trade Center came down in flame, smoke, ash, and death, the religious story has been told more and more fervently, often in capslock, and we are left wondering how grim the situation is. A very hard to treat, profoundly virulent strain of the faith virus has taken root, and the medicine of "New Atheism" is bitter. "New Atheism" is unwavering rejection of religious authority, and a certain amount of steeled nerve to the cries of butt-hurt offense that flow from it.

Here, then, we see a difference between the God-is-dead sentiment of a half-century past and the "New Atheism" of today. "New Atheists" are profoundly less likely ever to be taken by the faith virus again because they understand both that God is irrelevant and that faith is a cognitive flaw loaded with pretense. "New Atheists" have been properly inoculated. The mental infrastructure that keeps the demon out is robust, solid, and clear, and it is held for clearly articulated reasons.

We need to take our medicine,...

...but not everyone wants to.

A growing movement of accommodationist atheists--faitheists, in the phrasing of Chris D. Stedman, author of a book called Faitheist--seems to prefer a kinder, gentler, less rebellious attitude from atheists toward faith. They, like NYU professor and social psychologist Jonathan Haidt, are "not anti-religion," and they beg "New Atheists" to see--and honor--what we have to learn from the faithful instead of making a hard-nosed stand against the faith virus. To see what I mean, consider this recent piece from Stedman (dubbed a "must read for ALL atheists" by one of Stedman's fans on Twitter)--or read his book--and this one from Haidt.

Stedman and Haidt, and their growing group of followers (particularly among the Progressive Left), prefer what is shortsightedly obvious. Taking antibiotics often makes one feel ill and is a hassle, and the feeling of being sick often subsides days or weeks before the full course of the prescription is run. Vaccines require getting an injection that is sometimes painful, can cause mild symptoms, and can also be a hassle--even without the ignorant, unscientific fear that they cause autism--and who on earth really gets whooping cough? (N.B.: A friend of mine just did, thanks to some unvaccinated kids at the playground where his (vaccinated) kids play.) Standing up in a hard-line fashion to religious authority and privilege hurts people's feelings and is mentally and emotionally draining. Stedman's position asks: can't we all just get along instead?

Sure, we can. And, if we prefer to, we can partially inoculate against the faith problem--one that carries a potent opportunity to lead to calamity (take evangelical Christianity's near-universal, religiously "justified" denialism of climate change, of all non-religious things, for example). What we cannot do, though, is delude ourselves into believing that it will serve to solve the unique problem that religious faith presents in our world. And we should recognize that if faith survives its brush with "New Atheism," it will be stronger than ever for the encounter and at least as much of a problem.

"New Atheism" has shaken the world of faith like a cultural antibiotic, and in its wake it provides a potent vaccine. The disease isn't going easily, though, as is sometimes the case (the reader is encouraged to investigate the full treatment experience for hepatitis or tuberculosis, not to mention chemotherapy for cancer). There is a desire for harmony and peace that calls some to a mission to abandon the "New Atheist" treatment protocol in favor of the kind of deference that feeds religion forever. I understand that and wish for it as well, and I am quite sure it is wrong.

Ameliorative measures

There are ways to polish the rough edges of "New Atheism" to make the medicine a little easier to swallow. Up until now, it has admittedly at times been quite a blunt instrument, but as it matures, it is being refined. One of the best and most obvious suggestions has been brought to light by hard-liner Peter Boghossian, author of A Manual for Creating Atheists. Boghossian calls above all for authenticity and honesty. Be real. Be honest. Be willing to change your mind when the reasons are good. But do not mistake authenticity and honesty for deference to delusion. These achieve all of the goals of the anti-"New Atheist" crowd without their main failures. Authenticity of this kind does not condescend to the faithful by assuming that they need their faith to get through the day, and it does not give unwarranted deference to religious privilege. Authenticity and honesty are not merely palliative measures but are clear refinements of the "New Atheist" medicine, a course of treatment that we need to see through. Failure will preserve the disease in a state more resistant to future treatment.

Here's how to do it. Be ruthlessly honest with yourself, even if you do not have the nerve to do it with other people. By applying relentless honesty to one's own positions, neither faith nor deference to it is possible, and the anti-"New Atheism" confusion is revealed as a way to walk away from the cure before it is effected while nourishing the disease. People deserve dignity, and this obviously includes religious people, but ideas do not. Honesty and authenticity, even if unappreciated, are the best ways to dignify a person regardless of the quality of their ideas. It does nothing like offering dignity to a person to coddle their bad ideas, and so we shouldn't. And there is "New Atheism" in a sentence. So ask yourself, what are people who are against "New Atheism" really against?