Monday, December 24, 2012

Rabbi Sacks penned an op-ed for the New York Times today, arguing that religion was and is evolutionarily adaptive because it strengthens the group bonds that give rise to altruism, and is therefore a Good Thing.
Minor point: I thought it was interesting that in his list of "New Atheists," Sacks leaves out Dan Dennett, who wrote an entire book arguing that religion may have served an evolutionarily adaptive function.
Anyway, here are my thoughts. I actually don't disagree with him that religion strengthens group ties, and I think it's possible this offered survival benefits in the past. (Of course, religion is not a precondition to group living, and it is group living that is probably the key to the evolution of altruism. Many non-human primates live in social groups with cooperative ties, and as far as I know, they don't attend church or shul.)
But, there is a side he ignores. Religion is not just group-oriented--it is usually tribalistic. There is intense concern and strong altruism towards one's group, but often less concern--if not apathy or derogation--towards other groups. (A friend of mine recently told me a story about when he was in a Modern Orthodox high school and formed a group to raise awareness of ethnic cleansing in Darfur. A school rabbi expressed clear disapproval to him, because his time and effort could instead be used to help poor Jews in Israel.)
There is much evidence that social groups influence prosocial reactions, and the stronger the tribalistic ties within a group are, the more apathy or derogation will tend to arise towards other groups, or even to those seen as "black sheep" within one's group. (See: the Spanish Inquisition, or Rabbi Nuchem Rosenberg being doused with bleach for speaking out against sexual abuse, thus betraying his community's norms.)
The point is, religion may have been adaptive in the past. But adaptiveness of a trait depends on the current environment--so this does not tell us whether religion is adaptive in a globalized world, in which countries are pointing nuclear weapons at each other. Rabbi Sacks cannot extrapolate from one to the other. In a world in which Syria's biological and chemical weapons might become up for grabs, does religious tribalism prove maladaptive in a way that outweighs the benefits of tight-knit religious communities? What about the ideology and norms that serve as a bedrock for communal interaction and trust, but may lead to discrimination against women and gay people?
Put more simply, most atheists probably recognize there are warm benefits of religion and religious communities. The question is, are the costs of tribalism and ideology even greater? And, are there ways to gain the social benefits of these types of communities while removing the ideology and tribalism? After all, one doesn't need a very tight-knit community to be altruistic and prosocial. If we face a tradeoff--communities that are less cohesive and committed, but which come with less tribalism--it is worth considering.
Saturday, July 21, 2012
Science Does Not Require Faith
Rabbi Slifkin has claimed more than once that "the entire enterprise of modern science emerged from the monotheistic worldview." He cites the following:
"The philosophy of experimental science… began its discoveries and made use of its method in the faith, not the knowledge, that it was dealing with a rational universe controlled by a Creator who did not act upon whim nor interfere with the forces He had set in operation… It is surely one of the curious paradoxes of history that science, which professionally has little to do with faith, owes its origins to an act of faith that the universe can be rationally interpreted, and that science today is sustained by that assumption."I respectfully disagree. The assumption that stable laws govern our universe is an inference from the regularities we constantly see, not a random act of blind faith in a rational universe. Here is a naturalistic account of how humans would develop this type of reasoning:
1) A universe has regularities in it.
2) Intelligent lifeforms evolve.
3) The beings that survive best are those that evolve the cognitive abilities to abstract out regularities from their daily experiences, which lets them predict what will happen next.
This is exactly what we seem to do. Developmental cognitive scientist Alison Gopnik has argued--based on much evidence--that young children are constantly building theories as they learn about the world: they learn abstract rules from multiple experiences, make predictions from those rules, and update their theories when they are falsified. They act like little scientists. In this view, scientific experimentation is simply a formalized, more complex, adult, conscious version of the learning we are hardwired to do as children.
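To make the "little scientist" picture concrete, here is a toy sketch of learning as theory revision, written in the Bayesian style cognitive scientists often use to formalize it. It is my own illustration with invented numbers, not Gopnik's actual models: a learner weighs two candidate theories of a toy machine and updates belief as each observation arrives.

```python
# Two candidate "theories" of a toy machine (a hypothetical example):
# H1: the machine lights up only for blue blocks (a deterministic rule)
# H2: the machine lights up at random, half the time
def likelihood(h, block, lit):
    p_lit = (1.0 if block == "blue" else 0.0) if h == "H1" else 0.5
    return p_lit if lit else 1.0 - p_lit

belief = {"H1": 0.5, "H2": 0.5}                 # open-minded prior
observations = [("blue", True), ("red", False), ("blue", True)]

for block, lit in observations:
    # Bayes' rule: posterior is proportional to likelihood times prior
    unnorm = {h: likelihood(h, block, lit) * belief[h] for h in belief}
    total = sum(unnorm.values())
    belief = {h: p / total for h, p in unnorm.items()}
    print(block, lit, {h: round(p, 3) for h, p in belief.items()})

# Each rule-consistent observation strengthens the rule-based theory
# (P(H1) climbs 0.5 -> 0.667 -> 0.8 -> 0.889); one red block lighting up
# would drive P(H1) to zero -- falsification, in miniature.
```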
So no, there is no reason to assume science involves some blind faith in a creator; it simply involves the types of observation and inferential reasoning wired into us by evolution because they work.
Monday, May 28, 2012
Reparative Therapy Study Author Renounces His Work
Those committed to reparative therapy frequently point to the one published study that tried to suggest such therapy may have an effect. This study, published by Dr. Robert Spitzer in 2003, interviewed 200 gay men and women who reported at least "minimal change" in their sexual orientation that had lasted five years since reparative therapy. Now, the New York Times is reporting that Dr. Spitzer recently renounced the study (see here or here).
The study's methodology was so problematic, it is stunning it was published. First, the sample was entirely self-selected (i.e. people who had undergone reparative therapy and wanted to participate could volunteer), including ex-gay political advocates. This means the people who participated may have simply been those particularly motivated to convince themselves or others they were not gay, or those particularly motivated to believe reparative therapy works, or those of more ambiguous sexual orientation to begin with, etc. (Even among this population, the changes reported were limited!) There was no random assignment to a therapy group and a control group, which is the hallmark of a proper experiment.
Worse, the study relied entirely on retrospective self-report: participants were asked, for example, how often they had desired someone of the same sex in the year before therapy--which was years prior at the time of the interview--and how much they had desired it in the past year. There is no guarantee whatsoever about the accuracy of their memories, or even that they were reporting the truth, which is particularly problematic amongst those so motivated to report change.
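For the statistically inclined, here is a minimal simulation (with invented numbers) of how self-selection alone can manufacture an apparent effect. It assumes the therapy does nothing: every reported "change" is just noise plus motivated exaggeration.

```python
import random
random.seed(1)

population = []
for _ in range(100_000):
    motivation = random.random()                     # drive to believe/report change
    reported = random.gauss(0, 1) + 2 * motivation   # true effect = 0 by construction
    volunteered = motivation > 0.9                   # only the highly motivated sign up
    population.append((reported, volunteered))

everyone = [r for r, _ in population]
sample = [r for r, v in population if v]
print(f"mean reported change, whole population: {sum(everyone) / len(everyone):.2f}")
print(f"mean reported change, volunteers only:  {sum(sample) / len(sample):.2f}")
# The self-selected sample reports roughly double the "change" of the
# population, even though the therapy has zero effect by construction.
# Random assignment to therapy and control groups is what rules this out.
```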
So, it is interesting to hear that Dr. Spitzer himself has recently admitted to these problems and renounced the study. Hopefully this study can be laid to rest in public discourse.
Wednesday, May 16, 2012
Study Finds Analytic Thinking Promotes Religious Disbelief
A research article published in Science recently examined the relationship between analytic thinking and religious belief. The authors tested the following idea: an intuitive cognitive system can often give rise to religious belief (see my series on this topic for some background). An analytic thinking system can override the intuitive system. Therefore, analytic thinking may be associated with religious disbelief.
To test this hypothesis, the authors had study participants complete an analytic thinking task that requires overriding intuitions to reach the correct response, e.g. "If it takes 5 machines 5 min to make 5 widgets, how long would it take 100 machines to make 100 widgets?" (The intuitive response is 100; the correct answer, if you reason it out, is 5). Participants then reported on their religiosity. As expected, analytic thinking was negatively associated with religious belief--i.e., the individual tendency to override intuitions with analytic thinking and get the right answers was associated with religious disbelief.
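As an aside, here is the widget arithmetic worked out rather than intuited:

```python
# 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget per 5 minutes:
rate_per_machine = 5 / 5 / 5           # 0.2 widgets per machine per minute
minutes_needed = 100 / (100 * rate_per_machine)
print(minutes_needed)                  # 5.0 -- not the intuitive 100
```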
Of course, this is only a correlational result. In a series of four follow-up experiments, the authors directly induced analytic mindsets in randomly assigned participants by using subtle, previously validated manipulations. For example, prior work has found that when people read a piece of text written in a difficult-to-read font, the extra effort and engagement leads to greater analytic thinking (for example, people show increased performance on logic puzzles). In the present study, participants were randomly assigned to fill out the same religiosity questionnaire in an easy-to-read font or a difficult-to-read font. As predicted, completing the questionnaire with difficult-to-read font led participants to report lower levels of religious belief.
Or, for example, prior tests had found that simply viewing the statue The Thinker primes an analytic mindset in people, relative to viewing a control statue (again, they show increased performance on logic puzzles). When randomly assigned participants were primed with viewing The Thinker, they again reported lower religious belief than a control group. And so on with other methods used to induce an analytic mindset.
These results suggest that analytic thinking tends to override religious intuitions. The authors carefully caveat that their results do not comment on the inherent truths involved or what one should believe, and that this is just a descriptive account of cognitive processes involved in religious belief/disbelief. But, given what we know about how misleading intuitions can be, I would say what the authors do not (and cannot in a scientific paper): the results fit perfectly with the idea that religious belief is often founded on intuitions, and these intuitions are often found to be flawed when examined critically.
For those who followed my series on the psychology of religious intuitions, this finding shouldn't be too surprising!
Further Reading: Gervais, W. M., & Norenzayan, A. (2012). Analytic thinking promotes religious disbelief. Science, 336, 493-496.
Wednesday, February 1, 2012
The Psychology of Religious Intuitions: Confirmation Biases & Tim Tebow
This series explores the psychology of intuition and cognitive illusion, specifically as applied to religious intuitions. See background, representativeness, randomness & chance, and imagination & availability.
Confirmation Biases
Why does the phone always ring when you're in the shower? Why are there always more subways or buses going in the opposite direction? Was God helping Tim Tebow?
Answers: It doesn't, there aren't, and no. We are subject to "confirmation biases": our experience, memory, and interpretation of events are frequently influenced not only by objective reality, but also by our expectations and motivations.
- Some events are "one-sided," in the sense that people only really notice one of two outcomes. For example, you notice when the phone rings when you're in the shower: it's salient and annoying. When it doesn't ring, though, that's a "non-event" that doesn't register at all. ("Hey, did you hear that? The phone didn't ring just now. Hey, it did it again!") You develop a biased store of memories and believe in a false connection, because of noticing events that confirm it but not the non-events that disconfirm it (simulated in the sketch after this list).
- Some events are "two-sided," such that you do notice either outcome--like a win or a loss in gambling. In those cases, though, people often explain away losses as "near wins," and feel even more sure they are right. ("I should have won that time, but XYZ happened! I'll get it next time.") People explain away unwanted data but unquestioningly accept desired data, allowing them again to confirm what they expect or want to see.
- Sometimes, we have ambiguous facts, and we often interpret them to confirm what we expect or want to see. For example, consider the saying, "bad things happen in threes." There is no definition of "bad things" set ahead of time, and no endpoint specified in time. (If the stock market slumps a few points, does that count as bad? If three bad things happen over a year, does that count?) It is very, very easy to stretch the facts to fit an ambiguous story, if you have not specified clear criteria in advance as to what counts and what does not count as an event.
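Here is the shower-and-phone illusion as a toy simulation, with invented numbers: calls arrive at random moments, completely independent of showering, yet a memory that stores only the salient coincidences accumulates nothing but "confirmations."

```python
import random
random.seed(0)

DAYS, WAKING_MIN, SHOWER_MIN = 10_000, 960, 10   # 16 waking hours, 10-min shower
remembered = 0       # rings during a shower: salient, stored, retold
total_rings = 0
for _ in range(DAYS):
    shower_start = random.randrange(WAKING_MIN - SHOWER_MIN)
    for _ in range(3):                           # three random calls a day
        t = random.randrange(WAKING_MIN)
        total_rings += 1
        if shower_start <= t < shower_start + SHOWER_MIN:
            remembered += 1
        # a ring while you're dry is a non-event: nothing gets stored

print(f"fraction of rings during shower: {remembered / total_rings:.4f}")
print(f"chance prediction (10/960):      {SHOWER_MIN / WAKING_MIN:.4f}")
print(f"vivid 'confirmations' in memory: {remembered}")
# The rate matches chance exactly, yet memory ends up holding hundreds of
# "it ALWAYS rings in the shower" episodes and zero disconfirmations.
```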
These biases show up frequently in religious thought. People cite miracles performed by rabbis, like curing someone through prayers. However, failures to cure are never reported--making them non-events that never receive attention and are not remembered. In the unlikely event that someone does hear of a failure, it is likely to be explained away as a "near-success" that failed for any number of reasons. (You can also imagine applying these points to alleged "personal miracles.")
Meanwhile, Tim Tebow's favorite Bible verse--John 3:16--seemed to be appearing everywhere in his games. In one game, Tebow had 316 yards passing, averaged 31.6 yards per completion, and the final quarter-hour television ratings for the game were 31.6 million! Aside from comedian Jamie Kilstein's response, is there anything else we can say?
As you may notice, there are no definitions here of what counts as a meaningful event. Should we only accept 3.16 as a sign from God, or does 316 count? What about other combinations of these numbers? Would his second-favorite verse count, or his fourth? Meanwhile, there are thousands of statistics you could examine for signs: viewers in the first quarter-hour, viewers in the second quarter-hour, the number of yards the Broncos collectively ran, yards passed in the first half-hour, etc. Without specifying in advance what counts, it is easy to find something that works, and then convince yourself it's what you were looking for all along.
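A rough simulation of this "find the sign afterward" problem, with made-up statistic ranges: if you are free to rummage through a thousand arbitrary statistics per game and accept 3.16, 31.6, or 316 as a hit, a "sign" is almost guaranteed.

```python
import random
random.seed(3)

def looks_like_316(x):
    # accept the matches people accepted in practice: 3.16, 31.6, 316...
    digits = f"{x:.2f}".replace(".", "").lstrip("0")
    return digits.startswith("316")

TRIALS = 10_000
hits = 0
for _ in range(TRIALS):
    stats = [random.uniform(0, 500) for _ in range(1000)]  # yards, ratings, etc.
    if any(looks_like_316(s) for s in stats):
        hits += 1
print(f"games with at least one '316' somewhere: {hits / TRIALS:.0%}")
# Around nine games in ten produce a 'miraculous' number by chance alone
# when the criteria are this loose and chosen after the fact.
```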
This is why science a) defines its terms ahead of time, and is clear about what counts or does not count as evidence, b) insists on replicability of findings--by third parties, in particular--and c) uses formal statistical procedures to evaluate whether or not something happened due to chance. That is because our intuitions and memory are subject to biases.
Sunday, January 22, 2012
The Psychology of Religious Intuitions: Imagination, Science & Availability
This series explores the psychology of intuition and cognitive illusion, specifically as applied to religious intuitions. See background, representativeness, and randomness & chance.
Availability
Consider the letter R in English words: in words of at least three letters, do you think R is more likely to appear in the first position of a word or in the third position?
When Tversky and Kahneman asked survey participants this question, a large majority answered that R appears more frequently in the first position. The correct answer is that more English words have R in the third position. Why would people intuitively think differently?
It is easier, Tversky and Kahneman answered, to think of words that begin with R--and people use this ease of recall as a cue about frequency. The intuition goes something like: "Hmm...rabbit, road, race, rock, roll...it sure is easy to think of words where R comes first...card...care?...this one is harder. There must be more words where R comes first." This rule of thumb is known as the availability heuristic: we think what is more mentally available (i.e. easier to recall or imagine) is more likely.
In particular, people will often use the ease of imagining a scenario to judge how likely it is. For example, if you see a car accident, it becomes easier for you to imagine one, and you will have an increased estimate of the likelihood of getting in an accident.
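The R question itself is easy to settle empirically rather than by feel. A minimal sketch, assuming a Unix-style word list at /usr/share/dict/words (substitute any large English word list):

```python
first = third = 0
with open("/usr/share/dict/words") as f:
    for word in f:
        word = word.strip().lower()
        if len(word) >= 3:
            first += word[0] == "r"     # booleans count as 0/1 in Python
            third += word[2] == "r"
print(f"R in first position: {first}, R in third position: {third}")
# Counting replaces recall: the tally, not the ease of thinking of
# "rabbit" and "road," is what settles the question.
```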
A caveman would have a hard time imagining how a computing machine would ever be built. That doesn't change the likelihood it would happen.
In reality, though, the ease of remembering or imagining a scenario does not necessarily tell you how likely it is. So, for example, people heavily overestimate the number of deaths from tornadoes each year but underestimate the number of deaths from drowning each year, because we hear more on the news about tornadoes than drowning--it is more available.
Like all heuristics, this one can be useful in daily life, but it is just a best guess that can easily be wrong--especially when we are dealing with the nature of the universe, which is complex. Daily life simply doesn't give us a basis for imagining what happens over eons (i.e. evolution), or at the quantum level (i.e. quantum indeterminacy), or at the level of the astronomical (i.e. the big bang). Therefore, scientific explanations can often be true and well-evidenced, yet difficult to imagine and thus counterintuitive.
Religious intuitions often draw on what is easily imaginable--e.g., "I just can't imagine how all life could have evolved on its own without a Creator." A commenter once wrote on this blog:
Every facet of the universe is so unbelievably complex, that any single element within the whole sufficiently testifies to the handiwork of the Creator. If a magnificent work of art cannot be formed by merely spilling paint, certainly the entire world cannot have been fashioned by accident. For the logical mind, seemingly there is nothing more irrational than suggesting G-d not create the world.
This comment is largely based on the representativeness heuristic, which was previously discussed. Beyond that, though, the author's confidence comes from the difficulty of imagining any alternative. It is indeed difficult to imagine how something complex can come about without conscious design--but that doesn't actually make it unlikely! It is the availability heuristic that incorrectly transforms a difficulty of imagination into an intuition of probability, and a gut instinct into a false sense of logic.
Indeed, it takes work and careful reasoning--not simply imaginative impulse--to override intuition and figure out how each piece came to be. In the case of evolution, for example, one can read books that carefully map out each step, like The Blind Watchmaker. But some people stop before that; they just take the fact that something is hard to imagine and incorrectly use that difficulty as a cue that it is unlikely.
Philosopher Dan Dennett aptly writes that a failure of imagination should not be mistaken for an insight into necessity. The availability heuristic explains why.
Wednesday, January 11, 2012
The Psychology of Religious Intuitions: Randomness & Chance
This series explores the psychology of intuition and cognitive illusion, specifically as applied to religious intuitions. See background and representativeness.
Randomness & Chance
Consider two questions:
1) You flip a fair coin six times. Which of the following two sequences is more likely to occur: H-T-H-T-T-H or H-H-H-H-T-H?
2) A basketball player is on a major hot streak, landing him a spot on the cover of Sports Illustrated. Soon afterwards, his performance drops and he is no longer noteworthy. Is this evidence for the "Sports Illustrated Jinx," according to which appearing on the cover of Sports Illustrated jinxes athletes to bad performance?
1) If you're like most people surveyed in the experiments of famed psychologists Daniel Kahneman and Amos Tversky, you probably think the answer to question 1 is the first sequence: it just looks more random, right? The correct answer is that the two sequences are exactly equally likely: each is a specific string of six independent fair flips, with probability (1/2)^6 = 1/64.
A random series of coin flips is expected to be half heads and half tails only in the very long run. But, people use a mental shortcut (the representativeness heuristic) that expects short runs to be representative of long runs: we think any sequence of coin flips should look as fair as a long run. This leads to a systematic bias called the clustering illusion: people think clusters are meaningful patterns even when they are frequently produced by chance, because that's not what we expect chance to look like. Randomness actually creates clusters, according to the laws of probability--but we intuitively think it won't. We are terrible intuitive statisticians.
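A quick check of this claim: each exact ordering of six fair flips has probability (1/2)^6 = 1/64, which a short simulation confirms.

```python
import random
random.seed(42)

targets = {"HTHTTH": 0, "HHHHTH": 0}
TRIALS = 200_000
for _ in range(TRIALS):
    seq = "".join(random.choice("HT") for _ in range(6))
    if seq in targets:
        targets[seq] += 1
for seq, count in targets.items():
    print(f"{seq}: {count / TRIALS:.5f}  (theory: {1 / 64:.5f})")
# The "random-looking" sequence enjoys no advantage. What misleads intuition
# is that three-heads sequences are more common as a CLASS (20 of the 64
# orderings) than five-heads sequences (6 of 64) -- but any single exact
# ordering is equally rare.
```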
Religious intuitions often involve claims about patterns that "could not possibly be due to chance." (I don't have in mind evolution here, because evolution is actually a systematic process, not a chance process--a point often misunderstood by creationists.) People cite supposed miracles or facets of the world that "could not be coincidence"; they see shapes of Jesus in crackers when elements cluster together more than they intuitively expect, and see divine signs in the world around them. But as we have seen, people are notoriously bad at judging intuitively what randomness looks like: we are very prone to seeing patterns where there are none. The only way to ascertain randomness is by using formal statistical procedures; logic and scientific thinking are trustworthy where intuition leads astray.
2) If you are like many people, you might intuitively think the Sports Illustrated jinx is real, or at least eerie. But, it is easily explained by an overlooked fact called regression toward the mean. A basketball player's performance is due both to his actual skill level and to some degree of luck--aka random chance. If someone is suddenly performing well enough to make the cover of SI, they probably have been experiencing unusually good luck. Since unusually good luck is...well, unusual, we don't expect it to continue, meaning that they go back (regress) to how they normally play (their mean). Appearing on the cover of SI just coincides with the time of unusually good luck, which we expect to end anyway. The SI jinx is an example of a general tendency people have: we are very good at seeing cause and effect (or inventing explanations) when random fluctuations are actually at work.
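Here is regression toward the mean in a toy simulation with invented numbers: performance is stable skill plus luck, a "cover of SI" goes to whoever scored highest this month, and no jinx is built in anywhere.

```python
import random
random.seed(7)

skills = [random.gauss(20, 3) for _ in range(500)]    # each player's true skill

def month(skill):
    return skill + random.gauss(0, 5)                 # performance = skill + luck

cover_month = [(skill, month(skill)) for skill in skills]
stars = [skill for skill, score in cover_month if score > 30]   # made the cover
next_month = [month(skill) for skill in stars]

print(f"stars' cover-month average: {sum(s for _, s in cover_month if s > 30) / len(stars):.1f}")
print(f"stars' next-month average:  {sum(next_month) / len(stars):.1f}")
print(f"stars' true skill average:  {sum(stars) / len(stars):.1f}")
# The "slump" appears automatically: making the cover selected for good luck
# as much as skill, and luck does not repeat. Performance falls back to
# skill level -- regression to the mean, no jinx required.
```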
In his book How We Know What Isn't So, psychologist Thomas Gilovich relates an experience he had on a trip to Israel:
"A flurry of deaths by natural causes in the northern part of the country led to speculation about some new and unusual threat. It was not determined whether the increase...was within the normal fluctuation in the death rate that one can expect by chance. Instead, remedies for the problem were quickly put in place. In particular, a group of rabbis attributed the problem to the sacrilege of allowing women to attend funerals, formerly a forbidden practice. The remedy was a decree that subsequently barred women from funerals in the area. The decree was quickly enforced, and the rash of unusual deaths subsided" (Gilovich, 1991, pg. 28.)A string of deaths can be expected by chance fluctuations, just like the string of "heads" above. But, people saw a meaningful pattern because we wrongly expect randomness not to come in clusters. Meanwhile, the actual death rate at a given moment is due to the average death rate plus some degree of luck (chance); if the death rate rose unusually, it means there was a string of unusually bad luck that is unlikely to continue (the death rate should regress to its mean). So, surprise, surprise: the rabbis enacted their decree while there was unusually bad luck and the death rate went down to normal, and--voilá!--they see cause and effect.
The moral of the story is that people are biased to see patterns and cause and effect where there is randomness, leading to religious intuitions. Scientific thinking must be employed to understand complex phenomena and the nature of the universe. In this case, carefully applied statistics can tell us what is random and what is not, when our intuitions turn us into hyperactive pattern-detectors.
Further reading:
Gilovich, T. (1991). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. Chapter 2: "Something Out of Nothing: The Misperception and Misinterpretation of Random Data."
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, 430-454.