Disconfirmation Bias

In their paper, “A Disconfirmation Bias in the Evaluation of Arguments,” psychologists Kari Edwards and Edward Smith (Journal of Personality and Social Psychology, 1996, Vol. 71, No. 1, 5-24) explore this tendency. They begin their article as follows:

When evaluating an argument, can one assess its strength independently of one’s prior belief in the conclusion? A good deal of evidence indicates the answer is an emphatic no. This phenomenon, which we refer to as the prior belief effect, has important implications. Given two people, or groups, with opposing beliefs about a social, political, or scientific issue, the degree to which they will view relevant evidence as strong will differ. This difference, in turn, may result in a failure of the opposing parties to converge on any kind of meaningful agreement, and, under some circumstances, they may become more extreme in their beliefs.

Edwards and Smith then summarize a classic experiment by Lord, Ross, and Lepper (1979). This study focused on “people’s evaluations of arguments about whether the death penalty is an effective deterrent against murder.”

They selected two groups of participants, one known to believe that the death penalty is an effective deterrent and one known to believe that it is not an effective deterrent. Both groups were presented with two arguments, one that pointed to the deterrent efficacy of the death penalty and one that pointed to its inefficacy as a deterrent. Each argument consisted of a brief description of the design and findings of a study supporting or opposing the death penalty (e.g., a study showing that a state’s murder rate declined after institution of the death penalty) and was followed by criticisms of the study itself, as well as rebuttals of these criticisms. The best-known finding associated with this study is that the pro-death-penalty and anti-death-penalty participants became more polarized in their beliefs, and hence more different from one another, as a result of reading the two arguments. Note, however, that this result is a logical consequence of another more basic finding obtained by Lord et al.: When participants were asked to rate how convincing each study seemed as evidence (i.e., assessments involved participants’ judgment of the argument’s strength rather than their final belief in the conclusion), proponents of the death penalty judged the pro-death-penalty arguments to be more convincing or stronger than the anti-death-penalty arguments, whereas the opponents of the death penalty judged the anti-death-penalty arguments to be more convincing. This is the prior belief effect, and it has as one of its consequences the polarization of belief.

Edwards and Smith then introduce their model:

When faced with evidence contrary to their beliefs, people try to undermine the evidence. That is, there is a bias to disconfirm arguments incompatible with one’s position. This idea can be developed into a disconfirmation model by making the following assumptions.

1. When one is presented an argument to evaluate, there will be some automatic activation in memory of material relevant to the argument. Some of the accessed material will include one’s prior beliefs about the issue.

2. If the argument presented is incompatible with prior beliefs, one will engage in a deliberative search of memory for material that will undermine the argument. Hence, “scrutinizing an argument” is implemented as a deliberate memory search, and such a search requires extensive processing.

3. Possible targets of the memory search include stored beliefs and arguments that offer direct evidence against the premises and conclusion of the presented argument.

4. The outputs of the memory search are integrated with other (perhaps unbiased) considerations about the current argument, and the resulting evaluation serves as the basis for judgments of the current argument’s strength.
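
Read as a sequence of steps, these four assumptions have the shape of a simple algorithm. The sketch below is my own illustration, not anything from the paper: the Belief and Argument structures, the topic-matching rule, and the numeric penalty are all hypothetical placeholders for processes the model leaves unspecified.

```python
from dataclasses import dataclass

# Illustrative sketch only; Edwards and Smith specify the ordering of the
# four steps, not any computational detail. All names here are hypothetical.

@dataclass
class Belief:
    topic: str
    position: str  # e.g., "pro" or "anti"

@dataclass
class Argument:
    topic: str
    position: str
    baseline_quality: float  # strength as judged absent any prior belief

def evaluate_argument(argument, prior_beliefs):
    # Step 1: automatic activation of memory material relevant to the
    # argument, which includes one's prior beliefs about the issue.
    activated = [b for b in prior_beliefs if b.topic == argument.topic]

    # Step 2: if the argument is incompatible with prior beliefs, engage
    # in a deliberative (effortful) search for undermining material.
    counterevidence = []
    if any(b.position != argument.position for b in activated):
        # Step 3: the search targets stored beliefs and arguments that
        # cut against the premises and conclusion of the argument.
        counterevidence = [b for b in activated
                           if b.position != argument.position]

    # Step 4: integrate the search output with other considerations into
    # a strength judgment (here, an arbitrary penalty per counterargument).
    return max(0.0, argument.baseline_quality - 0.25 * len(counterevidence))

# The identical anti-deterrence argument is judged weaker by a proponent
# than by an opponent: the prior belief effect, qualitatively.
anti_argument = Argument("deterrence", "anti", baseline_quality=0.75)
print(evaluate_argument(anti_argument, [Belief("deterrence", "pro")]))   # 0.5
print(evaluate_argument(anti_argument, [Belief("deterrence", "anti")]))  # 0.75
```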

Edwards and Smith then conducted two experiments that supported this model and also showed that emotional conviction influences the magnitude and form of the disconfirmation bias.

It makes sense that a disconfirmation bias would exist. If the human brain is wired to defend its preconceptions with confirmation bias, attacking beliefs that threaten those preconceptions would likely be part of the same strategy. This undercuts Michael Shermer’s belief that “Skepticism is the antidote for the confirmation bias.” In reality, hyper-skepticism, or selectively applied skepticism, may simply be another facet of the same brain processes that generate confirmation bias.

What does disconfirmation bias look like? I would like to propose three possible signs that disconfirmation bias is taking place, signs that one may be defending one’s preconceptions rather than playing the honest skeptic who is simply trying to “follow the evidence.”

1. According to Edwards and Smith, “When one is presented an argument to evaluate, there will be some automatic activation in memory of material relevant to the argument.” Searching one’s “memory banks” can easily become a reliance on stereotypes. A stereotype, after all, is the brain’s “summation” of previous experience linked by certain cues. Thus, I hypothesize that when someone is confronted with an argument that challenges their preconceptions, the more they rely on stereotypes, the more likely it is that they are exhibiting disconfirmation bias to protect a preconception.

How can you tell if a stereotype is involved? Often, it is obvious. For example, if a critic on the Internet poses his own argument against my design hypothesis, and I begin to rail against Richard Dawkins, my brain has obviously been tapping into information about Dawkins to interpret my opponent. Often, however, the evidence is more subtle. That takes me to my second sign.

2. If one’s brain is engaged in “a deliberative search of memory for material that will undermine the argument,” and the targets of that search “include stored beliefs and arguments that offer direct evidence against the premises and conclusion of the presented argument,” it stands to reason that a person with disconfirmation bias will have a strong tendency to link a current argument with the perceived failures of previously encountered arguments. This creates a mental inertia that leads to two expressions of disconfirmation bias:

a. Misrepresentation – Let’s say that Jones develops an argument that threatens the preconceptions of Smith. But let us also say that Smith had successfully dismantled a similar, but different, argument once posed by Miller. The memory of this experience will shape the way Smith reacts to Jones. The brain processes involved in disconfirmation bias will cause Smith to morph Jones’ position into Miller’s. Smith will feel vindicated by the disconfirmation bias, while Jones will recognize that Smith is attacking a “straw man.”

b. Faulty Extrapolation – This is a more subtle version of misrepresentation, where Smith’s brain is so highly activated that it is sensitized to “cues” from Jones that lead Smith to believe that Jones is reaching for Miller’s point. Smith will not focus on the actual argument Jones is making, but will be trying to “anticipate” where he thinks the argument is going in order to cut it off. In this case, Smith is not really disconfirming Jones’ argument; he is creating an illusion of disconfirmation in his mind because he thinks he knows where the argument is going (when it may not even be going in that direction).

3. Finally, there is the dead giveaway of personal attack. When someone attacks another person by questioning their motivations, ridiculing them, or worse, they are seeking to discredit the argument by discrediting the person who makes it. If such personal attacks are linked to stereotypes, it becomes clear that the person’s brain is adopting an “ends justify the means” approach to disconfirmation bias.

In summary, skepticism is a good thing, but skepticism can also be just another facet of the way brains defend “their territory.” Add tribalistic group behavior to the picture and the whole process is amplified and entrenched. I propose that you can detect disconfirmation bias at work, in individuals or groups, when hyper-skepticism, stereotyping, misrepresentation, faulty extrapolation, and personal attacks occur more often than not.
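
That “more often than not” test can be made concrete with a toy tally. The sketch below is my own hypothetical scoring scheme, not anything proposed or validated by Edwards and Smith; the sign names and the majority threshold are assumptions chosen purely for illustration.

```python
# Hypothetical tally of the five proposed warning signs. The "more often
# than not" rule here flags bias when a strict majority are present.
SIGNS = ("hyper-skepticism", "stereotype", "misrepresentation",
         "faulty extrapolation", "personal attack")

def flags_disconfirmation_bias(observed):
    """observed: set of sign names seen in an exchange."""
    count = sum(1 for sign in SIGNS if sign in observed)
    return count > len(SIGNS) / 2

# Example: three of the five signs observed in a forum exchange.
print(flags_disconfirmation_bias(
    {"stereotype", "misrepresentation", "personal attack"}))  # True
```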

2 responses to “Disconfirmation Bias”

  1. Michael,

    Sometimes it is plain fun to rebut a hyper-skeptic with their own medicine. Already-chopped wood makes easy fuel when you add it to the hyper-skeptic’s fire and then redirect the flame back on them.

    😉

    Some people take themselves way too seriously, as mini-me gods possessing all known authority. And I believe there are times when appropriate ridicule of attackers and their positions can be unleashed.

    While disconfirmation bias can serve as a generalization about opposing sides, it cannot account for all sport and entertainment in the moment, nor can it always account for justified rebukes of arrogant people.

    Once kindness and patience have run their course and the hyper-skeptic refuses honest insight and correction, ridicule helps bring the mocker’s deception or incompetence to light.

    Humor, satire, and laughter have their purpose at appropriate times.

  2. So why do we bother discussing the issue on forums? Will the discussion change anyone’s mind?
