Confirmation Bias

The Original Error

Lesson 1:

Our biased brains

1.1 Defining confirmation bias

Confirmation bias is often referred to as the “original error” in human cognition. In simple terms, it is the mind’s tendency to seek out information that supports the views we already hold by selectively filtering data and distorting analyses (Casad and Luebering, 2025)—even when evidence to the contrary is available (Nickerson, 1998). This bias is not a minor quirk. Instead, it is a cognitive phenomenon that influences every phase of scientific work, from formulating hypotheses and designing experiments to interpreting data.

In this unit, the term “confirmation bias” also covers two related cognitive biases: expectation bias and observer bias. Expectation bias (also called experimenter's bias) is the tendency for researcher expectations to influence subjects and outcomes. Similarly, observer bias is the tendency for researcher expectations to influence their perceptions during a study, thus affecting recorded outcomes.

Let’s look at both of these related biases more closely!

Expectations can produce changes. In the 1960s, Robert Rosenthal and Kermit Fode asked students to train two groups of rats, labeled "maze-bright" and "maze-dull," to solve mazes.

The rats were actually genetically identical. Student expectations, however, affected training and created performance differences by the end of the study. In other words, the expectations of the student experimenters produced an actual, measurable difference in performance in the two groups of rats.

Expectations can also affect data collection. In a similar study, Tuyttens et al. (2014) asked students to rate the sociability of two breeds of pigs ("normal" and "high social breeding value" [SBV+]) from video recordings. The videos were actually identical, yet students rated the SBV+ pigs as more social. 

Unlike the Rosenthal and Fode study, the students did not interact with the pigs, so differences could only result from different human perceptions of the same video data. In other words, the expectations of the data collectors caused them to perceive differences that were not actually there.

"Confirmation Bias . . . is the mind’s tendency to seek out information that supports the views we already hold by selectively filtering data and distorting analyses."
Casad and Luebering, 2025

Activity: Deduce the Number Rule

Before delving further into definitions, try out a revamped version of a task first developed by Peter Cathcart Wason in 1960. Your goal is to deduce a secret rule that matches a sequence of three numbers.

But there is a catch—you can’t guess the rule directly!

Form a hypothesis by interpreting results from your guesses, then submit your final hypothesis to see if you were correct!

Embedded Webpage

Post-activity questions:

  • How did that go? Did you falsify your hypothesis?
  • When testing sequences, did you match the hypothesis you had in mind or try to falsify that hypothesis? Why?
  • Were there moments when you realized you needed a new strategy?
  • How might collaboration with others help avoid some of the pitfalls demonstrated in the task?

1.2 Real-world studies of confirmation bias

As this task demonstrates, humans are inclined to confirm rather than falsify hypotheses. Many quickly latch onto an initial pattern (say, “increasing by 2”). Once an early hypothesis is formed, subsequent searches for evidence tend to focus on confirming that rule, with little effort to seek out counterexamples (Wason, 1960). The result is an unconscious filtering-out of disconfirming evidence.
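The confirmation trap in the number task can be sketched in a few lines of Python. The secret rule here (any strictly increasing triple, which was the rule in Wason's original 1960 study) and the tested triples are illustrative stand-ins, not the rule used in this unit's activity:

```python
# Hypothetical secret rule: any strictly increasing triple (Wason, 1960).
def secret_rule(triple):
    a, b, c = triple
    return a < b < c

# The tempting hypothesis after seeing (2, 4, 6): "each number increases by 2."
def my_hypothesis(triple):
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Confirming tests: triples deliberately chosen to MATCH my hypothesis.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
# A disconfirming test: a triple my hypothesis predicts should FAIL.
disconfirming = (1, 2, 10)

for t in confirming:
    print(t, secret_rule(t))        # all True: consistent with my rule, but not proof

# The secret rule also accepts the triple my hypothesis rejects,
# so only the disconfirming test reveals that my hypothesis is wrong.
print(disconfirming, secret_rule(disconfirming))
```

No number of confirming tests can distinguish "increases by 2" from the broader true rule; a single well-chosen counterexample does.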

By distorting thinking, confirmation bias skews how scientific hypotheses are conceived, tested, and evaluated
(Kahneman & Tversky, 1996).

In research, this distortion skews how scientific hypotheses are conceived, tested, and evaluated (Kahneman & Tversky, 1996). Further, it thwarts the goal of scientific research: to build objective experiments and reach an impartial interpretation of experimental results.

This inclination toward confirmation bias can be managed, however. In the activity inspired by the Wason task, for instance, deliberately looking for counterexamples that might refute a guess is a conscientious counter to the implicit pull toward confirming beliefs. 

In other scientific research, confirmation bias can be mitigated by:

  • Building habits to recognize bias in thinking.
  • Placing checks to pause and reflect on choices.
  • Reducing error via principles of rigorous experimental design.
  • Making clear distinctions between exploratory and testing research.

Numerous studies have illustrated how confirmation bias can subtly yet powerfully affect behavior and decision-making. For example, a study by Dror and colleagues (2006) showed how contextual cues can undermine forensic expertise. In a within-subject design, five fingerprint examiners (averaging 17 years’ experience) reviewed prints they had previously matched.

When given misleading context that the FBI had misidentified the prints in a high-profile case, four changed their opinions: three ruled the prints non-matches and one deemed the evidence inconclusive, despite clear instructions to ignore the extra information. Only one examiner stuck with the original match. The study highlights the impact of cognitive bias on professional judgment and supports safeguards like blind examinations in forensic science.

In another published example of this tendency to discount information that undermines prior judgments, participants were placed in a betting scenario to monitor their own confidence in their wagers. The scenario showed that when participants had betting partners who agreed with their predictions, the participants greatly increased their bets. When their partners disagreed with their predictions, they only slightly decreased their wagers, but held to their initial predictions nonetheless. This imbalance—where confirmation has a stronger effect than refutation—demonstrates that preexisting beliefs can exert a powerful influence on behavior (Kappes et al., 2020).

Confirmation bias affects real-world decision-making, from the way we interpret experimental results to how we form our opinions on scientific theories. Even in high-stakes scenarios, such as clinical trials or policy-making, the tendency to overvalue confirmatory evidence can lead to inflated expectations and even the misinterpretation of data. For instance, early decisions in research might lead to a selective search in the literature or a tendency to report only positive findings. This, in turn, can skew the overall picture of a scientific field.

1.3 Why this matters to neuroscientists

For neuroscientists, the stakes of confirmation bias are particularly high. Neuroscience research often grapples with complex systems and subtle signals. When an initial hypothesis is formed, there is a risk of overlooking alternative mechanisms.

Consider a scenario where a hypothesis posits that a particular neurotransmitter is central to a specific behavior. If a researcher unconsciously prioritizes data that supports this hypothesis, they are more likely to overlook critical findings that suggest a role for other neurotransmitters or neural circuits. This oversight can lead to incomplete conclusions and foster overconfidence in a flawed model of brain function.

The design and interpretation of experiments have far-reaching implications for understanding the brain. To counteract these pitfalls, it is essential to cultivate a mindset that actively challenges initial assumptions. Developing habits like deliberately searching for disconfirming evidence, discussing alternative explanations with colleagues, and employing robust statistical methods can help guard against the trap of confirmation bias. By integrating these practices, neuroscientists can design experiments that are more objective and reliable.

Developing habits like deliberately searching for disconfirming evidence, discussing alternative explanations with colleagues, and employing robust statistical methods can help guard against the trap of confirmation bias.

Takeaways:

  • Confirmation bias is one of many naturally occurring cognitive biases that steer thinking and observation toward evidence that confirms existing beliefs.
  • Neural reward systems reinforce this bias, making it difficult to recognize or challenge preconceptions.
  • Being aware of confirmation bias is the first step toward designing experiments that minimize its impact.
  • To counteract this tendency, individuals can intentionally seek out examples that violate hypotheses or other expectations.

Reflection:

  • Can you recall a time in your lab work where an unexpected result made you question your assumptions? 
  • When was the last time you formed a hypothesis and then found yourself ignoring an alternative explanation?
  • Being mindful of that feeling is an important first step toward taming confirmation bias in the lab.

Lesson 2:

"Favored" vs. Alternative Hypotheses

2.1 The Pitfalls of a Single, “Favored” Hypothesis

Confirmation bias is often revealed when scientists begin with an initial idea linking a cause to an outcome, like "when A happens, then that leads to B," and then design experiments to test that hypothesis. This kind of inquiry is an essential part of scientific discovery and experimentation, but it can lead to a project that focuses on demonstrating the link between A and B, rather than conducting a more rigorous exploration of how A and B interact within a biological system.

Consider the previous activity: you may have come up with an idea for the number rule and then tested a number sequence that matched that rule. Confirmation bias creeps in when you interpret a matching result as confirmation of your rule. In actuality, a matching number sequence is merely evidence that is consistent with your rule—other rules could also produce the same set of results!

Let's look more closely at the kinds of stumbling blocks encountered in scientific studies.

Case 1

If you start with a vague hypothesis, it is easy to interpret many kinds of results as supporting your hypothesis. For instance, if you suspect that "when A happens, then that leads to B," you may end up designing an experiment where the results show that A and B happen at the same time. However, just as in the number rule activity, this set of results is also consistent with other explanations:

  • The cause and effect are actually reversed, and B is causing A.
  • The experiment did not implement proper controls to identify what happens when A is NOT present, or to identify the conditions where B is NOT present (for more, see our forthcoming unit on controls).
  • Some other effect results in both A and B, and so they are always observed together.
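The third bullet (a hidden common cause driving both A and B) is easy to see in a toy simulation. Everything below, including the probabilities, is invented purely for illustration:

```python
# Toy simulation: a hidden common cause C makes A and B co-occur
# even though neither causes the other. All values are synthetic.
import random

random.seed(1)

def trial():
    c = random.random() < 0.5          # hidden common cause C
    a = c and random.random() < 0.9    # A mostly follows C
    b = c and random.random() < 0.9    # B mostly follows C, never A
    return a, b

trials = [trial() for _ in range(10_000)]

p_b_given_a = (sum(1 for a, b in trials if a and b)
               / sum(1 for a, b in trials if a))
p_b = sum(1 for _, b in trials if b) / len(trials)

# B is far more likely when A is present, yet A never causes B here.
print(f"P(B | A) = {p_b_given_a:.2f} vs P(B) = {p_b:.2f}")
```

An experiment that only shows "A and B happen together" cannot tell this scenario apart from "A causes B"; controls for the candidate common cause can.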

A vague hypothesis makes it difficult to design an experiment that could potentially falsify it—which is a critical characteristic of a well-formed scientific hypothesis.

Case 2

If you only have a single hypothesis, your experimental efforts may be directed toward seeking to prove that hypothesis. Applying the common framework of Null Hypothesis Significance Testing (NHST), we would start by designating the hypotheses:

  • Null hypothesis (H0)—there is no relationship between A and B.
  • Alternative hypothesis (Ha)—there is a relationship between A and B.

We then proceed to design an experiment to collect data about A and B to test their relationship. Once the data have been collected, we analyze them to calculate a p-value, hoping it is smaller than our pre-set significance level (usually 0.05). If the p-value is less than the significance level, the results are deemed statistically significant, meaning that data at least this extreme would be very unlikely if the null hypothesis were true. You've successfully rejected the null hypothesis!

But what does rejecting the null hypothesis actually tell us? The null hypothesis states that there is no relationship between A and B—but was this a meaningful hypothesis to begin with? In most situations, an experiment is motivated by the observation that there IS some kind of relationship between A and B, which we hoped to elucidate through research.

By rejecting the null hypothesis, you have shown that there is some kind of association between A and B, but you haven't revealed anything about the nature of that relationship.
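To make the NHST recipe concrete, here is a stdlib-only Python sketch that computes a p-value via a permutation test (standing in for a parametric test); the measurements for the two groups are invented for illustration:

```python
# Permutation-test sketch of NHST. Group values are made up:
# e.g., a measurement of B when A is present vs. absent.
import random

random.seed(0)
group_a = [5.1, 5.8, 6.2, 5.9, 6.0, 5.5]   # A present (synthetic data)
group_b = [4.8, 5.0, 5.2, 4.9, 5.3, 5.1]   # A absent (synthetic data)

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)

# Under H0 (no relationship), group labels are exchangeable: shuffle the
# labels many times and see how often a difference this large appears.
pooled = group_a + group_b
n_perms = 10_000
n_extreme = 0
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_perms
print(f"observed difference = {observed:.2f}, p ~ {p_value:.4f}")
# A small p only says "data this extreme are unlikely under H0";
# it says nothing about HOW A and B are related.
```

Note what the final comment emphasizes: even a tiny p-value here only licenses rejecting "no relationship", which is exactly the limitation discussed above.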

Case 3

If you start with two hypotheses (H1 and H2) and they are NOT mutually exclusive, your experiment may not have a clear objective or an interpretable set of results. While it is great to start with two hypotheses, remember that if they are not mutually exclusive, they could both be true at the same time (or, alternatively, neither could be true)! If a study does not provide definitive evidence about the actual relationship between A and B, future studies may not have much to build on.

There is one case where it could be acceptable to have two hypotheses that can be true at the same time. This applies when the relationship between A and B can occur through multiple pathways, and the goal of a study is to quantify the frequency or strength of the two hypothesized pathways. In such a study, there is usually an unspoken third hypothesis that is mutually exclusive, and which can be measured or identified using a control.

The Solution

Instead, start with two hypotheses (H1 and H2) that are mutually exclusive and together exhaustive: exactly one of H1 or H2 must be true. In such a situation, evidence that favors one hypothesis refutes the other, so your study's result will clarify the relationship between A and B. Failing to pit competing ideas against each other slows discovery: if an initial hypothesis seems correct and no alternative is tested against it, scientists may continue working under its assumption, wasting time and resources.

In an ideal scenario, if you have a favored explanation, you also have at least one well-constructed competing hypothesis: something that is just as plausible given current knowledge. The resulting experiment can show which hypothesis is true, preventing us from becoming too attached to, or investing too much effort in, any single explanation. Additionally, by structuring experiments to clearly differentiate between these alternatives, your results become more conclusive and more persuasive to others.

Similarly, make sure you’re defining your hypotheses specifically enough that they are falsifiable. Vague hypotheses leave too much room for error while making it difficult (or impossible) to identify a sufficient counterexample.

2.2 The Left-Brain/Right-Brain Myth: A Case Study

What does it look like to have a favored hypothesis in science and how can that disrupt scientific progress?

In the 1970s and 1980s, pioneering split-brain research by Roger Sperry showed that the two brain hemispheres can specialize in different cognitive tasks. 

This later gave rise to a number of studies that used brain imaging to show that cognitive tasks produced brain activity favoring either the left or right hemisphere (almost trivially true given biology: it would be nearly impossible for any task to use both hemispheres exactly equally). In essence, this situation fell into both Case 1 and Case 2 above: a vague theory that seemed to be validated by studies rejecting an uninformative null hypothesis.

Unfortunately, a side effect of this work is the resulting popular misconception that the left hemisphere is exclusively logical and the right exclusively creative (Gazzaniga 2005). This distinction became a favored explanation for personality traits, using oversimplified evidence and ignoring conflicting findings. 

The persistence of this myth influenced teaching strategies, career counseling, and how individuals viewed their own cognitive abilities, limiting personal growth and misdirecting the educational efforts of individuals and entire organizations for generations.

This myth was so pervasive that many scientists were probably told in school that they are more “left-brained” because they are good at math and science, the more logical things, compared to artists who are more “right-brained.”

But what does the science say now?

Cognitive functions emerge from integrated networks that span both hemispheres (not from isolated “left” or “right” processes). Advanced neuroimaging studies reveal extensive communication between the two hemispheres and complex behaviors resulting from coordinated activity across these brain regions (Toga & Thompson, 2003).

While newer ways of studying brain activity helped dispel the idea that cognitive tasks are exclusively specialized to the left or right hemisphere, in truth, this was never a well-specified scientific theory in the first place. The mistake was treating this broad generalization as a scientific theory supported by many studies, each of which may have investigated related but different processes in the brain.

But we have a way out: develop competing hypotheses! Making better hypotheses helps us implement strong(er) inference practices (Platt, 1964).

Making better hypotheses helps us implement strong(er) inference practices.
(Platt, 1964).

2.3 Activity: Strategies for Hypothesis Generation

In the next activity, you will have the opportunity to practice developing specific and mutually exclusive hypotheses.

You can supply your own hypothesis or use one of our examples.

Embedded Webpage

Post-activity questions:

  1. Did you come up with a plausible competing hypothesis?
  2. Which prompts helped you generate a competing hypothesis? Did any of the prompts change your view of the initial hypothesis?
  3. Were there additional ways of altering the initial or competing hypothesis?
  4. What strategies helped you think beyond your initial explanation?
  5. Did you have a preferred competing hypothesis? How did it complement or directly challenge the original idea?

Developing a strong hypothesis is challenging, and part of strengthening a hypothesis is testing it against contradictory ideas. Remember: it is necessary to be thoughtful about initial hypotheses and to consider opposing ideas that could equally explain underlying mechanisms or observed effects.

2.4 Rigorous experiments start with good hypotheses

To recap, the steps to creating hypotheses for rigorous experiments are:

  1. Develop two mutually exclusive hypotheses.
  2. Check that both hypotheses are plausible (there is some evidence and support for them to be true) and falsifiable (it is possible to show that the hypothesis is false).

What are some specific actions that can be taken to support these steps?

  1. Exploratory pre-studies: Conduct pilot experiments or look at existing data to see if a hypothesis is indeed a plausible explanation for the observations.
  2. Seek opposing views: Talk to colleagues who are skeptical. They may quickly offer other competing ideas, helping you to refine how you approach your study design.
  3. Stay updated on literature: Carefully seek out literature about contradictory or inconclusive findings to ensure that you don't get tunnel vision about the publications that support your favored explanation.

Once you have a good set of hypotheses, you can proceed confidently to the next step, which is to make choices in designing your study!

Takeaways:

  • Confirmation bias can lead you to focus on a favored hypothesis, causing you to interpret supportive evidence as definitive proof, even when other plausible explanations exist.
  • If you design experiments with two mutually exclusive hypotheses, you are more likely to differentiate clearly between potential explanations, leading to more conclusive and informative results that better reflect reality.
  • Writing specific, falsifiable hypotheses ensures that research design explicitly tests the causes and underlying mechanisms of a given phenomenon rather than just confirming what you already believe.

Reflection:

  • Think back to a recent project: where did you feel a strong urge to prove your favorite explanation instead of testing whether it could be wrong?
  • When you design an experiment, how do you make room for a rival idea that could oust your front-runner hypothesis?
  • Recall a time your results “fit” a vague prediction; what alternative stories about the data did you overlook?