Causality
From 'what' to 'why'
Lesson 1:
Correlations point towards causal explanations
Summary
This lesson explores how correlations serve as starting points for causal inquiry. Learners examine an example of a correlation and use it to brainstorm causal explanations. The lesson then introduces the form of causal questions and contrasts two neurophysiological studies: one that is an indirect test, and another that does a direct manipulation.
Goal
Identify how correlations lead to causal explanations to test, and analyze study designs to understand when evidence is correlational versus when it comes from a direct manipulation.
1.1 Why causality matters
What makes it possible to say one thing has caused another?
This question is at the heart of scientific inquiry across disciplines. As humans, we instinctively seek out explanations for the phenomena that we observe. And in neuroscience, we care about causal questions because the answers can give us insight into how the brain works, or even lead to treatments for neurological conditions.
Example causal questions:
- Does dopamine reinforce learning?
- Does Parkinson's disease involve dopamine dysfunction?
Discovering accurate and actionable explanations to these kinds of questions can be quite challenging! This unit will take you through the key methods in scientific research to arrive at causal inference and avoid some of the traps.
We begin, as inquiry often does, with a simple observation of two phenomena that appear to be related.
Activity: Explain the correlation
Consider the two phenomena presented, and suggest an explanation for why they may appear to be related.
Would you say that one causes the other?
Post-activity questions:
- Does your explanation contain a hypothesis or link that could be tested scientifically?
- What factors give the appearance of a causal link, if at all, between chocolate consumption and Nobel laureates per capita?
- What kinds of information would you need to be able to make a causal claim?
Since Messerli (2012) first documented this correlation between chocolate consumption and Nobel laureates per capita across countries, others have discovered similar correlations, such as with milk consumption or GDP per capita. In truth, eating chocolate is almost certainly not going to make you a Nobel laureate. This nugget does, however, reveal several important insights into how to think about causation.
First, the presence of a correlation is not sufficient evidence to suggest causation (the old adage that "correlation is not causation").
Second, by thinking about potential explanations for a correlation, we can land on causal questions to investigate.
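To make this concrete, here is a minimal sketch of how a hidden common cause can manufacture a correlation. All numbers are made up for illustration; `wealth` stands in for any shared driver (such as GDP per capita) that influences both observed variables, with no direct causal link between them.

```python
import random

random.seed(0)

# Hypothetical data: a hidden common cause ("wealth") drives both
# chocolate consumption and Nobel laureates per capita. Neither
# observed variable causes the other.
n_countries = 200
wealth = [random.gauss(0, 1) for _ in range(n_countries)]
chocolate = [w + random.gauss(0, 0.5) for w in wealth]  # wealth -> chocolate
nobels = [w + random.gauss(0, 0.5) for w in wealth]     # wealth -> laureates

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Chocolate and Nobel counts come out strongly correlated,
# even though there is no causal arrow between them.
print(round(corr(chocolate, nobels), 2))
```

Observing only `chocolate` and `nobels`, a strong correlation appears; only knowledge of (or control over) the common cause reveals that the causal story runs through `wealth`.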
1.2 The form of causal questions
Causal questions have the form: "If I make a change to an independent variable, does that cause a change in the dependent variable?"
| Independent Variable | Dependent Variable | Causal Question |
| --- | --- | --- |
| chocolate consumption | Nobel laureates | Does eating chocolate make you a Nobel laureate? |
| dopamine | learning | Does dopamine reinforce learning? |
| dopamine dysfunction | Parkinson's disease | Does Parkinson's disease involve dopamine dysfunction? |
These are the causal questions introduced so far, along with their associated independent and dependent variables. Note that the exact wording of a causal question is not always the same! The verb that links the independent and dependent variables may encapsulate several details, such as:
- To what extent an independent variable might be a sole cause or an influence on the dependent variable.
- Whether the independent and/or dependent variables are binary (yes/no), quantitative (on a numeric scale) or other.
- Hints as to the hypothesized causal pathway whereby the independent variable exerts influence on the dependent variable.
Some of these details will be covered in lessons 2, 4, and 5.
1.3 A neurophysiology example
Let's take a closer look at a neurophysiological example. Roitman & Shadlen (2002) sought to investigate the neural regions responsible for decision-making. They hypothesized that neural activity in the Lateral Intraparietal (LIP) area was involved, and so conducted the following experiment.
Monkeys were trained to observe moving dots on a screen that moved to the left or right, and to respond by moving their eyes to the left or right depending on the direction of motion. At the same time, neural activity in the LIP area was recorded.
The results indicated the following:
- Neurons in the LIP area fire approximately 100 msec before eye movement.
- Activity in the LIP area is sensitive to the stimulus strength. In trials where the motion of the dots was strong and easily visible, there was more firing compared to trials with weaker motion.
- Different neurons fire if the subsequent eye movement was to the left vs. to the right.
Consequently, Roitman & Shadlen reported:
"Because the time course and level of activity depend on the strength of random-dot motion, it is likely that LIP neurons not only represent the planned eye movement response but also the visual information on which the developing decision is based—in other words, a decision variable."
The phrase "decision variable" suggests that LIP activity is causal to the eye movement, but is this actually the case?
Let's reframe this in the form of causal questions we used earlier:
| Independent Variable | Dependent Variable | Causal Question |
| --- | --- | --- |
| LIP activity | eye movement | If I make a change to LIP activity, does that change eye movement? |
Now that we've specified the causal question more explicitly, we can identify that in the study, the manipulation was the direction and strength of the moving dot stimulus. Both LIP activity and eye movement were observed, not manipulated.
In other words, this was an indirect experiment. There was no direct manipulation of the independent variable (LIP activity) of interest.
1.4 Manipulating neural activity
What would it look like to do a direct manipulation of neural activity? Luckily, that study has already been done!
Katz et al. (2016) trained monkeys on the same task as Roitman & Shadlen, but their study contained one key difference—they infused specific brain areas with muscimol to cause (temporary) neural inactivation.
This manipulation enables a direct test of the causal hypothesis:
- If LIP activity is necessary for eye movement…
- then inactivating the LIP area should stop eye movement.
But Katz et al. found that eye movement still occurred as normal even without activity in the LIP area. This suggests that eye movement is not caused by LIP activity.
As an additional check, Katz et al. also inactivated the MT (middle temporal) area, a region known to be important for visual processing of motion. With MT inactivated, the monkeys showed impaired performance on the task, indicating that muscimol is effective at inactivating neural regions in a way that affects task performance.
Takeaways:
- Correlations are starting points for causality, leading us to ask causal questions that can be tested through studies and experiments.
- Causal questions have a specific structure: "If I make a change to an independent variable, does that cause a change in the dependent variable?"
- Some experiments may not manipulate the independent variable in a causal question; these experiments can only provide indirect evidence for causality.
Reflection:
- Have you ever conducted or read about an experiment that turned out to be a less direct test of a causal question than you initially expected? What did you learn from that?
- What do you think happens when people look at a correlation and automatically think of a causal story about the data? Do you have any tricks for remembering to think twice before reaching conclusions?
- Have you ever disagreed with someone about whether “enough” evidence supports a causal claim to consider it “established”? What additional evidence might have helped?
Lesson 2:
Strengthening causality with direct experiments and randomization
Summary
This lesson reviews two design aspects of experiments that influence the strength of causal evidence: directness and randomization. Using examples, the lesson introduces learners to the two concepts, with specific focus on the directness of the intervention, outcome measure, and alignment with a hypothesized causal pathway. Learners practice improving the directness of two hypothetical studies.
Goal
- Evaluate an experimental design for directness of the intervention and outcome measure.
- Learn the assumptions involved in using random assignment to ensure validity of causal experiments.
2.1 Causality is a spectrum
There isn't a single factor that can separate research into "causal" and "non-causal". Instead, we need to evaluate causality in terms of our confidence about the available evidence, which could arise from a single study or a collection of studies.
In other words, we should reframe our investigation into causality as:
Based on the evidence, how confident are we that X causes Y?
This lesson is about two attributes of studies that affect the strength of causal evidence:
- directness
- randomization
The best way to test a causal question is with a direct experiment.
2.2 What makes an experiment direct?
When you get ready to carry out your study that tests the relationship between an independent variable and a dependent variable, you will make choices such as:
- how to implement your intervention
- how to measure outcomes
These choices are often influenced by feasibility and ease of access (what variables can be measured or manipulated and the difficulty involved), but can have major impacts on the rigor of your experiment and the resulting evidence.
2.3 How direct is the intervention?
The directness of the intervention captures the extent to which your method of implementing the intervention is aligned with the independent variable of your causal question.
For example, in the Roitman & Shadlen study, the causal question was:
"how does neural activity in the LIP area affect eye movement decisions?"
However, the manipulation in the study was to change the type of visual stimuli shown to the subjects, and both the activation of LIP neurons and behavior were observed as outcomes.
This is an example of an indirect manipulation of the independent variable.
When designing the experiment to test this causal question, you will need to make choices about how to manipulate the neural activity in the LIP area.
- For example, you might choose to manipulate neural activity indirectly, by introducing a stimulus or engaging the subject in behavior that results in LIP neural activity.
- You might also choose to manipulate neural activity directly, such as by using transcranial magnetic stimulation (TMS) to induce neural activity directly.
Either of these choices has the potential to introduce complications into your experiment that can negatively impact your ability to draw causal conclusions!
The indirect stimulation approach adds elements to the intervention beyond LIP neural activity, and if any of these cause or impair the observed eye movements, the results will be muddied.
Stimulating neural activity with TMS may be imprecise and affect activity in other areas. It may also induce a pattern of neural activity different from what would occur in natural settings, such that a realistic pattern of activity leads to eye movement but TMS-stimulated activity does not.
2.4 How direct is the outcome measure?
The directness of the outcome measure captures the extent to which what you measure is aligned with the dependent variable of your causal question.
For example, suppose that your hypothesis involves improvements to memory. How will you measure memory? If you use a memory test, there may be ways for your subjects to perform well without recalling the actual content you care about (e.g. multiple-choice questions that are easy to guess on). On the other hand, there may also be factors that hurt performance but are unrelated to memory (e.g. stress in a test-taking environment).
If your research interest is in the neurobiological aspects of memory formation, then you may consider imaging or chemical methods to measure the biological processes that produce recall.
Although it can be tempting to measure everything that is relevant, doing so adds cost to the experiment and introduces complexity in data analysis later—what if one measure shows a change but another does not? (We won't get into the details here, but do check out the unit on multiplicity and statistical analysis planning for more guidance on data analyses.)
These aspects of experimental design are all examples of construct validity.
Construct validity refers to how well what you have implemented or measured represents the variables or concepts that are difficult to ascertain directly.
Some aspects of construct validity can be verified through manipulation checks or positive controls. See lesson 5 of our unit on Controls for more on this subject!
2.5 Pathway directness
In addition to how you operationalize the independent and dependent variables in your study, you may also want to check that your study does a thorough job of testing that the relationship between the variables is occurring through the causal pathway that you theorize.
In other words, in addition to the hypothesis that the independent variable influences the dependent variable, you probably have a hypothesis for HOW the independent variable influences the dependent variable.
For example, suppose that you are conducting an experiment to test the hypothesis that creating flash cards improves exam scores. You might theorize the following mediator variables that connect the independent variable (creating flash cards) and the dependent variable (exam scores):
- To create flash cards, ravens identify key concepts,
- creating memories of these concepts,
- which are recalled at exam time.
How can we verify if these mediators are actually operating?
A manipulation check will test whether an intervention operated as intended.
- Did a raven actually create the flash cards?
- Did they print them out from an online source?
Depending on your theorized causal pathway, you may want one or more manipulation checks as part of the experiment. Including manipulation checks can distinguish between different pathways through which the independent variable may be influencing the dependent variable.
For example, perhaps the act of making flash cards is more active than rewatching lectures, so merely doing an active form of studying improves exam scores. We may also want to watch out for other processes that interfere with our outcomes—what if the exam is so easy that all ravens ace it, such that no difference is detected even though one study method did actually improve memory?
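The worry about an exam that is too easy can be made concrete with a small sketch. All numbers here are hypothetical, and `exam_score` is an invented helper that caps scores at 100 to mimic an easy exam with a ceiling.

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: flash cards genuinely improve underlying
# memory, but an easy exam caps scores at 100, hiding the difference.
def exam_score(memory, max_score=100):
    """Cap the underlying memory level at the exam's maximum score."""
    return min(memory, max_score)

flashcard_memory = [random.gauss(120, 10) for _ in range(50)]  # true mean 120
video_memory = [random.gauss(110, 10) for _ in range(50)]      # true mean 110

true_gap = statistics.mean(flashcard_memory) - statistics.mean(video_memory)
observed_gap = (statistics.mean(exam_score(m) for m in flashcard_memory)
                - statistics.mean(exam_score(m) for m in video_memory))

# The gap in observed exam scores is much smaller than the true memory gap.
print(round(true_gap, 1), round(observed_gap, 1))
```

Because most subjects in both groups hit the ceiling, the observed difference shrinks toward zero even though the underlying effect is real, which is exactly the kind of interfering process a manipulation check or a harder outcome measure would catch.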
Activity: Evaluate an experiment for directness
When we design an experiment, we make choices about how to implement an intervention and assess an outcome. As discussed above, some of those choices can be more or less direct than others. And, when attempting to capture particularly fuzzy variables, we may need to settle for the best approximation of a valid construct we are able to identify.
In this activity, evaluate the directness of the intervention and outcome for each experiment.
Post-activity questions:
2.6 Randomization supports causal evidence
Experiments should be direct to address causality with confidence.
But there is one more design element that is important to the strength of causal evidence: randomization.
The ideal experiment tests cause and effect. Make a change to an independent variable, and observe changes in the dependent variable.
For example, consider an experiment testing the effect of studying with flash cards on memory in Rigorous Ravens. To test a Raven's ability to recall information presented on a flash card, we ideally want to:
- First, measure the Raven's performance after studying with flash cards. Call this Yt.
- Then, rewind time, and measure the Raven's performance after studying by rewatching videos. Call this Yc.
The effect of the intervention on this Raven can then be calculated as the difference between the performance in the two conditions, Yt - Yc.
But we have a problem. The same Raven can't actually go through both conditions of this experiment.
Since we cannot measure both potential outcomes of Yt and Yc, we try to get as close as we can by computing an average treatment effect, E[Yt - Yc].
Because E[Yt - Yc] = E[Yt] - E[Yc], our approach to computing the average treatment effect is to estimate E[Yt] and E[Yc] separately. We randomly assign each Raven to either treatment or control, measure the corresponding Yt or Yc for each Raven, and use the difference between the two group averages to estimate the average treatment effect.
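The estimation procedure above can be sketched in a short simulation. The numbers are made up; the true treatment effect is set to +5 points by construction so that the randomized estimate can be compared against it.

```python
import random
import statistics

random.seed(42)

# Each Raven has two potential outcomes: Yt (exam score after flash
# cards) and Yc (after rewatching videos). In a real experiment we can
# only ever observe one of the two per Raven.
n = 1000
ravens = []
for _ in range(n):
    ability = random.gauss(70, 10)
    ravens.append({"Yc": ability,        # potential outcome under control
                   "Yt": ability + 5})   # under treatment (true effect = +5)

# Random assignment: each Raven is independently assigned to one
# condition, and we record only the potential outcome for that condition.
treat_scores, control_scores = [], []
for r in ravens:
    if random.random() < 0.5:
        treat_scores.append(r["Yt"])
    else:
        control_scores.append(r["Yc"])

# Estimate E[Yt] - E[Yc] from the difference of the group averages.
estimated_ate = statistics.mean(treat_scores) - statistics.mean(control_scores)
true_ate = statistics.mean(r["Yt"] - r["Yc"] for r in ravens)
print(round(true_ate, 2), round(estimated_ate, 2))
```

Because assignment is random, the treatment group is, on average, just like the control group, so the difference in group means recovers the average treatment effect even though no Raven ever contributes both outcomes.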
In order for this approach to be correct, we require a few key assumptions!
- Assignment—each Raven is independently and randomly assigned to either treatment or control condition.
- Stable Unit Treatment Value Assumption (VanderWeele & Hernán, 2013)—each Raven assigned to a specific treatment receives the same intervention, and their outcome is not influenced by the treatment assignments of other Ravens.
For more on evaluating and understanding appropriate randomization techniques, check out our unit on Randomization!
Direct Experiment Checklist:
- Implement proper controls and randomization.
- Ensure a direct manipulation of the hypothesized cause.
- Use an appropriate measure of the hypothesized effect.
- Verify your causal pathway if possible.
Takeaways:
- A causal experiment is made rigorous through ensuring directness and appropriate randomization.
- Evaluating the directness of the independent variable, dependent variable, and hypothesized pathway ensures alignment between the experiment and the hypothesis under investigation.
- Randomization is necessary because it is impossible to measure the same experimental unit in both the treatment and control condition, under the same context.
Reflection:
- Have you recently encountered, in your work or a paper you read, a distinction between what is stated in a hypothesis as the independent (or dependent) variable and what was actually implemented in the study? How could the two be brought closer into alignment?
- In your field, what are common ways to exclude alternative causal pathways?
- In your field, what are common ways that randomization might not achieve the appropriate assumptions? How would you protect against these errors?
