Causation does not necessarily imply correlation

Apr 25, 2026 · 3 min read

Debate any subject with an empirical angle and you will inevitably run into the phrase “correlation does not necessarily imply causation”. While true, it is rarely an interesting observation, and quite often used to reflexively dismiss empirical evidence countering one’s viewpoint (even if this impulse is ultimately correct much of the time). As investor Paul Graham amusingly put it:

Whenever I see a reply mentioning that correlation isn’t causation, without fail it turns out to be saying something stupid. If they made a great seal of midwits, that phrase would be inscribed around the outer edge.

It is more interesting to note another bias making causal claims in research difficult: the fact that causation does not necessarily imply correlation, especially when human actors are involved. Economist Scott Cunningham has a great illustration of this at the beginning of his book Causal Inference: The Mixtape:

But weirdly enough, sometimes there are causal relationships between two things and yet no observable correlation. Now that is definitely strange. How can one thing cause another thing without any discernible correlation between the two things? Consider this example, which is illustrated in Figure 1.1. A sailor is sailing her boat across the lake on a windy day. As the wind blows, she counters by turning the rudder in such a way so as to exactly offset the force of the wind. Back and forth she moves the rudder, yet the boat follows a straight line across the lake. A kindhearted yet naive person with no knowledge of wind or boats might look at this woman and say, “Someone get this sailor a new rudder! Hers is broken!” He thinks this because he cannot see any relationship between the movement of the rudder and the direction of the boat.

But does the fact that he cannot see the relationship mean there isn’t one? Just because there is no observable relationship does not mean there is no causal one. Imagine that instead of perfectly countering the wind by turning the rudder, she had instead flipped a coin—heads she turns the rudder left, tails she turns the rudder right. What do you think this man would have seen if she was sailing her boat according to coin flips? If she randomly moved the rudder on a windy day, then he would see a sailor zigzagging across the lake. Why would he see the relationship if the movement were randomized but not be able to see it otherwise? Because the sailor is endogenously moving the rudder in response to the unobserved wind. And as such, the relationship between the rudder and the boat’s direction is canceled—even though there is a causal relationship between the two.

The term endogeneity, favoured by economists, refers to a situation where the independent variable of your analysis (rudder direction) is correlated with your error term (the random component), in this case due to an unobserved variable (the wind).
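The cancellation is easy to see in a short simulation (a sketch of my own, not from Cunningham's book; variable names are illustrative). A skilled sailor sets the rudder to exactly offset the wind, so the rudder tracks an unobserved variable and its correlation with the boat's heading vanishes; a coin-flipping sailor sets the rudder independently of the wind, and the causal effect shows up as a strong correlation:

```python
import random

random.seed(42)
n = 10_000

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Unobserved wind pushes the boat; small noise stands in for waves etc.
wind = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 0.1) for _ in range(n)]

# Endogenous rudder: the sailor exactly offsets the wind,
# so the rudder is perfectly (negatively) correlated with it.
rudder_skilled = [-w for w in wind]
heading_skilled = [w + r + e for w, r, e in zip(wind, rudder_skilled, noise)]

# Randomised rudder: set by coin flips, independent of the wind.
rudder_random = [random.choice([-1.0, 1.0]) for _ in range(n)]
heading_random = [w + r + e for w, r, e in zip(wind, rudder_random, noise)]

corr_skilled = corr(rudder_skilled, heading_skilled)  # near zero
corr_random = corr(rudder_random, heading_random)     # strongly positive

print(f"skilled sailor:   corr(rudder, heading) = {corr_skilled:+.3f}")
print(f"coin-flip sailor: corr(rudder, heading) = {corr_random:+.3f}")
```

The rudder moves the boat in both scenarios; only the randomised one lets that causal effect surface as a correlation, which is exactly why randomisation is so valuable for causal inference.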

This example may seem trivial, but it is analogous to what occurs in studies of human behaviour all the time. Consider the early COVID-19 pandemic, when containment measures were imposed in response to rapidly rising cases. A naive correlation analysis might show cases continuing to rise, or even rising faster, immediately after restrictions were introduced. But this is exactly what you would expect if restrictions are imposed in response to worsening conditions. The best way to investigate causality is to create randomness in your independent variable, either through study design or through analysis. But governments did not impose restrictions at random, and so a positive correlation between interventions and cases simply reflects the fact that governments tend to grab the rudder when the wind picks up. (This specific question is further complicated by the fact that reported cases lag infections.)