Causality diagrams: the vocabulary every pitch, report, and pilot will draw in

Arrows, and what they mean.

A subconscious simulation is not a regression line; it is a directed acyclic graph. Every diagram we publish, from pitch deck to board memo to technical appendix, is drawn in the same vocabulary of five node types and four arrow types. When we say we can answer a question nobody asked, we mean: cut an arrow, pin a node, propagate.
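"Cut an arrow, pin a node, propagate" is graph surgery in the do-calculus sense. A minimal sketch with an invented three-node model (the names U, T, Y and all coefficients are illustrative, not any shipped simulation):

```python
import random

random.seed(0)

# Toy structural causal model: U -> T, U -> Y, T -> Y.
# do(T=t) cuts the U -> T arrow, pins T, and propagates downstream.
def sample(do_t=None):
    u = random.gauss(0, 1)                                  # latent cause
    t = do_t if do_t is not None else (1 if u > 0 else 0)   # pin the node
    y = 2.0 * t + u + random.gauss(0, 0.1)                  # propagate
    return t, y

obs = [y for t, y in (sample() for _ in range(20_000)) if t == 1]
intv = [sample(do_t=1)[1] for _ in range(20_000)]

naive = sum(obs) / len(obs)      # E[Y | T=1]     ≈ 2.8, contaminated by U
causal = sum(intv) / len(intv)   # E[Y | do(T=1)] ≈ 2.0, arrow cut
```

The gap between the two numbers is exactly the confounding that conditioning cannot remove and intervention can.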

00 · vocabulary Five node shapes. Four arrow kinds. No more than that, ever. doctrine · set 26.04
[Legend: X observed · U latent · do(T) intervention · Y outcome · Z counterfactual; arrows: causes · confounds · effect of interest · severed by do-op]

Five shapes. Four arrows. The ink circle with do( ) is the only thing that distinguishes a simulation from a correlation. Everyone else in the industry is drawing the same diagram with only circles.

01 · canonical pitch diagram What a pricing simulation actually does. fig. 1 · beverage case · Q4 25
[Fig. 1 nodes: INC income · BP brand pref. · do(P) price · do(PK) pack · Y purchase ($); back-door paths from INC and BP severed; red arrows: effect of price, effect of pack]

The two black squares are do-operators. They sever the back-door paths from latent income and brand preference, so what remains along the red arrows is the causal effect of price and pack on purchase, not a correlation contaminated by "rich people like this brand anyway."

Source beverage case · anonymized · Q4 25
Method do-calculus, Pearl (2009) reference
Read as "what would happen to Y if we set price and pack by fiat, leaving every latent as it is?"
Fig. 1.1 · Pricing simulation DAG — income and brand preference latents severed by do-operator, isolating the causal effect of price and pack on purchase. [1, 3]
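The back-door contamination in Fig. 1.1 can be reproduced in a toy stand-in (coefficients and the −0.8 price effect are invented for illustration; this is not the beverage case data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Latent income INC opens a back-door: it drives both observed price and purchase.
inc = rng.normal(0.0, 1.0, n)                          # U: latent income
price_obs = 5.0 + 1.0 * inc + rng.normal(0.0, 0.5, n)  # richer shelves, higher prices

def purchase(price, inc):
    # Structural equation: the true causal price effect is -0.8 per dollar.
    return 10.0 - 0.8 * price + 1.5 * inc + rng.normal(0.0, 0.5, len(inc))

y_obs = purchase(price_obs, inc)

# Rung 2: regress purchase on observed price. The back-door flips the sign.
naive_slope = np.polyfit(price_obs, y_obs, 1)[0]       # ≈ +0.4 (wrong)

# Rung 3: do(P) pins price for everyone; the INC -> P arrow is severed.
effect = (purchase(np.full(n, 6.0), inc).mean()
          - purchase(np.full(n, 4.0), inc).mean()) / 2.0   # ≈ -0.8 (right)
```

The naive slope is not just biased; it has the wrong sign, because "rich people like this brand anyway" outweighs the true price effect in the observational data.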
02 · the stated-revealed DAG Why surveys miss: the wrong arrow points at the outcome. fig. 2 · doctrine
A · stated-preference model
"Ask people what they want, average the answers, call that a forecast."
[ASK → S stated → Ŷ forecast] Linear, clean, wrong.
B · the real data-generating process
The stated answer is another symptom of identity, just like the purchase.
[IDENT identity → S stated · IDENT → Y did · assumed S → Y arrow severed]
C · what subconscious models
Condition on the identity. Intervene at the decision. Done.
[IDENT · do(T) → Y] All arrows point forward.

Panel A is what sixty billion dollars of legacy market research is built on. Panel B is what actually happens in a human head. Panel C is what we ship. The arrows are not decoration; they are the argument.

Fig. 1.2 · Stated vs. revealed DAG — identity latent drives both stated answer and revealed purchase; the assumed S→Y arrow does not exist in the generative model. Survey research treats Panel A as if it were Panel B. [1]
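Panel B is easy to simulate, and the simulation makes the failure concrete. A sketch with invented names and coefficients: identity drives both the stated answer and the purchase, so the two correlate strongly even though no S → Y arrow exists.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Panel B as a toy generative model (all names and numbers illustrative).
ident = rng.normal(0.0, 1.0, n)                 # IDENT: latent identity
stated = ident + rng.normal(0.0, 0.5, n)        # S: survey answer, a symptom
bought = ident + rng.normal(0.0, 0.5, n)        # Y: purchase, another symptom

# Rung 2: S and Y correlate strongly, so surveys look predictive...
r = np.corrcoef(stated, bought)[0, 1]           # ≈ 0.8

# Rung 3: ...but do(S) moves nothing, because the purchase equation
# never mentions S. Forcing every stated answer leaves Y's law unchanged:
bought_do_s = ident + rng.normal(0.0, 0.5, n)   # Y under do(S = anything)
lift = bought_do_s.mean() - bought.mean()       # ≈ 0
```

A correlation of 0.8 and an intervention effect of zero, from the same population: that is Panel A mistaken for Panel B in two lines of arithmetic.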
03 · four operations, one population Observe, associate, intervene, imagine. fig. 3 · ladder of causation
01 · observe · P(Y)
What is.
Y

A population, measured as-is.

02 · associate · P(Y | X)
What correlates.
[X ⇢ Y · associates]

Given X, expect Y. The survey rung.

03 · intervene · P(Y | do(T))
What would happen if.
[do(T) → Y]

Set T by fiat. The simulator rung.

04 · imagine · P(Y_{T=t} | T=t′, Y=y′)
What would have happened, had.
[T=t′ actual ∥ T=t imagined · twin worlds]

Twin world. The regret rung.

Pearl's ladder, drawn as four cells of the same DAG. Focus groups climb to rung two. We climb all four, routinely, on a laptop, in a morning.

Fig. 1.3 · Ladder of causation (Pearl, 2009) — observe → associate → intervene → imagine. Standard analytics operates on rungs 1–2; behavioral prediction requires rung 3. [1, 2]
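Rungs one through three can be computed on one toy population in a few lines (names and coefficients invented; rung four needs per-individual exogenous noise reused across worlds, the subject of fig. 4, and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# One population, one toy SCM: X -> T, X -> Y, T -> Y.
x = rng.normal(0.0, 1.0, n)                          # observed covariate
t = (x + rng.normal(0.0, 1.0, n) > 0).astype(float)  # treatment, driven by X
y = 1.0 * t + 0.5 * x + rng.normal(0.0, 0.3, n)      # outcome

rung1 = y.mean()              # observe:   E[Y]           ≈ 0.50
rung2 = y[t == 1].mean()      # associate: E[Y | T=1]     ≈ 1.28 (X leaks in)
y_do = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, n)       # mutilated model, T pinned
rung3 = y_do.mean()           # intervene: E[Y | do(T=1)] ≈ 1.00
```

The true effect of T is 1.0; rung two overstates it because X pushes both T and Y, and only the rung-three surgery recovers it.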
04 · counterfactual split One respondent, two universes, one subtraction. fig. 4 · individual treatment effect
[Uᵢ one respondent → do(T) universe A: Yᴬ = $12.40 · do(∅) universe B: Yᴮ = $8.10 · Δᵢ = +$4.30 · ITE, respondent i]

Same respondent simulated twice: once in the world where we ship the feature, once where we don't. The subtraction, not the average, is what we report. You cannot do this with humans. We routinely do it with a million of them.

Reads as ITEi = Yi,do(T) − Yi,do(∅)
Aggregate CATE = E[ITE | segment]
Population ATE = E[ITE]
Fig. 1.4 · Individual treatment effect — ITEi = Yi,do(T) − Yi,do(∅). Aggregate to ATE = E[ITE] or CATE = E[ITE | segment]. Impossible to measure on real humans; requires synthetic twin worlds. [2]
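The twin-world subtraction is mechanically simple once the model exists: reuse the same exogenous draw for respondent i in both universes. A sketch with an invented spend equation (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Twin worlds: respondent i keeps the same exogenous draws in both universes.
u = rng.normal(0.0, 1.0, n)              # everything idiosyncratic about i
eps = rng.normal(0.0, 0.2, n)            # noise, also shared across universes

def spend(treat):
    # Heterogeneous effect: the treatment lands harder on high-u respondents.
    return 8.0 + 4.0 * treat * (1.0 + 0.3 * u) + 1.0 * u + eps

ite = spend(1.0) - spend(0.0)            # per-respondent subtraction: u, eps cancel
ate = ite.mean()                         # ATE  = E[ITE]          ≈ 4.0
cate_hi = ite[u > 1.0].mean()            # CATE = E[ITE | u > 1]  ≈ 5.8
```

Because u and eps cancel in the subtraction, every respondent's ITE is exact, not noisy, which is precisely the quantity no A/B test on real humans can observe.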
05 · drawing rules What a correct causality diagram looks like at subconscious. doctrine
circle, solid stroke · observed variable
circle, dashed stroke · latent / unobserved
ink square · do( ) · intervention
circle, red ring · outcome
ink arrow · causes
grey dashed arrow · confounds / associates
red thick arrow · effect of interest
red cross · arrow severed by do( )

Every diagram answers one question: which arrow is the effect of interest? Draw that arrow red. Everything else is in ink, and the confounds you cut are in grey with a red cross. If a diagram has more than one red arrow, the question is not yet sharp enough. — subconscious design doctrine, nº 06

[1] Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press.  ·  [2] Rubin, D.B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688–701.  ·  [3] Pearl, J., Glymour, M., & Jewell, N.P. (2016). Causal Inference in Statistics: A Primer. Wiley.
Shapes observed · latent · intervention · outcome · counterfactual  ·  Arrows causes · confounds · effect-of-interest · severed  ·  Banned undirected edges · emoji on nodes · gradient fills · more than one red arrow per diagram · DAGs drawn at soft-tech angles