CAUSAL INFERENCE

Difference‑in‑Differences

Compare pre→post changes for treated vs control groups to difference out common trends and quantify the intervention’s causal effect.

What Makes This Reliable

Parallel Trends Testing

Statistical tests and visualizations comparing pre-treatment trends between groups, with p-values to validate the key assumption.

Dynamic Event Study

Relative time coefficients showing treatment effects before and after intervention, with confidence bands for each period.

Robustness Checks

Placebo tests on pre-treatment periods, models with/without covariates, and unit/time fixed effects for reliable estimates.

What You Need to Provide

Required Data Structure

Your data needs a unit_column (store/region ID), time_column (period), treatment_indicator (0/1 for control/treated), and outcome (metric to measure).

Data format: Panel data with multiple observations per unit over time. The algorithm automatically creates the post-treatment indicator and DiD interaction term.
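For concreteness, here is a minimal sketch of what such a panel could look like, assuming pandas and NumPy. The names store_id, week, treated, and revenue, the constant TREATMENT_START, and the simulated numbers are illustrative assumptions, not required names; the last two lines mirror the columns the algorithm derives for you.

```python
# Toy panel: 6 stores observed over 8 weeks (all names and values hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
TREATMENT_START = 5                      # hypothetical first post-treatment week

rows = []
for store in range(1, 7):                # unit_column
    treated = int(store <= 3)            # treatment_indicator: stores 1-3 treated
    for week in range(1, 9):             # time_column
        post = int(week >= TREATMENT_START)
        lift = 2.0 if treated and post else 0.0        # simulated true effect
        revenue = 10 + 0.3 * week + lift + rng.normal(0, 0.5)   # outcome
        rows.append((store, week, treated, revenue))

panel = pd.DataFrame(rows, columns=["store_id", "week", "treated", "revenue"])

# Derived automatically by the pipeline: post-treatment indicator and the
# DiD interaction term.
panel["post"] = (panel["week"] >= TREATMENT_START).astype(int)
panel["did"] = panel["treated"] * panel["post"]
```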

Minimum requirements: At least 3 pre-treatment periods, ideally 4+ for parallel trends testing. You also need both treated and control units observed before and after treatment.

What you get: Treatment effect estimate with confidence intervals, parallel trends test results, event study coefficients, placebo test validation.

Panel Schema / Pre‑Post Timeline

Quick Specs

Keys: unit_id, time
Columns: treated, outcome, optional post/event time
Pre-periods: 3+ recommended
Designs: single date or staggered adoption

How We Estimate Effects

A rigorous pipeline from raw panel data to defensible causal estimates

1. Parallel Trends Validation

Tests pre-treatment period interactions between time and treatment group, providing p-values to validate the key identifying assumption.
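One common way to run this check is sketched below, assuming statsmodels and continuing the toy panel from the data-structure sketch above; it is an illustration under those assumptions, not necessarily the tool's exact implementation.

```python
# Interact pre-treatment time dummies with the treatment group and jointly
# test that all interactions are zero.
import numpy as np
import statsmodels.formula.api as smf

pre = panel[panel["post"] == 0]
pretrend = smf.ols("revenue ~ C(week) * treated", data=pre).fit()

# Restriction matrix selecting every week-by-treated interaction coefficient.
names = list(pretrend.params.index)
picked = [i for i, n in enumerate(names) if ":" in n]
R = np.zeros((len(picked), len(names)))
for row, col in enumerate(picked):
    R[row, col] = 1.0

print(pretrend.f_test(R))   # a large p-value is consistent with parallel pre-trends
```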

2. Fixed Effects Regression

Runs two-way fixed effects models with unit and time dummies, isolating the DiD interaction term that captures the treatment effect.
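A hedged sketch of this step, again assuming statsmodels and the toy panel above: C(store_id) and C(week) supply the unit and time dummies, and the coefficient on the DiD interaction is the treatment effect.

```python
# Two-way fixed effects DiD with standard errors clustered by unit.
import statsmodels.formula.api as smf

twfe = smf.ols("revenue ~ did + C(store_id) + C(week)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["store_id"]}
)

print(twfe.params["did"])            # DiD estimate of the treatment effect
print(twfe.conf_int().loc["did"])    # its confidence interval
```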

3. Event Study & Robustness

Creates dynamic treatment effects by relative time period, runs placebo tests, and provides multiple model specifications for validation.
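The sketch below shows one way to build these diagnostics, continuing the toy panel above; the rel_m/rel_p dummy names and the two-period placebo shift are made up for the illustration.

```python
# Event study: one dummy per relative period for treated units, with the
# period just before treatment (rel_time = -1) omitted as the baseline.
import statsmodels.formula.api as smf

panel["rel_time"] = panel["week"] - TREATMENT_START

event_cols = []
for k in sorted(panel["rel_time"].unique()):
    if k == -1:
        continue
    name = f"rel_m{-k}" if k < 0 else f"rel_p{k}"
    panel[name] = ((panel["rel_time"] == k) & (panel["treated"] == 1)).astype(int)
    event_cols.append(name)

event = smf.ols(
    "revenue ~ " + " + ".join(event_cols) + " + C(store_id) + C(week)", data=panel
).fit()
print(event.params[event_cols])   # leads should hover near zero; lags show the effect

# Placebo: pretend treatment started two periods earlier and re-estimate on
# pre-treatment data only; a near-zero, insignificant "effect" supports the design.
pre = panel[panel["post"] == 0].copy()
pre["placebo_did"] = pre["treated"] * (pre["week"] >= TREATMENT_START - 2).astype(int)
placebo = smf.ols("revenue ~ placebo_did + C(store_id) + C(week)", data=pre).fit()
print(placebo.params["placebo_did"], placebo.pvalues["placebo_did"])
```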

Why This Analysis Matters

When experiments aren’t feasible, DiD isolates the policy or product impact by differencing away common shocks that affect both groups.
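In the simplest two-group, two-period case, the estimate is literally a difference of two differences in group means:

\[
\widehat{\delta}_{\text{DiD}}
  = \bigl(\bar{Y}_{\text{treated,post}} - \bar{Y}_{\text{treated,pre}}\bigr)
  - \bigl(\bar{Y}_{\text{control,post}} - \bar{Y}_{\text{control,pre}}\bigr)
\]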

Use DiD to evaluate market‑level rollouts, regulatory changes, and operational policies. With explicit assumptions and diagnostics (parallel trends, event studies), results are explainable to non‑technical stakeholders and audit‑ready.

Key Assumption: Parallel trends — absent treatment, treated and control would have evolved similarly. We test this with pre‑trend diagnostics.
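Stated in potential-outcomes notation, with Y(0) denoting the untreated outcome, the assumption is:

\[
\mathbb{E}\bigl[\,Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid \text{treated}\,\bigr]
  = \mathbb{E}\bigl[\,Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid \text{control}\,\bigr]
\]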

Ready to Measure Impact?

Turn your panel data into causal evidence

Read the article: Difference‑in‑Differences