Compare pre→post changes for treated vs control groups to difference out common trends and quantify the intervention’s causal effect.
Statistical tests and visualizations comparing pre-treatment trends between groups, with p-values to assess the key identifying assumption.
Relative time coefficients showing treatment effects before and after intervention, with confidence bands for each period.
Placebo tests on pre-treatment periods, models with/without covariates, and unit/time fixed effects for reliable estimates.
Your data needs a unit_column (store/region ID), time_column (period), treatment_indicator (0/1 for control/treated), and outcome (metric to measure).
Data format: Panel data with multiple observations per unit over time. The algorithm automatically creates the post-treatment indicator and DiD interaction term.
Minimum requirements: At least 3 pre-treatment periods, ideally 4+ for parallel trends testing. Need both treated and control units observed before and after treatment.
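Under the schema above, the automatic preprocessing step can be sketched as follows (the data and the treatment-start period are hypothetical; the pipeline derives `post` and the DiD interaction for you):

```python
import pandas as pd

# Hypothetical two-unit panel using the required column names.
df = pd.DataFrame({
    "unit_column": ["A"] * 4 + ["B"] * 4,
    "time_column": [1, 2, 3, 4] * 2,
    "treatment_indicator": [1] * 4 + [0] * 4,  # A treated, B control
    "outcome": [10.0, 11.0, 14.0, 15.0, 9.0, 10.0, 11.0, 12.0],
})

TREATMENT_START = 3  # assumed intervention period

# Derived automatically by the pipeline; shown here for transparency.
df["post"] = (df["time_column"] >= TREATMENT_START).astype(int)
df["did"] = df["treatment_indicator"] * df["post"]
```

The `did` column equals 1 only for treated units in post-treatment periods; its regression coefficient is the treatment effect.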
What you get: Treatment effect estimate with confidence intervals, parallel trends test results, event study coefficients, placebo test validation.
A rigorous pipeline from raw panel data to defensible causal estimates
Tests time-by-treatment interactions over the pre-treatment periods, providing p-values to assess the key identifying assumption.
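One common form of this check, regressing the pre-treatment outcome on a treatment-by-time interaction, can be sketched with statsmodels (simulated data; the data-generating details below are assumptions for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units, n_pre = 20, 4  # hypothetical: 20 units, 4 pre-treatment periods

df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_pre),
    "time": np.tile(np.arange(n_pre), n_units),
})
df["treated"] = (df["unit"] < 10).astype(int)
# Both groups share the same slope, so pre-trends are parallel by construction.
df["outcome"] = df["time"] + 0.5 * df["treated"] + rng.normal(0, 0.1, len(df))

# A significant treated:time interaction would indicate diverging pre-trends.
fit = smf.ols("outcome ~ treated * time", data=df).fit()
slope_gap = fit.params["treated:time"]
p_value = fit.pvalues["treated:time"]
```

Here the estimated slope gap is near zero, consistent with parallel pre-trends; in real data a small p-value on the interaction would cast doubt on the DiD design.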
Runs two-way fixed effects models with unit and time dummies, isolating the DiD interaction term that captures the treatment effect.
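A two-way fixed effects specification of this kind might look like the following (simulated panel with an assumed true effect of 2.0; `C()` expands the unit and time dummies):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_units, n_periods, true_effect = 20, 6, 2.0

df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_periods),
    "time": np.tile(np.arange(n_periods), n_units),
})
df["treated"] = (df["unit"] < 10).astype(int)
df["post"] = (df["time"] >= 3).astype(int)
df["did"] = df["treated"] * df["post"]
# Simulated outcome: unit effects + common time trend + true DiD effect.
df["outcome"] = (
    0.3 * df["unit"] + 0.5 * df["time"]
    + true_effect * df["did"] + rng.normal(0, 0.2, len(df))
)

# Unit and time dummies absorb level differences and common shocks;
# the coefficient on `did` is the DiD treatment-effect estimate.
fit = smf.ols("outcome ~ did + C(unit) + C(time)", data=df).fit()
estimate = fit.params["did"]
```

On this simulated panel the estimate recovers the true effect of 2.0 up to sampling noise.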
Creates dynamic treatment effects by relative time period, runs placebo tests, and provides multiple model specifications for validation.
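An event-study version of that model, with one coefficient per relative period and t = -1 as the omitted reference, could be sketched as follows (simulated data; an assumed true effect of 2.0 switches on at t = 0):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_units, n_periods, t0 = 20, 6, 3  # treatment starts in period 3

df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_periods),
    "time": np.tile(np.arange(n_periods), n_units),
})
df["treated"] = (df["unit"] < 10).astype(int)
df["rel_time"] = df["time"] - t0  # periods relative to treatment start
df["outcome"] = (
    0.5 * df["time"]
    + 2.0 * df["treated"] * (df["rel_time"] >= 0)  # effect only after t0
    + rng.normal(0, 0.2, len(df))
)

# One treated-group dummy per relative period, omitting t = -1 as reference.
terms = []
for k in sorted(df["rel_time"].unique()):
    if k == -1:
        continue
    name = f"es_m{-k}" if k < 0 else f"es_p{k}"
    df[name] = ((df["rel_time"] == k) & (df["treated"] == 1)).astype(int)
    terms.append(name)

fit = smf.ols("outcome ~ " + " + ".join(terms) + " + C(unit) + C(time)",
              data=df).fit()
# Pre-period coefficients should sit near zero (supporting parallel trends);
# post-period coefficients trace out the dynamic treatment effect.
```

Plotting these coefficients with their confidence bands gives the event-study figure; coefficients significantly different from zero before t = 0 would flag a pre-trend problem.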
When experiments aren’t feasible, DiD isolates the policy or product impact by differencing away common shocks that affect both groups.
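In the canonical two-group, two-period case, this differencing reduces to a simple contrast of group means:

```latex
\hat{\tau}_{\text{DiD}}
  = \left(\bar{Y}_{\text{treated},\,\text{post}} - \bar{Y}_{\text{treated},\,\text{pre}}\right)
  - \left(\bar{Y}_{\text{control},\,\text{post}} - \bar{Y}_{\text{control},\,\text{pre}}\right)
```

The control group's pre→post change estimates the counterfactual trend; subtracting it from the treated group's change leaves the treatment effect.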
Use DiD to evaluate market‑level rollouts, regulatory changes, and operational policies. With explicit assumptions and diagnostics (parallel trends, event studies), results are explainable to non‑technical stakeholders and audit‑ready.
Key Assumption: Parallel trends — absent treatment, treated and control would have evolved similarly. We test this with pre‑trend diagnostics.
Turn your panel data into causal evidence
Read the article: Difference‑in‑Differences