Snapshot Testing
debt(d7/e3/b5/t7)
Closest to 'only careful code review or runtime testing' (d7). The tools listed (PHPUnit, Pest, Jest) will flag a snapshot mismatch when the test runs, but the core misuse — blindly accepting a snapshot of broken output, or snapshotting dynamic data that differs on every run — is only visible to a developer who carefully reviews the snapshot diff. The test suite may pass after an update while silently enshrining incorrect output; no automated tool can determine whether the accepted snapshot is correct.
Closest to 'simple parameterised fix' (e3). The quick_fix describes a deliberate workflow change (use --update-snapshots intentionally, mock dynamic values). Fixing misuse patterns like dynamic data requires adding mocks or fixtures in a handful of places — not a one-liner but not a cross-cutting refactor either. Removing or reorganising excessive snapshot files is similarly localised.
Closest to 'persistent productivity tax' (b5). Snapshot files must be committed, reviewed on every PR, and deliberately updated. Across a codebase with many snapshot tests this imposes ongoing maintenance overhead on multiple work streams (CI reliability, code review, refactors that legitimately change output), but the burden is contained to the test layer and does not shape the production architecture.
Closest to 'serious trap' (t7). The canonical misconception is explicit: developers treat snapshots as correctness proofs rather than change-detectors, so a snapshot of broken output is accepted and the tests 'pass' while the bug is hidden. This directly contradicts how most assertion-based tests work (a failing assertion means wrong output), making it a serious conceptual inversion that competent developers unfamiliar with snapshot testing will reliably fall into.
Also Known As
Golden master testing; approval testing.
TL;DR
A snapshot test records a serialised output once, then fails whenever that output changes. It detects change; it does not prove correctness, so every snapshot update must be reviewed like code.
Explanation
Snapshot testing is popular in React component testing (Jest) but applies anywhere serialisable output exists — JSON API responses, rendered HTML, CLI output. On first run, the output is saved as a .snap file. Subsequent runs compare against it. When the output legitimately changes, the snapshot is updated. The test catches accidental regressions in complex outputs without manually asserting every field. Brittle snapshots that change with every minor tweak erode their own value.
Common Misconception
That a passing snapshot test proves the output is correct. It only proves the output has not changed since the snapshot was last accepted; accept a snapshot of broken output and the suite will defend the bug as the expected result.
Why It Matters
Snapshot updates are one flag away (--update-snapshots), so a regression can be enshrined as expected output while CI stays green. Only careful review of the snapshot diff catches this, and that review cost recurs on every PR that touches the output.
Common Mistakes
- Blindly updating snapshots when they fail without investigating why they changed.
- Snapshots of dynamic data (timestamps, random IDs) that differ on every run — always mock dynamic values.
- Too many snapshot files that no one reviews — snapshots are documentation; if they're never read, they have no value.
- Not committing snapshot files to version control — they are part of the test suite.
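One way to act on the "mock dynamic values" advice is to substitute stable placeholders rather than deleting fields, so the snapshot stays identical across runs but still records that each field exists. A sketch in plain JavaScript; `normaliseForSnapshot` is a hypothetical helper, not part of Jest, PHPUnit, or Pest:

```javascript
// Replace volatile fields with fixed placeholders before snapshotting.
// The snapshot becomes stable across runs while still documenting the
// shape of the output. Hypothetical helper, written for illustration.
function normaliseForSnapshot(obj, volatileKeys = ['created_at', 'updated_at']) {
  const copy = { ...obj };
  for (const key of volatileKeys) {
    if (key in copy) copy[key] = `<${key}>`; // stable placeholder
  }
  return copy;
}
```

Compared with removing the fields outright, the placeholder approach means a snapshot will still fail if a dynamic field disappears from the output entirely.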
Code Examples
// Snapshot with dynamic data — fails every run:
$response = $api->getUser(42);
$this->assertMatchesSnapshot($response);
// Snapshot includes: 'updated_at' => '2026-03-15 07:42:13' — changes every run
// Stable snapshot — mask dynamic values first:
$response = $api->getUser(42);
unset($response['updated_at'], $response['created_at']); // Remove dynamic fields
$this->assertMatchesSnapshot($response); // Snapshot covers stable data only