Cache Stampede / Thundering Herd
Also Known As
thundering herd
dog pile effect
cache stampede problem
TL;DR
When a cached item expires, multiple simultaneous requests all miss the cache and hit the database concurrently, overwhelming it.
Explanation
A cache stampede occurs when a popular cached item expires and many concurrent requests all find a cache miss at the same moment. Each one issues the same expensive query, flooding the database. Common mitigations:
- Mutex/lock-based regeneration: only one process rebuilds; the others wait or serve stale data.
- Probabilistic early expiry: requests may start rebuilding shortly before the TTL runs out, with a probability that rises as expiry approaches.
- Stale-while-revalidate: serve the stale value while regenerating in the background.
- Cache warming on deploy: pre-populate hot keys before traffic arrives.
In PHP with Redis, implement the lock with SET key value NX EX seconds: the first process acquires it and rebuilds, while the others serve stale data or wait.
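Of the mitigations above, stale-while-revalidate is the gentlest on readers: stale data keeps being served while exactly one caller rebuilds. A minimal in-memory sketch, using a plain array as the cache; the entry layout (`value`/`expires`/`refreshing`) and the `swrGet` helper are illustrative, not a real cache API:

```php
<?php
// Stale-while-revalidate, sketched against a plain array "cache".
// Each entry stores its value, an expiry timestamp, and a "refreshing" flag.
function swrGet(array &$cache, string $key, callable $refresh, int $ttl): mixed
{
    $now = time();
    $entry = $cache[$key] ?? null;

    if ($entry !== null && $now < $entry['expires']) {
        return $entry['value']; // fresh hit: no backend work
    }

    if ($entry !== null && !$entry['refreshing']) {
        // Stale and nobody is rebuilding yet: claim the rebuild.
        // This caller rebuilds inline to keep the sketch short; in production
        // the rebuild runs in a background job while stale data is served.
        $cache[$key]['refreshing'] = true;
        $cache[$key] = [
            'value'      => $refresh(),
            'expires'    => $now + $ttl,
            'refreshing' => false,
        ];
        return $cache[$key]['value'];
    }

    if ($entry !== null) {
        return $entry['value']; // rebuild in flight elsewhere: serve stale
    }

    // Cold miss: nothing stale to serve, compute synchronously.
    $cache[$key] = ['value' => $refresh(), 'expires' => $now + $ttl, 'refreshing' => false];
    return $cache[$key]['value'];
}
```

The key property: once an entry exists, at most one caller per staleness window pays the rebuild cost; everyone else gets an answer immediately.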
Diagram
sequenceDiagram
participant W1 as Worker 1
participant W2 as Worker 2
participant W3 as Worker 3
participant C as Cache
participant DB as Database
C-->>W1: Cache MISS - expired
C-->>W2: Cache MISS - expired
C-->>W3: Cache MISS - expired
W1->>DB: expensive query
W2->>DB: expensive query duplicate
W3->>DB: expensive query duplicate
Note over DB: DB overwhelmed - stampede!
Note over C: Fix: mutex lock or probabilistic early expiry
Common Misconception
✗ "Cache stampedes only affect very high-traffic sites."
In reality, any site with concurrent requests and an expiring cache key can trigger one: even moderate traffic produces several simultaneous cache misses that hammer the database. Probabilistic early expiry or locking prevents it.
Why It Matters
Cache stampedes turn a single expired key into a burst of identical expensive queries arriving at once: backend latency spikes, connection pools saturate, and under load the database can tip into a cascading failure that slows unrelated queries too.
Common Mistakes
- No mutex or locking around cache miss regeneration — all concurrent requests hit the backend.
- Setting all cache entries to expire at the same time — use TTL jitter to spread expiry.
- Not using probabilistic early expiry — start regenerating before expiry to avoid a cliff-edge miss.
- Short TTLs on expensive-to-compute values — frequent expiry increases stampede probability.
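The synchronized-expiry mistake above has a one-line fix: add random jitter to each TTL so entries written in the same burst do not expire in the same instant. A sketch; the ±10% band and the `jitteredTtl` helper name are arbitrary choices for illustration:

```php
<?php
// Spread expiry: a base TTL of 300s becomes 270-330s at random,
// so keys cached together don't all miss at the same moment.
function jitteredTtl(int $baseTtl, float $jitter = 0.10): int
{
    $spread = (int) round($baseTtl * $jitter);
    return $baseTtl + random_int(-$spread, $spread);
}

// Usage: $cache->set('popular_data', $value, jitteredTtl(300));
```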
Code Examples
✗ Vulnerable
// On cache miss, 100 simultaneous requests all hit the DB
$value = $cache->get('popular_data');
if ($value === null) {
    $value = $this->db->expensiveQuery(); // thundering herd
    $cache->set('popular_data', $value, 300);
}
✓ Fixed
// Option 1 — Mutex/lock (only one request recomputes)
$lock = $this->locks->acquire('popular_data', ttl: 10);
if ($lock) {
    try {
        $value = $this->db->expensiveQuery();
        $cache->set('popular_data', $value, 300);
    } finally {
        $lock->release();
    }
} else {
    // Other requests back off briefly, then retry the cache.
    usleep(100_000); // 100 ms; tune to the typical rebuild time
    $value = $cache->get('popular_data');
    // Still empty? Fall back to the query rather than returning null.
    $value ??= $this->db->expensiveQuery();
}
// Option 2 — Probabilistic early recomputation (no lock needed)
// Recompute before expiry with increasing probability as TTL decreases
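The Option 2 comment above can be filled in with the standard probabilistic technique (often called "XFetch", from Vattani, Chierichetti, and Lowenstein's optimal probabilistic cache stampede prevention): each reader independently decides to rebuild early, with a probability that grows as expiry approaches, weighted by how long the rebuild itself takes. A sketch; the `$delta` (last rebuild duration) and `$beta` (tuning knob) parameters follow the published algorithm, but the function name and usage lines are illustrative:

```php
<?php
// XFetch-style early recomputation: no lock needed, because the expected
// number of workers that decide to rebuild at any moment stays near one.
function shouldRecompute(float $now, float $expiry, float $delta, float $beta = 1.0): bool
{
    // $delta = how long the last rebuild took (seconds)
    // $beta  = tuning knob; > 1 recomputes earlier, < 1 later
    $rand = mt_rand(1, mt_getrandmax()) / mt_getrandmax(); // uniform in (0, 1]
    // log($rand) <= 0, so the subtraction pushes $now forward in time.
    return $now - $delta * $beta * log($rand) >= $expiry;
}

// Usage inside the read path, assuming expiry and rebuild time are
// stored alongside the cached value:
// if ($value === null
//     || shouldRecompute(microtime(true), $meta['expiry'], $meta['delta'])) {
//     // rebuild and rewrite the cache entry
// }
```

Because the early-recompute probability scales with `$delta`, slow rebuilds start earlier and cheap ones closer to expiry, so there is no cliff-edge miss and no lock contention.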
Added
15 Mar 2026
Edited
22 Mar 2026
Related categories
⚡
DEV INTEL
Tools & Severity
🟠 High
⚙ Fix effort: Medium
⚡ Quick Fix
Use probabilistic early expiry (start recomputing before the TTL runs out) or a mutex lock (Redis SET NX) so that only one process rebuilds a given cache entry at a time.
📦 Applies To
PHP 5.0+ (code examples use PHP 8 syntax)
web
queue-worker
🔍 Detection Hints
High-traffic cache key with expensive rebuild and no stampede protection (mutex or early recompute)
Auto-detectable:
✗ No
blackfire
datadog
🤖 AI Agent
Confidence: Medium
False Positives: Medium
✗ Manual fix
Fix: Medium
Context: Function
Tests: Update