
LRU Cache

data_structures · Advanced
DEBT score: d7 / e5 / b3 / t5
d7 Detectability Operational debt — how invisible misuse is to your safety net

Closest to 'only careful code review or runtime testing' (d7). The detection_hints indicate automated detection is 'no' and the sole listed tool is Blackfire (a profiler), meaning misuse — such as unbounded growth, O(n) eviction, or missing linked-list pairing — only surfaces under profiling or careful code review. The code_pattern describes subtle structural bugs not caught by linters or compilers.

e5 Effort Remediation debt — work required to fix once spotted

Closest to 'touches multiple files / significant refactor in one component' (e5). The quick_fix points to combining SplDoublyLinkedList with an associative array, and common_mistakes reveal multiple interrelated issues (lookup structure, eviction order, update handling). Correcting a naive O(n) array-based implementation or a hash-map-only implementation requires rethinking the data structure design and touching the cache component meaningfully, though it's generally contained to one component rather than cross-cutting.

b3 Burden Structural debt — long-term weight of choosing wrong

Closest to 'localised tax' (b3). The LRU cache applies to web, CLI, and queue-worker contexts but is typically an isolated cache component. Its structural complexity (hash map + linked list) is a local burden on the developer implementing or maintaining that component, without strongly shaping the rest of the codebase.

t5 Trap Cognitive debt — how counter-intuitive correct behaviour is

Closest to 'notable trap (a documented gotcha most devs eventually learn)' (t5). The misconception field identifies that LRU is assumed to be universally optimal, but LFU outperforms it for burst-heavy access patterns. Additionally, common_mistakes show developers frequently assume a hash map alone is sufficient, missing the linked-list requirement. These are well-documented gotchas that competent developers encounter and eventually learn, placing this firmly at t5.


Also Known As

least recently used cache, LRU eviction

TL;DR

A fixed-size cache that evicts the Least Recently Used entry when full, implemented with a hash map and doubly linked list for O(1) get and put.

Explanation

An LRU cache maintains both O(1) lookup (hash map) and O(1) eviction order tracking (doubly linked list). The list represents access order — most recent at the head, least recent at the tail. On access, an entry moves to the head. When the cache is full, the tail entry is evicted. This combination is the classic example of using two data structures together to achieve better complexity than either alone.
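The two-structure combination described above can be sketched as follows. This is an illustration in Python rather than the article's PHP; the `_Node` and `LRUCache` names are ours, not from any library, and the sketch assumes a capacity of at least 1.

```python
class _Node:
    """Doubly-linked-list node holding one cache entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """Hash map for O(1) lookup + doubly linked list for O(1) eviction order."""

    def __init__(self, capacity: int):
        self.capacity = capacity           # assumed >= 1
        self.map = {}                      # key -> node, O(1) lookup
        self.head = _Node()                # sentinel: most-recent side
        self.tail = _Node()                # sentinel: least-recent side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _push_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)                 # accessed: move to the head
        self._push_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:               # update: refresh recency, no duplicate
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.tail.prev       # least recently used lives at the tail
                self._unlink(lru)
                del self.map[lru.key]
            node = _Node(key, value)
            self.map[key] = node
        self._push_front(node)
```

Every operation is a constant number of pointer updates plus one dict lookup, which is where the O(1) bound comes from.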

Diagram

flowchart LR
    subgraph cache["LRU Cache - capacity 3"]
        A[A - most recent] -->|doubly linked| B[B] --> C[C - least recent]
        MAP[("HashMap<br/>key to node<br/>O of 1 lookup")]
    end
    ACCESS[Access B] -->|move to front| B2[B] --> A2[A] --> C2[C]
    INSERT[Insert D - cache full] --> EVICT[Evict C - least recently used]
    INFO[O of 1 get and put<br/>HashMap for lookup<br/>Doubly linked list for order]
    style MAP fill:#1f6feb,color:#fff
    style EVICT fill:#f85149,color:#fff
    style INFO fill:#238636,color:#fff

Common Misconception

LRU is always the best eviction policy — LFU (Least Frequently Used) outperforms LRU for access patterns with repeated bursts of popular items.
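A toy trace makes the misconception concrete. The snippet below (Python, for illustration only, and deliberately not a benchmark) replays one access pattern against a capacity-2 LRU cache and a capacity-2 frequency-based (LFU-style) cache; the pattern, capacities, and key names are all our own assumptions.

```python
from collections import OrderedDict, Counter

capacity = 2
# A hot key accessed in a burst, then a run of one-off keys, then the hot key again.
trace = ["hot", "hot", "hot", "x", "y", "hot"]

# LRU: evict the least recently used key.
lru, lru_hits = OrderedDict(), 0
for key in trace:
    if key in lru:
        lru_hits += 1
        lru.move_to_end(key)            # refresh recency
    else:
        if len(lru) >= capacity:
            lru.popitem(last=False)     # drop the oldest entry
        lru[key] = True

# LFU: evict the least frequently used key.
lfu, freq, lfu_hits = set(), Counter(), 0
for key in trace:
    freq[key] += 1
    if key in lfu:
        lfu_hits += 1
    else:
        if len(lfu) >= capacity:
            lfu.discard(min(lfu, key=lambda k: freq[k]))
        lfu.add(key)

print(lru_hits, lfu_hits)  # 2 3
```

The one-off keys `x` and `y` flush `hot` out of the LRU cache, so its final access misses; the frequency-based policy keeps `hot` (count 3) and evicts a one-off instead, scoring one more hit on the same trace.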

Why It Matters

Implementing an LRU cache is a classic interview problem, and LRU (often in approximated form) is the eviction strategy used by Redis, the MySQL buffer pool, and most application caches.

Common Mistakes

  • Using a plain array for LRU — O(n) lookup defeats the purpose.
  • Not understanding that a hash map alone cannot implement LRU in O(1) — you also need the linked list for eviction order.
  • PHP's SplDoublyLinkedList does not provide hash-based lookup — combine with an array for a true O(1) LRU.
  • Not handling the case where a key is updated — it should move to the head, not create a duplicate.
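The last pitfall in the list above can be shown in a few lines. This sketch uses Python's `OrderedDict` as a compact stand-in for the hash map + list pair (it is not the article's PHP approach); the `put` helper and key names are ours.

```python
from collections import OrderedDict

capacity = 2
cache = OrderedDict()   # insertion order doubles as recency order

def put(key, value):
    if key in cache:
        cache.move_to_end(key)       # update: refresh recency, no duplicate
    elif len(cache) >= capacity:
        cache.popitem(last=False)    # evict least recently used
    cache[key] = value

put("a", 1)
put("b", 2)
put("a", 99)        # update: "a" becomes most recent, count stays at 2
put("c", 3)         # cache full: evicts "b", not the freshly updated "a"
print(list(cache))  # ['a', 'c']
```

If the update branch were missing, the second `put("a", 99)` would leave `"a"` looking stale, and the insert of `"c"` would wrongly evict it.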

Code Examples

✗ Vulnerable
// Naive LRU — recency tracked in a plain list, O(n) per access:
class NaiveCache {
    private array $cache = [];
    private array $order = []; // list of keys, oldest first

    public function get(string $key): mixed {
        if (isset($this->cache[$key])) {
            // O(n): scan the list to find the key, then reindex on splice
            $pos = array_search($key, $this->order, true);
            array_splice($this->order, $pos, 1);
            $this->order[] = $key; // mark as most recently used
            return $this->cache[$key];
        }
        return null;
    }
}
✓ Fixed
// O(1) LRU — array (hash map) + SplDoublyLinkedList:
class LRUCache {
    private array $map = [];
    private SplDoublyLinkedList $list;
    private int $capacity;
    public function __construct(int $cap) {
        $this->capacity = $cap;
        $this->list = new SplDoublyLinkedList();
    }
    // get() moves node to front; put() evicts tail if full
}

Added 15 Mar 2026
Edited 22 Mar 2026
DEV INTEL Tools & Severity
🟡 Medium ⚙ Fix effort: Medium
⚡ Quick Fix
Implement LRU cache with a PHP SplDoublyLinkedList + associative array — O(1) get and put; use APCu or Redis for production caches rather than per-request in-memory implementations
📦 Applies To
any · web · cli · queue-worker
🔍 Detection Hints
Fixed-size cache that grows unbounded; naive cache evicting random items instead of least recently used; O(n) eviction on full cache
Auto-detectable: ✗ No · Tooling: Blackfire (profiler)
🤖 AI Agent
Confidence: Medium · False positives: Medium · ✗ Manual fix · Fix effort: Medium · Context: Class · Tests: Update
