
Memory Pressure Detection

performance · PHP 7.0+ · Intermediate

Also Known As

memory threshold monitoring · OOM prevention · memory limit detection · memory usage monitoring

TL;DR

Proactively identifying when a PHP process approaches its memory limit so corrective action can be taken before a fatal error.

Explanation

Memory pressure occurs when a PHP process consumes a significant fraction of its allowed memory (set by memory_limit in php.ini). Without detection, the first symptom is usually a fatal 'Allowed memory size exhausted' error that kills the process mid-request or mid-job.

Detection involves periodically calling memory_get_usage() and comparing the result against the configured limit; memory_get_peak_usage() is useful afterwards for diagnosing the high-water mark. In long-running processes like queue workers, CLI importers, and event loops, memory can creep upward due to accumulating caches, uncollected reference cycles, or ORM identity maps. Detection strategies include polling memory usage inside batch loops, setting thresholds (e.g. 80% of memory_limit) and gracefully exiting or flushing caches when exceeded, and using gc_collect_cycles() to reclaim circular references. Frameworks like Laravel provide worker options (--memory=128) that check memory after each job. For production visibility, APM tools (Datadog, New Relic, Tideways) track per-process memory and alert on thresholds.

The key principle is to fail gracefully or self-heal rather than crash. A queue worker that detects pressure can finish its current job, exit cleanly, and let its supervisor restart it with a fresh memory slate. This pattern prevents data corruption from mid-operation crashes and keeps throughput stable. Combine detection with investigation: if memory rises monotonically across requests, you likely have a memory leak that needs profiling with tools like Xdebug or php-meminfo rather than just periodic restarts.
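The measurement primitives mentioned above can be seen in a short, self-contained sketch. The only assumption is a CLI run with enough headroom to allocate a few megabytes:

```php
// memory_get_usage(false) reports memory in use by PHP's own allocator;
// memory_get_usage(true) reports memory actually reserved from the OS.
$before = memory_get_usage(true);

$buffer = str_repeat('x', 8 * 1024 * 1024); // allocate ~8 MB

$after = memory_get_usage(true);
printf("Grew by ~%.1f MB\n", ($after - $before) / 1048576);

unset($buffer);

// Peak usage keeps its high-water mark even after the buffer is freed,
// which makes it useful for post-run diagnostics rather than live checks.
printf("Peak MB: %.1f\n", memory_get_peak_usage(true) / 1048576);
```

Note that real usage (the `true` flag) only ever grows in allocator-chunk increments, so small allocations may not move the reading at all.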

Common Misconception

A common belief is that increasing memory_limit is sufficient to solve memory pressure issues. In reality, raising the limit only delays the crash: if usage grows without bound (a leak or an unbounded dataset), the process will eventually exhaust any limit. Detection and graceful handling address the symptom; profiling addresses the root cause.
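As a sketch of a root-cause fix rather than a bigger limit: cap the data structure that grows per request. BoundedCache is a hypothetical name invented for this example; any LRU or FIFO cache with a maximum size serves the same purpose:

```php
// Hypothetical illustration: bound an in-process cache instead of raising
// memory_limit. Oldest insertions are evicted once the cap is reached.
class BoundedCache
{
    /** @var array<string, mixed> */
    private array $items = [];

    public function __construct(private int $maxItems) {}

    public function put(string $key, mixed $value): void
    {
        unset($this->items[$key]);     // re-inserting moves the key to the end
        $this->items[$key] = $value;
        if (count($this->items) > $this->maxItems) {
            array_shift($this->items); // evict the oldest entry
        }
    }

    public function get(string $key): mixed
    {
        return $this->items[$key] ?? null;
    }
}

$cache = new BoundedCache(2);
$cache->put('a', 1);
$cache->put('b', 2);
$cache->put('c', 3);        // evicts 'a'; memory stays bounded
var_dump($cache->get('a')); // NULL
```

(Constructor property promotion and `mixed` require PHP 8.0+.)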

Why It Matters

Undetected memory pressure leads to fatal errors that kill requests mid-response or abort long-running jobs, causing data loss, broken user experiences, and cascading failures in worker pools.

Common Mistakes

  • Never checking memory_get_usage() in long-running CLI scripts or queue workers until a fatal OOM crash occurs.
  • Using memory_get_usage() without the real_usage parameter, which underreports the memory actually reserved from the OS.
  • Setting the detection threshold too high (e.g. 99%), leaving no headroom for the current operation to finish cleanly.
  • Restarting workers on every single job instead of using memory thresholds, which wastes startup cost and hides the real leak.
  • Forgetting to call gc_collect_cycles() before measuring, leading to inflated readings from uncollected circular references.
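The last two mistakes can be demonstrated together in a short sketch; Node is a throwaway class invented for the example:

```php
// Throwaway class used to create reference cycles that plain refcounting
// cannot free; only the cycle collector (gc_collect_cycles) reclaims them.
class Node
{
    public ?Node $peer = null;
}

gc_disable(); // keep PHP from collecting automatically mid-demo

for ($i = 0; $i < 50000; $i++) {
    $a = new Node();
    $b = new Node();
    $a->peer = $b;
    $b->peer = $a; // circular reference: $a and $b keep each other alive
}

$beforeGc  = memory_get_usage();   // allocator-level usage (real_usage=false)
$collected = gc_collect_cycles();  // reclaim the cycles before measuring
$afterGc   = memory_get_usage();
gc_enable();

printf("Collected %d roots, freeing ~%d KB\n",
    $collected, ($beforeGc - $afterGc) / 1024);

// For threshold checks, prefer memory_get_usage(true): the OS-level figure
// is always at least as large as the allocator-level one.
printf("real=%d allocator=%d\n", memory_get_usage(true), memory_get_usage(false));
```

Measuring before the collector runs would have reported several megabytes of "usage" that were actually reclaimable garbage.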

Avoid When

  • Short-lived PHP-FPM requests with well-bounded data - the process dies after each request anyway.
  • You have already identified and fixed the root-cause memory leak - detection without a leak is unnecessary overhead.
  • Memory limit is set to -1 (unlimited) in a trusted environment and monitoring is handled externally by container orchestration (e.g. Kubernetes OOMKilled).

When To Use

  • Long-running queue workers or daemon processes that handle many jobs per process lifetime.
  • CLI batch importers processing large or unbounded datasets.
  • Swoole/RoadRunner workers that persist across thousands of requests without restarting.
  • Any process where a fatal OOM error would cause data corruption or lost work.

Code Examples

✗ Vulnerable
// Queue worker with no memory awareness - crashes mid-job
while (true) {
    $job = $queue->pop();
    if ($job) {
        $job->handle(); // Memory grows each iteration
        // No check - eventually: Fatal error: Allowed memory size exhausted
    }
    usleep(100000);
}

// Batch import with no pressure detection
foreach ($millionRows as $row) {
    $em->persist(Entity::fromRow($row)); // UnitOfWork grows without bound
}
$em->flush(); // one giant flush at the end - may never be reached
✓ Fixed
// Parse memory_limit shorthand (e.g. "128M") into bytes
// Note: match requires PHP 8.0+; use a switch statement on older versions
function getMemoryLimitBytes(): int {
    $limit = ini_get('memory_limit');
    if ($limit === '-1') return PHP_INT_MAX;
    $unit = strtolower(substr($limit, -1));
    $bytes = (int) $limit;
    return match ($unit) {
        'g' => $bytes * 1024 * 1024 * 1024,
        'm' => $bytes * 1024 * 1024,
        'k' => $bytes * 1024,
        default => $bytes,
    };
}

$threshold = (int) (getMemoryLimitBytes() * 0.80);

// Queue worker with graceful exit on memory pressure
while (true) {
    $job = $queue->pop();
    if ($job) {
        $job->handle();
    }

    if (memory_get_usage(true) >= $threshold) {
        echo "Memory threshold reached, exiting for supervisor restart.\n";
        exit(0); // Clean exit - supervisor (systemd/supervisord) restarts
    }
    usleep(100000);
}

// Batch import with chunked processing and pressure check
foreach (array_chunk($millionRows, 500) as $chunk) {
    foreach ($chunk as $row) {
        $em->persist(Entity::fromRow($row));
    }
    $em->flush();
    $em->clear(); // Detach entities, free memory

    gc_collect_cycles();
    if (memory_get_usage(true) >= $threshold) {
        throw new MemoryPressureException('Batch aborted: memory pressure');
    }
}

Added 6 May 2026
DEV INTEL Tools & Severity

🟠 Severity: High · ⚙ Fix effort: Low

⚡ Quick Fix
Add a memory check inside your batch or worker loop, e.g. if (memory_get_usage(true) > 0.8 * $limitBytes) { break; }, and let the supervisor restart the process.

📦 Applies To
PHP 7.0+ · php · cli · queue-worker · web · laravel · symfony

🔍 Detection Hints
Regex: while\s*\(true\)
Auto-detectable: ✓ Yes (datadog-apm, blackfire, newrelic, tideways)

🤖 AI Agent
Confidence: Medium · False Positives: Medium · ✗ Manual fix · Fix effort: Low · Context: File
