
Queue Worker Tuning

performance PHP 7.0+ Intermediate

Also Known As

Horizon, Supervisor, queue workers, job processing

TL;DR

Configuring PHP queue workers (Laravel Horizon, Supervisor) for throughput, memory limits, graceful restarts, and concurrency — preventing job failures and memory leaks.

Explanation

PHP queue workers are long-running processes: unlike PHP-FPM requests, they do not restart after each job, so memory leaks accumulate over time. Key configuration flags:

  • --max-jobs: restart the worker after N processed jobs (bounds memory-leak growth)
  • --max-time: restart the worker after N seconds
  • --memory: restart the worker if memory usage exceeds the limit (in MB)
  • --timeout: kill a job that runs longer than N seconds
  • --tries: maximum attempts for a failing job
  • --backoff: delay in seconds between retries

Supervisor keeps workers alive and respawns any that crash. Laravel Horizon adds per-queue concurrency, job metrics, and a real-time dashboard.
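The per-queue concurrency mentioned above is configured in Horizon's `config/horizon.php`. A minimal sketch, using Horizon's documented supervisor options; the supervisor names and numeric values here are illustrative, not recommendations:

```php
<?php
// config/horizon.php (excerpt) — two supervisors so critical jobs
// are never queued behind slow bulk work.
return [
    'environments' => [
        'production' => [
            'supervisor-default' => [
                'connection'   => 'redis',
                'queue'        => ['default', 'low'],
                'balance'      => 'auto', // shift workers toward the busier queue
                'maxProcesses' => 8,      // upper bound on concurrent workers
                'memory'       => 256,    // restart a worker past 256 MB
                'tries'        => 3,
                'timeout'      => 90,
            ],
            'supervisor-critical' => [
                'connection'   => 'redis',
                'queue'        => ['critical'],
                'maxProcesses' => 2,
                'memory'       => 128,
                'timeout'      => 30,
            ],
        ],
    ],
];
```

With `'balance' => 'auto'`, Horizon itself moves worker processes between the queues a supervisor serves, which replaces hand-tuned per-queue Supervisor programs for Laravel apps.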

Common Misconception

"Queue workers run forever without issues." In reality, PHP workers accumulate memory from object graphs, event listeners, and ORM identity maps; memory limits and max-jobs restarts are not optional.

Why It Matters

A queue worker with a memory leak that is never restarted will exhaust server memory overnight, causing all jobs to fail — max-jobs and memory limits are essential production configuration.

Common Mistakes

  • No --max-jobs limit — worker runs until it exhausts memory, then crashes and all queued jobs time out.
  • Timeout shorter than the longest expected job — jobs are killed mid-execution, causing partial processing.
  • Not using --sleep for empty queues — workers spin at 100% CPU when the queue is empty without a sleep delay.
  • One worker process for all queues — high-priority queues blocked behind slow bulk jobs.
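The worker-level flags can also be overridden per job class, which helps with the timeout mistake above: give only the known-slow job a long timeout instead of raising it for every worker. A sketch using Laravel's documented job properties (`$tries`, `$timeout`, `$backoff`); the job class itself is hypothetical:

```php
<?php
// App\Jobs\GenerateReport — per-job overrides of the worker-level flags.
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class GenerateReport implements ShouldQueue
{
    use Queueable;

    public $tries = 5;               // overrides --tries for this job only
    public $timeout = 300;           // this job may legitimately run 5 minutes
    public $backoff = [10, 60, 300]; // escalating delay between retries

    public function handle(): void
    {
        // ... long-running report generation ...
    }
}
```

Jobs without these properties still fall back to the worker's `--tries`, `--timeout`, and `--backoff` flags.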

Code Examples

✗ Vulnerable
# Supervisor config — no memory limit, no max-jobs:
[program:worker]
command=php artisan queue:work
autostart=true
autorestart=true
# Worker runs indefinitely
# Memory grows from 50MB to 2GB over 3 days
# Server OOM kills worker
# All jobs fail until Supervisor restarts it
✓ Fixed
# Supervisor — with proper limits:
[program:worker-default]
command=php artisan queue:work --queue=default,low --max-jobs=1000 --max-time=3600 --memory=256 --timeout=90 --sleep=3 --tries=3 --backoff=10
; 4 concurrent workers (Supervisor's key is numprocs, and numprocs > 1
; requires a process_name pattern)
numprocs=4
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
; allow running jobs to finish before SIGKILL
stopwaitsecs=120

[program:worker-critical]
command=php artisan queue:work --queue=critical --max-jobs=500 --memory=128 --timeout=30
numprocs=2
process_name=%(program_name)s_%(process_num)02d

Added 15 Mar 2026
Edited 22 Mar 2026
DEV INTEL Tools & Severity
Severity: 🟠 High · Fix effort: Medium
⚡ Quick Fix
Set --timeout slightly below the queue's visibility timeout (retry_after), add --max-jobs=1000 to cap memory-leak growth, and use --sleep=3 to avoid CPU spin on an empty queue. Monitor queue depth and auto-scale workers when it grows.
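The relationship between --timeout and the visibility timeout lives in `config/queue.php`. A sketch of the relevant excerpt, assuming the Redis queue driver's documented `retry_after` option; the values are illustrative:

```php
<?php
// config/queue.php (excerpt) — keep retry_after above the worker's --timeout,
// otherwise a job can be handed to a second worker while the first still runs it.
return [
    'connections' => [
        'redis' => [
            'driver'      => 'redis',
            'connection'  => 'default',
            'queue'       => 'default',
            'retry_after' => 120, // seconds; must exceed --timeout=90
            'block_for'   => null,
        ],
    ],
];
```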
📦 Applies To
PHP 7.0+ · queue-worker · laravel · symfony
🔍 Detection Hints
Queue workers with no timeout; memory growing unbounded over days; single worker processing all queues; no auto-scaling based on queue depth
Auto-detectable: ✗ No · Detection tools: laravel-horizon, datadog, supervisor
