File Descriptors & ulimit
Also Known As
file descriptor
ulimit
too many open files
FD limit
TL;DR
File descriptors are integer handles for open files, sockets, and pipes — each process has a limit (ulimit -n), and exhausting them causes 'too many open files' errors.
Explanation
Every open file, socket, pipe, or device a process holds is referenced by a file descriptor (FD): a small integer handle. FD 0 is stdin, 1 is stdout, and 2 is stderr. The soft limit (ulimit -n) is the default per-process maximum (typically 1024); the hard limit is the ceiling up to which an unprivileged process may raise its soft limit. Symptoms of FD exhaustion: 'Too many open files' errors, inability to accept new connections, and failed file opens. For PHP web servers and queue workers, raise the limits in /etc/security/limits.conf (applies to login sessions via PAM) or via LimitNOFILE in the systemd service file (applies to services). Monitoring: /proc/PID/fd lists a process's open FDs; lsof -p PID lists them with paths.
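As a quick sketch (Linux, assuming /proc is mounted), you can inspect both limits and the current FD count for a process; here the shell inspects itself, but any PID works:

```shell
# Per-process FD limits for the current shell:
ulimit -Sn    # soft limit: the default per-process maximum (often 1024)
ulimit -Hn    # hard limit: the ceiling -Sn can be raised to

# Count the FDs this shell currently has open:
ls /proc/$$/fd | wc -l

# Both limits for an arbitrary PID (replace $$ with the target PID):
grep 'Max open files' /proc/$$/limits
```

Substituting a worker's PID for $$ gives the same view for any running service.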
Common Misconception
✗ 'Too many open files' means the process genuinely needs that many files open at once. In practice the error is usually caused by FD leaks (files and sockets opened but never closed), not by legitimately high concurrency.
Why It Matters
A PHP queue worker that opens database connections without closing them properly exhausts its FD limit and crashes — understanding FD limits is essential for debugging long-running PHP processes.
Common Mistakes
- Not closing file handles in PHP after use — FDs accumulate over time in long-running scripts.
- Low ulimit for high-concurrency services — Nginx default (1024) is insufficient for busy servers.
- Not setting LimitNOFILE in systemd service files — systemd ignores /etc/security/limits.conf for services.
- No monitoring of FD usage — exhaustion typically happens gradually, not suddenly.
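The monitoring gap above can be closed with a one-liner: compare each worker's open-FD count against its own soft limit. This is a sketch assuming the process name is php-fpm; adjust pgrep for your setup.

```shell
# For each php-fpm worker, print FDs in use vs. the soft limit.
for pid in $(pgrep php-fpm); do
  used=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
  limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
  echo "pid=$pid used=$used soft_limit=$limit"
done
```

A usage ratio that climbs steadily under constant load is the classic signature of a leak.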
Code Examples
✗ Vulnerable
// PHP FD leak: the file handle is never closed.
function processLogs(array $paths): void {
    foreach ($paths as $path) {
        $handle = fopen($path, 'r'); // opens an FD
        while (!feof($handle)) {
            processLine(fgets($handle));
        }
        // Missing: fclose($handle); the FD leaks.
    }
    // 1000 log files = 1000 leaked FDs, and the ulimit is hit.
}
✓ Fixed
// Close handles explicitly, even if processing throws:
function processLogs(array $paths): void {
    foreach ($paths as $path) {
        $handle = fopen($path, 'r');
        if ($handle === false) {
            continue; // skip unreadable files instead of calling feof(false)
        }
        try {
            while (!feof($handle)) {
                processLine(fgets($handle));
            }
        } finally {
            fclose($handle); // always closed
        }
    }
}
# Increase the FD limit for PHP-FPM via a systemd override:
# /etc/systemd/system/php8.3-fpm.service.d/override.conf
[Service]
LimitNOFILE=65536

# Apply the override:
#   systemctl daemon-reload && systemctl restart php8.3-fpm

# Check current FD usage of one php-fpm process:
lsof -p $(pgrep php-fpm | head -1) | wc -l
Added
16 Mar 2026
Edited
5 Apr 2026
Tools & Severity
🟠 High
⚙ Fix effort: Medium
⚡ Quick Fix
Check ulimit -n on your PHP-FPM server; the default soft limit of 1024 is easily exhausted under load. Raise fs.file-max in /etc/sysctl.conf and set LimitNOFILE in the systemd service file.
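Alongside the per-process limit, the kernel enforces a system-wide ceiling. A minimal check (Linux, via /proc; the fs.file-max value below is an example, size it to your workload):

```shell
# System-wide FD ceiling across all processes:
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr    # allocated, free, max

# Persist a higher ceiling (example value):
#   echo 'fs.file-max = 2097152' >> /etc/sysctl.conf && sysctl -p
```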
📦 Applies To
PHP 5.0+
any
web
cli
🔗 Prerequisites
🔍 Detection Hints
Too many open files error in PHP; PHP-FPM workers failing to open sockets; connection refused under high load due to FD exhaustion
Auto-detectable:
✓ Yes
ulimit
lsof
datadog
prometheus-node-exporter
⚠ Related Problems
🤖 AI Agent
Confidence: Medium
False Positives: Medium
✗ Manual fix
Fix: Medium
Context: File
CWE-400
CWE-772