Error Tracking
debt(d9/e1/b3/t5)
Closest to 'silent in production until users hit it' (d9). The detection_hints confirm automated detection is 'no' and the code_pattern describes absence as 'errors only visible in server logs; no alerting on new error types.' There is no linter or compiler check for missing error tracking — the gap is only revealed when users report problems that were silently failing in production.
Closest to 'one-line patch or single-call swap' (e1). The quick_fix explicitly states: 'Install Sentry (free tier) with composer require sentry/sentry — one line of configuration gives you error grouping, stack traces, breadcrumbs, and release tracking.' This is a single dependency install plus minimal configuration.
Closest to 'localised tax' (b3). The choice applies to web, cli, and queue-worker contexts (broad applies_to), but once the SDK is installed and configured it operates largely in the background. The common_mistakes around alert fatigue, user context, and deploy linking impose an ongoing maintenance cost on the team managing the tool, while the rest of the codebase is minimally affected. The cross-context reach pushes this slightly past a purely local tax, but not far enough to move it off b3.
Closest to 'notable trap (a documented gotcha most devs eventually learn)' (t5). The misconception field identifies the canonical wrong belief: 'Log files are sufficient for error tracking.' Most developers who haven't used dedicated error tracking assume server logs cover the same ground. The common_mistakes reinforce additional non-obvious pitfalls (alert fatigue from unfiltered 404s/validation errors, missing user context, no deploy linkage) that competent developers routinely discover only after pain in production.
Also Known As
Error monitoring, exception tracking, crash reporting.
TL;DR
Use a dedicated error tracker (Sentry, Bugsnag, Rollbar) rather than relying on log files: errors get grouped, counted, enriched with user and release context, and alerted on before users report them.
Explanation
Error tracking tools (Sentry, Bugsnag, Rollbar) catch unhandled exceptions and send them to a central dashboard where errors are grouped by fingerprint (similar stack traces), counted, and alerted on. Key features: stack traces with source maps, user context (who was affected), release tracking (which deploy introduced the error), breadcrumbs (events leading up to the error), and performance monitoring integration. Essential for knowing about errors before users report them.
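A minimal sketch of two of those features, release tracking and breadcrumbs, using the sentry/sentry PHP SDK; the release string and breadcrumb category here are illustrative assumptions (grouping and counting happen server-side, so the client only needs this configuration):

\Sentry\init([
    'dsn' => getenv('SENTRY_DSN'),
    'release' => 'shop@2.4.1',        // illustrative: ties each error to the deploy that shipped it
    'environment' => 'production',
]);

// Breadcrumbs record events leading up to an error; the SDK attaches
// them automatically to the next exception it captures.
\Sentry\addBreadcrumb(new \Sentry\Breadcrumb(
    \Sentry\Breadcrumb::LEVEL_INFO,
    \Sentry\Breadcrumb::TYPE_DEFAULT,
    'checkout',                        // category (illustrative)
    'Payment gateway request started'  // message (illustrative)
));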
Diagram
flowchart LR
APP[PHP App] -->|uncaught exception| SENTRY[Sentry / Bugsnag]
SENTRY --> ALERT[Alert developer]
SENTRY --> ISSUE[Create issue<br/>with stack trace]
ISSUE --> CONTEXT[Request context<br/>user context<br/>breadcrumbs<br/>environment]
CONTEXT --> ASSIGN[Assign to developer]
subgraph Grouping
SIMILAR[Similar errors grouped<br/>by stack trace fingerprint]
COUNT[Occurrence count<br/>affected users count]
SIMILAR --> COUNT
end
style SENTRY fill:#6e40c9,color:#fff
style ISSUE fill:#f85149,color:#fff
style ASSIGN fill:#238636,color:#fff
Common Misconception
'Log files are sufficient for error tracking.' Logs do record errors, but nothing watches them: there is no grouping, no occurrence counting, and no alert when a new error type appears after a deploy.
Why It Matters
Without error tracking, production failures stay invisible until a user reports them, often days after the error first occurred. An error tracker surfaces new errors within minutes, with the stack trace, the affected users, and the deploy that introduced the problem.
Common Mistakes
- Not filtering out expected exceptions (404s, validation errors) — alert fatigue from noise buries real bugs (see the sketch after this list).
- Not attaching user context — 'who was affected' is often the most important debugging context.
- Not linking errors to deploys — 'did this error start after the last deployment?' is a critical question.
- Not setting alert thresholds — getting paged for every single error occurrence instead of only for new error types or rate spikes.
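The first and third mistakes can both be addressed at init time. A minimal sketch, assuming the sentry/sentry PHP SDK (v3+), where before_send receives the event plus an optional hint; NotFoundHttpException and ValidationException are illustrative stand-ins for whatever your framework throws:

\Sentry\init([
    'dsn' => getenv('SENTRY_DSN'),
    // Assumption: your deploy script exports a release id, e.g. a git SHA.
    'release' => getenv('APP_RELEASE'),
    'before_send' => function (\Sentry\Event $event, ?\Sentry\EventHint $hint): ?\Sentry\Event {
        $exception = $hint !== null ? $hint->exception : null;
        // Drop expected exceptions so noise does not bury real bugs.
        if ($exception instanceof NotFoundHttpException
            || $exception instanceof ValidationException) {
            return null; // returning null discards the event entirely
        }
        return $event;
    },
]);

Alert thresholds themselves (notify only on new issues or on rate spikes) are configured in the tracker's dashboard rather than in code.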
Code Examples
// Error visible only in log file:
try {
    processOrder($order);
} catch (Exception $e) {
    error_log('Order processing failed: ' . $e->getMessage());
    // Buried in /var/log/php/error.log
    // Discovered when a customer calls support 2 days later
}
// Sentry captures with full context:
\Sentry\init(['dsn' => getenv('SENTRY_DSN')]);

\Sentry\configureScope(function (\Sentry\State\Scope $scope) use ($userId, $userEmail): void {
    // $userId / $userEmail come from your authentication layer
    $scope->setUser(['id' => $userId, 'email' => $userEmail]);
});

try {
    processOrder($order);
} catch (Exception $e) {
    \Sentry\captureException($e); // Full stack trace, user context, automatic alert
    throw $e; // Still propagate so normal error handling runs
}