Circuit Breaker Pattern
debt(d9/e5/b5/t7)
Closest to 'silent in production until users hit it' (d9). The detection_hints field indicates automated detection is 'no', and the tools listed (Blackfire, Datadog) are observability/profiling tools, not static analysis. The absence of a circuit breaker is invisible at code review, at lint time, and at test time — it only manifests when a downstream dependency degrades and cascading failures pile up in production under real traffic.
Closest to 'touches multiple files / significant refactor in one component' (e5). The quick_fix describes wrapping external service calls with a circuit breaker library and adding fallback logic. This requires identifying every external call site, integrating a circuit breaker abstraction (e.g. a library or custom implementation), configuring thresholds, implementing half-open state probing, and adding monitoring/alerting for circuit state — spanning multiple files and components rather than a single-line swap.
Closest to 'persistent productivity tax' (b5). The pattern applies to web, API, and queue-worker contexts wherever external dependencies exist. Once adopted, every future external integration must be evaluated for circuit breaker wrapping, threshold tuning, fallback strategy, and monitoring. It imposes an ongoing design and operational tax across many work streams without being fully architectural.
Closest to 'serious trap — contradicts how a similar concept works elsewhere' (t7). The misconception field explicitly states that developers commonly conflate circuit breakers with retry patterns, believing they solve the same problem. This is a serious trap: retries are intuitive (try again on failure) while circuit breakers invert that intuition (stop trying entirely). A developer familiar with retry logic will apply retries where a circuit breaker is needed, making the failure mode worse by hammering a failing dependency.
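A minimal sketch of the trap, assuming Laravel's HTTP client (the Http facade and its retry() helper); the URL and attempt counts are illustrative:

use Illuminate\Support\Facades\Http;

// Retries where a circuit breaker is needed: during a sustained outage,
// every caller now sends three requests instead of one, multiplying load
// on a dependency that is already failing.
function getUserWithRetries(int $id): ?array
{
    return Http::retry(3, 100)                  // 3 attempts, 100 ms apart
        ->get("https://user-service/users/{$id}")
        ->json();
}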
Also Known As
TL;DR
Stop calling a dependency after repeated failures: fail fast with a fallback, then probe for recovery after a cooldown, instead of letting a failing service exhaust the caller's threads and connections.
Explanation
Named after an electrical circuit breaker, the pattern has three states: Closed (normal — calls pass through, failures are counted), Open (tripped — calls fail immediately without attempting the downstream call, returning a fallback), and Half-Open (trial — a probe request is allowed through after a timeout; if it succeeds, the circuit closes; if it fails, it reopens). This prevents a slow or failing downstream service from exhausting all threads/connections in the caller, turning a partial failure into a total outage. PHP implementations: Ganesha, php-circuit-breaker, or a custom Redis-backed state machine. Pair with the Bulkhead and Retry patterns for comprehensive resilience.
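A minimal sketch of the Redis-backed approach mentioned above, assuming the phpredis extension; the class name, key names, and thresholds are illustrative. It matters in PHP specifically because worker processes do not share memory, so an in-process failure counter resets on every request:

class RedisCircuitBreaker
{
    public function __construct(
        private \Redis $redis,            // phpredis client, already connected
        private string $service,          // e.g. 'user-service'
        private int $failureThreshold = 5,
        private int $cooldownSeconds = 30,
    ) {}

    public function allowRequest(): bool
    {
        // While the open key exists, the circuit is open: fail fast.
        return !$this->redis->exists("cb:{$this->service}:open");
    }

    public function recordFailure(): void
    {
        $failures = $this->redis->incr("cb:{$this->service}:failures");
        $this->redis->expire("cb:{$this->service}:failures", 60); // rolling window

        if ($failures >= $this->failureThreshold) {
            // Open the circuit; the key's TTL doubles as the cooldown timer.
            $this->redis->set("cb:{$this->service}:open", '1', ['ex' => $this->cooldownSeconds]);
        }
    }

    public function recordSuccess(): void
    {
        $this->redis->del("cb:{$this->service}:failures", "cb:{$this->service}:open");
    }
}

Here the open key's TTL doubles as the cooldown: when it expires, traffic flows again, and since the failure counter outlives the cooldown window, a single failed probe typically re-trips the circuit, roughly approximating the half-open state.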
Diagram
stateDiagram-v2
[*] --> Closed
Closed --> Open: Failure threshold exceeded
Open --> HalfOpen: Timeout elapsed
HalfOpen --> Closed: Test request succeeds
HalfOpen --> Open: Test request fails
note right of Closed: Requests pass through
note right of Open: Requests fail immediately
note right of HalfOpen: One test request allowed
classDef closedStyle fill:#238636,color:#fff
classDef openStyle fill:#f85149,color:#fff
classDef halfOpenStyle fill:#d29922,color:#fff
class Closed closedStyle
class Open openStyle
class HalfOpen halfOpenStyle
Common Misconception
"A circuit breaker is just another retry mechanism." The two are near opposites: a retry keeps trying on failure, while a circuit breaker stops trying entirely. Applying retries where a circuit breaker is needed multiplies load on an already-failing dependency.
Why It Matters
Without a circuit breaker, a slow or failing downstream dependency quietly consumes the caller's threads and connections under real traffic, turning a partial failure into a cascading, total outage. Nothing warns you at code review, lint time, or test time; the gap only shows up in production.
Common Mistakes
- Setting the threshold too low — a single failed request opening the circuit causes unnecessary outages.
- Not implementing a half-open state — the circuit must probe for recovery, not stay open indefinitely.
- Wrapping every external call in a circuit breaker, including fast, internal ones — overhead is not always worth it.
- Not monitoring circuit state — an open circuit is a live incident that needs alerting (see the logging sketch after this list).
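One way to address the last point, sketched as a hypothetical listener wired to a PSR-3 logger; how it is attached to the breaker's state transitions is left open:

use Psr\Log\LoggerInterface;
use Psr\Log\LogLevel;

final class CircuitStateLogger
{
    public function __construct(private LoggerInterface $logger) {}

    public function __invoke(string $service, string $from, string $to): void
    {
        // An open circuit is a live incident: log at a level your alerting
        // pipeline treats as actionable.
        $level = $to === 'open' ? LogLevel::CRITICAL : LogLevel::INFO;

        $this->logger->log($level, "Circuit breaker for {$service}: {$from} -> {$to}", [
            'service' => $service,
            'from' => $from,
            'to' => $to,
        ]);
    }
}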
Avoid When
- Calling internal services or local resources where latency is negligible and failures are surfaced immediately.
- Operations that must not be silently skipped — payments, critical writes — where the fallback is worse than waiting.
- Low-traffic services where the circuit will never accumulate enough failures to trip.
- When a retry with exponential backoff is sufficient and simpler (see the sketch after this list).
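For that last case, a minimal retry-with-exponential-backoff sketch; attempt counts and delays are illustrative:

// random_int() adds jitter so synchronized callers do not retry in lockstep.
function retryWithBackoff(callable $fn, int $maxAttempts = 3, int $baseDelayMs = 100): mixed
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $fn();
        } catch (\Throwable $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // Attempts exhausted: surface the last failure.
            }
            // Exponential backoff: 100 ms, 200 ms, 400 ms, ... plus jitter.
            $delayMs = $baseDelayMs * (2 ** ($attempt - 1)) + random_int(0, 50);
            usleep($delayMs * 1000);
        }
    }
}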
When To Use
- External HTTP API calls, third-party services, or any dependency that can be slow or unavailable.
- Microservice architectures where one failing service must not cascade and bring down the entire system.
- Any call where a fast failure with a fallback is preferable to a slow queue of blocked requests.
- Systems with SLA requirements where downstream failures must not exceed a time budget.
Code Examples
// No circuit breaker — repeated calls to a failing service:
// (assumes Laravel's Http facade; returns the decoded JSON payload)
function getUser(int $id): ?array {
    // Every call blocks for the full HTTP timeout while user-service is down.
    // With a circuit breaker: after N failures, return null immediately
    // and only retry after a cooldown period.
    return Http::get("https://user-service/users/{$id}")->json();
}
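A minimal in-memory implementation of the three states follows. Note it is per-process: in typical PHP deployments each worker keeps its own counters, which is why shared-store variants like the Redis sketch under Explanation exist.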
class CircuitOpenException extends \RuntimeException {}

class CircuitBreaker {
    private const FAILURE_THRESHOLD = 5;  // failures before the circuit opens
    private const COOLDOWN_SECONDS = 30;  // how long to stay open before probing

    private int $failures = 0;
    private string $state = 'closed'; // closed | open | half-open
    private ?int $openedAt = null;

    public function call(callable $fn): mixed {
        if ($this->state === 'open') {
            // Fail fast until the cooldown has elapsed...
            if (time() - $this->openedAt < self::COOLDOWN_SECONDS) {
                throw new CircuitOpenException();
            }
            // ...then allow one probe request through.
            $this->state = 'half-open';
        }
        try {
            $result = $fn();
            $this->onSuccess();
            return $result;
        } catch (\Throwable $e) {
            $this->onFailure();
            throw $e;
        }
    }

    private function onSuccess(): void {
        $this->failures = 0;
        $this->state = 'closed';
    }

    private function onFailure(): void {
        // The counter is not reset while open, so a failed half-open probe
        // re-trips the circuit immediately.
        if (++$this->failures >= self::FAILURE_THRESHOLD) {
            $this->state = 'open';
            $this->openedAt = time();
        }
    }
}
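A usage sketch tying the two examples together; getUser() is the illustrative function from the first snippet:

$breaker = new CircuitBreaker();

try {
    $user = $breaker->call(fn () => getUser(42));
} catch (CircuitOpenException) {
    // Circuit is open: fail fast with a fallback instead of waiting
    // on another timeout.
    $user = null;
}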