AI Agent Pattern
debt(d8/e7/b7/t7)
Closest to 'silent in production until users hit it' (d9), -1. No detection_hints tools are specified. The core issues — infinite loops, destructive actions without confirmation, non-deterministic behavior, missing logging — are architectural/runtime problems that no standard linter or SAST tool catches. However, some issues like missing iteration limits could be caught in careful code review, pulling it slightly below d9 to d8.
Closest to 'cross-cutting refactor across the codebase' (e7). The quick_fix summary sounds simple ('set max_iterations, log tool calls, require confirmation'), but retrofitting these safeguards into an existing agent system touches the tool registration layer, the agent loop, every tool implementation (for confirmation gates), and the logging/observability infrastructure. This is cross-cutting work spanning multiple components. Not quite architectural rework (e9), but significantly more than a single-component refactor.
Closest to 'strong gravitational pull' (b7). Once you adopt the agent pattern, every feature involving LLM tool use is shaped by this architectural choice — the tool registry, the loop/orchestration layer, the confirmation gates, the logging pipeline, and the error handling strategy all become load-bearing infrastructure. Every new tool, every new agent workflow must conform to these patterns. It doesn't quite define the entire system's shape (b9), but it exerts strong gravitational pull on all LLM-related development.
Closest to 'serious trap — contradicts how a similar concept works elsewhere' (t7). The misconception is explicit: developers assume AI agents are reliable enough to run autonomously without human oversight, treating them like deterministic automation (cron jobs, pipelines). The 'obvious' approach — give the agent tools and let it run — leads to infinite loops, irreversible destructive actions, and silent error accumulation. This directly contradicts the mental model from traditional automation where scripts reliably execute the same path every time. The non-deterministic, error-accumulating nature is a serious trap for competent developers new to the pattern.
Also Known As
TL;DR
An AI agent runs an LLM in a loop with access to tools, alternating reasoning, acting, and observing until it reaches a goal. Without iteration limits, tool-call logging, and confirmation gates on destructive actions, agents can loop indefinitely, take irreversible actions, and fail silently in production.
Explanation
An AI agent extends a simple LLM call by giving the model access to tools — functions it can invoke to take actions in the world. The agent receives a goal, reasons about what tool to call, receives the tool result, and repeats until it can produce a final answer. Common tools include web search, code execution, database queries, and API calls. The ReAct pattern (Reasoning + Acting) is the most common agent architecture: the model alternates between Thought (reasoning about what to do), Action (calling a tool), and Observation (receiving the result). In PHP, agents are typically built by calling an LLM API in a loop, parsing tool-call responses, executing the requested function, and feeding results back as the next message.
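The Thought → Action → Observation cycle described above can be sketched as a minimal PHP loop. `callModel()` here is a stand-in for a real LLM API client (normally an HTTP call) and is stubbed so the sketch runs; the tool names, message shapes, and helper names are illustrative assumptions, not a specific vendor's API.

```php
<?php
// Minimal ReAct-style loop sketch. callModel() is stubbed: it "asks" for one
// search, then finishes once it sees a tool result in the message history.
function callModel(array $messages): array {
    $toolResults = array_filter($messages, fn($m) => $m['role'] === 'tool');
    if (count($toolResults) === 0) {
        return ['done' => false, 'tool' => 'search', 'args' => ['q' => 'PHP 8.3 release date']];
    }
    return ['done' => true, 'answer' => 'PHP 8.3 was released in November 2023.'];
}

function executeTool(string $tool, array $args): string {
    // Stub tool registry: map tool names to callables.
    $tools = ['search' => fn($a) => "Result for '{$a['q']}'"];
    return $tools[$tool]($args);
}

$messages = [['role' => 'user', 'content' => 'When was PHP 8.3 released?']];
do {
    $step = callModel($messages);                              // Thought + Action
    if ($step['done']) break;
    $observation = executeTool($step['tool'], $step['args']);  // Observation
    $messages[] = ['role' => 'tool', 'content' => $observation];
} while (true);

echo $step['answer], PHP_EOL;
```

Note the key structural point: the tool result is fed back as the next message, so the model's next "thought" can build on the observation.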
Common Misconception
That AI agents are reliable enough to run autonomously without human oversight, like deterministic automation (cron jobs, pipelines). In reality the agent loop is non-deterministic: the same goal may take different tool-calling paths across runs, and small reasoning errors accumulate across iterations.
Why It Matters
An unconstrained agent loop can run indefinitely, burning tokens; a misread goal can trigger irreversible destructive actions; and without per-step logging, a wrong answer cannot be traced back to the reasoning step that caused it. These failures stay silent in production until users hit them.
Common Mistakes
- No iteration limit — agents can loop indefinitely if the model repeatedly calls the wrong tool or misinterprets results.
- Giving the agent access to destructive tools without confirmation — file deletion, email sending, and database writes should require explicit user approval.
- Not logging tool calls — debugging an agent that produced a wrong answer requires replaying every reasoning step.
- Treating the agent as deterministic — the same input may take different tool-calling paths across runs, making testing difficult.
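The "not logging tool calls" mistake can be addressed with a small audit trail: wrap every tool execution in a recorder so a wrong answer can be replayed step by step afterwards. This is a sketch under assumed names (`ToolCallRecorder` and its methods are illustrative, not from any library).

```php
<?php
// Records every tool call so a failed agent run can be replayed for debugging.
final class ToolCallRecorder {
    /** @var array<int, array{tool: string, args: array, result: string}> */
    private array $trace = [];

    public function execute(callable $tool, string $name, array $args): string {
        $result = $tool($args);
        $this->trace[] = ['tool' => $name, 'args' => $args, 'result' => $result];
        return $result;
    }

    /** Returns the recorded steps, e.g. when debugging a bad answer. */
    public function replay(): array {
        return $this->trace;
    }
}

$recorder = new ToolCallRecorder();
$recorder->execute(fn($a) => strtoupper($a['text']), 'uppercase', ['text' => 'hi']);

foreach ($recorder->replay() as $i => $step) {
    echo "step $i: {$step['tool']} -> {$step['result']}", PHP_EOL;
}
```

In production the trace would go to a persistent log (file, database, or an observability pipeline) rather than an in-memory array.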
Avoid When
- Simple single-turn tasks where a standard LLM call is sufficient — agents add latency and cost per loop iteration.
- Irreversible actions without human confirmation — agents can take destructive actions based on misunderstood goals.
- Untrusted or ambiguous goals — an agent given a vague objective may pursue an unintended interpretation.
- Production environments without iteration limits and logging — runaway agents consume tokens and cause unintended side effects.
When To Use
- Multi-step tasks that require planning, tool use, and adapting based on intermediate results.
- Workflows where the path to the answer is not known upfront and the model must discover it.
- Automating research, code review, or data gathering tasks that a human would do step-by-step.
- When tool-use (web search, code execution, DB queries) is required to answer the goal.
Code Examples
```php
<?php
// ❌ No iteration limit, no logging, destructive tool access
function runAgent(string $goal): string {
    while (true) {
        $response = callLLM($goal);
        if ($response['done']) return $response['answer'];
        $result = executeTool($response['tool'], $response['args']); // may delete files!
        $goal .= $result;
    }
    // infinite loop possible, no audit trail
}
```
```php
<?php
// ✅ Agent with iteration limit, logging, and confirmation for destructive actions
function runAgent(string $goal, array $tools, int $maxIterations = 10): string
{
    $messages = [['role' => 'user', 'content' => $goal]];
    $iteration = 0;
    while ($iteration < $maxIterations) {
        $response = callLLM($messages, $tools);
        Log::info('Agent step', ['iteration' => $iteration, 'response' => $response]);
        if ($response['done']) return $response['answer'];
        // Require confirmation before destructive tools
        if (isDestructive($response['tool'])) {
            confirmWithUser($response['tool'], $response['args']);
        }
        $result = executeTool($response['tool'], $response['args']);
        $messages[] = ['role' => 'tool', 'content' => $result];
        $iteration++;
    }
    throw new RuntimeException("Agent exceeded max iterations ($maxIterations)");
}
```
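The `isDestructive` and `confirmWithUser` helpers used above are left undefined in the example. One plausible sketch, under assumed names and an assumed CLI context, is an explicit allow-list of destructive tool names plus a confirmation prompt that aborts on refusal:

```php
<?php
// Hypothetical helpers for the confirmation gate. The tool names in the
// allow-list are illustrative; a real system would derive this from tool
// metadata registered alongside each tool.
const DESTRUCTIVE_TOOLS = ['delete_file', 'send_email', 'db_write'];

function isDestructive(string $tool): bool {
    return in_array($tool, DESTRUCTIVE_TOOLS, true);
}

function confirmWithUser(string $tool, array $args): void {
    // In a CLI agent this reads stdin; a web agent would pause the run and
    // surface an approval UI instead.
    echo "About to run '$tool' with " . json_encode($args) . ". Proceed? [y/N] ";
    $answer = trim(fgets(STDIN) ?: '');
    if (strtolower($answer) !== 'y') {
        throw new RuntimeException("User declined destructive tool '$tool'");
    }
}
```

Marking tools as destructive via an explicit list (rather than inferring it from the tool name) keeps the gate a deliberate, reviewable decision.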