
AI Agent Pattern

ai_ml Advanced
DEBT score: d8 / e7 / b7 / t7
d8 Detectability Operational debt — how invisible misuse is to your safety net

Closest to 'silent in production until users hit it' (d9), -1. No detection_hints tools are specified. The core issues — infinite loops, destructive actions without confirmation, non-deterministic behavior, missing logging — are architectural/runtime problems that no standard linter or SAST tool catches. However, some issues like missing iteration limits could be caught in careful code review, pulling it slightly below d9 to d8.

e7 Effort Remediation debt — work required to fix once spotted

Closest to 'cross-cutting refactor across the codebase' (e7). The quick_fix summary sounds simple ('set max_iterations, log tool calls, require confirmation'), but retrofitting these safeguards into an existing agent system touches the tool registration layer, the agent loop, every tool implementation (for confirmation gates), and the logging/observability infrastructure. This is cross-cutting work spanning multiple components. Not quite architectural rework (e9), but significantly more than a single-component refactor.

b7 Burden Structural debt — long-term weight of choosing wrong

Closest to 'strong gravitational pull' (b7). Once you adopt the agent pattern, every feature involving LLM tool use is shaped by this architectural choice — the tool registry, the loop/orchestration layer, the confirmation gates, the logging pipeline, and the error handling strategy all become load-bearing infrastructure. Every new tool, every new agent workflow must conform to these patterns. It doesn't quite define the entire system's shape (b9), but it exerts strong gravitational pull on all LLM-related development.

t7 Trap Cognitive debt — how counter-intuitive correct behaviour is

Closest to 'serious trap — contradicts how a similar concept works elsewhere' (t7). The misconception is explicit: developers assume AI agents are reliable enough to run autonomously without human oversight, treating them like deterministic automation (cron jobs, pipelines). The 'obvious' approach — give the agent tools and let it run — leads to infinite loops, irreversible destructive actions, and silent error accumulation. This directly contradicts the mental model from traditional automation where scripts reliably execute the same path every time. The non-deterministic, error-accumulating nature is a serious trap for competent developers new to the pattern.

About DEBT scoring →

Also Known As

LLM agent · autonomous agent · ReAct agent · tool-using LLM · agentic AI

TL;DR

An LLM-powered system that takes multi-step actions autonomously — calling tools, reading results, and deciding next steps in a loop until a goal is achieved.

Explanation

An AI agent extends a simple LLM call by giving the model access to tools — functions it can invoke to take actions in the world. The agent receives a goal, reasons about what tool to call, receives the tool result, and repeats until it can produce a final answer. Common tools include web search, code execution, database queries, and API calls. The ReAct pattern (Reasoning + Acting) is the most common agent architecture: the model alternates between Thought (reasoning about what to do), Action (calling a tool), and Observation (receiving the result). In PHP, agents are typically built by calling an LLM API in a loop, parsing tool-call responses, executing the requested function, and feeding results back as the next message.
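The parse-and-dispatch step described above can be sketched with a minimal tool registry. This is a hypothetical sketch: the tool names, the shape of the model's response array, and the dispatch function are illustrative assumptions, not the API of any specific LLM SDK.

```php
<?php
// Hypothetical tool registry: tool name => callable.
// The tool names and response shape are illustrative assumptions.
$tools = [
    'web_search' => fn(array $args): string => "results for {$args['query']}",
    'db_query'   => fn(array $args): string => "rows for {$args['sql']}",
];

// One ReAct step: parse the model's tool-call response and dispatch it.
function dispatchToolCall(array $response, array $tools): string
{
    $name = $response['tool'];
    if (!isset($tools[$name])) {
        // Feed the error back as the observation so the model can recover.
        return "Error: unknown tool '$name'";
    }
    return $tools[$name]($response['args']);
}

// The return value becomes the Observation fed back as the next message.
echo dispatchToolCall(['tool' => 'web_search', 'args' => ['query' => 'PHP agents']], $tools);
```

Keeping dispatch in one function makes it the natural place to add logging and confirmation gates later.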

Common Misconception

The misconception: AI agents are reliable enough to run autonomously without human oversight. In reality, current LLM agents have high failure rates on complex multi-step tasks, accumulate errors across steps, and can take irreversible actions (deleting files, sending emails, making API calls) based on misunderstandings. Always implement human-in-the-loop checkpoints for consequential actions, add hard limits on loop iterations, and log every tool call.

Why It Matters

The agent pattern is how LLMs become useful for tasks that require multiple steps, external data, or real-world actions — beyond single-turn Q&A. A PHP application that answers 'what is our revenue this quarter' by calling a database tool, running a calculation tool, and formatting the result is an agent. Understanding the pattern is essential for building LLM features that are more capable than a simple chat interface, while understanding the failure modes is essential for building them safely.

Common Mistakes

  • No iteration limit — agents can loop indefinitely if the model repeatedly calls the wrong tool or misinterprets results.
  • Giving the agent access to destructive tools without confirmation — file deletion, email sending, and database writes should require explicit user approval.
  • Not logging tool calls — debugging an agent that produced a wrong answer requires replaying every reasoning step.
  • Treating the agent as deterministic — the same input may take different tool-calling paths across runs, making testing difficult.
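The last mistake, non-determinism, is usually tackled by stubbing the LLM in tests with a scripted sequence of responses, so the tool-calling path is fixed and assertable. A minimal sketch, assuming the same `done`/`tool`/`args` response shape as the examples below; `makeScriptedLLM` is a hypothetical test helper, not a library function:

```php
<?php
// Hypothetical test double: replace the LLM call with a scripted
// sequence of canned responses, making the agent loop deterministic.
function makeScriptedLLM(array $script): callable
{
    $step = 0;
    return function (array $messages) use ($script, &$step): array {
        return $script[$step++]; // replay the next canned response
    };
}

$llm = makeScriptedLLM([
    ['done' => false, 'tool' => 'db_query', 'args' => ['sql' => 'SELECT 1']],
    ['done' => true,  'answer' => '42'],
]);

// First call requests a tool; the second finishes with an answer.
$first  = $llm([]);
$second = $llm([]);
```

Injecting the LLM as a callable (rather than calling a global function) is what makes this substitution possible.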

Avoid When

  • Simple single-turn tasks where a standard LLM call is sufficient — agents add latency and cost per loop iteration.
  • Irreversible actions without human confirmation — agents can take destructive actions based on misunderstood goals.
  • Untrusted or ambiguous goals — an agent given a vague objective may pursue an unintended interpretation.
  • Production environments without iteration limits and logging — runaway agents consume tokens and cause unintended side effects.

When To Use

  • Multi-step tasks that require planning, tool use, and adapting based on intermediate results.
  • Workflows where the path to the answer is not known upfront and the model must discover it.
  • Automating research, code review, or data gathering tasks that a human would do step-by-step.
  • When tool-use (web search, code execution, DB queries) is required to answer the goal.

Code Examples

✗ Vulnerable
<?php
// ❌ No iteration limit, no logging, destructive tool access
function runAgent(string $goal): string {
    while (true) { // infinite loop possible if the model never reports done
        $response = callLLM($goal);
        if ($response['done']) return $response['answer'];
        $result = executeTool($response['tool'], $response['args']); // may delete files, no audit trail!
        $goal .= $result; // results concatenated onto the goal string, no message history
    }
}
✓ Fixed
<?php
// ✅ Agent with iteration limit, logging, and confirmation for destructive actions
function runAgent(string $goal, array $tools, int $maxIterations = 10): string
{
    $messages = [['role' => 'user', 'content' => $goal]];
    $iteration = 0;

    while ($iteration < $maxIterations) {
        $response = callLLM($messages, $tools);
        Log::info('Agent step', ['iteration' => $iteration, 'response' => $response]);

        if ($response['done']) return $response['answer'];

        // Require explicit approval before destructive tools; stop if declined
        if (isDestructive($response['tool']) && !confirmWithUser($response['tool'], $response['args'])) {
            throw new RuntimeException("User declined destructive tool: {$response['tool']}");
        }

        $result = executeTool($response['tool'], $response['args']);
        $messages[] = ['role' => 'tool', 'content' => $result];
        $iteration++;
    }

    throw new RuntimeException("Agent exceeded max iterations ($maxIterations)");
}
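A possible implementation of the `isDestructive` gate used above is a plain allowlist check. The tool names here are illustrative assumptions, as is the sketched call site in the comment:

```php
<?php
// Hypothetical helper for the fixed example above: a plain allowlist
// decides which tools need a human confirmation gate. The tool names
// are illustrative assumptions.
function isDestructive(string $tool): bool
{
    return in_array($tool, ['delete_file', 'send_email', 'db_write'], true);
}

// Sketched usage with runAgent from the fixed example:
//   try {
//       echo runAgent('Summarise Q1 revenue', $tools, maxIterations: 10);
//   } catch (RuntimeException $e) {
//       // The hard cap or a declined confirmation fired; surface it
//       // instead of burning more tokens.
//   }
var_dump(isDestructive('delete_file')); // bool(true)
```

An explicit allowlist keeps the safe-by-default property: a newly registered tool is treated as destructive-unknown until someone classifies it, or, more conservatively, the check can be inverted into a safe-tool allowlist.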

Added 23 Mar 2026
Edited 25 Mar 2026
DEV INTEL Tools & Severity
🟠 Severity: High · ⚙ Fix effort: High
⚡ Quick Fix
Always set a max_iterations limit (10 is reasonable), log every tool call with inputs and outputs, and require confirmation before any irreversible action.
