
AI-Assisted Code Generation

ai_ml Intermediate
debt(d5/e5/b5/t7)
d5 Detectability Operational debt — how invisible misuse is to your safety net

Closest to 'specialist tool catches it' (d5). The term's detection_hints list PHPStan, Psalm, Semgrep, and PHPUnit as tools that can catch many AI-generation mistakes — type errors, missing error handling, some security patterns. However, these are specialist tools that must be configured and run; subtle logic errors and missing edge cases won't be caught even by these, pushing some issues toward d7. Settling on d5 because the listed tools do catch a meaningful proportion of the common defects in AI-generated code.
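
To make that split concrete, here is a minimal hypothetical sketch — the function names and the discount rule are invented for illustration, not taken from this entry. PHPStan or Psalm flags the first defect as a type-level error; the second passes analysis cleanly because only the business logic is wrong.

// Caught by PHPStan/Psalm: declared to return int, but falls through
// with no return when $people is empty.
function averageAge(array $people): int {
    if (count($people) > 0) {
        return intdiv((int) array_sum(array_column($people, 'age')), count($people));
    }
    // missing return — flagged as a type error by static analysis
}

// NOT caught by static analysis: the types are sound, but the boundary is wrong
// (suppose the spec says "10% off orders of 100 or more") — the d7-leaning case.
function applyDiscount(float $total): float {
    return $total > 100 ? $total * 0.9 : $total; // should be >=
}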

e5 Effort Remediation debt — work required to fix once spotted

Closest to 'touches multiple files / significant refactor in one component' (e5). The quick_fix suggests running PHPStan and tests before committing, which sounds like e1-e3 for prevention. But the common_mistakes indicate that once AI-generated code is committed without review — especially security-critical code — the remediation involves auditing and rewriting across multiple files. AI code tends to be scattered throughout the codebase (CRUD, auth, tests), so fixing already-committed AI-generated code is a multi-file effort, solidly e5.

b5 Burden Structural debt — long-term weight of choosing wrong

Closest to 'persistent productivity tax' (b5). The applies_to covers web and cli contexts — broad scope. The structural burden is that every AI-generated contribution requires review discipline, static analysis enforcement, and security auditing as an ongoing process. This is a persistent productivity tax: teams must maintain review practices, CI gates, and developer awareness continuously. It doesn't quite define the system's shape (b7-b9) since it's a tooling/process concern rather than an architectural choice, but it does affect many work streams.

t7 Trap Cognitive debt — how counter-intuitive correct behaviour is

Closest to 'serious trap — contradicts how a similar concept works elsewhere' (t7). The misconception field is explicit: 'AI-generated code is production-ready because it looks correct.' This is a serious cognitive trap because the code appears syntactically and structurally correct — it mimics patterns developers trust from IDE autocomplete and documentation examples. Developers' prior experience with code-completion tools (which suggest correct completions from a known API) leads them to extend that trust to LLM-generated code, which optimizes for plausibility over correctness. The 'obvious' approach of accepting clean-looking AI output is reliably wrong for edge cases, security, and error handling.

About DEBT scoring →

Also Known As

Copilot · AI coding · code completion · LLM code generation

TL;DR

Using LLMs to generate, complete, or refactor code — powerful for boilerplate and exploration but requiring review for correctness, security, and licence compliance.

Explanation

AI code-generation tools (GitHub Copilot, Claude, Cursor) accelerate development by generating boilerplate, suggesting completions, explaining unfamiliar code, and drafting tests. Limitations: generated code can be subtly wrong (especially around edge cases), may introduce security vulnerabilities (SQL injection, hardcoded secrets), may call deprecated APIs, and may reproduce GPL-licensed code without attribution. Best practice: treat AI output as a first draft that needs review, run static analysis and security scanning on generated code, and never commit code you don't understand.
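
As a sketch of what "first draft that needs review" looks like in practice — the endpoint, key name, and function names below are hypothetical — a typical review pass moves the hardcoded secret into configuration and surfaces the failure modes the draft ignored:

// AI first draft: hardcoded secret, no error handling on the HTTP call.
function fetchReportDraft(): array {
    $json = file_get_contents('https://api.example.com/report?key=sk_live_abc123');
    return json_decode($json, true);
}

// After review: the secret comes from the environment and failures are explicit.
function fetchReport(): array {
    $key = getenv('REPORT_API_KEY');
    if ($key === false || $key === '') {
        throw new RuntimeException('REPORT_API_KEY is not configured');
    }
    $json = file_get_contents('https://api.example.com/report?key=' . urlencode($key));
    if ($json === false) {
        throw new RuntimeException('Report API request failed');
    }
    return json_decode($json, true, 512, JSON_THROW_ON_ERROR);
}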

Common Misconception

AI-generated code is production-ready because it looks correct — LLMs generate plausible-looking code optimised for the happy path; edge cases, error handling, and security are commonly missing or wrong.
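
A hypothetical illustration of that trap — the price-parsing scenario is invented, not from this entry. The draft reads cleanly and works for the happy-path input 'USD 19.99', but breaks as soon as the format deviates:

// Plausible AI output: fine for "USD 19.99", fails on "19.99" or "".
function parsePriceDraft(string $input): float {
    [$currency, $amount] = explode(' ', $input);
    return (float) $amount;
}

// What review adds: the edge cases the happy-path version silently mishandles.
function parsePrice(string $input): float {
    $parts = explode(' ', trim($input));
    if (count($parts) !== 2 || !is_numeric($parts[1])) {
        throw new InvalidArgumentException("Unrecognised price format: {$input}");
    }
    return (float) $parts[1];
}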

Why It Matters

Developers who trust AI-generated code without review introduce vulnerabilities at scale — AI tools can generate SQL injection vulnerabilities, insecure random number usage, and incorrect business logic that passes casual inspection.
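
One hedged example of a flaw that passes casual inspection (the function names are invented for illustration): a token generator built on rand() and md5() looks reasonable at a glance but is predictable, whereas the reviewed version uses a cryptographically secure source.

// AI draft: looks fine at a glance, but rand() is predictable and md5()
// adds no entropy — unsuitable for password-reset or session tokens.
function makeResetTokenDraft(): string {
    return md5((string) rand());
}

// After review: cryptographically secure randomness.
function makeResetToken(): string {
    return bin2hex(random_bytes(32));
}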

Common Mistakes

  • Committing AI-generated code without reading it — generated code must be understood before committing.
  • Using AI for security-critical code without expert review — auth, crypto, and input handling require extra scrutiny.
  • Not running static analysis on generated code — PHPStan catches many AI generation mistakes.
  • Assuming generated tests are meaningful — AI often generates tests that pass without asserting behaviour (see the sketch after this list).
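
A hypothetical PHPUnit sketch of that last pitfall — the Invoice class and test names are invented for illustration. The generated test executes the code but pins nothing down, so it keeps passing even if the total is computed wrongly; the reviewed test asserts the expected value.

use PHPUnit\Framework\TestCase;

final class Invoice {
    /** @param float[] $lines */
    public function __construct(private array $lines) {}

    public function total(): float {
        return array_sum($this->lines);
    }
}

final class InvoiceTest extends TestCase {
    // AI-generated: passes regardless of what total() actually returns.
    public function testTotalDraft(): void {
        $invoice = new Invoice([10.0, 20.0]);
        $this->assertNotNull($invoice->total());
    }

    // Reviewed: pins the expected behaviour.
    public function testTotal(): void {
        $invoice = new Invoice([10.0, 20.0]);
        $this->assertSame(30.0, $invoice->total());
    }
}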

Avoid When

  • Do not merge AI-generated code that touches authentication, cryptography, or payment flows without expert security review.
  • Avoid using AI generation as a substitute for understanding the code — generated code you cannot explain is a liability.
  • Do not generate code against proprietary internal APIs or data schemas that could expose secrets via the prompt.

When To Use

  • Generating boilerplate, scaffolding, and repetitive CRUD code where the pattern is well-understood and review is fast.
  • Exploring unfamiliar APIs or languages — AI output is a starting point for learning, not production-ready code.
  • Writing test cases and documentation where correctness is easy to verify and the cost of a mistake is low.

Code Examples

💡 Note
The bad example shows an AI-generated search function with a SQL injection via raw string interpolation; the fixed version adds the prepared statement that the prompt never asked for.
✗ Vulnerable
// AI-generated code with subtle SQL injection:
// Prompt: write a PHP function to search users by name
function searchUsers(string $name): array {
    global $pdo;
    // AI forgot to use prepared statements:
    return $pdo->query("SELECT * FROM users WHERE name LIKE '%$name%'")->fetchAll();
    // Attacker input: %' UNION SELECT * FROM passwords --
}
✓ Fixed
// Reviewed and corrected:
function searchUsers(PDO $pdo, string $name): array {
    // After review: pass PDO in explicitly and use a prepared statement:
    $stmt = $pdo->prepare(
        'SELECT id, name, email FROM users WHERE name LIKE ?'
    );
    $stmt->execute(['%' . $name . '%']);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
    // Also: limit columns, not SELECT *
}
// Run PHPStan after generation to catch type errors
// Run SAST scanner to catch security issues

Added 16 Mar 2026
Edited 31 Mar 2026
DEV INTEL Tools & Severity
🟠 Severity: High · ⚙ Fix effort: Medium
⚡ Quick Fix
Always run PHPStan and your test suite on AI-generated code before committing — AI generates plausible-looking code that may have subtle security flaws or incorrect logic
📦 Applies To
any · web · cli
🔗 Prerequisites
🔍 Detection Hints
AI-generated code committed without PHPStan or test coverage; security-sensitive code (auth, crypto) generated by AI and not audited
Auto-detectable: ✓ Yes — phpstan, psalm, semgrep, phpunit
⚠ Related Problems
🤖 AI Agent
Confidence: Medium · False Positives: Medium · ✗ Manual fix · Fix: Medium · Context: File · Tests: Update
