AI-Assisted Code Generation
debt(d5/e5/b5/t7)
Closest to 'specialist tool catches it' (d5). The term's detection_hints list PHPStan, Psalm, Semgrep, and PHPUnit as tools that can catch many AI-generation mistakes — type errors, missing error handling, some security patterns. However, these are specialist tools that must be configured and run; subtle logic errors and missing edge cases won't be caught even by these, pushing some issues toward d7. Settling on d5 because the listed tools do catch a meaningful proportion of the common defects in AI-generated code.
Closest to 'touches multiple files / significant refactor in one component' (e5). The quick_fix suggests running PHPStan and tests before committing, which sounds like e1-e3 for prevention. But the common_mistakes indicate that once AI-generated code is committed without review — especially security-critical code — the remediation involves auditing and rewriting across multiple files. AI code tends to be scattered throughout the codebase (CRUD, auth, tests), so fixing already-committed AI-generated code is a multi-file effort, solidly e5.
Closest to 'persistent productivity tax' (b5). The applies_to covers web and cli contexts — broad scope. The structural burden is that every AI-generated contribution requires review discipline, static analysis enforcement, and security auditing as an ongoing process. This is a persistent productivity tax: teams must maintain review practices, CI gates, and developer awareness continuously. It doesn't quite define the system's shape (b7-b9) since it's a tooling/process concern rather than an architectural choice, but it does affect many work streams.
Closest to 'serious trap — contradicts how a similar concept works elsewhere' (t7). The misconception field is explicit: 'AI-generated code is production-ready because it looks correct.' This is a serious cognitive trap because the code appears syntactically and structurally correct — it mimics patterns developers trust from IDE autocomplete and documentation examples. Developers' prior experience with code-completion tools (which suggest correct completions from a known API) leads them to extend that trust to LLM-generated code, which optimizes for plausibility over correctness. The 'obvious' approach of accepting clean-looking AI output is reliably wrong for edge cases, security, and error handling.
Also Known As
TL;DR
AI code generators are fast first-draft tools, not authors of production-ready code. Review every generated line, run static analysis and security scanning on it, and never commit code you cannot explain.
Explanation
AI code generation tools (GitHub Copilot, Claude, Cursor) accelerate development by generating boilerplate, suggesting completions, explaining unfamiliar code, and drafting tests. Limitations: generated code can be subtly wrong (especially edge cases), may introduce security vulnerabilities (SQL injection, hardcoded secrets), may use deprecated APIs, and may include GPL-licensed code without attribution. Best practice: treat AI output as a first draft needing review, run static analysis and security scanning on generated code, and never commit without understanding what it does.
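The hardcoded-secrets failure mode mentioned above is worth making concrete. A minimal sketch of the reviewed alternative (the helper name and environment variable are illustrative, not from any library):

```php
<?php
declare(strict_types=1);

// Pattern AI tools often emit: a credential baked into source.
//   $apiKey = 'sk_live_abc123';  // hard-coded secret: flagged by SAST, leaked via VCS

// Reviewed replacement: read the secret from the environment and fail loudly.
function requireEnv(string $name): string
{
    $value = getenv($name);
    if ($value === false || $value === '') {
        throw new RuntimeException("Environment variable $name is not set");
    }
    return $value;
}

// Usage (assumes the deployment environment sets the variable):
// $apiKey = requireEnv('PAYMENT_API_KEY');
```

Failing loudly at startup is deliberate: a missing secret should stop the process, not silently produce an empty credential.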
Common Misconception
'AI-generated code is production-ready because it looks correct.' Generated code mimics the syntactic and structural patterns developers trust from autocomplete and documentation examples, but LLMs optimize for plausibility over correctness: clean-looking output routinely mishandles edge cases, error paths, and security.
Why It Matters
Unreviewed AI output ships subtle logic errors and security vulnerabilities that look correct at a glance, and remediating it after commit typically means auditing and rewriting code scattered across many files. Teams that adopt generation without review discipline, CI gates, and security auditing pay a persistent productivity tax.
Common Mistakes
- Committing AI-generated code without reading it — generated code must be understood before committing.
- Using AI for security-critical code without expert review — auth, crypto, and input handling require extra scrutiny.
- Not running static analysis on generated code — PHPStan catches many AI generation mistakes.
- Assuming generated tests are meaningful — AI often generates tests that pass without asserting behaviour.
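The vacuous-test mistake in the last bullet is easiest to see side by side. A sketch using plain `assert` calls (`slugify` is a hypothetical helper; in a real suite these would be PHPUnit assertions):

```php
<?php
declare(strict_types=1);

// Hypothetical helper an AI tool might generate alongside its tests.
function slugify(string $title): string
{
    $slug = strtolower(trim($title));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
    return trim($slug, '-');
}

// Vacuous AI-generated test: passes for almost any implementation,
// because it only checks the return type, never the behaviour.
assert(is_string(slugify('Hello World')));

// Meaningful tests pin down concrete outputs and edge cases:
assert(slugify('Hello World') === 'hello-world');
assert(slugify('  --Already--Sluggy--  ') === 'already-sluggy');
assert(slugify('') === '');
```

The first assertion would still pass if `slugify` returned the input unchanged; the edge-case assertions are what actually constrain the implementation.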
Avoid When
- Do not merge AI-generated code that touches authentication, cryptography, or payment flows without expert security review.
- Avoid using AI generation as a substitute for understanding the code — generated code you cannot explain is a liability.
- Do not generate code against proprietary internal APIs or data schemas that could expose secrets via the prompt.
When To Use
- Generating boilerplate, scaffolding, and repetitive CRUD code where the pattern is well-understood and review is fast.
- Exploring unfamiliar APIs or languages — AI output is a starting point for learning, not production-ready code.
- Writing test cases and documentation where correctness is easy to verify and the cost of a mistake is low.
Code Examples
// AI-generated code with a subtle SQL injection.
// Prompt: "write a PHP function to search users by name"
function searchUsers(string $name): array {
    global $pdo;
    // AI interpolated user input directly instead of using a prepared statement:
    return $pdo->query("SELECT * FROM users WHERE name LIKE '%$name%'")->fetchAll();
    // Attacker input: %' UNION SELECT * FROM passwords --
}

// Reviewed and corrected (replaces the version above):
function searchUsers(PDO $pdo, string $name): array {
    // After review: prepared statement with a bound parameter:
    $stmt = $pdo->prepare(
        'SELECT id, name, email FROM users WHERE name LIKE ?'
    );
    $stmt->execute(['%' . $name . '%']);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
    // Also fixed: explicit column list instead of SELECT *,
    // and the PDO instance is injected rather than pulled from a global.
}
// Run PHPStan after generation to catch type errors
// Run SAST scanner to catch security issues
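The two review steps in the comments above can be wired into a pre-commit hook or CI gate. A sketch assuming Composer-installed tools and a src/ layout (the paths, PHPStan level, and Semgrep ruleset are assumptions, not requirements):

```shell
# Static analysis: catches type errors and many API misuses in generated code
vendor/bin/phpstan analyse src tests --level=8

# Psalm: a second static analyser with an optional taint-analysis mode
vendor/bin/psalm

# SAST: Semgrep's community PHP ruleset flags injection patterns and hardcoded secrets
semgrep scan --config p/php src/

# Finally, run the test suite before committing
vendor/bin/phpunit
```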