Prompt Engineering
debt(d9/e3/b5/t5)
Closest to 'silent in production until users hit it' (d9). detection_hints.automated is no; bad prompts produce plausible-looking outputs that only fail on edge cases users encounter in production.
Closest to 'simple parameterised fix' (e3). quick_fix is rewriting the system prompt to be specific with role/format/examples — a contained edit to prompt strings, not a code refactor.
Closest to 'persistent productivity tax' (b5). applies_to web and cli — prompts often span many features; poor prompt patterns get copy-pasted and every AI-touching workstream pays the iteration cost.
Closest to 'notable trap most devs eventually learn' (t5). The misconception that 'more detail is always better' is a documented gotcha — verbose prompts can degrade output, contradicting newcomer intuition but learnable with iteration.
Also Known As
TL;DR
Explanation
Prompt engineering encompasses techniques for eliciting better responses from LLMs: chain-of-thought (ask the model to reason step by step), few-shot prompting (include examples), system prompts (persistent instructions), role-based framing, and output format specification. Key insight: LLMs are highly sensitive to phrasing — small changes in wording produce different outputs. Structured output (JSON mode, XML tags) makes responses programmatically parseable. For application development, system prompts define consistent behaviour across user sessions.
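The few-shot technique above can be sketched as a small prompt builder: example input/output pairs are embedded before the real input so the model infers the task and its format. `buildFewShotPrompt()` and the example pairs are illustrative, not part of any specific client library.

```php
<?php
// Hypothetical few-shot prompt builder: each example pair becomes an
// "Input: ... / Output: ..." block; the final block ends at "Output:"
// so the model's completion follows the demonstrated pattern.
function buildFewShotPrompt(array $examples, string $input): string
{
    $parts = [];
    foreach ($examples as [$in, $out]) {
        $parts[] = "Input: {$in}\nOutput: {$out}";
    }
    $parts[] = "Input: {$input}\nOutput:";
    return implode("\n\n", $parts);
}

$examples = [
    ['Order delayed 3 weeks, no refund offered', 'negative'],
    ['Arrived early, great packaging', 'positive'],
];
$prompt = buildFewShotPrompt($examples, 'Item was fine but support never replied');
// $prompt ends with "Output:", inviting a one-word label like the examples.
```

Because the examples fix both the task and the answer format, the model is far less likely to reply in free prose than with a zero-shot instruction.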
Common Misconception
'More detail is always better.' In practice, verbose, over-specified prompts can degrade output quality; specific, focused instructions outperform exhaustive ones.
Why It Matters
Common Mistakes
- Not specifying output format — LLM returns free text when you need JSON; specify 'respond only with valid JSON'.
- Not using system prompts for persistent instructions — including role and format instructions in every user message.
- Asking the model to do many things in one prompt — break complex tasks into sequential, focused calls.
- Not iterating on prompts against representative test cases — a prompt that works on one example may fail on edge cases.
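The last mistake above suggests a lightweight regression check: run each candidate prompt over representative inputs and assert the output contract before shipping. A minimal sketch, assuming the JSON contract from the code examples below; `validateOrderAnalysis()` is a hypothetical helper, and in a real harness each raw string would come from the LLM client rather than a literal.

```php
<?php
// Illustrative output-contract validator: true only if the raw response is
// valid JSON containing every required field with a sensible shape.
function validateOrderAnalysis(string $raw): bool
{
    $data = json_decode($raw, associative: true);
    if (!is_array($data)) {
        return false; // not JSON at all, or a bare scalar
    }
    foreach (['status', 'total', 'items_count', 'risk_flags'] as $key) {
        if (!array_key_exists($key, $data)) {
            return false; // contract field missing
        }
    }
    return is_array($data['risk_flags']);
}

// Representative cases: a well-formed response and a chatty free-text one.
$ok  = validateOrderAnalysis('{"status":"ok","total":9.5,"items_count":2,"risk_flags":[]}');
$bad = validateOrderAnalysis('Sure! The order looks fine.');
```

Running such checks over a fixed input set turns prompt tweaks from guesswork into something you can regress against.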
Code Examples
// Ambiguous prompt — unpredictable output:
$prompt = 'Summarise this order data: ' . json_encode($orderData);
$response = $llm->complete($prompt);
// Response might be prose, JSON, bullet points, or in French
// Structured prompt with format specification:
$systemPrompt = 'You are an order analysis assistant. Always respond with valid JSON only. No explanations.';
$userPrompt = 'Analyse this order and return: {"status": string, "total": number, "items_count": number, "risk_flags": string[]}' . "\n\nOrder: " . json_encode($orderData);
$response = $llm->complete($userPrompt, system: $systemPrompt);
$data = json_decode($response->text(), associative: true, flags: JSON_THROW_ON_ERROR); // array access; throws on malformed JSON
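Even with a 'JSON only' system prompt, some models wrap the payload in markdown code fences, which makes json_decode throw. A defensive parsing sketch; `extractJson()` is a hypothetical helper, not a standard function.

```php
<?php
// Strip an optional leading ```json (or bare ```) fence and a trailing
// ``` fence before decoding; plain JSON passes through unchanged.
function extractJson(string $text): string
{
    $text = trim($text);
    $text = preg_replace('/^```(?:json)?\s*/i', '', $text);
    $text = preg_replace('/\s*```$/', '', $text);
    return trim($text);
}

// A fenced response, as some models produce despite instructions:
$raw = "```json\n{\"status\": \"ok\"}\n```";
$data = json_decode(extractJson($raw), associative: true, flags: JSON_THROW_ON_ERROR);
```

Pairing this with JSON_THROW_ON_ERROR keeps the happy path strict while tolerating the most common formatting drift.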