
Prompt Engineering

ai_ml Intermediate
DEBT (d9/e3/b5/t5)
d9 Detectability Operational debt — how invisible misuse is to your safety net

Closest to 'silent in production until users hit it' (d9). detection_hints.automated is 'no': bad prompts produce plausible-looking outputs that only fail on the edge cases users eventually hit in production.

e3 Effort Remediation debt — work required to fix once spotted

Closest to 'simple parameterised fix' (e3). quick_fix is rewriting the system prompt to be specific with role/format/examples — a contained edit to prompt strings, not a code refactor.

b5 Burden Structural debt — long-term weight of choosing wrong

Closest to 'persistent productivity tax' (b5). applies_to web and cli — prompts often span many features; poor prompt patterns get copy-pasted and every AI-touching workstream pays the iteration cost.

t5 Trap Cognitive debt — how counter-intuitive correct behaviour is

Closest to 'notable trap most devs eventually learn' (t5). The misconception that 'more detail is always better' is a documented gotcha — verbose prompts can degrade output, contradicting newcomer intuition but learnable with iteration.

About DEBT scoring →

Also Known As

prompting, few-shot prompting, chain-of-thought

TL;DR

The practice of designing and iterating on LLM input prompts to reliably produce accurate, useful, and appropriately formatted outputs.

Explanation

Prompt engineering encompasses techniques for eliciting better responses from LLMs: chain-of-thought (ask the model to reason step by step), few-shot prompting (include examples), system prompts (persistent instructions), role-based framing, and output format specification. Key insight: LLMs are highly sensitive to phrasing — small changes in wording produce different outputs. Structured output (JSON mode, XML tags) makes responses programmatically parseable. For application development, system prompts define consistent behaviour across user sessions.
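As a concrete illustration of few-shot prompting with output-format specification, the sketch below builds a sentiment-classification prompt. The helper name `buildFewShotPrompt` and the example data are hypothetical, not from any library; the point is that the worked examples anchor the model to the `Label:` pattern instead of free prose.

```php
// Hypothetical helper: assembles a few-shot prompt where each example
// demonstrates the exact output format the model should imitate.
function buildFewShotPrompt(array $examples, string $input): string
{
    $lines = ['Classify the sentiment of each review as positive or negative.', ''];
    foreach ($examples as [$review, $label]) {
        $lines[] = "Review: {$review}";
        $lines[] = "Label: {$label}";
        $lines[] = '';
    }
    // End mid-pattern so the model's natural continuation is the label.
    $lines[] = "Review: {$input}";
    $lines[] = 'Label:';
    return implode("\n", $lines);
}

$prompt = buildFewShotPrompt(
    [['Great service, fast delivery', 'positive'],
     ['Arrived broken, no refund', 'negative']],
    'Exactly what I ordered'
);
```

Ending the prompt with a dangling `Label:` is the classic few-shot trick: the cheapest continuation for the model is a single label in the demonstrated format.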

Common Misconception

That more detail in a prompt is always better. In practice, excessively long prompts can confuse the model or bury the key instruction; clear, concise prompts often outperform verbose ones.

Why It Matters

The difference between a reliable AI feature and a brittle one is often the quality of the prompt — well-engineered prompts produce consistent, parseable, useful output.

Common Mistakes

  • Not specifying output format — LLM returns free text when you need JSON; specify 'respond only with valid JSON'.
  • Not using system prompts for persistent instructions — including role and format instructions in every user message.
  • Asking the model to do many things in one prompt — break complex tasks into sequential, focused calls.
  • Not iterating on prompts against representative test cases — a prompt that works on one example may fail on edge cases.
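The third mistake above — overloading one prompt — can be sketched as two focused calls instead. The `$complete` closure below is a stand-in for a real LLM client (any string-in/string-out API works the same way); it returns canned responses so the sketch runs without an API key.

```php
// Stand-in for a real LLM client (hypothetical): canned responses
// keyed off the system prompt, so the control flow is runnable as-is.
$complete = function (string $system, string $user): string {
    return str_starts_with($system, 'Extract')
        ? '{"sku": "A-100", "qty": 2}'
        : '{"risk": "low"}';
};

// Call 1: extraction only — one job, one output schema.
$extracted = json_decode(
    $complete(
        'Extract order fields. Respond with JSON only: {"sku": string, "qty": number}',
        'Two units of A-100'
    ),
    true, 512, JSON_THROW_ON_ERROR
);

// Call 2: risk assessment, fed the already-validated output of call 1.
$risk = json_decode(
    $complete('Assess order risk. Respond with JSON only: {"risk": string}', json_encode($extracted)),
    true, 512, JSON_THROW_ON_ERROR
);
```

Because each call has a single schema, a failure in either step is easy to localise and retry, which is much harder when one prompt does extraction and assessment at once.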

Code Examples

✗ Vulnerable
// Ambiguous prompt — unpredictable output:
$prompt = 'Summarise this order data: ' . json_encode($orderData);
$response = $llm->complete($prompt);
// Response might be prose, JSON, bullet points, or in French
✓ Fixed
// Structured prompt with format specification:
$systemPrompt = 'You are an order analysis assistant. Always respond with valid JSON only. No explanations.';
$userPrompt = "Analyse this order and return: {\"status\": string, \"total\": number, \"items_count\": number, \"risk_flags\": string[]}\n\nOrder: " . json_encode($orderData);
$response = $llm->complete($userPrompt, system: $systemPrompt);
$data = json_decode($response->text(), associative: true, flags: JSON_THROW_ON_ERROR);
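One follow-up worth noting: `JSON_THROW_ON_ERROR` only guarantees the response was valid JSON, not that it has the promised shape. A defensive check like the hypothetical helper below (assuming the response was decoded to an associative array) catches the remaining failure mode.

```php
// Hypothetical shape check: valid JSON can still be missing the keys
// downstream code relies on, so verify them before use.
function assertOrderAnalysisShape(array $data): array
{
    foreach (['status', 'total', 'items_count', 'risk_flags'] as $key) {
        if (!array_key_exists($key, $data)) {
            throw new UnexpectedValueException("LLM response missing key: {$key}");
        }
    }
    return $data;
}

$checked = assertOrderAnalysisShape([
    'status' => 'ok', 'total' => 9.5, 'items_count' => 1, 'risk_flags' => [],
]);
```

Throwing early here turns a silent downstream "undefined array key" warning into an explicit, retryable error at the API boundary.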

Added 15 Mar 2026
Edited 22 Mar 2026
DEV INTEL Tools & Severity
🟡 Severity: Medium ⚙ Fix effort: Medium
⚡ Quick Fix
Be specific and concrete in system prompts: define the role, output format, constraints, and examples — vague prompts produce vague outputs.
📦 Applies To
any, web, cli
🔍 Detection Hints
Generic system prompt ('you are a helpful assistant'); no few-shot examples for structured output; temperature not set explicitly.
Auto-detectable: ✗ No
⚠ Related Problems
CWE-74 (Injection)