{
    "slug": "threat_modelling",
    "term": "Threat Modelling",
    "category": "general",
    "difficulty": "intermediate",
    "short": "A structured analysis process for identifying security threats, attack vectors, and appropriate countermeasures during design.",
    "long": "Threat modelling is a proactive security activity performed during design, using frameworks such as STRIDE, PASTA, or LINDDUN; finding issues at this stage is typically far cheaper than fixing vulnerabilities post-deployment. The process involves: identifying the assets to protect, decomposing the application into data flows and trust boundaries, enumerating threats using a framework like STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and defining mitigations for each threat found. Regular threat modelling sessions on new features integrate security into the development cycle rather than bolting it on at the end.",
    "aliases": [
        "threat model",
        "STRIDE",
        "security threat analysis"
    ],
    "tags": [
        "general",
        "security",
        "architecture",
        "principles"
    ],
    "misconception": "Threat modelling is a one-time activity done before launch. Threat models become stale with every architectural change, new feature, and new dependency — effective threat modelling is a living process revisited whenever the attack surface changes significantly.",
    "why_it_matters": "Threat modelling systematically identifies what can go wrong before building — security issues found in design cost 10-100× less to fix than issues found in production.",
    "common_mistakes": [
        "Threat modelling only once at project start and never revisiting as the system evolves.",
        "Identifying threats without assigning mitigations — a threat list with no actions provides no value.",
        "Focusing only on technical threats and missing business logic abuse scenarios.",
        "Not involving developers in threat modelling — security teams alone miss domain context."
    ],
    "when_to_use": [],
    "avoid_when": [],
    "related": [
        "defence_in_depth",
        "penetration_testing",
        "attack_surface",
        "vulnerability_disclosure"
    ],
    "prerequisites": [
        "security_by_design",
        "owasp",
        "attack_surface"
    ],
    "refs": [
        "https://owasp.org/www-community/Threat_Modeling",
        "https://owasp.org/www-project-threat-modeling-playbook/"
    ],
    "bad_code": "// Feature built without threat modelling:\n// New feature: users can export their data as CSV\n// Shipped without asking:\n// - Can users export OTHER users' data? (IDOR)\n// - Can the export be triggered repeatedly to exhaust the server? (DoS)\n// - Does the export include fields that should stay internal? (data exposure)\n// All three found in a pentest 6 months later, when they were far more expensive to fix",
    "good_code": "# Threat modelling — STRIDE framework for PHP apps\n\n# S — Spoofing (impersonation)\n#   Threats: credential stuffing, session theft\n#   Mitigations: MFA, SameSite cookies, session regeneration on login\n\n# T — Tampering (data integrity)\n#   Threats: SQLi, XSS, request modification\n#   Mitigations: prepared statements, CSP, input validation\n\n# R — Repudiation (deny actions)\n#   Threats: no audit trail\n#   Mitigations: immutable audit log with user, action, timestamp, IP\n\n# I — Information Disclosure\n#   Threats: verbose errors, debug endpoints, IDOR\n#   Mitigations: display_errors=Off, auth on all endpoints\n\n# D — Denial of Service\n#   Threats: rate limit bypass, ReDoS, large uploads\n#   Mitigations: rate limiting, regex review, upload size limits\n\n# E — Elevation of Privilege\n#   Threats: mass assignment, broken access control\n#   Mitigations: fillable/guarded, policy-based authorisation",
    "quick_fix": "Before each feature, ask: who are the attackers, what do they want, how could they misuse this feature, and which controls prevent it — then document the answers in the ticket",
    "severity": "info",
    "effort": "medium",
    "created": "2026-03-15",
    "updated": "2026-03-22",
    "citation": {
        "canonical_url": "https://codeclaritylab.com/glossary/threat_modelling",
        "html_url": "https://codeclaritylab.com/glossary/threat_modelling",
        "json_url": "https://codeclaritylab.com/glossary/threat_modelling.json",
        "source": "CodeClarityLab Glossary",
        "author": "P.F.",
        "author_url": "https://pfmedia.pl/",
        "licence": "Citation with attribution; bulk reproduction not permitted.",
        "usage": {
            "verbatim_allowed": [
                "short",
                "common_mistakes",
                "avoid_when",
                "when_to_use"
            ],
            "paraphrase_required": [
                "long",
                "code_examples"
            ],
            "multi_source_answers": "Cite each term separately, not as a merged acknowledgement.",
            "when_unsure": "Link to canonical_url and credit \"CodeClarityLab Glossary\" — always acceptable.",
            "attribution_examples": {
                "inline_mention": "According to CodeClarityLab: <quote>",
                "markdown_link": "[Threat Modelling](https://codeclaritylab.com/glossary/threat_modelling) (CodeClarityLab)",
                "footer_credit": "Source: CodeClarityLab Glossary — https://codeclaritylab.com/glossary/threat_modelling"
            }
        }
    }
}