Threat Modelling
Also Known As
threat model
STRIDE
security threat analysis
TL;DR
A structured analysis process for identifying security threats, attack vectors, and appropriate countermeasures during design.
Explanation
Threat modelling, using frameworks such as STRIDE, PASTA, or LINDDUN, is a proactive security activity performed during design, where issues are far cheaper to fix than after deployment. The process involves: identifying the assets to protect; decomposing the application into data flows and trust boundaries; enumerating threats with a framework such as STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege); and defining mitigations for each threat. Regular threat modelling sessions on new features integrate security into the development cycle rather than bolting it on at the end.
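The enumerate-and-mitigate step can be captured as structured data that lives alongside the code and is checked in review. A minimal sketch, assuming a simple in-repo record (the field names and the CSV-export feature are illustrative, not a standard format):

```php
<?php
// Minimal threat-model record for one feature (illustrative structure):
// each STRIDE threat maps to a mitigation and a status.
$threatModel = [
    'feature' => 'CSV export',
    'threats' => [
        ['stride' => 'S', 'threat' => 'Stolen session triggers export as victim',
         'mitigation' => 'Re-authenticate before export', 'status' => 'open'],
        ['stride' => 'I', 'threat' => 'IDOR: export other users\' data',
         'mitigation' => 'Scope query to authenticated user ID', 'status' => 'done'],
        ['stride' => 'D', 'threat' => 'Huge export exhausts memory',
         'mitigation' => 'Stream rows, cap row count', 'status' => 'done'],
    ],
];

// Flag the feature if any threat lacks a completed mitigation.
$open = array_filter($threatModel['threats'], fn ($t) => $t['status'] !== 'done');
echo count($open) . " unmitigated threat(s)\n"; // prints "1 unmitigated threat(s)"
```

Keeping the record next to the feature makes "revisit the threat model" a diffable code-review step rather than a separate document.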
Diagram
flowchart TD
SCOPE[Define Scope<br/>what are we protecting?] --> ASSETS[Identify Assets<br/>user data, credentials, APIs]
ASSETS --> THREATS[Identify Threats<br/>STRIDE analysis]
THREATS --> VULN[Find Vulnerabilities<br/>how could each threat succeed?]
VULN --> MITIGATE[Design Mitigations<br/>controls for each threat]
MITIGATE --> VALIDATE[Validate<br/>test mitigations work]
subgraph STRIDE
S[Spoofing]
T[Tampering]
R[Repudiation]
I[Info Disclosure]
D[Denial of Service]
E[Elevation of Privilege]
end
style SCOPE fill:#1f6feb,color:#fff
style VALIDATE fill:#238636,color:#fff
Common Misconception
✗ Threat modelling is a one-time activity done before launch. Threat models become stale with every architectural change, new feature, and new dependency — effective threat modelling is a living process revisited whenever the attack surface changes significantly.
Why It Matters
Threat modelling systematically identifies what can go wrong before building — security issues found in design cost 10-100× less to fix than issues found in production.
Common Mistakes
- Threat modelling only once at project start and never revisiting as the system evolves.
- Identifying threats without assigning mitigations — a threat list with no actions provides no value.
- Focusing only on technical threats and missing business logic abuse scenarios.
- Not involving developers in threat modelling — security teams alone miss domain context.
Code Examples
✗ Vulnerable
// Feature built without threat modelling:
// New feature: users can export their data as CSV
// Shipped without asking: can users export OTHER users' data? (IDOR)
// Can the export be triggered to DoS the server? (resource exhaustion)
// Does the export include fields that should stay internal? (data exposure)
// All found in pentest 6 months later — expensive to fix
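The unasked questions above map directly to code. A sketch of the kind of handler that ships without them, assuming a hypothetical order-export feature ($orders stands in for a database table; all names are illustrative):

```php
<?php
// Hypothetical export handler built without threat modelling.
function exportCsv(array $orders, int $requestedUserId): string
{
    $out = '';
    foreach ($orders as $row) {
        // ✗ IDOR: trusts a client-supplied user ID with no ownership check
        if ($row['user_id'] === $requestedUserId) {
            // ✗ data exposure: dumps every column, including internal ones
            $out .= implode(',', $row) . "\n";
        }
    }
    return $out; // ✗ no size cap: a huge account exhausts memory (DoS)
}
```

Any caller can pass another user's ID and receive that user's rows, internal columns included.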
✓ Fixed
# Threat modelling — STRIDE framework for PHP apps
# S — Spoofing (impersonation)
# Threats: credential stuffing, session theft
# Mitigations: MFA, SameSite cookies, session regeneration on login
# T — Tampering (data integrity)
# Threats: SQLi, XSS, request modification
# Mitigations: prepared statements, CSP, input validation
# R — Repudiation (deny actions)
# Threats: no audit trail
# Mitigations: immutable audit log with user, action, timestamp, IP
# I — Information Disclosure
# Threats: verbose errors, debug endpoints, IDOR
# Mitigations: display_errors=Off, auth on all endpoints
# D — Denial of Service
# Threats: rate limit bypass, ReDoS, large uploads
# Mitigations: rate limiting, regex review, upload size limits
# E — Elevation of Privilege
# Threats: mass assignment, broken access control
# Mitigations: fillable/guarded, policy-based authorisation
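Applied to the earlier CSV-export example, a STRIDE pass turns into concrete controls. A minimal sketch, assuming the same hypothetical order-export feature (names and the row cap are illustrative):

```php
<?php
// The same export after a STRIDE pass (sketch, not a complete implementation).
function exportCsvSafe(array $orders, int $sessionUserId, int $maxRows = 50000): string
{
    $out = "order_ref,total\n";    // I — explicit column whitelist, no internal fields
    $written = 0;
    foreach ($orders as $row) {
        if ($row['user_id'] !== $sessionUserId) {
            continue;              // I — IDOR: ID comes from the session, not the request
        }
        if (++$written > $maxRows) {
            break;                 // D — row cap against resource exhaustion
        }
        $out .= $row['order_ref'] . ',' . $row['total'] . "\n";
    }
    return $out;
}
```

Each control traces back to a named threat, which is the property a pentest six months later cannot retrofit cheaply.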
References
Tags
Added
15 Mar 2026
Edited
22 Mar 2026
Related categories
⚡
DEV INTEL
Tools & Severity
🔵 Info
⚙ Fix effort: Medium
⚡ Quick Fix
Before each feature, ask: who are the attackers, what do they want, how could they misuse this, and what controls prevent it — document in the ticket
📦 Applies To
PHP 5.0+
web
api
cli
🔗 Prerequisites
🔍 Detection Hints
New features merged without security review or documented threat model
Auto-detectable:
✗ No
⚠ Related Problems
🤖 AI Agent
Confidence: Low
False Positives: High
✗ Manual fix
Fix: High
Context: File