DOM-Based XSS
debt(d5/e3/b3/t7)
Closest to 'specialist tool catches it' (d5). The term's detection_hints list semgrep and eslint with code_pattern matches for innerHTML = location.hash/search. These tools can catch common patterns, but DOM XSS flows through runtime behaviour and indirect data flows (e.g., chained variable assignments) that static tools miss — making it closer to d5 than d3 (default linter) or d7 (code review only).
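The gap between pattern-matchable and indirect flows can be sketched as follows; renderBanner is a hypothetical function, and the plain-object element standing in for a real DOM node is an assumption for illustration:

```javascript
// A code_pattern rule can match the direct one-line form:
//   el.innerHTML = location.hash;
// But the same taint can travel through intermediate assignments before
// reaching the sink, which line-local pattern rules typically fail to follow:
function renderBanner(el, loc) {
  const fragment = loc.hash.slice(1);           // source: URL fragment
  const message = decodeURIComponent(fragment); // intermediate hop
  el.innerHTML = message;                       // sink, reached indirectly
}
```

Dataflow-aware analysis can still connect this source to its sink, but a simple one-line pattern match cannot, which is why detection sits at d5 rather than d3.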
Closest to 'simple parameterised fix' (e3). The quick_fix is a targeted swap: replace innerHTML/document.write/eval(string) with textContent or createElement, or wrap with DOMPurify. This is a localised pattern replacement within a component rather than a single one-line patch, since multiple sink usages may need updating, but it doesn't span cross-cutting concerns.
Closest to 'localised tax' (b3). DOM XSS applies to web contexts only, and the burden is scoped to the specific components that handle user-controlled data in client-side JavaScript. It doesn't shape the entire architecture, but each developer working on those UI components must be consistently vigilant about source-to-sink flows.
Closest to 'serious trap' (t7). The canonical misconception is explicitly stated: developers who rely on server-side output encoding believe they are protected, but DOM XSS bypasses the server entirely — the payload flows from browser sources to sinks in client-side JavaScript without ever touching the server. This contradicts the mental model most developers have from learning about reflected/stored XSS, making it a serious cross-paradigm trap.
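The trap can be made concrete with a minimal sketch using the WHATWG URL API; the harmless '#payload' fragment stands in for an attack string:

```javascript
const url = new URL('https://example.com/page#payload');

// The fragment is not part of the HTTP request, so server-side output
// encoding can never operate on it:
const sentToServer = url.pathname + url.search;  // '/page'

// Only client-side JavaScript can read it, e.g. via location.hash:
const clientVisible = url.hash;                  // '#payload'
```

In a real attack the fragment would carry markup such as an img tag with an onerror handler, flowing from location.hash into a sink like innerHTML entirely inside the browser.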
Also Known As
Type-0 XSS; client-side XSS.
TL;DR
Client-side JavaScript reads attacker-controlled data (e.g., location.hash) and writes it to a dangerous sink (innerHTML, document.write(), eval()), so the payload executes without ever touching the server. Server-side encoding cannot stop it; use safe DOM APIs, sanitise with DOMPurify, and set a strict Content-Security-Policy.
Explanation
DOM-based XSS differs from reflected and stored XSS in that the payload never reaches the server — the vulnerability exists entirely in client-side JavaScript that reads attacker-controlled data (e.g., location.hash, document.referrer) and writes it to a dangerous sink such as innerHTML, document.write(), or eval(). Because the server never sees the attack string, server-side output encoding cannot prevent it. Mitigations include using safe DOM APIs like textContent, avoiding eval-like sinks, and implementing a strict Content-Security-Policy.
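The Content-Security-Policy mitigation mentioned above can be sketched as a header value; the directive set here is an illustrative assumption, not a universal recommendation:

```javascript
// A strict policy with no 'unsafe-inline' and no 'unsafe-eval' limits what
// an injected payload can do even when a DOM XSS bug slips through:
const csp = [
  "default-src 'self'",
  "script-src 'self'",   // blocks inline <script> and eval-like execution
  "object-src 'none'",
  "base-uri 'none'",
].join('; ');
// Sent as a response header, e.g.
// res.setHeader('Content-Security-Policy', csp) in an Express handler
// (Express is an assumed framework here).
```

CSP is defence in depth: it does not remove the source-to-sink flaw, but it constrains what executes if the flaw is reached.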
How It's Exploited
The attacker crafts a URL whose fragment or parameters carry a payload and lures the victim into opening it. Vulnerable client-side code copies the value from a source such as location.hash or document.referrer into a sink such as innerHTML or eval(), and the payload executes in the victim's browser session.
Common Misconception
That server-side output encoding protects against all XSS. It does not: a DOM XSS payload flows from browser source to sink entirely in client-side JavaScript, so the server may never see the attack string at all.
Why It Matters
DOM XSS bypasses server-side defences entirely: fragment-based payloads are not even transmitted in the HTTP request, so WAFs, server logs, and output encoding never see the attack. The impact is the same as any XSS: attacker-controlled JavaScript running in the victim's session.
Common Mistakes
- Using location.hash, document.referrer, or URL parameters as innerHTML or document.write() content without sanitisation.
- Passing user-controlled data to eval(), setTimeout(), or setInterval() as a string argument.
- Trusting that server-side encoding prevents DOM XSS — output encoding on the server and sinks in client-side JavaScript operate at different layers.
- Not using DOMPurify or equivalent for any user-supplied HTML inserted into the DOM.
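The string-argument mistake from the second bullet can be illustrated with a hedged sketch; greet and scheduleGreeting are hypothetical names:

```javascript
// Mistake: a string argument to setTimeout is evaluated like eval(), so
// user-controlled text becomes executable code:
//   setTimeout("greet('" + userInput + "')", 100);

// Safe: pass a function reference; the user's text stays data, never code.
function greet(name) {
  return 'Hello, ' + name;       // name is only concatenated, never executed
}
function scheduleGreeting(name, timer) {
  timer(() => greet(name), 100); // pass setTimeout as timer in real use
}
```

With the safe version, injected quote-and-parenthesis sequences remain inert string content instead of breaking out into code.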
Code Examples
// DOM XSS via URL hash:
document.getElementById('output').innerHTML = location.hash.substring(1);
// Attacker: https://example.com/page#<img src=x onerror=alert(1)>
// Safe DOM manipulation — no innerHTML with user data.
// getUserInput() is a placeholder for any user-controlled string (form field, URL param):
const name = getUserInput();
// Safe: textContent — never executes HTML:
document.getElementById('greeting').textContent = 'Hello, ' + name;
// Safe: createElement — escapes automatically:
const p = document.createElement('p');
p.textContent = name; // Escaped — no execution
document.body.appendChild(p);
// If HTML is needed, sanitise first:
import DOMPurify from 'dompurify';
el.innerHTML = DOMPurify.sanitize(userHtml); // Strips scripts