Bug Bounty Programme
debt (d7/e5/b5/t5)
Closest to 'only careful code review or runtime testing' (d7). The detection_hints list HackerOne, Bugcrowd, and Intigriti — these are programme management platforms, not automated code scanners. The absence of a security.txt file or responsible disclosure policy can be spotted by a manual audit or a specialist HTTP probe, but there is no default linter that flags 'you have no bug bounty programme.' The gap is invisible in CI/CD and only surfaces when a researcher has nowhere to report, or when a vulnerability is publicly disclosed without a channel — a late operational signal.
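The "specialist HTTP probe" mentioned above can be sketched in a few lines; this is an illustrative check against RFC 9116 semantics (the `security_txt_url` and `has_disclosure_channel` names are assumptions, not any scanner's API):

```python
from urllib.parse import urljoin

def security_txt_url(base: str) -> str:
    # RFC 9116 places the file at /.well-known/security.txt
    return urljoin(base, "/.well-known/security.txt")

def has_disclosure_channel(body: str) -> bool:
    # RFC 9116 requires at least one Contact field and one Expires field;
    # their absence is exactly the gap a manual audit would flag.
    fields = set()
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("#") or ":" not in line:
            continue  # skip comments and malformed lines
        fields.add(line.split(":", 1)[0].strip().lower())
    return "contact" in fields and "expires" in fields
```

Fetch the well-known URL with any HTTP client and pass the body to the check; a 404 at that path is itself the late operational signal described above.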
Closest to 'touches multiple files / significant refactor in one component' (e5). The quick_fix describes defining scope, setting reward tiers, and committing to 24-hour response SLAs. This is not a one-line patch: it requires publishing a policy document (security.txt, programme page), internal triage workflows, legal review of scope, and budget allocation. Common mistakes (scope too narrow, slow triage, launching before known issues are fixed) each require their own remediation steps across teams, making this a multi-component organisational effort rather than a simple code change.
Closest to 'persistent productivity tax' (b5). A bug bounty programme applies to web and API contexts and imposes ongoing operational load: triaging incoming reports, coordinating fixes, managing researcher communications, and maintaining scope documents. It affects security, engineering, and legal teams continuously. It does not define the entire system's shape (b9) nor is it a single localised component (b3) — it is a cross-team, ongoing commitment that slows multiple work streams if managed poorly.
Closest to 'notable trap — a documented gotcha most devs eventually learn' (t5). The misconception field directly states the canonical wrong belief: that a bug bounty replaces internal security testing. This is a well-documented pitfall that organisations routinely fall into — launching a public programme prematurely (before fixing known criticals) and treating it as a substitute for internal reviews. It is a recognised industry gotcha rather than a catastrophic or architecture-level misunderstanding.
Also Known As
Vulnerability Reward Programme (VRP)
TL;DR
Pay external researchers to find and report vulnerabilities within a defined scope, with rewards scaled by severity and safe harbour for good-faith research. It supplements internal security testing; it never replaces it.
Explanation
Bug bounty programmes crowdsource security testing by incentivising researchers to find and report vulnerabilities rather than sell or exploit them. Programmes define scope (which domains/assets are in scope), reward ranges (scaled by CVSS severity), and safe harbour provisions (legal protection for good-faith research). Platforms include HackerOne, Bugcrowd, and Intigriti. Before running a public programme, ensure basic hygiene (patch known issues, have a functioning SDLC) — a programme that can't process reports creates frustration and reputational risk.
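The severity-to-reward mapping described above can be sketched as a simple lookup. The CVSS cut-offs and GBP bands below mirror the illustrative table in this entry's code example; they are not industry-standard figures:

```python
def reward_band(cvss_base_score: float) -> tuple[str, int, int]:
    """Map a CVSS v3 base score to (severity, min reward, max reward) in GBP.

    Bands are illustrative, mirroring this entry's example table.
    """
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if cvss_base_score >= 9.0:
        return ("Critical", 5_000, 20_000)  # e.g. RCE, auth bypass
    if cvss_base_score >= 7.0:
        return ("High", 1_000, 5_000)       # e.g. SQLi, SSRF, IDOR
    if cvss_base_score >= 4.0:
        return ("Medium", 100, 1_000)       # e.g. XSS, info disclosure
    return ("Low", 50, 100)                 # e.g. self-XSS, clickjacking
```

Publishing the mapping up front matters: researchers decide whether a target is worth their time based on the posted bands.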
Common Misconception
"A bug bounty programme replaces internal security testing." It does not: a bounty surfaces what slipped past your own reviews, and launching one before fixing known criticals simply pays researchers to rediscover issues you already knew about.
Why It Matters
Without a sanctioned reporting channel, a researcher who finds a vulnerability has nowhere to take it: some disengage, some disclose publicly, and some sell the finding. A well-run programme converts that risk into a steady stream of triaged reports; a badly run one creates frustration and reputational damage.
Common Mistakes
- Launching a public bug bounty before fixing known critical vulnerabilities — researchers find them immediately.
- Slow or dismissive responses to reports — researchers disengage and may disclose publicly.
- Scope that is too narrow — researchers find vulnerabilities out of scope and have no way to report them.
- Not triaging and fixing submissions promptly — the vulnerability remains exploitable while the report sits in the queue.
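Two of the mistakes above come down to not tracking time-to-first-response. A minimal sketch of an SLA check, assuming a hypothetical report shape (the `id`/`received`/`first_response` fields are illustrative, not any platform's API) and the 24-hour response target mentioned earlier:

```python
from datetime import datetime, timedelta, timezone

RESPONSE_SLA = timedelta(hours=24)  # assumed first-response target

def overdue_reports(reports: list[dict], now: datetime) -> list[str]:
    """Return IDs of reports with no first response inside the SLA.

    Each report is a dict with 'id', 'received' (aware datetime) and
    'first_response' (aware datetime or None), a hypothetical shape.
    """
    return [
        r["id"]
        for r in reports
        if r["first_response"] is None and now - r["received"] > RESPONSE_SLA
    ]
```

Running a check like this on a schedule, and paging the triage rota on any hit, is the operational difference between a responsive programme and the anti-pattern shown in the next section.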
Code Examples
// Bug bounty anti-pattern — no response SLA:
bugBounty.report({
  program: 'example.com',
  vulnerability: 'SQL injection on /api/search',
  severity: 'Critical',
  // Response: silence for 6 weeks
  // Fix: never deployed
  // Researcher: publishes 90-day disclosure
})
# Bug bounty programme — pay researchers to find vulnerabilities
# Scope definition (what's in/out):
# In scope: yourapp.com, api.yourapp.com, app.yourapp.com
# Out of scope: staging.*, careers.*, third-party providers
# Severity + reward table:
# Critical (CVSS 9-10): RCE, auth bypass → £5,000 - £20,000
# High (CVSS 7-8.9): SQLi, SSRF, IDOR → £1,000 - £5,000
# Medium (CVSS 4-6.9): XSS, info disclosure → £100 - £1,000
# Low (CVSS 0-3.9): self-XSS, clickjacking → £50 - £100
# Platforms: HackerOne, Bugcrowd, Intigriti
# Safe harbour: researchers acting in good faith won't face legal action
# Before launching: fix known vulns, have a response process, set realistic scope
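Publishing the policy typically starts with a security.txt file served at /.well-known/security.txt (RFC 9116). A minimal sketch; the contact address and policy URL are hypothetical placeholders mirroring the scope domains above:

```
Contact: mailto:security@yourapp.com
Expires: 2026-12-31T23:59:59Z
Policy: https://yourapp.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the two fields RFC 9116 requires; Policy is where the scope and safe-harbour wording described above should live.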