Performance API & User Timing
debt(d5/e3/b3/t5)
Closest to 'specialist tool catches it' (d5), because detection requires DevTools inspection, Lighthouse audits, or RUM tools (chrome-devtools, lighthouse, datadog-rum listed in detection_hints). Absence of performance instrumentation is not caught by default linters; automated detection is explicitly marked 'no' in the term metadata. A developer using Date.now() instead of performance.now() won't trigger a compiler error or default linter warning.
Closest to 'simple parameterised fix' (e3), because the quick_fix is straightforward: replace Date.now() calls with performance.mark()/measure() and wire them to your analytics endpoint. This is a pattern swap within a single component (the instrumentation layer), not a cross-cutting refactor. However, if marks are left uncleared (a common mistake), remediation requires touching multiple measurement sites, pushing toward e5 in the worst case.
Closest to 'localised tax' (b3), because Performance API adoption is localised to the monitoring/instrumentation layer and doesn't force architectural changes across the codebase. It applies only to 'web' contexts (single applies_to context). Once instrumentation is in place, most of the codebase remains unaffected; only the RUM/monitoring pipeline feels the weight of maintaining marks and measures.
Closest to 'notable trap' (t5), because the misconception is explicitly documented: 'console.time() is sufficient for performance measurement' is a documented gotcha that most developers eventually learn (console.time is dev-only, imprecise; performance.now() is production-grade, microsecond precision). Additionally, common_mistakes reveal non-obvious traps: not clearing marks causes buffer accumulation, measuring sub-1ms operations is unreliable due to JS engine optimisations, and PerformanceObserver vs polling has subtle behavioural differences.
Also Known As
User Timing API; performance.mark / performance.measure.
TL;DR
Use performance.now(), performance.mark()/measure(), and PerformanceObserver for monotonic, sub-millisecond timing that can feed production RUM; Date.now() and console.time() are not reliable substitutes.
Explanation
The Performance API provides sub-millisecond timing. performance.now() returns the time since the page's timeOrigin (navigation start) with sub-millisecond resolution — unlike Date.now(), it is monotonic and unaffected by system clock changes (browsers coarsen its precision somewhat as a side-channel mitigation). User Timing: performance.mark('start') places a named timestamp; performance.measure('task', 'start', 'end') calculates the duration between two marks. PerformanceObserver listens for performance entries (paint, navigation, resource, longtask). Use it for: profiling critical code paths, measuring custom metrics for RUM (Real User Monitoring), and detecting long tasks that block the main thread.
Common Misconception
"console.time() is sufficient for performance measurement." It is a dev-only convenience with limited precision; performance.now() and User Timing marks are production-grade, integrate with PerformanceObserver, and can be shipped to RUM tooling.
Why It Matters
Profiling critical code paths, custom RUM metrics, and long-task detection all depend on precise, monotonic timestamps. Date.now() shifts with system clock changes, so durations computed from it can be wrong or even negative; uncleared marks and sub-1ms micro-benchmarks are further sources of bad data.
Common Mistakes
- Using Date.now() for timing — it is millisecond precision and affected by system clock skew.
- Not clearing marks after measurement — marks accumulate in the performance buffer.
- Measuring too small operations — JS engine optimisations make micro-benchmarks unreliable below 1ms.
- Not using PerformanceObserver for continuous monitoring — polling performance.getEntries() can miss entries that occur between polls or are evicted from the buffer.
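The mark-accumulation mistake above can be demonstrated directly (the 'tick' name is illustrative):

```javascript
// Each mark() call appends a new entry to the performance buffer.
for (let i = 0; i < 3; i++) {
  performance.mark('tick');
}
console.log(performance.getEntriesByName('tick').length); // → 3

// Fix: clear named marks (and measures) once they have been consumed.
performance.clearMarks('tick');
console.log(performance.getEntriesByName('tick').length); // → 0
```

In a long-lived single-page app, skipping the cleanup step means the buffer grows for the lifetime of the page.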
Code Examples
// Imprecise timing:
const start = Date.now();
doHeavyWork();
console.log(Date.now() - start + 'ms'); // Milliseconds only, unreliable
// Precise User Timing:
performance.mark('heavy-work-start');
doHeavyWork();
performance.mark('heavy-work-end');
performance.measure('heavy-work', 'heavy-work-start', 'heavy-work-end');
const [measure] = performance.getEntriesByName('heavy-work');
console.log(measure.duration.toFixed(3) + 'ms'); // Sub-millisecond precision
// Clean up so marks don't accumulate in the performance buffer:
performance.clearMarks('heavy-work-start');
performance.clearMarks('heavy-work-end');
performance.clearMeasures('heavy-work');
// Observe long tasks (> 50ms) that block the main thread:
const observer = new PerformanceObserver(list => {
  list.getEntries().forEach(entry => {
    console.warn('Long task:', entry.duration, 'ms');
  });
});
// 'buffered: true' replays long tasks recorded before the observer attached.
observer.observe({ type: 'longtask', buffered: true });
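Per the sub-1ms caveat in Common Mistakes, one common workaround is to measure many iterations and divide. A sketch — the helper name and iteration count are arbitrary choices, not a standard API:

```javascript
// Amortise timer granularity and JIT warm-up by timing N iterations.
function timePerCall(fn, iterations = 10000) {
  performance.mark('bench-start');
  for (let i = 0; i < iterations; i++) fn();
  performance.mark('bench-end');
  performance.measure('bench', 'bench-start', 'bench-end');
  const [m] = performance.getEntriesByName('bench');
  // Clean up so repeated calls don't accumulate entries.
  performance.clearMarks('bench-start');
  performance.clearMarks('bench-end');
  performance.clearMeasures('bench');
  return m.duration / iterations; // average ms per call
}

const avg = timePerCall(() => Math.sqrt(Math.random()));
console.log(avg.toFixed(6) + 'ms per call');
```

This still isn't a rigorous benchmark (engines can optimise the loop away), but it is far more stable than timing a single sub-millisecond call.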