Rolling Deployment
Also Known As
rolling update
rolling deploy
gradual rollout
TL;DR
Gradually replacing old application instances with new ones, a few at a time, so the service remains available throughout the upgrade.
Explanation
A rolling deployment updates servers sequentially — draining a server of active connections, deploying the new version, verifying health, then moving to the next. At any point during the rollout, both old and new versions serve traffic simultaneously. This means: database migrations must be backward compatible with both versions, API contracts must not have breaking changes, and feature flags may be needed to gate new behaviour. Rolling deployments are simpler than blue/green (no duplicate infrastructure) but slower to roll back — you must roll forward through the remaining servers or trigger a reverse rolling update. Kubernetes performs rolling updates natively via Deployment resources.
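The drain → deploy → verify → advance loop described above can be sketched as a shell script. Everything here is a stand-in: the hostnames, the version string, and the drain/deploy/health-check stubs represent whatever load balancer and deploy tooling you actually use.

```shell
#!/usr/bin/env bash
# Sketch of a rolling deploy over a fixed server list.
# The three steps are stub functions; replace each echo with real tooling.
set -euo pipefail

SERVERS=("app1.internal" "app2.internal" "app3.internal")
VERSION="v2.5.0"

drain()        { echo "  draining $1 of active connections"; }  # e.g. pull host from LB pool
deploy()       { echo "  deploying $VERSION to $1"; }           # e.g. ssh + package install
health_check() { echo "  $1 passed health check"; }             # e.g. curl the /health endpoint

for host in "${SERVERS[@]}"; do
  drain "$host"
  deploy "$host"
  health_check "$host"   # only advance once the new version is verified healthy
  echo "$host updated; old and new versions now serve traffic side by side"
done
```

Note that between loop iterations the fleet runs mixed versions, which is exactly why the compatibility constraints above apply.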
Common Misconception
✗ Rolling deployments are always safer than blue-green. Rolling updates expose users to mixed versions simultaneously — if old and new code are incompatible (e.g. different DB schema expectations), some requests will fail during the rollout window. Blue-green avoids this by cutting over atomically.
Why It Matters
Rolling deployments update servers one by one rather than all at once — at any point, some servers run the old version and some the new, providing gradual rollout without a full traffic cutover.
Common Mistakes
- Database migrations that break the old code version — both versions run simultaneously during a rolling deploy.
- Not handling mixed-version traffic in the application — old and new API contracts must coexist during rollout.
- Rolling too fast — updating all servers before monitoring shows the new version is healthy.
- No rollback plan — if the new version has issues, rolling back means going through the same process in reverse.
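The first two mistakes are avoided with the expand-contract pattern. Here is a minimal sqlite3 simulation (table and column names are illustrative): during the expand phase both columns exist and writes populate both, so servers on either version keep working mid-rollout.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, user_name TEXT)")
db.execute("INSERT INTO users (user_name) VALUES ('alice')")

# Expand: add the new column and backfill it; the old column stays.
db.execute("ALTER TABLE users ADD COLUMN username TEXT")
db.execute("UPDATE users SET username = user_name")

# Dual-write: during the rollout, every write populates both columns.
db.execute("INSERT INTO users (user_name, username) VALUES ('bob', 'bob')")

# Old code (still running on some servers) reads the old column...
old = db.execute("SELECT user_name FROM users ORDER BY id").fetchall()
# ...while new code reads the new one. Both succeed mid-rollout.
new = db.execute("SELECT username FROM users ORDER BY id").fetchall()

print(old)  # [('alice',), ('bob',)]
print(new)  # [('alice',), ('bob',)]

# Contract: only after every server runs the new version do you
# drop 'user_name' (SQLite 3.35+ supports ALTER TABLE ... DROP COLUMN).
```

The contract step is deliberately a separate, later deploy: dropping the old column while any old-version server is still live recreates the original failure.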
Code Examples
✗ Vulnerable
# Rolling deploy with breaking DB migration — causes errors:
# Step 1: Run migration that renames column 'user_name' to 'username'
# Step 2: Start deploying new code that uses 'username'
# Step 3: Old code still running on 50% of servers uses 'user_name' — errors!
# Fix: expand-contract — add 'username', dual-write, then remove 'user_name'
✓ Fixed
# Kubernetes — rolling update (default strategy)
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod down at a time
      maxSurge: 1         # at most 1 extra pod above replica count
  template:
    spec:
      containers:
        - name: app
          image: myapp:v2.5.0   # change this to trigger rolling update
          readinessProbe:
            httpGet: { path: /health, port: 9000 }
          # pod only receives traffic once /health returns 200
# kubectl rollout status deployment/myapp
# kubectl rollout undo deployment/myapp ← instant rollback
Added
15 Mar 2026
Edited
22 Mar 2026
Related categories
⚡
DEV INTEL
Tools & Severity
🟡 Medium
⚙ Fix effort: Medium
⚡ Quick Fix
Configure Kubernetes rolling updates with maxUnavailable: 0 and maxSurge: 1, which brings up one new pod and waits for it to pass readiness before terminating an old one, ensuring zero downtime
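That quick fix corresponds to this fragment of the Deployment strategy block shown in the code example above:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never take a serving pod down first
    maxSurge: 1         # add one new pod, wait for readiness, then remove an old one
```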
📦 Applies To
any
web
api
🔍 Detection Hints
Deployment requiring all instances down simultaneously; no rolling update strategy configured; deployments causing a visible downtime window
Auto-detectable:
✗ No
Tools: kubernetes, helm, datadog