Read Replicas & Database Scaling
Also Known As
DB read replica
read slave
MySQL replication
TL;DR
Directing read queries to replica servers while writes go to the primary — a simple way to scale read throughput horizontally without sharding.
Explanation
Most web applications read far more than they write. Read replicas receive a continuous stream of changes from the primary via replication (the MySQL binary log, PostgreSQL streaming replication) and serve SELECT queries independently.

PHP applications implement this with a connection manager that routes writes to the primary DSN and reads to a replica DSN; Laravel supports separate read/write connections natively.

Caveats: replication lag means replicas may serve slightly stale data — never read from a replica immediately after a write in the same request without a mitigation strategy (e.g. read-your-writes from the primary for the current session). Monitor replication lag as a key operational metric.
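Laravel's native support mentioned above can be sketched as a connection config with separate read and write host pools (hostnames and the connection name are illustrative, not from the original):

```php
<?php
// config/database.php — a minimal read/write split (hosts are illustrative).
// Laravel routes SELECTs to the 'read' hosts and everything else to 'write';
// 'sticky' => true re-uses the write connection for reads issued after a
// write within the same request (a read-your-writes mitigation).
return [
    'connections' => [
        'mysql' => [
            'driver' => 'mysql',
            'read' => [
                'host' => ['replica1.internal', 'replica2.internal'],
            ],
            'write' => [
                'host' => ['primary.internal'],
            ],
            'sticky'   => true,
            'database' => 'app',
            'username' => env('DB_USERNAME'),
            'password' => env('DB_PASSWORD'),
        ],
    ],
];
```

The `sticky` flag only covers a single request; cross-request freshness still needs an application-level strategy.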
Common Misconception
✗ Read replicas are immediately consistent with the primary. Replication lag means replicas may serve stale data seconds or minutes behind the primary. Applications must route reads that require freshness (post-write reads) to the primary, not replicas.
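One way to route post-write reads correctly is a sticky window: after a write, pin the session's reads to the primary until replication has plausibly caught up. A minimal sketch — pool names are returned as strings rather than live connections, and the 2-second window is an assumed lag budget, not a measured value:

```php
<?php
// Sticky-read router: after any write, reads go to the primary for a
// short window so the session always sees its own writes.
class StickyRouter
{
    private ?float $lastWriteAt = null;

    public function __construct(
        private float $stickySeconds = 2.0, // assumed lag budget
    ) {}

    public function recordWrite(?float $now = null): void
    {
        $this->lastWriteAt = $now ?? microtime(true);
    }

    // Which pool a read should use: 'primary' or 'replica'
    public function readTarget(?float $now = null): string
    {
        $now ??= microtime(true);
        if ($this->lastWriteAt !== null
            && ($now - $this->lastWriteAt) < $this->stickySeconds) {
            return 'primary';
        }
        return 'replica';
    }
}
```

In practice the last-write timestamp would live in the user's session so the window survives across requests.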
Why It Matters
Read replicas offload SELECT queries to secondary servers — the primary handles only writes, scaling read capacity horizontally and protecting write throughput from reporting or analytics queries.
Common Mistakes
- Sending write queries to replicas — on a properly configured (read_only) replica they fail with an error; on a misconfigured one they succeed and silently diverge from the primary.
- Reading from replica immediately after a write — replication lag means the new data may not be there yet.
- Not routing long-running analytics queries to replicas — they consume the primary's resources and can starve write throughput.
- Using a single connection for both reads and writes when separate read/write connections are configured.
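Monitoring lag is the operational counterpart to these mistakes. On MySQL, `SHOW REPLICA STATUS` (formerly `SHOW SLAVE STATUS`) reports `Seconds_Behind_Source`; a sketch of interpreting that row — the 5-second threshold is an assumption, and the helper name is ours:

```php
<?php
// Interpret a SHOW REPLICA STATUS row. Seconds_Behind_Source is NULL
// when replication is stopped or broken — treat that as unhealthy,
// not as "zero lag".
function replicaIsHealthy(array $statusRow, int $maxLagSeconds = 5): bool
{
    $lag = $statusRow['Seconds_Behind_Source'] ?? null;
    if ($lag === null) {
        return false; // replication not running
    }
    return (int) $lag <= $maxLagSeconds;
}

// Fetching the row (needs a live replica connection):
// $row = $pdo->query('SHOW REPLICA STATUS')->fetch(PDO::FETCH_ASSOC);
// if (!replicaIsHealthy($row)) { /* drop replica from the read pool */ }
```

A router can call this periodically and remove lagging replicas from rotation.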
Code Examples
✗ Problematic
// All queries hit the primary — no read scaling:
$pdo = new PDO('mysql:host=primary;dbname=app', ...);
$pdo->query('SELECT * FROM orders');  // should go to a replica
$pdo->query('UPDATE orders SET ...'); // correctly on the primary

// Contrast: separate read/write handles, the basis of the fix:
$read  = new PDO('mysql:host=replica;dbname=app', ...);
$write = new PDO('mysql:host=primary;dbname=app', ...);
✓ Fixed
// Route reads to a replica pool, writes to the primary
class DatabaseManager {
    public function __construct(
        private PDO $primary,
        private array $replicas,
    ) {}

    public function write(): PDO
    {
        return $this->primary;
    }

    public function read(): PDO
    {
        // Pick a random replica (simple load spreading)
        return $this->replicas[array_rand($this->replicas)];
    }
}

// Usage
$db->write()->prepare('INSERT INTO orders ...');
$db->read()->prepare('SELECT * FROM orders WHERE user_id = ?');

// Caution: read your own writes — after a write, use the primary for a
// short window to avoid reading stale replica data (replication lag)
Tags
Added
15 Mar 2026
Edited
22 Mar 2026
Related categories
⚡
DEV INTEL
Tools & Severity
🟠 High
⚙ Fix effort: Medium
⚡ Quick Fix
Route SELECT queries to read replicas and all writes to the primary — Laravel's read/write connection configuration and Doctrine DBAL's PrimaryReadReplicaConnection make the split transparent to application code
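For the Doctrine side, DBAL ships a PrimaryReadReplicaConnection wrapper; a minimal sketch of the connection parameters — hostnames and credentials are illustrative:

```php
<?php
// Doctrine DBAL 3.x: PrimaryReadReplicaConnection serves statements from a
// replica until the first write, then keeps the connection on the primary.
use Doctrine\DBAL\DriverManager;
use Doctrine\DBAL\Connections\PrimaryReadReplicaConnection;

$conn = DriverManager::getConnection([
    'wrapperClass' => PrimaryReadReplicaConnection::class,
    'driver'  => 'pdo_mysql',
    'primary' => [
        'host' => 'primary.internal', 'dbname' => 'app',
        'user' => 'app', 'password' => 'secret',
    ],
    'replica' => [
        [
            'host' => 'replica1.internal', 'dbname' => 'app',
            'user' => 'app', 'password' => 'secret',
        ],
    ],
]);
```

Keeping the connection pinned to the primary after the first write is DBAL's built-in read-your-writes behaviour for a single request.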
📦 Applies To
PHP 5.0+ (fixed example uses PHP 8.0+ syntax)
web
cli
queue-worker
laravel
doctrine
🔗 Prerequisites
🔍 Detection Hints
All queries including SELECTs hitting primary DB; no read replica configured; primary DB CPU high from read-heavy workload
Auto-detectable:
✗ No
laravel-debugbar
datadog
rds-console
⚠ Related Problems
🤖 AI Agent
Confidence: Low
False Positives: Medium
✗ Manual fix
Fix: High
Context: File
Tests: Update