Polyglot Persistence
Also Known As
multi-database
polyglot storage
best-fit persistence
TL;DR
Using multiple database technologies in a single application, each chosen for what it does best rather than forcing all data into one general-purpose store.
Explanation
Different data types have different optimal storage characteristics:
- Relational data (users, orders): PostgreSQL
- Sessions and caching: Redis
- Full-text search: Elasticsearch
- Document data: MongoDB
- Time-series metrics: InfluxDB or TimescaleDB
- Graph relationships: Neo4j
Polyglot persistence picks the best tool for each use case. The challenge: operational overhead of maintaining multiple systems, consistency across systems, and transaction management across boundaries. Best applied incrementally, when a clear mismatch with the current store exists.
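As a concrete sketch, a single PHP application can hold a handle to each store side by side. The hosts, credentials, and index name below are placeholder assumptions, and the snippet presumes the phpredis extension and the meilisearch-php SDK are installed:

// One application, several purpose-chosen stores (connection details are placeholders)
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'app', 'secret');     // relational OLTP
$redis = new Redis();                                                   // sessions / cache
$redis->connect('127.0.0.1', 6379);
$search = new Meilisearch\Client('http://127.0.0.1:7700', 'masterKey'); // full-text search

// Each read goes to the store that fits the access pattern:
$user    = $pdo->query('SELECT * FROM users WHERE id = 1')->fetch();    // consistent OLTP read
$session = $redis->get('session:abc123');                               // sub-millisecond key lookup
$hits    = $search->index('glossary')->search('polyglot');              // ranked, typo-tolerant search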
Common Misconception
✗ More databases always means better architecture — polyglot persistence adds operational complexity; each additional store requires expertise, monitoring, backup, and failure handling. Start with one good store and add others only when justified.
Why It Matters
A full-text search implemented with MySQL LIKE queries might take 2 seconds; the same search in Elasticsearch takes around 50ms. The right store for the use case eliminates entire classes of performance problems (see the code examples below).
Common Mistakes
- Adding a new database for every new feature — operational complexity compounds quickly.
- Expecting cross-database transactions: a single ACID transaction cannot span stores, so you need application-level patterns such as sagas or a transactional outbox (see the sketch after this list).
- Not considering team expertise — a MongoDB cluster maintained by a team unfamiliar with it is dangerous.
- Premature polyglot — solve the problem with your existing store first, add a new one only when you hit a real wall.
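To make the saga/outbox point concrete, here is one common shape of a transactional outbox in plain PDO. This is a sketch under assumptions: the outbox table and its columns, the reindexTerm() helper, and the $pdo/$body/$id variables are hypothetical names for illustration, not an API this entry prescribes.

// Step 1: the business write and the outbox record commit atomically in MySQL
$pdo->beginTransaction();
try {
    $pdo->prepare('UPDATE glossary SET body = :body WHERE id = :id')
        ->execute(['body' => $body, 'id' => $id]);
    $pdo->prepare('INSERT INTO outbox (event_type, payload) VALUES (:type, :payload)')
        ->execute(['type' => 'term.saved', 'payload' => json_encode(['id' => $id])]);
    $pdo->commit();
} catch (Throwable $e) {
    $pdo->rollBack();
    throw $e;
}

// Step 2: a worker drains the outbox and updates the secondary store.
// If the index write fails, the row stays unprocessed and is retried later,
// so both stores converge without a cross-database transaction.
$rows = $pdo->query('SELECT id, payload FROM outbox WHERE processed_at IS NULL LIMIT 100')
            ->fetchAll(PDO::FETCH_ASSOC);
foreach ($rows as $row) {
    $payload = json_decode($row['payload'], true);
    reindexTerm($payload['id']); // hypothetical helper that pushes to the search index
    $pdo->prepare('UPDATE outbox SET processed_at = NOW() WHERE id = :id')
        ->execute(['id' => $row['id']]);
}

Note the ordering: the side effect runs before the row is marked processed, which gives at-least-once delivery, so the index write should be idempotent (an upsert keyed by id).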
Code Examples
✗ Vulnerable
// MySQL full-text search — hitting limits:
$results = $pdo->query(
    "SELECT * FROM glossary
     WHERE body LIKE '%" . $term . "%'
        OR title LIKE '%" . $term . "%'"
);
// 2 second query on 10,000 terms, no ranking, no typo tolerance
// (and concatenating $term unescaped makes this SQL-injectable too)
✓ Fixed
// Polyglot: MySQL for data, Meilisearch for search:
// MySQL: source of truth for term content
$term = Term::where('slug', $slug)->firstOrFail(); // find() looks up by primary key, not slug
// Meilisearch: optimised for full-text search
$searchResults = $meilisearch->index('glossary')
    ->search($query, ['limit' => 20]);
// 5ms response, typo-tolerant, ranked by relevance
// Keep in sync via queue:
TermSaved::dispatch($term); // Listener re-indexes in Meilisearch
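A sketch of that listener might look like the following; the class name, the event shape, and the document fields are assumptions in the style of a Laravel app using the meilisearch-php client, not code from this entry:

use Illuminate\Contracts\Queue\ShouldQueue;
use Meilisearch\Client;

// Hypothetical queued listener; runs off the request path via the queue
class ReindexTermInMeilisearch implements ShouldQueue
{
    public function __construct(private Client $meilisearch) {}

    public function handle(TermSaved $event): void
    {
        // addDocuments() upserts by primary key, so replaying the event is harmless
        $this->meilisearch->index('glossary')->addDocuments([[
            'id'    => $event->term->id,
            'title' => $event->term->title,
            'body'  => $event->term->body,
        ]]);
    }
}

In a Laravel app this listener would typically be registered for TermSaved in the event service provider, completing the MySQL-to-Meilisearch sync loop.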
Tags
Related categories
⚡ DEV INTEL
Tools & Severity
🔵 Info
⚙ Fix effort: High
⚡ Quick Fix
Use the right database for each use case: MySQL/PostgreSQL for relational OLTP, Redis for caching/sessions/queues, Elasticsearch/Meilisearch for full-text search, S3 for files — PHP can connect to all of them
📦 Applies To
any
web
cli
🔗 Prerequisites
🔍 Detection Hints
Full-text search on MySQL with LIKE; storing sessions in MySQL; binary files stored in DB BLOB; time-series data in relational table
Auto-detectable: ✗ No
deptrac
phpstan
⚠ Related Problems
🤖 AI Agent
Confidence: Low
False Positives: High
✗ Manual fix
Fix: High
Context: File