Method
- Start with the plan (EXPLAIN/ANALYZE). Find the biggest cost node.
- Reduce rows early: selective predicates, right indexes, avoid wide SELECT *.
- Kill N+1: batch or join; fetch all you need in one go.
- Consider normalization for writes, denormalized read models for hot endpoints.
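The N+1 fix above can be sketched as a single batched query (table and column names are assumed for illustration):

-- language-sql
-- instead of running SELECT ... WHERE user_id = $1 once per user,
-- bind the whole id list and fetch everything in one round trip:
SELECT o.id, o.user_id, o.total_cents
FROM orders o
WHERE o.user_id = ANY($1::uuid[]);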
Plans 101
-- language-sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.total_cents, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.user_id = $1 AND o.created_at >= now() - interval '30 days'
ORDER BY o.created_at DESC
LIMIT 50;
- Look for seq scans vs index scans; check rows vs rows removed by filter; buffer hits vs reads.
Indexes that matter
-- language-sql
CREATE INDEX CONCURRENTLY idx_orders_user_created ON orders (user_id, created_at DESC);
Covering and partial indexes
-- language-sql
CREATE INDEX idx_active_users_email ON users(email) WHERE status = 'ACTIVE';
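The heading also promises a covering index; a sketch using the Postgres 11+ INCLUDE clause, with columns chosen to match the Plans 101 query:

-- language-sql
-- total_cents rides along in the index leaf pages, so the query
-- can be answered by an index-only scan without heap fetches
CREATE INDEX idx_orders_user_created_cover
ON orders (user_id, created_at DESC)
INCLUDE (total_cents);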
Parameter sniffing and bind peeking
- Cached plans may be optimized for the first parameter values they were planned with. Keep statistics fresh with ANALYZE, and force generic plans only sparingly.
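A minimal sketch of both levers (plan_cache_mode requires Postgres 12+):

-- language-sql
-- refresh statistics so the planner sees current value distributions
ANALYZE orders;
-- override the automatic custom-vs-generic plan choice for this session
SET plan_cache_mode = force_custom_plan;  -- or force_generic_plan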
Avoid anti‑patterns
- Functions on indexed columns in WHERE (non‑sargable).
- Unbounded OFFSET/LIMIT pagination → prefer keyset pagination.
- OR across many columns → split into UNION ALL.
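The sargability point in concrete form, assuming created_at is indexed as above:

-- language-sql
-- non-sargable: the function call hides created_at from the index
SELECT id FROM orders WHERE date(created_at) = current_date;
-- sargable: a plain range predicate on the raw column can use the index
SELECT id FROM orders
WHERE created_at >= current_date
  AND created_at < current_date + interval '1 day';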
Keyset pagination
-- language-sql
SELECT id, total_cents, created_at
FROM orders
WHERE user_id = $1
  AND (created_at, id) < ($2, $3)  -- tie-break on id so equal timestamps are not skipped
ORDER BY created_at DESC, id DESC
LIMIT 50;
Batch writes
-- language-sql
INSERT INTO events (id, user_id, payload)
SELECT gen_random_uuid(), user_id, payload
FROM jsonb_to_recordset($1::jsonb) AS t(user_id uuid, payload jsonb);
App‑level tips
- Use connection pools sized to CPU cores × (1–4). Too big → contention.
- Timeouts everywhere; cancel long queries from the app.
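Server-side timeouts back up app-level cancellation; a sketch with illustrative values (set per session or per role):

-- language-sql
SET statement_timeout = '5s';                     -- cancel queries running longer than 5s
SET lock_timeout = '2s';                          -- fail fast instead of queueing on locks
SET idle_in_transaction_session_timeout = '30s';  -- reap abandoned transactions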
Analogy
Think of the plan as a factory route: you want the expensive machines used only when necessary, not to process junk that could have been filtered at the door.
FAQ
- Why is EXPLAIN fast but EXPLAIN ANALYZE slow? The latter runs the query; use on staging or with LIMITs.
- Do hints exist? Core Postgres deliberately has none; use statistics, indexes, and schema design (the pg_hint_plan extension exists if you must).
Try it
Compare p95 endpoint latency before/after adding one composite index and switching to keyset pagination.
Method
- Read the execution plan (EXPLAIN/ANALYZE).
- Reduce scanned rows: targeted indexes, selective filters, covering indexes.
- Avoid N+1: prefetch/joins, batching.
- Normalize where needed, denormalize for critical reads.
Indexing
- B-Tree for equality/range, Hash for strict equality only, GIN/GiST for full-text/JSON.
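A GIN example for the JSON case, reusing the events.payload column from Batch writes:

-- language-sql
-- jsonb_path_ops is smaller and faster than the default opclass,
-- but only supports the containment operator @>
CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);
SELECT id FROM events WHERE payload @> '{"type": "signup"}';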
Parameters
- work_mem, effective_cache_size, parallelism; monitor cache hit ratio and I/O wait.
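The cache hit ratio can be read straight from pg_stat_database:

-- language-sql
-- values near 0.99 are typical for healthy OLTP workloads; sustained
-- lower ratios suggest the working set does not fit in shared_buffers
SELECT sum(blks_hit)::numeric
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;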