Missing WHERE clause
Severity: Info
A top-level SELECT that reads from a table without a WHERE clause returns every row. For small lookup tables this is fine, but for large tables it means a full sequential scan — reading the entire table regardless of how many rows the caller actually needs.
Example
```sql
-- Reads every row from a potentially large table
SELECT id, name FROM customers;
```

Why it matters

Without a WHERE clause, PostgreSQL has no choice but to scan the whole table. This means:
- I/O proportional to table size — every page in the table is read from disk or shared buffers
- No index usage — indexes only help when there’s a predicate to filter on
- Network transfer — all rows are sent to the client, which can be slow for large tables
For tables with millions of rows, an unfiltered SELECT can tie up a connection for seconds or longer and put heavy load on shared buffers.
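The scan behavior described above can be observed with EXPLAIN. A sketch, assuming a `customers` table with an index on `active` (the index name is illustrative, and the plan output, abbreviated here, will vary with table size and statistics):

```sql
-- No predicate: the planner has nothing to filter on, so it reads every page
EXPLAIN SELECT id, name FROM customers;
--   Seq Scan on customers ...

-- With a predicate, the planner can consider an index on active
EXPLAIN SELECT id, name FROM customers WHERE active = true;
--   Index Scan using customers_active_idx on customers ...
```

Note that the planner may still choose a sequential scan for a filtered query if the predicate matches most of the table; the point is that without a predicate it has no other option.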
When it’s acceptable
Unfiltered selects are reasonable for:
- Small reference or configuration tables (countries, status codes, feature flags)
- Aggregations that genuinely need all rows (`SELECT count(*) FROM orders`)
- Data exports or ETL pipelines designed to process full tables
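For illustration, the three acceptable cases above might look like the following (table and column names are assumed):

```sql
-- Small reference table: fine to read in full
SELECT code, name FROM countries;

-- Aggregation that genuinely needs every row
SELECT count(*) FROM orders;

-- Full-table export as CSV via PostgreSQL's COPY
COPY (SELECT * FROM orders) TO STDOUT WITH (FORMAT csv, HEADER);
```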
How to fix it
Add a WHERE clause to filter to the rows you need:

```sql
SELECT id, name FROM customers WHERE active = true;
```

If you need to browse large result sets, combine with pagination:

```sql
SELECT id, name FROM customers WHERE active = true ORDER BY name LIMIT 50 OFFSET 0;
```

Detection
Query Doctor flags this nudge on top-level SELECT statements that query actual tables (not subqueries or CTEs) and have no WHERE clause. The nudge marker points to the first table in the FROM clause.
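Based on the description above, a statement like the following would presumably not be flagged, since the top-level FROM reads from a subquery rather than a table (the query and names are illustrative, not taken from Query Doctor's test suite):

```sql
-- Presumably not flagged: the top-level FROM is a subquery, not a table
SELECT max(total) FROM (SELECT total FROM orders) AS order_totals;
```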