Missing LIMIT clause

Severity: Info

A top-level SELECT that reads from a table without a LIMIT clause returns every matching row. If the caller only needs a page of results or the first few matches, the rest is wasted work — extra I/O, sorting, and network transfer for rows that are immediately discarded.

-- Returns all matching orders, even if the UI shows 25 at a time
SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC;

Without LIMIT, PostgreSQL must find, sort, and return every matching row. If there are 10,000 matches but the application displays only 25, the database fetches, sorts, and transfers 400 times more rows than the application uses.

When LIMIT is present, PostgreSQL can use a top-N heapsort — a much cheaper algorithm that only tracks the top N rows instead of sorting the entire result set. Without LIMIT, a full sort is required.
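You can see the difference in EXPLAIN ANALYZE output. The sketch below reuses the orders query from above; the plan lines are illustrative, and the actual numbers depend on your data and work_mem setting:

-- Without LIMIT: the whole result set is sorted, possibly spilling to disk
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC;
--   Sort Method: external merge  Disk: ...

-- With LIMIT: PostgreSQL keeps only the top 25 rows in a small heap
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 25;
--   Sort Method: top-N heapsort  Memory: ...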

Large unbounded result sets consume memory both on the database server (for sorting and buffering) and on the application side. This can cause spills to disk on the server and out-of-memory errors in the application.
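When you genuinely need every row but want to bound memory on the application side, one option is a server-side cursor, which streams the result in batches instead of buffering it all at once. A minimal sketch, assuming the same orders table (cursors must run inside a transaction):

BEGIN;
DECLARE order_export CURSOR FOR
  SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC;
FETCH 1000 FROM order_export;  -- repeat until FETCH returns no rows
CLOSE order_export;
COMMIT;

Most PostgreSQL drivers expose this batching directly (for example, a fetch size or streaming mode), so you rarely need to write the cursor by hand.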

LIMIT is not needed when:

  • You’re aggregating (SELECT count(*), SELECT sum(total))
  • The query feeds an INSERT INTO ... SELECT or similar bulk operation
  • You genuinely need all matching rows (exports, reports, ETL)

Add a LIMIT to match what the caller actually consumes:

SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 25;

For paginated APIs, use LIMIT with OFFSET or keyset pagination:

-- Keyset pagination (more efficient than OFFSET for deep pages)
SELECT * FROM orders
WHERE customer_id = 42 AND created_at < $1
ORDER BY created_at DESC
LIMIT 25;
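If created_at is not unique, two orders with the same timestamp can be skipped or repeated across pages. A common refinement is to add a unique tie-breaker column and compare row values; this sketch assumes orders has a unique id column, with $1 and $2 being the last page's created_at and id:

-- Keyset pagination with a tie-breaker; (a, b) < (x, y) compares lexicographically
SELECT * FROM orders
WHERE customer_id = 42 AND (created_at, id) < ($1, $2)
ORDER BY created_at DESC, id DESC
LIMIT 25;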

Query Doctor flags this nudge on top-level SELECT statements that query actual tables (not subqueries or CTEs) and have no LIMIT clause. The nudge marker points to the first table in the FROM clause.