Why Your Supabase Queries Are Slow (And Exactly How to Fix Them)
Slow Supabase queries kill your app's feel and inflate your bill. Here are the six causes I keep seeing in production apps, and the exact SQL and code fixes for each one.
A 400ms Supabase query on your laptop becomes a 3-second query in production. I have debugged this pattern across a dozen Next.js + Supabase projects. The causes are almost always the same six things.
Here is every one of them with the exact fix.
How to Find Your Slow Queries First
Before fixing, find the actual culprits. Do not guess.
Go to Supabase Dashboard → Database → Query Performance. This shows queries sorted by total execution time — meaning queries that run 10ms but fire 10,000 times show up above queries that take 2 seconds but run once.
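That dashboard view is backed by the pg_stat_statements extension (enabled by default on Supabase), so you can also query it directly in the SQL editor. Column names below assume PostgreSQL 13+:

```sql
-- Top 10 queries by total time spent (frequency × duration)
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by total_exec_time rather than mean_exec_time surfaces the cheap-but-chatty queries the dashboard ranks highly.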
For a specific query, run this in the SQL editor:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM posts WHERE user_id = 'abc123';
Look for:
- Seq Scan on large tables — means no index is being used
- Rows Removed by Filter: 50000 — the query scans and discards huge numbers of rows at runtime
- High Buffers: shared read — reading data from disk rather than from cache
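For reference, a problematic plan looks roughly like this (shortened, with illustrative numbers):

```
Seq Scan on posts  (cost=0.00..18584.82 rows=48 width=200)
                   (actual time=0.41..412.63 rows=48 loops=1)
  Filter: (user_id = 'abc123')
  Rows Removed by Filter: 499952
  Buffers: shared hit=1102 read=9046
Planning Time: 0.21 ms
Execution Time: 412.79 ms
```

Half a million rows scanned to return 48, most of them read from disk — this is the signature of a missing index.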
Now the six causes.
1. Missing Indexes on Filtered Columns
This is responsible for 80% of slow queries I see. If you filter, sort, or join on a column, it needs an index.
-- This query does a full table scan without an index
SELECT * FROM posts WHERE user_id = $1 ORDER BY created_at DESC;
Fix:
-- Create a composite index on both columns used in the query
CREATE INDEX CONCURRENTLY idx_posts_user_created
ON posts (user_id, created_at DESC);
CONCURRENTLY creates the index without locking the table — safe for production.
Common columns that almost always need indexes:
- user_id on any user-owned table
- created_at on tables you sort by date
- slug on tables you look up by URL
- Foreign key columns (post_id, organization_id, etc.)
Supabase does not create indexes on foreign keys automatically — you must add them yourself.
-- Check if an index exists on a column
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'posts';
2. RLS Policies Calling auth.uid() Per Row
This is the hidden killer. You enable Row Level Security and write a policy like this:
-- SLOW: auth.uid() is called once per row
CREATE POLICY "Users can read own posts"
ON posts FOR SELECT
USING (user_id = auth.uid());
On a table with 100,000 rows, auth.uid() is evaluated 100,000 times per query. It is a function call with overhead.
Fix — wrap in a subquery so it evaluates once:
-- FAST: auth.uid() is called once per query
CREATE POLICY "Users can read own posts"
ON posts FOR SELECT
USING (user_id = (SELECT auth.uid()));
Same policy, same security, dramatically faster. Apply this to every RLS policy you have.
While you are there, also check for policies that call functions or do subqueries on large tables. Every policy condition is evaluated for every row that passes the WHERE clause.
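The same wrapping applies to those heavier policies too. As a sketch, a role check against a hypothetical profiles.role column (the table and column names here are illustrative, not from the original schema):

```sql
-- The nested (SELECT auth.uid()) lets PostgreSQL evaluate the
-- whole role lookup once per query instead of once per row.
CREATE POLICY "Admins can read all posts"
ON posts FOR SELECT
USING (
  'admin' = (
    SELECT p.role FROM profiles p
    WHERE p.id = (SELECT auth.uid())
  )
);
```

Verify the effect with EXPLAIN ANALYZE: the lookup should appear as an InitPlan, not inside the per-row filter.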
3. The N+1 Query Problem
You fetch posts, then for each post you fetch the author's profile. That is 1 + N queries.
// SLOW: 1 query for posts + 1 query per post for the author
const { data: posts } = await supabase.from('posts').select('*');

for (const post of posts) {
  const { data: author } = await supabase
    .from('profiles')
    .select('name, avatar_url')
    .eq('id', post.user_id)
    .single();
  post.author = author;
}
Fix — use PostgREST's relational embedding to fetch everything in one query:
// FAST: 1 query with a JOIN
const { data: posts } = await supabase
  .from('posts')
  .select(`
    id,
    title,
    slug,
    created_at,
    author:profiles (
      name,
      avatar_url
    )
  `);
This generates a single SQL query with a JOIN. PostgREST infers the relationship from the foreign key posts.user_id → profiles.id.
For this to work, the foreign key must exist in the database schema. If it does not, add it:
ALTER TABLE posts
ADD CONSTRAINT posts_user_id_fkey
FOREIGN KEY (user_id) REFERENCES profiles(id);
4. Fetching All Columns with select("*")
Every column you select is transmitted over the network and deserialized in your app. If your posts table has a content column with 10KB of markdown per row, selecting 50 posts returns 500KB just for that column — even when you only need titles for a list view.
// SLOW: returns all columns including large content field
const { data } = await supabase.from('posts').select('*');

// FAST: only the columns the list view actually needs
const { data } = await supabase
  .from('posts')
  .select('id, title, slug, excerpt, created_at, author_name');
For detail pages where you need the full content, select everything. For list views, cards, and previews — be precise.
This also matters for RLS: PostgreSQL evaluates policies before column filtering, so you are still scanning the same rows. But you reduce network overhead and deserialization time significantly.
5. No Connection Pooling in Serverless Environments
Serverless environments like Vercel Functions cannot share a long-lived connection pool across invocations — each concurrently running function opens its own database connection. PostgreSQL allows only a limited number of direct connections (on the order of 100 on smaller Supabase instances). Under load you hit "too many connections" errors and requests start failing.
Fix: use Supabase's built-in connection pooler, which listens on port 6543.
In your .env.local, you should have two database URLs:
# Direct connection — use for migrations and long-running scripts
DIRECT_URL=postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres

# Pooler connection — use for your app
DATABASE_URL=postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres?pgbouncer=true
For the Supabase JS client in Next.js, the standard server-side setup looks like this:
// lib/supabase/server.ts
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';

export function createClient() {
  const cookieStore = cookies();
  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return cookieStore.getAll();
        },
        setAll(cookiesToSet) {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            );
          } catch {
            // set() can throw when called from a Server Component;
            // safe to ignore if middleware refreshes the session.
          }
        },
      },
    }
  );
}
The Supabase JS client uses the REST API over HTTPS — it does not hold raw PostgreSQL connections open. The connection pooling issue mainly affects Prisma, Drizzle, and raw pg connections. If you are using only the Supabase JS client, you are already safe.
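If you do use an ORM, point it at the pooler URL and disable prepared statements, which the pooler's transaction mode does not support. A sketch with the postgres-js driver (Drizzle's default), assuming the DATABASE_URL from the .env.local example above:

```typescript
// lib/db.ts — assumes the `postgres` and `drizzle-orm` packages are installed
import postgres from 'postgres';
import { drizzle } from 'drizzle-orm/postgres-js';

// Pooler URL (port 6543). `prepare: false` is required because the
// transaction-mode pooler does not support prepared statements.
const client = postgres(process.env.DATABASE_URL!, { prepare: false });

export const db = drizzle(client);
```

Migrations should keep using the direct connection, since DDL and long transactions do not mix well with transaction-mode pooling.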
6. Fetching More Data Than You Display
Pagination is obvious in theory but missed constantly in practice.
// SLOW: fetches ALL posts every time
const { data } = await supabase
  .from('posts')
  .select('id, title, slug')
  .order('created_at', { ascending: false });

// FAST: fetches 20 posts, skipping the first (page - 1) * 20
const PAGE_SIZE = 20;
const { data, count } = await supabase
  .from('posts')
  .select('id, title, slug', { count: 'exact' })
  .order('created_at', { ascending: false })
  .range((page - 1) * PAGE_SIZE, page * PAGE_SIZE - 1);
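The range arithmetic is easy to get wrong by one, so it is worth isolating in a helper. This is my own naming, not a Supabase API:

```typescript
// Convert a 1-based page number into the inclusive [from, to]
// pair that supabase-js .range() expects.
function pageToRange(page: number, pageSize: number): [number, number] {
  const from = (page - 1) * pageSize;
  const to = from + pageSize - 1;
  return [from, to];
}
```

So page 1 with a page size of 20 maps to rows 0 through 19, and page 3 to rows 40 through 59.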
For infinite scroll, cursor-based pagination is even faster because OFFSET gets slower as you go deeper into the table:
// Cursor-based: always fast regardless of page depth
const { data } = await supabase
  .from('posts')
  .select('id, title, slug, created_at')
  .order('created_at', { ascending: false })
  .lt('created_at', lastSeenCreatedAt) // cursor
  .limit(20);
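The cursor logic itself is independent of Supabase. Here is a minimal in-memory sketch of the same pattern, showing how each page's last created_at feeds the next request:

```typescript
type Post = { id: number; created_at: string };

// One "page": rows strictly older than the cursor, newest first.
// Mirrors .order('created_at', { ascending: false }).lt(...).limit(n).
function fetchPage(rows: Post[], cursor: string | null, limit: number): Post[] {
  return rows
    .filter((r) => cursor === null || r.created_at < cursor)
    .sort((a, b) => (a.created_at < b.created_at ? 1 : -1))
    .slice(0, limit);
}

const posts: Post[] = [
  { id: 1, created_at: '2024-01-01' },
  { id: 2, created_at: '2024-01-02' },
  { id: 3, created_at: '2024-01-03' },
  { id: 4, created_at: '2024-01-04' },
];

const page1 = fetchPage(posts, null, 2);                // newest two: ids 4, 3
const page2 = fetchPage(posts, page1[1].created_at, 2); // next two:   ids 2, 1
```

One caveat for production: if created_at values can tie, add a secondary sort and cursor column (typically id) so rows are never skipped at page boundaries.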
Putting It All Together
Here is what a well-optimized Supabase query looks like:
// Good: specific columns, pagination, relational data in one query
const { data: posts, count } = await supabase
  .from('posts')
  .select(
    `
    id,
    title,
    slug,
    excerpt,
    created_at,
    author:profiles (name, avatar_url)
  `,
    { count: 'exact' }
  )
  .eq('status', 'published')
  .order('created_at', { ascending: false })
  .range(0, 19);
And the supporting index:
CREATE INDEX CONCURRENTLY idx_posts_published_created
ON posts (created_at DESC)
WHERE status = 'published';
The WHERE status = 'published' makes this a partial index — smaller and faster because it only indexes the rows your query actually touches. It also means status does not need to appear in the index key at all: the predicate already guarantees every indexed row is published.
Run these optimizations in order — missing indexes first (highest impact), then RLS policies, then query structure. Most apps go from 2-second queries to under 100ms after fixing just the first two.