Background Jobs and Async Task Patterns with Next.js and Supabase
Build background job processing and async task patterns with Next.js and Supabase. Use database queues, pg_cron, and Edge Functions without external services.
Serverless functions are great for request-response cycles. They're not designed for sending 10,000 emails, processing uploaded videos, generating PDF reports, or any task that takes more than a few seconds. Vercel's execution limits will cut you off mid-task, and your users will see errors.
The standard answer is "add a job queue service" — but that means another service, another bill, another integration to maintain. If you're already on Supabase, you have most of what you need: a Postgres database with pg_cron, Edge Functions with longer execution limits, and Realtime for status updates. This guide shows you how to build a complete async task system without leaving the Supabase ecosystem.
Estimated read time: 15 minutes
Prerequisites
- Supabase Pro plan or above (required for pg_cron)
- Next.js 14+ with App Router
- Supabase Edge Functions CLI (supabase CLI v1.100+)
- Basic familiarity with Supabase Edge Functions
The Core Pattern: Database-Backed Job Queue
The most reliable pattern for background jobs is a database table that acts as a queue. Jobs are inserted by your Next.js app, processed by a worker (Edge Function or pg_cron), and their status is tracked in the same table.
This gives you:
- Durability (jobs survive crashes)
- Visibility (query the table to see job status)
- Retry logic (update the row and reprocess)
- No external dependencies
The Jobs Table
```sql
CREATE TYPE job_status AS ENUM ('pending', 'processing', 'completed', 'failed');

CREATE TABLE jobs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  type TEXT NOT NULL,              -- 'send_email', 'generate_report', etc.
  payload JSONB NOT NULL,          -- job-specific data
  status job_status NOT NULL DEFAULT 'pending',
  priority INT NOT NULL DEFAULT 0, -- higher = processed first
  attempts INT NOT NULL DEFAULT 0,
  max_attempts INT NOT NULL DEFAULT 3,
  error TEXT,                      -- last error message
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now(),
  scheduled_for TIMESTAMPTZ DEFAULT now(), -- for delayed jobs
  completed_at TIMESTAMPTZ
);

-- Index for the worker query
CREATE INDEX idx_jobs_worker ON jobs (status, priority DESC, scheduled_for)
  WHERE status = 'pending';

-- RLS: only service role can read/write jobs
ALTER TABLE jobs ENABLE ROW LEVEL SECURITY;
-- No policies = only service role (bypasses RLS) can access
```
Having no RLS policies means only the service role (which bypasses RLS) can access this table. Your Next.js Server Actions use the service role to enqueue jobs, and your Edge Function worker uses it to process them.
Enqueuing Jobs from Next.js
```typescript
// src/lib/jobs.ts
import { createClient } from '@supabase/supabase-js'

// Service role client — server-side only
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

type JobType = 'send_email' | 'generate_report' | 'process_upload'

interface EnqueueOptions {
  priority?: number
  delaySeconds?: number
  maxAttempts?: number
}

export async function enqueueJob(
  type: JobType,
  payload: Record<string, unknown>,
  options: EnqueueOptions = {}
) {
  const scheduledFor = options.delaySeconds
    ? new Date(Date.now() + options.delaySeconds * 1000).toISOString()
    : new Date().toISOString()

  const { data, error } = await supabase
    .from('jobs')
    .insert({
      type,
      payload,
      priority: options.priority ?? 0,
      max_attempts: options.maxAttempts ?? 3,
      scheduled_for: scheduledFor,
    })
    .select('id')
    .single()

  if (error) throw error
  return data.id
}
```
Usage in a Server Action:
```typescript
// app/reports/actions.ts
'use server'

import { enqueueJob } from '@/lib/jobs'
import { createClient } from '@/lib/supabase/server'

export async function requestReport(reportType: string) {
  const supabase = await createClient()
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Unauthorized')

  const jobId = await enqueueJob('generate_report', {
    userId: user.id,
    reportType,
    requestedAt: new Date().toISOString(),
  })

  return { jobId }
}
```
The client gets back a jobId it can use to poll for status.
The Worker: Supabase Edge Function
```typescript
// supabase/functions/process-jobs/index.ts
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'

const supabase = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
)

Deno.serve(async () => {
  // Claim a batch of pending jobs atomically
  const { data: jobs, error } = await supabase.rpc('claim_jobs', {
    batch_size: 5,
  })

  if (error) {
    return new Response(JSON.stringify({ error: error.message }), { status: 500 })
  }
  if (!jobs || jobs.length === 0) {
    return new Response(JSON.stringify({ processed: 0 }), { status: 200 })
  }

  const results = await Promise.allSettled(
    jobs.map((job: any) => processJob(job))
  )
  const processed = results.filter((r) => r.status === 'fulfilled').length
  const failed = results.filter((r) => r.status === 'rejected').length

  return new Response(JSON.stringify({ processed, failed }), { status: 200 })
})

async function processJob(job: any) {
  try {
    switch (job.type) {
      case 'send_email':
        await handleSendEmail(job.payload)
        break
      case 'generate_report':
        await handleGenerateReport(job.payload)
        break
      default:
        throw new Error(`Unknown job type: ${job.type}`)
    }

    // Mark as completed
    await supabase
      .from('jobs')
      .update({
        status: 'completed',
        completed_at: new Date().toISOString(),
        updated_at: new Date().toISOString(),
      })
      .eq('id', job.id)
  } catch (err: any) {
    // claim_jobs already incremented attempts when it claimed this job,
    // so job.attempts reflects the attempt that just failed
    const shouldRetry = job.attempts < job.max_attempts

    await supabase
      .from('jobs')
      .update({
        status: shouldRetry ? 'pending' : 'failed',
        error: err.message,
        // Exponential backoff: 30s, 2min, 8min
        scheduled_for: shouldRetry
          ? new Date(Date.now() + Math.pow(4, job.attempts - 1) * 30000).toISOString()
          : undefined, // undefined keys are dropped from the update once the job has failed for good
        updated_at: new Date().toISOString(),
      })
      .eq('id', job.id)

    throw err
  }
}

async function handleSendEmail(payload: any) {
  // Your email sending logic here
  console.log('Sending email to', payload.to)
}

async function handleGenerateReport(payload: any) {
  // Your report generation logic here
  console.log('Generating report for user', payload.userId)
}
```
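The backoff schedule (30 seconds after the first failure, 2 minutes after the second, 8 minutes after the third) can be sanity-checked as a small pure function. The backoffMs name below is mine, for illustration only; it is not part of the worker file:

```typescript
// Delay before retrying a job whose Nth attempt (1-indexed) just failed:
// 30s after the 1st failure, 2min after the 2nd, 8min after the 3rd.
function backoffMs(attempt: number): number {
  return Math.pow(4, attempt - 1) * 30_000
}
```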
The Atomic Job Claim Function
The worker uses a Postgres function to atomically claim jobs, preventing two workers from processing the same job:
```sql
CREATE OR REPLACE FUNCTION claim_jobs(batch_size INT DEFAULT 5)
RETURNS SETOF jobs
LANGUAGE sql
AS $$
  UPDATE jobs
  SET
    status = 'processing',
    attempts = attempts + 1,
    updated_at = now()
  WHERE id IN (
    SELECT id FROM jobs
    WHERE status = 'pending'
      AND scheduled_for <= now()
    ORDER BY priority DESC, created_at ASC
    LIMIT batch_size
    FOR UPDATE SKIP LOCKED -- critical: skip rows locked by other workers
  )
  RETURNING *;
$$;
```
`FOR UPDATE SKIP LOCKED` is the key. It skips any rows that are already locked by another transaction, making this safe to run from multiple concurrent workers.
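One hardening note, worth verifying against your project's grants: Postgres grants EXECUTE on new functions to PUBLIC by default, and Supabase exposes public-schema functions through PostgREST's rpc endpoint, so as written any client could call claim_jobs. A sketch of restricting it to the worker:

```sql
-- Only the worker (service role) should be able to claim jobs
REVOKE EXECUTE ON FUNCTION claim_jobs(INT) FROM PUBLIC, anon, authenticated;
GRANT EXECUTE ON FUNCTION claim_jobs(INT) TO service_role;
```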
Scheduling the Worker with pg_cron
```sql
-- Run the worker every minute
SELECT cron.schedule(
  'process-jobs',
  '* * * * *',
  $$
  SELECT net.http_post(
    url := 'https://[project-ref].supabase.co/functions/v1/process-jobs',
    headers := jsonb_build_object(
      'Authorization', 'Bearer ' || current_setting('app.service_role_key'),
      'Content-Type', 'application/json'
    ),
    body := '{}'::jsonb
  );
  $$
);
```
This uses Supabase's pg_net extension to make an HTTP call to your Edge Function every minute. The Edge Function processes a batch of jobs and returns.
To store the service role key as a Postgres setting:
```sql
ALTER DATABASE postgres SET app.service_role_key = 'your-service-role-key';
```
[NEEDS VERIFICATION: confirm pg_net is available on all Supabase Pro plans]
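Once the schedule is in place, you can confirm it is actually firing: pg_cron keeps per-run history (in cron.job_run_details on pg_cron 1.4+, per the pg_cron docs; this table grows indefinitely, so trim it periodically). For example:

```sql
-- Check the last few runs of the worker schedule
SELECT j.jobname, d.status, d.return_message, d.start_time
FROM cron.job_run_details d
JOIN cron.job j USING (jobid)
WHERE j.jobname = 'process-jobs'
ORDER BY d.start_time DESC
LIMIT 10;
```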
Polling Job Status from the Client
```typescript
// src/hooks/useJobStatus.ts
'use client'

import { createClient } from '@/lib/supabase/client'
import { useEffect, useState } from 'react'

type JobStatus = 'pending' | 'processing' | 'completed' | 'failed'

export function useJobStatus(jobId: string | null) {
  const [status, setStatus] = useState<JobStatus | null>(null)
  const [error, setError] = useState<string | null>(null)

  useEffect(() => {
    if (!jobId) return
    const supabase = createClient()

    // Subscribe to realtime changes on this specific job
    const channel = supabase
      .channel(`job-${jobId}`)
      .on(
        'postgres_changes',
        {
          event: 'UPDATE',
          schema: 'public',
          table: 'jobs',
          filter: `id=eq.${jobId}`,
        },
        (payload) => {
          setStatus(payload.new.status)
          if (payload.new.error) setError(payload.new.error)
        }
      )
      .subscribe()

    // Also fetch current status immediately
    supabase
      .from('jobs')
      .select('status, error')
      .eq('id', jobId)
      .single()
      .then(({ data }) => {
        if (data) {
          setStatus(data.status)
          setError(data.error)
        }
      })

    return () => {
      supabase.removeChannel(channel)
    }
  }, [jobId])

  return { status, error }
}
```
This combines an immediate fetch with a Realtime subscription, so the UI updates the moment the job status changes.
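One caveat: the jobs table was created with RLS enabled and no policies, so the browser client used here cannot select the row or receive postgres_changes events (Realtime enforces RLS for authenticated subscribers). A minimal sketch of the extra setup, assuming your payloads always carry the requesting user's id under a userId key, as in the enqueue example:

```sql
-- Let signed-in users read (never write) their own jobs
CREATE POLICY "read own jobs" ON jobs
  FOR SELECT TO authenticated
  USING (payload->>'userId' = auth.uid()::text);

-- Realtime only broadcasts changes for tables in this publication
ALTER PUBLICATION supabase_realtime ADD TABLE jobs;
```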
[INTERNAL LINK: nextjs-supabase-realtime-collaboration]
Pattern: Fire-and-Forget from Next.js
For simpler cases where you don't need a persistent queue, you can invoke an Edge Function asynchronously from a Server Action:
```typescript
// app/actions.ts
'use server'

export async function triggerBackgroundTask(data: Record<string, unknown>) {
  // Don't await — fire and forget
  fetch(
    `${process.env.NEXT_PUBLIC_SUPABASE_URL}/functions/v1/background-task`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.SUPABASE_SERVICE_ROLE_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    }
  ).catch(console.error) // log errors but don't block

  return { queued: true }
}
```
This works for non-critical tasks where you don't need retry logic or status tracking. One caveat: serverless runtimes may freeze as soon as the response is sent, so an un-awaited fetch is not guaranteed to complete; on Vercel you can pass the promise to `waitUntil` from `@vercel/functions` to keep the function alive until it settles. For anything important, use the database queue pattern.
Common Pitfalls
**Not using `FOR UPDATE SKIP LOCKED`.** Without this, multiple workers will claim the same job, causing duplicate processing. This is the most critical part of the pattern.
**Forgetting to handle the `processing` state on restart.** If your worker crashes mid-job, the job stays in `processing` forever. Add a cleanup job that resets jobs stuck in `processing` for more than N minutes back to `pending`.
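Such a cleanup can itself be a pg_cron schedule. The 10-minute threshold and 5-minute cadence below are illustrative; tune them to your longest-running job type:

```sql
-- Every 5 minutes, return jobs stuck in 'processing' to the queue
SELECT cron.schedule(
  'reap-stuck-jobs',
  '*/5 * * * *',
  $$
  UPDATE jobs
  SET status = 'pending', updated_at = now()
  WHERE status = 'processing'
    AND updated_at < now() - interval '10 minutes'
  $$
);
```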
**Using the anon key in the worker.** The jobs table has no RLS policies, so only the service role can access it. Using the anon key will return empty results, not an error.
**Not indexing `scheduled_for`.** The worker query filters on `scheduled_for <= now()`. Without an index, this is a full table scan on every worker invocation; the partial `idx_jobs_worker` index created earlier already covers it.
Summary and Next Steps
The database queue pattern — jobs table + atomic claim function + Edge Function worker + pg_cron scheduler — gives you a production-grade async task system entirely within Supabase. No external services, no additional billing, full visibility into job state.
For high-throughput scenarios (thousands of jobs per minute), you'd eventually want a dedicated queue service. But for most SaaS apps, this pattern handles the load comfortably.
Related reading:
- [INTERNAL LINK: nextjs-supabase-edge-functions-guide]
- [INTERNAL LINK: supabase-postgres-functions-triggers-guide]
- [INTERNAL LINK: nextjs-supabase-webhook-event-architecture]