AI Integration for Next.js + Supabase Applications
Developer Guide

2026-02-16
40 min read

AI Integration for Next.js + Supabase Applications#
AI is transforming how we build applications. From chat interfaces to semantic search, AI capabilities are becoming essential features. This comprehensive guide teaches you how to integrate AI into Next.js + Supabase applications.

Why AI Integration?#

User Experience:

  • Natural language interfaces
  • Intelligent search
  • Personalized recommendations
  • Automated content generation

Business Value:

  • Reduced support costs
  • Increased engagement
  • Better user retention
  • Competitive advantage

Technical Benefits:

  • Supabase pgvector for vector storage
  • Next.js streaming for real-time responses
  • Edge functions for low latency
  • Scalable architecture

1. AI APIs Overview#

OpenAI#

Models:

  • GPT-4: Most capable, best for complex tasks
  • GPT-3.5 Turbo: Fast and cost-effective
  • GPT-4 Turbo: Balance of capability and speed

Pricing (as of 2026):

  • GPT-4: $0.03/1K input tokens, $0.06/1K output tokens
  • GPT-3.5 Turbo: $0.0005/1K input tokens, $0.0015/1K output tokens
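A small helper makes these rates concrete when budgeting. The table below is hard-coded from the prices above and will go stale as pricing changes; treat it as a back-of-the-envelope sketch, not a billing source:

```typescript
// Per-1K-token rates in dollars, copied from the pricing table above.
// Update these constants whenever OpenAI changes its pricing.
const RATES = {
  'gpt-4': { input: 0.03, output: 0.06 },
  'gpt-3.5-turbo': { input: 0.0005, output: 0.0015 },
} as const

type Model = keyof typeof RATES

// Estimate the dollar cost of a single request.
export function estimateCost(model: Model, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model]
  return (inputTokens / 1000) * rate.input + (outputTokens / 1000) * rate.output
}
```

For example, a GPT-4 call with 1,000 input and 500 output tokens comes to about $0.06, while the same call on GPT-3.5 Turbo costs well under a cent.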

Anthropic Claude#

Models:

  • Claude 3 Opus: Most capable
  • Claude 3 Sonnet: Balanced
  • Claude 3 Haiku: Fast and affordable

Best For: Long context, analysis, coding

Other Providers#

  • Cohere: Embeddings, classification
  • Hugging Face: Open-source models
  • Replicate: Image generation, specialized models

2. OpenAI Integration#

Setup#

npm install openai

// lib/openai.ts
import OpenAI from 'openai'

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

Basic Chat Completion#

// app/api/chat/route.ts
import { openai } from '@/lib/openai'
import { NextResponse } from 'next/server'

export async function POST(request: Request) {
  const { message } = await request.json()

  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful assistant.',
      },
      {
        role: 'user',
        content: message,
      },
    ],
  })

  return NextResponse.json({
    response: completion.choices[0].message.content,
  })
}

Streaming Responses#

// app/api/chat/stream/route.ts
import { openai } from '@/lib/openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

export const runtime = 'edge'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const response = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    stream: true,
    messages,
  })

  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

Related: Integrate OpenAI API with Next.js and Supabase, Build AI Chat Interface in Next.js with Streaming

3. Chat Interfaces#

Client Component#

'use client'

import { useState } from 'react'
import { useChat } from 'ai/react'

export function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat/stream',
  })

  return (
    <div className="flex flex-col h-screen">
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`mb-4 ${
              message.role === 'user' ? 'text-right' : 'text-left'
            }`}
          >
            <div
              className={`inline-block p-3 rounded-lg ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-200 text-black'
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="text-left">
            <div className="inline-block p-3 rounded-lg bg-gray-200">
              Thinking...
            </div>
          </div>
        )}
      </div>
      
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Type your message..."
            className="flex-1 p-2 border rounded"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading}
            className="px-4 py-2 bg-blue-500 text-white rounded"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  )
}

Store Chat History#

// app/api/chat/route.ts
import { createClient } from '@/lib/supabase/server'
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const supabase = createClient()
  const { message, conversationId } = await request.json()

  // Get user
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) {
    return new Response('Unauthorized', { status: 401 })
  }

  // Get conversation history
  const { data: messages } = await supabase
    .from('messages')
    .select('*')
    .eq('conversation_id', conversationId)
    .order('created_at', { ascending: true })

  // Create chat completion
  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      ...(messages ?? []).map(m => ({ role: m.role, content: m.content })),
      { role: 'user', content: message },
    ],
  })

  const assistantMessage = completion.choices[0].message.content

  // Save messages
  await supabase.from('messages').insert([
    {
      conversation_id: conversationId,
      role: 'user',
      content: message,
      user_id: user.id,
    },
    {
      conversation_id: conversationId,
      role: 'assistant',
      content: assistantMessage,
      user_id: user.id,
    },
  ])

  return Response.json({ response: assistantMessage })
}
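The route above assumes `conversations` and `messages` tables exist. One possible schema — the table and column names match the code, but the constraints and RLS policy are suggestions, not requirements:

```sql
-- Assumed schema for the chat-history tables used above
CREATE TABLE conversations (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE messages (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  conversation_id UUID REFERENCES conversations(id) NOT NULL,
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  role TEXT NOT NULL CHECK (role IN ('user', 'assistant')),
  content TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Row Level Security so users only read their own messages
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
CREATE POLICY "own messages" ON messages
  FOR ALL USING (auth.uid() = user_id);
```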

Related: Build AI Chat Interface in Next.js with Streaming, Build AI Chatbot with Next.js and Supabase

4. Vector Search with pgvector#

Enable pgvector#

-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create table with vector column
CREATE TABLE documents (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  content TEXT NOT NULL,
  embedding vector(1536), -- OpenAI embeddings are 1536 dimensions
  metadata JSONB,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Create index for similarity search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops)
  WITH (lists = 100);

Generate Embeddings#

// lib/embeddings.ts
import { openai } from './openai'

export async function generateEmbedding(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: text,
  })

  return response.data[0].embedding
}
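Embedding models cap input length, so long documents need to be split before embedding. A naive character-window chunker is enough to start with; the character budget below is a rough stand-in for the model's token limit, and a real tokenizer gives more accurate splits:

```typescript
// Naive character-window chunker for long documents before embedding.
// Overlap between chunks helps preserve context across boundaries.
export function chunkText(text: string, maxChars = 2000, overlap = 200): string[] {
  const chunks: string[] = []
  let start = 0
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxChars))
    start += maxChars - overlap
  }
  return chunks
}
```

Embed each chunk separately and store one row per chunk, keeping a pointer to the parent document in the `metadata` column.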

Store Documents with Embeddings#

// app/api/documents/route.ts
import { createClient } from '@/lib/supabase/server'
import { generateEmbedding } from '@/lib/embeddings'

export async function POST(request: Request) {
  const supabase = createClient()
  const { content, metadata } = await request.json()

  // Generate embedding
  const embedding = await generateEmbedding(content)

  // Store document
  const { data, error } = await supabase
    .from('documents')
    .insert({
      content,
      embedding,
      metadata,
    })
    .select()
    .single()

  if (error) {
    return Response.json({ error: error.message }, { status: 500 })
  }

  return Response.json({ data })
}
Similarity Search#

// app/api/search/route.ts
import { createClient } from '@/lib/supabase/server'
import { generateEmbedding } from '@/lib/embeddings'

export async function POST(request: Request) {
  const supabase = createClient()
  const { query, limit = 5 } = await request.json()

  // Generate query embedding
  const queryEmbedding = await generateEmbedding(query)

  // Search for similar documents
  const { data, error } = await supabase.rpc('match_documents', {
    query_embedding: queryEmbedding,
    match_threshold: 0.7,
    match_count: limit,
  })

  if (error) {
    return Response.json({ error: error.message }, { status: 500 })
  }

  return Response.json({ results: data })
}
Create the match_documents Function#

-- Create similarity search function
CREATE OR REPLACE FUNCTION match_documents(
  query_embedding vector(1536),
  match_threshold float,
  match_count int
)
RETURNS TABLE (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
)
LANGUAGE sql STABLE
AS $$
  SELECT
    id,
    content,
    metadata,
    1 - (embedding <=> query_embedding) AS similarity
  FROM documents
  WHERE 1 - (embedding <=> query_embedding) > match_threshold
  ORDER BY embedding <=> query_embedding
  LIMIT match_count;
$$;

Related: Implement Vector Search with Supabase pgvector, Add AI-Powered Search to Next.js Application

5. RAG (Retrieval-Augmented Generation) Systems#

RAG Architecture#

  1. User asks question
  2. Generate embedding for question
  3. Search vector database for relevant documents
  4. Pass documents + question to LLM
  5. LLM generates answer based on context

Implementation#

// app/api/rag/route.ts
import { createClient } from '@/lib/supabase/server'
import { openai } from '@/lib/openai'
import { generateEmbedding } from '@/lib/embeddings'

export async function POST(request: Request) {
  const { question } = await request.json()
  const supabase = createClient()

  // 1. Generate embedding for question
  const queryEmbedding = await generateEmbedding(question)

  // 2. Search for relevant documents
  const { data: documents } = await supabase.rpc('match_documents', {
    query_embedding: queryEmbedding,
    match_threshold: 0.7,
    match_count: 5,
  })

  // 3. Build context from documents
  const context = (documents ?? [])
    .map((doc) => doc.content)
    .join('\n\n')

  // 4. Generate answer with context
  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [
      {
        role: 'system',
        content: `You are a helpful assistant. Answer questions based on the following context:

${context}

If the answer is not in the context, say "I don't have enough information to answer that."`,
      },
      {
        role: 'user',
        content: question,
      },
    ],
  })

  return Response.json({
    answer: completion.choices[0].message.content,
    sources: documents.map(d => ({ id: d.id, content: d.content })),
  })
}
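One caveat with the route above: joining every matched document can overflow the model's context window. A sketch of capping the context at a character budget — the helper name and budget value are assumptions, not part of any API:

```typescript
// Cap the retrieved context at a rough character budget so the prompt
// stays inside the model's context window. ~4 chars per token is a common
// rule of thumb for English text; this is an approximation, not a tokenizer.
export function buildContext(docs: { content: string }[], maxChars = 12000): string {
  const parts: string[] = []
  let total = 0
  for (const doc of docs) {
    if (total + doc.content.length > maxChars) break // docs are ordered by similarity
    parts.push(doc.content)
    total += doc.content.length
  }
  return parts.join('\n\n')
}
```

Because match_documents returns results ordered by similarity, truncating from the end drops the least relevant documents first.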

RAG Client Component#

'use client'

import { useState } from 'react'

export function RAGChat() {
  const [question, setQuestion] = useState('')
  const [answer, setAnswer] = useState('')
  const [sources, setSources] = useState([])
  const [loading, setLoading] = useState(false)

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault()
    setLoading(true)

    const response = await fetch('/api/rag', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ question }),
    })

    const data = await response.json()
    setAnswer(data.answer)
    setSources(data.sources)
    setLoading(false)
  }

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input
          value={question}
          onChange={(e) => setQuestion(e.target.value)}
          placeholder="Ask a question..."
          disabled={loading}
        />
        <button type="submit" disabled={loading}>
          {loading ? 'Thinking...' : 'Ask'}
        </button>
      </form>

      {answer && (
        <div>
          <h3>Answer:</h3>
          <p>{answer}</p>
          
          <h4>Sources:</h4>
          <ul>
            {sources.map((source) => (
              <li key={source.id}>{source.content.substring(0, 100)}...</li>
            ))}
          </ul>
        </div>
      )}
    </div>
  )
}

Related: Build RAG System with Next.js and Supabase, Add AI-Powered Search to Next.js Application

6. Content Generation#

Generate Blog Posts#

// app/api/generate/post/route.ts
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const { topic, keywords } = await request.json()

  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [
      {
        role: 'system',
        content: 'You are an expert content writer. Write engaging, SEO-optimized blog posts.',
      },
      {
        role: 'user',
        content: `Write a blog post about "${topic}". Include these keywords: ${keywords.join(', ')}. 
        
Format:
- Compelling title
- Introduction
- 3-5 main sections with subheadings
- Conclusion
- Use markdown formatting`,
      },
    ],
  })

  return Response.json({
    content: completion.choices[0].message.content,
  })
}

Generate Product Descriptions#

// app/api/generate/description/route.ts
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const { productName, features } = await request.json()

  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'system',
        content: 'You are a marketing copywriter. Write compelling product descriptions.',
      },
      {
        role: 'user',
        content: `Write a product description for "${productName}". 
        
Features:
${features.map((f: string) => `- ${f}`).join('\n')}

Write 2-3 paragraphs that highlight benefits and create desire.`,
      },
    ],
  })

  return Response.json({
    description: completion.choices[0].message.content,
  })
}

Related: Implement AI Content Generation in Next.js, Build AI Chatbot with Next.js and Supabase

7. Image Generation#

DALL-E Integration#

// app/api/generate/image/route.ts
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const { prompt } = await request.json()

  const response = await openai.images.generate({
    model: 'dall-e-3',
    prompt,
    n: 1,
    size: '1024x1024',
    quality: 'standard',
  })

  return Response.json({
    imageUrl: response.data[0].url,
  })
}

Store Generated Images#

// app/api/generate/image/store/route.ts
import { createClient } from '@/lib/supabase/server'
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const supabase = createClient()
  const { prompt } = await request.json()

  // Generate image
  const response = await openai.images.generate({
    model: 'dall-e-3',
    prompt,
    n: 1,
    size: '1024x1024',
  })

  const imageUrl = response.data[0].url

  // Download image
  const imageResponse = await fetch(imageUrl)
  const imageBlob = await imageResponse.blob()

  // Upload to Supabase Storage
  const fileName = `${Date.now()}.png`
  const { data: uploadData, error: uploadError } = await supabase.storage
    .from('generated-images')
    .upload(fileName, imageBlob)

  if (uploadError) {
    return Response.json({ error: uploadError.message }, { status: 500 })
  }

  // Get public URL
  const { data: { publicUrl } } = supabase.storage
    .from('generated-images')
    .getPublicUrl(fileName)

  return Response.json({ imageUrl: publicUrl })
}

Related: Add AI Image Generation to Next.js App, Implement AI Content Generation in Next.js

8. Recommendations Engine#

Content-Based Recommendations#

// app/api/recommendations/route.ts
import { createClient } from '@/lib/supabase/server'
import { generateEmbedding } from '@/lib/embeddings'

export async function GET(request: Request) {
  const supabase = createClient()
  const { data: { user } } = await supabase.auth.getUser()

  if (!user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 })
  }

  // Get user's interaction history
  const { data: interactions } = await supabase
    .from('user_interactions')
    .select('item_id, item_content')
    .eq('user_id', user.id)
    .limit(10)

  // Generate embedding from user's interests
  const userInterests = (interactions ?? [])
    .map(i => i.item_content)
    .join(' ')
  
  const userEmbedding = await generateEmbedding(userInterests)

  // Find similar items
  const { data: recommendations } = await supabase.rpc('match_items', {
    query_embedding: userEmbedding,
    match_threshold: 0.6,
    match_count: 10,
  })

  return Response.json({ recommendations })
}
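This route calls a match_items function that is not defined elsewhere in this guide. A possible definition, mirroring match_documents and assuming an `items` table with `content` and `embedding vector(1536)` columns:

```sql
-- Assumed companion to match_documents, over an `items` table
CREATE OR REPLACE FUNCTION match_items(
  query_embedding vector(1536),
  match_threshold float,
  match_count int
)
RETURNS TABLE (id uuid, content text, similarity float)
LANGUAGE sql STABLE
AS $$
  SELECT
    id,
    content,
    1 - (embedding <=> query_embedding) AS similarity
  FROM items
  WHERE 1 - (embedding <=> query_embedding) > match_threshold
  ORDER BY embedding <=> query_embedding
  LIMIT match_count;
$$;
```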

Related: Implement AI-Powered Recommendations Next.js, Add AI-Powered Search to Next.js Application

9. Cost Optimization#

Caching Responses#

// lib/cache.ts
import { createClient } from '@/lib/supabase/server'

export async function getCachedResponse(key: string) {
  const supabase = createClient()
  
  const { data } = await supabase
    .from('ai_cache')
    .select('response')
    .eq('key', key)
    .single()

  return data?.response
}

export async function setCachedResponse(key: string, response: string) {
  const supabase = createClient()
  
  await supabase
    .from('ai_cache')
    .upsert({ key, response, expires_at: new Date(Date.now() + 86400000) }) // 24-hour TTL
}
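The cache needs a stable key for each request. Hashing the model name together with the prompt is one common approach; the key format here is a suggestion, not a Supabase convention:

```typescript
// Derive a deterministic cache key by hashing the model and prompt together.
// Identical requests hash to the same key, so they hit the cache.
import { createHash } from 'node:crypto'

export function cacheKey(model: string, prompt: string): string {
  return createHash('sha256').update(`${model}:${prompt}`).digest('hex')
}
```

Check getCachedResponse(cacheKey(model, prompt)) before calling the API, and call setCachedResponse with the same key afterwards.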

Use Cheaper Models#

// Use GPT-3.5 for simple tasks
const simpleCompletion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo', // Much cheaper than GPT-4
  messages: [...],
})

// Use GPT-4 only for complex tasks
const complexCompletion = await openai.chat.completions.create({
  model: 'gpt-4-turbo-preview',
  messages: [...],
})

Limit Token Usage#

const completion = await openai.chat.completions.create({
  model: 'gpt-4-turbo-preview',
  messages: [...],
  max_tokens: 500, // Limit response length
  temperature: 0.7,
})
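To pick a sensible max_tokens (and predict cost), you need at least a rough count of input tokens. A common heuristic is ~4 characters per token for English text — an approximation only; use a real tokenizer when accuracy matters:

```typescript
// Very rough token estimate: ~4 characters per token for English text.
// Good enough for quick budgeting before a request, not for billing.
export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}
```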

10. Ethical Considerations#

Content Moderation#

// app/api/moderate/route.ts
import { openai } from '@/lib/openai'

export async function POST(request: Request) {
  const { text } = await request.json()

  const moderation = await openai.moderations.create({
    input: text,
  })

  const flagged = moderation.results[0].flagged

  if (flagged) {
    return Response.json({
      allowed: false,
      categories: moderation.results[0].categories,
    })
  }

  return Response.json({ allowed: true })
}

Rate Limiting#

// lib/rate-limit.ts
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '1 h'), // 10 requests per hour
})

export async function checkAIRateLimit(userId: string) {
  const { success } = await ratelimit.limit(userId)
  return success
}

Transparency#

Always inform users when they're interacting with AI:

<div className="ai-disclaimer">
  ⚠️ This response was generated by AI. Please verify important information.
</div>

Frequently Asked Questions (FAQ)#

How do I integrate OpenAI with Next.js and Supabase?#

Install the OpenAI SDK (npm install openai), create an API client with your OpenAI API key, and use it in Next.js API routes or Server Components. Store conversation history and user data in Supabase for persistence and user management.

What's the difference between GPT-4 and GPT-3.5?#

GPT-4 is more capable and accurate but costs 10-20x more than GPT-3.5 Turbo. Use GPT-3.5 for simple tasks like basic chat, summaries, and classifications. Reserve GPT-4 for complex reasoning, code generation, and tasks requiring high accuracy.

How do I implement vector search with Supabase?#

Enable the pgvector extension in Supabase, create a table with a vector column (vector(1536) for OpenAI embeddings), generate embeddings using OpenAI's embedding API, and use cosine similarity search with CREATE INDEX USING ivfflat for performance.

What is RAG and when should I use it?#

RAG (Retrieval-Augmented Generation) combines vector search with LLMs to answer questions based on your own data. Use it when you need AI to reference specific documents, knowledge bases, or proprietary information not in the LLM's training data.

How much does AI integration cost?#

Costs vary by usage. GPT-3.5 Turbo costs roughly $0.0005-0.0015 per 1K tokens, GPT-4 roughly $0.03-0.06 per 1K tokens, and embeddings around $0.0001 per 1K tokens. A typical chat app with 1,000 daily users might cost $50-200/month depending on usage patterns.

Can I use AI features without exposing my API keys?#

Yes, always call AI APIs from server-side code (API routes, Server Components, Edge Functions). Never expose API keys in client-side code. Use environment variables and keep them server-side only.

How do I implement streaming responses?#

Use OpenAI's streaming API with stream: true, then return a StreamingTextResponse from Vercel's ai package. This sends tokens to the client as they're generated, providing a better user experience for long responses.

What's the best way to cache AI responses?#

Cache responses in Supabase with a hash of the input as the key. Set expiration times based on how often content changes. This reduces API costs and improves response times for repeated queries.

How do I moderate AI-generated content?#

Use OpenAI's Moderation API to check content for harmful material before displaying it. Implement rate limiting to prevent abuse, and always show disclaimers that content is AI-generated.

Can I fine-tune models with my own data?#

Yes, OpenAI supports fine-tuning GPT-3.5 and GPT-4 with your own training data. However, for most applications, RAG (using vector search with your data) is more cost-effective and easier to maintain than fine-tuning.

How do I handle AI API errors?#

Implement retry logic with exponential backoff for rate limits and temporary failures. Show user-friendly error messages, log errors for debugging, and have fallback responses when AI services are unavailable.
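A minimal retry helper with exponential backoff might look like this — the attempt count and base delay are arbitrary starting points, and production code should also respect any Retry-After header the API returns:

```typescript
// Retry a flaky async call with exponential backoff.
// Delays double each attempt: base, 2x base, 4x base, ...
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError // all attempts failed
}
```

Wrap the openai.chat.completions.create call in withRetry so transient failures are retried before surfacing an error to the user.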

What's the difference between embeddings and completions?#

Embeddings convert text into numerical vectors for similarity search and clustering. Completions generate new text based on prompts. Use embeddings for search and recommendations, completions for chat and content generation.

Conclusion#

AI integration opens up endless possibilities for your applications. Start with simple chat interfaces, then add semantic search, RAG systems, and content generation as needed.

Remember: AI is a tool to enhance user experience, not replace human judgment. Use it responsibly, optimize costs, and always prioritize user privacy.

Build intelligent applications. Start integrating AI today.
