Next.js 15 Partial Prerendering: The Complete Production Guide
Developer Guide


A complete Partial Prerendering (PPR) guide for Next.js 15. Covers the static shell / dynamic holes model, Suspense boundaries, streaming, caching, migration paths, and the real-world tradeoffs.

2026-04-19
38 min read

Next.js 15 Partial Prerendering: The Complete Production Guide#

Partial Prerendering (PPR) is the most interesting rendering primitive Next.js has shipped since the App Router itself. It is also the most misunderstood.

Most tutorials treat it as a "turn on a flag and your site gets faster" feature. That is not the whole picture. PPR is a rendering model, and using it well requires understanding where the shell ends, where the hole begins, and what the streaming contract actually is.

This guide walks through the mental model, the code patterns, the migration path, and the production traps — with real examples from an app I moved from ISR to PPR in early 2026.

The Core Idea in 60 Seconds#

A traditional page in Next.js is either:

  • Static — rendered at build time, served from the CDN, same for everyone
  • Dynamic — rendered at request time, served from the origin, personalized

PPR lets a single page be both:

  • A static shell prerendered at build time (layout, nav, above-the-fold content)
  • One or more dynamic holes rendered at request time (user menu, personalized feed, cart count)

The response streams: the shell arrives instantly from the CDN, and the dynamic holes arrive as they resolve on the server. The user sees the shell paint immediately and the holes fill in shortly after.

┌─────────────────────────────────────┐
│  STATIC SHELL (build time, CDN)     │
│  ┌──────────────┐  ┌─────────────┐  │
│  │  Nav         │  │ User menu   │  │
│  │              │  │ [DYNAMIC]   │  │
│  └──────────────┘  └─────────────┘  │
│                                     │
│  Hero content (static)              │
│                                     │
│  ┌────────────────────────────────┐ │
│  │ Personalized feed [DYNAMIC]    │ │
│  └────────────────────────────────┘ │
│                                     │
│  Footer (static)                    │
└─────────────────────────────────────┘

Why PPR Exists#

Before PPR, you had three bad options for a page with both shared and personalized content:

  1. Full SSR: every request renders the whole page on the server. Slow first paint, high origin load. Your static logo takes just as long to arrive as the personalized content.
  2. Full static + client-side fetch: page is instant but flashes empty placeholders until the client fetches data. Layout shift, worse Core Web Vitals.
  3. ISR + client-side patches: some things cached, others hydrated on the client. Complex to reason about.

PPR gives you the first-paint speed of static with the personalization of SSR, in one response, with the CDN doing the heavy lifting for the shell.

Enabling PPR#

In next.config.js:

/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    ppr: 'incremental',
  },
}

export default nextConfig

The 'incremental' value lets you opt in per-route. The alternative is true (enable for all routes), which I do not recommend — you want to migrate route by route.

Then in the specific route segment:

// app/dashboard/page.tsx
export const experimental_ppr = true

export default function DashboardPage() {
  return (
    <div>
      <StaticHero />
      <Suspense fallback={<FeedSkeleton />}>
        <PersonalizedFeed />
      </Suspense>
    </div>
  )
}

That is the whole on-switch. Now we need to understand what happens underneath.

How PPR Actually Works#

At build time#

Next.js walks your component tree. Anything above a <Suspense> boundary is considered part of the static shell and is prerendered to HTML at build time.

Inside a <Suspense> boundary, Next.js checks whether the component can be prerendered. If any of these happen inside the boundary, it becomes a dynamic hole:

  • Reading cookies via cookies()
  • Reading headers via headers()
  • Reading search params via the searchParams prop
  • Calling unstable_noStore()
  • A fetch with { cache: 'no-store' }

When a dynamic hole is detected, Next.js prerenders only the fallback at build time. The actual component is rendered at request time and streamed into the shell.

At request time#

A request comes in. The CDN serves the static shell instantly (it was already prerendered). In parallel, the origin server renders the dynamic holes and streams the HTML into the response. The browser receives:

[static shell opening tags]
[static shell content]
[<Suspense fallback>]
[<!-- dynamic hole placeholder -->]
[static shell closing tags]

... then later, streamed into the same response:

[<script>$RC('B:0', 'S:0')</script>]  <-- React swaps the Suspense fallback with the real content
[rendered dynamic hole content]

React uses a small runtime to swap the fallback with the real content as the stream delivers it. From the user's point of view, the shell renders instantly and the dynamic parts fill in without a layout shift.
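The single-response delivery can be sketched without Next.js at all. This toy TypeScript simulation (every name here is made up for illustration) flushes the shell chunk immediately, then appends the hole's content to the same response once its data resolves:

```typescript
// Toy model of one streamed PPR response: the shell flushes first,
// the dynamic hole's content is appended to the same stream later.
async function renderPage(): Promise<string[]> {
  const chunks: string[] = []

  // 1. Shell (already prerendered): includes the fallback placeholder.
  chunks.push('<main>shell<div id="hole">loading…</div></main>')

  // 2. Dynamic hole: resolves on the server while the shell is in flight.
  const feed = await new Promise<string>((resolve) =>
    setTimeout(() => resolve('personalized feed'), 10)
  )

  // 3. Streamed continuation: the content plus the swap instruction.
  chunks.push(`<template>${feed}</template><script>/* swap into #hole */</script>`)
  return chunks
}
```

In the real protocol the swap is performed by React's `$RC` runtime rather than a comment; the point is only that both parts travel in one HTTP response, shell first.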

The Suspense Contract#

A <Suspense> boundary is not a stylistic choice — it is a declaration that this subtree is allowed to be dynamic. Without a Suspense boundary, a dynamic component anywhere in the tree makes the whole page dynamic.

Correct: dynamic wrapped in Suspense#

export const experimental_ppr = true

export default function Page() {
  return (
    <div>
      <StaticHero /> {/* built at build time */}
      <Suspense fallback={<Skeleton />}>
        <UserMenu /> {/* rendered at request time */}
      </Suspense>
      <StaticFooter /> {/* built at build time */}
    </div>
  )
}

Wrong: dynamic outside Suspense#

export const experimental_ppr = true

export default async function Page() {
  const cookieStore = await cookies() // dynamic call at page top
  const theme = cookieStore.get('theme')?.value

  return (
    <div data-theme={theme}>
      <StaticHero />
      <StaticFooter />
    </div>
  )
}

This page has no Suspense boundary, but calls cookies() at the top. Result: the entire page is dynamic. PPR silently downgrades to full SSR. You get none of the benefits of the static shell.

The rule: if any part of a PPR page needs dynamic data, wrap that part in Suspense. Never call cookies(), headers(), or read searchParams at the page's root.
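One way to repair the broken example above, assuming the theme only needs to reach the DOM: keep the shell static and stream a tiny theme-setting component as the only dynamic hole. `StaticHero`, `StaticFooter`, and `ThemeScript` are illustrative names, not real APIs:

```typescript
import { Suspense } from 'react'
import { cookies } from 'next/headers'

export const experimental_ppr = true

export default function Page() {
  return (
    <div>
      <StaticHero />    {/* static shell */}
      <Suspense fallback={null}>
        <ThemeScript /> {/* the only dynamic hole */}
      </Suspense>
      <StaticFooter />  {/* static shell */}
    </div>
  )
}

// Reads the cookie inside the boundary, so only this subtree is dynamic.
async function ThemeScript() {
  const theme = (await cookies()).get('theme')?.value
  if (!theme) return null
  return (
    <script
      dangerouslySetInnerHTML={{
        __html: `document.documentElement.dataset.theme=${JSON.stringify(theme)}`,
      }}
    />
  )
}
```

The hero and footer stay in the CDN-served shell; only the theme attribute arrives via the stream.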

The Async Component Pattern#

The cleanest PPR code pushes dynamic work into small async components:

// app/dashboard/page.tsx
export const experimental_ppr = true

export default function Page() {
  return (
    <div className="dashboard">
      <h1>Dashboard</h1>
      <Suspense fallback={<MenuSkeleton />}>
        <UserMenu />
      </Suspense>
      <Suspense fallback={<FeedSkeleton />}>
        <PersonalizedFeed />
      </Suspense>
      <Suspense fallback={<StatsSkeleton />}>
        <UsageStats />
      </Suspense>
    </div>
  )
}

// components/UserMenu.tsx — async, reads cookies
async function UserMenu() {
  const supabase = await createClient()
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) return <SignInButton />
  return <AuthenticatedMenu user={user} />
}

Every dynamic component is self-contained, async, and wrapped in its own Suspense. They all stream in parallel — UserMenu, PersonalizedFeed, and UsageStats render concurrently on the server.

Streaming Order and UX#

When you have multiple Suspense boundaries, they resolve in whatever order their data arrives. This is usually what you want:

  • The fast query (cached user menu) arrives first and paints
  • The slower query (personalized feed with joins) arrives second

But it also means you can get janky UX if a big expensive section resolves before a smaller above-the-fold section. If this happens, use the unstable_after API to defer non-critical work:

import { unstable_after as after } from 'next/server'

async function PersonalizedFeed() {
  const data = await getFeedData()
  after(() => trackFeedView(data))
  return <FeedList data={data} />
}

The after() callback runs after the response finishes streaming. Your user does not wait for analytics before seeing their feed.

Data Fetching Under PPR#

Next.js 15's fetch extensions integrate tightly with PPR. There are four cases to understand:

Case 1: fetch(url) — static#

Default. Cached at build time. Part of the static shell. Fast.

Case 2: fetch(url, { next: { revalidate: 60 } }) — ISR-ish#

Cached at build, revalidates every 60s. Still part of the static shell if not inside a dynamic boundary.

Case 3: fetch(url, { cache: 'no-store' }) — dynamic#

Always fetches fresh. Forces the component into a dynamic hole. Must be wrapped in Suspense.

Case 4: fetch(url, { next: { tags: ['posts'] } }) — tag-based invalidation#

Cached until revalidateTag('posts') is called. Static shell compatible. Great for PPR.
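The invalidation side of Case 4 lives wherever your writes happen. A sketch, assuming a Server Action and a hypothetical `createPost` database helper:

```typescript
'use server'

import { revalidateTag } from 'next/cache'

export async function publishPost(formData: FormData) {
  await createPost(formData) // hypothetical DB write
  // Purge every fetch tagged 'posts'; the next request
  // rebuilds that portion of the static shell.
  revalidateTag('posts')
}
```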

Supabase specifically#

Supabase queries are not fetch-based, so they do not auto-integrate with the Next.js cache. You have two options:

Option A — unstable_cache:

import { unstable_cache } from 'next/cache'

export const getPublicPosts = unstable_cache(
  async () => {
    const { data } = await supabase.from('posts').select('*').eq('published', true)
    return data
  },
  ['public-posts'],
  { revalidate: 60, tags: ['posts'] }
)

The cached result is part of the static shell. Fast, invalidatable via revalidateTag.

Option B — keep it dynamic:

async function PublicPosts() {
  const supabase = await createClient()
  const { data } = await supabase.from('posts').select('*').eq('published', true)
  return <PostList posts={data} />
}

This component is dynamic because Supabase server clients read cookies. Wrap in Suspense. Slower but always fresh.

For most apps, mix the two: unstable_cache for public content, direct queries for per-user content.
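Put together, a typical public page might look like this sketch — `getPublicPosts` is the `unstable_cache` wrapper from Option A, while `PostList`, `PerUserSidebar`, and `SidebarSkeleton` are hypothetical:

```typescript
import { Suspense } from 'react'

export const experimental_ppr = true

export default async function Page() {
  // Cached query: resolves at build/revalidate time, lives in the static shell.
  const posts = await getPublicPosts()

  return (
    <div>
      <PostList posts={posts} />
      <Suspense fallback={<SidebarSkeleton />}>
        {/* Direct Supabase query reading cookies: a dynamic hole. */}
        <PerUserSidebar />
      </Suspense>
    </div>
  )
}
```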

The dynamic Segment Config and PPR#

Before PPR, you would use export const dynamic = 'force-dynamic' to opt a route out of static. Under PPR this is still possible — it disables PPR for that route and renders everything at request time.

// app/admin/page.tsx
export const dynamic = 'force-dynamic' // no PPR, full SSR

Use this only when you have a specific reason — debugging a PPR issue, or a route that is 100% personalized with no shell worth caching.

Migration: ISR to PPR#

If you are on ISR today, the migration is usually small:

Before (ISR)#

// app/blog/[slug]/page.tsx
export const revalidate = 300

export default async function Page({ params }) {
  const { slug } = await params // params is a Promise in Next.js 15
  const post = await getPost(slug)
  const relatedPosts = await getRelatedPosts(slug)
  const { user } = await getCurrentUser() // makes whole page dynamic

  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
      <RelatedList posts={relatedPosts} />
      {user && <SaveButton userId={user.id} postId={post.id} />}
    </article>
  )
}

The getCurrentUser() call is a problem. It forces the page to be dynamic. The revalidate: 300 is effectively ignored.

After (PPR)#

// app/blog/[slug]/page.tsx
export const experimental_ppr = true
export const revalidate = 300

export default async function Page({ params }) {
  const { slug } = await params // params is a Promise in Next.js 15
  const post = await getPost(slug)
  const relatedPosts = await getRelatedPosts(slug)

  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
      <RelatedList posts={relatedPosts} />
      <Suspense fallback={null}>
        <SaveButtonMaybe postId={post.id} />
      </Suspense>
    </article>
  )
}

async function SaveButtonMaybe({ postId }) {
  const { user } = await getCurrentUser()
  if (!user) return null
  return <SaveButton userId={user.id} postId={postId} />
}

Changes:

  1. Added experimental_ppr = true
  2. Moved the user-specific button into its own async component
  3. Wrapped it in <Suspense>

Now the entire article is in the static shell. Only the save button is dynamic. The shell serves from the CDN; only the save button touches the origin.

Production Traps#

Trap 1: A cookies() call in a layout poisons everything#

app/layout.tsx is a parent of every page. If you call cookies() at the top of the layout, every route becomes dynamic, regardless of experimental_ppr = true on the page.

Move cookie-dependent logic into an async component wrapped in Suspense:

// app/layout.tsx
export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        <Suspense fallback={<HeaderSkeleton />}>
          <AuthenticatedHeader />
        </Suspense>
        {children}
      </body>
    </html>
  )
}

async function AuthenticatedHeader() {
  const cookieStore = await cookies()
  // ...
}

Trap 2: searchParams is dynamic#

Any time you read searchParams, the page becomes dynamic:

export default async function Page({ searchParams }) {
  const { q } = await searchParams // ← this await makes the page dynamic
  return <SearchResults query={q} />
}

Under PPR, either wrap the search results in a separate Suspense boundary, or accept the page is dynamic and skip PPR for this route.
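For the first option, a pattern that works in Next.js 15 is to pass the `searchParams` promise down without awaiting it at the top, and resolve it only inside the boundary (`SearchResults` and `ResultsSkeleton` are placeholders):

```typescript
import { Suspense } from 'react'

export const experimental_ppr = true

export default function Page({
  searchParams,
}: {
  searchParams: Promise<{ q?: string }>
}) {
  return (
    <div>
      <h1>Search</h1> {/* static shell — the promise is not awaited here */}
      <Suspense fallback={<ResultsSkeleton />}>
        <Results searchParams={searchParams} />
      </Suspense>
    </div>
  )
}

async function Results({
  searchParams,
}: {
  searchParams: Promise<{ q?: string }>
}) {
  const { q } = await searchParams // the dynamic read happens inside the hole
  return <SearchResults query={q ?? ''} />
}
```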

Trap 3: Cache headers fight the CDN#

Vercel's CDN respects Cache-Control headers. If your middleware sets Cache-Control: no-store for all requests (common in auth-heavy apps), you defeat the static shell cache.

Let PPR manage cache headers. Only set your own when you have a specific reason, and only on the routes that need them.
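A middleware sketch that scopes the header instead of blanketing everything — the `/account` path is an assumption; adjust the matcher to your own auth-only routes:

```typescript
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const response = NextResponse.next()
  // Opt only this route subtree out of CDN caching;
  // PPR shells elsewhere keep their generated headers.
  response.headers.set('Cache-Control', 'no-store')
  return response
}

export const config = {
  // Middleware (and therefore the header) runs only for these paths.
  matcher: ['/account/:path*'],
}
```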

Trap 4: Client components do not break PPR#

A client component ('use client') inside the shell is fine — it is prerendered to HTML at build time like any server component, then hydrated on the client. No Suspense needed.

But a client component that fetches data via useEffect is not dynamic in the PPR sense. It is a separate client-side fetch after hydration, with all the usual drawbacks (loading flash, hydration shift). Prefer moving the data fetch to a server component if you can.
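The usual refactor: fetch on the server, stream it in as a dynamic hole, and hand the data to the client component as props. `getLikeCount` and `LikeButton` are hypothetical:

```typescript
import { Suspense } from 'react'

// Server component: does the fetch, participates in PPR streaming.
async function LikeSection({ postId }: { postId: string }) {
  const count = await getLikeCount(postId)
  // The client component receives data as props — no useEffect fetch,
  // no loading flash after hydration.
  return <LikeButton postId={postId} initialCount={count} />
}

export function PostFooter({ postId }: { postId: string }) {
  return (
    <Suspense fallback={<span>…</span>}>
      <LikeSection postId={postId} />
    </Suspense>
  )
}
```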

Trap 5: Streaming breaks middleware response headers#

Middleware runs before the response streams. Once a stream is in flight, middleware cannot change the headers. If you were setting Set-Cookie in middleware based on the response, that pattern may not work the same way under PPR. Test carefully.

Performance Observations#

From migrating a content site with 30K daily pageviews from ISR to PPR:

| Metric | Before (ISR) | After (PPR) | Change |
|--------|--------------|-------------|--------|
| TTFB (cached) | 50ms | 45ms | -10% |
| TTFB (uncached) | 800ms | 80ms | -90% |
| LCP (p50) | 1.4s | 0.9s | -36% |
| LCP (p75) | 2.1s | 1.4s | -33% |
| Origin requests/min | 120 | 18 | -85% |

The big win is the 90% drop in TTFB on uncached requests. Those are users hitting a page for the first time from their region's edge. Previously they waited for a full origin render; with PPR they wait only for the shell to stream, which is nearly instant, and the dynamic holes fill in afterward.

The origin request drop is operational. 85% fewer origin renders means 85% less database load, 85% less cost, 85% less chance of a cold start cascade.

Debugging PPR#

The hardest thing about PPR is figuring out why a page went fully dynamic when you expected a static shell.

Check the build output#

next build prints a route list with a symbol:

  • ○ — static
  • ● — static with revalidate
  • λ — dynamic (full SSR)
  • ◐ — partial prerendering

If you expected ◐ (partial prerendering) and got λ, something in the route tree is forcing full dynamic rendering.

Use generateStaticParams strictly#

If the route has a dynamic segment ([slug]), only the params listed in generateStaticParams get the static shell prerendered at build time. New slugs rendered on-demand are fully dynamic on first request.

export async function generateStaticParams() {
  const slugs = await getAllSlugs()
  return slugs.map(slug => ({ slug }))
}

Enable Next.js logs#

NEXT_LOG_LEVEL=debug next build prints why each route is dynamic. You will see lines like:

Route /dashboard is dynamic because cookies() was called in /components/UserAvatar.tsx

That line tells you exactly what to fix.

When NOT to Use PPR#

PPR is not universally better. Skip it when:

  • The page is 100% personalized (account settings, admin dashboards)
  • You have no above-the-fold shell to cache (pure data tables)
  • The page changes on every request anyway (live dashboards, real-time feeds)
  • You are still on Next.js 14 or earlier — stick with the rendering model you have

The cost of PPR is complexity. The benefit is speed and lower origin load. If there is no origin load to reduce, there is no benefit.

  • [INTERNAL LINK: nextjs-supabase-caching-strategies] — how caching interacts with PPR
  • [INTERNAL LINK: nextjs-supabase-performance-optimization-2026] — broader performance wins
  • [INTERNAL LINK: nextjs-15-middleware-patterns-complete-guide] — middleware under the new streaming model
  • [INTERNAL LINK: nextjs-supabase-data-fetching-patterns] — server component data fetching patterns
  • [INTERNAL LINK: nextjs-supabase-ssr-session-management] — SSR-specific session handling
  • [INTERNAL LINK: scaling-nextjs-supabase-0-to-100k-users-playbook] — where PPR fits in the scaling story

Closing Thoughts#

PPR is the first rendering model I have used in Next.js that feels genuinely new — not a spin on SSR or ISR, but a real third option. It takes the dynamic parts of SSR, the speed of static, and combines them at the page level instead of the site level.

The mental model takes a weekend to internalize. The migration on an existing app takes a couple of days per route. The performance payoff is real and measurable.

If you are on Next.js 15 and have not tried PPR, pick one content-heavy route this week and enable it. Measure before and after. The numbers will tell you whether to roll it out further.

And remember the single rule: never call cookies() or headers() above a Suspense boundary. Everything else follows from there.
