
From ISR to Cache Components

Incremental Static Regeneration

ISR changed the economics of publishing. "Dynamic at the speed of static" was the pitch. Instead of generating every page at build time, you could render on the first hit, cache the result, and let traffic gradually fill in the long tail. For sites with huge slug counts, that made static generation practical in places where it simply was not before.

ISR is still a great fit when:

  • Personalization is low. If the page is the same for all users, full-page caching is extremely effective.
  • Data changes infrequently. When updates are rare, the cost of cache turnover is low.
  • Reads far outnumber writes. When most traffic is repeated hits on a small subset of pages, the cache hit ratio is high and the economics work in your favor.
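For reference, classic time-based ISR is a single segment-config export on the route. This is a minimal sketch: the one-hour window and the posts API URL are placeholders, and the Promise-typed params follow the modern App Router signature.

```tsx
// app/posts/[slug]/page.tsx
// Classic ISR: serve the cached page, and re-render it in the
// background at most once per hour.
export const revalidate = 3600;

export default async function Page({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const post = await fetch(`https://example.com/api/posts/${slug}`).then((r) => r.json());
  return <article>{post.title}</article>;
}
```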

That works great, until it doesn't.

Page-level caching starts to feel awkward when:

  • Highly personalized content. When a page depends heavily on request-time data, the number of possible variants grows, which means less reuse.
  • Frequent deploys. On large route sets, deployment-driven cache turnover can mean a lot of re-warming.
  • Write-heavy long tails. Pages visited once or twice still pay the full render-and-persist cost for very little reuse.
  • Mostly shared UI. On many pages, only part of the response is truly request-specific. The nav, layout, and styles are identical across every location; only the weather data (or product info, or pricing) changes.

The key insight is simple: the page is not always the best unit of caching.

Between high-cardinality slugs, frequent deploys, and frequently changing data, the page can become too blunt a unit of caching. A payment status change, a profile edit, or a banner swap can force the route to pay the page-level cost of rebuilding cached output, even when most of the UI is still reusable.

Granular Caching with Partial Prerendering

With cacheComponents enabled in your next.config.*, Next.js can split a route into prerendered and request-time segments. That is the important shift: not moving beyond ISR's core idea, but applying it with finer granularity. Instead of caching the whole page or giving it up entirely, you can keep the reusable parts prerendered and let only the truly dynamic parts render per request.
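Enabling the flag is a one-line change. The sketch below assumes Next.js 16, where cacheComponents is a top-level option in next.config.ts; in earlier canaries the flag lived under experimental.

```ts
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Opt the app into Cache Components (and with it, Partial Prerendering).
  cacheComponents: true,
};

export default nextConfig;
```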

The shift is conceptual as much as technical: you do not actually want a static site. You want a fast site. Static HTML is still useful, but it is now one optimization in a broader system that includes streamed Server Components, route shells, data caches, and runtime reuse.

Traditional ISR treated the page as the unit of reuse. PPR keeps the same basic idea, but applies it to parts of the route tree. At a high level, the parts of a route that can resolve without request-time data can be prerendered ahead of time, while APIs like cookies(), headers(), and request-dependent searchParams keep their section on the request-time side of the boundary.

What can be resolved ahead of time becomes the static shell: layouts, nav, styles, and other shared structure that can be served immediately. What depends on request-specific data gets pushed behind <Suspense> boundaries and streams in at runtime through React Server Components. The result is that a single initial response can contain both prerendered, ISR-like content and dynamic content.

Example: A weather route that shows today's weather can keep its shared layout stable while the forecast streams in per request. The navigation, footer, and nested layouts can be reused where the cache boundaries allow it, while the weather data is fetched and rendered at request time.

In practice, that creates a cleaner split between what the route always knows and what it can only know once a request arrives. When Next.js flags request-time data access during prerendering, that is often the signal to introduce <Suspense> and let only that section stream separately, instead of giving up prerendering for the entire route.

For example:

// app/[id]/page.tsx
import { Suspense } from "react";

export default function Page({ params }: { params: Promise<{ id: string }> }) {
  return (
    <>
      <StaticLayoutStuff /> {/* Part of the shell */}
      <Suspense fallback={<Skeleton />}>
        <DynamicWeatherSection params={params} /> {/* Streams at runtime */}
      </Suspense>
    </>
  );
}
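The dynamic side of that boundary might look like the sketch below, assuming the page forwards the route's params promise. DynamicWeatherSection and fetchWeatherAPI are illustrative names from this post, not Next.js APIs; awaiting inside the Suspense boundary keeps the request-time work scoped to this subtree.

```tsx
// app/[id]/weather-section.tsx (illustrative)
export async function DynamicWeatherSection({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params; // request-time: resolves per request
  const weather = await fetchWeatherAPI(id); // hypothetical request-time fetch
  return <section>{weather.summary}</section>;
}
```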

Introducing Cacheable Layers with use cache

Once you stop thinking in pages, reuse gets a lot more layered.

use cache marks an async function, component, page, or layout as reusable work. When that work can resolve during prerendering, its result can contribute to the shell. The cache key is not just the function arguments — it also includes function identity and serialized closed-over values. The simple mental model is: this work is safe to reuse as part of the shared route output, instead of forcing that decision at the whole-page level.

'use cache: remote' stores cached results in a remote cache handler instead of relying only on in-memory runtime caching. That is useful when you want to shield an expensive upstream API or share results across requests and instances. It also comes with tradeoffs: platform support, configured cache handlers, lookup latency, and storage cost. Nothing is free; you are trading compute in one place for a lookup somewhere else.

You can choose the granularity: cache the data, or cache the rendered output.

// Cache the data layer
async function getWeather(lat: number, long: number) {
  "use cache: remote";
  return await fetchWeatherAPI(lat, long);
}

// Or cache the rendered component output
async function WeatherCard({ locationId }: { locationId: string }) {
  "use cache: remote";
  const data = await getWeatherFromOrigin(locationId);
  return <RenderedWeather data={data} />;
}

Rule of thumb: if something can be resolved cheaply during prerendering and does not depend on request input, use cache keeps it simple. Reach for 'use cache: remote' when cross-request reuse matters across instances and the cache keys are shared enough to justify the extra network lookup. If something is cheap to compute at build time and clearly belongs in the shell, do not add a remote lookup just because you can.
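Cached work can also carry an explicit lifetime and tags for targeted invalidation. This sketch assumes the cacheLife and cacheTag helpers from next/cache as stabilized alongside cacheComponents (they carried an unstable_ prefix in earlier releases); fetchCategoriesFromCMS is a placeholder.

```ts
import { cacheLife, cacheTag } from "next/cache";

async function getCategories() {
  "use cache";
  cacheLife("hours"); // reuse for hours, then refresh in the background
  cacheTag("categories"); // enables revalidateTag("categories") after a write
  return await fetchCategoriesFromCMS(); // hypothetical upstream call
}
```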

This is where PPR extends the ISR strategy. The goal is not to render once and freeze the whole page. The goal is to keep the shared shell fast, let dynamic sections stream, and cache expensive work at the layer where reuse is actually high.

What Does This Look Like in Practice?

For teams running large dynamic sites, the practical move is not to replace ISR everywhere. It is to stop applying it only at the page level, identify the routes where slug count and deploy frequency make whole-page caching expensive, and then move those routes to a layered model.

  • Prerender as much of the shell as possible. Shared layout, nav, styles, and loading UI can be reused, while request-time sections stream in separately. The exact cache placement depends on your route boundaries and deployment platform, so think of this as a direction, not a magic switch.

  • Move slug-specific content behind <Suspense>. Personalized data, graphs, and widgets can all be fetched via RSC at runtime and streamed into the page without giving up prerendering for everything around them.

  • Use generateMetadata for dynamic SEO. It can read params and fetched data, and in modern Next.js it can stream separately from visual content for many clients. But HTML-limited bots still wait for metadata in the initial response, and fully prerenderable routes may still need explicit dynamic markers or cached metadata. That makes it a good fit for location-specific titles and descriptions without forcing the whole route back into page-level ISR.

  • Introduce use cache for request-independent content like sitewide data or rarely-changing category lists.

  • Reach for 'use cache: remote' when a shared remote cache is worth the tradeoff. Use it to shield expensive or rate-limited upstream APIs when the cache keys are likely to be reused. The backing store depends on your platform or configured cache handler, so the exact behavior is an implementation detail.

  • Leverage prefetching for in-app navigation. Next.js can prefetch static shells and RSC payloads ahead of navigation, making flows feel much faster. Dynamic routes can still require a server roundtrip for uncached request-time content, but prefetching helps hide that cost for active users.
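As an example of the metadata point above, a location-specific title can be derived from the same route params without opting the rest of the route out of prerendering. A minimal sketch; getLocationName is a hypothetical lookup.

```tsx
// app/[id]/page.tsx (metadata sketch)
import type { Metadata } from "next";

export async function generateMetadata({
  params,
}: {
  params: Promise<{ id: string }>;
}): Promise<Metadata> {
  const { id } = await params;
  const name = await getLocationName(id); // hypothetical lookup
  return {
    title: `Weather in ${name}`,
    description: `Current conditions and forecast for ${name}.`,
  };
}
```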

What Changed, Really

ISR introduced the idea: cache what you can, compute what you must. Cache Components and PPR do not discard that idea. They make it more precise by letting a route mix prerendered and request-time work in the same initial response.

The idea did not change. The unit of caching did.