Next.js 15 ships with improvements to the App Router, streaming support, and the React compiler integration — but upgrading does not automatically fix performance problems, and new features can introduce new ones. A slow app in Next.js 14 is still slow in Next.js 15 if the underlying issues are not addressed.
This guide covers seven common performance mistakes in Next.js applications, why each one hurts performance, and the specific fix for each.
What this covers:
Over-fetching with missing cache configuration
Heavy client-side JavaScript and dynamic imports
Image optimization with the <Image> component
Blocking the main thread with expensive computations
Streaming with Suspense and async server components
Inefficient state management and unnecessary re-renders
Monitoring performance continuously after deployment
1. Over-Fetching Data on Every Request
Next.js's extended fetch API supports caching and revalidation at the request level, but this only works when the cache configuration is explicitly set. Without it, every request to the same endpoint fetches fresh data on every page render.
The problem:
// Fetches fresh data on every request — no caching
const res = await fetch("https://api.example.com/posts");
const data = await res.json();
The fix:
// Cached and revalidated every 60 seconds
const res = await fetch("https://api.example.com/posts", {
next: { revalidate: 60 },
});
const data = await res.json();
For data that changes infrequently, revalidate can be set to a longer interval. For data that should never be cached (personalized content, live prices), use cache: "no-store":
const res = await fetch("https://api.example.com/user/me", {
cache: "no-store",
});
Being explicit about caching intent on every fetch call prevents accidental over-fetching and makes the caching behavior easy to audit.
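One way to keep that intent auditable is to route every call site through a small helper. A sketch of the idea — the helper name and policy type are illustrative, not a Next.js API:

```typescript
// lib/cache-policy.ts — hypothetical helper centralizing caching intent
type CachePolicy = { revalidate: number } | "no-store";

// Returns the options object for Next.js's extended fetch
export function cacheOptions(policy: CachePolicy) {
  if (policy === "no-store") {
    return { cache: "no-store" as const };
  }
  return { next: { revalidate: policy.revalidate } };
}

// Usage: fetch("https://api.example.com/posts", cacheOptions({ revalidate: 60 }))
```

Grepping for `cacheOptions` then surfaces every fetch in the codebase along with its declared caching policy.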
2. Loading Heavy JavaScript in the Main Bundle
Next.js automatically code-splits at the page level, but large libraries imported at the top of a component file are included in the bundle that loads immediately. A charting library, a rich text editor, or a date picker that is only used on one route adds to the initial load for all routes.
The problem:
// Loaded immediately, even if the chart is below the fold
import Chart from "../components/Chart";
The fix:
import dynamic from "next/dynamic";
// Loaded only when the component is actually needed
const Chart = dynamic(() => import("../components/Chart"), {
ssr: false, // skip server rendering for client-only libraries
loading: () => <p>Loading chart...</p>,
});
The ssr: false option is appropriate for components that depend on browser APIs (window, document, WebGL) that are not available during server rendering. Note that in the App Router, ssr: false can only be used inside Client Components. For components that can be server-rendered, omit it to preserve the SSR benefit.
Dynamic imports are particularly valuable for components below the fold, feature-flagged UI, modal content, and any heavy library that is only used in specific interactions.
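As a concrete sketch of an interaction-gated load — the ExportModal component and its path are hypothetical:

```typescript
"use client";
import { useState } from "react";
import dynamic from "next/dynamic";

// Hypothetical heavy modal, loaded only after the user clicks
const ExportModal = dynamic(() => import("../components/ExportModal"), {
  loading: () => <p>Loading...</p>,
});

export function ExportButton() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Export</button>
      {/* The modal's code is fetched on first open, not on page load */}
      {open && <ExportModal onClose={() => setOpen(false)} />}
    </>
  );
}
```

Until the user clicks, the modal and everything it imports stay out of the bundle entirely.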
3. Using Raw <img> Tags Instead of <Image>
The Next.js <Image> component handles format conversion (WebP/AVIF), responsive sizing, lazy loading, and layout stability automatically. Raw <img> tags bypass all of this.
The problem:
// Full-resolution image, no lazy loading, no format optimization
<img src="/hero.jpg" alt="Hero section" />
The fix:
import Image from "next/image";
<Image
src="/hero.jpg"
alt="Hero section"
width={1200}
height={600}
priority
/>
The priority prop instructs Next.js to preload the image, which is appropriate for above-the-fold images like hero sections. Without it, above-the-fold images are lazy-loaded, which delays Largest Contentful Paint (LCP).
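With statically imported images, the width and height are inferred automatically, and a blur placeholder can be generated at build time. A sketch — the import path is illustrative:

```typescript
import Image from "next/image";
import hero from "../public/hero.jpg"; // static import: width/height are inferred

// placeholder="blur" uses a build-time generated blurDataURL for static imports
export function Hero() {
  return <Image src={hero} alt="Hero section" priority placeholder="blur" />;
}
```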
For images below the fold, omit priority to allow lazy loading. For images with unknown dimensions (user-uploaded content), use the fill prop with a sized container:
<div style={{ position: "relative", width: "100%", height: 400 }}>
  <Image
    src={userAvatarUrl}
    alt="User avatar"
    fill
    sizes="100vw" // tells the browser how wide the image renders, so the right source is served
    style={{ objectFit: "cover" }}
  />
</div>
4. Blocking the Main Thread with Heavy Computations
JavaScript execution on the main thread blocks rendering. A synchronous computation that takes 200ms makes the page unresponsive for that duration. In a React component, this directly delays paint and interaction response.
The problem:
function DataTable({ rawData }: { rawData: number[] }) {
// Runs synchronously on every render
const processed = expensiveTransform(rawData);
return <table>{/* ... */}</table>;
}
The fix for client-side computations: Move the work to a Web Worker:
// workers/transform.worker.ts
self.onmessage = (e: MessageEvent<number[]>) => {
const result = expensiveTransform(e.data);
self.postMessage(result);
};
// In the component (a Client Component)
"use client";
import { useEffect, useState } from "react";

function DataTable({ rawData }: { rawData: number[] }) {
  const [processed, setProcessed] = useState<number[] | null>(null);
  useEffect(() => {
    const worker = new Worker(
      new URL("../workers/transform.worker.ts", import.meta.url)
    );
    worker.onmessage = (e: MessageEvent<number[]>) => setProcessed(e.data);
    worker.postMessage(rawData);
    return () => worker.terminate();
  }, [rawData]);
  return processed ? <table>{/* ... */}</table> : <p>Processing...</p>;
}
The fix for data that can be processed server-side: Move the computation to a Server Component or a Route Handler where it runs outside the browser's main thread entirely:
// app/data/page.tsx — runs on the server
export default async function DataPage() {
const processed = await processDataOnServer(); // no client blocking
return <DataTable data={processed} />;
}
Server-side processing is often the simpler solution when the data does not require client-side interactivity.
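A Route Handler variant of the same idea, with a stand-in for the real computation — the route path and transform are illustrative:

```typescript
// app/api/report/route.ts — hypothetical Route Handler
function expensiveTransform(data: number[]): number[] {
  // Stand-in for the real heavy computation
  return data.map((n) => n * n).sort((a, b) => a - b);
}

export async function GET() {
  const raw = [3, 1, 2]; // in practice, read from a database or upstream API
  const processed = expensiveTransform(raw); // runs on the server, off the browser's main thread
  return Response.json(processed);
}
```

The client then receives pre-processed data and only has to render it.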
5. Not Using Streaming for Slow Data
Without streaming, a page that depends on a slow data source makes the user wait for the entire page before showing anything. Next.js 15's improved streaming support with React Server Components makes progressive rendering straightforward.
The problem:
// The entire page waits for all data before rendering anything
export default async function Page() {
const posts = await fetchPosts(); // takes 800ms
const comments = await fetchComments(); // takes 600ms
return (
<div>
<Posts data={posts} />
<Comments data={comments} />
</div>
);
}
The fix:
import { Suspense } from "react";
export default function Page() {
return (
<div>
<Suspense fallback={<PostsSkeleton />}>
<Posts />
</Suspense>
<Suspense fallback={<CommentsSkeleton />}>
<Comments />
</Suspense>
</div>
);
}
// app/components/Posts.tsx — async server component
async function Posts() {
const posts = await fetchPosts();
return <ul>{posts.map(p => <li key={p.id}>{p.title}</li>)}</ul>;
}
Each <Suspense> boundary streams its content independently. The user sees the page shell immediately, and each section fills in as its data resolves. The two fetch calls also run in parallel rather than sequentially.
For a single slow data dependency on an otherwise fast page, wrapping only the slow component in <Suspense> gives the user a usable page immediately while the slow section loads.
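When the sections genuinely must render together and Suspense boundaries are not an option, starting the requests concurrently still removes the sequential waterfall from the problem example. A sketch with stand-in fetchers:

```typescript
// Stand-ins for the fetchers from the example above
async function fetchPosts() {
  return [{ id: 1, title: "Hello" }]; // pretend this takes 800ms
}
async function fetchComments() {
  return [{ id: 1, text: "Nice post" }]; // pretend this takes 600ms
}

export async function loadPageData() {
  // Both requests start immediately; total wait is roughly max(800, 600), not 800 + 600
  const [posts, comments] = await Promise.all([fetchPosts(), fetchComments()]);
  return { posts, comments };
}
```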
6. Unnecessary Re-renders from Global State
Global state management is necessary for some application data, but using a global store for state that is only relevant to a single component or subtree causes re-renders across the application whenever that state changes.
The problem:
Storing modal open/closed state, form field values, or tooltip visibility in Redux or a global Zustand store. Any component subscribed to the store re-renders on every such change, regardless of whether it uses the changed value.
The fix:
Keep state at the lowest level in the component tree that requires it. State that is only used within one component belongs in useState. State that needs to be shared within a subtree can use useContext with a provider scoped to that subtree.
// Scoped context — only the subtree re-renders
import { createContext, useMemo, useState } from "react";

const SearchContext = createContext<{
  query: string;
  setQuery: (q: string) => void;
} | null>(null);

function SearchProvider({ children }: { children: React.ReactNode }) {
  const [query, setQuery] = useState("");
  // Memoize the value so consumers only re-render when query changes
  const value = useMemo(() => ({ query, setQuery }), [query]);
  return (
    <SearchContext.Provider value={value}>
      {children}
    </SearchContext.Provider>
  );
}
Global state is appropriate for: authentication status, user preferences, shopping cart contents, and other data that genuinely needs to be accessible across unrelated parts of the application. For everything else, local or scoped state performs better and is easier to reason about.
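When a global store is genuinely warranted, selector-based subscriptions keep re-renders scoped to components that read the changed value. A sketch using Zustand — the cart store shape is illustrative:

```typescript
import { create } from "zustand";

// Hypothetical cart store
type CartState = { items: string[]; add: (item: string) => void };

const useCart = create<CartState>((set) => ({
  items: [],
  add: (item) => set((s) => ({ items: [...s.items, item] })),
}));

function CartBadge() {
  // Selector subscription: this component re-renders only when the count changes,
  // not on every store update
  const count = useCart((s) => s.items.length);
  return <span>{count}</span>;
}
```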
7. Not Monitoring Performance After Deployment
Performance regressions are introduced gradually. A component refactor adds a large dependency. A new page skips the <Image> component. An API call loses its cache configuration. Without continuous monitoring, these regressions accumulate before they are noticed.
The tools:
Lighthouse CI runs automated performance audits in CI/CD pipelines and fails builds when scores drop below a defined threshold:
npm install -g @lhci/cli
lhci autorun
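A minimal lighthouserc.json sketch that fails the build when the performance score drops below 0.9 — the URL and server command are placeholders:

```json
{
  "ci": {
    "collect": {
      "startServerCommand": "npm run start",
      "url": ["http://localhost:3000/"]
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```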
Vercel Analytics provides Core Web Vitals data from real users directly in the Vercel dashboard. For self-hosted deployments, the @vercel/speed-insights package can be integrated independently.
The Next.js useReportWebVitals hook logs Web Vitals from the client, which can be forwarded to any analytics service:
// app/components/web-vitals.tsx — rendered from app/layout.tsx
"use client";
import { useReportWebVitals } from "next/web-vitals";
export function WebVitals() {
useReportWebVitals((metric) => {
console.log(metric); // send to your analytics service
});
return null;
}
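To forward metrics somewhere useful instead of logging them, the callback can post to a collection endpoint. A sketch assuming a hypothetical /api/vitals Route Handler:

```typescript
"use client";
import { useReportWebVitals } from "next/web-vitals";

export function WebVitals() {
  useReportWebVitals((metric) => {
    const body = JSON.stringify(metric);
    // sendBeacon survives page navigation; fall back to a keepalive fetch
    if (navigator.sendBeacon) {
      navigator.sendBeacon("/api/vitals", body);
    } else {
      fetch("/api/vitals", { method: "POST", body, keepalive: true });
    }
  });
  return null;
}
```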
Setting a performance budget in Lighthouse CI and reviewing the dashboard after each deployment catches regressions before users report them.
Key Takeaways
Configure caching explicitly on every fetch call. The intent should be clear: revalidate, cache: "no-store", or the default.
Dynamic imports with next/dynamic reduce the initial JavaScript bundle for components that are not needed on first load.
The <Image> component handles optimization automatically. The priority prop should be set for above-the-fold images to avoid LCP delays.
Expensive synchronous computations block the main thread. Move them to Web Workers on the client or to the server where they run off the critical rendering path.
Streaming with <Suspense> boundaries allows the page shell to render immediately while slow data loads independently.
Global state causes application-wide re-renders on every change. Keep state local unless it genuinely needs to be shared across unrelated subtrees.
Lighthouse CI and Web Vitals monitoring catch performance regressions before they compound.
Conclusion
Most Next.js performance problems come from a small number of patterns: fetching without caching, loading code before it is needed, not using the built-in optimization primitives, and not measuring the result. The fixes are specific and incremental. Addressing one issue at a time, measuring before and after each change, produces reliable improvements without guesswork.
Hit a specific Next.js performance issue that took a while to diagnose? Share the symptom and the fix in the comments.