
A Persistent Memory for the Node.js JIT
Node.js 22 finally lets the V8 engine persist its optimization work to disk, effectively eliminating the cold-start tax for apps with massive dependency trees.
I remember sitting in a coffee shop, trying to run a local test suite on a massive legacy monorepo. My laptop felt like it was trying to achieve liftoff, the fans screaming while I stared at a blinking cursor for ten seconds before the first test even started. That was the "JIT tax"—the price we pay every single time we boot a Node.js process.
For years, we’ve just accepted that Node.js has "optimization amnesia." Every time you run a script, the V8 engine starts from zero. It parses your code, turns it into bytecode, and slowly figures out which parts are worth optimizing. In Node.js 22, the engine finally got a notebook to write those notes down.
The Cold Start Problem
When you launch a Node.js application, the V8 engine doesn't just "run" your JavaScript. It goes through a multi-stage pipeline. First, the Ignition interpreter handles the bytecode. As functions get called more often, they become "hot," and a compiler (like TurboFan) kicks in to turn that code into highly optimized machine code.
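The tier-up behavior can be sketched with a tiny script. There's no public API to observe which tier a function is currently in, but the pattern below — stable argument types, many calls — is exactly what lets V8 promote a function from interpreted bytecode to optimized machine code:

```javascript
// A function V8 will treat as "hot" after enough calls. Keeping the
// argument shapes stable (always Float64Arrays here) lets the
// optimizing tiers generate fast, specialized machine code.
function dot(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}

const x = new Float64Array(1000).fill(2);
const y = new Float64Array(1000).fill(3);

// Call it many times: Ignition interprets the first invocations,
// then the JIT tiers kick in once the function is hot.
let total = 0;
for (let i = 0; i < 10_000; i++) total += dot(x, y);
console.log(total); // 60000000
```

All of that profiling and tier-up work is what gets thrown away at process exit.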
The problem? In a serverless function or a CLI tool, the process often dies before the engine even finishes optimizing. And even if it does finish, all that hard work evaporates the moment the process exits.
If you have a massive dependency tree (looking at you, aws-sdk), your CPU spends the first few hundred milliseconds—or seconds—re-learning how to run the same code it ran five minutes ago.
Enter Maglev and the Disk Cache
Node.js 22 introduces a way to persist this optimization work to disk. Specifically, it leverages Maglev, V8’s mid-tier compiler that sits right between the simple interpreter and the heavy-duty TurboFan.
Maglev is fast. It generates decent machine code much quicker than TurboFan. By enabling the Maglev compiler and telling Node to save its output to a directory, you effectively "freeze" the optimized state of your app.
How to turn it on
Turning this on takes a few specific flags. It’s currently behind an experimental curtain, but it's remarkably stable for development workflows.
First, create a directory for your cache:
```bash
mkdir .js-cache
```

Then, run your application with the following flags:
```bash
node --experimental-maglev \
  --maglev-code-cache-to-disk \
  --js-code-cache-path=./.js-cache \
  index.js
```

The first run will take roughly the same amount of time (maybe a tiny bit longer, since it also writes to disk). But the second run? That's where the magic happens. The engine sees the cached Maglev code in `./.js-cache` and skips a massive chunk of the compilation phase.
Seeing the Difference (A Practical Example)
Let’s simulate a "heavy" startup. Imagine a script that loads a bunch of logic and executes a tight loop that usually triggers the JIT.
```javascript
// bench.js
const start = performance.now();

function heavyTask(n) {
  let result = 0;
  for (let i = 0; i < n; i++) {
    result += Math.sqrt(i) * Math.sin(i);
  }
  return result;
}

// Simulate a large app loading and running hot paths
for (let i = 0; i < 1000; i++) {
  heavyTask(1000);
}

console.log(`Startup and execution took: ${(performance.now() - start).toFixed(2)}ms`);
```

If you run this normally:

```bash
node bench.js
# Output: Startup and execution took: 142.15ms (roughly)
```

Now run it with the cache flags twice. By the second run, you’ll likely see a significant drop in that "warm-up" time, especially in larger real-world applications where the dependency graph is deep.
In my testing with a medium-sized CLI tool, I saw cold starts drop from 450ms to about 180ms. That’s the difference between a tool feeling "laggy" and feeling "instant."
Why This Matters for Serverless and CI
If you’re running on AWS Lambda or Google Cloud Functions, you are billed for every millisecond. More importantly, your users wait for those milliseconds.
Currently, many people use "warm-up" hits to keep their Lambdas ready. With JIT caching, you could theoretically ship the .js-cache directory inside your container or deployment package. You’re essentially shipping a "pre-warmed" engine.
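As a sketch, baking the cache into a container image could look like this — assuming you generate `.js-cache` during the image build, your build and deploy architectures match, and your app has some way to exit after warm-up (the `--warmup-only` flag here is a hypothetical app-level switch, not a Node flag):

```dockerfile
FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Generate the cache at build time so it ships inside the image.
RUN mkdir -p .js-cache && \
    node --experimental-maglev --maglev-code-cache-to-disk \
         --js-code-cache-path=./.js-cache index.js --warmup-only
CMD ["node", "--experimental-maglev", "--maglev-code-cache-to-disk", \
     "--js-code-cache-path=./.js-cache", "index.js"]
```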
The "Gotchas"
It’s not all sunshine and free performance. There are a few things to keep in mind:
1. Architecture Matters: You cannot generate a cache on an Intel Mac and expect it to work on a Linux ARM64 server. The machine code is specific to the instruction set.
2. Code Changes Invalidate the Cache: If you change your source code, the engine is smart enough to know the cache is stale, but you'll have to pay the compilation tax again on the next run.
3. Disk Space: These cache files aren't massive, but they aren't zero either. If you have thousands of microservices, manage your disk usage accordingly.
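One cheap way to defuse the architecture foot-gun is to key the cache path by platform, so mismatched caches simply never collide. The naming scheme here is my own convention, not anything Node enforces:

```javascript
// Build a cache directory name that encodes where it was generated,
// e.g. ".js-cache-linux-x64" or ".js-cache-darwin-arm64". A cache
// built on one platform/arch pair then can't be picked up on another.
const cacheDir = `.js-cache-${process.platform}-${process.arch}`;
console.log(cacheDir);
```

Point `--js-code-cache-path` at that directory and each platform warms up its own cache independently.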
How to use it in Production (Safely)
Since the flag --experimental-maglev is, well, experimental, I wouldn't bet my entire production infrastructure on it just yet without a fallback. For CLI tools and build scripts, however, there is almost no downside.
You can set these via environment variables so you don't have to type them every time:
```bash
export NODE_OPTIONS="--experimental-maglev --maglev-code-cache-to-disk --js-code-cache-path=./.node_cache"
node my-expensive-script.js
```

Final Thoughts
Node.js 22 is a massive release for performance nerds. While features like the built-in WebSocket client or the require(esm) support got the headlines, the JIT code cache is the sleeper hit. It’s a fundamental shift in how the V8 engine handles the reality of modern, bloated JavaScript applications.
Give it a shot on your slowest local project. Your CPU fans might finally get a break.


