DEV Community

Nabin Debnath

I Replaced a Docker-based Microservice with WebAssembly and It's 100x+ Faster

TL;DR
We've all heard the quote from Docker's founder, Solomon Hykes, back in 2019: "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker."

For years, this was just a prophecy. But in 2025, the tech has finally caught up.

I decided to find out if he was right. I took a simple, everyday Node.js Docker-based microservice, rewrote it in Rust-based WebAssembly (Wasm), and benchmarked them head-to-head.

The results weren't just better; they were shocking: artifacts 99% smaller, incremental builds 10x faster, and cold starts over 100x faster.

Here's the full story, with all the code and benchmarks.

Part 1: The "Before" - Our Bloated Docker Service

To make this a fair comparison, I picked a perfect candidate for a microservice: a JWT (JSON Web Token) Validator.

It's a common, real-world task. An API gateway or backend service receives a request, takes the Authorization: Bearer header, and needs to ask a different service, "Is this token valid?"

It's a simple, stateless function, a natural fit for its own container.
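To make "is this token valid?" concrete: an HS256 JWT is just `base64url(header).base64url(payload).base64url(HMAC-SHA256(secret, header + "." + payload))`. Here is a minimal stdlib-only Python sketch of signing and verifying (an illustration of the mechanism, not production code; real services should use a vetted JWT library and also check claims such as `exp`):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_hs256(token: str, secret: bytes) -> bool:
    try:
        header_b64, body_b64, sig_b64 = token.split(".")
    except ValueError:
        return False  # not three dot-separated segments
    signing_input = f"{header_b64}.{body_b64}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, sig_b64.encode())

secret = b"a-very-strong-secret-key"  # same default as the services below
token = sign_hs256({"sub": "user-123"}, secret)
print(verify_hs256(token, secret))        # -> True
print(verify_hs256(token + "x", secret))  # tampered signature -> False
```

This is exactly the check that `jwt.verify` (Node) and `jwt-simple` (Rust) perform under the hood for HS256, plus claim validation.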

The Node.js / Express Code

It's a Node.js Express server with one endpoint, /validate. It uses the jsonwebtoken library to verify the token against a secret.

// validator-node/index.js
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());

// The one secret key our service knows
const JWT_SECRET = process.env.JWT_SECRET || 'a-very-strong-secret-key';

app.post('/validate', (req, res) => {
  const { token } = req.body;

  if (!token) {
    return res.status(400).send({ valid: false, error: 'No token provided' });
  }

  try {
    // The core logic!
    jwt.verify(token, JWT_SECRET);
    // If it doesn't throw, it's valid
    res.status(200).send({ valid: true });
  } catch (err) {
    // If it throws, it's invalid
    res.status(401).send({ valid: false, error: err.message });
  }
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Node.js validator listening on port ${port}`);
});

The Dockerfile

We use a multi-stage build with an Alpine base image to keep it small.

# Dockerfile
# --- Build Stage ---
FROM node:18-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

COPY . .

# --- Production Stage ---
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json
COPY --from=build /app/index.js ./index.js

# package.json must ship too: its "type": "module" field is what lets Node run the import syntax
ENV NODE_ENV=production
CMD ["node", "index.js"]

The Problem

Let's look at what this simple service actually costs once Docker has done its work:

  • The Build Time: On my machine, building this from a cold cache takes ~81 seconds. Even with Docker layer caching, rebuilding after a small code change takes about 45 seconds of build-context transfer and layer hashing.
  • The Artifact Size: After building, the final image is 188MB. That's 188MB to ship a 30-line script.
  • The Cold Start: When deployed to a serverless platform (like Cloud Run or scaled-to-zero K8s), the cold start is painful. The container has to be pulled, and the Node.js runtime has to boot. I was seeing cold starts between 800ms and 1.5 seconds. That's a user-facing delay.
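The post doesn't show the measurement harness, so here is a hedged sketch of the cold-start methodology: wall-clock time from process launch, repeated a few times and summarized with the median. A trivial `python -c pass` stands in for the real `docker run` / `spin up` invocation, which would additionally need a readiness probe (time-to-first-successful-response) rather than time-to-exit:

```python
import statistics
import subprocess
import sys
import time

def cold_start_ms(cmd: list[str], runs: int = 5) -> float:
    """Median wall-clock milliseconds to launch a process and wait for it to finish."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Stand-in command; for a real benchmark, substitute the container/Wasm launch
# command and poll the HTTP endpoint instead of waiting for exit.
print(f"{cold_start_ms([sys.executable, '-c', 'pass']):.1f} ms")
```

Measuring from launch to first successful response is what makes the container's image pull plus runtime boot show up in the numbers.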

Part 2: The "After" - Rebuilding with WebAssembly

Wasm modules are small compiled binaries that run in a secure, sandboxed runtime which starts in microseconds. Unlike a container image, which ships an entire OS userland, a Wasm module packages just your code.

I chose to rewrite it in Rust because of its first-class Wasm support and performance. I used the Spin framework, which makes building Wasm-based HTTP services incredibly simple.

The Rust / Spin Code

First, let's install the Spin CLI and scaffold a new project.

$ spin new
# I selected: http-rust (HTTP trigger with Rust)
Project name: validator-wasm
...

This generates a src/lib.rs file. I opted to use the jwt-simple crate instead of the standard jsonwebtoken because jwt-simple is a pure-Rust implementation. This avoids C-binding issues and compiles down to an incredibly small Wasm binary.

// validator-wasm/src/lib.rs
use anyhow::{Result, Context};
use spin_sdk::{
    http::{Request, Response, Router, Params},
    http_component,
};
use serde::{Deserialize, Serialize};
use jwt_simple::prelude::*;

// 1. Define our request and response structs
#[derive(Deserialize)]
struct TokenRequest {
    token: String,
}

#[derive(Serialize)]
struct TokenResponse {
    valid: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    error: Option<String>,
}

// Get the JWT secret from environment or use a default
fn get_secret() -> HS256Key {
    let secret = std::env::var("JWT_SECRET").unwrap_or_else(|_| "a-very-strong-secret-key".to_string());
    HS256Key::from_bytes(secret.as_bytes())
}

/// The Spin HTTP component
#[http_component]
fn handle_validator(req: Request) -> Result<Response> {
    let mut router = Router::new();
    router.post("/validate", validate_token);
    Ok(router.handle(req))
}

// 2. JWT validation using jwt-simple
fn validate_token(req: Request, _params: Params) -> Result<Response> {
    // Read the request body
    let body = req.body();
    if body.is_empty() {
        return Ok(json_response(400, false, "Empty request body"));
    }

    let token_req: TokenRequest = match serde_json::from_slice(body) {
        Ok(t) => t,
        // Mirror the Node service: a malformed body is a 400, not a 500
        Err(_) => return Ok(json_response(400, false, "Invalid request body")),
    };

    let key = get_secret();

    // The `verify_token` function does the validation
    match key.verify_token::<serde_json::Value>(&token_req.token, None) {
        Ok(_) => Ok(json_response(200, true, "")),
        Err(e) => Ok(json_response(401, false, &e.to_string())),
    }
}

// Helper to build a JSON response
fn json_response(status: u16, valid: bool, error_msg: &str) -> Response {
    let error = if error_msg.is_empty() { 
        None 
    } else { 
        Some(error_msg.to_string()) 
    };

    Response::builder()
        .status(status)
        .header("Content-Type", "application/json")
        .body(serde_json::to_string(&TokenResponse { valid, error }).unwrap())
        .build()
}

It's admittedly more code than the Node.js version. But it's type-safe, compiled, and, as we're about to see, unbelievably fast.

The "Build"

There's no Dockerfile. Instead, I configured the spin.toml manifest to use the modern wasm32-wasip1 target.

#:schema https://schemas.spinframework.dev/spin/manifest-v2/latest.json

spin_manifest_version = 2
[application]
name = "validator-wasm"
version = "0.1.0"

[[trigger.http]]
route = "/..."
component = "validator-wasm"

[component.validator-wasm]
source = "target/wasm32-wasip1/release/validator_wasm.wasm"  # The build output
allowed_outbound_hosts = []  # manifest v2 renamed allowed_http_hosts
[component.validator-wasm.build]
command = "cargo build --target wasm32-wasip1 --release"
watch = ["src/**/*.rs", "Cargo.toml"]

Build this entire project:

$ spin build

This one command compiles the Rust code to a Wasm module.

Part 3: The Showdown - Docker vs. Wasm Benchmarks

With both the Docker container and the Spin Wasm application running, I measured them head-to-head. A container ships and boots an entire OS userland (on a shared kernel), while Wasm runs a tiny, sandboxed module directly inside a host runtime.
This architectural difference leads to some staggering benchmark results.

| Metric | Docker (Node.js) | WebAssembly (Rust/Spin) | The Winner |
| --- | --- | --- | --- |
| Artifact Size | 188 MB | 0.5 MB | Wasm (99.7% smaller) |
| Build Time (Incremental) | ~45 sec (Docker layer caching) | 4.2 seconds | Wasm (10x faster) |
| Cold Start Time | ~1.2 seconds (1200 ms) | ~10 ms | Wasm (120x faster) |
| Memory Usage | ~85 MB (idle) | ~4 MB (idle) | Wasm (95% less) |
  • Artifact Size: The Wasm module is 0.5 MB (548KB to be exact). Not 188MB. I can send this file in a Slack message. It's 99.7% smaller.
  • Build Time (Incremental): This is the developer "inner loop" metric. Rust's incremental builds are blazing fast: once dependencies are compiled, changing your code and running spin build takes ~4 seconds. Compared to waiting ~45 seconds for Docker context transfer and layer hashing, it feels like a superpower.
  • Cold Start: This is the headline. The Wasm runtime starts in the low-millisecond range. I benchmarked it using spin up and got startup times consistently around 10ms. Compared to the 1200ms of the container, it's not even a contest.

This is the "100x faster" promise. It's not that the code executes 100x faster (though the Rust version is quicker); it's that the service can go from zero-to-ready 100 times faster.

Part 4: The Verdict - Is Docker Dead?

No, of course not. Wasm isn't a Docker killer; it's a Docker alternative for a specific job.

You should still use Docker/Containers for:

  • Large, complex, stateful applications (like a database).
  • Monolithic apps you're lifting-and-shifting.
  • Services that truly need a full Linux environment.

But WebAssembly is the new king for:

  • Serverless Functions (FaaS)
  • Microservices (or "nano-services")
  • Edge Computing (where low startup time is critical)
  • Plugin Systems (like for a SaaS)

My takeaway: that quote from Solomon Hykes wasn't just a spicy take. He was right.

The next time you're about to docker init a new, simple serverless function, ask yourself whether your use case really needs a container. It may not.

Try it yourself. You might be shocked, too.

Top comments (3)

WS

The problem with this approach is that you lose the whole Docker runtime environment: unified management, networking, easy proxying, etc.
So you compare a bare metal binary to a whole ecosystem.

Even for micro services you will need network management, proxies etc.

What if this was a native go binary running on alpine? Would that still be a 100x factor?

Peter W

Nice results. I'd be curious to know more about deployment options and your experience with that!

Augustin Trancart

If you have rust code, why compile to wasm at all (and not just native executable)? Just for the fun of answering Solomon Hykes?