Asaduzzaman Pavel

Practical Look at Cloudflare Workers

Everyone talks about "serverless" like it's some magic bullet, but most of the time you're just trading server management for "cold start" management. Cloudflare Workers (CFW) is the first platform where that trade-off actually felt worth it. Instead of waiting for a container to spin up, your code runs in already-warm V8 isolates. It’s fast. Like, sub-5ms fast.

The "Eventual" in KV is longer than you think

I assumed Workers KV was basically a globally distributed Redis. It isn't. I once spent two hours debugging a configuration toggle where the Worker was still seeing the old value long after I’d updated it through the CLI. The "eventual consistency" can take up to 60 seconds to propagate (sometimes more if the edge node is having a day), which makes KV a poor fit for anything that needs synchronized state.

If you need real-time consistency, you have to use Durable Objects, but then you’re paying a premium and dealing with a much more complex API. For simple "set and forget" config, KV is fine, but don't try to use it for session state unless you're okay with users "logging out" and still being logged in for a minute.
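To make the failure mode concrete, here's a minimal sketch of reading a feature flag from KV. The binding name (CONFIG_KV) and key are hypothetical, not from any real project:

```javascript
// Minimal feature-flag read from Workers KV. Assumes a KV namespace
// bound as CONFIG_KV in wrangler.toml (illustrative binding name).
const worker = {
  async fetch(request, env) {
    // get() is served from the local edge's copy of the namespace, so a
    // put() made elsewhere may not be visible here for up to ~60 seconds.
    const flag = (await env.CONFIG_KV.get('feature:new-checkout')) ?? 'off';
    return new Response(flag, {
      // Don't let downstream caches add staleness on top of KV's.
      headers: { 'cache-control': 'no-store' },
    });
  },
};

export default worker;
```

The `?? 'off'` default matters: during propagation (or after a delete) you can get `null` back, so the Worker should always have a sane fallback.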

Handling Edge Redirects

This is probably the most boring use case, but it's the one that saves me the most money. Instead of hitting a heavy Rails or Node origin just to send a 301, the Worker handles it at the edge.

export default {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname.startsWith('/old-blog/')) {
      const newPath = url.pathname.replace('/old-blog/', '/blog/');
      // Keep the query string so UTM params and the like survive the redirect.
      return Response.redirect(`https://example.com${newPath}${url.search}`, 301);
    }
    // Everything else passes through to the origin untouched.
    return fetch(request);
  },
};

API Gateways and the "Shield" Pattern

I started using Workers as a thin gateway because I was tired of configuring Nginx for basic things like header normalization or rate limiting.

The Worker can:

  • Route based on headers (e.g., x-api-version).
  • Inject security headers (HSTS, CSP) so my backend doesn't have to care.
  • Block bad actors before they even touch my origin's bandwidth.

Actually, I think the biggest benefit here isn't even the latency—it's the fact that my origin server is "shielded." Only "clean" requests get through. This saved me a few hundred bucks in egress fees last month when some bot decided to crawl my API.
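Here's a sketch of what that gateway looks like. The versioned origin hostnames, the user-agent blocklist, and the exact header values are all illustrative, not a recommendation:

```javascript
// Thin "shield" gateway sketch. Hostname scheme (origin-v1.example.com)
// and the UA blocklist are placeholder assumptions.
const SECURITY_HEADERS = {
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
};

const gateway = {
  async fetch(request) {
    // Drop obvious bad actors before they cost origin bandwidth.
    const ua = request.headers.get('user-agent') || '';
    if (/curl|python-requests/i.test(ua)) {
      return new Response('Forbidden', { status: 403 });
    }

    // Route by the x-api-version header, defaulting to v1.
    const version = request.headers.get('x-api-version') || 'v1';
    const url = new URL(request.url);
    const originResponse = await fetch(
      new Request(`https://origin-${version}.example.com${url.pathname}${url.search}`, request)
    );

    // Re-wrap the response so its headers are mutable, then inject
    // the security headers so the origin doesn't have to.
    const response = new Response(originResponse.body, originResponse);
    for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
      response.headers.set(name, value);
    }
    return response;
  },
};

export default gateway;
```

The re-wrap (`new Response(body, originResponse)`) is the non-obvious part: responses that come back from fetch() have immutable headers, so you have to construct a new one before you can set anything.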

...And then there's the Node.js compatibility

For a long time, this was a dealbreaker. If a library used Buffer or path in a specific way, it just wouldn't run. Cloudflare has improved this a lot with the nodejs_compat flag, but it's still not 100%. I still run into obscure "is not a function" errors when I try to pull in a random NPM package that assumes it's running on a "real" server. It's gotten better, but it's still a bit of a minefield.
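For reference, enabling the flag is a one-line change in wrangler.toml (the name, entry point, and date below are illustrative):

```toml
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
```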

Stale-While-Revalidate (SWR) Cache Refresh

This is my favorite pattern. Using ctx.waitUntil(), you can serve a stale response to the current user and refresh the cache in the background. That user gets an instant cached response, and the next one gets fresh content.

export default {
  async fetch(request, env, ctx) {
    const cache = caches.default;
    let response = await cache.match(request);

    if (response) {
      // The cache sets the Age header; treat anything over 10 minutes as stale.
      const age = parseInt(response.headers.get('age') || '0', 10);
      if (age > 600) {
        // Serve the stale copy now, refresh in the background.
        ctx.waitUntil(updateCache(request, cache));
      }
      return response;
    }

    const freshResponse = await fetch(request);
    if (freshResponse.ok) {
      // clone() before returning, since the client will consume the body.
      ctx.waitUntil(cache.put(request, freshResponse.clone()));
    }
    return freshResponse;
  },
};

async function updateCache(request, cache) {
  const freshResponse = await fetch(request);
  if (freshResponse.status === 200) {
    await cache.put(request, freshResponse);
  }
}

The "Cliffs" in the Pricing

Cloudflare’s $5/month plan is a steal, but you have to watch the KV write costs. Writes are 10x more expensive than reads. I've seen people accidentally rack up a $50 bill because they were treating KV like a high-frequency write store.
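One cheap mitigation I use: since a read costs a fraction of a write, check the current value before writing and skip the put() when nothing changed. A minimal sketch (the helper name is mine; `kv` is any KV namespace binding):

```javascript
// Write-avoidance guard for Workers KV. Worth it when values mostly
// don't change between writes, since reads are far cheaper than writes.
async function putIfChanged(kv, key, value) {
  const current = await kv.get(key);
  if (current === value) {
    return false; // cheap read saved us an expensive write
  }
  await kv.put(key, value);
  return true;
}
```

This only pays off for low-churn data; if every write actually changes the value, you've just added a read on top of each write.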

Also, Hyperdrive is great for fixing database latency, but it’s another layer to debug. I've had cases where the connection pooler got stuck and I had to manually bounce the Worker to get it to reconnect to my Postgres instance.

Observability is still "meh"

wrangler tail is great until it isn't. When you have high traffic, the stream is just a blur of JSON. I’ve started passing the CF-Ray header to my backend logs just so I can correlate what happened at the edge with what happened in my DB.

export default {
  async fetch(request, env, ctx) {
    // cf-ray is set by Cloudflare on every request; fall back just in case.
    const rayId = request.headers.get('cf-ray') || 'unknown';

    // Pass the Ray ID to the backend so edge and origin logs can be correlated.
    const backendResponse = await fetch('https://api.your-origin.com/data', {
      headers: {
        'X-Edge-Trace-ID': rayId,
      },
    });

    return backendResponse;
  },
};

It’s not a perfect system, but it’s better than guessing why a request failed in fra-01 but worked in sfo-02.


About the Author

Asaduzzaman Pavel is a Software Engineer who actually enjoys the friction of a well-architected system. He has over 15 years of experience building high-performance backends and infrastructure that can actually handle the real-world chaos of scale.

Currently looking for new opportunities to build something amazing.