The Edge Computing Lie: Why Most Apps Do Not Need Edge Functions

There is a pitch you have probably heard a dozen times in the last two years. Deploy to the edge. Your code runs in 300 locations globally. Sub-five millisecond cold starts. Your users in Tokyo get the same speed as your users in New York. Modern infrastructure for modern apps.

It is a compelling pitch. And for a specific set of problems, it is accurate. The issue is that it gets applied far outside that specific set, to the point where developers choose edge functions for applications where a traditional server or regional serverless function would have been faster, simpler, and cheaper.

I have spent the last few months building with Cloudflare Workers, Vercel Edge Functions, and Deno Deploy, and I want to give you the honest version of what I found. Not the benchmark charts from the provider landing pages, but the real-world experience of what breaks, what surprises you, and which use cases actually justify the architecture shift.

The conclusion: if you are an indie hacker or a small team shipping a product with a database, edge functions are probably not the right default. There are cases where they are exactly right. Most apps are not those cases.


The Core Problem Nobody Tells You About Upfront

Edge functions cannot make traditional database connections.

This is not a minor limitation. It is the thing that quietly invalidates the edge latency story for most applications.

A Postgres or MySQL connection works over TCP, with connection state that persists across queries. Edge runtimes run your code in V8 isolates that generally do not allow long-lived TCP connections. You cannot use the standard pg driver, most Prisma configurations, or any connection-pooling library that expects a persistent socket.

What this means in practice: if your edge function needs to talk to a database, you are forced into one of a few options. You use an HTTP-based client such as Supabase’s REST API, Neon’s serverless driver, or PlanetScale’s serverless driver. You use an edge-native database like Cloudflare D1 or KV. Or you use a connection-pooling proxy such as Prisma Accelerate that adds a layer between your function and your database.

None of these are dealbreakers. But each one rewires your data layer in ways that take real time to set up, have their own latency characteristics, and put constraints on what your queries can do. When a framework blog post says “deploying to the edge” as if it is a checkbox, this is the part they are not walking you through.

And here is where the latency story unravels. If your application is deployed at the edge globally, but your database is in a single region (say, AWS us-east-1), every user who is not near us-east-1 is actually experiencing worse latency than they would with a regional server. The edge function processes the request fast, then phones home to a central database, then sends back the response. You have added a hop, not removed one. The user in Tokyo who was supposed to benefit from your global edge deployment is now waiting for a round trip to Virginia and back.

This is the edge latency trap. The benchmark shows your function starting in under five milliseconds. The real experience includes the database round trip, which is determined entirely by where your data lives, not where your function runs.
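The arithmetic behind the trap is easy to sketch. The numbers below are illustrative assumptions, not measurements, but the shape holds: once a request makes a few sequential database queries, the cross-ocean round trips dominate everything the edge runtime saves:

```typescript
// Back-of-envelope latency comparison. All numbers are illustrative
// assumptions: a Tokyo user, a nearby edge PoP, and a database in
// AWS us-east-1.

const tokyoToEdge = 5;       // ms, hop to the nearest edge location
const tokyoToUsEast1 = 150;  // ms, rough trans-Pacific round trip
const localQuery = 1;        // ms, query from a server beside the db
const queries = 3;           // a typical page doing sequential queries

// Edge function + central database: the function is close to the
// user, but every query crosses the Pacific.
const edgePath = tokyoToEdge + queries * tokyoToUsEast1;

// Regional server next to the database: one long hop for the user,
// then queries are local.
const regionalPath = tokyoToUsEast1 + queries * localQuery;

console.log({ edgePath, regionalPath });
```

With these assumptions the edge path is roughly three times slower, and it gets worse with every additional sequential query.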


The CPU Time Trap

Even if you solve the database connectivity problem, there is a second constraint that limits what you can actually do at the edge.

CPU time limits exist on every edge runtime. Cloudflare Workers on the free tier gives you 10 milliseconds of CPU time per request; paid plans raise the ceiling, but the budgets are still sized for short bursts of work rather than sustained computation. Deno Deploy allows somewhere between 50 and 200 milliseconds depending on the plan. Vercel Edge Functions operate in a similar range.

This is not latency time. It is CPU time. A 30-millisecond CPU budget means your function cannot do more than about 30 milliseconds of computation before it gets killed. If you are running a lightweight operation like validating a JWT, redirecting based on geolocation, or serving a cached response, this is fine. If you are doing anything computationally non-trivial, you will hit the wall.

A real-world example: I tried running a non-trivial image transformation at the edge. It worked fine in local testing. In production on Cloudflare Workers, the function was terminated mid-processing because the computation exceeded the CPU budget, and the user got an error. The fix was to move that operation back to a traditional Lambda function, where execution time is measured in minutes rather than milliseconds.

Edge runtimes are optimized for lightweight, stateless operations that complete quickly. They are not general-purpose compute. Using them as general-purpose compute gets you into trouble because the constraints are invisible until they bite you in production.


Where Edge Functions Actually Win

I want to be clear that edge functions are genuinely excellent for the right use cases. The problem is not the technology. The problem is the default recommendation to use them for everything.

Here is where they deliver real value:

Geolocation redirects. If a user from Germany should go to de.yourapp.com and a user from the US should go to yourapp.com, an edge function handles this in milliseconds with no round trip to your origin. The request is modified at the network layer before it ever hits your server. This is one of the cleanest edge use cases because it requires no database and no significant computation.
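A sketch of what this looks like, in the style of a Cloudflare Worker. The country code comes from metadata the platform attaches to the request (`request.cf.country` on Workers); the hostname map is a made-up example:

```typescript
// Geolocation redirect logic, kept as a pure function so it is
// testable outside a Worker. The country-to-host map is illustrative.

const COUNTRY_HOSTS: Record<string, string> = {
  DE: "de.yourapp.com",
  FR: "fr.yourapp.com",
};

function redirectTarget(country: string | undefined, url: URL): URL | null {
  const host = country ? COUNTRY_HOSTS[country] : undefined;
  // No mapping, or already on the right host: do nothing.
  if (!host || url.hostname === host) return null;
  const target = new URL(url.toString());
  target.hostname = host;
  return target;
}

// In a Worker handler (sketch):
//   const t = redirectTarget(request.cf?.country, new URL(request.url));
//   if (t) return Response.redirect(t.toString(), 302);
```

No database, no origin round trip; the decision is made entirely from data already attached to the request.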

Auth token validation before origin. Validating a JWT is computationally cheap and stateless. Doing it at the edge means your origin server only receives authenticated requests. Invalid tokens are rejected at the network layer. This pattern works extremely well, especially when the validation does not require checking token revocation against a database.
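A sketch of the cheap, stateless part of that check: decoding the payload and testing expiry. This deliberately skips signature verification, which a real deployment must do (for example with the `jose` library, which runs on edge runtimes), and it uses Node’s `Buffer` for base64url decoding where an edge runtime would use `atob`:

```typescript
// Stateless JWT sanity check: structure and expiry only.
// NOT sufficient alone -- the signature must also be verified.

function isTokenUsable(jwt: string, nowSeconds: number): boolean {
  const parts = jwt.split(".");
  if (parts.length !== 3) return false;
  try {
    // Decode the payload segment (base64url-encoded JSON).
    const payloadJson = Buffer.from(parts[1], "base64url").toString("utf8");
    const payload = JSON.parse(payloadJson);
    return typeof payload.exp === "number" && payload.exp > nowSeconds;
  } catch {
    return false; // malformed base64 or JSON
  }
}
```

The whole operation is a string split, a decode, and a comparison: microseconds of CPU, no database, exactly the profile edge runtimes are built for.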

A/B testing and feature flags at the network layer. Serving different versions of a static response based on a cookie or header is an edge-native problem. The computation is minimal, the result is deterministic, and you get the benefit of global distribution without any database dependency.
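Sticky bucketing at the edge can be a few lines. This sketch reads the variant from a hypothetical `ab_variant` cookie and falls back to a deterministic hash of a stable identifier, so the user gets the same variant at every location without any storage:

```typescript
// Sticky A/B assignment at the edge. Cookie name and hash are
// illustrative; the property that matters is determinism.

function pickVariant(cookieHeader: string | null, userId: string): "a" | "b" {
  // Honor an existing assignment from the cookie.
  const match = cookieHeader?.match(/(?:^|;\s*)ab_variant=(a|b)/);
  if (match) return match[1] as "a" | "b";
  // Otherwise derive one deterministically from the user id.
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  return (hash & 1) === 0 ? "a" : "b";
}
```

Because the assignment is a pure function of the inputs, every edge location computes the same answer independently, with no coordination.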

Caching and response manipulation. Edge functions are excellent at reading cached responses and modifying headers before they reach the client. Caching logic, CORS headers, security headers, response transformations for cached content. Fast, cheap, globally consistent.
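A sketch using the standard Headers class (available in edge runtimes and Node 18+) to decorate a cached response before it reaches the client; the specific header values are examples, not a recommended security policy:

```typescript
// Attach CORS and security headers to a cached origin response
// at the edge. Values are illustrative.

function withEdgeHeaders(original: Headers, allowedOrigin: string): Headers {
  const headers = new Headers(original); // copy, don't mutate the cache entry
  headers.set("Access-Control-Allow-Origin", allowedOrigin);
  headers.set("X-Content-Type-Options", "nosniff");
  headers.set("Strict-Transport-Security", "max-age=63072000");
  return headers;
}
```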

Rate limiting with edge-native storage. Cloudflare Workers KV and Durable Objects are designed for this. Counting requests per IP at the edge, enforcing rate limits before requests hit your origin. This is an area where edge functions meaningfully improve your architecture, not just relocate it.
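A fixed-window limiter against a KV-style get/put interface is a reasonable sketch of the idea. The interface below is a stand-in, not the actual Workers KV API, and because KV is eventually consistent the counts are approximate under concurrency, which is exactly why Durable Objects exist for strict limits:

```typescript
// Fixed-window rate limiting against a KV-like store.
// Approximate by design: concurrent edges may each read a stale
// count. Good enough for abuse throttling, not for billing.

interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

async function allowRequest(
  kv: KVLike,
  ip: string,
  nowMs: number,
  limit = 100,
  windowMs = 60_000
): Promise<boolean> {
  const windowId = Math.floor(nowMs / windowMs); // bucket per time window
  const key = `rl:${ip}:${windowId}`;
  const count = Number((await kv.get(key)) ?? "0") + 1;
  await kv.put(key, String(count)); // real KV would also set a TTL
  return count <= limit;
}

// In-memory stand-in so the logic can be exercised locally.
function memoryKV(): KVLike {
  const m = new Map<string, string>();
  return {
    get: async (k) => m.get(k) ?? null,
    put: async (k, v) => { m.set(k, v); },
  };
}
```

The request is counted and rejected before it ever reaches the origin, which is the architectural improvement the section describes.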

Notice what all of these have in common: they are stateless or use edge-native storage, they are computationally lightweight, and they do not need to talk to a central database for every request. That is the profile of code that belongs at the edge.


The Right Architecture for Most Small Teams in 2026

If you are building a SaaS product, a developer tool, or a consumer app as a solo developer or small team, here is the architecture that actually works for most cases:

Your application runs on traditional serverless (AWS Lambda, Google Cloud Run, Render, Railway, Fly.io) or a small persistent server. You pick a region close to your primary user base or your database. You use the database driver and ORM you are already comfortable with, with no HTTP indirection required. Connection pooling works. Complex queries work. Your existing tooling works.

In front of this, you put a CDN. Cloudflare’s free tier is genuinely excellent for this. Static assets are cached globally. Your HTML responses can be cached with appropriate cache headers. You get most of the global performance benefit without any of the edge function constraints.

If you need geolocation routing, auth token validation, or rate limiting at the network layer, you add a thin edge function for those specific concerns. Not as your primary application layer, but as a lightweight pre-processing step that handles the operations that genuinely benefit from edge deployment.

This hybrid pattern is what the “deploy everything to the edge” pitch is actually describing when it works well in practice. The problem is that the marketing collapses the distinction between the thin edge layer and the application layer, which leads developers to try to run their entire application at the edge and discover the constraints the hard way.


The Stack Decision for Indie Hackers

If you are building a product and trying to ship fast, infrastructure decisions have real time costs. Every hour spent debugging database connectivity at the edge is an hour not spent on the thing that actually makes your product valuable.

The honest recommendation: do not start at the edge unless you have a specific use case that edge functions are designed for.

Start with the simplest stack that works: Next.js or another framework you know well, deployed to Vercel or Railway on standard serverless, connecting to a Postgres database on Neon or Supabase. Get your product in front of users. If you hit a specific performance problem that edge deployment would solve, address it then, with the full context of what your actual traffic looks like.

The Bun vs Node.js decision and the local-first software decision are better places to spend your infrastructure thinking time as a small team. Both of those offer real performance improvements without the architectural tradeoffs that edge functions bring.

The developers who benefit most from edge functions are teams who have grown to the point where global latency is a measurable user experience issue, have the engineering capacity to manage the data layer complexity, and have specific use cases (auth, redirects, caching) that map cleanly to what edge functions do well.

That is not most indie hackers in the first year of a product.


The Signals That Edge Is Actually the Right Call

To be useful rather than just critical, here are the signals that tell you edge functions are the right choice for your situation:

Your users are globally distributed and latency is a key product metric. If you are building something where sub-100ms response times genuinely matter and your users are spread across North America, Europe, and Asia, edge deployment solves a real problem.

Your application is primarily serving static or cached content with lightweight personalization. Personalization at the edge based on cookies, headers, or geolocation without database calls is a perfect edge use case. Content platforms with heavy caching fit this profile well.

You need to enforce something at the network layer before requests reach your application. Auth, rate limiting, bot detection, request routing. Thin operations with no database dependency that benefit from executing as close to the user as possible.

You are willing to architect your data layer around edge-native storage or HTTP database connections. If you are building something new and are comfortable with D1, KV, or Durable Objects as your primary data store, edge functions become a first-class option. The constraints are real but they are manageable if you design for them from the start rather than retrofitting them.

If none of those descriptions match your situation, the traditional serverless plus CDN pattern will serve you better. It is less exciting to write about, but it does not surprise you with CPU time limits at 2am.


The Honest Bottom Line

Edge functions are not a lie in the sense of being fabricated. The latency numbers on the marketing pages are real. The “global distribution” is real. What is missing is the full picture of what you give up to get there.

The database connectivity constraint eliminates the latency benefit for most applications that read from or write to a database on every request. The CPU time limits make edge functions unsuitable for non-trivial computation. The complexity of working around these constraints has a real cost in developer time.

The cases where edge functions are genuinely the right tool are narrower than the marketing suggests. Auth validation, geolocation routing, caching logic, lightweight personalization. These are excellent edge use cases and if your application is primarily doing these things, edge functions are worth the learning curve.

For the typical indie hacker or small team building a product with a database and real user logic, traditional serverless in a single region with Cloudflare CDN in front is faster to ship, simpler to debug, and within a few milliseconds of edge performance for most users most of the time.

Optimize for shipping the product first. You can always add an edge layer later when you have real traffic data telling you exactly where the latency bottleneck is and whether edge deployment would actually fix it. That is a better use of the architectural decision than betting on it before you have any users at all.