As we dive headfirst into 2023, I want to examine two web development trends: serverless edge computing and WebAssembly + web/service workers.
With edge computing and serverless functions, we’re trying to reduce the amount of processing done on the client. Instead, we’re relying on a vast distributed network of servers to get their work to clients quickly. With WebAssembly and service workers (N.B. there are some differences between web workers and service workers, but for the purposes of this article I’m going to use the term “service workers” to refer to both interchangeably, as they both have a role to play here), we’re giving client applications the ability to run server-like code natively and offline.
At first glance, these two trends seem to be in conflict, but I think their intersection is at the heart of the hyperlocal future of the web.
Let’s take a closer look at each trend individually.
Serverless Edge Computing
Now with edge computing and serverless functions, we’re back to letting servers do (most of) the work for us and sending along (mostly) HTML and CSS again.
The difference is that these globally distributed “edge networks” can significantly reduce the latency of delivering server-rendered content to our web applications. Meanwhile, “serverless” functions significantly cut down on the infrastructure we have to provision and manage ourselves. Amazon (AWS) used to be the only game in town, but now there is a growing list of vendors for this kind of app distribution: Cloudflare, Vercel, Render, Deno Deploy, etc.
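To make the serverless side concrete, here’s a minimal sketch of an edge function in the Cloudflare Workers style. The route and markup are made up for illustration; the key idea is that the platform invokes `fetch()` on whichever edge node is closest to the user, so the HTML is rendered geographically near the request.

```javascript
// A minimal edge function in the Cloudflare Workers style (the markup and
// route handling are illustrative, not from any particular app). The edge
// platform calls fetch() for each incoming request.
const handler = {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    const html = `<!doctype html><h1>Hello from the edge</h1><p>You asked for ${pathname}</p>`;
    return new Response(html, {
      headers: { 'content-type': 'text/html; charset=utf-8' },
    });
  },
};

// On Cloudflare Workers this object would be the module's default export:
// export default handler;
```

No servers to provision: you deploy the handler, and the provider runs it at every edge location.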
WebAssembly and Service Workers
Service workers, meanwhile, have been around a bit longer. The first W3C working draft appeared in 2014, but the version 1 Candidate Recommendation wasn’t published until last year. Service workers let web applications offload work from the main browser thread, essentially taking advantage of parallel computing. They also enable better offline caching and performance when network connectivity is degraded, allowing your app to run “locally” until it can reconnect and synchronize with the server.
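That offline-caching idea boils down to a cache-first strategy: answer from the local cache when possible, hit the network otherwise, and remember the response for next time. Here’s a sketch; the `cacheFirst` helper is hypothetical, and the cache and fetch function are passed in so the logic reads the same inside or outside a real service worker.

```javascript
// Cache-first strategy: serve from the local cache when possible, fall back
// to the network, and store fresh responses for next time. `cache` and
// `fetchFn` stand in for the browser's Cache API and fetch.
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached; // works even when the network is down

  const response = await fetchFn(request);
  // Clone before caching: a Response body can only be consumed once.
  await cache.put(request, response.clone());
  return response;
}

// Inside a real service worker, this would be wired up roughly as:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(
//     caches.open('v1').then((cache) => cacheFirst(event.request, cache, fetch))
//   );
// });
```

Once the cache is warm, the app keeps answering requests “locally” until it can reconnect.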
Wait. Isn’t that exactly what we’re trying to avoid with our server-first edge network applications?
Edge Computing and WebAssembly
Fortunately, these two advances are easily reconcilable. Edge network and serverless compute providers are already offering the ability to run WebAssembly on their platforms. This allows for truly write-once-run-everywhere code packages.
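To see how portable these packages are, note that the same compiled bytes instantiate identically in a browser tab, a service worker, Node, or an edge runtime. The tiny module below is hand-assembled for illustration; it exports a single `add` function.

```javascript
// A minimal WebAssembly module written out as raw bytes for illustration.
// It exports one function, add(a, b), operating on 32-bit integers.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0, local.get 1, i32.add
]);

// The same bytes run anywhere the WebAssembly API exists.
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

In practice you’d compile these bytes from Rust, C, Go, etc., but the deployment story is the same: one artifact for the edge node and the client alike.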
Simultaneously, service workers allow client applications to retrieve and cache assets in the background, and nearby edge nodes should significantly speed up those downloads. In a typical app lifecycle, a serverless function might handle the initial request, while subsequent requests run natively in the client.
This gives the web app developer a lot of power. You can optimize where your app does its heavy lifting, accounting for factors like whether relevant data is stored on the server or the client or if the client has had a chance to cache large code assets. You can even optimize to do work on both the client and server simultaneously for increased performance.
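One way to picture that flexibility is a small routing decision: run the work in the client when it already has the code and data cached, fall back to a serverless function otherwise, and optionally race the two. Everything below (the function name, the capability flags) is hypothetical.

```javascript
// Hypothetical sketch: decide where a unit of work should run.
// "client" when the Wasm module and relevant data are cached locally,
// "server" when the data lives server-side or the client lacks the assets,
// "both" when we can afford to race the two and take the fastest answer.
function chooseExecutionSite({ wasmCached, dataOnClient, canRace }) {
  if (wasmCached && dataOnClient) {
    return canRace ? 'both' : 'client';
  }
  return 'server';
}
```

A framework could evaluate flags like these per request and hand off between the serverless function and the in-browser Wasm instance accordingly.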
I’m excited to see how different app frameworks take advantage of this new paradigm. Server-first development is the trend. However, incorporating the idea of run-anywhere WebAssembly and having your app intelligently handle the server-client handoff with service workers is still nascent.
I’m excited to see how edge networks proliferate. The competition among providers is heating up, but I think we’re just scratching the surface of what’s possible. For example, we may see services spring up that make it seamless to deploy your app across multiple edge networks, ensuring your users are always communicating with the closest possible data center.
Further, I’d like to see municipalities investing in this kind of infrastructure. Imagine if, instead of paying a private corporation to host your app, you could pay local governments. Your users’ server compute functionality could literally be in their backyards.
Eventually, hyper-specialized edge networks combined with the client-side computing power of WebAssembly and service workers may reduce latency and increase app performance to a degree indistinguishable from native offline desktop applications. This is the hyperlocal future of the web, and it’s right around the corner.
I think this article discusses a valid point but there could be a little confusion regarding Service Workers and Web Workers. The former are more for supporting off-line operation but the latter is a way of providing the browser with additional threads.
@Tracy-Gregory thanks for pointing this out! I think both service and web workers have a role to play in this context, and was kind of using the terms interchangeably to discuss everything you can do with them. I’ve updated the post to include your link and to clarify that I’m using the term “service workers” as a bit of an umbrella for both, although the discussion primarily does focus on the offline-caching ability of service workers specifically.