
The actual use-cases of WebAssembly that make sense

This topic is already covered in detail by many other writers, but this take is a bit different. I don't think wasm will revolutionize the world, certainly not to the degree that is often boasted. Instead, wasm is useful in a handful of niche applications inside the browser. I do think, however, that wasm will be extremely important in out-of-browser settings, much as we've seen with Kubernetes.

The WebAssembly website sports a whole page dedicated just to use-cases. It's all well and good that something is possible, but is it practical? Is it useful? Most importantly, does it provide any benefit, despite its drawbacks, that would justify developing in WebAssembly? That's the topic of this article.

I'll start with the use-cases I think are genuinely good. These strike me as both realistic and useful.

Good use-cases

Demos. Game demos are probably the best use-case there is. As a game studio, your games become much more accessible to a wide variety of users, removing what the business usually calls "friction". Games are going mainstream, and the niche base of players who used to be fine with jumping through hoops and doing shady things to their computers is becoming a smaller part of the consumer base. With a demo in the browser, you don't have to download any files or install a program to play it.

Demos can also be of many other things: animations, VR, visualizations in news and science, and even technical demos like simulations and video and audio codecs. We have lots of things we would like to show to a vast number of people, so the list goes on.

It's important to mention that it's not perfect. Even though wasm boasts near-native speed, the difference is noticeable in something like games, where the gap between 50 and 60 fps matters a lot. Triple-A studios are in fact concerned about perceived performance; if a game performs poorly, it can damage their reputation. We saw this with CDPR and Cyberpunk 2077. Granted, the game had other issues, but the biggest blow was the low quality it had to run at on the PS4, which prompted CDPR to offer refunds to those who bought it on that platform.

Client offloading of intensive processing. This one is a pretty interesting quality-of-life improvement for many interactive websites. When was the last time you went to upload a video or picture and were told the file was too large? Despite the numerous tools available, many people struggle with this trivial task, and if you want to keep the quality as high as possible, balancing size against quality becomes far less trivial. In-browser image compression tools already exist, and they are genuinely useful. Small website owners can keep costs down by compressing images without hurting the user experience much, if at all. I hate websites that ask me to keep an image within certain specifications when they could just have my web browser do it automatically. Some websites let me crop the image afterwards, but what's the point if I could have done that myself in the same software I used to compress and rescale it?
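As a rough illustration of what "let the browser do it" can look like, here is a minimal sketch of client-side resizing and re-encoding before upload, using only the built-in Canvas API. A wasm-compiled codec could slot in where the canvas encoder is called for tighter control over quality and size; the maxWidth default and the JPEG quality value here are just placeholders.

```typescript
// Minimal sketch: downscale and re-encode an image on the client before upload.
// Uses only built-in browser APIs; a wasm codec could replace canvas.toBlob()
// if you need finer control over the output.
async function resizeForUpload(file: File, maxWidth = 1920): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxWidth / bitmap.width);

  const canvas = document.createElement("canvas");
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);

  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("encoding failed"))),
      "image/jpeg",
      0.85 // quality/size trade-off; tune per site
    )
  );
}
```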

Other intensive processing tasks I can think of are live video or audio processing in communications apps (depending on hardware), or machine learning routines like natural language processing for writing assistance or site-specific voice assistants. Many intensive tasks are currently handled on the server at high cost, so offloading them to the client is a very realistic and profitable endeavor that I expect we'll see more of in the future. A sketch of what that offloading looks like in code follows below.
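The plumbing for this kind of offloading is fairly small. The sketch below loads a hypothetical wasm module and runs it over an audio buffer entirely on the client; the module path "denoise.wasm" and the exports alloc and process are assumptions, since every module defines its own interface.

```typescript
// Sketch: run a (hypothetical) wasm audio filter on the client instead of the server.
async function denoise(samples: Float32Array): Promise<Float32Array> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/denoise.wasm"),
    {} // imports object; empty here for simplicity
  );

  // Export names are assumptions for this sketch.
  const { memory, alloc, process } = instance.exports as unknown as {
    memory: WebAssembly.Memory;
    alloc: (bytes: number) => number;
    process: (ptr: number, len: number) => void;
  };

  // Copy samples into wasm linear memory, process in place, copy back out.
  const ptr = alloc(samples.length * 4); // 4 bytes per float32
  new Float32Array(memory.buffer, ptr, samples.length).set(samples);
  process(ptr, samples.length);
  return new Float32Array(memory.buffer, ptr, samples.length).slice();
}
```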

Virtual environments. I don't think this is too useful, but it's far from the worst example. I do programming in Python, and I have a big problem with how Jupyter notebooks work. It's great that the notebook runs in the web browser, but the fact that you have to run a local server is annoying. Python can run just fine in the browser, and projects are coming along to get the scientific stack into the browser and to have Jupyter work standalone, which is absolutely fantastic.
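For a sense of what "Python in the browser" looks like today, here is a rough sketch using Pyodide, one of the projects bringing the scientific Python stack to wasm. The package name and loading details are assumptions for this sketch; check the Pyodide documentation for your setup (it can also be loaded from a CDN script tag instead of a bundler).

```typescript
// Sketch: run numpy entirely client-side via Pyodide (Python compiled to wasm).
import { loadPyodide } from "pyodide";

async function numpyInTheBrowser(): Promise<number> {
  const pyodide = await loadPyodide();   // downloads the Python runtime (wasm)
  await pyodide.loadPackage("numpy");    // fetches a pre-built numpy package

  // Plain Python, executed in the browser; the float converts to a JS number.
  return pyodide.runPython(`
import numpy as np
float(np.arange(12).reshape(3, 4).sum())
  `);
}
```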

You shouldn't underestimate the value of a virtual environment. Almost every developer toolchain has some form of one; Node.js with nvm and Python with conda are two examples. Granted, these are quite bloated module ecosystems, but you catch my drift. Windows virtual environments are getting increasingly important too: we sometimes need to test programs in clean installs of Windows, and people end up reinstalling their operating system because it's full of random stuff they don't want and that's too hard to clean up. However, I can't think of a very good reason to use a virtual environment in the web browser rather than a local program, except in the case of Jupyter, and even for Jupyter I prefer the VS Code version. The only reason I'd want to use it in the browser is if I'm running Jupyter on a remote machine dedicated to the task at hand.

The other use-cases

The use-cases mentioned above are but a small subset of the list provided on the WebAssembly website and boasted about in many articles all around. Let's go through them, and I'll explain to the best of my ability why I don't think they hold up.

High-performance web frameworks. While this is a very popular theme in many discussion forums, and frameworks that do this are already popping up, it is not a realistic use-case. Although this might change in the future, the glue code needed to manipulate the DOM has a big performance impact, and with modern JIT-compiling engines like V8, performance can even end up worse than plain JS for a lot of DOM manipulation. JS is actually really fast already. Even if direct DOM manipulation becomes possible, web development in JS is very mature. There is an endless supply of packages with dedicated development teams for pretty much anything you could think to do on the web, and this is sorely lacking in other languages. And it's hard enough to find JS developers for hire, let alone developers for any other language. The advantage of JS is that for full-stack development you have to know some JS anyway, even if you are building the backend in something else. Developers realized they only need to learn one language to do it all, so that's where they have all gone and will probably stay unless some miraculous culture shift happens. Until then, it is best to think of wasm as a VM embedded alongside JavaScript rather than as a replacement for it. A full framework in anything but JS has no real practical benefit.
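To make the glue-code point concrete, here is a minimal sketch of what that boundary looks like: wasm has no direct DOM access, so every DOM update has to call back into an imported JS function. The module path, the import name set_text, the export render, and the element id "counter" are all made up for illustration.

```typescript
// Sketch: a wasm "framework" still updates the DOM through imported JS glue.
const imports = {
  env: {
    // Called from inside the wasm module; each call pays the wasm/JS boundary
    // cost and then still goes through the same DOM machinery plain JS uses.
    set_text: (value: number) => {
      document.getElementById("counter")!.textContent = String(value);
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("/framework.wasm"),
  imports
);
(instance.exports.render as () => void)(); // hypothetical export
```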

Specialized applications. While the image recognition programs out there are really interesting, they are just demos. In reality, these programs aren't useful enough that someone in a pinch would go looking for a website that does them. Mostly they are far more useful as installed applications. Some are tools that require dedicating time to learn, like CAD, developer tooling such as compilers and IDEs, image and video editing, or language interpreters. In that case, why would you want the tool to live in the browser, inaccessible without internet and not running at its maximum performance? Others, like p2p applications, don't have any performance requirements that call for something like wasm in the first place. All of these are simply better as installed programs.

Various other libraries that already exist as web APIs, encryption and WebGL among them. Don't get me wrong: some libraries, such as databases, don't exist as web APIs at the moment, and this is really where WebAssembly can shine. But that would still be a demo, and the end goal would still be to implement the functionality as a web API.
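For example, hashing is already covered by the built-in Web Crypto API, so a wasm build of a hash library would mostly duplicate what the platform ships. A short sketch:

```typescript
// Hash a string with the built-in Web Crypto API; no wasm library required.
async function sha256Hex(text: string): Promise<string> {
  const data = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```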

What about out of browser use-cases?

This is really the crux of the argument. It's not what this article is about, but I think WebAssembly has a lot more potential outside the web. To be fair, I don't think it should be called WebAssembly at all, but it is what it is. Docker is a hack, and even its founders agree. WebAssembly actually does fill a void here.