
Serverless is a Broken Promise - Time to Fix it!

Current serverless offerings come with a substantial litany of complex, frustrating and expensive issues that together constitute a far cry from the promise of serverless. In this post we look into why that is, what serverless *should* be, and how to get there.


While the term serverless probably means different things to different people, definitionally it should mean a cloud platform where users can focus almost entirely on their app/service without having to deal with underlying infra issues, which are time-consuming and hard to debug (and often impossible to debug if you’re not the infra provider).

Instead, with current serverless deployments, we get some of this (we can, sometimes at a touch of a button, deploy an app to the cloud and have it available over the Internet), but also a substantial litany of complex, frustrating and expensive issues that together constitute a far cry from the promise of serverless:

  • Functions-based: this may fit some (many?) but not all cases. Serverless shouldn’t dictate the use of functions; other, more flexible schemes like containers should also apply.
  • Limited language support: Depending on the platform, not all languages are supported.
  • Duration limits: Functions can only run for a max amount of time. This may again not be an issue for everyone, but it’s certainly a limitation and potential problem.
  • Cold starts: The bane of many people’s (serverless) existence. This is the cloud provider’s problem, but the status quo is that it’s up to you, the person deploying the service, to deal with it. And the solutions often involve paying more money (e.g., to keep instances “warm”).
  • Simultaneous connection limits: yet one more thing to keep in mind; the underlying infra is certainly not transparent.
  • Deploy size limit: it’s not uncommon for serverless platforms to restrict this to at most hundreds of MBs.

If you see the words “limit”, “limitation”, and “problem” many times in the above, it’s because serverless is far from the seamless, well, serverless, experience it was meant to be. I’d even argue that the promise of serverless, where we simply deploy things and the infra is taken care of for us transparently, has been largely broken.

Of course there are other types of compute services beyond serverless, notably container-based and even VM-based ones, but there’s no panacea: choosing these means sacrificing beneficial features of serverless offerings such as fast autoscaling, scale-to-zero and pay-per-use billing.

In essence, there’s what I’d call cloud platform fragmentation: depending on the limitation(s) you care the most about, you have to (correctly) choose among a bewildering array of PaaS, FaaS, CaaS and you-name-it-aaS offerings - it’s clearly not easy being a cloud engineer. We’ll come back to why this is, and whether this must be so, a bit below.

So if the promise of serverless has been broken, this status quo raises the question: is it time to redefine what serverless means, and how it should work?

Serverless as it Should Be

From here on out I’ll argue that serverless needn’t be the way it currently is, and that most of these limitations and symptoms are not the result of anything fundamental, but rather emergent properties of the underlying technology used to build cloud stacks and controllers, which sometimes even pre-dates the cloud.

But first things first, what should serverless look like? As I mentioned at the top of this post, serverless should allow for users to focus entirely, or as much as possible, on their app/service without having to deal with any underlying infra issues. In a sense, we would like to cherry pick the best features from the different offerings of the fragmented cloud compute platform landscape. First, from the serverless providers I’d take:

  • Fast, reactive, millisecond autoscaling, so that users don’t need to worry about infra issues to do with their service growing, or with transient spikes — this should all be handled transparently.
  • Ease of use, so that users can just point at a project’s code and the platform takes care of the rest to deploy it.

Then, from VM/container-based services (e.g., AWS EC2, Google Cloud Run, etc), I’d pick:

  • App flexibility, so any languages/apps/services can be deployed.
  • No restrictions to do with run durations, image sizes, etc.

From somewhat newer platforms like Heroku, Fly.io, Railway, Render and similar:

  • Easy deployments, where a single button click or one-liner CLI command is enough to have a project deployed.

And finally, features that remove common deployment headaches, and whose absence prevents serverless from being truly serverless:

  • No cold starts, or at least fast, negligible, millisecond cold starts — users shouldn’t have to care whether an instance is cold, warm, or whatever other temperature label applies; this is all irrelevant to the actual app/service, and only creates noise and engineering headaches.
  • Fast, millisecond scale-to-zero, so that going from 0 to 1 (wake up) is hidden within the Internet’s RTT and so end users don’t notice; and so that going from 1 to 0 doesn’t result in getting charged for idle. Many platforms offer scale-to-zero, but not in these timescales.
  • Fast, millisecond stateful scale-to-zero, such that state is kept across different 0 to 1 and 1 to 0 cycles, but without having this mechanism be visible to end users.
  • Smallest possible cloud stack overhead, so that most/any inefficiencies or performance problems reside in the application and so can be improved on by developers; and so that you don’t get billed for your cloud platform’s overheads.

At present, depending on which of the above you care about and which limitations of the different offerings you can live with, cloud engineers have to pick one service or another — which always comes with compromises. Worse, should those criteria change over time, it’s difficult to move from one service to another (e.g., from Lambda to something container-based like Cloud Run).

So, is it possible to build a truly serverless platform to bring all of these features together, such that users can simply concentrate on developing a project and have the infra transparently take care of the actual deployment, without headaches nor restrictions?

Enter Unikraft Cloud

Clearly I’m going to answer yes to that question 🙂 . Many of the issues plaguing the cloud such as cold boots, slow autoscale, slow scale-to-zero, chunky images, high memory usage, and over-billing aren’t fundamental, but a direct result of how cloud stacks are built.

At Unikraft Cloud we’ve built a cloud platform based on fundamentally different tech resulting from years of research and open source work on unikernels: extremely specialized, fast, yet strongly isolated virtual machines that can cold start in milliseconds; as well as a custom high-performance controller and various other components that make Unikraft Cloud go fast. On Unikraft Cloud you can expect:

  • Millisecond cold starts.
  • Millisecond scale-to-zero, for going from both 0 to 1 and 1 to 0 (stateful scale-to-zero coming very soon).
  • Millisecond, reactive autoscale.
  • Deployments based on Dockerfiles, so you’re not limited to functions or a few languages.
  • Simple, one-liner deployments via kraft, its CLI tool, and integration with GitHub Actions.
  • Minimal cloud stack, so that inefficiencies and perf issues, if any, reside mostly in the app, empowering you to fix them as needed; and ensuring that we as the cloud provider don’t bill you for our infra inefficiencies.
  • Ease of mind — you no longer need to make complicated decisions as to whether your service should go to a FaaS, PaaS or whatever other *aaS might fit. On Unikraft Cloud we break what we feel is a false fragmentation among platform types.
  • None of the restrictions or limitations currently found in serverless offerings.
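To make the one-liner deployment point concrete: assuming the kraft CLI is installed and you’ve signed in to your account, deploying a project directory that contains a Dockerfile can look something like the sketch below (the port mapping is illustrative; check the Unikraft Cloud docs for the exact flags for your setup):

```shell
# Build and deploy the app in the current directory (expects a Dockerfile),
# exposing the service publicly on port 443 and forwarding traffic
# to the app's internal port 8080.
kraft cloud deploy -p 443:8080 .
```

Since the deployment starts from an ordinary Dockerfile, there’s no need to restructure the app into functions or switch languages to run it on the platform.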

To prevent this post from growing too long, I’ll write a separate one to dive more into how cloud stacks are built and what sets Unikraft Cloud apart. In the meantime, don’t take my word for it: sign up and try it out!

Get early access to Unikraft Cloud

If you want to find out more about the tech behind Unikraft Cloud, read our other blog posts, join our Discord server and check out Unikraft’s Linux Foundation OSS website. We would be extremely grateful for any feedback!

Sign up now