When and how to use Serverless

If done right, Serverless should be a no-brainer for any use-case: less time spent configuring and scaling means more time spent building applications; auto-scaling means that the same code that serves one user can serve thousands; and pay-as-you-go billing means no more paying for over-provisioned, under-utilized resources.

However, because Serverless is still in its infancy, it lacks the ecosystem of frameworks and libraries that exists for other cloud products. It also requires a shift in mindset, as developers get used to handing over control of their infrastructure to cloud providers. To give a better view of the state of Serverless today, we’ll discuss a few use-cases where it works well, and a few where it doesn’t.

When to use Serverless

Serverless APIs

FaaS and CaaS are effective tools for building APIs, where each API endpoint is packaged as a single function or container. Rather than having to provision, configure, and scale a load balancer and a set of VMs, the user can trust the Serverless provider to handle all the auto-scaling, load-balancing, and networking, leaving them free to focus on their application logic. In addition, because each endpoint is written as its own function/container, endpoints scale independently, freeing the developer from tuning their own auto-scaling logic. The application state behind the APIs can be stored in serverless storage, such as object storage or a serverless NoSQL database.
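As a minimal sketch, a single endpoint might look like the following, written in the style of an AWS Lambda handler behind an HTTP gateway. The event shape follows the API Gateway proxy format; the in-memory USERS table stands in for a serverless NoSQL lookup and is purely illustrative.

```python
import json

# Stand-in for a serverless NoSQL table (e.g. a key-value lookup);
# illustrative only.
USERS = {"42": {"id": "42", "name": "Ada"}}


def get_user(event, context):
    """Handle GET /users/{id} as its own independently scaled function."""
    user_id = event["pathParameters"]["id"]
    user = USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}
```

Because the next endpoint (say, a write-heavy POST /users) would live in its own function, a spike in reads scales only this handler.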

Serverless workflows

Serverless can be used to build scalable workflows, such as downstream data processing or image processing pipelines. By expressing each stage of the workflow as a serverless function/container and connecting the stages via message queues, the user does not have to determine the appropriate parallelism and resource levels for each stage, nor do they waste resources on over-provisioned VMs.
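Here is a sketch of one such stage, assuming an AWS Lambda function triggered by an SQS queue; the queue URL and the thumbnail “work” are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
# Placeholder URL for the queue feeding the next stage of the pipeline.
NEXT_STAGE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/publish-stage"


def handle_batch(event, context):
    """One stage of a queue-connected pipeline (SQS-triggered).

    The platform scales the number of concurrent copies of this function
    with queue depth, so no per-stage parallelism tuning is needed.
    """
    for record in event["Records"]:  # SQS delivers messages in batches
        job = json.loads(record["body"])
        result = {"image_id": job["image_id"], "status": "thumbnail-created"}
        sqs.send_message(QueueUrl=NEXT_STAGE_QUEUE_URL,
                         MessageBody=json.dumps(result))
```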

Serverless “glue”

Distributed cloud applications often need to connect streams of data between different parts of the application. This data will frequently need to be transformed or aggregated, such as when collecting events from a web application to write to a data warehouse. Rather than managing dedicated resources for this task, developers can create serverless functions that are automatically triggered by upstream events, running on demand to transform or aggregate the data.
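As an illustration, a “glue” function might aggregate a batch of clickstream records before loading them into a warehouse. The sketch below assumes a Kinesis-style trigger (base64-encoded records); the final print is a placeholder for a warehouse COPY/INSERT step.

```python
import base64
import json
from collections import Counter


def aggregate_clicks(event, context):
    """Aggregate a batch of upstream stream records on demand.

    Runs only when events arrive; there are no dedicated ETL servers
    to provision or keep warm.
    """
    counts = Counter()
    for record in event["Records"]:  # Kinesis-style event shape (assumed)
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        counts[payload["page"]] += 1
    rows = [{"page": page, "views": views} for page, views in counts.items()]
    print(json.dumps(rows))  # placeholder for the warehouse load step
```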

When not to use Serverless

With Serverless, the cloud provider is responsible for managing and scaling your infrastructure. The underlying platform must therefore cater for a range of different use-cases, scales, and performance expectations. This can lead to a “lowest common denominator” effect, where serverless meets the baseline requirements of most applications but falls short in more custom or high-performance scenarios.

Certain characteristics of an application can make it unsuitable for serverless computing:

High-performance

If your application requires a high level of computational power, such as video rendering or scientific simulations, serverless computing may not be able to provide the necessary resources.

Long-running

Serverless computing is designed for short-lived, stateless functions. If your application requires long-running processes or needs to maintain state, it may be better to use a traditional server-based model.

Strictly regulated

Serverless runs your application on shared infrastructure, i.e. side by side with other, potentially untrusted applications. Cloud providers make every effort to isolate serverless applications, and there is no reason to believe serverless is any less secure than any other form of cloud computing. However, it is unlikely to meet the strictest regulatory requirements around sensitive data, such as the data handled in medical, governmental, or law-enforcement applications.

High traffic

If your application is expected to receive a sustained, high volume of traffic, the per-invocation cost of running it on a serverless platform can be significantly higher than that of a traditional server-based model.

Highly complex

If your application is complex, it may be more difficult to break it down into smaller, independent functions that can be run on a serverless platform.

How to use Serverless

Once you’ve decided to embark on your Serverless journey, it can be difficult to know where to begin. All providers offer a browser-based console, where you can write code, connect services, and upload dependencies. This is great for experimenting with serverless, but can be cumbersome when it comes to managing a larger app.

Serverless Framework is an open-source project that aims to address some of the complexity of managing serverless apps. It takes a declarative approach: you declare multiple serverless functions, and the communication between them, in YAML files. Serverless Framework provides back-ends for all major cloud providers, making it easier to migrate functions between them.
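For instance, a minimal serverless.yml might declare a single HTTP-triggered function like this (the service, handler, and runtime names are illustrative):

```yaml
service: user-api

provider:
  name: aws            # swap for another supported provider to migrate
  runtime: python3.12

functions:
  getUser:
    handler: handler.get_user   # module.function implementing the endpoint
    events:
      - httpApi:                # expose the function via an HTTP endpoint
          path: /users/{id}
          method: get
```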

Finally, there is an emerging trend towards higher-level Serverless frameworks, which transparently create Serverless resources from your code. These frameworks often adopt conventions from conventional web frameworks, for example the decorator-based routing pattern often seen in Python web frameworks.
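AWS Chalice is one such framework: routes are declared with Python decorators, and the underlying function and API resources are created on deploy. A minimal sketch (app and route names are illustrative):

```python
from chalice import Chalice  # AWS's decorator-based serverless framework

app = Chalice(app_name="hello-serverless")


@app.route("/hello/{name}")
def hello(name):
    # On `chalice deploy`, this route becomes a serverless function behind
    # an HTTP endpoint; no YAML or console configuration is needed.
    return {"message": f"Hello, {name}"}
```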

Once you’ve written and deployed your app, you will want to monitor it and view logs and metrics. Whereas for other distributed systems this can be a job in itself, most serverless providers offer out-of-the-box observability products.

How Serverless will get better

As we’ve mentioned, serverless is still a young technology, and the associated open-source ecosystem, development tools, and design patterns are still rapidly evolving. In the next 5 years or so, we can expect serverless computing to improve on its current weaknesses in several ways:

  1. Programming models. Serverless is currently based around functions, with few other abstractions or frameworks available to build larger workflows and parallel applications. The availability of new parallel and event-driven serverless frameworks will make it easier to write more complex applications.
  2. Storage. Serverless storage is still difficult to manage, as it must be provisioned and scaled independently of the compute. This scaling problem runs counter to the “ease” principle of serverless, and we will likely see serverless compute and storage become more tightly coupled in the future.
  3. Networking and discoverability. Connecting the different parts of a serverless application is difficult today, as you cannot know the topology, or even the number of connections you will need, in advance. This has been mitigated by the use of messaging and queueing products, but these do not support the breadth of communication patterns and protocols offered by a standard networking stack.
  4. Support for different languages. Today’s serverless use-cases and programming models are well-suited to dynamic languages such as JavaScript and Python. However, as we start to see larger, more high-performance serverless applications, we can expect greater support and more frameworks in compiled languages such as Rust, Go and C++.
  5. Cost visibility and predictability. Although serverless is pay-as-you-go, this doesn’t guarantee that it will be cheaper than a server-based alternative. Given its black-box auto-scaling architecture, it’s also difficult to predict costs in advance. For this reason, it’s likely that we will see more tooling around predicting and controlling costs, as well as a pricing race to the bottom among providers.
  6. Performance and resources. Serverless functions don’t currently offer enough resources, communication patterns, or execution time to run larger, high-performance computing tasks, such as ML training. However, the ease, scale and cost benefits of serverless are still appealing to developers of high-performance applications, so we will no doubt see more development in this area.
