Today marks one year since Scaleway's Serverless Functions and Containers entered General Availability. We've achieved a lot in the past year, and we're only just getting started. In the serverless team, we don't see serverless as a single product, but as an ecosystem, driven by a philosophy of making the cloud easier to use, more efficient, and more affordable. We are committed to delivering a set of services and frameworks that provide a complete serverless experience for a diverse range of applications. In this post we'll describe the big ideas behind the serverless ecosystem, where we are today, and where we're going in the future.
Writing and running code on a laptop is exactly that: writing the code, then telling the operating system to run it. Developers don't need to specify how to configure or share the underlying resources, which bits of the system to interface with, or what limits they want to enforce. It's so simple that even children can do it. Serverless is about bringing this same simplicity to the cloud, making deploying an application as easy as running it on your laptop. Users can focus on writing code, while the serverless platform handles the rest: provisioning, configuring, and scaling whatever underlying infrastructure is needed, billing the user only for what they use, and cleaning up afterwards.
We still have a way to go before we achieve the serverless dream, and today serverless is still synonymous with just two products: Function-as-a-Service and Container-as-a-Service. Both represent a significant step forward in shielding the user from complexity, allowing them to focus on writing code, and billing them only for what they use. However, functions and containers are just the foundations of the larger serverless ecosystem. To provide a complete serverless experience, we must also consider how we compose functions and containers into larger workflows, how we manage state and scale our storage, how we make it easy to port existing applications, and how we write new applications for this novel environment.
Just as you can't build a house on shaky foundations, you can't build a serverless ecosystem without a solid base. In serverless, this base is Functions-as-a-Service and Containers-as-a-Service. These are the workhorses of serverless, executing users' code and scaling on demand. On top of this base of FaaS and CaaS, we can build the rest of our serverless ecosystem. We started work on our FaaS and CaaS products as early as 2019.
Although we were quick to get to an MVP, GA was still almost two years away. These two years were spent building the scaffolding around the core compute product, to create a secure, smooth, production-grade serverless product. Significant projects included the build-and-deploy pipeline, configuring and stress-testing our autoscaling, performance tuning and profiling, scaling the platform across multiple clusters in multiple regions, security audits, and integrating the product into the Scaleway ecosystem. At the time, the company was focused on bare metal and virtual machines, so setting up and running a scalable serverless infrastructure was a big change in mindset.
It was also a big technical change, as we were sharing infrastructure between tenants at a much higher granularity, and dealing with high-volume, short-lived workloads. This placed increased stress on our metrics, logs, and internal APIs, and required a rethink of how we do observability and billing. It also put a strain on our nascent managed Kubernetes Service, with the volume of containers regularly causing headaches for the Container Registry Service. We launched the working beta in the summer of 2020.
The platform was slow during the first weeks due to unexpectedly high consumption from beta testers, and we ran into several scaling issues. Through more than a year of beta testing, we gathered lots of feedback on how we could build an even better and more innovative product. Some of the ideas that came out of this process are still in the pipeline, so watch this space! Finally, after two years of work and a last-minute bug that almost crashed the product, we were happy to release our serverless platform to the world. Scaleway's Serverless Functions and Containers entered GA in November 2021.
Since then, we have continuously added features to make our products more performant, flexible, powerful, and easy to use. Although we will continue to add features to make these foundations even better (such as support for public and private container registries, new language runtimes, and improved logging and metrics), we are now in the next phase of building our serverless ecosystem.
Now that we have a solid foundation in place with Serverless Functions and Containers, we can look to the future of serverless, and what that means for the rest of the ecosystem at Scaleway. We have worked closely with existing users of both serverless and non-serverless products to determine how our serverless ecosystem can best suit their needs. As a result of these consultations, we are now working on a number of products and enhancements that will allow users to build more complex applications, at larger scale, with more data, and more resources. We can break this work down into five categories: workflows, storage, programming, developer experience, and performance.

## Workflows

Although users can achieve a lot with single functions and containers, the real power of serverless comes from orchestrating multiple functions and containers as part of larger workflows. These workflows can be used for any multi-stage task, from downstream data processing pipelines, to multi-stage image compression and analysis, to distributed machine learning training. Serverless workflows cannot be configured without suitable plumbing to pass data between stages, and this is where Scaleway's new Messaging and Queueing service comes into play. Part of the next phase of our serverless ecosystem is a serverless orchestrator, built on Messaging and Queueing, which lets users connect functions and containers to a number of different event sources, including queues, publish-subscribe, and Amazon S3 events.
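To give a feel for what this plumbing looks like, here is a sketch of two pipeline stages passing work through a queue. It uses boto3 against an SQS-compatible endpoint; the endpoint URL, queue name, and credentials are placeholders rather than real values, and an orchestrator would wire the consuming side to a function automatically.

```python
import json

import boto3

# Placeholder endpoint and credentials for an SQS-compatible queue service.
sqs = boto3.client(
    "sqs",
    endpoint_url="https://sqs.example-endpoint.invalid",  # placeholder
    region_name="fr-par",
    aws_access_key_id="ACCESS_KEY",       # placeholder
    aws_secret_access_key="SECRET_KEY",   # placeholder
)
queue_url = sqs.get_queue_url(QueueName="image-pipeline-stage-2")["QueueUrl"]

# Stage 1: publish a task for the next stage of the workflow.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"object_key": "raw/img-001.png"}),
)

# Stage 2 (normally a separate function): consume and process the task.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    task = json.loads(msg["Body"])
    print("processing", task["object_key"])
    # Acknowledge the message so it is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```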
## Storage

Compute isn't much use without access to data, so in addition to serverless functions and containers, we need to provide serverless storage. Serverless storage shares many properties with serverless compute: users don't have to provision or scale the underlying infrastructure, and they pay only for what they use. Object Storage has provided a form of serverless storage at Scaleway for several years, but object storage is only part of the story. To broaden our serverless storage offering, we are working on a new line of serverless NoSQL and SQL databases, which will offer the same auto-scaling, pay-as-you-go experience as Serverless Functions and Containers.
With combined serverless compute and storage, users can build front-to-back data-driven applications without configuring any infrastructure. For example, Lego runs its whole e-commerce platform using only serverless resources (functions, databases, storage) and managed services (API gateway, messaging, emailing…). This enables them to run efficient applications that use few resources for most of the year, while still scaling to match customer demand during Black Friday or the Christmas season.
## Programming

Today's serverless programming model is built on stateless functions. Each function cannot communicate directly with the others, and must be able to operate at arbitrary scale. While this is a powerful model for certain applications, it makes it difficult to port many existing applications, and presents a steep learning curve for non-serverless developers. Indeed, as of today, making the most of serverless means thinking in terms of small, single-purpose functions and how to orchestrate them, as the sketch below illustrates. As is the case in other distributed and parallel execution environments, we want to support a range of different programming models and applications, not just those built on stateless functions.
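For illustration, here is a minimal function in that stateless style. The event/context handler signature follows the common FaaS convention, and the field names are illustrative; the key point is that nothing survives between invocations, so any durable state must live outside the function.

```python
import json

def handle(event, context):
    """A small, single-purpose, stateless function: compute one thing.

    Everything the invocation needs arrives in the event; nothing is kept
    between calls, so durable state belongs in external storage or a queue.
    """
    body = json.loads(event.get("body") or "{}")
    width = float(body.get("width", 0))
    height = float(body.get("height", 0))
    return {
        "statusCode": 200,
        "body": json.dumps({"area": width * height}),
    }

# Local smoke test: invoke the handler directly with a fake event.
if __name__ == "__main__":
    print(handle({"body": json.dumps({"width": 3, "height": 4})}, context=None))
```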
To support this broader range, we are focusing on two themes: (a) building new programming models specifically for serverless, and (b) supporting existing applications written for other execution environments. New serverless programming models are those that take advantage of the effectively limitless scaling of the serverless execution environment, inspired by ideas from actor-based parallelism and message-passing frameworks. These often cater to distributed, highly parallel applications such as scientific simulations and distributed ML training built on MPI. Porting existing applications means providing seamless integration with other parallel and distributed programming frameworks. Applications built on top of these frameworks could then switch from their existing execution environment to a new serverless backend by changing a single line of configuration. Two good examples of such frameworks are Spark and Dask, both used extensively by data scientists. By porting these frameworks to serverless, we can open up the potential of serverless to a large number of new users and use-cases.
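To make the "single line of configuration" idea concrete, here is a hedged sketch using Dask. The local cluster is real Dask API; the serverless cluster class is hypothetical, shown commented out purely to mark where the swap would happen.

```python
import dask.bag as db
from dask.distributed import Client, LocalCluster

# Existing execution environment: a local cluster on the developer's machine.
cluster = LocalCluster(n_workers=4)
# Hypothetical serverless backend: the one changed line of configuration.
# cluster = ScalewayServerlessCluster()   # illustrative name, not a real class

client = Client(cluster)

# The application code itself is identical whichever backend executes it.
counts = (
    db.from_sequence(range(1_000_000), npartitions=16)
    .map(lambda x: x % 10)
    .frequencies()
    .compute()
)
print(counts)
client.close()
```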
## Developer experience

The serverless developer experience is different from that encountered in many other execution environments. By definition, serverless applications are distributed, and run in an environment that is controlled by the provider. This exacerbates challenges around logging, debugging, and monitoring, similar to those found in microservice architectures, especially those relying on cloud provider services. In particular, to verify that a function still works in the provider's environment, users have to deploy it; this slows down the feedback from debugging information and breaks the code/run/improve loop.
To improve the serverless developer experience and fulfill the promise of simplicity, we are working on a number of tools and enhancements. The first of these is offline development and testing. The serverless execution environment is defined and controlled by the provider, so replicating it on a local development machine can be challenging.
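As an illustration of what offline testing can look like (a generic sketch, not Scaleway's actual tooling), the snippet below wraps a function handler in a tiny local HTTP server so it can be exercised without deploying:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def handler(event, context):
    """The serverless function under test."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello {name}"})}

class LocalGateway(BaseHTTPRequestHandler):
    """Translates plain HTTP requests into function-style events."""

    def do_GET(self):
        query = {k: v[0] for k, v in parse_qs(urlparse(self.path).query).items()}
        result = handler({"queryStringParameters": query}, context=None)
        self.send_response(result["statusCode"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result["body"].encode())

if __name__ == "__main__":
    # curl "http://127.0.0.1:8080/?name=dev" exercises the handler locally.
    HTTPServer(("127.0.0.1", 8080), LocalGateway).serve_forever()
```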
We are building tools along these lines to help developers get meaningful feedback from their development environment, without having to redeploy functions remotely. The second improvement we are making in this area is monitoring and logging. By integrating with Scaleway's Observability Platform-as-a-Service, we will provide users with a detailed overview of their serverless functions, along with logs and metrics that can help them manage this distributed environment.
## Performance

Performance in serverless is often distilled down to a single problem: cold starts. Although cold starts are an important factor, especially in user-facing applications, serverless performance also covers scale-out latency, build times, and latency when interacting with the rest of the ecosystem. These performance concerns are multiplied for every container and function that is added to an application, so it's important for us to ensure high performance when considering serverless at scale.

Our work here covers two areas: reducing the build, deployment, and invocation time of our existing architecture, and researching new architectures, such as those based on WebAssembly. Reducing latency in our existing architecture involves caching, pooling, and simply reducing the resources needed to execute each function. By adding registry caches, reusing isolation environments between requests, and pooling pre-warmed VMs and containers (sketched below), we can reduce the latency involved in all aspects of our serverless systems. Experimenting with lightweight isolation mechanisms such as WebAssembly offers order-of-magnitude improvements in cold start times, as well as exciting opportunities to build new custom runtime environments, with low-latency messaging and shared memory between functions. Our research and development on new serverless runtimes is ongoing, but we hope to reveal some exciting new improvements next year.
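To show why pooling matters, here is a toy sketch of the pre-warmed pool idea: rather than paying the startup cost on every invocation, a handful of environments are created in advance and handed out on demand. All names and timings are illustrative, not our production implementation.

```python
import queue
import threading
import time

STARTUP_COST_S = 0.5  # stand-in for container/VM boot plus runtime init

class Environment:
    """A fake execution environment with an expensive cold start."""

    def __init__(self):
        time.sleep(STARTUP_COST_S)  # simulate the cold start

    def invoke(self, fn, *args):
        return fn(*args)

class WarmPool:
    """Pre-warms environments up front so invocations can skip the cold start."""

    def __init__(self, size):
        self._pool = queue.Queue()
        # Warm environments concurrently, off the request path.
        for _ in range(size):
            threading.Thread(target=lambda: self._pool.put(Environment())).start()

    def acquire(self):
        return self._pool.get()  # instant once a warm environment is free

    def release(self, env):
        self._pool.put(env)  # reuse instead of tearing down

pool = WarmPool(size=4)
env = pool.acquire()  # warm environment; the invocation skips the cold start
print(env.invoke(lambda x: x * 2, 21))  # prints 42
pool.release(env)
```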
Serverless is a philosophy for building cloud platforms that feel like one big operating system. At Scaleway we’re committed to delivering an ecosystem of products that adhere to this philosophy, allowing users to build large, scalable, stateful applications as simply and affordably as possible. In one short year we’ve built two products that form the foundation of this ecosystem: Serverless Functions and Serverless Containers. There’s much, much more to come, so watch this space, and thank you for joining us on the journey.