Understanding Scaleway's Serverless Databases
When we started building a Serverless product at Scaleway two years ago, we first thought about providing a single Serverless product: easy to install and get your application up and running...
It's no surprise when you know that databases are among the most overprovisioned servers, with reports showing that even containers sit idle 80% of the time.
Serverless databases can greatly reduce overall database costs, as well as overhead costs - a key concern for many organizations.
In this article, we'll cover the foundations of what a serverless database is, its main use cases, and why it's a trend you don't want to miss out on.
To this day, the standard practice for databases is to overprovision them, for two main reasons:
Overprovisioning, and therefore increasing your database costs, to avoid either scenario feels like a small price to pay.
As pointed out by developers and recent studies, including Datadog's State of Cloud Costs report, most servers are highly overprovisioned (by a factor of two to ten in most cases), and managing them is a time-consuming and error-prone process.
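A quick back-of-the-envelope calculation shows why this matters. The prices and utilization figures below are purely illustrative assumptions, not Scaleway's actual pricing; the point is the shape of the comparison between an always-on instance sized for peak load and pay-per-use compute:

```python
# Hypothetical figures for illustration only; not actual Scaleway pricing.
FIXED_INSTANCE_PRICE_PER_HOUR = 0.20   # always-on, sized for peak load
SERVERLESS_PRICE_PER_VCPU_HOUR = 0.10  # billed only while compute is active

HOURS_PER_MONTH = 730
ACTIVE_FRACTION = 0.20                 # busy ~20% of the time (idle 80%)
VCPUS_WHEN_ACTIVE = 2                  # compute consumed during busy periods

fixed_monthly = FIXED_INSTANCE_PRICE_PER_HOUR * HOURS_PER_MONTH
serverless_monthly = (SERVERLESS_PRICE_PER_VCPU_HOUR * VCPUS_WHEN_ACTIVE
                      * HOURS_PER_MONTH * ACTIVE_FRACTION)

print(f"fixed:      ${fixed_monthly:.2f}/month")       # $146.00/month
print(f"serverless: ${serverless_monthly:.2f}/month")  # $29.20/month
```

With a database that is idle 80% of the time, even a conservative model like this one shows the always-on instance costing several times more than pay-per-use billing.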
No organization wants to put its entire end user experience at risk, or permanently associate its brand with unreliability because of sizing miscalculations. The stakes are just too high.
However, a closer look at how relational database engines like PostgreSQL work shows that they essentially break down into a compute component and a storage component. Leveraging progress made by Kubernetes and Amazon S3-compatible Scaleway Object Storage, there are technical solutions to make PostgreSQL autoscale reliably, or go further and make it completely "serverless."
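The compute/storage split can be illustrated with a toy model: compute units scale with demand and drop to zero when the database is idle, while storage persists independently. All names, capacities, and scaling rules here are assumptions for the sake of the sketch, not Scaleway's actual implementation:

```python
class ServerlessPostgres:
    """Toy model: compute scales with demand and can reach zero,
    while storage (think object-storage-backed pages) persists."""

    MAX_COMPUTE_UNITS = 8       # assumed upper scaling bound
    CONNECTIONS_PER_UNIT = 50   # assumed capacity of one compute unit

    def __init__(self):
        self.active_connections = 0
        self.storage = {}  # survives even when compute is at zero

    @property
    def compute_units(self):
        # Scale to zero when idle; otherwise provision just enough units.
        if self.active_connections == 0:
            return 0
        needed = -(-self.active_connections // self.CONNECTIONS_PER_UNIT)  # ceil
        return min(needed, self.MAX_COMPUTE_UNITS)

db = ServerlessPostgres()
print(db.compute_units)    # 0: idle, nothing billed
db.active_connections = 120
print(db.compute_units)    # 3: scaled up for a traffic spike
db.active_connections = 0
print(db.compute_units)    # back to 0; db.storage is still there
```

The key property the model captures is that scaling decisions affect only the compute side; because data lives in a separate, durable layer, compute can disappear entirely without losing state.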
The exact definition of Serverless can be subject to many debates and opinions, but at Scaleway we settled on:
A true Serverless solution is a solution that removes all limitations linked to the physical or virtual server it relies on as an abstraction layer.
In practice, this means the database should:
Removing the "server abstraction layer" can be perceived as a considerable risk: experiencing a database outage, or an unexpectedly high bill without any way to see what caused it, are not events anyone looks forward to.
As we developed our first Serverless SQL Database, we realized from user feedback and internal testing that we needed to go further for a smooth developer experience.
This is why we’re adding further requirements to this definition:
After over a year of real-life tests and discussions with users, we uncovered a variety of needs from hundreds of different organizations. Most common use cases include:
Of course, there are plenty of good reasons for choosing a traditional Database Instance (i.e. one relying on a fixed-size virtual machine), whether managed or self-hosted:
Serverless databases bring another option to the table when it comes to using standard and reliable databases for intermittent or unpredictable traffic.
As with many new core storage technologies, anticipating all use cases and applications is not an easy task - it's unlikely that Amazon S3's creators predicted, 20 years ago, how standard their protocol would become for so many use cases. But seeing the current adoption momentum of serverless Postgres solutions and user feedback, we can't wait to see what's next.
Ready to explore serverless databases? Discover Scaleway’s Serverless SQL Database.