What is a serverless database, and how can it save you time and money?

It's no surprise that databases are among the most overprovisioned servers - reports show that even containers sit idle 80% of the time.

Serverless databases can greatly reduce both overall database costs and operational overhead - a key concern for many organizations.

In this article, we’ll cover what a serverless database is, its main use cases, and why it’s a trend you don’t want to miss out on.

Oversizing your database may feel comfortable, but it’s costing more than you think

To this day, the standard practice for databases is to overprovision them, for two main reasons:

  • A workload peak can trigger an application-wide outage, and extra capacity feels like cheap insurance against that risk
  • Upscaling or downscaling relational databases typically requires downtime or carefully planned maintenance - both time-consuming activities.

Overprovisioning - and therefore increasing your database costs - to avoid either scenario feels like a small price to pay.

As pointed out by developers and recent studies, including Datadog’s State of Cloud Costs report, most servers are heavily overprovisioned (typically with two to ten times the needed capacity), and managing them is a time-consuming and error-prone process.

No organization wants to put its entire end user experience at risk, or permanently associate its brand with unreliability because of sizing miscalculations. The stakes are just too high.

However, a closer look at how relational database engines like PostgreSQL work shows that they essentially boil down to a compute component and a storage component. By leveraging progress made with Kubernetes and Amazon S3-compatible Scaleway Object Storage, it is technically possible to make PostgreSQL autoscale reliably - or go even further and make it completely “serverless.”

Less is more: understanding Serverless

The exact definition of Serverless is subject to much debate, but at Scaleway we have settled on the following:

A true Serverless solution is one that treats the physical or virtual server it relies on as an abstraction layer, removing all limitations linked to it.

In practice, this means the database should:

  • Seamlessly autoscale based on usage, for both compute and storage. This guarantees that no manual intervention or downtime is required to optimize capacity based on needs.
  • Be billed based on consumption. Only the compute and storage actually consumed - down to the second - is billed, leaving no incentive to overprovision capacity (a worked cost sketch follows the diagram below).
  • Scale all the way down to zero. This removes the need for a minimum unused capacity, allowing intermittent use cases to be properly met (a minimum capacity can be thought of as a small server constantly running).
[Figure: Serverless SQL database architecture diagram for optimized scaling]
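
To make the pay-per-use model concrete, here is a minimal back-of-the-envelope sketch of how consumption-based billing plays out for an intermittent workload versus an always-on instance. All prices and usage figures are hypothetical placeholders, not Scaleway's actual rates; only the structure of the calculation matters.

```python
# Hypothetical pay-per-use cost estimate for a serverless database.
# All prices and usage figures below are made-up placeholders, not real rates.

COMPUTE_PRICE_PER_VCPU_SECOND = 0.000025  # hypothetical price per vCPU-second
STORAGE_PRICE_PER_GB_MONTH = 0.10         # hypothetical price per GB-month

def monthly_cost(active_seconds: float, vcpus: float, storage_gb: float) -> float:
    """Estimate a monthly bill: compute billed per second of activity, storage per GB."""
    compute_cost = active_seconds * vcpus * COMPUTE_PRICE_PER_VCPU_SECOND
    storage_cost = storage_gb * STORAGE_PRICE_PER_GB_MONTH
    return compute_cost + storage_cost

# An internal tool active 2 hours a day, 22 working days a month, scaling to zero otherwise.
intermittent = monthly_cost(active_seconds=2 * 3600 * 22, vcpus=1, storage_gb=10)

# The same capacity kept running 24/7 on a fixed-size instance.
always_on = monthly_cost(active_seconds=30 * 24 * 3600, vcpus=1, storage_gb=10)

print(f"intermittent: ~{intermittent:.2f}/month vs always-on: ~{always_on:.2f}/month")
```

Under these (made-up) assumptions, the intermittent workload is billed for roughly 44 hours of compute per month instead of 720, which is where the bulk of the savings comes from.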

Removing the “server abstraction layer” can be perceived as a considerable risk - a database outage, or an unexpectedly high bill with no way to see what caused it, is not something anyone looks forward to.

As we developed our first Serverless SQL Database, we realized from user feedback and internal testing that we needed to go further for a smooth developer experience.

This is why we’re adding further requirements to this definition:

  • A detailed, real-time view of consumption, along with transparency on how the autoscaling algorithm works.
  • The ability to define a maximum capacity to keep costs under control.
  • The ability to import and export data in a standard format.
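
To illustrate the last point, here is a minimal sketch that exports a table to CSV over a plain PostgreSQL connection, using only the standard COPY protocol, so the same code works against a serverless endpoint or a classic instance. The connection string and table name are hypothetical placeholders.

```python
# Minimal sketch: export a table to CSV from any PostgreSQL-compatible endpoint.
# The DSN and the table name are hypothetical placeholders.
import psycopg2

DSN = "postgresql://user:password@my-serverless-db.example:5432/defaultdb?sslmode=require"

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur, open("orders.csv", "w") as out:
        # COPY ... TO STDOUT is standard PostgreSQL, so the export stays in a
        # portable CSV format regardless of where the database actually runs.
        cur.copy_expert("COPY orders TO STDOUT WITH (FORMAT csv, HEADER true)", out)
```

The import path is symmetrical: the same CSV can be loaded back with COPY ... FROM, or a full dump can be moved around with the usual pg_dump and pg_restore tools.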

From test environments to absorbing unpredictable workloads, the possibilities are endless

After more than a year of real-life testing and discussions with users, we have uncovered a variety of needs across hundreds of different organizations. The most common use cases include:

  • Scaling web applications during the day (e.g. delivery or transportation apps that experience regular traffic spikes, or spikes after a communication campaign) or during a particular season (e.g. retail or accommodation applications facing peak traffic before the holidays)
  • Running data processing batches intermittently (e.g. a production planning algorithm run once a week)
  • Shutting down development environments (e.g. at night and over weekends) or spinning up temporary performance test environments (e.g. for load testing)
  • Running internal tooling during business hours (e.g. reporting tools that previously relied on local SQLite files).

Of course, there are plenty of good reasons for choosing a traditional Database Instance (i.e. one relying on a fixed-size virtual machine), whether managed or self-hosted:

  • Having a consistent and predictable workload (e.g. machine-to-machine workloads with limited variability, such as IoT sensors emitting at regular intervals, or company systems running 24/7 or following the sun)
  • Keeping fine-grained control over database configuration (e.g. the number of connections or the maximum memory per connection in PostgreSQL - see the sketch after this list)
  • Keeping full PostgreSQL compatibility and usability, both in terms of SQL features and expected performance (e.g. some stateful PostgreSQL features are not a good fit for scaling down to zero, and many serverless implementations either do not support them or may degrade performance in some edge cases).
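
As an example of the fine-grained control mentioned above, the sketch below reads a few standard PostgreSQL settings that you would typically tune on a dedicated instance but usually cannot change on a serverless offering. The connection string is a hypothetical placeholder; the setting names are standard PostgreSQL parameters.

```python
# Minimal sketch: inspect PostgreSQL settings you would normally tune on a dedicated instance.
# The DSN is a hypothetical placeholder; the setting names are standard PostgreSQL parameters.
import psycopg2

DSN = "postgresql://user:password@my-db-instance.example:5432/defaultdb"

SETTINGS = ["max_connections", "work_mem", "shared_buffers", "max_parallel_workers"]

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        for name in SETTINGS:
            cur.execute("SHOW " + name)  # SHOW reports the current value of a server setting
            print(name, "=", cur.fetchone()[0])
```

On a self-hosted or managed instance, these values can be adjusted to match the workload; on a serverless database, they are generally managed for you as part of the abstraction.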

Serverless infrastructure fulfills the initial promise of the cloud: the ideal long-term pay-per-use solution

Serverless databases bring another option to the table when it comes to using standard and reliable databases for intermittent or unpredictable traffic.

  • They eliminate the need for overprovisioning by autoscaling based on usage and billing for actual consumption, reducing both database and overhead costs.
  • They automatically adjust compute and storage resources based on demand, require no downtime for capacity changes, and can scale down to zero, making them ideal for intermittent or unpredictable workloads.
  • They are well suited to variable-traffic applications, development environments, and intermittent tasks, while traditional databases remain a better fit for consistent, predictable workloads requiring fine-grained control.

As with many new core storage technologies, anticipating every use case and application is no easy task - it’s unlikely that Amazon S3’s creators predicted, 20 years ago, how standard their protocol would become across so many use cases. But given the current adoption momentum of serverless Postgres solutions and the feedback from users, we can’t wait to see what’s next.

Ready to explore serverless databases? Discover Scaleway’s Serverless SQL Database.
