Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of compute resources. Unlike traditional hosting models, you do not need to provision, scale, or maintain servers. Instead, you focus solely on writing and deploying your code, and the infrastructure scales automatically to meet demand.
These services allow you to build highly scalable, event-driven, and pay-as-you-go solutions. Serverless Containers and Functions help you create applications and microservices without worrying about server management, while Serverless Jobs lets you run large-scale, parallel batch-processing tasks efficiently. This can lead to faster development cycles, reduced operational overhead, and cost savings.
Yes. Because Serverless Containers supports any containerized application, you can choose the language, runtime, and framework that best suits your needs. As long as it can run in a container and respond to HTTP requests, Serverless Containers can host it.
With serverless, you only pay for the computing resources you use. There are no upfront provisioning costs or paying for idle capacity. When your application traffic is low, the cost scales down, and when traffic spikes, the platform automatically scales up, ensuring you never overpay for unused resources.
No. Deploying a new version of your Serverless Container triggers a rolling update: instances running the new version are started and become ready before traffic is shifted away from the old instances, which are then removed. The new version is therefore rolled out gradually, without downtime.
This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to it.
Yes, Serverless Containers resources can be changed at any time without causing downtime - see the previous question for full details.
Scaling in Serverless Containers and Serverless Functions is handled automatically by the platform. When demand increases - more requests or events - the platform spins up additional instances to handle the load. When demand decreases, instances spin down. This ensures optimal performance without manual intervention.
Integration is straightforward. Serverless Functions and Containers can be triggered by events from Queues and Topics and Events, and can easily communicate with services like Managed Databases or Serverless databases. Serverless Jobs can pull data from Object Storage, or output processed results into a database. With managed connectors, APIs, and built-in integrations, linking to the broader Scaleway ecosystem is seamless.
Yes. Many traditional applications can be containerized and deployed to Serverless Containers. This makes it easier to modernize legacy systems without a complete rewrite. By moving to a serverless platform, you gain automatic scaling, reduced operational overhead, and a simpler infrastructure management experience.
Yes. Serverless Containers are stateless, meaning the container does not store any client session state locally. Session data should instead be kept on the client (or in an external store) and passed to the container as needed.
Serverless Containers are billed on a pay-as-you-go basis, strictly on resource consumption (Memory and CPU).
Memory consumption: €0.10 per 100 000 GB-s, and we provide a 400 000 GB-s free tier per account per month.
| Memory | Price per second |
|---|---|
| 128 MB | €0.000000125 |
| 256 MB | €0.00000025 |
| 512 MB | €0.0000005 |
| 1024 MB | €0.000001 |
| 2048 MB | €0.000002 |
| 3072 MB | €0.000003 |
| 4096 MB | €0.000004 |
vCPU consumption: €1.00 per 100 000 vCPU-s, and we provide a 200 000 vCPU-s free tier per account per month.
| CPU | Price per second |
|---|---|
| 0.07 vCPU | €0.0000007 |
| 0.14 vCPU | €0.0000014 |
| 0.28 vCPU | €0.0000028 |
| 0.56 vCPU | €0.0000056 |
| 1.12 vCPU | €0.0000112 |
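The per-second prices in both tables follow directly from the headline rates (€0.10 per 100 000 GB-s and €1.00 per 100 000 vCPU-s). A quick sketch of the derivation, with Python used purely for the arithmetic:

```python
# Derive the per-second prices from the headline rates:
# memory is billed at €0.10 per 100 000 GB-s, vCPU at €1.00 per 100 000 vCPU-s.
MEMORY_RATE_PER_GBS = 0.10 / 100_000   # € per GB-second
VCPU_RATE_PER_VCPUS = 1.00 / 100_000   # € per vCPU-second

def memory_price_per_second(mb: int) -> float:
    """€ per second for a container allocated `mb` MB of memory."""
    return (mb / 1024) * MEMORY_RATE_PER_GBS

def vcpu_price_per_second(vcpu: float) -> float:
    """€ per second for a container allocated `vcpu` vCPU."""
    return vcpu * VCPU_RATE_PER_VCPUS
```

For example, 128 MB is 0.125 GB, so one second costs 0.125 × €0.10 / 100 000 = €0.000000125, matching the first row of the memory table.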
| Criteria | Value |
|---|---|
| Monthly duration | 30 000 000 s |
| Amount of memory allocated | 128 MB |
| Amount of vCPU allocated | 70 mvCPU |
| Free tier | Yes |
Monthly Cost: €22.35
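The €22.35 figure can be reproduced from the rates and free tiers stated earlier. A worked sketch of the arithmetic:

```python
# Reproduce the monthly cost example from the published rates and free tiers.
SECONDS = 30_000_000       # monthly duration
MEMORY_GB = 128 / 1024     # 128 MB expressed in GB
VCPU = 0.07                # 70 mvCPU

# Total resource consumption over the month
gb_seconds = SECONDS * MEMORY_GB    # 3 750 000 GB-s
vcpu_seconds = SECONDS * VCPU       # 2 100 000 vCPU-s

# Subtract the monthly free tiers, then apply the rates
billable_gbs = max(gb_seconds - 400_000, 0)
billable_vcpus = max(vcpu_seconds - 200_000, 0)
memory_cost = billable_gbs * 0.10 / 100_000    # €3.35
vcpu_cost = billable_vcpus * 1.00 / 100_000    # €19.00
total = round(memory_cost + vcpu_cost, 2)      # €22.35
```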
Insufficient vCPU, RAM, or ephemeral storage can cause containers to enter an error state. Make sure to provision enough resources for your container. We recommend starting with high values, using metrics to monitor the resource usage of your container, and then adjusting the values accordingly.
Optimize the startup: Cold starts can be caused by loading a large number of dependencies and opening many resources at startup. Ensure that your code avoids heavy computations or long-running initialization at startup, and keep the number of loaded libraries to a minimum.
Keep your container warm: You can use CRON triggers at certain intervals to keep your container warm, or set the min-scale parameter to 1 when required.
Increase resources: Adding more vCPU and RAM can help to significantly reduce the cold-starts of your container.
Use sandbox v2: We recommend you use sandbox v2 (advanced settings) to reduce cold starts.
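One way to apply the first tip above is to defer heavy initialization until the first request rather than doing it at container startup. A minimal sketch, where the lightweight `statistics` module stands in for a genuinely heavy dependency:

```python
# Lazy initialization: the heavy dependency is loaded on first use,
# not during container startup, which keeps cold starts short.
_heavy = None

def get_heavy():
    global _heavy
    if _heavy is None:
        # Imagine a large ML library or a client with a slow handshake here;
        # `statistics` is just a lightweight stand-in for the example.
        import statistics
        _heavy = statistics
    return _heavy

def handle_request(values):
    # The first request pays the initialization cost; later requests reuse it.
    return get_heavy().mean(values)
```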
Refer to our dedicated page about Serverless Containers limitations and configuration restrictions for more information.
Scaleway's Container Registry allows for a seamless integration with Serverless Containers and Jobs at a competitive price. Serverless products support external public registries (such as Docker Hub), but we do not recommend using them due to uncontrolled rate limiting, which can lead to failures when starting resources, unexpected usage conditions, and pricing changes.
You can copy an image from an external registry using the Docker CLI, or open source third-party tools such as Skopeo. Refer to the dedicated documentation for more information.
Serverless Containers does not yet support Private Networks. However, you can use the Scaleway IP ranges defined at https://www.scaleway.com/en/peering/ on Managed Databases and other products that allow IP filtering.
There are several ways to deploy containers. Refer to the dedicated documentation to determine the best method for your use case.
Serverless Containers uses the http1 protocol by default, but some services (e.g., gRPC) only support http2. Protocol switching is available in the console, under the Advanced options section of the Deployment tab, where you can upgrade the protocol to http2 (h2c).
A Serverless Container is set to ready once the specified port is correctly bound to the container, and it will then start receiving traffic. If your application needs to perform some tasks before receiving traffic (e.g., connecting to a database), it is important to run them before binding to the port (i.e., before starting the web server).
For now, the HEALTHCHECK Docker directive has no impact on container readiness. In the future, the health check will be customizable for your applications.
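That ordering can be sketched as follows, using only Python's standard library (the function names here are illustrative, not part of any Scaleway API): run all startup work first, and only then bind the port, since readiness, and therefore traffic, follows the port binding.

```python
import http.server
import socketserver
import threading

STARTUP_LOG = []  # records the startup order, for illustration only

def init_dependencies():
    # Placeholder for real startup work: connecting to a database,
    # loading configuration, warming caches, etc.
    STARTUP_LOG.append("init")

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def start_server(port: int) -> socketserver.TCPServer:
    # All initialization happens BEFORE binding the port; the platform
    # considers the container ready once the port is bound.
    init_dependencies()
    server = socketserver.TCPServer(("127.0.0.1", port), Handler)
    STARTUP_LOG.append("bound")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```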
Scaleway Serverless Containers does not currently support Scaleway VPC or Private Networks, though this feature is under development.
To add network restrictions on your resource, consult the list of prefixes used at Scaleway. Serverless resources do not have dedicated or predictable IP addresses.
Scaleway Serverless Containers do not currently support attaching Block Storage. These containers are designed to be stateless, meaning they do not retain data between invocations. For persistent storage, we recommend using external solutions like Scaleway Object Storage.
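For example, a container can push its results to an S3-compatible Object Storage bucket instead of writing them to local disk. A minimal sketch; the endpoint and bucket name are made-up examples, and in a real container the client would typically be built with a library such as boto3:

```python
import json

# Example Scaleway Object Storage endpoint (fr-par region); adjust to yours.
SCW_S3_ENDPOINT = "https://s3.fr-par.scw.cloud"

def save_result(s3_client, bucket: str, key: str, payload: dict) -> int:
    """Serialize `payload` as JSON and store it in Object Storage.

    `s3_client` is any object exposing an S3-style `put_object` method,
    e.g. one created with: boto3.client("s3", endpoint_url=SCW_S3_ENDPOINT)
    Returns the number of bytes written.
    """
    body = json.dumps(payload).encode("utf-8")
    s3_client.put_object(Bucket=bucket, Key=key, Body=body)
    return len(body)
```

Keeping the storage call behind a small function like this also makes the handler easy to test with a stubbed client, without network access.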
Currently, a new container instance will always start after each deployment, even if there is no traffic and the minimum scale is set to 0. This behavior is not configurable at this time.
Serverless resources are stateless by default; local storage is ephemeral.
For some use cases, such as saving analysis results, exporting data etc., it can be important to save data. Serverless resources can be connected to other resources from the Scaleway ecosystem for this purpose:
Explore all Scaleway products in the console and select the right product for your use case.
Further integrations are also possible even if not listed above, for example, Secret Manager can help you to store information that requires versioning.
You cannot use Serverless Containers with Edge Services because there are no native integrations between the two products yet.
By design, it is not possible to guarantee static IPs on Serverless compute resources.