
Containers - Concepts

Reviewed on 14 November 2024

Cold start

Cold start is the time a Container takes to handle a request when it is called for the first time.

The startup process consists of the following steps:

  • Downloading the container image to our infrastructure.
  • Starting the container. Optimize your container startup speed to minimize this step (for example, avoid waiting for slow connections or downloading large objects at startup).
  • Waiting for the container to listen on the configured port.

Refer to the dedicated documentation to learn how to reduce cold starts.

Concurrency

Concurrency defines the number of requests a single instance of your container can handle at the same time. Once the number of incoming requests exceeds this value, your container scales according to your scaling parameters.

Refer to the dedicated documentation for more information on container concurrency.
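As an illustrative sketch only (not Scaleway's exact autoscaling algorithm), the relationship between concurrency and the number of running instances can be modeled like this; the function and parameter names are hypothetical:

```python
import math

def instances_for(in_flight: int, concurrency: int, min_scale: int, max_scale: int) -> int:
    """Estimate how many instances are needed to serve `in_flight`
    simultaneous requests, bounded by the min/max scale parameters.
    Illustrative model only, not Scaleway's exact autoscaling logic."""
    needed = math.ceil(in_flight / concurrency) if in_flight > 0 else 0
    return min(max(needed, min_scale), max_scale)

# 250 simultaneous requests with a concurrency of 50 need 5 instances.
print(instances_for(in_flight=250, concurrency=50, min_scale=0, max_scale=20))
```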

Container

A container is a package of software that includes all its dependencies (code, runtime, configuration, and system libraries), so that it can run on any host system. Scaleway provides custom Docker images that are entirely handled for you in the cloud.

Container image

A container image is a file that includes all the requirements and instructions of a complete and executable version of an application.

Container Registry

Container Registry is the place where your images are stored before being deployed. We recommend using Scaleway Container Registry for optimal integration. See the migration guide for full details.

CRON trigger

A CRON trigger is a mechanism used to automatically invoke a Serverless Container at a specific time on a recurring schedule.

It works similarly to a traditional Linux cron job, using the * * * * * format, and uses the UTC time zone. Refer to our cron schedules reference for more information.
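The five fields of the `* * * * *` format map to minute, hour, day of month, month, and day of week, evaluated in UTC. A minimal sketch of that mapping (the helper is illustrative, not part of any Scaleway API):

```python
CRON_FIELDS = ("minute", "hour", "day of month", "month", "day of week")

def describe(cron: str) -> dict[str, str]:
    """Label each field of a 5-field cron expression (evaluated in UTC)."""
    parts = cron.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError("expected 5 fields: minute hour day-of-month month day-of-week")
    return dict(zip(CRON_FIELDS, parts))

# "0 9 * * 1" fires at 09:00 UTC every Monday.
print(describe("0 9 * * 1"))
```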

Custom domain

By default, a generated endpoint is assigned to your Serverless resource. Custom domains allow you to use your own domain - see our custom domain documentation for full details.

Deployment

Some parameter changes require a new deployment of the container to take effect. The deployment happens without causing downtime, as traffic is switched to the newest version.

Endpoint

An endpoint is the URL generated to access your resource. It can be customized with custom domains.

Environment variables

Environment variables are key/value pairs injected into your container. They are useful to share information such as configuration with your container. Some names are reserved. See details about reserved names.
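Inside the container, these pairs are regular process environment variables. A minimal sketch in Python (the `DATABASE_URL` and `LOG_LEVEL` names are hypothetical, not reserved Scaleway variables):

```python
import os

def read_config() -> dict[str, str]:
    """Read configuration injected as environment variables,
    falling back to defaults for local runs."""
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),  # hypothetical name
        "log_level": os.environ.get("LOG_LEVEL", "info"),                      # hypothetical name
    }

print(read_config())
```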

Ephemeral storage

In addition to vCPU and RAM, Serverless Containers also provide a storage volume for the lifetime of the instance. This storage space allows you to hold temporary data, and disappears once the instance is stopped.

The maximum size of the ephemeral storage is tied to the allocated memory.

GB-s

Unit used to measure the resource consumption of a container. It reflects the amount of memory consumed over time.
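For example, an instance provisioned with 0.5 GB of memory that runs for 120 seconds consumes 60 GB-s. A minimal sketch of that arithmetic:

```python
def gb_seconds(memory_gb: float, duration_s: float) -> float:
    """Resource consumption in GB-s: allocated memory multiplied by run time."""
    return memory_gb * duration_s

print(gb_seconds(0.5, 120))  # 60.0
```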

gRPC

gRPC is supported on Serverless Containers, as long as you have enabled the HTTP/2 (h2c) protocol.

Healthcheck

To determine the status of a container, the default healthcheck automatically verifies that basic requirements are met before marking the container as ready.

You can define custom healthcheck rules with a specific endpoint via the Scaleway API.
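A custom healthcheck typically probes an HTTP endpoint exposed by your application. A minimal, hypothetical readiness check (the `DATABASE_URL` dependency and the `/health` route are assumptions for illustration):

```python
import os

def check_health() -> tuple[int, str]:
    """Return an HTTP status code and body for a readiness probe.
    Wire this to a route such as GET /health in your application."""
    if os.environ.get("DATABASE_URL"):  # hypothetical required dependency
        return 200, "ok"
    return 503, "missing configuration"
```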

Instance

A Serverless Container instance handles incoming requests based on factors like the request volume, min scale, and max scale parameters.

JWT Token

JWT (JSON Web Token) is an access token you can create from the console or API to enable an application to access your private container. Find out how to secure a container.
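In practice, the token is sent with each request to the private container. A minimal sketch using the Python standard library; the `X-Auth-Token` header name matches Scaleway's scheme for private Serverless resources at the time of writing, but verify it against the current documentation:

```python
import urllib.request

def authenticated_request(url: str, token: str) -> urllib.request.Request:
    """Build a request to a private container, passing the JWT in a header.
    The header name is an assumption; check the Scaleway docs for your case."""
    return urllib.request.Request(url, headers={"X-Auth-Token": token})

req = authenticated_request("https://example.com/", "my-jwt")
# urllib stores header names with capitalized form: "X-auth-token"
```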

Load balancing

The Serverless infrastructure manages incoming request traffic. In scenarios like sudden traffic spikes or load testing, resources are automatically scaled based on the max scale parameter to handle the load.

Logging

Serverless Containers offers a built-in logging system based on Cockpit to track the activity of your resources: see monitoring Serverless Containers.

Max scale

This parameter sets the maximum number of container instances. You should adjust it based on your container’s traffic spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.

Metrics

Performance metrics for your Serverless resources are natively available: see monitoring Serverless Containers.

Min scale

Customizing the minimum scale for Serverless can help ensure that an instance remains pre-allocated and ready to handle requests, reducing delays associated with cold starts. However, this setting also impacts the costs of your Serverless Container.

mvCPU

An mvCPU is one thousandth of a vCPU (virtual Central Processing Unit): 1 vCPU is equivalent to 1000 mvCPU.
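The conversion is straightforward; for instance, 250 mvCPU corresponds to a quarter of a vCPU:

```python
def mvcpu_to_vcpu(mvcpu: int) -> float:
    """Convert mvCPU to vCPU: 1000 mvCPU equals 1 vCPU."""
    return mvcpu / 1000

print(mvcpu_to_vcpu(250))  # 0.25
```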

Namespace

A namespace allows you to group your containers together.

Containers in the same namespace can share environment variables, secrets and access tokens, defined at the namespace level.

NATS trigger

A NATS trigger is a mechanism that connects a container to a NATS subject and invokes the container automatically whenever a message is published to the subject.

For each message that is sent to a NATS subject, the NATS trigger reads the message and invokes the associated container with the message as the input parameter. The container can then process the message and perform any required actions, such as updating a database or sending a notification.

Port

The port parameter specifies the network port that your container listens on for incoming requests. If your application is set up to listen on a different port, you must specify it using the port parameter when deploying to Serverless Containers; the value must reflect the port configuration inside your container for your service to function correctly.

The value defined in the port parameter will then be passed to your container during the deployment inside the PORT environment variable.

Note

Only one HTTP port can be exposed per Serverless Container.
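A minimal sketch of a container application honoring the injected PORT variable, using Python's standard library HTTP server for illustration (any web server works the same way):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port() -> int:
    """Serverless Containers inject the configured port as PORT;
    fall back to a default for local runs."""
    return int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello from a Serverless Container\n")

def main() -> None:
    # Listen on all interfaces on the configured port; your container's
    # entrypoint would call main() at startup.
    HTTPServer(("0.0.0.0", get_port()), Handler).serve_forever()
```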

Privacy policy

A container’s privacy policy defines whether a container may be invoked anonymously (public) or only via an authentication mechanism provided by the Scaleway API (private).

Registry endpoint

The registry endpoint parameter points to the container image in your registry that is deployed as your Serverless Container.

Sandbox

A sandbox is an isolation area for your container. Serverless Containers offer two sandboxing environments:

  • v2 - Recommended for faster cold starts. Can introduce some overhead on specific workloads that make heavy use of syscalls.
  • v1 - Legacy sandboxing with slower cold starts, but full support for the Linux system call interface. Prefer this option for workloads that issue a large number of syscalls.

Scale to zero

One of the advantages of Serverless Containers is that when your container is not triggered, it does not consume any resources, which enables great savings.

Scaling

Serverless Containers make scaling your application transparent: up to 50 instances of your container can run at the same time.

Secrets

Secrets are an extra-secure type of environment variable. They are injected into your container and stored securely, but not displayed in the console after initial validation.

Serverless

Serverless allows you to deploy your Functions (FaaS) and Containerized Applications (CaaS) in a managed infrastructure. Scaleway ensures the deployment, availability, and scalability of all your projects.

Serverless Framework

Serverless.com (Serverless Framework) is a tool that allows you to deploy serverless applications without interacting directly with the Serverless Containers API. Write a YAML configuration file and deploy it; everything else, including image building, is handled automatically.

Serverless Function

Serverless Functions are serverless, fully managed compute services that allow you to run small, stateless code snippets or functions in response to HTTP requests or events.

These functions automatically scale based on demand and are designed to be lightweight, event-driven, and easily deployable, eliminating the need to worry about infrastructure management. Serverless Functions are built on top of Serverless Containers, meaning you can run your functions packaged in containers and have them scale efficiently.

Serverless Job

Serverless Jobs are similar to Serverless Containers but are better suited for running longer workloads. See the comparison between Serverless products for more information.

Queue trigger

A queue trigger is a mechanism that connects a container to a queue created with Scaleway Queues, and invokes the container automatically whenever a message is added to the queue.

For each message that is sent to a queue, the trigger reads the message and invokes the associated container with the message as the input parameter. The container can then process the message and perform any required actions, such as updating a database or sending a notification.

Rolling update

When deploying a new version of a Serverless Container, a rolling update is applied by default. This means that the new version of the service is gradually rolled out to your users without downtime. Here is how it works:

  • When a new version of your container is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
  • Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
  • The old version is decommissioned once the new version is fully serving traffic.

This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.

Stateless

Refers to a system or application that does not maintain any persistent state between executions. In a stateless environment, each request or operation is independent, and no information is retained from previous interactions.

This means that each request is treated as a new and isolated event, and there is no need for the system to remember previous states or data once a task is completed. Statelessness is commonly used in serverless architectures where each function execution is independent of others.

To store data you can use Scaleway Object Storage, Scaleway Managed Databases, and Scaleway Serverless Databases.

Status

A Serverless Container can have the following statuses:

  • Ready: your Serverless Container is operational to serve requests.
  • Pending: your resource is under deployment.
  • Error: something went wrong during the deployment process. Check our troubleshooting documentation to solve the issue.

Terraform

Terraform is a tool for managing infrastructure using code. Read the Terraform documentation for Serverless Containers.

Timeout

The timeout is the maximum length of time your container can spend processing a request before being stopped. This value must be in the range 10s to 900s.

vCPU

vCPU is the abbreviation for virtual Central Processing Unit. A vCPU represents a portion or share of the underlying physical CPU assigned to a particular container. The performance of a vCPU is determined by the percentage of time spent on the physical processor’s core. It is possible to allocate different resource allowances on specific vCPUs for specific containers or virtual machines.

vCPU-s

Unit used to measure the resource consumption of a container. It reflects the amount of vCPU used over time.

Protocol

Serverless Containers support HTTP/1 (default) and HTTP/2 (h2c). Use HTTP/2 if your container application is configured to listen for HTTP/2 requests, such as a gRPC service or a web server that uses HTTP/2 features like multiplexing; otherwise, HTTP/1 is recommended.
