Securing a cluster with a Private Network
Scaleway Kubernetes Kapsule provides a managed environment to create, configure, and run a cluster of preconfigured machines for containerized applications. This allows you to create Kubernetes clusters without the complexity of managing the infrastructure.
All new Kubernetes clusters are deployed with a Scaleway Private Network using controlled isolation.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- Created a Kubernetes Kapsule cluster
By default, worker nodes are currently delivered with public IP addresses (controlled isolation). These IPs are used solely for outgoing traffic from your nodes to the internet; no services listen on them by default.
Even though these nodes have public IP addresses for specific maintenance and operational purposes, your cluster’s security remains uncompromised. See below for more information. Optionally, you can configure your nodes inside an entirely private network using full isolation.
Why have a Private Network for your Kubernetes Kapsule cluster?
A Private Network offers crucial functionalities to your cluster, including:
- Implementation of security best practices: all Scaleway resources (Instances, Load Balancers, Managed Databases) can communicate securely, with a reduced attack surface. For further information, refer to our blog post 10 best practices to configure your VPC.
- Compliance with market expectations, particularly from enterprise customers
- Less manual configuration work such as security group configuration, IP range configuration, etc.
- Multi-AZ compatibility allows you to create node pools in several Availability Zones for better resilience.
- Lower latency
What is the difference between controlled isolation and full isolation?
Worker node pools with controlled isolation inside a Private Network have both private IPs and public IPs, the latter used for outgoing traffic only. Fully isolated nodes receive only a private IP, and all external communication is routed through a Public Gateway.
| Isolation | Controlled isolation (default) | Full isolation (optional) | None (deprecated) |
|---|---|---|---|
| Description | Worker nodes are assigned both private and public IPs. All inbound traffic on the public interface is dropped by default using security groups. | Worker nodes have no public IPs (100% Private Network). A Public Gateway is required. | Clusters without a Private Network attached. Nodes have public-only endpoints. |
| Benefits | 1. Strong security. 2. Dynamic public IPs for reaching external providers while avoiding rate limiting. | 1. Maximum security. 2. A stable egress IP for secure connections to external providers. | n/a |
| Notice | Default choice for new clusters. Can be combined with pools using full isolation. | Requires a Public Gateway, which incurs additional costs. | Deprecated in October 2023. |
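If you manage your cluster with Terraform, full isolation is configured per pool. Below is a minimal sketch, reusing the `scaleway_k8s_cluster.kapsule` resource from the Terraform example later on this page, and assuming your version of the Scaleway Terraform provider exposes the `public_ip_disabled` attribute on `scaleway_k8s_pool`:

```hcl
# A minimal sketch of a fully isolated node pool. Assumes the
# `public_ip_disabled` attribute is available in your version of the
# Scaleway Terraform provider, and that the cluster's Private Network
# has a Public Gateway attached (required for full isolation).
resource "scaleway_k8s_pool" "isolated" {
  cluster_id         = scaleway_k8s_cluster.kapsule.id
  name               = "isolated"
  node_type          = "DEV1-M"
  size               = 1
  public_ip_disabled = true # nodes receive only private IPs
}
```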
Scaleway product compatibility
Can I use a Public Gateway with my Private Network to route all outgoing traffic from the nodes?
Yes. You are required to attach a Public Gateway when setting up a node pool with full isolation. This allows Kapsule nodes with private IPs to route their outgoing traffic through the Public Gateway. For detailed steps on setting up a Public Gateway, refer to our Public Gateway documentation. Keep in mind that removing or detaching the Public Gateway from the Private Network creates a single point of failure in the cluster, as fully isolated node pools can no longer reach their control plane.
To use a Public Gateway with a Private Network on a Kapsule cluster, make sure the following conditions are met (a Terraform sketch of such a setup follows this list):
- The Public Gateway is located in the same region as the Kapsule cluster.
- Dynamic NAT is activated (enabled by default).
- Advertise DefaultRoute is activated (enabled by default).
- Your Public Gateway is fully integrated with IPAM and is not a legacy gateway.
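A minimal sketch of such a setup, reusing the `scaleway_vpc_private_network.kapsule` resource from the example later on this page. The arguments, in particular the `ipam_config` block, assume a recent version of the Scaleway Terraform provider; check them against your provider version:

```hcl
# A minimal sketch of an IPAM-integrated Public Gateway attached to the
# cluster's Private Network, with dynamic NAT and default-route
# advertisement enabled.
resource "scaleway_vpc_public_gateway_ip" "main" {}

resource "scaleway_vpc_public_gateway" "main" {
  name  = "kapsule-gateway"
  type  = "VPC-GW-S"
  ip_id = scaleway_vpc_public_gateway_ip.main.id
}

resource "scaleway_vpc_gateway_network" "main" {
  gateway_id         = scaleway_vpc_public_gateway.main.id
  private_network_id = scaleway_vpc_private_network.kapsule.id
  enable_masquerade  = true # dynamic NAT for outgoing traffic

  ipam_config {
    push_default_route = true # advertise the default route to the nodes
  }
}
```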
Is Kosmos compatible with Private Networks?
Only Kapsule can use a Private Network.
Kosmos uses Kilo as its CNI, which relies on WireGuard to create a VPN mesh between nodes for pod-to-pod communication. Any node in Kosmos, whether hosted at Scaleway or elsewhere, uses these VPN tunnels to communicate securely by design.
Are Managed Databases compatible with Kubernetes Kapsule on Private Networks?
Yes, they are. Since July 2023, the automatic allocation of IP addresses via IPAM is available for Managed Databases. These IP addresses are compatible with Scaleway’s VPC, which is now in General Availability. For more information about product compatibility, refer to the VPC documentation.
For any new Private Networks you create and attach to Managed Databases after July 2023, your private IP addresses are automatically allocated.
If you have set up Private Network endpoints for your Managed Databases before July 2023, and want to connect to Kapsule via a Private Network, you must first delete your old private network endpoint. Then, you can create a new one, either via the Scaleway console or API.
In the example below, we show you how to do so via the API, specifying the automated configuration of your Private Network via IPAM with `"ipam_config": {}`:

```bash
curl --request POST \
  --url "https://api.scaleway.com/rdb/v1/regions/$REGION/instances/$INSTANCE_ID/endpoints" \
  --header "Content-Type: application/json" \
  --header "X-Auth-Token: $SCW_SECRET_KEY" \
  --data '{
    "endpoint_spec": {
      "private_network": {
        "ipam_config": {},
        "private_network_id": "<PRIVATE_NETWORK_ID>"
      }
    }
  }'
```
Replace `<PRIVATE_NETWORK_ID>` with the ID of the Private Network in question.
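The command also assumes the $REGION, $INSTANCE_ID, and $SCW_SECRET_KEY environment variables are set to your Database Instance's region, its instance ID, and a valid API secret key, respectively.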
Note that this action adds a new endpoint: to use it in your environment, update the endpoint in your configuration.
Refer to the Managed Database for PostgreSQL and MySQL API documentation for further information.
Are managed Load Balancers compatible with Kubernetes Kapsule Private Networks?
Managed Load Balancers support Private Networks with private backends and public frontends, meaning the traffic is forwarded to your worker nodes through your clusters’ Private Network.
Additionally, private Load Balancers are supported. These Load Balancers have no public IPs on either their backends or frontends.
If you have a trusted IP configured on your ingress controller, note that the request will come from a private IP.
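If you provision Load Balancers outside of Kubernetes, a private Load Balancer can also be declared in Terraform. Below is a minimal sketch, reusing the Private Network resource from the example later on this page, and assuming your version of the Scaleway Terraform provider supports the `assign_flexible_ip` argument on `scaleway_lb`:

```hcl
# A minimal sketch of a private Load Balancer with no public IP, attached
# to the cluster's Private Network. Assumes the `assign_flexible_ip`
# argument is available in your version of the Scaleway Terraform provider.
resource "scaleway_lb" "private" {
  name               = "kapsule-private-lb"
  type               = "LB-S"
  assign_flexible_ip = false # no public IP on the frontend

  private_network {
    private_network_id = scaleway_vpc_private_network.kapsule.id
  }
}
```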
Which IP ranges are used for the Private Network of my cluster?
We automatically assign a /22 IP subnet from a Private Network to your cluster.
How can I access my cluster via my nodes’ public IPs for specific use cases?
Once you create a cluster in Kapsule, all nodes, particularly those with the Private Network feature enabled, are protected by a security group named `kubernetes <cluster-id>`. Any changes made to this security group apply to all nodes in the cluster.
If you wish to allow access to the nodes through a public IP using a specific port/protocol, you can modify the security group after creating the cluster by following these steps:
From the Scaleway console
- Go to the Instances section of the Scaleway console.
- Click the Security groups tab. A list of your existing security groups displays.
- Click the name of the security group configured for your Instance, named `kubernetes <cluster-id>`.
- Click the Rules tab. A list of rules configured for this group displays.
- Click the edit icon to edit the security group rules.
- Click Add inbound rule to configure a new rule and customize it according to your requirements.
- Apply your custom rules by clicking the validate icon.
Using Terraform
If you are using Terraform to create your cluster, you can create a security group resource after creating the cluster resource and before creating the pool resource. You can find a Terraform configuration example below:
data "scaleway_k8s_version" "latest" {name = "latest"}resource "scaleway_vpc_private_network" "kapsule" {name = "pn_kapsule"tags = ["kapsule"]}resource "scaleway_k8s_cluster" "kapsule" {name = "open-pn-test"version = data.scaleway_k8s_version.latest.namecni = "cilium"private_network_id = scaleway_vpc_private_network.kapsule.iddelete_additional_resources = truedepends_on = [scaleway_vpc_private_network.kapsule]}resource "scaleway_instance_security_group" "kapsule" {name = "kubernetes ${split("/", scaleway_k8s_cluster.kapsule.id)[1]}"inbound_default_policy = "drop"outbound_default_policy = "accept"stateful = trueinbound_rule {action = "accept"protocol = "UDP"port = "500"}depends_on = [scaleway_k8s_cluster.kapsule]}resource "scaleway_k8s_pool" "default" {cluster_id = scaleway_k8s_cluster.kapsule.idname = "default"node_type = "DEV1-M"size = 1autohealing = truewait_for_pool_ready = truedepends_on = [scaleway_instance_security_group.kapsule]}resource "scaleway_rdb_instance" "main" {name = "pn-rdb"node_type = "DB-DEV-S"engine = "PostgreSQL-14"is_ha_cluster = truedisable_backup = trueuser_name = "username"password = "thiZ_is_v&ry_s3cret" # Obviously change password here or generate one at runtime through null_resource and display it via output.private_network {pn_id = scaleway_vpc_private_network.kapsule.id}}
Will the control plane also be located inside the Private Network?
Currently, only worker nodes are located in the Private Network of your cluster. Communication between the nodes and the control plane uses the public IP of the nodes. Nodes in fully isolated pools reach the control plane through the Public Gateway attached to the cluster's Private Network.
What future options will there be for isolation?
- Control plane in isolation with nodes and API communicating in the same isolated network.
- The CNI’s network policies will restrict/allow a range of IPs or ports to control who can access the API server.