Migrating ENT1 pools to POP2 in your Kubernetes cluster

Scaleway is deprecating production-optimized ENT1 Instances. This guide provides a step-by-step process to migrate from ENT1 Instances to POP2 Instances within your Scaleway Kubernetes Kapsule clusters.

Note: It is recommended to perform these steps during a maintenance window or a period of low traffic to minimize potential disruptions.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the Scaleway console
- Owner status or IAM permissions allowing actions in the intended Organization
- Created a Kubernetes Kapsule or Kosmos cluster
Identifying your ENT1 pools
- Log in to the Scaleway Console.
- Navigate to Kubernetes under the Containers section in the side menu of the console.
- Select the cluster containing the ENT1 pools you intend to migrate.
- In the Pools tab, identify and note the pools using ENT1 Instances.
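If you prefer working from the command line, ENT1-based nodes can also be spotted with kubectl. The sketch below assumes the standard `node.kubernetes.io/instance-type` node label, which Kapsule populates with the node's commercial type (e.g. ENT1-M); verify the exact label values on your own cluster before relying on the filter:

```shell
# In a live cluster, list every node with its Instance type and keep the
# ENT1 ones:
#   kubectl get nodes -L node.kubernetes.io/instance-type | filter_ent1_nodes

# filter_ent1_nodes reads `kubectl get nodes -L ...` output on stdin and
# keeps the header line plus any node whose line mentions ENT1.
filter_ent1_nodes() {
  awk 'NR == 1 || toupper($0) ~ /ENT1/'
}
```

The node names printed by this filter are the ones you will later cordon and drain.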
Creating equivalent POP2 pools
- For each ENT1 pool identified:
  - Click + Create pool (or Add pool).
  - Select POP2 from the Node Type dropdown menu.
  - Configure the pool settings (e.g., Availability Zone, size, autoscaling, autoheal) to mirror the existing ENT1 pool as closely as possible.
  - Click Create (or Add pool) to initiate the new pool.
- Monitor the status of the new POP2 nodes until they reach the Ready state, either in the Pools tab of the console, or with kubectl:
  ```
  kubectl get nodes
  ```
  Ensure all POP2 nodes display a Ready status.
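Rather than re-running `kubectl get nodes` by hand, the readiness check can be scripted. `kubectl wait` (shown in the comment) blocks until nodes report Ready; the helper below is a minimal sketch that counts not-yet-Ready nodes from the `kubectl get nodes` listing:

```shell
# In a live cluster, block until every node is Ready (or time out):
#   kubectl wait --for=condition=Ready node --all --timeout=10m
# Or poll until the count below reaches zero:
#   kubectl get nodes | count_not_ready

# count_not_ready reads `kubectl get nodes` output on stdin and prints how
# many nodes (header excluded) are not in the plain Ready state.
count_not_ready() {
  awk 'NR > 1 && $2 != "Ready"' | wc -l
}
```

Note that `kubectl wait --for=condition=Ready node --all` covers all nodes in the cluster, including the old ENT1 ones, which is fine at this stage since they have not been cordoned yet.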
Verifying workloads on the new pool
- Cordon the ENT1 nodes to prevent them from accepting new pods:
  ```
  kubectl cordon <your-ent1-node-name>
  ```
- Drain the ENT1 nodes to reschedule workloads onto the POP2 nodes:
  ```
  kubectl drain <your-ent1-node-name> --ignore-daemonsets --delete-emptydir-data
  ```
  Note: The flags --ignore-daemonsets and --delete-emptydir-data may be necessary depending on your environment. Refer to the official Kubernetes documentation for detailed information on these options.

These commands ensure that your workloads are running on the new POP2 nodes before proceeding to delete the ENT1 pool.
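For pools with many nodes, cordoning and draining can be scripted rather than done node by node. The sketch below assumes Kapsule labels each node with its pool name under `k8s.scaleway.com/pool-name` (check the labels on your own nodes with `kubectl get nodes --show-labels`), and `my-ent1-pool` is a placeholder:

```shell
# In a live cluster, drain every node of one ENT1 pool in sequence:
#   kubectl get nodes -l k8s.scaleway.com/pool-name=my-ent1-pool -o name \
#     | drain_nodes

# drain_nodes reads node names on stdin and cordons, then drains, each one.
# Set DRY_RUN=1 to print the kubectl commands instead of executing them.
drain_nodes() {
  while read -r node; do
    [ -n "$node" ] || continue
    run kubectl cordon "$node"
    run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  done
}

# run executes its arguments, or only echoes them when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}
```

Running once with `DRY_RUN=1` to review the exact commands before executing them is a cheap safety net, since draining evicts live workloads.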
Deleting the ENT1 pool
- Return to your cluster’s Pools tab and wait a few minutes to ensure all workloads have been rescheduled onto POP2 nodes.
- Click the three-dot menu next to the ENT1 pool.
- Select Delete pool.
- Confirm the deletion.
Verifying the migration
- Run the following command to ensure no ENT1-based nodes remain:
  ```
  kubectl get nodes
  ```
  Note: Only POP2 nodes should be listed.
- Test your applications to confirm they are functioning correctly on the new POP2 nodes.
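The final check can also be scripted so it fails loudly if any old node was missed. A minimal sketch, again assuming the `node.kubernetes.io/instance-type` label carries the commercial Instance type:

```shell
# In a live cluster:
#   kubectl get nodes -L node.kubernetes.io/instance-type | assert_no_ent1

# assert_no_ent1 reads a node listing on stdin and returns success only
# when no line mentions ENT1.
assert_no_ent1() {
  if grep -qi 'ent1'; then
    echo "ENT1 nodes still present" >&2
    return 1
  fi
  echo "migration complete: no ENT1 nodes found"
}
```

A non-zero exit status makes the check easy to reuse in CI or in a post-migration runbook.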
Migration highlights
- Minimal disruption: Kubernetes manages pod eviction and rescheduling automatically. However, the level of disruption may vary based on your specific workloads and setup. It is recommended to maintain multiple replicas of your services, set up Pod Disruption Budgets (PDBs) to minimize downtime, and scale up workloads prior to the upgrade.
- Flexible scaling: You can configure the same autoscaling and autoheal policies on your POP2 pools as were set on your ENT1 pools.
- Equivalent performance: In most scenarios, POP2 Instances surpass the performance of ENT1 Instances, with additional CPU and memory-optimized variants available.
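To illustrate the Pod Disruption Budget suggestion above: a PDB can be created directly with `kubectl create poddisruptionbudget`. The sketch below targets a hypothetical Deployment selected by `app=my-app` and keeps at least two replicas available while nodes are drained; the helper only builds the command string so it can be reviewed before running:

```shell
# In a live cluster, create the PDB directly:
#   kubectl create poddisruptionbudget my-app-pdb \
#     --selector=app=my-app --min-available=2

# pdb_cmd prints the creation command for a given app label and minimum
# available replica count, so it can be inspected (or piped to sh).
pdb_cmd() {
  echo "kubectl create poddisruptionbudget $1-pdb --selector=app=$1 --min-available=$2"
}
```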
If you require assistance during the transition, contact our Support team.