Multi-Cloud Kubernetes best practices
Using Kubernetes in a Multi-Cloud environment can be challenging and requires the implementation of best practices. Learn a few good practices to implement a concrete multi-cloud strategy.
Kubernetes offers many features dedicated to scalability to simplify infrastructure management for companies of any size. Those features present the expected, straightforward behaviors when used in a single-cloud Kubernetes cluster, but one question remains: how will they react when a single Kubernetes cluster regroups servers from multiple cloud providers?
Working in a multi-cloud environment presents multiple perks in terms of redundancy, reliability, and customer coverage. However, it also raises questions that we will address here about the particularity of the management and the implementation of Multi-Cloud.
As of today (October 2021), HNA and Auto-Healing are not covered by Kubernetes Kosmos for external servers. However, they are available for managed Scaleway node pools in any Availability Zone, regardless of the region of the Kubernetes Kosmos Control-Plane.
Scaleway's Kubernetes Kosmos is fully integrated within the Scaleway ecosystem, which means that it benefits from the conversion of a Kubernetes Service of type LoadBalancer into a Scaleway Multi-Cloud Load Balancer.
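As an illustration, a minimal Service of this type could look like the sketch below (the application name and ports are hypothetical; the conversion into a Multi-Cloud Load Balancer is handled by the Scaleway cloud controller):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical application name
spec:
  type: LoadBalancer      # provisions a Scaleway Multi-Cloud Load Balancer
  selector:
    app: my-app
  ports:
    - port: 80            # port exposed by the Load Balancer
      targetPort: 8080    # port the pods listen on
```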
Persistent Volumes are created through the Container Storage Interface (CSI) of each cloud provider. In a Kubernetes Kosmos cluster, Scaleway's CSI is only deployed on the cluster's managed Scaleway Instances. Nonetheless, almost every cloud provider's CSI is open source and can be deployed on the corresponding nodes to benefit from Persistent Volumes from any cloud provider, within the same cluster.
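A claim against a provider's CSI might be sketched as follows. The claim name is hypothetical, and the storage class shown assumes Scaleway's Block Storage class; on nodes from another provider, you would use the class exposed by that provider's CSI instead:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce           # volume mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: scw-bssd  # assumed Scaleway CSI class; swap for the class of the node's provider
```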
By design, it is not possible to have a private network regrouping servers from different cloud providers, but it is possible to have them communicate with each other.
Kubernetes Kosmos uses Kilo, a Container Network Interface (CNI) based on WireGuard, which manages a Virtual Private Network (VPN) between the nodes.
Kilo - the CNI used by Kubernetes Kosmos - is what makes a multi-cloud Kubernetes cluster possible. Indeed, if we were to change the CNI, every unmanaged node would become unreachable from the API server of the cluster's Control-Plane, and the cluster would be severely impacted and mostly unavailable.
Kubernetes has configuration options designed specifically for this kind of use case, such as selectors, taints, and tolerations.
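A common pattern is to taint the nodes of one provider and let only pods that tolerate the taint run there. The taint key, value, and node name below are hypothetical; `topology.kubernetes.io/region` is a standard well-known Kubernetes label:

```yaml
# Hypothetical taint applied beforehand, e.g.:
#   kubectl taint nodes external-node-1 provider=other-cloud:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: multi-cloud-pod
spec:
  nodeSelector:
    topology.kubernetes.io/region: nl-ams  # schedule only in this region
  tolerations:
    - key: "provider"            # hypothetical taint key
      operator: "Equal"
      value: "other-cloud"
      effect: "NoSchedule"       # allows scheduling on tainted external nodes
  containers:
    - name: app
      image: nginx
```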
The trade-off between High Availability and Low Latency needs to be taken into account, whether we are working in a multi-cloud environment or not. On a Kubernetes Kosmos cluster, pods are scheduled by default on the nodes with the least latency from the cluster's Control-Plane.
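If you prefer High Availability over the default low-latency placement, you can ask the scheduler to spread replicas across zones with a topology spread constraint. This is a generic Kubernetes mechanism, sketched here with a hypothetical app label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ha-pod
  labels:
    app: ha-app                # hypothetical label used by the constraint
spec:
  topologySpreadConstraints:
    - maxSkew: 1               # at most 1 replica of imbalance between zones
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway  # soft constraint: prefer spreading
      labelSelector:
        matchLabels:
          app: ha-app
  containers:
    - name: app
      image: nginx
```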
Of course, managing multiple cloud providers' accounts and credentials is more constraining than sticking with a single provider. However, studies have shown that over 80% of IT companies already use multiple cloud providers. That way, they can benefit from a larger range of services.
If you are looking for the proper ally to deploy your project on, you might want to look into our range of Virtual Instances. We divided them into four main categories to help you navigate the different specificities of each machine.
The Instances from the Learning range are perfect for small workloads and simple applications. They are built to host small internal applications, staging environments, or low-traffic web servers.
The Cost-Optimized range balances compute, memory, and networking resources. They can be used for a wide range of workloads - scaling a development and testing environment, but also Content Management Systems (CMS) or microservices. They're also a good default choice if you need help determining which instance type is best for your application.
The Production-Optimized range, such as Enterprise Instances, offers the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.
Expanding the Production-Optimized range, the Workload-Optimized range will be launched in the near future. It will provide the same high, consistent performance as the Production-Optimized Instances, but with the added flexibility of additional vCPU:RAM ratios, so it perfectly fits your application's requirements without wasting any vCPU or GB of RAM resources.
All cloud market players agree on the global definition of multi-cloud: using multiple public cloud providers.
To simplify Kubernetes usage even more, we are now releasing the Application Library as part of the Easy Deploy feature on Scaleway's managed Kubernetes: Kapsule.