Connecting Scaleway Managed Databases to Kubernetes Kapsule clusters
This guide explains how to set up and connect a Scaleway Managed Database for PostgreSQL or MySQL with a Scaleway Kubernetes Kapsule cluster.
We will walk you through the entire process using two approaches: the Scaleway CLI and Terraform.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid API key
- Scaleway CLI installed and configured
- kubectl installed
- Terraform or OpenTofu installed (for the Terraform approach)
Method 1 - Using the Scaleway CLI
First, install the Scaleway CLI and run scw init to set your API key, then run scw config set default-region=fr-par to set the default region (e.g. fr-par for Paris).
Creating a Private Network
Create a Private Network that both your Kubernetes cluster and database will use:
scw vpc private-network create name=kube-db-network

Note the Private Network ID from the output for later use.
Creating a Managed Database Instance
1. Run the following command to create a Managed PostgreSQL (or MySQL) Database Instance:

scw rdb instance create \
  name=my-kube-database \
  node-type=db-dev-s \
  engine=PostgreSQL-15 \
  is-ha-cluster=true \
  user-name=admin \
  password=StrongP@ssw0rd123 \
  region=fr-par

This creates a high-availability PostgreSQL 15 Database Instance with a public endpoint.

Important: At this point, the database is exposed to the internet.

2. Add the Private Network endpoint to the database:

scw rdb endpoint create \
  <database-instance-id> \
  private-network.private-network-id=<private-network-id> \
  private-network.enable-ipam=true \
  region=fr-par

3. Get the Instance details and look for the public endpoint ID under the “Endpoints” section:

scw rdb instance get <database-instance-id>

4. Remove the public endpoint to ensure the database is only reachable from the Private Network and no longer exposed to the public internet:

scw rdb endpoint delete instance-id=<database-instance-id> <public-endpoint-id>
Creating a Kubernetes Kapsule cluster
1. Run the following Scaleway CLI command to create a Kubernetes Kapsule cluster attached to the same Private Network:

scw k8s cluster create \
  name=my-kube-cluster \
  type=kapsule \
  version=1.28.2 \
  cni=cilium \
  pools.0.name=default \
  pools.0.node-type=DEV1-M \
  pools.0.size=2 \
  pools.0.autoscaling=true \
  pools.0.min-size=2 \
  pools.0.max-size=5 \
  private-network-id=<private-network-id>

2. Wait for the cluster to be ready, then retrieve the kubeconfig:

scw k8s kubeconfig install <k8s-cluster-id> region=fr-par
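You can check that kubectl now points at the new cluster and that the nodes are ready:

kubectl get nodes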
Creating a Kubernetes secret for database credentials
Use kubectl to create a Kubernetes secret to store the database credentials:

kubectl create secret generic db-credentials \
  --from-literal=DB_HOST=<private-network-db-hostname> \
  --from-literal=DB_PORT=5432 \
  --from-literal=DB_NAME=rdb \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=StrongP@ssw0rd123
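kubectl stores each value base64-encoded. You can confirm that a key round-trips correctly, for example:

kubectl get secret db-credentials -o jsonpath='{.data.DB_HOST}' | base64 -d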
Deploying a sample application
1. Create a Kubernetes deployment that will connect to the database. Save this as db-app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-client
  template:
    metadata:
      labels:
        app: postgres-client
    spec:
      containers:
        - name: postgres-client
          image: postgres:latest
          command: ["sleep", "infinity"]
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_HOST
            - name: DB_PORT
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_PORT
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_NAME
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_PASSWORD

2. Apply it to your cluster:

kubectl apply -f db-app.yaml

3. Check that the pod can reach the database. The container only runs sleep, so instead of reading its logs, open a psql session inside it:

kubectl exec -it deployment/postgres-client -- \
  sh -c 'PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c "SELECT NOW();"'
Method 2 - Using Terraform
For a more infrastructure-as-code approach, you can use Terraform or OpenTofu (an open-source Terraform fork) to set up the same resources.
Install Terraform, then declare the Scaleway provider in your configuration (see the providers.tf file below) and run terraform init to download it.
Setting up Terraform files
1. Create a new directory and set up your files:

mkdir scaleway-kube-db
cd scaleway-kube-db

2. Create a providers.tf file:

terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "~> 2.40"
    }
  }
}

provider "scaleway" {
  access_key = var.scaleway_access_key
  secret_key = var.scaleway_secret_key
  project_id = var.project_id
  region     = var.region
  zone       = var.zone
}

3. Create a variables.tf file:

variable "scaleway_access_key" {
  description = "Scaleway Access Key"
  type        = string
  sensitive   = true
}

variable "scaleway_secret_key" {
  description = "Scaleway Secret Key"
  type        = string
  sensitive   = true
}

variable "project_id" {
  description = "Scaleway Project ID"
  type        = string
}

variable "region" {
  description = "Scaleway region (e.g., fr-par)"
  type        = string
  default     = "fr-par"
}

variable "zone" {
  description = "Scaleway zone (e.g., fr-par-1)"
  type        = string
  default     = "fr-par-1"
}

variable "db_password" {
  description = "Password for database user"
  type        = string
  sensitive   = true
}

variable "db_user" {
  description = "Database username"
  type        = string
  default     = "admin"
}

4. Create a main.tf file for the infrastructure:

# Create Private Network
resource "scaleway_vpc_private_network" "private_net" {
  name   = "kube-db-network"
  region = var.region
}

# Create Managed PostgreSQL Database
resource "scaleway_rdb_instance" "database" {
  name          = "my-kube-database"
  node_type     = "db-dev-s"
  engine        = "PostgreSQL-15"
  is_ha_cluster = true
  user_name     = var.db_user
  password      = var.db_password

  private_network {
    pn_id       = scaleway_vpc_private_network.private_net.id
    enable_ipam = true
  }
}

# Kubernetes Cluster (Kapsule)
resource "scaleway_k8s_cluster" "kapsule" {
  name                        = "my-kube-cluster-${random_id.suffix.hex}" # Make the name unique
  version                     = "1.28.2"
  cni                         = "cilium"
  private_network_id          = scaleway_vpc_private_network.private_net.id
  delete_additional_resources = true
}

# Kubernetes Node Pool
resource "scaleway_k8s_pool" "default_pool" {
  cluster_id        = scaleway_k8s_cluster.kapsule.id
  name              = "default-pool"
  node_type         = "DEV1-M"
  size              = 2
  autoscaling       = true
  min_size          = 2
  max_size          = 5
  autohealing       = true
  container_runtime = "containerd"
}

# Generate a random suffix for uniqueness
resource "random_id" "suffix" {
  byte_length = 4
}

# Output Database Connection Information
output "db_host" {
  value = scaleway_rdb_instance.database.private_network[0].ip
}

output "db_port" {
  value = scaleway_rdb_instance.database.private_network[0].port
}

output "kubeconfig" {
  value     = scaleway_k8s_cluster.kapsule.kubeconfig
  sensitive = true
}
Creating a terraform.tfvars file
Create a terraform.tfvars file to store your variables securely:

scaleway_access_key = "<your-scaleway-access-key>"
scaleway_secret_key = "<your-scaleway-secret-key>"
project_id          = "<your-scaleway-project-id>"
db_password         = "<your-strong-db-password>"
Applying the Terraform configuration
Initialize and apply the Terraform configuration:
terraform init
terraform apply
After confirming the plan, Terraform will create all the resources and output the database endpoint.
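Once the apply completes, you can read the connection details back from the outputs defined in main.tf:

terraform output db_host
terraform output db_port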
Connecting a real application
Now let’s deploy a more realistic application that uses the database: a simple Node.js application with Express and pg (the PostgreSQL client).
Creating a Dockerfile for the application
The Dockerfile is used to create a Docker image for your application. Here’s a simple example:
# Use the official Node.js 14 image.
# https://hub.docker.com/_/node
FROM node:14

# Create and change to the app directory.
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install production dependencies.
RUN npm install --only=production

# Copy local code to the container image.
COPY . .

# Expose the port the app runs on.
EXPOSE 8080

# Run the web service on container startup.
CMD [ "node", "app.js" ]
Creating the application files
You need to create the necessary files for your Node.js application. Here’s a simple app.js and a package.json file as an example:

app.js:

const express = require('express');
const { Pool } = require('pg');

const app = express();

// Get DB credentials from environment variables
const pool = new Pool({
  user: process.env.DB_USER,         // 'admin'
  host: process.env.DB_HOST,         // '<private-network-db-hostname>'
  database: process.env.DB_NAME,     // 'rdb'
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,         // '5432'
});

app.get('/', async (req, res) => {
  try {
    const result = await pool.query('SELECT NOW() as now');
    res.send(result.rows);
  } catch (err) {
    console.error(err);
    res.status(500).send(err.toString());
  }
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
package.json:

{
  "name": "node-postgres-app",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.17.1",
    "pg": "^8.6.0"
  }
}
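To try the application locally before containerizing it, assuming a PostgreSQL instance reachable from your machine:

npm install
DB_HOST=<db-host> DB_PORT=5432 DB_NAME=rdb DB_USER=admin DB_PASSWORD=<password> node app.js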
Creating Kubernetes manifests for the application
1. Delete the previously created secret so it can be recreated cleanly:

kubectl delete secret db-credentials

2. Recreate the secret using kubectl create secret. Run the following command without any base64 encoding:

kubectl create secret generic db-credentials \
  --from-literal=DB_HOST=<private-network-db-hostname> \
  --from-literal=DB_PORT=5432 \
  --from-literal=DB_NAME=rdb \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=StrongP@ssw0rd123

Kubernetes will automatically handle the base64 encoding for you.

3. Get the secret details:

kubectl get secret db-credentials -o yaml

4. Create two main Kubernetes manifests: one for the deployment and one for the service. The deployment references the db-credentials secret created above, so there is no need to define the secret again in the manifest.

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-postgres-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-postgres-app
  template:
    metadata:
      labels:
        app: node-postgres-app
    spec:
      containers:
        - name: node-postgres-app
          image: ${YOUR_DOCKER_REGISTRY}/node-postgres-app:latest
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_HOST
            - name: DB_PORT
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_PORT
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_NAME
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DB_PASSWORD

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: node-postgres-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: node-postgres-app
Building and pushing the Docker image
Replace ${YOUR_DOCKER_REGISTRY} with your Docker registry (e.g., your Docker Hub username).

docker build -t ${YOUR_DOCKER_REGISTRY}/node-postgres-app:latest .
docker push ${YOUR_DOCKER_REGISTRY}/node-postgres-app:latest
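Alternatively, you can push the image to Scaleway’s Container Registry instead of Docker Hub. A minimal sketch, assuming you have already created a registry namespace named my-namespace in fr-par:

# Log in to the Scaleway registry; the username is "nologin" and the password is your secret key
docker login rg.fr-par.scw.cloud -u nologin --password-stdin <<< "$SCW_SECRET_KEY"

# Tag and push the image into the namespace
docker tag ${YOUR_DOCKER_REGISTRY}/node-postgres-app:latest rg.fr-par.scw.cloud/my-namespace/node-postgres-app:latest
docker push rg.fr-par.scw.cloud/my-namespace/node-postgres-app:latest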
Deploying the application to Kubernetes
1. Apply the Kubernetes manifests:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

2. Check the service to get the external IP:

kubectl get service node-postgres-app

3. Visit the application at the external IP to see it in action. If everything is set up correctly, you should see the current PostgreSQL time displayed when you access the application URL.
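For example, from your terminal (using the EXTERNAL-IP value reported by the previous command):

curl http://<external-ip>/

This should return a JSON array along the lines of [{"now":"2025-01-01T12:00:00.000Z"}].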
Security best practices
Use Private Networks
Always use Private Networks when connecting a Kubernetes cluster to a database. This ensures that database traffic never traverses the public internet, reducing the attack surface significantly.
Implement proper TLS
If you need to use a public endpoint, ensure you’re using TLS with certificate verification.
For PostgreSQL, add this to your connection string:
sslmode=verify-full sslrootcert=/path/to/scaleway-ca.pem
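For example, connecting with psql over such a verified TLS connection (assuming you have downloaded the Instance’s CA certificate to /path/to/scaleway-ca.pem):

psql "host=<public-db-endpoint> port=5432 dbname=rdb user=admin sslmode=verify-full sslrootcert=/path/to/scaleway-ca.pem"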
Restrict database access with network policies
Implement Kubernetes Network Policies to control which pods can access the database:
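The sketch below is one way to do this: combined with a default-deny egress policy, it only lets pods labelled app: node-postgres-app open connections to the database port (the CIDR is a placeholder for your Private Network’s range):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
spec:
  # Apply the policy to the application pods only
  podSelector:
    matchLabels:
      app: node-postgres-app
  policyTypes:
    - Egress
  egress:
    # Allow PostgreSQL traffic to the Private Network range
    - to:
        - ipBlock:
            cidr: <private-network-cidr>
      ports:
        - protocol: TCP
          port: 5432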
Use secrets management
Consider using a secrets management solution like HashiCorp Vault or Kubernetes External Secrets to manage database credentials instead of storing them directly in Kubernetes Secrets.
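As an illustration only, with the External Secrets Operator installed and a hypothetical ClusterSecretStore named vault-backend pointing at your Vault instance, the db-credentials secret could be synchronized like this:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend     # hypothetical store, configured separately
  target:
    name: db-credentials    # the Kubernetes Secret to create and keep in sync
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: database/kube-db   # hypothetical Vault path
        property: password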
Regularly rotate credentials
Implement a process to regularly rotate database credentials. This can be automated using tools like Vault or custom operators.
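A minimal manual rotation could look like the following sketch (the scw rdb user update call assumes the admin user created earlier in this guide):

# Set a new password on the Database Instance
scw rdb user update instance-id=<database-instance-id> name=admin password=<new-strong-password>

# Update the Kubernetes secret in place so new pods pick up the change
kubectl create secret generic db-credentials \
  --from-literal=DB_HOST=<private-network-db-hostname> \
  --from-literal=DB_PORT=5432 \
  --from-literal=DB_NAME=rdb \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=<new-strong-password> \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the deployment so pods re-read the secret
kubectl rollout restart deployment/node-postgres-app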