Terraform module: Deploy your infrastructure in one click
Terraform is an infrastructure as code tool, and in this hands-on guide, we will learn how to turn an instance into a module to deploy our infrastructure.
Hi everyone! I’m Jules, Developer Relations Manager at Scaleway, and today I am going to show you how Terraform can change the way you manage your cloud infrastructure. If you want to quickly and easily set up a cloud infrastructure, one of the best ways to do it is to create a Terraform repository. You will then be able to deploy your resources in a few clicks.
Terraform is an open-source, Infrastructure-as-Code tool that helps you manage your infrastructure at any time, deploy or delete it in just one click, and work with other developers on your projects. Previously, I worked as a Solutions Architect on different projects, and it really helped me keep track of my work and of the infrastructures I deployed, while also allowing many developers to work on the same project easily.
Before starting on the project, you need to have an account, have your credentials all set up, and have Terraform installed on the server you are using (or locally), with the latest version of the Scaleway Terraform provider.
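For reference, one common way to provide your Scaleway credentials is through environment variables that the provider reads at runtime; the values below are placeholders to replace with your own keys:

> export SCW_ACCESS_KEY="<SCW_ACCESS_KEY>"
> export SCW_SECRET_KEY="<SCW_SECRET_KEY>"
> export SCW_DEFAULT_PROJECT_ID="<SCW_PROJECT_ID>"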
First, let’s create our workspace. Even if it is not mandatory, Terraform developers like to organize their repository to easily find their resources. It also allows you to store your data in a single location per environment (you can have a repository for your development infrastructure, another for production, and so on).
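For example, a layout along these lines, with one directory per environment (the names are just a suggestion):

terraform/
├── dev/
│   ├── backend.tf
│   ├── provider.tf
│   ├── main.tf
│   ├── variables.tf
│   └── terraform.tfvars
└── prod/
    └── ...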
Now we are going to create four different files: backend.tf, provider.tf, main.tf, and variables.tf (plus a terraform.tfvars file to give values to our variables).
For the time being, let’s just create these four files and fill in the backend and the provider as in the examples below:
terraform {
  backend "s3" {
    bucket                      = "XXXXXXXXX"
    key                         = "terraform.tfstate"
    region                      = "fr-par"
    endpoint                    = "https://s3.fr-par.scw.cloud"
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}

/*
For the credentials part, create a ~/.aws/credentials file:

[default]
aws_access_key_id=<SCW_ACCESS_KEY>
aws_secret_access_key=<SCW_SECRET_KEY>
region=fr-par
*/
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "2.2.0"
    }
  }
  required_version = ">= 0.13"
}
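The required_providers block only pins the plugin; the provider itself still needs to know where to operate. A minimal sketch of a provider block, using the zone and region variables we declare later (credentials can also be set here, or read from the environment variables mentioned earlier):

provider "scaleway" {
  zone   = var.zone
  region = var.region
}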
Besides the code we are providing you with, our Terraform directory will contain several other files, created by Terraform itself, to keep your infrastructure on track: the .terraform directory (downloaded provider plugins), the dependency lock file (.terraform.lock.hcl), and state files and their backups.
We should take steps to avoid checking these files into version control. Indeed, some of them contain content you really do not want to expose (state files, for example, can include credentials and other sensitive values). For this, we can use a .gitignore file and list the relevant paths and extensions.
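A typical .gitignore for a Terraform repository looks like this (adapt the list to your project):

# Local .terraform directories (provider binaries)
**/.terraform/*
# State files and their backups, which may contain sensitive values
*.tfstate
*.tfstate.*
# Crash logs
crash.log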
An important thing to know about Terraform is that it stores the resources it manages in a state file, which can be either local or remote. But how do we work together on the same infrastructure? What keeps us from deleting what our amazing coworkers have deployed? The combination of a backend and a remote state! Where local state is great for an isolated developer, remote state is absolutely necessary for a team, as each member needs to share the infrastructure state whenever there is a change.
So, each time a change is applied, the state is updated with new values: creations, deletions, and updates.
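At any point, you can ask Terraform what it currently tracks:

> terraform state list
> terraform show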
Knowing that, it is much more convenient to set up a shared backend so that every developer can participate in our Terraform project. In our case, we will use an Object Storage bucket to store our state (do not forget to set up your bucket credentials in ~/.aws/credentials).
Don’t forget to create your bucket before creating your backend. This is the only thing you have to do by yourself in the console, or via the API, before launching your project.
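Since the backend speaks the S3-compatible API, one way to create the bucket from the command line is the AWS CLI pointed at the Scaleway endpoint; the bucket name below is a placeholder and must match the one in your backend block:

> aws s3api create-bucket --bucket my-terraform-state --endpoint-url https://s3.fr-par.scw.cloud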
Here, we are going to fill our main.tf with our instance resources.
For this part, we are going to launch our first instance with an IP and a volume attached to it.
resource "scaleway_instance_ip" "public_ip" {}resource "scaleway_instance_volume" "scw-instance" { size_in_gb = 30 type = "l_ssd"}resource "scaleway_instance_server" "scw-instance" { type = "DEV1-L" image = "ubuntu_focal" tags = ["terraform instance", "scw-instance"] ip_id = scaleway_instance_ip.public_ip.id additional_volume_ids = [scaleway_instance_volume.scw-instance.id] root_volume { # The local storage of a DEV1-L instance is 80 GB, subtract 30 GB from the additional l_ssd volume, then the root volume needs to be 50 GB. size_in_gb = 50 }}
Also, do not forget to fill your variables.tf and your terraform.tfvars:
variable "zone" { type = string}variable "region" { type = string}variable "env" { type = string}
zone = "fr-par-1"region = "fr-par"env = "dev"
To finally launch our infrastructure, let’s switch to our terminal and run these three commands:
terraform init
terraform plan
terraform apply

terraform init downloads the provider and configures the backend, terraform plan shows what will be created, updated, or destroyed, and terraform apply actually deploys the changes.
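And when you are done with it, the whole infrastructure can be deleted just as easily:

terraform destroy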
Kapsule is the managed Kubernetes cluster developed by Scaleway. To deploy it with Terraform, you will have to create a cluster resource and an associated pool. In our example, we also configure autoscaling and an auto_upgrade policy (every Sunday at 4 a.m.).
resource "scaleway_k8s_cluster" "kapsule" { name = "kapsule-${var.env}" description = "${var.env} cluster" version = var.kapsule_cluster_version cni = "calico" tags = [var.env] autoscaler_config { disable_scale_down = false scale_down_delay_after_add = "5m" estimator = "binpacking" expander = "random" ignore_daemonsets_utilization = true balance_similar_node_groups = true expendable_pods_priority_cutoff = -5 } auto_upgrade { enable = true maintenance_window_start_hour = 4 maintenance_window_day = "sunday" }}resource "scaleway_k8s_pool" "default" { cluster_id = scaleway_k8s_cluster.kapsule.id name = "default" node_type = var.kapsule_pool_node_type size = var.kapsule_pool_size autoscaling = true autohealing = true min_size = var.kapsule_pool_min_size max_size = var.kapsule_pool_max_size}
variable "kapsule_cluster_version" { type = string}variable "kapsule_pool_size" { type = number}variable "kapsule_pool_min_size" { type = number}variable "kapsule_pool_max_size" { type = number}variable "kapsule_pool_node_type" { type = string}
kapsule_cluster_version = "1.22"
kapsule_pool_size       = 2
kapsule_pool_min_size   = 2
kapsule_pool_max_size   = 4
kapsule_pool_node_type  = "DEV1-M"
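To use the cluster with kubectl, you can expose its kubeconfig as a sensitive output; a sketch assuming the kubeconfig attribute exported by the cluster resource:

output "kubeconfig" {
  # kubeconfig is assumed to be the exported connection block of scaleway_k8s_cluster
  value     = scaleway_k8s_cluster.kapsule.kubeconfig[0].config_file
  sensitive = true
}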
Now, let's add a managed PostgreSQL database to our infrastructure. First, export your passwords as environment variables:
> export TF_VAR_rdb_user_root_password="ROOT_PASSWORD"
> export TF_VAR_rdb_user_scaleway_db_password="USER_PASSWORD"
N.B.: a little reminder about passwords: at Scaleway, they must be between 8 and 128 characters long and contain at least one digit, one uppercase letter, one lowercase letter, and one special character.
Then, just create the resources needed for your database: the instance, the database, the user and your ACLs:
resource "scaleway_rdb_instance" "scaleway-rdb" { name = "postgresql-${var.env}" node_type = var.rdb_instance_node_type volume_type = var.rdb_instance_volume_type engine = var.rdb_instance_engine is_ha_cluster = var.rdb_is_ha_cluster disable_backup = var.rdb_disable_backup volume_size_in_gb = var.rdb_instance_volume_size_in_gb user_name = "root" password = var.rdb_user_root_password}resource "scaleway_rdb_database" "scaleway-rdb" { instance_id = scaleway_rdb_instance.scaleway-rdb.id name = "${var.env}-database"}resource "scaleway_rdb_user" "scaleway-rdb" { instance_id = scaleway_rdb_instance.scaleway-rdb.id name = "${var.env}-user-database" password = var.rdb_user_scaleway_db_password is_admin = false}resource "scaleway_rdb_acl" "scaleway-rdb" { instance_id = scaleway_rdb_instance.scaleway-rdb.id acl_rules { ip = "${scaleway_instance_ip.public_ip.address}/32" description = "SCW instance" }}
rdb_instance_node_type         = "db-gp-xs"
rdb_instance_engine            = "PostgreSQL-13"
rdb_is_ha_cluster              = true
rdb_disable_backup             = false
rdb_instance_volume_type       = "bssd"
rdb_instance_volume_size_in_gb = 50
variable "rdb_is_ha_cluster" { type = bool}variable "rdb_disable_backup" { type = bool}variable "rdb_instance_node_type" { type = string}variable "rdb_instance_engine" { type = string}variable "rdb_instance_volume_size_in_gb" { type = string}variable "rdb_user_root_password" { type = string}variable "rdb_user_scaleway_db_password" { type = string}variable "rdb_instance_volume_type" { type = string}
Finally, you will have your own infrastructure, with an instance, a database that only accepts connections from that instance, and a Kapsule cluster, all ready to use!
This article is only the first part of our “How you can use Terraform to deploy your Scaleway infrastructure” series. We hope that, by reading it, you will realize that Terraform is a fantastic tool to manage your cloud infrastructures at Scaleway. Of course, we have more products to show you, and there is always room for improvement: for example, wrapping everything into a module so this code can easily be reused in another project. We will tackle this topic in a future article.
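To give you an idea, the instance code above would move into a local module and be consumed like this (the path and variable names are purely illustrative):

module "instance" {
  source = "./modules/instance"
  env    = var.env
  zone   = var.zone
}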