Terraform check{} Block (Continued)
Introduction
I have recently written about the check{} blocks introduced in Terraform v1.5.0-alpha20230405.
In a nutshell, check blocks are used to validate infrastructure resources. Module authors can now include independent check blocks in their configurations. These must have at least one assert block, but may contain several assert blocks if necessary. Each assert block, similarly to the existing Custom Condition Checks, consists of a condition expression and an error_message expression.
Furthermore, check blocks can take advantage of scoped data sources. These work similarly to regular data sources, with the distinction that they can only be accessed from within the check blocks that include them.
In contrast to precondition and postcondition blocks, Terraform does not halt execution if the scoped data block fails or errors, or if any of the assertions fail. This capability enables practitioners to assess the state of their infrastructure in real time, independent of the regular lifecycle management process.
Exploring the Use Case
Starting with the examples in the previous article, I have encountered a new use case which involves the helm Terraform Provider.
This is an extension to Terraform that allows you to manage Helm charts as part of your infrastructure provisioning process. It integrates Helm’s capabilities into Terraform, enabling you to install, upgrade, and delete Helm releases as part of your Terraform workflow.
In my view, it is rather important (and tricky) to manage the order in which Helm releases are deployed when using the helm Terraform Provider. Let’s consider a scenario where we aim to deploy Vault using the official Helm chart while also creating an Ingress resource to enable access. In this case, the Ingress NGINX Controller becomes a prerequisite for the Vault release.
Typically, we would define each Helm release as a Terraform resource using the helm_release resource block, specifying the configuration parameters for each release. We would then use the depends_on argument to establish the order in which the releases should be deployed.
resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  chart      = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  version    = "4.6.0"
  namespace  = "ingress-nginx"
}

resource "helm_release" "vault" {
  name       = "vault"
  chart      = "vault"
  repository = "https://helm.releases.hashicorp.com"
  version    = "0.24.0"
  namespace  = "unfriendlygrinch"

  set {
    name  = "server.ingress.enabled"
    value = "true"
  }

  set {
    name  = "server.ingress.hosts[0].host"
    value = "vault.samedin.ro"
  }

  set {
    name  = "ui.enabled"
    value = "true"
  }

  depends_on = [helm_release.ingress_nginx]
}
In the example above, the vault release depends on ingress-nginx, thus Vault will be deployed only after the NGINX Ingress Controller release has been successfully provisioned.
This example addresses the case where the resources are defined in a linear manner. However, what does this look like when using a loop to create numerous releases while also taking a certain deployment sequence into account?
Let’s start by defining a helm_releases variable:
variable "helm_releases" {
  type = list(object({
    name       = string
    chart      = string
    repository = string
    version    = optional(string)
    namespace  = optional(string, "default")
    set        = optional(map(string), {})
  }))
  description = <<EOT
List of Helm releases
name       = Release name.
chart      = Chart name.
repository = Repository URL to locate the chart.
version    = Chart version.
namespace  = Namespace to install the release into.
set        = Value block with custom values.
EOT
  default     = []
}
The helm_releases variable is defined as a list of objects, where each object represents the configuration for a Helm release.
Note that some of the attributes of the helm_releases variable are optional. The syntax is optional(&lt;TYPE_CONSTRAINT&gt;, &lt;DEFAULT_VALUE&gt;). If no default value is specified, the attribute falls back to null.
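The defaulting behavior of optional() can be mirrored in plain Python. This is a rough analogy only, not how Terraform implements type constraints; the function name is made up for illustration:

```python
# Rough Python analogy for Terraform's optional() type constraint:
#   optional(string)            -> falls back to None (Terraform's null)
#   optional(string, "default") -> falls back to the given default value

def apply_release_defaults(release: dict) -> dict:
    """Fill in the optional attributes of a helm_releases entry (hypothetical helper)."""
    return {
        "name":       release["name"],        # required
        "chart":      release["chart"],       # required
        "repository": release["repository"],  # required
        "version":    release.get("version"),                  # optional, no default -> None
        "namespace":  release.get("namespace", "default"),     # optional with default
        "set":        release.get("set", {}),                  # optional with default
    }

minimal = {
    "name": "vault",
    "chart": "vault",
    "repository": "https://helm.releases.hashicorp.com",
}
print(apply_release_defaults(minimal)["namespace"])  # -> default
```

Just as with optional() in Terraform, a caller may omit version, namespace, and set entirely and still get a fully populated object back.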
Let’s also update the helm_release resource:
resource "helm_release" "this" {
  for_each = { for release in var.helm_releases : release.name => release }

  name       = each.key
  chart      = each.value.chart
  repository = each.value.repository
  version    = each.value.version
  namespace  = each.value.namespace

  dynamic "set" {
    iterator = item
    for_each = each.value.set

    content {
      name  = item.key
      value = item.value
    }
  }
}
The resource block uses a for_each loop to iterate over the var.helm_releases list and create a Helm release using the specified configuration parameters.
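Because for_each requires a map (or set of strings), the expression first projects the list into a map keyed by release name. The same projection can be sketched as a Python dict comprehension, purely as an analogy:

```python
helm_releases = [
    {"name": "ingress-nginx", "chart": "ingress-nginx"},
    {"name": "vault", "chart": "vault"},
]

# Analogous to: { for release in var.helm_releases : release.name => release }
releases_by_name = {release["name"]: release for release in helm_releases}

print(sorted(releases_by_name))  # -> ['ingress-nginx', 'vault']
```

One caveat of keying by name: duplicate release names would collide. Terraform rejects duplicate for_each keys with an error, whereas the Python comprehension silently keeps the last entry.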
In order to create the resources, the NGINX Ingress Controller and Vault, we define these in a terraform.tfvars file:
helm_releases = [{
  name       = "ingress-nginx"
  chart      = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  version    = "4.6.0"
  namespace  = "ingress-nginx"
}, {
  name       = "vault"
  chart      = "vault"
  repository = "https://helm.releases.hashicorp.com"
  version    = "0.24.0"
  namespace  = "unfriendlygrinch"
  set = {
    "server.ingress.enabled"       = "true"
    "server.ingress.hosts[0].host" = "vault.samedin.ro"
    "ui.enabled"                   = "true"
  }
}]
So far, so good… How do we now manage the deployment sequence of the releases? It is no longer possible to use the depends_on argument, because explicit dependencies are static references: the depends_on argument accepts only a well-defined list of resources.
Check
As you could imagine up to this point, we are going to define a check{} block in order to ensure that the prerequisites of a given Helm release are met.
Let’s start by updating the helm_releases variable:
variable "helm_releases" {
  type = list(object({
    name       = string
    chart      = string
    repository = string
    version    = optional(string)
    namespace  = optional(string, "default")
    set        = optional(map(string), {})
    prerequisites = optional(list(object({
      name      = string
      namespace = string
    })), [])
  }))
  description = <<EOT
List of Helm releases
name          = Release name.
chart         = Chart name.
repository    = Repository URL to locate the chart.
version       = Chart version.
namespace     = Namespace to install the release into.
set           = Value block with custom values.
prerequisites = List of Helm releases upon which the current Helm release depends.
EOT
  default     = []
}
The newly added prerequisites attribute defines the Helm releases upon which the current release depends. For vault, for example, this would look like:
prerequisites = [{
  name      = "ingress-nginx"
  namespace = "ingress-nginx"
}]
This attribute is in turn used in the release_prerequisites check block in order to verify whether the required releases have been deployed beforehand.
check "release_prerequisites" {
  assert {
    condition = length(setsubtract(
      [for k, v in data.external.this : v.result.release_name],
      [for k, v in data.external.this : v.result.release_name if v.result.status == "deployed"]
    )) == 0
    error_message = format("The following Helm releases %#v are not deployed.",
      setsubtract(
        [for k, v in data.external.this : v.result.release_name],
        [for k, v in data.external.this : v.result.release_name if v.result.status == "deployed"]
      )
    )
  }
}
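The condition is easier to follow when rebuilt step by step. In Python terms, it is plain set arithmetic over the statuses reported by the external data source (a sketch with made-up sample data, not Terraform code):

```python
# Statuses as reported by the external data source, keyed like data.external.this
results = {
    "0": {"release_name": "ingress-nginx", "status": "deployed"},
    "1": {"release_name": "cert-manager", "status": "not deployed"},
}

all_prereqs = {v["release_name"] for v in results.values()}
deployed = {v["release_name"] for v in results.values() if v["status"] == "deployed"}

# Equivalent of setsubtract(...): prerequisites that are NOT deployed
missing = all_prereqs - deployed
print(sorted(missing))  # -> ['cert-manager']

# The assertion passes only when this set is empty (length == 0)
```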
The check block uses an external data source:
locals {
  prerequisites = flatten([for release in var.helm_releases :
    [for prereq in release.prerequisites : {
      current_release_name = release.name
      current_namespace    = release.namespace
      prereq_release_name  = prereq.name
      prereq_namespace     = prereq.namespace
    }]
  ])
}

data "external" "this" {
  for_each = { for idx, release in local.prerequisites : idx => release }

  program = ["python", "${path.module}/check_helm_release.py", each.value.prereq_release_name, each.value.prereq_namespace]
}
Usually, such data sources are used to retrieve data from external sources that are not managed by Terraform itself. In our case, the data source runs a Python script (located in the same directory as the Terraform configuration file) that verifies the status of the releases defined as prerequisites of the current release.
import json
import subprocess
import sys

name = sys.argv[1]
namespace = sys.argv[2]

try:
    subprocess.check_output(["helm", "status", name, "-n", namespace])
    status = "deployed"
except subprocess.CalledProcessError:
    status = "not deployed"
except OSError:
    status = "Helm is not installed"
except Exception:
    status = "error"

result = {
    "release_name": name,
    "status": status,
}

print(json.dumps(result))
The check_helm_release.py
Python script uses the subprocess
module to execute the helm status
command and determine the status of a Helm release. It takes two command-line arguments: the name of the release and the namespace. The script handles a variety of scenarios, including successful deployment, release not being deployed, Helm not being installed, and any other issues. The release name and its status are then printed in JSON format.
Let’s now apply the configuration:
$ terraform apply
data.external.this["0"]: Reading...
helm_release.this["ingress-nginx"]: Refreshing state... [id=ingress-nginx]
data.external.this["0"]: Read complete after 0s [id=-]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.this["vault"] will be created
  + resource "helm_release" "this" {
      + atomic                     = false
      + chart                      = "vault"
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "vault"
      + namespace                  = "unfriendlygrinch"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://helm.releases.hashicorp.com"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "0.24.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "server.ingress.enabled"
          + value = "true"
        }
      + set {
          + name  = "server.ingress.hosts[0].host"
          + value = "vault.samedin.ro"
        }
      + set {
          + name  = "ui.enabled"
          + value = "true"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

helm_release.this["vault"]: Creating...
helm_release.this["vault"]: Still creating... [10s elapsed]
helm_release.this["vault"]: Creation complete after 18s [id=vault]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Checking the Terraform state:
"check_results": [
  {
    "object_kind": "check",
    "config_addr": "check.release_prerequisites",
    "status": "pass",
    "objects": [
      {
        "object_addr": "check.release_prerequisites",
        "status": "pass"
      }
    ]
  }
]
Given the intention to secure the Ingress resource by adding TLS, one approach is to employ cert-manager to generate and set up a Let’s Encrypt certificate. Consequently, having cert-manager deployed becomes a requirement for vault. Hence, we need to update the prerequisites:
prerequisites = [{
  name      = "ingress-nginx"
  namespace = "ingress-nginx"
}, {
  name      = "cert-manager"
  namespace = "ingress-nginx"
}]
Let’s now review the Terraform plan:
$ terraform plan
helm_release.this["ingress-nginx"]: Refreshing state... [id=ingress-nginx]
data.external.this["1"]: Reading...
data.external.this["0"]: Reading...
helm_release.this["vault"]: Refreshing state... [id=vault]
data.external.this["1"]: Read complete after 1s [id=-]
data.external.this["0"]: Read complete after 1s [id=-]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
╷
│ Warning: Check block assertion failed
│
│ on check.tf line 19, in check "release_prerequisites":
│ 19: condition = length(setsubtract([for k, v in data.external.this : v.result.release_name], [for k, v in data.external.this : v.result.release_name if v.result.status == "deployed"])) == 0
│ ├────────────────
│ │ data.external.this is object with 2 attributes
│
│ The following Helm releases ["cert-manager"] are not deployed.
╵
When employing a loop, we lack the capability to deploy Helm releases in a particular sequence. However, by utilizing such a check block, the user is notified of the unmet prerequisites and can subsequently take appropriate action.
Final Thoughts
Explicit dependencies between resources are defined using the depends_on argument. We can specify that a particular resource depends on another, ensuring that Terraform creates or changes them in the correct order.
In most cases, Terraform infers the dependency graph automatically. However, in certain circumstances, for example when a resource relies on another resource’s behavior without being able to access its data, we need to manage the dependencies explicitly.
We are able to deploy Helm releases either by defining the helm_release resource multiple times and explicitly managing the prerequisites, or by looping over a list and ensuring that the required components are in place by leveraging check blocks.