Using Terraform

warning

Creating the workshop cluster with Terraform is currently in preview. Please raise any issues encountered in the GitHub repository.

This section outlines how to build a cluster for the lab exercises using HashiCorp Terraform. It is intended for learners who are already familiar with using Terraform for infrastructure as code.

The terraform CLI has been pre-installed in your IDE environment, so we can create the cluster right away. Let's start by examining the main Terraform configuration files used to build the cluster and its supporting infrastructure.

Understanding Terraform config files

The providers.tf file configures the Terraform providers needed to build the infrastructure. In our case, we use the aws, kubernetes, and helm providers:

provider "aws" {
  default_tags {
    tags = local.tags
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.67.0"
    }
  }

  required_version = ">= 1.4.2"
}
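Only the aws provider block is shown above. The kubernetes and helm providers mentioned earlier are typically configured to authenticate against the cluster created by the eks module further down. The following is a minimal sketch of what that configuration might look like; the downloaded providers.tf is authoritative and may differ:

# Sketch only (helm provider 2.x style) -- see the downloaded providers.tf for the real configuration
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Obtain a short-lived authentication token for the cluster created by the eks module
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}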

The main.tf file sets up Terraform data sources to retrieve the current AWS account and region being used, as well as some default tags:

locals {
  tags = {
    created-by = "eks-workshop-v2"
    env        = var.cluster_name
  }
}
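The account and region lookups mentioned above are Terraform's standard aws_caller_identity and aws_region data sources. They are not part of the excerpt, but look roughly like this:

# Look up the AWS account ID and region the workshop is being deployed into
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}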

The vpc.tf file creates the VPC infrastructure that the cluster and its workloads will use:

locals {
  private_subnets = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 3, k + 3)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 3, k)]
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
}

data "aws_availability_zones" "available" {
  state = "available"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1"

  name = var.cluster_name
  cidr = var.vpc_cidr

  azs                   = local.azs
  public_subnets        = local.public_subnets
  private_subnets       = local.private_subnets
  public_subnet_suffix  = "SubnetPublic"
  private_subnet_suffix = "SubnetPrivate"

  enable_nat_gateway   = true
  create_igw           = true
  enable_dns_hostnames = true
  single_nat_gateway   = true

  # Manage so we can name
  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${var.cluster_name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${var.cluster_name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${var.cluster_name}-default" }

  public_subnet_tags = merge(local.tags, {
    "kubernetes.io/role/elb" = "1"
  })
  private_subnet_tags = merge(local.tags, {
    "karpenter.sh/discovery"          = var.cluster_name
    "kubernetes.io/role/internal-elb" = "1"
  })

  tags = local.tags
}
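As a quick illustration of the cidrsubnet() expressions in the locals block, each subnet is carved out as a /19 block, with netnums 0-2 used for the public subnets and 3-5 for the private subnets. Assuming a vpc_cidr of 10.42.0.0/16 (an assumed value for illustration; check the downloaded variables.tf for the actual default), you can evaluate this yourself with terraform console:

~$terraform console
> cidrsubnet("10.42.0.0/16", 3, 0)
"10.42.0.0/19"
> cidrsubnet("10.42.0.0/16", 3, 3)
"10.42.96.0/19"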

Finally, the eks.tf file specifies our EKS cluster configuration, including a Managed Node Group:

locals {
  remote_node_cidr = var.remote_network_cidr
  remote_pod_cidr  = var.remote_pod_cidr
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name                             = var.cluster_name
  cluster_version                          = var.cluster_version
  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  cluster_addons = {
    vpc-cni = {
      before_compute = true
      most_recent    = true
      configuration_values = jsonencode({
        env = {
          ENABLE_POD_ENI                    = "true"
          ENABLE_PREFIX_DELEGATION          = "true"
          POD_SECURITY_GROUP_ENFORCING_MODE = "standard"
        }
        nodeAgent = {
          enablePolicyEventLogs = "true"
        }
        enableNetworkPolicy = "true"
      })
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  create_cluster_security_group = false
  create_node_security_group    = false

  cluster_security_group_additional_rules = {
    hybrid-node = {
      cidr_blocks = [local.remote_node_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }

    hybrid-pod = {
      cidr_blocks = [local.remote_pod_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }

  node_security_group_additional_rules = {
    hybrid_node_rule = {
      cidr_blocks = [local.remote_node_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }

    hybrid_pod_rule = {
      cidr_blocks = [local.remote_pod_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }

  cluster_remote_network_config = {
    remote_node_networks = {
      cidrs = [local.remote_node_cidr]
    }
    # Required if running webhooks on Hybrid nodes
    remote_pod_networks = {
      cidrs = [local.remote_pod_cidr]
    }
  }

  eks_managed_node_groups = {
    default = {
      instance_types           = ["m5.large"]
      force_update_version     = true
      release_version          = var.ami_release_version
      use_name_prefix          = false
      iam_role_name            = "${var.cluster_name}-ng-default"
      iam_role_use_name_prefix = false

      min_size     = 3
      max_size     = 6
      desired_size = 3

      update_config = {
        max_unavailable_percentage = 50
      }

      labels = {
        workshop-default = "yes"
      }
    }
  }

  tags = merge(local.tags, {
    "karpenter.sh/discovery" = var.cluster_name
  })
}
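The configuration above references several input variables, all declared in variables.tf, which is downloaded along with the other files below. The sketch here shows only their names and likely types; the downloaded file is authoritative and also supplies defaults:

# Sketch only -- the downloaded variables.tf defines the real descriptions and defaults
variable "cluster_name" {
  description = "Name of the EKS cluster and prefix for supporting resources"
  type        = string
}

variable "cluster_version" {
  description = "Kubernetes version for the EKS control plane"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for the workshop VPC"
  type        = string
}

variable "ami_release_version" {
  description = "AMI release version for the default managed node group"
  type        = string
}

variable "remote_network_cidr" {
  description = "CIDR of the remote (hybrid) node network"
  type        = string
}

variable "remote_pod_cidr" {
  description = "CIDR of the remote (hybrid) pod network"
  type        = string
}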

Creating the workshop environment with Terraform

Based on this configuration, Terraform will create the workshop environment with the following:

  • A VPC across three availability zones
  • An EKS cluster
  • An IAM OIDC provider
  • A managed node group named default
  • VPC CNI configured to use prefix delegation

Download the Terraform files:

~$mkdir -p ~/environment/terraform; cd ~/environment/terraform
~$curl --remote-name-all https://raw.githubusercontent.com/aws-samples/eks-workshop-v2/stable/cluster/terraform/{main.tf,variables.tf,providers.tf,vpc.tf,eks.tf}
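
You can confirm that all five configuration files are present before continuing; you should see eks.tf, main.tf, providers.tf, variables.tf and vpc.tf listed:

~$ls ~/environment/terraform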

Run the following Terraform commands to deploy your workshop environment:

~$export EKS_CLUSTER_NAME=eks-workshop
~$terraform init
~$terraform apply -var="cluster_name=$EKS_CLUSTER_NAME" -auto-approve

This process generally takes 20-25 minutes to complete.
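
Once the apply completes, you can optionally confirm that the cluster is reachable. The workshop IDE may already have kubectl configured for you, but these commands are safe to run either way:

~$aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
~$kubectl get nodes

You should see the three nodes from the default managed node group reporting a Ready status.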

Next Steps

Now that the cluster is ready, head to the Navigating the labs section or skip ahead to any module in the workshop using the top navigation bar. Once you've completed the workshop, follow the steps below to clean up your environment.

Cleaning Up (steps once you are done with the Workshop)

warning

The following demonstrates how to clean up resources once you are done using the EKS cluster. Completing these steps will prevent further charges to your AWS account.

Before deleting the IDE environment, clean up the cluster that we set up in previous steps.

First, use delete-environment to ensure that the sample application and any leftover lab infrastructure are removed:

~$delete-environment

Next, delete the cluster with terraform:

~$cd ~/environment/terraform
~$terraform destroy -var="cluster_name=$EKS_CLUSTER_NAME" -auto-approve

You can now proceed to cleaning up the IDE.