Examine the evolution of virtualization technologies from bare metal to virtual machines to containers, and the tradeoffs between them.
Install Terraform and configure it to work with AWS.
Learn the common Terraform commands and how to use them:
• Terraform plan, apply, and destroy
Use Terraform variables and outputs to make our configurations more flexible.
Explore HCL language features in Terraform to create more expressive and modular infrastructure code.
Learn to break your code into modules to make it flexible and reusable.
Review the two primary methods for managing multiple Terraform environments.
Learn techniques for testing and validating Terraform code.
Cover how teams generally work with Terraform, including automated deployment with CI/CD.
In this lesson, we will walk through a sample Terraform configuration that is used throughout the remainder of the course.
We will be using various AWS resources to build a simple web application architecture.
The full code shown in the video can be found in the GitHub repo.
Note: This lesson shows a naive implementation with all resources in a single main.tf file, hardcoded values, etc. In future lessons, we will build upon this base to apply Terraform best practices.
Choose between Terraform Cloud, AWS S3 + DynamoDB, or a local backend. For this example, we will use the AWS S3 backend with DynamoDB for state locking. See the previous lesson for info about setting this up.
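As an aside, if you omit the backend block entirely, Terraform defaults to local state; you can also configure a local backend explicitly, roughly like this (a minimal sketch, with an illustrative path):

terraform {
  backend "local" {
    # State will be stored in this file on your machine
    path = "terraform.tfstate"
  }
}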
Create a main.tf file and configure the backend definition. The backend configuration goes within the top-level terraform {} block.
terraform {
  # Assumes the S3 bucket and DynamoDB table are already set up
  # See /code/03-basics/aws-backend
  backend "s3" {
    bucket         = "devops-directive-tf-state"
    key            = "03-basics/web-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locking"
    encrypt        = true
  }
}
Next, define the AWS provider. You should specify the provider version as well as the AWS region you want the provider to operate in.
terraform {
  # ...
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # v4+ of the AWS provider is required for the standalone
      # S3 bucket versioning and encryption resources used below
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
The following configuration defines two virtual machines, each running a basic Python web server that is executed at startup (by placing the commands within the user_data block).
We also need to define a security group so that we can allow inbound traffic to the instances.
resource "aws_instance" "instance_1" {
ami = "ami-011899242bb902164" # Ubuntu 20.04 LTS // us-east-1
instance_type = "t2.micro"
security_groups = [aws_security_group.instances.name]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World 1" > index.html
python3 -m http.server 8080 &
EOF
}
resource "aws_instance" "instance_2" {
ami = "ami-011899242bb902164" # Ubuntu 20.04 LTS // us-east-1
instance_type = "t2.micro"
security_groups = [aws_security_group.instances.name]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World 2" > index.html
python3 -m http.server 8080 &
EOF
}
resource "aws_security_group" "instances" {
name = "instance-security-group"
}
We saw how to create an S3 bucket when bootstrapping the AWS backend. This configuration is similar.
resource "aws_s3_bucket" "bucket" {
bucket_prefix = "devops-directive-web-app-data"
force_destroy = true
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_crypto_conf" {
bucket = aws_s3_bucket.bucket.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
To keep things simple, this configuration is deployed into the default VPC and subnets. Since these already exist, we use data blocks rather than resource blocks so that Terraform can retrieve information about them without managing them directly.
data "aws_vpc" "default_vpc" {
default = true
}
data "aws_subnet_ids" "default_subnet" {
vpc_id = data.aws_vpc.default_vpc.id
}
Security groups define what traffic is allowed. Here we specify that inbound traffic on port 8080 can reach our virtual machines.
resource "aws_security_group_rule" "allow_http_inbound" {
type = "ingress"
security_group_id = aws_security_group.instances.id
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
We have two virtual machines and want to split traffic between them. We can do this with a load balancer. We configure the load balancer behavior and attach the two EC2 instances to it.
resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.load_balancer.arn
port = 80
protocol = "HTTP"
# By default, return a simple 404 page
default_action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "404: page not found"
status_code = 404
}
}
}
resource "aws_lb_target_group" "instances" {
name = "example-target-group"
port = 8080
protocol = "HTTP"
vpc_id = data.aws_vpc.default_vpc.id
health_check {
path = "/"
protocol = "HTTP"
matcher = "200"
interval = 15
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
}
resource "aws_lb_target_group_attachment" "instance_1" {
target_group_arn = aws_lb_target_group.instances.arn
target_id = aws_instance.instance_1.id
port = 8080
}
resource "aws_lb_target_group_attachment" "instance_2" {
target_group_arn = aws_lb_target_group.instances.arn
target_id = aws_instance.instance_2.id
port = 8080
}
resource "aws_lb_listener_rule" "instances" {
listener_arn = aws_lb_listener.http.arn
priority = 100
condition {
path_pattern {
values = ["*"]
}
}
action {
type = "forward"
target_group_arn = aws_lb_target_group.instances.arn
}
}
resource "aws_security_group" "alb" {
name = "alb-security-group"
}
resource "aws_security_group_rule" "allow_alb_http_inbound" {
type = "ingress"
security_group_id = aws_security_group.alb.id
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "allow_alb_all_outbound" {
type = "egress"
security_group_id = aws_security_group.alb.id
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_lb" "load_balancer" {
name = "web-app-lb"
load_balancer_type = "application"
subnets = data.aws_subnet_ids.default_subnet.ids
security_groups = [aws_security_group.alb.id]
}
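After an apply, the load balancer's auto-generated DNS name is the easiest way to reach the app. One convenient addition (not part of the lesson's code; the output name is illustrative, and outputs are covered in a later lesson) is an output that surfaces it:

output "lb_dns_name" {
  # Printed after every apply; use it to reach the app before DNS is set up
  description = "DNS name of the load balancer"
  value       = aws_lb.load_balancer.dns_name
}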
Rather than accessing the application via the load balancer's auto-generated domain, we define a Route 53 DNS record to use a domain of our choosing.
resource "aws_route53_zone" "primary" {
name = "devopsdeployed.com"
}
resource "aws_route53_record" "root" {
zone_id = aws_route53_zone.primary.zone_id
name = "devopsdeployed.com"
type = "A"
alias {
name = aws_lb.load_balancer.dns_name
zone_id = aws_lb.load_balancer.zone_id
evaluate_target_health = true
}
}
You will also need to update your domain's nameservers to use the AWS nameservers.
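To find the nameservers AWS assigned to the zone, you can check the Route 53 console, or expose them with an output like this (a minimal sketch; the output name is illustrative):

output "route53_nameservers" {
  # Configure these at your domain registrar
  value = aws_route53_zone.primary.name_servers
}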
Our application does not actually use the RDS instance, but we provision one to demonstrate how, since most web applications need a database of some kind.
resource "aws_db_instance" "db_instance" {
allocated_storage = 20
# This allows any minor version within the major engine_version
# defined below, but will also result in allowing AWS to auto
# upgrade the minor version of your DB. This may be too risky
# in a real production environment.
auto_minor_version_upgrade = true
storage_type = "standard"
engine = "postgres"
engine_version = "12"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
skip_final_snapshot = true
}
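Hardcoding the password like this is part of the naive implementation. When we cover variables later in the course, it could instead be passed in as a sensitive input variable, roughly like this (a sketch; the variable name is illustrative):

variable "db_password" {
  # Marked sensitive so Terraform redacts it from plan/apply output
  description = "Password for the RDS instance"
  type        = string
  sensitive   = true
}

The resource would then reference it with password = var.db_password, keeping the secret out of the configuration file.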
Initialize, Plan, and Apply the Configuration
Run terraform init to initialize the remote backend, terraform plan to review the planned changes, and terraform apply to apply them and provision the resources.
Test the Web Application
Access the load balancer's DNS name or your domain (for example, curl http://devopsdeployed.com) to check that the instances are serving traffic and load balancing is functioning properly.
When you are done, run terraform destroy to clean up the resources and avoid incurring additional costs.
Throughout the course, we will build on this base configuration and learn how to make it more extensible and cleaner.
We will also explore breaking the configuration into smaller files, using variables, and optimizing the Terraform workflow.