The Next Step in Infrastructure Automation
Over the last year I’ve been writing a ton of Terraform. The tool is incredibly versatile and has let me tackle just about any automation scenario involving AWS infrastructure. It’s easy to use, easy to integrate into scripts or other automation, and a must-have DevOps skill these days. If you’ve been in the DevOps space for any length of time, you’ve definitely heard of it.
From terraform.io:
Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.
The pros with Terraform are:
- Easy to use
- Good, easy-to-reference documentation
- Integrates easily with automation systems, since it is a lightweight CLI tool
- Can also automate other applications and tools, like Vault or Okta (see the provider documentation)
The cons:
- When state gets out of sync, it can be tedious to fix
- Conditionals can be hard to express, so templating tools like Jinja are often used to fill the gaps
- Importing existing resources is clunky and can be difficult
There are more pros and cons than I’ve listed, but these are the ones that stick out to me the most. Overall, I’ve been able to automate a ton of tasks with Terraform when it comes to building out resources in AWS.
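As a taste of the conditional awkwardness mentioned above: HCL has no if-statement for resources, so the usual workaround is a ternary on `count`. A hypothetical sketch (the variable and role name here are made up for illustration):

```hcl
variable "create_monitoring_role" {
  type    = bool
  default = false
}

resource "aws_iam_role" "monitoring" {
  # 1 copy when true, 0 copies (resource skipped) when false
  count = var.create_monitoring_role ? 1 : 0
  name  = "monitoring-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}
```

It works, but every reference to the resource then needs an index (`aws_iam_role.monitoring[0]`), which is why people reach for external templating.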
What we will do
In this intro we will cover the basics of Terraform:
- Set up a project for the first time
- Define an EC2 instance with an attached EBS volume
- Run terraform plan to see what Terraform will create
- Apply the plan to our AWS account
- Update the resources we created
- Destroy the resources we created
Prerequisites
- Terraform installed
- An AWS account
- You got snacks and used the bathroom already
I’m using Terraform version 1.0.11:
```
[jayson@RyterINC ~] terraform --version
Terraform v1.0.11
on darwin_amd64
```
Here is the repo for the resources we will create: https://github.com/RyterINC/terraform-getting-started
Let’s get started!
By the way, AWS costs money. We will be working within the free tier, but please know that I’m not responsible for charges you incur.
Let’s create a directory called dev-instance. Terraform looks for any file in your current directory with the .tf file extension, so we have to create some .tf files that define the resources we need. Let’s create some.
```
mkdir dev-instance
cd dev-instance
touch main.tf backend.tf provider.tf
```
We technically could have named these files anything we wanted, as long as the .tf extension is present. As a best practice, we name the files after the resources we are building, or something else that makes sense.
Thinking of Terraform from a logical perspective, we need to configure where the state file is stored, and we need to connect to AWS. The state file (for lack of a better analogy) is like the database for your Terraform resources, represented as a JSON file. In our case, we can configure Terraform to use AWS S3 as our backend in our backend.tf file.
```hcl
terraform {
  backend "s3" {
    bucket = "devopsreport-terraform"
    key    = "terraform-getting-started/dev-instance.tfstate"
    region = "us-east-1"
  }
}
```
Let’s talk about the above backend definition:
- bucket: the S3 bucket we are targeting (it must already exist before you run terraform init)
- key: the path to the state file within the bucket, including the name of the file itself
- region: the region the bucket lives in
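Once resources have been created, the state in that bucket can be inspected with Terraform’s own subcommands rather than by reading the JSON by hand. These are standard commands (using the resource address we will define later in this post); the exact output depends on your resources:

```
# List every resource Terraform is tracking in state
terraform state list

# Show the attributes recorded for a single resource
terraform state show aws_instance.dev-instance
```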
Speaking of AWS, we need to let Terraform know that it is the AWS cloud we want to talk to and interact with, which is why we configure it as our provider in our provider.tf file.
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```
In the above section, we declare that this configuration requires the aws provider at version ~> 3.0, meaning at least 3.0 but below 4.0. We then configure the provider region as us-east-1.
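For reference, the ~> ("pessimistic") operator pins the version at the level of its rightmost component; a quick sketch of how the common constraints read:

```hcl
# Version constraint examples (illustrative)
version = "~> 3.0"    # allows >= 3.0.0 and < 4.0.0
version = "~> 3.66.0" # allows >= 3.66.0 and < 3.67.0
version = ">= 3.66"   # any release at or above 3.66, no upper bound
```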
Be sure to replace all relevant names, such as buckets and paths, to reflect your own AWS account.
At this point, we can run terraform init to initialize our Terraform working directory.
```
[jayson@RyterINC dev-instance] terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.66.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
Defining Resources
Our project is to create an EC2 instance with EBS storage attached. If there is an option in the AWS console for a resource, chances are Terraform has an argument or configuration setting that lets you define it. In our case, we need to define an EC2 instance, which is the aws_instance resource in Terraform.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
This documentation shows examples of defining an instance, as well as the arguments you can set. Some arguments are required; others are optional.
```hcl
resource "aws_instance" "dev-instance" {
  ami           = "ami-04902260ca3d33422" # us-east-1
  instance_type = "t2.micro"

  tags = {
    Name = "dev-instance"
  }
}
```
With the above configuration, we can now run terraform plan, which prints to the console the actions Terraform will take to bring our environment to the desired state defined in our .tf files.
```
[jayson@RyterINC dev-instance] terraform plan

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.dev-instance will be created
  + resource "aws_instance" "dev-instance" {
      + ami = "ami-04902260ca3d33422"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = (known after apply)
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + source_dest_check = true
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "dev-instance"
        }
      + tags_all = {
          + "Name" = "dev-instance"
        }
      + tenancy = (known after apply)
      + user_data = (known after apply)
      + user_data_base64 = (known after apply)
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index = (known after apply)
          + network_interface_id = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```
The above looks like a lot, but if we dissect it we find that all Terraform is doing is creating our instance, along with a default 8 GB gp2 root volume and a few other things an EC2 instance needs to function. We could define all of those settings ourselves, but the defaults are fine for now.
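If the defaults ever stop being fine, those same attributes can be set explicitly on the resource. Here is a sketch of overriding the root volume (the size here is illustrative, and larger volumes can fall outside the free tier):

```hcl
resource "aws_instance" "dev-instance" {
  ami           = "ami-04902260ca3d33422" # us-east-1
  instance_type = "t2.micro"

  # Override the default 8 GB gp2 root volume
  root_block_device {
    volume_size = 20 # GB; illustrative value
    volume_type = "gp2"
    encrypted   = true
  }

  tags = {
    Name = "dev-instance"
  }
}
```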
When reading a Terraform plan, the + symbols represent things Terraform will add, - marks things it will remove, and ~ marks things it will change in place. Some changes force a replacement of a resource: for example, if we change the AMI of an EC2 instance, Terraform will inform you that it needs to destroy the instance and recreate it in order to apply the change.
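Such a replacement shows up in the plan with a combined -/+ marker. An illustrative snippet (not output from this project, and the new AMI ID is made up) looks roughly like:

```
  # aws_instance.dev-instance must be replaced
-/+ resource "aws_instance" "dev-instance" {
      ~ ami = "ami-04902260ca3d33422" -> "ami-0123456789abcdef0" # forces replacement
      ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```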
Let’s apply the changes
Run terraform apply:
```
[jayson@RyterINC dev-instance] terraform apply

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.dev-instance will be created
  + resource "aws_instance" "dev-instance" {
      + ami = "ami-04902260ca3d33422"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "t2.micro"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = (known after apply)
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + source_dest_check = true
      + subnet_id = (known after apply)
      + tags = {
          + "Name" = "dev-instance"
        }
      + tags_all = {
          + "Name" = "dev-instance"
        }
      + tenancy = (known after apply)
      + user_data = (known after apply)
      + user_data_base64 = (known after apply)
      + vpc_security_group_ids = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name = (known after apply)
          + no_device = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index = (known after apply)
          + network_interface_id = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name = (known after apply)
          + encrypted = (known after apply)
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = (known after apply)
          + volume_type = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.dev-instance: Creating...
aws_instance.dev-instance: Still creating... [10s elapsed]
aws_instance.dev-instance: Still creating... [20s elapsed]
aws_instance.dev-instance: Still creating... [30s elapsed]
aws_instance.dev-instance: Still creating... [40s elapsed]
aws_instance.dev-instance: Creation complete after 46s [id=i-02dff0a1074a2b297]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Terraform will ask us if we are sure about our life choices. We reply with ‘yes’, and Terraform makes the changes defined in the plan. Terraform will only apply the changes if “yes” is provided.
At this point, if your plan applied successfully like the above, you can find the instance in your AWS account under EC2 > Instances, and your root volume under EC2 > Volumes.
Destroy it and rebuild
What we have technically fits the requirements, but let’s say we want to create a custom volume that can house important application data. Let’s do a terraform destroy to wipe our slate clean.
```
[jayson@RyterINC dev-instance] terraform destroy
aws_instance.dev-instance: Refreshing state... [id=i-02dff0a1074a2b297]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.dev-instance will be destroyed
  - resource "aws_instance" "dev-instance" {
      - ami = "ami-04902260ca3d33422" -> null
      - arn = "arn:aws:ec2:us-east-1:678775474392:instance/i-02dff0a1074a2b297" -> null
      - associate_public_ip_address = true -> null
      - availability_zone = "us-east-1c" -> null
      - cpu_core_count = 1 -> null
      - cpu_threads_per_core = 1 -> null
      - disable_api_termination = false -> null
      - ebs_optimized = false -> null
      - get_password_data = false -> null
      - hibernation = false -> null
      - id = "i-02dff0a1074a2b297" -> null
      - instance_initiated_shutdown_behavior = "stop" -> null
      - instance_state = "running" -> null
      - instance_type = "t2.micro" -> null
      - ipv6_address_count = 0 -> null
      - ipv6_addresses = [] -> null
      - monitoring = false -> null
      - primary_network_interface_id = "eni-07672602c8ac0a24d" -> null
      - private_dns = "ip-172-31-60-123.ec2.internal" -> null
      - private_ip = "172.31.60.123" -> null
      - public_dns = "ec2-35-171-17-235.compute-1.amazonaws.com" -> null
      - public_ip = "35.171.17.235" -> null
      - secondary_private_ips = [] -> null
      - security_groups = [
          - "default",
        ] -> null
      - source_dest_check = true -> null
      - subnet_id = "subnet-23d69a08" -> null
      - tags = {
          - "Name" = "dev-instance"
        } -> null
      - tags_all = {
          - "Name" = "dev-instance"
        } -> null
      - tenancy = "default" -> null
      - vpc_security_group_ids = [
          - "sg-faeb949d",
        ] -> null

      - capacity_reservation_specification {
          - capacity_reservation_preference = "open" -> null
        }

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - enclave_options {
          - enabled = false -> null
        }

      - metadata_options {
          - http_endpoint = "enabled" -> null
          - http_put_response_hop_limit = 1 -> null
          - http_tokens = "optional" -> null
        }

      - root_block_device {
          - delete_on_termination = true -> null
          - device_name = "/dev/xvda" -> null
          - encrypted = false -> null
          - iops = 100 -> null
          - tags = {} -> null
          - throughput = 0 -> null
          - volume_id = "vol-097545a12b78b3504" -> null
          - volume_size = 8 -> null
          - volume_type = "gp2" -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.dev-instance: Destroying... [id=i-02dff0a1074a2b297]
aws_instance.dev-instance: Still destroying... [id=i-02dff0a1074a2b297, 10s elapsed]
aws_instance.dev-instance: Still destroying... [id=i-02dff0a1074a2b297, 20s elapsed]
aws_instance.dev-instance: Still destroying... [id=i-02dff0a1074a2b297, 30s elapsed]
aws_instance.dev-instance: Still destroying... [id=i-02dff0a1074a2b297, 40s elapsed]
aws_instance.dev-instance: Destruction complete after 41s

Destroy complete! Resources: 1 destroyed.
```
Adding a volume
We are going to edit our main.tf file and add an EBS volume:
```hcl
resource "aws_instance" "dev-instance" {
  ami               = "ami-04902260ca3d33422" # us-east-1
  availability_zone = "us-east-1c"
  instance_type     = "t2.micro"

  tags = {
    Name = "dev-instance"
  }
}

resource "aws_ebs_volume" "dev-instance-volume" {
  availability_zone = "us-east-1c"
  size              = 8
  type              = "gp2"

  tags = {
    Name = "dev-instance-volume"
  }
}

resource "aws_volume_attachment" "dev-instance-attach" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.dev-instance-volume.id
  instance_id = aws_instance.dev-instance.id
}
```
We have defined a new 8 GB volume that we call “dev-instance-volume”. We ensure the AZ for both the instance and the volume is the same, since an EC2 instance can’t attach a volume that lives in a different AZ. We also create a third resource, aws_volume_attachment, which references the IDs of both the instance and the volume. We should now be able to run terraform apply to create our new instance and custom volume.
Try it this time with terraform plan && terraform apply, which initiates the apply command only if the plan exited without errors.
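A variation on that chain is to save the plan to a file and apply the saved plan, which guarantees the apply executes exactly what was reviewed:

```
# Save the reviewed plan, then apply that exact plan file
terraform plan -out=tfplan && terraform apply tfplan
```

Note that applying a saved plan file skips the interactive “yes” prompt, since the plan itself is the approval.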
The above command should have executed without errors and created your dev instance with two volumes: the default root volume, and a second volume named according to our tags (dev-instance-volume).
Let’s apply a Name tag to our root volume so it’s obvious in the console that it belongs to our dev-instance. Edit the aws_instance resource in your main.tf so it looks like the following:
```hcl
resource "aws_instance" "dev-instance" {
  ami               = "ami-04902260ca3d33422" # us-east-1
  availability_zone = "us-east-1c"
  instance_type     = "t2.micro"

  root_block_device {
    tags = {
      Name = "dev-instance-root"
    }
  }

  tags = {
    Name = "dev-instance"
  }
}
```
Now let’s execute terraform plan and see what happens:
```
[jayson@RyterINC dev-instance] terraform plan
aws_ebs_volume.dev-instance-volume: Refreshing state... [id=vol-029bb88343f468f95]
aws_instance.dev-instance: Refreshing state... [id=i-02ff7c54d84d48e29]
aws_volume_attachment.dev-instance-attach: Refreshing state... [id=vai-2478939652]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_instance.dev-instance has been changed
  ~ resource "aws_instance" "dev-instance" {
        id = "i-02ff7c54d84d48e29"
        tags = {
            "Name" = "dev-instance"
        }
        # (28 unchanged attributes hidden)

      + ebs_block_device {
          + delete_on_termination = false
          + device_name = "/dev/sdh"
          + encrypted = false
          + iops = 100
          + tags = {
              + "Name" = "dev-instance-volume"
            }
          + throughput = 0
          + volume_id = "vol-029bb88343f468f95"
          + volume_size = 8
          + volume_type = "gp2"
        }

        # (5 unchanged blocks hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.

──────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_instance.dev-instance will be updated in-place
  ~ resource "aws_instance" "dev-instance" {
        id = "i-02ff7c54d84d48e29"
        tags = {
            "Name" = "dev-instance"
        }
        # (28 unchanged attributes hidden)

      ~ root_block_device {
          ~ tags = {
              + "Name" = "dev-instance-root"
            }
            # (8 unchanged attributes hidden)
        }

        # (5 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
As it turns out, Terraform will just apply an in-place change rather than destroy the volume in order to change it. This is because a tag is a non-destructive change and can be applied without a reboot or a recreate. Apply the change with terraform apply and observe the new name in the AWS console.
Once you are done, you should destroy the resources with a final terraform destroy to ensure you won’t incur charges.
In Conclusion
This is a very simple use case for Terraform, and it can be made cleaner by adding things like modules, automatic volume mounting through user data, and more options on the EC2 instance as needed. I encourage you to keep adding on to this project and exploring more Terraform resources and options. As requirements evolve, your Terraform needs and standards will evolve too. This configuration is fine for a lab, but in a professional setting we would need to make it more scalable as stricter requirements are defined.
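As one example of the user-data idea, mounting the attached volume at boot could be sketched like this. Treat the device name as an assumption: the attachment requests /dev/sdh, which typically surfaces as /dev/xvdh on t2 instances running Amazon Linux, and as an NVMe device name on Nitro-based instance types.

```hcl
resource "aws_instance" "dev-instance" {
  ami               = "ami-04902260ca3d33422" # us-east-1
  availability_zone = "us-east-1c"
  instance_type     = "t2.micro"

  # Sketch: format (first boot only) and mount the attached EBS volume.
  # /dev/xvdh is how /dev/sdh usually appears in the guest OS on t2 instances;
  # verify the device name for your AMI and instance type.
  user_data = <<-EOF
    #!/bin/bash
    # Make a filesystem only if the device doesn't already have one
    blkid /dev/xvdh || mkfs -t ext4 /dev/xvdh
    mkdir -p /data
    mount /dev/xvdh /data
  EOF

  tags = {
    Name = "dev-instance"
  }
}
```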
We can easily create and destroy resources using Terraform, and we have naturally documented our infrastructure in the process. We can commit these files to git to edit and track changes, and we were able to destroy and recreate our infrastructure very quickly, ensuring cleanup and creation were consistent every time. That consistency is the real power of Terraform, and it isn’t possible with manual human actions in a GUI.
I hope this was useful to you, and I will be writing more advanced Terraform posts so be on the lookout!