Deploying a simple web server using Terraform & AWS

What's up, everybody! Today I'm going to be doing a quick & hopefully easy walkthrough on how to deploy a simple nginx web server on AWS using Terraform. It will cover a proper setup for our EC2 instance, including all the necessary network components.

What is terraform?

For complete beginners, Terraform is an open-source infrastructure-as-code software tool. It allows users to define and provision infrastructure resources, such as virtual machines, storage accounts, and network interfaces, in a declarative manner, using a domain-specific language (DSL).

With Terraform, users can define their infrastructure as a set of configuration files, called Terraform scripts, which describe the desired state of their infrastructure. Terraform then automatically creates or modifies the infrastructure to match the defined state, using APIs provided by the underlying infrastructure providers.

Terraform is a very powerful tool for creating & modifying infrastructure. Whether you need to change a configuration, add a new instance or edit existing ones, Terraform makes it super easy to do so.

Head over to the official Terraform website to download and install it from here
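
Once it's installed, you can sanity-check the setup from your terminal:

terraform -version
# prints the installed Terraform version
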

What will we be doing?

After installing Terraform, we'll need to create an AWS account. If you don't have one already, head over to AWS and create one.

We'll be going through the following:

  1. Adding AWS as a provider for our terraform scripts.

  2. Creating the AWS resources necessary to get everything ready.

  3. Applying the resources and trying to access our web server!

So without further ado, let's get started.

Adding AWS as a provider

In Terraform, we can use different providers; to get a better understanding of what exactly providers are, head over here. For example, AWS & GCP are considered Terraform providers.

Since we'll be using AWS, let's do the following:

  1. Create an empty directory with a file named provider.tf

  2. We're going to paste the following into the file

     terraform {
       required_providers {
         aws = {
           source  = "hashicorp/aws"
           version = "~> 4.0"
         }
       }
     }
     provider "aws" {
       access_key = "access-key"
       secret_key = "secret-key"
       region = "eu-central-1"
     }
    

     The first part adds AWS as our provider. The second one configures our AWS provider by giving it the access key, secret key & region. If you don't know how to get these in your AWS account, follow the steps here. (There's a short note on keeping these keys out of your code right after this list.)

  3. Once you've finished this step, type terraform init in your terminal and hit enter. It will start fetching the specified provider.
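
A quick note before moving on: hardcoding the access & secret keys in provider.tf is fine for a throwaway experiment, but the AWS provider can also read them from environment variables, which keeps secrets out of your code (and out of version control). A minimal sketch of that approach:

export AWS_ACCESS_KEY_ID="access-key"
export AWS_SECRET_ACCESS_KEY="secret-key"
export AWS_DEFAULT_REGION="eu-central-1"

With those set, the provider block no longer needs the credentials:

provider "aws" {
  # access_key & secret_key are picked up from the environment
  region = "eu-central-1"
}
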

Creating the AWS resources

After that, we'll need to create our resources, making sure the network is configured properly before deploying any EC2 instance.

What we'll be doing is kind of overkill and is just for learning purposes.

We'll be doing the following:

  1. Creating an AWS VPC (Virtual Private Cloud)

  2. Creating a public subnet inside our VPC

  3. Adding an Internet Gateway for our VPC which will help give us public internet access from inside the VPC

  4. Adding a custom route table for our subnet

  5. Adding a Security Group for our VPC to control the network traffic

  6. Adding a network interface for our to-be-created EC2 Instance

  7. Providing the network interface with a public static IP (AWS Elastic IP)

  8. Finally creating the AWS EC2 Instance inside the subnet created and installing nginx inside of it

To create our VPC & subnet, we'll create a file called network.tf and paste the following:

resource "aws_vpc" "production-vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "production-subnet-1" {
  vpc_id     = aws_vpc.production-vpc.id
  cidr_block = "10.0.1.0/24"
  availability_zone = "eu-central-1a"
}

The first resource is our VPC, which we'll name production-vpc, and we give it a cidr_block. CIDR stands for Classless Inter-Domain Routing, which is a method of assigning IP addresses that improves the efficiency of address distribution and replaces the previous system based on Class A, Class B and Class C networks.

It's the VPC's network IP range: the /16 means the first 16 bits represent the network while the last 16 bits represent the hosts. It corresponds to a subnet mask of 255.255.0.0.

The second block creates our subnet. We give it our VPC ID, an availability zone (every region has several availability zones) and a cidr_block of its own to specify the subnet's IP range. 10.0.1.0/24 corresponds to having the first 24 bits identify the network whilst the last 8 bits identify the host.
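
As an aside, if you'd rather not work out subnet ranges by hand, Terraform has a built-in cidrsubnet function that carves a smaller block out of a bigger one. A small sketch (the locals names are just for illustration):

locals {
  vpc_cidr = "10.0.0.0/16"

  # Add 8 bits to the /16 prefix (making it a /24) and take block number 1
  subnet_1_cidr = cidrsubnet(local.vpc_cidr, 8, 1) # => "10.0.1.0/24"
}
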

To create our Internet Gateway & custom route table, we'll create another file called routing.tf and paste the following:

resource "aws_internet_gateway" "production-ig" {
  vpc_id = aws_vpc.production-vpc.id
}

resource "aws_route_table" "production-subnet-1-route-table" {
  vpc_id = aws_vpc.production-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.production-ig.id
  }
  route {
    ipv6_cidr_block        = "::/0"
    gateway_id = aws_internet_gateway.production-ig.id
  }
}

resource "aws_route_table_association" "production-subnet-1-association-1" {
  subnet_id      = aws_subnet.production-subnet-1.id
  route_table_id = aws_route_table.production-subnet-1-route-table.id
}

The first resource is our Internet Gateway. It takes in the VPC id.

What route tables do is answer the question: what is the next destination for these network packets? You can redirect packets to different networks by specifying the destination IP address in the route table. Strictly speaking, we could have reused the VPC's main route table and added the internet route there, but creating a custom route table for our subnet is cleaner and more explicit.

So in the first route block, we specify that any IP Address in the given cidr_block range gets routed to our internet gateway. 0.0.0.0/0 means any IP Address.

The second route block is the same but for IPv6 addresses; the first one was for IPv4.

Now, after creating the route table, we need to tell AWS that we want our subnet to use it. This is done using a route table association, where we specify the subnet ID as well as the route table ID.

To create the security group, we'll create a file called security-group.tf and paste the following:

resource "aws_security_group" "production-security-group" {
  name        = "allow_all"
  description = "Allow All Traffic"
  vpc_id      = aws_vpc.production-vpc.id

  ingress {
    description      = "HTTPS"
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
  }

  ingress {
    description      = "HTTP"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
    # ALLOW ALL
  }
}

What we did was create a security group resource, and give it the VPC ID and a couple of ingress & egress blocks.

The ingress blocks are responsible for traffic coming in to whatever the security group is attached to (in our case the instance's network interface); they control the inflow of traffic, whilst the egress blocks control the outflow.

In the ingress blocks, we specify ports 443 & 80 for HTTPS & HTTP traffic, along with cidr_blocks governing what IPs are allowed through. Since it's a web server, we allow every possible IP address. We also specify the protocol, which is the Transmission Control Protocol (TCP).

The egress block opens up traffic outflow entirely; it places no restriction on any outgoing traffic.
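
More rules can be added the same way. For example, if you later want to SSH into the instance, an extra ingress block inside the same aws_security_group resource would look like the sketch below (not needed for this tutorial, and ideally you'd restrict cidr_blocks to your own IP):

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # better: your own IP, e.g. "x.x.x.x/32"
  }
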

Now, to create the EC2 instance, we'll create a new file called server.tf and paste the following:

resource "aws_network_interface" "production-ec2-1-NI" {
  subnet_id       = aws_subnet.production-subnet-1.id
  private_ips     = ["10.0.1.50"]
  security_groups = [aws_security_group.production-security-group.id]
}

resource "aws_eip" "production-eip" {
  vpc                       = true
  network_interface         =  aws_network_interface.production-ec2-1-NI.id
  associate_with_private_ip = "10.0.1.50"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  availability_zone = "eu-central-1a"
  network_interface {
    network_interface_id = aws_network_interface.production-ec2-1-NI.id
    device_index = 0
  }
  user_data = <<-EOF
              #!/bin/bash
              sudo apt update -y
              sudo apt install nginx -y
              sudo systemctl start nginx
              EOF
}

output "public-ip" {
  value = aws_eip.production-eip.public_ip
}

We start by creating the network interface, giving it our subnet ID & security group and specifying a private IP for it: 10.0.1.50.

After that, we create our Elastic IP, which takes in our network interface ID, a flag specifying that it lives inside a VPC & the private IP we chose above.

Now all that's left is to create our AWS EC2 instance. The data part in the code above is called a Terraform data source. Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.

What it does is fetch the AMI (Amazon Machine Image) that matches the filters specified (Ubuntu 20.04 with a virtualization type of "HVM"). More information on virtualization types here.

Every EC2 instance needs an AMI, so we start by specifying the AMI we just got from the data source. We also specify the instance type as "t2.micro" which is just for experimentation as it's very limited resource-wise.

We follow up by adding the availability zone we want our machine to be in, followed by the network interface we just created, which automatically puts the instance inside the subnet we created.

The device_index = 0 part means this network interface is attached as the instance's first (primary) device; an instance can have multiple network interfaces attached at different device indexes.

Finally, we add a user_data script, which runs the specified set of instructions when the machine boots up for the first time.

We install nginx & start the service.

And we use the output directive to print out the Public IP of our EC2 Machine.
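
You can add as many outputs as you like in the same way. For example, one that prints the AMI ID the data source resolved to (the output name here is just for illustration):

output "resolved-ami" {
  value = data.aws_ami.ubuntu.id
}
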

After finishing everything, we can apply using terraform apply. It will show us all the changes that will get created and ask us to approve them by typing "yes". After applying, wait until the EC2 instance gets created (you can check from the AWS Console), then visit the public IP and you should see the Welcome to nginx page!
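
Roughly, the full sequence from the terminal looks like this (assuming the apply goes through cleanly):

terraform apply
# type "yes" when prompted, then wait a minute or two for the instance to boot
curl http://$(terraform output -raw public-ip)
# or just open the printed IP in your browser
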

All the resources in the code are documented by Terraform in the provider documentation. If you want, for example, a VPC resource, just Google "terraform VPC resource" and it will probably be the first link on the page.

When you're done, don't forget to destroy all the resources created! You can do that easily by typing terraform destroy, and everything will be torn down.

Summary

Terraform is a very powerful tool: if we were to edit any of the resources we created, we'd just make the change and apply it, and Terraform would only update the resource that changed. I hope you got something out of this small tutorial & till the next one!
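
For example, bumping the instance type would be a one-line edit in server.tf (shown here as a sketch); running terraform apply again would plan an update for this one resource only (note the instance gets stopped and restarted for this kind of change):

resource "aws_instance" "web" {
  ami               = data.aws_ami.ubuntu.id
  instance_type     = "t3.micro" # was "t2.micro"
  availability_zone = "eu-central-1a"
  # ... everything else stays the same
}
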

Resources

  1. https://docs.aws.amazon.com/index.html

  2. https://developer.hashicorp.com/terraform/language/resources

  3. https://spacelift.io/blog/terraform-aws-vpc
