Integration of AWS and Terraform

Sahana B
4 min read · Oct 13, 2020

Task: Create and launch an application using Terraform

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Create one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Optional

1) Those who are familiar with Jenkins or are in DevOps AL have to integrate Jenkins into this task wherever you feel it can be integrated.

2) Create a snapshot of the EBS volume. The above task should be done using Terraform.

And here is what I have done:

Step 1: Configure my AWS profile on the local system using Command Prompt. Fill in the details, then press Enter.

aws configure --profile sahana
AWS Access Key ID [****************5KMO]:
AWS Secret Access Key [****************VpMS]:
Default region name [ap-south-1]:
Default output format [None]:

Here the access and secret keys are hidden because I had already entered them.

Step 2: Launch an EC2 instance using Terraform. I have used the Amazon Linux 2 AMI. On this instance I installed and configured the webserver using the remote-exec provisioner, and I used the key created earlier. The code for this is:

provider  "aws" {
region = "ap-south-1"
profile = "sahana"
}
resource "aws_instance" "myin" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "sahana12345"
security_groups = [ "launch-wizard-2" ]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/sahanab/Downloads/sahana12345.pem")
host = aws_instance.myin.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
"sudo setenforce 0"
]
}
tags = {
Name = "sahanaos"
}
}
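In my setup the instance reuses the existing key pair sahana12345 and the launch-wizard-2 security group, but step 1 of the task asks to create them, and they can be created with Terraform too. Below is a minimal sketch, assuming the hashicorp/tls provider and illustrative names of my own choosing (webkey, allow_http_ssh); it opens port 80 for the webserver and port 22 so the SSH provisioners can connect.

# Generate an RSA key pair locally (uses the hashicorp/tls provider).
resource "tls_private_key" "webkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half with AWS so EC2 can use it.
resource "aws_key_pair" "webkey" {
  key_name   = "sahana-key"
  public_key = tls_private_key.webkey.public_key_openssh
}

# Security group allowing HTTP (80) for visitors and SSH (22) for the provisioners.
resource "aws_security_group" "websg" {
  name        = "allow_http_ssh"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

With these in place, the instance would reference key_name = aws_key_pair.webkey.key_name and security_groups = [aws_security_group.websg.name] instead of the hard-coded names, and the private key half could be written out with a local_file resource for the provisioners to use.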

Step 3: Create an EBS volume. We need to launch the EBS volume in the same availability zone as the instance created above, because a volume cannot be attached to an instance in a different zone. For this, I retrieved the availability zone of the instance and used it here.

resource "aws_ebs_volume" "sahanavol" {
availability_zone = aws_instance.myin.availability_zone
size = 1
tags = {
Name = "sahanaebs"
}
}

Step 4: Attach the EBS volume to the instance.

resource "aws_volume_attachment"  "ebs_att" {
device_name = "/dev/sdd"
volume_id = "${aws_ebs_volume.ashuvol.id}"
instance_id = "${aws_instance.myin.id}"
force_detach = true
}

I have also retrieved the public IP and stored it in a file on my system.

resource "null_resource" "public_ip"  {
provisioner "local-exec" {
command = "echo ${aws_instance.myin.public_ip} > public_ip.txt"
}
}

Step 5: Mount the EBS volume on the folder /var/www/html.

resource "null_resource" "mount"  {    depends_on = [
aws_volume_attachment.ebsatt,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/sahanab/Downloads/sahana12345.pem")
host = aws_instance.myin.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdd",
"sudo mount /dev/xvdd /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/sahanabalappa/cloudtask1 /var/www/html/"
]
}
}

I have also downloaded all the code and images from GitHub to my local system.

resource "null_resource" "git_copy"  {
provisioner "local-exec" {
command = "git clone https://github.com/sahanabalappa/Integration_Of_Terraform_AND_AWS C:/Users/sahana/Pictures/"
}
}

Step 6: Create an S3 bucket on AWS.

resource "aws_s3_bucket" "sahanabkt" {
bucket = "sahana123"
acl = "private"
tags = {
Name = "sahana1234"
}
}
locals {
s3_origin_id = "myS3Origin"
}

Step 7: With the S3 bucket created, upload an image that was downloaded from GitHub to the local system. I uploaded just one; you can upload more, as sketched after the code below.

resource "aws_s3_bucket_object" "object" {
bucket = "${aws_s3_bucket.sahanabkt.id}"
key = "test_pic"
source = "C:/Users/sahana/Pictures/img1.jpg"
acl = "public-read"
}
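If the repo has several images, they could all be uploaded from a single resource instead of one object per file. A rough sketch, assuming the images were cloned into the local Pictures folder and match a .jpg pattern (both are just examples):

resource "aws_s3_bucket_object" "images" {
  # One object per .jpg found in the local clone; path and pattern are assumptions.
  for_each = fileset("C:/Users/sahana/Pictures", "*.jpg")

  bucket = aws_s3_bucket.sahanabkt.id
  key    = each.value
  source = "C:/Users/sahana/Pictures/${each.value}"
  acl    = "public-read"
}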

Step 8: Create a CloudFront distribution and connect it to the S3 bucket. CloudFront is needed for fast delivery of content from edge locations across the world.

resource "aws_cloudfront_distribution" "sahanafnt" {
origin {
domain_name = "${aws_s3_bucket.sahanabkt.bucket_regional_domain_name}"
origin_id = "${local.s3_origin_id}"
custom_origin_config { http_port = 80
https_port = 80
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
enabled = true
default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values { query_string = false cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}

Now, go to /var/www/html and update the image links with the CloudFront URL.
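This step can also be automated with another null_resource so the substitution happens as soon as the distribution is created. A minimal sketch, assuming the page is index.html and it references the image as img1.jpg (both are assumptions about the repo layout); it rewrites the reference to point at the CloudFront domain and the test_pic object key uploaded earlier:

resource "null_resource" "update_image_url" {
  depends_on = [aws_cloudfront_distribution.sahanafnt, null_resource.mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sahanab/Downloads/sahana12345.pem")
    host        = aws_instance.myin.public_ip
  }

  # Replace the local image path in the deployed page with the CloudFront URL.
  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's|img1.jpg|https://${aws_cloudfront_distribution.sahanafnt.domain_name}/test_pic|g' /var/www/html/index.html"
    ]
  }
}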

Step 9: Write Terraform code to retrieve the instance's public IP and open it in Chrome. This will open the page of the site present in /var/www/html.

resource "null_resource" "local_exec"  {
depends_on = [
null_resource.mount,
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.myin.public_ip}"
}
}

Finally it is done; it will open the home page, and you can see the images that you uploaded in the earlier steps.
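For the optional part of the task, the snapshot of the EBS volume can also be expressed in Terraform. A minimal sketch (the resource and tag names are my own):

# Snapshot of the EBS volume created above (optional task).
resource "aws_ebs_snapshot" "web_snapshot" {
  volume_id = aws_ebs_volume.sahanavol.id

  tags = {
    Name = "sahana-ebs-snapshot"
  }
}

Adding depends_on = [null_resource.mount] would make sure the snapshot is taken only after the web content has been copied onto the volume.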
