Deploying a VPC in Terraform with State Stored in S3 & Running CI/CD Pipeline with Jenkins

Kevin Czarzasty
9 min read · Aug 12, 2021


The purpose of today’s article is to show how multiple tools can come together to create an architecture that is reliably stored and continuously deployed. You can read some related articles here and here.

The sequence I followed was first to create the VPC with CIDR 10.0.0.0/16, create two public subnets with CIDRs 10.0.1.0/24 and 10.0.2.0/24, and create an Auto Scaling group of Amazon Linux 2 instances (t3.nano) with a minimum of two and a maximum of three instances. Then I created an AWS S3 bucket to store Terraform state, and finally I used Jenkins to create a CI/CD pipeline that tests and deploys the infrastructure on AWS.
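To give a sense of the Terraform behind that architecture, here is a minimal sketch. The resource names, the AMI placeholder, and the exact wiring are my own illustrations; the real files live in the repo shown below and may differ.

```hcl
# Minimal sketch of the core resources (illustrative names; see the repo for the real files).
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = true
}

resource "aws_launch_template" "web" {
  image_id      = "ami-xxxxxxxxxxxxxxxxx" # an Amazon Linux 2 AMI in your region
  instance_type = "t3.nano"
}

resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 3
  vpc_zone_identifier = [aws_subnet.public_1.id, aws_subnet.public_2.id]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
```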

For this exercise, I needed:

  • An AWS account, plus an access key ID and secret access key
  • The AWS CLI and Terraform installed locally
  • A GitHub account and a repo containing the Terraform files and a Jenkinsfile
  • A Jenkins server (created in Part 3)

PART 1: GitHub Repository

My first objective was to ensure that my Terraform files were in place and properly reflected the desired architecture. In Image 1, you can see the repo from my GitHub.

Image 1: Repo

Note that if you are performing this exercise on your own, you’ll need to replace my GitHub repo URL on Line 24 of the Jenkinsfile with your own. Image 2 shows the line to which I’m referring.

Image 2: Line 24 of Jenkinsfile
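For context, here is a minimal sketch of what such a Jenkinsfile might look like. The stage layout, the DESTROY parameter, and the checkout step are my assumptions for illustration (credential wiring is omitted); the actual file in the repo, including whatever sits on line 24, may differ.

```groovy
// Minimal declarative pipeline sketch (illustrative only; the real Jenkinsfile may differ).
pipeline {
    agent any
    parameters {
        booleanParam(name: 'DESTROY', defaultValue: false, description: 'Tear down instead of deploy')
    }
    stages {
        stage('Checkout') {
            steps {
                // This is the kind of line you would point at your own repository.
                git branch: 'master', url: 'https://github.com/<your-username>/<your-repo>.git'
            }
        }
        stage('Plan') {
            when { expression { !params.DESTROY } }
            steps { sh 'terraform init && terraform plan' }
        }
        stage('Approval') {
            when { expression { !params.DESTROY } }
            steps { input 'Apply the plan?' }
        }
        stage('Apply') {
            when { expression { !params.DESTROY } }
            steps { sh 'terraform apply -auto-approve' }
        }
        stage('Destroy') {
            when { expression { params.DESTROY } }
            steps { sh 'terraform init && terraform destroy -auto-approve' }
        }
    }
}
```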

Also, if you are performing this on your own, you can clone my repo like so (Image 3).

Image 3: Git Clone [repo]
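If the image doesn’t render for you, the command is a standard clone; swap in whichever repo URL you are using:

```bash
# Clone the repository (substitute the URL of the repo you are using).
git clone https://github.com/<your-username>/<your-repo>.git
cd <your-repo>
```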

PART 2: TERRAFORM & AWS S3

State files can contain inherently sensitive information, so best practice is to store them remotely. Not only is this more secure, but storing state remotely also helps with collaboration across teams. We also want to be mindful of locking state: if state is never locked, two people running Terraform at the same time can overwrite each other’s changes. To avoid this potential headache, common backend options handle locking automatically. However, the AWS S3 backend does not lock state on its own. In this exercise I will keep state unlocked, because I am working alone, but if you are in a collaborative environment, refer to the Terraform documentation on the S3 backend for assistance in locking state (at the time of writing, that meant pairing the bucket with a DynamoDB table).

After establishing my GitHub repo, it was time to set up my S3 backend. To do this, I first created the bucket with the AWS CLI and confirmed its existence (Image 4).

Image 4: Creating & verifying “aws-terraform-s3-jenkins-project” bucket
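For anyone following along, the bucket can be created and verified with something like the following; the region flag is an assumption:

```bash
# Create the bucket for remote state and confirm it exists (region is illustrative).
aws s3 mb s3://aws-terraform-s3-jenkins-project --region us-east-1
aws s3 ls | grep aws-terraform-s3-jenkins-project
```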

I then verified that the bucket name was accurately reflected in backend.tf, which it was (Image 5).

Image 5: Verifying backend bucket name
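For reference, a backend.tf along these lines would do it. The key and region are my assumptions, and the commented-out dynamodb_table line is what you would add (along with the table itself) if you wanted the state locking discussed above:

```hcl
terraform {
  backend "s3" {
    bucket = "aws-terraform-s3-jenkins-project"
    key    = "terraform.tfstate" # path of the state object; adjust as needed
    region = "us-east-1"         # assumed region

    # dynamodb_table = "terraform-locks" # uncomment (and create the table) to enable locking
  }
}
```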

I then changed my directory into the project and ran terraform init to initialize modules, providers, the backend, etc. (Image 6).

Image 6: Changing Directory & Terraform Init

Note in Image 6 the message ‘Successfully configured the backend “s3”.’ This doesn’t mean the state object has been created in the bucket yet; it means the backend is initialized, and the state file will be written there upon a terraform apply.
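If you’re following along, the equivalent commands are simply the following, run from the repo you cloned earlier:

```bash
# Move into the project directory and initialize providers, modules, and the S3 backend.
cd <project-directory>
terraform init
```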

I then ran the following as seen in Image 7: terraform fmt -recursive, terraform validate, terraform plan.

Image 7: Prepping for apply

I then ran terraform apply (Image 8).

Image 8: Terraform Apply
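For reference, the full command sequence from Images 7 and 8 was:

```bash
terraform fmt -recursive   # format every .tf file in the tree
terraform validate         # check the configuration for syntax and consistency errors
terraform plan             # preview the changes Terraform will make
terraform apply            # provision the VPC, subnets, and ASG (answer 'yes' when prompted)
```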

I wanted to confirm that the backend was working, with state stored as an object in my S3 bucket. To do so, I checked both the AWS Console (Image 9) and my terminal (Image 10). Both confirmed state was stored.

Image 9: Confirming State on S3
Image 10: Confirming State in Terminal
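From the terminal, the check is a simple listing; the object key you see will depend on the key configured in backend.tf:

```bash
# List the bucket contents to confirm the state object was written.
aws s3 ls s3://aws-terraform-s3-jenkins-project --recursive
```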

I then went into the console to grab one of the IP addresses of the instances we created (Image 11), and I confirmed that the instance was working by entering the address into my browser, which returned the expected page (Image 12).

Image 11: Identifying IP Address of one of the instances
Image 12: Confirming the instance was working

I then ran a terraform destroy to clear the VPC from my AWS account.

PART 3: PREPARING TO RUN PIPELINE

It was now time to get Jenkins involved in our effort to run a CI/CD pipeline. Jenkins is a free, open-source automation server that helps manage software development processes like building, testing, and deploying, facilitating continuous integration and continuous delivery.

First I needed to create a Jenkins server, which I elected to do through the AWS Marketplace, where you can also grab the AMI (Image 13). Note that running this instance will incur costs.

Image 13: Selecting Jenkins Packaged by Bitnami on AWS Marketplace

I then went on to configure the software (Image 14).

Image 14: Configuring

I then went on to launch the software (Image 15). Note it is important to select a Security Group that has SSH, HTTP, and HTTPS configured (not pictured).

Image 15: Launching

I then saw the deployment confirmation (Image 16).

Image 16: Deployed

I also went to my AWS Console to confirm the EC2 instance was up (Image 17).

Image 17: EC2 instance confirmed (green box)

I then needed to get the system log, which is found in the instance’s Actions dropdown under “Monitor & Troubleshoot” (Image 18).

Image 18: Get system log

In the log I identified my credentials (Image 19).

Image 19: Credentials

I then entered the public IP address of the EC2 instance into my browser, which brought me to the Jenkins sign-in page (Image 20). I proceeded to sign in using the credentials from Image 19.

Image 20: Jenkins sign in

Once I was signed in, it was time to configure Terraform in Jenkins. To perform this, I navigated to Manage Jenkins > Manage Plugins (Image 21).

Image 21: Manage Plugins in Jenkins

I then searched for Terraform and installed the Terraform plugin without restarting (Image 22).

Image 22: Install Terraform without restart

I then navigated to Manage Jenkins > Global Tool Configuration, scrolled down to Terraform, clicked Add Terraform, and deselected “Install automatically” (Image 23).

Image 23: Add Terraform

You’ll notice the install directory isn’t populated yet; keep it that way for now.

Next I needed to SSH into the Jenkins server in order to install Terraform on it; the connection showed as successful in my terminal (Image 24).

Image 24: ssh bitnami@[public ip] result
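The connection itself looks something like this; the key file name is a placeholder for whichever key pair you launched the Jenkins instance with:

```bash
# SSH into the Bitnami Jenkins instance (key file and IP are placeholders).
ssh -i <your-key-pair>.pem bitnami@<jenkins-public-ip>
```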

I then went to Terraform’s website, right-clicked the CLI download for 64-bit Linux, and chose Copy Link Address (Image 25).

Image 25: Copying the link to download Terraform CLI for Linux 64-bit

Next I needed to install the CLI and ensure it landed in the correct directory, /usr/bin. You’ll see in Image 26 that I achieved this through the following commands (a shell sketch follows the list):

  • Retrieved the download with: wget [previously copied URL from Terraform.io]
  • Ensured the file was on the server with: ls
  • Unzipped the file with: unzip [file name]
  • Moved the binary to /usr/bin with: sudo mv terraform /usr/bin
  • Confirmed this series of tasks was successful and that the binary was in /usr/bin with: which terraform
Image 26: Installing CLI & moving it in the right directory
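Put together, the sequence looks roughly like this. The download URL is a placeholder for whichever release you copied from Terraform’s site, and the zip file name will vary with the version:

```bash
# Download, unzip, and install the Terraform CLI into /usr/bin on the Jenkins server.
wget <copied-terraform-linux-amd64-zip-url>
ls                                  # confirm the zip landed on the server
unzip terraform_*_linux_amd64.zip   # extracts a single 'terraform' binary
sudo mv terraform /usr/bin          # put it on the PATH
which terraform                     # should print /usr/bin/terraform
```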

I was now ready to circle back and populate the install directory in Jenkins (Image 27).

Image 27: Populating install directory

Now it was time to manage my AWS Credentials on Jenkins. For this, I navigated to Security > Manage Credentials > Jenkins (Image 28).

Image 28: Navigating to manage my AWS Credentials

In the “Jenkins” store, I clicked “Global credentials (unrestricted)” and then populated my AWS Access Key ID and Secret Access Key. Make sure to select “Secret text” for “Kind.” You can see they were populated in Image 29.

Image 29: Populated AWS Credentials

I was now ready to configure the Jenkins Pipeline. To do so, I navigated to Dashboard > New Item, and then chose Pipeline (Image 30).

Image 30: Selecting Pipeline in New Item from Dashboard

Now it was time for the pipeline to call upon my GitHub repo, which I did by selecting “Pipeline script from SCM,” choosing Git as the SCM, populating my GitHub URL, and ensuring that the branch was my primary branch (Image 31). I then clicked Save.

Image 31: Advanced Project Options

I then navigated back to the Dashboard and confirmed that my Pipeline was there (Image 32).

Image 32: Confirming my Pipeline was on the Dashboard

PART 4: RUNNING PIPELINE & TROUBLESHOOTING

It was now time to run my Jenkins pipeline. To do this, I opened the pipeline and clicked Build Now. However, I then ran into an issue similar to one I had encountered when working with GitLab as a CI/CD tool: the job’s default branch reference (“master”) did not match my repository’s primary branch (“main”), which caused the Console Output to report a failure (Image 33).

Image 33: Error 1

I therefore changed the remote reference to refs/heads/main in the configuration. This got my pipeline further along before running into another error. This time the error was that my Jenkinsfile in GitHub couldn’t be found (Image 34).

Image 34: Error 2

I realized that my Jenkinsfile in GitHub needed to be named “Jenkinsfile” rather than “jenkinsfile.” I renamed the file in GitHub and re-ran the pipeline. This time the run got even further, but again ended in an error (Image 35).

Image 35: Error 3

It still appeared that Jenkins was having trouble identifying my repo. This was terribly confusing, and it took me another 30 attempts at getting past checkout before I identified the root problem.

Here was the problem and the solution. I noticed in the logs that although I had configured Jenkins to look for the “main” branch, it was still referencing “master” (Image 36).

Image 36: “Master” persisting

I therefore renamed the branch in GitHub to “master” and reverted the Jenkins configuration back to “master.” Sure enough, this worked, and my build got beyond checkout (Image 37).

Image 37: Stage view, passing plan

I then gave the approval to apply the plan (Image 38).

Image 38: Proceed

The apply was then complete (Image 39).

Image 39: Applied

I confirmed in my AWS Console that the 2 new EC2 instances were created (Image 40).

Image 40: Confirming EC2 Instances

Using the public IP addresses, I confirmed that the websites were working (Image 41).

Image 41: Confirming the websites work

Last, I confirmed that the ASG was working by terminating an instance, and sure enough a new one was created (Image 42).

Image 42: Confirmed ASG working

Conclusion

If you made it to this point and are still following along, you can proceed to destroy your resources. You’ll recall that a pipeline parameter was created to do this.

Image 43 shows my destroy run. You can see that the flow proceeded from checkout straight to destroy, and Jenkins correctly avoided reprovisioning resources by skipping the Plan, Approval, and Apply stages.

Image 43: Destroy

Thanks for reading.

Credits: This project was recommended to me by my coaches at Level Up In Tech.
