Deploying a VPC in Terraform with State Stored in S3 & Running a CI/CD Pipeline with Jenkins
The purpose of today’s article is to show how multiple tools can come together to build infrastructure whose state is reliably stored and whose deployment is continuous. You can read some related articles here and here.
The sequence I followed was: first, create the VPC with CIDR 10.0.0.0/16, two public subnets with CIDRs 10.0.1.0/24 and 10.0.2.0/24, and an Auto Scaling group of Amazon Linux 2 instances (t3.nano) with a minimum of 2 and a maximum of 3 instances. Then I created an AWS S3 bucket to store the Terraform state, and finally I used Jenkins to create a CI/CD pipeline that tests and deploys the infrastructure to AWS.
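The architecture described above can be sketched in HCL. This is an illustrative sketch, not the repo’s actual code: the resource names, availability zones, and AMI lookup are my own assumptions.

```hcl
# Looks up the latest Amazon Linux 2 AMI (filter pattern is the standard AL2 naming).
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a" # hypothetical AZ
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_b" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b" # hypothetical AZ
  map_public_ip_on_launch = true
}

resource "aws_launch_template" "web" {
  image_id      = data.aws_ami.amazon_linux_2.id
  instance_type = "t3.nano"
}

# ASG spanning both public subnets, min 2 / max 3 as described above.
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 3
  vpc_zone_identifier = [aws_subnet.public_a.id, aws_subnet.public_b.id]
  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
```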
For this exercise, I needed:
PART 1: GitHub Repository
My first objective was to ensure that my Terraform files were made, and that they properly reflected the desired architecture. In Image 1, you can see the repo from my GitHub.
Note that if you are performing this exercise on your own, you’ll need to replace my GitHub repo in Line 24 of the Jenkinsfile with your unique URL. Image 2 shows the line to which I’m referring.
Also, if you are performing this on your own, you can clone my repo like so (Image 3).
PART 2: TERRAFORM & AWS S3
State files can contain inherently sensitive information, so best practice is to store them remotely. Not only is this more secure, but remote state also helps with collaboration across teams. We also want to be mindful of locking state: if state is left unlocked, two team members can provision resources at the same time and corrupt it. To avoid this potential headache, common backend options handle locking automatically. However, AWS S3 does not lock state automatically. In this exercise I will keep state unlocked, because I am working on this alone, but if you are in a collaborative environment, refer to this Terraform documentation for assistance in locking state on S3.
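For reference, the commonly documented way to add locking to an S3 backend is a DynamoDB table. A minimal sketch, assuming hypothetical bucket and table names:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # hypothetical, must be globally unique
    key    = "vpc-project/terraform.tfstate"
    region = "us-east-1"

    # Enables state locking; the table needs a partition key
    # named "LockID" of type String.
    dynamodb_table = "terraform-locks"     # hypothetical table name
  }
}
```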
So, after establishing my GitHub repo, it was then time to set up my S3 backend. To do this, I first created my bucket with the AWS CLI and confirmed its existence (Image 4).
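The CLI steps looked roughly like the following sketch. The bucket name is a placeholder (S3 bucket names must be globally unique), and the versioning step is an extra safety net I’d suggest rather than something shown in the images:

```shell
# Create the state bucket (name is a placeholder).
aws s3 mb s3://my-terraform-state-bucket --region us-east-1

# Versioning gives you a recovery path if state is ever overwritten.
aws s3api put-bucket-versioning --bucket my-terraform-state-bucket \
  --versioning-configuration Status=Enabled

# Confirm the bucket exists.
aws s3 ls | grep my-terraform-state-bucket
```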
I then verified that in backend.tf the name of the bucket was accurately reflected, which it was (Image 5).
I then changed my directory into the project and ran terraform init to initialize modules, providers, the backend, etc. (Image 6).
Note in Image 6 that I ‘Successfully configured the backend “s3”.’ This doesn’t mean the state object has been created in the bucket yet; it means the backend is initialized so that state will be written there when we run terraform apply.
I then ran the following, as seen in Image 7: terraform fmt -recursive, terraform validate, and terraform plan.
I then ran terraform apply (Image 8).
I wanted to confirm that our backend was created with state stored as an object in my S3 bucket. To do so, I checked both in the AWS Console (Image 9), and in my Terminal (Image 10). Both confirmed state was stored.
I then went into the console to grab one of the IP addresses of the instances we created (Image 11), and I confirmed that the instance was working by entering the address into my browser which yielded the confirmed result as seen in Image 12.
I then ran a terraform destroy to clear the VPC from my AWS account.
PART 3: PREPARING TO RUN PIPELINE
It was now time to get Jenkins involved in our effort to run a CI/CD pipeline. Jenkins is a free, open-source automation server that helps manage software development processes like building, testing, and deploying, facilitating continuous integration and continuous delivery.
First I needed to create a Jenkins Server, which I elected to do on the AWS Marketplace, where we can also grab our AMI (Image 13). Note that this will incur costs.
I then went on to configure the software (Image 14).
I then went on to launch the software (Image 15). Note it is important to select a Security Group that has SSH, HTTP, and HTTPS configured (not pictured).
I then saw the deployment confirmation (Image 16).
I also went to my AWS Console and confirmed the EC2 instance was running (Image 17).
I then needed to get the system log, found in the Actions dropdown under “Monitor & Troubleshoot” (Image 18).
In the log I identified my credentials (Image 19).
I then entered the IP address of the EC2 instance in my browser, which brought me to the Jenkins sign-in page (Image 20). I proceeded to sign in using the credentials from Image 19.
Once I was signed in, it was time to configure Terraform in Jenkins. To perform this, I navigated to Manage Jenkins > Manage Plugins (Image 21).
I then searched for Terraform and installed the Terraform plugin without a restart (Image 22).
I then navigated to Manage Jenkins > Global Tool Configuration, and then scrolled down to Terraform to Add Terraform but deselected “install automatically” (Image 23).
You’ll notice the install directory isn’t populated — keep it that way for now.
Next I needed to SSH into the Jenkins server in order to install Terraform on it; the connection showed as successful in my Terminal (Image 24).
I then went to Terraform’s website, and right-clicked on the CLI for 64-bit Linux, and chose to Copy Link Address (Image 25).
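If you’d rather skip the right-click, HashiCorp’s release URLs follow a predictable pattern. A minimal sketch — the version number here is an assumption; substitute whichever release you want:

```shell
# Build the download URL from a version number (version is an assumption).
TF_VERSION="1.5.7"
TF_URL="https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
echo "$TF_URL"
```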
Next I needed to install the CLI & ensure it was in the correct directory /usr/bin. You’ll see in Image 26 that I achieved this through the following commands:
- Retrieved the download content with: wget [previously copied URL from Terraform.io]
- Ensured the file was on the server with: ls
- Unzipped the file with: unzip [file name]
- Moved the file to /usr/bin with: sudo mv terraform /usr/bin
- Confirmed this series of tasks was successful and that the file was in /usr/bin with: which terraform
I was now ready to circle back and populate the install directory in Jenkins (Image 27).
Now it was time to manage my AWS Credentials on Jenkins. For this, I navigated to Security > Manage Credentials > Jenkins (Image 28).
In “Store=Jenkins” I clicked “Global Credentials (unrestricted),” and then I populated my AWS Access Key ID & Secret Access Key. Make sure to select “Secret text” for “Kind.” You can see they were populated in Image 29.
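Inside a Jenkinsfile, credentials stored this way are typically pulled in with the credentials() helper. A sketch, assuming the credential IDs were named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (use whatever IDs you actually gave them):

```groovy
// Exposes the stored secrets to every stage as environment variables,
// which the AWS provider picks up automatically.
environment {
    AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')      // hypothetical ID
    AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')  // hypothetical ID
}
```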
I was now ready to configure the Jenkins Pipeline. To do so, I navigated to Dashboard > New Item, and then chose Pipeline (Image 30).
Now it was time for the Pipeline to call upon my GitHub repo, which I did by selecting “Pipeline Script from SCM,” choosing Git as the SCM, populating my GitHub URL, and ensuring that the branch was my primary branch (Image 31). I then clicked Save.
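To give a sense of what the pipeline executes, here is a minimal declarative-pipeline sketch of the flow this article walks through. The stage names, shell steps, and the destroy parameter are assumptions about the repo’s Jenkinsfile, not a copy of it (credentials wiring is omitted for brevity):

```groovy
pipeline {
    agent any
    parameters {
        // Lets one pipeline both deploy and tear down.
        booleanParam(name: 'destroy', defaultValue: false,
                     description: 'Destroy the infrastructure instead of deploying it')
    }
    stages {
        stage('Checkout') {
            steps { checkout scm } // pulls the repo configured under "Pipeline Script from SCM"
        }
        stage('Plan') {
            when { expression { !params.destroy } }
            steps { sh 'terraform init -input=false && terraform plan -out=tfplan' }
        }
        stage('Approval') {
            when { expression { !params.destroy } }
            steps { input message: 'Apply this plan?' } // manual gate before apply
        }
        stage('Apply') {
            when { expression { !params.destroy } }
            steps { sh 'terraform apply -input=false tfplan' }
        }
        stage('Destroy') {
            when { expression { params.destroy } }
            steps { sh 'terraform init -input=false && terraform destroy -auto-approve' }
        }
    }
}
```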
I then navigated back to the Dashboard and confirmed that my Pipeline was there (Image 32).
PART 4: RUNNING PIPELINE & TROUBLESHOOTING
It was now time to run my Jenkins pipeline. To do this, I navigated to my pipeline and clicked Build Now. However, I then ran into an issue similar to one I hit when working with GitLab as a CI/CD tool: the primary branch being named “master” instead of “main” was causing the Console Output to report a failure (Image 33).
I therefore changed the remote reference to refs/heads/main in the Configurations. This got my pipeline to run further along before hitting another error; this time the error was that my Jenkinsfile in GitHub couldn’t be identified (Image 34).
I realized that in GitHub my Jenkinsfile needed to be named “Jenkinsfile” instead of “jenkinsfile.” I renamed the file in GitHub and re-ran the pipeline. This time we got even further into the run, but again received an error (Image 35).
It still appeared that Jenkins was having trouble identifying my repo. This was terribly confusing, and it took me roughly another 30 attempts before I got past checkout and identified the root problem.
Here was the problem and solution: I noticed in the logs that although I had configured Jenkins to seek the “main” branch, it was still checking out “master” (Image 36).
I therefore renamed the branch in GitHub to “master” and reverted the Jenkins configuration back to “master.” Sure enough, this worked, and my build got beyond checkout (Image 37).
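If you prefer fixing a main/master mismatch from a local clone rather than the GitHub UI, git can rename a branch directly. A sketch using a throwaway demo repo (the repo and commit here are purely illustrative):

```shell
# Set up a disposable repo to demonstrate the rename.
git init -q branch-demo && cd branch-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "init"

# Rename the current branch to match what Jenkins is checking out.
git branch -m master
git branch --show-current
```

After renaming locally you would push with `git push -u origin master` to update the remote.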
I then gave the approval to apply the plan (Image 38).
The apply was then complete (Image 39).
I confirmed in my AWS Console that the 2 new EC2 instances were created (Image 40).
I confirmed with the public IP addresses that the websites were working, and they were (Image 41).
Last, I confirmed that the ASG was working by terminating an instance, and sure enough a new one was created (Image 42).
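The same self-healing check can be driven from the CLI. In this sketch, the instance ID and ASG name are placeholders for values from my environment:

```shell
# Terminate one instance without lowering desired capacity,
# so the ASG must launch a replacement.
aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id i-0123456789abcdef0 \
  --no-should-decrement-desired-capacity

# Watch the replacement come up.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-web-asg \
  --query "AutoScalingGroups[0].Instances[].LifecycleState"
```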
If you made it to this point and are still following along, you can proceed to destroy your resources. You’ll recall that the pipeline includes a parameter for exactly this.
Image 43 shows my destroy run; you can see that the flow proceeded from checkout straight to destroy, and Jenkins correctly skipped the Plan, Approval, and Apply stages rather than recreating resources.
Thanks for reading.
Credits: This project was recommended to me by my coaches at Level Up In Tech.