Setting Up My First DevOps Pipeline with Terraform & Ansible

My First DevOps Project: Automating Infrastructure with Terraform, Ansible & Jenkins CI/CD Pipeline


Creating and setting up an EC2 Instance on AWS

  • Set the region to ap-south-1 (Mumbai), choose a subnet in it, and launch the instance


Connect a terminal to the instance through SSH

In the EC2 console, open Connect for the instance and use the SSH command shown to connect from your terminal


After connecting, update the system and install Java and Jenkins on it

# Updating system
sudo apt update

# Installing Java — required for the installation of Jenkins
sudo apt install fontconfig openjdk-17-jre

# Install Jenkins
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins

After installation, enable and start Jenkins

# Enable Jenkins
sudo systemctl enable jenkins

# Start Jenkins
sudo systemctl start jenkins

# Check status of Jenkins
sudo systemctl status jenkins

Now open port 8080 in the instance's security group so Jenkins is reachable

Copy the public IP of your EC2 instance and open it in the browser with :8080 appended
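Opening the port can also be done from the CLI instead of the console. A hedged sketch — the security group ID below is a placeholder, not from this project:

```shell
# Allow inbound TCP 8080 from anywhere on the instance's security group
# (sg-0123456789abcdef0 is a placeholder — substitute your own group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```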

Open the terminal and extract the initial admin password

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password and paste it into Jenkins in the browser

Click INSTALL SUGGESTED PLUGINS — it will take some time to install the plugins

Enter your credentials and press enter

AND We have entered into Jenkins 🥳


Now Creating Pipeline for our project

  • Click on NEW ITEM, provide a name, and select Freestyle Project


Make a GitHub repo and paste its link in the Git section of the job configuration

Inside Build Steps, select Execute shell. We're going to write this type of script (just an overview — don't copy and paste it just yet):

#!/bin/bash    

# set -x is debugging mode, printing each command as it runs; -e aborts if anything goes wrong
set -xe

# Move into the folder where your Terraform code is stored
cd terraform

terraform init    # Download providers and initialize the backend
# $TERRAFORM_ACTION is a variable that decides which Terraform command to run.
terraform $TERRAFORM_ACTION -auto-approve    # -auto-approve skips manual confirmation (otherwise, Terraform asks for "yes" input)

# If the action was destroy, there is nothing left to configure, so exit;
# otherwise move to the Ansible directory and run the playbook
if [ "$TERRAFORM_ACTION" = "destroy" ]; then
    exit 0
else
    cd ../ansible
    ansible-playbook -i /opt/ansible/inventory/aws_ec2.yaml apache.yaml     # run the Ansible playbook
fi

Installing Terraform And Ansible into our Instance

  • Installing Terraform

      ## Copy and paste in terminal
    
      sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
    
      # Install the HashiCorp GPG key.
      wget -O- https://apt.releases.hashicorp.com/gpg | \
      gpg --dearmor | \
      sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
    
      # Verify the key's fingerprint.
      gpg --no-default-keyring \
      --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
      --fingerprint
    
      echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
      https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
      sudo tee /etc/apt/sources.list.d/hashicorp.list
    
      sudo apt update
    
      sudo apt-get install terraform
    

Running only the Terraform part in the shell first, for better understanding

#!/bin/bash    

# set -x is debugging mode, printing each command as it runs; -e aborts if anything goes wrong
set -xe

# Move into the folder where your Terraform code is stored
cd terraform

terraform init    # Download providers and initialize the backend
terraform plan    # Gives a detailed preview of what Terraform will do
# $TERRAFORM_ACTION is a variable that decides which Terraform command to run.
terraform $TERRAFORM_ACTION -auto-approve    # -auto-approve skips manual confirmation (otherwise, Terraform asks for "yes" input)

Now we need to create this TERRAFORM_ACTION variable as a build parameter in Jenkins (This project is parameterized → Choice Parameter, with apply and destroy as the choices)
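Since the parameter arrives as a plain string, the shell step can defend itself against a missing or mistyped value before handing it to Terraform. A minimal sketch — the `validate_action` helper is hypothetical, not part of the original job:

```shell
#!/bin/bash
# Hypothetical guard: check the TERRAFORM_ACTION parameter before running terraform.
validate_action() {
  case "$1" in
    apply|destroy) echo "ok" ;;
    *) echo "invalid" ;;
  esac
}

# In the Jenkins shell step you could then abort early:
# [ "$(validate_action "$TERRAFORM_ACTION")" = "ok" ] || exit 1
validate_action "apply"   # → ok
```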


Modifying the terraform folder


Create a terraform folder, cd into it, and run vim main.tf

# Pinning the AWS provider from HashiCorp
terraform {
    required_providers {
        aws = {
            source = "hashicorp/aws"
            version = "~> 5.88.0"
        }
    }
    required_version = ">=1.2.0"
}

# Fetching latest AMI ID from AWS because if we hardcode, that image may become outdated
data "aws_ami" "aws_linux" {
    most_recent = true  # Get the latest image available AMI

    # It tells Terraform to look for an AMI with a name that matches a specific pattern.
    filter {
      name   = "name"
      values = ["amzn2-ami-kernel-5.10-hvm-*-x86_64-gp2"]
    }

    # It filters AMIs based on their virtualization type. "hvm" (Hardware Virtual Machine) is required for modern EC2 instances.
    filter {
      name   = "virtualization-type"
      values = ["hvm"]
    }

}

# Creation of AWS instance
resource "aws_instance" "my_instance" {
    # We can even hardcode value from aws but this is future proof
    ami = data.aws_ami.aws_linux.id    # fetching ami from above filter block
    instance_type = "t2.micro"
    key_name = "Ansible-key"
    tags = {
        Name = "${var.name}-server"
        Environment = "dev"    # using this to store instance id in inventory
    }
}

# Create Bucket Manually from AWS

Creating variable.tf

variable "name" {
    default = ""    # Name of the instance, supplied at runtime (empty by default)
}
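Because the default is empty, the actual instance name is supplied when Terraform runs. A sketch — the value `demo` is just an example:

```shell
# Pass the variable on the command line (or via a TF_VAR_name environment variable)
terraform apply -var="name=demo" -auto-approve
```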

Creating providers.tf

provider "aws" {
    region = "ap-south-1"
}

Creating a remote backend for Terraform so our state is stored in the cloud (in the S3 bucket we created) instead of on the build node

So create backend.tf and link our .tfstate with the bucket we created

terraform {
    backend "s3" {
        bucket = "devops-project-bucket-23"
        key = "terraform/statefile.tfstate"    # defines the file path inside the S3 bucket where Terraform will store the state file.
        region = "ap-south-1"
    }
}
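The backend bucket has to exist before `terraform init` runs. The post creates it from the console; it can equally be done from the CLI:

```shell
# Create the backend bucket in the same region the backend block points at
aws s3 mb s3://devops-project-bucket-23 --region ap-south-1
```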

Running Build in Jenkins for terraform

Getting an error on terraform init

The error occurs because the EC2 instance doesn't have access to AWS credentials


Creating a role in IAM in AWS to grant the instance the permissions it needs

  • Providing the role two permissions

    1. Administrator Access

    2. S3 Access

Go to the Security tab of the instance in EC2

Then update the IAM role there


Downloading AWS CLI on EC2 Instance

sudo apt update && sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

Providing credentials to EC2 using AWS CONFIGURE

# Configure AWS credentials
aws configure
# The prompts look like this:
# AWS Access Key ID [None]: # from the IAM user
# AWS Secret Access Key [None]: # from the IAM user
# Default region name [None]: ap-south-1
# Default output format [None]: # press Enter to skip

Then it throws an error because the S3 bucket doesn't exist yet, so create it manually in AWS


For rebuilding a previous build with the same parameters, install the Rebuild plugin in Jenkins


Now Run the Build in Jenkins AND BOOM🎉, We got our instance

Check for the new instance in the AWS console


Since builds put heavy load on the Jenkins master, create a Jenkins slave (agent) instance

Installing Java in Slave

# Updating system
sudo apt update

# Installing Java — required to run the Jenkins agent
sudo apt install fontconfig openjdk-17-jre

Connecting Jenkins Master With Slave using SSH

Creating A New Node in Jenkins, i.e. Slave

Save after configuration

Linking Our free-style project with agent (label)

AND OUR AGENT GETS CONNECTED WITH OUR MASTER🎉🎉


Jenkins is running build on Slave


Installing Terraform On Slave

## Copy and paste in terminal

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

# Install the HashiCorp GPG key.
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

# Verify the key's fingerprint.
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update

sudo apt-get install terraform

Providing IAM Role to Slave

Now that IAM is configured for Terraform, run the build


Anddd we got SUCCESSS 🎉🎉

Terraform has successfully created the instance and is configured properly on our slave


Now Time for integrating Ansible

Installing Ansible in our slave

sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

Now Installing Dynamic Inventory

  • If EC2 instances stop and start, their IP changes (unless Elastic IP is used).

  • You have to manually update the inventory file every time.

  • This is not scalable for a dynamic cloud environment.

    Solution: Dynamic Inventory!

    • Instead of a fixed list, Ansible can fetch real-time instance details from AWS.

    • This means no need to manually update IPs in the inventory file.

    • Ansible will dynamically get all the required details from AWS automatically.

sudo apt install python3-pip
pip install boto3 botocore  # These libraries allow Ansible to communicate with AWS.
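The `amazon.aws.aws_ec2` inventory plugin ships in the amazon.aws collection. Many Ansible installs bundle it already, but if yours doesn't, it can be added with ansible-galaxy:

```shell
# Install (or upgrade) the collection that provides the aws_ec2 inventory plugin
ansible-galaxy collection install amazon.aws
```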

Making an inventory file inside ansible folder to store data

mkdir -p ~/jenk_terra_ansib_project/ansible/inventory
cd ~/jenk_terra_ansib_project/ansible/inventory

Create a new file aws_ec2.yml inside inventory and add this to it

It will fetch every instance carrying the matching tag (in our case Environment = "dev") and put it in the inventory

plugin: amazon.aws.aws_ec2
regions:
  - ap-south-1  # Change this to your AWS region
filters:
  instance-state-name: running  # Fetch only running instances
  tag:Environment: "dev"

# Group instances by Name
keyed_groups:
  - key: tags.Name  # Group instances based on AWS tags

Now Editing ansible config file

sudo mkdir -p /etc/ansible
sudo nano /etc/ansible/ansible.cfg
[defaults]
inventory = /home/ubuntu/jenk_terra_ansib_project/ansible/inventory/aws_ec2.yml

[inventory]
enable_plugins = aws_ec2

Installing the AWS CLI inside the slave (its absence was causing an error) 🥲

sudo apt update && sudo apt install unzip -y
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

Now doing aws configure in slave and providing all IAM details


An error comes up because I forgot to run the build in Jenkins after updating the tag in main.tf 😭

# Do this after running build to check if inventory is working fine or not
ansible-inventory -i /home/ubuntu/jenk_terra_ansib_project/ansible/inventory/aws_ec2.yml --list -vvv

Creating the ansible_play.yml PLAYBOOK inside the ansible folder

vim ansible_play.yml

If we run the ansible-inventory command above, we can see our host

In the dynamic inventory aws_ec2.yml, the parsed output shows that the EC2 instance is grouped under "aws_ec2"


We need to copy the private key file from local to the server so Ansible can connect to the instance we created

# In local where file is present do SCP
sudo scp -i "Ansible-key.pem" Ansible-key.pem ubuntu@ec2-15-206-195-26.ap-south-1.compute.amazonaws.com:/home/ubuntu
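SSH refuses private keys with loose file permissions, so after copying it's worth tightening the key on the server (the path assumes the key landed in /home/ubuntu as in the scp above):

```shell
# Restrict the key so only the owner can read it; ssh rejects world-readable keys
chmod 400 /home/ubuntu/Ansible-key.pem
```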

Creating ansible_play.yml

- name: Installing Apache Server
  hosts: aws_ec2
  become: yes
  # connecting to the EC2 instance using the SSH key
  remote_user: ec2-user    # default user for AWS EC2 Amazon Linux instances
  vars:
    ansible_ssh_private_key_file: "/home/ubuntu/jenk_terra_ansib_project/ansible/Ansible-key.pem"
  tasks:
    # Installing Apache and PHP
    - name: Install Apache and PHP
      package:
        name:
          - httpd    # Apache package name on Amazon Linux
          - php
        state: present
    # Starting the Apache service
    - name: Start Apache Service
      service:
        name: httpd
        state: started
        enabled: yes
    # Download the index.php file from my GitHub repo
    - name: Download index.php from my github repo to apache
      get_url:
        # The "raw" content URL is used to fetch the file directly from GitHub instead of cloning the whole repository.
        url: "https://raw.githubusercontent.com/dakshsawhneyy/jenk_terra_ansib_project/main/ansible/index.php"
        dest: "/var/www/html/index.php"
    - name: need additional packages (wget) for installing terraform
      package:
        name: wget
        state: latest
    # Now we need to install Terraform on the target;
    # Terraform integrates with Jenkins for automation
    - name: Installing Terraform
      unarchive:
        src: https://releases.hashicorp.com/terraform/0.9.1/terraform_0.9.1_linux_amd64.zip
        dest: /usr/bin
        remote_src: yes

Now adding Ansible part in jenkins configuration

#!/bin/bash    

# set -x is debugging mode, printing each command as it runs; -e aborts if anything goes wrong
set -xe

# Move into the folder where your Terraform code is stored
cd terraform

terraform init    # Download providers and initialize the backend
# $TERRAFORM_ACTION is a variable that decides which Terraform command to run.
terraform $TERRAFORM_ACTION -auto-approve    # runs apply or destroy depending on the parameter

# If the action was destroy, there is nothing left to configure, so exit;
# otherwise move to the Ansible directory and run the playbook
if [ "$TERRAFORM_ACTION" = "destroy" ]; then
    exit 0
else
    cd ../ansible
    ansible-playbook -i inventory/aws_ec2.yml ansible_play.yml     # run the Ansible playbook
fi

Selecting destroy in Jenkins first, then apply, to exercise the Ansible code

  • With destroy, Terraform tears down all existing infrastructure for a fresh start, and the script exits with code 0, so Ansible doesn't get executed

  • Then with apply, it creates the infra and also runs the Ansible part


Error coming: the aws_ec2 plugin needs credentials

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="ap-south-1"

Error coming: 'connection timed out'

Fixing this: it happens because the new server is still initializing when Ansible tries to connect to it. To prevent this, add a sleep after the creation of the instance, so the instance can finish booting before the other operations are performed
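A fixed sleep works, but a slightly more robust sketch polls instead of guessing a duration. The `retry` helper below is hypothetical, not part of the original job:

```shell
#!/bin/bash
# Hypothetical helper: retry a command up to N times with a short pause,
# instead of a single fixed sleep after `terraform apply`.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# In the Jenkins script it could wrap an SSH-reachability check, e.g.:
# retry 30 nc -z "$INSTANCE_IP" 22
```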

and then rebuild:


Again an error comes up →

Meaning it is unable to SSH into the instance, probably because host key verification (the yes/no fingerprint prompt) is blocking it

# Run this in the terminal to pre-accept the host key
ssh-keyscan ec2-13-126-179-52.ap-south-1.compute.amazonaws.com >> ~/.ssh/known_hosts
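If pre-seeding known_hosts is awkward in a CI job, Ansible's host key checking can also be switched off for the run — a standard Ansible setting, with the usual security trade-off:

```shell
# Disable SSH host key checking for this Ansible run (less secure; CI convenience)
export ANSIBLE_HOST_KEY_CHECKING=False
```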

FINALLLLLYYYY🎉🎉🎉🎉

And the project gets completed 🎉🎉🥳🥳