AWS CloudFormation: Simplifying Cloud Deployments

harshitha3232428 | Last Updated: 10 Jan, 2025

In this article, we’ll explore how AWS CloudFormation simplifies setting up and managing cloud infrastructure. Instead of manually creating resources like servers or databases, you can write down your requirements in a file, and CloudFormation does the heavy lifting for you. This approach, known as Infrastructure as Code (IaC), saves time, reduces errors, and ensures everything is consistent.

We’ll also look at how Docker and GitHub Actions fit into the process. Docker makes it easy to package and run your application, while GitHub Actions automates tasks like testing and deployment. Together with CloudFormation, these tools create a powerful workflow for building and deploying applications in the cloud.

Learning Objectives

  • Learn how to simplify cloud infrastructure management with AWS CloudFormation using Infrastructure as Code (IaC).
  • Understand how Docker and GitHub Actions integrate with AWS CloudFormation for streamlined application deployment.
  • Explore a sample project that automates Python documentation generation using AI tools like LangChain and GPT-4.
  • Learn how to containerize applications with Docker, automate deployment with GitHub Actions, and deploy via AWS CloudFormation.
  • Understand how to set up and manage AWS resources like EC2, ECR, and security groups using CloudFormation templates.

This article was published as a part of the Data Science Blogathon.

What is AWS CloudFormation?

In the world of cloud computing, managing infrastructure efficiently is crucial. This is where AWS CloudFormation comes into the picture: it makes it easier to set up and manage your cloud resources by letting you define everything you need, such as servers, storage, and networking, in a single file.

AWS CloudFormation is a service that helps you define and manage your cloud resources using templates written in YAML or JSON. Think of it as creating a blueprint for your infrastructure. Once you hand over this blueprint, CloudFormation takes care of setting everything up, step by step, exactly as you described.

Infrastructure as Code (IaC) is like turning your cloud into something you can build, rebuild, and even improve with just a few lines of code. No more manual clicking around, no more guesswork: just consistent, reliable deployments that save you time and reduce errors.

Practical Implementation: A Hands-On Project Example

Streamlining Code Documentation with AI: The Document Generation Project

To get started with CloudFormation, we need a sample project to deploy on AWS.

I have already created a project using LangChain and OpenAI GPT-4. Let's discuss that project first, and then look at how it is deployed to AWS using CloudFormation.

GitHub code link: https://github.com/Harshitha-GH/CloudFormation

In the world of software development, documentation plays a major role in ensuring codebases are comprehensible and maintainable. However, creating detailed documentation is often a time-consuming and boring task. But we are techies, and we want automation in everything. So, to have a project to deploy on AWS using CloudFormation, I developed an automation project using AI (LangChain and OpenAI GPT-4): the Document Generation Project, an innovative solution that uses AI to automate the documentation process for Python code.

Here's a breakdown of how we built this tool and the impact it aims to create. Building the project involves a few steps.

Before starting a new project, we have to create a Python environment to install all the required packages. This keeps the project's dependencies isolated and easy to manage.

I wrote a function to parse the input file: it takes a Python file as input and extracts the names of all the functions it contains, along with their arguments and docstrings.
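
A minimal sketch of such a parser, using Python's built-in ast module (the function and field names here are illustrative and may differ from the code in the repository):

import ast

def parse_python_file(file_path):
    """Extract the name, arguments, and docstring of every function in a file."""
    with open(file_path, "r") as f:
        tree = ast.parse(f.read())

    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            functions.append({
                "function_name": node.name,
                "arguments": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node) or "No docstring provided",
            })
    return functions

if __name__ == "__main__":
    # Print the names of all functions found in the given file
    for func in parse_python_file("app.py"):
        print(func["function_name"])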

Generating Documentation from Code

Once the function details are extracted, the next step is to feed them into OpenAI's GPT-4 model to generate detailed documentation. Using LangChain, we construct a prompt that explains the task we want GPT-4 to perform.

prompt_template = PromptTemplate(
    input_variables=["function_name", "arguments", "docstring"],
    template=(
        "Generate detailed documentation for the following Python function:\n\n"
        "Function Name: {function_name}\n"
        "Arguments: {arguments}\n"
        "Docstring: {docstring}\n\n"
        "Provide a clear description of what the function does, its parameters, and the return value."
    )
)

With the help of this prompt, the doc generator function takes the parsed details and generates a complete, human-readable explanation for each function.
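
One possible way to wire this prompt to GPT-4 with LangChain is sketched below; import paths and class names can vary between LangChain versions, and the OPENAI_API_KEY import assumes the key is kept in config.py, as discussed later in the article:

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

from config import OPENAI_API_KEY  # the OpenAI key kept in config.py

# GPT-4 model used to turn parsed function details into documentation
llm = ChatOpenAI(model_name="gpt-4", temperature=0, openai_api_key=OPENAI_API_KEY)
doc_chain = LLMChain(llm=llm, prompt=prompt_template)

def generate_documentation(functions):
    """Run the prompt for each parsed function and collect the generated docs."""
    docs = {}
    for func in functions:
        docs[func["function_name"]] = doc_chain.run(
            function_name=func["function_name"],
            arguments=", ".join(func["arguments"]),
            docstring=func["docstring"],
        )
    return docs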

Flask API Integration

To make the tool user-friendly, I built a Flask API where users can upload Python files. The API parses the file, generates the documentation using GPT-4, and returns it in JSON format.
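
A minimal sketch of such an endpoint (the route name, helper functions, and response shape are assumptions; the actual app.py in the repository may differ):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/generate-docs", methods=["POST"])
def generate_docs():
    """Accept an uploaded Python file and return generated documentation as JSON."""
    uploaded = request.files.get("file")
    if uploaded is None:
        return jsonify({"error": "No file uploaded"}), 400

    # Save the upload, then reuse the parser and doc generator sketched above
    path = "/tmp/uploaded.py"
    uploaded.save(path)

    functions = parse_python_file(path)
    documentation = generate_documentation(functions)
    return jsonify(documentation)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the API is reachable from outside the Docker container
    app.run(host="0.0.0.0", port=5000)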

We can test this Flask API using Postman to check the output.


Dockerizing the Application

To deploy to AWS and use our application, we need to containerize it with Docker and then use GitHub Actions to automate the deployment process. We will use AWS CloudFormation for the automation on AWS. Service-wise, we will use Elastic Container Registry (ECR) to store our container images and EC2 to deploy the application. Let us go through this step by step.

Creating the Dockerfile

First, we will create the Dockerfile. The Dockerfile describes how to build the image that our application container runs.

# Use the official Python 3.11-slim image as the base image
FROM python:3.11-slim

# Set environment variables to prevent Python from writing .pyc files and buffering output
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set the working directory inside the container
WORKDIR /app

# Install system dependencies required for Python packages and clean up apt cache afterwards
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libffi-dev \
    libpq-dev \
    python3-dev \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy the requirements file to the working directory
COPY requirements.txt /app/

# Upgrade pip and install Python dependencies without cache
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Copy the entire application code to the working directory
COPY . /app/

# Expose port 5000 for the application
EXPOSE 5000

# Run the application using Python
CMD ["python", "app.py"]#import csv

Docker Compose

Once the Dockerfile is created, we will create a Docker Compose file that spins up the container.

version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - PYTHONDONTWRITEBYTECODE=1
      - PYTHONUNBUFFERED=1
    command: ["python", "app.py"]#import csv

You can test this by running the command

docker-compose up --build

After the command executes successfully, the code will function exactly as it did before.

Creating AWS Services for the CloudFormation Stack


I create an ECR repository manually. The remaining required services will be created later through GitHub Actions and the CloudFormation template.

The repository I have created has the namespace cloud_formation and the repository name demo. Then, I will proceed with the CloudFormation template, a YAML file that spins up the required instance, pulls the images from ECR, and sets up the other resources.
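
If you prefer to create the repository from code instead of the AWS console, a boto3 sketch might look like this (it assumes AWS credentials are already configured and uses the namespace and name described above):

import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create the repository; the name combines the namespace and the repository name
response = ecr.create_repository(repositoryName="cloud_formation/demo")
print(response["repository"]["repositoryUri"])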

Instead of manually setting up servers and connecting everything, AWS CloudFormation is used to set up and manage cloud resources (like servers or databases) automatically using a script. It's like handing over a blueprint to build and organize your cloud resources without doing it by hand!

Think of CloudFormation as writing a simple instruction manual for AWS to follow. This manual, called a 'template', tells AWS to:

  • Start the servers required for the project.
  • Pull the project’s container images from the ECR storage repository.
  • Set up all other dependencies and configurations needed for the project to run.

By using this automated setup, I don’t have to repeat the same steps every time I deploy or update the project — it’s all done automatically by AWS.

CloudFormation Template

AWS CloudFormation templates are declarative JSON or YAML scripts that describe the resources and configurations needed to set up your infrastructure in AWS. They enable you to automate and manage your infrastructure as code, ensuring consistency and repeatability across environments.

# CloudFormation Template
AWSTemplateFormatVersion: "2010-09-09"
Description: Deploy EC2 with Docker Compose pulling images from ECR

Resources:
  BackendECRRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: backend


  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref EC2InstanceRole

  EC2InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: ECROpsPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchGetImage
                  - ecr:GetDownloadUrlForLayer
                Resource: "*"
        - PolicyName: SecretsManagerPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - secretsmanager:GetSecretValue
                Resource: "*"

  EC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH, HTTP, HTTPS, and application-specific ports
      SecurityGroupIngress:
        # SSH Access
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
        # Ping (ICMP)
        - IpProtocol: icmp
          FromPort: -1
          ToPort: -1
          CidrIp: 0.0.0.0/0
        # HTTP
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        # HTTPS
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        # Backend Port
        - IpProtocol: tcp
          FromPort: 5000
          ToPort: 5000
          CidrIp: 0.0.0.0/0

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      KeyName: demo
      ImageId: ami-0c02fb55956c7d316
      IamInstanceProfile: !Ref EC2InstanceProfile
      SecurityGroupIds:
        - !Ref EC2SecurityGroup
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          set -e  # Exit script on error
          yum update -y
          yum install docker git python3 -y
          pip3 install boto3
          service docker start
          usermod -aG docker ec2-user

          # Install Docker Compose
          curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep tag_name | cut -d '"' -f 4)/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          chmod +x /usr/local/bin/docker-compose

          # Retrieve secrets from AWS Secrets Manager
          SECRET_NAME="backend-config"
          REGION="us-east-1"
          SECRET_JSON=$(aws secretsmanager get-secret-value --secret-id $SECRET_NAME --region $REGION --query SecretString --output text)
          echo "$SECRET_JSON" > /tmp/secrets.json

          # Create config.py dynamically
          mkdir -p /backend
          cat <<EOL > /backend/config.py
          import json
          secrets = json.load(open('/tmp/secrets.json'))
          OPENAI_API_KEY = secrets["OPENAI_API_KEY"]
          EOL

        

          # Authenticate with ECR
          aws ecr get-login-password --region ${AWS::Region} | docker login --username AWS --password-stdin ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com

          # Pull images from ECR
          docker pull ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/personage/dodge-challenger:backend-latest

          # Create Docker Compose file
          cat <<EOL > docker-compose.yml
          version: "3.9"
          services:
            backend:
              image: ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/personage/dodge-challenger:backend-latest
              ports:
                - "5000:5000"
              volumes:
                - /backend/config.py:/app/config.py
                - /tmp/secrets.json:/tmp/secrets.json
              environment:
                - PYTHONUNBUFFERED=1

            
          EOL

          # Start Docker Compose
          docker-compose -p demo up -d

Outputs:
  EC2PublicIP:
    Description: Public IP of the EC2 instance
    Value: !GetAtt EC2Instance.PublicIp

Let's decode the template step by step:

We are defining a single ECR resource, which is the repository where our Docker image is stored.

Next, we create an EC2 instance. We’ll attach essential policies to it, mainly for interacting with the ECR and AWS Secrets Manager. Additionally, we attach a Security Group to control network access. For this setup, we will open:

  • Port 22 for SSH access.
  • Port 80 for HTTP access.
  • Port 5000 for backend application access.

A t2.micro instance will be used, and inside the User Data section, we define the instructions to configure the instance:

  • Install necessary dependencies like Python, boto3, and Docker.
  • Access secrets stored in AWS Secrets Manager and save them to a config.py file.
  • Login to ECR, pull the Docker image, and run it using Docker.

Since only one Docker container is being used, this configuration simplifies the deployment process, while ensuring the backend service is accessible and properly configured.
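
Although the GitHub Actions workflow below handles the deployment for us, the same stack can also be launched programmatically with boto3; the stack name and template file name in this sketch are assumptions that match the workflow shown later:

import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

with open("cloud-formation.yaml") as f:
    template_body = f.read()

# CAPABILITY_NAMED_IAM is required because the template creates IAM roles
cf.create_stack(
    StackName="docker-ecr-ec2-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack is fully created, then print its outputs (the EC2 public IP)
cf.get_waiter("stack_create_complete").wait(StackName="docker-ecr-ec2-stack")
stack = cf.describe_stacks(StackName="docker-ecr-ec2-stack")["Stacks"][0]
print(stack["Outputs"])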

Uploading and Storing Secrets in AWS Secrets Manager

Until now, we have kept secrets like the OpenAI key in the config.py file. But we cannot push this file to GitHub, as it contains secrets. So, we use AWS Secrets Manager to store our secrets and then retrieve them through our CloudFormation template.

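Storing the secret can be done through the console or, as a rough boto3 sketch, from code; the secret name backend-config and the key OPENAI_API_KEY match what the EC2 UserData script in the template expects:

import json
import boto3

secrets_client = boto3.client("secretsmanager", region_name="us-east-1")

# Store the OpenAI key under the secret name that the UserData script looks up
secrets_client.create_secret(
    Name="backend-config",
    SecretString=json.dumps({"OPENAI_API_KEY": "sk-your-key-here"}),
)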


Creating GitHub Actions


GitHub Actions is used to automate tasks like testing code, building apps, or deploying projects whenever you make changes. It's like setting up a robot to handle repetitive work for you!

Our main intention here is that as soon as we push to a specific branch on GitHub, the deployment to AWS should start automatically. For this, we will use the 'main' branch.

Storing the Secrets in GitHub

Sign in to GitHub and follow the path below:

repository > settings > Secrets and variables > Actions

Then you need to add your AWS secrets, extracted from your AWS account, as shown in the image below.


Initiating the Workflow

After storing, we will create a .github folder and, within it, a workflows folder. Inside the workflows folder, we will add a deploy.yaml file.

name: Deploy to AWS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3
      
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4 # Configure AWS credentials
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 2: Log in to Amazon ECR
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      # Step 3: Build and Push Backend Image to ECR
      - name: Build and Push Backend Image
        run: |
          docker build -t backend .
          docker tag backend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/personage/dodge-challenger:backend-latest
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/personage/dodge-challenger:backend-latest

      
      
      # Step 4: Delete Existing CloudFormation Stack
      - name: Delete Existing CloudFormation Stack
        run: |
          aws cloudformation delete-stack --stack-name docker-ecr-ec2-stack
          echo "Waiting for stack deletion to complete..."
          aws cloudformation wait stack-delete-complete --stack-name docker-ecr-ec2-stack || echo "Stack does not exist or already deleted."

      # Step 5: Deploy CloudFormation Stack
      - name: Deploy CloudFormation Stack
        uses: aws-actions/aws-cloudformation-github-deploy@v1
        with:
          name: docker-ecr-ec2-stack
          template: cloud-formation.yaml
          capabilities: CAPABILITY_NAMED_IAM

Here’s a simplified explanation of the flow:

  • We pull the code from the repository and set up AWS credentials using the secrets stored in GitHub.
  • Then, we log in to ECR and build/push the Docker image of the application.
  • We check if there’s an existing CloudFormation stack with the same name. If yes, delete it.
  • Finally, we use the CloudFormation template to launch the resources and set everything up.

Testing

Once everything is deployed, note down the public IP address of the instance and then just call it using Postman to check that everything works fine.
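
The same check can be done from Python; the endpoint path and payload below are assumptions based on the Flask sketch earlier, and <EC2_PUBLIC_IP> stands for the EC2PublicIP value from the CloudFormation outputs:

import requests

# Replace <EC2_PUBLIC_IP> with the EC2PublicIP value from the stack outputs
url = "http://<EC2_PUBLIC_IP>:5000/generate-docs"

with open("sample.py", "rb") as f:
    response = requests.post(url, files={"file": f})

print(response.status_code)
print(response.json())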


Conclusion

In this article, we explored how to use AWS CloudFormation to simplify cloud infrastructure management. We learned how to create an ECR repository, deploy a Dockerized application on an EC2 instance, and automate the entire process using GitHub Actions for CI/CD. This approach not only saves time but also ensures consistency and reliability in deployments.

Key Takeaways

  • AWS CloudFormation simplifies cloud resource management with Infrastructure as Code.
  • Docker containers streamline application deployment on AWS-managed infrastructure.
  • GitHub Actions automates build and deployment pipelines for seamless integration.
  • LangChain and GPT-4 enhance Python documentation automation in projects.
  • Combining IaC, Docker, and CI/CD creates scalable, efficient, and modern workflows.

Frequently Asked Questions

Q1. What is AWS CloudFormation?

A. AWS CloudFormation is a service that enables you to model and provision AWS resources using Infrastructure as Code (IaC).

Q2. How does Docker integrate with AWS CloudFormation?

A. Docker packages applications into containers, which can be deployed on AWS resources managed by CloudFormation.

Q3. What role does GitHub Actions play in this workflow?

A. GitHub Actions automates CI/CD pipelines, including building, testing, and deploying applications to AWS.

Q4. Can I automate Python documentation generation with LangChain?

A. Yes, LangChain and GPT-4 can generate and update Python documentation as part of your workflow.

Q5. What are the benefits of using IaC with AWS CloudFormation?

A. IaC ensures consistent, repeatable, and scalable resource management across your infrastructure.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
