Throughout this project, we will learn how to create a monitoring application in Python using Flask. We will start by building the application and containerizing it with Docker. Once we have our app containerized and running locally, we will create an ECR (Elastic Container Registry) repository in AWS using the Python boto3 module and push the Docker image of our app to ECR so it can be stored and retrieved. Finally, we will move to the deployment phase, where we will create a Kubernetes cluster with a node group on EKS (Elastic Kubernetes Service) and deploy the application to it. This article is divided into three phases:
Building the app
Containerizing the app
Deploying the app on Kubernetes
Prerequisites-
AWS Account
Programmatic access and the AWS CLI configured
Python 3 installed
Docker and kubectl installed
A code editor (I am using VS Code)
Phase 1- Build the app
Create a project folder in your code editor and, inside it, create a Python file named app.py
Now we need to create a requirements.txt file that lists all the modules and frameworks the app will use.
Paste these dependencies into requirements.txt-
Flask
MarkupSafe
Werkzeug
itsdangerous
psutil
plotly
tenacity
boto3
kubernetes
I will be mentioning the terminal a lot in this article; make sure you are opening it in your code editor and entering the commands there.
Now run this command to install all the dependencies-
pip3 install -r requirements.txt
Now let's add the page that renders the metrics as gauges, along with a bit of CSS styling. Create a folder named "templates" and inside this folder create a file named "index.html". Paste this code into index.html-
<html>
  <head>
    <title>System Monitoring</title>
    <script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
    <style>
      .plotly-graph-div {
        margin: auto;
        width: 50%;
        background-color: rgba(151, 128, 128, 0.688);
        padding: 20px;
      }
    </style>
  </head>
  <body>
    <div class="container">
      <h1>System Monitoring</h1>
      <div id="cpu-gauge"></div>
      <div id="mem-gauge"></div>
      {% if message %}
      <div class="alert alert-danger">{{ message }}</div>
      {% endif %}
    </div>
    <script>
      var cpuGauge = {
        type: "indicator",
        mode: "gauge+number",
        value: {{ cpu_metric }},
        gauge: {
          axis: { range: [null, 100] },
          bar: { color: "white" },
          bgcolor: "purple",
          borderwidth: 3,
          bordercolor: "black",
          steps: [
            { range: [0, 40], color: "green" },
            { range: [40, 80], color: "yellow" },
            { range: [80, 100], color: "red" }
          ],
          threshold: {
            line: { color: "red", width: 4 },
            thickness: 0.75,
            value: {{ cpu_metric }}
          }
        }
      };

      var memGauge = {
        type: "indicator",
        mode: "gauge+number",
        value: {{ mem_metric }},
        gauge: {
          axis: { range: [null, 100] },
          bar: { color: "white" },
          bgcolor: "black",
          borderwidth: 3,
          bordercolor: "black",
          steps: [
            { range: [0, 40], color: "green" },
            { range: [40, 70], color: "yellow" },
            { range: [70, 100], color: "red" }
          ],
          threshold: {
            line: { color: "red", width: 4 },
            thickness: 0.75,
            value: {{ mem_metric }}
          }
        }
      };

      var cpuGaugeLayout = { title: "CPU Utilization" };
      var memGaugeLayout = { title: "Memory Utilization" };
      Plotly.newPlot('cpu-gauge', [cpuGauge], cpuGaugeLayout);
      Plotly.newPlot('mem-gauge', [memGauge], memGaugeLayout);
    </script>
  </body>
</html>
Now, back to app.py, let's write the application code. We are going to use a module named "psutil", which retrieves the CPU and memory metrics of the machine, and the Flask framework to serve the page. Both are already included in the requirements.txt file and were installed with the command mentioned above; we just need to import them in our code.
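If you want to see what psutil returns before wiring it into Flask, you can try a quick check in a Python shell; a minimal sketch (the numbers will of course differ on your machine):
import psutil

# Percentage of CPU currently in use, sampled over a one-second interval
print(psutil.cpu_percent(interval=1))

# Percentage of RAM currently in use
print(psutil.virtual_memory().percent)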
This is the complete code of our application; paste it into the app.py file-
import psutil as monitor
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    cpuPer = monitor.cpu_percent()
    memUti = monitor.virtual_memory().percent
    Message = None
    if cpuPer > 80:
        Message = 'CPU usage more than 80 percent, you need to scale up'
    elif memUti > 80:
        Message = 'Memory usage more than 80 percent, you need to scale up'
    return render_template("index.html", cpu_metric=cpuPer, mem_metric=memUti, message=Message)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
The code is pretty simple, so I am not going to explain it in detail, but here are a few things it is doing.
In line 6, you can see "@app.route('/')", which means the index function runs whenever a user visits the home path. In the return statement, we return render_template, which renders the index.html file as a template with the current metrics.
Whenever CPU usage is above 80%, the page displays "CPU usage more than 80 percent, you need to scale up", and if memory usage is above 80%, it displays "Memory usage more than 80 percent, you need to scale up"; otherwise no warning is shown and the gauges simply display the current CPU and memory metrics.
Now we are ready to run our application. Run in your terminal-
python3 app.py
You should see this in your terminal after running the command above-
Now open your browser and go to http://127.0.0.1:5000, or simply click the link shown in your terminal. You should see this-
If you are seeing this, congratulations buddy, you have successfully built the app. You can reload the page in your browser to get the current system metrics of your machine (just bang on that button). To stop the app, press Ctrl+C in your terminal. Now you have to containerize this app. Let's commence Phase 2.
Phase 2- Containerize the app
To containerize our app we will need a "Dockerfile", so create a file named "Dockerfile" in your folder.
Paste this into your Dockerfile-
FROM python:3.12.0b4-slim-bullseye
WORKDIR /app
COPY . .
RUN pip3 install --no-cache-dir -r requirements.txt
ENV FLASK_RUN_HOST=0.0.0.0
EXPOSE 5000
CMD ["flask", "run"]
Explanation of the Dockerfile-
Since we are building the app in Python, we use Python as our base image; we use the 3.12.0b4-slim-bullseye tag because it takes up very little space. Next, we set the working directory to /app, then copy everything from the current directory on our local machine into the working directory of the image. After that, we run the command that installs all the dependencies for our app inside the image; the --no-cache-dir flag avoids caching issues and keeps the image smaller. Then we set an environment variable named FLASK_RUN_HOST to 0.0.0.0 so that the app is not restricted to a specific IP. Next, we expose port 5000. Finally, in the last line, we run the command that starts our Flask app.
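One small optional addition: because COPY . . copies everything from the project folder into the image, you can place a .dockerignore file next to the Dockerfile to keep things like virtual environments and caches out of the image. A minimal sketch (entirely optional, the app works without it):
__pycache__/
*.pyc
venv/
.git/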
Now let’s build the image of our app, Enter this command in your terminal-
docker build -t monitoring-app .
Your terminal will look something like this-
After the build completes, you can enter this command and you will see the image you built, named "monitoring-app"-
docker images
Now run the container from the image we built by using this command-
docker run -p 5000:5000 monitoring-app
After running this command, your container will start; go to localhost:5000 in your browser and you will see the app being served from the container. If your container is running, congrats, you have successfully containerized the app. You can press Ctrl+C in your terminal to stop the container.
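If you prefer to keep your terminal free, you can also run the container in the background and stop it by name; a quick optional sketch (the container name below is just an example):
docker run -d -p 5000:5000 --name monitoring-container monitoring-app
docker ps
docker stop monitoring-container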
Let's start Phase 3 and deploy the app on Kubernetes.
Phase 3- Deploy the App on Kubernetes
Now we need to upload our image to a repository. We could use Docker Hub or any other registry, but for this project we are using AWS ECR (Elastic Container Registry). We could create this repository through the AWS Console, but we are going to create it with Python using the boto3 module from our local machine, so that we also learn to create AWS resources using Python.
Boto3 is the AWS SDK for Python, used to create, configure, and manage AWS services. You can check out the boto3 documentation for more in-depth information. Here is the link-
Boto3 1.28.19 documentation (boto3.amazonaws.com)
Now let's create the repository. Make a file in your code editor named "ecr.py" and paste this code into it-
import boto3

client = boto3.client('ecr')

repository_name = "my-cloud-native-repo"
response = client.create_repository(
    repositoryName=repository_name,
    tags=[
        {
            'Key': 'repo',
            'Value': 'monitor-app'
        },
    ],
)

repository_uri = response['repository']['repositoryUri']
print(repository_uri)
This code creates an AWS ECR repository in your AWS account. I am not going to explain it line by line because it is easy to follow once you look at the boto3 ECR documentation. Here is the link-
ECR - Boto3 documentation (boto3.amazonaws.com)
After saving ecr.py, run the command below and a private ECR repository will be created automatically in your AWS account. Just make sure you have configured the AWS CLI with your AWS account, otherwise this command will not work. Here is the command-
python3 ecr.py
After running this command, check ECR in your AWS account and you will see that the repository has been created.
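One thing worth knowing: create_repository throws an error if the repository already exists, so running ecr.py a second time will fail. If you want the script to be safely re-runnable, a small variation like this works (same repository name as above, using the exception class exposed by the boto3 ECR client):
import boto3

client = boto3.client('ecr')
repository_name = "my-cloud-native-repo"

try:
    # Create the repository if it does not exist yet
    response = client.create_repository(repositoryName=repository_name)
    repository_uri = response['repository']['repositoryUri']
except client.exceptions.RepositoryAlreadyExistsException:
    # The repository is already there, so just look up its URI
    response = client.describe_repositories(repositoryNames=[repository_name])
    repository_uri = response['repositories'][0]['repositoryUri']

print(repository_uri)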
Now that we have our repository created, we just need to upload our image to it. Go to the repository you created in the AWS Console and you will see a "View push commands" option on your screen, like this-
Click on this option and you will see a set of commands; enter them one by one in your code editor's terminal. Make sure macOS/Linux is selected in the push commands window.
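Copy the exact commands from the console, because they contain your own AWS account ID and region. For reference, they typically look something like this (the account ID and region below are placeholders):
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
docker build -t my-cloud-native-repo .
docker tag my-cloud-native-repo:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-cloud-native-repo:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-cloud-native-repo:latest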
After running the first command, you will see "Login Succeeded" in your terminal.
After running the second command, your image will get built. Your terminal will look something like this while the second command is running-
Enter the third command, which tags the image for your repository.
Finally, enter the fourth command and your image will start uploading to your AWS ECR repository. Your terminal will look something like this-
Now you have successfully uploaded your image to your AWS ECR repository. You can see the uploaded image in the repository's Images section in the AWS Console.
Now we have to create a Kubernetes cluster and a node group in that cluster, but before that we need to create roles with the necessary permission policies in AWS IAM (Identity and Access Management), because the cluster and the node group will ask for them when you create them.
We will create two roles: one for AWS EKS (Elastic Kubernetes Service) and one for the node group in that cluster.
Let's create the role for the EKS cluster first. Go to AWS IAM, create a role named EKS-Cluster-Role, and attach the AmazonEKSClusterPolicy to it.
Now create a role for the node group. Name it Node-Group-Role and attach these policies to it (if you would rather create both roles with boto3, there is a sketch right after this list)-
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSServicePolicy
AmazonEKSWorkerNodePolicy
AmazonEKSVPCResourceController
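If you would rather create these two roles with boto3 as well, instead of clicking through the IAM console, a rough sketch could look like the following (the role names match the ones above; the trust policies simply allow the EKS service and EC2 to assume the respective roles):
import json
import boto3

iam = boto3.client('iam')

def create_role(role_name, service, policy_arns):
    # Trust policy letting the given AWS service assume the role
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": service},
            "Action": "sts:AssumeRole"
        }]
    }
    iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(trust_policy)
    )
    # Attach the managed policies to the role
    for arn in policy_arns:
        iam.attach_role_policy(RoleName=role_name, PolicyArn=arn)

# Role for the EKS cluster itself
create_role("EKS-Cluster-Role", "eks.amazonaws.com",
            ["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"])

# Role for the worker nodes in the node group
create_role("Node-Group-Role", "ec2.amazonaws.com", [
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController",
])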
Now that the roles are ready, let's create the AWS EKS cluster. In your AWS account, go to EKS, click on Add cluster in the top right corner of the screen, and select Create. Here is a screenshot for reference-
Now create the cluster with the name app-cluster and, for the Cluster service role option, select the EKS-Cluster-Role we created in the steps above. Your screen should look like this-
Click Next. On the next page, select your default VPC and then select three subnets, making sure the subnets you select are public. Then, in the security group section, remove the security group if one is selected; you don't need to choose one because the cluster will automatically create a security group with all the required inbound and outbound rules. You can optionally select a security group here, but then make sure it has all the necessary inbound and outbound permissions. I recommend not selecting any security group.
Leave the remaining options as they are; this page should look something like this-
Click Next and you will see the Configure logging page; keep things as they are and click Next. You will then reach the Select add-ons page; keep the defaults and click Next. Now you will be at the Configure selected add-ons settings page; again keep the defaults and click Next. You should now be at the Review and create page, which should look something like this-
Click Create, and your cluster will start being created. It may take some time for the cluster to become ready, so wait a while.
When your cluster is ready, go to it, and in the Compute section select Add node group.
After clicking Add node group you will be on the Configure node group page. Name the node group cluster-node-group, select the role we created earlier for the node group, Node-Group-Role, and then click Next. You will now be on the Set compute and scaling configuration page; keep things as they are and only change the instance type to t2.micro to stay in the free tier. Your page should look like this-
Click Next and you will be on the Specify networking page; keep things as they are. The subnets selected here are the ones you chose when you created the cluster, so do not change anything and click Next. You will then be on the Review and create page; click Create. The node group is now being created; it will take some time, so you may have to wait a while.
When your node group is ready, open the code editor on your local machine and create a file named "eks.py" in the folder you have been using for this project. In this file we will use the Kubernetes Python client, which will create a Kubernetes deployment and service for us when we run the code. Paste this code into the file-
from kubernetes import client, config

# Load kubernetes configuration
config.load_config()

# Create a kubernetes API client
api_client = client.ApiClient()

# Define the deployment
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="cloud-native-monitoring-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(
            match_labels={"app": "cloud-native-monitoring-app"}
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(
                labels={"app": "cloud-native-monitoring-app"}
            ),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="monitoring-app-container",
                        # Use the image URI of your own ECR repository here
                        # (the one printed by ecr.py / shown in the AWS Console)
                        image="827695660685.dkr.ecr.ap-south-1.amazonaws.com/my-cloud-native-repo:latest",
                        ports=[client.V1ContainerPort(container_port=5000)]
                    )
                ]
            )
        )
    )
)

# Create the deployment
api_instance = client.AppsV1Api(api_client)
api_instance.create_namespaced_deployment(
    namespace="default",
    body=deployment
)

# Define the service
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="cloud-native-monitoring-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "cloud-native-monitoring-app"},
        ports=[client.V1ServicePort(port=5000)]
    )
)

# Create the service
api_instance = client.CoreV1Api(api_client)
api_instance.create_namespaced_service(
    namespace="default",
    body=service
)
Again, I am trying to keep this article short, so I am not going to add an explanation; you will understand the code by checking out the documentation. You can read more about Kubernetes deployments here-
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
If you want to know more about the client libraries, check out the documentation-
Client Libraries (kubernetes.io)
The eks.py file uses the Kubernetes config file, which contains information about our cluster. We need this config file to point at our cluster, so update it by entering this command in your code editor's terminal-
aws eks update-kubeconfig --name app-cluster
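You can quickly confirm that kubectl can now reach the cluster (and that your node group is up) by listing the nodes:
kubectl get nodes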
After doing all this, finally enter this command to create the deployment and service on your Kubernetes cluster using the code we wrote above-
python3 eks.py
Now your deployment should be running on the cluster with its pod. Check your deployment, pods, and service by entering these commands-
kubectl get deployment
kubectl get pods
kubectl get services
If your pods are running, congratulations buddy. You have successfully deployed the app to a Kubernetes cluster.
You can see the system metrics of the pod in which our app is running in your browser using port forwarding. Enter this command in your code editor's terminal-
kubectl port-forward svc/cloud-native-monitoring-service 5000:5000
Now open your browser and go to localhost:5000; you should see the app working. The metrics you are seeing are not from your local machine but from the pod running in your AWS cluster's node group. You can see them on localhost because the command above port-forwards traffic from port 5000 on your machine to the service in the cluster.
Phase 3 is now complete, and so is this project. I hope you learned something from it. If you have any questions or thoughts, feel free to ask in the comments section.