Deploying Microservice App on K8s
Part 5: Deployment and statefulset manifests
This blog is Part 5 of the series; it is recommended that you read the previous blogs before this one. Here is Part 1 of the series.
In this blog, I will explain the Kubernetes deployment and statefulset manifests. Please refer to the previous project blog to fully understand this part of the project.
The deployment manifest is for the authentication microservice and the statefulset is for the backend microservice. The backend has a statefulset type manifest because it uses persistent storage.
Clone the project repository-
As you open the project repository, in the manifest directory you will find the deployment manifest named auth_deploy.yaml and the backend statefulset manifest named backend_deploy.yaml. I am going to discuss these manifests in this blog. You do not need to edit the discussed files yet; I will tell you when to edit and what to edit as I discuss them.
Both of these manifest files also contain the services for their respective microservices.
auth_deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  namespace: auth
spec:
  selector:
    app: auth
  ports:
    - port: 8000
      targetPort: auth-port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  namespace: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      nodeSelector:
        node-group: auth-ng
      containers:
        - name: auth
          image: Your authentication image URI
          resources:
            limits:
              memory: "0.6Gi"
              cpu: "600m"
          ports:
            - name: auth-port
              containerPort: 8000
          volumeMounts:
            - name: config-vol
              mountPath: /app/config
            - name: secret-vol
              mountPath: /app/secret
      volumes:
        - name: config-vol
          configMap:
            name: auth-config
        - name: secret-vol
          secret:
            secretName: auth-secret
The "---" mark divides this file into two parts, above this mark is the manifest of service and below is the deployment manifest for authentication microservice.
Let's discuss service first
Service Manifest of Authentication
In the metadata section, the service of the authentication microservice is named auth-service and kept in the auth namespace. The selector field of this service is set to app: auth, which means this service will target any pod that has the app: auth label.
The port is 8000, specified in the ports section, which is the port the service listens on. The target port is auth-port, which is the name of the pod port defined in the deployment. We bind the pod port to the service port by referencing the pod port's name in the service's targetPort field.
You can look at the Kubernetes service documentation here for more information.
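To make the name-based binding explicit, here is the pairing in isolation, extracted from the manifest above:

```yaml
# Service side: targetPort refers to the container port by its name
ports:
  - port: 8000            # port the service listens on
    targetPort: auth-port
# Pod side (in the deployment): the container port carries that name
ports:
  - name: auth-port
    containerPort: 8000   # port the container listens on
```

Using a name instead of a number means you can change the container port later without touching the service.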
Deployment Manifest of Authentication
In the deployment, you can see the deployment is named auth and it is set to reside in the auth namespace.
In the deployment, you can see two spec fields: the outer spec field specifies the configuration of the deployment itself, and the inner spec field is for the configuration of the pod managed by the deployment.
The outer spec field specifies replicas, which decides how many pod replicas the deployment will run; it is set to 1. Next, in the selector field, a matchLabels field is defined with app: auth, which means this deployment will manage any pod with the app: auth label.
After this, you can see a template field that specifies the pod template for the deployment. In its metadata section the pod is labeled app: auth, and after this there is the inner spec field for the pod.
In the inner spec field, the first field is the nodeSelector field, which has the label value node-group: auth-ng. This field makes sure that the authentication microservice pod gets scheduled to a node that has the node-group: auth-ng label. (Remember that in our Terraform code we label the authentication microservice node node-group: auth-ng, and in the previous blog I told you that only one microservice pod will run on a single node.)
Next, we have the containers field, which defines the configuration of the container in the pod. The container name is set to auth, and in the image field I have provided the URI of the image. You need to edit the manifest here and paste the URI of your image from your AWS account's ECR repository. I will tell you how to do this later in this blog.
Next is the resources field, which decides how much of the node's CPU and memory the pod is allowed to use. I have set the memory limit to 0.6Gi (roughly 600 MB) and the CPU limit to 600m, which means 60% of a single CPU core.
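If you also want the scheduler to reserve capacity for the pod rather than only cap it, you can add a requests block alongside limits. This is a sketch, not part of the project's manifest, and the request values are illustrative:

```yaml
resources:
  requests:           # minimum the scheduler reserves on the node
    memory: "0.3Gi"
    cpu: "300m"
  limits:             # hard cap the container cannot exceed
    memory: "0.6Gi"
    cpu: "600m"
```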
Next, we have the ports field, where we make the pod listen on port 8000 and name this port auth-port. This is the same name that is referenced in the service's targetPort field above.
Next, we have the volumeMounts field. This is where we mount the Kubernetes ConfigMap and Secret volumes to the specific directories in the container, as created in the ConfigMap and Secret manifests discussed in previous blogs.
We discussed in previous blogs that in the container the secret config is stored in the /app/secret directory and the non-secret config in the /app/config directory. Hence, in the mountPath field we mount the secret volume on /app/secret and the configMap volume on /app/config in the container. In the name field, we reference the volume to mount at that path; this volume is specified in the volumes field in the next part of the manifest.
At last, we have the volumes field, where we define each volume and its type. First, we define the ConfigMap volume, naming it in the name field; the name specified here is the same one the volumeMounts field references. We then declare the volume type with the configMap field, which means the volume is a Kubernetes ConfigMap volume, and set its name field to the name of the ConfigMap we created in the previous blog.
Just like this, we define the secret volume. We name it in the name field, which is again referenced in the volumeMounts field, then specify its type with a secret field and reference the Kubernetes Secret in the secretName field. This secretName value has to be the name of the Kubernetes Secret that we created in the previous blogs.
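As a reminder of how the mounted ConfigMap shows up inside the container, here is a minimal sketch; the key name SERVER_PORT is hypothetical, not taken from the project:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-config
  namespace: auth
data:
  SERVER_PORT: "8000"   # mounted as the file /app/config/SERVER_PORT in the container
```

Each key in the data section becomes a file under the mountPath, with the value as the file's contents.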
backend_deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: backend
  labels:
    app: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - port: 5000
      targetPort: backend-port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backend-statefulset
  namespace: backend
spec:
  selector:
    matchLabels:
      app: backend
  serviceName: backend-service
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      nodeSelector:
        node-group: backend-ng
      containers:
        - name: backend
          image: Your backend image URI
          ports:
            - containerPort: 5000
              name: backend-port
          volumeMounts:
            - name: backend-volume
              mountPath: /app/CSVs
            - name: config-vol
              mountPath: /app/config
            - name: secret-vol
              mountPath: /app/secret
      volumes:
        - name: backend-volume
          persistentVolumeClaim:
            claimName: backend-pvc-claim
        - name: config-vol
          configMap:
            name: backend-config
        - name: secret-vol
          secret:
            secretName: backend-secret
The "---" mark divides this file into two parts, above this mark is the manifest of service and below is the statefulset manifest for the backend microservice.
Let's discuss service first. This service manifest is similar to that of an authentication service.
Service Manifest of Backend
In the metadata section, the service of the backend microservice is named backend-service and kept in the backend namespace. The selector field of this service is set to app: backend, which means this service will target any pod that has the app: backend label.
In the spec field, type is set to LoadBalancer, which means this service will be accessible from outside the cluster.
The port is 5000, specified in the ports section, which is the port the service listens on. The target port is backend-port, which is the name of the pod port defined in the statefulset. We bind the pod port to the service port by referencing the pod port's name in the service's targetPort field.
Statefulset Manifest of Backend
In the statefulset, you can see it is named backend-statefulset and it is set to reside in the backend namespace.
In the statefulset, you can see two spec fields: the outer spec field specifies the configuration of the statefulset itself, and the inner spec field is for the configuration of the pod managed by the statefulset.
The outer spec field specifies the selector field, where a matchLabels field is defined with app: backend, which means this statefulset will manage any pod with the app: backend label. Next, we have the serviceName field, which is set to the name of this statefulset's service. In a statefulset we define the service name, but not in a deployment; read the K8s statefulset manifest doc to know why, I will provide the link at the end of this section.
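A practical effect of serviceName is stable pod identity. A statefulset names its pods with ordinals, and if the governing service is headless (clusterIP: None), each pod also gets a stable DNS record. Assuming the default cluster domain, the pattern looks like this:

```
Pod name: <statefulset-name>-<ordinal>
          e.g. backend-statefulset-0
Pod DNS:  <pod-name>.<serviceName>.<namespace>.svc.cluster.local
          e.g. backend-statefulset-0.backend-service.backend.svc.cluster.local
```

Note that backend-service in this project is of type LoadBalancer, not headless, so the per-pod DNS form above does not apply as-is; the serviceName field is still required by the statefulset spec.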
Next, replicas decides how many pod replicas the statefulset will run; it is set to 1.
After this, you can see a template field that specifies the pod template for the statefulset. In its metadata section the pod is labeled app: backend, and after this there is the inner spec field for the pod.
In the inner spec field, the first field is the nodeSelector field, which has the label value node-group: backend-ng. This field makes sure that the backend microservice pod gets scheduled to a node that has the node-group: backend-ng label. (Remember that in our Terraform code we label the backend microservice node node-group: backend-ng, and in the previous blog I told you that only one microservice pod will run on a single node.)
Next, we have the containers field, which defines the configuration of the container in the pod. The container name is set to backend, and in the image field I have provided the URI of the image. You need to edit the manifest here and paste your image's URI from your AWS account's ECR repository. I will tell you how to do this later in this blog.
Unlike the auth deployment, this manifest does not set a resources field; you can add one in the same way if you want to limit how much of the node's CPU and memory the pod may use.
Next, we have the ports field, where we make the pod listen on port 5000 and name this port backend-port. This is the same name that is referenced in the service's targetPort field above.
Next, we have the volumeMounts field. This is where we mount the persistent storage, along with the Kubernetes ConfigMap and Secret volumes, to specific directories in the container, as created in the ConfigMap and Secret manifests discussed in previous blogs.
As I discussed in the earlier blog, the CSV files contain the user tasks, and they reside in /app/CSVs in the container. This is the directory where we have to mount the persistent storage so that every new pod will have the same files as the other pods.
So we mount the persistent storage by setting the mountPath field to the directory path of the CSV files, which is /app/CSVs. In the name field, we reference the volume to mount at this path; this volume is specified in the volumes field in the next part of this manifest.
We discussed in previous blogs that in the container the secret config is stored in the /app/secret directory and the non-secret config in the /app/config directory. Hence, in the mountPath field we mount the secret volume on /app/secret and the configMap volume on /app/config in the container. In the name field, we reference the volume to mount at each path; these volumes are specified in the volumes field in the next part of this manifest.
At last, we have the volumes field, where we define each volume and its type. First, we define the persistent volume, naming it backend-volume. After this we set the volume type to persistentVolumeClaim and reference the PVC in the claimName field; the referenced name is the name of the PVC (PersistentVolumeClaim) that we created in the previous blog, hence the claimName is backend-pvc-claim.
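For reference, a PVC matching the claimName above has roughly this shape. This is a sketch; the storage size and access mode here are illustrative, and the actual values live in the PVC manifest from the previous blog:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backend-pvc-claim
  namespace: backend
spec:
  accessModes:
    - ReadWriteOnce     # a single node can mount the volume read-write
  resources:
    requests:
      storage: 1Gi      # illustrative size
```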
The next volume is the ConfigMap volume. We name it in the name field; the name specified here is the same one the volumeMounts field references. We then declare the volume type with the configMap field, which means the volume is a Kubernetes ConfigMap volume, and set its name to backend-config, the name of the ConfigMap we created in the previous blog.
Just like this, we define the secret volume. We name it in the name field, which is again referenced in the volumeMounts field, then specify its type with a secret field and reference the Kubernetes Secret in the secretName field. This secretName value has to be the name of the Kubernetes Secret that we created in the previous blogs.
Read the Kubernetes statefulset doc here.
Upload Microservices Image to AWS ECR Repository
You need to build the architecture for this, using the same Terraform architecture we built in the earlier blog. So build the architecture first and then follow these steps-
Go to the AWS ECR repository. We have repositories named auth for the authentication microservice image and backend for the backend microservice image.
Go to either of these two repositories.
You will see the "view push commands" option, click on that.
Copy the first command.
Go to the project directory that you cloned from the repository link I mentioned in an earlier blog.
In the project, open the terminal and go to the directory
cd "Tasklist App"/auth
if you are uploading the image to the auth repository of AWS ECR. If instead you are uploading to the backend repository of ECR, go to the
cd "Tasklist App"/backend
directory in the terminal. In the directory, paste the first command you copied from the "view push commands" option.
Next, copy the second command from ECR and paste it into the same terminal. (You should have Docker installed and running for this command to work.)
Copy the third command and paste it into the terminal.
Copy the fourth command and paste it into the terminal.
Your image upload will start.
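For orientation, the four "view push commands" generally follow the shape below. Treat this as a sketch, with <region> and <account-id> as placeholders, and always prefer the exact commands the ECR console shows you:

```shell
# 1. Authenticate Docker to your ECR registry
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
# 2. Build the image from the current directory
docker build -t auth .
# 3. Tag the image for the ECR repository
docker tag auth:latest <account-id>.dkr.ecr.<region>.amazonaws.com/auth:latest
# 4. Push the image
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/auth:latest
```

For the backend image, the same sequence applies with backend in place of auth.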
When the image gets uploaded, go to the AWS ECR repository. There you will see the Image URI, copy the image's URI from there and paste it in its respective manifest.
Share the blog on socials and tag me on X and LinkedIn for the #buildinpublic initiative.
Thank you for reading.