Developed by Yacine FLICI and Tilelli Bektache
Contact: [email protected], [email protected]
This project demonstrates the deployment of a web service using Docker, Kubernetes, and Istio for service mesh integration. The service is developed, containerized, and deployed on a Kubernetes cluster, followed by setting up a service mesh with Istio. This README provides a step-by-step guide to the project's implementation.
- Project Setup
- Build the Docker Image and Run the Container
- Push the Docker Image to Docker Hub
- Create a Deployment and Service in Kubernetes
- Create Gateway and Virtual Service for Istio
- Apply the Configuration Files and Verify the Deployments
- Access the Web Service
minikube start
This Dockerfile sets up a containerized environment for a Python web application.

- Base Image: Uses Python 3.12.3 as the base image.
- Working Directory: Sets the working directory in the container.
- Copy Files: Copies all the contents of the current directory on the host machine into the container.
- Environment Variable: Sets the environment variable `PYTHONUNBUFFERED` to `1` to ensure output is not buffered.
- Copy Requirements: Copies the `requirements.txt` file into the container.
- Install Dependencies: Installs the Python packages listed in `requirements.txt`.
- Command: Defines the command to run the application: `python3 manage.py runserver 0.0.0.0:8000`, which starts the Django development server and makes it accessible on port 8000.
This Dockerfile sets up a Python environment, installs dependencies, and runs a Django web server.
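A minimal sketch of what this Dockerfile might look like, following the steps above (the `/app` working directory is an assumption):

```dockerfile
# Base image
FROM python:3.12.3

# Working directory inside the container (the /app path is an assumption)
WORKDIR /app

# Copy the project into the container
COPY . .

# Ensure Python output is not buffered
ENV PYTHONUNBUFFERED=1

# Install the Python dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Start the Django development server, reachable on port 8000
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
```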
This Dockerfile sets up a containerized environment for a Python application.

- Base Image: Uses Python 3.12.3 as the base image.
- Working Directory: Sets the working directory in the container.
- Copy Files: Copies all the contents of the current directory on the host machine into the container.
- Environment Variable: Sets the environment variable `PYTHONUNBUFFERED` to `1` to ensure output is not buffered.
- Copy Requirements: Copies the `requirements.txt` file into the container.
- Install Dependencies: Installs the Python packages listed in `requirements.txt`.
- Command: Defines the command to run the application: `python3 app.py`, which starts the application using `app.py`.
This Dockerfile sets up a Python environment, installs dependencies, and runs a Python application script.
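It differs from the frontend Dockerfile only in its final command; a comparable sketch, again assuming an `/app` working directory:

```dockerfile
FROM python:3.12.3
WORKDIR /app
COPY . .
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN pip install -r requirements.txt
# Start the backend API; it listens on port 5000 per the Kubernetes config below
CMD ["python3", "app.py"]
```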
docker build -t academix-project .
docker run -p 8000:8000 academix-project
Screenshot of browser output:
docker tag academix-project pyzone49/academix_project:1
docker push pyzone49/academix_project:1
Backend API Docker Image:
docker tag academix-api pyzone49/academix_api:2
docker push pyzone49/academix_api:2
Screenshot of Docker Hub:
Using the YAML files provided in the repository, create a deployment and service for the frontend web service and backend API.
4.1 api.yaml
This YAML configuration creates a Kubernetes Deployment and Service for the backend API, called `academix-api`, and connects it to a MySQL database service.

Deployment:

- The Deployment, named `flask-api-deployment` and located in the `production` namespace, manages a single replica of the `flask-api` pod.
- The pod uses the Docker image `pyzone49/academix_api:2` and listens on port 5000.
- Environment variables are set to configure the connection to a MySQL database, including the host, port, database name, user, and password.
- The restart policy is set to always restart the container if it fails.

Service:

- The Service, named `flask-api-service` and also in the `production` namespace, is of type `ClusterIP`.
- It exposes the `flask-api` pod on port 5000 and routes traffic to the container's port 5000.
- The Service uses a label selector to route traffic to the appropriate pod with the label `app: flask-api`.

This setup ensures that the `academix-api` backend is deployed with a single replica in the `production` namespace and is connected to a MySQL database. The `ClusterIP` Service exposes the API internally within the Kubernetes cluster on port 5000, making it accessible to other services within the cluster.
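A minimal sketch of what `api.yaml` might contain; the environment-variable names and the MySQL host and credential values are assumptions (only their purpose is stated above) and must match the database configuration shown later:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api-deployment
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api
          image: pyzone49/academix_api:2
          ports:
            - containerPort: 5000
          # Database connection settings; variable names and values are assumed
          env:
            - name: MYSQL_HOST
              value: mysql-service
            - name: MYSQL_PORT
              value: "3306"
            - name: MYSQL_DATABASE
              value: academixdb
            - name: MYSQL_USER
              value: academix
            - name: MYSQL_PASSWORD
              value: academix-password
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: flask-api-service
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: flask-api
  ports:
    - port: 5000
      targetPort: 5000
```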
4.2 deployment.yaml and service.yaml
This YAML configuration creates a Kubernetes Deployment and Service for the frontend web service, called `academix-project`.

Deployment:

- The Deployment, named `academix-deployment` and located in the `production` namespace, manages a single replica of the `academix-service` pod.
- The pod uses the Docker image `pyzone49/academix_project:1` and listens on port 8000.
- The `imagePullPolicy` is set to `IfNotPresent`, ensuring that the image is pulled only if it is not already present locally.
- The pod is labeled with `app: academix-service`, and the Deployment uses this label to identify and manage the pod.
- The restart policy is set to always restart the container if it fails.

Service:

- The Service, named `academix-service` and also in the `production` namespace, is of type `ClusterIP`.
- It exposes the `academix-service` pod on port 8000 and routes traffic to the container's port 8000.
- The Service uses a label selector to route traffic to the appropriate pod with the label `app: academix-service`.

This setup ensures that the `academix-project` frontend web service is deployed with a single replica in the `production` namespace. The `ClusterIP` Service exposes the web service internally within the Kubernetes cluster on port 8000, making it accessible to other services within the cluster.
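A sketch of how `deployment.yaml` and `service.yaml` might look based on the description above (the container name is an assumption):

```yaml
# deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academix-deployment
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: academix-service
  template:
    metadata:
      labels:
        app: academix-service
    spec:
      containers:
        - name: academix-service
          image: pyzone49/academix_project:1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
      restartPolicy: Always
---
# service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: academix-service
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: academix-service
  ports:
    - port: 8000
      targetPort: 8000
```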
This creates a Deployment for a MySQL database called `academixdb`.
First, we need to create a Persistent Volume Claim (PVC) to provide storage for our MySQL database.
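A minimal sketch of such a PVC; the claim name and requested size are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc            # assumed name
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # assumed size
```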
Next, we create a deployment for the MySQL database. This deployment includes environment variables for setting up the database, user, and password, and it mounts the PVC created earlier.
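A sketch of what this MySQL Deployment might look like; the image tag, credentials, and volume names are assumptions and mirror those used in the API sketch above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment     # assumed name
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0   # assumed image tag
          ports:
            - containerPort: 3306
          # Database, user, and password; values are assumptions and must match
          # the environment variables given to the backend API
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root-password
            - name: MYSQL_DATABASE
              value: academixdb
            - name: MYSQL_USER
              value: academix
            - name: MYSQL_PASSWORD
              value: academix-password
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql   # persist the database files
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-pvc
```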
Finally, we create a service for the MySQL database to allow communication between the backend service and the database.
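And a sketch of the matching Service; the name `mysql-service` is the same assumption used as `MYSQL_HOST` in the API sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service        # assumed name; referenced as MYSQL_HOST by the API
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
```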
By following these steps, the MySQL database is deployed and accessible within the Kubernetes cluster. The web service can now connect to this database using the provided environment variables.
Using the YAML files provided in the repository, create a gateway and virtual service for the frontend web service and backend API.
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled
Before that, we make sure to start a Minikube tunnel:
minikube tunnel
This will allow us to access the services from the browser.
Using the infrastucture.yaml file, create a gateway for the frontend web service and backend API.
This will create a gateway called `academix-gateway` and attach it to the frontend web service and backend API.
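A minimal sketch of such a Gateway resource; exposing plain HTTP on port 80 with a wildcard host is an assumption, since only the gateway name is given above:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: academix-gateway
  namespace: production      # assumed namespace
spec:
  selector:
    istio: ingressgateway    # Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
```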
Following this architecture:
Using the microservices.yaml file, create a virtual service for the frontend web service and backend API.
This will create a virtual service called `academix-virtual` and attach it to the frontend web service and backend API.
This section explains the virtual service configuration defined in that YAML file, which routes different URI prefixes to specific backend services (a full sketch of the VirtualService follows the list):
- match prefix `/contact` → host service `academix-service`
- match prefix `/about` → host service `academix-service`
- match prefix `/formations` → host service `academix-service`
- match prefix `/admin` → host service `academix-service`
- match prefix `/home` → host service `academix-service`
- match prefix `/data` → host service `flask-api-service`
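A sketch of what this VirtualService might look like in `microservices.yaml`; the wildcard host and the grouping of all frontend prefixes into a single route are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: academix-virtual
  namespace: production      # assumed namespace
spec:
  hosts:
    - "*"
  gateways:
    - academix-gateway
  http:
    # API traffic goes to the Flask backend on port 5000
    - match:
        - uri:
            prefix: /data
      route:
        - destination:
            host: flask-api-service
            port:
              number: 5000
    # All frontend prefixes go to the Django web service on port 8000
    - match:
        - uri:
            prefix: /contact
        - uri:
            prefix: /about
        - uri:
            prefix: /formations
        - uri:
            prefix: /admin
        - uri:
            prefix: /home
      route:
        - destination:
            host: academix-service
            port:
              number: 8000
```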
./apply.sh
./check_setup.sh
istioctl proxy-config routes istio-ingressgateway-podname -n istio-system
kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
Here are the scores of the labs:

For this lab, I experienced internet connectivity issues and clicked unintentionally: