Canary Deployment using Istio and Google Kubernetes Engine (GKE)

Rishabh Jain
7 min read · Jul 2, 2020

In a production environment, the best practice is to roll out new features in phased releases, which creates a need to split incoming traffic between the older and newer versions of the application.

In technical parlance, this requirement is termed a “Canary Deployment”, and it becomes very important when rolling out newer versions of your application.

A combination of GKE and Istio helps achieve this methodology; let’s see how the two work in conjunction.

Why Istio?

You might be wondering: why should I use Istio? Why can’t I do this in a classic Kubernetes cluster alone?

Well, you can achieve this in classic Kubernetes as well, but enabling Istio in the Kubernetes cluster gives you fine-grained control over the traffic coming in and out of the cluster. This is why Istio is called a service mesh.

Istio provides the following benefits:

  • It provides monitoring, traffic control, and security between the services running in Kubernetes Engine.
  • It can be used to understand how distributed services work together.
  • It acts as a unified proxy-based router and routes traffic to the services.
  • It supports mutual (two-way) TLS authentication, i.e. the client can also present a certificate signed by a Certificate Authority.
  • It can track services and help in understanding failures.
  • It can perform fault injection to help you understand how the app behaves under failure.

Requirements

To get started, you need the following-

  1. A GCP Project. You can opt for a free trial.
  2. A Kubernetes Cluster with Istio feature enabled (refer below).
  3. An editor to develop flask services and YAML files

Steps

  • Create a Kubernetes Cluster with Istio Enabled
  • Inject Istio
  • Create one Flask API
  • Create a second Flask API
  • Package both the APIs
  • GKE Deployment YAML for API-1
  • GKE Deployment YAML for API-2
  • Istio Gateway YAML
  • Istio Destination Rule YAML
  • Istio Virtual Service YAML
  • Communicate with your service
  • GitHub Code

Create a Kubernetes Cluster with Istio feature

Create a Kubernetes Cluster from the GCP console and enable the Istio feature as shown in the image below

Create a Kubernetes cluster with Istio

Connect to the cluster using the following command-

gcloud container clusters get-credentials cluster-1 --zone us-central1-c --project <PROJECT_ID>

Note: Please add your project id in the above command

Inject Istio

In order to take advantage of all of Istio’s features, pods in the mesh must be running an Istio sidecar proxy.

To enable this sidecar proxy, run the following command-

kubectl label namespace default istio-injection=enabled

Note: This command is executed for the namespace “default”.

Create one Flask API

To create a flask app, open your text editor and code the following-

flask-1.py code file

This Flask app runs on the host IP address on port 8080. It is an API that returns the string “This is v1”, where v1 stands for version 1.
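The code file itself isn’t reproduced above, but a minimal flask-1.py matching that description could look like this (the route path and variable names are assumptions; the port and response string come from the description):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Return the version string so callers can see which version served the request
    return "This is v1"

if __name__ == "__main__":
    # Bind to all interfaces on port 8080, matching the container port used later
    app.run(host="0.0.0.0", port=8080)
```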

Create a second Flask API

To create a flask app, open your text editor and code the following-

flask-2.py code file

This second Flask app also runs on the host IP address on port 8080. It is an API that returns the string “This is v2”, where v2 stands for version 2.
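A minimal flask-2.py would be identical to the first app except for the returned string (again, the route path is an assumption):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The only difference from flask-1.py: report version 2
    return "This is v2"

if __name__ == "__main__":
    # Same host/port as the first app so both containers look alike to Kubernetes
    app.run(host="0.0.0.0", port=8080)
```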

Package both the APIs

After creating both applications, let’s package them and push them to Docker Hub.

To package the applications we will write 2 Dockerfiles-

Dockerfile-1

FROM python:latest
RUN pip install flask
ADD flask-1.py /
CMD ["python3", "/flask-1.py"]

Dockerfile-2

FROM python:latest
RUN pip install flask
ADD flask-2.py /
CMD ["python3", "/flask-2.py"]

After writing both Dockerfiles, let’s build the images using the following commands-

$ docker build -f Dockerfile-1 -t <Docker_image_name>:<tag> .
$ docker build -f Dockerfile-2 -t <Docker_image_name>:<tag> .

Note: You can use your own image names in the above commands. I have used 995040/canary-deployment:flask1 and 995040/canary-deployment:flask2 as the names.

Now let’s verify the images once-

$ docker images

The output of docker images-

Push the docker images on the Docker hub-

$ docker push <repo>/<docker_name_1>:<tag_1>
$ docker push <repo>/<docker_name_2>:<tag_2>

Note: <repo> is your Docker Hub username.

GKE Deployment YAML for API-1

We’ve built the Docker images and pushed them to Docker Hub. It’s time to create a Deployment and a Service YAML to deploy these images onto the Kubernetes cluster.

Deployment and Service Yaml File-

Deployment file for the first application

The above YAML file has the following parameters:

  • Deployment Name- flask-deployment-1
  • Deployment Labels-
app: flask-1
  • Deployment Selector-
app: flask-pod
  • Pod labels-
app: flask-pod
version: v1
  • Service Name- lbservice
  • Service Selector-
app: flask-pod
  • containerPort- 8080 (the port of the container)
  • targetPort- 8080 (the container port to which traffic is sent)
  • port- 80 (the port on which the service accepts requests)
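Assembling the parameters above, the manifest could look roughly like this; the replica count is an assumption, and the image is the one tagged earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment-1
  labels:
    app: flask-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-pod
  template:
    metadata:
      labels:
        app: flask-pod
        version: v1
    spec:
      containers:
      - name: flask-1
        image: 995040/canary-deployment:flask1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: lbservice
spec:
  type: LoadBalancer
  selector:
    app: flask-pod
  ports:
  - port: 80
    targetPort: 8080
```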

Deploy the YAML file-

$ kubectl apply -f <file_name>

You have deployed your first application, let’s verify the deployment using the following commands-

$ kubectl get deployments
$ kubectl get pods
$ kubectl get svc

GKE Deployment YAML for API-2

Follow the similar process that you’ve followed before for the second application.

Deployment and Service Yaml File-

Deployment file for the second application

The above YAML file has the following parameters:

  • Deployment Name- flask-deployment-2
  • Deployment Labels-
app: flask-2
  • Deployment Selector-
app: flask-pod
  • Pod labels-
app: flask-pod
version: v2
  • Service Name- lbservice
  • Service Selector-
app: flask-pod
  • containerPort- 8080 (the port of the container)
  • targetPort- 8080 (the container port to which traffic is sent)
  • port- 80 (the port on which the service accepts requests)
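A sketch of the second deployment, mirroring the first but with the v2 labels and image. Since the lbservice Service selects pods by app: flask-pod, it covers both versions and only needs to be created once:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment-2
  labels:
    app: flask-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-pod
  template:
    metadata:
      labels:
        app: flask-pod
        version: v2
    spec:
      containers:
      - name: flask-2
        image: 995040/canary-deployment:flask2
        ports:
        - containerPort: 8080
```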

Deploy the YAML file-

$ kubectl apply -f <file_name>

You have deployed your second application, let’s verify the deployment using the following commands-

$ kubectl get deployments
$ kubectl get pods
$ kubectl get svc

Note: Now you have two versions v1 and v2 deployed under a single Load Balancer service of k8s.

Istio Gateway YAML

After verifying the running pods, deployments, and service, it’s time to create an Istio Gateway, which handles the ingress and egress traffic that enters or leaves the service mesh.

The gateway is applied to the Envoy proxies which are running at the edge of the mesh.

Istio gateway [SRC- Draw.io]

Write the YAML file to create a gateway-

Gateway YAML file

This gateway configuration lets HTTP traffic from “*” into the mesh on port 80, but it does not specify the route shown in the diagram, i.e. once the traffic is inside the mesh, where should it be sent? Should it go to the flask-1 or the flask-2 containers? That routing is configured in the Virtual Service YAML file. Before we configure the Virtual Service, let’s define some rules for the traffic.
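A sketch of such a gateway, assuming Istio’s default ingress gateway and a hypothetical resource name:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept HTTP traffic for any host on port 80
```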

Apply the YAML file-

$ kubectl apply -f <file_name>

Istio Destination Rule YAML

The Destination Rule specifies where the traffic can be sent: should it go to version 1 or to version 2?

To specify such routes for the traffic, subsets are to be created in the YAML file for the destination rule.

Destination Rule’s YAML File-

Destination-Rule YAML file

The parameters for Destination Rules-

  • host- lbservice is the name of the service on which the pods are exposed
  • subsets-
name: v1 represents version 1 of the application.
name: v2 represents version 2 of the application.
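A sketch of the destination rule, assuming the subsets select pods by the version label attached to each deployment (the resource name is arbitrary):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask-destination-rule
spec:
  host: lbservice          # the k8s service fronting both versions
  subsets:
  - name: v1
    labels:
      version: v1          # pods from flask-deployment-1
  - name: v2
    labels:
      version: v2          # pods from flask-deployment-2
```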

Apply the YAML file-

$ kubectl apply -f <file_name>

Istio Virtual Service YAML

The Virtual Service specifies how the traffic should be sent: what should the split of the traffic be?

The Virtual Service routes requests to the different subsets that were defined in the destination rule. The “hosts” field can be an IP address, a DNS name, or a wildcard (“*”) prefix. In the example below, 100% of the traffic is routed to version 1 and 0% to version 2; the split is controlled with the weight key.

HTTP traffic is routed on the basis of the match condition: requests whose URI matches the prefix “/” are split between the two subsets according to their weights. Conditions can also match on headers or regexes.

Virtual Service YAML file

Some notable parameters of the above YAML file-

  • Host- the k8s service on which the pods are exposed
  • Subset- the version names that were specified in the Destination Rule
  • Weight- the percentage of traffic to be routed to each subset
  • Match- the condition upon which traffic is routed
  • Gateway- the Virtual Service is bound to a gateway so that it can receive the ingress traffic
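A sketch of the virtual service, assuming the subsets are named v1 and v2 in the destination rule; the gateway name here is hypothetical and must match whatever name you gave your gateway:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - flask-gateway          # bind to the ingress gateway created earlier
  http:
  - match:
    - uri:
        prefix: /          # route any request path
    route:
    - destination:
        host: lbservice
        subset: v1
      weight: 100          # all traffic to version 1 for now
    - destination:
        host: lbservice
        subset: v2
      weight: 0            # raise this to shift traffic to version 2
```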

Apply the YAML file

$ kubectl apply -f <file_name>

Communicate with your Service

You can communicate with your service on the ingress-gateway IP of your Istio installation. To get the ingress-gateway IP, run the following command-

$ kubectl get svc -n istio-system

Get the IP and visit http://<INGRESS-GATEWAY-IP> from your browser to see your application.

To send multiple requests in a while loop you can write the following command in bash-

$ while sleep 1; do curl http://<INGRESS-GATEWAY-IP>/; echo ""; done

Github Code

Download all the code from this GitHub repo to replicate the process.
