
Achieve continuous deployment to Google Kubernetes Engine (GKE) with Cloud Build

In this codelab, you'll learn to set up a continuous delivery pipeline for GKE with Cloud Build. You'll complete the following steps:

  • Create a GKE cluster.
  • Review the app structure.
  • Manually deploy the app.
  • Create a repository for your source.
  • Set up automated triggers in Cloud Build.
  • Automatically deploy branches to custom namespaces.
  • Automatically deploy master as a canary.
  • Automatically deploy tags to production.

Step 1

Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.

If you've never started Cloud Shell before, you'll be presented with an intermediate screen describing what it is. If that's the case, click Continue (you won't see that one-time screen again).

It should only take a few moments to provision and connect to Cloud Shell.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with just a browser or your Chromebook.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  3. Run the following command to confirm that the gcloud command knows about your project:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

Step 2

Set up some variables.

export PROJECT=$(gcloud info --format='value(config.project)')
export ZONE=us-central1-b
export CLUSTER=gke-deploy-cluster
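Several later commands silently misbehave if one of these variables is empty. One way to fail fast is bash's `${VAR:?}` parameter expansion, which aborts with an error when a variable is unset. A minimal sketch (the exported values here are stand-ins so the snippet runs anywhere):

```shell
# Stand-in values; in Cloud Shell these come from the export commands above.
export PROJECT=my-sample-project ZONE=us-central1-b CLUSTER=gke-deploy-cluster

# Each expansion aborts with the given message if its variable is unset or empty.
: "${PROJECT:?run the export commands first}"
: "${ZONE:?run the export commands first}"
: "${CLUSTER:?run the export commands first}"
echo "deploying $CLUSTER to $ZONE in $PROJECT"
```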

Store values in gcloud config.

gcloud config set project $PROJECT
gcloud config set compute/zone $ZONE

Run the following commands to see your preset project and zone. When you create resources with gcloud, this is where they get created.

gcloud config list project
gcloud config list compute/zone

Step 3

Make sure that the following APIs are enabled in the Google Cloud Console:

  • GKE API
  • Container Registry API
  • Cloud Build API
  • Cloud Source Repositories API
Run the following command in Cloud Shell to enable them:

gcloud services enable container.googleapis.com \
    containerregistry.googleapis.com \
    cloudbuild.googleapis.com \
    sourcerepo.googleapis.com

Step 4

Run the following command to get the sample code.

git clone https://github.com/GoogleCloudPlatform/container-builder-workshop.git
cd container-builder-workshop

Step 5

Start your cluster with three nodes.

gcloud container clusters create ${CLUSTER} \
    --project=${PROJECT} \
    --zone=${ZONE} \
    --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

It may take a moment to create the cluster.

Step 6

Give Cloud Build rights to your cluster.

export PROJECT_NUMBER="$(gcloud projects describe \
    $(gcloud config get-value core/project -q) --format='get(projectNumber)')"

gcloud projects add-iam-policy-binding ${PROJECT} \
    --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
    --role=roles/container.developer

Your environment is ready!

You'll deploy the sample app, gceme, in your continuous deployment pipeline. The app is written in the Go language and is located in the root directory. When you run the gceme binary on a Compute Engine instance, the app displays the instance's metadata in an info card as follows:

gceme info card

The app mimics a microservice by supporting the following operation modes:

  • In backend mode, gceme listens on port 8080 and returns Compute Engine instance metadata in JSON format.
  • In frontend mode, gceme queries the backend gceme service and renders the resulting JSON in the user interface.

You'll deploy the app into two different environments:

  • Production: The live site that your users access.
  • Canary: A smaller-capacity site that receives only a small percentage of your user traffic, which you'll use to validate your software with live traffic before it's released to all users.

Step 1

Create the Kubernetes namespace to logically isolate the deployment.

kubectl create ns production

Step 2

Create the production and canary deployments and services using the kubectl apply commands.

kubectl apply -f kubernetes/deployments/prod -n production
kubectl apply -f kubernetes/deployments/canary -n production
kubectl apply -f kubernetes/services -n production

Step 3

Scale up the production environment frontends. By default, only one replica of the frontend is deployed. Use the kubectl scale command to ensure that you have at least four replicas running at all times.

kubectl scale deployment gceme-frontend-production -n production --replicas 4

Step 4

Confirm that you have five Pods running for the frontend, including four for production traffic and one for canary releases. That means that changes to your canary release will only affect 1 out of 5 (20%) of users.

You should also have two Pods for the backend, including one for production and one for canary.

kubectl get pods -n production -l app=gceme -l role=frontend
kubectl get pods -n production -l app=gceme -l role=backend
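The 20% figure falls directly out of the replica counts; a quick shell arithmetic check (using the replica numbers from above):

```shell
PROD_REPLICAS=4
CANARY_REPLICAS=1
TOTAL=$((PROD_REPLICAS + CANARY_REPLICAS))
# Integer division: 100 * 1 / 5 = 20
CANARY_PCT=$((100 * CANARY_REPLICAS / TOTAL))
echo "canary receives ${CANARY_PCT}% of traffic"
```

If you later scale production to nine replicas, the same arithmetic puts the canary at 10% of traffic.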

Step 5

Retrieve the external IP address for the production services.

kubectl get service gceme-frontend -n production

Step 6

Store the frontend service load balancer IP address in an environment variable for later use.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)
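The jsonpath expression walks the service's JSON status object. To see what it extracts, here is the same path applied to a trimmed, hypothetical sample of `kubectl get service -o json` output (the IP address is made up for illustration):

```shell
# Trimmed, hypothetical service JSON; a real Service object has many more fields.
SAMPLE='{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.7"}]}}}'

# Same traversal as {.status.loadBalancer.ingress[0].ip}, done with python3.
IP=$(echo "$SAMPLE" | python3 -c \
  'import sys, json; print(json.load(sys.stdin)["status"]["loadBalancer"]["ingress"][0]["ip"])')
echo "$IP"
```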

Step 7

Confirm that both services work by opening the frontend external IP address in your browser.

Step 8

Check the version output of the service by hitting the /version path. It should read 1.0.0.

curl http://$FRONTEND_SERVICE_IP/version

Congratulations! You deployed the sample app! Next, you'll set up a pipeline for continuously and reliably deploying your changes.

Step 1

Create a copy of the gceme sample app and push it to Cloud Source Repositories.

Step 2

Create a repository in Cloud Source Repositories, initialize the app directory as its own Git repository, and add the new repository as a remote.

gcloud source repos create default
git init
git config credential.helper gcloud.sh
git remote add gcp https://source.developers.google.com/p/$PROJECT/r/default

Step 3

Set the username and email address for your Git commits. Replace [EMAIL_ADDRESS] with your Git email address. Replace [USERNAME] with your Git username.

git config --global user.email "[EMAIL_ADDRESS]"
git config --global user.name "[USERNAME]"

Step 4

Add, commit, and push the files.

git add .
git commit -m "Initial commit"
git push gcp master

Step 1

Set up a build trigger to watch for changes to any branches except master.

cat <<EOF > branch-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "[^(?!.*master)].*"
  },
  "description": "branch",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-dev.yaml"
}
EOF

curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @branch-build-trigger.json
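If the trigger request comes back with a 400 error, a malformed JSON body is a common cause, and validating the file locally first is cheap. A sketch using only python3's stdlib (a throwaway file is written here so the example is self-contained; in practice you'd check branch-build-trigger.json):

```shell
# Hypothetical minimal trigger file, standing in for branch-build-trigger.json.
echo '{"description": "branch", "filename": "builder/cloudbuild-dev.yaml"}' > /tmp/trigger.json

# json.tool exits nonzero on invalid JSON, so this line doubles as an assertion.
python3 -m json.tool /tmp/trigger.json > /dev/null && echo "JSON OK"
```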

Step 2

Set up a build trigger to watch for changes to only the master branch.

cat <<EOF > master-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "master"
  },
  "description": "master",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-canary.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @master-build-trigger.json

Step 3

Set up a build trigger to execute when a tag is pushed to the repository.

cat <<EOF > tag-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "tagName": ".*"
  },
  "description": "tag",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-prod.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @tag-build-trigger.json

Confirm that the triggers are set up on the Triggers page.

Development branches are a set of environments that your developers use to test their code changes before submitting them to the live site for integration. Those environments are scaled-down versions of your app, but need to be deployed with the same mechanisms as the live environment.

Create a development branch

To create a development environment from a feature branch, you can push the branch to the Git server and let Cloud Build deploy your environment.

Create a development branch and push it to the Git server.

git checkout -b new-feature

Modify the site

To demonstrate changing the app, you'll change the gceme cards from blue to orange.

Step 1

Open html.go and replace the two instances of blue with orange.

Step 2

Open main.go and change the version number from 1.0.0 to 2.0.0. The version is defined in the following line:

const version string = "2.0.0"
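If you'd rather make both edits from the shell, sed can apply them. This sketch runs on throwaway copies so it's safe to try anywhere; in the real repo you'd target html.go and main.go directly:

```shell
# Throwaway stand-ins for the real source files.
printf 'card color: blue\n' > /tmp/html.go
printf 'const version string = "1.0.0"\n' > /tmp/main.go

# Recolor every card and bump the version string.
sed -i 's/blue/orange/g' /tmp/html.go
sed -i 's/1\.0\.0/2.0.0/' /tmp/main.go

cat /tmp/html.go /tmp/main.go
```

Note that `sed -i` as written assumes GNU sed (as in Cloud Shell); BSD sed on macOS needs `sed -i ''`.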

Kick off deployment

Step 1

Commit and push your changes to kick off a build of your development environment.

git add html.go main.go
git commit -m "Version 2.0.0"
git push gcp new-feature

Step 2

After the change is pushed to the Git repository, navigate to the History page user interface, where you can see that your build started for the new-feature branch.

Click into the build to review the details of the job.

Step 3

Once that completes, verify that your app is accessible.

Retrieve the external IP address for the new-feature services.

kubectl get service gceme-frontend -n new-feature

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=new-feature services gceme-frontend)

curl http://$FRONTEND_SERVICE_IP/version

You should see it respond with 2.0.0, which is the version that is now running.

Congratulations! You set up the development environment.

Now that you verified that your app is running your latest code in the development environment, deploy that code to the canary environment.

Step 1

Merge the new-feature branch into the master branch and push it to the Git server.

git checkout master
git merge new-feature
git push gcp master

Again, after you push to the Git repository, navigate to the History page user interface, where you can see that your build started for the master branch.

Click into the build to review the details of the job.

Step 2

Once complete, you can check the service URL to ensure that some of the traffic is being served by your new version. You should see about one in five requests returning version 2.0.0.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

You can stop that command by pressing Ctrl+C.
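Eyeballing the loop output works, but you can also tally the version mix with a pipeline. Here the curl responses are simulated with printf so the tallying idea itself is runnable anywhere:

```shell
# Simulated responses (4 old, 1 new) standing in for five curl calls;
# sort | uniq -c counts each distinct version, most frequent first.
printf '1.0.0\n1.0.0\n2.0.0\n1.0.0\n1.0.0\n' | sort | uniq -c | sort -rn
```

Against the live service, you would pipe the real curl loop output through the same `sort | uniq -c` stage.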

Congratulations! You deployed a canary release.

Now that your canary release was successful and you haven't heard any customer complaints, you can deploy to the rest of your production fleet.

Step 1

Tag the release and push the tag to the Git server.

git tag v2.0.0
git push gcp v2.0.0

Review the job on the History page user interface, where you can see that your build started for the v2.0.0 tag.

Click into the build to review the details of the job.

Step 2

Once complete, you can check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0. You can also navigate to the site using your browser to see your orange cards.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

Step 3

You can stop this command by pressing Ctrl+C.

Congratulations! You deployed your app to production.