4 April 2017 - 13 minute read

I’m going to start off by saying what this post is not. This is not a discussion of the benefits of practicing Agile or adopting its logical conclusion, Continuous Deployment; I’m going to assume you’ve already been convinced. If you are looking for further persuasion then I would recommend reading The Phoenix Project – it is very entertaining, but also describes the need for Agile while providing solid analogies and business context. Alternatively, there are plenty of great posts out there.

The Components

Continuous Deployment

Continuous Deployment, Continuous Integration and Continuous Delivery are very similar concepts and are often used interchangeably.

I will be using the following definitions for the purposes of this post:

  • Continuous Integration – Build and test your code automatically.
  • Continuous Delivery – Build and test your code automatically. If tests pass, make your application available for deployments, deploy it to a test environment and verify whether the deployment was a success.
  • Continuous Deployment – Build and test your code automatically. If tests pass, make your application available for deployments, deploy it to a test environment and verify whether the deployment was a success. If the test deployment was a success, promote that code to production.
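
Since each definition strictly extends the previous one, the relationship can be sketched as data. This is purely illustrative – the stage names below are mine, not from any particular CI tool:

```python
# Illustrative only: each practice extends the previous one's pipeline.
CONTINUOUS_INTEGRATION = ["build", "test"]

CONTINUOUS_DELIVERY = CONTINUOUS_INTEGRATION + [
    "publish-artifact",       # make the application available for deployments
    "deploy-to-test",         # deploy it to a test environment
    "verify-test-deployment", # verify the deployment was a success
]

CONTINUOUS_DEPLOYMENT = CONTINUOUS_DELIVERY + [
    "promote-to-production",  # the only extra step: promote that code to live
]
```

The only difference between Continuous Delivery and Continuous Deployment is that final, automatic promotion step.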

Deploying Applications as Containers

There is a phrase I heard recently that sums up my view on software deployment very well: “Treat your servers like cattle, not pets”. It suggests we create environments for our applications that last only as long as the deployments do, destroying them when applications are updated rather than updating and nurturing them throughout the applications’ lifetime. This stops our application servers from accumulating state and becoming “snowflakes” (each one unique, unpredictable and difficult to reproduce). It also gives you the confidence that you can deploy your applications from scratch easily in a disaster scenario.

There are a couple of suitable CD techniques to achieve this without a lot of manual effort, especially when combined with Agile where we have a very short release cycle. You can use configuration management tools such as Puppet or Chef in conjunction with your continuous integration process. The other way – which I will use in this example – is to deploy your applications as containers, where they are packaged along with their required environment. Platforms such as Kubernetes and Mesos provide a distributed layer of abstraction over multiple hosts, allowing us to run containers with availability guarantees in production.

The Build Environment

The idea of snowflake servers reaches further than deployed application servers. Build servers themselves are some of the worst examples I’ve come across: accumulated state consisting of a rare combination of build tool versions and dependency libraries installed over time, as well as all the little configuration changes that were manually applied to get one particular build working. These servers become a dangerous single point of failure when they are the only environments where applications can be built. The presence of older build tools can also make introducing newer builds more difficult – possibly resulting in yet more bespoke setup.

A build process tends to change over time. This can be problematic if you have defined your build steps directly in your build tool: at the point the build process needs changing, either your branch or every other branch will fail to build and test correctly. Once the change has been merged into master and other branches have been rebased, your current branches’ HEADs will be fine, but building historical versions of your application will no longer work. Because of this, it is better to define your build steps, in code, alongside your application source. This way, the steps required to build and test are captured with your source. Every revision will then build correctly, and the build definition can be subject to the same code review controls to minimise risk and ensure the knowledge is shared.

I’ll be creating an environment that uses freshly created Docker containers for each build to overcome these two potential issues. These containers will capture all compile-time and build dependencies in a stateless, reproducible and self-documenting format; the build server itself will not contain any build dependencies. To keep our build and test steps captured in code, there are many continuous integration products that support defining your build steps alongside your code. I have found that Jenkins with its Pipeline plugin works very well, especially as it also has a module that adds Docker features to the Pipeline DSL.

An Example – Creating a Jenkins Server, Scala Microservice and Deployment Process to Kubernetes

What I’d Like To Achieve

For this post I wanted to create a reproducible reference example of setting up a CD pipeline from scratch, using Jenkins Pipeline and Docker for building, testing and triggering deployments, and Kubernetes for hosting our application.

I have created an application that can run as a container. It is written in Scala and provides a very simple gRPC API. We will:

  • Install and configure Jenkins and Kubernetes
  • Deploy a containerised application to Kubernetes
  • Create a Jenkins Pipeline job to continually build, test and deploy the application

Installation of Jenkins

These instructions are for Amazon Linux – which is based on CentOS – but it should be fairly trivial to adapt them for use with other distributions. With the recent release of Windows Server 2016 Containers, I would hope a similar Windows setup is possible too; however, that will need to be a topic for another day.

First, create and ssh to a new machine in your environment. Out of habit, I will ensure software is up to date and install vim and tmux – this is optional.

sudo yum update
sudo yum install vim tmux

We can then install our necessities: Docker, a JVM and Jenkins. I also add the “jenkins” user to the “docker” group which will give it access to start, stop and otherwise interact with containers.

sudo wget -O /etc/yum.repos.d/jenkins.repo
sudo rpm --import
sudo yum install jenkins docker java git
sudo usermod -a -G docker jenkins
sudo chkconfig docker on
sudo service docker start
sudo chkconfig jenkins on
sudo service jenkins start

Take note of the content of the /var/lib/jenkins/secrets/initialAdminPassword file. You will need this key shortly. If you use your own internal Docker repository then configure this now.

Configuration of Jenkins

Open a web browser and point it at port 8080 of your new server. It should prompt you for the initial admin password for your Jenkins instance. This was the content of the file you took note of in the previous step.

Next, close the setup wizard that appears by clicking the “X” at the top right – we’ll do these steps separately.

Select “Manage Jenkins” from the left hand side.

Select “Manage Plugins” from the management menu. There are many plugins available for Jenkins; I like to keep the selection fairly minimal so the build process is captured in our Jenkinsfile as much as possible. Select the following:

  • AnsiColor
  • Blue Ocean
  • build timeout plugin
  • Pipeline
  • Timestamper
  • Workspace Cleanup Plugin

If it is useful to you in your organisation also select:

  • Active Directory plugin

If you are going to be running many build pipelines, I would recommend installing a plugin that allows adding additional nodes easily by SSH or dynamically in AWS:

  • SSH Slaves
  • Amazon EC2 plugin

Click “Install without restart”.

Navigate to “Manage Jenkins” > “Manage Users” and add any local user accounts. Then navigate to “Manage Jenkins” > “Configure Global Security”. Here you can configure the privileges for your newly added users and configure your Active Directory connection if you installed the plugin earlier.

The Deployment Environment

To run our containers we will install a Kubernetes cluster. There are good instructions available for this from the Kubernetes Getting Started site covering many possible deployment scenarios. I have used both the kube-aws tool from CoreOS and kops from Kubernetes themselves with success.

For this example, I created two namespaces in my Kubernetes cluster to represent my live and dev environments. Generally one would use two separate clusters for this purpose.

You will want to construct a kubeconfig that contains a Context for each of your environments. These Contexts should contain the hostname and credentials/certificates needed to connect to your clusters or namespaces.

kubectl config set-cluster sam-dev --server --certificate-authority=~/kube-demo/ca.pem --embed-certs=true --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-credentials admin --client-certificate=~/kube-demo/admin.pem --client-key=~/kube-demo/admin-key.pem --embed-certs=true --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-context ROOT --cluster=sam-dev --user=admin --kubeconfig=~/kube-demo/kubeconfig
kubectl --kubeconfig=~/kube-demo/kubeconfig --context=ROOT create namespace sam-dev
kubectl --kubeconfig=~/kube-demo/kubeconfig --context=ROOT create namespace sam-live
kubectl config set-context DEV --cluster=sam-dev --user=admin --namespace=sam-dev --kubeconfig=~/kube-demo/kubeconfig
kubectl config set-context LIVE --cluster=sam-dev --user=admin --namespace=sam-live --kubeconfig=~/kube-demo/kubeconfig

Once you have installed your cluster and constructed your kubeconfig, you will want to create a Deployment to describe how your application should run.

kubectl --kubeconfig=~/kube-demo/kubeconfig --context=DEV run grpc-demo --image=sambott/grpc-test:0.2 --port=11235

You will then want to expose this as a service. In the example below I am setting the service type to “LoadBalancer”. On supported platforms such as GCE and AWS this will add an externally facing load balancer.

kubectl --kubeconfig=~/kube-demo/kubeconfig --context=DEV expose deployment/grpc-demo --type=LoadBalancer --port=11235

This process should be replicated for the live environment. You can also create YAML files to express the deployments, pods, services and additional config together, but I have skipped this for simplicity.
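
As a sketch of that declarative alternative, the two imperative commands above could be captured in a single manifest and applied with kubectl apply -f. The exact fields below are my assumption for a Kubernetes 1.5-era cluster, not a file taken from the example repository:

```yaml
# Hypothetical grpc-demo.yaml -- mirrors the `kubectl run` and
# `kubectl expose` commands above; API versions assume Kubernetes 1.5.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: grpc-demo
    spec:
      containers:
      - name: grpc-demo
        image: sambott/grpc-test:0.2
        ports:
        - containerPort: 11235
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-demo
spec:
  type: LoadBalancer
  selector:
    run: grpc-demo
  ports:
  - port: 11235
```

Keeping manifests like this in version control gives the deployment description the same review and history benefits as the application source.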

A Working Example

The Project

Clone the sample repository at

This is a Scala project that contains a gRPC server, a couple of unit tests and an example client that can be used to verify a deployment. When run, the sample client will connect to a deployed server and test that it gets a response from the defined API.

$ ./sample-client/target/universal/stage/bin/demo-client 11235

2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - Creating client
Feb 12, 2017 10:39:09 AM io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@6b927fb] Created with target
2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - Client Created
2017-02-12 10:39:09 [main] INFO com.winton.DemoClient$ - calling: getMessage(A Message!)
2017-02-12 10:39:09 [ForkJoinPool-1-worker-5] INFO com.winton.DemoClient$ - Received: Hi! You just sent me A Message!


The key features to understand are:

  • protocol/ contains the gRPC and protocol buffer definitions. These define the service contract used by both the client and server.
  • server/ contains the server source
  • sample-client/ contains the client source
  • The project is built using sbt, a command line dependency management and build tool, typically used for Scala projects. Running sbt server/run would be sufficient to download the dependencies, compile the protocol and server projects, and run the resulting binaries.
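
To give a feel for what lives in protocol/, the service definition has roughly the following shape. This is an illustrative sketch only – the actual service, message and rpc names in the repository may differ:

```protobuf
syntax = "proto3";

package demo;

// Hypothetical sketch of the service under protocol/ -- names are my guesses.
service Demo {
  // The sample client sends a message and expects it echoed back in a greeting.
  rpc GetMessage (DemoRequest) returns (DemoReply);
}

message DemoRequest {
  string message = 1;
}

message DemoReply {
  string message = 1;
}
```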

The Build Environment

To ensure that our build server is free of state – and to explicitly define and document what is required to build our project – I have added a Dockerfile to the repository. custom-build-env/Dockerfile is used to create a clean environment for every build containing:

  • Python 2.7
  • A Java 8 JDK
  • sbt

Creating the build environment may take a few minutes the first time the project is built, but it should add no more than a second to subsequent builds, because Docker caches images (and the layers that make up images).
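
A minimal sketch of what such a Dockerfile might look like is shown below. The real file lives at custom-build-env/Dockerfile in the repository; the base image, version pins and download URL here are assumptions:

```dockerfile
# Hypothetical sketch of custom-build-env/Dockerfile.
# Debian-based image that already provides a Java 8 JDK.
FROM openjdk:8-jdk

# Python 2.7, from the base image's Debian package repositories.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python2.7 && \
    rm -rf /var/lib/apt/lists/*

# sbt, unpacked from a release tarball (version and URL are assumptions).
ENV SBT_VERSION 0.13.13
RUN curl -fsL "https://dl.bintray.com/sbt/native-packages/sbt/${SBT_VERSION}/sbt-${SBT_VERSION}.tgz" \
      | tar -xz -C /usr/local && \
    ln -s /usr/local/sbt/bin/sbt /usr/local/bin/sbt
```

Because every instruction produces a cached layer, only changes to the Dockerfile itself invalidate the cache and force a rebuild of the environment.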

The Jenkinsfile

To define the build process in Jenkins, we have added a Jenkinsfile to the project. This file is written in Jenkins’ Groovy-based DSL and outlines the stages required.



node {

  def buildEnv
  def devAddress

  stage ('Checkout') {
    checkout scm
    GIT_VERSION = sh (
      script: 'git describe --tags',
      returnStdout: true
    ).trim()
  }

  stage ('Build Custom Environment') {
    buildEnv = docker.build("build_env:${GIT_VERSION}", 'custom-build-env')
  }

  buildEnv.inside {

    stage ('Build') {
      sh 'sbt compile'
      sh 'sbt sampleClient/universal:stage'
    }

    stage ('Test') {
      parallel (
        'Test Server' : {
          sh 'sbt server/test'
        },
        'Test Sample Client' : {
          sh 'sbt sampleClient/test'
        }
      )
    }

    stage ('Prepare Docker Image') {
      sh 'sbt server/docker:stage'
    }
  }

  stage ('Build and Push Docker Image') {
    withCredentials([[$class: "UsernamePasswordMultiBinding", usernameVariable: 'DOCKERHUB_USER', passwordVariable: 'DOCKERHUB_PASS', credentialsId: 'Docker Hub']]) {
      sh 'docker login --username $DOCKERHUB_USER --password $DOCKERHUB_PASS'
    }
    def serverImage = docker.build("sambott/grpc-test:${GIT_VERSION}", 'server/target/docker/stage')
    serverImage.push()
    sh 'docker logout'
  }

  stage ('Deploy to DEV') {
    devAddress = deployContainer("sambott/grpc-test:${GIT_VERSION}", 'DEV')
  }

  stage ('Verify Deployment') {
    buildEnv.inside {
      sh "sample-client/target/universal/stage/bin/demo-client ${devAddress}"
    }
  }
}

stage 'Deploy to LIVE'
  timeout(time:2, unit:'DAYS') {
    input message:'Approve deployment to LIVE?'
  }
  node {
    deployContainer("sambott/grpc-test:${GIT_VERSION}", 'LIVE')
  }

def deployContainer(image, env) {
  docker.image('lachlanevenson/k8s-kubectl:v1.5.2').inside {
    withCredentials([[$class: "FileBinding", credentialsId: 'KubeConfig', variable: 'KUBE_CONFIG']]) {
      def kubectl = "kubectl --kubeconfig=\$KUBE_CONFIG --context=${env}"
      sh "${kubectl} set image deployment/grpc-demo grpc-demo=${image}"
      sh "${kubectl} rollout status deployment/grpc-demo"
      return sh (
        script: "${kubectl} get service/grpc-demo -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'",
        returnStdout: true
      ).trim()
    }
  }
}

// vim: set syntax=groovy :

Looking at the contents of this file you will see it is broken down into logical steps using the stage function. These stages are:

  • Checkout
    • Checkout the code and calculate a version from the repository using git describe
  • Build Custom Environment
    • Create the build environment by creating an image from the custom-build-env/Dockerfile
  • Build
    • Use sbt to compile the three projects
  • Test
    • Run the unit tests for the projects
  • Prepare Docker Image
    • Create the folder containing the Docker build to package the server as a container
  • Build and Push Docker Image
    • Build the Docker image for this version of our server application
  • Deploy to DEV
    • Deploy the application to the DEV environment
  • Verify Deployment
    • Test that the deployment and application work by running our client against that instance
    • This could easily be an integration test suite instead of a client if desired
  • Deploy to LIVE
    • If everything has succeeded, deploy the code to LIVE
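
The verification step need not be elaborate – anything that exercises the freshly deployed service and fails the build on error will do. The pipeline above uses the gRPC sample client; as a hedged stand-in, a bare-bones reachability probe might look like this (the TCP-connect approach is my simplification, not what the repository ships):

```python
import socket

def verify_deployment(host, port, timeout=5.0):
    """Return True if the service at host:port accepts a TCP connection.

    A simplified stand-in for the pipeline's real check, which runs the
    gRPC sample client against the freshly deployed service.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the pipeline, a failed check simply fails the stage, so the LIVE promotion is never offered for a broken build.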

There are a few areas of the file that are worth noting:

  • After we create a Docker image for our build steps, we then run that container, mount our working directory, and run steps inside it using buildEnv.inside { ... }.
  • For the “Test” Stage we use the parallel function to run two commands in parallel.
  • Using withCredentials( ... ) makes passwords and other sensitive information available to your build from the Jenkins Credential Store. It will automatically obscure any use of them from the build output.
  • Inside our deployment function I use a Docker image that contains the Kubernetes command-line tool, kubectl. This demonstrates running steps inside a container pulled from Docker Hub.
  • After we have deployed to DEV and verified the application is working, I have added an approval step before deploying to LIVE. There are a couple of crucial components to this stage:
    • The approval is wrapped in a timeout(). Without it, Jenkins would soon fill up with a long list of jobs awaiting approval.
    • The manual input sits outside of a node { ... } block. This means the job will not reserve one of Jenkins’ scarce execution slots while it waits for your input.
    • Where possible, I would avoid the manual approval stage. With a manual approval required before a live deployment, we have a “Continuous Delivery” process where software is built, packaged, tested and presented as a candidate for live deployment. Without the manual step we have a full “Continuous Deployment” pipeline.

Adding this project to Jenkins

Adding a Jenkinsfile-based build to Jenkins is trivial – we only need to tell Jenkins which repository to reference:

  • From the Jenkins landing page, select “New Item”.
  • Give the new project a name and choose “Multibranch Pipeline”.
  • In the configuration, under “Branch Sources”, add the git repository:
  • Under “Build Triggers”, select “Periodically if not otherwise run” with an interval of a minute or so. This defines how often Jenkins will poll the repository for changes.
  • The final step is to add some credentials for this build for the Docker registry and Kubernetes configs:
    • From the Jenkins landing page, navigate to Credentials > System > Global
    • Add a “file” secret containing your kubeconfig with the ID “KubeConfig”
    • Add a “Username and Password” secret with your Dockerhub credentials with the ID “Docker Hub”

Running the build

This config will poll the git repository for changes. To manually run the build, navigate to the pipeline in the “Blue Ocean UI” and click the play icon next to the master branch.