As we grow both our client base and our product line-up, the stability of our platform becomes increasingly important. But as we grow our engineering team, more frequent commits make maintaining that stability ever more challenging.

A reactionary solution would be to add more stringent QA requirements to the release process, formalizing manual testing steps, perhaps even appointing a team to be responsible for ensuring the quality of the release branch. This would come at the price of slowing the release cycle, and losing some of the agile momentum that has been at the core of Yext’s culture since the beginning.

An approach like Continuous Delivery, on the other hand, promises greater confidence in released builds, precisely by encouraging frequent and rapid deployment.

This post is a condensed version of a talk given at our recent engineering offsite introducing the team at large to the concepts involved and exploring how CD might benefit Yext.

Defining Continuous Delivery

Lianping Chen of Paddy Power defines Continuous Delivery as:

“Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.”1

What follows is by no means a comprehensive description of Continuous Delivery (you’d need a whole book2 for that), but it outlines the general concepts along with some common practices.

The Pipeline

The central principle of Continuous Delivery is the pipeline, a workflow that follows a change from development right the way through to release.

Anatomy of the pipeline

(diagram by Jez Humble, published on WikiMedia)

A failure at any phase in the pipeline prevents a change from progressing to the next phase, and immediate feedback is given to the delivery team so they can work to correct the problem quickly.

Only after a change has passed every phase of the pipeline is it deployed into production. There may even be additional testing performed after the final deployment, resulting in a rollback to a previous released version if it fails.

Automation

Automation is critical to the success of a Continuous Delivery pipeline. The more that human decisions can be removed from the process, the less likely it is that a mistake will result in a phase being improperly executed or skipped over entirely.

This does not mean, however, that human decisions are completely removed. It is perfectly reasonable to include manual testing steps, or require approval before promoting a change to the next phase of the pipeline. Manual approvals are actually highlighted in the diagram above.

Testing

Testing will feature heavily within a Continuous Delivery pipeline, whether it is automated or manual.

Tests can be categorized as:

  • Unit
  • Integration
  • Acceptance
  • Non-functional (e.g.: load/performance)

The list is ordered in terms of time and resource costs, unit tests being the cheapest in both respects. Unit and integration tests are typically automated, whereas acceptance and non-functional tests can be automated, but are often executed manually.

Canary Deployments

While not necessarily considered a testing phase, a common final gate in many Continuous Delivery pipelines is the canary deployment. A canary deployment involves releasing a change to a subset of your users in order to identify any problems that may not be uncovered by more formalized testing. Should the change perform as desired for a period of time, it can be released to all users.

There are a number of ways to achieve this kind of partitioning for a release. One method would be to deploy a small-scale replica of your production environment which can be load balanced against your full environment, although this would require that changes maintain backwards compatibility with shared resources such as the database or third party APIs. Alternatively, changes could be programmatically enabled or disabled for individual requests, which would require a well-defined API to support the kill switch.
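The second approach, a per-request kill switch, is often implemented as a deterministic feature flag: hash a stable identifier into a bucket and compare it against the rollout percentage. The sketch below is illustrative only; the function and feature names are hypothetical, not part of any Yext system.

```python
import hashlib

def canary_enabled(user_id: str, feature: str, percent: int) -> bool:
    """Decide whether a user falls into the canary cohort.

    Hashing user_id together with the feature name yields a stable
    bucket in [0, 100), so the same user consistently sees the same
    variant while the rollout percentage stays unchanged.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the bucketing is deterministic, widening the rollout from, say, 5% to 20% only adds users; nobody who already had the feature loses it mid-experiment.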

Rapid Feedback

Pipelines can vary greatly in length. For example, the pipeline employed at GrubHub3 can get a change into production within an hour, whereas Facebook consolidates releasable builds into a weekly push4. The more quickly a build can be identified as unreleasable, the less time and resources need to be expended on that build, and the more rapidly it can be corrected.

To ensure that feedback is received as quickly as possible, early phases of the pipeline should be short and use relatively few resources. Later phases will then use progressively more time and resources, with the effort dedicated to a phase increasing along with the confidence in the change being tested.

For example, a pipeline could execute unit testing as the first testing phase, as it is quick and will weed out fundamental problems. Load testing could be the last testing phase before release, since we would only want to dedicate the necessary time and resources to running load testing on a build that we are sure meets our functional requirements.
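The cheap-phases-first ordering can be expressed as a simple short-circuiting loop: run each phase in cost order and stop at the first failure, so expensive phases never run against a build already known to be broken. This is a minimal sketch, not a real pipeline engine; the phase names mirror the categories above and the callables stand in for real test runs.

```python
from typing import Callable, List, Tuple

# A phase is a name plus a check that returns True on success.
Phase = Tuple[str, Callable[[str], bool]]

def run_pipeline(change: str, phases: List[Phase]) -> str:
    """Run phases cheapest-first, stopping at the first failure so
    feedback reaches the team before costly phases are attempted."""
    for name, check in phases:
        if not check(change):
            return f"failed: {name}"  # later, pricier phases are skipped
    return "releasable"

# Ordered by increasing time and resource cost.
phases: List[Phase] = [
    ("unit tests", lambda c: True),         # seconds; weeds out fundamentals
    ("integration tests", lambda c: True),  # minutes
    ("acceptance tests", lambda c: True),   # automated or manual
    ("load tests", lambda c: True),         # hours; only for proven builds
]
```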

Auditability

Thanks to their high levels of automation and consistent behaviour, Continuous Delivery pipelines are inherently auditable via logging and detailed records of the results of each phase. As a result, you always have a view into exactly what versions of your software you’re running and where.

In order to make these results more meaningful, it is also important to promote binaries along the pipeline once they have been built, so that we know we are working with the same assets at each phase, and can better reason about the results.
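One common way to enforce build-once-promote-many is to record a checksum at build time and re-verify it before each phase. The sketch below assumes a simple file artifact; the function names are illustrative, not an existing tool.

```python
import hashlib

def artifact_digest(path: str) -> str:
    """Checksum of a built artifact, recorded once at build time."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_before_promotion(path: str, built_digest: str) -> None:
    """Re-verify the artifact before each phase, so the binary we
    tested earlier is provably the binary we are about to deploy."""
    if artifact_digest(path) != built_digest:
        raise ValueError("artifact differs from the one built; aborting promotion")
```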

Continuous Delivery vs. Continuous Deployment

Continuous Delivery and Continuous Deployment are very similar methodologies, with one important distinction. In Continuous Deployment, any change that can be released into production will automatically be released. In Continuous Delivery, however, while any change passing all pipeline phases can be released, a human has the final say in approving a release.

At Yext, we occasionally want to freeze production during a particularly high-stakes demo with a large client or partner. In a Continuous Delivery environment, this kind of release freeze could be managed automatically. It would also be simple to employ practices such as stopping releases between Friday and Monday (nobody likes getting beeped at the weekend), or a policy such as Facebook’s, which requires engineers to be online when their code is being released4.

How We Can Get There

The deployment workflow at Yext currently looks like this:

The current Yext workflow

In this workflow, an engineer is responsible for the testing and deployment of their own changes. After pushing a change to our Git repo, the expectation is that the engineer will wait for feedback from Jenkins before deploying to a staging environment for manual testing. Once they are satisfied, they can deploy to production, and should monitor the production environment for a time afterwards to check for any immediate issues.

While unit tests are executed during the commit phase (as part of the Jenkins build), it is expected that automated acceptance tests (which use Selenium) will be run by the developer prior to the initial commit.

This workflow, while comprehensive, has a lot of room for steps to be haphazardly executed, or skipped entirely. In addition, the details of the process may vary from team to team, with this critical information either being detailed in a wiki (which needs to be maintained) or through oral history within a team (which can lose detail from telling to telling). A point of pride at Yext is that all of our engineers deploy code on their first day, a practice which is currently in danger of becoming ungainly. By adopting CD, we can remove a lot of the burden on individual engineers, and teams as a whole, flattening the learning curve.

What We Do Well

While our workflow is far from being a pipeline, Yext has a great base from which to migrate to true Continuous Delivery. We currently have in-house tools allowing uniform one-command deploys of any part of our system, as well as office, sandbox and staging environments, which together already take care of a lot of the work involved in building our pipeline!

As mentioned, we also have a number of automated tests, at both the unit and acceptance level. Of course, we could always use more of that!

What We’re Missing

All we really need to do at this point is to consolidate the various separate pieces of our workflow into a true pipeline. We’re currently evaluating a number of tools that may help us do this, including Jenkins, Bamboo and Go CD, and have already started testing pipelines for some of our services. Despite potentially confusing nomenclature, Go CD is currently looking like the front-runner, primarily as it treats the pipeline as a first-class entity, whereas other solutions are designed for CI with plugins for CD. But our full comparison would probably make good fodder for a future blog post.

Likely Challenges

There are still a number of questions that remain to be answered around the design and implementation of a CD pipeline, and each of these will pose interesting challenges as we proceed down this road.

The Yext architecture is essentially a series of microservices (and some not so micro) built from a single monolithic Git repository. This means the CD pipeline will need to automatically identify which services need to be built and deployed for any given change, as it is not desirable to redeploy the world for every single commit.
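In its simplest form, that identification is a mapping from source directories to services, with shared code treated as touching everything. The directory and service names below are hypothetical placeholders, not our actual repo layout.

```python
from typing import Dict, Iterable, List, Optional, Set

# Hypothetical mapping from top-level source directories to services.
# None marks shared code that every service depends on.
SERVICE_ROOTS: Dict[str, Optional[str]] = {
    "listings/": "listings-service",
    "pages/": "pages-service",
    "common/": None,
}

def affected_services(changed_files: Iterable[str], all_services: List[str]) -> Set[str]:
    """Map a commit's changed files to the set of services to rebuild."""
    services: Set[str] = set()
    for path in changed_files:
        for root, service in SERVICE_ROOTS.items():
            if path.startswith(root):
                if service is None:
                    # A shared-code change: rebuild and redeploy everything.
                    return set(all_services)
                services.add(service)
    return services
```

A real implementation would derive this mapping from the build graph rather than a hand-maintained table, but the shape of the problem is the same: changed paths in, affected build targets out.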

The current and future pace of development will heavily influence the design and implementation of a CD pipeline. The engineering team currently pushes to the main repo’s master branch 80 times per day on average, which could result in contention for resources if every change makes it through the whole pipeline. It will be critical to strike a balance between being able to identify breaking changes, and not wasting resources on trivial ones.

We also want to greatly expand our automated testing coverage over our legacy code, and ensure that we get good coverage of our new code going forward. The trick here will be expanding our testing efforts without having to pause development while we figure it out.

Our Goal

Before too long, we hope to have a pipeline that looks a little like this:

Our target pipeline

Keep an eye on this blog for updates on our progress!

  1. Continuous Delivery: Huge Benefits, but Challenges Too - Lianping Chen, Paddy Power

  2. Continuous Delivery - Jez Humble and David Farley

  3. Enabling Continuous (Food) Delivery at GrubHub - A talk at Dockercon 15, presented by Jeff Valeo

  4. Development and Deployment at Facebook - Dror G. Feitelson, Eitan Frachtenberg, Kent L. Beck