In order for you to keep up with customer demand, you need to create a deployment pipeline. You need to get everything in version control. You need to automate the entire environment creation process. You need a deployment pipeline where you can create test and production environments, and then deploy code into them, entirely on demand.

—Erik to Grasshopper, The Phoenix Project [1]

Continuous Deployment

Continuous Deployment (CD) is the process that takes validated Features from Continuous Integration and deploys them into the production environment, where they are tested and readied for release. It is the third element in the four-part Continuous Delivery Pipeline of Continuous Exploration (CE), Continuous Integration (CI), Continuous Deployment, and Release on Demand.

Figure 1. Continuous Deployment in the context of the Continuous Delivery Pipeline

Since tangible value occurs only when end users are successfully operating the Solution in their environment, CD is a critical capability for each Agile Release Train (ART) and Solution Train. This demands that the complex routine of deploying to production receives early and meaningful attention during development.

The continuous exploration and continuous integration articles lead directly to this point, calling out specific mechanisms for maintaining deployment readiness throughout feature development.

The result is smaller batches of features, some of which are always ready for deployment and release. Now we just need to continuously deploy these valuable assets so that they can be available immediately in production. This gives the business the ability to release more frequently, and thereby lead its industry with the shortest sustainable lead time.

Details

The goal is always the same: to deliver increasingly valuable solutions to the end users as frequently as possible. A leaner and more Agile approach to the development process, as SAFe describes, helps establish faster development flow by systematically reducing time in the development cycle and introducing Built-in Quality approaches.

However, in many cases development teams still deliver solutions to deployment or production in large batches. There, the actual deployment and release of the new solution is likely to be manual, error prone, and unpredictable, adversely affecting release-date commitments and delivered quality.

To address this, the development and operations teams must focus their attention collectively on the downstream deployment process. By reducing the transaction cost and risk at that point, the business can move to a more continuous deployment process, tuned to deliver smaller batch sizes more economically. That is the final key to unlocking a more continuous delivery process.

Six Recommended Practices for Continuous Deployment

SAFe recommends six specific practices to help establish a more efficient and continuous deployment process, as highlighted in Figure 2.

Figure 2. Six recommended practices for Continuous Deployment

Each is described in the sections below.

Maintain Development and Test Environments to Better Match Production

Often teams discover that what seemed to work well in development does not work in production. This results in much time spent frantically fixing new defects directly in the production environment, typically in emergency mode.

One root cause: Development environments often don’t match production environments. For example, as Em Campbell-Pretty notes from her experience in adopting SAFe at Telstra [2], “The team quickly made a surprising discovery: Only 50 percent of the source code in their development and test environments matched what was running in production.”

Part of the reason for this is practicality and cost. For example, it may not be feasible to have a separate load balancer or production-equivalent data set for every development team. However, most software configurations can be affordably replicated across all environments. Therefore, all changes in the production environment (such as component or supporting application upgrades, new development-initiated configuration/environment changes, and changes in system metadata) must be replicated back to all development environments. This can be accomplished with the same workflow and pipeline that’s used for continuous delivery of the production solution.

To support this, all configuration changes need to be captured in version control, and all new actions required to enable the deployment process should be documented in scripts and automated wherever possible.
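One way to act on version-controlled configuration is to check other environments against it automatically. The sketch below is a minimal, hypothetical illustration of detecting drift between a reference configuration held in version control and a snapshot taken from a development environment; the keys and values are invented for the example.

```python
# Minimal sketch: detect configuration drift between the version-controlled
# reference configuration and a snapshot from another environment.
# All keys and values below are hypothetical examples.

def config_drift(reference: dict, environment: dict) -> dict:
    """Return every key whose value differs (or is missing) between the two."""
    all_keys = set(reference) | set(environment)
    return {
        key: (reference.get(key), environment.get(key))
        for key in all_keys
        if reference.get(key) != environment.get(key)
    }

# Reference configuration kept under version control (illustrative values)
production = {"app_version": "2.4.1", "db_schema": "v42", "cache_ttl": 300}
# Snapshot captured from a development environment
development = {"app_version": "2.4.1", "db_schema": "v41", "cache_ttl": 300}

drift = config_drift(production, development)
# drift == {"db_schema": ("v42", "v41")}
```

A check like this, run by the same pipeline that deploys the solution, turns "replicate all changes back to development" from a policy into an enforced step.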

Maintain a Staging Environment that Emulates Production

This leads to a second issue. For many reasons, development environments will never match production environments identically. In production, for example, the application server is behind a firewall, which is preceded by a load balancer. The much larger-scale production database is clustered, and media content lives on separate servers. And on, and on. Once again, Murphy’s Law will take effect: Deployment will fail, and debugging and resolution will demand an unpredictable amount of time.

This typically creates the need for a staging environment that bridges the gap. Even though pursuing production equivalency may never be financially prudent—for example replicating the hundreds or thousands of servers required—there are numerous ways to achieve the functional equivalent without such an investment.

For example, it may be sufficient to have only two instances of the application server, instead of 20, and a cheaper load balancer from the same vendor. In a cyber-physical system example—that of a crop harvesting combine, for instance—all the electronics subsystems, drive motors, and hardware actuators that operate the machine can be practically provisioned as a staging environment, without the 15 tons of iron.
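One common way to achieve functional equivalence without production-scale cost is to drive every environment from the same provisioning definition, parameterized only by scale. The sketch below assumes hypothetical environment specs (the server counts and component names are invented for illustration):

```python
# Minimal sketch: one provisioning routine, parameterized per environment,
# so staging is functionally equivalent to production at a fraction of the
# scale. Environment names and parameters are hypothetical.

ENVIRONMENTS = {
    "production": {"app_servers": 20, "db_clustered": True, "load_balancer": "hw"},
    "staging":    {"app_servers": 2,  "db_clustered": True, "load_balancer": "sw"},
}

def provision(env_name: str) -> list:
    """Emit the ordered provisioning steps for the named environment."""
    spec = ENVIRONMENTS[env_name]
    steps = [f"create app server {i + 1}" for i in range(spec["app_servers"])]
    steps.append("configure clustered database" if spec["db_clustered"]
                 else "configure single database")
    steps.append(f"attach {spec['load_balancer']} load balancer")
    return steps

# Staging runs the same kinds of steps as production, just fewer of them.
assert len(provision("staging")) == 4
assert len(provision("production")) == 22
```

Because both environments come from the same definition, any structural difference between staging and production is visible in one place, under version control.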

Deploy to Staging Every Iteration

It’s impossible to understand the true state of any system increment unless it can be operated and tested in a production-like environment. So, one suggestion seems obvious: Do all System Demos from the staging environment. That way, deployability becomes part of the Definition of Done (DoD).

And while continuous deployment readiness is critical to establishing a reliable delivery process, the real benefits of shortening lead time come from actually deploying to production more frequently. This also helps eliminate long-lived production support branches and the resulting extra effort needed to merge and synchronize all instances where changes are needed.

Automate Testing of Features and Nonfunctional Requirements

But when you deploy more frequently, you have to test more frequently. And that calls for test automation, including the ability to run all the unit tests associated with the stories that implement the feature as an automated regression suite. Running automated acceptance tests at the feature level is also required.

Deploying incrementally also means that teams will be deploying partial functionality—individual stories, parts of features, features that depend on other yet-to-be-developed features, and features that depend on external applications and the like. Therefore, some of what teams need to test against will not be present in the system at the time they need to test it. Fortunately, there are many evolving techniques for addressing this, including applying mocks, stubs, and service virtualization.
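As one concrete illustration of the mocking technique mentioned above, the sketch below tests a feature whose upstream dependency does not yet exist, by standing in a mock via Python's `unittest.mock`. The service and function names (`PricingService`-style `quote`, `total_price`) are hypothetical:

```python
# Minimal sketch: testing a feature whose external dependency is not yet
# available, by substituting a mock. The names here are hypothetical.
from unittest.mock import Mock

def total_price(pricing_service, items):
    """Feature under test: sums quotes obtained from an external service."""
    return sum(pricing_service.quote(item) for item in items)

# The real pricing service is an external, yet-to-be-integrated application;
# a mock stands in for it during automated testing.
pricing = Mock()
pricing.quote.side_effect = lambda item: {"widget": 10, "gadget": 25}[item]

assert total_price(pricing, ["widget", "gadget"]) == 35
pricing.quote.assert_called_with("gadget")
```

Stubs and service virtualization follow the same principle at larger scope: the feature is exercised against a controllable substitute until the real dependency arrives.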

And finally, while automating 100 percent of nonfunctional tests may not be practical, what can be automated should be automated—especially in those areas where new functionality might affect system performance. Otherwise, some change could have an unanticipated and detrimental effect on performance, reliability, compliance, or any other system quality.
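A nonfunctional check of this kind can be as simple as a timed assertion that runs in the pipeline on every deployment. The sketch below is illustrative only; the operation and the latency budget are assumptions, not values from this article:

```python
# Minimal sketch: an automated performance (NFR) check suitable for running
# in the deployment pipeline. The operation and threshold are hypothetical.
import time

LATENCY_BUDGET_SECONDS = 0.5  # assumed NFR threshold for the example

def critical_operation():
    """Stand-in for a system behavior governed by a performance NFR."""
    return sum(range(100_000))

def test_latency_within_budget():
    start = time.perf_counter()
    critical_operation()
    elapsed = time.perf_counter() - start
    assert elapsed < LATENCY_BUDGET_SECONDS, f"latency {elapsed:.3f}s over budget"

test_latency_within_budget()
```

Run on every build, such a check catches the "unanticipated and detrimental effect on performance" before the change reaches production rather than after.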


Note: For more on test-driven development (TDD), acceptance test-driven development (ATDD), deployment testing, test automation, and testing Nonfunctional Requirements (NFRs), see the companion Test-First article and references [3] and [4].


Automate Deployment

By now it should be clear that the actual deployment process itself also requires automation. This includes all the steps in the flow, including building the system, creating the test environments, executing the automated tests, and deploying and validating the verified code and associated systems and utilities in the target environment. This final, critical automation step is achievable only via an incremental process, one that requires the organization’s full commitment and support, as well as creativity and pragmatism, as the teams prioritize target areas for automation. The end result is an automated deployment process, as Figure 3 illustrates.

Figure 3. Automated deployment process

We can see from Figure 3 that there are three main processes that must be automated:

  1. Automatically fetch version-controlled development artifacts – The first step is to automatically fetch all necessary artifacts from the CI process, including code, scripts, tests, supporting configuration items, and metadata—all of which must be maintained under version control. This includes the new code, all required data (dictionaries, scripts, look-ups, mappings, etc.), all libraries and external assemblies, configuration files, and databases. Test data must also be version controlled and manageable enough for the teams to update every time they introduce, create, or test a new scenario.
  2. Automatically build the system and its environments – Many deployment problems arise from the error-prone, manually intensive routines needed to build the actual runtime system and its environments. They include preparing the operating environment, applications, and data; configuring the artifacts; and initiating the required jobs in the system and its supporting systems. To establish a reliable deployment process, the environment setup process itself needs to be fully automated. This can be facilitated largely by virtualization, using Infrastructure as a Service (IaaS) and applying special frameworks for automating configuration management jobs.
  3. Automatically deploy to production – Finally, the process of deploying to production and validating all the deployed assets in that environment must also be automated. This has to be done in such a way that it doesn’t interfere with production operation. Techniques are discussed in further detail in Release on Demand.
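The three automated steps above can be sketched as an ordered pipeline in which each stage consumes the previous stage's output. This is a hypothetical skeleton, not a real deployment tool; the function names and payload shape are invented for illustration:

```python
# Minimal sketch of the three automated deployment steps as a pipeline.
# Step implementations are placeholders; all names are hypothetical.

def fetch_artifacts(version: str) -> dict:
    """Step 1: pull code, scripts, tests, data, and config from version control."""
    return {"version": version, "artifacts": ["code", "scripts", "tests", "config"]}

def build_environment(payload: dict) -> dict:
    """Step 2: build the runtime system and its environment from the artifacts."""
    return {**payload, "environment": "built"}

def deploy_to_production(payload: dict) -> dict:
    """Step 3: deploy and validate the assets without disrupting operations."""
    return {**payload, "deployed": True}

def run_pipeline(version: str) -> dict:
    result = fetch_artifacts(version)
    for step in (build_environment, deploy_to_production):
        result = step(result)  # in a real pipeline, each step fails fast
    return result

assert run_pipeline("1.7.0")["deployed"] is True
```

In practice each placeholder would invoke real tooling (version control, provisioning, validation), but the structure — an automated, ordered chain with no manual hand-offs — is the point.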

Decouple Deployment from Release

Finally, one myth about continuous delivery is exactly that: the myth that you must deliver continuously to the end user, whether or not you—or they—like it. But that exaggerates the case and ignores a basic economic and market reality: the act of releasing a solution depends on more than just the state of the system. Many factors affect the timing of a release, including precedent events, customer readiness, channel and supplier support, trade shows and market events, and compliance demands. This is also discussed further in “Release on Demand.”
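A common mechanism for this decoupling, consistent with the point above though not named in this article, is the feature flag: new code is deployed to production but stays dark until the business decides to release it. The flag and function names below are hypothetical:

```python
# Minimal sketch: decoupling deployment from release with a feature flag.
# The new checkout code is deployed but dark until the flag is flipped.
# Flag and function names are hypothetical.

FEATURE_FLAGS = {"new_checkout": False}  # deployed, not yet released

def checkout(cart_total: float) -> str:
    if FEATURE_FLAGS["new_checkout"]:
        return f"new checkout flow: total ${cart_total:.2f}"
    return f"legacy checkout flow: total ${cart_total:.2f}"

assert checkout(19.99).startswith("legacy")

# Releasing becomes a business decision, made independently of deployment:
FEATURE_FLAGS["new_checkout"] = True
assert checkout(19.99).startswith("new")
```

With this pattern, deployment frequency is an engineering concern while release timing remains governed by the market and compliance factors listed above.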


Learn More

[1] Kim, Gene, et al. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. IT Revolution Press, 2013.

[2] Kim, Gene, Jez Humble, Patrick Debois, and John Willis. The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press, 2016.

[3] Humble, Jez and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley, 2010.

[4] Gregory, Janet and Lisa Crispin. More Agile Testing: Learning Journeys for the Whole Team. Addison-Wesley Signature Series (Cohn). Pearson Education. Kindle Edition.


Last update: 13 September, 2017