Is CI really not CD any more?

21/11/2016   Industry insights  

To put this in context, we will be looking at the continuous deployment of a Java application to WildFly servers.

Over the last few years, I’ve seen many articles, and heard many people working in DevOps and automation, throw up their arms in disgust when someone like me suggests that their CI server can also perform CD (continuous delivery and/or deployment).

The response is always the same: “CI is not CD!”.

CI is nothing more than a discipline of automated continuous testing, used to ensure that changes within the application development cycle do not break anything.  This is fine if you’re a developer or code producer.

If we take CI further, it can also encompass your IaaS and PaaS environments, which should include end-to-end testing if you want a fully automated deployment to production.  CI plays a crucial part in fully automated deployment: the tests must be strict, and they must be in place before coding or deployment begins, so that any issues arising from new elements are detected.

Most people have encountered automated CI servers such as:

  • Jenkins/Hudson
  • GoCD
  • TeamCity
  • Travis CI
  • CircleCI
  • And so on

Some of these services are relatively new and focus solely on CI, whereas longer-standing systems such as Jenkins and TeamCity were originally built to run automated tasks, and were adopted early by developers as a way of getting feedback from their automated builds and tests.  The nice thing about these older systems is that they have evolved: they offer a multitude of plug-ins, not only for the tools developers use, but also for letting DevOps carry the process all the way into the production environment.

How do we use these systems for continuous delivery/deployment?

Some of you may be aware that continuous delivery with Jenkins, TeamCity or GoCD can be achieved through the developers' build tools, such as Maven or Ant, where you specify how to push the artifact to your software repository; alternatively, you can use the plug-ins available in these systems to copy the artifact to the repository if, as DevOps, you do not wish to give authentication credentials to the developers.  Which approach you take depends on how much control your developers have over production.  For example, in the finance industry you are generally required to run strict checks on employees before they may work on production environments, to conform to auditing and regulatory requirements.  If this is the case, you will need to separate your environments so that developers cannot promote to QA or Production; instead, if tests pass in the lower environment, a notification informs the next environment's automation server that a new release is ready for it.
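As an illustration of the push itself, here is a minimal sketch of uploading a versioned .war to an Artifactory-style repository over HTTP.  The URL, repository name and credentials are placeholders, and in practice the Maven deploy step or a server plug-in would normally do this for you.

```python
# Minimal sketch: upload a versioned .war to an Artifactory-style
# repository over HTTP. The URL, repository name and credentials
# below are placeholders, not a real setup.
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # hypothetical
REPO = "dev-releases"                                            # hypothetical

def push_artifact(war_path, version, user, password):
    target = "%s/%s/myapp/myapp-%s.war" % (ARTIFACTORY_URL, REPO, version)
    with open(war_path, "rb") as f:
        # Artifactory accepts a plain HTTP PUT for deploying a file.
        resp = requests.put(target, data=f, auth=(user, password))
    resp.raise_for_status()

# push_artifact("target/myapp.war", "1.4.2", "ci-user", "secret")
```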

Once you have your artifact in a repository such as JFrog Artifactory, your automation system (Jenkins, GoCD, etc.) can poll the repository, or even an S3 bucket if you’re using AWS, or a web server with a directory listing, to see whether a new file has arrived.  This is the next task in the pipeline and, if necessary, it can be done by the same automation server, or by an automation server inside the environment that will perform the deployment.
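As a sketch of that polling step, assuming the S3 option and the boto3 library, with made-up bucket and prefix names:

```python
# Minimal polling sketch: watch an S3 bucket for a new artifact, assuming
# AWS credentials are already configured. Bucket/prefix names are made up.
import time
import boto3

s3 = boto3.client("s3")
BUCKET, PREFIX = "myapp-dev-artifacts", "releases/"   # hypothetical names

def newest_key():
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    objs = resp.get("Contents", [])
    if not objs:
        return None
    return max(objs, key=lambda o: o["LastModified"])["Key"]

last_seen = newest_key()
while True:
    time.sleep(60)                               # poll once a minute
    key = newest_key()
    if key and key != last_seen:
        last_seen = key
        print("new artifact detected: %s" % key)  # trigger the deploy job here
```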

Hosting a separate automation server in each of your environments means you can change passwords and keys for each environment independently, whenever necessary, and it meets the policy requirement that developers neither have access to these servers nor are able to influence the other environments.

Deploying artifacts through Jenkins, GoCD or TeamCity requires plug-ins that perform remote execution or remote copying.  All of these automation servers include such plug-ins, and all of them allow keys or passwords to be used for the connection.  Hook these up to a secure identity server and you can rotate your keys automatically, so that no one knows any of the passwords for the production environments.  Through remote execution you can then start the deployment with your choice of deployment software (Puppet, Chef, Capistrano, home-grown scripts, and others).
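As an illustration of what those plug-ins do under the hood, here is a minimal sketch of running a remote command over ssh with a key, using Python's paramiko library; the host, user and key path are placeholders for your own environment.

```python
# Minimal remote-execution sketch using ssh keys, similar to what the
# remote-execution plug-ins do. Host, user and key path are placeholders.
import paramiko

def run_remote(host, command, user="deploy",
               key_file="/var/lib/jenkins/.ssh/id_rsa"):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        stdin, stdout, stderr = client.exec_command(command)
        exit_code = stdout.channel.recv_exit_status()  # wait for completion
        return exit_code, stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

# rc, out, err = run_remote("app01.example.com", "sudo systemctl restart wildfly")
```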

Any automated deployment system should be able to perform final tests on the deployment to ensure everything is working.  Once the deployment has completed, there should be a final end-to-end test; for a web-based system this may mean starting a Selenium cluster and collecting the result.  On failure, a rollback process should kick in, either by switching a symlink back to the last version or by removing the new AMI in AWS and reconfiguring back to the previous one.  We should always build this in as a safety net against any lapse in testing.
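As a sketch of that safety net, the control flow is simply deploy, test, roll back on failure; the three helper functions below are hypothetical hooks for whatever deployment and test tooling you use.

```python
# Control-flow sketch of deploy -> test -> rollback. The three helper
# functions are hypothetical hooks, not a real API.
def deploy(version):
    pass  # e.g. switch a symlink, or bring up a new AMI

def run_end_to_end_tests():
    return True  # e.g. kick off a Selenium suite and return its result

def rollback(previous_version):
    pass  # e.g. point the symlink back, or revert to the previous AMI

def release(version, previous_version):
    deploy(version)
    if not run_end_to_end_tests():
        rollback(previous_version)
        raise RuntimeError("release %s failed tests; rolled back" % version)
```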

[Diagram 1: workflow of a typical DevOps pipeline across the DEV, QA and Production environments]

Diagram 1 shows an example workflow for a typical DevOps pipeline deploying applications and infrastructure to the common three environments.  If you are using IaaS and PaaS, you may be using software such as Terraform to build the three environments and populate them with subnets, software repositories and the Jenkins/Deploy servers.  The Jenkins/Deploy servers can then be preloaded with a job that checks a git repository to load all other tasks, allowing your environments to self-seed and resume work if the environments or servers ever need to be rebuilt.

Once the IaaS element is built, the Jenkins CI server continually compiles the developers' latest updates and delivers them to the DEV software repository.  The Jenkins/Deploy server watches the software repository in its own environment to determine when its next run is due.  Once a change is detected, the Jenkins server deploys the application to the servers in the environment and then runs tests against them.  For this example, it is worth noting that Tomcat or WildFly type servers lend themselves nicely to this method of delivery, since we can continually deploy the application simply by placing it in the deployments directory.

The Process

Suppose our WildFly server has the following directory structure:

/opt/wildfly/appversions

/opt/wildfly/deployments

Then we would release our artifacts to the appversions directory and add a symlink in the deployments directory pointing at the version we wish to run.  The Jenkins server would have its ssh public key deployed to the application servers running WildFly, allowing Jenkins to run remote commands that do the following:

  1. Copy the latest artifact from the software repository into the appversions directory (the file name will include the version)
  2. Remove the current .war file from the deployments directory
  3. Wait for the application to go to .undeployed
  4. Remove the current .undeployed file from the deployments directory
  5. Create the symlink to the new version of the .war file
  6. Wait for the .deployed file to appear

As part of this deployment process, we need to ensure that the .deployed marker file appears; otherwise we must switch the symlink back to the previous version and return a failure to Jenkins.
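To make the sequence concrete, here is a minimal sketch of those six steps, plus the rollback check, as a script Jenkins could execute on the application server over ssh.  The paths follow the layout above; the application name, the location the new artifact was fetched to, and the timeouts are assumptions for illustration.

```python
# Sketch of the six deployment steps above, written as a script that
# Jenkins could run on the application server over ssh. Paths match the
# layout described earlier; the application name and timeouts are assumed.
import os
import shutil
import time

APPVERSIONS = "/opt/wildfly/appversions"
DEPLOYMENTS = "/opt/wildfly/deployments"
APP = "myapp"  # hypothetical application name

def wait_for(path, timeout=120):
    """Poll for a WildFly marker file such as .undeployed or .deployed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(2)
    return False

def deploy(new_war, previous_war):
    link = os.path.join(DEPLOYMENTS, APP + ".war")

    # 1. Copy the versioned artifact into appversions (assumes an earlier
    #    step has already fetched it from the software repository).
    shutil.copy(new_war, APPVERSIONS)

    # 2. Remove the current .war symlink so WildFly undeploys the app.
    if os.path.lexists(link):
        os.remove(link)

    # 3. and 4. Wait for the .undeployed marker, then remove it.
    if wait_for(link + ".undeployed"):
        os.remove(link + ".undeployed")

    # 5. Symlink the new version into the deployments directory.
    os.symlink(os.path.join(APPVERSIONS, os.path.basename(new_war)), link)

    # 6. Wait for the .deployed marker; roll the symlink back on failure.
    if not wait_for(link + ".deployed"):
        os.remove(link)
        os.symlink(os.path.join(APPVERSIONS, os.path.basename(previous_war)), link)
        raise SystemExit(1)  # non-zero exit reports failure back to Jenkins
```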

Once Jenkins receives the success it can continue to its next task, which is to test the application end-to-end.  For this, Jenkins would start a task on a Selenium server and wait for the result before continuing.  As with the deployment, it should be prepared to revert to the previous version if the end-to-end testing fails.  And as with the application servers, the Selenium servers would hold the Jenkins server's public key (or a password).
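For the test step itself, a minimal sketch along these lines could run against the Selenium server; the grid URL, application URL and the title check are all placeholder assumptions.

```python
# Sketch: run a smoke test against the freshly deployed application via a
# remote Selenium server. The grid and application URLs are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def smoke_test():
    driver = webdriver.Remote(
        command_executor="http://selenium.example.com:4444/wd/hub",
        options=Options(),
    )
    try:
        driver.get("http://myapp-dev.example.com/")
        return "MyApp" in driver.title   # hypothetical success check
    finally:
        driver.quit()

# if not smoke_test(): switch the symlink back and fail the Jenkins job
```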

After successful testing, Jenkins would copy the artifact from the DEV software repository to the QA software repository.  This becomes the trigger for the QA environment to start the same process, perhaps with different end-to-end tests if the software interacts with, or depends on, other software.
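The promotion itself can be as simple as copying the file between repositories.  Here is a minimal sketch that downloads the artifact from the DEV repository and re-uploads it to QA; the URLs, repository names and credentials are placeholders.

```python
# Sketch: promote a tested artifact from the DEV repository to the QA
# repository by downloading and re-uploading it. URLs, repository names
# and credentials are placeholders for your own setup.
import requests

BASE = "https://artifactory.example.com/artifactory"   # hypothetical

def promote(version, auth):
    name = "myapp-%s.war" % version                    # hypothetical artifact
    src = "%s/dev-releases/myapp/%s" % (BASE, name)
    dst = "%s/qa-releases/myapp/%s" % (BASE, name)
    artifact = requests.get(src, auth=auth)
    artifact.raise_for_status()
    resp = requests.put(dst, data=artifact.content, auth=auth)
    resp.raise_for_status()   # QA's automation server will spot the new file

# promote("1.4.2", ("ci-user", "secret"))
```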

We would continue this all the way through to production, hence the strict requirement for testing: not just unit tests or CI, but environment integration testing too.  Although rollback sounds like a 1980s thing to do, it remains an essential element of a fully automated deployment environment, since we can never guarantee that the tests are 100% perfect, or that a service our new update relies on is already in production.

Summary

Some of you may already be doing this if you have physical infrastructure on which your WildFly/JBoss systems run, or you may be using many different pieces of software to manage your environment, but it is worth remembering that tools such as Jenkins, TeamCity and GoCD have a whole host of useful workflow elements already built in.

Steve Shilling

Steve Shilling is an experienced Senior Unix/Linux System Administrator, with extensive experience in system automation and automated deployments. He has a technical blog, and runs a training company.