Assume that you are responsible for building DevOps pipelines for a Java-based application and are using the most common DevOps tools: JIRA for issue tracking, Git for version control, Jenkins as the orchestrator, SonarQube for code quality, Maven for builds, JUnit for testing, Docker for deployment, ServiceNow for change management, and Splunk for measurement. How can you practically implement this?

The first step is designing the pipeline structure, which depends mainly on the branching strategy supporting continuous delivery. Nowadays, trunk-based development (also called single-branch development) is most commonly used: developers work on short-lived feature branches and collaborate on source code in a single branch called the 'trunk'. Developers are encouraged to check in their code multiple times a day, and a release branch is created at each iteration to push ready code to production.

With the above branching strategy in place, let's start building automation. Version control should be used for everything required to recreate the software: source code, build, deploy, and test scripts, environment provisioning code, and so on. You can maintain automation code either in the same source code repo or in a dedicated DevOps Git repo. If your delivery team includes a DevOps engineer, it makes sense to keep the DevOps automation scripts in the same repo, and their deliverables are treated as part of the release. If you have a central DevOps team instead, you can fork that DevOps repo to get the automation code and build on it. The key is that your automation code is source controlled and your pipeline is ready to recover from failures.

For Pipeline as Code, there are several scripting options: scripted pipelines, declarative pipelines, and Blue Ocean pipelines. Or, if you are very new to Jenkins, start with freestyle jobs to get hands-on first. Every style comes with pros and cons; study them well and choose the best option for you. For example, scripted pipelines let you write your own custom logic and offer a lot of flexibility, while some features, such as restarting a run from a specific stage and the built-in post section for notifications, are only available with declarative pipelines. Freestyle jobs are heavily affected by plugin upgrades, and there is a real chance of losing your configuration. I will cover a comparison of all these different techniques in my next blog.
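
To make this concrete, here is a minimal sketch of a declarative Jenkinsfile for the Java stack above. The tool names ('Maven3', 'JDK11') and the SonarQube server name ('MySonarQube') are placeholders that must match your own Jenkins global tool and server configuration:

    pipeline {
        agent any
        tools {
            // Names must match Manage Jenkins > Global Tool Configuration
            maven 'Maven3'
            jdk 'JDK11'
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean package -DskipTests'
                }
            }
            stage('Unit Test') {
                steps {
                    sh 'mvn -B test'
                }
            }
            stage('Code Quality') {
                steps {
                    // Requires the SonarQube Scanner plugin; 'MySonarQube' is
                    // the server name configured in Jenkins.
                    withSonarQubeEnv('MySonarQube') {
                        sh 'mvn -B sonar:sonar'
                    }
                }
            }
        }
        post {
            // The post section is one of the declarative-only features mentioned above.
            always {
                junit '**/target/surefire-reports/*.xml'
            }
        }
    }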

Recently, the mother-seed job concept was introduced, which allows you to recreate your pipelines in case of deletion or migration. You just need to create the mother seed script on top of all the child pipeline folders, as shown below:

[Figure: the 'DevOps' mother seed script sits above the child folders (Build Scripts, Environment Provisioning Scripts, Deploy Scripts, Quality Scripts, Metrics Scripts, Feature Pipeline, Dev Pipeline, and Deploy Pipeline), each grouping its related functions.]
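
Seed jobs are typically implemented with the Jenkins Job DSL plugin. As a rough sketch, assuming a Git repo laid out with the folders above (the repo URL, branch, and pipeline names here are hypothetical), a mother seed script could regenerate the child pipelines like this:

    // Mother seed script (Jenkins Job DSL plugin).
    // The repo URL, branch, and pipeline folder names are illustrative placeholders.
    def pipelines = ['feature-pipeline', 'dev-pipeline', 'deploy-pipeline']

    pipelines.each { name ->
        pipelineJob(name) {
            definition {
                cpsScm {
                    scm {
                        git {
                            remote { url('https://example.com/devops-automation.git') }
                            branch('trunk')
                        }
                    }
                    // Each child pipeline folder carries its own Jenkinsfile.
                    scriptPath("${name}/Jenkinsfile")
                }
            }
        }
    }

Running this one seed job after a migration or an accidental deletion recreates every child pipeline, each pointing at its own Jenkinsfile.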

Here, you can see that for every capability we have created a separate folder to group related functions: automated build scripts, automated environment provisioning scripts, automated deployment scripts, quality assurance scripts, and so on. This lets us work in a microservices-style model and eases change management. You can break it down further as convenient.

For every pipeline, we have created a separate folder containing the main pipeline script and its seed script. Every pipeline has stages that call the relevant code from the capability folders. This eliminates duplicated code across pipelines while keeping the scripts reusable, repeatable, and reliable.
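
One simple way to get this reuse, sketched below with the hypothetical folder and file names from the structure above, is Jenkins' load step, which pulls shared Groovy code from a capability folder into a pipeline stage:

    stage('Build') {
        steps {
            script {
                // Load shared build logic from the capability folder instead of
                // duplicating it in every pipeline. The loaded script must end
                // with 'return this' so its methods can be called here.
                def mavenBuild = load 'build-scripts/mavenBuild.groovy'
                mavenBuild.run()
            }
        }
    }

For larger setups, Jenkins shared libraries provide the same reuse with proper versioning.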

Also, if you look at the number of pipelines required here, it is minimal. All pipeline triggers are tied to Git actions such as commit and merge, so manual triggering is not required. Developers work on code, and all checks run automatically in the background.

[Figure: continuous development pipelines: a Feature Pipeline triggered on commit, a Dev Pipeline triggered on merge, and a Release Pipeline triggered on commit.]
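
With a Jenkins multibranch pipeline, these triggers largely come for free: webhooks notify Jenkins on every commit and merge, and when conditions route each branch through the right stages. A sketch, with illustrative branch naming:

    pipeline {
        agent any
        stages {
            stage('Feature Checks') {
                when { branch 'feature/*' }   // every commit on a short-lived feature branch
                steps {
                    sh 'mvn -B verify'
                }
            }
            stage('Release Packaging') {
                when { branch 'release/*' }   // only on a release branch cut from trunk
                steps {
                    sh 'mvn -B deploy'
                }
            }
        }
    }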

Below are some useful tips for designing CI/CD pipelines:

  • Triggers: Pipelines are triggered automatically by SCM actions such as on commit, on merge, and on comment.

  • Code Coverage: Aim for 70-80% coverage from regression testing, and a little higher, 80-85%, from unit testing.

  • Code Quality: Implement quality checks with strict quality gates that block bad code immediately (see the first sketch after this list). You can also configure these rules as pre-commit hooks to prevent bad-quality code from being pushed to the repo.

  • Unit Testing: Most unit tests are written as functions, and they alert you early if a code change breaks functionality. It's highly recommended to integrate the developers' IDE (Integrated Development Environment) with unit testing and code quality tools through extensions so that these checks run on every compile. This helps drive Test Driven Development (TDD).

  • Build: Build packages are stored in a build artifact repo such as Artifactory. Every artifact should be traceable to the source code from which it was built. You can use build properties or a build-info file embedded in the package itself to record details such as the package creation date, user, branch, commit ID, and release name (see the second sketch after this list). Successful release artifacts should be tagged or labelled in Git so that stable code can be traced later. Also, binary libraries should be kept in the build artifact repo rather than the source code repo.

  • System Testing: A simple rule of thumb is to make regression testing clearance mandatory for merge acceptance; this enables early defect detection. A simple JIRA-Jenkins integration for automatic defect creation will let you track metrics such as defect density and defect leakage on active development branches. Make sure to pay enough attention to the five elements of testing that drive Test Driven Development (TDD): test strategy, test framework, test cases, test data, and test execution.

  • Feedback: Remember that the point of automation is to reduce cycle time and improve quality. Metrics and measures should be captured in Splunk, and every pipeline report can be posted to the relevant JIRA issue as a comment so that IMs/leads can keep track of the progress of feature work. Every merge request should carry a scorecard from the pipeline indicating whether it is healthy enough for acceptance.
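
For the quality-gate tip above, here is a sketch of a stage that blocks the build on the SonarQube quality gate. It assumes the SonarQube Scanner plugin and a webhook from SonarQube back to Jenkins; coverage thresholds such as the 70-85% targets can be encoded in the gate itself:

    stage('Quality Gate') {
        steps {
            // Abort the build if SonarQube reports the analysis failed the gate.
            timeout(time: 10, unit: 'MINUTES') {
                script {
                    def qg = waitForQualityGate()
                    if (qg.status != 'OK') {
                        error "Pipeline aborted: quality gate returned ${qg.status}"
                    }
                }
            }
        }
    }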
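
And for artifact traceability, a sketch that embeds build metadata in the package and tags the commit in Git. The properties file path and tag format are illustrative, and pushing the tag assumes the job is configured with Git push credentials:

    stage('Stamp and Tag') {
        steps {
            script {
                // Embed traceability data in the package itself.
                def info = "build.timestamp=${new Date()}\n" +
                           "git.branch=${env.GIT_BRANCH}\n" +
                           "git.commit=${env.GIT_COMMIT}\n" +
                           "jenkins.build=${env.BUILD_NUMBER}\n"
                writeFile file: 'target/classes/build-info.properties', text: info
            }
            // Tag the exact commit that produced this artifact.
            sh "git tag -a build-${env.BUILD_NUMBER} -m 'Built by Jenkins'"
            sh "git push origin build-${env.BUILD_NUMBER}"
        }
    }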

Now, let’s look at deployment.

Green-Blue deployment:

“Green-Blue” deployments, more commonly known as blue-green deployments, are becoming popular because they enable zero-downtime releases. The pattern is also called A/B deployment or red/black deployment. It involves two identical environments running in production with exactly the same configuration. The “green” environment is the active production environment serving all users. A new release is deployed to the “blue” environment, and user traffic is diverted from green to blue using a load balancer. Once all users are diverted and the system is running smoothly, blue becomes green, and green becomes blue. This approach eliminates downtime for end users, and it reduces risk: if the new version on blue is not stable, you can immediately roll back to the last version by switching traffic back to green.
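
As a sketch of how a pipeline stage might drive the switch (the deploy, smoke-test, and traffic-switch scripts below are hypothetical placeholders for your own Docker and load balancer tooling):

    stage('Blue-Green Deploy') {
        steps {
            // Deploy the new release to the idle ("blue") environment.
            sh './deploy-scripts/deploy.sh --env blue --version "$VERSION"'
            // Smoke-test blue before exposing it to real users.
            sh './quality-scripts/smoke-test.sh --env blue'
            // Flip the load balancer: blue becomes the active environment.
            sh './deploy-scripts/switch-traffic.sh --from green --to blue'
        }
    }

If the smoke test fails, the traffic switch never runs and green keeps serving users, which is exactly the rollback story described above.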

In conclusion, start your DevOps journey with precise and effective CI/CD pipeline design. These tips will help you accelerate development speed while delivering value.

Reshma Shinde

DevOps Project Manager
