CI/CD with OpenShift

Overview:

Releasing software frequently to users is usually a time-consuming and painful process. Continuous Integration and Continuous Delivery (CI/CD) can help organizations become more agile by automating and streamlining the steps involved in going from an idea, a change in the market, or a business requirement to a product delivered to the customer.

Jenkins has been a centerpiece of CI, and with the introduction of the Jenkins Pipeline plugin it has become a popular tool for building Continuous Delivery pipelines that not only build and test code changes but also push each change through the steps required to make sure it is ready for release in higher environments such as UAT and Stage.

CI/CD is one of the most popular use cases for OpenShift Container Platform. OpenShift provides a certified Jenkins container for building Continuous Delivery pipelines and scales pipeline execution through on-demand provisioning of Jenkins slaves in containers. This allows Jenkins to run many jobs in parallel and removes the wait time for running builds in large projects. OpenShift provides an end-to-end solution for building complete deployment pipelines and enables, out of the box, the automation required for managing code and configuration changes through the pipeline.

Tools required to set up a CI/CD infrastructure on OpenShift:

  • Jenkins: CI/CD engine
  • GitHub: GIT server
  • Application source code: the code base to which CI/CD is applied
  • Automation scripts: automated tests let staff avoid repetitive manual testing and focus on other project priorities, and a QA team can reuse automated test scripts to ensure each check executes the same way every time.

Continuous Integration:

Continuous Integration is a development practice in which developers are required to commit changes to the source code in a shared repository several times a day or more frequently. Every commit made to the repository is then built, which allows teams to detect problems early. Apart from this, depending on the Continuous Integration tool, there are several other functions, such as deploying the built application to a test server and providing the concerned teams with the build and test results.

Before Continuous Integration:

Let us imagine a scenario where the complete source code of the application was built and then deployed on a test server for testing. It sounds like a perfect way to develop software, but this process has many flaws. I will try to explain them one by one:

  • Developers have to wait until the complete software is developed to see the test results.
  • There is a high possibility that the test results will show multiple bugs, and it is tough for developers to locate those bugs because they have to check the entire source code of the application.
  • It slows down the software delivery process.
  • Continuous feedback pertaining to things like coding or architectural issues, build failures, test status, and file release uploads is missing, so the quality of the software can go down.
  • The whole process is manual, which increases the risk of frequent failure.

It is evident from the problems stated above that not only does the software delivery process become slow, but the quality of the software also goes down. This leads to customer dissatisfaction. To overcome such chaos, there was a dire need for a system where developers could continuously trigger a build and test for every change made to the source code. This is what CI is all about.

Traditional Integration:

In a traditional integration or software development cycle,

  • Each developer gets a copy of the code from the central repository.
  • All developers begin at the same starting point and work on it.
  • Each developer makes progress by working on their own or in a team.
  • They add or change classes, methods, and functions, shaping the code to meet their needs, and eventually, they complete the task they were assigned to do.
  • Meanwhile, the other developers and teams continue working on their own tasks, changing the code or adding new code, solving the problems they have been assigned.
  • If we take a step back and look at the big picture, i.e. the entire project, we can see that all developers working on a project are changing the context for the other developers as they are working on the source code.

Two main factors can make these problems escalate:

  • The size of the team working on the project.
  • The amount of time passed since the developer got the latest version of the code from the central repository.

Jenkins For Continuous Integration:

Continuous Integration is the most important part of DevOps and is used to integrate the various DevOps stages. Jenkins is the best-known Continuous Integration tool: an open-source automation server written in Java, with plugins built for Continuous Integration. Jenkins is used to build and test software projects continuously, making it easier for developers to integrate changes into the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
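As a sketch of this build-and-test loop, a minimal declarative Jenkinsfile might look like the following; the repository URL, Maven commands, and email address are placeholders, not part of any specific project:

```groovy
// Minimal declarative pipeline: poll the repository, then build and test.
pipeline {
    agent any
    triggers {
        // Check the source code repository for changes every 5 minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/app.git', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'            // placeholder test command
            }
        }
    }
    post {
        // Notify the concerned team when the build fails
        failure {
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: 'Check the Jenkins console output for details.'
        }
    }
}
```

Webhook-based triggering (GitHub notifying Jenkins on each push) is preferable to polling where the network setup allows it.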
Pipeline Stages and Pipeline Flow:

A pipeline stage is a logically grouped set of tasks intended to achieve a specific function within a pipeline (e.g., Build the App, Deploy the App, Test the App, Promote the App). The pipeline succeeds when all stages have completed without failure. Typically, stages run serially (one after the other) and in a consistent order, but some may run in parallel. We refer to the movement from one stage to the next as triggering. The ultimate goal of a successful pipeline is to run all the way through automatically (automatic triggering), taking the workload all the way into a production state without any intervention by humans. This level of automation allows development teams to release small amounts of code quickly and with low risk.

To achieve a pipeline with this level of capabilities requires a high level of investment on the part of the development and operations teams to build proper testing and validation into the automated pipeline. This ensures quality and compliance of the code before deploying it into production. For this reason, many pipelines initially include manual triggers — stops or pauses after certain stages, requiring manual intervention to run tests, review code, or receive sign-off before approving the pipeline to continue on to a higher stage.

There is a continuum between strictly manually triggered pipelines and automatically triggered pipelines. Many organizations begin with fully manual triggering between stages, but should look to remove as many of those manual triggers as feasible in order to reduce bottlenecks in the system.
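To make this continuum concrete, the following hedged sketch of a declarative Jenkinsfile shows stages running serially, two test suites running in parallel, and a manual trigger (an `input` step) gating promotion to production. Stage names and shell scripts are illustrative placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Build the App') {
            steps { sh './build.sh' }                // placeholder build script
        }
        stage('Test the App') {
            // Some stages may run in parallel
            parallel {
                stage('Unit Tests') {
                    steps { sh './run-unit-tests.sh' }
                }
                stage('Integration Tests') {
                    steps { sh './run-integration-tests.sh' }
                }
            }
        }
        stage('Deploy to UAT') {
            steps { sh './deploy.sh uat' }
        }
        stage('Promote to Production') {
            steps {
                // Manual trigger: the pipeline pauses here until a human signs off
                input message: 'Promote this build to production?'
                sh './deploy.sh prod'
            }
        }
    }
}
```

Removing a manual trigger is then as simple as deleting the `input` step once automated tests and validation make the sign-off redundant.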

Pipelines and Triggers:

Both OpenShift and Jenkins provide methods to trigger builds and deployments. Centralizing most of the workflow triggers through Jenkins reduces the complexity of understanding deployments and why they have occurred. The pipeline BuildConfigs are created in OpenShift, and the OpenShift Sync plugin ensures Jenkins has the same pipelines defined, which simplifies Jenkins pipeline bootstrapping. Pipeline builds may be initiated from either OpenShift or Jenkins.
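For illustration, a pipeline BuildConfig using the JenkinsPipeline build strategy might look like the sketch below; the names, the inline Jenkinsfile, and the webhook secret are placeholders. The Sync plugin picks this object up and creates the matching pipeline job in Jenkins:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline            # placeholder name
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                // openshiftBuild is provided by the OpenShift Pipeline plugin
                openshiftBuild(buildConfig: 'app', showBuildLogs: 'true')
              }
            }
            stage('Deploy') {
              steps {
                openshiftDeploy(deploymentConfig: 'app')
              }
            }
          }
        }
  triggers:
    - type: GitHub
      github:
        secret: my-webhook-secret  # placeholder secret
```

The GitHub trigger lets a push to the repository start the pipeline through OpenShift, while the same pipeline remains visible and runnable from the Jenkins UI.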
Process for Triggering a Pipeline Execution:

  • Clone the code from GitHub repository
  • Create new project on OpenShift
  • Add the Jenkins ephemeral template application to the project (it appears as an instant app in the catalog). A Jenkins deployment then starts, and after the Jenkins image has been pulled from the registry, a pod will be running. Two services are created: one for the Jenkins web UI and the other for the jenkins-jnlp service, which the Jenkins slave/agent uses to interact with the Jenkins application.
  • Configure Jenkins
  • Configure the source code in Jenkins on which CI/CD is to be done.
  • First, a developer commits the code to the source code repository. Meanwhile, the Jenkins server checks the repository at regular intervals for changes.
  • Soon after a commit occurs, the Jenkins server detects the changes that have occurred in the source code repository. Jenkins will pull those changes and will start preparing a new build.
  • If the build fails, then the concerned team will be notified.
  • If the build is successful, then Jenkins deploys the build to the test server.
  • After testing, Jenkins generates feedback and notifies the developers about the build and test results.
  • Jenkins continues to check the source code repository for changes made to the source code, and the whole process keeps repeating.
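The setup steps above can be sketched with the `oc` CLI roughly as follows; the cluster URL, project name, and file name are placeholders:

```shell
# Log in and create a new project
oc login https://openshift.example.com
oc new-project cicd-demo

# Instantiate the Jenkins ephemeral template from the catalog;
# this creates the Jenkins deployment plus the web-UI and jenkins-jnlp services
oc new-app jenkins-ephemeral

# Create the pipeline BuildConfig from a definition kept in the repository
oc create -f pipeline-buildconfig.yaml

# Start a pipeline build manually (commits can also trigger it via webhook)
oc start-build sample-pipeline
```

From this point on, the commit-build-test-notify loop described above runs without further manual setup.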

Summary:

This blog summarizes how you can readily use integrated pipelines with your OpenShift projects. Automating each gate and step in a pipeline allows you to visibly feed the results of your activities back to teams, allowing you to react quickly when failures occur. The ability to continually iterate on what you put in your pipeline is a great way to deliver quality software fast. Use pipeline capabilities to easily create container applications on demand for all of your build, test, and deployment requirements.
