Organizations want agility from their IT without sacrificing quality. How does one maintain and verify quality from the developer’s machine to the production server? How does one ensure not only that the code has as few bugs as possible, but also that it does what it was specified to do? Enter the concepts of Continuous Integration and DevOps.
Let me tell you a story. Dharmender, a developer, writes a new feature for an existing line-of-business application, does a bit of checking at his end and gets the developer sitting next to him to look it over and clear it visually. Then he checks it into the source control system and goes down to the cafeteria for lunch. While standing in line to pick up his food, he gets an alert on his phone from a server somewhere on the other side of the world that the unit tests have passed; he responds by approving a build. After lunch, while he is still chatting with his friends, Neena, his PM, calls him to confirm what she sees on her screen. She has already played with the feature in a sandboxed environment. Satisfied with his answer, she clicks a button on her screen, sending the feature into production.
The agility you demand, getting bug fixes and features to production as quickly and painlessly as possible, is achievable. A critical enabler is Continuous Integration, and the benefit you reap is the ability to integrate it into your DevOps pipeline.
“How?” you ask? Let’s delve deeper.
What is CI?
CI or Continuous Integration is the loop you build from the developer’s workstation to your production system. In between sits a layer of automated tests and human eyes to push things along (after all, we do not yet trust our own code so much as to let it flow to production unseen by human eyes). The idea is simple enough: the developer writes code and checks it into a suitable source control system. Someone (presumably another developer, a “tester”, or the modern mix of the two, an “SDET”) has already coded automated test cases called “unit tests” and checked them into the same system. The source control administrator has set up a policy so that every check-in is gated on these unit tests passing. Failed test cases automatically generate bugs in the team’s bug tracking system. Successful check-ins are approved either for immediate or scheduled (daily or nightly) builds. The build is then taken forward through the cycles of whatever SDLC pattern the team follows.
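The gating idea can be sketched with Python’s built-in unittest module. Everything here is illustrative: the function under test, the test cases and the gate_checkin helper are hypothetical stand-ins for what a real CI server wires together.

```python
import unittest

def add_discount(price, percent):
    """Hypothetical business logic under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """The unit tests someone has already checked into source control."""
    def test_ten_percent_off(self):
        self.assertEqual(add_discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(add_discount(59.99, 0), 59.99)

def gate_checkin():
    """Run the suite; a real CI server would reject the check-in on failure
    and file a bug in the tracking system instead of merging the change."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    print("check-in approved" if gate_checkin() else "check-in rejected")
```

The point is not the test framework itself but the contract: the check-in only proceeds when the gate function reports success.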
That is how all major software products in the world, including Windows and Office, are built. That is how all modern software teams enabling themselves for DevOps are doing things as well.
The link to DevOps
So when does CI become a DevOps operation? The critical link is the ability to send the build generated at the end of the CI cycle to your production server. Typically, there is a set of key stakeholders (business owners, business users, end customers, beta testers, or whatever name you prefer) who take up a certified build of a product, use it for a while and run through the usage scenarios that matter to them. Once they are satisfied, they give the all-clear and the build is marked golden and deployed at an auspicious moment in time. But this is not really amenable to the idea of extreme agility. Typical organizations still rely on human eyes to promote the bits to “golden” state. The types of problems expected here are quite different from what automation and unit testing can discern. Is that color too bright? What would this component do if another component X is also installed? What happens if we try to log in at midnight?
Different systems handle this problem differently. For example, Microsoft released “Visual Studio Release Management” as a layer binding everything from your source code to their Azure systems in one interface. This product gives the human gatekeeper controls to approve or reject a build as it moves along the SDLC path to production. But that works only if you have a Team Foundation Server (TFS) or Git-based source repository (or are willing to migrate to one). CodeShip.io and Travis are two other popular CI tools for Git repositories. Jenkins works well with SVN. CruiseControl is another CI platform that supports a wide range of source control systems.
I have said in the past that there is a difference between “Trend”, “Cutting Edge” and “Hype”. It is important for CIOs to discern one from the other when deciding to pursue, or not pursue, a certain paradigm. DevOps is one of those that live in this blurred area. A lot of material out there lands squarely in the realm of “Hype”, especially when you hear the term from the team proposing a drastic budget cut for the coming fiscal year.
DevOps, really, is about empowering the IT operations team to perform release management. It can include a process where the IT operations team also makes code changes to the application, but this is not always possible. No development team out there is going to willingly support a scenario where IT operations deploys its own code to production, and I have been on both sides of that battle. DevOps is about responding with extreme agility to a production “situation not normal” and restoring service to acceptable parameters, without involving the software engineering team. While this does not explicitly include enhancements, it does not preclude a sufficiently advanced and technically capable team from delivering them.
The way it should work is this: something goes wrong in production. The service engineer attending to the system has enough instrumentation and tooling to identify the root cause. If the fix requires a code change, they go ahead and make it. Leveraging CI, when the service engineer commits the change to source control, the pre-configured unit tests kick in, the code is built, tested and deployed to staging systems. Through release management features, the change gets a visual once-over from the business stakeholders involved and, with a click of a button, is deployed into production. What this cuts through is the delay caused by a waterfall-like SDLC model. There is no longer a lengthy process of: file a bug, get it triaged by the engineering team, someone provides estimates and these are locked in, code is written, sometime later it is tested and certified, a suitable release window is found, and the fix gets deployed after about a month (if lucky) or a quarter (typically). In the DevOps world, CI lets the same thing happen in a matter of hours or a couple of days.
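The hotfix flow just described can be sketched as a handful of pipeline stages. All names here are hypothetical stand-ins; in a real pipeline each function would call into the actual source control, build, or release-management system rather than returning canned values.

```python
def run_unit_tests(change):
    """Stand-in for the automated test gate that fires on commit."""
    return True

def deploy_to_staging(change):
    """Stand-in for the build-and-deploy step to a staging system."""
    return f"staging://{change}"

def stakeholder_approves(staging_url):
    """Stand-in for the human 'visual once-over' and approval click."""
    return True

def deploy_to_production(change):
    """Stand-in for the final push to the production server."""
    return f"prod://{change}"

def hotfix_pipeline(change):
    """Chain the stages: fail fast at either gate, otherwise ship."""
    if not run_unit_tests(change):
        return "rejected: tests failed"
    staging_url = deploy_to_staging(change)
    if not stakeholder_approves(staging_url):
        return "rejected: stakeholder veto"
    return deploy_to_production(change)
```

The design point is that every stage is automated except the one explicit human approval, which is reduced to a single decision rather than a month-long release process.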
Does the story at the beginning of this piece look more like reality now? In fact, the current state at many software houses is even ahead of that scenario. It is possible to plug into services like Twilio, Twitter, Yammer and WhatsApp to send status messages back to specific owners about build progress, failures and so on, as SMS or social media messages. Information about new features and items in the product backlog can be automatically posted to forums like UserVoice and Stack Exchange to solicit early user feedback on the proposals. In the same way, feedback and bug reports received through these forums can be fed back into internal bug tracking systems.
There are no “start” or “stop” markers for a program under continuous integration. It begins with the decision to enter the CI loop and ends with the retirement of the product. CI has far-reaching benefits for any software organization, regardless of whether it intends to go into DevOps. For one thing, CI moves an organization steadily closer to zero-downtime releases. Adopting it makes a lot of sense.