Continuous Delivery – Ensuring Consistent, Repeatable, and Successful Deployment


Deployments are crucial for IT at large organizations, but how do you ensure a consistent, repeatable, and successful deployment across development, testing, and production? Mark Shapiro, Segue’s Senior Architect for Application Development, joined an online panel on the subject as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery, and DevOps. The initiative is sponsored by Electric Cloud, whose tools power Continuous Delivery at businesses like SpaceX, Cisco, GE, and E*TRADE by automating their build, test, and deployment processes.

Below are a few soundbites from Mark’s contribution to the panel.

How Does Environment Fidelity Affect the Continuous Delivery Pipeline?

“One of the big problems we face with fidelity across our pipeline is that a lot of our work is with the government. They have their own environments that they create and host the applications on, and we don’t necessarily have the ability to exactly mirror the production environment in our test and dev environments. While the goal is to have that 100% environment fidelity, so that the scripts that you write to do deployments work across all of these different environments, we do have to have some ability to change that. For example: here I’m installing in this environment, and the script knows that for this environment it does XYZ. But for this other environment, it does XYZ and there’s this additional ABC that it has to do.
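The pattern Mark describes, one shared deployment script with documented, environment-specific additions, might be sketched like this. The environment names and step names here are illustrative assumptions, not taken from the post:

```python
# Hypothetical sketch of an environment-aware deploy step: the common
# actions run everywhere, and named environments layer on extra steps.
# Environment and step names are illustrative, not from the post.

def deploy(environment: str) -> list[str]:
    """Return the ordered deploy steps for a given environment."""
    common = ["stop_services", "copy_artifacts", "run_migrations", "start_services"]
    extras = {
        # A locked-down government production host needs additional steps;
        # these differences are documented alongside the script.
        "gov-prod": ["apply_stig_hardening", "notify_isso"],
    }
    return common + extras.get(environment, [])

print(deploy("test"))      # common steps only
print(deploy("gov-prod"))  # common steps plus the documented extras
```

Keeping the per-environment differences in one lookup, rather than in separate scripts, keeps the shared steps identical everywhere and makes the exceptions easy to see and document.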

This is especially complicated when you are dealing with a server farm environment. In the cloud, it is nice and easy; we can set up another set of cloud servers that look just like these, but sometimes that doesn’t work – for example, when the hardware that the system is running on is prohibitively expensive to reproduce in a dev/test environment. So you do what you can and you automate as much as you can. Where it’s a must to have differences in environments, you document them, making sure that anyone who is touching the system knows this environment is a little bit different, because of this, and here is how we deal with it. And we do configuration management on our documentation together with the source codes, our builds, and the output of the builds.”

What Do You Do to Ensure Fidelity?

“The Government is only part of our customer base. We also have customers where we can fully control the environment. One of the keys to ensuring fidelity is separating the configuration from the application – including environment and software configuration. Everything has to be outside the application but accessible by it. So when we do a deployment, if there are environment updates that we need to do – for example, if a particular build requires an operating system patch, external application, or external library to be installed – that gets included. There are references in the application build, but the actual execution is done through the deployment script, which, ideally, is 100% automated. If we need to have a system admin log in to apply a patch, that’s not good. If we can say ‘This build needs this patch,’ and the build’s script sees that, goes and gets the patch, downloads it, and installs it – that is good. Then we know that it has been installed correctly and comes from a reliable source.
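The idea of a build declaring its prerequisites, with the deployment script resolving them from a trusted source rather than a manual admin login, could be sketched roughly as follows. The manifest keys, package names, and the `install-package` command are all hypothetical:

```python
# Hypothetical sketch: the build carries a manifest of prerequisites, and
# the deploy script turns that manifest into install commands pulled from
# a trusted internal repository. All names and URLs are illustrative.

BUILD_MANIFEST = {
    "version": "2.4.1",
    "prerequisites": [
        {"name": "openssl-patch-3.0.8", "source": "https://repo.example.com/patches/"},
        {"name": "libxml2-2.10", "source": "https://repo.example.com/libs/"},
    ],
}

def resolve_install_commands(manifest: dict) -> list[list[str]]:
    """Build the install commands the deploy script would run, in order.

    A real script would execute these (and verify checksums or signatures);
    returning them as data keeps the resolution step easy to test.
    """
    return [
        ["install-package", prereq["source"] + prereq["name"]]
        for prereq in manifest["prerequisites"]
    ]

for cmd in resolve_install_commands(BUILD_MANIFEST):
    print(" ".join(cmd))
```

Because the prerequisites live in the build artifact itself, every environment that receives the build gets the same patches from the same source, with no human in the loop to forget one.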

I try to have as little configuration as possible in the actual application binary or in any of the files that ship with it. I try to put everything that I can in a database. That way, if I have multiple servers, and all of them need to run in the same environment on the same version of the application, I don’t have to worry about the wrong configuration file getting deployed. I set the configuration in the database. The application connects to the database, pulls its configuration, and does whatever it needs to do with it. Any time we have had configuration files on a server, we’ve run into problems. Every single time an update was made, the update was missed on one of the servers. Users accessing the application then have a different experience depending on which server they are on, and it’s a nightmare. So get everything in one central place and automate, automate, automate.”
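The database-backed configuration approach might look roughly like this at startup. This sketch uses SQLite as a stand-in for whatever shared database the servers actually point at; the table name and keys are assumptions:

```python
# Minimal sketch of pulling configuration from a central database at
# startup. SQLite in memory stands in for the shared database; the
# table name and config keys are illustrative assumptions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_config (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany(
    "INSERT INTO app_config VALUES (?, ?)",
    [("feature_flags", "new_ui=on"), ("api_base_url", "https://api.example.com")],
)

def load_config(connection: sqlite3.Connection) -> dict:
    """Every server pulls the same configuration from one central place."""
    return dict(connection.execute("SELECT key, value FROM app_config"))

config = load_config(conn)
print(config["api_base_url"])
```

Since every server loads from the same table, a configuration change is made once and is immediately consistent across the farm, eliminating the missed-file-on-one-server failure mode described above.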

How Do You Move from Scripting to an End-to-end View of Your Pipeline and What Are the Benefits There?

“Let’s assume that your script right now is a document that has the step-by-step process. If you try to go from that to a fully automated pipeline, you are most likely going to fail. In most cases, it’s too much to take on all at once. My recommendation would be to start with automating the individual steps that you can. Usually that means build and deployment scripts, since they are the easiest. Automated testing is a big hurdle to get past. So is automated load testing.

When you are evaluating tools to use, I recommend picking tools that can work across the entire pipeline. If you are going to need to do something, even if you’re not doing it now, make sure the tool supports it. For tools you are already using, it is best if one of them can work across the whole pipeline or if they can talk to each other across the whole pipeline. Automate everything as far as you can. Once you are comfortable, you can move to combining individual pieces into a larger and larger pipeline until you’ve got everything fully automated.”
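The incremental approach Mark recommends, automating steps individually and then chaining them, can be sketched as follows. Each step function is a placeholder for a real build or deploy script, and the names are purely illustrative:

```python
# Hypothetical sketch of growing a pipeline incrementally: each automated
# step is a callable, and the pipeline is simply the ordered list of steps
# you have automated so far. Step names are illustrative.

def build() -> str:
    return "built"          # stand-in for a real build script

def deploy() -> str:
    return "deployed"       # stand-in for a real deploy script

def run_pipeline(steps) -> list[str]:
    """Run each automated step in order; append steps as you automate them."""
    return [step() for step in steps]

# Start small with build + deploy; later, append automated tests,
# load tests, and so on until the whole pipeline is covered.
print(run_pipeline([build, deploy]))  # → ['built', 'deployed']
```

Treating the pipeline as a list of independently useful steps means each automation effort pays off on its own, and the end-to-end view emerges by composition rather than by a risky big-bang rewrite.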

Need Help? Contact us