Can You Deploy Applications Quickly?

I have had a number of conversations recently with customers who have invested in IT automation tools and platforms.  Some are seeing small gains from the automation, but most are struggling to realize the true ROI.  What I typically find is that these targeted investments are point solutions designed to drive efficiency into a specific step or task, like server provisioning or application deployment; as such, they are often disconnected from the underlying value chain, or delivery pipeline.  This disconnect confines these tools and platforms within the existing communication channels, delivery methodology, and operating models, preventing any real gains in velocity, agility, or quality.  Without looking holistically at the problem and focusing on end-to-end deployment expertise, the benefits of DevOps will remain elusive.

Lean management foundations in DevOps challenge enterprises to widen their lens and build solutions within the context of the end-to-end value chain rather than focusing on a specific task or activity. In practice, this means building a complete, end-to-end tool chain with every release, even though some of the capabilities and workflows may be very rudimentary (even manual) in early versions. For example, my teams regularly build a deployment pipeline to support a “Hello World” application in Sprint 0.  This MVP pipeline both demonstrates and measures the team’s ability to effectively deploy and configure software and its corresponding infrastructure.  As new tools and automation are added to the pipeline, more robust versions, or iterations, emerge.  This iterative cycle allows new value to be readily recognized by the team and, more importantly, new feedback to be ingested.
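As a sketch of how small that Sprint-0 pipeline can be (the file names and directory layout here are illustrative assumptions, not a prescribed toolchain), assuming nothing more than a POSIX shell:

```shell
#!/bin/sh
# Minimal "Hello World" deployment pipeline: build, deploy, smoke-test.
# Every name and path below is an illustrative stand-in.
set -e                         # fail the pipeline on the first error

APP_DIR=$(mktemp -d)           # stand-in for a target environment
ARTIFACT="hello.txt"           # stand-in for a build artifact

# 1. Build: produce a versioned artifact
echo "Hello World (build 1)" > "$ARTIFACT"

# 2. Deploy: copy the artifact into the environment
cp "$ARTIFACT" "$APP_DIR/$ARTIFACT"

# 3. Smoke test: verify the deployed artifact is what we built
grep -q "Hello World" "$APP_DIR/$ARTIFACT" && echo "deploy ok"
```

Each step here is trivially manual today, which is exactly the point: it gives the team a working end-to-end skeleton to measure and then replace, stage by stage, with real build, deploy, and test tooling.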

By focusing on deployment first, the team has the basic pipeline needed to create new artifacts and to monitor and manage those artifacts systematically as they move through the development lifecycle. The basic pipeline provides both the ability to instantiate known-good versions of applications and infrastructure and the ability to layer new changes onto those known-good configurations for testing.  The ability to start from, or roll back to, a known-good state is critical for testing changes and for debugging errors and issues introduced by a new change.  By always starting with a known-good base and minimizing the amount of change introduced at any one time, teams reduce the risk and effort needed to conduct root cause analysis and remediation when an issue arises.  This accelerates throughput and improves quality.
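One common way to make "roll back to a known-good state" cheap is to keep every release side by side and point a single symlink at the active one, so a rollback is just repointing the link. A minimal sketch, assuming a hypothetical `releases/` layout rather than any particular deployment tool:

```shell
#!/bin/sh
# Sketch: each release lives in its own directory; "current" is a symlink
# that always points at the active, known-good version.
set -e

BASE=$(mktemp -d)
mkdir -p "$BASE/releases/v1" "$BASE/releases/v2"
echo "app v1" > "$BASE/releases/v1/app"
echo "app v2" > "$BASE/releases/v2/app"

# Repoint "current" atomically-enough for a sketch: remove, then relink.
deploy() {
  rm -f "$BASE/current"
  ln -s "$BASE/releases/$1" "$BASE/current"
}

deploy v1   # establish the known-good baseline
deploy v2   # ship a new release; v1 is left untouched
deploy v1   # rollback: repoint to the known-good version

cat "$BASE/current/app"   # prints: app v1
```

Because no release directory is ever modified in place, every version remains a reproducible, known-good starting point for testing the next change.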

Too often, changes are manually deployed to an environment as an update or upgrade. This practice inevitably corrupts your known-good waypoints and creates “snowflake” implementations as updates are layered on top of each other.  By stacking changes, teams lose traceability and repeatability.  Without traceability or repeatability, it is nearly impossible to define and recreate working configurations or known-good versions consistently.  Configuration management databases (CMDBs) are supposed to protect against this risk, but these systems often require manual updates and, as such, regularly fall out of sync with reality.  As a result, teams struggle to recover quickly from an outage or incident.  Incident management often turns into an expensive game of find-the-needle-in-a-haystack.
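The gap between what the CMDB records and what is actually running can be caught mechanically: compare the recorded state against the live state and flag any difference. A minimal drift-check sketch, with the file names and the config key invented for illustration:

```shell
#!/bin/sh
# Sketch: detect configuration drift by diffing the recorded (desired)
# state against the state actually deployed. All names are illustrative.
set -e
DIR=$(mktemp -d)

echo "max_connections=100" > "$DIR/recorded.conf"   # what the CMDB says
echo "max_connections=250" > "$DIR/live.conf"       # what a manual tweak left behind

if diff -q "$DIR/recorded.conf" "$DIR/live.conf" >/dev/null; then
  echo "in sync"
else
  echo "drift detected"   # the record no longer matches reality
fi
```

Run continuously in a pipeline, a check like this turns "the CMDB is out of date" from a surprise discovered mid-outage into a routine, fixable signal.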

Looking past the glitz of automation and the promises of speed and agility, DevOps is about the disciplined practice of continuously improving the delivery pipeline. DevOps provides a framework and approach for refactoring complicated processes and introducing highly collaborative, agile practices that can lead to cultural change in the enterprise.  Much of this optimization and change is done through automation and orchestration tools; however, these tools are both selected and implemented through a DevOps design pattern that is laser-focused on delivering value back to the enterprise.

We all read about the promise of DevOps: more releases, faster deployment, higher quality, better recovery time, and satisfied employees. It all starts with your ability to deploy infrastructure and applications successfully and repeatably to your consumers.  Before getting spun up about which tools and platforms are best, you might want to examine just how good you are at deploying change.

If you would like to learn more or need help jumpstarting or course correcting your DevOps Transformation, feel free to contact us at devops@emc.com.

About the Author: Bart Driscoll

Bart Driscoll is the Global Innovation Lead for Digital Services at Dell Technologies. This practice delivers a full spectrum of platform, data, application, and operations related services that help our clients navigate through the complexities and challenges of modernizing legacy portfolios, implementing continuous delivery systems, and adopting lean DevOps and agile practices. Bart’s passion for lean, collaborative systems combined with his tactical, action-oriented focus has helped Dell Technologies partner with some of the largest financial services and healthcare companies to begin the journey of digital transformation. Bart has broad experience in IT ranging from network engineering to help desk management to application development and testing. He has spent the last 22 years honing his application development and delivery skills in roles such as Information Architect, Release Manager, Test Manager, Agile Coach, Architect, and Project/Program Manager. Bart has held certifications from PMI, Agile Alliance, Pegasystems, and Six Sigma. Bart earned a bachelor’s degree from the College of the Holy Cross and a master’s degree from the University of Virginia.