I pose the question, “I script, therefore I Infrastructure-as-Code?”
If Infrastructure-as-Code were really that simple, we would all have adopted DevOps and Infrastructure-as-Code (IaC) long ago. But DevOps and IaC are not simple, and achieving a truly successful result requires deliberate measures. Automation is a critical component of cloud platforms, continuous delivery, and IaC/DevOps operating models, but merely implementing automation will not deliver the results or transformation promised by IaC literature, success stories, and conference sessions. Traditional automation helps with Day 1 provisioning, but it isn’t resilient over the long term, adding cost and complexity to Day 2 operations. And while scripting is a little faster, it doesn’t significantly improve failure rates or recovery time.
So, if IaC isn’t just writing code to create infrastructure, what is it? How is it different?
Infrastructure-as-Code is applying software development practices and processes to the definition and management of infrastructure resources. It includes processes like the infrastructure development lifecycle (the practice of building automation in a development and test environment before publishing or pushing into production), and practices such as version control for the management of code and binaries needed to compose your infrastructure. IaC, however, is more than just practices and processes; it is also an architectural paradigm built for change and measured by resiliency, two core principles of chaos engineering.
As a result of these characteristics, IaC differs from traditional automation in the following ways:
- Engineering Lifecycle or Pipelines
- Automated Testing
- Version Control
- Composite Architectures
- Parameterized Code
- Agile-based Methodologies
Engineering Lifecycle or Pipelines
Engineering pipelines are automated workflows that manage changes to infrastructure resources and services. Changes include patches, upgrades, scale requests, configuration updates, and more. As changes are created, the automated pipeline shepherds them through the testing, packaging, and deployment required before that resource or service is published for general consumption by end-users, developers, customers, and the like. Integrated into this pipeline is a series of automated test tools, connected via APIs, that verify and validate the technical soundness and fitness for purpose of each change. Like software development pipelines, infrastructure pipelines consist of multiple quality gates, such as DEV, TEST, STAGE, and PROD. Each gate is designed to boost confidence that the change is good, safe, and performant.
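As a rough sketch, a gated pipeline can be modeled as an ordered series of stages, each of which must pass before a change is promoted to the next. The stage names and checks below are illustrative assumptions, not any specific product's API:

```python
# Minimal sketch of a gated promotion pipeline. Gate names and checks
# are hypothetical; a real pipeline would call external test tools via APIs.

def run_pipeline(change, gates):
    """Promote a change through ordered quality gates.

    Returns the name of the last gate passed; raises on the first failure.
    """
    for name, checks in gates:
        for check in checks:
            if not check(change):
                raise RuntimeError(f"{name} gate failed: {check.__name__}")
        print(f"{name}: passed, promoting change {change['id']}")
    return name

# Illustrative checks that grow stricter closer to production.
def lint_ok(change):        return change.get("lint", False)
def unit_tests_ok(change):  return change.get("unit", False)
def integration_ok(change): return change.get("integration", False)
def capacity_ok(change):    return change.get("capacity", False)

GATES = [
    ("DEV",   [lint_ok, unit_tests_ok]),
    ("TEST",  [integration_ok]),
    ("STAGE", [capacity_ok]),
    ("PROD",  []),  # final gate: release for general consumption
]

change = {"id": "CHG-1042", "lint": True, "unit": True,
          "integration": True, "capacity": True}
```

The point of the structure is that a failure at any gate stops promotion immediately, so a bad change never reaches production.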
Automated Testing
Automated testing is key to improving velocity and release frequency because multiple small changes can be tested in parallel and error feedback returned swiftly. As in software development, the depth, breadth, and complexity of testing increase as you move through quality gates and closer to production. Dev testing typically consists of basic unit testing, code analysis, and packaging to quickly verify that the change is accurate, compliant, and deployable. Later stages test integrations, dependencies, security, compliance, capacity, and more. The goals of test automation are twofold: first, increase confidence that a change is good; second, increase release frequency and accelerate throughput. The key to Infrastructure-as-Code is shifting this testing as close to the engineer making the change as possible, commonly referred to as “shifting left.” Done well, engineers get feedback from passed or failed tests quickly, reducing how often they must switch context back to bug fixing.
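A “shifted-left” test can be as simple as validating an infrastructure definition on the engineer's machine before it ever enters the pipeline. The spec format and rules below are hypothetical, purely to show the shape of such a check:

```python
# Sketch of a shift-left validation: catch policy violations locally,
# before the change is committed. Spec fields and rules are illustrative.

APPROVED_SIZES = {"small", "medium", "large"}

def validate_spec(spec):
    """Return a list of violations for a proposed VM spec (empty = pass)."""
    errors = []
    if spec.get("size") not in APPROVED_SIZES:
        errors.append(f"size {spec.get('size')!r} not in approved catalog")
    if not str(spec.get("name", "")).startswith("vm-"):
        errors.append("name must follow the 'vm-' naming convention")
    if spec.get("public_ip") and spec.get("env") == "prod":
        errors.append("public IPs are not allowed in prod")
    return errors

good = {"name": "vm-web-01", "size": "medium", "env": "prod", "public_ip": False}
bad  = {"name": "web01", "size": "xxl", "env": "prod", "public_ip": True}
```

Because the check runs in seconds, the engineer gets the failure feedback immediately instead of hours later from a downstream gate.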
Version Control
Version control is perhaps the most important component of the IaC platform, and it enables three core capabilities:
- Auditable code. Source code control tools capture check-in and commit events, enabling engineering and compliance teams to know who made a change, what changed, and when it changed. These capabilities are core features of all version control systems and are built from the system’s event logs. Source code control tools, a type of version control tool, also offer diff reporting that allows an engineer to compare different versions of scripts and code to highlight what changed.
- Known-good flags. Source code and artifact repositories can flag code and/or binaries as known-good (meaning successfully running in production) so that engineers can always start new development efforts from the “latest and greatest” package. This is different from a gold image: with version control tools, you can mark multiple versions of the same code base and binaries as known-good to support multiple versions of a service, which is easier than maintaining and supporting multiple gold images.
- Rollbacks. Unlike gold images, source code tools enable teams to back out changes that fail or throw errors in production through simple command-line operations. If used alongside configuration automation tools such as Puppet, Chef, Ansible, or Terraform, your platform can automatically correct system configurations triggered by a rollback event in the source code control tool. Any machine dependent on that component and version is automatically updated.
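The known-good and rollback capabilities above can be sketched as a toy version registry. This is a simplified model, independent of any specific VCS; real teams would use git tags and their configuration tool rather than code like this:

```python
# Toy model of known-good flags and rollback for one infrastructure
# component. Version strings are illustrative.

class ComponentHistory:
    def __init__(self):
        self.versions = []        # committed versions, oldest first
        self.known_good = set()   # versions proven in production

    def commit(self, version):
        self.versions.append(version)

    def mark_known_good(self, version):
        self.known_good.add(version)

    def latest_known_good(self):
        """Walk history backwards to find the newest proven version."""
        for v in reversed(self.versions):
            if v in self.known_good:
                return v
        return None

h = ComponentHistory()
for v in ["1.0", "1.1", "2.0"]:
    h.commit(v)
h.mark_known_good("1.0")
h.mark_known_good("1.1")
# "2.0" failed in prod -> roll back to the latest known-good version.
```

Note that multiple versions can be flagged known-good at once, which is what lets one code base support several live versions of a service without maintaining parallel gold images.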
Composite Architectures
Borrowing from the microservice architecture of cloud native development, infrastructure engineering needs to decompose full-stack environments into a collection of reusable components, such as base VMs, network configurations, operating systems, middleware, and databases. More mature organizations can move lower in the stack, further deconstructing VMs into compute, network, and storage. By breaking your infrastructure into “microservices,” you gain some of the benefits of that architectural model, such as isolated change, reuse, and more granular controls. Composite architecture also improves the flexibility of the service catalogue, allowing new services to be created dynamically from a collection of proven, known-good components. This can also reduce the number of customized “gold images” that must be maintained.
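The composition idea can be sketched as building environment definitions from a shared catalogue of proven components. The component names and attributes here are assumptions for illustration:

```python
# Sketch of composing full-stack environments from reusable, known-good
# components instead of maintaining monolithic gold images.

COMPONENTS = {
    "base_vm":     {"cpu": 4, "ram_gb": 16},
    "hardened_os": {"image": "linux-lts", "cis_hardened": True},
    "postgres":    {"port": 5432, "version": "15"},
    "web_tier":    {"port": 443, "replicas": 3},
}

def compose(*names):
    """Build an environment definition by reusing catalogued components."""
    return {name: COMPONENTS[name] for name in names}

# Two different services share the same proven building blocks.
api_env = compose("base_vm", "hardened_os", "postgres")
web_env = compose("base_vm", "hardened_os", "web_tier")
```

Because both environments reference the same `base_vm` and `hardened_os` definitions, a fix to either component propagates to every service built from it, and the change is isolated to that one component.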
Parameterized Code
Coupled with composite architectures are coding practices borrowed from software development. Rather than hard-coding configurations, paths, credentials, and the like into your automation code or blueprint, modern automation tooling supports parameterized code. Parameterized code stores environment- and application-specific data outside the executable automation code. This practice allows the same automation code to create a nearly infinite collection of unique instances by injecting variable values (i.e., configuration details) into the automation process at time of use. Additionally, if enterprise architects define sets of pre-approved ranges and naming conventions, new instances can be automatically tested for compliance and standards as part of the provisioning process. In other words, automated tests can verify and validate that the externalized configuration data (the variables) fall within the acceptable thresholds and conventions. If a test fails, the environment is not created and the appropriate stakeholders are notified.
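A minimal sketch of this pattern: the automation “code” is a template, the environment-specific values are injected at time of use, and compliance checks against pre-approved ranges run before anything is provisioned. The blueprint, thresholds, and names are hypothetical:

```python
# Parameterized automation sketch: one blueprint, many instances.
# Approved ranges and the blueprint itself are illustrative.

from string import Template

APPROVED = {"envs": {"dev", "test", "prod"}, "max_replicas": 10}

BLUEPRINT = Template("deploy $app to $env with $replicas replicas")

def render(app, env, replicas):
    # Compliance checks run before provisioning, not after.
    if env not in APPROVED["envs"]:
        raise ValueError(f"environment {env!r} is not pre-approved")
    if replicas > APPROVED["max_replicas"]:
        raise ValueError("replica count exceeds approved threshold")
    return BLUEPRINT.substitute(app=app, env=env, replicas=replicas)
```

The same `BLUEPRINT` produces any number of unique instances, while an out-of-range value stops the build before an environment ever exists.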
Agile-based Methodologies
Given the pace of change, IaC employs agile/lean methodologies for managing work, which include these core principles and practices:
- Dynamic prioritization. Agile’s iterative and incremental structure affords the team regular opportunities (often every two weeks) to reevaluate the current prioritization of work, add work items, remove work items, and reprioritize as needed. This planning cadence, central to agile, more accurately reflects and more effectively responds to the dynamic environment in which IT shops operate.
- Process transparency. Agile/lean methodologies use boards to manage work and communicate status to the team and the broader community. These boards enable stakeholders, sponsors, and consumers to see what is being worked on and what will be worked on next. Transparency takes over-burdened IT workers out of the negotiation process: if a ticket holder wants to change a priority, the board shows them who to talk to and how to make that happen. It is no longer an IT-only decision; it is a business decision.
- Productivity measures. Agile is customer-centric by design; lean is value stream-centric. Together, agile and lean yield a set of metrics focused on productivity and customer success. IaC adopts these metrics and measures throughput, cycle time, and failure rate, which together capture the amount of work being produced and the effectiveness of the process creating it.
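The three measures above can be computed from a simple work log. The field names and time units here are assumptions for illustration (times could be days on a board, for example):

```python
# Sketch of the three IaC productivity measures computed from a work log.
# Records and time units are illustrative.

items = [
    {"id": 1, "started": 0, "finished": 3, "failed": False},
    {"id": 2, "started": 1, "finished": 2, "failed": True},
    {"id": 3, "started": 2, "finished": 6, "failed": False},
    {"id": 4, "started": 4, "finished": 5, "failed": False},
]

# Throughput: work items completed in the measurement period.
throughput = len(items)

# Cycle time: average elapsed time from start to finish per item.
cycle_time = sum(i["finished"] - i["started"] for i in items) / len(items)

# Failure rate: fraction of changes that failed.
failure_rate = sum(i["failed"] for i in items) / len(items)
```

Tracked over successive iterations, these three numbers show whether process changes are actually improving delivery rather than just feeling faster.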
Watch this new subtitled, animated video, which explains in under two minutes how the Dell Technologies Infrastructure as Code service helps our customers with infrastructure automation.
So, if you find yourself continually investing and reinvesting in platform automation, and you are not seeing the desired outcome – stop. There is a better way – a different way. That way is Infrastructure-as-Code. Borrowing the best practices from agile, lean, cloud native development, and software engineering, IaC affords enterprises a modern approach to unlocking the benefits of cloud platforms and automation. If you need help in getting started, upskilling your team, or building your first MVP, Dell Technologies can help. For more info, please get in touch with your Dell representative.
Also, I invite you to watch the third installment of our “Be a Multi-Cloud Genius” virtual roundtable series. This latest webinar, entitled Application Optimization in the Multi-Cloud Environment, features me and IDC’s Deepak Mohan, Research Director for Infrastructure Systems, Platforms & Technologies, and focuses on how IT organizations are deciding where applications are placed in a multi-cloud environment to drive maximum efficiency, agility, and cost savings.