Multicloud Data Centers and Going D.I.G.I.T.A.L? (Do I Give it to Amazon then Leave?)

Gosh, I hope the answer is no.

But I have to admit that does appear to be the big, red easy button that many CIOs around the world are exploring. It isn’t just Amazon; it’s Google, Azure, Virtustream, and others. Given the short tenure of C-suite execs, many CIOs feel pressure to outsource IT to the cloud in order to meet increasingly aggressive SLOs, cost-cutting demands, and, most importantly, expectations for innovation. A few years ago, we called this phenomenon “shadow IT.” Today, the multicloud data center is increasingly the norm. It isn’t happening in the shadows (as much); it’s becoming a conscious and informed decision, made by the business and IT together, to deliver on the promise and value of digital transformation.

So Does the Multicloud Model Make Sense?

From a purely economic perspective, multicloud doesn’t make sense. Assuming you have a well-run IT shop and are effectively automating all routine data center activities like patch management, scaling, and monitoring, it is more cost-effective to keep your data center on-premises. If it weren’t, Amazon and others wouldn’t be in the market. But most IT shops are not well-oiled machines: they are laden with technical debt; they have inefficient, often manual processes; and their investment in automation to date has fallen short of expectations. In this type of environment, a multicloud platform can be the difference between success and failure.

Multicloud Platform Overview

Before delving into what multicloud solves, I should define what a multicloud is. In short, a multicloud is a collection of public and private infrastructure resources (compute, network, storage) administered via a common control and management plane. A basic example is VMware Cloud on AWS, where you can extend your on-premises vSphere environment into the AWS cloud using EC2 instances. In this example, vSphere is your management and control plane, and your on-premises hardware and AWS EC2 instances comprise your integrated resource pool. In this configuration, you can “seamlessly” migrate vSphere-based workloads to and from the AWS cloud using standard VM images.
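The core idea of a single control plane spanning heterogeneous resource pools can be sketched in a few lines of Python. Everything below (class names, the placement policy, the pool sizes) is illustrative, not any vendor’s API:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """One provider's pool of capacity (on-premises or public cloud)."""
    name: str
    capacity: int                                  # available instance slots
    workloads: list = field(default_factory=list)

class ControlPlane:
    """A single management plane spanning several pools, in the spirit of a
    vSphere environment stretched across on-premises hosts and EC2 instances."""

    def __init__(self, pools):
        self.pools = {p.name: p for p in pools}

    def place(self, workload, preferred=None):
        # Try the preferred pool first, then fall back to any pool with room.
        order = ([preferred] if preferred else []) + list(self.pools)
        for name in order:
            pool = self.pools.get(name)
            if pool and len(pool.workloads) < pool.capacity:
                pool.workloads.append(workload)
                return name
        raise RuntimeError("no capacity in any pool")

    def migrate(self, workload, source, target):
        # "Seamless" migration: the same image, re-placed in a different pool.
        self.pools[source].workloads.remove(workload)
        return self.place(workload, preferred=target)

cp = ControlPlane([ResourcePool("on-prem", 2), ResourcePool("aws", 4)])
placed_in = cp.place("web-01", preferred="on-prem")
```

The point of the sketch is the shape, not the scheduler: one API governs placement and migration regardless of which provider ultimately hosts the workload.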

This example leads me to the promise of multicloud platforms, namely speed, portability, scalability, and resiliency. These capabilities are attained via one common thread across all multicloud platforms: consistency. Unlike the data center of the past, where environments are assembled from procedural scripts, checklists, and tribal knowledge, multiclouds, powered by the public cloud paradigm, employ automation to manage infrastructure resources. Basic provisioning, scaling, patching, and configuration are managed by the platform. This enables IT to reliably and repeatedly provide accurate, consistent environments to product development teams and business users. Consistency at this layer provides a solid foundation, or known-good state, against which you can confidently develop, verify, and validate change, thereby reducing risk and accelerating throughput. Furthermore, by managing to a known-good state, operators of multiclouds can more effectively monitor for change, drift, and other issues. Because the platform is adept at recreating consistent environments, it can quickly recover from outages or other issues by “repaving” the resources.

Lastly, consistency enables both portability and scalability by providing the immutable building blocks needed to recreate and reconfigure an instance or environment. It is through these methods that a multicloud platform closes the performance and productivity gaps of traditional IT shops.
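The known-good-state idea is easy to make concrete. Here is a minimal sketch, with made-up configuration keys, that compares a running environment against its catalogue definition and “repaves” on drift rather than patching in place:

```python
import copy

# Illustrative catalogue definition of an environment: the known-good state.
KNOWN_GOOD = {
    "os_patch_level": "2024-06",
    "java_version": "17",
    "monitoring_agent": True,
}

def detect_drift(actual, desired=KNOWN_GOOD):
    """Return {key: (actual, desired)} for every setting that has drifted."""
    return {k: (actual.get(k), v) for k, v in desired.items() if actual.get(k) != v}

def repave(desired=KNOWN_GOOD):
    """Recreate the environment from its definition instead of patching in place."""
    return copy.deepcopy(desired)

# A running environment that has fallen behind on patching:
running = {"os_patch_level": "2024-01", "java_version": "17", "monitoring_agent": True}
drift = detect_drift(running)
if drift:
    running = repave()
```

Because the desired state is the single source of truth, recovery is just re-creation: there is nothing to diagnose on the drifted instance itself.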

A Multicloud IT Shop

Despite the ambitions of tooling providers around the world, the automation, orchestration, and validation alluded to above don’t come out-of-the-box when purchasing a cloud solution. This is because many enterprises support multiple technology stacks, flexible environment contracts, and numerous integration patterns. There are simply too many permutations to provide a fully supported, packaged multicloud solution. As such, the onus of building this platform and its myriad services and tools falls on the enterprise IT shop.

If IT attempts to build this platform by mimicking existing processes and practices, it will inevitably recreate a “newer, shinier” version of what it supports today: ticket-driven workflows, snowflake configurations, and manual processes that deliver underwhelming outcomes. In order to truly transform, IT must introduce new skills, practices, processes, and tools that change the way systems are defined, deployed, and managed.

Key characteristics of the transformed, multicloud shop:

Lean and Agile Principles

Multicloud IT shops have fully embraced lean and agile thinking. Above all else, they value working code in production, and “code” here means more than applications: code defines, deploys, and configures both the application and the environment running in production. Code also defines the workflow, orchestration, and testing that create, verify, and manage that environment and application. What most distinguishes these multicloud shops is that they work tirelessly to improve practices and processes in order to accelerate time-to-production and minimize the cost of support. Their transformation never really ends.

Systems Thinking Approach (End-To-End)

Embracing lean and agile concepts, multicloud IT shops employ a systems-thinking approach to problem solving. It isn’t enough to optimize the activities of a single team or department; multicloud solutions employ an end-to-end pipeline (or workflow) that mirrors the SDLC and change-management process. By employing a systems approach, a multicloud shop is always identifying constraints in the pipeline and actively working to remediate them.
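Identifying the constraint can be as simple as modeling the pipeline end to end and letting the numbers point at the bottleneck. The stage names and cycle times below are invented for illustration:

```python
# Hypothetical cycle times (hours) for each stage of an end-to-end pipeline.
stage_hours = {
    "develop": 4.0,
    "build": 0.5,
    "test": 2.0,
    "security_review": 40.0,   # a manual gate
    "deploy": 1.0,
}

def find_constraint(stages):
    """The slowest stage bounds end-to-end throughput; remediate it first."""
    return max(stages, key=stages.get)

def lead_time(stages):
    """Total elapsed time for one change to traverse the whole pipeline."""
    return sum(stages.values())

bottleneck = find_constraint(stage_hours)
```

In this toy model, automating the manual security gate matters far more than shaving minutes off the build; that is the systems-thinking insight, applied continuously as constraints move.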

API-first Design

API-first design does not equate to deployment automation via procedural scripts. Multicloud platforms consist of an ecosystem of tools, code, and configurations that, in concert, define, deploy, and manage your data center. Because integration between tools, the pipeline, and the actual environments is critical, APIs are the common language and design pattern of modern multiclouds. APIs can be surfaced through catalogues, through orchestrations, and/or via the command line. For example, you may use Puppet to define, orchestrate, and manage environmental configurations. Puppet will be linked to a pipeline (workflow) tool like Jenkins or CodeStream, and automated testing tools that validate and verify the configuration will be linked to that workflow.
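As a sketch of the API-first pattern (not any real tool’s interface): a provisioning capability is defined once behind a declarative spec, and that single API is then surfaced to a catalogue, an orchestrator, or the command line. The field names and the `vm-` id scheme are assumptions for illustration:

```python
import json

def provision(spec: dict) -> dict:
    """Accept a declarative spec, validate it, and return the resource record."""
    required = {"name", "size", "image"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"spec is missing fields: {sorted(missing)}")
    return {"id": f"vm-{spec['name']}", "state": "running", **spec}

# The same capability surfaced via the command line; a catalogue or an
# orchestration engine would call provision() directly with the parsed spec.
def cli_entrypoint(raw_json: str) -> dict:
    return provision(json.loads(raw_json))

record = cli_entrypoint('{"name": "web01", "size": "m", "image": "ubuntu-22.04"}')
```

Contrast this with a procedural script: the spec is data that any tool in the ecosystem can generate, validate, and pass along, which is what makes the pipeline integrable end to end.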

Emergent Standards

This is one of the most difficult concepts to embrace in the multicloud operating model, as control shifts from enterprise architects and review boards to product teams and pipelines. As applications and environments are on-boarded into the multicloud by product teams, architects help build and design service packages. Through these efforts, design patterns begin to emerge across the portfolio. These patterns are then promoted into standards, which are managed and maintained via automated tests and checks. Product teams that use the standards bypass the review process in favor of those automated tests. As new service packages are created, the architecture team can pair with the product team to define and ultimately promote new standards. In a multicloud operating model, the standards definition and review process is evergreen.
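A standard maintained as an automated check might look like the following sketch (the rules themselves are invented): a service package that passes every check is auto-approved and skips the manual review board.

```python
# Each promoted standard becomes a small automated check; together they
# replace a manual review gate for packages that pass. Rules are illustrative.
STANDARDS = {
    "tls_enabled": lambda pkg: pkg.get("tls") is True,
    "approved_base_image": lambda pkg: pkg.get("base_image") in {"ubuntu-22.04", "rhel-9"},
    "has_owner": lambda pkg: bool(pkg.get("owner")),
}

def review(service_package):
    """Return the standards a package violates; an empty list means auto-approval."""
    return [name for name, check in STANDARDS.items() if not check(service_package)]

pkg = {"tls": True, "base_image": "ubuntu-22.04", "owner": "payments-team"}
violations = review(pkg)
```

Promoting a newly emerged pattern into a standard is then just adding one more entry to the check set, which is what keeps the process evergreen rather than frozen in a review-board document.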

Developer Skillset

This is the most visible change when transitioning to a multicloud operating model: everyone codes. In practice, an “operator” no longer applies a patch to a production machine. Instead, they build and test new automation code that deploys that patch to a known-good version of the infrastructure. This development process creates a new version of infrastructure that is verified and validated before being promoted to the production service catalogue. Once promoted, product teams can begin to validate and verify their applications against this new version prior to rolling it out across the data center. This minimizes outages and issues caused by platform and configuration dependencies being arbitrarily pushed out. It also puts the onus on development teams to refactor their applications to function correctly on the new “standard.”
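The patch-as-code workflow above can be sketched as a build/validate/promote cycle. The catalogue structure, package names, and validation stub are all assumptions for illustration:

```python
# A service catalogue of versioned, immutable infrastructure definitions.
catalogue = {"base-linux": [{"version": 1, "packages": {"openssl": "3.0.1"}}]}

def build_patched(image, package, new_version):
    """Build a new image version from a known-good one; never mutate in place."""
    return {
        "version": image["version"] + 1,
        "packages": {**image["packages"], package: new_version},
    }

def validate(image):
    # Stand-in for automated verification (smoke tests, compliance scans).
    return all(image["packages"].values())

def promote(name, image):
    """Append a verified image to the catalogue as the new current version."""
    if not validate(image):
        raise RuntimeError("image failed validation; not promoted")
    catalogue[name].append(image)

# The "operator" patches by producing and promoting version 2, leaving
# version 1 untouched for teams still validating against it.
current = catalogue["base-linux"][-1]
promote("base-linux", build_patched(current, "openssl", "3.0.7"))
```

Because earlier versions remain in the catalogue, product teams can adopt the new version on their own schedule instead of absorbing an arbitrary in-place change.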


Introducing multicloud into the modern data center is about more than new hardware and software. To achieve the outcomes of digital IT transformation, organizations also need to change how they build and manage these solutions. Success will require a critical evaluation of existing processes, skills, and tooling.

About the Author: Bart Driscoll

Bart Driscoll is the Global Innovation Lead for Digital Services at Dell Technologies. This practice delivers a full spectrum of platform, data, application, and operations related services that help our clients navigate through the complexities and challenges of modernizing legacy portfolios, implementing continuous delivery systems, and adopting lean DevOps and agile practices. Bart’s passion for lean, collaborative systems combined with his tactical, action-oriented focus has helped Dell Technologies partner with some of the largest financial services and healthcare companies to begin the journey of digital transformation. Bart has broad experience in IT ranging from network engineering to help desk management to application development and testing. He has spent the last 22 years honing his application development and delivery skills in roles such as Information Architect, Release Manager, Test Manager, Agile Coach, Architect, and Project/Program Manager. Bart has held certifications from PMI, Agile Alliance, Pegasystems, and Six Sigma. Bart earned a bachelor’s degree from the College of the Holy Cross and a master’s degree from the University of Virginia.