A Practical View of Composable Infrastructure

Lately, if you’ve been following datacenter IT trends, you’ve come across the idea of composable infrastructure. What is it? Basically, it’s the idea that datacenters ought to be defined by the needs of the applications they run, not by the preconceptions of a datacenter admin or an IT executive.

Here’s a real-world situation to highlight the idea. We’re all faced with the reality that workloads are becoming more dynamic each day. As a result, infrastructure has had to become more dynamic as well. The days of one-app, one-server are hopelessly outdated because that approach forces datacenters into a high-overhead, low-flexibility model that’s inefficient and costly. Even virtualization, which gives us the ability to spin up compute capacity as needed, is beginning to seem a little inhibiting because virtual machines aren’t tuned, on the fly, to the needs of the application. With a more dynamic infrastructure, however, comes an exponential increase in options. The days when purchasing a server meant choosing between 1 and 2 cores are long gone. Now, the choice ranges from 4 to 56 cores (and grows every generation). How can you predict application fit, and your resource needs, over the lifecycle of your hardware?

Composable infrastructure turns physical infrastructure into pools of modular building blocks that workloads can use, as needed, to provide a service. Envision an infrastructure that looks like fluid pools of compute, storage, and fast, flexible fabric that are disaggregated and abstracted. Resources are quickly composed, decomposed back into the pools, and then re-composed via a software template to fit the specific needs of the application or workload that will run on them. All datacenter resources – off-premises or on-premises – are centrally orchestrated and available, via an API, to any workload or service that needs them.

Suddenly, with composability, IT admins aren’t guessing in the dark when sizing physical infrastructure or virtual machines, or worrying about providing more compute capacity to a workload that’s seeing a spike in demand. The composable infrastructure API allows developers to treat hardware infrastructure as code, so they can programmatically control infrastructure instead of letting the infrastructure restrict what they can do. It’s the ultimate in granular tuning, and it ought to lead to significant gains in managing overhead, maximizing efficiency, and delivering dynamic, agile performance for each workload.
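To make the idea concrete, here’s a minimal sketch of what composing and decomposing resources through such an API might look like. Everything in it – the base URL, the endpoint paths, the template fields – is an illustrative assumption, not any vendor’s actual API.

```python
# A minimal sketch of treating infrastructure as code against a hypothetical
# composable-infrastructure REST API. The base URL, endpoint paths, and
# template fields are illustrative assumptions, not any vendor's real API.
import requests

API = "https://composer.example.com/api/v1"

# Describe the workload's needs as a software template.
template = {
    "name": "analytics-batch",
    "compute": {"cores": 32, "memoryGiB": 256},
    "storage": {"capacityTiB": 10, "tier": "flash"},
    "fabric": {"bandwidthGbps": 25},
}

# Compose: carve the requested resources out of the shared pools.
node = requests.post(f"{API}/compose", json=template, timeout=30).json()
print("Composed node:", node["id"])

# ... run the workload ...

# Decompose: release the resources back into the pools for re-use.
requests.delete(f"{API}/compose/{node['id']}", timeout=30).raise_for_status()
```

The point isn’t the specific calls; it’s that the workload’s requirements live in a version-controllable template, and the hardware bends to fit it rather than the other way around.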

Sounds great, but there are a few challenges.

As we’ve talked about in earlier posts, we’re advocates for choice paired with guidance. Our customers demand approaches that work across many vendors and many technologies. Organizations require solutions that are simple, inexpensive, agile, and scalable over ones that are proprietary, monolithic, and expensive. That’s why open APIs matter, why industry-standard technologies matter, and why interoperability matters. Basically, the more inflexible, proprietary, and complicated something is, the less valuable it is to the marketplace – because it gets in the way of the flexibility that enables success, growth, and innovation.

Right now, most approaches to composable infrastructure are being driven by a single company. They’re not open – so they lack the flexibility and choice we advocate. Hopefully, they’ll open up so customers have more freedom. We’re looking forward to the evolution of standards-based approaches for composable infrastructure, which will increase customer choice, pool industry expertise, and control cost. After all, the marketplace is littered with derelict big ideas that were pushed by a single enterprise technology vendor. Right now, composable infrastructure risks becoming one of those big ideas.

The second problem is that composable infrastructure isn’t a practical solution – yet. At Dell, while we’re certainly committed to the future of IT, we’re keenly focused on solving immediate customer problems tied to workload optimization, and composable infrastructure hasn’t matured to that point. Our aim is to be the leader at solving customer problems in modular, cost-effective, and immediately impactful ways that remain relevant far into the future. Interestingly, though, our successes align nicely with the themes and ideas behind composable infrastructure. Our approaches address many of the same problems.

For example, Dell’s Active System Manager (ASM), built on an open and extensible architecture, enables pooled resources and service-centric IT. It embraces an open world by rapidly and easily turning hardware from many vendors into pools of server, storage, and network resources. These pools are made available to users on demand, and unused components can be released back into the pools for re-use. The aim is to ensure that application sets get the resources they need in real time, increasing agility and efficiency.

But ASM also boosts flexibility. Imagine a retailer who needs extra capacity to cope with the holidays. ASM, through its user interface or RESTful API, can add capacity on the fly to the resource pools, so IT administrators can see the extra capacity, allocate it, and manage it seamlessly and without compromise. Access to the public cloud simply becomes part of the resource pool. Essentially, the way we treat public cloud, via ASM and through the use of Dell Cloud Manager (DCM), is an extension of our fundamental approach, which is to embrace open and agnostic approaches to enterprise IT. Public or private, on-premises or off-premises, it shouldn’t matter when customer challenges need a solution.
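For a feel of that flow, here’s an illustrative sketch of adding holiday burst capacity to a resource pool through a RESTful API in the style described above. The host, endpoint path, and payload fields are assumptions for illustration only; the real calls are documented in ASM’s API guide.

```python
# Illustrative sketch only: registering public-cloud burst capacity with a
# resource pool through a RESTful API in the style ASM exposes. The host,
# endpoint path, and payload fields are assumptions for illustration --
# consult the Active System Manager API documentation for the real calls.
import requests

ASM = "https://asm.example.com/Api/V1"           # hypothetical ASM endpoint
headers = {"Authorization": "Bearer <token>"}    # placeholder credentials

# Register public-cloud burst capacity so admins can see and allocate it
# alongside on-premises resources in the same pool.
payload = {
    "poolName": "retail-web",
    "provider": "public-cloud",
    "instances": 20,          # extra capacity for the holiday spike
}
resp = requests.post(f"{ASM}/resourcepools/capacity",
                     json=payload, headers=headers, timeout=30)
resp.raise_for_status()

# After the holidays, a corresponding DELETE would release the burst
# capacity back into the pool for re-use.
```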

Moving past ASM to workload-specific problems, we’re giving customers modular building blocks of integrated capacity that abstract away physical infrastructure in favor of more agile, dynamic, value-optimized solutions. In the past twelve months alone, we’ve launched a collection of future-ready approaches to modular IT, including:

PowerEdge FX architecture – scales workloads quickly, as needed, adding resources incrementally without the expense and inefficiency of overprovisioning. The groundbreaking FX architecture combines efficient management with flexible datacenter building blocks to optimize for both modern and legacy workloads.

Dell Hybrid Cloud Solution for Microsoft – integrates private and public Azure cloud capability in a single solution that can be deployed in less than three hours.

Dell Reference Architectures for OpenStack – modular, enterprise-grade OpenStack reference architectures, developed in partnership with Red Hat.

Dell XC Series of Web-scale Converged Appliances, powered by Nutanix – integrates easily into any datacenter and can be deployed for multiple virtualized workloads, including desktop virtualization, database, and private cloud projects.

VMware STD-C investments (Dell Engineered Solutions for VMware EVO:RAIL and VMware STD-C).

In short, we’re meeting customers where their problems are and providing new, innovative solutions that deliver results aligned to their preferences for technology providers and business outcomes.

Now, it’s worth saying that the future will hopefully be composable infrastructure, and we continue to take steps in that direction. We will continue to drive for the adoption of standardized APIs. We will enable composability through standards like Redfish – DMTF’s open industry-standard specification and schema, which defines a RESTful interface and uses JSON and OData to help customers integrate solutions into their existing tool chains – and through software-defined layers that sit on top of Redfish and below orchestration layers like OpenStack. The industry should rally around building blocks like these to enable a true composable future.
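As a taste of what Redfish enables, here’s a short Python sketch that walks a Redfish service and inventories the systems it exposes. The /redfish/v1/ service root, the Systems collection, and the ProcessorSummary and MemorySummary properties come from the DMTF standard; the host name and credentials are placeholders.

```python
# A short example of walking a Redfish service to inventory compute resources.
# The service root (/redfish/v1/) and the Systems collection are defined by
# the DMTF Redfish standard; the host and credentials here are placeholders.
import requests

BASE = "https://bmc.example.com"     # a management controller's address
auth = ("admin", "password")         # placeholder credentials
verify = False                       # lab-only; use real certificates in production

root = requests.get(f"{BASE}/redfish/v1/", auth=auth, verify=verify).json()
systems_uri = root["Systems"]["@odata.id"]   # OData-style hyperlink

systems = requests.get(f"{BASE}{systems_uri}", auth=auth, verify=verify).json()
for member in systems["Members"]:
    system = requests.get(f"{BASE}{member['@odata.id']}",
                          auth=auth, verify=verify).json()
    # ProcessorSummary and MemorySummary are standard ComputerSystem properties.
    print(system.get("Name"),
          system["ProcessorSummary"]["Count"], "CPUs,",
          system["MemorySummary"]["TotalSystemMemoryGiB"], "GiB RAM")
```

Because every implementation exposes the same schema, tooling written against one vendor’s Redfish service can inventory another’s – exactly the kind of interoperability a composable future depends on.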

But rest assured: while composable infrastructure evolves into something agnostic and widespread, we’ll keep exploring new open, modular, and dynamic ways to address customer problems. Stay tuned.

About the Author: Robert Hormuth

Robert Hormuth is Vice President/Fellow and CTO of the Dell EMC Server Solutions Group. He has 29 years in the computer industry, joining Dell in 2007 after 8 years at Intel and 11 years at National Instruments. He and his team focus on future server architectures and technology, driving future technology intercepts into the server portfolio. His past design/architecture activities include I/O peripheral designs; x86 (386, 486, Pentium, Pentium Pro, Pentium II/III) system design; BIOS, firmware, and application software; and FPGA/ASIC design. Robert has participated in the creation of multiple industry standards – VME, VXI, PCI, PXI, PCIe, NVMe, Redfish, SSDFF. He holds a B.S. in Electrical and Computer Engineering from The University of Texas at Austin and currently holds 16 patents.