Will the public cloud kill agile development?

Contrary to popular belief, the public cloud will not necessarily make life easier for IT. In fact, technology professionals, particularly those in relatively new fields like DevOps, are at serious risk of becoming irrelevant if they can’t or won’t understand the affordances of cloud infrastructure.

Trevor Pott nailed it in his recent article about the rise of DevOps and SecOps when he said "developers become more paranoid…with operations out of the way and infrastructure provisionable through APIs there is no one to blame for delays but themselves." The issue is that DevOps teams are made up primarily of developers who've learnt to manage operations along the way. And Pott (understandably) doesn't reach the point that in the case of agile development, the medium really is the message, or at least inextricably intertwined with it.
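Pott's point about self-service provisioning is easy to picture: when infrastructure is one API call away, the whole provision-test-teardown loop (and the accountability for it) collapses onto the developer. Here is a minimal, purely illustrative sketch in Python – the `ProvisioningAPI` class and its methods are invented for this example and do not correspond to any real cloud SDK:

```python
import uuid

class ProvisioningAPI:
    """Toy stand-in for a cloud provider's provisioning API (hypothetical)."""

    def __init__(self):
        self._instances = {}

    def create_instance(self, size="small"):
        # In a real public cloud this would be a single authenticated HTTP
        # call – no ticket queue, no ops team in the loop.
        instance_id = str(uuid.uuid4())
        self._instances[instance_id] = {"size": size, "state": "running"}
        return instance_id

    def destroy_instance(self, instance_id):
        self._instances[instance_id]["state"] = "terminated"

    def state(self, instance_id):
        return self._instances[instance_id]["state"]

# The entire lifecycle runs in seconds, driven by the developer alone –
# which is exactly why delays have no one else to blame.
api = ProvisioningAPI()
vm = api.create_instance(size="small")
assert api.state(vm) == "running"
api.destroy_instance(vm)
assert api.state(vm) == "terminated"
```

The sketch makes Pott's cultural observation concrete: once the ops gatekeeper is replaced by an API, the speed of the loop is bounded only by the developer's own code.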

Without at least an appreciation for the technology infrastructure that supports agile – or worse, by rigidly defining it for one explicit purpose or another – DevOps will not be able to provide the iterative, responsive, continuous delivery that is its raison d'être. In other words, it will fail. But this infrastructure must also be simple and malleable enough that it doesn't become a time-sink for the former developers who dominate the school of DevOps.

A question concerning (cloud) technology

Ostensibly, the public cloud is the most malleable of technology infrastructures, an acknowledgement of how “without their code, few organisations will be competitive,” as Pott puts it. But is it? Public clouds are not always the most cost-efficient or easy to maintain and scale. Nor are they, especially in the case of SaaS, open to customisation and variation of their workloads. This is not a bad thing in itself. But it poses some particularly thorny issues for DevOps.

The main issue is that DevOps exists as what one of my friends calls a response to the high modernism of technology – the notion that software ought to be developed upon planning principles so fine and rigid as to obviate the very role of the developer. In his essay The Question Concerning Technology, Heidegger makes a similar point with his "standing-reserve", the ideology which defines any technology as built for, and only ever fulfilling, a single and immutable purpose. The alternative – and the motivation for DevOps – is to embrace potential rather than stricture, whereby any particular object is open to interpretation and alteration based on whatever circumstances call for. Heidegger calls this spirit of technology techne. DevOps calls it agile.

The public cloud, governed as it is by third-party forces, is increasingly an example of standing-reserve. Anything "as a service" essentially sits waiting to be called on for one specific purpose, whether hosting particular workloads or providing particular applications. The affordances available to DevOps – to make constant minute changes to how their products and services function – are increasingly restricted, whether by cost or technical complexity or just standard access denial. In other words, the public cloud offers simplicity only at the sacrifice of control. And without control over the infrastructural medium, the DevOps virtues of responsiveness and agility become practically irrelevant.

The techne-cal solution

Of course, DevOps itself exists to merge the agile mindset of “dev” with the functional control of “ops”. But, as Pott points out, operations has traditionally worked under an “us vs. them” mentality in restricting technology resources for only the most well-defined of purposes. Operations is the high priest of technology as standing-reserve, if you will. So it’s unlikely that DevOps will find much help there.

What DevOps really needs is a medium where agile development doesn't generate frictions for coders that disrupt continuous delivery, but which also provides an infinite range of affordances for potential projects and services. A techne platform, in other words. Private cloud infrastructure is the obvious choice – but it typically goes too far the other way, creating even more frictions through the technical complexity of its piecemeal, siloed construction. What if the private cloud came pre-assembled, with all systems integrated from the very beginning? This is the principle behind converged infrastructure.

With converged infrastructure, DevOps can fully understand the medium in which it’s working, since all component systems are already integrated and accounted for. Like a potter with clay, that immediate sense for the technological medium is important because it lets the craftsperson get on with the actual business of building something – whether a vase or an enterprise application – in the knowledge that the medium will respond in a more or less predictable way. Unlike the medium of public cloud, converged infrastructure also allows full control over how its affordances get used, reused, and recycled.

The old boundaries between traditional packaged applications and mobile-first, web-based apps no longer apply: they can run securely on the same infrastructure without conflict or incompatibility. Once again, this allows DevOps to delve into rapid iteration, production, and destruction without questioning the baseline integrity of their infrastructure. And to top things off, the long-term costs of running enterprise applications on converged infrastructure are typically lower than in the public cloud – negating one of the biggest reasons for ceding infrastructural control in the first place.

For business managers, the question after all this is probably “so what?” The answer is that waterfall and other prescriptive, high-modernism ideologies about software are no longer functional – if they ever were. Now, speed and responsiveness are kings: if you can cut time-to-market from 25 days to 5 for a new service, you can beat the competition, at least for the next few months. But the curse, and magic, of continuous delivery is that it never stops improving. As Pott writes, the tribes within DevOps need to quickly find common ground to keep delivering those results for their businesses. A technological medium like converged infrastructure, which can give developers myriad affordances to iterate and test while smoothing out the frictions of operational control, will be a necessary bridge between them.

Image: “Waterfall and Rocks“, Mark Engelbrecht

About the Author: Matt Oostveen