Last week, I had the pleasure of attending the DevOps Enterprise Summit (#DOES15) in San Francisco, where I was able to spend three days surrounded by people thinking about, talking about, and most importantly figuring out DevOps in the enterprise. What was abundantly clear is that DevOps is not just for the unicorns, a.k.a. web companies, as evidenced by keynotes from Target, HP, GE, and Capital One. In fact, what resonated with me and other attendees is that the principles and practices of DevOps can be employed in any environment, with any technology stack, to improve quality, accelerate deployments, and shift organizational culture. And yes, that even includes the mainframe.
Below is a short list of key takeaways from the conference that will shape how we work with our customers, providing you with a more valuable IT transformation experience.
Building a cross-functional community is a critical investment needed to radiate DevOps. In nearly all the success stories shared at the conference, it was clear that the strength, structure, and accessibility of the community were key indicators of success and adoption. Strength came from both coaches and SMEs who defined the DevOps guardrails and partnered with development teams to learn and practice DevOps. These coaches and SMEs also sponsored events like hack-a-thons or internal DevOps days to bring people together and bridge the common gaps in understanding and perspective that prevail in the enterprise.
A second common community trend was making space for teams to learn and practice DevOps. This space consisted of physical office space and tools configured to support team collaboration, as well as time to learn and experiment with new tools, techniques, and practices. A number of speakers from HP, Target, IBM, and others described DevOps dojos and/or Centers of Excellence: physical spaces, supported by coaches and SMEs, where whole teams can come to learn and practice DevOps against an active development project.
Automate, then test, test, test
DevOps is as much about going fast as it is about building in high quality. Test automation integrated into the delivery pipeline was a key theme in this year’s DevOps Summit. Interestingly enough, it was not automation for the sake of reducing cost or lightening the testing burden. Rather, it was focused on shortening feedback loops and proactively identifying and resolving errors before they were promoted into production. While this does reduce the cost associated with remediating a defect and/or outages caused by a missed defect, I found it interesting that the key driver was performance and quality rather than cost savings.
A couple of other tidbits related to testing: include security and compliance testing in your toolchain. Waiting until late project stages to run penetration testing or PCI checks can introduce complex defects that must be fixed prior to deployment. These types of errors can seriously derail a project. Multiple practitioners recommended running these scans early and often in the development lifecycle. Better still, introduce code analysis tools and pretested code libraries and frameworks that prevent vulnerable code from entering the pipeline.
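The "early and often" recommendation is really about where the gates sit in the pipeline. A minimal sketch of the idea, with purely illustrative stage names and checks (this is not any vendor's toolchain), shows a pipeline that runs security and compliance checks before deployment and stops at the first failure:

```python
# Sketch of a fail-fast delivery pipeline: stages run in order, and the
# first failing stage stops everything after it. Security and compliance
# checks sit early in the list, not at the end of the project.
# All stage names and check results here are hypothetical.

def run_pipeline(stages):
    """Run (name, check) pairs in order; return (passed, failed_stage)."""
    for name, check in stages:
        if not check():
            return False, name  # fail fast: later stages never run
    return True, None

# Hypothetical checks; a real pipeline would invoke linters, scanners,
# test suites, and compliance tooling here.
def unit_tests():            return True
def static_security_scan():  return True    # e.g. code analysis for vulnerabilities
def compliance_check():      return False   # e.g. a simulated PCI rule violation

stages = [
    ("unit-tests",    unit_tests),
    ("security-scan", static_security_scan),  # early, not a late-stage gate
    ("compliance",    compliance_check),
    ("deploy",        lambda: True),
]

ok, failed = run_pipeline(stages)
print(ok, failed)  # the compliance stage stops the pipeline before deploy runs
```

Because the compliance check fails, the deploy stage never executes; moving those checks to the front of the list is what keeps a late-stage surprise from derailing the release.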
Lastly, you can even create pipelines for mainframes. One example is IBM’s Rational® Development and Test Environment for System z®, which enables development and test teams to emulate a z/OS hardware configuration on Linux. Now even mainframe code can be managed in a pipeline without the cost of MIPS and resources on an actual mainframe.
Employ the scientific method to test hypotheses and measure results until the best path is identified. Changing without measuring the outcome doesn’t provide you with the information needed to learn and adapt. Recheck assumptions and make course corrections as needed throughout the DevOps transformation process. There is no single right way to do DevOps. It is a journey and will be unique for every enterprise and every portfolio. Deming had it right over half a century ago: Plan. Do. Check. Act.
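The measure-change-measure loop above can be sketched in a few lines. The metric (deploy time) and the numbers are illustrative assumptions, not figures from the conference; the point is the shape of the loop, baseline first, change second, keep or revert based on what the measurement says:

```python
# Minimal sketch of one Plan-Do-Check-Act cycle applied to a process change.
# The metric and values are hypothetical, chosen only to illustrate the loop.

def pdca(measure, apply_change, revert_change):
    baseline = measure()        # Plan: establish a baseline measurement
    apply_change()              # Do: try the change
    result = measure()          # Check: measure the outcome
    if result >= baseline:      # no improvement against the baseline
        revert_change()         # Act: course-correct and keep the old process
        return baseline, False
    return result, True         # Act: adopt the change

# Hypothetical metric: minutes to complete a deployment.
state = {"deploy_minutes": 45}

def measure():       return state["deploy_minutes"]
def apply_change():  state["deploy_minutes"] = 20   # e.g. automate a manual step
def revert_change(): state["deploy_minutes"] = 45

result, adopted = pdca(measure, apply_change, revert_change)
print(result, adopted)
```

Here the change improves the metric, so it is kept; had the measurement come back worse, the same loop would have reverted it, which is exactly the "recheck assumptions and course-correct" discipline.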
ChatOps is the integration of collaboration tools with monitoring and management tools so that subscribers can get real-time updates, alerts, and notifications on environments and applications through a single pane. What I found so interesting about ChatOps was not the single-pane notion, but rather how ChatOps helped foster community. First, ChatOps provides the same information to team members from different departments in real time – everyone knows there is a problem. Second, ChatOps provides a vehicle for the team to collaborate on resolving issues – less finger-pointing, more joint discovery and resolution. For more info on ChatOps specifically, check out the ChatOps for Dummies book.
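The community-building property comes from the publish-subscribe shape of a ChatOps channel: one alert, broadcast to every subscriber at the same moment. A minimal in-memory sketch of that pattern (the channel, team names, and alert text are all hypothetical, with no real chat or monitoring tool behind them):

```python
# In-memory sketch of the ChatOps pattern: a monitoring hook posts an
# alert into a shared channel, and every subscriber (dev, ops, support)
# receives the identical message at the same time.

class Channel:
    def __init__(self):
        self.subscribers = []   # (team_name, handler) pairs
        self.history = []       # every message ever posted

    def subscribe(self, name, handler):
        self.subscribers.append((name, handler))

    def post(self, message):
        self.history.append(message)
        for name, handler in self.subscribers:
            handler(name, message)  # same message fans out to everyone

received = []
ops_channel = Channel()
for team in ("dev", "ops", "support"):
    ops_channel.subscribe(team, lambda who, msg: received.append((who, msg)))

# A hypothetical monitoring integration posts an alert into the channel.
ops_channel.post("ALERT: checkout-service latency above threshold")

print(received)  # all three teams see the same alert
```

Because everyone reads from the same channel, there is no private copy of the facts to argue over, which is the "less finger-pointing" effect in miniature; real deployments wire this to tools like Slack or Hubot plus the monitoring stack.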
While these are only a few of the nuggets I gleaned from the sessions, there was much more to learn from other participants and speakers. I am looking forward to DOES2016 – maybe in Austin.