Co-authored by Luke Woydziak and Megan Murawski of Dell EMC’s Cloud Dojo in Cambridge, MA.
Why do some projects result in products customers love while others fail or are never used? Why do some engineering teams achieve great quality while others struggle? Why can some teams deliver updates daily while others take six months to a year? These are some of the questions we set out to answer when we first formed the Dojo a few short years ago. The answer we found comes down to discipline. We sought out the best techniques and practiced them rigorously to do one thing: deliver better products rapidly. I want to take you on a journey from the inception of one of those products, give an overview of those techniques, and show how we apply them.
Typically, the products we observe start with an idea for a solution. This seems like the right way, and we see examples all around us of great luminaries who execute on great ideas and achieve phenomenal success. It's easy to think of that as the way to develop great products, but we have found that approach leads more often to disaster than success; venture metrics put the failure rate above 70% (Gage, 2012). So how do we flip the odds? We flip the approach. It all starts with a problem and lots of testing.
How do we find valuable problems? We use a process called synthesis. Synthesis entails having structured conversations with people who are all too happy to tell you about their problems: potential customers. These structured conversations lead to valuable insights when aggregated together. We use some simple technology to support this: sticky notes and a blackboard.
From this zoomed-out view you can see the scattered notes in the middle left. This section holds the findings; notice how some of them clump together. On the middle right we have separated out some of the general findings. On the far left, we have observations about the person we talked to, and on the far right we have next steps.
As we zoom in on one of the findings, we can see the details.
The different colors represent different users. Interestingly, the number of representative customers necessary to generalize to 85% of the market is surprisingly small: five (Hubbard, 2014).
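One way to build intuition for why so few conversations carry real information is Hubbard's "Rule of Five," from the measurement book cited below: the median of any population falls between the smallest and largest of five random samples with 93.75% probability. (That figure is Hubbard's general rule, not the 85% market figure above.) A quick check of the arithmetic:

```python
# Rule of Five: all five random samples land below the population median
# with probability (1/2)**5, and likewise all above; otherwise the median
# lies between the sample min and max.
p_outside = 2 * (0.5 ** 5)
p_between = 1 - p_outside
print(p_between)  # 0.9375
```

So even a handful of structured customer conversations bounds the "typical" answer with high confidence, which is why a small, well-chosen sample is enough to start testing a problem statement.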
Note in this case the prevalence of “IP security”. This was the problem the customers were trying to solve: “How do I connect Cloud Native applications to my Legacy data stored on NFSv3 servers?”
Now that we have a problem, who is actually having it is the next important question. We return to the synthesis board and fill out a persona worksheet.
The persona allows us to aggregate the common characteristics of our customer. Our product team can then develop a great deal of empathy for this person and step into their shoes. This fills out the picture and allows for the best solution to be imagined.
We test both the problem and the persona against several other representative customers to dial them in, but for the most part we have found that the original five are pretty spot on. At this point we begin to brainstorm solutions. The solutions allow us to perform a process called scoping, which, as its name suggests, scopes the effort needed to bring one (or more) of those solutions to life.
Often, as we found, those solutions require engineering effort. We kick off this effort with a process called inception. The format for the inception is a one-day meeting, where we go through an agenda including:
- Work Item (story) creation/prioritization
We enter these stories into a work tracker in the order we want them completed.
This begins the engineering work. Here too we use rigorous best practices. You may have heard terms like DevOps, Continuous Integration (CI) and Continuous Deployment (CD). The underpinning for all of this is various forms of automation. We strive to automate as much as possible from developer testing to deployment and operation of changes. We practice Test Driven Development (TDD), meaning that for any development task, we start with a test.
The first test is written at the highest level possible (typically an end-to-end test). It is then automated using a CI server, which reruns that test whenever a code change is detected. We then decompose the system into the elements necessary to solve the test and write a second round of tests to drive development of those elements. Finally, we implement each element only so far as to solve the test we have just written.
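The loop above can be sketched in miniature. The names here (`LegacyConnector`, `mount`) are hypothetical, invented for illustration rather than taken from the actual product; the point is the shape of the discipline: the test exists first, and the implementation is written only far enough to satisfy it.

```python
# Hypothetical names for illustration only: LegacyConnector and mount()
# are assumptions, not the actual product's API.
class LegacyConnector:
    """Implemented only far enough to make the current test pass."""

    def mount(self, export_path):
        # Real NFSv3 plumbing would go here; this stub just records
        # the request, which is all the first test demands.
        return {"export": export_path, "status": "mounted"}


# Outer-loop (end-to-end) test, written before the implementation
# existed; a CI server reruns it on every detected code change.
def test_cloud_native_app_can_reach_legacy_export():
    result = LegacyConnector().mount("/exports/legacy-data")
    assert result["status"] == "mounted"


test_cloud_native_app_can_reach_legacy_export()
```

Each subsequent element gets the same treatment: a failing test first, then the minimum implementation to pass it.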
After the test is passing, we correct any poor design decisions and refactor the code into a maintainable form. Finally, we commit the code (and tests) to our source repository and allow our CI server to process it. This involves rerunning the elemental (or unit) tests and any system/integration-level tests we may have written. If all tests pass, our CI system packages the change and deploys it to our customers.
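The CI gate just described reduces to a simple ordered pipeline: run every stage in sequence, and any failure stops the deploy. The sketch below is illustrative only; `run_pipeline` and the stage names are assumptions for this article, not a real CI tool's API.

```python
# Illustrative sketch of the CI gate, not a real CI tool's API.
def run_pipeline(stages):
    """Run each (name, stage) pair in order; any failure stops the deploy."""
    for name, stage in stages:
        if not stage():
            return f"stopped at: {name}"
    return "deployed"


stages = [
    ("unit tests", lambda: True),         # elemental tests
    ("integration tests", lambda: True),  # system-level tests
    ("package", lambda: True),            # build the deployable artifact
    ("deploy", lambda: True),             # ship to customers
]
print(run_pipeline(stages))  # deployed
```

Because every change must clear the full sequence, a passing pipeline is what lets us ship to customers daily with confidence.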
This process is strictly adhered to such that we end up with a full testing suite that can definitively verify and validate any change. Of course human error can still occur, so we use another best practice called pair programming.
During all development, engineers work in pairs on a given story. This accomplishes a threefold mission: first, it enforces the discipline; second, it ramps up new team members; and third, it surfaces diverse perspectives on every problem. Finally, we use a product owner to ensure that the requirements of the story have indeed been satisfied and that the correct automation is in place to prove the change works and to prevent future changes from regressing the system.
Now we return to the beginning for synthesis and repeat the whole process. Of course, some of the work will already be done; in that case, we build on our existing knowledge until more development work is needed. The approach is iterative and additive over time. Before the second iteration we also verify the implemented solution with a sample customer and reapply what we learn. Thus the cycle begins anew.
Feedback and Retrospectives
As important as feedback and an iterative process are to our product development, we also use them as tools for continuous improvement of the team. When something isn't working, we take the time, on a regular cadence, to address the issue and brainstorm a better solution. When something is working, we take the time to acknowledge the success and encourage the team. Actionable change allows us to keep moving forward.
A Better Way…the Way
As you can see, there is no real magic here. We practice a rigorous and disciplined process that is wholly repeatable. We have collected the best techniques and continue to search for better ones, incorporating better ideas as we continuously strive to improve. This has led to a project that, in a little under a year, has seen adoption by more than 50 enterprise companies. Customers are speaking on our behalf at conferences, praising the usability and quality of our solutions. We can deliver updates to those customers daily while maintaining a very high level of quality. In addition to product success, we've also worked throughout the organization to spread the Dojo Way. By committing to the process and using feedback, we can continually grow and evolve our way of working, translating this into quality products and functional, efficient teams.
Gage, D. (2012, September 20). The Venture Capital Secret: 3 Out of 4 Start-Ups Fail. Retrieved August 24, 2017, from https://www.wsj.com/articles/SB10000872396390443720204578004980476429190
Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business. Hoboken, NJ: John Wiley & Sons, Inc.