This is the second installment of a blog series about the EMC Durham Data Center. Click here to read part one.
In the first installment of this series, I talked about the challenges of architecting this cloud-optimized, 100 percent virtualized, scalable and sustainable data center. In this second post, I would like to share some insights from the first 90 days of the project.
EMC IT set an aggressive two-year target to migrate and transform our data center into a private cloud. Every day was critical. To preserve as much schedule as possible for migrating applications, we allotted 90 days from when the facility was completed to stand up our infrastructure at the Cloud Data Center in Durham, NC. We needed to install the network, storage, compute and backup to eventually host more than 350 applications, 2,000 servers and six petabytes of storage. In the old dedicated, physical IT world, this would have been impossible. In the cloud? We were about to find out.
It was a tall order, but by standardizing on the Vblock architecture and virtualizing applications, our dedicated team of engineers and technicians was able to complete the initial Cloud Data Center build on schedule.
First, no matter how much you plan for a project like this one, unforeseen challenges with equipment and circumstances are going to arise. Be prepared to be flexible, creative and resourceful.
Initially, the Durham team was small: only 12 IT employees, most of whom were still moving to North Carolina themselves or being hired during the first few weeks of the project. Regardless, everyone pitched in to do their part.
We augmented the team by flying down key resources from the IT team in Massachusetts for two-week sprints; "borrowed" a few people from our nearby Apex manufacturing facility; and relied on a dedicated EMC Global Services team to set up the storage arrays and SANs.
We received the keys to the 20,000-square-foot Durham Data Center on October 4, 2010. We then referred to our detailed plan mapping out slated locations for network, storage, servers, and backup equipment, which had been staged at our Research Triangle Park facility or was on order and imminently expected.
Christmas morning every day
It was a mad dash–we didn't even have a chair or a table for the first couple of weeks. Everyone was tearing open boxes and hauling out equipment. It was like Christmas every morning. We made a pile for network, storage, compute and security, and yes, "some assembly" was required. The cardboard and packaging material piled up nearly faster than we could get dumpsters delivered.
As we stood up the equipment, we faced an ongoing series of challenges–delayed deliveries, missing or wrong parts, more power connectivity requirements than we anticipated. I even brought my toolbox from home, and we shifted rack set-ups, added shelves and rounded up the components we needed to get the job done. At one point, after trying various options to deal with the fact that a rail on one component was too long to fit in the cabinet, I actually took it home and sawed it off to make it fit. We put in long hours, ordered a lot of pizzas and improvised to make things work.
Intent on the deadline
Our mantra was compelling: We must stay on schedule! The moment we missed our deadline for setting up the new data center, it would start to cost EMC millions of dollars, because we would be unable to vacate our old data center, housed in a rented facility, by the end of its lease.
During the first two weeks, we set up the core network, management network, five SANs and installed our Vblock. It took every bit of the next six weeks to get all of the remaining equipment racked and configured so it would be available to run our applications.
In parallel with the physical build-out, we had defined the architecture for foundation applications such as Active Directory, Domain Name Service and Network Time Protocol. These are the key IT services that must be running before business applications can be hosted.
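The bring-up order matters: name resolution has to exist before time sync and directory services can come online, and business applications depend on all three. As a minimal sketch (the service names and dependency edges here are illustrative assumptions, not EMC's actual configuration), this ordering can be expressed as a topological sort of a dependency graph:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical dependency map: each service lists the services that
# must already be running before it can start.
deps = {
    "dns": set(),                            # no prerequisites
    "ntp": {"dns"},                          # time servers found by name
    "active_directory": {"dns", "ntp"},      # AD needs names and synced clocks
    "business_app": {"active_directory", "dns"},
}

# static_order() yields services with every prerequisite first.
bringup_order = list(TopologicalSorter(deps).static_order())
print(bringup_order)
```

Running the sketch prints a valid bring-up sequence in which DNS precedes NTP, NTP precedes Active Directory, and the business application comes last.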
In December 2010, we built out new VMs for all of the foundation applications and handed them off to the application teams for configuration and testing. By the end of the term–day 90–most of them were up and running, and more than 20 percent of the total servers needed for the data migration were complete.
Our team worked hard to meet the deadline, but we didn't do it alone. We got support from EMC customer service professionals, all of EMC IT, as well as manufacturing and engineering. Everyone understood that getting the data center up was a priority for EMC. I only had to ask an organization once for help and received tremendous support.
Besides being creative in coping with unforeseen challenges–scheduling slips, missing or mismatched components and the like–there are a few other lessons I can share:
– Standardization and virtualization are tremendous simplifiers. When in doubt, Vblock and virtualize.
– Focus your team on one critical aspect of the stand up process at a time. If you need the management cluster stood up to allow a particular set of applications to be moved, concentrate on just that until it is done. Then move on to the next priority.
– Bring in resources from elsewhere in your organization when it makes sense. Flying in sprint teams provided the necessary people-power and kept all of us energized to get everything done.
– Be prepared to work hard; hard work is contagious. This is the kind of endeavor that simply requires pitching in and doing what is needed. The good news with our project was that everyone rose to the occasion, helping in any way they could to meet our goals.
Stay tuned for my next blog on the importance of proactive configuration management in your Cloud Data Center process.