Like me, you’ve probably noticed a lot of news lately on client virtualization. From new virtualization software and graphics technology by Citrix, to the delivery of an open source client from VMware, there’s definitely no shortage of buzz.
Perhaps because of this buzz, one of my colleagues this week asked me, “Are we witnessing the death of the PC desktop?”
Wow! Now that’s a question that makes you stop and think.
My short answer is no. We’re simply looking at the latest in a series of evolutionary steps that will reshape the way IT thinks of the desktop. Ultimately, I don’t think that we, as end users, will know or care if our desktop is virtual. Let me tell you why.
Revolution or Evolution?
Let’s start by looking at what has happened in the data center. It wasn’t that long ago that server virtualization was the latest rage in IT – in fact, Dell talked about it a bit at last week’s IT Executive Summit for our customers. Now that a large number of companies have virtualized their server and storage infrastructure, we are seeing reports of significant ROI. For example, by using server virtualization and other cost-saving data center techniques, our own IT department was able to cut operating costs by approximately $29 million and avoid building a new data center.
CIOs look at this kind of savings and have to wonder what kind of efficiencies they can gain on the client side.
As a result, 2009 is shaping up to be the “Year of the Pilot.” A substantial number of our customers are experimenting with or adopting virtualization for portions of their client infrastructure.
Why launch a pilot now? A few industry developments have converged to set the stage for the next step in the evolution of client management:
- Data center consolidation and server virtualization are allowing us to centralize both storage and, where appropriate, computing resources.
- New technologies are enabling the delivery of a true PC desktop computer to a variety of devices – we are not talking about a terminal service anymore. And as mentioned previously, virtualization software is evolving to address the issues of scalability, graphics performance, and data center resources that have been sticking points for customers in the past.
- Lastly, client virtualization using a true “bare metal” hypervisor is just around the corner. This will offer a key component for Flexible Computing that will enable what we like to call “one image, any device.” Bare metal hypervisors promise to simplify image management and enable the disconnected use of virtual clients, critical for mobile usage models.
We expect that new client hypervisor technologies will find their way into production networks in the latter half of 2009. When that happens, the game officially changes and we expect to see the market move from pilots to broad-based rollouts.
So what now?
This is the year to get educated and prepare for the coming change. In the next 18-24 months, the way we build, manage, and deploy clients will enter into the next stage of its evolution. If you want to hear more, I’ve posted a series of short videos about Flex Computing, available here. Also, in future posts we’ll examine some of the environments and usage models where the traditional PC is, and may always be, the preferred choice.
For now, though, let me leave you with this thought. The era of “one size fits all” computing is at an end, and the era of “one image, any device” is at hand. Personally, I’m excited at the prospect of true synergy between virtual and traditional PCs. What do you think?