From its genesis in x86-based servers, virtualization technology has spread rapidly into storage and the network. Today, it's at your desktop, in your processors and memory, and in your switches. It's shaping hardware and software appliances — heck, it's even in your smartphones.
Tom Nolle, CEO of CIMI, a high-tech consulting firm, describes three essential missions for virtualization: as a client technology, as a server technology, and as a network technology. These three areas, he says, are converging around the idea of cloud computing.
On the winding route from that first server virtualization project to the cloud of tomorrow, you never know where virtualization will wind up next. But watching leading-edge enterprises can provide a clue.
Dell builds toward a private cloud
Dell, for example, is well into its third generation of virtualization and is strategizing about the fourth. Virtualization has become the company's main computing platform and a fundamental part of its enterprise infrastructure, and it has opened a smarter path for growth, says Matt Brooks, senior enterprise architect for the company.
Dell's commitment to virtualization translates into a mind-boggling 6,200 virtual machines in use — roughly 2,500 in production and another 3,700 in test and development.
“We’ve gone from a consolidation to a containment focus. The next stage we see, which we're moving into now, is creating an optimized environment where we push all of our workload needs onto a platform that's managed around aggregate capacity,” Brooks says. This applies to data center refreshes or new servers — virtual or otherwise — and requires tighter control of how IT manages capacity, he adds.
From there, Brooks continues, Dell can make the leap to an automated data center (otherwise called the real-time infrastructure or a private cloud) in which the physical and virtual environments are managed as one.
“This is about being able to extend the management and computing efficiencies we see with virtualization into the physical environment. A lot of this involves moving the workload back into external storage, the network or the transactional layer,” he says. “We'd have a singular provisioning process, whereas today we have a provisioning process — a very efficient one — designed around virtualization and another one for the physical platform.”
Once workloads move off the server, Dell gains efficiencies and lots more flexibility, Brooks says. “We'd be able to tag a policy specifically to a workload and say, 'This workload, based on this schedule, this service-level agreement or these capacity requirements, needs to consume the entire resources of this server for a certain period of time and then maybe move and join the rest of the virtualized workloads.'”
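The policy tagging Brooks describes could be modeled, very roughly, as metadata attached to a workload that a scheduler consults before placing it. The following Python sketch is purely illustrative — every name in it is invented, and a real automated data center would make these decisions against live capacity data:

```python
from dataclasses import dataclass

@dataclass
class WorkloadPolicy:
    """Hypothetical policy tag attached to a workload."""
    workload: str
    schedule: str          # e.g. "nightly-batch"
    dedicated_hours: int   # hours the workload needs a whole server
    min_cpus: int          # capacity requirement from the SLA

def place(policy: WorkloadPolicy, hour: int) -> str:
    """Give the workload a dedicated physical server during its
    scheduled window; afterward it rejoins the shared virtual pool."""
    if hour < policy.dedicated_hours:
        return "dedicated-server"
    return "virtual-pool"

billing = WorkloadPolicy("billing-run", "nightly-batch", 4, 16)
print(place(billing, 2))  # dedicated-server
print(place(billing, 6))  # virtual-pool
```

The point of the sketch is the shape of the decision: the policy, not the administrator, determines when a workload consumes an entire server and when it moves back among the virtualized workloads.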
Information, workloads and workspaces
Workload virtualization is one of three next steps that leading-edge enterprises are taking in their move to the 100% virtual data center of the future, says Tony Bishop, CEO of IT consulting firm Adaptivity and onetime overseer of Wachovia's pioneering virtualization initiatives. Information virtualization comes first; workspace virtualization is the result.
With information virtualization, an enterprise is able to assemble a single view, or profile, of a client by bringing together information stored in multiple repositories. “Let's say I have a platinum customer with 12 accounts. I don't want to make that customer go into each account. I want a single profile — and this is possible with information virtualization,” Bishop says.
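Mechanically, what Bishop describes amounts to federating queries across separate repositories and presenting one merged view. Here is a toy Python sketch of that idea; the in-memory dictionaries stand in for what would really be separate databases, file stores or services behind a federation layer, and all names are hypothetical:

```python
# Hypothetical back-end repositories, each holding one slice of the
# customer's data.
crm = {"cust-42": {"name": "Acme Corp", "tier": "platinum"}}
accounts = {"cust-42": [{"id": i, "balance": 1000 * i} for i in range(1, 13)]}
support = {"cust-42": {"open_tickets": 2}}

def customer_profile(cust_id: str) -> dict:
    """Assemble a single profile by pulling from each repository,
    so the client never has to go into each account separately."""
    return {
        **crm.get(cust_id, {}),
        "accounts": accounts.get(cust_id, []),
        **support.get(cust_id, {}),
    }

profile = customer_profile("cust-42")
print(profile["tier"], len(profile["accounts"]))  # platinum 12
```

Products such as the Composite Information Server mentioned below do this at enterprise scale, across relational, flat-file and unstructured sources; the sketch only shows the single-profile idea.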
He points to Composite Software, whose Composite Information Server pulls together and presents flat-file and relational data in this way; and Endeca Technologies, whose Information Access Platform does the same for unstructured content, such as PowerPoint presentations, videos and Word documents. “Information virtualization is going to be big,” Bishop predicts. “If I can't get to the information I need, right away, the benefits of virtualization will become limited. You need to do more than virtualize the infrastructure.”
The same could be said of the workload. “If I can't move the workload around to where the processing and resources are that best fit what I'm trying to do, then I'm not able to take advantage of the elasticity and fluidity expected of virtualization,” Bishop says.
In the case of that platinum customer, IT systems should recognize the customer's requests, then send the workload to the best resources — physical or virtual — for meeting the service levels or response times appropriate to that client level. "The business could say, 'I don't care what's going on, platinum customers have to have blink-of-an-eye response times,'" Bishop says. "If you don't virtualize at the workload tier and make sure the workload moves to wherever the best fit is, you're never going to get there."
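At its simplest, the tier-aware routing Bishop describes is a lookup from client level to the resource class that can meet its response-time target. A minimal Python sketch, with the tier names and pool names invented for illustration — a real workload broker would choose among physical and virtual hosts dynamically rather than from a static table:

```python
# Hypothetical mapping from customer tier to the resource pool that
# can meet that tier's service level.
SLA_POOLS = {
    "platinum": "low-latency-physical",  # blink-of-an-eye response
    "gold": "fast-virtual-pool",
    "standard": "shared-virtual-pool",
}

def route_workload(customer_tier: str) -> str:
    """Send the workload to the best-fit resources for the tier,
    falling back to the shared pool for unknown tiers."""
    return SLA_POOLS.get(customer_tier, "shared-virtual-pool")

print(route_workload("platinum"))  # low-latency-physical
print(route_workload("trial"))    # shared-virtual-pool
```

The products named in the next paragraph provide this kind of demand-driven workload placement as part of a larger application-fabric framework.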
This represents a shift from today's supply-driven mentality to a demand-driven, service-oriented approach. Getting there requires that enterprises adopt a product such as Appistry CloudIQ Platform (formerly called Enterprise Application Fabric), DataSynapse's FabricServer or IBM WebSphere XTP and build a framework around it, he says.
Virtualizing the workspace is the next logical step, Bishop says. "If you can break the bond of hardwired information and content, and have it so that whenever I ask for something it gets processed [to meet service levels], then you have the ability to have a virtual extension anywhere."
This goes beyond the desktop virtualization concept talked about today, in that the user need not have a distinct physical PC. A smartphone would suffice, maybe even a TV, Bishop says. "With a single ID, I should be able to travel anywhere and if I can just get to a screen with Internet access, I should be able to have my entire workspace with me — completely there but virtual."
While leading-edge enterprises are striding toward this virtual nirvana, the majority of companies are baby-stepping their way through current-generation virtualization projects. What's next for them is more about growing the virtual server environment, integrating virtualization across servers, storage and the network, extending virtualization to the desktop — and figuring out how to manage it all.
We explore those issues in this New Data Center package.