Security and regulatory concerns have some users warily eyeing the move to server virtualization.
For example, during the past year, Stanford Hospital & Clinics, part of Stanford University in Palo Alto, Calif., has shifted about half of its applications from traditional server platforms to VMware-based virtual machines (VMs) — and found the move has strongly affected decision-making on security.
“You change the character of the IT infrastructure,” says Mike Mucha, information security officer at the hospital, about what he's seen in virtualization's impact. “There's uncertainty.”
“Virtualization tends to be an extension of the server component and it's led by the server team,” Mucha says. But virtualization's switching aspect means the traditional network itself is altered, which Mucha notes has generated some “pushback” from the network and storage teams that also have to be at the table when it comes to making decisions.
“The server people are taking on non-traditional roles, making decisions about network architecture,” he says about virtualization's impact in his organization.
Security questions arise in a virtualized world, such as where to deploy intrusion-detection and management systems or firewalls.
There's awe at the near-instantaneous speed with which VMware allows VMs to be set up and torn down, but worry about potential abuse of that power, too, whether deliberate or unintentional.
Mucha decided Stanford Hospital & Clinics would benefit by adding another layer of security controls for VMware's ESX servers and management console by inserting the policy-enforcement appliance from start-up HyTrust.
The HyTrust appliance places controls on administrative and user decision-making responsibilities, plus adds some VM-focused intrusion-detection capability.
“It gives us some controls,” Mucha says, adding when it comes to virtualization, a new era of risk mitigation is emerging that has to be addressed, especially as Cisco, Juniper and other traditional switch vendors introduce further virtualized switching technologies.
Others also caution that virtualization should be seen as introducing new risks that need to be understood, especially by any organization subject to regulatory requirements such as the Payment Card Industry Data Security Standard (PCI DSS), which anyone processing payment cards has to follow.
For anyone who has had no experience at all in virtualization, “if you have a choice, I highly recommend you don't adopt virtualization for any regulated projects,” said Joshua Corman, principal security strategist at IBM's Internet Security Systems division, speaking on the topic at the recent Interop conference.
He said virtualization brings new attack surfaces, operational and availability risks, and increased complexity with features such as live migration. Live migration, which moves VMs from one physical server to another, opens up new attack possibilities, he pointed out. Data center managers should be asking whether their VMs are moving to less-secure servers.
For use of virtualization in production, Corman strongly recommended Type 1 hypervisors — bare-metal hypervisors that run directly on hardware — over Type 2 hosted hypervisors that are often free and meant for test and development.
He pointed out the PCI DSS adds more confusion because the rules suggest each server should have only one primary function, which could be taken to mean servers shouldn't be virtualized at all if they are to conform with PCI DSS rules. Acknowledging uncertainty over the matter, the PCI Security Standards Council expects to be issuing guidelines on virtualization and payment-card processing by year-end.
Security managers in regulated industries such as banking are sizing up virtualization with a critical eye.
There's increased pressure to cut costs, something virtualization might deliver, but any savings could evaporate if virtualization brings heightened risks and security concerns, said Lynn Terwoerds, head of security architecture and standards for financial services firm Barclays Bank.
“I'm challenged by that,” said Terwoerds, who spoke on the topic during a panel discussion at last month's RSA Conference.
In the recent effort at Barclays to understand the impact of any move to virtualization, the "risk and audit folks" who play a role in bank technology decisions have been asking questions such as "What new risks are you introducing, and can you lower your risk profile in any way?" Terwoerds pointed out.
She said metrics to firmly answer such questions are hard to come by, and in a bank environment, which must conform to many data-retention regulations as well as Sarbanes-Oxley Act rules, there's no room for a casual decision.
“We want to define in explicit detail where is our customer's data and where is it going,” Terwoerds said, noting the PCI rules seem like “a cold bucket of water” on virtualization deployment in that area.
But the larger issue is that virtualization is not just "a technology problem" that needs to be understood; "it's how I'm managing my vendors and contracts, especially what is auditable," Terwoerds said, adding that banks tend to be cautious on that score.