Virtualization, cloud computing pose new challenges, opportunities

Three years ago, Carnegie Mellon University opened the Data Center Observatory – an answer to the ever-rising operational costs in IT. Administrative expenses were spiraling out of control because individual research groups within the university were running their own IT infrastructure, characterized by short periods of heavy use followed by many hours sitting idle and wasting energy.

The solution was to build an administered utility that provides computational and storage resources to the university community. Besides improving administrative efficiency, the DCO helped control power and cooling costs while letting researchers focus on what they do best rather than worry about maintaining their own mini data centers.

“We didn't have the name cloud computing [at the time] but as it turns out that's exactly what I was pitching to the university,” says Greg Ganger, a professor of electrical and computer engineering and director of Carnegie Mellon's Parallel Data Lab, a storage systems research center.

So far, the DCO houses 325 computers connected to 12 network switches, 38 power distributors and 12 remote console servers. More than 1,000 cables and 530TB of storage are in use, while environmental conditions are monitored by 13 sensor nodes. Most equipment is donated by vendors or bought with grants.

Two thousand square feet in size, the DCO is being built in zones, with two out of four zones online at this time.

The DCO gets the “observatory” part of its name because it was designed not only to provide real data center resources but also to serve as a test bed for systems researchers looking to “understand the sources of operational costs and to evaluate novel solutions,” according to Carnegie Mellon. A windowed wall and an LCD display showing electrical usage and other statistics give people walking by a sense of what's happening inside the Data Center Observatory.

Building the DCO was not without its challenges, however. Besides “playing Tetris with the room” to figure out how best to place equipment, Ganger found that convincing researchers to share was not always easy.

“We learned how hard it is to get people in the same space,” says Ganger, who described the project at a recent event hosted by Schneider Electric and in an interview with Network World. “Each group had its own operating system that they had to have, and their own set of libraries and unique setups. Early on it was clear we had to use virtual machines.”

Rather than use the expensive VMware virtualization tools, Ganger opted for the open source Xen and KVM platforms. About a third of the DCO's machines have been virtualized, making it easier to increase and decrease the resources provisioned to each research group. Overall, virtualization has been very useful but has raised some interesting concerns, he says.

Virtual machines need lots of memory, Ganger notes. If VMs can be suspended when they are not in use, it's easier to provide memory to the VMs that need it. But suspending a VM can harm the application running inside it if the application wasn't written specifically for a VM, Ganger says.

“If they have open network connections that are active, those connections will break [when the VM is suspended],” Ganger says. “We're trying to figure out how to have the capability to get stuff out of the way so it's not taking up memory.”
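For readers who want a concrete picture of the approach Ganger is describing, the sketch below shows one way it could look in code. It uses the Python bindings for libvirt, a management library that works with both Xen and KVM, to save an idle guest's memory to disk and restart it later. The guest name and connection URI are placeholders, and this is an illustration of the general suspend-to-disk idea rather than anything the DCO is known to run; note that any network connections open inside the guest would still break while it is parked.

```python
# Sketch: freeing host memory by saving an idle VM to disk with libvirt,
# then restoring it on demand. Assumes the libvirt Python bindings and a
# local KVM/QEMU hypervisor; the domain name below is hypothetical.
import libvirt

DOMAIN_NAME = "research-vm-01"  # hypothetical guest name

def park_idle_vm(conn: libvirt.virConnect, name: str) -> None:
    """Save the guest's memory state to disk and stop it, freeing host RAM."""
    dom = conn.lookupByName(name)
    if dom.isActive():
        # managedSave() writes memory to a libvirt-managed file and shuts the
        # domain down; open network connections inside the guest will time out
        # while it is parked -- the caveat Ganger raises.
        dom.managedSave(0)

def wake_vm(conn: libvirt.virConnect, name: str) -> None:
    """Restart the guest; libvirt restores from the managed save image if one exists."""
    dom = conn.lookupByName(name)
    if not dom.isActive():
        dom.create()  # resumes from the saved state when one is present

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")  # or a Xen URI such as "xen:///system"
    try:
        park_idle_vm(conn, DOMAIN_NAME)
        wake_vm(conn, DOMAIN_NAME)
    finally:
        conn.close()
```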

Ganger and his team designed the Data Center Observatory in partnership with the Schneider Electric-owned APC, which supplied In-Row Cooling and Hot Aisle Containment technologies, allowing for a potential capacity of 40 racks and 774 kilowatts of power.

Figuring out how to efficiently cool such large densities of equipment took lots of planning.

“As a person who comes from a software systems background, it never occurred to me how much was involved in constructing a room like this, the power and cooling issues, the scale of the power and scale of computing involved,” Ganger says.

Although the phrase “cloud computing” was not in vogue when Ganger started building the Data Center Observatory, he now considers the DCO to be essentially a private cloud for Carnegie Mellon researchers.

“I think of [a cloud] as an infrastructure that's managed by some other group … that you can count on for providing the hardware resources you need to do your work,” he says.

Though originally designed for internal usage, the DCO has become part of public clouds such as Open Cirrus, a cloud computing research test bed created by HP, Intel and Yahoo; and a university collaboration project known as the Open Cloud Testbed. Carnegie Mellon is also part of the Internet2 consortium and the National LambdaRail network.

These cloud experiments are in the early stages, but Ganger expects them to become more important as time goes on. “Eventually, [becoming part of the larger, public cloud] is going to be the right answer,” he says. “Eventually cloud computing is going to be something that is understood well enough that the interfaces are standardized, and people agree it's the right way to do it, and it handles all the different modes of computation you want to handle.”

As the public cloud matures, researchers across the country may have access to machines in the DCO, and Carnegie Mellon researchers will increase utilization of external data center resources. But Ganger says the software layer that assigns resources will have to become more sophisticated, with the ability to dynamically provision compute and storage capacity to each user without overburdening any specific data center that's attached to the cloud.
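Ganger doesn't spell out how such a layer would work, but the general idea can be pictured as a simple placement heuristic: send each request to whichever attached data center has the most spare capacity, and reject it if none can absorb it. The Python below is a toy illustration only; the site names and core counts are invented.

```python
# Toy sketch of cross-data-center placement: assign each request to the site
# with the most free capacity so no single data center is overburdened.
# The site names and capacities below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    total_cores: int
    used_cores: int = 0

    @property
    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

def place(request_cores: int, sites: list[Site]) -> Site | None:
    """Pick the site with the most spare cores that can still fit the request."""
    candidates = [s for s in sites if s.free_cores >= request_cores]
    if not candidates:
        return None  # no attached data center can absorb the request
    best = max(candidates, key=lambda s: s.free_cores)
    best.used_cores += request_cores
    return best

if __name__ == "__main__":
    cloud = [Site("dco", 400), Site("partner-a", 1200), Site("partner-b", 800)]
    for cores in (64, 256, 512):
        chosen = place(cores, cloud)
        print(cores, "cores ->", chosen.name if chosen else "rejected")
```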

“Two years from now, I would like to be at the point where that kind of resource flexibility is there,” he says, “but right now it's not, right now we're just spinning the thing up.”
