Consortium tackles cloud computing standards

Everyone’s talking about building a cloud these days. But if the IT world is filled with computing clouds, will each one be treated like a separate island, or will open standards allow them all to interoperate?

That’s one of the questions being examined by the Open Cloud Consortium (OCC), a newly formed group of universities that is trying both to improve the performance of storage and computing clouds spread across geographically disparate data centers and to promote open frameworks that will let clouds operated by different entities work seamlessly together.

Cloud is certainly one of the most used buzzwords in IT today, and marketing hype from vendors can at times obscure the real technical issues being addressed by researchers such as those in the Open Cloud Consortium.

“There’s so much noise in the space that it’s hard to have technical discussions sometimes,” says Robert Grossman, chairman of the Open Cloud Consortium and director of the Laboratory for Advanced Computing (LAC) and the National Center for Data Mining (NCDM) at the University of Illinois at Chicago.

The OCC wants to support development of open source software for cloud-based computing and develop standards and interfaces for the interoperation of various types of software that support cloud computing.

OCC members include the University of Illinois, Northwestern University, Johns Hopkins, the University of Chicago, and the California Institute for Telecommunications and Information Technology (Calit2). Cisco is the first major IT vendor to publicly join the OCC, though more could be on the way.

The consortium’s key infrastructure is the Open Cloud Testbed, which consists of two racks in Chicago, one at Johns Hopkins in Baltimore and one at Calit2 in La Jolla, all joined by 10 Gigabit Ethernet connections.

Grossman and colleagues recently used the testbed to measure the performance penalty of doing computation over wide areas. Grossman says that by using Sector and Sphere, open source software developed by the National Center for Data Mining for use in storage and compute clouds, they were able to transport data about twice as fast as with Hadoop, an Apache Software Foundation project. One of several reasons for the speed improvement is the use of the UDT protocol, which is designed for extremely high-speed networks and large data sets. Most cloud services use TCP, Grossman says.
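
To make the TCP/UDT distinction concrete, the sketch below opens a connection and sends a buffer using UDT’s BSD-socket-style C++ API, the transport Sector builds on. It is only a minimal illustration under stated assumptions: the server address, port, and payload are hypothetical stand-ins, not details of the Open Cloud Testbed or the Sector code itself.

```cpp
// Minimal UDT client sketch (hypothetical host/port), using the UDT C++ API,
// which mirrors BSD sockets but runs over UDP with its own congestion control.
#include <udt.h>          // UDT library header
#include <arpa/inet.h>
#include <cstring>
#include <iostream>

int main()
{
    UDT::startup();                                   // initialise the UDT library

    UDTSOCKET sock = UDT::socket(AF_INET, SOCK_STREAM, 0);   // reliable stream mode

    sockaddr_in serv{};                               // hypothetical server endpoint
    serv.sin_family = AF_INET;
    serv.sin_port = htons(9000);                      // example port, not from the article
    inet_pton(AF_INET, "203.0.113.10", &serv.sin_addr);

    if (UDT::ERROR == UDT::connect(sock, (sockaddr*)&serv, sizeof(serv))) {
        std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << "\n";
        UDT::cleanup();
        return 1;
    }

    const char payload[] = "example data block";      // stand-in for a large data set
    if (UDT::ERROR == UDT::send(sock, payload, sizeof(payload), 0)) {
        std::cerr << "send: " << UDT::getlasterror().getErrorMessage() << "\n";
    }

    UDT::close(sock);
    UDT::cleanup();
    return 0;
}
```

The interface deliberately looks like ordinary TCP socket code; the difference is underneath, where UDT’s congestion control keeps long, high-bandwidth links full where TCP tends to stall.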

The project won the SC08 supercomputing conference’s Bandwidth Challenge Award. “Processing data by clouds today is almost always done within a single data centre due to the technical challenges processing data across multiple datacenters,” a press release announcing the award states. The project “demonstrated technology … that enables cloud computing to utilize high performance networks and spread cloud computing across data centres to create wide area clouds.”

The Open Cloud Consortium is just getting started, having formed in mid-2008. Grossman says the group is looking at the same technical issues as companies like VMware, which is developing a broad operating system that can manage the entire data center.

The main idea is to gather universities and IT companies in a noncompetitive setting to exchange technical information, with the aim of making cloud computing faster, more secure, and based on open standards and open source software.

“I’m not a marketing guy,” Grossman says. “This is really trying to understand interoperability issues that I still don’t think are clearly understood, and issues about how you operate clouds over wide areas.”
