Novel way to cool data centers passes first test

A team of engineers led by Lawrence Berkeley National Laboratory has successfully tested a novel system that they say could greatly improve the efficiency of data center cooling.

It's an important area for data center operators, who are struggling with the escalating costs of cooling increasingly powerful server equipment. Some facilities have been unable to add new equipment because they have reached the limit of their power and cooling capacity.

By some estimates, the energy used to cool IT systems accounts for nearly half the cost of running a data center. The amount of energy consumed by data centers in the U.S. doubled between 2000 and 2006, and could double again by 2011 if practices aren't improved, according to the U.S. Department of Energy.

Server equipment in data centers needs to be kept within a certain temperature range. Hardware can fail if it is too warm, but overcooling wastes energy. Still, most data centers err on the side of caution and cool their equipment more than they need to.

The Lawrence Berkeley engineers, working with Intel, Hewlett-Packard, IBM and Emerson Network Power, have been experimenting with a way to deliver just the right amount of cooling to computing equipment.

They fed temperature readings from sensors that are built into most modern servers directly into the data-center building controls, allowing the air conditioning system to keep the facility at just the right temperature to cool the servers.
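The white paper doesn't spell out the control logic, but the idea reduces to a simple loop: poll each server's built-in inlet-temperature sensor, take the warmest reading, and hand it to the building controls as the value the cooling plant regulates against. The sketch below is only an illustration of that loop; the sensor-reading and building-control functions are hypothetical stand-ins, not the interfaces the project actually used.

```python
# Minimal sketch of the sensor-to-building-controls loop described above.
# read_inlet_temperature() and set_cooling_target() are hypothetical stand-ins
# for whatever server-management and building-control interfaces a site exposes.

import time

TARGET_INLET_C = 25.0   # assumed upper bound for server inlet temperature
POLL_INTERVAL_S = 60    # how often to sample the sensors

def read_inlet_temperature(server_id: str) -> float:
    """Hypothetical: query the server's front-panel inlet sensor."""
    raise NotImplementedError

def set_cooling_target(hottest_inlet_c: float) -> None:
    """Hypothetical: pass the reading to the building control system,
    which adjusts CRAH output so the hottest inlet stays near the target."""
    raise NotImplementedError

def control_loop(server_ids: list[str]) -> None:
    while True:
        readings = [read_inlet_temperature(s) for s in server_ids]
        # Cool only as much as the warmest server actually needs.
        set_cooling_target(max(readings))
        time.sleep(POLL_INTERVAL_S)
```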

It's a simple idea, but one that hadn't been achieved before. IT and facilities management systems have historically been managed separately. Computer Room Air Handlers, or CRAH units — basically large air conditioners — are most often controlled using temperature sensors located on or near the CRAH air inlets.

That's the way 76 percent of data centers do it, according to an end-user study cited in a white paper about the experiment. Eleven percent of data centers place the sensors in the cold aisles between the server racks, which is better but still not ideal.

Linking the IT equipment directly to the cooling systems represents “the most fruitful area in improving data center efficiency over the next several years,” according to the white paper.

The project has been a success, according to Bill Tschudi, a program manager at Lawrence Berkeley. “The main goal we had was to show that you could do this, that you could use the sensors in the IT equipment to control the building systems, and we achieved that,” he said.

The amount of energy saved will vary depending on how efficient a data center is to begin with, he said. He predicted that most data centers would see a return on their investment within a year.

Most data centers today are over-cooled, according to the end-user study. It found that 90 percent of respondents keep their data center at least 5°C below the upper limit recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which publishes data center temperature guidelines. Even a few degrees of extra cooling can be expensive.

“There's this idea that the best data center is a cool data center, but what we've found is that it's safe to run them a little bit warmer,” said Allyson Klein, a manager with Intel's Server Platform Group.

Linking the IT and building control systems sounds simple but posed some technical challenges. IT management systems speak a different language from building control systems, so the engineers had to develop software to convert the IT information into a protocol that can be understood by the CRAH units.
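The article doesn't name the protocols on either side, but the conversion amounts to reading a value in the IT system's format and re-expressing it as a point the building controller understands. A rough sketch of that translation step follows; the field names and both interfaces are assumptions for illustration, since real deployments would use whatever the site's CRAH controllers speak (commonly a building-automation protocol such as BACnet or Modbus).

```python
# Sketch of a protocol-translation shim between IT management data and a
# building control system. Field names and the point-naming scheme are
# assumptions, not details from the project.

from dataclasses import dataclass

@dataclass
class ServerReading:
    server_id: str
    inlet_temp_c: float      # value reported by the server's built-in sensor

@dataclass
class BuildingControlPoint:
    point_name: str          # identifier the building controller understands
    value: float
    units: str

def translate(reading: ServerReading) -> BuildingControlPoint:
    """Re-express an IT-side sensor reading as a building-control point."""
    return BuildingControlPoint(
        point_name=f"rack_inlet_temp/{reading.server_id}",
        value=reading.inlet_temp_c,
        units="degC",
    )
```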

The software was custom-written for the project, but commercial vendors are developing products to do that work, Klein said.

The project also used variable-speed fans in the CRAH units, which allow the cooling supply to be regulated more precisely. But Klein said data centers could see benefits even without those fans, just from having more precise data about server temperatures.
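One straightforward way to drive a variable-speed fan from the sensor data is a proportional controller on the gap between the hottest measured inlet temperature and the target. The sketch below is only an illustration; the gain and speed limits are invented, not taken from the project.

```python
# Illustrative proportional control of a variable-speed CRAH fan.
# The gain and limits here are invented for the example.

MIN_SPEED = 0.3   # keep some airflow even when inlets are cool
MAX_SPEED = 1.0
GAIN = 0.1        # fan-speed fraction added per degree C above target

def fan_speed(hottest_inlet_c: float, target_inlet_c: float = 25.0) -> float:
    """Return a fan-speed fraction in [MIN_SPEED, MAX_SPEED]."""
    error = hottest_inlet_c - target_inlet_c
    speed = MIN_SPEED + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, speed))
```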

The project is now being wrapped up, and the engineers will report their findings in a session at the Intel Developer Forum this month and at the Data Center Energy Efficiency Summit in October. NetApp has conducted a similar project and will also present its findings at the summit.

Part of the technique's appeal is that the up-front costs are relatively low. “We're using industry-standard technologies, so there's no special sauce that would prevent customers from employing this,” Klein said. The temperature data could be fed directly into the building control systems, or sent via management consoles from IBM, HP and others, she said.

Most new servers include the front-panel temperature sensors employed in the experiment, and the EPA plans to add the sensors to its list of requirements for Energy Star servers, she said.

Other types of instrumentation data are likely to be used in the future.

“If you think about it, this is just a baby step to get started,” Tschudi said. “You could use this same idea to integrate more of the data center, so that instead of thinking of it in terms of IT equipment and infrastructure equipment, you could think of it as a single entity that's seamlessly controlling itself.”
