FEATURE – Overhead cabling saves energy in data centres

Although not considered a best practice from an energy efficiency point of view, a common method for cooling data centre equipment is to use a raised floor as a plenum for delivering cold air to server intakes. The cold air is forced underneath the floor by fans within air handlers. However, this method is not the only option. Many new data centres today forgo the expense of a raised floor and place equipment on a hard floor, cooling their servers with in-row, overhead or room air-conditioning with hot aisle containment. The hard floor approach also forces the issue of placing cables overhead, and many data centres have become accustomed to working with overhead cables.

January 24, 2012  By Victor Avelar



In both cases, data centre owners have to resolve the issue of how to lay out power and data cables. Data centres that depend on raised floor cooling distribution often route network data and power cabling underneath the raised floor. This cabling then feeds individual IT racks through cable cutouts at the back of each rack. These cutouts allow cold air to bypass the server inlets at the front of the racks and mix with the hot air at the back. This design practice can lead to hot spots, clogged floors and lower overall cooling system efficiency.

Raised floors filled with cabling and other obstructions make it difficult to supply cold air to racks. The raised floor cable cutouts necessary to provide cable access to racks and power distribution units (PDUs) can result in cold air leakage of 35%. Together, the cable blockage and air leakage problems drive the need for increased fan power, oversized cooling units, increased pump power and lower cooling set points.

Meanwhile, placing data centre power and data cables in overhead cable trays instead of under raised floors can yield substantial energy savings. This article analyzes the effect of underfloor cabling on cooling and electrical consumption, and shows how moving network data and power cabling to overhead cable trays can lower cooling fan and pump power consumption by 24%.

Underfloor cabling energy waste
Underfloor cabling contributes to energy losses in three ways:

• Blockage of air due to cables.
• Bypass air from rack cable cutouts.
• Bypass air from PDU cutouts.

Blockage of air due to cables
When new network or power cables are added under the floor, older unused cables are rarely pulled out to make room. Instead, the cables are left undisturbed to minimize the risk of downtime. The resulting build-up of cables blocks air flow, contributing to hot spots in the data centre.

A common solution is to add more air conditioning—not for cooling capacity, but for extra fan power to increase static pressure and overcome the underfloor blockages. The raised floor hides the build-up of cables over time. In contrast, overhead cabling is visible and more likely to be properly maintained and managed over the years.

Bypass air from rack cable cutouts
Underfloor cabling requires that cables come up through the floor tile and through the bottom of the rack. Cable cutouts in the tile measure about 8 x 8 in., and are only partially filled with cabling. The remaining space is usually left open, allowing cold air to leak into the hot aisle (assuming a hot-cold aisle layout).

The hot aisle should be the space where the hottest air in the data centre makes its way back to the computer room air handler (CRAH). The cold air that leaks into the hot aisle lowers the air temperature back to the CRAH, which decreases its capacity to remove heat. For example, a CRAH unit with 27ºC return air temperature provides 70kW of heat removal capacity. However, at a return air temperature of 22ºC, the heat removal capacity drops to 43kW. The capacity lost due to bypass air may create hot spots, which are sometimes addressed by adding more CRAH units.
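
To make the relationship concrete, the sensible capacity of a CRAH scales with the gap between its return air temperature and its coil temperature. The sketch below back-calculates an airflow of roughly 9,500 cfm and an effective coil temperature of 14ºC from the two figures just quoted; both values are assumptions for illustration, not manufacturer data.

```python
# A minimal sketch of CRAH sensible capacity as a function of return air
# temperature. The airflow (~9,500 cfm) and effective coil temperature
# (14 C) are assumptions back-calculated from the two figures quoted
# above; they are illustrative, not manufacturer data.

RHO_AIR = 1.2              # air density, kg/m^3
CP_AIR = 1005.0            # specific heat of air, J/(kg*K)
CFM_TO_M3S = 0.000471947   # 1 cfm in m^3/s

def crah_capacity_kw(return_temp_c, airflow_cfm=9500.0, coil_temp_c=14.0):
    """Sensible heat removal: Q = rho * V * cp * (T_return - T_coil)."""
    mass_flow = RHO_AIR * airflow_cfm * CFM_TO_M3S   # kg/s
    return mass_flow * CP_AIR * (return_temp_c - coil_temp_c) / 1000.0

for t in (27.0, 22.0):
    print(f"Return air {t:.0f} C -> {crah_capacity_kw(t):.0f} kW")
# Return air 27 C -> 70 kW
# Return air 22 C -> 43 kW
```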

Bypass air from PDU cutouts
Many PDUs are configured with four 42-position panels, meaning up to 168 individual circuits can be distributed to the IT racks. In addition to these conductors, large input conductors feed the PDU. Installing and removing these conductors requires a 9-sf to 16-sf opening underneath the PDU. Bypass air escaping around these conductors has the same negative effect on cooling system efficiency as bypass air from rack cable cutouts.

Energy savings with overhead cabling
The energy savings attributed to overhead cabling derive from lower fan and pump losses. Chiller energy cost savings can also be realized when the chilled water supply temperature is increased. A hypothetical data centre was modelled to evaluate the savings from moving network and power cables to overhead cable tray. The assumptions used for the analysis include the following (a short sketch after the list reproduces the derived figures):

• Data centre capacity: 1 MW
• Cooling system: chilled water
• Constant speed CRAH fans
• Rack inlet temperature with underfloor cabling: 18ºC
• Rack inlet temperature with overhead cabling: 20ºC
• Average rack density: 2 kW/rack
• IT Equipment ΔT: 11ºC
• Quantity of IT racks: 500
• Average cable cutout area per rack: 0.33 sf (a conservative figure, since the 8 x 8-in. cutout is partially filled with cabling)
• Total rack cable cutout area: 167 sf
• Minimum airflow required for IT: 120,038 cfm
• Hot air recirculation: 5% of airflow required for IT
• Average cfm at the front of each rack: 240 cfm
• Open area of 25% open perforated tile: 1 sf
• Average velocity at the front of each rack: 240 ft/min
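
As a sanity check, the short sketch below recomputes the derived figures in the list from the stated inputs; nothing beyond the article's own assumptions is used.

```python
# A minimal sketch that recomputes the derived figures from the inputs
# above; every number comes from the article's assumption list. The
# article quotes 167 sf of total cutout area because 0.33 sf/rack is a
# rounded per-rack figure.

racks = 500                      # quantity of IT racks
it_load_kw = 1000.0              # 1-MW data centre at 100% load
it_airflow_cfm = 120_038.0       # minimum airflow required for IT
cutout_area_per_rack_sf = 0.33   # open area per rack cable cutout
tile_open_area_sf = 1.0          # open area of a 25% open perforated tile

density_kw = it_load_kw / racks                      # 2 kW/rack
total_cutout_sf = cutout_area_per_rack_sf * racks    # ~165 sf (quoted as 167)
cfm_per_rack = it_airflow_cfm / racks                # ~240 cfm
velocity_fpm = cfm_per_rack / tile_open_area_sf      # ~240 ft/min

print(f"{density_kw:.0f} kW/rack, {total_cutout_sf:.0f} sf of cutouts,")
print(f"{cfm_per_rack:.0f} cfm and {velocity_fpm:.0f} ft/min per rack front")
```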

In this analysis, a 1-MW data centre at 100% load is assumed to have 500 IT racks at an average power density of 2 kW/rack. Table 1 shows the calculated area of open tile cutout space and the associated air leakage as a percent of total required IT airflow. It is clear that the cable cutouts behind IT racks contribute the largest share of cold air leakage in data centres with raised floor cooling.

Moving the power and data cabling overhead reduces the total leakage to 13%. This reduction in leakage causes the CRAH return temperatures to increase, which then increases the cooling capacity of each individual CRAH. Ultimately, this reduces the number of CRAH units required.
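
One way to see this effect is a simple air-mixing estimate: the return air reaching the CRAH is a blend of hot IT exhaust and leaked cold air. The sketch below assumes a 15ºC supply temperature, which the article does not state; everything else comes from the model's assumptions.

```python
# A minimal sketch of the air-mixing effect. Return air at the CRAH is a
# blend of hot IT exhaust and leaked cold air. The 15 C supply temperature
# is an assumption (the article does not state it); the leakage fractions,
# rack inlet temperatures and 11 C IT delta-T come from the model above.

def return_temp_c(leak_fraction, rack_inlet_c, it_delta_t=11.0, supply_c=15.0):
    exhaust_c = rack_inlet_c + it_delta_t   # hot-aisle temperature
    return leak_fraction * supply_c + (1.0 - leak_fraction) * exhaust_c

print(f"Underfloor (35% leakage): {return_temp_c(0.35, 18.0):.1f} C return")
print(f"Overhead   (13% leakage): {return_temp_c(0.13, 20.0):.1f} C return")
# Warmer return air raises each CRAH's usable capacity, so fewer units
# are needed.
```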

Table 2 shows the design conditions modelled for the underfloor and overhead cabling scenarios. The temperatures for the rack inlet air and the CRAH supply and return air are based on energy balance equations which account for hot and cold air leakage. In this analysis, the number of CRAH units was reduced from 42 to 31. This leads to an estimated 24% savings in fan and pump power.
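
Since the model uses constant speed CRAH fans, fan power scales roughly with the number of running units. The sketch below uses a placeholder per-unit fan power to show the arithmetic; it illustrates the scaling, not the article's full fan-and-pump calculation.

```python
# A minimal sketch of the fan-power side of the savings. With constant
# speed fans, each running CRAH draws roughly fixed fan power, so fan
# energy scales with the unit count. The per-unit figure is a placeholder;
# the article's combined fan-and-pump savings figure is 24%.

crah_before, crah_after = 42, 31
fan_kw_per_unit = 5.0   # assumed for illustration only

before_kw = crah_before * fan_kw_per_unit
after_kw = crah_after * fan_kw_per_unit
print(f"Fan power: {before_kw:.0f} kW -> {after_kw:.0f} kW "
      f"({1 - after_kw / before_kw:.0%} reduction)")
# ~26% on fans alone; pumps, with smaller relative savings, bring the
# combined figure to the article's 24%.
```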

This analysis does not include the benefit of reduced air blockages under the raised floor; removing abandoned cabling under the floor would increase the energy savings stated above. In addition to the energy savings, significant capital cost savings are realized by forgoing the cost of 11 extra CRAH units, an estimated savings of $90,000. Finally, the analysis assumed the same chilled water supply temperature for both scenarios. Where the chiller is dedicated to the data centre, the chilled water temperature could be raised, further increasing chiller efficiency and overall savings.

Spaghetti anyone?
Even overhead cabling can develop the problem of cable “spaghetti”: a dense mass of entangled cables. When this occurs, new cable cannot be laid properly because “dead” cabling cannot be pulled out of the existing pile to make room. Cable trays begin to sag under the weight of cables, increasing the risk of a fault in equipment operation.

Consider a row of racks full of servers and networking equipment, with cables connected to panels and servers laid in trays on top of the racks. When a contact breaks, the connection between two points is lost, yet the faulty cable is nearly impossible to find or remove within the mass of tangled cables. In these cases, new cable is often laid between the two points and the old, defective cable is left inside. Over time, this clutter can leave 80% of dead cables in place while the total quantity of cables keeps growing.

Gradually, the cable tray supports can no longer carry the increasing load and more supports must be installed. In addition, no space is left under the ceiling because the cable bundles are all laid at one level.

The solution to this dilemma is to organize cable in trays mounted at different levels. Multi-level cable tray organization allows data centre personnel to sort and plan cable location, integration and removal on an ongoing basis. When a dead cable needs to be removed, it will not be tangled or buried. It will be easy to extract the cable from a single small bundle.

As the data centre changes, equipment moves in and out, and components are added and removed. These changes mean frequent modifications to cabling, which is why the cable tray system must be designed to accommodate them. New tray infrastructure must be compatible and interchangeable with the old system, and the overhead tray system has to be flexible enough to evolve without fundamental changes to the original design.

Conclusion
Significant energy waste occurs in data centres when cable congestion forms air dams beneath the raised floor and cable penetrations in the raised floor tiles allow cold air to escape and mix with the hot air. Modelling and analysis show that placing network data and power cabling in overhead cable trays can lower cooling fan and pump power consumption by 24%.

Running cables overhead saves energy and improves reliability through better cable maintenance practices. Routing structured cabling and power cabling in overhead cable trays brings several benefits: raised floor plenums offer less impedance to air flow when they are free of cables, and less air leaks out because the floor needs no holes to accommodate cabling. As a result, less fan energy is required to cool servers. Placing cables overhead also removes one reason to absorb the significant expense of a raised floor.

Overhead cable tray technology has made advances in recent years. These systems are now modular and much more flexible to accommodate dynamic data centre environments. Sound cable practices include the deployment of multi-layered overhead cable tray systems.

Victor Avelar is a senior research analyst at Schneider Electric’s Data Centre Science Centre. He is responsible for data centre design and operations research, and consults with clients on risk assessment and design practices to optimize the availability and efficiency of their data centre environments. Victor holds a bachelor’s degree in mechanical engineering, and is a member of AFCOM—an association of data centre management professionals.

