Processing power comes at a price beyond the cost of the silicon itself, though it is still measured in dollars: the faster the electronics run, the more heat they produce. And if the product is destined for an enclosure, the manufacturer has to ensure that sufficient cooling is incorporated to manage internal temperatures.
According to Rittal industrial product manager David Nicholson, equipment manufacturers aren’t helping either. He says that systems integrators (SIs) are taking advantage of shrinking electronics by housing them in ever more compact chassis – often with scant regard for the thermal problems this produces. “The problem is you still have to get rid of the heat,” emphasises Nicholson. “Ideally SIs should build [compact electronics] into a standard chassis and let it cool naturally, but people don’t want to spend on space when they could have smaller equipment.”
But there is a physical threshold to this argument, claims John Drain, CEO of rack designer SpinServer. “There’s a finite power dissipation you can [extract from a] rack before it’ll melt,” he says. “Whether you cool the cabinet via [methods such as] subfloor or rooftop ventilation is a little academic if the power threshold for a given rack is exceeded.”
This limit is determined by the power the electronics generates in typical operation relative to its maximum temperature specification. Determining cooling requirements can be as simple as measuring the temperature inside and outside a rack of that size during operation. Unfortunately this is an empirical method that demands you build the equipment first and fit the cooling later, which is hardly ideal. An alternative is to work with equipment manufacturers that understand the implications of heat build-up in an enclosure and have designed the product accordingly.
“Unfortunately,” explains Rittal datacomms product manager Darren Nash, “airflow is often an afterthought.
“SIs need to look at the layout of their equipment and calculate the heat losses from each component to determine the required airflow capacity, whether that be via ventilation or chilled air,” Nash explains.
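The calculation Nash describes can be sketched with a standard rule of thumb: the airflow needed to limit the temperature rise through a chassis is roughly CFM ≈ 1.76 × watts ÷ ΔT(°C), a figure derived from the density (~1.2 kg/m³) and specific heat (~1005 J/kg·K) of air. A minimal illustration, with hypothetical component wattages (this is not a Rittal tool, just the arithmetic behind Nash's point):

```python
# Rough airflow estimate from per-component heat losses.
# Rule of thumb: CFM = 1.76 * watts / delta_T_C, derived from
# air density (~1.2 kg/m^3) and specific heat (~1005 J/kg.K).

def required_cfm(total_watts: float, delta_t_c: float) -> float:
    """Airflow (cubic feet per minute) needed to hold the air
    temperature rise through the chassis to delta_t_c Celsius."""
    return 1.76 * total_watts / delta_t_c

# Hypothetical heat losses per component, in watts
components = {"processors": 140, "drives": 30, "PSU loss": 40, "other": 20}
total_watts = sum(components.values())  # 230 W in this example

# Allow a 10 degree C inlet-to-outlet rise
print(f"{required_cfm(total_watts, 10):.0f} CFM")  # -> 40 CFM
```

Halving the permitted temperature rise doubles the required airflow, which is why dense chassis need disproportionately more fan capacity.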
Lilydale, Victoria-based SpinServer does exactly that. “We take airflow issues seriously,” says John Drain.
As the name suggests, SpinServer’s product is just that – a reversed server chassis with the I/O connections and processors at the front end rather than the back. “With most motherboards, the processors are at the front, so with our arrangement, airflow passes over them first,” explains Drain. “In a normal server chassis, the processors are positioned at the rear and are the last component to be cooled.”
In addition, the layout of SpinServer directs incoming air diagonally across the motherboard. “This is because the air venting on 2U and 3U servers is on the left-hand side at the front, while the bulk of the fans are on the right-hand side at the rear, promoting a diagonal airflow,” Drain continues.
The 1U-height SpinServer presents further design challenges. The 1U form factor is only 44.45 mm high, which limits the rear fans to roughly 40 mm; a 40 mm fan has an airflow capacity of approximately five cubic feet per minute (CFM). This compares with an 80 mm fan, typical of most desktop chassis, which has an airflow capacity almost ten times that of the 40 mm variety. To overcome this, SpinServer’s 1U product features three 80 mm fans mounted on an angle to fit within the height of the 1U form factor.
“This gives you a three fan capability in the back of the chassis in addition to the 40 mm fan for the power supply, increasing the airflow capacity,” claims Drain.
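Using the figures quoted above — roughly five CFM for a 40 mm fan and almost ten times that for an 80 mm unit — a quick tally shows why the angled-fan layout matters. The comparison layout with 40 mm fans only is an assumption for illustration, not a specific competing product:

```python
# Approximate airflow budget for the angled-fan 1U layout, using the
# figures quoted in the article: ~5 CFM per 40 mm fan, and "almost
# ten times" that for an 80 mm fan.
CFM_40MM = 5.0            # typical 40 mm fan
CFM_80MM = 10 * CFM_40MM  # ~10x the 40 mm figure

# Three angled 80 mm fans plus the 40 mm power-supply fan
angled_layout = 3 * CFM_80MM + CFM_40MM

# Hypothetical conventional 1U layout limited to upright 40 mm fans
flat_layout = 3 * CFM_40MM + CFM_40MM

print(angled_layout, flat_layout)  # 155.0 20.0
```

On these rough numbers the angled arrangement moves nearly eight times the air of an all-40 mm layout in the same 1U height.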
Go with the flow
Portable storage device maker LaCie (see Electronics News 23 Jan 03) has developed a chassis material dubbed zinc aluminium metal alloy casting (ZAMAC), which the company claims eliminates the need for a cooling fan.
This may prove useful for a stand-alone unit that is free to radiate directly to the air, but is less practical for enclosures featuring stacked racks. “If the chassis is made of some ‘magic’ cooling material but is jammed against another server chassis made of aluminium or steel and there isn’t a separate airflow between them then it won’t help all that much in terms of cooling within that chassis,” says Drain.
Enclosure manufacturer MFB claims it has addressed the problem of heat conduction between stacked racks. “We’ve just introduced a 1U fixed fan unit that allows users to stack server-on-server while not allowing the movement of hot air between each unit,” says the company’s sales manager Jason Jenner. “Forcing hot air from one piece of equipment into another often compounds cooling problems, so it makes good sense to avoid it,” explains Jenner.
Spot cooling is also employed to deal with board hotspots, but its application is limited. “Putting an air column onto a hotspot can achieve good results, but you have to remember the heat is still going somewhere,” says Rittal’s David Nicholson. “Alternatively you could position fans on the rear door behind the hottest components in the rack to assist in sucking cooler air through the rack from front to rear.”
American Power Conversion (APC) is offering an air distribution unit (ADU), which can take cool air from the air-conditioning under the raised floor of a computer room and blow it up through the rack to the top. “We’re in an era where racks are becoming denser and we need to be able to cool them down,” explains APC Asia Pacific availability consulting and services manager Tim Downs. “Our ADU is going to get us one step of the way.”
Designing ventilation into both the racks and the flooring of a computer room air conditioning (CRAC) system increases cooling options. For example, a perforated tile may be laid in front of particularly dense racks, allowing cool air from under the floor to be blown directly onto them.
When a rack is in a remote location, it is usually more viable to air-condition the rack itself (provided it is sealed) rather than the whole room, according to Nash. “Specifically, you’re looking at an air conditioner in the top of the rack and various ducting channels down the sides and towards the front of the rack to target hotspots.”