Space and Heat Limitations Make Managing Power a Challenge
Every minute of every day, 90 million transactions take place on the internet. And no, we don’t mean just purchases: “transactions” include emails, app downloads, video streams, social media interactions, retail purchases and more. That works out to about 1.5 million transactions every second.
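The two figures above are consistent with each other; a quick check:

```python
# Sanity check of the article's figures (both numbers come from the text).
transactions_per_minute = 90_000_000
transactions_per_second = transactions_per_minute / 60  # 60 seconds per minute
print(f"{transactions_per_second:,.0f} transactions/second")  # 1,500,000
```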
They’re processed through data centers around the world, running on more than 10,000 servers, supported by networks of switches, routers and cooling equipment. And all of that draws a massive amount of power. That amount isn’t getting smaller by any means; in the U.S. alone, power consumption is expected to double every five years just to keep pace with demand.
So, what’s the problem? Space and heat.
Our energy consumption may be doubling every five years, but the space allotted to the equipment that delivers it is not. Back in the early days, server system infrastructure requirements called for 400 to 600W power supplies, with I/Os using 4 to 6 power blades rated at 30.0A per blade. Now, power supply manufacturers need to triple that output in the same space. We’re looking at closer to 6 to 8 power blades, each capable of handling 70.0 to 80.0A, all while generating no more than a 30°C temperature rise.
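To see how stark that jump is, we can compare the aggregate current capacity of the two generations using the blade counts and per-blade ratings quoted above. This is a rough illustration only; it ignores voltage rails, derating and thermal limits.

```python
# Illustrative comparison of total blade current capacity in the same
# footprint, using the blade counts and ratings quoted in the article.

def total_current(blades, amps_per_blade):
    """Aggregate current capacity across all power blades."""
    return blades * amps_per_blade

legacy_low, legacy_high = total_current(4, 30.0), total_current(6, 30.0)
modern_low, modern_high = total_current(6, 70.0), total_current(8, 80.0)

print(f"Legacy I/O: {legacy_low:.0f}-{legacy_high:.0f} A total")
print(f"Modern I/O: {modern_low:.0f}-{modern_high:.0f} A total")
print(f"Increase:   {modern_low / legacy_low:.1f}x to {modern_high / legacy_high:.1f}x")
```

The low end works out to roughly a 3.5x increase in current through the same space, consistent with the “triple that” requirement described above.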
But all that equipment is likely to generate more heat than can be contained. Servers, routers and other components radiate heat off the racks that hold them. Heat is also a natural byproduct of converting power from AC to DC and DC to AC. The design of the power supply unit’s (PSU) printed circuit board (PCB), including its copper layers, layer thicknesses and footprint, can contribute to the temperature rise as well. Add in all the fans needed to cool everything down (which generate their own heat), and it’s downright tropical. No connector supplier wants their connector to act as a heat sink, a behavior often seen during thermal evaluations when heat transfers from the PCB into the connector.
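A back-of-the-envelope sketch shows why conversion alone is a meaningful heat source. The efficiency figure below is a hypothetical assumption for illustration, not a number from the article:

```python
# Waste heat from power conversion: whatever isn't delivered to the load
# is dissipated as heat inside the supply.

def waste_heat_watts(input_power_w, efficiency):
    """Heat dissipated by a converter running at the given efficiency."""
    return input_power_w * (1.0 - efficiency)

# Assumed example: a 600 W supply (the high end of the legacy range quoted
# earlier) at a hypothetical 94% conversion efficiency.
print(round(waste_heat_watts(600, 0.94), 1), "W dissipated as heat")
```

Even a few percent of inefficiency at these power levels means tens of watts of heat per supply that the enclosure and fans must carry away.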
Finding a solution for space and heat constraints is not easy, though.
On the surface, a good solution seems to be in place for the excessive heat created by all this power equipment. Manufacturers have begun to design vents into the housing of each unit, allowing heat to escape and preventing overheating. But venting becomes irrelevant when higher power density must be packed into a too-small space.
Advances in copper alloys are increasing conductivity, but they aren’t keeping pace with increased power needs. Improvements in contact design can help reduce power loss, but on their own they can’t meet density requirements. On top of that, connector designers are getting requests to decrease the centerline spacing between power contacts, which creates mutual or Joule heating issues.
Regardless of the challenges, the industry is focused on finding solutions to push forward in a better way. Just a 1 percent improvement in data center electrical efficiency can yield millions of dollars in savings, savings that could be passed on to every stakeholder.
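To get a feel for the scale of that 1 percent figure, here is a hypothetical calculation. The fleet power draw and electricity rate below are assumptions chosen for illustration, not figures from the article:

```python
# Hypothetical illustration of what a 1% efficiency gain is worth.
# Fleet size (100 MW) and rate ($0.10/kWh) are assumed, not sourced.

HOURS_PER_YEAR = 24 * 365

def annual_savings(fleet_power_mw, usd_per_kwh, efficiency_gain):
    """Yearly savings from a fractional reduction in electrical consumption."""
    kwh_per_year = fleet_power_mw * 1_000 * HOURS_PER_YEAR
    return kwh_per_year * usd_per_kwh * efficiency_gain

print(f"${annual_savings(100, 0.10, 0.01):,.0f} per year")
```

Under these assumed numbers, a single 100 MW fleet saves on the order of $876,000 a year from a 1 percent gain; aggregated across an industry of many such fleets, the millions the article describes follow quickly.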
For more information, please visit: www.molex.com/link/ext-power.html