This is a general question. Occasionally you see published articles about the high energy costs of data centers. This includes the electricity consumed by the computers and other equipment, of course, but it also includes the cost of cooling all that equipment.

Most computers and other electronic devices have a power supply that converts AC to the DC the device actually runs on. This conversion generates a good bit of heat, separate from the heat the device generates when it actually operates. My understanding is that this conversion heat is substantial compared with the power the device actually uses.

So my question is this: why not have a central unit that converts AC to DC outside the area that has to be kept cool? The conversion heat would then be generated outside that area and could presumably be dealt with by fans. A DC bus would then carry power to the units in their racks. Perhaps each device requires a slightly different voltage; I would think not, but perhaps I am wrong. If so, I would guess someone could manufacture a device that plugs into the machine and conditions the voltage as it delivers power.

I'm a mechanical engineer, not an electrical one. Is there anything wrong with my reasoning? Is there something I don't know? I wouldn't think supplying DC over the short runs in a data center would be a problem. Or perhaps I misunderstand the requirements of the devices involved?
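To make the heat question concrete, here is a rough back-of-envelope sketch in Python. All efficiency figures (90% for a per-server PSU, 96% for a central rectifier, 95% for an in-rack DC-DC stage) are illustrative assumptions I picked for the sake of the arithmetic, not measured data.

```python
# Back-of-envelope: how much conversion heat lands inside the cooled
# space under two architectures. All efficiencies are assumed values.

SERVERS = 1000
LOAD_PER_SERVER_W = 500            # assumed IT load per server
total_load = SERVERS * LOAD_PER_SERVER_W   # 500 kW of useful DC load

# Case A: each server has its own AC/DC power supply (assumed 90%
# efficient); all conversion heat is dissipated inside the cooled room.
psu_eff = 0.90
heat_indoors_a = total_load / psu_eff - total_load

# Case B: one central rectifier (assumed 96% efficient) sits OUTSIDE
# the cooled room and feeds a DC bus; a per-rack DC-DC regulation
# stage (assumed 95% efficient) still dissipates INSIDE the room.
dcdc_eff = 0.95
rect_eff = 0.96
bus_power = total_load / dcdc_eff              # power the DC bus must carry
heat_indoors_b = bus_power - total_load        # in-rack DC-DC loss
heat_outdoors_b = bus_power / rect_eff - bus_power  # rectifier loss, outside

print(f"Per-server PSUs: {heat_indoors_a / 1000:.1f} kW of heat indoors")
print(f"Central rectifier: {heat_indoors_b / 1000:.1f} kW indoors, "
      f"{heat_outdoors_b / 1000:.1f} kW outdoors")
```

Under these assumed numbers, the centralized scheme cuts indoor conversion heat roughly in half (about 26 kW versus 56 kW for 500 kW of load), but it does not eliminate it, since some regulation has to happen near the load, and the losses in either case are on the order of 10% of the delivered power, not a multiple of it.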