This effectively paralyzed the project for a solid month, since nothing could happen until the engineer rendered his verdict. Four weeks later the engineer announced the floor stable -- barely. While the two rooms could house a datacenter, it would have to be a lightweight datacenter, because most of the racks would be limited to an 800-pound maximum load, the few exceptions being certain areas over the support beams. That was a nasty kick in the nethers, given that a fully loaded cluster-running rack can weigh as much as 2000 pounds, and we had planned on using six of the 12 racks in the new datacenter for Beowulf clusters. Strike one -- back to the drawing board.
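To put the problem in plain numbers, here is a minimal back-of-the-envelope sketch of the mismatch. The figures come from the paragraph above; the variable names and the calculation itself are our illustration, not the structural engineer's actual analysis.

```python
# Hypothetical per-rack weight check; figures are taken from the article,
# everything else is an illustrative placeholder.

FLOOR_LIMIT_LBS = 800        # per-rack limit away from the support beams
CLUSTER_RACK_LBS = 2000      # a fully loaded cluster rack
PLANNED_CLUSTER_RACKS = 6
TOTAL_RACKS = 12

excess_per_rack = CLUSTER_RACK_LBS - FLOOR_LIMIT_LBS

print(f"Cluster rack exceeds floor limit: {CLUSTER_RACK_LBS > FLOOR_LIMIT_LBS}")
print(f"Excess weight per cluster rack: {excess_per_rack} lbs")
print(f"Racks affected: {PLANNED_CLUSTER_RACKS} of {TOTAL_RACKS}")
```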
A flurry of tropical meetings later, we had what looked like an effective workaround. The four server clusters would move to another location, while the HIG datacenter would now house departmental servers from the various SOEST departments in 12 APC InfraStruXure racks. This would effectively make HIG 319 the central datacenter for all these departments while freeing up space for the clusters at the other locations. Not an optimal solution, but a necessary move if the college intended to install the new server clusters it wanted.
Lesson 2: Don't skimp on professional services
Work on gutting and remodeling HIG 319 resumed, and we made our first official contacts with APC for power and cooling solutions and rack requirements. The information we received back took into account our square footage, the current electrical and cooling specs of the two rooms, and our intended server and rack load. APC ran all these figures through its datacenter planning tool and sent back a series of PDFs that gave us an initial floor plan, the names and model numbers of the power and cooling solutions it recommended, and a basic blueprint of every rack in the new datacenter. Initially this looked great, but later we found we’d made a critical mistake.
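For readers curious what goes into such an estimate, the sketch below shows the basic power-to-cooling arithmetic any planning exercise of this kind involves. It is our simplified illustration, not APC's tool; the per-rack load is an assumed placeholder, while the watts-to-BTU and BTU-to-tons conversions are standard.

```python
# Rough sizing arithmetic of the kind a datacenter planning tool performs.
# The input figures below are made-up placeholders, not the numbers APC used.

RACKS = 12
AVG_KW_PER_RACK = 4.0          # assumed average IT load per rack (placeholder)
WATTS_TO_BTU_PER_HR = 3.412    # standard conversion: 1 W ~ 3.412 BTU/hr
BTU_PER_TON = 12_000           # 1 ton of cooling = 12,000 BTU/hr

total_kw = RACKS * AVG_KW_PER_RACK
heat_btu_hr = total_kw * 1000 * WATTS_TO_BTU_PER_HR
cooling_tons = heat_btu_hr / BTU_PER_TON

print(f"Total IT load: {total_kw:.1f} kW")
print(f"Heat load: {heat_btu_hr:,.0f} BTU/hr")
print(f"Cooling required: {cooling_tons:.1f} tons (before any safety margin)")
```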
APC was kind enough to volunteer not only the equipment, but also manpower for the project. Understandably, the company wanted to save as much money as it could here, so our project was run using the cost-savings model rather than APC’s full professional services model for datacenter design. The deluxe model would have required more manpower in the form of a project manager on APC’s side.
For readers embarking on their own datacenter project, we can’t recommend strongly enough spending the money on full professional services consulting with a core vendor such as APC. Had we had the good sense to solicit the service, UH reps say they would have tried to come up with the money somewhere, because trying to save cash by running without such help is very risky -- as we were about to find out.
Even at this early planning stage, an APC project manager would have gone over every detail in a conference call, whereas we simply received PDF-laden e-mails; he also would have given recommendations for installing the wiring, piping, and other prerequisites. Opting for the unroyal treatment, we were simply referred to a reference page on APC’s Web site that showed piping specs for a variety of different cooling solutions. Left to our own devices -- and the recommendation of a UH air-conditioning engineer who misunderstood some specifications -- we made the wrong choice.
In short, there's no substitute for expert guidance. An APC project manager would have made this selection for us and simply told us what to install. The right piping would have been a no-brainer from the start, instead of a last-minute correction that nearly became a costly rip-out-and-replace exercise.