Whiteboxing part 2: Building the ultimate Whitebox

In part 1 of this series I described how I selected hardware for my ultimate whitebox server. A whitebox server is a cheap server you can use to run VMware vSphere without it being on the VMware HCL: never supported, but working nonetheless. Now that the hardware was selected and ordered from my local computer components dealer, the next step is to assemble and test the setup, which is the focus of this post.

Unpacking and assembling

The first part was the easy part: unpack all those colorful boxes! After some paper shredding and plastic cutting I was looking at my new homelab in parts. Up next, the standard assembly of all items: install the CPU on the mainboard, add the memory, put the mainboards into their cases, and connect everything up. I will not go into too much detail here, since this is all pretty basic DIY PC building.

One thing is maybe not that well known though: a lot of older server cards use the PCI-X standard, the 64-bit version of PCI (not to be confused with PCIe!). You can actually put PCI-X cards into regular PCI slots; they simply fall back to 32-bit mode, which makes them slower but still usable in your homelab server. The mainboard I use (also see part 1), the Asus M4A88T-V EVO, did not disappoint: all PCI slots had enough “room” behind them to fit PCI-X cards!

I finished up the hardware build by upgrading the BIOS to the latest firmware, then entering the BIOS, loading the defaults and making sure all the BIOS options I required were indeed set. Because my Intel NICs were still in use in my old homelab, for now I inserted an old 3COM Gbit card I had lying around (with a Broadcom chip) in order to have a supported NIC for testing.

First tests with vSphere: How to create a vSphere ESXi USB stick

After the first server was built, I was very keen to get vSphere running on it. So I created two USB sticks: one with vSphere 4.1, the other with a vSphere 5 beta version. Both USB sticks were created using VMware Fusion (VMware Workstation works just as well). This is how you create your USB sticks quickly and easily:

  1. Create a new VM of type “ESX Server 4” in Workstation or Fusion. Create a CUSTOM VM, and do not give it any hard disk.
  2. Mount the vSphere ISO image to the DVD drive of the VM.
  3. Insert the USB stick into the PC and attach it to the VM.
  4. Start the VM.
  5. After the VM has booted into the vSphere installer, simply select the USB stick as the target and start the installation.
  6. After installation, remove the USB stick from the PC. All done!
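The installer VM from the steps above needs little more than some memory, a CD drive with the ISO mounted, and USB support. As an illustration only, the relevant part of such a VM’s .vmx file could look roughly like this (the file names, memory size, and display name are made-up examples, not from the original post):

```
displayName = "ESXi-USB-installer"      # hypothetical name
guestOS = "vmkernel"                    # the ESX/ESXi 4.x guest type
memsize = "2048"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "VMware-VMvisor-Installer-4.1.0.iso"   # example ISO name
usb.present = "TRUE"                    # needed to pass the USB stick through
ehci.present = "TRUE"                   # USB 2.0 support
```

Note there is no scsi/ide disk entry at all; the USB stick you attach in step 3 is the only install target the installer will see.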

After both USB sticks (with vSphere 4.1 and the 5 beta on them) were created, I was ready to go!

Booting vSphere for the first time: #FAIL

From here on things started to go wrong. I inserted the vSphere 4.1 USB stick into the whitebox and booted it up. It is always exciting to see whether the mainboard you just bought is actually capable of running vSphere without issues. All seemed fine at first: the whitebox booted off the USB stick. During boot, however, it froze while loading the USB drivers. This matched what I had read on the internet about this mainboard: vSphere supposedly has issues with the USB hardware on this particular board. And so it did: vSphere 4.1 just sat there and did not continue to boot at all.

Not giving up that easily, I decided to connect a local SATA drive to the system and disable USB in the BIOS altogether. The plan was to install ESXi 4.1 on the hard disk, hopefully bypassing the USB issues (I was planning to run ESXi 4 from disk anyway). I rebooted the whitebox off a DVD I had burnt with vSphere 4.1 and tried to install ESXi 4.1 on that disk. Failure once again; this time the installer froze while loading drivers.

Looking around on the internet I found a trick I had actually not seen before: while ESXi freezes on loading its drivers, just press F12-F11-F12-F11 a few times and booting resumes. Using this trick I managed to install ESXi 4.1 on the hard disk.

After the install the whitebox rebooted from its hard disk. Again it froze during boot, but I got it progressing further using the F11-F12-F11 trick. Not the greatest start, but vSphere 4.1 was at least working. Still, I was seriously considering returning the mainboards and getting another type because of these issues. Once the whitebox was booted though, it showed solid performance: I ran three dual-vCPU Windows XP VMs that saturated all six cores of the CPU in the box using cpuburn.exe, and all performed flawlessly after the troublesome boot of ESXi 4.

Booting vSphere5 beta: Success & Realtek 8111E NIC working!

Now on to see how vSphere 5 would do on the homelab. I re-enabled USB in the BIOS and tried to boot from the vSphere 5 beta stick. Much more luck this time: vSphere 5 booted without any issues, and best of all, the onboard NIC (Realtek 8111E) was detected and usable by vSphere 5! Owwwww yes 🙂

Since vSphere 5 is around the corner anyway, I decided to keep the current mainboards and temporarily live with the troublesome vSphere 4.1 boot. When the vSphere 5 RTM version came around, I switched to that release, and it has been working like a charm ever since!

Power consumed by the whiteboxes

Power consumption is very important in most homelab situations, so I decided to put a power meter in the 230VAC line to the whitebox. I configured vSphere to use the “Enhanced AMD PowerNow!(tm)” feature and set the policy to “Low Power” (since I’ll never saturate the CPU anyway). Running vSphere without any VMs, the box drew around 88 watts; not bad at all! When I spun up all three dual-vCPU Windows XP VMs, each running cpuburn.exe, the whitebox consumed a total of 273 watts; that is over 21GHz of CPU power right there! With only a single dual-vCPU VM running cpuburn.exe, the power draw was 176 watts (still over 7GHz of CPU roaring).
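For a box that runs 24/7, the idle figure above translates into running costs fairly directly. A quick back-of-the-envelope sketch (the electricity tariff is a made-up example, not from my own bill):

```python
# Rough annual energy-cost estimate for one whitebox at idle.
# 88 W is the measured idle draw from above; the tariff is hypothetical.
idle_watts = 88
price_per_kwh = 0.23        # EUR per kWh -- example tariff, adjust for your region
hours_per_year = 24 * 365

kwh_per_year = idle_watts * hours_per_year / 1000
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, ~EUR {cost_per_year:.0f}/year")
# -> 771 kWh/year, ~EUR 177/year
```

At that example tariff an idling whitebox costs in the order of 15 euros a month; with two of them running, that is worth knowing before you leave the lab powered on around the clock.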

Even though the power consumption stayed under 300 watts, I would not encourage anyone to use a 300-watt power supply; go for at least a 450-watt version to leave some headroom.

Upgrading my homelab: From ‘Terrance & Phillip’ to ‘Cow & Chicken’

After a deep breath, it was time to upgrade my homelab to the new hardware. My old trusty servers (dualcore AMD x4000, 6GB RAM, diskless), called “Terrance” and “Phillip”, were about to be recycled for the kids to use as their homework/gaming PCs.

Terrance & Phillip
My old and trusty Whitebox ESX servers Terrance and Phillip are finally retired.

So it was time to say goodbye to my dear “Terrance” and “Phillip”, and welcome the two new whiteboxes I named “Cow” & “Chicken”!

Since I have websites and other stuff running on my boxes, I could not (would not) just switch them both off and get to work… Instead, I shut down all non-vital VMs and moved the remaining running VMs to “Phillip” using VMotion. Then I shut down “Terrance” and removed it from my “rack”.

After taking the Intel NICs out of “Terrance”, I built them into “Cow”. Then I placed “Cow” in the rack next to “Phillip” and wired it up. Next, I needed to install vSphere 4.1 on my SAN (I use boot from SAN), so I disabled access to everything but the boot LUN for “Cow” and ran the installer from its DVD drive. After installation I re-enabled access to all the vmfs LUNs and booted the server. Once again I needed the F11-F12-F11 trick to get it up and running.

After adding “Cow” to the cluster, I managed to get a large part of the host profile over from “Phillip”. Applying the profile to the new hardware failed at some point, but all the important stuff (like all the VLANs in the networking section) had already been applied! Since VMotion will not work between my old and new CPUs, and using EVC (Enhanced VMotion Compatibility) would only postpone the downtime (because at some point I’d want to enable all CPU features anyway), I simply set DRS to “manual”, shut down all VMs, and rebooted them on the new host.

After all this it was just a matter of “repeat” for the second host, and my new and shiny hardware was running. Goodbye Terrance and Phillip, hello Cow & Chicken!

Cow & Chicken
My brand new Whitebox ESX servers are called “Cow” and “Chicken”. Because you can.

Lessons learned

The most important lesson learned: never ever buy a mainboard for running vSphere without some source confirming that it works (or make sure you can swap the board for another one at your vendor). Also, do not forget to make sure you have supported NICs in your system; the working Realtek 8111E NIC under vSphere 5 is a nice bonus. Buying a bigger mainboard (ATX, with more slots) proved cost-effective, because single-port Intel Gbit NICs are the cheapest option you can get (unless you manage to get some cheap dual-port ones from eBay, or you can live with a limited number of NICs). I needed additional cards in my server anyway (for the SAN connectivity), so using a larger ATX board instead of a uATX made perfect sense in my case. But your requirements may very well be different!

The new and improved homelab
The new and improved homelab. Two whiteboxes running ESXi5. For the keen observer: An old Infortrend EONstor is used as shared storage.
