Whiteboxing part 1: Deciding on your ultimate ESX Whitebox


So you’ve decided: you want to build yourself an ESX(i) environment while minimizing cost. But how do you choose between all the available hardware? In this blogpost I will be focusing on my recent Whitebox server selection and how I arrived at my configuration out of all the available components.



Different ways of getting to a successful Whitebox config

There are several ways of getting to a cheap Whitebox configuration. So far I’ve seen four approaches:

  1. Build one big Windows/Linux server and run everything virtual (so virtual ESX nodes on VMware Workstation);
  2. Build one big ESX(i) server and run everything virtual (so virtual ESX nodes on the physical ESX node);
  3. Build two smaller ESX(i) servers (surprise surprise… this can actually be cheaper than one big node!);
  4. Buy a complete (supported) system (Like Dell or HP).




The fourth option is generally speaking not the cheapest, and for a true geek it carries too little risk of not working 😉 For those reasons I will not concentrate on that option. That leaves the first three. In my case I want to run ESX(i) on physical hardware and not inside VMware Workstation, which leaves options two and three. I mention these two options separately because there are some important factors to take into account:

  1. When you need more than 16GB of memory, buying two smaller servers might actually be cheaper than buying a single one with 24GB or more. Boards that can handle more than 16GB are much more expensive, and if you need 8GB modules the price goes up significantly (even taking into account that they have twice the capacity of 4GB modules); see the rough comparison right after this list;
  2. When you buy two servers, you can get the benefits of HA, DRS and VMotion on a physical level, although you need to have shared storage for this to work;
  3. More expensive (server) boards will often give you KVM-over-IP, so you can log in remotely to restart the server, work on the console, etc.;
  4. Always take power usage into consideration!
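
To illustrate that first point, here is a minimal back-of-the-envelope sketch. All prices in it are assumptions I picked for illustration (2011-era ballpark figures, not quotes from any shop), but they show why two 16GB boxes can undercut a single box holding the same total amount of memory:

```python
# Illustrative comparison: one host with 32GB vs. two hosts with 16GB each.
# ALL prices below are assumptions for illustration, not actual shop prices.

one_big_host = {
    "board that takes more than 16GB": 250.0,
    "cpu": 240.0,
    "memory 4x8GB (8GB modules are pricey)": 600.0,
    "case + psu": 50.0,
}

one_small_host = {
    "board (max 16GB)": 85.0,
    "cpu": 145.0,
    "memory 4x4GB": 100.0,
    "case + psu": 50.0,
}

print(f"One 32GB host : {sum(one_big_host.values()):.0f} EUR")
print(f"Two 16GB hosts: {2 * sum(one_small_host.values()):.0f} EUR "
      "(same total memory, plus HA/DRS/VMotion between hosts)")
```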



So there you have it. If you only start your server for tests and shut it down again afterwards, you could consider running everything from a single box. But if you plan to run your setup 24/7, you’ll want two machines, just like “in the real world”. You will love HA and DRS, even in a homelab, I promise you 🙂

In this blogpost I will be building two whiteboxes, with cost as the primary concern. So how do we go about deciding what we are going to use?


How to decide on hardware that is perfect for you

There are hundreds of working combinations. Expensive, not so expensive, big, bigger, smaller, cheaper. I’ll describe how I got to the setup perfect for ME. You must decide for yourself which way to go, but I followed this decision tree:

  1. Decide on the CPU technology: AMD or Intel;
  2. Decide on the mainboard size (uATX or ATX);
  3. Decide on the mainboard type (Vendor, type);
  4. Decide on memory size and speed;
  5. Decide on power supply and case.



I specifically left storage out of the equation… for now.


Deciding on a CPU technology

First thing I did was decide on the CPU vendor: Intel or AMD? When deciding, I figured that both CPU vendors nowadays have excellent ways of delivering VMware functionality and performance while minimizing power consumption. So it comes down to price. Asking around, I heard some statements that helped me decide (these are claims from other people which I have not verified myself, but they seem to be correct):

  1. For good ESX performance, the Intel i7 quadcore is great. These chips have tremendous per-core performance;
  2. AMD processors generally have less per-core performance than the Intel i7, but CPUs are cheaper on a per-core basis;
  3. Both AMD and Intel have great VMware integration.



The second statement is about price. Looking at prices for an i7 CPU, the cheapest I could find was the socket 1366 Intel® Core™ i7-950 for 237 euros. For socket 1156 the cheapest i7 was the Intel® Core™ i7-870 for 249 euros.

But then I turned to AMD. They have 2.8GHz six-core CPUs for under 150 euros! So even if the Intel i7 is faster on a per-core basis, having two extra cores should about match performance. I found the AMD Phenom II X6 1055T for only 135 euros, and decided that CPU would totally rock a whitebox for a very decent price.
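
To put rough numbers on that reasoning, here is a minimal sketch. The prices and core counts are the ones mentioned above; the relative per-core performance factor is purely an assumption to illustrate the trade-off (I did not benchmark these CPUs):

```python
# Back-of-the-envelope CPU comparison using the prices mentioned above.
# The per-core performance factor for the AMD chip is an assumption, not a benchmark.

cpus = {
    "Intel Core i7-950":      {"price_eur": 237, "cores": 4, "per_core_perf": 1.0},
    "AMD Phenom II X6 1055T": {"price_eur": 135, "cores": 6, "per_core_perf": 0.7},  # assumed ~70% of the i7 per core
}

for name, cpu in cpus.items():
    total = cpu["cores"] * cpu["per_core_perf"]
    per_core_price = cpu["price_eur"] / cpu["cores"]
    print(f"{name:24s}: ~{total:.1f} relative units total, {per_core_price:.2f} EUR per core")
```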


Deciding on the mainboard size

Most mainboards come in one of two sizes: ATX and microATX. The microATX boards are smaller, have fewer PCI/PCIe expansion slots, and often have only two DIMM slots. The upside of microATX boards is that they very often have an integrated video chip, which saves you money by not having to buy a separate graphics card.

ATX boards are bigger. They have more expansion slots, and always come with 4 DIMM slots (at least for AMD). Onboard video is not seen that often on these boards, but some are available.

Since I wanted to have all the expansion ability I could have, I decided on an ATX board. Now “only” to decide on the make and model…


Deciding on the mainboard make and model

I cannot help it. I’m a fan of Asus mainboards. With mainboards from other vendors I have had really weird ESX issues, but Asus mainboards have always done well (I have previously used the M2N-VM DVI and the NCCH-DL mainboards with great success for ESX).

In my selection I was looking for the cheapest working combination. So I went to my favorite online shop, selected all Asus socket AM3 mainboards, and sorted by price. Next, I selected the first (= cheapest) mainboard that had both the ATX form factor and onboard video. Forget about finding onboard NICs that will work: unless you find a board with an nVidia chipset, don’t overspend here. Just get some Ethernet add-in cards and you’ll probably end up cheaper, because almost no affordable mainboard uses ESX-compatible NICs nowadays. Wouldn’t it be great if some future version of vSphere supported the cheaper onboard NICs (hint: in my case the Realtek 8111E)? Anyway, after some searching, out came the Asus M4A88T-V EVO at 83 euros.

Now comes a very important part: I have often seen people buying hardware for whiteboxes that in the end had issues running ESX. A working chipset on one mainboard is still no guarantee another mainboard using that chipset will work. So always try to find other people running the mainboard you plan to buy and see if they encounter any issues. Two great sources for this are:

Ultimate ESX Whitebox (Ultimatewhitebox.com)

Motherboards and unsupported servers that work with ESX 4.x and / or ESXi 4.x Installable (vm-help.com)

I found my mainboard on the vm-help.com site, but only in the /USB3 version. I decided I could take that risk, because apart from the USB3 controller the rest of the mainboard appears to be completely identical. I saw some issues reported there concerning USB, but I figured I would not use that anyway (I planned to boot from disk).


Deciding on memory size and speed

Then it was on to memory. Given the amount of CPU power that comes out of the AMD six-core, I figured I wanted to max out the memory on the mainboard. The maximum amount of memory is 16GB of DDR3 1066 or 1333 (4x4GB). The speed the memory runs at depends on the CPU used. After some searching, I found that the CPU I had selected (the 2.8GHz AMD Phenom II X6 1055T) would run the memory at only 1066MHz, so I decided to change the CPU to a 3.2GHz AMD Phenom II X6 1090T, which is still a bargain at 145 euros. This CPU is able to drive the memory at 1333MHz. I then selected the cheapest 2x4GB DDR3-1333 memory kits I could find, and ended up with the GeIL 8GB DDR3-1333 kit (Black Dragon) at 50 euros per 2x4GB kit.
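
As a side note on what that speed difference actually means: a DDR3 channel is 64 bits (8 bytes) wide, so peak theoretical bandwidth scales directly with the transfer rate. A quick sketch (real-world gains will be smaller than these theoretical numbers):

```python
# Theoretical peak bandwidth of one DDR3 channel:
# (million transfers per second) * 8 bytes per transfer, expressed in GB/s.

def ddr3_peak_gb_s(mega_transfers_per_s: int) -> float:
    return mega_transfers_per_s * 8 / 1000

for speed in (1066, 1333):
    print(f"DDR3-{speed}: ~{ddr3_peak_gb_s(speed):.1f} GB/s per channel")

# DDR3-1066 -> ~8.5 GB/s, DDR3-1333 -> ~10.7 GB/s: roughly 25% more peak bandwidth.
```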


Deciding on power supply and casing

From what I have seen, even the cheapest power supplies in the cheapest cases will work perfectly, and will keep working perfectly if you run them 24/7; power supplies tend to break during power-on! The cheapest solution you can find is a case with a power supply included. The mainboard I selected uses an 8-pin ATX connector for the CPU, so I looked for the cheapest midi tower with a power supply that had the 8-pin CPU connector and delivered more than 400W (the six-core CPU eats quite some power at 100% CPU load!).
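
To give a feel for why I wanted more than 400W, here is a minimal power-budget sketch. Apart from the CPU’s TDP of around 125W, the per-component wattages and the headroom factor are rough assumptions on my part, not measurements:

```python
# Rough power budget for one whitebox. Except for the CPU TDP (~125W for this
# Phenom II X6), all wattages are assumptions for illustration, not measurements.

components_w = {
    "CPU (Phenom II X6, ~125W TDP)": 125,
    "mainboard + onboard video": 50,
    "4x DDR3 DIMM": 12,
    "disks + DVD writer": 25,
    "2x Intel Gbit NIC": 10,
    "case fans": 6,
}

peak_load = sum(components_w.values())
headroom = 1.5  # rule of thumb: keep the PSU well below its maximum rating

print(f"Estimated peak load   : ~{peak_load} W")
print(f"Comfortable PSU rating: ~{peak_load * headroom:.0f} W and up")
```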

I ended up with an MS-TECH LC-05B for 48 euros, with a 550W power supply included. I also threw in two case fans per system to make sure the airflow would still be sufficient on warmer days.


Other stuff you’ll need

My configuration was done. But that was only because I still had stuff from my old homelab, like Gbit networking cards. I’d recommend getting at least two working network cards. Any Intel Gbit card will work as far as I know, so you could for example take two Intel® EXPI9301CT cards at 29 euros each, or decide on a single dual-ported card like the Intel® E1G42ETBLK. For the latter I’d recommend getting one second hand or on eBay, because in a regular shop these cards cost around 150 euros each (which is why I’d opt for two single-ported cards instead of one dual-ported card).

Other stuff you might want to add is a DVD-ROM drive (or a DVD burner, which costs the same!), and maybe local (SATA) disks. But storage is a totally different matter which I will cover in a separate post.


The shopping list

So here is the shopping list that I used for my configuration. This list is for TWO ESX Whiteboxes:

Nr of items | Item | Total price
2 | CPU Socket AM3 Phenom II X6 1090T (6x 3200 MHz) | € 289,80
2 | Asus Mainboard M4A88T-V EVO (AMD 880G) | € 165,80
4 | GeIL DDR3 2x4GB kit 1333 | € 199,96
2 | Midi Tower case MS-Tech LC-05B with 550W power supply | € 95,98
4 | Arctic Cooling casefan F8 (80x80x25 mm) | € 11,96
4 | Intel Gbit network card EXPI9301CT (1 x RJ-45) | € 115,96
2 | DVD-reWriter Serial-ATA Samsung SH-222AB | € 37,98
  | Grand Total | € 917,44



Great stuff! I actually managed to build two ESX whiteboxes at just over 900 euros, with a total capacity of over 38GHz of CPU power and 32Gbytes of memory. In my case it was actually even cheaper because I migrated my Intel NIC cards from my previous whiteboxes.
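
As a quick sanity check on those numbers, a minimal sketch adding up the shopping list and the combined capacity:

```python
# Sanity check of the shopping list totals and the combined capacity of both hosts.
items_eur = [289.80, 165.80, 199.96, 95.98, 11.96, 115.96, 37.98]

print(f"Grand total : {sum(items_eur):.2f} EUR")        # 917.44 EUR for two whiteboxes
print(f"CPU capacity: {2 * 6 * 3.2:.1f} GHz combined")  # 2 hosts x 6 cores x 3.2 GHz = 38.4 GHz
print(f"Memory      : {2 * 16} GB combined")            # 2 hosts x 4 x 4GB = 32 GB
```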

In the next posts I’ll describe how I went about building, testing and using these whiteboxes!

16 Responses to “Whiteboxing part 1: Deciding on your ultimate ESX Whitebox”


  • Derek says:

    I wrote about my whitebox experiences with an Intel Sandy Bridge PC here:

    http://derek858.blogspot.com/2011/03/building-sandy-bridge-esx-server.html

  • vAmar says:

    Are you able to do the FT lab on these AMD CPUs?

  • Ben says:

    Awesome detailed writeup and very informative. Any idea if that cpu/mb supports hardware passthrough for pci/pci-e?

    thanks from Canada!

    • Thanks 🙂

      The mainboard does not support PCI passthrough according to the vSphere 5 configuration tab (“Host does not support passthrough configuration”)… Sorry!

  • Ben says:

    No worries, thanks for the reply. Sometimes it needs to be enabled in the BIOS first; the CPU needs VT-x and the MB needs VT-d (or IOMMU for AMD), from what I remember. I tried to find info on the web about IOMMU for this MB, but no luck.

  • Ben says:

    Hello again, if you could look that would be great. If not no worries, I will likely go with the asus M4A89TD-PRO/USB3 which unfortunately is more expensive but I know IOMMU works as a buddy has one.

    • I did check. There aren’t too many options to choose from in the BIOS:

      – GART error reporting
      – Microcode Updation
      – Secure Virtual Machine Mode
      – Cool ‘n’ Quiet
      – ACPI SRAT Table
      – C1E Support
      – ASUS Core Unlocker

      All but GART and the core unlocker are enabled, still no PCI passthrough.. sorry!

  • Ben says:

    No worries, thanks again for checking. I ended up picking up the M4A89TD-PRO/USB3 for $159 CAD and it appears to be working though I’ve yet to test IOMMU just yet.

  • Nethaji Reddy says:

    Hi, I am planning to buy an AMD FX-8150. Can anyone suggest which ASUS motherboard will support all the features of vSphere 5? Will the SABERTOOTH 990FX work for me? Thanks in advance.


  • silopolis says:

    hi,

    With the same objectives in mind, after a lot of googling and reading and countless config simulations with various cases/mobos/NICs, I finally ended up buying used HP ML110 G6 servers (and HP MicroServers for storage nodes) with Intel Xeon X34xx CPUs, which you can find for a really nice price on eBay (so much so that I could even buy them in the US, from France!). These are nicely spec’d boxes with remote management available.

    The only thing I’m _really_ lacking is another PCIe slot on the MicroServer, because the management card occupies one of the two available in the box. This prevents adding both an additional GbE port AND a hardware RAID controller.

    Thank you very much for all your very interesting articles
    Keep on posting
    Bests
