{"id":4465,"date":"2012-12-28T15:31:48","date_gmt":"2012-12-28T14:31:48","guid":{"rendered":"http:\/\/vmdamentals.com\/?p=4465"},"modified":"2012-12-28T15:31:48","modified_gmt":"2012-12-28T14:31:48","slug":"whiteboxing-part-3-choosing-storage-for-your-homelab-2","status":"publish","type":"post","link":"https:\/\/www.vmdamentals.com\/?p=4465","title":{"rendered":"Whiteboxing part 3: Choosing storage for your homelab"},"content":{"rendered":"<blockquote><p>After a long time I want to continue my series on building out your own home lab. Up next: Storage. What to choose, how to build it? I will be focusing on building out shared storage in this blog post. Yes you could use local disks just fine without sharing them outside that one box, but that is not where the hard choices lie. Once you decide you want shared storage, the questions start to pop up and decisions have to be made. <\/p><\/blockquote>\n<p><BR><br \/>\n<strong>Different approaches to storage in your home lab<\/strong><\/p>\n<p>Before you start buying disks or SSDs, first things first: it is very important to make some base decisions. Some of the most important <!--more-->ones:<\/p>\n<ul>\n<li>Will you build disks into your ESX nodes;<\/li>\n<li>Will you buy a separate server or NAS box.<\/li>\n<\/ul>\n<p>So what are the pros and cons of both approaches? For power consumption, the first option is usually cheapest. You don&#8217;t need another server or NAS box, and you don&#8217;t need to power that extra box. You just add disks to your existing setup. On the other hand, adding disks to your physical boxes will limit your flexibility. Try upgrading vSphere on a node that carries all your storage\u2026!<\/p>\n<p>Let&#8217;s start with your options for using local disks. 
After that we&#8217;ll step into the separate NAS boxes.<br \/>\n<BR><br \/>\n<strong>Building disks inside your ESX nodes: VMware vSA<\/strong><\/p>\n<p>Very much in line with VMware&#8217;s vision is the idea of building out local storage and presenting it as shared storage. The problem: VMware only has &#8220;first wave technology&#8221; out today, called the vSphere Storage Appliance (vSA). This appliance can combine the local disks of two or three nodes, and project them out as shared storage. <\/p>\n<p>The good thing about the VMware vSA is that you can bring down nodes for upgrades etc. without impacting storage availability. But this comes at a price: you obviously need to mirror storage, and you always need three nodes to build a reliable setup. Yes, you can run the vSA in a two-node setup, but you&#8217;d still need a third machine that either participates in the vSA cluster (which makes it a three-node config), or runs a piece of software that provides cluster witness functionality in case of a node failure.<\/p>\n<p>The need to mirror the data across disks in different nodes is another pro or con of the setup (depending on what matters to you). This is on top of the RAID protection that has to be part of each node individually. So in a capacity-optimised scenario you&#8217;d have at least:<\/p>\n<ul>\n<li>RAID5 protection in each node;<\/li>\n<li>MIRROR between two nodes.<\/li>\n<\/ul>\n<p>Of course this describes a SUPPORTED configuration. You can get away with RAID0 sets in your nodes and get a working vSA cluster that has only the mirroring overhead. Question is, do you want to get into that potential mess? Losing a single disk would mean losing a member, which in turn means a reprotection of an entire node of the vSA cluster.<\/p>\n<p>In a two-node setup both nodes will mirror all available data and present it out. 
In a three-node setup the appliance will deliver three datastores, each mirrored between two of the three nodes (so a mirror over 1&#038;2, 2&#038;3, 3&#038;1). The RAID protection in the nodes is expected to cope with any disk failure, while the mirrors provide protection against node loss (or upgrades, maintenance etc). <\/p>\n<p>For a home lab setup this quickly becomes an issue: you have to have at least two nodes with some kind of RAID controller. Even though most mainboards today deliver RAID setups, they hardly ever work in true hardware RAID mode, because their RAID controllers rely on the O\/S. You guessed it: vSphere has no support for software RAID implementations. That means your nodes need a &#8220;real&#8221; RAID controller on board. Once you have this, you additionally need some kind of third machine online as well to be the cluster witness (or a true third node).<\/p>\n<p>As described above, on the capacity front you lose a lot by using the vSA. You lose capacity to the RAID protection within the nodes (50% for RAID10, one disk for RAID5 and two disks for RAID6), and on top of all that another 50% of your capacity is lost to mirroring between boxes.<\/p>\n<p>On the performance front, you lose a lot of write performance, as each write has to go to both mirrors. Especially when you use RAID5 or RAID6 in the nodes, the write penalty (see <a href=\"http:\/\/vmdamentals.com\/?p=296\" title=\"Throughput part 2: RAID types and segment sizes \" target=\"_blank\">Throughput part 2: RAID types and segment sizes<\/a>) can be pretty devastating for your performance. When a disk has failed and you need to rebuild the RAID set, things get even worse in RAID5 or RAID6. 
The way to protect yourself against that would be to use RAID10 inside the nodes, but that in turn would mean mirroring two mirrors, leaving you with only 25% of your raw capacity.<br \/>\n<BR><br \/>\n<strong>Building disks inside your ESX nodes: Single-node Virtual Storage Appliances<\/strong><\/p>\n<p>Another way to go is to just fill <strong>one<\/strong> server with local disks, install a virtual appliance on that box, and feed the appliance your local drives (or big virtual disks that live on them). The virtual appliance will then apply RAID to your local disks and project them out as shared storage (usually over iSCSI or NFS).<\/p>\n<p>For my own home lab, I looked into three options to use for a virtual appliance:<\/p>\n<ul>\n<li>FreeNAS 8 (based on FreeBSD, can use ZFS);<\/li>\n<li>Openfiler 2 (based on rPath Linux, uses an 8TB+ journalling filesystem);<\/li>\n<li>Nexenta Community Edition 3 (based on Sun&#8217;s OpenSolaris, uses ZFS).<\/li>\n<\/ul>\n<p>They all look the same from the outside, meaning they all run some kind of Linux or Unix operating system, they use the disk logic from that O\/S, and they have a GUI glued on top to make management easier. On the inside, they use very different stuff: Openfiler uses Linux, FreeNAS uses FreeBSD with ZFS, while Nexenta uses Sun&#8217;s OpenSolaris (the version that was out before Sun got bought by Oracle). <\/p>\n<p>How did I select which one to use? Simple. I installed each appliance on my home lab system, fed each one some thin-provisioned virtual disks, and requested the appliance to bake me shared storage from those disks over NFS. Finally, I migrated a test Windows XP VM onto it and the testing could begin!<\/p>\n<p>For testing I did not focus all that much on performance; for me it is more important that my data is not lost. For you that may be different, and in that case I suggest that you run some tests yourself before you decide on an appliance to use. 
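<\/p>\n<p>Going back to the vSA section for a moment: the capacity overhead of combining in-node RAID with cross-node mirroring can be sketched in a few lines of shell. This is just a back-of-the-envelope helper; the disk count and disk size below are hypothetical examples, not a recommendation.<\/p>

```shell
#!/bin/sh
# Hypothetical two-node vSA setup: 4 x 1000 GB disks per node,
# in-node RAID plus a mirror between the two nodes.
disks=4
disk_gb=1000
raw_gb=$((2 * disks * disk_gb))            # raw capacity across both nodes

# RAID5 in each node loses one disk; the cross-node mirror then halves the total.
raid5_usable=$(((disks - 1) * disk_gb))
echo "RAID5 + mirror: ${raid5_usable} GB usable of ${raw_gb} GB raw"

# RAID10 in each node halves capacity; the mirror halves it again (25% of raw).
raid10_usable=$((disks * disk_gb / 2))
echo "RAID10 + mirror: ${raid10_usable} GB usable of ${raw_gb} GB raw"
```

<p>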
Testing for reliability was very simple in the virtual world: let it run, remove one of the virtual disks on the fly and see how it copes. Then delete that virtual disk (like a physical disk that was destroyed), insert another virtual disk of the same size on the fly, and request the appliance to rebuild. In all cases I used RAID5 \/ RAIDZ1 to configure a system with a single redundant disk (N+1).<\/p>\n<p>Testing was done quite quickly: none of the appliances handled this &#8220;disaster&#8221; in a perfect manner. One did not even detect for many minutes that the disk was gone, another refused to see my new virtual disk as a new disk. Nexenta did all as expected, but refused to initiate the rebuild from the GUI.<\/p>\n<p>After some playing around, I managed to get myself into &#8220;serious&#8221; trouble trying to get the appliances to repair the shared data. Only Nexenta came out pretty well: as soon as I went back to its command line, I was able to just fire off the <em>zpool<\/em> command to initiate the rebuild to the new device, and off it went. Did the web GUI show the rebuild? Nope. It kept reporting the degraded state (instead of rebuilding, resilvering, reconstructing or whatever), but using the command line I was able to follow the rebuild process perfectly (in percent, elapsed time, remaining time and MB\/s). Pretty sweet; I wish I could see that in the GUI! <\/p>\n<p>Once the rebuild was complete, the web GUI all of a sudden showed &#8220;ONLINE&#8221; instead of &#8220;DEGRADED&#8221;, and even the disks reported in the pool were the correct ones. Pretty good stuff, even though I needed to use the command line to get it done.<br \/>\n<BR><br \/>\n<strong>Using a separate server for shared storage<\/strong><\/p>\n<p>If you do not want any issues with local storage on vSphere nodes that you may be upgrading or putting into maintenance mode on a regular basis, you could decide to build a separate SAN\/NAS box out of a PC. 
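<\/p>\n<p>For reference, the command-line rebuild described in the previous section boils down to a couple of <em>zpool<\/em> invocations. The pool name and device names below are made-up examples, not the ones from my lab:<\/p>

```shell
# Check pool health; with a failed RAIDZ1 member the pool reports DEGRADED.
zpool status tank

# Rebuild (resilver) onto the replacement disk:
# 'tank' is the pool, c2t3d0 the failed device, c2t5d0 the new one.
zpool replace tank c2t3d0 c2t5d0

# Run status again to follow resilver progress.
zpool status tank
```

<p>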
That works pretty well, and gives you a LOT of potential for a very nice price.<\/p>\n<p>So how do you build this? Simply look at the previous paragraph, but run the software natively instead of as a virtual appliance! Since Nexenta was my choice for the virtual appliance, I&#8217;ll get into running Nexenta natively as well.<\/p>\n<p>You do not really need all that much for an effective shared storage box. In general, it is even cheaper than buying a dedicated NAS box (but beware &#8211; most NAS boxes use less power).<\/p>\n<p>Especially if you&#8217;re a fan of DIY, it is a lot of fun to build your own dedicated NAS system. I played with Nexenta, and successfully installed it onto a USB stick and onto a CompactFlash card while using a bunch of 250GB SATA drives as shared storage. As Nexenta uses ZFS, it helps to give it <em>loads<\/em> of memory. Then you need CPU power, some networking and of course storage. Adding one or more SSD drives may increase performance even more. I may dive into Nexenta sizing in more depth in a separate post if I find the time.<\/p>\n<p>When you DIY, you can very flexibly size for CPU power, memory, number of NICs (and their speed) etc. This is a freedom you won&#8217;t get with most (or even any?) ready-to-go NAS boxes.<br \/>\n<BR><br \/>\n<strong>Using a ready-to-go NAS system<\/strong><\/p>\n<p>Quick and easy. That is all there is to say about these boxes. They use little power and they are standalone, so you can play with your vSphere nodes all you want; the storage is completely separated from them. In general, they are more expensive than building your own storage system out of a PC, unless you plan on building a really small, low performance unit. In that case you could consider a box like the Iomega IX-2. These house only two disks and use little power. They also come REALLY cheap (I&#8217;ve seen IX-2&#8217;s without disks for 99 euros!). The downsides? They are relatively slow. 
Not really meant to run a lot of VMs from, but more for streaming and backup. <\/p>\n<p>The bigger NAS boxes (like the Iomega PX series, QNAPs and Synologies) cost way more than a PC-based NAS (prices around 700 euros and up without disks). If money is not your primary concern, then these NAS boxes are a serious option. If you want the biggest bang for the buck and the most flexibility, you&#8217;re probably better off building your own PC-NAS.<br \/>\n<BR><br \/>\n<strong>Main takeaways<\/strong><\/p>\n<p>When choosing storage for your home lab, there are some important things to remember. First of all, where to put your disks. One way of doing this is to have local storage in one of your vSphere nodes and run a virtual appliance to share it out. Another approach is to convert a PC into a NAS box, or buy a ready-to-go NAS box. Lastly, you could choose to build a hybrid between local and shared storage by using VMware&#8217;s vSA, where each node has local storage and you can bring down nodes one by one (for maintenance etc) without losing your shared storage. <\/p>\n<p>What road to choose will be different for everyone. How many vSphere nodes do you have? What are your capacity and performance needs? What is your budget? How &#8220;green&#8221; does your solution need to be? Will you be running the home lab 24\/7 or not? All these questions will decide which path to choose.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>After a long time I want to continue my series on building out your own home lab. Up next: Storage. What to choose, how to build it? I will be focusing on building out shared storage in this blog post. 
Yes you could use local disks just fine without sharing them outside that one box, [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[77,5],"tags":[564,321,320,565,562,563,561,499,549,559,566,560,317],"class_list":["post-4465","post","type-post","status-publish","format-standard","hentry","category-storage","category-vmware","tag-freenas","tag-home-lab","tag-homelab","tag-local-storage","tag-nexenta","tag-openfiler","tag-virtual-storage-appliance","tag-vmware-vsa","tag-vsa","tag-vsphere-homelab","tag-vsphere-local-storage","tag-vsphere-sotrage-appliance","tag-whitebox"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/posts\/4465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4465"}],"version-history":[{"count":34,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/posts\/4465\/revisions"}],"predecessor-version":[{"id":4501,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=\/wp\/v2\/posts\/4465\/revisions\/4501"}],"wp:attachment":[{"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.vmdamentals.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4465"}],"curies":[{"name":"wp","hr
ef":"https:\/\/api.w.org\/{rel}","templated":true}]}}