Is this blog post going to inspire you to buy a soldering iron and build a PCI-e card? No. Is it a really cool idea to try and build a Fusion-I/O or EMC Lightning-like card in your homelab? Possibly: YES!
So what are Fusion-I/O or EMC Lightning solutions all about?
The idea behind server-side cards like these is really simple: you somehow get the card in between the storage datastreams of the vSphere 5 server, and you cache any data passing through on flash memory placed on the card. Some cards are smarter, some a little less smart in the way they work, but the basic idea remains the same.
Doing read caching is not too complex for cards like these; there is no risk of losing data since you are only caching blocks read from the array (blocks you then do NOT have to fetch from the array again, and this is where you win for reads). Can you also do write caching on these cards? You sure could, if you can live with potential data loss when a write was stored on the card but not yet synced to the array and your box burns down.
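Just to make that difference concrete, here is a minimal Python sketch of the two write policies. It is purely illustrative and not modeled on any real card; the array and flash dictionaries simply stand in for the backing storage and the card's flash.

```python
# Illustrative sketch only: read caching vs. write-through vs. write-back.
# "array" and "flash" are stand-ins for the backing storage and the card's flash.

class CachingCard:
    def __init__(self, array):
        self.array = array      # backing storage (dict of block -> data)
        self.flash = {}         # flash cache on the card

    def read(self, block):
        # Read caching: serve from flash if present, otherwise fetch from the
        # array and keep a copy. Losing the flash copy loses nothing, because
        # the array still holds the authoritative data.
        if block in self.flash:
            return self.flash[block]
        data = self.array[block]
        self.flash[block] = data
        return data

    def write_through(self, block, data):
        # Safe: the write is only acknowledged after the array has it.
        self.array[block] = data
        self.flash[block] = data

    def write_back(self, block, data):
        # Fast but risky: acknowledged once it sits in flash; if the box burns
        # down before the background sync runs, this write is gone.
        self.flash[block] = data
        # ...a background task would sync self.flash -> self.array later
```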
So how to build one of these without a soldering iron?
If you look at these server cards, and you take a step back, what do you see? Exactly, it is an appliance that has data in, data out and some solid state device on the side to cache stuff.
So what would happen if I built a virtual appliance that takes NFS exports as an input and delivers NFS exports out again, using memory and/or a local SSD drive as its caching device? By the sounds of it, this could REALLY work. The best part? Your original NFS store would not be touched if you just did read caching. Write caching within this appliance would even be possible, especially when an SSD is used as the caching device (because the SSD is non-volatile).
One major downside of this solution would be its impact on vMotion. vMotion would still work, but if you want REAL performance you’d want to keep the VMs running through an appliance like this local to the appliance itself (to keep the NFS exports coming out of the appliance from traversing the physical network). A script might be able to vMotion the VMs back to “their” vSphere server, or you could create a DRS rule to keep the VMs running off an appliance together with that appliance if the appliance only uses vSphere memory for its caching. Either way, this could work smoothly!
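Just to sketch the script idea (nothing I actually have running): with pyVmomi you could walk the VMs that live on the appliance’s NFS datastore and vMotion any strays back to the host that runs the appliance. The vCenter address, credentials and the CACHE_DS / APPLIANCE_HOST values below are all placeholders for the example.

```python
# Hypothetical sketch using pyVmomi: vMotion VMs that run from the caching
# appliance's NFS datastore back to the host where the appliance itself runs.
# Hostnames, credentials and names below are placeholders, not a real setup.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.lab.local", "administrator@vsphere.local", "secret"
CACHE_DS = "nfs-cache-appliance"      # datastore exported by the appliance
APPLIANCE_HOST = "esxi01.lab.local"   # host the appliance is pinned to

ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

# Find the target host, then every VM sitting on the cached datastore.
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
target_host = next(h for h in host_view.view if h.name == APPLIANCE_HOST)

vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vm_view.view:
    on_cache_ds = any(ds.name == CACHE_DS for ds in vm.datastore)
    if on_cache_ds and vm.runtime.host != target_host:
        # Compute-only vMotion to the appliance's host, so the NFS traffic
        # stays local instead of traversing the physical network.
        vm.MigrateVM_Task(host=target_host,
                          priority=vim.VirtualMachine.MovePriority.defaultPriority)

Disconnect(si)
```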
So how to shape this idea
Instead of building my own appliance, I decided to look around for an appliance that already does this. After looking around for some time, I came to the conclusion that this has NOT been done yet: I could not find a single virtual appliance that would take one or more NFS exports and redeliver them from a local NFS server transparently.
The thing that comes closest to this, I think, is a ZFS-based appliance: ZFS is able to use memory as a cache (its ARC), and on top of that you can assign a “caching device” (an L2ARC) to ZFS as well.
Unfortunately it will not create a “transparent” appliance; the data on the external NAS (through a vmdk) or SAN device will be ZFS formatted. Too bad, but at least it will be able to demonstrate the power of a software solution like this.
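For reference, the ZFS side of such a test box boils down to a handful of commands. A rough sketch, wrapped in Python for my lab notes; the device paths, pool and dataset names are assumptions for my homelab, and the exact sharenfs behaviour differs a bit between illumos-based appliances like Nexenta and ZFS on Linux.

```python
# Hypothetical lab sketch: build a ZFS pool on a local disk, bolt the SSD on as
# an L2ARC cache device, and export a filesystem over NFS for vSphere to mount.
# Device paths and pool/dataset names are placeholders for my homelab setup.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "create", "tank", "/dev/sdb"])          # backing vdev (e.g. a vmdk)
run(["zpool", "add", "tank", "cache", "/dev/sdc"])    # local SSD as L2ARC read cache
run(["zfs", "create", "tank/vmstore"])                # dataset for the VMs
run(["zfs", "set", "sharenfs=on", "tank/vmstore"])    # export it over NFS
# Optional: an SSD SLOG to absorb sync writes, the non-volatile
# write-caching idea from earlier in this post.
# run(["zpool", "add", "tank", "log", "/dev/sdd"])
```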
To the lab!
I will be testing this setup with some kind of ZFS-based NFS virtual appliance that can use caching memory or a caching device. I will be looking at appliances like Nexenta to do some fun testing! I’ll need an SSD in one of my homelab servers though, and most important of all… I’ll need TIME.