“My VAAI is Better Than Yours”

VAAI has been around for quite some time, but I still get a lot of questions on the subject. Most people seem to think VAAI is solely about speeding things up, whereas in reality you should not see a significant speedup if your infrastructure has enough reserves. VAAI is meant to offload storage-related operations so they are executed where they belong: inside the storage array.

 

EDIT: My title was stolen borrowed by my dear colleague Bas Raayman for a post like this one, but focusing on the file side of things: My VAAI is Better Than Yours – The File-side of Things. Nice addition Bas!

My VAAI is better than yours

I recently had an interesting conversation…

Snapshot Consolidation needed – Which with my luck… fails

As I am testing several third-party backup tools, this morning I stumbled upon a failed backup. There was no snapshot present on the VM that could not be backed up – but there was a yellow notice in the VI client: “Configuration Issues – Virtual Machine disks consolidation is needed“. And with my luck, selecting “consolidate” ended in that one brilliant error: Unable to access file since it is locked. Great. Here’s what was wrong!


Best of VMdamentals.com 2011 posts

At the very end of 2011, I decided to post my top 10 posts of the year:

  1. RAID5 DeepDive and Full-Stripe Nerdvana
  2. “If only we could still get 36GB disks for speed”
  3. VM performance troubleshooting: A quick list of things to check
  4. Cool videos on Technology, Virtualization and Storage
  5. Different Routes to the same Storage Challenge
  6. Under the covers with Miss Alignment: Full-stripe writes
  7. VMworld Party Copenhagen 2011 – What’s Hot & What’s Not
  8. Whiteboxing part 1: Deciding on your ultimate ESX Whitebox
  9. Whiteboxing part 2: Building the ultimate Whitebox
  10. Veeam Backup part 2 – Using jumbo frames to target storage


My view on things

For me personally, 2011 has been a crazy year. Getting one’s head around being a vSpecialist working for EMC, the world leader in the storage and virtualization segment, is not an easy task. Now that I’m settling into this new role, I hope to have some more time to do cool technical deepdive stuff in 2012.

Both EMC and VMware have a very similar vision of where things are going. And if the biggest storage vendor and the biggest virtualization vendor have a joint vision… I think you understand where we will all be going next year…

To the Cloud or Bust!

Sizing VDI: Steady-state workload or Monday Morning Login Storm?

For quite some time now we have been sizing VDI workloads by measuring what people do during the day on their virtual desktops. Or even worse, we use a synthetic workload generator. This approach WILL work for sizing the storage during the day, but what about the login storm in the morning? If that spikes the I/O load above the steady-state workload of the day, we should consider sizing for the login storm…
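To make that concrete, here is a minimal back-of-the-envelope sketch in Python. All figures (desktop count, per-desktop IOPS, login window) are illustrative assumptions, not measurements from any real environment:

```python
# Rough VDI storage sizing: steady-state workload vs. morning login storm.
# Every figure below is an illustrative assumption, not a measured value.

desktops = 500
steady_state_iops_per_desktop = 8     # assumed average during the working day
login_iops_per_desktop = 50           # assumed I/O rate while a desktop logs in
login_window_minutes = 15             # assumed length of the Monday morning storm
login_duration_minutes = 3            # assumed I/O-heavy period per login

steady_state_iops = desktops * steady_state_iops_per_desktop

# Average number of desktops logging in concurrently during the storm window.
concurrent_logins = desktops * login_duration_minutes / login_window_minutes
login_storm_iops = concurrent_logins * login_iops_per_desktop

required_iops = max(steady_state_iops, login_storm_iops)
print(f"Steady state : {steady_state_iops:.0f} IOPS")
print(f"Login storm  : {login_storm_iops:.0f} IOPS")
print(f"Size for     : {required_iops:.0f} IOPS")
```

With these example numbers the storm (5000 IOPS) beats the steady state (4000 IOPS), so the storm would drive the sizing; with a longer login window the opposite can easily be true.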


Why Virtual Desktop Memory Matters

I have seen several Virtual Desktop projects with “bad storage performance”. Sometimes because the storage impact simply was not considered, but in some cases because the project manager decided that since his Windows 7 laptop worked fine with 1GB of memory, the virtual desktops should have no issue running with 1GB as well. Right? Or wrong? I decided to put this to the test.

Test setup

To verify how a Windows 7 linked clone (VMware View 5) performs on disk, I resurrected an old vscsiStats script I had lying around…

Merry X-mas and a happy new Year!

Just a short post to wish everyone all the best 🙂

Thanks to everyone who visited my blog site in 2011!

Building your own vSphere storage-accelerator card!?!

Is this blogpost going to inspire you to buy a soldering iron and build a PCI-e card? No. Is it a really cool idea to try and build a Fusion-I/O or EMC Lightning-like card in your homelab? Possibly: YES!



So what are Fusion-I/O or EMC Lightning solutions all about?

The idea behind server-side cards like these is really simple: you somehow get the card in between the storage datastreams of the vSphere 5 server, and you cache any data passing through on flash memory placed on the card. Some cards are smarter, some a little less smart in the way they work. The basic idea remains the same though.

Doing read caching is not too complex for cards like these; there is no risk of losing data since you are only caching blocks from the array (blocks which you then do NOT have to fetch from the array, and that is where you win on reads). Can you also do write caching in these cards? You sure could, if you can live with potential data loss when a write was stored on the card but not yet synced to the array and your box burns down.
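To illustrate just the basic mechanics (this is a toy sketch, not any vendor's actual implementation; the backing_read/backing_write callables and the LRU capacity are assumptions for illustration), a read-through cache with an optional write-back mode could look like this:

```python
from collections import OrderedDict

class BlockCache:
    """Toy read-through block cache illustrating the server-side caching idea.
    Not any vendor's implementation; callables and sizes are assumptions."""

    def __init__(self, backing_read, backing_write, capacity_blocks=1024, write_back=False):
        self.backing_read = backing_read    # function: block number -> bytes (the "array")
        self.backing_write = backing_write  # function: (block number, bytes) -> None
        self.capacity = capacity_blocks
        self.write_back = write_back        # True = data-loss window until flushed
        self.cache = OrderedDict()          # LRU order: block number -> bytes
        self.dirty = set()                  # blocks acknowledged but not yet on the array

    def read(self, block):
        if block in self.cache:             # cache hit: the array is never touched
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing_read(block)     # cache miss: fetch from the array once
        self._insert(block, data)
        return data

    def write(self, block, data):
        self._insert(block, data)
        if self.write_back:
            self.dirty.add(block)           # acknowledged before it is safe on the array
        else:
            self.backing_write(block, data) # write-through: no data-loss window

    def flush(self):
        for block in sorted(self.dirty):
            self.backing_write(block, self.cache[block])
        self.dirty.clear()

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            old, old_data = self.cache.popitem(last=False)
            if old in self.dirty:           # never evict dirty data without writing it out
                self.backing_write(old, old_data)
                self.dirty.discard(old)
```

Flip write_back to True and you get the benefit of acknowledging writes from flash, together with exactly the data-loss window described above until flush() has run.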


So how to build one of these without a soldering iron?

If you look at these server cards, and you take a step back, what do you see? Exactly, it is an appliance that has data in, data out and some solid state device on the side to cache stuff.

So what would happen if I built a virtual appliance that takes NFS exports as input and delivers NFS exports out again, using memory and/or a local SSD drive as its caching device? By the sound of it, this could REALLY work. The best part? Your original NFS store would not be touched if you only did read caching. Write caching within this appliance would even be possible, especially when an SSD is used as the caching device (because the SSD is non-volatile).

One major downside of this solution would be the ineffectiveness of vMotion. vMotion would work, but if you want REAL performance you’d want to keep the VMs running through an appliance like this local to the appliance itself (to keep the NFS exports coming out of the appliance from traversing the physical network). A script might vMotion the VMs back to “their” vSphere server, or you could create a DRS rule that keeps the VMs running off an appliance together with that appliance if the appliance only uses vSphere memory for its caching. Either way, this could work smoothly!


So how to shape this idea

Instead of building my own appliance, I decided to look around for an appliance that already does this. After looking around for some time, I came to the conclusion that this has NOT been done yet: I could not find a single virtual appliance that takes one or more NFS exports and transparently redelivers them from a local NFS server.

The thing that comes closest to this, I think, is a ZFS-based appliance: ZFS is able to use memory as a cache (the ARC), and on top of that you can assign a “caching device” (an L2ARC) to ZFS as well.

Unfortunately it will not create a “transparent” appliance; the data on the external NAS (through a VMDK) or SAN device will be ZFS formatted. Too bad, but at least it will be able to demonstrate the power of a software solution like this.


To the lab!

I will be testing this setup with some kind of ZFS-based NFS virtual appliance that can use memory and/or a caching device. I will be looking at appliances like Nexenta to do some fun testing! I’ll need an SSD in one of my homelab servers though, and most importantly… I’ll need TIME.

Any ideas that might help here are more than welcome. How cool would it be if you can create a caching appliance within vSphere??!?!

RAID5 DeepDive and Full-Stripe Nerdvana

Ask any user of a SAN if cache matters. Cache DOES matter. Cache is King! But apart from being “just” something that can absorb your bursty workloads, there is another advantage some vendors offer when you have plenty of cache. It is all in the implementation, but the smarter vendors out there will save you significant overhead when you use RAID5 or RAID6, especially in a write-intensive environment.

Recall on RAID

Flashback to a post way back: Throughput part 2: RAID types and segment sizes. Here you can read all about RAID types and their pros and cons. For now we focus on RAID5 and RAID6: These RAID types are the most space efficient ones, but they have a rather big impact on small random writes.
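To put a rough number on that impact (the disk layout and workload figures below are assumptions, purely for illustration): a small random write on RAID5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), while a full-stripe write coalesced in cache only needs one write per disk in the stripe.

```python
# Back-end disk IOPS required for a given host workload on a 4+1 RAID5 set.
# Disk counts and workload figures are illustrative assumptions only.

host_read_iops = 2000
host_write_iops = 1000
raid5_data_disks = 4                 # assumed 4+1 RAID5 set

# Classic RAID5 small random write penalty:
# read old data + read old parity + write new data + write new parity = 4 I/Os.
small_write_penalty = 4
backend_small = host_read_iops + host_write_iops * small_write_penalty

# Full-stripe write (enough cache to coalesce incoming writes into whole stripes):
# no reads needed; 4 data writes + 1 parity write serve 4 host writes.
full_stripe_penalty = (raid5_data_disks + 1) / raid5_data_disks
backend_full_stripe = host_read_iops + host_write_iops * full_stripe_penalty

print(f"Small random writes : {backend_small:.0f} back-end IOPS")
print(f"Full-stripe writes  : {backend_full_stripe:.0f} back-end IOPS")
```

With these example numbers the same host workload drops from 6000 to 3250 back-end IOPS, which is exactly the kind of overhead a full-stripe-writing array can save you.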

Different Routes to the same Storage Challenge

Once shared storage came about, people started designing these storage systems so that you would not have to worry about failing disks anymore; shared storage is built to cope with this. Shared storage is also able to deliver more performance: by leveraging multiple hard disks, storage arrays managed to deliver a lot of storage performance. Right up until SSDs came around, the main and only way of storing data was on hard disks. These hard disks have their own set of “issues”, and it is really funny to see how different vendors choose different roads to solve the same problem.


Passing the VMware VCP510 exam

I still had a coupon lying around for a free VCP510 exam. I got it because I did the VCP4-DT exam, and last week I saw that it was valid through… the end of THIS month. So what to do? I just took the shot. I hardly had any time to study, but then… that is nothing new to me… it is just how I passed my VCP4 and VCP4-DT certifications as well 🙂


Soon to come
  • Determining Linked Clone overhead
  • Designing the Future part 1: Server-Storage fusion
  • Whiteboxing part 4: Networking your homelab
  • Deduplication: Great or greatly overrated?
  • Roads and routes
  • Stretching a VMware cluster and "sidedness"
  • Stretching VMware clusters - what no one tells you
  • VMware vSAN: What is it?
  • VMware snapshots explained
  • Whiteboxing part 3b: Using Nexenta for your homelab