Archive for the ‘VDI and VMware View’ Category
VMware View 5.1 host caching using vSphere 5’s CBRC
I have seen different implementations of read caching in arrays and even inside hosts, just to be able to cope with boot storms of VDI workloads. When using linked clones, caching really helps: all the VDIs being booted perform massive reads from a very small portion of the infrastructure: the replica(s). VMware came up with a nice software solution to this: why not sacrifice some memory inside the vSphere nodes to accommodate read caching there? This is what CBRC (vSphere 5) or Host Caching (View 5.1) is all about. And… it really works!
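To get a feel for why a relatively small host-side read cache pays off, here is a back-of-envelope sketch. All numbers below are my own illustrative assumptions, not measurements: during a linked-clone boot storm nearly every desktop reads the same replica blocks, so even a few hundred megabytes of cache can absorb the bulk of the reads.

```python
# Back-of-envelope model of a linked-clone boot storm with a host read cache.
# All inputs are illustrative assumptions, not measured values.

desktops_per_host = 64          # assumed consolidation ratio
boot_read_mb_per_desktop = 300  # assumed data read from the replica per boot
hot_set_mb = 400                # assumed unique "hot" replica blocks touched at boot
cache_mb = 512                  # host memory reserved for read caching

# Reads that hit blocks already pulled into the cache by an earlier desktop
# never reach the array at all.
cached_fraction = min(cache_mb, hot_set_mb) / hot_set_mb

total_read_mb = desktops_per_host * boot_read_mb_per_desktop
array_read_mb = hot_set_mb + (1 - cached_fraction) * (total_read_mb - hot_set_mb)

print(f"Total reads issued by guests : {total_read_mb:,.0f} MB")
print(f"Reads hitting the array      : {array_read_mb:,.0f} MB")
print(f"Offloaded by the host cache  : {100 * (1 - array_read_mb / total_read_mb):.1f}%")
```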
What happens during a boot storm
First of all, we need to figure out what happens during a boot storm. Ever wondered just how much data Read the rest of this entry »
EMC FAST-cache and “Follow the I/O”
I do not often write about vendor-specific implementations. This time, however, I focus on EMC’s FAST Cache technology, and we will be playing a little “follow the I/O” to see what it actually does, where it helps and where it might not.
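As a rough preview of the “follow the I/O” idea, here is a toy model in the spirit of a promotion-based cache: data is tracked in small chunks, and a chunk only moves to flash after it has been hit repeatedly. The 64 KB chunk size and the promotion threshold below are assumptions for illustration, not EMC internals.

```python
# Toy "follow the I/O" model of a promotion-based cache tier.
# Chunk size and promotion threshold are assumptions, not EMC internals.
from collections import defaultdict

CHUNK_SIZE = 64 * 1024      # assumed tracking granularity of 64 KB
PROMOTE_AFTER = 3           # assumed number of hits before a chunk is promoted

hits = defaultdict(int)     # access count per chunk
promoted = set()            # chunks currently served from flash

def follow_io(offset_bytes):
    """Return where a single I/O to the given byte offset is served from."""
    chunk = offset_bytes // CHUNK_SIZE
    if chunk in promoted:
        return "flash"
    hits[chunk] += 1
    if hits[chunk] >= PROMOTE_AFTER:
        promoted.add(chunk)            # copied up to flash in the background
    return "spinning disk"             # this I/O still paid the HDD latency

# A hot 64 KB region hammered five times: the first few I/Os go to disk,
# the rest are served from flash.
for i in range(5):
    print(f"I/O {i + 1}: {follow_io(1_000_000)}")
```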
Backwards VDI math: Putting numbers to the 1000 user RA
EMC and VMware have published a joint Reference Architecture where an EMC VNX5300 using a minimum configuration of disks squeezes out the required IOPS for a thousand VDI users. That is awesome stuff, but how do you go about using and remodeling this RA for your own needs? In this blog post I’ll try to put some numbers to it, both validating it and enabling you to resize it for your needs.
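To show the kind of backwards math I mean, here is a minimal sketch: start from what a given disk configuration can physically deliver, apply the RAID write penalty, and work back to a supported number of desktops. The disk counts, per-disk IOPS and read/write mix below are placeholders, not the RA’s actual figures.

```python
# "Backwards" VDI sizing: from a given disk configuration back to the number
# of desktops it can carry. All input values are illustrative assumptions.

disks = 15                 # assumed number of SAS spindles in the pool
iops_per_disk = 180        # assumed IOPS a single 15k SAS disk can sustain
raid_write_penalty = 4     # RAID 5: one host write costs 4 back-end I/Os
write_ratio = 0.8          # VDI steady state is write-heavy (assumed 80% writes)
iops_per_desktop = 10      # assumed front-end IOPS per desktop

backend_capacity = disks * iops_per_disk

# One front-end IOPS costs (read_ratio * 1 + write_ratio * penalty) back-end IOPS.
backend_cost_per_frontend_iops = (1 - write_ratio) + write_ratio * raid_write_penalty

frontend_iops_available = backend_capacity / backend_cost_per_frontend_iops
supported_desktops = int(frontend_iops_available / iops_per_desktop)

print(f"Back-end IOPS available : {backend_capacity}")
print(f"Front-end IOPS available: {frontend_iops_available:.0f}")
print(f"Supported desktops      : {supported_desktops}")
```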
A very cool use case: VMware View and 1000 vDesktops running off an EMC VNX5300
This is a very VERY cool one. You can find the Reference Architecture Read the rest of this entry »
Sizing VDI: Steady-state workload or Monday Morning Login Storm?
For quite some time now we have been sizing VDI workloads by measuring what people are doing during the day on their virtual desktops. Or even worse, we use a synthetic workload generator. This approach WILL work to size the storage during the day, but what about the login storm in the morning? If this spikes the I/O load above the steady-state workload of the day, we should consider sizing for the login storm…
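A minimal way to express that decision, with made-up numbers purely for illustration: estimate both loads and size for whichever is higher.

```python
# Size for the worse of steady-state and login-storm IOPS.
# All figures are illustrative assumptions, not measurements.

desktops = 1000
steady_state_iops_per_desktop = 8      # assumed daytime average
login_iops_per_desktop = 40            # assumed extra I/O while a user logs in
login_duration_s = 60                  # assumed time one login keeps hammering the disks
logins_per_minute = 300                # Monday morning: assumed login arrival rate

steady_state_iops = desktops * steady_state_iops_per_desktop

# Concurrent logins = arrival rate * how long each login generates its burst.
concurrent_logins = logins_per_minute * (login_duration_s / 60)
login_storm_iops = concurrent_logins * login_iops_per_desktop

sizing_iops = max(steady_state_iops, login_storm_iops)
print(f"Steady-state IOPS : {steady_state_iops}")
print(f"Login-storm IOPS  : {login_storm_iops:.0f}")
print(f"Size the array for: {sizing_iops:.0f} IOPS")
```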
Why Virtual Desktop Memory Matters
I have seen several Virtual Desktop projects with “bad storage performance”. Sometimes because the storage impact simply was not considered, but in some cases because the project manager decided that his Windows 7 laptop worked fine with 1GB of memory, so the virtual desktops should have no issue using 1GB as well. Right? Or wrong? I decided to put this to the test.
Test setup
To verify how a Windows 7 linked clone (VMware View 5) would perform on disk, I resurrected an old script I had lying around that uses vscsiStats. Read the rest of this entry »
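The script itself is not in this excerpt, but a minimal sketch of driving vscsiStats from Python on the ESXi host could look like the block below. The -l/-s/-w/-p/-x flags are the ones I recall from the tool’s help output, so verify them against your build; the world group ID is a hypothetical placeholder.

```python
# Minimal sketch of wrapping vscsiStats to capture an I/O histogram for one VM.
# Flags (-l, -s, -w, -p, -x) are as I recall them; check `vscsiStats -h` on your host.
import subprocess
import time

def run(cmd):
    """Run a command on the ESXi host and return its stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

world_group_id = "12345"   # hypothetical world group ID, taken from `vscsiStats -l`

run(["vscsiStats", "-s", "-w", world_group_id])     # start collection
time.sleep(300)                                     # let the workload run for 5 minutes
histogram = run(["vscsiStats", "-p", "ioLength", "-w", world_group_id])
run(["vscsiStats", "-x"])                           # stop collection

print(histogram)                                    # I/O size distribution for the VM
```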
Under the Covers with Miss Alignment Part 2: Linked Clones
This post is the continuation of Under the Covers with Miss Alignment: I keep hearing this rumor more and more often: It appears that both snapshots and linked clones on vSphere 4.x and 5.0 are misaligned. Not having had the time to actually put this to the test, I thought it would at least be informative to give you some more down-and-dirty information on the subject.
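To make “misaligned” concrete before diving in, here is a simple sketch of the arithmetic (not a test of vSphere itself): an offset is aligned when it is an exact multiple of the underlying chunk size; anything else means a single guest I/O can straddle two back-end chunks.

```python
# What "misaligned" means in practice: an offset that is not a multiple of the
# back-end chunk size makes single guest I/Os straddle two chunks.

CHUNK = 64 * 1024   # assumed back-end chunk/stripe element size of 64 KB

def alignment_report(start_offset_bytes):
    aligned = start_offset_bytes % CHUNK == 0
    return f"offset {start_offset_bytes:>8} bytes -> {'aligned' if aligned else 'MISALIGNED'}"

# Classic examples: the old 63-sector MBR offset versus a 1 MB-aligned partition.
print(alignment_report(63 * 512))        # 32256 bytes, the legacy Windows XP default
print(alignment_report(2048 * 512))      # 1 MB, the Windows 7 / modern default
```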
Steve Herrod’s VMworld Keynote Summary
Welcome to the Tuesday General Session. Steve Herrod is taking the stage with some cool new stuff. This blog post is typed as we go, so bear with me if anything is misspelled or looks chaotic 🙂
Introduction
Steve is looking at three phases in VDI: Simplify, Manage and Connect. These three phases are important to distinguish: first you simplify, then you manage your setup, and finally users need to connect. View 5 is built to accommodate this to the max.
Up next, the new goodness for small businesses and finally… Melvin the Monster VM!
Read the rest of this entry »
vDesktops – Where do you measure IOPS?
People are talking SO much about VMware View sizing these days. Everyone seems to have their own view on how many IOPS a vDesktop (virtual desktop) really uses. When you’re off by a few IOPS times a thousand desktops, things can get pretty ugly. Everyone hammers on optimizing the templates, making sure the vDesktops do not swap themselves to death, etc. But everyone seems to forget a very important aspect…
Where to look
People are measuring all the time. Looking, checking, seeing it fail in the field, going back to the drawing board, sizing things up, trying again. This could happen in an environment where storage does not have a one-on-one relation with disk drives (like when you use SSDs for caching, etc). But even in straight RAID5/RAID10 configs I see it happen all the time. Read the rest of this entry »
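The point I’m driving at can be shown with one line of arithmetic (illustrative numbers, not measurements): the IOPS you measure inside the guest are not the IOPS your spindles have to deliver, because every write is multiplied by the RAID write penalty.

```python
# Guest-level IOPS versus back-end (spindle) IOPS under different RAID levels.
# The per-desktop numbers below are illustrative assumptions.

guest_iops = 10          # what you measure inside one vDesktop
write_ratio = 0.8        # assumed write-heavy VDI mix
desktops = 1000

def backend_iops(write_penalty):
    reads = guest_iops * (1 - write_ratio)
    writes = guest_iops * write_ratio
    return desktops * (reads + writes * write_penalty)

print(f"Measured at the guests : {desktops * guest_iops:>7} IOPS")
print(f"Hitting RAID 10 disks  : {backend_iops(2):>7.0f} IOPS")
print(f"Hitting RAID 5 disks   : {backend_iops(4):>7.0f} IOPS")
```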
VMmark 2.0 released, but where is the ViewTile?
VMware’s VMmark (I have also seen it written as VMark, though) has been around for a long time. It is software that creates “tiles” of workload on a physical server using several virtual machine workloads. It then adds tiles to the hardware platform until its resources run out. Now version 2.0 is out!
Breaking VMware Views sound barrier with Sun Open Storage (part 2)
It’s been months since I performed a large performance measurement using the Sun Unified Storage array (7000 series) in conjunction with VMware View and linked clones. Not much has been done with the report: not by me, not by Sun, and not by my employer.
So now I have decided to share this report with the world. In essence it has been a true adventure in “how to cram as many View desktops (vDesktops) as possible into an array as small and cheap as possible”. The Sun 7000 series storage is built around the ZFS filesystem, which can do amazing things when used right. And linked clone technology appears to be a perfect match for the ZFS filesystem when combined with read- and log-optimized SSDs. Combined with NFS, the “sound barrier” was broken by not needing VMFS and all of its limitations when it comes to using more than 128 linked clones per store. Instead, we ran hundreds, even nearing a thousand, linked clones per store!
In the end, we managed to run over 1300 userload-simulated vDesktops without noticeable slowness or latency. Then the VMware testing environment ran out of physical memory and refused to push further. At that point we had a sustained 5000-6000 WOPS (write operations per second) going to the Sun storage, which the ZFS filesystem managed to reduce to no more than 50 WOPS on the SATA disks. Too amazing to be true? Well, read all about it:
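How can thousands of front-end WOPS shrink to a few dozen disk WOPS? A simplified model of what ZFS does, as I interpret it: the synchronous writes are absorbed by the SSD log devices, while the data itself is collected into transaction groups and flushed to the SATA disks as a small number of large sequential writes. The numbers below are assumptions that merely land in the same order of magnitude.

```python
# Simplified model of ZFS write coalescing: sync writes land on the SSD log,
# and the data is flushed to SATA disks in large transaction-group writes.
# All numbers are assumptions for illustration.

frontend_wops = 5500            # small writes per second coming from the desktops
io_size_kb = 4                  # assumed average write size
txg_interval_s = 5              # assumed transaction group flush interval
flush_io_size_kb = 1024         # assumed size of the large sequential writes at flush

# Data accumulated per transaction group, then written out sequentially.
data_per_txg_kb = frontend_wops * io_size_kb * txg_interval_s
flush_ios_per_txg = data_per_txg_kb / flush_io_size_kb
sata_wops = flush_ios_per_txg / txg_interval_s

print(f"Front-end WOPS (to SSD log)  : {frontend_wops}")
print(f"Data per transaction group   : {data_per_txg_kb / 1024:.0f} MB")
print(f"Resulting WOPS to SATA disks : {sata_wops:.0f}")
```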
For the faint-hearted, the excerpt can be downloaded from here:
Performance Report Excerpt Sun Unified Storage and VMware View 1.0 (713 KB)
Or read the full-blown report here:
Performance Report Sun Unified Storage and VMware View 1.0 (4.06 MB)