GeekWeek Team05 Statistics!

This week I have been attending GeekWeek with Team 05, a mix of vSpecialists and vArchitects. We totally rocked it! Tomorrow is the “grande finale”; hopefully we’ll pass. I think we can all agree that it has been a wonderful experience, both fun and educational in one way or another.

Read the rest of this entry »

vDesktops – Where do you measure IOPS?

People are talking SO much about VMware View sizing these days. Everyone seems to have their own view on how many IOPS a vDesktop (virtual desktop) really uses. When you’re off by a few IOPS times a thousand desktops, things can get pretty ugly. Everyone hammers on optimizing the templates, making sure the vDesktops do not swap themselves to death, etc. But everyone seems to forget a very important aspect…

Where to look

People are measuring all the time. Looking, checking, seeing it fail in the field, going back to the drawing board, sizing things up, trying again. This can happen in an environment where storage does not have a 1-on-1 relation with the disk drives (like when you use SSDs for caching etc). But even in straight RAID5/RAID10 configs I see it happen all the time. Read the rest of this entry »
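As a toy illustration of why the point of measurement matters: the sketch below compares what the guests generate with what the spindles actually see. All numbers are assumptions (per-desktop IOPS, read/write mix, SSD cache hit ratio and RAID5 write penalty), purely for the sake of the example.

```python
# Frontend IOPS (what the guests generate) versus backend IOPS (what the
# spindles see) when an SSD cache and a RAID5 write penalty sit in between.
desktops, iops_per_desktop = 1000, 10          # assumed workload
read_ratio, cache_hit_ratio = 0.7, 0.6         # assumed mix and cache hits
raid5_write_penalty = 4

frontend = desktops * iops_per_desktop
reads, writes = frontend * read_ratio, frontend * (1 - read_ratio)
backend = reads * (1 - cache_hit_ratio) + writes * raid5_write_penalty

print(f"Guests generate {frontend} IOPS; the spindles see ~{backend:.0f} IOPS")
```

Measure at only one of those two points and your sizing can be off by a large factor.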

Place to be: EMC World 2011

This year EMC World is surely going to rock the house once again. It’s party time from May 9th to 12th in Las Vegas!

EMC World 2011


When will the fun EVER stop? 2011 is going to be a rocking year for EMC, full of groundbreaking records and very cool products, ranging from security through backup to disaster recovery – all within EMC’s balanced portfolio.

There will be a lot of very interesting things to do at EMC World 2011. Take a look at the session catalog here (requires login): EMC World 2011 Session Catalog. Read the rest of this entry »

Veeam Backup vs PHDvirtual Backup part 3- Handling Disaster Recovery

After a rather successful part 2 of this series, it is high time to kick off part 3, which covers Replication and Disaster Recovery (DR). Most important to note is that backup and DR are two completely different things; one should not be tempted to combine them unless you are positive your solution covers all business requirements for both DR and backup.

Read the rest of this entry »

“If only we could still get 36GB disks for speed”

Yesterday I remembered a rather funny discussion I once had. Someone stated “if only we could still get 36GB 15K disks, we could speed things up by using a lot of spindles”.

Kind of a funny thing if you think about it. At the time I figured that 36GB disks would force you to use more drives in order to reach a proper capacity. And since a lot of people still tend to scale to capacity only, your problems increase with the size of the disks. Let’s say your environment requires 6TB: you could use four 2TB drives in RAID5 – but don’t expect 100 VMs to run properly from that 😉
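To put some rough numbers on that point, here is a quick sketch. The ~180 and ~80 IOPS-per-spindle figures are assumed ballpark values, and RAID overhead is ignored to keep the comparison simple.

```python
# How many spindles does 6TB of raw capacity force you to buy, and how many
# random IOPS do you get "for free" with them? Per-spindle IOPS are assumed.
import math

required_tb = 6.0

spindles_36gb = math.ceil(required_tb / 0.036)   # 36GB 15K FC drives
spindles_2tb  = math.ceil(required_tb / 2.0)     # 2TB 7200rpm SATA drives

print(f"36GB drives: {spindles_36gb} spindles -> ~{spindles_36gb * 180} IOPS")
print(f"2TB drives:  {spindles_2tb} spindles  -> ~{spindles_2tb * 80} IOPS")
```

Capacity-wise both options are fine; performance-wise they are worlds apart.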

The funny thing (and the reason for this post) is that most people seem to miss out on the following…


The latest thing: vTesting!

Yes, I admit it: now that I’m an EMC vSpecialist I do not have much time left for all these deep-dive measurements. So I’m forced to introduce a new type of testing, which I’ll call a vTest. Actually, Einstein is the father of this type of testing, simply because he did not have spaceships that could travel at near-lightspeed. Me, I simply lack time. Hmm, that is kind of a deep statement in this light, right 🙂

Without further delay I’ll just drop the statement for this vTest, and we’ll boldly go where no geek has gone before:


“A 7200rpm SATA disk CAN outperform a 15K FC disk”



So how many of you think the above is pure nonsense? Don’t be shy, let’s see those fingers!

Now for the actual vTest: in this test I play the devil’s advocate and use a 2TB 7200rpm SATA drive and a 36GB 15K FC disk. Both disks get 36GB of data carved out. Now we run a vTest performing heavy random access on both 36GB chunks.

See where I’m going? If not, here is a hint: Throughput part 1: The Basics. In random access patterns, the biggest latency in physical disks comes from the average seek time of the head to the correct cylinder on disk. And the trick is in the “average” part.

The average seek time is the average time required for a head to seek to any given cylinder on the disk. But this seek time heavily depends on where the head was coming from. Normally the average seek time is measured when the head needs to travel half of the platter’s surface. But in our test that is far from reality for our 2TB SATA drive!

While the 36GB 15K FC drive has to move its head all over the platter, the 2TB SATA disk only moves (36GB/2000GB)*100 = 1.8% of its total stroke distance. In fact even that is a lie: the outside of the platter carries way more data than the inside, so assuming the 36GB is carved out at the edge (which is what most arrays do), this number is even lower, probably below 1%!

This means the average seek time of this disk is no longer around 8-9ms, but drops to around 1ms (no, not 1% of 9ms! This value will be very near the track-to-track seek time, which for SATA is usually around 1ms). Even adding the extra rotational latency of the SATA disk (because it spins at 7200rpm instead of 15000rpm) does not help: its total average seek time is still way lower than that of the poor 36GB 15K disk…
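A quick back-of-the-envelope sketch of this vTest. The seek times are assumptions, not measurements: roughly 3.8ms average seek for a 15K FC drive and about 1ms (near track-to-track) for the short-stroked SATA drive, plus half a rotation of rotational latency for each.

```python
# Rough service-time model for one random I/O: assumed seek time plus
# half a rotation of rotational latency (transfer time ignored).
def service_time_ms(seek_ms, rpm):
    rotational_latency_ms = 60000.0 / rpm / 2   # half a revolution, in ms
    return seek_ms + rotational_latency_ms

# 36GB 15K FC drive: random I/O spans the whole platter, so the full
# average seek applies (~3.8 ms assumed)
fc_15k = service_time_ms(seek_ms=3.8, rpm=15000)       # ~5.8 ms

# 2TB 7200rpm SATA drive: only ~1.8% of the stroke is used, so the seek
# collapses toward the track-to-track value (~1 ms assumed)
sata_short = service_time_ms(seek_ms=1.0, rpm=7200)    # ~5.2 ms

for label, t in (("36GB 15K FC, full stroke", fc_15k),
                 ("2TB SATA, short-stroked", sata_short)):
    print(f"{label}: {t:.1f} ms per I/O ≈ {1000 / t:.0f} random IOPS")
```

Under these assumed numbers the short-stroked SATA drive indeed comes out slightly ahead on random IOPS.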

Yes, you could argue about caching efficiency and the way the disks differ in reordering how they fetch random blocks, but still:

If you now review the initial statement again, would you still give the same answer?
(At least it should get you thinking!)

VMdamentals.com on Veeam’s podcast!

As some of you may have seen, episode two of Veeam’s podcast features an interview with yours truly of VMdamentals. Check it out here:


Veeam Community Podcast Episode 2 – VMdamentals shootout!


Veeam Community Podcast

Update from ESX 4.1 to Update 1 fails with “vim.fault.noHost”


Today I decided to update my home lab from vSphere 4.1 to vSphere 4.1u1. Updating vCenter went smoothly. Once I tried to update the first ESX node in the cluster using VMware Update Manager (VUM), it failed with the error “vim.fault.noHost”.

Say what? Googling the error did not give away much detail; all posts on it dated way back to the ESX 3.5 days. I hate it when this happens. So what to do? Yes, I still run ESX in my homelab (I like boot from SAN way too much ;). So off to the logs.

It had been some time since I looked at ESX logs in detail; the amount of “verbose errors” is enormous… Anyway, it seemed to have something to do with the way vCenter talks (or rather fails to talk) to the node…

First I tried rebooting the node, then ran VUM again to remediate it… But again it failed. Finally I just removed the node from the cluster (a hard “disconnect” followed by a remove) and then re-added it. After this, the node remediated without issue.
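For those who prefer to script that workaround, here is a rough pyVmomi sketch of the same disconnect / remove / re-add sequence. It is purely illustrative: the vCenter address, host name, cluster name and credentials are placeholders.

```python
# Hedged sketch of the manual workaround: disconnect the host, remove it
# from the inventory, then re-add it to the cluster. All names and
# credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # homelab: self-signed certs
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="vcenter-password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Find a managed object of the given type by its inventory name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

host = find_by_name(vim.HostSystem, "esx01.lab.local")
cluster = find_by_name(vim.ClusterComputeResource, "HomeLab")

# 1. Hard disconnect, then remove the host from the inventory
WaitForTask(host.DisconnectHost_Task())
WaitForTask(host.Destroy_Task())

# 2. Re-add the host to the cluster; after this, remediation worked again
spec = vim.host.ConnectSpec(hostName="esx01.lab.local", userName="root",
                            password="esx-root-password", force=True)
WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

Disconnect(si)
```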

EMC’s Record Breaking Event: Almost showtime!

As EMC² counts down the final hours before their live record-breaking event, posts are showing up around three products: the Data Domain Archiver, the Data Domain 890 and GDA, and the EMC² VNX and VNXe series. I looked around and found some info here and there on these new products.

Read the rest of this entry »

Veeam Backup vs PHDvirtual Backup part 2- Performing Backup and Restores

In part 1 of this series, I looked at two solutions for making virtual backups: Veeam and PHDvirtual. In this part, I’ll be looking at installing, making backups, verifying backups and of course restoring items.

Read the rest of this entry »

Veeam Backup vs PHDvirtual Backup part 1- Introduction

For a long time I have been a fan of PHDvirtual (formerly esXpress) and their way of backing up virtual environments. Their lack of ESXi support has driven a lot of people towards other vendors, and the one that is really on the cutting edge nowadays is Veeam Backup and Replication. Now that PHDvirtual has released version 5.1 with ESXi support, it is high time for a shootout between the two.

Some history on making virtual backups

In the old ESX 3.0 and ESX 3.5 days, there was hardly any integration with 3rd party backup products. Read the rest of this entry »

Soon to come
  • Determining Linked Clone overhead
  • Designing the Future part 1: Server-Storage fusion
  • Whiteboxing part 4: Networking your homelab
  • Deduplication: Great or greatly overrated?
  • Roads and routes
  • Stretching a VMware cluster and "sidedness"
  • Stretching VMware clusters - what no one tells you
  • VMware vSAN: What is it?
  • VMware snapshots explained
  • Whiteboxing part 3b: Using Nexenta for your homelab