Posts Tagged ‘ESX’

VM performance troubleshooting: A quick list of things to check

I often see virtual machines that perform poorly, and there can be many reasons why. I thought it was time to post a few “top 5 things to check in any given VMware ESX(i) environment” that might help you solve these issues.

Things to check on storage

Storage is often considered the bad guy when it comes to poor virtual machine performance. As it turns out, it is rarely the actual culprit. Still, here are some storage-related things to check if you encounter a poorly performing VM:

Read the rest of this entry »

Speeding up your storage array by limiting maximum blocksize

Recently I got an email from a dear ex-colleague of mine Simon Huizenga with a question: “would this help speed up our homelab environment?”. Since his homelab setup is very similar to mine, he pointed me towards an interesting VMware KB article: “Tuning ESX/ESXi for better storage performance by modifying the maximum I/O block size” (KB:1003469). What this article basically describes, is that some arrays may experience a performance impact when very large storage I/O’s are performed, and how limiting the maximum sizes of I/O blocks might improve performance in specific cases.
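For reference, the setting the KB article talks about is the advanced parameter Disk.DiskMaxIOSize. A minimal sketch of checking and lowering it from the ESX(i) console could look like this (the 4096 KB value is only an example, not a recommendation; test whether your array actually benefits):

    # show the current maximum I/O size (in KB; default is 32767)
    esxcfg-advcfg -g /Disk/DiskMaxIOSize

    # limit the maximum I/O size to 4 MB (4096 KB)
    esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize

On ESXi 5.x and later the same setting can also be changed with esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096.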

Read the rest of this entry »

Whiteboxing part 2: Building the ultimate Whitebox

In part 1 of this series I posted the way I selected hardware for my ultimate whitebox server. A whitebox server is a cheap server you can use to run VMware vSphere without it being on the VMware HCL. Never supported, but working nonetheless. Now that the hardware to use was selected and ordered from my local computer components dealer, the next step is to assemble and test the setup, which is the focus of this post.

Read the rest of this entry »

Whiteboxing part 1: Deciding on your ultimate ESX Whitebox

So you’ve decided: You want to build yourself an ESX(i) environment while minimizing cost. But how do you choose between available hardware? In this blogpost I will be focusing on my recent Whitebox server selection and how I arrived at my configuration from all the available components.

Different ways of getting to a successful Whitebox config

There are several different ways of getting to a cheap Whitebox configuration. So far I’ve been seeing four approaches:

  1. Build one big Windows/Linux server and run everything virtual (so virtual ESX nodes on VMware Workstation);
  2. Build one big ESX(i) server and run everything virtual (so virtual ESX nodes on the physical ESX node);
  3. Build two smaller ESX(i) servers (surprise, surprise… this can actually be cheaper than one big node!);
  4. Buy a complete (supported) system (Like Dell or HP).

Read the rest of this entry »

Veeam Backup vs PHDvirtual Backup part 3- Handling disaster recovery

After a rather successful part 2 of this series, it is high time to kick off part 3, which covers Replication and Disaster Recovery (DR). Most important to note is that backup and DR are two completely different things; do not be tempted to combine the two unless you are positive your solution will cover all business requirements for both DR and backup.

Read the rest of this entry »

Update from ESX 4.1 to Update 1 fails with “vim.fault.noHost”

Today I decided to update my home lab from vSphere 4.1 to vSphere 4.1 Update 1. Updating vCenter went smoothly, but when I tried to update the first ESX node in the cluster using VMware Update Manager (VUM), it failed with the error “vim.fault.noHost”.

Say what? Googling the error did not turn up much detail; all posts on it were from way back in the ESX 3.5 days. I hate it when this happens. So what to do? Yes, I still run classic ESX in my homelab (I like boot-from-SAN way too much ;). So, off to the logs.

It had been some time since I last looked at ESX logs in detail; the amount of “verbose errors” in there is enormous… Anyway, it seemed to have something to do with the way vCenter talks to (or rather fails to talk to) the node…
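For anyone wanting to dig through the same logs: on classic ESX 4.x the host agent and vCenter agent logs live roughly in the locations below (paths from memory, so treat them as a hint rather than gospel):

    # host agent (hostd) log on the service console
    less /var/log/vmware/hostd.log

    # vCenter agent (vpxa) log
    less /var/log/vmware/vpx/vpxa.log

    # quick check for the error itself
    grep -i nohost /var/log/vmware/hostd.log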

First I tried rebooting the node, then ran VUM again to remediate it… But again it failed. Finally I just removed the node from the cluster (via a hard “disconnect” followed by a remove), then re-added it. After this, the node remediated without issue.

Veeam Backup vs PHDvirtual Backup part 2- Performing Backup and Restores

In part 1 of this series, I looked at two solutions for making virtual backups: Veeam and PHDvirtual. In this part, I’ll be looking at installing, making backups, verifying backups and of course restoring items.

Read the rest of this entry »

Veeam Backup vs PHDvirtual Backup part 1- Introduction

For a long time I have been a fan of PHDvirtual (formerly esXpress) and their way of backing up virtual environments. Their lack of ESXi support has driven a lot of people towards other vendors, and the one that is really on technology’s edge nowadays is Veeam’s Backup and Replication. Now that PHDvirtual has released their version 5.1 with ESXi support, it is high time for a shootout between the two.

Some history on drawing virtual backups

In the old ESX 3.0 and ESX 3.5 days, there was hardly any integration with 3rd-party backup products.

Read the rest of this entry »

PHD Virtual Backup 5.1-ER – First Impressions

Today I got my hands on the new PHD Virtual Backup Appliance – version 5.1-ER. Following in the footsteps of its XenServer brother, this new version uses a single VBA (versus previous versions, which used multiple VBAs). Best of all: ESXi support at last!

Read the rest of this entry »

vscsiStats in 3D part 2: VMs fighting over IOPS

vscsiStats is definitely a cool tool. Now that the 2D barrier has been broken in “vscsiStats into the third dimension: Surface charts!”, it is time to move on to the next level: multiple VMs fighting for IOPS!

Update: Build your own 3D graphs! Check out vscsiStats 3D surface graph part 3: Build your own!

I figured vscsiStats would be most interesting in a use case where two VMs are battling for IOPS on the same RAID set. First a single VM forces I/O onto the RAID set; wouldn’t it be cool to start a second VM on the same RAID set later on and see what happens in the 3D world? In this blogpost I’m going to do just that!

TO THE LAB!

The setup is simple: take a LUN on a RAID5 (4+1) array of 7.2K SATA spindles, and two Windows 2003 Server VMs that each have a data disk on this LUN. Now install Iometer on both VMs. These two Iometer instances will be used to make both VMs fight for IOPS.

The Iometer load is varied between measurements, but overall it emulates a typical server load (random 4K reads, random 4K writes, some sequential 64K reads).
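For those who want to reproduce the data collection: the histograms behind these graphs come from vscsiStats on the ESX console. Roughly like this (the world group IDs below are placeholders for the two test VMs):

    # list all running VMs and their world group IDs
    vscsiStats -l

    # start collecting histogram data for both VMs (IDs are examples)
    vscsiStats -s -w 11111
    vscsiStats -s -w 22222

    # ...run the Iometer workloads...

    # print the I/O length (blocksize) histogram for one VM
    vscsiStats -p ioLength -w 11111

    # stop all collection when done
    vscsiStats -x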

First only a single VM runs the Iometer load. At 1/3rd of the sample run, the second VM is started with the same I/O pattern. At 2/3rd, the first VM stops its load. This results in the following graph:

[Graph: VMs fighting for IOPS – blocksize view]

Read the rest of this entry »

Soon to come

  • Determining Linked Clone overhead
  • Designing the Future part 1: Server-Storage fusion
  • Whiteboxing part 4: Networking your homelab
  • Deduplication: Great or greatly overrated?
  • Roads and routes
  • Stretching a VMware cluster and "sidedness"
  • Stretching VMware clusters - what no one tells you
  • VMware vSAN: What is it?
  • VMware snapshots explained
  • Whiteboxing part 3b: Using Nexenta for your homelab