Posts Tagged ‘vSphere’
Still had this coupon lying around for a free VCP510 exam. I got it because I passed the VCP4-DT exam, and last week I saw that it was valid through… the end of THIS month. So what to do? I just took the shot. I hardly had any time to study, but then… that is nothing new to me… it is just how I passed my VCP4 and VCP4-DT certifications as well.
Today Paul Maritz takes the stage for the VMworld 2011 general session in Las Vegas. In line with this year's motto "Your Cloud. Own It", he laid out the direction things will be going according to VMware, starting today. This is definitely the bigger picture.
I must say that I'm a bit sad about the deep techie stuff becoming less and less visible at VMworld. But that is what is necessary for the next phase in "IT life". Ever seen a Star Trek movie where they needed to debug their warp drive? Nope. As technology develops, the "simple things" just vanish into the background. The cloud is here.
I often see virtual machines that perform poorly. There can be many reasons for this. I thought it was time to post a few "top 5 things to check in any given VMware ESX(i) environment" that might help you solve any issues.
Things to check on storage
Storage is often considered the bad guy when it comes to bad performance of virtual machines. As it turns out, this is often not the case at all. Still, here are some storage-related things to check if you encounter a poorly performing VM:
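Before blaming the array, it helps to look at where latency actually accrues. A quick first check is esxtop on the node itself (a sketch; the counter names are real esxtop storage columns, the thresholds are rules of thumb, not official limits):

```shell
# On the ESX(i) console (or resxtop against a remote host):
esxtop
# Press 'd' for the disk adapter view and watch these columns:
#   DAVG/cmd - latency reported by the device/array (high => storage side)
#   KAVG/cmd - latency added in the VMkernel (high => queuing on the host)
#   GAVG/cmd - total latency as the guest sees it (DAVG + KAVG)
# Sustained DAVG in the tens of milliseconds usually points at the array;
# a high KAVG with a low DAVG points at the host or its queue depths.

# Capture a single batch-mode sample for offline analysis:
esxtop -b -n 1 > esxtop-sample.csv
```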
Recently I got an email from a dear ex-colleague of mine, Simon Huizenga, with a question: "would this help speed up our homelab environment?". Since his homelab setup is very similar to mine, he pointed me towards an interesting VMware KB article: "Tuning ESX/ESXi for better storage performance by modifying the maximum I/O block size" (KB:1003469). What this article basically describes is that some arrays may experience a performance impact when very large storage I/Os are performed, and how limiting the maximum size of I/O blocks might improve performance in specific cases.
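The setting the KB talks about is the advanced option Disk.DiskMaxIOSize, which caps how large a single I/O the VMkernel will pass down to the array (larger guest I/Os get split). A minimal sketch of checking and lowering it from the console; the value 4096 KB is purely illustrative, not a recommendation:

```shell
# Show the current maximum I/O size (in KB) the VMkernel will issue:
esxcfg-advcfg -g /Disk/DiskMaxIOSize

# Lower it so large guest I/Os are split before hitting the array
# (only do this if your array actually struggles with big I/Os):
esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize
```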
In part 1 of this series I described how I selected hardware for my ultimate whitebox server. A whitebox server is a cheap server you can use to run VMware vSphere without it being on the VMware HCL. Never supported, but working nonetheless. Now that the hardware was selected and ordered from my local computer components dealer, the next step is to assemble and test the setup, which is the focus of this post.
So you've decided: You want to build yourself an ESX(i) environment while minimizing cost. But how do you choose between available hardware? In this blogpost I will be focusing on my recent whitebox server selection and how I arrived at my configuration out of all available components.
Different ways of getting to a successful Whitebox config
There are several different ways of getting to a cheap Whitebox configuration. So far I’ve been seeing four approaches:
- Build one big Windows/Linux server and run everything virtual (so virtual ESX nodes on VMware Workstation);
- Build one big ESX(i) server and run everything virtual (so virtual ESX nodes on the physical ESX node);
- Build two smaller ESX(i) servers (surprise, surprise… this can actually be cheaper than one big node!);
- Buy a complete (supported) system (like Dell or HP).
People are talking SO much about VMware View sizing these days. Everyone seems to have their own view on how many IOPS a vDesktop (virtual desktop) really uses. When you're off by a few IOPS times a thousand desktops, things can get pretty ugly. Everyone hammers on optimizing the templates, making sure the vDesktops do not swap themselves to death etc. But everyone seems to forget a very important aspect…
Where to look
People are measuring all the time. Looking, checking, seeing it fail in the field, going back to the drawing board, sizing things up, trying again. This could happen in an environment where storage does not have a 1-on-1 relation with disk drives (like when you use SSDs for caching etc). But even in straight RAID5/RAID10 configs I see it happen all the time.
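The trap hinted at above is the gap between front-end IOPS (what the guests generate) and back-end IOPS (what the spindles must deliver once the RAID write penalty is applied). A back-of-the-envelope sketch; the per-desktop IOPS, read/write mix, and penalty factors are illustrative assumptions, not measured values:

```python
# Rough VDI storage sizing: front-end guest IOPS vs. back-end spindle IOPS.
# All input numbers below are illustrative assumptions.

def backend_iops(desktops, iops_per_desktop, read_fraction, write_penalty):
    """Translate front-end (guest) IOPS into back-end (array) IOPS.

    Each guest write costs `write_penalty` physical I/Os on the array
    (a common rule of thumb: 2 for RAID10, 4 for RAID5).
    """
    frontend = desktops * iops_per_desktop
    reads = frontend * read_fraction
    writes = frontend * (1.0 - read_fraction)
    return reads + writes * write_penalty

# 1000 desktops at 10 IOPS each, with a write-heavy 20/80 read/write mix:
raid5 = backend_iops(1000, 10, 0.2, 4)   # 2000 + 8000 * 4 = 34000
raid10 = backend_iops(1000, 10, 0.2, 2)  # 2000 + 8000 * 2 = 18000
```

Note how the same thousand desktops need almost twice the back-end IOPS on RAID5 as on RAID10 with a write-heavy workload; sizing on front-end numbers alone is exactly how designs fail in the field.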
After a rather successful part 2 of this series, it is high time to kick off part 3, which covers Replication and Disaster Recovery (DR). Most important to note is that backup and DR are two completely different things, and one should not be tempted to combine both unless you are positive your solution will cover all business requirements for both DR and backup.
Today I decided to update my home lab from vSphere 4.1 to vSphere 4.1u1. Updating vCenter went smoothly. Once I tried to update the first ESX node in the cluster using VMware Update Manager (VUM), it failed with the error “vim.fault.noHost”.
Say what? Googling the error did not give away too much detail; all posts on this were way back in the ESX 3.5 times. I hate it when this happens. So what to do? Yes, I still run ESX in my homelab (I like boot from SAN way too much). So, off to the logs.
It had been some time since I looked at ESX logs in detail; the number of "verbose errors" is enormous… Anyway, it seemed to have something to do with the way vCenter talks (or rather fails in talking) to the node…
First I tried rebooting the node, then running VUM again to remediate it… But again it failed. Finally I just removed the node from the cluster (via a hard "disconnect" followed by a remove), then re-added the node. After this, the node remediated without issue.