VMware Data Protection 5.1 reviewed
People who have been using VMware Data Recovery quickly discovered that this product had issues. VMware's take on Data Recovery was that they wanted a backup product for smaller shops with a short time-to-market. Too bad it was this very product that drove a lot of users to Veeam or PHD Virtual because of its many problems. In secret, VMware started working with EMC's BRS division to build a brand-new backup product leveraging EMC's Avamar technology, under the codename "Project Toystory". This product has seen the light of day as "vSphere Data Protection 5.1", or vDP for short. In this post I will be looking into vDP version 5.1, which is actually the initial release.
Introduction to vSphere Data Protection 5.1
This is the first release of vDP, so actually a 1.0 version. I am not expecting a fully feature-rich product, but one that actually WORKS would be nice. After all, it is a "free" product (well, if you have Essentials Plus or above you already paid for it), and if it does what you need it to do, why spend the extra cash?
The technology behind vDP is EMC's Avamar, and technology-wise that is enough said. Avamar features industry-leading deduplication and, as far as I know, is the only product in this space that features variable block size deduplication. I think everyone knows deduplication by now, so I'll just focus on the variable block size part here.
The strongest deduplication: Variable block size
Avamar can vary the size of the blocks it scans through. This delivers a VERY high deduplication ratio compared to competitors that use fixed block sizes. So how does this work?
Consider two large files sitting in a VM. Both are identical, except that one file has a tiny bit of extra data inserted at the beginning. During backup, each block inside such a file is either sent to the backup store or marked as a duplicate of a block stored elsewhere. For fixed block size dedupe, this looks like this:

Deduplicating two large files using fixed block size dedupe. Note how the inserted data in the second file causes the backup to require a lot of data blocks (13 in this example).
Note that the blue blocks are all considered to be "Block1", as they deduplicate too; all unique blocks get their own number. This example clearly shows how a small amount of data inserted into the data stream can cause a major disruption in fixed block size dedupe algorithms. Because of the insertion, all succeeding blocks are misaligned and will be stored again, as the deduplication algorithm fails to detect that this data was actually already stored. Remember, this happens not only between two near-identical files, but also when data changes between two backups of the same file.
Variable block size dedupe does a far better job:

Deduplicating two large files using variable block size dedupe. Note how the inserted data in the second file causes a small block (B7) to be sent to the backup store. But after this small block, the rest is recognized again and the backup in this variable length case requires only 6 blocks and a tiny one.
As you can see, the variable block size algorithm spots the grey data block as new data, and will obviously need to store that particular block. But after that, the algorithm recognizes the blocks it already knows and will not store them again. This can reduce the footprint of your backups a lot: in this case 13 blocks versus 6 blocks (and a tiny one). This works within a single VM, but also across multiple VMs backed up by the same vDP appliance.
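To make this concrete, here is a minimal Python sketch of the idea (my own toy illustration, NOT Avamar's actual algorithm): it chunks two byte streams both ways and counts how many unique blocks end up in the backup store. The variable flavor uses a simple content-defined boundary test as a stand-in for Avamar's real chunking:

import hashlib
import random

def fixed_chunks(data, size=16):
    # Cut the stream at fixed offsets, regardless of content.
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, min_size=4, max_size=64):
    # Toy content-defined chunking: cut wherever a hash of the last
    # 4 bytes hits a pattern, so boundaries follow the content itself
    # and re-align shortly after an insertion.
    chunks, start = [], 0
    for i in range(len(data)):
        length = i - start + 1
        cut = (length == max_size)
        if not cut and length >= min_size:
            window = data[i - 3:i + 1]  # i >= 3 here, slice is safe
            cut = hashlib.sha1(window).digest()[-1] % 16 == 0
        if cut:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

random.seed(1)
original = bytes(random.randrange(256) for _ in range(4000))  # the "large file"
shifted = b"!!" + original  # same file, two bytes inserted up front

for name, chunker in (("fixed", fixed_chunks), ("variable", variable_chunks)):
    # A dedup store keeps only one copy of each distinct block.
    stored = set(chunker(original)) | set(chunker(shifted))
    print(name, "->", len(stored), "unique blocks stored")

Running this, the fixed variant stores roughly twice the number of blocks of the variable one, for exactly the reason shown in the pictures above: the two-byte insertion misaligns every fixed block, while the content-defined boundaries snap back into place right after it.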
So technology-wise this is a very efficient way to store your data. But how does vDP do feature-wise?
Limits and boundaries compared
VMware has set certain limits on the backup size when using vDP. For starters, the vDP appliance comes in three sizes: 0.5TB, 1TB and 2TB. These are fixed and cannot be changed after deployment. If you want to grow beyond 2TB, you need to deploy a second vDP appliance. You can grow up to a maximum of 10 vDP appliances per vCenter environment, maxing out at 100 VMs per vDP appliance. These appear to be soft maximums (meaning you can technically go beyond 10 appliances and 100 VMs per appliance, but VMware will not support those configurations, probably with good reason).
For smaller deployments that appears to be just fine: 1000 VMs using 20TB of variable block size deduped storage is quite serious. There is one caveat though: every vDP appliance has its own dedup store. This means that if you have 5 vDP appliances running, blocks common to VMs on different appliances will still be stored multiple times. This influences the overall data consumption, and it would be wise to think about what goes where (for example: put all your Linux VMs on one vDP appliance and all Windows VMs on another) in order to maximize the dedupe ratios.
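To see why grouping matters, here is a toy model (made-up block names, not vDP internals) where each appliance stores the union of the unique blocks of the VMs assigned to it:

# Toy model: each appliance dedupes internally, but not against the
# other appliances. Block names below are made up for illustration.
linux_vm1 = {"L1", "L2", "L3", "shared"}
linux_vm2 = {"L1", "L2", "L4", "shared"}
win_vm1   = {"W1", "W2", "shared"}
win_vm2   = {"W1", "W3", "shared"}

def blocks_stored(*appliances):
    # Sum the unique blocks each appliance ends up keeping.
    return sum(len(set().union(*vms)) for vms in appliances)

mixed   = blocks_stored((linux_vm1, win_vm1), (linux_vm2, win_vm2))
grouped = blocks_stored((linux_vm1, linux_vm2), (win_vm1, win_vm2))
print("mixed OSes per appliance:", mixed, "blocks")    # 12 blocks
print("grouped by OS           :", grouped, "blocks")  # 9 blocks

In the mixed layout the common Linux and Windows blocks are stored on both appliances; grouped by OS, each common block lands on one appliance only.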
This is not as bad as it looks though. I cannot help comparing it to Veeam, where every single backup job has its own dedup store. In that respect, the vDP technology has a cleaner design (you want multiple jobs to share one dedup repository) and on top of that it features a far more effective dedupe (as Veeam only uses fixed block size dedupe). Your mileage may vary though: if you back up 20TB of VMs in a single Veeam job, Veeam will deduplicate across all 20TB of data in that job. vDP cannot do this, as each target vDP appliance is limited to 2TB. But then again, how realistic is a single 20TB (deduped space!) backup job?
Another well-known competitor in this space is PHD Virtual. They sit somewhere in between: each appliance runs a single dedup store (for all backup jobs running on it) and they do not have the 2TB limit, but like Veeam they only use a fixed block size dedupe algorithm.
If you want the best of all worlds, you'd probably want to look at EMC's full-blown Avamar. Full-blown Avamar dedupes everything across one big variable block size dedup store. On top of that, it can also handle physical machines and laptops, offers WAN-optimized ROBO backups (as dedup is source-based), and you can replicate between Avamar appliances.
vDP 5.1 Features
So how does vDP do feature-wise? Again, I cannot help comparing it to the competition, mainly Veeam I guess. Also, do not forget that even though vDP is labeled version 5.1, it really is a version 1.0 (so I'll be gentle :) ).
Being a version 1.0 product, it does not have an overwhelming number of buttons to play with. It is built very much to the KISS principle (Keep It Simple, Stupid!). You can back up VMs, restore VMs, and do file level restores of VMs. There isn't much more to it today…
Time to give vDP a spin and see!
Installing vDP 5.1
As you deploy vDP 5.1, you first need to decide which size you'll be deploying. Remember that each version actually uses more storage than it can effectively store! I decided to deploy the 2TB version.
Installation is really simple:
- Make sure DNS is configured, including reverse DNS;
- Deploy the virtual appliance;
- Start the appliance, configure some basics and connect it to vCenter 5.1;
- Manage vDP through the vSphere Web Client (the native client is not supported).
Easy, right? The only really odd thing: the appliance needs a strong password of exactly 9 characters. No more, no less. That kind of defeats the purpose of a strong password, I'd think, as there are far fewer combinations that add up to exactly 9 characters than there would be with a variable length. Weird.
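Just to put a number on that, a quick back-of-the-envelope calculation (the 72-character alphabet below is an assumption purely for illustration):

# How much does "exactly 9 characters" throw away compared to a
# variable length of 9 to 16 characters? Alphabet size is assumed.
alphabet = 72
exactly_9 = alphabet ** 9
up_to_16 = sum(alphabet ** n for n in range(9, 17))  # allow 9..16 chars
print(f"exactly 9 characters: {exactly_9:.2e} combinations")
print(f"9 to 16 characters  : {up_to_16:.2e} combinations")
print(f"factor              : {up_to_16 / exactly_9:.2e}")

Allowing up to 16 characters gives you roughly thirteen orders of magnitude more possible passwords than the fixed-length-9 rule does.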
As a backup target, you need to specify where the appliance should put its VMDKs. It cannot back up directly to a CIFS share, which in general isn't very speedy anyway. So you can use any storage connected to vSphere through FC(oE), iSCSI or NFS, which is the general best practice for most backup appliances. A cool thing: the appliance uses multiple 256GB disks to build the total storage capacity. In this setup there is no "natural" 2TB limit on the backend store, so my hopes are on new releases featuring higher capacities per appliance (one can hope, right?). This is how the deployed 2TB appliance looks:

Settings of a deployed 2TB vDP 5.1 appliance. You can clearly see the number of disks connected to the appliance that make up the total of 3.2TB of space required for the 2TB version.
Once you have logged into the vSphere web interface, you are greeted by possibly the cleanest and simplest interface I've ever seen: a tab for Backup, Restore, Reports and Configuration (which is already done):

Maybe the simplest interface I've ever seen in a backup product: vDP 5.1 lacks fancy buttons, but has what it needs to do its thing.
To create a backup, you simply add the VMs of your choice to a job. You configure how long you want to keep the data and when to run the backup. That's it. Can you configure quiescing when snapshotting? No. Can you exclude vDisks inside VMs you want to skip? No. Limited? Maybe. Simple? Definitely. Tracking your backup is easy as well; actions show up in both the original vCenter client and the web client:

Running a vDP backup of my Artemis VM. You can clearly see the snapshot being made and the job progressing.
One weird thing is the progress percentage of the jobs. Very often I'd see a backup sitting at 26% completion, then all of a sudden be done. I'm not really complaining, but it would be nice if the percentage of a job tracked progress more accurately.
On the restore side, things are just as easy: select the backup you want to restore, then restore it to the original location or to a new one. That's it. Very easy to use indeed.
The configuration screen shows how vDP takes a somewhat different approach than most backup applications. It uses a fixed window for running backups, a maintenance window and a blackout window:

vDP 5.1 configuration tab. Note the windows set for the appliance: one for running backups, one for maintenance and one blackout window.
So what are these maintenance and blackout windows? Both are required for the proper operation of vDP, so they must remain in place; you can however adjust the times of day of each. The maintenance window is the portion of each day reserved for routine server maintenance activities such as integrity check validation, whereas the blackout window is used for server maintenance that requires unrestricted access. Restores remain possible during any of these windows, though.
The last feature (which you really do not find anywhere in the web interface) is file level restore, or FLR for short. Most backup solutions offer file-level restores, but from the backup software itself. vDP takes a different approach: file level restores are done from the VM where you need the restore! A user connects to the console of a VM (using the vCenter console, RDP etc.), and points a web browser at the vDP appliance using https://{vDP IP}:8543/flr . You provide credentials for the VM you are currently logged into, and you get a selector box in your web page where you can mark files for restoration. Again, simple and effective:
A HUGE plus to this approach: users of VMs can perform file-level restores on their own. They log into "their" VM, go to the file-level restore web page of vDP, and get their files back from a backup. No intervention at all from any administrator… No more users bothering you to get one of their precious files back. Oh yes.
A small negative to this approach: the VM that wants to use FLR needs to have Adobe Flash installed in order to run the restores. That's right, you need to install Adobe Flash in your server VMs. Whether or not that is a problem is up to you to decide…
A trick to skip vDisks in vDP backups
When I stated you cannot exclude certain vDisks from VMs, I did not mention independent disks. It is impossible for this kind of backup solution to back up any independent disk, as you cannot snapshot an independent disk. vDP will silently skip independent disks, and that is marked as a "known issue". In my view that is not an issue at all, it is a feature: I want my backups to show up green even if independent disks were skipped, and now I can back up that huge fileserver's C: drive without hitting vDP with the file data (if that is what I'd want). If you restore such a VM, the independent disks will be there in the restore, but will not contain any data. It is a nice and very usable way to skip disks for sure.
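Marking a disk independent is a simple edit in the VM's settings, but it can also be scripted. Below is a minimal pyVmomi sketch under stated assumptions: the vCenter address, credentials, VM name and disk index are all placeholders, and the VM should be powered off with no existing snapshots when you change the disk mode:

# Sketch: flip a data disk to independent-persistent so snapshots,
# and therefore vDP, silently skip it. Placeholder names throughout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Naive lookup of the VM by display name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "fileserver01")

# Pick the disk to exclude; here simply the second virtual disk.
disks = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)]
dev_spec = vim.vm.device.VirtualDeviceSpec()
dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
dev_spec.device = disks[1]
dev_spec.device.backing.diskMode = "independent_persistent"

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
Disconnect(si)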
Quiesced Snapshot issues
I had some more issues with the quiesced snapshots. As vDP creates backups, it makes snapshots that are always quiesced. Some time ago I was told that was a really bad idea for domain controllers. Not sure how people feel about that right now, but fact is that a lot of VMs I tried refused to snapshot ("Timed out while quiescing the virtual machine."). For Windows 2008 R2 VMs there is actually a known issue around refusing to snapshot with quiescing turned on: VMware KB 1031298: Cannot take a quiesced snapshot of Windows 2008 R2 virtual machine. Just to be sure, I removed VMware Tools from my domain controllers and then reinstalled VMware Tools with the VSS component disabled.
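The always-quiesce behavior is a vDP choice, not a vSphere one; the underlying snapshot API exposes quiescing as a flag. For comparison, this is roughly how a manual non-quiesced snapshot looks through pyVmomi (a sketch, reusing a vm object looked up as in the earlier disk-mode example):

# vDP effectively always requests quiesce=True on its snapshots.
# Taken by hand, the same call can simply pass quiesce=False, which
# sidesteps the VSS timeout on guests affected by the KB above.
task = vm.CreateSnapshot_Task(name="manual-backup",
                              description="non-quiesced snapshot for backup",
                              memory=False,
                              quiesce=False)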
Scheduling and IOPS flooding
As for scheduling, I really missed two options: the ability to run backups manually instead of on a schedule, and the option to run a backup multiple times a day. To get the latter going, you'd actually need a second vDP appliance (a "morning" and an "afternoon" appliance, so to speak), as the backup windows are set on a per-appliance basis. Not really what you'd want. On the other hand, you see all VMs, including the ones not being backed up. That is actually a nice touch; you can very easily identify whether you are missing any VMs in the backups. Also, you can start backups with "only out of date sources", which is basically a "retry job" but a little smarter.
At some point I managed to kill my entire environment by kicking off a large vDP backup. Per appliance, multiple VMs are backed up simultaneously (up to eight VMs in parallel). Can you configure the maximum number of parallel streams? Not really. So my vDP appliance started to create snapshot after snapshot, then decided to run so many backup streams in parallel that it completely flooded my NAS. As a result, vCenter (also running from those disks) started to respond really slowly, and finally everything came to a grinding halt. Ouch. The cleanup? Manually removing vDisks from the appliance (that were still attached), a lot of VM snapshot consolidation, and some manual snapshot removal. Nothing you won't recognize from any of the other backup products out there when they error out. So how do you limit the number of parallel streams?
I tried creating several backup jobs, each with only one or two VMs inside, hoping that vDP would kick off the jobs one by one. No such luck; the appliance is too smart, and kicks off all backups, mixing and matching the VMs across all the jobs. So the I/O flooding goes on. One nice thing about having multiple smaller jobs: for the initial backup you can run them manually, one by one. This loads the initial backups onto the appliance step by step, without hogging the disks too much. After the initial backups are all done, the subsequent backups are far quieter thanks to CBT and dedupe. Still, I would love to see an option to manually set the maximum number of parallel backup streams on the vDP appliance, just to keep those smaller environments with large change rates from being impacted too much.
Finally I decided to enable SIOC on all of my NFS stores. This seems to effectively limit the IOPS performed by the "noisy neighbor" vDP so that latency stays within acceptable boundaries. Whether that is enough for any small environment I'm not sure, but my poor virtual NAS with 4 SATA drives in RAID5 at least did not completely drown this time, and the response times of the VMs stayed within an acceptable number of milliseconds :) . That night the backup of 12 VMs ran for only 18 minutes. After that… ALL DONE, without issues or any noticeable slowness of the environment. Quite impressive!
One last thing: if you REALLY manage to break your vDP appliance, you can log into the appliance directly (using https://{vDP-IP}:8543/vdp-configure/ ) and actually roll back to a previous "checkpoint" to return the appliance to a working state. Upgrades of the appliance are also run from this interface: you just connect an upgrade ISO to the appliance and go to the upgrade tab. A checkpoint will be created and the appliance is upgraded. Sweet!
Conclusion
The functionality of vDP is solid. EMC's Avamar technology helps to get stable results with industry-leading deduplication ratios. Best part: you probably already own a copy if you have an Essentials Plus bundle or better. If it fits the bill, then why generate another one :)
vDP is extremely simple to install and extremely simple to use. This is partly because it features only the basics in this release; there are no extra buttons to press, which leaves very little room for error. File level restore driven from within the VM is another surprise: the user of a VM can restore his or her own files without interference from any administrator. The "issue" of having to install Adobe Flash in the VM before file level restore works is something you'll have to live with.
Some missing basics, like NOT being able to skip quiescing when taking a snapshot, might be an issue though. If you need more today, I'd suggest looking at Veeam or PHD Virtual. If, on the other hand, you are just looking for a basic backup that works, vDP might be for you.
References:
VMware KB: VMware Data Protection (VDP) FAQ
Document: vSphere Data Protection 5.1 Release Notes
Document: vSphere Data Protection 5.1 Administration Guide
Hi, is it possible to put the backup on a NAS over the LAN?
Hi Rob,
That is very possible; in fact, it was the way I tested vDP. vDP can only put its backup stores on a datastore in vSphere, so you just mount your NAS as a datastore on your vSphere node(s). As you deploy the vDP virtual appliance, you configure the data disks to live on the NAS datastore and you're done.
Thanks
As usual, Erik, it's a very nice and detailed article.
Good article. VDP Advanced addresses some of the limitations mentioned in this article.
VDP has two tiers:
vSphere Data Protection (VDP)
vSphere Data Protection Advanced (VDP Advanced)
The following table defines the features available in VDP and VDP Advanced.
Table 1-1. VDP and VDP Advanced Features

Feature                                VDP         VDP Advanced
VMs supported per VDP appliance        up to 100   up to 400
Maximum datastore size                 2 TB        8 TB
Ability to expand current datastore    No          Yes
Support for image-level backups        Yes         Yes
Support for guest-level backups of:
  Microsoft SQL Servers                No          Yes
  Microsoft Exchange Servers           No          Yes
Support for file level recovery        Yes         Yes
Hi Dean,
Yes, VDP Advanced ups the stakes a bit. But there is a huge difference between VDP and VDP Advanced: VDP is included in vSphere, VDP Advanced is not. Advanced needs a separate license.
That all of a sudden places it in the realm with lots of other competitors… The core Avamar engine would "survive" a head-to-head comparison, but the rest of the features are still somewhat behind. Not strange, considering this is still no more than a v1.x product (and I would probably classify VDP Advanced as a 1.1 version).
I think I'll wait a while for a new version before I write another post on this product, so that we'll really have something to compare :)
About the IOPS flooding and how to limit the number of parallel streams: I found that changing the file /usr/local/avamarclient/etc/avagent-list from eight to the desired number of proxies (3 in my case) and rebooting the vDP appliance does the trick.
This is an unsupported workaround, but in my situation it seems to work fine.
root@vdp:/etc/init.d/#: more /usr/local/avamarclient/etc/avagent-list
# Limits for number of file and image proxies.
file=3
image=3
# Named proxies – pins are associated based on current limits.
proxy-1:
proxy-2:
proxy-3:
#proxy-4:
#proxy-5:
#proxy-6:
#proxy-7:
#proxy-8:
Hello,
One small question:
I've just upgraded VMware Tools on a Linux VM (OEL). After that, every time vDP starts a backup, the VM freezes (processor and memory usage go up to 100%) and I have to restart the VM.
I've read the explanations about the problems with Windows Server VMs and quiesced backups.
Any ideas?
Thank you