Compellent presentation @Dutch VMUG 2010

This time I joined a session with Compellent. I have known them for their smart storage arrays, which can move data between storage tiers to speed up access to frequently used data.

I’m excited to see what they have come up with this year!



They store data in a flexible manner, which they call “fluid data“. It is efficient and cost effective: RAID sets are created in a very flexible way, with RAID levels determined at the block level, once again to optimize the flow of data.

Compellent has tiers on SSD, FC, SAS or SATA. But even nicer, they can also create a sort of “sub-tiers” by choosing different RAID levels within each tier! The Compellent storage box constantly monitors the flow of data and automagically (love that word!) moves blocks of data between tiers and these “sub-tiers” (my own word, to make things clearer in this blog entry).
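Just to make the idea concrete (this is purely my own sketch, not how Compellent actually implements it): think of a placement decision per block, based on how “hot” that block is. The tier names, thresholds and access statistics below are made up.

```python
# A minimal sketch of block-level tiering, purely to illustrate the idea.
# Tier names, thresholds and access statistics are assumptions; the real
# engine inside the Compellent box obviously works very differently.

# Tiers and "sub-tiers" ordered from fastest/most expensive to slowest/cheapest.
TIERS = [
    ("SSD",  "RAID10"),
    ("FC",   "RAID10"),
    ("FC",   "RAID6"),
    ("SATA", "RAID6"),
]

def place_block(reads_per_day: int, writes_per_day: int) -> tuple[str, str]:
    """Pick a (media, RAID level) placement based on how hot a block is."""
    heat = reads_per_day + 2 * writes_per_day  # weigh writes a bit heavier
    if heat > 1000:
        return TIERS[0]      # very hot: SSD, RAID10
    if heat > 100:
        return TIERS[1]      # warm: FC spindles, RAID10 sub-tier
    if heat > 10:
        return TIERS[2]      # cooling down: same FC spindles, RAID6 sub-tier
    return TIERS[3]          # cold: SATA, RAID6

print(place_block(reads_per_day=5000, writes_per_day=200))  # ('SSD', 'RAID10')
print(place_block(reads_per_day=3, writes_per_day=0))       # ('SATA', 'RAID6')
```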

Compellent’s power lies in the software. The hardware is quite basic; only their cache card is custom made. This enables them to be very flexible in adopting new architectures and techniques.

The software that ticks inside the Compellent boxes also delivers all the nifty new features. Replication, snapshotting (no application integration yet, I’m told), VMware Site Recovery Manager (SRM) support… it’s all there.

It’s a shame VAAI is not inside their boxes yet, but it is coming (for an explanation of VAAI, see this entry on Duncan Epping’s blog).

Integration with the VI client is also present. A plugin lets you provision their storage with real ease, and the same goes for recovering snapshots; it is all in the plugin’s GUI.

I really like their way of using RAID6. I was especially interested in the fact that they write to RAID10 first (a write penalty of only 2) and migrate the data to the RAID6 part of the same spindles at a later time, so incoming writes never pay RAID6’s much higher write penalty.
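To put some (assumed) numbers on that: the classic write penalty is 2 for RAID10 and 6 for RAID6, so landing writes on RAID10 keeps the back-end I/O cost per write low, while the move to RAID6 happens later in the background. A quick back-of-the-envelope calculation, with made-up disk counts and per-disk IOPS:

```python
# Back-of-the-envelope write penalty math, just to show why landing writes on
# RAID10 first is attractive. Disk count and per-disk IOPS are assumptions.

RAW_IOPS_PER_DISK = 180        # assumed 15k FC spindle
DISKS = 24

def frontend_write_iops(write_penalty: int) -> float:
    """Front-end write IOPS for a 100% write workload on this spindle count."""
    return DISKS * RAW_IOPS_PER_DISK / write_penalty

print(frontend_write_iops(2))  # RAID10: each write costs 2 back-end I/Os -> 2160.0
print(frontend_write_iops(6))  # RAID6:  each write costs 6 back-end I/Os -> 720.0
```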

The official answer on deduplication is “no”. But I’m sure it is under investigation… 😉

A metro solution? Yep, it is there, called “Live Volume“. Asynchronous at the moment, but it is still possible to perform offline maintenance on one of the two sides. A synchronous variant is in the making, after which either side can fail without interrupting data delivery, and you can load balance across the two.

Overall, their solution certainly seems a nice fit for VMware.
