
Evolution of storage – traditional to hyperconverged

These days, there’s been an explosion in the diversity of storage options, which often bleed into compute and/or networking when it comes to virtualized architecture.  It used to be that storage was storage, networking was networking, and compute was compute.  And when it came to storage, while architectures differed, whatever you stored your virtual machines on did storage and storage only.  EMC CLARiiON/VNX, NetApp filers, iSCSI targets like LeftHand, Compellent, EqualLogic, etc.  These were all storage and storage only.

Some of these added SSD as a permanent storage tier and/or as an expanded caching tier.  We also saw the emergence of all-flash storage arrays that attempted to make the most of SSD, using technologies like compression and deduplication to overcome SSD’s inherent weakness: high cost per unit of storage.  These arrays are often architected from the ground up to work best with SSD, taking into account the garbage collection needed to reuse space on SSD.
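Just to make the deduplication idea a bit more concrete, here’s a tiny toy sketch in Python (purely illustrative, not how any vendor actually implements it) of content-addressed block storage: identical blocks hash to the same fingerprint and get stored only once, which is how dedup stretches expensive flash capacity.

```python
import hashlib
import os

class DedupBlockStore:
    """Toy content-addressed block store illustrating inline deduplication.

    Not any vendor's implementation -- just the core idea: identical
    blocks hash to the same fingerprint and are stored only once.
    """

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # fingerprint -> block data (stored once)
        self.volumes = {}   # volume name -> ordered list of fingerprints

    def write(self, volume, data):
        """Split data into blocks, storing each unique block only once."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # dedup: skip if already stored
            refs.append(fp)
        self.volumes[volume] = refs

    def read(self, volume):
        return b"".join(self.blocks[fp] for fp in self.volumes[volume])

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())


# Two "VMs" cloned from the same OS image consume very little extra flash.
store = DedupBlockStore()
image = os.urandom(4096 * 100)                 # pretend OS image: 100 unique blocks
store.write("vm1", image)
store.write("vm2", image + os.urandom(4096))   # same image plus one unique block
print(store.physical_bytes() // 4096)          # 101 blocks stored physically, not 201
```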

But these, too, are all storage-only devices.

Over time, that’s changed.  We now have converged infrastructure, such as VCE and FlexPod, but those typically still use devices dedicated to storage.  VCE Vblock and VxRack use EMC arrays.  FlexPod uses NetApp filers.  These are prepackaged, validated designs built in the factory, but they still use traditional storage arrays.

Keep in mind, I don’t think there’s inherently anything wrong with this or any of these architectures.  I’m just laying down a framework to describe the different options available.

Now, we do have options that truly move away from the concept of buying a dedicated storage array, a category called hyperconverged.  They’re still shared storage in the sense that your VMs can be automatically restarted on a different host should the host they are running on go down.  There’s still (when architected and configured properly) no single point of failure.  But this category doesn’t use a dedicated storage device.  Instead, it effectively takes local storage/DAS attached to multiple compute units and pools it together with special sauce software to turn that storage into highly available, scalable storage, usually for use with virtualization.  In fact, many only work with virtualization.  These tend to use commodity hardware in terms of x86 processors, RAM, and disk types, although many companies sell their own hardware with these components in them, and/or work with server hardware partners to build their hardware for them.
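To give a rough feel for what that special sauce is doing (this is purely a conceptual toy, not how VSAN, Nutanix, or anyone else actually implements it), here’s a sketch of pooling each host’s local disks and writing every block to two different hosts, so a single host failure doesn’t take the data with it:

```python
import random

class HyperconvergedPool:
    """Toy model of pooling local/DAS storage across hosts.

    Purely conceptual -- each block is written to two different hosts
    so losing any single host doesn't lose data (no dedicated array).
    """

    def __init__(self, hosts, replicas=2):
        # Each host contributes its local disk, modeled here as a dict.
        self.local_disks = {host: {} for host in hosts}
        self.replicas = replicas

    def write(self, key, data):
        # Place replicas on distinct hosts (real products use much smarter placement).
        targets = random.sample(list(self.local_disks), self.replicas)
        for host in targets:
            self.local_disks[host][key] = data

    def read(self, key, failed_hosts=()):
        # Any surviving replica can serve the read.
        for host, disk in self.local_disks.items():
            if host not in failed_hosts and key in disk:
                return disk[key]
        raise KeyError(f"{key} unavailable -- too many hosts down")


pool = HyperconvergedPool(["esx01", "esx02", "esx03"])
pool.write("vm1-disk-block-0", b"...")
# Even with one host down, the block is still readable from its other replica.
print(pool.read("vm1-disk-block-0", failed_hosts=("esx01",)))
```

The real products layer on far smarter placement, rebalancing, caching, and failure handling, but the core idea of turning a bunch of local disks into one resilient pool is the same.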

The common thread among them, though, is that you’re not buying a storage array.  You’re buying compute + storage + special sauce software, and together those comprise the total solution.

Examples of these options include Nutanix, VMware VSAN (or EVO:RAIL, which utilizes it), SimpliVity, and ScaleIO.  You’ll see more emerging, and there are plenty I didn’t mention simply because I don’t intend this to be a definitive list.

While there are good solutions in each of these categories of storage, none of them is perfect.  None of these types works best for everyone, despite what any technical marketing will try to tell you.

So while there are more good storage choices to pick from than there have ever been, it’s also harder to choose a storage product than it has ever been.  My goal in these posts is to lay a foundation for understanding these different options, which might help people sort through them better.