
How To Manage Unity Host LUN IDs

EMC’s new Unity arrays have been out for a while now, and every array has its quirks.  I just found my first Unity oddity today.  Remember, the Unity replaces both the VNX and the VNXe.  If you’ve ever played with a VNXe, you know EMC simplified the Unisphere interface and removed and/or hid many options in the process.  The Unity does this, too, and you’ll perhaps notice it first when managing Unity Host LUN IDs.

Why might you want to control the host LUN IDs?

  • Consistent LUN Masking for easier troubleshooting.
  • Storage-based replication products like RecoverPoint have best practices calling for consistent Host LUN IDs on replicated LUNs, for both host access and the RPAs.
  • Boot from SAN LUNs may be required to use a specific Host LUN ID such as LUN 0, or, if you’re running Cisco UCS, you need to specify the boot LUN’s Host LUN ID in the boot policy.

In this case, the customer decided to reinstall ESXi on a new boot LUN.  They created a new boot LUN, granted the correct ESXi host access, and then deleted the old one.

Provisioning LUNs

Provisioning LUNs on an EMC Unity array is easy, at least in most cases.  EMC streamlined the interface on these arrays.  You simply go to Storage > Block (or VMware if it’s for ESXi), then step through the wizard to set your options and grant host access.

[Image: unitycreatelun]

When you get to the point where you grant access, you simply put checkmarks in the boxes for each host to which you wish to grant access.

[Image: unitygrantlunaccess]

Do you see anything missing, even if you click that gear and add columns?  If you guessed there’s no way to control what host LUN ID will be used in the LUN masking, go pat yourself on the back.

Fortunately, the Unity does provide a way to set the Host LUN ID once a LUN is created… in most cases…

Managing Unity Host LUN IDs for LUN Masking

Here’s how Unity Host LUN IDs work.  Unity automatically assigns each LUN the next available Host LUN ID for that host.  If you wish to change the Host LUN ID for a LUN mask, simply navigate to Access > Hosts, click on the host whose Host LUN ID you wish to change, then click the pencil icon to edit.  Next, click LUNs.  Finally, click “Modify LUN IDs”.
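The “next available” allocation behavior described above can be sketched in a few lines of Python.  This is purely an illustration of the pattern (assuming “next available” means the lowest unused ID for that host), not actual Unity code:

```python
def next_host_lun_id(used_ids):
    """Return the lowest non-negative Host LUN ID not already in use
    for a given host.

    Illustrative sketch only -- this mirrors the allocation behavior
    described above, assuming the array fills the lowest gap first.
    """
    hlu = 0
    while hlu in used_ids:
        hlu += 1
    return hlu

# Example: a host already masked with HLUs 0, 1, and 3.
existing = {0, 1, 3}
print(next_host_lun_id(existing))  # 2 -- the gap left by a deleted LUN is reused
```

This is exactly why the customer’s scenario above bites: delete a boot LUN at HLU 0, create a new one, and the new LUN may or may not land back on HLU 0 depending on what else was masked in the meantime.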

[Image: change unity host lun id]

Easy, right?  As long as your host isn’t identified as a VMware host, you’re good!

Managing Unity Host LUN IDs for LUN Masking on VMware Hosts

If you follow those directions but your host is identified as a VMware host within Unity, you’re in for a nasty surprise.  Let’s play the game, “what’s missing?”

[Image: unity host lun id missing]

If you guessed “no way to change Host LUN IDs”, you’re correct again.  YOU’RE ON FIRE!!!  That’s not all, though.  There’s no LUN access management here at all!  That includes Host LUN IDs.

Don’t bother going into the initiators, either; it’s not there.  It’s not under the LUN in question either, even though that’s how you manage LUN access for VMware hosts in general.  Other than the command line, there’s nothing you can do with identified VMware hosts.

Solution?  Don’t make it an identified VMware host.  Simply remove the vCenter server from Unity.  That places all ESXi servers discovered from this vCenter into the general Hosts group, giving you back the ability to change the Host LUN IDs.

[Image: unity vcenter server removed esxi hosts]

Now, you can manage the Host LUN IDs again!


Unity – EMC’s new unified storage array

As you may know, EMC released their new unified storage array for block and file called Unity.  I wanted to go over it a bit to help people understand where this array fits within the storage landscape to see if it might be a good fit for them.

What exactly is Unity?

Unity is a unified block and file array.  It’s very similar to both the VNXe and VNX models of the past.  Like most of those, Unity supports both Fibre Channel and iSCSI protocols for block storage, as well as CIFS and NFS for file protocols.

It is a dual storage processor, ALUA-based array, with redundant components across the board.

For IO ports, each storage processor includes 2x 1GbE ports and 2 Converged Network Adapter (CNA) ports.  The CNAs can be configured at the factory either as 10GbE adapters for iSCSI or NFS, or as up to 16Gb Fibre Channel interfaces.  Note those modes cannot be switched after the device is shipped.  You can also add up to two IO modules per storage processor, in identical pairs, to provide additional IO ports, including 1GbE, 10GbE, or FC.  All Unity arrays also have 2 SAS ports to connect additional shelves of disks called DAEs (just like the VNX).  The 500 and 600 arrays can have additional SAS ports installed as an IO module to approach their maximum supported disk configurations as well.

The Unity arrays effectively replace all the VNXe storage arrays going forward.  In addition, they replace most VNX storage arrays.  The only exceptions are the VNX 7600 and 8000 arrays, which will continue due to their higher scalability relative to the Unity models.

Unity has all the other features you’ve come to expect from EMC, including secure remote support and monitoring via ESRS, FAST Cache using SSDs as a third layer of cache for storage acceleration, FAST VP automated storage tiering, and more.

Also, each Unity model comes in an all-flash version for the performance conscious.

Improvements Over the VNX/VNXe

There are quite a few improvements I wanted to point out over the VNX and/or VNXe.

  • HTML5 based Unisphere – YES!  TAKE THAT JAVA!
  • Simplified and easier to use interface
  • Significantly smaller rack footprint when offering both block and file
    • Within the VNX line, you typically needed Storage Processors, X-Blade data movers, and Control Stations to offer both protocols, taking up far more rack space and power.  Now, just the DPE provides the same functionality!
  • Support for both block and file VMware vVols
  • Easy setup of ESRS within the array, as on the VNXe (but not the VNX)
  • All arrays come with IO ports that can potentially provide both iSCSI and FC support without requiring any additional IO cards
  • Far faster setup
  • Better remote monitoring and data analytics of the storage array
  • Ability to run Unity as a virtual storage appliance for dev/test, potentially even for free!

That’s quite a jump from the VNX/VNXe, even though the concepts of the two arrays are the same.

Where can I learn more?

There is already an abundance of learning resources about the Unity arrays.  I would suggest checking out the following: