Category Archives: vSphere

vCenter 6 Reconfigure from Embedded to External PSC

There have been some problems with embedded PSC configurations, so I’ve had requests to move away from the embedded PSC (PSC and vCenter in same OS instance) to external configurations.  Thankfully, vCenter Update 1 and above has a method to do just this!

Transitioning to External PSC

To accomplish this, I first built a new virtual machine running Server 2012 R2, patched it to current, joined it to the domain, and granted the appropriate rights for the vCenter service account.

It’s also important to note that the existing vCenter 6.0 must be running Update 1 or later for this to work, and the new PSC should be deployed using the same build as the existing vCenter.  Patch your current vCenter to Update 1 or higher first if needed.

Also, make sure you have a good rollback plan, like a whole VM backup or snapshots as needed.

This process works just as well for the appliance.

You then install an external PSC, joining the existing SSO domain and site.  Now there are two PSCs, but vCenter is still set up in an embedded configuration, so the external PSC isn’t used yet.

At this point, you need to use the cmsso-util utility with the reconfigure option, located in your vCenter installation folder, typically C:\Program Files\VMware\vCenter Server\bin.

cmsso-util reconfigure --repoint-psc destpsc.vs6lab.local --username administrator --domain-name "vsphere.local" --passwd "P@ssw0rd"

I immediately ran into my first issue…

[Screenshot: PNID error when repointing the PSC]

“The provided Platform Services Controller(PSC) is not a replication partner of the localhost. Please make sure to provide the Primary Network Identifier (PNID) of the PSC.”

A little googling quickly led me to a community post stating that the DNS name is apparently case sensitive, so check your DNS records to see whether the record is in all caps or mixed case.  Use the name exactly as it appears in DNS, and you’re golden.  In my case, it was DESTPSC.vs6lab.local.

Sit back and be patient.  Mine took probably a solid 10 minutes, but I am running it in a slower lab environment.

When it’s finished, verify the vSphere Web Client is functioning.  Also, verify the PSC has been repointed under your vCenter Server – Manage – Advanced Settings – config.vpxd.sso.admin.uri

[Screenshot: confirming the PSC repoint in Advanced Settings]
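
If you prefer to check this from PowerCLI instead of the Web Client, something like the following should return the same value.  This is just a sketch, and it assumes you’re already connected to the vCenter in question with Connect-VIServer:

# Read the SSO admin URI advanced setting from the connected vCenter
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.sso.admin.uri" | Select-Object Name, Value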

PSC is done!

vSphere Replication 6 – Stopping replication impacts

I unfortunately didn’t get a chance to post Thursday, as I came down with a bit of a stomach bug, but I’m back at it!

I found this little interesting tidbit during preparation for VCAP6-DCV Deployment…

Did you know that stopping replication on a VM in vSphere Replication 6.X behaves differently depending on whether you used a replication seed?

Just to make sure we’re all clear, a replication seed in vSphere Replication speak is when you copy a VMDK down from the source site, upload it to the target site, and then, while configuring replication for the VM, select the datastore/folder containing that VMDK.  When vSphere Replication sees the matching VMDK, it uses the data already there and replicates only the changes made since that copy was taken.

In vSphere Replication 6.X, if you used a seed and you stop replication for a VM, the target VMDKs are left in place.  If you didn’t use a seed and just let vSphere Replication replicate the initial copy of the VMDK, stopping replication DELETES the VMDK at the target site!  If that’s a large data set, that could be a lot of data that has to be replicated again, more than likely over a WAN link!

This in particular impacts a somewhat common task when it comes to growing a VMDK for a VM being replicated by vSphere Replication.  To do this, replication must be disabled at some point.

If you used a replication seed, it’s actually easier.  You simply stop replication, grow the VMDK on both sides, and reconfigure replication.  Pretty easy.  The target VMDK would obviously not have been deleted, making this possible.

If the VMDK wasn’t seeded, you need to do a planned failover, stop replication, resize the VMDK on both sides, and reconfigure replication.  This also obviously requires downtime.

I’m still investigating to see if there’s a way to determine if the VMDK was seeded or not, so you would know which way to go.  If you’re unsure though, use the non-seeded method as a precaution unless it’s okay to have to re-replicate the VMDK/whole VM.

VMware network test commands

I recently ran into an issue with vSphere Replication that involved network connectivity (probably a future post), and I quickly realized that VMware network test commands are not consistent across all their products, so this could be confusing for many people.  I’ll update this post later as I get the commands for other products, but this may help someone looking for how to do VMware network testing and troubleshooting.

ESXi

ESXi has two helpful commands.  For basic connectivity tests, vmkping is awesome because it’s simple to use and lets you specify which VMkernel port you want to test.  Sure, you could use ping, but you can’t specify which vmk interface with it.

To ping 192.168.1.1 from your Management Network VMkernel port (vmk0 by default), it’s simply:

vmkping 192.168.1.1 -I vmk0

Another good use is validating jumbo frames, since you can also specify the packet size and disable packet fragmentation.  To run the same test without fragmentation on a 9000 MTU network, use a payload of 8972 bytes (9000 minus 28 bytes of IP and ICMP headers):

vmkping 192.168.1.1 -I vmk0 -s 8972 -d

For testing specific port connectivity, ESXi does support the netcat, aka nc command.  To test port 80 on destination 192.168.1.1:

nc -z 192.168.1.1 80

You can specify UDP mode using -u as well.  Note that, at least in my experience, -s <source IP> does NOT work, so I don’t believe it’s possible to direct netcat through a specific VMkernel port.  For example, when I tried forcing it through a source IP that shouldn’t have worked, the connection still succeeded.
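
If you’d rather drive these ESXi tests from PowerCLI (handy for looping across hosts), something along these lines should work.  It’s only a sketch: the host name is an example, it assumes an existing Connect-VIServer session, and the esxcli argument names may vary slightly between builds:

# Run a vmkping-style test through esxcli from PowerCLI
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.vs6lab.local") -V2
$esxcli.network.diag.ping.Invoke(@{
    host      = "192.168.1.1"
    interface = "vmk0"
    size      = 8972     # jumbo frame payload (9000 MTU minus headers)
    df        = $true    # do not fragment
    count     = 3
})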

Any VMware Product Running on Windows 2012 or Higher (vCenter, SRM)

Everybody knows ping.  I’m not gonna go over that.  But did you know that PowerShell has a ping cmdlet?  It’s useful for documenting results (via Export-Csv) and for scripting lots of ping tests.

To ping 192.168.1.1:

test-connection 192.168.1.1

Another handy trick is that you can remotely have multiple Windows machines ping the same target, and/or specify multiple targets.  For example, if I want Server1 and Server2 to each ping 192.168.1.1 and 192.168.1.2:

test-connection -Source Server1,Server2 -ComputerName 192.168.1.1,192.168.1.2
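
As a quick sketch of the documentation angle mentioned above (the target list and output path are just examples), you can capture the results to a CSV:

# Ping a list of targets and save the results for documentation
$targets = "192.168.1.1", "192.168.1.2"
$targets |
    ForEach-Object { Test-Connection -ComputerName $_ -Count 4 -ErrorAction SilentlyContinue } |
    Select-Object PSComputerName, Address, IPV4Address, ResponseTime |
    Export-Csv -Path .\ping-results.csv -NoTypeInformation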

PowerShell also has a cmdlet to test network port connectivity.  To test whether the local machine can connect to 192.168.1.1 on TCP port 80:

test-netconnection -computername 192.168.1.1 -InformationLevel detailed -port 80

Unfortunately, there isn’t a handy -source parameter, but you could use PowerShell remoting to run this command on multiple remote computers, too.
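
For example, here’s a rough sketch using PowerShell remoting (the server names are placeholders):

# Run the port test from several remote machines at once
Invoke-Command -ComputerName Server1, Server2 -ScriptBlock {
    Test-NetConnection -ComputerName 192.168.1.1 -Port 80 -InformationLevel Detailed
}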

VMware vCenter Server Appliance

For pinging, there’s the ping command.  That’s easy enough.

If you try to use netcat for port testing, it isn’t there by default.  You have to run the following to temporarily install it on version 6:

/etc/vmware/gss-support/install.sh

Rebooting the VCSA removes it.

If you’d rather not do that, you can use curl instead:

curl -v telnet://192.168.1.1:80

vSphere Replication Appliance

For pinging, there’s the ping command.  No surprises.

For network port testing, again, netcat isn’t installed, nor is there a supported way to install it to my knowledge.  Instead, use the curl command:

curl -v telnet://192.168.1.1:80

Keep checking back, as I add more.

Updating vSphere 6 vCenter Server Appliance

If you skipped the first release of vCenter 6 and deployed Update 1, a new version of vCenter was released for Update 1 with some security fixes among other things.  Many people are opting for the appliance version of vCenter for the first time, and patching it isn’t like the Windows version, so I wanted to document my experience with how to install updates for the vSphere 6 vCenter Server Appliance.

First off, friendly reminder: RTFM with this kind of thing.  I’m screwing around in my lab, so I didn’t at first, and I immediately ran into issues, as you’ll see, but it was my fault.

Step 1:  Check interoperability with all vSphere components, third party products, and note upgrade paths.

If you are using any VMware products that interact with vCenter, such as Horizon View, vCenter Operations Manager, or Site Recovery Manager, or third-party products such as backup software (Veeam, etc.) or management tools (VMTurbo, etc.), ensure you are running versions that are supported with the new version of vCenter you are about to upgrade to.  If not, map out the proper order and the new versions you need to install in order to preserve functionality for all your products and services.  Don’t forget to check support for your external database if you use one, too.

I’m assuming you’ve taken care of all this already.

Step 2: Download all your relevant files you’ll need.

At a minimum, you’ll need to download the patch file from VMware.  This is NOT the full install version of the appliance!  You need to go to:

https://my.vmware.com/group/vmware/patch

Filter for patches for vCenter, the major version of vCenter, and download the applicable patch file for your deployed version of the appliance.

I didn’t RTFM, so I downloaded the VCSA full installable file ISO, and got greeted with the following:

Command> software-packages stage --iso --acceptEulas
[2016-01-09T19:31:01.009] : Staging software update packages from ISO
[2016-01-09T19:31:01.009] : ISO unmounted successfully
[2016-01-09T19:31:01.009] : CD drives do not have valid patch iso.
[2016-01-09T19:31:01.009] : Staging process failed.

Get the patch file!

If you use the Appliance Management Interface to do this, you can have it automatically download the correct file for you.  The upgrade ISOs aren’t the smallest files, though, so I would encourage you to download yours ahead of time and have it ready.  If you’re curious, the patch file I downloaded for this was about 1.5 GB.  You don’t want to eat up your planned downtime waiting for an ISO.

Step 3:  Ensure you have a backout plan in case the update fails.  Take whole-VM backups of all relevant vCenter VMs (Platform Services Controller and vCenter), and take VM snapshots as well for faster rollback.

The remaining steps are repeated for external PSCs and vCenter servers.  Just ensure you update all external PSCs before you update vCenter server nodes.  Don’t forget to test PSC functionality prior to continuing with the vCenter servers.

Step 4: Mount the patch ISO file to the VM if you are doing this via the command line, or if you wish to use a manually downloaded ISO instead of having vCenter download it for you.

Straightforward step here.  If you don’t know how to do this, you probably should stop now. 🙂

Step 5: Initiate the upgrade command

Command line method

Enable SSH on the appliance via the VCSA DCUI, PuTTY into the VM, and run the following:

software-packages install --iso --acceptEulas

(That’s double hyphens.)

You can stage the install files first as well if you like, but I personally don’t see much advantage in doing so.

GUI

Using a web browser, log in to the vCenter Server Appliance Management Interface (https on port 5480).  If you want vCenter to download the patch ISO for you, ensure the repository is configured properly (probably the default option), then initiate a check for patches.  Select URL if you want vCenter to download the patch for you, or select Check CDROM if you already downloaded the ISO and mounted it.  Finally, click Install Updates.

Step 6: Monitor the install progress and follow the instructions.

Monitor the installation, and ensure that it succeeds.  It’s completed when you are back to the Command> prompt if you’re using the command line.  You should also see:

Packages upgraded successfully, Reboot is required to complete the installation.

If you are instructed to reboot the VCSA VM, do so using:

shutdown reboot -r "vCenter 6.0 Update <whatever version you're installing>"

If you’re updating with the GUI, you should see a Reboot option under Summary.

If you have errors, review the /var/log/vmware/applmgmt/software-packages.log file.

Step 7: Dismount the ISO

Again, simple stuff.

Step 8 – Verify functionality of vCenter and integrated products

Step 9 – Clear out VM snapshot

Obviously, do not do this until you’re sure you don’t need to roll back.  With that said, do NOT keep the snapshot indefinitely either, as it will degrade vCenter performance, use up additional space on your datastore, and increase the chance of data corruption the longer you keep it.

And there you have it!

Change Block Tracking issues with SRM

As may be obvious, I’ve been doing quite a bit of work lately with VMware Site Recovery Manager and storage-based replication, specifically EMC’s MirrorView.  I ran into another issue while testing with SRM 6 + ESXi 5.0 hosts.

During the project, we are updating vCenter from 5.0 to 6.0, SRM from 5.0 to 6.0, verifying everything works, and then proceeding with updating ESXi hosts.  We didn’t bother patching ESXi 5.0 hosts, since they would be updated to 6.0 soon enough.  We wanted to make sure SRM worked through vCenter before updating ESXi simply to ensure an easy rollback.

However, during failover testing, we ran into an issue where most VMs would not power on during isolated testing and failovers.  The error was as follows:

Error – Cannot open the disk ‘/vmfs/volumes/<VMFS GUID>/VMName/VMName.vmdk’ or one of the snapshot disks it depends on.

When you look into the events for an impacted VM, you would find the following:

“Could not open/create change tracking file”

We cleared the CBT files for all the VMs, forced replication, and tried again, and it worked.  We figured CBT had gotten corrupted.  But then Veeam ran its backups, and when we tried another isolated test, almost all the VMs couldn’t power on again.

I know ESXi 6 has been in the news lately for corruption in Change Block Tracking, but it’s far from the only version that’s suffered from an issue with CBT.  ESXi 5.0, 5.1, and 5.5 have had their issues, too.  In this case, the customer was running a version that needed a patch to fix CBT.  We remediated the hosts to patch them to current, reset CBT data yet again, allowed Veeam to backup the VMs, and tried an isolated test.  All VMs powered on successfully.

It’s important to note that Veeam really had nothing to do with this problem, and neither did MirrorView.  This was strictly an unpatched ESXi 5.0 issue.  So, if you run into this with any ESXi version using storage-based replication, I recommend patching the hosts to current, resetting CBT data, running another backup, making sure the storage has replicated the LUN after that point, and trying again.
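
If you want a quick before-and-after view of which VMs have CBT enabled, a simple PowerCLI one-liner like this should do it (assuming an existing Connect-VIServer session):

# List each VM and whether CBT is currently enabled
get-vm | select Name, @{Name='CBTEnabled'; Expression={$_.ExtensionData.Config.ChangeTrackingEnabled}}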

Adventures in SRM 6.0 and MirrorView

Recently, I set up SRM 6.0 with MirrorView storage-based replication.  It was quite the adventure.  The environment was using SRM 5.0 and MirrorView, and we recently upgraded it to vSphere 6.0 and SRM 6.0.  I wanted to get my findings down in case they help others setting this up.  When I ran into issues, it wasn’t easy to find other people doing this, as many VNX users have moved to RecoverPoint instead of MirrorView.

Version Support

First off, you might be wondering why I recently deployed SRM 6.0 instead of 6.1.  That’s an easy question to answer – currently, there is no support for MirrorView with SRM 6.1.  I’m posting this article in 11/2015, so that may change.  Until it does, you’ll need to go with SRM 6.0 if you want to use MirrorView.

Installation of Storage Replication Adapter

I’m assuming you already have installed SRM, and configured the pairings and what not.  At the very least, have SRM installed in both sites before you proceed.

Here’s where things got a little goofy.  First off, downloading the SRA is confusing.  If you go to VMware’s site to download SRAs, you’ll see two listings for the MirrorView SRA with different names, suggesting they work for different arrays, do something different, or are different components.

[Screenshot: the two MirrorView SRA downloads listed on VMware’s site]

As far as I can tell, they’re actually two slightly different versions of the same SRA.  Why are both on the site for download?  No idea.  So I went with the newer of the two.

You also need to download and install Navisphere CLI from EMC for the SRA to work.  There are a few gotchas on the install of this to be aware of. Install this first.

During installation, you need to ensure you check the box “Include Navisphere CLI in the system environment path.”

[Screenshot: Navisphere CLI installer with the system environment path option checked]

That’s listed in the release notes of the SRA, so that was easy to know.  You also need to select to not store credentials in a security file.

I originally told the installer to store credentials, thinking this would allow easier manual use of Navisphere CLI should the need arise, but that caused the SRA to fail authentication against the arrays.  Uninstalling and reinstalling Navisphere CLI without that option made the bad authentication messages go away.

Next, install the SRA itself, which is straightforward.  After the installation, you must reboot the SRM servers, or they will not detect that they have SRAs installed.  That takes care of the SRAs.

Configuring the SRAs

Once you have installed the SRAs, it’s time to configure the array pairs.  First, go into Site Recovery within the vSphere Web Client and click Array Based Replication.

[Screenshot: Array Based Replication in the vSphere Web Client]

Next, click Add Array Manager.

[Screenshot: Add Array Manager]

Assuming you’re adding arrays from two sites, click “Add a pair of array managers”.

[Screenshot: Add a pair of array managers]

Select the SRM Site location pair for the two arrays.

[Screenshot: selecting the SRM site location pair]

Select the SRA type of EMC VNX SRA.

[Screenshot: selecting the EMC VNX SRA type]

Enter the Display name, the management IPs of the array, filters for the mirrors or consistency groups if you are using MirrorView for multiple applications, and the username and password info for the array for each site.  Be sure to enter the correct array info for the indicated site.

[Screenshot: array manager connection details]

I always create a dedicated SRM service account within the array, so it’s easy to audit when SRM initiates actions on the storage array.

You’ll need to fill the information out for each site’s array.

Keep the array pair checked and click next.

[Screenshot: enabling the array pair]

Review the summary of actions and click Finish.

At this point, you can check the array in each site and see if it is aware of your mirrors being replicated.

[Screenshot: checking replicated devices on the array pair]

So far so good!  At this point, you should be able to create your protection groups and recovery plans, and start performing tests and recoveries with a test VM.

Problems

I began testing with a test consistency group within MirrorView, which contained one LUN storing a test VM.  Test mode to DR worked immediately.  Failover to the DR site failed, as first attempts often do in my experience with most storage-based replication deployments.  No problem; normally I simply launch the failover again and it works, and it did in this case.

With the VM then in the DR site, I performed an isolated test back to production, which worked flawlessly.  It was when I tried to fail back to production that I encountered a serious problem.  SRM reported that the LUN could not be promoted.  Within SRM, I was given only the option to try the failover again; the options to run cleanup or a test were grayed out.  Relaunching the failover gave the same result.  I tried rebooting both SRM servers and vCenter, running rediscovery of the SRAs, you name it.  I was stuck.

I decided to just manually clean up everything myself.  I promoted the mirror at the production site and had hosts in both sites rescan for storage.  The LUN became unavailable in the DR site, but in production, while the LUN itself was visible, the datastore wouldn’t mount.  Rebooting the ESXi server didn’t help.  I finally added it as a datastore, selecting not to resignature it.  The datastore mounted, but I found that it wouldn’t mount again after a host reboot.  Furthermore, SRM was reporting the MirrorView consistency group as stuck failing over, showing Failover in Progress.  I tried recreating the SRM protection group, re-adding the array pairs, and more, but nothing worked.

After messing with it for a while, checking MirrorView, the VNX, VMware, etc., I gave up and contacted EMC support, who promptly had me call VMware support, who then referred me back to EMC because it was clearly a problem with EMC’s SRA.

With EMC’s help, I was able to cleanup the mess SRM/SRA made.

  1. The Failover in Progress status reported by the SRA was due to the description fields on the MirrorView mirrors.  Clearing those and rescanning the SRAs fixed that problem.
  2. The test LUN not mounting was due to my not selecting to resignature the VMFS datastore when I added it back in.

At this point, we were back to square one, and I went through the whole gamut of tests again.  I got errors because the SRM placeholders were reporting as invalid; going to the protection group within SRM and issuing the command to recreate the SRM placeholders fixed this issue.

We repeated testing again.  This time, everything worked, even failback.  Why did it fail before?  Even EMC support had no answer.  I suspect it’s because, in my experience, the first failover attempt in a given direction in an SRM environment always seems to fail.  Unfortunately, this time it was very difficult to clean up.

vSphere 6.0 Change Block Tracking Patch released

Just a heads up, but VMware dropped the public release of the patch to resolve the Change Block Tracking problem in ESXi 6.0.  You can apply the patch using VMware Update Manager, or install it manually.

Logically, remember that you can’t just apply the patch and all is well.  You need to reset CBT data to “start fresh” because all changed blocks reported prior to the patch are still suspect.  Most backup vendors detail how you do this in VMware, but I wanted to share a few tips in this regard.

  1. Change Block Tracking can easily be disabled/enabled on powered-off VMs.  That’s not an issue.
  2. You can reset Change Block Tracking information on a running VM by disabling CBT on the VM, taking a snapshot of the VM, deleting the snapshot, and then re-enabling CBT.  This makes automating the reset very easy; Veeam has a PowerCLI script that can do this, for example, although it is clearly a use-at-your-own-risk affair.  A rough sketch of the idea follows below.
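
Here’s a minimal PowerCLI sketch of that reset approach.  It is not Veeam’s script; the VM name is just an example, and it assumes an existing Connect-VIServer session and a running VM:

# Reset CBT: disable it, create/delete a snapshot to commit the change, then re-enable and repeat
$vm = Get-VM -Name "TestVM"    # hypothetical VM name

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)
New-Snapshot -VM $vm -Name "CBT reset" | Remove-Snapshot -Confirm:$false

$spec.ChangeTrackingEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
New-Snapshot -VM $vm -Name "CBT reset" | Remove-Snapshot -Confirm:$false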

Finally, don’t forget to enable CBT on your backup jobs and/or VMs when you’re ready if that was disabled as a workaround.  You can do this using PowerShell if you’re using Veeam.

 

Change VMware MPIO policy via PowerCLI

This is one of those one-liners I always think I’ll never use again, but once again I found myself using it, this time to fix MPIO policies in a vSphere 5.0 environment plugged into a Nexsan storage array.  I’ve previously used it on EMC and LeftHand arrays when the default MPIO policy applied at ESXi installation time turned out not to be the recommended one, or, in the case of LeftHand, was simply wrong from the get-go.

get-vmhost | get-scsilun | where vendor -eq "NEXSAN" | set-scsilun  -MultipathPolicy "RoundRobin"

In this case, it was over 300 LUN objects (LUNs multiplied by the hosts accessing them), so that’s about 5 mouse clicks per object to fix via the GUI.  Translation: you REALLY want to use some kind of scripting to do this, and PowerCLI can do it in one line.
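
If you want to double-check that the change took, the same filter can list the current policy per LUN (the vendor string is whatever your array reports):

# Verify the multipath policy on each matching LUN
get-vmhost | get-scsilun | where vendor -eq "NEXSAN" | select VMHost, CanonicalName, MultipathPolicy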

You gotta love PowerShell!

Disable CBT on Veeam jobs via PowerShell

If you haven’t heard the not so great news, VMware has discovered a bug  in vSphere 6 with Change Block Tracking (CBT) that can cause your backups to be corrupt and therefore invalid.  Currently, they are recommending not to use CBT with vSphere 6 when backing up your VMs.

I was looking for an easy way to disable this on all jobs in Veeam quickly via PowerShell, but it’s not obvious how to do that, so I took some time to figure it out.  Here it is assuming the module is loaded in your PowerShell session.

$backupjobs = get-vbrjob | where jobtype -eq "Backup"
foreach ($job in $backupjobs){
    # Grab the job's options, turn off CBT use, and write the options back to the job
    $joboptions = $job | get-vbrjoboptions
    $joboptions.visourceoptions.UseChangeTracking = $false
    $job | set-vbrjoboptions -options $joboptions
}

Here’s how to enable it again:

$backupjobs = get-vbrjob | where jobtype -eq "Backup"
foreach ($job in $backupjobs){
    # Grab the job's options, turn CBT use back on, and write the options back to the job
    $joboptions = $job | get-vbrjoboptions
    $joboptions.visourceoptions.UseChangeTracking = $true
    # Uncomment the next line if the job's "enable CBT automatically" option also needs re-enabling (see note below)
    #$joboptions.visourceoptions.EnableChangeTracking = $true
    $job | set-vbrjoboptions -options $joboptions
}

Sorry it’s not pretty on the page, but I wanted to get this out ASAP to help anyone needing to do this quickly and effectively.

One thing to note: in the enable script, there’s a commented-out line.  Veeam jobs have an option to enable CBT within VMware if it is turned off on a VM, and that option gets disabled if you turn CBT off altogether in the job setup.  If you disabled CBT with my script, that option doesn’t get touched, so you don’t need to remove the # on that line.  If you disabled CBT manually in the job settings and want that option enabled again, take out the # before that line, and the script will enable it as well.
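
If you want to quickly confirm where each job stands, a small check using the same cmdlets can help (a sketch, assuming the Veeam snap-in is loaded):

# List each backup job and whether it is currently set to use CBT
get-vbrjob | where jobtype -eq "Backup" | foreach {
    $options = $_ | get-vbrjoboptions
    [pscustomobject]@{
        Job               = $_.Name
        UseChangeTracking = $options.visourceoptions.UseChangeTracking
    }
}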

Hope this helps!

Clarifying vSphere Fault Tolerance

I hear a lot of confusion about some of the new enhancements in vSphere 6, and one in particular is Fault Tolerance (FT).

In case you do not know what FT is, it’s a feature that basically (was supposed to) fit the need for a handful of your most critical VMs that High Availability (HA) didn’t protect well enough.  HA restarts a VM on another host if the ESXi physical host it was running on fails, or, with VM Monitoring enabled, restarts a VM that blue screened or locked up.  Note that the VM is down during the restart of the VM and the boot of the OS inside it.  FT effectively runs a second copy of the VM in lockstep on another host, so should the host the live VM runs on fail, the second copy immediately takes over on the other host with no downtime.

Please note that neither vSphere 6 nor previous versions protect against an application crash itself with Fault Tolerance, unless the application crashed due to a hardware failure.  FT effectively protects only against hardware-level failures, like a host failure.  There is no change there.  If you want protection from application failures, you should still look at application clustering and high-availability solutions, like Exchange DAGs, network load balancing, SQL clustering, etc.  On the flip side, I have personally seen many environments actually have MORE downtime because of application clustering solutions, especially when customers don’t know how to manage them properly, whereas FT is a breeze to manage.

The problem with FT in the past is that it had so many limitations.  The VM’s disks had to be eager zeroed thick provisioned, you could not vMotion the VM or the second copy, and more, but the biggest limitation was that the VM could only have 1 vCPU.  If you’re wondering how many critical apps need only 1 vCPU, the answer is pretty much zero.  Almost all need more, so FT became the coolest VMware feature nobody used.

That changes in vSphere 6.  You can use FT to protect VMs with up to 4 vCPUs.  They can be thin or thick provisioned.

FT-protected VMs can now be backed up with whole-VM backup products that utilize the VMware backup APIs, which is pretty much all of the products that back up whole VMs: Veeam, VMware Data Protection, etc.  This is a pretty big deal!

You can also now enable FT for a running VM on the fly without disrupting it, which is also really cool.  Maybe you have a Microsoft two-node cluster and one node gets corrupted; enable FT on the remaining node to provide extra protection until the second node is rebuilt!

Also, the architecture changed, which is both good and bad.  In the past, FT required all the VM’s disks to be on shared storage, and the active and passive VMs used the same virtual disk files, VM config files, etc.  This is no longer the case.  Now the storage is replicated as well, either to the same datastore or to different datastores, and those datastores can be on completely different storage arrays if you want.  On the downside, you need twice the storage for FT-protected VMs, but the good news is a storage failure may not take out both data sets and kill the VM, too!

In my opinion, these changes have finally made FT definitely something that should be considered and will be implemented far more commonly.

So while a lot of the restrictions were lifted, there are still some left, notably:

  • Limit of 4 vCPUs and 64 GB of RAM for an FT-protected VM.
  • Far more hardware is supported, but you still need hardware that is officially supported.
  • VM and the FT copy MUST be stored on VMFS volumes.  No NFS, VSAN, or VVOL stored VMs!
  • You cannot replicate the VM using vSphere Replication for a DR solution.
  • No storage DRS support for FT protected VMs
  • 10Gb networking is highly recommended.  This is the first resource that runs out when protecting VMs with FT.  So if you were thinking FT with the storage replication would be a good DR solution across sites, uhh, no.
  • Only 4 FT active or passive copies per host.

So, if you’re thinking about a vSphere solution for a customer, and you pretty much dismissed FT, consider it now.  And if you support environments with VMware, get ready to see more FT as vSphere 6 gets adopted!