
Updating vSphere 6 vCenter Server Appliance

If you skipped the first release of vCenter 6 and went straight to Update 1, note that a newer build of Update 1 has since been released with security fixes, among other things.  Many people are opting for the appliance version of vCenter for the first time, and patching it isn't like patching the Windows version, so I wanted to document my experience installing updates on the vSphere 6 vCenter Server Appliance.

First off, a friendly reminder: RTFM with this kind of thing.  I was screwing around in my lab, so I didn't, and I immediately ran into issues, as you'll see, but that was my fault.

Step 1:  Check interoperability with all vSphere components, third party products, and note upgrade paths.

If you are using any VMware products that interact with vCenter, such as Horizon View, vCenter Operations Manager, or Site Recovery Manager, or third-party products such as backup tools (Veeam, etc.) and management tools (VMTurbo, etc.), ensure the versions you are running support the version of vCenter you are about to upgrade to.  If they don't, map out the proper upgrade order and the new versions you need to install so that all of your products and services keep working.  Don't forget to check support for your external database if you use one, too.

I’m assuming you’ve taken care of all this already.

Step 2: Download all the files you'll need.

At a minimum, you’ll need to download the patch file from VMware.  This is NOT the full install version of the appliance!  You need to go to:

https://my.vmware.com/group/vmware/patch

Filter for patches for vCenter, the major version of vCenter, and download the applicable patch file for your deployed version of the appliance.

I didn’t RTFM, so I downloaded the VCSA full installable file ISO, and got greeted with the following:

Command> software-packages stage --iso --acceptEulas
[2016-01-09T19:31:01.009] : Staging software update packages from ISO
[2016-01-09T19:31:01.009] : ISO unmounted successfully
[2016-01-09T19:31:01.009] : CD drives do not have valid patch iso.
[2016-01-09T19:31:01.009] : Staging process failed.

Get the patch file!

If you use the Appliance Management Interface to do this, you can have it download the correct file for you automatically.  The upgrade ISOs aren't small, though, so I would encourage you to download the file ahead of time and have it ready; the patch file I downloaded for this was about 1.5 GB.  You don't want to eat up your planned downtime waiting for an ISO.

Step 3:  Ensure you have a backout plan in case the upgrade fails.  Take whole-VM backups of all relevant vCenter VMs (Platform Services Controller and vCenter), and take a VM snapshot of each as well for faster rollback.
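If you'd rather script that safety net, here's a minimal PowerCLI sketch; the server and VM names are placeholders for your environment, and it assumes you already have PowerCLI connected with rights to snapshot the VMs:

# Snapshot the PSC and vCenter appliance VMs before patching (names are examples)
Connect-VIServer -Server "vcenter.domain.com"
Get-VM -Name "psc01", "vcsa01" | New-Snapshot -Name "Pre-VCSA-patch" -Description "Taken before applying the vCenter appliance patch"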

The remaining steps are repeated for external PSCs and vCenter Server nodes.  Just ensure you update all external PSCs before you update the vCenter Server nodes, and don't forget to test PSC functionality before continuing with the vCenter Servers.

Step 4: Mount the patch ISO file in the VM if you are doing this via the command line, or if you wish to use a manually downloaded ISO instead of having vCenter download it for you.

Straightforward step here.  If you don’t know how to do this, you probably should stop now. 🙂
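That said, if you'd like to attach the ISO with PowerCLI instead of the vSphere client, something along these lines works; the VM name and datastore path are examples only:

# Attach the downloaded patch ISO to the VCSA VM's CD drive (name and path are examples)
Get-VM -Name "vcsa01" | Get-CDDrive |
    Set-CDDrive -IsoPath "[datastore1] ISO/VCSA-6.0-patch.iso" -Connected:$true -Confirm:$false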

Step 5: Initiate the upgrade command

Command line method

Enable SSH on the appliance via the VCSA DCUI, PuTTY into the VM, and run the following:

software-packages install --iso --acceptEulas

(That’s double hyphens.)

You can stage the install files ahead of time as well if you like, but I personally don't see much advantage in doing so.

GUI method

Using a web browser, log in to the vCenter Server Appliance Management Interface (port 5480 over HTTPS).  If you want vCenter to download the patch ISO for you, ensure the repository is configured properly (probably the "Use default" option).  Then initiate a check for patches: select URL if you want vCenter to download the patch for you, or select Check CDROM if you already downloaded and mounted the ISO.  Finally, click Install Updates.

Step 6: Monitor the install progress and follow the instructions.

Monitor the installation, and ensure that it succeeds.  If you're using the command line, it's complete when you are back at the Command> prompt.  You should also see:

Packages upgraded successfully, Reboot is required to complete the installation.

Reboot the VCSA VM if you are instructed to do so using:

shutdown reboot -r "vCenter 6.0 Update <whatever version you're installing>"

If you’re updating with the GUI, you should see a Reboot option under Summary.

If you have errors, review the /var/log/vmware/applmgmt/software-packages.log file.

Step 7: Dismount the ISO

Again, simple stuff.

Step 8: Verify functionality of vCenter and integrated products

Step 9: Clear out the VM snapshot

Obviously, do not do this until you're sure you don't need to roll back.  With that said, do NOT keep the snapshot indefinitely either, as it will degrade vCenter performance, use up additional space on your datastore, and increase the chance of data corruption the longer you wait.
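If you took the snapshots with PowerCLI as sketched in Step 3, cleanup is just as quick (again, the names are placeholders):

# Remove the pre-patch snapshots once you're sure you won't need to roll back
Get-VM -Name "psc01", "vcsa01" | Get-Snapshot -Name "Pre-VCSA-patch" | Remove-Snapshot -Confirm:$false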

And there you have it!

vSphere 6.0 Change Block Tracking Patch released

Just a heads up: VMware has dropped the public release of the patch that resolves the Change Block Tracking problem in ESXi 6.0.  You can apply the patch using VMware Update Manager, or install it manually.

Logically, remember that you can't just apply the patch and assume all is well.  You need to reset CBT data to start fresh, because all changed blocks reported prior to the patch are still suspect.  Most backup vendors detail how to do this in VMware, but I wanted to share a few tips in this regard.

  1. Change Block Tracking can easily be disabled/enabled on Powered Off VMs.  That’s not an issue.
  2. You can reset Change Block Tracking information on a VM by disabling CBT on the VM, taking a snapshot of the VM, deleting the snapshot, and then re-enabling CBT.  This makes automating the reset very easy; see the sketch after this list.  Veeam has a PowerCLI script that can do this, for example, although it is clearly a use-at-your-own-risk affair.
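For illustration only (this isn't Veeam's script), a bare-bones PowerCLI sketch of that disable/snapshot/delete/re-enable cycle might look like the following.  The VM name is a placeholder, and as always with CBT surgery, test it on something unimportant first:

# Reset CBT on a VM: disable, take and delete a snapshot, then re-enable
$vm = Get-VM -Name "testvm01"

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)            # turn CBT off

# The snapshot create/delete cycle makes the change take effect and clears the CBT data
New-Snapshot -VM $vm -Name "CBT reset" | Remove-Snapshot -Confirm:$false

$spec.ChangeTrackingEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)            # turn CBT back on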

Finally, don't forget to re-enable CBT on your backup jobs and/or VMs when you're ready, if it was disabled as a workaround.  You can do this using PowerShell if you're using Veeam.

 

Disable CBT on Veeam jobs via PowerShell

If you haven't heard the not-so-great news, VMware has discovered a bug in vSphere 6 with Change Block Tracking (CBT) that can cause your backups to be corrupt and therefore invalid.  Currently, they are recommending not to use CBT with vSphere 6 when backing up your VMs.

I was looking for an easy way to disable this on all jobs in Veeam quickly via PowerShell, but it's not obvious how to do that, so I took some time to figure it out.  Here it is, assuming the Veeam module is loaded in your PowerShell session.

# Disable CBT (UseChangeTracking) on all Veeam backup jobs
$backupjobs = Get-VBRJob | Where-Object JobType -eq "Backup"
foreach ($job in $backupjobs) {
    $joboptions = $job | Get-VBRJobOptions
    $joboptions.ViSourceOptions.UseChangeTracking = $false
    $job | Set-VBRJobOptions -Options $joboptions
}

Here's how to enable it again:

# Re-enable CBT (UseChangeTracking) on all Veeam backup jobs
$backupjobs = Get-VBRJob | Where-Object JobType -eq "Backup"
foreach ($job in $backupjobs) {
    $joboptions = $job | Get-VBRJobOptions
    $joboptions.ViSourceOptions.UseChangeTracking = $true
    # Uncomment the next line if you also want the job to re-enable CBT on VMs where it's off
    #$joboptions.ViSourceOptions.EnableChangeTracking = $true
    $job | Set-VBRJobOptions -Options $joboptions
}

Sorry it’s not pretty on the page, but I wanted to get this out ASAP to help anyone needing to do this quickly and effectively.

One thing to note about the enable script: it contains a commented-out line.  Within a job's setup in Veeam, if you turn CBT off altogether, the separate option to enable CBT within VMware when it is turned off also gets disabled.  My disable script doesn't touch that option, so if you only used the script, you don't need to remove the # on that line.  If you want that option enabled again, take out the # before that line, and the script will enable it again as well.
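If you want to confirm where your jobs stand before or after running either script, a quick check like this (same cmdlets, same use-at-your-own-risk caveat) lists each backup job with its current CBT settings:

# List each backup job and its current CBT-related options
Get-VBRJob | Where-Object JobType -eq "Backup" | ForEach-Object {
    $opts = $_ | Get-VBRJobOptions
    [pscustomobject]@{
        Job                  = $_.Name
        UseChangeTracking    = $opts.ViSourceOptions.UseChangeTracking
        EnableChangeTracking = $opts.ViSourceOptions.EnableChangeTracking
    }
}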

Hope this helps!

vSphere 6 – Certificate Management – Part 2

Previously, I posted about certificate management in vSphere 6, which simplifies the process, provides several ways to get the certificates in use trusted, and offers flexibility over what issues the certificates for vSphere's internal functionality as well as its client-facing functionality.

One of the simplest ways to get your clients to trust the SSL certificates used by vSphere 6 is to use the CA functionality built into vCenter 6 and configure your clients to trust it as a root CA, which is the focus of this blog post.

Basically, to do this, you need to obtain the root certificates of your vCenter servers and then install them into your clients' Trusted Root CA stores.  Thankfully, this is a relatively straightforward and easy process.

Obtaining Root CA files for vCenter

Obtaining the root CA files for vCenter is very easy.  Simply connect to the FQDN of your vCenter server using HTTPS.  Do not add anything after it, as you might when connecting to the vSphere Web Client.  For example, you would use:

https://vcentersvrname.domain.com

Once connected, you will see a link to download the vCenter certificates.

When you download this file, it will be a zip file containing two certificate files for your vCenter server.  If your vCenter server is part of a linked mode installation, you will have two certificate files for every vCenter in the linked mode instance.  The files with a .0 extension are the root CA files you need to import.  In my case, the zip file came from a two vCenter server linked mode installation.

Extract the contents of the zip file.  Next, rename the .0 files with a .cer extension.  This will allow the files to be easily installed on a Windows machine.  You can then open them to check out their properties if you like.

Installing Root CA file(s)

If you’re familiar with Windows machines, this is pretty straight forward.  You would either import this file into each machine manually, or you can use a Group Policy Object, import the file(s) into it, and refresh your GPO.

That's it!  Pretty easy to do!  At the very least, you should do this instead of blindly accepting the exception for the untrusted certificate every time, because we all know we aren't checking the thumbprints of those certs to make sure we're connecting to the same server, and it also gets rid of the warnings if you're not creating a permanent exception.

Routing vMotion Traffic in vSphere 6

One of the new features that is officially supported and exposed in the UI in vSphere 6 is routed vMotion traffic.  This blog post covers the use cases, why this was difficult prior to vSphere 6, and how vSphere 6 overcomes it.

Use Cases

So why would you want to route vMotion traffic, anyway?  Truthfully, in the overwhelming majority of cases, you wouldn't and shouldn't, for various reasons.

Why?  Remember a few facts about vMotion traffic.

Additional latency and reduced throughput don't just slow a vMotion down; they compound.  When a vMotion operation gets underway, the running contents of a VM's memory are copied from one host to another while changes to the VM's working set are tracked.  Iterative copy passes keep sending the changes until the remaining delta is small enough to be copied very quickly.  Therefore, the longer a vMotion takes, the more changes accumulate in the working set, and the more changes accumulate, the longer the operation takes, which invites even more changes during the operation.  Adding an unnecessary hop in the network can only hurt vMotion performance.

So if you are within the same datacenter, routing vMotion traffic is ill advised at best.  About the only situation where it might make sense is a very large datacenter with hundreds of hosts, where too many broadcasts in a single LAN segment degrade performance, but you only infrequently need to vMotion VMs between hosts in different clusters.  You would need A LOT of ESXi hosts that may need to vMotion between each other before that would make sense.

So when would routed vMotion traffic make sense?  vMotioning VMs between datacenters!  Sure, you could stretch the vMotion Layer 2 network between the datacenters with OTV instead, but at that point you are choosing the lesser of two evils: vMotioning through a router between hosts in different datacenters, or accepting the inherent perils of stretching an L2 network across sites.  The WAN link takes a far bigger toll than an extra hop in the network, so there's no question the better choice is to route the vMotion traffic rather than stretch the vMotion network between sites.

This is important because cross-vCenter vMotion is now possible too, and VMware has enabled additional network portability via other technologies such as NSX, so the need to do this is far greater than in the past, when about the only scenario where routing vMotion traffic made sense was stretched metro storage clusters and the like.

Why was this a problem in the past?

If you've never done stretched metro storage clusters, this may never have occurred to you, because there was pretty much never a need to route any VMkernel port group traffic other than host management traffic.  The fundamental problem was that ESXi had a single TCP/IP stack with one default gateway.  If you followed best practices, you would create multiple VMkernel port groups to segregate iSCSI, NFS, vMotion, Fault Tolerance, and host management traffic, each in its own VLAN.  You would configure the host's default gateway as an IP in the host management subnet, because you probably shouldn't route any of that other traffic.  Well, now we need to.  Your only option was to create static routes via the command line on every single host.  As workload mobility increases with vSphere 6 cross-vCenter vMotion capabilities, NSX, and vCloud Air, that just isn't a practical solution.

How does VMware accomplish this in vSphere 6?

Very simple, at least conceptually.  ESXi 6 can have multiple independent TCP/IP stacks.  By default, separate TCP/IP stacks already exist for vMotion and for other traffic, and each can be assigned its own default gateway.

Simple to manage and configure!  Just configure the stacks appropriately, and ensure your VMkernel port groups are set to use the appropriate stack: vMotion VMkernel ports should use the vMotion stack, while pretty much everything else should use the default stack.
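If you prefer PowerCLI to the Web Client for this, a rough sketch along these lines drives esxcli through Get-EsxCli to put a new VMkernel interface on the vMotion stack and set that stack's default gateway.  The host name, vmk number, port group, and IP addresses are placeholders, and the -V2 parameter requires a reasonably recent PowerCLI release, so treat this as a starting point rather than gospel:

# Get an esxcli handle for the host (names and addresses are examples)
$vmhost = Get-VMHost -Name "esxi01.domain.com"
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Create a VMkernel interface on the dedicated vMotion TCP/IP stack
$esxcli.network.ip.interface.add.Invoke(@{
    interfacename = "vmk2"
    portgroupname = "vMotion"
    netstack      = "vmotion"
})

# Assign it an address, then set the vMotion stack's own default gateway
$esxcli.network.ip.interface.ipv4.set.Invoke(@{
    interfacename = "vmk2"
    ipv4          = "10.10.50.21"
    netmask       = "255.255.255.0"
    type          = "static"
})
$esxcli.network.ip.route.ipv4.add.Invoke(@{
    netstack = "vmotion"
    gateway  = "10.10.50.1"
    network  = "default"
})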

How cool is that?

vSphere 6 – Certificate Management Intro

I like VMware and their core products like vCenter, ESXi, etc.  One thing I really admire is the general quality of these products: how reliable they are, how well they work, and how VMware keeps addressing their pain points to make them extremely usable.  They just work.

However, certificate management has been a big pain point of the core vSphere product line.  There's just no way around it.  And certificates are important: you want to be sure the systems you're connecting to when you manage them really are those systems.  For many customers I've worked with, because of the pain of certificate management within vSphere, because some customers are too small and don't have an on-premises Certificate Authority, and to make sure the product keeps working, the default self-signed certificates generated by vSphere often never get replaced.

That’s obviously less than ideal.  The good news is certificate management has been completely revamped in vSphere 6.  It’s far easier to replace certificates if you like, and you have some flexibility as to how you go about this.

Three Models of Certificate Management

Now, you have several choices for managing vSphere certificates. This post will outline them.  Later, I’ll show you how you can implement each model.  Much of this information comes from a VMworld session I attended called “Certificate Management for Mere Mortals.”  If you have access to the session video, I would highly encourage viewing it!

Before we get into the models, be aware that certificates basically fall into one of two categories: certificates that facilitate client connections from users and admins, and certificates that allow the different product components to talk to each other.  Also, vCenter has Certificate Authority functionality built into it.  That's somewhat obvious, since you already had self-signed certificates, but this functionality has been expanded; for example, you can have vCenter act as a subordinate CA of your enterprise PKI, too!

Effectively, this means you have some questions up front you want to answer:

  1. Are you cool with vCenter acting as a certificate authority at all?  The biggest reason to use vCenter's CA is that it makes certificate management easier, but your security guidelines may not allow it.
  2. Assuming you're OK with vCenter generating certificates, are you cool with it being a root certificate authority?  If not, you could make it a subordinate CA.
  3. For each certificate, which certificate authority should generate it?  Maybe your security requirement that the internal PKI must be used only applies to certificates visible to client connections, for example.

From these questions, a few models for certificate management typically emerge.  You effectively have four, which are combinations of whether vCenter acts as a certificate authority at all, and which certificates it generates.

Model 1: Let vCenter do it all!

This model is pretty straightforward.  vCenter acts as the certificate authority for your vSphere environment, and it generates all the certificates for all the things!  This can be attractive for several reasons.

  1. It's by far the easiest to implement.  vCenter generates pretty much all of your certificates for you, and installs them.
  2. It’ll definitely work.  No worries about generating the wrong certificate.
  3. If you don't have an internal CA, you're covered!  vCenter is now your PKI for vSphere.  Sweet!  You can even export vCenter's root CA certificate and import it into your clients using Active Directory Group Policy or other technologies, so client machines automatically trust these certificates.  Note that it is unsupported to have vCenter generate certificates for anything other than vSphere components.

Model 2: Let vCenter do it all as a subordinate CA to your enterprise PKI

This is very similar to the model above.  The only difference is that instead of vCenter being the root CA, you make vCenter a subordinate CA of your enterprise PKI.  This allows your vCenter server to generate certificates that client machines automatically trust, while still ensuring that certificates are easily generated and installed properly.

However, it is a bit more involved than the first model, since you must create a certificate signing request (CSR) in vCenter, submit it to your enterprise PKI, and then install the issued certificate in vCenter manually.

Model 3: Make your enterprise PKI issue all the certificates

Arguably the most secure if your enterprise PKI is properly secured, this model is pretty self-explanatory.  You don't use any of the certificate functionality within vCenter.  Instead, you must manually generate certificate requests for all vCenter components, ESXi servers, etc., submit them to your enterprise PKI, and install all of the resulting certificates yourself.

While this may be the most secure way to go about certificate management, it is by far the most laborious to implement, and it is the solution most likely to be problematic.  You have to ensure your PKI is configured to issue certificates of the correct type with the correct properties, you have to install the right certificates on the right components, and so on.  It's all on you to get everything right!

Model 4: Mix and match!  (SAY WHAT?!?!?)

When I first heard this discussed in the session, my inner security conscience immediately reacted with, "This sounds like a REALLY bad idea!!!"

But as I listened, it actually makes quite a bit of sense when done properly.  You can mix and match which certificates are and are not generated by the CA functionality within vCenter.  The hybrid model that makes sense (a hybrid solution doesn't make sense for everyone!) is to let vCenter handle certificate generation for everything that facilitates vSphere component-to-component communication, and to use Model 1, 2, or 3 for the certificates that face client connections.  If this meets your security requirements, you get the best of both worlds: certificates your clients automatically trust, issued by your internal PKI and thereby (potentially) more secure where clients see them, plus ease of management and better reliability for the internal vSphere component certificates that clients never see.

Which should you go with?

I hate using the universal consultant answer, but I have to.  It depends.  If you don’t have an internal PKI, go with Model 1.

If you have an internal PKI only because you needed it for something else, and you want your clients to easily trust vSphere connections, either go with Model 1 and import vCenter's root CA into your client machines, or go with Model 2.  Which one?  If you don't consider yourself really good at PKI management, or if you don't need many machines to be able to connect to vSphere components, probably Model 1.  The more clients that need to connect, the more that leans you toward Model 2.

Do you have security requirements that prevent you from using vCenter’s PKI capabilities altogether?  You have no choice, go with Model 3.

For people who think they need to go with Model 3, I would generally suggest looking at Model 4's hybrid approach instead.  Unless you absolutely have to go with Model 3, go with Model 4.

Hope this helps!