vSphere Networking Clarifications

I get a lot of questions about the finer points of vSphere networking.  I wanted to provide a consolidated list of general recommendations and info about some of the options within vSphere networking as a quick reference.

Please keep in mind the recommendations below are broad generalizations.  You should not assume any recommendation below is the right choice for your environment without evaluating it first!

  • When using Virtual Distributed Switches (VDS) in vSphere networking configurations, what should distributed port group port binding be set to – static, dynamic, or ephemeral?
    • Short answer – Generally, use static for most workloads, and consider ephemeral for the infrastructure workloads vCenter depends on – vCenter itself, PSCs, domain controllers/DNS servers, etc.  (A pyVmomi sketch creating both binding types appears after this list.)
    • Long answer – Dynamic binding was deprecated in vSphere 5.0.  Don’t use it.  Use static or ephemeral.  Static has a few advantages: a VM always stays on the same virtual switch port, even while powered off, so statistics are easier to gather for a particular virtual NIC.  Static does consume more VDS ports, since powered-off VMs still hold theirs, but a VDS supports so many ports that exhausting them is rarely a concern.  Static also generally results in less load on vCenter and the ESXi hosts, as ports aren’t constantly allocated and deallocated.  Ephemeral is essentially no binding at all.  It reduces the number of VDS ports consumed, since powered-off VMs don’t hold ports, but it slows down operations within the VDS, because ports are allocated and deallocated as VMs power off and on.  The one big advantage ephemeral has is that it does not require vCenter to be available for a VM to use its port; static sometimes does, because vCenter is the control plane for the VDS and is primarily responsible for port bindings.  Hence the recommendation that vCenter and related workloads use ephemeral port bindings, to avoid chicken-and-egg scenarios.
  • When using Virtual Distributed Switches (VDS) in vSphere networking configurations, what should distributed port group port allocations be set to – Elastic or Static?
    • Generally, keep the default of 8 ports and set allocation to Elastic.  This keeps the number of unused virtual switch ports to a minimum while still allowing the port group to scale up and down as needed.  (See the port allocation sketch after this list.)
  • What load balancing should be used in vSphere networking – Virtual Port ID, MAC hash, IP Hash, LACP, or Load Based Teaming?
    • Short answer – there’s no right answer for everyone.  Read the long answer.
    • Long answer – There are MANY MANY considerations when selecting a load balancing mode.  I want to throw out some caveats first, and then give some general rules of thumb.  Take the caveats into account before reading my general recommendations!  One or more of them might force you in a specific direction or rule out some of the options, so they should be considered first and foremost.
      • Caveats
        • You can only use LACP and Load Based Teaming if you’re using VDS.
        • If you want to use port mirroring for any reason, LACP doesn’t support it.
        • If you’re using Host Profiles to configure host networking, LACP can’t be configured using them, an important consideration when using stateless autodeploy.
        • The only two load balancing modes that can in any way grant a single vNIC more bandwidth than a single physical NIC in the team are LACP and IP Hash.
        • Both LACP and IP Hash require special switch port configurations.
        • LACP does not support beacon probing.
        • The presence of NSX drastically impacts which modes you can, can’t, should, and shouldn’t use.
          • Load Based Teaming is not supported with logical switching or edge gateways.
      • General recommendations
        • If you are using standard switches, and no VM’s vNIC requires more bandwidth than a single physical NIC in the team, use Virtual Port ID.  It keeps CPU utilization the lowest and requires no special switch configuration, making it generally less problematic than IP Hash.
        • If you’re using Virtual Distributed Switches, and no VM’s vNIC requires more bandwidth than a single physical NIC in the team, use Load Based Teaming.  While it costs more CPU than Virtual Port ID, it provides a worthwhile improvement in network performance via better load balancing, and it also requires no special switch configuration, making it generally less problematic than IP Hash and LACP.  (See the Load Based Teaming sketch after this list.)
  • Should Network I/O Control (NIOC) be used in vSphere networking?
    • If you are converging host management, VM, vMotion, Fault Tolerance, and/or storage traffic on the same physical NICs, NIOC should be enabled.  NIOC helps ensure that no traffic type can overwhelm the others.  This is especially important when IP storage traffic shares physical links with other traffic, regardless of whether it’s iSCSI, NFS, VSAN, or other hyperconverged traffic such as Nutanix.  (See the NIOC sketch after this list.)
    • If you aren’t converging different types of traffic on the same physical NICs, there’s still little reason not to enable NIOC, assuming you’re using Virtual Distributed Switches.
  • Beacon Probing looks like a better failover detection mechanism in vSphere networking.  Should I use it?
    • Generally speaking, no.  It requires a minimum of three uplinks in the team to be useful.  If your physical switches support it, Link State Tracking is generally a better solution.  (See the beacon probing sketch after this list.)
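
Below are a few pyVmomi (the Python vSphere SDK) sketches for the recommendations above.  These are minimal illustrations, not production code – the vCenter address, credentials, switch name (dvSwitch01), and port group names are all hypothetical placeholders, and you should verify the calls against your vSphere version.  First, creating one static-binding and one ephemeral port group, per the port binding recommendation:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter and credentials - replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
content = si.RetrieveContent()

# Locate the VDS by name (dvSwitch01 is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'dvSwitch01')
view.Destroy()

# Static (early) binding - the general recommendation for most workloads.
static_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name='pg-vm-traffic',
    type='earlyBinding')  # shown as "static binding" in the vSphere Client

# Ephemeral binding - for vCenter, PSCs, DCs/DNS, and similar infrastructure,
# so the port group still works when vCenter itself is down.
ephemeral_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name='pg-mgmt-infra',
    type='ephemeral')

for spec in (static_spec, ephemeral_spec):
    dvs.CreateDVPortgroup_Task(spec)
```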
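
Elastic port allocation maps to the autoExpand flag on the port group config spec.  Continuing from the sketch above (same connection, same hypothetical names), this sets the default 8 ports with elastic allocation on an existing static port group:

```python
from pyVmomi import vim

# 'dvs' comes from the previous sketch.
pg = next(p for p in dvs.portgroup if p.name == 'pg-vm-traffic')

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,  # required when reconfiguring
    numPorts=8,        # the default starting size
    autoExpand=True)   # "Elastic" - the port count grows and shrinks as needed
pg.ReconfigureDVPortgroup_Task(spec)
```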
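
Load Based Teaming corresponds to the loadbalance_loadbased policy string in the vSphere API (Virtual Port ID is loadbalance_srcid, MAC hash is loadbalance_srcmac, and IP Hash is loadbalance_ip).  A sketch applying it to the same hypothetical port group:

```python
from pyVmomi import vim

# 'dvs' comes from the first sketch.
pg = next(p for p in dvs.portgroup if p.name == 'pg-vm-traffic')

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value='loadbalance_loadbased'))  # Load Based Teaming
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=port_config)
pg.ReconfigureDVPortgroup_Task(spec)
```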
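
Enabling NIOC is a single call against the VDS itself:

```python
# 'dvs' comes from the first sketch.  Enables Network I/O Control so shares
# and limits can be applied per traffic type (vMotion, storage, VM, etc.).
dvs.EnableNetworkResourceManagement(enable=True)
```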
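
Finally, beacon probing is controlled by the checkBeacon flag in the teaming policy's failure criteria.  Per the recommendation above, this sketch explicitly leaves failover detection at link status only:

```python
from pyVmomi import vim

# 'dvs' comes from the first sketch.
pg = next(p for p in dvs.portgroup if p.name == 'pg-vm-traffic')

criteria = vim.dvs.VmwareDistributedVirtualSwitch.FailureCriteria(
    checkBeacon=vim.BoolPolicy(value=False))  # link status only, no beacon probing
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    failureCriteria=criteria)

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming))
pg.ReconfigureDVPortgroup_Task(spec)
```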

I’ll add more as I get more questions about vSphere networking.