Tag Archives: PowerShell

Configure Dump Collector with PowerCLI in vSphere 6

I had a script to configure Dump Collector settings that I used in previous versions of vSphere.  If you look around the web, you’ll find similar PowerCLI snippets to configure Dump Collector.
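
They generally look something like this – a sketch of the typical pre-vSphere 6 snippet, with an example IP:

$vcenterip = '192.168.1.10'
foreach($vmhost in Get-VMHost){
	$esxcli = Get-EsxCli -VMHost $vmhost.Name
	# vSphere 5.x-era signature: interface, server IP, port - no IPv6-related parameter yet
	$esxcli.system.coredump.network.set($null,"vmk0",$vcenterip,6500)
	$esxcli.system.coredump.network.set($true)
}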

If you use that snippet in vSphere 6, it doesn’t work.  You’ll get the following error:

Message: Cannot set 2 server ip parameters.;
InnerText: Cannot set 2 server ip parameters.EsxCLI.CLIFault.summary
At line:4 char:1

This is because ESXCLI now has a parameter for whether to use IPv6, so when using Get-EsxCli, invoking the set method requires an additional value.  Remember, esxcli is not intuitive in that “enabled” properties are either true or null, so don’t use $false.

The revised code should now be:

$vcenterip = '192.168.1.10'
foreach($vmhost in Get-VMHost){
	$esxcli = Get-EsxCli -VMHost $vmhost.Name
	# Note the extra $null for the new IPv6-related parameter
	$esxcli.system.coredump.network.set($null,"vmk0",$null,$vcenterip,6500)
	# Enable it - remember, true or $null only, never $false
	$esxcli.system.coredump.network.set($true)
	$esxcli.system.coredump.network.get()
}

Here’s something else not commonly found on the internet – can you test the ESXi netdump configuration?  Yep!

foreach($vmhost in Get-VMHost){
	$esxcli = Get-EsxCli -VMHost $vmhost.Name
	# Use $() so the host's Name property expands inside the string
	Write-Host "Checking dump collector on host $($vmhost.Name)"
	$esxcli.system.coredump.network.check()
}

And there you have it!

VMware network test commands

I recently ran into an issue with vSphere Replication that involved network connectivity (probably a future post), and I quickly realized that VMware network test commands are not consistent across all their products, so this could be confusing for many people.  I’ll update this post later as I get the commands for other products, but this may help someone looking for how to do VMware network testing and troubleshooting.

ESXi

ESXi has two helpful commands.  For basic connectivity tests, vmkping is awesome because it’s simple to use and lets you specify which VMkernel port you want to test.  Sure, you could use ping, but you can’t specify the vmk interface with it.

To ping 192.168.1.1 through your Management port group (assuming the default, which uses vmk0), it’s simply:

vmkping 192.168.1.1 -I vmk0

Another good use is validating jumbo frames, as you can specify the packet size and disable packet fragmentation.  To conduct the same test for jumbo frames, use a payload size of 8972 (9000 minus 28 bytes of ICMP/IP header overhead) and ensure the packet doesn’t get fragmented:

vmkping 192.168.1.1 -I vmk0 -s 8972 -d

For testing connectivity to a specific port, ESXi does include netcat, aka the nc command.  To test TCP port 80 on destination 192.168.1.1:

nc -z 192.168.1.1 80

You can specify UDP mode using -u as well.  Note that at least in my experience, -s <source IP> does NOT work, so I don’t believe it’s possible to direct netcat through a specific VMkernel port.  When I tried forcing it through an IP that shouldn’t have had connectivity, the connection was still made when it shouldn’t have been.
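
For example, a UDP test against a hypothetical syslog server on port 514 would look like this (keep in mind UDP results are less reliable, since UDP is connectionless):

nc -z -u 192.168.1.1 514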

Any VMware Product Running on Windows 2012 or Higher (vCenter, SRM)

Everybody knows ping.  I’m not gonna go over that.  But did you know that PowerShell has a ping cmdlet?  This is useful for documenting results with Export-Csv, and for scripting lots of ping tests.

To ping 192.168.1.1:

test-connection 192.168.1.1
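
For example, to document the results of a batch of pings to a CSV file (the target IPs and path here are just examples):

test-connection 192.168.1.1,192.168.1.2 -count 2 | export-csv c:\temp\pingresults.csv -NoTypeInformation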

Another handy trick is that you can remotely have multiple Windows machines ping the same computer and/or specify multiple targets.  For example, if I want server1 and server2 to each ping 192.168.1.1 and 192.168.1.2:

test-connection -Source server1,server2 -ComputerName 192.168.1.1,192.168.1.2

PowerShell also has a cmdlet to test network port connectivity.  To test if the local machine can connect to 192.168.1.1 on TCP port 80:

test-netconnection -computername 192.168.1.1 -InformationLevel detailed -port 80

Unfortunately, there isn’t a handy -source parameter, but you could use PowerShell remoting to run this command on multiple remote computers, too.
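
For example, something like this sketch should work if PowerShell remoting is enabled (the server names are placeholders):

invoke-command -computername server1,server2 -scriptblock {test-netconnection -computername 192.168.1.1 -port 80}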

VMware vCenter Server Appliance

For pinging, there’s the ping command.  That’s easy enough.

If you try to use netcat for port testing, it isn’t there by default.  On version 6, you have to run the following to temporarily install it:

/etc/vmware/gss-support/install.sh

Rebooting the VCSA removes it.

You can also use curl if you’d rather not do that:

curl -v telnet://192.168.1.1:80

vSphere Replication Appliance

For pinging, there’s the ping command.  No surprises.

For network port testing, again, netcat isn’t installed, nor is there a supported way to install it to my knowledge.  Instead, use the curl command:

curl -v telnet://192.168.1.1:80

Keep checking back, as I add more.

Using PowerShell when there isn’t PowerShell support

I know many of us work on lots of different technologies, many of which don’t have native PowerShell cmdlets or anything like that.  Sometimes it’s DOS; sometimes it’s Telnet/SSHing into a command line where you’ve got to run individual command strings to fix a bunch of individual objects.  I know many of you end up hacking stuff together using Excel or other tools to assemble a repeated command to fix multiple objects, create rules, or whatever, like…

First part of command object1 second part of command

First part of command object2 second part of command

And you’ve got a list of all the objects you have to do this on.  This can be painful.

Let me give you an example…

I was working on an issue with an old version of EMC RecoverPoint, which has no PowerShell integration.

Basically, the customer masked some LUNs to VMAX front-end ports that aren’t hooked up, and RecoverPoint is barking because it can’t access those ports.  So the customer has to unmap the front-end ports and unmask the LUNs.  I know for many of you, it’s a gobbledygook of tech you don’t work with.  In the end, the specific technology doesn’t matter.

RecoverPoint reports all the volumes that are the problem, like this:

Devices: 2B3B,277F,83D8,2B34,2250,21DD,2774,102A,21E2,281E,102B,281F,83D5,83E1,12B7,83CB,83DC,83DF,2775,83DB,24BB,83CE,818D,83D9,2784,2776,83CD,83DA,12CF,281D,83E3,0FB4,83D0,2B50,83CC,0FA3,8037,0FB3,83D1,2772,8196,83D4,83CF,83E2,83D3,83D7,2773,277E,12CC,12C9,8038,83DE,8036,1518,83D6,83D2,83DD,83E0

The first thing I need is an array of these I can pump into a loop.

This is stupid simple for PowerShell.  Each device is separated by a comma, so I can just use comma as the split character.

(Cut off the long string of devices, you get the idea)

$devicelist = "2B3B,277F,83D8,2B34,2250,21DD"

$devices = $devicelist.split(',')

Now, if you type $devices, you get:

2B3B
277F
83D8
2B34
2250
21DD

Now we have our simple array.

Another helpful thing to know: if you have a sequence of numbers, you can use another PowerShell trick.  Say I need an array of objects named object1 through object10.  Also easy:

$objects = 1..10 | foreach-object {"object" + $_}

Type $objects and you get:

object1
object2
object3
object4
object5
object6
object7
object8
object9
object10

Yes, you can do this for IPs.  Say I want an array of all host IPs in 192.168.1.0/24, so I can ping them or whatever.

$ips = 1..254 | foreach-object {'192.168.1.' + $_}

Maybe port ranges with “TCP” in front for firewall rule statements.

$tcpports = 3000..4000 | foreach-object {"TCP" + $_}

Now, I need command string stuff added in front of and behind each of these.  Again, it doesn’t matter what tech you’re working on – just put in your gobbledygook that I wouldn’t understand.  Remember, $_ is the current instance in the array.

$commands = $devices | foreach-object {'symconfigure -sid 1234 -cmd "unmap dev ' + $_ + ' from dir ALL:ALL;" commit'}

If I type $commands, I get:

symconfigure -sid 1234 -cmd "unmap dev 2B3B from dir ALL:ALL;" commit
symconfigure -sid 1234 -cmd "unmap dev 277F from dir ALL:ALL;" commit
symconfigure -sid 1234 -cmd "unmap dev 83D8 from dir ALL:ALL;" commit
symconfigure -sid 1234 -cmd "unmap dev 2B34 from dir ALL:ALL;" commit
symconfigure -sid 1234 -cmd "unmap dev 2250 from dir ALL:ALL;" commit
symconfigure -sid 1234 -cmd "unmap dev 21DD from dir ALL:ALL;" commit

BAM!  We got our commands, and we’re rolling.  If I want to save the commands as a text file…

$commands | out-file c:\dir\ourcoolscript.txt

Now I can copy/paste into putty/telnet session, or upload the script file and launch it if that’s possible, whatever I want to do.

WAY faster IMO than duct-taping a solution together with Excel or other weird methods, and far more flexible.

So even if your technologies don’t have PowerShell, you can still use PowerShell!

Taking scripting too far?

I love scripting, and I am a huge advocate of PowerShell.  I seemingly talk all the time about how it can be leveraged to customers who don’t leverage it, and I constantly encourage customers to make use of it to become more efficient.

But…  is it possible to take scripting too far?  Of course.

I stumbled across this article about a sysadmin who automated his job to arguably a ridiculous degree.

I shouldn’t say he arguably went too far.  He definitely did.  To me, the worst example in the article is where he automated the rollback of one of his users’ databases based on the contents of an email, if it came from a particular end user.

Scripting, or more specifically automation, is beneficial to virtually anyone in the IT field.  I applaud almost all efforts to do this.  However, scripting gets dicey when you begin to automate decision making, especially complex decision making.

Don’t get me wrong, decision making in scripting is possible and beneficial, but it shouldn’t always be used.  I’ve included conditional logic in scripts many times, and it was absolutely essential to accomplishing the goal of the script.  However, sometimes decisions are just too complex to make based on limited information.

In this case, I have a lot of problems with what he set up.  First off, how on earth can you tell just from some keywords in the contents of an email that you should roll back the database, without the end user specifically asking to roll back the database?  Even if the end user requested this, if the end user doesn’t know how to do it themselves, there’s a pretty decent chance that this isn’t the best solution anyway.

Secondly, I seriously doubt the email was authenticated to be from this specific user.  That is, if this type of automation were widespread, given the general security posture of most email systems, it could be trivial to exploit it to cause a day’s worth of data loss.

With all this said, I generally have the opposite problem: customers not automating anything, rather than automating things they shouldn’t.  But this does demonstrate that it’s possible to go to the opposite extreme.

Change VMware MPIO policy via PowerCLI

This is one of those one-liners I think I’ll never use again, but once again I found myself using it to fix MPIO policies in a vSphere 5.0 environment plugging into a Nexsan storage array.  I’ve previously used it on EMC and LeftHand arrays, when the default MPIO policy for the array type at the time of ESXi installation turned out not to be the recommended one after the fact, or, in the case of LeftHand, was wrong from the get-go.

get-vmhost | get-scsilun | where vendor -eq "NEXSAN" | set-scsilun -MultipathPolicy "RoundRobin"

In this case, it was over 300 LUN objects (LUNs x hosts accessing them), so that’s about 5 mouse clicks per object to fix via the GUI.  Translation: you REALLY want to use some kind of scripting to do this, and PowerCLI can do it in one line.
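
If you want to sanity-check the policies before and after, here’s a quick sketch to summarize them, using the same cmdlets as above:

get-vmhost | get-scsilun | where vendor -eq "NEXSAN" | group-object MultipathPolicy | select-object count,name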

You gotta love PowerShell!

Disable CBT on Veeam jobs via PowerShell

If you haven’t heard the not-so-great news, VMware has discovered a bug in vSphere 6 with Changed Block Tracking (CBT) that can cause your backups to be corrupt and therefore invalid.  Currently, they recommend not using CBT with vSphere 6 when backing up your VMs.

I was looking for an easy way to disable this on all jobs in Veeam quickly via PowerShell, but it’s not obvious how to do that, so I took some time to figure it out.  Here it is, assuming the Veeam module is loaded in your PowerShell session.

$backupjobs = get-vbrjob | where jobtype -eq "Backup"
foreach ($job in $backupjobs){
	$joboptions = $job | get-vbrjoboptions
	# Turn off use of CBT for the VMware source
	$joboptions.visourceoptions.UseChangeTracking = $false
	$job | set-vbrjoboptions -options $joboptions
}

Here’s how to enable it again:

$backupjobs = get-vbrjob | where jobtype -eq "Backup"
foreach ($job in $backupjobs){
	$joboptions = $job | get-vbrjoboptions
	$joboptions.visourceoptions.UseChangeTracking = $true
	# Uncomment the next line only if needed - see the note below
	#$joboptions.visourceoptions.EnableChangeTracking = $true
	$job | set-vbrjoboptions -options $joboptions
}

Sorry it’s not pretty on the page, but I wanted to get this out ASAP to help anyone needing to do this quickly and effectively.

One thing to note: the enable script has a commented-out line.  Veeam jobs have a separate option that enables CBT within VMware if it’s turned off, and that option gets disabled if you turn CBT off altogether within the job setup GUI.  If you disabled CBT with my script, that option doesn’t get touched, so you don’t need to remove the # on that line.  If you set your jobs manually and want that option enabled again, take out the # before that line, and it’ll enable that option again.
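
If you want to verify where all your jobs stand afterward, this sketch (using the same cmdlets as above) should report each backup job’s current CBT setting:

get-vbrjob | where jobtype -eq "Backup" | foreach-object {
	$options = $_ | get-vbrjoboptions
	# Show the job name alongside its current CBT setting
	$_ | select-object name,@{Name="UseCBT";Expression={$options.visourceoptions.UseChangeTracking}}
}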

Hope this helps!

NetApp snapshots and volume monitoring script

I just finished a script I created for a customer to help them resolve a problem with their NetApp.  Basically, sometimes their NetApp snapshots would get stuck and not purge, and/or the volumes would run out of space.  I advocated to them many times that if there isn’t a monitoring solution in place to detect this, PowerShell could fill in the gaps.  They took me up on getting something set up because this had happened too often.

First, you need to download and install the NetApp Data ONTAP PowerShell Toolkit.

This script detects any volume that is more than 90% used, and any volume snapshot older than 14 days; both thresholds are easily customizable via the variables at the top.  Finally, it offers to delete the old snapshots while you’re running the script.

$maxvolpercentused = 90
$maxsnapshotdesiredage = (get-date).adddays(-14)
import-module dataontap
Write-Host "Enter a user account with full rights within the NetApp Filer"
$cred = Get-Credential
$controller = 'Put Your NetApp filer IP/name here'
$currentcontroller = connect-nacontroller -name $controller -credential $cred
Write-Host "Getting NetApp volume snapshot information..."
$volsnapshots = get-navol | get-nasnapshot
Write-Host "Getting NetApp volume information..."
# Find volumes above the percent-used threshold
$vollowspace = get-navol | where-object {$_.percentageused -gt $maxvolpercentused}
if ($vollowspace -eq $null){
 Write-Host "All volumes have sufficient free space!"
 }
else {
 Write-Host "The following NetApp volumes have low free space, and should be checked."
 $vollowspace
 Read-Host "Press Enter to continue..."
 Write-Host "Getting volume snapshot information for volumes with low space..."
 $vollowspace | get-nasnapshot | sort-object targetname | select-object targetname,name,created,@{Name="TotalGB";expression={$_.total/1GB}}
 Read-Host "Press Enter to continue..."
 }
Write-Host "Checking for snapshots older than the max desired age of..."
$maxsnapshotdesiredage
Write-Host "Finding old snapshots..."
# Find snapshots created before the cutoff date
$oldsnapshots = get-navol | get-nasnapshot | where-object {$_.created -lt $maxsnapshotdesiredage}
if ($oldsnapshots -eq $null){
 Write-Host "No old snapshots exist!"
 }
else {
 Write-Host "The following snapshots are older than the identified longest retention period..."
 $oldsnapshots | select-object targetname,name,created,@{Name="TotalGB";expression={$_.total/1GB}}
 Read-Host "Press Enter to continue..."
 Write-Host "You will now be asked if you would like to delete each of the above snapshots."
 Write-Host "Note that Yes to All and No to All will not function."
 Write-Host "If you elect to delete them, it is NON-REVERSIBLE!!!"
 # Show each snapshot's details, then prompt before deleting it
 $oldsnapshots | foreach-object {$_ | select-object targetname,name,created,@{Name="TotalGB";expression={$_.total/1GB}} ; $_ | remove-nasnapshot -confirm:$true}
 }
Write-Host "Script completed!"

The resulting output looks like this.

Enter a user account with full rights within the NetApp Filer

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
Getting NetApp volume snapshot information...
Getting NetApp volume information...
All volumes have sufficient free space!
Checking for snapshots older than the max desired age of...

Monday, July 27, 2015 11:18:58 AM
Finding old snapshots...
The following snapshots are older than the identified longest retention period...

TargetName : NA_NFS01_A_DD
Name : smvi__Daily_NFS01_A_&_B_20120621171008
Created : 6/21/2015 4:59:16 PM
TotalGB : 50.8708076477051

Press Enter to continue...:

Script completed!

Hope this helps someone out there!

Howto: Fix vCenter 5.5 Syslog Collector bug

I just ran into an issue for a customer running vCenter 5.5.  There is a known issue with the 5.5 version of the Syslog Collector that causes the debug log to grow indefinitely; according to the KB article, it occurs when the collector was upgraded to 5.5.  However, the customer in question was built fresh with 5.5, although I did update to newer builds within 5.5 during that time.  Bottom line: if you’re running the 5.5 version of the Syslog Collector, this should be checked in all cases to be sure.  The KB article outlines steps to stop this, which basically involve turning off debug logging altogether.

The debug log doesn’t contain actual syslog info from hosts; it’s only useful for troubleshooting issues with the Syslog Collector itself, so it’s almost certainly safe to delete.

Please note this only impacts the syslog collector.  If you did not install the syslog collector, this isn’t applicable.

You can copy and paste the following into an admin-elevated PowerShell window to automate stopping the syslog service, changing the Syslog Collector’s config file to turn off debug logging completely, deleting the probably massive debug log, and starting the Syslog Collector again.  You can also save it as a .ps1 file and run it in an elevated prompt.

# Stop the collector so the config file and log aren't in use
stop-service vmware-syslog-collector
# Change the debug logging level from 1 to 0 in the config file
(get-content "C:\ProgramData\VMware\VMware Syslog Collector\vmconfig-syslog.xml") | foreach-object {$_ -replace "<level>1</level>", "<level>0</level>"} | set-content "C:\ProgramData\VMware\VMware Syslog Collector\vmconfig-syslog.xml"
# Delete the runaway debug log, then start the collector again
remove-item "C:\ProgramData\VMware\VMware Syslog Collector\logs\debug.log"
start-service vmware-syslog-collector
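
If you’re curious just how big the debug log got, you can check before running the fix:

get-item "C:\ProgramData\VMware\VMware Syslog Collector\logs\debug.log" | select-object name,@{Name="SizeGB";Expression={$_.length/1GB}}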

This assumes, of course, that the Syslog Collector is running on a Windows machine, not the vCenter Appliance.  The article doesn’t make clear whether this issue only applies to the Windows version of vCenter, or how to fix the vCenter Appliance if the issue impacts it as well.

Hope this helps!

Fix AD Lingering Objects with PowerShell

I briefly ran a blog before on wordpress.com, and most of the information there is outdated or probably not relevant today, but there are a few posts covering topics I’ve found little else on the internet to address.  These typically harken back to my AD/Exchange-heavy days, but they’re still relevant today.  One of those posts is how to fix Active Directory lingering objects using PowerShell.

I ran into a problem in a large forest with multiple child domains and lots of domain controllers – 10 domains and 275 domain controllers!

To protect identities, let’s assume a forest consisting of domain.com, with two child domains – child1.domain.com and child2.domain.com.  Each domain has two global catalog servers (gc1, gc2) and one domain controller that is not a global catalog (dc1).

What are lingering objects anyway?

Remember that at least one domain controller in each domain must be a global catalog server.  GCs have a copy of all objects in the forest, but only a subset of each object’s attributes.  For all objects that are not in that domain controller’s own domain, the GC has a read-only copy.  You cannot manually go in and alter, create, or delete objects directly in the global catalog for objects that reside in another domain.

Lingering objects occur when a global catalog in one domain ends up with objects that no longer exist in another domain.  For example, let’s say a user exists in child2.domain.com and is deleted.  If somehow this doesn’t replicate to a GC in child1.domain.com or domain.com, the global catalogs in those domains now have that user as a lingering object.  This can occur in a variety of ways, such as replication failures, or a global catalog server being disconnected for a long period of time.

Further info can be found here.

To find if you have lingering objects on a domain controller, you must run the following command:

repadmin /removelingeringobjects ServerName ServerGUID DirectoryPartition /advisory_mode

Simply remove the /advisory_mode switch to remove lingering objects.

ServerName is the fully qualified domain name of a global catalog that has lingering objects.  ServerGUID is the GUID of a domain controller in the domain the lingering objects are from, which you’d like to use as a reference.  DirectoryPartition is the distinguished name of the partition with the lingering objects.  Usually, lingering objects are computer or user account objects, so this would look like dc=domain,dc=com.
Finding the DC’s GUID can be done by looking in the forward lookup zone _msdcs.domain.com.
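
If memory serves, repadmin can also show you a DC’s GUID – run the following against the reference DC and look for the “DSA object GUID” line in the output:

repadmin /showrepl gc2.child2.domain.com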

Lingering objects can cause problems with outdated or invalid group membership, problems with address book generation in Exchange, or basically problems with anything that depends upon valid info within the global catalog.  They can even cause replication failures, depending upon your global catalog replication topology and whether you have strict replication enabled.

Scenario
Let’s say you suspect gc1.child1.domain.com has lingering objects from child2.domain.com.  You would first need the GUID of a DC in child2.domain.com that you believe has accurate domain information.  Let’s say you believe that DC is gc2.child2.domain.com.  Use the DNS MMC, connect to a DNS server hosting domain.com, look in the _msdcs.domain.com zone, and you will see all the domain controllers in your forest.  Copy the GUID to your clipboard.  Let’s say gc2.child2.domain.com’s GUID is:

85d158d2-a006-4fff-b1e5-f9b6eaabab2b

You would then run:
repadmin /removelingeringobjects gc1.child1.domain.com 85d158d2-a006-4fff-b1e5-f9b6eaabab2b dc=child2,dc=domain,dc=com /advisory_mode

Note you need the Windows Support Tools installed.

This isn’t so tough.

However, if you suspected all your global catalogs had lingering objects for this domain, you’d need to run this command for each GC not in child2.domain.com.  Not terrible for this small of an environment.  To fix them, just chop off the advisory mode switch, and you’re done.

Think Big!

What if your environment was a 10 domain forest with over 100 domain controllers, and no predictable pattern of which domain controllers were global catalogs and which weren’t?!  Even if you knew which were global catalogs, who wants to issue that many commands?!

Wouldn’t it be nice if we could issue this command to every global catalog not in child2.domain.com (since child2’s GCs have writable copies of the partition, theirs would be correct and would fix lingering objects on their own)?

That is what I faced.  I found replication wasn’t occurring for a domain partition in the global catalog because strict replication was enabled, and all global catalogs outside of a particular domain had lingering objects.  Talk about a pain in the butt!  Unless of course…
PowerShell to the rescue!

We can easily get all the global catalogs in the forest:
$forest = [system.directoryservices.activedirectory.Forest]::GetCurrentForest()
$forest.globalcatalogs | select-object name

You would receive output of the fully qualified domain names of all global catalogs.
But wait.  We only want GCs that are NOT in child2.domain.com.  Simple enough with a where-object filter.

$forest.globalcatalogs | where-object {$_.name -notlike "*.child2.domain.com"} | select-object name

Now we just need to capture this in a variable, so we add “$gcs = ” to the beginning of the second line.  This gives us an array we can then run a command against.  The last part is a bit tricky, because we’re intermixing PowerShell with a standard command-line tool, so watch your quoting – usually you need single quotes around literal phrases.  Also, because select-object left us with objects in the $gcs variable, we want to make sure we’re not passing any other properties or code associated with those objects – we literally just want the name of each to be passed.  Remember, $_ means the current object in the pipeline.  By adding .name, we’re saying don’t pass any output related to each object in the array other than its name.  Without it, you get errors, because PowerShell puts extra characters in for each global catalog.

Final commands:

$forest = [system.directoryservices.activedirectory.Forest]::GetCurrentForest()
$gcs = $forest.globalcatalogs | where-object {$_.name -notlike "*.child2.domain.com"} | select-object name
$gcs | foreach-object {repadmin /removelingeringobjects $_.name 85d158d2-a006-4fff-b1e5-f9b6eaabab2b dc=child2,dc=domain,dc=com}

(Add /advisory_mode at the end of the repadmin command inside the braces if you want a dry run first.)

VMware dedicated swapfile datastores

Dedicated swapfile datastores in VMware are often overlooked.   Here’s why you might use them, and how to size them easily with PowerCLI.

It’s very often advisable to create dedicated swapfile datastores in your VMware vSphere environment.   There are numerous benefits:

  • Ensure there’s room to start a VM
  • Use a different storage type than the working directory uses, for performance or cost savings
  • Reduce replication traffic when using storage-based replication, because there’s no reason to replicate this storage
  • You may want to snapshot storage that runs VMs for easy recoverability, but there’s no reason to snapshot swapfiles

If you decide to create dedicated datastores, you want to use the following principles:

  • Create datastores that are resilient, so that VMs can be started
  • Have hosts that frequently have VMs VMotion between them, such as a cluster, use the same datastores to reduce vMotion network traffic
  • Carefully monitor their space, size them correctly, and allow for some overhead for growth

The swapfile size for each VM is determined by the following:

  • The VM’s defined RAM minus the RAM reservation for that VM.

For example, if a VM is defined with 8GBs of RAM, but its RAM reservation is set to 2GBs, a 6GB swapfile will be created.  By default, a VM has no RAM reservation.
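
If you want to see the expected swapfile size for each individual VM, here’s a quick sketch using the same PowerCLI cmdlets as the sizing script further below:

get-cluster clustername | get-vm | foreach-object {
	$res = $_ | get-vmresourceconfiguration
	# Expected swapfile size = defined RAM minus the memory reservation
	$_ | select-object name,@{Name="SwapGB";Expression={$_.memoryGB - $res.memreservationGB}}
}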

That means this datastore’s space consumption can fluctuate as VMs are built and powered off and on, whenever RAM is added to or removed from a VM’s definition, or if its memory reservation is adjusted.

This begs the question – how do you easily size these datastores?  Harness PowerShell by using PowerCLI!  Simply tune the $vms variable (or what’s piping into it) in the following to grab the VMs that will likely vMotion between the same hosts.  This would usually be by cluster.

$vms = get-cluster clustername | get-vm
# Total defined RAM across the VMs
$RAMDef = $vms | measure-object -sum memoryGB | select-object -expandproperty sum
# Total memory reservations across the VMs
$RAMResSum = $vms | get-vmresourceconfiguration | measure-object -sum memreservationGB | select-object -expandproperty sum
$SwapDatastore = $RAMDef - $RAMResSum
Write-Host "Defined amount of RAM within VMs is $RAMDef GBs"
Write-Host "Memory reservation for VMs is $RAMResSum GBs"
Write-Host "A datastore of at least $SwapDatastore GBs will be needed, plus overhead."

Output will look like this:

Defined amount of RAM within VMs is 218 GBs
Memory reservation for VMs is 0 GBs
A datastore of at least 218 GBs will be needed, plus overhead.

For overhead, you probably want to keep at least 25% free at a minimum, just to keep datastore free space alarms from going off, plus allow for any additional growth from the factors outlined above, mostly centered around new VMs being built.
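
To put numbers on that, using the 218GB figure from the example output above and a 25% free target:

# 218 GB of swap / 0.75 usable = ~291 GB datastore to keep 25% free
[math]::Ceiling(218 / 0.75)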

Many customers balk when told how big the swapfile datastore will be, but remember that if you’re changing this within a customer’s environment, they’re going to gain back swapfile space within their existing datastores as the swapfiles get placed on the dedicated datastore.

Also, think of the potential storage space savings if you’re snapshotting your VM datastores and replicating them, plus the bandwidth savings.  Let’s say you have VMs that in aggregate are defined with 500GBs of RAM and no memory reservations.  If you’re doing both snapshots and replication and didn’t dedicate a datastore to the swapfiles, you’re talking 500GBs of replication space saved, and up to 1TB worth of space savings depending upon how much additional space the swapfiles are taking within your storage snapshots.  Pretty worth it!

How do you migrate existing swapfiles?

  1. First, set your cluster to use the host’s swapfile setting instead of the cluster’s.
  2. Set all your hosts to use the same datastore.

To do this in PowerCLI:

$cluster = "clustername"
# Tell the hosts to use their own swapfile datastore setting
get-cluster $cluster | set-cluster -VMSwapfilePolicy InHostDatastore

You’ll have to manually set each host’s swapfile datastore with the web or thick client.  Unfortunately, PowerCLI fails to set the swapfile datastore if the host is in a cluster.
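
To verify the hosts picked it up, this sketch should work (VMSwapfilePolicy and VMSwapfileDatastore are properties on the host object, if I recall correctly):

get-cluster $cluster | get-vmhost | select-object name,vmswapfilepolicy,vmswapfiledatastore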

You should see the swapfiles deleted from the VMs’ working directories and created in the new datastore as VMs are power cycled.