The complaints are simply of slowness, and they come only from the users of the "R1" and "R3-Win10" machines, described below. The problem is clearly load-dependent: after hours or over a weekend, when I look and test, everything works fine for me. These captures were therefore taken in the middle of a workday.
This is a PowerEdge R720 with 2x Xeon E5-2670 CPUs (20 physical cores), 128 GB RAM, and two disk arrays: a RAID 10 of spinners (10 TB capacity) and a RAID 1 of SSDs (1 TB capacity).
The following VMs are on the array of spinners:
SB-SERVER2 = Server 2012 Domain Controller
APP-SERVER2 = Server 2012 Application Server
CompuVM = Win10 machine housing the Ubiquiti controller & similar
vCenter65 = management interface
MetroXP = WinXP workstation for accessing historical LOB app - normally powered off
PowerChute = small Linux machine for managing the dual UPSes
The following VMs are on the array of SSDs:
R1 = Win10 workstation currently in use by an employee at a satellite office
R1-Win10 = Win10 workstation powered on, but with no current user
R2-Win10 = Win10 workstation powered on, but with no current user
R3-Win10 = Win10 workstation currently in use by a 2nd employee at a satellite office
Both of the "idle" workstation VMs are intended for other users at the satellite office (who are currently remoting into physical machines), but I don't want to proceed with that migration until I solve the performance issues, or deem them unsolvable.
OK, here are the captures. For the CPU and memory screens, I have included both "full" displays and "VMs only" displays.
Obvious observations: the CPU "%WAIT" column seems like the canary in this coal mine, but "%RDY" doesn't look the way I would expect if processing power were in short supply. There has been no over-allocation of resources.
I would also have expected the array of spinners to be a clear bottleneck, but the disk screens don't seem to bear that out...
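Since the symptoms only show up mid-workday, one option is to let esxtop log in batch mode during business hours and crunch the numbers afterward instead of eyeballing live screens. Below is a minimal sketch of that post-processing step: it pulls the per-VM "% Ready" counters out of an esxtop batch CSV and averages them. The host name, VM IDs, and values in the sample are made up for illustration, and the ~10% %RDY threshold is just a commonly cited rule of thumb, not a hard limit.

```python
import csv
import io
from collections import defaultdict

# Synthetic sample of esxtop batch-mode output (esxtop -b). Real exports use
# perfmon-style column headers such as:
#   \\host\Group Cpu(1234:VMNAME)\% Ready
# The host, VM IDs, and values below are invented for this sketch.
SAMPLE = """\
"(PDH-CSV 4.0)","\\\\esx1\\Group Cpu(101:R1)\\% Ready","\\\\esx1\\Group Cpu(102:R3-Win10)\\% Ready"
"05/01/2023 10:00:05","3.10","12.40"
"05/01/2023 10:00:10","2.80","15.20"
"05/01/2023 10:00:15","4.00","11.90"
"""

def avg_ready(csv_text):
    """Return {vm_name: average %RDY} from an esxtop batch CSV."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    # Map column index -> VM name for every per-VM "% Ready" counter.
    cols = {}
    for i, name in enumerate(header):
        if name.endswith("% Ready") and "Group Cpu(" in name:
            vm = name.split("Group Cpu(")[1].split(")")[0].split(":")[1]
            cols[i] = vm
    totals, counts = defaultdict(float), defaultdict(int)
    for row in reader:
        for i, vm in cols.items():
            totals[vm] += float(row[i])
            counts[vm] += 1
    return {vm: totals[vm] / counts[vm] for vm in totals}

if __name__ == "__main__":
    for vm, rdy in sorted(avg_ready(SAMPLE).items()):
        # Rule of thumb: sustained %RDY above ~10% is worth investigating.
        flag = "  <-- investigate" if rdy > 10 else ""
        print(f"{vm}: {rdy:.1f}%{flag}")
```

To collect a real capture you would run something like `esxtop -b -d 5 -n 720` on the host during the workday and feed the resulting CSV to the function above; averaging over the whole window separates a sustained scheduling problem from a momentary spike.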
CPU Full Screen:
CPU, VMs Only:
Memory, Full Screen:
Memory, VMs Only:
Disk Adapter:
Disk Device:
Disk, VM:
Network:
Power Management:
