Hi,
We have a networking-intensive application running inside a CentOS 7.6 VM with 8 vCPUs. It is latency sensitive and needs response times of a few microseconds, but sometimes its threads are stuck for more than 1 millisecond.
BIOS configuration: hyperthreading disabled, power management set to maximum performance (C1E disabled, C-states disabled)
ESXi 6.5 configuration:
Only one VM running
host power management = high performance
VM latency sensitivity = high
Two network cards attached to the VM via PCI passthrough
Setting VM CPU affinities gives worse results
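For reference, here is roughly what we experimented with in the .vmx file (the core list and the reservation value are illustrative, not our exact values):

```
sched.cpu.latencySensitivity = "high"
# full CPU reservation, needed for exclusive affinity with high latency sensitivity
sched.cpu.min = "8000"
# explicit pinning we tried (this gave worse results); cores 0-7 are illustrative
sched.cpu.affinity = "0,1,2,3,4,5,6,7"
```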
The guest Linux is configured with isolcpus. "perf sched" shows that no other process is scheduled on the same cores when the 1 ms delay occurs.
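A typical isolation setup of this kind looks like the following guest kernel command line (the CPU list is illustrative, assuming housekeeping stays on CPU 0; nohz_full and rcu_nocbs are commonly added alongside isolcpus but are not something we have confirmed to help here):

```
# /etc/default/grub in the guest - illustrative CPU list
GRUB_CMDLINE_LINUX="... isolcpus=1-7 nohz_full=1-7 rcu_nocbs=1-7"
```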
esxtop's CPU view shows that the VM's %RDY is regularly above 0.3%.
esxtop's power management view shows that every physical CPU core is occasionally put into the C1 state.
sched-stats shows that some ESXi system processes run on the same physical CPUs as the VM's vCPUs. We also see that the vCPUs are frequently migrated between physical cores.
Is it possible to pin vCPUs to physical CPUs so they are never migrated to another core?
Is it possible to force ESXi system threads to be scheduled only on CPUs not used by the VM?
Is it possible to completely disable power management so the CPUs never enter the C1 state ?
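If it is relevant: the host power policy can also be inspected and set from the ESXi shell. This is a sketch assuming the /Power/CpuPolicy advanced option is the right knob (it should correspond to the "high performance" setting in the UI, which in our case did not stop the C1 entries):

```
# query the current host power policy
esxcli system settings advanced list -o /Power/CpuPolicy
# force static high performance
esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"
```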
Are there reasons other than ESXi scheduling and power management that could explain such high latency?
Is there a way to configure ESXi and the VM so that the vCPUs always see response times below a few microseconds?
Thanks,
Sylvie