
Wide-VM NUMA Layout (ESXi 6.0 U2)


Hello,

 

We created a VM with the following settings:

128 vCPUs

4 TB Memory

numa.autosize.vcpu.maxPerVirtualNode = "16"

cpuid.coresPerSocket = "16"

numa.vcpu.preferHT = "TRUE"

 

The goal was to create 8 vNUMA nodes and distribute them across all physical NUMA nodes.
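
For clarity, the expected layout from these settings works out as follows:

128 vCPUs / 16 vCPUs per virtual node = 8 vNUMA nodes
4 TB / 8 vNUMA nodes = 512 GB of memory per node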

 

Physical layout:

8 CPU Sockets (Intel(R) Xeon(R) CPU E7-8880 v4)

768 GB Memory / CPU Socket

 

The vNUMA layout looks fine, both in the guest and on ESXi:

DICT                  numvcpus = "128"
DICT                   memSize = "4177920"
DICT               displayName = ""
DICT      cpuid.coresPerSocket = "16"
DICT        numa.vcpu.preferHT = "true"
DICT numa.autosize.vcpu.maxPerVirtualNode = "16"
numa: Setting.vcpu.maxPerVirtualNode=16 to match cpuid.coresPerSocket
numa: VCPU 0: VPD 0 (PPD 0)
numa: VCPU 1: VPD 0 (PPD 0)
numa: VCPU 2: VPD 0 (PPD 0)
numa: VCPU 3: VPD 0 (PPD 0)
numa: VCPU 4: VPD 0 (PPD 0)
numa: VCPU 5: VPD 0 (PPD 0)
numa: VCPU 6: VPD 0 (PPD 0)
numa: VCPU 7: VPD 0 (PPD 0)
numa: VCPU 8: VPD 0 (PPD 0)
numa: VCPU 9: VPD 0 (PPD 0)
numa: VCPU 10: VPD 0 (PPD 0)
numa: VCPU 11: VPD 0 (PPD 0)
numa: VCPU 12: VPD 0 (PPD 0)
numa: VCPU 13: VPD 0 (PPD 0)
numa: VCPU 14: VPD 0 (PPD 0)
numa: VCPU 15: VPD 0 (PPD 0)
numa: VCPU 16: VPD 1 (PPD 1)
numa: VCPU 17: VPD 1 (PPD 1)
numa: VCPU 18: VPD 1 (PPD 1)
numa: VCPU 19: VPD 1 (PPD 1)
numa: VCPU 20: VPD 1 (PPD 1)
numa: VCPU 21: VPD 1 (PPD 1)
numa: VCPU 22: VPD 1 (PPD 1)
numa: VCPU 23: VPD 1 (PPD 1)
numa: VCPU 24: VPD 1 (PPD 1)
numa: VCPU 25: VPD 1 (PPD 1)
numa: VCPU 26: VPD 1 (PPD 1)
numa: VCPU 27: VPD 1 (PPD 1)
numa: VCPU 28: VPD 1 (PPD 1)
numa: VCPU 29: VPD 1 (PPD 1)
numa: VCPU 30: VPD 1 (PPD 1)
numa: VCPU 31: VPD 1 (PPD 1)
numa: VCPU 32: VPD 2 (PPD 2)
numa: VCPU 33: VPD 2 (PPD 2)
numa: VCPU 34: VPD 2 (PPD 2)
numa: VCPU 35: VPD 2 (PPD 2)
numa: VCPU 36: VPD 2 (PPD 2)
numa: VCPU 37: VPD 2 (PPD 2)
numa: VCPU 38: VPD 2 (PPD 2)
numa: VCPU 39: VPD 2 (PPD 2)
numa: VCPU 40: VPD 2 (PPD 2)
numa: VCPU 41: VPD 2 (PPD 2)
numa: VCPU 42: VPD 2 (PPD 2)
numa: VCPU 43: VPD 2 (PPD 2)
numa: VCPU 44: VPD 2 (PPD 2)
numa: VCPU 45: VPD 2 (PPD 2)
numa: VCPU 46: VPD 2 (PPD 2)
numa: VCPU 47: VPD 2 (PPD 2)
numa: VCPU 48: VPD 3 (PPD 3)
numa: VCPU 49: VPD 3 (PPD 3)
numa: VCPU 50: VPD 3 (PPD 3)
numa: VCPU 51: VPD 3 (PPD 3)
numa: VCPU 52: VPD 3 (PPD 3)
numa: VCPU 53: VPD 3 (PPD 3)
numa: VCPU 54: VPD 3 (PPD 3)
numa: VCPU 55: VPD 3 (PPD 3)
numa: VCPU 56: VPD 3 (PPD 3)
numa: VCPU 57: VPD 3 (PPD 3)
numa: VCPU 58: VPD 3 (PPD 3)
numa: VCPU 59: VPD 3 (PPD 3)
numa: VCPU 60: VPD 3 (PPD 3)
numa: VCPU 61: VPD 3 (PPD 3)
numa: VCPU 62: VPD 3 (PPD 3)
numa: VCPU 63: VPD 3 (PPD 3)
numa: VCPU 64: VPD 4 (PPD 4)
numa: VCPU 65: VPD 4 (PPD 4)
numa: VCPU 66: VPD 4 (PPD 4)
numa: VCPU 67: VPD 4 (PPD 4)
numa: VCPU 68: VPD 4 (PPD 4)
numa: VCPU 69: VPD 4 (PPD 4)
numa: VCPU 70: VPD 4 (PPD 4)
numa: VCPU 71: VPD 4 (PPD 4)
numa: VCPU 72: VPD 4 (PPD 4)
numa: VCPU 73: VPD 4 (PPD 4)
numa: VCPU 74: VPD 4 (PPD 4)
numa: VCPU 75: VPD 4 (PPD 4)
numa: VCPU 76: VPD 4 (PPD 4)
numa: VCPU 77: VPD 4 (PPD 4)
numa: VCPU 78: VPD 4 (PPD 4)
numa: VCPU 79: VPD 4 (PPD 4)
numa: VCPU 80: VPD 5 (PPD 5)
numa: VCPU 81: VPD 5 (PPD 5)
numa: VCPU 82: VPD 5 (PPD 5)
numa: VCPU 83: VPD 5 (PPD 5)
numa: VCPU 84: VPD 5 (PPD 5)
numa: VCPU 85: VPD 5 (PPD 5)
numa: VCPU 86: VPD 5 (PPD 5)
numa: VCPU 87: VPD 5 (PPD 5)
numa: VCPU 88: VPD 5 (PPD 5)
numa: VCPU 89: VPD 5 (PPD 5)
numa: VCPU 90: VPD 5 (PPD 5)
numa: VCPU 91: VPD 5 (PPD 5)
numa: VCPU 92: VPD 5 (PPD 5)
numa: VCPU 93: VPD 5 (PPD 5)
numa: VCPU 94: VPD 5 (PPD 5)
numa: VCPU 95: VPD 5 (PPD 5)
numa: VCPU 96: VPD 6 (PPD 6)
numa: VCPU 97: VPD 6 (PPD 6)
numa: VCPU 98: VPD 6 (PPD 6)
numa: VCPU 99: VPD 6 (PPD 6)
numa: VCPU 100: VPD 6 (PPD 6)
numa: VCPU 101: VPD 6 (PPD 6)
numa: VCPU 102: VPD 6 (PPD 6)
numa: VCPU 103: VPD 6 (PPD 6)
numa: VCPU 104: VPD 6 (PPD 6)
numa: VCPU 105: VPD 6 (PPD 6)
numa: VCPU 106: VPD 6 (PPD 6)
numa: VCPU 107: VPD 6 (PPD 6)
numa: VCPU 108: VPD 6 (PPD 6)
numa: VCPU 109: VPD 6 (PPD 6)
numa: VCPU 110: VPD 6 (PPD 6)
numa: VCPU 111: VPD 6 (PPD 6)
numa: VCPU 112: VPD 7 (PPD 7)
numa: VCPU 113: VPD 7 (PPD 7)
numa: VCPU 114: VPD 7 (PPD 7)
numa: VCPU 115: VPD 7 (PPD 7)
numa: VCPU 116: VPD 7 (PPD 7)
numa: VCPU 117: VPD 7 (PPD 7)
numa: VCPU 118: VPD 7 (PPD 7)
numa: VCPU 119: VPD 7 (PPD 7)
numa: VCPU 120: VPD 7 (PPD 7)
numa: VCPU 121: VPD 7 (PPD 7)
numa: VCPU 122: VPD 7 (PPD 7)
numa: VCPU 123: VPD 7 (PPD 7)
numa: VCPU 124: VPD 7 (PPD 7)
numa: VCPU 125: VPD 7 (PPD 7)
numa: VCPU 126: VPD 7 (PPD 7)
numa: VCPU 127: VPD 7 (PPD 7)
numaHost: 8 virtual nodes, 8 virtual sockets, 8 physical domains
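
(For reference: the lines above are from the VM's vmware.log. They can be pulled out with a simple grep; the path below is only illustrative, adjust it to your datastore and VM folder:)

grep -i numa /vmfs/volumes/<datastore>/<vm-folder>/vmware.log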

 

available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 523263 MB
node 0 free: 509779 MB
node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 1 size: 528384 MB
node 1 free: 461074 MB
node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
node 2 size: 524288 MB
node 2 free: 510266 MB
node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 3 size: 524288 MB
node 3 free: 512085 MB
node 4 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79
node 4 size: 524288 MB
node 4 free: 474284 MB
node 5 cpus: 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 5 size: 524288 MB
node 5 free: 511631 MB
node 6 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111
node 6 size: 524288 MB
node 6 free: 511478 MB
node 7 cpus: 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 7 size: 475136 MB
node 7 free: 453380 MB
node distances:
node   0   1   2   3   4   5   6   7
  0:  10  20  20  20  20  20  20  20
  1:  20  10  20  20  20  20  20  20
  2:  20  20  10  20  20  20  20  20
  3:  20  20  20  10  20  20  20  20
  4:  20  20  20  20  10  20  20  20
  5:  20  20  20  20  20  10  20  20
  6:  20  20  20  20  20  20  10  20
  7:  20  20  20  20  20  20  20  10
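
(The guest-side view above appears to be numactl output from a Linux guest; it can be reproduced there with:)

numactl --hardware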

 

But when I check sched-stats -t numa-clients, I can see that physical CPU packages are shared:

 

As I understand it, that should not be possible when a vNUMA node has 512 GB of RAM and a physical CPU package has 768 GB of memory...
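
A quick sanity check of the sizes, using the memSize of 4177920 MB from the log above:

4177920 MB / 8 vNUMA nodes = 522240 MB (~510 GB) per node
768 GB per socket = 786432 MB

522240 MB < 786432 MB, so each vNUMA node's memory should fit entirely within a single physical socket.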

 

What is wrong here?

 

Best Regards,

Markus

