I really want to be able to use VLANs with VMs on my two ESXi servers, but something isn't working as I'd expect. I think I've configured it in a sane manner, but I may well have done (or not done) something stupid. I've configured each of the VMware ESXi 6.7 servers with a virtual switch (vSwitch0) that has four physical NICs for redundancy (though only one port is live).
There are various port groups attached to vSwitch0, each with a separate VLAN ID, except the management and VM networks, which I assume just use the native VLAN (1) of the physical switch port(s) that the NICs are connected to.
Each port group / VLAN is configured as follows:
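For reference, here's roughly what the equivalent configuration looks like from the ESXi CLI (the port group name "LeasedLine" and VLAN 58 are taken from my setup; adjust per port group):

```
# List existing port groups and their VLAN IDs on the host
esxcli network vswitch standard portgroup list

# How a tagged port group is created/assigned (sketch of what I did via the UI)
esxcli network vswitch standard portgroup add --portgroup-name "LeasedLine" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name "LeasedLine" --vlan-id 58
```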
I've installed some VMs and added one NIC per network to each, to try to debug the network issue - here's one:
Here's its CLI with its network config, some pings to known-good IPs, and the results of "arp -a", showing good results on ens160 (the non-VLAN'd VM network) and no network visibility on ens161 (the LeasedLine WAN, VLAN 58):
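The diagnostics I've been running inside the VM and on the host boil down to something like this (interface and vmnic names are from my setup; the pktcap-uw filter is my attempt to see whether tagged frames ever reach the uplink):

```
# Inside the VM: confirm both NICs are up, then watch the VLAN'd
# interface for any inbound traffic (even just ARP) while pinging
ip -br addr show ens160 ens161
tcpdump -ni ens161 -e arp

# On the ESXi host shell: capture at the physical uplink, filtered
# to VLAN 58, to see if tagged frames arrive from the switch at all
pktcap-uw --uplink vmnic0 --vlan 58
```

On ens160 the tcpdump shows normal ARP chatter; on ens161 it shows nothing at all, which is what makes me suspect the tagged frames never make it to the host.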
Now I know that all this screams "physical switch problem", but the ESXi VLAN config is new to me, and I'd really appreciate some confirmation of the sanity and validity of my ESXi networking configuration from some VMware aficionados.
The switches are a mixture of a Cisco (WS-C2960S-48TS-S), most distant from the ESXi hosts, then a Dell (PCT6248), then a Dell (PC8024), into which the ESXi servers are connected. All the interconnects are set up as LAGs (in Dell-speak) / Port-Channels (in Cisco-speak). Debugging the 10Gb switch isn't easy, as the only things connected to it are the ESXi hosts. The interconnects are set to "switchport mode trunk" (Cisco) or "switchport mode general" (Dell); the ports that the ESXi NICs are plugged into are also set to "switchport mode general".
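For completeness, here's a sketch of what I understand the relevant switch-side config needs to look like for VLAN 58 to pass end-to-end (interface names like Port-channel1 and 1/g1 are illustrative, not my actual ports). My understanding is that on the Dells, "general" mode ports don't tag a VLAN unless it's explicitly added as tagged, which may be where my problem lies:

```
! Cisco 2960S trunk interconnect - VLAN 58 must be allowed on the trunk
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan add 58

! Dell PowerConnect general-mode port facing an ESXi NIC -
! VLAN 58 must be a tagged member, with the native VLAN as the PVID
interface ethernet 1/g1
 switchport mode general
 switchport general allowed vlan add 58 tagged
 switchport general pvid 1
```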




