I have a Dual-Port ConnectX-3 Pro EN 40/56 GbE QSFP+ NIC (MCX314A-BCCT)
I have been struggling for days now to get this card working.
To explain the topology: I have two other Hosts, with an InfiniBand Switch between them.
Both of the other Hosts are plugged into the InfiniBand Switch via Passive QSFP Copper Cables.
The other two Hosts and the Mellanox Cards in them are working fine.
I am now trying to add a third host, with this NIC in it.
The NIC is VMware-certified, and I am using the VMware-provided Driver specifically made for it.
(VMware Compatibility Guide - I/O Device Search)
But the NIC Ports will not link up when plugged into the Switch.
I tried replacing the Card. I tried replacing the Cables with known, tested cables.
I have tried changing the port_type_array module parameter to both Ethernet and InfiniBand.
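For reference, the commands I used to change the port type looked roughly like this (a sketch, assuming the Mellanox OFED driver bundle exposes an mlx4_core module and that port_type_array uses the same convention as on Linux, where 1 = InfiniBand and 2 = Ethernet; the module name and vmnic numbering may differ on your build):

```shell
# List the current parameters for the Mellanox core module
# (module name is an assumption; it may differ per driver bundle)
esxcli system module parameters list -m mlx4_core

# Set both ports to InfiniBand (1); use 2 for Ethernet
esxcli system module parameters set -m mlx4_core -p "port_type_array=1,1"

# A reboot is required for the port type change to take effect
reboot
```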
When the card's two ports are plugged directly into one another (with the NIC in Ethernet mode), the link comes up.
But the card has to be in InfiniBand mode, because there is an InfiniBand Switch on the other end and the other two Hosts are running in InfiniBand mode.
I notice that the new NIC is using a different Driver:
The existing, working NICs are using the "ib_ipoib" driver, and the new Host's NIC is using the "mlx4_en" driver.
I don't know of any esxcli or esxcfg command to change which driver is bound to the card and switch it to ib_ipoib.
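For completeness, this is roughly how I've been checking which driver each NIC is bound to (standard esxcli commands; the vmnic name below is just an example for my new card):

```shell
# Show all NICs with their bound driver, link state, and speed
esxcli network nic list

# Detailed info for one port (vmnic4 is a placeholder name)
esxcli network nic get -n vmnic4

# Show which Mellanox driver VIBs are installed
esxcli software vib list | grep -i mlx
```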
I've tried installing older drivers, going all the way back to the ESXi 5.0 Drivers, but nothing I did over several days brought those ports up.
The InfiniBand card's ports remain in "Down" state.
I am at my wit's end at this point, so I am reaching out to the community here.
Does anyone have any suggestions on how I can get this card working?