Hello,
I have two ESXi servers with QLogic QLE3242 cards in them. One card works fine over active twinax to an isolated 10 gig storage network. The other card was going to be the network uplink to our core switches, carrying the VLANs that get made available to virtual machine networks. Currently no SFP+ will link up: VMware always reports the link as down and I can't get link lights on the ports. I tested the fiber with a light meter and I'm getting around -11 dBm at 850nm from both the card and the switch. I've tried the following SFPs:
Brocade SFP+ from fs.com
Broadcom SFP+ from fs.com
QLogic SFP+ from fs.com
Extreme Networks SFP+ from fs.com
Cisco SFP+ official from cablesandkits.com
None would link up, despite all of them powering on and illuminating the transmit laser on the multimode fiber.
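For what it's worth, here's roughly how I've been checking link state from the host shell. These are standard esxcli commands; `vmnic4` is just a placeholder for whichever uplink the QLE3242 shows up as on your host:

```shell
# List all physical NICs with driver, link state, and speed
esxcli network nic list

# Detailed info for one uplink (driver/firmware versions, link status);
# vmnic4 is a placeholder -- substitute the actual QLE3242 port
esxcli network nic get -n vmnic4

# Bounce the port after swapping an SFP+ module, in case the
# driver only probes the module on link bring-up
esxcli network nic down -n vmnic4
esxcli network nic up -n vmnic4
```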
Twinax doesn't work for our network uplinks because the core switch is in another rack. It does work for storage, since the storage system and its switches are in the same rack as the servers.
I have no issue using fs.com SFP+ modules for any other piece of equipment... including the Broadcom QLogic 57840 10Gigabit Adapter in a Dell FX2 chassis.
So does anyone have a recommendation for a cheap, low-profile, dual-port 10 gig fiber NIC? Or does anyone know a way to get any of the SFP+ modules I have WORKING in this piece of junk NIC? I steered clear of Intel-based NICs because of the monopoly they run, only supporting overpriced Intel-branded SFP+ modules. 99% of the cheap Intel-based NICs you find online are knock-offs, which only work with the allow_unsupported_sfp driver argument available on any operating system that is NOT VMware (I have no idea how much money Intel is paying VMware to keep that option out).
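For anyone who lands here searching, the Intel workaround mentioned above looks like this on Linux. The ixgbe module parameter is real, but this is a sketch for Linux hosts only (it does not exist in the ESXi driver, which is the whole complaint), and the file name under modprobe.d is just a convention:

```shell
# Load the ixgbe driver so it accepts third-party SFP+ modules (Linux only)
modprobe ixgbe allow_unsupported_sfp=1

# Or make it persistent across reboots
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
```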
Thanks!