A large IO from a guest VM running Microsoft Windows Server 2008 R2, using the pvscsi driver to an RDM disk, is split into 64 KB IOs when sent to the storage array. The ESXi host profile has Disk.DiskMaxIOSize set to the default (32 MB).
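For completeness, here is a minimal sketch of how that setting could be confirmed from the ESXi shell. It assumes the host's Python interpreter plus the stock esxcfg-advcfg and esxcli tools are available and that it runs directly on the host; the exact command set differs a bit between ESXi 4.x and 5.x builds.

# Sketch: read back Disk.DiskMaxIOSize from the ESXi shell.
# Assumes the host's Python interpreter and the stock esxcfg-advcfg /
# esxcli tools; commands differ slightly between ESXi 4.x and 5.x.
import subprocess

# Classic advanced-configuration interface:
subprocess.call(["esxcfg-advcfg", "-g", "/Disk/DiskMaxIOSize"])

# esxcli equivalent on ESXi 5.x:
subprocess.call(["esxcli", "system", "settings", "advanced",
                 "list", "-o", "/Disk/DiskMaxIOSize"])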
Using Windows perfmon, I can monitor IO at the Windows "physical disk" layer, and it shows the large IOs being performed properly. The pvscsi driver, along with the Disk.DiskMaxIOSize value of 32 MB, should allow the large IOs through the IO stack. Performance is very good, but we want to further improve the efficiency of the large IOs. There is a non-trivial loss of efficiency and a distortion of IO statistics when large IOs are unnecessarily split. Splitting also puts extra pressure on the per-LUN queue depth, so queue-depth ceilings are hit sooner.
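To make the efficiency concern concrete, here is a toy calculation (my own illustration with hypothetical numbers, not measured data) of what the 64 KB split does to the IO count and queue-depth consumption for a single large guest IO:

# Toy illustration with hypothetical numbers (not measured data):
# how splitting inflates the IO count and queue-depth consumption.
GUEST_IO_KB = 1024   # e.g. a single 1 MB IO issued by Windows
SPLIT_KB = 64        # size the array actually sees on the wire
LUN_QDEPTH = 32      # hypothetical per-LUN queue depth

ios_on_wire = GUEST_IO_KB // SPLIT_KB
print("1 guest IO becomes %d IOs on the wire" % ios_on_wire)          # 16
print("IOPS counters are inflated %dx for the same throughput" % ios_on_wire)
print("one guest IO can occupy %.0f%% of a %d-deep LUN queue"
      % (100.0 * ios_on_wire / LUN_QDEPTH, LUN_QDEPTH))               # 50%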
Storage array performance statistics show that the maximum IO size ever received on the front-end SAN ports is 64 KB.
EMC PowerPath/VE for ESXi is used for multipath management.
An HP/Emulex SAN controller is in use. The Emulex driver module's scatter/gather segment count parameter, "lpfc_sg_seg_cnt", is at its default value of 64, which typically allows IO sizes up to 512 KB.
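For reference, a sketch of how that module parameter could be confirmed from the ESXi shell; the module name is an assumption here, since the Emulex driver is registered as lpfc820 on some ESXi builds and as plain lpfc on others:

# Sketch: list the Emulex driver module parameters (including
# lpfc_sg_seg_cnt) via esxcli on ESXi 5.x. The module name is an
# assumption -- lpfc820 on some builds, plain lpfc on others.
import subprocess

for module in ("lpfc820", "lpfc"):
    rc = subprocess.call(["esxcli", "system", "module", "parameters",
                          "list", "-m", module])
    if rc == 0:   # non-zero return means that module name was wrong
        break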
Why is the large IO being decomposed into multiple 64 KB IOs, and where would this process be documented?
What are the configuration parameters that can be changed to increase the disk IO size being sent to the storage array?
As a Linux analogy, the ESXi system is behaving as if "/sys/block/sd*/queue/max_sectors_kb" were set to 64.
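To spell the analogy out, on Linux the per-device transfer cap lives in sysfs next to the hardware ceiling, roughly as in this sketch (the device name "sda" is just a placeholder):

# Linux-side analogy only: read the per-device IO size limits from
# sysfs. "sda" is a placeholder device; max_hw_sectors_kb is the
# hardware ceiling, max_sectors_kb the current (tunable) limit.
DEV = "sda"

for attr in ("max_hw_sectors_kb", "max_sectors_kb"):
    with open("/sys/block/%s/queue/%s" % (DEV, attr)) as f:
        print("%s = %s KB" % (attr, f.read().strip()))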
Thanks for the help.
Dave B