Broadcom + Hyper-V + VMQ’s

Posted on Updated on

There have been a lot of posts on the internet discussing VMQ’s. Microsoft had published this LINK to ensure that the best performance was experienced on Virtual machines. VMQ is supposed to offer an intelligent buffer, whereby virtual machines benefit from network cards which can offload and prioritise virtual machine traffic. The fact is that the Broadcom cards cause high latency issues, and inconsistency with network traffic.

I tried a few tests:

1.5 GB file:

  • Copying from Physical Machine to Physical Machine:  <10 seconds
  • Copying from Virtual machine to Physical Machine: 34 Seconds
  • Copying from Virtual to Virtual machine: >1 minute

This pointed me to the settings within Hyper-V for the Networking interface and the iSCSI disk layer. I first checked the disks, and the IOPS on the SAN was hitting a maximum of 600 IOPS which is less than average throughput for the SAN, considering there are 24 virtual machines on the SAN volume. Then I tried the network throughput by using a few linux tools, and this proved that the Network interface was at fault. The strangest thing was that the throughput would hit about 200MB/sec and then drop off to about 15K/sec and then spike back up again. I then looked at the network interface and disabled the following setting within the network adapter:

Once this was completed then performance of the virtual machines suddenly increased. They are now running as fast as the Physical ones.

Now its time to change to intel cards.

2 thoughts on “Broadcom + Hyper-V + VMQ’s

    Sanjay Ramlochund said:
    September 24, 2015 at 4:34 am

    Thanks to share.
    Should the Virtual machin Queues MQ be disabled also

      wortontech responded:
      September 24, 2015 at 5:05 am

      These can be left enabled. I would recommend getting an Intel card for Hyper-V network and cluster heartbeat. I have done lots of these.

Leave a comment