Hyper-V 2012 R2 Very Slow Network Performance on Dell Server with Broadcom NICs

We had good luck with Dell servers in the past, so when it came time to acquire another physical server, we called our account rep at Dell, who was more than happy to give us a quote that was not really any better than what we could find online. VMs come and go; you spin them up so easily that it's rarely a hands-on ordeal, and some of the satisfaction has gone out of it. In a time when physical servers are becoming an endangered species (outside of data centers somewhere in the cloud), it was exciting and refreshing to unbox a real, honest-to-goodness physical server.

Dell PowerEdge R320 + 32 GB RAM + 8 fast 1 TB drives + 2 Broadcom NICs + Performance model

We slapped it on the rails and fired it up. The first day was fine because it was mostly hardware fun and the basic install. Then on day two, after Server 2012 R2 was installed with Hyper-V enabled and a couple of guest VMs running, we noticed that network performance on the host was shockingly slow. Uh-oh, what was configured incorrectly? We had been quite methodical when setting everything up, so we double-checked everything to make sure nothing obvious was missing. We couldn't find anything noticeably wrong with the server config, so it was time to dig a little deeper.

Symptoms:

  • Everything moved like molasses
  • The Hyper-V host was getting by, but it acted less like an enterprise-grade server and more like a nine-year-old Celeron client with 500 MB of memory
  • Guest VMs in Hyper-V Manager took far too long to complete simple operations
  • Any connection to the network outside the physical box was ridiculously slow
  • Guest VM graphics painted slowly, and desktops were choppy rather than smooth

Something was definitely wrong.
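
Before touching any settings, it helps to put hard numbers on "slow." A throwaway script like the sketch below times TCP connections to a nearby host, turning "ridiculously slow" into milliseconds you can compare before and after each change. The host address and port here are placeholders; point them at something reachable on your own network, such as your gateway or another server's RDP port.

```python
# Rough latency probe: time TCP connects to a nearby host so "slow"
# becomes a number you can compare before and after each tweak.
# HOST and PORT are placeholders; substitute something reachable on
# your network (a gateway, another server's RDP port, etc.).
import socket
import time

HOST, PORT = "192.168.1.1", 3389  # placeholders for your network
SAMPLES = 10

for i in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"sample {i + 1}: {elapsed_ms:.1f} ms")
    except OSError as exc:
        print(f"sample {i + 1}: failed ({exc})")
    time.sleep(1)
```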

To rule out the most obvious cause, I reinstalled the absolute latest Broadcom drivers just to be sure. Then I found several sources recommending that "TCP offloading" be disabled, which on our server was called "TCP / UDP Checksum Offload (IPv4 and 6)," so I took the advice. Once it was disabled there was, you guessed it, absolutely no change. Other reputable advice suggested disabling "Large Send Offload V2 (IPv4 and 6)," so again I complied. No change. I also disabled Virtual Machine Queues (VMQ) on the NIC dedicated to guest traffic, as one article suggested. Sadly, no difference.
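
For reference, those same Device Manager toggles can be flipped from the command line. The sketch below drives the stock NetAdapter cmdlets (Disable-NetAdapterChecksumOffload and Disable-NetAdapterLso, which ship with Server 2012 R2) from Python; the adapter name "Ethernet" is a placeholder, so run Get-NetAdapter first to find yours. In our case neither change made any difference, so don't be surprised if these leave your throughput unchanged too.

```python
# Sketch: the same offload changes made from an elevated prompt instead
# of Device Manager, by shelling out to the built-in NetAdapter cmdlets.
# The adapter name is a placeholder; list yours with Get-NetAdapter.
import subprocess

ADAPTER = "Ethernet"  # placeholder; substitute your NIC's name

commands = [
    # Disables the IPv4/IPv6 TCP and UDP checksum offloads
    f'Disable-NetAdapterChecksumOffload -Name "{ADAPTER}"',
    # Disables Large Send Offload V2 for IPv4 and IPv6
    f'Disable-NetAdapterLso -Name "{ADAPTER}"',
]

for cmd in commands:
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True,
        text=True,
    )
    status = "ok" if result.returncode == 0 else result.stderr.strip()
    print(f"{cmd} -> {status}")
```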

Our server has two NICs, and all of these suggestions came in different variations: disable the setting on both the host and the guest NIC, on one or the other, and so on. I kept researching and noticed that the VMQ recommendation kept popping up, so I decided to take another look at it, this time referencing an article from Dell.

I disabled VMQ on both the guest and the host NIC, and after that, boom, it was fast. Pings went back to <1 ms every time instead of 300 ms or more. Everything was working as it should, and we could move on to other tasks. After some minor tweaks, it was apparent that we were finally utilizing the full bandwidth of our gigabit network connections. Apparently, without a network infrastructure that can support VMQ, the feature slows things down dramatically.

These are the steps needed for 2012 R2:

  • Open Network Connections on the server
  • Right-click the NIC and select Properties from the menu
  • Click Configure in the Properties window
  • On the Advanced tab, scroll to VMQ and select Disable
  • Click OK

Watch your speed increase to what you were expecting! Your headache and frustration are over…
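
If you would rather script the fix than click through the GUI (handy when you have more than one NIC to touch), the sketch below does the same thing with the Disable-NetAdapterVmq cmdlet that ships with Server 2012 R2, then prints the VMQ state for every adapter so you can confirm Enabled now reads False. The adapter names are placeholders for your host and guest NICs.

```python
# Sketch: disable VMQ on both NICs from a script instead of the GUI,
# using the built-in Disable-NetAdapterVmq cmdlet, then verify.
# Adapter names are placeholders; list yours with Get-NetAdapter.
import subprocess

ADAPTERS = ["NIC1", "NIC2"]  # placeholders: host NIC and guest NIC

for name in ADAPTERS:
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f'Disable-NetAdapterVmq -Name "{name}"'],
        check=True,  # raise if the cmdlet reports an error
    )

# Show the resulting VMQ state; Enabled should now be False everywhere.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-NetAdapterVmq | Format-Table Name, Enabled"],
    check=True,
)
```

Re-run the latency probe from earlier and your pings should be back under a millisecond.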