vmxnet3: Fix incorrect small packet padding

When running ESXi under QEMU there is an issue with the ESXi guest
discarding packets that are too short.  The guest discards any packet
shorter than the normal minimum length for an Ethernet frame (60
bytes).  This results in odd behaviour where other hosts or VMs on
other hosts can communicate with the ESXi guest just fine (since there
is a physical NIC somewhere doing the padding), but VMs on the same
host and the host itself cannot, because their ARP request packets are
too small for the ESXi guest to accept.

Someone in the past thought this was worth fixing and added code to
the vmxnet3 QEMU emulation to pad any received packet shorter than 60
bytes out to 60.  Unfortunately this code is wrong, or at least in the
wrong place: it pads the packet BEFORE taking into account the
vnet_hdr that the tap device prepends to it.  As a result it may add
padding, but it never adds enough; specifically, it adds 10 bytes (the
length of the vnet_hdr) less than it needs to.
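
To illustrate the arithmetic, here is a standalone sketch (not QEMU
code; it only assumes the usual sizes: a 10-byte struct virtio_net_hdr
and a 42-byte ARP request frame) of why pad-then-strip comes up short
while strip-then-pad does not:

/* pad_demo.c -- illustration only, not part of QEMU */
#include <stdio.h>

#define VNET_HDR_LEN 10   /* sizeof(struct virtio_net_hdr) */
#define ETH_MIN_LEN  60   /* minimum Ethernet frame length (without FCS) */
#define ARP_FRAME    42   /* 14-byte Ethernet header + 28-byte ARP payload */

static size_t pad(size_t len)
{
    return len < ETH_MIN_LEN ? ETH_MIN_LEN : len;
}

int main(void)
{
    size_t from_tap = ARP_FRAME + VNET_HDR_LEN;  /* 52 bytes in the buffer */

    /* Old order: pad the whole buffer, then strip the vnet header. */
    printf("pad then strip: %zu bytes\n", pad(from_tap) - VNET_HDR_LEN); /* 50 */

    /* Fixed order: strip the vnet header first, then pad the frame. */
    printf("strip then pad: %zu bytes\n", pad(from_tap - VNET_HDR_LEN)); /* 60 */
    return 0;
}

With the old ordering a 42-byte ARP request still reaches the guest as
a 50-byte frame and is dropped; with the swapped ordering it arrives
padded to the full 60 bytes.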

The following (hopefully "obviously correct") patch simply swaps the
order of processing the vnet header and the padding.  With this patch an
ESXi guest is able to communicate with the host or other local VMs.

Signed-off-by: Brian Kress <kressb@moose.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

@@ -1879,6 +1879,12 @@ vmxnet3_receive(NetClientState *nc, const uint8_t *buf, size_t size)
         return -1;
     }
 
+    if (s->peer_has_vhdr) {
+        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
+        buf += sizeof(struct virtio_net_hdr);
+        size -= sizeof(struct virtio_net_hdr);
+    }
+
     /* Pad to minimum Ethernet frame length */
     if (size < sizeof(min_buf)) {
         memcpy(min_buf, buf, size);
@@ -1887,12 +1893,6 @@ vmxnet3_receive(NetClientState *nc, const uint8_t *buf, size_t size)
         size = sizeof(min_buf);
     }
 
-    if (s->peer_has_vhdr) {
-        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
-        buf += sizeof(struct virtio_net_hdr);
-        size -= sizeof(struct virtio_net_hdr);
-    }
-
     vmxnet_rx_pkt_set_packet_type(s->rx_pkt,
         get_eth_packet_type(PKT_GET_ETH_HDR(buf)));