At present net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per-type setting in
their BDs to control which checksum should be offloaded. To support
such hardware behavior, introduce a 'csum_flag' parameter to the
net_checksum_calculate() API to allow fine control over what type of
checksum is calculated.
Existing users of this API are updated accordingly.
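A minimal sketch of the extended API (flag names here are
illustrative, not necessarily the ones the patch uses):

    /* Per-type flags; callers OR together whatever the hardware's
     * BDs actually requested to be offloaded. */
    #define CSUM_IP  0x01
    #define CSUM_TCP 0x02
    #define CSUM_UDP 0x04
    #define CSUM_ALL (CSUM_IP | CSUM_TCP | CSUM_UDP)

    void net_checksum_calculate(uint8_t *data, int length, int csum_flag);

Callers that want the old "calculate everything" behavior simply pass
CSUM_ALL.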
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
qemu_hexdump()'s pointer to the buffer and length of the buffer are
closely related arguments but are widely separated in the argument
list (also, the format of <stdio.h> function prototypes usually has
the FILE* argument coming first).
Reorder the arguments as "fp, prefix, buf, size", which is more
logical.
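For reference, a sketch of the reordered prototype (parameter types
assumed from the existing declaration):

    /* FILE* first, as with <stdio.h> functions; buffer pointer and
     * length now sit next to each other. */
    void qemu_hexdump(FILE *fp, const char *prefix,
                      const void *bufptr, size_t size);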
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20200822180950.1343963-3-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Interrupt conditions occurring while masked are not being
signaled when later unmasked.
The fix is to raise/lower IRQs when IMASK is changed.
To avoid problems like this in future, consolidate
IRQ pin update logic in one function.
Also fix probable typo "IEVENT_TXF | IEVENT_TXF",
and update IRQ pins on reset.
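A hedged sketch of such a consolidated helper (register and event
mask names are illustrative): deriving the pin level from
IEVENT & IMASK means a condition latched while masked is signaled as
soon as IMASK is written:

    static void etsec_update_irq(eTSEC *etsec)
    {
        uint32_t ievent = etsec->regs[IEVENT].value;
        uint32_t imask  = etsec->regs[IMASK].value;
        uint32_t active = ievent & imask;

        /* Pin levels follow the masked event state, so a write to
         * IMASK re-evaluates pending conditions instead of losing
         * them. Called from IEVENT/IMASK writes and from reset. */
        qemu_set_irq(etsec->tx_irq,  !!(active & (IEVENT_TXF | IEVENT_TXB)));
        qemu_set_irq(etsec->rx_irq,  !!(active & (IEVENT_RXF | IEVENT_RXB)));
        qemu_set_irq(etsec->err_irq, !!(active & IEVENT_ERR_MASK));
    }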
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current code that handles Tx buffer descriptor ring scanning
employs the following algorithm:
1. Restore the current buffer descriptor pointer from TBPTRn
2. Process the current descriptor
3. If the current descriptor has the BD_WRAP flag set, set the
   current descriptor pointer to the start of the descriptor ring
4. If the current descriptor points to the start of the ring, exit
   the loop; otherwise increment the current descriptor pointer and
   go to #2
5. Store the current descriptor pointer in TBPTRn
The way the code is implemented results in the buffer descriptor ring
being scanned starting at offset/descriptor #0. While covering 99% of
the cases, this algorithm becomes problematic for a number of edge
cases. Consider the following scenario: the guest OS driver
initializes the descriptor ring to N individual descriptors and starts
sending data out. Depending on the volume of traffic and probably the
guest OS driver implementation, it is possible to hit an edge case
where a packet spread across 2 descriptors is placed in descriptors
N - 1 and 0, in that order (it is easy to imagine similar examples
involving more than 2 descriptors).
What happens then is that the aforementioned algorithm starts at
descriptor 0, sees a descriptor marked as BD_LAST, which it happily
sends out as a separate packet (very much malformed at this point),
then the iteration continues and the first part of the original packet
is tacked onto the next transmission, which ends up being bogus as
well.
This behaviour can be pretty reliably observed when scp'ing data from
a guest OS via a TAP interface for files larger than 160K (every time
for 700K+).
This patch changes the scanning algorithm to do the following (a code
sketch follows the manual excerpt below):
1. Restore the "current" buffer descriptor pointer from TBPTRn
2. If the "current" descriptor does not have BD_TX_READY set, go
   to #6
3. Process the current descriptor
4. If the "current" descriptor has the BD_WRAP flag set, set the
   "current" descriptor pointer to the start of the descriptor
   ring; otherwise increment "current" by the size of one
   descriptor
5. Go to #2
6. Save the "current" buffer descriptor pointer in TBPTRn
This way we preserve the information about which descriptor was
processed last and always start where we left off avoiding the original
problem. On top of that, judging by the following excerpt from
MPC8548ERM (p. 14-48):
"... When the end of the TxBD ring is reached, eTSEC initializes TBPTRn
to the value in the corresponding TBASEn. The TBPTR register is
internally written by the eTSEC’s DMA controller during
transmission. The pointer increments by eight (bytes) each time a
descriptor is closed successfully by the eTSEC..."
the revised algorithm might also be a more correct way of emulating
this aspect of the eTSEC peripheral.
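A sketch of the revised walk (helper names are hypothetical; flags
and types as in the eTSEC model):

    static void etsec_walk_tx_ring_sketch(eTSEC *etsec, int ring)
    {
        hwaddr bd_addr = etsec_get_tbptr(etsec, ring);  /* 1. restore */
        eTSEC_rxtx_bd bd;

        for (;;) {
            etsec_read_bd(etsec, bd_addr, &bd);
            if (!(bd.flags & BD_TX_READY)) {
                break;                            /* 2. stop where we
                                                     left off last time */
            }
            etsec_process_tx_bd(etsec, &bd);      /* 3. transmit */

            if (bd.flags & BD_WRAP) {             /* 4. wrap around... */
                bd_addr = etsec_get_tbase(etsec, ring);
            } else {
                bd_addr += sizeof(eTSEC_rxtx_bd); /* ...or advance */
            }
        }                                         /* 5. loop */
        etsec_set_tbptr(etsec, ring, bd_addr);    /* 6. save position */
    }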
Cc: Alexander Graf <agraf@suse.de>
Cc: Scott Wood <scottwood@freescale.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: qemu-devel@nongnu.org
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Depending on the QEMU network setup it is possible for us to receive a
complete Ethernet packet that is less than 64 bytes long. One such
example is when QEMU, configured to use a standalone TAP device (not
set to be a part of any bridge), receives an ARP packet. In cases like
that we need to add more than just 4 bytes of CRC padding and ensure
that our payload is at least 60 bytes long, such that, when combined
with the CRC padding bytes, the resulting size is at least the 802.3
minimum frame size (64 bytes). Failing to do that results in the code
in etsec_walk_rx_ring() setting BD_RX_SH which, in turn, makes the
corresponding Linux driver in the guest reject the buffer as a runt
packet.
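A minimal sketch of the fix (constant name illustrative): pad the
payload to 60 bytes so that, together with the 4 CRC bytes, the frame
reaches the 802.3 minimum of 64:

    #define MIN_FRAME_LEN 60  /* 64-byte minimum frame minus 4-byte CRC */

    uint8_t padded[MIN_FRAME_LEN];

    if (size < MIN_FRAME_LEN) {
        memcpy(padded, buf, size);
        memset(padded + size, 0, MIN_FRAME_LEN - size);
        buf  = padded;
        size = MIN_FRAME_LEN;
    }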
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-19-git-send-email-peter.maydell@linaro.org
The free() and g_free() functions both happily accept
NULL on any platform QEMU builds on. As such putting a
conditional 'if (foo)' check before calls to 'free(foo)'
merely serves to bloat the lines of code.
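In other words, cleanups of this shape:

    /* Before: redundant NULL check */
    if (foo) {
        g_free(foo);
    }

    /* After: free() and g_free() are defined to be no-ops on NULL */
    g_free(foo);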
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The BH will be scheduled when etsec->rx_buffer_len becomes 0, which is
the condition for queuing.
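Roughly (sketch only; the BH field name is assumed):

    if (etsec->rx_buffer_len == 0) {
        /* Buffer drained: this is exactly the queuing condition, so
         * ask the main loop to flush whatever the peer queued. */
        qemu_bh_schedule(etsec->flush_bh);
    }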
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1436955553-22791-7-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When etsec_receive returns 0, the peer would queue the packet as if
.can_receive had returned false. Drop etsec_can_receive and let
etsec_receive carry the semantics.
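A simplified sketch of the merged semantics (the real body differs;
etsec_rx_ring_write is assumed from the model):

    static ssize_t etsec_receive(NetClientState *nc,
                                 const uint8_t *buf, size_t size)
    {
        eTSEC *etsec = qemu_get_nic_opaque(nc);

        if (etsec->rx_buffer_len) {
            return 0;  /* busy: peer queues the packet, exactly as if
                          .can_receive had returned false */
        }
        etsec_rx_ring_write(etsec, buf, size);
        return size;
    }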
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Message-id: 1436955553-22791-6-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
IRQs are lowered when the corresponding ievent bit is cleared, so
irq_pulse makes no sense here...
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Codespell found and fixed these new typos:
* doesnt -> doesn't
* funtion -> function
* perfomance -> performance
* remaing -> remaining
A coding style issue (line too long) was fixed manually.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This implementation doesn't include ring priority, TCP/IP offload, or
QoS.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>