If somehow we've gotten behind a lot, simply skip ahead, like the ehci code
does.
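For illustration, a minimal self-contained sketch of the skip-ahead idea
(all names are illustrative, not the actual uhci identifiers):

    #include <stdint.h>

    #define FRAME_TIME_NS 1000000LL        /* one UHCI frame = 1 ms */
    #define MAX_CATCHUP   16               /* frames worth replaying */

    /* Clamp how far behind the frame timer may be; anything older than
     * MAX_CATCHUP frames is simply skipped, as the ehci code does. */
    static int64_t skip_ahead(int64_t expire_time, int64_t now)
    {
        if (now - expire_time > MAX_CATCHUP * FRAME_TIME_NS) {
            expire_time = now - MAX_CATCHUP * FRAME_TIME_NS;
        }
        return expire_time;
    }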
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Before this patch uhci would process an unlimited number of frames when
behind on schedule, by setting the timer to a time already in the past,
causing the timer subsystem to immediately call the frame_timer function
again.
This would cause invalid cancellations of bulk queues when the catching up
processed more than 32 frames at a moment when the bulk qh was temporarily
unlinked (which the Linux uhci driver does).
This patch fixes this by processing a maximum of 16 frames in one go, and
always setting the timer one ms later, making the code behave more like the
ehci code.
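In the same illustrative style, the frame_timer body then looks roughly
like this (process_frame() and mod_timer() stand in for the real qemu
functions):

    #include <stdint.h>

    #define FRAME_TIME_NS 1000000LL        /* one UHCI frame = 1 ms */
    #define MAX_FRAMES    16

    struct uhci_sketch {
        int64_t  expire_time;              /* next scheduled timer run */
        uint16_t frnum;                    /* frame number register */
    };

    void process_frame(struct uhci_sketch *s);
    void mod_timer(struct uhci_sketch *s, int64_t when);

    void frame_timer_sketch(struct uhci_sketch *s, int64_t now)
    {
        int i, frames = (int)((now - s->expire_time) / FRAME_TIME_NS) + 1;

        if (frames > MAX_FRAMES) {
            frames = MAX_FRAMES;           /* at most 16 frames in one go */
        }
        for (i = 0; i < frames; i++) {
            process_frame(s);
            s->frnum = (s->frnum + 1) & 0x7ff;
            s->expire_time += FRAME_TIME_NS;
        }
        /* always re-arm one ms in the future, never at a time already
         * past, so the timer subsystem cannot recall us in a tight loop */
        mod_timer(s, now + FRAME_TIME_NS);
    }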
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Rather than using the magic value 32 in various places.
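For illustration, the define could look like this (the name is an
assumption, not necessarily the one used):

    /* frames a queue is kept valid without being seen in the schedule */
    #define QH_VALID 32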
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Re-arrange how we process frames / increase frnum / report pending
interrupts, to avoid a 1 ms delay in interrupt reporting to the guest.
For cases where the guest submits a single packet, waits for its
completion, and then re-submits, this increases the packet throughput
from 500 pkts/sec to 1000 pkts/sec. This benefits, for example,
redirected / virtual usb-to-serial converters.
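Sketched in the same illustrative style as above, the per-run ordering
becomes (update_irq() stands in for the real interrupt reporting code):

    void update_irq(struct uhci_sketch *s);

    void run_frames(struct uhci_sketch *s, int frames)
    {
        int i;

        for (i = 0; i < frames; i++) {
            process_frame(s);                  /* may complete packets */
            s->frnum = (s->frnum + 1) & 0x7ff; /* then advance frnum */
        }
        update_irq(s);  /* report pending interrupts in this run, not
                           one 1 ms timer tick later */
    }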
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
ehci_raise_irq(s, USBSTS_PCD) is applied immediately, so there is no need
to call commit_irq after it.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
I previously tried lowering the time between raising an interrupt and
rescanning the async schedule to see if the guest has queued a new
transfer, but that did not have any positive effect. I now believe the
cause is that lowering this time made it more likely to hit the 1 ms
interrupt threshold penalty for the next packet, as described in my
"ehci: Use uframe precision for interrupt threshold checking" commit.
Now that we do interrupt threshold handling with uframe precision, further
lowering this time from .5 to .25 ms gives an extra 15% improvement in
speed (MB/s) when reading from a simple USB-2.0 thumb-drive.
While at it also properly set the int_req_by_async flag for short packet
completions.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Before this patch, the following could happen:
1) Transfer completes, raises interrupt
2) .5 ms later we check if the guest has queued up any new transfers
3) We find and execute a new transfer
4) .2 ms later the new transfer completes
5) We re-run our frame_timer to write back the completion, but less than
   1 ms has passed since our last run, so frindex is not changed, and the
   interrupt threshold code delays the interrupt
6) 1 ms after the re-run our frame_timer runs again and finally delivers
   the interrupt
This leads to unnecessarily large delays of interrupts. This patch fixes
this by changing frindex to uframe precision and using that for interrupt
threshold control, making the interrupt fire at step 5 for guests which
have low interrupt threshold settings (like Linux).
Note that the guest still sees the frindex move in steps of 8 for migration
compatibility.
This boosts the Linux read speed of a simple, cheap USB thumb drive by 6%.
Changes in v2:
- Make the guest see frindex move in steps of 8 by modifying
  ehci_opreg_read, rather than using a shadow variable
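The guest-visible masking is then a small change in the register read
path; sketched (the helper is ours, the real change sits inline in
ehci_opreg_read):

    static uint32_t frindex_guest_view(uint32_t frindex)
    {
        /* frindex is kept with uframe (0.125 ms) precision internally;
         * mask off the low 3 bits so the guest still sees it move in
         * steps of 8, keeping migration compatibility */
        return frindex & ~7u;
    }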
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
ehci_fill_queue assumes that there is a one-to-one relationship between an
ep and a qh; this patch adds a check to ensure this.
Note I don't expect this to ever trigger; it is just something I noticed
the guest might do while working on other stuff. The only way this check
can trigger is if a guest mixes IN and OUT qtd-s in a single qh for a
non-control ep.
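The added check is small; a sketch of the idea (identifiers
illustrative, not the exact upstream code):

    /* in ehci_fill_queue(): stop filling once a qtd resolves to a
     * different endpoint than the queue's, i.e. the guest mixed IN and
     * OUT qtd-s in a single qh for a non-control ep */
    if (ep != q->ep) {
        break;
    }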
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Remove the short-circuiting of fetchqtd in fetchqh, so that the
qtd gets properly verified before completing the transaction.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
This is not allowed, except for clearing the active bit on cancellation,
so don't warn when the new token does not have its active bit set.
This unifies the cancellation path for modified qtd-s, and prepares
ehci_verify_qtd to be used as an extra check inside
ehci_writeback_async_complete_packet().
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Also drop the warning printf, which was there mainly because this was an
untested code path (as the previous bug fixes to it show), but that is no
longer the case now :)
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
A device reset does not affect the link state, only set_link does.
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit b9d03e352c added link auto-negotiation emulation; it would always
set the link up from the callback function. A problem exists if the
original link status was down: the link status should not be changed by
auto-negotiation.
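A sketch of the resulting callback logic (self-contained; the field and
macro names only loosely follow the e1000 emulation):

    #include <stdint.h>

    #define STATUS_LU 0x2                  /* link-up bit of STATUS */

    struct e1000_sketch {
        int      link_down;                /* as toggled by set_link */
        uint32_t status;
    };

    static void autoneg_done(struct e1000_sketch *s)
    {
        if (s->link_down) {
            return;                        /* link was down: keep it down */
        }
        s->status |= STATUS_LU;            /* only now report link up */
    }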
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We don't clean up the network (net_cleanup() is never called) if parsing
the "-device" parameters fails. I hit a problem where the tap device
created by the qemu-ifup script could not be removed by the qemu-ifdown
script. Some similar problems also exist in vl.c.
With this patch, if network initialization succeeds, a cleanup function
is registered to be called at qemu process termination.
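The registration itself is a one-liner once initialization has
succeeded; sketched (net_cleanup() takes no arguments in qemu):

    #include <stdlib.h>

    void net_cleanup(void);                /* existing teardown function */

    static void register_net_cleanup(void)
    {
        /* ensure tap devices are torn down (qemu-ifdown) even on exit
         * paths that never reach a manual net_cleanup() call */
        atexit(net_cleanup);
    }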
Signed-off-by: Amos Kong <akong@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Discard packets longer than 16384 bytes when !SBP, to match the hardware
behavior.
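Sketched, with the limit behind a macro (the macro name is an
assumption):

    #define MAXIMUM_ETHERNET_LPE_SIZE 16384

    /* in the receive path: real hardware never accepts frames larger
     * than 16384 bytes unless it has been told to store bad packets */
    if (size > MAXIMUM_ETHERNET_LPE_SIZE &&
        !(s->mac_reg[RCTL] & E1000_RCTL_SBP)) {
        return size;                       /* silently discard */
    }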
Signed-off-by: Michael Contreras <michael@inetric.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add support for compiling for GCOV test coverage, enabled
with '--enable-gcov' during configure.
Test coverage will be reported after each test.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The division routines do not read or write tcg registers,
but can raise fixed-point divide exceptions.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Don't load the displacement into a register first, add it second
so that tcg_gen_addi_i64 can eliminate zeros. Don't mask the
displacement first so that we don't turn small negative numbers
into large positive numbers.
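A self-contained illustration of the masking pitfall with a 20-bit
signed displacement (the helper is ours):

    #include <stdint.h>
    #include <stdio.h>

    /* portable sign-extension of a 20-bit field */
    static int64_t sext20(uint32_t raw)
    {
        raw &= 0xfffff;
        return (raw & 0x80000) ? (int64_t)raw - 0x100000 : (int64_t)raw;
    }

    int main(void)
    {
        uint32_t raw = 0xffff8;            /* encodes the displacement -8 */

        /* masking first leaves a large positive offset... */
        printf("masked:        %#x\n", (unsigned)(raw & 0xfffff));
        /* ...sign-extending recovers the small negative value */
        printf("sign-extended: %lld\n", (long long)sext20(raw));
        return 0;
    }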
Signed-off-by: Richard Henderson <rth@twiddle.net>
Giving the proper mask to disas_jcc allows us to generate an inline
comparison generating the carry/borrow with setcond.
In the very worst case, when we must use the external helper to compute
a value for CC, we generate (cc > 1) instead of (cc >> 1), which is only
very slightly slower on common cpus.
In the very best case, when the CC comes from a COMPARE insn and the
compiler is using ALCG with zero, everything folds out to become just
the setcond that the compiler wanted.
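For reference, ADD LOGICAL sets cc to 0..3 with the carry encoded in
bit 1 (cc of 2 or 3 means carry-out), which is why (cc > 1) and
(cc >> 1) agree. In the best case the carry needs no cc value at all,
since it falls out of an unsigned compare, which is exactly what
setcond produces:

    #include <stdint.h>

    /* carry-out of an unsigned add equals "the sum wrapped below an
     * input", i.e. a single unsigned less-than comparison */
    static unsigned add_carry(uint64_t a, uint64_t b, uint64_t *sum)
    {
        *sum = a + b;
        return *sum < a;
    }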
Signed-off-by: Richard Henderson <rth@twiddle.net>
After full conversion, we can audit the uses of LTGT cc ops
and see that none of the instructions can ever set CC=3.
Thus we can extend the table to treat that bit as ignored.
This fixes a regression wrt the pre-conversion translation
in which NE was used for both m=6 and m=7.
Signed-off-by: Richard Henderson <rth@twiddle.net>
While they aren't expensive, they aren't free to process. When we
know that the three cc helper variables are dead, don't kill them.
Signed-off-by: Richard Henderson <rth@twiddle.net>