This ldscript is not needed and actually makes things worse by putting
everything in one LOAD segment, which then needs rwx permissions.
Remove it so that we get two LOAD segments with better permissions.
Fixes PR 57323.
The only effect of the `volatile' qualifier on an asm block with
outputs is to force the instructions to appear in the generated code,
even if the outputs end up being unused. Since these instructions
have no (architectural) side effects -- provided %fs is set
correctly, which must be the case here -- there's no need for the
volatile qualifier, so nix it.
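As a sketch of the distinction (rdfsword is a hypothetical helper, not
the code touched here): with outputs and no volatile, gcc may delete the
load entirely when the result goes unused, which is exactly what we want
for a side-effect-free read.

    /*
     * Hypothetical example: an output-only asm block without
     * "volatile".  If v ends up unused, gcc may delete the
     * instruction -- safe here, because a %fs-relative load has
     * no architectural side effects.
     */
    static inline unsigned long
    rdfsword(void)
    {
            unsigned long v;

            __asm("movq %%fs:0, %0" : "=r"(v));
            return v;
    }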
The only effect of the `volatile' qualifier on an asm block with
outputs is to force the instructions to appear in the generated code,
even if the outputs end up being unused. Since these instructions
have no (architectural) side effects -- provided %gs is set
correctly, which must be the case here -- there's no need for the
volatile qualifier, so nix it.
This convinces gcc to do less -- make a smaller stack frame, compute
fewer conditional moves in favour of predicted-not-taken branches --
in the fast path where we are sleepable as the caller expects.
I wasn't able to convince it to do the ncsw loop with a
predicted-not-taken branch, but let's leave the __predict_false in
there anyway because it's still a good prediction.
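For illustration, a self-contained sketch of the pattern (fast_path and
rare are made-up names; __predict_false is reproduced from
<sys/cdefs.h>):

    /* NetBSD's <sys/cdefs.h> wrapper around __builtin_expect(),
     * copied here so the sketch stands alone. */
    #define __predict_false(exp)    __builtin_expect((exp) != 0, 0)

    int
    fast_path(int rare)
    {
            if (__predict_false(rare)) {
                    /* Slow path: gcc lays this out as a
                     * predicted-not-taken branch instead of
                     * computing a conditional move. */
                    return -1;
            }
            return 0;       /* fast path falls straight through */
    }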
Root intr xfers require taking adaptive locks, which is forbidden
while polling.
This is not great -- any USB transfer completion callbacks might try
to take adaptive locks, not just uhub_intr, and that will always
cause trouble. We get lucky with ukbd_intr because it's not
MP-safe, so it relies only on the kernel lock (a spin lock) anyway.
But this change brings xhci in line with ehci.
PR kern/57326
XXX pullup-8
XXX pullup-9
XXX pullup-10
The lock gets set with atomic_exchange() -> __sync_lock_test_and_set(),
which sets the value to 255 instead of 1. Check for a taken lock
with "!= 0" instead of "== 1". This should work on all architectures.
Ok: Matthew Green
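A minimal sketch of the pitfall (lock_word and try_lock are
hypothetical, not the actual code):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_uchar lock_word;

    static bool
    try_lock(void)
    {
            /*
             * When atomic_exchange() lowers to
             * __sync_lock_test_and_set(), the value stored in the
             * lock word is implementation-defined (255 on some
             * architectures), so test the previous value against
             * 0, never against 1.
             */
            return atomic_exchange(&lock_word, 1) == 0; /* not "== 1" */
    }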
endian machines. When long double support was added, the old code was kept
for the regular double code. This code was never used because WIDE_DOUBLE
was always defined in the Makefile. Remove that old code, and conditionalize
the WIDE_DOUBLE code based on whether long doubles differ from doubles
on the specific platform.
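One way the condition could be expressed (a sketch, assuming a
comparison of mantissa widths is a suitable test for "long double
differs from double"):

    /*
     * Sketch: enable the long-double conversion path only when
     * long double is actually wider than double on this platform.
     */
    #include <float.h>

    #if LDBL_MANT_DIG != DBL_MANT_DIG
    #define WIDE_DOUBLE
    #endif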