register has the INTHIGH bit set, the controller is going to keep the
line low when *not* asserting an interrupt, and since EISA level-triggered
interrupts are active-low, this would result in a perpetual interrupt storm.
So, if INTHIGH is set in INTDEF, establish our interrupt handler as
IST_EDGE, which will program the EISA PIC to detect the interrupt on
the rising edge of the IRQ line.
does indeed mean "IRQ signal is active-high", but "else edge" is not
correct; level-triggered EISA interrupts are active-low, and edge-triggered
EISA interrupts are rising-edge, so INTHIGH would in fact mean "edge".
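The decision described above can be sketched as follows. The register and bit names come from the commit text, but the bit position and the helper function are illustrative placeholders, not the real driver code:

```c
#include <assert.h>
#include <stdint.h>

#define INTDEF_INTHIGH  0x80    /* hypothetical bit position, not the real one */

enum ist_type { IST_LEVEL, IST_EDGE };

/*
 * If INTHIGH is set, the controller idles the IRQ line low, so a
 * level-triggered (active-low) setup would interrupt forever; use
 * rising-edge detection instead.
 */
static enum ist_type
choose_ist(uint8_t intdef)
{
	return (intdef & INTDEF_INTHIGH) ? IST_EDGE : IST_LEVEL;
}
```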
allocating memory for them, requesting all the metadata contents of
these buffers (and repeating in the unlikely case of the number of
buffers increasing too much since the estimate) and then immediately
throwing all the contents away just to count how many buffers there were,
simply get the initial estimate from the kernel and subtract the slop.
Reduces system CPU usage of "systat vm" by approx 80% for any system
with a reasonable number of buffers.
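A minimal sketch of the new counting strategy, with a hypothetical helper (the real code obtains the estimate via sysctl; the slop handling here is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical helper: the kernel's size estimate includes some slop to
 * allow for growth between the estimate and a real request, so
 * subtracting the slop yields a buffer count without fetching and then
 * discarding every buffer's metadata.
 */
static size_t
estimated_nbuf(size_t kern_estimate, size_t slop)
{
	return (kern_estimate > slop) ? kern_estimate - slop : 0;
}
```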
done for aarch64, arm, and powerpc. Otherwise, the child is trapped at the
PTRACE_BREAKPOINT_ASM (== trapa) instruction indefinitely.
Fix tests/lib/libc/sys/t_ptrace_wait*:core_dump_procinfo.
Rename the variables as well. Their previous name 'ci' was not easy to
understand; the 'i' may have meant 'stack item'. The new name 'cs'
simply means 'control statement'.
No functional change.
For keywords that have a single spelling variant (such as __packed),
write this form in the source, to make it searchable. This also avoids
a few calls to malloc.
Previously, some keywords had leading underscores and some did not,
which was inconsistent.
No functional change.
This aligns more closely with the grammar from GCC's parser. The global
cleanup from the grammar rule 'external_declaration:
top_level_declaration' is not performed anymore, which doesn't matter
since there is nothing to clean up after a single semicolon.
No functional change.
The alloclist remains global rather than per-RAID, so initialize that
pool separately from the rest.
The remainder of pools in RF_Pools_s are now per-RAID pools. Mostly
mechanical changes to functions to allocate/destroy per-RAID pools.
Needed to make raidPtr available in certain cases to be able to find
the per-RAID pools.
Extend rf_pool_init() to populate a per-RAID wchan value that is
unique to each pool for a given RAID device.
TODO: Complete the analysis of the minimum number of items that are
required for each pool to allow IO to progress (i.e. so that a request
for pool resources can always be satisfied), and dynamically scale
minimum pool sizes based on RAID configuration.
RAIDframe can't deal with that, so create a dedicated pool of buffers
to use for IO. PR_WAITOK is fine here, as we pre-allocate more than
we need to guarantee that IO can make progress. Tuning of the pool is
still to come.
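A stand-alone sketch of the pre-allocation idea, in plain userland C rather than the kernel pool(9) API; the buffer size and prime count are placeholders:

```c
#include <assert.h>
#include <stdlib.h>

#define IOBUF_SIZE   4096   /* placeholder buffer size */
#define IOBUF_PRIME  8      /* pre-allocated items; tuning TBD */

struct iobuf { struct iobuf *next; char data[IOBUF_SIZE]; };

static struct iobuf *iobuf_free;

/*
 * Prime the pool with more items than the worst case needs, so IO can
 * always make progress even when a later allocation would block.
 */
static void
iobuf_pool_prime(void)
{
	for (int i = 0; i < IOBUF_PRIME; i++) {
		struct iobuf *b = malloc(sizeof(*b));
		b->next = iobuf_free;
		iobuf_free = b;
	}
}

/* Pop a buffer from the freelist; NULL means the pool is exhausted. */
static struct iobuf *
iobuf_get(void)
{
	struct iobuf *b = iobuf_free;
	if (b != NULL)
		iobuf_free = b->next;
	return b;
}
```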
on Alpha, and furthermore it's unlikely that any given context switch
will be returning to one even if the process has them. So, re-arrange
the RAS processing in cpu_switchto() so that the most likely code paths
are predicted by the branch predictor. On an EV4-class processor, this
will save ~4-6 cycles on just about every context switch.
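The layout idea can be illustrated with the BSD __predict_false() hint (a fallback definition is provided so the sketch stands alone; the structure and field are hypothetical stand-ins, not the real struct proc):

```c
#include <assert.h>
#include <stdbool.h>

#ifndef __predict_false
#define __predict_false(x) __builtin_expect((x) != 0, 0)
#endif

struct proc_sketch { int p_nras; };  /* hypothetical: count of RAS regions */

/*
 * Most processes register no restartable atomic sequences, so hint the
 * compiler that the RAS-fixup path is the unlikely one; the branch
 * predictor then favors the common fall-through case.
 */
static bool
needs_ras_check(const struct proc_sketch *p)
{
	if (__predict_false(p->p_nras != 0))
		return true;
	return false;
}
```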
any <sys/*.h> headers and for the COHERENCY_UNIT and CACHE_LINE_SIZE
defaults to be provided after the <machine/*.h> includes, but before the
<sys/*.h> includes.
COHERENCY_UNIT and CACHE_LINE_SIZE are used by a few <sys/*.h> files.
I checked that a handful of kernel builds produce the same binary
before and after this change. I'll check more.
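The header pattern being described is roughly the following (the values are placeholders, not the actual machine-independent defaults):

```c
/*
 * After the <machine/*.h> includes but before any <sys/*.h> includes:
 * supply defaults only if the machine headers did not already define
 * these. The values below are placeholders.
 */
#ifndef COHERENCY_UNIT
#define COHERENCY_UNIT  64      /* bytes */
#endif
#ifndef CACHE_LINE_SIZE
#define CACHE_LINE_SIZE COHERENCY_UNIT
#endif
```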
- Use __CTASSERT() instead of rolling our own compile-time assertion
using cpp.
- Use __BIT() &c instead of rolling our own.
- Improve some comments.
- Define a default FP_C and FPCR value that is self-consistent, and
initialize it properly at process creation time.
- Fix signal information when the trap shadow cannot be resolved.
- Use defined constants rather than magic numbers for the exception
summary bits.
- Add a machdep sysctl to enable FP software-completion debugging.
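Two of the list items above (the compile-time assertion and __BIT() cleanups) can be illustrated like this; the fallback macro definitions and the bit positions are placeholders so the sketch stands alone, not the real Alpha exception-summary layout:

```c
#include <assert.h>
#include <stdint.h>

#ifndef __BIT
#define __BIT(n)  (1ULL << (n))              /* fallback for the sketch */
#endif
#ifndef __CTASSERT
#define __CTASSERT(x) _Static_assert(x, #x)  /* fallback for the sketch */
#endif

/*
 * Exception-summary bits as named constants instead of magic numbers
 * (bit positions here are placeholders).
 */
#define FP_SUM_INV  __BIT(1)    /* invalid operation */
#define FP_SUM_DZE  __BIT(2)    /* division by zero */

/* Compile-time check instead of a hand-rolled cpp assertion. */
__CTASSERT(FP_SUM_INV != FP_SUM_DZE);
```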