NetBSD rbtree(3) is not relocatable, so this extra step is needed.
Unfortunately, there's no easy way to automate detection of where we
need to apply this in ported code...
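For illustration, a plausible shape of that extra step (a hedged
sketch, not the actual ported code): a NetBSD rb node holds pointers
into the tree, so an element can't simply be copied to a new address;
it has to come out of the tree before the move and go back in
afterwards.

    #include <sys/rbtree.h>

    /*
     * tree, old, and new are hypothetical; rb_tree_remove_node and
     * rb_tree_insert_node are the real rbtree(3) entry points.
     */
    rb_tree_remove_node(&tree, old);
    memcpy(new, old, sizeof(*new));     /* relocate the element */
    rb_tree_insert_node(&tree, new);    /* re-link at the new address */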
Requires breaking the rbtree(3) abstraction, but this is necessary
because the body of the loop often frees the element; as it stood, we
had a huge pile of use-after-free going on.
Requires changing struct interval_tree_node's rbnode member to match
the Linux name, since we now use container_of here, and radeon relies
on this.
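Two hedged sketches of what this looks like, with names assumed
rather than copied from the diff. First, when the loop body may free
the current element, the successor has to be fetched before the body
runs:

    struct interval_tree_node *node, *next;

    for (node = RB_TREE_MIN(&tree); node != NULL; node = next) {
        /* grab the successor first -- the body may free node */
        next = rb_tree_iterate(&tree, node, RB_DIR_RIGHT);
        process_and_maybe_free(node);   /* hypothetical loop body */
    }

Second, with the rb node embedded under the member name the Linux
code expects, container_of recovers the containing object from a
struct rb_node pointer:

    struct interval_tree_node {
        struct rb_node rb;          /* member name assumed */
        unsigned long start, last;
    };

    /* rbp: a struct rb_node * obtained from a tree traversal */
    struct interval_tree_node *it =
        container_of(rbp, struct interval_tree_node, rb);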
While here, reduce it to membar_exit -- it's obviously not needed for
store-before-load here (although alpha doesn't have anything weaker
than the full sequential consistency `mb'), and although we do need a
store-before-load (and load-before-load) to spin waiting for the CPU
to wake up, that already happens a few lines below with alpha_mb in
the loop anyway. So no need for membar_sync, which is just `mb'
under the hood -- deleting the membar_sync in this place can't hurt.
The membar_sync had been inserted automatically when converting from
an older style of atomic_ops(3) API.
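Roughly, the pattern in question, with every name except the barriers
hypothetical:

    ci->ci_stackva = stackva;   /* hypothetical setup stores */
    membar_exit();              /* release: prior stores ordered
                                 * before the next store */
    ci->ci_flags |= CPUF_GO;    /* hypothetical go flag */

    /*
     * Spin until the CPU reports in.  The full barrier in every
     * iteration is what provides the store-before-load (and
     * load-before-load) ordering, so no membar_sync above.
     */
    while ((ci->ci_flags & CPUF_RUNNING) == 0)  /* hypothetical */
        alpha_mb();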
- Use atomic_store_release and atomic_load_consume for associating a
freshly constructed pool_cache with its underlying pool (sketched
after this list). The pool gets published in various ways before the
pool cache is fully constructed.
=> Nix membar_sync -- no store-before-load is needed here.
- Take pool_head_lock around the sysctl kern.pool TAILQ_FOREACH. Then
take a reference count, and drop the lock, around the copyout.
=> Otherwise, pools could be partially initialized or freed while
we're still trying to read from them -- and in the worst case,
we might see a corrupted view of the tailq.
=> If we kept the lock around copyout, this could deadlock in memory
allocation.
=> If we didn't take a reference count while releasing the lock, the
pool could be destroyed while we're trying to traverse the list,
sending us into oblivion instead of the next element.
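Condensed, the two items above look roughly like this (pool_head,
pr_poollist, and pool_head_lock are real names from subr_pool.c; the
pr_cache pointer, pr_refcnt, and the copyout details are simplifying
assumptions):

    struct pool *pp;
    struct pool_cache *pc;
    int error = 0;

    /* Publish: every store initializing pc happens before the
     * release store, and pairs with the consume load below. */
    atomic_store_release(&pp->pr_cache, pc);
    /* ... a reader that found the pool first: */
    pc = atomic_load_consume(&pp->pr_cache);

    /* The sysctl kern.pool walk: */
    mutex_enter(&pool_head_lock);
    TAILQ_FOREACH(pp, &pool_head, pr_poollist) {
        pp->pr_refcnt++;        /* pin pp across the unlock */
        mutex_exit(&pool_head_lock);

        /* copyout may sleep or allocate, so the lock is dropped */
        error = copyout(&elem, uaddr, sizeof(elem));

        mutex_enter(&pool_head_lock);
        pp->pr_refcnt--;        /* pp stayed valid, so the TAILQ_NEXT
                                 * taken by FOREACH is still sane */
        if (error)
            break;
    }
    mutex_exit(&pool_head_lock);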
- Serialize updates to lockstat_enabled, lockstat_dev_enabled, and
lockstat_dtrace_enabled with a new __cpu_simple_lock.
- Use xc_barrier to obviate any need for additional membars in
lockstat_event.
- Use atomic_load/store_* for access that might not be serialized by
lockstat_lock or lockstat_enabled_lock.
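Sketched together, with the flag names from the text and the rest
assumed:

    #include <sys/atomic.h>     /* atomic_store_relaxed, ... */
    #include <sys/xcall.h>      /* xc_barrier */

    static __cpu_simple_lock_t lockstat_enabled_lock;

    /* hypothetical shape of an enable/disable update */
    static void
    lockstat_set_dev_enabled(u_int enable)
    {
        __cpu_simple_lock(&lockstat_enabled_lock);
        atomic_store_relaxed(&lockstat_dev_enabled, enable);
        atomic_store_relaxed(&lockstat_enabled,
            lockstat_dev_enabled | lockstat_dtrace_enabled);
        __cpu_simple_unlock(&lockstat_enabled_lock);

        /* Wait for every CPU to pass through a cross-call, so none
         * can still be inside lockstat_event with the old value --
         * this is what makes extra membars there unnecessary. */
        xc_barrier(0);
    }

    /* unserialized reader, e.g. at the top of lockstat_event: */
    if (atomic_load_relaxed(&lockstat_enabled) == 0)
        return;

Presumably a __cpu_simple_lock rather than a mutex(9) because
lockstat instruments the ordinary lock primitives themselves, so its
control path has to avoid them.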
Pre-C90 argument declarations have been outdated for more than 30
years now, so mention that fact in the constant name. This reduces
potential confusion with other occurrences of the words 'arg' or
'argument'.
No functional change.
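For reference, the construct in question, next to its C90 replacement:

    /* pre-C90 "old style" argument declarations */
    int
    add(a, b)
        int a, b;
    {
        return a + b;
    }

    /* C90 prototype style */
    int
    add(int a, int b)
    {
        return a + b;
    }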
The commit that introduced the assertion failure looks innocent: it
only adds a few predefined functions for GCC mode. Nevertheless,
before that commit, lint consistently complained about 'error: void
type illegal in expression [109]', which doesn't make sense either.
This fix also removes the creative use of the initialization stack to
store the type of the statement expression. Having a separate stack for
these statement expressions makes the code easier to understand.
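For context, a GCC statement expression is a braced block in
expression position; its type and value are those of its last
statement, which is exactly what lint has to track:

    /* GCC extension: ({ ... }) is an expression whose type and
     * value come from the last statement in the block */
    #define imax2(a, b) ({              \
        int a_ = (a), b_ = (b);         \
        a_ > b_ ? a_ : b_;              \
    })

    int m = imax2(3, 7);    /* m == 7, type int */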
Filling deallocated memory with 0x00 may hide errors, so fill it with
0xA5 instead.
While this doesn't change anything about the test for the assertion
failure after a do-while loop (see t_integration.sh, test case
assertion_failures), it may detect other, similar bugs earlier.
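The idea, as a minimal userland sketch (the wrapper name is made up;
lint's own allocator differs):

    #include <stdlib.h>
    #include <string.h>

    static void
    free_poisoned(void *p, size_t size)
    {
        /* 0xA5 is an improbable bit pattern, so a stale read through
         * freed memory stands out; 0x00 often passes for a valid
         * null pointer or zero length and goes unnoticed. */
        memset(p, 0xA5, size);
        free(p);
    }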
lint complained:

    if_wm.c(10028): error: void function wm_txrxintr_disable cannot return value [213]
    if_wm.c(10049): error: void function wm_txrxintr_enable cannot return value [213]
No functional change.
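The construct lint objects to, and the usual no-functional-change
fix, sketched with a made-up callee:

    /* before: accepted by GCC, but lint error 213 */
    static void
    wm_txrxintr_disable(struct wm_queue *wmq)
    {
        return wm_chip_intr_disable(wmq);   /* hypothetical callee */
    }

    /* after: call and return separately */
    static void
    wm_txrxintr_disable(struct wm_queue *wmq)
    {
        wm_chip_intr_disable(wmq);
    }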