Same as mutex_exit. Relevant only on cnMIPS where the store buffers
get clogged. Recommended by the Cavium documentation. No semantic
change, only performance -- this only adds a barrier in some cases
where there was none before, so it can't hurt correctness.
This change deletes memory barriers on non-Octeon MP. However, all
the appropriate acquire and release barriers are already used in
mutex stubs, and no barriers are needed in atomic_* unless we set
__HAVE_ATOMIC_AS_MEMBAR, which we don't on MIPS. So this should be
safe.
Unclear whether we need this even on Octeon -- don't have a clear
reference on why it's here.
BDSYNC is used for membar_sync, which is supposed to be a full
sequential-consistency barrier; syncw does not provide that, so this
change is necessary for correctness.
BDSYNC is not used for anything else, so this can't hurt performance,
except where the stronger barrier was necessary for correctness anyway
or where membar_sync was a semantically stronger choice than the
caller needed.
This change deletes a memory barrier. However, it should be safe:
The semantic requirement for this is already provided by the SYNC_REL
above, before the ll. And as currently defined, SYNC_REL is at least
as strong as SYNC, so this change can't hurt correctness on its own
(barring CPU errata, which would apply to other users of SYNC_REL and
can be addressed in the definition of SYNC_REL).
Later, perhaps we can relax SYNC_REL to syncw on Octeon if we prove
that it is correct (e.g., if Octeon follows the SPARCv9 partial store
order semantics).
Nix now-unused SYNC macro in asm.h.
This change should be safe because it doesn't remove or weaken any
memory barriers, but does add, clarify, or strengthen barriers.
Goals:
- Make sure mutex_enter/exit and mutex_spin_enter/exit have
acquire/release semantics.
- New macros make maintenance easier and purpose clearer (see the
  sketch below):
. SYNC_ACQ is for a load-before-load/store barrier, and BDSYNC_ACQ
for a branch delay slot -- currently defined as plain sync for MP
and nothing, or nop, for UP; thus it is no weaker than SYNC and
BDSYNC as currently defined, which is syncw on Octeon, plain sync
on non-Octeon MP, and nothing/nop on UP.
It is not clear to me whether load-then-syncw or ll/sc-then-syncw
or even a bare load provides load-acquire semantics on Octeon -- if
not, this will fix bugs; if so (as on SPARC PSO), we can relax
SYNC_ACQ to syncw or nothing later.
. SYNC_REL is for a load/store-before-store barrier -- currently
defined as plain sync for MP and nothing for UP.
It is not clear to me whether syncw-then-store is enough for
store-release on Octeon -- if not, we can leave this as is; if so,
we can relax SYNC_REL to syncw on Octeon.
. SYNC_PLUNGER is there to flush clogged Cavium store buffers, and
BDSYNC_PLUNGER for a branch delay slot -- syncw on Octeon,
nothing or nop on non-Octeon.
=> This is not necessary (or, as far as I'm aware, sufficient)
for acquire semantics -- it serves only to flush store buffers
where stores might otherwise linger for hundreds of thousands
of cycles, which would, e.g., cause spin locks to be held for
unreasonably long durations.
Newer revisions of the MIPS ISA also have finer-grained sync
variants that could be slotted in here.
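For concreteness, here is a rough sketch of definitions matching the
descriptions above -- not the literal asm.h text, and assuming the
usual MULTIPROCESSOR and __OCTEON__ conditionals:

    #ifdef MULTIPROCESSOR
    #define SYNC_ACQ        sync    /* load-before-load/store */
    #define SYNC_REL        sync    /* load/store-before-store */
    #define BDSYNC_ACQ      sync
    #else
    #define SYNC_ACQ        /* nothing */
    #define SYNC_REL        /* nothing */
    #define BDSYNC_ACQ      nop
    #endif

    #ifdef __OCTEON__
    #define SYNC_PLUNGER    sync 4  /* a.k.a. syncw: drain store buffers */
    #define BDSYNC_PLUNGER  sync 4
    #else
    #define SYNC_PLUNGER    /* nothing */
    #define BDSYNC_PLUNGER  nop
    #endif
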
Mechanism:
Insert these barriers in the right places, replacing only those where
the definition is currently equivalent, so this change is safe.
- Replace #ifdef _MIPS_ARCH_OCTEONP / syncw / #endif at the end of
  atomic_cas_* by SYNC_PLUNGER, which is `sync 4' (a.k.a. syncw) if
  __OCTEON__ and empty otherwise (see the sketch after this list).
=> From what I can tell, __OCTEON__ is defined in at least as many
contexts as _MIPS_ARCH_OCTEONP -- i.e., there are some Octeons
with no _MIPS_ARCH_OCTEONP, but I don't know if any of them are
relevant to us or ever saw the light of day outside Cavium; we
seem to build with `-march=octeonp', so this is unlikely to make a
difference. If it turns out that we do care, well, now there's
a central place to make the distinction for sync instructions.
- Replace post-ll/sc SYNC by SYNC_ACQ in _atomic_cas_*, which are
  internal kernel versions used in sys/arch/mips/include/lock.h, which
  assumes they have load-acquire semantics. Should move this barrier
  to lock.h later, since we _don't_ define __HAVE_ATOMIC_AS_MEMBAR on
  MIPS and so the extra barrier might be costly.
- Insert SYNC_REL before ll/sc, and replace post-ll/sc SYNC by
  SYNC_ACQ, in _ucas_*, which is used without any barriers in futex
  code and whose man page doesn't mention barriers, so I have to
  assume it is required to provide release/acquire ordering.
- Change BDSYNC to BDSYNC_ACQ in mutex_enter and mutex_spin_enter.
This is necessary to provide load-acquire semantics -- unclear if
it was provided already by syncw on Octeon, but it seems more
likely that either (a) no sync or syncw is needed at all, or (b)
syncw is not enough and sync is needed, since syncw is only a
store-before-store ordering barrier.
- Insert SYNC_REL before ll/sc in mutex_exit and mutex_spin_exit.
This is currently redundant with the SYNC already there, but
SYNC_REL more clearly identifies the necessary semantics in case we
want to define it differently on different systems, and having a
sync in the middle of an ll/sc is a bit weird and possibly not a
good idea, so I intend to (carefully) remove the redundant SYNC in
a later change.
- Change BDSYNC to BDSYNC_PLUNGER at the end of mutex_exit. This has
no semantic change right now -- it's syncw on Octeon, sync on
non-Octeon MP, nop on UP -- but we can relax it later to nop on
non-Cavium MP.
- Leave LLSCSYNC in for now -- it is apparently there for a Cavium
erratum, but I'm not sure what the erratum is, exactly, and I have
no reference for it. I suspect these can be safely removed, but we
might have to double up some other syncw instructions -- Linux uses
it only in store-release sequences, not at the head of every ll/sc.
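To illustrate the atomic_cas_* item above, the shape of the change is
roughly the following (a sketch, not the literal diff):

    /* before */
    #ifdef _MIPS_ARCH_OCTEONP
            syncw
    #endif

    /* after */
            SYNC_PLUNGER    /* sync 4 (syncw) if __OCTEON__, empty otherwise */
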
There is no need to make this variable externally visible. There are
several more variables in this file that could be unexported; leave
these for someone who knows whether vmstat.c is used by other parts of
the tree as well.
No functional change, OK mrg.
This only applies to traditional C and ensures that the behavior is
preserved when rearranging the C parser to evaluate string concatenation
from left to right.
No functional change. As before, the string literals "1" "2" "3" are
not concatenated from left to right; instead, concatenation starts
with "23" and then proceeds to "123".
NetBSD rbtree(3) is not relocatable, so this extra step is needed.
Unfortunately, there's no easy way to automate detection of where we
need to apply this in ported code...
Requires breaking the rbtree(3) abstraction, but this is necessary
because the body of the loop often frees the element, so as written
we had a huge pile of use-after-free going on (see the sketch below).
Requires changing struct interval_tree_node's rbnode member to match
the Linux name, since we now use container_of here, and radeon relies
on this.
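A hedged sketch of the safe-iteration pattern referred to above -- the
tree variable and free_element() are illustrative, not the actual
ported code: fetch the successor with rb_tree_iterate(3) before the
loop body frees the current element.

    #include <sys/rbtree.h>

    rb_tree_t tree;             /* assumed already initialized/populated */
    void *elem, *next;

    for (elem = RB_TREE_MIN(&tree); elem != NULL; elem = next) {
            /* capture the successor first; elem may be freed below */
            next = rb_tree_iterate(&tree, elem, RB_DIR_RIGHT);
            free_element(elem);     /* hypothetical: the body frees elem */
    }
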
While here, reduce it to membar_exit -- a store-before-load barrier
is obviously not needed at this point (although alpha doesn't have
anything weaker than the full sequential-consistency `mb'), and
although we do need store-before-load (and load-before-load) ordering
to spin waiting for the CPU to wake up, that already happens a few
lines below with the alpha_mb in the loop anyway. So there is no need
for membar_sync, which is just `mb' under the hood -- deleting the
membar_sync here can't hurt (see the sketch below).
The membar_sync had been inserted automatically when converting from
an older style of atomic_ops(3) API.
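A hedged sketch of the shape of the code in question; the flag
variables and helper name are illustrative stand-ins, not the actual
alpha fields:

    #include <sys/atomic.h>

    static volatile unsigned int go, ack;   /* hypothetical shared flags */

    static void
    wake_secondary(void)                    /* illustrative helper */
    {
            membar_exit();                  /* release: order prior stores
                                             * before the wake-up store */
            atomic_store_relaxed(&go, 1);   /* tell the other CPU to go */
            while (atomic_load_relaxed(&ack) == 0)
                    alpha_mb();             /* full barrier on each spin */
    }
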
- Use atomic_store_release and atomic_load_consume for associating a
freshly constructed pool_cache with its underlying pool. The pool
gets published in various ways before the pool cache is fully
constructed (see the sketch after this list).
=> Nix membar_sync -- no store-before-load is needed here.
- Take pool_head_lock around sysctl kern.pool TAILQ_FOREACH. Then take
a reference count, and drop the lock, around copyout.
=> Otherwise, pools could be partially initialized or freed while
we're still trying to read from them -- and in the worst case,
we might see a corrupted view of the tailq.
=> If we kept the lock around copyout, this could deadlock in memory
allocation.
=> If we didn't take a reference count while releasing the lock, the
pool could be destroyed while we're trying to traverse the list,
sending us into oblivion instead of the next element.
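A hedged sketch of the publish/consume pairing from the first item
above, assuming the pool's cache pointer (pr_cache) is the published
field; surrounding code elided:

    /* publisher: only after the pool_cache is fully constructed */
    atomic_store_release(&pp->pr_cache, pc);

    /* consumer: pairs with the release above, so a non-NULL pointer
     * implies a fully constructed pool_cache */
    pc = atomic_load_consume(&pp->pr_cache);
    if (pc != NULL) {
            /* ... use pc ... */
    }
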
- Serialize updates to lockstat_enabled, lockstat_dev_enabled, and
  lockstat_dtrace_enabled with a new __cpu_simple_lock (see the sketch
  after this list).
- Use xc_barrier to obviate any need for additional membars in
lockstat_event.
- Use atomic_load/store_* for access that might not be serialized by
lockstat_lock or lockstat_enabled_lock.
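A hedged sketch of that serialization; the actual update logic
differs, and the helper name is illustrative -- this only shows the
lock/atomic pattern:

    #include <sys/atomic.h>
    #include <sys/lock.h>

    static __cpu_simple_lock_t lockstat_enabled_lock =
        __SIMPLELOCK_UNLOCKED;

    static void
    lockstat_enabled_update(uint32_t newmask)   /* illustrative helper */
    {
            __cpu_simple_lock(&lockstat_enabled_lock);
            lockstat_dev_enabled = newmask;
            atomic_store_relaxed(&lockstat_enabled,
                lockstat_dev_enabled | lockstat_dtrace_enabled);
            __cpu_simple_unlock(&lockstat_enabled_lock);
    }
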
Pre-C90 argument declarations have been old for more than 30 years now,
so mention that fact in the constant name. This reduces potential
confusion with other occurrences of the words 'arg' or 'argument'.
No functional change.
The commit that introduced the assertion failure looks innocent; it
only adds a few predefined functions for GCC mode. Nevertheless,
before that
commit, lint consistently complained about 'error: void type illegal in
expression [109]', which doesn't make sense either.
This fix also removes the creative use of the initialization stack to
store the type of the statement expression. Having a separate stack for
these statement expressions makes the code easier to understand.
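For reference, a minimal example of the GCC statement-expression
syntax involved (illustrative, not the original test case):

    int
    f(void)
    {
            /* a statement expression evaluates to its last expression */
            int x = ({ int tmp = 3; tmp + 1; });    /* x == 4 */
            return x;
    }
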