Invert the sense of the bit: it now marks LOCKDEBUG as disabled rather
than enabled.
This will help my fellow developers spot "use before initialised"
problems that hppa picks up very well.
but fix the !LOCKDEBUG case by defining the "no debug" bits to zero so
they have no effect on lock stubs.
It has to be done differently, because the semantics of mtx_owner in the non-
LOCKDEBUG case can vary significantly between archs, and thus it is not
possible to simply flip a bit to 1.
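A minimal sketch of the idea (MTX_BIT_NODEBUG and MTX_DEBUG_P are
hypothetical names, not the actual kern_mutex.c definitions):

    /*
     * Sketch only.  In the !LOCKDEBUG case the "no debug" bit is
     * defined to zero, so setting it in the owner word is a no-op and
     * the assembly lock stubs need no changes.  Under LOCKDEBUG, a
     * zero-filled (never initialised) mutex lacks the bit, is therefore
     * treated as a debugged lock, and the missing lockdebug table entry
     * is reported on first use.
     */
    #ifdef LOCKDEBUG
    #define MTX_BIT_NODEBUG 0x02UL  /* set: lock is not tracked */
    #else
    #define MTX_BIT_NODEBUG 0UL     /* zero: no effect on lock stubs */
    #endif

    #define MTX_DEBUG_P(owner)  (((owner) & MTX_BIT_NODEBUG) == 0)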
Ok core@, as at least i386 is unbootable right now.
sometimes do not serve as memory barriers, allowing memory references to
bleed outside of critical sections. It's possible that this is the
reason for pkgbuild's longstanding crashiness.
For rwlocks, always enable the explicit membars. They were disabled only
on x86, and since they are not in the fast path it's not a big deal.
TODO: convert these to an atomic_membar_foo() or similar that does ordering
between regular data references and atomic references.
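As a simplified illustration of what the explicit membars buy
(atomic_cas_ulong(), membar_enter() and membar_exit() are the real
<sys/atomic.h> primitives; the lock word and functions are toys made up
for this sketch):

    #include <sys/atomic.h>

    static volatile unsigned long lk;       /* toy lock word, 0 = free */

    static void
    toy_acquire(void)
    {
            while (atomic_cas_ulong(&lk, 0, 1) != 0)
                    continue;
            /* Keep the critical section's loads/stores from moving
               before the acquiring store. */
            membar_enter();
    }

    static void
    toy_release(void)
    {
            /* Keep them from moving after the releasing store. */
            membar_exit();
            lk = 0;
    }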
- Tweak it so it can also catch common errors with condition variables.
The change to kern_condvar.c is not included in this commit and will
come later.
- Don't call kmem_alloc() if operating in interrupt context; just fail
the allocation and disable debugging for the object. This makes it safe
to do mutex_init/rw_init/cv_init in interrupt context, when running
a LOCKDEBUG kernel (see the sketch below).
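A sketch of that interrupt-context guard (cpu_intr_p(), kmem_alloc() and
KM_NOSLEEP are real kernel interfaces; the function itself is
illustrative):

    #include <sys/types.h>
    #include <sys/cpu.h>
    #include <sys/kmem.h>

    /*
     * Illustrative only.  kmem_alloc() must not be called from
     * interrupt context, so fail the allocation there; the caller then
     * leaves the object undebugged instead of panicking in
     * mutex_init() et al.
     */
    static void *
    lockdebug_alloc_sketch(size_t sz)
    {
            if (cpu_intr_p())
                    return NULL;    /* debugging disabled for object */
            return kmem_alloc(sz, KM_NOSLEEP);
    }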
- Use atomic ops directly, since rwlocks work the same way on all platforms.
- Try to make it a bit more cache efficient, and use branch hints.
- Fix a bug in rw_downgrade() where the turnstile lock was not released.
- Remove a couple of redundant assertions.
- Use atomic_swap instead of atomic_cas where it's safe to do so (see
the sketch after this list).
- After acquiring the turnstile lock in rw_vector_enter, check if the
owner is running again and spin if so.
- Introduce and use rw_onproc() instead of abusing mutex_onproc().
- Change the handoff/release algorithm to reduce the window during which
an rwlock can be held but its owner not running on a CPU.
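To make the swap-versus-cas point concrete with a toy lock word (not the
real kern_rwlock.c code):

    #include <sys/atomic.h>

    static volatile unsigned long word;     /* toy lock word, 0 = free */

    /* Acquire needs cas: it may succeed only if the lock is free. */
    static int
    toy_tryenter(unsigned long me)
    {
            return atomic_cas_ulong(&word, 0, me) == 0;
    }

    /* A release whose new value does not depend on the old one can use
       the cheaper unconditional swap. */
    static void
    toy_exit(void)
    {
            (void)atomic_swap_ulong(&word, 0);
    }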
there are no waiters. This gives a major boost to build.sh on larger
systems as directory vnode locks are exclusive for lookup, but are often
only held for a very short period of time.
This change has the potential to more readily expose lock order reversals
and other types of deadlock.
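A sketch of such a no-waiters fast path (TOY_WAITERS and the function
are hypothetical): if the waiters bit is clear, a single atomic op
releases the lock and the turnstile machinery is skipped entirely.

    #include <sys/atomic.h>

    #define TOY_WAITERS 0x01UL              /* hypothetical waiters bit */

    static void
    toy_rw_exit(volatile unsigned long *wp, unsigned long me)
    {
            /* Owner word is exactly "me": no waiters, drop the lock
               without touching the turnstile. */
            if (atomic_cas_ulong(wp, me, 0) == me)
                    return;
            /* Waiters bit set: hand off through the turnstile
               (slow path omitted from this sketch). */
    }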
- G/C spinlockmgr() and simple_lock debugging.
- Always include the kernel_lock functions, for LKMs.
- Slightly improve the subr_lockdebug code.
- Keep sizeof(struct lock) the same under LOCKDEBUG.
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.