- Fix a preemption bug in CURCPU_IDLE_P() that can lead to a bogus
assertion failure on DEBUG kernels. (A sketch of the race follows
this list.)
- Fix MP/preemption races with timecounter detachment.
- Tweak LOCKDEBUG so it can also catch common errors with condition
variables.
The change to kern_condvar.c is not included in this commit and will
come later.
- Don't call kmem_alloc() if operating in interrupt context; just fail
the allocation and disable debugging for the object. This makes it
safe to do mutex_init/rw_init/cv_init in interrupt context when
running a LOCKDEBUG kernel. (A sketch of the pattern also follows
this list.)
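
For the CURCPU_IDLE_P() fix above, here is a hedged sketch of the kind
of race involved, assuming the usual definition of the macro (the exact
fix committed may differ):

    /*
     * Assumed definition, roughly what sys/sys/cpu.h provides:
     *
     *   #define CURCPU_IDLE_P() \
     *           (curcpu()->ci_data.cpu_idlelwp == curlwp)
     *
     * If the calling LWP is preempted and migrated to another CPU
     * between evaluating curcpu() and curlwp, the comparison is made
     * against the wrong CPU's idle LWP, and a DEBUG assertion such
     * as the one below can fire even though nothing is wrong.
     * Holding off preemption closes the window:
     */
    kpreempt_disable();             /* pin the LWP to this CPU */
    KASSERT(!CURCPU_IDLE_P());      /* comparison is now stable */
    kpreempt_enable();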
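
For the interrupt-context allocation change, a minimal sketch of the
pattern, using a hypothetical helper name (lockdebug's real internals
differ in detail):

    /*
     * ld_alloc_sketch() is hypothetical.  kmem_alloc() may sleep and
     * so must not be called from interrupt context; fail the
     * allocation there and let the object run without a debug record.
     */
    static struct lockdebug *
    ld_alloc_sketch(void)
    {

            if (cpu_intr_p())       /* in interrupt context? */
                    return NULL;    /* debugging disabled for object */
            return kmem_alloc(sizeof(struct lockdebug), KM_SLEEP);
    }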
Kernel preemption is blocked while an LWP is in a critical section,
denoted by one of the following (see the sketch after this list):
- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.
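
A minimal sketch of the latter two methods (splvm() is just an example;
any IPL above IPL_NONE blocks preemption):

    int s;

    /* Bracketing a critical section explicitly: */
    kpreempt_disable();
    /* ... work with curcpu()'s private data ... */
    kpreempt_enable();

    /* Holding the interrupt priority level raised: */
    s = splvm();
    /* ... critical section ... */
    splx(s);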
Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tunable
via sysctl.
This is good since they are effectively the same as ...

    lockmgr(&lock, LK_RELEASE);
    lockmgr(&lock, LK_EXCLUSIVE);

... and therefore don't behave as expected.
As discussed on tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange
number and type of priority levels into bands. Add new bands like
'kernel real time'.
- Ignore the priority level passed to tsleep(). Compute the priority
for sleep dynamically (see the sketch after this list).
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
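
To illustrate the tsleep() change: the priority argument stays in the
signature, but it no longer determines the LWP's priority around the
sleep. A hedged fragment, with a hypothetical wait channel:

    /*
     * sc and sc->sc_flag are hypothetical.  The PRIBIO argument is
     * now ignored; the sleep priority is computed dynamically.
     */
    error = tsleep(&sc->sc_flag, PRIBIO, "example", 0);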
I don't like doing this, but there's too much pain to get this file
to compile cleanly, due to how SPINLOCK_{BACKOFF,SPIN}_HOOK and
mb_write() are spread out in weird places throughout the MD code.
- G/C spinlockmgr() and simple_lock debugging.
- Always include the kernel_lock functions, for LKMs.
- Slightly improve the subr_lockdebug code.
- Keep sizeof(struct lock) the same if LOCKDEBUG.
Define a new lockmgr flag LK_RESURRECT, which can be used in
conjunction with LK_DRAIN. This has the same effect as LK_DRAIN
except it atomically does NOT mark the lock as drained. This
guarantees that when we got the lock, we were the last one currently
waiting for the lock.
Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no
waiters for the lock. This should fix behaviour theorized to be
caused by vfs_subr.c 1.289, which allowed vclean() to run to
completion and free the vnode before all lock-waiters had been
processed. Should therefore fix the "simple_lock: uninitialized lock"
problems seen recently.
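
A sketch of the call pattern described above (error handling and the
surrounding vclean() code elided):

    /*
     * Drain out all waiters but, unlike plain LK_DRAIN, do not
     * leave the lock marked as drained: on return we hold the lock,
     * we were the last waiter, and the lock is still usable.
     */
    error = lockmgr(&vp->v_lock, LK_DRAIN | LK_RESURRECT, NULL);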
Thanks to Juergen Hannken-Illjes for some analysis of the problem,
and to Erik Bertelsen for testing.
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
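
A hedged sketch of the split in item 1, in terms of the MI/MD
interfaces this work converged on (treat the exact prototypes as
assumptions):

    /*
     * MI thread scheduling: choose the next LWP to run, do the
     * accounting, then call the MD primitive:
     *
     *   void mi_switch(struct lwp *l);
     *
     * MD context switch only, no policy: save l1's register state,
     * load l2's, and return on l2's stack:
     *
     *   struct lwp *cpu_switchto(struct lwp *l1, struct lwp *l2);
     *
     * The idle LWP (item 2) gives cpu_switchto() a valid target even
     * when the run queues are empty, so MD code no longer needs a
     * special-cased idle loop of its own.
     */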