- add a new global variable, doing_shutdown, which is nonzero if
vfs_shutdown() or panic() has been called (sketched after this entry).
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry,
so we don't reenter vfs_shutdown if we panicked there.
- in vfs_shutdown, don't use proc0 for sys_sync unless
curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown
&& curproc == NULL, and panic if we can't get the lock right away; this
avoids the spurious lockmgr DIAGNOSTIC panic from the ddb reboot command.
- in subr_pool, deal with curproc==NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't
wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the
panicstr case.
Appears to fix: kern/9239, kern/10187, kern/9367.
May also fix kern/10122.
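For illustration, a minimal sketch of how the flag might thread through
panic() and vfs_shutdown(); this is an abbreviation under assumed
surroundings (proc0, curproc, and the RB_* flags from <sys/reboot.h>),
not the committed code:

    int doing_shutdown;   /* nonzero once vfs_shutdown() or panic() runs */

    void
    panic(const char *fmt, ...)
    {
        int bootopt = RB_AUTOBOOT | RB_DUMP;

        if (doing_shutdown)
            bootopt |= RB_NOSYNC;   /* don't reenter vfs_shutdown() */
        doing_shutdown = 1;
        /* ... print the message, then cpu_reboot(bootopt, NULL) ... */
    }

    void
    vfs_shutdown(void)
    {
        struct proc *p = curproc;

        doing_shutdown = 1;
        if (p == NULL)
            p = &proc0;   /* e.g. rebooting from ddb */
        /* ... run sys_sync() on behalf of p, wait for writes ... */
    }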
Each port's <machine/lock.h> now contains the values __SIMPLELOCK_LOCKED
and __SIMPLELOCK_UNLOCKED, which replace the old SIMPLELOCK_LOCKED and
SIMPLELOCK_UNLOCKED. These files
are also required to supply inline functions __cpu_simple_lock(),
__cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be
supported on that platform (i.e. if MULTIPROCESSOR is defined in the
_KERNEL case). Change these functions to take an int * (&alp->lock_data)
rather than the struct simplelock * itself.
These changes make it possible for userland to use the locking primitives
by including <machine/lock.h>.
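As an illustration, such a <machine/lock.h> might look roughly like
this; __tas() is a hypothetical stand-in for whatever atomic
test-and-set primitive the architecture provides:

    #define __SIMPLELOCK_LOCKED   1
    #define __SIMPLELOCK_UNLOCKED 0

    static __inline void
    __cpu_simple_lock(__volatile int *alp)
    {
        while (__tas(alp) != __SIMPLELOCK_UNLOCKED)
            continue;   /* spin until the old value was "unlocked" */
    }

    static __inline int
    __cpu_simple_lock_try(__volatile int *alp)
    {
        return (__tas(alp) == __SIMPLELOCK_UNLOCKED);
    }

    static __inline void
    __cpu_simple_unlock(__volatile int *alp)
    {
        *alp = __SIMPLELOCK_UNLOCKED;
    }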
1) Fix a typo preventing compilation (missing comma).
2) In SLOCK_WHERE, display the cpu number in the MP case.
3) The following race condition was observed in _simple_lock:
cpu 1 releases the lock,
cpu 0 grabs the lock,
cpu 1 sees the lock is already held,
cpu 1 sees lock_holder == "cpu 1" (cpu 0 has not updated it yet),
cpu 1 assumes it already holds the lock and barfs,
cpu 0 sets lock_holder == "cpu 0".
Fix: set lock_holder to LK_NOCPU in _simple_unlock().
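In outline, the fix looks like this (the LOCKDEBUG bookkeeping and
sanity checks are elided):

    static void
    _simple_unlock(__volatile struct simplelock *alp, const char *id, int l)
    {
        /* ... diagnostics ... */
        alp->lock_holder = LK_NOCPU;   /* a stale cpu number can now
                                        * never match the next owner */
        __cpu_simple_unlock(&alp->lock_data);
    }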
too. Remove some needless code duplication by adding a "drain" argument
to the ACQUIRE() macro (the compiler can, and does, optimize away the
constant conditional).
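A sketch of the reshaped macro, assuming the existing struct lock
fields (lk_flags, lk_waitcount, lk_interlock, ...); since "drain" is a
compile-time constant at every call site, the dead branch vanishes:

    #define ACQUIRE(lkp, error, extflags, drain, wanted)           \
    do {                                                           \
        (error) = 0;                                               \
        while (wanted) {                                           \
            if (drain)                                             \
                (lkp)->lk_flags |= LK_WAITDRAIN;                   \
            else                                                   \
                (lkp)->lk_waitcount++;                             \
            simple_unlock(&(lkp)->lk_interlock);                   \
            (error) = tsleep((drain) ?                             \
                (void *)&(lkp)->lk_flags : (void *)(lkp),          \
                (lkp)->lk_prio, (lkp)->lk_wmesg, (lkp)->lk_timo);  \
            simple_lock(&(lkp)->lk_interlock);                     \
            if (!(drain))                                          \
                (lkp)->lk_waitcount--;                             \
            if (error)                                             \
                break;                                             \
        }                                                          \
    } while (0)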
- Now compatible with MULTIPROCESSOR (requires other changes not yet
committed, but which will be later today).
- In addition to tracking simple locks, track exclusive spin locks.
- Count spin locks like we do sleep locks (in the cpu_info for this
CPU).
- Lock debug lists are now TAILQs, so as to make the locking order
more obvious when dumping the list.
Also, some suggestions from Bill Sommerfeld:
- SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED constants, which may be
defined in <machine/lock.h> (default to 1 and 0, respectively). This
makes it easier to support architectures which use test-and-clear
rather than test-and-set.
- Add __attribute__((__aligned__)) to the `lock_data' member of the
simplelock structure. This makes it easier to support architectures
which can only perform atomic operations on very-well-aligned memory
locations. NOTE: This changes the size of struct simplelock, and
will cause a version bump.
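Taken together, a sketch of the resulting structure and defaults
(field names beyond lock_data are illustrative; the real LOCKDEBUG
state also records holder, file, and line):

    #include <sys/queue.h>

    #ifndef SIMPLELOCK_LOCKED        /* <machine/lock.h> may override,  */
    #define SIMPLELOCK_LOCKED   1    /* e.g. on test-and-clear hardware */
    #endif
    #ifndef SIMPLELOCK_UNLOCKED
    #define SIMPLELOCK_UNLOCKED 0
    #endif

    struct simplelock {
        /* aligned so strict architectures can operate on it
         * atomically; this grows the struct, hence the version bump */
        __volatile int lock_data __attribute__((__aligned__));
    #ifdef LOCKDEBUG
        TAILQ_ENTRY(simplelock) list;   /* keeps acquisition order */
    #endif
    };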
Remove simplelockrecurse, lockpausetime and PAUSE():
none of these serves any purpose anymore.
In the LOCKDEBUG functions, expand the splhigh() region to
cover the entire function; without this there can still be races.
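I.e., each debug wrapper now has roughly this shape:

    static void
    _simple_lock(__volatile struct simplelock *alp, const char *id, int l)
    {
        int s;

        s = splhigh();   /* held across the whole body; previously
                          * parts of it ran unprotected */
        /* ... sanity checks, __cpu_simple_lock(&alp->lock_data),
         *     record the holder and the list entry ... */
        splx(s);
    }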
and system call now just return EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
Recursive locking could be done in one of two ways:
* call lk_init with LK_CANRECURSE, resulting in a lock that
can always be used recursively.
* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
lock is already held by us.
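Roughly, with lockinit() standing in for lk_init:

    struct lock lkp;
    int error;

    /* (a) the lock itself always allows recursion: */
    lockinit(&lkp, PVFS, "somelock", 0, LK_CANRECURSE);

    /* (b) recursion is tolerated for this acquisition only: */
    error = lockmgr(&lkp, LK_EXCLUSIVE | LK_CANRECURSE, NULL);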
Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken. Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and use that pairing to
track the recursion 'level' for SETRECURSE locks.
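A hedged usage sketch; some_path_that_may_relock() is a hypothetical
caller that locks and unlocks &lkp without knowing it is already held:

    /* take the lock, marking it so nested lock/unlock pairs from
     * unaware code are counted as recursion: */
    lockmgr(&lkp, LK_EXCLUSIVE | LK_SETRECURSE, NULL);

    some_path_that_may_relock();   /* may do a plain LK_EXCLUSIVE /
                                    * LK_RELEASE pair on &lkp */

    lockmgr(&lkp, LK_RELEASE, NULL);   /* pairs with the SETRECURSE
                                        * acquisition above */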