Commit Graph

50 Commits

Author SHA1 Message Date
thorpej 113dd58233 Add a LOCKDEBUG check for a r/w spinlock spinning out of control.
Partially from Bill Sommerfeld.
2000-11-22 06:31:22 +00:00
thorpej 3075dec01a Allow machine dependent code to specify a hook to be run when a
spinlock's interlock is released.

Idea from Bill Sommerfeld.
2000-11-20 20:04:49 +00:00
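
A sketch of one plausible shape for such a hook: the MI lock code falls
back to a no-op unless the port's <machine/lock.h> overrides it.  The
macro names here are illustrative, not necessarily what this commit added.

	#ifndef SPINLOCK_INTERLOCK_RELEASE_HOOK
	#define	SPINLOCK_INTERLOCK_RELEASE_HOOK()	/* nothing */
	#endif

	#define	INTERLOCK_RELEASE(lkp, flags, s)			\
	do {								\
		simple_unlock(&(lkp)->lk_interlock);			\
		SPINLOCK_INTERLOCK_RELEASE_HOOK();			\
		if ((flags) & LK_SPIN)					\
			splx(s);					\
	} while (0)
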
sommerfeld 1cbfb08951 Fix !LOCKDEBUG && !DIAGNOSTIC case 2000-08-28 21:07:52 +00:00
sommerfeld bdc30aed03 Since the spinlock count is per-cpu, we don't need atomic operations
to update it, so don't bother with <machine/atomic.h>

Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
2000-08-26 19:26:43 +00:00
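
Because the count lives in this CPU's own cpu_info and is only ever
updated by that CPU, a plain increment is sufficient.  A minimal sketch
of the idea; the field and function names below are illustrative, not
the committed spinlock_acquire_count():

	struct cpu_info {
		/* ... other per-CPU state ... */
		int	ci_spin_locks;	/* spin locks held by this CPU */
	};

	static __inline void
	count_spin_acquire(struct cpu_info *ci)
	{
		/*
		 * Only the owning CPU touches its counter, so no
		 * <machine/atomic.h> operation is needed here.
		 */
		ci->ci_spin_locks++;
	}
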
thorpej fe036cae9a Fix a printf format (for Alpha). 2000-08-26 17:02:16 +00:00
sommerfeld 11eae2ffaf Default simple_lock_debugger to "on" on MULTIPROCESSOR.
Change uninitialized simple_lock check from KASSERT to use SLOCK_WHERE
(to show the "real" source line where the error was detected).
2000-08-23 15:17:47 +00:00
thorpej 7508bd7231 Use spllock() rather than splhigh(). 2000-08-22 19:47:26 +00:00
thorpej a2f2d10800 Slight adjustment to INTERLOCK_*() macros to make it easier
for the compiler to optimize.
2000-08-22 17:31:32 +00:00
thorpej 5573e863c7 - Clean up _simple_lock_held()
- In simple_lock_switchcheck(), allow/enforce exactly one lock to be
  held: sched_lock.
- Per e-mail to tech-smp from Bill Sommerfeld, r/w spin locks have
  an interlock at splsched(), rather than splhigh().
2000-08-21 02:17:45 +00:00
thorpej 8bc6ee56cb Lock debugging fix: Make sure a simplelock's lock_holder gets
initialized properly, and consistently tracks the owning CPU's
cpuid.  Add some diagnostic assertions to enforce this.
2000-08-19 19:36:18 +00:00
thorpej 391e1e1f44 For spinlocks, block interrupts while holding the interlock. Partially
from Bill Sommerfeld.
2000-08-17 14:36:32 +00:00
thorpej b6aaff9c44 Add a DIAGNOSTIC check for release of an unlocked lock.
From Bill Sommerfeld.
2000-08-17 04:18:21 +00:00
thorpej f2098b2382 Some more lock debugging support:
- LOCK_ASSERT(), which expands to KASSERT() if LOCKDEBUG.
- new simple_lock_held(), which tests if the calling CPU holds
  the specified simple lock.

From Bill Sommerfeld, modified slightly by me.
2000-08-17 04:15:43 +00:00
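
A minimal sketch of the two additions as described above (some_lock is
an illustrative name; the exact definitions may differ from the commit):

	/* Active only under LOCKDEBUG, otherwise compiled away. */
	#if defined(LOCKDEBUG)
	#define	LOCK_ASSERT(x)	KASSERT(x)
	#else
	#define	LOCK_ASSERT(x)	/* nothing */
	#endif

	/* Typical use: assert that the calling CPU holds the lock. */
	LOCK_ASSERT(simple_lock_held(&some_lock));
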
eeh cd557cfb3c Another __kprintf_attribute__ to be removed. 2000-08-10 04:37:59 +00:00
thorpej c70ada6428 Fix printf format error pointed out by Steve Woodford. 2000-08-08 19:55:26 +00:00
thorpej b9d2d53fb8 Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks. 2000-08-07 22:10:52 +00:00
thorpej b24441d4d1 It doesn't make sense to charge simple locks to procs, because
simple locks are held by CPUs.  Remove p_simple_locks (which was
unused anyway, really), and add a LOCKDEBUG check for held simple
locks in mi_switch().  Grow p_locks to an int to take up the space
previously used by p_simple_locks so that the proc structure doesn't
change size.
2000-08-07 21:55:22 +00:00
thorpej 8fd9032b90 ANSI'ify. 2000-07-14 07:14:33 +00:00
sommerfeld e964d558a7 Fix assorted bugs around shutdown/reboot/panic time.
- add a new global variable, doing_shutdown, which is nonzero if
  vfs_shutdown() or panic() have been called.
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry
  so we don't reenter vfs_shutdown if we panic'ed there.
- in vfs_shutdown, don't use proc0's process for sys_sync unless
  curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown
  && curproc==NULL, and panic if we can't get the lock right away; avoids the
  spurious lockmgr DIAGNOSTIC panic from the ddb reboot command.
- in subr_pool, deal with curproc==NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't
  wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the
  panicstr case.

Appears to fix: kern/9239, kern/10187, kern/9367.
May also fix kern/10122.
2000-06-10 18:44:43 +00:00
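
A sketch of the lockmgr() portion of this change as described; the code
is abbreviated and the surrounding variable names are illustrative:

	extern int doing_shutdown;	/* set by vfs_shutdown()/panic() */

	if (curproc != NULL)
		pid = curproc->p_pid;
	else if (doing_shutdown) {
		pid = 0;		/* attribute the lock to proc0 */
		extflags |= LK_NOWAIT;	/* never sleep for it; if it can't
					 * be had right away, panic rather
					 * than trip the DIAGNOSTIC check
					 * on the ddb reboot path */
	} else
		pid = LK_KERNPROC;
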
thorpej 6ea30ef2e8 Use ltsleep(). 2000-06-08 05:50:59 +00:00
thorpej 75dbbed64a Fix a typo, and add some lint comments. 2000-05-23 05:17:11 +00:00
sommerfeld 4d573016ed Let MULTIPROCESSOR && LOCKDEBUG case compile again 2000-05-03 13:53:59 +00:00
thorpej 8185691694 - If a platform defines __HAVE_ATOMIC_OPERATIONS, use them for counting
in the MULTIPROCESSOR case.
- Move a misplaced #ifdef so that LK_REENABLE actually works.
2000-05-02 04:32:33 +00:00
thorpej f51470a514 Require that each MACHINE/MACHINE_ARCH supply a lock.h. This file
contains the values __SIMPLELOCK_LOCKED and __SIMPLELOCK_UNLOCKED, which
replace the old SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED.  These files
are also required to supply inline functions __cpu_simple_lock(),
__cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be
supported on that platform (i.e. if MULTIPROCESSOR is defined in the
_KERNEL case).  Change these functions to take an int * (&alp->lock_data)
rather than the struct simplelock * itself.

These changes make it possible for userland to use the locking primitives
by including <machine/lock.h>.
2000-04-29 03:31:45 +00:00
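
A sketch of the per-port interface this requires, with a plain flag
standing in for the real machine-dependent atomic instruction sequence:

	/* <machine/lock.h> sketch; a real port must make the test-and-set
	 * in __cpu_simple_lock_try() a single atomic operation. */
	#define	__SIMPLELOCK_LOCKED	1
	#define	__SIMPLELOCK_UNLOCKED	0

	static __inline int
	__cpu_simple_lock_try(volatile int *alp)
	{
		if (*alp == __SIMPLELOCK_UNLOCKED) {
			*alp = __SIMPLELOCK_LOCKED;
			return (1);
		}
		return (0);
	}

	static __inline void
	__cpu_simple_lock(volatile int *alp)
	{
		while (__cpu_simple_lock_try(alp) == 0)
			continue;	/* spin */
	}

	static __inline void
	__cpu_simple_unlock(volatile int *alp)
	{
		*alp = __SIMPLELOCK_UNLOCKED;
	}
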
sommerfeld 39db0e9c7e Three MULTIPROCESSOR + LOCKDEBUG fixes:
1) fix typo preventing compilation (missing comma).
2) in SLOCK_WHERE, display cpu number in the MP case.
3) the following race condition was observed in _simple_lock:
	cpu 1 releases lock,
	cpu 0 grabs lock
	cpu 1 sees it's already locked.
	cpu 1 sees that lock_holder == "cpu 1"
	cpu 1 assumes that it already holds it and barfs.
	cpu 0 sets lock_holder == "cpu 0"
Fix: set lock_holder to LK_NOCPU in _simple_unlock().
2000-02-09 16:46:09 +00:00
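
A sketch of the essential part of the fix; the LOCKDEBUG bookkeeping is
elided and the exact signature may differ:

	void
	_simple_unlock(volatile struct simplelock *alp, const char *id, int l)
	{
		/* ... held-lock list bookkeeping elided (id/l are the
		 *     LOCKDEBUG source location) ... */

		/*
		 * Clear the owner before dropping lock_data, so a stale
		 * lock_holder can never make a CPU believe it already
		 * owns a lock that another CPU has since grabbed.
		 */
		alp->lock_holder = LK_NOCPU;
		__cpu_simple_unlock(&alp->lock_data);
	}
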
thorpej 8d4e2a9293 Make it possible to direct LOCKDEBUG messages to syslog only. 1999-08-27 01:14:38 +00:00
thorpej cca4496da7 Use cpuid_t and cpu_number(). 1999-08-10 21:10:20 +00:00
thorpej cb41412726 Fix a thinko in draining of spin locks: bump waitcount in the spin case,
too.  Remove some needless code duplication by adding a "drain" argument
to the ACQUIRE() macro (compiler can [and does] optimize the constant
conditional).
1999-07-28 19:29:39 +00:00
mellon a976011fcf - Correct the definition of the COUNT macro so that it takes the same
number of arguments when compiled without DIAGNOSTIC as with.
1999-07-28 01:59:46 +00:00
thorpej 6390046137 Improve the LOCKDEBUG code:
- Now compatible with MULTIPROCESSOR (requires other changes not yet
  committed, but which will be later today).
- In addition to tracking simple locks, track exclusive spin locks.
- Count spin locks like we do sleep locks (in the cpu_info for this
  CPU).
- Lock debug lists are now TAILQs, so as to make the locking order
  more obvious when dumping the list.

Also, some suggestions from Bill Sommerfeld:
- SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED constants, which may be
  defined in <machine/lock.h> (default to 1 and 0, respectively).  This
  makes it easier to support architectures which use test-and-clear
  rather than test-and-set.
- Add __attribute__((__aligned__)) to the `lock_data' member of the
  simplelock structure.  This makes it easier to support architectures
  which can only perform atomic operations on very-well-aligned memory
  locations.  NOTE: This changes the size of struct simplelock, and
  will cause a version bump.
1999-07-27 21:29:15 +00:00
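
A sketch of the layout implied by the last two points; the exact
declaration in <sys/lock.h> may differ:

	#ifndef SIMPLELOCK_LOCKED
	#define	SIMPLELOCK_LOCKED	1	/* test-and-clear ports override */
	#endif
	#ifndef SIMPLELOCK_UNLOCKED
	#define	SIMPLELOCK_UNLOCKED	0
	#endif

	struct simplelock {
		volatile int lock_data __attribute__((__aligned__));
		/* Under LOCKDEBUG: holder CPU, lock/unlock file and line,
		 * and the TAILQ linkage for the lock-debug lists. */
	};
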
thorpej c0e24db820 Use wakeup_one() for waking up sleep lock sleepers. 1999-07-26 23:02:53 +00:00
thorpej 50f9f26fe1 Add a spin lock mode to the lock manager. Provides a read/write
spin lock facility.  Some code and ideas from Ross Harvey.
1999-07-25 06:24:22 +00:00
chs fce05250f9 more cleanup:
remove simplelockrecurse, lockpausetime and PAUSE():
none of these serve any purpose anymore.
in the LOCKDEBUG functions, expand the splhigh() region to
cover the entire function.  without this there can still be races.
1999-07-19 03:21:11 +00:00
sommerfe c0d15c5c7c Count lockmgr locks held by process if LOCKDEBUG || DIAGNOSTIC.
(previously, it was just under LOCKDEBUG).
1999-05-04 15:58:53 +00:00
sommerfe f1a508e354 Prevent deadlock cited in PR4629 from crashing the system. (copyout
and system call now just return EFAULT).  A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
1999-03-25 00:20:35 +00:00
fvdl 080ad305ff Recursive locks were previously only available with LK_CANRECURSE. This
could be done in one of 2 ways:

	* call lk_init with LK_CANRECURSE, resulting in a lock that
 	  always can be used recursively.
	* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
	  lock is already held by us.

Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken.  Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and check for this 'level'
using SETRECURSE locks.
1999-02-28 14:09:15 +00:00
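
An illustrative usage pattern for LK_SETRECURSE (dev_lock and
update_label() are made-up names, not code from this commit):

	lockmgr(&dev_lock, LK_EXCLUSIVE | LK_SETRECURSE, NULL);

	update_label();		/* somewhere inside, this path may do:
				 *	lockmgr(&dev_lock, LK_EXCLUSIVE, NULL);
				 *	...
				 *	lockmgr(&dev_lock, LK_RELEASE, NULL);
				 * without knowing the lock is already held */

	lockmgr(&dev_lock, LK_RELEASE, NULL);
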
chs 33c042b6a2 print a little more info in simple_lock_freecheck(). 1999-01-22 07:55:17 +00:00
bouyer c2912048fc Cosmetic change in a panic(), so that the panic string printed by savecore
has more meaning.
1998-12-02 10:41:01 +00:00
chs 61458d7dfa LOCKDEBUG enhancements for non-MP:
keep a list of locked locks.
use this to print where the lock was locked
when we either go to sleep with a lock held
or try to free a locked lock.
1998-11-04 06:19:55 +00:00
pk c65c55af6f Disable the daft PAUSE() macro, which manages to skip all the relevant
code in lockmgr() most of the time. This is no doubt a case of Bad Coding Style.
1998-10-14 09:41:21 +00:00
thorpej ac0d359bcb Initialize the CPU ID in the simplelock. 1998-09-29 07:29:53 +00:00
thorpej 0c11d72456 Key off MULTIPROCESSOR, not NCPUS > 1. Pull in <machine/lock.h> if
MULTIPROCESSOR is defined, and rely on it to define the simple lock
operations.
1998-09-24 22:30:11 +00:00
perry 275d1554aa Abolition of bcopy, ovbcopy, bcmp, and bzero, phase one.
bcopy(x, y, z) ->  memcpy(y, x, z)
ovbcopy(x, y, z) -> memmove(y, x, z)
   bcmp(x, y, z) ->  memcmp(x, y, z)
  bzero(x, y)    ->  memset(x, 0, y)
1998-08-04 04:03:10 +00:00
thorpej ad7a87400a defopt LOCKDEBUG 1998-05-20 01:32:29 +00:00
fvdl e5bc90f40c Merge with Lite2 + local changes 1998-03-01 02:20:01 +00:00
chs d7d62b7ad3 snazzier LOCKDEBUG code. 1998-02-07 02:14:04 +00:00
mycroft 7f35228e7e Make wmesg arguments to various functions const. 1997-10-09 12:49:44 +00:00
fvdl 6eecd789e6 There appear to be spinlock bugs in the VM code. They are not a problem
now, as we're always on one CPU (they will be later, though). With DEBUG,
they cause a lot of output, so DEBUG -> LOCKDEBUG for now.
1997-07-06 22:51:59 +00:00
fvdl 8f8988628d Add NetBSD RCS Id, and a few minor changes to make it compile. 1997-07-06 12:35:33 +00:00
fvdl f93a04da47 Import Lite2 locking code 1997-07-06 12:19:53 +00:00