Commit Graph

60 Commits

Author SHA1 Message Date
lukem adc783d537 add RCSIDs 2001-11-12 15:25:01 +00:00
chs adb1a233b7 replace wakeup_one() with wakeup(). wakeup_one() can only be used if the
woken-up thread is guaranteed to pass the buck to the next guy before
going back to sleep, and the rest of the lockmgr() code doesn't do that.
From Bill Sommerfeld.  Fixes PR 14097.
2001-09-29 21:27:49 +00:00
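
A minimal sketch of the hand-off contract this entry describes; tsleep() and wakeup_one() are the real kernel primitives, the surrounding names are illustrative:

    /*
     * wakeup_one() wakes a single sleeper, so it is only safe when
     * every woken thread re-issues the wakeup ("passes the buck")
     * before it can go back to sleep.  lockmgr() doesn't guarantee
     * that, hence the switch back to wakeup().
     */
    extern int  tsleep(void *chan, int pri, const char *wmesg, int timo);
    extern void wakeup_one(void *chan);
    extern int  resource_available(void);
    #define PWAIT 32                /* as in <sys/param.h> */

    void
    waiter(void *chan)
    {
        while (!resource_available())
            (void)tsleep(chan, PWAIT, "lockmgr", 0);
        /* ... use the resource ... */
        wakeup_one(chan);           /* mandatory hand-off to the next sleeper */
    }
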
chs 039c1fd312 print a stack trace in more LOCKDEBUG cases.
add a blank line between complaints.
use TAILQ_FOREACH where appropriate.
2001-09-25 06:13:29 +00:00
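
For reference, the queue(3) traversal macro mentioned above, in a stand-alone sketch (the list and field names here are made up):

    #include <sys/queue.h>

    struct lockent {
        TAILQ_ENTRY(lockent) le_list;       /* linkage */
    };
    TAILQ_HEAD(, lockent) locklist = TAILQ_HEAD_INITIALIZER(locklist);

    void
    dump_locks(void)
    {
        struct lockent *le;

        /* replaces an open-coded first/next traversal loop */
        TAILQ_FOREACH(le, &locklist, le_list) {
            /* print one complaint per held lock ... */
        }
    }
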
sommerfeld acf40b361c Correct comment to match code 2001-09-22 22:36:30 +00:00
wiz daa5d204e4 synchron*, not sychron* 2001-07-08 17:41:14 +00:00
thorpej 31769952ca Add a simple_lock_only_held() LOCKDEBUG routine, which allows code
to assert that exactly zero or one (and a specific one) locks are
held.

From Bill Sommerfeld.
2001-06-05 04:38:08 +00:00
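
A hedged usage sketch; the exact signature of simple_lock_only_held() is an assumption here (a lock pointer, or NULL for "no locks at all", plus a call-site string):

    #include <stddef.h>

    struct simplelock;
    extern struct simplelock sched_lock;
    void simple_lock_only_held(volatile struct simplelock *, const char *);

    void
    example(void)
    {
        /* assert: sched_lock is the only simple lock this CPU holds */
        simple_lock_only_held(&sched_lock, "mi_switch");

        /* assert: no simple locks held at all */
        simple_lock_only_held(NULL, "ltsleep");
    }
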
enami bda65c7816 Define local variable cpu_id only when either MULTIPROCESSOR or DIAGNOSTIC
is defined since it isn't used otherwise.
2001-05-01 04:30:04 +00:00
marcus b6240639a2 STDC cleanup: volatile needs to be cast away for lk_flags as well. 2001-04-27 00:05:13 +00:00
thorpej c24c3604b0 SPINLOCK_INTERLOCK_RELEASE_HOOK should actually be
SPINLOCK_SPIN_HOOK, so that we actually check for
pending IPIs on the Alpha more than once.  Also,
when we call alpha_ipi_process(), make sure to go
to splipi().
2001-04-20 22:58:39 +00:00
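
A sketch of how such a spin hook slots in; SPINLOCK_SPIN_HOOK is the real macro name from the entry above, the surrounding loop is illustrative:

    /* Ports with no pending-work check leave the hook empty. */
    #ifndef SPINLOCK_SPIN_HOOK
    #define SPINLOCK_SPIN_HOOK  /* nothing */
    #endif

    void
    spin_wait(volatile int *lock_data)
    {
        /*
         * The hook runs on every pass, so a CPU stuck spinning on a
         * lock still notices pending IPIs (on the Alpha: raise
         * splipi() and call alpha_ipi_process()).
         */
        while (*lock_data != 0) {
            SPINLOCK_SPIN_HOOK;
        }
    }
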
jmc ca607b87cf Default lock_printf to syslog rather than printf. Some of the lock debug checks
are done inside wakeup(), which holds the sched lock. Printf can cause
wakeup() to be called again (pty redirection of console messages), which will
panic with the sched lock already held.

This isn't a long-term fix, as the printf-vs-sched-lock interaction should be
cleaned up properly, but it avoids continual panics with LOCKDEBUG running
and an xterm -C.
2000-12-24 23:56:24 +00:00
thorpej 113dd58233 Add a LOCKDEBUG check for a r/w spinlock spinning out of control.
Partially from Bill Sommerfeld.
2000-11-22 06:31:22 +00:00
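
A hedged sketch of what such a spinout check can look like; the macro names are modeled on the style above and the threshold is arbitrary:

    extern void panic(const char *fmt, ...);

    /* declare SPINLOCK_SPINCHECK_DECL before the spin loop, invoke
     * SPINLOCK_SPINCHECK on every iteration */
    #define SPINLOCK_SPINCHECK_DECL     unsigned long __spinc = 0
    #define SPINLOCK_SPINCHECK                                        \
        do {                                                          \
            if (++__spinc > 0x0fffffffUL)                             \
                panic("spinlock: spinout");  /* likely a deadlock */  \
        } while (0)
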
thorpej 3075dec01a Allow machine dependent code to specify a hook to be run when a
spinlock's interlock is released.

Idea from Bill Sommerfeld.
2000-11-20 20:04:49 +00:00
sommerfeld 1cbfb08951 Fix !LOCKDEBUG && !DIAGNOSTIC case 2000-08-28 21:07:52 +00:00
sommerfeld bdc30aed03 Since the spinlock count is per-cpu, we don't need atomic operations
to update it, so don't bother with <machine/atomic.h>

Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
2000-08-26 19:26:43 +00:00
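
Why the atomics can go, in one hedged sketch (the struct and field names are illustrative, not the real cpu_info layout):

    struct cpu_info_sketch {
        long ci_spin_locks;     /* spin locks held by this CPU only */
    };

    void
    count_spinlock_sketch(struct cpu_info_sketch *ci)
    {
        /*
         * Only the owning CPU ever reads or writes its own counter,
         * so a plain increment is race-free; no atomic_add needed.
         */
        ci->ci_spin_locks++;
    }
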
thorpej fe036cae9a Fix a printf format (for Alpha). 2000-08-26 17:02:16 +00:00
sommerfeld 11eae2ffaf Default simple_lock_debugger to "on" on MULTIPROCESSOR.
Change uninitialized simple_lock check from KASSERT to use SLOCK_WHERE
(to show the "real" source line where the error was detected).
2000-08-23 15:17:47 +00:00
thorpej 7508bd7231 Use spllock() rather than splhigh(). 2000-08-22 19:47:26 +00:00
thorpej a2f2d10800 Slight adjustment to INTERLOCK_*() macros to make it easier
for the compiler to optimize.
2000-08-22 17:31:32 +00:00
thorpej 5573e863c7 - Clean up _simple_lock_held()
- In simple_lock_switchcheck(), allow/enforce exactly one lock to be
  held: sched_lock.
- Per e-mail to tech-smp from Bill Sommerfeld, r/w spin locks have
  an interlock at splsched(), rather than splhigh().
2000-08-21 02:17:45 +00:00
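
A hedged sketch of the invariant simple_lock_switchcheck() now enforces; the real routine walks the held-lock list, this boils it down:

    extern void panic(const char *fmt, ...);

    void
    switchcheck_sketch(int nlocks_held, int sched_lock_is_held)
    {
        /*
         * At context-switch time exactly one simple lock may be
         * held, and it must be sched_lock (which is held across
         * mi_switch() by design).
         */
        if (nlocks_held != (sched_lock_is_held ? 1 : 0))
            panic("mi_switch: unexpected simple locks held");
    }
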
thorpej 8bc6ee56cb Lock debugging fix: Make sure a simplelock's lock_holder gets
initialized properly, and consistently tracks the owning CPU's
cpuid.  Add some diagnostic assertions to enforce this.
2000-08-19 19:36:18 +00:00
thorpej 391e1e1f44 For spinlocks, block interrupts while holding the interlock. Partially
from Bill Sommerfeld.
2000-08-17 14:36:32 +00:00
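
The pattern, sketched (spllock() per the 2000-08-22 entry above; the function body is illustrative):

    struct simplelock;
    extern int  spllock(void);
    extern void splx(int);
    extern void simple_lock(struct simplelock *);
    extern void simple_unlock(struct simplelock *);

    void
    interlock_sketch(struct simplelock *interlock)
    {
        int s;

        s = spllock();              /* block interrupts *first*, so a
                                       handler on this CPU can never
                                       spin on our own interlock */
        simple_lock(interlock);
        /* ... examine or modify the protected lock state ... */
        simple_unlock(interlock);
        splx(s);                    /* restore the previous level */
    }
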
thorpej b6aaff9c44 Add a DIAGNOSTIC check for release of an unlocked lock.
From Bill Sommerfeld.
2000-08-17 04:18:21 +00:00
thorpej f2098b2382 Some more lock debugging support:
- LOCK_ASSERT(), which expands to KASSERT() if LOCKDEBUG.
- new simple_lock_held(), which tests if the calling CPU holds
  the specified simple lock.

From Bill Sommerfeld, modified slightly by me.
2000-08-17 04:15:43 +00:00
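
A sketch of the macro as described; KASSERT's effect is inlined here to keep the sketch self-contained:

    extern void panic(const char *fmt, ...);

    /* Expands to an assertion under LOCKDEBUG, to nothing otherwise. */
    #if defined(LOCKDEBUG)
    #define LOCK_ASSERT(x)                                            \
        do {                                                          \
            if (!(x))                                                 \
                panic("lock assertion failed: %s", #x);               \
        } while (0)
    #else
    #define LOCK_ASSERT(x)  /* nothing */
    #endif

    struct simplelock;
    extern int simple_lock_held(struct simplelock *);
    extern struct simplelock sched_lock;

    void
    example(void)
    {
        LOCK_ASSERT(simple_lock_held(&sched_lock));
    }
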
eeh cd557cfb3c Another __kprintf_attribute__ to be removed. 2000-08-10 04:37:59 +00:00
thorpej c70ada6428 Fix printf format error pointed out by Steve Woodford. 2000-08-08 19:55:26 +00:00
thorpej b9d2d53fb8 Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks. 2000-08-07 22:10:52 +00:00
thorpej b24441d4d1 It doesn't make sense to charge simple locks to procs, because
simple locks are held by CPUs.  Remove p_simple_locks (which was
unused anyway, really), and add a LOCKDEBUG check for held simple
locks in mi_switch().  Grow p_locks to an int to take up the space
previously used by p_simple_locks so that the proc structure doesn't
change size.
2000-08-07 21:55:22 +00:00
thorpej 8fd9032b90 ANSI'ify. 2000-07-14 07:14:33 +00:00
sommerfeld e964d558a7 Fix assorted bugs around shutdown/reboot/panic time.
- add a new global variable, doing_shutdown, which is nonzero if
  vfs_shutdown() or panic() have been called.
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry
  so we don't reenter vfs_shutdown if we panic'ed there.
- in vfs_shutdown, don't use proc0's process for sys_sync unless
  curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown
  && curproc == NULL, and panic if we can't get the lock right away;
  avoids the spurious lockmgr DIAGNOSTIC panic from the ddb reboot
  command.
- in subr_pool, deal with curproc == NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't
  wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the
  panicstr case.

Appears to fix: kern/9239, kern/10187, kern/9367.
May also fix kern/10122.
2000-06-10 18:44:43 +00:00
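
A hedged sketch of the doing_shutdown protocol this entry introduces (RB_NOSYNC and boothowto as in <sys/reboot.h>; everything else elided):

    extern int boothowto;
    #define RB_NOSYNC   0x004   /* don't sync filesystems on reboot */

    int doing_shutdown;         /* nonzero once vfs_shutdown() or
                                   panic() has been entered */

    void
    panic_sketch(void)
    {
        if (doing_shutdown)
            boothowto |= RB_NOSYNC; /* already shutting down: don't
                                       reenter vfs_shutdown() */
        doing_shutdown = 1;
        /* ... print message, sync unless RB_NOSYNC, dump, reboot ... */
    }
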
thorpej 6ea30ef2e8 Use ltsleep(). 2000-06-08 05:50:59 +00:00
thorpej 75dbbed64a Fix a typo, and add some lint comments. 2000-05-23 05:17:11 +00:00
sommerfeld 4d573016ed Let MULTIPROCESSOR && LOCKDEBUG case compile again 2000-05-03 13:53:59 +00:00
thorpej 8185691694 - If a platform defines __HAVE_ATOMIC_OPERATIONS, use them for counting
in the MULTIPROCESSOR case.
- Move a misplaced #ifdef so that LK_REENABLE actually works.
2000-05-02 04:32:33 +00:00
thorpej f51470a514 Require that each MACHINE/MACHINE_ARCH supply a lock.h. This file
contains the values __SIMPLELOCK_LOCKED and __SIMPLELOCK_UNLOCKED, which
replace the old SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED.  These files
are also required to supply inline functions __cpu_simple_lock(),
__cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be
supported on that platform (i.e. if MULTIPROCESSOR is defined in the
_KERNEL case).  Change these functions to take an int * (&alp->lock_data)
rather than the struct simplelock * itself.

These changes make it possible for userland to use the locking primitives
by including <machine/lock.h>.
2000-04-29 03:31:45 +00:00
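
A hedged sketch of what a port's <machine/lock.h> must now supply. A real port uses its own atomic test-and-set sequence; the GCC __sync builtins merely stand in for it here. Note the int * argument (&alp->lock_data), which is what lets userland use the primitives without the full struct simplelock:

    #define __SIMPLELOCK_LOCKED     1
    #define __SIMPLELOCK_UNLOCKED   0

    static __inline void
    __cpu_simple_lock(volatile int *alp)
    {
        /* spin until the test-and-set observes the lock free */
        while (__sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED) ==
            __SIMPLELOCK_LOCKED)
            continue;
    }

    static __inline int
    __cpu_simple_lock_try(volatile int *alp)
    {
        return (__sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED) ==
            __SIMPLELOCK_UNLOCKED);
    }

    static __inline void
    __cpu_simple_unlock(volatile int *alp)
    {
        __sync_lock_release(alp);   /* store __SIMPLELOCK_UNLOCKED */
    }
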
sommerfeld 39db0e9c7e Three MULTIPROCESSOR + LOCKDEBUG fixes:
1) fix typo preventing compilation (missing comma).
2) in SLOCK_WHERE, display cpu number in the MP case.
3) the following race condition was observed in _simple_lock:
	cpu 1 releases lock,
	cpu 0 grabs lock
	cpu 1 sees it's already locked.
	cpu 1 sees that lock_holder == "cpu 1"
	cpu 1 assumes that it already holds it and barfs.
	cpu 0 sets lock_holder == "cpu 0"
Fix: set lock_holder to LK_NOCPU in _simple_unlock().
2000-02-09 16:46:09 +00:00
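
The fix, sketched (the struct layout and the LK_NOCPU value are illustrative): clear lock_holder before the release, so a still-spinning CPU can never read its own stale cpuid out of a lock it no longer owns.

    typedef unsigned long cpuid_t;
    #define LK_NOCPU    ((cpuid_t)-1)       /* "held by no CPU" */

    struct simplelock_sketch {
        int     lock_data;
        cpuid_t lock_holder;
    };

    extern void __cpu_simple_unlock(volatile int *);

    void
    simple_unlock_sketch(struct simplelock_sketch *alp)
    {
        alp->lock_holder = LK_NOCPU;    /* invalidate *before* releasing */
        __cpu_simple_unlock(&alp->lock_data);
    }
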
thorpej 8d4e2a9293 Make it possible to direct LOCKDEBUG messages to syslog only. 1999-08-27 01:14:38 +00:00
thorpej cca4496da7 Use cpuid_t and cpu_number(). 1999-08-10 21:10:20 +00:00
thorpej cb41412726 Fix a thinko in draining of spin locks: bump waitcount in the spin case,
too.  Remove some needless code duplication by adding a "drain" argument
to the ACQUIRE() macro (compiler can [and does] optimize the constant
conditional).
1999-07-28 19:29:39 +00:00
mellon a976011fcf - Correct the definition of the COUNT macro so that it takes the same
number of arguments when compiled without DIAGNOSTIC as with.
1999-07-28 01:59:46 +00:00
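
A sketch of the shape of the fix; the DIAGNOSTIC body is illustrative, the point is that both variants take the same two arguments:

    struct proc { int p_locks; };

    #if defined(DIAGNOSTIC)
    #define COUNT(p, x) do { if ((p) != 0) (p)->p_locks += (x); } while (0)
    #else
    #define COUNT(p, x) /* expands to nothing, but same arity */
    #endif
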
thorpej 6390046137 Improve the LOCKDEBUG code:
- Now compatible with MULTIPROCESSOR (requires other changes not yet
  committed, but which will be later today).
- In addition to tracking simple locks, track exclusive spin locks.
- Count spin locks like we do sleep locks (in the cpu_info for this
  CPU).
- Lock debug lists are now TAILQs, so as to make the locking order
  more obvious when dumping the list.

Also, some suggestions from Bill Sommerfeld:
- SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED constants, which may be
  defined in <machine/lock.h> (default to 1 and 0, respectively).  This
  makes it easier to support architectures which use test-and-clear
  rather than test-and-set.
- Add __attribute__((__aligned__)) to the `lock_data' member of the
  simplelock structure.  This makes it easier to support architectures
  which can only perform atomic operations on very-well-aligned memory
  locations.  NOTE: This changes the size of struct simplelock, and
  will cause a version bump.
1999-07-27 21:29:15 +00:00
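
The two suggestions from the list above, sketched (LOCKDEBUG bookkeeping fields omitted):

    /* Overridable by <machine/lock.h> for test-and-clear machines,
     * where "locked" and "unlocked" invert the usual 1/0 values. */
    #ifndef SIMPLELOCK_LOCKED
    #define SIMPLELOCK_LOCKED   1
    #define SIMPLELOCK_UNLOCKED 0
    #endif

    struct simplelock {
        /* aligned so even strict architectures can operate on it
         * atomically; this grows the struct, hence the version bump */
        int lock_data __attribute__((__aligned__));
        /* ... LOCKDEBUG bookkeeping fields ... */
    };
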
thorpej c0e24db820 Use wakeup_one() for waking up sleep lock sleepers. 1999-07-26 23:02:53 +00:00
thorpej 50f9f26fe1 Add a spin lock mode to the lock manager. Provides a read/write
spin lock facility.  Some code and ideas from Ross Harvey.
1999-07-25 06:24:22 +00:00
chs fce05250f9 more cleanup:
remove simplelockrecurse, lockpausetime and PAUSE():
none of these serve any purpose anymore.
in the LOCKDEBUG functions, expand the splhigh() region to
cover the entire function.  without this there can still be races.
1999-07-19 03:21:11 +00:00
sommerfe c0d15c5c7c Count lockmgr locks held by process if LOCKDEBUG || DIAGNOSTIC.
(previously, it was just under LOCKDEBUG).
1999-05-04 15:58:53 +00:00
sommerfe f1a508e354 Prevent deadlock cited in PR4629 from crashing the system. (copyout
and system call now just return EFAULT).  A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
1999-03-25 00:20:35 +00:00
fvdl 080ad305ff Recursive locks were previously only available with LK_CANRECURSE. This
could be done in one of two ways:

	* call lk_init with LK_CANRECURSE, resulting in a lock that
 	  always can be used recursively.
	* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
	  lock is already held by us.

Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken.  Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and check for this 'level'
using SETRECURSE locks.
1999-02-28 14:09:15 +00:00
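
A hedged usage sketch of the new flag (the flag values below are illustrative; see <sys/lock.h> for the real ones):

    struct lock;
    struct simplelock;
    extern int lockmgr(volatile struct lock *, unsigned int,
        struct simplelock *);

    #define LK_EXCLUSIVE    0x0002
    #define LK_RELEASE      0x0006
    #define LK_SETRECURSE   0x100000

    void
    example(volatile struct lock *lkp)
    {
        /*
         * Take the lock exclusively, but let code we call while
         * holding it take it again without knowing it's held.
         */
        (void)lockmgr(lkp, LK_EXCLUSIVE | LK_SETRECURSE, 0);
        /* ... a nested lockmgr(lkp, LK_EXCLUSIVE, 0) is now OK ... */
        (void)lockmgr(lkp, LK_RELEASE, 0);  /* locks/unlocks must pair up */
    }
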
chs 33c042b6a2 print a little more info in simple_lock_freecheck(). 1999-01-22 07:55:17 +00:00
bouyer c2912048fc Cosmetic change in a panic(), so that the panic string printed by savecore
has more meaning.
1998-12-02 10:41:01 +00:00
chs 61458d7dfa LOCKDEBUG enhancements for non-MP:
keep a list of locked locks.
use this to print where the lock was locked
when we either go to sleep with a lock held
or try to free a locked lock.
1998-11-04 06:19:55 +00:00
pk c65c55af6f Disable the daft PAUSE() macro, which manages to skip all the relevant
code in lockmgr() most of the time. This is no doubt a case of Bad Coding Style.
1998-10-14 09:41:21 +00:00