Commit Graph

77 Commits

Author SHA1 Message Date
yamt efc80878d1 use lockstatus() instead of L_BIGLOCK to check if we're holding a biglock.
fix PR/25595.
2004-05-18 11:59:11 +00:00
yamt b4831906b2 introduce LK_EXCLOTHER for lockstatus().
From FreeBSD, but a little differently: instead of letting lockstatus()
take an additional thread argument, always use curlwp/curcpu.
2004-05-18 11:55:59 +00:00
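A minimal caller-side sketch of what these two lockstatus() commits enable, assuming the return values LK_EXCLUSIVE (held exclusively by the current LWP), LK_SHARED (shared hold), LK_EXCLOTHER (held exclusively by someone else), and 0 (unlocked); the helper and its panic messages are illustrative, not code from the tree:

    #include <sys/lock.h>
    #include <sys/systm.h>

    /* Illustrative only: check that the calling LWP holds lkp exclusively. */
    static void
    assert_exclusively_held(struct lock *lkp)
    {
            switch (lockstatus(lkp)) {
            case LK_EXCLUSIVE:
                    break;                  /* we hold it */
            case LK_EXCLOTHER:
                    panic("lock held exclusively by another lwp/cpu");
            default:
                    panic("lock not held exclusively");
            }
    }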
wiz d20841bb64 Uppercase CPU, plural is CPUs. 2004-02-13 11:36:08 +00:00
hannken 10654a5c0a Fix last commit. The current spl was an implicit argument to the ACQUIRE
macro.  With help and approval from YAMAMOTO Takashi <yamt@netbsd.org>
2003-12-08 14:21:25 +00:00
yamt f9d2295ad0 turn ACQUIRE macro into a function by introducing new internal
flags, LK_SHARE_NONZERO and LK_WAIT_NONZERO.  from FreeBSD.
2003-11-23 08:57:16 +00:00
agc aad01611e7 Move UCB-licensed code from 4-clause to 3-clause licence.
Patches provided by Joel Baker in PR 22364, verified by myself.
2003-08-07 16:26:28 +00:00
pk 9ead24ac7a Use lock_printf() in SPINLOCK_SPINCHECK() and SLOCK_TRACE(). 2003-02-19 22:34:42 +00:00
pk 7f03dc8c13 _simple_lock(): revert to IPL at entry while spinning on the lock; raise
to spllock() again after we get it.
2003-01-19 14:40:55 +00:00
thorpej e0d8d366df Merge the nathanw_sa branch. 2003-01-18 10:06:22 +00:00
pk c70db21e38 lock_printf(): use vsnprintf/printf_nolog to avoid covertly using the system
log and thereby invoking scheduler code.
2003-01-15 23:11:05 +00:00
scw 0f91ed3dfa Quell uninitialised variable warnings. 2002-11-24 11:37:54 +00:00
perry 6858187df6 /*CONTCOND*/ while (0)'ed macros 2002-11-02 07:20:42 +00:00
fvdl 4244ef3b22 For INTERLOCK_ACQUIRE, s/splsched/spllock/. 2002-11-01 01:13:32 +00:00
provos 0f09ed48a5 remove trailing \n in panic(). approved perry. 2002-09-27 15:35:29 +00:00
chs 0e83d71253 print a stack trace in the "spinout" case too. 2002-09-14 21:42:42 +00:00
thorpej e839580821 Move kernel_lock manipulation into functions so that they will
show up in a profile.
2002-05-21 01:38:26 +00:00
enami af81fabf7c Remove #ifdef DIAGNOSTIC around panic(). It is better than NULL pointer
dereference.
2002-05-11 11:56:57 +00:00
lukem adc783d537 add RCSIDs 2001-11-12 15:25:01 +00:00
chs adb1a233b7 replace wakeup_one() with wakeup(). wakeup_one() can only be used if the
woken-up thread is guaranteed to pass the buck to the next guy before
going back to sleep, and the rest of the lockmgr() code doesn't do that.
from Bill Sommerfeld.  fixes PR 14097.
2001-09-29 21:27:49 +00:00
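A sketch of the reasoning above (illustrative producer/consumer, not the lockmgr code, with interlock/spl handling omitted): wakeup() rouses every thread sleeping on the channel, whereas wakeup_one() rouses only one and is therefore safe only if that thread is guaranteed to issue the next wakeup itself before sleeping again.

    #include <sys/param.h>
    #include <sys/proc.h>           /* tsleep(), wakeup() */

    int resource_busy;              /* illustrative shared state */

    void
    release_resource(void)
    {
            resource_busy = 0;
            /*
             * wakeup() rouses every thread sleeping on &resource_busy;
             * wakeup_one() would rouse just one, and if that thread went
             * back to sleep without re-waking the next waiter, the rest
             * would sleep forever.
             */
            wakeup(&resource_busy);
    }

    void
    wait_for_resource(void)
    {
            while (resource_busy)
                    (void)tsleep(&resource_busy, PLOCK, "reswait", 0);
            resource_busy = 1;
    }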
chs 039c1fd312 print a stack trace in more LOCKDEBUG cases.
add a blank line between complaints.
use TAILQ_FOREACH where appropriate.
2001-09-25 06:13:29 +00:00
sommerfeld acf40b361c Correct comment to match code 2001-09-22 22:36:30 +00:00
wiz daa5d204e4 synchron*, not sychron* 2001-07-08 17:41:14 +00:00
thorpej 31769952ca Add a simple_lock_only_held() LOCKDEBUG routine, which allows code
to assert that either no simple locks are held, or that exactly one
specific lock is held.

From Bill Sommerfeld.
2001-06-05 04:38:08 +00:00
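A hypothetical call site for the routine above (the prototype and the second argument are assumptions for illustration, not taken from the tree): passing a lock asserts that it is the only simple lock the calling CPU currently holds, and the string names the caller for any resulting LOCKDEBUG complaint.

    #ifdef LOCKDEBUG
            /* assert that sched_lock is the only simple lock we hold */
            simple_lock_only_held(&sched_lock, "example_caller");
    #endif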
enami bda65c7816 Define local variable cpu_id only when either MULTIPROCESSOR or DIAGNOSTIC
is defined since it isn't used otherwise.
2001-05-01 04:30:04 +00:00
marcus b6240639a2 STDC cleanup: volatile needs to be cast away for lk_flags as well. 2001-04-27 00:05:13 +00:00
thorpej c24c3604b0 SPINLOCK_INTERLOCK_RELEASE_HOOK should actually be
SPINLOCK_SPIN_HOOK, so that we actually check for
pending IPIs on the Alpha more than once.  Also,
when we call alpha_ipi_process(), make sure to go
to splipi().
2001-04-20 22:58:39 +00:00
jmc ca607b87cf Default lock_printf to syslog rather than printf. Some of the lock debug checks
are done inside of wakeup, which holds the sched lock. Printf can cause
wakeup to get called again (pty redirection of console messages), which will
panic with the sched lock already held.

This isn't a long-term fix; the conflict between printf and the sched lock
should be cleaned up properly, but this avoids continual panics with
LOCKDEBUG running and an xterm -C.
2000-12-24 23:56:24 +00:00
thorpej 113dd58233 Add a LOCKDEBUG check for a r/w spinlock spinning out of control.
Partially from Bill Sommerfeld.
2000-11-22 06:31:22 +00:00
thorpej 3075dec01a Allow machine dependent code to specify a hook to be run when a
spinlock's interlock is released.

Idea from Bill Sommerfeld.
2000-11-20 20:04:49 +00:00
sommerfeld 1cbfb08951 Fix !LOCKDEBUG && !DIAGNOSTIC case 2000-08-28 21:07:52 +00:00
sommerfeld bdc30aed03 Since the spinlock count is per-cpu, we don't need atomic operations
to update it, so don't bother with <machine/atomic.h>

Flush kernel_lock_release_all() and kernel_lock_acquire_count() (which
didn't do spinlock accounting correctly), and replace them with
spinlock_release_all() and spinlock_acquire_count().
2000-08-26 19:26:43 +00:00
thorpej fe036cae9a Fix a printf format (for Alpha). 2000-08-26 17:02:16 +00:00
sommerfeld 11eae2ffaf Default simple_lock_debugger to "on" on MULTIPROCESSOR.
Change uninitialized simple_lock check from KASSERT to use SLOCK_WHERE
(to show the "real" source line where the error was detected).
2000-08-23 15:17:47 +00:00
thorpej 7508bd7231 Use spllock() rather than splhigh(). 2000-08-22 19:47:26 +00:00
thorpej a2f2d10800 Slight adjustment to INTERLOCK_*() macros to make it easier
for the compiler to optimize.
2000-08-22 17:31:32 +00:00
thorpej 5573e863c7 - Clean up _simple_lock_held()
- In simple_lock_switchcheck(), allow/enforce exactly one lock to be
  held: sched_lock.
- Per e-mail to tech-smp from Bill Sommerfeld, r/w spin locks have
  an interlock at splsched(), rather than splhigh().
2000-08-21 02:17:45 +00:00
thorpej 8bc6ee56cb Lock debugging fix: Make sure a simplelock's lock_holder gets
initialized properly, and consistently tracks the owning CPU's
cpuid.  Add some diagnostic assertions to enforce this.
2000-08-19 19:36:18 +00:00
thorpej 391e1e1f44 For spinlocks, block interrupts while holding the interlock. Partially
from Bill Sommerfeld.
2000-08-17 14:36:32 +00:00
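The interlock discipline that this and the surrounding spl-related commits keep refining, in sketch form (shape only; the tree wraps this in its INTERLOCK_*() macros): raise the IPL before taking the interlock so an interrupt on the same CPU cannot try to take it again, and restore the saved IPL only after releasing it.

    #include <sys/lock.h>

    /* Illustrative shape of the interlock discipline. */
    static void
    touch_lock_state(struct lock *lkp)
    {
            int s;

            s = spllock();          /* keep interrupts from retaking the interlock */
            simple_lock(&lkp->lk_interlock);
            /* ... examine or modify lkp's state ... */
            simple_unlock(&lkp->lk_interlock);
            splx(s);                /* drop back to the saved IPL */
    }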
thorpej b6aaff9c44 Add a DIAGNOSTIC check for release of an unlocked lock.
From Bill Sommerfeld.
2000-08-17 04:18:21 +00:00
thorpej f2098b2382 Some more lock debugging support:
- LOCK_ASSERT(), which expands to KASSERT() if LOCKDEBUG.
- new simple_lock_held(), which tests if the calling CPU holds
  the specified simple lock.

From Bill Sommerfeld, modified slightly by me.
2000-08-17 04:15:43 +00:00
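A hypothetical caller of the two new primitives, assuming LOCK_ASSERT() expands to KASSERT() under LOCKDEBUG and to nothing otherwise, and that simple_lock_held() returns nonzero when the calling CPU holds the given simple lock; the softc and its field names are made up for the example.

    #include <sys/lock.h>

    struct example_softc {
            struct simplelock sc_slock;
            int sc_count;
    };

    /* Caller must hold sc_slock; stated cheaply, checked only under LOCKDEBUG. */
    static void
    example_bump(struct example_softc *sc)
    {
            LOCK_ASSERT(simple_lock_held(&sc->sc_slock));
            sc->sc_count++;
    }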
eeh cd557cfb3c Another __kprintf_attribute__ to be removed. 2000-08-10 04:37:59 +00:00
thorpej c70ada6428 Fix printf format error pointed out by Steve Woodford. 2000-08-08 19:55:26 +00:00
thorpej b9d2d53fb8 Add a DIAGNOSTIC or LOCKDEBUG check for held spin locks. 2000-08-07 22:10:52 +00:00
thorpej b24441d4d1 It doesn't make sense to charge simple locks to procs, because
simple locks are held by CPUs.  Remove p_simple_locks (which was
unused anyway, really), and add a LOCKDEBUG check for held simple
locks in mi_switch().  Grow p_locks to an int to take up the space
previously used by p_simple_locks so that the proc structure doesn't
change size.
2000-08-07 21:55:22 +00:00
thorpej 8fd9032b90 ANSI'ify. 2000-07-14 07:14:33 +00:00
sommerfeld e964d558a7 Fix assorted bugs around shutdown/reboot/panic time.
- add a new global variable, doing_shutdown, which is nonzero if
  vfs_shutdown() or panic() have been called.
- in panic, set RB_NOSYNC if doing_shutdown is already set on entry,
  so we don't reenter vfs_shutdown if we panicked there.
- in vfs_shutdown, don't use proc0's process for sys_sync unless
  curproc is NULL.
- in lockmgr, attribute successful locks to proc0 if doing_shutdown
  && curproc == NULL, and panic if we can't get the lock right away;
  this avoids the spurious lockmgr DIAGNOSTIC panic from the ddb
  reboot command.
- in subr_pool, deal with curproc == NULL in the doing_shutdown case.
- in mfs_strategy, bitbucket writes if doing_shutdown, so we don't
  wedge waiting for the mfs process.
- in ltsleep, treat ((curproc == NULL) && doing_shutdown) like the
  panicstr case.

Appears to fix: kern/9239, kern/10187, kern/9367.
May also fix kern/10122.
2000-06-10 18:44:43 +00:00
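A condensed sketch of the doing_shutdown idea described above (illustrative; the real changes are spread across panic(), vfs_shutdown(), lockmgr(), and friends): both shutdown paths set one global flag, and later code consults it so a panic during shutdown does not try to sync the disks a second time.

    #include <sys/reboot.h>

    int doing_shutdown;             /* nonzero once vfs_shutdown() or panic() has run */

    void
    example_panic_path(void)
    {
            int bootopt = RB_AUTOBOOT | RB_DUMP;

            if (doing_shutdown)
                    bootopt |= RB_NOSYNC;   /* don't re-enter vfs_shutdown() */
            doing_shutdown = 1;
            /* ... print the panic message, then reboot with bootopt ... */
    }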
thorpej 6ea30ef2e8 Use ltsleep(). 2000-06-08 05:50:59 +00:00
thorpej 75dbbed64a Fix a typo, and add some lint comments. 2000-05-23 05:17:11 +00:00
sommerfeld 4d573016ed Let MULTIPROCESSOR && LOCKDEBUG case compile again 2000-05-03 13:53:59 +00:00
thorpej 8185691694 - If a platform defines __HAVE_ATOMIC_OPERATIONS, use them for counting
in the MULTIPROCESSOR case.
- Move a misplaced #ifdef so that LK_REENABLE actually works.
2000-05-02 04:32:33 +00:00