Commit Graph

150 Commits

Author SHA1 Message Date
mrg 9a7ae38999 remove dated and wrong comments about curlwp being NULL.
_kernel_{,un}lock() always assume it is valid now.
2009-12-20 20:42:23 +00:00
dyoung b734bafe0e Fix spelling: situatations -> situations. 2009-07-17 22:17:37 +00:00
ad 2fc2b08001 - Add lwp_pctr(), get an LWP's preemption/ctxsw counter.
- Fix a preemption bug in CURCPU_IDLE_P() that can lead to a bogus
  assertion failure on DEBUG kernels.
- Fix MP/preemption races with timecounter detachment.
2009-05-23 17:08:04 +00:00
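A minimal sketch of the lwp_pctr() pattern (hypothetical variable and function names; assumes the function is declared in <sys/lwp.h> and returns an integer that changes on every preemption/context switch of the calling LWP):

	#include <sys/types.h>
	#include <sys/lwp.h>

	static u_int val_a, val_b;	/* hypothetical CPU-local pair */

	static void
	snap(u_int *a, u_int *b)
	{
		uint64_t pctr;

		do {
			pctr = lwp_pctr();
			*a = val_a;
			*b = val_b;
			/* unchanged counter: we kept the CPU for both reads */
		} while (pctr != lwp_pctr());
	}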
ad 0efea177e3 Remove LKMs and switch to the module framework, pass 1.
Proposed on tech-kern@.
2008-11-12 12:35:50 +00:00
matt 1906aa3e59 Switch from KASSERT to CTASSERT for those asserts testing sizes of types. 2008-07-02 14:47:34 +00:00
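A sketch of the distinction, assuming CTASSERT() is reachable via <sys/param.h>: a KASSERT on a type size only fires at runtime on DIAGNOSTIC kernels, while a CTASSERT fails the build whenever the invariant breaks.

	#include <sys/param.h>

	/* checked by the compiler; no object code or runtime cost */
	CTASSERT(sizeof(uint32_t) == 4);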
pooka d496f014ff Don't compile kern_lock for rump any more; it's no longer required.
Allows us to get rid of the incorrect _RUMPKERNEL ifdefs outside sys/rump.
2008-06-25 13:16:58 +00:00
ad 7b8f512433 LOCKDEBUG:
- Tweak it so it can also catch common errors with condition variables.
  The change to kern_condvar.c is not included in this commit and will
  come later.

- Don't call kmem_alloc() if operating in interrupt context, just fail
  the allocation and disable debugging for the object. Makes it safe
  to do mutex_init/rw_init/cv_init in interrupt context, when running
  a LOCKDEBUG kernel.
2008-05-31 13:15:21 +00:00
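A hedged sketch of what the second change permits (the softc layout and IPL are hypothetical, not from the commit):

	#include <sys/mutex.h>
	#include <sys/condvar.h>
	#include <sys/intr.h>

	struct frob_softc {
		kmutex_t	sc_lock;
		kcondvar_t	sc_cv;
	};

	/*
	 * Safe even from interrupt context on a LOCKDEBUG kernel: if the
	 * debug record cannot be kmem_alloc()ed there, debugging is simply
	 * disabled for this object instead of the allocation being made.
	 */
	static void
	frob_init(struct frob_softc *sc)
	{
		mutex_init(&sc->sc_lock, MUTEX_DEFAULT, IPL_VM);
		cv_init(&sc->sc_cv, "frobcv");
	}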
ad aa7e99c693 Replace a couple of tsleep calls with cv_wait. 2008-05-27 17:50:03 +00:00
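A sketch of the tsleep() -> cv_wait() conversion pattern (hypothetical softc fields): the condvar is paired with a mutex, and cv_wait() drops the mutex while asleep and reacquires it before returning.

	#include <sys/mutex.h>
	#include <sys/condvar.h>

	struct quux_softc {
		kmutex_t	sc_lock;
		kcondvar_t	sc_cv;
		bool		sc_busy;
	};

	static void
	quux_acquire(struct quux_softc *sc)
	{
		mutex_enter(&sc->sc_lock);
		while (sc->sc_busy)	/* re-test: wakeups can be spurious */
			cv_wait(&sc->sc_cv, &sc->sc_lock);
		sc->sc_busy = true;
		mutex_exit(&sc->sc_lock);
	}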
ad 245f0726ac Reduce ifdefs due to MULTIPROCESSOR slightly. 2008-05-19 17:06:02 +00:00
ad 81194e34f1 Allow rw_tryenter(&lock, RW_READER) to recurse, for vfs_busy(). 2008-05-06 17:11:45 +00:00
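A sketch of the non-blocking reader pattern this change serves (hypothetical wrapper; rw_tryenter() returns nonzero on success, and per this commit a reader may now tryenter a lock it already holds, as vfs_busy() needs):

	#include <sys/types.h>
	#include <sys/rwlock.h>

	static bool
	try_read(krwlock_t *lk)
	{
		if (!rw_tryenter(lk, RW_READER))
			return false;		/* contended: caller backs off */
		/* ... examine the protected state ... */
		rw_exit(lk);
		return true;
	}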
martin ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad 4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
one of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tunable
via sysctl.
2008-04-28 15:36:01 +00:00
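A minimal sketch of the second deferral method listed above (assumes kpreempt_disable()/kpreempt_enable() are visible via <sys/lwp.h>; the counter is a stand-in for CPU-local state):

	#include <sys/types.h>
	#include <sys/lwp.h>

	static u_int nevents;		/* stand-in for per-CPU state */

	static void
	event_record(void)
	{
		kpreempt_disable();	/* preemption deferred from here... */
		nevents++;		/* cannot be preempted or migrate */
		kpreempt_enable();	/* ...to here; pending preemption may fire now */
	}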
ad 92cbb1b2af Extend spl protection to keep all kernel_lock state in sync. There could
have been problems before. This might help with the assertion failures
seen on sparc64.
2008-04-27 14:13:05 +00:00
drochner 76ad1614e9 remove useless passing of the lwp from the KERNEL_LOCK() ABI
(not the API; this would be easy as well)
agreed (a while ago) by ad
2008-04-01 19:49:31 +00:00
ad 164f30df1b Don't report kernel lock spinouts if init has not yet started.
XXX This should be backed out when we are sure that the drivers
are good citizens and configure nicely with interrupts enabled /
the system running.
2008-03-30 15:39:46 +00:00
yamt a67bae0b7b - simplify ASSERT_SLEEPABLE.
- move it from proc.h to systm.h.
- add some more checks.
- make it a little more LKM friendly.
2008-03-17 08:27:50 +00:00
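A sketch of the simplified form (hypothetical function; assumes the macro now takes no arguments and is reached through <sys/systm.h>, per this commit):

	#include <sys/systm.h>
	#include <sys/kmem.h>

	static void *
	frob_alloc(size_t len)
	{
		/* fires if called from a context that must not sleep */
		ASSERT_SLEEPABLE();
		return kmem_alloc(len, KM_SLEEP);
	}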
ad 0430885280 Goodbye lockmgr(). 2008-01-30 14:54:25 +00:00
ad ea9faa6742 lockstat: no longer track lockmgr() events. 2008-01-26 14:29:31 +00:00
ad e8532b7138 - Fix a memory order problem with non-interlocked mutex release.
- Give kernel_lock its own cache line.
2008-01-10 20:14:10 +00:00
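A sketch of the cache-line idea in the second item (assumes NetBSD's __cacheline_aligned attribute, i.e. __aligned(COHERENCY_UNIT), from <sys/param.h>):

	#include <sys/param.h>
	#include <sys/mutex.h>

	/*
	 * A heavily contended lock on its own cache line keeps unrelated
	 * data out of the line that ping-pongs between CPUs.
	 */
	static kmutex_t hot_lock __cacheline_aligned;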
ad 0664a0459b Start detangling lock.h from intr.h. This is likely to cause short term
breakage, but the mess of dependencies has been regularly breaking the
build recently anyhow.
2008-01-04 21:17:40 +00:00
ad 4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad e23d4a9b80 Nothing uses shared -> exclusive upgrades any more, so remove the code.
This is good since they are effectively the same as ...

	lockmgr(&lock, LK_RELEASE);
	lockmgr(&lock, LK_EXCLUSIVE);

.. and therefore don't behave as expected.
2007-12-06 17:05:07 +00:00
ad b470ab628d Use membar_*(). 2007-11-30 23:05:43 +00:00
yamt 38d5e34116 make kmutex_t and krwlock_t smaller by killing lock id.
ok'ed by Andrew Doran.
2007-11-21 10:19:06 +00:00
ad dbd3ed7b2a Remove KERNEL_LOCK_ASSERT_LOCKED, KERNEL_LOCK_ASSERT_UNLOCKED since the
kernel_lock functions can be patched out at runtime now. Assertions are
provided by the existing functions and by LOCKDEBUG_BARRIER.
2007-11-13 22:14:34 +00:00
ad d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
pooka 38bc08f047 Wrap parts dealing with kernel_lock behind #ifndef _RUMPKERNEL.
I don't like doing this, but there's too much pain to get this file
to compile clean due to how SPINLOCK_{BACKOFF,SPIN}_HOOK and
mb_write() are spread out in weird places throughout MD code.
2007-10-31 15:36:07 +00:00
ad 933b05e877 Use __SIMPLELOCK_LOCKED_P(). 2007-10-17 17:22:19 +00:00
ad 11dc639958 Merge from vmlocking:
- G/C spinlockmgr() and simple_lock debugging.
- Always include the kernel_lock functions, for LKMs.
- Slightly improved subr_lockdebug code.
- Keep sizeof(struct lock) the same if LOCKDEBUG.
2007-10-11 19:45:24 +00:00
ad 5a5a78fb02 Kill transferlockers() now that it's unused. 2007-10-10 17:37:40 +00:00
ad 5b3ae27b0d __FUNCTION__ -> __func__ 2007-09-17 21:33:34 +00:00
skrll 9fdaf800d9 Merge nick-csl-alignment. 2007-09-10 11:34:05 +00:00
pooka abf11c212d Define a new lockmgr flag LK_RESURRECT which can be used in
conjunction with LK_DRAIN.  This has the same effect as LK_DRAIN
except it atomically does NOT mark the lock as drained.  This
guarantees that when we get the lock, we were the last one still
waiting for it.

Use LK_DRAIN|LK_RESURRECT in vclean() to make sure there are no
waiters for the lock.  This should fix behaviour theorized to be
caused by vfs_subr.c 1.289, which caused vclean() to run to
completion and free the vnode before all lock-waiters had been
processed.  Should therefore fix the "simple_lock: unitialized lock"
problems seen recently.

Thanks to Juergen Hannken-Illjes for some analysis of the problem
and to Erik Bertelsen for testing.
2007-07-29 12:40:37 +00:00
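A hedged sketch of the lockmgr-era call (hypothetical wrapper using the historic three-argument signature, with no interlock passed):

	#include <sys/lock.h>

	/*
	 * Drain all current waiters but, unlike plain LK_DRAIN, leave the
	 * lock usable afterwards: when this returns holding the lock, no
	 * one else is still waiting on it.
	 */
	static int
	drain_and_keep(struct lock *lk)
	{
		return lockmgr(lk, LK_DRAIN | LK_RESURRECT, NULL);
	}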
ad c52c14050e Be more forgiving if panicstr != NULL. 2007-07-29 11:45:21 +00:00
ad b2138a8b6a Re-apply rev 1.111:
Always include kernel_lock so that LOCKDEBUG checks can find the symbol.
2007-06-18 21:37:32 +00:00
ad a2884738ea Nuke __HAVE_SPLBIGLOCK. 2007-06-15 20:59:38 +00:00
ad 029f4f9cd7 splstatclock, spllock -> splhigh 2007-06-15 20:17:07 +00:00
yamt f03010953f merge yamt-idlelwp branch. asked by core@. some ports still needs work.
from doc/BRANCHES:

	idle lwp, and some changes depending on it.

	1. separate context switching and thread scheduling.
	   (cf. gmcgarry_ctxsw)
	2. implement idle lwp.
	3. clean up related MD/MI interfaces.
	4. make scheduler(s) modular.
2007-05-17 14:51:11 +00:00
perseant 55307f6a04 Include the lwpid in the lock panic message, so we don't see silly messages
like
	lockmgr: pid 17, not exclusive lock holder 17 unlocking
2007-04-14 06:59:25 +00:00
ad 3d5b66ed02 Always include kernel_lock so that LOCKDEBUG checks can find the symbol. 2007-03-30 11:05:59 +00:00
christos 2058fdeab3 add a lockpanic function that prints more detailed error messages. 2007-03-04 06:20:25 +00:00
yamt c574bfa378 typedef pri_t and use it instead of int and u_char. 2007-02-27 15:07:28 +00:00
thorpej 4f3d5a9cc0 TRUE -> true, FALSE -> false 2007-02-22 06:34:42 +00:00
ad cebcfebbd2 kernel_lock():
- Fix error in previous.
- Call LOCKDEBUG_WANTLOCK() so the "exclusive wanted" count isn't off.
2007-02-20 16:10:10 +00:00
ad 723654a989 _kernel_lock(): we can recurse here if we take an interrupt while spinning.
Don't double book the time spent with lockstat.
2007-02-20 15:56:59 +00:00
ad b07ec3fc38 Merge newlock2 to head. 2007-02-09 21:55:00 +00:00
ad 9f07c24ec6 lockstat: improve reporting slightly, and fix a bug where the command
could spin while resorting lists.
2006-12-25 11:57:40 +00:00
chs a8459c8cc1 in lockstatus(), report LK_EXCLOTHER if LK_WANT_EXCL or LK_WANT_UPGRADE
is set, since the thread that set either of those flags will be the next
one to get the lock.  Fixes PR 35143.
2006-12-09 15:59:25 +00:00
yamt 1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
christos 4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00