Commit Graph

45 Commits

Author SHA1 Message Date
rmind 11a35aed4d - Fix a few possible locking issues in execve1() and exit1(). Add a note
that scheduler locks are special in this regard - adaptive locks cannot
  be in the path due to turnstiles.  Randomly spotted/reported by uebayasi@.
- Remove unused lwp_relock() and replace lwp_lock_retry() by simplifying
  lwp_lock() and sleepq_enter() a little.
- Give alllwp its own cache-line and mark lwp_cache pointer as read-mostly.

OK ad@
2010-12-18 01:36:19 +00:00
ad 7364cd36a3 Allocate sleep queue locks with mutex_obj_alloc. Reduces memory usage
on !MP kernels, and reduces false sharing on MP ones.
2009-03-21 13:11:14 +00:00
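An illustrative sketch, outside the commit history, of the mutex_obj_alloc(9) pattern the commit above refers to; the sq_lock name and the choice of IPL_SCHED are assumptions for illustration, not code from the commit. A single heap-allocated, reference-counted kmutex_t can back several consumers instead of each embedding its own lock:

    #include <sys/mutex.h>

    static kmutex_t *sq_lock;    /* hypothetical shared, reference-counted lock */

    static void
    sq_lock_init(void)
    {

        /* One allocated kmutex_t can be shared by many sleep queues. */
        sq_lock = mutex_obj_alloc(MUTEX_DEFAULT, IPL_SCHED);
    }

    static void
    sq_lock_fini(void)
    {

        /* Drops a reference; the lock is freed when the last reference goes. */
        mutex_obj_free(sq_lock);
    }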
ad 22191248b6 Update CALLOUT_INVOKING correctly, seems to have been lost. 2008-10-10 11:42:58 +00:00
rmind b37999b4b5 Add a few KASSERTs. 2008-09-06 23:08:54 +00:00

matt 1906aa3e59 Switch from KASSERT to CTASSERT for those asserts testing sizes of types. 2008-07-02 14:47:34 +00:00
ad 93e0e98369 Take the mutex pointer and waiters count out of sleepq_t: the values can
be or are maintained elsewhere. Now a sleepq_t is just a TAILQ_HEAD.
2008-05-26 12:08:38 +00:00
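Roughly, the shape described above looks like the following (an assumed illustration, not copied from sys/sleepq.h): with the mutex pointer and waiter count maintained elsewhere, the sleep queue reduces to a tail queue of LWPs.

    #include <sys/queue.h>

    struct lwp;                                  /* LWPs carry the TAILQ_ENTRY */
    typedef TAILQ_HEAD(sleepq, lwp) sleepq_t;    /* now just a TAILQ_HEAD */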
martin ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad c8ff5c0c50 kmutex_t * -> void *, to avoid MD header fallout. 2008-04-23 13:19:44 +00:00
ad 43d8bae932 Give callout_halt() an additional 'kmutex_t *interlock' argument. If there
is a need to block and wait for the callout to complete, and there is an
interlock, it will be dropped while waiting and reacquired before return.
2008-04-22 12:04:22 +00:00
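A hedged sketch of the pattern this enables; struct softc, sc_lock and sc_tick are invented names, and the callout handler is assumed to take sc_lock itself:

    #include <sys/callout.h>
    #include <sys/mutex.h>

    struct softc {
        kmutex_t   sc_lock;    /* also taken by the callout handler */
        callout_t  sc_tick;
    };

    static void
    softc_stop(struct softc *sc)
    {

        mutex_enter(&sc->sc_lock);
        /* ... mark the softc dying so the handler will not rearm ... */
        /* sc_lock is dropped while waiting for the handler, then reacquired. */
        callout_halt(&sc->sc_tick, &sc->sc_lock);
        mutex_exit(&sc->sc_lock);
    }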
ad ecebc8b473 Implement MP callouts as discussed on tech-kern. The CPU binding code is
disabled for the moment until we figure out what we want to do with CPUs
being offlined.
2008-04-22 11:45:28 +00:00
ad 96a6231c10 callout_halt: remove unneeded extern decl. 2008-03-29 14:07:23 +00:00
ad c7e03d2b58 callout_destroy: fix assertion to not fire when a callout is destroying
its own handle. PR kern/38324.
2008-03-29 14:00:55 +00:00
ad 6c180fd421 Pull in sys/cpu.h for cpu_intr_p(). 2008-03-28 21:58:43 +00:00
ad c3338aabf1 Enable blocking synchronization for callouts as discussed at length on
tech-kern last year. Instead of modifying callout_stop, add a new
routine (callout_halt) which will sleep if the callout is already in
flight. Note that if a callout can take locks, the caller of callout_halt
must not hold any of those locks - otherwise the two could deadlock.
2008-03-28 20:44:38 +00:00
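To illustrate the caveat (hypothetical names, shown with the current two-argument callout_halt() signature): if the handler takes a lock, release that lock before sleeping in callout_halt(), or pass it as the interlock added in a later commit.

    #include <sys/param.h>
    #include <sys/callout.h>
    #include <sys/mutex.h>

    struct xsc {
        kmutex_t   sc_lock;     /* taken by the timeout handler */
        callout_t  sc_timer;
        bool       sc_dying;
    };

    static void
    xsc_shutdown(struct xsc *sc)
    {

        mutex_enter(&sc->sc_lock);
        sc->sc_dying = true;              /* keep the handler from rearming */
        mutex_exit(&sc->sc_lock);         /* must not hold it while sleeping below */

        callout_halt(&sc->sc_timer, NULL);   /* waits for any in-flight handler */
        callout_destroy(&sc->sc_timer);
    }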
ad 0664a0459b Start detangling lock.h from intr.h. This is likely to cause short term
breakage, but the mess of dependencies has been regularly breaking the
build recently anyhow.
2008-01-04 21:17:40 +00:00
ad 598ab03ad0 Match the docs: MUTEX_DRIVER/SPIN are now only for porting code written
for Solaris.
2007-12-05 07:06:50 +00:00
joerg 8c65b9a8ce Share code between callout_schedule and callout_reset. 2007-11-23 19:43:32 +00:00
ad d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
ad 46ed8f7d77 Use the softint API. 2007-10-08 16:18:02 +00:00
ad 7fecd4ded9 callout_softclock: add a couple of assertions. 2007-08-01 23:23:41 +00:00
ad 00348ca253 callout_barrier: drop kernel_lock before blocking. 2007-07-30 21:36:54 +00:00
ad 56161147f6 Define _CALLOUT_PRIVATE. 2007-07-10 21:26:00 +00:00
ad 75ff053010 Make netstat build again. I don't see why it has any business dumping
the raw contents of tcpcb but that's another story.
2007-07-10 21:12:32 +00:00
ad 88ab7da936 Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
2007-07-09 20:51:58 +00:00
matt 93feeb1203 Fix lossage from boolean_t -> bool and updated x86 bus_dma. 2007-02-22 04:38:02 +00:00
ad b07ec3fc38 Merge newlock2 to head. 2007-02-09 21:55:00 +00:00
yamt 1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
christos 4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
drochner 49d230fa91 need a "const" 2005-06-01 12:27:15 +00:00
christos efb6943313 - add const.
- remove unnecessary casts.
- add __UNCONST casts and mark them with XXXUNCONST as necessary.
2005-05-29 22:24:14 +00:00
perry da8abec863 nuke trailing whitespace 2005-02-26 21:34:55 +00:00
thorpej 67f69c1c63 Make callout_setfunc() a CPP macro. Suggested by enami. 2003-10-30 04:32:56 +00:00
thorpej db71356cd1 - Change callout_setfunc() to require that the callout handle is already
initialized.  Update the txp(4) to compensate.
- Statically initialize the TCP timer callout handles in the tcpcb
  template.  We still use callout_setfunc(), but that call is now much
  less expensive.  Add a comment that the compiler is likely to unroll
  the loop (so don't sweat that it's there).
2003-10-27 16:52:01 +00:00
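A sketch of the usage rule described above, written with the present-day callout(9) calls (tick_ch, tick_fn and the one-second period are made-up examples): initialize the handle, bind the function once with callout_setfunc(), then rearm cheaply with callout_schedule().

    #include <sys/callout.h>
    #include <sys/kernel.h>      /* hz */

    static callout_t tick_ch;

    static void
    tick_fn(void *arg)
    {

        /* ... periodic work using arg ... */
        callout_schedule(&tick_ch, hz);          /* rearm; function/arg stay bound */
    }

    static void
    tick_attach(void *sc)
    {

        callout_init(&tick_ch, 0);               /* handle must be initialized first */
        callout_setfunc(&tick_ch, tick_fn, sc);  /* bind function and argument once */
        callout_schedule(&tick_ch, hz);          /* fire in about one second */
    }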
scw a3b1a08d1e Fix for PR kern/22933
Avoid gcc3 pointer alias bugs caused by casting between struct callout
and struct callout_circq.
2003-09-25 10:44:11 +00:00
scw 20fb5a9f59 Cast from pointer type to db_addr_t via intptr_t. 2003-09-07 21:28:16 +00:00
he 69f1e70775 On second thought, callout_stop() should not clear the INVOKING flag. 2003-08-03 19:14:59 +00:00
he ddef043b97 Temporarily introduce CALLOUT_INVOKING, callout_invoking() and callout_ack()
so that users of the callout facility can cooperate to work around the race
caused by the callout code lowering the interrupt priority level when invoking
callout handlers, which allows other code to run before the callout handler
gets to its spl*() call.

This is to enable the workaround for the TCP code found in PR#20390 to be
applied.

This should be backed out once a more comprehensive fix can be put in
place.
2003-07-20 16:25:57 +00:00
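A generic sketch of how the two hooks fit together (not the actual TCP workaround; sc, sc_lock and sc_cancelled are invented, and a mutex stands in for the spl*() protection of the time): the stopping side notices an in-flight invocation with callout_invoking() and flags it, and the handler calls callout_ack() and bails out if it was cancelled in that window.

    #include <sys/param.h>
    #include <sys/callout.h>
    #include <sys/mutex.h>

    struct sc {
        kmutex_t   sc_lock;
        callout_t  sc_ch;
        bool       sc_cancelled;
    };

    static void
    sc_stop_timer(struct sc *sc)
    {

        mutex_enter(&sc->sc_lock);
        callout_stop(&sc->sc_ch);
        if (callout_invoking(&sc->sc_ch))
            sc->sc_cancelled = true;    /* handler already dispatched; tell it to bail */
        mutex_exit(&sc->sc_lock);
    }

    static void
    sc_timer(void *arg)
    {
        struct sc *sc = arg;

        mutex_enter(&sc->sc_lock);
        callout_ack(&sc->sc_ch);        /* clear the INVOKING flag */
        if (sc->sc_cancelled) {
            sc->sc_cancelled = false;
            mutex_exit(&sc->sc_lock);
            return;                     /* stale invocation; do nothing */
        }
        /* ... timer work ... */
        mutex_exit(&sc->sc_lock);
    }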
lukem 09b3191490 add missing __KERNEL_RCSID() 2003-07-14 14:59:01 +00:00
mjl cf40158777 Typos in comments. 2003-05-17 15:53:42 +00:00
thorpej b9d81d9cc9 Change a printf to an event counter. Callout event counters are conditional
on CALLOUT_EVENT_COUNTERS.
2003-02-26 23:13:19 +00:00
yamt f4585f067a - don't compare c_time directly.
- in callout_hardclock, test whether timeout_todo is empty
  before releasing the lock.
2003-02-11 09:43:37 +00:00
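A hedged illustration of why c_time is not compared directly: the tick counter wraps, so ordering has to come from a signed difference rather than the raw values (the helper name below is hypothetical, not from the commit).

    /* True if tick a is at or before tick b, even across wraparound. */
    #define TICKS_LE(a, b)    ((int)((a) - (b)) <= 0)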
drochner d229e1e550 replace &(a?b:c) by (a?&b:&c), so that it looks more like an lvalue
(to lint at least)
approved by thorpej
2003-02-10 19:18:56 +00:00
martin 968efa18d6 Format fix for archs where ptrdiff_t != int. 2003-02-04 10:14:53 +00:00
thorpej 1b84adbe5f New callout implementation. This is based on callwheel implementation
done by Artur Grabowski and Thomas Nordin for OpenBSD, which is more
efficient in several ways than the callwheel implementation that it is
replacing.  It has been adapted to our pre-existing callout API, and
also provides the slightly more efficient (and much more intuitive)
API (adapted to the callout_*() naming scheme) that the OpenBSD version
provides.

Among other things, this shaves a bunch of cycles off rescheduling-in-
the-future a callout which is already scheduled, which is the common case
for TCP timers (notably REXMT and KEEP).

The API has been simplified a bit, as well.  The (very confusing to
a good many people) "ACTIVE" state for callouts has gone away.  There
is now only "PENDING" (scheduled to fire in the future) and "EXPIRED"
(has fired, and the function called).

Kernel version bump not done; we'll ride the 1.6N bump that happened
with the malloc(9) change.
2003-02-04 01:21:03 +00:00
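A small sketch of the simplified state model and the cheap-reschedule case described above (names invented; the present-day callout(9) predicates are assumed): callout_pending() reports a callout scheduled to fire in the future, callout_expired() one that has fired and whose function has been called.

    #include <sys/callout.h>
    #include <sys/kernel.h>      /* hz */

    static void
    rexmt_rearm(callout_t *ch, void (*fn)(void *), void *arg)
    {

        /*
         * Only two states remain: PENDING (scheduled to fire in the
         * future, callout_pending() is true) and EXPIRED (it has fired
         * and the function was called, callout_expired() is true).
         * callout_reset() works in either state, and pushing an
         * already-PENDING callout further into the future, the common
         * TCP timer case, is a cheap operation on the callwheel.
         */
        callout_reset(ch, 2 * hz, fn, arg);
    }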