Commit Graph

10 Commits

Author SHA1 Message Date
rmind 40cf6f3659 Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side effect, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code.  Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
2009-10-21 21:11:57 +00:00
ad 7c89190b50 Start percpu allocation at (ALIGNBYTES + 1) to avoid a problem with
importing offset zero into vmem.
2008-12-15 11:59:22 +00:00
yamt 6d3b5bc3c9 - encrypt/decrypt offsets if DIAGNOSTIC.
- add an assertion.
These changes allow detecting use of an uninitialized percpu_t *.
2008-05-03 05:31:56 +00:00
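
To illustrate the DIAGNOSTIC change in 6d3b5bc3c9, here is a hedged sketch of the
general technique (not the actual subr_percpu.c code): encoding the stored offset
with a magic constant so that a zero-filled, never-initialized percpu_t decodes to
an implausible value that an assertion can catch. The names and constant below are
hypothetical.

#include <sys/param.h>
#include <sys/systm.h>

#define EXAMPLE_OFFSET_MAGIC	0x50435055u	/* hypothetical constant */

static inline unsigned int
example_offset_encrypt(unsigned int off)
{
	return off ^ EXAMPLE_OFFSET_MAGIC;
}

static inline unsigned int
example_offset_decrypt(unsigned int enc)
{
	unsigned int off = enc ^ EXAMPLE_OFFSET_MAGIC;

	/* A zeroed percpu_t yields off == EXAMPLE_OFFSET_MAGIC here. */
	KASSERT(off < EXAMPLE_OFFSET_MAGIC);
	return off;
}
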
ad 4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
one of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
2008-04-28 15:36:01 +00:00
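
As a hedged illustration of the second bullet in 4c7ba24481, a code path might
bracket a short critical section with kpreempt_disable()/kpreempt_enable();
everything here except those two calls is a made-up example.

#include <sys/param.h>
#include <sys/systm.h>

static u_int example_softc_events;	/* hypothetical state touched below */

void
example_record_event(void)
{
	kpreempt_disable();	/* defer kernel preemption on this CPU */
	example_softc_events++;	/* short, non-sleeping work only */
	kpreempt_enable();	/* re-allow (and possibly take) preemption */
}
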
ad 27168d9d58 - Rename crit_enter/crit_exit to kpreempt_disable/kpreempt_enable.
DragonflyBSD uses the crit names for something quite different.
- Add a kpreempt_disabled function for diagnostic assertions.
- Add inline versions of kpreempt_enable/kpreempt_disable for primitives.
- Make some more changes for preemption safety to the x86 pmap.
2008-04-27 11:37:48 +00:00
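
The kpreempt_disabled() function added in 27168d9d58 is meant for diagnostic
assertions; a hedged sketch of such a caller-contract check, with a hypothetical
function name, might look like:

#include <sys/param.h>
#include <sys/systm.h>

/* Hypothetical helper whose callers must already have preemption disabled. */
void
example_update_cpu_local(void)
{
	KASSERT(kpreempt_disabled());
	/* ... manipulate per-CPU state here without fear of migration ... */
}
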
yamt 582ad655c2 fix a comment. 2008-04-26 08:06:11 +00:00
thorpej 0cfa6e7487 Make the percpu API a little more friendly:
- percpu_getptr() is now called percpu_getref() and implicitly disables
  preemption (via crit_enter()) when it is called.
- Added percpu_putref() which implicitly reenables preemption (via
  crit_exit()).
2008-04-09 05:11:20 +00:00
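
A hedged sketch of the friendlier API from 0cfa6e7487: percpu_getref() hands back
the calling CPU's instance with preemption disabled, and percpu_putref() releases
it. The percpu_t handle and counter below are illustrative only.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/percpu.h>

static percpu_t *example_counters;	/* e.g. percpu_alloc(sizeof(uint64_t)) */

void
example_count_event(void)
{
	uint64_t *ctr;

	ctr = percpu_getref(example_counters);	/* preemption now disabled */
	(*ctr)++;
	percpu_putref(example_counters);	/* preemption re-enabled */
}
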
yamt a67bae0b7b - simplify ASSERT_SLEEPABLE.
- move it from proc.h to systm.h.
- add some more checks.
- make it a little more lkm friendly.
2008-03-17 08:27:50 +00:00
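
As a hedged sketch of how the simplified assertion from a67bae0b7b is typically
used (the allocation helper below is only an example), a function that may sleep
can assert up front that it is called from a sleepable context:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kmem.h>

void *
example_alloc_buffer(size_t len)
{
	ASSERT_SLEEPABLE();			/* must not be in interrupt context */
	return kmem_alloc(len, KM_SLEEP);	/* may block for memory */
}
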
yamt 2c871b8070 - add a cpu_info pointer argument to percpu_callback_t.
- unexport percpu_zero.
- add some comments.
2008-01-17 09:01:57 +00:00
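
A hedged sketch of the callback change in 2c871b8070: a percpu_callback_t now also
receives the cpu_info pointer of the CPU whose instance is being visited, e.g. when
summing a per-CPU counter with percpu_foreach(). All names other than the percpu(9)
calls are hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/percpu.h>
#include <sys/cpu.h>

static void
example_sum_cb(void *ptr, void *cookie, struct cpu_info *ci)
{
	uint64_t *ctr = ptr, *sum = cookie;

	(void)ci;	/* per-CPU identity available if needed */
	*sum += *ctr;
}

uint64_t
example_counter_total(percpu_t *counters)
{
	uint64_t sum = 0;

	percpu_foreach(counters, example_sum_cb, &sum);
	return sum;
}
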
yamt ea8e75911e add a per-cpu storage allocator. 2008-01-14 12:40:02 +00:00
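
Finally, a hedged sketch of using the allocator introduced in ea8e75911e: each
percpu_alloc() hands out one instance of the given size for every CPU, and
percpu_free() must be passed the same size. Names below are illustrative.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/percpu.h>

static percpu_t *example_stats;

void
example_stats_init(void)
{
	example_stats = percpu_alloc(sizeof(uint64_t));	/* one counter per CPU */
}

void
example_stats_fini(void)
{
	percpu_free(example_stats, sizeof(uint64_t));
}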