Commit Graph

18 Commits

Author SHA1 Message Date
chs
1f0e167178 vmem_alloc() with VM_SLEEP cannot fail, so percpu_alloc() cannot fail either. 2017-05-31 23:54:17 +00:00
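
Since the sleeping vmem path cannot fail, percpu_alloc() callers need no
error branch. A minimal caller-side sketch (the KASSERT is illustrative,
not part of the commit):

    #include <sys/percpu.h>
    #include <sys/systm.h>

    static percpu_t *
    make_counter(void)
    {
            /* Backed by vmem_alloc(..., VM_SLEEP), which sleeps rather
             * than fail, so the handle is always valid. */
            percpu_t *pc = percpu_alloc(sizeof(uint64_t));

            KASSERT(pc != NULL);    /* can no longer fire */
            return pc;
    }
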
uebayasi
fb969037b5 Consistently use kpreempt_*() outside scheduler path. 2014-11-27 15:00:00 +00:00
para
e62ee4d475 extending vmem(9) to be able to allocate resources for its own needs.
simplifying uvm_map handling (no special kernel entries anymore, no relocking)
making malloc(9) a thin wrapper around kmem(9)
(with a private interface, for interrupt-safety reasons)

releng@ acknowledged
2012-01-27 19:48:38 +00:00
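
A hedged sketch of the "thin wrapper" idea; the real kern_malloc also
records the allocation size so that free(9) works without one, and
kmem_intr_alloc() is the private interrupt-safe entry point mentioned
above:

    #include <sys/kmem.h>
    #include <sys/malloc.h>
    #include <sys/systm.h>

    void *
    malloc(unsigned long size, struct malloc_type *type __unused, int flags)
    {
            const km_flag_t kmf = (flags & M_NOWAIT) ? KM_NOSLEEP : KM_SLEEP;
            void *p = kmem_intr_alloc(size, kmf);

            if (p != NULL && (flags & M_ZERO) != 0)
                    memset(p, 0, size);
            return p;
    }
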
dyoung
78b0e18345 Report vmem(9) errors out-of-band so that we can use vmem(9) to manage
ranges that include the least and the greatest vmem_addr_t.  Update
vmem(9) uses throughout the kernel.  Slightly expand on the tests in
subr_vmem.c, which still pass.  I've been running a kernel with this
patch without any trouble.
2011-09-02 22:25:08 +00:00
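
The user-visible change is the vmem_alloc() calling convention: the
address is returned through a pointer and the return value is an error
code, so no address has to be sacrificed as an in-band error marker. A
minimal sketch of the new style, assuming a vmem_t *vm and a size in
scope:

    vmem_addr_t addr;
    int error;

    /* 0 and the greatest vmem_addr_t are now ordinary, allocatable
     * addresses; failure is visible only in the return value. */
    error = vmem_alloc(vm, size, VM_SLEEP | VM_INSTANTFIT, &addr);
    if (error != 0)
            return error;
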
uebayasi
2de1fdfe8b These don't need uvm/uvm_extern.h. 2011-07-27 14:35:33 +00:00
rmind
f132c365c0 Sprinkle __cacheline_aligned and __read_mostly. 2011-05-13 22:16:43 +00:00
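
Both annotations only steer data placement; a representative use (the
variables are hypothetical):

    #include <sys/cdefs.h>
    #include <sys/mutex.h>
    #include <sys/percpu.h>

    /* Written once at boot, read on every fast path: grouped with other
     * rarely written objects so it lives on clean cache lines. */
    static percpu_t *stats_percpu __read_mostly;

    /* Written frequently: aligned to its own cache line so it does not
     * false-share with unrelated data. */
    static kmutex_t stats_lock __cacheline_aligned;
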
martin
ffaba5de79 Relax an assertion 2011-04-19 07:12:59 +00:00
matt
fdd122f0c1 Add a KASSERT 2011-04-14 05:53:53 +00:00
rmind
40cf6f3659 Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side effect, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code.  Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
2009-10-21 21:11:57 +00:00
ad
7c89190b50 Start percpu allocation at (ALIGNBYTES + 1) to avoid a problem with importing
offset zero to vmem.
2008-12-15 11:59:22 +00:00
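
The problem being avoided: at this time vmem_alloc() still signalled
failure in-band as address 0 (only the 2011 "out-of-band" commit above
fixed that for good), so a legitimate per-CPU offset of 0 would have
been indistinguishable from an allocation failure. A heavily hedged
sketch, with a hypothetical arena and span size:

    int error;

    /* ALIGNBYTES + 1 is the machine's natural alignment, so results
     * stay aligned while offset 0 is never handed out (0 doubled as
     * vmem's in-band failure value at the time). */
    error = vmem_add(percpu_offset_arena, ALIGNBYTES + 1,
        span_size - (ALIGNBYTES + 1), VM_SLEEP);
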
yamt
6d3b5bc3c9 - encrypt/decrypt offsets if DIAGNOSTIC.
- add an assertion.
these changes make it possible to detect the use of an uninitialized percpu_t *.
2008-05-03 05:31:56 +00:00
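
The idea: under DIAGNOSTIC, the vmem offset held in the percpu_t handle
is stored XORed with a constant, so a zero-filled, never-initialized
handle decodes to an out-of-range offset that the new assertion traps.
A sketch with hypothetical names and constant:

    #if defined(DIAGNOSTIC)
    #define PERCPU_OBFUSCATE(off)   ((off) ^ (vmem_addr_t)0x5a5a5a5a)
    #else
    #define PERCPU_OBFUSCATE(off)   (off)
    #endif

    static vmem_addr_t
    percpu_offset(percpu_t *pc)
    {
            const vmem_addr_t off = PERCPU_OBFUSCATE((vmem_addr_t)pc);

            /* A zeroed handle decodes to 0x5a5a5a5a here and fires;
             * percpu_nextoff (hypothetical) bounds valid offsets. */
            KASSERT(off < percpu_nextoff);
            return off;
    }
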
ad
4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
one of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
2008-04-28 15:36:01 +00:00
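
The second deferral mechanism in practice; curcpu() is only stable
while preemption is held off (the worker function is hypothetical):

    static void
    bump_local_stats(void)
    {
            kpreempt_disable();             /* defer preemption ... */
            struct cpu_info *ci = curcpu(); /* ... so we stay on this CPU */

            update_cpu_stats(ci);           /* hypothetical per-CPU work */
            kpreempt_enable();              /* preemption may resume */
    }
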
ad
27168d9d58 - Rename crit_enter/crit_exit to kpreempt_disable/kpreempt_enable.
DragonflyBSD uses the crit names for something quite different.
- Add a kpreempt_disabled function for diagnostic assertions.
- Add inline versions of kpreempt_enable/kpreempt_disable for primitives.
- Make some more changes for preemption safety to the x86 pmap.
2008-04-27 11:37:48 +00:00
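
The new predicate lets a function assert its caller's side of the
contract (the function itself is hypothetical):

    static void
    frob_cpu_local_state(void)
    {
            KASSERT(kpreempt_disabled());   /* caller must have disabled
                                             * preemption already */
            /* ... touch per-CPU state ... */
    }
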
yamt
582ad655c2 fix a comment. 2008-04-26 08:06:11 +00:00
thorpej
0cfa6e7487 Make the percpu API a little more friendly:
- percpu_getptr() is now called percpu_getref() and implicitly disables
  preemption (via crit_enter()) when it is called.
- Added percpu_putref() which implicitly reenables preemption (via
  crit_exit()).
2008-04-09 05:11:20 +00:00
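
The pairing in use, assuming a percpu_t *pc from percpu_alloc(); between
the two calls the LWP cannot migrate, so the pointer stays valid:

    uint64_t *cnt;

    cnt = percpu_getref(pc);        /* preemption now disabled */
    (*cnt)++;                       /* cannot move to another CPU here */
    percpu_putref(pc);              /* preemption re-enabled */
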
yamt
a67bae0b7b - simplify ASSERT_SLEEPABLE.
- move it from proc.h to systm.h.
- add some more checks.
- make it a little more lkm friendly.
2008-03-17 08:27:50 +00:00
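
After the simplification the macro takes no arguments and comes from
<sys/systm.h>; a typical guard in a function that may sleep (the caller
shown is hypothetical):

    #include <sys/kmem.h>
    #include <sys/systm.h>

    void *
    alloc_frob(size_t sz)
    {
            ASSERT_SLEEPABLE();     /* trips if called from interrupt
                                     * context or with spin locks held */
            return kmem_alloc(sz, KM_SLEEP);
    }
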
yamt
2c871b8070 - add a cpu_info pointer argument to percpu_callback_t.
- unexport percpu_zero.
- add some comments.
2008-01-17 09:01:57 +00:00
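
With the new third argument a percpu_foreach() callback can tell which
CPU's slot it is visiting; a minimal cross-CPU sum:

    static void
    sum_cb(void *ptr, void *arg, struct cpu_info *ci __unused)
    {
            *(uint64_t *)arg += *(uint64_t *)ptr;   /* ptr: this CPU's slot */
    }

    static uint64_t
    counter_total(percpu_t *pc)
    {
            uint64_t total = 0;

            percpu_foreach(pc, sum_cb, &total);
            return total;
    }
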
yamt
ea8e75911e add a per-cpu storage allocator. 2008-01-14 12:40:02 +00:00
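
The core lifecycle the new allocator provides (the accessor names shown
in the comment are the later ones; see the 2008-04-09 commit above):

    #include <sys/percpu.h>

    percpu_t *pc;

    pc = percpu_alloc(sizeof(uint64_t));    /* one zeroed slot per CPU */
    /* ... access via percpu_getref()/percpu_putref() ... */
    percpu_free(pc, sizeof(uint64_t));      /* size must match the alloc */
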