Commit Graph

207 Commits

riastradh
6a970e990b #if DIAGNOSTIC panic ---> KASSERT
- Omit mutex_exit before panic.  No need.
- Sprinkle some more information into a few messages.
- Prefer __diagused over #if DIAGNOSTIC for declarations,
  to reduce conditionals.

ok mrg@
2017-03-14 03:13:50 +00:00
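
A minimal before/after sketch of the conversion described above (field names borrowed from struct pool; the exact call sites in the commit differ):

    /* Before: conditional diagnostic block with a manual panic. */
    #ifdef DIAGNOSTIC
    	if (pp->pr_nget < pp->pr_nput)
    		panic("%s: nget < nput", pp->pr_wchan);
    #endif

    /* After: one assertion, with more context in the message. */
    KASSERTMSG(pp->pr_nget >= pp->pr_nput,
        "%s: nget %lu < nput %lu", pp->pr_wchan,
        (unsigned long)pp->pr_nget, (unsigned long)pp->pr_nput);

    /* Declarations used only by DIAGNOSTIC code: */
    size_t nmissing __diagused;
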
knakahara
3bf8089a9a fix: "vmstat -C" CpuLayer showed only the last CPU's values. 2016-02-05 03:04:52 +00:00
pooka
d8e04c9094 to garnish, dust with _KERNEL_OPT 2015-08-24 22:50:32 +00:00
maxv
40e89e3a3c Introduce POOL_REDZONE. 2015-07-28 12:32:44 +00:00
joerg
d0f3f6896c Add kern.pool for memory pool stats. 2014-06-13 19:09:07 +00:00
abs
6fe75f1616 Ensure pool_head is non static - for "vmstat -i" 2014-04-26 16:30:05 +00:00
para
e3e2479f22 replace vmem(9) custom boundary tag allocation with a pool(9) 2014-02-17 20:40:06 +00:00
pooka
83a2a556bf In pool_cache_put_slow(), pool_get() can block (it does mutex_enter()),
so we need to retry if curlwp took a context switch during the call.
Otherwise, CPU-local invariants can get screwed up:

    panic: kernel diagnostic assertion "cur->pcg_avail == cur->pcg_size" failed

This is (was) very easy to reproduce by just running:

  while : ; do RUMP_NCPU=32 ./a.out ; done

where a.out only calls rump_init().  But any situation where there's
contention and a pool doesn't have emptygroups would do.
2013-03-11 21:37:54 +00:00
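
A hedged sketch of the retry pattern this fix describes (illustrative, not the literal subr_pool.c change; pc_pcgpool and l_ncsw are real kernel names, the surrounding loop is schematic):

    /* Assumes a pool_cache_t pc in scope, as in pool_cache_put_slow(). */
    pcg_t *pcg;
    uint64_t ncsw;

    for (;;) {
    	ncsw = curlwp->l_ncsw;	/* context-switch counter */
    	/* pool_get() does mutex_enter() and may context-switch,
    	 * even with PR_NOWAIT. */
    	pcg = pool_get(pc->pc_pcgpool, PR_NOWAIT);
    	if (__predict_true(curlwp->l_ncsw == ncsw))
    		break;	/* no switch: CPU-local invariants hold */
    	/* Preempted: per-CPU group pointers may be stale, so
    	 * undo and retry from a clean state. */
    	if (pcg != NULL)
    		pool_put(pc->pc_pcgpool, pcg);
    }
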
christos
a67c3c8971 printflike maintenance. 2013-02-09 00:31:21 +00:00
christos
b490177227 proper locking for DEBUG 2012-08-28 15:52:19 +00:00
jym
57d7988f76 Now that pool_cache_invalidate() is synchronous and can handle per-CPU
caches, merge together pool_drain_start() and pool_drain_end() into

bool pool_drain(struct pool **ppp);

"bool" value indicates whether reclaiming was fully done (true) or not (false)
"ppp" will contain a pointer to the pool that was drained (optional).

See http://mail-index.netbsd.org/tech-kern/2012/06/04/msg013287.html
2012-06-05 22:51:47 +00:00
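
A short usage sketch of the merged interface (hypothetical caller):

    struct pool *pp = NULL;

    /* Drain one pool; report it if reclaiming was incomplete. */
    if (!pool_drain(&pp) && pp != NULL)
    	printf("pool \"%s\" was not fully reclaimed\n", pp->pr_wchan);
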
jym
ca40366292 As pool reclaiming is unlikely to happen in interrupt or softint
context, re-enable the portion of code that allows invalidation of CPU-bound
pool caches.

Two reasons:
- with CPU-cached objects invalidated, the probability of fetching an
obsolete object from the pool_cache(9) is greatly reduced. This speeds up
pool_cache_get() quite a bit, as it does not have to keep destroying objects
until it finds an updated one while an invalidation is in progress.

- for situations where we have to ensure that no obsolete object remains
after a state transition (canonical example: pmap mappings across a Xen VM
restoration), invalidating all pool_cache(9) caches is the safest way to go.

As it uses xcall(9) to broadcast the execution of pool_cache_transfer(),
pool_cache_invalidate() cannot be called from interrupt or softint context
(scheduling a xcall(9) can put a LWP to sleep).

Rename pool_cache_xcall() => pool_cache_transfer() to reflect its use.

Invalidation being a costly process (1000s of objects may be destroyed),
all places where pool_cache_invalidate() may be called from
interrupt/softint context will now get caught by the proper KASSERT(), and
fixed. Ping me when you see one.

Tested under i386 and amd64 by running ATF suite within 64MiB HVM
domains (tried triggering pgdaemon a few times).

No objection on tech-kern@.

XXX a similar fix has to be pulled up to NetBSD-6, but with a more
conservative approach.

See http://mail-index.netbsd.org/tech-kern/2012/05/29/msg013245.html
2012-06-05 22:28:11 +00:00
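
The xcall(9) broadcast at the heart of this change looks roughly like the following (schematic of pool_cache_invalidate(); xc_broadcast(), xc_wait(), cpu_intr_p() and cpu_softintr_p() are the real primitives):

    uint64_t where;

    /* Scheduling a cross-call can put the LWP to sleep, hence the
     * restriction to thread context. */
    KASSERT(!cpu_intr_p() && !cpu_softintr_p());

    /* Run pool_cache_transfer() on every CPU and wait for all of
     * them to finish before touching the global cache. */
    where = xc_broadcast(0, (xcfunc_t)pool_cache_transfer, pc, NULL);
    xc_wait(where);
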
rmind
269014127a G/C POOL_DIAGNOSTIC option. No objection on tech-kern@. 2012-05-05 19:15:10 +00:00
para
fa6083dc6c make acorn26 compile by fixing up subpage pool allocations
ok: riz@
2012-02-04 22:11:42 +00:00
he
5379ae9cd8 Use the same style for initialization of pool_allocator_kmem under
POOL_SUBPAGE as all the other pool_allocator structs.  Fixes build
problem for acorn26.
2012-01-29 20:20:18 +00:00
rmind
bc9403f1a3 pool_page_alloc, pool_page_alloc_meta: avoid extra compare, use const.
ffs_mountfs,sys_swapctl: replace memset with kmem_zalloc.
sys_swapctl: move kmem_free outside the lock path.
uvm_init: fix comment, remove pointless numeration of steps.
uvm_map_enter: remove meflagval variable.
Fix some indentation.
2012-01-28 00:00:06 +00:00
para
e62ee4d475 extending vmem(9) to be able to allocate resources for its own needs.
simplifying uvm_map handling (no special kernel entries anymore, no relocking)
make malloc(9) a thin wrapper around kmem(9)
(with private interface for interrupt safety reasons)

releng@ acknowledged
2012-01-27 19:48:38 +00:00
jym
325494fe33 Modify *ASSERTMSG() so they are now used as variadic macros. The main goal
is to provide routines that do as KASSERT(9) says: append a message
to the panic format string when the assertion triggers, with optional
arguments.

Fix call sites to reflect the new definition.

Discussed on tech-kern@. See
http://mail-index.netbsd.org/tech-kern/2011/09/07/msg011427.html
2011-09-27 01:02:33 +00:00
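
A hypothetical call site showing the new variadic form:

    /* The format string and its arguments are appended to the panic
     * message when the assertion fires. */
    KASSERTMSG(new_size <= old_size,
        "shrink only: new %zu > old %zu", new_size, old_size);
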
pooka
a3a20972d9 pnbuf_cache is used all over the place outside of vfs, so put it
in one place to avoid many definitions.
2011-03-22 15:16:23 +00:00
uebayasi
ab332b69c7 Fix a conditional include. 2011-01-17 07:36:58 +00:00
uebayasi
9d567f003d Include internal definitions (uvm/uvm.h) only where necessary. 2011-01-17 07:13:31 +00:00
pooka
d71ac89211 Report result of pool_reclaim() from pool_drain_end(). 2010-06-03 10:40:17 +00:00
rmind
70f6a0718b pool_{cache_}get: improve previous diagnostic by checking for panicstr,
so it won't trigger the assert while trying to dump core on crash.
2010-05-12 08:11:16 +00:00
rmind
b3d53a5f95 - Sprinkle asserts to catch calls from interrupt context on IPL_NONE pools.
- Add diagnostic drain attempt.
2010-05-12 03:43:46 +00:00
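
Taken together, the two diagnostics above amount to roughly this check (hedged sketch; pr_ipl and pr_wchan are real struct pool fields, cpu_intr_p(), cpu_softintr_p() and panicstr are real kernel symbols):

    /* Catch interrupt-context use of an IPL_NONE pool, but stay
     * quiet once the system is panicking so the crash dump can
     * proceed. */
    KASSERTMSG((!cpu_intr_p() && !cpu_softintr_p()) ||
        pp->pr_ipl != IPL_NONE || cold || panicstr != NULL,
        "%s is IPL_NONE, but called from interrupt context",
        pp->pr_wchan);
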
ad
b445fb5178 MAXCPUS -> __arraycount 2010-04-25 11:49:04 +00:00
rmind
f6d80c92e0 pool_cache_invalidate: comment out invalidation of per-CPU caches (nobody depends
on it, at the moment) until we decide how to fix it (xcall(9) cannot be used from
interrupt context).  XXX: Perhaps implement XC_HIGHPRI.
2010-01-20 23:40:42 +00:00
mlelstv
c0a2fae3f5 drop __predict micro optimization in pool_init for cleaner code. 2010-01-03 09:42:22 +00:00
mlelstv
0ca557be77 Pools are created way before the pool subsystem mutexes are
initialized.

Ignore also pool_allocator_lock while the system is in cold state.

When the system has left cold state, uvm_init() should have
also initialized the pool subsystem, and the mutexes are
ready to use.
2010-01-03 01:07:19 +00:00
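
Sketch of the cold-state guard described above (illustrative; `cold` and pool_allocator_lock are the real symbols, the critical section is a placeholder):

    /* While cold there is one CPU and no preemption, so skipping
     * the not-yet-initialized mutex is safe. */
    if (!cold)
    	mutex_enter(&pool_allocator_lock);
    /* ... manipulate the allocator's pool list ... */
    if (!cold)
    	mutex_exit(&pool_allocator_lock);
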
mlelstv
d5c1a554d8 Move initialization of pool_allocator_lock before its first use.
This failed on archs where a mutex isn't initialized to a zero
value.

Defer allocation of the pool log to the logging action; if allocation
fails, it will be retried the next time something is logged.

Clear the pool log on allocation so that ddb doesn't crash when showing
so-far-unused log entries.
2010-01-02 15:20:39 +00:00
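
The deferred-allocation pattern, sketched (names approximate the old POOL_DIAGNOSTIC logging code; the allocator call is an assumption):

    /* Allocate the log lazily at the first logging attempt; on
     * failure simply return and retry on the next call.  Zeroed
     * entries keep ddb's log walker safe. */
    if (pp->pr_log == NULL) {
    	pp->pr_log = kmem_zalloc(
    	    pp->pr_logsize * sizeof(struct pool_log), KM_NOSLEEP);
    	if (pp->pr_log == NULL)
    		return;
    }
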
elad
d4b368687f Turn PA_INITIALIZED into a reference count for the pool allocator, and once
it drops to zero destroy the mutex we initialized. This fixes the problem
mentioned in

	http://mail-index.netbsd.org/tech-kern/2009/12/28/msg006727.html

Also remove pa_flags now that it's no longer needed.

Idea from matt@, okay matt@.
2009-12-30 18:57:16 +00:00
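
Schematically (pa_refcnt and pa_lock are real struct pool_allocator members; the surrounding code is illustrative):

    /* The first pool using this allocator sets up the mutex ... */
    if (pa->pa_refcnt++ == 0)
    	mutex_init(&pa->pa_lock, MUTEX_DEFAULT, IPL_VM);

    /* ... and the last one to go away tears it down. */
    if (--pa->pa_refcnt == 0)
    	mutex_destroy(&pa->pa_lock);
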
jym
de3d6f78cf Fix a bug where on MP systems, pool_cache_invalidate(9) could be called
early during boot, just after CPUs are attached but before they are marked
as running.

This will result in a list of CPUs without the SPCF_RUNNING flag set, and
will trigger the 'KASSERT(xc_tailp < xc_headp)' in xc_lowpri() as no cross
call is issued.

Bug reported and patch tested by tron@.

See also http://mail-index.netbsd.org/tech-kern/2009/10/19/msg006293.html
2009-10-20 17:24:22 +00:00
thorpej
1f59a448f4 - pool_cache_invalidate(): broadcast a cross-call to drain the per-CPU
caches before draining the global cache.
- pool_cache_invalidate_local(): remove.
2009-10-15 20:50:12 +00:00
jym
31629a1342 Add pool_cache_invalidate_local() to the pool_cache(9) API, to permit
invalidation of per-CPU cached objects in the pool cache.

See http://mail-index.netbsd.org/tech-kern/2009/10/05/msg006206.html .

Reviewed by bouyer@. Thanks!
2009-10-08 21:54:45 +00:00
pooka
fbd53556dc Wipe out the last vestiges of POOL_INIT with one swift stroke. In
most cases, use a proper constructor.  For proplib, give a local
equivalent of POOL_INIT for the kernel object implementation.  This
way the code structure can be preserved, and a local link set is
not hazardous anyway (unless proplib is split to several modules,
but that'll be the day).

tested by booting a kernel in qemu and compile-testing i386/ALL
2009-09-13 18:45:10 +00:00
rmind
3a8481feb4 Make pool_head static. 2009-08-29 00:09:02 +00:00
yamt
87984ef060 pool_cache_put_paddr: add an assertion. 2009-04-15 11:45:18 +00:00
ad
9b4d249497 Avoid recursive mutex_enter() when the system is low on KVA.
Should fix crash reported by riz on current-users.
2008-11-11 16:13:03 +00:00
ad
1ec58d56ef - Rename cpu_lookup_byindex() to cpu_lookup(). The hardware ID isn't of
interest to MI code. No functional change.
- Change /dev/cpu to operate on cpu index, not hardware ID. Now cpuctl
  shouldn't print confused output.
2008-10-15 08:13:17 +00:00
yamt
a5cd2e50c6 make pcg_dummy const to catch bugs earlier. 2008-08-11 02:48:42 +00:00
yamt
53d1c25e34 add some KASSERTs. 2008-08-11 02:46:40 +00:00
skrll
e7901782b3 Comment whitespace. 2008-08-08 16:58:01 +00:00
yamt
e4fb48bcaf pool_do_put: fix a pool corruption bug discovered by
the recent exec_pool changes.
2008-07-09 02:43:53 +00:00
yamt
03bb7555b4 fix pool corruption bugs in subr_pool.c 1.162. 2008-07-07 12:27:19 +00:00
ad
6c6c91b240 Move an assignment later. 2008-07-04 16:41:00 +00:00
ad
46587f3717 - Keep cache locked while allocating a cache group - later we might want
to automatically tune the group sizes at run time.
- Fix broken assertion.
- Avoid another test+branch.
2008-07-04 16:38:59 +00:00
ad
9d573e640e Remove a bunch of conditional branches from the pool_cache fast path. 2008-07-04 13:28:08 +00:00
ad
e10320350c Use __noinline. 2008-05-31 13:31:25 +00:00
martin
ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad
4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
one of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
2008-04-28 15:36:01 +00:00
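
Two of the three deferral mechanisms listed above, as hedged sketches (kpreempt_disable()/kpreempt_enable() and spl* are the real primitives; the critical sections are placeholders):

    int s;

    /* Bracketing a per-CPU critical section defers preemption. */
    kpreempt_disable();
    /* ... safe to use curcpu() and other per-CPU state here ... */
    kpreempt_enable();

    /* Holding the IPL above IPL_NONE defers it as well. */
    s = splvm();
    /* ... interrupt-priority critical section ... */
    splx(s);
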
ad
27168d9d58 - Rename crit_enter/crit_exit to kpreempt_disable/kpreempt_enable.
DragonflyBSD uses the crit names for something quite different.
- Add a kpreempt_disabled function for diagnostic assertions.
- Add inline versions of kpreempt_enable/kpreempt_disable for primitives.
- Make some more changes for preemption safety to the x86 pmap.
2008-04-27 11:37:48 +00:00