- Omit mutex_exit before panic. No need.
- Sprinkle some more information into a few messages.
- Prefer __diagused over #if DIAGNOSTIC for declarations,
to reduce conditionals.
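For illustration, the idiom is roughly (a sketch; some_op() is a hypothetical
call whose result is only checked under DIAGNOSTIC):

    int error __diagused;	/* read only by the KASSERT() below */

    error = some_op();
    KASSERT(error == 0);

The declaration no longer needs its own #if DIAGNOSTIC/#endif pair;
__diagused silences the set-but-unused warning in non-DIAGNOSTIC builds.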
ok mrg@
so we need to retry if curlwp took a context switch during the call.
Otherwise, CPU-local invariants can get screwed up:
panic: kernel diagnostic assertion "cur->pcg_avail == cur->pcg_size" failed
This is (was) very easy to reproduce by just running:
while : ; do RUMP_NCPU=32 ./a.out ; done
where a.out only calls rump_init(). But any situation where there's contention
and a pool doesn't have empty groups would do.
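The idiom for detecting this is roughly the following (a sketch, not the
exact committed code; "pc" is the pool_cache_t being operated on):

    u_int ncsw;

    ncsw = curlwp->l_ncsw;		/* snapshot before we may sleep */
    mutex_enter(&pc->pc_lock);
    if (curlwp->l_ncsw != ncsw) {
        /*
         * Taking the lock blocked and we context switched; the
         * CPU-local group pointers read earlier may now belong
         * to another CPU.  Unlock and have the caller retry
         * from scratch.
         */
        mutex_exit(&pc->pc_lock);
        return true;
    }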
caches, merge together pool_drain_start() and pool_drain_end() into
bool pool_drain(struct pool **ppp);
"bool" value indicates whether reclaiming was fully done (true) or not (false)
"ppp" will contain a pointer to the pool that was drained (optional).
See http://mail-index.netbsd.org/tech-kern/2012/06/04/msg013287.html
context, re-enable the portion of code that allows invalidation of CPU-bound
pool caches.
Two reasons:
- with CPU-cached objects being invalidated, the probability of fetching an
obsolete object from the pool_cache(9) is greatly reduced. This speeds up
pool_cache_get() quite a bit, as it does not have to keep destroying objects
until it finds an up-to-date one while an invalidation is in progress.
- for situations where we have to ensure that no obsolete object remains
after a state transition (canonical example: pmap mappings across a Xen VM
restoration), invalidating all pool_cache(9) caches is the safest way to go.
As it uses xcall(9) to broadcast the execution of pool_cache_transfer(),
pool_cache_invalidate() cannot be called from interrupt or softint context
(scheduling an xcall(9) can put an LWP to sleep).
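In sketch form (details elided), the guard and the broadcast look something
like:

    void
    pool_cache_invalidate(pool_cache_t pc)
    {

        /* xc_broadcast() may sleep: no interrupt/softint context */
        KASSERT(!cpu_intr_p());
        KASSERT(!cpu_softintr_p());

        /* run pool_cache_transfer() on each CPU, then wait */
        xc_wait(xc_broadcast(0, (xcfunc_t)pool_cache_transfer,
            pc, NULL));

        /* ...invalidate the globally cached groups... */
    }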
Rename pool_cache_xcall() => pool_cache_transfer() to reflect its use.
As invalidation is a costly process (thousands of objects may be destroyed),
all places where pool_cache_invalidate() may be called from
interrupt/softint context will now get caught by the proper KASSERT() and
fixed. Ping me when you see one.
Tested under i386 and amd64 by running the ATF suite within 64MiB HVM
domains (tried triggering pgdaemon a few times).
No objection on tech-kern@.
XXX a similar fix has to be pulled up to NetBSD-6, but with a more
conservative approach.
See http://mail-index.netbsd.org/tech-kern/2012/05/29/msg013245.html
simplifying uvm_map handling (no special kernel entries anymore, no relocking)
make malloc(9) a thin wrapper around kmem(9)
(with private interface for interrupt safety reasons)
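A simplified sketch of the wrapper idea (the real code must also remember the
allocation size so that free(9) can recover it; that bookkeeping is elided):

    #include <sys/kmem.h>
    #include <sys/malloc.h>
    #include <sys/systm.h>

    void *
    kern_malloc(unsigned long size, int flags)
    {
        const int kmflags =
            (flags & M_NOWAIT) != 0 ? KM_NOSLEEP : KM_SLEEP;
        void *p;

        /* kmem_intr_alloc() is the interrupt-safe entry point */
        p = kmem_intr_alloc(size, kmflags);
        if (p != NULL && (flags & M_ZERO) != 0)
            memset(p, 0, size);
        return p;
    }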
releng@ acknowledged
is to provide routines that do as KASSERT(9) says: append a message
to the panic format string when the assertion triggers, with optional
arguments.
Fix call sites to reflect the new definition.
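A converted call site then reads, for example (illustrative condition and
arguments):

    KASSERTMSG(new_size <= old_size,
        "new_size %zu exceeds old_size %zu", new_size, old_size);

On failure the message and arguments are appended to the usual
"kernel diagnostic assertion ... failed" panic string.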
Discussed on tech-kern@. See
http://mail-index.netbsd.org/tech-kern/2011/09/07/msg011427.html
initialized.
Also ignore pool_allocator_lock while the system is in cold state.
When the system has left cold state, uvm_init() should have
also initialized the pool subsystem and the mutexes are
ready to use.
This failed on archs where a mutex isn't initialized to a zero
value.
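The guard is essentially (a sketch):

    if (!cold)
        mutex_enter(&pool_allocator_lock);
    /* ...manipulate the allocator state... */
    if (!cold)
        mutex_exit(&pool_allocator_lock);

where "cold" is the kernel global that stays nonzero until early boot has
finished.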
Defer allocation of the pool log to the logging action; if the allocation
fails, it will be retried the next time something is logged.
Clear the pool log on allocation so that ddb doesn't crash when showing
as-yet-unused log entries.
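In sketch form (hypothetical helper name; fields as in the POOL_DIAGNOSTIC
code):

    static bool
    pool_log_ensure(struct pool *pp)
    {

        if (pp->pr_log != NULL)
            return true;
        /* zeroed, so ddb never mistakes garbage for log entries */
        pp->pr_log = kmem_zalloc(
            pp->pr_logsize * sizeof(struct pool_log), KM_NOSLEEP);
        return pp->pr_log != NULL;	/* on failure, retry next call */
    }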
early during boot, just after CPUs are attached but before they are marked
as running.
This will result in a list of CPUs without the SPCF_RUNNING flag set, and
will trigger the 'KASSERT(xc_tailp < xc_headp)' in xc_lowpri() as no cross
call is issued.
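To illustrate the invariant (a sketch of the guard's shape, not necessarily
the exact fix): a broadcaster must only consider CPUs that are actually
running:

    CPU_INFO_ITERATOR cii;
    struct cpu_info *ci;

    for (CPU_INFO_FOREACH(cii, ci)) {
        if ((ci->ci_schedstate.spc_flags & SPCF_RUNNING) == 0)
            continue;	/* attached, but not scheduling yet */
        /* ...queue the cross call for this CPU... */
    }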
Bug reported and patch tested by tron@.
See also http://mail-index.netbsd.org/tech-kern/2009/10/19/msg006293.html
most cases, use a proper constructor. For proplib, give a local
equivalent of POOL_INIT for the kernel object implementation. This
way the code structure can be preserved, and a local link set is
not hazardous anyway (unless proplib is split to several modules,
but that'll be the day).
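The shape of the conversion (hypothetical pool and function names): a
link-set POOL_INIT() declaration becomes an explicit pool_init() call in the
subsystem's init routine.

    static struct pool foo_pool;	/* was: POOL_INIT(foo_pool, ...) */

    void
    foo_init(void)
    {

        pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0,
            "foopl", &pool_allocator_nointr, IPL_NONE);
    }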
tested by booting a kernel in qemu and compile-testing i386/ALL
one of the following:
- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable (see the sketch after this list).
- Holding the interrupt priority level above IPL_NONE.
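A sketch of the kpreempt_disable/kpreempt_enable bracketing (the per-CPU
counter is hypothetical):

    kpreempt_disable();
    /* curcpu() is stable here: we cannot migrate to another CPU */
    curcpu()->ci_data.cpu_nfoo++;	/* hypothetical per-CPU field */
    kpreempt_enable();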
Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tunable
via sysctl.
DragonflyBSD uses the crit names for something quite different.
- Add a kpreempt_disabled function for diagnostic assertions (see the example after this list).
- Add inline versions of kpreempt_enable/kpreempt_disable for primitives.
- Make some more changes for preemption safety to the x86 pmap.
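The kpreempt_disabled function from the first item slots into assertions
like (illustrative):

    KASSERT(kpreempt_disabled());	/* caller must block preemption */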