- Serialize updates to lockstat_enabled, lockstat_dev_enabled, and
lockstat_dtrace_enabled with a new __cpu_simple_lock.
- Use xc_barrier to obviate any need for additional membars in
lockstat_event.
- Use atomic_load/store_* for access that might not be serialized by
  lockstat_lock or lockstat_enabled_lock (sketched below).
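
A minimal sketch of the pattern, assuming the standard NetBSD
<sys/lock.h>, <sys/atomic.h> and <sys/xcall.h> interfaces; the helper
names are illustrative, not the actual lockstat code:

    #include <sys/param.h>
    #include <sys/atomic.h>
    #include <sys/lock.h>
    #include <sys/xcall.h>

    static __cpu_simple_lock_t lockstat_enabled_lock =
        __SIMPLELOCK_UNLOCKED;
    volatile u_int lockstat_dev_enabled;

    static void
    lockstat_dev_set(u_int bits)        /* hypothetical writer */
    {
            __cpu_simple_lock(&lockstat_enabled_lock);
            atomic_store_relaxed(&lockstat_dev_enabled, bits);
            __cpu_simple_unlock(&lockstat_enabled_lock);

            /*
             * Wait for every CPU to pass through a cross-call
             * barrier, so lockstat_event() observes the update
             * without additional memory barriers of its own.
             */
            xc_barrier(0);
    }

    static bool
    lockstat_dev_is_set(u_int bit)      /* hypothetical reader */
    {
            /* Unserialized readers use atomic loads. */
            return (atomic_load_relaxed(&lockstat_dev_enabled) & bit)
                != 0;
    }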
kmem_alloc() with KM_SLEEP
kmem_zalloc() with KM_SLEEP
percpu_alloc()
pserialize_create()
psref_class_create()
All of these paths include an assertion that the allocation has not
failed, so callers should not assert that again.
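
For illustration (a hypothetical caller, not code from the tree): with
KM_SLEEP the allocation waits until it can succeed, and the allocator
itself asserts the result, so a NULL check in the caller is dead code:

    #include <sys/kmem.h>

    struct foo {
            int f_refcnt;
    };

    struct foo *
    foo_create(void)
    {
            struct foo *f;

            /* Sleeps until memory is available; never returns NULL. */
            f = kmem_zalloc(sizeof(*f), KM_SLEEP);
            /* No KASSERT(f != NULL) needed here. */
            return f;
    }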
Split lockstat_enabled into two parts: one controlled by dtrace, called
"lockstat_dtrace_enabled", and one by the lockstat device, called
"lockstat_dev_enabled".  Create a macro, LOCKSTAT_ENABLED_UPDATE(),
that needs to be called when either of them changes.
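
A sketch of what such a macro might look like, assuming the combined
word is simply the bitwise OR of the two sources (the real definition
lives in the lockstat header and may differ):

    #define LOCKSTAT_ENABLED_UPDATE() do {                           \
            lockstat_enabled = lockstat_dev_enabled |                \
                lockstat_dtrace_enabled;                             \
    } while (/*CONSTCOND*/ 0)

Both the lockstat device's ioctl path and the dtrace provider would
invoke it after changing their respective word.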
Use designated initializers.
I have not built every extant kernel, so I have probably broken at
least one build; however, I've also found and fixed some wrong
cdevsw/bdevsw entries, so even if I have, I think we come out ahead.
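
As an illustration of the shape of such an entry (the "foo" names are
hypothetical and assumed to be declared elsewhere in the driver; the
member names follow the struct cdevsw definition in sys/conf.h from
around that time and may not match every version):

    const struct cdevsw foo_cdevsw = {
            .d_open = fooopen,
            .d_close = fooclose,
            .d_read = fooread,
            .d_write = foowrite,
            .d_ioctl = fooioctl,
            .d_stop = nostop,
            .d_tty = notty,
            .d_poll = nopoll,
            .d_mmap = nommap,
            .d_kqfilter = nokqfilter,
            .d_discard = nodiscard,
            .d_flag = D_OTHER
    };

Explicitly named members are checked against the struct definition by
the compiler, and anything omitted is zero-initialized, which is what
makes wrong or misordered entries easier to catch than with positional
initializers.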
Kernel preemption can be prevented by one of the following:
- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable
  (see the sketch after this list).
- Holding the interrupt priority level above IPL_NONE.
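
A minimal sketch of the second option, using only the standard
kpreempt_disable()/kpreempt_enable() pair; the data being protected is
hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>

    struct foo_pcpu {
            uint64_t fp_count;
    };

    static void
    foo_pcpu_bump(struct foo_pcpu *fp)
    {
            kpreempt_disable();     /* enter the critical section */
            fp->fp_count++;         /* cannot be preempted here */
            kpreempt_enable();      /* allow preemption again */
    }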
Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
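
A rough sketch of the event-counter side (the counter name and group
are made up; the real counters are attached by the scheduler):

    #include <sys/evcnt.h>

    static struct evcnt foo_kpreempt_defer_ev;

    static void
    foo_kpreempt_stats_init(void)
    {
            /* Counter is visible via vmstat -e. */
            evcnt_attach_dynamic(&foo_kpreempt_defer_ev,
                EVCNT_TYPE_MISC, NULL, "kpreempt", "defer");
    }

    static void
    foo_kpreempt_deferred(void)     /* called where preemption is deferred */
    {
            foo_kpreempt_defer_ev.ev_count++;
    }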