items, and the new large groups (for busy caches) have 62 or 63 items.
- Add PR_LARGECACHE flag as a hint that a pool_cache should use large groups.
  This should eventually be tuned at runtime (usage sketch below).
- Report group size for vmstat -C.
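A usage sketch of the hint (hypothetical: "frobpl" and struct frob are
invented here; the call follows the pool_cache_init(9) interface, so treat
the exact argument list as an approximation):

	#include <sys/pool.h>

	struct frob {
		int f_val;
	};

	static pool_cache_t frob_cache;

	void
	frob_init(void)
	{
		/*
		 * PR_LARGECACHE hints that this cache is busy enough
		 * to justify the large (62/63 item) groups.
		 */
		frob_cache = pool_cache_init(sizeof(struct frob),
		    coherency_unit, 0, PR_LARGECACHE, "frobpl", NULL,
		    IPL_NONE, NULL, NULL, NULL);
	}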
not yet safely block without severely confusing soo_write() and friends.
If the pool's IPL is IPL_SOFTNET, initialize the mutex at IPL_VM so that
it's a spinlock. To be dealt with correctly in the near future.
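A minimal sketch of that initialization step, assuming the pool's lock
field is pp->pr_lock (the helper function is invented, not the literal
code):

	#include <sys/pool.h>
	#include <sys/mutex.h>
	#include <sys/intr.h>

	static void
	example_pool_lock_init(struct pool *pp, int ipl)
	{
		/*
		 * Mutexes initialized at IPL_VM are spin mutexes, so
		 * the pool lock can be taken from soft interrupt
		 * context without ever sleeping.
		 */
		if (ipl == IPL_SOFTNET)
			ipl = IPL_VM;
		mutex_init(&pp->pr_lock, MUTEX_DEFAULT, ipl);
	}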
- G/C spinlockmgr() and simple_lock debugging.
- Always include the kernel_lock functions, for LKMs.
- Slightly improved subr_lockdebug code.
- Keep sizeof(struct lock) the same if LOCKDEBUG.
- struct timeval time is gone
time.tv_sec -> time_second
- struct timeval mono_time is gone
mono_time.tv_sec -> time_uptime
- access to time via
{get,}{micro,nano,bin}time()
get* versions are fast but less precise (conversion sketch after this list)
- support NTP nanokernel implementation (NTP API 4)
- further reading:
Timecounter Paper: http://phk.freebsd.dk/pubs/timecounter.pdf
NTP Nanokernel: http://www.eecis.udel.edu/~mills/ntp/html/kern.html
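Example of the new accessors (a sketch; example_show_times() is invented):

	#include <sys/time.h>
	#include <sys/systm.h>

	/*
	 * Conversion at a glance:
	 *	time.tv_sec       -> time_second
	 *	mono_time.tv_sec  -> time_uptime
	 */
	static void
	example_show_times(void)
	{
		struct timeval precise, cached;

		microtime(&precise);	/* precise: reads the timecounter */
		getmicrotime(&cached);	/* fast: updated once per tick */
		printf("wall %lld uptime %lld\n",
		    (long long)time_second, (long long)time_uptime);
	}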
1: I can understand it, and
2: It works.
Notable externally-visible changes are that POOL_SUBPAGE now has to be a
compile-time constant, and that trying to initialise a pool whose objects are
larger than POOL_SUBPAGE automatically generates a pool that doesn't use
subpages.
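The size check amounts to roughly the following (a sketch of the idea
only: pool_allocator_subpage is an assumed name, while pool_allocator_kmem
is the ordinary page-backed allocator):

	#include <sys/pool.h>

	static struct pool_allocator *
	example_choose_allocator(size_t size)
	{
	#ifdef POOL_SUBPAGE
		/*
		 * POOL_SUBPAGE is now a compile-time constant, so this
		 * comparison needs no runtime page-size lookup.
		 */
		if (size <= POOL_SUBPAGE)
			return &pool_allocator_subpage;	/* assumed name */
	#endif
		return &pool_allocator_kmem;
	}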
NetBSD/acorn26 now boots multi-user again.
the time pool_get() calls pool_catchup(), pp has been freed but is still
in the "entered" state, so the chain pool_catchup() ->
pool_allocator_alloc() -> pool_reclaim() on pp fails.
Call pr_leave() before calling pool_catchup() to avoid this.
Thanks for the excellent analysis!
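Abbreviated sketch of the reordering inside pool_get() (pr_enter() and
pr_leave() are the POOL_DIAGNOSTIC reentrancy markers; the surrounding
code is elided and the fragment is an assumption about the internals):

	/*
	 * Leave the "entered" state before pool_catchup(), which can
	 * recurse into pool_reclaim() on this same pool; re-enter once
	 * it returns.  file/line are pool_get()'s diagnostic arguments.
	 */
	pr_leave(pp);
	(void) pool_catchup(pp);
	pr_enter(pp, file, line);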
- pool_allocator_alloc: drain ourselves too, so that any pool_cache
  backed by us is drained as well.
- pool_cache_put_paddr: destruct objects if underlying pool is starved.
- pool_get: on kva starvation, wake up once a second and try again
  (sketch follows the PR list).
Fixes:
PR/32287: Processes hang in "mclpl"
PR/32330: shark kernel hangs under memory load.
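The retry amounts to a bounded sleep rather than an indefinite one,
roughly as below (a sketch; the wrapper function is invented and the
identifiers follow the pool code of that era):

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/proc.h>
	#include <sys/pool.h>

	static void *
	example_alloc_retry(struct pool *pp, int flags)
	{
		void *v;

		for (;;) {
			v = pool_allocator_alloc(pp, flags);
			if (v != NULL)
				return v;
			if ((flags & PR_WAITOK) == 0)
				return NULL;	/* caller can't sleep */
			/*
			 * kva is starved: rather than blocking until an
			 * explicit wakeup that may never come, sleep for
			 * at most one second (hz ticks) and try again.
			 */
			(void) tsleep(pp, PSWP, "poolget", hz);
		}
	}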