the original code since if fullgroups was empty and partgroups wasn't, we
would not clean up partgroups (pointed out by yamt). Well, this one has
different semantics from the original; I think they are the correct ones.
split the single list of pool cache groups into three lists:
completely full, partially full, and completely empty.
use LIST instead of TAILQ where appropriate.
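A minimal sketch of the resulting pool_cache layout (field names assumed
from the description above, not copied from the sources):

    #include <sys/queue.h>

    struct pool_cache_group;

    /*
     * Sketch: one list per fill state, so get/put never scans past
     * groups it cannot use.  LIST suffices because the code only
     * needs head insertion/removal, not tail access.
     */
    struct pool_cache {
            LIST_HEAD(, pool_cache_group) pc_emptygroups; /* no items */
            LIST_HEAD(, pool_cache_group) pc_partgroups;  /* some items */
            LIST_HEAD(, pool_cache_group) pc_fullgroups;  /* all items */
            /* ... other fields unchanged ... */
    };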
- Make the locking rules for pr_rmpage() sane, and don't modify fields
protected by the pool lock without actually holding it.
- Always defer freeing the pool page to the back-end allocator, to avoid
invoking the pool_allocator with the pool locked (which would violate
the pool_allocator -> pool locking order).
- Fix pool_reclaim() to not violate the pool_cache -> pool locking order
by using a trylock.
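The trylock fix, as a sketch with simplelock-era primitives (the
internal helper name is approximate):

    #include <sys/lock.h>

    /*
     * pool_reclaim() already holds the pool lock when it reaches into
     * the pool's caches; that is the reverse of the usual
     * pool_cache -> pool order, so blocking on the cache lock here
     * could deadlock.  Trylock and skip the cache on contention.
     */
    static void
    pool_cache_reclaim(struct pool_cache *pc)
    {

            if (simple_lock_try(&pc->pc_slock) == 0)
                    return;    /* contended: not worth deadlocking for */
            pool_cache_do_invalidate(pc, 0, pool_do_put); /* approximate */
            simple_unlock(&pc->pc_slock);
    }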
Reviewed by Chuck Silvers.
- don't use managed mappings/backing objects for wired memory allocations.
save some resources like pv_entry. also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
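For reference, the simplified interface boils down to one allocate/free
pair driven by flags; a sketch under the post-change API (helper names
here are hypothetical):

    #include <uvm/uvm_extern.h>

    /* Wired, zeroed kernel memory: no managed mapping, no backing
     * uvm_object, hence no pv_entry cost.  Sleeps for memory unless
     * UVM_KMF_NOWAIT or UVM_KMF_CANFAIL is added. */
    static vaddr_t
    example_wired_alloc(vsize_t sz)
    {

            return uvm_km_alloc(kernel_map, sz, 0,
                UVM_KMF_WIRED | UVM_KMF_ZERO);
    }

    static void
    example_wired_free(vaddr_t va, vsize_t sz)
    {

            uvm_km_free(kernel_map, va, sz, UVM_KMF_WIRED);
    }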
if it's specified, don't use free items as storage for internal state,
so that we can use pools for non-memory-backed objects.
inspired by Solaris's KMC_NOTOUCH.
to pool_init. Untouched pools are ones that are either in arch-specific
code, or aren't initialised during initial system startup.
Convert struct session, ucred and lockf to pools.
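A sketch of the link-set style for one of the converted pools
(arguments are illustrative; POOL_INIT mirrors the pool_init()
parameters and registers the pool for one-time startup
initialisation):

    #include <sys/pool.h>
    #include <sys/proc.h>

    /* Defines session_pool and adds it to the pools link set; the
     * set is walked once at startup instead of calling pool_init()
     * by hand. */
    POOL_INIT(session_pool, sizeof(struct session), 0, 0, 0,
        "sessionpl", &pool_allocator_nointr);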
In addition to the current one (i.e., don't waste so large a part of
the page):
- if the header fits in the page without wasting any items, put it there.
- don't put the header in the page if it may consume a rather big item.
For example, on i386, the header is now allocated in the page for pools
like fdescpl or sigapl, and allocated off the page for pools like
buf1k or buf2k.
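The decision can be sketched as a standalone predicate (thresholds
assumed to match the description, not the committed constants):

    #include <stddef.h>

    /*
     * Keep the header in the page when it is effectively free: either
     * the items are so small that header-sized slack cannot displace
     * one, or the page holds as many items with the header as without.
     * Otherwise the header would consume a rather big item, so it is
     * allocated off the page instead.
     */
    static int
    header_fits_in_page(size_t pagesz, size_t itemsize, size_t phsize)
    {
            size_t small = pagesz / 16 < phsize * 8 ?
                pagesz / 16 : phsize * 8;

            if (itemsize < small)
                    return 1;
            return pagesz / itemsize == (pagesz - phsize) / itemsize;
    }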
idle pool pages to be returned to the system immediately upon becoming
de-fragmented.
Also, in pool_do_put(), don't free back an idle page unless we are over
our minimum page claim.
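A sketch of the resulting release policy (field names from struct pool;
PR_IMMEDRELEASE is assumed to be the flag introduced by the truncated
first line above, and the exact committed condition may differ):

    #include <sys/pool.h>

    /*
     * An idle page may be released only once the pool is over its
     * minimum page claim; PR_IMMEDRELEASE then forces the release to
     * happen immediately rather than waiting for pool_reclaim().
     */
    static int
    should_release_idle_page(const struct pool *pp)
    {

            if (pp->pr_npages <= pp->pr_minpages)
                    return 0;       /* keep our minimum claim */
            return pp->pr_npages > pp->pr_maxpages ||
                (pp->pr_roflags & PR_IMMEDRELEASE) != 0;
    }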
(1) split the single list of pages allocated to a pool into three lists:
completely full, partially full, and completely empty.
there is no longer any need to traverse any list looking for a
certain type of page.
(2) replace the 8-element hash table for out-of-page page headers
with a splay tree.
these two changes (together with the recent enhancements to the wait code)
give us linear scaling for a fork+exit microbenchmark.
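Sketched as declarations (names assumed to follow the description):

    #include <sys/queue.h>
    #include <sys/tree.h>

    struct pool_item_header;

    struct pool {
            /* (1) pages segregated by fill state: allocation takes
             * the first partial (or empty) page, freeing moves the
             * header between lists; nothing is ever searched. */
            LIST_HEAD(, pool_item_header) pr_emptypages;
            LIST_HEAD(, pool_item_header) pr_fullpages;
            LIST_HEAD(, pool_item_header) pr_partpages;

            /* (2) out-of-page page headers, keyed by page address;
             * a splay tree self-adjusts toward recently used pages. */
            SPLAY_HEAD(phtree, pool_item_header) pr_phtree;

            /* ... remaining fields unchanged ... */
    };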
objects. Clients of the pool_cache API must either use the "paddr"
variants consistently or not at all; otherwise behavior is undefined.
Enable this on Alpha, ARM, MIPS, and x86. Other platforms must
define POOL_VTOPHYS() in the appropriate manner in order to enable
the feature.
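A sketch of consistent use of the paddr entry points, assuming the
mbuf cluster cache as the client (it was the motivating case):

    #include <sys/mbuf.h>
    #include <sys/pool.h>

    /* Allocate a cluster and learn its physical address in one step;
     * the cached paddr saves a pmap_extract() per allocation. */
    static void *
    cluster_get(paddr_t *pap)
    {

            return pool_cache_get_paddr(&mclpool_cache, PR_NOWAIT, pap);
    }

    /* The paddr must be handed back on free so the cache keeps it. */
    static void
    cluster_put(void *obj, paddr_t pa)
    {

            pool_cache_put_paddr(&mclpool_cache, obj, pa);
    }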
Part 1 of a series of simple patches contributed by Wasabi Systems
to improve network performance.
* In pool_prime_page(), assert that the object being placed onto the
free list meets the alignment constraints (that "ioff" within the
object is aligned to "align").
* In pool_init(), round up the object size to the alignment value (or
ALIGN(1), if no special alignment is needed) so that the above invariant
holds true.
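A sketch of the two pieces as hypothetical helpers (the real code is
inline in pool_init() and pool_prime_page()):

    #include <sys/param.h>
    #include <sys/systm.h>

    /* pool_init(): establish the invariant by rounding the object
     * size up to the (power-of-two) alignment. */
    static size_t
    pool_round_size(size_t size, u_int align)
    {

            if (align == 0)
                    align = ALIGN(1);
            return roundup(size, align);
    }

    /* pool_prime_page(): assert the invariant for each object "cp"
     * placed on the free list. */
    static void
    pool_check_item_alignment(void *cp, u_int align, u_int ioff)
    {

            KASSERT((((vaddr_t)cp + ioff) & (align - 1)) == 0);
    }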
and the latter, while there was some code that tested the bit, was woefully
incomplete and also unused by anything. Besides, PR_STATIC functionality
could be better handled by backend allocators anyhow.
From art@openbsd.org.
pool_set_drain_hook(). This hook is called in three cases:
* When a pool has hit the hard limit, just before either erroring
out or sleeping.
* When a backend allocator fails to allocate memory.
* Just before trying to reclaim pages in pool_reclaim().
This hook requests that the client try to free some items back to
the pool.
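A sketch of a client, with hypothetical pool and hook names; the hook
receives the registered argument plus a PR_WAITOK/PR_NOWAIT flag:

    #include <sys/pool.h>

    static struct pool frobber_pool;

    /* Called on hard limit, allocator failure, or pool_reclaim():
     * free back whatever items the client can spare. */
    static void
    frobber_drain(void *arg, int flags)
    {

            /* e.g. flush a private free list into frobber_pool */
    }

    void
    frobber_init(void)
    {

            pool_set_drain_hook(&frobber_pool, frobber_drain, NULL);
    }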
From art@openbsd.org.
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map). Try to deal with this:
* Group all information about the backend allocator for a pool in a
separate structure (sketched after this list). The pool references
this structure, rather than the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
to become available, but will still fail if it cannot allocate KVA
space for the pages. If this happens, carefully drain all pools using
the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
some pages, and use that information to make draining easier and more
efficient.
* Get rid of PR_URGENT. There was only one use of it, and it could be
dealt with by the caller.
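A sketch of such a structure (close to, but not necessarily identical
to, the committed struct pool_allocator; pool_init() then takes a
pointer to one, with NULL selecting a default allocator):

    #include <sys/queue.h>

    struct pool;

    struct pool_allocator {
            void    *(*pa_alloc)(struct pool *, int);  /* get a page */
            void    (*pa_free)(struct pool *, void *); /* return one */
            unsigned int pa_pagesz;                    /* unit size */

            /* internal: every pool sharing this allocator, so that a
             * KVA failure can drain all of them. */
            TAILQ_HEAD(, pool) pa_list;
    };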
From art@openbsd.org.
This is activated by defining POOL_SUBPAGE to the size of the new allocation
unit, and makes pools much more efficient on machines with obscenely large
pages. It might even make four-megabyte arm26 systems usable.
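A sketch of the opt-in, from a port's <machine/param.h> (the value
shown is illustrative, not the committed arm26 setting):

    /* Carve each (large) hardware page into 4KB units for pools. */
    #define POOL_SUBPAGE    4096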